|Title:||Hierarchical Reinforcement Learning: Learning Sub-goals and State-Abstraction|
|Abstract:||In this paper we present a method that allows an agent to discover and create temporal abstractions autonomously. Our method is based on the idea that, to reach the goal, the agent must pass through relevant states that we interpret as subgoals. To detect useful subgoals, our method computes intersections between several paths leading to the goal. Our research focused on domains widely used in the study of temporal abstraction, using several versions of the room-to-room navigation problem. We found that, in the problems tested, an agent can learn more rapidly by automatically discovering subgoals and creating abstractions.|
|Appears in Collections:||CTI-CRI - Comunicações a conferências internacionais|
Files in This Item:
|HRL Short Paper.pdf||321.61 kB||Adobe PDF||View/Open Request a copy|
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
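The abstract's core idea, detecting subgoals as states shared by several successful paths to the goal, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name `find_subgoal_candidates`, the exact intersection criterion, and the toy two-room gridworld are all assumptions for the example.

```python
from collections import Counter

def find_subgoal_candidates(paths, min_fraction=1.0):
    """Return states appearing on at least min_fraction of the successful
    paths (hypothetical helper; the paper's exact criterion may differ)."""
    counts = Counter()
    for path in paths:
        for state in set(path):  # count each state once per path
            counts[state] += 1
    threshold = min_fraction * len(paths)
    # Exclude the trivial start and goal states from candidates.
    start_goal = {paths[0][0], paths[0][-1]}
    return {s for s, c in counts.items() if c >= threshold} - start_goal

# Three successful trajectories through a two-room gridworld;
# the doorway state "D" lies on every path, so it is detected.
paths = [
    ["S", "a", "b", "D", "x", "G"],
    ["S", "c", "D", "y", "z", "G"],
    ["S", "a", "c", "D", "x", "G"],
]
print(find_subgoal_candidates(paths))  # → {'D'}
```

A state such as the doorway that every successful path must cross is exactly the kind of "relevant state" the abstract describes; once found, it can serve as the termination condition of a temporally extended action (an option) for faster learning.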