AAAI 2019: Diversity-Driven Extensible Hierarchical Reinforcement Learning

Published in The 33rd AAAI Conference on Artificial Intelligence (AAAI) (CCF Rank A, acceptance rate: 16.2%), 2019

Authors: Yuhang Song, Jianyi Wang, Thomas Lukasiewicz, Zhenghua Xu* and Mai Xu
Abstract: Hierarchical reinforcement learning (HRL) has recently shown promising advances in speeding up learning, improving exploration, and discovering intertask transferable skills. Most recent works focus on HRL with two levels, i.e., a master policy manipulates subpolicies, which in turn manipulate primitive actions. However, HRL with multiple levels is usually needed in many real-world scenarios, whose ultimate goals are highly abstract, while their actions are very primitive. Therefore, in this paper, we propose a diversity-driven extensible HRL (DEHRL), where an extensible and scalable framework is built and learned levelwise to realize HRL with multiple levels. DEHRL follows a popular assumption: diverse subpolicies are useful, i.e., subpolicies are believed to be more useful if they are more diverse. However, existing implementations of this diversity assumption usually have their own drawbacks, which makes them inapplicable to HRL with multiple levels. Consequently, we further propose a novel diversity-driven solution to achieve this assumption in DEHRL. Experimental studies evaluate DEHRL with nine baselines from four perspectives in two domains; the results show that DEHRL outperforms the state-of-the-art baselines in all four aspects.
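
To make the diversity assumption concrete, below is a minimal sketch of one common way such a diversity objective can be scored: reward a subpolicy when the state it reaches is far from the states reached by its sibling subpolicies. This is an illustrative example only, not DEHRL's exact formulation; the function `diversity_bonus` and its nearest-neighbor-distance measure are assumptions for the sketch.

```python
import numpy as np

def diversity_bonus(resulting_states: np.ndarray, k: int) -> float:
    """Illustrative diversity bonus for subpolicy k (hypothetical helper).

    resulting_states: array of shape (num_subpolicies, state_dim), holding
        one representative resulting state per subpolicy at the same level.
    Returns a bonus that grows as subpolicy k's resulting state moves away
    from the resulting states of the other subpolicies.
    """
    diffs = resulting_states - resulting_states[k]   # offsets from subpolicy k's state
    dists = np.linalg.norm(diffs, axis=1)            # distance to every subpolicy's state
    others = np.delete(dists, k)                     # drop the zero self-distance
    return float(others.min())                       # nearest-neighbor gap as the bonus

# Usage: three subpolicies in a 2-D state space; subpolicy 0's bonus is its
# distance to the closest sibling (here, subpolicy 1 at distance 1.0).
states = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
print(diversity_bonus(states, 0))  # 1.0
```

Using the nearest-neighbor distance (rather than, say, the mean distance) penalizes any two subpolicies that collapse onto the same behavior, which is the failure mode the diversity assumption is meant to avoid.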

[Download paper here] [Code Release] [Oral Presentation]