Reinforcement learning is currently a popular research direction. Classifying the different RL methods and papers helps us understand which method fits which application scenario. This article categorizes reinforcement learning approaches and lists the corresponding papers.
2. Exploration
a. Intrinsic Motivation
Algorithm: VIME
Paper: VIME: Variational Information Maximizing Exploration
Venue: NIPS, 2016
Link: https://arxiv.org/abs/1605.09674
Google Scholar citations (at time of writing): 454
Algorithm: CTS-based Pseudocounts
Paper: Unifying Count-Based Exploration and Intrinsic Motivation
Venue: NIPS, 2016
Link: https://arxiv.org/abs/1606.01868
Google Scholar citations (at time of writing): 726
Algorithm: PixelCNN-based Pseudocounts
Paper: Count-Based Exploration with Neural Density Models
Venue: ICML, 2017
Link: https://arxiv.org/abs/1703.01310
Google Scholar citations (at time of writing): 294
Algorithm: Hash-based Counts
Paper: #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
Venue: NIPS, 2017
Link: https://arxiv.org/abs/1611.04717
Google Scholar citations (at time of writing): 306
Algorithm: EX2
Paper: EX2: Exploration with Exemplar Models for Deep Reinforcement Learning
Venue: NIPS, 2017
Link: https://arxiv.org/abs/1703.01260
Google Scholar citations (at time of writing): 83
Algorithm: ICM
Paper: Curiosity-driven Exploration by Self-supervised Prediction
Venue: ICML, 2017
Link: https://arxiv.org/abs/1705.05363
Google Scholar citations (at time of writing): 1020
Algorithm: RND
Paper: Exploration by Random Network Distillation
Venue: ICLR, 2019
Link: https://arxiv.org/abs/1810.12894
Google Scholar citations (at time of writing): 332
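The count-based entries above (CTS/PixelCNN pseudocounts, hash-based counts) share one core idea: pay an intrinsic bonus that shrinks with how often an abstraction of the state has been visited. A minimal sketch of that bonus, with a hypothetical `discretize` method standing in for the SimHash used in the #Exploration paper:

```python
import math
from collections import defaultdict

class CountBonus:
    """Intrinsic reward r_int = beta / sqrt(N(phi(s))).

    `discretize` here is a stand-in abstraction; the #Exploration paper
    hashes learned features with SimHash instead of coarse binning.
    """

    def __init__(self, beta=0.1, bins=10):
        self.beta = beta
        self.bins = bins
        self.counts = defaultdict(int)  # N(phi(s)), keyed by discretized state

    def discretize(self, state):
        # Hypothetical coarse binning of a continuous state vector.
        return tuple(int(x * self.bins) for x in state)

    def bonus(self, state):
        key = self.discretize(state)
        self.counts[key] += 1
        return self.beta / math.sqrt(self.counts[key])
```

Revisiting the same (discretized) state makes its bonus decay as 1/sqrt(N), so the agent is steadily pushed toward regions it has counted less often.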
b. Unsupervised RL
Algorithm: VIC
Paper: Variational Intrinsic Control
Venue: ICLR, 2017
Link: https://arxiv.org/abs/1611.07507
Google Scholar citations (at time of writing): 138
Algorithm: DIAYN
Paper: Diversity is All You Need: Learning Skills without a Reward Function
Venue: ICLR, 2019
Link: https://arxiv.org/abs/1802.06070
Google Scholar citations (at time of writing): 281
Algorithm: VALOR
Paper: Variational Option Discovery Algorithms
Venue: arXiv preprint
Link: https://arxiv.org/abs/1807.10299
Google Scholar citations (at time of writing): 49
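These three methods all reward the agent for making its skills distinguishable from one another. A sketch of the DIAYN-style intrinsic reward, assuming the skill discriminator's output probabilities `disc_probs` are simply given here rather than learned:

```python
import math

def diayn_reward(disc_probs, skill, n_skills):
    """DIAYN-style intrinsic reward: log q(z|s) - log p(z).

    `disc_probs` stands in for a learned discriminator q(z|s) over skills;
    p(z) is taken to be a uniform prior, as in the paper's experiments.
    """
    log_q = math.log(disc_probs[skill])       # how identifiable the skill is from the state
    log_p = math.log(1.0 / n_skills)          # uniform prior over skills
    return log_q - log_p
```

The reward is positive exactly when the discriminator identifies the active skill better than chance, so maximizing it drives skills toward visiting distinguishable states.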
3. Transfer and Multitask RL
Algorithm: Progressive Networks
Paper: Progressive Neural Networks
Venue: NIPS, 2016
Link: https://arxiv.org/abs/1606.04671
Google Scholar citations (at time of writing): 997
Algorithm: UVFA
Paper: Universal Value Function Approximators
Venue: ICML, 2015
Link: http://proceedings.mlr.press/v37/schaul15.pdf
Google Scholar citations (at time of writing): 522
Algorithm: IU Agent
Paper: The Intentional Unintentional Agent: Learning to Solve Many Continuous Control Tasks Simultaneously
Venue: Conference on Robot Learning, 2017
Link: https://arxiv.org/abs/1707.03300
Google Scholar citations (at time of writing): 26
Algorithm: MATL
Paper: Mutual Alignment Transfer Learning
Venue: Conference on Robot Learning, 2017
Link: https://arxiv.org/abs/1707.07907
Google Scholar citations (at time of writing): 37
Algorithm: HER
Paper: Hindsight Experience Replay
Venue: NIPS, 2017
Link: https://arxiv.org/abs/1707.01495
Google Scholar citations (at time of writing): 919
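HER's core trick is compact enough to sketch: replay each transition not only with the original goal but also with goals taken from states the agent actually reached later in the episode (the paper's "future" strategy). A simplified version, assuming discrete states and a hypothetical sparse 0/1 reward:

```python
import random

def her_relabel(episode, k=4):
    """Hindsight relabeling of a goal-conditioned episode.

    Each transition is (state, action, goal, achieved_state). Besides the
    original transition, store up to k copies whose goal is replaced by an
    achieved state from the same or a later step, with the sparse reward
    (1 if achieved == goal, else 0 -- a simplification) recomputed.
    """
    relabeled = []
    for t, (s, a, g, ach) in enumerate(episode):
        relabeled.append((s, a, g, 1.0 if ach == g else 0.0))
        future = episode[t:]  # "future" strategy: sample hindsight goals ahead of t
        for _ in range(min(k, len(future))):
            _, _, _, new_g = random.choice(future)
            relabeled.append((s, a, new_g, 1.0 if ach == new_g else 0.0))
    return relabeled
```

Even when the original goal is never reached and every real reward is zero, the relabeled copies contain successful transitions, which is what makes sparse-reward goal tasks learnable.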
4. Hierarchy
Algorithm: STRAW
Paper: Strategic Attentive Writer for Learning Macro-Actions
Venue: NIPS, 2016
Link: https://arxiv.org/abs/1606.04695
Google Scholar citations (at time of writing): 127
Algorithm: Feudal Networks
Paper: FeUdal Networks for Hierarchical Reinforcement Learning
Venue: ICML, 2017
Link: https://arxiv.org/abs/1703.01161
Google Scholar citations (at time of writing): 457
Algorithm: HIRO
Paper: Data-Efficient Hierarchical Reinforcement Learning
Venue: NeurIPS, 2018
Link: https://arxiv.org/abs/1805.08296
Google Scholar citations (at time of writing): 265
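In HIRO the high level emits goals as desired state offsets, and the low level is rewarded for realizing them. A minimal sketch of that intrinsic reward and of the goal transition that keeps the same target point fixed between high-level decisions, assuming states and goals are plain lists of floats:

```python
def hiro_low_reward(state, goal, next_state):
    """HIRO-style low-level reward: negative Euclidean distance between the
    targeted state (state + goal, goals being desired offsets) and the
    state actually reached."""
    return -sum((s + g - ns) ** 2
                for s, g, ns in zip(state, goal, next_state)) ** 0.5

def goal_transition(state, goal, next_state):
    """Re-express the not-yet-covered part of the goal relative to the new
    state, so the absolute target state + goal stays fixed until the high
    level chooses a fresh goal."""
    return [s + g - ns for s, g, ns in zip(state, goal, next_state)]
```

The reward is maximal (zero) exactly when the transition covers the requested offset, at which point the transitioned goal collapses to the zero vector.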