Abstract
This post briefly documents how to install OpenAI gym and how to use it.
In reinforcement learning we need our agent to run inside an environment, and building environments by hand is very time-consuming, so being able to use environments that others have already built saves a lot of effort. OpenAI gym is exactly such a module: it provides many well-made simulation environments, and all of our RL algorithms can run against them.
Note that OpenAI gym currently only supports macOS and Linux. Windows 10 users can follow the earlier post [1] to install the Windows Subsystem for Linux.
Installation
First, install the necessary dependencies. If brew or apt-get is missing or out of date, install or update it first:
# MacOS:
$ brew install cmake boost boost-python sdl2 swig wget
# Ubuntu 14.04:
$ apt-get install -y python-numpy python-dev cmake zlib1g-dev libjpeg-dev xvfb libav-tools
Then install gym with pip. To install all of gym's games, replace gym below with gym[all]:
# python 2.7
$ pip install gym
# python 3.5
$ pip3 install gym
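To verify that the installation worked, a quick sanity check is to import gym and create one of the built-in environments (this is just a smoke test of my own, not from the official docs):

import gym

print(gym.__version__)           # gym should now be importable
env = gym.make('CartPole-v0')    # create a built-in environment
print(env.action_space)          # for CartPole-v0 this prints Discrete(2)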
Usage
Let's start with a short piece of code:
demo1.py
import gym

env = gym.make('CartPole-v0')
for i_episode in range(20):
    observation = env.reset()
    for step in range(100):
        env.render()
        print(observation)
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        if done:
            print("Episode finished after {} timesteps".format(step + 1))
            break
- First, gym.make('CartPole-v0') creates and runs the CartPole-v0 game environment.
- At the start of each episode, env.reset() resets the environment, i.e. restarts the game, and returns the initial observation.
- Inside each step, env.render() redraws the frame on screen.
- env.action_space.sample() returns a random sample of the action space, i.e. it picks one action at random.
- env.step(action) returns four values (a sketch of how to use them follows this list):
  - observation (object): an environment-specific object representing your observation of the environment. For example, pixel data from a camera, joint angles and joint velocities of a robot, or the board state in a board game.
  - reward (float): amount of reward achieved by the previous action. The scale varies between environments, but the goal is always to increase your total reward.
  - done (boolean): whether it's time to reset the environment again. Most (but not all) tasks are divided up into well-defined episodes, and done being True indicates the episode has terminated. (For example, perhaps the pole tipped too far, or you lost your last life.)
  - info (dict): diagnostic information useful for debugging. It can sometimes be useful for learning (for example, it might contain the raw probabilities behind the environment's last state change). However, official evaluations of your agent are not allowed to use this for learning.
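As a concrete illustration of those four return values, here is a minimal sketch (not part of the original demo) that uses reward to accumulate a per-episode score and done to end each episode:

import gym

env = gym.make('CartPole-v0')
for i_episode in range(5):
    observation = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = env.action_space.sample()                   # random policy
        observation, reward, done, info = env.step(action)
        total_reward += reward                               # reward is a float
    print("Episode {}: total reward = {}".format(i_episode, total_reward))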
We can illustrate the relationship between the agent and the environment with the following diagram:
[Figure: the agent–environment interaction loop]
A video of the resulting run is available at http://s3-us-west-2.amazonaws.com/rl-gym-doc/cartpole-no-reset.mp4
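To make the loop above concrete, here is a sketch that swaps the random action for a trivial hand-written policy. The helper naive_policy is my own illustration, and it assumes the CartPole-v0 observation layout [cart position, cart velocity, pole angle, pole tip velocity], so observation[2] is the pole angle:

import gym

def naive_policy(observation):
    # Illustrative helper: push the cart toward the side the pole leans to.
    # observation[2] is assumed to be the pole angle (see the bounds below).
    return 0 if observation[2] < 0 else 1   # 0 = push left, 1 = push right

env = gym.make('CartPole-v0')
observation = env.reset()
for step in range(200):
    env.render()
    action = naive_policy(observation)
    observation, reward, done, info = env.step(action)
    if done:
        print("Pole fell after {} timesteps".format(step + 1))
        break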
Space
Now let's look at the action_space from the code above. Every environment has its own action_space and observation_space, which describe the set of valid actions and observations. We can print them to see the shape of each space and the maximum and minimum values it allows:
import gym
env = gym.make('CartPole-v0')
print(env.action_space)
#> Discrete(2)   (a discrete value: 0 or 1)
print(env.observation_space)
#> Box(4,)       (an interval-valued array of four numbers, bounded as follows)
print(env.observation_space.high)
#> array([ 2.4 , inf, 0.20943951, inf])
print(env.observation_space.low)
#> array([-2.4 , -inf, -0.20943951, -inf])
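Spaces can also be constructed and inspected on their own. Below is a small sketch of mine using gym.spaces directly; sample() draws a random element and contains() tests membership:

from gym.spaces import Discrete, Box
import numpy as np

action_space = Discrete(2)            # the same space CartPole uses: {0, 1}
print(action_space.sample())          # a random action, 0 or 1
print(action_space.contains(1))       # True
print(action_space.contains(5))       # False: 5 is not a valid action

observation_space = Box(low=np.array([-1.0, -2.0]),
                        high=np.array([1.0, 2.0]))
print(observation_space.sample())     # a random point inside the box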