Reinforcement Learning: Solving MDPs with Dynamic Programming
Learning Objectives
- Understand policy evaluation and policy improvement;
- Understand the policy iteration algorithm;
- Understand the value iteration algorithm;
- Understand the differences between policy iteration and value iteration;
- Understand the limitations of dynamic programming methods;
- Implement policy iteration and value iteration for Gridworld in Python.
Dynamic programming (DP) is a general approach to solving complex problems: by defining problem states and the relations between them, it breaks a complex problem into simpler subproblems that can be solved recursively (or, if you like, by divide and conquer). To apply dynamic programming, a problem must (1) decompose into subproblems and (2) have subproblems that recur and whose solutions can be reused. Markov decision processes satisfy both conditions: an MDP can be viewed as transitions between states, the Bellman equation decomposes the problem into a recursive computation over individual states, and the value function stores the results of those computations. Once we have the optimal choice at every state, putting them together solves the whole MDP. Note, however, that DP requires full knowledge of the MDP's model (its dynamics), which is why it is called a model-based method.
Policy Evaluation
Policy evaluation is exactly what the name suggests: assessing how good a policy is. For an arbitrary policy $\pi$, we simply compute its state-value function $v_\pi$; this is also called prediction. The previous post derived the formula for $v_\pi$ from the backup diagram:

$$v_\pi(s) = \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma v_\pi(s')\right]$$

How do we actually compute this? The value of the successor state $s'$ is unknown as well. The answer is to solve it iteratively, in the spirit of Gauss-Seidel iteration: start from an arbitrary initial value function and repeatedly apply the update below (a short sketch follows). It can be shown that $v_k$ converges to $v_\pi$ as $k \to \infty$.

$$v_{k+1}(s) = \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma v_k(s')\right]$$
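To see the update in action before the full Gridworld code later on, here is a minimal sketch on a made-up two-state MDP (the transition table, rewards, and discount factor are purely illustrative); it uses the same (prob, next_state, reward, done) transition format as env.P in the code below.

import numpy as np

# A made-up two-state, two-action MDP in the same format as env.P below:
# P[s][a] is a list of (prob, next_state, reward, done) tuples.
P = {
    0: {0: [(1.0, 0, 0.0, False)], 1: [(1.0, 1, -1.0, False)]},
    1: {0: [(1.0, 0, -1.0, False)], 1: [(1.0, 1, 0.0, False)]},
}
policy = np.full((2, 2), 0.5)   # uniform random policy
V = np.zeros(2)                 # initial guess v_0 = 0
gamma, theta = 0.9, 1e-6

while True:
    delta = 0.0
    for s in range(2):
        # Bellman expectation backup for state s under the current estimate V
        v_new = sum(policy[s][a] * prob * (r + gamma * V[s_next])
                    for a in range(2)
                    for prob, s_next, r, done in P[s][a])
        delta = max(delta, abs(v_new - V[s]))
        V[s] = v_new            # in-place (Gauss-Seidel style) update
    if delta < theta:
        break

print(V)   # approximate v_pi for both states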
Policy Improvement
We now know how to evaluate a policy; the next step is to find a better one. In each state we might ask whether changing the policy there could yield a larger return: choose some action $a$ in state $s$, and from then on continue to follow $\pi$. The value of doing so is exactly the action-value function (recall the question raised in the previous post; this is a good answer to it):

$$q_\pi(s, a) = \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma v_\pi(s')\right]$$

We improve the policy greedily, that is, in every state we pick the action that maximizes the action-value function:

$$\pi'(s) = \arg\max_a q_\pi(s, a)$$

It can be shown that the new policy $\pi'$ is at least as good as $\pi$, i.e. $v_{\pi'}(s) \ge v_\pi(s)$ for every state $s$; the full proof is in the references, and a sketch is given below.
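For completeness, here is a sketch of the standard argument (the policy improvement theorem, as presented in Sutton and Barto): starting from $v_\pi(s) \le q_\pi(s, \pi'(s))$ and repeatedly expanding and re-applying the same inequality,

$$
\begin{aligned}
v_\pi(s) &\le q_\pi(s, \pi'(s)) \\
&= \mathbb{E}\left[R_{t+1} + \gamma v_\pi(S_{t+1}) \mid S_t = s, A_t = \pi'(s)\right] \\
&\le \mathbb{E}_{\pi'}\left[R_{t+1} + \gamma q_\pi(S_{t+1}, \pi'(S_{t+1})) \mid S_t = s\right] \\
&= \mathbb{E}_{\pi'}\left[R_{t+1} + \gamma R_{t+2} + \gamma^2 v_\pi(S_{t+2}) \mid S_t = s\right] \\
&\le \cdots \le \mathbb{E}_{\pi'}\left[R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots \mid S_t = s\right] = v_{\pi'}(s).
\end{aligned}
$$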
Policy Iteration
With policy evaluation and policy improvement in hand, policy iteration is simple: alternate the two repeatedly, and the process converges to an optimal policy.
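Schematically, policy iteration produces an alternating sequence of evaluation (E) and improvement (I) steps that terminates at an optimal policy and optimal value function:

$$\pi_0 \xrightarrow{E} v_{\pi_0} \xrightarrow{I} \pi_1 \xrightarrow{E} v_{\pi_1} \xrightarrow{I} \pi_2 \xrightarrow{E} \cdots \xrightarrow{I} \pi_* \xrightarrow{E} v_*$$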
Sutton's book gives the pseudocode, along with a Gridworld example. Roughly, the setup is: the top-left and bottom-right cells are terminal states, every step yields a reward of -1, and the goal is to find an optimal policy. We now use this example to implement policy iteration in Python.
import numpy as np
from lib.envs.gridworld import GridworldEnv

def policy_eval(policy, env, discount_factor=1.0, theta=0.00001):
    """
    Evaluate a policy given an environment and a full description of the environment's dynamics.
    Args:
        policy: [S, A] shaped matrix representing the policy.
        env: OpenAI env. env.P represents the transition probabilities of the environment.
            env.P[s][a] is a list of transition tuples (prob, next_state, reward, done).
            env.nS is the number of states in the environment.
            env.nA is the number of actions in the environment.
        theta: We stop evaluation once our value function change is less than theta for all states.
        discount_factor: Gamma discount factor.
    Returns:
        Vector of length env.nS representing the value function.
    """
    # Start with an all-zero value function
    V = np.zeros(env.nS)
    while True:
        delta = 0
        # For each state, perform a "full backup"
        for s in range(env.nS):
            v = 0
            # Look at the possible next actions
            for a, action_prob in enumerate(policy[s]):
                # For each action, look at the possible next states...
                for prob, next_state, reward, done in env.P[s][a]:
                    # Calculate the expected value
                    v += action_prob * prob * (reward + discount_factor * V[next_state])
            # How much our value function changed (across any states)
            delta = max(delta, np.abs(v - V[s]))
            V[s] = v
        # Stop evaluating once our value function change is below a threshold
        if delta < theta:
            break
    return np.array(V)
def policy_improvement(env, policy_eval_fn=policy_eval, discount_factor=1.0):
    """
    Policy Improvement Algorithm. Iteratively evaluates and improves a policy
    until an optimal policy is found.
    Args:
        env: The OpenAI environment.
        policy_eval_fn: Policy Evaluation function that takes 3 arguments:
            policy, env, discount_factor.
        discount_factor: Gamma discount factor.
    Returns:
        A tuple (policy, V).
        policy is the optimal policy, a matrix of shape [S, A] where each state s
        contains a valid probability distribution over actions.
        V is the value function for the optimal policy.
    """
    def one_step_lookahead(state, V):
        """
        Helper function to calculate the value of all actions in a given state.
        Args:
            state: The state to consider (int)
            V: The value to use as an estimator, a vector of length env.nS
        Returns:
            A vector of length env.nA containing the expected value of each action.
        """
        A = np.zeros(env.nA)
        for a in range(env.nA):
            for prob, next_state, reward, done in env.P[state][a]:
                A[a] += prob * (reward + discount_factor * V[next_state])
        return A

    # Start with a uniform random policy
    policy = np.ones([env.nS, env.nA]) / env.nA

    while True:
        # Evaluate the current policy
        V = policy_eval_fn(policy, env, discount_factor)
        # Will be set to false if we make any changes to the policy
        policy_stable = True
        # For each state...
        for s in range(env.nS):
            # The best action we would take under the current policy
            chosen_a = np.argmax(policy[s])
            # Find the best action by one-step lookahead
            # Ties are resolved arbitrarily
            action_values = one_step_lookahead(s, V)
            best_a = np.argmax(action_values)
            # Greedily update the policy
            if chosen_a != best_a:
                policy_stable = False
            policy[s] = np.eye(env.nA)[best_a]
        # If the policy is stable we've found an optimal policy. Return it
        if policy_stable:
            return policy, V
env = GridworldEnv()
random_policy = np.ones([env.nS, env.nA]) / env.nA
v = policy_eval(random_policy, env)
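# Optional sanity check, assuming this GridworldEnv is the 4x4 layout from
# Sutton & Barto: the random-policy values should match the converged values
# shown in Figure 4.1 of the book (to within the evaluation tolerance).
expected_v = np.array([0, -14, -20, -22,
                       -14, -18, -20, -20,
                       -20, -20, -18, -14,
                       -22, -20, -14, 0])
np.testing.assert_array_almost_equal(v, expected_v, decimal=2)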
policy, v = policy_improvement(env)
print("Policy Probability Distribution:")
print(policy)
print("")
print("Reshaped Grid Policy (0=up, 1=right, 2=down, 3=left):")
print(np.reshape(np.argmax(policy, axis=1), env.shape))
print("")
print("Value Function:")
print(v)
print("")
print("Reshaped Grid Value Function:")
print(v.reshape(env.shape))
print("")
Running this code, you can see that the resulting policy is identical to the optimal policy given in the book.
Value Iteration
Policy iteration has a drawback: every iteration performs a full policy evaluation, which is very time-consuming when the state space is large. Value iteration instead iterates the Bellman optimality equation directly, and this is where it differs from policy iteration; comparing the two pseudocodes side by side makes the difference clear. Value iteration converges directly to the optimal value function, from which the optimal policy follows by a simple greedy selection.

Now look back at policy iteration: policy evaluation applies the Bellman expectation equation to compute the value function of the current policy, and policy improvement then picks, in every state, the action that maximizes that value function. But must we really wait for policy evaluation to converge to the exact value function? Not necessarily. We can stop it early once the change falls below some tolerance and use the approximate value function for policy improvement (in the Gridworld example, the optimal policy is already fixed after about the third sweep). Waiting for full convergence is one extreme; the other extreme is to truncate policy evaluation after a single sweep and immediately act greedily, which is exactly iterating the Bellman optimality equation. In other words, value iteration can be viewed as an extreme case of policy iteration.
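For reference, the backup that value iteration applies on every sweep is the Bellman optimality update, written in the same notation as the equations above:

$$v_{k+1}(s) = \max_a \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma v_k(s')\right]$$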
In general, policy iteration converges in fewer iterations, so it is usually preferable when the state space is small; when the state space is large, each sweep of value iteration is cheaper.
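To make the idea of stopping the evaluation early concrete, here is a minimal sketch of a hypothetical truncated_policy_eval helper (the function name and the max_sweeps parameter are illustrative, not part of the original code). Passing it to policy_improvement above via policy_eval_fn gives a form of modified policy iteration; with max_sweeps=1, each improvement step follows a single sweep of evaluation, which is essentially value iteration.

def truncated_policy_eval(policy, env, discount_factor=1.0, max_sweeps=3):
    # Same backup as policy_eval above, but stop after a fixed number of
    # sweeps instead of waiting for convergence.
    V = np.zeros(env.nS)
    for _ in range(max_sweeps):
        for s in range(env.nS):
            V[s] = sum(action_prob * prob * (reward + discount_factor * V[next_state])
                       for a, action_prob in enumerate(policy[s])
                       for prob, next_state, reward, done in env.P[s][a])
    return V

# Hypothetical usage:
# policy, v = policy_improvement(env, policy_eval_fn=truncated_policy_eval)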
Again using the Gridworld example, here is a Python implementation of value iteration:
import numpy as np
from lib.envs.gridworld import GridworldEnv

def value_iteration(env, theta=0.0001, discount_factor=1.0):
    """
    Value Iteration Algorithm.
    Args:
        env: OpenAI env. env.P represents the transition probabilities of the environment.
            env.P[s][a] is a list of transition tuples (prob, next_state, reward, done).
            env.nS is the number of states in the environment.
            env.nA is the number of actions in the environment.
        theta: We stop evaluation once our value function change is less than theta for all states.
        discount_factor: Gamma discount factor.
    Returns:
        A tuple (policy, V) of the optimal policy and the optimal value function.
    """
    def one_step_lookahead(state, V):
        """
        Helper function to calculate the value of all actions in a given state.
        Args:
            state: The state to consider (int)
            V: The value to use as an estimator, a vector of length env.nS
        Returns:
            A vector of length env.nA containing the expected value of each action.
        """
        A = np.zeros(env.nA)
        for a in range(env.nA):
            for prob, next_state, reward, done in env.P[state][a]:
                A[a] += prob * (reward + discount_factor * V[next_state])
        return A

    V = np.zeros(env.nS)
    while True:
        # Stopping condition
        delta = 0
        # Update each state...
        for s in range(env.nS):
            # Do a one-step lookahead to find the best action
            A = one_step_lookahead(s, V)
            best_action_value = np.max(A)
            # Calculate delta across all states seen so far
            delta = max(delta, np.abs(best_action_value - V[s]))
            # Update the value function
            V[s] = best_action_value
        # Check if we can stop
        if delta < theta:
            break

    # Create a deterministic policy using the optimal value function
    policy = np.zeros([env.nS, env.nA])
    for s in range(env.nS):
        # One step lookahead to find the best action for this state
        A = one_step_lookahead(s, V)
        best_action = np.argmax(A)
        # Always take the best action
        policy[s, best_action] = 1.0

    return policy, V
env = GridworldEnv()
policy, v = value_iteration(env)
print("Policy Probability Distribution:")
print(policy)
print("")
print("Reshaped Grid Policy (0=up, 1=right, 2=down, 3=left):")
print(np.reshape(np.argmax(policy, axis=1), env.shape))
print("")
print("Value Function:")
print(v)
print("")
print("Reshaped Grid Value Function:")
print(v.reshape(env.shape))
print("")
The output is identical to that of policy iteration.
References
[1] Reinforcement Learning: An Introduction, Chapter 4: Dynamic Programming
[2] David Silver's RL Course Lecture 3 - Planning by Dynamic Programming (video, slides)
[3] Quora: https://www.quora.com/How-is-policy-iteration-different-from-value-iteration by Sergio Valcarcel Macua
[4] The difference between policy iteration and value iteration: http://blog.csdn.net/panglinzhuo/article/details/77752574
[5] GitHub open-source code: https://github.com/dennybritz/reinforcement-learning/