Policy Improvement Step of Policy Iteration
Hello, StackOverflow community!
I am currently taking a Reinforcement Learning course and I am confused about the Policy Iteration method.
In Policy Iteration, we start with a random policy and a value function for each state. In the "Policy Evaluation" step, we compute new value functions for each state. Then, in the "Policy Improvement" step, we update the policy based on those new values. We repeat these steps until the value function converges. My problem is: how do we use the new policy in the next Policy Evaluation step? Let me explain with an example.
Grid world example:
As you can see in the image, the black boxes are terminal states, the immediate reward is -1, the discount factor is 1, and at the start every direction has probability 0.25. In the policy obtained from the Policy Improvement step at k = 1, we have to go left from the state immediately to the right of the top-left terminal state. So when updating the value functions at k = 2, why do we still write -1.75 (shown abbreviated as -1.7) for that state, as if we could still move in all directions? Mathematically, I would expect:
new value for this state = -1 + 1 * (0.0) = -1
because only one action is possible from that state under the latest policy, instead of:
-1.75 = ((-1) + 1 * (-1)) * (0.75) + ((-1) + 1 * (0.0)) * (0.25)
If we compute it the latter way, what is the purpose of those intermediate policies? If we do not use them for the new value functions, this is just Value Iteration!
I am very confused about this, can you help me please?
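For what it's worth, both calculations can be checked with a short Python sketch. This is my own reconstruction of the standard 4x4 gridworld (the function names `step`, `evaluate` and the two policy definitions are mine, not from the course material); it runs two synchronous sweeps of policy evaluation, once under the uniform random policy and once under a policy that goes left from the state in question.

```python
import numpy as np

# Standard 4x4 gridworld: terminals at (0,0) and (3,3),
# reward -1 per move, discount factor gamma = 1.
N = 4
TERMINALS = {(0, 0), (3, 3)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(s, a):
    # Deterministic move; stepping off the grid leaves the state unchanged.
    r, c = s[0] + a[0], s[1] + a[1]
    return (r, c) if 0 <= r < N and 0 <= c < N else s

def evaluate(policy, sweeps):
    # Synchronous policy evaluation: policy(s) yields (probability, action) pairs.
    V = np.zeros((N, N))
    for _ in range(sweeps):
        new_V = np.zeros((N, N))
        for r in range(N):
            for c in range(N):
                s = (r, c)
                if s in TERMINALS:
                    continue  # terminal values stay 0
                new_V[s] = sum(p * (-1 + V[step(s, a)]) for p, a in policy(s))
        V = new_V
    return V

# Uniform random policy: probability 0.25 for each direction.
uniform = lambda s: [(0.25, a) for a in ACTIONS]

# "Improved" policy from the question: go left from the state to the
# right of the top-left terminal, uniform everywhere else.
improved = lambda s: [(1.0, (0, -1))] if s == (0, 1) else uniform(s)

V2 = evaluate(uniform, 2)
V_greedy = evaluate(improved, 2)
print(V2[0, 1])        # -1.75, the value shown on the slides at k = 2
print(V_greedy[0, 1])  # -1.0, the value computed in the question
```

Both numbers match the calculations above: -1.75 is what another sweep under the original uniform policy gives, while -1.0 is what a sweep under the updated go-left policy would give for that state.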
dynamic-programming reinforcement-learning planning value-iteration
asked Nov 9 at 17:41 by dummyHead
edited Nov 10 at 17:57 by R.F. Nelson