Policy Improvement Step of Policy Iteration









Hello, dear Stack Overflow community!

I am taking a reinforcement learning course right now and I am confused about the policy iteration method.

In policy iteration, we start with a random policy and a value function for each state. In the "Policy Evaluation" step, we compute a new value for each state. Then, in the "Policy Improvement" step, we update the policy based on those new values. We repeat these two steps until the value function converges. My problem is: how is the new policy actually used in the next policy-evaluation step? Let me explain my point with an example.
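For concreteness, the evaluate/improve loop described above can be sketched as follows. This is a minimal sketch, assuming the classic 4×4 grid world (terminal states in two opposite corners, deterministic moves, reward -1 per step, discount 1); the grid layout and all helper names here are my own assumptions, not taken from the course:

```python
import numpy as np

N = 4                                          # 4x4 grid
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
TERMINAL = {0, N * N - 1}                      # two opposite corners

def step(s, a):
    """Deterministic move; bumping into a wall leaves the state unchanged."""
    r, c = divmod(s, N)
    dr, dc = ACTIONS[a]
    nr, nc = min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1)
    return nr * N + nc

def evaluate(policy, theta=1e-6, gamma=1.0):
    """Iterative policy evaluation: Bellman expectation sweeps until stable."""
    V = np.zeros(N * N)
    while True:
        delta = 0.0
        for s in range(N * N):
            if s in TERMINAL:
                continue
            v = sum(p * (-1 + gamma * V[step(s, a)])
                    for a, p in enumerate(policy[s]))
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < theta:
            return V

def improve(V, gamma=1.0):
    """Greedy improvement: put all probability on the best action(s)."""
    policy = np.zeros((N * N, 4))
    for s in range(N * N):
        q = [-1 + gamma * V[step(s, a)] for a in range(4)]
        best = np.isclose(q, max(q))
        policy[s] = best / best.sum()
    return policy

# Start from the uniform random policy (0.25 per direction) and iterate.
policy = np.full((N * N, 4), 0.25)
for _ in range(10):
    V = evaluate(policy)
    new_policy = improve(V)
    if np.allclose(new_policy, policy):
        break
    policy = new_policy
```

Starting from the uniform random policy, this loop reaches the optimal policy after a couple of evaluate/improve rounds; at convergence, each non-terminal state's value is minus its distance to the nearest terminal state.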



Grid world example:
(image of the grid world was here)



As you can see in the image, the black boxes are terminal states, the immediate reward is -1, the discount factor is 1, and at the beginning each direction has probability 0.25. In the policy we get from the policy-improvement step at k = 1, we have to go left from the state just to the right of the top-left terminal state. So, while updating the value functions for k = 2, why don't we take this change into account, instead of writing -1.75 (abbreviated as -1.7) for this state as if we could still go in all directions? In my opinion, mathematically:



new value for this state = -1 + 1 * 0.0 = -1



It should be this, because based on our latest policy there is only one possible action from that state, instead of:



-1.75 = 0.75 * (-1 + 1 * (-1)) + 0.25 * (-1 + 1 * 0.0)
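The two candidate backups can be checked directly. The numbers below are taken from the k = 1 evaluation in the example (value 0.0 at the terminal state, -1 at the other three neighbors); the variable names are my own:

```python
gamma = 1.0

# Backup under the NEW greedy policy (always "left", so the next state
# is the terminal state with value 0.0):
v_greedy = -1 + gamma * 0.0

# Backup under the OLD uniform random policy (0.25 per direction: one
# direction reaches the terminal state, the other three reach states
# whose k = 1 value is -1):
v_random = 0.25 * (-1 + gamma * 0.0) + 0.75 * (-1 + gamma * (-1.0))

print(v_greedy, v_random)  # -1.0 -1.75
```

So the -1.75 in the table is exactly the backup under the old uniform policy, while -1 would be the backup under the new greedy policy.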



If we do it like that, then what is the purpose of those intermediate policies? If we do not use them for the new value functions, we might as well just do value iteration!



I am very confused about this. Can you help me, please?










Tags: dynamic-programming, reinforcement-learning, planning, value-iteration






asked Nov 9 at 17:41 by dummyHead
edited Nov 10 at 17:57 by R.F. Nelson


























