One hot encoding huge 3D array

As the title says, my data looks like this:
["test", "bob", "romeo"] (etc., just random words)
I have converted each letter into a number based on its position in the alphabet, so now it would be:



[[19, 4, 18, 19], [1, 14, 1], [17, 14, 12, 4, 14]]
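(For reference, the letter-to-index conversion can be sketched like this; a minimal version assuming lowercase input and 0-indexed letters:)

```python
def word_to_indices(word):
    """Map each lowercase letter to its 0-based position in the alphabet."""
    return [ord(ch) - ord('a') for ch in word.lower()]

words = ["test", "bob", "romeo"]
print([word_to_indices(w) for w in words])
# [[19, 4, 18, 19], [1, 14, 1], [17, 14, 12, 4, 14]]
```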


Now I want to one-hot encode it:



tf.one_hot(featuresVectors, longestWordLen)


which results in:



ResourceExhaustedError: OOM when allocating tensor with shape[512996,62,62]
      python tensorflow one-hot-encoding
asked Nov 13 '18 at 15:19 by Higeath
2 Answers
You are running out of memory, meaning there isn't enough memory left on your device to create such a tensor. Given that your batch size is 512996 and your depth dimension is 62, you are trying to create a tensor of 512996 x 62 x 62 x sizeof(float) bytes: ~7.34 GB!



Since the indices are never going to be greater than 26, you can try a smaller data type for this tensor, like int8: tf.one_hot(featuresVectors, longestWordLen, dtype=tf.int8).
That should take 512996 x 62 x 62 x 1 bytes: ~1.83 GB on your device.
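A quick back-of-the-envelope check of those figures (the shape comes from the error message; float32 is tf.one_hot's default output dtype):

```python
# Tensor shape reported by the OOM error: [512996, 62, 62]
batch, rows, depth = 512996, 62, 62
elements = batch * rows * depth

float32_bytes = elements * 4  # float32 uses 4 bytes per element
int8_bytes = elements * 1     # int8 uses 1 byte per element

GiB = 1024 ** 3
print(f"float32: {float32_bytes / GiB:.2f} GiB")  # 7.35 GiB
print(f"int8:    {int8_bytes / GiB:.2f} GiB")     # 1.84 GiB
```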



If your device still cannot allocate the tensor, then you'll have to reduce your batch size (i.e. the number of words).
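One way to act on that advice without dropping data is to encode in chunks, so the full tensor never exists in memory at once; a minimal pure-Python sketch (the chunk size is an arbitrary assumption, and real TensorFlow code would batch with tf.data and apply tf.one_hot per batch instead):

```python
def one_hot_row(indices, depth):
    """One-hot encode a single word (a list of letter indices)."""
    return [[1 if i == idx else 0 for i in range(depth)] for idx in indices]

def one_hot_in_chunks(feature_vectors, depth, chunk_size=4096):
    """Yield one-hot-encoded chunks so only one chunk is in memory at a time."""
    for start in range(0, len(feature_vectors), chunk_size):
        chunk = feature_vectors[start:start + chunk_size]
        yield [one_hot_row(word, depth) for word in chunk]
```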
answered Nov 13 '18 at 15:40 by Tezirg
You are basically running out of memory. Two approaches that could help: use fewer features (e.g. count the words and keep just the top 10000 or so, plus an "unknown" token for the rest) to make the one-hot size smaller, or use an embedding layer in your network and feed it the integers directly.
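The first suggestion (a capped vocabulary with an unknown token) can be sketched in plain Python; the cutoff and the token name are arbitrary assumptions:

```python
from collections import Counter

def build_vocab(words, max_size=10000):
    """Keep only the most frequent words; everything else maps to <unk>."""
    counts = Counter(words)
    vocab = {"<unk>": 0}
    for word, _ in counts.most_common(max_size):
        vocab[word] = len(vocab)
    return vocab

def encode(words, vocab):
    """Map each word to its vocabulary id, falling back to <unk>."""
    return [vocab.get(w, vocab["<unk>"]) for w in words]
```

For the second suggestion, something like tf.keras.layers.Embedding(input_dim=len(vocab), output_dim=8) would consume these integer ids directly, with no one-hot tensor materialised at all.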
answered Nov 13 '18 at 15:39 by Syrius