One hot encoding huge 3D array
As the title says, my data looks like this: ["test", "bob", "romeo"] (etc., just random words).
I have converted each word into numbers based on each letter's position in the alphabet, so now it looks like:
[[19, 4, 18, 19], [1, 14, 1], [17, 14, 12, 4, 14]]
Now I want to one-hot encode it:
tf.one_hot(featuresVectors, longestWordLen)
This results in:
ResourceExhaustedError: OOM when allocating tensor with shape[512996,62,62]
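For reference, the letter-to-number conversion described above can be sketched like this; `word_to_indices` is a hypothetical helper name, and the 0-based alphabet position (`'a'` -> 0) is inferred from the example vectors:

```python
# Hypothetical sketch of the conversion described above, assuming
# 0-based alphabet positions ('a' -> 0, 'b' -> 1, ...).
def word_to_indices(word):
    return [ord(c) - ord('a') for c in word.lower()]

features_vectors = [word_to_indices(w) for w in ["test", "bob", "romeo"]]
print(features_vectors)  # [[19, 4, 18, 19], [1, 14, 1], [17, 14, 12, 4, 14]]
```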
python tensorflow one-hot-encoding
asked Nov 13 '18 at 15:19
Higeath
2 Answers
Your Answer
StackExchange.ifUsing("editor", function ()
StackExchange.using("externalEditor", function ()
StackExchange.using("snippets", function ()
StackExchange.snippets.init();
);
);
, "code-snippets");
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "1"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f53284138%2fone-hot-encoding-huge-3d-array%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
2 Answers
2
active
oldest
votes
2 Answers
2
active
oldest
votes
active
oldest
votes
active
oldest
votes
You are running out of memory: there isn't enough free memory on your device to allocate such a tensor. Given that your batch size is 512996 and your depth dimension is 62, you are trying to create a tensor of 512996 x 62 x 62 x sizeof(float32): ~7.35 GiB!
Since your indices never exceed 26 and a one-hot tensor contains only zeros and ones, you can try a smaller data type for this tensor, such as int8:
tf.one_hot(featuresVectors, longestWordLen, dtype=tf.int8)
That should take 512996 x 62 x 62 x 1 bytes: ~1.84 GiB on your device.
If your device still cannot allocate the tensor, then you'll have to reduce your batch size (i.e., the number of words).
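The sizes above can be checked with a quick back-of-envelope calculation (assuming `float32`, the default output dtype of `tf.one_hot`):

```python
# Back-of-envelope check of the tensor sizes quoted above.
n_words, seq_len, depth = 512996, 62, 62
elements = n_words * seq_len * depth

float32_gib = elements * 4 / 1024**3  # 4 bytes per float32 element
int8_gib = elements * 1 / 1024**3     # 1 byte per int8 element

print(round(float32_gib, 2))  # 7.35
print(round(int8_gib, 2))     # 1.84
```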
answered Nov 13 '18 at 15:40
Tezirg
You are basically running out of memory. Two approaches that could help: use fewer features (e.g., count the words and keep only the top 10,000 or so, with an "unknown" token for the rest) to make the one-hot size smaller, or use an embedding layer in your network and feed the integers in directly.
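The embedding-layer idea can be sketched as a plain table lookup; this uses NumPy rather than a full TensorFlow layer, and `vocab_size` and `embed_dim` are illustrative choices, not values from the question:

```python
import numpy as np

# Minimal sketch of an embedding lookup: each integer index selects a dense
# row from a table, so no giant one-hot tensor is ever materialized.
vocab_size, embed_dim = 27, 8  # 26 letters + 1 padding index (illustrative)
rng = np.random.default_rng(0)
table = rng.normal(size=(vocab_size, embed_dim)).astype(np.float32)

indices = np.array([19, 4, 18, 19])  # "test" as letter indices
vectors = table[indices]             # shape (4, 8): one dense row per letter
print(vectors.shape)                 # (4, 8)
```

In Keras this corresponds to `tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)`, which accepts the integer indices directly and learns the table during training.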
answered Nov 13 '18 at 15:39
Syrius