How to find out more about the Cloud TPU device you are running your programs against?










Whether we are using Google Colab or accessing Cloud TPUs directly, the program below gives only limited information about the underlying TPUs:





import os
import tensorflow as tf

# Colab exposes the TPU's gRPC address via this environment variable.
tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']
print('TPU address is', tpu_address)

def printTPUDevices():
    # Connect to the remote TPU worker and list its devices.
    with tf.Session(tpu_address) as session:
        devices = session.list_devices()
    print('TPU devices:')
    return devices

printTPUDevices()


Is there any documentation, or are there bash commands, for displaying more information programmatically? See this gist for an example: https://gist.github.com/neomatrix369/256913dcf77cdbb5855dd2d7f5d81b84.
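In the meantime, the limited device list can at least be summarized. `summarize_devices` below is a hypothetical helper (not part of any TensorFlow API), assuming device names follow the usual `/job:.../device:TYPE:index` pattern that `list_devices()` reports:

```python
from collections import Counter

def summarize_devices(device_names):
    """Count devices by type from TensorFlow device-name strings.

    Device names look like '/job:tpu_worker/replica:0/task:0/device:TPU:0';
    the type is the token after the final 'device:' segment.
    """
    counts = Counter()
    for name in device_names:
        # Take the part after 'device:' and strip the trailing index.
        dev = name.rsplit('device:', 1)[-1]
        dev_type = dev.split(':')[0]
        counts[dev_type] += 1
    return dict(counts)

# Example with the device names a Colab TPU session typically reports
# (a single Cloud TPU exposes eight TPU cores).
names = [
    '/job:tpu_worker/replica:0/task:0/device:CPU:0',
] + ['/job:tpu_worker/replica:0/task:0/device:TPU:%d' % i for i in range(8)]
print(summarize_devices(names))  # → {'CPU': 1, 'TPU': 8}
```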










  • Thanks Rajiv for amending the tag to a more appropriate one

    – Mani Sarkar
    Nov 16 '18 at 22:20















tensorflow google-cloud-tpu






edited Nov 16 '18 at 6:50









Rajiv Bharadwaja

807




asked Nov 13 '18 at 18:40









Mani Sarkar

256




1 Answer
The Cloud TPU system architecture is rather different from a GPU's, so this level of device information is not available.



Because the client talks to a remote TensorFlow server and uses XLA, client code doesn't need to change based on the features available on the TPU; the remote server compiles machine instructions based on the TPU's capabilities.



However, the Cloud TPU Profiler does give a lower-level view of the TPU for performance optimization. You can see a trace-level view of which operations use up memory and compute time.
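As a sketch of that workflow (assuming the `cloud-tpu-profiler` pip package, a `TPU_NAME` environment variable, and a GCS bucket for logs; the bucket path is a placeholder and flag names may differ across tool versions):

```shell
# Hedged sketch: capture a profile from a running Cloud TPU and inspect
# it in TensorBoard. Skipped entirely when TPU_NAME is unset, since the
# capture tool needs a live TPU to sample.
if [ -n "${TPU_NAME:-}" ]; then
  pip install --quiet cloud-tpu-profiler
  # Sample the TPU for a short window and write a trace to GCS.
  capture_tpu_profile --tpu="${TPU_NAME}" --logdir=gs://my-bucket/tpu-profile
  # Open the trace viewer, op profile, and memory viewer.
  tensorboard --logdir=gs://my-bucket/tpu-profile
fi
```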






  • Thanks @michaelb, I have come across the Cloud TPU Profiler, although it seems some setup is needed before we can use it. Is there an easier/shorter way to use it via Google Colab?

    – Mani Sarkar
    Nov 21 '18 at 0:09












  • I'm working on this aspect of the TPU - on my reading list, thanks for the tips

    – Mani Sarkar
    Dec 14 '18 at 18:42











answered Nov 19 '18 at 23:34









michaelb
