How to configure Tensorflow object detection Android demo to work with Inception v2

























We have built an Android app based on the TensorFlow object detection Android demo app. It works when using a MobileNet network, but crashes if we try to use an Inception v2 based network.



Is it possible for TensorFlow Inception v2 object detection to work on Android?



https://github.com/tensorflow/models/tree/master/research/object_detection



We are using exactly the same code as the TensorFlow detector demo, here:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/TensorFlowObjectDetectionAPIModel.java



Models are from the model zoo here:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md



We are using the TF Object Detection API.
If we use a network .pb file trained using ssd_mobilenet_v2_coco, the demo app works.
If we use a network .pb file trained using faster_rcnn_inception_v2_coco, it crashes (see the stack trace below).



Is it possible for the Android app to work with the Inception v2 model?
(the MobileNet accuracy is quite poor, but Inception is much better)



11-14 12:11:47.817 7122-7199/org.tensorflow.demo E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.demo, PID: 7122
java.nio.BufferOverflowException
at java.nio.FloatBuffer.put(FloatBuffer.java:444)
at org.tensorflow.Tensor.writeTo(Tensor.java:488)
at org.tensorflow.contrib.android.TensorFlowInferenceInterface.fetch(TensorFlowInferenceInterface.java:488)
at org.tensorflow.contrib.android.TensorFlowInferenceInterface.fetch(TensorFlowInferenceInterface.java:442)
at org.tensorflow.demo.TensorFlowObjectDetectionAPIModel.recognizeImage(TensorFlowObjectDetectionAPIModel.java:170)
at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:288)
at android.os.Handler.handleCallback(Handler.java:739)
at android.os.Handler.dispatchMessage(Handler.java:95)
at android.os.Looper.loop(Looper.java:148)
at android.os.HandlerThread.run(HandlerThread.java:61)

































  • Please provide more information about the problem itself. Why does the app crash? Is there any output? Which framework did you use, TF Mobile or TFLite? Like @Derek said, it should be possible to use Inception on mobile in theory.

    – Janikan
    Nov 12 '18 at 21:34











  • Updated post with details and stack.

    – James
    Nov 14 '18 at 20:05











  • You need to limit the maximum number of detections per image, or it will hog memory and you will end up with overflow errors.

    – mlRocks
    Nov 16 '18 at 6:59
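The cap mlRocks mentions lives in the model's pipeline config, not in the app. A sketch of the relevant post-processing block for a faster_rcnn model in the TF Object Detection API (the numeric values here are illustrative, not a recommendation):

```
# Fragment of a faster_rcnn pipeline.config (TF Object Detection API)
second_stage_post_processing {
  batch_non_max_suppression {
    score_threshold: 0.3
    iou_threshold: 0.6
    max_detections_per_class: 100
    max_total_detections: 100   # keep this <= the app's MAX_RESULTS
  }
  score_converter: SOFTMAX
}
```

Re-export the frozen graph after changing the config so the limit is baked into the .pb file.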















Tags: android, tensorflow, object-detection, object-detection-api






asked Nov 8 '18 at 21:10 by James; edited Nov 14 '18 at 20:03









2 Answers






I read about this issue once.

I think the problem is in this line of your code:

 private static final int MAX_RESULTS = 100;

This creates an array for the output with the specified length. I think SSD MobileNet gives at most this number of predictions, but the default Faster R-CNN (without any configuration on your side) gives you more. Try increasing this value, for example to 500.






answered Nov 15 '18 at 8:42 – Janikan
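To see why the fetch overflows, here is a minimal, self-contained sketch (not the demo's actual code; the class name and the 4-floats-per-box layout are illustrative) of what happens when Tensor.writeTo() puts more floats into the output FloatBuffer than MAX_RESULTS allows for:

```java
import java.nio.FloatBuffer;

public class BufferSizing {
    // Returns true if writing `numDetections` boxes overflows a buffer sized for `maxResults`.
    static boolean overflows(int maxResults, int numDetections) {
        // The demo sizes its output buffers from MAX_RESULTS (4 floats per box).
        FloatBuffer out = FloatBuffer.allocate(maxResults * 4);
        try {
            // Tensor.writeTo() effectively does a bulk put() of the graph's output.
            out.put(new float[numDetections * 4]);
            return false;
        } catch (java.nio.BufferOverflowException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(overflows(100, 300)); // true: ~300 proposals overflow a 100-slot buffer
        System.out.println(overflows(500, 300)); // false: buffer sized generously enough
    }
}
```

So either raise MAX_RESULTS above the model's maximum detection count, or lower the model's detection cap in its pipeline config before exporting.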























  • Thanks, this fixes the issue. It works now (but is too slow to be usable, so back to mobilenet)

    – James
    Nov 19 '18 at 14:34
































It should be possible to use SSD Inception, although it is not advisable. Inception is quite large for mobile, and I don't believe we have quantization support for it right now.






answered Nov 12 '18 at 17:42 – Derek Chow























  • So do you mean faster_rcnn_inception_v2_coco will not work, but you think ssd_inception_v2_coco will work?

    – James
    Nov 14 '18 at 20:04











  • I think this is untested, but I have no reason to believe it doesn't work. So long as the Tensorflow graph rewriter that adds the FakeQuant nodes works, it will train with range support for quantization.

    – Derek Chow
    Nov 16 '18 at 1:18
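The graph rewriter Derek refers to is enabled in the TF Object Detection API pipeline config through a graph_rewriter block; a minimal sketch (the field values shown are commonly used settings, adjust for your model):

```
# Appended to pipeline.config to enable quantization-aware training
graph_rewriter {
  quantization {
    delay: 48000        # start inserting FakeQuant nodes after this many steps
    weight_bits: 8
    activation_bits: 8
  }
}
```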









