How to increase BatchSize with Tensorflow's C++ API?

























I took the code from https://gist.github.com/kyrs/9adf86366e9e4f04addb (which takes an OpenCV cv::Mat image as input and converts it to a tensor) and I use it to label images with the model inception_v3_2016_08_28_frozen.pb from the TensorFlow tutorial (https://www.tensorflow.org/tutorials/image_recognition#usage_with_the_c_api). Everything works fine with a batch size of 1. However, when I increase the batch size to 2 (or greater), the size of
finalOutput (which is of type std::vector<tensorflow::Tensor>) is zero.



Here's the code to reproduce the error:



// Only for Visual Studio
#define COMPILER_MSVC
#define NOMINMAX

#include <string>
#include <iostream>
#include <fstream>
#include <vector>
#include <utility>
#include <algorithm>
#include <memory>

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/framework/tensor.h"

int batchSize = 2;
int height = 299;
int width = 299;
int depth = 3;

int mean = 0;
int stdev = 255;

// Set image paths
cv::String pathFilenameImg1 = "D:/IMGS/grace_hopper.jpg";
cv::String pathFilenameImg2 = "D:/IMGS/lenna.jpg";

// Set model paths
std::string graphFile = "D:/Tensorflow/models/inception_v3_2016_08_28_frozen.pb";
std::string labelfile = "D:/Tensorflow/models/imagenet_slim_labels.txt";
std::string InputName = "input";
std::string OutputName = "InceptionV3/Predictions/Reshape_1";


void read_prepare_image(cv::String pathImg, cv::Mat &imgPrepared)
{
    // Read color image:
    cv::Mat imgBGR = cv::imread(pathImg);

    // Resize the image to the model's expected input size:
    cv::Size s(height, width);
    cv::Mat imgResized;
    cv::resize(imgBGR, imgResized, s, 0, 0, cv::INTER_CUBIC);

    // Convert the image to float and normalize the data:
    imgResized.convertTo(imgPrepared, CV_32FC1);
    imgPrepared = imgPrepared - mean;
    imgPrepared = imgPrepared / stdev;
}


int main()
{
    // Read and prepare images using OpenCV:
    cv::Mat img1, img2;
    read_prepare_image(pathFilenameImg1, img1);
    read_prepare_image(pathFilenameImg2, img2);

    // Create a tensor for storing the data
    tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({ batchSize, height, width, depth }));
    auto input_tensor_mapped = input_tensor.tensor<float, 4>();

    // Copy the image data into the tensor:
    for (int b = 0; b < batchSize; ++b)
    {
        const float *source_data;

        if (b == 0)
            source_data = (float*)img1.data;
        else
            source_data = (float*)img2.data;

        for (int y = 0; y < height; ++y)
        {
            const float* source_row = source_data + (y * width * depth);
            for (int x = 0; x < width; ++x)
            {
                const float* source_pixel = source_row + (x * depth);
                const float* source_B = source_pixel + 0;
                const float* source_G = source_pixel + 1;
                const float* source_R = source_pixel + 2;

                input_tensor_mapped(b, y, x, 0) = *source_R;
                input_tensor_mapped(b, y, x, 1) = *source_G;
                input_tensor_mapped(b, y, x, 2) = *source_B;
            }
        }
    }

    // Load the graph:
    tensorflow::GraphDef graph_def;
    ReadBinaryProto(tensorflow::Env::Default(), graphFile, &graph_def);

    // Create a session with the graph
    std::unique_ptr<tensorflow::Session> session_inception(tensorflow::NewSession(tensorflow::SessionOptions()));
    session_inception->Create(graph_def);

    // Run the loaded graph
    std::vector<tensorflow::Tensor> finalOutput;
    session_inception->Run({ { InputName, input_tensor } }, { OutputName }, {}, &finalOutput);

    // Get the top 5 classes:
    std::cerr << "final output size = " << finalOutput.size() << std::endl;
    tensorflow::Tensor output = std::move(finalOutput.at(0));
    auto scores = output.flat<float>();
    std::cerr << "scores size=" << scores.size() << std::endl;

    std::ifstream label(labelfile);
    std::string line;

    std::vector<std::pair<float, std::string>> sorted;

    for (unsigned int i = 0; i <= 1000; ++i)
    {
        std::getline(label, line);
        sorted.emplace_back(scores(i), line);
    }

    std::sort(sorted.begin(), sorted.end());
    std::reverse(sorted.begin(), sorted.end());
    std::cout << "size of the sorted file is " << sorted.size() << std::endl;
    for (unsigned int i = 0; i < 5; ++i)
        std::cout << "The output of the current graph has category " << sorted[i].second << " with probability " << sorted[i].first << std::endl;

    return 0;
}




Am I missing anything? Any ideas?



Thanks in advance!










opencv tensorflow






asked Jul 4 '17 at 11:03 by Tides (edited Jul 5 '17 at 8:23)












  • thanks..it works

    – Hara Hara Mahadevaki
    Jul 28 '17 at 1:45





























2 Answers






































I had the same problem. When I changed to the model used in https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/benchmark (a different version of Inception), bigger batch sizes worked correctly.

Note that you need to change the input size from 299,299,3 to 224,224,3 and the input and output layer names to input:0 and output:0.
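Roughly, the adapted settings in the question's code would look like this (a sketch only; the exact file name depends on the benchmark model you download, so treat the path below as a placeholder):

    // Settings adapted for the benchmark model (assumed 224x224 input)
    int batchSize = 2;
    int height = 224;
    int width = 224;
    int depth = 3;

    std::string graphFile = "D:/Tensorflow/models/tensorflow_inception_graph.pb"; // placeholder path
    std::string InputName = "input:0";
    std::string OutputName = "output:0";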






answered Jan 11 '18 at 20:37 by TFreitas























  • Yes, probably this other model was frozen with a variable batch size and the one I was testing was not. Thanks!

    – Tides
    Nov 9 '18 at 12:11
































Probably the graph in the protobuf file had a fixed batch size of 1, and I was only changing the shape of the input, not the graph. The graph has to accept a variable batch size, which means setting the input shape to (None, width, height, channels) when the graph is frozen. Since the graph we have is already frozen, there is no way to change the batch size at this point.
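For illustration only, if the graph were being built from scratch with the C++ ops API (rather than loaded from an already-frozen .pb), the variable batch dimension would be declared by leaving the first dimension unknown (-1), the C++ counterpart of a (None, height, width, channels) placeholder. This is a sketch of the idea and does not modify a frozen graph:

    #include "tensorflow/cc/framework/scope.h"
    #include "tensorflow/cc/ops/standard_ops.h"

    // Sketch: an input placeholder whose batch dimension is left unknown (-1),
    // so the resulting graph accepts any batch size at Run time.
    tensorflow::Scope root = tensorflow::Scope::NewRootScope();
    auto input = tensorflow::ops::Placeholder(
        root.WithOpName("input"), tensorflow::DT_FLOAT,
        tensorflow::ops::Placeholder::Shape(
            tensorflow::PartialTensorShape({ -1, 299, 299, 3 })));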






answered Nov 9 '18 at 12:09 by Tides (edited Nov 13 '18 at 11:09)























