Keras fit_generator() for binary classification: predictions always 50%

























I have set up a model to classify whether an image is from a certain video game or not. I pre-scaled my images to 250x250 pixels and separated them into two folders (the two binary classes), labelled 0 and 1. The two class counts are within ~100 of each other, and I have around 3500 images in total.



Here are screenshots of the training process, the model setup and some predictions: https://imgur.com/a/CN1b6LV



train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0,
    zoom_range=0,
    horizontal_flip=True,
    width_shift_range=0.1,
    height_shift_range=0.1,
    validation_split=0.2)
train_generator = train_datagen.flow_from_directory(
    'data/',
    batch_size=batchsize,
    shuffle=True,
    target_size=(250, 250),
    subset="training",
    class_mode="binary")
val_generator = train_datagen.flow_from_directory(
    'data/',
    batch_size=batchsize,
    shuffle=True,
    target_size=(250, 250),
    subset="validation",
    class_mode="binary")
pred_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0,
    zoom_range=0,
    horizontal_flip=False,
    width_shift_range=0.1,
    height_shift_range=0.1)
pred_generator = pred_datagen.flow_from_directory(
    'batch_pred/',
    batch_size=30,
    shuffle=False,
    target_size=(250, 250))


model = Sequential()
model.add(Conv2D(input_shape=(250, 250, 3), filters=25, kernel_size=3, activation="relu", padding="same"))
model.add(Conv2D(filters=32, kernel_size=3, activation="relu", padding="same"))
model.add(Conv2D(filters=32, kernel_size=3, activation="relu", padding="same"))
model.add(MaxPooling2D(pool_size=2, padding="same", strides=(2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(filters=64, kernel_size=3, activation="relu", padding="same"))
model.add(Conv2D(filters=64, kernel_size=3, activation="relu", padding="same"))
model.add(Conv2D(filters=64, kernel_size=3, activation="relu", padding="same"))
model.add(MaxPooling2D(pool_size=2, padding="same", strides=(2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(filters=128, kernel_size=3, activation="relu", padding="same"))
model.add(Conv2D(filters=128, kernel_size=3, activation="relu", padding="same"))
model.add(Conv2D(filters=128, kernel_size=3, activation="relu", padding="same"))
model.add(MaxPooling2D(pool_size=2, padding="same", strides=(2, 2)))
model.add(Conv2D(filters=256, kernel_size=3, activation="relu", padding="same"))
model.add(Conv2D(filters=256, kernel_size=3, activation="relu", padding="same"))
model.add(Conv2D(filters=256, kernel_size=3, activation="relu", padding="same"))
model.add(MaxPooling2D(pool_size=2, padding="same", strides=(2, 2)))
model.add(BatchNormalization())
dense = False
if dense:
    model.add(Flatten())
    model.add(Dense(250, activation="relu"))
    model.add(BatchNormalization())
    model.add(Dense(50, activation="relu"))
else:
    model.add(GlobalAveragePooling2D())
model.add(Dense(1, activation="softmax"))
model.compile(loss='binary_crossentropy',
              optimizer=Adam(0.0005), metrics=["acc"])
callbacks = [EarlyStopping(monitor='val_acc', patience=200, verbose=1),
             ModelCheckpoint(filepath="model_checkpoint.h5py",
                             monitor='val_acc', save_best_only=True, verbose=1)]
model.fit_generator(
    train_generator,
    steps_per_epoch=train_generator.samples // batchsize,
    validation_data=val_generator,
    validation_steps=val_generator.samples // batchsize,
    epochs=500,
    callbacks=callbacks)


Everything appears to run correctly: the model iterates over the data by epoch, finds the correct number of images, and so on. However, my predictions always come out at 50%, despite good validation accuracy, low loss and high training accuracy.



I'm not sure what I'm doing wrong and any help would be appreciated.










  • If one of the answers below resolved your issue, kindly accept it by clicking on the checkmark next to the answer to mark it as "answered" - see What should I do when someone answers my question?

    – today
    Nov 26 '18 at 15:54















python tensorflow machine-learning keras classification






edited Nov 15 '18 at 6:05 by Md. Mokammal Hossen Farnan
asked Nov 15 '18 at 3:15 by Charles Anderson
2 Answers
I think your problem is that you're using sigmoid for binary classification; your final layer's activation function should be linear.






answered Nov 15 '18 at 3:17 by yohan fritz
The problem is that you are using softmax on a Dense layer with one unit. The softmax function normalizes its input so that the sum of its elements equals one, so if the layer has only one unit, the output will always be 1. Instead, for binary classification you need to use the sigmoid function as the activation of the last layer.
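The normalization described above is easy to check numerically; a minimal NumPy sketch (not part of the original answer, and the logit values are made up for illustration):

```python
import numpy as np

def softmax(z):
    # shift by the row max for numerical stability
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# logits from a Dense(1) head: a single value per sample
logits = np.array([[2.3], [-4.1], [0.0]])

# softmax over an axis of length 1 always returns 1.0,
# which is why every prediction comes out the same
print(softmax(logits))  # [[1.] [1.] [1.]]
```

A sigmoid unit, `model.add(Dense(1, activation="sigmoid"))`, avoids this: it maps the single logit to a probability in (0, 1).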






answered Nov 15 '18 at 5:34 by today
    • I appreciate the suggestion but I've since tried sigmoid instead of softmax and I get the same issue! In fact, when I was using softmax I was having the issue of my loss/accuracy being extremely poor in training, with sigmoid my accuracy/validation accuracy reaches 95%+ but every prediction still comes out at 50%.

      – Charles Anderson
      Nov 15 '18 at 9:54












    • @CharlesAnderson Do you rescale your test images by dividing them by 255.0 before using predict() method?

      – today
      Nov 15 '18 at 10:09
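For context on the rescaling this comment asks about: whatever preprocessing the training `ImageDataGenerator` applies has to be mirrored at prediction time if images are loaded by hand. A minimal sketch with a synthetic uint8 array standing in for a real loaded image:

```python
import numpy as np

# stand-in for an image loaded as uint8 pixels in [0, 255]
raw = np.random.randint(0, 256, size=(250, 250, 3), dtype=np.uint8)

# mirror the training generator's rescale=1. / 255
x = raw.astype("float32") / 255.0

# Keras predict() expects a leading batch dimension
batch = np.expand_dims(x, axis=0)
```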











    • @CharlesAnderson Ok, I see you have defined a pred_generator that I think you use for prediction in predict_generator(). Now tell us how do you interpret the result of prediction? How do you find the predicted class?

      – today
      Nov 15 '18 at 10:12
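On the interpretation question raised here: with a single sigmoid unit and `class_mode="binary"`, `predict_generator()` returns one probability per image (the probability of class 1), so the predicted class comes from thresholding at 0.5. A minimal sketch with made-up probabilities:

```python
import numpy as np

# stand-in for the output of model.predict_generator(pred_generator)
probs = np.array([[0.93], [0.08], [0.55]])

# class 1 if p > 0.5, else class 0; the 0/1 indices map back
# to the folder names via train_generator.class_indices
preds = (probs > 0.5).astype(int).ravel()
print(preds)  # [1 0 1]
```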












































