Keras TimeSeries - Regression with negative values






















I am trying to perform a regression task on time series data. My data looks like the sample below: I use a window size of 10, the input features are as shown, and the target is the 5th column. As you can see, the target takes values such as 70, 110, -100, 540, -130, and 50.
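For concreteness, the windowing described above can be sketched like this (`make_windows` is a hypothetical helper; the actual preprocessing code is not shown in the question):

```python
import numpy as np

def make_windows(data, window_size=10, target_col=4):
    """Slice rows of (timesteps, features) into sliding windows,
    predicting the next step's 5th column (index 4)."""
    X, y = [], []
    for i in range(len(data) - window_size):
        X.append(data[i:i + window_size])            # past window, all features
        y.append(data[i + window_size, target_col])  # next step's target
    return np.array(X), np.array(y)

data = np.random.rand(100, 24)   # 100 timesteps, 24 features (as in the sample rows)
X, y = make_windows(data)
print(X.shape, y.shape)  # (90, 10, 24) (90,)
```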



My model is as follows:



from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

model = Sequential((
    Conv1D(filters=filters, kernel_size=kernel_size, activation='relu',
           input_shape=(window_size, nb_series)),
    MaxPooling1D(),
    Conv1D(filters=filters, kernel_size=kernel_size, activation='relu'),
    MaxPooling1D(),
    Flatten(),
    Dense(nb_outputs, activation='linear'),
))
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
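As an aside, it is worth checking how quickly the temporal dimension shrinks through this stack. A minimal pure-Python sketch, assuming 'valid' padding (the Keras Conv1D default), the MaxPooling1D default pool_size of 2, and a hypothetical kernel_size of 3:

```python
def conv_len(n, kernel_size):
    # Conv1D with 'valid' padding shortens the sequence by kernel_size - 1
    return n - kernel_size + 1

def pool_len(n, pool_size=2):
    # MaxPooling1D with default pool_size=2 halves it (floor division)
    return n // pool_size

n = 10  # window_size
for _ in range(2):  # two Conv1D + MaxPooling1D stages
    n = pool_len(conv_len(n, 3))
print(n)  # -> 1
```

Under these assumptions, window_size=10 leaves a single timestep after the second pooling layer, so Flatten sees only `filters` values; a larger window or fewer pooling stages would preserve more temporal structure.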


My input features are as follows:




0.00000000,0.42857143,0.57142857,0.00000000,70.00000000,1.00061741,1.00002238,22.40000000,24.85000000,30.75000000,8.10000000,1.00015876,1.00294701,0.99736059,-44.57995000,1.00166700,0.99966561,-0.00003286,0.00030157,1.00252034,49.18000000,40.96386000,19.74918000,-62.22000000
0.00000000,0.09090909,0.72727273,0.18181818,110.00000000,0.99963650,0.99928427,19.19000000,28.89000000,26.65000000,8.60000000,0.99939526,1.00217111,0.99660950,12.04301000,1.00082978,0.99883018,0.00008147,0.00026953,1.00153663,53.70000000,84.81013000,49.33018000,-42.22000000
0.00000000,0.20000000,0.80000000,0.00000000,-100.00000000,1.00034178,1.00016118,19.04000000,27.35000000,36.43000000,9.00000000,1.00028776,1.00300655,0.99756896,-40.34054000,1.00162433,0.99962294,-0.00000094,0.00019842,1.00235166,48.98000000,73.17073000,64.22563000,-62.22000000
0.00000000,0.07407407,0.92592593,0.00000000,540.00000000,0.99554634,0.99608051,20.92000000,32.90000000,20.02000000,12.60000000,0.99583374,0.99957548,0.99209201,166.35514000,0.99723072,0.99523842,0.00069929,0.00025201,0.99342482,67.12000000,89.24051000,83.36000000,-4.23000000
1.00000000,0.30769231,0.53846154,0.15384615,-130.00000000,0.99639984,0.99731696,21.73000000,29.41000000,17.35000000,12.20000000,0.99672034,1.00037538,0.99306530,119.32773000,0.99799071,0.99599723,0.00083646,0.00027643,0.99429023,64.25000000,86.70213000,86.32629000,-13.89000000
1.00000000,0.20000000,0.20000000,0.60000000,50.00000000,0.99590955,0.99698694,24.48000000,37.15000000,15.04000000,12.90000000,0.99618042,1.00005922,0.99230162,123.46570000,0.99737959,0.99538689,0.00105610,0.00034937,0.99368338,66.72000000,87.79070000,86.43382000,-1.39000000





I get the loss below no matter how many epochs I run, or how I switch between activation functions and optimizers.
I understand that this is because the mean of the target over my dataset is between 122 and 124, which is why I always get this value.
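The collapse-to-the-mean behaviour described above is often tied to unscaled targets: with targets spanning roughly -170 to 540, the safest low-gradient solution for the network is a constant near the mean. A minimal NumPy sketch of standardizing the targets before training and inverting afterwards (the values are the sample targets from the question; `mu` and `sigma` are illustrative names):

```python
import numpy as np

# Sample targets from the question, spanning negatives and positives
y_train = np.array([70., 110., -100., 540., -130., 50.])

mu, sigma = y_train.mean(), y_train.std()
y_scaled = (y_train - mu) / sigma     # train the model on these
y_restored = y_scaled * sigma + mu    # apply to model.predict output
assert np.allclose(y_restored, y_train)
```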




297055/297071 [============================>.] - ETA: 0s - loss: 22789.0087 - mean_absolute_error: 123.0670
297071/297071 [==============================] - 144s 486us/step - loss: 22788.9740 - mean_absolute_error: 123.0673 - val_loss: 10519.1722 - val_mean_absolute_error: 79.3461


And by testing the prediction using the code below:



pred = model.predict(X_test)
print('\n\nactual', 'predicted', sep='\t')
for actual, predicted in zip(y_test, pred.squeeze()):
    print(actual.squeeze(), predicted, sep='\t')


I get the output below:

For linear activation at the output layer:




20.0 -0.059563223
-22.0 -0.059563223
-55.0 -0.059563223


For relu activation at the output layer:




235.0 0.0
-170.0 0.0
154.0 0.0


And sigmoid:




-54.0 1.4216835e-36
-39.0 0.0
66.0 2.0888916e-37


Is there a way to predict continuous values like those above?



Is it the activation function?



Is it an issue of feature selection?



Is it an architectural issue? Maybe an LSTM would be better?



Also, any recommendation regarding the kernel size, filters, loss, activation, and optimizer is much appreciated.



Update:
I have tried an LSTM using the model below:



from keras.models import Sequential
from keras.layers import LSTM, Dense

# design network
model = Sequential()
model.add(LSTM(50, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam', metrics=['mae'])
# fit network
model.fit(X_train, y_train, epochs=2, batch_size=10,
          validation_data=(X_test, y_test), shuffle=False)


And I got the loss below:




297071/297071 [==============================] - 196s 661us/step - loss: 122.8202 - mean_absolute_error: 122.8202 - val_loss: 78.2440 - val_mean_absolute_error: 78.2440
Epoch 2/2
297071/297071 [==============================] - 196s 661us/step - loss: 122.3811 - mean_absolute_error: 122.3811 - val_loss: 78.4328 - val_mean_absolute_error: 78.4328


And the predicted values below:




-55.0 -45.222805
-105.0 -21.363165
29.0 -18.858946
-125.0 -34.27912
-134.0 20.847342
-108.0 30.286516
113.0 31.09069
-63.0 8.848535


Is it the architecture or the data?





































      keras time-series regression lstm convolution






      edited Nov 12 at 14:44

























      asked Nov 11 at 8:03









Ramzy
197


























