LSTM model for time-series prediction predicts irregular, sawtooth-like values
I am training a Keras model to predict the availability of bike-sharing stations. Each row of the training set contains the day of the year, the time, the weekday, the station and the number of free bikes. Each sample contains the availability for the previous day (144 time steps) and I am trying to predict the availability for the next day (144 time steps). The shapes of the data sets are:
Train X (2362, 144, 5)
Train Y (2362, 144)
Test X (39, 144, 5)
Test Y (39, 144)
Validation X (1535, 144, 5)
Validation Y (1535, 144)
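For reference, here is a minimal sketch of how arrays with these shapes could be built from a flat per-station table. The raw array, its column order, and the make_windows helper are illustrative assumptions, not the original pipeline:
import numpy as np

# Hypothetical flat table: one row per 10-minute slot (144 per day) with
# columns [day_of_year, time, weekday, station, free_bikes]
# raw.shape == (n_days * 144, 5)
def make_windows(raw, steps_per_day=144):
    """Pair each day's full feature rows (X) with the next day's availability (Y)."""
    n_days = raw.shape[0] // steps_per_day
    days = raw[:n_days * steps_per_day].reshape(n_days, steps_per_day, -1)
    x = days[:-1]         # previous day, all 5 features -> (n_days-1, 144, 5)
    y = days[1:, :, -1]   # next day, free_bikes column  -> (n_days-1, 144)
    return x, y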
The model I am using is this one:
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(20, input_shape=(self.train_x.shape[1], self.train_x.shape[2]), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(20))
model.add(Dense(144))  # one predicted value per time step of the next day
model.compile(loss='mse', optimizer='adam', metrics=['acc', 'mape', 'mse'])
history = model.fit(self.train_x, self.train_y, batch_size=50, epochs=20,
                    validation_data=(self.validation_x, self.validation_y),
                    verbose=1, shuffle=True)
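The snippet does not show how the data are scaled. If scaling is part of the pipeline (the comments below suggest it is), a common pattern is to fit the scaler on the training set only and reuse the same transform everywhere; a sketch, with MinMaxScaler as an assumption:
from sklearn.preprocessing import MinMaxScaler

# Fit on training features only, then apply the identical transform to all splits
scaler = MinMaxScaler()
n_features = self.train_x.shape[2]
scaler.fit(self.train_x.reshape(-1, n_features))

def scale(x):
    return scaler.transform(x.reshape(-1, n_features)).reshape(x.shape)

self.train_x, self.validation_x, self.test_x = map(
    scale, (self.train_x, self.validation_x, self.test_x))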
The predictions made after training have nothing to do with the expected output: they have a sawtooth shape, with values well outside the range of the original data. Accuracy rarely improves, although the loss curve looks normal.
As an example, the history for the last few epochs looks like this:
Epoch 17/20
2362/2362 [==============================] - 12s 5ms/step - loss: 9.1214 - acc: 0.0000e+00 - mean_absolute_percentage_error: 21925846.0813 - mean_squared_error: 9.1214 - val_loss: 9.0642 - val_acc: 0.0000e+00 - val_mean_absolute_percentage_error: 24162847.3779 - val_mean_squared_error: 9.0642
Epoch 18/20
2362/2362 [==============================] - 12s 5ms/step - loss: 8.2241 - acc: 0.0013 - mean_absolute_percentage_error: 21906919.9136 - mean_squared_error: 8.2241 - val_loss: 8.1923 - val_acc: 0.0000e+00 - val_mean_absolute_percentage_error: 22754663.8013 - val_mean_squared_error: 8.1923
Epoch 19/20
2362/2362 [==============================] - 12s 5ms/step - loss: 7.4190 - acc: 0.0000e+00 - mean_absolute_percentage_error: 21910003.1744 - mean_squared_error: 7.4190 - val_loss: 7.3926 - val_acc: 0.0000e+00 - val_mean_absolute_percentage_error: 24673277.8420 - val_mean_squared_error: 7.3926
Epoch 20/20
2362/2362 [==============================] - 12s 5ms/step - loss: 6.7067 - acc: 0.0013 - mean_absolute_percentage_error: 22076339.2168 - mean_squared_error: 6.7067 - val_loss: 6.6758 - val_acc: 6.5147e-04 - val_mean_absolute_percentage_error: 22987089.8436 - val_mean_squared_error: 6.6758
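As a side note on these logs: with continuous targets and mse loss, the acc metric counts exact matches and is essentially meaningless for regression, and MAPE divides by the true values, so it explodes whenever availability is at or near zero. A toy illustration (the numbers are made up):
import numpy as np

y_true = np.array([0.01, 0.02, 1.0])  # availability near zero at some slots
y_pred = np.array([0.5, 0.5, 0.5])

# Keras-style MAPE: mean(|y_true - y_pred| / |y_true|) * 100
mape = np.mean(np.abs(y_true - y_pred) / np.abs(y_true)) * 100
print(mape)  # ~2450% -- the tiny denominators dominate the metric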
I really don't know where the problem might be. More layers? Fewer layers? A different approach?
UPDATE: Added plots of training/test data. The left part of each plot shows the previous day of availability that is fed to the model; the right part shows the expected result and the prediction made by the model.
python tensorflow machine-learning keras time-series
edited Nov 11 at 12:14
asked Nov 11 at 11:48
jdmg718
The real data you have in your plot seems to be 1 everywhere. Is the scaling smoothing out some spikes (the prediction is very large), or is it really flat? If it is indeed 1 everywhere, your model should learn this quite easily. Maybe you should compare your training and testing data. Are they from the same distribution? Could you plot training and testing in the same layout?
– lhk
Nov 11 at 11:55
Updated the question with a new image; the predicted values were so far out of range that the real values look almost constant.
– jdmg718
Nov 11 at 11:58
Nice, thanks. In principle, your model looks fine to me. Some checks you might do: reduce the model size and reduce the learning rate; that could make training more stable. These oscillations look rather strange to me and could be some instability in your training setup. You should also make sure that the model is not picking up trends that exist in your training data but not in your testing data. Plots of availability in the training/testing data would be interesting.
– lhk
Nov 11 at 12:00
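For reference, lowering the learning rate in this setup would look something like the sketch below; the 1e-4 value is an arbitrary example, not taken from the thread:
from keras.optimizers import Adam

# Adam's default learning rate in Keras is 1e-3; try something smaller
model.compile(loss='mse', optimizer=Adam(lr=1e-4), metrics=['mse'])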
It's been a while since I've used LSTMs for time-series predictions, but if I recall correctly the model itself can have a state (so if you apply the model to the same data you can get different results, as the internal state changes). If so, you need to make sure the state has been correctly initialised before you apply the model to the test data.
– kabdulla
Nov 11 at 12:04
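For what it's worth, a Keras LSTM only carries state across batches when built with stateful=True; the code in the question uses the stateless default. If a stateful layer were in play, the reset would look like this sketch:
# Only relevant for LSTM layers created with stateful=True
model.reset_states()
predictions = model.predict(self.test_x)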
Your loss is still decreasing, so you should train for a lot more than 20 epochs. Evaluating a model that hasn't converged (loss not changing) makes no sense.
– Matias Valdenegro
Nov 11 at 14:03
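One way to act on this advice without guessing an epoch count is an early-stopping callback. A sketch, assuming a reasonably recent Keras (restore_best_weights needs 2.2.3+) and an arbitrary patience value:
from keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for 10 epochs and keep the best weights
stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
history = model.fit(self.train_x, self.train_y, batch_size=50, epochs=500,
                    validation_data=(self.validation_x, self.validation_y),
                    callbacks=[stop], verbose=1, shuffle=True)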