
LSTM autoencoder for anomaly detection - shifts

I am new to the field of deep learning and am trying to implement an LSTM autoencoder for anomaly detection in time series data. I followed the YouTube tutorial "Time Series Anomaly Detection with LSTM Autoencoders using Keras & TensorFlow 2 in Python" and used the code provided in its GitHub repository.

However, when applying the code to my own time series data, I encountered an issue: the results appear to be shifted by one time step. The points marked in the attached image should be considered anomalies, but the anomaly detection algorithm identifies the preceding points as anomalies instead.

Results: (image attached, showing the detected anomalies one step before the true anomalous points)

This is the model I used:

model = keras.Sequential()
model.add(keras.layers.LSTM(units=128, input_shape=(X_train.shape[1], X_train.shape[2])))
model.add(keras.layers.Dropout(rate=0.2))
model.add(keras.layers.RepeatVector(n=X_train.shape[1]))
model.add(keras.layers.LSTM(units=128, return_sequences=True))
model.add(keras.layers.Dropout(rate=0.2))
model.add(keras.layers.TimeDistributed(keras.layers.Dense(units=X_train.shape[2])))
model.compile(loss='mae', optimizer='adam')

history = model.fit(X_train, y_train, epochs=30, batch_size=5, validation_split=0.1, shuffle=False)
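For context on where such a shift can come from: the tutorial builds its training data with a sliding-window helper, and the index you attach each window's reconstruction error to determines where the anomaly appears on the plot. Below is a minimal sketch (NumPy only, with a made-up error measure instead of the real model's reconstruction loss) showing that scoring windows by their start index makes a spike look earlier than the actual anomalous point, while aligning to the window's last timestamp puts it in the right place. The helper name `create_dataset` and `TIME_STEPS = 3` are assumptions for illustration.

```python
import numpy as np

TIME_STEPS = 3  # window length; hypothetical, the tutorial uses a larger value

def create_dataset(series, time_steps):
    # Sliding windows: window i covers series[i : i + time_steps]
    return np.array([series[i:i + time_steps]
                     for i in range(len(series) - time_steps)])

series = np.zeros(10)
series[7] = 5.0  # the true anomaly sits at index 7

X = create_dataset(series, TIME_STEPS)

# Stand-in for per-window reconstruction error: the window's max deviation.
errors = X.max(axis=1)
flagged_window = int(errors.argmax())

start_index = flagged_window                    # plotting against window START: 5 (looks shifted)
end_index = flagged_window + TIME_STEPS - 1     # aligning to the window's LAST point: 7 (correct)
print(start_index, end_index)
```

If your plotting code pairs each window's error with the window's first timestamp (or with a timestamp offset by one inside the helper), every detection will appear shifted earlier by a fixed amount, which matches the symptom described above.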

I'm unsure whether the issue stems from the data, from the model, or whether such a shift is expected behavior and the detected indices simply need to be shifted back. Any guidance would be appreciated.
