Autoencoders in TensorFlow — part 3

Sailaja Karra
3 min read · Dec 24, 2020


In this blog I am going to show how to use recurrent neural networks as part of autoencoders in TensorFlow. Please see my part 1 blog here & part 2 blog here for the convolutional neural network based autoencoders.

As we did last time, I am still going to use the Fashion-MNIST data to show how we can use LSTM layers to reconstruct an image. The training is going to take a lot longer compared to both convolutional layers and dense layers.

Finally, I want to take a second to give a huge shout out to Aurelien Geron for his amazing book Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow. Most of the code is inspired by this book; I made a few tweaks here and there, but I cannot emphasize enough how good this book is.

So without further ado, here is the code on how to do this.

# General imports
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline

# Load dataset
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()

# Scale pixel values to [0, 1]
X_train = training_images / 255.0
X_valid = test_images / 255.0

Here is the encoder, built from LSTM layers:

encoder_rnn = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(100, return_sequences=True,
                         input_shape=[None, 28]),
    tf.keras.layers.LSTM(30)
])

The encoder has only two layers. The first LSTM layer reads each image row by row (28 timesteps of 28 pixels each) and, because return_sequences is set to True, outputs its 100-neuron hidden state at every timestep. The second LSTM layer consumes that sequence and compresses it down to a single 30-dimensional coding.
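To sanity-check the shapes, you can push a batch through the encoder. This is a minimal sketch; the random batch stands in for real Fashion-MNIST images and is just for illustration:

```python
import numpy as np
import tensorflow as tf

encoder_rnn = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(100, return_sequences=True,
                         input_shape=[None, 28]),
    tf.keras.layers.LSTM(30)
])

# Each 28x28 image is treated as a sequence of 28 rows of 28 pixels.
batch = np.random.rand(32, 28, 28).astype("float32")
codings = encoder_rnn(batch)
print(codings.shape)  # (32, 30): one 30-dimensional coding per image
```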

Here is the encoder_rnn summary

encoder_rnn.summary()

Here is the decoder that reconstructs the images:

decoder_rnn = tf.keras.models.Sequential([
    tf.keras.layers.RepeatVector(28, input_shape=[30]),
    tf.keras.layers.LSTM(100, return_sequences=True),
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(28, activation='sigmoid'))
])

The first layer repeats the 30-dimensional coding 28 times, in effect creating a sequence of 28 identical timesteps to feed into the second LSTM layer. The LSTM's output sequence then goes through a TimeDistributed wrapper, which applies the same instance of the Dense(28) layer to each of the 28 timesteps, generating our final 28x28 image.
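The same shape check works on the decoder side. In this sketch a random batch of 30-dimensional codings stands in for the encoder output:

```python
import numpy as np
import tensorflow as tf

decoder_rnn = tf.keras.models.Sequential([
    tf.keras.layers.RepeatVector(28, input_shape=[30]),
    tf.keras.layers.LSTM(100, return_sequences=True),
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(28, activation='sigmoid'))
])

# Random codings stand in for the encoder's output.
codings = np.random.rand(32, 30).astype("float32")
images = decoder_rnn(codings)
print(images.shape)  # (32, 28, 28): one reconstructed image per coding
```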

Here is the decoder_rnn summary for more details.

decoder_rnn.summary()

With this we are ready to build our full model and run our test. Here are the compile and fit steps.

AE_RNN = tf.keras.models.Sequential([encoder_rnn, decoder_rnn])
AE_RNN.compile(loss='binary_crossentropy', optimizer='adam')
history = AE_RNN.fit(X_train, X_train, epochs=10, verbose=2,
                     validation_data=(X_valid, X_valid))

Here are our final images

AutoEncoder_RNN generated images in second row

As you can see, we are able to get back decent images, but obviously we are losing some fine detail such as textures & patterns.
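The plotting code that produced the image above isn't shown; a minimal sketch could look like the following (the function name show_reconstructions and the binary colormap are my own choices, not from the original post):

```python
import matplotlib.pyplot as plt

def show_reconstructions(model, images, n_images=5):
    # Run the autoencoder on the first n_images and plot
    # originals (top row) against reconstructions (bottom row).
    reconstructions = model.predict(images[:n_images])
    plt.figure(figsize=(n_images * 1.5, 3))
    for i in range(n_images):
        plt.subplot(2, n_images, 1 + i)
        plt.imshow(images[i], cmap="binary")
        plt.axis("off")
        plt.subplot(2, n_images, 1 + n_images + i)
        plt.imshow(reconstructions[i], cmap="binary")
        plt.axis("off")
```

Calling show_reconstructions(AE_RNN, X_valid) after training produces a figure like the one above.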

Now let's redo the same thing with 50% dropout and slightly different layers.

Encoder Layer with 50% dropout

encoder_rnn2 = tf.keras.models.Sequential([
    tf.keras.layers.Reshape([28, 28], input_shape=[28, 28]),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.LSTM(100, return_sequences=True),
    tf.keras.layers.LSTM(30)
])
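Before re-running the experiment, it helps to see what the Dropout layer actually does. This small illustrative sketch (not part of the original post) shows that during training it zeroes roughly half the inputs and rescales the rest, while at inference time it is a no-op:

```python
import numpy as np
import tensorflow as tf

drop = tf.keras.layers.Dropout(0.5)
x = np.ones((1, 28, 28), dtype="float32")

# training=True: ~50% of values zeroed, survivors scaled by 1/(1-0.5)=2.
out_train = drop(x, training=True)
# training=False: the layer passes the input through unchanged.
out_infer = drop(x, training=False)

zero_frac = float(tf.reduce_mean(
    tf.cast(tf.equal(out_train, 0.0), tf.float32)))
print(round(zero_frac, 2))        # roughly 0.5
print(np.allclose(out_infer, x))  # True
```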

Rest of the model remains the same

AE_RNN2 = tf.keras.models.Sequential([encoder_rnn2, decoder_rnn])
AE_RNN2.compile(loss='binary_crossentropy', optimizer='adam')
history2 = AE_RNN2.fit(X_train, X_train, epochs=10, verbose=2,
                       validation_data=(X_valid, X_valid))

Final images with 50% dropout

AutoEncoder_RNN generated images in second row with 50% dropout

Happy reading !!!

References:
Book: Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow by Aurelien Geron
