Additional resources for the work "A practical tutorial on autoencoders for nonlinear feature fusion: Taxonomy, models, software and guidelines" (DOI: 10.1016/j.inffus.2017.12.007, arXiv: 1801.01586).
This repository contains a series of examples of different types of autoencoders, built on top of Keras.
- `autoencoder.py` defines an `Autoencoder` class which describes the model that will be learned from the data.
- `utils.py` defines additional functionality needed by the sparse, contractive, denoising and robust autoencoders.
- `mnist.py` includes the training process for autoencoders with MNIST data.
- `cancer.py` trains autoencoders with the Wisconsin Breast Cancer Diagnosis data set (the file `wdbc.arff` is needed for the script to run).
- `pca.py` outputs the result of Principal Component Analysis on MNIST for comparison purposes.
Autoencoders are symmetrical neural networks trained to reconstruct their inputs onto their outputs, with some restrictions that force them to find meaningful codifications of data. These examples cover several types of autoencoders:
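The reconstruction idea above can be sketched with a minimal undercomplete autoencoder in Keras. This is an illustrative example, not code from the repository; the layer sizes and hyperparameters are arbitrary choices, and it assumes TensorFlow 2.x with its bundled Keras.

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_dim, encoding_dim = 784, 32  # e.g. flattened 28x28 MNIST digits

inputs = Input(shape=(input_dim,))
encoded = Dense(encoding_dim, activation="sigmoid")(inputs)  # bottleneck (the "restriction")
decoded = Dense(input_dim, activation="sigmoid")(encoded)    # reconstruction of the input

autoencoder = Model(inputs, decoded)         # trained to map inputs onto themselves
encoder = Model(inputs, encoded)             # extracts the learned codification
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Random stand-in data just to show the training call; use real inputs in practice.
x = np.random.rand(64, input_dim).astype("float32")
autoencoder.fit(x, x, epochs=1, batch_size=16, verbose=0)
codes = encoder.predict(x, verbose=0)
print(codes.shape)  # (64, 32): each input is compressed to a 32-dimensional code
```

Because the bottleneck is narrower than the input, the network is forced to learn a compressed encoding rather than the identity function.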
- Basic, undercomplete autoencoder
- Basic autoencoder with weight decay
- Sparse autoencoder*
- Contractive autoencoder*
- Denoising autoencoder
- Robust autoencoder
The features of these autoencoders can be combined (for example, one can build a sparse denoising autoencoder with weight decay). Examples of such combinations can be found in `mnist.py`.
(*) Only for sigmoid and tanh activation functions in the encoding layer.
Other resources used:
- https://blog.keras.io/building-autoencoders-in-keras.html
- https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development
- https://wiseodd.github.io/techblog/2016/12/05/contractive-autoencoder/
- Keras docs
- Scikit-learn docs
- Tensorflow docs
The code in this repository is licensed under MPL v2.0, which allows you to redistribute and use it in your own work.