Autoencoders vs. Sparse Coding

Sparse Coding

Sparse coding minimizes the objective

    L_sc = ||WH - X||_2^2 + λ ||H||_1

where W is a matrix of bases, H is a matrix of codes and X is a matrix of the data we wish to represent. λ implements a trade-off between sparsity and reconstruction. Note that if we are given H, estimation of W is easy via least squares.
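As a sketch of that last point, here is the objective and the closed-form least-squares step for W given H in NumPy. The matrix shapes and the toy data are assumptions for illustration; only the objective L_sc itself comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (shapes are illustrative): data X (d x n),
# bases W (d x k), codes H (k x n).
d, k, n = 8, 16, 100
X = rng.normal(size=(d, n))
W = rng.normal(size=(d, k))
H = rng.normal(size=(k, n))
lam = 0.1  # trade-off between sparsity and reconstruction

def sparse_coding_objective(W, H, X, lam):
    # L_sc = ||WH - X||_2^2 + lambda * ||H||_1
    return np.sum((W @ H - X) ** 2) + lam * np.abs(H).sum()

# With H fixed, minimizing ||WH - X||^2 over W is ordinary
# least squares: solve H^T W^T = X^T for W^T.
W_ls_T, *_ = np.linalg.lstsq(H.T, X.T, rcond=None)
W_ls = W_ls_T.T

before = sparse_coding_objective(W, H, X, lam)
after = sparse_coding_objective(W_ls, H, X, lam)
```

The L1 term does not involve W, so the least-squares W can only lower the objective; the hard part of sparse coding is the sparse inference of H.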


Autoencoders

Autoencoders are a family of unsupervised neural networks. There are quite a lot of them, e.g. deep autoencoders or those having different regularisation tricks attached--e.g. denoising, contractive, sparse. There even exist probabilistic ones, such as generative stochastic networks or the variational autoencoder. Their most abstract form is

    D(E(x))

but we will go along with a much simpler one for now:

    L_ae = ||W σ(W^T X) - X||^2

where σ is a nonlinear function such as the logistic sigmoid σ(t) = 1 / (1 + exp(-t)).
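This simple tied-weight autoencoder can be sketched in a few lines of NumPy. The shapes, initialization, learning rate, and the hand-derived gradient are assumptions made for this sketch; the loss matches L_ae above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    # Logistic sigmoid: sigma(t) = 1 / (1 + exp(-t))
    return 1.0 / (1.0 + np.exp(-t))

# Toy data X (d x n) and a single tied weight matrix W (d x k),
# matching L_ae = ||W sigma(W^T X) - X||^2.
d, k, n = 8, 4, 100
X = rng.normal(size=(d, n))
W = 0.1 * rng.normal(size=(d, k))

def autoencoder_loss(W, X):
    H = sigmoid(W.T @ X)   # encode: codes H (k x n)
    X_hat = W @ H          # decode: reconstruction (d x n)
    return np.sum((X_hat - X) ** 2)

before_loss = autoencoder_loss(W, X)

# A few steps of plain gradient descent with the gradient of L_ae
# written out by hand (chain rule through the tied weights).
lr = 1e-3
for _ in range(200):
    H = sigmoid(W.T @ X)
    X_hat = W @ H
    R = X_hat - X                                   # residual
    G = H * (1 - H) * (W.T @ R)                     # backprop through sigma
    W -= lr * 2 * (R @ H.T + X @ G.T)

after_loss = autoencoder_loss(W, X)
```

Note the contrast with sparse coding: here the code H is produced by a fixed parametric encoder σ(WᵀX) rather than by solving an optimization problem per data point.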
