What is the difference between autoencoders and restricted Boltzmann machines?
An autoencoder is a simple 3-layer neural network (input, hidden, output) in which the output units are connected directly back to the input units: output[i] reconstructs input[i] through the hidden layer for every i. Usually, the number of hidden units is much smaller than the number of visible (input/output) ones. As a result, when you pass data through such a network, it first encodes the input vector into a smaller representation and then decodes it back. The goal of training is to minimize the reconstruction error, i.e. to find the most efficient compact representation of the input data.
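Below is a minimal sketch of such an autoencoder, assuming PyTorch; the layer sizes (784 visible, 64 hidden), the activations, and the training-loop details are illustrative assumptions rather than anything specified above.

```python
# Minimal autoencoder sketch (assumed PyTorch; sizes are illustrative).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_visible=784, n_hidden=64):
        super().__init__()
        # Encode: compress the input into a smaller hidden representation.
        self.encoder = nn.Sequential(nn.Linear(n_visible, n_hidden), nn.Sigmoid())
        # Decode: reconstruct the input from the hidden code.
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_visible), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()           # training minimizes reconstruction error

x = torch.rand(64, 784)          # dummy batch standing in for real data
loss = loss_fn(model(x), x)      # the target is the input itself
optimizer.zero_grad()
loss.backward()
optimizer.step()
```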
AEs (autoencoders) are similar to PCA and can be used for dimensionality reduction. The features encoded by the hidden layer can then serve as input features for downstream classification or for another AE (stacked autoencoders).
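Continuing the hypothetical sketch above, the trained encoder alone yields the compressed features:

```python
# Reuse the trained encoder: its 64-dim output is the compressed feature
# vector that a downstream classifier (or the next stacked AE) consumes.
with torch.no_grad():
    features = model.encoder(x)   # shape (batch, n_hidden) = (64, 64)
```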
An RBM is a generative artificial neural network that can learn a probability distribution over a set of inputs. RBMs are a variant of Boltzmann machines, with the restriction that their neurons must form a bipartite graph: there are no connections between nodes within the visible layer or within the hidden layer. An RBM has only a visible (input) layer and a hidden layer. Instead of deterministic units (e.g. logistic or ReLU), it uses stochastic units with a particular (usually binary or Gaussian) distribution. The learning procedure consists of several steps of Gibbs sampling (propagate: sample hiddens given visibles; reconstruct: sample visibles given hiddens; repeat) and adjusting the weights to minimize reconstruction error.
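As a sketch of this propagate/reconstruct cycle, here is one CD-1 (single-step contrastive divergence) weight update for a binary RBM in NumPy; the sizes and learning rate are illustrative assumptions, and bias updates are omitted for brevity.

```python
# One CD-1 weight update for a binary RBM (NumPy; sizes are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 784, 64, 0.01
W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))  # weight matrix
b_h = np.zeros(n_hidden)    # hidden biases
b_v = np.zeros(n_visible)   # visible biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0):
    # Propagate: sample hidden units given the visible units.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    # Reconstruct: sample visible units given the hidden units.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(n_visible) < p_v1).astype(float)
    # Propagate once more for the negative-phase statistics.
    p_h1 = sigmoid(v1 @ W + b_h)
    # CD-1 gradient: positive-phase minus negative-phase correlations.
    return lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))

v0 = (rng.random(n_visible) < 0.5).astype(float)  # dummy binary input vector
W += cd1_step(v0)                                 # one weight update
```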
Training an RBM involves two steps: first, initialization of the weights using alternating Gibbs sampling over both layers, as above; second, backpropagation can still be used later to fine-tune the model.
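A hypothetical continuation stitching the two sketches together: the Gibbs-pretrained RBM weights W initialize the autoencoder from the first sketch, and backprop fine-tunes from there (this assumes the layer sizes of the two sketches match, as they do above).

```python
# Step 2 (hypothetical continuation of the sketches above): initialize the
# autoencoder's layers from the RBM weights W, then fine-tune with backprop.
with torch.no_grad():
    model.encoder[0].weight.copy_(torch.tensor(W.T, dtype=torch.float32))
    model.decoder[0].weight.copy_(torch.tensor(W, dtype=torch.float32))
# ...then rerun the reconstruction-loss training loop from the first sketch.
```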