I'm having a bit of trouble implementing a Gaussian-rectified RBM (Gaussian visible units, noisy rectified linear hidden units). I would like to show you how I would implement it, and it would be nice if you could point out errors or comment on the implementation. These are the key parts of how I would implement this RBM (in a Matlab-like fashion):

hiddenUnitActivation:
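Roughly this, sketched in NumPy for concreteness (my actual code is Matlab-like; the variable names and the unit-variance assumption for the visible units are my own):

```python
import numpy as np

def hidden_unit_activation(V, W, c):
    """Mean activation of the rectified hidden units.
    V: (n, n_visible) data, W: (n_visible, n_hidden), c: (n_hidden,) hidden biases."""
    X = V @ W + c              # pre-activation x = v*W + c
    return np.maximum(0.0, X)  # rectified mean is approximately max(0, x)
```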
hiddenUnitSample:
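For the sampling, I use the noisy rectified linear formula, max(0, x + N(0, sigmoid(x))), i.e. Gaussian noise whose variance is sigmoid(x) (again a NumPy sketch):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_unit_sample(V, W, c, rng):
    """Noisy rectified linear sample: max(0, x + N(0, sigmoid(x)))."""
    X = V @ W + c
    noise = rng.normal(0.0, np.sqrt(sigmoid(X)))  # std = sqrt(variance)
    return np.maximum(0.0, X + noise)
```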
visibleUnitActivation:
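The visible units are linear/Gaussian, so the activation is just the mean of the Gaussian (I assume sigma = 1, i.e. whitened data):

```python
import numpy as np

def visible_unit_activation(H, W, b):
    """Mean of the Gaussian visible units (unit variance assumed): h*W' + b."""
    return H @ W.T + b
```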
visibleUnitSample:
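Sampling the visibles then means drawing from a unit-variance Gaussian around that mean:

```python
import numpy as np

def visible_unit_sample(H, W, b, rng):
    """Sample each visible unit from N(mean, 1) with mean = h*W' + b."""
    mean = H @ W.T + b
    return rng.normal(mean, 1.0)
```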
CD-1 Learning:
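For CD-1 I sample the hidden units on the positive phase, but use the means for the reconstruction and the negative phase (NumPy sketch, same assumptions as above):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1(V0, W, b, c, rng):
    # Positive phase: noisy rectified samples from the data
    X0 = V0 @ W + c
    H0 = np.maximum(0.0, X0 + rng.normal(0.0, np.sqrt(sigmoid(X0))))
    # Down pass: Gaussian visible means (sigma = 1), no sampling
    V1 = H0 @ W.T + b
    # Negative phase: noiseless rectified means
    H1 = np.maximum(0.0, V1 @ W + c)
    return H0, V1, H1
```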
Weight Updates:
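The updates are the usual CD difference of positive and negative statistics (no momentum or weight decay in this sketch; the learning rate is a placeholder):

```python
import numpy as np

def cd1_updates(V0, H0, V1, H1, lr=0.01):
    """Parameter deltas from the CD-1 statistics."""
    n = V0.shape[0]
    dW = lr * (V0.T @ H0 - V1.T @ H1) / n  # weights
    db = lr * (V0 - V1).mean(axis=0)       # visible biases
    dc = lr * (H0 - H1).mean(axis=0)       # hidden biases
    return dW, db, dc
```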
Most of all, I wonder whether my sampling of the rectified hidden units is correct.
I implemented it that way, and it seems to work well for dimensionality reduction / feature learning, but only when I don't sample at all, i.e., when I use this hiddenUnitSample:
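That is, the noiseless version, which just rectifies the pre-activation (NumPy sketch of what I use):

```python
import numpy as np

def hidden_unit_sample_noiseless(V, W, c):
    # No noise at all: just the rectified mean max(0, x)
    return np.maximum(0.0, V @ W + c)
```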
instead of this:
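i.e. the proper noisy version with the variance-sigmoid(x) noise term (NumPy sketch):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_unit_sample_noisy(V, W, c, rng):
    # Rectified sample with Gaussian noise of variance sigmoid(x)
    X = V @ W + c
    return np.maximum(0.0, X + rng.normal(0.0, np.sqrt(sigmoid(X))))
```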
The noise term in the rectified unit sampling formula seems to be way too big for accurate reconstruction. I also tried adding labels to the topmost RBM to enable a kind of "selective generation". When I clamp the label neurons and run the Gibbs chain in the top RBM, the inference of the missing modality is unstable and blows up to practically +-infinity. This does not happen when the hidden units are bounded by a sigmoid function, as in Gaussian-binary or rectified-binary RBMs. Any ideas how to solve this?