|
I've trained a deep network of RBMs on some data and I want to see which features the network is sensitive to. For the first layer I can just look at the weights, divided by their norm. But how can I do this for the deeper layers? I saw some papers mention numerical optimization for this (e.g. http://ai.stanford.edu/~quocle/faces_full.pdf), but I couldn't find any good reference on it. Thanks.
|
What is meant by numerical optimization is that the activation of a neuron is a nonlinear function of the input, so you can optimize over the input by computing the gradient of the activation with respect to it; this turns out to be the same gradient computation as in backpropagation, but with the roles of the data and the activation reversed. Can I optimize the whole network at once to get all the features, or should I do it unit by unit?
(Jan 24 '13 at 11:52)
rm9
As pointed out, to visualize the response of a particular neuron, one can "activate" it and perform gradient ascent on the input to "generate" the pattern it responds to most strongly. If by optimizing the whole network you mean activating all the neurons in the higher layer at once, the result would be a single input that activates all of them, which is not what you want. One way to get all the visualizations might be a "batch" optimization, one unit per input, but I am not aware of any work related to this.
(Jan 24 '13 at 20:14)
Rakesh Chalasani
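
A minimal sketch of this activation-maximization idea, assuming a toy two-layer sigmoid network with made-up random weights (the network, the unit-norm constraint, and the numerical-gradient shortcut are all illustrative; a real implementation would backpropagate through the trained RBM stack instead):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy weights standing in for a trained network: 16 inputs -> 8 hidden -> 4 deep units.
W1 = rng.normal(scale=0.5, size=(16, 8))
W2 = rng.normal(scale=0.5, size=(8, 4))

def deep_activation(x, unit):
    """Activation of one deep-layer unit as a nonlinear function of the input x."""
    h = sigmoid(x @ W1)
    return float(sigmoid(h @ W2)[unit])

def maximize_activation(unit, steps=200, lr=0.5, eps=1e-4):
    """Gradient ascent on the input, keeping the input norm fixed to 1
    so the activation cannot grow just by scaling x up."""
    x = rng.normal(size=16)
    x /= np.linalg.norm(x)
    start = deep_activation(x, unit)
    for _ in range(steps):
        # Central-difference numerical gradient, for illustration only;
        # in practice one reuses the network's backprop machinery.
        grad = np.zeros_like(x)
        for i in range(len(x)):
            d = np.zeros_like(x)
            d[i] = eps
            grad[i] = (deep_activation(x + d, unit)
                       - deep_activation(x - d, unit)) / (2 * eps)
        x += lr * grad
        x /= np.linalg.norm(x)  # project back onto the unit sphere
    return x, start

# "Visualize" deep unit 0: the optimized x is the input pattern it is most sensitive to.
x_star, start_act = maximize_activation(unit=0)
print(x_star)
```

Running this per unit, one unit at a time, gives one visualization per deep neuron, which matches the unit-by-unit approach discussed above; optimizing all units jointly would instead yield a single compromise input.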
|