I am trying to apply the method from this paper:

Schraudolph, N. N. (2002). Fast curvature matrix-vector products for second-order gradient descent. Neural Computation, 14(7), 1723–1738,

to a simple model, but I'm not sure my implementation is correct. Apologies for the cross-post, but the math markup is much better on Math.StackExchange. The notation is here. I am trying to understand this technique better and also to check my work.
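To make the "check my work" part concrete, here is the kind of sanity check I have in mind: a central finite-difference approximation of the Hessian-vector product on a toy least-squares model. This is not Schraudolph's exact R-operator, just an independent numerical check; the model and all names here are my own illustration, not from the paper.

```python
import numpy as np

# Toy least-squares model: f(w) = 0.5 * ||X w - y||^2.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
y = rng.standard_normal(20)

def grad(w):
    # Gradient of 0.5 * ||X w - y||^2 is X^T (X w - y).
    return X.T @ (X @ w - y)

def hvp_fd(w, v, eps=1e-5):
    # Central finite-difference approximation of the Hessian-vector
    # product: H v ~ (grad(w + eps v) - grad(w - eps v)) / (2 eps).
    return (grad(w + eps * v) - grad(w - eps * v)) / (2 * eps)

w = rng.standard_normal(5)
v = rng.standard_normal(5)

# For this quadratic model the exact Hessian is X^T X (which here is
# also the Gauss-Newton matrix, since the model is linear in w).
exact = X.T @ X @ v
approx = hvp_fd(w, v)
print(np.allclose(exact, approx, atol=1e-6))  # should print True
```

Any exact implementation of the paper's matrix-vector products should agree with this kind of finite-difference estimate up to discretization error.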

asked Dec 13 '12 at 15:02

Neil

One Answer:

Hi, James Martens has a complete Matlab implementation of Schraudolph's method for all of the Hessian, Gauss-Newton, and Fisher information matrix-vector products. Take a look at his code at: http://www.cs.toronto.edu/~jmartens/docs/HFDemo.zip Cheers, -- Arya
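Before diving into that code, the core trick can be sketched in a few lines. Below is a toy Gauss-Newton matrix-vector product for logistic regression in Python/NumPy (not Martens's Matlab code; all names are illustrative): G v = J^T H_L J v is computed as three cheap passes, without ever forming the d×d matrix G = X^T diag(p(1−p)) X.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 4))  # design matrix (Jacobian of logits wrt w)
w = rng.standard_normal(4)
v = rng.standard_normal(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gauss_newton_vp(w, v):
    p = sigmoid(X @ w)          # model outputs
    Jv = X @ v                  # forward: Jacobian-vector product
    HJv = p * (1 - p) * Jv      # Hessian of cross-entropy loss wrt logits
    return X.T @ HJv            # backward: multiply by J^T

# Check against the explicitly assembled Gauss-Newton matrix.
p = sigmoid(X @ w)
G = X.T @ ((p * (1 - p))[:, None] * X)
print(np.allclose(G @ v, gauss_newton_vp(w, v)))  # should print True
```

The same factored structure is what makes the paper's products O(n) per vector instead of O(n^2).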

answered Dec 13 '12 at 18:52

Arya Iranmehr

Thank you, I will do that!

(Dec 13 '12 at 19:11) Neil

Hmm, the code is not in a great state right now. There are thousands of lines of dead code…

(Dec 13 '12 at 21:05) Neil

Neil, have you had any luck?

(Jan 04 '13 at 13:45) will henry