I'm trying to track down a paper I came across a while ago that did something akin to lateral inhibition in an autoencoder scheme. Specifically, the model looked something like:

 L = sigmoid(W1'x + b1)
 H = sigmoid(W2'L + b2)
 O = sigmoid(W3'H + b3)

The idea was that the lateral connections essentially transformed the input before it was fed forward into the rest of the network. If I remember correctly, W1 and W2 were learned simultaneously (which saved them from having to alternate between training W1 and then W2).
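In case it jogs anyone's memory, here's a minimal numpy sketch of the forward pass as I remember it (the shapes, the same-sized lateral layer, the initialization, and all variable names are my guesses, not from the paper):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, W1, b1, W2, b2, W3, b3):
        # Lateral layer: transforms the input before the hidden layer sees it.
        L = sigmoid(W1.T @ x + b1)
        # Hidden code.
        H = sigmoid(W2.T @ L + b2)
        # Output / reconstruction.
        O = sigmoid(W3.T @ H + b3)
        return L, H, O

    # Toy example: d-dimensional input, same-sized lateral layer, k hidden units.
    rng = np.random.default_rng(0)
    d, k = 10, 4
    x = rng.random(d)
    W1, b1 = 0.1 * rng.standard_normal((d, d)), np.zeros(d)
    W2, b2 = 0.1 * rng.standard_normal((d, k)), np.zeros(k)
    W3, b3 = 0.1 * rng.standard_normal((k, d)), np.zeros(d)
    L, H, O = forward(x, W1, b1, W2, b2, W3, b3)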

However, I've lost the paper and can't find any references to it via Google. Has anyone seen a paper like this, or, more generally, any work that looks at the effects of lateral connections driven by a separate set of weights?

Thanks!

asked Jul 22 '11 at 14:54

nop
