I have a sparse coding problem where I want to find the sparse codes (not necessarily the dictionary for now) that lie between 0 and 1. More formally I want to minimize

L = ||hW - X||^2 + lambda * ||h||_1   s.t.   0 < h_i < 1 for all i

My neural-networky attempt at this would be to substitute h = sigmoid(h') and solve for h' instead, since the sigmoid maps each component into (0, 1) automatically. However, since my knowledge of L1 optimization is shaky at best, I wonder whether this is the best approach. Also, this has probably been done before.
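A minimal numpy sketch of what this reparametrization would look like (the function names, step size, and iteration count are my own illustration, not from the post; plain gradient descent, nothing tuned):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_codes(X, W, lam=0.1, lr=0.01, n_steps=200, rng=None):
    """Minimize ||hW - X||^2 + lam*||h||_1 with h = sigmoid(h'),
    so each h_i automatically lies in (0, 1).

    X: (n, d) data, W: (k, d) fixed dictionary, returns h: (n, k).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    h_prime = 0.01 * rng.standard_normal((X.shape[0], W.shape[0]))
    for _ in range(n_steps):
        h = sigmoid(h_prime)
        resid = h @ W - X
        # dL/dh: reconstruction term plus L1 term
        # (|h_i| = h_i because the sigmoid output is positive)
        grad_h = 2.0 * resid @ W.T + lam
        # chain rule through the sigmoid: dh/dh' = h * (1 - h)
        h_prime -= lr * grad_h * h * (1.0 - h)
    return sigmoid(h_prime)
```

Note that the sigmoid's vanishing gradient near 0 and 1 means codes can get stuck near the boundary, which is one reason to ask whether a constrained solver is preferable.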

Any suggestions/references?

asked Dec 06 '12 at 03:18


Justin Bayer


One Answer:

One possibility would be to learn an unconstrained sparse coding for your data, apply asinh to the codes, and split the components into positive and negative parts. You could also try solving for h directly and modifying the penalty function to enforce the [0,1] constraint. I am guessing that substituting h = sinh(h') and solving for h' may cause numerical instabilities, because the derivatives of sinh grow exponentially. It would help to know how these codes are meant to be used.
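One concrete way to "solve for h directly" under the box constraint (my own instantiation, not spelled out in the answer) is proximal/projected gradient descent: take a gradient step on the reconstruction term, soft-threshold for the L1 penalty, then clip into [0, 1]. For this particular constraint set, soft-thresholding followed by clipping is exactly the proximal operator of lambda*|h| plus the [0,1] indicator:

```python
import numpy as np

def fit_codes_projected(X, W, lam=0.1, lr=0.05, n_steps=300):
    """Proximal/projected gradient sketch for
    min ||hW - X||^2 + lam*||h||_1  s.t.  0 <= h_i <= 1.

    X: (n, d) data, W: (k, d) fixed dictionary, returns h: (n, k).
    """
    h = np.zeros((X.shape[0], W.shape[0]))
    for _ in range(n_steps):
        # gradient step on the smooth reconstruction term
        grad = 2.0 * (h @ W - X) @ W.T
        h = h - lr * grad
        # soft-thresholding: prox of lr * lam * ||h||_1
        h = np.sign(h) * np.maximum(np.abs(h) - lr * lam, 0.0)
        # projection onto the box [0, 1]
        h = np.clip(h, 0.0, 1.0)
    return h
```

This avoids the vanishing-gradient issue of a sigmoid reparametrization and keeps the exact zeros that the L1 penalty is supposed to produce.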

answered Dec 11 '12 at 13:35


Daniel Mahler

edited Dec 11 '12 at 14:23

