|
I have a sparse coding problem where I want to find the sparse codes (not necessarily the dictionary, for now) whose entries lie between 0 and 1. More formally, I want to minimize

$$L = \|hW - X\|_2^2 + \lambda \|h\|_1 \quad \text{s.t. } 0 < h_i < 1 \text{ for all } i.$$

My neural-networky attempt at this would be to substitute h = sig(h') and solve for h' instead. However, since my knowledge of L1 optimization is shaky at best, I wonder whether this is the best approach. Also, this has probably been done before. Any suggestions/references?
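Concretely, my reparameterization attempt would look something like this minimal NumPy sketch (function and parameter names are placeholders I made up; since h = sig(h') is always positive, the L1 term reduces to a plain sum and the whole objective stays differentiable in h'):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def solve_reparam(X, W, lam, lr=0.01, n_iter=2000):
    """Gradient descent on h' with h = sigmoid(h'), so 0 < h_i < 1 holds
    by construction. X: (n, d) data, W: (k, d) dictionary, returns (n, k)
    codes. Because h > 0, the L1 penalty |h|_1 is just sum(h)."""
    n, k = X.shape[0], W.shape[0]
    hp = np.zeros((n, k))                      # h' = 0 means h starts at 0.5
    for _ in range(n_iter):
        h = sigmoid(hp)
        grad_h = 2 * (h @ W - X) @ W.T + lam   # dL/dh; |h|_1 -> sum(h) since h > 0
        hp -= lr * grad_h * h * (1 - h)        # chain rule through the sigmoid
    return sigmoid(hp)
```

One thing that already bothers me about this: the sigmoid never outputs an exact 0, so the codes can only ever be approximately sparse.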
|
One possibility would be to learn an unconstrained sparse coding for your data, apply asinh to the codes, and split the components into positive and negative parts. You could also try solving for h directly and modifying the penalty function to enforce the [0,1] constraint. I am guessing that substituting h = sinh(h') and solving for h' may cause numerical instabilities, because the derivatives of sinh grow exponentially. It would help to know how these codes are meant to be used.
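For the first suggestion, the post-processing would be something like the following rough sketch (untested; note that asinh compresses large magnitudes but does not by itself bound the values to [0,1], so a final rescaling step would still be needed):

```python
import numpy as np

def squash_and_split(h):
    """Post-process unconstrained codes: compress with asinh, then split
    each component into nonnegative positive/negative parts, doubling the
    code dimension. s = pos - neg holds component-wise."""
    s = np.arcsinh(h)                          # compress large magnitudes
    pos = np.maximum(s, 0.0)                   # positive part, >= 0
    neg = np.maximum(-s, 0.0)                  # negative part, >= 0
    return np.concatenate([pos, neg], axis=-1)
```

The split trick is the standard way of turning a signed code into a nonnegative one; the price is that you double the number of components.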