Following "Online Dictionary Learning for Sparse Coding" and using an inference function I found on Stanford's website, I'm trying to learn a dictionary matrix online. Tested both on resized natural images and a revised version of MNIST, I do manage to get something to happen, but the results just don't look anything like real feature detectors. The way I train is by taking an image, let's say 500x500, then going through all 15x15 patches (advancing 1px each iteration), doing the inference and then the dictionary optimization.

The main file:

% slide a window over the image; every patch is one training sample
for i = 1 : height-atomSize
    i   % print the current row as a crude progress indicator
    for j = 1 : width-atomSize
        % extract the atomSize x atomSize patch and flatten it into a column vector
        xt = imgG(i:(i+atomSize-1), j:(j+atomSize-1));
        xt = reshape(xt, pixNum, 1);

        % sparse coding step: infer the code h for patch xt under the current dictionary
        h = TwIST(xt, D, 7);

        % accumulate the sufficient statistics used by the dictionary update
        A = A + h * h';
        B = B + xt * h';

        % block-coordinate dictionary update
        D = dictUpdate(D, A, B);
    end
end
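
For completeness, here is a minimal sketch of the initialisation the snippet above assumes (atomSize, pixNum, imgG, D, A, B are the names used there; the number of atoms, the image file name and the concrete values are only examples):

atomSize = 15;              % patch side length
pixNum   = atomSize^2;      % patch dimension when flattened
nAtoms   = 100;             % number of dictionary atoms (example value)

imgG = im2double(rgb2gray(imread('image.png')));   % example input image
[height, width] = size(imgG);

% random initial dictionary with unit-norm columns
D = randn(pixNum, nAtoms);
D = D ./ repmat(sqrt(sum(D.^2, 1)), pixNum, 1);

% sufficient statistics start at zero
A = zeros(nAtoms, nAtoms);
B = zeros(pixNum, nAtoms);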

The dictUpdate function:

% one sweep of block-coordinate descent over the dictionary columns
for i=1:size(D,2)
   % gradient step on column i using the accumulated statistics A and B
   D(:,i) = (1/A(i,i)) * (B(:,i) - D * A(:,i)) + D(:,i);
   % project the column back onto the unit L2 ball
   D(:,i) = D(:,i) / max(norm(D(:,i),2), 1);
end
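
For reference, the column update these two steps are meant to implement is, as far as I can tell, the one from the paper's dictionary-update algorithm:

$$u_j \leftarrow \frac{1}{A_{jj}}\bigl(b_j - D a_j\bigr) + d_j, \qquad d_j \leftarrow \frac{u_j}{\max\bigl(\lVert u_j \rVert_2,\ 1\bigr)}$$

where $a_j$ and $b_j$ are the $j$-th columns of $A$ and $B$, and $d_j$ is the $j$-th dictionary atom.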

Another thing, since I'm already asking here: I'd like to do the inference myself and have found a vectorised version for it, but every time I test it, it simply explodes:

% gradient step on the squared reconstruction error, then soft-thresholding
h = h - alpha * D' * (D * h - xt);
h = shrink(h, alpha*lambda);

And the shrink function:

% element-wise soft-thresholding of the coefficients
for i=1:size(h,1)
    h(i) = sign(h(i)) * max(abs(h(i)) - alphaLambda, 0);
end
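
For completeness, here is a sketch of the full loop I wrap around those two lines; the number of iterations, lambda and the starting point are arbitrary example values, and alpha is shown set to the standard ISTA bound of one over the largest eigenvalue of D'*D (I'm not sure that's what I actually used):

% sketch of the full ISTA-style inference loop (example values only)
alpha  = 1 / norm(D)^2;     % norm(D) is the spectral norm, so this is 1/L with L the largest eigenvalue of D'*D
lambda = 0.1;               % sparsity weight (example value)
h = zeros(size(D,2), 1);    % start from the all-zero code
for it = 1:200
    h = h - alpha * D' * (D * h - xt);   % gradient step on the reconstruction term
    h = shrink(h, alpha*lambda);         % proximal (soft-thresholding) step
end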

Thanks for the answers.

asked Mar 20 '14 at 12:12 by Ido Freeman
