I have read in a huge number of papers that sparse models (sparse coding, dictionary learning, sparse matrix factorization, ...) are good solutions for image denoising problems.

I know that representing data as sparse combinations of atoms from an (overcomplete) dictionary is thought to be the way the mammalian primary visual cortex works. However, it is not clear to me why sparse representations should eliminate noise, blur, or other similar artifacts. Is there a solid mathematical explanation for this?

asked Jan 05 '13 at 18:13

_what_


One Answer:

The signal has large coefficients at only a few points (i.e., it has a sparse representation), whereas the noise does not. Thresholding the coefficients therefore removes the noise while leaving the signal largely intact.
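To make that concrete: if $c_i$ are the coefficients of the noisy signal in the chosen basis or dictionary and $\lambda$ is a threshold set just above the typical size of a pure-noise coefficient, the hard-thresholding estimate is (one standard formulation, as a sketch of the idea):

$$\hat{c}_i = \begin{cases} c_i, & |c_i| > \lambda, \\ 0, & |c_i| \le \lambda. \end{cases}$$

The few large $c_i$ (signal) survive, and the many small ones (mostly noise) are discarded.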

For example, suppose the signal s(t) is a sum of a couple of sine waves, so it has a sparse representation in the Fourier basis: only a few coefficients are large.

If the noise n(t) is, say, Gaussian white noise, its energy spreads over many Fourier coefficients, each of which is small, and thresholding then zeroes them out.

In other words, the noise "model" should not have a sparse representation in the basis (or dictionary) in which the signal does.
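Here is a minimal numerical sketch of that example in Python/NumPy; the sine frequencies, noise level, and threshold are illustrative assumptions, not anything from the answer above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean signal: a sum of two sine waves, so its Fourier representation is
# sparse -- only a few frequency bins carry large coefficients.
n = 1024
t = np.arange(n) / n
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# Additive white Gaussian noise spreads its energy roughly evenly over all
# frequency bins, so each of its coefficients is small.
sigma = 0.5
noisy = signal + sigma * rng.normal(size=n)

# Hard-threshold the Fourier coefficients: keep the few large ones (signal),
# zero out the many small ones (mostly noise).
coeffs = np.fft.rfft(noisy)
noise_scale = sigma * np.sqrt(n / 2)          # typical size of a pure-noise coefficient
coeffs[np.abs(coeffs) < 4 * noise_scale] = 0  # threshold is an arbitrary illustrative choice
denoised = np.fft.irfft(coeffs, n)

print("MSE noisy   :", np.mean((noisy - signal) ** 2))
print("MSE denoised:", np.mean((denoised - signal) ** 2))
```

Choosing the threshold is the delicate part; here it is simply a few times the expected magnitude of a pure-noise Fourier coefficient, so the two signal bins survive while essentially all noise bins are zeroed.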

answered Jan 05 '13 at 19:12

SeanV
