|
I have read in a huge number of papers that sparse models (sparse coding, dictionary learning, sparse matrix factorization, ...) are good solutions for image denoising problems. I know that representing data as sparse combinations of atoms from an (overcomplete) dictionary is thought to be how the mammalian primary visual cortex works. However, it is not clear to me why sparse representations should eliminate noise, blur, or other similar artifacts. Is there a valid mathematical explanation for this?
|
The key point is that the signal has large coefficients at only a few positions (i.e., it has a sparse representation), whereas the noise does not, so thresholding the coefficients removes the noise while leaving the signal largely intact. For example, suppose the signal s(t) is a sum of a couple of sine waves, so it has a sparse representation in the Fourier basis. If the noise n(t) is Gaussian white noise, its energy spreads roughly evenly over all Fourier coefficients, producing lots of small coefficients that thresholding then removes. In other words, denoising works because the noise "model" does not have a sparse representation in the chosen dictionary.
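For intuition, here is a minimal NumPy sketch of that Fourier-thresholding example: two sine waves plus Gaussian noise, denoised by zeroing small DFT coefficients. The noise level and the 3-sigma threshold are assumptions chosen for illustration, not anything from the original post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signal: sparse in the Fourier basis (only two active frequencies).
n = 1024
t = np.arange(n) / n
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 23 * t)

# Additive white Gaussian noise (sigma = 0.5, an arbitrary choice for the demo).
sigma = 0.5
noisy = signal + sigma * rng.standard_normal(n)

# DFT: the clean signal concentrates its energy in a handful of bins,
# while white noise spreads roughly evenly across all of them.
coeffs = np.fft.fft(noisy)

# Hard threshold: with an unnormalized FFT, a noise-only bin has magnitude
# on the order of sigma * sqrt(n/2), so keep only bins well above that level.
threshold = 3 * sigma * np.sqrt(n / 2)
coeffs[np.abs(coeffs) < threshold] = 0

denoised = np.real(np.fft.ifft(coeffs))

print("noisy    RMSE:", np.sqrt(np.mean((noisy - signal) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((denoised - signal) ** 2)))
```

Running this, the denoised RMSE drops well below the noisy RMSE, because the two signal coefficients (magnitudes around n/2 and n/4) sit far above the threshold while almost all noise coefficients fall below it.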