
Suppose I have an array X with N dimensions, and after a linear transformation I get an array X_hat which is S-sparse, where S << N.

Is this condition sufficient to recover the signal through random (enough) sampling? What mathematical relations should hold between S and N to ensure sparse recovery?

How randomly should I sample the transformed signal X_hat to ensure a good recovery?


asked Jan 20 '12 at 13:40


Jaidev Deshpande


3 Answers:

Like @Gael Varoquaux said, it's a zoo! You might be interested in the different conditions listed in the Big Picture in Compressive Sensing ( https://sites.google.com/site/igorcarron2/cs#measurement ).

Many people refer to the RIP in their papers but that condition is overly restrictive.

What I generally advise people to do is take the sensing matrix of interest, run several tests with several sparsity levels and numbers of measurements, and see if the results fit the Donoho-Tanner phase transition ( https://sites.google.com/site/igorcarron2/donohotannerphasetransition which also leads to the links given by @Alejandro ). If they do, it is a good sign that both your sensing matrix and the solver you are using will do the job. It is an empirical test, but in terms of the state of the art, it really trumps most sufficient conditions given in the literature as we speak.
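The empirical test described above can be sketched in a few lines of Python. This is a minimal sketch, not the Donoho-Tanner code itself: the Gaussian ensemble, the LP-based basis-pursuit solver, and the helper names `basis_pursuit` / `recovery_rate` are all assumptions for illustration.

```python
# Empirical recovery test: for a given sensing-matrix ensemble, sweep the
# sparsity S and number of measurements m, solve basis pursuit
# (min ||x||_1 s.t. Ax = y), and record how often recovery succeeds.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. Ax = y via the standard LP reformulation
    x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                 # minimize sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

def recovery_rate(n=50, m=25, S=5, trials=20, tol=1e-4, rng=None):
    """Fraction of random trials in which an S-sparse signal in R^n is
    recovered from m Gaussian measurements."""
    rng = np.random.default_rng(rng)
    successes = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian ensemble
        x = np.zeros(n)
        support = rng.choice(n, S, replace=False)
        x[support] = rng.standard_normal(S)
        x_hat = basis_pursuit(A, A @ x)
        successes += np.linalg.norm(x_hat - x) < tol * np.linalg.norm(x)
    return successes / trials
```

Sweeping S (or m) and plotting `recovery_rate` then traces out the empirical phase transition for this particular matrix/solver pair.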

Then you may want to see how it behaves when you add noise.

Please note that recent work shows that the O(S log(N/S)) bound can be decreased tremendously, down to m = S + 1 (no O notation). An explanation and the paper are given here: http://nuit-blanche.blogspot.com/2011/09/short-summary.html


answered Jan 22 '12 at 07:17

Igor Carron

Thanks for your answer.

If RIP is overly restrictive, then does it suffice to ascertain the Null Space Property for most practical signals?

Thanks!

(Jan 23 '12 at 08:51) Jaidev Deshpande

If I recall correctly, the null space property is necessary and sufficient for l_1 recovery. So it should be less restrictive, but harder to show, than the RIP.

(Jan 24 '12 at 03:55) Igor Carron

I think that the NSP is sufficient, but not necessary. AFAIK, there is no known necessary and sufficient property. The Wainwright paper gives bounds for success and failure (in high probability), which is not the same thing, but is as close as possible.

Also, I believe it can be shown that the NSP includes the RIP, but not the converse. I cannot find the paper in the zoo though :)

(Jan 24 '12 at 17:31) Gael Varoquaux

Gael, the condition is necessary and sufficient. However, the actual conditions derived from it have mostly been just sufficient, hence your impression that it might be only sufficient. No worry though, it is also NP-hard to check these conditions. Eventually, you might be interested in the following discussion with Remi Gribonval and Jared Tanner: http://nuit-blanche.blogspot.com/2009/10/cs-rip-vs-nsp-clarifications-by-jared.html

(Jan 25 '12 at 03:15) Igor Carron

Besides the NSP, another necessary and sufficient condition is the KGG condition (http://www.caam.rice.edu/%7Eyzhang/reports/siag-opt19-1.pdf) by Wotao Yin and Yin Zhang. Again, it's NP-hard to check.

(Jan 25 '12 at 03:22) Igor Carron

Another discussion of interest w.r.t. necessary and sufficient conditions: http://nuit-blanche.blogspot.com/2008/09/cs-checkable-necessary-and-sufficient.html

(Jan 26 '12 at 05:53) Igor Carron

There is a zoo of conditions, actually fairly complex, that are often related (but not simply) and that depend on the estimator you use for the sparse recovery.

To try and give a hand-waving summary, here is what I understand:

  1. Your sensing operator should not be too correlated, i.e. it should be well-conditioned, or close to an isometry, on the signal subspace. This is the part of the so-called Restricted Isometry Property (RIP) concerning the restriction to the signal. It is also known as incoherence of the sensing operator on the signal subspace. In related terms, your signal should not lie in the null space of your sensing operator. This is part of the Null Space Property, which is broader, and which, when stated fully, can actually be shown to include the RIP.

  2. The noise and the signal subspace should project to "sufficiently" separable measurements.

  3. The signal that you are trying to detect should be "sufficiently non-zero", i.e. more than the level of noise times a factor depending on the amount of data you have.

  4. Your signal should be "sufficiently" sparse, where "sufficiently" depends on the amount of data you have, and the general scaling rule is n = O(S log(p)) where n is the number of samples, S is the number of non-zero coefficients, and p the dimensionality of the original signal space.
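Condition 4's scaling rule can be made concrete with a back-of-the-envelope sketch. Note that the constant hidden in the O(.) is not specified, so `c = 1` below is an assumption purely for illustration, as is the helper name `samples_needed`.

```python
import math

def samples_needed(S, p, c=1.0):
    """Rough n ~ c * S * log(p) sample count for an S-sparse signal in R^p."""
    return math.ceil(c * S * math.log(p))

# e.g. for a 10-sparse signal in dimension p = 10_000:
# samples_needed(10, 10_000) -> 93, far below p itself.
```

The point of the scaling is exactly this gap: the number of samples grows linearly in the sparsity S but only logarithmically in the ambient dimension p.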

If you are only trying to design the sensing matrix, you only care about 1 and 2. To give an example, if your signal is sparse in a given basis, say in time, condition 1 tells you that you should not sub-sample in time, unless you are sure not to leave out the time points that carry information. You can instead take another representation of your signal in which it is not sparse, and sub-sample in that representation. For instance, you can take the FFT, which has the advantage of being a rotation, and thus satisfies condition 1 well. Condition 2 tells you that this will work only if the signal and noise FFTs are not too similar.
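The time-versus-frequency example above can be illustrated with a tiny numerical sketch (the signal, spike positions, and sample counts are hypothetical numbers chosen for illustration):

```python
import numpy as np

# A 3-sparse signal in time.
n = 256
x = np.zeros(n)
x[[10, 97, 200]] = [1.0, -2.0, 1.5]

# Its FFT is dense: each frequency coefficient is a weighted sum of all
# three spikes, so *any* random subset of frequencies carries information
# about every spike.
X = np.fft.fft(x)

# By contrast, a random subset of time samples can miss the support
# entirely: 16 of 256 time samples miss all 3 spikes with probability
# roughly (1 - 3/256)**16, i.e. about 0.83.
rng = np.random.default_rng(0)
time_samples = x[rng.choice(n, 16, replace=False)]
freq_measurements = X[rng.choice(n, 16, replace=False)]
```

This is exactly condition 1 at work: the DFT spreads a time-sparse signal's energy across all frequencies, so frequency sub-sampling is incoherent with the sparsity basis, while time sub-sampling is not.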

For the theory behind l1-based sparse recovery, I like the paper by Wainwright which is very technical, but gives conditions for failure and not only for success.

To make the link between the hand-waving conditions that I stated above and the paper: we are interested in theorem 3 (achievability conditions); condition 1 corresponds to (26b), condition 2 to (26b), condition 3 to point (ii) in the theorem, and finally condition 4 to formulas (31) and (32).

answered Jan 21 '12 at 09:05

Gael Varoquaux

edited Jan 24 '12 at 17:32

Thanks Gael,

I'll read the Wainwright paper tonight. Meanwhile, it seems to me that we can build a null hypothesis with a noisy signal under conditions 1, 2 and 3. Does that make sense? If so, any leads on how to go about it (the theory of null hypotheses and the coding)? Perhaps this might also help us get some bounds on the sparsity of the signal.

Thanks!

(Jan 23 '12 at 08:55) Jaidev Deshpande

I don't know what a null hypothesis means in this context. It may just be my lack of culture in signal processing.

(Jan 24 '12 at 17:33) Gael Varoquaux

If your matrix has random i.i.d. Gaussian entries, there is a result that tells you that you need O(S log(N/S)) measurements. But this is not very practical, since it is a big-O result. A rule of thumb is that you need about 4*S measurements.

Also look at the work of Donoho and Tanner. They have a website where you can estimate how much undersampling you can afford (in the same website there is a link to the paper where they explain how it works).


answered Jan 20 '12 at 15:16

Alejandro

edited Jan 20 '12 at 17:17

Thanks Alejandro,

The website is awesome! Maybe I can look at the code and extend it.

(Jan 23 '12 at 08:59) Jaidev Deshpande