From Wiki Course Notes
Faithfully and efficiently reconstructing a signal from coded measurements is an age-old scientific problem. The Shannon/Nyquist sampling theorem states that to reconstruct a signal from its samples without incurring artifacts such as aliasing, one must sample at least twice as fast as the maximum signal bandwidth. However, in many applications, such as digital image and video cameras, the Nyquist rate results in an excessive number of samples, making compression a necessity before the samples can be stored or transmitted. In other applications, such as medical scanners and radar, increasing the sampling rate is usually too costly. This strict, and often costly, Nyquist criterion motivates a need for alternative methods. Compressive sensing, which is outlined in this paper summary, is one such approach: it avoids the need for a high sampling rate when capturing signals.
Compressive sensing is a method of capturing and representing compressible signals at a rate significantly below the Nyquist rate. It employs nonadaptive linear projections that preserve the structure of the signal, which is then reconstructed from these projections using optimization techniques.
In this summary, we will learn what a compressible signal is, how compressive measurements of a signal are acquired, and how the signal is reconstructed from those measurements.
Let us define a discrete-time signal $x$ with the following properties:
- Real-valued
- Finite length
- One-dimensional
Then $x$ is a column vector in $\mathbb{R}^N$ with elements $x_n$, $n = 1, 2, \ldots, N$. Note that an image or any other higher-dimensional data point is pre-processed by first representing it as a long one-dimensional vector. To represent a signal in $\mathbb{R}^N$, we define a basis of $N \times 1$ vectors $\{\psi_i\}_{i=1}^{N}$. To keep it simple, assume that the basis is orthonormal. Using the $N \times N$ basis matrix $\Psi = [\psi_1 \, \psi_2 \, \cdots \, \psi_N]$, the signal $x$ can be expressed as
$$x = \sum_{i=1}^{N} s_i \psi_i = \Psi s$$
where $s$ is the $N \times 1$ column vector of weighting coefficients $s_i = \langle x, \psi_i \rangle = \psi_i^T x$.
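The equivalence of the two representations can be checked numerically. The following is a minimal sketch (not code from the paper), assuming a randomly generated orthonormal basis built via a QR decomposition rather than any particular transform:

```python
import numpy as np

# Sketch: build an orthonormal basis Psi, synthesize a K-sparse
# coefficient vector s, and verify that x = Psi s and s = Psi^T x
# are equivalent representations of the same signal.
rng = np.random.default_rng(0)
N, K = 32, 3

# Random orthonormal basis via QR (so Psi.T @ Psi = I).
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))

# K-sparse coefficient vector: only K of the N weights are nonzero.
s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

x = Psi @ s          # signal in the time/space domain
s_back = Psi.T @ x   # the same signal in the Psi domain

print(np.allclose(s, s_back))   # True: the two representations agree
```

Because the basis is orthonormal, moving between the two domains is just a matrix transpose; no information is gained or lost.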
It is easy to see that $x$ and $s$ are equivalent representations of the given signal that is to be captured, with $x$ in the time or space domain (depending on the nature of the signal) and $s$ in the $\Psi$ domain. The signal $x$ is $K$-sparse if it is a linear combination of only $K$ basis vectors, i.e. only $K$ of the coefficients $s_i$ are nonzero. The signal is compressible if the representation above has just a few large coefficients and many small coefficients. We shall now briefly review how transform coding of signals using a sample-then-compress framework is done in data acquisition systems such as digital cameras (for which transform coding plays a central role). The procedure can be described in the following steps:
- Step 1: The full $N$-sample signal $x$ is acquired.
- Step 2: The complete set of transform coefficients is computed via $s = \Psi^T x$.
- Step 3: The $K$ largest coefficients are located.
- Step 4: The $N - K$ smallest coefficients are discarded.
- Step 5: The values and locations of the $K$ largest coefficients are encoded.
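The five steps above can be sketched in code. This is an illustration, not the paper's implementation; the orthonormal basis is random, and the signal's coefficients are assumed to follow a power-law decay so that it is compressible:

```python
import numpy as np

# Sketch of sample-then-compress transform coding on a compressible signal.
rng = np.random.default_rng(1)
N, K = 64, 5

Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))      # orthonormal basis
decay = (np.arange(1, N + 1) ** -2.0) * rng.choice([-1, 1], N)
x = Psi @ decay                                         # Step 1: acquire all N samples

s = Psi.T @ x                                           # Step 2: all N coefficients
idx = np.argsort(np.abs(s))[-K:]                        # Step 3: locations of K largest
# Step 4: the N - K smallest coefficients are simply never stored.
encoded = {int(i): float(s[i]) for i in idx}            # Step 5: values + locations

x_hat = Psi[:, idx] @ s[idx]                            # decoder rebuilds from K values
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(rel_err < 0.1)   # True: K of N coefficients capture most of the energy
```

The inefficiency is visible in the code: all $N$ samples are acquired and all $N$ coefficients computed, only for most of them to be thrown away.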
Note that this sample-then-compress framework has the following inherent inefficiencies:
- The initial number of samples $N$ may be very large even if the desired $K$ is small.
- The set of all $N$ transform coefficients must be computed even though all but $K$ of them will be discarded.
- The locations of the $K$ large coefficients must be encoded, introducing an overhead to the algorithm.
Problem Statement for Compressive Sensing
Compressive sensing directly acquires the compressed signal without going through the intermediate stage of acquiring $N$ samples. To do this, a general linear measurement process is defined: $M < N$ inner products between $x$ and a collection of vectors $\{\phi_j\}_{j=1}^{M}$ are computed, giving $y_j = \langle x, \phi_j \rangle$.
Stacking the measurements $y_j$ into an $M \times 1$ vector $y$ and the measurement vectors $\phi_j^T$ as rows of an $M \times N$ matrix $\Phi$, and substituting $x = \Psi s$, we get
$$y = \Phi x = \Phi \Psi s = \Theta s$$
where $\Theta = \Phi \Psi$ is an $M \times N$ matrix. This measurement process is nonadaptive, which means that $\Phi$ is fixed and does not depend on the signal $x$. So, to design a viable compressive sensing method, the following two conditions must be met:
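The measurement model can be verified numerically. The following is a small sketch with illustrative dimensions; the basis is a random orthonormal matrix, and $\Phi$ is Gaussian:

```python
import numpy as np

# Sketch of the measurement model y = Phi x = Phi Psi s = Theta s.
rng = np.random.default_rng(2)
N, M, K = 64, 16, 3

Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))   # orthonormal basis
Phi = rng.standard_normal((M, N)) / np.sqrt(N)       # M x N measurement matrix

s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x = Psi @ s                                          # signal, K-sparse in Psi

y = Phi @ x               # M < N nonadaptive linear measurements
Theta = Phi @ Psi         # M x N combined matrix
print(np.allclose(y, Theta @ s))   # True: y = Theta s
```

Note that only $M$ numbers are ever recorded, in contrast to the $N$ samples of the transform coder.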
- A stable measurement matrix $\Phi$ must be constructed that preserves the information in the signal while reducing the dimension from $N$ to $M$.
- A reconstruction algorithm must be designed that recovers $x$ from the $M \approx K$ measurements $y$ (roughly the number of coefficients recorded by the traditional transform coder mentioned earlier).
Part 1 of the Solution: Constructing a Stable Measurement Matrix
Given the $M$ available measurements $y = \Phi x$, where $M < N$, we must be able to reconstruct the signal $x$ with a high level of accuracy. If $x$ is $K$-sparse and the locations of the nonzero coefficients in $s$ were known, then the problem can be solved provided $M \geq K$. A necessary and sufficient condition for this problem to be well conditioned is
$$1 - \epsilon \leq \frac{\lVert \Theta v \rVert_2}{\lVert v \rVert_2} \leq 1 + \epsilon$$
where $v$ is any vector sharing the same $K$ nonzero entries as $s$ and $\epsilon > 0$ is the restricted isometry constant. This means $\Theta$ must preserve the lengths of these particular $K$-sparse vectors. This condition is referred to as the restricted isometry property (RIP). Note that, in practice, it is typically impossible to know the locations of the nonzero entries in $s$. However, a sufficient condition for a stable solution for both $K$-sparse signals and compressible signals is that $\Theta$ satisfies the RIP for an arbitrary $3K$-sparse vector $v$. A related property called incoherence requires that the rows of $\Phi$ cannot sparsely represent the columns of $\Psi$, and vice versa. Under these conditions, both $K$-sparse signals and compressible signals of length $N$ can be reconstructed using only $M \geq cK \log(N/K)$ random Gaussian measurements.
The RIP and incoherence conditions can be satisfied with high probability simply by choosing $\Phi$ as a random matrix. For example, let us construct an $M \times N$ matrix $\Phi$ whose elements are independent and identically distributed (iid) random variables from a Gaussian probability distribution with mean zero and variance $1/N$. Then the measurements $y$ are merely $M$ different randomly weighted linear combinations of the elements of $x$, and the Gaussian measurement matrix $\Phi$ has the following properties:
- The matrix $\Phi$ is incoherent with the basis $\Psi = I$ of delta spikes with high probability, since it is unlikely that the rows of a Gaussian matrix will sparsely represent the columns of the identity matrix (and vice versa). In this case, if $M \geq cK \log(N/K)$ with $c$ a small constant, then $\Theta = \Phi \Psi = \Phi$ satisfies the RIP with high probability.
- The matrix $\Phi$ is universal, meaning that $\Theta = \Phi \Psi$ will satisfy the RIP with high probability regardless of the choice of the orthonormal basis $\Psi$.
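The near-isometry behavior of a Gaussian matrix can be checked empirically. This is an illustration, not a proof: the dimensions are arbitrary, and the entries are scaled by $1/\sqrt{M}$ here so that lengths are preserved in expectation for this particular check:

```python
import numpy as np

# Empirical check: a random Gaussian Phi approximately preserves the
# l2 length of K-sparse vectors, as the RIP requires.
rng = np.random.default_rng(3)
N, M, K = 256, 96, 4

Phi = rng.standard_normal((M, N)) / np.sqrt(M)

ratios = []
for _ in range(200):
    v = np.zeros(N)
    v[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(Phi @ v) / np.linalg.norm(v))

print(round(min(ratios), 2), round(max(ratios), 2))  # both close to 1
```

Every random $K$-sparse direction comes out with nearly unchanged length, which is exactly the stability the RIP formalizes.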
Part 2 of the Solution: Designing a Reconstruction Algorithm for Signals
Now we are left with the task of designing a reconstruction algorithm. The reconstruction algorithm must take the $M$ measurements in the vector $y$, the random measurement matrix $\Phi$ (or the random seed that was used to generate it), and the basis $\Psi$, and then reconstruct the length-$N$ signal $x$ or, equivalently, its sparse coefficient vector $s$. Since $M < N$, the system is underdetermined (having fewer equations than unknowns), and thus there are infinitely many vectors $s'$ that satisfy $\Theta s' = y$.
The justification for this is that if $\Theta s = y$, then $\Theta (s + r) = y$ for any vector $r$ in the null space of $\Theta$, denoted $\mathcal{N}(\Theta)$. This suggests finding the signal's sparse coefficient vector in the $(N - M)$-dimensional translated null space $\mathcal{H} = \mathcal{N}(\Theta) + s$. Related to the concept of $L^p$ spaces, the $\ell_p$ norm of the vector $s$ is $\lVert s \rVert_p = \left( \sum_{i=1}^{N} |s_i|^p \right)^{1/p}$. With this concept in mind, the reconstruction can be attempted in the following ways, each minimizing a different $\ell_p$ norm.
Minimum $\ell_2$ norm reconstruction

In order to find the vector $s$, the classical approach is to find the vector in the translated null space with the smallest $\ell_2$ norm (also called the energy norm) by solving
$$\hat{s} = \arg\min \lVert s' \rVert_2 \quad \text{such that} \quad \Theta s' = y.$$
The closed-form solution is given by $\hat{s} = \Theta^T (\Theta \Theta^T)^{-1} y$. However, the $\ell_2$ minimization returns a non-sparse $\hat{s}$ with many nonzero elements. In other words, it is almost never able to reconstruct the original signal.
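The failure of the closed-form $\ell_2$ solution is easy to demonstrate. In this sketch, $\Theta$ is drawn Gaussian directly for simplicity and the dimensions are illustrative:

```python
import numpy as np

# Sketch: the minimum-l2 solution s_hat = Theta^T (Theta Theta^T)^{-1} y
# matches the measurements exactly but is not sparse.
rng = np.random.default_rng(4)
N, M, K = 64, 24, 3

Theta = rng.standard_normal((M, N)) / np.sqrt(M)
s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
y = Theta @ s

s_hat = Theta.T @ np.linalg.solve(Theta @ Theta.T, y)  # closed-form l2 solution

print(np.allclose(Theta @ s_hat, y))            # True: measurements are matched
print(np.count_nonzero(np.abs(s_hat) > 1e-6))   # far more than K nonzero entries
```

The recovered vector spreads the energy over nearly every coordinate, so it looks nothing like the original $K$-sparse $s$.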
Minimum $\ell_0$ norm reconstruction

The drawback of using the $\ell_2$ norm in reconstruction is that it measures signal energy rather than signal sparsity. The $\ell_0$ norm, also known as the cardinality function, is much better suited, because it counts the number of nonzero entries in $s$; i.e. the $\ell_0$ norm of a $K$-sparse vector is simply $K$. In this case the problem formulation is
$$\hat{s} = \arg\min \lVert s' \rVert_0 \quad \text{such that} \quad \Theta s' = y.$$
At first glance this approach looks attractive, since it recovers a $K$-sparse signal exactly with high probability using only $M = K + 1$ iid Gaussian measurements. However, solving this problem is numerically unstable and NP-complete, requiring an exhaustive search over all $\binom{N}{K}$ possible locations of the nonzero entries in $s$.
Minimum $\ell_1$ norm reconstruction

Surprisingly, if we instead perform minimization using the $\ell_1$ norm, we are able to exactly recover $K$-sparse signals and closely approximate compressible signals with high probability using only $M \geq cK \log(N/K)$ iid Gaussian measurements. We aim to solve
$$\hat{s} = \arg\min \lVert s' \rVert_1 \quad \text{such that} \quad \Theta s' = y.$$
This is a convex optimization problem that conveniently reduces to a linear program known as basis pursuit.
Figure 2 from the original paper illustrates these three approaches to reconstructing signals. For convenience, the figure is reproduced below.
Furthermore, other related reconstruction algorithms are discussed in the original paper.
The paper discusses a practical realization of compressive sensing: a "single-pixel" compressive sensing camera. The camera has a micromirror for each pixel, and each mirror randomly either reflects light toward a single photodiode or away from it. Thus $y_j$, the voltage read at the photodiode, is the inner product of the image we want, $x$, with a measurement vector $\phi_j$, as in the problem description above. Here, $\phi_j$ is a vector of ones and zeros indicating which mirrors direct light toward the photodiode. Repeating this with $M$ different random mirror configurations yields the measurement vector $y$, from which the image can be reconstructed using the techniques discussed above. This example can be seen in the following diagram. The image in part (b) is from a conventional digital camera; the image in part (c) is reconstructed by the single-pixel camera. This method requires 60% fewer random measurements than reconstructed pixels.
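The single-pixel measurement process can be simulated in a few lines. This is a toy sketch; the dimensions and the 50% mirror-on density are assumptions for the demo, not the camera's actual parameters:

```python
import numpy as np

# Toy simulation of single-pixel camera measurements: each random 0/1
# pattern phi_j models which mirrors send light to the photodiode, and
# y_j is the resulting voltage, an inner product with the image x.
rng = np.random.default_rng(6)
N, M = 64, 26              # N "pixels", M mirror patterns (M < N)

x = rng.random(N)                        # toy image flattened to a vector
Phi = (rng.random((M, N)) < 0.5) * 1.0   # random 0/1 mirror patterns
y = Phi @ x                              # one photodiode voltage per pattern

print(y.shape)   # (26,): M measurements instead of N pixel samples
```

The image would then be recovered from $y$ with an $\ell_1$-type reconstruction, as described above.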
In practice, no measurement system is perfect; devices do not have infinite precision. It is therefore necessary that compressive sensing continue to recover signals relatively well even in the presence of noise. Fortunately, it has been shown that the reconstruction error of compressive sensing is bounded by the error incurred in approximating the signal by a sparse one (the change of basis), plus a term proportional to the noise input to the system:
$$\lVert \hat{x} - x \rVert_2 \leq C_0 \, \frac{\lVert x - x_K \rVert_1}{\sqrt{K}} + C_1 \epsilon$$
where $\hat{x}$ is the signal recovered under noisy conditions, $x_K$ is the vector $x$ with all but its $K$ largest components set to zero, and $\epsilon$ is a bound on the measurement noise. The constants $C_0$ and $C_1$ are generally small.
This is a very important result, since it means that the reconstruction error of compressive sensing is directly proportional to the noise in the measurements. It is this result that ultimately moves compressive sensing from an interesting academic exercise to a pragmatic tool.
The compressive sensing framework can be applied to analog signals as well, and it finds many practical applications in image processing and related fields. In this summary, we learned about compressive sensing, a more efficient method than traditional transform coding of signals with its sample-then-compress framework.
- R.G. Baraniuk, M. Davenport, R. DeVore, and M.B. Wakin, "A simple proof of the restricted isometry property for random matrices (aka the Johnson-Lindenstrauss lemma meets compressed sensing)," Constructive Approximation, 2007.
- D. Takhar, V. Bansal, M. Wakin, M. Duarte, D. Baron, J. Laska, K.F. Kelly, and R.G. Baraniuk, "A compressed sensing camera: New theory and an implementation using digital micromirrors," in Proc. Comput. Imaging IV SPIE Electronic Imaging, San Jose, Jan. 2006.
- R.G. Baraniuk, "Compressive sensing," IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 118–121, July 2007.