Consider a one-dimensional, finite-length
signal $x \in \mathbb{C}^N$. A two-dimensional image
or higher-dimensional data can be vectorized into a long one-dimensional vector. Many real-world
signals are sparse or compressible under a suitable basis.
Let $\Phi=[\phi_1 \vert \phi_2 \vert \cdots \vert \phi_N]$ be an
orthonormal basis. Then the signal $x$ can be
expressed as $\displaystyle x=\sum_{n=1}^N {\langle x,\phi_n\rangle\phi_n}$. We say that $x$ is
$K$-sparse under $\Phi$ if the coefficient sequence ${\lbrace s_n=\langle x,\phi_n\rangle \rbrace}_{n=1,\dots,N}$ has
only $K$ nonzero entries, and that $x$ is
compressible under $\Phi$ if ${\lbrace \langle x,\phi_n\rangle \rbrace}_{n=1,\dots,N}$ has
only a few large entries.
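As a small numerical illustration, the following sketch synthesizes a $K$-sparse signal under an orthonormal basis and recovers its coefficients $\langle x,\phi_n\rangle$. The sizes $N=8$, $K=2$ and the QR-derived basis are our own choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 2

# Build an orthonormal basis Phi (columns phi_1, ..., phi_N) via QR.
Phi, _ = np.linalg.qr(rng.standard_normal((N, N)))

# Choose K nonzero coefficients s_n = <x, phi_n>.
s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

# Synthesize x = sum_n s_n * phi_n, then recover the coefficients.
x = Phi @ s
s_recovered = Phi.T @ x  # inner products <x, phi_n> (Phi is real orthonormal)
```

Because $\Phi$ is orthonormal, analysis ($\Phi^T x$) exactly inverts synthesis ($\Phi s$), so `s_recovered` matches `s` with its $K$ nonzero entries.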

The central claim of compressive sensing is that a sparse signal
can be recovered from what was previously believed to be incomplete
information. Consider $\Psi=[\psi_1 \vert \psi_2 \vert \cdots \vert \psi_N] \in \mathbb{C}^{M \times N}$ for some
$M < N$. Then we obtain the measurements $y=\Psi x = \Psi \Phi s = As$, where $A=\Psi \Phi$. The measurement matrix $\Psi$ is
fixed and does not depend on the signal $x$, so
$A$ is selected independently of $s$. The map $A$ is
referred to as the encoder; it is clearly linear, and the design task on the encoder side is
to choose a good sensing matrix $A$. The decoder is the attempted recovery of $s$ from
the sensing matrix $A$ and the measurements $y$.
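The encoder can be sketched as follows; the Gaussian sensing matrix and the sizes $M=4$, $N=8$ are illustrative assumptions, not prescribed here:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 8  # M < N: fewer measurements than signal dimension

Phi, _ = np.linalg.qr(rng.standard_normal((N, N)))  # orthonormal basis
Psi = rng.standard_normal((M, N)) / np.sqrt(M)      # fixed sensing matrix
A = Psi @ Phi                                       # combined encoder A = Psi Phi

s = np.zeros(N)
s[[1, 5]] = [2.0, -1.0]   # 2-sparse coefficient vector
x = Phi @ s               # signal in the ambient domain
y = Psi @ x               # M linear measurements of the N-dim signal
```

Note that `y` equals `A @ s`: the encoder acts linearly on the coefficients, and `Psi` was fixed before seeing `x`.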

We define $\Vert s \Vert _0 := \vert \operatorname{supp}(s) \vert$ for a
signal $s$. The quantity $\Vert \cdot \Vert _0$ is
often called the $\ell _0$-norm, although it is not a norm. With a
sparsity prior, a natural decoder is to search for the sparsest vector $s$ such that $y=As$:

$\min \Vert s \Vert _0$ subject to $y=As$. (1)

It is known that if problem (1) has a
solution for sufficiently small $K$, that solution is unique. Since the decoder is thus well defined for
small $K$, what remains is an efficient reconstruction algorithm. Unfortunately,
problem (1) is a combinatorial problem and NP-hard in general. Two
approaches have mainly been pursued: greedy algorithms and convex relaxation.
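To see why (1) is combinatorial, consider the naive decoder that tries every size-$K$ support and keeps one that reproduces $y$ exactly; it must examine up to $\binom{N}{K}$ candidates. The sketch below (tiny illustrative sizes, our own function name) implements this exhaustive search:

```python
import numpy as np
from itertools import combinations

def l0_decode(y, A, K):
    """Exhaustively search all size-K supports for s with y = A s."""
    N = A.shape[1]
    for support in combinations(range(N), K):
        idx = list(support)
        # Least-squares fit restricted to the candidate support.
        coeffs, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        if np.allclose(A[:, idx] @ coeffs, y):
            s = np.zeros(N)
            s[idx] = coeffs
            return s
    return None  # no K-sparse solution found

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 6))
s_true = np.zeros(6)
s_true[[1, 4]] = [1.0, -2.0]
s_hat = l0_decode(A @ s_true, A, K=2)
```

Even at $N=6$, $K=2$ this checks up to 15 supports; the count grows combinatorially with $N$ and $K$, which is the practical content of the NP-hardness statement.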

A greedy algorithm computes the support of the
signal iteratively, at each step finding one or more new support elements and
subtracting their contribution from the measurement vector. Examples include
matching pursuit (MP), orthogonal matching pursuit (OMP), stagewise OMP,
regularized OMP, and weak OMP.
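The iterate-select-subtract pattern can be sketched for OMP as follows; the function name, problem sizes, and Gaussian matrix are our own illustrative choices, not a reference implementation:

```python
import numpy as np

def omp(y, A, K):
    """Greedily recover a K-sparse s from y = A s (orthogonal matching pursuit)."""
    residual = y.copy()
    support = []
    for _ in range(K):
        # Selection step: column of A most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Orthogonal step: least-squares re-fit on the enlarged support,
        # then subtract its contribution from the measurements.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    s = np.zeros(A.shape[1])
    s[support] = coeffs
    return s

rng = np.random.default_rng(4)
M, N, K = 10, 20, 3
A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, axis=0)        # unit-norm columns for fair correlation
s_true = np.zeros(N)
s_true[[3, 8, 15]] = [2.0, -1.5, 1.0]
s_hat = omp(A @ s_true, A, K)
```

The orthogonal re-fit is what distinguishes OMP from plain MP: each residual is orthogonal to the span of all columns chosen so far, so no column is selected twice.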

The $\ell_1$ minimization approach considers the solution
of

$\min \Vert s \Vert_1$ subject to $y=As$. (2)

This is a convex optimization problem and can
be seen as a convex relaxation of (1). Problem (2) is called "basis
pursuit". In the real-valued case, (2) can be cast as a linear program; in the
complex-valued case, it can be cast as a second-order cone program. A central theoretical
result of compressive sensing is that, under suitable conditions on $A$, the solution of basis
pursuit (2) equals the sparsest solution with overwhelming probability.
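In the real-valued case, the linear-program formulation can be sketched with the standard split $s = s^+ - s^-$, $s^+, s^- \ge 0$, which makes $\Vert s \Vert_1 = \mathbf{1}^T(s^+ + s^-)$. The sizes and Gaussian $A$ below are illustrative assumptions; the solver is `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
M, N = 10, 20

A = rng.standard_normal((M, N)) / np.sqrt(M)
s_true = np.zeros(N)
s_true[[2, 11]] = [1.5, -2.0]
y = A @ s_true

# Basis pursuit as an LP:
#   min 1^T (s+ + s-)  subject to  A (s+ - s-) = y,  s+, s- >= 0
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
s_hat = res.x[:N] - res.x[N:]
```

The LP optimum is feasible ($A s_{\text{hat}} = y$ up to solver tolerance) and its $\ell_1$ norm is no larger than that of the true sparse vector, which is itself feasible; under the conditions alluded to above, the two in fact coincide.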