The rate at which signals must be sampled in their native form (e.g., the "time domain" for many signals of interest) in order to capture all of the information in a signal - the so-called Nyquist rate in traditional sampling - equals twice the Fourier bandwidth of the signal. This process exploits knowledge of the signal's finite bandwidth. Alternatively, if the signal's Fourier spectrum were available, the signal could be sampled in the Fourier domain, and if some of the Fourier coefficients were known to be negligible, the number of samples required to capture all of the signal's information could be reduced. If it were known that the signal had such a property - called sparseness - in the Fourier domain, would it be possible instead to sample the signal at a reduced rate in its native form while still capturing the signal's information? Moreover, would it be possible to do so without knowing exactly which Fourier coefficients were negligible? In this paper we examine a recently introduced approach called compressive sampling (CS), which attempts to go beyond the exploitation of a signal's finite bandwidth and exploit signal sparseness to allow signals to be "undersampled" without losing information. We develop the concept of CS based on signal sparseness and provide a justification for the compressive-sampling process, including an explanation of the need for randomness in the process and of the subsequent signal reconstruction from the CS samples. In addition, examples of applications of CS are provided, along with simulation results. © 2009 IEEE.
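The scenario the abstract describes - randomly undersampling a signal that is sparse in the Fourier domain, then recovering it from far fewer samples than the Nyquist rate requires - can be illustrated with a small numerical sketch. The following is not the paper's own simulation; it is a minimal, assumed setup using NumPy: a length-256 signal built from three cosines (so its DFT is sparse), measured at 64 randomly chosen time instants, and reconstructed with a simple orthogonal matching pursuit loop (one common greedy CS recovery method; the paper's reconstruction algorithm may differ).

```python
import numpy as np

rng = np.random.default_rng(0)

# Signal of length N that is sparse in the DFT basis: K real cosines,
# so its DFT has (at most) 2*K nonzero bins (conjugate-symmetric pairs).
N, K, M = 256, 3, 64
freqs = rng.choice(N // 2 - 1, size=K, replace=False) + 1
t = np.arange(N)
x = sum(np.cos(2 * np.pi * f * t / N) for f in freqs)

# "Compressive" measurements: M randomly chosen time-domain samples,
# i.e. sampling in the signal's native form at well below the Nyquist rate.
sample_idx = rng.choice(N, size=M, replace=False)
y = x[sample_idx].astype(complex)

# Sensing matrix: the sampled rows of the inverse-DFT matrix, so that
# y = A @ alpha, where alpha = fft(x) is the sparse coefficient vector.
F_inv = np.fft.ifft(np.eye(N), axis=0)   # columns are ifft of unit vectors
A = F_inv[sample_idx, :]

# Orthogonal matching pursuit: greedily select the column of A most
# correlated with the residual, then re-fit all selected coefficients
# by least squares; stop when the measurements are explained.
support = []
residual = y.copy()
coef = np.zeros(0, dtype=complex)
for _ in range(4 * K):
    corr = np.abs(A.conj().T @ residual)
    corr[support] = 0.0                  # do not reselect chosen bins
    support.append(int(np.argmax(corr)))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
    if np.linalg.norm(residual) < 1e-8:
        break

# Rebuild the full DFT coefficient vector and invert it.
alpha_hat = np.zeros(N, dtype=complex)
alpha_hat[support] = coef
x_hat = np.fft.ifft(alpha_hat).real
```

With these (assumed) dimensions the 64 random samples are far fewer than the 256 Nyquist-rate samples, yet `x_hat` matches `x` closely because only 2*K = 6 DFT bins are active - the random choice of sample locations is what keeps the sensing matrix well behaved, echoing the abstract's point about the need for randomness.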
Original language: English (US)
Title of host publication: 2009 IEEE Long Island Systems, Applications and Technology Conference, LISAT 2009
State: Published - Sep 25 2009