TENSORS IN SIGNAL PROCESSING


Tensors of order r have implicitly been used in Engineering for a long time, since derivatives of order r of multivariate scalar functions are indeed tensors. For instance, cumulants of order r of an n-dimensional random variable are related to the rth derivatives of the joint characteristic function. As such, they form a symmetric tensor of order r and dimension n. Cumulants were first used in Signal Processing because they are asymptotically insensitive to Gaussian noise. But rapidly, since the eighties, they were also shown to be useful for identifying linear or polynomial models.
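
As a reminder, and with notation chosen here only for illustration, if \(\Psi(\mathbf{u}) = \log \mathrm{E}[\exp(j\,\mathbf{u}^T\mathbf{x})]\) denotes the second characteristic function of a random vector \(\mathbf{x} \in \mathbb{R}^n\), then

\[ \mathrm{Cum}(x_{i_1},\dots,x_{i_r}) \;=\; (-j)^r \left.\frac{\partial^r \Psi(\mathbf{u})}{\partial u_{i_1}\cdots \partial u_{i_r}}\right|_{\mathbf{u}=\mathbf{0}}, \]

which is symmetric in the indices i_1, ..., i_r because partial derivatives commute.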

The Canonical Polyadic decomposition (CP), sometimes referred to as Parafac, plays a central role in some Blind Identification problems, as pointed out already fifteen years ago with the birth of Independent Component Analysis. One striking fact is that, even if their estimation variance increases with their order, high-order cumulants remain attractive for at least two reasons. First, they may restore identifiability when lower orders are not sufficient. Second, the dimension of the noise subspace grows rapidly with the order, so that noise reduction may eventually be more effective.

Furthermore, as demonstrated ten years ago, the CP decomposition permits deterministic blind identification approaches when some additional diversity is available. This occurs in antenna array processing and in digital communications, for instance, but also in other data analysis problems, including medical imaging and fluorescence spectroscopy.
The key advantage of the CP decomposition in Blind Identification problems is that, contrary to the matrix case, decomposing a tensor into a sum of rank-one terms can be performed in an essentially unique manner. Uniqueness holds under sufficient conditions, originally due to Kruskal.
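
To fix ideas, and with generic notation not tied to any particular contribution, a third-order tensor of rank R can be written as

\[ \mathcal{T} \;=\; \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r, \qquad T_{ijk} \;=\; \sum_{r=1}^{R} A_{ir}\,B_{jr}\,C_{kr}, \]

and Kruskal's sufficient condition states that this decomposition is essentially unique, that is, unique up to scaling and permutation of the rank-one terms, whenever

\[ k_A + k_B + k_C \;\ge\; 2R + 2, \]

where k_A is the Kruskal rank of the factor matrix A, namely the largest k such that every set of k columns of A is linearly independent.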

In Compressed Sensing, one looks for sparse representations of signals in a given redundant dictionary. It turns out that the dictionary coherence condition ensuring uniqueness of the sparsest decomposition is related to Kruskal's condition for tensors.
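
In its simplest form, stated here only as an illustration, this condition reads as follows: if the dictionary \(\mathbf{D}\) has unit-norm columns \(\mathbf{d}_k\) and mutual coherence \(\mu(\mathbf{D}) = \max_{k\neq\ell} |\mathbf{d}_k^H \mathbf{d}_\ell|\), then any representation \(\mathbf{s} = \mathbf{D}\mathbf{x}\) with

\[ \|\mathbf{x}\|_0 \;<\; \tfrac{1}{2}\Bigl(1 + \tfrac{1}{\mu(\mathbf{D})}\Bigr) \]

is necessarily the unique sparsest one, in direct analogy with the role played by the Kruskal rank above.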

Another tensor decomposition, often referred to as the Tucker decomposition, makes it possible to compress the data. The resulting algorithms, such as the Higher Order SVD (HOSVD), can be used efficiently as a preprocessing step before CP; they make it possible to satisfy Kruskal's condition by reducing tensor rank, and they also reduce noise. Note that compression and denoising generally do not require uniqueness.
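
As a rough illustration of such a preprocessing, the following sketch (plain numpy, with names chosen for the example rather than taken from any existing toolbox) compresses a third-order tensor by a truncated HOSVD, so that a CP algorithm can then be run on the much smaller core:

import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring the chosen mode to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_compress(T, ranks):
    # Truncated HOSVD: one SVD per mode yields an orthonormal basis,
    # kept only up to the requested multilinear rank.
    bases = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        bases.append(u[:, :r])
    # Core tensor: project every mode of T onto its retained subspace.
    core = T
    for mode, u in enumerate(bases):
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, bases

# Example: reduce a noisy 20 x 20 x 20 tensor to a 3 x 3 x 3 core.
T = np.random.randn(20, 20, 20)
core, bases = hosvd_compress(T, ranks=(3, 3, 3))
print(core.shape)  # (3, 3, 3)

The CP factors estimated on the core can afterwards be mapped back to the original space by multiplying each of them by the corresponding basis matrix.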


Research Director, Dr. Pierre Comon
Centre National de la Recherche Scientifique (CNRS)
Laboratoire d'Informatique, Signaux et Systèmes de Sophia-Antipolis (I3S)
Sophia-Antipolis, France