SampTA 2013
IEEE

Conference Program & Abstracts

Monday, July 1

08:40 - 09:40

Stemming the neural data deluge

Jan Rabaey
Room: Conference Hall
Chair: Yonina C. Eldar (Technion-Israel Institute of Technology, Israel)

The dynamic brain map initiative, recently announced by the Obama administration, calls for the simultaneous monitoring of 1 million neurons ("the million neuron march") in a single human brain. While the precise meaning of this depends upon the phenomena to be observed and the resolution thereof, the design of interfaces that collect that amount of data in situ and transfer it to outside the brain is enormously challenging. Implanted neural interface circuits are extremely limited in size, energy and communication bandwidth. Scaling in both resolution and coverage requires innovative approaches that simultaneously reduce the energy related to data acquisition, and process and format the data so that it can be transmitted through a very narrow data pipe. In this presentation, we will discuss the overall challenges related to the design and realization of neural implants for scientific, neuro-prosthetic and brain-machine interface purposes. The role of sampling, compression and feature extraction will be emphasized. One thing is for certain: the outcomes of initiatives such as the brain map are bound to have a profound impact on our understanding of the brain and on cyber-human interaction.

10:00 - 12:00

Compressed Sensing

Room: Conference Hall
Chair: Deanna Needell (Claremont McKenna College, USA)
10:00 Overcoming the coherence barrier in compressed sensing
Ben Adcock (Purdue University, USA); Anders Hansen (University of Cambridge, United Kingdom); Clarice Poon (University of Cambridge, United Kingdom); Bogdan Roman (University of Cambridge, United Kingdom)
We introduce a mathematical framework that bridges a substantial gap between compressed sensing theory and its current use in real-world applications. Although the framework is completely general, one of its principal applications is the Magnetic Resonance Imaging (MRI) problem. Our theory provides a comprehensive explanation for the abundance of numerical evidence demonstrating the advantage of so-called variable density sampling strategies in compressive MRI. Another important conclusion of our theory is that the success of compressed sensing is resolution dependent. At low resolutions, there is little advantage over classical linear reconstruction. However, the situation changes dramatically once the resolution is increased, in which case compressed sensing can and will offer substantial benefits.
pp. 1-4
10:20 On construction and analysis of sparse matrices and expander graphs with applications to CS
Bubacarr Bah (École Polytechnique Fédérale de Lausanne (EPFL), Switzerland); Jared Tanner (University of Oxford, United Kingdom)
We revisit the probabilistic construction of sparse random matrices in which each column has a fixed number of nonzeros whose row indices are drawn uniformly at random. These matrices are in one-to-one correspondence with the adjacency matrices of lossless expander graphs. We present tail bounds on the probability that the cardinality of the set of neighbors for these graphs is smaller than its expected value. The bounds are derived through the analysis of collisions in unions of sets using a dyadic splitting technique. This analysis yields improved constants, which allow for quantitative theorems on the existence of lossless expander graphs, and hence of the sparse random matrices we consider, as well as quantitative compressed sensing (CS) sampling theorems when using sparse, non-mean-zero measurement matrices.
pp. 5-8
10:40 OMP with Highly Coherent Dictionaries
Raja Giryes (Technion, Israel); Michael Elad (Technion, Israel)
Recovering signals that have a sparse representation from a given set of linear measurements has been a major research topic in recent years. Most of the work dealing with this subject focuses on reconstructing the signal's representation as the means to recover the signal itself. This approach forces the dictionary to be of low coherence, with no linear dependencies between its columns. Recently, a series of contributions has shown that such dependencies can be allowed by aiming at recovering the signal itself. However, most of these recent works consider the analysis framework, and only a few discuss the synthesis model. This paper studies the synthesis model and introduces a new mutual coherence definition for signal recovery, showing that a modified version of OMP can recover sparsely represented signals over a dictionary with very high correlations between pairs of columns. We show how the derived results apply to plain OMP.
pp. 9-12
11:00 Recovery of cosparse signals with Gaussian measurements
Maryia Kabanava (Hausdorff Center for Mathematics, Germany); Holger Rauhut (University of Bonn, Germany)
This paper provides theoretical guarantees for the recovery of signals from undersampled measurements based on $\ell_1$-analysis regularization. We provide both nonuniform and stable uniform recovery guarantees for Gaussian random measurement matrices when the rows of the analysis operator form a frame. The nonuniform result relies on a recovery condition via tangent cones and the case of uniform recovery is based on an analysis version of the null space property.
pp. 13-16
11:20 q-ary compressive sensing
Youssef Mroueh (MIT-IIT, USA); Lorenzo Rosasco (DIBRIS, Unige and LCSL - MIT, IIT, USA)
We introduce q-ary compressive sensing, an extension of 1-bit compressive sensing. The recovery properties of the proposed approach are analyzed both theoretically and empirically. Results in 1-bit compressive sensing are recovered as a special case. Our theoretical results suggest a trade-off between the quantization parameter q and the number of measurements m in the control of the error of the resulting recovery algorithm, as well as its robustness to noise.
pp. 17-20
11:40 Low-rank Tensor Recovery via Iterative Hard Thresholding
Holger Rauhut (University of Bonn, Germany); Reinhold Schneider (Technische Universität Berlin, Germany); Zeljka Stojanac (Universität Bonn, Germany)
We study recovery of low-rank tensors from a small number of measurements. A version of the iterative hard thresholding algorithm (TIHT) for the higher order singular value decomposition (HOSVD) is introduced. As a first step towards the analysis of the algorithm, we define a corresponding tensor restricted isometry property (HOSVD-TRIP) and show that Gaussian and Bernoulli random measurement ensembles satisfy it with high probability.
pp. 21-24

Time-Frequency Analysis

Room: Conference Room
Chair: Hans Feichtinger (University of Vienna, Austria)
10:00 (Non-)Density Properties of Discrete Gabor Multipliers
Dominik Bayer (Acoustics Research Institute & Austrian Academy of Sciences, Austria); Peter Balazs (Austrian Academy of Sciences, Austria)
This paper is concerned with the possibility of approximating arbitrary operators by multipliers for Gabor frames or more general Bessel sequences. It addresses the question of whether sets of multipliers (whose symbols come from prescribed function classes such as $\ell^2$) constitute dense subsets of various spaces of operators (such as Hilbert-Schmidt class). We prove a number of negative results that show that in the discrete setting subspaces of multipliers are usually not dense and thus too small to guarantee arbitrary good approximation. This is in contrast to the continuous case.
pp. 25-28
10:20 Estimation of frequency modulations on wideband signals; applications to audio signal analysis
Harold Omer (Aix Marseille Université, France); Bruno Torrésani (Aix-Marseille Université, France)
The problem of jointly estimating the power spectrum and the modulation from realizations of frequency-modulated stationary wideband signals is considered. The study is motivated by specific signal classes in which departures from stationarity can carry relevant information and have to be estimated. The estimation procedure is based upon explicit modeling of the signal as a wideband stationary Gaussian signal, transformed by a time-dependent, smooth frequency modulation. Under such assumptions, an approximate expression for the second-order statistics of the transformed signal's Gabor transform is obtained, which leads to an approximate maximum likelihood estimation procedure. The proposed approach is validated on numerical simulations.
pp. 29-32
10:40 Gabor dual windows using convex optimization
Nathanaël Perraudin (Austrian Academy of Sciences, Switzerland); Nicki Holighaus (Austrian Academy of Sciences, Austria); Peter Soendergaard (Austrian Academy of Sciences, Austria); Peter Balazs (Austrian Academy of Sciences, Austria)
Redundant Gabor frames admit an infinite number of dual frames, yet only the canonical dual Gabor system, constructed from the minimal l2-norm dual window, is widely used. This window function, however, might lack desirable properties, such as good time-frequency concentration, small support or smoothness. We employ convex optimization methods to design dual windows satisfying the Wexler-Raz equations and optimizing various constraints. Numerical experiments show that alternate dual windows with considerably improved features can be found.
pp. 33-36
11:00 Sparse Finite Gabor Frames for Operator Sampling
Goetz Pfander (Jacobs University Bremen, Germany); David Walnut (George Mason University, USA)
We derive some interesting properties of finite Gabor frames and apply them to the sampling or identification of operators with bandlimited Kohn-Nirenberg symbols, or equivalently those with compactly supported spreading functions. Specifically, we use the fact that finite Gabor matrices have full spark for an open, dense set of window vectors to show the existence of periodically weighted delta trains that simultaneously identify large operator classes. We also show that sparse delta trains exist that identify operator classes for which the spreading support has small measure.
pp. 37-40
11:20 Optimal wavelet reconstructions from Fourier samples via generalized sampling
Clarice Poon (University of Cambridge, United Kingdom); Anders Hansen (University of Cambridge, United Kingdom); Ben Adcock (Purdue University, USA)
We consider the problem of computing wavelet coefficients of compactly supported functions from their Fourier samples. For this, we use the recently introduced framework of generalized sampling in the context of compactly supported orthonormal wavelet bases. Our first result demonstrates that using generalized sampling one obtains a stable and accurate reconstruction, provided the number of Fourier samples grows linearly in the number of wavelet coefficients recovered. We also present the exact constant of proportionality for the class of Daubechies wavelets. Our second result concerns the optimality of generalized sampling for this problem. Under some mild assumptions generalized sampling cannot be outperformed in terms of approximation quality by more than a constant factor. Moreover, for the class of so-called perfect methods, any attempt to lower the sampling ratio below a certain critical threshold necessarily results in exponential ill-conditioning. Thus generalized sampling provides a nearly-optimal solution to this problem.
pp. 41-44
11:40 Wavelet Signs: A New Tool for Signal Analysis
Martin Storath (Ecole Polytechnique Federale de Lausanne, Switzerland); Laurent Demaret (Helmholtz Zentrum München, Germany); Peter Massopust (Helmholtz Zentrum München, Germany)
We propose a new analysis tool for signals, called signature, that is based on complex wavelet signs. The complex-valued signature of a signal at some spatial location is defined as the fine-scale limit of the signs of its complex wavelet coefficients. We show that the signature equals zero at sufficiently regular points of a signal whereas at salient features, such as jumps or cusps, it is non-zero. We establish that signature is invariant under fractional differentiation and rotates in the complex plane under fractional Hilbert transforms. We derive an appropriate discretization, which shows that wavelet signatures can be computed explicitly. This allows an immediate application to signal analysis.
pp. 45-48

13:20 - 15:20

Sampling and Frame Theory

Bernhard Bodmann, Peter Casazza, Matthew Fickus
Room: Conference Room
Chair: Matthew Fickus (AF Institute of Technology, USA)
13:20 Balayage and short time Fourier transform frames
Enrico Au-Yeung (University of British Columbia, Canada); John Benedetto (University of Maryland, USA)
Using his formulation of the potential theoretic notion of balayage and his deep results about this idea, Beurling gave sufficient conditions for Fourier frames in terms of balayage. The analysis makes use of spectral synthesis, due to Wiener and Beurling, as well as properties of strict multiplicity, whose origins go back to Riemann. In this setting and with this technology, we formulate and prove non-uniform sampling formulas in the context of the short time Fourier transform (STFT).
pp. 73-76
13:40 Fundamental Limits of Phase Retrieval
Afonso Bandeira (Princeton University, USA); Jameson Cahill (University of Missouri, USA); Dustin G. Mixon (Air Force Institute of Technology, USA); Aaron Nelson (Air Force Institute of Technology, USA)
Recent advances in convex optimization have led to new strides in the phase retrieval problem over finite-dimensional vector spaces. However, certain fundamental questions remain: What sorts of measurement vectors uniquely determine every signal up to a global phase factor, and how many are needed to do so? This paper presents several results that address these questions, specifically in the less-understood complex case. In particular, we characterize injectivity, we identify that the complement property is indeed necessary, we pose a conjecture that 4M-4 generic measurement vectors are necessary and sufficient for injectivity in M dimensions, and we describe how to prove this conjecture in the special cases where M=2,3. To prove the M=3 case, we leverage a new test for injectivity, which can be used to determine whether any 3-dimensional measurement ensemble is injective.
pp. 77-80
14:00 On transformations between Gabor frames and wavelet frames
Ole Christensen (Technical University of Denmark, Denmark); Say Goh (National University of Singapore, Singapore)
We describe a procedure that enables us to construct dual pairs of wavelet frames from certain dual pairs of Gabor frames. Applying the construction to Gabor frames generated by appropriate exponential B-splines gives wavelet frames generated by functions whose Fourier transforms are compactly supported splines with geometrically distributed knot sequences. There is also a reverse transform, which yields pairs of dual Gabor frames when applied to certain wavelet frames.
pp. 81-84
14:20 Perfect Preconditioning of Frames by a Diagonal Operator
Gitta Kutyniok (Technical University Berlin, Germany); Kasso Okoudjou (University of Maryland, USA); Friedrich Philipp (Technische Universität Berlin, Germany)
Frames which are tight might be considered optimally conditioned in the sense of their numerical stability. This leads to the question of perfect preconditioning of frames, i.e., modification of a given frame to generate a tight frame. In this paper, we analyze perfect preconditioning of frames by a diagonal operator. We derive various characterizations, of functional-analytic and geometric type, of the class of frames which allow such a perfect preconditioning.
pp. 85-88
14:40 Characterizing completions of finite frames
Matthew Fickus (AF Institute of Technology, USA); Miriam Poteet (Air Force Institute of Technology, USA)
Finite frames are possibly-overcomplete generalizations of orthonormal bases. We consider the "frame completion" problem, that is, the problem of how to add vectors to an existing frame in order to make it better conditioned. In particular, we discuss a new, complete characterization of the spectra of the frame operators that arise from those completions whose newly-added vectors have given prescribed lengths. To do this, we build on recent work involving a frame's eigensteps, namely the interlacing sequence of spectra of its partial frame operators. We discuss how such eigensteps exist if and only if our prescribed lengths are majorized by another sequence which is obtained by comparing our completed frame's spectrum to our initial one.
pp. 89-92
15:00 A note on scalable frames
Jameson Cahill (University of Missouri, USA); Xuemei Chen (University of Maryland, College Park, USA)
We study the problem of determining whether a given frame is scalable, and when it is, understanding the set of all possible scalings. We show that for most frames this is a relatively simple task in that the frame is either not scalable or is scalable in a unique way, and to find this scaling we just have to solve a linear system. We also provide some insight into the set of all scalings when there is not a unique scaling. In particular, we show that this set is a convex polytope whose vertices correspond to minimal scalings.
pp. 93-96
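A minimal numerical sketch of the linear-system viewpoint mentioned in the abstract above (illustrative only, not the authors' code): a frame with vectors f_i is scalable when nonnegative weights w_i = c_i^2 solve sum_i w_i f_i f_i^T = I, which is linear in w; here the feasibility test is carried out by nonnegative least squares.

import numpy as np
from scipy.optimize import nnls

def scalability_weights(F, tol=1e-8):
    # F: d x N array whose columns are the frame vectors f_i
    d, N = F.shape
    # column i of M is vec(f_i f_i^T); the right-hand side is vec(I_d)
    M = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(N)])
    w, residual = nnls(M, np.eye(d).ravel())
    return w, residual < tol          # weights and a scalability flag

# example: an orthonormal basis plus a repeated vector is scalable
F = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
w, scalable = scalability_weights(F)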

Optical and RF Systems

Michael Gehm, Nathan Goodman
Room: Conference Hall
Chair: Nathan A Goodman (University of Oklahoma, USA)
13:20 Measurement Structures and Constraints in Compressive RF Systems
Nathan A Goodman (University of Oklahoma, USA)
Compressive sensing (CS) is a powerful technique for sub-sampling of signals combined with reconstruction based on sparsity. Many papers have been published on the topic; however, they often fail to consider practical hardware factors that may prevent or alter the implementation of desired CS measurement kernels. In particular, different compressive architectures in the RF domain either sacrifice collected signal energy or create noise folding, both of which cause SNR reduction. In this paper, we consider valid signal models and other system aspects of RF compressive systems.
pp. 49-52
13:40 Calibration—An open challenge in creating practical computational- and compressive-sensing systems
Michael Gehm (University of Arizona, USA)
The goal of this manuscript (and associated talk) is not to present any recent experimental results from my laboratory. Rather, the purpose is to elucidate why I believe that calibration is one of the few remaining significant challenges in the struggle to create a wide range of practical computational sensing and compressive sensing (CS) systems. Toward this end, I briefly describe the fundamental and implementation difficulties associated with calibration as well as the existing calibration approaches and their associated limitations before sketching the theoretical question that must be addressed in order to solve the calibration challenge.
pp. 53-56
14:00 Compressive CFAR Radar Processing
Laura Anitori (TNO, The Netherlands); Arian Maleki (Rice University, USA); Wim Lambertus van Rossum (TNO, The Netherlands); Matern Otten (TNO, The Netherlands); Richard Baraniuk (Rice University, USA)
In this paper we investigate the performance of a combined Compressive Sensing (CS) Constant False Alarm Rate (CFAR) radar processor under different interference scenarios using both the Cell Averaging (CA) and Order Statistic (OS) CFAR detectors. Using the properties of the Complex Approximate Message Passing (CAMP) algorithm, we demonstrate that the behavior of the CFAR processor is independent of the combination with the non-linear recovery and therefore its performance can be predicted using standard radar tools. We also compare the performance of the CS CFAR processor to that of an L1-norm detector using an experimental data set.
pp. 57-60
14:20 Sampling Techniques for Improved Algorithmic Efficiency in Electromagnetic Sensing
Kyle R Krueger (Georgia Institute of Technology, USA); James H McClellan (Georgia Institute of Technology, USA); Waymond R Scott, Jr. (Georgia Institute of Technology, USA)
Ground-penetrating radar (GPR) and electromagnetic induction (EMI) sensors are used to image and detect subterranean objects; for example, in landmine detection. Compressive sampling at the sensors is important for reducing the complexity of the acquisition process. However, there is a second form of sampling done in the imaging-detection algorithms where a parametric forward model of the EM wavefield is used to invert the measurements. This parametric model includes all the features that need to be extracted from the object; for subterranean targets this includes but is not limited to type, 3D location, and 3D orientation. As parameters are added to the model, the dimensionality increases. Current sparse recovery algorithms employ a dictionary created by sampling the entire parameter space of the model. If uniform sampling is done over the high-dimensional parameter space, the size of the dictionary and the complexity of the inversion algorithms grow rapidly, exceeding the capability of real-time processors. This paper shows that strategic sampling practices can be exploited in both the parameter space and the acquisition process to dramatically improve the efficiency and scalability of these EM sensor systems.
pp. 61-64
14:40 Coding and sampling for compressive tomography
David Brady (Duke University, USA)
This paper discusses sampling system design for estimation of multidimensional objects from lower dimensional measurements. We consider examples in geometric, diffractive, coherence, spectral and temporal tomography. Compressive tomography reduces or eliminates conventional tradeoffs between temporal and spatial resolution.
pp. 65-68
15:00 Challenges in Optical Compressive Imaging and Some Solutions
Adrian Stern (Ben-Gurion University of the Negev, Israel); Yair Rivenson (Ben-Gurion University of the Negev, Israel); Yitzhak August (Ben-Gurion University of the Negev, Israel)
The theory of compressive sensing (CS) has opened up new opportunities in the field of optical imaging. However, its implementation in this field is often not straightforward. We list the implementation challenges that might arise in compressive imaging and present some solutions to overcome them.
pp. 69-72

15:40 - 16:40

Sampling theory and applications: developments in the last 20 years and future perspectives

Hans-Georg Feichtinger
Room: Conference Hall
Chair: Paul Butzer (RWTH Aachen, Germany)
It is by now 20 years since a group of visionaries (including Paul Butzer, Abdul Jerri, Farokh Marvasti, and others, e.g. the organizer of the first meeting) organized the first SampTA conference in Riga. Thanks to the efforts of many members of our community (e.g. Paulo Ferreira, who was willing to organize the second conference in Aveiro), the SampTA series of biannual conferences has been created, which by now has a well-established position within the international conference landscape. It has kept the spirit of a meeting place between mathematicians and applied researchers, exchanging ideas at a theoretical level as well as sharing experiences about efficient algorithms and their implementations.
In my talk I will try to summarize the cornerstones of sampling theory as it stands now, indicating some of the key developments of those past 20 years. The concept of frames is now absolutely well established in the community, while the value of Banach frames is still often underestimated. Function-space methods have developed further, and various new settings have been introduced, in particular the setting of shift-invariant spaces. Although locally compact Abelian groups are the natural setting for the description of the (regular or irregular) sampling theorem, among others because versions of Poisson's formula are valid in this context, many new settings (band-limited functions over other domains, as e.g. in the work of I. Pesenson) have been considered in these last 20 years. Localization theory has come into play, providing tools to describe the locality of reconstruction, which is important if only local data are available or for real-time applications (the user does not have access to "future" samples).

As an outlook into the future I will indicate a few areas where I expect much more to come. Of course it will be the younger generation who will now shape the future, but I am confident that sampling theory has not just established itself as a distinguished research field for a small group of specialists, but will remain a meeting ground for different disciplines. We have to continue to keep contact between the different communities, to refresh ideas, keep the assumptions of our theorems realistic and useful, check their validity in practical situations, and work jointly towards something that I would like to call "consumer reports", telling (experienced and non-experienced) users, from within and outside of our community, which methods might be most promising for certain application areas.

Tuesday, July 2

08:40 - 09:40

How to best sample a solution manifold?

Wolfgang Dahmen
Room: Conference Hall
Chair: Gitta Kutyniok (Technical University Berlin, Germany)

Many design or optimization tasks in scientific computation require a frequent (even online) evaluation of solutions to parameter-dependent families of partial differential equations describing the underlying model. This is often only feasible when the model is suitably reduced. The so-called Reduced Basis Method is a model reduction paradigm that has recently been attracting considerable attention, since it aims at certifying the quality of the reduced model through a-posteriori error bounds. The central idea is to approximate the solution manifold, comprised of all solutions obtained when the parameter ranges over the given parameter domain, by the linear hull of possibly few snapshots from that manifold, so as to still guarantee that the maximal error stays below a given target tolerance. This talk highlights the basic ideas behind this method, revolving around greedy strategies for the construction of reduced bases.

Moreover, some recent developments are indicated which address the optimal performance of new stabilized variants for problem classes that cannot be treated well by conventional techniques, such as unsymmetric singularly perturbed problems. A crucial conceptual ingredient is shown to be a way of "preconditioning" the involved parameter-dependent operators on the infinite-dimensional level.

10:00 - 12:00

Sampling and Quantization

Holger Boche, Sinan Güntürk, Özgür Yilmaz
Room: Conference Hall
Chair: Sinan Gunturk (New York University, USA)
10:00 Finite-power spectral analytic framework for quantized sampled signals
Truong Thao Nguyen (City College of New York, CUNY, USA)
To be accurate, the theoretical spectral analysis of quantized sequences requires that the deterministic definition of power spectral density be used. We establish the functional-space foundations for this analysis, which remarkably appear to have been missing until now. With them, we then shed some new light on quantization error spectra in PCM and ΣΔ modulation.
pp. 97-100
10:40 Non-Convex Decoding for Sigma Delta Quantized Compressed Sensing
Evan Chou (New York University, USA)
Recently G\"unt\"urk et al. showed that $\Sigma\Delta$ quantization is more effective than pulse-code modulation (PCM) when applied to compressed sensing measurements of sparse signals as long as the step size of the quantizer is sufficiently fine. PCM with the $l^1$ decoder recovers an approximation to the original sparse signal with an error proportional to the quantization step size. For an $r$-th order $\Sigma\Delta$ scheme the reconstruction accuracy can be improved by a factor of $(m/k)^{\alpha(r-1/2)}$ for any $0 < \alpha < 1$ if $m \gtrsim k(\log N)^{1/(1-\alpha)}$, with high probability on the measurement matrix. In this paper, we make the observation that the sparsest minimizer subject to a $\Sigma\Delta$-type quantization constraint would approximate the original signal from the $\Sigma\Delta$ quantized measurements with a comparable reconstruction accuracy. Then we show that the less intractable non-convex $l^\tau$ minimization for $\tau$ sufficiently small can also be used as an alternative recovery method to achieve a comparable reconstruction accuracy. In both cases, we remove the requirement that the quantization alphabet be fine. Finally, we show using these results that root-exponential accuracy in the bitrate can be achieved for compressed sensing of sparse signals using $\Sigma\Delta$ quantization as the encoder and $l^\tau$ minimization as the decoder.
pp. 101-104
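For readers unfamiliar with the encoder discussed above, the following is a sketch of the standard first-order greedy Sigma-Delta scheme (illustrative only; the paper treats r-th order schemes and a non-convex decoder, and the infinite uniform alphabet below is an assumption). The running state u feeds the quantization error back into later steps, in contrast with independent PCM rounding.

import numpy as np

def sigma_delta_first_order(y, delta):
    # quantize the sequence y with step delta, alphabet {..., -delta/2, delta/2, 3*delta/2, ...}
    u = 0.0
    q = np.zeros_like(y)
    for i, yi in enumerate(y):
        v = u + yi                                   # state plus new sample
        q[i] = delta * (np.floor(v / delta) + 0.5)   # nearest alphabet element
        u = v - q[i]                                 # error kept for the next step; |u| <= delta/2
    return q

y = 0.3 * np.random.randn(200)        # e.g. compressive measurements y = A x
q = sigma_delta_first_order(y, delta=0.25)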
11:00 Quantized Iterative Hard Thresholding: Bridging 1-bit and High-Resolution Quantized Compressed Sensing
Laurent Jacques (University of Louvain, Belgium); Kévin Degraux (Université Catholique de Louvain, Belgium); Christophe De Vleeschouwer (Université Catholique de Louvain, Belgium)
In this work, we show that reconstructing a sparse signal from quantized compressive measurements can be achieved in a unified formalism whatever the (scalar) quantization resolution, i.e., from the 1-bit to the high-resolution regime. This is achieved by generalizing the iterative hard thresholding (IHT) algorithm and its binary variant (BIHT), introduced in previous works, to enforce the consistency of the reconstructed signal with respect to the quantization model. The performance of this algorithm, simply called quantized IHT (QIHT), is evaluated in comparison with other approaches (e.g., IHT, basis pursuit denoise) for several quantization scenarios.
pp. 105-108
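As a point of reference for the generalization described above, here is a sketch of the 1-bit special case (a BIHT-style update); the step size, iteration count and final normalization are illustrative choices, not the paper's exact QIHT algorithm.

import numpy as np

def biht(A, y_sign, k, n_iter=100, tau=1.0):
    # y_sign: 1-bit measurements sign(A x); k: assumed sparsity level
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + (tau / m) * (A.T @ (y_sign - np.sign(A @ x)))  # consistency-enforcing step
        x[np.argsort(np.abs(x))[:n - k]] = 0.0                 # keep the k largest entries
    return x / max(np.linalg.norm(x), 1e-12)    # 1-bit data only fixes the direction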
11:20 Sigma-Delta quantization of sub-Gaussian compressed sensing measurements
Felix Krahmer (University of Göttingen, Germany); Rayan Saab (Duke University, USA); Ozgur Yilmaz (University of British Columbia, Canada)
Recently, it has been shown that, in the setting of compressed sensing with Gaussian measurements, Sigma-Delta quantization can be effectively incorporated into the sensing mechanism [1]. In contrast to independently quantized measurements, the resulting schemes yield better reconstruction accuracy with a higher number of measurements even at a constant number of bits per signal. The original analysis of this method, however, crucially depends on the rotation invariance of the Gaussian measurements and hence does not directly generalize to other classes of measurements. In this note, we present a refined analysis that allows for a generalization to arbitrary sub-Gaussian measurements.
pp. 109-112
11:40 Stable Recovery with Analysis Decomposable Priors
Jalal Fadili (GREYC CNRS UMR 6072, ensicaen, France); Gabriel Peyré (CNRS and Université Paris-Dauphine, France); Samuel Vaiter (CNRS, CEREMADE, Université Paris-Dauphine, France); Charles-Alban Deledalle (CNRS-Université Bordeaux 1, France); Joseph Salmon (CNRS-Télécom ParisTech, France)
In this paper, we investigate in a unified way the structural properties of solutions to inverse problems. These solutions are regularized by the generic class of semi-norms defined as a decomposable norm composed with a linear operator, the so-called analysis type decomposable prior. This encompasses several well-known analysis-type regularizations such as the discrete total variation (in any dimension), analysis group-Lasso or the nuclear norm. Our main results establish sufficient conditions under which uniqueness and stability to a bounded noise of the regularized solution are guaranteed. Along the way, we also provide a necessary and sufficient uniqueness result that is of independent interest and goes beyond the case of decomposable norms.
pp. 113-116

Finite Rate of Innovation

Chandra Seelamantula
Room: Conference Room
Chair: Chandra Seelamantula (Indian Institute of Science, India)
10:00 FRI-based Sub-Nyquist Sampling and Beamforming in Ultrasound and Radar
Tanya Chernyakova (The Technion, IIT, Israel); Omer Bar-Ilan (Technion - Israel Institute of Technology, Israel); Yonina C. Eldar (Technion-Israel Institute of Technology, Israel)
Signals consisting of short pulses are present in many applications including ultrawideband communication, object detection and navigation (radar, sonar) and medical imaging. The structure of such signals, effectively captured within the finite rate of innovation (FRI) framework, allows for a significant reduction in the sampling rates required for perfect reconstruction. In this work we consider two applications, ultrasound imaging and radar, where the FRI signal structure allows us to reduce both sampling and processing rates. Furthermore, we show how the FRI framework inspires new processing techniques, such as beamforming in the frequency domain and Doppler focusing. In both applications a pulse of a known shape, or a stream of such pulses, is transmitted into the respective medium, and the received echoes are sampled and digitally processed in a way referred to as beamforming. Applied either spatially or temporally, beamforming improves the signal-to-noise ratio. In radar applications it also allows for target Doppler frequency estimation. Using FRI modeling both for detected and beamformed signals, we are able to reduce sampling rates and to perform digital beamforming directly on the low-rate samples.
pp. 117-120
10:20 Robust Spike Train Recovery from Noisy Data by Structured Low Rank Approximation
Laurent Condat (GIPSA-lab, France); Akira Hirabayashi (Yamaguchi University, Japan)
We consider the recovery of a finite stream of Dirac pulses at nonuniform locations from noisy lowpass-filtered samples. We show that maximum-likelihood estimation of the unknown parameters amounts to solving a difficult, believed to be NP-hard, matrix problem of structured low rank approximation. We propose a new heuristic iterative optimization algorithm to solve it. Although, in the absence of convexity, it comes with no convergence proof, it converges in practice to a local solution, and even to the global solution of the problem when the noise level is not too high. Thus, our method improves upon the classical Cadzow denoising method, with the same implementation ease and speed.
pp. 121-124
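For context, a sketch of the classical Cadzow denoising baseline referred to in the abstract above (alternating projections between rank-K matrices and Hankel-structured matrices); this is the reference method, not the authors' new algorithm, and the parameter choices are illustrative.

import numpy as np

def cadzow(s, K, L, n_iter=20):
    # s: noisy samples of a K-term exponential model; L: number of Hankel rows
    s = np.asarray(s, dtype=complex).copy()
    N = len(s)
    for _ in range(n_iter):
        # Hankel matrix H[i, j] = s[i + j], of size L x (N - L + 1)
        H = np.array([s[i:i + N - L + 1] for i in range(L)])
        # projection 1: best rank-K approximation via the SVD
        U, sv, Vh = np.linalg.svd(H, full_matrices=False)
        Hk = (U[:, :K] * sv[:K]) @ Vh[:K, :]
        # projection 2: back to Hankel structure by averaging anti-diagonals
        for n in range(N):
            i_lo, i_hi = max(0, n - (N - L)), min(L - 1, n)
            s[n] = np.mean([Hk[i, n - i] for i in range(i_lo, i_hi + 1)])
    return s

# example: two noisy complex exponentials (K = 2)
t = np.arange(64)
clean = np.exp(2j * np.pi * 0.11 * t) + 0.7 * np.exp(2j * np.pi * 0.27 * t)
denoised = cadzow(clean + 0.3 * np.random.randn(64), K=2, L=32)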
10:40 Multichannel ECG Analysis using VPW-FRI
Amrish Nair (Nanyang Technological University, Singapore); Pina Marziliano (Nanyang Technological University, Singapore); Frank Quick (Qualcomm Inc., USA); Ronald Crochiere (Qualcomm Inc., USA); Gilles Baechler (EPFL, Switzerland)
In this paper, we present an application of Variable Pulse Width Finite Rate of Innovation (VPW-FRI) to multi-channel Electrocardiogram (ECG) data using a common annihilator. By extending the conventional FRI model to include additional parameters such as pulse width and asymmetry, VPW-FRI is able to deal with a more general class of pulses. The common annihilator, which is introduced in the annihilating filter step, exhibits a common support across the multi-channel ECG data, which provides interesting possibilities for compression. A model-based denoising method, which is fast and non-iterative, will be presented. Also, an application to detecting QRS complexes in ECG signals will be demonstrated. The results will show the robustness of the common annihilator and of the QRS detection even in the presence of noise.
pp. 125-128
11:00 Recovery of bilevel causal signals with finite rate of innovation using positive sampling kernels
Gayatri Ramesh (USA); Elie Atallah (University of Central Florida, USA); Qiyu Sun (University of Central Florida, USA)
A bilevel signal $x$ with maximal local rate of innovation $R$ is a continuous-time signal that takes only the two values $0$ and $1$ and has at most one transition position in any time period of length $1/R$. In this note, we introduce a recovery method for bilevel causal signals $x$ with maximal local rate of innovation $R$ from their uniform samples $(x*h)(nT), n\ge 1$, where the sampling kernel $h$ is causal and positive on $(0, T)$, and the sampling rate $\tau:=1/T$ is at (or above) the maximal local rate of innovation $R$. We also discuss the stability of the bilevel signal recovery procedure in the presence of bounded noise.
pp. 129-132
11:20 Approximate FRI with Arbitrary Kernels
Jose Antonio Uriguen (Imperial College London, United Kingdom); Pier Luigi Dragotti (Imperial College London, United Kingdom); Thierry Blu (Chinese University of Hong Kong, Hong Kong)
In recent years, several methods have been developed for sampling and exact reconstruction of specific classes of non-bandlimited signals known as signals with finite rate of innovation (FRI). This is achieved by using adequate sampling kernels and reconstruction schemes, for example the exponential reproducing kernels. Proper linear combinations of this type of kernel with its shifted versions may reproduce polynomials or exponentials exactly. In this paper we briefly review the ideal FRI sampling and reconstruction scheme and some of the existing techniques to combat noise. We then present an alternative perspective of the FRI retrieval step, based on moments and approximate reproduction of exponentials. Allowing for a controlled model mismatch, we propose a unified reconstruction stage that addresses two current limitations in FRI: the number of degrees of freedom and the stability of the retrieval. Moreover, the approach is universal in that it can be used with any sampling kernel from which enough information is available.
pp. 133-136
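A brief sketch of the standard (noiseless) annihilating-filter retrieval step reviewed above, for a stream of K Diracs whose moments are s_m = sum_k a_k u_k^m; the paper's approximate-FRI variant modifies this step, so the code below is only the textbook baseline.

import numpy as np

def annihilating_filter_recovery(s, K):
    # s: moments s_m, m = 0, ..., M-1 with M >= 2K
    s = np.asarray(s, dtype=complex)
    M = len(s)
    # rows [s_m, s_{m-1}, ..., s_{m-K}] for m = K, ..., M-1, so that T @ h = 0
    T = np.array([s[m::-1][:K + 1] for m in range(K, M)])
    h = np.linalg.svd(T)[2][-1].conj()       # annihilating filter = null vector of T
    u = np.roots(h)                          # locations u_k are the roots of the filter
    V = np.vander(u, M, increasing=True).T   # Vandermonde matrix: V[m, k] = u_k^m
    a = np.linalg.lstsq(V, s, rcond=None)[0] # amplitudes by least squares
    return u, a

# example: K = 2 Diracs with locations encoded on the unit circle
u_true = np.exp(2j * np.pi * np.array([0.13, 0.42]))
s = np.array([np.sum([1.0, 0.6] * u_true**m) for m in range(8)])
u_hat, a_hat = annihilating_filter_recovery(s, K=2)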
11:40 Algebraic signal sampling, Gibbs phenomenon and Prony-type systems
Dmitry Batenkov (Weizmann Institute of Science, Israel); Yosef Yomdin (Weizmann Institute of Science, Israel)
Systems of Prony type appear in various reconstruction problems such as finite rate of innovation, super-resolution and Fourier inversion of piecewise smooth functions. We demonstrate that keeping the number of equations small and fixed, a "decimation" of the full system, can lead to practical improvements in the reconstruction accuracy. As an application, we provide a solution to the so-called Eckhoff conjecture, which asked for reconstructing jump positions and magnitudes of a piecewise-smooth function from its Fourier coefficients with the maximal possible asymptotic accuracy, thus eliminating the Gibbs phenomenon.
pp. 137-140

13:20 - 15:00

Super Resolution

Laurent Demanet
Room: Conference Hall
Chair: Laurent Demanet (MIT, USA)
13:20 Super-resolution via superset selection and pruning
Laurent Demanet (MIT, USA); Deanna Needell (Claremont McKenna College, USA); Nam Nguyen (Massachusetts Institute of Technology, USA)
We present a pursuit-like algorithm that we call the "superset method" for recovery of sparse vectors from consecutive Fourier measurements in the super-resolution regime. The algorithm has a subspace identification step that hinges on the translation invariance of the Fourier transform, followed by a removal step to estimate the solution's support. The superset method is always successful in the noiseless regime (unlike $\ell_1$ minimization) and generalizes to higher dimensions (unlike the matrix pencil method). Relative robustness to noise is demonstrated numerically.
pp. 141-144
13:40 Support detection in super-resolution
Carlos Fernandez-Granda (Stanford University, USA)
We study the problem of super-resolving a superposition of point sources from noisy low-pass data with a cut-off frequency f. Solving a tractable convex program is shown to locate the elements of the support with high precision as long as they are separated by 2/f and the noise level is small with respect to the amplitude of the signal.
pp. 145-148
14:00 Using Correlated Subset Structure for Compressive Sensing Recovery
Deanna Needell (Claremont McKenna College, USA); Atul Divekar (Alcatel-Lucent, USA)
Compressive sensing is a methodology for the reconstruction of sparse or compressible signals using far fewer samples than required by the Nyquist criterion. However, many of the results in compressive sensing concern random sampling matrices such as Gaussian and Bernoulli matrices. In common physically feasible signal acquisition and reconstruction scenarios such as super-resolution of images, the sensing matrix has a non-random structure with highly correlated columns. Here we present a compressive sensing recovery algorithm that exploits this correlation structure. We provide algorithmic justification as well as empirical comparisons.
pp. 149-152
14:20 Sub-Wavelength Coherent Diffractive Imaging based on Sparsity
Yoav Shechtman (Technion, Israel); Alexander Szameit (Technion, Israel); Eliyahu Osherovich (Technion, Israel); Pavel Sidorenko (Technion, Israel); Elad Bullkich (Technion - Israel Institute of Technology, Israel); Hod Dana (Technion, Israel); Shy Shoham (Technion, Israel); Irad Yavneh (Technion, Israel); Michael Zibulevsky (Technion - Israel Institute of Technology, Israel); Oren Cohen (Technion, Israel); Yonina C. Eldar (Technion-Israel Institute of Technology, Israel); Mordechai Segev (Technion, Israel)
We propose and experimentally demonstrate a method of performing single-shot sub-wavelength resolution Coherent Diffractive Imaging (CDI), i.e. algorithmic object reconstruction from Fourier amplitude measurements. The method is applicable to objects that are sparse in a known basis. The prior knowledge of the object's sparsity compensates for the loss of phase information, and the loss of all information at the high spatial frequencies occurring in every microscope and imaging system due to the physics of electromagnetic waves in free-space.
pp. 153-155
14:40 Robust Polyhedral Regularization
Samuel Vaiter (CNRS, CEREMADE, Université Paris-Dauphine, France); Gabriel Peyré (CNRS and Université Paris-Dauphine, France); Jalal Fadili (GREYC CNRS UMR 6072, ensicaen, France)
In this paper, we establish robustness to noise perturbations of polyhedral regularization of linear inverse problems. We provide a sufficient condition that ensures that the polyhedral face associated to the true vector is equal to that of the recovered one. This criterion also implies that the $\ell^2$ recovery error is proportional to the noise level for a range of parameters. Our criterion is expressed in terms of the hyperplanes supporting the faces of the unit polyhedral ball of the regularization. This generalizes to arbitrary polyhedral regularizations results that are known to hold for sparse synthesis and analysis $\ell^1$ regularization, which are encompassed in this framework. As a byproduct, we obtain recovery guarantees for $\ell^\infty$ and $\ell^1-\ell^\infty$ regularization.
pp. 156-159

Sampling and Learning

Albert Cohen
Room: Conference Room
Chair: Albert Cohen (Universite Pierre et Marie Curie, France)
13:20 On the Performance of Adaptive Sensing for Sparse Signal Inference
Rui Castro (Eindhoven University of Technology, The Netherlands)
In this short paper we survey recent results characterizing the fundamental gains and limitations of adaptive sensing for sparse signal inference. We consider two different adaptive sensing paradigms, based either on single-entry or linear measurements. Signal magnitude requirements for reliable inference are shown for two different inference problems, namely signal detection and signal support estimation.
pp. 160-163
14:00 Reconstruction of solutions to the Helmholtz equation from punctual measurements
Gilles Chardon (Austrian Academy of Sciences, Austria); Albert Cohen (Universite Pierre et Marie Curie, France); Laurent Daudet (Université Paris Diderot, France)
We analyze the sampling of solutions to the Helmholtz equation (e.g. sound fields in the harmonic regime) using a least-squares method based on approximations of the solutions by sums of Fourier-Bessel functions or plane waves. This method compares favorably to others such as Orthogonal Matching Pursuit with a Fourier dictionary. We show that using a significant proportion of samples on the border of the domain of interest improves the stability of the reconstruction, and that using cross-validation to estimate the model order yields good reconstruction results.
pp. 164-167
14:20 A priori convergence of the Generalized Empirical Interpolation Method
Yvon Maday (UPMC Univ Paris VI - Laboratoire Jacques-Louis Lions, France); Olga Mula (UPMC Univ Paris VI - Laboratoire Jacques-Louis Lions & CEA Saclay - DEN/DANS/DM2S/SERMA/LLPR, France); Gabriel Turinici (CEREMADE, Univ Paris Dauphine, France)
In an effort to extend classical Lagrangian interpolation tools, new interpolation methods that use general interpolating functions are explored. The Generalized Empirical Interpolation Method (GEIM) belongs to this class of new techniques. It generalizes the plain Empirical Interpolation Method by replacing evaluation at interpolation points with the application of a class of interpolating linear functionals. Since its efficiency depends critically on the choice of these functionals (which are chosen by a greedy selection procedure), the purpose of this paper is to provide a priori convergence rates for the greedy algorithm that is used to build the GEIM interpolating spaces.
pp. 168-171
14:40 Test-size Reduction Using Sparse Factor Analysis
Divyanshu Vats (Rice University, USA); Christoph Studer (Rice University, USA); Richard Baraniuk (Rice University, USA)
Consider a large database of questions that test the knowledge of learners (e.g., students) about a range of different concepts. While the main goal of personalized learning is to obtain accurate estimates of each learner's concept understanding, it is additionally desirable to reduce the number of questions to minimize each learner's workload. In this paper, we propose a novel method to extract a small subset of questions (from a large question database) that still enables the accurate estimation of a learner's concept understanding. Our method builds upon the SPARse Factor Analysis (SPARFA) framework and chooses a subset of questions that minimizes the entropy of the error in estimating the level of concept understanding. We approximate the underlying combinatorial optimization problem using a mixture of convex and greedy methods and demonstrate the efficacy of our approach on real educational data.
pp. 172-175

15:00 - 16:20

Poster Session I

coffee served
Chair: Goetz Pfander (Jacobs University Bremen, Germany)
Special Frames
Daniel Abreu (Acoustic Research Institute & CMUC, University of Coimbra, Austria)
We will present three classes of special frames: special Fourier-type frames, special Gabor frames and special wavelet frames. Then we will give one example for each class: Fourier-Bessel frames, Gabor frames with Hermite functions, and wavelet frames with Laguerre functions. Some results about these three classes of special frames, currently under investigation, will be outlined.
pp. 176-177
Variation and approximation for Mellin-type operators
Laura Angeloni (University of Perugia, Italy); Gianluca Vinti (University of Perugia, Italy)
Mellin analysis is of great importance in approximation theory, also because of its wide applications: among them, it is connected with problems of signal analysis, such as exponential sampling. Here we study a family of Mellin-type integral operators defined as $$ (T_w f)(\mathtt{s})=\int_{\mathbb{R}_+^N} K_w(\mathtt{t})\, f(\mathtt{st})\,\frac{d\mathtt{t}}{\langle\mathtt{t}\rangle}, \quad \mathtt{s}\in \mathbb{R}_+^N,\ w>0, \eqno{(\mathrm{I})} $$ where $\{K_w\}_{w>0}$ are approximate identities, $\langle\mathtt{t}\rangle:=\prod_{i=1}^N t_i$, $\mathtt{t}=(t_1,\dots,t_N)\in \mathbb{R}^N_+$, and $f:\mathbb{R}_+^N\rightarrow \mathbb{R}$ is a function of bounded $\varphi$-variation. We use a new concept of multidimensional $\varphi$-variation inspired by the Tonelli approach, which preserves some of the main properties of the classical variation. For the family of operators (I), besides several estimates and an approximation result for the $\varphi$-modulus of smoothness, the main convergence result that we obtain proves that $$ \lim_{w\to +\infty} V^{\varphi}[\lambda(T_w f-f)]=0 $$ for some $\lambda>0$, provided that $f$ is $\varphi$-absolutely continuous. Moreover, the rate of approximation is studied, taking into consideration also the particular case of Fejér-type kernels.
pp. 178-181
Iterative methods for random sampling recovery and compressed sensing recovery
Masomeh Azghani (Sharif University of Technology, Iran); Farokh Marvasti (Sharif University of Technology, Iran)
In this paper, an iterative sparse recovery method based on sampling and interpolation is suggested. The proposed method exploits the sparsity of the embedded signal to recover it from random samples. Simulation results indicate that the proposed method outperforms IMAT (a random sampling recovery method) and OMP (a compressed sensing recovery method) in the case of image compression. An iterative method with adaptive thresholding for compressed sensing recovery (IMATCS) is also proposed. Unlike its counterpart, iterative hard thresholding (IHT), the thresholding function of the proposed method is adaptive, i.e., the threshold value changes with the iteration number, which enables IMATCS to reconstruct the sparse signal without any knowledge of the sparsity level. The simulation results indicate that IMATCS outperforms IHT in both computational complexity and quality of the recovered signal. Compared to orthogonal matching pursuit (OMP), the proposed method is computationally more efficient with similar recovery performance.
pp. 182-185
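A hedged sketch of the adaptive-threshold idea described above (the exponentially decaying schedule below is a common IMAT-type choice, not necessarily the authors' exact rule); note that, unlike IHT, no sparsity level is supplied and the shrinking threshold plays that role.

import numpy as np

def imatcs_sketch(A, y, n_iter=200, beta=1.0, alpha=0.03):
    lam = 1.0 / np.linalg.norm(A, 2) ** 2      # conservative step size
    x = np.zeros(A.shape[1])
    for k in range(n_iter):
        x = x + lam * (A.T @ (y - A @ x))      # consistency (Landweber) step
        thr = beta * np.exp(-alpha * k)        # threshold decays with the iteration number
        x[np.abs(x) < thr] = 0.0               # hard thresholding
    return x

# example: recover a 10-sparse vector from 80 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 256)) / np.sqrt(80)
x0 = np.zeros(256)
x0[rng.choice(256, 10, replace=False)] = rng.standard_normal(10)
x_hat = imatcs_sketch(A, A @ x0)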
A Review of the Invertibility of Frame Multipliers
Peter Balazs (Austrian Academy of Sciences, Austria); Diana Stoeva (Austrian Academy of Sciences, Austria)
In this paper we give a review of recent results on the invertibility of frame multipliers $M_{m,\Phi,\Psi}$. In particular we give sufficient, necessary or equivalent conditions for the invertibility of such operators, depending on the properties of the sequences $\Psi$, $\Phi$ and $m$. We consider Bessel sequences, frames and Riesz sequences.
pp. 186-188
Hybrid Regularization and Sparse Reconstruction of Imaging Mass Spectrometry Data
Andreas Bartels (University of Bremen & Center for Industrial Mathematics - ZeTeM, Germany)
Imaging mass spectrometry (IMS) is a technique to visualize the molecular distributions of biological samples without the need for chemical labels or antibodies. The underlying data are taken from a mass spectrometer that ionizes the sample at spots on a grid of a certain size. Mathematical postprocessing methods have been investigated both for better visualization and for reducing the huge amount of data. We propose a first model that applies compressed sensing to reduce the number of measurements needed in IMS. At the same time, we apply peak picking in the spectra using the l1-norm and denoising of the m/z-images via the TV-norm, which are both common procedures in mass spectrometry data postprocessing, but are usually done separately and not simultaneously. This is realized by using a hybrid regularization approach for a sparse reconstruction of both the spectra and the images. We show reconstruction results for a given rat brain dataset in the spectral and spatial domains.
pp. 189-192
Level crossing sampling of strongly monoHölder functions
Brigitte Bidegaray-Fesquet (CNRS, France); Marianne Clausel (University Joseph Fourier, France)
We address the problem of quantifying the number of samples that can be obtained through a level crossing sampling procedure for applications to mobile systems. We specially investigate the link between the smoothness properties of the signal and the number of samples, both from a theoretical and a numerical point of view.
pp. 193-196
MAP Estimators for Self-Similar Sparse Stochastic Models
Emrah Bostan (Ecole Polytechnique Fédérale de Lausanne, Switzerland); Julien Fageot (EPFL, Switzerland); Ulugbek S. Kamilov (EPFL, Switzerland); Michael Unser (EPFL, Switzerland)
We consider the reconstruction of multi-dimensional signals from noisy samples. The problem is formulated within the framework of the theory of continuous-domain sparse stochastic processes. In particular, we study the fractional Laplacian as the whitening operator specifying the correlation structure of the model. We then derive a class of MAP estimators where the priors are confined to the family of infinitely divisible distributions. Finally, we provide simulations where the derived estimators are compared against total-variation (TV) denoising.
pp. 197-199
From variable density sampling to continuous sampling using Markov chains
Nicolas Chauffert (CEA, Neurospin Center, Parietal Team., France); Philippe Ciuciu (LNAO, France); Pierre Armand Weiss (ITAV USR 3505, France); Fabrice Gamboa (Université Paul Sabatier (Toulouse III), France)
Since its introduction over the last decade, Compressed Sensing (CS) has been successfully applied to Magnetic Resonance Imaging (MRI). It has been shown to be a powerful way to reduce scanning time without sacrificing image quality. MR images are indeed strongly compressible in a wavelet basis, the latter being largely incoherent with the k-space or spatial Fourier domain where acquisition is performed. Nevertheless, since its first application to MRI [1], the theoretical justification of actual k-space sampling strategies has been questionable. Indeed, the vast majority of k-space sampling distributions have been designed heuristically (e.g., variable density) or driven by experimental feasibility considerations (e.g., random radial or spiral sampling to achieve smooth k-space trajectories). In this paper, we try to reconcile very recent CS results with the specificities of MRI (magnetic field gradients) by constraining the measurements, i.e., the k-space samples, to lie along continuous trajectories. To this end, we propose random walk continuous sampling based on Markov chains, and we compare the reconstruction quality of this scheme to the state of the art.
pp. 200-203
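An illustrative sketch of the random-walk idea above (assumptions: a Metropolis-type nearest-neighbour walk on a Cartesian k-space grid targeting a radially decaying variable density; the authors' chain and target density may differ). The walk yields a connected, hence physically traversable, sampling pattern.

import numpy as np

def random_walk_mask(n=128, n_steps=4000, decay=2.0, seed=1):
    yy, xx = np.mgrid[0:n, 0:n]
    pi = (np.hypot(xx - n / 2, yy - n / 2) + 1.0) ** (-decay)  # target density, peaked at the k-space centre
    steps = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]])
    rng = np.random.default_rng(seed)
    pos = np.array([n // 2, n // 2])
    mask = np.zeros((n, n), dtype=bool)
    for _ in range(n_steps):
        mask[pos[0], pos[1]] = True
        cand = np.clip(pos + steps[rng.integers(4)], 0, n - 1)
        # Metropolis acceptance drives the walk towards the target density
        if rng.random() < min(1.0, pi[cand[0], cand[1]] / pi[pos[0], pos[1]]):
            pos = cand
    return mask

mask = random_walk_mask()    # boolean k-space sampling pattern for simulation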
A Comparison of Reconstruction Methods for Compressed Sensing of the Photoplethysmogram
Nicholas Conn (Rochester Institute of Technology, USA); David Borkholder (Rochester Institute of Technology, USA)
Compressed sensing has the potential to significantly decrease the power consumption of wireless medical devices. The photoplethysmogram (PPG) is a signal whose acquisition can greatly benefit from compressed sensing due to the large amount of power needed to capture the data. The aim of this paper is to determine whether the least absolute shrinkage and selection operator (LASSO) optimization algorithm is the best approach for reconstructing a compressively sampled PPG across varying physiological states. The results show that LASSO reconstruction approaches, but does not surpass, the reliability of constrained optimization.
pp. 204-207
Generalized sampling in $U$-invariant subspaces
Héctor Fernández-Morales (Universidad Carlos III de Madrid, Spain); Antonio García (Universidad Carlos III de Madrid, Spain); Miguel Hernández-Medina (Universidad Politécnica de Madrid, Spain)
In this work we derive some results in sampling theory for $U$-invariant subspaces of a separable Hilbert space $\mathcal{H}$, also called atomic subspaces: \[ \mathcal{A}_a=\big\{\sum_{n\in\mathbb{Z}}a_nU^na:\, \{a_n\} \in \ell^2(\mathbb{Z}) \big\}\,, \] where $U$ is a unitary operator on $\mathcal{H}$ and $a$ is a fixed vector in $\mathcal{H}$. These spaces are a generalization of the well-known shift-invariant subspaces in $L^2(\mathbb{R})$; here the space $L^2(\mathbb{R})$ is replaced by $\mathcal{H}$, and the shift operator by $U$. Having as data the samples of some related operators, we derive frame expansions allowing the recovery of the elements in $\mathcal{A}_a$. Moreover, we include a frame perturbation-type result for the case where the samples are affected by jitter error.
pp. 208-211
Iterative Hard Thresholding with Near Optimal Projection for Signal Recovery
Raja Giryes (Technion, Israel); Michael Elad (Technion, Israel)
Recovering signals that have sparse representations under a given dictionary from a set of linear measurements has received much attention in the last decade. However, most of the work has focused on recovering the signal's representation, forcing the dictionary to be incoherent, with no linear dependencies between small sets of its columns. A series of recent papers shows that such dependencies can be allowed by aiming at recovering the signal itself. However, most of these contributions focus on the analysis framework. One exception is the work reported in [Davenport, Needell and Wakin, 2012], which proposes a variant of CoSaMP for the synthesis model and shows that signal recovery is possible even in high-coherence cases. The theoretical study of this technique assumes the existence of an efficient near-optimal projection scheme. In this paper we extend the above work, showing that under very similar assumptions a variant of IHT can recover the signal in cases where regular IHT fails.
pp. 212-215
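For reference, plain IHT — whose hard-thresholding step the paper's variant replaces with a near-optimal projection onto signals sparse under a dictionary — can be sketched as follows. The step-size choice and the use of a dense matrix are illustrative assumptions.

    import numpy as np

    def iht(A, y, k, iters=200):
        """Plain iterative hard thresholding: keep the k largest entries after each gradient step."""
        mu = 1.0 / np.linalg.norm(A, 2) ** 2     # step size below 1/||A||^2 for stability
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x + mu * (A.T @ (y - A @ x))
            keep = np.argsort(np.abs(z))[-k:]    # hard threshold to the best k-sparse approximation
            x = np.zeros_like(z)
            x[keep] = z[keep]
        return x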
The Design of Non-redundant Directional Wavelet Filter Bank Using 1-D Neville Filters
Youngmi Hur (Johns Hopkins University, USA); Fang Zheng (Johns Hopkins University, USA)
In this paper, we develop a method to construct non-redundant directional wavelet filter banks. Our method uses a special class of filters called Neville filters and can construct non-redundant wavelet filter banks in any dimension for any dilation matrix. The resulting filter banks have directional analysis highpass filters and can thus be used to extract directional content from multidimensional signals such as images. Furthermore, one can custom-design the directions of the highpass filters in the filter banks.
pp. 216-219
Sparse Approximation of Ion-Mobility Spectrometry Profiles by Minutely Shifted Discrete B-splines
Masaru Kamada (Ibaraki University, Japan); Masakazu Ohno (Ibaraki University, Japan)
Employing discrete B-splines instead of the Gaussian distribution, we construct an algorithm for the analysis of ion-mobility spectrometry profiles. The algorithm is suitable for hardware implementation because the discrete B-splines are supported by a simple digital filter that computes their weighted sum and their correlations with a given signal. We deploy minutely shifted discrete B-splines whose weighted sum, with non-negative weights, approximates a given profile. Closely neighboring discrete B-splines are almost linearly dependent and may therefore cause numerical instability in the approximation process, but numerical experiments dispel this concern, at least for the final results. Varying the width of the discrete B-splines, we obtain a number of different approximations. Out of the sufficiently precise approximations, we choose the sparsest one in the sense that it comprises only a few discrete B-splines with large weights.
pp. 220-223
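A minimal sketch of the dictionary-plus-non-negative-weights idea: build minutely shifted discrete B-splines (here by repeated convolution of a box) and fit a synthetic profile with non-negative least squares. The spline order, width, shift and the use of scipy's generic NNLS solver are illustrative assumptions, not the authors' hardware-oriented algorithm.

    import numpy as np
    from scipy.optimize import nnls

    def discrete_bspline(order, width):
        b = np.ones(width)
        for _ in range(order - 1):
            b = np.convolve(b, np.ones(width)) / width   # repeated box convolution
        return b / b.max()

    n, shift = 400, 2                                    # profile length, fine shift between atoms
    atom = discrete_bspline(3, 15)
    cols = []
    for s in range(0, n - len(atom), shift):             # minutely shifted copies of the atom
        col = np.zeros(n)
        col[s:s + len(atom)] = atom
        cols.append(col)
    D = np.column_stack(cols)                            # dictionary of shifted discrete B-splines

    profile = 2.0 * D[:, 40] + 0.7 * D[:, 95]            # synthetic spectrometry profile
    w, resid = nnls(D, profile)                          # non-negative weights approximating the profile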
Tracking Dynamic Sparse Signals with Kalman Filters: Framework and Improved Inference
Evripidis Karseras (Imperial College London, United Kingdom); Kin K. K. Leung (Imperial College, United Kingdom); Wei Dai (Imperial College, United Kingdom)
The standard Kalman filter performs optimally for conventional signals but tends to fail when it comes to recovering dynamic sparse signals. In this paper a method to solve this problem is proposed. The basic idea is to model the system dynamics with a hierarchical Bayesian network that successfully captures the inherent sparsity of the data, in contrast to the traditional state-space model. This probabilistic model provides all the statistical information needed to perform sparsity-aware predictions and updates in the Kalman filter steps. A set of theorems shows that a properly scaled version of the associated cost function leads to less greedy optimisation algorithms than those previously proposed. We demonstrate empirically that the proposed method outperforms the traditional Kalman filter for dynamic sparse signals, and that the redesigned inference algorithm, termed here Bayesian Subspace Pursuit (BSP), greatly improves the inference procedure.
pp. 224-227
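As context for the hierarchical model above, the standard Kalman recursion that the paper builds on — and whose Gaussian state prior it replaces with a sparsity-inducing hierarchical prior — is sketched below; the matrices are generic placeholders.

    import numpy as np

    def kalman_step(x, P, y, F, H, Q, R):
        """One standard Kalman predict/update step for the state-space model x' = Fx + w, y = Hx + v."""
        # Predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain
        x_new = x_pred + K @ (y - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new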
The Variation Detracting Property of some Shannon Sampling Series and their Derivatives
Andi Kivinukk (Tallinn University, Estonia); Tarmo Metsmägi (Tallinn University, Estonia)
In this paper we consider some generalized Shannon sampling operators that preserve the total variation of functions and of their derivatives. For that purpose we use the averaged kernel functions of certain even band-limited kernels.
pp. 228-231
Jointly filtering and regularizing seismic data using space-varying FIR filters
Apostolos Kontakis (Delft University of Technology, The Netherlands); Xander Campman (Shell Global Solutions International B. V., The Netherlands); Geert Leus (Delft University of Technology, The Netherlands); Zijian Tang (Shell Global Solutions International B. V., The Netherlands); Mike Danilouchkine (Shell Global Solutions International B. V., The Netherlands)
Array forming in seismic data acquisition can be likened to FIR filtering. Misplacement of the receivers used to record seismic waves can lead to degraded performance with respect to the filtering characteristics of the array. We propose two methods for generating linear space-varying filters that take receiver misplacements into account and demonstrate their performance on synthetic data.
pp. 232-235
Non-uniform sampling pattern recognition based on atomic decomposition
Tugdual Le Pelleter (TIMA Laboratory, France); Taha Beyrouthy (TIMA Laboratory, France); Robin Rolland (CIME Nanotech, France); Agnès Bonvilain (TIMA Laboratory, France); Laurent Fesquet (TIMA Laboratory, France)
Non-uniform sampling is an interesting scheme that can outperform uniform sampling for low-activity signals. With such signals, it generates fewer samples, which means less data to process and lower power consumption. In addition, it is well known that asynchronous logic is a low-power technology. This paper deals with the coupling of a non-uniform sampling scheme and a pattern recognition algorithm implemented with event-driven logic. This non-uniform analog-to-digital conversion and the associated processing have been implemented on an Altera FPGA platform. The paper reports the first results of this low-activity pattern recognition system and its ability to recognize specific patterns from very few samples. This work targets future ultra-low-power integrated systems.
pp. 236-239
Particle Filter Acceleration Using Multiscale Sampling Methods
Yaniv Shmueli (Tel-Aviv University, Israel); Gil Shabat (Tel-Aviv University, Israel); Amit Bermanis (Tel-Aviv University, Israel); Amir Averbuch (Tel Aviv University, Israel)
We present a multiscale-based method that accelerates the computation of particle filters. The particle filter is a powerful method that tracks the state of a target based on non-linear observations. Unlike the conventional approach, which calculates weights over all particles in each cycle of the algorithm, we sample a small subset of the source particles using matrix decomposition methods. Then, we apply a function extension algorithm that uses the particle subset to recover the density function for the remaining particles. The computational effort is substantial, especially when tracking multiple objects. The proposed algorithm significantly reduces the computational load. We demonstrate our method on both simulated and real data, such as tracking in video sequences.
pp. 240-243
Analysis of Multistage Sampling Rate Conversion for Potential Optimal Factorization
Zhengmao Ye (Southern University, USA); Habib Mohamadian (Southern University, USA)
Digital multistage sampling rate conversion has many engineering applications in signal and image processing, where it adapts the sampling rates to the flows of diverse audio and video signals. The FIR (Finite Impulse Response) polyphase sampling rate converter is one of the typical schemes suitable for interpolation or decimation by an integer factor. It also guarantees stability, with adequate gain and phase margins. The main implementation challenge arises when a very high-order filter is needed because of large values of L (the integer interpolation factor) and/or M (the integer decimation factor); narrowband linear-phase filter specifications are then hard to achieve, leading to extra storage, additional computation and detrimental finite-word-length effects. The multistage sampling rate converter has been introduced to factorize the ratio L/M into a product of ratios of integers or prime numbers. The optimal number of stages and the optimal conversion factors are both critical for minimizing computation time and storage requirements. Filter structure analysis is conducted in this study to search for the factorizations that could markedly improve the sampling rate conversion.
pp. 244-247
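A toy sketch of the multistage idea: factor L and M into primes and cascade one polyphase FIR stage per factor pair. The naive pairing below ignores the optimisation of stage ordering and grouping that the paper addresses; the ratio 160/147 and the use of scipy.signal.resample_poly are illustrative assumptions.

    import numpy as np
    from itertools import zip_longest
    from scipy.signal import resample_poly

    def prime_factors(n):
        f, d = [], 2
        while d * d <= n:
            while n % d == 0:
                f.append(d)
                n //= d
            d += 1
        if n > 1:
            f.append(n)
        return f

    L, M = 160, 147                                   # overall conversion ratio L/M
    x = np.random.randn(1024)                         # input signal
    y = x
    for up, down in zip_longest(prime_factors(L), prime_factors(M), fillvalue=1):
        y = resample_poly(y, up, down)                # one FIR polyphase stage per factor pair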
Sparse 2D Fast Fourier Transform
Andre Rauh (University of Delaware, USA); Gonzalo Arce (University of Delaware, USA)
This paper extends the concepts of the Sparse Fast Fourier Transform (sFFT) Algorithm to work with two dimensional (2D) data. The 2D algorithm requires several generalizations to multiple key concepts of the 1D sparse Fourier transform algorithm. Furthermore, several parameters needed in the algorithm are optimized for the reconstruction of sparse image spectra. This paper addresses the case of the exact k-sparse Fourier transform but the underlying concepts can be applied to the general case of finding a k-sparse approximation of the Fourier transform of an arbitrary signal. The proposed algorithm can further be extended to even higher dimensions. Simulations illustrate the efficiency and accuracy of the proposed algorithm when applied to real images.
pp. 248-251
GESPAR: Efficient Sparse Phase Retrieval with Application to Optics
Yoav Shechtman (Technion, Israel); Amir Beck (The Technion, Israel); Yonina C. Eldar (Technion-Israel Institute of Technology, Israel)
The problem of phase retrieval, namely, recovery of a signal from the magnitude of its Fourier transform is ill-posed since the Fourier phase information is lost. Therefore, prior information on the signal is needed in order to recover it. In this work we consider the case in which the prior information on the signal is that it is sparse, i.e., it consists of a small number of nonzero elements. We propose GESPAR: A fast local search method for recovering a sparse signal from measurements of its Fourier transform magnitude. Our algorithm does not require matrix lifting, unlike previous approaches, and therefore is potentially suitable for large scale problems such as images. Simulation results indicate that the proposed algorithm is fast and more accurate than existing techniques. We demonstrate applications in optics where GESPAR is generalized and used for finding sparse solutions to sets of quadratic measurements.
pp. 252-255

16:20 - 17:20

Fast algorithms for sparse Fourier transform

Piotr Indyk
Room: Conference Hall
Chair: Farokh Marvasti (Sharif university of Technology, Iran)

The Fast Fourier Transform (FFT) is one of the most fundamental numerical algorithms. It computes the Discrete Fourier Transform (DFT) of an n-dimensional signal in O(n log n) time. The algorithm plays an important role in many areas. It is not known whether its running time can be improved. However, in many applications the output of the transform is (approximately) sparse. In this case, it is known that one can compute the set of non-zero coefficients faster than in O(n log n) time.

In this talk, I will describe a new set of efficient algorithms for the sparse Fourier Transform. One of the algorithms has the running time of O(k log n), where k is the number of non-zero Fourier coefficients of the signal. This improves over the runtime of the FFT for any k = o(n). If time allows, I will also describe some of the applications, to spectrum sensing and GPS locking, as well as mention a few outstanding open problems.

The talk will cover the material from the joint papers with Fadel Adib, Badih Ghazi, Haitham Hassanieh, Dina Katabi, Eric Price and Lixin Shi. The papers are available at http://groups.csail.mit.edu/netmit/sFFT/

17:30 - 18:30

Compressed Sensing

Room: Conference Hall
Chair: Felix Krahmer (University of Göttingen, Germany)
17:30 Sparse Signal Reconstruction from Phase-only Measurements
Petros T Boufounos (Mitsubishi Electric Research Laboratories & Rice University, USA)
We demonstrate that the phase of complex linear measurements of signals preserves significant information about the angles between those signals. We provide stable angle embedding guarantees, akin to the restricted isometry property in classical compressive sensing, that characterize how well the angle information is preserved. They also suggest that a number of measurements linear in the sparsity and logarithmic in the dimensionality of the signal contains sufficient information to acquire and reconstruct a sparse signal within a positive scalar factor. We further show that the reconstruction can be formulated and solved using standard convex and greedy algorithms taken directly from the CS literature. Even though the theoretical results only provide approximate reconstruction guarantees, our experiments suggest that exact reconstruction is possible.
pp. 256-259
17:50 Optimal Sampling Rates in Infinite-Dimensional Compressed Sensing
Mitra Fatemi (EPFL, Switzerland); Loic Baboulaz (Imperial College, United Kingdom); Martin Vetterli (EPFL, Switzerland)
The theory of compressed sensing studies the problem of recovering a high dimensional sparse vector from its projections onto lower dimensional subspaces. The recently introduced framework of infinite-dimensional compressed sensing [1], to some extent generalizes these results to infinite-dimensional scenarios. In particular, it is shown that the continuous-time signals that have sparse representations in a known domain can be recovered from random samples in a different domain. The range M and the minimum number m of samples for perfect recovery are limited by a balancing property of the two bases. In this paper, by considering Fourier and Haar wavelet bases, we experimentally show that M can be optimally tuned to minimize the number of samples m that guarantee perfect recovery. This study does not have any parallel in the finite-dimensional CS.
pp. 260-263
18:10 Deterministic Binary Sequences for Modulated Wideband Converter
Lu Gan (Brunel University, United Kingdom); Wang Huali (ICE, PLA UST, P.R. China)
The modulated wideband converter (MWC) is a promising spectrum-blind, sub-Nyquist multi-channel sampling scheme for sparse multi-band signals. In an MWC, the input analog signal is modulated by a bank of periodic binary waveforms, low-pass filtered and then downsampled uniformly at a low rate. One important issue in the design and implementation of an MWC system is the selection of the binary waveforms, which affects the stability of sparse reconstruction. In this paper, we propose to construct the binary patterns with a circulant structure, in which each row is a random cyclic shift of a single deterministic sequence or of a pair of complementary sequences. Such operators have hardware-friendly structures and allow fast computation in recovery. They are incoherent with the FFT matrix, and the corresponding sampling operators satisfy the restricted isometry property with sub-optimal bounds. Simulation results are included to demonstrate the validity of the proposed sampling operators.
pp. 264-267
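The circulant structure can be sketched in a few lines: every channel's periodic waveform is a random cyclic shift of one deterministic +/-1 seed sequence. The channel count, period and random seed below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    M, period = 8, 64                                        # number of channels, waveform period
    seed_seq = rng.choice([-1, 1], size=period)              # one deterministic +/-1 sequence
    shifts = rng.choice(period, size=M, replace=False)       # random cyclic shifts, one per channel
    patterns = np.stack([np.roll(seed_seq, s) for s in shifts])  # M x period circulant-structured bank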

Harmonic Analysis

Room: Conference Room
Chair: Anna Rita Sambucini (University of Perugia, Italy)
17:30 Fractional Prolate Spheroidal Wave Functions
Ahmed Zayed (DePaul University, USA)
An important problem in communication engineering is the energy concentration problem, that is, the problem of finding a signal band-limited to $[-\sigma, \sigma]$ with maximum energy concentration in the interval $[-\tau, \tau]$, $\tau>0$, in the time domain or, equivalently, finding a signal that is time-limited to the interval $[-\tau, \tau]$ with maximum energy concentration in $[-\sigma, \sigma]$ in the frequency domain. This problem was solved by a group of mathematicians at Bell Labs in the early 1960s. The solution involves the prolate spheroidal wave functions, which are eigenfunctions of both a differential and an integral equation. The main goal of this talk is to solve the energy concentration problem in the fractional Fourier transform domain, that is, to find a signal that is band-limited to $[-\sigma, \sigma]$ in the fractional Fourier transform domain with maximum energy concentration in the interval $[-\tau, \tau]$, $\tau>0$, in the time domain. The solution involves a generalization of the prolate spheroidal wave functions which we call fractional prolate spheroidal wave functions.
pp. 268-270
17:50 Absolute Convergence of the Series of Fourier-Haar Coefficients
Boris Golubov (Moscow Institute of Physics and Technologies, Russia); Sergey Volosivets (Saratov State University, Russia)
We give some sharp statements on the absolute convergence of series of Fourier-Haar coefficients. Two-dimensional analogs of the one-dimensional results are also given.
pp. 271-273
18:10 Mellin analysis and exponential sampling. Part I: Mellin fractional integrals
Paul Butzer (RWTH Aachen, Germany); Carlo Bardaro (University of Perugia, Italy); Ilaria Mantellini (University of Perugia, Italy)
The Mellin transform and the associated convolution integrals are intimately connected with the exponential sampling theorem; it is therefore important to develop the various tools of Mellin analysis. In this part we pave the way to sampling analysis by studying basic theoretical properties, including Mellin-type fractional integrals, and give a new approach to and version of these integrals, specifying their basic semigroup property. In particular, their domain and range need to be studied in detail.
pp. 274-276
18:20 Mellin analysis and exponential sampling. Part II: Mellin differential operators and sampling
Paul Butzer (RWTH Aachen, Germany); Carlo Bardaro (University of Perugia, Italy); Ilaria Mantellini (University of Perugia, Italy)
Here we introduce a notion of strong fractional derivative and study its connection with the pointwise fractional derivative, which is defined by means of Hadamard-type integrals. The main result is a fractional version of the fundamental theorem of integral and differential calculus in the Mellin frame. Finally, we present the first of several theorems in the sampling area, the highlight being the reproducing kernel theorem as well as its approximate version for functions that are not band-limited in the Mellin sense, both of which are new.
pp. 277-280

Wednesday, July 3

08:40 - 09:40

Sampling and high-dimensional convex geometry

Roman Vershynin
Room: Conference Hall
Chair: Holger Rauhut (University of Bonn, Germany)

This expository talk is about intuition in modern high dimensional convex geometry and random matrix theory. We will explore connections with sampling and quantization, in particular with problems arising in compressed sensing and high dimensional statistics.

10:00 - 12:00

Sampling in Bio Imaging

Brigitte Forster, Hagai Kirshner, Michael Unser
Room: Conference Hall
Chair: Hagai Kirshner (EPFL, Switzerland)
10:00 From super-resolution microscopy towards quantitative single-molecule biology
Mike Heilemann (Goethe-University Frankfurt, Germany)
In the fluorescence microscopy field, much interest has focused on new super-resolution techniques (collectively known as PALM/STORM, STED, SIM and others) [1] that have been demonstrated to bypass the diffraction limit and provide a spatial resolution reaching a near-molecular level. With these techniques, it has become possible to image cellular structures in far greater detail than ever before. Single-molecule based methods such as photoactivation-localization microscopy (PALM) [1], stochastic optical reconstruction microscopy (STORM) [2] and directSTORM (dSTORM) [3] employ photoswitchable fluorophores and single-molecule localization to generate a super-resolution image. These methods are uniquely suited not only to resolving small cellular structures, but also to quantifying them by extracting the number of molecules or stoichiometries [6]. This talk will summarize our recent efforts in single-molecule based super-resolution imaging, including experimental developments such as 3D tissue imaging [4] and new labeling strategies for cellular structures [5].
10:20 Optimisation and control of sampling rate in localisation microscopy
Seamus Holden (Swiss Federal Institute of Technology (EPFL), Switzerland); Thomas Pengo (Center for Genomic Regulation, Spain); Suliana Manley (EPFL, Switzerland)
Localisation microscopy (PALM/STORM, etc.) involves sampling sparse subsets of fluorescently labelled molecules, so that the density of bright ("active") molecules in a single frame is low enough to allow single-molecule sub-diffraction-limited localisation. The sampling rate, i.e., the mean number of active molecules per unit time, is controlled by the illumination intensity of a "photoactivation" UV laser. Two key sampling problems are inherent in any localisation microscopy measurement. 1. What is the maximum sampling rate before sub-diffraction-limited resolution is lost? The maximum sampling rate determines the temporal resolution of the technique. Clearly, the absolute maximum sampling rate is for all molecules to be active (conventional microscopy). However, the maximum usable sampling rate is largely determined by the localisation algorithm used. Our algorithm, DAOSTORM, is a high-density localisation algorithm that allows an order-of-magnitude increase in sampling rate compared to traditional low-density algorithms. 2. Can we automatically maintain the optimal sampling rate during data acquisition? A careful balance in sampling rate is required: if the sampling rate is too high, spatial resolution is reduced; if it is too low, temporal resolution is reduced. Traditionally, the sampling rate is controlled by continuous manual assessment of the density of molecules in each frame and manual adjustment of the photoactivation laser intensity. This is tedious and incompatible with automation. To resolve this, we present AutoLase, an algorithm for real-time closed-loop measurement and control of the sampling rate.
pp. 281-284
10:40 STORM by compressed sensing
Bo Huang (University of California, San Francisco, USA); Lei Zhu (Georgia Institute of Technology, USA); Joerg Schnitzbauer (University of California, San Francisco, USA)
In super-resolution microscopy methods based on single-molecule switching, each camera snapshot samples a random, sparse subset of probe molecules in the samples. The final super-resolution image is assembled from thousands of such snapshots. The rate of accumulating single-molecule activation events often limits the time resolution. We have developed a sparse-signal recovery technique using compressed sensing to analyze camera images with highly overlapping fluorescent spots. This method allows an activated fluorophore density an order of magnitude higher than what conventional single-molecule fitting methods can handle. Combination of compressed sensing with Bayesian statistics over the entire image sequence further enabled us to improve the spatial precision of determining fluorescent probe positions.
11:00 Video sampling and reconstruction using linear or non-linear Fourier measurements
Yoann Le-Montagner (Institute Pasteur, France); Elsa Angelini (Télécom ParisTech, France); Jean-Christophe Olivo-Marin (Institut Pasteur, France)
The theory of compressed sensing (CS) predicts that structured images can be sampled in a compressive manner with very few non-adaptive linear measurements, made in a proper adjacent domain. However, is such a recovery still possible with nonlinear measurements, such as the optically acquired Fourier modulus? Here, we investigate how phase retrieval methods can be extended to the problem of recovering a video signal from a subset of Fourier modulus samples, taking advantage of relevant sparse prior assumptions on the signal of interest. We compare this recovery technique to the usual convex reconstruction method encountered when dealing with linear CS measurements. We present simulation results obtained on real video sequences coming from biological imaging experiments.
11:20 Fast Maximum Likelihood High-density Low-SNR Super-resolution Localization Microscopy
Kyung Sang Kim (KAIST, Korea); Junhong Min (KAIST, Korea); Lina Carlini (EPFL, Switzerland); Michael Unser (EPFL, Switzerland); Suliana Manley (EPFL, Switzerland); Daejong Jeon (KAIST, Korea); Jong Chul Ye (KAIST, Korea)
Localization microscopy such as STORM/PALM achieves super-resolution by sparsely activating photo-switchable probes. However, to make the activation sparse enough for conventional reconstruction algorithms, only a small set of probes can be activated simultaneously, which limits the temporal resolution. Hence, to improve the temporal resolution to the level required for live-cell imaging, high-density imaging algorithms that can resolve several overlapping PSFs are required. In this paper, we propose a maximum likelihood algorithm under a Poisson noise model for high-density, low-SNR STORM/PALM imaging. Using a sparsity-promoting prior with a concave-convex procedure (CCCP) optimization algorithm, we achieve high-quality reconstructions with an ultra-fast reconstruction speed of 5 seconds per frame under high-density, low-SNR imaging conditions. Experimental results using simulated and real live-cell imaging data demonstrate that the proposed algorithm is more robust than conventional methods in terms of both localization accuracy and molecular recall rate.
pp. 285-288
11:40 Analogies and differences in optical and mathematical systems and approaches
Bettina Heise (CDL MS-MACH/ ZONA, FLLL, Johannes Kepler University Linz, Austria); Stefan Schausberger (JKU Linz, Austria); Martin Reinhardt (TU Bergakademie Freiberg, Germany); David Stifter (JKU Linz, Austria)
We review traditions and trends in optics and imaging that have recently arisen from the use of programmable optical devices and from sophisticated approaches to data evaluation and image reconstruction. Furthermore, a short overview is given of the modeling of well-known classical optical elements and, vice versa, of optical realizations of classical mathematical transforms, in particular the Fourier, Hilbert, and Riesz transforms.
pp. 289-292

Sampling and Geometry

Stephen Casey, Michael Robinson
Room: Conference Room
Chairs: Stephen D. Casey (American University & NWC at the University of Maryland, USA), Michael Robinson (American University, USA)
10:00 The Nyquist theorem for cellular sheaves
Michael Robinson (American University, USA)
We develop a unified sampling theory based on sheaves and show that the Shannon-Nyquist theorem is a cohomological consequence of an exact sequence of sheaves. Our theory indicates that there are additional cohomological obstructions for higher-dimensional sampling problems. Using these obstructions, we also present conditions for perfect reconstruction of piecewise linear functions on graphs, a collection of non-bandlimited functions on topologically nontrivial domains.
pp. 293-296
10:20 Frames of eigenspaces and localization of signal components
José Luis Romero (University of Vienna, Austria); Monika Doerfler (University of Vienna, Austria)
We present a construction of frames adapted to a given time-frequency cover and study certain computational aspects of it. These frames are based on a family of orthogonal projections that can be used to localize signals in the time-frequency plane. We compare the effect of the corresponding orthogonal projections to the traditional time-frequency masking.
pp. 297-300
10:40 A Lie group approach to diffusive wavelets
Swanhild Bernstein (TU Bergakademie Freiberg, Germany)
The aim of this paper is to give an overview of diffusive wavelets on compact groups, homogeneous spaces and the Heisenberg group. This approach is based on Lie groups and representation theory and generalizes well-known constructions of wavelets on the sphere. The key idea of diffusive wavelets is to generate dilation from a diffusive semigroup, whereas translation is given by the action of a compact group. We give examples of the construction of diffusive wavelets.
pp. 301-304
11:00 Shannon Sampling and Parseval Frames on Compact Manifolds
Isaac Pesenson (Temple University and CCP, USA)
The paper contains several generalizations of the classical sampling theorem for band-limited functions, constructed using a self-adjoint second-order elliptic differential operator on compact homogeneous manifolds.
pp. 305-308
11:20 Signal Analysis with Frame Theory and Persistent Homology
Mijail Guillemard (Technische Universität Berlin, Germany); Holger Boche (Technical University Munich, Germany); Gitta Kutyniok (Technical University Berlin, Germany); Friedrich Philipp (Technische Universität Berlin, Germany)
A basic task in signal analysis is to characterize data in a meaningful way for analysis and classification purposes. Time-Frequency transforms are powerful strategies for signal decomposition, and important recent generalizations have been achieved in the setting of frame theory. In parallel recent developments, tools from algebraic topology, traditionally developed in purely abstract settings, have provided new insights in applications to data analysis. In this report, we investigate some interactions of these tools, both theoretically and with numerical experiments in order to characterize signals and their corresponding adaptive frames. We explain basic concepts in persistent homology as an important new subfield of computational topology, as well as formulations of time-frequency analysis in frame theory. Our objective is to use persistent homology for constructing topological signatures of signals in the context of frame theory for classification and analysis purposes. The main motivation for studying these interactions is to combine the strength of frame theory as a fundamental signal analysis methodology, with persistent homology as a novel perspective in data analysis.
pp. 309-312
11:40 Signal Adaptive Frame Theory
Stephen D. Casey (American University & NWC at the University of Maryland, USA)
The projection method is an atomic signal decomposition designed for adaptive frequency band (AFB) and ultra-wide-band (UWB) systems. The method first windows the signal and then decomposes the signal into a basis via a continuous-time inner product operation, computing the basis coefficients in parallel. The windowing systems are key, and we develop systems that have variable partitioning length, variable roll-off and variable smoothness. These include systems developed to preserve orthogonality of any orthonormal systems between adjacent blocks, and almost orthogonal windowing systems that are more computable/constructible than the orthogonality preserving systems. The projection method is, in effect, an adaptive Gabor system for signal analysis. The natural language to express this structure is frame theory.
pp. 313-316

Thursday, July 4

08:40 - 09:40

Signal recognition and filter identification

Nikolai Nikolskii
Room: Conference Hall
Chair: Bruno Torrésani (Aix-Marseille Université, France)
Given a linear space of stationary filters A = {F}, S --> R = S*F, the following two problems are briefly surveyed. (1) The stable signal recognition problem consists in deciding whether it is possible to control the upper bound c in the signal recognition estimate ||S|| <= c||R|| = c||S*F|| in terms of the lower bound of the energy spectrum only. The classical Wiener and Beurling-Sobolev algebras are considered. In particular, it is explained why there exists no constructive proof of the Wiener 1/F theorem. (2) We say that Weak Filter Identification (WFI) holds on a space X for a sampling (observation) grid T (a subset of the real line R) if there is a test signal S such that F*S(t) = 0 for all t in T implies F = 0. Discrete and continuous time filters are considered; in particular, for X = L^{p}(R) and T = Z (or any other discrete subgroup of R), WFI holds iff p < 2. If time permits we discuss a special choice of X and T which leads to the entire dilation problem, the cyclic vectors in the Hardy space on the infinite-dimensional Hilbert multi-disc, and the Riemann hypothesis.

10:00 - 12:00

Sampling of Bandlimited Functions

Room: Conference Room
Chair: Gerhard Schmeisser (Erlangen-Nürnberg University, Germany)
10:00 Generalized oversampling with missing samples
Sinuk Kang (National Institute for Mathematical Sciences, Korea); Kil Hyun Kwon (KAIST, Korea); Dae Gwan Lee (KAIST, Korea)
It is well known that in the classical Shannon sampling theory on band-limited signals, any finitely many missing samples can be recovered when the signal is oversampled at a rate higher than the minimum Nyquist rate. In this work, we consider the problem of recovering missing samples from multi-channel oversampling in a general shift-invariant space. We find conditions under which any finite or infinite number of missing samples can be recovered.
pp. 337-340
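As a single-channel illustration of why oversampling permits the recovery of missing samples (the paper treats the more general multi-channel, shift-invariant setting), the classical Papoulis-Gerchberg iteration alternates band-limiting with re-imposing the known samples. All parameters below are illustrative assumptions.

    import numpy as np

    n, band = 256, 20                            # signal length; retained one-sided bandwidth (oversampled)
    rng = np.random.default_rng(0)

    def bandlimit(x):
        X = np.fft.fft(x)
        X[band + 1:n - band] = 0                 # zero out the high-frequency bins
        return np.real(np.fft.ifft(X))

    x = bandlimit(rng.standard_normal(n))        # oversampled band-limited signal
    missing = np.array([30, 31, 100, 200])       # indices of lost samples
    mask = np.ones(n, dtype=bool)
    mask[missing] = False

    est = np.where(mask, x, 0.0)                 # start from the corrupted signal
    for _ in range(2000):                        # band-limit, then restore the known samples
        est = bandlimit(est)
        est[mask] = x[mask]
    err = np.max(np.abs(est[missing] - x[missing]))   # should be small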
10:20 Identification of Rational Transfer Functions from Sampled Data
Hagai Kirshner (EPFL, Switzerland); John Paul Ward (Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland); Michael Unser (EPFL, Switzerland)
We consider the task of estimating an operator from sampled data. The operator, which is described by a rational transfer function, is applied to continuous-time white noise and the resulting continuous-time process is sampled uniformly. The main question we address is whether the stochastic properties of the time series that originates from the sample values of the process allow one to determine the operator. We focus on the autocorrelation property of the process and identify cases for which the sampling operator is injective. Our approach relies on sampling properties of almost periodic functions, which, together with exponentially decaying functions, provide the building blocks of the autocorrelation measure. Our results indicate that it is possible, in principle, to estimate the parameters of the rational transfer function from sampled data, even in the presence of prominent aliasing.
pp. 341-343
10:40 Reconstruction of Signals from Highly Aliased Multichannel Samples by Generalized Matching Pursuit
Ali Ozbek (Schlumberger Cambridge Research, United Kingdom); Massimiliano Vassallo (WesternGeco London Technology Centre, United Kingdom); Kemal Ozdemir (WesternGeco Oslo Technology Center, Norway); Dirk-Jan van Manen (Schlumberger Cambridge Research, United Kingdom); Kurt Eggenberger (Schlumberger, USA)
This paper considers the problem of reconstructing a bandlimited signal from severely aliased multichannel samples. Multichannel sampling in this context means that the samples are available after the signal has been filtered by various linear operators. We propose the method of Generalized Matching Pursuit to solve the reconstruction problem. We illustrate the potential of the method using synthetic data that could be acquired using multimeasurement towed-streamer seismic data acquisition technology. A remarkable observation is that high-fidelity reconstruction is possible even when the data are uniformly and coarsely sampled, with the order of aliasing significantly exceeding the number of channels.
pp. 344-347
11:00 Joint Signal Sampling and Detection
Mirek Pawlak (University of Manitoba, Canada)
In this paper, we examine the joint signal sampling and detection problem when noisy samples of a signal are collected in a sequential fashion. In such a scheme, at each observation time point we wish to decide whether the observed data record represents a signal of the assumed target form. Moreover, we are simultaneously able to recover the signal when it departs from the target class. For this joint signal detection and recovery setup, we introduce a novel algorithm relying on the smooth correction of linear sampling schemes. Given a finite frame of noisy samples of the signal, we design a detector able to test for a departure from a target signal as quickly as possible. Our detector is represented as a continuous-time normalized partial-sum stochastic process, for which we obtain a functional central limit theorem under weak assumptions on the correlation structure of the noise. The established limit theorems allow us to design monitoring algorithms with the desired probability of false alarm that detect a change with probability approaching one.
pp. 348-351
11:20 On Optimal Sampling Trajectories for Mobile Sensing
Jayakrishnan Unnikrishnan (EPFL, Switzerland); Martin Vetterli (EPFL, Switzerland)
We study the design of sampling trajectories for stable sampling and reconstruction of bandlimited spatial fields using mobile sensors. As a performance metric we use the path density of a set of sampling trajectories, defined as the total distance traveled by the moving sensors per unit spatial volume of the spatial region being monitored. We obtain new results for the problem of designing stable sampling trajectories with minimal path density, that admit perfect reconstruction of bandlimited fields. In particular, we identify the set of parallel lines with minimal path density that contains a stable sampling set for isotropic fields.
pp. 352-355
11:40 Phase Retrieval via Structured Modulations in Paley-Wiener Spaces
Fanny Yang (UC Berkeley, USA); Volker Pohl (Technische Universität München, Germany); Holger Boche (Technical University Munich, Germany)
This paper considers the recovery of continuous-time signals from the magnitudes of their samples. It uses a combination of structured modulation and oversampling and provides sufficient conditions on the signal and the sampling system under which signal recovery is possible. In particular, it is shown that an average sampling rate of four times the Nyquist rate is sufficient to reconstruct a signal from its magnitude measurements.
pp. 356-359

Sampling for Imaging Science

Jalal Fadili and Gabriel Peyré
Room: Conference Hall
Chair: Gabriel Peyré (CNRS and Université Paris-Dauphine, France)
10:00 Joint reconstruction of misaligned images from incomplete measurements for cardiac MRI
Gilles Puy (EPFL, Switzerland); Gabriele Bonanno (University of Lausanne, Switzerland); Matthias Stuber (University of Lausanne, Switzerland); Pierre Vandergheynst (EPFL, Switzerland)
We present a novel method for robust reconstruction of the image of a moving object from incomplete linear measurements. We assume that only few measurements of this object can be acquired at different instants and model the correlation between measurements using global geometric transformations represented by few parameters. Then, we design a method that is able to jointly estimate these transformation parameters and an image of the object, while taking into account possible occlusions of parts of the object during the acquisitions. The reconstruction algorithm minimizes a non-convex functional and generates a sequence of estimates converging to a critical point of this functional. Finally, we show how to apply this algorithm on a real cardiac acquisition for free breathing coronary magnetic resonance imaging.
pp. 317-320
10:40 Localization of point sources in wave fields from boundary measurements using new sensing principle
Zafer Dogan (EPFL, Switzerland); Ivana Jovanovic (EPFL Lausanne, Switzerland); Thierry Blu (Chinese University of Hong Kong, Hong Kong); Dimitri Van De Ville (Ecole Polytechnique Fédérale de Lausanne, Switzerland)
We address the problem of localizing point sources in 3-D from boundary measurements of a wave field. Recently, we proposed the sensing principle, which allows extracting volumetric samples of the unknown source distribution from the boundary measurements. The extracted samples enable a non-iterative reconstruction algorithm that can recover the parameters of the source distribution projected on a 2-D plane in the continuous domain without any discretization. Here we extend the method to the 3-D localization of multiple point sources by combining multiple 2-D planar projections. In particular, we propose a three-step algorithm to retrieve the locations by means of a multiplanar application of the sensing principle. First, we find the projections of the locations onto several 2-D planes. Second, we propose a greedy algorithm to pair the solutions in each plane. Third, we retrieve the 3-D locations by least-squares regression.
pp. 321-324
11:00 Compressive Acquisition of Sparse Deflectometric Maps
Prasad Sudhakar (Universite Catholique de Louvain, Belgium); Laurent Jacques (University of Louvain, Belgium); Adriana Gonzalez Gonzalez (Université Catholique Louvain, Belgium); Xavier Dubois (Lambda-X SA, Belgium); Philippe Antoine (Lambda-X SA, Belgium); Luc Joannes (Lambda-X SA, Belgium)
Schlieren deflectometry aims at measuring deflections of light rays from transparent objects, which is subsequently used to characterize the objects. With each location on a smooth object surface a sparse deflection map (or spectrum) is associated. In this paper, we demonstrate the compressive acquisition and reconstruction of such maps, and the usage of deflection information for object characterization, using a schlieren deflectometer. To this end, we exploit the sparseness of deflection maps and we use the framework of spread spectrum compressed sensing. Further, at a second level, we demonstrate how to use the deflection information optimally to reconstruct the distribution of refractive index inside an object, by exploiting the sparsity of refractive index maps in gradient domain.
pp. 325-328
11:20 Fourier-Laguerre transform, convolution and wavelets on the ball
Jason McEwen (University College London, United Kingdom); Boris Leistedt (University College London, United Kingdom)
We review the Fourier-Laguerre transform, an alternative harmonic analysis on the three-dimensional ball to the usual Fourier-Bessel transform. The Fourier-Laguerre transform exhibits an exact quadrature rule and thus leads to a sampling theorem on the ball. We study the definition of convolution on the ball in this context, showing explicitly how translation on the radial line may be viewed as convolution with a shifted Dirac delta function. We review the exact Fourier-Laguerre wavelet transform on the ball, coined flaglets, and show that flaglets constitute a tight frame.
pp. 329-332
11:40 Truncation Error in Image Interpolation
Loic Simon (ENSICAEN Ecole Nationale Superieure d'Ingenieurs de Caen, France)
Interpolation is a fundamental issue in image processing. In this short paper, we communicate ongoing results concerning the accuracy of two landmark approaches: the Shannon expansion and the DFT interpolation. Among all sources of error, we focus on the impact of spatial truncation. Our estimations are expressed in the form of upper bounds on the Root Mean Square Error as a function of the distance to the image border. The quality of these bounds is appraised through experiments driven on natural images.
pp. 333-336

13:20 - 15:00

Algorithms

Room: Conference Room
Chair: Pavel Zheltov (Jacobs University Bremen, Germany)
13:20 Optimal Interpolation Laws for Stable AR(1) Processes
Arash Amini (EPFL, Switzerland)
In this paper, we focus on the problem of interpolating a continuous-time AR(1) process with stable innovations using a minimum average error criterion. Stable innovations can be either Gaussian or non-Gaussian. In the former case, the optimality of exponential splines is well understood. For non-Gaussian innovations, however, the problem has all too often been addressed through Monte Carlo methods. In this paper, based on a recent non-Gaussian stochastic framework, we revisit AR(1) processes in the context of stable innovations and derive explicit expressions for the optimal interpolator. We find that the interpolator depends on the stability index of the innovation and is linear for all stable laws, including the Gaussian case. We also show that the solution can be expressed in terms of exponential splines.
pp. 380-383
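As a point of reference (the classical Gaussian special case, not the paper's general result): for an Ornstein-Uhlenbeck process with autocorrelation $e^{-a|t|}$, the minimum mean-square interpolator between two consecutive samples $x(0)$ and $x(T)$ is the exponential spline \[ \hat{x}(t)=\frac{\sinh\big(a(T-t)\big)}{\sinh(aT)}\,x(0)+\frac{\sinh(at)}{\sinh(aT)}\,x(T),\qquad 0\le t\le T\,; \] the paper shows that an analogous linear, exponential-spline form persists for all stable innovation laws.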
13:40 Hierarchical Tucker Tensor Optimization - Applications to Tensor Completion
Curt Da Silva (University of British Columbia, Canada); Felix J. Herrmann (the University of British Columbia, Canada)
In this work, we develop an optimization framework for problems whose solutions are well-approximated by Hierarchical Tucker tensors, an efficient structured tensor format based on recursive subspace factorizations. Using the differential geometric tools presented here, we construct standard optimization algorithms such as Steepest Descent and Conjugate Gradient, for interpolating tensors in HT format. We also empirically examine the importance of one's choice of data organization in the success of tensor recovery by drawing upon insights from the Matrix Completion literature. Using these algorithms, we recover various seismic data sets with randomly missing source pairs.
pp. 384-387
14:00 Estimation of large data sets on the basis of sparse sampling
Anatoli Torokhti (University of South Australia, Australia); Phil Howlett (University of South Australia, Australia); Hamid Laga (University of South Australia & Phenomics and Bioinformatics Research Centre, Australia)
We propose a new technique which allows us to estimate any random signal from a large set of noisy observed data on the basis of samples of only a few reference signals.
pp. 388-391
14:20 Analysis of Hierarchical Image Alignment with Descent Methods
Elif Vural (Ecole Polytechnique Federale de Lausanne, Switzerland); Pascal Frossard (EPFL, Switzerland)
We present a performance analysis for image registration with gradient descent methods. We consider a multiscale registration setting where the global 2-D translation between a pair of images is estimated by smoothing the images and minimizing the distance between their intensity functions with gradient descent. We focus in particular on the effect of low-pass filtering on the alignment performance. We adopt an analytic representation for images and analyze the well-behavedness of the distance function by estimating the neighborhood of translations for which the distance function is free of undesired local minima. This corresponds to the set of translation vectors that are correctly computable with a simple gradient descent minimization. We show that the area of this neighborhood increases at least quadratically with the filter size, which justifies the use of smoothing in image registration with local optimizers. We finally use our results in the design of a regular multiscale grid in the translation parameter domain that has perfect alignment guarantees.
pp. 392-395
14:40 Spectrum Reconstruction from Sub-Nyquist Sampling of Stationary Wideband Signals
Deborah Cohen (Technion - Israel Institute of Technology, Israel); Yonina C. Eldar (Technion-Israel Institute of Technology, Israel)
In light of the ever-increasing demand for new spectral bands and the underutilization of those already allocated, the new concept of Cognitive Radio (CR) has emerged. Opportunistic users could exploit temporarily vacant bands after detecting the absence of activity of their owners. One of the most crucial tasks in the CR cycle is therefore spectrum sensing and detection, which has to be precise and efficient. Yet, CRs typically deal with wideband signals whose Nyquist rates are very high. In this paper, we propose to reconstruct the spectrum of such signals from sub-Nyquist samples in order to perform detection. We consider both sparse and non-sparse signals, as well as blind and non-blind detection in the sparse case. For each of these scenarios, we derive the minimal sampling rate allowing perfect reconstruction of the signal spectrum in a noise-free environment and provide recovery techniques. Simulations show spectrum recovery at the minimal rate in noise-free settings.
pp. 396-399

Advances in Compressive Sensing

Holger Rauhut, Joel Tropp
Room: Conference Hall
Chair: Holger Rauhut (University of Bonn, Germany)
13:20 Energy-aware adaptive bi-Lipschitz embeddings
Bubacarr Bah (École Polytechnique Fédérale de Lausanne (EPFL), Switzerland); Volkan Cevher (Ecole Polytechnique Federale de Lausanne, Switzerland); Ali Sadeghian (Sharif University of Technology, Iran)
We propose a dimensionality-reducing matrix design based on training data, with constraints on the matrix's Frobenius norm and number of rows. Our design criterion aims at preserving the distances between the data points in the dimensionality-reduced space as much as possible relative to their distances in the original data space. This approach can be considered a deterministic bi-Lipschitz embedding of the data points. We introduce a scalable learning algorithm, dubbed AMUSE, and provide a rigorous estimation guarantee by leveraging game-theoretic tools. We also provide a generalization characterization of our matrix based on the sample data. We use compressive sensing as an example application, where the Frobenius-norm design constraint translates into the sensing energy.
pp. 360-363
13:40 Randomized Singular Value Projection
Stephen Becker (IBM T. J. Watson Research Center, Yorktown Heights, New York, USA); Volkan Cevher (Ecole Polytechnique Federale de Lausanne, Switzerland); Anastasios Kyrillidis (EPFL & IBM Research Lab Zurich, Switzerland)
Affine rank minimization algorithms typically rely on calculating the gradient of a data error followed by a singular value decomposition at every iteration. Because these two steps are expensive, heuristic approximations are often used to reduce computational burden. To this end, we propose a recovery scheme that merges the two steps with randomized approximations, and as a result, operates on space proportional to the degrees of freedom in the problem. We theoretically establish the estimation guarantees of the algorithm as a function of approximation tolerance. While the theoretical approximation requirements are overly pessimistic, we demonstrate that in practice the algorithm performs well on the quantum tomography recovery problem.
pp. 364-367
14:00 On Sparsity Averaging
Rafael Carrillo (EPFL, Switzerland); Jason McEwen (University College London, United Kingdom); Yves Wiaux (EPFL, Switzerland)
Recent developments in [1] and [2] introduced a novel regularization method for compressive imaging in the context of compressed sensing with coherent redundant dictionaries. The approach relies on the observation that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted L1 scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We review these advances and extend associated simulations establishing the superiority of SARA to regularization methods based on sparsity in a single frame, for a generic spread spectrum acquisition and for a Fourier acquisition of particular interest in radio astronomy.
pp. 368-371
14:20 Conditions for Dual Certificate Existence in Semidefinite Rank-1 Matrix Recovery
Paul Hand (Massachusetts Institute of Technology, USA)
We study the existence of dual certificates in convex minimization problems where a rank-1 matrix X0 is to be recovered under semidefinite and linear constraints. We provide an example where such a dual certificate does not exist. We prove that dual certificates are guaranteed to exist if the linear measurement matrices cannot be recombined to form something positive and orthogonal to X0. If the measurements can be recombined in this way, the problem is equivalent to one with additional linear constraints. That augmented problem is guaranteed to have a dual certificate at the minimizer, providing the form of an optimality certificate for the original problem.
pp. 372-375
14:40 The restricted isometry property for random convolutions
Felix Krahmer (University of Göttingen, Germany); Shahar Mendelson (Technion, Israel); Holger Rauhut (University of Bonn, Germany)
We present significantly improved estimates for the restricted isometry constants of partial random circulant matrices as they arise in the matrix formulation of subsampled convolution with a random pulse. We show that the required condition on the number $m$ of rows in terms of the sparsity $s$ and the vector length $n$ is $m > C s \log^2 s \log^2 n$.
pp. 376-379

15:00 - 16:20

Poster Session II

coffee served
Chair: Goetz Pfander (Jacobs University Bremen, Germany)
Multivariate sampling Kantorovich operators: approximation and applications to civil engineering
Federico Cluni (University of Perugia, Italy); Danilo Costarelli (University of Roma 3, Italy); Anna Maria Minotti (University of Perugia, Italy); Gianluca Vinti (University of Perugia, Italy)
In this paper, we present the theory and some new applications of linear multivariate sampling Kantorovich operators. By means of these operators, we are able to reconstruct pointwise continuous and bounded signals (functions), and to approximate uniformly those that are uniformly continuous and bounded. Moreover, the reconstruction of signals belonging to Orlicz spaces is also considered. In the latter case, we show how our operators can be used to approximate signals/images that are not necessarily continuous, and an algorithm for image reconstruction is developed. Several applications of the theory in civil engineering are obtained. Thermographic images, such as images of masonry, are processed to study the texture of buildings and thereby separate the stones from the mortar; finally, a real-world case study is analyzed in terms of structural analysis.
pp. 400-403
On the Number of Degrees of Freedom of Band-Limited Functions
Tatiana Levitina (TU Braunschweig, Germany)
The concept of the number of degrees of freedom of band-limited signals is discussed. Classes of band-limited signals obtained as a result of successive application of the truncated direct and truncated inverse Fourier transforms are shown to possess a finite number of degrees of freedom.
pp. 404-407
Tracing Sound Objects in Audio Textures
Monika Doerfler (University of Vienna, Austria); Ewa Matusiak (Vienna, Austria)
This contribution presents first results on two proposed methods to trace sound objects within texture sounds. We first discuss what we mean by these two notions and explain how the properties of a sound that is known to be textural are exploited in order to detect changes which suggest the presence of a distinct sound event. We introduce two approaches, one is based on Gabor multipliers mapping consecutive time-segments of the signal to each other, the other one on dictionary learning. We present the results of simulations based on real data.
pp. 408-411
An Uncertainty Principle for Discrete Signals
Sangnam Nam (Aix-Marseille Université, France)
By the use of window functions, time-frequency analysis tools like the short-time Fourier transform overcome a shortcoming of the Fourier transform and enable us to study the time-frequency characteristics of signals that exhibit transient oscillatory behaviour. Since the resulting representations depend on the choice of the window functions, it is important to know how they influence the analyses. One crucial question about a window function is how accurately it permits us to analyze signals in the time and frequency domains. In the continuous domain (for functions defined on the real line), the limit on the accuracy is well established by Heisenberg's uncertainty principle when the time-frequency spread is measured in terms of variance measures. However, for finite discrete signals (where we consider the discrete Fourier transform), the uncertainty relation is not as well understood. Our work fills in some of this gap and states an uncertainty relation for a subclass of finite discrete signals. Interestingly, the result closely parallels that of the continuous domain: the time-frequency spread measure is, in some sense, a natural generalization of the variance measure in the continuous domain, the lower bound for the uncertainty is close to that of the continuous domain, and the lower bound is achieved approximately by the `discrete Gaussians'.
pp. 412-415
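As a point of reference (a standard fact, not part of the paper), the continuous-domain benchmark mentioned above reads: with $\hat f(\omega)=\int f(t)e^{-i\omega t}\,dt$, $\|f\|_2=1$, mean time $\bar t$ and mean frequency $\bar\omega$, \[ \Big(\int_{\mathbb{R}}(t-\bar t)^2|f(t)|^2\,dt\Big)\Big(\frac{1}{2\pi}\int_{\mathbb{R}}(\omega-\bar\omega)^2|\hat f(\omega)|^2\,d\omega\Big)\;\ge\;\frac{1}{4}, \] with equality exactly for translated and modulated Gaussians.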
Efficient Simulation of Continuous Time Digital Signal Processing RF Systems
Alin Ratiu (CEA & INSA de Lyon, France); Dominique Morche (CEA Leti, France); Arnaud Arias (CEA Leti, France); Bruno Allard (INSA Lyon, France); Xuefang Lin-Shi (INSA Lyon, France); Jacques Verdier (Institut National des Sciences Appliquées, France)
A new simulation method for continuous-time digital signal processing RF architectures is proposed. The approach is based on a discrete-time representation of the input signal combined with linear interpolation. Detailed theoretical calculations are presented, which prove the efficiency of the simulation when dealing with RF signals. We show that, compared to a discrete-time simulation, the necessary number of input samples is expected to decrease by almost two orders of magnitude for the same simulation error.
pp. 416-419
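As a toy illustration of the underlying representation (a sketch only, not the authors' simulator; the sample rate, test tone and event instants below are arbitrary choices), a uniformly sampled stand-in for the RF input can be evaluated at off-grid instants by linear interpolation:

    import numpy as np

    fs = 10e9                          # rate of the discrete-time representation
    t = np.arange(0, 1e-6, 1 / fs)     # one microsecond of signal
    x = np.cos(2 * np.pi * 900e6 * t)  # a 900 MHz tone as a stand-in RF input

    # Event instants generally fall between grid points; linear interpolation
    # between the two neighbouring samples approximates the continuous waveform.
    t_events = np.sort(np.random.default_rng(11).uniform(0, t[-1], 50))
    x_events = np.interp(t_events, t, x)
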
Shift-Variance and Cyclostationarity of Linear Periodically Shift-Variant Systems
Bashir Sadeghi (Eastern Mediterranean University, Turkey); Runyi Yu (Eastern Mediterranean University, Turkey)
We study shift-variance and cyclostationarity of linear periodically shift-variant (LPSV) systems. Both the input and output spaces are assumed to be continuous-time. We first determine how far an LPSV system is from the space of linear shift-invariant systems. We then consider cyclostationarity of a random process based on its autocorrelation operator. The results allow us to investigate properties of the output of an LPSV system when its input is a random process. Finally, we analyze shift-variance and cyclostationarity of generalized sampling-reconstruction processes.
pp. 420-423
Constructive sampling for patch-based embedding
Moshe Salhov (Tel Aviv University, Israel); Guy Wolf (Tel Aviv University, Israel); Amit Bermanis (Tel-Aviv University, Israel); Amir Averbuch (Tel Aviv University, Israel)
To process high-dimensional big data, we assume that sufficiently small patches (or neighborhoods) of the data are approximately linear. These patches represent the tangent spaces of an underlying manifold structure from which we assume the data is sampled. We use these tangent spaces to extend the scalar relations that are used by many kernel methods to matrix relations, which encompass multidimensional similarities between local neighborhoods in the data. The incorporation of these matrix relations improves the utilization of kernel-based data analysis methodologies. However, they also result in a larger kernel and a higher computational cost of its spectral decomposition. We propose a dictionary construction that approximates the oversized kernel in this case and its associated patch-to-tensor embedding. The performance of the proposed dictionary construction is demonstrated on a super-kernel example that utilizes the Diffusion Maps methodology together with linear-projection operators between tangent spaces in the manifold.
pp. 424-427
The Constrained Earth Mover Distance Model, with Applications to Compressive Sensing
Ludwig Schmidt (MIT, USA); Chinmay Hegde (MIT, USA); Piotr Indyk (MIT, USA)
Sparse signal representations have emerged as powerful tools in signal processing theory and applications, and serve as the basis of the now-popular field of compressive sensing (CS). However, several practical signal ensembles exhibit additional, richer structure beyond mere sparsity. Our particular focus in this paper is on signals and images where, owing to physical constraints, the positions of the nonzero coefficients do not change significantly as a function of spatial (or temporal) location. Such signal and image classes are often encountered in seismic exploration, astronomical sensing, and biological imaging. Our contributions are threefold: (i) We propose a simple, deterministic model based on the Earth Mover Distance that effectively captures the structure of the sparse nonzeros of signals belonging to such classes. (ii) We formulate an approach for approximating any arbitrary signal by a signal belonging to our model. The key idea in our approach is a min-cost max-flow graph optimization problem that can be solved efficiently in polynomial time. (iii) We develop a CS algorithm for efficiently reconstructing signals belonging to our model, and numerically demonstrate its benefits over state-of-the-art CS approaches.
pp. 428-431
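To make the metric concrete (a minimal sketch, not the authors' model or recovery algorithm): in one dimension the Earth Mover Distance between two nonnegative vectors of equal mass reduces to the l1 distance of their cumulative sums, so supports whose nonzeros drift slowly are close under this distance even when they barely overlap.

    import numpy as np

    def emd_1d(p, q):
        """1-D Earth Mover Distance between equal-mass nonnegative vectors on
        the grid 0,...,n-1: the l1 distance between the cumulative sums."""
        return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

    a = np.zeros(32); a[[4, 12, 20]] = 1.0     # three unit spikes
    b = np.zeros(32); b[[5, 13, 21]] = 1.0     # each spike shifted by one
    print(emd_1d(a, b))                        # 3.0: each spike moved one position
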
Orlicz Modulation Spaces
Catherine Schnackers (RWTH Aachen University, Germany); Hartmut Führ (RWTH Aachen University, Germany)
In this work we extend the definition of modulation spaces associated to Lebesgue spaces to Orlicz spaces and mixed-norm Orlicz spaces. We give the definition of the Orlicz spaces $L^\Phi$, a generalisation of the $L^p$ spaces of Lebesgue. Therefore we characterise the Young function $\Phi$ and give some basic properties of this spaces. We collect some facts about this spaces that we need for the time frequency analysis, then we introduce the Orlicz modulation spaces. Finally we present a discretisation of the Orlicz space and mixed-norm Orlicz space and a characterisation of the modulation space by discretisation.
pp. 432-435
Binary Reduced Row Echelon Form Approach for Subspace Segmentation
Ali Sekmen (Tennessee State University, USA); Akram Aldroubi (Vanderbilt University, USA)
This paper introduces a subspace segmentation and data clustering method for a set of data drawn from a union of subspaces. The proposed method works perfectly in the absence of noise, i.e., it can find the number of subspaces, their dimensions, and an orthonormal basis for each subspace. The effect of noise on this approach depends on the noise level and the relative positions of the subspaces. We provide a comprehensive performance analysis in the presence of noise and outliers.
pp. 436-439
Missing Entries Matrix Approximation and Completion
Gil Shabat (Tel-Aviv University, Israel); Yaniv Shmueli (Tel-Aviv University, Israel); Amir Averbuch (Tel Aviv University, Israel)
We describe several algorithms for matrix completion and matrix approximation when only some of the entries are known. The approximation constraint can be any constraint whose approximated solution is known for the full matrix. For low-rank approximations, similar algorithms have recently appeared in the literature under different names. In this work, we introduce new theorems for matrix approximation and show that these algorithms can be extended to handle constraints other than low-rank approximation, such as the nuclear norm, the spectral norm, and orthogonality constraints. As the algorithms can be viewed from an optimization point of view, we discuss their convergence to a global solution in the convex case. We also discuss the optimal step size and show that it is fixed in each iteration. In addition, the derived matrix completion flow is robust and does not require any parameters. This matrix completion flow is applicable to different spectral minimizations and can be applied to problems in physics, mathematics and electrical engineering, such as the reconstruction of images and of data coming from PDEs such as the Helmholtz equation used for electromagnetic waves.
pp. 440-443
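A minimal sketch of this kind of alternating scheme, specialized to a fixed-rank constraint (the paper treats more general constraints such as the nuclear or spectral norm; the dimensions, rank and observation ratio below are arbitrary):

    import numpy as np

    def complete_low_rank(M, mask, rank, n_iter=200):
        """Alternate between projecting onto rank-r matrices (truncated SVD)
        and re-imposing the observed entries."""
        X = np.where(mask, M, 0.0)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # constraint projection
            X[mask] = M[mask]                          # keep known entries
        return X

    rng = np.random.default_rng(0)
    M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank 2
    mask = rng.random(M.shape) < 0.6                                 # 60% observed
    print(np.linalg.norm(complete_low_rank(M, mask, 2) - M) / np.linalg.norm(M))
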
Using Affinity Perturbations to Detect Web Traffic Anomalies
Yaniv Shmueli (Tel-Aviv University, Israel); Tuomo Sipola (University of Jyväskylä, Finland); Gil Shabat (Tel-Aviv University, Israel); Amir Averbuch (Tel Aviv University, Israel)
The initial training phase of machine learning algorithms is usually computationally expensive, as it involves the processing of huge matrices. Evolving datasets are challenging from this point of view because changing behavior requires updating the training. We propose a method for updating the training profile efficiently and a sliding window algorithm for online processing of the data in smaller fractions. This assumes the data is modeled by a kernel method that includes a spectral decomposition. We demonstrate the algorithm on a web server request log in which an actual intrusion attack is known to happen. Updating the kernel dynamically using a sliding window technique avoids relying on a single initial training phase and can process evolving datasets more efficiently.
pp. 444-447
Finite Rate of Innovation Signals: Quantization Analysis with Resistor-Capacitor Acquisition Filter
Srikanth Tenneti (California Institute of Technology, USA); Animesh Kumar (Indian Institute of Technology Bombay, India); Abhay Karandikar (IIT Bombay, India)
Finite rate of innovation (FRI) signals, which are usually not bandlimited, have been studied as an alternative model for signal sampling and reconstruction. Sampling and perfect reconstruction of FRI signals was first presented by Vetterli, Marziliano, and Blu. A typical FRI reconstruction algorithm requires solving for the FRI signal parameters from a power-sum series. This in turn requires annihilation filters and root-finding techniques. These non-linear steps complicate the analysis of FRI signal reconstruction in the presence of quantization. In this work, we introduce a resistor-capacitor filter bank for sample acquisition of FRI signals and an associated signal reconstruction scheme which uses much simpler operations than the existing techniques. This simplification allows us to analyze the effect of quantization noise. However, the sampling rate required for our scheme is larger than the minimum sampling rate of FRI signals.
pp. 448-451
Tangent space estimation bounds for smooth manifolds
Hemant Tyagi (ETH Zurich, Switzerland); Elif Vural (Ecole Polytechnique Federale de Lausanne, Switzerland); Pascal Frossard (EPFL, Switzerland)
Many manifold learning methods require the estimation of the tangent space of the manifold at a point from locally available data samples. Local sampling conditions such as (i) the size of the neighborhood and (ii) the number of samples in the neighborhood affect the performance of learning algorithms. In this paper, we propose a theoretical analysis of local sampling conditions for the estimation of the tangent space at a point P lying on an m-dimensional Riemannian manifold S in R^n. Assuming a smooth embedding of S in R^n, we estimate the tangent space by performing a Principal Component Analysis (PCA) on points sampled from the neighborhood of P on S. Our analysis explicitly takes into account the second-order properties of the manifold at P, namely the principal curvatures, as well as higher-order terms. Considering a random sampling framework, we leverage recent results from random matrix theory to derive local sampling conditions for an accurate estimation of the tangent subspace. Our main results state that the width of the sampling region in the tangent space that guarantees an accurate estimation is inversely proportional to the manifold dimension, the curvature, and the square root of the ambient space dimension. At the same time, we show that the number of samples increases quadratically with the manifold dimension and logarithmically with the ambient space dimension.
pp. 452-455
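A small numerical sketch of the estimation procedure analysed above, i.e., local PCA on a neighbourhood (the manifold, radius and sample size are arbitrary choices, and the paper's bounds are not reproduced):

    import numpy as np

    def tangent_space(points, p, radius, m):
        """Estimate the tangent space at p by PCA on the samples lying within
        the given radius; returns an (n x m) orthonormal basis."""
        nbrs = points[np.linalg.norm(points - p, axis=1) <= radius]
        centered = nbrs - nbrs.mean(axis=0)
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        return Vt[:m].T

    # unit sphere in R^3 (m = 2): tangent plane at the north pole
    rng = np.random.default_rng(1)
    X = rng.standard_normal((5000, 3))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    T = tangent_space(X, np.array([0.0, 0.0, 1.0]), 0.2, 2)
    print(np.abs(T[2]))     # third coordinates ~ 0: estimated plane is horizontal
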
A null space property approach to compressed sensing with frames
Rongrong Wang (University of Maryland, USA); Xuemei Chen (University of Maryland, College Park, USA); Haichao Wang (University of California, Davis, USA)
An interesting topic in compressive sensing concerns problems of sensing and recovering signals with sparse representations in a dictionary. In this note, we study conditions on sensing matrices $A$ for the $\ell^1$-synthesis method to accurately recover sparse, or nearly sparse, signals in a given dictionary $D$. In particular, we propose a dictionary-based null space property (D-NSP) which, to the best of our knowledge, is the first necessary and sufficient condition for the success of the $\ell^1$ recovery. This new property is then used to detect some of those dictionaries whose sparse families cannot be compressed universally. Moreover, when the dictionary is full spark, we show that $AD$ satisfying the NSP, which is well known to be only sufficient for stable recovery via the $\ell^1$-synthesis method, is indeed necessary as well.
pp. 456-459
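For reference, the classical null space property that the abstract compares against (stated here for a generic matrix $M$, e.g. $M = AD$): $M$ satisfies the NSP of order $s$ if

    \|v_S\|_1 < \|v_{S^c}\|_1 \quad \text{for all } v \in \ker(M)\setminus\{0\} \text{ and all index sets } S \text{ with } |S| \le s,

which is equivalent to exact recovery of all $s$-sparse vectors by $\ell^1$ minimization; the D-NSP proposed in the paper adapts this notion to the dictionary setting.
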
Irregular Sampling of the Radon Transform of Bandlimited Functions
Thomas Wiese (Technische Universität München, Germany); Laurent Demaret (HelmholtzZentrum München, Germany)
We provide conditions for exact reconstruction of a bandlimited function from irregular polar samples of its Radon transform. First, we prove that the Radon transform is a continuous $L^2$-operator for certain classes of bandlimited signals. We then show that the Beurling-Malliavin condition for the radial sampling density ensures existence and uniqueness of a solution. Moreover, Jaffard's density condition is sufficient for stable reconstruction.
pp. 460-463
Spline-based frames for image restoration
Valery Zheludev (Tel Aviv University, Israel); Pekka Neittaanmäki (University of Jyväskylä, Finland); Amir Averbuch (Tel Aviv University, Israel)
We present a design scheme to generate tight and semi-tight frames in the space of discrete-time periodic signals, which originate from four-channel perfect reconstruction periodic filter banks. The filter banks are derived from interpolating and quasi-interpolating polynomial splines. Each filter bank comprises one linear-phase low-pass filter (in most cases interpolating) and one high-pass filter whose magnitude response mirrors that of the low-pass filter. In addition, these filter banks comprise two band-pass filters. In the semi-tight frame case, all the filters have linear phase and (anti)symmetric impulse responses, while in the tight frame case some of the band-pass filters are slightly asymmetric. We introduce the notion of local discrete vanishing moments (LDVM). In the tight frame case, the analysis framelets coincide with their synthesis counterparts, whereas for semi-tight frames we have the option to swap LDVM between the synthesis and analysis framelets. The design scheme is generic and enables us to design framelets with any number of LDVM. The computational complexity of the framelet transforms, which consists of the forward and inverse fast Fourier transforms and simple arithmetic operations, practically does not depend on the number of LDVM or on the size of the filters' impulse responses. The designed frames are used for the restoration of images degraded by blurring, random noise and missing pixels; the images are restored by applying the split Bregman iteration method.
pp. 464-467
On the Noise-Resilience of OMP with BASC-Based Low Coherence Sensing Matrices
Henning Zörlein (Ulm University, Germany); Dejan Lazich (Ulm University, Germany); Martin Bossert (Ulm University, Germany)
In Compressed Sensing (CS), measurements of a sparse vector are obtained by applying a sensing matrix. By means of CS, it is possible to reconstruct the sparse vector from a small number of such measurements. In order to provide reliable reconstruction also for less sparse vectors, sensing matrices are desired to be of low coherence. Motivated by this requirement, it was recently shown that low-coherence sensing matrices can be obtained from Best Antipodal Spherical Codes (BASC). In this paper, the noise resilience of Orthogonal Matching Pursuit (OMP) used in combination with low-coherence BASC-based sensing matrices is investigated.
pp. 468-471
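For readers who want to reproduce the recovery step, a plain NumPy implementation of OMP (generic, not tied to BASC-based matrices; the dimensions, sparsity level and Gaussian test matrix below are arbitrary):

    import numpy as np

    def omp(A, y, k):
        """Orthogonal Matching Pursuit: greedily pick the column most
        correlated with the residual, then re-fit by least squares."""
        support, residual = [], y.copy()
        x = np.zeros(A.shape[1])
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x[support] = coef
        return x

    rng = np.random.default_rng(2)
    A = rng.standard_normal((60, 200)); A /= np.linalg.norm(A, axis=0)
    x0 = np.zeros(200); x0[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
    print(np.linalg.norm(omp(A, A @ x0, 5) - x0))    # ~ 0 in the noiseless case
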
Tight frames in spiral sampling
Somantika Datta (University of Idaho, USA); Enrico Au-Yeung (University of British Columbia, Canada)
The paper deals with the construction of Parseval tight frames for the space of square integrable functions whose domain is the ball of radius R centered at the origin. The focus is on Fourier frames on a spiral. Starting with a Fourier frame on a spiral, a Parseval tight frame that spans the same space can then be obtained by a symmetric approximation of the original Fourier frame.
pp. 472-475
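The passage from a frame to a Parseval tight frame spanning the same space can be illustrated with the canonical construction $S^{-1/2}F$, where $S = FF^*$ is the frame operator (a finite-dimensional sketch with a random real frame; the Fourier-frame-on-a-spiral setting of the paper is only mimicked here):

    import numpy as np

    def parseval_from_frame(F):
        """Columns of F form a frame (full row rank); return the canonical
        Parseval frame S^{-1/2} F spanning the same space."""
        S = F @ F.conj().T
        w, U = np.linalg.eigh(S)
        return U @ np.diag(1.0 / np.sqrt(w)) @ U.conj().T @ F

    F = np.random.default_rng(3).standard_normal((4, 9))   # 9 vectors in R^4
    G = parseval_from_frame(F)
    print(np.allclose(G @ G.T, np.eye(4)))                  # frame bound 1
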

16:20 - 17:20

Seeing the invisible; predicting the unexpected

Michal Irani
Room: Conference Hall
Chair: Abdul Jerri (Clarkson University, USA)
Small image patches tend to repeat abundantly within a natural image, both at the original scale and at coarser scales of the image. Similarly, small space-time patches recur abundantly within a video sequence, both within and across temporal scales. In this talk I will show how complex visual inference tasks can be performed by exploiting this inherent property of patch redundancy within and across different parts of the visual data. Comparing and integrating local pieces of visual information gives rise to complex notions of visual similarity and to a general "Inference by Composition" approach. This makes it possible to infer the likelihood of visual data never seen before, and to make inferences about complex static and dynamic visual information without any prior examples or prior training. I will demonstrate the power of this approach on several example problems (as time permits):

1. Spatial super-resolution from a single image & Temporal super-resolution from a single video.
2. Prediction of missing visual information.
3. Inferring the "likelihood" of "never-before-seen" visual data.
4. Detecting the "irregular" and "unexpected".
5. Detecting complex objects and actions.
6. Segmentation of complex visual data.
7. Generating visual summaries (of images and video).

17:30 - 18:10

Harmonic Analysis

Room: Conference Room
Chair: Rowland Higgins (Anglia Polytechnic University, Cambridge, United Kingdom)
17:30 Measure-based diffusion kernel methods
Amit Bermanis (Tel-Aviv University, Israel); Guy Wolf (Tel Aviv University, Israel); Amir Averbuch (Tel Aviv University, Israel)
A commonly used approach for analyzing massive high-dimensional datasets is to utilize diffusion-based kernel methods. The kernel in these methods is based on a Markovian diffusion process whose transition probabilities are determined by local similarities between data points. When the data lies on a low-dimensional manifold, the diffusion distances according to this kernel encompass the geometry of the manifold. In this paper, we present a generalized approach for defining diffusion-based kernels by incorporating measure-based information, which represents the density or distribution of the data, together with its local distances. The generalized construction does not require an underlying manifold to provide a meaningful kernel interpretation, but relies on the more relaxed assumption that the measure and its support are related to a locally low-dimensional nature of the analyzed phenomena.
pp. 489-492
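For context, a standard diffusion-maps construction that the measure-based kernels generalize (Gaussian affinities, a density-normalization step, then row normalization; the bandwidth and normalization exponent below are arbitrary):

    import numpy as np

    def diffusion_operator(X, eps, alpha=1.0):
        """Gaussian affinity kernel, density normalization with exponent
        alpha, then row normalization to a Markov (diffusion) operator."""
        D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-D2 / eps)
        q = W.sum(axis=1)
        W = W / np.outer(q ** alpha, q ** alpha)   # discount sampling density
        return W / W.sum(axis=1, keepdims=True)

    X = np.random.default_rng(4).standard_normal((300, 2))
    P = diffusion_operator(X, eps=0.5)
    vals, vecs = np.linalg.eig(P)          # leading eigenvectors give the
    order = np.argsort(-vals.real)         # diffusion-map coordinates
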
17:50 Spectral properties of dual frames
Felix Krahmer (University of Göttingen, Germany); Gitta Kutyniok (Technical University Berlin, Germany); Jakob Lemvig (Technical University of Denmark, Denmark)
We study spectral properties of dual frames of a given finite frame. We give a complete characterization for which spectral patterns of dual frames are possible for a fixed frame. For many cases, we provide simple explicit constructions for dual frames with a given spectrum, in particular, if the constraint on the dual is that it be tight.
pp. 493-496

17:30 - 18:30

Advances in Compressive Sensing

Holger Rauhut, Joel Tropp
Room: Conference Hall
Chair: Holger Rauhut (University of Bonn, Germany)
17:30 Local coherence sampling for stable sparse recovery
Felix Krahmer (University of Göttingen, Germany); Holger Rauhut (University of Bonn, Germany); Rachel Ward (University of Texas, USA)
Exact recovery guarantees in compressive sensing often assume incoherence between the sensing basis and the sparsity basis, a strong assumption that is often unattainable in practice. Here we discuss the notion of local coherence, and show that by resampling from the sensing basis according to the local coherence function, stable and robust sparse recovery guarantees extend to a rich new class of sensing problems beyond incoherent systems. We discuss particular applications to compressive MRI and polynomial interpolation.
pp. 476-480
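A small sketch of the strategy (Fourier measurements against a Haar sparsity basis; the density-proportional row selection and rescaling follow the usual variable-density recipe, and all sizes below are arbitrary):

    import numpy as np

    def haar_basis(n):
        """Orthonormal Haar basis of size n (a power of two); rows are the
        basis vectors."""
        H = np.array([[1.0]])
        while H.shape[0] < n:
            H = np.vstack([np.kron(H, [1.0, 1.0]),
                           np.kron(np.eye(H.shape[0]), [1.0, -1.0])])
        return H / np.linalg.norm(H, axis=1, keepdims=True)

    n, m = 256, 80
    Phi = np.fft.fft(np.eye(n)) / np.sqrt(n)      # orthonormal Fourier system
    U = Phi @ haar_basis(n).T                     # U[k, j] = <phi_k, psi_j>
    local_coh = np.max(np.abs(U), axis=1)         # local coherence per row
    p = local_coh**2 / np.sum(local_coh**2)       # variable sampling density
    rows = np.random.default_rng(5).choice(n, size=m, replace=False, p=p)
    A = U[rows] / np.sqrt(m * p[rows, None])      # density-preconditioned matrix
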
17:50 Structured-signal recovery from single-bit measurements
Yaniv Plan (University of Michigan, USA)
1-bit compressed sensing was introduced by Boufounos and Baraniuk in 2008 as a model of extreme quantization; only the sign of each measurement is retained. Recent theoretical and algorithmic advances, combined with the ease of hardware implementation, show that it is an effective method of signal acquisition. Surprisingly, in the high-noise regime there is almost no information loss from 1-bit quantization. We review and revise recent results, and compare to closely related statistical problems: sparse binary regression and binary matrix completion.
pp. 481-484
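To illustrate the measurement model (a sketch with arbitrary sizes; the simple correlation estimator below is one of the linear estimators studied in this line of work, not necessarily the one analysed in the talk):

    import numpy as np

    rng = np.random.default_rng(6)
    m, n, s = 2000, 200, 5
    A = rng.standard_normal((m, n))
    x = np.zeros(n); x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    x /= np.linalg.norm(x)            # the quantizer destroys the amplitude
    y = np.sign(A @ x)                # one bit per measurement
    x_hat = A.T @ y                   # linear (correlation) estimate
    x_hat /= np.linalg.norm(x_hat)
    print(x @ x_hat)                  # alignment with the true direction

Keeping only the s largest entries of x_hat, i.e., exploiting the sparsity, typically sharpens the estimate further.
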
18:10 Dictionary Identification Results for K-SVD with Sparsity Parameter 1
Karin Schnass (University of Sassari, Italy)
In this talk we summarise part of the results from our recent work \cite{sc13arxiv} and \cite{sc13b}. We give theoretical insights into the performance of K-SVD, a dictionary learning algorithm that has gained significant popularity in practical applications, by answering the question of when a dictionary $D$ can be recovered as a local minimum of the minimisation criterion underlying K-SVD from a set of training signals $y_n = D x_n$. Assuming the training signals are generated from a tight frame with coefficients drawn from a random symmetric distribution, then in expectation the generating dictionary can be recovered as a local minimum of the K-SVD criterion if the coefficient distribution exhibits sufficient decay. This decay can be characterised by the coherence of the dictionary and the $\ell_1$-norm of the coefficients. Further, it is demonstrated that, given a finite number of training samples $N$, with probability $O(\exp(-N^{1-4q}))$ there is a local minimum of the K-SVD criterion within a radius $O(N^{-q})$ of the generating dictionary.
pp. 485-488

Friday, July 5

08:40 - 09:40

Event-driven sampling and continuous-time digital signal processing

Yannis Tsividis
Room: Conference Hall
Chair: Laurent Fesquet (TIMA Laboratory, France)

Many new and emerging applications require extremely low power dissipation in order to preserve scarce energy resources; such applications include sensor networks and wearable/implantable/ingestible biomedical devices. In such cases, uniform sampling, as used in conventional, clocked circuits, represents undesirable and unnecessary energy waste. We review techniques in which the signal itself dictates when it needs to be sampled and processed, thus tightly coupling energy use to signal activity. Methods for implementing event-driven A/D converters and DSPs in this context, without using any clock, are reviewed. It is shown that, compared to traditional, clocked techniques, the techniques reviewed here produce circuits that completely avoid aliasing, respond immediately to input changes, result in better error spectral properties, and exhibit dynamic power dissipation that goes down when the input activity decreases. Experimental results from recent test chips, operating at kHz to GHz signal frequencies, fully confirm these properties.

10:00 - 12:00

Sampling of Bandlimited Functions

Room: Conference Room
Chair: David Walnut (George Mason University, USA)
10:00 Sampling and Reconstruction of Bandlimited BMO-Functions
Holger Boche (Technical University Munich, Germany); Ullrich J Mönich (Massachusetts Institute of Technology, USA)
Functions of bounded mean oscillation (BMO) play an important role in complex function theory and harmonic analysis. In this paper a sampling theorem for bandlimited BMO-functions is derived for sampling points that are the zero sequence of some sine-type function. The class of sine-type functions is large and, in particular, contains the sine function, which corresponds to the special case of equidistant sampling. It is shown that the sampling series is locally uniformly convergent if oversampling is used. Without oversampling, the local approximation error is bounded.
pp. 521-524
10:20 Reconstruction of band-limited random signals from local averages
Sinuk Kang (National Institute for Mathematical Sciences, Korea); Gilles Faÿ (Ecole Centrale Paris, France)
We consider the problem of reconstructing a wide-sense stationary band-limited random signal from its local averages taken at the Nyquist rate or above. The success of perfect reconstruction depends on the length of the intervals over which the averages are taken. The resulting average sampling expansions converge in mean square and coincide with the original signal with probability 1.
pp. 525-527
10:40 Bandlimited Signal Reconstruction From the Distribution of Unknown Sampling Locations
Animesh Kumar (Indian Institute of Technology Bombay, India)
We study the reconstruction of bandlimited fields from samples taken at unknown but statistically distributed sampling locations. The setup is motivated by distributed sampling, where precise knowledge of sensor locations can be difficult to obtain. Periodic one-dimensional bandlimited fields are considered for sampling. Perfect samples of the field at independent and identically distributed locations are obtained, but the statistical realization of the sampling locations is not known. First, it is shown that a bandlimited field cannot be uniquely determined from samples taken at statistically distributed but unknown locations, even if the number of samples is infinite. Next, it is assumed that the order of the sample locations is known. In this case, using insights from order statistics, an estimate of the field with useful asymptotic properties is designed. Distortion (mean-squared error) bounds and a central limit theorem are established for this estimate.
pp. 528-531
11:00 Sampling aspects of approximately time-limited multiband and bandpass signals
Joseph Lakey (New Mexico State University, USA); Jeffrey Hogan (University of Newcastle, Australia)
We provide an overview of recent progress regarding the role of sampling in the study of signals that are in the image of a bandpass or multiband frequency limiting operation and have most of their energies concentrated in a given time interval. First we address the question of approximation of a time- and band-limited signal on its essential time support by a finite sinc series. Next we consider a method by which essentially time limited multiband signals can be approximated as superpositions of eigenfunctions of time- and band-limiting to each separate band. Finally we consider a means to approximate essentially time-limited bandpass signals. In this case we present a new phase-locking metric that arises in the study of EEG signals.
pp. 532-535
11:20 Recovery of Bandlimited Signal Based on Nonuniform Derivative Sampling
Dominik Rzepka (AGH University of Science and Technology, Poland); Marek Miskowicz (AGH University of Science and Technology, Poland); Anna Gryboś (AGH University of Science and Technology, Poland); Dariusz Koscielnik (AGH University of Science and Technology, Poland)
The paper focuses on the perfect recovery of band-limited signals from nonuniform samples of the signal and its derivatives. The main motivation for addressing signal recovery using nonuniform derivative sampling is the reduction of the mean sampling frequency below the Nyquist rate, which is a critical issue in event-based signal processing chains with a wireless link. In particular, we introduce a set of reconstructing functions for nonuniform derivative sampling as an extension of the corresponding set of reconstructing functions derived by Linden and Abramson for uniform derivative sampling. An example of signal recovery using the first derivative is finally reported.
pp. 536-539
11:40 Approximation by Shannon sampling operators in terms of an averaged modulus of smoothness
Gert Tamberg (Tallinn University of Technology, Estonia); Andi Kivinukk (Tallinn University, Estonia)
The aim of this paper is to study the approximation properties of generalized sampling operators in L^p(R) in terms of an averaged modulus of smoothness.
pp. 540-543

Compressive Sensing and Applications

Room: Conference Hall
Chair: Rachel Ward (University of Texas, USA)
10:00 Sparse Recovery with Fusion Frames via RIP
Ulas Ayaz (University of Bonn, Germany); Holger Rauhut (University of Bonn, Germany)
We extend ideas from compressed sensing to a structured sparsity model related to fusion frames. We present theoretical results concerning the recovery of sparse signals in a fusion frame from undersampled measurements. We provide both nonuniform and uniform recovery guarantees. The novelty of our work is to exploit an incoherence property of the fusion frame which allows us to reduce the number of measurements needed for sparse recovery.
pp. 497-500
10:20 Blind Sensor Calibration in Sparse Recovery Using Convex Optimization
Cagdas Bilen (INRIA Rennes, France); Gilles Puy (EPFL, Switzerland); Rémi Gribonval (INRIA, France); Laurent Daudet (Université Paris Diderot, France)
We investigate a compressive sensing system in which the sensors introduce a distortion to the measurements in the form of unknown gains. We focus on {\em blind} calibration, using measurements of a few unknown (but sparse) signals. We extend our earlier study on real positive gains to two generalized cases (signed real-valued gains; complex-valued gains), and show that the recovery of the unknown gains together with the sparse signals is possible in a wide variety of scenarios. The simultaneous recovery of the gains and the sparse signals is formulated as a convex optimization problem which can be solved easily using off-the-shelf algorithms. Numerical simulations demonstrate that the proposed approach is effective provided that sufficiently many (unknown, but sparse) calibrating signals are available, especially when the sign or phase of the unknown gains is not completely random.
pp. 501-504
10:40 Sampling by blocks of measurements in Compressed Sensing
Claire Boyer (Université Paul Sabatier, France); Jérémie Bigot (ISAE, France); Pierre Armand Weiss (ITAV USR 3505, France)
Various acquisition devices impose sampling by blocks of measurements. A typical example is parallel magnetic resonance imaging (MRI), where several radio-frequency coils simultaneously acquire a set of Fourier-modulated coefficients. We study a new random sampling approach that consists in selecting a set of blocks that are predefined by the application of interest. We provide theoretical results on the number of blocks required for exact sparse signal reconstruction. We finish by illustrating these results on various examples and discussing their connection to the CS literature.
pp. 505-508
11:00 Travelling salesman-based variable density sampling
Nicolas Chauffert (CEA, Neurospin Center, Parietal Team., France); Philippe Ciuciu (LNAO, France); Jonas Kahn (Laboratoire Painlevé, CNRS, France); Pierre Armand Weiss (ITAV USR 3505, France)
Compressed sensing theory indicates that selecting a few measurements independently at random is a near optimal strategy to sense sparse or compressible signals. This is infeasible in practice for many acquisition devices that acquire samples along continuous trajectories (e.g., radial, spiral, ...). Examples include magnetic resonance imaging (MRI) or radio-interferometry. In this paper, we propose to generate continuous sampling trajectories by drawing a small set of measurements independently and joining them using a travelling salesman problem solver. Our contribution lies in the theoretical derivation of the appropriate probability density of the initial drawings. Preliminary simulation results show that this strategy is as efficient as independent drawings while being implementable on real acquisition systems.
pp. 509-512
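A rough sketch of the pipeline; the paper's actual contribution, the correct density from which to draw the points, is not reproduced, and a generic centre-weighted density plus a greedy nearest-neighbour tour stand in for the density and the TSP solver:

    import numpy as np

    rng = np.random.default_rng(7)
    pts = rng.standard_normal((400, 2)) * 0.25 + 0.5     # centre-weighted draw
    pts = pts[(pts > 0).all(axis=1) & (pts < 1).all(axis=1)]

    def nn_tour(points):
        """Greedy nearest-neighbour ordering, a cheap TSP surrogate."""
        remaining, tour = list(range(1, len(points))), [0]
        while remaining:
            last = points[tour[-1]]
            nxt = min(remaining, key=lambda i: np.sum((points[i] - last) ** 2))
            tour.append(nxt); remaining.remove(nxt)
        return points[tour]

    trajectory = nn_tour(pts)       # ordered k-space locations to be visited
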
11:20 Incremental Sparse Bayesian Learning for Parameter Estimation of Superimposed Signals
Dmitriy Shutin (German Aerospace Center (DLR), Germany); Wei Wang (German Aerospace Center (DLR), Germany); Jost Thomas (German Aerospace Center (DLR), Germany)
This work discusses a novel algorithm for joint sparse estimation of superimposed signals and their parameters. The proposed method is based on two concepts: a variational Bayesian version of incremental sparse Bayesian learning (SBL), known as fast variational SBL (FV-SBL), and a variational Bayesian approach to parameter estimation of superimposed signal models. Both schemes estimate the unknown parameters by minimizing the variational lower bound on the model evidence; moreover, these optimizations are performed incrementally with respect to the parameters of a single component. It is demonstrated that these estimation schemes can be naturally unified under the framework of variational Bayesian inference. This allows, on the one hand, for an adaptive dictionary design for FV-SBL schemes and, on the other hand, for a fast superresolution approach to parameter estimation of superimposed signals. Experimental evidence collected with synthetic data, as well as estimation results for measured multipath channels, demonstrates the effectiveness of the proposed algorithm.
pp. 513-516
11:40 Sparse MIMO Radar with Random Sensor Arrays and Kerdock Codes
Thomas Strohmer (University of California, Davis, USA); Haichao Wang (University of California, Davis, USA)
We derive a theoretical framework for the recoverability of targets in the azimuth-range-Doppler domain using random sensor arrays and tools developed in the area of compressive sensing. In one manifestation of our theory we use Kerdock codes as transmission waveforms and exploit some of their peculiar properties in our analysis. Not only is our result the first rigorous mathematical theory for the detection of moving targets using random sensor arrays, but the transmitted waveforms also satisfy a variety of properties that are very desirable and important from a practical viewpoint.
pp. 517-520

13:20 - 15:00

FFT and Related Algorithms

Room: Conference Room
Chair: Peter Massopust (Helmholtz Zentrum München, Germany)
13:20 Phase retrieval using time and Fourier magnitude measurements
Martin Ehler (University of Vienna, Germany); Stefan Kunis (University of Osnabrück & Helmholtz Zentrum München, Germany)
We discuss the reconstruction of a finite-dimensional signal from the absolute values of its Fourier coefficients. In many optical experiments the signal magnitude in time is also available. We combine time and frequency magnitude measurements to obtain closed reconstruction formulas. Random measurements are discussed to reduce the number of measurements.
pp. 564-567
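The measurement setup can be explored numerically with the classical alternating-projection (Gerchberg-Saxton) baseline below; this is not the closed-form reconstruction of the paper, only a sanity-check sketch with arbitrary signal length and iteration count:

    import numpy as np

    def gerchberg_saxton(mag_t, mag_f, n_iter=500, seed=0):
        """Alternate between imposing the known time-domain magnitudes and
        the known Fourier magnitudes, starting from a random phase."""
        rng = np.random.default_rng(seed)
        x = mag_t * np.exp(2j * np.pi * rng.random(mag_t.shape))
        for _ in range(n_iter):
            X = np.fft.fft(x)
            x = np.fft.ifft(mag_f * np.exp(1j * np.angle(X)))
            x = mag_t * np.exp(1j * np.angle(x))
        return x

    rng = np.random.default_rng(8)
    z = rng.standard_normal(64) + 1j * rng.standard_normal(64)
    rec = gerchberg_saxton(np.abs(z), np.abs(np.fft.fft(z)))
    # agreement is measured up to a global phase factor:
    print(np.abs(np.vdot(rec, z)) / (np.linalg.norm(rec) * np.linalg.norm(z)))
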
13:40 Fast Ewald summation under 2d- and 1d-periodic boundary conditions based on NFFTs
Franziska Nestler (Chemnitz University of Technology, Germany); Daniel Potts (Chemnitz University, Germany)
Ewald summation has become established as a basic element of fast algorithms for evaluating the Coulomb interaction energy of charged systems subject to periodic boundary conditions. In this context, particle-mesh routines, such as the P3M method, and the P2NFFT, which is based on nonequispaced fast Fourier transforms (NFFT), should be mentioned. In this paper we present a new approach for the efficient calculation of the Coulomb interaction energy subject to mixed boundary conditions based on NFFTs.
pp. 568-571
14:00 A sparse Prony FFT
Daniel Potts (Chemnitz University, Germany); Stefan Kunis (University of Osnabrück & Helmholtz Zentrum München, Germany); Sabine Heider (University of Osnabrück, Germany); Michael Veit (Chemnitz University of Technology, Germany)
We describe the application of Prony-like reconstruction methods to the problem of the sparse Fast Fourier transform (sFFT). In particular, we adapt both important parts of the sFFT, quasi random sampling and filtering techniques, to Prony-like methods.
pp. 572-575
14:20 Taylor and rank-1 lattice based nonequispaced fast Fourier transform
Toni Volkmer (Chemnitz University of Technology, Germany)
The nonequispaced fast Fourier transform (NFFT) allows the fast approximate evaluation of trigonometric polynomials with frequencies supported on full box-shaped grids at arbitrary sampling nodes. Due to the curse of dimensionality, the total number of frequencies and thus, the total arithmetic complexity can already be very large for small refinements at medium dimensions. In this paper, we present an approach for the fast approximate evaluation of trigonometric polynomials with frequencies supported on symmetric hyperbolic cross index sets at arbitrary sampling nodes. This approach is based on Taylor expansion and rank-1 lattice methods. We prove error estimates for the approximation and present numerical results.
pp. 576-579
14:40 Decoupling of Fourier Reconstruction System for Shifts of Several Signals
Yosef Yomdin (Weizmann Institute of Science, Israel); Dmitry Batenkov (Weizmann Institute of Science, Israel); Niv Sarig (Nova Measuring Instruments & Weizmann Institute of Science, Israel)
We consider the problem of ``algebraic reconstruction'' of linear combinations of shifts of several signals $f_1,\ldots,f_k$ from their Fourier samples. For each $r=1,\ldots,k$ we choose the sampling set $S_r$ to be a subset of the common set of zeroes of the Fourier transforms ${\cal F}(f_\ell)$, $\ell \ne r$, on which ${\cal F}(f_r)\ne 0$. We show that in this way the reconstruction system is reduced to $k$ separate systems, each involving only one of the signals $f_r$. Each of the resulting systems is of a ``generalized Prony'' form. We discuss the problem of unique solvability of such systems and provide some examples.
pp. 580-583

Circuit Design for Analog to Digital Converters

Yun Chiu
Room: Conference Hall
Chair: Yun Chiu (University of Texas at Dallas, USA)
13:20 Digital Calibration of SAR ADC
Yun Chiu (University of Texas at Dallas, USA)
Four techniques for digital background calibration of SAR ADCs are presented and compared. Sub-binary redundancy is the key to the realization of these techniques. Experimental and simulation results are included to support the effectiveness of these techniques.
pp. 544-547
13:40 Trend of High-Speed SAR ADC towards RF Sampling
Mike Shuo-Wei Chen (University of Southern California, USA)
One emerging trend in high-speed, low-power ADC design is to leverage the successive approximation (SAR) topology, which has advanced power efficiency by orders of magnitude over the past decade. Given the nature of the SAR algorithm, the conversion speed is intrinsically slow compared to other high-speed ADC architectures, yet minimal static power is required due to the mostly digital implementation. This paper examines various speed enhancement techniques that enable SAR ADCs to reach RF sampling, i.e. >GS/s sampling rates with >GHz input bandwidth, while maintaining low power and area consumption. Such converters are expected to play a crucial role in future energy-constrained wideband systems.
pp. 548-551
14:00 Multi-Step Switching Methods for SAR ADCs
Ying-Zu Lin (Novatek Inc. & National Cheng Kung University, Taiwan); Ya-Ting Shyu (National Cheng Kung University, Taiwan); Che-Hsun Kuo (National Cheng Kung University, Taiwan); Guan-Ying Huang (National Cheng Kung University, Taiwan); Chun-Cheng Liu (MediaTek Inc., Taiwan); Soon-Jyh Chang (NCKU, Taiwan)
This paper presents multi-step capacitor switching methods for SAR ADCs based on precharge with floating capacitors and charge sharing. The proposed switching methods further reduce the transient power of the split monotonic switching method (an improved version of the monotonic switching method). Compared to split monotonic switching, adding charge sharing achieves a reduction of around 50% in switching power. Using precharge with floating capacitors and charge sharing simultaneously, the switching power is reduced by around 75%. The proposed switching methods do not require additional intermediate reference voltages.
pp. 552-555
14:20 On the use of redundancy in successive approximation A/D converters
Boris Murmann (Stanford University, USA)
In practical realizations of sequential (or pipelined) A/D converters, some form of redundancy is typically employed to help absorb imperfections in the underlying circuits. The purpose of this paper is to review the various ways in which redundancy has been used in successive approximation register (SAR) ADCs, and to connect findings from the information theory community to ideas that drive modern hardware realizations.
pp. 556-559
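A behavioural sketch of the role of sub-binary redundancy (the radix, step count and decision-error model below are arbitrary choices, not taken from the paper): if each comparator decision is perturbed by a bounded fraction of the current step weight, the later, redundant steps can pull the residue back and the final conversion error stays at the LSB level.

    import numpy as np

    rng = np.random.default_rng(10)

    def sar_convert(vin, weights, decision_margin=0.0):
        """Signed successive-approximation search; 'decision_margin' perturbs
        each comparator threshold by a random fraction of the current step
        weight, mimicking bounded settling/decision errors."""
        residue, bits = vin, []
        for w in weights:
            b = 1 if residue >= decision_margin * w * rng.uniform(-1, 1) else -1
            bits.append(b)
            residue -= b * w
        return np.array(bits)

    radix, n_steps = 1.8, 16                      # radix < 2: overlapping ranges
    w = radix ** np.arange(n_steps - 1, -1, -1.0)
    w /= w.sum()                                  # full scale is [-1, 1]
    bits = sar_convert(0.3712, w, decision_margin=0.2)
    print(abs(0.3712 - bits @ w))                 # on the order of w[-1]

With a radix-2 weight set, the same perturbations can leave uncorrectable errors far above the LSB, which is the motivation for redundancy discussed above.
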
14:40 Design Considerations of Ultra-Low-Voltage Self-Calibrated SAR ADC
Hai Huang (UESTC, P.R. China); Xiaoyang Wang (UESTC, P.R. China); Qiang Li (University of Electronic Science and Technology of China, P.R. China)
This paper discusses the design of a 0.5 V, 12-bit successive approximation register (SAR) analog-to-digital converter (ADC), with a focus on the considerations for self-calibration at low supply voltage. The relationship between comparator noise and overall ADC performance is studied. Moreover, an ultra-low-leakage switch is demonstrated in a 0.13 μm CMOS process, and an improved mismatch measurement procedure is proposed to alleviate the charge injection of the sampling switch. Simulations show that the ADC achieves an ENOB of 11.4 b and an SFDR of 90 dB near the Nyquist rate with capacitor mismatch up to 3%. At 12 bits and 1 MS/s, the ADC exhibits an FOM of 13.2 fJ/step under a 0.5 V supply.
pp. 560-563

15:20 - 16:20

Robust subspace clustering

Emmanuel Candes
Room: Conference Hall
Chair: Hans Feichtinger (University of Vienna, Austria)

Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. Subspace clustering can be regarded as a generalization of PCA in which points do not lie around a single lower-dimensional subspace but rather around a union of subspaces. It can also be seen as a nonstandard clustering problem in which neighbors are not close according to a pre-defined metric but rather belong to the same lower-dimensional structure.

We introduce an algorithm inspired by sparse subspace clustering (SSC) to cluster noisy data, and develop some novel theory demonstrating its correctness. In particular, the theory uses ideas from geometric functional analysis to show that the algorithm can accurately recover the underlying subspaces under minimal requirements on their orientation and on the number of samples per subspace. We present synthetic as well as real data experiments illustrating our approach and demonstrating its effectiveness.