List of Plenary Talks

Tuesday, August 28th
09:00 – 10:00


PT-1: Automatic Speech Recognition: Past, Present and Future
Jean-Paul Haton

Room: I.I.C. Brătianu Hall

14:40 – 15:40


PT-2: Computable Performance Analysis of Sparsity Recovery with Applications
Arye Nehorai

Room: I.I.C. Brătianu Hall

Wednesday, August 29th
09:00 – 10:00


PT-3: Source Separation in Nonlinear Mixtures: How and Why?
Christian Jutten

Room: I.I.C. Brătianu Hall

14:40 – 15:40


PT-4: A New Theory for Designing Socio-Computational Systems
Mihaela van der Schaar

Room: I.I.C. Brătianu Hall

Thursday, August 30th
09:00 – 10:00


PT-5: Shannonian Abstractions, Real-Time Interactive Communications Calamities and Near-Capacity Multimedia Transceivers... (EURASIP Fellow Inaugural Lecture)
Lajos Hanzo

Room: I.I.C. Brătianu Hall

14:40 – 15:40


PT-6: Sensing the Universe: Signal Processing Challenges for Large Radio Telescope Arrays
Alle-Jan van der Veen

Room: I.I.C. Brătianu Hall

Friday, August 31st
09:00 – 10:00


PT-7: Information Theoretic Aggregation for Sparse Prediction (EURASIP Fellow Inaugural Lecture)
Eric Moulines

Room: I.I.C. Brătianu Hall

14:40 – 15:40


PT-8: Space-from-Time Imaging: Acquiring Reflectance and Depth with Less Optics
Vivek Goyal

Room: I.I.C. Brătianu Hall



Plenary Speakers


Vivek Goyal


“Space-from-Time Imaging: Acquiring Reflectance and Depth with Less Optics”



Traditional cameras use lenses to form an optical image of the scene and thus obtain spatial correspondences between the scene and the film or sensor array. These cameras do not sample the incident light fast enough to record any transient variations in the light field. This talk describes space-from-time imaging -- a signal processing framework in which spatial resolution comes from computationally processing samples of the response to time-varying illumination. Examples of this concept enable imaging using only omnidirectional illumination and sensing. Along with the formation of ordinary reflectance images in extraordinary configurations, we show a range sensing system that uses neither scene scanning by laser (as in LIDAR) nor multiple sensors (as in a time-of-flight camera). These technologies depend on novel parametric signal modeling and sampling theory.


Vivek K. Goyal is Esther and Harold E. Edgerton Associate Professor of Electrical Engineering at the Massachusetts Institute of Technology. He received the B.S. degree in mathematics and the B.S.E. degree in electrical engineering from the University of Iowa, Iowa City, where he received the John Briggs Memorial Award for the top undergraduate across all colleges. He received the M.S. and Ph.D. degrees in electrical engineering from the University of California, Berkeley, where in 1998, he received the Eliahu Jury Award for outstanding achievement in systems, communications, control, or signal processing.

His previous positions include Research Assistant in the Laboratoire de Communications Audiovisuelles at École Polytechnique Fédérale de Lausanne, Switzerland; Member of Technical Staff in the Mathematics of Communications Research Department of Bell Laboratories, Lucent Technologies; and Senior Research Engineer for Digital Fountain, Inc., Fremont, CA. His research interests include source coding theory, quantization, sampling, and computational imaging.

Professor Goyal was awarded the 2002 IEEE Signal Processing Society Magazine Award and an NSF CAREER Award. He served on the IEEE Signal Processing Society’s Image and Multiple Dimensional Signal Processing Technical Committee, is a permanent Co-chair of the SPIE Wavelets and Sparsity conference series, and is a TPC Co-Chair of the IEEE International Conference on Image Processing 2016. He is a co-author of a forthcoming textbook available for download at FourierAndWavelets.org, and he will present a tutorial on teaching signal processing at IEEE ICASSP 2012.



Eric Moulines


“Information Theoretic Aggregation for Sparse Prediction”


Joint work with P. Alquier, G. Biau, and B. Guedj


Sparse regression models address inference problems in which the number of parameters p to estimate is large compared to the sample size n. The main challenge in such high-dimensional hypothesis spaces is to propose estimators that display favorable statistical performance while retaining a manageable computational cost. Estimators based on penalized empirical risk minimization (with an appropriately chosen sparsity-inducing penalty) are known to perform well theoretically, but they cannot cope with the combinatorial explosion of the hypothesis space. The Lasso estimator (and its many variants) makes the minimization problem convex and leads to practical algorithms even when the number of regressors p is large. However, stringent conditions on the design have to be imposed to establish fast rates of convergence for this estimator.
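To make the Lasso concrete, here is a minimal sketch (our illustration, not part of the talk) of the convex program min_b 0.5·||y − Xb||² + λ·||b||₁ solved by cyclic coordinate descent with soft-thresholding; the data-generating setup is purely hypothetical:

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: the proximal map of the l1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize 0.5*||y - X b||^2 + lam*||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # squared column norms
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed.
            r_j = y - X @ b + X[:, j] * b[j]
            b[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return b
```

Even this toy solver illustrates the point made above: the convex relaxation is cheap to run, while recovering the true support at a fast rate requires conditions on the design matrix X.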

Recently, several authors have introduced a new class of Bayesian methods that achieve sensible statistical bounds without stringent assumptions on the design and that lead to practical algorithms (not as fast as the Lasso, of course!).

These methods are all based on some form of aggregation (rather than selection) of estimators, obtained by sampling from a quantity that may be viewed as a posterior distribution. We will also discuss the statistical performance of this construction, using a sparsity oracle inequality in probability on the true excess risk for a version of the exponential weight estimator.
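The aggregation idea can be sketched in a few lines (our generic illustration, not the speaker's algorithm): given a finite dictionary of fixed predictors, each predictor receives a weight that decays exponentially in its empirical risk, and the aggregate is the weighted average:

```python
import numpy as np

def exp_weight_aggregate(preds, y, eta=1.0):
    """Exponentially weighted aggregation of m fixed predictors.

    preds : (m, n) array; row k holds predictor k's values on n samples.
    y     : (n,) array of observed responses.
    eta   : temperature; larger eta concentrates mass on low-risk predictors.
    """
    risks = np.mean((preds - y) ** 2, axis=1)       # empirical risks
    logits = -eta * len(y) * (risks - risks.min())  # shift for numerical stability
    w = np.exp(logits)
    w /= w.sum()                                    # Gibbs / pseudo-posterior weights
    return w @ preds, w
```

Note that no predictor is discarded: aggregation keeps the whole dictionary, which is exactly what distinguishes these methods from selection-based estimators.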



Lajos Hanzo


“Shannonian Abstractions, Real-Time Interactive Communications Calamities and Near-Capacity Multimedia Transceivers...”


Commencing with a brief historical perspective on Shannon’s source- and channel-coding theorems, near-capacity multimedia transceivers are designed in a systematic manner.

The lossless, but potentially infinite-length, Shannonian entropy codes are extremely prone to loss of synchronization in the presence of channel-induced errors, especially for transmission over wireless channels exhibiting bursty, rather than randomly distributed, transmission errors. Furthermore, most multimedia source signals can tolerate lossy, rather than lossless, delivery to the human receiver, namely to the eye, the ear and the other human senses, when the associated psycho-visual and psycho-acoustic masking properties are exploited.

However, the highly compressed source-coded signal becomes vulnerable to transmission errors, hence it has to be protected by powerful iteratively detected codes. As a lossless design example, the novel family of EXtrinsic Information Transfer (EXIT) chart aided Variable-Length Codes (VLCs) will be alluded to. Furthermore, as a lossy example, H.264-coded video will be protected by the novel family of EXIT-chart-Optimized Short Block Codes (EOSBCs) and transmitted with the aid of multi-dimensional Sphere Packing (SP) modulation aided Layered Steered Space-Time Coding, termed LSSTC. The LSSTC scheme combines the benefits of the known MIMO schemes in terms of achieving a multiplexing gain, a diversity gain and a beamforming gain. It is demonstrated that conventional two-stage turbo-detection schemes may suffer from a Bit Error Rate (BER) floor. We circumvent this deficiency with the aid of a three-stage turbo-detected scheme, which employs a low-complexity unity-rate code as the intermediate code between the outer and inner codes of the proposed architecture. The iterative decoding convergence behaviour of the advocated MIMO transceiver is also investigated with the aid of EXIT charts.

Lajos Hanzo received his Master's degree in electronics in 1976, his PhD in 1983 and his Doctor of Sciences (DSc) degree in 2004.

He is a Fellow of the Royal Academy of Engineering (FREng), FIEEE, FEIT and a Fellow of EURASIP. He has co-authored 20 IEEE Press - John Wiley books, totalling in excess of 10 000 pages, on mobile radio communications, published 1200+ research entries on IEEE Xplore, organised and chaired major IEEE conferences, and has been awarded a number of distinctions.

Dr. Hanzo is also an IEEE Distinguished Lecturer and a Chaired Professor at Tsinghua University, Beijing. He is the Editor-in-Chief of the IEEE Press.

For further information on research in progress and associated publications please refer to http://www-mobile.ecs.soton.ac.uk.



Christian Jutten


“Source Separation in Nonlinear Mixtures: How and Why?”


The problem of source separation has been addressed mainly for linear mixtures, either memoryless or convolutive. Methods for solving the problem are based on source assumptions like statistical independence (ICA), time properties (coloration or nonstationarity), positivity or sparsity. However, although linearity is very often a convenient approximation, there are some applications in which the mixing process is clearly nonlinear.

In the first part of this talk, we explain the main problems encountered in source separation for nonlinear mixtures and how they can be overcome.

In the second part, we consider two actual, strongly nonlinear problems: one in image processing and one in chemical sensor array processing. For each problem, we derive the nonlinear models, show how source separation can be applied, and present the results that can be achieved.



Jean-Paul Haton


“Automatic Speech Recognition: Past, Present and Future”


The use of speech as a man-machine communication medium has been extensively studied during the past few decades. This presentation will address one aspect of this domain, namely automatic speech recognition (ASR), which consists of interacting with a machine by voice. Commercial products have existed for more than 20 years, at first for isolated word recognition, and then for connected words and continuous speech, with applications of increasing complexity: dictation and data entry, interactive voice response, control, telecommunications, media and meeting transcription, etc. Most of these systems are based on statistical modeling, at both the acoustic and linguistic levels.

Although automatic speech recognition systems perform remarkably well, even for large-vocabulary or multi-speaker tasks, their performance degrades dramatically in adverse situations, especially in the presence of noise or distortion. In particular, problems are created by differences that may occur between training and testing conditions (noise level as measured by the signal-to-noise ratio (SNR), distance to the microphone and its orientation, type of speakers, etc.). Speech recognition in adverse conditions has received much attention, since robustness has become one of the major bottlenecks for the practical use of speech recognizers in real life.

After briefly recalling the basic principles of speech signal acquisition and parameterization, we will present the statistical approach to ASR (especially in a Bayesian framework) that is now the most widely used, based on Hidden Markov Models (HMM) for acoustic modeling. We will then turn to the types of solutions that have been proposed so far to increase the robustness of ASR systems in order to obtain good performance in real life conditions, including robust feature extraction and model adaptation.
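As a small illustration of the HMM machinery mentioned above (a generic textbook sketch, not the speaker's recognizer), Viterbi decoding finds the most likely hidden state sequence for a discrete-observation HMM, working in the log domain for numerical stability:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely state sequence of a discrete HMM (log domain).

    log_pi : (S,) log initial state probabilities.
    log_A  : (S, S) log transition matrix, A[i, j] = P(j | i).
    log_B  : (S, V) log emission matrix over V symbols.
    obs    : sequence of observed symbol indices.
    """
    T = len(obs)
    S = len(log_pi)
    delta = log_pi + log_B[:, obs[0]]        # best log-score ending in each state
    psi = np.zeros((T, S), dtype=int)        # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_A      # (i, j): come from i, go to j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    # Backtrack from the best final state.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

In a real ASR system the discrete emission table would be replaced by acoustic likelihoods (e.g. Gaussian mixtures over feature vectors), but the dynamic-programming recursion is the same.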

Jean-Paul Haton is Emeritus Professor in Computer Science at University Henri-Poincaré, Nancy, France. He is a senior member of the Institut Universitaire de France where he created the first chair in computer science.

Jean-Paul Haton was Director of the French National Project on Man-Machine Communication from 1981 to 1993, and Research Director at INRIA from 1988 to 1993. His research interests relate to Artificial Intelligence and Man-Machine Communication, especially in the fields of automatic speech recognition and understanding, speech training, signal interpretation, knowledge-based systems, and robotics. He has supervised more than 100 PhD theses in these fields and has authored or co-authored about 300 articles and books. He is a member of the Editorial Board of several journals, including Speech Communication, Computer Speech and Language, the Journal of Intelligent Manufacturing, and the IEICE Transactions on Information and Systems. He has been involved in dozens of national and international research projects, and has served several times as an expert for the European Commission and for the French Agence Nationale de la Recherche.

Jean-Paul Haton is a member of the AAAI, the Acoustical Society of America and the French Acoustical Society, a Fellow of the IEEE, a Fellow of the International Association for Pattern Recognition (IAPR), and a Fellow of the European AI association ECCAI. He served as chairman of AFIA (the French Association for Artificial Intelligence) until 1994 and of ASTI, the French federation of associations for information processing. He was awarded a Doctorate Honoris Causa by the University of Geneva, Switzerland.



Arye Nehorai


“Computable Performance Analysis of Sparsity Recovery with Applications”


The last decade has witnessed burgeoning development in the reconstruction of signals by exploiting their low-dimensional structures, particularly their sparsity, block-sparsity, and low-rankness. The reconstruction performance for these signals depends heavily on the structure of the sensing matrix. The quality of such matrices in the context of signal recovery is usually quantified by the restricted isometry constant and its variants. However, the restricted isometry constant and its variants are extremely difficult to compute.

We present a framework for analytically computing the performance of the recovery of signals with sparsity or block-sparsity structures. We define a family of incoherence measures to quantify the goodness of arbitrary sensing matrices. Our primary contribution is the association of these incoherence measures with the fixed points of certain scalar functions. These scalar functions are defined by a set of optimization problems and computed via a series of convex programs: linear programs, second-order cone programs, or semi-definite programs, depending on the specific problem. We then use fixed-point theory and bisection search to obtain an efficient algorithm that computes the incoherence measures with guaranteed global convergence. As a by-product, we implement efficient algorithms to verify sufficient conditions for exact signal recovery in the noise-free case. The utility of the proposed incoherence measures lies in their relation to the performance of reconstruction methods. We derive bounds on the recovery errors of convex relaxation algorithms in terms of these measures, and then discuss applications of these bounds to numerically assess the performance of sparse recovery systems arising in radar and other practical areas.

Arye Nehorai is the Eugene and Martha Lohman Professor and Chair of the Preston M. Green Department of Electrical and Systems Engineering at Washington University in St. Louis (WUSTL). He serves as the Director of the Center for Sensor Signal and Information Processing at WUSTL. Earlier he was a faculty member at Yale University and the University of Illinois at Chicago. He received the B.Sc. and M.Sc. degrees from the Technion, Israel, and the Ph.D. from Stanford University, California.

Dr. Nehorai has served as Editor-in-Chief of the IEEE Transactions on Signal Processing during the years 2000 to 2002. In the years 2003 to 2005 he was Vice President (Publications) of the IEEE Signal Processing Society (SPS), Chair of the Publications Board, and member of the Executive Committee of this Society. He was the Founding Editor of the special columns on Leadership Reflections in the IEEE Signal Processing Magazine from 2003 to 2006.


Dr. Nehorai received the 2006 IEEE SPS Technical Achievement Award and the 2009 IEEE SPS Meritorious Service Award. He was elected Distinguished Lecturer of the IEEE SPS for the term 2004 to 2005. He was co-recipient of the 1989 IEEE SPS Senior Award for Best Paper, co-author of the 2003 Young Author Best Paper Award paper, and co-recipient of the 2004 Magazine Paper Award. In 2001 he was named University Scholar of the University of Illinois. Dr. Nehorai was the Principal Investigator of the Multidisciplinary University Research Initiative (MURI) project entitled Adaptive Waveform Diversity for Full Spectral Dominance. He has been a Fellow of the IEEE since 1994 and of the Royal Statistical Society since 1996.



Mihaela van der Schaar


“A New Theory for Designing Socio-Computational Systems”


This talk proposes a new generation of ideas and technologies for designing the interactions between self-interested, learning agents in socio-computational systems. When systems or networks are composed of compliant machines (wireless nodes, sensors, routers, mobile phones, etc.), network utility maximization (NUM) and other well-known control and optimization methods can be used to achieve efficient designs. When the communities are composed of intelligent and self-interested agents (as in peer-to-peer networks, social networks, crowdsourcing, etc.), such methods are not effective, and efficiency is much more difficult to achieve because the interests of the individual agents may be in conflict with those of the system designer. This talk introduces a new theoretical framework for efficiently designing socio-computational systems using a novel class of incentives (rewards and punishments).

Mihaela van der Schaar is Chancellor's Professor of Electrical Engineering at University of California, Los Angeles. Her research interests include multimedia networking, communication, processing, and systems, multimedia stream mining, dynamic multi-user networks and system designs, online learning, network economics and game theory. She is an IEEE Fellow, a Distinguished Lecturer of the Communications Society for 2011-2012, the Editor in Chief of IEEE Transactions on Multimedia and a member of the Editorial Board of the IEEE Journal on Selected Topics in Signal Processing.

She received an NSF CAREER Award (2004), the Best Paper Award from IEEE Transactions on Circuits and Systems for Video Technology (2005), the Okawa Foundation Award (2006), the IBM Faculty Award (2005, 2007, 2008), the Most Cited Paper Award from EURASIP: Image Communications Journal (2006), the Gamenets Conference Best Paper Award (2011) and the 2011 IEEE Circuits and Systems Society Darlington Award Best Paper Award.

She received three ISO awards for her contributions to the MPEG video compression and streaming international standardization activities, and holds 33 granted US patents. For more information about her research visit: http://medianetlab.ee.ucla.edu/



Alle-Jan van der Veen


“Sensing the Universe: Signal Processing Challenges for Large Radio Telescope Arrays”


Radio astronomy is known for its very large telescope dishes, but currently there is a transition towards the use of large numbers of small elements. For example, the recently commissioned LOFAR low-frequency array uses 50 stations, each with some 200 antennas, and the numbers will be even larger for the Square Kilometer Array, planned for 2020.

Meanwhile some of the existing telescope dishes are being retrofitted with focal plane arrays. These instruments pose interesting challenges for array signal processing. One aspect, which we cover in this talk, is the calibration of such large numbers of antennas, especially if they are distributed over a wide area. Apart from the unknown element gains and phases (which may be directionally dependent), there is the unknown propagation through the ionosphere, which at low frequencies may be diffractive and different over the extent of the array. The talk will discuss several of the challenges, present the underlying data models, and propose some of the answers. We will also touch upon a recent initiative to develop a low-frequency telescope array in space, on a distributed platform formed by a swarm of nanosatellites.

Alle-Jan van der Veen (F'2005) was born in The Netherlands in 1966. He received the Ph.D. degree (cum laude) from TU Delft in 1993. Throughout 1994, he was a postdoctoral scholar at Stanford University. At present, he is a Full Professor in Signal Processing at TU Delft.

He is the recipient of a 1994 and a 1997 IEEE Signal Processing Society (SPS) Young Author Best Paper Award, and has served as Associate Editor of the IEEE Transactions on Signal Processing (1998-2001), chairman of the IEEE SPS Signal Processing for Communications Technical Committee (2002-2004), Member-at-Large of the Board of Governors of the IEEE SPS, Editor-in-Chief of IEEE Signal Processing Letters (2002-2005), Editor-in-Chief of the IEEE Transactions on Signal Processing (2006-2008), and Technical Co-Chair of IEEE ICASSP 2011 (Prague). He currently chairs the IEEE SPS Fellow Reference Committee and is a member of the IEEE TAB Periodicals Review and Advisory Committee.

His research interests are in the general area of system theory applied to signal processing, and in particular algebraic methods for array signal processing, with applications to wireless communications and radio astronomy.