
Sanjoy Mitter

Year: 2007

Citation: For contributions to the unification of communication and control, nonlinear filtering and its relationship to stochastic control, optimization, optimal control, and infinite-dimensional systems theory

Dr. Sanjoy K. Mitter is Professor of Electrical Engineering at the Laboratory for Information and Decision Systems at the Massachusetts Institute of Technology (MIT). Prior to 1965, he worked as a research engineer at Brown Boveri & Co. Ltd., Switzerland (now ASEA Brown Boveri) and the Battelle Institute in Geneva, Switzerland. He taught at Case Western Reserve University from 1965 to 1969. He joined MIT in 1969, where he has been a Professor of Electrical Engineering since 1973. He was the Director of the MIT Laboratory for Information and Decision Systems from 1981 to 1999. He was also a Professor of Mathematics at the Scuola Normale, Pisa, Italy, from 1986 to 1996. Professor Mitter’s other visiting positions include Imperial College, London; the University of Groningen, the Netherlands; INRIA, France; the Tata Institute of Fundamental Research, India; and ETH Zürich, Switzerland. At the University of California, Berkeley, he was the McKay Professor in March 2000 and the Russell Springer Professor from September 2003 to January 2004. He was a Visiting Professor at the California Institute of Technology in January 2005. He is a Fellow of the IEEE and winner of the 2000 IEEE Control Systems Award. In addition, he is a Member of the National Academy of Engineering and is an associate editor of several journals. He is a Foreign Member of the Istituto Veneto di Scienze, Lettere ed Arti. Professor Mitter received his doctorate in 1965 from the Imperial College of Science and Technology, University of London.

Text of Acceptance Speech: 

July 12, 2007. New York, NY

It is a great honor for me to receive the Bellman Award—quite undeserved I believe, but I decided not to emulate Grigori Perelman by refusing to accept the award. I might, however, follow in his footsteps (apparently he has stopped doing Mathematics) and concentrate only on the more conceptual and philosophical aspects of the broad field of Systems and Control.

On an occasion like this it is perhaps appropriate to say a few words about the seminal contributions of Richard Bellman. As we all know, he is the founder of the methodological framework of Dynamic Programming, probably the only general method for systematically and optimally dealing with uncertainty, when uncertainty has a probabilistic description and there is an underlying Markov structure in the description of the evolution of the system. It is often mentioned that the work of Bellman was not as original as it would appear at first sight. There was, after all, Abraham Wald’s seminal work on Optimal Sequential Decisions and the Carathéodory view of the Calculus of Variations, intimately related to Hamilton–Jacobi Theory. But the generality of these ideas, both for deterministic optimal control and for stochastic optimal control with full or partial observations, is undoubtedly due to Bellman. Bellman, I believe, was also the first to present a precise view of stochastic adaptive control using methods of dynamic programming. Now, there are two essential steps in invoking Dynamic Programming: first, invariant embedding, whereby a fixed variational problem is embedded in a potentially infinite family of variational problems; and second, the Principle of Optimality, which states that any sub-trajectory of an optimal trajectory is itself optimal and which is then invoked to characterize optimal trajectories. This is where the Markov structure of the dynamic evolution comes into operation. It should be noted that there is wide flexibility in the invariant embedding procedure, and this needs to be exploited in a creative way. It is this embedding that permits obtaining the optimal control in feedback form (that is, a “control law”, as opposed to open-loop control).
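To make these two steps concrete (an illustration of mine, not part of the speech), here is a minimal sketch of finite-horizon dynamic programming for a toy Markov decision problem; the state space, transition probabilities, and costs are all invented. The backward recursion is the Principle of Optimality at work: the value of a state at time t is assembled from the already-optimal values at time t+1, and the minimizer is automatically a feedback law.

```python
import numpy as np

# Toy finite-horizon Markov decision problem (all numbers illustrative).
T = 5                        # horizon
n_states, n_actions = 3, 2

rng = np.random.default_rng(0)
# P[a, s, s'] = transition probability under action a (rows normalized).
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
# c[s, a] = one-stage cost of taking action a in state s.
c = rng.random((n_states, n_actions))

# Backward induction: V[t, s] is the optimal cost-to-go from state s at time t.
V = np.zeros((T + 1, n_states))            # terminal cost taken to be zero
policy = np.zeros((T, n_states), dtype=int)
for t in range(T - 1, -1, -1):
    # Q[s, a] = immediate cost + expected optimal cost-to-go (Bellman operator).
    Q = c + np.einsum('asn,n->sa', P, V[t + 1])
    V[t] = Q.min(axis=1)                   # Principle of Optimality
    policy[t] = Q.argmin(axis=1)           # optimal control in feedback form

print(V[0], policy[0])   # optimal costs and the first-stage feedback law
```

The embedding is visible in the data structures: rather than solving one problem from one initial state, the recursion solves the whole family indexed by (time, state) at once, and that is precisely what yields a control law rather than an open-loop trajectory.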

The solution of the Partially-Observed Stochastic Control problem in continuous time, leading to the characterization of the optimal control as a function of the unnormalized conditional density of the state given the observations, via the solution of an infinite-dimensional Bellman–Hamilton–Jacobi equation, is one of the crowning achievements of the Bellman view of stochastic control. It is worth mentioning that Stochastic Finance Theory would not exist but for this development. There are still open mathematical questions here that deserve further work. Indeed, the average-cost problem for partially-observed finite-state Markov chains is still open—a natural necessary and sufficient condition for the existence of a bounded solution to the dynamic programming equation is still not available.
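Schematically, and in my notation rather than the speech's, the separation runs as follows: for a controlled signal observed in additive white noise, the unnormalized conditional density evolves by the Duncan–Mortensen–Zakai equation, and the original problem becomes a fully observed control problem whose state is that density. The rendering below is a sketch under standard assumptions (scalar observation, reference-measure formulation), not a precise statement.

```latex
% Controlled signal and observation (schematic):
%   dX_t = b(X_t, u_t)\,dt + dW_t, \qquad dY_t = h(X_t)\,dt + dV_t.
\begin{align*}
  % Duncan--Mortensen--Zakai equation for the unnormalized density:
  d\sigma_t &= \mathcal{A}^{*}_{u_t}\sigma_t\,dt + h\,\sigma_t\,dY_t,\\
  % Value function on densities; the Bellman--Hamilton--Jacobi equation
  % is an equation for V on this infinite-dimensional space:
  V(\sigma) &= \inf_{u(\cdot)}
      \widetilde{\mathbb{E}}\!\left[\int_0^T
      \langle \sigma_t, c(\cdot,u_t)\rangle\,dt
      + \langle \sigma_T, \Phi\rangle \,\middle|\, \sigma_0=\sigma\right].
\end{align*}
```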

Much of my recent work has been concerned with the unification of theories of Communication and Control. More precisely, how does one bring to bear Information Theory to gain understanding of Stochastic Control, and how does one bring to bear the theory of Partially-Observed Stochastic Control to gain qualitative understanding of reliable communication? There does not exist a straightforward answer to this question, since the Noisy Channel Coding Theorem, which characterizes the optimal rate of transmission for reliable communication, requires infinite delay. The encoder in digital communication can legitimately be thought of as a controller and the decoder as an estimator, but they interact in complicated ways. It is only in the limit of infinite delay that the problem simplifies and a theorem like the Noisy Channel Coding Theorem can be proved. This procedure is exactly analogous to passing to the thermodynamic limit in Statistical Mechanics.
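For the record, the standard statement being alluded to (not part of the speech) is the following: reliable communication at rate R is possible exactly when R is below capacity, and achieving this with vanishing error probability requires coding over ever longer blocks, which is the infinite delay in question.

```latex
% Noisy Channel Coding Theorem (memoryless channel, standard statement):
% communication at rate R with vanishing error probability is possible
% if and only if R < C, where
\[
  C \;=\; \max_{p(x)} I(X;Y),
\]
% and driving the error probability to zero at any fixed R < C requires
% the block length n \to \infty, i.e., unbounded delay.
```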

In the doctoral dissertation of Sekhar Tatikonda, and in subsequent work, it was shown that the Shannon Capacity of a Markov Channel with Feedback, under certain information-structure hypotheses, can be characterized as the value function of a partially-observed stochastic control problem. This work in many ways exhibits the power of the dynamic programming style of thinking. I believe that this style of thinking, in the guise of a backward induction procedure, will be helpful in understanding the transmission capabilities of wireless networks. More generally, dynamic programming, when time is replaced by a partially ordered set, is a fruitful area of research.
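The quantity being optimized in that characterization is, in the notation now standard (my rendering, with details hedged), Massey's directed information; the maximization over causally conditioned input distributions is what admits a dynamic-programming formulation.

```latex
% Directed information (Massey), the quantity underlying feedback capacity:
\[
  I(X^n \to Y^n) \;=\; \sum_{i=1}^{n} I(X^i;\, Y_i \mid Y^{i-1}),
\]
% under suitable information-structure assumptions the feedback capacity
% of the Markov channel is
\[
  C_{\mathrm{FB}} \;=\; \lim_{n\to\infty}\, \frac{1}{n}\,
      \max_{p(x^n \| y^{n-1})} I(X^n \to Y^n),
\]
% and the maximization reduces to a partially observed stochastic control
% problem whose information state is the posterior on the channel state.
```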

Can one give an “information flow” view of path estimation of a diffusion process given noisy observations? An estimator, abstractly, can be thought of as a map from the space of observations to a conditional distribution of the estimand given the observations. What is the nature of the flow of information from the observations to the estimator? Is it conservative or dissipative? In joint work with Nigel Newton, I have given a quite complete view of this subject. It turns out that the path estimator can be constructed as a backward likelihood filter, which estimates the initial state, combined with a fully observed stochastic controller moving in forward time, starting at this estimated state; together they solve the problem in the sense that the resulting path-space measure is the requisite conditional distribution. The backward filter dissipates historical information at an optimal rate, namely, that information which is not required to estimate the initial state, and the forward control problem fully recovers this information. The optimal path estimator is conservative. This result establishes the relation between stochastic control and optimal filtering. Somewhat surprisingly, the optimal filter in a stationary situation satisfies a second law of thermodynamics.
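One way to see why stochastic control enters at all is through a standard variational fact (my gloss, not the speech's): the conditional path-space measure is the unique minimizer of a free energy, and minimizing relative entropy over measures induced by controlled diffusions is itself a stochastic control problem.

```latex
% Gibbs variational principle (standard; notation mine). With prior path
% measure P, observation record y, and log-likelihood \ell_y, the posterior
\[
  d\Pi \;\propto\; e^{\ell_y}\,dP
\]
% is the unique minimizer, over probability measures Q on path space, of
% the free energy
\[
  \mathcal{F}(Q) \;=\; D(Q \,\|\, P) \;-\; \mathbb{E}_Q[\ell_y],
\]
% and when Q ranges over laws of controlled diffusions, D(Q||P) is an
% expected quadratic control cost (Girsanov), which is where the
% controller comes from.
```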

What of the future? Undoubtedly we have to understand control under uncertainty in a distributed environment. Understanding the interaction between communication and control in a fundamental way will be the key to developing any such theory. I believe that an interconnection view where sensors, actuators, controllers, encoders, channels and decoders, each viewed abstractly as stochastic kernels, are interconnected to realize desirable joint distributions, will be the “correct” abstract view for a theory of distributed control. Except in the field of distributed algorithms, not much fundamental seems to be known here.
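To make the picture of interconnected stochastic kernels concrete (a toy of my own construction, not from the speech): on finite alphabets a kernel is simply a row-stochastic matrix, and interconnecting sensor, encoder, channel, and decoder in cascade amounts to composing such matrices, the cascade realizing a joint distribution between the quantities at its two ends.

```python
import numpy as np

# Toy illustration (all blocks invented): a stochastic kernel on finite
# alphabets is a row-stochastic matrix K[x, y] = P(output = y | input = x).
rng = np.random.default_rng(1)

def random_kernel(n_in, n_out):
    K = rng.random((n_in, n_out))
    return K / K.sum(axis=1, keepdims=True)

sensor  = random_kernel(4, 3)   # state          -> observation
encoder = random_kernel(3, 2)   # observation    -> channel input
channel = random_kernel(2, 2)   # channel input  -> channel output
decoder = random_kernel(2, 4)   # channel output -> state estimate

# Composition of kernels is matrix multiplication; the cascade is itself
# a stochastic kernel from state to estimate.
cascade = sensor @ encoder @ channel @ decoder

# Given a distribution over the state, the interconnection realizes a
# joint distribution over (state, estimate).
p_state = np.full(4, 0.25)
joint = p_state[:, None] * cascade     # joint[x, xhat]
assert np.isclose(joint.sum(), 1.0)
print(joint)
```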

It is customary to end acceptance discourses on an autobiographical note and I will not depart from this tradition. Firstly, my early education at Presidency College, Calcutta, where I had the privilege of interacting with some of the most brilliant fellow students, decisively formed my intellectual make-up. Whatever culture I acquired, I acquired it at that time. At Imperial College, while I was doing my doctoral work, I was greatly influenced by John Florentin (a pioneer in Stochastic Control), Martin Clark and several other fellow students. I have also been fortunate in my association with two great institutions—MIT and the Scuola Normale, Pisa. I cannot overstate how much I have learnt from my doctoral students, too many to mention by name—Allen gewidmet von denen ich lernte [Dedicated to all from whom I have learnt (taken from the dedication of Günter Grass in “Beim Häuten der Zwiebel” (“Peeling the Onion”))]. I find that they have extraordinary courage in shaping some half-baked idea into a worthwhile contribution. In recent years, my collaborative work with Vivek Borkar and Nigel Newton has been very important for me. I have great intellectual affinity with members of Club 34, the most exclusive club of its kind, and I thank the members of this club for their friendship. There are many others whose intellectual views I share, but at the cost of exclusion let me single out Jan Willems and Pravin Varaiya. I admire their passion for intellectual discourse. Last, but not least, I thank my wife, Adriana, for her love and support. I am sorry she could not be here today. My acceptance speech is dedicated to her.