Richard E. Bellman Control Heritage Award

The Bellman Award is given for distinguished career contributions to the theory or application of automatic control. It is the highest recognition of professional achievement for US control systems engineers and scientists. The recipient must have spent a significant part of his/her career in the USA. The awardee is strongly encouraged to give a plenary presentation at the ACC Awards Luncheon.

Eduardo D. Sontag

For pioneering contributions to stability analysis and nonlinear control, and for advancing the control theoretic foundations of systems biology

Eduardo D. Sontag received his Licenciado in Mathematics at the University of Buenos Aires (1972) and a Ph.D. in Mathematics (1977) under Rudolf E. Kalman at the University of Florida. From 1977 to 2017, he was at Rutgers University, where he was a Distinguished Professor of Mathematics, a Member of the Graduate Faculty of the Departments of Computer Science and of Electrical and Computer Engineering, and a Member of the Cancer Institute of NJ.

Text of Acceptance Speech: 

Video of Acceptance Speech

Richard Bellman was a paragon of deep foundational thinking and interdisciplinary work, so I am deeply grateful to receive an award that honors him. It is especially meaningful that the prize is awarded by the American Automatic Control Council, which brings together such disparate areas of application in engineering, mathematics, and the sciences.

Exactly 50 years ago, as an undergraduate in Buenos Aires looking for a senior project, I discovered the work of Rudolf Kalman, which sparked a stable and robust attraction to control theory that continues to this day. One of my professors met Kalman at a conference, which led to Kalman inviting me to be his student. Kalman’s rigorous mathematical approach inspired research excellence, deep thinking, and clear exposition. The 1970s witnessed an explosion of new and exciting ideas in systems theory, and many of the leaders in the field visited Kalman's Center. I was extremely lucky to have the opportunity to learn from all of them. 

After my PhD, I went to Rutgers, where I was fortunate to collaborate with Hector Sussmann, and to learn so much from him.

Five years ago, I was recruited by Northeastern University, where I have fantastic colleagues, especially Mario Sznaier and Bahram Shafai.

Of course, I am grateful to all who influenced my work, too many to credit here, and to those who applied, enriched, and extended my initial ideas. At the risk of sounding presumptuous, let me share some thoughts about research in systems and control theory. First, it is important to formulate questions that are mathematically elegant and general. Paradoxically, general facts are often easier to prove than special ones, because they are stripped of irrelevant details. Second, we should strive to simplify arguments to their most elementary form. It is the simplest ideas, those that look obvious in retrospect, that are the most influential, as Bellman’s dynamic programming so beautifully illustrates. Third, we should be aware of the essential connection between theory and applications. Applications provide the inspiration for an eventual conceptual synthesis. Conversely, theory is strengthened and refined by working out particular cases and applications. Fourth, one should be cautiously open to new ideas, even those orthogonal to current fashion. But not all new ideas are good: novelty by itself is not enough. Finally, we should not lose sight of the fact that, while fun and intellectually challenging, our ultimate objective is to improve the world through scientific and engineering advances.

Which brings me back to Richard Bellman’s heritage, which we honor today. Years after his foundational work on optimality, Bellman turned to biology and medicine, even starting a mathematical biology journal. I am sure that the mechanistic understanding of behavior at all scales, from cells to organisms, will lead to the control and elimination of disease and the extension of healthy lifespans. I find immunology and its connections to infectious diseases and cancer to be a fascinating field for systems thinking. In addition, the associated engineering field of synthetic biology will lead to new therapeutic approaches as well as scientific understanding, and new mathematics and control problems suggest themselves all the time. In my view, the main value of systems and control to molecular biology will not be in applying deep theoretical results. Instead, conceptual ideas like controls, measurements, robustness, optimization, and estimation are where the main impact of our field will be felt.

Thank you so much.

June 9, 2022

Atlanta, GA USA

ACC 2022

Miroslav Krstic

For transformational contributions in PDE control, nonlinear delay systems, extremum seeking, adaptive control, and stochastic nonlinear stabilization, and their industrial applications

Miroslav Krstic is Distinguished Professor of Mechanical and Aerospace Engineering, holds the Alspach endowed chair, and is the founding director of the Center for Control Systems and Dynamics at UC San Diego. He also serves as Senior Associate Vice Chancellor for Research at UCSD. As a graduate student, Krstic won the UC Santa Barbara best dissertation award and student best paper awards at CDC and ACC. Krstic has been elected Fellow of IEEE, IFAC, ASME, SIAM, AAAS, IET (UK), and AIAA (Assoc. Fellow).

Text of Acceptance Speech: 

Dear Automatic Control colleagues,

I am happy and humbled to receive the Bellman Award.

My profound gratitude goes to the colleagues who supported my nomination. I am thankful to, and deeply moved by, the selection committee and the A2C2, which advanced a candidate in his mid-fifties, an adolescent by Bellman Award standards.

The timing of this award, which recognizes the achievement of an American control systems researcher, carries significance for me. The Bellman award came in the year that happened to be the thirtieth anniversary of my coming to the United States as a graduate student.

It is customary on this occasion for the recipient to say a few words about their formative years and professional trajectory.

I was born and grew up in a small city called Pirot, in remote southeastern Serbia. I was fortunate that my provincial city had one of the top science high schools in former Yugoslavia. And my caring parents spared no expense to provide my brother and me with broader cultural opportunities than those that our hometown could offer.

My undergraduate years at the Department of Electrical Engineering of the University of Belgrade provided me with two things: first, the toughest academic competition I’ve experienced, before or since; and, second, my future wife, whom I met in our freshman math class.

Before Petar Kokotovic gave me a PhD opportunity, I had only an inkling that I might have a shot at some success in research. But, within a few weeks of arriving in Santa Barbara, I had the fortune of solving a problem that had a reputation of being unsolvable, though I didn’t know that. So things moved quickly with research from that point on, and I had Petar’s unlimited attention. I could fill hours talking about being mentored by Petar. But let me just say that, during those Santa Barbara years, Petar’s enthusiasm and support for my work left me feeling that there was nothing more important happening in the world than what I was doing in research. At the same time, with everything I would produce or say, I had the training benefit of a keener, more unforgiving, and yet more nuanced critique than I would ever subsequently encounter, as a researcher or academic administrator.

Of the areas credited to me, the ones that probably come to mind first are PDE backstepping and extremum seeking. Let me describe how these interests started, soon after I left Santa Barbara.

Petar Kokotovic, Richard Murray, and Art Krener had a large project on controlling flow instabilities in jet engines. We solved those problems using reduced-order nonlinear ODE models of those flows. And it was clear that, for a nonlinear control researcher, there was hardly a more fertile ground than fluids. The only problem was: who would provide an ODE reduction for me for the next control design problem I tackled? If fluids people spend their entire careers refining, for a specific type of flow, the reductions from the Navier-Stokes representation to ODEs, it was obvious I could not count on them for control-oriented reduced models. I had to roll up my sleeves and build control methods directly for PDEs. From scratch. Because Riccati equations—in infinite dimension to boot—are not the way to extend PDE control to the nonlinear case. The answer to the challenge of constructive PDE control came in the form of continuum backstepping transformations, employing Volterra operators and easy-to-solve Goursat-form PDEs for the control gain functions. If you are interested in an example of this line of PDE control research, I recommend the paper with Coron, Bastin, and my student Vazquez, which has enabled stabilization of traffic flows in a congested, stop-and-go regime.

How I got drawn to extremum seeking is also interesting. In 1997, a combustion colleague at Maryland pointed me to publications from the 1940s and 1950s on what I would describe as an approach to adaptive control for nonlinear systems. Heuristic, but orders of magnitude simpler than what I had written my PhD on. Attempts at sleep were futile, for several days, until I figured out how to prove stability of this algorithm, using a combination of averaging and singular perturbation theorems. If you wanted to sample one control paper from the last quarter century on extremum seeking, I recommend the one on model-free seeking of Nash equilibria with Tamer Basar and my student Paul Frihauf.
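The scheme described above can be illustrated with a toy simulation. The sketch below is not Krstic's published algorithm, just a minimal perturbation-based extremum seeking loop on a made-up static map (all parameter values are illustrative): a sinusoidal probe is added to the parameter estimate, the washed-out output is demodulated by the same sinusoid, and the product is integrated, driving the estimate toward the maximizer.

```python
import math

def extremum_seeking(J, theta0, a=0.2, omega=5.0, gain=1.0, dt=0.01, steps=60000):
    """Gradient-free maximization of an unknown static map J.

    A sinusoidal perturbation a*sin(omega*t) probes the map; the
    oscillatory part of the output is demodulated by the same sinusoid
    and integrated, so theta_hat drifts along the gradient.
    """
    theta_hat = theta0
    y_mean = J(theta0)  # slow running mean, removes the DC part of y
    for k in range(steps):
        t = k * dt
        y = J(theta_hat + a * math.sin(omega * t))        # probe the map
        y_mean += 0.01 * (y - y_mean)                     # low-pass (washout)
        xi = y - y_mean                                   # oscillatory part of y
        theta_hat += gain * xi * math.sin(omega * t) * dt # demodulate + integrate
    return theta_hat

# Map unknown to the controller, with its maximum at theta* = 2:
theta = extremum_seeking(lambda th: 3.0 - (th - 2.0) ** 2, theta0=0.0)
```

On this quadratic map the estimate settles near theta* = 2; averaging and singular perturbation arguments of the kind mentioned in the speech are what justify separating the fast probing from the slow drift.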

To my students and collaborators, I would like to say: this Bellman award is yours. For your papers, books, theorems, and industrial products.

As I mention students, I want to extend gratitude to two companies that have been the environments in which my former students have been able to thrive and leave a legacy. At ASML, control of extreme ultraviolet photolithography has improved the density of microchips by 2-3 orders of magnitude. At General Atomics, control of aircraft arrestment on carriers has enabled one of the most impressive recently deployed advances in defense technology.

I won’t pretend that it is not a delight to see my name in the list of the 44 recipients of the Bellman award. Scholars of incredible depth and engineers of stunning impact. I’ve studied the list. Amazingly, the numbers of American-born and foreign-born recipients of this US award seem to be the same: 22 each. If you sought an example of how the US is unequaled in extending opportunity to scientific immigrants, like myself, you could hardly find a clearer illustration.

It was also impossible for me to miss in the list that, after India, represented by four Bellman awardees, the second most highly represented foreign country is a certain little country, just a few percent more populous than the city of Atlanta, the country from which Petar Kokotovic, Drago Šiljak, and I came to the US. If I don’t mention this, in the hope of inspiring a few young minds at the Universities of Belgrade, Novi Sad, or Niš, who should?

I couldn’t have made it here without role models and without pioneers who charted the pathways along which it was then not that hard for me to walk. Among them are people who have also generously supported me over the years: Tamer Basar, Manfred Morari, Art Krener, Eduardo Sontag, Masayoshi Tomizuka, Galip Ulsoy, Jason Speyer, Graham Goodwin, Jean-Michel Coron, Petros Ioannou—to limit myself to ten. I hope that, in the remainder of my research career, I more fully deserve their support, as well as that of other friends I don’t mention here but who are aware of the extent of my gratitude and respect.

Let me close and thank you with a quote from my former department chair who astutely observed: “To you guys, in control systems, every other field is a special case of control theory.”

What if that’s true?

June 7, 2022

Atlanta, GA USA

ACC 2022

Galip Ulsoy

For seminal research contributions with industrial impact in the dynamics and control of mechanical systems, especially manufacturing systems and automotive systems

A. Galip Ulsoy is the C.D. Mote, Jr. Distinguished University Professor Emeritus of Mechanical Engineering (ME) and the William Clay Ford Professor Emeritus of Manufacturing at the University of Michigan (UM), Ann Arbor, where he served as the ME Department Chair, Deputy Director of the National Science Foundation (NSF) Engineering Research Center for Reconfigurable Manufacturing Systems, and the Director of the U.S. Army Ground Robotics Reliability Center.

Text of Acceptance Speech: 

To receive the Richard E. Bellman Control Heritage Award is truly an honor. I am thankful first to all of you for attending today after two postponements of these ceremonies due to the pandemic. I am grateful to the honors committee for selecting me, and to my nominator and references for their willingness to put forth and support my nomination.

The Bellman Award is given for “distinguished career contributions to the theory or application of automatic control.” My career in control started as a junior at Swarthmore College in 1972, when I took a course based on the textbook Dynamics of Physical Systems by Robert Cannon. That course really challenged me, and I found myself putting in a lot of time and energy just to get by. That investment sparked my interest, and so as a master's student at Cornell University I worked with Dick Phelan and learned the practical and experimental side of automatic control in the laboratory using analog computers. In 1975 I decided to pursue control engineering for my Ph.D. work, and Prof. Phelan said that, in mechanical engineering at that time, there were really only two choices: MIT or UC Berkeley. So I wound up at UC Berkeley, where I learned controls from Yasundo Takahashi, Masayoshi Tomizuka (Tomi is also a Bellman Award recipient), and Dave Auslander. I not only learned the latest in control theory from the book Control and Dynamic Systems by Takahashi, Rabins and Auslander, but did my first experiments using digital controllers. My doctoral advisor and professional role model, Dan Mote, is a dynamicist, and my research, on reducing sawdust by controlling vibrations of bandsaw blades during cutting, included theory, computation and experiment.

When I started as an Assistant Professor at the University of Michigan in 1980, I had the great fortune to have two very special mentors. The late Elmer Gilbert (another Bellman Award recipient) came to my office to welcome me, to offer his help with the new graduate course I was developing, and to invite me to participate in a College of Engineering control seminar – a regular Friday afternoon seminar which I still continue to attend! The other was my longtime friend and collaborator Yoram Koren, together with whom I conducted many joint research projects, and from whom I learned much of what I know about control of manufacturing systems. Yoram and I had the first digital control computer at UM, a PDP-11, in our laboratory. Michigan was, and is, a wonderful place for control engineering. I had the good fortune to work not only with Elmer and Yoram, but with many outstanding collaborators: Joe Whitesell, the late Pierre Kabamba, Panos Papalambros, Dawn Tilbury, Huei Peng, Ilya Kolmanovsky, Harris McClamroch, Jeff Stein, Gabor Orosz, Chinedum Okwudire and many others! I worked on topics such as automotive belt dynamics, adaptive control of milling, reconfigurable manufacturing, vehicle lane-keeping, co-design of an artifact and its controller, and time delay systems, and I was always richer for the experience. Throughout my professional career I worked extensively with industry, especially the Ford Motor Company, where I collaborated with and learned from excellent engineers like Davor Hrovat and Siva Shivashankar (automotive control), Charles Wu (control of drilling), and Mahmoud Demeri (stamping control).

I would like to recognize my wife, Sue Glowski, who is here today, for her love and support. She was educated in English and Linguistics but is always willing to patiently listen to my latest idea about control, even if she has to eventually ask: "what the hell is an eigenvalue?"

Finally, and most importantly, I want to recognize and thank my students and postdocs. This award recognizes your great ideas, and your fine work, and I am delighted to be here today to accept it on your behalf. Thank you!

June 7, 2022

Atlanta, GA USA

ACC 2022

Irena Lasiecka

For contributions to boundary control of distributed parameter systems

Text of Acceptance Speech: 
Dear President Braatz, colleagues, students and friends.
I am very grateful and indeed humbled by being honored to receive the Richard E. Bellman Control Heritage Award for 2019 and to join the distinguished list of prior recipients. I wish to express my sincerest thanks to those who nominated me and supported my nomination and to the awards committee. I am deeply moved by the honor I receive today.
More as a rule than an exception, such an honor is not a credit to a single individual but rather the result of collective work and many collaborations over the years. This is particularly true in areas which are by nature interdisciplinary. And control theory, as such, is one of these. It offers an excellent example of synergy where purely theoretical questions, mathematical in nature, are prompted and stimulated by technological advances and engineering design.
I was attracted to mathematical control theory from my early days at the University of Warsaw, where I was privileged to join a distinct and (at that time) experimental program called Studies in Applied Mathematics. This was an interdisciplinary initiative run in collaboration by a few home departments. After graduating with a Master's degree, I was fortunate to receive a doctoral fellowship which allowed me to complete my PhD in Applied Mathematics-Control Theory within three years, with a thesis on a problem of non-smooth optimization, which extended the Dubovitskii-Milyutin theory and had applications to control systems with delays.
I am extremely grateful to my mentors of that time: Professors A. Wierzbicki and A. Manitius from Control Theory [the latter now chair at George Mason University], and the late Professor S. Rolewicz and Professor K. Malanowski, both from the Polish Academy of Sciences. They, along with other colleagues, gave me an opportunity to embrace a large spectrum of the field of control theory, including functional analysis, abstract optimization, and differential equations.
My further education took a critical turn at UCLA, which I joined in 1978 at the invitation of the late Professor A.V. Balakrishnan, the 2001 recipient of the Bellman Award. Bal, to all of us. Here, under his mentorship, I was offered the challenge of getting involved in the mathematical area of boundary control theory for Distributed Parameter Systems, still in its infancy at that time, even from the viewpoint of Partial Differential Equations, with many basic mathematical problems still open. That was about the time when Richard Bellman's book on Dynamic Programming appeared, in 1977, rooted in Bellman's equation and the Optimality Principle. I always looked at Bellman as a problem-solving mathematician, and the mathematical theory of boundary control of DPS is in line with this philosophy.
Controlling or observing an evolution equation from a restricted set [such as the boundary of a multi-dimensional bounded domain in which the controlled system evolves] is both a mathematical challenge and a technological necessity within the realm of practical and physically implementable control theory. Most often, the interior of the domain is not accessible to external manipulation. A first goal within the DPS control community at the time was to construct an appropriate control theory, inspired also by the late R. Kalman, the 1997 recipient of the Bellman Award. The main initial contributors were J.L. Lions, A. Bensoussan and their influential school in Paris, and A.V. Balakrishnan and his associates. But DPS come in a large variety, which requires that each distinct class (parabolic, hyperbolic, etc.) be studied on its own, with properties and methods pertinent to it that, however, fail for other classes. The systematic study of boundary control, which leads to a distributional calculus for various distinct classes of physically significant DPS, became the first long-range object of my research. Both the results and the methods are dynamics-dependent. Finite or infinite speed of propagation becomes an essential feature in controllability theory. For instance, the wave equation is exactly boundary-controllable in sufficiently large time, while the heat equation is only null-controllable, yet in arbitrarily short time. Existence, uniqueness and robustness of solutions to nonlinear dynamics were just the first questions asked, but they were still open within the existing PDE culture.
Topics investigated over the years included: optimal control, Riccati and Hamilton-Jacobi-Bellman theory and their numerical implementation, and appropriate controllability and stabilization notions, all in the framework of boundary control of partially observed systems. This research effort, which continues to this very day, was conducted with collaborators and PhD students. It started with my association with A.V. Balakrishnan at UCLA, J.L. Lions at the College de France, and R. Kalman during my 7 years at the University of Florida. And it continued during my subsequent 26 years at the University of Virginia, the home of McShane, and now at the University of Memphis, in both cases with talented PhD students, some of whom now occupy distinguished positions in US academia.
Once the control theory of single distinct DPS classes became mature, engineering applications motivated the need to move on toward the study of more complex DPS consisting of interactive structures where different types of dynamics coupled at an interface define a given control system. Propagation of control properties through the interface then plays a main role.
Thus, in its second phase, my research in DPS evolved toward these coupled interactive systems of several PDEs. Applications include large flexible structures, structural acoustic interaction, fluid-structure interaction, attenuation of turbulence in fluid dynamics [Navier-Stokes] and flutter suppression in nonlinear aero-elasticity. In the latter area, my collaboration with Earl Dowell [Duke Univ.] was most enlightening, and is further proof of the interdisciplinary nature of the field. These problems, while deeply rooted in engineering control technology, were also benchmark models at the forefront of developing a PDE-based mathematical control theory, which accounts for the infinite dimensional nature of continuum mechanics and fluid dynamics.
In closing, I would like to acknowledge with gratitude my personal and professional interactions over the years with people such as the late David Russell [VPI], Walter Littman [U of Minnesota], Giuseppe Da Prato [Scuola Normale, Pisa], Michel Delfour [Univ. of Montreal] and Sanjoy Mitter [MIT], the latter the 2007 recipient of the Bellman Award. Their pioneering works paved the way for further developments along a road map which I am proud to be a part of.
Special thanks to my long-time collaborator and husband Roberto Triggiani, to the late Igor Chueshov [both co-authors of major research monographs, two with Roberto in Cambridge University Press and one with Igor in Monograph Series of Springer], as well as to my former students, now collaborators and colleagues.
Many thanks also to funding agencies such as NSF, AFOSR, ARO and NASA for many years of generous support.
Irena Lasiecka,
Philadelphia, July 11, 2019.

Masayoshi Tomizuka

For seminal and pioneering contributions to the theory and practice of mechatronic systems control

Professor Masayoshi Tomizuka holds the Cheryl and John Neerhout, Jr., Distinguished Professorship Chair. He received his B.S. and M.S. degrees in Mechanical Engineering from Keio University, Tokyo, Japan and his Ph.D. from MIT.

Text of Acceptance Speech: 

Acceptance Video

Dear President Braatz, colleagues, students, ladies and gentlemen:

I feel tremendously honored to receive the Richard Bellman Control Heritage Award.  Thank you to those who nominated me and supported my nomination, to the selection committee, and to the AACC Board for making me this year’s recipient.

I completed my undergraduate studies at Keio University in Japan and my graduate studies at MIT. Following my education at these wonderful institutions, I was able to join the excellent academic environment at the University of California, Berkeley. I am grateful to my teachers and colleagues at these institutions. I thank in particular my PhD advisor Dan Whitney and my early control colleagues at Berkeley, Yasundo Takahashi and David Auslander, and the many bright graduate students I have had the privilege of having in my lab at Berkeley, now approximately 120 PhDs strong. I thank the National Science Foundation and other government sponsors, as well as industrial sponsors, for providing me the resources to maintain the Mechanical Systems Control Laboratory, which is the home of my research group. Last but not least, I thank my wife Miwako for supporting me and our family, permitting me to concentrate on academics and schoolwork for many years, starting almost 50 years ago in my MIT days.

I jumped into the area of dynamic systems and control during my senior year at Keio University. The first book I read was Modern Control Theory by Julius Tou. The book was an excellent summary of state-space control theory, and I was fascinated by the elegant mathematical aspects of the subject. There was no internet back then, of course, and major periodicals such as the IEEE Transactions on Automatic Control and the ASME Journal of Basic Engineering were the best sources for the latest developments in the field. I was frustrated by the time delay between research and publication. About the time I completed my MS at Keio, I was fortunate to receive an admission offer from MIT. The time delay problem was naturally resolved. At MIT, I was inspired by many people, including Dan Whitney, Tom Sheridan and Hank Paynter. Sheridan’s early work on preview control was the starting point of my dissertation work on the “optimal finite preview” problem.

In September 1974, I joined the University of California as an Assistant Professor of Mechanical Engineering.  It’s hard to believe, but I am now completing my 44th year at Berkeley.    

At Berkeley, I have worked on many different mechanical systems. I joined UC Berkeley when large-scale integration technology was starting to make it possible to implement advanced control algorithms on mini- and microcomputers. This allowed me to emphasize both the analytical aspects of control and the laboratory work. This research style continues to this day.

Robots are multivariable and nonlinear. In particular, the configuration-dependent inertia matrix and nonlinear terms are unique to robots. I convinced one of my PhD students, Roberto Horowitz (who is now a professor and chair of the Mechanical Engineering Department at Berkeley), to work with me on model reference adaptive control as applied to robots. Since then, robot control has remained a major research topic in my group. Our current research emphasizes efficiency and safety in human-robot interactions and merges model-based control and machine learning to make robot systems intelligent.

I worked on machining for a while. One control issue in machining is the dependence of the input-output dynamics on cutting conditions and tool wear. One day, Jun-Ho Oh (who is now a professor at KAIST) took me down to the lab to show me model reference adaptive control running on a Bridgeport milling machine. It was cleverly implemented and was the first application of modern adaptive control theory to machining.

In many mechanical systems involving rotating parts, we encounter periodic disturbances with known periods. Repetitive control applies to this class of disturbances. I learned of it from visitors from Japan in the mid-1980s. Tsu-Chin Tsao (who is now a professor at UCLA) and I then developed our version of repetitive control algorithms, emphasizing a discrete-time formulation and easy implementation.

Another fundamental control problem for mechanical systems is tracking arbitrarily shaped reference inputs. Feedforward control is popular in tracking, but unstable system zeros make the problem complicated. To overcome this issue, I proposed canceling the phase shift induced by the unstable zeros and introduced zero phase error tracking (ZPET) control in the late 1980s. The citation count of that paper has now reached 1,600.
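The zero-phase idea can be seen in a small numerical check. This is only an illustrative sketch, not the original ZPETC derivation, and the plant numbers are made up: a zero polynomial B(z^-1) = 1 + 1.5 z^-1 has a non-minimum-phase zero at z = -1.5, so inverting it directly would be unstable. Instead, the compensator multiplies by the reversed polynomial B(z) and normalizes by B(1)^2; the product B(z^-1)B(z) is real and non-negative on the unit circle, so the reference-to-output phase error is zero at every frequency, with unity gain at DC.

```python
import cmath

# Made-up plant zero polynomial with a non-minimum-phase zero at z = -1.5:
#   B(z^-1) = 1 + 1.5 z^-1   (direct inversion would be unstable)
b = [1.0, 1.5]
dc = sum(b) ** 2          # B(1)^2 = 6.25, normalizes the DC gain to 1

def overall(w):
    """Frequency response at z = e^{jw} of B(z^-1) * B(z) / B(1)^2,
    i.e. the plant zeros times the ZPET-style compensator numerator."""
    z = cmath.exp(1j * w)
    B_fwd = b[0] + b[1] / z   # plant factor B(z^-1)
    B_rev = b[0] + b[1] * z   # reversed polynomial B(z) in the compensator
    return B_fwd * B_rev / dc

for w in [0.0, 0.5, 1.0, 2.0]:
    H = overall(w)
    # H is real and non-negative, so the phase is (numerically) zero
    print(f"w={w}: gain={abs(H):.3f}, phase={cmath.phase(H):+.2e}")
```

The phase comes out as zero at every frequency; the price of the approach is the frequency-dependent gain, which equals 1 at DC but deviates from 1 at higher frequencies.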

In the mid-1980s, UC Berkeley started the Partners for Advanced Transit and Highways (PATH) program under the sponsorship of Caltrans. Automated highway systems were a topic of interest for quite a few control professors. Karl Hedrick and I were the primary faculty participants from ME: Karl worked on control in the longitudinal direction of vehicles, and I in the lateral direction. My first PhD student on this topic was Huei Peng (who is now a professor at the University of Michigan). During the past five years or so, autonomous vehicles have become very hot, as we all know, and I now have quite a few students working to blend control and machine learning for applications to vehicles.

I have been fortunate to have had the opportunity to address a variety of challenging mechanical control problems over the span of my career so far. My research has been, and continues to be, rooted in the mechatronic approach; namely, I have worked on the synergistic integration of mechanical systems with sensing, computation, and control theory. This approach provides the opportunity for academic research to have broad impacts on control engineering in practice, and I am honored to have had a hand in helping to advance a small part of it.

Thank you very much for this award. I am extremely grateful and honored.

ACC 2018

Milwaukee, WI USA

June 28, 2018

John S. Baras

For innovative contributions to control theory, stochastic systems, and networks and academic leadership in systems and control

John S. Baras holds a permanent joint appointment as a professor in the Department of Electrical and Computer Engineering and the Institute for Systems Research. He was the founding director of the ISR, which is one of the first six National Science Foundation Engineering Research Centers. Dr.

Text of Acceptance Speech: 

Dear President Masada, colleagues, students, ladies and gentlemen.

I am deeply moved by this award and honor, and truly humbled to join a group of such stellar members of our extended systems and control community, several of whom have been my mentors, teachers and role models throughout my career.

I am grateful to those who nominated me and supported my nomination and to the selection committee for their decision to honor my work and accomplishments.

I was fortunate through my entire life to receive the benefits of exceptional education. From special and highly selective elementary school and high school back in Greece, to the National Technical University of Athens for my undergraduate studies and finally to Harvard University for my graduate studies. My sincere and deep appreciation for such an education goes to my parents, who instilled in me a rigorous work ethic and the ambition to excel, to my teachers in Greece for the sound education and training in basic and fundamental science and engineering, and to my teachers and mentors at Harvard and MIT (Roger Brockett, Sanjoy Mitter and the late Jan Willems) and the incredibly stimulating environment in Cambridge in the early 70’s.

Many thanks are also due to my students and colleagues at the University of Maryland, in the US and around the world, and in particular in Sweden and Germany, for their collaboration, constructive criticism and influence through the years. Several are here and I would like to sincerely thank you all very much.

I am grateful to the agencies that supported my research: NSF, ARO, ARL, ONR, NRL, AFOSR, NIST, DARPA, NASA. I am particularly grateful to NSF for the support that helped us establish the Institute for Systems Research (ISR) at the University of Maryland in 1985, and to NASA for the support that helped us establish the Maryland Center for Hybrid Networks (HyNet) in 1992.

I would also like to thank many industry leaders and engineers for their advice, support, and collaboration during the establishment and development of both the ISR and HyNet to the renowned centers of excellence they are today.

Most importantly I am grateful to my wife Mary, my partner, advisor and supporter, for her love and selfless support and sacrifices during my entire career.

When I came to the US in 1970 I was debating whether to pursue a career in Mathematics, Physics or Engineering. The Harvard-MIT exceptional environment allowed me freedom of choice. Thanks to Roger Brockett I was convinced that systems and control, our field, would be the best choice as I could pursue all of the above. It has indeed proven to be a most exciting and satisfying choice. But there were important adjustments that I had to make and lessons I learned.

I did my PhD thesis work on infinite-dimensional realization theory, and worked extensively with complex-variable methods, Hardy function algebras, the famous Carleson corona theorem, and other rather esoteric mathematics. From my early work at the Naval Research Laboratory in electronic warfare (the "cross-eye" system) and in urban traffic control (adaptive control of queues) I learned, the hard way, the difficulty and critical importance of building appropriate models and turning initially amorphous problems into models amenable to systems and control thinking and methods. I learned the importance of judiciously blending data-based and model-based techniques.

In the seventies, I took a successful excursion into detection, estimation, and filtering with quantum mechanical models, inspired by deep-space laser communication problems, where my mathematical physics training at Harvard came in handy. I then worked on nonlinear filtering, trying to understand how physicists turned nonlinear inference problems into linear ones, and to investigate why we could not do the same for nonlinear filtering and partially observed stochastic control. This led me to unnormalized conditional densities, the Duncan-Mortensen-Zakai equation, and to information states. It then led me naturally to construct nonlinear observers as asymptotic limits of nonlinear filtering problems, and to the complete solution of the nonlinear robust output feedback control problem (the nonlinear H-infinity problem) via two coupled Hamilton-Jacobi-Bellman equations. We even investigated the development of special chips to implement real-time solutions, a topic we are revisiting currently.
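For reference (a standard formulation, not part of the speech): for a diffusion observed in additive white noise, the unnormalized conditional density mentioned above evolves according to the Duncan-Mortensen-Zakai equation, a linear stochastic partial differential equation. Its linearity is what makes the nonlinear inference problem "linear" in the sense alluded to above.

```latex
% Signal:      dX_t = f(X_t)\,dt + dW_t, with generator \mathcal{L}
% Observation: dY_t = h(X_t)\,dt + dV_t
% The unnormalized conditional density q(t,x) satisfies the (linear) DMZ equation
dq(t,x) \;=\; \mathcal{L}^{*} q(t,x)\,dt \;+\; h(x)\, q(t,x)\, dY_t ,
% where \mathcal{L}^{*} is the formal adjoint of \mathcal{L}.
% Normalizing q(t,\cdot) recovers the conditional density of X_t given Y_{[0,t]}.
```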

With the development and progress of the ISR I worked on many problems including: speech and image compression breaking the Shannon separation of source and channel coding, manufacturing processes, network management, communication network protocols, smart materials (piezoelectric, shape memory alloys), mobile wireless network design, network security and trust, and more recently human-machine perception and cognition, networked control systems, networked cyber-physical systems, combining metric temporal logic and reachability analysis for safety, collaborative decision management in autonomous vehicles and teams of humans and robots, new analytics for learning and for the design of deep learning networks mapping abstractions of the brain cortex, quantum control and computing.

Why am I telling you about all these diverse topics? Not to attract your admiration, but because at the heart of all my work are fundamental principles and methods from systems and control, often appropriately extended and modified. Even in my highest-impact (economic and social) work in conceiving, demonstrating, and commercializing Internet-over-satellite services (with billions in sales worldwide; remember me when you use the Internet in planes over oceans), we modified the flow control algorithm (TCP) and the physical path to avoid having TCP interpret the satellite path delay as congestion. That is, we used systems and control principles.

Our science and engineering, systems and control, has unparalleled unifying power and efficiency, provided we are willing to build the new models required by new applications (especially models combining multiple physics with cyber logic) and to learn and apply the incredible new capabilities and technologies being developed in information technology and materials. As is apparent especially in this conference (the ACC), and in the CDC, by any measure our field is exceptionally alive and well, and continues to surprise many other disciplines with its contributions and accomplishments, which now extend even into biology, medicine, and healthcare. So, for the many young people here: please continue the excitement, continue getting involved in challenging and high-impact problems, and continue the long tradition and record of accomplishments we have established for so many years. And most importantly, continue seeking the common ground and unification of our methods and models.

Let me close with what I consider some major challenges and promising broad areas for the next 10 years or so:

1)     Considering networked control systems, we need to understand what we mean by a "network" and the various abstractions and system aspects involved. Clearly, more than one dynamic graph is involved. This calls for new foundations for control, communication, information, and computing.

2)     Systems and control scientists and engineers are the best qualified to further develop the modern field of Model-Based Systems Engineering (MBSE): the design, manufacturing/implementation, and operation of complex systems with heterogeneous physical and cyber components, even including humans.

3)     The need for analog computing is back, for example in real-time and progressive learning and in CPS. Some of the very early successes of control were implemented in analog electromechanical systems because of the need for real-time behavior. Yet we have no synthesis theory and methodology for such systems, owing to the heterogeneous physics that may be involved; nothing like what we have for VLSI.

Thank you all very much! This is indeed a very special day for me! 

Jason L. Speyer

For pioneering contributions to deterministic and stochastic optimal control theory and their applications to aerospace engineering, including spacecraft, aircraft, and turbulent flows

Jason L. Speyer received a B.S. in aeronautics and astronautics from MIT, Cambridge and Ph.D. in applied mathematics from Harvard University, Cambridge, MA. He is the Ronald and Valerie Sugar Distinguished Professor in Engineering in the Mechanical and Aerospace Engineering Department and the Electrical Engineering Department, UCLA. He was the Harry H. Power Professor in Engineering Mechanics, University of Texas, Austin from 1976-1990.

Text of Acceptance Speech: 
I am extremely grateful for and humbled by the honor of receiving the Richard E. Bellman Control Heritage Award for 2016. I thank those who recommended me and the awards committee for supporting that nomination. I also thank my colleagues, students, and family, and especially my wife, for the support I have received over these many years.
For me this award comes at an auspicious time and place. Boston is the place of my birth and my home. It was sixty years ago that I graduated from Malden High School and entered a world I could never have anticipated: a world where I would be nurtured for the next twenty years by many people, some of whom have been recipients of this esteemed award.
I enrolled in the Department of Aeronautics at MIT, which after Sputnik became the Department of Aeronautics and Astronautics. In my junior year I entered the space age. More consequential for me was that the department head was Doc (Charles Stark) Draper[1], whose second volume of his three-volume series on Instrument Engineering (1952) was one of the first books on what we know as classical control, covering such topics as the Evans root locus, Bode plots, the Nyquist criterion, and Nichols charts. Doc Draper instituted an undergraduate course in classical control that I took in my junior year. This inspired me to take a graduate course and to write my undergraduate thesis in controls.
After graduation in 1960 I left Boston to work for Boeing in Seattle. There, I worked with my lead engineer, Raymond Morth, who introduced me to the new world of control theory using state space that was just emerging in the early 1960s. I learned of the dynamic programming of Richard Bellman for global sufficiency of an optimal trajectory, and of the Pontryagin maximum principle, inspired by the inability of dynamic programming to solve certain classes of optimization problems. The Bushaw problem of determining the minimum time to the origin for a double integrator was just such a problem, since the optimal return function in dynamic programming is not differentiable at the switching curve, and so the Bellman theory did not apply. Interestingly, for my bachelor's thesis I applied the results of the Bushaw problem to the minimum-time problem of bringing the yaw and yaw rate of an aircraft to the origin. However, at that time I had no idea about the ramifications of the Bushaw problem for optimization theory. I also learned of the work of Rudolf Kalman in estimation, of Arthur Bryson and Henry Kelley in the development of numerical methods for determining optimal constrained trajectories, and of J. Halcombe (Hal) Laning and Richard Battin on the determination of orbits for moon rendezvous.
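For readers unfamiliar with the Bushaw problem, here is its standard statement and solution (not part of the original speech): drive a double integrator to the origin in minimum time with bounded control. The optimal control is bang-bang, switching on a parabolic curve, and the minimum-time (return) function fails to be differentiable exactly on that curve.

```latex
% Double integrator with |u| \le 1:
\dot{x}_1 = x_2, \qquad \dot{x}_2 = u
% Switching curve (two parabolic arcs meeting at the origin):
\Gamma = \left\{ (x_1, x_2) : x_1 = -\tfrac{1}{2}\, x_2 \lvert x_2 \rvert \right\}
% Time-optimal feedback law (bang-bang):
u^{*}(x) =
\begin{cases}
 -\operatorname{sgn}\!\bigl( x_1 + \tfrac{1}{2}\, x_2 \lvert x_2 \rvert \bigr), & x \notin \Gamma,\\[2pt]
 -\operatorname{sgn}(x_2), & x \in \Gamma \setminus \{0\}.
\end{cases}
% The minimum-time function T(x) is continuous but not differentiable on \Gamma,
% which is why a smooth-solution dynamic programming argument fails here.
```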
After an incredible year at Boeing I returned to Boston to work in the Analytical Research Department at Raytheon, where Art Bryson was a consultant. There, I worked with a student of Bryson's, Walter Denham. We were contracted by MIT's Instrumentation Laboratory, monitored by Richard Battin, to enhance the Apollo autonomous navigation system over the trans-lunar orbit. We developed a scheme for determining, using a sextant, the optimal angle-measurement sequence between the best stars in a catalogue and the near and far horizons of the Earth or the Moon. This angle-measurement sequence minimized a linear function of the terminal value of the error covariance of position and velocity near the Earth or Moon. Our optimization scheme, which required a matrix dynamic constraint, seemed to be a first. This scheme, used in the Apollo autonomous navigation system, was tested on Apollo 8 and used on every mission thereafter. My next task at Raytheon was working on a neighboring optimal guidance scheme, work done with Art Bryson and John Breakwell. I remember travelling to Lockheed's Palo Alto Research Laboratory and meeting with John: the beginning of a long and delightful collegial relationship.
After my first two years at Raytheon I somehow convinced Art Bryson to take me on as a graduate student at Harvard, supported by the Raytheon Fellowship program. To understand the intellectual level I had to contend with: of the four examiners on my doctoral preliminary exam committee, three (Art Bryson, Larry (Yu-Chi) Ho, and Bob (Kumpati) Narendra, all of whom have been my lifetime colleagues) were recipients of the Richard E. Bellman Control Heritage Award. I was also fortunate to take a course taught by Rudy Kalman. Surprisingly, he taught many of the control areas he had pioneered, except filtering for Gauss-Markov systems (the Kalman filter): the Aizerman conjecture, the Popov criterion and Lyapunov functions, duality in linear systems, optimality for linear-quadratic systems, etc. After finishing my PhD thesis on optimal control problems with state-variable inequality constraints, I returned to Raytheon. Fortunately, Art Bryson made me aware of some interest at Raytheon in using modern control theory to develop guidance laws for a new missile. At Raytheon's Missile Division I worked with Bill O'Halloran on the homing missile guidance system, where Bill worked on the development of the Kalman filter and I worked on the development of the linear-quadratic closed-form guidance gains, which had to account for the nonminimum-phase autopilot. This homing missile, the Patriot missile system, appears to be the first fielded system using modern control theory.
I left Boston for New York to work at Analytical Mechanics Associates (AMA), in particular with Hank Kelley. Although I formed a lasting friendship with Hank, I lasted only seven months in New York before returning to the AMA office in Cambridge. Unfortunately, the Cambridge NASA Center closed, and I took a position under Dick Battin at the Instrumentation (later the Charles Stark Draper) Laboratory at MIT. There, I worked on necessary and sufficient conditions for optimality of singular control problems, the linear-exponential-Gaussian control problem, optimal control problems with state-variable inequality constraints, optimal control problems with kinks in the cost criterion and dynamic functions, and periodic optimal control problems. On many of these problems I collaborated with David Jacobson, whom I first met in the open forum of my PhD final exam. This remarkable collaboration culminated in our book on optimal control theory, which appeared in 2010. Also, during my tenure at Draper, I took a year's postdoctoral leave at the Weizmann Institute in Israel. There, I learned that I could work very happily by myself. A few years after returning to Draper, I left Boston and started what is now a forty-year career in academia.
As I look back, I feel so fortunate that I had such great mentoring in my early years, and by so many who have won this award. My success over the last forty years has been due to my many students, who have worked with me to mold numerous new ideas together. Today, I find the future as bright as at any time in my past. I have embarked on such new directions as estimation and control of linear systems with additive noises described by heavy-tailed Cauchy probability density functions, with my colleague Moshe Idan at the Technion, and deep-space navigation using pulsars as beacons, with JPL.
To conclude, I am grateful to so many of my teachers, colleagues and students, who have nurtured, inspired, and educated me.  Without them and my loving wife and family, I would not be here today. Thank you all. 

[1] Boldface names are recipients of the Richard E. Bellman Control Heritage Award.


Thomas F. Edgar

For a career of outstanding educational and professional leadership in automatic control, mentoring a large number of practicing professionals, and research contributions in the process industries, especially semiconductor manufacturing
Text of Acceptance Speech: 
When I look back upon my career in the field of control, I think it may have started in 1957, when Sputnik was launched by the Russians. I was in the seventh grade at the time. The reaction of our local school board to losing the space race was to have a group of students take algebra one year earlier, in the eighth grade. During high school, I participated in my class science fairs and won at the state level. When I was a freshman at the University of Kansas in 1967, I was given the opportunity to do independent research in the area of nucleate boiling. I also was exposed to computer programming, which was a fairly new topic in undergraduate engineering at that time. I became interested in numerical analysis and selected Princeton University for doctoral study, because Professor Leon Lapidus was a leading authority on that topic.
I discovered that his interest in numerical analysis was driven by solving control problems (specifically, two-point boundary value problems). The optimal control project I selected was on singular bang-bang and minimum-time control. I used discrete dynamic programming with penalty functions (influenced by Bellman and Kalman) to solve this particular class of control problems. In 1971 I accepted a faculty position at the University of Texas.
That era was the heyday of optimal control in the aerospace program. Many of us in chemical engineering wanted to apply these ideas to chemical plants; however, there were some obstacles. Strict economic justification was required for any commercial application, versus government funding for space vehicles. In addition, proprietary considerations prevented technology transfer from one plant to another. It wasn't until the late 1970s, when Honeywell introduced the distributed digital control system, that computer process control really began to become more popular (and economical) in industry. In 1972, I purchased a Data General minicomputer to be used with a distillation column for process control. That computer was very antiquated by today's standards; in fact, we had to use paper tape to input software instructions to the machine.
Given that there was a lack of industrial receptivity to advanced control research and NSF funding was very limited, I looked around for other types of problems where my skills might be valuable. In 1974 the energy crisis was rearing its head due to the Arab oil embargo. Funding agencies like NSF and the Office of Coal Research in the U.S. were quite interested in how we could use the large domestic resource of coal to meet the shortage of oil and gas. I came across some literature about a technology called underground coal gasification (UCG), where one would gasify the coal resource in situ as a way of avoiding the mining step. I recall reading it was a very promising technology but they didn't know how to control it. That sparked my interest as a possible topic where I could apply my skill set. But I first had to learn about the long history of coal gasification and coal utilization in general.
There were many issues to be addressed before developing control methodologies for UCG. There was a need to develop three-dimensional modeling tools that would predict the recovery of the coal as well as the composition of the gas produced (similar to a chemical reactor). Thus 80% of the research work was on modeling as opposed to control. It was also a highly multidisciplinary project involving rock mechanics and environmental considerations. I worked in this area for about 10 years. Later, in the mid-1980s, the U.S. no longer had an energy crisis, so I started looking at other possible areas for the application of modeling and control.
In 1984 a new senior faculty member joined my department from Texas Instruments. He was very familiar with semiconductor manufacturing and its lack of process control, and he was able to teach me a lot about that industry. Fortunately, I did not have to learn a new field on my own, since I was Department Chair with limited discretionary time. The same issues were present as for UCG: models were needed in order to develop control strategies. I have continued working in that area, with over 20 graduate students spread out over the past 25 years, and process control is now a mature technology in semiconductor manufacturing (see my plenary talk at this year's ACC).
During the 1980s, I became interested in textbook writing and particularly the need to develop a new textbook in process control. I began collaborating with two colleagues at UC Santa Barbara (Dale Seborg and Duncan Mellichamp) and thought that UCSB would be a great place to spend some time in the summer writing and giving short courses on the topic. The course notes were eventually developed into a textbook eight years later. We now are working on the fourth edition of the book and it is the leading textbook for process control in the world. It has been a very rewarding endeavor to work with other educators, and I would recommend that anyone writing a textbook collaborate with other co-authors as a way of improving the product. In 2010, we added a fourth co-author (Frank Doyle) to cover biosystems control; in fact, he is receiving the practice award from AACC today.
In the early 1990s at UT Austin, Jim Rawlings and I concluded that we wanted to work on control problems that would impact industrial practice rather than just writing more technical papers that maybe only a few people would read. So we formed the Texas Modeling and Control Consortium (TMCC) which had 16 member companies. Over twenty plus years the consortium has morphed into one involving multiple universities investigating process control, monitoring, optimization, and modeling. When Jim left the University of Texas and went to Wisconsin, we decided to keep the consortium going, so it became TWMCC (Texas Wisconsin Modeling and Control Consortium). Joe Qin replaced Jim on the faculty at UT but then 10 years later he left for USC. So our consortium became TWCCC (Texas Wisconsin California Control Consortium). I have learned a lot from both Joe and Jim over the years and have been able to mentor them in their professional development as faculty members. I am now mentoring a new UT control researcher (Michael Baldea) as we continue to close the gap between theory and practice.
One other thing I should mention is my involvement with the American Control Conference. I first gave a paper in 1972 at what was then known as the Joint Automatic Control Conference (JACC) and have been coming to this meeting ever since. In the 1970s each meeting was entirely run by a different society each year. To improve the business model and instill more interdisciplinarity among the five participating societies, in 1982 we started the American Control Conference, with leadership from Mike Rabins, John Zaborsky, and also Bill Levine, who is here today. I was Treasurer of the 1982 meeting, which was held in Arlington, VA. That began an extremely successful series of meetings that is one of the best conference values today. It is very beneficial to attend, to see control research carried out in the other societies and not just your own.
During my 40+ year career, I have had a lot of help from colleagues in academia and industry and collaborated with over 100 bright graduate students. I also should thank my wife Donna, who has put up with me over these many years since we first started going to the computer center at the University of Kansas for dates 50 years ago.
My advice to younger researchers is to think 10 years out as to what the new areas might be and start learning about them. Fortunately, today’s control technology is more ubiquitous than ever and the future is bright, although the path forward may not be clear. I still remember a discussion I had with a fellow graduate student before leaving Princeton in 1971 as we embarked on academic careers. His view was that after all the great things achieved by luminaries like Pontryagin, Bellman, and Kalman, all that's really left are the crumbs… So I guess that means that I must have had a pretty crummy career.

Dimitri P. Bertsekas

For contributions to the foundations of deterministic and stochastic optimization-based methods in systems and control

Dimitri P. Bertsekas' undergraduate studies were in engineering at the National Technical University of Athens, Greece. He obtained his MS in electrical engineering at George Washington University, Washington, DC, in 1969, and his Ph.D. in system science in 1971 at the Massachusetts Institute of Technology.

Text of Acceptance Speech: 
I feel honored and grateful for this award. After having spent so much time on dynamic programming and written several books about its various facets, receiving an award named after Richard Bellman has a special meaning for me.
It is common in award acceptance speeches to thank one's institutions, mentors, and collaborators, and I have many to thank. I was fortunate to be surrounded by first class students and colleagues, at high quality institutions, which gave me space and freedom to work in any direction I wished to go. As Lucille Ball has told us, "Ability is of little account without opportunity."
Also common when receiving an award is to chart one's intellectual roots and journey, and I will not depart from this tradition. It is customary to advise scholarly Ph.D. students in our field to take the time to get a broad many-course education, with substantial mathematical content, and special depth in their research area. Then upon graduation, to use their Ph.D. research area as the basis and focus for further research, while gradually branching out into neighboring fields, and networking within the profession. This is good advice, which I often give, but this is not how it worked for me at all!
I came from Greece with an undergraduate degree in mechanical engineering, got my MS in control theory at George Washington University in three semesters while holding a full-time job in an unrelated field, and two years later finished my Ph.D. thesis at MIT on control under set membership uncertainty. I benefited from the stimulating intellectual atmosphere of the Electronic Systems Laboratory (later LIDS), nurtured by Mike Athans and Sanjoy Mitter, but because of my short stay there, I graduated with little knowledge beyond Kalman filtering and LQG control. Then I went to teach at Stanford, in a department that combined mathematical engineering and operations research (in which my background was rather limited) with economics (in which I had no exposure at all). In my department there was little interest in control theory, and none at all in my thesis work. Although I had never completed a first course in analysis, my first assignment was to teach unsuspecting students optimization by functional analytic methods from David Luenberger's wonderful book. The optimism and energy of youth carried me through, and I found inspiration in what I saw as an exquisite connection between elegant mathematics and interesting practical problems. Studying David Luenberger's other works (including his Nonlinear Programming book) and working next door to him had a lasting effect on me. Two more formative experiences at Stanford were studying Terry Rockafellar's Convex Analysis book (and teaching a seminar course from it), and most importantly teaching a new course on dynamic programming, for which I studied Bellman's books in great detail. My department valued rigorous mathematical analysis that could be broadly applied, and provided a stimulating environment where both could thrive. Accordingly, my course aimed to combine Bellman's vision of wide practical applicability with the emerging mathematical theory of Markov decision processes.
The course was an encouraging success at Stanford, and set me on a good track. It has survived to the present day at MIT, enriched by subsequent developments in theoretical and approximation methodologies.
After three years at Stanford, I taught for five years in the quiet and scholarly environment of the University of Illinois. There I finally had a chance to consolidate my mathematics and optimization background, to a great extent through research. In particular, it helped a lot that, with the spirit of youth, I took the plunge into the world of the measure-theoretic foundations of stochastic optimal control, aiming to expand the pioneering Borel space framework of David Blackwell, in the company of my then Ph.D. student Steven Shreve.
I changed direction again by moving back to MIT, to work in the then-emerging field of data networks and the related field of distributed computation. There I had the good fortune to meet two colleagues with whom I interacted closely over many years: Bob Gallager, who coauthored with me a book on data networks in the mid-80s, and John Tsitsiklis, who worked with me first as a doctoral student and then as a colleague, and over time coauthored with me two research monographs, on distributed algorithms and on neuro-dynamic programming, as well as a probability textbook. Working with Bob and John, and writing books with them, was exciting and rewarding, and made MIT a special place for me.
Nonetheless, at the same time I was getting distracted by many side activities, such as writing books on nonlinear programming and dynamic programming, getting involved in applications of queueing theory and power systems, and personally writing several network optimization codes. By that time, however, I had realized that simultaneous engagement in multiple, diverse, and frequently changing intellectual activities (while not broadly recommended) was a natural and exciting mode of operation that worked well for me, and that it also had considerable benefits: it stimulated the cross-fertilization of ideas, and allowed the creation of more broadly integrated courses and books.
In retrospect I was very fortunate to get into methodologies that eventually prospered. Dynamic programming developed perhaps beyond Bellman's own expectation. He correctly emphasized the curse of dimensionality as a formidable impediment in its use, but probably could not have foreseen the transformational impact of the advances brought about by reinforcement learning, neuro-dynamic programming, and other approximation methodologies. When I got into convex analysis and optimization, it was an emerging theoretical subject, overshadowed by linear, nonlinear, and integer programming. Now, however, it has taken center stage thanks to the explosive growth of machine learning and large scale computation, and it has become the lynchpin that holds together most of the popular optimization methodologies. Data networks and distributed computation were thought promising when I got involved, but it was hard to imagine the profound impact they had on engineering, as well as the world around us today. Even set membership description of uncertainty, my Ph.D. thesis subject, which was totally overlooked for nearly fifteen years, eventually came to the mainstream, and has connected with the popular areas of robust optimization, robust control, and model predictive control. Was it good judgement or fortunate accident that steered me towards these fields? I honestly cannot say. Albert Einstein wisely told us that "Luck is when opportunity meets preparation." In my case, I also think it helped that I resisted overly lengthy distractions in practical directions that were too specialized, as well as in mathematical directions that had little visible connection to the practical world.
An academic journey must have companions to learn from and share with, and for me these were my students and collaborators. In fact it is hard to draw a distinction, because I have always viewed my Ph.D. students as my collaborators. On more than one occasion, collaboration around a Ph.D. thesis evolved into a book, as in the cases of Angelia Nedic and Asuman Ozdaglar, or into a long multi-year series of research papers after graduation, as in the cases of Paul Tseng and Janey Yu. I am very thankful to my collaborators for our stimulating interactions, and for all that I learned from them. They are many and I cannot mention them all, but they were special to me and I was fortunate to have met them. I wish I had met Richard Bellman; I only corresponded with him a couple of times (he was the editor of my first book on dynamic programming). I still keep several of his books close to me, including his scintillating and highly original book on matrix theory. I am also satisfied that I paid part of my debt to him in a small way: I used systematically, for the first time I think in a textbook, in 1987, the name "Bellman equation" for the central fixed-point equation of infinite-horizon discrete-time dynamic programming. It is a name that is widely used now, and most deservedly so.
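For reference (a standard formulation, not part of the speech): for an infinite-horizon discounted discrete-time problem with stage cost g, dynamics f, and discount factor alpha in (0,1), the fixed-point equation in question reads:

```latex
% Bellman equation: the optimal cost-to-go J^* is the fixed point of the DP operator
J^{*}(x) \;=\; \min_{u \in U(x)} \Bigl[\, g(x,u) \;+\; \alpha\, J^{*}\bigl(f(x,u)\bigr) \,\Bigr],
\qquad \forall\, x \in X .
% For stochastic problems the next state is random and the second term becomes
% an expectation: \alpha\, \mathbb{E}\bigl[ J^{*}(x') \mid x, u \bigr].
```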

A. Stephen Morse

For fundamental contributions to linear systems theory, geometric control theory, logic-based and adaptive control, and distributed sensing and control

A. Stephen Morse was born in Mt. Vernon, New York. He received a BSEE degree from Cornell University, MS degree from the University of Arizona, and a Ph.D. degree from Purdue University. From 1967 to 1970 he was associated with the Office of Control Theory and Application (OCTA) at the NASA Electronics Research Center in Cambridge, Mass. Since 1970 he has been with Yale University where he is presently the Dudley Professor of Engineering.

Text of Acceptance Speech: 

President Rhinehart, Lucy, Danny, fellow members of the greatest technological field in the world, I am, to say the least, absolutely thrilled and profoundly humbled to be this year's recipient of the Richard E. Bellman Control Heritage Award. I am grateful to those who supported my nomination, as well as to the American Automatic Control Council for selecting me.

I am indebted to a great many people who have helped me throughout my career. Among these are my graduate students, postdocs, and colleagues including, in recent years, John Baillieul, Roger Brockett, Bruce Francis, Art Krener, and Jan Willems. In addition, I've been fortunate enough to have had the opportunity to collaborate with some truly great people including Brian Anderson, Ali Belabbas, Chris Byrnes, Alberto Isidori, Petar Kokotovic, Eduardo Sontag, and Murray Wonham. I've been lucky enough to have had a steady stream of research support from a combination of agencies including AFOSR, ARO, and NSF.
I actually never met Richard Bellman, but I certainly was exposed to much of his work. While I was still a graduate student at Purdue, I learned all about Dynamic Programming, Bellman's Equation, and that the Principle of Optimality meant "Don't cry over spilled milk." Then I found out about the Curse of Dimensionality. After finishing school I discovered that there was life before dynamic programming, even in Bellman's world. In particular I read Bellman's 1953 monograph on the Stability Theory of Differential Equations. I was struck by this book's clarity and ease of understanding, which of course are hallmarks of Richard Bellman's writings. It was from this stability book that I first learned about what Bellman called his "fundamental lemma." Bellman used this important lemma to study the stability of perturbed differential equations which are nominally stable. Bellman first derived the lemma in 1943, apparently without knowing that essentially the same result had been derived by Thomas Gronwall in 1919 for establishing the uniqueness of solutions to smooth differential equations. Not many years after learning about what is now known as the Bellman-Gronwall Lemma, I found myself faced with the problem of trying to prove that the continuous-time version of the Egardt-Goodwin-Ramadge-Caines discrete-time model reference adaptive control system was "stable." As luck would have it, I had the Bellman-Gronwall Lemma in my hip pocket and was able to use it to easily settle the question. As Pasteur once said, "Luck favors the prepared mind."
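For reference, the lemma in question can be stated as follows (this is one standard integral form, added editorially; it does not appear in the speech): if u is continuous and nonnegative, c is a nonnegative constant, and k is continuous and nonnegative, then

```latex
% Bellman-Gronwall Lemma (integral form): an implicit integral bound on u
% implies an explicit exponential bound.
u(t) \;\le\; c + \int_{t_0}^{t} k(s)\, u(s)\, ds
\quad \Longrightarrow \quad
u(t) \;\le\; c \, \exp\!\Bigl( \int_{t_0}^{t} k(s)\, ds \Bigr),
\qquad t \ge t_0 .
```

This is the tool alluded to above: a perturbation bound on a nominally stable system yields an integral inequality of the first kind, and the lemma converts it into an explicit stability estimate.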
After leaving school I joined the Office of Control Theory and Application at the now defunct NASA Electronics Research Center in Cambridge, Mass. OCTA had just been formed and was headed by Hugo Schuck. OCTA's charter was to bridge the gap between theory and application. Yes, people agonized about the so-called theory-application gap way back then. One has to wonder if the agony was worth it. Somehow the gap, if it really exists, has not prevented the field from bringing to fruition a huge number of technological advances and achievements including landing on the moon, cruise control, minimally invasive robotic surgery, advanced agricultural equipment, anti-lock brakes, and a great deal more. What gap? The only gap I know about sells clothes.
In the late 1990s I found myself one day listening to lots of talks about UAVs at a contractors meeting at the Naval Postgraduate School in Monterey Bay, California. I had a Saturday night layover, and so I spent Saturday, by myself, going to the Monterey Bay Aquarium. I was totally awed by the massive fish tank display there, and in particular by how a school of sardines could so gracefully move through the tank, sometimes bifurcating and then merging to avoid larger fish. With UAVs in the back of my mind, I had an idea: Why not write a proposal on coordinated motion and cooperative control for the NSF's new initiative on Knowledge and Distributed Intelligence? Acting on this, I was fortunate to be able to recruit a dream team: Roger Brockett, for his background in nonlinear systems; Naomi Leonard, for her knowledge of underwater gliders; Peter Belhumeur, for his expertise in computer vision; and biologists Danny Grunbaum and Julia Parrish, for their vast knowledge of fish schooling. We submitted a proposal aimed at trying to understand, on the one hand, the traffic rules which large animal aggregations such as fish schools and bird flocks use to coordinate their motions and, on the other, how one might use similar concepts to coordinate the motion of man-made groups. The proposal was funded, and at the time the research began in 2000, the playing field was almost empty. The project produced several pieces of work about which I am especially proud. One made a connection between the problem of maintaining a robot formation and the classical idea of a rigid framework; an offshoot of this was the application of graph rigidity theory to the problem of localizing a large, distributed network of sensors.
Another thrust started when my physics-trained graduate student Jie Lin ran across a paper in Physical Review Letters by Tamás Vicsek and co-authors which provided experimental justification for why a group of self-driven particles might end up moving in the same direction as a result of local interactions. Jie Lin, my postdoc Ali Jadbabaie, and I set out to explain the observed phenomenon, but were initially thwarted by what seemed to be an intractable convergence question for time-varying, discrete-time, linear systems. All attempts to address the problem using standard tools such as quadratic Lyapunov functions failed. Finally Ali ran across a theorem by Jacob Wolfowitz, and with the help of Marc Artzrouni at the University of Pau in France, a convergence proof was obtained. We immediately wrote a paper and submitted it to a well-known physics journal, where it was promptly rejected because the reviewers did not like theorems and lemmas. We then submitted a full-length version of the work to the TAC, where it was eventually published as the paper "Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor Rules."
Over the years, many things have changed. The American Control Conference was once the Joint Automatic Control Conference and was held at universities. Today the ACC proceedings sit on a tiny flash drive about the size of two pieces of bubble gum, while a mere 15 years ago the proceedings consisted of 6 bound volumes weighing about 10 pounds and taking up approximately 1100 cubic inches of space on one's bookshelf. And people carried those proceedings home on planes; of course, there were no checked baggage fees back then.
The field of automatic control itself has undergone enormous and healthy changes. When I was a student, problem formulations typically began with "Consider the system described by the differential equation." Today things are different, and one of the most obvious changes is that problem formulations often include not only differential equations but also graphs and networks. The field has broadened its outlook considerably, as this American Control Conference clearly demonstrates.
And where might things be going in the future? Take a look at the "Impact of Control Technology" papers on the CSS website, including the nice article about cyber-physical systems by Kishan Baheti and Helen Gill. Or try to attend the workshop on "Future Directions in Control Theory" which Fariba Fahroo is organizing for AFOSR.
Automatic control is a really great field and I love it. However, it is also probably the most difficult field to explain to non-specialists. Paraphrasing Donald Knuth: "A {control} algorithm will have to be seen to be believed."
I believe that most people do not understand what a control engineer does or what a control system is. This of course is not an unusual situation. But it is a problem. IBM, now largely a service company, faced a similar problem trying to explain itself after it stopped producing laptops. We of course are primarily a service field. Perhaps, like IBM, we need to take some time to rethink how we should explain what we do.
Thank you very much for listening, and enjoy the rest of the conference.