## Richard E. Bellman Control Heritage Award

The Bellman Award is given for distinguished career contributions to the theory or application of automatic control. It is the highest recognition of professional achievement for US control systems engineers and scientists.

The awardee is expected to make a short acceptance speech at the AACC Awards Ceremonies during the ACC.

The recipient must have spent a significant part of his/her career in the USA.

### 2023

First of all: I'm very sorry to not be there in person. Over the last few years I've really missed my control colleagues from around the world.

I'm very honored and grateful to receive the Richard E. Bellman award. Bellman is one of the most frequently occurring names in my papers and books. He's right behind Gauss and Lyapunov, so he's in very good company. Bellman is also one of my heroes, for formulating sequential decision making, also known as control, in such a clear way, as well as developing an elegant solution method that is often practical.

On the other hand, it's a bit scary to receive an award with the word 'heritage' in it. Is there a hidden message there? Like, maybe it's time for me to retire?

I've outlined my trajectory from pure math to EECS and control before, so I'll skip right to the many thanks I need to give.

My first thanks are to Charlie Desoer, Leon Chua, Shankar Sastry, Pravin Varaiya, and Alberto Sangiovanni. They are why I transferred from Math to EECS at Berkeley more than 40 years ago. I've never regretted that move. At the same time, I'm very happy that my initial training was in pure math, with Andy Gleason.

I thank all of my former and current PhD students, post-docs, research visitors, and research collaborators and co-authors. 'Thanks' really isn't the right word; these are the people who did the work. I'm proud of what we've done and created. *I* had a lot of fun, and I suspect my research collaborators mostly did too. (Maybe not when we were going over the 10th revision of a paper.) Special thanks to my collaborator Lieven Vandenberghe, with whom I've written two books, with another one in the oven.

I want to thank the students in my classes. My classes are big, and I've been teaching for almost 40 years now, so there are a lot of them. The students come from a wide range of fields and backgrounds. They were the training and test sets for explaining ideas. And they suggested plenty of new and interesting ideas. I have learned a lot from them.

I am very grateful to my colleagues in EE at Stanford. I really like my colleagues. And that's after serving as department chair. (You have to have been chair to fully understand that.) They are awesome, and I'm proud to be their colleague.

I am grateful for my colleagues in control and optimization across the world. Control is a great field, which I fell into at Berkeley during my PhD, under the influence of Charlie Desoer and Shankar Sastry. Control is my intellectual home, it’s where I grew up, and where I’ve made a huge number of very good life-long friends. I’m really proud to be a part of this community.

Thanks also to my optimization colleagues, especially Arkadi Nemirovsky and Yuri Nesterov. And super special thanks to Boris Polyak, who was, like Bellman, a hero of mine.

I'm also very grateful to have spent time in industry, in multiple areas. I've learned an enormous amount from my industry friends, especially how someone with industry knowledge and experience, along with some good street-fighting skills, can do very well in a practical setting.

And finally, family. It has been a fantastic adventure, and I could not be more grateful to have shared it with Anna, my wife of more than 35 years, and our two great 'kids' Nick and Nora, now in their early 30s.

Thank you all again.

For pioneering contributions and sustained leadership in the development and application of advanced optimization algorithms.

### 2022

Richard Bellman was a paragon of deep foundational thinking and interdisciplinary work, so I am deeply grateful to receive an award that honors him. It is especially meaningful that the prize is awarded by the Automatic Control Council, which brings together such disparate areas of applications in engineering, mathematics, and the sciences.

Exactly 50 years ago, as an undergraduate in Buenos Aires looking for a senior project, I discovered the work of Rudolf Kalman, which sparked a stable and robust attraction to control theory that continues to this day. One of my professors met Kalman at a conference, which led to Kalman inviting me to be his student. Kalman’s rigorous mathematical approach inspired research excellence, deep thinking, and clear exposition. The 1970s witnessed an explosion of new and exciting ideas in systems theory, and many of the leaders in the field visited Kalman's Center. I was extremely lucky to have the opportunity to learn from all of them.

After my PhD, I went to Rutgers, where I was fortunate to collaborate with Hector Sussmann, and to learn so much from him.

Five years ago, I was recruited by Northeastern University, where I have fantastic colleagues, especially Mario Sznaier and Bahram Shafai.

Of course, I am grateful to all who influenced my work, too many to credit here, and to those who applied, enriched, and extended my initial ideas. At the risk of sounding presumptuous, let me share some thoughts about research in systems and control theory. First, it is important to formulate questions that are mathematically elegant and general. Paradoxically, general facts are often easier to prove than special ones, because they are stripped of irrelevant details. Second, we should strive to simplify arguments to their most elementary form. It is the simplest ideas, those that look obvious in retrospect, that are the most influential, as Bellman’s dynamic programming so beautifully illustrates. Third, we should be aware of the essential connection between theory and applications. Applications provide the inspiration for an eventual conceptual synthesis. Conversely, theory is strengthened and refined by working out particular cases and applications. Fourth, one should be cautiously open to new ideas, even those orthogonal to current fashion. But not all new ideas are good: novelty by itself is not enough. Finally, we should not lose sight of the fact that, while fun and intellectually challenging, our ultimate objective is to improve the world through scientific and engineering advances.

Which brings me back to Richard Bellman’s heritage, which we honor today. Years after his foundational work on optimality, Bellman turned to biology and medicine, even starting a mathematical biology journal. I am sure that the mechanistic understanding of behavior at all scales, from cells to organisms, will lead to the control and elimination of disease and the extension of healthy lifespans. I find immunology and its connections to infectious diseases and cancer to be a fascinating field for systems thinking. In addition, the associated engineering field of synthetic biology will lead to new therapeutic approaches as well as scientific understanding, and new mathematics and control problems suggest themselves all the time. In my view, the main value of systems and control to molecular biology will not be in applying deep theoretical results. Instead, conceptual ideas like controls, measurements, robustness, optimization, and estimation are where the main impact of our field will be felt.

Thank you so much.

June 9, 2022

Atlanta, GA USA

ACC 2022

For pioneering contributions to stability analysis and nonlinear control, and for advancing the control theoretic foundations of systems biology

### 2021

### Miroslav Krstić

Dear Automatic Control colleagues,

I am happy and humbled to receive the Bellman Award.

My profound gratitude goes to the colleagues who supported my nomination. I am thankful to, and deeply moved by, the selection committee and the A2C2, which advanced a candidate in his mid-fifties, an adolescent by Bellman award standards.

The timing of this award, which recognizes the achievement of an American control systems researcher, carries significance for me. The Bellman award came in the year that happened to be the thirtieth anniversary of my coming to the United States as a graduate student.

It is customary on this occasion for the recipient to say a few words about their formative years and professional trajectory.

I was born and grew up in a small city called Pirot, in remote southeastern Serbia. I was fortunate that my provincial city had one of the top science high schools in former Yugoslavia. And my caring parents spared no expense to provide my brother and me with broader cultural opportunities than those that our hometown could offer.

My undergraduate years at the Department of Electrical Engineering of the University of Belgrade provided me with two things. First, the toughest academic competition I’ve experienced, before or since. And, second, I met my future wife in our freshman math class.

Before Petar Kokotovic gave me a PhD opportunity, I had only an inkling that I might have a shot at some success in research. But, within a few weeks of arriving in Santa Barbara, I had the fortune of solving a problem that had a reputation of being unsolvable, though I didn’t know that. So things moved quickly with research from that point on, and I had Petar’s unlimited attention. I could fill hours on being mentored by Petar. But let me just say that, during those Santa Barbara years, Petar’s enthusiasm and support for my work left me feeling that there was nothing more important happening in the world than what I was doing in research. At the same time, with everything I would produce or say, I had the training benefit of a keener, more unforgiving, and yet more nuanced critique than I would ever subsequently encounter, as a researcher or academic administrator.

Of the areas credited to me, the ones that probably come to mind first are PDE backstepping and extremum seeking. Let me describe how these interests started, soon after I left Santa Barbara.

Petar Kokotovic, Richard Murray, and Art Krener had a large project on controlling flow instabilities in jet engines. We solved those problems using reduced-order nonlinear ODE models of those flows. And it was clear that, for a nonlinear control researcher, there was hardly a more fertile ground than fluids. The only problem was: who would provide an ODE reduction for me for the next control design problem I tackled? If fluids people spend their entire careers refining, for a specific type of flow, the reductions from the Navier-Stokes representation to ODEs, it was obvious I could not count on them for control-oriented reduced models. I had to roll up my sleeves and build control methods directly for PDEs. From scratch. Because Riccati equations—in infinite dimension to boot—are not the way to extend PDE control to the nonlinear case. The answer to the challenge of constructive PDE control came in the form of continuum backstepping transformations, employing Volterra operators and easy-to-solve Goursat-form PDEs for the control gain functions. If you are interested in an example of this line of PDE control research, I recommend the paper with Coron, Bastin, and my student Vazquez, which has enabled stabilization of traffic flows in a congested, stop-and-go regime.

How I got drawn to extremum seeking is also interesting. In 1997, a combustion colleague at Maryland pointed me to publications from the 1940s and 1950s on what I would describe as an approach to adaptive control for nonlinear systems. Heuristic, but orders of magnitude simpler than what I had written my PhD on. Attempts at sleep were futile, for several days, until I figured out how to prove stability of this algorithm, using a combination of averaging and singular perturbation theorems. If you wanted to sample one control paper from the last quarter century on extremum seeking, I recommend the one on model-free seeking of Nash equilibria with Tamer Basar and my student Paul Frihauf.

To my students and collaborators, I would like to say: this Bellman award is yours. For your papers, books, theorems, and industrial products.

As I mention students, I want to extend gratitude to two companies that have been the environments in which my former students have been able to thrive and leave a legacy. At ASML, control of extreme ultraviolet photolithography has improved the density of microchips by 2-3 orders of magnitude. At General Atomics, control of aircraft arrestment on carriers has enabled one of the most impressive and deployed recent advances in defense technology.

I won’t pretend that it is not a delight to see my name in the list of the 44 recipients of the Bellman award. Scholars of incredible depth and engineers of stunning impact. I’ve studied the list. Amazingly, the numbers of American-born and foreign-born recipients of this US award seem to be the same: 22 each. If you sought an example of how the US is unequaled in extending opportunity to scientific immigrants, like myself, you could hardly find a clearer illustration.

It was also impossible for me to miss in the list that, after India, represented by four Bellman awardees, the second most highly represented foreign country is a certain little country, just a few percent more populous than the city of Atlanta, the country from which Petar Kokotovic, Drago Šiljak, and I came to the US. If I don’t mention this, in the hope of inspiring a few young minds at the Universities of Belgrade, Novi Sad, or Niš, who should?

I couldn’t have made it here without role models and without pioneers who charted the pathways along which it was then not that hard for me to walk. Among them are people who have also generously supported me over the years: Tamer Basar, Manfred Morari, Art Krener, Eduardo Sontag, Masayoshi Tomizuka, Galip Ulsoy, Jason Speyer, Graham Goodwin, Jean-Michel Coron, Petros Ioannou—to limit myself to ten. I hope that, in the remainder of my research career, I more fully deserve their support, as well as that of other friends I don’t mention here but who are aware of the extent of my gratitude and respect.

Let me close and thank you with a quote from my former department chair who astutely observed: “To you guys, in control systems, every other field is a special case of control theory.” What if that’s true?

June 7, 2022

Atlanta, GA USA

ACC 2022

For transformational contributions in PDE control, nonlinear delay systems, extremum seeking, adaptive control, stochastic nonlinear stabilization and their industrial applications

### 2020

To receive the Richard E. Bellman Control Heritage Award is truly an honor. I am thankful first to all of you for attending today after two postponements of these ceremonies due to the pandemic. I am grateful to the honors committee for selecting me, and to my nominator and references for their willingness to put forth and support my nomination.

The Bellman Award is given for “distinguished career contributions to the theory or application of automatic control.” My career in control started as a junior at Swarthmore College in 1972 when I took a course based on the textbook Dynamics of Physical Systems by Robert Cannon. That course really challenged me, and I found myself putting in a lot of time and energy just to get by. That investment sparked my interest, and so as a master's student at Cornell University I worked with Dick Phelan and learned the practical and experimental side of automatic control in the laboratory using analog computers. In 1975 I decided to pursue control engineering for my Ph.D. work, and Prof. Phelan said that, in mechanical engineering at that time, there were really only two choices: MIT or UC Berkeley. So I wound up at UC Berkeley where I learned controls from Yasundo Takahashi, Masayoshi Tomizuka (Tomi is also a Bellman Award recipient), and Dave Auslander. I not only learned the latest in control theory from the book Control and Dynamic Systems by Takahashi, Rabins and Auslander, but did my first experiments using digital controllers. My doctoral advisor and professional role model, Dan Mote, is a dynamicist, and my research was on reducing sawdust by controlling vibrations of bandsaw blades during cutting and included theory, computation and experiment.

When I started as an Assistant Professor at the University of Michigan in 1980, I had the great fortune to have two very special mentors. The late Elmer Gilbert (another Bellman Award recipient) came to my office to welcome me, to offer his help with the new graduate course I was developing, and to invite me to participate in a College of Engineering control seminar – a regular Friday afternoon seminar which I still continue to attend! The other was my longtime friend and collaborator Yoram Koren, together with whom I conducted many joint research projects, and from whom I learned much of what I know about control of manufacturing systems. Yoram and I had the first digital control computer, a PDP-11, at UM in our laboratory. Michigan was, and is, a wonderful place for control engineering. I had the good fortune to work not only with Elmer and Yoram, but also with many outstanding collaborators: Joe Whitesell, the late Pierre Kabamba, Panos Papalambros, Dawn Tilbury, Huei Peng, Ilya Kolmanovsky, Harris McClamroch, Jeff Stein, Gabor Orosz, Chinedum Okwudire and many others! I worked on topics such as automotive belt dynamics, adaptive control of milling, reconfigurable manufacturing, vehicle lane-keeping, co-design of an artifact and its controller, time delay systems, and I was always richer for the experience. Throughout my professional career I worked extensively with industry, especially the Ford Motor Company, where I collaborated with and learned from excellent engineers like Davor Hrovat and Siva Shivashankar (automotive control), Charles Wu (control of drilling), and Mahmoud Demeri (stamping control).

I would like to recognize my wife, Sue Glowski, who is here today, for her love and support. She was educated in English and Linguistics but is always willing to patiently listen to my latest idea about control, even if she has to eventually ask: "what the hell is an eigenvalue?"

Finally, and most importantly, I want to recognize and thank my students and postdocs. This award recognizes your great ideas, and your fine work, and I am delighted to be here today to accept it on your behalf. Thank you!

June 7, 2022

Atlanta, GA USA

ACC 2022

For seminal research contributions with industrial impact in the dynamics and control of mechanical systems especially manufacturing systems and automotive systems

### 2019

Dear President Braatz, colleagues, students and friends.

I am very grateful and indeed humbled by being honored to receive the Richard E. Bellman Control Heritage Award for 2019 and to join the distinguished list of prior recipients. I wish to express my sincerest thanks to those who nominated me and supported my nomination and to the awards committee. I am deeply moved by the honor I receive today.

More as a rule than an exception, such an honor is not a credit to a single individual but rather the result of collective work and many collaborations over the years. This is particularly true in areas which are by nature interdisciplinary. And control theory, as such, is one of these. It offers an excellent example of synergy where purely theoretical questions, mathematical in nature, are prompted and stimulated by technological advances and engineering design.

I was attracted to mathematical control theory from my early days at the University of Warsaw, where I was privileged to join a distinct and (at that time) experimental program, called Studies in Applied Mathematics. This was an interdisciplinary initiative under the collaboration of a few home departments. After graduating with a Master's degree, I was fortunate to receive a doctoral fellowship which allowed me to complete my PhD in Applied Mathematics-Control Theory within 3 years, with a thesis on a problem of non-smooth optimization, which extended Dubovitskii and Milyutin's work and had applications to control systems with delays.

I am extremely grateful to my mentors of that time: Professors A. Wierzbicki and A. Manitius from Control Theory [the latter now chair at George Mason University], the late Professor S. Rolewicz and Professor K. Malanowski, both from the Polish Academy of Sciences. They, along with other colleagues, gave me an opportunity to embrace a large spectrum of the field of control theory, including functional analysis, abstract optimization, and differential equations.

My further education took a critical turn at UCLA in Los Angeles, which I joined in 1978, at the invitation of the late Professor A.V. Balakrishnan, the 2001 recipient of the Bellman Award. He was 'Bal' to all of us. Here, under his mentorship, I was offered the challenge to get involved in the mathematical area of boundary control theory for Distributed Parameter Systems, still in its infancy at that time, even from the viewpoint of Partial Differential Equations, with many basic mathematical problems still open. That was about the time when Richard Bellman's book on Dynamic Programming appeared, in 1977, rooted in Bellman's equation and the Optimality Principle. I always looked at Bellman as a problem-solving mathematician, and the mathematical theory of boundary control of DPS is in line with this philosophy.

Controlling or observing an evolution equation from a restricted set [such as the boundary of a multi-dimensional bounded domain where the controlled system evolves] is both a mathematical challenge and a technological necessity within the realm of practical and physically implementable control theory. Most often, the interior of the domain is not accessible to external manipulations. A first goal at the time within the DPS control community was to construct an appropriate control theory, inspired also by the late R. Kalman, the 1997 recipient of the Bellman Award. The main initial contributors were J.L. Lions, A. Bensoussan and their influential school in Paris, and A.V. Balakrishnan and his associates. But DPS come in a large variety, which requires that each distinct class (parabolic, hyperbolic, etc.) be studied on its own, with properties and methods pertinent to it that, however, fail for other classes. The systematic study of boundary control, which leads to distributional calculus for various distinct classes of physically significant DPS, became the first long-range object of my research. Both the results and the methods are dynamics-dependent. Finite or infinite speed of propagation becomes an essential feature in controllability theory. For instance, the wave equation is boundary exactly controllable in sufficiently large time, while the heat equation is only null-controllable, yet in arbitrarily short time. Existence, uniqueness and robustness of solutions to nonlinear dynamics were just the first questions asked, but still open within the existing PDE culture.

Topics investigated over the years included: optimal control, Riccati and H-J-Bellman theory and their numerical implementation, appropriate controllability and stabilization notions, all in the framework of boundary control of partially observed systems. This research effort, which continues to this very day, was conducted with collaborators and PhD students. It started with my association with A.V. Balakrishnan at UCLA, J.L. Lions at College de France and R. Kalman during my 7 years at the University of Florida. And it continued during my subsequent 26 years at the University of Virginia, the home of McShane, and now at the University of Memphis, in both cases with talented PhD students, some of whom now occupy distinguished positions in US academia.

Once the control theory of single distinct DPS classes became mature, engineering applications motivated the need to move on toward the study of more complex DPS consisting of interactive structures where different types of dynamics coupled at an interface define a given control system. Propagation of control properties through the interface then plays a main role.

Thus, in its second phase, my research in DPS then evolved toward these coupled interactive systems of several PDEs. Applications include large flexible structures, structural acoustic interaction, fluid-structure interaction, attenuation of turbulence in fluid dynamics [Navier Stokes] and flutter suppression in nonlinear aero-elasticity. In the latter area, my collaboration with Earl Dowell [Duke Univ.] was most enlightening, and further proof of the interdisciplinary nature of the field. These problems, while deeply rooted in engineering control technology, were also benchmark models at the forefront of developing a PDE-based mathematical control theory, which accounts for the infinite dimensional nature of continuum mechanics and fluid dynamics.

In closing, I would like to acknowledge with gratitude my personal and professional interaction over the years with people such as the late David Russell [VPI], Walter Littmann [U of Minnesota], Giuseppe Da Prato [Scuola Normale, Pisa], Michel Delfour [Univ. of Montreal] and Sanjoy Mitter [MIT], the latter the 2007 recipient of the Bellman award. Their pioneering works paved the way to further developments along a road-map which I am proud to be a part of.

Special thanks to my long-time collaborator and husband Roberto Triggiani, to the late Igor Chueshov [both co-authors of major research monographs, two with Roberto in Cambridge University Press and one with Igor in Monograph Series of Springer], as well as to my former students, now collaborators and colleagues.

Many thanks also to funding agencies such as NSF, AFOSR, ARO and NASA for many years of generous support.

July 11, 2019.

Philadelphia

For contributions to boundary control of distributed parameter systems

### 2018

Dear President Braatz, colleagues, students, ladies and gentlemen:

I feel tremendously honored to receive the Richard Bellman Control Heritage Award. Thank you to those who nominated me and supported my nomination, to the selection committee, and to the AACC Board for making me this year’s recipient.

I completed my undergraduate studies at Keio University in Japan and my graduate studies at MIT. Following my education at these wonderful institutions, I was able to join the excellent academic environment at the University of California, Berkeley. I am grateful to my teachers and colleagues at these institutions. I thank in particular my PhD advisor Dan Whitney, my early control colleagues at Berkeley, Yasundo Takahashi and David Auslander, and the many bright graduate students I have had the privilege of advising in my lab at Berkeley, now approximately 120 PhDs strong. I thank the National Science Foundation and other government sponsors as well as industrial sponsors for providing me resources to maintain the Mechanical Systems Control laboratory, which is the home of my research group. Last but not least, I thank my wife Miwako for supporting me and our family, permitting me to concentrate on academics and schoolwork for many years, starting almost 50 years ago in my MIT days.

I jumped into the area of dynamic systems and control during my senior year at Keio University. The first book I read was Modern Control Theory by Julius Tou. The book was an excellent summary of state-space control theory, and I was fascinated by the elegant mathematical aspects of the subject. There was no internet back then of course, and major periodicals such as IEEE Transactions on Automatic Control and ASME Journal of Basic Engineering were the best sources to find the latest developments in the field. I was frustrated by the time delay between the time of research and publication. About the time I completed my MS at Keio, I was fortunate to receive an admission offer from MIT. The time delay problem was naturally resolved. At MIT, I was inspired by many people including Dan Whitney, Tom Sheridan and Hank Paynter. Sheridan’s early work on preview control was the starting point of my dissertation work on the “optimal finite preview” problem.

In September 1974, I joined the University of California as an Assistant Professor of Mechanical Engineering. It’s hard to believe, but I am now completing my 44th year at Berkeley.

At Berkeley, I have worked on many different mechanical systems. I joined UC Berkeley when large-scale integration technology was starting to make it possible to implement advanced control algorithms using mini and micro computers. This allowed me to emphasize both the analytical aspects of control and the laboratory work. This research style still continues now.

Robots are multivariable and nonlinear. In particular, a configuration-dependent inertial matrix and nonlinear terms are unique for robots. I convinced one of my PhD students, Roberto Horowitz (who is now a professor and chair of the Mechanical Engineering Department at Berkeley), to work with me on model reference adaptive control as it applied to robots. Since then, robot control has remained a major research topic in my group. Our current research emphasizes efficiency and safety in human-robot interactions and merging model-based control and machine learning to make the robot system intelligent.

I worked on machining for a while. One control issue with machining is the dependence of input-output dynamics on cutting conditions and tool wear. One day, Jun-Ho Oh (who is now a professor at KAIST), took me down to the lab to show me model reference adaptive control on a Bridgeport milling machine. It was cleverly implemented and was the first application of modern adaptive control theory to machining.

In many mechanical systems involving rotational parts, we encounter periodic disturbances with known periods. Repetitive control is applied to this class of disturbances. I learned of it from visitors from Japan in the mid-1980s. Tsu-Chin Tsao (who is now a professor at UCLA) and I then developed our version of repetitive control algorithms emphasizing discrete time formulation and easy implementation.

Another fundamental control problem for mechanical systems is tracking arbitrarily shaped reference inputs. Feedforward control is popular in tracking, but unstable system zeros make the problem complicated. To overcome this issue, I proposed canceling the phase shift induced by unstable zeros and introduced zero phase error tracking (ZPET) control in the late 1980s. The citation count of this paper has now reached 1,600.

In the mid-1980s, UC Berkeley started the Partners for Advanced Transit and Highways (PATH) program under the sponsorship of Caltrans. Automated highway systems were a topic of interest for quite a few control professors. Karl Hedrick and I were the primary faculty participants from ME: Karl worked on controls in the longitudinal direction and I in the lateral direction of vehicles. My first PhD student on this topic was Huei Peng (who is now a professor at the University of Michigan). During the past five years or so, autonomous vehicles have become very hot as we all know, and I now have quite a few students working to blend control and machine learning for applications to vehicles.

I have been fortunate to have had the opportunity to address a variety of challenging mechanical control problems over the span of my career so far. My research has been and continues to be rooted in the mechatronic approach; namely, I have worked on the synergetic integration of mechanical systems with sensing, computation, and control theory. This approach provides the opportunity for academic research to have broad impacts on control engineering in practice, and I am honored to have had a hand in helping to advance a small part of it.

Thank you very much for this award. I am extremely grateful and honored.

ACC 2018

Milwaukee, WI USA

June 28, 2018

For seminal and pioneering contributions to the theory and practice of mechatronic systems control

### 2017

Dear President Masada, colleagues, students, ladies and gentlemen:

I am deeply moved by this award and honor, and truly humbled to join a group of such stellar members of our extended systems and control community, several of whom have been my mentors, teachers and role models throughout my career.

I am grateful to those who nominated me and supported my nomination and to the selection committee for their decision to honor my work and accomplishments.

I was fortunate throughout my entire life to receive the benefits of an exceptional education: from a special and highly selective elementary school and high school back in Greece, to the National Technical University of Athens for my undergraduate studies, and finally to Harvard University for my graduate studies. My sincere and deep appreciation for such an education goes to my parents, who instilled in me a rigorous work ethic and the ambition to excel; to my teachers in Greece for the sound education and training in basic and fundamental science and engineering; and to my teachers and mentors at Harvard and MIT (Roger Brockett, Sanjoy Mitter and the late Jan Willems) and the incredibly stimulating environment in Cambridge in the early 70’s.

Many thanks are also due to my students and colleagues at the University of Maryland, in the US and around the world, and in particular in Sweden and Germany, for their collaboration, constructive criticism and influence through the years. Several are here and I would like to sincerely thank you all very much.

I am grateful to the agencies that supported my research: NSF, ARO, ARL, ONR, NRL, AFOSR, NIST, DARPA, NASA. I am particularly grateful to NSF for the support that helped us establish the Institute for Systems Research (ISR) at the University of Maryland in 1985, and to NASA for the support that helped us establish the Maryland Center for Hybrid Networks (HyNet) in 1992.

I would also like to thank many industry leaders and engineers for their advice, support, and collaboration during the establishment and development of both the ISR and HyNet to the renowned centers of excellence they are today.

Most importantly I am grateful to my wife Mary, my partner, advisor and supporter, for her love and selfless support and sacrifices during my entire career.

When I came to the US in 1970 I was debating whether to pursue a career in Mathematics, Physics or Engineering. The Harvard-MIT exceptional environment allowed me freedom of choice. Thanks to Roger Brockett I was convinced that systems and control, our field, would be the best choice as I could pursue all of the above. It has indeed proven to be a most exciting and satisfying choice. But there were important adjustments that I had to make and lessons I learned.

I did my PhD thesis work on infinite dimensional realization theory, and worked extensively with complex variable methods, Hardy function algebras, the famous Carleson corona theorem, and other rather esoteric mathematics. From my early work at the Naval Research Laboratory in Electronic Warfare (the “cross-eye” system) and in urban traffic control (adaptive control of queues) I learned, the hard way, the difficulty and critical importance of building appropriate models and turning initially amorphous problems into models amenable to systems and control thinking and methods. I learned the importance of judiciously blending data-based and model-based techniques.

In the seventies, I took a successful excursion into detection, estimation and filtering with quantum mechanical models, inspired by deep space laser communication problems, where my mathematical physics training at Harvard came in handy. I then worked on nonlinear filtering, trying to understand how physicists turned nonlinear inference problems into linear ones, and to investigate why we could not do the same for nonlinear filtering and partially observed stochastic control. This led me to unnormalized conditional densities, the Duncan-Mortensen-Zakai equation, and to information states. It then led me naturally to construct nonlinear observers as asymptotic limits of nonlinear filtering problems, and to the complete solution of the nonlinear robust output feedback control problem (the nonlinear H-infinity problem) via two coupled Hamilton-Jacobi-Bellman equations. We even investigated the development of special chips to implement real-time solutions, a topic we are revisiting currently.

With the development and progress of the ISR I worked on many problems including: speech and image compression breaking the Shannon separation of source and channel coding, manufacturing processes, network management, communication network protocols, smart materials (piezoelectric, shape memory alloys), mobile wireless network design, network security and trust, and more recently human-machine perception and cognition, networked control systems, networked cyber-physical systems, combining metric temporal logic and reachability analysis for safety, collaborative decision management in autonomous vehicles and teams of humans and robots, new analytics for learning and for the design of deep learning networks mapping abstractions of the brain cortex, quantum control and computing.

Why am I telling you about all these diverse topics? Not to attract your admiration, but because at the heart of all my work are fundamental principles and methods from systems and control, often appropriately extended and modified. Even in my highest-impact (economic and social) work, conceiving, demonstrating and commercializing Internet over satellite services (with billions of sales worldwide; remember me when you use the Internet in planes over oceans), we modified the flow control algorithm (TCP) and the physical path, to avoid having TCP interpret the satellite physical path delay as congestion. That is, we used systems and control principles.

Our science and engineering, systems and control, has unparalleled unifying power and efficiency; that is, if we are willing to build the new models required by new applications (especially models requiring a combination of multiple physics and cyber logic), and if we are willing to learn and apply the incredible new capabilities and technologies being developed in information technology and materials. As is apparent especially in this conference (ACC), and in the CDC, by any measure our field is exceptionally alive and well, and continues to surprise many other disciplines with its contributions and accomplishments, which now extend even into biology, medicine and healthcare. So for the many young people here: please continue the excitement, continue getting involved in challenging and high-impact problems, and continue the long tradition and record of accomplishments we have established for so many years. And most importantly, continue seeking the common ground and unification of our methods and models.

Let me close with what I consider some major challenges and promising broad areas for the next 10 years or so:

1) Considering networked control systems, we need to understand what we mean by a “network” and the various abstractions and system aspects involved. Clearly, more than one dynamic graph is involved. This calls for new foundations for control, communication, information, and computing.

2) Systems and control scientists and engineers are the best qualified to develop further the modern field of Model-Based Systems Engineering (MBSE): the design, manufacturing/implementation and operation of complex systems with heterogeneous physical, cyber components and even including humans.

3) The need for analog computing is back, for example in real-time and progressive learning and in CPS. Some of the very early successes of control were implemented in analog electromechanical systems due to the need for real-time behavior. Yet we do not have a synthesis theory and methodology for such systems, due to the heterogeneous physics that may be involved: nothing like what we have for VLSI.

Thank you all very much! This is indeed a very special day for me!

For innovative contributions to control theory, stochastic systems, and networks and academic leadership in systems and control

### 2016

### Jason L. Speyer

I am extremely grateful and humbled by being honored to receive the Richard E. Bellman Control Heritage Award for 2016. I thank those that recommended me and the awards committee for supporting that nomination. I also thank my colleagues, students, family and especially my wife for the support I have received over these many years.

For me this award occurs at an auspicious time and place. Boston is the place of my birth and my home. It was sixty years ago that I graduated from Malden High School and entered into a world I could never have anticipated; a world where I would be nurtured for the next twenty years by many people, some of whom have been recipients of this esteemed award.

I enrolled in the Department of Aeronautics at MIT, which after Sputnik became the Department of Aeronautics and Astronautics. In my junior year I entered the space age. More consequential for me was that the department head was Doc (Charles Stark) Draper[1], the second volume of whose three-volume series on Instrument Engineering (1952) was one of the first books on what we know as Classical Control, covering such topics as the Evans root locus, Bode plots, the Nyquist criterion, and Nichols charts. Doc Draper instituted an undergraduate course in classical control that I took my junior year. This inspired me to take a graduate course and write my undergraduate thesis in controls.

After graduation in 1960 I left Boston to work for Boeing in Seattle. There, I worked with my lead engineer Raymond Morth, who introduced me to the new world of control theory using state space that was just emerging in the early 1960’s. I learned of the dynamic programming of Richard Bellman for global sufficiency of an optimal trajectory, and of the Pontryagin Maximum Principle, inspired by the inability of dynamic programming to solve certain classes of optimization problems. The Bushaw problem of determining the minimum time to the origin of a double integrator was just such a problem, since the optimal return function in dynamic programming is not differentiable at the switching curve and the Bellman theory did not apply. Interestingly, for my bachelor’s thesis I applied the results of the Bushaw problem to the minimum time problem of bringing the yaw and yaw rate of an aircraft to the origin. However, at that time I had no idea about the ramifications of the Bushaw problem for optimization theory. I also learned of the work of Rudolf Kalman in estimation, the work of Arthur Bryson and Henry Kelley in the development of numerical methods for determining optimal constrained trajectories, and of J. Halcombe (Hal) Laning and Richard Battin on the determination of orbits for moon rendezvous.
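For readers unfamiliar with the Bushaw problem, a minimal statement (notation is mine, not from the speech): drive a double integrator to the origin in minimum time with bounded control. The optimal control is bang-bang, and the optimal return function has a kink precisely on the switching curve, which is where classical dynamic programming runs into trouble.

```latex
% Minimum-time double integrator:
\[
  \min\; T \quad \text{s.t.} \quad \dot{x}_1 = x_2,\;\; \dot{x}_2 = u,\;\;
  |u| \le 1,\;\; x(T) = 0.
\]
% The optimal bang-bang law is
\[
  u^*(x) \;=\; -\operatorname{sign}\!\Big( x_1 + \tfrac{1}{2}\, x_2\, |x_2| \Big),
\]
% with switching curve x_1 = -\tfrac{1}{2} x_2 |x_2|, along which the
% minimum-time function T^*(x) is continuous but not differentiable.
```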

After an incredible year at Boeing I returned to Boston to work at the Analytical Research Department at Raytheon, where Art Bryson was a consultant. There, I worked with a student of Bryson, Walter Denham. We were contracted by MIT’s Instrumentation Laboratory, monitored by Richard Battin, to enhance the Apollo autonomous navigation system over the trans-Lunar orbit. We developed a scheme for determining the optimal angle-measurement sequence between the best stars in a catalogue and near and far horizons of the Earth or the Moon using a sextant. This angle-measurement sequence minimized a linear function of the terminal value of the error covariance of position and velocity near the Earth or Moon. Our optimization scheme, which required a matrix dynamic constraint, seemed to be a first. This scheme, used in the Apollo autonomous navigation system, was tested on Apollo 8 and used on every mission thereafter. My next task at Raytheon was working on a neighboring optimal guidance scheme. This work was with Art Bryson and John Breakwell. I remember travelling to Lockheed’s Palo Alto Research Laboratory and meeting with John, the beginning of a long and delightful collegial relationship.

After my first two years at Raytheon I somehow convinced Art Bryson to take me on as a graduate student at Harvard, supported by the Raytheon Fellowship program. To understand the intellectual level I had to contend with: on my doctoral preliminary exam committee, three of the four examiners were recipients of the Richard E. Bellman Control Heritage Award: Art Bryson, Larry (Yu-Chi) Ho, and Bob (Kumpati) Narendra, all of whom have been my lifetime colleagues. I was also fortunate to take a course taught by Rudy Kalman. Surprisingly, he taught many of the controls areas he had pioneered, except filtering for Gauss-Markov systems (the Kalman filter): the Aizerman conjecture, the Popov criterion and Lyapunov functions, duality in linear systems, optimality for linear-quadratic systems, etc. After finishing my PhD thesis on optimal control problems with state variable inequality constraints, I returned to Raytheon. Fortunately, Art Bryson made me aware of some interest at Raytheon in using modern control theory for developing guidance laws for a new missile. At Raytheon’s Missile Division I worked with Bill O’Halloran on the homing missile guidance system, where Bill worked on the development of the Kalman filter and I worked on the development of the linear-quadratic closed-form guidance gains, which had to account for the nonminimum-phase autopilot. This homing missile, the Patriot missile system, appears to be the first fielded system using modern control theory.

I left Boston for New York to work at Analytical Mechanics Associates (AMA), in particular with Hank Kelley. Although I had a lasting friendship with Hank, I only lasted seven months in New York before returning to the AMA office in Cambridge. Unfortunately, the Cambridge NASA Center closed, and I took a position under Dick Battin at the Instrumentation (later the Charles Stark Draper) Laboratory at MIT. There, I worked on the necessary and sufficient conditions for optimality of singular control problems, the linear-exponential-Gaussian control problem, optimal control problems with state variable inequality constraints, optimal control problems with cost criteria and dynamic functions with kinks, and periodic optimal control problems. On many of these issues I collaborated with David Jacobson, whom I first met in the open forum of my PhD final exam. This remarkable collaboration culminated in our book on optimal control theory that appeared in 2010. Also, during my tenure at Draper, I took a post-doctoral year of leave at the Weizmann Institute in Israel. Here, I learned that I could work very happily by myself. A few years after returning to Draper, I started what is now a forty-year career in academia and I left Boston.

As I look back, I feel so fortunate that I had such great mentoring over my early years, and by so many who have won this award. My success over the last forty years has been due to my many students who have worked with me to mold numerous new ideas together. Today, I find the future as bright as at any time in my past. I have embarked on such new directions as estimation and control of linear systems with additive noises described by heavy-tailed Cauchy probability density functions, with my colleague Moshe Idan at the Technion, and deep space navigation using pulsars as beacons, with JPL.

To conclude, I am grateful to so many of my teachers, colleagues and students, who have nurtured, inspired, and educated me. Without them and my loving wife and family, I would not be here today. Thank you all.

[1] Boldface names are recipients of the Richard E. Bellman Control Heritage Award.

For pioneering contributions to deterministic and stochastic optimal control theory and their applications to aerospace engineering, including spacecraft, aircraft, and turbulent flows

### 2015

When I look back upon my career in the field of control, I think it may have started in 1957, when Sputnik was launched by the Russians. I was in the seventh grade at that time. The reaction of our local school board to losing the space race was to have a group of students take algebra one year earlier, in the eighth grade. During high school, I participated in my class science fairs and won at the state level. When I was a freshman at the University of Kansas in 1967, I was given the opportunity to do independent research in the area of nucleate boiling. I also was exposed to computer programming, which was a fairly new topic at that time in undergraduate engineering. I became interested in numerical analysis and selected Princeton University for doctoral study, because Professor Leon Lapidus was a leading authority on that topic.

I discovered his interest in numerical analysis was driven by solving control problems (specifically two point boundary value problems). The optimal control project I selected was on singular bang-bang and minimum time control. I used discrete dynamic programming with penalty functions (influenced by Bellman and Kalman) as a way to solve this particular class of control problems. In 1971 I accepted a faculty position at the University of Texas.

That era was the heyday of optimal control in the aerospace program. Many of us in chemical engineering wanted to apply these ideas to chemical plants; however, there were some obstacles. Economic justification was strictly required for any commercial application, versus government funding for space vehicles. In addition, proprietary considerations prevented technology transfer from one plant to another. It wasn't until the late 1970s, when Honeywell introduced the distributed digital control system, that computer process control really began to become more popular (and economic) in industry. In 1972, I purchased a Data General minicomputer to be used with a distillation column for process control. That computer was very antiquated by today’s standards; in fact, we had to use paper tape for inputting software instructions to the machine.

Given that there was a lack of industrial receptivity to advanced control research and NSF funding was very limited, I looked around for other types of problems where my skills might be valuable. In 1974 the energy crisis was rearing its head due to the Arab oil embargo. Funding agencies like NSF and the Office of Coal Research in the U.S. were quite interested in how we could use the large domestic resource of coal to meet the shortage of oil and gas. I came across some literature about a technology called underground coal gasification (UCG), where one would gasify the coal resource in situ as a way of avoiding the mining step. I recall reading it was a very promising technology but they didn't know how to control it. That sparked my interest as a possible topic where I could apply my skill set. But I first had to learn about the long history of coal gasification and coal utilization in general.

There were many issues that had to be addressed before developing control methodologies for UCG. There was a need to develop three-dimensional modeling tools that would predict the recovery of the coal as well as the composition of the gas produced (much as for a chemical reactor). Thus 80% of the research work was on modeling as opposed to control. It was also a highly multidisciplinary project involving rock mechanics and environmental considerations. I worked in this area for about 10 years. Later, in the mid-1980s, the U.S. no longer had an energy crisis, so I started looking at some other possible areas for application of modeling and control.

In 1984 a new senior faculty member joined my department from Texas Instruments. He was very familiar with semiconductor manufacturing and its lack of process control, and he was able to teach me a lot about that industry. Fortunately, I did not have to learn the new field on my own, since I was Department Chair with limited discretionary time. The same issues were present as for UCG: models were needed in order to develop control strategies. I have continued working in that area, with over 20 graduate students spread out over the past 25 years, and process control is now a mature technology in semiconductor manufacturing (see my plenary talk at this year’s ACC).

During the 1980s, I became interested in textbook writing and particularly the need to develop a new textbook in process control. I began collaborating with two colleagues at UC Santa Barbara (Dale Seborg and Duncan Mellichamp) and thought that UCSB would be a great place to spend some time in the summer writing and giving short courses on the topic. The course notes were eventually developed into a textbook eight years later. We now are working on the fourth edition of the book and it is the leading textbook for process control in the world. It has been a very rewarding endeavor to work with other educators, and I would recommend that anyone writing a textbook collaborate with other co-authors as a way of improving the product. In 2010, we added a fourth co-author (Frank Doyle) to cover biosystems control; in fact, he is receiving the practice award from AACC today.

In the early 1990s at UT Austin, Jim Rawlings and I concluded that we wanted to work on control problems that would impact industrial practice rather than just writing more technical papers that maybe only a few people would read. So we formed the Texas Modeling and Control Consortium (TMCC) which had 16 member companies. Over twenty plus years the consortium has morphed into one involving multiple universities investigating process control, monitoring, optimization, and modeling. When Jim left the University of Texas and went to Wisconsin, we decided to keep the consortium going, so it became TWMCC (Texas Wisconsin Modeling and Control Consortium). Joe Qin replaced Jim on the faculty at UT but then 10 years later he left for USC. So our consortium became TWCCC (Texas Wisconsin California Control Consortium). I have learned a lot from both Joe and Jim over the years and have been able to mentor them in their professional development as faculty members. I am now mentoring a new UT control researcher (Michael Baldea) as we continue to close the gap between theory and practice.

One other thing I should mention is my involvement with the American Control Conference. I first gave a paper in 1972 at what was known as the Joint Automatic Control Conference (JACC) and have been coming to this meeting ever since. In the 1970s each meeting was entirely run by a different society each year. To improve the business model and instill more interdisciplinarity among the five participating societies, in 1982 we started the American Control Conference, with leadership from Mike Rabins, John Zaborszky, and also Bill Levine, who is here today. I was Treasurer of the 1982 meeting, which was held in Arlington, VA. That began an extremely successful series of meetings that is one of the best conference values today. It is very beneficial to attend and see control research carried out in the other societies, not just your own.

During my 40+ year career, I have had a lot of help from colleagues in academia and industry and collaborated with over 100 bright graduate students. I also should thank my wife Donna, who has put up with me over these many years since we first started going to the computer center at the University of Kansas for dates 50 years ago.

My advice to younger researchers is to think 10 years out as to what the new areas might be and start learning about them. Fortunately, today’s control technology is more ubiquitous than ever and the future is bright, although the path forward may not be clear. I still remember a discussion I had with a fellow graduate student before leaving Princeton in 1971 as we embarked on academic careers. His view was that after all the great things achieved by luminaries like Pontryagin, Bellman, and Kalman, all that's really left are the crumbs… So I guess that means that I must have had a pretty crummy career.

For a career of outstanding educational and professional leadership in automatic control, mentoring a large number of practicing professionals, and research contributions in the process industries, especially semiconductor manufacturing

### 2014

I feel honored and grateful for this award. After having spent so much time on dynamic programming and written several books about its various facets, receiving an award named after Richard Bellman has a special meaning for me.

It is common in award acceptance speeches to thank one's institutions, mentors, and collaborators, and I have many to thank. I was fortunate to be surrounded by first class students and colleagues, at high quality institutions, which gave me space and freedom to work in any direction I wished to go. As Lucille Ball has told us, "Ability is of little account without opportunity."

Also common when receiving an award is to chart one's intellectual roots and journey, and I will not depart from this tradition. It is customary to advise scholarly Ph.D. students in our field to take the time to get a broad many-course education, with substantial mathematical content, and special depth in their research area. Then upon graduation, to use their Ph.D. research area as the basis and focus for further research, while gradually branching out into neighboring fields, and networking within the profession. This is good advice, which I often give, but this is not how it worked for me at all!

I came from Greece with an undergraduate degree in mechanical engineering, got my MS in control theory at George Washington University in three semesters while holding a full-time job in an unrelated field, and two years later finished my Ph.D. thesis on control under set membership uncertainty at MIT. I benefited from the stimulating intellectual atmosphere of the Electronic Systems Laboratory (later LIDS), nurtured by Mike Athans and Sanjoy Mitter, but because of my short stay there, I graduated with little knowledge beyond Kalman filtering and LQG control. Then I went to teach at Stanford in a department that combined mathematical engineering and operations research (in which my background was rather limited) with economics (in which I had no exposure at all). In my department there was little interest in control theory, and none at all in my thesis work. Although I had never completed a first course in analysis, my first assignment was to teach unsuspecting students optimization by functional analytic methods from David Luenberger's wonderful book. The optimism and energy of youth carried me through, and I found inspiration in what I saw as an exquisite connection between elegant mathematics and interesting practical problems. Studying David Luenberger's other works (including his Nonlinear Programming book) and working next door to him had a lasting effect on me. Two more formative experiences at Stanford were studying Terry Rockafellar's Convex Analysis book (and teaching a seminar course from it), and most importantly teaching a new course on dynamic programming, for which I studied Bellman's books in great detail. My department valued rigorous mathematical analysis that could be broadly applied, and provided a stimulating environment where both could thrive. Accordingly, my course aimed to combine Bellman's vision of wide practical applicability with the emerging mathematical theory of Markov Decision Processes.
The course was an encouraging success at Stanford, and set me on a good track. It has survived to the present day at MIT, enriched by subsequent developments in theoretical and approximation methodologies.

After three years at Stanford, I taught for five years in the quiet and scholarly environment of the University of Illinois. There I finally had a chance to consolidate my mathematics and optimization background, through research to a great extent. In particular, it helped a lot that with the spirit of youth, I took the plunge into the world of the measure-theoretic foundations of stochastic optimal control, aiming to expand the pioneering Borel space framework of David Blackwell, in the company of my then Ph.D. student Steven Shreve.

I again changed direction by moving back to MIT, to work in the then emerging field of data networks and the related field of distributed computation. There I had the good fortune to meet two colleagues with whom I interacted closely over many years: Bob Gallager, who coauthored with me a book on data networks in the mid-80s, and John Tsitsiklis, who worked with me first as a doctoral student and then as a colleague, and over time coauthored with me two research monographs, on distributed algorithms and neuro-dynamic programming, and a probability textbook. Working with Bob and John, and writing books with them, was exciting and rewarding, and made MIT a special place for me.

Nonetheless, at the same time I was getting distracted by many side activities, such as writing books on nonlinear programming and dynamic programming, getting involved in applications of queueing theory and power systems, and personally writing several network optimization codes. By that time, however, I had realized that simultaneous engagement in multiple, diverse, and frequently changing intellectual activities (while not broadly recommended) was a natural and exciting mode of operation that worked well for me, and also had some considerable benefits. It stimulated the cross-fertilization of ideas, and allowed the creation of more broadly integrated courses and books.

In retrospect I was very fortunate to get into methodologies that eventually prospered. Dynamic programming developed perhaps beyond Bellman's own expectation. He correctly emphasized the curse of dimensionality as a formidable impediment to its use, but probably could not have foreseen the transformational impact of the advances brought about by reinforcement learning, neuro-dynamic programming, and other approximation methodologies. When I got into convex analysis and optimization, it was an emerging theoretical subject, overshadowed by linear, nonlinear, and integer programming. Now, however, it has taken center stage thanks to the explosive growth of machine learning and large scale computation, and it has become the linchpin that holds together most of the popular optimization methodologies. Data networks and distributed computation were thought promising when I got involved, but it was hard to imagine the profound impact they have had on engineering, as well as on the world around us today. Even the set membership description of uncertainty, my Ph.D. thesis subject, which was totally overlooked for nearly fifteen years, eventually came into the mainstream, and has connected with the popular areas of robust optimization, robust control, and model predictive control. Was it good judgment or fortunate accident that steered me toward these fields? I honestly cannot say. Albert Einstein wisely told us that "Luck is when opportunity meets preparation." In my case, I also think it helped that I resisted overly lengthy distractions in practical directions that were too specialized, as well as in mathematical directions that had little visible connection to the practical world.

An academic journey must have companions to learn from and share with, and for me these were my students and collaborators. In fact it is hard to draw a distinction, because I always viewed my Ph.D. students as my collaborators. On more than one occasion, collaboration around a Ph.D. thesis evolved into a book, as in the cases of Angelia Nedic and Asuman Ozdaglar, or into a long multi-year series of research papers after graduation, as in the cases of Paul Tseng and Janey Yu. I am very thankful to my collaborators for our stimulating interactions, and for all that I learned from them. They are many and I cannot mention them all, but they were special to me and I was fortunate to have met them. I wish that I had met Richard Bellman; I only corresponded with him a couple of times (he was the editor of my first book on dynamic programming). I still keep several of his books close to me, including his scintillating and highly original book on matrix theory. I am also satisfied that I have paid part of my debt to him in a small way: I have systematically used, for the first time I think in a textbook in 1987, the name "Bellman equation" for the central fixed point equation of infinite horizon discrete-time dynamic programming. It is a name that is widely used now, and most deservedly so.
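For the record, the equation in question, for infinite-horizon discounted discrete-time dynamic programming, reads as follows (standard notation, not quoted from the speech):

```latex
\[
  J^*(x) \;=\; \min_{u \in U(x)} \Big[\, g(x,u) \;+\; \alpha\, J^*\big(f(x,u)\big) \,\Big],
\]
% where f is the system map, g the stage cost, and \alpha \in (0,1] the
% discount factor. The optimal cost function J^* is the fixed point of the
% minimization operator on the right-hand side, which is why the equation
% is central to the infinite horizon theory.
```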

For contributions to the foundations of deterministic and stochastic optimization-based methods in systems and control

### 2013

President Rhinehart, Lucy, Danny, fellow members of the greatest technological field in the world, I am, to say the least, absolutely thrilled and profoundly humbled to be this year’s recipient of the Richard E. Bellman Control Heritage Award. I am grateful to those who supported my nomination, as well as to the American Automatic Control Council for selecting me.

I am indebted to a great many people who have helped me throughout my career. Among these are my graduate students, post docs, and colleagues including, in recent years, John Baillieul, Roger Brockett, Bruce Francis, Art Krener, and Jan Willems. In addition, I've been fortunate enough to have had the opportunity to collaborate with some truly great people including Brian Anderson, Ali Bellabas, Chris Byrnes, Alberto Isidori, Petar Kokotovic, Eduardo Sontag and Murray Wonham. I've been lucky enough to have had a steady stream of research support from a combination of agencies including AFOSR, ARO and NSF.

I actually never met Richard Bellman, but I certainly was exposed to much of his work. While I was still a graduate student at Purdue, I learned all about Dynamic Programming, Bellman’s Equation, and that the Principle of Optimality meant “Don’t cry over spilled milk.” Then I found out about the Curse of Dimensionality. After finishing school, I discovered that there was life before dynamic programming, even in Bellman’s world. In particular I read Bellman’s 1953 monograph on the Stability Theory of Differential Equations. I was struck by this book’s clarity and ease of understanding, which of course are hallmarks of Richard Bellman’s writings. It was from this stability book that I first learned about what Bellman called his “fundamental lemma.” Bellman used this important lemma to study the stability of perturbed differential equations which are nominally stable. Bellman first derived the lemma in 1943, apparently without knowing that essentially the same result had been derived by Thomas Gronwall in 1919 for establishing the uniqueness of solutions to smooth differential equations. Not many years after learning about what is now known as the Bellman-Gronwall Lemma, I found myself faced with the problem of trying to prove that the continuous-time version of the Egardt-Goodwin-Ramadge-Caines discrete-time model reference adaptive control system was “stable.” As luck would have it, I had the Bellman-Gronwall Lemma in my hip pocket and was able to use it to easily settle the question. As Pasteur once said, “Luck favors the prepared mind.”
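In its simplest form (the statement below is the standard textbook version, with a constant bound chosen for simplicity), the lemma says that for continuous \(u \ge 0\), \(k \ge 0\), and a constant \(c \ge 0\):

```latex
u(t) \;\le\; c + \int_{t_0}^{t} k(s)\,u(s)\,ds \quad \text{for } t \ge t_0
\quad\Longrightarrow\quad
u(t) \;\le\; c\,\exp\!\left(\int_{t_0}^{t} k(s)\,ds\right).
```

Applied to a perturbed linear system, it bounds the growth of the perturbed solution by an exponential of the integrated perturbation, which is exactly the kind of estimate a stability proof of the sort described needs.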

After leaving school I joined the Office of Control Theory and Application at the now defunct NASA Electronics Research Center in Cambridge, Mass. OCTA had just been formed and was headed by Hugo Schuck. OCTA’s charter was to bridge the gap between theory and application. Yes, people agonized about the so-called theory-application gap way back then. One has to wonder if the agony was worth it. Somehow the gap, if it really exists, has not prevented the field from bringing to fruition a huge number of technological advances and achievements including landing on the moon, cruise control, minimally invasive robotic surgery, advanced agricultural equipment, anti-lock brakes, and a great deal more. What gap? The only gap I know about sells clothes.

In the late 1990s I found myself one day listening to lots of talks about UAVs at a contractor’s meeting at the Naval Postgraduate School in Monterey, California. I had a Saturday night layover and so I spent Saturday, by myself, going to the Monterey Bay Aquarium. I was totally awed by the massive fish tank display there, and in particular by how a school of sardines could so gracefully move through the tank, sometimes bifurcating and then merging to avoid larger fish. With UAVs in the back of my mind, I had an idea: Why not write a proposal on coordinated motion and cooperative control for the NSF’s new initiative on Knowledge and Distributed Intelligence? Acting on this, I was fortunate to be able to recruit a dream team: Roger Brockett, for his background in nonlinear systems; Naomi Leonard, for her knowledge of underwater gliders; Peter Belhumeur, for his expertise in computer vision; and biologists Danny Grunbaum and Julia Parish for their vast knowledge of fish schooling. We submitted a proposal aimed at trying to understand, on the one hand, the traffic rules which large animal aggregations such as fish schools and bird flocks use to coordinate their motions and, on the other, how one might use similar concepts to coordinate the motion of man-made groups. The proposal was funded and at the time the research began in 2000, the playing field was almost empty. The project produced several pieces of work about which I am especially proud. One made a connection between the problem of maintaining a robot formation and the classical idea of a rigid framework; an offshoot of this was the application of graph rigidity theory to the problem of localizing a large, distributed network of sensors.
Another thrust started when my physics-trained graduate student Jie Lin ran across a paper in Physical Review Letters by Tamás Vicsek and co-authors which provided experimental justification for why a group of self-driven particles might end up moving in the same direction as a result of local interactions. Jie Lin, my post doc Ali Jadbabaie, and I set out to explain the observed phenomenon, but were initially thwarted by what seemed to be an intractable convergence question for time-varying, discrete-time, linear systems. All attempts to address the problem using standard tools such as quadratic Lyapunov functions failed. Finally, Ali ran across a theorem by Jacob Wolfowitz, and with the help of Marc Artzrouni at the University of Pau in France, a convergence proof was obtained. We immediately wrote a paper and submitted it to a well-known physics journal, where it was promptly rejected because the reviewers did not like theorems and lemmas. We then submitted a full-length version of the work to the TAC, where it was eventually published as the paper “Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor Rules.”
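The linearized heading-update rule studied in that line of work can be sketched as a simple neighbor-averaging iteration. The toy below uses a fixed connected graph and made-up initial headings (the paper's setting allows time-varying neighbor graphs, which is what made the convergence question hard); it only illustrates the basic update, not the full result:

```python
import numpy as np

def consensus_step(theta, neighbors):
    """One nearest-neighbor update: each agent replaces its heading with
    the average of its own heading and those of its current neighbors."""
    new = np.empty_like(theta)
    for i, nbrs in enumerate(neighbors):
        new[i] = (theta[i] + sum(theta[j] for j in nbrs)) / (1 + len(nbrs))
    return new

# Hypothetical example: 4 agents on a fixed path graph 0-1-2-3.
neighbors = [[1], [0, 2], [1, 3], [2]]
theta = np.array([0.0, 0.5, 1.0, 1.5])  # initial headings (radians)

for _ in range(200):
    theta = consensus_step(theta, neighbors)

print(theta)  # all four headings are now (numerically) equal
```

Because the graph is connected and each agent weights itself, the iteration matrix is row-stochastic and primitive, so the headings contract to a common value.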

Over the years, many things have changed. The American Control Conference was once the Joint Automatic Control Conference and was held at universities. Today the ACC proceedings sit on a tiny flash drive about the size of two pieces of bubble gum, while a mere 15 years ago the proceedings consisted of 6 bound volumes weighing about 10 pounds and taking up approximately 1100 cubic inches of space on one’s bookshelf. And people carried those proceedings home on planes; of course there were no checked baggage fees back then.

The field of automatic control itself has undergone enormous and healthy changes. When I was a student, problem formulations typically began with “Consider the system described by the differential equation.” Today things are different and one of the most obvious changes is that problem formulations often include not only differential equations but also graphs and networks. The field has broadened its outlook considerably, as this American Control Conference clearly demonstrates.

And where might things be going in the future? Take a look at the “Impact of Control Technology” papers on the CSS website, including the nice article about cyber-physical systems by Kishan Baheti and Helen Gill. Or try to attend the workshop on “Future Directions in Control Theory” which Fariba Fahroo is organizing for AFOSR.

Automatic control is a really great field and I love it. However, it is also probably the most difficult field to explain to non-specialists. Paraphrasing Donald Knuth: “A [control] algorithm will have to be seen to be believed.”

I believe that most people do not understand what a control engineer does or what a control system is. This of course is not an unusual situation. But it is a problem. IBM, now largely a service company, faced a similar problem trying to explain itself after it stopped producing laptops. We of course are primarily a service field. Perhaps like IBM, we need to take some time to rethink how we should explain what we do?

Thank you very much for listening and enjoy the rest of the conference.

For fundamental contributions to linear systems theory, geometric control theory, logic-based and adaptive control, and distributed sensing and control

### 2012

It is an honor to receive the 2012 Richard E. Bellman Control Heritage Award. I am deeply humbled to join the very distinguished group of prior winners. At this conference there are so many people whose work I have admired for years. To be singled out among this group is a great honor.

I did not know Richard Bellman personally but we are all his intellectual descendants. Years ago, my first thesis problem came from Bellman and currently I am working on numerical solutions to Hamilton-Jacobi-Bellman partial differential equations.

I began graduate school in mathematics at Berkeley in 1964, the year of the Free Speech Movement. After passing my oral exams in 1966, I started my thesis work with R. Sherman Lehman, who had been a postdoc with Bellman at the Rand Corporation in the 1950s. Bellman and Lehman had worked on continuous linear programs, also called bottleneck problems in Bellman’s book on Dynamic Programming. These problems are dynamic versions of linear programs, with linear integral transformations replacing finite dimensional linear transformations. At each frozen time they reduce to a standard linear program. Bellman and Lehman had worked out several examples and found that often the optimal solution was basic, at each time an extreme point of the set of feasible solutions to the time frozen linear program. These extreme points moved with time and the optimal solution would stay on one moving extreme point for a while and then jump to another. It would jump from one bottleneck to another.
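In modern notation, a continuous linear program of the kind described has roughly the following shape (this is a generic statement of the problem class, not Bellman and Lehman's exact formulation):

```latex
\max_{x(\cdot)\,\ge\,0} \;\int_0^T c(t)^{\top} x(t)\,dt
\qquad \text{s.t.} \qquad
B(t)\,x(t) \;\le\; b(t) + \int_0^t K(t,s)\,x(s)\,ds, \quad t \in [0,T].
```

Freezing \(t\) turns the constraint into an ordinary linear inequality system, which is the "time frozen linear program" whose moving extreme points the text refers to.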

Lehman asked me to study this problem and find conditions for this to happen. We thought that it was a problem in functional analysis and so I started taking advanced courses in this area. Unfortunately, about a year later Lehman had a very serious auto accident and lost the ability to think mathematically for some time. I drifted, one of hundreds of graduate students in Mathematics at that time. Moreover, Berkeley in the late 1960s was full of distractions and I was distractable. After a year or so Lehman recovered and we started to meet regularly. But then he had a serious stroke, perhaps as a consequence of the accident, and I was on my own again.

I was starting to doubt that my thesis problem was rooted in functional analysis. Fortunately, I had taken a course in differential geometry from S. S. Chern, one of the pre-eminent geometers of his generation. Among other things, Chern had taught me about the Lie bracket. And one of my graduate student colleagues told me that I was trying to prove a bang-bang theorem in Control Theory, a field that I had never heard of before. I then realized that my problem was local in nature and intimately connected with flows of vector fields so the Lie bracket was an essential tool. I went to Chern and asked him some questions about the range of flows of multiple vector fields. He referred me to Bob Hermann who was visiting the Berkeley Physics Department at that time.

I went to see Hermann in his cigar smoke-filled office, accompanied by my faithful companion, a German Shepherd named Hogan. If this sounds strange, remember this was Berkeley in the 1960s. Bob was welcoming and gracious; he gave me galley proofs of his forthcoming book, which contained Chow’s theorem. It was almost the theorem that I had been groping for. Heartened by this encounter I continued to compute Lie brackets in the hope of proving a bang-bang theorem.

Time drifted by and I needed to get out of graduate school so I approached the only math faculty member who knew anything about control, Stephen Diliberto. He agreed to take me on as a thesis student. He said that we should meet for an hour each week and I should tell him what I had done. After a couple of months, I asked him what more I needed to do to get a PhD. His answer was “write it up”. My “proofs” fell apart several times trying to accomplish this. But finally, I came up with a lemma that might be called Chow’s theorem with drift that allowed me to finish my thesis.

I am deeply indebted to Diliberto for getting me out of graduate school. He also did another wonderful thing for me; he wrote over a hundred letters to help me find a job. The job market in 1971 was not as terrible as it is today but it was bad. One of these letters landed on the desk of a young full professor at Harvard, Roger Brockett. He had also realized that the Lie bracket had a lot to contribute to control. Over the ensuing years, Roger has been a great supporter of my work and I am deeply indebted to him.

Another Diliberto letter got me a position at Davis, where I prospered as an Assistant Professor. Tenure came easily as I had learned to do independent research in graduate school. I brought my dog, Hogan, to class every day; he worked the crowds of students and boosted my teaching evaluations by at least a point. After 35 wonderful years at Davis, I retired and joined the Naval Postgraduate School, where I continue to teach and do research. I am indebted to these institutions and also to the NSF and the AFOSR for supporting my career.

I feel very fortunate to have discovered control theory, both for the intellectual beauty of the subject and the numerous wonderful people that I have met in this field. I mentioned a few names; let me also acknowledge my intellectual debt to and friendship with Hector Sussmann, Petar Kokotovic, Alberto Isidori, Chris Byrnes, Steve Morse, Anders Lindquist, Wei Kang and numerous others.

In my old age I have come back to the legacy of Bellman. Two National Research Council Postdocs, Cesar Aguilar and Thomas Hunt, have been working with me on developing patchy methods for solving the Hamilton-Jacobi-Bellman equations of optimal control. We haven’t whipped the “curse of dimensionality” yet but we are making it nervous.
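The equation being solved patch by patch is the stationary Hamilton-Jacobi-Bellman equation of optimal control, which in one common form (the notation here is assumed, not taken from the work described) reads:

```latex
\min_{u}\,\Bigl[\, \ell(x,u) + \nabla V(x)^{\top} f(x,u) \,\Bigr] \;=\; 0,
```

where \(V\) is the optimal cost-to-go, \(\ell\) the running cost, and \(\dot x = f(x,u)\) the dynamics; the residual mentioned below is how far a candidate \(V\) is from satisfying this equation on a patch.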

The first figure shows the patchy solution of the HJB equation to invert a pendulum. There are about 1800 patches on 34 levels, and the calculation took about 13 seconds on a laptop. The algorithm is adaptive: it adds patches or rings of patches when the residual of the HJB equation is too large. The optimal cost is periodic in the angle. The second figure shows this. Notice that there is a negatively slanted line of focal points. At these points there is an optimal clockwise and an optimal counterclockwise torque. If the angular velocity is large enough then the optimal trajectory will pass through the up position several times before coming to rest there.

What are the secrets to success? Almost everybody at this conference has deep mathematical skills. In the parlance of the NBA playoffs which has just ended, what separates researchers is “shot selection” and “follow through”. Choosing the right problem at the right time and perseverance, nailing the problem, are needed along with good luck and, to paraphrase the Beatles, “a little help from your friends”.

For contributions to the control and estimation of nonlinear systems

### 2011

### Manfred Morari

Usually when you are nominated for an award you know about it, or at least you have a suspicion, for example when somebody asks you for your CV but you are sure that they are not interested in hiring you. This award came to me as a total surprise. Indeed, I had written a letter of support for another most worthy candidate. So, when I received Tamer Başar’s email I thought that it was to inform me that this colleague had won. Who was actually responsible for my nomination? Several of my former graduate students! So, not only were they responsible for doing the work that qualified me for the award, they were even responsible for my getting it!

Over the course of my career, I was fortunate to have worked with a fantastic group of people and I am very proud of them: 64 PhD students to date and about 25 postdocs. 27 of them hold professorships all over the world, from the Korea Advanced Institute of Science and Technology (KAIST) in the East to Berkeley and Santa Barbara in the West, and from the Norwegian Technical University and the University of Toronto in the North to the Technion in Israel and the Instituto Tecnologico de Buenos Aires in the South. Many others are now in industry, about 15 in finance, management consulting and law, holding positions of major responsibility. I regard this group of former co-workers as my most important legacy.

This award means a lot to me because of the awe-inspiring people who received it in the past. I remember Hendrik Bode receiving the inaugural award in 1979. I remember Rutherford Aris, one of my PhD advisors at the University of Minnesota receiving it in 1992. Aris had actually worked and published with Richard Bellman. I remember Harmon Ray receiving it in 2000, my colleague and mentor at the University of Wisconsin.

Receiving this award made me also reflect on what I felt our major contributions were in these 34 years since I started my career as an Asst. Prof at Wisconsin. In what way was our work important? I was reminded of a dinner conversation a few months back with a group of my former PhD students who had joined McKinsey after graduating from ETH. One of them told me that our group had supplied more young consultants to McKinsey Switzerland than any other institute of any university in Switzerland. He also talked informally about the results of a survey done internally on what may be the main traits characterizing a CEO. It is not charm. It is not tactfulness and sensitivity. It is not intelligence. The only common trait seems to be that in their past these CEOs headed a division that experienced unusual growth. For example, the CEO of a telecom company had headed the mobile phone division. All the CEOs seemed to have been at the right place at the right time.

Similar considerations may apply to doing research and to heading a research group. Richard Hamming, best known for the Hamming code and the Hamming window, wrote in a wonderful essay: “If you are to do important work then you must work on the right problem at the right time and in the right way. Without any one of the three, you may do good work but you will almost certainly miss real greatness….”

So, what are the right problems? Eric Sevareid, the famous CBS journalist once quipped: “The chief cause of problems is solutions.” We were never interested in working on problems solely for their mathematical beauty. We always wanted to solve real practical problems with potential impact. Several times we were lucky to be standing at a turning point, ready to embark on a new line of research before the community at large had recognized it. Let me share with you three examples.

Around 1975, when I started my PhD at the University of Minnesota, interest in process control was just about at an all-time low. In 1979 this conference, which was then called the Joint Automatic Control Conference, had barely 300 attendees. The benefits of optimal control and the state space approach had been hyped so much for more than a decade that disillusionment was unavoidable. Many people advised me not to commence a thesis in process control. But my advisor George Stephanopoulos convinced me that the reason for all the disappointment was that people had been working on the wrong problem. The problem was not how to design controllers for poorly designed systems but how to design systems such that they are easy to control. The work that was started at that time by us and several other groups provided valuable insights that are in common use today and set off a whole research movement with special sessions, special journal issues and even separate workshops and conferences.

The second example is our work on Internal Model Control (IMC) and Robust Control. In the early 1980s the term “robust control” did not exist or, at least, it was not widely used and accepted. From our application work and influenced by several senior members of our community we had become convinced that model uncertainty is a critical obstacle affecting controller design. We discovered singular values and the condition number as important indicators before we learned that these were established mathematical quantities with established names. In 1982 at a workshop in Interlaken I met John Doyle, Gunter Stein and essentially everybody else who started to push the robust control agenda. Indeed, it was there that Jürgen Ackermann made the researchers in the West aware of the results of Kharitonov. A year later I went to Caltech, John Doyle followed soon afterwards and an exciting research collaboration commenced that lasted for almost a decade. We also cofounded the Control and Dynamical Systems option/department at that time.

The third example is our more recent work on Model Predictive Control (MPC) and Hybrid Systems. When I returned to Switzerland 17 years ago, I moved from a chemical to an electrical engineering department. I was thrown into a new world of systems with time constants of micro- or even nanoseconds rather than the minutes or hours that I was used to. So, we set out to dispel the myth that MPC was only suited to slow process control problems and showed that it could even be applied to switched power electronics systems. Through this activity, in parallel with a couple of other groups in the world, among them the group of Graham Goodwin, we started this era of “fast MPC” and contributed to the spread of MPC to just about every control application area.

I would never claim that in the mentioned areas we made the most significant contributions, and some of the results may even seem trivial to you now, but we were there at the beginning. The Hungarian author Arthur Koestler remarked that “the more original a discovery, the more obvious it seems afterwards.”

Notwithstanding this over-the-hill award that I received today and the mandatory retirement age in Switzerland I fully intend to strive to match these contributions in the coming years – together with my students, of course.

I want to close my remarks quoting from an interview Woody Allen gave last year. When he was asked “How do you feel about the aging process?” he replied: “Well, I’m against it. I think it has nothing to recommend it.”

For pioneering contributions to the theory and application of robust process control, model predictive control, and hybrid systems control

### 2010

### Dragoslav D. Šiljak

I am exceedingly happy to receive the Richard Bellman Control Heritage Award. I am thankful to the American Automatic Control Council for recognizing my work as worthy of this award, and I am deeply humbled when I consider the previous recipients of the award.

My first thanks go to my dear wife Dragana who put up for a long time with a workaholic husband with an oversized ambition. I am grateful to Santa Clara University and, in particular, to the School of Engineering for providing institutional support to our research. I am exceedingly thankful to many people from all around the world who came to Santa Clara to work on our projects as fellow researchers on an exploratory journey; and what a journey it has been!

On this occasion, it gives me great pleasure to recall my visit to the University of Southern California and my brief encounter with Professor Bellman. After my talk, he invited me to his office, and among the myriad of his interests, he chose to talk with me about his recent work in pharmacokinetics. At that time, I was deeply into competitive equilibrium in economics, and we had a very stimulating discussion on the connection of the two fields via the Metzler matrix, which I have been using since then in a wide variety of models to this very day.

Looking at this award in a prudential light, my obtaining this award is as much a compliment to the Control Council as it is to me. My winning of this award at Santa Clara University, which is not a Research 1 university but prides itself on being an excellent teaching institution, proves that the system is open, and that any of you, wherever you are, can win this award solely by the merit of your research.

I recall when at eighteen I made the Yugoslav Olympic Water Polo Team for the 1952 Helsinki Olympic Games. We won all our games except the final one, which ended in a draw. At that time, there were no overtimes and penalty kicks; the winner was determined by the cumulative goal ratio. I continued playing water polo, but did not make the team for the 1956 Melbourne games; I broke my right hand and stayed home. I kept playing on and in 1960 made the team for the Rome Olympics. We did not win a medal in Rome, let alone the gold. At that point I was already a committed researcher in control systems.

I continued the research for many years and to borrow from a song by Neil Young:

"I kept searching for a heart of gold, and I was getting old ... "

Today I found a heart of gold. Thank you all very much for your attention, and God bless!

July 1, 2010. Baltimore, MD

For fundamental contributions to the theory of large-scale systems, decentralized control, and parametric approach to robust stability

### 2009

First of all, I wish to express my sincere thanks to the American Automatic Control Council for bestowing on me the Bellman Control Heritage Award. This great honor was completely unexpected, so that my gratitude is very deep indeed. I would like to use this rare opportunity to say a few words about a topic which has concerned me for some time, namely the question, "Who did what first?" In so doing, I shall relate two examples, of which the first is especially apropos since it involves the patron of the award, Richard Bellman, as well as Rufus Isaacs, both long-time friends of mine. When I attended the 1966 International Congress of Mathematicians in Moscow, where Dick was a plenary speaker and Rufus was to present a paper entitled Differential games and dynamic programming, and what the latter can learn from the former, the meeting was buzzing with excitement about an upcoming confrontation between two well-known American mathematicians. And indeed, when Rufus presented his paper it was his take on the discovery of the Principle of Optimality which, in his view, appeared after the in-house publication of three RAND reports on differential games, and which appeared to be just a one-player version of his Tenet of Transition. The result of this implied accusation of plagiarism had two unhappy consequences. I had lunch with Dick on that day. He was deeply hurt, so much so that he was near tears. Equally unfortunate was the effect on Rufus, who devoted much of his remaining time to trying to prove the priority of his discovery instead of continuing to produce new and important research of which his fertile mind was surely capable. The second example is a much happier one. In the mid-1960's I published a brief paper in which I proposed constructive sufficiency conditions for extremizing a class of integrals by solving an equivalent problem by inspection. It was not until 1999 that I returned to this subject at the urging of a Canadian colleague.
After revisiting the original 1967 paper, I published a generalization in JOTA in 2001. On presenting these results at my 75th birthday symposium in Sicily in 2001, Pierre Bernhard remarked that my approach seemed to be related to Carathéodory's in his 1935 text on the calculus of variations and partial differential equations, first translated into English in the mid-1960's and not known to me. And indeed, in 2002, Dean Carlson published in JOTA a paper in which he discussed a relation between the two approaches in that both are based on the equivalent problem methodology. Carathéodory obtained an equivalent problem by allowing for a different integrand, and I obtained an equivalent problem by the use of transformed variables. Dean then proposed a generalization by combining the two approaches. A happy consequence of this paper has been and continues to be a fruitful collaboration which has resulted in many extensions and applications, e.g., to classes of optimal control and differential game problems, to multiple integrals, and to economic problems, the most recent concerned with differential constraints (state equations) and presented just a couple of weeks ago at the 15th International Workshop on Dynamics and Control. A particularly interesting discussion and some generalizations by Florian Wagener may be found in the July 2009 issue of JOTA. Thus, Carathéodory received his well deserved citation and I learned a great deal, allowing me to make some small contributions to optimization theory.

June 11, 2009. St. Louis, MO

For pioneering contributions to geometric optimal control, quantitative and qualitative differential games, and stabilization and control of deterministic uncertain systems, and for exemplary service to the control field

### 2008

It is an honor to receive the Bellman Award. I am sure the Award Committee received many outstanding nominations, and I thank the Committee for selecting me. I was invited to make a few remarks, so long as I did not exceed five minutes. I will point out some landmarks along my intellectual journey. The young people among you may find it of some interest. I came to Berkeley as a graduate student in 1960. I owe a great deal to Professor Lotfi Zadeh who was my PhD adviser and who has been a mentor to me ever since. Much of my intellectual development came from interaction with visitors and students. Karl Astrom visited me in the early 1960s. His paper with Bohlin on system identification became for me a standard of research quality and research exposition. Another significant visitor was Bill Root. Bill showed me how to use mathematics in the analysis of communication systems, and he introduced me to information theory.

#### Stochastic Systems

There was a buzz at the time about white noise and martingales. Gene Wong was talking about it, as was Moshe Zakai. Tyrone Duncan was visiting. Ty demystified the buzz for me. He taught me how to think about stochastic systems. Thus began my lifelong attraction towards randomness. Sanjoy Mitter, whom I first met about that time, reinforced that attraction. Sanjoy became a lifelong friend, for which I am very grateful.

Mark Davis was the first in a sequence of brilliant PhD students in stochastic systems. Mark discovered the deep relation between martingales and optimum decisions. Rene Boel, Jan van Schuppen, and Gene Wong found that martingales were key to point processes as well as Itô processes. Jean Walrand grasped this insight and developed it into an outstanding thesis on queuing networks. Venkat Anantharam knew little or nothing about probability theory when he began his PhD. I still recall how much he impressed me with his spectacular work on multi-armed bandits. The third in this group was Vivek Borkar. Vivek was the most quiet, but equally stunning.

This was when P.R. Kumar visited Berkeley. He is the first of the next generation that I got to know as a friend. I have become a fan of his, along with so many others. Intellectual life moves in circles. Borkar and Kumar re-connected me with Karl Astrom, this time through his paper with Wittenmark.

#### Networking

Jean Walrand introduced me to computer communication networks. This has continued to be an area of research for the past twenty years. We've had outstanding students, who have gone on to brilliant careers. Sri Kumar, then at Northwestern, Jean Walrand, and I got to know each other through our shared interest in networking.

Power

I learned power engineering as an undergraduate. But then I lost contact with the field, until years later when Felix Wu joined our faculty. Eyad Abed, Fathi Salem, and Shankar Sastry wrote their dissertations on difficult questions in nonlinear systems, inspired by problems of power systems. I lost contact with the field once again, until deregulation became the rage in California. Once again Felix recruited me. Felix Wu, Shmuel Oren of IEOR, Pablo Spiller of the Business School, and I joined forces to save California from the clutches of the utilities. We developed a provably good deregulation strategy. The strategy was not adopted.

Wireless

Ahmad Bahai and Andrea Goldsmith sparked my interest in wireless communications. They have become stars. They inspired my very recent students, Mustafa Ergen and Sinem Coleri.

Hybrid

In the late sixties, Noam Chomsky came to Berkeley and gave a lecture on formal languages. Chomsky's talk opened up a whole world for me. I spent a lot of time learning recursive functions, Turing machines, and Gödel's theorems. Walt Burkhard wanted to do a thesis on space-time complexity of recursive functions, and he helped consolidate what I had learned. However, my involvement with that subject declined.

My interest was revived by the Wonham-Ramadge paper on discrete-event systems, while Joseph Sifakis, Tom Henzinger and others began the study of timed automata. These developments combined to create the area of Hybrid Systems. My students Anuj Puri and Alex Kurzhansky obtained some outstanding results in Hybrid Systems.

Transportation

My flirtation with transportation began 30 years ago when I taught urban economics. Mario Ripper was my first doctoral student in transportation planning. My interest then waned. In 1990, Steve Shladover helped spark a national, indeed worldwide, interest in automated highways. Berkeley became a leading research center in highway automation, culminating in a full demonstration in 1997 in San Diego. It was very exciting to work with an interdisciplinary group of experts to build something all the way from theory to demonstration.

Since I could not wait 25 years for automated highways to become practical, my attention shifted to today's highways. My student Karl Petty built the PeMS system, which is now world-renowned as a repository of highway data. Roberto Horowitz and I are now developing a control system for the management of highways. It might become an important follow-on to the PeMS system.

Let me conclude with a remark on Richard Bellman, whom I met in the late sixties. Bellman was a renowned mathematician with contributions in many, many areas. I learned two things from him. First, over the years I have continued to marvel at the significance of the optimality principle in the form of the verification theorem, which I have used in many contexts. Second, and more important, I learned that good theory is very practical.

Thank you very much for being such courteous listeners.

June 12, 2008. Seattle, WA

For pioneering contributions to stochastic control, hybrid systems and the unification of theories of control and computation

### 2007

It is a great honor for me to receive the Bellman Award—quite undeserved, I believe, but I decided not to emulate Gregory Perelman by refusing to accept the award. I might, however, follow in his footsteps (apparently he has stopped doing Mathematics) and concentrate only on the more conceptual and philosophical aspects of the broad field of Systems and Control.

On an occasion like this it is perhaps appropriate to say a few words about the seminal contributions of Richard Bellman. As we all know, he is the founder of the methodological framework of Dynamic Programming, probably the only general method of systematically and optimally dealing with uncertainty, when uncertainty has a probabilistic description and there is an underlying Markov structure in the description of the evolution of the system. It is often mentioned that the work of Bellman was not as original as would appear at first sight. There was, after all, Abraham Wald's seminal work on Optimal Sequential Decisions and the Carathéodory view of the Calculus of Variations, intimately related to Hamilton–Jacobi Theory. But the generality of these ideas, both for deterministic optimal control and for stochastic optimal control with full or partial observations, is undoubtedly due to Bellman. Bellman, I believe, was also the first to present a precise view of stochastic adaptive control using methods of dynamic programming.

Now, there are two essential steps in invoking Dynamic Programming: first, invariant embedding, whereby a fixed variational problem is embedded in a potentially infinite family of variational problems; and second, invoking the Principle of Optimality, which states that any sub-trajectory of an optimal trajectory is necessarily optimal, to characterize optimal trajectories. This is where the Markov structure of dynamic evolution comes into operation. It should be noted that there is wide flexibility in the invariant embedding procedure, and this needs to be exploited in a creative way. It is this embedding that permits obtaining the optimal control in feedback form (that is, a "control law" as opposed to open-loop control).
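The embedding-plus-optimality recipe described above can be sketched as backward induction on a toy finite-horizon problem. Everything in the sketch below (states, actions, horizon, dynamics, costs) is purely illustrative and not drawn from the speech:

```python
# Invariant embedding: the single problem "minimize total cost from the
# initial state at time 0" is embedded in the family of subproblems
# indexed by (t, x).  The principle of optimality then yields the
# backward recursion V_t(x) = min_u [ c(x,u) + V_{t+1}(f(x,u)) ],
# whose minimizer is the optimal control in feedback form.

STATES = [0, 1, 2]
ACTIONS = [-1, 0, 1]
T = 4  # horizon

def f(x, u):
    """Illustrative dynamics: move within {0, 1, 2}."""
    return min(max(x + u, 0), 2)

def c(x, u):
    """Illustrative stage cost: distance from state 1 plus a small effort term."""
    return (x - 1) ** 2 + 0.1 * abs(u)

V = {T: {x: 0.0 for x in STATES}}   # terminal cost is zero
policy = {}                          # policy[t][x] = optimal feedback law
for t in range(T - 1, -1, -1):
    V[t], policy[t] = {}, {}
    for x in STATES:
        best_u = min(ACTIONS, key=lambda u: c(x, u) + V[t + 1][f(x, u)])
        policy[t][x] = best_u
        V[t][x] = c(x, best_u) + V[t + 1][f(x, best_u)]
```

Note that the recursion produces a rule `u = policy[t][x]` for every state, not a single open-loop sequence: this is exactly the "control law" that the embedding buys.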

The solution of Partially-Observed Stochastic Control in continuous time, leading to the characterization of the optimal control as a function of the unnormalized conditional density of the state given the observations via the solution of an infinite-dimensional Bellman–Hamilton–Jacobi equation, is one of the crowning achievements of the Bellman view of stochastic control. It is worth mentioning that Stochastic Finance Theory would not exist but for this development. There are still open mathematical questions here that deserve further work. Indeed, the average cost problem for partially-observed finite-state Markov chains is still open---a natural necessary and sufficient condition for the existence of a bounded solution to the dynamic programming equation is still not available.

Much of my recent work has been concerned with the unification of theories of Communication and Control. More precisely, how does one bring to bear Information Theory to gain understanding of Stochastic Control, and how does one bring to bear the theory of Partially-Observed Stochastic Control to gain qualitative understanding of reliable communication? There does not exist a straightforward answer to this question, since the Noisy Channel Coding Theorem, which characterizes the optimal rate of transmission for reliable communication, requires infinite delay. The encoder in digital communication can legitimately be thought of as a controller and the decoder an estimator, but they interact in complicated ways. It is only in the limit of infinite delay that the problem simplifies and a theorem like the Noisy Channel Coding Theorem can be proved. This procedure is exactly analogous to passing to the thermodynamic limit in Statistical Mechanics.

In the doctoral dissertation of Sekhar Tatikonda, and in subsequent work, the Shannon Capacity of a Markov Channel with Feedback under certain information structure hypotheses can be characterized as the value function of a partially-observed stochastic control problem. This work in many ways exhibits the power of the dynamic programming style of thinking. I believe that this style of thinking, in the guise of a backward induction procedure, will be helpful in understanding the transmission capabilities of wireless networks. More generally, dynamic programming, when time is replaced by a partially ordered set, is a fruitful area of research.

Can one give an "information flow" view of path estimation of a diffusion process given noisy observations? An estimator, abstractly, can be thought of as a map from the space of observations to a conditional distribution of the estimand given the observations. What is the nature of the flow of information from the observations to the estimator? Is it conservative or dissipative? In joint work with Nigel Newton, I have given a quite complete view of this subject. It turns out that the path estimator can be constructed from a backward likelihood filter, which estimates the initial state, combined with a fully observed stochastic controller moving in forward time starting at this estimated state; together they solve the problem in the sense that the resulting path-space measure is the requisite conditional distribution. The backward filter dissipates historical information at an optimal rate, namely that information which is not required to estimate the initial state, and the forward control problem fully recovers this information. The optimal path estimator is conservative. This result establishes the relation between stochastic control and optimal filtering. Somewhat surprisingly, the optimal filter in a stationary situation satisfies a second law of thermodynamics.

What of the future? Undoubtedly we have to understand control under uncertainty in a distributed environment. Understanding the interaction between communication and control in a fundamental way will be the key to developing any such theory. I believe that an interconnection view where sensors, actuators, controllers, encoders, channels and decoders, each viewed abstractly as stochastic kernels, are interconnected to realize desirable joint distributions, will be the “correct” abstract view for a theory of distributed control. Except in the field of distributed algorithms, not much fundamental seems to be known here.

It is customary to end acceptance discourses on an autobiographical note and I will not depart from this tradition. Firstly, my early education at Presidency College, Calcutta, where I had the privilege of interacting with some of the most brilliant fellow students, decisively formed my intellectual make-up. Whatever culture I acquired, I acquired it at that time. At Imperial College, while I was doing my doctoral work, I was greatly influenced by John Florentin (a pioneer in Stochastic Control), Martin Clark and several other fellow students. I have also been fortunate in my association with two great institutions—MIT and the Scuola Normale, Pisa. I cannot overstate everything that I have learnt from my doctoral students, too many to mention by name—Allen gewidmet von denen ich lernte [Dedicated to all from whom I have learnt (taken from the dedication of Günter Grass in "Beim Häuten der Zwiebel" ("Peeling the Onion"))]. I find that they have extraordinary courage in shaping some half-baked idea into a worthwhile contribution. In recent years, my collaborative work with Vivek Borkar and Nigel Newton has been very important for me. I have great intellectual affinity with members of Club 34, the most exclusive club of its kind and I thank the members of this club for their friendship. There are many others whose intellectual views I share, but at the cost of exclusion let me single out Jan Willems and Pravin Varaiya. I admire their passion for intellectual discourse. Last, but not least, I thank my wife, Adriana, for her love and support. I am sorry she could not be here today. My acceptance speech is dedicated to her.

July 12, 2007. New York, NY

For contributions to the unification of communication and control, nonlinear filtering and its relationship to stochastic control, optimization, optimal control, and infinite-dimensional systems theory

### 2006

I am honored to receive this most prestigious award and recognition by the American Automatic Control Council, named after Richard Ernest Bellman (the creator of "dynamic programming")---who has shaped our field and influenced through his creative ideas and voluminous multifaceted work the research of tens of thousands, not only in control, but also in several other fields and disciplines. In my own research, which has encompassed control, games, and decisions, I have naturally also been influenced by the work of Bellman (on dynamic programming), as well as of Rufus Isaacs (the creator of differential games) whose tenure at RAND Corporation (Santa Monica, California) partially overlapped with that of Bellman in the 1950s. I want to use the few minutes I have here to say a few words on those early days of control and game theory research (just a brief historical perspective), and Bellman's role in that development.

In a Bode Lecture I delivered (at the IEEE Conference on Decision and Control in the Bahamas) in December 2004, I had described how modern control theory was influenced by the research conducted and initiatives taken at the RAND Corporation in the early 1950s. RAND had attracted and housed some of the great minds of the time, among whom was also Richard Bellman, in addition to names like Leonard D. Berkovitz, David Blackwell, George Dantzig, Wendell Fleming, M.R. Hestenes, Rufus Isaacs, Samuel Karlin, John Nash, J.P. LaSalle, and Lloyd Shapley (to list just a few). These individuals, and several others, laid the foundations of decision and game theory, which subsequently fueled the drive for control research. In this unique and highly conducive environment, Bellman started working on multi-stage decision processes, as early as 1949, but more fully after 1952---and it is perhaps a lesser known historical fact that one of the earlier topics Bellman worked on at RAND was game theory (both zero- and nonzero-sum games), on which he co-authored research reports with Blackwell and LaSalle. In an informative and entertaining autobiography he wrote 32 years later ("Eye of the Hurricane", World Scientific, Singapore), completed in 1984 shortly before his untimely death on March 19 of that year, Bellman describes eloquently the research environment at RAND and the reason for coining the term "dynamic programming".

At the time, the funding for RAND came primarily from the Air Force, and hence it was indirectly under the Secretary of Defense, who was in the early 1950s someone by the name of Wilson. According to Bellman, "Wilson had a pathological fear and hatred of the word 'research' and also of anything 'mathematical' ". Hence, it was quite a challenge for Bellman to explain what he was doing, and what he was interested in doing in the future (which was research on multi-stage decision processes), in terms which would not offend the sponsor. "Programming" was an OK word; after all, Linear Programming had passed the test. He wanted "to get across the idea that what he was doing was dynamic, multi-stage, and time-varying", and therefore picked the term "Dynamic Programming". He thought that "it was a term not even a Congressman could object to". This being the official reason given for his pick of the term, some say (Harold Kushner--recipient of this award two years ago--being one of them, based on a personal conversation with Bellman) that he wanted to upstage Dantzig's Linear Programming by substituting "dynamic" for "linear". Whatever the reasons were, the terminology (and of course also the concept and the technique) was something to stay with us for the next fifty plus years, and undoubtedly for many more decades into the future, as also evidenced by the number of papers at this conference using the conceptual framework of dynamic programming.

Applying dynamic programming to different classes of problems, and arriving at "functional equations of dynamic programming", subsequently led Bellman, as a unifying principle, to the "Principle of Optimality", which Isaacs, also at RAND, and at about the same time, had called "tenet of transition" in the broader context of differential games, capturing strategic dynamic decision making in adversarial environments.

Bellman also recognized early on that a solution to a multi-stage decision problem is not merely a set of functions of time or a set of numbers, but a rule telling the decision maker what to do, that is, a "policy". This led in his thinking, when he started looking into control problems, to the concept of "feedback control", and along with it to the notions of sensitivity and robustness. These developments, along with the more refined notions of information structures (who knows what and when), have been key ingredients in my research for the past thirty plus years.

It is interesting that at RAND at the time (that is in the 1950s), in spite of the anti-research and anti-mathematical attitude that existed in the higher echelons of the government, and the Department of Defense in particular, fundamental research did prosper, perhaps somewhat camouflaged initially, which in turn drove the creation of modern control theory, fueled also by the post-Sputnik anxiety. There is perhaps a message that should be taken from that: "Don't give up doing what you think and believe is right and important, but also be flexible and accommodating in how you promote it".

Before closing, I want to thank all who have been involved in the nomination process and the selection process of the Bellman Control Heritage Award this year. I want to use this occasion also to acknowledge several educational and research institutions which have impacted my life and career.

First, I want to acknowledge the contributions of the educational institutions in my native country, Turkey, in the early years of my upbringing, and the comfortable research environment provided by the Marmara Research Institute I was affiliated with in the mid to late 1970s. Second, I want to acknowledge the love for research and the drive for pushing the frontiers of knowledge I was infected with during my years at Yale and Harvard in the early 1970s. And last, but foremost, I want to acknowledge the perfect academic environment I found and have still been enjoying at the University of Illinois at Urbana-Champaign---wonderful colleagues, stimulating teaching environment at the Department of Electrical and Computer Engineering, and exemplary conducive research environment at the Coordinated Science Laboratory with its top quality graduate students. I also want to recognize all students, post-docs, and colleagues I have had the privilege of having research interactions and collaborations with over the years. I thank them all for the memorable journeys in exploring the frontiers in control science and technology.

Thank you very much.

June 15, 2006. Minneapolis, MN

For fundamental developments in and applications of dynamic games, multiple-person decision making, large scale systems analysis, and robust control

### 2005

### Gene F. Franklin

'Grow old along with me

The best is yet to be'

Browning

I don't feel particularly old but to be in the midst of friends and colleagues with this recognition is as good as it gets.

I'd like to use these few minutes to comment on several of the times when I've come to a fork in the road, as an illustration of how difficult it is to predict how a given path will turn out. There may be people who plan their lives carefully and take each step based on the best prediction of a good outcome; I'm not one of them. Too many turns in my life were the result of random events to pretend that they followed from any good planning of mine.

My first decision was a good one: I selected outstanding parents. My father was a math teacher, my mother an RN and they gave me a love of books and learning that have served me well for over 7 decades. They did, however, make one mistake: they gave me a defective gene that prevents me from seeing colors the way most others see them. If you see me going Ooh and Ah over a rainbow, don't believe it; I'm faking it.

The next decision I wish to mention was in 1945 when I became eligible for the military draft. The good news was that I was admitted to the Navy Radio Technician program but the bad news was that I had to sign up for four years to accept the offer. The evidence was that the war would last several more years so I signed up. That decision did not look so good a few weeks later when President Truman approved use of atomic bombs to reduce Hiroshima to rubble and Nagasaki to ruin in a matter of seconds. The war ended soon after but I was still stuck with four years obligation to the Navy. When I got to Chicago for my final physical, one of the doctors asked me to identify the numbers in a set of circles filled with colored dots. I'm sure that I gave him some values never before found! My performance was such that he marked me as partially disabled, put me on medical special assignment, and sent me off to the electronics school.

I finished the school in the summer of 1946 and was selected to be an instructor at a new campus being set up at the Great Lakes Naval Training Center north of Chicago. I taught electronic amplifiers there using the book Radio Engineering by F E Terman. One of my fellow students there later became well known in the control field (and a Vice President of IBM): Jack Bertram had also signed up for the Navy electronics program. In the early summer of 1947 my defective gene came to my rescue. The Navy announced that any sailor on medical special assignment was eligible for discharge! My response: That's ME.

Out of the Navy I went and set about looking for a school that would accept me at that late date. I was turned down by several fine schools but Georgia Tech told me to come on down so off I went to Atlanta where I got my EE degree in 1950. The months I'd served in the Navy made me eligible for enough GI Bill support to pay the tuition and expenses which I could never have afforded otherwise. This time the bad news was that in the spring of 1950 the Bureau of Labor Statistics reported that the country was to graduate twice as many engineers as the economy could absorb. My only choice was to accept a fellowship to MIT and continue my education using the last of my GI Bill of Rights tuition support. As an aside, while there I took a graduate course on pulse and timing circuits that contained little new from what we had learned in the Navy program as high school graduates! I also had a great time learning how to play rugby from a group of graduate students from South Africa. A most memorable part of this experience was when we were one of the teams selected to play in a tournament as the entertainment for spring break in Bermuda.

After finishing my MS in 1952 I had married the love of my life and needed to get a job. A fellow student introduced me to Professor Jack Millman who was visiting MIT looking for possible appointments to Columbia University. I interviewed with him and was offered a position as Instructor which involved teaching responsibilities but allowed me to study for the doctorate at the same time. I had no idea that I was stepping into a fantastic center of control research assembled by John Ragazzini. With his colleague Lotfi Zadeh he had attracted great students including Eli Jury, Art Bergen, Jack Bertram, Rudy Kalman, Bernie Friedland, George Kranc, and Phil Sarachik. Sampled Data control was never the same again. The first treatment of 'pulsed circuits' was chapter 5 by Hurewicz in the Rad Lab Vol. 25 on The Theory of Servomechanisms, edited by James, Nichols, and Phillips. Hurewicz selected z as the variable of the discrete transform, representing a prediction of one period, and we kept the same convention. At about the same time as Ragazzini's group was starting our study, some at MIT selected z to be a delay operator. In the end, z as predictor prevailed, but to this day MATLAB treats discrete transforms differently in the Signal Processing toolbox than it does in the Control toolbox. You can look it up.
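The distinction between the two conventions can be made concrete with a small sketch (illustrative code, not from the speech): if z is a one-period predictor, then a pure one-sample delay has transfer function 1/z, and in the time domain applying it is just a right shift.

```python
# Illustrative sketch: with z as a one-step predictor, a unit delay is
# H(z) = 1/z.  In the time domain this is the difference equation
# y[k] = u[k-1], i.e. a right shift of the input by one sample.

def unit_delay(u):
    """y[k] = u[k-1], with y[0] = 0: the system H(z) = 1/z."""
    return [0] + list(u[:-1])

u = [1, 2, 3, 4]
y = unit_delay(u)   # [0, 1, 2, 3]
```

In the opposite convention, where z itself denotes the delay operator, the same system would be written H(z) = z; the physics is identical, only the bookkeeping differs.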

After I got my degree in 1955 I was promoted to Assistant Professor. I loved Columbia and was pleased to be selected by Professor Ragazzini to join him as co-author of a book on sampled data but New York City left a lot to be desired as a place to raise the two children who had joined my family by this time and soon another fork in the road appeared. It was presented in the person of Professor John Linvill whose class I had taken at MIT and who had moved from MIT to Stanford by way of Bell Labs. John knew Lotfi Zadeh and at his invitation came to Columbia looking for possible new appointments to Stanford's faculty. Again I interviewed and was offered a position on the Stanford Faculty. Thus it was that in late May of 1957 we loaded up the (non air-conditioned) Ford and headed west. I'll never forget the hot day in June when we stopped for gas in Sacramento where the temperature was well over 100 degrees. The pavement was so soft my shoes sank into the asphalt. Then later that day we crossed the mountains into the Bay Area and the temperature dropped about 1 degree per mile for the last 30 miles. We've been in love with the San Francisco Bay area ever since.

As an aside comment on control at the time: in the paper on the history of the Society by Danny Abramovitch and myself, George Axelby is quoted as saying that papers presented at the 1959 conference on control by Kalman and Bertram using state notation were 'quite a mystery to most attendees.' I'd say that the idea of state was not long a mystery to those who had worked with analog computers. On those machines, the only dynamic elements are integrators, whose outputs comprise the state quite naturally. In my opinion, every control engineer should be required to program an analog computer, where one also quickly learns the value of amplitude and time scaling.
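The observation that integrator outputs naturally form the state can be sketched numerically. The harmonic-oscillator dynamics and the simple Euler step below are illustrative assumptions, not part of the speech:

```python
# Two simulated "integrators" realizing x1' = x2, x2' = -x1 (a harmonic
# oscillator).  The integrator outputs x1 and x2 are exactly the state
# variables; the wiring between the integrators encodes the dynamics.

def simulate(x1, x2, dt=0.001, steps=1000):
    for _ in range(steps):
        dx1, dx2 = x2, -x1   # inputs fed to the two integrators
        x1 += dt * dx1       # each integrator accumulates its input;
        x2 += dt * dx2       # its output is a state variable
    return x1, x2

# Starting at (1, 0) and running for one time unit, the state comes out
# near (cos 1, -sin 1), as the exact solution predicts.
x1, x2 = simulate(1.0, 0.0)
```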

In any case, such was the random walk through time and space that has taken me from the mountains of North Carolina to the coast of California. My tenure at Stanford has been marked by many things but first and foremost in my affection has been the steady stream of excellent students with whom I have been privileged to work. Without a doubt they have made major contributions to control and to them is owed much of the credit for which this award is made. So let me close with the moral of my story, aimed mainly at those in academia:

You can never be too careful when selecting your students.

The corollary to this is applicable to everyone: It's hard to soar like an Eagle if you fly with a bunch of turkeys.

Thank you very much.

June 9, 2005. Portland, OR

For fundamental contributions to the theory and practice of digital, modern, adaptive, and multivariable control and for being a mentor, inspiration and friend to five decades of graduate students

### 2004

### Harold J. Kushner

It is a great honor to receive this award. It is a particular honor that it is in memory of Richard Bellman. I doubt that there are many here who knew Bellman, so I would like to make some comments concerning his role in the field.

Bellman left RAND after the summer of 1965 for the position of Professor of Electrical Engineering, Mathematics, and Medicine at the University of Southern California. This triple title gives you some inkling of how he was viewed at the time. I spent that summer at RAND. My office was right next to Bellman's and we had lots of opportunity to talk.

Bellman was always very supportive of my work. He encouraged me to write my first book, Stochastic Stability and Control, in 1967 for his Academic Press Series. Although naive by modern standards, the book seemed to have a significant impact on subsequent development in that it made many mathematicians realize that there was serious probability to be done in stochastic control, and established the foundations of stochastic stability theory. Numerical methods were among his strong interests. He was well acquainted with my work on numerical methods for continuous time stochastic systems and encouraged me to write my first book on the subject, later updated in two books with Paul Dupuis, and still the methods of choice. Despite his enormous output of published papers, something like 900, he was a strong believer in books since they allowed one to develop a subject with considerable freedom.

There are other connections, albeit indirect, between us. He was a New Yorker, and did his early undergraduate work at CCNY. During those years and, indeed, until the late '50s, CCNY was one of the most intellectual institutions of higher learning in the US. During that time, before the middle class migration out of the city, and the simultaneous opening of opportunities in the elite institutions for the "typical New Yorker," CCNY had the choice of the best of New Yorkers with a serious intellectual bent. Later, he switched to Brooklyn College, which was much closer to his home.

He intended to be a pure mathematician: his primary interest was analytic number theory. When did he become interested in applications? He graduated college at the start of WW2, and the demands of the war exposed him to a great variety of problems. He taught electronics at Princeton and then worked at a sonar lab in San Diego (which kept him out of the Army for a while). He spent the last two years of the war in the Army, but was assigned to the Manhattan Project at Los Alamos. He was a social creature and it was easy for him to meet many of the talented people working on the project. Typically, the physicists considered a mathematician as simply a human calculator, ideally constructed to do numerical computations but not much more. Bellman was asked to numerically solve some PDEs. His mathematical pride refused. To the great surprise of the physicists, he actually managed to integrate some of the equations, obtaining closed form solutions. Holding true to tradition, they checked his solutions, not by verifying the derivation, but by trying some very special cases. Thus his reputation there as a very bright young mathematician was established. This jealously guarded independence and self confidence (and lack of modesty) continued to serve him well. During these years, he absorbed a great variety of scientific experiences. So much was being done due to the needs of the war.

There is one more indirect connection between us. Bellman was a student of Solomon Lefschetz at Princeton, head of the Math Department at the time, a very tough minded mathematician and one of the powerhouses of American mathematics, who was impressed with Bellman's ability. While at Los Alamos during WW2, Bellman worked out various results on stability of ODEs. Although he initially intended to do a thesis with someone else on a number theoretic problem, Lefschetz convinced him that those stability results were the quickest way to a thesis, which was in fact true. It took only several months and was the basis of his book on stability of ODEs. I was the director of the Lefschetz Center for Dynamical Systems at Brown University for many years, with Lefschetz our patron saint. Some of you might recall the book (not the movie) "A Beautiful Mind" about John Nash, a Nobel Laureate in Game Theory, which describes Lefschetz's key role in mathematics during Nash's time at Princeton.

Bellman spent the summer of 1948 at RAND, where an amazing array of talent was gathered, including David Blackwell, George Dantzig, Ted Harris, Sam Karlin, Lloyd Shapley, and many others, who provided the foundations of much of decision and game theory. The original intention was to do mathematics with some of the RAND talent on problems of prior interest. But Bellman turned out to be fascinated and partially seduced by the excitement in OR, and the developing role of mathematics in the social and biological sciences. His mathematical abilities were widely recognized. He was a tenured Associate Professor at Stanford at 28, after being an Associate Professor at Princeton, where all indications were that he would have had an assured future had he remained there. He began to have doubts about the payoff for himself in number theory and returned to the atmosphere at RAND often, where he eventually settled and became fully involved in multistage decision processes, having been completely seduced, and much to our great benefit.

Here is a non-mathematical item that should be of interest. To work at RAND one needed a security clearance, even though much of the work did not involve "security." Due to an anonymous tip, Bellman lost his clearance for a while: his brother-in-law, whom Bellman had not seen since he (the brother-in-law) was about 13, was rumored to be a communist. This was an example of a serious national problem that was fed, exploited, and made into a national paranoia by unscrupulous politicians.

Bellman was a remarkable person, thoroughly a man of his time, Renaissance in his interests, with a fantastic memory. Some epochs are represented by individuals who tower because of their powerful personalities and abilities, people who could not be ignored. Bellman was one of those. He was one of the driving forces behind the great intellectual excitement of the times.

The word "programming" was used by the military to mean scheduling; Dantzig's "linear programming" was an abbreviation of "programming with linear models." Bellman described the origin of the name "dynamic programming" as follows. An Assistant Secretary of the Air Force, who was believed to be strongly anti-mathematics, was to visit RAND, so Bellman was concerned that his work on the mathematics of multi-stage decision processes would be unappreciated. But "programming" was still acceptable, and the Air Force was concerned with rescheduling continuously due to uncertainties. Thus "dynamic programming" was chosen as a politically wise descriptor. On the other hand, when I asked him the same question, he replied that he was trying to upstage Dantzig's linear programming by adding "dynamic." Perhaps both motivations were true.

If one looks closely at scientific discoveries, ancient seeds often appear. Bellman did not quite invent dynamic programming, and many others contributed to its early development. It had been used earlier in inventory control. Peter Dorato once showed me a (somewhat obscure) economics paper from the late thirties where something close to the principle of optimality was used. The calculus of variations had related ideas (e.g., the work of Caratheodory, the Hamilton-Jacobi equation), which led to conflicts with the calculus of variations community. But no one grasped its essence, isolated its essential features, and showed and promoted its full potential in control and operations research, as well as in applications to the biological and social sciences, as Bellman did.

Bellman published many seminal works. It is sometimes claimed that many of his vast number of papers are repetitive and did not develop the ideas as far as they could have been. Despite this criticism, his works were pored over word for word, with every comment and detail mined for ideas, techniques, and openings into new areas. His work was a mother lode. It was clearly the work of someone with a superb background in analysis, as well as a facile mind and a sharp eye for applications. There are many examples, with broad coverage, accessibility, and usually simple assumptions. His writing is articulate; it flows smoothly through the problem formulation and mathematical analysis, and he is in full command of it.

We still owe a great debt to him.

July 1, 2004. Boston, MA

For fundamental contributions to Stochastic Systems Theory and Engineering Applications, and for inspiring generations of researchers in the field

### 2003

### 2002

### Petar V. Kokotović

For pioneering contribution to control theory and engineering, and for inspirational leadership as mentor, advisor, and lecturer over a period spanning four decades

### 2001

### A.V. Balakrishnan

For pioneering contributions to stochastic and distributed systems theory, optimization, control, and aerospace flight systems research

### 2000

### W. Harmon Ray

### 1999

### Yu-Chi Ho

For sustained and significant contributions to research and education in optimization and control of dynamic systems, and his establishment of a new branch of these fields, Discrete Event Dynamic Systems

### 1998

### Lotfi A. Zadeh

For fundamental contributions to systems theory and pioneering works on fuzzy sets and systems leading to a global trend on machine intelligence quotient systems

### 1997

### Rudolf E. Kalman

For fundamental contributions to control and system theory

### 1996

### Elmer G. Gilbert

I am immensely pleased by the Award! It is indeed a special honor, coming from the American Automatic Control Council, which has done so much to advance and to unify the field of control. I recall with delight the long sequence of Joint Automatic Control Conferences and the subsequent American Control Conferences. The Council's many current activities, including its participation in this 13th IFAC World Congress, continue its invaluable service to the control community.

In receiving the award I wish to recognize the support of friends, colleagues and former students. They have played a vital role in my work. I must also acknowledge the special influence of others I have known mostly or entirely through their publications. It is no surprise that Richard Bellman was one of them. Let me make a few remarks about his legacy and how it affects us today.

In examining his writings I am struck by his genuine interest in applications and obvious desire to make his findings useful to a wide audience. In this, I believe, there are lessons to be learned. I'll note four.

1. Fundamental ideas have greater power when they are elegantly expressed. There is no better example than Bellman's formulation of dynamic programming. Its wonderfully stated ideas permeate and illuminate much of what we do, ranging from deep theoretical results in optimal control to practical, on-line implementation of controllers.

2. Propagation of knowledge is enhanced by the establishment of connections across fields and disciplines. Bellman's 1960 book, "Introduction to Matrix Analysis," illustrates this point beautifully. The discussions and bibliographies at the end of each chapter are marvelous sources of insight and diversity.

3. In mathematical exposition, clarity and accessibility are precious attributes. Bellman had a special talent for keeping mathematical developments closely connected to first principles and organizing them in simple, easy-to-understand parcels. He had the courage to compromise generality for clarity and, on occasion, rigor for insight.

4. Numerical issues are crucial to control applications. Bellman realized this early, four decades ago, when he addressed controller implementation, algorithm design, error analysis, and computational complexity.

Over the years the field of control has become mature, complex and diverse. We now need, as Richard Bellman did so well, to give greater attention to the means by which we encourage its progress and impact on society. On that point I will end.

Thank you.

July 4, 1996

In recognition of a distinguished career in automatic control, with pioneering research contributions to a broad range of subjects including linear multivariable systems theory, computation of optimal controls, nonlinear systems theory, and motion planning in the presence of obstacles

### 1995

### Michael Athans

### 1994

### Jose B. Cruz Jr.

### 1993

### Eliahu I. Jury

### 1992

### Rutherford Aris

### 1991

### John G. Truxal

In recognition of life-long contributions to the field of automatic control as an author, teacher, and academic administrator, and for his continuing efforts to foster understanding of the role of technology in the conduct of human affairs.

### 1990

### Arthur E. Bryson Jr.

In recognition of his inspiration and guidance to a generation of researchers, his innovations in optimal control and estimation theory, and his seminal contributions to the field of automatic control.

### 1989

In recognition of his leading role in the development of stability theory, linear systems theory, nonlinear control theory, and robotics.

### 1988

### Walter R. Evans

For his very significant contribution to the field of automatic control systems analysis and synthesis by inventing the root-locus technique.

### 1987

### John "Jack" Lozier

### 1986

### John Zaborszky

For distinguished career contributions to the theory or application of automatic control.