The Bellman Award is given for distinguished career contributions to the theory or application of automatic control. It is the highest recognition of professional achievement for US control systems engineers and scientists. The recipient must have spent a significant part of his/her career in the USA. The awardee is expected to make a short acceptance speech at the AACC Awards Ceremonies during the ACC.
A. Stephen Morse was born in Mt. Vernon, New York. He received a BSEE degree from Cornell University, an MS degree from the University of Arizona, and a Ph.D. degree from Purdue University. From 1967 to 1970 he was associated with the Office of Control Theory and Application (OCTA) at the NASA Electronics Research Center in Cambridge, Mass. Since 1970 he has been with Yale University, where he is presently the Dudley Professor of Engineering. His main interest is in system theory, and he has done research in network synthesis, optimal control, multivariable control, adaptive control, urban transportation, vision-based control, hybrid and nonlinear systems, sensor networks, and the coordination and control of large groups of mobile autonomous agents. He is a Fellow of the IEEE, a past Distinguished Lecturer of the IEEE Control Systems Society, and a co-recipient of the Society's 1993 and 2005 George S. Axelby Outstanding Paper Awards. He has twice received the American Automatic Control Council's Best Paper Award and is a co-recipient of the Automatica Theory/Methodology Prize. He is the 1999 recipient of the IEEE Technical Field Award for Control Systems. He is a member of the National Academy of Engineering and the Connecticut Academy of Science and Engineering.
Arthur J. Krener received the PhD in Mathematics from the University of California, Berkeley in 1971. From 1971 to 2006 he was at the University of California, Davis, retiring in 2006 as a Distinguished Professor of Mathematics. Currently he is a Distinguished Visiting Professor in the Department of Applied Mathematics at the Naval Postgraduate School.
His research interests are in developing methods for the control and estimation of nonlinear dynamical systems and stochastic processes.
Professor Krener is a Life Fellow of the IEEE, a Fellow of IFAC and of SIAM. His 1981 IEEE Transactions on Automatic Control paper with Isidori, Gori-Giorgi and Monaco won a Best Paper Award. The IEEE Control Systems Society chose his 1977 IEEE Transactions on Automatic Control paper with Hermann as one of 25 Seminal Papers in Control in the last century. He was a Fellow of the John Simon Guggenheim Foundation for 2001-2. In 2004 he received the W. T. and Idalia Reid Prize from SIAM for his contributions to control and system theory. He was the Bode Prize Lecturer at 2006 IEEE CDC and in 2010 he received a Certificate of Excellent Achievements from IFAC. His research has been continuously funded since 1975 by NSF, NASA, AFOSR and ONR.
In 1988 he founded the SIAM Activity Group on Control and Systems Theory and was its first Chair. He was again Chair of the SIAG CST in 2005-07. He chaired the first SIAM Conference on Control and its Applications in 1989 and the same conference in 2007 both in San Francisco. He also co-chaired the IFAC Nonlinear Control Design Symposium held at Lake Tahoe in 1996. He has served as an Associate Editor for the SIAM Journal on Control and Optimization and for the SIAM book series on Advances in Design and Control.
It is an honor to receive the 2012 Richard E. Bellman Control Heritage Award. I am deeply humbled to join the very distinguished group of prior winners. At this conference there are so many people whose work I have admired for years. To be singled out among this group is a great honor.
I did not know Richard Bellman personally but we are all his intellectual descendants. Years ago my first thesis problem came from Bellman and currently I am working on numerical solutions to Hamilton-Jacobi-Bellman partial differential equations.
I began graduate school in mathematics at Berkeley in 1964, the year of the Free Speech Movement. After passing my oral exams in 1966, I started my thesis work with R. Sherman Lehman, who had been a postdoc with Bellman at the Rand Corporation in the 1950s. Bellman and Lehman had worked on continuous linear programs, also called bottleneck problems in Bellman's book on Dynamic Programming. These problems are dynamic versions of linear programs, with linear integral transformations replacing finite dimensional linear transformations. At each frozen time they reduce to a standard linear program. Bellman and Lehman had worked out several examples and found that often the optimal solution was basic: at each time, an extreme point of the set of feasible solutions to the time-frozen linear program. These extreme points moved with time, and the optimal solution would stay on one moving extreme point for a while and then jump to another. It would jump from one bottleneck to another.
Lehman asked me to study this problem and find conditions for this to happen. We thought that it was a problem in functional analysis, and so I started taking advanced courses in this area. Unfortunately, about a year later Lehman had a very serious auto accident and lost the ability to think mathematically for some time. I drifted, one of hundreds of graduate students in Mathematics at that time. Moreover, Berkeley in the late 1960s was full of distractions, and I was distractible. After a year or so Lehman recovered and we started to meet regularly. But then he had a serious stroke, perhaps as a consequence of the accident, and I was on my own again.
I was starting to doubt that my thesis problem was rooted in functional analysis. Fortunately I had taken a course in differential geometry from S. S. Chern, one of the pre-eminent geometers of his generation. Among other things, Chern had taught me about the Lie bracket. And one of my graduate student colleagues told me that I was trying to prove a bang-bang theorem in Control Theory, a field that I had never heard of before. I then realized that my problem was local in nature and intimately connected with flows of vector fields so the Lie bracket was an essential tool. I went to Chern and asked him some questions about the range of flows of multiple vector fields. He referred me to Bob Hermann who was visiting the Berkeley Physics Department at that time.
I went to see Hermann in his cigar smoke-filled office, accompanied by my faithful companion, a German Shepherd named Hogan. If this sounds strange, remember this was Berkeley in the 1960s. Bob was welcoming and gracious; he gave me galley proofs of his forthcoming book, which contained Chow’s theorem. It was almost the theorem that I had been groping for. Heartened by this encounter, I continued to compute Lie brackets in the hope of proving a bang-bang theorem.
Time drifted by and I needed to get out of graduate school, so I approached the only math faculty member who knew anything about control, Stephen Diliberto. He agreed to take me on as a thesis student. He said that we should meet for an hour each week and I should tell him what I had done. After a couple of months, I asked him what more I needed to do to get a PhD. His answer was “write it up”. My “proofs” fell apart several times trying to accomplish this. But finally I came up with a lemma that might be called Chow’s theorem with drift that allowed me to finish my thesis.
I am deeply indebted to Diliberto for getting me out of graduate school. He also did another wonderful thing for me, he wrote over a hundred letters to help me find a job. The job market in 1971 was not as terrible as it is today but it was bad. One of these letters landed on the desk of a young full professor at Harvard, Roger Brockett. He had also realized that the Lie bracket had a lot to contribute to control. Over the ensuing years, Roger has been a great supporter of my work and I am deeply indebted to him.
Another Diliberto letter got me a position at Davis where I prospered as an Assistant Professor. Tenure came easily as I had learned to do independent research in graduate school. I brought my dog, Hogan, to class every day, he worked the crowds of students and boosted my teaching evaluations by at least a point. After 35 wonderful years at Davis, I retired and joined the Naval Postgraduate School where I continue to teach and do research. I am indebted to these institutions and also to the NSF and the AFOSR for supporting my career.
I feel very fortunate to have discovered control theory, both for the intellectual beauty of the subject and the numerous wonderful people I have met in this field. I mentioned a few names; let me also acknowledge my intellectual debt to and friendship with Hector Sussmann, Petar Kokotovic, Alberto Isidori, Chris Byrnes, Steve Morse, Anders Lindquist, Wei Kang, and numerous others.
In my old age I have come back to the legacy of Bellman. Two National Research Council Postdocs, Cesar Aguilar and Thomas Hunt, have been working with me on developing patchy methods for solving the Hamilton-Jacobi-Bellman equations of optimal control. We haven’t whipped the “curse of dimensionality” yet but we are making it nervous.
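For readers who have not seen it, the equation in question can be written, in its standard stationary infinite-horizon form (a generic textbook statement, not necessarily the exact formulation used in the patchy method):

```latex
% Generic stationary Hamilton-Jacobi-Bellman (HJB) equation for an
% infinite-horizon optimal control problem with dynamics
% \dot{x} = f(x,u) and running cost l(x,u); V(x) is the optimal
% cost-to-go.
0 = \min_{u}\left[\, l(x,u) + \frac{\partial V}{\partial x}(x)\, f(x,u) \,\right]
```

The “curse of dimensionality” arises because V must be approximated over a state space whose discretization grows exponentially with the state dimension; a patchy approach instead covers the relevant region with local approximations, as the pendulum example below illustrates.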
The first figure shows the patchy solution of the HJB equation to invert a pendulum. There are about 1800 patches on 34 levels and calculation took about 13 seconds on a laptop. The algorithm is adaptive, it adds patches or rings of patches when the residual of the HJB equation is too large. The optimal cost is periodic in the angle. The second figure shows this. Notice that there is a negatively slanted line of focal points. At these points there is an optimal clockwise and an optimal counterclockwise torque. If the angular velocity is large enough then the optimal trajectory will pass through the up position several times before coming to rest there.
What are the secrets to success? Almost everybody at this conference has deep mathematical skills. In the parlance of the NBA playoffs, which have just ended, what separates researchers is “shot selection” and “follow through”. Choosing the right problem at the right time and perseverance, nailing the problem, are needed, along with good luck and, to paraphrase the Beatles, “a little help from your friends”.
Manfred Morari was appointed head of the Department of Information Technology and Electrical Engineering at ETH Zurich in 2009. He was head of the Automatic Control Laboratory from 1994 to 2008. Before that he was the McCollum-Corcoran Professor of Chemical Engineering and Executive Officer for Control and Dynamical Systems at the California Institute of Technology. He obtained the diploma from ETH Zurich and the Ph.D. from the University of Minnesota, both in chemical engineering. His interests are in hybrid systems and the control of biomedical systems. In recognition of his research contributions he has received numerous awards, among them the Donald P. Eckman Award and the John R. Ragazzini Award of the American Automatic Control Council, the Allan P. Colburn Award and the Professional Progress Award of the AIChE, the Curtis W. McGraw Research Award of the ASEE, a Doctor Honoris Causa from Babes-Bolyai University, and the IEEE Control Systems Field Award; he is a Fellow of the IEEE and IFAC and was elected to the National Academy of Engineering (U.S.). Manfred Morari has held appointments with Exxon and ICI plc and serves on the technical advisory boards of several major corporations.
Usually when you are nominated for an award you know about it or – at least – you have a suspicion – for example, when somebody asks you for your CV, but you are sure that they are not interested in hiring you. This award came to me as a total surprise. Indeed I had written a letter of support for another most worthy candidate. So, when I received Tamer Başar’s email I thought that it was to inform me that this colleague had won. Who was actually responsible for my nomination? Several of my former graduate students! So, not only were they responsible for doing the work that qualified me for the award, they were even responsible for my getting it!
Over the course of my career I was fortunate to have worked with a fantastic group of people, and I am very proud of them: 64 PhD students to date and about 25 postdocs. Twenty-seven of them hold professorships all over the world: from the Korea Advanced Institute of Science and Technology (KAIST) in the East to Berkeley and Santa Barbara in the West, and from the Norwegian Technical University and the University of Toronto in the North to the Technion in Israel and the Instituto Tecnologico de Buenos Aires in the South. Many others are now in industry, about 15 of them in finance, management consulting, and law, holding positions of major responsibility. I regard this group of former co-workers as my most important legacy.
This award means a lot to me because of the awe-inspiring people who received it in the past. I remember Hendrik Bode receiving the inaugural award in 1979. I remember Rutherford Aris, one of my PhD advisors at the University of Minnesota receiving it in 1992. Aris had actually worked and published with Richard Bellman. I remember Harmon Ray receiving it in 2000, my colleague and mentor at the University of Wisconsin.
Receiving this award also made me reflect on what I felt our major contributions were in the 34 years since I started my career as an Assistant Professor at Wisconsin. In what way was our work important? I was reminded of a dinner conversation a few months back with a group of my former PhD students who had joined McKinsey after graduating from ETH. One of them told me that our group had supplied more young consultants to McKinsey Switzerland than any other institute of any university in Switzerland. He also talked informally about the results of an internal survey on the main traits characterizing a CEO. It is not charm. It is not tactfulness and sensitivity. It is not intelligence. The only common trait seems to be that in their past these CEOs headed a division that experienced unusual growth. For example, the CEO of a telecom company had headed the mobile phone division. All the CEOs seemed to have been at the right place at the right time.
Similar considerations may apply to doing research and to heading a research group. Richard Hamming, best known for the Hamming code and the Hamming window, wrote in a wonderful essay: “If you are to do important work then you must work on the right problem at the right time and in the right way. Without any one of the three, you may do good work but you will almost certainly miss real greatness….”
So, what are the right problems? Eric Sevareid, the famous CBS journalist once quipped: “The chief cause of problems is solutions.” We were never interested in working on problems solely for their mathematical beauty. We always wanted to solve real practical problems with potential impact. Several times we were lucky to be standing at a turning point, ready to embark on a new line of research before the community at large had recognized it. Let me share with you three examples.
Around 1975, when I started my PhD at the University of Minnesota, interest in process control was just about at an all-time low. In 1979 this conference, which was then called the Joint Automatic Control Conference, had barely 300 attendees. The benefits of optimal control and the state space approach had been hyped so much for more than a decade that disillusionment was unavoidable. Many people advised me not to commence a thesis in process control. But my advisor George Stephanopoulos convinced me that the reason for all the disappointment was that people had been working on the wrong problem. The problem was not how to design controllers for poorly designed systems but how to design systems so that they are easy to control. The work started at that time by us and several other groups provided valuable insights that are in common use today and set off a whole research movement, with special sessions, special journal issues, and even separate workshops and conferences.
The second example is our work on Internal Model Control (IMC) and Robust Control. In the early 1980s the term “robust control” did not exist or, at least, was not widely used and accepted. From our application work, and influenced by several senior members of our community, we had become convinced that model uncertainty is a critical obstacle affecting controller design. We discovered singular values and the condition number as important indicators before we learned that these were established mathematical quantities with established names. In 1982, at a workshop in Interlaken, I met John Doyle, Gunter Stein, and essentially everybody else who started to push the robust control agenda. Indeed, it was there that Jürgen Ackermann made researchers in the West aware of the results of Kharitonov. A year later I went to Caltech, John Doyle followed soon afterwards, and an exciting research collaboration commenced that lasted for almost a decade. We also cofounded the Control and Dynamical Systems option/department at that time.
The third example is our more recent work on Model Predictive Control (MPC) and Hybrid Systems. When I returned to Switzerland 17 years ago, I moved from a chemical to an electrical engineering department. I was thrown into a new world of systems with time constants of micro- or even nanoseconds rather than the minutes or hours that I was used to. So we set out to dispel the myth that MPC was suited only to slow process control problems and showed that it could even be applied to switched power electronics systems. Through this activity, in parallel with a couple of other groups in the world, among them the group of Graham Goodwin, we started the era of “fast MPC” and contributed to the spread of MPC to just about every control application area.
I would never claim that in the areas mentioned we made the most significant contributions, and some of the results may even seem trivial to you now, but we were there at the beginning. The Hungarian author Arthur Koestler remarked that “the more original a discovery, the more obvious it seems afterwards.”
Notwithstanding this over-the-hill award that I received today and the mandatory retirement age in Switzerland, I fully intend to strive to match these contributions in the coming years – together with my students, of course.
I want to close my remarks quoting from an interview Woody Allen gave last year. When he was asked “How do you feel about the aging process?” he replied: “Well, I’m against it. I think it has nothing to recommend it.”
Dragoslav D. Šiljak received his doctorate degree (D.Sc.) in Electrical Engineering from the University of Belgrade, Serbia, in 1963. He joined the Department of Electrical Engineering at Santa Clara University in 1964, where he is presently the Benjamin and Mae Swig University Professor.
Upon his arrival at Santa Clara, Dr. Šiljak continued his research on parameter space methods for robust control design, which he presented in a monograph, Nonlinear Systems: The Parameter Analysis and Design (Wiley, 1969). He established collaborations with control groups at NASA Ames Research Center and Marshall Space Flight Center, which focused on parameter methods for control design of space vehicles.
In the early 1970s, Dr. Šiljak introduced the concept of connective stability of large-scale dynamic systems within the framework of the comparison principle and vector Lyapunov functions. He applied the concept to a wide variety of models, in areas as diverse as population biology, arms races, large space structures, competitive equilibrium in mathematical economics, and electric power systems. Within the same framework, he introduced robust decentralized feedback and developed graph-theoretic methods for decentralization and stabilization of uncertain large-scale systems. These concepts and results appeared in a monograph entitled Large-scale Dynamic Systems: Stability and Structure (North Holland, 1978) which, after almost thirty years, was reprinted as a paperback by Dover Publications (2007).
In the 1980s, Dr. Šiljak and his collaborators developed a large number of new and highly original concepts and methods for the decentralized control of uncertain large-scale interconnected systems. Structurally fixed modes, multiple controllers for reliable stabilization, decentralized optimization, and hierarchical, epsilon, and overlapping decompositions laid the foundation for a powerful and efficient approach to a broad set of problems in control design of large complex systems. This development was reported in a comprehensive monograph Decentralized Control of Complex Systems (Academic Press, 1991).
Over the past two decades, Dr. Šiljak and his collaborators have raised the research on complex systems to a higher level. Decomposition schemes involving inputs and outputs have been developed for, and applied to, complex systems of unprecedented dimensions. Dynamic graphs have been defined in a linear space as one-parameter groups of transformations of the graph space into itself. This new mathematical entity opened the possibility of including continuous Boolean networks in a theoretical study of gene regulation and the modeling of large-scale organic structures. These new and exciting developments have been published in a recent monograph, Control of Complex Systems: Structural Constraints and Uncertainty (Springer, 2010, coauthor A. I. Zečević).
Dr. Šiljak’s research on large complex systems has involved a large number of collaborators, including researchers and students who came from all over the world to Santa Clara to study complex dynamic systems. Over the years, this research has been generously supported by NASA, NSF, DOE, and DARPA.
In 1981, Šiljak served as a Distinguished Scholar of the Japan Society for the Promotion of Science, lecturing on large-scale systems at major universities and companies in Japan. He was selected as a Distinguished Professor of the Fulbright Foundation in 1984, and in 1985 became an Honorary Member of the Serbian Academy of Arts and Sciences. In 1986, he served as Director of the NSF Workshop “Challenges to Control: A Collective View,” organizing a forum of top control scientists at Santa Clara University for the purpose of assessing the state of the art of the field and outlining directions of research. In 1991, he gave a week-long seminar on decentralized control at Seoul National University as a Hoam Distinguished Foreign Scholar. In 2001, he became a Life Fellow of the IEEE. He has presented many plenary talks at conferences and served on the editorial boards of a variety of journals in the fields of applied mathematics and engineering.
July 1, 2010. Baltimore, MD
I am exceedingly happy to receive the Richard Bellman Control Heritage Award. I am thankful to the American Automatic Control Council for recognizing my work as worthy of this award, and I am deeply humbled when I consider the previous recipients of the award.
My first thanks go to my dear wife Dragana who put up for a long time with a workaholic husband with an oversized ambition. I am grateful to Santa Clara University and, in particular, to the School of Engineering for providing institutional support to our research. I am exceedingly thankful to many people from all around the world who came to Santa Clara to work on our projects as fellow researchers on an exploratory journey; and what a journey it has been!
On this occasion, it gives me great pleasure to recall my visit to the University of Southern California and my brief encounter with Professor Bellman. After my talk, he invited me to his office and, among the myriad of his interests, he chose to talk with me about his recent work in pharmacokinetics. At that time, I was deeply into competitive equilibrium in economics, and we had a very stimulating discussion on the connection between the two fields via the Metzler matrix, which I have been using in a wide variety of models to this very day.
Looking at this award in a prudential light, my obtaining it is as much a compliment to the Control Council as it is to me. My winning this award at Santa Clara University, which is not a Research 1 university but prides itself on being an excellent teaching institution, proves that the system is open, and that any of you, wherever you are, can win this award solely on the merit of your research.
I recall when at eighteen I made the Yugoslav Olympic Water Polo Team for the 1952 Helsinki Olympic Games. We won all our games except the final one, which ended in a draw. At that time, there were no overtimes and penalty kicks; the winner was determined by the cumulative goal ratio. I continued playing water polo, but did not make the team for the 1956 Melbourne games; I broke my right hand and stayed home. I kept playing on and in 1960 made the team for the Rome Olympics. We did not win a medal in Rome, let alone the gold. At that point I was already a committed researcher in control systems. I continued the research for many years and to borrow from a song by Neil Young:
"I kept searching for a heart of gold, and I was getting old ... "
Today I found a heart of gold. Thank you all very much for your attention, and God bless!
George Leitmann is a Professor Emeritus of engineering science and Associate Dean for International Relations at the University of California, Berkeley. His 50+ year Berkeley career has included everything from research and teaching to serving as the first ombudsman in the UC system. During seven years at the US Naval Ordnance Test Station, China Lake, he worked mostly on rocket trajectory optimization and testing. He joined the Berkeley faculty in 1957 and began to extend his work in variational calculus and optimal control theory, both in theory and applications, some of which is contained in an introductory text (1967) and two edited volumes (1965 and 1969), later expanded to a basic text (1981). This work was awarded the Goddard aerospace and flight mechanics awards of the American Institute of Aeronautics and Astronautics. It in turn led to research in dynamical game theory and its applications, which can be found in three books (1966, 1967, and 1974) and numerous edited volumes. From the early 1970s into the 1990s, this led to research on robust control with applications to uncertain systems in engineering, science, economics, and management, for which he was awarded the Levy Medal of the Franklin Institute and, more recently, the first Isaacs Award of the International Society of Dynamic Games. He is a member of the National Academy of Engineering as well as of six foreign academies of science and engineering, and he holds three honorary doctorates. Since becoming emeritus in 1991, he has returned to earlier work in the calculus of variations, especially numerous extensions of a 1967 paper based on the methodology of equivalent problem solutions and regularizing transformations, which simplify the classical approach of Caratheodory. Lately, he has also turned to topics of more recent interest, such as an analysis of the dynamics of terrorism.
Professionally, he edited or co-edited over a dozen journals, including the largest and arguably the most prestigious journal of mathematical analysis and applications, founded by Richard Bellman, serving as its editor for sixteen years. Since so many of Professor Leitmann's doctoral students and post-doctoral fellows were international and his interests always transcended the US border, he became deeply involved in international collaborations; he was awarded an Alexander von Humboldt Prize in 1980 and subsequently the A. von Humboldt Medal and the Werner Heisenberg Medal of the A. von Humboldt Foundation.
June 11, 2009. St. Louis, MO
First of all, I wish to express my sincere thanks to the American Automatic Control Council for bestowing on me the “Bellman Control Heritage Award”. This great honor was completely unexpected, so my gratitude is very deep indeed. I would like to use this rare opportunity to say a few words about a topic which has concerned me for some time, namely, the question “Who did what first?”. In so doing, I shall relate two examples, of which the first is especially apropos since it involves the patron of the award, Richard Bellman, as well as Rufus Isaacs, both long-time friends of mine.

When I attended the 1966 International Congress of Mathematicians in Moscow, where Dick was a plenary speaker and Rufus was to present a paper entitled “Differential games and dynamic programming, and what the latter can learn from the former”, the meeting was buzzing with excitement about an upcoming confrontation between two well known American mathematicians. And indeed, when Rufus presented his paper, it was his take on the discovery of the Principle of Optimality which, in his view, appeared after the in-house publication of three RAND reports on differential games, and which appeared to be just a one-player version of his Tenet of Transition. This implied accusation of plagiarism had two unhappy consequences. I had lunch with Dick on that day. He was deeply hurt, so much so that he was near tears. Equally unfortunate was the effect on Rufus, who devoted much of his remaining time to trying to prove the priority of his discovery instead of continuing to produce the new and important research of which his fertile mind was surely capable.

The second example is a much happier one. In the mid-1960s I published a brief paper in which I proposed constructive sufficiency conditions for extremizing a class of integrals by solving an equivalent problem by inspection. It was not until 1999 that I returned to this subject at the urging of a Canadian colleague.
After revisiting the original 1967 paper, I published a generalization in JOTA in 2001. On presenting these results at my 75th birthday symposium in Sicily in 2001, Pierre Bernhard remarked that my approach seemed to be related to Caratheodory's in his 1935 text on the calculus of variations and partial differential equations, first translated into English in the mid-1960s and not known to me. And indeed, in 2002, Dean Carlson published a paper in JOTA in which he discussed a relation between the two approaches, in that both are based on the equivalent problem methodology. Caratheodory obtained an equivalent problem by allowing for a different integrand, and I obtained an equivalent problem by the use of transformed variables. Dean then proposed a generalization combining the two approaches. A happy consequence of this paper has been, and continues to be, a fruitful collaboration which has resulted in many extensions and applications, e.g., to classes of optimal control and differential game problems, to multiple integrals, and to economic problems, the most recent concerned with differential constraints (state equations) and presented just a couple of weeks ago at the 15th International Workshop on Dynamics and Control. A particularly interesting discussion and some generalizations by Florian Wagener may be found in the July 2009 issue of JOTA. Thus, Caratheodory received his well deserved citation and I learned a great deal, allowing me to make some small contributions to optimization theory.
Pravin Varaiya is Nortel Networks Distinguished Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. From 1975 to 1992 he was also Professor of Economics at Berkeley. During 1994-1997 he was Director of California PATH, a multi-university research program concerned with the development and use of technology to help solve California’s transportation problems. His current research is concerned with sensor networks, transportation, and hybrid systems. His earlier research dealt with stochastic control, communication networks, discrete event systems, and large scale systems. Varaiya has held a Guggenheim Fellowship and a Miller Research Professorship. He has received two honorary doctorates, and the Field Medal and Bode Prize of the IEEE Control Systems Society. He is a Fellow of the IEEE, a member of the National Academy of Engineering, and a Fellow of the American Academy of Arts and Sciences. He is on the editorial board of Transportation Research Part C. He has co-authored three books and 300 technical papers. The second edition of High-Performance Communication Networks (with Jean Walrand) was published by Morgan-Kaufmann in 2000. “Structure and Interpretation of Signals and Systems” (with Edward Lee) was published in 2003 by Addison-Wesley.
June 12, 2008. Seattle, WA
It is an honor to receive the Bellman Award. I am sure the Award Committee received many outstanding nominations, and I thank the Committee for selecting me. I was invited to make a few remarks, so long as I did not exceed five minutes. I will point out some landmarks along my intellectual journey. The young people among you may find it of some interest. I came to Berkeley as a graduate student in 1960. I owe a great deal to Professor Lotfi Zadeh who was my PhD adviser and who has been a mentor to me ever since. Much of my intellectual development came from interaction with visitors and students. Karl Astrom visited me in the early 1960s. His paper with Bohlin on system identification became for me a standard of research quality and research exposition. Another significant visitor was Bill Root. Bill showed me how to use mathematics in the analysis of communication systems, and he introduced me to information theory.
There was a buzz at the time about white noise and martingales. Gene Wong was talking about it, as was Moshe Zakai. Tyrone Duncan was visiting. Ty de-mystified the buzz for me. He taught me how to think about stochastic systems. Thus began my lifelong attraction towards randomness. Sanjoy Mitter, who I first met about that time, reinforced that attraction. Sanjoy became a lifelong friend, for which I am very grateful.
Mark Davis was the first in a sequence of brilliant PhD students in stochastic systems. Mark discovered the deep relation between martingales and optimum decisions. Rene Boel, Jan van Schuppen, and Gene Wong found that martingales were key to point processes as well as Itô processes. Jean Walrand grasped this insight and developed it into an outstanding thesis on queuing networks. Venkat Anantharam knew little or nothing about probability theory when he began his PhD. I still recall how much he impressed me with his spectacular work on multi-armed bandits. The third in this group was Vivek Borkar. Vivek was the most quiet, but equally stunning.
This was when P.R. Kumar visited Berkeley. He is the first of the next generation that I got to know as a friend. I have become a fan of his, along with so many others. Intellectual life moves in circles. Borkar and Kumar re-connected me with Karl Astrom, this time through his paper with Wittenmark.
Jean Walrand introduced me to computer communication networks. This has continued to be an area of research for the past twenty years. We've had outstanding students, who have gone on to brilliant careers. Sri Kumar, then at Northwestern, Jean Walrand, and I got to know each other through our shared interest in networking.
I learned power engineering in undergraduate school. But then I lost contact with the field, until years later when Felix Wu joined our faculty. Eyad Abed, Fathi Salem and Shankar Sastry wrote their dissertations on difficult questions in nonlinear systems, inspired by problems of power systems. I lost contact with the field once again, until deregulation became the rage in California. Once again Felix recruited me. Felix Wu, Shmuel Oren of IEOR, Pablo Spiller of the Business School, and I joined forces to save California from the clutches of the utilities. We developed a provably good deregulation strategy. The strategy was not adopted.
Ahmad Bahai and Andrea Goldsmith sparked my interest in wireless communications. They have become stars. They inspired my very recent students, Mustafa Ergen and Sinem Coleri.
In the late sixties, Noam Chomsky came to Berkeley and gave a lecture on formal languages. Chomsky's talk opened up a whole world for me. I spent a lot of time learning recursive functions, Turing machines, and Godel's theory. Walt Burkhard wanted to do a thesis on space-time complexity of recursive functions, and he helped consolidate what I had learned. However, my involvement with that subject declined.
My interest was revived by the Wonham-Ramadge paper on discrete-event systems, while Joseph Sifakis, Tom Henzinger and others began the study of timed automata. These developments combined to create the area of Hybrid Systems. My students Anuj Puri and Alex Kurzhansky obtained some outstanding results in Hybrid Systems.
My flirtation with transportation began 30 years ago when I taught urban economics. Mario Ripper was my first doctoral student in transportation planning. My interest then waned. In 1990, Steve Shladover helped spark a national, indeed worldwide, interest in automated highways. Berkeley became a leading research center in highway automation, culminating in a full demonstration in 1997 in San Diego. It was very exciting to work with an interdisciplinary group of experts to build something all the way from theory to demonstration.
Since I could not wait for 25 years before automated highways became practical, my attention shifted to today's highways. My student Karl Petty built the PeMS system, which is now world-renowned as a repository of highway data. Roberto Horowitz and I are now developing a control system for the management of highways. It might become an important follow-on to the PeMS system.
Let me conclude with a remark on Richard Bellman, whom I met in the late sixties. Bellman was a renowned mathematician with contributions in many, many areas. I learned two things from him. First, over the years I continue to marvel at the significance of the optimality principle in the form of the verification theorem, which I have used in many contexts. Second and more important, I learned that good theory is very practical.
Thank you very much for being such courteous listeners.
Dr. Sanjoy K. Mitter is Professor of Electrical Engineering at the Laboratory for Information and Decision Systems at the Massachusetts Institute of Technology (MIT). Prior to 1965, he worked as a research engineer at Brown Boveri & Co. Ltd., Switzerland (now ASEA Brown Boveri) and Battelle Institute in Geneva, Switzerland. He taught at Case Western Reserve University from 1965-1969. He joined MIT in 1969 where he has been a Professor of Electrical Engineering since 1973. He was the Director of the MIT Laboratory for Information and Decision Systems from 1981-1999. He has also been a Professor of Mathematics at the Scuola Normale, Pisa, Italy from 1986-1996. Professor Mitter's other visiting positions include Imperial College, London; University of Groningen, Holland; INRIA, France; Tata Institute of Fundamental Research, India and ETH, Zürich, Switzerland. At the University of California, Berkeley, he was the McKay Professor in March 2000 and the Russell Springer Professor from September 2003 to January 2004. He was a Visiting Professor at the California Institute of Technology in January 2005. He is a Fellow of the IEEE and winner of the 2000 IEEE Control Systems Award. In addition, he is a Member of the National Academy of Engineering and is associate editor of several journals. He is a Foreign Member of the Istituto Veneto di Scienze, Lettere ed Arti. Professor Mitter received his doctoral degree in 1965 from the Imperial College of Science and Technology, University of London.
July 12, 2007. New York, NY
It is a great honor for me to receive the Bellman Award—quite undeserved I believe, but I decided not to emulate Grigori Perelman by refusing to accept the award. I might however follow his footsteps (apparently he has stopped doing Mathematics) and concentrate only on the more conceptual and philosophical aspects of the broad field of Systems and Control.
On an occasion like this it is perhaps appropriate to say a few words about the seminal contributions of Richard Bellman. As we all know, he is the founder of the methodological framework of Dynamic Programming, probably the only general method of systematically and optimally dealing with uncertainty, when uncertainty has a probabilistic description, and there is an underlying Markov structure in the description of the evolution of the system. It is often mentioned that the work of Bellman was not as original as would appear at first sight. There was, after all, Abraham Wald's seminal work on Optimal Sequential Decisions and the Carathéodory view of Calculus of Variations, intimately related to Hamilton–Jacobi Theory. But the generality of these ideas, both for deterministic optimal control and stochastic optimal control with full or partial observations, is undoubtedly due to Bellman. Bellman, I believe, was also the first to present a precise view of stochastic adaptive control using methods of dynamic programming. Now, there are two essential steps in invoking Dynamic Programming: first, invariant embedding, whereby a fixed variational problem is embedded in a potentially infinite family of variational problems; and second, invoking the Principle of Optimality, which states that any sub-trajectory of an optimal trajectory is necessarily optimal, to characterize optimal trajectories. This is where the Markov structure of dynamic evolution comes into operation. It should be noted that there is wide flexibility in the invariant embedding procedure and this needs to be exploited in a creative way. It is this embedding that permits obtaining the optimal control in feedback form (that is, a "control law" as opposed to open loop control).
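The two steps described above — embedding the problem in a family indexed by time and state, then applying the Principle of Optimality backward in time — can be sketched concretely. The toy problem below (an integer state driven toward zero, with illustrative costs and horizon) is an assumption for demonstration, not an example from the speech; the backward-induction structure and the feedback-form policy it yields are the general point.

```python
# A minimal sketch of Bellman backward induction on a finite-horizon
# deterministic problem. States, actions, costs, and horizon here are
# illustrative assumptions chosen only to show the structure.

def backward_induction(states, actions, step_cost, transition, horizon):
    """Return the value function V[t][s] and an optimal feedback
    policy[t][s], computed via the Principle of Optimality."""
    # Terminal condition: zero cost-to-go at the final time.
    V = [{s: 0.0 for s in states} for _ in range(horizon + 1)]
    policy = [{} for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):      # sweep backward in time
        for s in states:                      # embed over all states
            best_a, best_v = None, float("inf")
            for a in actions:
                # Cost of one stage plus optimal cost-to-go from the
                # successor state (any tail of an optimal path is optimal).
                v = step_cost(s, a) + V[t + 1][transition(s, a)]
                if v < best_v:
                    best_a, best_v = a, v
            V[t][s], policy[t][s] = best_v, best_a
    return V, policy

# Example: drive an integer state toward 0 with actions -1, 0, +1,
# paying |state| per stage plus a small control cost.
states = list(range(-3, 4))
actions = (-1, 0, 1)
cost = lambda s, a: abs(s) + 0.1 * abs(a)
move = lambda s, a: max(-3, min(3, s + a))    # saturate at the boundary
V, policy = backward_induction(states, actions, cost, move, horizon=5)
print(policy[0][3])   # feedback law at t=0, state 3  ->  -1
```

Note that the output is a policy — a rule mapping each time and state to an action — rather than a single open-loop trajectory, which is exactly the "feedback form" the embedding makes possible.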
The solution of the Partially-Observed Stochastic Control problem in continuous time, leading to the characterization of the optimal control as a function of the unnormalized conditional density of the state given the observations via the solution of an infinite-dimensional Bellman–Hamilton–Jacobi equation, is one of the crowning achievements of the Bellman view of stochastic control. It is worth mentioning that Stochastic Finance Theory would not exist but for this development. There are still open mathematical questions here that deserve further work. Indeed, the average cost problem for partially-observed finite-state Markov chains is still open—a natural necessary and sufficient condition for the existence of a bounded solution to the dynamic programming equation is still not available.
Much of my recent work has been concerned with the unification of theories of Communication and Control. More precisely, how does one bring to bear Information Theory to gain understanding of Stochastic Control and how does one bring to bear the theory of Partially-Observed Stochastic Control to gain qualitative understanding of reliable communication. There does not exist a straightforward answer to this question since the Noisy Channel Coding Theorem which characterizes the optimal rate of transmission for reliable communication requires infinite delay. The encoder in digital communication can legitimately be thought of as a controller and the decoder an estimator, but they interact in complicated ways. It is only in the limit of infinite delay that the problem simplifies and a theorem like the Noisy Channel Coding Theorem can be proved. This procedure is exactly analogous to passing to the thermodynamic limit in Statistical Mechanics.
In the doctoral dissertation of Sekhar Tatikonda, and in subsequent work, the Shannon Capacity of a Markov Channel with Feedback under certain information structure hypotheses can be characterized as the value function of a partially-observed stochastic control problem. This work in many ways exhibits the power of the dynamic programming style of thinking. I believe that this style of thinking, in the guise of a backward induction procedure, will be helpful in understanding the transmission capabilities of wireless networks. More generally, dynamic programming, when time is replaced by a partially ordered set, is a fruitful area of research.
Can one give an "information flow" view of path estimation of a diffusion process given noisy observations? An estimator, abstractly, can be thought of as a map from the space of observations to a conditional distribution of the estimand given the observations. What is the nature of the flow of information from the observations to the estimator? Is it conservative or dissipative? In joint work with Nigel Newton, I have given a quite complete view of this subject. It turns out that a backward likelihood filter, which estimates the initial state, combined with a fully observed stochastic controller moving forward in time from this estimated state, solves the problem in the sense that the resulting path-space measure is the requisite conditional distribution. The backward filter dissipates historical information at an optimal rate, namely that information which is not required to estimate the initial state, and the forward control problem fully recovers this information. The optimal path estimator is conservative. This result establishes the relation between stochastic control and optimal filtering. Somewhat surprisingly, the optimal filter in a stationary situation satisfies a second law of thermodynamics.
What of the future? Undoubtedly we have to understand control under uncertainty in a distributed environment. Understanding the interaction between communication and control in a fundamental way will be the key to developing any such theory. I believe that an interconnection view where sensors, actuators, controllers, encoders, channels and decoders, each viewed abstractly as stochastic kernels, are interconnected to realize desirable joint distributions, will be the “correct” abstract view for a theory of distributed control. Except in the field of distributed algorithms, not much fundamental seems to be known here.
It is customary to end acceptance discourses on an autobiographical note and I will not depart from this tradition. Firstly, my early education at Presidency College, Calcutta, where I had the privilege of interacting with some of the most brilliant fellow students, decisively formed my intellectual make-up. Whatever culture I acquired, I acquired it at that time. At Imperial College, while I was doing my doctoral work, I was greatly influenced by John Florentin (a pioneer in Stochastic Control), Martin Clark and several other fellow students. I have also been fortunate in my association with two great institutions—MIT and the Scuola Normale, Pisa. I cannot overstate what I have learnt from my doctoral students, too many to mention by name—Allen gewidmet von denen ich lernte [Dedicated to all from whom I have learnt (taken from the dedication of Günter Grass in "Beim Häuten der Zwiebel" ("Peeling the Onion"))]. I find that they have extraordinary courage in shaping some half-baked idea into a worthwhile contribution. In recent years, my collaborative work with Vivek Borkar and Nigel Newton has been very important for me. I have great intellectual affinity with members of Club 34, the most exclusive club of its kind, and I thank the members of this club for their friendship. There are many others whose intellectual views I share, but at the cost of exclusion let me single out Jan Willems and Pravin Varaiya. I admire their passion for intellectual discourse. Last, but not least, I thank my wife, Adriana, for her love and support. I am sorry she could not be here today. My acceptance speech is dedicated to her.
Tamer Başar is with the University of Illinois at Urbana-Champaign (UIUC), where he holds the positions of the Fredric G. and Elizabeth H. Nearing Endowed Professor of Electrical and Computer Engineering, Center for Advanced Study Professor, and Research Professor at the Coordinated Science Laboratory. He was born in Istanbul, Turkey, in 1946, and received B.S.E.E. degree from Robert College, Istanbul, in 1969, and M.S., M.Phil, and Ph.D. degrees from Yale University, in 1970, 1971 and 1972, respectively. He joined UIUC in 1981, after holding positions at Harvard University and Marmara Research Institute (Gebze, Turkey). He has published extensively in systems, control, communications, and dynamic games, with over 400 publications, including two books (with several editions)--one on dynamic noncooperative game theory (with G.J. Olsder), and the other one on H∞-optimal control (with P. Bernhard). He has made fundamental contributions to a diverse set of topics, including decision making under uncertainty; information structures, stochastic teams, and differential games; large scale systems; hierarchical and decentralized control; worst-case identification, estimation, and control; H∞-optimal control for linear and nonlinear systems; robust adaptive control and filtering; distributed computation; and routing, congestion control, and pricing in networks. His current research interests are in modeling and control of communication networks; control over heterogeneous networks; usage-constrained sensing, estimation and control; network economics; mobile and distributed computing; security and trustworthiness in computer networks; and risk-sensitive estimation and control. He has served in various capacities for several professional organizations, including IEEE, Control Systems Society (CSS), AACC, the International Federation of Automatic Control (IFAC), and the International Society of Dynamic Games (ISDG). 
He is currently the Editor-in-Chief of Automatica, Editor of the Birkhauser Series on Systems & Control: Foundations & Applications, Editor of the Annals of ISDG, and member of editorial and advisory boards of several international journals in control, wireless networks, and applied mathematics. He has received several awards and recognitions over the years, among which are the Medal of Science of Turkey (1993); Distinguished Member Award (1993), Axelby Outstanding Paper Award (1995) and Bode Lecture Prize (2004) of CSS; Millennium Medal of IEEE (2000); Tau Beta Pi Drucker Eminent Faculty Award of UIUC (2004); and the Outstanding Service Award (2005) and the Giorgio Quazza Medal (2005) of IFAC. He is a member of the National Academy of Engineering, a member of the European Academy of Sciences, a Fellow of IEEE, a Fellow of IFAC, a past president of CSS, and the founding president of ISDG.
June 15, 2006. Minneapolis, MN
I am honored to receive this most prestigious award and recognition by the American Automatic Control Council, named after Richard Ernest Bellman (the creator of "dynamic programming")---who has shaped our field and influenced through his creative ideas and voluminous multifaceted work the research of tens of thousands, not only in control, but also in several other fields and disciplines. In my own research, which has encompassed control, games, and decisions, I have naturally also been influenced by the work of Bellman (on dynamic programming), as well as of Rufus Isaacs (the creator of differential games) whose tenure at RAND Corporation (Santa Monica, California) partially overlapped with that of Bellman in the 1950s. I want to use the few minutes I have here to say a few words on those early days of control and game theory research (just a brief historical perspective), and Bellman's role in that development.
In a Bode Lecture I delivered (at the IEEE Conference on Decision and Control in the Bahamas) in December 2004, I had described how modern control theory was influenced by the research conducted and initiatives taken at the RAND Corporation in the early 1950s. RAND had attracted and housed some of the great minds of the time, among whom was also Richard Bellman, in addition to names like Leonard D. Berkovitz, David Blackwell, George Dantzig, Wendell Fleming, M.R. Hestenes, Rufus Isaacs, Samuel Karlin, John Nash, J.P. LaSalle, and Lloyd Shapley (to list just a few). These individuals, and several others, laid the foundations of decision and game theory, which subsequently fueled the drive for control research. In this unique and highly conducive environment, Bellman started working on multi-stage decision processes, as early as 1949, but more fully after 1952---and it is perhaps a lesser known historical fact that one of the earlier topics Bellman worked on at RAND was game theory (both zero- and nonzero-sum games), on which he co-authored research reports with Blackwell and LaSalle. In an informative and entertaining autobiography he wrote 32 years later ("Eye of the Hurricane", World Scientific, Singapore), completed in 1984 shortly before his untimely death (March 19), Bellman describes eloquently the research environment at RAND and the reason for coining the term "dynamic programming".
At the time, the funding for RAND came primarily from the Air Force, and hence it was indirectly under the Secretary of Defense, who was in the early 1950s someone by the name Wilson. According to Bellman, "Wilson had a pathological fear and hatred of the word 'research' and also of anything 'mathematical' ". Hence, it was quite a challenge for Bellman to explain what he was doing and interested in doing in the future (which was research on multi-stage decision processes) in terms which would not offend the sponsor. "Programming" was an OK word; after all Linear Programming had passed the test. He wanted "to get across the idea that what he was doing was dynamic, multi-stage, and time-varying", and therefore picked the term "Dynamic Programming". He thought that "it was a term not even a Congressman could object to". This being the official reason given for his pick of the term, some say (Harold Kushner--recipient of this award two years ago--being one of them, based on a personal conversation with Bellman) that he wanted to upstage Dantzig's Linear Programming by substituting "dynamic" for "linear". Whatever the reasons were, the terminology (and of course also the concept and the technique) was something to stay with us for the next fifty plus years, and undoubtedly for many more decades into the future, as also evidenced by the number of papers at this conference using the conceptual framework of dynamic programming.
Applying dynamic programming to different classes of problems, and arriving at "functional equations of dynamic programming", subsequently led Bellman, as a unifying principle, to the "Principle of Optimality", which Isaacs, also at RAND, and at about the same time, had called "tenet of transition" in the broader context of differential games, capturing strategic dynamic decision making in adversarial environments.
Bellman also recognized early on that a solution to a multi-stage decision problem is not merely a set of functions of time or a set of numbers, but a rule telling the decision maker what to do, that is, a "policy". This led in his thinking, when he started looking into control problems, to the concept of "feedback control", and along with it to the notions of sensitivity and robustness. These developments, along with the more refined notions of information structures (who knows what and when), have been key ingredients in my research for the past thirty plus years.
It is interesting that at RAND at the time (that is in the 1950s), in spite of the anti-research and anti-mathematical attitude that existed in the higher echelons of the government, and the Department of Defense in particular, fundamental research did prosper, perhaps somewhat camouflaged initially, which in turn drove the creation of modern control theory, fueled also by the post-Sputnik anxiety. There is perhaps a message that should be taken from that: "Don't give up doing what you think and believe is right and important, but also be flexible and accommodating in how you promote it".
Before closing, I want to thank all who have been involved in the nomination process and the selection process of the Bellman Control Heritage Award this year. I want to use this occasion also to acknowledge several educational and research institutions which have impacted my life and career.
First, I want to acknowledge the contributions of the educational institutions in my native country, Turkey, in the early years of my upbringing, and the comfortable research environment provided by the Marmara Research Institute I was affiliated with in the mid to late 1970s. Second, I want to acknowledge the love for research and the drive for pushing the frontiers of knowledge I was infected with during my years at Yale and Harvard in the early 1970s. And last, but foremost, I want to acknowledge the perfect academic environment I found and have still been enjoying at the University of Illinois at Urbana-Champaign---wonderful colleagues, stimulating teaching environment at the Department of Electrical and Computer Engineering, and exemplary conducive research environment at the Coordinated Science Laboratory with its top quality graduate students. I also want to recognize all students, post-docs, and colleagues I have had the privilege of having research interactions and collaborations with over the years. I thank them all for the memorable journeys in exploring the frontiers in control science and technology.
Thank you very much.
Gene F. Franklin received the Bachelor's degree in Electrical Engineering from Georgia Tech in 1950, the Master of Science in Electrical Engineering from MIT in 1952, and the Doctor of Engineering Science degree in Electrical Engineering from Columbia University in 1955. He was Assistant Professor of Electrical Engineering at Columbia University from 1955-1957 and has been on the Faculty of Electrical Engineering at Stanford University since 1957, where he is now Professor of Electrical Engineering, Emeritus. He was Vice Chairman of the Department of Electrical Engineering from 1989-1994 and was Chairman of the Department for the 1994-1995 academic year. He was Director of the Information Systems Laboratory from its founding in 1962 until 1971 and was Associate Provost for Computing for Stanford University from 1971-1975.
He is co-author of three books: Sampled Data Systems, Digital Control of Dynamic Systems and Feedback Control of Dynamic Systems. The Second Edition of the last of these books received the IFAC prize as the best book in the controls area published during the period 1987-1990; the fifth edition is now in preparation. Professor Franklin has supervised the research of over 60 Ph.D. candidates in many aspects of control and systems.
He has for many years been an active member of the IEEE. He joined as a Student Member in April, 1950, and became a Life Fellow of the Institute in January, 1993. He was on the Board of Directors of the CSS from 1982 until 1988 and was Vice President for Technical Affairs for 1985 and 1986. He was General Chairman of the JACC of 1964 and General Chairman of the CDC in 1984. He received the Ragazzini Education Award of the AACC for 1985, and gave the Bode Lecture at the 1994 CDC. He is a Distinguished Member of the CSS, and Franklin and Abramovitch were awarded the prize for the best paper published in the CSM in 2003 for their review of the control of disk drives.
June 9, 2005. Portland, OR
'Grow old along with me
The best is yet to be'
I don't feel particularly old but to be in the midst of friends and colleagues with this recognition is as good as it gets.
I'd like to use these few minutes to comment on several of the times when I've come to a fork in the road as an illustration of how difficult it is to predict how a given path will turn out. There may be people who plan their lives carefully and take each step based on the best prediction of a good outcome; I'm not one of them. Too many events in my life were based on random events to pretend that they were based on any good planning of mine.
My first decision was a good one: I selected outstanding parents. My father was a math teacher, my mother an RN and they gave me a love of books and learning that have served me well for over 7 decades. They did, however, make one mistake: they gave me a defective gene that prevents me from seeing colors the way most others see them. If you see me going Ooh and Ah over a rainbow, don't believe it; I'm faking it.
The next decision I wish to mention was in 1945 when I became eligible for the military draft. The good news was that I was admitted to the Navy Radio Technician program but the bad news was that I had to sign up for four years to accept the offer. The evidence was that the war would last several more years so I signed up. That decision did not look so good a few weeks later when President Truman approved use of atomic bombs to reduce Hiroshima to rubble and Nagasaki to ruin in a matter of seconds. The war ended soon after but I was still stuck with four years obligation to the Navy. When I got to Chicago for my final physical, one of the doctors asked me to identify the numbers in a set of circles filled with colored dots. I'm sure that I gave him some values never before found! My performance was such that he marked me as partially disabled, put me on medical special assignment, and sent me off to the electronics school.
I finished the school in the summer of 1946 and was selected to be an instructor at a new campus being set up at the Great Lakes Naval Training Center north of Chicago. I taught electronic amplifiers there using the book Radio Engineering by F E Terman. One of my fellow students there later became well known in the control field (and a Vice President of IBM): Jack Bertram had also signed up for the Navy electronics program. In the early summer of 1947 my defective gene came to my rescue. The Navy announced that any sailor on medical special assignment was eligible for discharge! My response: That's ME.
Out of the Navy I went and set about looking for a school that would accept me at that late date. I was turned down by several fine schools but Georgia Tech told me to come on down so off I went to Atlanta where I got my EE degree in 1950. The months I'd served in the Navy made me eligible for enough GI Bill support to pay the tuition and expenses which I could never have afforded otherwise. This time the bad news was that in the spring of 1950 the Bureau of Labor Statistics reported that the country was to graduate twice as many engineers as the economy could absorb. My only choice was to accept a fellowship to MIT and continue my education using the last of my GI Bill of Rights tuition support. As an aside, while there I took a graduate course on pulse and timing circuits that contained little new from what we had learned in the Navy program as high school graduates! I also had a great time learning how to play rugby from a group of graduate students from South Africa. A most memorable part of this experience was when we were one of the teams selected to play in a tournament as the entertainment for spring break in Bermuda.
After finishing my MS in 1952 I had married the love of my life and needed to get a job. A fellow student introduced me to Professor Jack Millman who was visiting MIT looking for possible appointments to Columbia University. I interviewed with him and was offered a position as Instructor which involved teaching responsibilities but allowed me to study for the doctorate at the same time. I had no idea that I was stepping into a fantastic center of control research assembled by John Ragazzini. With his colleague Lotfi Zadeh he had attracted great students including Eli Jury, Art Bergen, Jack Bertram, Rudy Kalman, Bernie Friedland, George Kranc, and Phil Sarachik. Sampled Data control was never the same again. The first treatment of 'pulsed circuits' was chapter 5 by Hurewicz in the Rad Lab Vol. 25 on The Theory of Servomechanisms, edited by James, Nichols, and Phillips. Hurewicz selected the variable of the discrete transform as z, a prediction of one period, and we kept the same convention. At about the same time as Ragazzini's group was starting our study, some at MIT selected z to be a delay operator. In the end, z as predictor prevailed, but to this day MATLAB treats discrete transforms differently in the Signal Processing Toolbox than in the Control Toolbox. You can look it up.
After I got my degree in 1955 I was promoted to Assistant Professor. I loved Columbia and was pleased to be selected by Professor Ragazzini to join him as co-author of a book on sampled data but New York City left a lot to be desired as a place to raise the two children who had joined my family by this time and soon another fork in the road appeared. It was presented in the person of Professor John Linvill whose class I had taken at MIT and who had moved from MIT to Stanford by way of Bell Labs. John knew Lotfi Zadeh and at his invitation came to Columbia looking for possible new appointments to Stanford's faculty. Again I interviewed and was offered a position on the Stanford Faculty. Thus it was that in late May of 1957 we loaded up the (non air-conditioned) Ford and headed west. I'll never forget the hot day in June when we stopped for gas in Sacramento where the temperature was well over 100 degrees. The pavement was so soft my shoes sank into the asphalt. Then later that day we crossed the mountains into the Bay Area and the temperature dropped about 1 degree per mile for the last 30 miles. We've been in love with the San Francisco Bay area ever since.
As an aside on control at the time: in the paper on the history of the Society by Danny Abramovitch and myself, George Axelby is quoted as saying that the papers presented at the 1959 conference on control by Kalman and Bertram using state notation were 'quite a mystery to most attendees.' I'd say that the idea of state was not long a mystery to those who had worked with analog computers. On those machines, the only dynamic elements are integrators, whose outputs comprise the state quite naturally. In my opinion, every control engineer should be required to program an analog computer, where one also quickly learns the value of amplitude and time scaling.
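The point about integrators and scaling can be sketched digitally. This is my own illustration, not anything from the speech; the oscillator, parameter values, and function names are made up. Each integrator output is a state variable, and the time-scaling substitution tau = w*t lets the "fast" system run at unit frequency, exactly the trick used to bring a fast plant into an analog computer's comfortable speed range:

```python
# Sketch: on an analog computer the only dynamic elements are integrators,
# so the integrator outputs ARE the state variables.  A damped oscillator
#   x'' + 2*zeta*w*x' + w^2*x = 0
# becomes two integrators with states x1 = x and x2 = x'; a simple Euler
# loop stands in for the analog hardware.

def simulate(zeta=0.2, w=100.0, x0=1.0, dt=1e-5, steps=20000):
    x1, x2 = x0, 0.0                     # integrator outputs = the state
    for _ in range(steps):
        dx1 = x2                         # first integrator:  x1' = x2
        dx2 = -2*zeta*w*x2 - w*w*x1      # second integrator: x2' = ...
        x1 += dt*dx1
        x2 += dt*dx2
    return x1, x2

# Time scaling: with tau = w*t and y1(tau) = x1(t), y2 = x2/w, the system
# becomes y1' = y2, y2' = -2*zeta*y2 - y1, which runs at w = 1 with a step
# w times larger -- same trajectory, friendlier rates.
def simulate_scaled(zeta=0.2, x0=1.0, dtau=1e-3, steps=20000):
    y1, y2 = x0, 0.0
    for _ in range(steps):
        dy1 = y2
        dy2 = -2*zeta*y2 - y1
        y1 += dtau*dy1
        y2 += dtau*dy2
    return y1, y2
```

Running both over the same scaled interval, the position states agree (y1 tracks x1, and w*y2 tracks x2), which is the whole content of time and amplitude scaling.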
In any case, such was the random walk through time and space that has taken me from the mountains of North Carolina to the coast of California. My tenure at Stanford has been marked by many things, but first and foremost in my affection has been the steady stream of excellent students with whom I have been privileged to work. Without a doubt they have made major contributions to control, and to them is owed much of the credit for which this award is made. So let me close with the moral of my story, aimed mainly at those in academia:
You can never be too careful when selecting your students.
The corollary to this is applicable to everyone:
It's hard to soar like an eagle if you fly with a bunch of turkeys.
Thank you very much.
Harold J. Kushner received the Ph.D. in Electrical Engineering from the University of Wisconsin in 1958. Since then, in ten books and more than two hundred papers, he has established a substantial part of modern stochastic systems theory. His contributions include seminal developments of stochastic stability for both Markovian and non-Markovian systems, optimal nonlinear filtering and effective algorithms for approximating optimal nonlinear filters, stochastic variational methods and the stochastic maximum principle, numerical methods for jump-diffusion type control and game problems (the current methods of choice), efficient numerical methods for Markov chain models, methods for singularly perturbed stochastic systems, an extensive development of controlled stochastic networks such as queueing/communications systems under conditions of heavy traffic, methods for the analysis and approximation of systems driven by wideband noise, large-deviation methods for control problems with small noise effects, stochastic distributed and delay systems, and nearly optimal control and filtering for non-Markovian systems.
His work on stochastic approximations and recursive algorithms has set much of the current framework, and he has contributed heavily to applications of control methods to communications problems.
He is a past Chairman of the Applied Mathematics Department and past Director of the Lefschetz Center for Dynamical Systems, at Brown University, where he is currently a University Professor Emeritus.
July 1, 2004. Boston, MA
It is a great honor to receive this award. It is a particular honor that it is in memory of Richard Bellman. I doubt that there are many here who knew Bellman, so I would like to make some comments concerning his role in the field.
Bellman left RAND after the summer of 1965 for the position of Professor of Electrical Engineering, Mathematics, and Medicine at the University of Southern California. This triple title gives you some inkling of how he was viewed at the time. I spent that summer at RAND. My office was right next to Bellman's and we had lots of opportunity to talk.
Bellman was always very supportive of my work. He encouraged me to write my first book, Stochastic Stability and Control, in 1967 for his Academic Press Series. Although naive by modern standards, the book seemed to have a significant impact on subsequent development in that it made many mathematicians realize that there was serious probability to be done in stochastic control, and established the foundations of stochastic stability theory. Numerical methods were among his strong interests. He was well acquainted with my work on numerical methods for continuous time stochastic systems and encouraged me to write my first book on the subject, later updated in two books with Paul Dupuis, and still the methods of choice. Despite his enormous output of published papers, something like 900, he was a strong believer in books since they allowed one to develop a subject with considerable freedom.
There are other connections, albeit indirect, between us. He was a New Yorker, and did his early undergraduate work at CCNY. During those years and, indeed, until the late 1950s, CCNY was one of the most intellectual institutions of higher learning in the US. During that time, before the middle-class migration out of the city and the simultaneous opening of opportunities in the elite institutions for the "typical New Yorker," CCNY had its pick of the best New Yorkers with a serious intellectual bent. Later, he switched to Brooklyn College, which was much closer to his home.
He intended to be a pure mathematician: his primary interest was analytic number theory. When did he become interested in applications? He graduated from college at the start of WW2, and the demands of the war exposed him to a great variety of problems. He taught electronics at Princeton and then worked at a sonar lab in San Diego (which kept him out of the Army for a while). He spent the last two years of the war in the Army, assigned to the Manhattan Project at Los Alamos. He was a social creature, and it was easy for him to meet many of the talented people working on the project. Typically, the physicists considered a mathematician simply a human calculator, ideally constructed to do numerical computations but not much more. Bellman was asked to numerically solve some PDEs. His mathematical pride refused. To the great surprise of the physicists, he actually managed to integrate some of the equations, obtaining closed-form solutions. Holding true to tradition, they checked his solutions not by verifying the derivation but by trying some very special cases. Thus his reputation there as a very bright young mathematician was established. This jealously guarded independence and self-confidence (and lack of modesty) continued to serve him well. During these years, he absorbed a great variety of scientific experiences. So much was being done due to the needs of the war.
There is one more indirect connection between us. Bellman was a student of Solomon Lefschetz at Princeton. Lefschetz, head of the mathematics department at the time, was a very tough-minded mathematician, one of the powerhouses of American mathematics, and impressed with Bellman's ability. While at Los Alamos during WW2, Bellman worked out various results on the stability of ODEs. Although he initially intended to do a thesis with someone else on a number-theoretic problem, Lefschetz convinced him that those stability results were the quickest way to a thesis, which was in fact true. It took only several months and was the basis of his book on the stability of ODEs. I was the director of the Lefschetz Center for Dynamical Systems at Brown University for many years, with Lefschetz our patron saint. Some of you might recall the book (not the movie) "A Beautiful Mind" about John Nash, a Nobel laureate for his work in game theory, which describes Lefschetz's key role in mathematics during Nash's time at Princeton.
Bellman spent the summer of 1948 at RAND, where an amazing array of talent was gathered, including David Blackwell, George Dantzig, Ted Harris, Sam Karlin, Lloyd Shapley, and many others, who provided the foundations of much of decision and game theory. The original intention was to do mathematics with some of the RAND talent on problems of prior interest. But Bellman turned out to be fascinated and partially seduced by the excitement in OR, and the developing role of mathematics in the social and biological sciences. His mathematical abilities were widely recognized. He was a tenured Associate Professor at Stanford at 28, after being an Associate Professor at Princeton, where all indications were that he would have had an assured future had he remained there. He began to have doubts about the payoff for himself in number theory and returned to the atmosphere at RAND often, where he eventually settled and became fully involved in multistage decision processes, having been completely seduced, and much to our great benefit.
Here is a non-mathematical item that should be of interest. To work at RAND one needed a security clearance, even though much of the work did not involve "security." Due to an anonymous tip, Bellman lost his clearance for a while: his brother-in-law, whom Bellman had not seen since he (the brother-in-law) was about 13, was rumored to be a communist. This was an example of a serious national problem that was fed, exploited, and made into a national paranoia by unscrupulous politicians.
Bellman was a remarkable person, thoroughly a man of his time and Renaissance in his interests, with a fantastic memory. Some epochs are represented by individuals who tower because of their powerful personalities and abilities, people who could not be ignored. Bellman was one of those. He was one of the driving forces behind the great intellectual excitement of the times.
The word programming was used by the military to mean scheduling. Dantzig's linear programming was an abbreviation of "programming with linear models." Bellman has described the origin of the name "dynamic programming" as follows. An Assistant Secretary of the Air Force, who was believed to be strongly anti-mathematics, was to visit RAND. So Bellman was concerned that his work on the mathematics of multistage decision processes would be unappreciated. But "programming" was still OK, and the Air Force was concerned with rescheduling continuously due to uncertainties. Thus "dynamic programming" was chosen as a politically wise descriptor. On the other hand, when I asked him the same question, he replied that he was trying to upstage Dantzig's linear programming by adding "dynamic." Perhaps both motivations were true.
If one looks closely at scientific discoveries, ancient seeds often appear. Bellman did not quite invent dynamic programming, and many others contributed to its early development. It was used earlier in inventory control. Peter Dorato once showed me a (somewhat obscure) economics paper from the late thirties where something close to the principle of optimality was used. The calculus of variations had related ideas (e.g., the work of Carathéodory, the Hamilton-Jacobi equation). This led to conflicts with the calculus of variations community. But no one grasped its essence, isolated its essential features, and showed and promoted its full potential in control and operations research, as well as in applications to the biological and social sciences, as did Bellman.
Bellman published many seminal works. It is sometimes claimed that many of his vast number of papers are repetitive and did not develop the ideas as far as they could have been. Despite this criticism, his works were pored over word for word, with every comment and detail mined for ideas, technique, and openings into new areas. His work was a mother lode. It was clearly the work of someone with a superb background in analysis as well as a facile mind and a sharp eye for applications. There are lots of examples, with broad coverage, accessible and usually simple assumptions. His writing is articulate. It flows very smoothly through the problem formulation and mathematical analysis, and he is in full command of it.
We still owe a great debt to him.