
Richard E. Bellman Control Heritage Award

The Bellman Award is given for distinguished career contributions to the theory or application of automatic control. It is the highest recognition of professional achievement for US control systems engineers and scientists. The recipient must have spent a significant part of his/her career in the USA. The awardee is strongly encouraged to give a plenary presentation at the ACC Awards Luncheon.

Yu-Chi Ho

For sustained and significant contributions to research and education in optimization and control of dynamic systems, and his establishment of a new branch of these fields, Discrete Event Dynamic Systems

Yu-Chi (Larry) Ho received his S.B. and S.M. degrees in Electrical Engineering from M.I.T. and his Ph.D. in Applied Mathematics from Harvard University. Except for three years of full-time industrial work, he has been on the Harvard faculty. Since 1969 he has been Gordon McKay Professor of Engineering and Applied Mathematics. Since 1989, he has been the T. Jefferson Coolidge Chair in Applied Mathematics and Gordon McKay Professor of Systems Engineering at Harvard.

W. Harmon Ray


W. Harmon Ray is Vilas Research Professor and past chairman of the Department of Chemical Engineering at the University of Wisconsin in Madison. He received his B.A. and B.S.Ch.E. from Rice University and his Ph.D. from the University of Minnesota in 1966. Before joining the University of Wisconsin he was a faculty member at the University of Waterloo in Canada, from 1966 to 1970, and at the State University of New York at Buffalo, from 1970 to 1976.

A.V. Balakrishnan

For pioneering contributions to stochastic and distributed systems theory, optimization, control, and aerospace flight systems research

A.V. Balakrishnan earned his M.S. Degree in Electrical Engineering and his Ph.D. in Mathematics from the University of Southern California in 1950 and 1954, respectively. Prof. Balakrishnan has been with the University of California, Los Angeles, since 1961; he has been a Professor of Engineering since 1962 and a Professor of Mathematics since 1965. He was Chair of the Department of Systems Science in the (then) School of Engineering from 1969-1975.

Petar V. Kokotović

For pioneering contribution to control theory and engineering, and for inspirational leadership as mentor, advisor, and lecturer over a period spanning four decades

Petar V. Kokotović received graduate degrees in 1962 from the University of Belgrade, Yugoslavia, and in 1965 from the Institute of Automation and Remote Control, USSR Academy of Sciences, Moscow. During his studies, he worked for two six-month periods: in 1956 at Electricité de France, Paris, and in 1957 at AEG, Stuttgart, Germany. From 1959 until 1966, he was with the Pupin Research Institute in Belgrade, Yugoslavia.

Kumpati S. Narendra

For pioneering contributions to stability theory, adaptive and learning systems theory, and for inspiring leadership as mentor, advisor, and teacher over a period spanning four decades

Kumpati S. Narendra received the Bachelor of Engineering degree, with Honors, in Electrical Engineering from Madras University, India, in 1954, and the M.S. and Ph.D. degrees in Applied Physics from Harvard University in 1955 and 1959, respectively. He was a postdoctoral fellow from 1959 to 1961, and an Assistant Professor from 1961 to 1965, at Harvard.

Harold J. Kushner

For fundamental contributions to Stochastic Systems Theory and Engineering Applications, and for inspiring generations of researchers in the field

Harold J. Kushner received the Ph.D. in Electrical Engineering from the University of Wisconsin in 1958. Since then, in ten books and more than two hundred papers, he has established a substantial part of modern stochastic systems theory.

Text of Acceptance Speech: 

July 1, 2004. Boston, MA

It is a great honor to receive this award. It is a particular honor that it is in memory of Richard Bellman. I doubt that there are many here who knew Bellman, so I would like to make some comments concerning his role in the field.

Bellman left RAND after the summer of 1965 for the position of Professor of Electrical Engineering, Mathematics, and Medicine at the University of Southern California. This triple title gives you some inkling of how he was viewed at the time. I spent that summer at RAND. My office was right next to Bellman's and we had lots of opportunity to talk.

Bellman was always very supportive of my work. He encouraged me to write my first book, Stochastic Stability and Control, in 1967 for his Academic Press Series. Although naive by modern standards, the book seemed to have a significant impact on subsequent development in that it made many mathematicians realize that there was serious probability to be done in stochastic control, and established the foundations of stochastic stability theory. Numerical methods were among his strong interests. He was well acquainted with my work on numerical methods for continuous time stochastic systems and encouraged me to write my first book on the subject, later updated in two books with Paul Dupuis, and still the methods of choice. Despite his enormous output of published papers, something like 900, he was a strong believer in books since they allowed one to develop a subject with considerable freedom.

There are other connections, albeit indirect, between us. He was a New Yorker, and did his early undergraduate work at CCNY. During those years and, indeed, until the late '50s, CCNY was one of the most intellectual institutions of higher learning in the US. During that time, before the middle-class migration out of the city and the simultaneous opening of opportunities in the elite institutions for the "typical New Yorker," CCNY had its choice of the best New Yorkers with a serious intellectual bent. Later, he switched to Brooklyn College, which was much closer to his home.

He intended to be a pure mathematician: his primary interest was analytic number theory. When did he become interested in applications? He graduated from college at the start of WW2, and the demands of the war exposed him to a great variety of problems. He taught electronics at Princeton and then worked at a sonar lab in San Diego (which kept him out of the Army for a while). He spent the last two years of the war in the Army, assigned to the Manhattan Project at Los Alamos. He was a social creature, and it was easy for him to meet many of the talented people working on the project. Typically, the physicists considered a mathematician simply a human calculator, ideally constructed to do numerical computations but not much more. Bellman was asked to numerically solve some PDEs. His mathematical pride refused. To the great surprise of the physicists, he actually managed to integrate some of the equations, obtaining closed-form solutions. Holding true to tradition, they checked his solutions not by verifying the derivation but by trying some very special cases. Thus his reputation there as a very bright young mathematician was established. This jealously guarded independence and self-confidence (and lack of modesty) continued to serve him well. During these years, he absorbed a great variety of scientific experiences; so much was being done due to the needs of the war.

There is one more indirect connection between us. Bellman was a student of Solomon Lefschetz at Princeton, head of the Math Department at the time, a very tough-minded mathematician and one of the powerhouses of American mathematics, who was impressed with Bellman's ability. While at Los Alamos during WW2, Bellman worked out various results on the stability of ODEs. Although he initially intended to do a thesis with someone else on a number-theoretic problem, Lefschetz convinced him that those stability results were the quickest way to a thesis, which was in fact true. It took only several months and was the basis of his book on stability of ODEs. I was the director of the Lefschetz Center for Dynamical Systems at Brown University for many years, with Lefschetz our patron saint. Some of you might recall the book (not the movie) "A Beautiful Mind" about John Nash, a Nobel Laureate in game theory, which describes Lefschetz's key role in mathematics during Nash's time at Princeton.

Bellman spent the summer of 1948 at RAND, where an amazing array of talent was gathered, including David Blackwell, George Dantzig, Ted Harris, Sam Karlin, Lloyd Shapley, and many others, who provided the foundations of much of decision and game theory. His original intention was to do mathematics with some of the RAND talent on problems of prior interest. But Bellman turned out to be fascinated, and partially seduced, by the excitement in OR and the developing role of mathematics in the social and biological sciences. His mathematical abilities were widely recognized: he was a tenured Associate Professor at Stanford at 28, after being an Associate Professor at Princeton, where all indications were that he would have had an assured future had he remained there. He began to have doubts about the payoff for himself in number theory and returned often to the atmosphere at RAND, where he eventually settled and became fully involved in multistage decision processes, having been completely seduced, much to our great benefit.

Here is a non-mathematical item that should be of interest. To work at RAND one needed a security clearance, even though much of the work did not involve "security." Due to an anonymous tip, Bellman lost his clearance for a while: his brother-in-law, whom Bellman had not seen since he (the brother-in-law) was about 13, was rumored to be a communist. This was an example of a serious national problem that was fed, exploited, and made into a national paranoia by unscrupulous politicians.

Bellman was a remarkable person, thoroughly a man of his time and renaissance in his interests, with a fantastic memory. Some epochs are represented by individuals that are towering because of their powerful personalities and abilities. People who could not be ignored. Bellman was one of those. He was one of the driving forces behind the great intellectual excitement of the times.

The word "programming" was used by the military to mean scheduling. Dantzig's linear programming was an abbreviation of "programming with linear models." Bellman described the origin of the name "dynamic programming" as follows. An Assistant Secretary of the Air Force, who was believed to be strongly anti-mathematics, was to visit RAND. Bellman was concerned that his work on the mathematics of multi-stage decision processes would be unappreciated. But "programming" was still OK, and the Air Force was concerned with rescheduling continuously due to uncertainties. Thus "dynamic programming" was chosen as a politically wise descriptor. On the other hand, when I asked him the same question, he replied that he was trying to upstage Dantzig's linear programming by adding "dynamic." Perhaps both motivations were true.

If one looks closely at scientific discoveries, ancient seeds often appear. Bellman did not quite invent dynamic programming, and many others contributed to its early development. It was used earlier in inventory control. Peter Dorato once showed me a (somewhat obscure) economics paper from the late thirties where something close to the principle of optimality was used. The calculus of variations had related ideas (e.g., the work of Carathéodory, the Hamilton-Jacobi equation). This led to conflicts with the calculus of variations community. But no one grasped its essence, isolated its essential features, and showed and promoted its full potential in control and operations research, as well as in applications to the biological and social sciences, as did Bellman.

Bellman published many seminal works. It is sometimes claimed that many of his vast number of papers are repetitive and did not develop the ideas as far as they could have been developed. Despite this criticism, his works were pored over word for word, with every comment and detail mined for ideas, techniques, and openings into new areas. His work was a mother lode. It was clearly the work of someone with a superb background in analysis, as well as a facile mind and a sharp eye for applications. There are lots of examples, with broad coverage, accessible and usually simple assumptions. His writing is articulate: it flows very smoothly through the problem formulation and mathematical analysis, and he is in full command of it.

We still owe a great debt to him.

Richard E. Bellman Control Heritage Award

The Bellman Award is given for distinguished career contributions to the theory or application of automatic control. It is the highest recognition of professional achievement for US control systems engineers and scientists. The recipient must have spent a significant part of his/her career in the USA. The awardee is expected to make a short acceptance speech at the AACC Awards Ceremonies during the ACC.

George Leitmann

For pioneering contributions to geometric optimal control, quantitative and qualitative differential games, and stabilization and control of deterministic uncertain systems, and for exemplary service to the control field

George Leitmann is a Professor Emeritus of engineering science and Associate Dean for International Relations at the University of California, Berkeley. His Berkeley career of more than 50 years has included everything from research and teaching to serving as the first ombudsman in the UC system. During seven years at the US Naval Ordnance Test Station, China Lake, he worked mostly on rocket trajectory optimization and testing. He joined the Berkeley faculty in 1957.

Text of Acceptance Speech: 

June 11, 2009. St. Louis, MO

First of all, I wish to express my sincere thanks to the American Automatic Control Council for bestowing on me the Bellman Control Heritage Award. This great honor was completely unexpected, so my gratitude is very deep indeed. I would like to use this rare opportunity to say a few words about a topic which has concerned me for some time, namely the question "Who did what first?" In so doing, I shall relate two examples, of which the first is especially apropos since it involves the patron of the award, Richard Bellman, as well as Rufus Isaacs, both long-time friends of mine.

When I attended the 1966 International Congress of Mathematicians in Moscow, where Dick was a plenary speaker and Rufus was to present a paper entitled "Differential games and dynamic programming, and what the latter can learn from the former," the meeting was buzzing with excitement about an upcoming confrontation between two well-known American mathematicians. And indeed, when Rufus presented his paper, he gave his take on the discovery of the Principle of Optimality, which, in his view, appeared after the in-house publication of three RAND reports on differential games, and which appeared to be just a one-player version of his Tenet of Transition. This implied accusation of plagiarism had two unhappy consequences. I had lunch with Dick on that day. He was deeply hurt, so much so that he was near tears. Equally unfortunate was the effect on Rufus, who devoted much of his remaining time to trying to prove the priority of his discovery instead of continuing to produce the new and important research of which his fertile mind was surely capable.

The second example is a much happier one. In the mid-1960s I published a brief paper in which I proposed constructive sufficiency conditions for extremizing a class of integrals by solving an equivalent problem by inspection. It was not until 1999 that I returned to this subject at the urging of a Canadian colleague.
After revisiting the original 1967 paper, I published a generalization in JOTA in 2001. On presenting these results at my 75th-birthday symposium in Sicily in 2001, Pierre Bernhard remarked that my approach seemed to be related to Carathéodory's in his 1935 text on the calculus of variations and partial differential equations, first translated into English in the mid-1960s and not known to me. And indeed, in 2002, Dean Carlson published a paper in JOTA in which he discussed a relation between the two approaches, in that both are based on the equivalent-problem methodology: Carathéodory obtained an equivalent problem by allowing for a different integrand, and I obtained an equivalent problem by the use of transformed variables. Dean then proposed a generalization combining the two approaches.

A happy consequence of this paper has been, and continues to be, a fruitful collaboration which has resulted in many extensions and applications, e.g., to classes of optimal control and differential game problems, to multiple integrals, and to economic problems, the most recent concerned with differential constraints (state equations) and presented just a couple of weeks ago at the 15th International Workshop on Dynamics and Control. A particularly interesting discussion and some generalizations by Florian Wagener may be found in the July 2009 issue of JOTA. Thus, Carathéodory received his well-deserved citation, and I learned a great deal, allowing me to make some small contributions to optimization theory.

Sanjoy Mitter

For contributions to the unification of communication and control, nonlinear filtering and its relationship to stochastic control, optimization, optimal control, and infinite-dimensional systems theory

Dr. Sanjoy K. Mitter is Professor of Electrical Engineering at the Laboratory for Information and Decision Systems at the Massachusetts Institute of Technology (MIT). Prior to 1965, he worked as a research engineer at Brown Boveri & Co. Ltd., Switzerland (now ASEA Brown Boveri) and Battelle Institute in Geneva, Switzerland. He taught at Case Western Reserve University from 1965-1969.

Text of Acceptance Speech: 

July 12, 2007. New York, NY

It is a great honor for me to receive the Bellman Award—quite undeserved, I believe, but I decided not to emulate Grigori Perelman by refusing to accept the award. I might, however, follow in his footsteps (apparently he has stopped doing mathematics) and concentrate only on the more conceptual and philosophical aspects of the broad field of Systems and Control.

On an occasion like this it is perhaps appropriate to say a few words about the seminal contributions of Richard Bellman. As we all know, he is the founder of the methodological framework of Dynamic Programming, probably the only general method of systematically and optimally dealing with uncertainty, when uncertainty has a probabilistic description, and there is an underlying Markov structure in the description of the evolution of the system. It is often mentioned that the work of Bellman was not as original as would appear at first sight. There was, after all, Abraham Wald’s seminal work on Optimal Sequential Decisions and the Carathéodory view of Calculus of Variations, intimately related to Hamilton–Jacobi Theory. But the generality of these ideas, both for deterministic optimal control and stochastic optimal control with full or partial observations, is undoubtedly due to Bellman. Bellman, I believe, was also the first to present a precise view of stochastic adaptive control using methods of dynamic programming. Now, there are two essential steps in invoking Dynamic Programming, namely, invariant embedding whereby a fixed variational problem is embedded in a potentially infinite family of variational problems and then invoking the Principle of Optimality which states that any sub-trajectory of an optimal trajectory is necessarily optimal to characterize optimal trajectories. This is where the Markov structure of dynamic evolution comes into operation. It should be noted that there is wide flexibility in the invariant embedding procedure and this needs to be exploited in a creative way. It is this embedding that permits obtaining the optimal control in feedback form (that is a “control law” as opposed to open loop control).
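The two steps described above can be made concrete with a small sketch. The following minimal example (the states, actions, transition matrices, stage costs, and horizon are all invented for illustration) runs backward induction on a finite-horizon Markov decision problem: the principle of optimality lets the cost-to-go at stage t be built from the cost-to-go at stage t+1, and the result is the optimal control in feedback form, a table of actions indexed by state and stage rather than an open-loop sequence.

```python
import numpy as np

# Hypothetical 3-state, 2-action finite-horizon Markov decision problem.
# P[a][s, s'] = transition probability under action a; c[s, a] = stage cost.
P = [np.array([[0.8, 0.2, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.2, 0.8]]),
     np.array([[0.5, 0.5, 0.0],
               [0.0, 0.5, 0.5],
               [0.0, 0.0, 1.0]])]
c = np.array([[2.0, 1.0],
              [1.0, 2.0],
              [0.0, 0.5]])
T = 5                                  # horizon
n_states, n_actions = c.shape

# Backward induction: since any tail of an optimal trajectory is itself
# optimal, the value function V_t follows from V_{t+1} one stage at a time.
V = np.zeros(n_states)                 # terminal cost V_T = 0
policy = np.zeros((T, n_states), dtype=int)
for t in reversed(range(T)):
    # Q[s, a] = immediate cost + expected cost-to-go under action a
    Q = c + np.stack([P[a] @ V for a in range(n_actions)], axis=1)
    policy[t] = np.argmin(Q, axis=1)   # feedback law: action as a function of state
    V = Q.min(axis=1)

print("optimal cost-to-go from each state:", V)
print("first-stage feedback policy:", policy[0])
```

Here `policy[t][s]` is the "control law" in the sense of the speech: at stage t, observe the state s and look up the action, regardless of how the system arrived there.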

The solution of the Partially-Observed Stochastic Control problem in continuous time, leading to the characterization of the optimal control as a function of the unnormalized conditional density of the state given the observations via the solution of an infinite-dimensional Bellman–Hamilton–Jacobi equation, is one of the crowning achievements of the Bellman view of stochastic control. It is worth mentioning that Stochastic Finance Theory would not exist but for this development. There are still open mathematical questions here that deserve further work. Indeed, the average cost problem for partially-observed finite-state Markov chains is still open—a natural necessary and sufficient condition for the existence of a bounded solution to the dynamic programming equation is still not available.

Much of my recent work has been concerned with the unification of theories of Communication and Control. More precisely, how does one bring to bear Information Theory to gain understanding of Stochastic Control and how does one bring to bear the theory of Partially-Observed Stochastic Control to gain qualitative understanding of reliable communication. There does not exist a straightforward answer to this question since the Noisy Channel Coding Theorem which characterizes the optimal rate of transmission for reliable communication requires infinite delay. The encoder in digital communication can legitimately be thought of as a controller and the decoder an estimator, but they interact in complicated ways. It is only in the limit of infinite delay that the problem simplifies and a theorem like the Noisy Channel Coding Theorem can be proved. This procedure is exactly analogous to passing to the thermodynamic limit in Statistical Mechanics.

In the doctoral dissertation of Sekhar Tatikonda, and in subsequent work, the Shannon Capacity of a Markov Channel with Feedback under certain information structure hypotheses can be characterized as the value function of a partially-observed stochastic control problem. This work in many ways exhibits the power of the dynamic programming style of thinking. I believe that this style of thinking, in the guise of a backward induction procedure, will be helpful in understanding the transmission capabilities of wireless networks. More generally, dynamic programming, when time is replaced by a partially ordered set, is a fruitful area of research.

Can one give an “information flow” view of path estimation of a diffusion process given noisy observations? An estimator, abstractly, can be thought of as a map from the space of observations to a conditional distribution of the estimand given the observations. What is the nature of the flow of information from the observations to the estimator? Is it conservative or dissipative? In joint work with Nigel Newton, I have given a quite complete view of this subject. It turns out that the path estimator can be constructed as a backward likelihood filter which estimates the initial state; combined with a fully observed stochastic controller moving in forward time starting from this estimated state, it solves the problem in the sense that the resulting path-space measure is the requisite conditional distribution. The backward filter dissipates historical information at an optimal rate, namely that information which is not required to estimate the initial state, and the forward control problem fully recovers this information. The optimal path estimator is conservative. This result establishes the relation between stochastic control and optimal filtering. Somewhat surprisingly, the optimal filter in a stationary situation satisfies a second law of thermodynamics.

What of the future? Undoubtedly we have to understand control under uncertainty in a distributed environment. Understanding the interaction between communication and control in a fundamental way will be the key to developing any such theory. I believe that an interconnection view where sensors, actuators, controllers, encoders, channels and decoders, each viewed abstractly as stochastic kernels, are interconnected to realize desirable joint distributions, will be the “correct” abstract view for a theory of distributed control. Except in the field of distributed algorithms, not much fundamental seems to be known here.

It is customary to end acceptance discourses on an autobiographical note and I will not depart from this tradition. Firstly, my early education at Presidency College, Calcutta, where I had the privilege of interacting with some of the most brilliant fellow students, decisively formed my intellectual make-up. Whatever culture I acquired, I acquired it at that time. At Imperial College, while I was doing my doctoral work, I was greatly influenced by John Florentin (a pioneer in Stochastic Control), Martin Clark and several other fellow students. I have also been fortunate in my association with two great institutions—MIT and the Scuola Normale, Pisa. I cannot overstate everything that I have learnt from my doctoral students, too many to mention by name—Allen gewidmet von denen ich lernte [Dedicated to all from whom I have learnt (taken from the dedication of Günter Grass in “Beim Häuten der Zwiebel” (“Peeling the Onion”))]. I find that they have extraordinary courage in shaping some half-baked idea into a worthwhile contribution. In recent years, my collaborative work with Vivek Borkar and Nigel Newton has been very important for me. I have great intellectual affinity with members of Club 34, the most exclusive club of its kind, and I thank the members of this club for their friendship. There are many others whose intellectual views I share, but at the cost of exclusion let me single out Jan Willems and Pravin Varaiya. I admire their passion for intellectual discourse. Last, but not least, I thank my wife, Adriana, for her love and support. I am sorry she could not be here today. My acceptance speech is dedicated to her.

Tamer Başar

For fundamental developments in and applications of dynamic games, multiple-person decision making, large scale systems analysis, and robust control

Tamer Başar is with the University of Illinois at Urbana-Champaign (UIUC), where he holds the positions of Fredric G. and Elizabeth H. Nearing Endowed Professor of Electrical and Computer Engineering, Center for Advanced Study Professor, and Research Professor at the Coordinated Science Laboratory. He was born in Istanbul, Turkey, in 1946, and received the B.S.E.E. degree from Robert College, Istanbul, in 1969, and M.S., M.Phil., and Ph.D. degrees.

Text of Acceptance Speech: 

June 15, 2006. Minneapolis, MN

I am honored to receive this most prestigious award and recognition by the American Automatic Control Council, named after Richard Ernest Bellman (the creator of "dynamic programming")---who has shaped our field and influenced through his creative ideas and voluminous multifaceted work the research of tens of thousands, not only in control, but also in several other fields and disciplines. In my own research, which has encompassed control, games, and decisions, I have naturally also been influenced by the work of Bellman (on dynamic programming), as well as of Rufus Isaacs (the creator of differential games) whose tenure at RAND Corporation (Santa Monica, California) partially overlapped with that of Bellman in the 1950s. I want to use the few minutes I have here to say a few words on those early days of control and game theory research (just a brief historical perspective), and Bellman's role in that development.

In a Bode Lecture I delivered (at the IEEE Conference on Decision and Control in the Bahamas) in December 2004, I had described how modern control theory was influenced by the research conducted and initiatives taken at the RAND Corporation in the early 1950s. RAND had attracted and housed some of the great minds of the time, among whom was also Richard Bellman, in addition to names like Leonard D. Berkovitz, David Blackwell, George Dantzig, Wendell Fleming, M.R. Hestenes, Rufus Isaacs, Samuel Karlin, John Nash, J.P. LaSalle, and Lloyd Shapley (to list just a few). These individuals, and several others, laid the foundations of decision and game theory, which subsequently fueled the drive for control research. In this unique and highly conducive environment, Bellman started working on multi-stage decision processes, as early as 1949, but more fully after 1952---and it is perhaps a lesser known historical fact that one of the earlier topics Bellman worked on at RAND was game theory (both zero- and nonzero-sum games), on which he co-authored research reports with Blackwell and LaSalle. In an informative and entertaining autobiography he wrote 32 years later ("Eye of the Hurricane", World Scientific, Singapore), completed in 1984 shortly before his untimely death (March 19), Bellman describes eloquently the research environment at RAND and the reason for coining the term "dynamic programming".

At the time, the funding for RAND came primarily from the Air Force, and hence it was indirectly under the Secretary of Defense, who was in the early 1950s someone by the name Wilson. According to Bellman, "Wilson had a pathological fear and hatred of the word 'research' and also of anything 'mathematical' ". Hence, it was quite a challenge for Bellman to explain what he was doing and interested in doing in the future (which was research on multi-stage decision processes) in terms which would not offend the sponsor. "Programming" was an OK word; after all Linear Programming had passed the test. He wanted "to get across the idea that what he was doing was dynamic, multi-stage, and time-varying", and therefore picked the term "Dynamic Programming". He thought that "it was a term not even a Congressman could object to". This being the official reason given for his pick of the term, some say (Harold Kushner--recipient of this award two years ago--being one of them, based on a personal conversation with Bellman) that he wanted to upstage Dantzig's Linear Programming by substituting "dynamic" for "linear". Whatever the reasons were, the terminology (and of course also the concept and the technique) was something to stay with us for the next fifty plus years, and undoubtedly for many more decades into the future, as also evidenced by the number of papers at this conference using the conceptual framework of dynamic programming.

Applying dynamic programming to different classes of problems, and arriving at "functional equations of dynamic programming", subsequently led Bellman, as a unifying principle, to the "Principle of Optimality", which Isaacs, also at RAND, and at about the same time, had called "tenet of transition" in the broader context of differential games, capturing strategic dynamic decision making in adversarial environments.

Bellman also recognized early on that a solution to a multi-stage decision problem is not merely a set of functions of time or a set of numbers, but a rule telling the decision maker what to do, that is, a "policy". This led in his thinking, when he started looking into control problems, to the concept of "feedback control", and along with it to the notions of sensitivity and robustness. These developments, along with the more refined notions of information structures (who knows what and when), have been key ingredients in my research for the past thirty plus years.

It is interesting that at RAND at the time (that is in the 1950s), in spite of the anti-research and anti-mathematical attitude that existed in the higher echelons of the government, and the Department of Defense in particular, fundamental research did prosper, perhaps somewhat camouflaged initially, which in turn drove the creation of modern control theory, fueled also by the post-Sputnik anxiety. There is perhaps a message that should be taken from that: "Don't give up doing what you think and believe is right and important, but also be flexible and accommodating in how you promote it".

Before closing, I want to thank all who have been involved in the nomination process and the selection process of the Bellman Control Heritage Award this year. I want to use this occasion also to acknowledge several educational and research institutions which have impacted my life and career.

First, I want to acknowledge the contributions of the educational institutions in my native country, Turkey, in the early years of my upbringing, and the comfortable research environment provided by the Marmara Research Institute I was affiliated with in the mid to late 1970s. Second, I want to acknowledge the love for research and the drive for pushing the frontiers of knowledge I was infected with during my years at Yale and Harvard in the early 1970s. And last, but foremost, I want to acknowledge the perfect academic environment I found and have still been enjoying at the University of Illinois at Urbana-Champaign---wonderful colleagues, stimulating teaching environment at the Department of Electrical and Computer Engineering, and exemplary conducive research environment at the Coordinated Science Laboratory with its top quality graduate students. I also want to recognize all students, post-docs, and colleagues I have had the privilege of having research interactions and collaborations with over the years. I thank them all for the memorable journeys in exploring the frontiers in control science and technology.

Thank you very much.