## Irena Lasiecka

The Bellman Award is given for distinguished career contributions to the theory or application of automatic control. It is the highest recognition of professional achievement for US control systems engineers and scientists. The recipient must have spent a significant part of his/her career in the USA. The awardee is strongly encouraged to give a plenary presentation at the ACC Awards Luncheon.

Year: 2019

Citation: For contributions to boundary control of distributed parameter systems

Text of Acceptance Speech:

Dear President Braatz, colleagues, students and friends.

I am very grateful and indeed humbled by being honored to receive the Richard E. Bellman Control Heritage Award for 2019 and to join the distinguished list of prior recipients. I wish to express my sincerest thanks to those who nominated me and supported my nomination and to the awards committee. I am deeply moved by the honor I receive today.

More as a rule than an exception, such an honor is not a credit to a single individual but rather the result of collective work and many collaborations over the years. This is particularly true in areas which are by nature interdisciplinary. And control theory, as such, is one of these. It offers an excellent example of synergy where purely theoretical questions, mathematical in nature, are prompted and stimulated by technological advances and engineering design.

I was attracted to mathematical control theory from my early days at the University of Warsaw, where I was privileged to join a distinct and (at that time) experimental program called Studies in Applied Mathematics. This was an interdisciplinary initiative run in collaboration by a few home departments. After graduating with a Master's degree, I was fortunate to receive a doctoral fellowship which allowed me to complete my PhD in Applied Mathematics-Control Theory within three years, with a thesis on a problem of non-smooth optimization, which extended the Dubovitskii-Milyutin theory and had applications to control systems with delays.

I am extremely grateful to my mentors of that time: Professors A. Wierzbicki and A. Manitius from Control Theory [the latter now chair at George Mason University], and the late Professor S. Rolewicz and Professor K. Malanowski, both from the Polish Academy of Sciences. They, along with other colleagues, gave me an opportunity to embrace a large spectrum of the field of control theory, including functional analysis, abstract optimization, and differential equations.

My further education took a critical turn at UCLA in Los Angeles, which I joined in 1978 at the invitation of the late Professor A.V. Balakrishnan, the 2001 recipient of the Bellman Award. Bal, to all of us. Here, under his mentorship, I was offered the challenge of getting involved in the mathematical area of boundary control theory for Distributed Parameter Systems, still in its infancy at that time even from the viewpoint of Partial Differential Equations, with many basic mathematical problems still open. That was about the time when Richard Bellman's book on Dynamic Programming appeared in 1977, rooted in Bellman's equation and the Optimality Principle. I have always looked at Bellman as a problem-solving mathematician, and the mathematical theory of boundary control of DPS is in line with this philosophy.
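Bellman's equation, as a reminder of the principle invoked here, ties the Optimality Principle to a PDE for the value function; the notation below is the standard one, not taken from the speech:

```latex
% Dynamics \dot{x} = f(x,u), running cost \ell(x,u), value function V(x,t):
% the Optimality Principle yields the Hamilton-Jacobi-Bellman equation
\[
  -\,\frac{\partial V}{\partial t}(x,t)
  \;=\; \min_{u \in U}\Bigl[\, \ell(x,u) + \nabla_x V(x,t)\cdot f(x,u) \,\Bigr],
  \qquad V(x,T) = g(x).
\]
```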

Controlling or observing an evolution equation from a restricted set [such as the boundary of a multi-dimensional bounded domain where the controlled system evolves] is both a mathematical challenge and a technological necessity within the realm of practical and physically implementable control theory. Most often, the interior of the domain is not accessible to external manipulation. A first goal at the time within the DPS control community was to construct an appropriate control theory, inspired also by the late R. Kalman, the 1997 recipient of the Bellman Award. The main initial contributors were J.L. Lions, A. Bensoussan and their influential school in Paris, and A.V. Balakrishnan and his associates. But DPS come in a large variety, which requires that each distinct class (parabolic, hyperbolic, etc.) be studied on its own, with properties and methods pertinent to it which, however, fail for other classes. The systematic study of boundary control, which leads to distributional calculus for various distinct classes of physically significant DPS, became the first long-range objective of my research. Both the results and the methods are dynamics dependent. Finite or infinite speed of propagation becomes an essential feature in controllability theory. For instance, the wave equation is boundary exactly controllable in sufficiently large time, while the heat equation is only null-controllable, yet in arbitrarily short time. Existence, uniqueness and robustness of solutions to nonlinear dynamics were just the first questions asked, but still open within the existing PDE culture.
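The contrast drawn above between the two model classes can be made concrete; the following is a standard sketch in assumed notation, not taken from the speech:

```latex
% Boundary-controlled wave and heat equations on a bounded domain \Omega,
% with control u acting through the boundary condition:
\[
  w_{tt} = \Delta w \ \ \text{in } \Omega\times(0,T), \qquad
  w = u \ \ \text{on } \partial\Omega\times(0,T);
\]
\[
  w_{t} = \Delta w \ \ \text{in } \Omega\times(0,T), \qquad
  w = u \ \ \text{on } \partial\Omega\times(0,T).
\]
% The wave equation (finite speed of propagation) is exactly controllable
% once T exceeds a geometry-dependent threshold; the heat equation
% (infinite speed of propagation, smoothing) is null-controllable for
% every T > 0, but never exactly controllable.
```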

Topics investigated over the years included: optimal control, Riccati and Hamilton-Jacobi-Bellman theory and their numerical implementation, and appropriate controllability and stabilization notions, all in the framework of boundary control of partially observed systems. This research effort, which continues to this very day, was conducted with collaborators and PhD students. It started with my association with A.V. Balakrishnan at UCLA, J.L. Lions at the College de France and R. Kalman during my 7 years at the University of Florida. And it continued during my subsequent 26 years at the University of Virginia, the home of McShane, and now at the University of Memphis, in both cases with talented PhD students, some of whom now occupy distinguished positions in US academia.

Once the control theory of single distinct DPS classes became mature, engineering applications motivated the need to move on toward the study of more complex DPS consisting of interactive structures where different types of dynamics coupled at an interface define a given control system. Propagation of control properties through the interface then plays a main role.

Thus, in its second phase, my research in DPS evolved toward these coupled interactive systems of several PDEs. Applications include large flexible structures, structural acoustic interaction, fluid-structure interaction, attenuation of turbulence in fluid dynamics [Navier-Stokes] and flutter suppression in nonlinear aero-elasticity. In the latter area, my collaboration with Earl Dowell [Duke Univ.] was most enlightening, and is further proof of the interdisciplinary nature of the field. These problems, while deeply rooted in engineering control technology, were also benchmark models at the forefront of developing a PDE-based mathematical control theory, which accounts for the infinite-dimensional nature of continuum mechanics and fluid dynamics.

In closing, I would like to acknowledge with gratitude my personal and professional interaction over the years with people such as the late David Russell [VPI], Walter Littmann [U of Minnesota], Giuseppe Da Prato [Scuola Normale, Pisa], Michel Delfour [Univ. of Montreal] and Sanjoy Mitter [MIT], the latter the 2007 recipient of the Bellman award. Their pioneering works paved the way to further developments along a road-map which I am proud to be a part of.

Special thanks to my long-time collaborator and husband Roberto Triggiani, to the late Igor Chueshov [both co-authors of major research monographs, two with Roberto in Cambridge University Press and one with Igor in Monograph Series of Springer], as well as to my former students, now collaborators and colleagues.

Many thanks also to funding agencies such as NSF, AFOSR, ARO and NASA for many years of generous support.

Irena Lasiecka,

Philadelphia, July 11, 2019.

Year: 2018

Citation: For seminal and pioneering contributions to the theory and practice of mechatronic systems control

**Masayoshi Tomizuka** received his B.S. and M.S. from Keio University in 1968 and 1970, respectively. He received his Ph.D. from MIT in 1974, after which he joined the ME Department at UC Berkeley. There, he served as Vice Chair of Instruction from Dec. 1989 to Dec. 1991, and as Vice Chair of Graduate Studies from Jul. 1995 to Dec. 1996.

Year: 2017

Citation: For innovative contributions to control theory, stochastic systems, and networks, and academic leadership in systems and control

**John S. Baras** holds a permanent joint appointment as professor in the Department of Electrical and Computer Engineering and the Institute for Systems Research. He was the founding director of the ISR, which is one of the first six National Science Foundation Engineering Research Centers.

Text of Acceptance Speech:

Dear President Masada, colleagues, students, ladies and gentlemen.

I am deeply moved by this award and honor, and truly humbled to join a group of such stellar members of our extended systems and control community, several of whom have been my mentors, teachers and role models throughout my career.

I am grateful to those who nominated me and supported my nomination and to the selection committee for their decision to honor my work and accomplishments.

I was fortunate throughout my entire life to receive the benefits of an exceptional education: from a special and highly selective elementary school and high school back in Greece, to the National Technical University of Athens for my undergraduate studies, and finally to Harvard University for my graduate studies. My sincere and deep appreciation for such an education goes to my parents, who instilled in me a rigorous work ethic and the ambition to excel; to my teachers in Greece for a sound education and training in basic and fundamental science and engineering; and to my teachers and mentors at Harvard and MIT (Roger Brockett, Sanjoy Mitter and the late Jan Willems) and the incredibly stimulating environment in Cambridge in the early 70s.

Many thanks are also due to my students and colleagues at the University of Maryland, in the US and around the world, and in particular in Sweden and Germany, for their collaboration, constructive criticism and influence through the years. Several are here and I would like to sincerely thank you all very much.

I am grateful to the agencies that supported my research: NSF, ARO, ARL, ONR, NRL, AFOSR, NIST, DARPA, NASA. I am particularly grateful to NSF for the support that helped us establish the Institute for Systems Research (ISR) at the University of Maryland in 1985, and to NASA for the support that helped us establish the Maryland Center for Hybrid Networks (HyNet) in 1992.

I would also like to thank many industry leaders and engineers for their advice, support, and collaboration during the establishment and development of both the ISR and HyNet to the renowned centers of excellence they are today.

Most importantly I am grateful to my wife Mary, my partner, advisor and supporter, for her love and selfless support and sacrifices during my entire career.

When I came to the US in 1970 I was debating whether to pursue a career in Mathematics, Physics or Engineering. The Harvard-MIT exceptional environment allowed me freedom of choice. Thanks to Roger Brockett I was convinced that systems and control, our field, would be the best choice as I could pursue all of the above. It has indeed proven to be a most exciting and satisfying choice. But there were important adjustments that I had to make and lessons I learned.

I did my PhD thesis work on infinite dimensional realization theory, and worked extensively with complex variable methods, Hardy function algebras, the famous Carleson corona theorem and other rather esoteric mathematics. From my early work at the Naval Research Laboratory in Electronic Warfare (the "cross-eye" system) and in urban traffic control (adaptive control of queues) I learned, the hard way, the difficulty and critical importance of building appropriate models and turning initially amorphous problems into models amenable to systems and control thinking and methods. I learned the importance of judiciously blending data-based and model-based techniques.

In the seventies, I took a successful excursion into detection, estimation and filtering with quantum mechanical models, inspired by deep space laser communication problems, where my mathematical physics training at Harvard came in handy. I then worked on nonlinear filtering, trying to understand how physicists turned nonlinear inference problems into linear ones, and investigating why we could not do the same for nonlinear filtering and partially observed stochastic control. This led me to unnormalized conditional densities, the Duncan-Mortensen-Zakai equation and information states, and from there naturally to constructing nonlinear observers as asymptotic limits of nonlinear filtering problems, and to the complete solution of the nonlinear robust output feedback control problem (the nonlinear H-infinity problem) via two coupled Hamilton-Jacobi-Bellman equations. We even investigated the development of special chips to implement real-time solutions, a topic we are revisiting currently.
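The Duncan-Mortensen-Zakai equation mentioned above propagates an unnormalized conditional density; one standard form (here with unit observation noise covariance, notation assumed rather than taken from the speech) is:

```latex
% State x_t with generator L, observations dy_t = h(x_t)\,dt + dv_t.
% The unnormalized conditional density q(x,t) satisfies the linear SPDE
\[
  dq(x,t) \;=\; L^{*} q(x,t)\, dt \;+\; q(x,t)\, h(x)^{\top}\, dy_t ,
\]
% whose linearity in q is what turns the nonlinear inference problem
% into a linear one, in the sense alluded to above.
```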

With the development and progress of the ISR I worked on many problems including: speech and image compression breaking the Shannon separation of source and channel coding, manufacturing processes, network management, communication network protocols, smart materials (piezoelectric, shape memory alloys), mobile wireless network design, network security and trust, and more recently human-machine perception and cognition, networked control systems, networked cyber-physical systems, combining metric temporal logic and reachability analysis for safety, collaborative decision management in autonomous vehicles and teams of humans and robots, new analytics for learning and for the design of deep learning networks mapping abstractions of the brain cortex, quantum control and computing.

Why am I telling you about all these diverse topics? Not to attract your admiration, but because at the heart of all my work are fundamental principles and methods from systems and control, often appropriately extended and modified. Even in my highest-impact (economic and social) work in conceiving, demonstrating and commercializing Internet over satellite services (with billions of sales world-wide; remember me when you use the Internet in planes over oceans), we modified the flow control algorithm (the TCP) and the physical path, to avoid having TCP interpret the satellite physical path delay as congestion. That is, we used systems and control principles.

Our science and engineering, systems and control, has some unparalleled unifying power and efficiency. That is, if we are willing to build the new models required by the new applications (especially models requiring a combination of multiple physics and cyber logic), and if we are willing to learn and apply the incredible new capabilities and technologies that are being developed in information technology and materials. As is apparent especially in this conference (ACC), and in the CDC conference, by any measure our field is exceptionally alive and well, and continues to surprise many other disciplines with its contributions and accomplishments, which now extend even into biology, medicine and healthcare. So for the many young people here: please continue the excitement, continue getting involved in challenging and high-impact problems, and continue the long tradition and record of accomplishments we have established for so many years. And most importantly, continue seeking the common ground and unification of our methods and models.

Let me close with what I consider some major challenges and promising broad areas for the next 10 years or so:

1) Considering networked control systems, we need to understand what we mean by a "network" and the various abstractions and system aspects involved. Clearly there is more than one dynamic graph involved. This needs new foundations for control, communication, information, and computing.

2) Systems and control scientists and engineers are the best qualified to develop further the modern field of Model-Based Systems Engineering (MBSE): the design, manufacturing/implementation and operation of complex systems with heterogeneous physical, cyber components and even including humans.

3) The need for analog computing is back, for example in real-time and progressive learning and in CPS. Some of the very early successes of control were implemented in analog electromechanical systems due to the need for real-time behavior. Yet we do not have a synthesis theory and methodology for such systems, due to the heterogeneous physics that may be involved. Nothing like what we have for VLSI.

Thank you all very much! This is indeed a very special day for me!

Year: 2016

Citation: For pioneering contributions to deterministic and stochastic optimal control theory and their applications to aerospace engineering, including spacecraft, aircraft, and turbulent flows

**Jason L. Speyer** received a B.S. in aeronautics and astronautics from MIT, Cambridge, MA, and a Ph.D. in applied mathematics from Harvard University, Cambridge, MA. He is the Ronald and Valerie Sugar Distinguished Professor in Engineering in the Mechanical and Aerospace Engineering Department and the Electrical Engineering Department at UCLA. He was the Harry H. Power Professor in Engineering Mechanics at the University of Texas, Austin, from 1976 to 1990.

Text of Acceptance Speech:

I am extremely grateful and humbled by being honored to receive the Richard E. Bellman Control Heritage Award for 2016. I thank those that recommended me and the awards committee for supporting that nomination. I also thank my colleagues, students, family and especially my wife for the support I have received over these many years.

For me this award occurs at an auspicious time and place. Boston is the place of my birth and my home. It was sixty years ago that I graduated from Malden High School and entered into a world I could never have anticipated; a world where I would be nurtured for the next twenty years by many people, some of whom have been recipients of this esteemed award.

I enrolled in the Department of Aeronautics at MIT, which after Sputnik became the Department of Aeronautics and Astronautics. In my junior year I entered the space age. More consequential for me was that the department head was Doc (**Charles Stark**) **Draper**[1], the second volume of whose three-volume series on Instrument Engineering (1952) was one of the first books on what we know as Classical Control, covering such topics as the **Evans** root locus, **Bode** plots, the Nyquist criterion, and **Nichols** charts. Doc **Draper** instituted an undergraduate course in classical control that I took in my junior year. This inspired me to take a graduate course and write my undergraduate thesis in controls.

After graduation in 1960 I left Boston to work for Boeing in Seattle. There, I worked with my lead engineer Raymond Morth, who introduced me to the new world of control theory using state space that was just emerging in the early 1960s. I learned of the dynamic programming of **Richard Bellman** for global sufficiency of an optimal trajectory, and of the Pontryagin Maximum Principle, inspired by the inability of dynamic programming to solve certain classes of optimization problems. The Bushaw problem of determining the minimum time to the origin for a double integrator was just such a problem, since the optimal return function in dynamic programming is *not* differentiable at the switching curve and the Bellman theory did not apply. Interestingly, for my bachelor's thesis I applied the results of the Bushaw problem to the minimum-time problem of bringing the yaw and yaw rate of an aircraft to the origin. However, at that time I had no idea about the ramifications of the Bushaw problem for optimization theory. I also learned of the work of **Rudolf Kalman** in estimation, the work of **Arthur Bryson** and Henry Kelley on the development of numerical methods for determining optimal constrained trajectories, and of J. Halcombe (Hal) Laning and Richard Battin on the determination of orbits for moon rendezvous.
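The Bushaw problem recalled above has a classical closed-form answer; for reference, the standard result (not part of the speech) is:

```latex
% Minimum-time control of the double integrator \ddot{x} = u, |u| \le 1,
% to the origin (x,\dot{x}) = (0,0):
\[
  u^{*}(x,\dot{x}) \;=\; -\,\operatorname{sign}\!\Bigl( x + \tfrac{1}{2}\,\dot{x}\,\lvert \dot{x} \rvert \Bigr),
\]
% with u^{*} = -\operatorname{sign}(\dot{x}) on the switching curve
% x = -\tfrac{1}{2}\,\dot{x}\,\lvert\dot{x}\rvert itself. The minimum-time
% (optimal return) function fails to be differentiable exactly on this
% curve, which is why the smooth dynamic programming argument breaks down.
```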

After an incredible year at Boeing I returned to Boston to work at the Analytical Research Department at Raytheon, where **Art Bryson** was a consultant. There, I worked with a student of Bryson, Walter Denham. We were contracted by MIT’s Instrumentation Laboratory, monitored by Richard Battin, to enhance the Apollo autonomous navigation system over the trans-Lunar orbit. We developed a scheme for determining the optimal angle-measurement sequence between the best stars in a catalogue and near and far horizons of the Earth or the Moon using a sextant. This angle-measurement sequence minimized some linear function of the terminal value of the error covariance of position and velocity near the Earth or Moon. Our optimization scheme, which required a matrix dynamic constraint, seemed to be a first. This scheme, used in the Apollo autonomous navigation system, was tested on Apollo 8, and used on every mission thereon. My next task at Raytheon was working on neighboring optimal guidance scheme. This work was with **Art Bryson** and **John Breakwell**. I remember travelling to Lockheed’s Palo Alto Research Laboratory and meeting with John, the beginning of a long and delightful collegial relationship.

After my first two years at Raytheon I somehow convinced **Art Bryson** to take me on as a graduate student at Harvard, supported by the Raytheon Fellowship program. To appreciate the intellectual level I had to contend with: on my doctoral preliminary exam committee, three of the four examiners were recipients of the Richard E. Bellman Control Heritage Award, **Art Bryson**, **Larry (Yu-Chi) Ho**, and **Bob (Kumpati) Narendra**, all of whom have been my lifetime colleagues. I was also fortunate to take a course taught by **Rudy Kalman**. Surprisingly, he taught many of the control areas he had pioneered, except filtering for Gauss-Markov systems (the Kalman filter): the Aizerman conjecture, the Popov criterion and Lyapunov functions, duality in linear systems, optimality for linear-quadratic systems, etc. After finishing my PhD thesis on optimal control problems with state variable inequality constraints, I returned to Raytheon. Fortunately, **Art Bryson** made me aware of some interest at Raytheon in using modern control theory to develop guidance laws for a new missile. At Raytheon's Missile Division I worked with Bill O'Halloran on the homing missile guidance system, where Bill worked on the development of the Kalman filter and I worked on the development of the linear-quadratic *closed-form* guidance gains, which had to account for the nonminimum-phase autopilot. This homing missile, the Patriot missile system, appears to be the first fielded system using modern control theory.

I left Boston for New York to work at Analytical Mechanics Associates (AMA), in particular with Hank Kelley. Although I had a lasting friendship with Hank, I only lasted seven months in New York before returning to the AMA office in Cambridge. Unfortunately, the Cambridge NASA Center closed, and I took a position under Dick Battin at the Instrumentation (later the Charles Stark Draper) Laboratory at MIT. There, I worked on the necessary and sufficient conditions for optimality of singular control problems, the linear-exponential-Gaussian control problem, optimal control problems with state variable inequality constraints, optimal control problems with cost criteria and dynamic functions with kinks, and periodic optimal control problems. On many of these problems I collaborated with David Jacobson, whom I first met in the open forum of my PhD final exam. This remarkable collaboration culminated in our book on optimal control theory that appeared in 2010. Also, during my tenure at Draper, I took a post-doctoral year's leave at the Weizmann Institute in Israel. Here, I learned that I could work very happily by myself. A few years after returning to Draper, I started what is now a forty-year career in academia, and I left Boston.

As I look back, I feel so fortunate that I had such great mentoring over my early years and by so many who have won this award. My success over the last forty years has been due to my many students who have worked with me to mold numerous new ideas together. Today, I find the future as bright as anytime in my past. I have embarked in such new directions as estimation and control of linear systems with additive noises described by heavy tailed Cauchy probability density functions with my colleague Moshe Idan at the Technion and deep space navigation using pulsars as beacons with JPL.

To conclude, I am grateful to so many of my teachers, colleagues and students, who have nurtured, inspired, and educated me. Without them and my loving wife and family, I would not be here today. Thank you all.

[1] Boldface names are recipients of the Richard E. Bellman Control Heritage Award.

Year: 2015

Citation: For a career of outstanding educational and professional leadership in automatic control, mentoring a large number of practicing professionals, and research contributions in the process industries, especially semiconductor manufacturing

Text of Acceptance Speech:

When I look back upon my career in the field of control, I think it may have started in 1957, when Sputnik was launched by the Russians. I was in the seventh grade at that time. The reaction of our local school board to losing the space race was to have a group of students take algebra one year earlier, in the eighth grade. During high school, I participated in my class science fairs and won at the state level. When I was a freshman at the University of Kansas in 1967, I was given the opportunity to do independent research in the area of nucleate boiling. I was also exposed to computer programming, which was a fairly new topic at that time in undergraduate engineering. I became interested in numerical analysis and selected Princeton University for doctoral study, because Professor Leon Lapidus was a leading authority on that topic.

I discovered his interest in numerical analysis was driven by solving control problems (specifically two point boundary value problems). The optimal control project I selected was on singular bang-bang and minimum time control. I used discrete dynamic programming with penalty functions (influenced by Bellman and Kalman) as a way to solve this particular class of control problems. In 1971 I accepted a faculty position at the University of Texas.
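The discrete dynamic programming approach described above can be sketched as a small value iteration on a gridded double integrator. The grid, dynamics, target, and cost below are illustrative assumptions for a minimum-time problem, not the original thesis code (which also used penalty functions to handle constraints):

```python
import itertools

DT = 0.25                                   # time step
GRID = [i * 0.25 for i in range(-8, 9)]     # states in [-2, 2], step 0.25
CONTROLS = [-1.0, 0.0, 1.0]                 # bang-bang-style control set

def snap(z):
    """Project a value onto the nearest grid point (ties go to the lower one)."""
    return min(GRID, key=lambda g: abs(g - z))

def step(x, v, u):
    """One Euler step of the double integrator x' = v, v' = u, snapped to the grid."""
    return snap(x + DT * v), snap(v + DT * u)

def value_iteration(max_sweeps=200):
    """Bellman value iteration for minimum time-to-go to the origin."""
    INF = float("inf")
    V = {(x, v): (0.0 if (x, v) == (0.0, 0.0) else INF)
         for x, v in itertools.product(GRID, GRID)}
    for _ in range(max_sweeps):
        changed = False
        for (x, v), cost in V.items():
            if (x, v) == (0.0, 0.0):
                continue                     # target state: time-to-go is 0
            best = min(DT + V[step(x, v, u)] for u in CONTROLS)
            if best < cost:
                V[(x, v)] = best
                changed = True
        if not changed:
            break                            # Bellman backup reached a fixed point
    return V

V = value_iteration()
print(V[(0.0, 0.0)])    # 0.0 at the target
print(V[(0.0, 0.25)])   # 0.25: one braking step reaches the origin
```

States the iteration cannot steer to the exact target remain at infinity; on real problems of this era the penalty-function variant added a terminal penalty instead of a hard target state.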

That era was the heyday of optimal control in the aerospace program. Many of us in chemical engineering wanted to apply these ideas to chemical plants; however, there were some obstacles. Economic justification was strictly required for any commercial application, versus government funding for space vehicles. In addition, proprietary considerations prevented technology transfer from one plant to another. It wasn't until the late 1970s, when Honeywell introduced the distributed digital control system, that computer process control really began to become more popular (and economical) in industry. In 1972, I purchased a Data General minicomputer to be used with a distillation column for process control. That computer was very antiquated by today's standards; in fact, we had to use paper tape for inputting software instructions to the machine.

Given that there was a lack of industrial receptivity to advanced control research and NSF funding was very limited, I looked around for other types of problems where my skills might be valuable. In 1974 the energy crisis was rearing its head due to the Arab oil embargo. Funding agencies like NSF and the Office of Coal Research in the U.S. were quite interested in how we could use the large domestic resource of coal to meet the shortage of oil and gas. I came across some literature about a technology called underground coal gasification (UCG), where one would gasify the coal resource in situ as a way of avoiding the mining step. I recall reading it was a very promising technology but they didn't know how to control it. That sparked my interest as a possible topic where I could apply my skill set. But I first had to learn about the long history of coal gasification and coal utilization in general.

There were many issues that had to be addressed before developing control methodologies for UCG. There was a need to develop three-dimensional modeling tools that would predict the recovery of the coal as well as the gas composition that you make (similar to a chemical reactor). Thus 80% of the research work was on modeling as opposed to control. It was also a highly multidisciplinary project involving rock mechanics and environmental considerations. I worked in this area for about 10 years. Later in the mid-1980s, the U.S. no longer had an energy crisis, so I started looking at some other possible areas for application of modeling and control.

In 1984 a new senior faculty member joined my department from Texas Instruments. He was very familiar with semiconductor manufacturing and the lack of process control, and he was able to teach me a lot about that industry. Fortunately I did not have to learn a new field on my own since I was Department Chair with limited discretionary time. The same issues were present as for UCG: models were needed in order to develop control strategies. I have continued working in that area with over 20 graduate students spread out over the past 25 years and process control is now a mature technology in semiconductor manufacturing (see my plenary talk at this year’s ACC).

During the 1980s, I became interested in textbook writing and particularly the need to develop a new textbook in process control. I began collaborating with two colleagues at UC Santa Barbara (Dale Seborg and Duncan Mellichamp) and thought that UCSB would be a great place to spend some time in the summer writing and giving short courses on the topic. The course notes were eventually developed into a textbook eight years later. We now are working on the fourth edition of the book and it is the leading textbook for process control in the world. It has been a very rewarding endeavor to work with other educators, and I would recommend that anyone writing a textbook collaborate with other co-authors as a way of improving the product. In 2010, we added a fourth co-author (Frank Doyle) to cover biosystems control; in fact, he is receiving the practice award from AACC today.

In the early 1990s at UT Austin, Jim Rawlings and I concluded that we wanted to work on control problems that would impact industrial practice rather than just writing more technical papers that maybe only a few people would read. So we formed the Texas Modeling and Control Consortium (TMCC) which had 16 member companies. Over twenty plus years the consortium has morphed into one involving multiple universities investigating process control, monitoring, optimization, and modeling. When Jim left the University of Texas and went to Wisconsin, we decided to keep the consortium going, so it became TWMCC (Texas Wisconsin Modeling and Control Consortium). Joe Qin replaced Jim on the faculty at UT but then 10 years later he left for USC. So our consortium became TWCCC (Texas Wisconsin California Control Consortium). I have learned a lot from both Joe and Jim over the years and have been able to mentor them in their professional development as faculty members. I am now mentoring a new UT control researcher (Michael Baldea) as we continue to close the gap between theory and practice.

One other thing I should mention is my involvement with the American Control Conference. I first gave a paper in 1972 at what was known as the Joint Automatic Control Conference (JACC) and have been coming to this meeting ever since. In the 1970s each meeting was run entirely by a different society each year. To improve the business model and instill more interdisciplinarity among the five participating societies, in 1982 we started the American Control Conference with leadership from Mike Rabins, John Zaborszky, and also Bill Levine, who is here today. I was Treasurer of the 1982 meeting, which was held in Arlington, VA. That began an extremely successful series of meetings that remains one of the best conference values today. It is very beneficial to attend to see control research carried out in the other societies, not just your own.

During my 40+ year career, I have had a lot of help from colleagues in academia and industry and collaborated with over 100 bright graduate students. I also should thank my wife Donna, who has put up with me over these many years since we first started going to the computer center at the University of Kansas for dates 50 years ago.

My advice to younger researchers is to think 10 years out as to what the new areas might be and start learning about them. Fortunately, today’s control technology is more ubiquitous than ever and the future is bright, although the path forward may not be clear. I still remember a discussion I had with a fellow graduate student before leaving Princeton in 1971 as we embarked on academic careers. His view was that after all the great things achieved by luminaries like Pontryagin, Bellman, and Kalman, all that's really left are the crumbs… So I guess that means that I must have had a pretty crummy career.

Year:

2014

Citation:

For contributions to the foundations of deterministic and stochastic optimization-based methods in systems and control

**Dimitri P. Bertsekas'** undergraduate studies were in engineering at the National Technical University of Athens, Greece. He obtained his MS in electrical engineering at the George Washington University, Washington, DC, in 1969, and his Ph.D. in system science in 1971 at the Massachusetts Institute of Technology.

Text of Acceptance Speech:

I feel honored and grateful for this award. After having spent so much time on dynamic programming and written several books about its various facets, receiving an award named after Richard Bellman has a special meaning for me.

It is common in award acceptance speeches to thank one's institutions, mentors, and collaborators, and I have many to thank. I was fortunate to be surrounded by first class students and colleagues, at high quality institutions, which gave me space and freedom to work in any direction I wished to go. As Lucille Ball has told us, "Ability is of little account without opportunity."

Also common when receiving an award is to chart one's intellectual roots and journey, and I will not depart from this tradition. It is customary to advise scholarly Ph.D. students in our field to take the time to get a broad many-course education, with substantial mathematical content, and special depth in their research area. Then upon graduation, to use their Ph.D. research area as the basis and focus for further research, while gradually branching out into neighboring fields, and networking within the profession. This is good advice, which I often give, but this is not how it worked for me at all!

I came from Greece with an undergraduate degree in mechanical engineering, got my MS in control theory at George Washington University in three semesters, while holding a full-time job in an unrelated field, and two years later finished my Ph.D. thesis on control under set membership uncertainty at MIT. I benefited from the stimulating intellectual atmosphere of the Electronic Systems Laboratory (later LIDS), nurtured by Mike Athans and Sanjoy Mitter, but because of my short stay there, I graduated with little knowledge beyond Kalman filtering and LQG control. Then I went to teach at Stanford in a department that combined mathematical engineering and operations research (in which my background was rather limited) with economics (in which I had no exposure at all). In my department there was little interest in control theory, and none at all in my thesis work. Though I had never completed a first course in analysis, my first assignment was to teach optimization by functional analytic methods, from David Luenberger's wonderful book, to unsuspecting students. The optimism and energy of youth carried me through, and I found inspiration in what I saw as an exquisite connection between elegant mathematics and interesting practical problems. Studying David Luenberger's other works (including his Nonlinear Programming book) and working next door to him had a lasting effect on me. Two more formative experiences at Stanford were studying Terry Rockafellar's Convex Analysis book (and teaching a seminar course from it), and most importantly teaching a new course on dynamic programming, for which I studied Bellman's books in great detail. My department valued rigorous mathematical analysis that could be broadly applied, and provided a stimulating environment where both could thrive. Accordingly, my course aimed to combine Bellman's vision of wide practical applicability with the emerging mathematical theory of Markov Decision Processes.
The course was an encouraging success at Stanford, and set me on a good track. It has survived to the present day at MIT, enriched by subsequent developments in theoretical and approximation methodologies.

After three years at Stanford, I taught for five years in the quiet and scholarly environment of the University of Illinois. There I finally had a chance to consolidate my mathematics and optimization background, to a great extent through my own research. In particular, it helped a lot that, with the spirit of youth, I took the plunge into the world of the measure-theoretic foundations of stochastic optimal control, aiming to expand the pioneering Borel space framework of David Blackwell, in the company of my then Ph.D. student Steven Shreve.

I changed again direction by moving back to MIT, to work in the then emerging field of data networks and the related field of distributed computation. There I had the good fortune to meet two colleagues with whom I interacted closely over many years: Bob Gallager, who coauthored with me a book on data networks in the mid-80s, and John Tsitsiklis, who worked with me first while a doctoral student and then as a colleague, and over time coauthored with me two research monographs on distributed algorithms and neuro-dynamic programming, and a probability textbook. Working with Bob and John, and writing books with them was exciting and rewarding, and made MIT a special place for me.

Nonetheless, at the same time I was getting distracted by many side activities, such as books in nonlinear programming and dynamic programming, getting involved in applications of queueing theory and power systems, and personally writing several network optimization codes. By that time, however, I realized that simultaneous engagement in multiple, diverse, and frequently changing intellectual activities (while not recommended broadly) was a natural and exciting mode of operation that worked well for me, and also had some considerable benefits. It stimulated the cross-fertilization of ideas, and allowed the creation of more broadly integrated courses and books.

In retrospect I was very fortunate to get into methodologies that eventually prospered. Dynamic programming developed perhaps beyond Bellman's own expectation. He correctly emphasized the curse of dimensionality as a formidable impediment in its use, but probably could not have foreseen the transformational impact of the advances brought about by reinforcement learning, neuro-dynamic programming, and other approximation methodologies. When I got into convex analysis and optimization, it was an emerging theoretical subject, overshadowed by linear, nonlinear, and integer programming. Now, however, it has taken center stage thanks to the explosive growth of machine learning and large scale computation, and it has become the lynchpin that holds together most of the popular optimization methodologies. Data networks and distributed computation were thought promising when I got involved, but it was hard to imagine the profound impact they had on engineering, as well as the world around us today. Even set membership description of uncertainty, my Ph.D. thesis subject, which was totally overlooked for nearly fifteen years, eventually came to the mainstream, and has connected with the popular areas of robust optimization, robust control, and model predictive control. Was it good judgement or fortunate accident that steered me towards these fields? I honestly cannot say. Albert Einstein wisely told us that "Luck is when opportunity meets preparation." In my case, I also think it helped that I resisted overly lengthy distractions in practical directions that were too specialized, as well as in mathematical directions that had little visible connection to the practical world.

An academic journey must have companions to learn from and share with, and for me these were my students and collaborators. In fact it is hard to draw a distinction, because I always viewed my Ph.D. students as my collaborators. On more than one occasion, collaboration around a Ph.D. thesis evolved into a book, as in the cases of Angelia Nedic and Asuman Ozdaglar, or into a long multi-year series of research papers after graduation, as in the cases of Paul Tseng and Janey Yu. I am very thankful to my collaborators for our stimulating interactions, and for all that I learned from them. They are many and I cannot mention them all, but they were special to me and I was fortunate to have met them. I wish that I had met Richard Bellman; I only corresponded with him a couple of times (he was the editor of my first book on dynamic programming). I still keep several of his books close to me, including his scintillating and highly original book on matrix theory. I am also satisfied that I paid part of my debt to him in a small way. I have used systematically, for the first time I think in a textbook in 1987, the name "Bellman equation" for the central fixed point equation of infinite horizon discrete-time dynamic programming. It is a name that is widely used now, and most deservedly so.
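
For readers who have not seen it written out, one standard form of that fixed point equation, for a discounted infinite-horizon discrete-time problem with dynamics $x_{k+1} = f(x_k, u_k)$, stage cost $g$, and discount factor $\alpha \in (0,1)$, is the following (the notation here is generic rather than quoted from the 1987 textbook):

```latex
V^*(x) \;=\; \min_{u \in U(x)} \Bigl[\, g(x,u) \;+\; \alpha\, V^*\bigl(f(x,u)\bigr) \Bigr]
```

Here $V^*$ is the optimal cost-to-go function, and a minimizing $u$ at each state $x$ yields an optimal policy.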

Year:

2013

Citation:

For fundamental contributions to linear systems theory, geometric control theory, logic-based and adaptive control, and distributed sensing and control

**A. Stephen Morse** was born in Mt. Vernon, New York. He received a BSEE degree from Cornell University, an MS degree from the University of Arizona, and a Ph.D. degree from Purdue University. From 1967 to 1970 he was associated with the Office of Control Theory and Application (OCTA) at the NASA Electronics Research Center in Cambridge, Mass. Since 1970 he has been with Yale University, where he is presently the Dudley Professor of Engineering.

Text of Acceptance Speech:

President Rhinehart, Lucy, Danny, fellow members of the greatest technological field in the world, I am, to say the least, absolutely thrilled and profoundly humbled to be this year’s recipient of the Richard E. Bellman Control Heritage Award. I am grateful to those who supported my nomination, as well as to the American Automatic Control Council for selecting me.

I am indebted to a great many people who have helped me throughout my career. Among these are my graduate students, postdocs, and colleagues including, in recent years, John Baillieul, Roger Brockett, Bruce Francis, Art Krener, and Jan Willems. In addition, I’ve been fortunate enough to have had the opportunity to collaborate with some truly great people including Brian Anderson, Ali Belabbas, Chris Byrnes, Alberto Isidori, Petar Kokotovic, Eduardo Sontag and Murray Wonham. I’ve been lucky enough to have had a steady stream of research support from a combination of agencies including AFOSR, ARO and NSF.

I actually never met Richard Bellman, but I certainly was exposed to much of his work. While I was still a graduate student at Purdue, I learned all about Dynamic Programming, Bellman’s Equation, and that the Principle of Optimality meant “Don’t cry over spilled milk.” Then I found out about the Curse of Dimensionality. After finishing school I discovered that there was life before dynamic programming, even in Bellman’s world. In particular I read Bellman’s 1953 monograph on the Stability Theory of Differential Equations. I was struck by this book’s clarity and ease of understanding, which of course are hallmarks of Richard Bellman’s writings. It was from this stability book that I first learned about what Bellman called his “fundamental lemma.” Bellman used this important lemma to study the stability of perturbed differential equations which are nominally stable. Bellman first derived the lemma in 1943, apparently without knowing that essentially the same result had been derived by Thomas Gronwall in 1919 for establishing the uniqueness of solutions to smooth differential equations. Not many years after learning about what is now known as the Bellman-Gronwall Lemma, I found myself faced with the problem of trying to prove that the continuous-time version of the Egardt-Goodwin-Ramadge-Caines discrete-time model reference adaptive control system was “stable.” As luck would have it, I had the Bellman-Gronwall Lemma in my hip pocket and was able to use it to easily settle the question. As Pasteur once said, “Luck favors the prepared mind.”
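
For reference, the lemma, in the form it is usually quoted today, says: if $u$ is continuous and nonnegative, $c \ge 0$ is a constant, and $k \ge 0$, then

```latex
u(t) \;\le\; c + \int_{a}^{t} k(s)\,u(s)\,ds \;\; \text{for all } t \ge a
\quad\Longrightarrow\quad
u(t) \;\le\; c\,\exp\!\Bigl(\int_{a}^{t} k(s)\,ds\Bigr)
```

Applied to the difference between a perturbed and a nominal trajectory, it converts an implicit integral bound into an explicit exponential bound, which is exactly the step a stability or adaptive-control argument needs.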

After leaving school I joined the Office of Control Theory and Application at the now defunct NASA Electronics Research Center in Cambridge, Mass. OCTA had just been formed and was headed by Hugo Schuck. OCTA’s charter was to bridge the gap between theory and application. Yes, people agonized about the so-called theory-application gap way back then. One has to wonder if the agony was worth it. Somehow the gap, if it really exists, has not prevented the field from bringing to fruition a huge number of technological advances and achievements including landing on the moon, cruise control, minimally invasive robotic surgery, advanced agricultural equipment, anti-lock brakes, and a great deal more. What gap? The only gap I know about sells clothes.

In the late 1990s I found myself one day listening to lots of talks about UAVs at a contractors meeting at the Naval Postgraduate School in Monterey Bay, California. I had a Saturday night layover and so I spent Saturday, by myself, going to the Monterey Bay Aquarium. I was totally awed by the massive fish tank display there and in particular by how a school of sardines could so gracefully move through the tank, sometimes bifurcating and then merging to avoid larger fish. With UAVs in the back of my mind, I had an idea: Why not write a proposal on coordinated motion and cooperative control for the NSF’s new initiative on Knowledge and Distributed Intelligence? Acting on this, I was fortunate to be able to recruit a dream team: Roger Brockett, for his background in nonlinear systems; Naomi Leonard for her knowledge of underwater gliders; Peter Belhumeur for his expertise in computer vision; and biologists Danny Grunbaum and Julia Parrish for their vast knowledge of fish schooling. We submitted a proposal aimed at trying to understand, on the one hand, the traffic rules which large animal aggregations such as fish schools and bird flocks use to coordinate their motions and, on the other, how one might use similar concepts to coordinate the motion of man-made groups. The proposal was funded and at the time the research began in 2000, the playing field was almost empty. The project produced several pieces of work about which I am especially proud. One made a connection between the problem of maintaining a robot formation and the classical idea of a rigid framework; an offshoot of this was the application of graph rigidity theory to the problem of localizing a large, distributed network of sensors.
Another thrust started when my physics-trained graduate student Jie Lin ran across a paper in Physical Review Letters by Tamás Vicsek and co-authors which provided experimental justification for why a group of self-driven particles might end up moving in the same direction as a result of local interactions. Jie Lin, my postdoc Ali Jadbabaie, and I set out to explain the observed phenomenon, but were initially thwarted by what seemed to be an intractable convergence question for time-varying, discrete-time, linear systems. All attempts to address the problem using standard tools such as quadratic Lyapunov functions failed. Finally Ali ran across a theorem by Jacob Wolfowitz, and with the help of Marc Artzrouni at the University of Pau in France, a convergence proof was obtained. We immediately wrote a paper and submitted it to a well known physics journal where it was promptly rejected because the reviewers did not like theorems and lemmas. We then submitted a full length version of the work to the TAC where it was eventually published as the paper “Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor Rules.”
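
The flavor of the nearest-neighbor rule can be shown with a toy simulation. This is a sketch only: the real Vicsek model includes noise and headings that wrap modulo 2π, and the neighbor graph there is determined by the agents' positions; here the graph is supplied as a function of time, angles are treated as plain reals, and all names are mine.

```python
import numpy as np

def heading_consensus(theta0, neighbors, steps):
    """Noise-free nearest-neighbor averaging: at each step every agent
    replaces its heading with the average of its own heading and those
    of its current neighbors.

    theta0    -- initial headings, shape (n,), treated as plain reals
    neighbors -- function t -> list of neighbor-index lists at step t
                 (the graph is allowed to change with time)
    steps     -- number of synchronous update rounds
    """
    theta = np.asarray(theta0, dtype=float)
    for t in range(steps):
        nbrs = neighbors(t)
        # Each row of the implicit update matrix is row-stochastic.
        theta = np.array([theta[[i] + list(nbrs[i])].mean()
                          for i in range(len(theta))])
    return theta

# Three agents on a fixed line graph 0-1-2: headings merge toward a
# common value, the weighted average determined by the graph.
line_graph = lambda t: [[1], [0, 2], [1]]
final = heading_consensus([0.0, 0.5, 1.0], line_graph, 100)
```

Each round multiplies the heading vector by a row-stochastic matrix, and the convergence question the speech describes is exactly when infinite products of such (possibly time-varying) matrices converge to a rank-one limit, which is where the Wolfowitz theorem enters.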

Over the years, many things have changed. The American Control Conference was once the Joint Automatic Control Conference and was held at universities. Today the ACC proceedings sit on a tiny flash drive about the size of two pieces of bubble gum, while a mere 15 years ago the proceedings consisted of 6 bound volumes weighing about 10 pounds and taking up approximately 1100 cubic inches of space on one’s bookshelf. And people carried those proceedings home on planes; of course, there were no checked baggage fees back then.

The field of automatic control itself has undergone enormous and healthy changes. When I was a student, problem formulations typically began with “Consider the system described by the differential equation.” Today things are different and one of the most obvious changes is that problem formulations often include not only differential equations but also graphs and networks. The field has broadened its outlook considerably, as this American Control Conference clearly demonstrates.

And where might things be going in the future? Take a look at the “Impact of Control Technology” papers on the CSS website including the nice article about cyber-physical systems by Kishan Baheti and Helen Gill. Or try to attend the workshop on “Future Directions in Control Theory” which Fariba Fahroo is organizing for AFOSR.

Automatic control is a really great field and I love it. However, it is also probably the most difficult field to explain to non-specialists. Paraphrasing Donald Knuth: “A {control} algorithm will have to be seen to be believed.”

I believe that most people do not understand what a control engineer does or what a control system is. This of course is not an unusual situation. But it is a problem. IBM, now largely a service company, faced a similar problem trying to explain itself after it stopped producing laptops. We of course are primarily a service field. Perhaps like IBM, we need to take some time to rethink how we should explain what we do?

Thank you very much for listening, and enjoy the rest of the conference.

Year:

2012

Citation:

For contributions to the control and estimation of nonlinear systems

**Arthur J. Krener** received the PhD in Mathematics from the University of California, Berkeley in 1971. From 1971 to 2006 he was at the University of California, Davis. He retired in 2006 as a Distinguished Professor of Mathematics. Currently he is a Distinguished Visiting Professor in the Department of Applied Mathematics at the Naval Postgraduate School.

Text of Acceptance Speech:

It is an honor to receive the 2012 Richard E. Bellman Control Heritage Award. I am deeply humbled to join the very distinguished group of prior winners. At this conference there are so many people whose work I have admired for years. To be singled out among this group is a great honor.

I did not know Richard Bellman personally but we are all his intellectual descendants. Years ago my first thesis problem came from Bellman and currently I am working on numerical solutions to Hamilton-Jacobi-Bellman partial differential equations.

I began graduate school in mathematics at Berkeley in 1964, the year of the Free Speech Movement. After passing my oral exams in 1966, I started my thesis work with R. Sherman Lehman who had been a postdoc with Bellman at the Rand Corporation in the 1950s. Bellman and Lehman had worked on continuous linear programs also called bottleneck problems in Bellman’s book on Dynamic Programming. These problems are dynamic versions of linear programs, with linear integral transformations replacing finite dimensional linear transformations. At each frozen time they reduce to a standard linear program. Bellman and Lehman had worked out several examples and found that often the optimal solution was basic, at each time an extreme point of the set of feasible solutions to the time frozen linear program. These extreme points moved with time and the optimal solution would stay on one moving extreme point for awhile and then jump to another. It would jump from one bottleneck to another.
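
In symbols, such a problem can be written roughly as follows (one common statement; Bellman and Lehman's own notation differs):

```latex
\text{maximize } \int_0^T a(t)^{\top} x(t)\,dt
\quad \text{subject to} \quad
B(t)\,x(t) \;\le\; c(t) + \int_0^t K(t,s)\,x(s)\,ds,
\qquad x(t) \ge 0
```

Freezing $t$ turns the constraint into that of an ordinary finite-dimensional linear program in $x(t)$, whose moving extreme points are the ones described above.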

Lehman asked me to study this problem and find conditions for this to happen. We thought that it was a problem in functional analysis and so I started taking advanced courses in this area. Unfortunately about a year later Lehman had a very serious auto accident and lost the ability to think mathematically for some time. I drifted, one of hundreds of graduate students in Mathematics at that time. Moreover, Berkeley in the late 1960s was full of distractions and I was distractable. After a year or so Lehman recovered and we started to meet regularly. But then he had a serious stroke, perhaps as a consequence of the accident, and I was on my own again.

I was starting to doubt that my thesis problem was rooted in functional analysis. Fortunately I had taken a course in differential geometry from S. S. Chern, one of the pre-eminent geometers of his generation. Among other things, Chern had taught me about the Lie bracket. And one of my graduate student colleagues told me that I was trying to prove a bang-bang theorem in Control Theory, a field that I had never heard of before. I then realized that my problem was local in nature and intimately connected with flows of vector fields so the Lie bracket was an essential tool. I went to Chern and asked him some questions about the range of flows of multiple vector fields. He referred me to Bob Hermann who was visiting the Berkeley Physics Department at that time.

I went to see Hermann in his cigar-smoke-filled office accompanied by my faithful companion, a German Shepherd named Hogan. If this sounds strange, remember this was Berkeley in the 1960s. Bob was welcoming and gracious; he gave me galley proofs of his forthcoming book, which contained Chow’s theorem. It was almost the theorem that I had been groping for. Heartened by this encounter I continued to compute Lie brackets in the hope of proving a bang-bang theorem.

Time drifted by and I needed to get out of graduate school so I approached the only math faculty member who knew anything about control, Stephen Diliberto. He agreed to take me on as a thesis student. He said that we should meet for an hour each week and I should tell him what I had done. After a couple of months, I asked him what more I needed to do to get a PhD. His answer was “write it up.” My “proofs” fell apart several times trying to accomplish this. But finally I came up with a lemma that might be called Chow’s theorem with drift that allowed me to finish my thesis.

I am deeply indebted to Diliberto for getting me out of graduate school. He also did another wonderful thing for me, he wrote over a hundred letters to help me find a job. The job market in 1971 was not as terrible as it is today but it was bad. One of these letters landed on the desk of a young full professor at Harvard, Roger Brockett. He had also realized that the Lie bracket had a lot to contribute to control. Over the ensuing years, Roger has been a great supporter of my work and I am deeply indebted to him.

Another Diliberto letter got me a position at Davis where I prospered as an Assistant Professor. Tenure came easily as I had learned to do independent research in graduate school. I brought my dog, Hogan, to class every day, he worked the crowds of students and boosted my teaching evaluations by at least a point. After 35 wonderful years at Davis, I retired and joined the Naval Postgraduate School where I continue to teach and do research. I am indebted to these institutions and also to the NSF and the AFOSR for supporting my career.

I feel very fortunate to have discovered control theory, both for the intellectual beauty of the subject and the numerous wonderful people that I have met in this field. I mentioned a few names; let me also acknowledge my intellectual debt to and friendship with Hector Sussmann, Petar Kokotovic, Alberto Isidori, Chris Byrnes, Steve Morse, Anders Lindquist, Wei Kang and numerous others.

In my old age I have come back to the legacy of Bellman. Two National Research Council postdocs, Cesar Aguilar and Thomas Hunt, have been working with me on developing patchy methods for solving the Hamilton-Jacobi-Bellman equations of optimal control. We haven’t whipped the “curse of dimensionality” yet but we are making it nervous.
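
For an infinite-horizon problem with dynamics $\dot{x} = f(x,u)$ and running cost $\ell(x,u)$, the stationary equation being solved has the standard form (notation generic, not taken from the speech):

```latex
0 \;=\; \min_{u} \Bigl[\, \ell(x,u) \;+\; \nabla V(x)^{\top} f(x,u) \Bigr]
```

The unknown is the optimal cost $V$; the minimizing $u$ at each $x$ gives the optimal feedback law, and the dimension of $x$ is what makes grid-based solvers blow up, hence the patchy methods.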

The first figure shows the patchy solution of the HJB equation to invert a pendulum. There are about 1800 patches on 34 levels and calculation took about 13 seconds on a laptop. The algorithm is adaptive, it adds patches or rings of patches when the residual of the HJB equation is too large. The optimal cost is periodic in the angle. The second figure shows this. Notice that there is a negatively slanted line of focal points. At these points there is an optimal clockwise and an optimal counterclockwise torque. If the angular velocity is large enough then the optimal trajectory will pass through the up position several times before coming to rest there.

What are the secrets to success? Almost everybody at this conference has deep mathematical skills. In the parlance of the NBA playoffs, which have just ended, what separates researchers is “shot selection” and “follow through.” Choosing the right problem at the right time and perseverance, nailing the problem, are needed along with good luck and, to paraphrase the Beatles, “a little help from your friends.”

Year:

2011

Citation:

For pioneering contributions to the theory and application of robust process control, model predictive control, and hybrid systems control

**Manfred Morari** was appointed head of the Department of Information Technology and Electrical Engineering at ETH Zurich in 2009. He was head of the Automatic Control Laboratory from 1994 to 2008. Before that he was the McCollum-Corcoran Professor of Chemical Engineering and Executive Officer for Control and Dynamical Systems at the California Institute of Technology. He obtained the diploma from ETH Zurich and the Ph.D.

Text of Acceptance Speech:

Usually when you are nominated for an award you know about it or – at least – you have a suspicion – for example, when somebody asks you for your CV, but you are sure that they are not interested in hiring you. This award came to me as a total surprise. Indeed I had written a letter of support for another most worthy candidate. So, when I received Tamer Başar’s email I thought that it was to inform me that this colleague had won. Who was actually responsible for my nomination? Several of my former graduate students! So, not only were they responsible for doing the work that qualified me for the award, they were even responsible for my getting it!

Over the course of my career I was fortunate to have worked with a fantastic group of people and I am very proud of them: 64 PhD students to date and about 25 postdocs. 27 of them are holding professorships all over the world – from the Korea Advanced Institute of Science and Technology (KAIST) in the East to Berkeley and Santa Barbara in the West, from the Norwegian Technical University and the University of Toronto in the North to the Technion in Israel and the Instituto Tecnologico de Buenos Aires in the South. Many others are now in industry, about 15 in finance, management consulting and legal work, holding positions of major responsibility. I regard this group of former co-workers as my most important legacy.

This award means a lot to me because of the awe-inspiring people who received it in the past. I remember Hendrik Bode receiving the inaugural award in 1979. I remember Rutherford Aris, one of my PhD advisors at the University of Minnesota receiving it in 1992. Aris had actually worked and published with Richard Bellman. I remember Harmon Ray receiving it in 2000, my colleague and mentor at the University of Wisconsin.

Receiving this award made me also reflect on what I felt our major contributions were in these 34 years since I started my career as an Asst. Prof at Wisconsin. In what way was our work important? I was reminded of a dinner conversation a few months back with a group of my former PhD students who had joined McKinsey after graduating from ETH. One of them told me that our group had supplied more young consultants to McKinsey Switzerland than any other institute of any university in Switzerland. He also talked informally about the results of a survey done internally on what may be the main traits characterizing a CEO. It is not charm. It is not tactfulness and sensitivity. It is not intelligence. The only common trait seems to be that in their past these CEOs headed a division that experienced unusual growth. For example, the CEO of a telecom company had headed the mobile phone division. All the CEOs seemed to have been at the right place at the right time.

Similar considerations may apply to doing research and to heading a research group. Richard Hamming, best known for the Hamming code and the Hamming window, wrote in a wonderful essay: “If you are to do important work then you must work on the right problem at the right time and in the right way. Without any one of the three, you may do good work but you will almost certainly miss real greatness….”

So, what are the right problems? Eric Sevareid, the famous CBS journalist once quipped: “The chief cause of problems is solutions.” We were never interested in working on problems solely for their mathematical beauty. We always wanted to solve real practical problems with potential impact. Several times we were lucky to be standing at a turning point, ready to embark on a new line of research before the community at large had recognized it. Let me share with you three examples.

Around 1975, when I started my PhD at the University of Minnesota, interest in process control was just about at an all-time low. In 1979 this conference, which was then called the Joint Automatic Control Conference, had barely 300 attendees. The benefits of optimal control and the state space approach had been hyped so much for more than a decade that disillusionment was unavoidable. Many people advised me not to commence a thesis in process control. But my advisor George Stephanopoulos convinced me that the reason for all the disappointment was that people had been working on the wrong problem. The problem was not how to design controllers for poorly designed systems but how to design systems such that they are easy to control. The work that was started at that time by us and several other groups provided valuable insights that are in common use today and set off a whole research movement with special sessions, special journal issues and even separate workshops and conferences.

The second example is our work on Internal Model Control (IMC) and Robust Control. In the early 1980s the term “robust control” did not exist or, at least, it was not widely used and accepted. From our application work and influenced by several senior members of our community we had become convinced that model uncertainty is a critical obstacle affecting controller design. We discovered singular values and the condition number as important indicators before we learned that these were established mathematical quantities with established names. In 1982 at a workshop in Interlaken I met John Doyle, Gunter Stein and essentially everybody else who started to push the robust control agenda. Indeed it was there that Jürgen Ackermann made the researchers in the West aware of the results of Kharitonov. A year later I went to Caltech, John Doyle followed soon afterwards and an exciting research collaboration commenced that lasted for almost a decade. We also cofounded the Control and Dynamical Systems option/department at that time.

The third example is our more recent work on Model Predictive Control (MPC) and Hybrid Systems. As I returned to Switzerland 17 years ago, I moved from a chemical to an electrical engineering department. I was thrown into a new world of systems with time constants of micro- or even nanoseconds rather than the minutes or hours that I was used to. So we set out to dispel the myth that MPC was only suited to slow process control problems and showed that it could even be applied to switched power electronics systems. Through this activity in parallel with a couple of other groups in the world, among them the group of Graham Goodwin, we started this era of “fast MPC” and contributed to the spread of MPC to just about every control application area.

I would never claim that in the mentioned areas we made the most significant contributions and some of the results may even seem trivial to you now, but we were there at the beginning. The Hungarian author Arthur Koestler remarked that “the more original a discovery, the more obvious it seems afterwards.”

Notwithstanding this over-the-hill award that I received today and the mandatory retirement age in Switzerland, I fully intend to strive to match these contributions in the coming years – together with my students, of course.

I want to close my remarks quoting from an interview Woody Allen gave last year. When he was asked “How do you feel about the aging process?” he replied: “Well, I’m against it. I think it has nothing to recommend it.”

Year:

1979

**Hendrik Wade Bode** was born 24 December 1905, in Madison, Wisconsin. He attended high school in Urbana, Illinois, and Normal School in Tempe, Arizona. Continuing his education, he received his B.A. degree in 1924 from Ohio State University and his M.A. degree from the same institution in 1926. During this time, he was a teaching assistant for one year.