
A. Stephen Morse

For fundamental contributions to linear systems theory, geometric control theory, logic-based and adaptive control, and distributed sensing and control

A. Stephen Morse was born in Mt. Vernon, New York. He received a BSEE degree from Cornell University, an MS degree from the University of Arizona, and a Ph.D. degree from Purdue University. From 1967 to 1970 he was associated with the Office of Control Theory and Application (OCTA) at the NASA Electronics Research Center in Cambridge, Mass. Since 1970 he has been with Yale University, where he is presently the Dudley Professor of Engineering. His main interest is in system theory, and he has done research in network synthesis, optimal control, multivariable control, adaptive control, urban transportation, vision-based control, hybrid and nonlinear systems, sensor networks, and the coordination and control of large groupings of mobile autonomous agents. He is a Fellow of the IEEE, a past Distinguished Lecturer of the IEEE Control Systems Society, and a co-recipient of the Society's 1993 and 2005 George S. Axelby Outstanding Paper Awards. He has twice received the American Automatic Control Council's Best Paper Award and is a co-recipient of the Automatica Theory/Methodology Prize. He is the 1999 recipient of the IEEE Technical Field Award for Control Systems. He is a member of the National Academy of Engineering and the Connecticut Academy of Science and Engineering.

Text of Acceptance Speech: President Rhinehart, Lucy, Danny, fellow members of the greatest technological field in the world, I am, to say the least, absolutely thrilled and profoundly humbled to be this year's recipient of the Richard E. Bellman Control Heritage Award. I am grateful to those who supported my nomination, as well as to the American Automatic Control Council for selecting me.

I am indebted to a great many people who have helped me throughout my career. Among these are my graduate students, postdocs, and colleagues including, in recent years, John Baillieul, Roger Brockett, Bruce Francis, Art Krener, and Jan Willems. In addition, I've been fortunate enough to have had the opportunity to collaborate with some truly great people including Brian Anderson, Ali Belabbas, Chris Byrnes, Alberto Isidori, Petar Kokotovic, Eduardo Sontag, and Murray Wonham. I've been lucky enough to have had a steady stream of research support from a combination of agencies including AFOSR, ARO, and NSF.
I actually never met Richard Bellman, but I certainly was exposed to much of his work. While I was still a graduate student at Purdue, I learned all about Dynamic Programming, Bellman's Equation, and that the Principle of Optimality meant "Don't cry over spilled milk." Then I found out about the Curse of Dimensionality. After finishing school I discovered that there was life before dynamic programming, even in Bellman's world. In particular, I read Bellman's 1953 monograph on the Stability Theory of Differential Equations. I was struck by the book's clarity and ease of understanding, which of course are hallmarks of Richard Bellman's writings. It was from this stability book that I first learned about what Bellman called his "fundamental lemma." Bellman used this important lemma to study the stability of perturbed differential equations which are nominally stable. Bellman first derived the lemma in 1943, apparently without knowing that essentially the same result had been derived by Thomas Gronwall in 1919 for establishing the uniqueness of solutions to smooth differential equations. Not many years after learning about what is now known as the Bellman-Gronwall Lemma, I found myself faced with the problem of trying to prove that the continuous-time version of the Egardt-Goodwin-Ramadge-Caines discrete-time model reference adaptive control system was "stable." As luck would have it, I had the Bellman-Gronwall Lemma in my hip pocket and was able to use it to settle the question easily. As Pasteur once said, "Luck favors the prepared mind."
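For readers who have not run across it, the lemma is short enough to state here; this is the standard textbook form, not Bellman's original notation:

```latex
% Bellman-Gronwall Lemma (standard form).
% Suppose $u$ and $k$ are continuous on $[a,b]$, $k(t) \ge 0$, and $c \ge 0$.
\[
u(t) \;\le\; c + \int_a^t k(s)\,u(s)\,ds \ \ \text{for all } t \in [a,b]
\quad\Longrightarrow\quad
u(t) \;\le\; c\,\exp\!\left(\int_a^t k(s)\,ds\right).
\]
```

Turning an implicit integral bound on $u$ into an explicit exponential one is exactly what is needed to show that a nominally stable system remains stable under small perturbations, which is how Bellman used it.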
After leaving school I joined the Office of Control Theory and Application at the now defunct NASA Electronics Research Center in Cambridge, Mass. OCTA had just been formed and was headed by Hugo Schuck. OCTA's charter was to bridge the gap between theory and application. Yes, people agonized about the so-called theory-application gap way back then. One has to wonder if the agony was worth it. Somehow the gap, if it really exists, has not prevented the field from bringing to fruition a huge number of technological advances and achievements, including landing on the moon, cruise control, minimally invasive robotic surgery, advanced agricultural equipment, anti-lock brakes, and a great deal more. What gap? The only gap I know about sells clothes.
In the late 1990s I found myself one day listening to lots of talks about UAVs at a contractors meeting at the Naval Postgraduate School in Monterey, California. I had a Saturday night layover, and so I spent Saturday, by myself, at the Monterey Bay Aquarium. I was totally awed by the massive fish tank display there, and in particular by how a school of sardines could so gracefully move through the tank, sometimes bifurcating and then merging to avoid larger fish. With UAVs in the back of my mind, I had an idea: why not write a proposal on coordinated motion and cooperative control for the NSF's new initiative on Knowledge and Distributed Intelligence? Acting on this, I was fortunate to be able to recruit a dream team: Roger Brockett, for his background in nonlinear systems; Naomi Leonard, for her knowledge of underwater gliders; Peter Belhumeur, for his expertise in computer vision; and biologists Danny Grunbaum and Julia Parish, for their vast knowledge of fish schooling. We submitted a proposal aimed at understanding, on the one hand, the traffic rules which large animal aggregations such as fish schools and bird flocks use to coordinate their motions and, on the other, how one might use similar concepts to coordinate the motion of man-made groups.
The proposal was funded, and at the time the research began in 2000, the playing field was almost empty. The project produced several pieces of work of which I am especially proud. One made a connection between the problem of maintaining a robot formation and the classical idea of a rigid framework; an offshoot of this was the application of graph rigidity theory to the problem of localizing a large, distributed network of sensors. Another thrust started when my physics-trained graduate student Jie Lin ran across a paper in Physical Review Letters by Tamas Vicsek and co-authors which provided experimental justification for why a group of self-driven particles might end up moving in the same direction as a result of local interactions. Jie Lin, my postdoc Ali Jadbabaie, and I set out to explain the observed phenomenon, but were initially thwarted by what seemed to be an intractable convergence question for time-varying, discrete-time linear systems. All attempts to address the problem using standard tools such as quadratic Lyapunov functions failed. Finally Ali ran across a theorem by Jacob Wolfowitz, and with the help of Marc Artzrouni at the University of Pau in France, a convergence proof was obtained. We immediately wrote a paper and submitted it to a well-known physics journal, where it was promptly rejected because the reviewers did not like theorems and lemmas. We then submitted a full-length version of the work to the TAC, where it was eventually published as the paper "Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor Rules."
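The flavor of that convergence question can be conveyed with a toy computation. What follows is a minimal sketch and not the model from the paper: it assumes a fixed, connected neighbor graph and plain linear averaging of headings, whereas the Vicsek setup has headings on a circle and neighbor sets that change as the particles move.

```python
# Toy nearest-neighbor averaging (a simplified stand-in for the
# Vicsek-style update): at each step every agent replaces its heading
# with the average of its own and its neighbors' headings.  The update
# matrix is row-stochastic, and on a fixed connected graph the headings
# converge to a common value.

def consensus_step(headings, neighbors):
    """One synchronous averaging update over a fixed neighbor graph."""
    return [
        sum(headings[j] for j in [i] + neighbors[i]) / (1 + len(neighbors[i]))
        for i in range(len(headings))
    ]

def run(headings, neighbors, steps=100):
    for _ in range(steps):
        headings = consensus_step(headings, neighbors)
    return headings

# Example: 4 agents on a path graph 0-1-2-3 with distinct initial headings.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
final = run([0.0, 1.0, 2.0, 3.0], neighbors)
# The spread max(final) - min(final) contracts toward zero: consensus.
```

With time-varying neighbor graphs the update matrix changes at every step, quadratic Lyapunov arguments break down, and one is led to results on infinite products of stochastic matrices of the kind Wolfowitz proved.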
Over the years, many things have changed. The American Control Conference was once the Joint Automatic Control Conference and was held at universities. Today the ACC proceedings sit on a tiny flash drive about the size of two pieces of bubble gum, while a mere 15 years ago the proceedings consisted of six bound volumes weighing about 10 pounds and taking up approximately 1100 cubic inches of space on one's bookshelf. And people carried those proceedings home on planes; of course, there were no checked baggage fees back then.
The field of automatic control itself has undergone enormous and healthy changes. When I was a student, problem formulations typically began with "Consider the system described by the differential equation..." Today things are different, and one of the most obvious changes is that problem formulations often involve not only differential equations but also graphs and networks. The field has broadened its outlook considerably, as this American Control Conference clearly demonstrates.
And where might things be going in the future? Take a look at the "Impact of Control Technology" papers on the CSS website, including the nice article about cyber-physical systems by Kishan Baheti and Helen Gill. Or try to attend the workshop on "Future Directions in Control Theory" which Fariba Fahroo is organizing for AFOSR.
Automatic control is a really great field and I love it. However, it is also probably the most difficult field to explain to non-specialists. Paraphrasing Donald Knuth: "A {control} algorithm will have to be seen to be believed."
I believe that most people do not understand what a control engineer does or what a control system is. This, of course, is not an unusual situation. But it is a problem. IBM, now largely a service company, faced a similar problem trying to explain itself after it stopped producing laptops. We, of course, are primarily a service field. Perhaps, like IBM, we need to take some time to rethink how we explain what we do.
Thank you very much for listening, and enjoy the rest of the conference.