AACC Award Recipients


2013: A. Stephen Morse
Recipient of Richard E. Bellman Control Heritage Award

For fundamental contributions to linear systems theory, geometric control theory, logic-based and adaptive control, and distributed sensing and control


A. Stephen Morse was born in Mt. Vernon, New York. He received a BSEE degree from Cornell University, an MS degree from the University of Arizona, and a Ph.D. degree from Purdue University. From 1967 to 1970 he was associated with the Office of Control Theory and Application (OCTA) at the NASA Electronics Research Center in Cambridge, Mass. Since 1970 he has been with Yale University, where he is presently the Dudley Professor of Engineering. His main interest is in system theory, and he has done research in network synthesis, optimal control, multivariable control, adaptive control, urban transportation, vision-based control, hybrid and nonlinear systems, sensor networks, and coordination and control of large groups of mobile autonomous agents. He is a Fellow of the IEEE, a past Distinguished Lecturer of the IEEE Control Systems Society, and a co-recipient of the Society's 1993 and 2005 George S. Axelby Outstanding Paper Awards. He has twice received the American Automatic Control Council's Best Paper Award and is a co-recipient of the Automatica Theory/Methodology Prize. He is the 1999 recipient of the IEEE Technical Field Award for Control Systems. He is a member of the National Academy of Engineering and the Connecticut Academy of Science and Engineering.

Text of Acceptance Speech: President Rhinehart, Lucy, Danny, fellow members of the greatest technological field in the world, I am, to say the least, absolutely thrilled and profoundly humbled to be this year's recipient of the Richard E. Bellman Control Heritage Award. I am grateful to those who supported my nomination, as well as to the American Automatic Control Council for selecting me.

I am indebted to a great many people who have helped me throughout my career. Among these are my graduate students, postdocs, and colleagues including, in recent years, John Baillieul, Roger Brockett, Bruce Francis, Art Krener, and Jan Willems. In addition, I've been fortunate enough to have had the opportunity to collaborate with some truly great people including Brian Anderson, Ali Belabbas, Chris Byrnes, Alberto Isidori, Petar Kokotovic, Eduardo Sontag and Murray Wonham. I've been lucky enough to have had a steady stream of research support from a combination of agencies including AFOSR, ARO and NSF.
I actually never met Richard Bellman, but I certainly was exposed to much of his work. While I was still a graduate student at Purdue, I learned all about Dynamic Programming, Bellman's Equation, and that the Principle of Optimality meant "Don't cry over spilled milk." Then I found out about the Curse of Dimensionality. After finishing school I discovered that there was life before dynamic programming, even in Bellman's world. In particular I read Bellman's 1953 monograph on the Stability Theory of Differential Equations. I was struck by this book's clarity and ease of understanding, which of course are hallmarks of Richard Bellman's writings. It was from this stability book that I first learned about what Bellman called his "fundamental lemma." Bellman used this important lemma to study the stability of perturbed differential equations which are nominally stable. Bellman first derived the lemma in 1943, apparently without knowing that essentially the same result had been derived by Thomas Gronwall in 1919 for establishing the uniqueness of solutions to smooth differential equations. Not many years after learning about what is now known as the Bellman-Gronwall Lemma, I found myself faced with the problem of trying to prove that the continuous-time version of the Egardt-Goodwin-Ramadge-Caines discrete-time model reference adaptive control system was "stable." As luck would have it, I had the Bellman-Gronwall Lemma in my hip pocket and was able to use it to easily settle the question. As Pasteur once said, "Luck favors the prepared mind."
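For readers unfamiliar with it, the Bellman-Gronwall Lemma can be stated in one common continuous-time form (a standard textbook statement, not necessarily the exact form Bellman gave): if u and β are continuous on [t₀, T], β ≥ 0, and c is a constant, then

```latex
u(t) \;\le\; c + \int_{t_0}^{t} \beta(s)\,u(s)\,ds \quad \text{for all } t \in [t_0, T]
\quad \Longrightarrow \quad
u(t) \;\le\; c\,\exp\!\Big(\int_{t_0}^{t} \beta(s)\,ds\Big).
```

It is exactly this passage from an implicit integral inequality to an explicit exponential bound that makes the lemma useful for bounding solutions of a perturbed system in terms of the nominal one.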
After leaving school I joined the Office of Control Theory and Application at the now defunct NASA Electronics Research Center in Cambridge, Mass. OCTA had just been formed and was headed by Hugo Schuck. OCTA's charter was to bridge the gap between theory and application. Yes, people agonized about the so-called theory-application gap way back then. One has to wonder if the agony was worth it. Somehow the gap, if it really exists, has not prevented the field from bringing to fruition a huge number of technological advances and achievements including landing on the moon, cruise control, minimally invasive robotic surgery, advanced agricultural equipment, anti-lock brakes, and a great deal more. What gap? The only gap I know about sells clothes.
In the late 1990s I found myself one day listening to lots of talks about UAVs at a contractors meeting at the Naval Postgraduate School in Monterey, California. I had a Saturday night layover and so I spent Saturday, by myself, going to the Monterey Bay Aquarium. I was totally awed by the massive fish tank display there and in particular by how a school of sardines could so gracefully move through the tank, sometimes bifurcating and then merging to avoid larger fish. With UAVs in the back of my mind, I had an idea: Why not write a proposal on coordinated motion and cooperative control for the NSF's new initiative on Knowledge and Distributed Intelligence? Acting on this, I was fortunate to be able to recruit a dream team: Roger Brockett, for his background in nonlinear systems; Naomi Leonard for her knowledge of underwater gliders; Peter Belhumeur for his expertise in computer vision; and biologists Danny Grunbaum and Julia Parrish for their vast knowledge of fish schooling. We submitted a proposal aimed at trying to understand, on the one hand, the traffic rules which large animal aggregations such as fish schools and bird flocks use to coordinate their motions and, on the other, how one might use similar concepts to coordinate the motion of man-made groups. The proposal was funded and at the time the research began in 2000, the playing field was almost empty. The project produced several pieces of work about which I am especially proud. One made a connection between the problem of maintaining a robot formation and the classical idea of a rigid framework; an offshoot of this was the application of graph rigidity theory to the problem of localizing a large, distributed network of sensors. Another thrust started when my physics-trained graduate student Jie Lin ran across a paper in Physical Review Letters by Tamas Vicsek and co-authors which provided experimental justification for why a group of self-driven particles might end up moving in the same direction as a result of local interactions. Jie Lin, my postdoc Ali Jadbabaie, and I set out to explain the observed phenomenon, but were initially thwarted by what seemed to be an intractable convergence question for time-varying, discrete-time, linear systems. All attempts to address the problem using standard tools such as quadratic Lyapunov functions failed. Finally Ali ran across a theorem by Jacob Wolfowitz, and with the help of Marc Artzrouni at the University of Pau in France, a convergence proof was obtained. We immediately wrote a paper and submitted it to a well known physics journal where it was promptly rejected because the reviewers did not like theorems and lemmas. We then submitted a full length version of the work to the TAC where it was eventually published as the paper "Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor Rules."
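The nearest-neighbor averaging at the heart of that convergence question can be sketched as follows. This is an illustrative simplification (scalar headings averaged directly, stationary agents, a made-up neighbor radius), not the exact Vicsek model or the paper's analysis, which concern heading angles of moving agents:

```python
import math

def update_headings(positions, headings, radius):
    """One synchronous step of a nearest-neighbor rule: each agent
    replaces its heading by the average of its own heading and the
    headings of every agent within distance `radius` of it."""
    new_headings = []
    for i, (xi, yi) in enumerate(positions):
        # headings of agent i and all of its neighbors
        neighborhood = [
            headings[j]
            for j, (xj, yj) in enumerate(positions)
            if math.hypot(xi - xj, yi - yj) <= radius
        ]
        new_headings.append(sum(neighborhood) / len(neighborhood))
    return new_headings

# Repeated local averaging drives the headings toward a common value,
# the consensus behavior observed in the flocking experiments.
positions = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.5)]
headings = [0.0, 0.5, 1.0]
for _ in range(10):
    headings = update_headings(positions, headings, radius=2.0)
```

The difficulty mentioned above is that, with moving agents, the neighbor graph (and hence the averaging matrix) changes with time, which is what made the convergence proof nontrivial.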
Over the years, many things have changed. The American Control Conference was once the Joint Automatic Control Conference and was held at universities. Today the ACC proceedings sit on a tiny flash drive about the size of two pieces of bubble gum, while a mere 15 years ago the proceedings consisted of 6 bound volumes weighing about 10 pounds and taking up approximately 1100 cubic inches of space on one's bookshelf. And people carried those proceedings home on planes; of course there were no checked baggage fees back then.
The field of automatic control itself has undergone enormous and healthy changes. When I was a student, problem formulations typically began with "Consider the system described by the differential equation." Today things are different, and one of the most obvious changes is that problem formulations often include not only differential equations, but also graphs and networks. The field has broadened its outlook considerably, as this American Control Conference clearly demonstrates.
And where might things be going in the future? Take a look at the "Impact of Control Technology" papers on the CSS website, including the nice article about cyber-physical systems by Kishan Baheti and Helen Gill. Or try to attend the workshop on "Future Directions in Control Theory" which Fariba Fahroo is organizing for AFOSR.
Automatic control is a really great field and I love it. However, it is also probably the most difficult field to explain to non-specialists. Paraphrasing Donald Knuth: "A {control} algorithm will have to be seen to be believed."
I believe that most people do not understand what a control engineer does or what a control system is. This of course is not an unusual situation. But it is a problem. IBM, now largely a service company, faced a similar problem trying to explain itself after it stopped producing laptops. We of course are primarily a service field. Perhaps like IBM, we need to take some time to rethink how we should explain what we do?
Thank you very much for listening, and enjoy the rest of the conference.

2013: Hongtei Eric Tseng
Recipient of Control Engineering Practice Award

For original applications of advanced and classical estimation and control theory to the automotive industry


Hongtei Eric Tseng received the B.S. degree from National Taiwan University, Taipei, Taiwan in 1986. He received the M.S. and Ph.D. degrees from the University of California, Berkeley in 1991 and 1994, respectively, all in Mechanical Engineering.

Since he joined Ford Motor Company in 1994, he has contributed to a number of technologies that led to production vehicle implementation, including vehicle state estimation for Ford's Roll Stability Control system (RSC), which is implemented on both Ford and Volvo vehicles, and the design and development of fault detection on Ford's engine-only traction control and AdvanceTrac systems. His research work includes a low-pressure tire warning system using wheel speed sensors; traction control, electronic stability control, and interactive vehicle dynamics control; real-time interactive powertrain control emulation through a motion-based vehicle simulator; engine and transmission coordination control to improve shift feel; and real-time model predictive control for vehicle applications in automated evasive maneuvers. His technical achievement at Ford has been recognized with the Henry Ford Technical Fellow Award in 2004, 2010, and 2011. His current interests include both powertrain and vehicle dynamics control. He is currently a Technical Leader in Controls Engineering at the Research and Innovation Center, Ford Motor Company.

Eric has numerous patents and is the author or coauthor of over 70 publications, including chapters in two handbooks (The Control Handbook, 2nd edition, and the Road and Off-Road Vehicle System Dynamics Handbook). He received the Best Paper Award from the 2012 International Conference on Bond Graph Modeling, and the Best Paper Award from the International Symposium on Advanced Vehicle Control (AVEC) in 2006 and 2010. He has been a member of the AVEC International Science Committee since 2010 and a member of an International Federation of Automatic Control (IFAC) Technical Committee since 2007.


2013: Vijay Gupta
Recipient of Donald P. Eckman Award

For contributions to the theory of estimation and control of networked cyber-physical systems


Vijay Gupta is with the Department of Electrical Engineering at the University of Notre Dame. He received his B. Tech degree from the Indian Institute of Technology, Delhi and the M.S. and Ph.D. degrees from the California Institute of Technology, all in Electrical Engineering. Prior to joining Notre Dame, he also served as a research associate in the Institute for Systems Research at the University of Maryland, College Park. He received the NSF CAREER award in 2009, and the Ruth and Joel Spira award for excellence in teaching in 2010. His research interests include cyber-physical systems, distributed estimation, detection and control, and, in general, the interaction of communication, computation and control.

2013: Mathukumalli Vidyasagar
Recipient of John R. Ragazzini Education Award

For outstanding contributions to automatic control education through publication of textbooks and research monographs


Mathukumalli Vidyasagar was born in Guntur, India on September 29, 1947. He received the B.S., M.S. and Ph.D. degrees in electrical engineering from the University of Wisconsin in Madison, in 1965, 1967 and 1969 respectively. Between 1969 and 1989, he was a Professor of Electrical Engineering at Marquette University, Milwaukee (1969-70), Concordia University, Montreal (1970-80), and the University of Waterloo, Waterloo, Canada (1980-89). In 1989 he returned to India as the Director of the newly created Centre for Artificial Intelligence and Robotics (CAIR) in Bangalore, under the Ministry of Defence, Government of India. Between 1989 and 2000, he built up CAIR into a leading research laboratory with about 40 scientists and a total of about 85 persons, working in areas such as flight control, robotics, neural networks, and image processing. In 2000 he moved to the Indian private sector as an Executive Vice President of India's largest software company, Tata Consultancy Services. In the city of Hyderabad, he created the Advanced Technology Center, an industrial R&D laboratory of around 80 engineers, working in areas such as computational biology, quantitative finance, e-security, identity management, and open source software to support Indian languages.

In 2009 he retired from TCS and joined the Erik Jonsson School of Engineering & Computer Science at the University of Texas at Dallas, as a Cecil & Ida Green Chair in Systems Biology Science. In March 2010 he was named as the Founding Head of the newly created Bioengineering Department. His current research interests are in the application of stochastic processes and stochastic modeling to problems in computational biology, and control systems.

Vidyasagar has received a number of awards in recognition of his research contributions, including Fellowship in The Royal Society, the world's oldest scientific academy in continuous existence, the IEEE Control Systems (Field) Award, the Rufus Oldenburger Medal of ASME, and others. He is the author of ten books and nearly 140 papers in peer-reviewed journals.

2013: Laurent Lessard and Sanjay Lall
Recipient of O. Hugo Schuck Award

Optimal Controller Synthesis for the Decentralized Two-Player Problem with Output Feedback

2012: Arthur J. Krener
Recipient of Richard E. Bellman Control Heritage Award

For contributions to the control and estimation of nonlinear systems


Arthur J. Krener received the PhD in Mathematics from the University of California, Berkeley in 1971. From 1971 to 2006 he was at the University of California, Davis. He retired in 2006 as a Distinguished Professor of Mathematics. Currently he is a Distinguished Visiting Professor in the Department of Applied Mathematics at the Naval Postgraduate School.

His research interests are in developing methods for the control and estimation of nonlinear dynamical systems and stochastic processes.

Professor Krener is a Life Fellow of the IEEE, and a Fellow of IFAC and of SIAM. His 1981 IEEE Transactions on Automatic Control paper with Isidori, Gori-Giorgi and Monaco won a Best Paper Award. The IEEE Control Systems Society chose his 1977 IEEE Transactions on Automatic Control paper with Hermann as one of 25 Seminal Papers in Control in the last century. He was a Fellow of the John Simon Guggenheim Foundation for 2001-02. In 2004 he received the W. T. and Idalia Reid Prize from SIAM for his contributions to control and system theory. He was the Bode Prize Lecturer at the 2006 IEEE CDC, and in 2010 he received a Certificate of Excellent Achievements from IFAC. His research has been continuously funded since 1975 by NSF, NASA, AFOSR and ONR.

In 1988 he founded the SIAM Activity Group on Control and Systems Theory and was its first Chair. He was again Chair of the SIAG CST in 2005-07. He chaired the first SIAM Conference on Control and its Applications in 1989 and the same conference in 2007 both in San Francisco. He also co-chaired the IFAC Nonlinear Control Design Symposium held at Lake Tahoe in 1996. He has served as an Associate Editor for the SIAM Journal on Control and Optimization and for the SIAM book series on Advances in Design and Control.

Text of Acceptance Speech:

It is an honor to receive the 2012 Richard E. Bellman Control Heritage Award. I am deeply humbled to join the very distinguished group of prior winners. At this conference there are so many people whose work I have admired for years. To be singled out among this group is a great honor.

I did not know Richard Bellman personally but we are all his intellectual descendants. Years ago my first thesis problem came from Bellman and currently I am working on numerical solutions to Hamilton-Jacobi-Bellman partial differential equations.

I began graduate school in mathematics at Berkeley in 1964, the year of the Free Speech Movement. After passing my oral exams in 1966, I started my thesis work with R. Sherman Lehman, who had been a postdoc with Bellman at the Rand Corporation in the 1950s. Bellman and Lehman had worked on continuous linear programs, also called bottleneck problems in Bellman's book on Dynamic Programming. These problems are dynamic versions of linear programs, with linear integral transformations replacing finite-dimensional linear transformations. At each frozen time they reduce to a standard linear program. Bellman and Lehman had worked out several examples and found that often the optimal solution was basic, at each time an extreme point of the set of feasible solutions to the time-frozen linear program. These extreme points moved with time, and the optimal solution would stay on one moving extreme point for a while and then jump to another. It would jump from one bottleneck to another.
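Schematically, such a continuous linear program has roughly the following shape (an illustrative form, not Bellman and Lehman's exact formulation):

```latex
\max_{x(\cdot)\,\ge\,0} \;\int_0^T c(t)^{\top} x(t)\,dt
\qquad \text{subject to} \qquad
B(t)\,x(t) \;\le\; b(t) + \int_0^t K(t,s)\,x(s)\,ds .
```

Freezing t and ignoring the integral memory term leaves an ordinary finite-dimensional linear program at each instant; the moving extreme points of its feasible set are the "bottlenecks" between which the optimal solution jumps.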

Lehman asked me to study this problem and find conditions for this to happen. We thought that it was a problem in functional analysis and so I started taking advanced courses in this area. Unfortunately about a year later Lehman had a very serious auto accident and lost the ability to think mathematically for some time. I drifted, one of hundreds of graduate students in Mathematics at that time. Moreover, Berkeley in the late 1960s was full of distractions and I was distractable. After a year or so Lehman recovered and we started to meet regularly. But then he had a serious stroke, perhaps as a consequence of the accident, and I was on my own again.

I was starting to doubt that my thesis problem was rooted in functional analysis. Fortunately I had taken a course in differential geometry from S. S. Chern, one of the pre-eminent geometers of his generation. Among other things, Chern had taught me about the Lie bracket. And one of my graduate student colleagues told me that I was trying to prove a bang-bang theorem in Control Theory, a field that I had never heard of before. I then realized that my problem was local in nature and intimately connected with flows of vector fields so the Lie bracket was an essential tool. I went to Chern and asked him some questions about the range of flows of multiple vector fields. He referred me to Bob Hermann who was visiting the Berkeley Physics Department at that time.

I went to see Hermann in his cigar smoke-filled office accompanied by my faithful companion, a German Shepherd named Hogan. If this sounds strange, remember this was Berkeley in the 1960s. Bob was welcoming and gracious; he gave me galley proofs of his forthcoming book which contained Chow's theorem. It was almost the theorem that I had been groping for. Heartened by this encounter I continued to compute Lie brackets in the hope of proving a bang-bang theorem.

Time drifted by and I needed to get out of graduate school, so I approached the only math faculty member who knew anything about control, Stephen Diliberto. He agreed to take me on as a thesis student. He said that we should meet for an hour each week and I should tell him what I had done. After a couple of months, I asked him what more I needed to do to get a PhD. His answer was "write it up". My "proofs" fell apart several times trying to accomplish this. But finally I came up with a lemma that might be called Chow's theorem with drift that allowed me to finish my thesis.

I am deeply indebted to Diliberto for getting me out of graduate school. He also did another wonderful thing for me, he wrote over a hundred letters to help me find a job. The job market in 1971 was not as terrible as it is today but it was bad. One of these letters landed on the desk of a young full professor at Harvard, Roger Brockett. He had also realized that the Lie bracket had a lot to contribute to control. Over the ensuing years, Roger has been a great supporter of my work and I am deeply indebted to him.

Another Diliberto letter got me a position at Davis where I prospered as an Assistant Professor. Tenure came easily as I had learned to do independent research in graduate school. I brought my dog, Hogan, to class every day, he worked the crowds of students and boosted my teaching evaluations by at least a point. After 35 wonderful years at Davis, I retired and joined the Naval Postgraduate School where I continue to teach and do research. I am indebted to these institutions and also to the NSF and the AFOSR for supporting my career.

I feel very fortunate to have discovered control theory, both for the intellectual beauty of the subject and the numerous wonderful people that I have met in this field. I mentioned a few names; let me also acknowledge my intellectual debt to and friendship with Hector Sussmann, Petar Kokotovic, Alberto Isidori, Chris Byrnes, Steve Morse, Anders Lindquist, Wei Kang and numerous others.

In my old age I have come back to the legacy of Bellman. Two National Research Council Postdocs, Cesar Aguilar and Thomas Hunt, have been working with me on developing patchy methods for solving the Hamilton-Jacobi-Bellman equations of optimal control. We haven't whipped the "curse of dimensionality" yet but we are making it nervous.

The first figure shows the patchy solution of the HJB equation to invert a pendulum. There are about 1800 patches on 34 levels, and the calculation took about 13 seconds on a laptop. The algorithm is adaptive; it adds patches or rings of patches when the residual of the HJB equation is too large. The optimal cost is periodic in the angle. The second figure shows this. Notice that there is a negatively slanted line of focal points. At these points there is an optimal clockwise and an optimal counterclockwise torque. If the angular velocity is large enough then the optimal trajectory will pass through the up position several times before coming to rest there.

What are the secrets to success? Almost everybody at this conference has deep mathematical skills. In the parlance of the NBA playoffs, which have just ended, what separates researchers is "shot selection" and "follow through". Choosing the right problem at the right time, and perseverance, nailing the problem, are needed along with good luck and, to paraphrase the Beatles, "a little help from your friends".

2012: Eugene Lavretsky
Recipient of Control Engineering Practice Award

Contributions to the development and transitioning of adaptive controls technologies to advanced flight controls


Eugene Lavretsky is a Boeing Senior Technical Fellow, working at Boeing Research & Technology in Huntington Beach, CA. During his career at Boeing, Dr. Lavretsky has developed flight control methods, system identification tools, and flight simulation technologies for transport aircraft, advanced unmanned aerial platforms, and weapon systems. Highlights include the MD-11 aircraft, NASA F/A-18 Autonomous Formation Flight and High Speed Civil Transport aircraft, JDAM guided munitions, X-45 and Phantom Ray autonomous aircraft, High Altitude Long Endurance (HALE) hydrogen-powered aircraft, and the VULTURE solar-powered unmanned aerial vehicle. His research interests include robust and adaptive control, system identification, and flight dynamics. He has written over 100 technical articles, and has taught graduate control courses at California State University, Long Beach, Claremont Graduate University, the California Institute of Technology, Missouri University of Science and Technology, and the University of Southern California. Dr. Lavretsky is an Associate Fellow of AIAA and a Senior Member of IEEE. He is the recipient of the AIAA Mechanics and Control of Flight Award (2009) and the IEEE Control Systems Magazine Outstanding Paper Award (2011).

2012: Jason Marden
Recipient of Donald P. Eckman Award

For outstanding contributions to game theoretic methods for distributed and networked control systems


Jason Marden is an Assistant Professor in the Department of Electrical, Computer, and Energy Engineering at the University of Colorado.  He received a B.S. degree in Mechanical Engineering in 2001 from UCLA, and a Ph.D. in Mechanical Engineering in 2007, also from UCLA, where he was awarded the Outstanding Graduating Ph.D. Student in Mechanical Engineering.  After graduating from UCLA, he served as a junior fellow in the Social and Information Sciences Laboratory at the California Institute of Technology until 2010 after which he joined the University of Colorado.  He received an AFOSR Young Investigator Award in 2012 and his student's paper was a finalist for the Best Student Paper Award at the IEEE Conference on Decision and Control in 2011.  His research interests focus on game theoretic methods for distributed and networked control systems.

2012: Cameron Nowzari and Jorge Cortés
Recipient of O. Hugo Schuck Award

Self-triggered coordination of robotic networks for optimal deployment

2012: Douglas MacMynowski and Mattias Bjorklund
Recipient of O. Hugo Schuck Award

Large aperture segmented space telescope (LASST): Can we control a 12000 segment mirror?