AACC Award Recipients

2014: Dimitri P. Bertsekas
Recipient of Richard E. Bellman Control Heritage Award

Citation:
For contributions to the foundations of deterministic and stochastic optimization-based methods in systems and control

Biography:

Dimitri P. Bertsekas' undergraduate studies were in engineering at the National Technical University of Athens, Greece. He obtained his MS in electrical engineering at George Washington University, Washington, DC, in 1969, and his Ph.D. in system science in 1971 at the Massachusetts Institute of Technology.

Dr. Bertsekas has held faculty positions with the Engineering-Economic Systems Department at Stanford University (1971-1974) and the Electrical Engineering Department of the University of Illinois, Urbana (1974-1979). Since 1979 he has been teaching in the Electrical Engineering and Computer Science Department of the Massachusetts Institute of Technology (MIT), where he is currently McAfee Professor of Engineering. He consults regularly with private industry and has held editorial positions in several journals. His research at MIT spans several fields, including optimization, control, large-scale computation, and data communication networks, and is closely tied to his teaching and book authoring activities. He has written numerous research papers and fifteen books, several of which are used as textbooks in MIT classes.

Professor Bertsekas was awarded the INFORMS 1997 Prize for Research Excellence in the Interface Between Operations Research and Computer Science for his book "Neuro-Dynamic Programming" (co-authored with John Tsitsiklis), the 2000 Greek National Award for Operations Research, the 2001 ACC John R. Ragazzini Education Award, and the 2009 INFORMS Expository Writing Award. In 2001, he was elected to the United States National Academy of Engineering for "pioneering contributions to fundamental research, practice and education of optimization/control theory, and especially its application to data communication networks."

Dr. Bertsekas' recent books are "Introduction to Probability: 2nd Edition" (2008), "Convex Optimization Theory" (2009), "Dynamic Programming and Optimal Control, Vol. II: Approximate Dynamic Programming" (2012), and "Abstract Dynamic Programming" (2013), all published by Athena Scientific.

2014: No recipient
Control Engineering Practice Award

2014: Hamsa Balakrishnan
Recipient of Donald P. Eckman Award

Citation:
For excellence in the control design, analysis, implementation, and evaluation of practical algorithms to improve the efficiency and environmental performance of air transportation systems

Biography:

Hamsa Balakrishnan is an Associate Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology. Her research is in the design, analysis, and implementation of control and optimization algorithms for large-scale cyber-physical infrastructures, with an emphasis on air transportation systems. Her contributions include airport congestion control algorithms, air traffic routing and airspace resource allocation methods, machine learning for weather forecasts and flight delay prediction, and methods to mitigate environmental impacts. Her work spans theory and practice, including both algorithm development and real-world field tests.

Dr. Balakrishnan is a member of the Operations Research Center (ORC) and the Laboratory for Information and Decision Systems (LIDS) at MIT. Before joining MIT, she was at the NASA Ames Research Center, after receiving her PhD from Stanford University and a B.Tech. from the Indian Institute of Technology Madras.

Dr. Balakrishnan received the Lawrence Sperry Award from the American Institute of Aeronautics and Astronautics (2012), the inaugural CNA Award for Operational Analysis (2012), the Kevin Corker Award for Best Paper at the FAA/Eurocontrol Air Traffic Management R&D Seminar (2011), and an NSF CAREER Award (2008). She is an Associate Fellow of the AIAA.

2014: Roger Brockett
Recipient of John R. Ragazzini Education Award

Citation:
For inspirational mentorship of generations of graduate students who have participated in defining the field of control engineering

Biography:

Roger Brockett is An Wang Research Professor of Electrical Engineering and Computer Science at Harvard University. He was a student at Case Institute of Technology and did his Ph.D. work under the supervision of Mihajlo D. Mesarovic, in the Systems Research Center then led by Donald P. Eckman. Prior to joining the Harvard faculty in 1969, he taught for six years in the Electrical Engineering department at MIT, where he developed the textbook Finite Dimensional Linear Systems and involved graduate students in a range of topics centering on stability theory and applications. At Harvard, working alongside Y.C. Ho and an outstanding group of younger colleagues, he initially focused on the theory and applications of nonlinear systems, emphasizing the use of differential geometric ideas. In the mid-1980s, fostered in part by the new NSF Engineering Research Center initiative and the ARO MURI program administered by Jagdish Chandra, the focus of his research turned to the application of control-theoretic ideas to problems in robotics, computer vision, and other aspects of intelligent machines. An important part of this transition was the development of a broadly inclusive robotics laboratory, engaging a number of Harvard faculty members as well as long-term collaborations with colleagues and former students at Brown University, the University of Maryland, and MIT. His teaching has involved the development of courses for engineering students, ranging from a freshman design course to graduate-level teaching across the field of control. His Ph.D. students and postdoctoral researchers have, in many cases, gone on to become leaders in the field, with their accomplishments recognized not only through their "day jobs" as teachers, researchers, and managers, but also through their participation in the operation and editorial processes of some of the participating societies of the ACC.

2014: Richard P. Mason and Antonis Papachristodoulou
Recipient of ACC Best Student Paper Award

Paper:
"Chordal Sparsity, Decomposing SDPs and the Lyapunov Equation"

2014: Konstantinos Gatsis, Alejandro Ribeiro, and George J. Pappas
Recipient of O. Hugo Schuck Award

Paper:
(Theory) “Optimal Power Management in Wireless Control Systems”

2014: Davood Babaei Pourkargar and Antonios Armaou
Recipient of O. Hugo Schuck Award

Paper:
(Application) “Control of dissipative partial differential equation systems using APOD based dynamic observer designs”

2013: A. Stephen Morse
Recipient of Richard E. Bellman Control Heritage Award

Citation:
For fundamental contributions to linear systems theory, geometric control theory, logic-based and adaptive control, and distributed sensing and control

Biography:

A. Stephen Morse was born in Mt. Vernon, New York. He received a BSEE degree from Cornell University, an MS degree from the University of Arizona, and a Ph.D. degree from Purdue University. From 1967 to 1970 he was associated with the Office of Control Theory and Application (OCTA) at the NASA Electronics Research Center in Cambridge, Mass. Since 1970 he has been with Yale University, where he is presently the Dudley Professor of Engineering. His main interest is in system theory, and he has done research in network synthesis, optimal control, multivariable control, adaptive control, urban transportation, vision-based control, hybrid and nonlinear systems, sensor networks, and the coordination and control of large groups of mobile autonomous agents. He is a Fellow of the IEEE, a past Distinguished Lecturer of the IEEE Control Systems Society, and a co-recipient of the Society's 1993 and 2005 George S. Axelby Outstanding Paper Awards. He has twice received the American Automatic Control Council's Best Paper Award and is a co-recipient of the Automatica Theory/Methodology Prize. He is the 1999 recipient of the IEEE Technical Field Award for Control Systems. He is a member of the National Academy of Engineering and the Connecticut Academy of Science and Engineering.

Text of Acceptance Speech: President Rhinehart, Lucy, Danny, fellow members of the greatest technological field in the world, I am, to say the least, absolutely thrilled and profoundly humbled to be this year's recipient of the Richard E. Bellman Control Heritage Award. I am grateful to those who supported my nomination, as well as to the American Automatic Control Council for selecting me.

 
I am indebted to a great many people who have helped me throughout my career. Among these are my graduate students, postdocs, and colleagues including, in recent years, John Baillieul, Roger Brockett, Bruce Francis, Art Krener, and Jan Willems. In addition, I've been fortunate enough to have had the opportunity to collaborate with some truly great people including Brian Anderson, Ali Bellabas, Chris Byrnes, Alberto Isidori, Petar Kokotovic, Eduardo Sontag, and Murray Wonham. I've been lucky enough to have had a steady stream of research support from a combination of agencies including AFOSR, ARO, and NSF.
 
I actually never met Richard Bellman, but I certainly was exposed to much of his work. While I was still a graduate student at Purdue, I learned all about Dynamic Programming, Bellman's Equation, and that the Principle of Optimality meant "Don't cry over spilled milk." Then I found out about the Curse of Dimensionality. After finishing school I discovered that there was life before dynamic programming, even in Bellman's world. In particular I read Bellman's 1953 monograph on the Stability Theory of Differential Equations. I was struck by this book's clarity and ease of understanding, which of course are hallmarks of Richard Bellman's writings. It was from this stability book that I first learned about what Bellman called his "fundamental lemma." Bellman used this important lemma to study the stability of perturbed differential equations which are nominally stable. Bellman first derived the lemma in 1943, apparently without knowing that essentially the same result had been derived by Thomas Gronwall in 1919 for establishing the uniqueness of solutions to smooth differential equations. Not many years after learning about what is now known as the Bellman-Gronwall Lemma, I found myself faced with the problem of trying to prove that the continuous-time version of the Egardt-Goodwin-Ramadge-Caines discrete-time model reference adaptive control system was "stable." As luck would have it, I had the Bellman-Gronwall Lemma in my hip pocket and was able to use it to easily settle the question. As Pasteur once said, "Luck favors the prepared mind."
 
After leaving school I joined the Office of Control Theory and Application at the now-defunct NASA Electronics Research Center in Cambridge, Mass. OCTA had just been formed and was headed by Hugo Schuck. OCTA's charter was to bridge the gap between theory and application. Yes, people agonized about the so-called theory-application gap way back then. One has to wonder if the agony was worth it. Somehow the gap, if it really exists, has not prevented the field from bringing to fruition a huge number of technological advances and achievements, including landing on the moon, cruise control, minimally invasive robotic surgery, advanced agricultural equipment, anti-lock brakes, and a great deal more. What gap? The only gap I know about sells clothes.
 
In the late 1990s I found myself one day listening to lots of talks about UAVs at a contractors meeting at the Naval Postgraduate School in Monterey, California. I had a Saturday night layover and so I spent Saturday, by myself, going to the Monterey Bay Aquarium. I was totally awed by the massive fish tank display there and in particular by how a school of sardines could so gracefully move through the tank, sometimes bifurcating and then merging to avoid larger fish. With UAVs in the back of my mind, I had an idea: Why not write a proposal on coordinated motion and cooperative control for the NSF's new initiative on Knowledge and Distributed Intelligence? Acting on this, I was fortunate to be able to recruit a dream team: Roger Brockett, for his background in nonlinear systems; Naomi Leonard, for her knowledge of underwater gliders; Peter Belhumeur, for his expertise in computer vision; and biologists Danny Grunbaum and Julia Parish, for their vast knowledge of fish schooling. We submitted a proposal aimed at trying to understand, on the one hand, the traffic rules which large animal aggregations such as fish schools and bird flocks use to coordinate their motions and, on the other, how one might use similar concepts to coordinate the motion of man-made groups.

The proposal was funded and, at the time the research began in 2000, the playing field was almost empty. The project produced several pieces of work about which I am especially proud. One made a connection between the problem of maintaining a robot formation and the classical idea of a rigid framework; an offshoot of this was the application of graph rigidity theory to the problem of localizing a large, distributed network of sensors. Another thrust started when my physics-trained graduate student Jie Lin ran across a paper in Physical Review Letters by Tamas Vicsek and co-authors which provided experimental justification for why a group of self-driven particles might end up moving in the same direction as a result of local interactions. Jie Lin, my postdoc Ali Jadbabaie, and I set out to explain the observed phenomenon, but were initially thwarted by what seemed to be an intractable convergence question for time-varying, discrete-time, linear systems. All attempts to address the problem using standard tools such as quadratic Lyapunov functions failed. Finally Ali ran across a theorem by Jacob Wolfowitz, and with the help of Marc Artzrouni at the University of Pau in France, a convergence proof was obtained. We immediately wrote a paper and submitted it to a well-known physics journal, where it was promptly rejected because the reviewers did not like theorems and lemmas. We then submitted a full-length version of the work to the TAC, where it was eventually published as the paper "Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor Rules."
 
Over the years, many things have changed. The American Control Conference was once the Joint Automatic Control Conference and was held at universities. Today the ACC proceedings sit on a tiny flash drive about the size of two pieces of bubble gum, while a mere 15 years ago the proceedings consisted of 6 bound volumes weighing about 10 pounds and taking up approximately 1100 cubic inches of space on one's bookshelf. And people carried those proceedings home on planes; of course, there were no checked baggage fees back then.
 
The field of automatic control itself has undergone enormous and healthy changes. When I was a student, problem formulations typically began with "Consider the system described by the differential equation." Today things are different, and one of the most obvious changes is that problem formulations often include not only differential equations but also graphs and networks. The field has broadened its outlook considerably, as this American Control Conference clearly demonstrates.
 
And where might things be going in the future? Take a look at the "Impact of Control Technology" papers on the CSS website, including the nice article about cyber-physical systems by Kishan Baheti and Helen Gill. Or try to attend the workshop on "Future Directions in Control Theory" which Fariba Fahroo is organizing for AFOSR.
 
Automatic control is a really great field and I love it. However, it is also probably the most difficult field to explain to non-specialists. Paraphrasing Donald Knuth: "A {control} algorithm will have to be seen to be believed."
 
I believe that most people do not understand what a control engineer does or what a control system is. This of course is not an unusual situation. But it is a problem. IBM, now largely a service company, faced a similar problem trying to explain itself after it stopped producing laptops. We of course are primarily a service field. Perhaps like IBM, we need to take some time to rethink how we should explain what we do?
 
Thank you very much for listening, and enjoy the rest of the conference.

2013: Hongtei Eric Tseng
Recipient of Control Engineering Practice Award

Citation:
For original applications of advanced and classical estimation and control theory to the automotive industry

Biography:

Hongtei Eric Tseng received the B.S. degree from National Taiwan University, Taipei, Taiwan in 1986. He received the M.S. and Ph.D. degrees from the University of California, Berkeley in 1991 and 1994, respectively, all in Mechanical Engineering.

Since he joined Ford Motor Company in 1994, he has contributed to a number of technologies that led to production vehicle implementation, including vehicle state estimation for Ford's Roll Stability Control system (RSC), which is implemented on both Ford and Volvo vehicles, and the design and development of fault detection for Ford's engine-only traction control and AdvanceTrac systems. His research work includes a low-pressure tire warning system using wheel speed sensors; traction control, electronic stability control, and interactive vehicle dynamics control; real-time interactive powertrain control emulation through a motion-based vehicle simulator; engine and transmission coordination control to improve shift feel; and real-time model predictive control for vehicle applications in automated evasive maneuvers. His technical achievement at Ford has been recognized with the Henry Ford Technical Fellow Award in 2004, 2010, and 2011. His current interests include both powertrain and vehicle dynamics control. He is currently a Technical Leader in Controls Engineering at the Research and Innovation Center, Ford Motor Company.

Eric has numerous patents and is the author or coauthor of over 70 publications, including chapters in two handbooks (The Control Handbook, 2nd edition, and the Road and Off-road Vehicle System Dynamics Handbook). He received the Best Paper Award from the 2012 International Conference on Bond Graph Modeling, and the Best Paper Award from the International Symposium on Advanced Vehicle Control (AVEC) in 2006 and 2010. He has been a member of the AVEC International Science Committee since 2010 and a member of an International Federation of Automatic Control (IFAC) Technical Committee since 2007.

 

2013: Vijay Gupta
Recipient of Donald P. Eckman Award

Citation:
For contributions to the theory of estimation and control of networked cyber-physical systems

Biography:

Vijay Gupta is with the Department of Electrical Engineering at the University of Notre Dame. He received his B.Tech. degree from the Indian Institute of Technology, Delhi, and the M.S. and Ph.D. degrees from the California Institute of Technology, all in Electrical Engineering. Prior to joining Notre Dame, he also served as a research associate in the Institute for Systems Research at the University of Maryland, College Park. He received the NSF CAREER Award in 2009, and the Ruth and Joel Spira Award for excellence in teaching in 2010. His research interests include cyber-physical systems; distributed estimation, detection, and control; and, in general, the interaction of communication, computation, and control.
