By Daniel D. Lee
University of Pennsylvania
Hollywood and other pundits offer a number of visions of how robots will change the world in the near future. Images abound of autonomous robots either enabling a utopian society or running amok in hellish doomsday scenarios. The most probable future, however, lies somewhere in between and will be reflected in subtler differences from our current reality. What is clear is that advances in computation, sensing and automation will have a significant impact on people’s lives. A sound understanding of the current limitations and potential of research in this area is necessary for a proper perspective on future possibilities.
Robots rely upon both hardware and software to sense the surrounding world, convert those sensed signals into symbolic representations for planning, and then act on those plans by controlling their actuators. Impressive hardware advances in sensors and computation have enabled robots to gather ever more information about the surrounding world in real time. Current software algorithms can parse that information into discrete objects and plan motions to avoid or manipulate them. Advances in materials have yielded more compact and efficient actuators and power systems for complex robot mechanisms. Together, these gains have produced impressive achievements such as driverless cars, flying robots, and automated warehouses.
To be successful, however, robots need to predict how the surrounding world will change over time as a result of their potential actions. The accuracy of these predictions is what limits where robots can be used. It is relatively easy to predict what will happen in constrained environments such as a well-designed factory floor. It is much more difficult to predict from sensor signals how the world will change in unstructured environments, which contain elements that are hard to model and predict: snow, slippery surfaces, and other agents, including humans.
Thus, in the near future, we expect robot systems to be developed and deployed for use in environments that can more easily be modeled. In manufacturing, robots will handle operations that involve assembling collections of well-defined parts. However, dealing with more complex materials such as compliant fabric shapes will still be difficult for robots. In transportation, autonomous driving along well-marked roads will become available to the general public, but it will be a challenge for robotic vehicles to handle bustling unregulated intersections where they are competing with human drivers and pedestrians for access.
We hope, of course, that robots will become tools to enhance human productivity and safety by undertaking tasks that are dull, dirty and dangerous. In reality, some activities will be automated more easily in the near future, whereas others will still require human guidance and intervention; which tasks fall into each category will be determined by what is technologically possible and economically feasible. In the GRASP robotics lab at the University of Pennsylvania, we are working on integrating more advanced algorithms for perception, planning, coordination and control that will expand the current capabilities of robots. It is critical that we all understand the key scientific enablers as well as the limitations before we can have an informed discussion about what the actual future with robotic systems will bring.
Daniel Lee is the UPS Foundation Chair and GRASP Laboratory Director in the School of Engineering and Applied Science at the University of Pennsylvania. His research focuses on learning representations that enable autonomous systems to efficiently reason about real-time behaviors in an uncertain world. His work has been supported by the National Science Foundation, Office of Naval Research, Air Force Office of Scientific Research, Department of Transportation, and the Defense Advanced Research Projects Agency.