How long do you think it will take before a real robotic car can drive through a real city?
Typically, complicated problems like developing an autonomous robotic car are much easier to solve in simulation than in the real world. For example, in the simulated environment for the RoboChamps competition there were no shadows. This makes computer vision much simpler than it would be in a real-world case because there is much less variation in color. The red traffic light will always be the same red, since there is no variation in brightness due to shadows. Let’s just say I wouldn’t want to be the passenger if a car trained in that simulation tried to drive in the real world. I suspect that the next major step will be trucks that automate highway driving but still have a human operator to drive on local roads, much like airplanes that use autopilot.
How do you communicate with robots?
One experiment that we are working on at the Interdisciplinary Robotics Research Lab at Vassar involves robots that can be trained by a human operator. When the robot does something wrong, such as driving into a wall, we send it a signal indicating that the action was bad; if the robot does something right, we send it a reward signal that reinforces the behavior in the future.
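The reward-and-punishment idea can be sketched in a few lines of code. This is a minimal illustration, not the Vassar lab's actual system: it assumes a hypothetical robot that keeps a numeric score per action, which an operator's +1 (reward) or -1 (punishment) signal nudges up or down.

```python
import random

class TrainableRobot:
    """Hypothetical robot trained by operator feedback signals."""

    def __init__(self, actions):
        # Every action starts out equally preferred.
        self.scores = {a: 0.0 for a in actions}

    def choose_action(self):
        # Prefer the highest-scoring action; break ties randomly.
        best = max(self.scores.values())
        candidates = [a for a, s in self.scores.items() if s == best]
        return random.choice(candidates)

    def feedback(self, action, signal, rate=0.5):
        # signal = +1 reinforces the action, -1 discourages it.
        self.scores[action] += rate * signal

robot = TrainableRobot(["drive_forward", "turn_left", "turn_right"])
robot.feedback("drive_forward", -1)  # punished for driving into the wall
robot.feedback("turn_left", +1)      # rewarded for turning away from it
print(robot.choose_action())         # "turn_left" now scores highest
```

Real systems use far richer learning rules, but the core loop is the same: the operator's signal changes how likely the robot is to repeat the behavior.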
I have a Roomba, a commercially available robotic vacuum cleaner, and I can communicate with my Roomba in a number of different ways. There are buttons that I push to tell it when and where I want it to clean. It sings a little tune if it gets stuck or if it is finished cleaning. This is really basic stuff that I still view as a form of communication.
What excites you about the idea of working in the robotics field 20 years from now?
I am really excited about working on biologically inspired intelligence for robots. All humans have this fantastically powerful computational device, their brain, and yet we still don’t really understand how it all works. One way to test theories about biological intelligence is to model the theory in a robot. By building a system that is predicted to be intelligent in some way and observing the successes and failures of that system, we can learn a lot about what sort of underlying mechanisms it takes to create intelligence.
I think that the field of robotics is going to be even more exciting in 20 years than it is today, but I think that the real fuel for the explosion in the field will not be the big universities and corporations, but the garage-based tinkerers and hobbyists. Most of the commonly available and affordable microcontrollers—the robot equivalent of a brain—are only capable of handling insect-level intelligence. As computing technology gets cheaper, I think we’ll see more hobbyists get really involved in coming up with some creative ideas for robotics.
What is the value of robotics for the US military?
Dangerous environments are optimal for robots and provide a great incentive for the development of robotic technology. When human lives are at stake in a combat zone, robotic technology becomes priceless.
We are seeing an increasing number of military applications for robots, including some robots that are armed. There were a few remotely operated robots that were deployed in Iraq. These robots were not autonomous; they required a human operator to control them and be responsible for all the decisions and actions of the robot. The intelligence of these robots is significantly less than that of a smart guided missile; they are basically weaponized remote control vehicles. Even though the robots were pulled from operation before ever firing a shot, the program still provoked debate about the ethics of arming robots, which is obviously very complex.
Describe your current projects at Vassar.
I’m currently working on a few different projects. One is an underwater swimming robot, RayBot, modeled after the electric ray. The goal for that project is to build an underwater robot that swims very efficiently by mimicking the body shape and swimming motion of a real ray. One of the most exciting parts of the project has been working with the real rays. We have an adult female and several of her offspring in our aquarium. We analyze video of the rays to better understand how they swim. Then we take what we learn from the real rays and try to mimic it using the robot.
Another project that I’m working on is giving robots the ability to learn causal relationships on their own. Our current goal is to give the robot the ability to predict what it will sense in the future. Humans do this all the time. When you move around in the world, your brain is making and confirming predictions about what you will see next. For example, objects consistently appear larger in the visual field as we move closer to them, and so the brain can predict that an object will grow larger if you move toward it. By giving robots the ability to learn causal relationships as they explore the world, we hope that their intelligence will also grow and develop. For the RoboChamps competition, I programmed my virtual car to drive in one very specific environment; the hallmark of truly robust intelligence is that it is smart in many different environments. Our goal is to give robots a way to learn causal relationships whether they sense the world with a camera, lasers, sonar, or any other type of sensor.
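The "objects grow larger as you approach" prediction has a simple geometric form that can be turned into code. The sketch below is purely illustrative and not the lab's implementation: it assumes the apparent angular size of an object of width w at distance d is 2·atan(w / 2d), and shows a robot predicting how that size should change after it moves closer, a prediction it could then check against its camera.

```python
import math

def apparent_size(width, distance):
    # Angular size (radians) of an object of given width seen
    # head-on at the given distance.
    return 2 * math.atan(width / (2 * distance))

def predict_after_move(width, distance, step):
    # Predict the apparent size after moving `step` meters closer.
    # Comparing this prediction with what the camera later reports
    # lets the robot confirm or revise its causal model.
    return apparent_size(width, distance - step)

current = apparent_size(1.0, 10.0)           # object seen from 10 m
predicted = predict_after_move(1.0, 10.0, 5.0)  # after moving 5 m closer
print(predicted > current)  # True: moving closer predicts a larger object
```

The same predict-then-compare loop works regardless of the sensor; only the function mapping motion to expected readings changes, which is why the approach generalizes across cameras, lasers, and sonar.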