Autonomous or self-driving cars will offer many advantages when they're widely available, including the elimination of accidents caused by human error. But as bad as some human drivers are, people still hold a great cognitive advantage over the autonomous cars developed to date: thinking as we do, people can anticipate the moves of other drivers and pedestrians, make better sense of the surrounding environment, and thus better plan the vehicle's path ahead. With this in mind, automakers have been striving to develop autonomous vehicles that think and behave more like humans, and to get there, many have been allying with academic researchers at leading universities around the world.
The latest such tie-up: a new collaborative research project between Ford Motor Co. and Stanford University, begun last month, to imbue autonomous cars with the knowledge and instincts of highly skilled expert drivers. The research builds upon ongoing collaborations between Ford and the Massachusetts Institute of Technology (MIT) and the University of Michigan (U-Mich.), both long-time partners of the automaker, on additional aspects of autonomous vehicles.
"We want our self-driving vehicle to be cognizant not only of what it can see, but what it can't see, and want it to do things that are similar to what humans would do in [a given] situation," said Greg Stevens, global manager for driver assistance and active safety research and advanced engineering at Ford's Research and Innovation Center in Dearborn, Mich.
As an example, he described a scenario in which a driver stuck behind a large truck maneuvers within the lane to peek ahead of the truck, to see what's going on in front of it and to be prepared to react appropriately right away if the truck suddenly braked. "Knowing that those lane change areas are free, having taken a peek to see, is important," he said. A vehicle's automated driving feature should take the same peek, and its computers should always be doing such "scenario planning," he added.
Today, “normal path planning” for an automated driving vehicle means following certain rules, such as staying in the center of a lane, not hitting other objects and not remaining in another vehicle’s blind spot, Stevens said. Ford wants to add a rule that says, “try to maximize what your sensors can see,” he explained. Determining the parameters of that rule is one of the motivations for the automaker’s research with Stanford, he said.
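To make the idea concrete, here is a minimal sketch of what adding a "maximize what your sensors can see" rule to a cost-based path planner might look like. All names, weights, and numbers here are invented for illustration; this is not Ford's actual planner.

```python
# Hypothetical sketch: scoring candidate lateral positions within a lane so that
# staying centered, avoiding blind spots, and maximizing sensor visibility all
# contribute to the chosen path. Weights and values are illustrative only.

def path_score(lateral_offset, lane_center, visibility, in_blind_spot,
               w_center=1.0, w_visibility=2.0, blind_spot_penalty=10.0):
    """Lower score = better candidate position (illustrative cost function)."""
    cost = w_center * abs(lateral_offset - lane_center)  # rule: stay near lane center
    cost -= w_visibility * visibility                    # new rule: reward sensor view
    if in_blind_spot:
        cost += blind_spot_penalty                       # rule: avoid blind spots
    return cost

# Choosing among candidate offsets, e.g. when stuck behind a large truck:
candidates = [
    {"offset": 0.0, "visibility": 0.2, "blind": False},  # dead center, view blocked
    {"offset": 0.6, "visibility": 0.8, "blind": False},  # edge of lane, can "peek"
    {"offset": 0.9, "visibility": 0.9, "blind": True},   # best view, but in a blind spot
]
best = min(candidates,
           key=lambda c: path_score(c["offset"], 0.0, c["visibility"], c["blind"]))
# With these illustrative weights, the 0.6 m offset (the "peek") wins.
```

The balance between the weights is exactly the kind of parameter Stevens describes Ford and Stanford needing to determine: how much visibility gain justifies how much deviation from the lane center.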
In addition, Stanford researchers have already extensively studied the skills and techniques that make expert drivers expert, and Ford wants to leverage their findings: to "take those expert skills and actually program them into the vehicle, so that when it comes time to do that emergency lane change, the vehicle can do it using expert driving skills…like an expert driver does," he said.
Ford’s work with Stanford officially began January 1 and was announced on January 22 at the start of the Washington Auto Show in Washington, D.C.
At the same time, Ford also announced an extension of its work with MIT, which began about two years ago and was aimed at developing a computational model that would predict the near future paths of other vehicles sharing the road with the automated driving vehicle, to minimize the opportunity for collisions. Now, with that model largely developed, the automaker and the university are beginning to work on an analogous model focused on pedestrians.
As it exists today, the vehicle model bases its predictions on three factors: the physical capabilities of the other vehicles on the road, such as how quickly they can accelerate, brake and change direction; those vehicles’ motion cues, such as whether a vehicle is edging into the automated driving vehicle’s lane; and the environment around the other vehicle, with assigned probabilities as to where the vehicle may be heading, such as to a nearby exit or intersection. And although other considerations may be added to the vehicle model as research continues, Ford and MIT are chiefly moving on to develop factors for the pedestrian model, Stevens said.
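The three-factor structure described above can be sketched schematically. The following is purely illustrative (invented function names, scores, and maneuvers; not the Ford/MIT model itself): physical capability rules out impossible maneuvers, while motion cues and environment-based probabilities weight the rest.

```python
# Illustrative sketch of a three-factor prediction structure: combine physical
# limits, motion cues, and environment priors into a probability distribution
# over another vehicle's possible maneuvers. All inputs are invented examples.

def predict_maneuver(physically_feasible, cue_scores, environment_priors):
    """Return normalized probabilities over candidate maneuvers.

    physically_feasible : dict maneuver -> bool  (within the vehicle's capabilities?)
    cue_scores          : dict maneuver -> float (evidence from observed motion)
    environment_priors  : dict maneuver -> float (e.g. a nearby exit or intersection)
    """
    scores = {}
    for m in physically_feasible:
        if not physically_feasible[m]:
            scores[m] = 0.0  # factor 1: rule out maneuvers the vehicle cannot perform
        else:
            # factors 2 and 3: weight remaining maneuvers by cues and environment
            scores[m] = cue_scores.get(m, 0.1) * environment_priors.get(m, 0.1)
    total = sum(scores.values()) or 1.0
    return {m: s / total for m, s in scores.items()}

# A car edging toward our lane near a highway exit:
probs = predict_maneuver(
    physically_feasible={"keep_lane": True, "change_lane": True, "u_turn": False},
    cue_scores={"keep_lane": 0.3, "change_lane": 0.7},
    environment_priors={"keep_lane": 0.5, "change_lane": 0.4},
)
# "change_lane" comes out most likely; "u_turn" is ruled out entirely.
```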
“The same sort of structure of the model can work. It’s just the aspects of the model might be different for pedestrians versus vehicles — different capabilities, different motion cues [for which] we would watch, and potentially different goals for where they want to get,” he said. For instance, pedestrians can’t move as fast as cars, and they may make snap decisions and quickly change direction. “That means we have to pay a bit more attention to them” and look for cues, such as “somebody very obviously leaning to check out the state of traffic [which] might indicate they’re thinking about crossing.”
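As a rough illustration of how the same structure might carry over with pedestrian-specific parameters, consider the following sketch. Everything here (the cues, the scores, the threshold) is invented for illustration and is not the Ford/MIT pedestrian model.

```python
# Illustrative only: pedestrian-specific capabilities and cues feeding a crude
# crossing-intent score. All names and numbers are invented.

PEDESTRIAN_MAX_SPEED_MPS = 3.0  # capability factor: pedestrians move far slower than cars

def crossing_likelihood(speed_mps, leaning_toward_road, near_crosswalk):
    """Crude score in [0, 1] that a pedestrian may step into the road."""
    if speed_mps > PEDESTRIAN_MAX_SPEED_MPS:
        return 0.0                 # beyond pedestrian capability; not a pedestrian
    score = 0.2                    # base rate: anyone near the road might cross
    if leaning_toward_road:        # motion cue: the "checking traffic" lean
        score += 0.4
    if near_crosswalk:             # environment factor: a likely goal nearby
        score += 0.3
    return min(score, 1.0)
```

Because pedestrians can change direction in a snap, a real system would re-evaluate such cues continuously rather than once.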
These joint efforts with Stanford and MIT are just the newest between the automaker and academic institutions. Ford's longest-standing research alliance is with U-Mich. in Ann Arbor, which has worked on vehicle sensors and on processing what the sensors see into data that a vehicle's obstacle avoidance and path planning algorithms can use to guide the vehicle along its route. This U-Mich. research has laid the technological foundation for the automaker's endeavors with both Stanford and MIT, and also led to the unveiling last December of an autonomous Fusion Hybrid research vehicle. Developed with U-Mich. (along with the State Farm Mutual Automobile Insurance Co., based in Bloomington, Ill.), it uses rooftop-mounted LIDAR sensors to generate a real-time 3-D map of the car's surrounding environment.
Moreover, other automakers also have university alliances for autonomous vehicle research, such as Audi's partnership with Stanford.
Even with all the progress that has been made to date in getting self-driving cars on the road, however, much work remains to be done.
“Getting a computer to do the things that computers do well,” Stevens concluded, “isn’t quite as challenging as getting computers to do some of the things that humans do well.”