ANN ARBOR - Researchers at the University of Michigan are improving technology in self-driving cars that recognizes and predicts pedestrian movements.
Teams are teaching driverless vehicles to anticipate pedestrian behavior by zeroing in on humans' foot placement, body symmetry and gait.
By collecting data from GPS, LiDAR and cameras rigged to vehicles, researchers are able to recreate captured video of humans in motion inside a 3D computer simulation.
These video snippets of real-life behavior have allowed the teams to create a "biomechanically inspired recurrent neural network" that indexes human movements.
With this network, they are able to predict future locations and poses for one pedestrian or a group of pedestrians walking roughly 50 yards away (the scale of an average city intersection).
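The article does not publish the network's architecture, but the core idea of a recurrent pose predictor can be sketched. The snippet below is a minimal, hypothetical illustration in plain Python: a pose is a flattened vector of joint coordinates, and one recurrent step consumes the current pose plus a hidden state and emits a predicted next pose. All dimensions, weights and names here are illustrative assumptions, not the team's actual model.

```python
import math
import random

random.seed(0)
POSE_DIM, HIDDEN_DIM = 4, 8  # e.g. 2 joints x (x, y); sizes are illustrative

def rand_matrix(rows, cols):
    # Small random weights stand in for trained parameters.
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

W_in = rand_matrix(HIDDEN_DIM, POSE_DIM)
W_h = rand_matrix(HIDDEN_DIM, HIDDEN_DIM)
W_out = rand_matrix(POSE_DIM, HIDDEN_DIM)

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def rnn_step(pose, hidden):
    """One recurrent update: new hidden state, then a predicted next pose."""
    pre = [a + b for a, b in zip(matvec(W_in, pose), matvec(W_h, hidden))]
    hidden = [math.tanh(x) for x in pre]
    return hidden, matvec(W_out, hidden)

# Feed a short synthetic pose sequence through the recurrence.
hidden = [0.0] * HIDDEN_DIM
poses = [[0.1 * t] * POSE_DIM for t in range(5)]
for pose in poses:
    hidden, predicted_next = rnn_step(pose, hidden)
print(len(predicted_next))  # one predicted coordinate per pose dimension
```

Because the hidden state carries forward between steps, the predictor can condition on gait history rather than a single frame, which is the property the researchers emphasize.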
"Prior work in this area has typically only looked at still images. It wasn’t really concerned with how people move in three dimensions," Ram Vasudevan, U-M assistant professor of mechanical engineering, said in a statement. "But if these vehicles are going to operate and interact in the real world, we need to make sure our predictions of where a pedestrian is going doesn’t coincide with where the vehicle is going next."
What sets this research apart is that much of the machine learning used to develop current autonomous technology has dealt largely with still photos, not video.
By utilizing short video snippets, their system can analyze the first half of the clip to make predictions that can then be verified for accuracy in the second half.
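That split-clip evaluation can be illustrated with a toy example. Below, a synthetic clip of pedestrian positions is divided in half; the first half feeds a predictor (here a simple constant-velocity extrapolator standing in for the team's recurrent network, purely as an assumption for illustration), and the second half serves as ground truth for scoring the predictions.

```python
def predict_constant_velocity(observed, n_future):
    """Extrapolate future (x, y) positions from the last observed velocity."""
    (x0, y0), (x1, y1) = observed[-2], observed[-1]
    vx, vy = x1 - x0, y1 - y0
    preds = []
    x, y = x1, y1
    for _ in range(n_future):
        x, y = x + vx, y + vy
        preds.append((x, y))
    return preds

def mean_error(preds, truth):
    """Average Euclidean distance between predicted and actual positions."""
    return sum(((px - tx) ** 2 + (py - ty) ** 2) ** 0.5
               for (px, py), (tx, ty) in zip(preds, truth)) / len(preds)

# A synthetic 10-frame clip of a pedestrian walking in a straight line.
clip = [(float(t), 0.5 * t) for t in range(10)]
first_half, second_half = clip[:5], clip[5:]

preds = predict_constant_velocity(first_half, len(second_half))
err = mean_error(preds, second_half)
print(f"mean prediction error: {err:.3f}")  # zero for straight-line motion
```

For this perfectly straight walk the error is zero; real pedestrians turn, stop and speed up, which is exactly why the second half of each clip gives a meaningful accuracy check.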
"Now, we’re training the system to recognize motion and making predictions of not just one single thing—whether it’s a stop sign or not—but where that pedestrian’s body will be at the next step and the next and the next," Matthew Johnson-Roberson, associate professor in U-M’s Department of Naval Architecture and Marine Engineering, said in a statement.
Vasudevan used an example of a pedestrian on their phone to demonstrate the forecast the network is capable of making.
"If a pedestrian is playing with their phone, you know they’re distracted," Vasudevan said in a statement. "Their pose and where they’re looking is telling you a lot about their level of attentiveness. It’s also telling you a lot about what they’re capable of doing next."
Researchers parked a vehicle at several busy intersections in Ann Arbor with LiDAR and cameras facing the intersection, allowing the car to record several days' worth of data at a time.
The captured real-world data was a valuable departure from lab-simulated data sets.
"We are open to diverse applications and exciting interdisciplinary collaboration opportunities, and we hope to create and contribute to a safer, healthier, and more efficient living environment," U-M research engineer Xiaoxiao Du said in a statement.
All About Ann Arbor is powered by ClickOnDetroit/WDIV.