There’s no question that self-driving cars are on the horizon, but what will they look like? How will they operate? And, most importantly, how will they think?
In autonomous-vehicle research, the training data typically consists of single static images that the software uses to identify common objects on the road, such as pedestrians, bicycles, and stop signs. Researchers at MIT and Toyota, however, have released a new dataset called DriveSeg that takes this a step further.
Rather than single images, the new DriveSeg dataset provides high-resolution, frame-by-frame representations of common objects across a continuous video driving scene. The researchers developed this dataset to help self-driving artificial intelligence learn about objects, such as construction or trees, that don't always have a specific, uniform shape the way a stop sign does.
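To make the frame-by-frame idea concrete, here is a minimal Python sketch of how one might iterate over a video clip paired with per-pixel segmentation masks. The directory layout, file names, and class list below are illustrative assumptions for a generic video-segmentation dataset, not DriveSeg's documented format.

```python
import glob
import os

import numpy as np
from PIL import Image

# Illustrative class list; the actual DriveSeg label set may differ.
CLASSES = ["road", "sidewalk", "pedestrian", "vehicle", "construction", "vegetation"]

def load_frame_pairs(frame_dir, mask_dir):
    """Yield (frame, mask) arrays for each time step in a driving clip.

    Assumes frames are RGB PNGs and masks are single-channel PNGs whose
    pixel values index into CLASSES -- a common segmentation layout,
    not DriveSeg's confirmed schema.
    """
    for frame_path in sorted(glob.glob(os.path.join(frame_dir, "*.png"))):
        mask_path = os.path.join(mask_dir, os.path.basename(frame_path))
        frame = np.asarray(Image.open(frame_path).convert("RGB"))
        mask = np.asarray(Image.open(mask_path))
        yield frame, mask

# Example: count labeled pixels per class across a clip. Because consecutive
# frames are labeled, amorphous regions (vegetation, construction) can be
# tracked as they change shape over time, unlike in single-image datasets.
if __name__ == "__main__":
    totals = np.zeros(len(CLASSES), dtype=np.int64)
    for frame, mask in load_frame_pairs("clip01/frames", "clip01/masks"):
        totals += np.bincount(mask.ravel(), minlength=len(CLASSES))[: len(CLASSES)]
    for name, count in zip(CLASSES, totals):
        print(f"{name}: {count} labeled pixels")
```

The point of the temporal structure is visible in the loop: each mask arrives in sequence, so a model (or an analysis script like this one) can reason about how a region evolves from one frame to the next rather than classifying each image in isolation.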
MIT and Toyota created and released the dataset in the hopes of advancing self-driving research and producing AI software that can recognize and adapt to real-world scenarios. The team recognizes that driving isn't a uniform process but a continuous flow of visual information to which a driver must adapt.
According to a statement by Rini Sherony, Toyota CSRC’s senior principal engineer, “Predictive power is an important part of human intelligence…Whenever we drive, we are always tracking the movements of the environment around us to identify potential risks and make safer decisions. By sharing this dataset, we hope to accelerate research into autonomous driving systems and advanced safety features that are more attuned to the complexity of the environment around them.”
With this new dataset available and researchers hard at work, there's no doubt self-driving technology will take big strides over the next few years!