Autonomous vehicles need object recognition systems to navigate traffic and avoid obstacles on the road. However, the detection performance of current methods often degrades in poor weather conditions or on unstructured roads. Now, researchers have developed an IoT-enabled deep learning 3D object recognition system with improved detection capabilities even in adverse conditions.

Autonomous vehicles have the potential to solve traffic congestion, improve traffic flow through vehicle-to-vehicle communication, and revolutionize the travel experience by providing comfortable and safe rides. In addition, the integration of autonomous driving technology into electric vehicles can contribute to more environmentally friendly transport solutions.

A critical requirement for the success of autonomous vehicles is their ability to detect and navigate around obstacles, pedestrians and other vehicles in various environments.

Today’s autonomous vehicles use smart sensors such as LiDAR (Light Detection and Ranging), which provides a 3D view of the surroundings and depth information; radar (Radio Detection and Ranging), which can detect objects at night and in cloudy weather; and sets of cameras that capture RGB images and a 360-degree view. Together, these sensors produce the data, including the point cloud, that the vehicle uses to perceive its environment.
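To make the sensor data concrete, here is a minimal sketch of how a LiDAR point cloud is commonly represented in code. The array layout and function names are illustrative assumptions, not the researchers' implementation: each LiDAR return is typically stored as an (x, y, z, intensity) row.

```python
import numpy as np

# Illustrative sketch (not the paper's code): a LiDAR point cloud is commonly
# stored as an N x 4 array of (x, y, z, intensity) measurements in the
# sensor's coordinate frame.
def make_point_cloud(n_points: int, seed: int = 0) -> np.ndarray:
    """Generate a synthetic point cloud for demonstration purposes."""
    rng = np.random.default_rng(seed)
    xyz = rng.uniform(-50.0, 50.0, size=(n_points, 3))     # positions in metres
    intensity = rng.uniform(0.0, 1.0, size=(n_points, 1))  # return strength
    return np.hstack([xyz, intensity])

def depth_of(points: np.ndarray) -> np.ndarray:
    """Euclidean distance of each point from the sensor origin (depth)."""
    return np.linalg.norm(points[:, :3], axis=1)

cloud = make_point_cloud(1000)
depths = depth_of(cloud)
print(cloud.shape)  # (1000, 4)
```

The depth values are what give LiDAR its advantage over cameras alone: every point carries an explicit distance from the sensor.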

These sensors often face problems such as reduced detection capability in adverse weather conditions and unstructured roads.

To overcome these shortcomings, an international team of researchers led by Professor Gwanggil Jeon has recently developed a ground-breaking IoT-enabled, deep learning-based end-to-end solution: a complete 3D object recognition system.

“Our proposed system works in real time, improving the object detection capabilities of autonomous vehicles, making traffic navigation smoother and safer,” Professor Jeon explained.

The proposed system is built on YOLOv3 (You Only Look Once, version 3), a deep learning object detection technique that is among the most widely used for 2D visual recognition. The researchers first applied YOLOv3 to detect 2D objects and then modified the technique to detect 3D objects.
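The "you only look once" idea can be illustrated with a toy decoding step. This is a generic YOLO-style sketch, not the authors' code, and the grid size, box count, and class count below are arbitrary assumptions: in a single forward pass, the network emits, for each cell of an S x S grid, B candidate boxes with an objectness score plus C class scores, which are then filtered by confidence.

```python
import numpy as np

# Toy YOLO-style decoder (illustrative assumptions, not the paper's model):
# the network's single-pass output has shape (S, S, B, 5 + C), where each
# box is (x, y, w, h, objectness) followed by C class scores.
S, B, C = 13, 3, 5  # grid size, boxes per cell, classes (example values)

def decode(raw: np.ndarray, conf_threshold: float = 0.5):
    """Return (x, y, w, h, class_id, score) for boxes above the threshold."""
    detections = []
    for row in range(S):
        for col in range(S):
            for b in range(B):
                x, y, w, h, obj = raw[row, col, b, :5]
                class_scores = raw[row, col, b, 5:]
                cls = int(np.argmax(class_scores))
                score = float(obj * class_scores[cls])  # objectness * class prob
                if score > conf_threshold:
                    detections.append((x, y, w, h, cls, score))
    return detections

# Stand-in for a real network output: random activations in [0, 1].
rng = np.random.default_rng(1)
raw = rng.uniform(0.0, 1.0, size=(S, S, B, 5 + C))
dets = decode(raw)
```

Because all boxes are predicted in one pass rather than by scanning region proposals, this family of detectors is fast enough for the real-time operation the researchers describe.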

Using both point cloud data and RGB images as input, the system generates bounding boxes with confidence scores and labels for visible obstacles as output.
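A 3D detection of this kind might be represented as a record like the following. The field names and values here are purely hypothetical, chosen to show what "a bounding box with a confidence score and a label" could look like in practice; they are not the system's actual output format.

```python
from dataclasses import dataclass

# Hypothetical output record (field names are assumptions, not the paper's
# API): a 3D bounding box with a class label and a confidence score.
@dataclass
class Detection3D:
    x: float          # box centre, metres (vehicle frame)
    y: float
    z: float
    length: float     # box dimensions, metres
    width: float
    height: float
    yaw: float        # heading angle, radians
    label: str        # object class, e.g. "car" or "pedestrian"
    confidence: float # detection confidence in [0, 1]

det = Detection3D(x=12.4, y=-3.1, z=0.9,
                  length=4.5, width=1.8, height=1.6,
                  yaw=0.05, label="car", confidence=0.92)
```

Unlike a 2D box, the 3D record carries depth, physical size, and orientation, which is what a planner needs to navigate around the obstacle.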

To evaluate the system’s performance, the team conducted tests using Lyft’s dataset, which consisted of road information collected from 20 autonomous vehicles driving a predetermined route in Palo Alto, California, over a four-month period.

The results showed that the YOLOv3-based system achieved high accuracy, outperforming other state-of-the-art systems: an impressive 96% overall accuracy for 2D object detection and 97% for 3D object detection.
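The article does not specify how a prediction is judged correct, but a standard criterion in object detection is Intersection over Union (IoU) between the predicted and ground-truth boxes. The sketch below shows the 2D version of this check under that assumption:

```python
# Illustrative only: a common way to decide whether a predicted box counts as
# correct is Intersection over Union (IoU) with the ground-truth box.
def iou_2d(a, b):
    """Boxes as (x_min, y_min, x_max, y_max); returns IoU in [0, 1]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A prediction typically counts as correct if IoU exceeds a threshold
# (0.5 is a common choice).
pred = (10, 10, 50, 50)
truth = (12, 12, 48, 52)
print(iou_2d(pred, truth))  # ≈ 0.818, above the 0.5 threshold
```

The 3D analogue intersects volumes rather than areas, but the principle, overlap divided by union, is the same.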

Professor Jeon highlights the potential impact of this enhanced detection capability: “By improving detection capabilities, this system could push autonomous vehicles into the mainstream. The adoption of autonomous vehicles has the potential to transform the transportation and logistics industry and reduce reliance on human drivers. It also means the introduction of more efficient transportation methods.”

Source: Science Daily