How Does a Self-Driving Car See?


Camera, radar and lidar sensors give autonomous vehicles superhuman vision.

To drive better than humans, autonomous vehicles must first see better than humans.

Building reliable vision capabilities for self-driving cars has been a major development hurdle. By combining a variety of sensors, however, developers have been able to create a detection system that can “see” a vehicle’s environment even better than human eyesight.

The keys to this system are diversity — different types of sensors — and redundancy — overlapping sensors that can verify that what a car is detecting is accurate.

The three primary autonomous vehicle sensors are camera, radar and lidar. Working together, they provide the car with visuals of its surroundings and help it detect the speed and distance of nearby objects, as well as their three-dimensional shape.

In addition, sensors known as inertial measurement units help track a vehicle’s acceleration and location.
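
As a rough illustration of how an IMU's readings can be used, the sketch below integrates acceleration into velocity and position over time. It is a simplified, hypothetical dead-reckoning example along a single axis; production systems fuse IMU data with GPS and other sensors and correct for accumulated drift.

```python
# Simplified dead-reckoning sketch: integrate IMU acceleration samples
# into velocity and position along one axis. Illustrative only -- real
# vehicles combine IMU data with GPS and correct for drift.

def dead_reckon(accel_samples, dt, v0=0.0, x0=0.0):
    """accel_samples: accelerations in m/s^2, dt: sample interval in seconds."""
    velocity, position = v0, x0
    track = []
    for a in accel_samples:
        velocity += a * dt          # v = v + a * dt
        position += velocity * dt   # x = x + v * dt
        track.append((position, velocity))
    return track

# Example: constant 2 m/s^2 acceleration for one second, sampled at 100 Hz
print(dead_reckon([2.0] * 100, dt=0.01)[-1])  # roughly (1.01 m, 2.0 m/s)
```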

To understand how these sensors work on a self-driving car — and replace and improve human driving vision — let’s start by zooming in on the most commonly used sensor, the camera.

The Camera Never Lies

From photos to video, cameras are the most accurate way to create a visual representation of the world, especially when it comes to self-driving cars.

An autonomous driving camera sensor developed by NVIDIA DRIVE partner Sekonix.

Autonomous vehicles rely on cameras placed on every side — front, rear, left and right — to stitch together a 360-degree view of their environment. Some have a wide field of view — as much as 120 degrees — and a shorter range. Others focus on a more narrow view to provide long-range visuals.

Some cars even integrate fish-eye cameras, which contain super-wide lenses that provide a panoramic view, to give a full picture of what’s behind the vehicle so it can park itself.

Though they provide accurate visuals, cameras have their limitations. They can distinguish details of the surrounding environment, but the distances of those objects need to be calculated to know exactly where they are. It’s also more difficult for camera-based sensors to detect objects in low-visibility conditions, such as fog, rain or nighttime.
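
One common way to recover distance from a single camera image is the pinhole-camera approximation: if an object's real-world size and the camera's focal length are known, its distance can be estimated from how large it appears in pixels. The sketch below uses assumed example values and is not how any particular production stack computes depth.

```python
# Pinhole-camera distance estimate: a rough sketch, assuming the object's
# real-world height and the camera's focal length (in pixels) are known.
# Real perception stacks rely on stereo, learned depth or sensor fusion.

def estimate_distance_m(real_height_m, focal_length_px, pixel_height_px):
    """distance = (real height * focal length) / apparent height in pixels."""
    return (real_height_m * focal_length_px) / pixel_height_px

# Example: a 1.5 m tall car that appears 120 px tall through a lens with
# an (assumed) 1000 px focal length is roughly 12.5 m away.
print(estimate_distance_m(1.5, 1000, 120))  # 12.5
```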

On the Radar

Radar sensors can supplement camera vision in times of low visibility, like night driving, and improve detection for self-driving cars.

Traditionally used to detect ships, aircraft and weather formations, radar works by transmitting radio waves in pulses. Once those waves hit an object, they return to the sensor, providing data on the speed and location of the object.
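
The arithmetic behind a radar return is straightforward: range follows from the round-trip time of the pulse, and relative speed from the Doppler shift of the reflected wave. The sketch below shows both formulas with illustrative numbers, not values from any specific automotive radar.

```python
# Radar basics as a sketch: range from pulse round-trip time, relative
# speed from Doppler shift. All numbers are illustrative.

C = 3.0e8  # speed of light, m/s

def radar_range_m(round_trip_time_s):
    """The pulse travels out and back, so divide the round trip by two."""
    return C * round_trip_time_s / 2

def radar_speed_mps(doppler_shift_hz, carrier_freq_hz):
    """Relative speed of the object from the Doppler shift of the echo."""
    return C * doppler_shift_hz / (2 * carrier_freq_hz)

# A return arriving 0.4 microseconds later comes from an object ~60 m away.
print(radar_range_m(0.4e-6))                  # 60.0
# A 3.85 kHz shift at a 77 GHz carrier is about 7.5 m/s (~27 km/h) closing speed.
print(round(radar_speed_mps(3850, 77e9), 2))  # 7.5
```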

Like the vehicle’s cameras, radar sensors typically surround the car to detect objects at every angle. They can determine speed and distance, but they can’t distinguish between different types of vehicles.

While the data provided by surround radar and camera are sufficient for lower levels of autonomy, they don’t cover all situations without a human driver. That’s where lidar comes in.

Laser Focus

Camera and radar are common sensors: most new cars today already use them for advanced driver assistance and park assist. They can also cover lower levels of autonomy when a human is supervising the system.

Lidar sensor developed by NVIDIA DRIVE partner Velodyne.

However, for full driverless capability, lidar — a sensor that measures distances by pulsing lasers — has proven to be incredibly useful.

Lidar makes it possible for self-driving cars to have a 3D view of their environment. It provides shape and depth to surrounding cars and pedestrians as well as the road geography. And, like radar, it works just as well in low-light conditions.

By emitting invisible lasers at incredibly fast speeds, lidar sensors are able to paint a detailed 3D picture from the signals that bounce back instantaneously. These signals create “point clouds” that represent the vehicle’s surrounding environment to enhance safety and diversity of sensor data.
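
Each lidar return can be turned into a 3D point from its measured range and the direction the laser was fired in. The sketch below converts range, azimuth and elevation readings into Cartesian points, which is the basic step behind building a point cloud; the readings shown are made up for illustration.

```python
# Toy point cloud: each lidar return is a range measured along a known
# azimuth/elevation direction; converting to x, y, z gives a 3D point.
# The sample returns below are illustrative only.

import math

def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)   # forward
    y = range_m * math.cos(el) * math.sin(az)   # left / right
    z = range_m * math.sin(el)                  # up / down
    return (x, y, z)

# Three returns sweeping across a nearby object
returns = [(12.0, -2.0, 0.5), (12.1, 0.0, 0.5), (12.0, 2.0, 0.5)]
point_cloud = [lidar_return_to_xyz(r, az, el) for r, az, el in returns]
print(point_cloud)
```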

Vehicles only need lidar in a few key places to be effective. However, the sensors are more expensive to implement — as much as 10 times the cost of camera and radar — and have a more limited range.

Putting It All Together

Camera, radar and lidar sensors provide rich data about the car’s environment. However, much like the human brain processes visual data taken in by the eyes, an autonomous vehicle must be able to make sense of this constant flow of information.

Self-driving cars do this using a process called sensor fusion. The sensor inputs are fed into a high-performance, centralized AI computer, such as the NVIDIA DRIVE AGX platform, which combines the relevant portions of data for the car to make driving decisions.

So rather than rely just on one type of sensor data at specific moments, sensor fusion makes it possible to fuse various information from the sensor suite — such as shape, speed and distance — to ensure reliability.
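
A very small illustration of the idea: if camera and radar each give a distance estimate with a different level of confidence, the two can be merged into one estimate weighted by how much each sensor is trusted. This toy inverse-variance average is a sketch of the principle, not a description of how the DRIVE AGX platform actually fuses data.

```python
# Toy sensor fusion: merge two distance estimates by weighting each with
# the inverse of its variance, so the more trusted sensor counts more.
# Illustrative sketch only, not the actual DRIVE fusion algorithm.

def fuse(est_a, var_a, est_b, var_b):
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Camera says 25.0 m (less certain), radar says 24.2 m (more certain)
distance, variance = fuse(25.0, var_a=4.0, est_b=24.2, var_b=1.0)
print(round(distance, 2))  # 24.36 -- pulled toward the radar estimate
```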

It also provides redundancy. When deciding to change lanes, receiving data from both camera and radar sensors before moving into the next lane greatly improves the safety of the maneuver, just as current blind-spot warnings serve as a backup for human drivers.
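
Redundancy can be expressed as a simple rule: the maneuver proceeds only when every relevant sensor agrees the target lane is clear. The function below is a deliberately simplified sketch of that cross-check; a real planner weighs far more signals.

```python
# Simplified redundancy check: only allow a lane change when both the
# camera-based and radar-based detections agree the target lane is clear.

def lane_change_allowed(camera_sees_clear: bool, radar_sees_clear: bool) -> bool:
    return camera_sees_clear and radar_sees_clear

print(lane_change_allowed(True, True))   # True  -> safe to change lanes
print(lane_change_allowed(True, False))  # False -> radar disagrees, abort
```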

The DRIVE AGX platform performs this process as the car drives, so it always has a complete, up-to-date picture of the surrounding environment. This means that unlike human drivers, autonomous vehicles don’t have blind spots and are always aware of the moving and changing world around them.

To learn more about how autonomous vehicles see and understand, read about NVIDIA’s perception software and check out the NVIDIA DRIVE platform.
