Digital Twins Reduce Field Testing for AVs

Emilie Viasnoff

Jun 13, 2022 / 7 min read

From Feet-Off and Eyes-Off Driving — to Brain-Off Driving

Assisted driving began with feet-off technology, such as adaptive cruise control and advanced driver assistance systems (ADAS) for emergency braking. It has since evolved to hands-off driving with lane-centering functions and eyes-off driving where the car can sense and automatically respond to road and driving conditions. And now brain-off driving is being tested with fully autonomous vehicles.

The Society of Automotive Engineers (SAE) has defined 6 levels of driving automation, ranging from fully manual (level 0) to fully autonomous (level 5). Feet-off and hands-off technologies are classified as assisted driving (levels 1-3). Eyes-off and brain-off technologies (levels 4 and 5) represent a significant step forward that divides assisted vehicles and autonomous vehicles.
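
As a rough illustration of this taxonomy, here is a minimal sketch in Python that simply restates the SAE level names and groups them the way this post does (nothing beyond the definitions above is assumed):

```python
# Illustrative sketch only: mapping SAE driving-automation levels to the
# feet-off / hands-off / eyes-off / brain-off shorthand used in this post.

SAE_LEVELS = {
    0: ("No automation", "fully manual"),
    1: ("Driver assistance", "feet-off"),
    2: ("Partial automation", "hands-off"),
    3: ("Conditional automation", "hands-off, driver on standby"),
    4: ("High automation", "eyes-off"),
    5: ("Full automation", "brain-off"),
}

def is_autonomous(level: int) -> bool:
    """Levels 4 and 5 mark the divide between assisted and autonomous vehicles."""
    return level >= 4

for level, (name, shorthand) in SAE_LEVELS.items():
    if is_autonomous(level):
        kind = "autonomous"
    elif level > 0:
        kind = "assisted"
    else:
        kind = "manual"
    print(f"Level {level}: {name} ({shorthand}) -> {kind}")
```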

Most vehicles today are at level 2, where the vehicle can handle some functions such as emergency braking or park assist. Level 3 vehicles have started to become available, with autopilot for highways or traffic jams. Today’s level 4 vehicles are only prototypes, such as Waymo cars. By 2030, autonomous vehicles on U.S. roads are still estimated to account for only a fraction of total car sales.

The slow rise of autonomous vehicles has multiple root causes. Beyond infrastructure, legal, and public acceptance hurdles, moving to eyes-off and brain-off driving will require automotive manufacturers to overcome many technical challenges: choosing the right sensors, integrating them in the right place in the vehicle, testing them against every scenario, even the most unpredictable ones, and optimizing decision latency. Ultimately, an autonomous vehicle must outperform humans on all roads and in all weather conditions, and optical sensors are critical to achieving this outcome.

In this blog post, I will explore how digital twin technology for optical sensors will accelerate the adoption of autonomous vehicles by reducing the need for field testing.

Why Optical Sensors Are the Critical Building Blocks of Autonomous Vehicles

Today’s vehicles have numerous optical components that are essential for a car to sense driving conditions, interact with its environment and with the driver, and make decisions. These include cameras to take 2D pictures of the environment, LiDAR to obtain 3D point maps, headlamps that automatically compensate for a low-light environment, and radar to see through fog, haze, and rain. The evolution from manual driving to automated driving will require even more sensors and new electrical/electronic (E/E) architectures. From a market standpoint, total sensor market revenue is projected to reach $22.4B in 2025, with radar revenues estimated at $9.1B, camera module revenues at $8.1B, computing hardware at $3.5B, and LiDAR at $1.7B. Ultimately, a fully autonomous vehicle will rely on four to six radar systems, one to five LiDAR systems, and six to twelve cameras.
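
As a quick sanity check on the figures above, the per-technology revenue estimates quoted in this paragraph do sum to the $22.4B total, and the expected sensor counts can be captured in a small configuration sketch (all numbers are taken directly from the paragraph; nothing else is assumed):

```python
# Sketch only: revenue figures and sensor-count ranges quoted above.

revenue_2025_billion_usd = {
    "radar": 9.1,
    "camera modules": 8.1,
    "computing hardware": 3.5,
    "LiDAR": 1.7,
}
total = sum(revenue_2025_billion_usd.values())
print(f"Total 2025 sensor market estimate: ${total:.1f}B")  # prints $22.4B

# Sensor counts expected on a fully autonomous vehicle, as (min, max) units.
full_autonomy_sensor_suite = {
    "radar": (4, 6),
    "LiDAR": (1, 5),
    "camera": (6, 12),
}
for sensor, (low, high) in full_autonomy_sensor_suite.items():
    print(f"{sensor}: {low}-{high} units per vehicle")
```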

Today, the autonomous driving market is still immature, and many technologies and system designs are currently being tested. There is no one-size-fits-all solution. For instance, Tesla started with a single front-looking camera in the autopilot system introduced in 2014; its latest models now include more than eight cameras around the car, along with other sensing systems. In contrast, Waymo’s vehicles use five LiDAR systems, 29 cameras, and six radars to scan their environment. Both systems are still heavily tested in the field, with Waymo leading the race with more than five million total miles driven by its fleet of autonomous vehicles. Waymo also built a driving simulator and accumulated massive amounts of synthetic driving data. But questions persist about how reliable and comprehensive this field data is, and how close to reality the simulated situations are.

Optical sensors are critical building blocks of autonomous vehicles. Accurate digital twins of sensors could unlock the potential of using driving simulators for tasks ranging from design and testing to integration and autonomous driving system co-optimization. This could dramatically reduce field testing of autonomous vehicles and accelerate their adoption. Let’s explore how this could work.

Autonomous Car Sensing Systems

Fostering Co-Design and Enabling Virtual Testing with Accurate Sensor Models

Building an autonomous driving architecture starts by picking the right sensors and making sure they sense what they need to. Currently, sensing platform and perception system development require integration and calibration of sensor hardware on a vehicle as well as conventional ground-truth data acquisition and annotation. These are expensive, time-consuming processes. Typically, the entire design-to-test loop will be completed several times before getting to final system validation. It is critically important to shorten this development cycle through virtual testing and, by doing so, avoid the unreachable amount of physical test driving that would otherwise be needed to make autonomous vehicles safe.
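
To make the idea concrete, here is a hedged sketch of how a design-to-test loop might run against a simulator instead of a physical vehicle. The functions, scores, and pass criterion are hypothetical stand-ins, not any real tool’s API:

```python
# Hypothetical sketch of a virtual design-to-test loop. simulate_sensor_suite()
# and meets_requirements() stand in for a real driving simulator and a real
# validation spec; they are assumptions made for illustration only.

import random

def simulate_sensor_suite(design, scenario):
    """Stand-in for a physics-based simulation; returns a detection score in [0, 1]."""
    random.seed(hash((design["lidar_count"], scenario)) % (2 ** 32))
    return min(1.0, 0.7 + 0.05 * design["lidar_count"] + random.uniform(-0.1, 0.1))

def meets_requirements(scores, threshold=0.9):
    return min(scores) >= threshold

scenarios = ["night_rain", "highway_merge", "pedestrian_crossing"]
design = {"lidar_count": 1, "camera_count": 8}

# Iterate entirely in simulation; hardware is built only once the virtual loop passes.
while True:
    scores = [simulate_sensor_suite(design, s) for s in scenarios]
    if meets_requirements(scores):
        print(f"Design candidate passes virtually: {design}")
        break
    design["lidar_count"] += 1  # refine the design and re-run, no test vehicle needed
```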

What do we need for virtual testing? Alongside vehicle models and virtual environments, we need accurate digital representations of sensors to evaluate their behavior on a computer. Of course, as safety and reliability are critical to autonomous driving, these models should be as accurate as possible. They should precisely reflect how the sensor will interact with the environment in various conditions and how this interaction will impact the quality of the sensor’s raw data. Understanding the behavior of a sensor in any condition and obtaining a physically realistic sensor model can be achieved through optics-extrapolated models. These models must capture emission features (i.e., optical power, wavelength, and wavefront) as well as propagation, interaction, and reception features. Companies have already joined forces to support LiDAR parametric models.
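
As an illustration of what such a parametric model might hold, here is a hypothetical sketch grouping the emission, propagation/interaction, and reception features listed above. The field names and the toy radiometric formula are my own assumptions, not a standard schema:

```python
# Hypothetical sketch of a parametric LiDAR model. Field names and the
# simplified link-budget formula are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class EmissionModel:
    optical_power_w: float    # emitted optical power
    wavelength_nm: float      # e.g. 905 or 1550 nm for typical automotive LiDAR
    wavefront: str            # description of the beam profile

@dataclass
class PropagationModel:
    atmospheric_attenuation_db_per_km: float  # fog, rain, haze losses
    target_reflectivity: float                # interaction with the scene

@dataclass
class ReceptionModel:
    detector_sensitivity_a_per_w: float
    noise_floor_w: float

@dataclass
class LidarParametricModel:
    emission: EmissionModel
    propagation: PropagationModel
    reception: ReceptionModel

    def received_power_w(self, range_m: float) -> float:
        """Toy estimate: emitted power scaled by reflectivity, atmospheric
        attenuation, and 1/R^2 spreading (a simplification of a real link budget)."""
        attenuation = 10 ** (-self.propagation.atmospheric_attenuation_db_per_km
                             * (range_m / 1000.0) / 10.0)
        return (self.emission.optical_power_w
                * self.propagation.target_reflectivity
                * attenuation / max(range_m, 1.0) ** 2)

model = LidarParametricModel(
    EmissionModel(optical_power_w=0.1, wavelength_nm=905.0, wavefront="gaussian"),
    PropagationModel(atmospheric_attenuation_db_per_km=3.0, target_reflectivity=0.2),
    ReceptionModel(detector_sensitivity_a_per_w=0.5, noise_floor_w=1e-9),
)
print(f"Received power at 100 m: {model.received_power_w(100.0):.3e} W")
```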

Sensor models will enable the simulation of different sensor concepts and combinations and allow sensor design requirements to be validated without assembling the entire system. To be entirely reliable, these models still have a long way to go and need complementary features, such as artificial errors (missing points, non-uniform density) or systematic errors that would reproduce the “as-manufactured” sensor rather than the “as-designed” one.
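
A minimal sketch of what injecting such “as-manufactured” artifacts into an idealized point cloud could look like is shown below; the dropout rate, thinning threshold, and range bias are arbitrary placeholder values:

```python
# Sketch: degrading an "as-designed" LiDAR point cloud into an "as-manufactured"
# one by injecting missing points, non-uniform density, and a systematic range
# bias. All parameter values are arbitrary examples.

import random

def as_manufactured(points, dropout=0.05, range_bias_m=0.02, far_thinning=0.3):
    """points: list of (x, y, z) tuples from an idealized simulation."""
    degraded = []
    for x, y, z in points:
        r = (x * x + y * y + z * z) ** 0.5
        # Missing points: random dropout plus extra thinning at long range
        # (non-uniform density).
        p_drop = dropout + (far_thinning if r > 50.0 else 0.0)
        if random.random() < p_drop:
            continue
        # Systematic error: a constant radial range bias.
        scale = (r + range_bias_m) / max(r, 1e-6)
        degraded.append((x * scale, y * scale, z * scale))
    return degraded

ideal = [(random.uniform(-80, 80), random.uniform(-80, 80), random.uniform(-2, 3))
         for _ in range(10_000)]
print(len(as_manufactured(ideal)), "of", len(ideal), "points survive degradation")
```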

Optimizing Integration

Where is the optimal location to integrate LiDAR and cameras in a vehicle? Digital twin simulations can help answer this question. Radar systems are already well integrated into a vehicle body, but integrating LiDAR and camera systems is a challenge. The systems must perform well while accounting for factors like aesthetics, dust and dirt, and engine heat. For example, you could place LiDAR or a camera in a vehicle grille or bumper, but there is a risk that engine heat or road debris will interfere with optical performance. Integrating these systems into the vehicle headlight seems to be the best compromise, and some companies are already moving in this direction.

In addition to physically realistic virtual environments, an optimized integration of sensors into vehicles requires a multiphysics model for each sensor. This ensures that simulations factor in sensor features along with elements of the immediate environment, such as heat or parasitic light from a nearby headlamp.
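
As a hypothetical illustration of how such integration trade-offs could be scored in simulation, consider the sketch below. The candidate locations follow the examples discussed above, but the factor weights and penalty values are invented placeholders, tuned only to mirror the compromise described in the text:

```python
# Hypothetical sketch: scoring candidate LiDAR/camera mounting locations against
# the factors discussed above (field of view, engine heat, dirt/debris exposure,
# parasitic light from a nearby headlamp). All numbers are invented placeholders.

CANDIDATES = {
    # location: (field_of_view, heat_penalty, dirt_penalty, stray_light_penalty)
    "grille":    (0.80, 0.40, 0.35, 0.05),
    "bumper":    (0.75, 0.20, 0.45, 0.05),
    "headlight": (0.85, 0.15, 0.15, 0.30),
}

WEIGHTS = {"fov": 1.0, "heat": 0.8, "dirt": 0.6, "stray_light": 0.5}

def score(fov, heat, dirt, stray):
    return (WEIGHTS["fov"] * fov
            - WEIGHTS["heat"] * heat
            - WEIGHTS["dirt"] * dirt
            - WEIGHTS["stray_light"] * stray)

ranked = sorted(CANDIDATES.items(), key=lambda kv: score(*kv[1]), reverse=True)
for location, factors in ranked:
    print(f"{location:10s} score = {score(*factors):+.2f}")
```

A real digital twin would replace these hand-picked penalties with multiphysics simulation outputs, but the ranking step itself stays this simple.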

Avoiding Data Deluge

Last but not least, managing the flood of sensor data is an important issue, especially when self-driving cars are moving fast and need to react quickly to critical scenarios. Machine-learning algorithms used in self-driving vehicles extract insights from raw data to determine road conditions and make decisions. These insights may include pedestrian locations, road conditions, light levels, driving conditions, and objects around the vehicle.

With the development of better image sensors and vision processors, it has been possible to increase the performance of forward cameras, and LiDAR systems are providing increasingly accurate 3D maps. These factors produce enormous volumes of raw data that must be processed at the edge. One autonomous car could potentially process up to 4 TB of data per day, compared to the average internet user, who processes about 1.5 GB of data per day.
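
A quick back-of-the-envelope calculation, using only the 4 TB/day and 1.5 GB/day figures quoted above, makes the scale of that gap concrete:

```python
# Back-of-the-envelope check of the data volumes quoted above.
TB = 1024 ** 4  # bytes per terabyte (binary convention; decimal would be 1e12)
GB = 1024 ** 3

car_bytes_per_day = 4 * TB
user_bytes_per_day = 1.5 * GB

seconds_per_day = 24 * 3600
avg_throughput_mb_s = car_bytes_per_day / seconds_per_day / 1024 ** 2

print(f"Autonomous car: ~{avg_throughput_mb_s:.0f} MB/s sustained, on average")
print(f"That is roughly {car_bytes_per_day / user_bytes_per_day:.0f}x "
      f"an average internet user's daily traffic")
```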

Car sensing concept

Sensor models, combined with virtual environments and specific scenarios or use cases, could generate synthetic datasets to feed and train these algorithms. This would optimize their efficiency, latency, reliability, and, most important of all, their overall safety in any environment, not just in predefined virtual ones. All of these datasets must be processed in a fraction of a second, and this is another area where sensor models would help enable system-level optimization.
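
Here is a hedged sketch of how a sensor model, a virtual environment, and a scenario list might be combined into a synthetic training set. Every function below is a hypothetical stand-in for a real simulator’s API, assumed purely for illustration:

```python
# Hypothetical sketch: generating a synthetic, perfectly labeled dataset by
# pairing a sensor model with virtual scenarios. render_scene(), capture(),
# and ground_truth() are illustrative stand-ins, not a real simulator API.

import random

SCENARIOS = ["urban_night_rain", "highway_fog", "school_zone_daylight"]

def render_scene(scenario, frame):
    """Stand-in for a physically based renderer producing a virtual scene."""
    return {"scenario": scenario, "frame": frame, "pedestrians": random.randint(0, 5)}

def capture(scene):
    """Stand-in for the sensor model turning a scene into raw sensor data."""
    return [random.gauss(0.0, 1.0) for _ in range(16)]

def ground_truth(scene):
    """Perfect labels come for free in simulation; no manual annotation needed."""
    return scene["pedestrians"]

dataset = []
for scenario in SCENARIOS:
    for frame in range(100):
        scene = render_scene(scenario, frame)
        dataset.append((capture(scene), ground_truth(scene)))

print(f"Synthetic dataset: {len(dataset)} labeled samples across {len(SCENARIOS)} scenarios")
# A perception model would then be trained on `dataset`, with scenario coverage
# extended far beyond what field testing alone could capture.
```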

Going Beyond

In this blog post, I outlined why physically realistic sensor models are a key building block to autonomous driving: they enable virtual testing, integration optimization, and hardware-software system-level design. To quote Prof. Shashua, CEO and founder of Mobileye, “Autonomous vehicles will only succeed when all of the technological pieces are built as a single integrated system, enabling synergies among all of its parts. It is a formidable task to build the full stack from silicon up to the full self-driving system.”[1] Optical, electrical, and mechanical design software are critical tools to support this vision and build the physically realistic and high-fidelity models that we need.

References:

[1]
