Self-driving cars, once deployed at scale, promise to dramatically reduce traffic fatalities because they will never be sleepy, distracted, drunk, or aggressive, nor will they text, run red lights, or overlook bicyclists. They also promise to optimize routes to reduce fuel consumption and traffic congestion, and to reduce the number of vehicles on the road as more people switch from owning cars to using car services. However, this vision is still many years away, due to the complexity of the ever-changing built environment, the even greater complexity of the dynamic traffic environment (including irrational human behavior), and the challenge of sensing and understanding both environments in bad weather.
To be fully autonomous, cars need to localize themselves with high precision relative to their environment, recognize and react to events on the road beyond the reach of on-board sensors, and drive in accordance with the needs of passengers and other traffic participants. A fully developed system will therefore require a broad mix of geospatial and other technologies, some of which are already in use for car navigation or are being tested for collision avoidance, including the following.
Safe driving requires positioning with centimeter accuracy, far better than GNSS, even with RTK corrections, can consistently achieve under real-world conditions. LiDAR and cameras are therefore essential to close this accuracy gap. LiDAR's key contribution is measuring distances to other vehicles, fixed infrastructure, and unexpected obstacles. The best sensors can resolve details of a few centimeters at ranges of more than 100 meters. LiDAR is also better than cameras at detecting sharp edges, such as curbs. It is being tested to create 360-degree models of vehicle surroundings and to predict the movements of nearby pedestrians and vehicles. The key contribution of cameras is detecting and interpreting visual cues, such as road signs and the position and curvature of lane markers, which help keep the vehicle in its lane and support basic lane changes. In bad weather, radar is also essential, because radio waves, unlike light waves, can penetrate rain, snow, fog, and even dust, enabling radar to "see" when cameras and LiDAR cannot. However, radar sensors cannot resolve much detail, and cameras perform poorly in low light or glare.
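To make the accuracy gap concrete, the sketch below fuses a noisy GNSS fix with a more precise LiDAR map-matched position using simple inverse-variance weighting. This is an illustrative one-dimensional toy, not any vendor's actual algorithm; the function name, the ~0.5 m GNSS and ~3 cm LiDAR uncertainties, and the sample positions are all hypothetical values chosen only to show why adding a LiDAR layer pulls the fused estimate down to centimeter-level uncertainty.

```python
# Illustrative sketch (hypothetical values): fusing two independent
# 1-D position estimates by inverse-variance weighting, so the more
# precise sensor dominates the fused result.

def fuse_positions(gnss_pos, gnss_sigma, lidar_pos, lidar_sigma):
    """Fuse two position estimates (meters) with 1-sigma uncertainties.

    Returns the fused position and its (smaller) fused uncertainty.
    """
    w_gnss = 1.0 / gnss_sigma ** 2    # weight = inverse variance
    w_lidar = 1.0 / lidar_sigma ** 2
    fused = (w_gnss * gnss_pos + w_lidar * lidar_pos) / (w_gnss + w_lidar)
    fused_sigma = (1.0 / (w_gnss + w_lidar)) ** 0.5
    return fused, fused_sigma

# GNSS alone: ~0.5 m uncertainty; LiDAR map matching: ~0.03 m.
fused, sigma = fuse_positions(gnss_pos=12.40, gnss_sigma=0.5,
                              lidar_pos=12.07, lidar_sigma=0.03)
print(fused, sigma)  # fused estimate sits close to the LiDAR's
```

In this toy example the fused uncertainty ends up slightly below the LiDAR's own 3 cm, illustrating the general point: the high-precision sensor dominates, and GNSS alone cannot deliver the centimeter accuracy safe driving requires.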
By the standards of passenger vehicles, current-generation LiDAR units are still bulky and very expensive. However, once mass production begins, their cost is likely to drop quickly, from tens of thousands of dollars to hundreds of dollars, and their size will probably shrink just as fast. Mass demand will also ease current production bottlenecks. One development that would make LiDAR units considerably cheaper, smaller, and more robust is a shift from units that rely on spinning mirrors to direct the laser beams to solid-state units that steer the beams electronically, without any moving parts.
The preferred mix of cameras, LiDAR, and other technologies for self-driving cars depends largely on the ultimate goal. Tesla aims to enable its Autopilot system to automate 90 percent of driving within just a couple of years. That goal may not require LiDAR. However, automation is very hard to implement in the remaining 10 percent of driving scenarios. Google has been working for years on a fully autonomous car that will completely eliminate human error. That goal will almost certainly require LiDAR.
The Applanix Autonomy Development Platform helps advance such projects at every stage, providing the tools and engineering expertise to support and augment driverless vehicle development programs. Custom Development Kits and Production Kits are available based on customer needs and stage of development.