Understanding Simultaneous Localization and Mapping (SLAM)
By using the inputs from cameras and sensors such as LiDAR, SLAM algorithms can construct a map of a vehicle’s immediate surroundings while simultaneously computing its estimated position within it.

Using simultaneous localization and mapping (SLAM) algorithms and sensors, an autonomous vehicle can start in an unknown location or environment and use only relative observations to incrementally build a map of the world around it, while simultaneously computing a bounded estimate of its location. In fact, to be truly autonomous, cars and trucks will have to be able to navigate without an accurate map, or without any map at all, in GNSS-denied areas, such as parking garages and tunnels.


The Advent of Simultaneous Localization and Mapping 

Traditionally, the preferred navigation solution for vehicles traveling through GNSS-denied areas has been to couple the GNSS receiver with an inertial navigation system (INS). The two devices are complementary because the INS can continue to guide the vehicle where GNSS signals are not available. Then, when the GNSS receiver re-acquires signals from a sufficient number of satellites, it can re-initialize the INS, which is susceptible to drift.
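The complementary behavior described above can be sketched in a few lines. This is a hypothetical illustration, not a real fusion filter: the drift rate and outage duration are invented values, and a real coupled system would blend the two estimates rather than simply reset.

```python
# Toy sketch of loosely coupled GNSS/INS behavior: the INS position error
# grows while GNSS is denied, then a GNSS fix re-initializes the INS.
# All numbers are illustrative assumptions.

drift_per_second = 0.05   # assumed metres of INS drift accumulated per second
ins_error = 0.0
log = []

for t in range(1, 11):
    gnss_available = t > 7          # assume signals return after a 7 s outage
    if gnss_available:
        ins_error = 0.0             # GNSS fix re-initializes the INS estimate
    else:
        ins_error += drift_per_second
    log.append(round(ins_error, 2))

print(log)   # error grows during the outage, then snaps back to zero
```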

However, focus is increasingly shifting to image-based SLAM algorithms, which can build up a map from scratch and refine it from sensor measurements, while also constantly recalculating the vehicle’s estimated position. SLAM uses successive, overlapping images to compute a mobile camera’s position and orientation, then processes that information to reconstruct the surrounding environment.
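The predict-then-refine loop at the heart of SLAM can be illustrated with a toy example. This is a minimal sketch under simplifying assumptions: a single known landmark, a fixed blending gain in place of the weighted update a real filter or pose-graph optimizer would compute, and invented odometry and observation values.

```python
import math

def predict(pose, distance, turn):
    """Dead-reckoning step: advance the (x, y, heading) pose using odometry."""
    x, y, heading = pose
    heading += turn
    return (x + distance * math.cos(heading),
            y + distance * math.sin(heading),
            heading)

def correct(pose, landmark, observed_range, observed_bearing, gain=0.5):
    """Nudge the pose toward agreement with a relative landmark observation."""
    x, y, heading = pose
    # Where the landmark *should* appear given the predicted pose.
    dx, dy = landmark[0] - x, landmark[1] - y
    expected_range = math.hypot(dx, dy)
    expected_bearing = math.atan2(dy, dx) - heading
    # Move the estimate part-way toward the observation (a crude stand-in
    # for the statistically weighted update a real SLAM system performs).
    range_error = observed_range - expected_range
    bearing_error = observed_bearing - expected_bearing
    x -= gain * range_error * math.cos(heading + expected_bearing)
    y -= gain * range_error * math.sin(heading + expected_bearing)
    heading -= gain * bearing_error
    return (x, y, heading)

pose = (0.0, 0.0, 0.0)
landmark = (10.0, 0.0)           # mapped on a previous pass
pose = predict(pose, 5.2, 0.0)   # odometry overshoots: truth was 5.0 m
# Re-observing the landmark (actually 5 m ahead, dead center) pulls the
# estimate back toward the true position.
pose = correct(pose, landmark, observed_range=5.0, observed_bearing=0.0)
print(pose)
```

The same cycle runs continuously in a real system, with the map itself (the landmark positions) being refined alongside the pose.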


SLAM Usage and Applications

Originally developed decades ago to help robots navigate, SLAM is already being applied to 3D-sensing mobile devices to map indoor spaces and construction sites. It provides both position and orientation using the inputs from cameras and distance sensors, such as LiDAR, as well as wheel encoders. The system builds a model that enables a machine to determine its position, while also detecting any obstacles in the scene. It then uses that information to refine the map as the machine moves. Any known surfaces, such as interior walls, are used as navigational aids to augment the data from the positioning sensors.

SLAM algorithms are not intended for perfect accuracy; instead, they provide a best-fit model of geospatial information using available resources to improve and refine relative accuracy. Besides its increasing use in self-driving cars, SLAM has been applied to unmanned aerial and underwater vehicles, planetary rovers, newer domestic robots, and in some cases the human body.


Known Challenges of SLAM

The technique presents many challenges. First, none of the data sources are fully accurate or reliable. For example, the quality of a video stream can vary greatly depending on the lighting conditions, and the distance measured by wheel encoders quickly diverges from the actual distance traveled when the wheels slip. Second, a map is only good until somebody introduces a new obstacle or a detour. Third, SLAM requires tight coupling and complex calibration of the sensors used. Fourth, it involves heavy processing, which rapidly drains batteries and delays the mapping process when the area being mapped gets too large. Fifth, it struggles to maintain tracking in areas with few features, such as gray cement walls.
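The wheel-encoder problem mentioned above is easy to see numerically. In this hypothetical sketch (the step size and slip ratio are invented values), each slipping tick the encoder counts motion that never happened, so the dead-reckoned distance steadily diverges from ground truth:

```python
# Illustrative wheel-slip example: dead reckoning from encoders overestimates
# the distance traveled when a fixed fraction of each step is lost to slip.

wheel_step = 0.10     # assumed metres the encoder reports per tick
slip_ratio = 0.15     # assumed fraction of commanded motion lost to slip

encoder_distance = 0.0
actual_distance = 0.0
for tick in range(1000):
    encoder_distance += wheel_step                     # what the encoder counts
    actual_distance += wheel_step * (1.0 - slip_ratio)  # what really happened

error = encoder_distance - actual_distance
print(f"encoder: {encoder_distance:.1f} m, actual: {actual_distance:.1f} m, "
      f"error: {error:.1f} m")
# encoder: 100.0 m, actual: 85.0 m, error: 15.0 m
```

This unbounded growth of dead-reckoning error is exactly why SLAM re-observes mapped features: each re-observation anchors the estimate and keeps the error bounded.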

Nevertheless, the sharp drop in the price of imaging sensors and increases in computing power have dramatically increased the adoption of SLAM. Ultimately, full autonomy will require the integration of GNSS, INS, and SLAM.


More on SLAM

To better understand how SLAM algorithms track object movement, and how they are used for precise positioning with Indoor Mobile Mappers, read our related article explaining SLAM in greater detail.
