SLAM is short for Simultaneous Localization And Mapping. Lifewire describes SLAM as technology by which a robot or device can create a map of its surroundings and orient itself within that map in real time. SLAM is particularly important for virtual and augmented reality (VR/AR) as well as robotics. Let’s explore what exactly SLAM is, how it works, and its varied applications in autonomous systems.
What Is SLAM?
Break down the acronym and you get the two prominent elements of SLAM: localization and mapping. The device tries to simultaneously localize itself – determine the position of a sensor or object relative to its surroundings – and map the layout of the environment it is in. A range of algorithms exist that solve both problems at once.
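To make the two halves of the problem concrete, here is a minimal, hypothetical sketch of one iteration of a SLAM loop in Python. It is deliberately one-dimensional and greatly simplified: the function name, the odometry value, and the "door" landmark are all illustrative, not part of any real SLAM library.

```python
def slam_step(pose, landmarks, odometry, observations):
    """One toy SLAM iteration: predict the new pose from odometry,
    then place observed landmarks on the map relative to that pose."""
    # 1. Localization: dead-reckon the pose forward using odometry.
    pose = pose + odometry
    # 2. Mapping: each observation gives a landmark's position
    #    relative to the device, so anchor it in world coordinates.
    for landmark_id, relative_pos in observations.items():
        landmarks[landmark_id] = pose + relative_pos
    return pose, landmarks

pose, landmarks = 0.0, {}
pose, landmarks = slam_step(pose, landmarks, odometry=1.5,
                            observations={"door": 2.0})
```

A real system would also feed the landmark observations back into the pose estimate (closing the loop), which is what makes SLAM a joint optimization rather than two separate steps.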
How It Works
SLAM is, at its core, an optimization problem. The device’s sensors collect visual data from the physical world in the form of reference points. These points help the machine distinguish between floors, walls and other obstacles. Google’s AR platform Tango (since discontinued) used advanced SLAM to interact with its surroundings.
Measurements are taken continuously as the device moves through its surroundings, and SLAM accounts for the inaccuracies of the measurement process by modelling the ‘noise’. Different sensors call for different algorithms, and SLAM largely relies on mathematical and statistical ones. One of these is the Kalman filter. The Kalman filter takes into account a series of measurements over time, rather than just a single one, and uses them to estimate unknown variables – in our case, the positions of unknown points on 3D objects in the machine’s field of view. Interestingly, the Kalman filter has also been used to model the central nervous system, for sensory estimation and motor control.
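The idea of blending a prediction with a noisy measurement can be shown in a few lines. Below is a minimal one-dimensional Kalman filter that estimates the position of a static landmark from noisy readings; the function name, noise values, and the landmark position of 5.0 are all assumptions made for the sake of the example.

```python
import random

def kalman_1d(measurements, process_var=1e-3, meas_var=0.5):
    """Minimal 1D Kalman filter: estimate a static position
    from a series of noisy measurements."""
    x, p = 0.0, 1.0  # initial state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict: the state is static, so only uncertainty
        # grows, by the assumed process noise.
        p += process_var
        # Update: blend prediction and measurement,
        # weighted by the Kalman gain.
        k = p / (p + meas_var)   # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy readings of a landmark that is really at 5.0
random.seed(0)
readings = [5.0 + random.gauss(0, 0.5) for _ in range(50)]
estimates = kalman_1d(readings)
```

Early estimates swing with each noisy reading, but as the gain shrinks the estimate settles near the true position, which is exactly the behaviour SLAM exploits when refining landmark positions over many observations.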
Below, I describe in more detail how Google uses SLAM for its self-driving cars.
Why Do We Need SLAM?
It is easy to navigate spaces that are already known. But what about unknown terrain? SLAM lets a device localize itself within an unknown environment and navigate through spaces for which no prior map or GPS signal is available. It is best suited to situations with no prior reference points.
SLAM is the remarkable technology behind Google’s driverless cars. The autonomous system that runs these self-driving cars uses a roof-mounted LIDAR sensor to create a 3D map of the car’s surroundings, and it does so within about 10 seconds – quite a feat. This quick response is essential, since the machine in question is moving at high speed. These maps are overlaid on existing Google Maps data. From these readings, the autonomous system makes driving decisions using probabilistic methods such as Bayesian filters and Monte Carlo models. Monte Carlo methods are widely used in risk analysis to estimate the probabilities of different outcomes; here they let the system weigh uncertainty before committing to a driving decision.
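A Monte Carlo approach to localization can be illustrated with a toy particle filter. The sketch below tracks a robot moving along a line using noisy distance readings: a cloud of random "particles" is predicted forward, weighted by how well each explains the latest reading, and resampled. Every name and numeric value here is an assumption for illustration, not taken from any real autonomous-driving stack.

```python
import math
import random

def monte_carlo_localize(start_pos, n_particles=500, n_steps=15,
                         move=1.0, sensor_noise=0.5):
    """Toy 1D Monte Carlo (particle-filter) localization."""
    random.seed(1)
    # Start with particles spread uniformly: we don't know the pose.
    particles = [random.uniform(0, 20) for _ in range(n_particles)]
    pos = start_pos
    for _ in range(n_steps):
        pos += move                                   # true motion
        particles = [p + move + random.gauss(0, 0.1)  # predict step
                     for p in particles]
        z = pos + random.gauss(0, sensor_noise)       # noisy reading
        # Weight particles by how well they explain the reading.
        weights = [math.exp(-(p - z) ** 2 / (2 * sensor_noise ** 2))
                   for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample: likely particles survive and multiply.
        particles = random.choices(particles, weights=weights,
                                   k=n_particles)
    estimate = sum(particles) / n_particles
    return estimate, pos

estimate, actual = monte_carlo_localize(3.0)
```

The particle cloud collapses around the true position after a few steps; real systems apply the same idea in higher dimensions and with far richer sensor models.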
Navigation on Mars also uses SLAM tech wherein landmarks have to be revisited several times. Wikitude, a company working on augmented reality software tools, makes use of this technology for its AR applications to recognise objects and to overlay digital interactive augmentations.
Autonomous systems are the obvious application for SLAM, and companies like Cortica have already built their signature localization-and-mapping technology for driverless cars in Israel. Every tech giant (Google, Apple, Facebook, etc.) already has a hand in SLAM. Apple, in its push into the AR/VR space, built ARKit, which depends heavily on SLAM.
SLAM also sees significant use in AR applications: augmented reality only succeeds when virtual content interacts convincingly with the real objects in the surroundings.