What is ARI, the social robot by PAL Robotics?
For humans, finding the way around a new place may involve a map or GPS. In the world of robotics, the technology robots use to navigate is known as SLAM (Simultaneous Localization and Mapping). SLAM research started in the 1980s, and, much like a person navigating an unfamiliar place by looking for landmarks, a robot using SLAM builds its own map as it moves around, while at the same time estimating its position within that map by aligning sensor data.
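For readers who want the formal picture, probabilistic SLAM is usually stated as estimating the joint posterior over the robot's trajectory and the map, given all sensor observations and control inputs (this is the textbook formulation, not anything specific to our robots):

$$p(x_{1:t}, m \mid z_{1:t}, u_{1:t})$$

Here $x_{1:t}$ is the sequence of robot poses, $m$ the map, $z_{1:t}$ the sensor observations and $u_{1:t}$ the motion commands; the "simultaneous" in SLAM comes from estimating the poses and the map together.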
Successful SLAM is considered one of the fundamental challenges robots must solve to become truly autonomous. A robot is expected to perform complicated tasks that require navigating complex and dynamic environments without human input. To navigate autonomously, plan a route and carry out these tasks efficiently, the robot needs to localize itself in its environment, which is exactly what SLAM provides. You can also check our definitive guide to SLAM part 1 and part 2.
There are different methods to perform SLAM with robots. In this blog post we will introduce you to ARI, our social robot, and to its tutorials on mapping and autonomous navigation with Slam_toolbox, along with how to use them.
ARI is a high-performance social robot ready to interact with customers in business, retail and event settings. The robot is also used in various European research projects, where developments in Artificial Intelligence (AI) are ongoing, with the future aim of deploying ARI to support users in hospitals or even their homes.
Our humanoid social robot has a voice and facial recognition system and speaks in more than 30 languages. Users can access information through the robot’s touch screen, while ARI’s LCD eyes, movement and gestures add extra entertainment.
From a research perspective, ARI is used in projects looking at Human-Robot Interaction, perception, cognition and navigation. The social robot is able to localize itself inside a building and move around while avoiding obstacles in its path, and it has its own wiki ROS simulator, which you can use to get started with ARI’s remote control.
What is Slam_toolbox and how does PAL Robotics use it?
Now that we have covered what SLAM stands for and introduced you to ARI, let’s learn about the previously mentioned Slam_toolbox. Built by Steve Macenski on top of Karto SLAM, Slam_toolbox consists of a set of tools and capabilities for 2D SLAM.
Slam_toolbox incorporates information from laser scanners in the form of LaserScan messages and TF transforms from odom->base_link, and creates a 2D occupancy grid of the free and occupied space.
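To make that output concrete, here is a minimal sketch in ROS 1 Python (the ecosystem ARI’s tutorials are based on) that subscribes to the occupancy grid and summarizes its contents. The /map topic name follows the usual ROS convention and is an assumption about your particular setup:

```python
#!/usr/bin/env python
# Minimal sketch: inspect the 2D occupancy grid published during SLAM.
# The '/map' topic name is the common ROS convention - an assumption here.
import rospy
from nav_msgs.msg import OccupancyGrid

def map_callback(grid):
    info = grid.info
    # Each cell holds an occupancy value from 0 (free) to 100 (occupied),
    # or -1 for space the robot has not explored yet.
    free = sum(1 for c in grid.data if c == 0)
    occupied = sum(1 for c in grid.data if c > 50)
    unknown = sum(1 for c in grid.data if c == -1)
    rospy.loginfo('map %dx%d @ %.2f m/cell: %d free, %d occupied, %d unknown',
                  info.width, info.height, info.resolution,
                  free, occupied, unknown)

rospy.init_node('map_inspector')
rospy.Subscriber('/map', OccupancyGrid, map_callback)
rospy.spin()
```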
ARI’s navigation tutorials teach you how to create a map of the environment using the D435i RGB-D camera in ARI’s torso, and how to make ARI navigate autonomously in the resulting map while avoiding obstacles. It’s an easy way for any user to learn more about the robot’s navigation skills. Not to forget, the physical ARI uses a different sensor: the Intel RealSense D435i’s infrared camera, installed in the front torso of the robot, combined with odometry and IMU (Inertial Measurement Unit) data to achieve a robust visual SLAM system, which we will cover in a later post.
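As a taste of what navigating autonomously in the built map looks like in code, here is a hedged sketch that sends a single goal through the standard ROS navigation stack’s move_base action server. The action name, map frame and goal coordinates are assumptions, and ARI’s own tutorials may wire this up differently:

```python
#!/usr/bin/env python
# Hedged sketch: send one navigation goal once a map is available.
# Assumes the standard 'move_base' action server and 'map' frame; these
# are conventions of the common ROS navigation stack, not ARI specifics.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_nav_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'   # goal expressed in the map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0     # example: 2 m from the map origin
goal.target_pose.pose.orientation.w = 1.0  # facing along +x

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('Navigation finished with state %d', client.get_state())
```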
How can I get started and what can I learn from the Slam_toolbox tutorial for ARI?
To be able to follow the Slam_toolbox tutorial yourself, you will need to download and install ARI’s simulator. Visit our previous blog post for a step-by-step guide to installing the ARI simulation pack.
You can also find more information on how to install the ARI simulation pack by visiting the wiki ROS website.
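Once the installation is done, a quick check from Python can confirm that ROS can find the simulation packages. Note that the package name ari_gazebo used below is a hypothetical example; substitute the actual package names from the installation guide:

```python
# Hedged sanity check: ask rospkg where a simulation package lives.
# 'ari_gazebo' is a hypothetical package name; use the names from the guide.
import rospkg

try:
    path = rospkg.RosPack().get_path('ari_gazebo')
    print('Simulation package found at: %s' % path)
except rospkg.ResourceNotFound:
    print('Package not found - check that the simulation pack is installed '
          'and that your ROS workspace is sourced.')
```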
How can I create a map with Slam_toolbox?
Finally, let’s talk about the tutorial on creating a map for ARI using Slam_toolbox. This tutorial shows you how to create a map of the environment using the D435i RGB-D camera in ARI’s torso. An RGB-D sensor is a depth-sensing device that works in association with an RGB camera, augmenting the conventional image with per-pixel depth information (related to the distance from the sensor). The point cloud data from the torso’s RGB-D camera is transformed into laser scans by the pointcloud_to_laserscan package in order to perform mapping with Slam_toolbox and make the robot move around while avoiding obstacles, whether permanent or appearing along the way.
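To illustrate what that conversion is doing under the hood, here is a simplified, hedged sketch of the same idea in ROS 1 Python: it flattens a horizontal band of the camera’s point cloud into a planar LaserScan. Topic names are assumptions, and the real pointcloud_to_laserscan package is configurable C++ that handles frames and filtering far more carefully:

```python
#!/usr/bin/env python
# Simplified sketch of the pointcloud_to_laserscan idea: flatten a horizontal
# band of a PointCloud2 into a planar LaserScan. Assumes the cloud is already
# expressed in a base-like frame (x forward, z up); camera optical frames
# differ, which the real package handles via TF. Topic names are assumptions.
import math
import rospy
from sensor_msgs import point_cloud2
from sensor_msgs.msg import LaserScan, PointCloud2

MIN_Z, MAX_Z = -0.1, 0.1                          # vertical band to keep (m)
ANGLE_MIN, ANGLE_MAX = -math.pi / 2, math.pi / 2  # scan field of view (rad)
NUM_BINS = 360
INCREMENT = (ANGLE_MAX - ANGLE_MIN) / NUM_BINS

def cloud_callback(cloud):
    ranges = [float('inf')] * NUM_BINS
    points = point_cloud2.read_points(cloud, field_names=('x', 'y', 'z'),
                                      skip_nans=True)
    for x, y, z in points:
        if not MIN_Z <= z <= MAX_Z:
            continue                              # outside the horizontal band
        angle = math.atan2(y, x)
        if not ANGLE_MIN <= angle < ANGLE_MAX:
            continue                              # outside the field of view
        i = int((angle - ANGLE_MIN) / INCREMENT)
        ranges[i] = min(ranges[i], math.hypot(x, y))  # closest hit per bin

    scan = LaserScan()
    scan.header = cloud.header
    scan.angle_min, scan.angle_max = ANGLE_MIN, ANGLE_MAX
    scan.angle_increment = INCREMENT
    scan.range_min, scan.range_max = 0.1, 10.0
    scan.ranges = ranges
    scan_pub.publish(scan)

rospy.init_node('cloud_to_scan_sketch')
scan_pub = rospy.Publisher('scan', LaserScan, queue_size=1)
rospy.Subscriber('points', PointCloud2, cloud_callback)
rospy.spin()
```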
For an in-depth, step-by-step tutorial, visit the wiki ROS website section “Create a map with slam_toolbox”.
Keen to learn more? There are more ARI tutorials for you
Did you know that we also have wiki ROS tutorials for many of our other robots? You can check them out here.
If you have any questions, don’t hesitate to get in touch with us. Find out more on the PAL Robotics website.