Mapping an indoor environment with LiDAR and ROS


After several weeks of API design, component selection, and simulation, I’m very excited to present the first prototype of our indoor autonomous robot. I’m leading a five-person team to develop a robot with a front-facing web interface, capable of navigating a previously mapped indoor environment and avoiding dynamic obstacles (people!). Our chief sensor is a single-beam LiDAR, and we will be adding wheel encoders and an IMU in the next week or so.

Thankfully, I was able to leverage a significant amount of my work on the picar project for this, as I’m reusing parts of the chassis for the first prototype, as well as some of the software. We’ve ported the code from straight Python into ROS nodes, using the pub/sub model to control the robot via keyboard input. Additionally, we have a server on GCE that serves the front-facing control panel, allowing a user to flip a ‘kill switch’ that disables keyboard input (in the future, the control panel will also allow destination selection). I’ll discuss more about the low-level operation of the vehicle in another post.
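To give a flavor of the pub/sub setup, here’s a minimal sketch of a keyboard teleop node. The topic name (/cmd_vel), key bindings, and speeds are placeholders rather than our actual code:

```python
#!/usr/bin/env python
# Minimal keyboard teleop sketch: read single keypresses and publish
# velocity commands over a ROS topic. Topic name, keys, and speeds are
# placeholders, not our actual node.
import sys
import termios
import tty

import rospy
from geometry_msgs.msg import Twist

KEY_BINDINGS = {
    'w': (0.2, 0.0),   # forward: linear m/s, angular rad/s
    's': (-0.2, 0.0),  # reverse
    'a': (0.0, 0.5),   # turn left
    'd': (0.0, -0.5),  # turn right
}

def read_key():
    """Read one keypress from stdin without waiting for Enter."""
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)

def main():
    rospy.init_node('keyboard_teleop')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    while not rospy.is_shutdown():
        key = read_key()
        if key == '\x03':  # Ctrl-C arrives as a raw byte in raw mode
            break
        linear, angular = KEY_BINDINGS.get(key, (0.0, 0.0))
        cmd = Twist()
        cmd.linear.x = linear
        cmd.angular.z = angular
        pub.publish(cmd)

if __name__ == '__main__':
    main()
```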

The interesting part

After a small trial run at my house, my teammate Kelvin and I went to map our target environment: the engineering building at UCSC. This was a two-step process: first, drive the car around the environment while saving the LiDAR data to a rosbag file; second, replay the rosbag while running the hector_slam package and rviz to view the map. Here are videos of the two parts:
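Step one is just `rosbag record /scan` in practice, but in code terms it amounts to writing every incoming LaserScan message to a bag. A rough sketch using rosbag’s Python API (topic and file names are placeholders):

```python
#!/usr/bin/env python
# Sketch of the recording step: subscribe to the LiDAR topic and write
# every scan to a bag file for later replay. `rosbag record /scan` does
# the same thing; topic and file names here are placeholders.
import rosbag
import rospy
from sensor_msgs.msg import LaserScan

bag = rosbag.Bag('engineering_building.bag', 'w')

def on_scan(msg):
    # Store each scan under its original topic so `rosbag play` republishes it there.
    bag.write('/scan', msg)

def main():
    rospy.init_node('scan_recorder')
    rospy.Subscriber('/scan', LaserScan, on_scan)
    rospy.spin()   # record until Ctrl-C
    bag.close()    # flush everything to disk

if __name__ == '__main__':
    main()
```

Step two is then just replaying that bag with `rosbag play` while hector_slam and rviz are up, which is what the second video shows.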

In the second video, there’s a significant amount of noise visible on the top right side of the map. In order to boot up all the ROS nodes in the correct order (and ensure keyboard events went to the right window), we needed to have a screen hooked up to the Pi. This meant that we started scanning while in the computer lab, and then unplugged the car and walked it into the hallway before driving it. Given the rough initialization process of our first day of testing, I’m surprised the data turned out as well as it did. With proper .launch files and shell scripts, along with encoders and an IMU, I’m fairly optimistic about our mapping accuracy.
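The launch-file fix is mostly about being able to bring everything up headlessly over SSH in one command. As a rough sketch of what that could look like using roslaunch’s Python API (the launch file path is a placeholder):

```python
#!/usr/bin/env python
# Sketch: start the whole mapping stack from one script so nothing
# depends on having a monitor plugged into the Pi. The launch file
# path is a placeholder for wherever the real launch file lives.
import roslaunch

def main():
    uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
    roslaunch.configure_logging(uuid)
    launch = roslaunch.parent.ROSLaunchParent(
        uuid, ['/home/pi/catkin_ws/src/robot/launch/mapping.launch'])
    launch.start()
    try:
        launch.spin()      # block until Ctrl-C
    finally:
        launch.shutdown()  # stop all child nodes cleanly

if __name__ == '__main__':
    main()
```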

Not captured on video was the third part of the process: given this map file and an initial pose, use subsequent LiDAR readings to create a rough odometry path. Kelvin and I tried this out and it went surprisingly well. As we drove the robot down the hallway and from side to side, the rviz model stayed accurate. Over time it drifted, though, which brings us to our future work below.
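As an aside, the usual ROS convention for seeding a localizer with an initial pose (it’s what rviz’s 2D Pose Estimate tool and AMCL use) is to publish a geometry_msgs/PoseWithCovarianceStamped on /initialpose. A rough sketch with placeholder coordinates:

```python
#!/usr/bin/env python
# Sketch: seed a localizer with a starting pose via the conventional
# /initialpose topic. The coordinates and heading below are placeholders.
import math

import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def main():
    rospy.init_node('set_initial_pose')
    # latch=True so a late-starting subscriber still receives the pose
    pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped,
                          queue_size=1, latch=True)

    msg = PoseWithCovarianceStamped()
    msg.header.frame_id = 'map'
    msg.header.stamp = rospy.Time.now()
    msg.pose.pose.position.x = 1.0   # placeholder x in the map frame (meters)
    msg.pose.pose.position.y = 2.0   # placeholder y (meters)
    yaw = 0.0                        # placeholder heading (radians)
    msg.pose.pose.orientation.z = math.sin(yaw / 2.0)
    msg.pose.pose.orientation.w = math.cos(yaw / 2.0)

    pub.publish(msg)
    rospy.sleep(1.0)  # give subscribers a moment to receive it before exiting

if __name__ == '__main__':
    main()
```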

AMCL (Particle Filtering)

Instead of simply accumulating an odometry trajectory, an obvious improvement is to compare the current LiDAR scan against the previously created map and build a heatmap of likely poses, which is exactly what AMCL’s particle filter does. That’s our next step, and I’ll post an update as soon as possible.
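To make the idea concrete, here’s a toy version of the weighting-and-resampling step at the heart of a particle filter. It’s deliberately simplified (AMCL also applies a motion model and smooths the map into a likelihood field), and the grid representation and scoring below are placeholders rather than AMCL’s actual implementation:

```python
import numpy as np

def particle_filter_step(particles, scan_ranges, scan_angles, grid, resolution, origin):
    """One toy weighting + resampling step.

    particles:   (N, 3) array of [x, y, theta] pose hypotheses in the map frame.
    scan_ranges: 1-D array of LiDAR ranges (meters).
    scan_angles: beam angles relative to the robot's heading (radians).
    grid:        occupancy grid from the saved map, 1 = occupied, 0 = free.
    resolution:  meters per grid cell; origin: (x, y) of cell (0, 0) in the map frame.
    """
    weights = np.zeros(len(particles))
    for i, (x, y, theta) in enumerate(particles):
        score = 0.0
        for r, a in zip(scan_ranges, scan_angles):
            # Project the beam endpoint into the map frame for this pose hypothesis.
            ex = x + r * np.cos(theta + a)
            ey = y + r * np.sin(theta + a)
            gx = int((ex - origin[0]) / resolution)
            gy = int((ey - origin[1]) / resolution)
            if 0 <= gx < grid.shape[1] and 0 <= gy < grid.shape[0]:
                # Reward poses whose beams end on occupied cells of the saved map.
                score += grid[gy, gx]
        weights[i] = score + 1e-9  # avoid an all-zero weight vector
    weights /= weights.sum()
    # Resample: poses that explain the scan well get duplicated, poor ones die off.
    keep = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[keep]
```

Hypotheses that explain the scan well get duplicated during resampling, so over time the particle cloud concentrates around the true pose and the ‘heatmap’ collapses to a point.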