
NASA LiDAR Point Cloud Semantic Segmentation

The video above shows the custom ROS2 package pcl_processor running a convolutional neural network to create a semantically rich point cloud map from live LiDAR data in real time during a rover test drive.

Project Description

This is a project I worked on as a Robotics/GNC Engineering Intern in the Guidance, Navigation, and Control Branch at NASA Marshall Space Flight Center. The project falls under the Artemis Human Landing System (HLS) program as part of the effort to prepare for a sustained presence on the lunar surface. All information and materials on this page and its immediate links are presented with explicit permission from NASA.

The LiDAR point cloud semantic segmentation system is a suite of software packages and libraries that I designed and built from scratch for the GNC team at NASA MSFC to enable autonomous extraterrestrial navigation via LiDAR. The system allows for the creation of a semantically rich point cloud map from live, recorded, or simulated LiDAR data in real time.

There are two major parts to the system: pcl_processor, a custom ROS2 package that manages the data source and information flow for real-time segmentation, and semseglib, a custom Python library that consists of point cloud processing functions and wrapper classes to interface with any segmentation module. In the test run above, semseglib is configured to use the convolutional neural network SqueezeSegV3 as the segmentation module; the library can also interface with the transformer-based PTV3. Similarly, pcl_processor is designed to interpret live, recorded, and simulated LiDAR data from any hardware or simulator. Synthetic data collection from ROS2 Gazebo is currently supported, with expansions to CARLA and other simulators planned, allowing data collection from custom environments to train custom segmentation models.
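
To make the wrapper-class idea concrete, below is a minimal sketch of what such an interface could look like. This is an illustrative assumption, not the actual semseglib API: the names SegmentationModule, DummyModule, and segment are hypothetical, and a real module would run SqueezeSegV3 or PTV3 inference in place of the stand-in shown here.

from abc import ABC, abstractmethod

import numpy as np


class SegmentationModule(ABC):
    """Hypothetical uniform interface: any model that maps points to
    per-point labels can be swapped in behind this class."""

    @abstractmethod
    def infer(self, points: np.ndarray) -> np.ndarray:
        """Take an (N, 4) array of x, y, z, intensity; return (N,) class IDs."""


class DummyModule(SegmentationModule):
    """Stand-in model that labels every point as class 0, useful for
    exercising the data flow without loading network weights."""

    def infer(self, points: np.ndarray) -> np.ndarray:
        return np.zeros(len(points), dtype=np.int32)


def segment(cloud: np.ndarray, module: SegmentationModule) -> np.ndarray:
    """Append labels as a fifth column, yielding a semantically rich cloud."""
    return np.column_stack([cloud, module.infer(cloud)])


scan = np.random.rand(1024, 4).astype(np.float32)  # fake LiDAR scan
labeled = segment(scan, DummyModule())
print(labeled.shape)  # (1024, 5): x, y, z, intensity, label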

This system has been deployed on test rovers and quadrotors at NASA MSFC and is currently undergoing further development.

Methods

  • Developed semantic segmentation capabilities for LiDAR point clouds to enhance autonomous navigation and control algorithms for in-situ resource utilization by robotic agents

  • Created a custom Python library and API for processing point clouds and interfacing with segmentation modules

  • Built interfaces for open-source CNN and transformer-based deep learning models for semantic segmentation on custom data

  • Developed a new ROS2 package for integrating real-time semantic inferencing and visualization capabilities on test drones and rovers (a minimal node sketch follows this list)

  • Established simulated data-collection pipeline for future data collection in exotic environments

  • Enabled real-time generation of semantically rich point clouds on test equipment and established a foundational API for testing and training semantic segmentation models, supporting future algorithmic development
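
As referenced above, here is a minimal sketch of a node in the spirit of pcl_processor, assuming ROS2 Humble or newer with rclpy and sensor_msgs_py available. The topic names and the zero-label placeholder inference are illustrative assumptions, not the actual package internals.

import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2, PointField
from sensor_msgs_py import point_cloud2


class SegmentationNode(Node):
    """Subscribes to raw LiDAR scans, labels each point, and republishes
    the result as a labeled point cloud."""

    def __init__(self):
        super().__init__('pcl_segmentation_sketch')
        self.sub = self.create_subscription(
            PointCloud2, '/lidar/points', self.callback, 10)  # assumed topic
        self.pub = self.create_publisher(
            PointCloud2, '/lidar/points_labeled', 10)  # assumed topic

    def callback(self, msg: PointCloud2):
        # Read x, y, z into an (N, 3) float32 array.
        pts = point_cloud2.read_points_numpy(
            msg, field_names=('x', 'y', 'z'), skip_nans=True)
        # Placeholder inference: a real node would call a semseglib
        # segmentation module here; we assign class 0 to every point.
        labels = np.zeros((pts.shape[0], 1), dtype=np.float32)
        fields = [
            PointField(name=name, offset=4 * i,
                       datatype=PointField.FLOAT32, count=1)
            for i, name in enumerate(('x', 'y', 'z', 'label'))
        ]
        cloud = point_cloud2.create_cloud(
            msg.header, fields, np.hstack([pts, labels]))
        self.pub.publish(cloud)


def main():
    rclpy.init()
    rclpy.spin(SegmentationNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()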

Media

System Diagram

[System diagram: realtime_inference.png]