Robotic Sensing with YDLIDAR OS30A and Mixtile Blade 3
Boost your robot’s navigation with YDLIDAR OS30A and Mixtile Blade 3: Capture 3D point clouds and detect objects using ROS1 and YOLO.
In the world of robotics, effective navigation is crucial for the successful deployment of autonomous systems. However, relying solely on onboard sensors can limit a robot’s ability to perceive its environment accurately, especially in complex or dynamic settings. To address this, we can enhance a robot’s navigational capabilities by integrating external sensors that provide a more comprehensive understanding of its surroundings.
This series of articles will explore how to achieve this by leveraging a YDLIDAR 3D depth camera, external to the robot, combined with a Mixtile Blade 3 single-board computer running ROS1. The objective is to gather 3D point cloud data and use YOLO (You Only Look Once) for object detection. This setup will allow us to build a more robust sensing system that enhances the robot’s ability to navigate and interact with its environment effectively.
In this first article, we will walk through the process of setting up the YDLIDAR 3D depth camera with the Mixtile Blade 3 and running ROS1 to capture and process the 3D point cloud data. Additionally, we will integrate YOLO for real-time object detection. This foundation will pave the way for more advanced navigation and perception capabilities in subsequent parts of the series.
The Mixtile Blade 3 is a high-performance single-board computer designed to meet the demanding needs of edge computing applications, including robotics. Powered by the Octa-Core Rockchip RK3588, the Blade 3 delivers robust processing capabilities in a compact Pico-ITX 2.5-inch form factor.
Key features include four Cortex-A76 and four Cortex-A55 CPU cores, a built-in NPU for AI acceleration, up to 32 GB of LPDDR4 memory, dual 2.5 Gbps Ethernet ports, HDMI input and output, and a U.2 port for expansion.
To enhance storage capacity and speed, I will add a 500GB SSD (PCIe Gen4 ×4 NVMe 1.4, M.2) using the Mixtile Blade 3 Case. This case is designed specifically for the Mixtile Blade 3, featuring a built-in breakout board that adapts the U.2 port to an M.2 Key-M connector, enabling the installation of an M.2 NVMe SSD.
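After installing the drive, it needs a filesystem and a mount point; later in this guide, Docker's data directory will live under `/data` on the SSD. Below is a minimal sketch, assuming the SSD shows up as `/dev/nvme0n1` (verify with `lsblk` before formatting, as `mkfs` destroys any existing data):

# Identify the SSD first; the device name below is an assumption
lsblk
# Create an ext4 filesystem on the drive
sudo mkfs.ext4 /dev/nvme0n1
# Mount it at /data and persist the mount across reboots
sudo mkdir -p /data
sudo mount /dev/nvme0n1 /data
echo '/dev/nvme0n1 /data ext4 defaults 0 2' | sudo tee -a /etc/fstab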
The YDLIDAR OS30A 3D Depth Camera is a sophisticated sensor designed for advanced robotic applications that require accurate depth perception and obstacle detection. Utilizing binocular structured light 3D imaging technology, this camera captures detailed depth information, enabling robots to effectively sense and navigate their environment.
Key features include binocular structured light 3D imaging, dense depth-map and point-cloud output, an adjustable IR emitter, and ROS driver support through the eYs3D SDK.
The YDLIDAR OS30A is an excellent choice for enhancing a robot’s environmental awareness. When combined with powerful processing hardware like the Mixtile Blade 3, it enables the collection and processing of detailed 3D point clouds, which can be used for tasks such as obstacle avoidance, mapping, and object detection with YOLO. This camera is essential for developing robots that can effectively navigate and interact with their surroundings, making it a critical component of our enhanced robotic sensing setup.
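Once the camera driver (set up below) is running, its point clouds are published as standard `sensor_msgs/PointCloud2` messages. The sketch below shows a minimal ROS1 subscriber; the topic name is a placeholder, so check `rostopic list` for the topic the eYs3D driver actually publishes on your system:

#!/usr/bin/env python3
# Minimal ROS1 subscriber for PointCloud2 messages.
# "/camera/depth/points" is a placeholder topic name.
import rospy
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def callback(msg):
    # Extract (x, y, z) tuples, skipping invalid (NaN) points
    points = list(pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True))
    rospy.loginfo("Received cloud with %d valid points", len(points))

if __name__ == "__main__":
    rospy.init_node("pointcloud_listener")
    rospy.Subscriber("/camera/depth/points", PointCloud2, callback)
    rospy.spin()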
To effectively manage and deploy applications in a containerized environment on the Mixtile Blade 3, we will install Docker, a platform that automates the deployment of applications inside lightweight, portable containers. The following steps outline the installation process for Docker on your Mixtile Blade 3.
First, we need to configure the GPG key and add Docker’s official repository to the list of package sources.
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
Next, add the Docker repository to your system’s package sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
With the repository added, you can install Docker and its associated components. First, make sure these dependencies are installed at the pinned versions:
sudo apt install libip4tc2=1.8.7-1ubuntu5 libxtables12=1.8.7-1ubuntu5
Now, proceed to install Docker:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
By default, Docker stores its data in `/var/lib/docker`. For better management and to avoid potential storage issues, we’ll move Docker’s data directory to a different location. In this case, we’ll use `/data/docker` (on the SSD).
sudo service docker stop
sudo mkdir -p /data/docker
sudo cp -a /var/lib/docker/. /data/docker
sudo touch /etc/docker/daemon.json
sudo nano /etc/docker/daemon.json
Add the following content to the `daemon.json` file:
{
"data-root": "/data/docker"
}
sudo rm -rf /var/lib/docker
sudo service docker start
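Once the daemon has restarted, confirm that Docker picked up the new data root:

docker info --format '{{ .DockerRootDir }}'

This should print `/data/docker`.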
To run Docker commands without using `sudo`, you need to add your user to the `docker` group:
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
Finally, verify that Docker is installed correctly by running a test container:
docker run hello-world
This command will download and run a small test container, confirming that Docker is up and running on your Mixtile Blade 3.
With Docker successfully installed and configured, you’re now ready to deploy and manage applications in a containerized environment, which is especially useful for running components like ROS1, object detection with YOLO, and other services on your robotics platform.
With Docker installed and configured on the Mixtile Blade 3, the next step is to build the Docker images that will run the ROS SDK for the YDLIDAR OS30A 3D Depth Camera and a node for YOLO-based object detection. This setup allows us to efficiently manage and deploy these components in a containerized environment, ensuring consistency and ease of use.
Start by cloning the project repository, which contains all the necessary Dockerfiles and configurations.
git clone --recursive git@github.com:andrei-ace/docker_ros_ydlidar_os30a.git
cd docker_ros_ydlidar_os30a/ros
The project includes a `Makefile` that simplifies the process of building the Docker images. These images will include everything needed to run the ROS environment, the YDLIDAR OS30A SDK, and the YOLO object detection node.
To build the images, simply run:
make build
This command will execute the `build` target in the `Makefile`, which builds all the required images in sequence.
The custom image for the robot is tagged as `andreiciobanu1984/robots:robot-dog-3d-depth-camera`. This image is built by combining multiple build contexts, including the ROS SDK for the camera from eYs3D and the YOLO detector node, which was inspired by mats-robotics/yolov5_ros and updated to use YOLOv8 with NCNN.
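Conceptually, the `build` target chains `docker build` invocations, passing the extra sources in with `--build-context`. The sketch below is illustrative only: the `ros:noetic-eys3d-ros` tag and final image tag appear in this article, but the directory layout and second build command are assumptions, so consult the repository's actual `Makefile`:

build:
	docker build --tag=ros:noetic-eys3d-ros --build-context eys3d-ros=../eys3d_ros eys3d-ros/.
	docker build --tag=andreiciobanu1984/robots:robot-dog-3d-depth-camera robot/.

clean:
	docker rmi andreiciobanu1984/robots:robot-dog-3d-depth-camera ros:noetic-eys3d-ros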
If you need to remove the Docker images for any reason, you can use the `clean` target in the `Makefile`:
make clean
This command will delete all the Docker images built during the `make build` process.
The ROS Noetic images used are based on the official Docker images provided by the ROS team at osrf/docker_images. These images are widely used in the ROS community and provide a solid foundation for building ROS-based applications.
The ROS SDK for the YDLIDAR OS30A camera is sourced from the eYs3D ROS repository, which provides the necessary drivers and tools for integrating the camera into your ROS environment.
The YOLO object detection node was customized from the original implementation found in mats-robotics/yolov5_ros, with updates to support YOLOv8 using the NCNN framework, offering improved accuracy and performance for object detection tasks.
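For reference, the Ultralytics tooling can export YOLOv8 weights to NCNN directly. Below is a minimal sketch using the small `yolov8n.pt` checkpoint as an example (the project's actual weights and test image may differ):

# Export YOLOv8 weights to NCNN and run inference with the exported model.
# "yolov8n.pt" and "image.jpg" are example names, not the project's files.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="ncnn")              # writes a yolov8n_ncnn_model/ directory
ncnn_model = YOLO("yolov8n_ncnn_model")  # reload the exported model
results = ncnn_model("image.jpg")        # run a test inference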
This setup ensures that your robotics project is equipped with the latest tools and technologies, allowing for precise sensing and robust object detection capabilities.
To run the full stack, allow local Docker containers to access the X server, then start the robot launch file:
xhost +local:docker
docker run -it --rm --privileged -v /tmp/.X11-unix:/tmp/.X11-unix:ro \
  -e DISPLAY=$DISPLAY --net=host andreiciobanu1984/robots:robot-dog-3d-depth-camera /bin/bash \
  -c 'source /robot/devel/setup.bash; roslaunch robot_dog robot.launch'
When developing an advanced robotic sensing platform, choosing the right object detection algorithm is critical to achieving real-time performance. For this project, I opted for YOLOv8 using the NCNN framework due to its superior speed and efficiency on edge devices like the Mixtile Blade 3. Below, we present the benchmarks that guided this decision and the rationale behind choosing YOLOv8 with NCNN.
First, we build the `ros:noetic-eys3d-ros` Docker image, which includes the necessary drivers and libraries to interface with the YDLIDAR OS30A 3D Depth Camera:
docker build --tag=ros:noetic-eys3d-ros --build-context eys3d-ros=../eys3d_ros eys3d-ros/.
Next, we test the camera and the object detection performance using different versions of YOLO. Since these tests open GUI windows, allow X server access first:
xhost +local:docker
# Preview the camera streams (dm_preview launch file from the eYs3D SDK)
docker run -it --rm --privileged -v /tmp/.X11-unix:/tmp/.X11-unix:ro -e DISPLAY=$DISPLAY --net=host andreiciobanu1984/robots:robot-dog-3d-depth-camera /bin/bash -c 'source /robot/devel/setup.bash; roslaunch dm_preview BMVM0S30A.launch'
# Benchmark YOLOv5
docker run -it --rm --privileged -v /tmp/.X11-unix:/tmp/.X11-unix:ro -e DISPLAY=$DISPLAY --net=host andreiciobanu1984/robots:robot-dog-3d-depth-camera /bin/bash -c 'python3 /robot/src/robot_dog/src/test_v5.py'
# Benchmark YOLOv8 with Torch
docker run -it --rm --privileged -v /tmp/.X11-unix:/tmp/.X11-unix:ro -e DISPLAY=$DISPLAY --net=host andreiciobanu1984/robots:robot-dog-3d-depth-camera /bin/bash -c 'python3 /robot/src/robot_dog/src/test_v8.py'
# Benchmark YOLOv8 with NCNN
docker run -it --rm --privileged -v /tmp/.X11-unix:/tmp/.X11-unix:ro -e DISPLAY=$DISPLAY --net=host andreiciobanu1984/robots:robot-dog-3d-depth-camera /bin/bash -c 'python3 /robot/src/robot_dog/src/test_v8_ncnn.py'
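The exact test scripts live in the repository; as a rough illustration of how such a benchmark can be structured, the sketch below times Ultralytics inference over camera frames (the model path and camera index are placeholders, not the repository's exact code):

# Rough inference-latency benchmark sketch.
import time
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # placeholder weights
cap = cv2.VideoCapture(0)      # placeholder camera index
times = []
for _ in range(100):
    ok, frame = cap.read()
    if not ok:
        break
    start = time.perf_counter()
    model(frame, verbose=False)  # single-frame inference
    times.append(time.perf_counter() - start)
cap.release()
print(f"mean inference time: {1000 * sum(times) / len(times):.1f} ms")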
From the benchmarks captured (as seen in the images provided), the performance metrics were recorded as follows:
YOLOv5:
YOLOv8 with Torch:
YOLOv8 with NCNN:
The primary reason for choosing YOLOv8 with NCNN over the Torch implementation or previous versions like YOLOv5 is the drastic improvement in inference speed. On edge devices like the Mixtile Blade 3, which rely on efficient use of computational resources, NCNN provides a much faster alternative for real-time object detection. This is critical for applications where quick decision-making is essential, such as in autonomous navigation and obstacle avoidance.
Moreover, NCNN’s lightweight nature allows it to run efficiently on ARM-based processors, making it an ideal fit for the Mixtile Blade 3’s architecture. The benchmarks clearly show that YOLOv8 with NCNN outperforms other configurations in both speed and efficiency, which directly translates into better performance for real-time robotic applications.
In conclusion, the decision to use YOLOv8 with NCNN in this project was based on its superior speed and efficiency, making it the best choice for enhancing the robot’s perception capabilities without compromising on performance.
In this project, we observed that adjusting the IR intensity setting of the YDLIDAR OS30A 3D Depth Camera significantly affects both object detection and depth sensing capabilities. Here’s a summary of our findings and how to optimize the settings for your specific use case.
IR Intensity Set to 0: With the IR projector off, the camera image is free of the projected pattern, which improves YOLO object detection; depth sensing degrades, however, since the structured-light pattern is what the camera relies on to compute depth.
IR Intensity Set to 3 (Default): The projected IR pattern enables dense, accurate depth maps and point clouds, but the pattern visible in the image can interfere with object detection.
To overcome these limitations, a few strategies can be considered: switching the IR intensity on the fly depending on whether the robot currently needs detection or depth data, and fine-tuning the YOLO weights on images captured with the IR pattern enabled so that detection works in both modes.
These solutions will be explored in more detail in the next article, where I will focus on refining the object detection capabilities to accurately detect the robot and its surroundings under varying conditions. By fine-tuning the YOLO weights and possibly integrating a mode-switching strategy, we aim to optimize both object detection and depth sensing simultaneously.
You can adjust the IR intensity on-the-fly using the RQT Reconfigure tool:
rosrun rqt_reconfigure rqt_reconfigure
In the `rqt_reconfigure` interface, navigate to the `/camera_BMVM0530A1_node` settings and modify the `ir_intensity` parameter. Set it to `0` for better object detection or leave it at `3` for depth sensing.
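Because rqt_reconfigure talks to the node through dynamic_reconfigure, the same switch can be made programmatically, which is handy for an alternating-mode strategy. Here is a minimal sketch, assuming the node exposes a standard dynamic_reconfigure server (as its appearance in rqt_reconfigure implies):

#!/usr/bin/env python3
# Toggle the camera's IR intensity at runtime via dynamic_reconfigure.
import rospy
from dynamic_reconfigure.client import Client

rospy.init_node("ir_mode_switcher")
client = Client("/camera_BMVM0530A1_node", timeout=5)
client.update_configuration({"ir_intensity": 0})  # IR off: better object detection
rospy.sleep(2.0)                                  # ... run detection here ...
client.update_configuration({"ir_intensity": 3})  # IR on: better depth sensing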
The ability to dynamically adjust the IR intensity provides flexibility in balancing the trade-offs between object detection and depth sensing. By exploring further strategies such as alternating modes or fine-tuning YOLO weights, the camera’s performance can be optimized to suit a wide range of robotic applications. Stay tuned for the next article, where we will delve into these enhancements to achieve more accurate robot detection and sensing.