
MAchine Learning and Interactive Systems

Using the kinect sensor
  by Fix Jeremy

Starting the kinect and viewing it within rviz

In this article, we show an example of using ROS and libfreenect to display the RGB-depth point cloud.

In order to get the RGB-depth point cloud within ROS, you need to install a few packages:

terminal:$ sudo apt-get install ros-indigo-libfreenect ros-indigo-freenect-camera ros-indigo-freenect-launch

After plugging in the Kinect (and connecting its power supply as well), you should be able to start the ROS nodes with the freenect launch file freenect.launch:

terminal:$ roslaunch freenect_launch freenect.launch

If everything started correctly, you should see some newly published topics:

terminal:$ rostopic list

Our topic of interest in this article is /camera/depth_registered/points, which is a sensor_msgs/PointCloud2. This message has four fields: x, y, z and rgb. It can be displayed directly within RViz:

terminal:$ rosrun rviz rviz

Then change the Fixed Frame to camera_rgb_optical_frame, add a display (Add → By topic) and select /camera/depth_registered/points. You should see something like:
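A detail worth knowing before processing this cloud in your own code: in the registered xyzrgb cloud, the rgb field is a 32-bit float whose raw bytes actually encode a packed 0x00RRGGBB integer. The sketch below (function names are mine, not from any package) shows how to reinterpret it with the standard struct module:

```python
import struct

def unpack_rgb(rgb_float):
    # Reinterpret the float32 bits as a packed 0x00RRGGBB integer,
    # then split it into 8-bit channels.
    packed = struct.unpack("I", struct.pack("f", rgb_float))[0]
    r = (packed >> 16) & 0xFF
    g = (packed >> 8) & 0xFF
    b = packed & 0xFF
    return r, g, b

def pack_rgb(r, g, b):
    # Inverse operation: pack 8-bit channels and reinterpret
    # the resulting integer's bytes as a float32.
    packed = (r << 16) | (g << 8) | b
    return struct.unpack("f", struct.pack("I", packed))[0]
```

This is a reinterpretation of the bytes, not a numeric conversion, so the round trip is exact for any 24-bit color.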

You can also look at the RGB and disparity images:

terminal:$ rosrun image_view disparity_view image:=/camera/depth_registered/disparity

Accessing the RGB and depth information from a node

Now, let us suppose you want to write a node to process the RGB and depth components. The ROS package test_kinect.tar.gz provides nodes in Python and C++ to play with RGB point clouds. Launch files are provided to test the two implementations.
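Such a launch file might look like the following sketch, which starts the freenect driver with depth registration enabled and then one processing node. The package and node names below (test_kinect, test_kinect_node.py) are illustrative assumptions; adapt them to the contents of the archive:

```xml
<!-- Hypothetical launch file: freenect driver + one processing node. -->
<launch>
  <!-- Start the Kinect driver, publishing the registered point cloud. -->
  <include file="$(find freenect_launch)/launch/freenect.launch">
    <arg name="depth_registration" value="true"/>
  </include>

  <!-- Start the (assumed) Python processing node from the package. -->
  <node pkg="test_kinect" type="test_kinect_node.py" name="test_kinect"
        output="screen"/>
</launch>
```

The node would then subscribe to /camera/depth_registered/points and receive sensor_msgs/PointCloud2 messages in its callback.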