12th Week: GUI development

GUI development was this week's focus and is still ongoing work. I have been struggling mainly with two aspects:

  • Integrating the calibration package in the GUI;
  • Using ROS tools with the GUI – launching nodes, killing nodes and Rviz visualization.

Below you can find some screenshots of the GUI.

Currently, the user can pick which sensors they want to use from the list of supported sensors, insert their IP addresses and launch the selected sensors by clicking on “Start Nodes”.

“Stop Nodes” is meant to kill the previously launched nodes. This is not implemented yet, and I am not sure if it’s possible, since when Qt launches an external process there is no way of knowing whether that process has detached children. This is a problem because when Qt launches nodes it calls roslaunch (the parent process), which then starts the nodes as child processes, so killing the process launched by Qt kills roslaunch and not the actual nodes.
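
One option I am considering, sketched below but not yet implemented, is to keep the PID returned by QProcess and send roslaunch a SIGINT, since roslaunch should then shut down the nodes it started before exiting. The class and member names are placeholders, and this only works on POSIX systems.

```cpp
#include <QProcess>
#include <QString>
#include <QStringList>
#include <signal.h>  // POSIX kill(); Linux only

// Hypothetical helper: start a launch file through roslaunch and stop it later
// by signalling roslaunch instead of killing it outright.
class NodeLauncher
{
public:
    void start(const QString &launchFile)
    {
        process_.start("roslaunch", QStringList() << launchFile);
        process_.waitForStarted();
    }

    void stop()
    {
        if (process_.state() == QProcess::Running)
        {
            // SIGINT gives roslaunch the chance to shut down its child nodes.
            ::kill(static_cast<pid_t>(process_.processId()), SIGINT);
            process_.waitForFinished();
        }
    }

private:
    QProcess process_;
};
```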

The “Calibrate” button is meant to start the calibration process and is still a work in progress.

“Add” and “Remove” buttons add and remove sensors from the list above.

As you can see, the Rviz display is also embedded in the GUI, so the user can visualize data from the sensors.
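
For the record, the embedding is done with librviz; the snippet below is a minimal sketch of that approach (the fixed frame and the display added are only illustrative).

```cpp
#include <QVBoxLayout>
#include <QWidget>
#include <rviz/render_panel.h>
#include <rviz/visualization_manager.h>

// Minimal sketch: create an Rviz render panel inside an ordinary Qt widget.
void embedRviz(QWidget *parent)
{
    rviz::RenderPanel *render_panel = new rviz::RenderPanel();
    QVBoxLayout *layout = new QVBoxLayout(parent);
    layout->addWidget(render_panel);

    rviz::VisualizationManager *manager = new rviz::VisualizationManager(render_panel);
    render_panel->initialize(manager->getSceneManager(), manager);
    manager->initialize();
    manager->setFixedFrame("base_link");  // illustrative fixed frame
    manager->startUpdate();

    // Displays (e.g. a laser scan) can then be added through createDisplay().
    manager->createDisplay("rviz/LaserScan", "scan", true);
}
```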


11th Week: Learning Qt

With the article now written and submitted, the next step is designing and implementing a graphical user interface (GUI) for the calibration package.

The GUI is going to be based on Qt, since it’s a modern and well documented C++ GUI framework.

This week was mostly dedicated to learning how to design and create a GUI with Qt, integrate the GUI into a ROS package and launch nodes from the GUI.
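
The basic pattern for doing that, as far as I understand it, looks like the sketch below (MainWindow is a placeholder for the calibration GUI’s main window): initialise ROS and Qt in main() and let Qt run the event loop while ROS callbacks are handled on a background thread.

```cpp
#include <QApplication>
#include <ros/ros.h>

#include "main_window.h"  // hypothetical Qt main window of the calibration GUI

int main(int argc, char **argv)
{
    // ROS and Qt share argc/argv; ros::init strips the ROS-specific arguments.
    ros::init(argc, argv, "calibration_gui");
    QApplication app(argc, argv);

    // Handle ROS callbacks on a background thread so the Qt event loop stays responsive.
    ros::AsyncSpinner spinner(1);
    spinner.start();

    MainWindow window;
    window.show();

    return app.exec();
}
```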

I have decided to make a GUI inspired by Rviz, since it’s one of the most used packages in ROS and is very easy to use.

Hopefully next week I’ll be able to show you a working, easy-to-use GUI.

9th Week: Success

Continuing last week’s work, I limited the calibration to the first 15 grid points and the results looked very promising. To better evaluate the sensors’ positions after calibration, I imported a CAD model of a Ford Escort SW 98 (the same car model as ATLASCAR) into Rviz. The images below show the end result and confirm that this test was a success.

The next step is adding these results and the ball detection method for cameras to the article.

8th Week: New tests and solvePnP error

Since last week’s test wasn’t done properly, I had to redo it this week. During this week’s test, one of the LMS151 sensors was constantly detecting the ball with a larger radius than the actual one, sometimes by as much as 7 cm. The Sick LMS151 laser sensors have an error of 1 cm, so measurement error alone cannot explain the large radius difference. Another possibility is that the laser is measuring an oblique section of the ball, because the ground floor is inclined, and that the ball is not a perfect sphere but closer to an ellipsoid. As in the previous test, I recorded all data to a rosbag to process and analyse later.

Meanwhile, before analysing the data, I finally found the error with solvePnP: it was caused by a wrong sign when calculating the ball centre on the image. I was subtracting the ball radius from a variable instead of adding it, so the ball centre was off by twice its radius (its diameter), which is about 95 to 99 cm.
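
In code terms the fix looks roughly like this (variable names are illustrative; in image coordinates the y axis grows downwards, so the centre sits one radius below the detected top of the ball):

```cpp
// Illustrative only: the centre row is the detected top edge plus the radius.
double centre_y = top_y + radius_px;    // correct
// double centre_y = top_y - radius_px; // the bug: centre off by one diameter
```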

The first test results showed that the error discussed above increased with the distance from the ball to the LMS151 lasers. The increase in error made ball detection unreliable, as you can see in the images below. Next week I am going to limit the calibration to the first 15 grid points to avoid large radius errors.

  • Red arrow -> reference Sick LMS151;
  • Yellow arrow -> Sick LD-MRS400001;
  • Orange arrow -> Sick LMS151;
  • Green arrow -> Camera position calculated with rigid transformation;
  • Light Green arrow -> Camera position with solvePnP;
  • Light Blue arrow -> Camera position with solvePnPRansac.

7th Week: Data analysis

I started this week by analyzing test data from the previous week and quickly discovered a few problems with the test procedure:

  1. The Sick LD-MRS400001 was too low, resulting in the bottom laser hitting the ground or the box used to place the ball;

  2. During ball placement, both LMS151 sensors detected legs as part of the ball’s arc, increasing its radius. Since the laser’s Z coordinate is computed from the arc radius, this error makes the laser appear higher than it actually is (the relation is sketched below).
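
For context, the relation in question (a sketch, assuming a perfectly spherical ball of known radius) is a direct application of Pythagoras: the scan plane cuts the ball in a circular arc, and the vertical offset between the scan plane and the ball centre follows from the two radii, so any error in the detected arc radius propagates straight into the Z estimate.

```cpp
#include <cmath>

// Sketch: vertical offset between the laser scan plane and the ball centre,
// assuming a perfect sphere. Both radii are in metres.
double scanPlaneToCentreOffset(double ball_radius, double arc_radius)
{
    // arc_radius^2 + offset^2 = ball_radius^2  =>  offset = sqrt(R^2 - r^2)
    // (an arc radius larger than the ball radius, as in the error above, makes this invalid)
    return std::sqrt(ball_radius * ball_radius - arc_radius * arc_radius);
}
```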

Despite these errors, I had a successful calibration without the grid. The test showed that solvePnP was not returning the expected results and was also highly unstable: small changes in the image or world points resulted in completely different geometric transformations. These problems were partially caused by the double distortion correction applied to the image points. After fixing that issue, the solvePnP results got closer to reality and more stable; however, I still had large errors and couldn’t find their source, so I made a post on OpenCV’s Q&A forum describing the problem in detail, hoping for some help. No luck so far.
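
The fix for the double correction boils down to compensating the distortion only once; a hedged sketch of what that looks like, assuming the image points have already been undistorted back into pixel coordinates, is:

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// Sketch: if the 2D points were already undistorted into pixel coordinates,
// pass zero distortion coefficients so solvePnP does not correct them again.
void estimatePose(const std::vector<cv::Point3f> &world_points,
                  const std::vector<cv::Point2f> &undistorted_points,
                  const cv::Mat &camera_matrix,
                  cv::Mat &rvec, cv::Mat &tvec)
{
    cv::Mat no_distortion = cv::Mat::zeros(4, 1, CV_64F);
    cv::solvePnP(world_points, undistorted_points, camera_matrix,
                 no_distortion, rvec, tvec);
}
```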

Below, on the left, you can see the calibration results displayed in Rviz – the red arrow represents the reference LMS151, orange the second LMS151, yellow the LD-MRS400001 and green the camera – and on the right, for comparison purposes, a photo of ATLASCAR_1.

The laser positions are correct; however, the camera position has an error of about 1 meter along the Z direction.

6th Week: Camera to LMS151 transformation and full calibration test

Before proceeding with a full calibration test, I had to implement a method to calculate the geometric transformation from the Point Grey camera to the reference laser scanner (one of the Sick LMS151). Calculating this transformation is key to fusing the data from the lasers and the cameras.

My first approach was to calculate the distance from the camera to the ball based on its radius and the camera’s intrinsic parameters. I was hoping that with this information, plus the ball centroid position in the image, I could determine the transformation. The first part was implemented fairly easily in the fourth week, and you can see it working in the video attached to that post.
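
For reference, that first part is a pinhole-model estimate (names below are illustrative): a ball of known real radius that appears with a radius of r pixels in the image is roughly at depth Z = f·R/r, where f is the focal length in pixels.

```cpp
// Sketch of the pinhole-model distance estimate used in the first approach.
// focal_length_px comes from the camera intrinsics, ball_radius_m is the real
// ball radius and radius_px is the radius detected in the image.
double distanceToBall(double focal_length_px, double ball_radius_m, double radius_px)
{
    return focal_length_px * ball_radius_m / radius_px;
}
```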

However, by this week I was still not sure how to implement the rest of the method, so my co-advisor suggested the following:

  • Calculate centroid position from the image in pixels (camera coordinate system);
  • Get camera intrinsic parameters;
  • Get centroid position from the reference laser (reference LMS151 coordinate system);
  • Use OpenCV’s solvePnP function to find the object pose from the 3D–2D point correspondences.

The first two elements were already implemented from the first approach, the third required minor work, and the fourth required some work to feed the data into solvePnP correctly (a sketch of what that looks like is shown below). I ran a small test to check whether the method was working; the results didn’t look very good.
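
As a rough sketch of that last step (all names are illustrative, and this assumes one ball centre per calibration point, expressed both in the reference laser frame and in the image):

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// Sketch of how the correspondences could be fed to solvePnP: one ball centre
// per calibration point, in the reference laser frame (3D) and in the image (2D).
cv::Mat estimateCameraPose(const std::vector<cv::Point3f> &ball_centres_laser,
                           const std::vector<cv::Point2f> &ball_centres_image,
                           const cv::Mat &camera_matrix,
                           const cv::Mat &dist_coeffs)
{
    CV_Assert(ball_centres_laser.size() == ball_centres_image.size() &&
              ball_centres_laser.size() >= 4);  // solvePnP needs several correspondences

    cv::Mat rvec, tvec;
    cv::solvePnP(ball_centres_laser, ball_centres_image,
                 camera_matrix, dist_coeffs, rvec, tvec);

    // rvec/tvec map points from the laser frame to the camera frame; the camera
    // position in the laser frame is obtained by inverting that transformation.
    cv::Mat R;
    cv::Rodrigues(rvec, R);
    cv::Mat camera_position = -R.t() * tvec;
    return camera_position;
}
```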

At this point I ran the full calibration test with help from my co-advisor. The tests were recorded with rosbag so I could work “offline” with the data and check whether solvePnP was working correctly. Ironically, when using rosbag the calibration software struggled to detect the ball. I’m not sure why this happened; maybe the extra load from rosbag was enough to affect data processing.

In the attached photos you can see the setup used for the test. On the ground you can also see a grid where the ball was placed during the calibration test; this grid is our ground truth and is meant to show that every device detects the ball correctly. In real-world applications, the grid is not required.