6th Week: Camera to LMS151 transformation and full calibration test

Before proceeding with a full calibration test, I had to implement a method to calculate the geometric transformation from the Point Grey camera to the reference laser scanner (one of the Sick LMS151 units). Calculating this transformation is key to fusing the data from the lasers and the cameras.

My first approach was to calculate the distance from the camera to the ball from the ball’s apparent radius and the camera’s intrinsic parameters; I was hoping that, with this information plus the ball’s centroid position in the image, I could determine the transformation. The first part was implemented fairly easily in the fourth week, as you can see in the video attached to that post.
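That distance estimate can be sketched with the pinhole camera model. This is a minimal illustration, not the project’s actual code; the function name and all numbers below are made-up placeholders:

```python
# Sketch: estimating the distance to a ball of known physical radius from
# its apparent radius in the image, using the pinhole camera model.
# distance = focal_length (px) * real_radius (m) / apparent_radius (px)

def distance_to_ball(focal_length_px: float,
                     ball_radius_m: float,
                     apparent_radius_px: float) -> float:
    """Approximate distance from the camera to the ball centre."""
    return focal_length_px * ball_radius_m / apparent_radius_px

# Illustrative values: f = 1000 px, ball radius 0.5 m, apparent radius 50 px
d = distance_to_ball(1000.0, 0.5, 50.0)
print(d)  # -> 10.0 (metres)
```

This only gives the range along the ray through the ball’s centroid; on its own it is not enough to recover the full rigid transformation, which is why the approach below was needed.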

However, by this week, I was still not sure how to implement the rest of the method. So, my co-advisor suggested the following:

  • Calculate the ball’s centroid position in the image, in pixels (camera coordinate system);
  • Get the camera’s intrinsic parameters;
  • Get the centroid position from the reference laser (reference LMS151 coordinate system);
  • Use OpenCV’s solvePnP function to estimate the object pose from the corresponding 3D-to-2D point pairs.

The first two items were already implemented from the first approach, the third required only minor work, and the fourth required some effort to feed the data into solvePnP correctly. I ran a small test to check whether the method was working; the results did not look very good.

At this point, I ran the full calibration test with help from my co-advisor. The tests were recorded with rosbag so I could work “offline” with the data and check whether solvePnP was working correctly. Ironically, when using rosbag, the calibration software struggled to detect the ball. I’m not sure why this happened; perhaps the extra load from rosbag was enough to affect the data processing.

In the attached photos you can see the setup used for the test. On the ground there is a grid where the ball was placed during the calibration test; this is our ground truth, meant to show that every device is detecting the ball correctly. In real-world applications, the grid is not required.

