Road detection plays an integral role in self-driving cars. Accurate and reliable road detection paves the way for good path planning. In the self-driving golf cart project, I use two methods to perform road detection. First, I use semantic segmentation and deep learning to classify each pixel in an image. Second, I use the ZED vision system, which can construct a point cloud of the space in front of the car, to detect drivable space.

In case you are not familiar with the self-driving golf cart project, please check out my Github page and the project website. This article assumes that you have some basic knowledge of ROS, such as subscribers, publishers, topics, and callbacks. You can learn more about ROS from the official website.

Semantic segmentation from the Cityscapes dataset

The two road detection systems are meant to complement each other. The semantic segmentation network is excellent under ideal conditions: no shadows or glare. Deep learning also tends to generalize better than logic-based algorithms. The ZED camera, however, maintains decent performance even under tricky conditions; shadows and overcast skies shouldn't affect the performance of the sensor. With a logic-based algorithm and a deep learning model, the golf cart should be able to detect the road reliably, and then perform path planning.

Unlike a simple webcam, the ZED sensor has two cameras. A stereo camera is a type of camera with two or more lenses, with a separate image sensor or film frame for each lens. Similar to the human eyes' ability to judge distance, the ZED sensor captures not only RGB photos but also distance readings. The two-lens setup gives the camera the ability to capture three-dimensional images, a process known as stereo photography. Stereo cameras may be used for making stereoviews and 3D pictures for movies, or for range imaging. According to Wikipedia, "the distance between the lenses in a typical stereo camera (the intra-axial distance) is about the distance between one's eyes (known as the intra-ocular distance) and is about 6.35 cm, though a longer baseline produces more extreme 3-dimensionality."

With the ability to capture three-dimensional images, the ZED sensor can create a point cloud of the surroundings. Point clouds are critical in self-driving cars: they are essentially a real-time 3D map of the vehicle's surroundings. By simply using point cloud data, we can run many kinds of algorithms to accomplish a variety of tasks. Here is an example of a point cloud captured by the ZED:

ZED point cloud, sidewalk

You might be wondering: why not choose a 3D LIDAR? Stereo cameras have a few advantages. A ZED camera costs around 450 USD, while a LIDAR costs around 7,000 USD. The ZED captures RGB, depth map, and IMU data. It's also relatively easy to integrate into my current ROS system.
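To make the stereo-depth idea concrete, here is a minimal sketch of how disparity between the two lenses relates to depth, and how a point cloud might be filtered for drivable space. The focal length, baseline, ground-plane convention, and helper names below are illustrative assumptions for this sketch, not the ZED's actual calibration or the project's actual code.

```python
import math

# Assumed camera parameters (illustrative, not the ZED's real calibration):
FOCAL_LENGTH_PX = 700.0  # focal length expressed in pixels
BASELINE_M = 0.12        # distance between the two lenses, in meters


def depth_from_disparity(disparity_px):
    """Triangulate depth in meters from stereo disparity in pixels: Z = f * B / d."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity means the point is at infinity
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px


def drivable_points(points, max_height_m=0.05):
    """Naive drivable-space filter: keep points close to an assumed flat ground.

    `points` is a list of (x, y, z) tuples in meters, where y points up and
    y = 0 is the ground plane (a simplifying assumption for this sketch).
    """
    return [p for p in points if abs(p[1]) <= max_height_m]


# A 42-pixel disparity maps to 700 * 0.12 / 42 = 2.0 meters with these parameters.
print(depth_from_disparity(42.0))  # 2.0

# Keep only near-ground points from a toy 3-point cloud.
cloud = [(0.0, 0.01, 2.0), (0.5, 0.80, 2.1), (-0.3, -0.02, 2.5)]
print(drivable_points(cloud))  # [(0.0, 0.01, 2.0), (-0.3, -0.02, 2.5)]
```

Note the trade-off the Wikipedia quote hints at: a longer baseline `B` yields larger disparities for the same depth, improving range resolution at distance, at the cost of more extreme 3D effect and a larger minimum sensing range.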