Robust Classification and 6D Pose Estimation by Sensor Dual Fusion of Image and Point Cloud Data
Published in: ACM Transactions on Sensor Networks, 2024-03, Vol. 20, No. 2, pp. 1-21, Article 46
Main Authors: , ,
Format: Article
Language: English
Summary: Fully leveraging the complementary sensing of images and point clouds is important for object classification and six-dimensional (6D) pose estimation. Prior works extract an object's category from a single sensor, such as an RGB camera or LiDAR, limiting their robustness when a key sensor is severely occluded or fails. In this work, we present a robust object classification and 6D pose estimation strategy based on dual fusion of image and point cloud data. Instead of relying solely on 3D proposals or mature 2D object detectors, our model deeply integrates the 2D and 3D information of heterogeneous data sources through a robust dual fusion network and an attention-based nonlinear fusion function Attn-fun(.), achieving efficient and highly accurate classification even when some data sources are missing. Our method also precisely estimates the transformation matrix between two input objects by minimizing their feature difference, achieving 6D pose estimation even under strong noise or with outliers. We deploy the proposed method not only on the ModelNet40 dataset but also on a real fusion vision rotating platform that tracks objects in outer space based on the estimated pose.
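The record does not specify the form of the attention-based fusion function Attn-fun(.). A minimal sketch of one plausible form, a softmax-weighted sum over per-modality feature vectors, is shown below; the function name and weighting scheme are assumptions for illustration, not the paper's actual implementation. Such a convex combination degrades gracefully when one modality's feature is weak or absent, which matches the robustness claim in the abstract.

```python
import numpy as np

def attn_fun(feat_img, feat_pc):
    """Hypothetical attention-based fusion of two modality features.

    Scores each modality feature by its similarity to the mean feature,
    softmaxes the scores into attention weights, and returns the
    weighted sum. This is a sketch, not the paper's Attn-fun(.).
    """
    feats = np.stack([feat_img, feat_pc])   # (2, d): one row per modality
    scores = feats @ feats.mean(axis=0)     # (2,): similarity to mean feature
    weights = np.exp(scores - scores.max()) # numerically stable softmax
    weights /= weights.sum()                # attention weights sum to 1
    return weights @ feats                  # fused (d,) feature vector

# Usage: fuse an 8-dim image feature with an 8-dim point cloud feature.
rng = np.random.default_rng(0)
img_f = rng.standard_normal(8)
pc_f = rng.standard_normal(8)
fused = attn_fun(img_f, pc_f)
```

Because the weights are non-negative and sum to one, the fused vector is a convex combination of the two inputs, and feeding the same feature twice returns it unchanged.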
ISSN: 1550-4859, 1550-4867
DOI: 10.1145/3639705