Any Occupancy Map needs to be interoperable with the MoveIt occupancy map, see Occupancy Map Updater for MoveIt perception client…
A key implication of this requirement is that we need a field dedicated to grasping pose.
The stack should have at least two modes that we’ll borrow from reinforcement learning…
The idea is that if the robot does not have a local or global reference point, or if the accuracy of that point comes into question, it reverts to an Explore state where it attempts to localize itself.
Although not explicitly part of the Client, our first example should include a recovery sequence that utilizes the Explore/Exploit dynamic.
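The Explore/Exploit switch above can be sketched as a small mode function. This is a minimal sketch under assumed names (`Mode`, `next_mode`, and the confidence thresholds are all hypothetical, not part of any existing API); the hysteresis is one design choice that keeps the robot from flapping between modes when confidence hovers near the threshold.

```python
from enum import Enum, auto

class Mode(Enum):
    EXPLORE = auto()  # reference point lost or untrusted; try to re-localize
    EXPLOIT = auto()  # localization trusted; execute the task

def next_mode(current: Mode, confidence: float,
              enter: float = 0.8, exit_: float = 0.6) -> Mode:
    """Hysteresis: require higher confidence to enter EXPLOIT
    than to remain in it, so the mode does not oscillate."""
    if current is Mode.EXPLORE:
        return Mode.EXPLOIT if confidence >= enter else Mode.EXPLORE
    return Mode.EXPLORE if confidence < exit_ else Mode.EXPLOIT
```

A recovery sequence would then just be: on dropping to `EXPLORE`, run the localization behavior until confidence recovers.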
- Robot Centroid – Robot Base Frame
- Local Centroid – Probably initial position of Robot on SM startup
- Global Centroid – Like GPS Coordinates
3D Map Layers
- Static Obstacles – Stable (Fixed Items like Walls, Mountains, etc.)
- Static Obstacles – Unstable (Loose Items like Cars, things we don’t care about)
- Static Objects (Items we care about, and so track Pose)
- Dynamic Obstacles
- Dynamic Objects
Two costmap-like structures of our own, plus a third delegated to MoveIt…
- Occupancy Map = Obstacles
- Object Map = Objects
- Constraint Map – This we leave to MoveIt! + MoveItZ client
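The layer and map split above might look like this in code. All names here (`Layer`, `OccupancyMap`, `ObjectMap`) are hypothetical placeholders for illustration, not an existing API; note the object map carries the grasping-pose field called out earlier.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Dict, Tuple

Voxel = Tuple[int, int, int]   # 3D grid index
Pose = Tuple[float, ...]       # e.g. (x, y, z, qx, qy, qz, qw)

class Layer(Enum):
    STATIC_OBSTACLE_STABLE = auto()    # fixed: walls, mountains
    STATIC_OBSTACLE_UNSTABLE = auto()  # loose: cars, clutter we ignore
    STATIC_OBJECT = auto()             # items we care about; pose tracked
    DYNAMIC_OBSTACLE = auto()
    DYNAMIC_OBJECT = auto()

@dataclass
class OccupancyMap:
    """Obstacles: occupancy probability per voxel, per layer."""
    voxels: Dict[Tuple[Layer, Voxel], float] = field(default_factory=dict)

@dataclass
class ObjectMap:
    """Objects: tracked pose plus a dedicated grasping pose."""
    poses: Dict[str, Pose] = field(default_factory=dict)
    grasp_poses: Dict[str, Pose] = field(default_factory=dict)
```

The constraint map is deliberately absent; that stays on the MoveIt side.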
Pose estimation is a major goal for the Vision library. I think we’ll have two flavors of components for this…
- One that just uses the camera information and maybe does a table lookup for size (max/min height)
- One that can incorporate some type of range finder.
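For the first flavor, a camera-only range estimate from a size table is a standard pinhole computation: distance ~= focal_length_px * real_height_m / pixel_height_px. A minimal sketch, assuming a hypothetical `SIZE_TABLE` keyed by detector label with (min, max) heights in metres:

```python
# Hypothetical lookup table: detector label -> (min_height_m, max_height_m)
SIZE_TABLE = {"chair": (0.8, 1.2), "door": (1.9, 2.2)}

def estimate_range(label, pixel_height_px, focal_length_px):
    """Bracket the object's distance from its apparent pixel height.
    Returns (min_range_m, max_range_m), or None for unknown labels."""
    if label not in SIZE_TABLE:
        return None
    h_min, h_max = SIZE_TABLE[label]
    return (focal_length_px * h_min / pixel_height_px,
            focal_length_px * h_max / pixel_height_px)
```

The second flavor would collapse this min/max bracket to a single depth reading from the range finder.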
A 2.5D solution, tied to the ANYmal (DARPA SubT) legged robot. Lots of followers/interest.
A full 3D solution by Armin Hornung of OctoMap. Elite, but a small following. Old.
A ROS-Industrial hybrid perception system using octomap_server…
In it, Southwest Research Institute proposes a hybrid approach to 3D perception wherein mature 2D detectors are integrated into a ROS 3D perception pipeline to detect process features, giving the flexibility to upgrade the detector without modifying the rest of the system.
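The core move in that hybrid approach is lifting a 2D detection into the 3D map. A minimal sketch of the back-projection step (function name and calling convention are assumptions for illustration, not SwRI's API), using the pinhole intrinsics and a depth sample at the bounding-box centre:

```python
def lift_detection(bbox, depth_m, fx, fy, cx, cy):
    """Back-project a 2D bounding box (u_min, v_min, u_max, v_max)
    into a 3D point in the camera frame, using the box centre pixel
    and one depth sample: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    u = (bbox[0] + bbox[2]) / 2.0
    v = (bbox[1] + bbox[3]) / 2.0
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

Because the 2D detector only produces boxes and labels, it can be swapped out without touching this lifting step or the downstream octomap insertion.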