Robotic manipulation refers to the ways robots interact with the objects around them: grasping an object, opening a door, packing an order into a box, folding laundry, and so on. All of these actions require the robot to recognize and understand its environment, then plan and control the motion of its hands and arms intelligently.
While robotic manipulation spans many tasks, this project focuses on Bin Picking – a 3D application in which a robotic system must do all of the following:
- Locate a part, the exact object to pick up, lying in a random orientation in any quadrant of the bin.
- Plan a complete path from pick to place without the robot reaching any singularities or joint limits along the way.
- Enter the bin with a robot pose specific to the orientation of the matched part.
- Avoid breaking or damaging any parts adjacent to the one being picked.
- Exit the bin and place the part on a target in the correct orientation, without hitting anything in its environment.
We focus specifically on the perception part of the above tasks, i.e., precisely locating the part in the bin, in the context of industrial bin picking in e-commerce warehouses for order fulfillment.
Goals & Objectives
Develop object pose estimation algorithms for robotic manipulation and related computer vision topics, including registration, detection, and reconstruction.
- Come up with a way to identify poses & potential grasp points for randomly packed objects in a tote.
- Come up with a way to gather object information on the fly and identify possible object poses and potential grasp points for randomly packed objects in a tote, given only partial dimensions such as height.
- Find grasp points for these objects without any initial information about the object.
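As a rough illustration of the first goal, a coarse, model-free pose estimate can be read off a segmented object point cloud via principal component analysis: the cloud's centroid gives the translation and its principal axes give an orientation frame. The sketch below is our own illustration with a synthetic cloud, not Mujin's actual pipeline; `estimate_pose` and all parameters are hypothetical names chosen for this example.

```python
import numpy as np

def estimate_pose(points: np.ndarray):
    """Coarse 6-DoF pose of an object from its 3D points.

    Returns (centroid, rotation), where the columns of `rotation` are
    the principal axes of the cloud, longest axis first.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # SVD of the centered cloud yields the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    rotation = vt.T  # columns = principal axes
    # Enforce a right-handed frame (SVD axis signs are arbitrary).
    if np.linalg.det(rotation) < 0:
        rotation[:, -1] *= -1
    return centroid, rotation

# Synthetic box-like cloud: 200 points stretched along x,
# rotated 30 degrees about z and shifted off the origin.
rng = np.random.default_rng(0)
cloud = rng.uniform(-1, 1, (200, 3)) * np.array([4.0, 1.0, 0.5])
theta = np.radians(30)
rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
cloud = cloud @ rz.T + np.array([0.3, -0.2, 1.5])

center, axes = estimate_pose(cloud)
print("centroid:", np.round(center, 2))
print("dominant axis:", np.round(axes[:, 0], 2))
```

In practice the recovered dominant axis matches the rotated long side of the box (up to sign, which PCA cannot disambiguate), and grasp candidates can then be proposed along the shorter axes. A real system would refine such an initial estimate with registration, e.g. ICP against accumulated views.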
Mujin makes a software controller platform that provides motion planning and advanced computer vision features to industrial robots. Mujin’s core software is particularly suited for picking applications in the logistics and manufacturing industries. In fact, with over 600 production deployments, we’ve proven that robots, when combined with the latest sensor technology, can meet or exceed human productivity levels in high-level applications such as piece picking, depalletizing, and palletizing. For many applications, our customers deploy the robot in relatively organized settings, such as in the figure below, where the robot uses data about the products’ size, shape, and other characteristics.
See examples of real-world applications here:
- Pick and place for order fulfillment – https://youtu.be/88MDtPshQ1M
- Sorter – https://youtu.be/nlFESLYWHE8
We’re looking to take this a step further and solve a robot perception problem to be able to successfully unpack a messy box full of the same items, all with minimal data about the items.
The project focuses on three main questions:
1. How to identify the first object to pick?
2. What can we learn from the first pick to improve subsequent picks?
3. How to better represent scene/object information for pick-and-place?
- Should be a model-free solution
- Must learn the 3D object representations
- Should be able to identify object pose
- Single SKU type in a tote
- Given the object model, pick-and-place is done by the existing pipeline.
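For the first question, a common model-free baseline (our illustrative sketch, not the project's chosen method) is to pick the topmost exposed surface in the depth image: the highest object in the pile is the least likely to be occluded or pinned by its neighbors. The function name, window size, and toy depth values below are all assumptions made for this example.

```python
import numpy as np

def first_pick_candidate(depth: np.ndarray, window: int = 3):
    """Return the (row, col) of the highest surface patch in a depth map.

    `depth` holds distances from a camera looking down into the tote, so
    the *smallest* value is the point closest to the camera, i.e. the
    top of the pile. Averaging over a small window suppresses
    single-pixel sensor noise (naive loop; fine for a sketch).
    """
    k = window
    padded = np.pad(depth, k // 2, mode="edge")
    smoothed = np.zeros_like(depth, dtype=float)
    for i in range(depth.shape[0]):
        for j in range(depth.shape[1]):
            smoothed[i, j] = padded[i:i + k, j:j + k].mean()
    return np.unravel_index(np.argmin(smoothed), smoothed.shape)

# Toy depth map: tote floor at 1.0 m, one object whose top is at 0.6 m.
depth = np.full((8, 8), 1.0)
depth[2:5, 3:6] = 0.6
row, col = first_pick_candidate(depth)
print("first pick pixel:", (int(row), int(col)))
```

The selected pixel lands at the center of the raised region, where the full smoothing window sits on the object's top face. The second question then amounts to updating the scene model after this pick: the removed object reveals geometry that constrains the poses of the items beneath it.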
Why is robotic picking so difficult?
– "If robots can build a car, they can pick orders. Right?!"
Not really. While control & planning have their own set of problems and challenges, the main difficulty in the perception component of robot picking arises from the fact that objects vary in visual appearance, texture, shape, and more.
- Billions of SKUs – goods come in a variety of shapes, dimensions, weights, packaging, colors, textures, reflectivity, and fragility.
- Dynamic presentations of totes – no two presentations of the totes on a conveyor belt are the same
Here are some examples of various objects in a tote on a conveyor belt as seen from the robot’s camera.