Xiaomi's robotics team breaks through the tactile sensing bottleneck.
The Xiaomi Robot team recently made a significant advance in embodied intelligence: on February 5 it announced its latest result, TacRefineNet, a general framework that achieves millimeter-level pose refinement using touch alone, with no need for vision or 3D object models. Its core features include:
- No need for vision: High-spatial-resolution tactile sensors are unaffected by lighting and occlusion, enabling reliable perception under complex contact conditions.
- Millimeter-level refinement: Fusing tactile signals from multiple fingers with proprioceptive information drives the pose estimate to converge, reducing grasp errors to the millimeter level.
- No need for 3D models: Without relying on prior geometric models of the object, the framework recasts complex grasp adjustment as a target-alignment problem in tactile space (a conceptual sketch follows this list).
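
To make the "target alignment in tactile space" idea concrete, below is a minimal, purely illustrative sketch of a closed-loop refinement routine: measure a fused tactile feature, compare it with a desired target signature, apply a small pose correction, and repeat until the residual is below a millimeter-level tolerance. The toy sensor model, the proportional gain, and all names here are hypothetical stand-ins, not Xiaomi's TacRefineNet implementation, which has not been detailed in this announcement.

```python
# Illustrative sketch of tactile-space target alignment (hypothetical, not
# Xiaomi's actual method): a simple proportional loop that nudges the in-hand
# pose until the tactile reading matches a target signature.

import numpy as np

RNG = np.random.default_rng(0)

def simulated_tactile_reading(pose_error_mm):
    """Toy sensor model: assume the fused multi-finger tactile feature varies
    linearly with the in-hand pose error, plus a little measurement noise."""
    return pose_error_mm + RNG.normal(scale=0.05, size=3)

def refine_pose(initial_error_mm, target_feature=np.zeros(3),
                gain=0.7, tol_mm=1.0, max_iters=50):
    """Proportional correction loop: read tactile feature, compare with the
    target signature, apply a scaled correction, repeat until the residual
    falls below a millimeter-level tolerance."""
    error = np.asarray(initial_error_mm, dtype=float)
    for step in range(max_iters):
        feature = simulated_tactile_reading(error)
        residual = feature - target_feature      # misalignment in tactile space
        if np.linalg.norm(residual) < tol_mm:
            return error, step                   # converged
        error -= gain * residual                 # small corrective pose adjustment
    return error, max_iters

if __name__ == "__main__":
    final_error, steps = refine_pose([8.0, -5.0, 3.0])
    print(f"converged in {steps} steps, residual error (mm): {final_error}")
```

In a real system the linear toy sensor would be replaced by actual fingertip tactile images and the proportional update by a learned model that maps fused tactile and proprioceptive features to pose corrections; the loop structure is shown only to illustrate the alignment-in-tactile-space formulation described above.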
On the hardware side, the research team has integrated high-spatial-resolution tactile sensors into the fingertips of its dexterous hand, with a sensing-element spacing of only 1.1 millimeters, capable of capturing minute surface deformations of objects.
The Xiaomi Robot team has publicly shared the related technical details and experimental videos, with further research results to be released over time.