China releases 60,000-minute vision-and-touch robotics dataset

A cross-embodiment vision-and-touch multimodal dataset named Baihu-VTouch was released on Tuesday. Containing more than 60,000 minutes of robot interaction data, it is described as one of the largest open-source datasets of its kind, China Media Group reported.

Training data for embodied artificial intelligence has long been dominated by visual inputs, leading robots to rely heavily on sight while lacking tactile perception. This shortage of tactile data has limited robots' ability to operate in poor lighting or to handle fragile objects.

To address this gap, Baihu-VTouch records pressure and deformation data across a range of physical contact modes, covering real-world scenarios such as household services, industrial manufacturing, catering and specialized operations.

With data collected across multiple robot configurations, including wheeled and bipedal platforms, Baihu-VTouch includes more than 380 task types involving over 500 real-world objects, and is structured around more than 100 basic manipulation skills such as grasping, inserting, rotating and placing.
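To make that structure concrete, the following is a minimal Python sketch of how one episode of such a cross-embodiment vision-and-touch recording might be organized. All class and field names here are hypothetical illustrations; the report does not describe Baihu-VTouch's actual schema or file format.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class VTouchFrame:
    # One timestep of a hypothetical vision-and-touch recording.
    # Shapes are illustrative assumptions, not the published format.
    timestamp_s: float                # time since episode start, in seconds
    rgb: np.ndarray                   # camera image, e.g. (H, W, 3) uint8
    tactile_pressure: np.ndarray      # per-taxel pressure readings, float32
    tactile_deformation: np.ndarray   # per-taxel deformation, same layout as pressure
    joint_positions: np.ndarray       # robot proprioception, (num_joints,) float32

@dataclass
class VTouchEpisode:
    # Metadata mirroring the categories the article mentions.
    robot_platform: str               # e.g. "wheeled" or "bipedal"
    task_type: str                    # one of the ~380 task types
    skill: str                        # e.g. "grasping", "inserting", "rotating", "placing"
    target_object: str                # one of the ~500 real-world objects
    frames: list[VTouchFrame] = field(default_factory=list)

In an actual release, the image and tactile arrays would typically live in a compressed on-disk container (such as HDF5) rather than in memory; the dataclasses above only illustrate how the modalities and task labels described in the report relate to one another.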

The dataset was released by the National and Local Co-built Humanoid Robot Innovation Center in collaboration with a technology firm, and is designed to support about 90 percent of daily and industrial manipulation tasks.

Training data is central to the development of intelligent robots. In June 2025, for example, China opened its largest humanoid robot training facility, the Hubei Humanoid Robot Center, where hundreds of robots deployed across 23 simulated settings can collect more than 10 million data points annually.

Currently, 6,000 minutes of the Baihu-VTouch dataset have been made available on the open-source robotics platform OpenLoong.
