Shanghai Jiao Tong University's "Guide Robot", by Prof. Gao Feng's Team, Improves Lives of Visually Impaired Individuals in China

International Affairs Division 2024-05-30

According to the China Blind Persons Association, there are approximately 17.31 million visually impaired individuals in China, meaning at least one in every hundred people is visually impaired. They often struggle to perceive objects, directions, and locations, and in daily travel they run into obstacles such as blocked tactile paving and elevators without voice prompts. Visually impaired individuals want to attend concerts, go shopping, visit parks, take part in sports, and experience the richness of life, yet these activities typically require companionship and assistance.

Currently, guide dogs play a crucial role in helping blind people get around and take part in society. Yet compared with the vast number of visually impaired individuals, there are only slightly more than 400 guide dogs nationwide, roughly one for every 40,000-plus visually impaired people. Moreover, breeding and training a guide dog is expensive and takes years, and whether guide dogs should be admitted to many venues is still under discussion. Relying on guide dogs alone therefore falls far short of meeting the needs of China's more than ten million visually impaired individuals.

On May 28th, at the Shanghai Jiao Tong University Guide Hexapod Robot press conference, a "six-legged" guide robot developed by Professor Gao Feng's team from the School of Mechanical Engineering attracted considerable attention. The robot can visually perceive its environment, navigate autonomously to destinations, avoid dynamic obstacles, and recognize traffic lights. It is reported that mass production and AI-assisted methods can markedly reduce costs while raising the robot's intelligence, helping to address the shortage of guide dogs. In addition, backed by a complete internet-based service system, the guide robot can also provide home care, handle emergencies, and lead blind individuals to more places.

With breakthroughs in human-machine interaction technology, the guide robot becomes a "second pair of eyes" for visually impaired individuals.

Communicating effectively with blind users, understanding their intentions, and keeping its actions coordinated with theirs are the primary tasks of the guide hexapod robot.

Professor Gao Feng's team integrated three interaction modes (auditory, tactile, and force feedback), enabling intelligent perception and compliant behavior between blind individuals and the guide hexapod robot. Using an end-to-end deep learning speech recognition model, the robot understands the semantics of a blind person's voice commands, currently with an accuracy above 90% and a response time under one second. It can be commanded by voice to start, stop, set a destination, accelerate, decelerate, and so on, and it reports walking and environmental conditions back in real time, achieving bidirectional intelligent interaction. The cane enables force feedback between the blind person and the robot: the robot applies traction and steering torque through the cane to guide forward movement and turns, while the blind person can push or pull the cane to dynamically adjust the robot's walking speed. The robot reaches a maximum speed of 3 m/s, covering travel paces from slow walking to brisk walking and running. The hexapod's distinctive configuration gives it stable, low-noise walking.
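To make the voice-and-cane interaction concrete, here is a minimal sketch of how recognized commands and cane forces might jointly set the commanded speed. Only the command set, the push/pull behavior, and the 3 m/s cap come from the article; the class name, gains, and linear force model are illustrative assumptions, not the team's implementation.

```python
# Hypothetical sketch of the voice + cane speed interaction described above.
MAX_SPEED = 3.0  # m/s, the article's stated maximum walking speed

class GuideSpeedController:
    """Maps voice commands to a target speed and lets cane force modulate it."""

    def __init__(self, gain: float = 0.05):
        self.target_speed = 0.0   # m/s, set by voice commands
        self.gain = gain          # m/s per newton of cane force (assumed value)

    def on_voice_command(self, command: str) -> None:
        if command == "start":
            self.target_speed = 1.0                      # assumed default pace
        elif command == "stop":
            self.target_speed = 0.0
        elif command == "accelerate":
            self.target_speed = min(self.target_speed + 0.5, MAX_SPEED)
        elif command == "decelerate":
            self.target_speed = max(self.target_speed - 0.5, 0.0)

    def on_cane_force(self, axial_force_n: float) -> float:
        # Pushing the cane (positive force) speeds the robot up; pulling
        # back slows it down, clamped to [0, MAX_SPEED].
        return max(0.0, min(self.target_speed + self.gain * axial_force_n,
                            MAX_SPEED))
```

Under these assumptions, saying "start" and then pushing gently on the cane would nudge the commanded speed above the 1 m/s default, while pulling back would slow the robot toward a stop.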

Coordinated control, blending human-machine interaction with robot autonomy, is essential for integrating perceptual information, task requirements, and interaction commands. Based on a dynamics model of the guide robot, Professor Gao Feng's team constructed hierarchical, progressively refined estimation algorithms for external forces, ground contact detection, slope, and motion state. These fuse multiple information sources, including robot joints, inertial navigation, gait rhythm, and historical state, for multi-objective integrated state observation and feedback-optimized balance control. The guide robot can already achieve coordinated autonomous control across a variety of terrain scenarios.
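As a rough illustration of one layer of such an estimator stack, the sketch below detects foot-ground contact by fusing a joint-torque-based force estimate with the gait-rhythm prior. The leg statics relation tau = J^T F (hence F = pinv(J^T) tau) is standard; the thresholds, weights, and function names are assumptions and do not reproduce the team's algorithm.

```python
# Illustrative ground-contact detection, not the team's actual algorithm.
import numpy as np

def estimate_foot_force(jacobian: np.ndarray, joint_torques: np.ndarray) -> np.ndarray:
    """Static foot-force estimate from joint torques: F = pinv(J^T) @ tau."""
    return np.linalg.pinv(jacobian.T) @ joint_torques

def contact_probability(force_z: float, stance_prior: float,
                        threshold: float = 20.0, sharpness: float = 0.2) -> float:
    """Blend a force-threshold likelihood with the gait-phase (rhythm) prior."""
    likelihood = 1.0 / (1.0 + np.exp(-sharpness * (force_z - threshold)))
    return 0.5 * likelihood + 0.5 * stance_prior  # equal weights, assumed
```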

Breakthroughs in environment- and task-adaptive control based on vision and force feedback let the blind walk unimpeded.

Guide robots need stronger autonomous planning capabilities to walk on complex terrain, including ground information acquisition and modeling, positioning and navigation, foothold selection, body posture planning, and continuous motion planning.

Precise positioning is one of the core requirements of guide tasks. Professor Gao Feng's team built a LiDAR-inertial odometry system by tightly coupling multi-sensor data, incorporating historical frames through a sliding-window approach to sharply reduce point cloud motion distortion. They designed multi-dimensional residual states that markedly improve the accuracy and robustness of state estimation, achieving precise three-dimensional environment mapping and precise robot self-localization.
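The toy example below mimics the structure of such a tightly coupled sliding-window estimator: scan-derived residuals and inertial residuals over the last few frames are stacked and minimized jointly. It is a deliberately simplified planar analogy (real systems use point-to-plane constraints and IMU preintegration on SE(3)); the window size, noise levels, and names are assumptions.

```python
# Toy sliding-window "LiDAR + IMU" pose refinement on a 2-D path.
import numpy as np
from scipy.optimize import least_squares

WINDOW = 5  # frames kept in the sliding window (assumed)

def residuals(poses_flat, scan_obs, imu_deltas):
    poses = poses_flat.reshape(WINDOW, 2)
    r = []
    for p, obs in zip(poses, scan_obs):        # "LiDAR" residuals per frame
        r.extend(p - obs)
    for i in range(WINDOW - 1):                # "IMU" residuals between frames
        r.extend((poses[i + 1] - poses[i]) - imu_deltas[i])
    return np.asarray(r)

rng = np.random.default_rng(0)
true_path = np.cumsum(np.full((WINDOW, 2), 0.5), axis=0)
scan_obs = true_path + rng.normal(0, 0.05, (WINDOW, 2))       # noisy scan poses
imu_deltas = np.diff(true_path, axis=0) + rng.normal(0, 0.01, (WINDOW - 1, 2))

sol = least_squares(residuals, np.zeros(WINDOW * 2), args=(scan_obs, imu_deltas))
print(sol.x.reshape(WINDOW, 2))                # jointly refined window poses
```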

Based on global environment maps and real-time perception of local dynamic maps, the team uses predictive modeling and real-time rolling (receding-horizon) optimization to achieve path planning and autonomous obstacle avoidance, nimbly steering around static and dynamic obstacles to keep guide tasks safe. For indoor navigation, the team has devised multi-layer navigation strategies for global indoor path planning; for outdoor scenes, they fuse environment maps with GPS information across multiple sensors, greatly improving positioning and navigation accuracy. The team also uses depth cameras with deep learning and digital image processing to recognize traffic signals, ensuring safe travel.
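As one small concrete piece of such a pipeline, the sketch below classifies a cropped traffic-light region by classical HSV color thresholding with OpenCV. The article says the team combines deep learning with digital image processing; this shows only the simple image processing half, and the thresholds and function name are assumptions.

```python
# Illustrative traffic-light color check via HSV thresholding (OpenCV).
import cv2
import numpy as np

def detect_light_color(bgr_roi: np.ndarray) -> str:
    """Classify a cropped traffic-light region as 'red', 'green', or 'unknown'."""
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 180 in OpenCV's HSV, so combine two ranges.
    red = cv2.bitwise_or(
        cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)),
        cv2.inRange(hsv, (170, 120, 120), (180, 255, 255)),
    )
    green = cv2.inRange(hsv, (45, 120, 120), (90, 255, 255))
    red_px, green_px = cv2.countNonZero(red), cv2.countNonZero(green)
    if max(red_px, green_px) < 50:   # too few pixels for a confident call
        return "unknown"
    return "red" if red_px > green_px else "green"
```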

Drawing on terrain-change information and balancing the robot's stability against its mobility, the team plans gaits and smooth gait transitions across varied terrain. For typical terrain such as steps and stairs, they use multi-constraint optimization algorithms to plan stable walking gaits. By collecting force feedback from the robot's leg ends, they apply machine learning to identify the foot-ground contact model dynamically and in real time, so the robot can walk smoothly and adaptively on different terrains.
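To illustrate what learning a foot-ground contact model from leg-end force feedback might look like, the sketch below fits a logistic-regression contact classifier on synthetic force readings. The features, synthetic data, and model choice are assumptions; the article does not specify the team's actual method.

```python
# Hypothetical contact-model identification from leg-end force feedback.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
# Synthetic readings: vertical force (N) and its rate of change (N/s)
force = np.concatenate([rng.normal(5, 3, n), rng.normal(80, 20, n)])
dforce = np.concatenate([rng.normal(0, 5, n), rng.normal(0, 30, n)])
X = np.column_stack([force, dforce])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = swing phase, 1 = stance

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[60.0, 10.0]])[0, 1])  # P(contact) for a new reading
```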

Combining industry, academia, and research to help guide robots reach thousands of households.

The guide robot developed by Professor Gao Feng's team has entered the field-testing stage. Throughout the research, blind individuals have taken part in offline demonstrations and functional tests, and the team will keep developing and debugging the robot based on their real-time feedback. Putting guide robots into practical use involves more than the robots themselves; it also requires robust backend big data, a strong operations and maintenance team, and sustained testing and promotion. It is understood that the team is working closely with Sochen Technology to commercialize the robot in line with demand: Shanghai Jiao Tong University is responsible for basic theory and key technological breakthroughs, while Sochen Technology handles product engineering, industrialized operation, and promotion. Together, and with the help of broader social forces, the two sides are accelerating the practical deployment of the guide hexapod robot, helping improve the lives of visually impaired individuals in China.