Teaching a Robot Dog New Tricks with Dr. Alex Li
In collaboration with Zhejiang University, Alex Li and team have created a quick-thinking quadruped that can use pre-learned skills to respond and adapt to previously unseen situations.
The research group, which comprises researchers from Zhejiang University, China, and the School of Informatics, aims to ensure the resilience of autonomous robots by developing a new learning architecture that enables robots to create new functions from a group of taught, representative skills. Their latest paper, which proposes a multi-expert learning architecture (MELA), has been published as the cover paper of the December issue of Science Robotics. The work marks a milestone in robotics research, offering a vision of fully independent robots that do not need humans to intercede when presented with a new or unexpected obstacle.
There is no way to predict or simulate every scenario a robot could encounter while fulfilling a task, so it must be equipped with adaptive skills that can be applied effectively in unexpected situations. The solution developed by the project is a new learning process, whereby deep neural networks (DNNs) are combined to produce a new kind of adaptive ability.
In combining several DNNs, the researchers found the resulting system to be more than the sum of its parts: it was able to learn new skills that none of the DNNs possessed individually. This new system is termed the multi-expert learning architecture (MELA).
Our new research reports a MELA that contains a group of specialised deep neural networks (the players) together with a gating network (the coach), similar to a soccer team. After training together, the "players" refine their skills and the "coach" learns how to coordinate the team as a whole, so that they can solve tasks they have not seen before. This is a new milestone in robotics and AI, as robots are able to deal with new problems they have not experienced before.
MELA was tested rigorously using Jue-Ying, a quadruped robot, which was subjected to a variety of previously unseen terrains and counterbalance disturbances. The robot was first taught a distinct set of pretrained experts, or skills, each in a different DNN; this initialised MELA. The robot then learned to combine these distinct DNNs using a gating neural network (GNN), which enabled MELA to develop more specialised expert skills and adaptive movements.
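The expert-plus-gating idea described above can be sketched in a few lines. The snippet below is a minimal, illustrative toy, not the authors' implementation: it assumes experts are simple policies mapping an observation to an action, and that the gating network outputs softmax weights used to blend the experts' outputs into one adaptive action. (In the published MELA, blending operates on the expert networks themselves; output blending is used here purely to keep the sketch short.)

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax producing blending weights that sum to 1."""
    e = np.exp(x - x.max())
    return e / e.sum()

class Expert:
    """Toy stand-in for one pretrained expert DNN (e.g. trot, steer, fall-recover)."""
    def __init__(self, obs_dim, act_dim):
        self.W = rng.normal(size=(act_dim, obs_dim)) * 0.1

    def __call__(self, obs):
        return np.tanh(self.W @ obs)  # bounded joint-command-like output

class GatingNet:
    """Toy stand-in for the gating network ('coach'): state -> expert weights."""
    def __init__(self, obs_dim, n_experts):
        self.W = rng.normal(size=(n_experts, obs_dim)) * 0.1

    def __call__(self, obs):
        return softmax(self.W @ obs)

def blended_action(obs, experts, gate):
    # State-dependent mixture of expert outputs; the blend can realise
    # behaviours that no single expert produces on its own.
    weights = gate(obs)
    return sum(w * e(obs) for w, e in zip(weights, experts))

obs_dim, act_dim = 8, 3
experts = [Expert(obs_dim, act_dim) for _ in range(4)]
gate = GatingNet(obs_dim, len(experts))

obs = rng.normal(size=obs_dim)        # current robot state (hypothetical features)
action = blended_action(obs, experts, gate)
```

In training, the gating weights and the experts would be optimised jointly, which is how the "coach" learns to coordinate the "players" as a team.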
The ability to blend multiple DNNs to produce adaptive behaviours meant that Jue-Ying could successfully react to and overcome unforeseen situations. The robot performed coherent trotting, steering and fall recovery on its own, demonstrating successful multiskill locomotion without human intervention.
Watch a demonstration of the Jue-Ying robot in training:
- Video: Teaching a Robot Dog New Tricks with Dr. Alex Li
- Demonstration of multi-expert learning architecture (MELA) using a quadruped robot in a variety of environments, e.g. rough terrain and counterbalance disturbances.
Video credit: Yang et al., Sci. Robot. 5, eabb2174 (2020)