Using AI To Recreate The Human Movement-Learning Process

Written by Abigail Hodder (Reporter)

Researchers at the École Polytechnique Fédérale de Lausanne (EPFL; Switzerland) have developed an AI model that can learn to move like a human. Movement is an incredibly complicated biological phenomenon, requiring intense neurological and muscular coordination; existing computer tools struggle to capture this intricacy. However, for the first time, scientists have been able to simulate how we learn to move using the guidance of a new AI model. The results of this groundbreaking study could lead to the development of more advanced prosthetic limbs that can mimic natural movement.

The Mystery of Movement

While it’s seemingly simple to move a hand to reach for an object, there are a lot of different physiological, neurological, and psychological influences entangled in the process.

For example, the initial part of coordinating movement is purely neurological and psychological. The brain has to first process an object (visually, but also via other sensory components), compute its value (based on past experiences and intrinsic factors), evaluate the body’s internal status (i.e., things like hunger and thirst), and therefore judge the motivation behind moving to reach for the object.

After this, a center in the brain called the basal ganglia must coordinate the correct muscles at the right time to execute the movement, while also crucially maintaining balance and posture.

On top of this, the brain needs to carry out the movement given the surrounding context. This includes things like the direction needed to aim the hand toward the object and judging the object’s weight to predict how much force should be applied.

All this is to say, movement is a nuanced concept that is much more complicated than it would first appear. As such, delineating the biological mechanisms that orchestrate different types of movement remains a considerable, largely unmet challenge.

How humans learn to carry out these motor activities is an even less well understood area.

Clinically, this has a direct impact on individuals who suffer from loss of limbs, where incomplete understanding of the brain-muscle connection leads to poor prosthetic treatment options.

Will Movement Ever Be Understood?

So, what are the next steps? How can scientists investigate how humans learn motor skills when so many complicated factors are entangled?

Perhaps AI offers a beacon of light for the future. Scientists could use AI technology to isolate different biological factors and test how they affect movement acquisition, potentially filling in these significant knowledge gaps.

A groundbreaking study published in Neuron offers exciting insights into the field.

EPFL professor Alexander Mathis and his team developed a new AI model in response to the NeurIPS MyoChallenge set by Meta in 2022: to build an AI that could precisely rotate two Baoding balls (also known as Chinese medicine balls) in the ‘palm’ of a simulated hand controlled by 39 muscles.

Such a daunting task, seemingly impossible to achieve using conventional computer simulations, required an innovative AI solution – in this case, a machine learning model that combined biomechanical simulations with an entirely novel learning technique, referred to as curriculum-based reinforcement learning (RL).

Innovation Breakdown

But what exactly is this new technique, and how has it benefited this group’s research?

Well, Mathis and his team focused on optimizing a form of machine learning called model-free RL.

This is a technique in which an ‘agent’ interacts with its environment with no predefined dataset.

The agent goes through a process of trial and error to achieve a desired outcome (like manipulating Baoding balls), guided by a ‘reward’ – a numerical feedback signal.

This is different from supervised learning, another type of machine learning, where the AI is trained on a labeled dataset with defined ‘input’ and ‘output’ pairs.
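The trial-and-error loop described above can be sketched in a few lines. The following toy example (not the EPFL team's code; the environment, action count, and learning rate are all invented for illustration) shows the essence of model-free RL: an agent repeatedly tries actions, receives only a numerical reward, and gradually learns which action is best – no labeled input/output pairs ever appear.

```python
import random

random.seed(0)  # for a reproducible run

def reward(action):
    """Hidden environment: action 2 is the best choice,
    but the agent is never told this directly."""
    return 1.0 if action == 2 else 0.0

def train(n_actions=5, episodes=500, epsilon=0.1, lr=0.1):
    # Estimated value of each action, refined purely by trial and error.
    values = [0.0] * n_actions
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.randrange(n_actions)          # explore
        else:
            action = max(range(n_actions), key=lambda a: values[a])  # exploit
        r = reward(action)                                # numerical feedback
        values[action] += lr * (r - values[action])       # update estimate
    return values

values = train()
best = max(range(len(values)), key=lambda a: values[a])
print(best)  # the agent has discovered the rewarding action: 2
```

The key point is that the only training signal is the reward number itself – exactly the kind of feedback the agent receives when attempting to manipulate the simulated Baoding balls.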

However, when applying pre-existing RL techniques to the Baoding ball task, the group saw a high failure rate.

They decided to change their approach; instead of solely focusing on computer optimization, they took inspiration from the way that humans learn to perform some tasks and integrated this with the RL algorithm.

This led to the development of an entirely novel AI methodology, described in this paper as curriculum-based RL. In this paradigm, the whole action (rotating the Baoding balls) is broken down into individual static parts.

Essentially, the agent first learns to hold the balls stationary at different angles and then, once these stationary states are mastered, learns to transition between these configurations dynamically.
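The staging described above can be sketched schematically. The snippet below is an assumed outline, not the EPFL implementation: the pose angles are hypothetical and the two training functions are stubs that stand in for full RL training runs. What it shows is the curriculum structure itself – static skills are mastered first, and only then are dynamic transitions between already-mastered poses learned.

```python
# Hypothetical hold angles (degrees) at which the balls are kept stationary.
STATIC_POSES = [0, 45, 90, 135]

def train_static(pose):
    """Stage 1: learn to hold the balls stationary at one angle.
    Stub: returns a 'policy' that simply reproduces the target pose."""
    return lambda: pose

def train_transition(policy_a, policy_b):
    """Stage 2: learn to move between two already-mastered poses.
    Stub: linearly interpolates between the two static policies."""
    a, b = policy_a(), policy_b()
    return lambda t: a + t * (b - a)  # t in [0, 1] parameterizes the motion

# Curriculum order matters: static skills first...
static_policies = [train_static(p) for p in STATIC_POSES]

# ...then dynamic transitions between consecutive mastered configurations.
transitions = [train_transition(p, q)
               for p, q in zip(static_policies, static_policies[1:])]

print(transitions[0](0.5))  # halfway through the 0 -> 45 degree transition: 22.5
```

The design choice mirrors how people practice a difficult manual skill: secure each position before chaining positions into fluid motion.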

The results were impressive, recreating naturalistic movements that aligned closely with biological motor control. Indeed, in the first phase of the MyoChallenge competition, the AI achieved an impressive 100% success rate.

Looking Ahead

The work of this team from EPFL has broad implications for the future of neurology. Being able to accurately simulate motor control could open the door to finally understanding how the nervous system orchestrates complex movements. Clinically speaking, this could significantly help professionals understand, diagnose, and treat movement disorders and neuroprosthetic patients.