Two years ago, Kaveh Hamed watched his son Nikaan take his first steps on his own, the one-year-old's body teetering on wobbly legs. Over the months, Nikaan went from crawling on his belly to standing with a sway, to those first steps, to taking off across the room on two sure feet.

These memories make Hamed think about math, as he does when he watches his dog Telli run. When he sees her bound toward him and switch to a trot, he starts to wonder again about the ways he might impart her agility to a robot. For more than 10 years, Hamed has developed control algorithms that enable legged robots to walk and run more like humans and animals.

Seeing Nikaan and Telli in motion reminds him that there's so much left to learn. “It seems like a simple problem,” said Hamed. “We do these things every day – we walk, run, climb stairs, step over gaps. But translating that to math and robots is challenging.”

Hamed joined Virginia Tech last year as an assistant professor in the Department of Mechanical Engineering in the College of Engineering and as head of the Hybrid Dynamic Systems and Robot Locomotion Lab. Since then, he and his research team have worked to enhance bio-inspired locomotion in robots, alongside collaborators from within the department and from other universities around the country. They are currently working on four projects funded by the National Science Foundation, all of which draw inspiration from humans or animals and focus on software development. 

One of their projects applies principles of bipedal (two-legged) locomotion to the resilient control of powered prosthetic legs. Hamed’s research team is collaborating with Robert Gregg, an associate professor in the Department of Electrical Engineering and Computer Science at the University of Michigan, to develop decentralized control algorithms for Gregg’s model of a powered prosthetic leg. The other three projects revolve around quadrupedal (four-legged) robots and the combined use of sensors, control algorithms, and artificial intelligence to improve the agility, stability, and dexterity of robotic dogs, as well as their responsiveness to their surroundings and one another.
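As a rough illustration of what "decentralized" means in this context, the sketch below shows a prosthetic-knee controller that relies only on measurements local to the leg – its own joint state and an estimated stride phase – rather than the full state of the wearer. This is a generic impedance-style example, not Gregg's controller; the gains and reference trajectory are all invented:

```python
import math

# Decentralized control in miniature: the knee controller sees only the
# prosthetic leg's own joint angle, joint velocity, and an estimated
# stride phase. No whole-body state is needed. Illustrative only; the
# constants and reference shape below are invented.

K_STIFF, B_DAMP = 60.0, 2.5   # joint stiffness (N*m/rad) and damping (assumed)

def knee_reference(phase):
    """Hypothetical desired knee angle (rad) over one stride, phase in [0, 1)."""
    return 0.2 + 0.8 * math.sin(math.pi * phase) ** 2

def knee_torque(theta, theta_dot, phase):
    """Local feedback only: track the phase-indexed reference angle."""
    return -K_STIFF * (theta - knee_reference(phase)) - B_DAMP * theta_dot

# One control instant: joint at 0.4 rad, moving slowly, mid-stride
print(f"{knee_torque(0.4, 0.1, 0.5):.1f} N*m")
```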

Video: Researchers design software, creating robots to help humans – https://video.vt.edu/media/Researchers+design+software%2C+creating+robots+to+help+humans/1_d0chew2k...

Hamed said that though more legged robots are being built every year, the field has a long way to go before robots can match the agility of their two- and four-legged sources of inspiration.

“We believe that the agility we see in animal locomotion – such as in a dog, a cheetah, or a mountain lion – cannot currently be closely pursued by robots, even state-of-the-art ones,” he said. “Robot technology is advancing rapidly, but there is still a fundamental gap here between what we see in robots and what we see in their biological counterparts.”

Sourcing inspiration from four-legged friends

In his work with robotic dogs, Hamed aims to fill the gap by developing advanced and intelligent control algorithms that underlie the agility and stability of animal locomotion. Integrating advanced feedback control algorithms and mathematical optimization techniques with sensor data, his approach is grounded in the basic biology of animals.

Balance control for vertebrates, for instance, happens mostly in the spinal cord, where oscillatory neurons communicate with one another to generate rhythmic motion. It’s a natural function – the reason why legged animals and humans can close their eyes and still walk, explained Hamed. But in order to navigate more complex environments – like a set of stairs or boulders – both humans and animals need vision, and we need the brain to interpret what we see.
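That spinal rhythm is often modeled in robotics as a network of coupled oscillators, sometimes called a central pattern generator. The sketch below is a minimal, generic illustration of the concept rather than the lab's algorithms; the frequency, coupling gain, and phase offsets are all invented for the example:

```python
import numpy as np

# Minimal central-pattern-generator sketch: four phase oscillators, one
# per leg, coupled so they settle into a trot-like phase relationship.
# Illustrative only; all constants are invented. Leg order: FL, FR, RL, RR.

OMEGA = 2 * np.pi * 1.5                     # stepping frequency, 1.5 Hz (assumed)
K = 4.0                                     # coupling strength (assumed)
TROT = np.array([0.0, np.pi, np.pi, 0.0])   # desired phase offsets for a trot

def cpg_step(phases, dt=0.01):
    """Advance all leg phases by one time step of Kuramoto-style coupling."""
    dphi = np.full(4, OMEGA)
    for i in range(4):
        for j in range(4):
            # pull oscillator i toward its assigned offset relative to j
            dphi[i] += K * np.sin((phases[j] - TROT[j]) - (phases[i] - TROT[i]))
    return phases + dphi * dt

phases = np.random.uniform(0, 2 * np.pi, 4)   # start the legs out of sync
for _ in range(2000):                         # ~20 s of simulated time
    phases = cpg_step(phases)

hip_targets = 0.3 * np.sin(phases)            # rhythmic joint targets (amplitude assumed)
print(np.round((phases - phases[0]) % (2 * np.pi), 2))   # converges to ~[0, 3.14, 3.14, 0]
```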

Photo: Hamed shares an image of his six-year-old dog Telli in high-speed motion, a source of inspiration for his bio-inspired thinking around legged locomotion.

Hamed’s research team uses sensors and robust control algorithms to create similar effects in their robotic dogs. They use encoders – sensors attached to joints to read their position relative to one another – as well as inertial measurement units – sensors measuring the robot body’s orientation with respect to the ground – to recreate the balance and motion control that comes naturally to vertebrates. The team also attaches cameras and lidar – a laser-based technology for more precise mapping of the environment – to make use of machine vision, which better informs each robot’s contact with or avoidance of obstacles.
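As a rough picture of how such sensor readings get combined, the snippet below implements a textbook complementary filter that blends a gyroscope's rate signal with the accelerometer's gravity cue to estimate body pitch. It is a generic sketch, not the team's estimator; the blend gain and sample values are assumptions:

```python
import math

# Complementary filter: fuse raw IMU readings into a pitch estimate.
# The gyro integral is smooth but drifts; the accelerometer's gravity
# direction is noisy but drift-free. Blending the two gives a usable
# orientation signal. Gain and data below are invented.

ALPHA = 0.98   # how much to trust the gyro integral (assumed)

def fuse_pitch(pitch, gyro_y, accel_x, accel_z, dt):
    gyro_pitch = pitch + gyro_y * dt              # integrate turn rate (rad/s)
    accel_pitch = math.atan2(-accel_x, accel_z)   # gravity cue (sign convention varies)
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch

pitch = 0.0
# 1 second of fake 100 Hz IMU samples: slight nose-up rotation
for gyro_y, accel_x, accel_z in [(0.02, -0.10, 9.80)] * 100:
    pitch = fuse_pitch(pitch, gyro_y, accel_x, accel_z, dt=0.01)
print(f"estimated pitch: {math.degrees(pitch):.2f} deg")
```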

Hamed’s team has outfitted three robotic dogs, built by Ghost Robotics, a company that specializes in legged robots, with these sensors and has used them to test its newly developed, intelligent, and robust control algorithms. Once the robots have taken measurements of their own motion and their environment, the idea is to get them to act accordingly: on-board computers calculate the robust control actions the robots should use to steer themselves from point A to point B.
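In miniature, that point-A-to-point-B loop looks like the sketch below: read the robot's pose, compute a steering command, act, repeat. It uses a simple proportional law on a toy wheeled-robot model purely to illustrate the sense-compute-act cycle – the actual quadrupeds run far more sophisticated, robust controllers – and the gains are invented:

```python
import math

# Toy "point A to point B" loop: proportional control of forward speed
# and turn rate on a unicycle model. Illustrative only; gains invented.

K_LIN, K_ANG, DT = 0.8, 2.0, 0.05

def control_step(x, y, heading, goal):
    """One cycle: sense pose, compute commands, integrate the motion."""
    dx, dy = goal[0] - x, goal[1] - y
    dist = math.hypot(dx, dy)
    err = math.atan2(dy, dx) - heading
    err = math.atan2(math.sin(err), math.cos(err))   # wrap to [-pi, pi]
    v = min(K_LIN * dist, 1.0)                       # forward speed, capped
    w = K_ANG * err                                  # turn rate toward the goal
    return (x + v * math.cos(heading) * DT,
            y + v * math.sin(heading) * DT,
            heading + w * DT)

x, y, heading = 0.0, 0.0, 0.0   # point A
goal = (3.0, 2.0)               # point B
for _ in range(400):            # 20 s at 20 Hz
    x, y, heading = control_step(x, y, heading, goal)
print(f"final position: ({x:.2f}, {y:.2f})")   # close to (3.00, 2.00)
```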

So far, the researchers have simulated and begun testing several different gaits mirroring those of real animals. The robotic dogs have begun to amble, trot, and run at sharper angles with more agility, balance, and speed. The team is also exploring the integration of artificial intelligence into their control algorithms to improve the robots’ real-time decision-making in real-world environments.
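One common way to encode such gaits – shown here purely as an illustration, not as the team's implementation – is a table of per-leg phase offsets plus a duty factor (the fraction of each stride a foot spends on the ground), with a rule for picking a gait from the commanded speed. All the numbers below are invented:

```python
# Gaits as per-leg phase offsets (fractions of a stride) and duty factors.
# Offsets follow common quadruped patterns; speed thresholds are invented.
# Leg order: FL, FR, RL, RR.
GAITS = {
    "amble": ((0.0, 0.5, 0.25, 0.75), 0.65),   # four-beat walking gait
    "trot":  ((0.0, 0.5, 0.5,  0.0),  0.50),   # diagonal pairs move together
    "run":   ((0.0, 0.5, 0.6,  0.1),  0.35),   # short, fast stance phases
}

def pick_gait(speed):
    """Hypothetical speed-based gait selection (thresholds in m/s)."""
    if speed < 0.5:
        return "amble"
    return "trot" if speed < 1.5 else "run"

def foot_in_stance(stride_phase, leg, gait):
    """True while this leg's foot should be on the ground."""
    offsets, duty = GAITS[gait]
    return (stride_phase + offsets[leg]) % 1.0 < duty

gait = pick_gait(1.0)                                          # -> "trot"
contacts = [foot_in_stance(0.3, leg, gait) for leg in range(4)]
print(gait, contacts)   # trot: diagonal pair FL+RR in stance, FR+RL in swing
```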

Collaborating with partners from Virginia Tech and beyond

Hamed leans on collaboration to adopt new concepts like artificial intelligence and safety-critical control algorithms. In two of his projects, he collaborates with Aaron Ames, Professor of Mechanical and Civil Engineering at Caltech. Together, they aim to develop the next generation of intelligent, safe, and robust control algorithms that will enable agile locomotion of quadrupedal and bipedal robots in complex environments. They also aim to build upon this work with swarms of legged robots, by creating distributed feedback control algorithms that enable legged robots to coordinate their motion in collaborative tasks.
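A classic, minimal picture of the distributed idea: each robot adjusts its own heading using only what its neighbors report, and the group aligns with no central coordinator. The consensus sketch below illustrates that principle – it is not the algorithms Hamed and Ames are developing, and the communication graph and gain are assumptions:

```python
import numpy as np

# Heading consensus over a 3-robot line graph: each robot nudges its
# heading toward its neighbors' headings using local information only.
# Illustrative of distributed feedback control; constants are invented.

NEIGHBORS = {0: [1], 1: [0, 2], 2: [1]}   # who can talk to whom
GAIN, DT = 1.0, 0.1

headings = np.array([0.0, 1.2, -0.8])     # initial headings in radians
for _ in range(100):
    updates = np.zeros_like(headings)
    for i, nbrs in NEIGHBORS.items():
        # robot i uses only its own state and its neighbors' states
        updates[i] = GAIN * sum(headings[j] - headings[i] for j in nbrs)
    headings = headings + updates * DT
print(np.round(headings, 3))              # all converge to the average, ~0.133
```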

Recently, Alex Leonessa, professor of Mechanical Engineering at Virginia Tech, joined Ames and Hamed on a project that adapts distributed control algorithms for the cooperative locomotion of robot guide dogs and humans.

“I learn from collaboration,” said Hamed. “That is what we are doing to advance knowledge. Well-known companies are doing amazing things right now, but you can’t see what they do. We would like to learn from science and math and share what we find. As we publish, we can say to other universities: ‘These are the algorithms we use. How can you expand upon them?’”

Hamed sees the potential benefits and real-world applications of these enhancements in terms of mobility, assistive capability, and a combination of the two. With more than half of the Earth’s land surface unreachable by wheeled vehicles, agile legged robots could better navigate rough, steep terrain, like that of mountains or woods. In homes and offices, the ground is flat and mostly predictable, but ladders and stairs designed for bipedal walkers still limit robots. Hamed believes it’ll be important to ensure that robots can handle the same conditions if they are to live with and assist people with limited mobility. And as robots support or replace humans in emergency response – a rescue mission at a factory fire, for instance – they’ll benefit from deft use of their legs.

Hamed finds the first tests on control algorithms with his robotic dogs promising, but the development of those algorithms will be an ongoing process. “Are the algorithms we’re using actually bio-inspired?” Hamed has asked himself. “Are they actually acting like dogs? We are trying to do the math. But it must be bio-inspired. We must look at animals and then correct our algorithms – to see how they react to this scenario and how our control algorithms react.”

- Written by Suzanne Irby
