The humanoid robot Emo can predict almost a second in advance whether someone is about to smile, and then smile back. Its creators hope the technology will make interacting with robots feel more lifelike.
Although AI can now mimic human language impressively, interactions with physical robots are often awkward, in part because robots cannot reproduce the complex nonverbal cues that are crucial to communication.
Hod Lipson of Columbia University in New York and his colleagues have created a robot called Emo that uses AI models and high-resolution cameras to predict human facial expressions and try to copy them. The robot can predict whether someone will smile about 0.9 seconds before the smile occurs, and smile back in sync.
Emo’s face has cameras in its eyeballs, and 23 separate motors are attached to its flexible plastic skin with magnets. The robot uses two neural networks: one to look at people’s faces and predict their expressions, and one to create its own facial expressions.
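The article doesn't publish Emo's architecture, but the pipeline it describes, a perception network that anticipates a person's expression feeding a second network that turns a target expression into motor commands, could look roughly like the sketch below. All class names, layer sizes, and the expression embedding are illustrative assumptions; only the 23-motor output count comes from the description above.

```python
import torch
import torch.nn as nn

NUM_MOTORS = 23  # the article says 23 motors drive Emo's skin


class ExpressionPredictor(nn.Module):
    """Watches a short window of encoded face frames and predicts the
    expression the person is about to make (~0.9 s ahead).
    Layer choices here are illustrative, not Emo's actual design."""

    def __init__(self, frame_features=128, hidden=256, expr_dim=32):
        super().__init__()
        self.rnn = nn.GRU(frame_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, expr_dim)

    def forward(self, frames):           # frames: (batch, time, features)
        _, h = self.rnn(frames)
        return self.head(h[-1])          # predicted expression embedding


class InverseFaceModel(nn.Module):
    """Maps a target expression to the motor commands that should
    reproduce it on the robot's own face (the 'self-model')."""

    def __init__(self, expr_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(expr_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, NUM_MOTORS),
            nn.Tanh(),                   # motor commands scaled to [-1, 1]
        )

    def forward(self, expression):
        return self.net(expression)


# Hypothetical inference step: anticipate the expression, then actuate.
predictor, face_model = ExpressionPredictor(), InverseFaceModel()
frames = torch.randn(1, 30, 128)         # stand-in for 30 encoded camera frames
motor_commands = face_model(predictor(frames))
print(motor_commands.shape)              # torch.Size([1, 23])
```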
The first network was trained on YouTube videos of people making faces, while the second was trained on footage of the robot itself making faces on a live camera. “He’s learning what his face looks like when he’s moving all those muscles,” says Lipson. “It’s like a person in front of a mirror: even when you close your eyes and smile, you know what your face looks like.”
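That "mirror" stage, where the robot learns which motor commands produce which expression on its own face, is a form of self-supervised learning. A minimal sketch of such a training loop is below, reusing the hypothetical InverseFaceModel from the earlier snippet; the encoder, data, and hyperparameters are all stand-ins, not Emo's published training code.

```python
import torch
import torch.nn as nn


def train_self_model(face_model, expr_encoder, camera_frames, commands,
                     epochs=10, lr=1e-3):
    """Fit the self-model on the robot's own mirror data.

    face_model:    InverseFaceModel from the earlier sketch.
    expr_encoder:  any frozen network mapping a face image to an
                   expression embedding (an assumption, not Emo's).
    camera_frames: images of the robot's own face, one per command.
    commands:      the 23-motor commands that produced each frame.
    """
    opt = torch.optim.Adam(face_model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    with torch.no_grad():
        # What each motor pose looked like on camera.
        expressions = expr_encoder(camera_frames)
    for _ in range(epochs):
        opt.zero_grad()
        # Ask the self-model to recover the commands from the expressions.
        loss = loss_fn(face_model(expressions), commands)
        loss.backward()
        opt.step()
    return face_model
```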
Lipson and his team hope that Emo’s technology will improve human-robot communication, but first they need to expand the range of expressions the robot can make. Lipson says they also hope to teach the robot to make facial expressions in response to what people say, rather than simply imitating another person.
Source: New Scientist