Now, a computer that can read your body language
Published: 07 Jul 2017, 18:07
Scientists have developed a computer that understands the body movements of multiple people from a video in real time, including the pose of each individual's fingers.
This ability to recognise poses will make it possible for people to interact with computers in new and more natural ways, such as communicating with computers simply by pointing at things, researchers said.
Researchers at Carnegie Mellon University in the US developed a new method at the Panoptic Studio, a two-story dome embedded with 500 video cameras.
The insights gained from experiments now make it possible to detect the pose of a group of people using a single camera and a laptop computer, researchers said.
Yaser Sheikh, associate professor at Carnegie Mellon University, said these methods for tracking two dimensional (2D) human form and motion open up new ways for people and machines to interact with each other, and for people to use machines to better understand the world around them.
Detecting the nuances of nonverbal communication between individuals will allow robots to serve in social spaces, perceiving what people around them are doing, what moods they are in and whether they can be interrupted.
A self-driving car, for instance, could get an early warning that a pedestrian is about to step into the street by monitoring body language, researchers said.
Enabling machines to understand human behaviour also could enable new approaches to behavioural diagnosis and rehabilitation for conditions such as autism, dyslexia and depression, they said.
“We communicate almost as much with the movement of our bodies as we do with our voice. But computers are more or less blind to it,” Sheikh said.
In sports analytics, real-time pose detection will make it possible for computers not only to track the position of each player on the field of play, as is now the case, but to also know what players are doing with their arms, legs and heads at each point in time.
The methods can be used for live events or applied to existing videos.
To encourage more research and applications, the scientists have released their computer code for both multiperson and hand-pose estimation.
Tracking multiple people in real time, particularly in social situations where they may be in contact with each other, presents a number of challenges.
Simply using programmes that track the pose of an individual does not work well when applied to each individual in a group, particularly when that group gets large.
Sheikh and his colleagues took a bottom-up approach, which first localises all the body parts in a scene - arms, legs, faces, etc - and then associates those parts with particular individuals.
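The bottom-up idea described above can be sketched in a few lines: detect every body part in the scene first, then associate parts with person instances. This is only an illustrative toy, assuming a plain greedy nearest-centroid assignment; the researchers' released code uses learned association cues rather than simple distance, and all names and thresholds here are hypothetical.

```python
# Toy sketch of bottom-up grouping: all body parts are detected first,
# then greedily assigned to person instances by proximity to each
# instance's centroid. Names, threshold, and logic are illustrative only;
# the actual released system uses learned part-association signals.
from dataclasses import dataclass, field


@dataclass
class Person:
    parts: dict = field(default_factory=dict)  # part name -> (x, y)

    def centroid(self):
        xs = [p[0] for p in self.parts.values()]
        ys = [p[1] for p in self.parts.values()]
        return (sum(xs) / len(xs), sum(ys) / len(ys))


def group_parts(detections, max_dist=80.0):
    """detections: list of (part_name, x, y) for the whole scene."""
    people = []
    for name, x, y in detections:
        best, best_d = None, max_dist
        for person in people:
            cx, cy = person.centroid()
            d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            # each person can have at most one of each part
            if d < best_d and name not in person.parts:
                best, best_d = person, d
        if best is None:
            best = Person()
            people.append(best)
        best.parts[name] = (x, y)
    return people


# Two people standing apart: their parts cluster into two instances.
detections = [
    ("head", 100, 40), ("l_hand", 80, 90), ("r_hand", 120, 90),
    ("head", 400, 50), ("l_hand", 380, 100),
]
people = group_parts(detections)
print(len(people))  # 2
```

The key design point the paragraph hints at is that the per-part detections are shared across everyone in the scene, so the cost does not blow up as the group grows the way running a single-person tracker per individual would.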