The SciFi Lab at Cornell has made a breakthrough in body movement tracking with its miniature wristband camera, BodyTrak, which is capable of tracking full-body postures in 3-D. The team, led by Prof. Cheng Zhang, computing and information science, combined images of partial silhouettes with machine-learning techniques to recreate entire body postures of users.
The team published a research paper demonstrating the efficacy of this new technology. The prototype successfully tracked 13 different types of body movements with higher accuracy than conventional movement-tracking devices.
Technology for tracking and estimating body movements has become increasingly important due to its use in varied scenarios, from sports analysis to the study of movement disorders such as Parkinson's disease. Traditional tracking methods require instrumenting the surrounding environment, along with wearable devices that record users' movements. However, these methods tend to become ineffective when the subject is in motion or outdoors, making such devices difficult to use alongside daily activities.
The SciFi Lab's new prototype aims to solve these problems by minimizing the complexity of the monitoring setup. It employs a miniature camera that can be mounted on a smartwatch, aided by a customized deep neural network, a machine-learning model, that reconstructs the user's entire body pose.
“The key motivation is that we believe the future of smartwatches should be able to sense more than just wrist motion or number of steps. It should be able to capture full body poses that could provide a lot of information,” Zhang said.
BodyTrak achieves a high level of accuracy in reconstructing body movements without an elaborate setup. Additionally, it avoids the privacy concerns associated with other forms of movement tracking.
For example, a head-mounted camera would likely also record information from the user’s surroundings, which could include other people. However, in this prototype, the camera is pointed towards the body, so the chance of capturing background information is lower.
“This is a unique feature of BodyTrak. We design the way the camera is pointed at the body,” said Hyunchul Lim, co-author of the paper.
BodyTrak first uses a red-green-blue camera, a standard color camera, to capture and store partial body images, such as a partially covered image of the user's arm and torso. Due to the angle of the wrist-mounted camera, however, some body parts block others in the images, preventing all parts from being captured.
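The article doesn't describe BodyTrak's preprocessing pipeline, but the idea of reducing an RGB frame to a partial body silhouette can be sketched with simple grayscale thresholding. This is purely illustrative, not the team's actual method:

```python
import numpy as np

def silhouette_mask(rgb_frame, threshold=0.5):
    """Toy silhouette extraction: convert an RGB frame to grayscale,
    then threshold into a binary foreground mask.
    (Illustrative only -- not the BodyTrak preprocessing.)"""
    # Standard luminance weights for RGB-to-grayscale conversion
    gray = rgb_frame @ np.array([0.299, 0.587, 0.114])
    return (gray > threshold).astype(np.uint8)

# A synthetic 4x4 "frame" with a bright patch standing in for the arm
frame = np.zeros((4, 4, 3))
frame[1:3, 1:3] = 0.9  # bright foreground region
mask = silhouette_mask(frame)
```

In a real system the foreground would come from a learned segmenter rather than a fixed threshold; the point is only that each frame yields a partial view of the body.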
According to Zhang, BodyTrak's algorithm treats this occlusion not as noise but as useful input data.
“Our system takes occlusion as information. How your different body parts occlude each other can implicitly provide insights about the full body pose,” Zhang said.
The images are fed into BodyTrak's deep learning model, which is built on image-classification networks. Unlike traditional body-tracking technology, however, BodyTrak's networks are modified to serve a different purpose. Instead of simply classifying images, they are made to reconstruct the user's full body, trained on a dataset of full-body images.
To ensure that the model applies to as many users as possible, the team also modified the input before running it through the model, converting the data into a generalized form that holds across a larger set of users.
“People have different lengths of limbs, and we need to modify [the data] to account for this difference,” Lim said.
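The article doesn't detail how this generalization works, but one common way to make pose data comparable across users with different limb lengths is to center the skeleton on a root joint and rescale by a reference bone. A minimal sketch, assuming a hypothetical joint layout:

```python
import numpy as np

def normalize_skeleton(joints, root=0, ref_pair=(0, 1)):
    """Express joints relative to a root joint, then rescale so a
    reference bone has unit length, making the pose independent of
    the user's limb lengths. (A hypothetical scheme -- the paper's
    exact normalization isn't described in the article.)

    joints: (N, 3) array of 3-D joint positions.
    """
    centered = joints - joints[root]                  # root-relative coordinates
    a, b = ref_pair
    bone = np.linalg.norm(centered[a] - centered[b])  # reference limb length
    return centered / bone                            # scale-invariant pose

# Two users holding the same pose, one with 80% longer limbs
small = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.5, 1.0, 0.0]])
large = small * 1.8
```

After normalization, both skeletons map to the same representation, which is what lets one model serve users of different body sizes.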
The developments made by the team at the SciFi Lab could play a role in changing the way self-tracking devices, such as smartwatches, are built in the future.
“In the long term, human body poses are related to human activities,” Lim said. “There is currently a lack of tech to track full body poses, but with this new technology there is potential for this to be achieved much sooner.”