MIT Reality Virtually Hackathon 2019


[Screen grab 1: Bodies-In-Motion]

At the 2019 Reality Virtually Hackathon, our team, Bodies-In-Motion, created a VR physical therapy aid.

Our goal was to use computer vision to create a real-time virtual avatar in VR space that informs users about their movement and corrects improper form against a virtual model. We used PoseNet, a machine learning model, in conjunction with TensorFlow.js for real-time human pose estimation. The benefit of these open-source frameworks is that they run in a browser, so a laptop or smartphone can serve as the computer vision device.
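As a rough illustration of this step (not the team's exact code), the sketch below uses the public @tensorflow-models/posenet package to estimate a single pose from a webcam video element on each animation frame; the function name and options shown are assumptions.

```typescript
import '@tensorflow/tfjs';
import * as posenet from '@tensorflow-models/posenet';

// Hypothetical sketch: run PoseNet in the browser on a webcam <video> element
// and log the detected keypoints once per animation frame.
async function trackPose(video: HTMLVideoElement): Promise<void> {
  // Load PoseNet with its default MobileNet backbone (fast enough for real time).
  const net = await posenet.load();

  const loop = async () => {
    // Estimate one person's pose; each keypoint is a named joint
    // (e.g. "leftShoulder") with an (x, y) position and a confidence score.
    const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
    console.log(pose.keypoints.map(k => `${k.part}: ${k.position.x}, ${k.position.y}`));
    requestAnimationFrame(loop);
  };
  requestAnimationFrame(loop);
}
```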

The pose estimates are then sent to the Unity engine over WebSockets using Node.js.
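A minimal sketch of such a relay, assuming the Node.js "ws" package (the actual bridge in the project may differ): the browser sends each PoseNet frame as JSON, and the server forwards it to the connected Unity client.

```typescript
import WebSocket, { WebSocketServer } from 'ws';

// Hypothetical relay: browser posts pose frames, Unity listens for them.
const server = new WebSocketServer({ port: 8080 });
const clients = new Set<WebSocket>();

server.on('connection', (socket) => {
  clients.add(socket);
  socket.on('close', () => clients.delete(socket));

  socket.on('message', (data) => {
    // Broadcast the pose frame to every other client (e.g. the Unity scene).
    for (const client of clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});
```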

[Screen grab 2: Bodies-In-Motion]

The user is now able to see their real-time body movement in virtual reality. We then created visual guides of a body performing an exercise (standing at rest, hands up, and squat) with points of interest for the user to match. As the user matches the visual guide, the points of interest turn from red to blue. We used Euclidean distance to measure the difference between the user and the visual guide to drive the color-changing visualization, as sketched below.
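A small sketch of that matching logic (names and the distance tolerance are assumptions, not the project's actual values): each user joint is compared with the corresponding guide point, and the distance is mapped to a red-to-blue color, blue meaning a close match.

```typescript
interface Point2D { x: number; y: number; }

// Straight-line distance between a user joint and the guide's point of interest.
function euclideanDistance(a: Point2D, b: Point2D): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Fade from blue (matched) to red (far); `maxDistance` is a hypothetical
// tolerance that would be tuned per exercise.
function matchColor(user: Point2D, guide: Point2D, maxDistance = 50): [number, number, number] {
  const t = Math.min(euclideanDistance(user, guide) / maxDistance, 1); // 0 = match, 1 = far
  return [255 * t, 0, 255 * (1 - t)]; // red grows with distance, blue shrinks
}
```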

Here is the GitHub repository for the project.

Here is our DevPost publication.

Created with: Daniel Bryand, Brianne Baker, and Tania De Gasperis