
FlowAR: How Different Augmented Reality Visualizations of Online Fitness Videos Support Flow for At-Home Yoga Exercises (CHI '23)

Hye-Young Jo, Laurenz Seidel, Michel Pahud, Mike Sinclair, and Andrea Bianchi.

Abstract

Online fitness video tutorials are an increasingly popular way to stay fit at home without a personal trainer. However, to keep the screen playing the video in view, users typically disrupt their balance and break the motion flow: two main pillars for the correct execution of yoga poses. While past research partially addressed this problem, these approaches supported only a limited view of the instructor and simple movements. To enable the fluid execution of complex full-body yoga exercises, we propose FlowAR, an augmented reality system for home workouts that shows training video tutorials as always-present virtual static and dynamic overlays around the user. We tested different overlay layouts in a study with 16 participants, using motion capture equipment for baseline performance. Then, we iterated the prototype and tested it in a furnished lab simulating home settings with 12 users. Our results highlight the advantages of different visualizations and the system's general applicability.


 

Problem: Disturbed Motion Flow from Fixed Display in At-Home Yoga

Fitness videos are a popular and affordable way to exercise at home, but keeping the fixed screen in view can distract users and interrupt the flow of their movements. This is especially problematic for yoga, which requires fluid transitions between poses.


To mitigate this problem in yoga and other fitness activities, researchers have employed displays that physically move along with the user or that simulate the instructor's movements (e.g., a moving projected screen, or a head-mounted display showing the instructor's first-person view). However, these approaches require recording the instructor's movements with a motion capture system, so they cannot directly leverage the large number of fitness videos already available online. Furthermore, their limited viewing windows restrict them to simple, small motions, making them unsuitable for complex full-body yoga poses.



Solution: HMD with a Virtual Screen Overlay

We propose FlowAR, an augmented reality training system that supports motion flow via virtual screen overlays. FlowAR allows users to practice multi-directional yoga movements with a simple setup of a head-mounted display that renders online fitness videos as virtual screen overlays.


Then, "How can we visualize a virtual screen to improve the quality of the yoga exercise?"


Here, we introduce four screen layout configurations: two static layouts (Front and Circular) and two dynamic layouts (User-anchored and Trainer-anchored).


Layout ① Front (baseline)

The Front layout replicates a typical video-based workout setup: a single static screen in front of the user.


Layout ② Circular

The Circular layout arranges static screens in a circle around the user, letting them selectively view a screen based on their posture.


Layout ③ User-anchored

The User-anchored layout stays in front of the user's face, like a head-up display (HUD).


Layout ④ Trainer-anchored

The Trainer-anchored layout positions a screen where the virtual trainer is looking.

(The virtual trainer can be generated via motion capture or via 3D pose estimation from video; we demonstrate both approaches in separate studies.)
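
For concreteness, the sketch below shows one plausible way to compute the per-frame screen placement for each of the four layouts. This is a minimal geometric illustration, not the actual implementation: the function, its parameters, the eye-height offset, and the number of screens in the ring are all our own assumptions.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def screen_placements(layout, head_pos, head_fwd,
                      trainer_head_pos, trainer_gaze,
                      start_pos=np.zeros(3), dist=2.0):
    """Return (screen_center, screen_normal) pairs for one frame.

    All arguments are hypothetical 3D world-space vectors (numpy arrays);
    'dist' is an assumed screen distance in meters. Normals face the viewer.
    """
    eye = np.array([0.0, 1.5, 0.0])  # assumed eye-height offset from the floor
    if layout == "front":            # static: one screen at the user's starting front
        center = start_pos + eye + np.array([0.0, 0.0, dist])
        return [(center, unit(start_pos + eye - center))]
    if layout == "circular":         # static: a ring of screens around the start position
        ring = []
        for ang in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False):
            offset = dist * np.array([np.sin(ang), 0.0, np.cos(ang)])
            ring.append((start_pos + eye + offset, unit(-offset)))
        return ring
    if layout == "user":             # dynamic: anchored in front of the user's face (HUD)
        center = head_pos + dist * unit(head_fwd)
        return [(center, -unit(head_fwd))]
    if layout == "trainer":          # dynamic: placed where the virtual trainer is looking
        center = trainer_head_pos + dist * unit(trainer_gaze)
        return [(center, -unit(trainer_gaze))]
    raise ValueError(f"unknown layout: {layout}")
```

Note how the two static layouts depend only on the user's starting position, while the two dynamic layouts must be recomputed every frame from the user's or the trainer's head pose.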



Study Overview

We conducted two user studies: one in a motion capture studio and the other in a home-like space.


The first study aimed to find out, “On which screen layout do users perform the best yoga in terms of motion flow and posture accuracy?” For precise measurement, we motion-captured both the expert yoga instructor and the users with an OptiTrack system.


From the first study's results, we learned that the two dynamic screen layouts, User-anchored and Trainer-anchored, outperform the two static ones.


So, to enable home usage of the Trainer-anchored layout without special motion-captured data, we integrated an AI video pose estimation algorithm (Google's MediaPipe) into the system.
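
As a rough sketch of this step, the snippet below extracts per-frame 3D joints from an ordinary video using MediaPipe's Python Pose API. The video file name is a placeholder, and the actual FlowAR pipeline may pre-process or smooth the output differently.

```python
import cv2
import mediapipe as mp

# Extract per-frame 3D joint positions from an ordinary fitness video.
pose = mp.solutions.pose.Pose(static_image_mode=False, model_complexity=2)
cap = cv2.VideoCapture("yoga_tutorial.mp4")  # placeholder file name
trainer_motion = []                          # one list of 33 landmarks per frame

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_world_landmarks:          # metric 3D landmarks, hip-centered
        trainer_motion.append([(lm.x, lm.y, lm.z)
                               for lm in result.pose_world_landmarks.landmark])

cap.release()
pose.close()
```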


Then, we conducted a second study to answer the question, "Is FlowAR applicable in the home environment, even considering the noise in video pose estimation?"


Additionally, we investigated if prior expertise impacts layout preference by recruiting both inexperienced and experienced users.



Study 1: Baseline Performance

RQ: On which screen layout do users perform the best yoga (in terms of motion flow and posture accuracy)?

To measure baseline performance, we conducted a within-subject user study in a mocap studio with 16 participants.


Process

The study had four sessions, one for each screen layout. In each session, participants performed a yoga sequence with seven key poses.


Analysis

Then, we performed a multi-stage analysis:

  1. Expert evaluation. We recruited an expert to evaluate users’ motion flow.

  2. Performance analysis. We conducted a quantitative performance analysis comparing the user’s motion with the expert’s. Specifically, we mapped the user’s motion onto the expert’s using dynamic time warping and calculated timing and joint angle errors to determine the deviation from the expert’s motion (see the sketch after this list).

  3. User feedback. We collected qualitative user feedback.
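
To make step 2 concrete, here is a minimal sketch of the error computation: dynamic time warping aligns the user's joint-angle sequence to the expert's, and the aligned frame pairs yield angle and timing deviations. The distance metric, frame rate, and exact error definitions below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def dtw_path(user_seq, expert_seq):
    """Align two joint-angle sequences (frames x joints, in degrees) with
    dynamic time warping; return the path as (user, expert) index pairs."""
    n, m = len(user_seq), len(expert_seq)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(user_seq[i - 1] - expert_seq[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    path, i, j = [], n, m          # backtrack from the end of both sequences
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def deviation_from_expert(user_seq, expert_seq, fps=30.0):
    """Mean joint-angle error (degrees) and mean timing error (seconds)."""
    user = np.asarray(user_seq, dtype=float)
    expert = np.asarray(expert_seq, dtype=float)
    path = dtw_path(user, expert)
    angle_err = float(np.mean([np.abs(user[i] - expert[j]).mean() for i, j in path]))
    timing_err = float(np.mean([abs(i - j) for i, j in path])) / fps
    return angle_err, timing_err
```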

Results
  1. The expert evaluation showed that the static layouts cause incorrect head movements that disturb the natural motion flow, with users looking away from the correct gaze point. Accordingly, the expert gave lower scores to the Front and Circular layouts, the two static layouts, noting that gaze is tightly related to the yoga flow.

  2. On the other hand, the performance analysis revealed that the dynamic layouts, User-anchored and Trainer-anchored, produced fewer timing and posture errors.

  3. Also, all sixteen users preferred the dynamic layouts over the static ones.



Study 2: Applicability in Home-like Settings

RQ1: Is FlowAR applicable in the home environment (even considering noise in video pose estimation)?
RQ2: Does prior expertise impact layout preference?

The second study aimed to test the applicability of the two dynamic layouts in home-like settings. We also recruited 12 participants, half with yoga experience and half without, to examine whether prior expertise affects layout preference.


Process

This time, there were only two sessions, one for each dynamic screen layout. As in the previous study, participants performed a yoga flow with seven key poses, but this time the study material came from YouTube, and the trainer's motion data was generated via video pose estimation.


Analysis

We conducted an analysis similar to the previous study's, consisting of performance analysis and user feedback.

  1. Performance analysis. We conducted a quantitative performance analysis comparing the user’s motion with that of the expert in the video, to see whether the inaccuracy of video pose estimation causes problems in the Trainer-anchored layout.

  2. User feedback. We collected qualitative user feedback to find out whether user preference differs depending on prior yoga expertise.

Results
  1. The performance analysis showed that users performed equally well in both the User-anchored and Trainer-anchored layouts, with a mean angle error of only 10 degrees and a mean timing error of only half a second. In fact, in the interviews, most users said they did not notice any inaccuracy in the video pose estimation.

  2. Also, all users felt both layouts were confidence-boosting and valuable, but preferences differed between users. Inexperienced users, with no prior yoga experience, preferred the Trainer-anchored layout, saying it provided natural guidance that helped them correct their postures. On the other hand, experienced users, already familiar with yoga, preferred the User-anchored layout for the convenience of detailed observation.



Study Summary

To recap, study 1 was conducted to find out “on which screen layout do users perform the best yoga.” Here, we learned that dynamic screen layouts translate into better motion flow, posture accuracy, and usability.


Then, we conducted study 2 to test whether these two dynamic layouts are applicable in the home environment, even considering noise in video pose estimation. User performance and feedback showed that FlowAR works effectively in the home environment. This suggests the possibility of creating a more immersive and engaging workout experience by leveraging existing online fitness video resources with an AI pose estimation algorithm and a simple setup.


Also, our curiosity led us to investigate whether the user’s prior expertise impacts their layout preference. We saw that experienced users generally favor the User-anchored layout, which supports their existing workflows, while inexperienced users appreciated the guidance of the Trainer-anchored layout. Therefore, there is no single optimal dynamic layout; each comes with trade-offs.



Future Work

For future work, we plan to improve the system by adaptively changing the level of guidance based on the user’s expertise, adding yoga-specific features such as a symmetry indicator, and replacing the head-mounted display with a smaller, lighter one.



Contribution

Our work provides the following three contributions:

  • We introduce FlowAR, a system that supports at-home training with commonly available yoga videos via a series of virtual screen layouts displayed around the user as an augmented reality overlay.

  • Using motion capture data obtained in a user study, we validated the feasibility and effectiveness of our system and answered the question of which screen layout visualization is best suited for yoga training.

  • We integrated into the system a state-of-the-art 3D pose estimation algorithm that works with conventional videos, enabling the use of FlowAR outside of a motion capture studio. We then showed the real-world applicability and performance of the system through a user study in a furnished lab resembling one's home, with yogis of various levels of experience.


 

Reference

Hye-Young Jo, Laurenz Seidel, Michel Pahud, Mike Sinclair, and Andrea Bianchi. 2023. FlowAR: How Different Augmented Reality Visualizations of Online Fitness Videos Support Flow for At-Home Yoga Exercises. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23), April 23–28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 17 pages. https://doi.org/10.1145/3544548.3580897


