Email: hye-young.jo@colorado.edu
Google Scholar | LinkedIn | GitHub | X.com | Instagram
Hi, I'm Hye-Young.
I am a first-year PhD student at the ATLAS Institute at CU Boulder, advised by Prof. Ryo Suzuki in the Programmable Reality Lab.
My research interests include Human-Computer Interaction (HCI), human-AI interaction, and smart healthcare. I focus on enhancing the experience of consuming and authoring digital content through AR, AI, and haptic technologies. I am particularly interested in using generative AI to augment content and to develop tools that support creativity. I am also keen on exploring how technology can promote fitness through augmented instruction.
I have published papers at prestigious HCI conferences, including ACM CHI, UIST, and HCIK, and received a Best Paper award (HCIK'22) and an Honorable Mention award (CHI'21).
LATEST NEWS
October 13, 2024
UIST'24 conference
Heading to Pittsburgh to see inspiring work! Email me to chat 😊
September 12, 2024
CHI'25 paper submission
I submitted a paper to CHI'25. Fingers crossed🤞
August 18, 2024
Starting a PhD program at CU Boulder
I am excited to join the Programmable Reality Lab led by Prof. Ryo Suzuki 😃
PUBLICATION
CollageVis
CHI'24
CollageVis: Rapid Previsualization Tool for Indie Filmmaking using Video Collages
Hye-Young Jo, Ryo Suzuki, Yoonji Kim.
Previsualization (previs) is essential for film production, allowing cinematographic experiments and effective collaboration. However, traditional previs methods like 2D storyboarding and 3D animation require substantial time, cost, and technical expertise, posing challenges for indie filmmakers. We introduce CollageVis, a rapid previsualization tool using video collages. CollageVis enables filmmakers to create previs through two main user interfaces. First, it automatically segments actors from videos and assigns roles using name tags, color filters, and face swaps. Second, it positions video layers on a virtual stage and allows users to record shots using a mobile phone as a proxy for a virtual camera. These features were developed based on formative interviews, reflecting indie filmmakers' needs and working methods. We demonstrate the system's capability by replicating seven film scenes and evaluate the system's usability with six indie filmmakers. The findings indicate that CollageVis allows more flexible yet expressive previs creation for idea development and collaboration.
acceptance rate: 26.4%
TrainerTap
UIST'23 Adjunct
TrainerTap: Weightlifting Support System Prototype Simulating Personal Trainer's Tactile and Auditory Guidance
Hye-Young Jo, Chan Hu Wie, Yejin Jang, Dong-Uk Kim, Yurim Son, Yoonji Kim.
Working out alone at the gym often lacks the exercise quality and intensity of a training session with a personal trainer. To narrow this gap, we introduce TrainerTap, which simulates the personal trainer's presence during solitary weightlifting workouts. TrainerTap replicates the trainer's manual intervention of tapping the trainee's body parts to draw their attention to target muscles, and provides auditory guidance to support executing the movements at a consistent tempo.
acceptance rate: 21.0%
FlowAR
CHI'23
FlowAR: How Different Augmented Reality Visualizations of Online Fitness Videos Support Flow for At-Home Yoga Exercises
Hye-Young Jo, Laurenz Seidel, Michel Pahud, Mike Sinclair, and Andrea Bianchi.
Online fitness video tutorials are an increasingly popular way to stay fit at home without a personal trainer. However, to keep the screen playing the video in view, users typically disrupt their balance and break the motion flow, the two main pillars for the correct execution of yoga poses. While past research partially addressed this problem, these approaches supported only a limited view of the instructor and simple movements. To enable the fluid execution of complex full-body yoga exercises, we propose FlowAR, an augmented reality system for home workouts that shows training video tutorials as always-present virtual static and dynamic overlays around the user. We tested different overlay layouts in a study with 16 participants, using motion capture equipment for baseline performance. Then, we iterated the prototype and tested it in a furnished lab simulating home settings with 12 users. Our results highlight the advantages of different visualizations and the system's general applicability.
acceptance rate: 27.6%
Physical Computing Metaverse
HCIK'22
Design of Virtual Reality Application for Interaction Prototyping Remote Education
Hye-Young Jo, Wooje Chang, Hoonjin Jung, Andrea Bianchi.
The COVID-19 pandemic has impacted education, especially in STEAM subjects such as Interaction Prototyping (a course involving physical computing), where hands-on practice is crucial. Virtual environments had been studied for STEAM education before the pandemic, but the remote education paradigm that emerged after the outbreak made them even more necessary. In this paper, we propose virtual reality applications for remote interaction prototyping education that provide an intuitive and safe practice environment for students. First, we summarize the flow of the interaction prototyping class and, through expert interviews, explore the difficulties in the class before and after COVID-19. Based on this, we derive design considerations for moving the interaction prototyping class from an offline setting to a virtual environment. Finally, we propose four interaction scenarios that can provide students with an immersive experience: realistic theory class, 3D library, circuit assembly, and mixed reality practice.
🏆 Best Paper Award
GamesBond
CHI'21
GamesBond: Bimanual Haptic Illusion of Physically Connected Objects for Immersive VR Using Grip Deformation
Neung Ryu, Hye-Young Jo, Michel Pahud, Mike Sinclair, Andrea Bianchi.
Virtual Reality experiences, such as games and simulations, typically support the usage of bimanual controllers to interact with virtual objects. To recreate the haptic sensation of holding objects of various shapes and behaviors with both hands, previous researchers have used mechanical linkages between the controllers that render adjustable stiffness. However, the linkage cannot quickly adapt to simulate dynamic objects, nor can it be removed to support free movements. This paper introduces GamesBond, a pair of 4-DoF controllers without physical linkage but capable of creating the illusion of being connected as a single device, forming a virtual bond. The two controllers work together by dynamically displaying and physically rendering deformations of hand grips, allowing users to perceive a single connected object between the hands, such as a jumping rope. With a user study and various applications, we show that GamesBond increases the realism, immersion, and enjoyment of bimanual interaction.
🏅 Honorable Mention Award
UX
KARE MCM
Korea Aid for Respiratory Epidemic: Mobile Clinic Module
KAIST, UNIST, TU Korea, Zoslee Studio, K-Arts, 20PLUS, Inition
KARE MCM (Korea Aid for Respiratory Epidemic: Mobile Clinic Module) is a mobile, expandable negative-pressure ward with advanced medical facilities, developed to cope with infectious disease outbreaks. I was a junior researcher on a research team led by Prof. Tek-Jin Nam and designed the information architecture of the user interface.
🏆 iF DESIGN AWARD 2021, Winner
🏆 IDEA Design Award 2021, Bronze
FigureOUT
FigureOUT: A Personalized Tool for Summarizing and Visualizing Academic Literature
Hye-Young Jo*, Minha Lee*, Wooseok Kim*, Yeonsoo Kim*.
Meta-Boxing
Meta-Boxing: VR Boxing Game with Controllable Physics-based Character in Third-Person Perspective
Jungjin Park*, Hye-Young Jo*.
🏆 Excellence Award, Korea Metaverse Developer Contest 2021
🏅 Top Research Award, 2022 Joint Seminar (SNU-KAIST-Sogang-Korea University)