
Hi, I'm Hye-Young.

I am a Ph.D. student in Computer Science at the University of Colorado Boulder, advised by Prof. Ryo Suzuki in the Programmable Reality Lab at the ATLAS Institute.
 
My research explores how generative AI and extended reality can enable embodied learning and creative living. My work has been recognized and supported by the Google Ph.D. Fellowship.


I completed my B.F.A. in Painting and Media Arts at Seoul National University and my M.S. in Industrial Design at KAIST, where I was advised by Prof. Andrea Bianchi in the MAKE Lab. I have also worked as a film VFX compositor and XR generalist at Dexter Studios and as a research intern at Fujitsu Converging Lab.

  LATEST NEWS

January 16, 2026

CHI '26

Our TingleTouch paper was conditionally accepted to CHI 2026.

December 4, 2025

Ralph J. Slutz Student Excellence Award

Grateful to be recognized with this academic excellence award 🏆

October 23, 2025

Google Ph.D. Fellowship

Honored to receive the Google Ph.D. Fellowship in Human–Computer Interaction 🏆

October 2, 2025

Invited talk at Fujitsu

I gave a talk at Fujitsu Research of America on interaction with AI-driven video generation.

PUBLICATIONS

✧ Research interests: Human-Computer Interaction; Human-AI Interaction; Creativity Support Tools; Generative Agents; AI-driven Content Adaptation; Embodied AI; XR Interaction; Augmented Instruction


TingleTouch

CHI'26

TingleTouch: Touch Guidance through Electrical Stimulation in Resistance Training

Dong-Uk Kim, Hye-Young Jo, Hankyung Kim, Ryo Suzuki, Seungwoo Je, Yoonji Kim.

In resistance training, trainers employ touch guidance that helps trainees control form, activate muscles, and maintain safety. Haptic wearables offer a way to extend this guidance to solitary workouts, yet capturing the guidance communicated through touch and delivering it as haptic feedback remains challenging. In this paper, we categorize trainers’ touch guidance and propose electrical muscle stimulation (EMS) patterns to simulate its instructional messages. A preliminary study with six trainers and six trainees identified four core messages underlying touch guidance. We then designed EMS patterns for each message and refined them with two sports scientists and a UX designer, ensuring usability and scientific grounding. Finally, sixteen gymgoers evaluated these patterns in controlled sessions. Participants reliably distinguished the cues and used the instructed muscles accordingly, reaching accuracies of 97.14% and 99.22% across two sessions, validated through EMG and pose estimation. These findings demonstrate that EMS feedback is both intuitive and learnable.
PDF · VIDEO · DOI (to appear)

Generative Lecture

arXiv'25

Generative Lecture: Making Lecture Videos Interactive with LLMs and AI Clone Instructors

Hye-Young Jo, Ada Zhao, Xiaoan Liu, Ryo Suzuki

We introduce Generative Lecture, a concept that makes existing lecture videos interactive through generative AI and AI clone instructors. By leveraging interactive avatars powered by HeyGen, ElevenLabs, and GPT-5, we embed an AI instructor into the video and augment the video content in response to students' questions. This allows students to personalize the lecture material, directly ask questions in the video, and receive tailored explanations generated and delivered by the AI-cloned instructor. From a design elicitation study (N=8), we identified four goals that guided the development of eight system features: 1) on-demand clarification, 2) enhanced visuals, 3) interactive example, 4) personalized explanation, 5) adaptive quiz, 6) study summary, 7) automatic highlight, and 8) adaptive break. We then conducted a user study (N=12) to evaluate the usability and effectiveness of the system and collected expert feedback (N=5). The results suggest that our system enables effective two-way communication and supports personalized learning.


Map2Video

arXiv'25

Map2Video: Street View Imagery Driven AI Video Generation

Hye-Young Jo, Mose Sakashita, Aditi Mishra, Ryo Suzuki, Koichiro Niinuma, Aakar Gupta

AI video generation has lowered barriers to video creation, but current tools still struggle with inconsistency. Filmmakers often find that clips fail to match characters and backgrounds, making it difficult to build coherent sequences. A formative study with filmmakers highlighted challenges in shot composition, character motion, and camera control. We present Map2Video, a street view imagery-driven AI video generation tool grounded in real-world geographies. The system integrates Unity and ComfyUI with the VACE video generation model, as well as OpenStreetMap and Mapillary for street view imagery. Drawing on familiar filmmaking practices such as location scouting and rehearsal, Map2Video enables users to choose map locations, position actors and cameras in street view imagery, sketch movement paths, refine camera motion, and generate spatially consistent videos. We evaluated Map2Video with 12 filmmakers. Compared to an image-to-video baseline, it achieved higher spatial accuracy, required less cognitive effort, and offered stronger controllability for both scene replication and open-ended creative exploration.


VR Avatar Body Deformation

ISMAR'25 (TVCG)

Designing Hand and Forearm Gestures to Control Virtual Forearm for User-Initiated Forearm Deformation

Yilong Lin, Han Shi, Weitao Jiang, Xuesong Zhang, Hye-Young Jo, Yoonji Kim, Seungwoo Je

Thanks to the development of virtual reality (VR) technology, there is growing research on VR avatar body deformation effects. However, previous research mainly focused on passive body deformation expression, leaving users with limited methods to actively control their virtual bodies. To address this gap, we explored user-controlled forearm deformation by investigating how hand and forearm gestures can be mapped to various degrees of avatar forearm deformation. We conducted a gesture design workshop with six designers to generate gesture sets for different forearm deformations and deformation degrees, resulting in 15 gesture sets. Then, we selected the three highest-rated gesture sets and conducted a comparative study to evaluate the sense of embodiment and user performance across the three gesture sets. Our findings provide design suggestions for gesture-controlled forearm deformation in VR.

acceptance rate: 7.86%


CollageVis

CHI’24

CollageVis: Rapid Previsualization Tool for Indie Filmmaking using Video Collages

Hye-Young Jo, Ryo Suzuki, Yoonji Kim.

Previsualization (previs) is essential for film production, enabling cinematographic experiments and effective collaboration. However, traditional previs methods like 2D storyboarding and 3D animation require substantial time, cost, and technical expertise, posing challenges for indie filmmakers. We introduce CollageVis, a rapid previsualization tool using video collages. CollageVis enables filmmakers to create previs through two main user interfaces. First, it automatically segments actors from videos and assigns roles using name tags, color filters, and face swaps. Second, it positions video layers on a virtual stage and allows users to record shots using a mobile phone as a proxy for a virtual camera. These features were developed from formative interviews, reflecting indie filmmakers' needs and working methods. We demonstrate the system's capability by replicating seven film scenes and evaluate its usability with six indie filmmakers. The findings indicate that CollageVis allows more flexible yet expressive previs creation for idea development and collaboration.

acceptance rate: 26.4%


TrainerTap

UIST'23 Adjunct

TrainerTap: Weightlifting Support System Prototype Simulating Personal Trainer's Tactile and Auditory Guidance

Hye-Young Jo, Chan Hu Wie, Yejin Jang, Dong-Uk Kim, Yurim Son, Yoonji Kim.

Working out alone at the gym often lacks the quality and intensity of exercise achieved in training sessions with a personal trainer. To narrow this gap, we introduce TrainerTap, which simulates the personal trainer's presence during solitary weightlifting workouts. TrainerTap replicates the trainer's manual interventions of tapping the trainee's body parts to draw their attention to target muscles, and provides auditory guidance to support executing the movements at a consistent tempo.

acceptance rate: 21.0%


FlowAR

CHI’23

FlowAR: How Different Augmented Reality Visualizations of Online Fitness Videos Support Flow for At-Home Yoga Exercises

Hye-Young Jo, Laurenz Seidel, Michel Pahud, Mike Sinclair, and Andrea Bianchi.

Online fitness video tutorials are an increasingly popular way to stay fit at home without a personal trainer. However, to keep the screen playing the video in view, users typically disrupt their balance and break the motion flow --- two main pillars for the correct execution of yoga poses. While past research partially addressed this problem, these approaches supported only a limited view of the instructor and simple movements. To enable the fluid execution of complex full-body yoga exercises, we propose FlowAR, an augmented reality system for home workouts that shows training video tutorials as always-present virtual static and dynamic overlays around the user. We tested different overlay layouts in a study with 16 participants, using motion capture equipment for baseline performance. Then, we iterated the prototype and tested it in a furnished lab simulating home settings with 12 users. Our results highlight the advantages of different visualizations and the system's general applicability.

acceptance rate: 27.6%


Physical Computing Metaverse

HCIK'22

Design of Virtual Reality Application for Interaction Prototyping Remote Education

Hye-Young Jo, Wooje Chang, Hoonjin Jung, Andrea Bianchi.

The COVID-19 pandemic has impacted education, especially in STEAM subjects such as Interaction Prototyping (a course involving physical computing), where hands-on practice is crucial. Virtual environments had been studied for STEAM education before the pandemic, but the shift to remote education after the outbreak made them even more necessary. In this paper, we propose virtual reality applications for remote interaction prototyping education that provide an intuitive and safe practice environment for students. First, we summarize the flow of the interaction prototyping class and explore the difficulties in the class before and after COVID-19 through expert interviews. Based on this, we derive design considerations for moving the interaction prototyping class from an offline setting to a virtual environment. Finally, we propose four possible interaction scenarios that can provide students with an immersive experience: realistic theory class, 3D library, circuit assembly, and mixed reality practice.

🏆 Best Paper Award


GamesBond

CHI'21

GamesBond: Bimanual Haptic Illusion of Physically Connected Objects for Immersive VR Using Grip Deformation

Neung Ryu, Hye-Young Jo, Michel Pahud, Mike Sinclair, Andrea Bianchi.

Virtual Reality experiences, such as games and simulations, typically support the use of bimanual controllers to interact with virtual objects. To recreate the haptic sensation of holding objects of various shapes and behaviors with both hands, previous researchers have used mechanical linkages between the controllers that render adjustable stiffness. However, the linkage cannot quickly adapt to simulate dynamic objects, nor can it be removed to support free movements. This paper introduces GamesBond, a pair of 4-DoF controllers without a physical linkage but capable of creating the illusion of being connected as a single device, forming a virtual bond. The two controllers work together by dynamically displaying and physically rendering deformations of the hand grips, allowing users to perceive a single connected object between the hands, such as a jumping rope. With a user study and various applications, we show that GamesBond increases the realism, immersion, and enjoyment of bimanual interaction.

🏅 Honorable Mention Award

UX


KARE MCM

Korea Aid for Respiratory Epidemic: Mobile Clinic Module

KAIST, UNIST, TU Korea, Zoslee Studio, K-Arts, 20PLUS, Inition

 
 

KARE MCM (Korea Aid for Respiratory Epidemic: Mobile Clinic Module) is a mobile, expandable negative-pressure ward with advanced medical facilities, developed to cope with infectious disease outbreaks. I was a junior researcher on the team led by Prof. Tek-Jin Nam and designed the information architecture of the user interface.

🏆 iF DESIGN AWARD 2021, Winner

🏆 IDEA Design Award 2021, Bronze


FigureOUT

FigureOUT: A Personalized Tool for Summarizing and Visualizing Academic Literature

Hye-Young Jo*, Minha Lee*, Wooseok Kim*, Yeonsoo Kim*

 
 

FigureOUT is a personalized tool for summarizing and visualizing academic literature.



Meta-Boxing

Meta-Boxing: VR Boxing Game with Controllable Physics-based Character in Third-Person Perspective

Jungjin Park*, Hye-Young Jo*

 
 

Meta-Boxing is a VR boxing game in which players control a physics-based character from a third-person perspective.

🏆 Excellence Award, Korea Metaverse Developer Contest 2021

🏅 Top Research Award, 2022 Joint Seminar (SNU-KAIST-Sogang-Korea University)

Last update: Spring 2026

© Hye-Young Jo

Boulder, CO, USA
