CoachMe

Decoding Sport Elements with a Reference-Based
Coaching Instruction Generation Model

ACL 2025

Institute of Information Science, Academia Sinica¹
National Tsing Hua University²
National Taiwan University³
{weihsinyeh168, allen0512911}@gmail.com
{yuansu, andrewman71, lwku}@iis.sinica.edu.tw
calvinku@gapp.nthu.edu.tw, whchiu@mx.nthu.edu.tw, anitahu@cs.nthu.edu.tw

CoachMe

CoachMe aims to democratize access to coaching, helping athletes improve on their own. Users can upload videos of their movements, and CoachMe analyzes the motion to provide precise and actionable instructions.

Why is this challenging?
If we upload a video to a multimodal model such as GPT, it often produces generic and redundant advice that is of little use to athletes. The real challenge of the “Motion to Instruction” task is generating feedback that is highly informative and practical.

Our approach
CoachMe simulates the way real coaches think. Since collecting large amounts of professional sports data is difficult, we cannot directly train a model to understand what “correct motion” is. Instead, CoachMe analyzes the learner’s motion, compares it with a reference, and generates coaching instructions based on the differences.





CoachMe integrates an attention mechanism that focuses on the skeletal graph to generate sports instructions.

The attention mechanism is visualized: each video shows a skeletal attention graph that highlights key joints. Specifically, we mark the three most important joints and the three most crucial relationships between pairs of joints.
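To make the visualization concrete, below is a minimal sketch of how the top three joints and joint pairs could be selected from a joint-to-joint attention matrix; the function name, the scoring rule, and the use of named keypoints (e.g., the 17 COCO joints) are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def top_joints_and_edges(attn, joint_names, top_k=3):
    """Select the top_k most-attended joints and the top_k strongest
    joint-pair relationships from a (J x J) joint-to-joint attention map."""
    attn = np.asarray(attn, dtype=float)

    # Joint importance: total attention mass each joint sends plus receives.
    joint_score = attn.sum(axis=0) + attn.sum(axis=1)
    top_joints = [joint_names[i] for i in np.argsort(joint_score)[::-1][:top_k]]

    # Pair importance: symmetrize the map and rank off-diagonal entries.
    sym = (attn + attn.T) / 2.0
    iu = np.triu_indices_from(sym, k=1)
    order = np.argsort(sym[iu])[::-1][:top_k]
    top_pairs = [(joint_names[iu[0][o]], joint_names[iu[1][o]]) for o in order]
    return top_joints, top_pairs
```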

Each set of images is accompanied by three instructional prompts, one generated by each model (CoachMe, LLaMA, and GPT-4o), giving the instruction that model predicts for the motion video.

How can an AI model coach an intense sport?

AAAI-25 Educational AI Video Winner

Abstract

Motion instruction is a crucial task that helps athletes refine their technique by analyzing movements and providing corrective guidance. Although recent advances in multimodal models have improved motion understanding, generating precise and sport-specific instruction remains challenging due to the highly domain-specific nature of sports and the need for informative guidance. We propose CoachMe, a reference-based model that analyzes the differences between a learner’s motion and a reference from temporal and physical perspectives. This approach enables both domain-knowledge learning and the acquisition of a coach-like thinking process that identifies movement errors effectively and provides feedback explaining how to improve. In this paper, we illustrate how CoachMe adapts well to specific sports such as skating and boxing by learning from general movements and then leveraging limited data. Experiments show that CoachMe provides high-quality instructions rather than directions that merely adopt the tone of a coach while lacking critical information. CoachMe outperforms GPT-4o by 31.6% in G-Eval on figure skating and by 58.3% on boxing. Analysis further confirms that it elaborates on errors and their corresponding improvement methods in the generated instructions.

Overall Framework

Overall framework of CoachMe. The CoachMe architecture comprises three modules: Concept Difference (Sec. 3.1), Human Pose Perception (Sec. 3.2), and Instruct Motion (Sec. 3.3). Instruct Motion compares the learner's motion token Token_learner with the reference token Token_ref to obtain the difference Token_diff, then takes Token_learner and Token_diff as input to the LM to generate instructions.
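As a rough sketch of this flow (not CoachMe's actual implementation), the snippet below assumes an element-wise difference between learner and reference tokens and a linear projection into the LM embedding space; the real comparison operator and LM interface may differ.

```python
import torch
import torch.nn as nn

class InstructMotionSketch(nn.Module):
    """Minimal sketch: compare learner and reference motion tokens and
    build a prefix that conditions the language model. The returned
    prefix embeddings would be prepended to the instruction-text
    embeddings before the LM decodes the coaching instruction."""

    def __init__(self, motion_dim: int, lm_dim: int):
        super().__init__()
        self.proj = nn.Linear(motion_dim, lm_dim)  # map motion tokens into LM space

    def forward(self, token_learner: torch.Tensor, token_ref: torch.Tensor) -> torch.Tensor:
        # token_learner, token_ref: (batch, num_tokens, motion_dim)
        token_diff = token_learner - token_ref        # assumed element-wise difference (Token_diff)
        prefix = torch.cat([token_learner, token_diff], dim=1)
        return self.proj(prefix)                      # (batch, 2 * num_tokens, lm_dim)
```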

Distribution of Sport Indicators


We propose six sport indicators to evaluate Sport Utility, the amount of useful information in an instruction: error detection, time detection, body part detection, causation, method, and coordination. These indicators are designed to assess various aspects of the instructions, ensuring they are comprehensive and effective for coaching purposes.
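As an illustration only, the sketch below assumes each instruction is annotated with binary presence flags for the six indicators; the actual annotation format and the way Sport Utility is scored may differ.

```python
from dataclasses import dataclass, asdict

INDICATORS = ["error_detection", "time_detection", "body_part_detection",
              "causation", "method", "coordination"]

@dataclass
class IndicatorAnnotation:
    """Binary presence flags for the six sport indicators in one instruction."""
    error_detection: bool = False
    time_detection: bool = False
    body_part_detection: bool = False
    causation: bool = False
    method: bool = False
    coordination: bool = False

def indicator_proportions(annotations):
    """Proportion of instructions in which each indicator appears (assumed metric)."""
    n = len(annotations)
    return {ind: sum(asdict(a)[ind] for a in annotations) / n for ind in INDICATORS}
```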

By analyzing the proportion of each sport indicator present in the instructional prompts across the entire dataset (train + test), we obtain the matrix on the far left, where each value represents a proportion.

We also analyze the distribution of sport indicators predicted in the instructional prompts generated by different models (CoachMe, LLaMA, and GPT-4o) for videos from the test dataset, and incorporate the G-Eval consistency scores, which assess consistency with the ground truth. These analyses yield the three matrices on the right.

Each value in these matrices represents the total G-Eval score accumulated across all instructional prompts in which the two corresponding sport indicators co-occur, normalized by the total number of prompts multiplied by the maximum possible G-Eval score.
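To make this normalization concrete, here is a minimal NumPy sketch; the binary-flag encoding, array names, and the assumed maximum G-Eval score are illustrative rather than details taken from the paper.

```python
import numpy as np

def indicator_matrices(flags, geval, max_geval=5.0):
    """Build the two kinds of indicator matrices described above.

    flags:     (num_prompts, 6) binary array, one column per sport indicator.
    geval:     (num_prompts,) G-Eval consistency score of each prompt.
    max_geval: assumed maximum G-Eval score; substitute the scale actually used.
    Returns (proportion_matrix, geval_matrix), both (6, 6).
    """
    flags = np.asarray(flags, dtype=float)
    geval = np.asarray(geval, dtype=float)
    n = flags.shape[0]

    # Proportion of prompts in which indicators i and j co-occur.
    proportion = flags.T @ flags / n

    # Total G-Eval score over prompts where i and j co-occur, normalized by
    # the number of prompts times the maximum possible G-Eval score.
    weighted = (flags * geval[:, None]).T @ flags / (n * max_geval)
    return proportion, weighted
```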

Contributions

1. Proposed a novel task: Motion to Instruction

2. Released two motion-to-instruction datasets

3. Introduced sport indicators to evaluate Sport Utility — the amount of useful information in an instruction

4. Developed CoachMe, which generates accurate instructions, achieves state-of-the-art results, and provides highly informative feedback

5. Demonstrated that CoachMe adapts effectively to diverse sports with only ~150 training samples, while capturing specific instructional patterns