Dance Interactive Learning Systems: A Study on Interaction Workflow and Teaching Approaches
https://dl.acm.org/doi/10.1145/3323335
Katerina El Raheb
Marina Stergiou
Akrivi Katifori
Yannis Ioannidis
Motion Capture and whole-body interaction technologies have been experimentally proven to contribute to the enhancement of dance learning and to the investigation of bodily knowledge, innovating at the same time the practice of dance. Designing and implementing a dance interactive learning system with the aim to achieve effective, enjoyable, and meaningful educational experiences is, however, a highly demanding interdisciplinary and complex problem. In this work, we examine the interactive dance training systems that are described in the recent bibliography, proposing a framework of the most important design parameters, which we present along with particular examples of implementations. We discuss the way that the different phases of a common workflow are designed and implemented in these systems, examining aspects such as the visualization of feedback to the learner, the movement qualities involved, the technological approaches used, as well as the general context of use and learning approaches. Our aim is to identify common patterns and areas that require further research and development toward creating more effective and meaningful digital dance learning tools.
interactive training
(1) motion demonstration: a stored, pre-recorded motion-captured movement of an expert is retrieved from a database and presented, usually through a rendered avatar,
(2) the student is asked to imitate the ideal movement,
(3) the student's movement is motion captured and compared with the ideal one in the database, and finally
(4) the student is provided with a score value as feedback (a minimal sketch of steps 3–4 follows this list).
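A minimal sketch of steps (3)–(4), assuming both takes are already time-aligned arrays of 3D joint positions; the function name (score_imitation), the array shapes, and the 0.5 m error tolerance are assumptions for illustration, not details taken from any surveyed system:

```python
# Sketch of the capture-compare-score loop: the student's recording is
# compared frame by frame against the stored expert recording and
# collapsed into a single feedback score.
import numpy as np

def score_imitation(expert: np.ndarray, student: np.ndarray) -> float:
    """Return a 0-100 score from the mean per-joint Euclidean error."""
    # Per-frame, per-joint Euclidean distance between the two skeletons,
    # for inputs of shape (frames, joints, 3).
    errors = np.linalg.norm(expert - student, axis=-1)  # (frames, joints)
    mean_error = errors.mean()
    # Map the error (in meters) to a score; the 0.5 m tolerance is an
    # arbitrary assumption for this sketch, not a value from the paper.
    return float(100.0 * max(0.0, 1.0 - mean_error / 0.5))

# Usage: two mock recordings, 120 frames x 20 joints x 3 coordinates.
rng = np.random.default_rng(0)
expert = rng.normal(size=(120, 20, 3))
student = expert + rng.normal(scale=0.05, size=expert.shape)  # imperfect imitation
print(f"feedback score: {score_imitation(expert, student):.1f}/100")
```

Real systems typically use richer comparisons (joint angles, timing, Laban movement qualities), but this is the basic shape of the loop the workflow describes.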
A. Aristidou, E. Stavrakis, P. Charalambous, Y. Chrysanthou, and S. L. Himona. 2015. Folk dance evaluation using laban movement analysis. J. Comput. Cultur. Herit. 8, 4 (2015), 20.
J. Chan, H. Leung, K. T. Tang, and T. Komura. 2007. Immersive performance training tools using motion capture technology. In Proceedings of the 1st International Conference on Immersive Telecommunications. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), 7.
S. Essid, X. Lin, M. Gowing, G. Kordelas, A. Aksay, P. Kelly, and R. Tournemenne. 2013. A multi-modal dance corpus for research into interaction between humans in virtual environments. J. Multimodal User Interfaces 7, 1–2 (2013), 157–170.
D. S. Alexiadis, P. Kelly, P. Daras, N. E. O'Connor, T. Boubekeur, and M. B. Moussa. 2011. Evaluating a dancer's performance using kinect-based skeleton tracking. In Proceedings of the 19th ACM International Conference on Multimedia. ACM, 659–662.
virtual teacher
Living Archive. 2018. A collaboration between Google Arts and Culture Lab and Studio Wayne McGregor. https://waynemcgregor.com/research/living-archive
G. Sun, P. Muneesawang, M. Kyan, H. Li, L. Zhong, N. Dong, and L. Guan. 2014. An advanced computational intelligence system for training of ballet dance in a cave virtual reality environment. In Proceedings of the 2014 IEEE International Symposium on Multimedia (ISM'14). IEEE, 159–166.
no teacher
There is no demonstration by a virtual teacher; the students are asked to perform a movement of their own choice. Their captured motion is then processed with a motion recognition algorithm and compared with the corresponding movements from a movement database. The workflow of these interactive systems therefore consists of motion capture of the student, pre-recorded motion capture of the teacher (the database content), a motion recognition algorithm, retrieval from the motion database, a motion comparison algorithm, and feedback; a rough sketch of such a pipeline follows.
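A rough sketch of that pipeline, using dynamic time warping (DTW) as the recognition and comparison step; DTW is a common choice for matching motions performed at different tempos, but the surveyed systems do not prescribe it, so every name and parameter below is an assumption:

```python
# "No teacher" workflow sketch: the student's captured sequence is matched
# against a small database of pre-recorded expert movements, and the best
# match plus its distance become the basis for feedback.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic-time-warping distance between two pose sequences (frames, features)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame pose distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a student frame
                                 cost[i, j - 1],      # skip a reference frame
                                 cost[i - 1, j - 1])  # advance both
    return float(cost[n, m])

def recognize(student: np.ndarray, database: dict[str, np.ndarray]) -> tuple[str, float]:
    """Retrieve the best-matching stored movement; the distance feeds the feedback."""
    return min(((name, dtw_distance(student, ref)) for name, ref in database.items()),
               key=lambda pair: pair[1])

# Usage with mock data: each sequence is (frames, flattened joint coordinates).
rng = np.random.default_rng(1)
database = {"plie": rng.normal(size=(80, 60)), "tendu": rng.normal(size=(90, 60))}
student = database["plie"][::2] + rng.normal(scale=0.05, size=(40, 60))  # faster, noisy take
name, dist = recognize(student, database)
print(f"recognized movement: {name} (DTW distance {dist:.2f})")
```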
Z. Marquardt, J. Beira, N. Em, I. Paiva, and S. Kox. 2012. Super mirror: A kinect interface for ballet dancers. In Proceedings of the Conference on Human Factors in Computing Systems (CHI'12). ACM, 1619–1624.
G. Tsampounaris, K. El Raheb, V. Katifori, and Y. Ioannidis. 2016. Exploring visualizations in real-time motion capture for dance education. In Proceedings of the 20th Pan-Hellenic Conference on Informatics. ACM, 76.
K. El Raheb, G. Tsampounaris, A. Katifori, and Y. Ioannidis. 2018. Choreomorphy: A whole-body interaction experience for dance improvisation and visual experimentation. In Proceedings of the 2018 International Conference on Advanced Visual Interfaces. ACM, 27.
L. Molina-Tanco, C. García-Berdonés, and A. Reyes-Lecuona. 2017. The Delay Mirror: A technological innovation specific to the dance studio. In Proceedings of the 4th International Conference on Movement Computing. ACM, 9.
D. A. Becker and A. Pentland. 1996. Using a virtual environment to teach cancer patients T’ai Chi, relaxation, and self-imagery. In Proceedings of the International Conference on Automatic Face and Gesture Recognition.
A. Camurri, K. El Raheb, O. Even-Zohar, Y. Ioannidis, A. Markatzi, J. M. Matos, and S. Di Pietro. 2016. WhoLoDancE: Toward a methodology for selecting motion capture data across different dance learning practice. In Proceedings of the 3rd International Symposium on Movement and Computing. ACM, 43.
Y. Usui, K. Sato, and S. Watabe. 2015. Learning Hawaiian hula dance by using tablet computer. In Proceedings of the SIGGRAPH Asia 2015 Symposium on Education. ACM, 6.
J. C. Chan, H. Leung, J. K. Tang, and T. Komura. 2011. A virtual reality dance training system using motion capture technology. IEEE Trans. Learn. Technol. 4, 2 (2011), 187–195.
A. Camurri, C. Canepa, N. Ferrari, M. Mancini, R. Niewiadomski, S. Piana, and M. Romero. 2016. A system to support the learning of movement qualities in dance: A case study on dynamic symmetry. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct. ACM, 973–976.
S. Piana, P. Alborno, R. Niewiadomski, M. Mancini, G. Volpe, and A. Camurri. 2016. Movement fluidity analysis based on performance and perception. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 1629–1636.
In the “Super Mirror” [58], “YouMove” [4], and “Virtual reality dance training system” [22] systems, a different color is applied to the body parts whose movement should be improved (sketched below).
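The idea can be pictured as a per-joint error threshold driving the avatar's colors, as in this hedged sketch; the joint list, the 0.1 m threshold, and the green/red scheme are placeholders rather than the cited systems' actual designs:

```python
# Color-feedback sketch: joints whose average positional error exceeds a
# threshold are flagged (e.g., drawn red on the rendered avatar).
import numpy as np

JOINTS = ["head", "torso", "l_arm", "r_arm", "l_leg", "r_leg"]

def joint_feedback(expert: np.ndarray, student: np.ndarray,
                   threshold: float = 0.1) -> dict[str, str]:
    """Map each joint to 'green' (acceptable) or 'red' (needs improvement)."""
    # Mean positional error per joint over the whole take, shape (joints,),
    # for inputs of shape (frames, joints, 3).
    per_joint_error = np.linalg.norm(expert - student, axis=-1).mean(axis=0)
    return {name: ("red" if err > threshold else "green")
            for name, err in zip(JOINTS, per_joint_error)}

rng = np.random.default_rng(2)
expert = rng.normal(size=(100, len(JOINTS), 3))
student = expert.copy()
student[:, 3] += 0.3  # exaggerate the right-arm error
print(joint_feedback(expert, student))  # r_arm flagged red, the rest green
```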
#learning
#practice
#training
#interactive
#survey