Articulation Animation Generated from Speech for Pronunciation Training

Authors

  • Yurie IRIBE, Information and Media Center, Toyohashi University of Technology, Japan
  • Silasak MANOSAVANH, Graduate School of Engineering, Toyohashi University of Technology, Japan
  • Kouichi KATSURADA, Graduate School of Engineering, Toyohashi University of Technology, Japan
  • Ryoko HAYASHI, Graduate School of Intercultural Studies, Kobe University, Japan
  • Chunyue ZHU, School of Language and Communication, Kobe University, Japan
  • Tsuneo NITTA, Graduate School of Engineering, Toyohashi University of Technology, Japan

Abstract

We automatically generate CG animations that express the articulatory movements of speech through articulatory feature (AF) extraction, in order to support pronunciation learning. The proposed system uses magnetic resonance imaging (MRI) data to map AFs to the coordinate values needed to generate the animations; MRI data let us observe the movements of the tongue, palate, and pharynx in detail while a person utters words. Both the AFs and the coordinate values are extracted by multi-layer neural networks (MLNs). Specifically, the system displays animations of the pronunciation movements of both the learner and the teacher, generated from their speech, to show the learner exactly how their pronunciation is wrong. Learners can thus understand their mispronunciations and the correct pronunciation method through concrete animated pronunciations. Experiments comparing MRI data with the generated animations confirmed the accuracy of the extracted articulatory features. Additionally, we verified the effectiveness of using AFs to generate animation.
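The pipeline described in the abstract (speech frame → articulatory features via an MLN → articulator coordinates for animation) can be sketched as two cascaded networks. This is only an illustrative sketch: the layer sizes, the number of AFs and contour points, and the random weights below are hypothetical placeholders, not the paper's trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    """One hidden-layer network with a sigmoid hidden layer."""
    h = 1.0 / (1.0 + np.exp(-(x @ w1 + b1)))
    return h @ w2 + b2

# Stage 1: acoustic feature vector -> articulatory features (AFs).
# Dimensions are assumptions for illustration only.
n_acoustic, n_hidden, n_af = 39, 64, 15
w1, b1 = rng.normal(size=(n_acoustic, n_hidden)), np.zeros(n_hidden)
w2, b2 = rng.normal(size=(n_hidden, n_af)), np.zeros(n_af)

# Stage 2: AFs -> (x, y) coordinates of articulator control points,
# which would be learned from MRI data in the actual system.
n_points = 10  # hypothetical number of tongue/palate contour points
v1, c1 = rng.normal(size=(n_af, n_hidden)), np.zeros(n_hidden)
v2, c2 = rng.normal(size=(n_hidden, 2 * n_points)), np.zeros(2 * n_points)

acoustic = rng.normal(size=n_acoustic)        # one dummy speech frame
afs = mlp(acoustic, w1, b1, w2, b2)           # per-frame AF vector
coords = mlp(afs, v1, c1, v2, c2).reshape(n_points, 2)
print(coords.shape)  # one frame of control points for the CG animation
```

Running this per speech frame yields a sequence of coordinate sets that an animation renderer could interpolate between, which is the role the coordinate values play in the proposed system.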

Published

2011-11-28

How to Cite

Articulation Animation Generated from Speech for Pronunciation Training. (2011). International Conference on Computers in Education. https://library.apsce.net/index.php/ICCE/article/view/2744