Study on the Use of Speech Recognition Function to Practice Speaking English Using the Voice Translator “Pocketalk”

Abstract

Although some speech recognition software is highly developed, few studies have focused on how this technology should be adapted for foreign language learners with various proficiency levels, including Japanese students. Thus, this study explores the use of speech recognition to support English speaking practice through the voice translator “Pocketalk.” English sentences spoken by 95 Japanese university students were identified by Pocketalk’s speech recognition function. Afterward, a five-point Likert scale was used to measure the usefulness of the activity with Pocketalk and the affective factors related to speaking English. The results indicated that students tended not to pronounce the difference between the /n/ and /m/ sounds distinctly. In addition, when the ends of words such as “terribly” and “stooped” were not pronounced distinctly, they tended to be incorrectly recognized as “terrible” and “stupid.” Questionnaire results showed that over 70% of the students expressed a positive attitude toward their interaction with Pocketalk, and over 90% of them paid more attention to their pronunciation. Using the recognition function, we could identify how spoken sentences were actually recognized, which provided clues for correcting students’ pronunciation. Regarding the affective factors, no significant relationship was found between students’ responses on the usefulness of their interaction with Pocketalk and either their nervousness in speaking English or their negative feelings toward pronunciation. These results suggest positive potential for Pocketalk’s speech recognition function regardless of learners’ affective factors.



Author Information
Harumi Kashiwagi, Kobe University, Japan
Min Kang, Kobe University, Japan
Kazuhiro Ohtsuki, Kobe University, Japan

Paper Information
Conference: ACEID2021
Stream: Design

This paper is part of the ACEID2021 Conference Proceedings.