This study empirically examines self-ratings on the Can-Do statements of the CEFR-J (Tono, 2012), which was adapted from the CEFR (Council of Europe, 2001). The study focuses on the relationship between scores on an English proficiency test (STEP) and self-ratings, and on the reliability of self-ratings across the five skill categories. Three hundred eighty-nine freshmen at one Japanese university answered a web questionnaire (110 questions in five skill categories) based on the CEFR-J Can-Do descriptors. The results show contradictory evidence. An in-depth inspection of individual raw data revealed wide variation in responses, with little relation to English proficiency test scores; a statistical analysis (Pearson's r) supported this finding. However, the results also indicate that the internal reliability of self-ratings across the five skill categories is high (Cronbach's alpha = 0.872) when the data are analyzed at the group level. To interpret this contradictory evidence, it may be inferred that the CEFR-J is effective for evaluating the general proficiency levels of whole English programs, but less helpful for measuring individual English learning.
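The two statistics that underpin the contradiction above can be illustrated with a short sketch. The snippet below is not the authors' analysis; it is a minimal, self-contained Python illustration of how Cronbach's alpha (internal consistency across the five skill-category scores) and Pearson's r (self-rating vs. test score, per individual) are computed, using made-up data in place of the study's questionnaire and STEP results.

```python
from statistics import pvariance, mean

def cronbach_alpha(items):
    """Internal-consistency reliability.

    items: a list of k score columns (here, hypothetically, one column
    per CEFR-J skill category), each column holding one score per
    respondent.  alpha = k/(k-1) * (1 - sum(item variances) / variance
    of the respondents' total scores).
    """
    k = len(items)
    sum_item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum_item_vars / pvariance(totals))

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: five skill-category self-rating columns for
# four respondents, plus their (invented) proficiency test scores.
skill_ratings = [
    [3, 4, 2, 5],  # listening
    [3, 4, 2, 4],  # reading
    [2, 4, 2, 5],  # spoken interaction
    [3, 3, 2, 5],  # spoken production
    [3, 4, 1, 5],  # writing
]
test_scores = [62, 55, 70, 58]
self_rating_totals = [sum(scores) for scores in zip(*skill_ratings)]

print(round(cronbach_alpha(skill_ratings), 3))
print(round(pearson_r(self_rating_totals, test_scores), 3))
```

With data shaped like this, alpha can be high (the five columns move together) even while r between self-rating totals and test scores is weak or negative, which is the pattern the study reports.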
Masanori Tokeshi, Meio University, Japan
Lianli Gao, China Academy of Chinese Medicine Sciences, China
Stream: Higher education
This paper is part of the ECE2015 Conference Proceedings.