Presenter: Zou Shaoyan
Diagnostic language testing, which aims to identify language learners’ strengths and weaknesses so as to guide their learning, has gained renewed research momentum in recent years as language tests have increasingly been made learning-oriented (see, e.g., Alderson, 2005; Knoch, 2009). A defining component of any diagnostic language test is the feedback it provides to test takers (Alderson, 2005). Thus, an examination of the usefulness of the feedback on a diagnostic language test should be integrated into the evaluation of the usefulness of the test itself.

However, to date, only a handful of studies in the language testing field have examined the usefulness of feedback on diagnostic language tests (e.g., Alderson, 2005; Jang, 2009). This is partly because relatively few language tests have been designed primarily for diagnostic purposes (see Alderson, 2005; Jang, 2009; Knoch, 2009). In view of the mounting research interest in developing diagnostic language tests in the recent decade (e.g., Alderson et al., 2014; Kunnan & Jang, 2011; Jin & Yu, 2019; Pan et al., 2019), evaluating the usefulness of the feedback provided by diagnostic tests has become more urgent.

This study therefore focused on the writing assessment of Udig (Udig-W), a diagnostic English writing test developed by the Foreign Language Teaching and Research Press in China, and aimed to evaluate the usefulness of the feedback on the test. The research questions were:
1) How do college students perceive the usefulness of the feedback on Udig-W?
2) To what extent do students’ writing proficiency and learning phases affect their perceptions?
3) What suggestions, if any, do students have for optimizing the feedback on Udig-W?
An explanatory sequential mixed-methods design was adopted to address these questions. Questionnaire survey data were collected from a valid sample of 141 college students.
Based on the quantitative data analysis, focus group interviews were subsequently conducted with 12 students. Descriptive analysis of the survey data showed that students perceived the feedback as generally useful. However, ANOVA results indicated that students’ writing proficiency and learning phases led to significant differences in their perceptions of the feedback items concerning their rankings, writing content, discourse organization, language quality, and vocabulary use. The interview data provided in-depth explanations for these differences. Suggestions were also proposed for further optimizing the structure and content of the feedback, and for equipping users of the test not only with a better knowledge of their strengths and weaknesses but, more importantly, with reliable learning resources conducive to their English writing.