Welcome to the official website of Diagnostic Assessment and Teaching Research for College English!
UDig for Higher Education
Selected Research and Publications

Title: The role of diagnostic writing assessment in promoting Chinese EFL students’ learning autonomy: An action research study

Authors: Jufang Kong and Mingwei Pan

Abstract: Writing is a productive language skill that is difficult to improve and derive confidence from. Despite the abundant studies on product, process, or genre approaches to teaching writing and teachers’ feedback, there is a paucity of research into English as a foreign language (EFL) students’ responsibility for writing improvement and diagnostic writing assessment feedback. The present study employed an action research approach to explore the role of diagnostic writing assessment in promoting Chinese upper-intermediate EFL students’ learning autonomy. Questionnaires, reflective journals, and portfolios were utilized to elicit data, which were analyzed within Benson’s three-dimension learning autonomy framework. It was revealed that the diagnostic writing assessment impacted all dimensions of learning autonomy, helping EFL students form realistic perceptions of their writing abilities, formulate approachable remedial learning goals, select appropriate learning methods and content, monitor learning processes, and evaluate learning outcomes. It was also found that teacher interventions could significantly enhance EFL students’ learning management. This study highlighted the role and tested the feasibility of adopting diagnostic writing assessment to facilitate EFL students’ autonomous learning.

Keywords: learning autonomy, diagnostic writing assessment, action research, reflective journal, writing portfolio

Full Article: https://journals.sagepub.com/doi/10.1177/13621688231178465

Cite this article: Kong, J., & Pan, M. (2023). The role of diagnostic writing assessment in promoting Chinese EFL students’ learning autonomy: An action research study. Language Teaching Research. https://doi.org/10.1177/13621688231178465


Title: Diagnostic language assessment for teaching and learning: A framework for investigating the interaction between assessment and instruction

Authors: JIN Yan, YU Guoxing

Abstract: The central goal of diagnostic language assessment (DLA) is to improve learners’ language competence through remedial instruction based on diagnostic feedback. Validation studies of DLAs, therefore, should focus on the interaction between assessment and instruction. In this article, based on a validation framework of DLA and the theory of learning-oriented assessment, a research framework is proposed to guide studies of the application of DLAs in foreign language education in China. In addition, through the case analysis of UDig (for higher education), a diagnostic language assessment system for tertiary-level English language education in China, the challenges facing the interaction between assessment and instruction are discussed and the directions for further development are pointed out.

Keywords: diagnostic language assessment; remedial instruction; interaction between assessment and instruction; learning-oriented assessment

Full article: https://kns.cnki.net/kcms2/article/abstract?v=3uoqIhG8C44YLTlOAiTRKibYlV5Vjs7ioT0BO4yQ4m_mOgeS2ml3UFE1R68S4Sq-VDKHe_uDh8iS22I-RJyjisx5Gd_I__4U&uniplatform=NZKPT

Cite this article: Jin, Y., & Yu, G. (2023). Diagnostic language assessment for teaching and learning: A framework for investigating the interaction between assessment and instruction. Foreign Language Education in China, (01), 37-44+93-94. doi:10.20083/j.cnki.fleic.2023.01.037.


Title: An Analysis of the Distinguishing Features of Discourse Performance in an Online Diagnostic Speaking Test

Authors: GAO Miao, LIU Ya

Abstract: This empirical study seeks to identify the criterial features of coherence and cohesion that distinguish levels (indicated by test scores) of candidates' performances in an oral task of an online low-stakes diagnostic speaking test for college students. By adopting both quantitative methods (descriptive analysis and analysis of variance) and qualitative methods (discourse analysis and rater interviews), the study identified discourse features that distinguish levels of test candidates and examined how these features enable meaning conveyance. The results suggest that additives can significantly distinguish three levels of candidates (low, mid, and high), whereas most other linking devices could distinguish between the low and high levels but failed to differentiate adjacent levels (i.e., low-mid, mid-high). Findings from discourse analysis and rater interviews indicate that candidates at the three levels do differ in their coherence performance in terms of structure, the relevance of supporting details to the thesis, and the richness of content. This study sheds light on the construction of rating criteria for the coherence and cohesion dimension of speaking tests and provides evidence for the validity argument of this speaking test.
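The level comparison described above rests on one-way analysis of variance. As a minimal sketch of that technique, the following computes the F statistic for three score levels; the counts are hypothetical illustrations, not the study's data.

```python
# Minimal one-way ANOVA sketch (pure Python).
# The counts below are hypothetical linking-device frequencies, NOT the study's data.

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of score groups."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    group_means = [sum(g) / len(g) for g in groups]
    # Between-group variability: how far each level's mean sits from the grand mean.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    # Within-group variability: spread of candidates inside each level.
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical additive counts for candidates at three score levels:
low, mid, high = [2, 3, 2, 4], [5, 6, 5, 7], [9, 8, 10, 9]
f_stat = one_way_anova_f([low, mid, high])
print(round(f_stat, 2))  # a large F suggests the levels differ
```

A large F statistic relative to the critical value indicates that at least one level differs; pairwise post-hoc comparisons (as between adjacent levels in the study) would then be needed.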

Keywords: oral English tests; English speaking ability; discourse competence; the construct of speaking competence

Full article: https://kns.cnki.net/kcms2/article/abstract?v=3uoqIhG8C44YLTlOAiTRKibYlV5Vjs7ioT0BO4yQ4m_mOgeS2ml3UKIyb0yikRtHoMnMT9TnYnK8RDru3rKqzc6d645LXzCL&uniplatform=NZKPT

Cite this article: Gao, M., & Liu, Y. (2023). An Analysis of the Distinguishing Features of Discourse Performance in an Online Diagnostic Speaking Test. Journal of China Examination, (63), 60-67. doi:10.19360/j.cnki.11-3303/g4.2023.03.008.


Title: Developing and validating the reading assessment literacy scale for college English teachers

Authors: QIU Xiaofen, LAN Chunshou

Abstract: Based on theories of language teachers' assessment literacy and the context of college English reading teaching in China, this study builds a theoretical model of college English teachers' reading assessment literacy, formulates a scale based on the model, and conducts a questionnaire survey among 495 college English teachers. The results of exploratory and confirmatory factor analyses show that the 25-item scale captures five factors, namely principles, basic knowledge, reading material selection and task design, test implementation and scoring, and diagnostic feedback, across the three dimensions of principles, knowledge, and skills. The scale descriptors were further refined through interviews with teachers. The reading assessment literacy scale for college English teachers meets the requirements of the measurement indicators and can serve as a reference for research on English teachers' reading assessment literacy in China.

Keywords: reading assessment literacy; college English teachers; scale development; validation

Full article: https://kns.cnki.net/kcms2/article/abstract?v=3uoqIhG8C44YLTlOAiTRKibYlV5Vjs7iJTKGjg9uTdeTsOI_ra5_XU8T4AGvM8I2bsHARIU3hPsfSF0mTG1jsAerXFLljScL&uniplatform=NZKPT

Cite this article: Qiu, X., & Lan, C. (2022). Developing and validating the reading assessment literacy scale for college English teachers. Foreign Language World, (05), 63-70.


Title: Integrating diagnostic assessment into curriculum: a theoretical framework and teaching practices

Authors: Tingting Fan, Jieqing Song & Zheshu Guan

Abstract: Currently, much research on cognitive diagnostic assessment (CDA) focuses on the development of statistical models estimating individual students' attribute profiles. However, little is known about how to communicate model-generated statistical results to stakeholders and how to translate formative diagnostic information into teaching practices. This study proposed an integrative framework of diagnosis connecting CDA to feedback and remediation and demonstrated empirically the application of the framework in an English as a Foreign Language (EFL) context. In particular, the empirical study presented procedures for integrating diagnostic assessment into an EFL reading curriculum through four phases of planning, framing, implementing, and reflecting. The results show that these procedures, influenced by the teacher's orientation to diagnostic assessment and approach to EFL teaching, affected students' perceptions of diagnostic assessment, their attitudes toward remedial instruction, and their learning outcomes in EFL reading. The results also provide evidence for the effectiveness of the integrative framework proposed in this study, showing that the framework could serve as practical guidance for the implementation and use of diagnostic assessment in the classroom. Overall, this study indicates that the diagnostic approach is a more effective way to provide instructionally useful information than test and assessment approaches that do not differentiate strengths and weaknesses among students with the same total score.

Keywords: Cognitive diagnostic assessment (CDA), Diagnostic score report, Diagnosis-based remediation, English as a Foreign Language (EFL) reading curriculum

Full article: https://languagetestingasia.springeropen.com/articles/10.1186/s40468-020-00117-y

Cite this article: Fan, T., Song, J., & Guan, Z. (2021). Integrating diagnostic assessment into curriculum: a theoretical framework and teaching practices. Language Testing in Asia, 11(2). https://doi.org/10.1186/s40468-020-00117-y


Title: A comparative study of diagnostic score reports on English reading ability

Author: FAN Tingting

Abstract: In this study, three representative diagnostic score reports on English reading ability from China and abroad (Score Report Plus, Diagnosis, and UDig) were examined in a qualitative comparative study. First, the content of the three score reports is compared using academic feedback theory; then, in-depth interviews are used to investigate the understanding and use of the three reports from the perspective of language learners. The results show that: (1) the UDig report is the most comprehensive, covering multiple levels of academic feedback theory; (2) in terms of comprehensibility and usability, the subjects ranked the three score reports, from high to low, as UDig, Diagnosis, and Score Report Plus. Through comparative analysis and empirical investigation, the study summarizes the characteristics of effective diagnostic score reports, with a view to providing a reference for the design and application of diagnostic score reports in China, thereby improving students' learning efficiency.

Keywords: diagnostic score report; academic feedback theory; students' perspectives

Full article: https://kns.cnki.net/kcms2/article/abstract?v=TU8yF1UsqeeY0o2UsIpMPHW_0NVfsREMf5iKPhuZG3TyJO6DYcE0mKhckGQ_AzHr1_rRak1yyoomzg32aYpeMzA5gkdb1NmBjrQx6ba6OrVt9IsTH6sjvrvHh-4QLuJuCgQ7d0kbD1K8pCDkvCoYeg==&uniplatform=NZKPT&language=CHS

Cite this article: Fan, T. (2020). A comparative study of diagnostic score reports on English reading ability. Journal of Higher Education, (20), 49-52+56. doi:10.19980/j.cn23-1593/g4.2020.20.015.


Title: Investigating students’ cognitive processes in a diagnostic EFL reading test

Author: SUN Hang

Abstract: The present study examines the cognitive processing of eight students while completing a diagnostic reading test by employing a think-aloud protocol. The results show that students engaged in a wide range of cognitive processes, including both the lower- and higher-level processes defined in Khalifa and Weir's (2009) model of reading, and simulated to a large extent the reading process in a non-test context. In addition, students' use of reading strategies was in close accordance with the test specifications, although the operationalization of two expeditious reading strategies, scanning and search reading, needed further revision. Through the investigation of students' cognitive processes elicited by the verbal reports, the study provides evidence for the cognitive validity of the diagnostic reading test.

Keywords: reading comprehension; diagnostic test; cognitive process; cognitive validity; think-aloud protocol

Full article: https://kns.cnki.net/kcms2/article/abstract?v=TU8yF1UsqeclsEV5K70jgg2W-EELQmoeMwdYkABpVpWviuMqmqi54lqLzrwY0uaUx0n2K2jwSrbFV_uPfd2MuMPxIzVkCcjDb2Ybc3HVAHCSC-d7E_t5PAE0wJ2l0TAz&uniplatform=NZKPT

Cite this article: Sun, H. (2019). Investigating students’ cognitive processes in a diagnostic EFL reading test. Foreign Language Education in China, (04), 25-32+91.


Title: Developing and validating the self-assessment scales in an online diagnostic test of English writing

Authors: PAN Mingwei, SONG Jieqing & DENG Hua

Abstract: Self-assessment, as a measure of language learners' own linguistic competence and knowledge, aims to promote their awareness and enhance their self-efficacy in language learning. Self-assessment is interactive and induces low test anxiety; self-assessment scales are therefore critical in diagnostic tests. Drawing on the descriptors of the writing sub-scales of China's Standards of English Language Ability, and focusing on self-assessment at the pre-writing and post-writing phases, this study developed a pre-writing self-assessment scale for writing ability and a post-writing self-assessment scale for writing strategy. Both scales were validated quantitatively, and the results indicated that they were valid in terms of their constructs. The diagnostic writing tests are conducted online, where a wealth of useful diagnostic feedback is provided on learners' writing ability and their use of writing strategies. The timely feedback from the self-assessment scales plays a crucial and positive role in fostering learners' self-access learning ability. In addition, learners are informed of the standards of English writing and of how to employ writing strategies to improve their writing competence.

Keywords: diagnostic test; self-assessment scale; writing ability; writing strategy

Full article: https://kns.cnki.net/kcms2/article/abstract?v=TU8yF1UsqeclsEV5K70jgg2W-EELQmoeMwdYkABpVpWviuMqmqi54lqLzrwY0uaUHdoO3cOgiYUCsteKdezlxRcwcs-Jl_WI62ktft4RNQmnJPBFO6msJwntLBsw5vhe&uniplatform=NZKPT

Cite this article: Pan, M., Song, J., & Deng, H. (2019). Developing and validating the self-assessment scales in an online diagnostic test of English writing. Foreign Language Education in China, (04), 33-41+91-92.


Title: An Argument-based Approach to Validating a Diagnostic Test: The Case of UDig Reading Test

Authors: SUN Hang

Abstract: This study built and supported a validity argument for a diagnostic reading test, the UDig reading test. The test aims to diagnose the reading ability of English language learners at the tertiary level in China. Drawing on the interpretive/use argument approach (IUA; Kane, 1992, 2001, 2013), the study sought to examine the most critical and relevant inferences of the score interpretations and uses concerning diagnostic assessment. Two defining features of diagnostic assessment were identified: (1) diagnostic assessment measures and reports on students' language subskills; (2) diagnostic assessment gives test users detailed feedback for remedial learning and teaching. To this end, the validation efforts centered on what was being diagnosed, how the diagnostic feedback was interpreted, and how the feedback was used and what its impact was. Accordingly, three validity inferences of primary concern in the IUA for the UDig were specified: the explanation inference, the decision inference, and the consequence inference. Corresponding to these inferences, three research questions were proposed:

1. To what extent are UDig reading scores accurate indicators of test-takers' strengths and weaknesses in reading subskills?

2. How do students and teachers perceive the usefulness of UDig diagnostic feedback for making decisions on learning and teaching?

3. How is UDig diagnostic feedback used by students and teachers and what consequences does it have on learning and teaching?

The IUA for the UDig provided a logical framework which guided the validation procedure and organized validity evidence through the articulation of the claims, warrants, and assumptions underlying the inferences. Employing a mixed-methods research design, the study triangulated multiple sources of evidence to support the validity inferences over two research phases. Phase 1 of the study explored the extent to which UDig scores could be interpreted as indicators of students' mastery of reading subskills (i.e., the explanation inference). Students' think-aloud protocols while taking the UDig test and experts' judgments were elicited to shed light on the associations between the test items and the reading subskills being measured. Based on the empirical results, a Q-matrix denoting the item-by-subskill relationships was constructed and compared with the initial Q-matrix based on the test developer's intention. The Q-matrix with the better model fit (i.e., the one based on students' think-aloud protocols and experts' judgments) was further validated and modified, resulting in a final Q-matrix. Students' mastery profiles were then generated using the G-DINA framework (de la Torre, 2011), with the input of large-scale test performance data and the final Q-matrix. Phase 2 investigated the ways in which the diagnostic feedback could serve as a catalyst for remedial learning and teaching (i.e., the decision and the consequence inferences). At the beginning of an instructional semester, a student questionnaire was distributed and the first round of student and teacher interviews was conducted, which provided insights into students' and teachers' perceptions of UDig feedback and their decisions on future use of the feedback. Situated within an Academic English Reading and Writing course, a longitudinal study comprising data from students' and teachers' reflective tasks and a second round of interviews then documented how students and teachers utilized UDig feedback throughout the semester, as well as the consequences of the feedback for learning and teaching.
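The Q-matrix mentioned above maps each test item to the subskills it requires; mastery profiles are then estimated with G-DINA. As a minimal sketch of the idea (the subskill names and matrix entries are hypothetical, and only the deterministic "ideal response" core of a DINA-style model is shown, not the probabilistic G-DINA estimation used in the study):

```python
# Minimal Q-matrix / ideal-response sketch for a DINA-style model.
# Subskill names and Q-matrix entries are hypothetical illustrations,
# not the UDig test's actual specification.

SUBSKILLS = ["inferencing", "lexical access", "syntactic parsing"]

# Q-matrix: rows = items, columns = subskills (1 = item requires the subskill).
Q = [
    [1, 0, 0],  # item 1 needs inferencing only
    [0, 1, 1],  # item 2 needs lexical access and syntactic parsing
    [1, 1, 0],  # item 3 needs inferencing and lexical access
]

def ideal_response(profile, item_q):
    """1 if the mastery profile covers every subskill the item requires."""
    return int(all(m >= q for m, q in zip(profile, item_q)))

def ideal_pattern(profile, q_matrix):
    """Ideal item responses for one candidate (DINA core, no slip/guess)."""
    return [ideal_response(profile, row) for row in q_matrix]

# A candidate mastering inferencing and lexical access but not parsing:
print(ideal_pattern([1, 1, 0], Q))  # → [1, 0, 1]: only item 2 is missed
```

In practice, estimation software such as the R GDINA package fits slip/guess parameters and infers mastery profiles from observed response data; the sketch only shows how a Q-matrix links items to subskills.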

Overall, the results provided strong backing for the validity inferences, although rebuttal evidence posing threats to the validity of the UDig was also identified. While the majority of the intended reading subskills were supported by empirical data, a few modifications needed to be made to the item-by-subskill relationships. In general, students and teachers expressed positive views of the diagnostic feedback and showed willingness to utilize the information. Most of the students and teachers integrated the diagnostic feedback into their learning and teaching processes, which in turn exerted positive influences on students' English study. A few deficiencies of the diagnostic feedback were pointed out, and students and teachers gave detailed recommendations on how to refine the diagnostic feedback as well as the diagnostic procedure as a whole. To sum up, there is considerable positive evidence to suggest that the UDig has the potential to serve as an accurate indicator of students' reading abilities and an effective instrument to promote college English learning and teaching. At the same time, further steps to refine both the item-by-subskill relationships of the test items and the form and content of the diagnostic reports are needed in order to improve the quality of the UDig system. It is noteworthy that the evaluation of the UDig is bound to the specific population and setting of this study. Future research can investigate the use of the UDig by different populations and across various contexts.

This study is a new attempt to examine the validity of a diagnostic reading assessment. The findings have implications for the construction and validation of truly diagnostic assessment, the effectiveness of diagnostic feedback, and the application of argument-based approaches to low-stakes, learning-oriented assessment validation. More importantly, the study provides new perspectives on how learning-oriented diagnostic assessment tools can be implemented in the Chinese EFL context. It is believed that with the collaboration of language testers, measurement specialists and test users, the full benefits of diagnostic assessment can be realized. 

Keywords: UDig diagnostic reading test, argument-based approach to validation, interpretive/use argument, diagnostic feedback, cognitive diagnostic assessment

Full article: https://kns.cnki.net/kcms2/article/abstract?v=3uoqIhG8C447WN1SO36whLpCgh0R0Z-ia63qwICAcC3-s4XdRlECrZMlCN7nbOJI6ULs_PLfyehK_IOcHRVosoOVhvg6RbQg&uniplatform=NZKPT

Cite this article: Sun, H. (2020). An Argument-based Approach to Validating a Diagnostic Test: The Case of UDig Reading Test (Doctoral dissertation). doi:10.27307/d.cnki.gsjtu.2020.000376


Selected Conference Reports


Title: Diagnostic feedback as a bridge to equity-minded assessment: The case of UDig for tertiary level learners of English

Presenters: Yan Jin, Weiwei Song

Year: 2023

Name of the conference: 44th LTRC


Title: Integrating Diagnostic Assessment into Curriculum: A Theoretical Framework and Teaching Practices

Presenters: Tingting Fan, Jieqing Song

Year: 2023

Name of the conference: NCME 2023


Title: The effect of diagnostic assessment and personal traits on L2 reading development: A Longitudinal investigation

Presenters: Tingting Fan, Xun Yan

Year: 2022

Name of the conference: 43rd LTRC


Title: The AALA Outstanding Doctoral Dissertation Award paper: An Argument-based Approach to Validating a Diagnostic Test: The Case of UDig Reading Test

Presenter: Hang Sun

Year: 2021

Name of the conference: AALA 2021


Title: Evaluating the Usefulness of Feedback on Diagnostic English Test

Presenter: Shaoyan Zou

Year: 2021

Name of the conference: AALA 2021


Title: Developing Diagnostic Assessment for the Needs of Learners: Bridging the National Standards and the Curriculum Requirements

Presenters: Weiwei Song, Liping Liu

Year: 2021

Name of the conference: 19th AILA



Title: Evaluating diagnostic feedback: A multifaceted perspective

Presenters: Hang Sun, Yan Jin

Year: 2021

Name of the conference: 42nd LTRC


Title: Building a validity argument for a diagnostic reading test

Presenters: Hang Sun, Yan Jin

Year: 2018

Name of the conference: AALA 2018


Title: Applying cognitive diagnostic analysis to a diagnostic listening test

Presenter: Liang Zhao

Year: 2018

Name of the conference: AALA 2018


Business License of Foreign Language Teaching and Research Press Co., Ltd. Copyright 1999 FLTRP, All Rights Reserved

京ICP备11010362号-15 京公网安备 11010802020459