The Nature of Automated Essay Scoring Feedback
Issue: Vol 28 No. 1 (2011)
Journal: CALICO Journal
Abstract:
The purpose of this study is to explore the nature of feedback that English as a Second Language (ESL) students received on their writing, either from an automated essay scoring (AES) system or from their teacher. The participants were 12 adult ESL students attending an intensive English center at a university in Florida. The students' drafts were analyzed in depth from a case-study perspective. While document (essay) analysis was the main data collection method, observations and interviews provided crucial information about the context in which the students wrote and the nature of each type of feedback they received. The results revealed that the AES feedback and the written teacher feedback (TF) differed in nature: the written TF was shorter and more focused, whereas the AES feedback was quite long, generic, and redundant. The findings suggest that AES systems are not yet ready to meet the needs of ESL or English as a Foreign Language (EFL) students. Developers need to improve the feedback capabilities of these programs for nonnative English-speaking students by reducing redundancy, shortening the feedback, simplifying its language, and providing appropriate feedback for short, off-topic, or repetitious essays.
Author: Semire Dikli