Exploring the Impact of Generative AI as an Automated Corrective Feedback Tool on Academic Writing Development for IELTS Tasks

This exploratory study evaluates the effectiveness of a generative AI assistant (i.e., Claude) as an automated corrective feedback (ACF) tool for improving academic writing performance on the International English Language Testing System (IELTS) exam. Additionally, this research seeks to determine participants' perceptions of using generative AI in this context. Participants are third-year L2-English learners from an English language course at a Chilean university, selected through convenience sampling. The study adopts a mixed-methods approach, combining quantitative assessment of writing progress with qualitative analysis of learners' perceptions. Data collection involves pre- and post-intervention writing samples, logs of student-AI interactions, and a questionnaire capturing participants' perceptions and opinions on using AI to improve academic writing. The study design includes one month of AI-assisted tutoring sessions with fixed prompts (i.e., participants interact with Claude to receive ACF) and guided practice (i.e., participants analyze the ACF to improve their texts). This study contributes to the growing field of AI applications in language education and may offer preliminary insights into innovative methods for IELTS preparation. Finally, the exploratory findings could inform future research directions, with potential implications for curriculum design in English language teaching programs and for the development of AI-assisted learning tools for academic writing.
