An Additive Multidimensional Analysis of GenAI and Student Assignments
Since the public release of ChatGPT, language teachers and writing instructors have contended with its possible pedagogical applications and with the challenges it poses to assessment practices. The burgeoning literature on AI in Applied Linguistics has moved in two broad directions: (1) applications of AI in teaching and (2) the extent to which AI can simulate original pieces of writing. In this study, we explore the extent to which the linguistic profile of ChatGPT-generated assignments approximates that of assignments written by undergraduate linguistics students in an English as a Foreign Language context. To this end, we compiled two corpora: one containing 54 assignments written for different courses in the linguistics major, and another generated with ChatGPT 3.5. For the ChatGPT corpus, we entered the same prompts given to the students and collected the AI’s responses. To compare the linguistic profiles of the two corpora, we conducted an additive Multidimensional Analysis (MDA) using Biber’s (1988) dimensions, which allows for a broader comparison across genres. Both corpora were tagged with the Biber Tagger, and factor scores were calculated for each text. The results show considerable differences between the two corpora. On Dimension 1 (Involved vs. Informational Production), student writing loads on the positive side (1.34), while ChatGPT assignments load on the negative side (-22.1). This suggests that ChatGPT output more closely approximates academic and professional writing, whereas student writing is more personal and involved. The MDA also revealed some similarities between the corpora, such as the use of narrative features.
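In an additive MDA of the kind described above, each text’s score on a dimension is computed by standardizing its normalized feature frequencies against corpus norms and summing the z-scores of the features that load positively on the dimension, minus those that load negatively. The sketch below illustrates this computation in Python; the feature names are a small illustrative subset of Biber’s (1988) Dimension 1 features, and all numeric values are hypothetical, not drawn from the study’s corpora.

```python
# Illustrative sketch of an additive dimension score (Biber 1988, Dimension 1).
# Feature counts are assumed to be normalized (e.g., per 100 words); the
# specific features and numbers below are hypothetical examples only.

# Subset of features loading positively / negatively on Dimension 1
# (Involved vs. Informational Production).
DIM1_POSITIVE = ["private_verbs", "contractions", "present_tense", "first_person_pronouns"]
DIM1_NEGATIVE = ["nouns", "word_length", "prepositions"]

def dimension_score(text_counts, corpus_means, corpus_sds):
    """Sum z-scores of positive-loading features and subtract those of
    negative-loading features, following the additive MDA procedure."""
    def z(feature):
        return (text_counts[feature] - corpus_means[feature]) / corpus_sds[feature]
    return sum(z(f) for f in DIM1_POSITIVE) - sum(z(f) for f in DIM1_NEGATIVE)

# Hypothetical normalized counts for one text, with corpus means and SDs.
text_counts  = {"private_verbs": 2.0, "contractions": 1.0, "present_tense": 50.0,
                "first_person_pronouns": 10.0, "nouns": 150.0,
                "word_length": 4.2, "prepositions": 100.0}
corpus_means = {"private_verbs": 1.0, "contractions": 0.5, "present_tense": 40.0,
                "first_person_pronouns": 5.0, "nouns": 180.0,
                "word_length": 4.5, "prepositions": 110.0}
corpus_sds   = {"private_verbs": 1.0, "contractions": 0.5, "present_tense": 10.0,
                "first_person_pronouns": 5.0, "nouns": 30.0,
                "word_length": 0.5, "prepositions": 20.0}

score = dimension_score(text_counts, corpus_means, corpus_sds)
```

With these toy values the text scores positively (involved) on Dimension 1; a text dominated by nouns, long words, and prepositions would instead receive a negative (informational) score, as the ChatGPT corpus does in the study.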