Getting towards Native-Speaker Writing in Examinations

Published: 20 June 2023 | Version 2 | DOI: 10.17632/mwr7mxpy3y.2
Contributors:
Somayyeh Ariyanfar

Description

The dataset contains 40 IELTS Task One texts selected for error analysis: 15 written by non-native speakers (students at CEFR B2, roughly ACTFL Advanced Low/Mid/High) and 25 model texts written by native speakers. The two authors, assisted by Jafar Tavakoli, identified, classified, and quantified the errors, based initially on Hyland's (2005) framework. This framework, however, proved insufficient, so we developed a much larger one as an assessment rubric, covering three main categories: ‘communication’, ‘morpheme form and meaning’, and ‘syntax and structure’.

In the article, we do not give guidance on error correction itself. We only identify the errors made by students and use these to develop a master table, usable as a model for both teachers and learners worldwide. Statistically, most errors were in meaning-bearing items, including false friends, L1 transfer, interlanguage, wrong word choice, syntax used with the wrong semantic force, and so on. Mistakes in grammar as such appear to be relatively rare, yet many of us teachers focus more on grammar than on meaning transfer. The assessment rubric helps refocus attention on meaning.

Students at the CEFR B2 level generally plateau and make slower progress, so it is important for them to consciously raise their awareness of their errors. The rubric gives both teachers and learners a statistical profile of where a student has the most difficulty and therefore needs further work, and teachers can (re)design their coursework according to the frequency of the errors identified in the table. The rubric is suitable for intermediate learners more broadly, from B1 to B2.
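Since the rubric is meant to yield a statistical profile of error frequencies, the following is a minimal sketch of how such a tally could be computed. It assumes a hypothetical CSV export of the annotations (the file name annotated_errors.csv and the columns text_id, category, and error_type are illustrative, not part of the dataset itself):

    # Hypothetical sketch: tally error frequencies by rubric category.
    # Assumes a CSV of annotated errors with columns: text_id, category, error_type.
    import csv
    from collections import Counter

    # The three main categories named in the description.
    CATEGORIES = {"communication", "morpheme form and meaning", "syntax and structure"}

    def tally_errors(path):
        """Count errors per main category and per specific error type."""
        by_category = Counter()
        by_type = Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                category = row["category"].strip().lower()
                if category in CATEGORIES:
                    by_category[category] += 1
                    by_type[row["error_type"].strip().lower()] += 1
        return by_category, by_type

    if __name__ == "__main__":
        by_category, by_type = tally_errors("annotated_errors.csv")
        for category, count in by_category.most_common():
            print(f"{category}: {count}")
        # The most frequent specific error types suggest where to refocus coursework.
        for error_type, count in by_type.most_common(10):
            print(f"  {error_type}: {count}")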

Categories

Textual Analysis