Using generalizability theory to investigate the variability and reliability of EFL composition scores by human raters and e-rater


SARI E., Han T.

PORTA LINGUARUM, vol.2022, no.38, pp.27-45, 2022 (AHCI, SSCI, Scopus)

  • Publication Type: Article
  • Volume: 2022 Issue: 38
  • Publication Date: 2022
  • Doi Number: 10.30827/portalin.vi38.18056
  • Journal Name: PORTA LINGUARUM
  • Journal Indexes: Arts and Humanities Citation Index (AHCI), Social Sciences Citation Index (SSCI), Scopus, MLA - Modern Language Association Database, DIALNET
  • Page Numbers: pp.27-45
  • Keywords: EFL writing assessment, generalizability theory, scoring variability, scoring reliability, automated writing evaluation (AWE)
  • Karadeniz Technical University Affiliated: Yes

Abstract

Using generalizability theory (G-theory) as a theoretical framework, this study investigated the variability and reliability of holistic scores assigned to the same EFL essays by human raters and e-rater. Eighty argumentative essays written on two different topics by tertiary-level Turkish EFL students were scored holistically by e-rater and by eight human raters who had received detailed rater training. The results showed that e-rater and the human raters assigned significantly different holistic scores to the same essays. The G-theory analyses revealed that, despite the detailed rater training, the human raters scored the same essays considerably inconsistently, and that more reliable ratings were attained when e-rater was integrated into the scoring procedure. Implications for EFL writing assessment practices are discussed.
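The G-theory analyses described above rest on decomposing observed score variance into components for essays (persons), raters, and their interaction. As a minimal illustration, the sketch below estimates variance components and a relative generalizability coefficient for a fully crossed one-facet persons × raters design; the exact design and software used in the study are not specified here, so this is an assumption-laden sketch, not the authors' implementation.

```python
import numpy as np

def g_study(scores):
    """Estimate variance components for a persons x raters (p x r)
    fully crossed design from the two-way ANOVA mean squares.

    scores: 2-D array, rows = essays (persons), columns = raters.
    Returns (var_persons, var_raters, var_residual), where the
    residual confounds the p x r interaction with error.
    """
    n_p, n_r = scores.shape
    grand = scores.mean()
    p_means = scores.mean(axis=1)
    r_means = scores.mean(axis=0)

    # Sums of squares for the two-way design without replication
    ss_p = n_r * np.sum((p_means - grand) ** 2)
    ss_r = n_p * np.sum((r_means - grand) ** 2)
    ss_total = np.sum((scores - grand) ** 2)
    ss_pr = ss_total - ss_p - ss_r

    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

    # Expected-mean-square solutions; negative estimates truncated at 0
    var_pr = ms_pr
    var_p = max((ms_p - ms_pr) / n_r, 0.0)
    var_r = max((ms_r - ms_pr) / n_p, 0.0)
    return var_p, var_r, var_pr

def g_coefficient(var_p, var_pr, n_raters):
    """Relative G coefficient (Ep^2) for a D-study with n_raters raters."""
    return var_p / (var_p + var_pr / n_raters)
```

A D-study then varies `n_raters` in `g_coefficient` to ask how many raters (or whether adding e-rater as one of them) would yield acceptably reliable scores.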