Successful PhD defence at CTTS: Dr. Linda Mitchell

We are pleased to announce another successful PhD defence at the CTTS. On Friday, 23rd January, Ms. Linda Mitchell defended her thesis, entitled “Community Post-Editing of Machine-Translated User-Generated Content” (abstract below). Dr. Mitchell was co-supervised by Dr. Sharon O’Brien, Dr. Johann Roturier (Symantec), and Dr. Fred Hollowood (Fred Hollowood Consulting).


With the constant growth of user-generated content (UGC) online, the demand for quick translation of large volumes of text is increasing. This demand is often met with a combination of machine translation (MT) and post-editing (PE). Despite extensive research on post-editing with professional translators and translation students, there are few PE studies involving lay post-editors, such as domain experts. This thesis explores lay post-editing as a feasible solution for UGC in a technology support forum, machine translated from English into German. This context of lay post-editing in an online community prompts a redefinition of quality.

We adopt a mixed-methods approach, investigating PE quality quantitatively through error annotation, a domain-specialist evaluation, and an end-user evaluation. We further explore post-editing behaviour, i.e. the specific edits performed, from a qualitative perspective. With the involvement of community members, the need for a PE competence model becomes even more pressing. We investigate whether Göpferich’s (2009) translation competence (TC) model may serve as a basis for lay post-editing.

Our quantitative data shows, with statistical significance, that lay post-editing is a feasible concept, albeit one producing variable output. On a qualitative level, post-editing is successful for short segments requiring 35% post-editing effort or less. No post-editing patterns were detected for segments requiring more PE effort. Lastly, our data suggest that PE quality is largely independent of the profile characteristics measured.

This thesis constitutes an important advance in lay post-editing and in benchmarking the evaluation of its output, while also uncovering the difficulty of pinpointing the reasons for variance in the resulting quality.