Translation & Language Work¶
March 2026
What this task involves¶
Translation in humanities scholarship is not a single activity. It ranges from rough orientation — getting the gist of a passage in an unfamiliar language — to fine-grained interpretive argument where every word choice carries scholarly stakes. Between these extremes lie pedagogical use (helping students work through difficult passages), corpus-level scanning (identifying relevant passages in large bodies of text), and the kind of iterative engagement where a scholar uses translation as a mode of close reading.
AI tools can participate in all of these, but they perform very differently depending on where you are on that spectrum. Understanding the shape of what they can and cannot do is more useful than a blanket judgement.
Where AI tools help¶
Rough orientation and first-pass translation. All major platforms (Claude, ChatGPT, Gemini) can translate between well-resourced languages with reasonable fluency. For getting the gist of a passage in a language adjacent to your specialism, or for a student working through unfamiliar syntax, this is often useful. Upload the source text or type it directly.
Iterative engagement. The real value emerges in dialogue, not in a single translation prompt. A well-constructed sequence of follow-up questions can surface alternative readings, probe syntactic choices, and expose where the model is guessing:
Useful follow-up prompts
- "What alternative renderings are possible for [a specific phrase]?" — A good response will note the semantic range of key terms and identify where interpretive choices carry scholarly stakes.
- "How does the word order or syntax create rhetorical emphasis in this passage?" — This probes whether the model can move beyond lexical translation into stylistic analysis.
- "What is the relationship between this passage and the broader argument of the work?" — Here the model must connect a specific passage to the work's larger aims.
Corpus scanning. For generating rough translations of large corpora to identify passages for closer reading, or for quickly rendering a comparandum in a language adjacent to your specialism, the speed of these tools is genuinely useful.
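The corpus-scanning workflow can be sketched in a few lines of Python. This is a minimal illustration, not a recommended pipeline: the `rough_translate` function here is a toy stand-in using a tiny gloss table, where a real script would call your preferred model's API, and the passage IDs and keywords are invented for the example. The point is the shape of the loop: translate roughly, flag matches, then read the flagged passages in the original.

```python
# Sketch: scan a corpus via rough machine translations, flagging passages
# worth closer reading in the original language.

def rough_translate(passage: str) -> str:
    """Toy stand-in for an API-backed translation call (gloss-table lookup)."""
    gloss = {"arma": "arms", "virumque": "and the man", "cano": "I sing"}
    return " ".join(gloss.get(w.lower().strip(",."), w) for w in passage.split())

def scan_corpus(passages: dict[str, str], keywords: list[str]) -> list[str]:
    """Return IDs of passages whose rough translation mentions any keyword."""
    hits = []
    for pid, text in passages.items():
        draft = rough_translate(text).lower()
        if any(k.lower() in draft for k in keywords):
            hits.append(pid)
    return hits

corpus = {
    "Aen. 1.1": "Arma virumque cano",
    "Aen. 1.2": "Italiam fato profugus",
}
print(scan_corpus(corpus, ["arms"]))  # → ['Aen. 1.1']
```

Note that the flagged list is a starting point only: every hit still needs to be verified against the source text, since the rough translation may have flattened or misrendered exactly the passage you care about.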
Navigating apparatus criticus. Models perform reasonably well at explaining sigla, summarising variant readings in accessible language, and comparing variants across multiple witnesses.
What to watch out for¶
Leif's Notes
These tools reward expertise rather than replacing it. A postgraduate student with deep knowledge of a text will get substantially more out of the dialogue than someone encountering the text for the first time. The system amplifies what you bring to it.
Fluency masking error. The biggest danger is that smooth, confident English creates a false sense of adequacy. The model will flatten ambiguity, regularise unusual constructions, and paper over genuine difficulty. If the output reads too well, that is often a sign that something has been lost.
Hallucinated readings. Models will "complete" fragmentary texts with confidence, supplying plausible-sounding restorations without distinguishing between well-attested supplementation and pure invention. Treat any completion of lacunose text as a probabilistic guess, not a reading.
Sacred and liturgical texts. Biblical, Qur'anic, liturgical, and other sacred texts carry theological and liturgical significance that statistical text generation cannot engage with. A model may translate accurately at surface level while being entirely deaf to the interpretive traditions, theological commitments, and pastoral contexts that shape how these texts are read within living communities of faith.
Textual criticism. Models are unreliable for the core intellectual tasks of textual criticism. Conjectural emendation, palaeographic dating, and codicological analysis are outside their competence. They have never seen a manuscript.
Teaching vs. research¶
For teaching: AI translations can serve as a starting point for students working through difficult passages, provided they understand they are seeing an approximation rather than an authoritative rendering. The pedagogical work lies in training students to read the output critically: to ask what has been lost, what choices have been made, and where the translation papers over genuine difficulty.
For research: AI translations are unreliable without expert verification, and the verification requirement may negate the time saving. If you must check every phrase against the original, consult commentaries for contested readings, and confirm that technical terminology has been handled correctly, you have not saved labour — you have added a step. Any translation that appears in published scholarship must be your own, informed by your own judgement, and defensible on its own terms.
Worked examples¶
The worked examples for this activity — covering Latin, Greek, and Hebrew texts with side-by-side comparison against published human translations — are on a dedicated page:
Ancient Languages and Texts — iterative engagement, teaching vs. research distinctions, and working with manuscripts, fragments, and sacred texts.
Further reading¶
- Verification & Citation — systematic approach to checking AI output
- Prompting Principles — how to get better results through better prompting
- Platform Guides — choosing which tool to use