FAQ — Common Anxieties and Objections

Honest answers. No hype, no dismissal.

Will AI replace me?

Short answer: No, but it will change what some of your work looks like.

LLMs cannot do the core work of humanities scholarship: exercising judgement, constructing original arguments, evaluating evidence in context, or contributing to disciplinary conversations. They can produce text that looks like those things, which is precisely why critical engagement matters.

What AI is likely to change is the distribution of labour within scholarly work — automating some routine tasks while making others (like verification) more important. The scholars who understand both their discipline and these tools will be better positioned than those who ignore the tools entirely or those who rely on them uncritically.

Don't Panic

Your training in close reading, critical analysis, and evidence evaluation is more valuable now, not less. These are exactly the skills needed to use AI tools responsibly and to catch their frequent errors.

Is using AI cheating?

It depends entirely on context and transparency.

  • In your own research: Using AI as a tool is no more "cheating" than using a library catalogue, a concordance, or a spell-checker. What matters is transparency about your methods and intellectual honesty about what work is yours.
  • In teaching: Check your institution's policy. Most now distinguish between prohibited use (submitting AI-generated work as your own) and legitimate use (using AI as a research aid, with disclosure).
  • In assessment: This is an institutional and pedagogical question, not a moral one. See Disclosure & Ethics for practical guidance.

The key principle: disclose your use, maintain your responsibility, and never present AI-generated work as your own scholarship.

What about hallucinations?

"Hallucination" is the industry term for when LLMs generate confident, fluent, and entirely fabricated content — fake citations, wrong dates, non-existent scholars, invented quotes.

This is not a bug that will be fixed. It is a structural feature of how these systems work: they generate statistically probable text, and sometimes statistically probable text is wrong. Current models hallucinate less than earlier ones, but the problem persists.

The practical response: Build verification into every workflow. See Verification & Citation for a detailed framework. The short version: never trust a citation you haven't checked, never trust a factual claim you haven't verified, and be especially sceptical of output that sounds authoritative.

What about data privacy?

Your data governance obligations don't disappear when you use AI tools. Key points:

  • Free tiers on most platforms may use your conversations for training. Don't upload sensitive material.
  • Paid subscriptions (Claude Pro, ChatGPT Plus) generally don't train on your data, but check current policies.
  • Institutional data may have specific restrictions on sharing with third-party services.
  • Student data is subject to GDPR, FERPA, and other regulations.

See Data Governance for the full picture.

What about the environmental cost?

This is a legitimate concern, and it should neither be dismissed nor used as a blanket excuse for disengagement. Training large models requires enormous computational resources, with significant carbon emissions and water usage. Inference (generating each response) is less intensive but scales with use.

The honest answer: if environmental impact is a significant factor in your decision-making, these tools have real costs. Whether those costs are justified depends on what you're using them for and what alternatives exist. Routine use for trivial tasks is harder to justify than targeted use for tasks where the tools offer genuine value.

I tried ChatGPT in 2023 and it was useless

Current systems (early 2026) are substantially more capable than 2023 versions. If your last experience was with GPT-3.5 or early GPT-4, you have not seen what current frontier models can do. Extended reasoning, tool use, memory, file handling, and code execution have all been added or significantly improved.

This doesn't mean the tools are now perfect — the limitations described throughout this guide are real. But dismissing the technology based on a 2023 experience is like dismissing the internet based on a 1995 experience. The trajectory matters.

Leif's Notes

I was sceptical too, and I think caution remains warranted. But the difference between the models of even early 2025 and those of 2026 is not incremental but qualitative. If you tried these tools then and found them wanting, I'd suggest a 15-minute retest with the Getting Started exercise. If they're still not useful for your work, that's a perfectly defensible conclusion.

Can AI understand my field?

No. LLMs do not understand anything. They generate statistically probable text based on patterns in training data. However, for well-documented fields and mainstream topics, this pattern-matching can produce surprisingly useful output. For specialist, niche, or cutting-edge topics, the default output will be less reliable, but it can be significantly improved by providing additional context in the form of documents or other data sources.

The practical implication: your expertise is the quality filter. Use AI for tasks where you can evaluate the output, not for tasks where you'd need to take it on trust.

Should I be worried about my students using AI?

You should be informed about your students using AI, because many already do. The pedagogical questions are real: How do you assess work when AI assistance is available? How do you teach skills that AI can approximate? How do you help students develop the critical judgement needed to use these tools responsibly?

These questions don't have simple answers, and they vary by discipline and level. But ignoring the reality of student AI use doesn't make it go away — it just means students navigate it without guidance.

Where do I go from here?