Mindset — How to Think About These Tools

This page is about how to think, not what to do. The practical guidance is elsewhere — Use Cases for tasks, Essentials for principles, platform manuals for specific tools. This page addresses the prior question: what kind of thing are you dealing with, and what mental model serves you best?

Tools, not oracles

The single most important adjustment is this: treat LLM output as you would a capable but unreliable research assistant's first draft. Sometimes brilliant, sometimes wrong, always requiring your judgement.

This means:

  • Never cite an LLM directly as an authority on any factual matter
  • Always verify claims, references, dates, and attributions against primary or authoritative sources
  • Use outputs as starting points for your own thinking, not as substitutes for it
  • Expect errors and build verification into your workflow rather than treating it as an afterthought

Caution

The fluency of LLM output is precisely what makes it dangerous. A confidently wrong answer in polished prose is harder to catch than a hesitant one full of hedges. The better the writing sounds, the more carefully you should check the substance.

The verification imperative

Claude, ChatGPT, and other LLMs will fabricate references, misattribute quotations, confuse similar scholars, get dates wrong, and present disputed interpretations as settled consensus — all while sounding entirely certain. This is not a bug that will be fixed in the next version. It is a structural feature of how these systems work.

For a detailed framework on verification practices, see Verification & Citation in Essentials.

Your expertise is the point

These tools reward expertise rather than replacing it. A scholar with deep knowledge of a text will get substantially more out of an AI interaction than someone encountering the material for the first time. The system amplifies what you bring to it.

This has a practical implication: the areas where you know most are where these tools are most useful, because you can verify outputs quickly and catch errors that a non-specialist would miss. Conversely, using AI in areas outside your expertise is where the risks are highest — you cannot reliably evaluate output on subjects you do not know well.

The value of friction

Even where these tools save time, the displaced time might have been intellectually productive. The friction of drafting from scratch might be when you clarify what you are actually trying to say. Slow engagement with materials is itself scholarly practice, not overhead to be optimised away.

Leif's Notes

I notice this in my own work. When I use Claude to draft something quickly, I sometimes skip the thinking that would have happened during the slower drafting process. The draft arrives faster but I understand it less well. My rule of thumb: if the task is one where the process of doing it teaches me something, I should probably do it myself. If the task is one where I already know what I want and the bottleneck is just producing it, AI assistance makes more sense.

When not to use AI

Not every task benefits from AI assistance. There are situations where you should probably not use these tools:

  • When you need to learn the material. If the point of the exercise is developing your own understanding, outsourcing the work defeats the purpose.
  • When verification would take longer than doing it yourself. If checking the output requires the same effort as producing it from scratch, you've added a step without saving time.
  • When the stakes are high and you can't verify. Using AI for tasks outside your expertise, where you can't catch errors, is genuinely risky.
  • When your institution prohibits it. Check your institutional policy, especially for student-facing work.
  • When it involves sensitive data. See Data Governance before uploading anything confidential.

A spectrum, not a binary

The question is not "should I use AI?" but "for which tasks, under what conditions, with what safeguards?" Most scholars who engage with these tools use them selectively — for some tasks and not others, with verification practices calibrated to the stakes involved.

This is the sensible approach. There is no obligation to use AI for everything, and no shame in using it for specific tasks where it genuinely helps.

Maintaining perspective

Don't Panic

These tools are changing rapidly, but your disciplinary skills are not obsolete. The ability to read critically, evaluate evidence, construct arguments, and exercise scholarly judgement is exactly what makes AI tools useful (when they are useful) and allows you to catch their failures (which are frequent). The scholars who will navigate this landscape best are those who maintain their core expertise while developing enough AI literacy to make informed decisions about when and how to engage.


For practical tasks to try, see Use Cases. For verification practices, see Essentials.