Essential Considerations

Whether you use Claude, ChatGPT, Gemini, Copilot, or an open-weights model running on your own laptop, certain principles apply everywhere. This section covers them.

These pages are platform-agnostic. The examples lean towards humanities scholarship --- history, classics, literature, languages, archaeology --- but the principles hold for any discipline. If you only read one section of this entire guide, make it this one.

Essential

At minimum, read Verification & Citation and Data Governance before using any AI tool for academic work. Everything else here is important, but those two pages address the issues most likely to cause real problems.


What you will find here

Page                    | What it covers                                                                              | Time
Verification & Citation | What LLMs get wrong, how to check their outputs, the verification ladder, recording provenance | 10 min
Data Governance         | What providers do with your data, institutional policies, practical guidelines by data type    | 8 min
Prompting Principles    | Being specific, providing context, role-framing, structured prompts, iteration                 | 10 min
Disclosure & Ethics     | When and how to declare AI use in teaching and publication, transparency templates             | 8 min
Multi-Model Strategy    | Using disagreement as a signal, mixing models within and across providers                      | 6 min
Cost & Plans            | Subscription tiers, API pricing, cost-saving strategies, what is worth paying for              | 8 min
Decision Sheet          | A one-page printable checklist: before, during, and after using AI                             | 3 min

Why a separate tier?

The platform manuals (Claude, ChatGPT, Gemini, and so on) tell you how to use a particular tool. This section tells you what to think about regardless of which tool you pick.

Verification matters whether you are using Claude or ChatGPT. Data governance applies whether your institution has signed an enterprise agreement or you are using a free tier on a personal account. Prompting principles transfer across every LLM you will encounter.

Leif's Notes

I split these principles into their own section after noticing that colleagues kept asking the same questions no matter which tool they started with: "Can I trust the references?" "Is it safe to upload my data?" "Do I need to tell anyone I used it?" The answers do not depend on the platform. They depend on the principles here.


How to use these pages

If you are brand new to AI: Read Verification & Citation and Data Governance first. Then read Prompting Principles before your first serious session. The rest can wait until you need it.

If you are already experimenting: Skim the Decision Sheet to check whether you have blind spots, then read any pages that cover areas you have not thought about.

If you are setting policy: Disclosure & Ethics and Data Governance are the most relevant. The Decision Sheet can be adapted for departmental guidance documents.

If you want a quick reference: The Decision Sheet distils everything on these pages into a single-page checklist. It is designed to be printed or kept open during a session.


The three things that matter most

If you take nothing else from these pages, take these three habits:

  1. Check your data before uploading. Not everything belongs on a commercial server. Five seconds of thought can prevent a governance problem. (Data Governance)

  2. Verify outputs before using them. LLMs are confident, not reliable. Every citation, every factual claim, every translation needs checking at a level proportionate to the stakes. (Verification & Citation)

  3. Disclose your AI use. Transparency about methods is a scholarly norm, not a confession of weakness. If AI played a role in the work, say so. (Disclosure & Ethics)

Everything else on these pages --- prompting technique, multi-model strategies, cost management --- is useful refinement. But those three habits are the foundation.


Where these principles come from

The guidance here draws on three sources:

  • Anthropic's own documentation on prompting, safety, and data handling, adapted for academic contexts.
  • Emerging best practice from universities, publishers, and funding bodies that are developing AI policies.
  • Practical experience from humanities scholars who have been experimenting with these tools and discovering (sometimes the hard way) what works and what does not.

Where we state a principle, we try to explain why it matters, not just what to do. Scholars are more likely to follow guidance they understand than rules they have been told to memorise.


A note on currency

AI platforms change their pricing, policies, and capabilities frequently. These pages state principles that are durable, but where we mention specific prices, tiers, or policy details, we include the date we last checked. If something looks out of date, it probably is --- check the provider's own documentation and let us know so we can update.

Don't Panic

You do not need to master all of this before you start. Most of these principles amount to the same critical thinking you already apply to any source. The technology is new; the scholarly habits are not.