Disclosure & Ethics¶
If you use AI in your academic work, when and how should you say so? This page covers the practical and ethical dimensions of transparency about AI use.
Essential
The landscape of disclosure requirements is evolving rapidly. What matters most is not getting the format exactly right but establishing a habit of transparency. If in doubt, disclose. No one has ever been criticised for being too transparent about their methods.
Why disclosure matters¶
Transparency about AI use serves the same purpose as transparency about any other method: it lets others evaluate your work on its merits. Specifically:
- Reproducibility. If your analysis or data processing involved AI, others need to know that to assess or replicate your work.
- Intellectual honesty. Presenting AI-generated text or analysis as entirely your own is a form of misrepresentation, even if it is not technically plagiarism under current definitions.
- Trust. Colleagues, students, and reviewers are more likely to trust work that is upfront about its methods than work where AI use is discovered after the fact.
- Norms. The academic community is still working out what constitutes appropriate AI use. Transparent disclosure contributes to that collective process.
Institutional policies¶
Most universities now have policies on AI use, though they vary considerably in scope and specificity.
What to check¶
- Does your institution have a general AI use policy? Check IT services, the research office, or the provost/vice-chancellor's office.
- Does your department have discipline-specific guidance? General institutional policies may not address the particular conventions of your field.
- Are there separate policies for teaching and research? Many institutions have stricter rules about AI use in assessed student work than in research.
- Is there a policy on AI use in grant applications? Some funders now require disclosure; your institution may have guidance on compliance.
Common institutional positions¶
| Position | What it means in practice |
|---|---|
| Prohibited | AI tools may not be used for the specified activity (rare for research; more common for student assessment) |
| Permitted with disclosure | You may use AI tools but must declare when and how. This is the most common position for research |
| Permitted without restriction | No disclosure required. Uncommon, but some institutions take this position for non-assessed work |
| No policy yet | You should still disclose, because the absence of a policy is not the same as permission to be opaque |
Caution
If your institution has no AI policy, do not assume anything goes. Apply the principle of transparency: disclose your use, document your methods, and be prepared to explain your choices. Policies will catch up, and you want your past work to look responsible in hindsight.
Journal and publisher requirements¶
A growing number of journals and publishers require disclosure of AI use in submissions. Requirements vary, but the trend is toward greater transparency.
Common requirements (early 2026)¶
- Many major publishers (Springer Nature, Elsevier, Taylor & Francis, Cambridge University Press, Oxford University Press) require authors to disclose AI use in the writing or preparation of a manuscript.
- Most prohibit listing an AI as an author, since AI cannot take responsibility for the content, respond to peer review, or be held accountable.
- Some require disclosure in a specific section (e.g. a "Use of AI" statement in the Methods or Acknowledgements).
- Some distinguish between types of use --- using AI for language polishing may have different requirements from using it for data analysis or literature review.
Off the Beaten Track
Journal policies are changing frequently. Before submitting, check the current author guidelines for your target journal. A policy that was not there six months ago may have appeared. The COPE (Committee on Publication Ethics) position statements are a useful reference for the general direction of travel.
A practical disclosure template¶
The following template can be adapted for different contexts. It is deliberately more detailed than most current requirements demand, on the principle that over-disclosure is preferable to under-disclosure.
For a journal article or book chapter¶
AI Disclosure Statement
[Model name and version, e.g. "Claude Opus 4 (Anthropic)"] was used during the preparation of this work for the following purposes: [list specific uses, e.g. "drafting initial literature review summaries," "translating passages from Latin," "cleaning and structuring OCR output from archival sources," "generating data visualisations from structured datasets"].
All AI-generated content was reviewed, verified, and substantially revised by the author(s). Factual claims were checked against [specify sources]. Citations were verified against [specify databases/catalogues]. The author(s) take full responsibility for the content of this work.
[Optional: "Prompts and AI outputs are available in the supplementary materials / upon request."]
For teaching materials¶
These materials were prepared with the assistance of [model name]. AI was used to [specify: draft discussion questions, generate example analyses, format reading lists, etc.]. All content was reviewed and edited to ensure accuracy and appropriateness for the course.
For a grant application¶
AI tools ([model name]) were used to [specify: draft sections of the narrative, review the case for support, generate budget justifications, etc.]. All content was reviewed, verified, and revised by the applicant(s). The research plan, methodology, and intellectual contribution are entirely the applicants' own.
Leif's Notes
I include an AI disclosure statement in everything I submit, even when the journal does not require it. It takes two sentences and it pre-empts any awkward questions later. I also keep a research log noting which sessions involved AI, what I asked, and what I used from the output. This is not onerous --- a one-line entry per session suffices.
The ethics of transparency¶
The "so what?" test¶
Not all AI use is equally significant. A useful test: would knowing about this use change how someone evaluates the work?
- Using AI to check grammar in a final draft: Probably not significant. Equivalent to using a spell-checker. Disclosure is courteous but not essential.
- Using AI to draft paragraphs of argument: Significant. The reader needs to know that the prose and reasoning were AI-assisted, even if you revised them.
- Using AI to process and analyse data: Significant. This is a methodological choice that affects reproducibility.
- Using AI to generate a literature review: Very significant. The reader needs to know that the survey of scholarship was mediated by a system that may fabricate references.
When AI use could be misconduct¶
The line between appropriate and inappropriate AI use depends on context:
- Student assessment: Using AI to produce assessed work without disclosure is academic misconduct at most institutions. This is clear-cut.
- Peer review: Using AI to write or substantially assist with a peer review raises serious confidentiality concerns (the manuscript is shared with a third-party service) and integrity concerns (the journal is asking for the reviewer's own expert judgement, not a model's).
- Research data fabrication: Using AI to generate fictional data and presenting it as real is research misconduct, full stop.
- Ghost-writing: Submitting AI-generated text as your own in contexts where original authorship is expected (job applications, personal statements, testimonials) is dishonest.
Caution
Peer review is a special case. Most journals explicitly prohibit sharing manuscripts under review with third parties. Uploading a manuscript to a commercial AI service for review assistance likely violates this prohibition, regardless of the provider's data handling policies. If you want to use AI to help with a review, check the journal's policy first.
Building good habits¶
Rather than treating disclosure as a bureaucratic requirement, build it into your workflow:
- Keep a session log. Note the date, model, what you asked, and what you used. One line per session.
- Save prompts and outputs for anything you might publish or teach from. Most platforms allow you to export conversation history.
- Draft your disclosure statement early, while you still remember what you used AI for. Do not try to reconstruct it months later when submitting.
- Be specific. "AI was used in the preparation of this work" is technically a disclosure but tells the reader nothing useful. Specify what was done and how it was verified.
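If you prefer to automate the one-line-per-session log described above, a few lines of scripting are enough. The following sketch is purely illustrative: the function name, log filename, and field layout are assumptions, not a prescribed format.

```python
from datetime import date
from pathlib import Path


def log_ai_session(logfile: Path, model: str, task: str, used: str) -> None:
    """Append one line per AI session: date | model | what was asked | what was used.

    The pipe-separated layout is an arbitrary choice; any format you can
    read back months later will do.
    """
    entry = f"{date.today().isoformat()} | {model} | {task} | {used}\n"
    with logfile.open("a", encoding="utf-8") as f:
        f.write(entry)


# Example entry: model name and task details are placeholders.
log_ai_session(
    Path("ai_log.txt"),
    "Claude Opus 4",
    "drafted discussion questions for week 3 seminar",
    "kept 3 of 8 questions, all edited",
)
```

Because the log is plain text, it doubles as the raw material for the disclosure statement you draft at submission time.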
Don't Panic
Disclosure is not an admission of weakness. Using AI thoughtfully, transparently, and with proper verification is entirely compatible with rigorous scholarship. The goal is not to avoid AI but to use it honestly --- the same standard we apply to every other tool and method.