Gemini for Humanities Scholars¶
March 2026
Don't Panic
Skip to what you need: If you only use Gemini's conversational interface, read the orientation below and Chat, then skip to the Essentials section. If you want structured web research, see Deep Research, Canvas & Live. If you are doing coding work, see Coding & Developer Tools.
What makes Gemini different¶
Gemini's distinguishing feature is integration. If you already live inside Google's ecosystem — Gmail, Drive, Docs, Sheets, Calendar, Meet — Gemini can reach into all of them. It is also the most broadly multimodal of the major platforms: text, images, audio, video, and code are all native capabilities, not bolt-on additions. For humanities scholars who work with visual or audio material alongside text, or who want an AI assistant that already knows where their files are, this matters.
The trade-off is complexity. Google's AI product landscape is sprawling, feature availability varies by country, plan, account type, and device, and names have changed repeatedly. Where Claude offers a clean product hierarchy and ChatGPT a single familiar interface, Gemini asks you to navigate a wider and less predictable terrain. This section is your map.
Gemini, Google DeepMind, and the product landscape¶
Google DeepMind is the AI research division that builds Gemini's underlying models. The Gemini app is the consumer product — the interface you use in a browser or on your phone. The current model generation is Gemini 3 (with Pro, Flash, and Deep Think variants appearing across different tiers).
The Gemini model family was introduced in December 2023; the Gemini app followed in February 2024, replacing Google's earlier chatbot Bard. Since then, Gemini has expanded well beyond conversational chat into deep research, canvas-based document creation, voice interaction, image and video generation, connected apps, and agentic capabilities.
Separate from the consumer app, Google also offers Google AI Studio (a browser-based environment for experimenting with Gemini models), the Gemini API (for developers building their own tools), the Gemini CLI (an open-source command-line agent), and NotebookLM (a research tool for working with uploaded source documents). These are distinct products but part of the same ecosystem.
Who this section is for¶
This section is for humanities scholars who want practical guidance on using Gemini well. If you are new to AI tools altogether, start with the Primer first. If you are deciding between platforms, see How to Choose. If you already know the basics and want to know which parts of Gemini matter for real research, teaching, writing, and administration, start here.
For prompting principles that apply across all platforms, see Prompting Principles. For data governance, verification, multi-model strategies, and cost awareness, see the Essentials section.
Gemini at a glance
Best reasons to use it: Google ecosystem integration, genuinely broad multimodal support (text, images, video, audio, code), strong teaching-artefact creation via Canvas, and a smooth path from experimentation to developer tooling via AI Studio.
Main reasons to be cautious: Confusing product sprawl, uneven feature availability by country/plan/account type, and personal-account privacy defaults that deserve careful attention.
The short version¶
For many humanities researchers, the useful parts of Gemini are:
- Ordinary chat for drafting, summarising, brainstorming, translation support, and explanation
- Web search for timely answers with links back to sources
- File uploads for working with PDFs, images, audio, and video
- Deep Research for structured online research with a documented report — can draw on Gmail, Drive, uploaded files, and NotebookLM notebooks
- Canvas for creating and editing documents, slides, code, infographics, quizzes, and Audio Overviews
- Connected Apps when you need Gemini to work with Google Workspace (Drive, Docs, Gmail, Calendar)
- Gems for repeated tasks with a reusable configuration
- Gemini Live for voice-based thinking, brainstorming, and accessibility
Google offers a capable free tier (Gemini 3 Pro with daily limits, file/image uploads, basic web browsing), paid individual plans (Plus, Pro, Ultra), and Workspace plans for institutions. Google is also offering free AI Pro access for a school year to university students in the US, UK, Japan, Brazil, and Indonesia — directly relevant if you or your students are at an eligible institution.
A practical rule: start with plain chat. Move to Deep Research when you need broad web-based synthesis. Use Canvas when you need to create artefacts. Use Gems only when you find yourself repeating the same kind of task. Ignore the developer tools unless you really do want to build things.
Which Gemini tool do I need?¶
Chat¶
Best for: Asking questions, getting writing help, analysing documents, brainstorming.
You'll need: A free Google account or a paid subscription.
Skills required: None — just a browser.
Deep Research¶
Best for: Structured web-based synthesis with a documented report.
You'll need: A Pro or Ultra plan.
Skills required: None — but knowing what to ask for helps.
Canvas¶
Best for: Creating and editing documents, slides, code, quizzes, and other artefacts.
You'll need: A plan that includes Canvas (most do).
Skills required: None.
Gems¶
Best for: Repeated specialist tasks with a reusable configuration.
You'll need: A paid plan.
Skills required: None — but you need a clear, recurring need.
Connected Apps¶
Best for: Working with Google Workspace (Gmail, Drive, Docs, Calendar) from within Gemini.
You'll need: A Google account with the relevant services. Feature availability varies by plan.
Skills required: None.
Gemini Live¶
Best for: Voice-based conversation, thinking aloud, brainstorming, accessibility.
You'll need: A paid plan; the mobile app is required for the full experience.
Skills required: None.
AI Studio, CLI & Developer Tools¶
Best for: Experimenting with models, building tools, scripting, automation.
You'll need: A Google account. API usage is billed separately from Gemini subscriptions.
Skills required: Technical comfort. Terminal basics for CLI.
Many tasks start in Chat. Move to Deep Research when you need synthesis across many sources. Use Canvas when you need to create something tangible. Use Gems only once a recurring need is clear. The developer tools are for building, not for reading and writing.
The tool spectrum¶
| Conversational | → | → | → | Agentic |
|---|---|---|---|---|
| Chat | Deep Research | Canvas | Gems | Agent / Jules |
| You talk, Gemini responds | Structured web synthesis | Create artefacts | Reusable configurations | Delegated action |
The Gemini tool spectrum: from conversation to autonomous action.
Decision flowchart¶
| Question | If No | If Yes |
|---|---|---|
| Do you need structured web research? | ↓ | Deep Research |
| Do you need to create documents, slides, or other artefacts? | ↓ | Canvas |
| Is the work a repeated specialist task? | ↓ | Gems |
| Do you need Gemini to access your Google Workspace? | ↓ | Connected Apps |
| Are you building software or automations? | Chat | Developer Tools |
Which Gemini tool do I use? A quick decision guide.
Essential
If you mainly read, write, teach, and analyse texts: Start with Chat. Use Deep Research for synthesis. Use Canvas when you need to produce artefacts. You can safely ignore AI Studio, the CLI, Jules, and Agent mode until you have a specific need. The cross-platform essentials still apply — see Essentials.
If you want to build tools, transform data, or automate processing: See Coding & Developer Tools. The Essentials still apply.
Minimum safe practice
If you just want to try Gemini responsibly:
- Use Chat. Upload the source text you want to work with.
- Ask for critique, summary, or analysis — not for authoritative facts.
- Verify all references against a library catalogue.
- Do not upload sensitive, personal, or embargoed material — Gemini's personal-account defaults include human review and training use. See Plans & Privacy.
- Record what you asked and what you used from the response.
- If it's useful, great. If not, nothing was lost.
Contents¶
- Chat — Conversations, web search, uploads, and multimodal interaction
- Gems & Connected Apps — Custom configurations and Google Workspace integration
- Deep Research, Canvas & Live — Structured research, artefact creation, and voice
- Coding & Developer Tools — AI Studio, CLI, Code Assist, and Jules
- Plans & Privacy — Choosing the right tier and understanding data handling
- Worked Examples — Humanities-specific workflows step by step