Understanding your tools · Card 04

Using AI Responsibly

Environmental responsibility, mindful use, and maintaining a free — not compulsive — relationship to technology. You remain the thinking subject. The research remains yours.

Calculative thinking
Planning, optimising, extracting
Feed everything in. Let the AI find the patterns. Speed and volume as virtues. The researcher becomes a prompt-giver; the machine becomes the interpreter. This is technological compulsion — being driven by the tool rather than using it.
Meditative thinking
Dwelling, questioning, remaining present
Choose deliberately what to give the AI and why. Interrogate its outputs. Let the encounter with the material remain slow, uncertain, and yours. The AI surfaces; you interpret. The tool serves the thinking; the thinking is not replaced by it.
🧭
You are seeking a free relationship to technology — one in which you can use AI without being driven by it, set it aside without loss, and remain the thinking, interpreting subject at the centre of your research. This is not anti-technology. It is the condition for using technology well.
Martin Heidegger · Discourse on Thinking (Gelassenheit, 1959) · The Question Concerning Technology (1954)
Gestell
Enframing
Technology as a mode of ordering the world — one that reduces everything, including knowledge and participants, to "standing reserve": material to be optimised, processed, and extracted. The risk is that AI research tools enframe data as resource rather than as the traces of lived, interpreted experience.
Gelassenheit
Releasement / Letting-be
A free, non-compulsive orientation toward technology. Not rejection — but the capacity to say both yes and no simultaneously: to use the tool without being possessed by it. Releasement preserves the slowness, the dwelling, and the interpretive openness that qualitative research requires.
Applied here: the question is not whether to use AI, but whether you remain free in relation to it — able to step back, to distrust an output, to sit with ambiguity rather than accept a confident-sounding answer. Gelassenheit is not passivity. It is interpretive sovereignty.
Note: Heidegger is named here as the origin of this conceptual framework. He is not named elsewhere on this site. His work is engaged on its philosophical terms, with full awareness of the ethical complexities of his biography.
Current context windows by model (2026)
Model · Context window · Implications for researchers
Gemini 3 Pro · Up to 10M tokens · An entire research archive could technically be uploaded. The temptation toward total input is significant — and represents calculative thinking at scale.
GPT-5 · ~1M tokens · Large enough for comprehensive document sets. Demands researcher discipline about what is included and why.
Claude Sonnet · 200K (standard) · Substantial, but not unlimited. Encourages selection. Well-suited to focused, session-by-session interpretive work.
The disciplining argument: Regardless of what is technically possible, the act of choosing what to include in a session is itself interpretive work. Asking "what is essential for this specific question?" rather than "what can I fit in?" keeps the researcher in the lead. A large context window offered without critical reflection is an invitation to enframing — treating your data as standing reserve to be processed in bulk. The meditative researcher resists this, not by using less capable tools, but by remaining selective and purposeful about every input.
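The selection discipline described above can be made concrete before a session even starts: a rough token estimate (the common heuristic of roughly 4 characters per token for English prose) lets you check a candidate document set against a deliberately small budget. A minimal sketch, assuming that heuristic; the document names, sizes, and budget are all illustrative, not prescriptive.

```python
# Rough token budgeting for an AI session: choose deliberately, don't fill the window.
# Uses the common ~4 characters/token heuristic; real tokenisers vary by model.

def estimate_tokens(text: str) -> int:
    """Rough token count: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

def plan_session(documents: dict[str, str], budget_tokens: int) -> list[str]:
    """Include documents in the given (priority) order until the budget is
    reached. The ordering is the interpretive work: it encodes the researcher's
    judgement of what is essential for this specific question."""
    included, used = [], 0
    for name, text in documents.items():
        cost = estimate_tokens(text)
        if used + cost > budget_tokens:
            break  # stop rather than cram: the budget is a discipline, not a limit to fill
        included.append(name)
        used += cost
    return included

# Illustrative, hypothetical documents (sizes in characters)
docs = {
    "interview_07_transcript": "x" * 40_000,    # ~10k tokens
    "field_notes_week_3":      "x" * 20_000,    # ~5k tokens
    "full_archive_dump":       "x" * 4_000_000, # ~1M tokens: technically possible, rarely essential
}
print(plan_session(docs, budget_tokens=50_000))
```

The point of the budget is not technical necessity — the window may well fit everything — but that setting it forces the "what is essential?" question the paragraph above describes.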
The real-world footprint of AI inference (2025 research)
33 Wh
Energy per long prompt on the most resource-intensive models — 70× the most efficient systems
1.2M
People's annual drinking water — equivalent freshwater demand from 700M daily queries
35,000
US homes — annual electricity equivalent when scaled to realistic global query volumes
Use the smallest, most efficient model appropriate for the task — reserve large models for complex reasoning
Craft specific, purposeful prompts rather than broad queries; shorter targeted prompts are significantly more efficient
Local models (LM Studio, Ollama) remove data centre load entirely for routine processing tasks
Do not treat AI sessions as unlimited — end them when done; running large sessions without purpose has a real cost
Prefer providers with credible renewable energy commitments for cloud AI use
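The figures above can be turned into a back-of-envelope calculator for the gap between heavy and efficient models. A rough sketch: the 33 Wh per long prompt and the 70× efficiency spread are the figures cited above; the 700M daily query volume matches the freshwater figure, and treating it as constant year-round is a simplifying assumption.

```python
# Back-of-envelope energy arithmetic from the figures cited above.
# 33 Wh per long prompt on the heaviest models; ~70x less on the most
# efficient systems. Constant daily query volume is a simplifying assumption.

WH_PER_LONG_PROMPT_HEAVY = 33.0                           # cited 2025 figure
WH_PER_PROMPT_EFFICIENT = WH_PER_LONG_PROMPT_HEAVY / 70   # ~0.47 Wh, per the 70x spread

def annual_kwh(queries_per_day: float, wh_per_query: float) -> float:
    """Annual energy in kWh for a given daily query volume."""
    return queries_per_day * wh_per_query * 365 / 1000

# 700M daily queries, as in the freshwater figure above
heavy = annual_kwh(700e6, WH_PER_LONG_PROMPT_HEAVY)
efficient = annual_kwh(700e6, WH_PER_PROMPT_EFFICIENT)
print(f"heavy models:     {heavy / 1e9:.1f} TWh/year")
print(f"efficient models: {efficient / 1e9:.1f} TWh/year")
```

The two printed totals differ by the same factor of 70 — which is why the first practice above (the smallest appropriate model) dominates the others in impact.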
01 — Accountability
You remain fully responsible
Researchers retain full accountability for the integrity of all AI-assisted work. Responsibility cannot be delegated to the tool — not even partially.
02 — Human contribution
Substantial human work is required
Oxford policy requires substantial human contribution. AI-assisted analysis must reflect your own interpretive engagement — not AI-generated conclusions submitted as your own.
03 — Transparency
Disclose all AI use
All substantive AI use must be disclosed: in publications, reports, ethics applications, and to supervisors. The tool used, its purpose, and the nature of its contribution should all be stated.
04 — Critical approach
Trust nothing uncritically
Approach every AI output with critical awareness — hallucinations, bias embedded in training data, and outputs that sound authoritative but drift beyond the source material all require vigilance.
05 — Data protection
GDPR obligations apply
Data protection law applies to AI use. Ethics approval must cover AI processing. Participants must have consented. The institutional licence supports, but does not replace, these requirements.
Oxford + — Mindful use
Keep your research in the lead
A sixth principle the AIML Competency Centre increasingly emphasises: not just following rules, but cultivating a genuinely thoughtful, questioning orientation. Use AI in service of your research questions — not the reverse.
Analogy
Using AI well is like using a powerful library — the size of the collection does not determine the quality of your scholarship. What determines it is the care with which you select, read, and interpret what you find. A researcher who reads everything uncritically, accepts whatever confirms their first instinct, and never sits with ambiguity has not used the library well. Neither has the researcher who uploads everything to an AI and waits for meaning to emerge.