Understanding your tools · Card 03
Local vs Cloud AI:
The Ethical Choice
Why where your data goes is an ethical question, not just a technical one — and how to choose the right tool for the right data.
The fundamental distinction
Cloud AI
ChatGPT, Claude, Gemini, Copilot
- Easy — no setup, browser-based
- Data sent to third-party servers
- May be used for model training
- GDPR compliance requires careful assessment
- Model version changes without notice
- Dependent on provider terms and pricing
Local AI
LM Studio, Ollama, AnythingLLM
- Data never leaves your machine
- No internet required during use
- Full GDPR compliance far easier to demonstrate
- Free and open-source models available
- Pin exact model version — supports reproducibility (see the sketch below)
- Complete control and transparency
🔑
The question is not "is cloud AI bad?" — it is "does this data require the stronger protection that only local can provide?" The sensitivity of your data, your participants' consent, and your ethics approval together determine the answer. See Card 02 for the full three-tier decision framework.
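Two claims in the comparison above can be checked directly: that your data stays on your machine, and that you can pin the exact model version you used. The sketch below is a minimal Python illustration, assuming Ollama is installed, serving on its default local port 11434, and already has a model pulled; the model name, prompt, and output filename are illustrative, not prescriptive.

```python
import json
import requests

OLLAMA = "http://localhost:11434"  # Ollama's default local address

# Send a prompt to the locally served model. The request goes to
# localhost, so nothing is transmitted to a third-party server.
reply = requests.post(
    f"{OLLAMA}/api/generate",
    json={
        "model": "llama3.1:8b",  # illustrative; use whichever model you pulled
        "prompt": "Summarise the key themes in the following passage: ...",
        "stream": False,
    },
    timeout=300,
)
print(reply.json()["response"])

# Record exactly which model build produced the output. /api/tags lists
# installed models; each digest uniquely identifies one model version.
models = requests.get(f"{OLLAMA}/api/tags", timeout=10).json()["models"]
with open("model_provenance.json", "w") as f:
    json.dump(
        [{"name": m["name"], "digest": m.get("digest")} for m in models],
        f,
        indent=2,
    )
```

Saving the digest alongside your outputs means a reader can rerun your analysis with the very same model build, something a cloud service that changes its model version without notice cannot guarantee.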
Three local tools — no coding required
- LM Studio: a desktop app with a familiar chat interface; browse, download, and run open models entirely through the GUI
- Ollama: runs models locally with a single command and exposes a local API that other apps and scripts can build on
- AnythingLLM: a desktop app that adds document chat on top of a local model, so you can query your own PDFs and transcripts
Hardware reality — more accessible than you think
- 8 GB RAM laptop: Phi-3 Mini, Llama 3.2 3B — summarisation, Q&A, drafting, thematic code suggestions
- 16 GB RAM laptop: Llama 3.1 8B, Mistral 7B — most research tasks, comfortably. Recommended minimum.
- 32 GB RAM or a GPU: Llama 3.1 70B — long documents, complex reasoning, large data sets
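To see which tier your own machine falls into, a few lines of Python will report installed RAM. This is a rough sketch assuming the cross-platform psutil package is installed (pip install psutil), with thresholds that simply mirror the tiers above.

```python
import psutil

# Total physical RAM in gigabytes (the OS reserves some of this).
ram_gb = psutil.virtual_memory().total / 1024**3

# Map onto the tiers above. Treat the boundaries as guidance, not limits.
if ram_gb >= 32:
    tier = "32 GB or GPU tier: Llama 3.1 70B-class models"
elif ram_gb >= 16:
    tier = "16 GB tier: Llama 3.1 8B, Mistral 7B (recommended minimum)"
elif ram_gb >= 8:
    tier = "8 GB tier: Phi-3 Mini, Llama 3.2 3B"
else:
    tier = "under 8 GB: local models will struggle; try smaller quantised builds"

print(f"~{ram_gb:.0f} GB RAM detected: {tier}")
```

Remember that the operating system and other open applications also consume RAM, so treat the boundaries as guidance rather than guarantees.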
Four ethical principles for all AI use in research
01
Transparency
Disclose all AI use — in writing, analysis, transcription, or literature review. Your institution, ethics committee, and any publishers need to know.
02
Critical oversight of bias
LLMs embed biases from training data. Be alert to misreadings of marginalised perspectives, non-Western knowledge traditions, and under-represented voices.
03
Consent & participant rights
Processing participant data with AI may require updating ethics approval and participant information sheets. Cloud AI especially requires scrutiny under GDPR.
04
Accountability
You are responsible for your research. AI cannot bear responsibility for errors or harms. Human judgement cannot be delegated, only supported.
A specific caution for interpretive researchers
The interpretive caution
The interpretive circle — moving between parts and whole, between text and context, between pre-understanding and new understanding — is the heart of interpretive phenomenological method. AI tools can assist with the mechanical dimension of pattern-finding. But they cannot dwell in ambiguity. They cannot question their own assumptions. They cannot attend to what is not said.
The risk is not that AI will do the interpretive work badly. It is that its speed and apparent confidence may discourage the slow, uncertain, and productive dwelling that hermeneutic inquiry requires.
Use AI. But protect the slowness.
Analogy
The distinction between cloud and local AI is like the difference between sending your research notes to an external print shop and photocopying them yourself at home. The output looks the same; the data exposure is entirely different. And neither option removes the need for you to read and interpret what you printed.