Understanding your tools · Card 03

Local vs Cloud AI:
The Ethical Choice

Why where your data goes is an ethical question, not just a technical one — and how to choose the right tool for the right data.

Cloud AI
ChatGPT, Claude, Gemini, Copilot
  • Easy — no setup, browser-based
  • Data sent to third-party servers
  • May be used for model training
  • GDPR compliance requires careful assessment
  • Model versions can change without notice
  • Dependent on provider terms and pricing
Local AI
LM Studio, Ollama, AnythingLLM
  • Data never leaves your machine
  • No internet required during use
  • Full GDPR compliance far easier to demonstrate
  • Free and open-source models available
  • Pin exact model version — supports reproducibility
  • Complete control and transparency
🔑
The question is not "is cloud AI bad?" — it is "does this data require the stronger protection that only local can provide?" The sensitivity of your data, your participants' consent, and your ethics approval together determine the answer. See Card 02 for the full three-tier decision framework.
LM Studio
Best for beginners
A desktop app (Windows, Mac, Linux) with a point-and-click interface for downloading and running models locally. Free. Looks like a familiar chatbot.
Ollama
Open-source & flexible
Fully open-source (MIT licence). Terminal-based but pairs with graphical interfaces. Widest model support. Ideal for methodological transparency.
AnythingLLM
Talk to your documents
Connect your own PDFs, transcripts, and notes. The AI answers questions grounded in your materials (RAG). Works with Ollama or LM Studio.
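The document grounding AnythingLLM performs follows the standard retrieval-augmented generation (RAG) pattern: split your documents into chunks, score each chunk against the question, and pass only the best matches to the model as context. A minimal sketch of the retrieval step, using crude word-overlap scoring in place of the embedding similarity a real RAG system would use; all names and example texts here are illustrative, not AnythingLLM's actual internals:

```python
import re

def chunk(text, size=40):
    """Split a document into overlapping word-window chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

def score(question, chunk_text):
    """Crude relevance: count question words that appear in the chunk.
    A real system would compare embedding vectors instead."""
    q = set(re.findall(r"\w+", question.lower()))
    c = set(re.findall(r"\w+", chunk_text.lower()))
    return len(q & c)

def retrieve(question, documents, top_k=2):
    """Return the top_k most relevant chunks across all documents."""
    chunks = [c for doc in documents for c in chunk(doc)]
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:top_k]

# Illustrative research materials (invented for this sketch)
docs = [
    "Participant three described the ward as noisy and impersonal.",
    "Consent forms were updated in March to cover AI-assisted transcription.",
]
context = retrieve("What did participants say about the ward?", docs)
```

The retrieved chunks are prepended to the prompt, so the model answers from your materials rather than from its training data alone. This is also why RAG reduces, but does not eliminate, fabrication: the model can still misread the chunks it is given.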
Hardware reality — more accessible than you think
8 GB RAM laptop: Phi-3 Mini, Llama 3.2 3B — summarisation, Q&A, drafting, thematic code suggestions
16 GB RAM laptop: Llama 3.1 8B, Mistral 7B — most research tasks comfortably. Recommended minimum.
32 GB RAM or a GPU: Llama 3.1 70B — long documents, complex reasoning, large data sets
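The figures above follow from a simple rule of thumb: a locally run model needs roughly its parameter count times the bits per weight, divided by eight, plus some overhead for context. A hedged back-of-envelope check, assuming 4-bit quantisation (a common default in LM Studio and Ollama) and roughly 20% overhead; both figures are assumptions, not vendor specifications:

```python
def approx_ram_gb(params_billion, bits_per_weight=4, overhead=1.2):
    """Rough memory estimate for a quantised local model.

    Rule of thumb only: params * bits / 8, plus ~20% for context
    cache and runtime overhead. Actual usage varies with the
    quantisation scheme and context length.
    """
    return params_billion * bits_per_weight / 8 * overhead

for name, b in [("Llama 3.2 3B", 3), ("Llama 3.1 8B", 8), ("Llama 3.1 70B", 70)]:
    print(f"{name}: ~{approx_ram_gb(b):.1f} GB")
```

By this estimate a 3B model fits easily in 8 GB and an 8B model in 16 GB, while a 70B model at 4-bit lands well above 32 GB of system RAM alone, which is why the largest class realistically calls for a GPU, heavier quantisation, or more memory.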
01
Transparency
Disclose all AI use — in writing, analysis, transcription, or literature review. Your institution, ethics committee, and any publishers need to know.
02
Critical oversight of bias
LLMs embed biases from training data. Be alert to misreadings of marginalised perspectives, non-Western knowledge traditions, and under-represented voices.
03
Consent & participant rights
Processing participant data with AI may require updating ethics approval and participant information sheets. Cloud AI especially requires scrutiny under GDPR.
04
Accountability
You are responsible for your research. AI cannot bear responsibility for errors or harms. Human judgement cannot be delegated, only supported.
The interpretive caution
The interpretive circle — moving between parts and whole, between text and context, between pre-understanding and new understanding — is the heart of interpretive phenomenological method. AI tools can assist with the mechanical dimension of pattern-finding. But they cannot dwell in ambiguity. They cannot question their own assumptions. They cannot attend to what is not said.
The risk is not that AI will do the interpretive work badly. It is that its speed and apparent confidence may discourage the slow, uncertain, and productive dwelling that hermeneutic inquiry requires.
Use AI. But protect the slowness.
Analogy
The distinction between cloud and local AI is like the difference between sending your research notes to an external print shop and photocopying them yourself at home. The output looks the same; the data exposure is entirely different. And neither option removes the need for you to read and interpret what you printed.
← Card 02: Understanding NotebookLM Card 04: Using AI Responsibly →