
Your private AI that knows everything

AI assistant that sees your screen, understands your work, never sends data to the cloud. Runs 100% locally with Ollama or any local LLM.

[Diagram: your emails, docs, and chats stay secured on your computer; the AI runs there too, and your data never leaves your device.]


Cloud AI has problems

ChatGPT and Claude are powerful, but they come with tradeoffs:

1. Sending your screen content to OpenAI raises privacy concerns.
2. Corporate policies may prohibit sharing data with cloud services.
3. Cloud AI doesn't know what you're working on unless you paste in context manually.
4. You pay for API calls every time you ask a question.
5. No AI when you're offline.

100% local AI with desktop context

screenpipe captures your screen and feeds context to a local LLM running on your machine. Ask questions, search your history, and get help, all without an internet connection.
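
Under the hood, screenpipe exposes a local HTTP API on port 3030. As a sketch (assuming the /search endpoint and the default port; check your version's API docs), searching your captured screen text looks like this:

# Sketch: search captured screen text via screenpipe's local API
# (assumes the /search endpoint on the default port 3030)
curl "http://localhost:3030/search?q=error&content_type=ocr&limit=5"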

Complete privacy

Screen data and conversations never leave your computer. No cloud, no tracking.

Works offline

No internet required. Your AI works completely offline once set up.

Desktop aware

Ask 'what's this error about?' and it can search your screen history for context.

Your choice of model

Use Ollama, LM Studio, or any OpenAI-compatible local server.
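
For example, LM Studio serves an OpenAI-compatible API on port 1234 by default. A minimal smoke test, with "local-model" as a placeholder for whatever model you have loaded:

# Sketch: smoke-test an OpenAI-compatible local server
# (LM Studio's default port; "local-model" is a placeholder name)
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local-model",
    "messages": [{"role": "user", "content": "Reply with one word: ready?"}]
  }'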

How it works

1. Install Ollama

Download Ollama and pull a model. Llama 3.2 or Mistral work well for most tasks.

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model (choose one)
ollama pull llama3.2        # 3B params, fast
ollama pull mistral         # 7B params, balanced
ollama pull deepseek-r1:8b  # 8B params, good reasoning

# Verify it's running
ollama list

2. Install and configure screenpipe

Download screenpipe and point it to your Ollama instance in Settings.

# Download screenpipe from screenpi.pe
# Or install via CLI:
curl -fsSL https://screenpi.pe/install.sh | sh

# Start screenpipe
screenpipe

# In Settings → AI Provider:
# - Select "Ollama"
# - Model: llama3.2 (or your chosen model)
# - URL: http://localhost:11434

3. Ask anything

Use the screenpipe chat or integrate with your own app. The LLM has access to your screen context.

# Example: Query screenpipe API with context
curl -X POST "http://localhost:3030/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "What error did I see in the terminal?",
    "use_context": true
  }'
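
If your build doesn't expose a chat endpoint, you can wire the two local APIs together yourself. A minimal sketch, assuming screenpipe's /search returns text under data[].content.text (verify against your version's API docs) and that jq is installed:

# Sketch: pull recent screen text, then ask the local LLM about it
# (assumes /search returns text at .data[].content.text; requires jq)
CONTEXT=$(curl -s "http://localhost:3030/search?q=error&content_type=ocr&limit=3" \
  | jq -r '.data[].content.text')

# Build the Ollama request with jq so the context is safely escaped
jq -n --arg ctx "$CONTEXT" '{
    model: "llama3.2",
    prompt: ("Screen context:\n" + $ctx + "\n\nWhat error did I see in the terminal?"),
    stream: false
  }' \
  | curl -s http://localhost:11434/api/generate -d @- \
  | jq -r '.response'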

Code examples

Ollama models for different use cases

Choose based on your hardware and needs

# Fast responses (4GB RAM)
ollama pull llama3.2          # 3B, general purpose
ollama pull phi3:mini         # 3.8B, Microsoft's efficient model

# Balanced (8GB RAM)
ollama pull mistral           # 7B, great for coding
ollama pull deepseek-r1:8b    # 8B, strong reasoning

# Maximum quality (32GB+ RAM)
ollama pull llama3.1:70b      # 70B, best quality (needs roughly 40GB+ RAM)
ollama pull deepseek-r1:32b   # 32B, excellent reasoning

# Coding focused
ollama pull codellama         # Optimized for code
ollama pull deepseek-coder    # Strong at programming
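
Before pointing screenpipe at a model, you can sanity-check it straight from the terminal:

# One-shot prompt: the model answers and exits
ollama run llama3.2 "Reply with one word: ready?"

# Show which models are loaded and their memory footprint
ollama ps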

Verify your setup

Check that everything is working

# Check Ollama is running
curl http://localhost:11434/api/tags

# Check screenpipe is running
curl http://localhost:3030/health

# Test a simple query
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Hello, how are you?",
  "stream": false
}'
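
As a last check, confirm screenpipe is actually capturing. A sketch assuming the /search endpoint; an empty "data" array means nothing has been recorded yet:

# Sketch: confirm at least one screen capture exists
curl "http://localhost:3030/search?content_type=ocr&limit=1"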

Key benefits

Zero data sent to cloud services
No API costs or subscriptions
Works without internet connection
Full control over the AI model
Corporate and compliance friendly


Get your private AI assistant

AI power without privacy compromise.