# pipes

scheduled AI agents that run on your screen data. paste into claude code, cursor, or any AI coding tool.

paste this into your AI coding tool:

create a screenpipe pipe that [DESCRIBE WHAT YOU WANT].

## what is screenpipe?

screenpipe is a desktop app that continuously records your screen (OCR) and audio (transcription).
it runs a local API at http://localhost:3030 that lets you query everything you've seen, said, or heard.

## what is a pipe?

a pipe is a scheduled AI agent defined as a single markdown file: ~/.screenpipe/pipes/{name}/pipe.md
every N minutes, screenpipe runs a coding agent (like pi or claude-code) with the pipe's prompt.
the agent can query your screen data, write files, call external APIs, send notifications, etc.

## pipe.md format

the file starts with YAML frontmatter, then the prompt body:

```markdown
---
name: my-pipe
schedule: every 30m
lookback: 30m
enabled: true
---

Your prompt instructions here...
```
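a filled-in sketch for a note-taking use case (the name, schedule, and prompt body are all placeholders to adapt; the `{{…}}` placeholders are the template variables described below):

```markdown
---
name: activity-log
schedule: every 60m
lookback: 60m
enabled: true
---

query http://localhost:3030/search for OCR and audio results between
{{start_time}} and {{end_time}}, summarize what I worked on in 2-3
bullet points, and append the summary to ~/notes/{{date}}.md
```

here schedule and lookback match, so each run covers exactly the window since the previous run.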

## screenpipe search API

the agent queries screen data via the local REST API:

```shell
curl "http://localhost:3030/search?limit=20&content_type=all&start_time={{start_time}}&end_time={{end_time}}"
```

### query parameters
- q: text search query (optional)
- content_type: "ocr" | "audio" | "ui" | "all" | "ocr+audio" | "ocr+ui" | "audio+ui"
- limit: max results (default 20)
- offset: pagination offset
- start_time / end_time: ISO 8601 timestamps
- app_name: filter by app (e.g. "chrome", "cursor")
- window_name: filter by window title
- browser_url: filter by URL (e.g. "github.com")
- min_length / max_length: filter by text length
- speaker_ids: filter audio by speaker IDs
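the parameters combine as ordinary query-string fields. as a sketch, a prompt could tell the agent to pull only browser activity on github.com from the pipe's time window (the `q` value and app name are illustrative):

```shell
curl "http://localhost:3030/search?q=pull+request&content_type=ocr&app_name=chrome&browser_url=github.com&limit=50&start_time={{start_time}}&end_time={{end_time}}"
```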

### OCR results (what was on screen)
each result contains:
- text: the OCR'd text visible on screen
- app_name: which app was active (e.g. "Arc", "Cursor", "Slack")
- window_name: the window title
- browser_url: the URL if it was a browser
- timestamp: when it was captured
- file_path: path to the video frame
- focused: whether the window was focused
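a minimal sketch of shaping one OCR hit into a log line with jq. the JSON below is a hand-written sample built from the fields listed above, not a captured API response (the exact response envelope may differ):

```shell
# hand-written sample OCR result (fields from the list above)
ocr_hit='{"text":"fix failing search test","app_name":"Cursor","window_name":"main.rs","timestamp":"2024-01-15T10:30:00Z","focused":true}'

# format as "[timestamp] app: text" for a daily log
echo "$ocr_hit" | jq -r '"[\(.timestamp)] \(.app_name): \(.text)"'
```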

### audio results (what was said/heard)
each result contains:
- transcription: the spoken text
- speaker_id: numeric speaker identifier
- timestamp: when it was captured
- device_name: which audio device (mic or system audio)
- device_type: "input" (microphone) or "output" (system audio)

### UI events (accessibility data, macOS only)
query via:

```shell
curl "http://localhost:3030/ui-events?app_name=Slack&limit=50&start_time={{start_time}}&end_time={{end_time}}"
```
event types: text (keyboard input), click, app_switch, window_focus, clipboard, scroll

## template variables

these are replaced in the prompt before execution:
- {{start_time}}: ISO 8601 start (based on lookback)
- {{end_time}}: ISO 8601 end (current time)
- {{date}}: current date (YYYY-MM-DD)
- {{timezone}}: timezone abbreviation (e.g. PST)
- {{timezone_offset}}: UTC offset (e.g. -08:00)
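the substitution happens inside screenpipe before the agent runs; this sed line only illustrates the effect for a pipe with lookback: 30m run at 10:30 UTC (times are made up):

```shell
PROMPT='summarize my activity between {{start_time}} and {{end_time}}'

# what the agent actually receives after substitution:
echo "$PROMPT" \
  | sed -e 's/{{start_time}}/2024-01-15T10:00:00Z/' \
        -e 's/{{end_time}}/2024-01-15T10:30:00Z/'
```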

## secrets

store API keys in a .env file next to pipe.md (never in the prompt itself):

```shell
echo "API_KEY=your_key" > ~/.screenpipe/pipes/my-pipe/.env
```

then reference it in the prompt: source .env && curl -H "Authorization: Bearer $API_KEY" ...
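a runnable sketch of the same pattern, using a temp directory in place of ~/.screenpipe/pipes/my-pipe and a made-up key:

```shell
# stand-in for ~/.screenpipe/pipes/my-pipe
PIPE_DIR="$(mktemp -d)"

# write the secret once, outside the prompt
printf 'API_KEY=%s\n' 'sk-example-not-real' > "$PIPE_DIR/.env"

# what the prompt tells the agent to do before an authenticated call:
# source .env && curl -H "Authorization: Bearer $API_KEY" https://...
. "$PIPE_DIR/.env"
echo "key loaded: ${#API_KEY} chars"
```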

## after creating the file

```shell
bunx screenpipe pipe install ~/.screenpipe/pipes/my-pipe  # install
bunx screenpipe pipe enable my-pipe                       # enable
bunx screenpipe pipe run my-pipe                          # test once
bunx screenpipe pipe logs my-pipe                         # view logs
```

replace [DESCRIBE WHAT YOU WANT] with your use case — or pick an example below.

## examples

| name | what it does |
| --- | --- |
| obsidian sync | sync screen activity to your obsidian vault as daily logs |
| reminders | scan screen for action items → create Apple Reminders |
| idea tracker | surface startup ideas from your browsing + market trends |
| standup report | generate daily standup from yesterday's screen activity |
| time tracker | auto-log time to Toggl based on app usage |
| focus guard | alert if distracted for too long on social media |
| meeting notes | transcribe meetings + extract action items automatically |
| learning journal | track what you read & learned across the web |
| email drafter | draft follow-up emails from meeting context |
| commit logger | auto-generate end-of-day git commit summaries |

to use one, replace [DESCRIBE WHAT YOU WANT] in the prompt at the top with its description, then paste into claude code (or your AI coding tool).

pipes require screenpipe to be running locally.