screenpipe

pipe store

AI agents that run on your screen data. install in one click — just copy the prompt and paste into claude code.

32 pipes available · all free · open source

featured

meeting notes

every 30m
featured

transcribe meetings + extract action items automatically

audio

daily digest

every 4h
featured

end-of-day summary of everything you worked on

screen · audio

time tracker

every 1h
featured

auto-log time based on app usage

screen · ui events

all pipes (32)

meeting notes

every 30m
featured

transcribe meetings + extract action items automatically

audio

daily digest

every 4h
featured

end-of-day summary of everything you worked on

screen · audio

obsidian sync

every 1h

sync screen activity to your obsidian vault as daily logs

screen · audio

reminders

every 30m

scan screen for action items → create Apple Reminders

screen · audio

standup report

daily

generate daily standup from yesterday's screen activity

screen

time tracker

every 1h
featured

auto-log time based on app usage

screen · ui events

focus guard

every 10m

alert if distracted for too long on social media

screen

idea tracker

every 4h

surface startup ideas from your browsing + market trends

screen

learning journal

every 2h

track what you read and learned across the web

screen

email drafter

every 1h

draft follow-up emails from meeting context

audio

toggl sync

every 30m

auto-sync time entries to Toggl Track from screen activity

screen · ui events
Notion

notion sync

every 1h

push daily activity summaries to a Notion database

screen · audio
Linear

linear issues

every 1h

auto-create Linear issues from bugs and tasks on screen

screen · audio
Slack

slack digest

every 2h

summarize Slack channels you've been reading

screen
Google Calendar

google calendar sync

every 1h

auto-log actual time spent to Google Calendar

screen · audio
GitHub

github PR context

every 1h

auto-add development context to your GitHub PRs

screen
HubSpot

hubspot CRM sync

every 1h

auto-log meeting notes and call context to HubSpot

screen · audio
Jira

jira sync

every 1h

auto-update Jira tickets based on your coding activity

screen
Salesforce

salesforce sync

every 1h

auto-log calls and meeting notes to Salesforce

screen · audio
Intercom

intercom context

every 30m

enrich Intercom tickets with screen context and customer history

screen · audio
Obsidian

obsidian daily notes

every 1h

auto-generate daily notes in your Obsidian vault from screen activity

screen · audio
Todoist

todoist auto-tasks

every 30m

detect action items from screen activity → create Todoist tasks

screen · audio
Figma

figma design log

every 2h

track design decisions and iterations from Figma sessions

screen · audio
Sentry

sentry bug context

every 30m

attach developer screen context to Sentry error reports

screen
Discord

discord digest

every 2h

summarize Discord servers and channels you've been following

screen
Stripe

stripe revenue context

every 4h

correlate Stripe dashboard activity with your work context

screen
Telegram

telegram summary

every 2h

summarize Telegram chats and channels from screen activity

screen
Asana

asana sync

every 1h

auto-update Asana tasks based on your screen activity

screen · audio
Datadog

datadog incident context

every 15m

add developer screen context to Datadog incidents

screen · audio
Confluence

confluence auto-docs

every 2h

auto-generate Confluence pages from meetings and decisions

screen · audio
Apple Shortcuts

apple shortcuts trigger

every 10m

trigger Apple Shortcuts from screen events and activity patterns

screen · audio
Zapier

zapier webhook

every 10m

fire Zapier webhooks from screen events to connect 7000+ apps

screen · audio

create your own

master prompt — paste into any AI coding tool
create a screenpipe pipe that [DESCRIBE WHAT YOU WANT].

## what is screenpipe?

screenpipe is a desktop app that continuously captures your screen (text extracted via accessibility APIs, OCR fallback) and audio (transcription).
it runs a local API at http://localhost:3030 that lets you query everything you've seen, said, or heard.

## what is a pipe?

a pipe is a scheduled AI agent defined as a single markdown file: ~/.screenpipe/pipes/{name}/pipe.md
every N minutes, screenpipe runs a coding agent (like pi or claude-code) with the pipe's prompt.
the agent can query your screen data, write files, call external APIs, send notifications, etc.

## pipe.md format

the file starts with YAML frontmatter, then the prompt body. only schedule and enabled are required:

---
schedule: every 30m
enabled: true
---

Your prompt instructions here...

schedule supports: "every 30m", "every 2h", "daily", cron ("0 */2 * * *"), or "manual".

## context header

before execution, screenpipe prepends a context header to the prompt with:
- time range (start/end ISO 8601 timestamps based on the schedule interval)
- current date, timezone
- screenpipe API base URL (http://localhost:3030)
- output directory (./output/)

the AI agent uses this context to query the right time range. no template variables needed — just write plain instructions.

## screenpipe search API

the agent queries screen data via the local REST API:

curl "http://localhost:3030/search?limit=20&content_type=all&start_time=<ISO8601>&end_time=<ISO8601>"

### query parameters
- q: text search query (optional)
- content_type: "vision" | "audio" | "input" | "accessibility" | "all" | "vision+audio+input" | "vision+input" | "audio+input"
- limit: max results (default 20)
- offset: pagination offset
- start_time / end_time: ISO 8601 timestamps
- app_name: filter by app (e.g. "chrome", "cursor")
- window_name: filter by window title
- browser_url: filter by URL (e.g. "github.com")
- min_length / max_length: filter by text length
- speaker_ids: filter audio by speaker IDs
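
if you prefer building the query in code rather than with curl, a minimal sketch (the helper name `search_url` is hypothetical; the endpoint and parameter names are the ones listed above):

```python
from urllib.parse import urlencode

BASE = "http://localhost:3030"

def search_url(**params) -> str:
    """Build a /search URL, dropping any parameters left as None."""
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{BASE}/search?{query}"

url = search_url(
    q="standup",
    content_type="vision",
    app_name="Slack",
    start_time="2024-01-15T09:00:00Z",
    end_time="2024-01-15T17:00:00Z",
    limit=20,
)
# fetch with e.g. urllib.request.urlopen(url) while screenpipe is running
```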

### vision results (what was on screen)
each result contains:
- text: the extracted text visible on screen
- app_name: which app was active (e.g. "Arc", "Cursor", "Slack")
- window_name: the window title
- browser_url: the URL if it was a browser
- timestamp: when it was captured
- file_path: path to the video frame
- focused: whether the window was focused

### audio results (what was said/heard)
each result contains:
- transcription: the spoken text
- speaker_id: numeric speaker identifier
- timestamp: when it was captured
- device_name: which audio device (mic or system audio)
- device_type: "input" (microphone) or "output" (system audio)

### accessibility results (accessibility tree text)
each result contains:
- text: text from the accessibility tree
- app_name: which app was active
- window_name: the window title
- timestamp: when it was captured

### input results (user actions)
query via: curl "http://localhost:3030/ui-events?app_name=Slack&limit=50&start_time=<ISO8601>&end_time=<ISO8601>"
event types: text (keyboard input), click, app_switch, window_focus, clipboard, scroll
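
as a sketch of what an agent might do with these results, the snippet below tallies vision records by app_name. the sample records only mimic the fields listed above — check the actual /search payload shape (results may be nested under a wrapper key) before relying on this:

```python
from collections import Counter

# sample data shaped like the vision-result fields documented above
sample_results = [
    {"app_name": "Cursor", "window_name": "main.rs",  "timestamp": "2024-01-15T10:00:00Z"},
    {"app_name": "Cursor", "window_name": "lib.rs",   "timestamp": "2024-01-15T10:05:00Z"},
    {"app_name": "Slack",  "window_name": "#general", "timestamp": "2024-01-15T10:10:00Z"},
]

# which apps dominated this batch of captures
by_app = Counter(r["app_name"] for r in sample_results)
print(by_app.most_common())  # [('Cursor', 2), ('Slack', 1)]
```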

## secrets

store API keys in a .env file next to pipe.md (never in the prompt itself):
echo "API_KEY=your_key" > ~/.screenpipe/pipes/my-pipe/.env
reference in prompt: source .env && curl -H "Authorization: Bearer $API_KEY" ...

## after creating the file

install: bunx screenpipe pipe install ~/.screenpipe/pipes/my-pipe
enable:  bunx screenpipe pipe enable my-pipe
test:    bunx screenpipe pipe run my-pipe
logs:    bunx screenpipe pipe logs my-pipe

replace [DESCRIBE WHAT YOU WANT] with your use case. the AI will create the pipe.md file for you.

pipes require screenpipe running locally.