

computer use AI SDK
screenpipe captures your computer screen and audio and lets you install hundreds of AI agents that use this data
AI App Store
Discover powerful plugins built by the community
search
Free
search, review, summarize, and get specific answers to specific questions over your digital life context: audio keyword filtering, window/app filtering, speaker filtering (we identify the people you speak to 24/7), etc.
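Under the hood, this pipe is driven by Screenpipe's local search API, which you can also script against directly. A minimal sketch, assuming the default API at `http://localhost:3030` and a `/search` endpoint taking `q`, `content_type`, and `limit` parameters (check https://docs.screenpi.pe for the exact schema):

```typescript
// Sketch: query the local Screenpipe search API directly.
// The port, endpoint, and parameter names below reflect Screenpipe's
// documented defaults but should be verified against the current docs.

function buildSearchUrl(
  base: string,
  q: string,
  contentType: "ocr" | "audio",
  limit = 10,
): string {
  const params = new URLSearchParams({
    q,
    content_type: contentType,
    limit: String(limit),
  });
  return `${base}/search?${params.toString()}`;
}

async function search(q: string): Promise<unknown> {
  const res = await fetch(buildSearchUrl("http://localhost:3030", q, "ocr"));
  if (!res.ok) throw new Error(`screenpipe returned ${res.status}`);
  return res.json();
}

// Example: search("invoice").then(console.log);
```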
data-table
Free
visualize your data in a table; use AI to turn your 24/7 recordings into tables
obsidian
$20
turn your screen into a living knowledge base with AI: analyze your screen/mic activity in real time and write logs to build a CRM, market research, ideas, user personas, etc. local LLM first
https://github.com/user-attachments/assets/7bf48aae-a739-4f2c-b8c6-8a0df081a200
https://github.com/user-attachments/assets/e4c115d9-51cf-4870-aec4-4df4743d2d02
rewind
$20
scroll back in time, select a time range, hit cmd+k, and ask AI anything about your screen activity or audio conversations
https://github.com/user-attachments/assets/77e7ba2c-be41-491a-9d88-36aed6213050
identify-speakers-v2
Free
allows you to teach AI to assign names to voices, which are then used everywhere else in screenpipe (data, apps, interfaces, ...)
example-pipe
Free
# screenpipe playground

a flexible playground for displaying, testing, and exploring components with their associated code, documentation, and ai prompts.

## components listed

- health status check
- latest UI record
- latest OCR record
- latest audio transcription
- screen streaming
- audio streaming

## features

- **interactive component display**: view rendered components in action
- **code inspection**: examine the full source code of each component
- **raw output**: see the raw api responses and data
- **ai prompt visibility**: view the prompts and context used to generate components
- **collapsible interface**: toggle component visibility for a cleaner workspace

## usage

the playground allows you to:

1. view rendered components in their intended state
2. inspect the raw output from api calls
3. study the complete component code
4. examine the ai prompts and context used to generate components

## component structure

each playground card includes:

- component title and collapsible interface
- tabs for different views (rendered output, raw output, code, ai prompt)
- copy functionality for sharing prompts and context

## getting started

1. install this pipe from the UI and play with it
2. follow the docs to create your own pipe (it will create this app) (https://docs.screenpi.pe/docs/plugins)
3. modify code from the ready-to-use-examples directory
meeting
$20
The AI notepad for people in back-to-back meetings. The meeting pipe takes your raw meeting recordings and makes them awesome: the world's first meeting assistant that works without internet (screenpipe embedded AI or local AI, or using Anthropic, OpenAI, etc.). 100% of the other meeting assistants leak your whole life and company to US data centers and governments.
forget-me-not
Free
# Smart Reminder Plugin for Screenpipe

A smart reminder system that analyzes your app usage patterns and provides timely reminders for unfinished tasks or important follow-ups.

## Features

- **Real-time App Usage Monitoring**: Tracks your application switches and window changes
- **AI-Powered Analysis**: Uses AI to detect patterns and suggest reminders
- **Smart Notifications**: Intelligently batches notifications to avoid interruption
- **Voice-Based Reminders**: Generate reminders from voice input
- **Flexible AI Provider Support**: Works with:
  - Ollama (recommended with llama2 or llama3)
  - OpenAI-compatible APIs
  - Native Ollama integration

## Getting Started

### Prerequisites

- Screenpipe installed and running
- Node.js 18 or higher
- One of the following AI providers:
  - Ollama (recommended)
  - OpenAI API access
  - Other OpenAI-compatible APIs

### Installation

1. Clone this repository:
   ```bash
   git clone [repository-url]
   cd forget_me_not
   ```
2. Install dependencies:
   ```bash
   npm install
   ```
3. Start the development server:
   ```bash
   npm run dev
   ```

## Configuration

### AI Provider Setup

#### Using Ollama (Recommended)

1. Install Ollama from [ollama.ai](https://ollama.ai)
2. Pull the recommended model:
   ```bash
   ollama pull llama3
   ```
3. Start the Ollama service

The plugin will automatically connect to Ollama running on `http://localhost:11434`.

## More Features

### Smart App Tracking

- Monitors app switches and window changes
- Detects brief app interactions
- Identifies interrupted tasks
- Processes voice commands for reminder creation

### AI Analysis

- Analyzes usage patterns every 2 minutes (configurable)
- Detects potential forgotten tasks
- Considers both app names and window titles

### Reminder Management

- Add, complete, and dismiss reminders
- View pending and completed tasks
- Smart batching of notifications

## Advanced Configuration

### Analysis Settings

- **Analysis Frequency**: Configure how often AI analyzes your activity (1-10 minutes)
- **Window Title Analysis**: Smart detection of meaningful window titles

### Performance Optimization

- Efficient activity batching
- Smart payload size management
- Automatic cleanup of old data

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Acknowledgments

- Built with [Screenpipe](https://screenpipe.com)
- Uses AI models from [Ollama](https://ollama.ai)
- UI components from [shadcn/ui](https://ui.shadcn.com)
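The Ollama connection described above boils down to a single REST call against `http://localhost:11434`. A hedged sketch of what such a call can look like, using Ollama's standard `/api/generate` endpoint with `stream: false` (the prompt, helper names, and reminder logic here are illustrative, not the plugin's actual internals):

```typescript
// Sketch: ask a local Ollama model to suggest a reminder from an activity log.
// Uses Ollama's public REST API (/api/generate); everything else is illustrative.

interface OllamaResponse {
  response: string; // the model's full completion when stream is false
}

function buildPayload(model: string, activityLog: string[]): string {
  return JSON.stringify({
    model,
    prompt:
      "These app switches look unfinished; suggest one reminder:\n" +
      activityLog.join("\n"),
    stream: false, // return a single JSON object instead of a token stream
  });
}

async function suggestReminder(activityLog: string[]): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildPayload("llama3", activityLog),
  });
  const data = (await res.json()) as OllamaResponse;
  return data.response;
}
```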
MirorAI
Free
# MirrorAI - Know Yourself Better

MirrorAI is an intelligent agent that tracks your digital footprint to help you gain deeper insights into your behaviors, habits, and patterns.

## Features

- **OCR Screen Analysis**: Captures and analyzes text content from your screen
- **Gemini AI Integration**: Powerful AI assistant to answer questions about captured content
- **Ad Tracker Dashboard**: Monitors ads you encounter to understand your interests and desires
- **Pattern Recognition**: Identifies recurring behaviors in your digital activities
- **Habit Monitoring**: Recognizes repetitive tasks like late-night YouTube sessions or content preferences
- **Memory Storage**: Securely saves your activity patterns for retrieval-augmented generation (RAG) with an LLM

## How It Works

MirrorAI observes and learns from your digital behaviors to create a personalized understanding of who you are:

1. **Ad Tracking**: Analyzes ads you see to understand your interests and desires
2. **Habit Monitoring**: Identifies repetitive tasks and behaviors
3. **Memory Storage**: Securely stores your activity patterns
4. **Insight Generation**: Combines all data points for comprehensive understanding

## Getting Started

1. Clone this repository:
   ```
   git clone https://github.com/ayoub0030/MirorAI.git
   ```
2. Install dependencies:
   ```
   npm install
   ```
3. Set up environment variables in `.env`:
   ```
   GEMINI_API_KEY=your_gemini_api_key_here
   ```
4. Run the development server:
   ```
   npm run dev
   ```
5. Open [http://localhost:3000](http://localhost:3000) in your browser

## Technology Stack

- Next.js
- TypeScript
- Tailwind CSS
- Google Gemini AI API
- OCR (Optical Character Recognition)

## Why MirrorAI?

MirrorAI helps you achieve greater self-awareness by:

- Discovering blind spots in your online behaviors
- Identifying recurring patterns that shape your digital experience
- Measuring progress towards your goals with quantifiable metrics

## License

MIT
notion
$20
turn your screen into a living knowledge base with AI: analyze your screen/mic activity in real time and write logs to build a CRM, market research, ideas, user personas, etc. local LLM first
https://github.com/user-attachments/assets/394a48ca-c7dc-4a69-b8bd-a55082c62698
https://github.com/user-attachments/assets/e4c115d9-51cf-4870-aec4-4df4743d2d02
obsidian-dev
Free
turn your screen into a living knowledge base with AI: analyze your screen/mic activity in real time and write logs to build a CRM, market research, ideas, user personas, etc. local LLM first
https://github.com/user-attachments/assets/e4c115d9-51cf-4870-aec4-4df4743d2d02
ai-interview-coach
$9.99
# AI Interview Coach

Simulates a real interview environment by combining body language tracking (eye contact, posture, and hand movements) with an evaluation of answer quality based on audio transcription. It guides the user through the interview process and provides a final evaluation.

## Demo

Check out the demo video on YouTube: https://www.youtube.com/watch?v=GokPYYGrF5g

## Technologies Used

- **Next.js**
- **Screenpipe**
- **OpenAI**
- **Deepgram**
- **MediaPipe**

## Features

- **Real-Time Interaction:** Captures and processes user audio with Screenpipe and OpenAI, enabling a dynamic conversation simulation.
- **Body Language Analysis:** Monitors eye movement, hand gestures, and posture using MediaPipe to provide detailed feedback on body language.
- **Performance Metrics:** Generates comprehensive reports highlighting strengths and areas for improvement, helping users refine their interview techniques.
- **Immersive Interview Simulation:** Creates a realistic, interactive environment that mimics live interview scenarios, perfect for job seekers and professionals looking to enhance their presentation skills.

## Getting Started

### Prerequisites

- Screenpipe installed
- OpenAI API key set in Screenpipe

### Installation

1. **Clone the repository:**
   ```bash
   git clone https://github.com/yourusername/betterview.git
   ```
2. **Install dependencies:**
   ```bash
   bun install
   ```
3. **Run the development server:**
   ```bash
   npm run dev
   ```

Open http://localhost:3000 in your browser to view the application.
sflow-v1
Free
# Screenflow

Screenflow is a powerful web application that captures, analyzes, and optimizes your digital activity, built entirely on top of Screenpipe. It provides valuable insights into your productivity patterns, context switching, and focus periods, and can automatically identify job postings during your browsing sessions.

## Features

### Session Analysis

- **Daily Pulse Dashboard**: Visualize your productivity metrics, focus periods, and context switching patterns.
- **Context Flow**: Understand how you move between different applications and tasks throughout your day.
- **Time Distribution**: See where your digital time is being spent across applications and websites.

### Job Intelligence

- **Automatic Job Post Detection**: Identifies job postings as you browse LinkedIn, X, YC, Wellfound, and other job sites.
- **Structured Job Data**: Extracts and organizes key details like company, location, requirements, and salary information.
- **Session Overview**: Summarizes job browsing activity with actionable insights.

### Productivity Insights

- **Focus Tracking**: Measures your sustained attention periods and identifies your most productive times.
- **Context Group Analysis**: Groups related activities to understand your workflow patterns.
- **Productivity Score**: Quantifies your productive time with detailed breakdowns.

## How It Works

Screenflow operates through a three-step process:

1. **Capture**: Records your screen activity during browsing sessions using Screenpipe.
2. **Analyze**: Processes the captured Screenpipe data using AI to extract insights, detect patterns, and identify job postings.
3. **Present**: Displays the analyzed data through intuitive visualizations and dashboards.

## Architecture

Screenflow is built on a modern and highly popular tech stack:

- **Frontend**: Next.js with React, TanStack Query for data fetching, and shadcn/ui components.
- **Styling**: Tailwind CSS with a midnight theme for elegant dark mode support.
- **Database**: SQLite with Drizzle ORM for efficient data storage and retrieval.
- **AI Processing**: Integrates with the Claude API and DeepSeek API for advanced content analysis.

## Getting Started

### Prerequisites

- Node.js (v18 or higher)
- npm or yarn
- API keys for Claude and DeepSeek (for AI processing capabilities)

### Installation

1. Clone the repository:
   ```bash
   git clone https://github.com/Lokendra-sinh/screenflow.git
   cd screenflow
   ```
2. Install dependencies:
   ```bash
   bun install
   # or npm install
   ```
3. Set up environment variables. Create a `.env` file in the root directory with the following variables:
   ```
   ANTHROPIC_API_KEY=your_claude_api_key
   DEEPSEEK_API_KEY=your_deepseek_api_key
   ```
4. Start the development server:
   ```bash
   bun dev
   # or yarn dev
   ```
5. Open [http://localhost:3000](http://localhost:3000) in your browser to see the application.

## Usage

### Starting a Session

1. Navigate to the "Record" tab
2. Click the "Start Screenpipe" button to begin capturing your activity
3. Browse normally - Screenpipe works in the background to record your session
4. When finished, click "Stop Screenpipe" to end the session

### Analyzing Sessions

1. Go to the "Sessions" tab to see all your recorded sessions
2. Click on a session to view detailed analytics, including:
   - Daily Pulse dashboard with productivity metrics
   - Context Flow visualization
   - Time Distribution charts
   - Job Intelligence (if job postings were detected)

### AI Search (Coming Soon)

Natural language querying of your session data will be available in a future update, so you'll be able to search "Give me a list of all the YC startups with cracked founders" or "Find me all the jobs in SF".

## Project Structure

```
screenpipe/
├── app/              # Next.js app directory
│   ├── api/          # API routes
│   ├── sessions/     # Session pages
│   └── ...
├── components/       # React components
│   ├── ui/           # Shadcn UI components
│   ├── context-flow/ # Context visualization
│   └── ...
├── db/               # Database configuration and models
│   ├── schema.ts     # Drizzle schema definitions
│   └── index.ts      # Database connection setup
├── lib/              # Utility functions
├── providers/        # React context providers
├── public/           # Static assets
├── styles/           # Global styles
└── types/            # TypeScript type definitions
```

### Code Style

This project uses ESLint and Prettier for code formatting. Run the linter before committing:

```bash
bun lint
```

## Acknowledgements

- [Screenpipe](https://screenpi.pe/)
- [Screenpipe docs](https://docs.screenpi.pe/)
- [Nosu hackathons](https://www.sprint.dev/hackathons)
- [Next.js](https://nextjs.org/)
- [Tailwind CSS](https://tailwindcss.com/)
- [shadcn/ui](https://ui.shadcn.com/)
- [TanStack Query](https://tanstack.com/query)
- [Drizzle ORM](https://orm.drizzle.team/)
- [Anthropic Claude API](https://www.anthropic.com/)
- [DeepSeek API](https://deepseek.com/)
notion-meets
$4.99
The AI notepad for people in back-to-back meetings. The meeting pipe takes your raw meeting recordings and makes them awesome **and also pushes them into your Notion account**.

## Demo video

https://github.com/user-attachments/assets/15ab6327-ba09-40c3-8a5e-e44e54d7d66c
momentum
Free
## Momentum

Momentum is a pipe/plugin for Screenpipe that tracks your app usage and screen time on a daily basis. It presents this data through beautiful graphs for easy visualization. Additionally, Momentum integrates AI support to provide:

- Usage summaries
- Personalized recommendations
- Actionable insights to help you optimize your screen time
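The daily app-usage aggregation a tracker like Momentum performs can be sketched in a few lines. The event shape and function name below are assumptions for illustration, not Momentum's real data model:

```typescript
// Sketch: fold per-app usage events into daily totals, ranked by time spent.
// The UsageEvent shape is an illustrative assumption.

interface UsageEvent {
  app: string;
  seconds: number;
}

function dailyTotals(events: UsageEvent[]): Array<[string, number]> {
  const totals = new Map<string, number>();
  for (const { app, seconds } of events) {
    totals.set(app, (totals.get(app) ?? 0) + seconds);
  }
  // Sort descending so the most-used app comes first
  return [...totals.entries()].sort((a, b) => b[1] - a[1]);
}

// dailyTotals([
//   { app: "Code", seconds: 300 },
//   { app: "Chrome", seconds: 120 },
//   { app: "Code", seconds: 60 },
// ]) → [["Code", 360], ["Chrome", 120]]
```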
focusfade
Free
# Focus Fade

Focus Fade is a productivity monitoring plugin for Screenpipe that helps users track and improve their focus during work sessions. It uses AI-powered analysis to provide real-time insights about your application usage and productivity patterns.

## Features

- **Real-time Activity Tracking**: Monitors active applications and window titles
- **Focus Session Management**: Start and stop focus sessions with detailed statistics
- **AI-powered Analysis**: Uses LLMs to analyze activity patterns and provide insights
- **Distraction Detection**: Identifies and scores potential distractions
- **Customizable Focus Tasks**: Set and track specific focus objectives
- **Desktop Notifications**: Receive alerts when distraction patterns are detected

## Prerequisites

- Node.js 18+
- Screenpipe Desktop App
- Ollama (optional, for local AI processing)

## Installation

1. Clone the repository:
   ```bash
   git clone
   cd learn-pipe
   ```
2. Install dependencies:
   ```bash
   bun install
   ```
3. Start the development server:
   ```bash
   bun dev
   ```

## Configuration

### Focus Settings

Customize your focus preferences in the settings:

- Default focus task
- Poll interval
- Distraction threshold

## Usage

1. Launch the Screenpipe desktop app
2. Start a focus session using the "Start Session" button
3. Set your current focus task
4. Monitor your activity in real-time
5. Review AI insights and distraction scores
6. End the session to save and analyze data

## Development

### Project Structure

```
learn-pipe/
├── src/
│   ├── app/        # Next.js pages and API routes
│   ├── components/ # React components
│   ├── lib/        # Utility functions and hooks
│   └── types/      # TypeScript type definitions
```

### Key Technologies

- Next.js 15
- React 19
- TypeScript
- Tailwind CSS
- shadcn/ui
- Screenpipe SDK

### Building for Production

```bash
bun run build
```

## Contributing

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## Support

For support, please contact the Screenpipe team or open an issue in the repository.
auto-todo-builder
Free
## Auto-Todo Builder

Auto-Todo Builder is an intelligent task extraction tool that automatically scans your screen for potential tasks and todos from various sources like emails, chat messages, browser content, and more. It compiles these tasks into a dynamic dashboard and can send you reminders to help you stay organized.

## Features

- **Automatic Task Detection**: Scans your screen content to identify potential tasks and todos
- **Multi-source Support**: Extracts tasks from emails, chat applications, browsers, and documents
- **Priority Detection**: Automatically assigns priority levels based on context and keywords
- **Deduplication**: Intelligently prevents duplicate tasks using advanced similarity detection
- **Customizable Settings**: Configure scan intervals, notification preferences, and more
- **Task Management**: Mark tasks as completed, pending, or cancelled
- **Filtering**: Filter tasks by source, priority, and status
- **Statistics Dashboard**: View completion rates and task distribution

## Installation

### Prerequisites

- Node.js 16.x or higher
- Next.js 13.x or higher
- Screenpipe desktop application installed and running

### Setup

1. Clone the repository:
   ```shellscript
   git clone https://github.com/yourusername/auto-todo-builder.git
   cd auto-todo-builder
   ```
2. Install dependencies:
   ```shellscript
   npm install
   ```
3. Create a `.env.local` file in the root directory with the following variables:
   ```plaintext
   # Optional: For enhanced AI-powered task extraction
   GROQ_API_KEY=your_groq_api_key
   ```
4. Start the development server:
   ```shellscript
   npm run dev
   ```
5. Open [http://localhost:3000](http://localhost:3000) in your browser

Video link: https://www.loom.com/share/b0df1b81703f487bbe896e1ec345f221?sid=87476f04-ca88-4934-8433-43dec49ac938

## Usage

### Getting Started

1. Make sure the Screenpipe desktop application is running
2. Open Auto-Todo Builder in your browser
3. The application will automatically start scanning your screen for tasks
4. Tasks will appear in the dashboard as they are detected

### Configuration

Click the "Settings" button in the top right corner to configure:

- **Scan Interval**: How frequently the application scans for new tasks
- **Notifications**: Enable/disable desktop notifications for new tasks
- **Auto-detection**: Enable/disable automatic task detection
- **Sources**: Select which applications to scan for tasks
- **Priority Keywords**: Customize keywords that determine task priority

### Task Management

- **Complete a Task**: Click the checkbox next to a task or use the dropdown menu
- **Cancel a Task**: Use the dropdown menu to mark a task as cancelled
- **Filter Tasks**: Use the filters at the top of the dashboard to filter by source, priority, or status
- **View Task Sources**: Switch to the "Data Sources" tab to see where tasks are coming from

## Troubleshooting

### Common Issues

- **Application shows "Disconnected"**: Make sure the Screenpipe desktop application is running
- **No tasks are being detected**: Check the Settings to ensure auto-detection is enabled
- **Application is stuck loading**: Try refreshing the page or restarting the Screenpipe application

### Screenpipe Connection

Auto-Todo Builder requires the Screenpipe desktop application to be running in order to scan your screen. If you're having connection issues:

1. Make sure Screenpipe is installed and running
2. Check that port 7777 is available and not blocked by a firewall
3. Verify that screen capture permissions are enabled for Screenpipe
4. Try restarting the Screenpipe application

## Advanced Configuration

### Task Detection Customization

You can customize how tasks are detected by modifying the priority keywords in the Settings. For example:

- **High Priority**: urgent, asap, immediately, critical
- **Medium Priority**: soon, important, needed
- **Low Priority**: whenever, low priority, eventually

### API Integration

Auto-Todo Builder can use the Groq API for enhanced task extraction. To enable this:

1. Obtain a Groq API key from [groq.com](https://groq.com)
2. Add the API key to your `.env.local` file as `GROQ_API_KEY`
3. Restart the application

## Acknowledgements

- [Screenpipe](https://github.com/mediar-ai/screenpipe) for screen content extraction
- [Next.js](https://nextjs.org/) for the application framework
- [shadcn/ui](https://ui.shadcn.com/) for UI components
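The "Disconnected" troubleshooting steps above amount to a reachability check against the Screenpipe port. A sketch, using port 7777 as these notes mention; the `/health` endpoint name is an assumption, so consult the Screenpipe docs for the actual route:

```typescript
// Sketch: probe the local Screenpipe service with a timeout, returning
// false if it is not running, the port is blocked, or the request times out.
// The /health route is an assumed endpoint name, not a confirmed one.

async function isScreenpipeUp(port = 7777, timeoutMs = 2000): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(`http://localhost:${port}/health`, {
      signal: controller.signal,
    });
    return res.ok;
  } catch {
    return false; // connection refused, firewall, or timeout
  } finally {
    clearTimeout(timer);
  }
}
```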
irs-agent
Free
# irs agent: an obtrusive ai screen-watcher for your tax documents

irs agent is an ai-powered agent that never takes a break. it constantly watches your screen to capture receipts, invoices, and all your tax documents as soon as they appear. by combining cutting-edge ocr, audio transcription, and ui event detection, this agent extracts key details like amounts, currencies, names, and timestamps, creating a complete record of your financial interactions.

> Created at the lofi hackathon in SF, using local-first technology like pglite, screenpipe, and ollama.

## overview

irs agent silently observes your screen, capturing both visual and audio cues in real time using [Screenpipe](https://docs.screenpi.pe) - an open-source screen and audio capture framework. Screenpipe provides 24/7 local media capture capabilities, ensuring all data stays private and secure on your machine. whether you're checking emails, browsing invoices, or finalizing a payment, it identifies and logs financial activities automatically. designed to be obtrusive, it ensures no tax document slips through unnoticed.

## requirements

- screenpipe
- ollama + phi4

## core features

- real-time monitoring that continuously grabs ocr, audio, and ui events
- an automated detector powered by openai models that extracts essential details from your financial documents
- comprehensive logging of every captured event in a secure local database for later review
- fully customizable settings, letting you adjust your openai api key and tailor prompt templates to your specific workflow

## getting started

1. **installation:** clone the repository and install the dependencies with your preferred package manager. make sure to configure your openai api key and other environment variables as needed.
2. **running the project:** start the next.js development server, and irs agent will immediately begin monitoring your screen and logging tax documents in real time.
3. **reviewing logs:** visit the financial activity tab to view detailed logs of every detected event and review the extracted financial details.

## contributing

contributions are welcome. if you have ideas to enhance irs agent's capabilities or performance, please open an issue or submit a pull request. make sure your code follows the project's style guidelines and includes the necessary tests.

## license

this project is open source. enjoy the seamless experience of managing your tax documents with irs agent, the ai watchdog that makes sure every receipt and invoice is captured.
productivity-party
Free
# Productivity Party

Welcome to **Productivity Party**, a Screenpipe plugin that makes remote work more engaging and motivating! Whether you're a solo hacker or part of a distributed team, stay connected, challenge yourself, and boost productivity in a fun, social way.

## How It Works

- **Track Your Focus**: Productivity scores are calculated based on your actual computer usage every 5 minutes, using Screenpipe's screen recording analysis.
- **Compete on the Leaderboard**: Your focus earns you points! See how you rank globally and push yourself to stay on track.
- **Stay Social**: Chat in real-time while you work, because staying productive doesn't mean being isolated.

## Tech Stack

- **Screenpipe** - Captures and scores your productivity
- **Supabase** - Handles leaderboard data and admin functionality
- **PartyKit** - Powers real-time chat and social interactions
- **Next.js** - Interactive UI and seamless front-end experience

## Features

- **Live Productivity Tracking**: Get scored based on your active screen time, updated every 5 minutes.
- **Real-Time Leaderboard**: Compete with friends and coworkers globally.
- **Integrated Social Chat**: Stay connected and share motivation while working.
- **Future Plans**: Mini-games, team challenges, and more fun ways to stay productive!

## Getting Started

1. Install dependencies: `npm install`
2. Start the development servers:
   - Next.js: `npm run dev`
   - PartyKit: `npm run partykit:dev`
   - Both: `npm run dev:all`
3. Open `/debug` for monitoring and debugging tools.

## Development

Check `CLAUDE.md` for detailed guidelines on:

- Build commands
- Code style
- Project structure
- Testing procedures

---

**Why I Built This**: Originally a hackathon project, Productivity Party was created for fun and to help remote workers stay engaged and productive. Now, it's evolving into a tool that blends work and social connection in a way that actually makes sense!
youtuber
Free
# screenpipe playground

A flexible playground for displaying, testing, and exploring components with their associated code, documentation, and ai prompts.

## features

- **interactive component display**: view rendered components in action
- **code inspection**: examine the full source code of each component
- **raw output**: see the raw api responses and data
- **ai prompt visibility**: view the prompts and context used to generate components
- **collapsible interface**: toggle component visibility for a cleaner workspace
- **youtube transcript analysis**: fetch, store, and analyze YouTube video transcripts using Gemini AI

## youtube video analysis with gemini AI

This project includes a powerful feature that allows you to:

1. **Extract YouTube video metadata** from OCR data captured by Screenpipe
2. **Fetch transcripts** from YouTube videos using both the YouTube-Transcript API and the Supadata API
3. **Analyze video content** using Google's Gemini AI model
4. **Ask questions** about video content through a RAG (Retrieval-Augmented Generation) chatbot interface

### Setup for YouTube and Gemini features

To use these features, you need to add the following API keys to your `.env` file:

```
YOUTUBE_API_KEY=your_youtube_api_key_here
SUPADATA_API_KEY=your_supadata_api_key_here
GEMINI_API_KEY=your_gemini_api_key_here
```

- Get a YouTube API key from the [Google Cloud Console](https://console.cloud.google.com/)
- Get a Supadata API key from [Supadata](https://www.supadata.io/) (optional)
- Get a Gemini AI API key from [Google AI Studio](https://ai.google.dev/)

### How to use the YouTube Transcript Analyzer

1. The application extracts YouTube video information from your screen using OCR
2. Click the "Transcript" button next to any video to view and save its transcript
3. In the transcript viewer, click "Analyze with Gemini AI" to open the chat interface
4. Ask questions about the video content, and the AI will analyze the transcript to provide answers

This RAG-based assistant uses the video transcript as its knowledge base, allowing for accurate and context-aware responses about video content.

## usage

the playground allows you to:

1. view rendered components in their intended state
2. inspect the raw output from api calls
3. study the complete component code
4. examine the ai prompts and context used to generate components

## component structure

each playground card includes:

- component title and collapsible interface
- tabs for different views (rendered output, raw output, code, ai prompt)
- copy functionality for sharing prompts and context

## getting started

1. install this pipe from the UI and play with it
2. follow the docs to create your own pipe (it will create this app) (https://docs.screenpi.pe/docs/plugins)
3. modify code from the ready-to-use-examples directory
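Step 1 of the analyzer (extracting video info from OCR text) hinges on recognizing YouTube URLs. A small, illustrative helper for pulling out the 11-character video ID; the regex and function name are assumptions, not this pipe's actual code:

```typescript
// Sketch: extract a YouTube video ID from a URL found in OCR text.
// Handles watch?v=, youtu.be/, shorts/, and embed/ style links.

function extractVideoId(url: string): string | null {
  const m = url.match(/(?:v=|youtu\.be\/|shorts\/|embed\/)([A-Za-z0-9_-]{11})/);
  return m ? m[1] : null;
}

// extractVideoId("https://www.youtube.com/watch?v=dQw4w9WgXcQ") → "dQw4w9WgXcQ"
// extractVideoId("https://example.com/") → null
```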
StudyTube
Free
# MirrorAI - Know Yourself Better

MirrorAI is an intelligent agent that tracks your digital footprint to help you gain deeper insights into your behaviors, habits, and patterns.

## Features

- **OCR Screen Analysis**: Captures and analyzes text content from your screen
- **Gemini AI Integration**: Powerful AI assistant to answer questions about captured content
- **Ad Tracker Dashboard**: Monitors ads you encounter to understand your interests and desires
- **Pattern Recognition**: Identifies recurring behaviors in your digital activities
- **Habit Monitoring**: Recognizes repetitive tasks like late-night YouTube sessions or content preferences
- **Memory Storage**: Securely saves your activity patterns for retrieval-augmented generation (RAG) with an LLM

## How It Works

MirrorAI observes and learns from your digital behaviors to create a personalized understanding of who you are:

1. **Ad Tracking**: Analyzes ads you see to understand your interests and desires
2. **Habit Monitoring**: Identifies repetitive tasks and behaviors
3. **Memory Storage**: Securely stores your activity patterns
4. **Insight Generation**: Combines all data points for comprehensive understanding

## Getting Started

1. Clone this repository:
   ```
   git clone https://github.com/ayoub0030/MirorAI.git
   ```
2. Install dependencies:
   ```
   npm install
   ```
3. Set up environment variables in `.env`:
   ```
   GEMINI_API_KEY=your_gemini_api_key_here
   ```
4. Run the development server:
   ```
   npm run dev
   ```
5. Open [http://localhost:3000](http://localhost:3000) in your browser

## Technology Stack

- Next.js
- TypeScript
- Tailwind CSS
- Google Gemini AI API
- OCR (Optical Character Recognition)

## Why MirrorAI?

MirrorAI helps you achieve greater self-awareness by:

- Discovering blind spots in your online behaviors
- Identifying recurring patterns that shape your digital experience
- Measuring progress towards your goals with quantifiable metrics

## License

MIT
memories
$20
Google Photos-like memories of your days: resurfacing and reminders of important information, etc.
Smart-Clippy
Free# Smart-Clippy: AI-Powered Clipboard Manager

Smart-Clippy is an intelligent clipboard manager that enhances your clipboard experience with AI-powered features. Built as a Screenpipe plugin, it integrates seamlessly with your workflow and offers advanced text processing capabilities.

## Features

- Beautiful, modern UI with light/dark mode support
- AI-powered text processing: text summarization, language translation, code formatting
- Clipboard history management
- Real-time search functionality
- Multiple AI provider support (Ollama & Nebius)
- Model selection for different tasks
- Secure API key management
- Responsive and animated UI components

## Tech Stack

- **Framework**: Next.js 15
- **UI Components**: Radix UI + Tailwind CSS
- **Styling**: Tailwind CSS with custom animations
- **State Management**: React Hooks + Local Storage
- **Animations**: Framer Motion
- **AI Integration**: Ollama & Nebius API
- **Development**: TypeScript, ESLint

## Getting Started

### Prerequisites

- Node.js 18+ or Bun runtime
- Screenpipe CLI installed
- (Optional) Ollama or Nebius API key for AI features

### Installation

1. Clone the repository:
```bash
git clone [repository-url]
cd screenpipe
```
2. Install dependencies:
```bash
bun install # or npm install
```
3. Start Screenpipe.
4. Run the development server:
```bash
bun dev # or npm run dev
```
5. Open [http://localhost:3000](http://localhost:3000) in your browser.

## AI Provider Setup

### Using Ollama

1. Install Ollama:
```bash
# macOS or Linux
curl -fsSL https://ollama.com/install.sh | sh
# Windows: download from https://ollama.com/download
```
2. Start the Ollama service:
```bash
ollama serve
```
3. Pull the required models:
```bash
ollama pull qwen2.5
```
4. In Smart-Clippy:
   - Select "Ollama" as your AI provider
   - Choose your preferred model from the dropdown
   - No API key is required, since Ollama runs locally

### Using Nebius

1. Sign up for a Nebius account at [https://nebius.ai](https://nebius.ai)
2. Get your API key:
   - Go to your Nebius dashboard
   - Navigate to the API Keys section
   - Create a new API key and copy it
3. In Smart-Clippy:
   - Select "Nebius" as your AI provider
   - Paste your API key in the settings
   - Choose your preferred model from the dropdown

### Using AI Features

1. **Text Summarization**: copy any text, select it from the clipboard history, and click the "Summarize" button; the AI generates a concise summary.
2. **Language Translation**: copy text in any language, select it from the clipboard history, and click the "Translate" button; the AI translates the text to English.
3. **Code Formatting**: copy a code snippet and select it from the clipboard history; the AI automatically detects and formats the code, and syntax highlighting is applied based on the language.

### Tips

- For best results with code formatting, use Ollama with the Qwen2.5 model
- Nebius provides better performance for language translation tasks
- You can switch between providers at any time
- Clear your clipboard history regularly for better performance
- Use the search function to quickly find past clipboard items

### Project Structure

```
screenpipe/
├── app/          # Next.js app directory
├── components/   # React components
│   ├── ui/       # Reusable UI components
│   └── ...       # Feature components
├── hooks/        # Custom React hooks
├── lib/          # Utility functions
├── public/       # Static assets
└── ...
```

## Contributing

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## Acknowledgments

- Built with [Screenpipe](https://docs.screenpi.pe)
- UI components from [shadcn/ui](https://ui.shadcn.com)
- Icons from [Lucide](https://lucide.dev)
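The Ollama path above needs no API key because everything runs against the local Ollama server. As an illustrative sketch (the helper name and prompt wording are ours, not the plugin's), the summarize action could build its request body for Ollama's `/api/generate` endpoint like this:

```typescript
// Illustrative sketch: build the JSON body for Ollama's local
// /api/generate endpoint. Helper name and prompt text are assumptions.
type OllamaRequest = { model: string; prompt: string; stream: boolean };

function buildSummarizeRequest(clip: string, model = "qwen2.5"): OllamaRequest {
  return {
    model,
    prompt: `Summarize the following clipboard text in 2-3 sentences:\n\n${clip}`,
    stream: false, // ask for one JSON response instead of a token stream
  };
}

// At runtime the pipe would POST this to the local Ollama server, e.g.:
// await fetch("http://localhost:11434/api/generate", {
//   method: "POST",
//   body: JSON.stringify(buildSummarizeRequest(text)),
// });
```

Swapping in the Nebius provider would change only where the request is sent and which API key accompanies it; the prompt-building step stays the same.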
interview-ai-coach
Free# AI Interview Coach Simulates a real interview environment by combining body language tracking (eye contact, posture, and hand movements) with an evaluation of answer quality based on audio transcription. It guides the user through the interview process and provides a final evaluation. ## Demo Check out the demo video on YouTube: [Watch the demo](https://www.youtube.com/watch?v=GokPYYGrF5g) ## Technologies Used - **Next.js** - **Screenpipe** - **OpenAI** - **Deepgram** - **MediaPipe** ## Features - **Real-Time Interaction:** Captures and processes user audio with Screenpipe and OpenAI, enabling a dynamic conversation simulation. - **Body Language Analysis:** Monitors eye movement, hand gestures, and posture using MediaPipe to provide detailed feedback on body language. - **Performance Metrics:** Generates comprehensive reports highlighting strengths and areas for improvement, helping users refine their interview techniques. - **Immersive Interview Simulation:** Creates a realistic, interactive environment that mimics live interview scenarios, perfect for job seekers and professionals looking to enhance their presentation skills. ## Getting Started ### Prerequisites - Screenpipe installed - An OpenAI API key configured in Screenpipe ### Installation 1. **Clone the repository:** ```bash git clone https://github.com/yourusername/betterview.git ``` 2. **Install dependencies:** ```bash bun install ``` 3. **Run the development server:** ```bash npm run dev ``` Open http://localhost:3000 in your browser to view the application.
eduDesk
Free# screenpipe playground A flexible playground for displaying, testing, and exploring components with their associated code, documentation, and ai prompts. ## features - **interactive component display**: view rendered components in action - **code inspection**: examine the full source code of each component - **raw output**: see the raw api responses and data - **ai prompt visibility**: view the prompts and context used to generate components - **collapsible interface**: toggle component visibility for a cleaner workspace - **youtube transcript analysis**: fetch, store, and analyze YouTube video transcripts using Gemini AI ## youtube video analysis with gemini AI This project includes a powerful feature that allows you to: 1. **Extract YouTube video metadata** from OCR data captured by Screenpipe 2. **Fetch transcripts** from YouTube videos using both YouTube-Transcript API and Supadata API 3. **Analyze video content** using Google's Gemini AI model 4. **Ask questions** about video content through a RAG (Retrieval-Augmented Generation) chatbot interface ### Setup for YouTube and Gemini features To use these features, you need to add the following API keys to your `.env` file: ``` YOUTUBE_API_KEY=your_youtube_api_key_here SUPADATA_API_KEY=your_supadata_api_key_here GEMINI_API_KEY=your_gemini_api_key_here ``` - Get a YouTube API key from the [Google Cloud Console](https://console.cloud.google.com/) - Get a Supadata API key from [Supadata](https://www.supadata.io/) (optional) - Get a Gemini AI API key from the [Google AI Studio](https://ai.google.dev/) ### How to use the YouTube Transcript Analyzer 1. The application extracts YouTube video information from your screen using OCR 2. Click the "Transcript" button next to any video to view and save its transcript 3. In the transcript viewer, click "Analyze with Gemini AI" to open the chat interface 4. 
Ask questions about the video content, and the AI will analyze the transcript to provide answers This RAG-based assistant uses the video transcript as its knowledge base, allowing for accurate and context-aware responses about video content. ## usage the playground allows you to: 1. view rendered components in their intended state 2. inspect the raw output from api calls 3. study the complete component code 4. examine the ai prompts and context used to generate components ## component structure each playground card includes: - component title and collapsible interface - tabs for different views (rendered output, raw output, code, ai prompt) - copy functionality for sharing prompts and context ## getting started 1. install this pipe from UI and play with it 2. follow docs to create your pipe (it will create this app) (https://docs.screenpi.pe/docs/plugins) 3. modify code from ready-to-use-examples directory
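The RAG flow described above can be pictured with a toy retrieval step: split the saved transcript into chunks and rank them by word overlap with the question before handing the top chunks to Gemini as context. The helpers below are an illustrative sketch, not the pipe's actual code:

```typescript
// Toy retrieval step for a transcript RAG chatbot: fixed-size word chunks,
// ranked by naive word overlap with the user's question. Function names
// and the scoring scheme are assumptions for illustration.
function chunkTranscript(text: string, size = 40): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  for (let i = 0; i < words.length; i += size) {
    chunks.push(words.slice(i, i + size).join(" "));
  }
  return chunks;
}

function topChunks(question: string, chunks: string[], k = 2): string[] {
  const q = new Set(question.toLowerCase().split(/\W+/).filter(Boolean));
  return chunks
    .map((c) => ({
      c,
      // score = how many of the chunk's words also appear in the question
      score: c.toLowerCase().split(/\W+/).filter((w) => q.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.c);
}
```

A production version would typically use embeddings rather than word overlap, but the retrieve-then-generate shape is the same.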
reddit-auto-posts
$20Easily grow your followers, market your product, or be useful. GPT-4o (or a local model) looks at your screen 24/7 and sends you emails with questions to post on Reddit based on your activity. https://github.com/user-attachments/assets/289d1809-6855-4336-807f-dd9ee7181324 #### quick setup 1. [Get an OpenAI API key](https://platform.openai.com/account/api-keys) 2. [Create an app-specific password](https://support.google.com/accounts/answer/185833?hl=en) in your Google account that will be used to send yourself emails 3. Configure the pipe in the app UI, save, enable, and restart screenpipe recording (you can configure it to receive either one email daily or several every x hours)
meeting-maestro
$20# Meeting Maestro - Your Meeting Assistant powered by `screenpipe` Meeting Maestro is a real-time meeting enhancement tool built on top of Screenpipe. It leverages Screenpipe's continuous multi-modal capture (screen, audio, and interaction events) to provide live transcription, intelligent mapping of pre-defined call goals, and dynamic AI-driven suggestions for follow-up questions, all during your meetings. https://github.com/user-attachments/assets/35a55eee-a54a-4c98-8a59-c415daafaf54 ## Features - **Real-Time Transcription:** Continuously transcribes meeting audio using Screenpipe's live capture. - **Pre-Defined Goals & Questions:** Automatically detects and maps answers to your preset call goals or questions. - **Dynamic Question Suggestions:** Uses AI (via pre-trained LLMs) to suggest context-aware follow-up questions in real time. - **Structured Data Capture:** Logs key meeting details for review and export. - **Local-First Processing:** Ensures low-latency performance and robust privacy by processing all data locally. - [x] Real-time transcription - [x] Organize real-time notes into their respective questions - [x] Identify the current question being discussed and automatically mark it as in progress - [ ] Recommend the next question to discuss (helpful for a long list) - [ ] Recommend new questions based on what is being discussed - Use Framer Motion to show the questions being re-ordered automatically based on the AI-recommended order. - Start and end times for notes: the end time is required by default; the start time allows a range UI. - Paste in contents from another doc to auto-format as questions (title, description, order). - Question templates (you can create and import multiple into one call). - Click on a note to highlight where in the transcript it was said (preferably a range). ## How It Works 1. **Capture:** Screenpipe records your screen and audio continuously. 2. **Transcription:** Live transcription converts speech into text in real time. 3. **Analysis:** An AI component analyzes the transcript to match pre-defined questions and generate context-sensitive question suggestions. 4. **Action:** The UI displays real-time suggestions and logs captured answers for post-meeting review. ## Installation ```bash ``` The AI notepad for people in back-to-back meetings: the meeting pipe takes your raw meeting recordings and makes them awesome. https://github.com/user-attachments/assets/8838c562-5bae-41cd-bc56-3c1785b21fc1
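The "identify the current question" step can be pictured with a small keyword-overlap sketch. The real pipe uses an AI component for this matching; the function, type, and scoring below are ours, for illustration only:

```typescript
// Illustrative sketch: given the ordered pre-defined questions and the
// latest transcript window, mark the best keyword match "in progress"
// and everything before it "done". Names and scoring are assumptions.
type QuestionStatus = "pending" | "in progress" | "done";

function matchQuestions(questions: string[], transcript: string): QuestionStatus[] {
  const words = new Set(transcript.toLowerCase().split(/\W+/).filter(Boolean));
  let best = -1;
  let bestScore = 0;
  questions.forEach((q, i) => {
    const score = q.toLowerCase().split(/\W+/).filter((w) => words.has(w)).length;
    if (score > bestScore) {
      bestScore = score;
      best = i;
    }
  });
  return questions.map((_, i) =>
    best === -1 ? "pending" : i < best ? "done" : i === best ? "in progress" : "pending",
  );
}
```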
ReelBreak
Free# ReelBreak **ReelBreak** is a Screenpipe pipe designed to help you track and manage your usage of short-form video platforms like **YouTube Shorts**, **Instagram Reels**, and **TikTok**. With daily usage goals, session analysis, and customizable intervention thresholds, ReelBreak empowers you to take control of your screen time and maintain a healthy digital balance. This project is built as a **Next.js application** and integrates seamlessly with the **Screenpipe** platform to monitor and analyze your activity in real time. --- ## Features - **Usage Tracking**: Monitor time spent on short-form video platforms. - **Daily Goal Setting**: Set and track a daily usage limit (default: 30 minutes). - **Session Analysis**: View detailed session breakdowns by date, including start/end times and platform usage. - **Intervention Alerts**: Receive desktop notifications when exceeding a customizable threshold (default: 15 minutes). - **Responsive Dashboard**: Visualize usage stats, platform breakdowns, and weekly trends with interactive charts. - **Dark Mode Support**: Enjoy a seamless experience with a toggleable dark theme. - **Settings Management**: Adjust goals and preferences via an intuitive settings page. --- ## Prerequisites - [Node.js (v18 or later)](https://nodejs.org/) - [Bun (optional, for faster builds)](https://bun.sh/) - [Screenpipe](https://screenpi.pe/) installed locally - Compatible OS: Windows, macOS, or Linux --- ## Installation ### 1. Clone the Repository ```bash git clone https://github.com/your-username/devishmittal-reelbreak.git cd devishmittal-reelbreak ``` ### 2. Install Dependencies Using **Bun** (recommended): ```bash bun install ``` ### 3. Run Locally Start the development server: ```bash bun dev ``` Open your browser and go to: [http://localhost:3000](http://localhost:3000) to access the dashboard. --- ## Usage - **Dashboard**: View your daily usage, current session, session count, and weekly trends. - **Sessions**: Explore detailed session data by selecting a date. - **Settings**: Customize your daily goal and intervention threshold, and manage preferences. - **Notifications**: Receive alerts when you exceed your intervention threshold (configurable). --- ## Acknowledgments - Built with **Next.js** and **Tailwind CSS** - Powered by **Screenpipe** for screen activity tracking - Thanks to the **open-source community** for inspiration and tools!
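The goal and threshold logic above is simple arithmetic over captured sessions. A minimal sketch, assuming a session shape with epoch-millisecond timestamps (the type and helper names are illustrative; the defaults mirror the README's 30-minute goal and 15-minute threshold):

```typescript
// Illustrative types and helpers for the daily-goal math.
type Session = { platform: string; startMs: number; endMs: number };

// Total minutes across the day's short-form sessions.
function minutesUsed(sessions: Session[]): number {
  return sessions.reduce((sum, s) => sum + (s.endMs - s.startMs), 0) / 60000;
}

// Fire a desktop notification once past the intervention threshold.
function shouldIntervene(minutes: number, thresholdMin = 15): boolean {
  return minutes >= thresholdMin;
}

// Daily goal check for the dashboard.
function goalExceeded(minutes: number, goalMin = 30): boolean {
  return minutes > goalMin;
}
```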
loom
$20this pipe uses the screenpipe API to merge chunks of video into a single Loom-style video, and you can use any LLM to create a summary of it!
tweetpipe
$9.5# Tweetpipe **Tweetpipe** is a cutting-edge social media growth tool that helps generate organic posts based on your desktop screen activity. Whether you're coding, teaching, or working on a project, **Tweetpipe** captures key moments and turns them into engaging content for platforms like **Twitter and Bluesky**. With **Tweetpipe**, you'll never run out of content ideas again! --- ## Demo **GitHub Source:** [Tweetpipe Repository](https://github.com/emee-dev/tweetpipe) **Watch Demo:** [YouTube](https://youtu.be/O8Wd7lx3ZcA) --- ## Installation ### 1. Clone the Repository ```bash git clone https://github.com/emee-dev/tweetpipe cd tweetpipe ``` ### 2. Install **Screenpipe** **Tweetpipe** relies on [Screenpipe](https://docs.screenpi.pe/docs/getting-started) for screen activity tracking. - Install **Screenpipe**: [Installation Guide](https://github.com/mediar-ai/screenpipe) - Install and enable the **Pipe SDK** by following the instructions in [`pipe/README.md`](pipe/README.md) --- ## Running the Next.js App This project is built on **Next.js**, providing a seamless desktop application experience. 1. Install dependencies and run the app: ```bash # Install dependencies npm install # Run the app npm run dev ``` --- ## Features - **Automated Content Generation**: Uses a cron job to analyze your screen activity, summarize key insights, and send them to an LLM for post generation. - **One-Click Sharing**: Easily share your generated posts directly from the app.
Always-Online
$7.99# Lazy-Waba: Advanced Chat Automation Platform *A sophisticated AI-powered automation system that monitors your messaging applications, analyzes conversation context, and generates contextually-appropriate responses - all while operating autonomously in the background.* ## Core Technical Architecture ### Perception Layer - **Real-time OCR Monitoring:** Leverages ScreenPipe's vision capabilities to capture and analyze screen content without requiring API access - **Intelligent Pattern Recognition:** Implements sophisticated algorithms to differentiate between new messages and existing content - **Multi-Platform Compatibility:** Simultaneously monitors both WhatsApp Desktop and Discord applications with platform-specific optimization ### Intelligence Processing - **Dual AI Provider Integration:** - Local inference via Ollama for privacy-focused, offline operation - Cloud processing via Nebius for enhanced performance on complex queries - **Conversation Context Management:** Maintains conversation history and semantic understanding across sessions - **Vector Database Integration:** Utilizes SQLite with vector extensions for efficient similarity search and knowledge retrieval ### Execution Pipeline - **Pixel-Perfect Automation:** Uses ScreenPipe's automation API for cross-platform compatibility and detection-resistant operation - **Human-like Interaction Simulation:** Randomized typing patterns and interaction delays that mimic authentic human behavior - **Application Process Management:** Programmatically launches and controls chat applications through native OS integrations ## Advanced Features - **Conversation Flow Analysis:** Uses AI to detect conversation patterns and respond with appropriate tone and content - **Initial Greeting Detection:** Automatically identifies when to start conversations vs. 
continue existing threads - **AI Preset Management:** Customize response styles and personalities through configurable AI presets - **Health Monitoring System:** Self-diagnostics to ensure all components function properly - **Comprehensive Activity Logging:** Detailed event tracking for troubleshooting and operational insight - **Responsive Development UI:** Real-time visual feedback of system state and operations - **OCR Visual Debugging:** View exactly what the system "sees" to fine-tune recognition parameters ## Technical Implementation Details - **Next.js Frontend:** Modern React-based interface with server components for optimal performance - **Tailwind CSS Styling:** Utility-first styling framework for consistent design language - **TypeScript Throughout:** Full type safety across the entire codebase - **Custom React Hooks:** Modular architecture with hooks for AI providers (`use-ollama.tsx`, `use-nebius.tsx`), health monitoring, and system state - **WebSocket Communication:** Real-time bidirectional communication with the ScreenPipe backend - **Tauri Integration (Beta):** Native desktop application capabilities for enhanced performance ## Technical Limitations & Considerations - **Input Simulation:** Limited to pixel-level interactions (mouse movement, clicking, typing) - **Window Positioning:** Requires consistent application window placement for reliable automation - **Timing Calibration:** May require adjustment based on system performance characteristics - **Recognition Accuracy:** Dependent on screen resolution and text clarity ## Getting Started 1. **Installation:** Install this pipe from UI and play with it or clone this repo: ```bash git clone https://github.com/Ankur2606/AutoRespond-AI cd AutoRespond-AI ``` 2. **Configuration:** Follow the documentation to create your pipe (will create this app): https://docs.screenpi.pe/plugins 3. 
**Backend Setup:** Run the ScreenPipe rust server (For Windows): ```bash iwr get.screenpi.pe/cli.ps1 | iex screenpipe.exe ``` 4. **Frontend Setup:** ```bash # Navigate to Next.js workspace cd AutoRespond-AI # Install dependencies bun install # Start development server bun dev ``` 5. **Application Configuration:** - Configure AI providers in settings - Position chat applications according to guidelines - Test automation with the built-in diagnostic tools ## Extensibility & Advanced Usage - **Custom AI Providers:** Extend beyond default providers by implementing the provider interface - **Response Templates:** Create and save commonly used response patterns - **Conversation Rules:** Define trigger conditions and special handling for specific conversation scenarios - **Scheduled Operation:** Configure operation hours and automatic mode switching - **Multi-Language Support:** Works with any language supported by your configured AI models --- *Lazy-Waba: Sophisticated automation for the digitally overwhelmed professional.* ---
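The "new messages vs. existing content" distinction described in the perception layer can be approximated by diffing consecutive OCR snapshots; lines present now but absent from the previous frame are candidate new messages. A toy sketch (the real system's heuristics are richer, and the function name is ours):

```typescript
// Illustrative OCR diff: treat trimmed, non-empty lines that did not
// appear in the previous snapshot as newly arrived chat messages.
function newLines(prevOcr: string, currOcr: string): string[] {
  const prev = new Set(
    prevOcr.split("\n").map((l) => l.trim()).filter(Boolean),
  );
  return currOcr
    .split("\n")
    .map((l) => l.trim())
    .filter((l) => l.length > 0 && !prev.has(l));
}
```

Only the diffed lines would then be fed to the AI provider, which keeps prompts small and avoids re-answering old messages.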
smarttrend
$20# **SmartTrend: Your Twitter Engagement Assistant** **SmartTrend** is an AI-powered Twitter assistant that helps you discover trending topics, generate meaningful replies, and boost your engagement effortlessly. Whether you want to grow your following, maintain active discussions, or simply stay relevant, SmartTrend makes it easy by analyzing your timeline, profile, and interactions to suggest optimized replies. --- ## **Key Features** - **Intelligent Suggestions:** Analyzes your timeline to find tweets that align with your interests and suggests personalized replies. - **Dynamic Frequency Control:** Adjusts how often it scans, analyzes, and generates suggestions based on your preferences. - **Adaptive Writing Style:** Matches your tone, grammar, and formatting style, from casual to professional. - **Engagement Filters:** Focuses on high-impact tweets with options to prioritize verified accounts, popular hashtags, or your followers. --- ## **How It Works** 1. **Timeline Analysis:** Uses advanced scraping to extract tweets without API limits. 2. **AI-Powered Insights:** Leverages OpenAI for reply suggestions based on your profile and interactions. 3. **Local Data Storage:** Keeps everything private by storing data locally. --- Engage smarter, not harder!
desktop-commander
$5.99# DesktopCommander A modern desktop automation tool combining computer control with smart clipboard management.  ## Tech Stack ### Frontend - **Framework**: Next.js 14 - **Language**: TypeScript - **Styling**: Tailwind CSS - **UI Components**: - shadcn/ui (Built on Radix UI) - Lucide Icons - Sonner (Toast notifications) ### Computer Control - **Screenpipe**: For mouse/keyboard control and system automation - **API Integration**: REST API communication ### State Management - React Hooks - Local Storage for persistence ### Development Tools - **Package Manager**: npm/pnpm - **Development Server**: Next.js dev server - **Build Tool**: Next.js build system ## Core Features 1. **Computer Control** - Mouse movement and clicks - Text input automation - Command-based interface 2. **Smart Clipboard** - Automatic clipboard monitoring - History management - Quick copy functionality - Google Translate integration ## Getting Started 1. Install dependencies: ```bash npm install ``` 2. Start the development server: ```bash npm run dev ``` 3. Start Screenpipe: ```bash screenpipe serve ``` ## Using AI Features 1. Get your Nebius API key: - Sign up at https://nebius.ai - Navigate to API Keys section - Create a new API key 2. In the app: - Click the settings icon in the Clipboard tab - Enter your Nebius API key - Start using the AI features! ## Commands - `type [text]` - Types the specified text - `move [x] [y]` - Moves mouse to coordinates - `click` - Clicks at current position ## Requirements - Node.js 18+ - Screenpipe installed and running - Modern web browser - Nebius API key for AI features ## License MIT
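The command grammar above (`type [text]`, `move [x] [y]`, `click`) is small enough to parse with a few regular expressions before dispatching to Screenpipe's control API. A sketch, assuming the types shown (they are illustrative, not the app's actual definitions):

```typescript
// Illustrative parser for DesktopCommander's three commands.
type Command =
  | { kind: "type"; text: string }
  | { kind: "move"; x: number; y: number }
  | { kind: "click" };

function parseCommand(input: string): Command | null {
  const trimmed = input.trim();
  if (trimmed === "click") return { kind: "click" };
  const move = trimmed.match(/^move\s+(-?\d+)\s+(-?\d+)$/);
  if (move) return { kind: "move", x: Number(move[1]), y: Number(move[2]) };
  const typed = trimmed.match(/^type\s+(.+)$/s);
  if (typed) return { kind: "type", text: typed[1] };
  return null; // unknown command; the UI would surface an error
}
```

The parsed `Command` object would then be translated into the corresponding mouse/keyboard call.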
noter
$20# Ultimate Note Taker ## Description Ultimate Note Taker is a powerful and user-friendly application designed to help you take, organize, and manage your notes efficiently. Whether you're a student, professional, or just someone who loves to jot down ideas, this tool is perfect for you. The inspiration for this project came from my own personal hurdle: I didn't like taking notes during lectures, while watching videos, or sometimes just in general. This hackathon with Screenpipe, an application that captures contextual data from laptops/desktops like screen data and speaker audio, helped me tackle this hurdle. Using Screenpipe's app and SDK, I was able to develop an AI-powered application that generates notes based on your OCR data or real-time audio. But that's not all: you can also summarize notes by topic, search or delete notes, and export notes as txt files. There is much more I would love to add to this project, such as exporting notes to external document applications like Google Docs and Notion, automating the task with the click of a button. I wasn't able to get to this because Screenpipe hasn't added UI context capturing and input control to the Windows application. Nonetheless, I am looking forward to it and everything else that Screenpipe aspires to build. 
## Tech Stack - **Frontend:** Next.js - **Backend:** Bun - **Database:** Dexie.js - **Styling:** Tailwind CSS, ShadCN - **Technologies:** Screenpipe - **AI Models:** GPT-4o from ScreenPipe Cloud ## Features - **AI-Generated Notes:** Uses GPT-4o to generate notes from OCR data or audio data - **Note Management:** Uses Dexie.js to store notes locally, so users always have access to them - **Organization:** Notes are organized and categorized by tags - **Search & Filter Notes:** With a local database, users can search notes by tags - **No Need for Authentication or Authorization:** Since everything is stored locally, users' data stays with them and they no longer have to deal with authentication - **Summarizer:** Uses GPT-4o to generate a summarized note of all notes with the chosen tag ## How to Try It Out 1. Clone the repository: ```bash git clone https://github.com/Kish170/summarizer_pipe ``` 2. Navigate to the project directory: ```bash cd summarizer_pipe ``` 3. Install dependencies: ```bash bun install ``` 4. Start the development server: ```bash bun dev ``` 5. Open your browser and go to `http://localhost:3000` to see the application in action. ## Credits - **Project Lead:** Kishan Rajagunathas - **Contributors:** ScreenPipe - **Special Thanks:** CEO of ScreenPipe, Louis Beaumont and Co-Founder of ScreenPipe, Matthew Diakonow
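The tag-based search and filter described above is plain data filtering once notes are in memory. A minimal sketch with an illustrative note shape (the app itself stores notes in Dexie.js/IndexedDB; the type and helper here are ours):

```typescript
// Illustrative note shape and case-insensitive tag filter.
type Note = { id: number; text: string; tags: string[] };

function notesByTag(notes: Note[], tag: string): Note[] {
  const wanted = tag.toLowerCase();
  return notes.filter((n) => n.tags.some((t) => t.toLowerCase() === wanted));
}
```

The summarizer feature would then concatenate the filtered notes' text and send it to the model as one prompt.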
language-tutor-main
$5.99# Language Tutor Assistant A real-time language learning assistant that analyzes your screen content and provides contextual language help using AI.  ## Overview Language Tutor Assistant is a Next.js application that uses Screenpipe to capture your screen content while you're learning a language. It analyzes what you're seeing, detects the language context, and provides helpful suggestions, corrections, and explanations to enhance your language learning experience. ## Features - **Real-time Screen Analysis**: Captures and analyzes your screen content as you learn - **Language Context Detection**: Automatically identifies source and target languages - **Learning Platform Recognition**: Detects popular language learning platforms like Duolingo, Babbel, etc. - **Adaptive Assistance**: Provides different types of help based on your learning context: - Grammar corrections and explanations - Alternative expressions - Vocabulary insights - Cultural context - **Interactive Help**: Ask specific questions about what you're learning - **Multiple Languages**: Supports English, French, Spanish, German, Italian, and more ## Prerequisites - Node.js 18 or later - Screenpipe service running locally (typically on port 3030) - Groq API key for AI language assistance ## Environment Variables The application requires the following environment variables:
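One plausible way the learning-platform recognition described above could work is matching the focused window/app title captured by Screenpipe against a list of known platform names. The list and helper below are assumptions for illustration, not the app's actual implementation:

```typescript
// Illustrative platform detector: case-insensitive substring match on
// the captured window title. The platform list is an assumption.
const PLATFORMS = ["duolingo", "babbel", "memrise", "busuu"];

function detectPlatform(windowTitle: string): string | null {
  const title = windowTitle.toLowerCase();
  return PLATFORMS.find((p) => title.includes(p)) ?? null;
}
```

Knowing the platform lets the assistant tailor its hints, e.g. grammar explanations for exercise-style apps versus vocabulary notes for flashcard apps.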
Jarvis
$9.99# Jarvis - Your AI Voice Assistant An interactive real-time voice assistant powered by AI, designed for fluid conversations, instant transcription, and dynamic responses. ## Demo https://github.com/user-attachments/assets/f721a3a3-736c-417a-8b2c-437d83cd1928 ## Key Features * **Real-time Voice Interaction** - Speak naturally with AI using Deepgram's advanced speech recognition. * **Live Audio Visualization** - Stunning waveform display while you speak. * **Instant Transcription** - See your words instantly converted into text. * **Dynamic AI Responses** - Engaging, context-aware replies powered by state-of-the-art language models. * **Modern UI** - A sleek, responsive interface built with Next.js and Framer Motion for a seamless experience. ## Tech Stack * **Frontend**: Next.js, TypeScript, Framer Motion * **Audio Processing**: Deepgram SDK, React Audio Visualize * **UI Components**: shadcn/ui * **Package Manager**: Bun ## Why Jarvis? With AI-powered voice interaction becoming the future, Jarvis is an all-in-one solution for seamless, real-time conversations. Whether you need a personal AI assistant, an interactive chatbot, or an enhanced user experience for your project, Jarvis delivers: * Optimized for productivity with instant AI-powered responses. * Hands-free interaction for accessibility. * Fully customizable and easy to integrate for developers. ## Get Started Today Experience the power of real-time AI voice interaction. Clone the repo, set up your API keys, and start building with Jarvis today.
don't see what you need? request a custom app from our developer community
Nortech.App
$50/mowe are a small startup and I'm paying out of my own pocket, so if I can test this and prove it beneficial to the rest of the team, we could pay $50 for each license
LifeOS
$10/moDownload Limitless Pin data and iMessage data so that it is queryable via screenpipe AI integrations
qiepian
$1/mocapture livestream video and organize the footage
Capacities Sync
$20/mointerface with Capacities personal knowledge base - write data to capacities objects - search capacities database
time-copilot
$50/mocustomizable and simple time tracking app that allows me to do this using a minimal UI: - I'd like to be able to keep track of how much time I spend on tasks such as: PR testing, github issues, coding, email (inbox in general, might also include linkedin, twitter), planning, cursor, specific URLs - ideally using AI with a custom prompt I can tweak, and it works with local models (I usually don't have anything to hide, but would rather not worry about streaming my screen to OpenAI)
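A tracker like the one requested above usually starts with cheap rules that map window titles or URLs to the user's categories, falling back to the custom AI prompt only for whatever the rules miss. The rule table and helper below are purely illustrative:

```typescript
// Illustrative rule layer: first-match wins; anything unmatched would be
// handed to the local LLM with the user's custom prompt.
const RULES: Array<[RegExp, string]> = [
  [/pull\/\d+|pull request/i, "PR testing"],
  [/issues?\/\d+/i, "github issue"],
  [/gmail|inbox|linkedin|twitter/i, "email & social"],
  [/cursor|vs ?code/i, "coding"],
];

function categorize(title: string): string {
  for (const [re, label] of RULES) {
    if (re.test(title)) return label;
  }
  return "uncategorized"; // fall back to the AI prompt for these
}
```

Keeping the rules local and editable matches the request: no screen data leaves the machine for the common cases.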
social media orchestration
$100/moI'd like other people to like my twitter posts on autopilot. To have an agent be able to connect to a user's computer and post likes / comments on specific posts based on the given task.
inline suggestions for any messenger
$150/mowhenever I open a messenger app, in any dialogue an AI agent should show me inline autocomplete text in a grey font, GitHub Copilot-inspired, but for any messenger. It should of course consider the history of the given conversation. The autocomplete text should be very concise; if I press TAB it fills in, but if I type myself it regenerates
Body posture spy
$25/motrack my body posture through the webcam at all times unless I'm in a meeting, running in the background, and send me an email report at the end of the day
focused mode AI emoji companion
$10/moI need a low battery* version of screenpipe that just tracks my focused windows/app, my raw activity and evaluates against my goal for the day. Whenever I'm distracted it shows me different emojis on the screen *just want to make sure it uses minimum fps and minimum settings not to kill my battery when i'm traveling
copy any desktop app to clipboard
$20/moI need a shortcut like 'Ctrl+A' -> 'Ctrl+C', but one that will work for any desktop app, e.g. my current window is 'Discord', I click this shortcut and copy everything from Discord, all visible elements with proper formatting, just text, the same way you can on a website
Why start recording now?
Others are already training their personal AI
Don't be left behind: every day without recording is lost knowledge for your future AI assistant
AI advantage gap
Those who start collecting personal data today will have years of advantage when AI becomes more capable
Scattered digital self
Valuable personal context trapped across apps and devices - making it hard to leverage without screenpipe
what our users are saying
Ollama Deepseek-R1 AI writes my Obsidian notes by watching my screen (open source)
by u/louis3195 in ollama
First place winner is @screen_pipe:
— Deepgram (@DeepgramAI) July 29, 2024
screenpipe + Facial Recognition with the Friend Smart Glasses and Real Time Voice querying with Deepgram Speech to Text. pic.twitter.com/IBk46FP7hR
Any Rewind.AI alternatives?
by u/Longjumping-Peanut14 in macapps
If you're looking for an open-source RAG solution for continuous screen and audio capture, take a look at screenpipe, described further in the quoted post. ETL for streaming desktop data is a significant challenge, and we're excited to provide this powerful functionality. https://t.co/XURTy9HZgz
— UnstructuredIO (@UnstructuredIO) August 8, 2024