Last updated: 14th June 2025

Discover powerful workflows and tools that help anyone work effectively with AI. Whether you’re building, debugging, researching, or planning - these battle-tested approaches will supercharge your productivity.

Avoid brain-rot ‘vibe coding’, where you blindly accept changes; that is not how you learn. Read and review every change an LLM makes, both to catch bugs and hallucinations and to become a better engineer as you build.

Essential Dev Setup: Your AI Power Stack

Get your environment configured for maximum AI-assisted productivity. While Cursor is the core powerhouse, pairing it with specialized tools significantly boosts productivity across the entire development lifecycle. Here’s a battle-tested stack:

ChatPRD

ChatPRD is the #1 AI tool for product managers and teams. Use it to rapidly draft high-quality Product Requirements Documents (PRDs), user stories, and other product documentation, freeing you up to focus on building.

TaskMaster: AI-Powered Task Management

TaskMaster acts as a mini project manager inside your editor, organizing and breaking down development tasks from your PRD:

  • Task Organization: Breaks down large features into manageable sub-tasks
  • Persistent Context: Maintains project context across sessions, keeping drift to a minimum on complex projects
  • Efficient Workflow: Research, expand, prioritise, and implement tasks seamlessly
  • Open-Source: Free and open-source, so the only cost is minimal LLM API usage.

Installation & Setup:

  1. Install the MCP server through Cursor’s settings
  2. Configure your API keys (supports various LLM providers)
  3. Start using TaskMaster commands to organise and manage your development tasks

For setup instructions, refer to TaskMaster’s GitHub repository and follow the Recommended Setup instructions for Cursor. Setup takes about a minute.
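
As a rough sketch, the MCP entry usually looks something like the following (the exact package name and required env keys may differ, so defer to their README):

```json
{
  "mcpServers": {
    "taskmaster-ai": {
      "command": "npx",
      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "your-key-here",
        "PERPLEXITY_API_KEY": "your-key-here"
      }
    }
  }
}
```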

CodeRabbit: AI-Powered Code Reviews

CodeRabbit provides free AI-driven code reviews directly in Cursor: line-by-line suggestions, PR summaries, and even code or documentation generation via its agentic chat. This helps catch bugs early, improve code quality, and speed up the review process.

  • Key Features:

    • Line-by-line AI code reviews & 1-click fixes
    • Automated PR summaries
    • Agentic chat for coding tasks & advice
    • Integrates with popular static analyzers and linters
    • Privacy-focused with SOC2 Type II certification
    • Free tier available with VS Code extension
  • Installation & Setup:

    • Install the CodeRabbit App on your Git platform (GitHub/GitLab), as long as it’s a public repo
    • Optionally, install the Extension for in-IDE reviews.

StageWise: Granular UI Updates via Browser

StageWise enables precise UI refinements directly from the browser.

  • Problem It Solves: When building UI applications with Cursor, you often get 80-90% of the desired UI, but the last 10-20% of refinements can be difficult to communicate textually.

  • Key Features:

    • Select specific UI elements directly in the browser
    • Communicate these elements precisely to Cursor
    • Make global changes across similar elements
    • Select multiple elements for complex layout adjustments
    • Execute sophisticated UI transformations that would be hard to describe in text
  • Installation & Setup:

    1. Go to Cursor extensions and search for “StageWise”
    2. Install the extension
    3. In any project where you want it to work, press Command+Shift+P and search for “StageWise”
    4. Select “Auto Setup StageWise Toolbar”
    5. Accept the setup in the chat panel
  • Usage:

    • Run your web project
    • Use the floating toolbar at the bottom of the page in the browser
    • Select UI elements by clicking on them
    • Enter prompts describing desired changes
    • The plugin communicates element details to Cursor automatically
    • Cursor makes the requested changes with precise context

BrowserMCP: Browser Automation for Testing & Tasks

BrowserMCP connects your AI editor to your browser, enabling automated testing and task automation directly from Cursor.

  • Problem It Solves: AI apps like Cursor can’t normally interact with web browsers, limiting their ability to test UI flows or automate repetitive web tasks. BrowserMCP bridges this gap.

  • Key Features:

    • Automated end-to-end testing of user flows
    • Form filling and data collection automation
    • Uses your existing browser profile (stays logged in)
    • Local execution for privacy and speed
    • Avoids basic bot detection with real browser fingerprint
    • Full browser control: navigate, click, type, screenshot, etc.
    • Get Console Logs, take screenshots, and more.
  • Installation & Setup:

    1. Install the Browser MCP Chrome Extension
    2. In Cursor, go to Settings → MCP
    3. Add this configuration to your MCP servers:
    {
      "mcpServers": {
        "browsermcp": {
          "command": "npx",
          "args": ["@browsermcp/mcp@latest"]
        }
      }
    }
    
    4. Click the refresh button next to “browsermcp” to reload
  • Usage Examples:

    • Test user flows: “Navigate to login, fill credentials, verify dashboard loads”
    • Automate tasks: “Fill out this form with test data 10 times”
    • Debug issues: “Take screenshots at each step of the checkout process”
    • The browser tools are available through Cursor’s agent mode

Operative.sh: AI-Driven Application Testing

Operative.sh enables your AI agent (e.g., Cursor) to test the web applications it helps build by automating browser interactions and debugging. This allows the AI to verify its own generated code and ensure functionalities like login pages, user flows, and edge cases are handled correctly. The tool is discussed in this YouTube video.

  • Problem It Solves: AI agents like Cursor typically cannot directly interact with browsers to test the frontend and end-to-end flows of web applications they generate. Operative.sh bridges this by allowing the AI to control a browser and validate application behavior.
  • Core Idea: The AI agent uses Operative.sh to “debug itself” by running tests on the code it has written. You instruct the AI in natural language about what to test, and it uses Operative.sh to perform these actions.
  • Key Components & Process:
    • Web Eval Agent: The primary tool that uses Playwright to emulate user interactions in a browser based on natural language tasks. It requires a URL for the app and a description of the task. Can run in headless mode.
    • Setup Browser State: A helper tool to handle initial login/authentication for sites, saving the session so repeated logins aren’t needed for subsequent tests.
    • Installation: Involves running an installer script from their website, which sets up dependencies (including Playwright) and integrates with the chosen editor (e.g., Cursor) by modifying its MCP config.json. An API key (free tier available with limits) is required and needs to be pasted during setup. A restart of the editor is crucial after installation.
    • Usage:
      1. Provide the AI with the URL of the web app (local or live).
      2. Describe the testing task in plain English (e.g., “test the login page,” “generate edge cases for login and test them”).
      3. The AI translates this into actions for Operative.sh.
      4. Tests run in a browser (visible or headless). A dashboard provides a live preview, status, console logs, and network requests.
      5. Results, including errors, logs, and screenshots, are sent back to the AI/editor.
      6. The AI can even generate test cases, write them to a file, and then have Operative.sh execute them, updating the file with pass/fail results.
  • Benefits:
    • Automates testing of UI and user flows directly from AI prompts.
    • No need to write detailed testing scripts manually; natural language is used.
    • Helps catch bugs and verify functionality, including edge cases, during development.
    • Provides detailed feedback with logs and screenshots.
  • Considerations:
    • AI-driven testing can be slower than traditional scripted tests.
    • The free tier for API keys has usage limits (e.g., 100 browser chat completion requests per month).
    • Some complex scenarios or those requiring specific unhandled configurations might not execute fully.
    • The video mentioned that out of 28 generated test cases, 9 were not executed due to various limitations, and 60% of the remainder passed. This highlights that it’s a powerful aid but not yet a perfect replacement for all testing.

vibe-tools: Your AI Team in the Editor

vibe-tools supercharges your AI agent by giving it a team of specialized assistants and advanced skills directly within your editor.

  • The AI Team:

    • Perplexity: For deep web research and answering questions.
    • Gemini 2.0: For understanding vast amounts of code (large-codebase context), search grounding, and reasoning.
    • Stagehand: For automating browser testing and debugging web apps (uses Anthropic or OpenAI).
    • OpenRouter: Provides access to the latest and greatest models via a unified API for certain commands.
  • New Skills: Beyond the team, vibe-tools adds capabilities like working with GitHub Issues/PRs, analyzing YouTube videos, and generating documentation for external dependencies.

  • Benefit: Access powerful external AI capabilities and automation without leaving your editor, streamlining research, planning, code analysis, testing, and more.

  • Installation & Setup:

    1. Install globally: npm install -g vibe-tools
    2. Run interactive setup in your project root: vibe-tools install .
    3. This guides you through API key configuration (Perplexity, Gemini, optionally OpenAI/Anthropic/OpenRouter/GitHub) and updates your editor’s AI instruction files (e.g., .cursor/rules/vibe-tools.mdc for Cursor).
    4. API keys are typically stored in ~/.vibe-tools/.env or a project-local .vibe-tools.env.
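
For reference, that env file typically looks something like this (the variable names are assumed from the setup flow; verify against what `vibe-tools install` actually writes):

```shell
# ~/.vibe-tools/.env (or a project-local .vibe-tools.env)
PERPLEXITY_API_KEY="pplx-..."
GEMINI_API_KEY="..."
# Optional, for browser commands and extended features:
OPENAI_API_KEY="sk-..."
GITHUB_TOKEN="ghp_..."
```
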
  • Requirements:

    • Node.js 18 or later
    • Perplexity API key
    • Google Gemini API key (can be API key string, path to service account JSON, or adc for Application Default Credentials)
    • For browser commands (vibe-tools browser act/extract/observe): Playwright (npm install --global playwright) and an OpenAI or Anthropic API key.
    • Optional: GitHub Token, OpenRouter API key, Anthropic API key for extended features.

AI Task Management

Keep projects on track and let AI manage the development process based on your PRDs or high-level goals.

  • Option A (TypeScript Projects): TaskmasterAI

    • Benefit: AI-powered system that integrates with Cursor. Parses requirements (like those from ChatPRD), breaks them into manageable tasks, and tracks progress. Excels in TypeScript environments.
    • See their README for setup (MCP recommended).
  • Option B (Non-TypeScript Projects): RooCode Boomerang

    • Benefit: Leverages RooCode’s feature to break complex projects into smaller subtasks delegated to specialized AI modes (e.g., code, architect, debug). Streamlines complex workflows in any language.
    • Requires setting up RooCode and its custom modes.

Best Practices

Model Selection

These are our personal recommendations. While OpenAI’s o3 max model is currently considered the most intelligent, its significantly longer response times make it less practical for everyday development tasks.

For most development work, we still recommend Gemini 2.5 Pro or Claude 4 Sonnet, offering a better balance of capability and speed for iterative coding workflows.

How to decide which model to use?

I want control over what the model does

  • What kind of task are you working on?
    • Small, scoped change → Use Claude 4 Sonnet
    • Larger task with clear instructions → Use Claude 4 Sonnet or Gemini 2.5 Pro

I want the model to figure it out

  • How complex is the task?
    • Routine or general use → Use Claude 4 Sonnet, Gemini 2.5 Pro
    • Very complex or ambiguous → Use o3

Tips and Tricks

These tips from the Cursor community maximize productivity:

  • Project Setup:
    • Define domain knowledge in .cursor/rules (project rule files, e.g., .mdc).
    • Use prd.md, specs.md (potentially generated by ChatPRD) for AI context.
    • Track tasks with todo.md or a dedicated tool (TaskmasterAI/RooCode).
    • Consider a Test-Driven Development (TDD) workflow for AI self-correction.
  • Version Control:
    • Use Git frequently as a safety net.
    • Make small, incremental commits.
    • Always review AI changes before committing.
  • AI Interaction:
    • Break down large tasks into small steps.
    • Use @ references for specific file/folder context.
    • Start new chats for distinct tasks to avoid context confusion.
    • Use powerful reasoning models (like o3 or Gemini 2.5 Pro) for planning/architecture via Chat, then implement with faster models via Agent/Edit.
    • Provide clear goals and context in your prompts.
    • Use MCP (Model Context Protocol) tools like vibe-tools for advanced control.
  • Mindset:
    • Understand AI limitations – it’s a copilot, not infallible.
    • Be specific with instructions. Clear prompts yield better results.
    • Don’t over-rely on AI for critical logic without thorough review.

Prompt Engineering Principles

Crafting effective prompts is an art. AI models can hallucinate when they aren’t given enough context. My best tips:

  • Be Specific: Clearly state what you want the AI to do in your own words.
  • Provide Context: Tag the right files/folders to give the AI enough information to understand roughly where to make changes (preventing hallucinations).
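
For example, a specific, well-scoped prompt that applies both principles might look like this (the file paths here are hypothetical):

    Add rate limiting to the login endpoint in @app/api/auth/login/route.ts.
    Follow the middleware pattern used in @lib/middleware/rate-limit.ts, and
    return a 429 status with a Retry-After header when the limit is exceeded.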

Master Prompt Engineering

To truly unlock the potential of AI, invest time in becoming an expert prompter with PromptCraftPro.

Good prompting = Good communication IRL = Being a better teammate

Mastering prompt engineering skills transfers to improved real-world communication, making you more effective both with AI tools and other humans.

Development Workflows

Apply the tools and techniques to common development tasks.

Research & Planning

Before coding, leverage AI for research and planning.

  1. Define Requirements: Use ChatPRD to flesh out ideas into structured requirements (PRDs).

  2. Technical Research: Use vibe-tools web (powered by Perplexity) for web searches directly in your editor.

    # Ask vibe-tools to research a topic
    vibe-tools web "Compare state management libraries for React in 2024"
    
    @tool:vibe-tools web "What are the best practices for securing Next.js API routes?"
    

    Tip: You can often just ask the agent to “ask Perplexity”.

  3. Codebase Understanding & Planning: Use vibe-tools repo (powered by Gemini 2.0) to understand existing code or vibe-tools plan to generate implementation strategies.

    # Ask vibe-tools repo to explain part of your codebase
    vibe-tools repo "Explain the authentication flow in @/lib/auth.ts"
    
    # Ask vibe-tools repo for context
    @tool:vibe-tools repo "Which files handle user profile updates in this project?"
    
    # Ask vibe-tools plan for an implementation strategy
    @tool:vibe-tools plan "Outline the steps to add two-factor authentication using OTP"
    

    Tip: You can often just ask the agent to “ask Gemini” for repo context.

  4. Task Breakdown: Use your chosen AI Task Manager (TaskmasterAI/RooCode) to parse the PRD/plan and create a development task list.

Debugging

Tackle bugs efficiently with AI assistance.

  • Simple Bugs (Direct Prompt): Start a new Chat/Edit session:

    I'm encountering a bug where [Explain the bug clearly].
    The error occurs when [Describe steps to reproduce].
    I expect [Expected behavior] but get [Actual behavior].
    
    Relevant files: @[file1] @[folder]
    Error message/Logs:
    

    [Paste relevant console logs or error messages]

    
    Can you help me identify the cause and suggest a fix? Please explain your reasoning.
    
  • Complex Bugs (jam.dev): For intricate frontend bugs, jam.dev captures rich context.

    1. Install the browser extension.
    2. Record the bug reproduction.
    3. Share the generated jam link with Cursor:
    I've captured a complex bug using jam.dev: [Paste jam.dev link]
    
    Can you analyze this and help me find the root cause and fix?
    Relevant backend files: @[file1] @[file2]
    

    The jam provides screen recording, network requests, console logs, state, etc., giving the AI much more context.

  • Leveraging Tools:

    • Ask vibe-tools repo (Gemini) to analyze relevant code sections for potential issues.
      @tool:vibe-tools repo "Review the code in @[relevant-file] for potential causes of this bug: [bug description]"
      
    • Use vibe-tools browser (Stagehand) to automate browser actions for reproducing or testing frontend bugs.
      # Example: Log in, check console, take screenshot
      vibe-tools browser open "http://localhost:3000/login" --fill='input[name="email"]=test@example.com' --fill='input[name="password"]=password123' --click='button[type="submit"]' --console --screenshot=login_attempt.png
      
      # Ask Stagehand (vibe-tools browser) to perform actions
      @tool:vibe-tools browser act "Navigate to the user profile page and click the 'Edit' button" --url="http://localhost:3000"
      

    Tip: You can often just ask the agent to “use Stagehand” or “use the browser”.

Adding New Features

Implement features methodically using a hybrid AI approach. For planning, we recommend OpenAI’s o3 model or Google’s Gemini 2.5 Pro due to their large context windows and strong architectural reasoning capabilities. For implementation and review, Gemini 2.5 Pro or Claude 4 Sonnet remain the best choices.

  1. Plan (Ask Mode): Use a strong reasoning model like OpenAI’s o3 model or Gemini 2.5 Pro in Chat.

    I want to add the following feature: [Explain feature clearly, reference PRD from ChatPRD if available @prd.md]
    
    Help me plan the implementation. Consider:
    1. High-level architecture
    2. Data model changes (@prisma/schema.prisma)
    3. Key components/modules involved (@components/ @lib/)
    4. Potential edge cases & security considerations
    5. Any new dependencies needed
    
    Break this down into manageable implementation steps.
    
    Create a plan file: @feature-plan.md
    
    
  2. Implement (Agent Mode): Use the plan generated above. Tag the plan file and relevant code files. We recommend using Gemini 2.5 Pro or Claude 4 Sonnet for this stage.

    Let's implement the feature based on the plan: @feature-plan.md
    
    Start with step 1: [Describe step 1 from the plan].
    Relevant files: @[file1] @[file2]
    

    Work through the plan step-by-step.

  3. Review & Refine: Always review the AI-generated code. Ask for refinements or fixes. You can also ask vibe-tools repo (Gemini) to review, or use Gemini 2.5 Pro or Claude 4 Sonnet directly for a thorough review:

    @tool:vibe-tools repo "Review the code changes I just made in @[file]. Are there any potential issues or areas for improvement?"
    
    Using Gemini 2.5 Pro or Claude 4 Sonnet:
    Review the code changes I just made in @[file]. Are there any potential issues or areas for improvement? Explain your reasoning.
    
    If necessary, let me know if we need to go back and refine the plan or update the implementation.
    

Refactoring Code

Keep your codebase clean and maintainable with AI help.

  • Maintainability Goals:
    • Keep files concise (ideally < 300 lines).
    • Ensure functions/components have a single responsibility.
  • Refactoring Prompt: Start an Edit or Agent session:
    Please refactor this code: @[file_or_folder_to_refactor]
    
    Focus on improving [readability | modularity | performance | specific concern].
    Ensure the existing functionality remains unchanged.
    
    Explain the changes you made and why they improve the code.
    

Learning a new Concept

Use AI to quickly grasp new technologies or concepts within a sandbox environment. This prompt sets up an interactive learning session.

I want to learn '[Topic/Library/Framework, e.g., Zustand for React state management]'.

First, break this topic down into 10 logical sub-sections I need to learn to fully understand it.

When I type 'start', begin with the first sub-section. For that sub-section, ask me 5 questions designed to test my understanding. 

* If I answer all 5 correctly, congratulate me and tell me to type 'next' to move to the next sub-section.
* If I get any questions wrong, tell me which ones I got wrong, explain *why* my answer was incorrect, and suggest specific concepts or documentation sections I should review before trying again.
* Wait for me to indicate I'm ready to retry the questions for that sub-section.

Only move to the next sub-section once I've answered all 5 questions for the current one correctly.

Let's begin with the breakdown. What are the 10 sub-sections for '[Topic/Library/Framework]'?

Benefit: Get hands-on practice immediately in an isolated environment, building a reference implementation.

Micro-Efficiency Boosters

Alias Commands

Streamline common terminal tasks.

  • Automated Commit Messages: Never write mundane commit messages again.
    1. Install aicommits: Follow their quick guide.
    2. Set up a shell alias (e.g., in .zshrc or .bashrc):
      alias gagc='git add . && aicommits'
      
    3. Now, simply run gagc in your terminal to stage all changes and have AI generate a conventional commit message.
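
If you prefer to review exactly what gets staged before the AI writes the message, a variant like this works too (assuming the same aicommits setup):

```shell
# Stage hunks interactively, then let aicommits write the message
alias gpc='git add -p && aicommits'
```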

Custom Agent Modes

Leverage community-created modes to specialize your AI agent for specific tasks. Access them via the agent settings or playbooks.com/modes. Some examples include:

  • Plan: Generates a project implementation plan based on a PRD.
  • Audit: Finds security vulnerabilities and creates a report.
  • Vibe Coding: Assists in building apps conversationally.
  • PRD: Creates comprehensive Product Requirements Documents.
  • Refactor: Improves code structure and readability without adding features.
  • Teach: Explains coding concepts and asks clarifying questions.
  • Content Writer: Acts as a research and writing assistant.
  • Architect: Designs system architecture before implementation.

Useful MCP Servers

Context7: Update Your AI’s Knowledge

Context7 solves one of the biggest limitations in AI coding: outdated knowledge due to training cutoffs.

  • Problem It Solves:

    • LLMs have fixed training cutoffs, lacking knowledge of the latest libraries/frameworks
    • Built-in docs features can overload context when mixed with project code
    • Large codebases with multiple dependencies create confusion for AI agents
  • Key Features:

    • Access to up-to-date documentation for 800+ frameworks and libraries
    • Pulls only specific, relevant documentation pieces when needed
    • Preserves context tokens for your actual code
    • Documentation includes code examples that help the agent write accurate code
    • Supports documentation search for precise information retrieval
    • No authentication or API keys required
  • Installation:

    1. In Cursor, go to Settings → MCP
    2. Add new MCP with this configuration to your existing MCP JSON object:
    {
      "mcpServers": {
        // ... other MCP servers
        "context7": {
          "command": "npx",
          "args": ["-y", "@upstash/context7-mcp@latest"]
        }
      }
    }
    
    3. Save and refresh Cursor (or restart if tools don’t appear)
  • Usage:

    • When prompting, explicitly ask Cursor agent to call context7 MCP to access up-to-date knowledge without context overload
  • Benefit: Write more accurate code for modern frameworks, eliminate outdated command errors, and improve performance on complex projects by giving your AI agent precise, relevant documentation.
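
For example, a prompt might end with an explicit trigger (Context7’s README suggests appending the phrase “use context7”):

    Create a Next.js App Router route handler that streams a response. use context7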

Firecrawl: Web Scraping for Missing Documentation

Firecrawl complements Context7 by allowing you to scrape documentation from any website when it’s not available in the Context7 database.

  • Problem It Solves:

    • Documentation for newer or niche libraries may not be in Context7 yet
    • Need to extract specific documentation from websites not in any database
    • Want to save documentation locally for future reference
  • Key Features:

    • Scrape any website and convert to LLM-ready markdown
    • Extract specific information with structured data extraction
    • Crawl entire documentation sites
    • Parse PDFs, DOCX, and other document formats
    • Supports dynamic content with JavaScript rendering
  • Installation:

    1. In Cursor, go to Settings → MCP
    2. Add Firecrawl to your MCP configuration:
    {
      "mcpServers": {
        "Firecrawl Scraper": {
          "command": "env",
          "args": [
            "FIRECRAWL_API_KEY=your-api-key-here",
            "npx",
            "-y",
            "firecrawl-mcp"
          ]
        }
      }
    }
    
    3. Save and refresh Cursor (or restart if tools don’t appear)
  • Usage:

    • When prompting, ask Cursor agent to use Firecrawl to scrape documentation for a specific library or feature and save the output as a markdown file in your project for future reference.

Other vibe-tools Examples

  • Generate Documentation: Ask vibe-tools doc to document parts of your code or external libraries.
    # Document local API endpoints
    vibe-tools doc --save-to=docs/api.md --hint="Focus on the API endpoints in @/app/api"
    
    # Generate local docs for an external library
    @tool:vibe-tools doc --from-github=reactjs/react-router --save-to=local-docs/react-router.md --hint="Summarize the main routing components"
    
  • GitHub Integration: Interact with GitHub issues or PRs directly.
    # Fetch and summarize a PR
    vibe-tools github pr 123 --from-github=vercel/next.js
    
    # Get details of an issue
    vibe-tools github issue 456 --from-github=nodejs/node
    
  • YouTube Analysis: Extract information from YouTube videos.
    vibe-tools youtube "https://www.youtube.com/watch?v=some_video_id" --type=summary "Summarize the key points of this tech talk."
    

Remember, the key is iterative learning. Experiment with these tools and techniques, find what works best for your style, and continuously refine your AI-assisted workflow.

TODO: Add RepoPrompt