Collaborating with AI Agents

Collaborating with your AI agent on a scratch pad unlocks workflows that go well beyond writing code.
At this year’s App.js Conf, I gave a talk about how AI is reshaping software development. This post covers one of the most important themes from that talk: how we can use AI tools for workflows that go beyond just writing code.
One of my favorite ways to work with AI is what I call “Vibe coding” - where you and your AI agent bounce ideas back and forth in a free-flowing conversation. You might start with a rough idea, and through rapid iteration, arrive at something that feels right. It’s like pair programming with someone who’s always ready to try a new approach. This workflow is perfect for prototyping, exploring new ideas, or when you’re working on smaller, self-contained features. The back-and-forth nature of the conversation helps you think through problems from different angles, and the AI can quickly generate variations of your ideas.
However, when you’re working on a large codebase with complex business requirements, multiple team dependencies, and important metrics to maintain, you need to be deliberate about your approach. This is where the plan.md workflow comes in.
1. What is plan.md?
A plan.md file is your collaborative thinking space with your AI agent. Instead of jumping straight into coding, you use it to:
- Break down the problem
- Consider different approaches
- Document assumptions and constraints
- Iterate on the solution design
Here’s a practical example. Let’s say we want to create a weather widget for our app. Instead of diving into code, we start with a prompt like this:
I want to create a weather widget for our React Native app. Before we write any code, I want to work out:
- What data we need to display
- How we’ll get the weather data
- What the UI should look like
- How we’ll handle error states and loading
- What edge cases we need to consider
Please help me think through this systematically.
The agent might respond with something like:
Weather Widget Implementation Plan
Data Requirements
- Current temperature
- Weather condition (sunny, cloudy, etc.)
- Location
- Last updated timestamp
- Optional: forecast for next 24 hours
Data Source
- Consider using OpenWeatherMap API
- Need to handle API key management
- Consider caching strategy for offline support
- Rate limiting considerations
UI Components
- Main widget container
- Temperature display
- Weather icon
- Location display
- Last updated indicator
- Loading state
- Error state
Edge Cases
- No internet connection
- Invalid location
- API rate limits
- Timezone differences
- Unit conversion (C/F)
The key insight here is that once you’ve iterated with your AI agent and arrived at a solid plan, you can ask it to break this down into actionable tasks:
Based on this plan, can you break it down into specific, actionable tasks that you can help me implement? Each task should be something you can help me with directly.
The agent might then create a task list like:
- Set up API client and environment variables (sketched below)
  - Create a new API client module with axios
  - Add environment variables for API key and base URL
  - Implement request/response interceptors for error handling
  - Set up TypeScript types for API responses
- Create the core widget component
  - Build a reusable container with proper styling
  - Implement responsive layout using Flexbox
  - Add loading skeleton component
  - Set up error boundary for graceful failure handling
- Implement data fetching and state management
  - Create a custom hook for weather data fetching
  - Implement proper loading and error states
  - Add data caching with React Query
  - Set up automatic background refresh
- Build the weather display components
  - Create animated weather icon component
  - Implement temperature display with unit conversion
  - Add location display with geocoding
  - Build timestamp component with auto-update
- Add offline support and testing
  - Implement offline caching for weather data
  - Add retry logic for failed requests
  - Write unit tests for core functionality
  - Add integration tests for data flow
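To make the first task concrete, here is roughly where the conversation might land for the API client module. Treat it as a hedged sketch rather than a finished implementation: the file path, the OPENWEATHER_* environment variable names, and how those values get injected (react-native-config, Expo’s EXPO_PUBLIC_ variables, a secrets manager) are all assumptions to settle during the iteration.

```typescript
// src/api/client.ts (hypothetical path): axios client for the weather API.
import axios from "axios";

// Assumed environment variables; the injection mechanism is a project decision.
const BASE_URL =
  process.env.OPENWEATHER_BASE_URL ?? "https://api.openweathermap.org/data/2.5";
const API_KEY = process.env.OPENWEATHER_API_KEY;

export const apiClient = axios.create({
  baseURL: BASE_URL,
  timeout: 10_000,
});

// Attach the API key and units to every request.
apiClient.interceptors.request.use((config) => {
  config.params = { ...config.params, appid: API_KEY, units: "metric" };
  return config;
});

// Normalize errors so UI code can show a friendly message.
apiClient.interceptors.response.use(
  (response) => response,
  (error) => {
    const status = axios.isAxiosError(error) ? error.response?.status : undefined;
    return Promise.reject(
      new Error(status ? `Weather request failed (HTTP ${status})` : "Weather request failed")
    );
  }
);
```

The value isn’t the exact code; it’s that each task is now small enough for the agent to draft and for you to review in one sitting.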
This approach gives you several advantages:
- You can review the plan before writing any code
- The tasks are broken down into manageable chunks
- You can have the agent help you implement each task systematically
- You maintain control over the implementation while leveraging AI assistance
- The plan serves as documentation for your thought process
Once you’re happy with the plan, you can ask your AI agent to help implement each task, walking through them one at a time and iterating on the implementation until it meets your requirements.
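For example, when you reach the data fetching task, the back-and-forth might converge on a hook along these lines. Again, a sketch under assumptions: useWeather, fetchCurrentWeather, the query key, and the refresh intervals are placeholders, it assumes React Query v5 (@tanstack/react-query), and it builds on the hypothetical apiClient from the earlier sketch.

```typescript
// src/hooks/useWeather.ts (hypothetical path): weather data fetching with React Query.
import { useQuery } from "@tanstack/react-query";
import { apiClient } from "../api/client";

export interface WeatherData {
  temperatureC: number;
  condition: string;
  location: string;
  updatedAt: Date;
}

// Map the raw API response into the shape the widget needs.
export async function fetchCurrentWeather(lat: number, lon: number): Promise<WeatherData> {
  const { data } = await apiClient.get("/weather", { params: { lat, lon } });
  return {
    temperatureC: data.main.temp,
    condition: data.weather[0]?.main ?? "Unknown",
    location: data.name,
    updatedAt: new Date(data.dt * 1000),
  };
}

export function useWeather(lat: number, lon: number) {
  return useQuery({
    queryKey: ["weather", lat, lon],
    queryFn: () => fetchCurrentWeather(lat, lon),
    staleTime: 5 * 60 * 1000,        // treat cached data as fresh for 5 minutes
    refetchInterval: 15 * 60 * 1000, // background refresh every 15 minutes
  });
}
```

A component can then render from the hook’s data, isLoading, and error values, which maps directly onto the loading and error states called out in the plan.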
2. Scaling the Plan: Technical Writing and Ticket Creation
The plan.md pattern doesn’t just work for feature-sized problems; it scales to whole projects.
As leads, we often write technical design documents: proposals, epics, and implementation plans. We break those into Jira tickets. We argue over edge cases. It’s a lot of work, and it’s exactly the kind of structured thinking that a plan file helps with.
So now, when I’m writing a big spec, I treat it like a super-powered plan.md:
- I collaborate with Cursor inside the codebase
- I document decisions and assumptions
- I ask it to help generate Jira tickets in our preferred format
- And I use the agent to reason through potential risks or missing pieces
The result? Better specs, clearer tickets, fewer back-and-forths during implementation. And a stronger foundation for multi-sprint work across teams.
You can even take it a step further by integrating with Atlassian’s MCP server to automatically publish tickets to Jira, turning your plan.md into actionable team tasks.
3. Reviewing Code with pr-review.md
AI can help you write code, but it can also help you understand code — especially when you’re reviewing pull requests that touch unfamiliar parts of the codebase. The key insight here is that you’re collaborating with your agent using the full context of your codebase to perform a thorough review.
Here’s how I approach PR reviews with my AI agent:
- First, I check out the branch locally and use the GitHub CLI to get the PR information:
```bash
# Check out the branch locally
git checkout feature/weather-widget

# Get a summary of the PR
gh pr view

# See the actual changes
gh pr diff
```
- Then, I create a pr-review.md file and ask my agent to analyze the changes:
I’m reviewing a PR for a weather widget feature. I’ve checked out the branch and can see the changes. Can you:
- Summarize the main changes in this PR
- Identify which parts of the codebase are affected
- Point out any potential risks or areas that need special attention
The agent might respond with something like:
PR Review Analysis
Summary of Changes:
- Adds new weather widget component
- Modifies home screen layout to accommodate widget
- Updates API client to support weather data fetching
Affected Areas:
- src/components/WeatherWidget/
- src/screens/HomeScreen.tsx
- src/api/weather.ts
- src/types/weather.d.ts
Potential Risks:
- Weather API key management
- Offline state handling
- Performance impact on home screen
- Type safety in new components
- Now the real collaboration begins. I can ask follow-up questions to dig deeper:
Looking at the weather widget component, can you:
- Check if all edge cases from our plan.md are handled?
- Verify the error states match our design system?
- Review the test coverage?
The agent can then analyze the specific files and respond with detailed findings:
WeatherWidget Component Analysis
Edge Cases:
✅ Handles offline state
✅ Manages API errors
❌ Missing timezone conversion
❌ No handling for invalid location data

Error States:
✅ Uses standard error component
✅ Proper loading states
❌ Missing retry mechanism
❌ Error messages could be more user-friendly

Test Coverage:
✅ Unit tests for data fetching
✅ Component rendering tests
❌ Missing integration tests
❌ No performance benchmarks
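When the review surfaces gaps like these, it helps to agree on what closing one of them would look like. A unit test for the data fetching layer, for instance, might start out like the sketch below. It assumes Jest as the test runner and reuses the hypothetical fetchCurrentWeather helper and apiClient from the earlier plan sketches; the paths and fixture values are made up.

```typescript
// __tests__/useWeather.test.ts (hypothetical): unit test for the data mapping.
import { fetchCurrentWeather } from "../src/hooks/useWeather";
import { apiClient } from "../src/api/client";

// Replace the real HTTP client with a mock so no network calls are made.
jest.mock("../src/api/client", () => ({
  apiClient: { get: jest.fn() },
}));

describe("fetchCurrentWeather", () => {
  it("maps the API response into the widget's data shape", async () => {
    (apiClient.get as jest.Mock).mockResolvedValue({
      data: {
        name: "Amsterdam",
        dt: 1_700_000_000,
        main: { temp: 12.3 },
        weather: [{ main: "Clouds", icon: "04d" }],
      },
    });

    const result = await fetchCurrentWeather(52.37, 4.89);

    expect(result.location).toBe("Amsterdam");
    expect(result.condition).toBe("Clouds");
    expect(result.temperatureC).toBeCloseTo(12.3);
  });
});
```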
- Finally, I can ask the agent to perform specific checks and show its work:
Can you verify these specific aspects of the implementation:
- Check if the API client properly handles rate limits
- Verify the widget updates correctly when location changes
- Test the offline caching behavior
The agent can then run through these checks and document its findings:
Implementation Verification
API Rate Limiting:
✅ Client includes retry logic
✅ Proper error handling for 429 responses
✅ Exponential backoff implemented

Location Updates:
✅ Widget re-renders on location change
✅ Proper cleanup of old subscriptions
❌ Potential memory leak in location listener

Offline Caching:
✅ Implements cache-first strategy
✅ Proper cache invalidation
❌ Cache size not limited
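For context, the retry behavior the agent is describing might be implemented roughly like the interceptor below. This is a hedged sketch, not the PR’s actual code: the weatherClient instance, the retryCount field, and the delay values are all assumptions.

```typescript
// Hypothetical retry interceptor: bounded retries with exponential backoff on HTTP 429.
import axios, { AxiosError } from "axios";

const weatherClient = axios.create({
  baseURL: "https://api.openweathermap.org/data/2.5", // assumed weather API
});

const MAX_RETRIES = 3;
const BASE_DELAY_MS = 500;

const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

weatherClient.interceptors.response.use(undefined, async (error: AxiosError) => {
  const config: any = error.config; // widened so we can stash a retry counter on it
  const retryCount: number = config?.retryCount ?? 0;

  // Only retry rate-limited requests, and only a bounded number of times.
  if (!config || error.response?.status !== 429 || retryCount >= MAX_RETRIES) {
    return Promise.reject(error);
  }

  config.retryCount = retryCount + 1;
  await delay(BASE_DELAY_MS * 2 ** retryCount); // 500ms, then 1s, then 2s
  return weatherClient.request(config);
});
```

Capping the retries and backing off exponentially keeps a rate-limited API from being hammered while still recovering from transient 429s.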
This collaborative review process gives you several advantages:
- You maintain control over the review while leveraging the agent’s ability to analyze code
- The agent can help you understand unfamiliar parts of the codebase
- You get a documented review process that you can share with the team
- The agent can perform specific checks and show its reasoning
- You can iterate on the review by asking follow-up questions
The key is that you’re not just getting a static analysis — you’re having a conversation with your agent about the code, using the pr-review.md file as your collaborative workspace. This helps ensure a thorough review while making the process more efficient and systematic.
Final Thoughts
The workflows I’ve shared — plan.md, technical writing, and pr-review.md — all share a common pattern: they treat your AI agent as a collaborative partner rather than just a code generator. By using these files as shared workspaces, you can:
- Break down complex problems into manageable pieces
- Maintain a clear record of your thought process
- Get systematic help with implementation
- Perform thorough code reviews
- Keep your team aligned with clear documentation
The key is learning how to effectively collaborate with your agent. It’s not about getting the agent to write code for you — it’s about using it to enhance your own workflow, making you more deliberate and thorough in your work.
If you’re interested in trying these workflows, start with plan.md. It’s the simplest to adopt and will help you develop the collaborative mindset that makes the other workflows effective.
If this resonates with you, I’d love to hear how you’re using AI in your own workflows — and if you want a sticker from my talk, hit me up. 😉