Ernie
AI Playbook

AI-Native Dev Workflows

The multi-agent setup I actually use. CLAUDE.md patterns, Cursor rules, CodeRabbit config, and when to stop trusting the output.


Start with Review, Not Generation

If your team is nervous about AI writing code — start with AI reviewing code. CodeRabbit is the lowest-risk first step because it doesn't write a single line. It reviews yours.

Set up CodeRabbit on your repo, and every PR gets an automated review before any human sees it. It catches real things: missed edge cases, FSD layer violations, inconsistent patterns, potential security issues. The path-specific instructions are the real value — generic "review this code" gets generic results. "Verify FSD boundaries in entity files" gets useful feedback.

Once your team is comfortable with AI reviewing code, you're ready for AI writing code. That transition is psychological as much as technical — and CodeRabbit builds the trust.

The CLAUDE.md File — Your Highest-Leverage Move

This is the single most important concept in AI-powered development. A CLAUDE.md file at your repo root tells every AI tool about your project's architecture, conventions, and commands. Without it, every session starts from zero. With a good one, Claude understands your project immediately.

My Daily Workflow

Here's what a typical day actually looks like:

  1. Open the terminal, start Claude Code. It reads my CLAUDE.md, knows the project, and I tell it what I'm working on. Most complex tasks — multi-file refactors, debugging sessions, new features — happen here.
  2. Write code in VS Code with Copilot on. Copilot handles inline completions while I type. I rarely think about it — it's just there, suggesting the next line.
  3. CodeRabbit reviews my PRs. Before any human sees my PR, CodeRabbit has already flagged issues.
  4. Commit and PR with Claude Code. claude "Create a PR for this branch with a clear description" — it writes better PR descriptions than I do because it's read every change.
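The terminal side of that loop is just a few commands. A sketch of what I type (the prompts are illustrative, not magic strings — check `claude --help` for the flags your version supports):

```shell
# Start an interactive Claude Code session; it loads CLAUDE.md from the repo root
claude

# One-off, non-interactive tasks via print mode
claude -p "Summarize the changes on this branch"

# The PR step from the workflow above
claude "Create a PR for this branch with a clear description"
```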

The key insight: these tools operate at different layers. Copilot is muscle memory (line-level), Claude Code is thinking (task-level), CodeRabbit is quality gate (review-level). They never conflict.

Writing a Good CLAUDE.md

Here's a stripped-down version of the actual CLAUDE.md from this site:

# zernie.com

## Architecture
- Next.js 16 with App Router, TypeScript strict
- Tailwind CSS 4, Feature-Sliced Design
- No default exports (except Next.js pages)
- Named exports, functional components, hooks

## Directory Structure
- src/app/ — Next.js routes
- src/entities/ — Domain models and data
- src/features/ — User interactions
- src/widgets/ — Composed page sections
- src/shared/ — Reusable utilities and UI
- src/pages-layer/ — Full page compositions

## Commands
- npm run dev — Start dev server (port 3000)
- npm run build — Production build
- npm run check — All quality gates (lint + typecheck + steiger)

## Conventions
- Tailwind for all styling, no CSS modules
- rem units for spacing
- FSD layer boundaries enforced (no cross-slice imports)
- Steiger validates architecture on every build

What makes this good: It's short. It tells the AI what matters — architecture, file structure, commands, conventions. It doesn't try to document everything. A 50-line CLAUDE.md that's accurate beats a 500-line one that's aspirational.

Where to put them:

  • Repo root: CLAUDE.md — applies to the whole project
  • Subdirectories: Additional CLAUDE.md files for specific areas (e.g., src/entities/CLAUDE.md for data model conventions)
  • Personal: ~/.claude/CLAUDE.md — your preferences across all projects
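As a sketch, a scoped file like src/entities/CLAUDE.md might look like this — the content is hypothetical, following the conventions above:

```markdown
# Entities Layer

## Conventions
- One directory per domain model (e.g. src/entities/post/)
- Entities may import only from src/shared/ — never from features, widgets, or pages
- Re-export each slice's public API through its index.ts
```

Subdirectory files stack with the root one, so they only need to cover what's specific to that area.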

Cursor Rules

If your team uses Cursor, .cursor/rules/ files give it project context similar to CLAUDE.md. Here's a real one:

---
description: General coding rules
globs: "**/*.{ts,tsx}"
alwaysApply: true
---

- TypeScript strict mode, no `any` types
- Named exports only (no default exports except pages)
- Feature-Sliced Design layer boundaries enforced
- Tailwind CSS for all styling
- Functional components with hooks
- Use `rem` units for spacing values

You can have multiple rule files scoped to different parts of your codebase. A rule for src/entities/** might specify data model conventions, while one for **/*.test.* specifies testing patterns.
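A scoped rule file — say, a hypothetical .cursor/rules/testing.mdc in the same format — could look like:

```markdown
---
description: Testing conventions
globs: "**/*.test.*"
alwaysApply: false
---

- Test behavior, not implementation details
- Every test needs at least one meaningful assertion — no snapshot-only tests
- Co-locate tests with the code they cover
```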

CodeRabbit Config

Here's a minimal .coderabbit.yaml that actually works:

reviews:
  auto_review:
    enabled: true
    drafts: false
  path_instructions:
    - path: "src/entities/**"
      instructions: "Verify FSD layer boundaries. Entities should not import from features, widgets, or pages."
    - path: "**/*.test.*"
      instructions: "Check for meaningful assertions. Flag snapshot-only tests."
    - path: "src/shared/**"
      instructions: "Shared code must not import from any other FSD layer."

As noted earlier, the path-specific instructions are where the value lives: a scoped instruction produces feedback that a generic review pass never will.

Multi-Agent Workflows

The most productive setup uses multiple tools simultaneously, each in its sweet spot:

  • Copilot stays on for completions. You don't switch it on and off — it's ambient.
  • Claude Code handles complex work. Multi-file refactors, debugging, creating PRs. It's your senior pair programmer.
  • CodeRabbit runs asynchronously. You don't interact with it directly — it shows up when you open a PR.

They're complementary, not competing. Copilot doesn't do what Claude Code does. Claude Code doesn't do what CodeRabbit does.

When AI Makes Things Worse

I've learned when not to trust the output:

Over-engineering. Ask Claude Code to "add error handling" and you'll get try-catch blocks around code that can't fail, fallbacks for impossible states, and validation for internal data that's already validated. Be specific about what actually needs handling.

Dependency creep. AI will happily import a library for something you can write in 10 lines. Always check what it's adding to your package.json. "Add lodash for this one utility" — no.
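For instance, a utility like groupBy — a common reason AI reaches for lodash — fits in about ten lines of plain TypeScript. A sketch, not a lodash-compatible implementation:

```typescript
// groupBy in ~10 lines: buckets items by a derived string key.
// The kind of helper AI will happily add a dependency for.
function groupBy<T>(items: T[], key: (item: T) => string): Record<string, T[]> {
  const out: Record<string, T[]> = {};
  for (const item of items) {
    (out[key(item)] ??= []).push(item);
  }
  return out;
}
```

Diffing package.json after every AI session catches the imports you didn't ask for.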

Stale patterns. AI models have training cutoffs. If you're using a bleeding-edge framework or a recent API change, the AI might generate code for an older version. Always check against current docs for new features.

The confidence problem. AI never says "I don't know." It produces something plausible-looking even when it's wrong. The more obscure the domain, the more you need to verify. Standard CRUD? Trust it. Custom algorithm for your specific business logic? Read every line.

Architecture decisions. AI can propose options, but it doesn't understand your team's constraints, timeline, or where you're headed in six months. Use it to explore options, decide yourself.

What NOT to Delegate

Some things must stay human:

  • Security reviews. AI spots common vulnerabilities but can't reason about your threat model or trust boundaries.
  • Performance-critical paths. AI code is more often correct than optimal. For hot paths, benchmark — don't assume.
  • Data model design. Schema changes are expensive to reverse. Think through migration paths yourself.
  • Dependency decisions. Evaluate maintenance burden, supply chain risk, bundle size. AI doesn't weigh these.

Use corporate accounts, and don't paste secrets into prompts. You know this.