
Architect Playbook

Self-improving Claude Code audit skills for TypeScript/React codebases covering architecture, security, accessibility, performance, testing, and more.

What it does

Architect Playbook is a collection of Claude Code slash-command skills that systematically audit TypeScript and React codebases across nine dimensions: security, accessibility, performance, architecture, testing, quality gates, linting, dependencies, and React patterns.

How it works

Install the playbook once globally or locally, then run audits in parallel from separate chat sessions. Each audit:

  • Grades against opinionated baselines, not just what’s in your codebase
  • Reports findings, then asks before generating implementation plans
  • Saves structured findings (markdown + JSON) to disk for cross-session review
  • Supports optional enrichment flags (--with-lighthouse-results, --with-network) for deeper analysis

When you find gaps during review, /system-self-improve patches the originating audit skill itself, making the playbook smarter over time.

Use cases

  • Onboarding to new projects: Drop into an unfamiliar codebase and run all audits across parallel worktrees
  • Architecture reviews: Validate module boundaries, coupling, and design patterns
  • Security gates: Scan for auth flaws, XSS, secrets, and header misconfigurations
  • Accessibility compliance: Check WCAG 2.2 AA across components and the shell
  • CI/CD integration: Run repeatable audit gates and evolve audit quality based on real review feedback
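The security-gate and CI/CD use cases above can be sketched as a small gate over the saved `findings.json`. The schema assumed here (a top-level `issues` list whose entries carry a `severity` field) is hypothetical, for illustration only; the real contract is defined by each audit's output.

```python
import json
from pathlib import Path


def has_blocking_findings(findings_path, blocking=("critical", "high")):
    """Return True if findings.json contains any issue at a blocking severity.

    Assumes a hypothetical schema: {"issues": [{"severity": "...", ...}]}.
    Check the audit's actual output format before relying on these fields.
    """
    data = json.loads(Path(findings_path).read_text())
    return any(issue.get("severity") in blocking for issue in data.get("issues", []))
```

A CI job could call this on `.architect-audits/<audit>/findings.json` and fail the build when it returns True.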

Who benefits

Engineering teams, tech leads, and architects who want repeatable, evolving code review workflows without manual checklist fatigue.

Frequently asked questions

How do I install Architect Playbook?
Clone the repository, open it in Claude Code, then run `/install-architect-playbook-globally` (or `/install-architect-playbook-locally` to pin it to a single project). That's it—all audit commands are now available in every session.
Can I run audits in parallel?
Yes. Open multiple Claude Code chats, then use `/worktree <audit-name>` in each to run audits against isolated Git worktrees simultaneously. This prevents interference between long-running audits.
What happens after an audit finds issues?
The audit surfaces a concise Top 5 recommendations list plus a full report saved to `.architect-audits/`. You fix the issues in the same chat, re-run the audit in a fresh chat to review, then optionally run `/system-self-improve` to patch the audit skill if gaps surface.
What audits are included?
Nine audits: security, accessibility, performance, architecture, testing, React patterns, linting, dependencies, and quality gates. Each supports a standard `--learn` teaching mode, plus optional enrichment flags like `--with-network` or `--with-lighthouse-results`.
How does the self-improvement workflow work?
`/system-self-improve` reads a review's gap report and proposes edits to the originating audit's skill definition. Merge the patch, commit it, and the playbook evolves with your codebase's learnings.
Are audits read-only?
By default, yes. Every audit is static analysis and read-only. Runtime or network data is opt-in via explicit flags like `--with-scan` or `--with-network`.
What format are findings saved in?
Four files: `findings.md` (human-readable), `findings.json` (machine-readable issues list), `snapshot.md` (diagnostic snapshot), and `metadata.json` (skill version, timestamp, repo hash).
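A cross-session review script might load that four-file contract roughly like this; the JSON field names used below (`issues`, inside `findings.json`) are illustrative assumptions rather than the documented schema.

```python
import json
from pathlib import Path


def load_audit_run(run_dir):
    """Load one audit run from the four-file findings contract.

    Reads findings.md, findings.json, snapshot.md, and metadata.json from
    run_dir. The "issues" key is a hypothetical field name; consult the
    audit's SKILL.md for the real schema.
    """
    run = Path(run_dir)
    return {
        "report": (run / "findings.md").read_text(),
        "issues": json.loads((run / "findings.json").read_text()).get("issues", []),
        "snapshot": (run / "snapshot.md").read_text(),
        "metadata": json.loads((run / "metadata.json").read_text()),
    }
```

Because the files live on disk, a fresh chat session can call this without any state from the session that produced them.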
Can I customize audit thresholds?
Yes. Each audit supports per-audit `--threshold-*` flags documented in its `SKILL.md`. Use these to adjust severity filters or baseline expectations for your team's standards.

Glossary

Worktree
An isolated Git working directory created from your codebase. Audits run in worktrees to prevent interference when executing multiple audits in parallel.
Opinionated baseline
A specific, well-defined standard against which each audit grades code. The baseline is the team's or industry standard—not whatever happens to be in the current codebase.
Enrichment flag
Optional command-line flags like `--with-network` or `--with-lighthouse-results` that enable audits to pull live runtime data, security scanner output, or external tooling results beyond static analysis.
Findings contract
The deterministic on-disk structure (findings.md, findings.json, snapshot.md, metadata.json) that allows audits, fixes, and reviews to share state across separate chat sessions.
Top 5 recommendations
The highest-impact audit findings surfaced in the chat: violations and missing foundations, plus the most important partial gaps. The full report on disk contains all check statuses.
