---
title: Making Your Repository AI-Ready
description: 'My experience transitioning from human-first to AI-ready repositories, distilled into six practices that enable effective AI coding agents: feedback loops, entry points, specifications, progressive disclosure, skills, and testing documentation.'
date: 2025-12-25
tags:
  - ai
  - developer-experience
  - documentation
  - best-practices
---

I've been using AI coding agents for a while now. I mostly use GitHub Copilot, Cursor, and Claude Code, but the same applies to any other AI coding agent. My initial instinct was to keep repositories as AI-free as possible and use AI only as an assistant for my own coding. That changed last year. These days I write only the core parts of the code and the contracts by hand; everything else is mostly AI-generated. This effectively triggered a transition from human-first repositories to human-read, AI-write-first ones.

Here's what I've seen work best in my repos and teams.

## Six Things That Actually Matter

### 1. Feedback Loop

This is the most important one. Your agent needs to know if what it's doing is right or wrong.

- **Linting and type checking** - Immediate feedback on style and types. The agent can fix its own issues before going further.

- **Tests** - Unit tests, integration tests. But also smoke tests - simple checks that verify the app actually runs.

- **Running the application** - The agent should be able to start the app, hit endpoints, and check the UI. If something looks broken, it can see that.

- **CI/CD** - Automated checks on every push. If tests fail, agent knows something is broken.

The key point: **the agent must have access to all of this**. It's not enough to have CI running somewhere. The agent needs to read CI status, see logs, and understand what failed. Same with tests - the agent should run them locally, not just hope CI catches issues later. If your agent can't check CircleCI status or read test output, the feedback loop is broken.

The pattern: anything that gives automated feedback helps. If a human would catch an issue in review, try to automate that check so the agent catches it first.
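
To make this concrete, here's a minimal CI sketch using GitHub Actions. It assumes a TypeScript project with npm scripts named `lint`, `typecheck`, and `test` - swap in whatever your stack uses. The important part is that the agent can run the exact same commands locally:

```yaml
# .github/workflows/ci.yml - illustrative; the script names are assumptions
name: ci
on: [push, pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci
      - run: npm run lint       # immediate style feedback
      - run: npm run typecheck  # type errors surface here
      - run: npm test           # unit and smoke tests
```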

### 2. Entry Points

The agent needs to know where to start. There's a hierarchy that works:

- **README.md** - Still the main entry point. What the repo does, how to run it, how to test it. This should work for humans AND agents.

- **[AGENTS.md](https://agents.md/)** - Agent-specific instructions. This is becoming a de facto standard - over 20,000 repos on GitHub use it now. Put things here that a human would figure out but an agent might miss: cloud environment setup, specific workflows, gotchas.

- **Tool-specific files** - `CLAUDE.md`, `.cursorrules`, etc. If you use one tool primarily, you might have these. But I usually just make them reference `AGENTS.md` to avoid duplication.

Pro tip: keep `AGENTS.md` lean. It should give context and hints, not detailed step-by-step workflows. There's much more on this in [GitHub's research](https://x.com/github/status/2003502651422449901). If you need a starting point, here's [my AGENTS.md template](https://gist.github.com/chaliy/9f79c16133810b7077bec5a650317238#file-agents-md).
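
For illustration, a lean `AGENTS.md` might look something like this (the project details are invented):

```markdown
# AGENTS.md

## Context
Payments API. Node 22, TypeScript, Postgres. Specs live in `specs/`.

## Commands
- `npm run dev` - start locally
- `npm test` - run tests; always run before finishing a task
- `npm run lint && npm run typecheck` - must pass before opening a PR

## Gotchas
- Integration tests need Docker running
- Never edit files under `generated/`
```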

### 3. Specifications

Some decisions can't be figured out from code. Business rules, architectural choices, branding guidelines - this stuff lives in people's heads or in Confluence pages that nobody reads.

I keep a `specs/` folder with markdown files. Short documents about:

- Architecture decisions
- Domain concepts
- Style guidelines
- Integration requirements

**Key rules:**

- Keep them short. If specs are too detailed, the agent can't distinguish the spec from generated content.
- Keep them current. Specs that say one thing while code does another are worse than no specs.
- Update specs with PRs. When you change behavior, update the spec.
- Keep short information on considered and discarded approaches to help the agent understand the reasoning behind decisions.
- Make sure specs are mentioned in `AGENTS.md`, and that updating them is part of the pre-PR checklist.

I periodically ask the agent to review recent commits and sync up specs. Works surprisingly well.
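
For example, a spec in this style might look like the following sketch (the domain details are invented):

```markdown
# specs/payment-retries.md

## Decision
Failed payments are retried 3 times with exponential backoff, then
moved to a manual review queue.

## Discarded alternatives
- Unlimited retries: risked duplicate charges with flaky providers.
- Straight to manual review: too much load on the support team.

## Last verified
2025-12, against `src/payments/retry.ts`
```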

### 4. Progressive Disclosure

Here's where it gets interesting. You don't want to dump everything into `AGENTS.md`. That file would become massive and the agent would be overwhelmed.

The solution is progressive disclosure - show information only when needed.

**Rules/Workflows** - Most AI tools support some form of "rules" that load on demand. Cursor has `.mdc` files. Claude Code has skills. The idea is the same: brief description visible always, full content loads when relevant.

Example: you might have a rule for "how to create a PR in this repo". The agent sees "PR creation guidelines" in the list, but only loads the full 50-line workflow when actually creating a PR.
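
In Cursor, for instance, such a rule is an `.mdc` file under `.cursor/rules/` - the `description` is always visible, the body loads on demand. A sketch (the workflow steps are illustrative):

```markdown
---
description: PR creation guidelines for this repo
alwaysApply: false
---

# Creating a PR

1. Run `npm run lint && npm test`; both must pass.
2. Update any spec in `specs/` affected by the change.
3. Use the title format `type(scope): summary`.
4. Fill in the "Testing done" section of the PR template.
```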

### 5. [Skills](https://agentskills.io/home) (Encapsulated Capabilities)

Rules are just text. But sometimes you need more. That's where [Anthropic's skills](https://agentskills.io/home) come in.

Skills are like rules but they can include scripts and resources. Everything encapsulated together.

For instance, I have a skill that checks CircleCI status. It includes:

- Scripts to query the API
- Instructions on how to interpret results
- Common failure patterns and what they mean

The agent triggers the skill, runs the scripts, interprets the output. Fully self-contained.

Claude Code has native skill support. Other tools are catching up. But even if your tool doesn't support skills natively, you can describe them in `AGENTS.md` and the agent will figure it out.
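
For Claude Code, that means a folder like `.claude/skills/circleci-status/` containing a `SKILL.md` manifest plus the scripts it mentions. A rough sketch of mine (the script name and failure patterns are illustrative):

```markdown
---
name: circleci-status
description: Check CircleCI status for the current branch and interpret failures
---

Run `scripts/check_status.sh <branch>` (requires `CIRCLE_TOKEN` to be set).

Common failure patterns:
- `lint` job red: run `npm run lint -- --fix` locally, commit, push again.
- `e2e` job flaky: retry once before investigating.
```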

### 6. Testing Documentation

You won't review every line of AI-generated code. You need the agent to self-verify.

Beyond unit tests, I keep human-readable test cases:

- How to smoke test the main flows
- What screens to check after UI changes
- How to verify API changes work

This gives the agent a checklist. After making changes, it can run through verification steps before calling the task done.
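
A testing doc in this spirit can be as simple as a checklist (contents invented for illustration):

```markdown
# docs/smoke-tests.md

## After any API change
- [ ] `npm run dev` starts cleanly; `curl localhost:3000/health` returns 200
- [ ] Create an order via `POST /orders`; it shows up in the admin UI

## After UI changes
- [ ] Login page renders and accepts the seeded test user
- [ ] Checkout flow completes end to end with the test card
```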

At the moment, running these test cases is still unreliable, but I expect this to improve quickly.

## What About Tool Fragmentation?

Yeah, it's a mess. Different tools want different config files:

- Copilot: `.github/copilot-instructions.md` - but GitHub Copilot already supports `AGENTS.md`, so just adopt `AGENTS.md`
- Claude: `CLAUDE.md` - I tend to have a `CLAUDE.md` that contains just `@AGENTS.md`
- Cursor: `.cursorrules` - but Cursor already supports `AGENTS.md`

`AGENTS.md` is trying to unify this. Most tools support it now. My approach: use `AGENTS.md` as the source of truth and make the tool-specific files just reference it.
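
In practice the tool-specific file can be a single line. My `CLAUDE.md`, for example, is just an import:

```markdown
@AGENTS.md
```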

## Minimum Viable AI-Ready Repo

If you're starting from zero, prioritize:

1. Working CI that runs tests and linting
2. `README.md` with setup and test commands
3. `AGENTS.md` with project-specific context
4. Type checking if your language supports it

That's 80% of the value. Everything else is optimization.

## The Unexpected Insight

Someone said this and it stuck with me: "AGENTS.md's greatest contribution isn't standardizing AI configuration—it's forcing developers to finally write decent documentation."

True. Making your repo AI-ready is basically making it developer-ready. Clear entry points, automated verification, documented decisions. Things we should have been doing all along.

The AI just forces the issue.

![Unexpected Insight](unexpected.png)
