How to Make Your Repository AI Ready and Boost Productivity
Most teams treat AI like autocomplete, but the real productivity gains come when your repository is structured for AI success. Learn how to make your repo AI ready with the right context, tools, and workflows so both developers and AI agents can ship better code, faster.
- Ahmet Acer
1. The Big Promise
What if I told you that your repository could become significantly more productive with just the right setup?
Here's the reality: Most teams are using AI like a fancy autocomplete tool. They're missing the bigger picture. The key isn't in the AI model itself. It's in making your codebase AI ready.
When you structure your repository with the right context, documentation, and tooling, great things happen:
- AI stops hallucinating your project's patterns and actually follows them
- Code reviews become faster because AI suggests solutions that match your standards
- Repetitive tasks disappear through intelligent automation
- Quality issues get caught before they reach production via AI powered validation
The difference between a regular repo and an AI ready one? Context.
Think of it this way: You wouldn't hire a brilliant engineer and give them zero documentation, no examples, and no understanding of your codebase. Yet that's exactly what most teams do with AI.
The companies already seeing significant productivity gains aren't using different AI models. They're using the same models with better context.
This guide will show you exactly how to transform your repository from an AI stumbling block into an AI productivity multiplier.
2. What Slows Teams (and Their AI Agents) Down
Most engineering time is spent not on true problem solving, but on repetitive tasks, searching for information, and fixing issues that could have been prevented. These frictions affect both humans and AI helpers, regardless of technology, language, or layer.
Human friction (classic pain):
- Constantly switching between documentation, communication channels, examples, and project resources.
- Repeating boilerplate work and setup for tasks, tests, or configurations.
- Inconsistent quality: style, depth of validation, security, performance, and accessibility.
- Long troubleshooting sessions for minor issues or misconfigured tools.
- Difficulty finding or following best practices and guidelines.
- Skipping important quality or security steps under time pressure.
AI agent friction (new pain):
- Suggesting resources, APIs, or patterns that don’t exist in the project.
- Generating shallow tests or checks that miss edge cases.
- Ignoring important guidelines for quality, security, or performance.
- Using outdated or refactored project context.
- Proposing large, hard to review changes.
- Duplicating boilerplate instead of reusing shared solutions.
- Creating brittle tests or partial edits that leave unused code or unhandled cases.
- Producing inconsistent or non-deterministic suggestions, leading to unpredictable results and wasted review cycles.
- Struggling to identify the right tool or resource for a given task, sometimes getting stuck in a loop of trial and error instead of making progress.
An AI agent given wrong or missing context gets confused and goes into a loop of bad decisions.
Why these happen:
- Project context is scattered across documentation, wikis, source files, and other resources.
- Guidelines are not easily accessible or machine readable.
- No automated feedback loop to validate AI output before review.
- Lack of clear guardrails or examples for best practices.
Result: Teams spend time on rework, reviews become noisy, and trust in AI output drops. This slows progress instead of accelerating it.
The solution isn’t just “use a better model.” It’s about structuring your project so both humans and AI agents can easily find reliable context, generate higher-quality work, and get fast validation.
Real-World Example: When AI Gets Lost in Our Monorepo
Here's what happened to our team before we made our repository AI ready:
The Package Manager Mix-up:
GitHub Copilot consistently suggested `npm` and `npx` commands (and some I don't even remember), despite our project using `pnpm`. When these failed, it would suggest installing npm globally, potentially breaking our carefully configured development environment and leading to cryptic errors and confusion.
The Monorepo Navigation Problem:
Our codebase has multiple apps and libraries (`apps/customer-portal`, `apps/admin-dashboard`, `libs/ui-components`, etc.). Copilot would frequently suggest running commands from the wrong directory:
- Suggesting `npm test` from the root when tests needed to run from `apps/customer-portal`
- Generating import paths like `import { Button } from './components/Button'` when it should be `import { Button } from '@dnb-wm/ui-components'`
- Creating new components in the wrong directory structure
The list is very long...
3. Why AI Needs Context
AI isn’t magic. Dropping an AI agent into your project without context is like hiring a talented engineer and giving them no onboarding, no documentation, and no map of how things work. The result? Missed opportunities, wasted effort, and inconsistent quality.
To get real productivity gains, you need to treat AI like a teammate who thrives on clarity and guidance. That means:
- Centralize examples and patterns: Make sure your design system, code samples, and reusable solutions are easy to find and reference.
- Surface guidelines and anti-patterns: Document best practices, common mistakes, and quality standards in a way that both humans and AI can access and understand.
- Expose key project context: API definitions, data models, configuration files, and architectural diagrams should be discoverable and up-to-date.
- Automate context delivery: Use tools or scripts to bring relevant docs, rules, and examples directly into the developer workflow. Don’t rely on memory or manual lookup.
- Provide one-shot tooling: Instead of relying on AI to guess which commands or steps to run, build or integrate smart tools that accomplish the entire task in a single action, such as running tests, deployments, or builds, scaffolding components, or validating code. This lets both humans and AI agents achieve the intended result quickly and reliably, without manual orchestration or trial and error (see the sketch below).
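To make that concrete, here is a minimal sketch of a one-shot test runner. It is an illustration only; the script name, the package names, and the exact pnpm invocation are assumptions, not our actual tooling. The point is that the whole "how" lives inside the tool, so a developer or an AI agent only has to say which app to test.

```typescript
// run-app-tests.ts - hypothetical one-shot helper (illustrative sketch).
// Usage: ts-node run-app-tests.ts customer-portal
import { execSync } from "node:child_process";

function runAppTests(appName: string): void {
  // All the "how" lives here: the right package manager, the workspace
  // filter, and any standard flags. Callers only say "which app".
  const command = `pnpm --filter ${appName} test`;
  console.log(`Running: ${command}`);
  execSync(command, { stdio: "inherit" });
}

runAppTests(process.argv[2] ?? "customer-portal");
```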
When you do this, AI can:
- Invoke the right tool and run the desired command in a single step, generating code that matches your standards without looping or introducing unneeded, hallucinated configuration.
- Avoid common mistakes and anti-patterns by using validated workflows.
- Suggest improvements and catch issues early, thanks to automated checks.
- Help onboard new team members faster, with less friction and clearer guidance.
Pro tip #1: The more you structure and surface your project’s context, the more valuable and reliable your AI helpers become. Think of it as building a knowledge base for both humans and machines.
Pro tip #2: Context alone isn’t enough. Providing the right tooling and making those tools easy to use is just as crucial. If your workflow is overloaded with complex commands or scattered scripts, both humans and AI agents can get lost or overwhelmed. Too much context, without clear and simple tools, can actually make your AI agent less effective, leading to confusion, slower suggestions, and missed opportunities. The key is to balance rich, relevant context with streamlined, one-shot tools that let your team and your AI agents act confidently and efficiently.
An AI agent performs better with the right context, the right tooling, and your instructions.
4. How We Did It: MCP Server + Unit Test Writer Agent
To make our monorepo truly AI ready, we introduced two core tools: the Model Context Protocol (MCP) server and a specialized Unit Test Writer Agent, tightly integrated with GitHub Copilot.
🛠 MCP Server (@dnb-wm-tools/mcp)
The MCP server provides a suite of tools that automate context delivery, quality enforcement, and workflow integration for both humans and AI agents:
- get_eufemia_component_examples: Fetches live examples and information about DNB Eufemia design system components directly from GitHub, ensuring up to date UI references.
- get_unit_testing_guidelines: Supplies unit testing guidelines specific to the DNB Web WM Apps monorepo, so all tests follow best practices.
- get_anti_testing_patterns: Lists anti patterns to avoid in unit tests, supporting guideline verification and higher test quality.
- get_mandatory_testing_patterns: Provides mandatory patterns for test implementation and verification, enforcing consistent standards.
- check_test_anti_patterns: Scans test files for anti-pattern violations, returning detailed reports with file locations for fast remediation.
- run_jest: Executes Jest tests within the monorepo, with support for custom arguments, enabling automated and reliable test runs.
- get_api_definitions: Retrieves API type definitions for any `@dnb-api-clients/*` npm library, making API usage clear and type-safe.
- get_project_structure_documentation: Delivers comprehensive documentation of the monorepo's structure, technology stack, and development guidelines for easy onboarding and navigation.
- get_api_path_builder_function_definitions: Supplies API path builder function definitions, useful for axios mock adapters in tests and robust API mocking.
- run_lint_check_command: Runs lint checks for any app or library, ensuring code quality and adherence to coding standards.
- run_typescript_check_command: Runs TypeScript error checks, catching type errors early and maintaining code reliability.
🤖 Unit Test Writer Agent (Copilot Integration)
The Unit Test Writer Agent is a specialized AI agent, configured via `unit-test-agent.chatmode.md`, designed to enforce comprehensive and guideline-compliant unit test coverage for React/TypeScript components and utilities. Its workflow is strictly governed by DNB testing standards and MCP tool outputs.
- Strict Workflow Enforcement: The agent follows a mandatory sequence for test creation, starting with fetching unit testing guidelines, mandatory patterns, and anti-patterns using MCP tools before any analysis or code is written.
- Mandatory MCP Tool Usage: Every test must be verified line-by-line against the output of MCP tools (`get_unit_testing_guidelines`, `get_mandatory_testing_patterns`, `get_anti_testing_patterns`, `check_test_anti_patterns`). No assumptions or shortcuts are allowed.
- Automated Quality Gates: Tests are executed using the `run_jest` tool (never direct Jest commands), and validated for coverage, lint, and TypeScript errors using MCP tools. All errors and guideline violations require immediate refactoring.
- Zero Tolerance Compliance: Any deviation from guidelines triggers mandatory refactoring. The agent double-checks every assertion, mock, and pattern for strict compliance.
- Terminal Command Protocol: All Jest executions must use the `run_jest` tool. Other commands must follow a specific wrapping pattern to ensure proper output capture and debugging.
This rigorous process ensures:
- Consistent, high quality test coverage
- Elimination of anti patterns and guideline violations
- Reliable, reproducible test results
🚀 How It Works
1. Select the Unit Test Agent: Choose the specialized agent from the Copilot dropdown.
2. Provide the Component: Specify the component you want to generate tests for.
3. Automated Scaffolding: The agent creates test scaffolding, enforces patterns, and prevents anti-patterns.
4. Validation: Tests are run through Jest, lint, and TypeScript checks automatically.
5. Review: Only guideline compliant tests move forward, saving time and boosting quality.
And this is how it looks during the reasoning process:
Verification
5. How You Can Do This Too
Approach: Start Small, Build Momentum
Begin with a single pain point. Maybe it’s test writing, mock setup, or documentation lookup. Make it your pilot for improvement and ask yourself:
- What slows you or your team down?
- If you were to solve this manually, what would you need?
- What resources, examples, or documentation do you always look for?
- Which commands do you run most often?
- What patterns or standards should you follow?
- Where do mistakes or wasted time usually happen?
Once you’ve mapped out the steps, surface those resources and workflows for both humans and AI agents:
- Document your process and make examples easy to find.
- Use tools like MCP to bring guidelines, anti patterns, and examples directly into your IDE or Copilot chat.
- Share these resources with your team and AI agents.
- Automate repetitive steps. Start with one (tests, mocks, lint checks) and expand as you build trust.
- Validate everything: run checks, compare against guidelines, and refactor as needed.
- Don’t try to automate everything at once. Build your AI ready environment step by step, focusing on clarity, reliability, and feedback.
- Celebrate small wins and share improvements with your team.
Bottom line: Sustainable change starts with one well-solved problem. The more you surface context and structure your workflow, the more productive and reliable your team and your AI agents will become.
Technical Implementation: Tools & Best Practices
Building an AI ready repository requires strategic tooling that surfaces context and automates workflows. Here's a framework for implementing these tools across different domains:
🔍 Context Discovery Tools
Make your project's knowledge instantly accessible to the AI agent. Here are some examples of MCP tools you could create:
- Guidelines & Standards:
  - `get_coding_guidelines`: Returns coding standards, style guides, and architectural principles
  - `get_security_patterns`: Provides security best practices and compliance requirements
  - `get_anti_patterns`: Lists common mistakes and what to avoid across domains
- Live Examples & Documentation:
  - `get_component_examples`: Fetches up to date UI component usage and API examples
  - `get_architecture_patterns`: Returns architectural patterns and design decisions
  - `get_integration_examples`: Provides working examples of third-party integrations
⚡ Workflow Automation Tools
Streamline repetitive tasks and reduce overhead for AI agents: handle all necessary commands, arguments, and configuration in isolation within the automation tool, and return only the necessary information to the AI agent (a minimal sketch follows the list below).
- Code Quality & Validation:
  - `run_quality_checks`: Executes comprehensive linting, formatting, and type checking
  - `check_security_vulnerabilities`: Scans for security issues and dependency vulnerabilities
  - `validate_code_patterns`: Checks code against established patterns and anti-patterns
  - `run_accessibility_audit`: Validates accessibility standards for frontend components
- Documentation & Communication:
  - `generate_api_docs`: Creates up-to-date API documentation from code
  - `update_changelog`: Automatically updates changelogs based on commits and PRs
🔧 Implementation Strategy
- Start with Your Biggest Pain Point:
  - Identify the most time-consuming manual task in your workflow
  - Build one tool that automates this specific problem
  - Validate with your team and iterate based on feedback
- Layer Context Gradually:
  - Begin with static documentation and guidelines
  - Add dynamic examples and live data sources
  - Integrate validation and feedback loops
- Scale Through Standardization:
  - Create reusable patterns for similar tools
  - Document tool usage and best practices
  - Share successful implementations across teams
The key is building tools that AI agents can use effectively, creating a feedback loop that improves code quality, reduces manual overhead, and accelerates development velocity.
Sandboxes can isolate context from the LLM. (https://blog.langchain.com/context-engineering-for-agents)
I find this blog post very useful; it explains different techniques of context engineering, so please take a look: Link to the blog.
Creating Specialized AI Agents with GitHub Copilot Chatmode
GitHub Copilot supports custom chatmodes that let you create specialized AI agents with specific instructions, workflows, tool access, and strict compliance requirements. Here's how to build a domain specific agent:
🤖 Example: Unit Test Writer Agent Configuration
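Below is a trimmed, illustrative sketch of what such a chatmode file can look like. The real `unit-test-agent.chatmode.md` contains far more detailed instructions, and the frontmatter keys and tool identifiers depend on your VS Code/Copilot setup and how your MCP server is registered.

```markdown
---
description: 'Writes guideline-compliant unit tests for React/TypeScript components'
tools: ['codebase', 'editFiles', 'dnb-wm-tools']
---
You are the Unit Test Writer Agent for this monorepo. Before writing any test you MUST:
1. Fetch get_unit_testing_guidelines, get_mandatory_testing_patterns and get_anti_testing_patterns.
2. Analyse the target component and plan test cases against those guidelines.
3. Write the tests, then verify them with check_test_anti_patterns.
4. Run the tests with the run_jest tool - never call Jest directly.
5. Fix every guideline violation, lint error and TypeScript error before finishing.
```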
🎯 Additional Chatmode Examples
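The same pattern generalizes to other domains. As a purely illustrative example (not a chatmode we ship), a code review agent could be required to fetch your guidelines before commenting on a change:

```markdown
---
description: 'Reviews changes against our coding guidelines and anti-patterns'
tools: ['codebase', 'dnb-wm-tools']
---
Before reviewing any change, call get_coding_guidelines and get_anti_patterns.
Only flag violations of those guidelines, and propose small, reviewable fixes.
```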
Key Benefits of Specialized Chatmodes
- Enforced Workflows: Agents follow strict, predefined sequences
- Tool Integration: Direct access to your MCP server and custom tools
- Domain Expertise: Specialized knowledge for specific tasks
- Quality Gates: Mandatory validation and compliance checks
- Consistent Output: Reduces variability and improves reliability
Building Your Own MCP Server
The Model Context Protocol (MCP) is an open standard that lets you build custom tools for AI assistants. Here's how to create an MCP server for your repository:
📝 Basic MCP Server Setup (TypeScript)
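The snippet below is a minimal sketch using the official MCP TypeScript SDK (`@modelcontextprotocol/sdk`). The file layout and the `docs/testing-guidelines.md` path are assumptions; check the SDK documentation for the exact API of the version you install.

```typescript
// src/index.ts - minimal MCP server skeleton.
// Sketch based on the official @modelcontextprotocol/sdk; verify the exact
// API against the SDK version you install.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { readFile } from "node:fs/promises";

const server = new McpServer({
  name: "repo-tools",
  version: "1.0.0",
});

// A simple context-discovery tool: surface the repo's testing guidelines
// so the AI agent can request them on demand instead of guessing.
server.tool(
  "get_unit_testing_guidelines",
  "Returns the unit testing guidelines for this repository",
  async () => ({
    content: [
      { type: "text", text: await readFile("docs/testing-guidelines.md", "utf8") },
    ],
  })
);

async function main() {
  // Communicate with the AI client (Copilot, Claude, ...) over stdio.
  await server.connect(new StdioServerTransport());
}

main().catch((error) => {
  console.error("MCP server failed to start:", error);
  process.exit(1);
});
```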
🛠️ Example Tool Implementation (TypeScript)
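Building on the server from the previous snippet, here is how a parameterised workflow tool might look. This `run_jest` sketch is illustrative (the real @dnb-wm-tools version handles more arguments and filters its output); the zod schema declares exactly which inputs the agent may pass, and the command construction is an assumption about a pnpm workspace.

```typescript
// Sketch of a parameterised workflow tool, registered on the `server`
// instance defined above (illustrative, not the real implementation).
import { z } from "zod";
import { execSync } from "node:child_process";

server.tool(
  "run_jest",
  "Runs Jest for a given app or library in the monorepo",
  {
    project: z.string().describe("Workspace package to test, e.g. customer-portal"),
    testPathPattern: z.string().optional().describe("Optional Jest test path pattern"),
  },
  async ({ project, testPathPattern }) => {
    // The tool owns the command construction, so the agent never has to
    // guess package-manager or Jest flags.
    const pattern = testPathPattern ? ` -- ${testPathPattern}` : "";
    try {
      const output = execSync(`pnpm --filter ${project} test${pattern}`, {
        encoding: "utf8",
        stdio: "pipe",
      });
      return { content: [{ type: "text", text: output }] };
    } catch (error) {
      const failed = error as { stdout?: string; stderr?: string };
      return {
        content: [{ type: "text", text: failed.stderr ?? failed.stdout ?? String(error) }],
        isError: true,
      };
    }
  }
);
```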
⚙️ Example MCP Configuration for Claude Desktop
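For Claude Desktop, you register the built server in `claude_desktop_config.json` (on macOS typically under `~/Library/Application Support/Claude/`). The path below is a placeholder for wherever your compiled server lives.

```json
{
  "mcpServers": {
    "repo-tools": {
      "command": "node",
      "args": ["/absolute/path/to/your-repo/tools/mcp/dist/index.js"]
    }
  }
}
```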
🔧 Example VS Code Copilot Integration
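In VS Code, a workspace-level `.vscode/mcp.json` makes the same server available to Copilot agent mode. The exact schema can vary between VS Code versions, so treat this as a sketch.

```json
{
  "servers": {
    "repo-tools": {
      "type": "stdio",
      "command": "node",
      "args": ["${workspaceFolder}/tools/mcp/dist/index.js"]
    }
  }
}
```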
Or you can use the VS Code interface to add your locally built MCP server in a couple of steps.
Resources & Next Steps
- MCP Documentation: Complete guide to building MCP servers
- MCP TypeScript SDK: Official TypeScript implementation
- Example MCP Servers: Community-built servers for inspiration
- VS Code GitHub Copilot Extension: Integrate MCP with VS Code Copilot
- Claude Code MCP Guide: Setting up MCP with Claude
Start with a simple tool that solves one specific pain point, then gradually expand your server's capabilities. The key is making your project's context and workflows easily accessible to both humans and AI assistants.
6. Benefits & Real-World Impact
Based on a survey we conducted with team members who have interacted with the Unit Test Writer Agent tooling, we have gathered remarkable results across our Web Development Team:
📊 Time Savings Breakdown:
- Daily users: 75%+ time reduction on test writing
- Weekly users: 50-75% time reduction
- Occasional users: 25-50% minimum savings
📊 Adoption & Satisfaction:
- 100% recommendation rate - Every surveyed developer would recommend it to others
- 75% use it weekly or daily - High adoption indicates genuine value
- Consistent "Good" quality ratings - Reliable output that developers trust
Developer Testimonials:
"I can do something else while the Agent is writing unit tests"
This highlights how the agent enables true multitasking. Developers can focus on feature implementation while tests are generated in parallel.
"The agent usually follows guidelines with minor issues, which is much better than starting from scratch"
Even with minor tweaks needed, the foundation provided saves significant time and mental overhead.
Direct feedback (not included in the survey):
🎯 Key Benefits:
- Dramatically faster test creation through automation and pattern enforcement
- Consistent quality via mandatory pattern compliance and anti-pattern prevention
- Reduced cognitive load with contextual documentation and live examples
- Seamless integration with existing development tools and workflows
- Frees up developers for higher-value tasks while tests are generated
7. Closing: From Friction to Flow
AI isn’t here to replace developers, it’s here to amplify what great teams can do. When you invest in making your repo AI ready, you transform daily work from a series of frustrating bottlenecks into a smooth, focused flow.
Context is the key. The more you surface examples, guidelines, and project knowledge, the more your team and your AI agents can deliver high quality work, faster and with less stress.
The real win? You spend less time on glue work and re-discovery, and more time solving real problems, shipping features, and growing your skills. AI becomes a trusted teammate, not just a tool.
The future of development isn’t about working harder, it’s about working smarter, with context and collaboration. Build the environment, and the productivity multipliers will follow.
Ready to move from friction to flow? Start with one area, surface the context, and let your team and your AI do their best work.
Thanks for reading :)