Why your team's AI tools are not working (and how to fix it)
The gap between AI-assisted teams (10-30% gains) and AI-ready teams (100-300% gains) is not the tools. It is the context.
The problem: the context gap
After deploying GitHub Copilot across my team, I observed a significant performance disparity:
- Senior developers on greenfield projects: 40-50% faster
- Mid-level developers on legacy systems: 10-15% faster, frustrated
The root cause was not the tools themselves. It was the missing context in the codebase.
The context problem explained
When onboarding new developers, teams invest substantial time teaching architectural decisions, patterns, and unwritten rules. Yet when AI tools are deployed, they lack this foundational understanding.
AI performs well in limited scenarios: clean demo projects, well-documented tutorial code, greenfield development. It struggles in real-world contexts: legacy codebases with years of accumulated decisions and perpetually incomplete documentation.
Real-world example: rate limiting
Without context
An AI generates standard rate-limiting code using IP-based throttling. The code is syntactically correct, but it fails to account for:
- The multi-tenant architecture (IP-based limiting breaks for customers behind shared IPs)
- Different rate tiers by customer type
- The Redis-backed distributed rate limiting requirement
- Response headers required for customer dashboards
- Audit logging integration
- Conflicts with existing service-level limiters
Result: 7 hours total (3 hours implementation + 2 hours review + 2 hours fixing).
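For illustration, here is roughly the shape of that context-free output: a hypothetical Express/TypeScript middleware that throttles per IP in process memory. Every name in it is a stand-in, and it fails each constraint listed above while looking perfectly reasonable at a glance.

```typescript
// Hypothetical context-free output: per-IP, in-process throttling.
import type { Request, Response, NextFunction } from "express";

const hits = new Map<string, { count: number; resetAt: number }>();
const LIMIT = 100; // requests per window
const WINDOW_MS = 60_000; // one minute

export function rateLimit(req: Request, res: Response, next: NextFunction) {
  const key = req.ip ?? "unknown"; // breaks for tenants behind shared IPs
  const now = Date.now();
  const entry = hits.get(key);

  if (!entry || now >= entry.resetAt) {
    hits.set(key, { count: 1, resetAt: now + WINDOW_MS });
    return next();
  }

  entry.count += 1;
  if (entry.count > LIMIT) {
    // No tier awareness, no rate-limit headers, no audit logging,
    // and the Map lives in one process: useless across instances.
    return res.status(429).send("Too Many Requests");
  }
  return next();
}
```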
With proper context
The prompt includes the system context:
- Multi-tenant system: rate limit per tenant, not per IP
- The existing middleware to use
- Where rate limits are configured by tier
- Redis for distributed rate limiting
- The required response headers
- Automatic metrics
The AI generates production-ready code that integrates correctly with existing systems.
Result: 20 minutes, passes review on first try.
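Here is a minimal sketch of what the context-rich prompt yields instead, assuming an Express + TypeScript stack with ioredis. The helpers `getTenant`, `limitForTier`, and `auditLog`, the header names, and the tier values are assumptions standing in for a team's real conventions.

```typescript
// Sketch of context-aware output: per-tenant, Redis-backed, tiered.
import type { Request, Response, NextFunction } from "express";
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");
const WINDOW_MS = 60_000; // fixed one-minute window

// --- Hypothetical helpers standing in for the team's real services ---
function getTenant(req: Request): { id: string; tier: "free" | "pro" } {
  return (req as any).tenant; // assumed to be set by upstream auth middleware
}
function limitForTier(tier: "free" | "pro"): number {
  return tier === "pro" ? 1000 : 100; // in practice, read from tier config
}
function auditLog(event: string, data: Record<string, unknown>): void {
  console.log(event, data); // placeholder for the real audit pipeline
}

export async function tenantRateLimit(req: Request, res: Response, next: NextFunction) {
  const tenant = getTenant(req); // limit per tenant, not per IP
  const limit = limitForTier(tenant.tier); // tier-specific limit
  const key = `ratelimit:${tenant.id}`;

  // Fixed-window counter shared by all instances via Redis.
  const count = await redis.incr(key);
  if (count === 1) await redis.pexpire(key, WINDOW_MS);
  const ttl = await redis.pttl(key);

  // Headers the customer dashboard reads.
  res.setHeader("X-RateLimit-Limit", String(limit));
  res.setHeader("X-RateLimit-Remaining", String(Math.max(0, limit - count)));
  res.setHeader("X-RateLimit-Reset", String(Date.now() + Math.max(ttl, 0)));

  if (count > limit) {
    auditLog("rate_limit_exceeded", { tenantId: tenant.id, count });
    return res.status(429).json({ error: "rate_limit_exceeded" });
  }
  return next();
}
```

A fixed window via INCR/PEXPIRE keeps the sketch short; a real implementation might use a sliding window or token bucket, which is exactly the kind of decision the context file should record.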
The caching incident
An AI suggestion for a “simple” in-memory caching optimization missed critical system constraints: multiple instances require a distributed cache, user updates need invalidation across instances, sensitive data requires encryption, and in-memory approaches had already caused security incidents.
Without context, developers implement flawed patterns. With documented context, the AI recommends the battle-tested CacheManager service that includes proper security measures.
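To make the contrast concrete, here is a sketch in the same stack. Only CacheManager's existence and purpose come from the story above; its import path, options, and method names are assumptions.

```typescript
import { CacheManager } from "./services/cache-manager"; // hypothetical path and API

interface UserProfile {
  id: string;
  email: string;
}

// What the context-free suggestion looks like: one process's Map.
// Other instances never see writes, nothing invalidates, data sits in plaintext.
const naiveCache = new Map<string, UserProfile>();

// What the documented context steers toward: the battle-tested service.
const profiles = new CacheManager<UserProfile>("user-profiles", {
  distributed: true, // shared across instances
  encrypt: true,     // sensitive data encrypted at rest
  ttlSeconds: 300,
});

export async function getProfile(userId: string): Promise<UserProfile | undefined> {
  return profiles.get(userId); // invalidation propagates to every instance
}
```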
AI readiness assessment
To evaluate preparedness, teams should assess whether they can answer these questions from documentation alone:
- What patterns does this component follow?
- What systems does it integrate with?
- What are the non-obvious constraints?
- Where are similar implementations to reference?
- What mistakes have been made here before?
- Why is the code structured this way?
Most teams find that documentation does not exist for 70% of what they need.
Implementation strategy
GitHub Copilot and Azure SWE agents support a special file, copilot-instructions.md, that provides repository-wide context. This file should include an architecture overview, critical patterns with “NEVER” warnings, security requirements, integration points, common mistakes to avoid, and reference implementations.
Creation time: 3-4 hours initially.
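A condensed skeleton of what such a file can look like. Every concrete detail below is drawn from the examples in this essay; the paths are illustrative:

```markdown
# Copilot instructions

## Architecture overview
Multi-tenant API. Tenants share IPs; anything shared across instances
goes through Redis.

## Critical patterns
- NEVER rate-limit by IP. Use the tenant rate-limiting middleware.
- NEVER cache in process memory. Use the CacheManager service.

## Security requirements
- Cached user data must be encrypted; in-memory caching has caused
  security incidents here before.

## Integration points
- Rate limits by tier: config/rate-limits.yml (illustrative path)
- Audit logging is required when a rate limit triggers.

## Common mistakes
- IP-based throttling breaks customers behind shared IPs.

## Reference implementations
- Rate limiting: src/middleware/tenant-rate-limit.ts (illustrative path)
```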
Shared prompts library
Teams should create a /prompts folder with templates for common scenarios: adding OAuth providers, creating API endpoints, adding background jobs. Each template encodes the team’s institutional knowledge.
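For example, a template for the “creating API endpoints” scenario might look like the following; the constraints are pulled from this essay's examples, and the angle-bracket placeholders mark what a developer fills in per task:

```markdown
# Prompt template: add an API endpoint

Context (always include):
- This is a multi-tenant system. Rate limit per tenant, never per IP,
  using the existing tenant rate-limiting middleware.
- Use CacheManager for any caching; never in-process memory.
- Set the rate-limit response headers the customer dashboard reads.
- Log rate-limit and auth failures through the audit pipeline.
- Follow the reference implementation at <path to a similar endpoint>.

Task (fill in per request):
- Route and method: <...>
- Request/response shapes: <...>
- Tier-specific behavior, if any: <...>
```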
Phased implementation
Week 1: Create copilot-instructions.md. Document core patterns and integration points. Add security requirements and common mistakes. Link to reference implementations. Time investment: 3-4 hours.
Week 2: Build 2-3 common prompts. Pick the most frequent tasks. Create prompt templates. Test them on real tasks. Time investment: 2-3 hours.
Week 3: Measure. Track time saved. Gather feedback. Refine and expand.
Compound effect
- First task using prompts: saves 3-5 hours
- Fifth task: saves 8-10 hours (refined prompts)
- Twentieth task: saves 10+ hours (battle-tested prompts)
The hidden cost of insufficient context
Without proper context, teams pay for it elsewhere: extra hours making generated code work, senior developer time consumed in code review, the risk of shipping code that runs but does not fit, accumulating technical debt, and AI tooling that never improves its understanding of the codebase.
The gap between AI-assisted teams and AI-ready teams is not the tools. It is the context. This distinction will become a competitive advantage over the coming years.