February 2026
BCG Enterprise Agent Design Patterns (2025)
Reference: Building Effective Enterprise Agents (BCG, November 2025)
BCG's AI Platforms Group published a comprehensive framework for building production-grade enterprise agents.
Agent Design Cards (ADCs)
Agent Design Cards are BCG's standardized blueprint for documenting agent requirements. An effective ADC should:
- Define purpose - Clearly describe what the agent is designed to achieve
- Clarify boundaries - Specify the agent's role, scope, and points of human oversight
- Detail inputs and outputs - Make data sources, dependencies, and deliverables explicit
- Describe capabilities - Outline tools and capabilities needed for the agent's success
- Anticipate failure - Define fallback behavior, escalation paths, and guardrails
Example Agent Design Card
Agent Goal: Reduce processing time for loan applications
Metrics:
- 30% reduction in manual exception handling time
Skills, Tools & Capabilities:
- Document parsing and field validation
- Cross-system data reconciliation (CRM, Credit Bureau)
- Policy-based reasoning for exception routing
Agent Trigger: System-led
Input(s) & Output(s):
- Inputs: Loan application data, validation rules from policy database
- Outputs: Audit log of actions and corrections performed, exceptions
Fallback:
- Notify loan officer via workflow system for manual intervention
Priority: 1
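An ADC can also be captured as structured data so it can be versioned and validated alongside the agent's code. A minimal sketch in Python, using the example card above (the `AgentDesignCard` class and its field names are our own, not a BCG artifact):

```python
from dataclasses import dataclass

@dataclass
class AgentDesignCard:
    """Structured form of a BCG-style Agent Design Card."""
    goal: str
    metrics: list[str]
    capabilities: list[str]
    trigger: str          # e.g. "System-led" or "User-led"
    inputs: list[str]
    outputs: list[str]
    fallback: str
    priority: int

loan_adc = AgentDesignCard(
    goal="Reduce processing time for loan applications",
    metrics=["30% reduction in manual exception handling time"],
    capabilities=[
        "Document parsing and field validation",
        "Cross-system data reconciliation (CRM, Credit Bureau)",
        "Policy-based reasoning for exception routing",
    ],
    trigger="System-led",
    inputs=["Loan application data", "Validation rules from policy database"],
    outputs=["Audit log of actions and corrections performed", "Exceptions"],
    fallback="Notify loan officer via workflow system for manual intervention",
    priority=1,
)
```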
Agent Suitability Framework
Not every problem needs an agent. Use this framework to decide:
| | Low Risk/Governance | High Risk/Governance |
|---|---|---|
| High Complexity | Agent-led with Human Oversight | Human-led with Agent Support |
| Low Complexity | Agent-led (full autonomy) | Traditional Automation |
Key insight: If clear rules and basic automation deliver the desired outcome, avoid building agents for agents' sake.
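The 2x2 above can be expressed as a simple lookup, useful as a triage checklist during intake (the `suitability` function is our own illustration, not part of the framework):

```python
def suitability(complexity: str, risk: str) -> str:
    """Map (task complexity, risk/governance level) to the recommended
    approach from the 2x2 suitability framework."""
    table = {
        ("high", "low"):  "Agent-led with Human Oversight",
        ("high", "high"): "Human-led with Agent Support",
        ("low", "low"):   "Agent-led (full autonomy)",
        ("low", "high"):  "Traditional Automation",
    }
    return table[(complexity, risk)]
```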
Agent Maturity Horizons
| Horizon | Type | Description |
|---|---|---|
| 0 | Constrained agents | Predefined rules, single repetitive task |
| 1 | Single agents | Multi-step tasks in set environment, plans and acts alone |
| 2 | Deep agents | Orchestrator splits tasks for specialist agents |
| 3 | Role-based agents | Team of agents collaborate, distinct roles, handoffs |
| 4 | Agent mesh | Network of autonomous agents that self-organize |
Recommendation: Build toward Horizon 2 (deep agents) today. Fully autonomous mesh agents require mature reasoning and evaluation systems.
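The Horizon 2 pattern can be sketched as an orchestrator that splits a task, routes subtasks to specialist agents, and merges the results. A toy sketch under our own assumptions (the function names and the trivial lambda "agents" are illustrative placeholders, not real agent runtimes):

```python
def orchestrate(task, split, specialists, merge):
    """Horizon 2 sketch: split a task into (specialist, subtask) pairs,
    dispatch each to the named specialist agent, merge the results."""
    subtasks = split(task)
    results = [specialists[name](sub) for name, sub in subtasks]
    return merge(results)

# Toy run: a loan task split across a parser and a validator.
result = orchestrate(
    "loan",
    split=lambda t: [("parse", t), ("validate", t)],
    specialists={
        "parse": lambda s: f"parsed:{s}",
        "validate": lambda s: f"ok:{s}",
    },
    merge=" | ".join,
)
# result == "parsed:loan | ok:loan"
```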
Human Oversight Patterns
| Pattern | Description |
|---|---|
| Agent-assisted | Agent provides output that feeds into the user's normal workflow |
| Human-in-the-loop | Agent makes decision, awaits human approval |
| Human-on-the-loop | User observes outputs, can intervene if issues flagged |
| Human-out-of-the-loop | Agent acts without explicit human oversight |
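The human-in-the-loop pattern amounts to gating the agent's decision behind an approval step. A minimal sketch, assuming the decision and approval steps are injectable callables (the `approve` lambda below stands in for a real review UI):

```python
from typing import Callable, Optional

def human_in_the_loop(decide: Callable[[str], str],
                      approve: Callable[[str], bool]):
    """Wrap an agent decision so it only takes effect after human approval."""
    def run(task: str) -> Optional[str]:
        decision = decide(task)
        if approve(decision):
            return decision   # approved: act on the decision
        return None           # rejected: trigger fallback/escalation
    return run

gated = human_in_the_loop(
    decide=lambda task: f"route '{task}' to exception queue",
    approve=lambda decision: "exception" in decision,  # stand-in for a reviewer
)
```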
Design Principles
Start Simple, Iterate with Evals
- Begin with a single observe-reason-act loop
- Introduce sub-flows only when complexity causes brittleness
- Add specialized agents only when domain-specific tasks require them
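The single observe-reason-act loop above can be sketched in a few lines; the toy counter below is our own illustration of the control flow, with a step budget as a basic guardrail:

```python
def observe_reason_act(observe, reason, act, done, max_steps: int = 10):
    """Single agent loop: observe state, choose an action, apply it,
    stop when the goal predicate holds or the step budget runs out."""
    state = observe()
    for _ in range(max_steps):
        if done(state):
            break
        action = reason(state)
        state = act(state, action)
    return state

# Toy run: count up to a target of 3.
final = observe_reason_act(
    observe=lambda: 0,
    reason=lambda s: "increment",
    act=lambda s, a: s + 1,
    done=lambda s: s >= 3,
)
# final == 3
```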
Outcome-First Design
Start with business outcomes ("What are we trying to achieve?"), then decompose:
Outcome: 30% faster loan approvals
-> Dependencies: document verification, exception handling, fewer manual handoffs
-> Agent opportunities: automated resolutions, remediation suggestions
Context Engineering
Prevent context pollution with these strategies:
| Strategy | Description |
|---|---|
| Compression | Summarize context as window nears limit |
| Pruning | Remove old or irrelevant content |
| Ranking | Ensure most relevant information is visible |
| Isolation | Split task/context across sub-agents |
| Notes | Let agents take structured notes during sessions |
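Pruning and compression can be combined in one pass: keep the newest messages that fit a budget, and collapse everything older into a single summary line. A minimal sketch under our own assumptions (a character budget stands in for a real token count, and `summarize` would normally call a model rather than emit a placeholder):

```python
def fit_context(messages: list[str], limit: int,
                summarize=lambda dropped:
                    f"[summary of {len(dropped)} earlier messages]"):
    """Prune to the most recent messages within a character budget,
    compressing everything older into one summary entry."""
    kept, used = [], 0
    for msg in reversed(messages):        # newest first
        if used + len(msg) > limit:
            break
        kept.append(msg)
        used += len(msg)
    dropped = messages[: len(messages) - len(kept)]
    head = [summarize(dropped)] if dropped else []
    return head + list(reversed(kept))
```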
Memory Architecture
| Type | Description | Duration |
|---|---|---|
| Short-term (STM) | Context window: instructions, knowledge, tools | Single session |
| Semantic (LTM) | Abstract, factual, domain-specific knowledge | Persistent |
| Procedural (LTM) | How to perform tasks or skills | Persistent |
| Episodic (LTM) | Past events as example behaviors | Persistent |
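The memory split above can be modeled as one short-term store that dies with the session and three long-term stores that persist. A sketch with our own class and field names (real systems would back the LTM stores with a database or vector index):

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """STM is per-session; semantic, procedural, and episodic LTM persist."""
    short_term: list[str] = field(default_factory=list)       # context window
    semantic: dict[str, str] = field(default_factory=dict)    # facts
    procedural: dict[str, str] = field(default_factory=dict)  # skills / how-to
    episodic: list[str] = field(default_factory=list)         # past events

    def end_session(self) -> None:
        """Discard STM; record the session as an episodic memory."""
        self.episodic.append(f"session with {len(self.short_term)} turns")
        self.short_term.clear()
```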
Failure Modes
| Category | Examples | Mitigations |
|---|---|---|
| Identity/Auth | Agent impersonated, unintended actions | Unique identifiers, granular permissions, audit trails |
| Data supply-chain | Prompt injection, harmful content | Input validation, XPIA protection, monitor data flows |
| Orchestration | Tool failures, agent deadlocks | Control flow guardrails, scoped environments |
| Reasoning | Hallucinations, task drift | Monitor reasoning patterns, granular roles |
| Operations | Resource overuse, cost explosion | Rate limits, timeouts, isolation |
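The operational mitigations (rate limits, timeouts) reduce to bounding what the agent may do per time window. A minimal sliding-window call budget, as one sketch of such a guardrail (the `CallBudget` class is our own illustration):

```python
import time

class CallBudget:
    """Operational guardrail: cap tool calls per time window to contain
    resource overuse and cost explosion."""
    def __init__(self, max_calls: int, window_s: float):
        self.max_calls, self.window_s = max_calls, window_s
        self.calls: list[float] = []

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        self.calls = [t for t in self.calls if now - t < self.window_s]
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False
```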
Key Takeaways
- Design for outcomes, not outputs - Anchor on measurable business outcomes
- Start simple and iterate - Single observe-reason-act loop first
- Build on shared foundations - Standardize runtimes, gateways, guardrails
- Choose the right platform - Based on data gravity, governance, differentiation
- Engineer trust by default - Identity, access control, monitoring, evaluation