StayFresh

Static archive of workflow research and patterns

February 2026

BCG Enterprise Agent Design Patterns (2025)

Reference: Building Effective Enterprise Agents (BCG, November 2025)

BCG's AI Platforms Group published a comprehensive framework for building production-grade enterprise agents.

Agent Design Cards (ADCs)

Agent Design Cards are BCG's standardized blueprint for documenting agent requirements. An effective ADC should:

  1. Define purpose - Clearly describe what the agent is designed to achieve
  2. Clarify boundaries - Specify the agent's role, scope, and points of human oversight
  3. Detail inputs and outputs - Make data sources, dependencies, and deliverables explicit
  4. Describe capabilities - Outline tools and capabilities needed for the agent's success
  5. Anticipate failure - Define fallback behavior, escalation paths, and guardrails

Example Agent Design Card

Agent Goal: Reduce processing time for loan applications

Metrics:
  - 30% reduction in manual exception handling time

Skills, Tools & Capabilities:
  - Document parsing and field validation
  - Cross-system data reconciliation (CRM, Credit Bureau)
  - Policy-based reasoning for exception routing

Agent Trigger: System-led

Input(s) & Output(s):
  - Inputs: Loan application data, validation rules from policy database
  - Outputs: Audit log of actions and corrections performed, exceptions

Fallback:
  - Notify loan officer via workflow system for manual intervention

Priority: 1
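
A card like this can also be kept in machine-readable form so it is versioned and validated alongside the agent itself. A minimal sketch, assuming a hypothetical Python dataclass whose fields simply mirror the example card above (this is not a BCG-published schema):

```python
from dataclasses import dataclass

@dataclass
class AgentDesignCard:
    """Machine-readable Agent Design Card; fields mirror the example card above."""
    goal: str
    metrics: list[str]
    capabilities: list[str]
    trigger: str            # e.g. "system-led" or "user-led"
    inputs: list[str]
    outputs: list[str]
    fallback: str
    priority: int

loan_exception_adc = AgentDesignCard(
    goal="Reduce processing time for loan applications",
    metrics=["30% reduction in manual exception handling time"],
    capabilities=[
        "Document parsing and field validation",
        "Cross-system data reconciliation (CRM, Credit Bureau)",
        "Policy-based reasoning for exception routing",
    ],
    trigger="system-led",
    inputs=["Loan application data", "Validation rules from policy database"],
    outputs=["Audit log of actions and corrections performed", "Exceptions"],
    fallback="Notify loan officer via workflow system for manual intervention",
    priority=1,
)
```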

Agent Suitability Framework

Not every problem needs an agent. Use this framework to decide:

  - High complexity, low risk/governance: Agent-led with human oversight
  - High complexity, high risk/governance: Human-led with agent support
  - Low complexity, low risk/governance: Agent-led (full autonomy)
  - Low complexity, high risk/governance: Traditional automation

Key insight: If clear rules and basic automation deliver the desired outcome, don't build an agent for its own sake.
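
As a first-pass intake filter, the quadrant above can be expressed as a simple lookup. A minimal sketch; the labels come from the matrix, while the function itself is illustrative rather than part of the BCG framework:

```python
def recommend_pattern(high_complexity: bool, high_risk: bool) -> str:
    """Map (complexity, risk/governance) onto the 2x2 suitability matrix."""
    if high_complexity and high_risk:
        return "Human-led with agent support"
    if high_complexity:
        return "Agent-led with human oversight"
    if high_risk:
        return "Traditional automation"
    return "Agent-led (full autonomy)"

print(recommend_pattern(high_complexity=True, high_risk=True))
# -> Human-led with agent support
```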

Agent Maturity Horizons

  - Horizon 0 - Constrained agents: predefined rules, single repetitive task
  - Horizon 1 - Single agents: multi-step tasks in a set environment; plans and acts alone
  - Horizon 2 - Deep agents: an orchestrator splits tasks across specialist agents
  - Horizon 3 - Role-based agents: a team of agents collaborates with distinct roles and handoffs
  - Horizon 4 - Agent mesh: a network of autonomous agents that self-organize
Recommendation: Build toward Horizon 2 (deep agents) today. Fully autonomous mesh agents require mature reasoning and evaluation systems.
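
A Horizon 2 setup is essentially an orchestrator that decomposes a request and routes the pieces to specialist agents. A minimal sketch, assuming a generic run(task) interface and hypothetical specialist names; a real planner would use a model to decompose the request rather than a fixed list:

```python
class SpecialistAgent:
    """Placeholder specialist; in practice this wraps a model plus its tools."""
    def __init__(self, name: str):
        self.name = name

    def run(self, task: str) -> str:
        return f"[{self.name}] handled: {task}"

class Orchestrator:
    """Horizon 2: split a request into subtasks and route them to specialists."""
    def __init__(self, specialists: dict[str, SpecialistAgent]):
        self.specialists = specialists

    def plan(self, request: str) -> list[tuple[str, str]]:
        # Toy planner with a hard-coded decomposition for illustration.
        return [
            ("document_parser", f"extract fields from: {request}"),
            ("reconciler", "reconcile CRM and credit bureau records"),
            ("exception_router", "route unresolved exceptions per policy"),
        ]

    def run(self, request: str) -> list[str]:
        return [self.specialists[name].run(task) for name, task in self.plan(request)]

orchestrator = Orchestrator({
    "document_parser": SpecialistAgent("document_parser"),
    "reconciler": SpecialistAgent("reconciler"),
    "exception_router": SpecialistAgent("exception_router"),
})
print(orchestrator.run("loan application (example)"))
```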

Human Oversight Patterns

  - Agent-assisted: agent provides output within the user's normal workflow
  - Human-in-the-loop: agent makes a decision and awaits human approval before acting
  - Human-on-the-loop: human observes outputs and can intervene when issues are flagged
  - Human-out-of-the-loop: agent acts without explicit human oversight
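
These patterns differ mainly in where an approval gate sits relative to the agent's action. A minimal human-in-the-loop sketch, assuming a hypothetical approve() callback that stands in for a real review queue:

```python
from typing import Callable

def execute_with_approval(action: str,
                          payload: dict,
                          approve: Callable[[str, dict], bool],
                          execute: Callable[[str, dict], None]) -> str:
    """Human-in-the-loop: the agent proposes an action, a human approves, then it runs."""
    if approve(action, payload):
        execute(action, payload)
        return "executed"
    return "rejected"  # escalate or log per the card's fallback path

# Stubbed wiring: a real approve() would post to a review queue and block on the reply.
result = execute_with_approval(
    action="waive_late_fee",
    payload={"application_id": "A-1029", "amount": 25.0},   # dummy illustrative data
    approve=lambda action, payload: payload["amount"] < 50,  # stand-in approval rule
    execute=lambda action, payload: print(f"executing {action} with {payload}"),
)
```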

Design Principles

Start Simple, Iterate with Evals

  1. Begin with a single observe-reason-act loop (sketched after this list)
  2. Introduce sub-flows only when complexity causes brittleness
  3. Add specialized agents only when domain-specific tasks require them
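
A minimal sketch of that single observe-reason-act loop, assuming a hypothetical llm_decide() that returns either a tool call or a final answer:

```python
def run_agent(goal: str, tools: dict, llm_decide, max_steps: int = 10) -> str:
    """Single observe-reason-act loop: observe the state, reason about it, act, repeat."""
    observations: list[str] = [f"goal: {goal}"]
    for _ in range(max_steps):
        decision = llm_decide(observations)                           # reason
        if decision["type"] == "final":
            return decision["answer"]
        result = tools[decision["tool"]](**decision.get("args", {}))  # act
        observations.append(f"{decision['tool']} -> {result}")        # observe
    return "step budget exhausted; escalate per fallback policy"
```

Sub-flows and specialist agents are layered on top of this loop only once evaluations show it becoming brittle.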

Outcome-First Design

Start with business outcomes ("What are we trying to achieve?"), then decompose:

Outcome: 30% faster loan approvals
  -> Dependencies: document verification, exception handling, fewer manual handoffs
  -> Agent opportunities: automated resolutions, remediation suggestions

Context Engineering

Prevent context pollution with these strategies:

  - Compression: summarize context as the window nears its limit
  - Pruning: remove old or irrelevant content
  - Ranking: keep the most relevant information visible
  - Isolation: split the task and its context across sub-agents
  - Notes: let agents take structured notes during sessions
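
Pruning and compression can be combined in a small helper that trims the window before each model call. A minimal sketch, assuming a hypothetical summarize() function and a crude token estimate; the prune-first, compress-as-fallback ordering is one possible policy, not the only one:

```python
def manage_context(messages: list[str], summarize, max_tokens: int = 8000) -> list[str]:
    """Prune, then compress, so the window stays under budget before each call."""
    def tokens(msgs: list[str]) -> int:
        return sum(len(m.split()) for m in msgs)   # crude whitespace-based estimate

    instructions, *turns = messages                # first message holds the instructions

    # Pruning: drop the oldest turns until the window fits (always keep the latest turn).
    while turns[1:] and tokens([instructions] + turns) > max_tokens:
        turns.pop(0)

    # Compression: if what remains still exceeds the budget, summarize it into one note.
    if turns and tokens([instructions] + turns) > max_tokens:
        turns = [summarize(turns)]

    return [instructions] + turns
```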

Memory Architecture

  - Short-term (STM): the context window (instructions, knowledge, tools); lasts a single session
  - Semantic (LTM): abstract, factual, domain-specific knowledge; persistent
  - Procedural (LTM): how to perform tasks or skills; persistent
  - Episodic (LTM): past events used as example behaviors; persistent
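
These memory types map naturally onto separate stores with different lifetimes. A minimal sketch using in-process containers; a production system would back the long-term stores with a database or vector index:

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    # Short-term memory: lives in the context window, reset each session.
    short_term: list[str] = field(default_factory=list)
    # Long-term memory: persists across sessions.
    semantic: dict[str, str] = field(default_factory=dict)    # facts, domain knowledge
    procedural: dict[str, str] = field(default_factory=dict)  # how to perform tasks/skills
    episodic: list[dict] = field(default_factory=list)        # past events as examples

    def end_session(self) -> None:
        """Archive the session as an episode, then clear short-term memory."""
        if self.short_term:
            self.episodic.append({"transcript": list(self.short_term)})
        self.short_term.clear()

memory = AgentMemory()
memory.short_term.append("example session turn")
memory.semantic["loan_policy_version"] = "example-version"
memory.end_session()
```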

Failure Modes

  - Identity/Auth - Examples: agent impersonation, unintended actions. Mitigations: unique identifiers, granular permissions, audit trails
  - Data supply-chain - Examples: prompt injection, harmful content. Mitigations: input validation, XPIA protection, monitoring of data flows
  - Orchestration - Examples: tool failures, agent deadlocks. Mitigations: control-flow guardrails, scoped environments
  - Reasoning - Examples: hallucinations, task drift. Mitigations: monitoring of reasoning patterns, granular roles
  - Operations - Examples: resource overuse, cost explosion. Mitigations: rate limits, timeouts, isolation
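
On the operations side, rate limits and timeout budgets are easiest to enforce at the tool-call boundary. A minimal sketch of a guardrail wrapper; the limits and the stubbed credit-bureau tool are illustrative:

```python
import time
from functools import wraps

def guard(max_calls_per_minute: int = 30, timeout_s: float = 20.0):
    """Wrap a tool call with a simple rate limit and a per-call duration check."""
    call_times: list[float] = []

    def decorator(tool):
        @wraps(tool)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            # Rate limit: count calls in the trailing 60-second window.
            call_times[:] = [t for t in call_times if now - t < 60]
            if len(call_times) >= max_calls_per_minute:
                raise RuntimeError("rate limit exceeded; escalate per fallback policy")
            call_times.append(now)

            start = time.monotonic()
            result = tool(*args, **kwargs)
            if time.monotonic() - start > timeout_s:
                # A real implementation would cancel the call, not just flag it afterwards.
                raise TimeoutError("tool call exceeded its timeout budget")
            return result
        return wrapper
    return decorator

@guard(max_calls_per_minute=10, timeout_s=5.0)
def query_credit_bureau(application_id: str) -> dict:
    return {"application_id": application_id, "status": "stubbed response"}
```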

Key Takeaways

  1. Design for outcomes, not outputs - Anchor on measurable business outcomes
  2. Start simple and iterate - Single observe-reason-act loop first
  3. Build on shared foundations - Standardize runtimes, gateways, guardrails
  4. Choose the right platform - Based on data gravity, governance, differentiation
  5. Engineer trust by default - Identity, access control, monitoring, evaluation