StayFresh

Static archive of workflow research and patterns

February 2026

Agent Psychology

Understanding how agents reason and respond to instructions.

Core Insight

Agents reason forward from instructions; they don't reason backward from outcomes. In practice, this means an agent can't infer what you wanted after the fact: the intent has to be legible up front, in the instructions or in the code itself.

The "Surprising Behavior" Pattern

When an agent encounters something unexpected, that's signal, not noise. Surprising behavior usually reveals architectural friction, so when an agent fails, fix the code, not the prompt.

Instead of adding more instructions, consider:

  1. Is the codebase structure confusing? Rename, reorganize, add comments
  2. Are conventions unclear? Add type hints, improve names, add docstrings (see the sketch below)
  3. Is the task underspecified? Improve the issue description, not the context file
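
As a minimal sketch of point 2, here is the same hypothetical function before and after its conventions are made explicit. The names (`proc`, `filter_active_users`) and fields are invented for illustration, not taken from any real codebase:

```python
from datetime import datetime

# Before: an agent has to guess what "proc" does, what "d" holds,
# and what "f" means -- friction no extra prompt instruction removes.
def proc(d, f=None):
    return [x for x in d if f is None or x["last_seen"] >= f]

# After: the name, type hints, and docstring carry the conventions,
# so the agent reads intent straight from the code.
def filter_active_users(
    users: list[dict],
    since: datetime | None = None,
) -> list[dict]:
    """Return users whose "last_seen" is at or after `since`.

    If `since` is None, every user counts as active.
    """
    return [u for u in users if since is None or u["last_seen"] >= since]
```

An agent reading the second version needs no context-file note about what the function does or returns; the signature is the instruction.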

The Step-3 Trick

Counterintuitive but effective: if an agent struggles with step 2, tell it to do step 3. The agent often completes step 2 in the process.

Example

If Agent Struggles With    Try Asking For
Writing tests              Deploy to production
Adding error handling      Ship the feature
Documentation              Onboard a new engineer
Refactoring                Prepare for code review

This works because the later step makes the earlier one a prerequisite: an agent asked to prepare for code review refactors along the way, and one asked to onboard a new engineer has to write the documentation first. The downstream goal gives the intermediate work a concrete purpose instead of presenting it as the chore itself.

Greenfield Optimization

Agents perform best on greenfield projects, where they can establish patterns from scratch. In existing codebases they must infer conventions rather than set them, and inconsistent or conflicting patterns pull their output in different directions.

Instruction Following vs. Helpfulness

Agents follow instructions reliably. The problem isn't compliance; it's that the instructions often don't help.

Evidence:

Token Economics

Context files consume tokens in every request. A 600-word context file is roughly 800 tokens (at a rule-of-thumb ~0.75 words per token), resent with each request in a session.
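
A back-of-the-envelope sketch of that overhead. The words-per-token ratio is a common heuristic and the requests-per-task figure is an assumption, not a measurement:

```python
# Token overhead of a repository context file resent on every request.
# Assumptions: ~0.75 words per token; a mid-sized agent task makes
# 25 requests, each carrying the full context file.
WORDS = 600
TOKENS_PER_WORD = 1 / 0.75      # ~1.33 tokens per English word
REQUESTS_PER_TASK = 25

tokens_per_request = WORDS * TOKENS_PER_WORD
overhead_per_task = tokens_per_request * REQUESTS_PER_TASK
print(f"{tokens_per_request:.0f} tokens/request, "
      f"{overhead_per_task:,.0f} tokens/task")
# -> 800 tokens/request, 20,000 tokens/task
```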

The question: Is that token budget better spent on task-specific context or generic repository context?

Research suggests: task-specific context wins.
