StayFresh

Static archive of workflow research and patterns

March 2026

Project AI Philosophy

Every project using generative systems needs a written position.

Without a written policy, governance defaults to novelty pressure, convenience, and untracked risk.

Position

AI belongs inside bounded workflows.

Acceptable roles: drafting, summarization, retrieval, code scaffolding, critique, evaluation, and repetitive local automation.

Unacceptable roles: hidden authority, unsupervised publication, irreversible state changes, invented expertise, and persuasion without evidence.
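The acceptable/unacceptable split above can be enforced as a deny-by-default gate. This is a minimal sketch; the names (`ALLOWED_ROLES`, `FORBIDDEN_ROLES`, `check_role`) are illustrative and not part of any existing tool.

```python
# Deny-by-default role gate: a role is permitted only if explicitly allowed.
# Role names paraphrase the lists above; all identifiers are assumptions.
ALLOWED_ROLES = {
    "drafting", "summarization", "retrieval", "code_scaffolding",
    "critique", "evaluation", "local_automation",
}
FORBIDDEN_ROLES = {
    "hidden_authority", "unsupervised_publication",
    "irreversible_state_change", "invented_expertise",
    "unevidenced_persuasion",
}

def check_role(role: str) -> bool:
    """Return True only for explicitly allowed roles.

    Unknown roles are rejected: an unlisted role is a policy gap,
    not an implicit permission.
    """
    if role in FORBIDDEN_ROLES:
        return False
    return role in ALLOWED_ROLES
```

The design choice that matters is the default: a role absent from both lists fails the gate, which keeps the policy honest as new uses appear.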

Control Surface

Operational Standard

Best use cases share five properties: bounded context, available verification, low blast radius, clear ownership, and reversible outcome.

Worst use cases share the opposite pattern: vague goals, hidden dependencies, weak review, social pressure, and no rollback.
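The two patterns above are mirror images, so a task's fit can be scored by counting which of the five best-case properties hold. A minimal sketch; the property names and `fit_score` are assumptions, not a standard.

```python
# Illustrative checklist over the five best-case properties named above.
BEST_CASE_PROPERTIES = (
    "bounded_context",
    "available_verification",
    "low_blast_radius",
    "clear_ownership",
    "reversible_outcome",
)

def fit_score(task: dict[str, bool]) -> int:
    """Count how many of the five properties hold.

    5 matches the best-case pattern; 0 matches the worst-case pattern.
    Missing keys count as False: an unexamined property is not a strength.
    """
    return sum(bool(task.get(p)) for p in BEST_CASE_PROPERTIES)
```

A task scoring low is not forbidden by this alone, but it signals that the worst-case pattern (vague goals, weak review, no rollback) is in play and the required questions below deserve extra scrutiny.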

Required Questions

  1. What class of task is this?
  2. What evidence source backs the output?
  3. Who or what is the approval gate?
  4. What is the rollback method?
  5. What is the maximum acceptable failure?
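The five required questions above can be captured as an intake record that refuses blank answers. A minimal sketch: the class name, field names, and `is_complete` are illustrative, chosen to mirror the questions one-to-one.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskIntake:
    """One record per AI task; every field answers one required question.

    All names here are assumptions for illustration, not an existing schema.
    """
    task_class: str
    evidence_source: str
    approval_gate: str
    rollback_method: str
    max_acceptable_failure: str

    def is_complete(self) -> bool:
        # A blank or whitespace-only answer means the question was
        # skipped, not answered.
        return all(v.strip() for v in vars(self).values())
```

Making the record immutable (`frozen=True`) reflects the intent: the answers are fixed before work begins, not back-filled after the fact.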

Anti-Patterns

Minimum Spec

[ai]
role = "bounded assistant"
allowed = ["drafting", "retrieval", "summarization", "critique", "code_scaffolding"]
forbidden = ["unreviewed_publish", "unreviewed_deploy", "production_mutation", "invented_citation"]
required = ["task_scope", "evidence", "owner", "approval_gate", "rollback_path"]
success = ["correctness", "traceability", "reversibility", "review_cost"]

Reference Pattern

Bottom Line

Good AI policy reduces ambiguity, not labor.

Good AI policy preserves judgment, surfaces evidence, and narrows blast radius.

Everything beyond that reduces to marketing attached to tooling.