March 2026
Project AI Philosophy
Every project using generative systems needs a written position.
Without one, governance defaults to novelty pressure, convenience, and untracked risk.
Position
AI belongs inside bounded workflows.
Acceptable roles: drafting, summarization, retrieval, code scaffolding, critique, evaluation, and repetitive local automation.
Unacceptable roles: hidden authority, unsupervised publication, irreversible state changes, invented expertise, and persuasion without evidence.
Control Surface
- explicit scope - named task, named inputs, named completion condition
- evidence required - sources, tests, diffs, logs, screenshots, or reproducible traces
- human gate on irreversible actions - merge, deploy, publish, delete, charge, notify, or mutate production data
- reversibility first - rollback path before automation depth
- failure legibility - uncertainty, missing context, blocked tools, and assumption drift surfaced in plain language
- cost discipline - token spend, latency, and review burden treated as engineering costs
- local fit - repository conventions outrank model preferences
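The human-gate item above can be sketched as a default-deny check. This is a minimal illustration, not a prescribed implementation: the IRREVERSIBLE set mirrors the list in the bullet, and the function name and approval flag are assumptions.

```python
# Assumed names for illustration: IRREVERSIBLE, execute.
IRREVERSIBLE = {"merge", "deploy", "publish", "delete",
                "charge", "notify", "mutate_production"}

def execute(action: str, run, human_approved: bool = False):
    """Run reversible actions directly; refuse irreversible ones
    unless a human approval was recorded beforehand."""
    if action in IRREVERSIBLE and not human_approved:
        raise PermissionError(f"{action!r} requires human approval")
    return run()
```

The point of the sketch is the default: automation proceeds only when the action is outside the irreversible set, and approval is an explicit argument rather than an implied state.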
Operational Standard
Best use cases share five properties: bounded context, available verification, low blast radius, clear ownership, and reversible outcome.
Worst use cases share the opposite pattern: vague goals, hidden dependencies, weak review, social pressure, and no rollback.
Required Questions
- task class
- evidence source
- approval gate
- rollback method
- maximum acceptable failure
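The five questions above can be enforced as an intake record that refuses blank answers. A minimal sketch, assuming the class name and example strings; the field names mirror the questions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskIntake:
    """One record per task; every question must have a non-blank answer."""
    task_class: str              # e.g. "summarization"
    evidence_source: str         # e.g. "unit tests + source links"
    approval_gate: str           # e.g. "reviewer sign-off before merge"
    rollback_method: str         # e.g. "git revert <sha>"
    max_acceptable_failure: str  # e.g. "stale summary, caught in review"

    def __post_init__(self):
        # An unanswered question blocks the task, by construction.
        for name, value in vars(self).items():
            if not value.strip():
                raise ValueError(f"unanswered question: {name}")
```

Making the answers constructor arguments, rather than a checklist reviewed after the fact, is the design choice: a task object cannot exist without them.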
Anti-Patterns
- ai-first as strategy - slogan in place of a user problem
- assistant prose as evidence - polished language mistaken for verified fact
- autonomy by boredom - dangerous delegation justified by repetitive work
- evaluation theater - quality claims with no tests, no rubric, and no citations
- interface cosplay - chat wrapper added where a form, script, or search box would work better
Minimum Spec
[ai]
role = "bounded assistant"
allowed = ["drafting", "retrieval", "summarization", "critique", "code_scaffolding"]
forbidden = ["unreviewed_publish", "unreviewed_deploy", "production_mutation", "invented_citation"]
required = ["task_scope", "evidence", "owner", "approval_gate", "rollback_path"]
success = ["correctness", "traceability", "reversibility", "review_cost"]
Reference Pattern
- Kagi AI Philosophy - Product-level boundary setting with clear scope and limits
Bottom Line
Good AI policy reduces ambiguity, not labor.
Good AI policy preserves judgment, surfaces evidence, and narrows blast radius.
Everything else reduces to marketing attached to tooling.