🤖 Akira PMO — See It Cycle Organization Design
Every agent runs the See It Cycle at its appropriate frequency. Information flows up on drift, down on goal changes.
v2.0 — Updated Feb 15, 2026 — Sprint Process v2.0 (Joana)
🏗️ Architecture: Greg vs. Akira Agents
GREG (OpenClaw): Builder and orchestrator of the AIPMO system. Greg builds the Akira agent system through code and configuration, operating at the meta-level: he builds the product rather than running sprints.

AKIRA AGENTS: The agents below execute the AIPMO sprint methodology. When this document refers to "agents," it means Akira agents executing sprints, not Greg building the system.
Vision
👤 CLIENT (Human)
🔄 Every 4-5 Sprints
Vision Owner
"Are we building the right thing?"
See It Cycle Focus:
  • Business goals still valid?
  • Market feedback integrated?
  • ROI on track?
Requirements & Feedback
Translation
💎 Product Manager (Human)
🔄 Daily + Sprint
Client Translation
Feature prioritization, scope decisions
See It Cycle Focus:
  • Client needs → clear requirements?
  • Priorities still correct?
  • Stakeholders aligned?
Goals & Priorities
Strategy
🤖 AI Program Manager (AI)
🔄 Event-driven + Per-release retrospective
Multi-Squad Coordination
Resource allocation, agent orchestration
See It Cycle Focus:
  • All squads aligned?
  • Cross-squad blockers?
  • Sprint goals on track?
📊 AI Portfolio Manager (AI)
🔄 Every Checkpoint Gate + Per-release retrospective
3 Gates: Budget → Resources → Schedule
All must pass before Human Review
See It Cycle Focus:
  • Budget: >$100K/month cash flow check
  • Resources: Portfolio availability
  • Schedule: Human oversight feasible?
🎯 AI Estimator (AI)
🔄 On-demand (INTAKE_RECEIVED) + Per-release retrospective
Estimation & Backlogs
Promise/Stretch, capacity planning
See It Cycle Focus:
  • Estimates accurate?
  • Pareto ranking correct?
  • Drag factor calibrated?
Sprint Plans & Assignments
Coordination
🌱 Flow Coordinator α (AI)
🔄 30m Reality Check / 60m Sprint + Checkpoint Gate
Squad Alpha (Backend + API)
🌱 Flow Coordinator β (AI)
🔄 30m Reality Check / 60m Sprint + Checkpoint Gate
Squad Beta (Frontend + UX)
🌱 Flow Coordinator γ (AI)
🔄 30m Reality Check / 60m Sprint + Checkpoint Gate
Squad Gamma (Integration + QA)
Requirements Verifier (AI)
📝 Event-driven
Expanded Role: Commitments + Verification
Generates commitments, customer scope, and "before" recordings; validates before/after demos
Contracts & Evidence
Execution
📁 SQUAD ALPHA
⚙️ Backend
🏷️ API
📑 Database
🔄 Reality Check @ 30 min + Sign Contracts
📁 SQUAD BETA
🎨 Frontend
✏️ UX
🖌️ Style
🔄 Reality Check @ 30 min + Sign Contracts
📁 SQUAD GAMMA
🔗 Integration
QA
🚀 Deploy
🔄 Reality Check @ 30 min + Sign Contracts
📝 1. Commitment Signing Ritual
1. Requirements Verifier generates a contract with Promise/Stretch items and acceptance criteria.
2. Execution Agent reviews the contract and responds: ACCEPT / MODIFY / DECLINE.
3. If ACCEPT → the agent writes a SIGNATURE to file with commitment statements and a hash.
4. Flow Coordinator confirms the signature exists → sends BEGIN WORK.
Agent Response Options: ACCEPT / MODIFY / DECLINE
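The signing ritual above (steps 3-4) can be sketched in Python. This is a minimal illustration, not the actual implementation: function names like `sign_contract` and the exact signature record fields are assumptions; the spec only requires that the signature carry commitment statements and a hash.

```python
import hashlib
import json

def sign_contract(contract: dict, agent_id: str) -> dict:
    # The agent ACCEPTs by producing a signature record whose hash
    # binds it to the exact Promise/Stretch items it committed to.
    payload = json.dumps(contract, sort_keys=True).encode()
    return {
        "agent": agent_id,
        "commitments": contract["promise"],
        "hash": hashlib.sha256(payload).hexdigest(),
    }

def confirm_and_begin(signature: dict, contract: dict) -> str:
    # Flow Coordinator side: verify the signature hash matches the
    # contract as issued before sending BEGIN WORK.
    payload = json.dumps(contract, sort_keys=True).encode()
    if signature["hash"] == hashlib.sha256(payload).hexdigest():
        return "BEGIN WORK"
    return "SIGNATURE MISMATCH"

contract = {"promise": ["API endpoint /users"], "stretch": ["rate limiting"]}
sig = sign_contract(contract, "alpha-backend")
```

Hashing a canonical serialization (`sort_keys=True`) means any later edit to the contract invalidates the signature, which is what makes the unsigned-items anti-pattern detectable.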
🔧 2. Tiered Workaround Response
1. Agent encounters a failure: expected X, got Y.
2. Assess severity: can the goal be achieved via an alternative path?
3. LOW: try workaround → document → continue → flag async.
4. MEDIUM: try workaround → document → PAUSE → wait for approval.
5. HIGH: STOP → document → escalate immediately.
Severity Classification: LOW → Continue · MEDIUM → Pause · HIGH → Stop
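The severity tiers map directly onto action sequences, which makes them easy to encode. A minimal sketch (the action names here are illustrative labels, not spec-defined identifiers):

```python
def workaround_response(severity: str) -> list:
    # Tiered Workaround Response: each severity maps to an ordered
    # action sequence. Only LOW continues without approval.
    actions = {
        "LOW": ["try_workaround", "document", "continue", "flag_async"],
        "MEDIUM": ["try_workaround", "document", "pause", "wait_approval"],
        "HIGH": ["stop", "document", "escalate_immediately"],
    }
    return actions[severity]
```

Note the ordering constraint the tiers share: documentation always happens before the severity-specific step, which is what prevents the Undocumented Workaround anti-pattern.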
⏱️ 3. Updated Checkpoint Timing (v4.0)
| Checkpoint | Human Equivalent | AI Timing | Purpose |
|---|---|---|---|
| Reality Check | Mid-sprint review | 30 min (sprint midpoint) | Course-correct if behind |
| Checkpoint Gate | Sprint review + demo | 60 min + ~45 min overhead | Verify evidence, before/after compare |
| Human Review | Release review | Every 4-5 sprints (~7 hours) | PM validates release quality |
| Retrospective | Sprint retro | Every 2 sprints (~3.5 hours) | Automated learning integration |
Updated v2.0: 60-min execution + ~45-min overhead = ~105 min total. Retrospective every 2 sprints.
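The checkpoint schedule above can be captured as a small configuration sketch, which also makes the ~105-minute cycle arithmetic explicit. The `CheckpointSpec` type and constant names are illustrative, not part of the spec:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CheckpointSpec:
    name: str
    cadence: str    # when / how often the checkpoint runs
    purpose: str

CHECKPOINTS = (
    CheckpointSpec("Reality Check", "30 min (sprint midpoint)", "Course-correct if behind"),
    CheckpointSpec("Checkpoint Gate", "60 min + ~45 min overhead", "Verify evidence, before/after compare"),
    CheckpointSpec("Human Review", "every 4-5 sprints", "PM validates release quality"),
    CheckpointSpec("Retrospective", "every 2 sprints", "Automated learning integration"),
)

# Derived sprint-cycle length: 60-min execution + ~45-min overhead.
EXECUTION_MIN, OVERHEAD_MIN = 60, 45
TOTAL_CYCLE_MIN = EXECUTION_MIN + OVERHEAD_MIN
```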
🎯 4. Promise/Stretch Gate
1. Agent completes all Promise items with evidence.
2. Agent reports: "PROMISE COMPLETE. Requesting STRETCH approval."
3. Flow Coordinator reviews: do all items pass? Is time remaining? What is the risk?
4. Response: STRETCH APPROVED / STRETCH DECLINED / PROMISE INCOMPLETE.
Coordinator Response: STRETCH APPROVED / STRETCH DECLINED / PROMISE INCOMPLETE
Stretch failure = documented learning, not failure. Promise is the commitment.
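The coordinator's gate decision reduces to a small function. A sketch under stated assumptions: the 15-minute remaining-time threshold is invented for illustration (the spec only says the coordinator weighs time remaining and risk), and evidence is collapsed to a boolean:

```python
def stretch_gate(promise_done: bool, evidence_verified: bool, minutes_left: int) -> str:
    # Promise/Stretch Gate: Promise completion with evidence is the
    # precondition; time remaining decides whether Stretch is approved.
    if not (promise_done and evidence_verified):
        return "PROMISE INCOMPLETE"
    if minutes_left >= 15:  # assumed threshold, not from the spec
        return "STRETCH APPROVED"
    return "STRETCH DECLINED"
```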
5. Verification Failure Flow (NEW v4.0)
1. Requirements Verifier returns FAIL on Promise items.
2. MINOR FAIL: 15-min fix window within the current sprint overhead.
3. MAJOR FAIL: one Fix Sprint (the next 60-min sprint) dedicated to fixing.
4. Second failure: VERIFICATION_ESCALATED → Human PM review.
Failure Classification: MINOR FAIL / MAJOR FAIL / ESCALATED
Max 1 retry per failed item per release cycle. Fix sprints consume sprint capacity.
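The retry ceiling makes this flow a simple state function. A minimal sketch (return labels are illustrative; only VERIFICATION_ESCALATED is named by the spec):

```python
def verification_failure(kind: str, prior_failures: int) -> str:
    # Max 1 retry per failed item per release cycle:
    # any second failure escalates to Human PM review.
    if prior_failures >= 1:
        return "VERIFICATION_ESCALATED"
    if kind == "MINOR":
        return "FIX_WINDOW_15_MIN"   # within current sprint overhead
    if kind == "MAJOR":
        return "FIX_SPRINT"          # next 60-min sprint dedicated to fixing
    raise ValueError("unknown failure kind: %r" % kind)
```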
🔗 6. Cross-Squad Dependency Resolution (NEW v4.0)
1. Level 1 (Pre-Sprint): Program Manager sequences work to stagger dependencies.
2. Level 2 (Runtime): squad emits a DEPENDENCY_BLOCKED event.
3. 15-min timeout for a resolution decision.
4. Resolution: RESEQUENCE / MOCK & PROCEED / SWAP / ESCALATE.
Resolution Options: RESEQUENCE / MOCK & PROCEED / SWAP / ESCALATE
Dependencies caught in pre-sprint planning are healthy; dependencies surfacing at runtime indicate a planning gap to raise at retrospective.
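The runtime path (Level 2) is effectively a decision window with an escalation default. A sketch, assuming the coordinator's decision arrives as a string and that a missing, late, or unrecognized decision falls through to ESCALATE (the spec does not state the timeout default explicitly):

```python
VALID_RESOLUTIONS = {"RESEQUENCE", "MOCK_AND_PROCEED", "SWAP", "ESCALATE"}

def resolve_dependency(decision, elapsed_min):
    # A DEPENDENCY_BLOCKED event opens a 15-minute decision window.
    # No decision, a late decision, or an invalid one -> ESCALATE.
    if elapsed_min > 15 or decision not in VALID_RESOLUTIONS:
        return "ESCALATE"
    return decision
```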
⚡ See It Cycle Frequencies — Updated Timing (v3.0)
  • Client Vision: every 4-5 sprints
  • Strategic AI: per release
  • Flow Coordinators: ~105-min total cycle
  • Execution Agents: 30-min Reality Check
Updated v2.0: 60-min execution + ~45-min overhead = ~105-min total cycle. 30-min Reality Check at sprint midpoint.
🚨 Agent 215 Protocol — Updated Thresholds (v3.0)
| Agent Type | Expected Report | Alarm Threshold | Intervention |
|---|---|---|---|
| Execution Agent | Every 30 min | 45 min silence | FC pings → 15-min timeout → escalate to PM |
| Flow Coordinator | Every 1 hour | 2 hours silence | PM reviews → restart or reassign |
| Strategic AI | Every ~105 min (each Checkpoint Gate) | 90 min silence | Human PM notified → diagnostic |
| Requirements Verifier | Event-driven (15 min expected) | 30 min alarm | FC escalates to PM → restart or reassign |
Updated v2.0: Execution 45 min (not 60 min), Strategic 90 min (not 3 h), Verifier 30-min alarm. Requirements Verifier monitoring is event-driven.
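The thresholds in the table reduce to a per-type lookup that a watchdog can poll. A minimal sketch (dictionary keys and function name are illustrative, not spec identifiers):

```python
# Alarm thresholds (minutes of silence) per agent type, from the table above.
ALARM_THRESHOLD_MIN = {
    "execution_agent": 45,
    "flow_coordinator": 120,
    "strategic_ai": 90,
    "requirements_verifier": 30,
}

def agent_215_alarm(agent_type, minutes_since_last_report):
    # True when an agent has been silent past its threshold and the
    # intervention chain for its type should start.
    return minutes_since_last_report >= ALARM_THRESHOLD_MIN[agent_type]
```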
🚨 Escalation Protocol (Section 9)
Real-time escalation triggers (bypass scheduled cadence):
  • HIGH severity blocker raised
  • Agent 215 alarm (silent agent beyond threshold)
  • Promise items failing verification at Checkpoint Gate
  • Token budget exceeded 80%
  • Agent confidence < 70% on deliverable quality
  • Cross-squad dependency blocking multiple teams
Escalation Flow:
1. Agent detects escalation condition
2. Post to #akira-swarm with 🚨 ESCALATION tag
3. Program Manager responds (first responder)
4. If HIGH severity + no resolution → Human PM via Slack DM
5. Escalation remains open until resolved and logged
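The six real-time triggers above can be checked in one pass over an agent's reported state. A sketch under stated assumptions: the state field names (`token_budget_used`, `confidence`, etc.) are hypothetical, chosen only to mirror the trigger list:

```python
def escalation_triggers(state: dict) -> list:
    # Evaluate the real-time escalation conditions that bypass the
    # scheduled cadence; any hit should be posted to #akira-swarm.
    checks = [
        ("HIGH severity blocker", state.get("blocker_severity") == "HIGH"),
        ("Agent 215 alarm", state.get("silent_past_threshold", False)),
        ("Promise verification failure", state.get("promise_failed", False)),
        ("Token budget > 80%", state.get("token_budget_used", 0.0) > 0.80),
        ("Confidence < 70%", state.get("confidence", 1.0) < 0.70),
        ("Cross-squad dependency block", state.get("blocks_multiple_squads", False)),
    ]
    return [name for name, fired in checks if fired]
```

An empty return means the agent stays on its scheduled cadence; any non-empty list starts the five-step flow above.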
⚠️ Anti-Patterns to Prevent (Section 10)
  • Silent Failure: agent claims "done" but the work is broken. Prevention: Requirements Verifier + signed contracts.
  • Sycophantic Completion: reports "100% done!" at 60%. Prevention: contract signing + evidence verification.
  • Context Amnesia: forgets state after restart. Prevention: persist to files + Supabase + GitHub.
  • Credential Loss: overwrites API keys. Prevention: keep credentials outside the agent workspace.
  • Scope Creep: works on unsigned items. Prevention: Commitment Signing Ritual.
  • Babysitting Trap: human manages agents all day. Prevention: tiered checkpoints + batch reviews.
  • Silent Idle: agent stops without reporting. Prevention: Agent 215 Protocol.
  • Undocumented Workaround: finds an alternative path without logging it. Prevention: Tiered Workaround Response.
Legend: Human Role · Strategic AI · Flow Coordinator · Verifier / Contract · Execution Agent · See It Cycle

📄 Full Documentation: AIPMO Sprint Process v2.0 — Complete 29-page specification with Joana's Feb 15, 2026 updates