
📦 Products Dashboard

Active product development and architecture

Last updated: Feb 8, 2026 4:15 PM HST

🚀 Progress Update: Feb 8, 2026

Meeting Artifact

For: Joana & Victor Meeting (Feb 9, 2026)
Summary of what was accomplished today and the strategic direction emerging from the Charlie partnership meeting.

✅ Supabase Migration Complete

All 10 dashboards now use Supabase for persistent data. No more localStorage. Data survives browser clears.

🤖 Akira PMO Multi-Agent Deployed

3 agents running: Coordinator, Estimator, Executor. Each agent owns its outcomes. Built on the ownership/empowerment model.

🔗 Akira ↔ Jarvis Protocol

GitHub-based handoff format created. Templates for issues and completion reports. Sparse → fully-specified workflow.

🧠 Context Persistence Middleware

SQLite-based plugin integrated into OpenClaw. Sessions now survive compaction. No more 4-hour turnarounds on 10-minute tasks.

🐝 Two AI Swarms Architecture

From the Charlie Partnership Strategy Meeting (Feb 8)

明 AKIRA

Tony's Swarm

Business & PM Layer

  • Profiling (customer → priorities)
  • Prioritization (effort vs value)
  • Scope specification
  • Verification & acceptance

⟷

GitHub
Source of Truth

⚡ JARVIS

Charlie's Swarm

Technical Execution Layer

  • Autonomous coding pipeline
  • Implementation from specs
  • Testing & deployment
  • Completion reporting

The Flow: Sparse → Specified → Shipped

Customer Call → Akira: Profile & Prioritize → GitHub Issue → Jarvis: Execute → Shipped Code

🔓 The Unlock: Charlie's insight – "Linear tickets are like a title and one sentence. That's not enough. You need to inflate them into actual work." Akira's job is to take sparse input and produce fully-specified, implementation-ready tickets for Jarvis.

🚧 Key Blocker (Before Alpha)

Dedicated Machines Required

Need separate infrastructure for clean IP separation from Kindo. Cannot use Kindo resources for Akira/Jarvis work.

Status: Must resolve before Basis alpha engagement.
Options: Personal machines, cloud instances, or Tony's existing hardware.

💰 Revenue Context

Current: Kindo

Six-figure consulting work (ongoing)

Alpha: Basis

First external customer (Sean partnership)

Potential: 8-Figure

Larger opportunity identified in meeting

📋 For Joana: Akira Status

Files Downloaded & Analyzed:
• portfolio-manager-with-context.zip ✅
• portfolio-manager-standalone.zip ✅
• reklaim-estimator-vercel.zip ✅
• reklaim-estimator-lovable.zip ✅

What Greg Built from Joana's Source:
• 3 OpenClaw agents (Coordinator, Estimator, Executor)
• Ownership/empowerment model integrated
• Multi-agent handoff protocol

Key Insight Preserved: Story Points × 2 = raw hours → × 1.7 drag factor. 33/33/34 Promise/Stretch/Buffer model.

🔧 For Victor: Technical Stack

| Component | Location | Status |
|---|---|---|
| Akira Agents | ~/.openclaw/agents/akira-*/ | ✅ Running |
| Context Plugin | ~/.openclaw/extensions/context-persistence/ | ✅ Integrated |
| Handoff Protocol | workspace/akira-jarvis-protocol/ | ✅ Complete |
| Dashboard DB | Supabase: mmwbiogqmgmtxboipyko | ✅ Migrated |
| Dashboard UI | greg-dashboard.vercel.app | ✅ Deployed |

โญ๏ธ Next Steps

Resolve dedicated machines blocker – Required before Basis alpha

blocker · infra

Test Akira agents with real task – Validate ownership model in practice

akira · testing

Formalize business entity – Tony's Hawaii LLC for IP/revenue split

legal · partnership

Profiler Engine design – The missing piece (customer calls → priorities)

exec.ai · design

🧠 Exec.AI – Executive Prioritizer

Prototyping

Personal productivity prioritizer powered by AI. The "High Performer Hypnagogic Workflow": capture clarity during peak creative states (3am–6am), digitize via WhatsApp/photos, categorize in the Stream dashboard, review with AI, route to execution. Today's session was the first live prototype.

📋 Core Concept

The Executive Prioritizer works off the same engine as the current Estimator: it estimates level of effort vs value, then prioritizes on that basis.

It leverages Greg's tracking data on how long it takes a user to develop an idea, deploy it, and begin earning revenue.

Feature: render the user's AI chats as dashboards that track conversations and discussion progress on a timeline.

🚀 Live Prototype: Feb 6, 2026 Session

Today's session WAS the Exec.AI workflow in action. Here's what we did:

The High Performer Hypnagogic Workflow:

| Phase | Time | What Happened |
|---|---|---|
| 1. Capture | 3am–6am | Tony in hypnagogic state, wrote ~40 index cards of clarity/ideas |
| 2. Digitize | 7:00am | Photos of cards sent via WhatsApp → Greg captured 41 items to Stream |
| 3. Categorize | 7:20am | Bulk categorize in Stream dashboard: Exec.AI, Technical Setup, Akira PMO, Partnerships, etc. |
| 4. Review & Explain | 7:30–9:00am | Greg answered architecture questions, explained how systems work, provided context |
| 5. Route & Execute | 9:00–10:30am | Moved items to To-Do and Products dashboards, built dashboards, sent emails, downloaded Akira source |

Session Results:
• 41 items captured → 7 remaining in Stream (17 routed, 17 done/purged)
• 11 items in To-Do list
• 3 emails sent (Joana, Victor, Alex)
• 4 new dashboards built (Stream, Standup, Products, To-Do refresh)
• Akira PMO fully analyzed (4 zip files downloaded, architecture documented)
• Products dashboard created with Exec.AI + Akira sections

Key Insight: The "AI sells fastest when it doesn't require humans to change" principle in action: Tony's existing workflow (handwritten cards, WhatsApp photos) stayed the same; Greg adapts to capture and process.

✅ Tasks

Build time tracking mechanism for idea lifecycle (start โ†’ deploy โ†’ revenue)

tracking

Build dashboard/tracker for ongoing discussions

dashboard

Prototype using Multicity Living Chat export from Grok

prototype waiting

🤖 Akira PMO – AI-Driven Project Management

Rebuilding

Rebuilding Joana's Estimator and Portfolio Manager inside OpenClaw. Goal: replicate and enhance the AI PMO agent capabilities without external dependencies on Lovable/Vercel infrastructure.

๐Ÿ—๏ธ Architecture Analysis

| Component | Original | OpenClaw Rebuild |
|---|---|---|
| Estimator AI | Gemini 2.5 Flash via Lovable Gateway | Direct Anthropic/Gemini API calls |
| Portfolio Manager AI | Claude Sonnet (two-pass analysis) | Same: Anthropic API |
| Backend | Supabase Edge Functions | Local scripts or OpenClaw skills |
| Database | Supabase (tickets, epics) | Local JSON or SQLite |
| Frontend | React + Vite + Tailwind | Canvas dashboards or Vercel |
| "RAG" Context | Hardcoded prompt injection | SKILL.md / workspace files |

💡 Key Insights

No actual RAG/embeddings needed. The "RAG" is hardcoded context strings (KINDO_SECTIONS_SUMMARY, CYCLE_PLANNING_RULES) injected into prompts. OpenClaw can do this natively with skills and workspace files.

Estimator core logic: Story Points × 2 = raw hours → × 1.7 drag factor = drag hours. 33/33/34 rule: Promise (first 33%), Stretch (next 33%), Buffer (34%).

Portfolio Manager: Pure calculation engine for capacity (engineers × 250 days × 6 hrs ÷ drag), payroll coverage, and milestone-based cash flow.
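Taken together, the estimator and capacity formulas above fit in a few lines of Python. This is a sketch for illustration only: the function names and the list-based 33/33/34 split are assumptions, not code from Joana's source.

```python
# Illustrative sketch of the Akira estimator math; names are hypothetical.

def estimate_hours(story_points: float, drag_factor: float = 1.7) -> dict:
    """Story Points x 2 = raw hours, then x 1.7 drag factor = drag hours."""
    raw_hours = story_points * 2
    return {"raw_hours": raw_hours, "drag_hours": raw_hours * drag_factor}

def split_33_33_34(items: list) -> dict:
    """33/33/34 rule: first 33% Promise, next 33% Stretch, final 34% Buffer."""
    n = len(items)
    promise_end = round(n * 0.33)
    stretch_end = round(n * 0.66)
    return {
        "promise": items[:promise_end],             # guaranteed
        "stretch": items[promise_end:stretch_end],  # aspirational
        "buffer": items[stretch_end:],              # slack
    }

def annual_capacity_hours(engineers: int, drag_factor: float = 1.7) -> float:
    """Portfolio Manager capacity: engineers x 250 days x 6 hrs / drag."""
    return engineers * 250 * 6 / drag_factor
```

For example, a 5-point ticket yields 10 raw hours and 17 drag hours, and a 4-engineer team has roughly 3,529 drag-adjusted hours per year.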

🔓 The Unlock: PROFILER → PRIORITIZER → EXECUTOR → VERIFIER

• Jarvis (Charlie) executes perfectly but doesn't decide what matters
• Akira prioritizes but needs human-supplied priorities
• The Gap: automating the Profiler function (turning customer calls → structured priorities)
• Charlie currently IS the profiler, so profiling scales linearly with the calls he can attend

Akira = Profiler + Prioritizer. Jarvis = Executor. Together = The Unlock.

📂 Source Files Downloaded

✓ portfolio-manager-with-context.zip (477.7 KB) – downloaded
✓ portfolio-manager-standalone.zip (289.7 KB) – downloaded
✓ reklaim-estimator-vercel.zip (305.6 KB) – downloaded
✓ reklaim-estimator-lovable.zip (305.6 KB) – downloaded

🔜 Next Steps

Extract Estimator system prompt and create OpenClaw skill

Build Portfolio Manager calculation engine as local script

Create dashboard for visualizing backlog and capacity

Waiting on Joana's email response for additional context

blocked

๐Ÿ“ How to Modify SOUL.md, USER.md, AGENTS.md

Location: /Users/greg/.openclaw/workspace/

• SOUL.md – who Greg is (personality, style, boundaries)
• USER.md – who you are (name, preferences, context)
• AGENTS.md – how Greg behaves (memory protocol, heartbeats, safety)
• TOOLS.md – local notes (contacts, device names)

How to modify: Direct edit in any text editor; changes take effect next turn. Or ask Greg: "Update SOUL.md to be more concise."

📋 JSON Files & File-Based Memory

Location: /Users/greg/.openclaw/workspace/memory/

• YYYY-MM-DD.md – daily logs (what happened each day)
• MEMORY.md – curated long-term insights

How Greg uses them: Every session loads today's and yesterday's logs. Main sessions also load MEMORY.md. memory_search does semantic search across all memory files.

You can read or edit them directly, or ask Greg to update them; they persist across sessions and restarts.
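As a rough sketch of that load order (paths follow the locations listed above; the function itself is an assumption for illustration, not OpenClaw internals):

```python
# Hypothetical sketch of the session memory load described above.
from datetime import date, timedelta
from pathlib import Path

MEMORY_DIR = Path.home() / ".openclaw" / "workspace" / "memory"

def session_context(today: date, main_session: bool = True) -> list:
    """Every session loads today's and yesterday's daily logs;
    main sessions additionally load the curated MEMORY.md."""
    files = [
        MEMORY_DIR / f"{today.isoformat()}.md",                        # today
        MEMORY_DIR / f"{(today - timedelta(days=1)).isoformat()}.md",  # yesterday
    ]
    if main_session:
        files.append(MEMORY_DIR / "MEMORY.md")
    return files  # a real loader would skip files that don't exist yet
```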

📂 Drive Folder Access – How to Use It

Best uses for your workflow:
• Akira documentation – Estimator/Portfolio Manager specs
• Chat exports – ChatGPT/Claude as text files
• Research papers – Gartner PDFs for summarization
• Reference docs – anything for Greg's context

How: Use gog skill to search/read Drive files, or tell Greg "read the doc at [link]"

Recommendation: Create a Greg-Context folder in Drive for docs you want Greg to reference.

Current: Joana shared folder at Akira PMO: Rebuilding Estimator & Portfolio Manager

🤖 Sub-Agents vs Multiple OpenClaw Instances

| Approach | Pros | Cons |
|---|---|---|
| Sub-agents (within Greg) | Shared memory, Greg orchestrates, simpler | Less isolation |
| Multiple instances (Greg + Akira) | Full isolation, separate identities | More setup, no shared context |

Recommendation for Akira:
• Short-term: use sub-agents within Greg for Estimator/Portfolio Manager tasks
• Long-term: if Akira becomes a product for others, spin it out to its own instance

📅 Reminder set: revisit sub-agents decision in 24 hours (Feb 7, 2026 ~9:22 AM)

⚖️ Value Prioritization – The Pareto Engine

🎯 Core Principle: The 80/20 rule – 80% of human-perceived value comes from 20% of the features. This depends on judgment: skillfully assessing the situation and human preference.

📊 WHAT'S PRESENT IN AKIRA

| Component | What It Does | Limitation |
|---|---|---|
| Stack Ranking | High-priority features (from PM/customer) ordered by importance | Assumes priorities already known |
| Effort Estimation | Story Points × 2 × 1.7 drag factor = hours | Estimates effort, not value |
| Priority × Effort Matrix | Cross-reference: High Priority + Low Effort → ranked higher | Requires human-supplied priorities |
| Promise/Stretch Model | Creates commitment clarity (guaranteed vs aspirational) | About commitment, not discovery |
| Hexagon of Success | 6 criteria for trade-off decisions | Balances competing concerns, doesn't rank features |

💡 The Explicit Value Algorithm:

1. Human identifies HIGH PRIORITY features
2. Estimator calculates LEVEL OF EFFORT
3. Cross-reference: High Priority ∩ Low Effort = HIGHEST VALUE
4. Result: Deliver more perceived value in less time

This is also the core engine for Exec.AI Executive Prioritizer.
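A minimal sketch of that cross-reference in Python; the feature names, priority scores, and effort numbers below are invented for illustration:

```python
# Explicit value algorithm: human-supplied priority first, then lowest effort.

def prioritize(features: list) -> list:
    """High priority + low effort floats to the top (most value per hour)."""
    return sorted(features, key=lambda f: (-f["priority"], f["effort_hours"]))

backlog = [
    {"name": "export-to-pdf", "priority": 3, "effort_hours": 40},
    {"name": "sso-login",     "priority": 3, "effort_hours": 8},
    {"name": "dark-mode",     "priority": 1, "effort_hours": 4},
]
ranked = prioritize(backlog)
# sso-login ranks first: same priority as export-to-pdf, far less effort
```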

โŒ WHAT'S MISSING โ€” THE GAP

Preference Elicitation

How does AI learn what the human actually values?

Value-Scoring Model

How do you quantify "human-perceived value"?

Prioritization Algorithm

Given N features, how do you identify the vital 20%?

Feedback Loops

Does the system learn "we thought X was high-value but client cared more about Y"?

🔮 THE MISSING PIECE: To-Do Item #10 – "Explain Profiler & Multi-Agent Distribution Product Customizer"

This is where Tony will explain the Human Profiler Engine: the mechanism for assessing perceived value based on understanding the human. This completes the Pareto judgment loop.

Status: Waiting for Tony's explanation session

๐Ÿ“ Summary: The Estimator is really an effort estimator, not a value estimator. It assumes priority input from humans. The Profiler fills the gap by modeling human preference โ€” turning "what does this person actually care about?" into actionable priority weights.

🚀 Why Auto Pilot Projects (APP) Works for Multi-Agent Orchestration

👁️ SHOW ME DON'T TELL ME

PERFECT for AI. Agents hallucinate completion; evidence-based verification catches this.

โฑ๏ธ HOUR-BASED CYCLES

AI works 24/7. 12-hour "sprints" iterate roughly 28× faster than 2-week human sprints.

🎯 PROMISE VS STRETCH

Forces unambiguous scope. Promise = guaranteed, Stretch = aspirational. No hallucinated scope.

๐Ÿ—๏ธ HIERARCHICAL COORDINATION

Program Manager → Flow Coordinators → Squads. Mirrors supervisor/worker patterns.

✅ REQUIREMENTS VERIFIER

"Vibe prototype vs squad output" = automated diff/testing. Continuous alignment.

🚨 AGENT 215 PROTOCOL

Idle detection is critical. No commits for 2+ standups → diagnostic intervention.
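One way such an idle check could look; the commits-per-standup data shape and threshold handling are assumptions, not the actual protocol:

```python
# Agent 215-style idle detection: a trailing run of zero-commit standups
# at or above the threshold triggers a diagnostic intervention.

def needs_intervention(commits_per_standup: list, threshold: int = 2) -> bool:
    idle = 0
    for count in reversed(commits_per_standup):  # walk back from latest standup
        if count != 0:
            break
        idle += 1
    return idle >= threshold
```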

⚠️ ADAPTATIONS NEEDED

No Ego

Agents don't self-correct from peer pressure → explicit failure handling + retry logic

Structured I/O

Agents need structure → JSON-formatted status reports

Auto Verification

Agents can't eyeball → automated tests, visual diff, deploy checks

No Reflection

Agents don't learn from retros → prompt refinement, context memory

Infinite Loops

Agents can loop forever → tighter timeouts, cost guardrails

Token Burn

API burn is real → Portfolio Manager needs hard limits
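For the Structured I/O adaptation above, a JSON status report might look like the following; every field name here is a hypothetical suggestion, not an agreed schema:

```python
import json

def status_report(agent: str, cycle: str, phase: str,
                  promise_done: bool, stretch_done: bool,
                  blockers: list) -> str:
    """Serialize one agent status update as machine-parseable JSON."""
    return json.dumps({
        "agent": agent,
        "cycle": cycle,
        "phase": phase,
        "promise_complete": promise_done,  # guaranteed scope
        "stretch_complete": stretch_done,  # aspirational scope
        "blockers": blockers,
        # flag for the coordinator: promise at risk and something is blocking
        "needs_intervention": bool(blockers) and not promise_done,
    })
```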

⚡ THE KILLER FEATURE: See It Cycle at Machine Speed

See where you want to go → See where you are → See how to get there → ↻ Every 30 min

Humans can't run this loop every 30 minutes. AI agents can. That's the leverage.

💡 Bottom Line: APP is built on verification, not trust. That's why it works better for AI agents than humans. It's one of the few human methodologies that translates because it doesn't rely on ego, peer pressure, or human judgment.

🎯 View Full See It Cycle Organization Chart: Akira PMO See It Cycle Organization

🔟 10X Coder – Sub-Agent Protocol

Learning

First sub-agent in the Chief of Staff team. Uses the Two-Phase Spawn Protocol with the Contract system. The workout-images-001 cycle was the first real test; results were mixed but the learnings valuable.

📊 Cycle: workout-images-001 Retrospective

Feb 7, 2026 – first production cycle for the 10X Coder agent

✅ WHAT WORKED

Contract System: Clear acceptance criteria with Promise/Stretch separation. Explicit "ACCEPT/MODIFY/DECLINE" gate before work starts. Defined success metrics upfront (41+ images = 50% coverage).

Two-Phase Protocol: Phase 0 (acceptance) prevented rushing into work. Gave opportunity to catch scope issues before burning cycles. Coordinator approval ("BEGIN WORK") created accountability checkpoint.

Technical Context: Contract included specific file paths, API details, naming conventions. Wger API integration delivered 24 valid images. The Promise deliverable (update exercises.json, update UI) was completed.

โŒ WHAT FAILED

Frame Extraction Approach: Card photo cropping was fundamentally broken. ffmpeg grabbed random frames (carpet, wrong exercises). Only 1 of 14 extractions was correct (7% accuracy). Root cause: no validation step before marking work as "done".

No Intermediate Checkpoints: Lost visibility during long-running work. Couldn't catch the frame extraction issue until final QA. No "show me 3 sample extractions before proceeding" gate.

Session Label Confusion: Used both 10x-images-001 and 10x-images-002 without clear handoff. Hard to trace what happened in which session post-facto.

Stretch Became Mixed with Promise: Card cropping was listed as "Stretch" but was attempted as if it were Promise work. Work should have stopped after the Wger images, with the Stretch declared incomplete.

📈 METRICS

| Metric | Target | Actual | Status |
|---|---|---|---|
| Wger API coverage | 50% (41) | 29% (24) | ⚠️ Below target |
| Card extraction accuracy | N/A (Stretch) | 7% (1/14) | ❌ Failed |
| Total coverage | – | 46% (38/82) | ⚠️ Partial |
| Deployment | ✅ | ✅ | ✅ Complete |

🔧 PROCESS IMPROVEMENTS

| Issue | Fix |
|---|---|
| No intermediate validation | Add checkpoint gates: "After 5 images, pause for QA" |
| Session tracking confusion | Single session label per cycle, numbered phases (001-phase-1, 001-phase-2) |
| Stretch/Promise bleed | Explicit "STRETCH START" gate requiring coordinator approval |
| Frame extraction failure | Vision-based validation: "Show sample crops before batch" |
| Lost visibility | Mandatory memory/checkpoint.md updates every 30 min |
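The checkpoint-gate fix above can be sketched generically; the gate percentages and the QA callback are illustrative defaults, not part of the protocol:

```python
# Sketch of checkpoint gating: pause batch work for QA at fixed completion
# percentages instead of running to the end unobserved.

def run_with_gates(items, process, qa_gate, gates=(0.25, 0.50, 0.75)):
    """Process items, calling qa_gate(done, total) at each gate percentage.
    If the gate returns False, stop early instead of burning the cycle."""
    total = len(items)
    pending = sorted(round(g * total) for g in gates)
    results = []
    for done, item in enumerate(items, start=1):
        results.append(process(item))
        if pending and done >= pending[0]:
            pending.pop(0)
            if not qa_gate(done, total):  # QA rejected the sample: halt
                break
    return results
```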

💡 Verdict: The contract system works: it forced clarity upfront and created accountability. The failure was in execution monitoring and validation gates. The Stretch item (card cropping) should never have been attempted without a proven approach.

Recommendation: For the next cycle, add explicit QA checkpoints at 25%, 50%, 75% completion. Stretch items require separate "STRETCH APPROVED" confirmation.