Council Briefing

Strategic Deliberation
North Star & Strategic Context

This file combines the overall project mission (North Star) and summaries of key strategic documents for use in AI prompts, particularly for the AI Agent Council context generation.

Last Updated: December 2025

---

North Star: To build the most reliable, developer-friendly open-source AI agent framework and cloud platform—enabling builders worldwide to deploy autonomous agents that work seamlessly across chains and platforms. We create infrastructure where agents and humans collaborate, forming the foundation for a decentralized AI economy that accelerates the path toward beneficial AGI.

---

Core Principles:
  1. **Execution Excellence** - Reliability and seamless UX over feature quantity
  2. **Developer First** - Great DX attracts builders; builders create ecosystem value
  3. **Open & Composable** - Multi-agent systems that interoperate across platforms
  4. **Trust Through Shipping** - Build community confidence through consistent delivery

---

Current Product Focus (Dec 2025):
  • **ElizaOS Framework** (v1.6.x) - The core TypeScript toolkit for building persistent, interoperable agents
  • **ElizaOS Cloud** - Managed deployment platform with integrated storage and cross-chain capabilities
  • **Flagship Agents** - Reference implementations (Eli5, Otaku) demonstrating platform capabilities
  • **Cross-Chain Infrastructure** - Native support for multi-chain agent operations via Jeju/x402


---

ElizaOS Mission Summary: ElizaOS is an open-source "operating system for AI agents" aimed at decentralizing AI development. Built on three pillars: 1) The Eliza Framework (TypeScript toolkit for persistent agents), 2) AI-Enhanced Governance (building toward autonomous DAOs), and 3) Eliza Labs (R&D driving cloud, cross-chain, and multi-agent capabilities). The native token coordinates the ecosystem. The vision is an intelligent internet built on open protocols and collaboration.

---

Taming Information Summary: Addresses the challenge of information scattered across platforms (Discord, GitHub, X). Uses AI agents as "bridges" to collect, wrangle (summarize/tag), and distribute information in various formats (JSON, MD, RSS, dashboards, council episodes). Treats documentation as a first-class citizen to empower AI assistants and streamline community operations.

---

Daily Strategic Focus
A major Jeju+Babylon cloud marketplace breakthrough is now in tension with execution excellence: reliability, SLAs, and clear public definitions (migration eligibility + official affiliations) must be hardened immediately to protect developer trust and the Cloud launch narrative.

---

Monthly Goal
December 2025: Execution excellence—complete the token migration with a high success rate, launch ElizaOS Cloud, stabilize the flagship agents, and build developer trust through reliability and clear documentation.

Key Deliberations

Jeju+Babylon Cloud Marketplace: From Breakthrough to Trustworthy Platform

Shaw reports Cloud services running on both Jeju and Babylon, positioning ElizaOS as a decentralized Vercel alternative with DNS, cache, and SQLite compatibility, plus claimed cost reductions. The Council must now convert this vision into an operationally credible product (provider SLAs, predictable performance, and clear market positioning).

Q1: What is the Council-approved "reliability contract" for the Cloud marketplace (what we guarantee vs. what providers guarantee) at launch?
  • core-devs: Shaw described the system as a compute marketplace that selects "cheapest and fastest" resources and includes DNS/cache/SQLite.
  • Q/A: Odilitime asked about SLAs; Shaw replied: "I do not, but providers can".
  1. Ship as a pure marketplace: ElizaOS guarantees scheduling/observability only; providers own SLAs.
     Fastest to launch, but increases reputational risk if users blame ElizaOS for provider failures.
  2. Offer a baseline ElizaOS SLA for the core control plane + routing; providers optionally add stronger SLAs for compute.
     Balances speed and trust; requires immediate investment in monitoring, incident response, and uptime targets.
  3. Delay broad launch until ElizaOS can provide end-to-end SLAs for the full developer experience.
     Maximizes long-term trust but risks losing momentum and mindshare during the Cloud narrative window.
  4. Other / More discussion needed / None of the above.
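The "cheapest and fastest" selection described above can be sketched as a simple weighted ranking over candidate providers. This is a hypothetical illustration only: the `Provider` fields, the uptime gate, and the 50/50 weighting are assumptions for the sketch, not the actual Jeju scheduler.

```typescript
// Hypothetical sketch of "cheapest and fastest" provider selection.
// All field names, weights, and thresholds are illustrative assumptions.
interface Provider {
  id: string;
  pricePerHour: number; // USD per compute-hour
  p95LatencyMs: number; // observed p95 request latency
  uptime: number;       // rolling 30-day uptime, 0..1
}

// Lower score is better. Providers below a minimum uptime are filtered out
// first, so "cheapest" cannot win over a clearly unreliable provider.
function pickProvider(providers: Provider[], minUptime = 0.99): Provider | null {
  const eligible = providers.filter((p) => p.uptime >= minUptime);
  if (eligible.length === 0) return null;
  const maxPrice = Math.max(...eligible.map((p) => p.pricePerHour));
  const maxLatency = Math.max(...eligible.map((p) => p.p95LatencyMs));
  const score = (p: Provider) =>
    0.5 * (p.pricePerHour / maxPrice) + 0.5 * (p.p95LatencyMs / maxLatency);
  return eligible.reduce((best, p) => (score(p) < score(best) ? p : best));
}
```

Note how the uptime gate encodes the reliability-contract question itself: whatever SLA posture the Council chooses, the scheduler needs some floor below which price and speed stop mattering.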
Q2: Which technical wedge should define the Cloud's differentiation in the next 30 days: cost, composability, or security/TEE verifiability?
  • Shaw: positioned the value proposition as a "40% reduction in cloud bills for web2 developers."
  • 2025-12-27: Jeju described as using TEEs, proof-of-cloud, key sharding, and a distributed KMS-like system.
  1. Lead with cost ("40% cheaper"), keeping security claims minimal until validated.
     Strong acquisition hook, but any pricing inconsistency can erode trust quickly.
  2. Lead with composability (Vercel/K8s/S3/Redis/SQLite compatibility) to win developers first.
     Best aligned with Developer First, but may be less viral than cost messaging.
  3. Lead with security/verifiability (TEE + key sharding) as the "unstoppable agents" foundation.
     Differentiates strongly in Web3, but requires rigorous proof and careful messaging to avoid overpromising.
  4. Other / More discussion needed / None of the above.

Q3: Should we formally pursue "POKT/poktroll-inspired" architecture elements now, or treat them as later-stage research?
  • Sayonara suggested POKT Network's poktroll GitHub repo as inspiration for the cloud marketplace development.
  1. Integrate immediately: assign a short spike to map poktroll primitives to Jeju marketplace needs.
     May accelerate design maturity but risks scope creep during a reliability-critical month.
  2. Defer integration; extract only design lessons (docs + threat model) without committing to code changes.
     Protects execution excellence while still improving architecture decisions.
  3. Avoid alignment; keep the Jeju architecture independent to reduce coupling and external design constraints.
     Simplifies ownership, but could miss proven patterns for decentralized infra markets.
  4. Other / More discussion needed / None of the above.
Token Migration Friction: Snapshot Policy, UX Failures, and Trust Recovery

Users continue reporting migration blockers (wallet connection issues, confusion over snapshot ineligibility, and "max amount reached" errors), while policy states that no post-snapshot purchases are eligible. This is a direct threat to "trust through shipping" and must be addressed with clearer eligibility messaging, safer support pathways, and targeted technical fixes.

Q1: Do we keep the strict snapshot policy as-is, or create a limited remediation path for edge cases without reopening broad eligibility?
  • Odilitime (2025-12-27): "As a policy we're not migrating any purchases after snapshot."
  • Users (2025-12-27/28): repeated "max amount reached" errors and wallet eligibility confusion.
  1. Keep the strict snapshot policy; invest only in better UX copy and automated eligibility diagnostics.
     Preserves tokenomics predictability but risks continued community resentment and support load.
  2. Add a narrowly scoped remediation program (manual review/appeals) for verifiable technical blockers (e.g., unsupported wallets).
     Improves trust with manageable scope, but introduces operational overhead and precedent risk.
  3. Reopen a broader migration window for post-snapshot buyers with a discounted/penalized rate cap.
     May reduce hostility but can destabilize expectations and amplify token distribution disputes.
  4. Other / More discussion needed / None of the above.

Q2: What is the single most important migration UX fix to ship next: error clarity, wallet support breadth, or support-channel hardening against impersonation?
  • Emma (2025-12-28): "I have a huge amount of AI16Z, and I am getting max amount reached error."
  • Fataliti (2025-12-28): "coins are visible and cannot be exchanged" when connecting Phantom.
  • Odilitime (2025-12-27): "max amount reached" means the wallet is not in the snapshot.
  1. Prioritize error clarity: replace ambiguous errors with deterministic, user-specific reasons and next steps.
     A fast win that reduces support burden and aligns with Execution Excellence.
  2. Prioritize wallet support breadth (WalletConnect coverage, better Phantom flows), even if it delays other work.
     Addresses root access issues, but adds integration risk during a sensitive migration.
  3. Prioritize support-channel security and verification (signed links, verified staff workflow) to prevent social-engineering loss events.
     Protects users and reputation, but does not directly resolve technical migration blockers.
  4. Other / More discussion needed / None of the above.
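The "deterministic, user-specific reasons" option can be made concrete with a small state-to-message mapping, so a wallet never sees a generic "max amount reached" without an explanation and a next step. A minimal sketch, assuming hypothetical eligibility states and copy; none of these names come from the actual migration service:

```typescript
// Hypothetical sketch: map migration eligibility states to explicit,
// user-specific messages. States, copy, and types are illustrative only.
type EligibilityState =
  | "ELIGIBLE"
  | "NOT_IN_SNAPSHOT"        // wallet held no eligible tokens at snapshot time
  | "POST_SNAPSHOT_PURCHASE" // tokens acquired after the snapshot cutoff
  | "UNSUPPORTED_WALLET";    // wallet type the flow cannot connect

interface MigrationError {
  code: EligibilityState;
  message: string;  // what happened, in plain language
  nextStep: string; // a concrete action the user can take
}

const ERRORS: Record<Exclude<EligibilityState, "ELIGIBLE">, MigrationError> = {
  NOT_IN_SNAPSHOT: {
    code: "NOT_IN_SNAPSHOT",
    message: "This wallet held no eligible tokens at the snapshot.",
    nextStep: "Connect the wallet that held tokens before the snapshot date.",
  },
  POST_SNAPSHOT_PURCHASE: {
    code: "POST_SNAPSHOT_PURCHASE",
    message: "These tokens were acquired after the snapshot and are not eligible.",
    nextStep: "See the eligibility FAQ; post-snapshot purchases cannot be migrated.",
  },
  UNSUPPORTED_WALLET: {
    code: "UNSUPPORTED_WALLET",
    message: "This wallet type is not yet supported by the migration flow.",
    nextStep: "Use a supported wallet, or contact verified support (never DMs).",
  },
};

// Resolve a deterministic error for a given state; ELIGIBLE yields none.
function explainEligibility(state: EligibilityState): MigrationError | null {
  return state === "ELIGIBLE" ? null : ERRORS[state];
}
```

The design point is that every non-eligible state gets its own code, message, and next step, which also gives support staff a stable vocabulary for triage.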
Q3: How should we publicly communicate migration constraints to restore trust: technical transparency, policy-first clarity, or forward-looking utility narrative?
  • General sentiment (2025-12-26): frustration around token value, snapshot confusion, and marketing limitations.
  • Support interactions (2025-12-28): users redirected to the support channel for recurring migration issues.
  1. Technical transparency: publish a postmortem-style explainer of the snapshot rationale and known failure modes.
     Builds credibility with developers but may inflame token-focused discourse short-term.
  2. Policy-first clarity: a short, strict eligibility FAQ + automated checker, minimizing debate.
     Reduces ambiguity and support churn, but may feel cold to affected holders.
  3. Utility narrative: emphasize Cloud/Jeju utility and the roadmap, framing migration as a step toward platform economics.
     Reorients attention to product value, but risks backlash if migration pain persists.
  4. Other / More discussion needed / None of the above.
Developer Trust Surface Area: Naming Bugs, Login/Deploy Errors, and Shipping Cadence

Reliability gaps are surfacing in core UX (agent-naming edge cases like "null" or numeric names causing exceptions) alongside Cloud login/deploy issues and streaming/UI roughness. Meanwhile, the latest daily GitHub activity is minimal (1 contributor on Dec 28–29), suggesting a cadence risk precisely when execution excellence is the directive.

Q1: What should be the Council's immediate reliability triage list: user-blocking Cloud issues first, core framework correctness, or flagship-agent experience polish?
  • DorianD (2025-12-27): agent names like "null" or numeric values cause errors; "$" works.
  • 2025-12-28 GitHub note: "minimal activity" with "1 active contributor" (Dec 28–29).
  1. Prioritize user-blocking Cloud issues (login/deploy/migration-adjacent flows) to protect platform adoption.
     Directly supports the Cloud launch, but may defer deeper framework quality work.
  2. Prioritize core framework correctness (input validation, SSE/streaming stability, tests) to prevent recurring regressions.
     Improves long-term reliability, but may not immediately reduce visible user pain.
  3. Prioritize flagship-agent polish (Otaku/Eli5 interaction quality) to anchor marketing and community belief.
     Boosts perception, but risks being seen as veneer if core bugs remain.
  4. Other / More discussion needed / None of the above.

Q2: Do we enforce stricter validation contracts at the API boundary (server rejects invalid agent names), at the client boundary (prevent entry), or both?
  • DorianD (2025-12-27): numeric agent names created a "client side exception"; "null" saved but behaved oddly.
  1. Server-side validation only (single source of truth; all clients protected).
     Most robust against unknown clients, but may feel harsher without client guidance.
  2. Client-side validation only (fast UX feedback; minimal backend change).
     A quick improvement, but leaves the API and integrations vulnerable to bad inputs.
  3. Both: the client prevents invalid entry and the server enforces with clear error codes and docs.
     Best practice for reliability; slightly more work, but aligns with Execution Excellence.
  4. Other / More discussion needed / None of the above.
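The dual-boundary option typically hinges on one shared validation rule that the client runs for instant feedback and the server re-runs as the source of truth. A minimal sketch motivated by the reported "null"/numeric failures; the function name, error codes, and rules are illustrative assumptions, not the ElizaOS API:

```typescript
// Illustrative shared agent-name validation, usable on both the client
// (immediate feedback) and the server (authoritative check).
// Rules and error codes are hypothetical, not the actual ElizaOS contract.
interface NameCheck {
  ok: boolean;
  code?: "EMPTY" | "RESERVED_WORD" | "NUMERIC_ONLY";
  hint?: string;
}

// Names that collide with JS literal serializations; the reported bug showed
// "null" being saved but then behaving oddly downstream.
const RESERVED = new Set(["null", "undefined", "nan", "true", "false"]);

function validateAgentName(raw: string): NameCheck {
  const name = raw.trim();
  if (name.length === 0) {
    return { ok: false, code: "EMPTY", hint: "Agent name cannot be empty." };
  }
  if (RESERVED.has(name.toLowerCase())) {
    return { ok: false, code: "RESERVED_WORD", hint: `"${name}" is reserved.` };
  }
  // Purely numeric names reportedly crashed the client; "$" remains valid,
  // matching the observed behavior.
  if (/^\d+$/.test(name)) {
    return { ok: false, code: "NUMERIC_ONLY", hint: "Name must contain a non-digit character." };
  }
  return { ok: true };
}
```

Sharing one rule keeps the two boundaries from drifting apart: the client can render `hint` inline, while the server rejects with the same `code` so any third-party client sees identical semantics.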
Q3: Given the short-term dip in visible repo activity, do we shift to a "stability lockdown" mode (merge freeze + bugfix-only) or keep feature throughput to maintain momentum?
  • Dec 28–29 GitHub activity: "minimal activity" and "1 active contributor".
  • Ongoing initiatives (Dec summaries): streaming support, Cloud integration, auth work, and a UI overhaul are in flight.
  1. Stability lockdown: freeze new features; focus on migration, Cloud reliability, and critical bug fixes.
     Maximizes trust through fewer regressions; may slow innovation and partner excitement.
  2. Balanced lane system: one lane for urgent reliability, one lane for essential roadmap features with stricter reviews.
     Maintains momentum while controlling risk; requires strong release-management discipline.
  3. Maintain feature throughput: keep shipping broadly to demonstrate progress and drown out negativity.
     Can energize the community, but increases regression probability during a trust-sensitive period.
  4. Other / More discussion needed / None of the above.