Council Briefing

Strategic Deliberation
North Star & Strategic Context

This file combines the overall project mission (North Star) and summaries of key strategic documents for use in AI prompts, particularly for the AI Agent Council context generation.

Last Updated: December 2025

---

North Star: To build the most reliable, developer-friendly open-source AI agent framework and cloud platform—enabling builders worldwide to deploy autonomous agents that work seamlessly across chains and platforms. We create infrastructure where agents and humans collaborate, forming the foundation for a decentralized AI economy that accelerates the path toward beneficial AGI.

---

Core Principles:
  1. **Execution Excellence** - Reliability and seamless UX over feature quantity
  2. **Developer First** - Great DX attracts builders; builders create ecosystem value
  3. **Open & Composable** - Multi-agent systems that interoperate across platforms
  4. **Trust Through Shipping** - Build community confidence through consistent delivery

---

Current Product Focus (Dec 2025):
  • **ElizaOS Framework** (v1.6.x) - The core TypeScript toolkit for building persistent, interoperable agents
  • **ElizaOS Cloud** - Managed deployment platform with integrated storage and cross-chain capabilities
  • **Flagship Agents** - Reference implementations (Eli5, Otaku) demonstrating platform capabilities
  • **Cross-Chain Infrastructure** - Native support for multi-chain agent operations via Jeju/x402


---

    ElizaOS Mission Summary: ElizaOS is an open-source "operating system for AI agents" aimed at decentralizing AI development. Built on three pillars: 1) The Eliza Framework (TypeScript toolkit for persistent agents), 2) AI-Enhanced Governance (building toward autonomous DAOs), and 3) Eliza Labs (R&D driving cloud, cross-chain, and multi-agent capabilities). The native token coordinates the ecosystem. The vision is an intelligent internet built on open protocols and collaboration.

    ---

    Taming Information Summary: Addresses the challenge of information scattered across platforms (Discord, GitHub, X). Uses AI agents as "bridges" to collect, wrangle (summarize/tag), and distribute information in various formats (JSON, MD, RSS, dashboards, council episodes). Treats documentation as a first-class citizen to empower AI assistants and streamline community operations.
    ---

    Daily Strategic Focus
    Operational momentum is bifurcated: public trust friction intensified around the snapshot-based token migration even as core engineering surfaced a bold Jeju/Eliza Cloud future. Meanwhile, daily GitHub throughput dipped to near-idle, risking execution credibility at month-end.
    Monthly Goal
    December 2025: Execution excellence—complete token migration with high success rate, launch ElizaOS Cloud, stabilize flagship agents, and build developer trust through reliability and clear documentation.

    Key Deliberations

    Token Migration Clarity & Trust Repair
    Snapshot-locked migration policy is generating repeated user failure modes ("max amount reached" / "0 eligible") and compounding reputational damage during a major price drawdown. The Council must decide how to reconcile strict migration rules with the North Star imperative of reliability and developer/community trust.
    Q1
    Do we hold the line on a strict snapshot-only migration, or introduce a controlled remediation path for edge cases to protect trust?
    • Odilitime (💬-discussion / 🥇-partners): "As a policy we're not migrating any purchases after snapshot."
    • Odilitime (🥇-partners): "'max amount reached'... means that wallet is not in the snapshot."
    1. Maintain strict snapshot-only migration with improved tooling and messaging (no policy change).
       Preserves on-chain/accounting simplicity, but requires exceptional UX/docs to avoid continued trust erosion.
    2. Add a narrow remediation process (case review + proofs) for specific wallet/connectivity failures without opening post-snapshot buys.
       Reduces legitimate user harm and scam susceptibility, at the cost of operational overhead and policy complexity.
    3. Open a time-limited secondary window with broader eligibility rules to reset sentiment quickly.
       Short-term goodwill boost, but risks dilution narratives, legal/exchange complications, and a precedent of policy reversals.
    4. Other / More discussion needed / None of the above.
    Q2
    What is the minimum public-facing explanation we must ship now to reduce repeat support load and scam risk while tokenomics remain partially undisclosed?
    • Borko (🥇-partners): "You're mistaking silence for something we're not sharing yet externally."
    • User reports across Discord: repeated confusion about eligibility and migrator errors.
    1. Publish a concise Migration Canon (eligibility rules + error glossary + official links) without discussing future token plans.
       Cuts support churn immediately while keeping strategic token design confidential.
    2. Publish the Migration Canon plus a high-level token utility statement (one paragraph) to anchor expectations.
       May stabilize the narrative without overcommitting, but requires careful wording to avoid future contradiction.
    3. Delay new public docs until tokenomics and Jeju token alignment are ready to disclose together.
       Reduces rework risk, but prolongs confusion and increases the attack surface for impersonation/scams.
    4. Other / More discussion needed / None of the above.
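The error glossary in option 1 could start as a small lookup table. A minimal TypeScript sketch, seeded with the two error strings reported in Discord ("max amount reached", "0 eligible"); the function name, fallback copy, and explanation wording are illustrative assumptions, not part of any shipped migrator:

```typescript
// Hypothetical Migration Canon error glossary: maps raw migrator errors to
// plain-language explanations. "max amount reached" meaning "wallet not in
// the snapshot" is the documented behavior; the phrasing here is illustrative.
const migrationErrorGlossary: Record<string, string> = {
  "max amount reached":
    "This wallet is not in the migration snapshot, so it has no eligible balance.",
  "0 eligible":
    "The connected wallet holds no tokens that were captured in the snapshot.",
};

// Resolve a raw migrator error to a user-facing explanation, with a safe
// default that also reinforces anti-scam guidance.
function explainMigrationError(rawError: string): string {
  return (
    migrationErrorGlossary[rawError.toLowerCase().trim()] ??
    "Unrecognized error. Use only official links and never share your seed phrase."
  );
}
```

Keying the glossary on the raw error string lets support staff, docs, and any in-app tooltip share one source of truth for eligibility messaging.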
    Q3
    How should we define success for “high success rate” migration in the monthly directive—policy compliance or user-perceived fairness?
    • Monthly Directive: "complete token migration with high success rate"
    • Community: significant frustration around ineligibility and wallet snapshot constraints (Dec 25-27 logs).
    1. Success = % of eligible snapshot wallets migrated successfully (technical completion metric).
       Optimizes for execution excellence, but may ignore reputational cost among ineligible/edge-case users.
    2. Success = eligible migration rate plus reduction in support incidents and scam reports (trust metric).
       Aligns with North Star trust-building, but demands investment in support ops and comms immediately.
    3. Success = market-facing recovery indicators (sentiment/price stability) after migration completion.
       Targets narrative outcomes, but risks conflating product excellence with market forces outside control.
    4. Other / More discussion needed / None of the above.
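Options 1 and 2 differ only in what counts toward the score. A hypothetical sketch of the two metrics side by side (all field names, the 50/50 weighting, and the incident-drop formula are assumptions for illustration):

```typescript
// Hypothetical month-end migration scorecard. Option 1 is the pure technical
// completion rate; option 2 folds in the drop in support incidents as a
// trust signal.
interface MigrationStats {
  eligibleWallets: number; // wallets captured in the snapshot
  migratedWallets: number; // eligible wallets that completed migration
  supportIncidentsStart: number;
  supportIncidentsEnd: number;
}

// Option 1: technical completion metric.
function completionRate(s: MigrationStats): number {
  return s.eligibleWallets === 0 ? 0 : s.migratedWallets / s.eligibleWallets;
}

// Option 2: blend completion with the relative drop in support incidents.
function trustScore(s: MigrationStats, weight = 0.5): number {
  const incidentDrop =
    s.supportIncidentsStart === 0
      ? 0
      : Math.max(0, 1 - s.supportIncidentsEnd / s.supportIncidentsStart);
  return weight * completionRate(s) + (1 - weight) * incidentDrop;
}
```

Whatever definition the Council picks, writing it down as a formula forces the denominator question (eligible wallets only, or all wallets that attempted?) to be answered explicitly.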
    ElizaOS Cloud Reliability & Developer Experience Gaps
    Cloud is live and attracting projects, but edge-case failures (agent naming, deployment credentials, UI multistep streaming) are breaking the “seamless UX” promise. The Council should prioritize a small set of high-impact hardening fixes that convert early adopters into advocates.
    Q1
    Which reliability breach is most existential to developer trust this week: agent creation validation, deploy pipeline errors, or chat/streaming UX regressions?
    • DorianD (💬-coders): agent names like "null" and numeric values cause exceptions; "$" works.
    • DorianD (💬-coders): "ECR credentials error" during `elizaos deploy`.
    1. Fix agent creation/validation and schema constraints first (prevent corrupt or crashing states).
       Reduces user-facing fatal errors at the entry point, improving first-run success and retention.
    2. Fix deploy pipeline reliability first (ECR/registry/auth) to ensure “create → deploy” works end-to-end.
       Maximizes perceived platform viability for serious builders evaluating Cloud as infrastructure.
    3. Fix chat streaming + multistep UI parity with Otaku first to improve the flagship experience.
       Improves product delight and demos, but may leave foundational breakages unresolved.
    4. Other / More discussion needed / None of the above.
    Q2
    Do we formalize and enforce naming/metadata constraints at the API boundary (server) or in the client UX layer first?
    • DorianD: numeric agent names produce client-side exceptions; "null" behaves inconsistently between save and edit.
    1. Enforce constraints server-side immediately (canonical validation + clear error codes).
       Prevents bad states across all clients and aligns with reliability-first engineering.
    2. Patch the client UX first (friendly validation) while scheduling server hardening next sprint.
       Fastest perceived fix, but risks other clients/CLI still creating invalid states.
    3. Do both in one coordinated change with backward-compat migration for already-bad records.
       Most robust, but higher coordination cost and potential to delay urgent relief.
    4. Other / More discussion needed / None of the above.
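The server-side enforcement in option 1 can be a single canonical validator returning stable error codes that every client (web UI, CLI, API) shares. A minimal sketch; the rule set and code names below are assumptions derived from the reported failures ("null" behaving inconsistently, numeric names crashing, "$" working), not ElizaOS Cloud's actual API:

```typescript
// Hypothetical canonical agent-name validator, enforced at the API boundary.
// Rules are illustrative: reject empty names, reserved words like "null"
// that behave inconsistently between save and edit, and purely numeric
// names that have produced client-side exceptions.
type ValidationResult =
  | { ok: true; name: string }
  | { ok: false; code: string; message: string };

const RESERVED = new Set(["null", "undefined", "true", "false"]);

function validateAgentName(raw: unknown): ValidationResult {
  if (typeof raw !== "string" || raw.trim().length === 0) {
    return { ok: false, code: "NAME_EMPTY", message: "Agent name must be a non-empty string." };
  }
  const name = raw.trim();
  if (RESERVED.has(name.toLowerCase())) {
    return { ok: false, code: "NAME_RESERVED", message: `"${name}" is a reserved word.` };
  }
  if (/^\d+$/.test(name)) {
    return { ok: false, code: "NAME_NUMERIC", message: "Agent name cannot be purely numeric." };
  }
  return { ok: true, name };
}
```

Because the same function runs for every entry point, a bad name is rejected once with a stable code that each client can translate into friendly copy, which is why server-first enforcement closes the gap client-only patches leave open.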
    Q3
    How do we convert emerging Cloud projects into ecosystem proof without slowing shipping velocity?
    • Discord: "Zoria has 'bonded' and is identified as an Eliza Cloud project."
    • Community: need projects to identify themselves as built on "elizaos cloud" for distribution.
    1. Launch a lightweight “Built on Eliza Cloud” badge + showcase channel now (manual curation).
       Creates immediate social proof with minimal engineering, reinforcing trust through visible adoption.
    2. Implement an automated identification system in Cloud (metadata + public directory).
       Scales distribution long-term, but adds near-term engineering load during the stability push.
    3. Defer showcasing until Cloud error rates drop below a defined SLO threshold.
       Avoids amplifying a fragile product, but misses a window to rebuild narrative momentum.
    4. Other / More discussion needed / None of the above.
    Jeju Distributed Cloud Trajectory vs. Month-End Execution Risk
    Jeju’s vision (TEE-secured, proof-of-cloud, sharded KMS, serverless SQLite) is strategically aligned with cross-chain, unstoppable agents—but the immediate signal shows low day-to-day repo activity and unresolved Cloud ergonomics. The Council must decide how to sequence visionary platform work against December’s execution-excellence mandate.
    Q1
    What sequencing maximizes North Star alignment: accelerate Jeju R&D now, or pause scope to harden Cloud + flagship agents first?
    • shaw (core-devs): "jeju" described as a fully distributed cloud with TEE, proof-of-cloud, key sharding, distributed KMS; "eliza cloud" will run on it.
    • GitHub daily (Dec 27-28): "minimal activity... 0 merged PRs... 1 active contributor"
    1. Prioritize Cloud hardening and flagship stability through end-of-month; keep Jeju as design-only work.
       Improves near-term reliability and trust, but risks losing momentum on differentiated decentralization.
    2. Run dual-track: a small Jeju strike team while the main force focuses on Cloud reliability SLOs.
       Balances narrative and execution, but requires disciplined coordination to avoid fragmented delivery.
    3. Accelerate Jeju implementation immediately to create a major narrative catalyst, accepting short-term Cloud roughness.
       Could create a breakthrough story, but conflicts with “Execution Excellence” and may worsen developer churn.
    4. Other / More discussion needed / None of the above.
    Q2
    What is the Council’s desired public posture on Jeju details while tokenomics remain partially undisclosed?
    • DorianD (🥇-partners): asked if elizaos will be the native token of Jeju.
    • Borko (🥇-partners): token plans exist but are not being shared externally yet.
    1. Share the technical Jeju vision openly, but explicitly separate it from token commitments (no promises).
       Supports open-source credibility while reducing implied token guarantees.
    2. Keep Jeju details mostly internal until token alignment and Cloud reliability are ready for a unified launch message.
       Avoids mixed signals, but forfeits an opportunity to redirect sentiment toward engineering strength.
    3. Announce a firm token alignment position now (e.g., ElizaOS is Jeju’s native token) to quell uncertainty.
       May calm token debates short-term, but creates strategic lock-in before architecture and policy are final.
    4. Other / More discussion needed / None of the above.
    Q3
    How should we handle the distributed SQLite initiative to avoid “cool tech” drift and instead deliver developer-visible value quickly?
    • shaw (core-devs): building a distributed SQLite; naming discussion ("sqlit", "sqliite", "ShawQLite", "sq-lit").
    1. Keep it as an internal dependency of Jeju/Cloud (no separate branding) until it powers a clear Cloud feature.
       Reduces distraction and aligns R&D with product outcomes.
    2. Open-source it as a standalone component with a crisp roadmap and benchmarks.
       Attracts contributors and credibility, but increases maintenance and support surface area.
    3. Defer distributed SQLite until core Cloud storage paths are stable; use existing managed stores short-term.
       Maximizes execution excellence now, but delays key decentralization and cost/latency advantages.
    4. Other / More discussion needed / None of the above.