Council Briefing

Strategic Deliberation
North Star & Strategic Context

This file combines the overall project mission (North Star) with summaries of key strategic documents for use in AI prompts, particularly AI Agent Council context generation.

Last Updated: December 2025

---

North Star: To build the most reliable, developer-friendly open-source AI agent framework and cloud platform—enabling builders worldwide to deploy autonomous agents that work seamlessly across chains and platforms. We create infrastructure where agents and humans collaborate, forming the foundation for a decentralized AI economy that accelerates the path toward beneficial AGI.

---

Core Principles:
1. **Execution Excellence** - Reliability and seamless UX over feature quantity
2. **Developer First** - Great DX attracts builders; builders create ecosystem value
3. **Open & Composable** - Multi-agent systems that interoperate across platforms
4. **Trust Through Shipping** - Build community confidence through consistent delivery

---

Current Product Focus (Dec 2025):
  • **ElizaOS Framework** (v1.6.x) - The core TypeScript toolkit for building persistent, interoperable agents
  • **ElizaOS Cloud** - Managed deployment platform with integrated storage and cross-chain capabilities
  • **Flagship Agents** - Reference implementations (Eli5, Otaku) demonstrating platform capabilities
  • **Cross-Chain Infrastructure** - Native support for multi-chain agent operations via Jeju/x402


---

ElizaOS Mission Summary: ElizaOS is an open-source "operating system for AI agents" aimed at decentralizing AI development. Built on three pillars: 1) the Eliza Framework (a TypeScript toolkit for persistent agents), 2) AI-Enhanced Governance (building toward autonomous DAOs), and 3) Eliza Labs (R&D driving cloud, cross-chain, and multi-agent capabilities). The native token coordinates the ecosystem. The vision is an intelligent internet built on open protocols and collaboration.

---

Taming Information Summary: Addresses the challenge of information scattered across platforms (Discord, GitHub, X). Uses AI agents as "bridges" to collect, wrangle (summarize/tag), and distribute information in various formats (JSON, MD, RSS, dashboards, council episodes). Treats documentation as a first-class citizen to empower AI assistants and streamline community operations.

---

Daily Strategic Focus

End-of-month execution hinges on converting renewed token momentum into trust: de-risk the migration and clarify token/ecosystem messaging while shipping reliability upgrades (hooks, logging, plugins) that make Cloud onboarding frictionless.

Monthly Goal

December 2025: Execution excellence: complete the token migration with a high success rate, launch ElizaOS Cloud, stabilize flagship agents, and build developer trust through reliability and clear documentation.

Key Deliberations

Token Migration Integrity & Public Trust

Community sentiment surged with a major price move tied to Shaw’s return to X, but operational risk remains: migration confusion, wallet edge cases, and role-verification gaps are eroding trust and consuming support bandwidth.

Q1: What is the Council’s minimum acceptable success criterion for the AI16Z → ElizaOS migration (and what do we publicly commit to measure and report)?
• Discord (2025-12-29): "Snapshot has already occurred for the migration from AI16Z to ElizaOS"
• Discord (2025-12-28): "Multiple users reported problems with the migration process"
1. Commit to a quantified success target (e.g., % of eligible wallets migrated) and publish daily progress until close.
   Builds credibility via transparency, but forces rapid instrumentation and exposes short-term misses.
2. Commit to a deadline and a robust support playbook, but avoid publishing metrics until after stabilization.
   Reduces reputational risk from noisy numbers, but may feel opaque during peak attention.
3. Treat migration as an ongoing service with rolling remediation and no single success KPI.
   Maximizes inclusivity for edge cases, but risks perpetual support load and diluted urgency.
4. Other / More discussion needed / None of the above.

Q2: How should we handle migration edge cases (large holders, unsupported wallets, bridge constraints) without creating a scam surface or a favoritism narrative?
• Discord (2025-12-28): "Fix 'max amount reached error' when users have large amounts of AI16Z"
• Discord (2025-12-29): "Can only bridge from Solana to other chains, not vice versa."
1. Create a formal, auditable manual-claims pathway with proof requirements and published procedures.
   Reduces scam risk via process clarity, but adds operational overhead and verification complexity.
2. Patch the portal and wallet integrations rapidly; avoid manual handling except in extreme, private escalations.
   Preserves fairness optics, but risks leaving legitimate users stranded if integration lags.
3. Offer a time-limited alternative migration contract/route that supports more wallets and chains.
   Improves accessibility, but increases smart-contract and communication complexity mid-flight.
4. Other / More discussion needed / None of the above.

Q3: What is our immediate comms priority: token information discoverability, ecosystem relationship clarity (DegenAI/Ruby), or gated-role verification correctness?
• Discord (2025-12-30, Kenk): "added details to docs.elizaos.ai/tokenomics"
• Discord (2025-12-30): "Update Collabland to properly reflect ElizaOS token holdings instead of ai16z"
1. Token info discoverability first (dedicated token page, CA/exchanges, official links), then ecosystem clarifications.
   Cuts confusion and support load fastest during peak visibility, reinforcing trust-through-shipping.
2. Ecosystem relationship clarity first (what is "official" vs community), then token-page polish.
   Prevents brand dilution and misattribution, but may not solve immediate how-to-buy/migrate friction.
3. Role-verification correctness first (Collabland + ElizaOS verification), then token and ecosystem docs.
   Stabilizes community governance and gated channels, but leaves broader market-facing confusion longer.
4. Other / More discussion needed / None of the above.

ElizaOS Cloud + Jeju Infrastructure Trajectory

Cloud beta is live with light support, and Jeju is framed as a compute marketplace launching on AWS first, then migrating toward permissionless physical infrastructure: an ambitious path that requires clear reliability guarantees and developer onboarding clarity.

Q1: Do we position Jeju/Cloud primarily as a developer cost-saver marketplace now, or as a reliability-first managed platform with marketplace optionality later?
• Discord (2025-12-28, Shaw): "functions as a compute marketplace that automatically selects optimal resources"
• Discord (2025-12-28): "40% reduction in cloud bills for web2 developers"
1. Lead with a reliability-first managed Cloud; introduce marketplace routing once SLAs and observability are mature.
   Aligns with Execution Excellence, but delays the most novel differentiation narrative.
2. Lead with the marketplace narrative immediately (cost/monetization), accepting rough edges as an early-adopter tax.
   Maximizes momentum and the token utility story, but risks trust loss if early reliability disappoints.
3. Dual-track messaging: Cloud for builders, Jeju marketplace for providers: two lanes, one brand.
   Captures both audiences, but increases comms complexity and potential confusion about guarantees.
4. Other / More discussion needed / None of the above.

Q2: What is the Council’s stance on SLAs in the near term: none (provider-only), a platform baseline SLA, or tiered SLAs tied to token/plan?
• Discord (2025-12-28, Shaw): "I do not, but providers can" (re: SLAs)
• Discord (2025-12-29): "Eliza Cloud Beta: Open beta access is now available with light support ahead of full launch"
1. Provider-only SLAs for now; the platform remains best-effort during beta with explicit disclaimers.
   Reduces liability, but may slow serious developer adoption for production workloads.
2. Introduce a minimal platform baseline SLA (uptime + incident comms) to anchor trust.
   Accelerates developer confidence, but requires on-call, monitoring, and incident discipline immediately.
3. Tiered SLAs (free/beta best-effort; paid/token-gated SLA) with clear boundaries.
   Creates monetization and prioritization, but risks community backlash if perceived as paywalling stability.
4. Other / More discussion needed / None of the above.

Q3: How aggressively do we pursue the AWS → self-owned permissionless racks transition relative to December’s directive of execution excellence and trust?
• Discord (2025-12-30, Shaw): "Initially launch on AWS with a goal to transition to self-owned permissionless infrastructure with physical racks in data centers by year-end"
• Discord (2025-12-30, Odilitime): "offered to cover Northern California for data center infrastructure"
1. Defer physical-infra commitments until Cloud reliability KPIs stabilize; keep AWS as the primary near-term substrate.
   Maximizes reliability and velocity, but may temporarily undercut the permissionless narrative.
2. Proceed on a fixed timeline; treat physical infra as a flagship credibility milestone for the decentralized AI economy.
   Strengthens the long-term narrative, but increases operational complexity and the risk of service instability.
3. Pilot a small multi-region rack footprint in parallel while AWS remains the control plane.
   Balances ambition and safety, but demands strong architecture boundaries and extra engineering bandwidth.
4. Other / More discussion needed / None of the above.

Framework Reliability & Developer Experience (Hooks, Streaming, Plugins)

Core work is trending toward execution excellence: unified hooks across transports, database logging for streaming LLM calls, and plugin fixes (OpenAI image generation + caching). However, release/versioning and integration pathways remain fragmented, threatening DX consistency as Cloud adoption ramps.

Q1: Should unified hooks (HTTP/SSE/WebSocket) be treated as a v1.6.x hardening requirement or an optional capability that can land incrementally?
• GitHub (PR #6300): "unified hooks with multi-transport support, including HTTP, SSE, and WebSocket"
• Discord (core-devs, Stan): "fixed issues with duplicate events"
1. Hardening requirement: block related releases until hooks are stable and documented end-to-end.
   Improves reliability guarantees, but may slow other roadmap items during peak market attention.
2. Incremental landing: merge behind defaults and progressively roll out transport support with feature flags.
   Maintains velocity, but risks inconsistent behavior across transports and added support complexity.
3. Defer hooks consolidation; prioritize Cloud launch polish and migration reliability first.
   Optimizes near-term trust outcomes, but leaves architectural debt that will compound with scale.
4. Other / More discussion needed / None of the above.
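
For context on what "unified hooks" implies, here is a minimal sketch of a transport-agnostic hook registry with duplicate-event suppression (the kind of dedupe the "duplicate events" fix addresses). The names (`HookRegistry`, `HookEvent`, the dedupe key) are illustrative assumptions, not the actual API from PR #6300:

```typescript
// Sketch: one hook registry shared by all transports, so HTTP, SSE,
// and WebSocket handlers emit through the same delivery path.
type Transport = "http" | "sse" | "websocket";

interface HookEvent {
  transport: Transport;
  name: string; // e.g. "message.received"
  payload: unknown;
}

type HookHandler = (event: HookEvent) => void;

class HookRegistry {
  private handlers = new Map<string, Set<HookHandler>>();
  private seen = new Set<string>(); // guards against duplicate delivery

  register(name: string, handler: HookHandler): void {
    if (!this.handlers.has(name)) this.handlers.set(name, new Set());
    this.handlers.get(name)!.add(handler);
  }

  // Returns false when a dedupe key has already been delivered,
  // so the same logical event arriving via two transports fires once.
  emit(event: HookEvent, dedupeKey?: string): boolean {
    if (dedupeKey) {
      if (this.seen.has(dedupeKey)) return false;
      this.seen.add(dedupeKey);
    }
    for (const handler of this.handlers.get(event.name) ?? []) handler(event);
    return true;
  }
}
```

The design point behind "hardening requirement vs. incremental landing" is exactly this shared path: if each transport keeps its own registry, behavior diverges and dedupe guarantees cannot be stated end-to-end.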
Q2: What is the Council’s preferred approach to plugin versioning and release automation to reduce developer friction and ecosystem breakage?
• Discord (core-devs, Odilitime): "Should I pump the version for every plugin PR I make?"
• Discord (core-devs, Stan): "we should have a CI... like release please"
1. Adopt automated semantic releases across core + plugins (changesets/release-please) with enforced conventions.
   Improves trust and predictability for builders, but requires workflow standardization and maintainer buy-in.
2. Keep manual versioning but publish a strict policy and a checklist for PR authors and reviewers.
   Low tooling overhead, but relies on human discipline and will drift under scale.
3. Centralize plugins into a monorepo-style release train to eliminate cross-repo drift.
   Maximizes coherence, but increases repo complexity and may deter some community contributions.
4. Other / More discussion needed / None of the above.
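
Option 1's "enforced conventions" can be made concrete: under conventional commits, the commit-message prefix alone determines the semver bump that tooling such as release-please computes, which answers the "should I pump the version for every PR?" question mechanically. A simplified sketch of that rule (the regexes and function names are illustrative, not the tool's implementation):

```typescript
// Sketch: map conventional-commit messages to a semver bump.
// "feat!:" / "BREAKING CHANGE" -> major, "feat:" -> minor, "fix:" -> patch.
type Bump = "major" | "minor" | "patch" | "none";

function bumpFor(commits: string[]): Bump {
  let bump: Bump = "none";
  for (const msg of commits) {
    if (/^[a-z]+(\(.+\))?!:/.test(msg) || msg.includes("BREAKING CHANGE")) return "major";
    if (/^feat(\(.+\))?:/.test(msg)) bump = "minor";
    else if (/^fix(\(.+\))?:/.test(msg) && bump === "none") bump = "patch";
  }
  return bump;
}

function nextVersion(current: string, bump: Bump): string {
  const [major, minor, patch] = current.split(".").map(Number);
  switch (bump) {
    case "major": return `${major + 1}.0.0`;
    case "minor": return `${major}.${minor + 1}.0`;
    case "patch": return `${major}.${minor}.${patch + 1}`;
    default: return current;
  }
}
```

For example, a plugin at 1.6.2 with one `feat:` and one `fix:` commit since the last release would be cut as 1.7.0 automatically, with no per-PR version bumping by authors.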
Q3: How do we turn recent reliability fixes (streaming logging, caching, duplicate-event fixes) into visible developer trust signals?
• GitHub (PR #6296, merged): "log streaming LLM calls to database"
• Discord (2025-12-30, Odilitime): "added caching to prevent redundant media processing"
1. Publish a weekly reliability changelog with before/after metrics and incident learnings.
   Converts engineering work into trust narratives, but requires a consistent measurement and comms cadence.
2. Prioritize in-product observability (dashboards, model-call logs, perf indicators) over external comms.
   Improves self-serve debugging, but may under-leverage public momentum for reputational gains.
3. Run a developer-facing stability campaign: bug bounties, regression tests, and a “known issues” canon.
   Builds community participation and clarity, but may temporarily highlight shortcomings during launch.
4. Other / More discussion needed / None of the above.