Council Briefing

Strategic Deliberation
North Star & Strategic Context

This file combines the overall project mission (North Star) with summaries of key strategic documents for use in AI prompts, particularly for AI Agent Council context generation.

Last Updated: December 2025

---

North Star: To build the most reliable, developer-friendly open-source AI agent framework and cloud platform—enabling builders worldwide to deploy autonomous agents that work seamlessly across chains and platforms. We create infrastructure where agents and humans collaborate, forming the foundation for a decentralized AI economy that accelerates the path toward beneficial AGI.

---

Core Principles:
  1. **Execution Excellence** - Reliability and seamless UX over feature quantity
  2. **Developer First** - Great DX attracts builders; builders create ecosystem value
  3. **Open & Composable** - Multi-agent systems that interoperate across platforms
  4. **Trust Through Shipping** - Build community confidence through consistent delivery

---

Current Product Focus (Dec 2025):
  • **ElizaOS Framework** (v1.6.x) - The core TypeScript toolkit for building persistent, interoperable agents
  • **ElizaOS Cloud** - Managed deployment platform with integrated storage and cross-chain capabilities
  • **Flagship Agents** - Reference implementations (Eli5, Otaku) demonstrating platform capabilities
  • **Cross-Chain Infrastructure** - Native support for multi-chain agent operations via Jeju/x402


---

    ElizaOS Mission Summary: ElizaOS is an open-source "operating system for AI agents" aimed at decentralizing AI development. Built on three pillars: 1) The Eliza Framework (TypeScript toolkit for persistent agents), 2) AI-Enhanced Governance (building toward autonomous DAOs), and 3) Eliza Labs (R&D driving cloud, cross-chain, and multi-agent capabilities). The native token coordinates the ecosystem. The vision is an intelligent internet built on open protocols and collaboration.

    ---

    Taming Information Summary: Addresses the challenge of information scattered across platforms (Discord, GitHub, X). Uses AI agents as "bridges" to collect, wrangle (summarize/tag), and distribute information in various formats (JSON, MD, RSS, dashboards, council episodes). Treats documentation as a first-class citizen to empower AI assistants and streamline community operations.
    ---

    Daily Strategic Focus
    All vectors converge on shipping Cloud streaming with release discipline (monorepo → plugin → cloud-v2) while restoring trust via clear migration guidance amid visible community frustration.

    ---

    Monthly Goal
    December 2025: Execution excellence—complete token migration with high success rate, launch ElizaOS Cloud, stabilize flagship agents, and build developer trust through reliability and clear documentation.

    Key Deliberations

    ElizaOS Cloud Streaming: Launch Readiness & Release Discipline
    Streaming is functionally working in simple flows, but the release train has critical dependencies (monorepo release, elizacloud-plugin merge, cloud-v2 core bump) plus operational blockers (NPM token changes). Execution excellence requires a single coordinated launch sequence, not parallel merges that create drift.
    Q1
    Do we freeze all non-launch merges until Cloud streaming ships, enforcing the monorepo → elizacloud-plugin → cloud-v2 sequence as the only authorized path?
    • Stan ⚡ (core-devs): "Need to release monorepo, review/merge elizacloud-plugin, and use latest core version into cloud-v2, in that order."
    • Borko (discussion): Team preparing for Monday release with streaming capabilities.
    1. Yes—declare a launch freeze and hard-gate merges on the release sequence.
       Maximizes reliability and reduces integration risk, but delays unrelated improvements and may frustrate contributors.
    2. Partial freeze—only gate changes that touch streaming, versions, or cloud runtime.
       Balances velocity and safety, but requires vigilant triage and can still allow subtle coupling bugs through.
    3. No freeze—continue normal merges and rely on post-merge stabilization.
       Preserves throughput but increases probability of launch regressions, undermining the monthly directive of execution excellence.
    4. Other / More discussion needed / None of the above.
    Q2
    What is our explicit launch acceptance bar for “streaming readiness” (end-to-end tests, known limitations, rollback plan) before we announce Cloud streaming to builders?
    • Stan (Discord 2025-12-17): "Streaming functionality: Now working for simple messages and actions."
    • core-devs: "Conversation blocking issue fixed when no summary existed, improving chat creation."
    1. Strict: require e2e streaming tests + documented limitations + rollback/revert instructions before announcement.
       Strengthens trust through shipping discipline and reduces support load, but may push the date.
    2. Moderate: require manual verification + smoke tests, and ship with a “beta” label and rapid patch cadence.
       Ships sooner while signaling risk, but may dilute the “reliable platform” positioning if issues arise.
    3. Minimal: ship once it works in core flows and fix forward in production.
       Fastest path, but highest chance of breaking first impressions for developers evaluating Cloud.
    4. Other / More discussion needed / None of the above.
    Q3
    How do we operationally de-risk tooling failures (e.g., NPM token rotation) so releases cannot be blocked by external credential policy changes?
    • cjft (core-devs): "Get new NPM token for release as classic token was deleted (NPM changed their tokens)."
    1. Implement a release-credentials runbook + rotating token owners + preflight checks in CI (block if invalid).
       Reduces single-point-of-failure risk and improves release reliability, aligning with execution excellence.
    2. Centralize release credentials in a dedicated secrets manager with a single release bot identity.
       Simplifies operations but concentrates risk; a compromise or policy change affects everything.
    3. Treat as ad-hoc incidents handled by maintainers when they occur.
       Lowest process overhead, but recurring release interruptions erode developer trust.
    4. Other / More discussion needed / None of the above.
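    Option 1's "preflight checks in CI" can be sketched concretely. This is a minimal illustration, not existing project tooling: the `NPM_TOKEN` variable name, the `validateNpmToken` helper, and the assumption that current granular npm access tokens carry an `npm_` prefix are all ours.

```typescript
// Release-credential preflight sketch: fail the pipeline before any publish
// step runs if the token is missing or malformed. The "npm_" prefix check
// assumes granular npm access tokens; the env var name is illustrative.
export function validateNpmToken(
  token: string | undefined,
): { ok: boolean; reason: string } {
  if (!token || token.trim() === "") {
    return { ok: false, reason: "NPM_TOKEN is not set" };
  }
  if (!token.startsWith("npm_")) {
    return {
      ok: false,
      reason: "token lacks the npm_ prefix expected of granular npm tokens",
    };
  }
  return { ok: true, reason: "token shape looks valid" };
}

// In CI, a preflight step would pair this shape check with a live
// `npm whoami` call and exit non-zero before the
// monorepo -> elizacloud-plugin -> cloud-v2 sequence starts:
//   const { ok, reason } = validateNpmToken(process.env.NPM_TOKEN);
//   if (!ok) { console.error(`preflight blocked: ${reason}`); process.exit(1); }
```

    A shape check alone cannot catch a revoked token, which is why the sketch defers final verification to a live registry call in the CI step itself.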
    Token Migration: Trust, Support Integrity, and Exchange Confusion
    The Feb 4, 2026 deadline is understood by some but still confusing in practice (“0 eligible”, exchange snapshot discrepancies), and sentiment is deteriorating. Trust-through-shipping now depends as much on clear migration documentation and anti-impersonation support hygiene as on code.
    Q1
    Do we elevate migration support into a hardened, official “single source of truth” (docs + signed announcements), explicitly deprecating ad-hoc Discord replies as authoritative guidance?
    • Discord 2025-12-19 (discussion): Multiple users confused; request: "Create clear explanation of token migration process and deadlines."
    • Omid Sa → roybot: "Go to the migration-support channel."
    1. Yes—publish an official migration playbook (docs site) and require all helpers to link to it.
       Cuts confusion and impersonation risk, improving trust and reducing repetitive support burden.
    2. Hybrid—keep Discord as primary support but pin a canonical FAQ and standard macros.
       Faster community support, but still exposes users to inconsistent messaging and spoofing vectors.
    3. No—maintain current approach; rely on community moderation and channel routing.
       Lowest effort, but ongoing confusion can depress sentiment and hinder migration completion rates.
    4. Other / More discussion needed / None of the above.
    Q2
    What stance should we take on exchange-handled swaps (Bithumb/Kraken discrepancies): proactive coordination and public evidence, or strict “self-custody only” messaging with limited exchange engagement?
    • Discord 2025-12-17: "Different exchanges (Bithumb and Kraken) are handling the AI16Z to ELIZAOS token swap differently, causing confusion."
    • Discord 2025-12-17: Korean users requested evidence of communications with Bithumb regarding snapshot timing.
    1. Proactive: designate an exchange liaison, publish a status matrix per exchange, and share verifiable communication logs where possible.
       May restore credibility with affected regions, but increases operational overhead and public commitments.
    2. Limited: provide best-effort guidance without publishing communications; emphasize self-custody going forward.
       Moderate burden and reduced liability, but may not satisfy communities demanding transparency.
    3. Strict: state that exchanges are out of scope; only portal-based self-custody migrations are supported.
       Operationally simple, but risks alienating exchange-heavy users and amplifying negative narrative.
    4. Other / More discussion needed / None of the above.
    Q3
    How do we address worsening sentiment (price decline + “no tangible products” claims) without overpromising—what is the minimum credible evidence package we should broadcast this week?
    • Discord 2025-12-18: "Significant frustration... regarding token price decline and perceived lack of delivered products."
    • Discord 2025-12-19: Cloud streaming preparing for Monday release; knowledge repo endpoints shipped; bootstrap/initPromise fix merged (#6261).
    1. Publish a “Proof of Shipping” bulletin: Cloud streaming status, merged reliability fixes, docs updates, and next 7-day deliverables.
       Aligns narrative with execution excellence and makes progress legible to builders and holders.
    2. Run a live demo/space of Cloud streaming + flagship agents, focusing on real workflows rather than token talk.
       Transforms sentiment via visible capability, but risks embarrassment if demo reliability slips.
    3. Stay quiet until after launch to avoid distraction and misinterpretation.
       Reduces comms risk short-term, but allows negative narratives to compound unchallenged.
    4. Other / More discussion needed / None of the above.
    Developer Experience: Plugin Reliability, EVM Readiness, and Ecosystem Observability
    Builders are probing whether plugin-evm is truly supported, while Starknet integration and provider performance changes highlight API churn and type friction. The knowledge repository and new GitHub analytics JSON endpoints together form a strategic asset for “taming information,” but they need packaging into a developer-facing observability story.
    Q1
    Do we formally designate a “supported onchain stack” for Q1 (e.g., Spartan EVM path, Starknet plugin path) with clear maintenance tiers, or keep onchain support community-driven and opportunistic?
    • Roman V (coders): Concern about plugin-evm maintenance status and alternatives for onchain capabilities.
    • Odilitime (coders): "working on EVM support for Spartan" and shared an active PR.
    1. Yes—publish a supported onchain roadmap with tiered support (Core, Maintained, Community).
       Improves DX clarity and reduces wasted builder effort, but commits us to maintenance obligations.
    2. Partially—declare a single blessed path (Spartan EVM) and label everything else experimental.
       Creates a clear default while limiting commitments, but may slow multi-chain adoption.
    3. No—keep onchain support unopinionated; let the ecosystem decide.
       Maximizes openness but increases confusion and fragmentation, harming developer trust.
    4. Other / More discussion needed / None of the above.
    Q2
    How aggressively should we optimize provider performance (parallel execution + timeouts) versus preserving strict determinism and simplicity in the runtime pipeline?
    • Discord 2025-12-18: Stan proposed PR #6263 for parallel provider execution with configurable timeout (default 1s), aborting pipeline if too slow.
    • Discord 2025-12-18: Debate on best practices—avoid API calls in providers, use caching; add warning logs for slow providers.
    1. Aggressive: adopt parallel providers + hard timeouts + abort-on-slow as default behavior.
       Improves perceived speed and reliability under load, but risks breaking edge-case providers and surprising plugin authors.
    2. Balanced: parallelize with soft timeouts (warnings) by default; hard abort is opt-in via config.
       Protects compatibility while nudging best practices, aligning with developer-first principles.
    3. Conservative: keep sequential execution; focus on documentation and caching guidelines instead.
       Minimizes behavioral change risk, but may leave performance issues unaddressed for Cloud-scale workloads.
    4. Other / More discussion needed / None of the above.
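    The balanced option can be sketched as a small runner: providers execute in parallel, a configurable timeout (defaulting to the 1s discussed for PR #6263) logs a warning by default, and hard abort is opt-in. This illustrates the pattern under debate, not the runtime's actual API; `Provider`, `runProviders`, and the option names are hypothetical.

```typescript
// Parallel provider execution with a configurable timeout. Default
// behavior: warn and continue when a provider is slow; hardAbort: true
// makes a timeout reject the whole batch instead.
type Provider = { name: string; get: () => Promise<string> };

type ProviderResult = { name: string; value?: string; timedOut: boolean };

export async function runProviders(
  providers: Provider[],
  opts: { timeoutMs?: number; hardAbort?: boolean } = {},
): Promise<ProviderResult[]> {
  const timeoutMs = opts.timeoutMs ?? 1000; // default discussed in the PR: 1s
  return Promise.all(
    providers.map(async (p) => {
      let timer: ReturnType<typeof setTimeout> | undefined;
      const timeout = new Promise<never>((_, reject) => {
        timer = setTimeout(
          () => reject(new Error(`provider ${p.name} exceeded ${timeoutMs}ms`)),
          timeoutMs,
        );
      });
      try {
        const value = await Promise.race([p.get(), timeout]);
        return { name: p.name, value, timedOut: false };
      } catch (err) {
        if (opts.hardAbort) throw err; // opt-in: abort the pipeline
        console.warn(String(err)); // default: soft warning, keep going
        return { name: p.name, timedOut: true };
      } finally {
        clearTimeout(timer);
      }
    }),
  );
}
```

    One compatibility caveat the sketch makes visible: a timed-out provider's promise keeps running in the background, so providers with side effects still need the caching and no-API-calls guidance from the discussion.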
    Q3
    Do we elevate the knowledge repository + GitHub analytics endpoints into an official “Ecosystem Telemetry Layer” (dashboards, RSS, infographics) as a flagship of Taming Information, or keep it as an internal tool?
    • Jin (coders/core-devs): Knowledge repository provides data for agents to reason with ecosystem activity; added JSON endpoints for leaderboards/summaries (daily/weekly/monthly).
    • Jin (coders): Considering a name for GitHub analytics project ("GitScape"); experimenting with infographics.
    1. Officialize it: brand and document it, add stable endpoints, and ship a minimal public dashboard + RSS integration.
       Turns scattered activity into trust-building transparency and improves contributor coordination.
    2. Semi-official: keep endpoints public but label as beta; prioritize internal consumption until Cloud launch settles.
       Reduces immediate scope while preserving momentum, but delays the narrative benefit of visibility.
    3. Internal only: do not promote; avoid additional support and stability commitments.
       Lowest burden, but misses a strategic opportunity aligned with Taming Information and developer trust.
    4. Other / More discussion needed / None of the above.
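    To give a sense of what option 1's "stable endpoints" mean for downstream dashboards or RSS jobs, here is a minimal consumer sketch. The URL pattern and the `leaderboardUrl`/`fetchLeaderboard` names are assumptions; the source only states that JSON endpoints exist for leaderboards and summaries at daily/weekly/monthly granularity.

```typescript
// Hypothetical consumer of the knowledge-repo analytics endpoints. The base
// URL and path shape are illustrative; only the daily/weekly/monthly
// periods come from the source.
type Period = "daily" | "weekly" | "monthly";

export function leaderboardUrl(baseUrl: string, period: Period): string {
  // e.g. <base>/leaderboard/weekly.json (trailing slash on base is tolerated)
  return `${baseUrl.replace(/\/$/, "")}/leaderboard/${period}.json`;
}

export async function fetchLeaderboard(
  baseUrl: string,
  period: Period,
): Promise<unknown> {
  const res = await fetch(leaderboardUrl(baseUrl, period));
  if (!res.ok) throw new Error(`leaderboard fetch failed: ${res.status}`);
  return res.json();
}
```

    The point of the sketch is the commitment it implies: once a dashboard or RSS integration depends on these shapes, the endpoints need the versioning and stability guarantees that "officialize it" would entail.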