Council Briefing

Strategic Deliberation
North Star & Strategic Context

This file combines the overall project mission (North Star) with summaries of key strategic documents for use in AI prompts, particularly when generating AI Agent Council context.

Last Updated: December 2025

---

North Star: To build the most reliable, developer-friendly open-source AI agent framework and cloud platform—enabling builders worldwide to deploy autonomous agents that work seamlessly across chains and platforms. We create infrastructure where agents and humans collaborate, forming the foundation for a decentralized AI economy that accelerates the path toward beneficial AGI.

---

Core Principles:
1. **Execution Excellence** - Reliability and seamless UX over feature quantity
2. **Developer First** - Great DX attracts builders; builders create ecosystem value
3. **Open & Composable** - Multi-agent systems that interoperate across platforms
4. **Trust Through Shipping** - Build community confidence through consistent delivery

---

Current Product Focus (Dec 2025):
  • **ElizaOS Framework** (v1.6.x) - The core TypeScript toolkit for building persistent, interoperable agents
  • **ElizaOS Cloud** - Managed deployment platform with integrated storage and cross-chain capabilities
  • **Flagship Agents** - Reference implementations (Eli5, Otaku) demonstrating platform capabilities
  • **Cross-Chain Infrastructure** - Native support for multi-chain agent operations via Jeju/x402


---

ElizaOS Mission Summary: ElizaOS is an open-source "operating system for AI agents" aimed at decentralizing AI development. Built on three pillars: 1) the Eliza Framework (a TypeScript toolkit for persistent agents), 2) AI-Enhanced Governance (building toward autonomous DAOs), and 3) Eliza Labs (R&D driving cloud, cross-chain, and multi-agent capabilities). The native token coordinates the ecosystem. The vision is an intelligent internet built on open protocols and collaboration.

---

Taming Information Summary: Addresses the challenge of information scattered across platforms (Discord, GitHub, X). Uses AI agents as "bridges" to collect, wrangle (summarize/tag), and distribute information in various formats (JSON, MD, RSS, dashboards, council episodes). Treats documentation as a first-class citizen to empower AI assistants and streamline community operations.

---

Daily Strategic Focus

As Kraken listing and migration chatter peak, the Council's critical priority is preserving trust by hardening the token-migration surface area (security + exchange coordination) while simultaneously proving Cloud readiness through visible reliability wins.

Monthly Goal

December 2025: Execution excellence—complete token migration with a high success rate, launch ElizaOS Cloud, stabilize flagship agents, and build developer trust through reliability and clear documentation.

Key Deliberations

Token Migration & Exchange-Edge Trust (Kraken/Bithumb + Site Security)

The migration narrative is now the public face of ElizaOS: the Kraken listing (Dec 19) increases scrutiny, while unresolved exchange delays and a reported migration-site compromise risk reputational damage precisely during the execution-excellence directive.

Q1: What is the Council's stance on exchange-held migration delays (e.g., Bithumb): do we escalate publicly, support quietly, or offer an off-exchange escape hatch?

• Discord 2025-12-15: Korean users frustrated about the Bithumb delay; response: "exchanges are responsible for migrating tokens held on their platforms" (Omid Sa/community).
• Discord 2025-12-16: Kraken listing Dec 19 and 1:6 distribution ratio discussed (Serikiki).

1. Escalate publicly with a clear deadline and a formal status page naming exchange blockers.
   Maximizes transparency and pressure, but risks adversarial relationships with exchanges during the listing window.
2. Maintain quiet bilateral escalation while publishing a neutral, frequently updated migration FAQ/status dashboard.
   Preserves partnerships while still reducing confusion; slower to satisfy the most vocal community segments.
3. Offer a structured off-exchange remedy (support-assisted withdrawal/migration pathway) and de-emphasize exchange timelines.
   Shifts control back to users and the team, but increases support load and operational complexity.
4. Other / More discussion needed / None of the above.

Q2: How should we handle the reported migration-site security allegation to maximize trust without amplifying panic?

• Discord 2025-12-15: "A potential security issue with the ElizaOS migration site was reported"; Q/A: "Is the ElizaOS migration site hacked/hijacked?" → "We're looking at it."

1. Immediate public incident bulletin with scope, mitigations, and an independent audit commitment.
   High-trust posture if handled well; any uncertainty in early details may fuel further speculation.
2. Silent fix-first approach, then publish a postmortem only if evidence confirms compromise.
   Reduces short-term panic but risks backlash if the community believes information was withheld.
3. Publish precautionary guidance now (safe URLs, wallet hygiene, verification steps) while investigation proceeds, plus a time-boxed update cadence.
   Balances calm and transparency; sets expectations and reduces harm regardless of the final finding.
4. Other / More discussion needed / None of the above.

Q3: Given the Kraken listing and distribution-ratio attention, what messaging frame should we adopt to prevent "project is finished / team is silent" narratives from metastasizing?

• Discord 2025-12-16: "Some users expressed concerns about the project's status, questioning whether ElizaOS is finished"; A: "No, ElizaOS isn't finished" (Kenk).
• Discord 2025-12-16: Kraken listing Dec 19 + 1:6 ratio (Serikiki).

1. Roadmap-forward: publish a short "next 30 days" execution board (migration completion, Cloud launch milestones, flagship stabilization).
   Converts uncertainty into predictable delivery signals; commits the team to visible dates.
2. Reliability-forward: emphasize shipped fixes, tests, and stability metrics over roadmap promises.
   Aligns with execution excellence and reduces overcommitment, but may feel vague to token-focused users.
3. Ecosystem-forward: spotlight builders (contests, reference agents, community deployments) as proof of life.
   Builds social proof quickly; must be paired with core reliability wins to avoid a "marketing-only" perception.
4. Other / More discussion needed / None of the above.

Cloud Launch Readiness (Streaming UX, Auth, and the Create→Publish→Monetize Loop)

Cloud capability is advancing (integrations, an end-to-end business-cycle vision), but the Council must decide what "launch" means: a hardened minimal path with impeccable UX, or a broader platform debut that risks reliability debt at the worst moment.

Q1: What is our launch threshold for ElizaOS Cloud: a minimal reliable deployment path, or the full create→publish→monetize→promote loop on day one?

• Discord 2025-12-14: Cloud focus includes "create → publish → monetize → promote" plus SEO/ads/social integrations (shaw).
• Discord 2025-12-16 core-devs: "Push cloud PR for testing" and ongoing cloud work (Stan ⚡).

1. Launch a minimal, rock-solid "deploy + storage + logs + billing stub" and gate advanced growth features behind an alpha flag (see the gating sketch after this question).
   Aligns strongly with execution excellence and reduces blast radius; delays the platform narrative of monetization.
2. Ship the full loop at launch to claim category leadership, accepting controlled rough edges.
   Maximizes market impact but risks developer trust if onboarding or reliability falters.
3. Two-tier launch: a public beta for core deploy, invite-only "Eliza-Alpha" for monetization/promote features.
   Preserves ambition while protecting trust; requires clear comms and operational discipline.
4. Other / More discussion needed / None of the above.
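
If option 1 carries, the alpha gate can stay deliberately simple. Below is a minimal sketch assuming a per-flag tenant allowlist; every name in it (flag keys, tenant IDs, function names) is hypothetical illustration, not an existing ElizaOS Cloud API:

```typescript
// Hypothetical alpha-flag gating for launch (option 1). Core
// deploy/storage/logs paths never consult flags: they are always on.

type LaunchFlag = "monetize" | "promote" | "seo";

// Per-flag allowlists of tenants admitted to the alpha program.
const alphaAllowlist: Record<LaunchFlag, Set<string>> = {
  monetize: new Set(["tenant-alpha-001"]), // example tenant ID
  promote: new Set(),
  seo: new Set(),
};

export function isEnabled(flag: LaunchFlag, tenantId: string): boolean {
  return alphaAllowlist[flag].has(tenantId);
}

// Route-boundary guard: advanced features stay invisible to non-alpha
// tenants, keeping the public launch surface area small.
export function assertEnabled(flag: LaunchFlag, tenantId: string): void {
  if (!isEnabled(flag, tenantId)) {
    throw new Error(`'${flag}' is gated behind the alpha program`);
  }
}
```

The design choice this illustrates: gating at route boundaries means the day-one reliability story only has to cover the minimal path, while monetize/promote code can ship dark.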

Q2: How urgent is the streaming UX bug in the Actions UI relative to other Cloud blockers, and should it become a launch gate?

• Discord 2025-12-16 core-devs: "streaming works in the monorepo but there are rendering issues in Actions that need fixing" (Stan ⚡).

1. Make it a hard launch gate: streaming perception is a first-impression reliability signal.
   Optimizes UX trust but may delay launch if the root cause is non-trivial.
2. Soft gate: ship with a known-issues note and prioritize a patch within 72 hours post-launch.
   Protects the schedule while acknowledging imperfection; requires rapid-response credibility.
3. Defer: treat it as non-critical and focus on auth, storage correctness, and deployment success rates.
   Prioritizes functional reliability, but risks "feels broken" sentiment even if the backend is solid.
4. Other / More discussion needed / None of the above.

Q3: Do we standardize on JWT auth + data isolation now (as a platform foundation), or keep legacy headers until Cloud stabilizes? (A hedged middleware sketch follows this question.)

• GitHub top PR: "feat(auth): implement JWT authentication and user management" (PR #6200, standujar) with an ENABLE_DATA_ISOLATION toggle.
• Discord 2025-12-16 core-devs: "Rebase authentication PR on the monorepo" (Stan ⚡).

1. Adopt JWT/data isolation as the default for Cloud (legacy header mode only for local/dev).
   Establishes a scalable multi-tenant foundation; increases the integration/testing burden immediately.
2. Keep legacy mode as the default until Cloud proves stability; run JWT as an opt-in beta.
   Reduces near-term risk but postpones foundational security/tenancy decisions.
3. Hybrid: JWT for Cloud-hosted agents, legacy headers for self-hosted framework users during the transition.
   Balances platform needs and OSS ergonomics; increases documentation and support complexity.
4. Other / More discussion needed / None of the above.
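
To ground the trade-off, here is a minimal Express-style sketch of JWT verification with a data-isolation toggle. Only the ENABLE_DATA_ISOLATION variable name comes from PR #6200; the middleware shape, claim fields, and env handling are assumptions, not the PR's actual code:

```typescript
// Illustrative JWT middleware with tenant scoping; not PR #6200's code.
import jwt from "jsonwebtoken";
import type { Request, Response, NextFunction } from "express";

const SECRET = process.env.JWT_SECRET ?? "dev-only-secret";
const isolationOn = process.env.ENABLE_DATA_ISOLATION === "true"; // toggle from PR #6200

interface AgentClaims {
  sub: string;      // user ID (assumed claim name)
  tenantId: string; // scopes which agents/data this token may touch (assumed)
}

export function requireAuth(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : null;
  if (!token) return res.status(401).json({ error: "missing token" });

  try {
    const claims = jwt.verify(token, SECRET) as unknown as AgentClaims;
    // With isolation on, downstream queries are scoped by tenant;
    // with it off, legacy header auth can coexist during the transition.
    res.locals.tenantId = isolationOn ? claims.tenantId : undefined;
    next();
  } catch {
    res.status(401).json({ error: "invalid token" });
  }
}
```

Under the hybrid option, self-hosted framework users would simply not mount this middleware and keep the legacy header path during the transition.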

Developer Experience & Reliability Flywheel (DB Migrations, PR Proof, AI-Assisted Workflow)

Developer trust is trending upward via heavy refactors, tests, and SQL/migration fixes, but field reports still show friction (Postgres permissions/migrations, foreign-key failures). The Council must choose which DX investments become mandatory protocol versus optional tooling.

Q1: What is the Council's canonical "known-good" database path for contributors (PGlite, Docker Postgres, managed Neon), given recurring local Postgres migration failures? (A PGlite sketch follows this question.)

• Discord 2025-12-16 coders: FenrirFawks had persistent migration issues on local Postgres 18 without superuser; Stan suspects schema permissions.
• Discord 2025-12-15: Twitter replies caused foreign key constraint failures; "latest codebase has SQL fixes" mentioned.

1. Standardize on PGlite for local dev by default; Postgres as an advanced/CI-only path with strict docs.
   Maximizes ease of entry and reproducibility; may mask Postgres-only issues until later.
2. Standardize on Dockerized Postgres with one blessed compose file and required permissions/scripts.
   Aligns dev with production realities; higher setup overhead but fewer "works on my machine" cases.
3. Standardize on managed Neon for dev/test environments with free-tier guidance and CLI automation.
   Fast onboarding and realistic infra; introduces an external dependency and account friction.
4. Other / More discussion needed / None of the above.
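
For scale, option 1's PGlite path fits in a few lines. @electric-sql/pglite is the real embedded-Postgres package; the table and rows below are made up purely to show that the same SQL surface runs locally with no Postgres install or superuser rights:

```typescript
// Minimal PGlite local-dev sketch (option 1). Schema and data are
// hypothetical examples, not the ElizaOS schema.
import { PGlite } from "@electric-sql/pglite";

async function main() {
  // In-memory by default; pass a directory path to persist between runs.
  const db = new PGlite();

  // Migrations run the same way as against full Postgres, keeping
  // local dev and CI on a single schema path.
  await db.query(`
    CREATE TABLE IF NOT EXISTS memories (
      id SERIAL PRIMARY KEY,
      agent_id TEXT NOT NULL,
      content TEXT NOT NULL
    )
  `);

  await db.query("INSERT INTO memories (agent_id, content) VALUES ($1, $2)", [
    "eli5",
    "hello world",
  ]);

  const { rows } = await db.query("SELECT * FROM memories");
  console.log(rows);
}

main();
```

The caveat in option 1 still applies: anything version-specific (e.g., the Postgres 18 permission behavior reported above) only surfaces when the same migrations are replayed against real Postgres in CI.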

Q2: Should the "PRs larger than 20 lines require a video" rule become enforced policy across repos, and if so, how do we prevent it from slowing delivery?

• Discord 2025-12-16 core-devs: "What's the new rule for PRs larger than 20 lines?" → "include a video of it working" (shaw).

1. Enforce it broadly with CI checklists/templates (a CI-check sketch follows this question); allow exemptions for pure refactors/tests/docs.
   Improves review confidence and trust-through-shipping; requires governance clarity to avoid friction.
2. Keep it as a strong recommendation for UI/behavior changes only, not a universal rule.
   Preserves velocity while still raising quality where it matters most; less consistent evidence in reviews.
3. Replace videos with automated e2e smoke tests + reproducible scripts, making human demos optional.
   Scales better long-term but demands up-front investment in test harnesses and infra.
4. Other / More discussion needed / None of the above.
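
If option 1 carries, enforcement could be a small required check rather than manual policing. In this sketch the REST endpoint and the additions/deletions fields are real GitHub API surface, while the repo slug, threshold constant, and video-link heuristic are illustrative assumptions:

```typescript
// Hypothetical CI gate for the ">20 lines needs a demo video" rule.
// Runs under Node 18+ (built-in fetch); PR_NUMBER and GITHUB_TOKEN are
// assumed to be supplied by the CI workflow.

const REPO = "elizaOS/eliza"; // example repo slug
const THRESHOLD = 20;

async function main(): Promise<void> {
  const prNumber = process.env.PR_NUMBER;
  const res = await fetch(
    `https://api.github.com/repos/${REPO}/pulls/${prNumber}`,
    { headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` } }
  );
  const pr = await res.json();

  const linesChanged = pr.additions + pr.deletions;
  // Crude heuristic: accept common video hosts or raw video files in the PR body.
  const hasVideo = /\.(mp4|mov|webm)\b|loom\.com|youtu\.?be/i.test(pr.body ?? "");

  if (linesChanged > THRESHOLD && !hasVideo) {
    console.error(`PR changes ${linesChanged} lines but links no demo video.`);
    process.exit(1); // fail the required check
  }
  console.log("Video rule satisfied.");
}

main();
```

Option 3 would replace the regex heuristic with an actual e2e smoke-test job, trading this lightweight check for stronger evidence.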

Q3: Do we formalize AI-assisted development (bots, review agents, environment containers) as first-class infrastructure to accelerate execution excellence, or keep it ad hoc?

• Discord 2025-12-16 core-devs: cjft notes pain in "handling AI reviews repeatedly" and suggests "a GitHub bot"; R0am mentions claudekit hooks; discussion of environment containers and multi-instance workflows.

1. Formalize: build or choose a sanctioned GitHub review-bot and contributor workflow, and publish "AI coding" playbooks.
   Raises throughput and consistency; requires careful safeguards to prevent low-quality automated approvals.
2. Semi-formal: keep tools optional but provide curated recommendations (claudekit, worktrees, containers) and a reference setup.
   Improves DX without forcing change; benefits may be uneven across contributors.
3. Defer: prioritize core product reliability; revisit workflow automation after the Cloud launch stabilizes.
   Avoids tool churn during critical launches, but leaves current productivity bottlenecks unresolved.
4. Other / More discussion needed / None of the above.