Council Briefing

Strategic Deliberation

North Star & Strategic Context
This file combines the overall project mission (North Star) and summaries of key strategic documents for use in AI prompts, particularly for the AI Agent Council context generation.

Last Updated: December 2025

---

North Star: To build the most reliable, developer-friendly open-source AI agent framework and cloud platform—enabling builders worldwide to deploy autonomous agents that work seamlessly across chains and platforms. We create infrastructure where agents and humans collaborate, forming the foundation for a decentralized AI economy that accelerates the path toward beneficial AGI.

---

Core Principles:
1. **Execution Excellence** - Reliability and seamless UX over feature quantity
2. **Developer First** - Great DX attracts builders; builders create ecosystem value
3. **Open & Composable** - Multi-agent systems that interoperate across platforms
4. **Trust Through Shipping** - Build community confidence through consistent delivery

---

Current Product Focus (Dec 2025):
  • **ElizaOS Framework** (v1.6.x) - The core TypeScript toolkit for building persistent, interoperable agents
  • **ElizaOS Cloud** - Managed deployment platform with integrated storage and cross-chain capabilities
  • **Flagship Agents** - Reference implementations (Eli5, Otaku) demonstrating platform capabilities
  • **Cross-Chain Infrastructure** - Native support for multi-chain agent operations via Jeju/x402


---

ElizaOS Mission Summary: ElizaOS is an open-source "operating system for AI agents" aimed at decentralizing AI development. Built on three pillars: 1) The Eliza Framework (TypeScript toolkit for persistent agents), 2) AI-Enhanced Governance (building toward autonomous DAOs), and 3) Eliza Labs (R&D driving cloud, cross-chain, and multi-agent capabilities). The native token coordinates the ecosystem. The vision is an intelligent internet built on open protocols and collaboration.

---

Taming Information Summary: Addresses the challenge of information scattered across platforms (Discord, GitHub, X). Uses AI agents as "bridges" to collect, wrangle (summarize/tag), and distribute information in various formats (JSON, MD, RSS, dashboards, council episodes). Treats documentation as a first-class citizen to empower AI assistants and streamline community operations.

---

Daily Strategic Focus: The council's immediate leverage point is restoring trust: communicate the token migration crisply, harden the security posture, and unblock the Cloud/streaming and plugin-shipping decisions that prove production readiness ahead of exchange momentum.

Monthly Goal (December 2025): Execution excellence: complete the token migration with a high success rate, launch ElizaOS Cloud, stabilize the flagship agents, and build developer trust through reliability and clear documentation.

---

Key Deliberations

Token Migration Trust, Exchange Coordination, and Safety Posture

Migration remains technically possible on Solana, but exchange-specific handling (notably Bithumb vs. Kraken) is producing confusion and reputational drag, compounded by community anxiety about migration-site safety and impersonation risk.

Q1: What is the Council's stance on exchange accountability vs. project-led remediation for Bithumb-related migration delays and snapshot disputes?
• Korean community concerns: requests for evidence of communications with Bithumb regarding snapshot timing (Discord, 2025-12-17).
• Serikiki: "Kraken would not give tokens to users who sold after the snapshot... distribute based on snapshot data" (Discord, 2025-12-17).
1. Hold a firm line: exchanges are responsible; provide a public FAQ and direct users to exchange support channels.
   Minimizes operational burden but risks prolonged reputational damage in affected regions.
2. Hybrid response: publish a timestamped communication log with exchanges (what/when) and appoint a single exchange liaison for escalation.
   Improves trust without assuming full responsibility, at the cost of focused coordination overhead.
3. Project-led remediation: offer a supplemental, time-boxed manual claims process for verified edge cases caused by exchange messaging.
   Maximizes user goodwill but introduces fraud/precedent risk and significant operational complexity.
4. Other / More discussion needed / None of the above.
Q2: How should we address community safety concerns (migration-site compromise claims and impersonators) to protect users and preserve legitimacy?
• Reported: "ElizaOS migration site was compromised and funds were stolen" and "We're looking at it" (Discord, 2025-12-15).
• Issue #6211: user reports Discord impersonators and requests a safe official support path (GitHub, Dec 2025 monthly report).
1. Immediate hardening: publish an official security bulletin, rotate links, and pin a single canonical URL plus signature-verification instructions.
   Rapidly reduces harm and stabilizes trust, but requires disciplined comms and ongoing monitoring.
2. Lockdown approach: temporarily pause non-essential migration UI changes and route support via GitHub-only verification until incident closure.
   Maximizes safety but may increase frustration and delay legitimate migrations.
3. Community-led defense: rely on mods/bans and user education without formal security communications until evidence is confirmed.
   Lowest effort, but highest risk of rumor amplification and user losses if the threats are real.
4. Other / More discussion needed / None of the above.
Q3: For future migrations, do we invest in a more straightforward swap mechanism even if it constrains composability or requires additional infrastructure?
• Alexei: "Consider implementing a more straightforward token swap mechanism for future migrations" (Discord, 2025-12-17).
• Ongoing confusion: "Different exchanges... handling the AI16Z to ELIZAOS token swap differently" (Discord, 2025-12-17).
1. Keep the current model: snapshot + portal + exchange coordination; improve docs and comms rather than the mechanism.
   Fastest path operationally, but repeats complexity and external-dependency risk.
2. Build a standardized migration contract/flow with on-chain proofs and a reference exchange integration kit.
   Improves determinism and trust, but demands engineering time and legal/ops coordination.
3. Adopt a managed migration service layer in Cloud (authenticated claims + attestations) as the single canonical swap gateway.
   Centralizes reliability and user experience, but increases platform responsibility and perceived centralization.
4. Other / More discussion needed / None of the above.
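To make the "on-chain proofs" in Option 2 concrete: a snapshot-based claim flow is commonly anchored with a Merkle root, where the project publishes one root over all (address, balance) pairs at the snapshot and each claim is checked against a sibling path. The sketch below is illustrative only, assuming a pair-sorted SHA-256 tree; the addresses, balances, and leaf encoding are made-up assumptions, not the actual migration design.

```typescript
import { createHash } from "crypto";

// SHA-256 over a UTF-8 string, hex-encoded.
const sha256 = (s: string): string =>
  createHash("sha256").update(s).digest("hex");

// A leaf commits to one snapshot entry (address, balance).
const leaf = (addr: string, balance: bigint): string =>
  sha256(`${addr}:${balance}`);

// Pair-sorted hashing so a proof needs no left/right flags.
const parent = (a: string, b: string): string =>
  a < b ? sha256(a + b) : sha256(b + a);

// Build the Merkle root (duplicating the last node on odd levels).
function merkleRoot(leaves: string[]): string {
  let level = leaves;
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      next.push(parent(level[i], level[i + 1] ?? level[i]));
    }
    level = next;
  }
  return level[0];
}

// Produce the sibling path for the leaf at `index`.
function merkleProof(leaves: string[], index: number): string[] {
  const proof: string[] = [];
  let level = leaves;
  let i = index;
  while (level.length > 1) {
    proof.push(level[i ^ 1] ?? level[i]);
    const next: string[] = [];
    for (let j = 0; j < level.length; j += 2) {
      next.push(parent(level[j], level[j + 1] ?? level[j]));
    }
    level = next;
    i = Math.floor(i / 2);
  }
  return proof;
}

// Verify a claim: fold the proof from the leaf up to the root.
function verifyClaim(
  addr: string,
  balance: bigint,
  proof: string[],
  root: string
): boolean {
  return proof.reduce(parent, leaf(addr, balance)) === root;
}

// Illustrative snapshot (addresses and balances are made up).
const snapshot: Array<[string, bigint]> = [
  ["addr1", 100n],
  ["addr2", 250n],
  ["addr3", 75n],
];
const leaves = snapshot.map(([a, b]) => leaf(a, b));
const root = merkleRoot(leaves);
const proof = merkleProof(leaves, 1);
console.log(verifyClaim("addr2", 250n, proof, root)); // true
console.log(verifyClaim("addr2", 999n, proof, root)); // false
```

The practical benefit for the dispute above is determinism: exchanges and users verify the same proof against the same published root, so "who holds what at the snapshot" stops being a matter of per-exchange interpretation.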
Cloud Streaming & Messaging Plugins: Shipping Discipline Under Scale

Streaming works for simple messages and actions across Cloud and the monorepo, but Actions UI rendering still breaks the experience; meanwhile a massive Discord plugin PR is waiting to merge, raising quality and integration risk at a critical launch window.

Q1: Do we merge the large Discord plugin PR now (with accelerated review), or enforce decomposition to protect reliability and auditability?
• Odilitime: "Large PR with 66 commits is ready to merge... already 3 weeks old" (Discord core-devs, 2025-12-17).
• Stan: offered to review before merging; Odilitime agreed to wait 24 hours (Discord core-devs, 2025-12-17).
1. Merge after a time-boxed senior review (24–48h) plus minimal smoke tests; follow up with stabilization PRs.
   Unblocks momentum quickly but may import hidden regressions into a key integration surface.
2. Require splitting the PR into smaller, testable increments (e.g., messaging API refactor vs. feature additions) before merge.
   Improves long-term maintainability and reliability, but delays shipping and increases contributor friction.
3. Create an "experimental" release channel/branch and deploy to a controlled cohort (Eliza-Alpha) before merging to main.
   Balances speed and safety, but adds release-process complexity and requires disciplined environment management.
4. Other / More discussion needed / None of the above.
Q2: What is the Council's priority order for making Cloud streaming feel reliable end-to-end (especially the Actions UI) during the December execution push?
• Stan: "Everything works in the monorepo, but for Actions the UI still displays the text all at once instead of streaming it" (Discord, 2025-12-16).
• Stan: "Streaming functionality now working for simple messages and actions" with PRs in eliza-cloud-v2 and the monorepo (Discord, 2025-12-17).
1. Treat Actions UI streaming as a launch blocker; pause non-critical features until it is resolved and tested.
   Aligns with execution excellence, but may delay the broader Cloud rollout and ecosystem demos.
2. Ship Cloud with partial streaming (messages only) and clearly label Actions streaming as beta; collect telemetry and iterate fast.
   Accelerates launch while managing expectations, but risks first impressions for developer trust.
3. De-scope UI streaming entirely for now; focus on backend correctness and provide a stable non-streaming UI until vNext.
   Maximizes stability but forfeits a key UX differentiator and may weaken market-maker confidence.
4. Other / More discussion needed / None of the above.
Q3: How do we satisfy market-maker expectations for "agents in production" without compromising core reliability or overextending the flagship agents?
• Partners channel: "Market makers require agents to be deployed in production and actively engaging in social environments" (Discord 🥇-partners, 2025-12-17).
• Product philosophy: "have a live product that can be iteratively improved rather than waiting for perfection" (Discord 🥇-partners, 2025-12-17).
1. Deploy a minimal set of flagship agents with conservative capabilities and strict guardrails (rate limits, scoped actions, rollback).
   Demonstrates reality while minimizing blast radius, but may look underwhelming compared to expectations.
2. Prioritize breadth: deploy many community agents quickly via Cloud templates to show ecosystem activity over polish.
   Signals scale and momentum, but increases incident probability and support burden.
3. Stage production readiness: a public "Alpha fleet" with explicit SLAs and incident transparency, upgrading agents as reliability hardens.
   Builds trust through shipping and honesty, but requires strong comms and operational discipline.
4. Other / More discussion needed / None of the above.
Developer Trust Through DX: Docs, Plugins, and Onboarding Clarity

Builders are hitting friction in extending actions and in local DB setup, while GitHub shows a surge in UX backlog definition; the Council must convert this signal into a coherent, reliability-first onboarding and documentation campaign.

Q1: What is the fastest Council-approved path to reduce "plugin/action extension" confusion without fragmenting the ecosystem into copy-pasted local forks?
• FenrirFawks: couldn't locate the starknet-plugin folder; Odilitime: "clone the plugin into the packages folder" (Discord 💬-coders, 2025-12-17).
• Stan: "take a look at actions documentation... also have to put it in src/index" (Discord 💬-coders, 2025-12-17).
1. Publish a canonical "Adding an Action" guide with a minimal working example and explicit registration steps, plus a template generator in the CLI.
   Improves DX quickly and aligns with Developer First, with modest engineering investment.
2. Enforce a plugin development workflow: actions must be added via a standardized registry/manifest so the system auto-discovers them.
   Reduces human error long-term but introduces breaking changes and requires ecosystem migration.
3. Accept cloning as the norm for now; focus on the broader Cloud launch and revisit plugin ergonomics after December.
   Preserves short-term velocity but accrues DX debt and frustrates new developers during peak attention.
4. Other / More discussion needed / None of the above.
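The "minimal working example" in Option 1 could be as small as the sketch below: an action object with a validate/handler pair, bundled into a plugin and exported from the plugin's src/index so the runtime can discover it. The interfaces here are simplified stand-ins, not the real ElizaOS types (which carry more fields), and names like `greetAction` are hypothetical.

```typescript
// Simplified stand-ins for the framework types (illustrative only).
interface Message {
  text: string;
}

interface Action {
  name: string;
  description: string;
  // Decide whether this action applies to an incoming message.
  validate: (message: Message) => Promise<boolean>;
  // Produce the action's response text.
  handler: (message: Message) => Promise<string>;
}

// A hypothetical minimal action: reply to greetings.
export const greetAction: Action = {
  name: "GREET",
  description: "Replies to a greeting with a greeting.",
  validate: async (message) => /\bhello\b/i.test(message.text),
  handler: async () => "Hello! How can I help?",
};

// A plugin bundles its actions; exporting the plugin from src/index
// is the registration step the docs discussion above refers to.
export const greetPlugin = {
  name: "plugin-greet",
  actions: [greetAction],
};

// Tiny dispatch loop showing how a runtime might use the plugin.
export async function dispatch(message: Message): Promise<string | null> {
  for (const action of greetPlugin.actions) {
    if (await action.validate(message)) {
      return action.handler(message);
    }
  }
  return null;
}

dispatch({ text: "hello there" }).then(console.log); // "Hello! How can I help?"
```

A CLI template generator (Option 1) would emit exactly this skeleton with the registration export pre-wired, removing the "where does it go" guesswork seen in the support threads.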
Q2: How should we operationalize the growing UX issue backlog so it strengthens (rather than distracts from) December's execution-excellence directive?
• GitHub daily: "14 new issues opened... no PRs" with multiple UX fixes closed quickly (#6240, #6242, #6243) (Daily report, 2025-12-17).
• Weekly report: "dozens of UI/UX-focused issues... roadmap for major UI/UX overhaul" (Weekly report Dec 14–20, 2025).
1. Create a tight "December Reliability UX Pack": pick 5–8 issues that directly reduce onboarding failure and ship them this month.
   Converts the backlog into trust-building delivery while staying aligned to the monthly directive.
2. Start a full UI overhaul sprint immediately, using the backlog as the spec and pausing all other initiatives except critical bugs.
   Potentially transformative, but high risk to the schedule and to the Cloud/token deliverables.
3. Defer UX changes until after the migration and Cloud launch; use the backlog mainly for Q1 planning.
   Protects launch scope, but risks poor first impressions and continued friction for new builders.
4. Other / More discussion needed / None of the above.
Q3: What Council policy best improves merge confidence and reduces regressions as PR size and AI-assisted coding volume increase?
• Proposal: "require developers to include screenshots or videos with PRs" (Discord, 2025-12-15).
• cjft: "Probably like 50% of my code" is AI-generated; notes the need for better review workflows (Discord, 2025-12-16).
1. Adopt a "Proof of Function" standard: UI PRs require before/after media; all PRs require a minimal test-plan section and smoke steps.
   Improves reliability and review speed with manageable process overhead.
2. Invest in automation: a GitHub bot to enforce templates, run scenario tests, and summarize AI-generated diffs for reviewers.
   Scales quality with growth, but requires upfront engineering and careful tuning to avoid noise.
3. Rely on post-merge monitoring and fast rollback; keep the process lightweight to maintain velocity during the launch window.
   Maximizes speed but increases incident likelihood, undermining "trust through shipping."
4. Other / More discussion needed / None of the above.
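The template enforcement in Option 2 can start far smaller than a full bot: a CI step that fails a PR whose description is missing the required "Proof of Function" sections. A minimal sketch follows; the section names are assumptions for illustration, not an agreed standard.

```typescript
// Hypothetical "Proof of Function" template check. A CI job would
// read the PR body and fail when any required section is missing.
const REQUIRED_SECTIONS = ["## Test Plan", "## Smoke Steps"];
const UI_SECTION = "## Before/After Media";

export function missingSections(
  prBody: string,
  touchesUi: boolean
): string[] {
  const required = touchesUi
    ? [...REQUIRED_SECTIONS, UI_SECTION]
    : REQUIRED_SECTIONS;
  // Report every required section heading absent from the body.
  return required.filter((section) => !prBody.includes(section));
}

const body =
  "## Test Plan\nRun the bot locally.\n## Smoke Steps\n1. Send a message.";
console.log(missingSections(body, false)); // []
console.log(missingSections(body, true)); // ["## Before/After Media"]
```

Because the check is a pure function of the PR body, it is cheap to run on every push and gives contributors an immediate, deterministic signal instead of a late human review comment.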