Keep the intelligence you already use.
Claude, ChatGPT, Gemini, your own API key, or ODEI credits. The model layer remains replaceable while the personal layer becomes yours.
Bring Claude, ChatGPT, or Gemini. ODEI turns the intelligence you already use into a governed personal runtime that remembers, acts, verifies, and improves across sessions instead of starting from zero.
This is the runtime preview and founding cohort access lane. The public launch is not open yet, but the exact onboarding contract is already live.
Email, calendar, documents, tasks, CRM, and files stop living as disconnected integrations and start forming your World Model.
Approvals, action scopes, quiet hours, budget limits, and sensitive boundaries are defined before the loop is allowed to act.
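A governance boundary like this could be written down as a small declarative profile before the loop ever runs. The sketch below is illustrative only; every field name is an assumption, not the published ODEI schema:

```typescript
// Illustrative governance profile. All field names are assumptions,
// not the actual ODEI configuration format.
interface GovernanceProfile {
  approvals: "auto" | "ask_first";              // whether actions need explicit sign-off
  actionScopes: string[];                       // surfaces the loop may touch
  quietHours: { start: string; end: string };   // no actions inside this window
  monthlyBudgetLimitUsd: number;                // hard cap on paid routes
  sensitiveBoundaries: string[];                // lanes the loop must never act in
}

const profile: GovernanceProfile = {
  approvals: "ask_first",
  actionScopes: ["email.draft", "calendar.read"],
  quietHours: { start: "21:00", end: "07:00" },
  monthlyBudgetLimitUsd: 50,
  sensitiveBoundaries: ["legal", "hr"],
};
```

The point of making this declarative is that the runtime can check any proposed action against the profile before executing it, rather than relying on per-action judgment.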
Inbox triage, follow-ups, operating briefs, pipeline monitoring, or a custom mission that fits your actual operating cadence.
Inbox triage, weekly operating briefs, and governed follow-ups built around your real operating cadence.
Silent thread detection, next-touch preparation, and deal momentum across email, calendar, and CRM context.
Watch a fixed perimeter, update the World Model, and surface receipt-backed briefs when something materially changes.
Best for founders who need inbox triage, investor follow-ups, and weekly operating briefs to keep moving without losing context.
Best for operators running pipelines, counterparties, and silent threads where timing, context, and next-touch quality drive revenue.
Best for people who monitor a fixed perimeter and need governed briefs when something materially changes.
Best for small teams that need one shared World Model, governed operator lanes, and receipts across execution.
Founders, deal operators, research operators, and small teams with one mission where governed agency changes the outcome immediately.
One model lane, three world sources or fewer, one mission with a real cadence, and a governance mode that defines what the runtime may do.
Deterministic intake envelope, provisioning capsule, runtime scope, governance boundary, and the first mission queue instead of a vague waitlist state.
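The intake contract described above could be captured as a small typed request with a deterministic validation step. This is a sketch under assumed names; ODEI's real schema may differ:

```typescript
// Hypothetical intake request matching the described contract:
// one model lane, three world sources or fewer, one mission with a
// real cadence, and one governance mode. Names are illustrative.
interface IntakeRequest {
  modelLane: "claude" | "chatgpt" | "gemini" | "api_key" | "odei_credits";
  worldSources: string[];                       // max three at intake
  mission: { name: string; cadence: string };   // the mission must have a cadence
  governanceMode: "ask_first" | "scoped_auto";
}

// Deterministic validation: the same input always yields the same errors,
// so intake never lands in a vague waitlist state.
function validateIntake(req: IntakeRequest): string[] {
  const errors: string[] = [];
  if (req.worldSources.length === 0 || req.worldSources.length > 3) {
    errors.push("worldSources must contain between one and three sources");
  }
  if (!req.mission.cadence) {
    errors.push("mission requires a real cadence");
  }
  return errors;
}
```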
Intake returns immediately, runtime scoping and governance review happen in the first operator review window, and activation starts only after reviewed provisioning.
Choose model, governance, world sources, execution surfaces, and the first mission, then carry that exact runtime shape in a preview URL.
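Carrying the runtime shape in a preview URL could look like serializing the chosen configuration into query parameters. The parameter names below are assumptions for illustration, not ODEI's actual query schema:

```typescript
// Illustrative: encode a chosen runtime shape into a shareable preview URL.
// Parameter names and the URL itself are placeholders, not the real schema.
const runtimeShape = {
  model: "claude",
  governance: "ask_first",
  sources: ["email", "calendar", "crm"],
  mission: "inbox-triage",
};

const params = new URLSearchParams({
  model: runtimeShape.model,
  governance: runtimeShape.governance,
  sources: runtimeShape.sources.join(","), // commas are percent-encoded
  mission: runtimeShape.mission,
});

const previewUrl = `https://example.com/preview?${params.toString()}`;
```

Because the whole shape lives in the URL, the exact configuration survives a handoff: opening the link reconstructs the same runtime selection rather than a fresh default.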
Before submission, the app exposes the same structured payload that the intake API will receive. Nothing is hidden behind the UI.
After submit, the API immediately returns intakeId, lane, receivedAt, and consentAt.
The app then renders the provisioning capsule from that same runtime contract, so the first handoff artifact exists immediately.
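The envelope the copy describes (intakeId, lane, receivedAt, consentAt) could be typed as follows, with the capsule rendered from the same object. Field values and the render helper are illustrative assumptions:

```typescript
// Sketch of the deterministic intake envelope the API is described as
// returning. Only intakeId, lane, receivedAt, and consentAt come from the
// copy; everything else here is an assumption.
interface IntakeEnvelope {
  intakeId: string;
  lane: string;
  receivedAt: string; // ISO-8601 timestamp
  consentAt: string;  // ISO-8601 timestamp
}

// Hypothetical capsule renderer: the first handoff artifact is derived
// directly from the same contract, with nothing hidden behind the UI.
function renderCapsule(env: IntakeEnvelope): string {
  return `intake ${env.intakeId} · lane ${env.lane} · received ${env.receivedAt}`;
}

const env: IntakeEnvelope = {
  intakeId: "example-intake-id",
  lane: "founding-cohort",
  receivedAt: "2025-01-01T00:00:00Z",
  consentAt: "2025-01-01T00:00:00Z",
};
```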
Pick the intelligence layer first: your existing model subscription, your own API key, or ODEI credits. The model should stay replaceable.
Start with three sources that already matter: email, calendar, docs, CRM, or tasks. The goal is a useful first graph, not a perfect one.
Define one mission you already repeat every week: inbox triage, follow-ups, operating brief, or research monitoring. This keeps the first runtime measurable.
We map the first World Model, connect tools, define approvals, and configure the missions that should run from day one.
Memory, governance, receipt storage, hosted loop execution, and continuity across sessions live here.
Browser actions, large model calls, paid routes, and external API spend scale with execution rather than idle seats.
The preview is public, but runtime activation still starts from an intake ID, verified operator details, and a reviewed mission boundary.
Setup, runtime, and usage are already defined. Founding cohort onboarding captures billing readiness now, even while public checkout remains closed.
Activation is not a vague “approved” state. It yields a World Model workspace, governance profile, first mission queue, and receipt surface.
The API returns the deterministic intake envelope first: intake ID, lane, source, receivedAt, consentAt, a normalized summary, and the first provisioning trail. The app then renders the provisioning capsule from the same submitted runtime contract.
World sources, model layer, execution surfaces, and first mission are translated into the initial runtime scope instead of a generic assistant template.
Quiet hours, action permissions, and sensitive lanes are confirmed so the first loop launches with explicit control and clear receipts.
The output is a working personal runtime: World Model, governance profile, receipt surface, and the first mission queued to run.
The first output is not a vague waitlist entry. You get an intake ID and a machine-readable capsule bound to your exact onboarding contract.
The runtime is scoped around your chosen sources, your execution surfaces, and the approval rules that define what the loop may do.
The first runtime ships with one live mission queued and a receipt trail you can inspect as the agent observes, decides, acts, and verifies.