Everything you need to build with AI.
A complete framework — not a wrapper, not a SaaS. Every piece is independently usable, fully typed, and open source.
Eight pillars. One framework.
Each capability is documented in depth, fully typed, and shipped under the MIT license.
Streaming Engine
Token-by-token rendering with backpressure and reconnection. Works with SSE, fetch streams, and WebSocket transports.
- Web Streams API native
- Automatic reconnect on disconnect
- Partial tool-call streaming
- Backpressure-aware
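For a sense of what the engine abstracts away, here is a minimal hand-rolled version of the same loop written against the standard Web Streams API. The `/api/chat` endpoint and the `onToken` callback are illustrative assumptions, not part of the documented surface; the engine layers reconnection and partial tool-call handling on top of this pattern.

```ts
// Illustrative only: a bare token loop over a fetch stream.
async function streamChat(prompt: string, onToken: (t: string) => void) {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
  });

  const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader();

  // Awaiting each read keeps the loop backpressure-aware: chunks are never
  // pulled faster than the UI consumes them.
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    onToken(value);
  }
}
```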
Reconciler State System
Diffs partial AI output against the current UI state and applies minimal patches. No flicker, no duplicate frames.
- Optimistic updates
- Patch-based reconciliation
- Hydration-safe
- Persistent threads
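A rough sketch of the idea, assuming the common streaming case where each chunk extends the previous text: diff the new output against what is already rendered and apply only the delta. This is not the library's reconciler, just the pattern it generalizes.

```ts
// Hypothetical patch-based reconciliation for a streaming message.
type Patch = { append: string } | { replace: string };

function diffMessage(prev: string, next: string): Patch {
  // Fast path: the new text simply extends the old one, so the patch is
  // just the appended suffix and earlier tokens are never re-rendered.
  if (next.startsWith(prev)) return { append: next.slice(prev.length) };
  // Fallback: the partial output was revised, so replace the node outright.
  return { replace: next };
}

function applyPatch(el: HTMLElement, patch: Patch) {
  if ("append" in patch) el.append(patch.append);
  else el.textContent = patch.replace;
}
```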
Event Bus
Pub/sub for runtime events. Hook into messages, tools, errors, and lifecycle for analytics or custom UI.
- Typed event payloads
- OpenTelemetry adapter
- PostHog integration
- Custom listeners
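The shape this usually takes is a typed event map keyed by event name. The names and payloads below are illustrative assumptions, not the documented event catalogue; the point is that listeners get payload types inferred from the event they subscribe to.

```ts
// Hypothetical event map: payload types are keyed by event name.
type Events = {
  "message:delta": { id: string; delta: string };
  "tool:start": { name: string; args: unknown };
  "error": { error: Error };
};

type Listener<K extends keyof Events> = (payload: Events[K]) => void;
const listeners = new Map<keyof Events, Set<Listener<any>>>();

function on<K extends keyof Events>(event: K, fn: Listener<K>) {
  if (!listeners.has(event)) listeners.set(event, new Set());
  listeners.get(event)!.add(fn);
}

function emit<K extends keyof Events>(event: K, payload: Events[K]) {
  listeners.get(event)?.forEach((fn) => fn(payload));
}

// A custom listener, e.g. forwarding deltas to an analytics sink.
on("message:delta", ({ id, delta }) => console.debug(id, delta.length));
```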
Tool Execution
Schema-validated tools with type inference end-to-end. Run client-side, server-side, or both.
- Zod or JSON Schema
- Parallel execution
- Tool composition
- State injection
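A hypothetical tool definition showing the end-to-end typing: the Zod schema validates the model's arguments at runtime, and `z.infer` gives the handler its parameter type. The object shape and field names are assumptions, not the documented helper.

```ts
import { z } from "zod";

// Illustrative tool: one schema drives both runtime validation and static types.
const getWeatherSchema = z.object({
  city: z.string(),
  unit: z.enum(["c", "f"]).default("c"),
});

export const getWeather = {
  name: "get_weather",
  description: "Look up the current temperature for a city",
  parameters: getWeatherSchema,
  // z.infer makes args.city a string and args.unit a "c" | "f" union.
  run: async (args: z.infer<typeof getWeatherSchema>) => ({
    city: args.city,
    temperature: 21,
    unit: args.unit,
  }),
};
```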
Multi-Provider Layer
OpenAI, Anthropic, and Ollama are the currently documented providers, with an adapter API for anything custom.
- 3 documented providers
- Custom adapter API
- Per-request routing
- Provider abstraction
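A sketch of what the adapter seam could look like, assuming a provider is anything that can turn messages into a stream of text chunks. The interface and routing helper below are illustrative, not the documented adapter API.

```ts
// Hypothetical adapter contract plus per-request routing.
interface ProviderAdapter {
  name: string;
  stream(messages: { role: string; content: string }[]): AsyncIterable<string>;
}

// Route each request to a named provider, falling back to the first one.
function pickProvider(adapters: ProviderAdapter[], hint?: string): ProviderAdapter {
  return adapters.find((a) => a.name === hint) ?? adapters[0];
}
```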
Node + Python Servers
Drop-in Express middleware and FastAPI router. Same wire protocol, same tool registry.
- Express middleware
- FastAPI router
- Identical protocol
- Mix and match
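An illustrative Express wiring, assuming a handler factory in the spirit of the middleware described above. `createChatHandler` is a stand-in name declared locally, not the real export; the FastAPI side would mount the equivalent router against the same wire protocol.

```ts
import express from "express";

const app = express();
app.use(express.json());

// Stand-in for the drop-in middleware: same wire protocol as the FastAPI
// router, executing tools from a shared registry.
declare function createChatHandler(opts: { tools: unknown[] }): express.RequestHandler;

app.post("/api/chat", createChatHandler({ tools: [] }));
app.listen(3000);
```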
React Components
AIApp, AIInput, AIStream, AIPane — composable, headless-friendly, styled with your tokens.
- Headless-compatible
- Tree-shakeable
- 12kB gzipped
- Full a11y
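One possible composition of the components named above. The props, and the local `declare` stubs standing in for the real imports, are assumptions; only the component names come from the list.

```tsx
import type { FC, ReactNode } from "react";

// Stubs standing in for the real imports; props are illustrative.
declare const AIApp: FC<{ endpoint: string; children: ReactNode }>;
declare const AIPane: FC<{ children: ReactNode }>;
declare const AIStream: FC;
declare const AIInput: FC<{ placeholder?: string }>;

export function Chat() {
  return (
    <AIApp endpoint="/api/chat">
      <AIPane>
        <AIStream /> {/* renders the streamed assistant message */}
      </AIPane>
      <AIInput placeholder="Ask anything" />
    </AIApp>
  );
}
```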
CLI
Scaffold projects, run dev servers, and manage providers from the command line.
- Project scaffolding
- Dev server
- Provider config
- Type generation