A framework built by people who ship.
AINative was started in 2025 by a team that had seen the same prototype-to-production gap over and over. We built the framework we wished existed.
Mission
Most AI tooling stops at "it works on my laptop." We build the pieces that come after: streaming UX, tools that actually execute, state that survives reloads, and providers you can swap without a rewrite.
AINative is open source, MIT licensed, and developed in the open. We make money from teams that need security, scale, and support — not from gating the framework behind feature flags.
If you ship AI to real users, we want to hear from you.
Values
- Open by default. Code, roadmap, and decisions in public. RFCs over private Slack threads.
- Composability over magic. Every layer should be replaceable. If you can't swap it, we built it wrong.
- Performance is a feature. Streaming latency, hydration cost, and bundle size all matter — and we measure them.
- Developer-first. If the docs are bad, the product is bad. We treat docs as production code.
Team

Hari Patel
Previously infrastructure at a Series C AI startup. Built three production AI products before AINative.

Sara Liu
Streaming protocols, runtime internals. Spent six years on real-time systems.

Marcus Tanaka
Design systems and developer tools. Made things that engineers actually use.
Timeline
- Sep 2025
Project started
Three engineers, one prototype, a lot of hand-rolled streaming.
- Dec 2025
v0.1 public release
Client, Node server, and CLI ship. First production users.
- Jan 2026
Anthropic + multimodal
Claude provider lands. Image and audio inputs go GA.
- Mar 2026
Ollama + AIPane
Local models supported. Side-pane copilot component.
- Apr 2026
v0.4 — Tools + Python
Parallel tool runtime. FastAPI adapter goes stable.
Want to work on this?
We're hiring engineers, designers, and DevRel. Remote, async, engineering-led.