Build AI-native apps, not AI demos.
AINative gives you streaming UI, React components, tool execution, and multi-provider support in one framework.
```bash
npm install @hari7261/ainative-client react react-dom
```

```tsx
import React from "react";
import ReactDOM from "react-dom/client";
import { AIAppComponent, AIPane } from "@hari7261/ainative-client";

function App() {
  const config = {
    apiUrl: "http://localhost:3001",
    streamMethod: "SSE" as const,
  };

  return (
    <AIAppComponent config={config}>
      {(state, app) => (
        <AIPane
          state={state}
          title="Basic Chat"
          subtitle="Powered by AINative"
          onSendMessage={(msg) => app.sendMessage(msg, { stream: true })}
          enableAudio={true}
          enableImage={true}
          enableFile={true}
        />
      )}
    </AIAppComponent>
  );
}

ReactDOM.createRoot(document.getElementById("root")!).render(<App />);
```

The framework for the next generation of AI apps.
Three primitives that turn LLMs into production software, not chat toys. Each one is independently usable and fully replaceable.
Streaming Native
Token-by-token rendering with backpressure, retry, and resume. Built on the Web Streams API — no polyfills, no opinions.
- SSE, fetch, WebSocket
- Auto-reconnect
- Partial tool calls
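The core idea of the streaming layer can be sketched with the Web Streams API the card mentions. This is an illustrative `sseTokenStream` helper, not AINative's internal parser: a `TransformStream` that turns raw SSE bytes into text tokens, buffering chunks that split mid-line.

```typescript
// Sketch: parse an SSE byte stream into text tokens with the Web Streams API.
// Assumes the standard "data: <payload>" line format; AINative's actual
// parser is internal and may differ.
function sseTokenStream(): TransformStream<Uint8Array, string> {
  const decoder = new TextDecoder();
  let buffer = "";
  return new TransformStream<Uint8Array, string>({
    transform(chunk, controller) {
      buffer += decoder.decode(chunk, { stream: true });
      const lines = buffer.split("\n");
      buffer = lines.pop() ?? ""; // keep the trailing partial line for the next chunk
      for (const line of lines) {
        if (line.startsWith("data: ") && line !== "data: [DONE]") {
          controller.enqueue(line.slice(6));
        }
      }
    },
  });
}
```

In use, this pipes straight off a fetch response: `response.body!.pipeThrough(sseTokenStream())`, which is why no polyfill is needed in any modern runtime.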
State-Driven Runtime
A reconciler diffs partial AI output against current UI state and applies minimal patches. No flicker, no duplicate frames, no lost messages.
- Optimistic updates
- Persistent threads
- SSR-safe hydration
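The diff step a reconciler like this performs can be illustrated with a minimal sketch. The `diffPartial` helper below is hypothetical, not AINative's internals: it compares the last rendered text against the new partial output and emits the smallest patch, which is what avoids flicker and duplicate frames.

```typescript
// Illustrative reconciler diff (assumed behavior, not AINative's actual code).
type Patch =
  | { op: "append"; text: string }   // new output extends the old: append only the tail
  | { op: "replace"; text: string }; // model revised earlier text: replace wholesale

function diffPartial(prev: string, next: string): Patch | null {
  if (next === prev) return null; // no-op: never emit a duplicate frame
  if (next.startsWith(prev)) {
    return { op: "append", text: next.slice(prev.length) };
  }
  return { op: "replace", text: next };
}
```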
Provider Agnostic
Start with OpenAI, Anthropic, or Ollama behind one consistent protocol, then extend with your own adapter if you need more.
- 3 documented providers
- Custom adapter API
- Per-request routing
Composable primitives, not opinionated UIs.
Drop in what you need. Style with your own design system. Every component is headless-compatible and ships with sensible defaults.
AIApp
Root provider. Wires runtime, providers, and tools together with a single config object.

```tsx
<AIApp.Provider app={app}>
  {children}
</AIApp.Provider>
```

AIInput
Multimodal input with attachments, slash commands, and submit-on-enter.

```tsx
<AIInput
  placeholder="Ask..."
  onSubmit={send}
  accept="image/*,audio/*"
/>
```

AIStream
Renders streaming output with auto-scroll, diffing, and tool-call rendering.

```tsx
<AIStream
  threadId={id}
  renderTool={Tool}
  autoScroll
/>
```

AIPane
Side-pane assistant for embedded copilots inside any app surface.

```tsx
<AIPane
  side="right"
  width={420}
  collapsible
/>
```

Tool Runtime
Type-safe tools executed client-side, server-side, or both, with state injection.

```ts
app.tool("search", {
  schema: z.object({
    query: z.string(),
  }),
  run: async ({ query }) =>
    searchWeb(query),
});
```

Multimodal Input
Image, audio, and file uploads, normalized into one provider-agnostic format.

```tsx
<AIInput
  accept="image/*,audio/*"
  maxFileSize={20 * 1024 * 1024}
/>
```

One framework. Three runtimes.
The same primitives across React, Node, and Python. Pick the stack you already love.
```tsx
import { AIApp, AIInput, AIStream } from "@hari7261/ainative-client";

const app = new AIApp({
  provider: "openai",
  model: "gpt-4o",
  endpoint: "/api/ai",
});

export default function Page() {
  return (
    <AIApp.Provider app={app}>
      <AIStream className="flex-1" />
      <AIInput placeholder="Message AINative..." />
    </AIApp.Provider>
  );
}
```

A clean stack, top to bottom.
Every layer is replaceable. Every contract is typed. The same protocol runs from the browser to the model provider.
- Browser: your React app + the AINative client runtime
- Client Runtime: state manager, event bus, streaming engine, reconciler
- Server Adapters: Node Express middleware or Python FastAPI router
- Providers: OpenAI, Anthropic, Ollama, plus custom adapters
- Tool Registry: schema-validated functions with state injection
One wire protocol. Streamed end to end. Tool calls, state patches, and errors travel through the same typed channel.
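One way to picture such a channel is with an assumed frame shape (this is not the documented AINative protocol): every frame is a tagged union, so a single parser covers tokens, tool calls, state patches, and errors, and TypeScript narrows each case for free.

```typescript
// Illustrative wire frame for a single typed streaming channel (assumed shape).
type WireFrame =
  | { kind: "token"; text: string }
  | { kind: "tool_call"; name: string; args: unknown }
  | { kind: "state_patch"; path: string; value: unknown }
  | { kind: "error"; message: string };

function parseFrame(raw: string): WireFrame {
  const frame = JSON.parse(raw) as WireFrame;
  // Reject unknown kinds instead of silently passing them downstream.
  if (!["token", "tool_call", "state_patch", "error"].includes(frame.kind)) {
    throw new Error(`unknown frame kind: ${(frame as { kind: string }).kind}`);
  }
  return frame;
}
```

A `switch` on `frame.kind` then dispatches to the renderer, the tool runtime, the state manager, or the error boundary, which is what "one typed channel" buys you in practice.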
Ship products that ship value.
From single-purpose chatbots to enterprise copilots — the same primitives scale all the way up.
Engineered for production, not slideware.
Every measurement comes from real apps in production — not contrived benchmarks.
Trusted by teams shipping AI to production.
From two-person startups to public companies — AINative is the substrate for serious AI products.
"AINative replaced weeks of glue code. Streaming, tools, and provider switching just work. Our product feels noticeably faster, and our codebase shrank by a third."
"Streaming UX finally feels native. The reconciler is the part nobody else got right — partial tool calls render perfectly without a single hydration warning."
"We swapped OpenAI for Anthropic in one config line and shipped to prod the same day. Routing requests by user tier was a 4-line change. This is what a real framework looks like."
Open source forever. Pro features coming soon.
The AINative core is MIT-licensed. We're currently building our managed cloud and team tools in private beta.
Free
Open source. Self-host forever.
- MIT license
- All client packages
- All server adapters
- Community Discord
- Unlimited apps
Pro
Most popular
For teams shipping production AI.
- Everything in Free
- Premium templates
- Priority support
- Private registry
- Team seats
- Usage dashboards
Enterprise
SSO, SLA, and dedicated onboarding.
- Everything in Pro
- Single sign-on (SAML)
- Security review
- Architecture support
- 99.9% SLA
Questions, answered.
Don't see yours? Reach out — we read everything.
Ship better AI products with AINative.
Free, open source, and production-ready. Start in under a minute — no signup, no credit card, no telemetry.