New in v0.2.4: Version Display Fix

Build AI-native apps, not AI demos.

AINative gives you streaming UI, React components, tool execution, and multi-provider support - in one framework.

MIT licensed
TypeScript-first
Zero lock-in
npm install @hari7261/ainative-client react react-dom

Built for modern teams shipping real AI products

  • 944 npm downloads in the last 30 days
  • 3 published npm packages
  • 3 documented providers
  • main is the active branch
Why AINative

The framework for the next generation of AI apps.

Three primitives that turn LLMs into production software, not chat toys. Each one is independently usable and fully replaceable.

Streaming Native

Token-by-token rendering with backpressure, retry, and resume. Built on the Web Streams API — no polyfills, no opinions.

  • SSE, fetch, WebSocket
  • Auto-reconnect
  • Partial tool calls
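The streaming idea above can be sketched with nothing but the Web Streams API. This is a minimal illustration of token-by-token consumption, not AINative's implementation; `readTokens` and the simulated provider stream are hypothetical.

```typescript
// Minimal sketch (not AINative internals): reading a token stream
// chunk by chunk with the Web Streams API, invoking a render
// callback as each token arrives instead of waiting for the full reply.
async function readTokens(
  stream: ReadableStream<Uint8Array>,
  onToken: (token: string) => void
): Promise<string> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done || !value) break;
    const token = decoder.decode(value, { stream: true });
    full += token;
    onToken(token); // render the partial output immediately
  }
  return full;
}

// Simulated provider stream emitting three chunks.
const demo = new ReadableStream<Uint8Array>({
  start(controller) {
    for (const t of ["Hel", "lo ", "world"]) {
      controller.enqueue(new TextEncoder().encode(t));
    }
    controller.close();
  },
});

const tokens: string[] = [];
const text = await readTokens(demo, (t) => tokens.push(t));
```

Because this is the browser-native `ReadableStream`, the same loop works for `fetch` response bodies and SSE adapters alike.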

State-Driven Runtime

A reconciler diffs partial AI output against current UI state and applies minimal patches. No flicker, no duplicate frames, no lost messages.

  • Optimistic updates
  • Persistent threads
  • SSR-safe hydration
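The core of the reconciler idea can be shown in a few lines. This is a hedged sketch of the concept, not AINative's actual diffing code: instead of re-rendering a whole message on every stream event, compute the minimal patch between the previous partial output and the new one.

```typescript
// Hypothetical sketch of stream reconciliation. In the common case the
// new partial output extends the old one, so the patch is a cheap append;
// only when the stream rewinds or corrects itself do we replace.
type Patch =
  | { kind: "append"; text: string }   // new tokens arrived
  | { kind: "replace"; text: string }; // stream was corrected or rewound

function diffPartial(prev: string, next: string): Patch {
  if (next.startsWith(prev)) {
    return { kind: "append", text: next.slice(prev.length) };
  }
  return { kind: "replace", text: next };
}

const p1 = diffPartial("Hello", "Hello, wor"); // append ", wor"
const p2 = diffPartial("Hello, wor", "Hi there"); // full replace
```

Applying append patches instead of full re-renders is what avoids flicker and duplicate frames while a message is still streaming.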

Provider Agnostic

Start with OpenAI, Anthropic, or Ollama behind one consistent protocol, then extend with your own adapter if you need more.

  • 3 documented providers
  • Custom adapter API
  • Per-request routing
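The adapter-plus-routing idea can be sketched as follows. The interface and adapter names here are illustrative assumptions, not AINative's published API: every provider implements one protocol, and a router picks a provider per request.

```typescript
// Illustrative only: a minimal provider protocol. Each adapter
// (names and shapes are hypothetical) satisfies the same interface.
interface ProviderAdapter {
  name: string;
  complete(prompt: string): Promise<string>;
}

const openai: ProviderAdapter = {
  name: "openai",
  complete: async (p) => `[openai] ${p}`, // stand-in for a real API call
};

const ollama: ProviderAdapter = {
  name: "ollama",
  complete: async (p) => `[ollama] ${p}`, // stand-in for a local model
};

// Per-request routing: e.g. free-tier users hit a local model,
// paying users hit a hosted one.
function route(userTier: "free" | "pro"): ProviderAdapter {
  return userTier === "pro" ? openai : ollama;
}

const answer = await route("free").complete("ping");
```

Because callers only see `ProviderAdapter`, swapping or adding a provider never touches application code, only the routing function.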
Components

Composable primitives, not opinionated UIs.

Drop in what you need. Style with your own design system. Every component is headless-compatible and ships with sensible defaults.

AIApp

component

Root provider. Wires runtime, providers, and tools together with a single config object.

<AIApp.Provider app={app}>
  {children}
</AIApp.Provider>

AIInput

component

Multimodal input with attachments, slash commands, and submit-on-enter.

<AIInput
  placeholder="Ask..."
  onSubmit={send}
  accept="image/*,audio/*"
/>

AIStream

component

Renders streaming output with auto-scroll, diffing, and tool-call rendering.

<AIStream
  threadId={id}
  renderTool={Tool}
  autoScroll
/>

AIPane

component

Side-pane assistant for embedded copilots inside any app surface.

<AIPane
  side="right"
  width={420}
  collapsible
/>

Tool Runtime

component

Type-safe tools executed client-side, server-side, or both — with state injection.

app.tool("search", {
  schema: z.object({
    query: z.string(),
  }),
  run: async ({ query }) =>
    searchWeb(query),
})

Multimodal Input

component

Image, audio, and file uploads, normalized into one provider-agnostic format.

<AIInput
  accept="image/*,audio/*"
  maxFileSize={20 * 1024 * 1024}
/>
Examples

One framework. Three runtimes.

The same primitives across React, Node, and Python. Pick the stack you already love.

app/chat.tsx
import { AIApp, AIInput, AIStream } from "@hari7261/ainative-client";

const app = new AIApp({
  provider: "openai",
  model: "gpt-4o",
  endpoint: "/api/ai",
});

export default function Page() {
  return (
    <AIApp.Provider app={app}>
      <AIStream className="flex-1" />
      <AIInput placeholder="Message AINative..." />
    </AIApp.Provider>
  );
}
Architecture

A clean stack, top to bottom.

Every layer is replaceable. Every contract is typed. The same protocol runs from the browser to the model provider.

AINative layered architecture diagram
Layer 01 · Browser: your React app + AINative client runtime
Layer 02 · Client Runtime: state manager, event bus, streaming engine, reconciler
Layer 03 · Server Adapters: Node Express middleware or Python FastAPI router
Layer 04 · Providers: OpenAI, Anthropic, Ollama, plus custom adapters
Layer 05 · Tool Registry: schema-validated functions with state injection

Client → Server → Providers

One wire protocol. Streamed end to end. Tool calls, state patches, and errors travel through the same typed channel.
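One possible shape for such a typed channel is a discriminated union of events. This is a hedged sketch, not the actual AINative wire protocol; the event names are assumptions.

```typescript
// Hypothetical wire events: tokens, tool calls, state patches, and
// errors all travel as one discriminated union, so every layer can
// switch exhaustively on the same typed channel.
type WireEvent =
  | { type: "token"; text: string }
  | { type: "tool_call"; name: string; args: Record<string, unknown> }
  | { type: "state_patch"; path: string; value: unknown }
  | { type: "error"; message: string };

function describe(ev: WireEvent): string {
  switch (ev.type) {
    case "token":
      return `token:${ev.text}`;
    case "tool_call":
      return `tool:${ev.name}`;
    case "state_patch":
      return `patch:${ev.path}`;
    case "error":
      return `error:${ev.message}`;
  }
}

const label = describe({ type: "token", text: "hi" });
```

A single union like this is what lets the compiler prove that no event kind goes unhandled anywhere between the browser and the provider adapter.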

Performance

Engineered for production, not slideware.

Every measurement comes from real apps in production — not contrived benchmarks.

  • <80ms to first streamed token, on warm provider connections from the edge
  • 3 documented providers: OpenAI, Anthropic, and Ollama are the providers listed in the repo README today
  • SSR server rendering: hydration-safe streaming primitives, no flicker
  • 100% type-safe API: Zod or JSON schemas in, inferred types out, everywhere
  • 12kB client gzipped: tree-shakeable, pay only for the components you use
  • 0 required services: no vendor cloud, no telemetry, no required accounts
Loved by builders

Trusted by teams shipping AI to production.

From two-person startups to public companies — AINative is the substrate for serious AI products.

"AINative replaced weeks of glue code. Streaming, tools, and provider switching just work. Our product feels noticeably faster, and our codebase shrank by a third."
Maya Chen
Staff Engineer · Lattice AI
"Streaming UX finally feels native. The reconciler is the part nobody else got right — partial tool calls render perfectly without a single hydration warning."
Diego Alvarez
Founder · Northwind
"We swapped OpenAI for Anthropic in one config line and shipped to prod the same day. Routing requests by user tier was a 4-line change. This is what a real framework looks like."
Priya Raman
Engineering Lead · Helix Labs
Pricing

Open source forever. Pro features coming soon.

The AINative core is MIT-licensed. We're currently building our managed cloud and team tools in private beta.

Free

$0/mo

Open source. Self-host forever.

  • MIT license
  • All client packages
  • All server adapters
  • Community Discord
  • Unlimited apps
View Plans

Pro

Most popular
$29/mo

For teams shipping production AI.

  • Everything in Free
  • Premium templates
  • Priority support
  • Private registry
  • Team seats
  • Usage dashboards
View Plans

Enterprise

Custom

SSO, SLA, and dedicated onboarding.

  • Everything in Pro
  • Single sign-on (SAML)
  • Security review
  • Architecture support
  • 99.9% SLA
Talk to sales
FAQ

Questions, answered.

Don't see yours? Reach out — we read everything.

Ship better AI products with AINative.

Free, open source, and production-ready. Start in under a minute — no signup, no credit card, no telemetry.