Open-source projects are often described by what they will become rather than what they are. This post describes the AINative repository as it exists today, based on the actual directory tree, package manifests, and README.
Monorepo structure
```
ainative/
├── packages/
│   ├── client/          # @hari7261/ainative-client v0.1.1
│   ├── server-node/     # @hari7261/ainative-server-node v0.1.1
│   ├── server-python/   # Python server adapter (pre-release)
│   └── cli/             # @hari7261/ainative-cli v0.2.0
├── examples/
│   ├── basic-chat/      # Simple streaming chat (Playwright tested)
│   ├── tool-use/        # LLM tool-calling demo
│   ├── multimodal/      # Image + text inputs
│   └── support-bot/     # Multi-turn support agent
├── docs/                # Documentation source
└── .github/             # CI workflows
```

The client runtime
The client package is the smallest and most fundamental piece. It handles streaming responses, exposes React hooks for real-time UI updates, and manages tool-call lifecycles. It has no runtime dependencies beyond React 18+.
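The tool-call lifecycle is the least visible of these responsibilities. As an illustration only, here is a hypothetical sketch of the state tracking such a runtime needs per tool call; none of these names come from the actual `@hari7261/ainative-client` API.

```typescript
// Hypothetical tool-call lifecycle states -- illustrative, not the real API.
type ToolCallState = "requested" | "running" | "completed" | "failed";

interface ToolCall {
  id: string;
  name: string;
  args: unknown;
  state: ToolCallState;
}

// Advance a tool call through its lifecycle, rejecting invalid transitions
// (e.g. a call cannot complete before it has started running).
function advance(call: ToolCall, next: ToolCallState): ToolCall {
  const allowed: Record<ToolCallState, ToolCallState[]> = {
    requested: ["running"],
    running: ["completed", "failed"],
    completed: [],
    failed: [],
  };
  if (!allowed[call.state].includes(next)) {
    throw new Error(`invalid transition: ${call.state} -> ${next}`);
  }
  return { ...call, state: next };
}
```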
```tsx
import { useStream } from "@hari7261/ainative-client";

function Chat() {
  const { messages, send, isStreaming } = useStream({
    endpoint: "/api/chat",
  });

  return (
    <div>
      {messages.map((m) => (
        <Message key={m.id} role={m.role} content={m.content} />
      ))}
      <input
        onKeyDown={(e) => e.key === "Enter" && send(e.currentTarget.value)}
      />
    </div>
  );
}
```

The Node server adapter
The Node adapter provides a thin Express-compatible middleware that translates between incoming HTTP requests and your LLM provider of choice. It normalises provider differences so your client code stays provider-agnostic.
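To make "normalises provider differences" concrete, here is a hypothetical sketch of what such normalisation involves. The chunk shapes below are assumptions modelled loosely on public provider streaming formats, not the adapter's real internal types.

```typescript
// Illustrative only: map provider-specific streaming deltas onto one
// client-facing shape, so the client never sees provider formats.
interface NormalisedChunk {
  role: "assistant";
  delta: string;
}

// OpenAI-style chunk: { choices: [{ delta: { content } }] } (assumed shape)
function fromOpenAI(chunk: {
  choices: { delta: { content?: string } }[];
}): NormalisedChunk {
  return { role: "assistant", delta: chunk.choices[0]?.delta.content ?? "" };
}

// Anthropic-style chunk: { delta: { text } } (assumed shape)
function fromAnthropic(chunk: { delta: { text?: string } }): NormalisedChunk {
  return { role: "assistant", delta: chunk.delta.text ?? "" };
}
```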
```ts
import express from "express";
import { createAINativeHandler } from "@hari7261/ainative-server-node";

const app = express();

app.post(
  "/api/chat",
  createAINativeHandler({
    provider: "openai",
    model: "gpt-4o",
    systemPrompt: "You are a helpful assistant.",
  })
);

app.listen(3000);
```

Validation suite
- Monorepo build — all packages compile cleanly with TypeScript strict mode.
- Client unit tests — jest coverage for streaming parser and hook state machine.
- Node server tests — supertest integration against a local OpenAI mock.
- Python server smoke test — verifies the adapter starts and returns a 200.
- Playwright E2E — basic-chat example runs end-to-end against a live stream.
- CLI smoke checks — `ainative help` and `ainative doctor` exit with code 0.
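The streaming parser covered by the client unit tests is internal to the package, but a minimal sketch gives a feel for what that suite exercises: splitting a server-sent-events chunk into `data:` payloads and detecting the conventional `[DONE]` terminator. This is a hypothetical stand-in, not the repository's parser.

```typescript
// Illustrative SSE chunk parser -- not the actual client implementation.
// Collects "data:" payload lines and flags the stream-end sentinel.
function parseSSEChunk(chunk: string): { tokens: string[]; done: boolean } {
  const tokens: string[] = [];
  let done = false;
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data:")) continue; // skip comments and blank lines
    const payload = line.slice("data:".length).trim();
    if (payload === "[DONE]") {
      done = true;
    } else if (payload.length > 0) {
      tokens.push(payload);
    }
  }
  return { tokens, done };
}
```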