One of the key promises of AINative is that your frontend should not need to change when you swap LLM providers. The server adapter normalises streaming formats, tool-call schemas, and error codes across all supported providers. Today that means OpenAI, Anthropic, and Ollama.
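As an illustration of what "normalising" means here (a sketch only — these type and function names are hypothetical, not AINative's actual internals), each provider's stream events can be mapped onto a single delta shape before they reach the client:

```typescript
// Hypothetical sketch — names are illustrative, not AINative's real internals.

// The unified shape the client always receives.
type MessageDelta = { type: "text-delta"; text: string };

// Simplified stand-ins for provider-specific stream events.
type OpenAIChunk = { choices: { delta: { content?: string } }[] };
type AnthropicEvent = { type: "content_block_delta"; delta: { text: string } };

// Map an OpenAI-style streaming chunk to the unified delta, or null
// if the chunk carries no text (e.g. a finish event).
function fromOpenAI(chunk: OpenAIChunk): MessageDelta | null {
  const text = chunk.choices[0]?.delta.content;
  return text ? { type: "text-delta", text } : null;
}

// Map an Anthropic-style stream event to the same unified delta.
function fromAnthropic(event: AnthropicEvent): MessageDelta | null {
  return event.type === "content_block_delta"
    ? { type: "text-delta", text: event.delta.text }
    : null;
}
```

Because both mappers emit the same `MessageDelta`, the frontend never needs to know which provider produced the stream.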
## OpenAI
OpenAI support uses the official `openai` Node SDK under the hood. The adapter supports chat completions, streaming, function calling, and structured outputs (JSON schema), introduced in mid-2024. The default model is `gpt-4o`.
```ts
createAINativeHandler({
  provider: "openai",
  model: "gpt-4o", // or "gpt-4o-mini", "gpt-4-turbo"
  systemPrompt: "…",
  tools: [mySearchTool], // optional tool array
  temperature: 0.7,
  maxTokens: 2048,
})
```

## Anthropic
Anthropic's Claude models use a different streaming protocol than OpenAI's. The adapter handles the translation transparently — your client code receives the same message delta format regardless of which provider is active.
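To make that concrete from the client's side (a hypothetical sketch — `MessageDelta` and `accumulate` are illustrative names, not AINative's public API), code that consumes the unified deltas works identically whichever provider the server handler wraps:

```typescript
// Hypothetical client-side sketch: the delta shape is the same
// whether the server handler wraps OpenAI or Anthropic.
type MessageDelta = { type: "text-delta"; text: string };

// Accumulate a stream of unified deltas into the full message text.
function accumulate(deltas: Iterable<MessageDelta>): string {
  let text = "";
  for (const d of deltas) {
    if (d.type === "text-delta") text += d.text;
  }
  return text;
}
```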
```ts
createAINativeHandler({
  provider: "anthropic",
  model: "claude-3-5-sonnet-20241022",
  systemPrompt: "…",
})
```

## Ollama — local, private inference
Ollama lets you run open-weight models locally with no API key required. This is ideal for development, privacy-sensitive applications, or air-gapped environments. AINative's Ollama adapter talks to the local Ollama daemon over HTTP.
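The shape of that HTTP exchange can be sketched as follows. The endpoint and body follow Ollama's documented `/api/chat` API; `buildOllamaRequest` itself is a hypothetical helper for illustration, not AINative code:

```typescript
// Sketch of the kind of request an adapter might send to a local
// Ollama daemon. The /api/chat endpoint and body fields are Ollama's
// real HTTP API; buildOllamaRequest is a hypothetical helper.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildOllamaRequest(
  model: string,
  messages: ChatMessage[],
  baseUrl = "http://localhost:11434", // Ollama's default port
) {
  return {
    url: `${baseUrl}/api/chat`,
    body: JSON.stringify({ model, messages, stream: true }),
  };
}

// Usage (requires a running daemon):
//   const req = buildOllamaRequest("llama3", [{ role: "user", content: "Hi" }]);
//   await fetch(req.url, { method: "POST", body: req.body });
```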
```ts
createAINativeHandler({
  provider: "ollama",
  model: "llama3",
  baseUrl: "http://localhost:11434", // default Ollama port
})
```

## Provider comparison
| Provider | Streaming | Tool Calls | Local | Key Required |
|---|---|---|---|---|
| OpenAI | ✓ | ✓ | ✗ | ✓ |
| Anthropic | ✓ | ✓ | ✗ | ✓ |
| Ollama | ✓ | Coming soon | ✓ | ✗ |
## What's coming next
The provider roadmap includes Google Gemini, Mistral, and AWS Bedrock. Community PRs for additional providers are welcome — the adapter interface is documented in CONTRIBUTING.md.