Integrations · Apr 18, 2026 · 6 min read

Provider support: OpenAI, Anthropic, and Ollama

The AINative server adapter currently supports three LLM providers out of the box. Here is how each integration works in practice, and where the provider roadmap goes from here.

AINative Studio
Engineering

One of the key promises of AINative is that your frontend should not need to change when you swap LLM providers. The server adapter normalises streaming formats, tool-call schemas, and error codes across all supported providers. Today that means OpenAI, Anthropic, and Ollama.
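
To make that concrete, here is a minimal sketch of the pattern, assuming you drive the handler from environment variables (the env-var names are our own illustration, not an AINative convention):

```typescript
// Pick the provider at deploy time; the client bundle never changes.
const provider = (process.env.LLM_PROVIDER ?? "openai") as
  | "openai"
  | "anthropic"
  | "ollama";

export const handler = createAINativeHandler({
  provider,
  model: process.env.LLM_MODEL ?? "gpt-4o",
  systemPrompt: "You are a helpful assistant.",
});
```
One env var flips the provider (illustrative)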

OpenAI

OpenAI support uses the official openai Node SDK under the hood. We support chat completions, streaming, function calling, and the structured outputs (JSON Schema) feature OpenAI released in mid-2024. GPT-4o is the default model.

```typescript
createAINativeHandler({
  provider: "openai",
  model: "gpt-4o",              // or "gpt-4o-mini", "gpt-4-turbo"
  systemPrompt: "…",
  tools: [mySearchTool],         // optional tool array
  temperature: 0.7,
  maxTokens: 2048,
})
```
OpenAI provider configuration
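
For reference, structured outputs in the underlying openai Node SDK look roughly like this (the `response_format` fields are from the OpenAI API; whether and how the adapter exposes them is up to your handler config):

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Structured outputs: the reply is constrained to this JSON Schema.
const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "Extract the city and country: 'I live in Paris, France.'" },
  ],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "location",
      strict: true,
      schema: {
        type: "object",
        properties: {
          city: { type: "string" },
          country: { type: "string" },
        },
        required: ["city", "country"],
        additionalProperties: false,
      },
    },
  },
});

console.log(completion.choices[0].message.content); // {"city":"Paris","country":"France"}
```
Structured outputs in the raw openai SDK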

Anthropic

Anthropic's Claude models use a different streaming protocol than OpenAI's. The adapter handles the translation transparently — your client code receives the same message delta format regardless of which provider is active.

```typescript
createAINativeHandler({
  provider: "anthropic",
  model: "claude-3-5-sonnet-20241022",
  systemPrompt: "…",
})
```
Switching to Claude 3.5 Sonnet requires one line
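
To see what the adapter is smoothing over: Anthropic streams typed SSE events such as `content_block_delta`, while OpenAI streams chunks shaped like `choices[0].delta`. A rough sketch of the kind of mapping involved (the unified delta type is our own illustration, not AINative's actual wire format):

```typescript
// The single delta shape the client sees, whichever provider is active (illustrative).
type UnifiedDelta = { type: "text"; text: string };

// Anthropic Messages API stream events -> unified deltas.
function fromAnthropicEvent(event: {
  type: string;
  delta?: { type: string; text?: string };
}): UnifiedDelta | null {
  if (event.type === "content_block_delta" && event.delta?.type === "text_delta") {
    return { type: "text", text: event.delta.text ?? "" };
  }
  return null; // ignore message_start, content_block_start, etc.
}

// OpenAI chat completions stream chunks -> unified deltas.
function fromOpenAIChunk(chunk: {
  choices: Array<{ delta: { content?: string | null } }>;
}): UnifiedDelta | null {
  const text = chunk.choices[0]?.delta?.content;
  return text ? { type: "text", text } : null;
}
```
What the adapter normalises (illustrative mapping)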

Ollama — local, private inference

Ollama lets you run open-weight models locally with no API key required. This is ideal for development, privacy-sensitive applications, or air-gapped environments. AINative's Ollama adapter talks to the local Ollama daemon over HTTP.

```typescript
createAINativeHandler({
  provider: "ollama",
  model: "llama3",
  baseUrl: "http://localhost:11434", // default Ollama port
})
```
Ollama provider — run llama3 locally
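
Before pointing the handler at the daemon, it can help to confirm Ollama is actually up and the model has been pulled. A quick check against Ollama's REST API (`GET /api/tags` lists locally available models):

```typescript
// Ping the local Ollama daemon and verify llama3 is available.
const res = await fetch("http://localhost:11434/api/tags");
if (!res.ok) throw new Error("Ollama daemon is not running on :11434");

const { models } = (await res.json()) as { models: Array<{ name: string }> };
if (!models.some((m) => m.name.startsWith("llama3"))) {
  console.warn("llama3 not found locally; run `ollama pull llama3` first");
}
```
Checking the Ollama daemon before starting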

Provider comparison

| Provider  | Streaming | Tool Calls  | Local | Key Required |
|-----------|-----------|-------------|-------|--------------|
| OpenAI    | Yes       | Yes         | No    | Yes          |
| Anthropic | Yes       | Yes         | No    | Yes          |
| Ollama    | Yes       | Coming soon | Yes   | No           |

What is coming next

The provider roadmap includes Google Gemini, Mistral, and AWS Bedrock. Community PRs for additional providers are welcome — the adapter interface is documented in CONTRIBUTING.md.
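
For a rough sense of what a provider adapter has to cover, here is an illustrative sketch (our own, not the actual interface from CONTRIBUTING.md): every provider needs to normalise streaming, tool schemas, and errors.

```typescript
// Illustrative sketch only: not the actual interface from CONTRIBUTING.md.
type ChatRequest = {
  model: string;
  messages: Array<{ role: "system" | "user" | "assistant"; content: string }>;
};
type ToolDefinition = { name: string; description: string; parameters: unknown };
type UnifiedDelta =
  | { type: "text"; text: string }
  | { type: "tool_call"; name: string; args: string };

interface ProviderAdapter {
  /** Stream the model's reply as unified deltas, whatever the upstream wire format. */
  stream(req: ChatRequest): AsyncIterable<UnifiedDelta>;
  /** Translate the shared tool schema into the provider's native format. */
  formatTools(tools: ToolDefinition[]): unknown;
  /** Map provider-specific failures onto shared error codes. */
  normaliseError(err: unknown): { code: string; message: string };
}
```
A sketch of the adapter surface (illustrative)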