AXON
MODEL
FAMILY.
Enterprise-grade AI models built for coding agents, tool calling, and advanced reasoning. Available via API and integrated directly in Orbital.
Axon 2.1 Pro High (axon-2-pro-high)
MOST CAPABLE. Highest intelligence for the most demanding long-running agent tasks, complex reasoning, and advanced tool calling.

Axon 2.1 Pro (axon-2-pro)
RECOMMENDED. High-intelligence model for long-running agent tasks, tool calling, coding, and general-purpose use.

Axon 2.1 (axon-2)
High-intelligence model for long-running agent tasks, tool calling, coding, and general-purpose use.

Axon Code 2.1 Pro High (axon-code-2-pro-high)
CODE OPTIMISED. Code-optimised variant of Axon 2.1 Pro High for long-running agent coding tasks and tool use.

Axon Code 2.1 Pro (axon-code-2-pro)
CODE OPTIMISED. Code-optimised variant of Axon 2.1 Pro for long-running agent coding tasks and tool use.

Axon 1.3 (axon)
General-purpose super-intelligent LLM for high-effort day-to-day tasks.

Axon Mini (axon-mini)
FASTEST. Fast general-purpose coding model for low-effort day-to-day tasks.

PROMPT
CACHING.
Cached input tokens are discounted at 50% off the standard input price. Caching kicks in automatically for repeated prompt prefixes.
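As a sketch of how the discount plays out, the snippet below computes the input cost for a request where part of the prompt hit the cache. The per-token price is a hypothetical placeholder, not Axon's actual pricing:

```python
def input_cost_usd(input_tokens: int, cached_tokens: int, price_per_mtok: float) -> float:
    """Input cost when cached tokens are billed at 50% of the standard input price."""
    uncached = input_tokens - cached_tokens
    return (uncached + 0.5 * cached_tokens) * price_per_mtok / 1_000_000

# Hypothetical $2.00 per million input tokens:
# a fully cached 100k-token prefix costs half the uncached price.
print(input_cost_usd(100_000, 0, 2.00))        # 0.2
print(input_cost_usd(100_000, 100_000, 2.00))  # 0.1
```

With the OpenAI-compatible API below, cached token counts typically appear in the response's usage object, assuming the API reports them.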
USE WITH
ANY SDK.
Fully OpenAI-compatible. Drop in your existing OpenAI client with just a base URL change.
from openai import OpenAI
client = OpenAI(
api_key="YOUR_MATTERAI_KEY",
base_url="https://api.matterai.so/v1",
)
response = client.chat.completions.create(
model="axon-2-pro",
messages=[{"role": "user", "content": "Hello"}],
)

import OpenAI from "openai";
const client = new OpenAI({
apiKey: "YOUR_MATTERAI_KEY",
baseURL: "https://api.matterai.so/v1",
});
const response = await client.chat.completions.create({
model: "axon-2-pro",
messages: [{ role: "user", content: "Hello" }],
});