Overview

Generate text responses from language models using the OpenAI-compatible chat completions API. Supports streaming, vision, tool calling, and structured output.

Minimal Example

from openai import OpenAI

# Point the official OpenAI client at the Oxen.ai endpoint
client = OpenAI(
    base_url="https://hub.oxen.ai/api",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is Oxen.ai?"}],
    max_tokens=200,
)

print(response.choices[0].message.content)
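
With Vision

The same endpoint accepts multimodal input: the message's "content" becomes a list of typed parts instead of a plain string, following the OpenAI chat format. The sketch below shows only the payload shape; the image URL is a placeholder, and you would pass the messages to client.chat.completions.create with a vision-capable model.

```python
# A multimodal message: "content" is a list of typed parts.
# The image URL below is a placeholder -- substitute your own.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {
                "type": "image_url",
                "image_url": {"url": "https://example.com/photo.jpg"},
            },
        ],
    }
]

# Pass as-is to the same endpoint, e.g.:
# response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
```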

With Streaming

from openai import OpenAI

client = OpenAI(
    base_url="https://hub.oxen.ai/api",
    api_key="YOUR_API_KEY",
)

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a haiku about data"}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental delta; content is None on some
    # chunks (e.g. the role-only first chunk and the final chunk).
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="", flush=True)
print()
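
With Tool Calling

Tools are declared with the OpenAI function-calling schema and passed via the tools parameter. The function name and parameters below are illustrative, not part of the Oxen.ai API; when the model decides to call a tool, the reply carries tool_calls instead of text content.

```python
# A tool definition in the OpenAI function-calling schema.
# The function name and parameters are illustrative placeholders.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# response = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": "What's the weather in Paris?"}],
#     tools=tools,
# )
# The model may respond with response.choices[0].message.tool_calls
# rather than response.choices[0].message.content.
```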

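With Structured Output

To constrain responses to a fixed shape, pass a JSON schema through the response_format parameter, following the OpenAI structured-outputs format. The schema below is a made-up example; the model then returns a message whose content parses as JSON matching it.

```python
# response_format in the OpenAI "json_schema" shape.
# The schema name and fields are illustrative placeholders.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "repo_summary",
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "stars": {"type": "integer"},
            },
            "required": ["name", "stars"],
            "additionalProperties": False,
        },
    },
}

# response = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": "Summarize this repo as JSON."}],
#     response_format=response_format,
# )
# json.loads(response.choices[0].message.content) then matches the schema.
```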
What’s Next