The Thesys C1 API is fully compatible with the OpenAI API, making it easy to integrate into your existing applications. You can use any OpenAI SDK or make direct HTTP requests to interact with our models.
Base URL
When configuring an OpenAI SDK, use the following base URL:
https://api.thesys.dev/v1/embed
Chat completion requests are sent to:
https://api.thesys.dev/v1/embed/chat/completions
Authentication
All requests require an API key to be included in the Authorization header:
Authorization: Bearer YOUR_API_KEY
Examples
Manually calling the API via cURL:
curl -X POST https://api.thesys.dev/v1/embed/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "thesys-c1",
    "messages": [
      {
        "role": "user",
        "content": "Create a simple contact form"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 1500
  }'
Using the OpenAI Python library (recommended):
from openai import OpenAI

# Initialize the client with the Thesys C1 endpoint
client = OpenAI(
    api_key="YOUR_THESYS_API_KEY",
    base_url="https://api.thesys.dev/v1/embed",
)

response = client.chat.completions.create(
    model="thesys-c1",
    messages=[
        {"role": "user", "content": "Create a simple contact form"}
    ],
    temperature=0.7,
    max_tokens=1500,
)

print(response.choices[0].message.content)
Using the OpenAI JavaScript library (recommended):
import OpenAI from 'openai';

// Initialize the client with the Thesys C1 endpoint
const openai = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.thesys.dev/v1/embed',
});

const completion = await openai.chat.completions.create({
  model: 'thesys-c1',
  messages: [
    { role: 'user', content: 'Create a simple contact form' }
  ],
  temperature: 0.7,
  max_tokens: 1500,
});

console.log(completion.choices[0].message.content);
Streaming Responses
The C1 API also supports streaming responses, just like the OpenAI API:
Python:
stream = client.chat.completions.create(
    model="thesys-c1",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
JavaScript:
const stream = await openai.chat.completions.create({
  model: 'thesys-c1',
  messages: [{ role: 'user', content: 'Hello' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
Parameters
The C1 API supports most standard OpenAI chat completion parameters (a combined example follows the list):
model: The model to use. See Models for more information.
messages: Array of message objects with role and content
temperature: Controls randomness (0.0 to 1.0)
max_tokens: Maximum tokens in the response
stream: Whether to stream the response
top_p: Nucleus sampling parameter
parallel_tool_calls: Whether to call tools in parallel
stop: Stop sequences
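For example, a single request combining several of these parameters, using the client configured in the JavaScript example above (the values below are illustrative):

const completion = await openai.chat.completions.create({
  model: 'thesys-c1',
  messages: [
    { role: 'user', content: 'Create a pricing table with three tiers' }
  ],
  temperature: 0.4,    // lower temperature for more predictable output
  max_tokens: 2000,    // cap the length of the generated response
  top_p: 0.9,          // nucleus sampling
  stop: ['END'],       // stop generating when this sequence appears
  stream: false,       // set to true to stream the response instead
});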
Responses
The C1 API returns its response as an XML document that represents a JSON object. It can be rendered by the <C1Component /> component, or is handled automatically by the <C1Chat /> component if you are building a conversational application. See the integration guide for more information.
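As an illustration only, here is a minimal sketch of handing a non-streaming response to the renderer in a React app; the package name, import, and prop used here are assumptions, so refer to the integration guide for the exact API:

// NOTE: the package name and the c1Response prop are assumptions for
// illustration; the integration guide documents the real import and props.
import { C1Component } from '@thesysai/genui-sdk';

export function GeneratedUI({ responseContent }: { responseContent: string }) {
  // responseContent is the assistant message content returned by the C1 API
  return <C1Component c1Response={responseContent} />;
}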
Errors
The API returns standard HTTP status codes and error responses compatible with OpenAI’s format:
{
  "error": {
    "message": "Invalid API key provided",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}
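Because the error format follows OpenAI's, the official SDKs surface these errors through their usual error classes. A minimal sketch of handling a failed request with the JavaScript SDK:

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.THESYS_API_KEY,
  baseURL: 'https://api.thesys.dev/v1/embed',
});

try {
  const completion = await client.chat.completions.create({
    model: 'thesys-c1',
    messages: [{ role: 'user', content: 'Create a simple contact form' }],
  });
  console.log(completion.choices[0].message.content);
} catch (err) {
  if (err instanceof OpenAI.APIError) {
    // Standard OpenAI-style error fields: HTTP status, error code, message
    console.error(`Request failed (${err.status}): ${err.code ?? 'unknown'} - ${err.message}`);
  } else {
    throw err;
  }
}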