Create a chat API endpoint that handles streaming responses from the C1 model.
Install required dependencies
Install the necessary packages for your backend API:
npm install openai @crayonai/stream
Create the message store
First, create a simple in-memory message store to manage conversation history:
This in-memory store keeps the complete message list for a given threadId, including messages that are never sent to the client, such as tool call messages.
app/api/chat/messageStore.ts
import OpenAI from "openai";

// A stored message is an OpenAI-compatible message plus an optional id
// used to identify it on the client.
export type DBMessage = OpenAI.Chat.ChatCompletionMessageParam & {
  id?: string;
};

const messagesStore: {
  [threadId: string]: DBMessage[];
} = {};

export const getMessageStore = (threadId: string) => {
  if (!messagesStore[threadId]) {
    messagesStore[threadId] = [];
  }
  const messageList = messagesStore[threadId];
  return {
    addMessage: (message: DBMessage) => {
      messageList.push(message);
    },
    // Strip the client-side id so the list matches what the OpenAI
    // client expects.
    getOpenAICompatibleMessageList: () => {
      return messageList.map((m) => {
        const message = { ...m };
        delete message.id;
        return message;
      });
    },
  };
};
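To make the behavior concrete, here is a short illustrative usage of the store (the threadId, ids, and message content are made-up example values, not part of the route code):
// Illustrative usage only; all ids and content are placeholder values.
const store = getMessageStore("thread-123");
store.addMessage({ role: "user", content: "Hello!", id: "msg-1" });
// getOpenAICompatibleMessageList strips the id field before the
// messages are handed to the OpenAI client:
console.log(store.getOpenAICompatibleMessageList());
// => [ { role: "user", content: "Hello!" } ]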
Create the API route file
Create a new file at app/api/chat/route.ts and add the necessary imports:
import { NextRequest, NextResponse } from "next/server";
import OpenAI from "openai";
import { transformStream } from "@crayonai/stream";
import { DBMessage, getMessageStore } from "./messageStore";
Set up the POST handler
Add the main POST function that will handle incoming chat requests:
// ... previous imports ...

export async function POST(req: NextRequest) {
  const { prompt, threadId, responseId } = (await req.json()) as {
    prompt: DBMessage;
    threadId: string;
    responseId: string;
  };

  // More code will go here...
}
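For reference, a request to this endpoint might be issued like this (the field values are illustrative, not prescribed by the API):
// Example request; all ids and content here are placeholder values.
await fetch("/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    prompt: { role: "user", content: "What is C1?", id: "msg-1" },
    threadId: "thread-123",
    responseId: "resp-456",
  }),
});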
Initialize the OpenAI client
Configure the OpenAI client to use the Thesys API:
// ... inside the POST function ...
const client = new OpenAI({
  baseURL: "https://api.thesys.dev/v1/embed/",
  apiKey: process.env.THESYS_API_KEY,
});
Handle message storage
Add the user’s message to the conversation history:
// ... after client initialization ...
const messageStore = getMessageStore(threadId);
messageStore.addMessage(prompt);
Create streaming chat completion
Call the C1 model with the conversation history:
// ... after message storage ...
const llmStream = await client.chat.completions.create({
  model: "c1-nightly",
  messages: messageStore.getOpenAICompatibleMessageList(),
  stream: true,
});
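Each item in llmStream is an OpenAI ChatCompletionChunk whose incremental text is in choices[0].delta.content. If you need to debug the raw stream, you can iterate it directly; note that a stream can only be consumed once, so this sketch is for inspection only and does not belong in the route:
// Debugging sketch only: consuming the stream here would exhaust it.
for await (const chunk of llmStream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}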
Transform the response stream
Convert the OpenAI stream to a format suitable for your frontend:
// ... after llmStream creation ...
const responseStream = transformStream(
  llmStream,
  (chunk) => {
    return chunk.choices[0].delta.content;
  },
  {
    onEnd: ({ accumulated }) => {
      // Persist the complete assistant message once streaming finishes.
      const message = accumulated.filter((message) => message).join("");
      messageStore.addMessage({
        role: "assistant",
        content: message,
        id: responseId,
      });
    },
  }
) as ReadableStream;
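Conceptually, this step maps each chunk to its delta text, forwards that text to the client, and accumulates it so the complete assistant message can be persisted in onEnd. A simplified sketch of that operation follows; this is not @crayonai/stream's actual implementation, just the general shape of the technique:
// Simplified sketch, not the library's real code.
function transformSketch<T>(
  source: AsyncIterable<T>,
  pick: (chunk: T) => string | null | undefined,
  onEnd: (args: { accumulated: (string | null | undefined)[] }) => void
): ReadableStream<Uint8Array> {
  const accumulated: (string | null | undefined)[] = [];
  const encoder = new TextEncoder();
  return new ReadableStream({
    async start(controller) {
      for await (const chunk of source) {
        const text = pick(chunk);
        accumulated.push(text);
        if (text) controller.enqueue(encoder.encode(text));
      }
      onEnd({ accumulated });
      controller.close();
    },
  });
}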
Return the streaming response
Return the response with proper headers for server-sent events:
// ... after responseStream creation ...
return new NextResponse(responseStream, {
  headers: {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache, no-transform",
    Connection: "keep-alive",
  },
});
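On the client, the streamed body can be read incrementally with the Fetch API. A minimal consumption sketch follows; the function name and callback are this guide's own illustration, not part of any SDK:
// Minimal sketch: read the streamed response and hand each text
// fragment to a callback, e.g. to append it to the UI.
async function readChatStream(
  res: Response,
  onText: (text: string) => void
) {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onText(decoder.decode(value, { stream: true }));
  }
}
Pass it the Response returned by the fetch call shown earlier.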
Set your API key
Make sure to set your Thesys API key as an environment variable:
export THESYS_API_KEY=<your-api-key>
Or add it to your .env.local file:
THESYS_API_KEY=<your-api-key>
Your API endpoint is now ready to handle streaming chat conversations with the C1 model!