To enable an assistant to create an artifact, you must provide it with a tool that it can call in response to a user’s request. Your backend will then handle this tool call by invoking the C1 Artifacts API and streaming the result back to the user. This guide covers the entire creation workflow, from defining the tool to handling the API calls.

Step 1: Define the create_artifact Tool

First, define the schema for a tool that the assistant can use to create an artifact. This schema outlines the parameters the LLM needs to extract from the user’s instructions. In this example, we’ll create a create_presentation tool.
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const createPresentationTool = {
  type: "function",
  function: {
    name: "create_presentation",
    description: "Creates a slide presentation based on a topic.",
    parameters: zodToJsonSchema(
      z.object({
        instructions: z.string().describe("The instructions to generate the presentation."),
      })
    ),
  },
};
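This schema is passed in the `tools` array of your main LLM call; when the model decides to use it, the tool call arrives with its arguments as a JSON-encoded string. As a minimal sketch (the `ToolCall` interface and `parsePresentationArgs` helper below are illustrative, not part of any SDK), extracting those arguments looks like:

```typescript
// Minimal shape of a tool call as returned by an OpenAI-compatible API.
interface ToolCall {
  function: { name: string; arguments: string };
}

// Hypothetical helper: parse the JSON-encoded arguments for create_presentation.
function parsePresentationArgs(
  toolCall: ToolCall
): { instructions: string } | null {
  if (toolCall.function.name !== "create_presentation") return null;
  return JSON.parse(toolCall.function.arguments) as { instructions: string };
}

// Example: a tool call as the LLM might emit it.
const args = parsePresentationArgs({
  function: {
    name: "create_presentation",
    arguments: '{"instructions":"A five-slide deck on TypeScript generics"}',
  },
});
```

Note that `arguments` is always a string, not an object, so it must be parsed before you dispatch to your handler.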

Step 2: Handle the Tool Call in Your Backend

When the LLM decides to use your create_presentation tool, your backend will receive the tool call. Your code must then orchestrate the process of generating the artifact and streaming it back as part of the assistant’s final message. The key steps are:
  1. Generate a unique artifactId and messageId.
  2. Call the C1 Artifacts API, passing the artifactId.
  3. Stream the artifact content into a C1 Response.
  4. Return a tool_result to the main LLM, including the artifactId and messageId (as the version).
  5. Stream the LLM’s final text confirmation into the same C1 Response.
  6. Store the response in the database as the assistant message.
// (Inside your Next.js API route)
import { nanoid } from "nanoid";
import { makeC1Response } from "@thesysai/genui-sdk/server";

// When your tool handler is invoked by the LLM...
async function handleCreatePresentation(
  // The following parameters are extracted by the LLM:
  { instructions }: { instructions: string },
  // The following parameters are passed by your backend:
  { messageId, c1Response }: { messageId: string; c1Response: ReturnType<typeof makeC1Response> }
) {
  // 1. Generate a unique artifactId.
  const artifactId = nanoid();

  // 2. Call the Artifacts API and stream the result.
  // `c1ArtifactsClient` is an OpenAI-compatible client configured for the C1 Artifacts API.
  const artifactStream = await c1ArtifactsClient.chat.completions.create({
    model: "c1/artifact/v-20251030",
    messages: [{ role: "user", content: instructions }],
    metadata: { thesys: JSON.stringify({ c1_artifact_type: "slides", id: artifactId }) },
    stream: true,
  });

  // 3. Pipe the artifact stream into the C1 Response object.
  for await (const delta of artifactStream) {
    const content = delta.choices[0]?.delta?.content;
    if (content) {
      c1Response.writeContent(content);
    }
  }

  // 4. Return the result to the main LLM so it knows the tool succeeded.
  return `Presentation created with artifact_id: ${artifactId}, version: ${messageId}`;
}
// After the tool result is sent, the main LLM will generate its final text response.
// You will then pipe that final text stream into the same `c1Response` object
// before returning `c1Response.responseStream` from your API route.
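The same drain-the-deltas loop works for both the artifact stream and the final text stream. A minimal sketch of that pattern, with the delta and writer shapes reduced to illustrative interfaces (the real SDK types may differ):

```typescript
// Simplified delta shape of an OpenAI-compatible streaming completion.
interface StreamDelta {
  choices: Array<{ delta?: { content?: string } }>;
}

// Anything with writeContent, such as the object from makeC1Response().
interface ContentWriter {
  writeContent(chunk: string): void;
}

// Drain a streaming completion into the writer, returning the full text
// (handy for persisting the assistant message in step 6).
async function pipeTextStream(
  stream: AsyncIterable<StreamDelta>,
  writer: ContentWriter
): Promise<string> {
  let full = "";
  for await (const delta of stream) {
    const content = delta.choices[0]?.delta?.content;
    if (content) {
      writer.writeContent(content);
      full += content;
    }
  }
  return full;
}
```

Accumulating the full text as you stream means you can write the assistant message to your database without re-reading the stream, which can only be consumed once.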

Step 3: The Final Assistant Response

After your backend returns the successful tool_result, the main LLM generates a final, user-facing response (e.g., “I’ve created the presentation for you. You can see it below.”). Your backend code must stream this final text into the same C1 Response object. The result is a single, composite assistant message that contains both the confirmation text and the fully rendered artifact, which is then sent to the frontend.

A Note on the version

In this guide, we use the unique messageId of the assistant’s response as the version. This creates a clear link between a specific version of the artifact and the message that contains it. The importance of this version will become clear in the next guide, where we use it to edit the artifact.
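If you prefer a structured tool result over the plain string shown in Step 2, a hypothetical helper (the field names here are illustrative, not required by the API) could encode the link explicitly:

```typescript
// Hypothetical helper: build a structured tool_result payload that links
// the artifact to the assistant message containing it.
function buildToolResult(artifactId: string, messageId: string): string {
  return JSON.stringify({
    status: "success",
    artifact_id: artifactId,
    // The assistant messageId doubles as the artifact version.
    version: messageId,
  });
}

const result = buildToolResult("art_123", "msg_456");
```

Whichever shape you choose, the important property is that the main LLM receives both identifiers, so it can refer to this exact artifact version in later turns.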

View the code

Find more examples and complete code on our GitHub repository.