Step 1: Define the `create_artifact` Tool
First, define the schema for a tool that the assistant can use to create an artifact. This schema describes the parameters the LLM needs to extract from the user’s instructions. In this example, we’ll create a `create_presentation` tool.
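For illustration, such a schema might look like the following in TypeScript, assuming an OpenAI-compatible function-calling format; the `topic` and `slideCount` parameters are placeholders for whatever your artifact actually needs.

```ts
// A minimal sketch of the tool schema, assuming an OpenAI-compatible
// function-calling format; adapt the shape to your LLM provider.
const createPresentationTool = {
  type: "function" as const,
  function: {
    name: "create_presentation",
    description:
      "Create a presentation artifact from the user's instructions.",
    parameters: {
      type: "object",
      properties: {
        topic: {
          type: "string",
          description: "The subject of the presentation.",
        },
        slideCount: {
          type: "number",
          description: "How many slides to generate.",
        },
      },
      required: ["topic"],
    },
  },
};
```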
Step 2: Handle the Tool Call in Your Backend
When the LLM decides to use your `create_presentation` tool, your backend will receive the tool call. Your code must then orchestrate the process of generating the artifact and streaming it back as part of the assistant’s final message.
The key steps, sketched in code after this list, are:
- Generate a unique `artifactId` and `messageId`.
- Call the C1 Artifacts API, passing the `artifactId`.
- Stream the artifact content into a C1 Response.
- Return a `tool_result` to the main LLM, including the `artifactId` and `messageId` (as the `version`).
- Stream the LLM’s final text confirmation into the same C1 Response.
- Store the response in the database as the assistant message.
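Below is a minimal sketch of such a handler in TypeScript. The `generateArtifact` helper and `C1ResponseStream` interface are hypothetical stand-ins for the real C1 Artifacts API client and response stream; see the linked repository at the end of this guide for the complete, working code.

```ts
import { randomUUID } from "node:crypto";

// Hypothetical stand-ins: `generateArtifact` wraps the C1 Artifacts
// API call, and `C1ResponseStream` represents the in-progress C1
// Response being streamed to the client. Replace both with the
// real calls from the repository linked below.
declare function generateArtifact(opts: {
  artifactId: string;
  instructions: string;
}): Promise<AsyncIterable<string>>;

interface C1ResponseStream {
  writeArtifactChunk(artifactId: string, chunk: string): void;
  writeText(text: string): void;
}

async function handleCreatePresentation(
  args: { topic: string },
  response: C1ResponseStream
) {
  // Step 1: generate unique IDs for the artifact and the message.
  const artifactId = randomUUID();
  const messageId = randomUUID();

  // Steps 2 & 3: call the C1 Artifacts API with the artifactId and
  // stream the artifact content into the C1 Response as it arrives.
  const artifactStream = await generateArtifact({
    artifactId,
    instructions: args.topic,
  });
  for await (const chunk of artifactStream) {
    response.writeArtifactChunk(artifactId, chunk);
  }

  // Step 4: return a tool_result to the main LLM, with the messageId
  // doubling as the artifact's version (see the note below). Steps 5
  // and 6 happen after the LLM consumes this result: its confirmation
  // text is streamed into the same response, which is then stored as
  // the assistant message (see Step 3).
  return { artifactId, version: messageId };
}
```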
Step 3: The Final Assistant Response
After your backend returns the successful `tool_result`, the main LLM generates a final, user-facing response (e.g., “I’ve created the presentation for you. You can see it below.”).
Your backend code must stream this final text into the same C1 Response object. The result is a single, composite assistant message that contains both the confirmation text and the fully rendered artifact, which is then sent to the frontend.
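Continuing the sketch from Step 2 (and reusing its `C1ResponseStream` interface), appending the confirmation and persisting the message might look like this; `finishResponse` and `saveAssistantMessage` are hypothetical helpers, not part of the C1 API.

```ts
// Stream the LLM's confirmation text into the same C1 Response,
// then persist the composite message (confirmation + artifact).
async function finishResponse(
  response: C1ResponseStream,
  finalTextStream: AsyncIterable<string>,
  messageId: string
) {
  for await (const token of finalTextStream) {
    response.writeText(token); // confirmation text joins the artifact
  }
  await saveAssistantMessage(messageId, response); // store in the DB
}

// Hypothetical database helper for storing the assistant message.
declare function saveAssistantMessage(
  messageId: string,
  response: C1ResponseStream
): Promise<void>;
```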
A Note on the `version`
In this guide, we use the unique `messageId` of the assistant’s response as the `version`. This creates a clear link between a specific version of the artifact and the message that contains it. The importance of this `version` will become clear in the next guide, where we use it to edit the artifact.
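Concretely, the `tool_result` from Step 2 might carry that link like this (a sketch; the field names are assumptions):

```ts
// Illustrative tool_result payload: the assistant message's ID is
// reused as the artifact's version, tying this exact artifact state
// to the message that renders it.
function buildToolResult(artifactId: string, messageId: string) {
  return { artifactId, version: messageId };
}
```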
View the code
Find more examples and complete code on our GitHub repository.