Build an intelligent infinite canvas where AI generates rich visual cards based on your prompts. This guide teaches you how to integrate tldraw with C1 to create context-aware cards that understand existing canvas content.
Try the live demo at canvas-with-c1.vercel.app

What You’ll Learn

  • Integrating tldraw infinite canvas with C1
  • Creating custom shape utils for AI-generated content
  • Building context-aware systems that understand selected shapes
  • Real-time streaming content into canvas cards
  • Implementing keyboard shortcuts for quick access
  • Adding image search to enrich visual content

Architecture Overview

AI canvas apps combine visual collaboration tools with generative UI:
User Prompt → Extract Context from Selected Cards → LLM + C1 → Generate Visual Card → Stream to Canvas Shape
When you select existing cards and create a new one, the AI sees the content of selected cards and generates contextually relevant responses. Each card is a resizable, repositionable C1 component.
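To make the flow concrete, the payload the canvas sends to the API (Step 4's route handler reads `prompt` and `context` from the JSON body) can be sketched as a tiny builder; the helper name here is illustrative, not part of the guide's code:

```typescript
// Shape of the body posted to /api/ask (field names match Step 4's handler).
interface AskRequestBody {
  prompt: string;
  context?: string; // serialized C1 responses from selected cards, if any
}

// Build the JSON body, omitting context when no cards are selected.
function buildAskRequestBody(prompt: string, context?: string): string {
  const body: AskRequestBody = context ? { prompt, context } : { prompt };
  return JSON.stringify(body);
}
```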

Setup

Prerequisites

  • Node.js 18+
  • Thesys API key from console.thesys.dev
  • (Optional) Google Custom Search API key and CSE ID for image search

Create Next.js Project

npm
npx create-next-app@latest canvas-with-c1
cd canvas-with-c1
When prompted, select:
  • TypeScript: Yes
  • ESLint: Yes
  • Tailwind CSS: Yes
  • App Router: Yes
  • Customize default import alias: No

Install Dependencies

npm
npm install @thesysai/genui-sdk @crayonai/react-ui openai tldraw zod zod-to-json-schema
Optional (for image search):
npm
npm install google-images

Environment Variables

Create a .env.local file:
THESYS_API_KEY=your_thesys_api_key

# Optional: For image search
GOOGLE_API_KEY=your_google_api_key
GOOGLE_CSE_ID=your_custom_search_engine_id
Get your Thesys API key from console.thesys.dev. For Google image search, follow the Custom Search API guide.

Step 1: Set Up tldraw Canvas

Start by creating the infinite canvas workspace with tldraw:
app/page.tsx
"use client";

import "@crayonai/react-ui/styles/index.css";
import "tldraw/tldraw.css";
import { Tldraw } from "tldraw";
import { shapeUtils } from "./shapeUtils";
import { PromptInput } from "./components/PromptInput";
import { FOCUS_PROMPT_EVENT } from "./events";

export default function Page() {
  return (
    <div style={{ position: "fixed", inset: 0 }}>
      <Tldraw
        shapeUtils={shapeUtils}
        persistenceKey="c1-canvas"
      >
        <PromptInput focusEventName={FOCUS_PROMPT_EVENT} />
      </Tldraw>
    </div>
  );
}
The persistenceKey saves canvas state to localStorage so users don’t lose their work on refresh.

Step 2: Create C1 Component Shape

Define a custom tldraw shape that wraps C1 components:
shapes/C1ComponentShape.tsx
import type { TLBaseShape } from "tldraw";

export type C1ComponentShapeProps = {
  w: number;
  h: number;
  c1Response?: string;
  isStreaming?: boolean;
  prompt?: string;
};

export type C1ComponentShape = TLBaseShape<
  "c1-component",
  C1ComponentShapeProps
>;
shapeUtils/C1ComponentShapeUtil.tsx
import { HTMLContainer, Rectangle2d, ShapeUtil, resizeBox } from "tldraw";
import type { TLResizeInfo } from "tldraw";
import { C1Component } from "@crayonai/react-ui";
import type { C1ComponentShape } from "../shapes/C1ComponentShape";

export class C1ComponentShapeUtil extends ShapeUtil<C1ComponentShape> {
  static override type = "c1-component" as const;

  getDefaultProps(): C1ComponentShape["props"] {
    return {
      w: 600,
      h: 300,
      c1Response: "",
      isStreaming: false,
    };
  }

  // tldraw needs a geometry to hit-test, select, and snap the shape
  getGeometry(shape: C1ComponentShape) {
    return new Rectangle2d({
      width: shape.props.w,
      height: shape.props.h,
      isFilled: true,
    });
  }

  // Opt in to tldraw's resize handles
  override canResize() {
    return true;
  }

  override onResize(shape: C1ComponentShape, info: TLResizeInfo<C1ComponentShape>) {
    return resizeBox(shape, info);
  }

  component(shape: C1ComponentShape) {
    return (
      <HTMLContainer>
        <div style={{ width: shape.props.w, height: shape.props.h }}>
          {shape.props.isStreaming && !shape.props.c1Response ? (
            <div>Loading...</div>
          ) : (
            <C1Component
              c1Response={shape.props.c1Response || ""}
              streamingStatus={
                shape.props.isStreaming ? "streaming" : "complete"
              }
            />
          )}
        </div>
      </HTMLContainer>
    );
  }

  indicator(shape: C1ComponentShape) {
    return (
      <rect
        width={shape.props.w}
        height={shape.props.h}
        fill="none"
        stroke="currentColor"
        strokeWidth={2}
      />
    );
  }
}
This creates a resizable, repositionable shape that renders C1 components on the canvas.

Step 3: Extract Context from Selected Shapes

When users select existing cards, extract their content to provide context to the AI:
utils/shapeContext.ts
import type { Editor } from "tldraw";
import type { C1ComponentShapeProps } from "@/app/shapes/C1ComponentShape";

export function extractC1ShapeContext(editor: Editor): string {
  const selectedShapes = editor.getSelectedShapes();

  const c1Shapes = selectedShapes.filter(
    (shape) => shape.type === "c1-component"
  );

  const c1Responses = c1Shapes
    .map((shape) => (shape.props as C1ComponentShapeProps).c1Response)
    .filter((response) => response)
    .join("\n");

  return JSON.stringify(c1Responses);
}
Why this matters: When a user selects “Tesla Q3 earnings” and “TSLA stock price” cards, then asks “Compare these”, the AI sees both cards’ content and generates a comparison.
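Stripped of tldraw specifics, the filter-and-join logic above boils down to a pure function. This stand-in (plain objects instead of tldraw shapes; names are illustrative) shows what the context string ends up containing:

```typescript
type CardLike = { type: string; props: { c1Response?: string } };

// Keep only c1-component shapes with non-empty responses and join them,
// mirroring extractC1ShapeContext from the guide.
function contextFromShapes(shapes: CardLike[]): string {
  const responses = shapes
    .filter((s) => s.type === "c1-component")
    .map((s) => s.props.c1Response)
    .filter((r): r is string => Boolean(r));
  return JSON.stringify(responses.join("\n"));
}
```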

Step 4: Create the API Endpoint

Build the backend endpoint that generates C1 responses for canvas cards:
app/api/ask/route.ts
import { NextRequest } from "next/server";
import OpenAI from "openai";
import { makeC1Response } from "@thesysai/genui-sdk/server";

const client = new OpenAI({
  baseURL: "https://api.thesys.dev/v1/embed",
  apiKey: process.env.THESYS_API_KEY,
});

const SYSTEM_PROMPT = `You are a helpful assistant that generates cards for a visual canvas.

<rules>
  - Generate short, focused cards - don't pack everything into one card
  - Create visually rich layouts with charts, images, and mini-components
  - For comparisons, use tables and side-by-side layouts
  - Integrate relevant images to make cards engaging
  - Do not use accordions or add follow-up questions
</rules>`;

export async function POST(req: NextRequest) {
  const { prompt, context } = await req.json();
  const c1Response = makeC1Response();

  c1Response.writeThinkItem({
    title: "Processing your request...",
    description: "Analyzing input and preparing visual content.",
  });

  const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [];

  // If context exists, combine it with the prompt
  if (context) {
    messages.push({
      role: "user",
      content: JSON.stringify({ prompt, context }),
    });
  } else {
    messages.push({
      role: "user",
      content: prompt,
    });
  }

  const llmStream = await client.beta.chat.completions.runTools({
    model: "c1/anthropic/claude-sonnet-4/v-20250930",
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      ...messages
    ],
    stream: true,
    tools: [], // We'll add image search tool in Step 6
  });

  llmStream.on("content", c1Response.writeContent);
  llmStream.on("end", () => c1Response.end());

  return new Response(c1Response.responseStream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      Connection: "keep-alive",
    },
  });
}
The system prompt is critical for canvas apps. Emphasize short, visually rich cards rather than long text blocks.

Step 5: Shape Creation and Management

Create a manager to handle the lifecycle of canvas shapes:
utils/c1ShapeManager.ts
import type { Editor, TLShapeId } from "tldraw";
import { createShapeId } from "tldraw";
import { extractC1ShapeContext } from "./shapeContext";

export async function createC1ComponentShape(
  editor: Editor,
  options: {
    searchQuery: string;
    width?: number;
    height?: number;
  }
): Promise<TLShapeId> {
  const { searchQuery, width = 600, height = 300 } = options;

  // Generate unique shape ID
  const shapeId = createShapeId();

  // Extract context from selected shapes
  const context = extractC1ShapeContext(editor);

  // Calculate optimal position (center of viewport)
  const viewport = editor.getViewportPageBounds();
  const position = {
    x: viewport.center.x - width / 2,
    y: viewport.center.y - height / 2,
  };

  // Create the shape
  editor.createShape({
    id: shapeId,
    type: "c1-component",
    x: position.x,
    y: position.y,
    props: {
      w: width,
      h: height,
      prompt: searchQuery,
    },
  });

  // Call API and stream updates
  const response = await fetch("/api/ask", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      prompt: searchQuery,
      context,
    }),
  });

  if (!response.ok || !response.body) {
    throw new Error(`Request to /api/ask failed: ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let fullResponse = "";

  // Mark shape as streaming
  editor.updateShape({
    id: shapeId,
    type: "c1-component",
    props: { isStreaming: true },
  });

  // Stream updates until the response ends
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    const chunk = decoder.decode(value, { stream: true });
    fullResponse += chunk;

    editor.updateShape({
      id: shapeId,
      type: "c1-component",
      props: { c1Response: fullResponse, isStreaming: true },
    });
  }

  // Mark streaming complete
  editor.updateShape({
    id: shapeId,
    type: "c1-component",
    props: { isStreaming: false },
  });

  return shapeId;
}
This handles:
  1. Extracting context from selected cards
  2. Positioning the new card in the viewport
  3. Creating the shape
  4. Streaming C1 content as it generates
  5. Marking completion
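The read loop in step 4 of this list can be factored into a small generic helper that accumulates decoded chunks and reports each intermediate snapshot. This is a sketch, not part of the guide's code; in the manager above, the `onUpdate` callback would call `editor.updateShape()`:

```typescript
// Read a text byte stream to completion, invoking onUpdate with the
// accumulated text after every chunk. Returns the full text.
async function accumulateTextStream(
  stream: ReadableStream<Uint8Array>,
  onUpdate: (fullText: string) => void
): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let full = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    full += decoder.decode(value, { stream: true });
    onUpdate(full);
  }
  return full;
}
```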

Step 6: Add Image Search Tool (Optional)

Enhance cards with relevant images using tool calling. Create the image search tool:
app/api/ask/tools.ts
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import GoogleImages from "google-images";

const client = new GoogleImages(
  process.env.GOOGLE_CSE_ID!,
  process.env.GOOGLE_API_KEY!
);

export function getImageSearchTool(
  writeThinkItem?: (title: string, description: string) => void
) {
  return {
    type: "function",
    function: {
      name: "getImageSrc",
      description: "Get image src for given alt text",
      parse: JSON.parse,
      parameters: zodToJsonSchema(
        z.object({
          altText: z.string().describe("The alt text of the image"),
        })
      ),
      function: async ({ altText }: { altText: string }) => {
        if (writeThinkItem) {
          writeThinkItem(
            "Searching for images...",
            "Finding the perfect image for your canvas."
          );
        }

        const results = await client.search(altText, { size: "huge" });
        return results[0]?.url ?? "";
      },
    },
  };
}
Update app/api/ask/route.ts to use the tool:
import { getImageSearchTool } from "./tools";

export async function POST(req: NextRequest) {
  // ... existing code ...

  const llmStream = await client.beta.chat.completions.runTools({
    model: "c1/anthropic/claude-sonnet-4/v-20250930",
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      ...messages
    ],
    stream: true,
    tools: [
      getImageSearchTool((title: string, desc: string) => {
        c1Response.writeThinkItem({ title, description: desc });
      })
    ],
  });

  // ... rest of the code ...
}
The LLM will automatically call this tool when generating cards that need images. Skip this step if you don’t have Google API credentials.

Step 7: Keyboard Shortcuts

Add keyboard shortcut support. Update your app/page.tsx to include overrides:
app/page.tsx
"use client";

import "@crayonai/react-ui/styles/index.css";
import "tldraw/tldraw.css";
import { Tldraw, type TLUiOverrides } from "tldraw";
import { shapeUtils } from "./shapeUtils";
import { PromptInput } from "./components/PromptInput";
import { FOCUS_PROMPT_EVENT } from "./events";

const overrides: TLUiOverrides = {
  actions: (_editor, actions) => {
    return {
      ...actions,
      "focus-prompt-input": {
        id: "focus-prompt-input",
        label: "Focus Prompt Input",
        kbd: "$k", // Cmd/Ctrl + K
        onSelect: () => {
          window.dispatchEvent(new CustomEvent(FOCUS_PROMPT_EVENT));
        },
      },
    };
  },
};

export default function Page() {
  return (
    <div style={{ position: "fixed", inset: 0 }}>
      <Tldraw
        shapeUtils={shapeUtils}
        overrides={overrides}
        persistenceKey="c1-canvas"
      >
        <PromptInput focusEventName={FOCUS_PROMPT_EVENT} />
      </Tldraw>
    </div>
  );
}
The $k syntax in tldraw means Cmd+K on Mac, Ctrl+K on Windows/Linux.

Step 8: Create Prompt Input Component

Create the UI component that users interact with:
app/components/PromptInput.tsx
"use client";

import { useState, useRef, useEffect } from "react";
import { useEditor, useValue } from "tldraw";
import { createC1ComponentShape } from "../utils/c1ShapeManager";

interface PromptInputProps {
  focusEventName: string;
}

export function PromptInput({ focusEventName }: PromptInputProps) {
  const editor = useEditor();
  const [isFocused, setIsFocused] = useState(false);
  const [prompt, setPrompt] = useState("");
  const inputRef = useRef<HTMLInputElement>(null);
  // useValue keeps this reactive: the input re-anchors once the first shape appears
  const isCanvasZeroState = useValue(
    "canvas zero state",
    () => editor.getCurrentPageShapes().length === 0,
    [editor]
  );

  // Listen for keyboard shortcut event
  useEffect(() => {
    const handleFocusEvent = () => {
      inputRef.current?.focus();
      setIsFocused(true);
    };

    window.addEventListener(focusEventName, handleFocusEvent);
    return () => window.removeEventListener(focusEventName, handleFocusEvent);
  }, [focusEventName]);

  const onInputSubmit = async (prompt: string) => {
    setPrompt("");
    try {
      await createC1ComponentShape(editor, {
        searchQuery: prompt,
        width: 600,
        height: 300,
      });
    } catch (error) {
      console.error("Failed to create shape:", error);
    }
  };

  return (
    <form
      className={`
        flex items-center fixed left-1/2 -translate-x-1/2
        py-4 px-6 rounded-2xl border bg-white shadow-lg
        transition-all duration-300
        ${isFocused ? "w-1/2" : "w-[400px]"}
        ${isCanvasZeroState ? "top-1/2 -translate-y-1/2" : "bottom-4"}
      `}
      onSubmit={(e) => {
        e.preventDefault();
        onInputSubmit(prompt);
        setIsFocused(false);
        inputRef.current?.blur();
      }}
    >
      <input
        ref={inputRef}
        type="text"
        placeholder="Ask anything..."
        className="flex-1 outline-none"
        onFocus={() => setIsFocused(true)}
        onBlur={() => setIsFocused(false)}
        value={prompt}
        onChange={(e) => setPrompt(e.target.value)}
      />
      {isFocused ? (
        <button
          type="submit"
          className="ml-2 px-3 py-1 bg-purple-600 text-white rounded-lg"
        >
          Ask
        </button>
      ) : (
        <span className="text-xs text-gray-400">
          {navigator.platform.includes("Mac") ? "⌘ + K" : "Ctrl + K"}
        </span>
      )}
    </form>
  );
}
Create an event constant:
app/events/index.ts
export const FOCUS_PROMPT_EVENT = "focus-prompt-input";

Step 9: Register Shape Utils and Run

Create the shape utils registry:
app/shapeUtils/index.ts
import { C1ComponentShapeUtil } from "./C1ComponentShapeUtil";

export const shapeUtils = [C1ComponentShapeUtil];
Now run your development server:
npm
npm run dev
Open http://localhost:3000 and:
  1. Press Cmd/Ctrl + K to open the prompt input
  2. Type “Create a product launch plan” and press Enter
  3. Watch as an AI-generated card appears on the canvas
  4. Select the card and press Cmd/Ctrl + K again
  5. Ask “What are the risks?” - the AI will see the first card’s context
  6. Experiment with multiple cards and selections
If you skipped Step 6 (image search), the canvas will work perfectly without it - you just won’t get automatic image embedding.

Key Concepts

When you select existing cards on the canvas and create a new one, the extractC1ShapeContext function reads the C1 responses from selected shapes and sends them as context to the API. The LLM sees this context and generates responses that reference or build upon the selected cards.
tldraw’s shape system provides built-in features: selection, resizing, repositioning, undo/redo, persistence, and export. Using custom shapes means your AI cards integrate seamlessly with tldraw’s native editing capabilities.
Cards currently have fixed dimensions. For dynamic sizing, measure the rendered content inside the shape component and call editor.updateShape() with the new height - in other words, watch the C1Component's rendered size and write it back to the shape's h prop.
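One way to sketch this: clamp the measured height into a sane range before writing it back, and drive the measurement with a ResizeObserver. The clamp helper and its bounds below are illustrative, not from the guide; the observer wiring is shown as comments since it only runs inside the shape component:

```typescript
// Clamp a measured content height before writing it back to the shape's
// `h` prop (min/max bounds here are illustrative).
function nextCardHeight(measured: number, min = 120, max = 1200): number {
  return Math.min(max, Math.max(min, Math.round(measured)));
}

// Sketch of the wiring inside the shape component (not runnable here):
//
//   const ref = useRef<HTMLDivElement>(null);
//   useEffect(() => {
//     const observer = new ResizeObserver(([entry]) => {
//       const h = nextCardHeight(entry.contentRect.height);
//       if (h !== shape.props.h) {
//         this.editor.updateShape({
//           id: shape.id,
//           type: "c1-component",
//           props: { h },
//         });
//       }
//     });
//     if (ref.current) observer.observe(ref.current);
//     return () => observer.disconnect();
//   }, []);
```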
tldraw also has built-in collaboration support. Use its sync server or implement your own using the store prop; each user's shapes, selections, and camera position sync in real time.

Testing Your Canvas

Try these workflows to test context awareness:
  • Single card: “Create a product roadmap for Q1”
  • Context-aware: Create “Tesla stock analysis” card, select it, then ask “What’s the outlook?”
  • Multi-select: Create “Revenue data” and “Cost data” cards, select both, ask “Create a comparison chart”
  • Follow-up: After any card, select it and ask “Expand on this with more details”

Going to Production

Before deploying:
  1. Add authentication to prevent unauthorized API usage
  2. Implement rate limiting on the /api/ask endpoint
  3. Set up canvas sharing if you want users to share their canvases
  4. Add export functionality - tldraw can export to PNG/SVG
  5. Consider collaboration - tldraw supports multiplayer mode
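For item 2 above, a minimal fixed-window limiter can live in memory while prototyping. This is a sketch only - in a real deployment use a shared store such as Redis, since serverless instances don't share memory; all names here are illustrative:

```typescript
// Fixed-window in-memory rate limiter: allow `limit` requests per `windowMs`
// for each key (e.g. an IP address or user id).
function createRateLimiter(limit: number, windowMs: number) {
  const hits = new Map<string, { count: number; windowStart: number }>();
  return function allow(key: string, now: number = Date.now()): boolean {
    const entry = hits.get(key);
    // Start a fresh window if none exists or the old one has expired
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    if (entry.count >= limit) return false;
    entry.count += 1;
    return true;
  };
}
```

In the /api/ask route handler this would gate the POST before calling the LLM, returning a 429 response when allow() is false.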

Full Example & Source Code