
Cloudflare Implementation #49

Closed
@sideshot

Description

Is this the intended setup for a Cloudflare Worker? I'm particularly interested in how the streaming is handled.

I'm running into an issue when following the cookbook instructions with the prompt: "Before any tool, tell the user 'I will be right back'." The tool call fires correctly, but no response comes back afterward; the session just ends.
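
For context, the search tool is wired up essentially like this (trimmed; searchTool and the endpoint URL below are placeholders):

import { tool } from "@openai/agents";
import { z } from "zod";

/* Simplified placeholder; the real tool queries my own search endpoint */
const searchTool = tool({
  name: "search_tool",
  description: "Search the site and return matching snippets.",
  parameters: z.object({
    query: z.string().describe("The search query"),
  }),
  execute: async ({ query }) => {
    const res = await fetch(`https://example.com/search?q=${encodeURIComponent(query)}`);
    return await res.text();
  },
});

The worker handler: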

/*────────────────────────  Manual SSE chat handler  ─────────────────*/
async function handleChat(
  req: Request,
  env: Env,
  ctx: ExecutionContext
): Promise<Response> {
  /* Parse JSON body */
  const { message, lastResponseId, trace_id } = (await req.json()) as {
    message: string;
    lastResponseId?: string;
    trace_id?: string;
  };

  /* Get or generate a trace_id for grouping conversations
     (32 alphanumeric chars so it matches the trace_<32 alnum> format) */
  const traceId = trace_id || `trace_${crypto.randomUUID().replace(/-/g, "")}`;

  /* Initialise tracing/exporter once */
  await initTracing(env.OPENAI_API_KEY);

  /* Dynamically import SDK pieces that may use crypto or fs */
  const { Agent, Runner, withTrace } = await import("@openai/agents");

  /* Build agent & runner */
  const agent = new Agent({
    name: "Site Assistant",
    instructions: "You will answer questions by using the search_tool(query).",
    outputType: "text",
    model: "gpt-4.1",
    // tools: [searchTool],
  });

  const runner = new Runner({
    tracingDisabled: false,          // default
  });

  let runResult: any;
  await withTrace('cf-assistant-ui', async () => {
    runResult = await runner.run(agent, message, {
      stream: true,
      previousResponseId: lastResponseId,
    });
  }, {
    traceId: traceId,
    groupId: traceId, // Use same ID for grouping conversations
  });

  /* Create manual SSE stream */
  const stream = new ReadableStream({
    async start(controller) {
      const encoder = new TextEncoder();
      
      try {
        for await (const ev of runResult) {
          // Handle raw model stream events which contain the delta text
          if (ev.type === 'raw_model_stream_event') {
            // Check if this is an output_text_delta event
            if (ev.data && ev.data.type === 'output_text_delta' && ev.data.delta) {
              const textDelta = ev.data.delta;
              controller.enqueue(encoder.encode(`event: text-delta\ndata: ${textDelta}\n\n`));
            }
            // Handle completion
            else if (ev.data && ev.data.type === 'response_done') {
              controller.enqueue(encoder.encode(`event: lastResponseId\ndata: ${ev.data.response.id}\n\n`));
              controller.enqueue(encoder.encode(`event: trace_id\ndata: ${traceId}\n\n`));
              controller.enqueue(encoder.encode('event: done\ndata: [DONE]\n\n'));
              break;
            }
          }
        }
      } catch (error) {
        console.error('Error processing stream:', error);
        controller.enqueue(encoder.encode(`event: error\ndata: ${String(error)}\n\n`));
      } finally {
        controller.close();
      }
    }
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
      'Access-Control-Allow-Origin': '*',
    }
  });
}
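
For reference, on the client I read the stream with fetch rather than EventSource, since the handler expects a JSON POST body. Roughly like this (frame parsing simplified; the /chat route is a placeholder):

/* Minimal client-side reader for the SSE events emitted above */
async function askAssistant(message: string, lastResponseId?: string): Promise<void> {
  const res = await fetch("/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message, lastResponseId }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    // Each chunk carries one or more "event: ...\ndata: ...\n\n" frames
    // (a real client should buffer, since a frame can straddle chunks)
    for (const frame of decoder.decode(value, { stream: true }).split("\n\n")) {
      if (!frame.trim()) continue;
      const [eventLine, dataLine] = frame.split("\n");
      const event = eventLine?.replace("event: ", "");
      const data = dataLine?.replace("data: ", "");

      if (event === "text-delta") {
        console.log(data);       // append to the UI in the real app
      } else if (event === "lastResponseId") {
        lastResponseId = data;   // persist for the next turn
      }
    }
  }
}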

I'd be happy to add the full script if you prefer. The extra tracing code is left over from debugging why tracing stopped working for me.
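
initTracing is just a small guard around the exporter key, roughly along these lines (assuming setTracingExportApiKey here; I can paste the exact version if useful):

let tracingReady = false;

/* Point the default trace exporter at the API key, once per isolate */
async function initTracing(apiKey: string): Promise<void> {
  if (tracingReady) return;
  const { setTracingExportApiKey } = await import("@openai/agents");
  setTracingExportApiKey(apiKey);
  tracingReady = true;
}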
