How can I get a Cloudflare Worker to stream the OpenAI response back to the frontend?


I'm trying to get a Cloudflare Worker to return data chunks to the frontend as they arrive, instead of waiting for the complete OpenAI result before responding. Currently it doesn't seem to return any data until the full response from OpenAI has been received.

Any idea how to fix this? (I'm not very familiar with streaming data from a server to the frontend.)

Worker code: I basically copied the following from the Cloudflare docs, with minor tweaks to the header configuration to avoid CORS errors during development.

export default {
    async fetch(request, env, ctx) {
        const openai = new OpenAI({
            apiKey: "##-#######"
        })

        // make our request to the OpenAI API
        const stream = await openai.chat.completions.create({
            model: "gpt-3.5-turbo",
            messages: [{ role: "user", content: "Tell me a story using 2000 chars." }],
            stream: true
        },
            { responseType: "stream" }
        )

        // Using our readable and writable to handle streaming data
        let { readable, writable } = new TransformStream()

        let writer = writable.getWriter()
        const textEncoder = new TextEncoder()

        // loop over the data as it is streamed from OpenAI and write it using our writeable
        for await (const part of stream) {
            console.log(part.choices[0]?.delta?.content || "")
            writer.write(textEncoder.encode(part.choices[0]?.delta?.content || ""))
        }

        writer.close()

        const response = new Response(readable, {
            headers: {
                'Content-Type': 'text/plain; charset=utf-8',
                'Access-Control-Allow-Origin': '*',
                'Access-Control-Allow-Headers': 'Origin, X-Requested-With, Content-Type, Accept',
                'Transfer-Encoding': 'chunked'
            }
        });

        return response;
    }
}

Frontend code:

const onSubmit = async (data) => {
        try {
            // Fetch the streaming endpoint
            const response = await fetch('CLOUDFLARE_END_POINT');

            if (!response.body) {
                console.error('The browser does not support streaming responses or server did not send a stream.');
                return;
            }

            const reader = response.body.getReader();

            while (true) {
                const { done, value } = await reader.read();

                if (done) {
                    break;
                }

                // Convert each chunk to text and log/display it
                console.log(new TextDecoder().decode(value));
            }
        } catch (error) {
            console.error('Error fetching the data:', error);
        }
}
1 Answer

My suggestion:

The worker in the question never returns a `Response` until the `for await` loop has consumed the entire OpenAI stream, so the client sees nothing until the completion is finished. Moving the loop into the `start()` callback of a `ReadableStream` lets the worker return the `Response` right away and push each chunk to the client as it arrives:

export default {
    async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
        const openai = new OpenAI({
            apiKey: env.OPENAI_API_KEY,
            baseURL: env.CLOUDFLARE_OPEN_AI_GATEWAY,
        });

        // Request a streaming completion from OpenAI
        const stream = await openai.chat.completions.create({
            model: 'gpt-3.5-turbo-0613',
            messages: [{ role: 'user', content: 'Write a story with 50 words.' }],
            max_tokens: 100,
            stream: true,
        });

        const textEncoder = new TextEncoder();

        // Pump OpenAI chunks into the response body as they arrive,
        // instead of waiting for the full completion before responding.
        const readableStream = new ReadableStream({
            async start(controller) {
                for await (const part of stream) {
                    if (part.choices[0]?.delta?.content) {
                        // Format each chunk as a server-sent event
                        const formattedData = `data: ${part.choices[0].delta.content}\n\n`;
                        controller.enqueue(textEncoder.encode(formattedData));
                    }
                }
                controller.close();
            },
        });

        // Return immediately; chunks are flushed to the client as they are enqueued
        return new Response(readableStream, {
            headers: {
                'content-type': 'text/event-stream',
                'Cache-Control': 'no-cache',
                Connection: 'keep-alive',
                'Access-Control-Allow-Origin': '*',
                'Access-Control-Allow-Headers': 'Origin, X-Requested-With, Content-Type, Accept',
            },
        });
    },
};
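Since the worker now responds with `text/event-stream`, each chunk the frontend reads is prefixed with `data: ` and terminated by a blank line. A minimal sketch of how the existing `fetch` + `getReader()` loop could strip that framing (the endpoint URL and the `onChunk` callback are placeholders, not part of the original code):

```typescript
// Minimal sketch: read the SSE-formatted stream from the worker and
// extract the text after each "data: " prefix.
async function readCompletionStream(
    endpoint: string,                   // placeholder for the worker URL
    onChunk: (text: string) => void     // hypothetical callback for each piece of text
): Promise<void> {
    const response = await fetch(endpoint);
    if (!response.body) {
        throw new Error('No response body to stream.');
    }

    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    let buffer = '';

    while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        // Accumulate decoded bytes; events are separated by a blank line.
        buffer += decoder.decode(value, { stream: true });
        const events = buffer.split('\n\n');
        buffer = events.pop() ?? '';    // keep any incomplete event for the next read

        for (const event of events) {
            if (event.startsWith('data: ')) {
                onChunk(event.slice('data: '.length));
            }
        }
    }
}
```

Calling `readCompletionStream('CLOUDFLARE_END_POINT', text => console.log(text))` reproduces the original console logging, just without the SSE framing.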