How to write an integration test with Jest for OpenAI's Node SDK using `stream: true`


I'm having a hard time writing a test for OpenAI's Node.js SDK when the `stream: true` flag is set.

The code below works fine in the browser; I just want to write a test for it.

I want to test my entire Express route, intercepting the call to OpenAI while letting the rest of my own code run. But for some reason I can't intercept it / get the mock to feed a correct response into my own code.

I'm not tied to any of the mocking libraries you see in my code, so feel free to suggest alternatives.

it("should return stream the ai output if this is the first time fetching the document", async () => {
      nock("https://api.openai.com")
        .post("/v1/chat/completions")
        .reply(200, {
          id: "chatcmpl-123",
          object: "chat.completion.chunk",
          created: 1694268190,
          model: "gpt-3.5-turbo-0613",
          system_fingerprint: "fp_44709d6fcb",
          choices: [
            {
              index: 0,
              delta: {
                role: "assistant",
                content: "I am actually not OpenAI.",
              },
            },
          ],
          finish_reason: "stop",
        });
      const response = await agent.get(`${baseRoute}/documents/${documentId}`);
      expect(response.status).toBe(200);
      console.log(response.body);
    });

Here is the /api/v2/documents/:id controller:

const stream = await openaiService.getCompletionStream(document);
response.setHeader("Transfer-Encoding", "chunked");
response.setHeader("X-Content-Type-Options", "nosniff");
for await (const chunk of stream) {
  const data = chunk.choices[0]?.delta?.content;
  if (data !== undefined) {
    response.write(data);
  }
}

response.end();

Here is the getCompletionStream function being called:

getCompletionStream = async (document: DocumentDtoV2) => {
    const openai = this.getOpenAiAgent();
    const completion = await openai.chat.completions.create({
      model: document.aiEngine.defaultProperties.model,
      messages: document.messages,
      max_tokens: document.aiEngine.defaultProperties.max_tokens,
      stream: true,
      temperature: document.prompt.model.customProperties.temperature
        ? document.prompt.model.customProperties.temperature
        : document.aiEngine.defaultProperties.temperature,
      top_p: document.prompt.model.customProperties.top_p
        ? document.prompt.model.customProperties.top_p
        : document.aiEngine.defaultProperties.top_p,
      frequency_penalty: (document.aiEngine.defaultProperties as ChatGPTProperties)
        .frequency_penalty,
      presence_penalty: (document.aiEngine.defaultProperties as ChatGPTProperties).presence_penalty,
    });

    return completion;
  };
node.js jestjs integration-testing openai-api
1 Answer

I have a solution that uses msw, but I think the same idea applies here.

First, here's a utility function that takes a string and turns it into a readable stream whose chunks match the format OpenAI sends for its streamed responses. It's useful for mocking a specific streamed reply.

import { ReadableStream } from 'node:stream/web';
import { OpenAI } from 'openai';

// Takes the content you want to have your OpenAI mock respond with and turns it 
// into a readable stream that emits chunks in the same format as OpenAI
function createOpenAiResponseStream(content: string) {
  const baseAttrs = {
    id: 'chatcmpl-123',
    object: 'chat.completion.chunk',
    created: Date.now(),
    model: 'gpt-3.5-turbo-0125',
    system_fingerprint: 'fp_44709d6fcb',
  } as const;

  // Split string into chunks of 4 characters
  const contentChunks = content.match(/.{1,4}/g)!;

  const contentStreamChunks: OpenAI.Chat.Completions.ChatCompletionChunk[] = contentChunks.map((content) => ({
    ...baseAttrs,
    choices: [
      {
        index: 0,
        delta: {
          content,
        },
        logprobs: null,
        finish_reason: null,
      },
    ],
  }));

  // Add a starting chunk (this chunk's delta just has the role. No content) and an ending chunk (this chunk has an empty delta but has finish_reason === 'stop')
  const streamChunks: OpenAI.Chat.Completions.ChatCompletionChunk[] = [
    {
      ...baseAttrs,
      choices: [
        {
          index: 0,
          delta: {
            role: 'assistant',
          },
          logprobs: null,
          finish_reason: null,
        },
      ],
    },
    ...contentStreamChunks,
    {
      ...baseAttrs,
      choices: [
        {
          index: 0,
          delta: {},
          logprobs: null,
          finish_reason: 'stop',
        },
      ],
    },
  ];

  const encoder = new TextEncoder();

  return new ReadableStream({
    start(controller) {
      try {
        for (const chunk of streamChunks) {
          // Match the format of open ai chunks. Each streamed chunk has `data: `, then the content, then two newlines
          controller.enqueue(encoder.encode(`data: ${JSON.stringify(chunk)}\n\n`));
        }
      } finally {
        controller.close();
      }
    },
  });
}
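To sanity-check the helper, you can drain the stream and confirm the SSE framing. Here's a minimal, SDK-free sketch (plain Node web streams, no `openai` types; the function names are my own) of the same enqueue/encode pattern:

```typescript
import { ReadableStream } from 'node:stream/web';

// Same framing as the helper above: each enqueued chunk is `data: <json>\n\n`.
function createSseStream(chunks: object[]): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    start(controller) {
      for (const chunk of chunks) {
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(chunk)}\n\n`));
      }
      controller.close();
    },
  });
}

// Drain the stream into a single string, the way a test assertion would.
async function readAll(stream: ReadableStream<Uint8Array>): Promise<string> {
  const decoder = new TextDecoder();
  let out = '';
  for await (const part of stream) {
    out += decoder.decode(part, { stream: true });
  }
  return out;
}
```

Reading back `createSseStream([{ a: 1 }])` should yield the single frame `data: {"a":1}` followed by a blank line, which is what the SDK's SSE parser expects.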

I'm not familiar with nock, but looking at its docs, your reply would become:

.reply(200, createOpenAiResponseStream("I am actually not OpenAI."))

You may also need to specify some headers:

.reply(200, createOpenAiResponseStream('I am actually not OpenAI.'), {
  'Content-Type': 'text/event-stream',
  'Cache-Control': 'no-cache',
  Connection: 'keep-alive',
})
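One caveat (an assumption on my part, since I haven't tried nock with web streams): nock's body handling predates the WHATWG `ReadableStream`, so if it doesn't accept the web stream directly, converting it to a Node `Readable` with `Readable.fromWeb` (available since Node 17) should work:

```typescript
import { Readable } from 'node:stream';
import { ReadableStream } from 'node:stream/web';

// Adapter: wrap a web ReadableStream (e.g. the one returned by
// createOpenAiResponseStream) into a Node Readable that nock can pipe.
function toNodeStream(webStream: ReadableStream<Uint8Array>): Readable {
  return Readable.fromWeb(webStream);
}
```

Then pass `toNodeStream(createOpenAiResponseStream('I am actually not OpenAI.'))` as the reply body instead of the raw web stream.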