How to run the Vercel AI SDK on AWS Lambda and API Gateway

Problem description · Votes: 0 · Answers: 1

I'm trying to host my NextJS Vercel AI SDK app on AWS behind CloudFront, Lambda, and API Gateway.

I want to modify the useChat() call to point at my Lambda function's API, which makes the connection to OpenAI and returns a StreamingTextResponse.

However, the StreamingTextResponse body's stream is always undefined.

What can I do to fix this?

Any help is appreciated, thanks.

page.tsx

"use client";

import { useChat } from "ai/react";
import { useState, useEffect } from "react";

export default function Chat() {
  
  const { messages, input, handleInputChange, handleSubmit, data } = useChat({api: '/myAWSAPI'});
 
...

Lambda function


const OpenAI = require('openai')
const { OpenAIStream, StreamingTextResponse } = require('ai');
const prompts = require('./prompts')
const { roleplay_prompt } = prompts

// Create an OpenAI API client (that's edge friendly!)
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY || '',
});

exports.handler = async function(event, context, callback) {

  // Extract the `messages` from the body of the request
  const { messages } = event.body;

  const messageWithSystem = [
    {role: 'system', content: roleplay_prompt},
    ...messages // Add user and assistant messages after the system message
  ]

  console.log(messageWithSystem)

  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: messageWithSystem,
  });
  
  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);
  // Respond with the stream
  const chatResponse = new StreamingTextResponse(stream);

  // body's stream is always undefined
  console.log(chatResponse)

  return chatResponse
}

next.js aws-lambda vercel-ai
1 Answer

0 votes

After digging into this myself, I found there's a fundamental difference in how NextJS handlers and Lambda handlers work, at least as far as streaming is concerned (at least as of January 2024).

This is the simplest guide I found that takes you from 0->1 with a streaming handler on Lambda: https://docs.aws.amazon.com/lambda/latest/dg/response-streaming-tutorial.html

And here is the April 2023 post where they introduced the feature: https://aws.amazon.com/blogs/compute/introducing-aws-lambda-response-streaming/

In short, there are a few approaches, but the main piece you're missing is awslambda.streamifyResponse: a streaming Lambda needs this bit of magic to transform the handler and pass through the responseStream.
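To see what streamifyResponse does conceptually, here is a tiny mock (illustration only; the real awslambda.streamifyResponse is a global injected by the Lambda runtime, not something you implement yourself): it adapts a handler that writes to a responseStream instead of returning a Response object.

```typescript
import { Writable } from 'stream';

// The (event, responseStream) handler shape that streamifyResponse expects.
type StreamHandler = (event: unknown, responseStream: Writable) => Promise<void>;

// Mock of what streamifyResponse does conceptually: it wraps a streaming
// handler into an ordinary async handler. Here we collect the written chunks
// into a string so the behavior is easy to observe outside of Lambda.
function mockStreamifyResponse(fn: StreamHandler) {
  return async (event: unknown): Promise<string> => {
    const chunks: Buffer[] = [];
    const sink = new Writable({
      write(chunk, _encoding, callback) {
        chunks.push(Buffer.from(chunk));
        callback();
      },
    });
    await fn(event, sink);
    return Buffer.concat(chunks).toString();
  };
}

// The wrapped handler writes to responseStream instead of returning a body.
export const handler = mockStreamifyResponse(async (_event, responseStream) => {
  responseStream.write('Hello ');
  responseStream.write('stream');
  responseStream.end();
});
```

In the real runtime, the responseStream is wired to the HTTP response, so each write reaches the client as soon as it happens.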

Here is my conversion of Vercel's chat response handler (similar to https://github.com/vercel/ai/blob/main/examples/next-openai/app/api/chat/route.ts) into a Lambda, which seems to work:

import { OpenAIStream } from 'ai';
import OpenAI from 'openai';

import stream from 'stream';
import util from 'util';
const { Readable } = stream;
const pipeline = util.promisify(stream.pipeline);

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export const chatHandler: awslambda.StreamifyHandler = async (event, responseStream, _context) => {
  console.log(`chat processing event: ${JSON.stringify(event)}`);

  const { messages } = JSON.parse(event.body || '');

  console.log(`chat processing messages: ${JSON.stringify(messages)}`);

  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response, {
    onStart: async () => {
      // This callback is called when the stream starts
      // You can use this to save the prompt to your database
      // await savePromptToDatabase(prompt);
      console.log(`Started stream, latest message: ${JSON.stringify(messages.length > 0 ? messages[messages.length - 1] : '<none>')}`);
    },
    onToken: async (token: string) => {
      console.log(`chat got token: ${token}`);
      // This callback is called for each token in the stream
      // You can use this to debug the stream or save the tokens to your database
      // console.log(token);
    },
    onCompletion: async (completion: string) => {
      // This callback is called when the stream completes
      // You can use this to save the final completion to your database
      // await saveCompletionToDatabase(completion);
      console.log(`Completed stream with completion: ${completion}`);
    },
    onFinal: async (final: string) => {
      console.log(`chat got final: ${final}`);
    },
  });

  // Respond with the stream
  // NOPE! Not in a lambda
  //return new StreamingTextResponse(stream);

  // this is how we chain things together in lambda
  // @ts-expect-error this seems to be ok, but i'd like to do this safely
  await pipeline(stream, responseStream);
};

// see https://github.com/astuyve/lambda-stream for better support of this
export const handler = awslambda.streamifyResponse(chatHandler);

I mentioned a helper type library above; in my own source I used the .d.ts below, which at least makes those awslambda globals work with the Node types. I'd love to understand why Vercel's ReadableStream can be treated as just a Readable here, because when I asked Copilot it said it wouldn't work 🙃

import { APIGatewayProxyEventV2, Context, Handler } from 'aws-lambda';
import { Writable } from 'stream';

declare global {
  namespace awslambda {
    export namespace HttpResponseStream {
      function from(writable: Writable, metadata: any): Writable;
    }

    export type ResponseStream = Writable & {
      setContentType(type: string): void;
    };

    export type StreamifyHandler = (event: APIGatewayProxyEventV2, responseStream: ResponseStream, context: Context) => Promise<any>;

    export function streamifyResponse(handler: StreamifyHandler): Handler<APIGatewayProxyEventV2>;
  }
}
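On the ReadableStream question above: recent Node versions bridge web streams and Node streams (stream.Readable.fromWeb has existed since v17, and pipeline itself later learned to accept web streams directly), which is presumably why that pipeline call works despite the typings. A quick standalone check of the same pattern, no Lambda involved:

```typescript
import { Readable, Writable } from 'stream';
import { pipeline } from 'stream/promises';

async function demo(): Promise<string> {
  // A web ReadableStream, the kind of object OpenAIStream returns.
  const webStream = new ReadableStream({
    start(controller) {
      controller.enqueue(new TextEncoder().encode('hello '));
      controller.enqueue(new TextEncoder().encode('stream'));
      controller.close();
    },
  });

  // Collect whatever gets piped in, standing in for Lambda's responseStream.
  const chunks: Buffer[] = [];
  const sink = new Writable({
    write(chunk, _encoding, callback) {
      chunks.push(Buffer.from(chunk));
      callback();
    },
  });

  // Explicitly convert web stream -> Node Readable before piping; the cast
  // papers over the DOM vs node:stream/web typing mismatch.
  await pipeline(Readable.fromWeb(webStream as any), sink);
  return Buffer.concat(chunks).toString();
}
```

Doing the fromWeb conversion explicitly also lets you drop the @ts-expect-error in the handler above.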

I'm sure AWS will improve this at some point, but right now it feels pretty rough around the edges. I'd be curious to hear if anyone is doing this in production; the lack of APIGW support is quite limiting.
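One note on wiring this up, since API Gateway can't stream: response streaming currently works through Lambda Function URLs with the RESPONSE_STREAM invoke mode (the function name and auth type below are placeholders, not from the original post):

```shell
# Expose the streaming handler via a Function URL; API Gateway won't stream.
# --invoke-mode RESPONSE_STREAM is what enables streamed responses.
aws lambda create-function-url-config \
  --function-name chat-handler \
  --auth-type AWS_IAM \
  --invoke-mode RESPONSE_STREAM
```

CloudFront can then sit in front of the Function URL if you need a custom domain.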
