I'm building a ChatGPT website clone, and now I need to implement streaming completions so the result is displayed word by word. My server is a TypeScript Node.js app using the Express.js framework.
The route looks like this:
import express, { Request, Response } from 'express';
import cors from 'cors';
import { Configuration, OpenAIApi } from 'openai';
// ...
app.post('/api/admin/testStream', async (req: Request, res: Response) => {
  const { password } = req.body;
  try {
    if (password !== process.env.ADMIN_PASSWORD) {
      res.send({ message: 'Incorrect password' });
      return;
    }
    const completion = await openai.createCompletion({
      model: 'text-davinci-003',
      prompt: 'Say this is a test',
      stream: true,
    }, { responseType: 'stream' });
    completion.data.on('data', (chunk: any) => {
      console.log(chunk.toString());
    });
    res.send({ message: 'Stream started' });
  } catch (err) {
    console.log(err);
    res.send(err);
  }
});
// ...
// ...
Now it gives me this error:

Property 'on' does not exist on type 'CreateCompletionResponse'. ts(2339)

even though I set `{ responseType: 'stream' }`.
How can I fix this and send the response chunk by chunk to the frontend? (I'm using Socket.IO.)
Finally solved it with the help of @uzluisf! Here's what I did:
import express, { Request, Response } from 'express';
import cors from 'cors';
import { Configuration, OpenAIApi } from 'openai';
import http, { IncomingMessage } from 'http';
// ...
app.post('/api/admin/testStream', async (req: Request, res: Response) => {
  const { password } = req.body;
  try {
    if (password !== process.env.ADMIN_PASSWORD) {
      res.send({ message: 'Incorrect password' });
      return;
    }
    const completion = await openai.createChatCompletion({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: 'When was America founded?' }],
      stream: true,
    }, { responseType: 'stream' });
    // The typings claim `data` is a parsed response, but with
    // `responseType: 'stream'` it is actually a readable stream,
    // so cast it before attaching listeners.
    const stream = completion.data as unknown as IncomingMessage;
    stream.on('data', (chunk: Buffer) => {
      // SSE frames are separated by blank lines.
      const payloads = chunk.toString().split('\n\n');
      for (const payload of payloads) {
        if (payload.includes('[DONE]')) return;
        if (payload.startsWith('data:')) {
          try {
            const data = JSON.parse(payload.replace('data: ', ''));
            const text: undefined | string = data.choices[0].delta?.content;
            if (text) {
              console.log(text);
            }
          } catch (error) {
            console.log(`Error with JSON.parse and ${payload}.\n${error}`);
          }
        }
      }
    });
    stream.on('end', () => {
      setTimeout(() => {
        console.log('\nStream done');
        res.send({ message: 'Stream done' });
      }, 10);
    });
    stream.on('error', (err: Error) => {
      console.log(err);
      res.send(err);
    });
  } catch (err) {
    console.log(err);
    res.send(err);
  }
});
// ...
// ...
For more information, see https://github.com/openai/openai-node/issues/18
Also managed to send the message chunks to the frontend with Socket.IO events!
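The Socket.IO part isn't shown above, so here is a minimal sketch of how it could look: a helper that turns one raw SSE chunk into its content deltas, which can then be emitted per token. The helper name `extractDeltas` and the `'token'` event name are my own assumptions, not from the original app.

```typescript
// Hypothetical helper: turn one raw SSE chunk from the OpenAI stream
// into the list of content deltas it contains.
function extractDeltas(raw: string): string[] {
  const deltas: string[] = [];
  for (const payload of raw.split('\n\n')) {
    if (payload.includes('[DONE]')) break;
    if (!payload.startsWith('data:')) continue;
    try {
      const data = JSON.parse(payload.replace('data: ', ''));
      const content: string | undefined = data.choices[0].delta?.content;
      if (content) deltas.push(content);
    } catch {
      // ignore unparseable or partial frames in this sketch
    }
  }
  return deltas;
}

// Inside the 'data' listener, each delta could then be forwarded to the
// browser over Socket.IO ('token' is a made-up event name):
//   stream.on('data', (chunk: Buffer) => {
//     for (const delta of extractDeltas(chunk.toString())) {
//       io.to(socketId).emit('token', delta);
//     }
//   });
```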
By the way, if anyone needs to see more of this app, you can check this link:
const streamRes = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${agencyData.OPENAI_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: agentData.vg_defaultModel || 'gpt-3.5-turbo',
    messages: [
      {
        role: 'system',
        content: agentData.vg_systemPrompt || 'You are a helpful assistant.'
      },
      ...embedHistory,
      {
        role: 'user',
        content: agentData.vg_prompt || 'Tell me a very short story'
      }
    ],
    temperature: agentData.vg_temperature || 0.5,
    stream: true
  })
});
const reader = streamRes.body.getReader();
let done = false;
let concatenatedJsonStr = '';
while (!done) {
  const { value, done: readerDone } = await reader.read();
  done = readerDone;
  if (!value) continue; // the final read resolves with no value
  concatenatedJsonStr += Buffer.from(value).toString();
  // Wait until the buffer holds at least one complete SSE frame.
  if (!concatenatedJsonStr.includes('data: ') || !concatenatedJsonStr.includes('\n\n')) {
    continue;
  }
  const payloads = concatenatedJsonStr.split('\n\n');
  concatenatedJsonStr = '';
  for (const payload of payloads) {
    if (payload.includes('[DONE]')) return;
    if (payload.startsWith('data:')) {
      try {
        const data = JSON.parse(payload.replace('data: ', ''));
        const chunk: undefined | string = data.choices[0].delta?.content;
        if (chunk) {
          console.log(chunk);
          // ws.send(chunk); // send the chunk to a websocket, for example
        }
      } catch (error) {
        console.log(`Error with JSON.parse and ${payload}.\n${error}`);
        // Incomplete frame: buffer it and retry with the next network chunk.
        concatenatedJsonStr += payload;
      }
    }
  }
}
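The buffering trick above (re-queuing a payload when `JSON.parse` fails) works, but an arguably cleaner variant is to keep only the text after the last `\n\n` as carry-over, since everything before it is a complete frame. A standalone sketch of that idea, with names of my own invention:

```typescript
// Stateful SSE parser: feed arbitrary string chunks in, get content deltas
// out, even when a frame is split across two network reads.
function makeSSEParser() {
  let buffer = '';
  return (chunk: string): string[] => {
    buffer += chunk;
    const frames = buffer.split('\n\n');
    // The last piece may be an incomplete frame; carry it over to the
    // next call instead of trying to parse it now.
    buffer = frames.pop() ?? '';
    const deltas: string[] = [];
    for (const frame of frames) {
      if (frame.includes('[DONE]')) continue;
      if (!frame.startsWith('data:')) continue;
      try {
        const data = JSON.parse(frame.replace('data: ', ''));
        const content: string | undefined = data.choices[0].delta?.content;
        if (content) deltas.push(content);
      } catch {
        // a frame before a complete delimiter that still fails to parse
        // is genuinely malformed; skip it
      }
    }
    return deltas;
  };
}
```

Because the parser closes over its own `buffer`, each call only ever sees whole frames, so no `JSON.parse` failure is expected on well-formed streams.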