How do I update my chatbot from "text-davinci-003" to "gpt-3.5-turbo" with ChatGPT in Python?

Question Votes: 0 Answers: 3

I'm new to Python and I'd like to understand this code a little better. I'm building a smart chatbot with the OpenAI API and using it on WhatsApp. The snippet below is the part of my code responsible for the ChatGPT response. It currently uses model = "text-davinci-003", and I'd like to switch it to "gpt-3.5-turbo". Would anyone be kind enough to help me?

Note: "msg" is the request we send to ChatGPT from WhatsApp.

My code snippet:

msg = todas_as_msg_texto[-1]
print(msg) # -> Message the client sends (in this case, me)

cliente = 'msg do cliente: '
texto2 = 'Responda a mensagem do cliente com base no próximo texto: '
questao = cliente + msg + texto2 + texto

# #### PROCESS THE MESSAGE WITH THE CHATGPT API ####

openai.api_key = apiopenai.strip()

response=openai.Completion.create(
    model="text-davinci-003",
    prompt=questao,
    temperature=0.1,
    max_tokens=270,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0.6,
)

resposta=response['choices'][0]['text']
print(resposta)
time.sleep(1)
    
python chatbot whatsapp openai-api chatgpt-api
3 Answers
0 votes

To update your code to gpt-3.5-turbo, you need to change four things:

  1. Call openai.ChatCompletion.create instead of openai.Completion.create
  2. Set model='gpt-3.5-turbo'
  3. Pass the prompt as a messages= array, as shown below
  4. Change how the response is assigned to the resposta variable, so that you read from the message key
This tested example takes those changes into account:

response=openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": questao }],
    temperature=0.1,
    max_tokens=270,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0.6,
)

resposta=response['choices'][0]['message']['content']

Also, since the model can return multiple choices, instead of only looking at [0] you might want to iterate over them to see what you get, for example:

for choice in response.choices:
    outputText = choice.message.content
    print(outputText)
    print("------")
print("\n")

Note that this is not necessary if you call openai.ChatCompletion.create with n=1.
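To illustrate the iteration without making an API call, here is a simulated response in the dict shape the legacy openai SDK returns when n is greater than 1 (the three "resposta" strings are made up for the example):

```python
# Simulated response (no network call); mirrors the legacy ChatCompletion
# response shape with three choices, as if n=3 had been passed.
fake_response = {
    "choices": [
        {"message": {"role": "assistant", "content": f"resposta {i}"}}
        for i in range(3)
    ]
}

# Same iteration pattern as above, over the dict form of the response
for choice in fake_response["choices"]:
    outputText = choice["message"]["content"]
    print(outputText)
    print("------")
```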

Also, your example sets both temperature and top_p, but the docs recommend setting only one of them.
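For example, a minimal parameter dict that keeps temperature and simply omits top_p (the message content here is a placeholder, not the questao variable from the question):

```python
# Hypothetical parameter dict: temperature kept, top_p omitted,
# following the docs' advice to set only one of the two.
params = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "msg do cliente: placeholder"}],
    "temperature": 0.1,
    "max_tokens": 270,
    "frequency_penalty": 0,
    "presence_penalty": 0.6,
}
# These kwargs could then be passed as openai.ChatCompletion.create(**params)
```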


0 votes

You can try this:

import requests
import json

# Create an HTTP session and the request payload
httpClient1 = requests.Session()
url = "https://api.openai.com/v1/chat/completions"
headers = {"Authorization": "Bearer " + OpenaiApiKey}
prompt = "Hello, how are you?"
request1 = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": prompt}]
}
# Send the request and wait for the response
response1 = httpClient1.post(url, headers=headers, json=request1)
responseContent1 = response1.content
# Deserialize the JSON response and extract the generated text;
# chat completions nest it under "message" -> "content", not "text"
responseObject1 = json.loads(responseContent1.decode('utf-8'))
results = responseObject1["choices"][0]["message"]["content"]
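Note the extraction at the end: the chat endpoint nests the generated text under message → content, whereas the old completions endpoint exposed a top-level "text" field. A small offline sketch (parsing a hand-written sample payload, with no request actually sent) shows the path that works:

```python
import json

# Hand-written sample body in the chat-completions shape (not a real API response)
sample_body = json.dumps({
    "choices": [
        {"message": {"role": "assistant", "content": "Hello! I'm doing well."}}
    ]
}).encode("utf-8")

responseObject = json.loads(sample_body.decode("utf-8"))
# This is the path that works for chat completions...
chat_text = responseObject["choices"][0]["message"]["content"]
# ...while responseObject["choices"][0]["text"] would raise a KeyError here
print(chat_text)
```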

0 votes

I'm also trying to access "gpt-3.5-turbo", but it's giving me an error. I'm using FastAPI to wrap the "v1/chat/completions" endpoint and containerizing it with Docker. Here's my code:


try:
    response = openai.ChatCompletion.create(
        model=model_id,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=200,
        n=1,
        stop=None,
        temperature=0.5,
        top_p=1.0,
        frequency_penalty=0.0,
        presence_penalty=0.0,
    )
    reply = response['choices'][0]['message']['content']

    # Cache the response and conversation history in Redis
    r.set(message.message, reply)
    r.set("conversation", conversation + reply)

    return {"message": reply}
except openai.error.InvalidRequestError as e:
    raise HTTPException(status_code=500, detail="Failed to generate chat response")
except openai.OpenAIError as e:
    raise HTTPException(status_code=500, detail="OpenAI server error")


© www.soinside.com 2019 - 2024. All rights reserved.