OpenAI provides a Python client, currently at version 0.27.8, which supports both Azure and OpenAI. Here is an example of calling the ChatCompletion endpoint against each provider:
# openai_chatcompletion.py
"""Test OpenAI's ChatCompletion endpoint"""
import os
import openai
import dotenv
dotenv.load_dotenv()
openai.api_key = os.environ.get('OPENAI_API_KEY')
# Hello, world.
api_response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
    max_tokens=16,
    temperature=0,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
)
print('api_response:', type(api_response), api_response)
print('api_response.choices[0].message:', type(api_response.choices[0].message), api_response.choices[0].message)
And:
# azure_openai_35turbo.py
"""Test Microsoft Azure's ChatCompletion endpoint"""
import os
import openai
import dotenv
dotenv.load_dotenv()
openai.api_type = "azure"
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
openai.api_version = "2023-05-15"
openai.api_key = os.getenv("AZURE_OPENAI_KEY")
# Hello, world.
# In addition to the `api_*` properties above, mind the difference in arguments
# as well between OpenAI and Azure:
# - OpenAI from OpenAI uses `model="gpt-3.5-turbo"`!
# - OpenAI from Azure uses `engine="‹deployment name›"`! ⚠️
# > You need to set the engine variable to the deployment name you chose when
# > you deployed the GPT-35-Turbo or GPT-4 models.
# This is the name of the deployment I created in the Azure portal on the resource.
api_response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # engine = "deployment_name".
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
    max_tokens=16,
    temperature=0,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
)
print('api_response:', type(api_response), api_response)
print('api_response.choices[0].message:', type(api_response.choices[0].message), api_response.choices[0].message)
That is, api_type and the other settings are global variables of the Python library.
Here is a third example, which transcribes audio (it uses Whisper, which is available on OpenAI but not on Azure):
# openai_transcribe.py
"""
Test the transcription endpoint
https://platform.openai.com/docs/api-reference/audio
"""
import os
import openai
import dotenv
dotenv.load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")
audio_file = open("minitests/minitests_data/bilingual-english-bosnian.wav", "rb")
transcript = openai.Audio.transcribe(
    model="whisper-1",
    file=audio_file,
    prompt="Part of a Bosnian language class.",
    response_format="verbose_json",
)
print(transcript)
These are minimal examples, but I use similar code as part of a web application (a Flask app).
My challenge now is that I want to use both providers in the same application: ChatCompletion through Azure, but transcription through OpenAI (since Whisper isn't available on Azure).
Is there any way to do this?
I have a few options in mind:
I'm not really satisfied with any of them, and I feel I may be missing a more obvious solution.
Or, of course... Alternatively, I could use Whisper through a different provider (e.g. Replicate), or replace Whisper entirely.
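For instance, one of the workarounds that comes to mind is swapping the module-level settings around each call with a small context manager. A minimal sketch of that pattern, assuming the attribute names used by openai 0.27.x, and demonstrated on a stand-in namespace rather than the real module:

```python
import contextlib
import types

@contextlib.contextmanager
def override_settings(module, **settings):
    """Temporarily set attributes on `module`, restoring the old values on exit."""
    saved = {name: getattr(module, name) for name in settings}
    try:
        for name, value in settings.items():
            setattr(module, name, value)
        yield module
    finally:
        for name, value in saved.items():
            setattr(module, name, value)

# Stand-in for the `openai` module; the real one uses the same attribute names.
fake_openai = types.SimpleNamespace(
    api_type="open_ai",
    api_base="https://api.openai.com/v1",
    api_key="sk-openai",
    api_version=None,
)

with override_settings(
    fake_openai,
    api_type="azure",
    api_base="https://example-resource.openai.azure.com",
    api_key="azure-key",
    api_version="2023-05-15",
):
    # Inside the block the settings point at Azure; a chat call would go here.
    print(fake_openai.api_type)  # azure

# On exit everything is restored, so a Whisper call would hit OpenAI again.
print(fake_openai.api_type)  # open_ai
```

The drawback, and part of why I'm not happy with it, is that this is still process-wide mutable state, so it isn't safe under concurrent requests in a Flask app.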
Every API in the library accepts per-method overrides of the configuration options. If you want to hit the Azure API for chat completions, you can pass the Azure configuration explicitly; for the transcription endpoint, you can pass the OpenAI configuration explicitly. For example:
import os
import openai
api_response = openai.ChatCompletion.create(
    api_base=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    api_type="azure",
    api_version="2023-05-15",
    engine="gpt-35-turbo",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
    max_tokens=16,
    temperature=0,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
)
print(api_response)
audio_file = open("minitests/minitests_data/bilingual-english-bosnian.wav", "rb")
transcript = openai.Audio.transcribe(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="whisper-1",
    file=audio_file,
    prompt="Part of a Bosnian language class.",
    response_format="verbose_json",
)
print(transcript)
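If you make these calls from several places, the per-call overrides can be collected into plain dicts and splatted into each call, so neither call ever touches the module-level globals. A sketch along those lines (the dict names and the chat_kwargs helper are my own inventions, not library API):

```python
import os

# Provider-specific settings, one dict per provider; each request merges in
# exactly one of them, so no global state changes between calls.
AZURE_CHAT = {
    "api_type": "azure",
    "api_base": os.getenv("AZURE_OPENAI_ENDPOINT"),
    "api_key": os.getenv("AZURE_OPENAI_KEY"),
    "api_version": "2023-05-15",
    "engine": "gpt-35-turbo",  # the Azure deployment name
}
OPENAI_AUDIO = {
    "api_key": os.getenv("OPENAI_API_KEY"),
}

def chat_kwargs(messages, **extra):
    """Build keyword arguments for a chat call against Azure (hypothetical helper)."""
    return {**AZURE_CHAT, "messages": messages, **extra}

# Usage would then look like:
#   openai.ChatCompletion.create(**chat_kwargs([{"role": "user", "content": "Hello!"}]))
#   openai.Audio.transcribe(model="whisper-1", file=audio_file, **OPENAI_AUDIO)
print(sorted(chat_kwargs([{"role": "user", "content": "Hi"}], max_tokens=16)))
```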
I have the same problem; it looks like we can't use Azure and OpenAI at the same time. Even changing api_type during instantiation doesn't work. If anyone has a solution, I'd be glad to hear it!