Is there a way to get the complete response from a GPT model?


I am trying to use the OpenAI API and LangChain to train a GPT model on my custom data so I can build a chatbot. I have prepared the data in txt format, and when a user asks a question I do get a correct response, but the problem is that the answers come back incomplete. How can I deal with this?

I have tried changing the model and increasing the variable values, but without success.

Here is my code:

from flask import Flask
app = Flask(__name__)

from llama_index import SimpleDirectoryReader, GPTSimpleVectorIndex, LLMPredictor, PromptHelper, ServiceContext
from langchain.chat_models import ChatOpenAI
import os

os.environ["OPENAI_API_KEY"] = "MYAPI"

def construct_index(directory_path):
    # set maximum input size
    max_input_size = 4096
    # set number of output tokens
    num_outputs = 2000
    # set maximum chunk overlap
    max_chunk_overlap = 20
    # set chunk size limit
    chunk_size_limit = 600 

    # define prompt helper
    prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)

    # define LLM
    llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.5, model_name="text-davinci-003", max_tokens=num_outputs))
 
    documents = SimpleDirectoryReader(directory_path).load_data()
    
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
    index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)

    index.save_to_disk('index.json')

    return index

construct_index("context_data/data")

def ask_ai(query):
    index = GPTSimpleVectorIndex.load_from_disk('index.json')
    response = index.query(query)
    return response.response

@app.route('/<question>')
def answer_question(question):
    answer = ask_ai(question)
    return answer

if __name__ == '__main__':
    app.run(debug=True)
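
For reference, the two settings in this code that directly cap the answer length are num_outputs (the output budget the PromptHelper reserves out of the context window) and max_tokens on the LLM. Note also that ChatOpenAI is LangChain's wrapper for the chat-completions models, so giving it the completions model name text-davinci-003 is likely part of the problem. Below is a minimal sketch of the same predictor with a chat model and an explicit output budget; the gpt-3.5-turbo name and the numbers are assumptions for illustration, mirroring the PromptHelper call above, not a confirmed fix:

from llama_index import LLMPredictor, PromptHelper
from langchain.chat_models import ChatOpenAI

max_input_size = 4096   # model context window
num_outputs = 1024      # tokens reserved for the generated answer (assumed value)

# reserve num_outputs tokens of the window for the completion
prompt_helper = PromptHelper(max_input_size, num_outputs, 20, chunk_size_limit=600)

# gpt-3.5-turbo is a chat model, which is what the ChatOpenAI wrapper targets;
# max_tokens is the hard cap on how long each generated answer can be
llm_predictor = LLMPredictor(
    llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo", max_tokens=num_outputs)
)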
Tags: python chatbot openai-api langchain llama-index
1 Answer

Someone on community.openai.com said:

If you resend the content you have received, it may continue, but it will cost more tokens...
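
In other words, when the reply stops because it hit the token limit, you can feed the partial answer back and ask the model to keep going. A minimal sketch of that loop against the chat-completions endpoint, assuming the pre-1.0 openai Python package; the model name, max_tokens value, and the "Continue." follow-up prompt are assumptions, not part of the quoted answer:

import openai

openai.api_key = "MYAPI"

def ask_until_complete(question, model="gpt-3.5-turbo", max_tokens=512):
    # the conversation starts with just the user question
    messages = [{"role": "user", "content": question}]
    answer = ""

    while True:
        response = openai.ChatCompletion.create(
            model=model,
            messages=messages,
            max_tokens=max_tokens,
        )
        choice = response["choices"][0]
        answer += choice["message"]["content"]

        # finish_reason == "length" means the reply was cut off by max_tokens,
        # so resend what we received and ask the model to continue
        # (each extra round costs additional tokens)
        if choice["finish_reason"] != "length":
            return answer
        messages.append({"role": "assistant", "content": choice["message"]["content"]})
        messages.append({"role": "user", "content": "Continue."})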
