How do I correctly get input_variables from a Chainlit prompt?


Recently, I tried to use the Llama 2 model on my computer, following the "AI Anytime" video. However, when I supply the input_variables for the Chainlit prompt, I get an error. I have searched for a while without finding a solution. Can you help me?

My code is below. When I run it from the Anaconda Prompt, I get this error at the Chainlit prompt: "1 validation error for StuffDocumentsChain __root__ document_variable_name context was not found in llm_chain input_variables: ['', 'question'] (type=value_error)"
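
The ['', 'question'] part of the error seems to match what Python's built-in string Formatter extracts from my template. As far as I know, LangChain parses f-string templates with this same machinery, so a bare {} placeholder shows up as a variable with an empty name:

from string import Formatter

template = "Context: {}\nQuestion: {question}"
# Field names the formatter finds; the bare {} yields the empty string ''
fields = [name for _, name, _, _ in Formatter().parse(template) if name is not None]
print(fields)  # ['', 'question']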

I don't see why the 'context' input variable is necessary. When I remove it from the code, I can type a question at the Chainlit prompt, but then I get the error message "UserSession.set() missing 1 required positional argument: 'value'".


from langchain.prompts import PromptTemplate
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores.faiss import FAISS
from langchain.llms import CTransformers
from langchain.chains import RetrievalQA
import chainlit as cl

DB_FAISS_PATH = "vectorstores/db_faiss" 

# Context: {context}/ 

custom_prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, please just say that you don't know the answer, don't try to make up
an answer.

Context: {}
Question: {question}

Only return the helpful answer below and nothing else.
Helpful answer:

"""

def set_custom_prompt():
    """
    Prompt template for QA retrieval for each vector stores
    """
    
    prompt = PromptTemplate(template = custom_prompt_template, input_variables = ['context','question'])
#    prompt = PromptTemplate.from_template(custom_prompt_template)
#    prompt.format(question = 'question')
    
    
    return prompt


def load_llm():
    print("*** Start the load_llm.")
    llm = CTransformers(
        model = "llama-2-7b-chat.ggmlv3.q8_0.bin",
        model_type = "llama",
        max_new_tokens = 512,
        temperature = 0.5
        )
    print("****** Finish the load_llm")
    return llm

def retrieval_qa_chain(llm, prompt, db):
    qa_chain = RetrievalQA.from_chain_type(
        llm = llm,
        chain_type = "stuff",
        retriever = db.as_retriever(search_kwargs = {'k': 2}),
        return_source_documents = True,
        chain_type_kwargs = {'prompt': prompt}
        )
    return qa_chain

def qa_bot():
    embeddings = HuggingFaceEmbeddings(model_name = 'sentence-transformers/all-MiniLM-L6-v2',
                                       model_kwargs = {'device': 'cpu'})
    
    db = FAISS.load_local(DB_FAISS_PATH, embeddings)
    print("*******FAISS.load_local() works well.")
    llm = load_llm()
    print("****** llm step works well.")
    qa_prompt = set_custom_prompt()
    qa = retrieval_qa_chain(llm, qa_prompt, db)
    print("******qa step works well.")
    
    return qa
    
def final_result(query):
    qa_result = qa_bot()
    response = qa_result({'query': query})
    return response
    
# chainlit ####
@cl.on_chat_start
async def start():
    chain = qa_bot()
    msg = cl.Message(content="Starting the bot......")    
    await msg.send()
    msg.content = "Hi, Welcome to the Medical Bot. What is your query?"
    await msg.update()
    cl.user_session.set('chain', chain)
    
@cl.on_message
async def main(message):
    chain = cl.user_session.set("chain")
    cb = cl.AsyncLangchainCallbackHandler(
        stream_final_answer = True, answer_prefix_tokens = ["FINAL", "ANSWER"]
        )
    cb.answer_reached = True
    res = await chain.acall(message, callbacks = [cb])
    answer = res["result"]
    sources = res["source_documents"]
    
    if sources:
        answer += "\nSources: " + str(sources)
    else:
        answer += "\nNo Sources Found"
        
    await cl.Message(content = answer).send()

I have tried a few fixes, such as removing the 'context' variable or changing its position, but there was always an error.

python chatbot py-langchain llama
1 Answer

Change this line of code, chain = cl.user_session.set("chain"), to chain = cl.user_session.get("chain").

Because we already set the "chain" variable in on_chat_start above, here we have to get it back.
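
A minimal sketch of the corrected handler, reusing the names from the question's script (only the set/get call changes):

@cl.on_message
async def main(message):
    # Retrieve the chain stored by cl.user_session.set('chain', chain) in on_chat_start.
    # Calling set("chain") here with no value is what raised
    # "UserSession.set() missing 1 required positional argument: 'value'"
    chain = cl.user_session.get("chain")
    res = await chain.acall(message, callbacks=[cl.AsyncLangchainCallbackHandler()])
    await cl.Message(content=res["result"]).send()

The first error is a separate issue: with chain_type = "stuff", RetrievalQA stuffs the retrieved documents into a prompt variable whose default name is "context" (the document_variable_name in the error), so the template has to keep a {context} placeholder. The bare {} in the posted template is parsed as a variable with an empty name, which is why the chain reports input_variables: ['', 'question']. Assuming the default document_variable_name, the template lines should read:

Context: {context}
Question: {question}

with input_variables = ['context', 'question'] left unchanged.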
