How can I speed up an analytical chatbot based on Langchain (with agents and tools) and Streamlit, and disable its intermediate steps?

Problem description

I built an analytical chatbot with Langchain (with tools and agents) as the backend and Streamlit as the frontend. It works, but for some user questions it takes far too long to produce any output. Looking at the intermediate step output, I can see the chatbot trying to print every matching row. For example, below the chatbot found 40 relevant comments and printed them one by one in one of its intermediate steps (which takes up to a minute).

My questions are:

  1. Is there any way to speed this process up?
  2. How can I disable the chatbot's intermediate output? (I have already set
     return_intermediate_steps=False,
     verbose=False, and
     expand_new_thoughts=False,
     but the chatbot still displays the intermediate steps.)

Chatbot code:

import os
import pandas as pd
import streamlit as st
# NOTE: imports were missing from the original snippet; the paths below assume a
# classic (pre-0.1) LangChain release and may differ in newer versions.
from langchain.agents import (AgentExecutor, AgentType, ConversationalChatAgent,
                              Tool, create_pandas_dataframe_agent)
from langchain.callbacks import StreamlitCallbackHandler
from langchain.chat_models import AzureChatOpenAI
from langchain.memory import ConversationBufferWindowMemory
from langchain.memory.chat_message_histories import StreamlitChatMessageHistory



def load_data(path):
    return pd.read_csv(path)

if st.sidebar.button('Use Data'):
    # If button is clicked, load the EDW.csv file
    st.session_state["df"] = load_data('./data/EDW.csv')
uploaded_file = st.sidebar.file_uploader("Choose a CSV file", type="csv")


if "df" in st.session_state:

    msgs = StreamlitChatMessageHistory()
    memory = ConversationBufferWindowMemory(chat_memory=msgs, 
                                            return_messages=True, 
                                            k=5, 
                                            memory_key="chat_history", 
                                            output_key="output")
    
    if len(msgs.messages) == 0 or st.sidebar.button("Reset chat history"):
        msgs.clear()
        msgs.add_ai_message("How can I help you?")
        st.session_state.steps = {}

    avatars = {"human": "user", "ai": "assistant"}

    # Display a chat input widget
    if prompt := st.chat_input(placeholder=""):
        st.chat_message("user").write(prompt)

        llm = AzureChatOpenAI(
                        deployment_name = "gpt-4",
                        model_name = "gpt-4",
                        openai_api_key = os.environ["OPENAI_API_KEY"],
                        openai_api_version = os.environ["OPENAI_API_VERSION"],
                        openai_api_base = os.environ["OPENAI_API_BASE"],
                        temperature = 0, 
                        streaming=True
                        )
        
        max_number_of_rows = 40
        agent_analytics_node = create_pandas_dataframe_agent(
                                                        llm, 
                                                        st.session_state["df"], 
                                                        verbose=False, 
                                                        agent_type=AgentType.OPENAI_FUNCTIONS,
                                                        reduce_k_below_max_tokens=True, # to not exceed token limit 
                                                        max_execution_time = 20,
                                                        early_stopping_method="generate", # will generate a final answer after the max_execution_time has been surpassed
                                                        # max_iterations=2, # to cap an agent at taking a certain number of steps
                                                    )
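        # NOTE (assumption): Tool is a pydantic model that silently ignores unknown
        # keyword arguments, so return_intermediate_steps below most likely has no effect here.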
        tool_analytics_node = Tool(
                                return_intermediate_steps=False,
                                name='Analytics Node',
                                func=agent_analytics_node.run,
                                description=f''' 
                                            This tool is useful when you need to answer questions about data stored in a pandas dataframe, referred to as 'df'. 
                                            'df' comprises the following columns: {st.session_state["df"].columns.to_list()}.
                                            Here is a sample of the data: {st.session_state["df"].head(5)}.
                                            When working with df, ensure not to output more than {max_number_of_rows} rows at once, either in intermediate steps or in the final answer. This is because df could contain too many rows, which could potentially overload memory, for example instead of `df[df['survey_comment'].str.contains('wet', na=False, case=False)]['survey_comment'].tolist()` use `df[df['survey_comment'].str.contains('wet', na=False, case=False)]['survey_comment'].head({max_number_of_rows}).tolist()`.
                                            '''
                            )              
        
        tools = [tool_analytics_node] 
        chat_agent = ConversationalChatAgent.from_llm_and_tools(llm=llm, tools=tools, return_intermediate_steps=False)
    
        
        executor = AgentExecutor.from_agent_and_tools(
                                                        agent=chat_agent,
                                                        tools=tools,
                                                        memory=memory,
                                                        return_intermediate_steps=False,
                                                        handle_parsing_errors=True,
                                                        verbose=False,
                                                    )
        
        with st.chat_message("assistant"):
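            # NOTE: StreamlitCallbackHandler is what renders the agent's thoughts and
            # tool calls in the UI; expand_new_thoughts=False only makes those expanders
            # start collapsed. Not passing this callback at all would hide the steps.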
          
            st_cb = StreamlitCallbackHandler(st.container(), expand_new_thoughts=False)
            response = executor(prompt, callbacks=[st_cb])
            st.write(response["output"])
Tags: python, chatbot, streamlit, langchain, llm
1 Answer

One way to optimize it (at least the one I know of) is a complete overhaul of the architecture using a RAG model. In case you don't know, RAG stands for Retrieval-Augmented Generation: essentially, you give the AI a document, it builds a vector database over it, and it answers questions about that document. While this does use Langchain, it also uses OpenAI; I'm afraid I've never tried it without OpenAI, since you need it to create the embeddings, which are essential for RAG.
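That said, OpenAI is not strictly required for embeddings: LangChain also wraps local embedding models. A minimal sketch of swapping one in, assuming the sentence-transformers package is installed (the model name below is just one common example):

from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# A local embedding model instead of OpenAIEmbeddings; no API key required.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vector_store = FAISS.from_texts(["some text", "more text"], embedding=embeddings)
docs = vector_store.similarity_search("a question about the text", k=2)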

I've attached some code using Streamlit and Langchain that gives the user a file drop box, after which you can ask any question about the document (like a chat-with-PDF app). It only works with txt and pdf files.

import streamlit as st
from dotenv import load_dotenv
import pickle
from PyPDF2 import PdfReader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.callbacks import get_openai_callback
import os

# Sidebar contents
with st.sidebar:
    st.title('💬 LLM Chat App')
    st.markdown('''
    ## About
    This app is an LLM-powered chatbot built using:
    - [Streamlit](https://streamlit.io/)
    - [LangChain](https://python.langchain.com/)
    - [OpenAI](https://platform.openai.com/docs/models) LLM model
 
    ''')
    st.write('Created for Non-Profit Purposes')

load_dotenv()
# The original snippet used OPEN_AI_KEY without defining it; load it from the environment.
OPEN_AI_KEY = os.getenv("OPENAI_API_KEY")

def main():
    st.header("Chat with PDF 💬")

    # upload a PDF file
    pdf = st.file_uploader("Upload your PDF", type='pdf')

    if pdf is not None:
        pdf_reader = PdfReader(pdf)

        text = ""
        for page in pdf_reader.pages:
            text += page.extract_text()

        text_splitter = RecursiveCharacterTextSplitter(
            chunk_size=1000,
            chunk_overlap=200,
            length_function=len
        )
        chunks = text_splitter.split_text(text=text)

        store_name = pdf.name[:-4]
        st.write(f'{store_name}')

        if os.path.exists(f"{store_name}.pkl"):
            with open(f"{store_name}.pkl", "rb") as f:
                VectorStore = pickle.load(f)
        else:
            embeddings = OpenAIEmbeddings(openai_api_key=OPEN_AI_KEY)
            VectorStore = FAISS.from_texts(chunks, embedding=embeddings)
            with open(f"{store_name}.pkl", "wb") as f:
                pickle.dump(VectorStore, f)

        query = st.text_input("Ask questions about your PDF file:")

        if query:
            docs = VectorStore.similarity_search(query=query, k=3)
            llm = OpenAI(openai_api_key=OPEN_AI_KEY)
            chain = load_qa_chain(llm=llm, chain_type="stuff")
            with get_openai_callback() as cb:
                response = chain.run(input_documents=docs, question=query)
                st.write(response)

if __name__ == '__main__':
    main()
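One caveat about the pickle cache above: newer LangChain releases provide FAISS.save_local() and FAISS.load_local() for persisting the index, which avoids pickling live Python objects and tends to be more robust across library versions.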

(I apologize for the sloppy file reading, but I think you get the idea.) Of course, you are doing something quite different, but you can edit the code as needed so that you don't have to upload a file at the start. This architecture may not suit you at all, and you may be doing something entirely different, but hopefully you now feel inspired to try RAG or something similar and can research it further. Hope this helps.
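Since the original question works on a CSV column rather than a PDF, the same pattern can be pointed at a dataframe. A minimal sketch, assuming the EDW.csv file and survey_comment column from the question, and OpenAI embeddings for simplicity:

import pandas as pd
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

df = pd.read_csv('./data/EDW.csv')
# Index a single text column; drop empty rows so FAISS only gets clean strings.
comments = df['survey_comment'].dropna().astype(str).tolist()
# Assumes OPENAI_API_KEY is set in the environment.
vector_store = FAISS.from_texts(comments, embedding=OpenAIEmbeddings())
# Retrieve only the k most relevant comments instead of listing every match.
docs = vector_store.similarity_search("comments about wet conditions", k=5)

Because the model then only ever sees a handful of retrieved rows, this addresses both the slowness and the oversized intermediate output from the original question.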
