Integrating a llama_index VectorStoreIndex with a LangChain agent for a RAG application

Problem description · Votes: 0 · Answers: 1

I've been reading documentation all day and can't seem to figure out how to create a VectorStoreIndex with llama_index and use the resulting embeddings as supplemental information for a RAG application/chatbot that converses with a user. I want to use llama_index because it has some nice implementations of more advanced retrieval techniques, such as sentence window retrieval and auto-merging retrieval (to be fair, I haven't investigated whether LangChain also supports these kinds of vector retrieval methods). I want to use LangChain because of its functionality for developing more complex prompt templates (again, I haven't really investigated whether llama_index supports this either).

My goal is to eventually evaluate how these different retrieval methods perform in the context of the application/chatbot. I know how to evaluate them against a separate file of evaluation questions, but I also want to do things like compare response speed and how natural the responses feel, token usage, and so on.
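For illustration, the kind of per-query measurement I have in mind looks roughly like the sketch below; `run_query` is any callable that executes one query (a LangChain chain or a llama_index engine), and LangChain's OpenAI callback only counts tokens for calls routed through LangChain:

    import time
    from langchain.callbacks import get_openai_callback

    def profile_query(run_query, question):
        """Time one query and capture OpenAI token usage for LangChain-side calls."""
        start = time.perf_counter()
        with get_openai_callback() as cb:
            answer = run_query(question)
        return {
            "answer": answer,
            "seconds": time.perf_counter() - start,
            "total_tokens": cb.total_tokens,
            "cost_usd": cb.total_cost,
        }

    # e.g. profile_query(lambda q: chain.invoke({"messages": [("user", q)]}), "...")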

Code for a minimal reproducible example follows.

1) LangChain chatbot initialization

    from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
    from langchain.memory import ChatMessageHistory
    from langchain_openai import ChatOpenAI

    prompt = ChatPromptTemplate.from_messages(
        [
            (
                "system",
                """You are the world's greatest... \
                Use this document base to help you provide the best support possible to everyone you engage with.
                """,
            ),
            MessagesPlaceholder(variable_name="messages"),
        ]
    )

    # llm_model (e.g. a model name string) is defined elsewhere
    chat = ChatOpenAI(model=llm_model, temperature=0.7)

    chain = prompt | chat

    chat_history = ChatMessageHistory()

    while True:
        user_input = input("You: ")

        # Check for the exit command before spending an LLM call on it
        if user_input.lower() == 'exit':
            break

        chat_history.add_user_message(user_input)
        response = chain.invoke({"messages": chat_history.messages})

        print("AI:", response.content)
        chat_history.add_ai_message(response)

2) Llama Index sentence window retrieval
    import os

    from llama_index.core import (
        ServiceContext,
        StorageContext,
        VectorStoreIndex,
        load_index_from_storage,
    )
    from llama_index.core.node_parser import SentenceWindowNodeParser
    from llama_index.core.indices.postprocessor import MetadataReplacementPostProcessor
    from llama_index.core.postprocessor import LLMRerank

    class SentenceWindowUtils:
        def __init__(self, documents, llm, embed_model, sentence_window_size):
            self.documents = documents
            self.llm = llm
            self.embed_model = embed_model
            self.sentence_window_size = sentence_window_size

            # Parse documents into sentence nodes that carry a surrounding
            # window of sentences in their metadata
            self.node_parser = SentenceWindowNodeParser.from_defaults(
                window_size=self.sentence_window_size,
                window_metadata_key="window",
                original_text_metadata_key="original_text",
            )

            self.sentence_context = ServiceContext.from_defaults(
                llm=self.llm,
                embed_model=self.embed_model,
                node_parser=self.node_parser,
            )

        def build_sentence_window_index(self, save_dir):
            # Build the index once and persist it; reload it on later runs
            if not os.path.exists(save_dir):
                os.makedirs(save_dir)
                sentence_index = VectorStoreIndex.from_documents(
                    self.documents, service_context=self.sentence_context
                )
                sentence_index.storage_context.persist(persist_dir=save_dir)
            else:
                sentence_index = load_index_from_storage(
                    StorageContext.from_defaults(persist_dir=save_dir),
                    service_context=self.sentence_context,
                )

            return sentence_index

        def get_sentence_window_query_engine(self, sentence_index, similarity_top_k=6, rerank_top_n=3):
            # Replace each retrieved sentence with its full window, then rerank with the LLM
            postproc = MetadataReplacementPostProcessor(target_metadata_key="window")
            rerank = LLMRerank(top_n=rerank_top_n, service_context=self.sentence_context)

            sentence_window_engine = sentence_index.as_query_engine(
                similarity_top_k=similarity_top_k, node_postprocessors=[postproc, rerank]
            )

            return sentence_window_engine


    # documents, llm, and embed_model are defined elsewhere
    sentence_window = SentenceWindowUtils(documents=documents, llm=llm, embed_model=embed_model, sentence_window_size=1)
    sentence_window_1 = sentence_window.build_sentence_window_index(save_dir='./indexes/sentence_window_index_1')
    sentence_window_engine_1 = sentence_window.get_sentence_window_query_engine(sentence_window_1)
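For reference, the engine already works standalone; a quick sanity check looks like this (the question string is just a placeholder):

    # Query the sentence window engine directly, outside of LangChain
    response = sentence_window_engine_1.query("<question about the document base>")
    print(response.response)            # synthesized answer
    for node in response.source_nodes:  # retrieved windows that backed the answer
        print(node.score, node.text[:100])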

Both code blocks run fine independently. The goal, though, is that when a query requires retrieval against the existing document base, I can use the sentence_window_engine that was built. I suppose I could retrieve the relevant information for a query and then pass it into a follow-up prompt to the chatbot, but I'd like to avoid stuffing document data into the prompt if I can.
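The closest I've come is wrapping the query engine as a LangChain tool, so that an agent decides when to hit the document base; an untested sketch (the tool name and description are placeholders of mine):

    from langchain.agents import AgentType, Tool, initialize_agent

    # Expose the llama_index query engine to LangChain as a tool, so documents
    # are only retrieved when the agent decides a search is needed
    doc_search_tool = Tool(
        name="document_base_search",
        func=lambda q: str(sentence_window_engine_1.query(q)),
        description="Searches the internal document base and returns relevant passages.",
    )

    agent = initialize_agent(
        tools=[doc_search_tool],
        llm=chat,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
    )

But I'm not sure this is the intended integration path between the two libraries.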

Any suggestions?

python langchain embedding large-language-model llama-index
1 Answer

Votes: 0

I never found the exact mechanism I was hoping for to retrieve information through llama_index, but I did find a workaround that does essentially what I originally wanted to avoid: I query my document base and add the result as context information for my chatbot.

#### Conversation Prompt Chain ####
from langchain_core.messages import AIMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """You are the world's greatest...
            You have access to an extensive document base of information.
            Relevant information for the user query is provided below. Use the information at your own discretion if it improves the quality of the response.
            A summary of the previous conversation is also provided to contextualize you on the previous conversation.

            <<Relevant Information>>
            {relevant_information}


            <<Previous Conversation Summary>>
            {previous_conversation}


            <<Current Prompt>>
            {user_input}
            """,
        ),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

chat = ChatOpenAI(model=llm_model, temperature=0.0)

chain = prompt | chat


### Application Start ###

while True:
    # Some code....
    if route['destination'] == "data querying":
        formatted_response = query_and_format_sql(username, password, host, port, mydatabase, query_prompt, model='gpt-4', client_name=client_name, user_input=user_input)
        print(formatted_response)
        chat_history.add_ai_message(AIMessage(f'The previous query triggered a SQL agent response that was {formatted_response}'))
    else:
        # Search the document base with the llama_index query engine
        RAG_Context = sentence_window_engine_1.query(user_input)

        # Inject the retrieved information into the chatbot's context
        context_with_relevant_info = {
            "user_input": user_input,
            "messages": chat_history.messages,
            "previous_conversation": memory.load_memory_variables({}),
            "relevant_information": RAG_Context,  # ==> relevant information from llama_index goes here
        }

        response = chain.invoke(context_with_relevant_info)

I haven't run into token problems yet, but I can imagine that as my application grows and scales, it could hit issues trying to inject the relevant information, the message history, and the prompt all at once. I limit my memory with ConversationBufferMemory, and so far that seems to work well.
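A minimal sketch of that kind of bounded memory, using LangChain's windowed buffer variant (my exact setup may differ):

from langchain.memory import ConversationBufferWindowMemory

# Keep only the last k exchanges so the injected history stays bounded
memory = ConversationBufferWindowMemory(k=5)

# After each turn, record the exchange...
memory.save_context({"input": user_input}, {"output": response.content})

# ...and load the bounded history for the next prompt's {previous_conversation}
previous_conversation = memory.load_memory_variables({})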
