I'm trying to jump on the LCEL and LangServe train, but I'm having trouble understanding some of the "magic" involved in accessing variables that are set in the pipeline dict.
These variables are apparently resolvable from the prompt template. I'd like to retrieve those values in a custom function and the like, but it's not clear to me how to access them directly. Take the following contrived example, whose intent is to return the source documents along with the answer in the response:
from typing import List

from fastapi import FastAPI
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langserve import add_routes
from pydantic import BaseModel


class ChatResponse(BaseModel):
    answer: str
    sources: List[Document]


store = FAISS.from_texts(
    ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
)
retriever = store.as_retriever()

template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
llm = ChatOpenAI()


def format_response(answer):
    sources = []  # TODO look up source documents (key: 'context')
    return ChatResponse(answer=answer, sources=sources)


retrieval_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
    | RunnableLambda(format_response)
)

app = FastAPI()
add_routes(app, retrieval_chain, path="/chat", input_type=str, output_type=ChatResponse)
In format_response I've left a TODO to look up the source documents. I want to retrieve them from the pipeline's context key. How can I access this key, which was set in the first step of the chain?
From the docs: https://python.langchain.com/docs/use_cases/question_answering/sources/
from langchain_core.runnables import RunnableParallel


def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


rag_chain_from_docs = (
    RunnablePassthrough.assign(context=(lambda x: format_docs(x["context"])))
    | prompt
    | llm
    | StrOutputParser()
)

rag_chain_with_source = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
).assign(answer=rag_chain_from_docs)
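The key move here is `.assign`: unlike piping into `StrOutputParser()` (which collapses everything to a string), `.assign` keeps every existing key of the incoming dict and merges a new one in. Conceptually it behaves like this pure-Python sketch (not LangChain code, just an illustration of the data flow):

```python
def assign(state: dict, **computed) -> dict:
    """Mimic .assign: keep all existing keys, add one new key per
    computed function, each of which sees the whole incoming dict."""
    out = dict(state)
    for key, fn in computed.items():
        out[key] = fn(state)
    return out


state = {
    "context": ["harrison worked at kensho"],
    "question": "where did harrison work?",
}
state = assign(state, answer=lambda d: f"Answer based on {len(d['context'])} doc(s)")
# state now holds "context" and "question" AND the new "answer" key
```

Because nothing is dropped along the way, whatever comes after `.assign` can still read the retrieved documents under `"context"`.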
Then this:

rag_chain_with_source.invoke("where did harrison work ?")

returns:

{'context': [Document(page_content='harrison worked at kensho')],
 'question': 'where did harrison work ?',
 'answer': 'Harrison worked at Kensho.'}