I'm trying to use LangChain embeddings with llama-index in Google Colab, using the code below.
These are the installs:
pip install pypdf
pip install -q transformers einops accelerate langchain bitsandbytes
pip install sentence-transformers
pip3 install llama-index --upgrade
pip install llama-index-llms-huggingface
huggingface-cli login
pip install -U llama-index-core llama-index-llms-openai llama-index-embeddings-openai
Then I ran this code in Google Colab:
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.llms.huggingface import HuggingFaceLLM
from llama_index.core.prompts.prompts import SimpleInputPrompt

# Read PDFs from the data directory
documents = SimpleDirectoryReader("/content/sample_data/Data").load_data()

# Prompt
system_prompt = ""  # placeholder; defined as a plain string in another cell
query_wrapper_prompt = SimpleInputPrompt("<|USER|>{query_str}<|ASSISTANT|>")

import torch

llm = HuggingFaceLLM(
    context_window=4096,
    max_new_tokens=256,
    generate_kwargs={"temperature": 0.0, "do_sample": False},
    system_prompt=system_prompt,
    query_wrapper_prompt=query_wrapper_prompt,
    tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
    model_name="meta-llama/Llama-2-7b-chat-hf",
    device_map="auto",
    # uncomment this if using CUDA to reduce memory usage
    model_kwargs={"torch_dtype": torch.float16, "load_in_8bit": True},
)

# Embeddings
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from llama_index.embeddings.langchain import LangchainEmbedding

embed_model = LangchainEmbedding(
    HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
)
Then I got this error:
ModuleNotFoundError: No module named 'llama_index.embeddings.langchain'
I'm using the latest version of llama-index, 0.10.26.
Can anyone suggest how to fix this error?
You already have
pip install llama-index-embeddings-openai
pip install llama-index-embeddings-huggingface
and in the same way you also need
pip install llama-index-embeddings-langchain
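The pattern behind this: since llama-index 0.10, each integration ships as its own pip package, and the package name mirrors the import path with dots and underscores replaced by hyphens. A minimal sketch of that naming convention (the helper `pip_package_for` is my own name, not part of llama-index):

```python
def pip_package_for(module_path: str) -> str:
    """Map a llama-index 0.10+ import path to its pip package name.

    Official integrations are published as per-module packages whose
    names mirror the import path, with hyphens in place of dots and
    underscores.
    """
    return module_path.replace("_", "-").replace(".", "-")

# The failing import maps directly to the package that must be installed:
print(pip_package_for("llama_index.embeddings.langchain"))
# llama-index-embeddings-langchain
print(pip_package_for("llama_index.llms.huggingface"))
# llama-index-llms-huggingface
```

After `pip install llama-index-embeddings-langchain`, the import `from llama_index.embeddings.langchain import LangchainEmbedding` should resolve.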