Getting the attention mask from a Hugging Face pipeline


How can I access the attention mask returned by a FeatureExtractionPipeline in Hugging Face?

The code below takes an embedding model, distributes it together with a Hugging Face dataset across 8 GPUs on a single node, and runs inference on the inputs. The attention mask is needed for mean pooling.

Code example:

from accelerate import Accelerator
from datasets import load_dataset
from transformers import AutoModel, AutoTokenizer, pipeline

import torch

accelerator = Accelerator()

model_name = "BAAI/bge-large-en-v1.5"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

pipe = pipeline(
    "feature-extraction",
    model=model,
    tokenizer=tokenizer,
    max_length=512,
    truncation=True,
    padding=True,
    batch_size=256,
    framework="pt",
    return_tensors=True,
    return_attention_mask=True,  # this does not make the pipeline return the mask
    device=accelerator.device,
)

dataset = load_dataset(
    "wikitext",
    "wikitext-2-v1",
    split="train",
)

#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Assume 8 processes

with accelerator.split_between_processes(dataset["text"]) as data:
    for out in pipe(data):
        # This is the failing line: the pipeline output contains only the
        # token embeddings, so there is no "attention_mask" key to index into.
        sentence_embeddings = mean_pooling(out, out["attention_mask"])

I need the attention mask from the pipeline to use for mean pooling.

Best,

Enrico

1 Answer

The pipeline object from the transformers library provides a convenient abstraction for quick model inference, but for more customized solutions it is usually a good idea to use the model directly. For example:

text = 'This is a test.'
tokenized = tokenizer(
    text,
    max_length=512,
    truncation=True,
    padding=True,
    return_attention_mask=True,
    return_tensors='pt',
).to(accelerator.device)
out = model(**tokenized)
embeddings = out.last_hidden_state
attention_mask = tokenized['attention_mask']
You can then use embeddings and attention_mask to compute the mean pooling. You could also consider using out.pooler_output instead of computing the mean pooling manually; however, I am not sure how pooler_output is computed in this case, so be careful.
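
For completeness, here is a minimal sketch of how this could be applied to a batch of sentences, reusing the mean_pooling helper from the question. The final L2 normalization is an assumption based on how BGE-style embedding models are commonly used for similarity search, not something stated in the question:

import torch
import torch.nn.functional as F

sentences = ['This is a test.', 'This is another test.']
tokenized = tokenizer(
    sentences,
    max_length=512,
    truncation=True,
    padding=True,          # pads to the longest sequence in the batch
    return_attention_mask=True,
    return_tensors='pt',
).to(accelerator.device)

with torch.no_grad():
    out = model(**tokenized)

# Reuse the mean_pooling helper defined in the question;
# the tokenizer output provides the attention mask directly.
sentence_embeddings = mean_pooling(out, tokenized['attention_mask'])

# Assumption: BGE-style embeddings are usually L2-normalized before
# computing cosine similarity; drop this line if that is not desired.
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

The same loop over accelerator.split_between_processes from the question can wrap this, processing the data in batches instead of calling the pipeline.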
