How to remove layers from a Huggingface Transformers GPT2 pretrained model?

Question

My code:

from transformers import GPT2Config, GPT2Model
from transformers import AutoTokenizer, AutoModelForMaskedLM, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
print(model)

This is the console output listing the model architecture:

GPT2LMHeadModel(
  (transformer): GPT2Model(
    (wte): Embedding(50257, 768)
    (wpe): Embedding(1024, 768)
    (drop): Dropout(p=0.1, inplace=False)
    (h): ModuleList(
      (0-11): 12 x GPT2Block(
        (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (attn): GPT2Attention(
          (c_attn): Conv1D()
          (c_proj): Conv1D()
          (attn_dropout): Dropout(p=0.1, inplace=False)
          (resid_dropout): Dropout(p=0.1, inplace=False)
        )
        (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (mlp): GPT2MLP(
          (c_fc): Conv1D()
          (c_proj): Conv1D()
          (act): NewGELUActivation()
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
    )
    (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
  )
  (lm_head): Linear(in_features=768, out_features=50257, bias=False)
)

I want to remove the first layer:

(wte): Embedding(50257, 768)
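
(For reference, in the printed tree that module is the model.transformer.wte attribute of the GPT2LMHeadModel, which can be inspected directly, e.g.:)

print(model.transformer.wte)          # Embedding(50257, 768)
print(model.get_input_embeddings())   # the same module, via the generic accessor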

I tried the following:

import copy
import torch.nn as nn

def deleteEncodingLayers(model, num_layers_to_keep):  # must pass in the full BERT model
    oldModuleList = model.bert.encoder.layer
    newModuleList = nn.ModuleList()

    # Iterate over all layers, keeping only the first num_layers_to_keep.
    for i in range(num_layers_to_keep):
        newModuleList.append(oldModuleList[i])

    # Create a copy of the model, swap in the truncated layer list, and return it.
    copyOfModel = copy.deepcopy(model)
    copyOfModel.bert.encoder.layer = newModuleList

    return copyOfModel

But that did not work. Does anyone know how to solve this?
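
Presumably this fails because a GPT2 model has no model.bert attribute: its transformer blocks live under model.transformer.h and the embedding under model.transformer.wte. A rough sketch of the same truncation idea mapped onto GPT2's module names (the helper name is made up for illustration, and it only drops transformer blocks, it still does not remove wte):

import copy
import torch.nn as nn

def keepFirstGPT2Blocks(model, num_blocks_to_keep):
    # GPT2 stores its blocks in model.transformer.h, not model.bert.encoder.layer.
    newModel = copy.deepcopy(model)
    newModel.transformer.h = nn.ModuleList(newModel.transformer.h[:num_blocks_to_keep])
    newModel.config.n_layer = num_blocks_to_keep  # keep the config consistent with the new depth
    return newModel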

python machine-learning deep-learning nlp transformer-model
1 Answer

Try bypassing the embedding layer and working with inputs_embeds directly, along these lines:

from transformers import GPT2Model


class GPT2WithoutWTE(GPT2Model):
    def __init__(self, config):
        super().__init__(config)
        # Remove the word token embedding layer.
        del self.wte

    def forward(
        self,
        inputs_embeds,
        attention_mask=None,
        token_type_ids=None,
        position_ids=None,
        head_mask=None,
        encoder_hidden_states=None,
        encoder_attention_mask=None,
        past_key_values=None,
        use_cache=None,
        output_attentions=None,
        output_hidden_states=None,
        return_dict=None,
    ):
        # Bypass the (deleted) embedding layer and pass inputs_embeds straight through.
        # token_type_ids would normally also be embedded via wte, so leave it as None here.
        return super().forward(
            inputs_embeds=inputs_embeds,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_attention_mask,
            past_key_values=past_key_values,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

For the input embeddings, you can use something like this:

inputs_embeds = torch.rand(1, 10, config.n_embd)

Load the GPT2 config, pass it to the class, and then feed inputs_embeds to the new model.
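
Putting the pieces together, a minimal usage sketch (variable names are just for illustration; note that building the model from the config alone gives freshly initialised weights rather than the pretrained ones):

import torch
from transformers import GPT2Config

# Load the standard GPT2 configuration and build the wte-free model from it.
config = GPT2Config.from_pretrained("openai-community/gpt2")
model = GPT2WithoutWTE(config)

# Feed embeddings directly instead of token ids: shape (batch, seq_len, hidden_size).
inputs_embeds = torch.rand(1, 10, config.n_embd)
outputs = model(inputs_embeds=inputs_embeds)

print(outputs.last_hidden_state.shape)  # torch.Size([1, 10, 768])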
