I am currently doing information extraction with NER. My dataset's domain is (mostly) computer science. It contains the labels/tags "TUJUAN", "METODE", and "TEMUAN". The problem is that almost 80-90% of the data is tagged O, meaning it carries no meaningful label. The model's precision and recall are 0, while accuracy is around 0.78. I am using IndoBERT as the model for the NER task.
I suspect this happens because my dataset is extremely imbalanced. As a first step, I want to modify the loss function from the BertForTokenClassification documentation to Dice Loss or Focal Loss, as mentioned here, but I don't know how to do this because my Python knowledge is still weak.
class BertForTokenClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels

        self.bert = BertModel(config, add_pooling_layer=False)
        classifier_dropout = (
            config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
        )
        self.dropout = nn.Dropout(classifier_dropout)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)

        # Initialize weights and apply final processing
        self.post_init()

    @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
    @add_code_sample_docstrings(
        checkpoint=_CHECKPOINT_FOR_TOKEN_CLASSIFICATION,
        output_type=TokenClassifierOutput,
        config_class=_CONFIG_FOR_DOC,
        expected_output=_TOKEN_CLASS_EXPECTED_OUTPUT,
        expected_loss=_TOKEN_CLASS_EXPECTED_LOSS,
    )
    def forward(
        self,
        input_ids: Optional[torch.Tensor] = None,
        attention_mask: Optional[torch.Tensor] = None,
        token_type_ids: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.Tensor] = None,
        head_mask: Optional[torch.Tensor] = None,
        inputs_embeds: Optional[torch.Tensor] = None,
        labels: Optional[torch.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple[torch.Tensor], TokenClassifierOutput]:
        r"""
        labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        sequence_output = outputs[0]

        sequence_output = self.dropout(sequence_output)
        logits = self.classifier(sequence_output)

        loss = None
        if labels is not None:
            loss_fct = CrossEntropyLoss()
            loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
My full code is here.
Can I get any help on how to handle the imbalanced dataset, given my problem?
Imbalanced datasets in NER can be quite tricky to handle, but there are several strategies you can adopt to deal with the problem.
You can handle the class imbalance by assigning different weights to the different classes in the cross-entropy loss function.
Focal Loss is designed to address class imbalance by down-weighting well-classified examples and focusing more on hard, misclassified ones. This helps in situations where the majority class dominates the loss computation.
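If you want to try Focal Loss, here is a minimal token-level sketch. The FocalLoss class, its gamma/alpha parameters, and the masking scheme are my own illustration (they are not part of Transformers); it is meant as a drop-in replacement for the CrossEntropyLoss() call inside forward:

import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Focal Loss for token classification (illustrative sketch, not a library class).

    gamma > 0 down-weights well-classified tokens; alpha is an optional
    tensor of per-class weights, shape (num_labels,), similar to the
    `weight` argument of CrossEntropyLoss.
    """
    def __init__(self, gamma=2.0, alpha=None, ignore_index=-100):
        super().__init__()
        self.gamma = gamma
        self.alpha = alpha
        self.ignore_index = ignore_index

    def forward(self, logits, labels):
        # logits: (N, num_labels), labels: (N,) -- same shapes as in the original loss call
        mask = labels != self.ignore_index          # drop padding / special tokens
        logits, labels = logits[mask], labels[mask]
        log_probs = F.log_softmax(logits, dim=-1)
        log_pt = log_probs.gather(1, labels.unsqueeze(1)).squeeze(1)  # log prob of the true class
        pt = log_pt.exp()
        focal = -((1 - pt) ** self.gamma) * log_pt  # down-weight easy tokens
        if self.alpha is not None:
            focal = focal * self.alpha.to(logits.device)[labels]      # per-class weighting
        return focal.mean()

You would then call it exactly like the original loss function, e.g. loss = FocalLoss(gamma=2.0)(logits.view(-1, self.num_labels), labels.view(-1)).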
Dice Loss is another loss function commonly used for imbalanced datasets. It measures the overlap between the predicted mask and the target mask. This loss function is most common in tasks such as segmentation, but it can also be applied to NER.
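A soft multi-class Dice Loss over tokens could be sketched as follows. This adaptation for NER, including the smooth term that guards against division by zero, is an assumption on my part, so treat it as a starting point rather than a reference implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class DiceLoss(nn.Module):
    """Soft multi-class Dice Loss over tokens (illustrative sketch for NER)."""
    def __init__(self, smooth=1.0, ignore_index=-100):
        super().__init__()
        self.smooth = smooth
        self.ignore_index = ignore_index

    def forward(self, logits, labels):
        # logits: (N, num_labels), labels: (N,)
        num_labels = logits.size(-1)
        mask = labels != self.ignore_index                  # drop padding / special tokens
        logits, labels = logits[mask], labels[mask]
        probs = F.softmax(logits, dim=-1)                   # (N, C) soft predictions
        one_hot = F.one_hot(labels, num_classes=num_labels).float()  # (N, C) targets
        intersection = (probs * one_hot).sum(dim=0)         # per-class overlap
        cardinality = probs.sum(dim=0) + one_hot.sum(dim=0)
        dice = (2 * intersection + self.smooth) / (cardinality + self.smooth)
        return 1 - dice.mean()                              # average over classes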
You can actually modify the code to implement a weighted cross-entropy loss. In Python, the modified version of the class above would look something like this:
import torch.nn.functional as F

class BertForTokenClassification(BertPreTrainedModel):
    def __init__(self, config, class_weights=None):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.class_weights = class_weights

        self.bert = BertModel(config, add_pooling_layer=False)
        classifier_dropout = (
            config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
        )
        self.dropout = nn.Dropout(classifier_dropout)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Optional[torch.Tensor] = None,
        attention_mask: Optional[torch.Tensor] = None,
        token_type_ids: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.Tensor] = None,
        head_mask: Optional[torch.Tensor] = None,
        inputs_embeds: Optional[torch.Tensor] = None,
        labels: Optional[torch.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple[torch.Tensor], TokenClassifierOutput]:
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        # Same encoder pass as in the original forward
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        sequence_output = self.dropout(outputs[0])
        logits = self.classifier(sequence_output)

        loss = None
        if labels is not None:
            # Calculate loss with class weights (fall back to unweighted CE if none given)
            loss_weights = None
            if self.class_weights is not None:
                loss_weights = torch.tensor(self.class_weights, dtype=torch.float32).to(logits.device)
            loss_fct = nn.CrossEntropyLoss(weight=loss_weights)
            loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
Going over the above: you can pass a class_weights argument when initializing the model, and you can compute the class weights from the frequency of each class in your dataset. It would look something like this:
# Calculate class weights
class_weights = [0.1, 0.3, 0.6]  # Adjust these weights based on your dataset (one per label)
model = BertForTokenClassification.from_pretrained(
    "indobenchmark/indobert-base-p2", num_labels=num_labels, class_weights=class_weights
)
Adjust the class_weights values to match your label distribution; note that the list needs one weight per label, including the O tag. This approach gives more weight to the minority classes, which helps when training on an imbalanced dataset.
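As a rough sketch of how those weights could be computed from label frequencies (assuming train_labels is a flat list of integer label ids over all training tokens; inverse frequency is just one common scheme):

from collections import Counter

# Assumption: train_labels is a flat list of integer label ids (0 .. num_labels - 1)
# over all training tokens, and num_labels is known.
counts = Counter(train_labels)
total = sum(counts.values())

# Inverse-frequency weights (one per label); max(..., 1) guards against
# labels that never occur in the training split.
class_weights = [total / max(counts[i], 1) for i in range(num_labels)]

# Optionally normalize the weights; this only rescales the loss magnitude.
s = sum(class_weights)
class_weights = [w / s for w in class_weights]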
Check the above and implement it carefully, and feel free to ask if you run into any problems.
For reference, you can also read more at this link: