Constant training loss and validation loss


I am doing sentiment analysis of movie reviews with an RNN model in PyTorch, but the training loss and validation loss somehow stay constant throughout the whole training. I have looked through various online resources but am still stuck.

Could someone take a look at my code?

Some of the parameters are specified by the assignment:

embedding_dim = 64
n_layers = 1
n_hidden = 128
dropout = 0.5
batch_size = 32

My main code:

import torch
from torchtext import data              # torchtext's (legacy) Field/TabularDataset API
from nltk.tokenize import word_tokenize

txt_field = data.Field(tokenize=word_tokenize, lower=True, include_lengths=True, batch_first=True)
label_field = data.Field(sequential=False, use_vocab=False, batch_first=True)

train = data.TabularDataset(path=part2_filepath+"train_Copy.csv", format='csv',
                            fields=[('label', label_field), ('text', txt_field)], skip_header=True)
validation = data.TabularDataset(path=part2_filepath+"validation_Copy.csv", format='csv',
                            fields=[('label', label_field), ('text', txt_field)], skip_header=True)

txt_field.build_vocab(train, min_freq=5)
label_field.build_vocab(train, min_freq=2)

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
train_iter, valid_iter, test_iter = data.BucketIterator.splits(
    (train, validation, test),  # `test` is assumed to be a TabularDataset defined elsewhere (not shown)
    batch_size=32,
    sort_key=lambda x: len(x.text),
    sort_within_batch=True,
    device=device)

n_vocab = len(txt_field.vocab)
embedding_dim = 64
n_hidden = 128
n_layers = 1
dropout = 0.5

model = Text_RNN(n_vocab, embedding_dim, n_hidden, n_layers, dropout)

optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
criterion = torch.nn.BCELoss().to(device)

N_EPOCHS = 15
best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):
    train_loss, train_acc = RNN_train(model, train_iter, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iter, criterion)

My model:

import torch.nn as nn

class Text_RNN(nn.Module):
    def __init__(self, n_vocab, embedding_dim, n_hidden, n_layers, dropout):
        super(Text_RNN, self).__init__()
        self.n_layers = n_layers
        self.n_hidden = n_hidden
        self.emb = nn.Embedding(n_vocab, embedding_dim)
        self.rnn = nn.RNN(
            input_size=embedding_dim,
            hidden_size=n_hidden,
            num_layers=n_layers,
            dropout=dropout,
            batch_first=True
        )
        self.sigmoid = nn.Sigmoid()
        self.linear = nn.Linear(n_hidden, 2)

    def forward(self, sent, sent_len):
        sent_emb = self.emb(sent)
        outputs, hidden = self.rnn(sent_emb)
        prob = self.sigmoid(self.linear(hidden.squeeze(0)))

        return prob

My training function:

def RNN_train(model, iterator, optimizer, criterion):
    epoch_loss = 0
    epoch_acc = 0
    model.train()
    for batch in iterator:
        text, text_lengths = batch.text
        predictions = model(text, text_lengths)
        batch.label = batch.label.type(torch.FloatTensor).squeeze()
        predictions = torch.max(predictions.data, 1).indices.type(torch.FloatTensor)
        loss = criterion(predictions, batch.label)
        loss.requires_grad = True
        acc = binary_accuracy(predictions, batch.label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
        epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)

The output when I run it on 10 test reviews + 5 validation reviews:

Epoch [1/15]:   Train Loss: 15.351 | Train Acc: 44.44%  Val. Loss: 11.052 |  Val. Acc: 60.00%
Epoch [2/15]:   Train Loss: 15.351 | Train Acc: 44.44%  Val. Loss: 11.052 |  Val. Acc: 60.00%
Epoch [3/15]:   Train Loss: 15.351 | Train Acc: 44.44%  Val. Loss: 11.052 |  Val. Acc: 60.00%
Epoch [4/15]:   Train Loss: 15.351 | Train Acc: 44.44%  Val. Loss: 11.052 |  Val. Acc: 60.00%
...

If anyone can point me in the right direction, that would be much appreciated. I believe the problem is in the training code, since for most of it I followed this article: https://www.analyticsvidhya.com/blog/2020/01/first-text-classification-in-pytorch/

machine-learning pytorch recurrent-neural-network sentiment-analysis
1 Answer

In your training loop you are using the indices from the max operation, which is not differentiable, so you cannot track gradients through it. Since it is not differentiable, everything afterwards does not track the gradients either, and calling loss.backward() would fail.

# The indices of the max operation are not differentiable
predictions = torch.max(predictions.data, 1).indices.type(torch.FloatTensor)
loss = criterion(predictions, batch.label)
# Setting requires_grad to True to make .backward() work, although incorrectly.
loss.requires_grad = True

Presumably you wanted to counteract that by setting requires_grad, but that does not do what you expect, because no gradients are propagated to your model, since the only thing in your computational graph is the loss itself, and there is nowhere to go from there.
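
To make the break concrete, here is a minimal sketch (the tensor names are illustrative, not from your code):

import torch

logits = torch.randn(4, 2, requires_grad=True)  # stand-in for the model's output
indices = torch.max(logits, 1).indices          # integer argmax: not differentiable
print(indices.requires_grad)                    # False: the graph is cut here
preds = indices.type(torch.FloatTensor)
print(preds.grad_fn)                            # None: backward() has nothing to follow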

You were using the indices to get either 0 or 1, since the output of your model is essentially two classes and you wanted the one with the higher probability. For the Binary Cross Entropy loss you only need one class with a value between 0 and 1 (continuous), which you get by applying the sigmoid function.
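
As an illustration of what BCELoss expects (the values here are made up), a single sigmoid output per example is enough:

import torch

criterion = torch.nn.BCELoss()
logits = torch.randn(4, requires_grad=True)  # one raw score per example
probs = torch.sigmoid(logits)                # squashed into (0, 1)
labels = torch.tensor([0., 1., 1., 0.])      # float targets with the same shape
loss = criterion(probs, labels)
loss.backward()                              # gradients reach `logits` through the sigmoid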

So you need to change the output size of your final linear layer to 1:

self.linear = nn.Linear(n_hidden, 1)

and in your training loop you can remove the torch.max call and also the requires_grad:

# Squeeze the model's output to get rid of the single class dimension
predictions = model(text, text_lengths).squeeze()
batch.label = batch.label.type(torch.FloatTensor).squeeze()
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Since you only have one class at the end, an actual prediction will be anywhere between 0 and 1; to turn that into a hard 0 or 1 you can simply use 0.5 as the threshold, so everything below is considered a 0 and everything above a 1. If you are using the binary_accuracy function from the article you were following, that is done for you automatically: it rounds the outputs with torch.round.
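
For reference, a binary_accuracy in the spirit of that article might look like this (a sketch, not their verbatim code):

import torch

def binary_accuracy(preds, y):
    # Round the sigmoid outputs to 0 or 1, then compare with the labels.
    rounded_preds = torch.round(preds)
    correct = (rounded_preds == y).float()
    return correct.sum() / len(correct)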
