How to fine-tune FC1 and FC2 in VGG-19 with PyTorch?

Problem description

I need to fine-tune a pretrained VGG-19 with PyTorch. I have these specific tasks:

  1. Fine-tune the weights of all layers in the VGG-19 network.
  2. Fine-tune the weights of only the last two fully connected layers (FC1 and FC2) in the VGG-19 network. This is the only information I was given.

The VGG-19 structure is as follows:

VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (17): ReLU(inplace=True)
    (18): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (19): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (24): ReLU(inplace=True)
    (25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (26): ReLU(inplace=True)
    (27): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (31): ReLU(inplace=True)
    (32): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (33): ReLU(inplace=True)
    (34): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (35): ReLU(inplace=True)
    (36): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)

I tried the first task, and I think this is correct:

model = models.vgg19(pretrained=True)

for param in model.parameters():
    param.requires_grad = True

model.classifier[6] = nn.Linear(4096, len(class_to_idx))

But I can't figure out the second task. I tried this, but I'm not sure about it:

model2 = models.vgg19(pretrained=True)

for param in model2.parameters():
    param.requires_grad = False

# Set requires_grad to True for FC1 and FC2
for param in model2.classifier[0].parameters():
    param.requires_grad = True

for param in model2.classifier[3].parameters():
    param.requires_grad = True

# Modify the last fully connected layers for the number of classes in your dataset
model2.classifier[6] = nn.Linear(4096, len(class_to_idx))

How should I do the second part? Should I keep model2.classifier[6], or define a new Sequential?

Tags: python deep-learning pytorch neural-network conv-neural-network
1 Answer

Your approach is basically correct. The steps to implement the second task are:

  1. Load the pretrained VGG-19 model.
  2. Freeze the weights of all layers except the last two fully connected layers (FC1 and FC2). If you define a new Sequential classifier instead, those layers will not have pretrained weights, so that approach would mean training them from scratch rather than fine-tuning existing weights.
  3. (Optional) Modify the last layer to match the number of classes in your dataset.

As you showed above, the structure of the classifier part of VGG-19 is:

     (classifier): Sequential(
        (0): Linear(in_features=25088, out_features=4096, bias=True) # FC1
        (1): ReLU(inplace=True)
        (2): Dropout(p=0.5, inplace=False)
        (3): Linear(in_features=4096, out_features=4096, bias=True) # FC2
        (4): ReLU(inplace=True)
        (5): Dropout(p=0.5, inplace=False)
        (6): Linear(in_features=4096, out_features=1000, bias=True) # Original output layer
      )

With the code below, only FC1 and FC2 (and the final classifier layer, if you choose to modify it) are updated during training, while the rest of the network stays frozen:

    import torch.nn as nn
    from torchvision import models
    from torchvision.models import VGG19_Weights
    
    # Load the pretrained model
    vgg19_model = models.vgg19(weights=VGG19_Weights.IMAGENET1K_V1)  # since torchvision 0.13, the 'pretrained' argument is deprecated and may be removed in the future
    
    # Freeze all layers' weights in the model
    for param in vgg19_model.parameters():  # get all the parameters of the model
        param.requires_grad = False  # these layers won't be updated during training
    
    # Unfreeze weights of last two fully connected layers (FC1 and FC2)
    for param in vgg19_model.classifier[0].parameters():
        param.requires_grad = True  # will be updated during training
    for param in vgg19_model.classifier[3].parameters():
        param.requires_grad = True  # will be updated during training
    
    # (Recommended) Modify the last layer for your number of classes
    class_to_idx = TODO  # mapping from class names to indices for your dataset
    num_classes = len(class_to_idx)
    vgg19_model.classifier[6] = nn.Linear(4096, num_classes)
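
As a quick sanity check (not part of the original answer), you can list which parameters are still trainable after the freezing step; with the code above, only the classifier.0, classifier.3, and classifier.6 weights and biases should appear:

    # Hypothetical sanity check: print the parameters that will actually be updated
    for name, param in vgg19_model.named_parameters():
        if param.requires_grad:
            print(name)
    # Expected: classifier.0.weight, classifier.0.bias,
    #           classifier.3.weight, classifier.3.bias,
    #           classifier.6.weight, classifier.6.bias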

The last fully connected layer (which originally has 1000 output features for ImageNet) is replaced so that its number of output features equals the number of classes in your dataset; this is the recommended (if not required) approach. If your task has a different number of classes, you must adapt the model to output the correct number of predictions. Also keep in mind that even if your task happens to have the same number of classes as ImageNet, the classes themselves are probably different, so retraining the output layer helps the model discriminate between the classes specific to your task.
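
When you set up training, you can also pass only the trainable parameters to the optimizer, so the frozen feature extractor carries no optimizer state. A minimal sketch, assuming SGD with illustrative hyperparameters (not part of the original answer):

    import torch

    # Only parameters with requires_grad=True are handed to the optimizer,
    # so the frozen layers are never updated during training.
    trainable_params = [p for p in vgg19_model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(trainable_params, lr=1e-3, momentum=0.9)  # hypothetical values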
