Vision Transformer: RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x1000 and 768x32) [closed]

Question · Votes: 0 · Answers: 1

I am trying to use a Vision Transformer model for regression, but I can't manage to replace the final classification layer with a regression layer:

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision.models import vit_b_16

class RegressionViT(nn.Module):
    def __init__(self, in_features=224 * 224 * 3, num_classes=1, pretrained=True):
        super(RegressionViT, self).__init__()
        self.vit_b_16 = vit_b_16(pretrained=pretrained)
        # Accessing the actual output feature size from vit_b_16
        self.regressor = nn.Linear(self.vit_b_16.heads[0].in_features, num_classes * batch_size)

    def forward(self, x):
        x = self.vit_b_16(x)
        x = self.regressor(x)
        return x


# Model
model = RegressionViT(num_classes=1)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

criterion = nn.MSELoss()  # Use appropriate loss function for regression
optimizer = optim.Adam(model.parameters(), lr=0.0001)

When I try to initialize and run the model, I get this error:

RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x1000 and 768x32)

The problem is that the regression layer and the vit_b_16 output layer don't match. What is the correct way to fix this?

python machine-learning deep-learning pytorch transformer-model
1 Answer

1 vote

If you look at the source code of VisionTransformer, you will notice that self.heads is a Sequential module, not a single Linear layer. By default it contains just one layer, named head, corresponding to the final 1000-way classification layer. This is why your forward pass fails: self.vit_b_16(x) already applies that classification head and returns a (32, 1000) tensor, while the extra regressor you stacked on top expects 768 input features. Instead of appending a new layer, override the existing head:

heads = self.vit_b_16.heads
heads.head = nn.Linear(heads.head.in_features, num_classes)