I'm hitting a runtime error during classification with a custom model whose first layer is Conv2d(3, 64, kernel_size=(3, 3)): a layer expecting 64 channels received an input with 3 channels. The error occurs even though the input shape looks correct ([64, 3, 32, 32]) and matches what the model's initial layer expects.

**Shape of the first batch of inputs: torch.Size([64, 3, 32, 32])**
```
RuntimeError                              Traceback (most recent call last)
<ipython-input-25-b3283edbd61f> in <cell line: 22>()
     30         optimizer.zero_grad()
     31
---> 32         outputs = model2(inputs)
     33         loss = criterion(outputs, labels)
     34         loss.backward()

16 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    454                             weight, bias, self.stride,
    455                             _pair(0), self.dilation, self.groups)
--> 456         return F.conv2d(input, weight, bias, self.stride,
    457                         self.padding, self.dilation, self.groups)
    458

RuntimeError: Given groups=1, weight of size [64, 64, 3, 3], expected input[64, 3, 32, 32] to have 64 channels, but got 3 channels instead
```
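For reference, this error can be reproduced in isolation: the weight shape [64, 64, 3, 3] in the message means the failing convolution was declared with in_channels=64, but it received a 3-channel tensor. A minimal sketch (independent of my actual model, which is shown in the screenshot below):

```python
import torch
import torch.nn as nn

# A convolution declared with in_channels=64 has weight shape [64, 64, 3, 3].
conv = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3)

try:
    # Feeding a 3-channel tensor where 64 channels are expected
    conv(torch.randn(64, 3, 32, 32))
except RuntimeError as e:
    print(e)  # "... expected input[64, 3, 32, 32] to have 64 channels, but got 3 channels instead"
```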
**My model:**
[model2_code](https://i.stack.imgur.com/X3ZFm.png)
**Model2 OUTPUT:**
[OUTPUT_Model2](https://i.stack.imgur.com/vH5Sc.png)
**Lines that produce the error:**
```python
model2.to(device)
optimizer = optim.Adam(model2.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()

losses = []

def update_plot(epoch, loss):
    losses.append(loss)
    plt.plot(losses, '-x')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.title('Training Loss')
    plt.pause(0.001)

num_epochs = 10
for epoch in range(1, num_epochs + 1):
    start_time = time.time()
    running_loss = 0.0
    total_batches = 0
    for i, (inputs, labels) in enumerate(train_loader, 0):
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model2(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        total_batches += 1
    avg_loss = running_loss / total_batches
    update_plot(epoch, avg_loss)
    elapsed_time = time.time() - start_time
    if epoch % 10 == 0 or epoch == 1:
        print(f'Epoch {epoch}/{num_epochs} - Loss: {avg_loss:.4f} - Time: {elapsed_time:.2f}s')

plt.show()
```
Double-check your forward pass and make sure you haven't made a mistake there. I think you are feeding the original input (shape [64, 3, 32, 32]) into block_2 instead of the output of block_1 (shape [64, 64, 32, 32]).
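Since the model code is only available as a screenshot, here is a hypothetical sketch of what that forward-pass bug typically looks like. The names block_1 and block_2 are taken from the question; the exact layer definitions and the 10-class head are assumptions for illustration:

```python
import torch
import torch.nn as nn

class Model2(nn.Module):
    """Hypothetical two-block CNN matching the shapes in the question."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.block_1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),   # 3 -> 64 channels
            nn.ReLU(),
        )
        self.block_2 = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, padding=1),  # expects 64 channels
            nn.ReLU(),
        )
        self.classifier = nn.Linear(64 * 32 * 32, num_classes)

    def forward(self, x):
        out = self.block_1(x)          # [64, 3, 32, 32] -> [64, 64, 32, 32]
        # Buggy version that triggers the RuntimeError:
        # out = self.block_2(x)        # passes the raw 3-channel input
        out = self.block_2(out)        # correct: feed block_1's output
        return self.classifier(out.flatten(1))

model = Model2()
x = torch.randn(64, 3, 32, 32)
print(model(x).shape)  # torch.Size([64, 10])
```

Uncommenting the buggy line reproduces exactly the `expected input[64, 3, 32, 32] to have 64 channels` error from the traceback.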