Channel: Active questions tagged python - Stack Overflow

RuntimeError: Given groups=1, weight of size [64, 64, 3, 3], expected input[64, 3, 32, 32] to have 64 channels, but got 3 channels instead


I'm encountering a RuntimeError stating that a layer expecting 64 channels received an input with 3 channels instead, during classification with a custom model that starts with a Conv2d(3, 64, kernel_size=(3, 3)) layer. The error occurs even though the input shape seems correct ([64, 3, 32, 32]) and matches the model's initial layer expectation.

*Shape of the inputs in the first batch: torch.Size([64, 3, 32, 32])*

```
RuntimeError                              Traceback (most recent call last)
<ipython-input-25-b3283edbd61f> in <cell line: 22>()
     30         optimizer.zero_grad()
     31
---> 32         outputs = model2(inputs)
     33         loss = criterion(outputs, labels)
     34         loss.backward()

16 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    454                             weight, bias, self.stride,
    455                             _pair(0), self.dilation, self.groups)
--> 456         return F.conv2d(input, weight, bias, self.stride,
    457                         self.padding, self.dilation, self.groups)
    458

RuntimeError: Given groups=1, weight of size [64, 64, 3, 3], expected input[64, 3, 32, 32] to have 64 channels, but got 3 channels instead
```
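Note that the failing weight is `[64, 64, 3, 3]`, i.e. a *later* `Conv2d(64, 64, ...)` layer, not the initial `Conv2d(3, 64, ...)`. That means the raw 3-channel input is somehow reaching a layer that expects the 64-channel output of the first conv. A common cause is a `forward` method that passes `x` instead of the previous layer's output. A minimal sketch reproducing the same error (the model here is hypothetical, not the asker's `model2`):

```python
import torch
import torch.nn as nn

class BuggyModel(nn.Module):
    """Hypothetical model illustrating the channel-mismatch error."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(64, 64, kernel_size=3, padding=1)

    def forward(self, x):
        out = self.conv1(x)       # out has 64 channels
        return self.conv2(x)      # BUG: feeds the raw 3-channel x; should be self.conv2(out)

x = torch.randn(64, 3, 32, 32)
try:
    BuggyModel()(x)
except RuntimeError as e:
    print(e)  # same "expected input ... to have 64 channels" message as above
```

Changing `self.conv2(x)` to `self.conv2(out)` makes the shapes line up; the same pattern applies to skip connections or branches that accidentally reuse the original input.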

**My Model:**

[model2_code](https://i.stack.imgur.com/X3ZFm.png)
**Model2 OUTPUT:** [OUTPUT_Model2](https://i.stack.imgur.com/vH5Sc.png)

**Lines that produce the error:**

```python
model2.to(device)
optimizer = optim.Adam(model2.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()

losses = []

def update_plot(epoch, loss):
    losses.append(loss)
    plt.plot(losses, '-x')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.title('Training Loss')
    plt.pause(0.001)

num_epochs = 10
for epoch in range(1, num_epochs + 1):
    start_time = time.time()
    running_loss = 0.0
    total_batches = 0
    for i, (inputs, labels) in enumerate(train_loader, 0):
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model2(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        total_batches += 1
    avg_loss = running_loss / total_batches
    update_plot(epoch, avg_loss)
    elapsed_time = time.time() - start_time
    if epoch % 10 == 0 or epoch == 1:
        print(f'Epoch {epoch}/{num_epochs} - Loss: {avg_loss:.4f} - Time: {elapsed_time:.2f}s')
plt.show()
```
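Since the training loop itself looks fine, a quick way to locate the offending layer is to attach forward hooks that print each `Conv2d`'s input and output shapes: the first layer whose input channel count disagrees with its weight shape is the culprit. A sketch of the idea, using a stand-in `Sequential` because the asker's `model2` definition is only available as an image:

```python
import torch
import torch.nn as nn

# Stand-in for model2; replace with the real model when debugging.
model2 = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
)

shapes = []

def make_hook(name):
    # Record (layer name, input shape, output shape) for each forward call.
    def hook(module, inputs, output):
        shapes.append((name, tuple(inputs[0].shape), tuple(output.shape)))
    return hook

for name, module in model2.named_modules():
    if isinstance(module, nn.Conv2d):
        module.register_forward_hook(make_hook(name))

model2(torch.randn(1, 3, 32, 32))
for name, in_shape, out_shape in shapes:
    print(f"{name}: {in_shape} -> {out_shape}")
```

With a buggy forward pass, the hook on the failing conv never fires (the error is raised inside it), so the last shape printed before the exception points directly at the mismatch.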
