
Getting "missing forward function" error when the forward function is present within the class | PyTorch


I'm receiving this error:

```
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
Cell In[80], line 2
      1 # Make predictions with model
----> 2 y_preds = model_0(x_test)

File ~/.conda/envs/pytorch311/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File ~/.conda/envs/pytorch311/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

File ~/.conda/envs/pytorch311/lib/python3.11/site-packages/torch/nn/modules/module.py:372, in _forward_unimplemented(self, *input)
    361 def _forward_unimplemented(self, *input: Any) -> None:
    362     r"""Defines the computation performed at every call.
    363
    364     Should be overridden by all subclasses.
    (...)
    370     registered hooks while the latter silently ignores them.
    371     """
--> 372     raise NotImplementedError(f'Module [{type(self).__name__}] is missing the required "forward" function')

NotImplementedError: Module [LinearRegressionModel] is missing the required "forward" function
```

Thinking that it might be the indentation of the forward function, I adjusted it to match `__init__`, but I still received the same error.

```python
# Create a Linear Regression model class
class LinearRegressionModel(nn.Module): # <- almost everything in PyTorch is a nn.Module (think of this as neural network lego blocks)
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(1, # <- start with random weights (this will get adjusted as the model learns)
                                                dtype=torch.float), # <- PyTorch loves float32 by default
                                    requires_grad=True) # <- can we update this value with gradient descent?

        self.bias = nn.Parameter(torch.randn(1, # <- start with random bias (this will get adjusted as the model learns)
                                             dtype=torch.float), # <- PyTorch loves float32 by default
                                 requires_grad=True) # <- can we update this value with gradient descent?

    # Forward defines the computation in the model
    def forward(self, x: torch.Tensor) -> torch.Tensor: # <- "x" is the input data (e.g. training/testing features)
        xx = self.weights * x + self.bias # <- this is the linear regression formula (y = m*x + b)
        return xx
```
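For context on why this error appears: `nn.Module.__call__` routes every call to `self.forward`, and the base class raises exactly this `NotImplementedError` when the subclass never defines one — which is what happens if `def forward` ends up dedented out of the class body. A minimal stdlib-only sketch that mimics that dispatch (the `Module` class below is a hypothetical stand-in, not the real `torch.nn.Module`):

```python
class Module:
    """Hypothetical stand-in mimicking torch.nn.Module's call dispatch."""
    def __call__(self, *args, **kwargs):
        # nn.Module.__call__ ultimately calls self.forward; the base
        # class raises if the subclass never defined that method.
        forward = getattr(type(self), "forward", None)
        if forward is None:
            raise NotImplementedError(
                f'Module [{type(self).__name__}] is missing the required "forward" function'
            )
        return forward(self, *args, **kwargs)

class GoodModel(Module):
    def forward(self, x):  # indented inside the class body -> found as a method
        return 2 * x

class BadModel(Module):
    pass

def forward(self, x):  # dedented: just a module-level function, invisible to BadModel
    return 2 * x

good = GoodModel()
print(good(3))  # 6

bad = BadModel()
try:
    bad(3)
except NotImplementedError as e:
    print(e)  # Module [BadModel] is missing the required "forward" function
```

If the indentation shown above looks correct in your notebook but the error persists, one common cause is that the cell defining the class was edited but not re-executed before `model_0` was re-instantiated.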
