I am learning signal convolution and I am a little confused about the difference between PyTorch's functional conv1d and SciPy's convolution. What I know for sure is that PyTorch's conv1d actually computes cross-correlation, while SciPy performs the traditional convolution. I could manually flip the kernel to turn the cross-correlation into a convolution.
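To illustrate what I mean, here is a small sketch (using `scipy.signal.convolve` and `torch.nn.functional.conv1d`; the signal/kernel values are just random placeholders) checking that flipping the kernel makes the two agree:

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.signal import convolve

x = np.random.randn(10).astype(np.float32)   # toy signal
k = np.random.randn(4).astype(np.float32)    # toy kernel

# SciPy performs true convolution (it flips the kernel internally).
ref = convolve(x, k, mode="valid")           # length 10 - 4 + 1 = 7

# PyTorch conv1d is cross-correlation, so flip the kernel manually.
# (.copy() because torch.from_numpy rejects negative-stride views.)
xt = torch.from_numpy(x).view(1, 1, -1)               # (batch, channels, length)
kt = torch.from_numpy(k[::-1].copy()).view(1, 1, -1)  # (out_ch, in_ch, length)
out = F.conv1d(xt, kt).view(-1).numpy()

print(np.allclose(ref, out, atol=1e-5))      # expect True
```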
Right now, for example, I have a 1D kernel of length 120 (120 time points) and a signal Sobs of length 149 (149 time points). I know that Sobs comes from some signal Strue convolved with that kernel (length 120). My final goal is to recover Strue.
Suppose I have a random guess for Strue; what should its length be? If I keep Strue the same length as Sobs, the convolution result (Strue * kernel) in full mode will have length 149 + 120 - 1 = 268 (I want to use the full mode) instead of matching Sobs' length of 149. What should I do here?
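To make the length mismatch concrete, here is the arithmetic checked with `scipy.signal.convolve` (the all-ones arrays are just placeholders for a guess with the same length as Sobs):

```python
import numpy as np
from scipy.signal import convolve

strue_guess = np.ones(149)  # hypothetical guess, same length as Sobs
kernel = np.ones(120)

full_len = len(convolve(strue_guess, kernel, mode="full"))    # 149 + 120 - 1 = 268
same_len = len(convolve(strue_guess, kernel, mode="same"))    # 149
valid_len = len(convolve(strue_guess, kernel, mode="valid"))  # 149 - 120 + 1 = 30

print(full_len, same_len, valid_len)  # 268 149 30
```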
I want to write the code in PyTorch (and obtain the initial Strue guess from a fully connected neural network). The network will give me a random Strue of shape [size, 1], since it is a 1D signal. If I then compute the convolution with PyTorch's functional conv1d, what is my minibatch size? Should it be 149 or 1? Also, PyTorch's functional conv1d only offers the padding modes valid and same. How can I adjust the padding to match SciPy's full mode?
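Here is a sketch of my current guess at the shapes, with minibatch = 1 and `padding=len(kernel) - 1` as my attempt to reproduce SciPy's full mode (the random arrays stand in for the network output and the known kernel):

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.signal import convolve

strue = np.random.randn(149).astype(np.float32)   # stand-in for the network output
kernel = np.random.randn(120).astype(np.float32)  # the known kernel

ref = convolve(strue, kernel, mode="full")        # length 149 + 120 - 1 = 268

# Reshape to conv1d's (batch, channels, length) layout: one sample, one channel.
x = torch.from_numpy(strue).view(1, 1, -1)
# Flip the kernel so cross-correlation becomes convolution.
w = torch.from_numpy(kernel[::-1].copy()).view(1, 1, -1)

# padding = kernel_length - 1 zero-pads both ends, which is what full mode does:
# output length = (149 + 2*119) - 120 + 1 = 268.
out = F.conv1d(x, w, padding=len(kernel) - 1).view(-1).numpy()

print(np.allclose(ref, out, atol=1e-4))           # expect True
```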