Channel: Active questions tagged python - Stack Overflow

ReLU not working accurately in regression for a continuous function


I was trying to solve a regression problem for the sine function. I was supposed to use the ReLU activation function, and the network had to be fully connected. I wanted to see how the hyperparameters affect the model's accuracy, so I trained a bunch of models and checked how they performed. This is my dataset:

x_sin_values = np.linspace(0, 4 * np.pi, 1000)
y_sin_values = np.sin(x_sin_values)
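The post never shows the splitting code itself; a minimal sketch of what it could look like, assuming scikit-learn's `train_test_split` with shuffling (as described later in the question) and a hypothetical `random_state`:

```python
import numpy as np
from sklearn.model_selection import train_test_split

x_sin_values = np.linspace(0, 4 * np.pi, 1000)
y_sin_values = np.sin(x_sin_values)

# train_size=0.3 corresponds to the "0.3 ratio" run below;
# shuffle=True matches the setup described in the question
x_train, x_test, y_train, y_test = train_test_split(
    x_sin_values, y_sin_values, train_size=0.3, shuffle=True, random_state=0
)
print(x_train.shape, x_test.shape)  # (300,) (700,)
```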

At first I tried a simple two-layer fully connected network with the ReLU activation function and 12 neurons per layer, with different train/test ratios; the output was not what I expected after 25 epochs. This is the model:

from keras.models import Sequential
from keras.layers import Dense

def create_model_regression(n_hidden_layers):
    model = Sequential()
    model.add(Dense(units=12, activation="relu", input_dim=1))
    for _ in range(n_hidden_layers):
        model.add(Dense(units=8, activation="relu"))
    model.add(Dense(units=1, activation="linear"))
    model.compile(
        optimizer="adam",
        loss="mean_squared_error",
    )
    return model

model = create_model_regression(2)

Function estimation for 0.3 ratio

Function estimation for 0.9 ratio

I understand that with too much training data the model may overfit, and that with a 0.9 ratio there is not much test data left either, but I still didn't expect the model to estimate the function somewhat acceptably up to a point and then suddenly turn into a straight line.
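For what it's worth, the straight-line tail is characteristic of ReLU networks rather than a training bug: a ReLU network is piecewise linear, and past its last "kink" every unit's on/off state is frozen, so the output is exactly affine. A small numpy sketch (random untrained weights, not the question's model) that checks this:

```python
import numpy as np

rng = np.random.default_rng(0)
# One hidden ReLU layer with random weights, 12 units as in the question
W1, b1 = rng.normal(size=(1, 12)), rng.normal(size=12)
W2, b2 = rng.normal(size=(12, 1)), rng.normal(size=1)

def relu_net(x):
    h = np.maximum(x[:, None] @ W1 + b1, 0.0)  # hidden ReLU layer
    return (h @ W2 + b2).ravel()               # linear output layer

# Each unit bends the function at x = -b/w; beyond the last bend the
# network output is exactly a straight line
kinks = -b1 / W1.ravel()
x_far = np.linspace(kinks.max() + 1, kinks.max() + 10, 50)
second_diff = np.diff(relu_net(x_far), n=2)
print(np.allclose(second_diff, 0.0))  # True: zero curvature past the last kink
```

So any region that ends up with few or no training points tends to be covered by a single linear piece.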

Then I tried to see how the number of layers would affect accuracy, and I got these:

Function estimation for 4 layers

Function estimation for 19 layers

Function estimation for 20 layers

I was confused, so I read a bit about this on several sites. If I'm not mistaken it's due to vanishing gradients, but it still doesn't make sense to me why the model wouldn't learn at all in some cases (I expected at least a little learning, but there is none). And if it's due to vanishing gradients, why has the model with 20 layers learned, since it's theoretically more prone to this phenomenon? This is not just a random coincidence: I ran this code multiple times and something similar happened each time. I tried common remedies for vanishing gradients, but I still couldn't explain many of these situations. I would appreciate it if anyone could explain why these two problems happen and what I can do to fix them.
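Whether gradients vanish depends heavily on the weight scale, not only on depth, which may explain why nearby depths behave so differently. A numpy sketch (not the Keras model above) that multiplies the per-layer Jacobians of a deep ReLU stack for two initialization scales; the scale values and dimensions here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
width, depth = 16, 10

def grad_norm_through_depth(scale):
    # Multiply the per-layer Jacobians diag(mask) @ W of a deep ReLU stack
    x = rng.normal(size=width)
    J = np.eye(width)
    for _ in range(depth):
        W = rng.normal(scale=scale, size=(width, width))
        pre = W @ x
        mask = (pre > 0).astype(float)
        x = pre * mask                 # ReLU forward pass
        J = (mask[:, None] * W) @ J   # chain rule: diag(mask) @ W @ J
    return np.linalg.norm(J)

small = grad_norm_through_depth(scale=0.05)             # too-small weights
he = grad_norm_through_depth(scale=np.sqrt(2 / width))  # He-style scale
print(small < he)  # small-init gradients are far weaker at the same depth
```

With the small scale the Jacobian product shrinks multiplicatively per layer, while the He-style scale keeps it in a usable range, so two runs or two depths with different effective scales can land on opposite sides of "learns at all".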

To fix the problem with the number of layers I tried methods like batch normalization, weight normalization, and skip connections, but none of them solved the problem or helped me understand why it happens. I also couldn't find any explanation for the problems with the ratios, because in none of the examples I saw did the model suddenly fail to work. (I ran it multiple times and each time it learned differently, but it never learned for x >= 10, even though shuffling was enabled for the train/test split and I used different initial weights.) The closest thing I found was this, which is still very different from my case.
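One remedy not tried above that often matters for exactly this symptom: the inputs span [0, 4π] ≈ [0, 12.6], and unnormalized inputs can leave many ReLU units dead or saturated for the larger x values. A minimal sketch of rescaling the inputs to [-1, 1] before training (a suggested change, not something the original code does):

```python
import numpy as np

x = np.linspace(0, 4 * np.pi, 1000)

# Rescale to [-1, 1]; feed x_scaled (not x) to the network afterwards
x_scaled = 2 * (x - x.min()) / (x.max() - x.min()) - 1
print(x_scaled.min(), x_scaled.max())  # -1.0 1.0
```

Combining this with a ReLU-friendly initializer (e.g. Keras's `kernel_initializer="he_normal"` on the `Dense` layers) is a common first step before reaching for batch normalization or skip connections.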

