Ba/r_nse_loss #179
base: main
Conversation
I was thinking that maybe we should just call them:
    l_init_train,
    l_init_val,
    training_loss,
    loss_types[1],
I don't know how I feel about this. I think this is already breaking :/
I think it is not breaking - it is just for the trainboard. The default behavior stays the same (training_loss = :mse, loss_types = [:mse, :r2])
    # one minus nse
    function loss_fn(ŷ, y, y_nan, ::Val{:nse})
-       return sum((ŷ[y_nan] .- y[y_nan]).^2) / sum((y[y_nan] .- mean(y[y_nan])).^2)
+       return one(eltype(ŷ)) - (sum((ŷ[y_nan] .- y[y_nan]).^2) / sum((y[y_nan] .- mean(y[y_nan])).^2))
oh yes, it should be like this for a proper loss.
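The sign confusion here is easy to pin down: the Nash-Sutcliffe efficiency is NSE = 1 - SSE/SST, which equals 1 for a perfect fit, so the quantity to *minimize* is 1 - NSE = SSE/SST. A minimal sketch of that loss (the standalone name `nse_loss` is illustrative; the PR's actual method is the `loss_fn` dispatch above, and `y_nan` is assumed to be a mask selecting the valid, non-NaN entries):

```julia
using Statistics  # mean

# 1 - NSE as a loss: since NSE = 1 - SSE/SST, the loss 1 - NSE = SSE/SST
# is 0 for a perfect fit and grows as predictions degrade (lower is better).
function nse_loss(ŷ, y, y_nan)
    sse = sum((ŷ[y_nan] .- y[y_nan]) .^ 2)        # model error
    sst = sum((y[y_nan] .- mean(y[y_nan])) .^ 2)  # spread of the observations
    return sse / sst                              # = 1 - NSE
end

y = [1.0, 2.0, 3.0, 4.0]
nse_loss(y, y, trues(4))                     # 0.0: perfect fit
nse_loss([1.1, 1.9, 3.2, 3.8], y, trues(4))  # 0.02, i.e. NSE = 0.98
```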
        return one(eltype(ŷ)) .- (cor(ŷ[y_nan], y[y_nan]))
    end

    function loss_fn(ŷ, y, y_nan, ::Val{:nseLoss})
maybe this is only ::Val{:nse} and the next one is the actual ::Val{:nseLoss}.
Yes, performance_metrics would be good. That would indeed be breaking - maybe my proposed changes are breaking too. We could have performance_metrics where the highest value gives the best performance (NSE), while what we have so far returns the model with the lowest metric. So at the moment the train function would only work with losses (decreasing).
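One way to keep the existing lowest-metric model selection while also supporting increasing metrics would be to normalize everything to "lower is better" at comparison time. A hypothetical sketch (the names `LOSSES`, `PERFORMANCE_METRICS`, and `as_loss` are illustrative, not part of the package):

```julia
# Split metrics by direction so the trainer can always minimize.
const LOSSES = (:mse, :nseLoss)          # lower is better
const PERFORMANCE_METRICS = (:nse, :r2)  # higher is better

# Convert any metric value to a "lower is better" score, so existing
# best-model selection via the minimum keeps working unchanged.
as_loss(value, metric) = metric in PERFORMANCE_METRICS ? -value : value

as_loss(0.95, :nse)  # -0.95: a high NSE becomes a low score
as_loss(0.10, :mse)  # 0.1: losses pass through unchanged
```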
make losses losses