Hi,
I'm working on uncertainty prediction using Bayesian Neural Networks (BNNs). I trained a BNN on a dataset where all values were positive. The training went well, and the model showed good performance (accuracy, convergence) on the test set, which also contained only positive values.
However, I'm running into a confusing issue when applying the trained BNN to new datasets. These datasets also contain only positive numbers, and based on the problem context the outputs should definitely be positive.
Yet the BNN predicts entirely negative values, with extremely high variance estimates.
What's weird is that a standard feedforward NN (BPNN) gives normal, positive outputs on these same datasets.
I'm a bit new to BNNs and trying to understand why it's behaving this way, especially when the data seems consistent (all positive).
Could anyone offer some insights or suggestions on what might be causing this, or how to troubleshoot it?
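In case it helps, here's the kind of sanity check I was planning to run first, to rule out the new datasets lying outside the training distribution (which I've read can blow up a BNN's predictive variance). All names and data below are placeholders, not my actual setup:

```python
import numpy as np

def feature_shift_report(X_train, X_new, names=None):
    """Compare per-feature means of new data against training statistics.
    A large z-score means the new data sits far from the training
    distribution, i.e. the BNN would be extrapolating."""
    X_train = np.asarray(X_train, dtype=float)
    X_new = np.asarray(X_new, dtype=float)
    names = names or [f"f{i}" for i in range(X_train.shape[1])]
    report = []
    for i, name in enumerate(names):
        mu, sd = X_train[:, i].mean(), X_train[:, i].std()
        # z-score of the new data's mean under the training statistics
        z = (X_new[:, i].mean() - mu) / (sd + 1e-12)
        report.append((name, mu, X_new[:, i].mean(), z))
    return report

# Toy illustration: the second feature is shifted far outside the
# training range, mimicking a covariate-shifted deployment dataset.
rng = np.random.default_rng(0)
X_train = rng.uniform(1.0, 2.0, size=(500, 2))
X_new = np.column_stack([rng.uniform(1.0, 2.0, 500),
                         rng.uniform(8.0, 9.0, 500)])
for name, train_mean, new_mean, z in feature_shift_report(X_train, X_new):
    print(f"{name}: train mean={train_mean:.2f}, "
          f"new mean={new_mean:.2f}, shift z={z:.1f}")
```

If a check like this flags a big shift, I assume the fix is on the data/normalization side rather than the model side, but I'd appreciate confirmation.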
Thanks for any help!