Description
When training a new model, or even when using the pretrained one, trying to obtain prediction probabilities yields NaN for every class.
This strange behavior was first observed during predictions of specific semantic data types, where many labels were biased towards the first defined label. Digging deeper, when using predict_proba, a full set of NaN probabilities was observed. I believe this is a bug.
Investigating further, I found that the skewness and kurtosis of the character-level statistics are probably taking NaN values. Since these metrics have the standard deviation in the denominator of their calculation, any column whose character-level values have zero variance produces a 0/0 division, so this is a valid concern and issue.
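As an illustration of the failure mode (a hypothetical minimal sketch, not Sherlock's actual feature-extraction code), scipy's skew and kurtosis already return NaN for a zero-variance input:

```python
# Hypothetical minimal illustration (not Sherlock's feature-extraction code):
# for a column whose values all share the same character-level statistic,
# the variance is zero and skewness/kurtosis evaluate to 0/0 = NaN.
import numpy as np
from scipy.stats import kurtosis, skew

char_counts = np.array([3.0, 3.0, 3.0, 3.0])  # e.g. identical character counts per value

print(skew(char_counts))      # nan (zero standard deviation in the denominator)
print(kurtosis(char_counts))  # nan
# Any NaN in the feature vector propagates through the model,
# so predict_proba ends up returning all NaN.
```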
This can be fixed in the code by clamping the values to fixed min/max bounds for computational reasons (see the sketch below), but I believe it also has to be taken into account when deriving complex features from these metrics. As far as I can tell, this issue is not described in the corresponding paper (https://arxiv.org/pdf/1905.10688.pdf), and it is probably an edge case that was missed by the authors.
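For illustration, a possible guard (my own sketch; safe_skew/safe_kurtosis are not existing Sherlock functions) that falls back to a fixed sentinel when the standard deviation is zero and clamps the result otherwise:

```python
# Hypothetical sketch of such a guard: use a fixed sentinel for degenerate
# (zero-variance) distributions and clamp the result to finite bounds otherwise.
import numpy as np
from scipy.stats import kurtosis, skew

def safe_skew(values, fill=0.0, bound=1e6):
    values = np.asarray(values, dtype=float)
    if values.size == 0 or np.std(values) == 0:
        return fill  # zero variance: skewness is undefined, use an arbitrary sentinel
    return float(np.clip(skew(values), -bound, bound))

def safe_kurtosis(values, fill=0.0, bound=1e6):
    values = np.asarray(values, dtype=float)
    if values.size == 0 or np.std(values) == 0:
        return fill  # zero variance: kurtosis is undefined, use an arbitrary sentinel
    return float(np.clip(kurtosis(values), -bound, bound))

print(safe_skew([3, 3, 3, 3]))      # 0.0 instead of nan
print(safe_kurtosis([3, 3, 3, 3]))  # 0.0 instead of nan
```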
This may also be the root cause behind issue #47 (#47).
Thanks a lot for the great model, the open-source code and the contributions.
Below is an example of the aforementioned behavior, obtained by only changing the input data in the provided example notebooks. This is a minimal reproducible example:
https://gist.github.com/stranger-codebits/6074b5fe2d02ac9db9f2750dbad9a24f