
Why is the minimum of weight quantization one more than activation and bias quantization? #11521

Closed · Unanswered
codereba asked this question in Q&A
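
A minimal sketch of the convention the title likely refers to, assuming the common symmetric vs. asymmetric signed-int8 scheme used by quantizers such as PyTorch's (the function name `quant_range` and the printed values are illustrative, not taken from the QNN delegate code): symmetric "narrow-range" weight quantization drops the most negative code so the grid is symmetric around a zero-point of 0, giving a minimum of -(2^(n-1) - 1), while activations and biases may use the full two's-complement range starting at -2^(n-1). That is why the weight minimum is one larger.

```python
# Illustrative sketch only: typical signed integer ranges for symmetric
# (narrow-range) vs. asymmetric quantization. Not copied from the QNN backend.

def quant_range(num_bits: int, symmetric: bool) -> tuple[int, int]:
    """Return (qmin, qmax) for a signed num_bits integer grid."""
    qmax = 2 ** (num_bits - 1) - 1
    if symmetric:
        # Narrow range: drop the most negative code so |qmin| == qmax
        # and the zero-point can be fixed at 0, e.g. (-127, 127) for 8 bits.
        return -qmax, qmax
    # Full two's-complement range, e.g. (-128, 127) for 8 bits.
    return -(2 ** (num_bits - 1)), qmax


print(quant_range(8, symmetric=True))   # (-127, 127)  typical for weights
print(quant_range(8, symmetric=False))  # (-128, 127)  typical for activations/bias
```
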
Replies: 2 comments 1 reply

Category: Q&A
Labels: partner: qualcomm (For backend delegation, kernels, demo, etc. from the 3rd-party partner, Qualcomm), module: qnn (Issues related to Qualcomm's QNN delegate and code under backends/qualcomm/)
3 participants