Basically, we use two Q4_0 quantizations, each covering 16 weights,
to quantize a set of 32 weights. We get two separate scaling
factors, which we store as fp16, ending up with exactly the same
5 bits per weight as the current Q4_0.
We end up with an RMSE of ~0.00159, so basically the same as
the improved Q4_1. But this should run faster than `Q4_1`
(unless fp16 -> fp32 conversion is somehow very slow).
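A minimal sketch of what quantizing one 32-weight block could look like under this scheme (the struct and function names here are illustrative, not the actual implementation; only `ggml_fp16_t` / `ggml_fp32_to_fp16` are assumed from ggml). Two fp16 scales plus 32 nibbles is 2*16 + 32*4 = 160 bits per 32 weights, i.e. the 5 bits per weight mentioned above:

```c
#include <math.h>
#include <stdint.h>
#include "ggml.h"   // for ggml_fp16_t / ggml_fp32_to_fp16

// Hypothetical block layout: two 16-weight sub-blocks, each with its own
// fp16 scale, packed into 160 bits total (5 bits per weight).
typedef struct {
    ggml_fp16_t d[2];   // one scale per 16-weight sub-block
    uint8_t     qs[16]; // 32 x 4-bit quants, two per byte
} block_q4_0x2;

// Quantize one 32-weight block as two independent Q4_0-style sub-blocks.
static void quantize_block_q4_0x2(const float * x, block_q4_0x2 * y) {
    for (int sub = 0; sub < 2; ++sub) {
        const float * xs = x + 16*sub;

        // absolute max of this sub-block
        float amax = 0.0f;
        for (int i = 0; i < 16; ++i) {
            const float v = fabsf(xs[i]);
            if (v > amax) amax = v;
        }

        // symmetric Q4_0-style scale: values map to [-7, 7], stored with +8 offset
        const float d  = amax / 7.0f;
        const float id = d > 0.0f ? 1.0f/d : 0.0f;

        y->d[sub] = ggml_fp32_to_fp16(d);

        // pack two 4-bit values per byte
        for (int i = 0; i < 8; ++i) {
            const uint8_t v0 = (uint8_t) (roundf(xs[2*i + 0]*id) + 8.0f);
            const uint8_t v1 = (uint8_t) (roundf(xs[2*i + 1]*id) + 8.0f);
            y->qs[8*sub + i] = v0 | (v1 << 4);
        }
    }
}
```

Dequantization would just expand each nibble, subtract the offset of 8, and multiply by the corresponding sub-block's fp16 scale converted back to fp32, which is where the fp16 -> fp32 conversion cost mentioned above comes in.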