---
license: mit
language:
- en
pipeline_tag: text-generation
---
My own (ZeroWw) quantizations: the output and embedding tensors are kept at f16, while all other tensors are quantized to q5_k or q6_k.
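The card does not state how these files were produced. As a point of reference, a mixed-precision recipe like this can be expressed with llama.cpp's `llama-quantize` tool, which has `--output-tensor-type` and `--token-embedding-type` options. The sketch below is an assumption about the workflow, not the author's script; the file names and the q6_k target are illustrative.

```python
# Minimal sketch (assumed workflow, not the author's script):
# keep output/embedding tensors at f16 and quantize everything else to q6_k
# using llama.cpp's llama-quantize tool.
import subprocess

SRC = "model.f16.gguf"     # assumed: an f16 GGUF export of the base model
DST = "model.f16.q6.gguf"  # naming follows the card's "f16.q6" convention

subprocess.run(
    [
        "./llama-quantize",               # path to the llama.cpp quantize binary (adjust)
        "--output-tensor-type", "f16",    # keep the output tensor at f16
        "--token-embedding-type", "f16",  # keep the token embeddings at f16
        SRC,
        DST,
        "Q6_K",                           # quantize all remaining tensors to q6_k
    ],
    check=True,
)
```

The q5_k variant would use `Q5_K` as the final argument instead.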
Result: both the f16.q6 and f16.q5 variants are smaller than the standard q8_0 quantization, yet they perform as well as the pure f16.
Updated on: Tue Jan 14, 14:11:53