---
base_model:
- nvidia/Llama-3_1-Nemotron-Ultra-253B-v1
pipeline_tag: text-generation
---

Big thanks to ymcki for updating the llama.cpp code to support the 'dummy' layers.

If it hasn't been merged yet, use the llama.cpp branch from this PR: https://github.com/ggml-org/llama.cpp/pull/12843

Note: the imatrix data used for the IQ quants was produced from the Q4 quant.

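A minimal sketch of fetching and building that PR branch locally, in case it isn't merged yet. The local branch name is arbitrary, and `<model>.gguf` is a placeholder for whichever quant file you download from this repo:

```shell
# Clone upstream llama.cpp
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

# Fetch PR #12843 into a local branch and switch to it
# (branch name 'nemotron-dummy-layers' is just an example)
git fetch origin pull/12843/head:nemotron-dummy-layers
git checkout nemotron-dummy-layers

# Build with CMake
cmake -B build
cmake --build build --config Release -j

# Run a quant from this repo (replace <model>.gguf with the file you downloaded)
./build/bin/llama-cli -m <model>.gguf -p "Hello" -n 64
```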
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)

'Make knowledge free for everyone'

Quantized version of: [nvidia/Llama-3_1-Nemotron-Ultra-253B-v1](https://huggingface.co/nvidia/Llama-3_1-Nemotron-Ultra-253B-v1)

<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>