anikifoss committed
Commit c2aa04f · verified · 1 Parent(s): 7d87b0a

Update README.md

Files changed (1)
  1. README.md +11 -11
README.md CHANGED
@@ -15,17 +15,17 @@ See [this detailed guide](https://github.com/ikawrakow/ik_llama.cpp/discussions/
  ## Run
  Use the following command lines to run the model (tweak the command to further customize it to your needs).
 
- ### 32GB VRAM
+ ### 24GB VRAM
  ```
  ./build/bin/llama-server \
  --alias anikifoss/DeepSeek-R1-0528-DQ4_K_R4 \
  --model /mnt/data/Models/anikifoss/DeepSeek-R1-0528-DQ4_K_R4/DeepSeek-R1-0528-DQ4_K_R4-00001-of-00010.gguf \
  --temp 0.5 --top-k 0 --top-p 1.0 --min-p 0.1 --repeat-penalty 1.0 \
- --ctx-size 75000 \
- -ctk f16 \
+ --ctx-size 41000 \
+ -ctk q8_0 \
  -mla 2 -fa \
- -amb 1024 \
- -b 2048 -ub 2048 \
+ -amb 512 \
+ -b 1024 -ub 1024 \
  -fmoe \
  --n-gpu-layers 99 \
  --override-tensor exps=CPU,attn_kv_b=CPU \
@@ -35,17 +35,17 @@ Use the following command lines to run the model (tweak the command to further c
  --port 8090
  ```
 
- ### 24GB VRAM
+ ### 32GB VRAM
  ```
  ./build/bin/llama-server \
  --alias anikifoss/DeepSeek-R1-0528-DQ4_K_R4 \
  --model /mnt/data/Models/anikifoss/DeepSeek-R1-0528-DQ4_K_R4/DeepSeek-R1-0528-DQ4_K_R4-00001-of-00010.gguf \
  --temp 0.5 --top-k 0 --top-p 1.0 --min-p 0.1 --repeat-penalty 1.0 \
- --ctx-size 41000 \
- -ctk q8_0 \
+ --ctx-size 75000 \
+ -ctk f16 \
  -mla 2 -fa \
- -amb 512 \
- -b 1024 -ub 1024 \
+ -amb 1024 \
+ -b 2048 -ub 2048 \
  -fmoe \
  --n-gpu-layers 99 \
  --override-tensor exps=CPU,attn_kv_b=CPU \
@@ -87,7 +87,7 @@ You can try the following to squeeze out more context on your system:
  Generally, imatrix is not recommended for Q4 and larger quants. The problem with imatrix is that it guides what the model remembers, while anything not covered by the text sample used to generate the imatrix is more likely to be forgotten. For example, an imatrix derived from a Wikipedia sample is likely to negatively affect tasks like coding. In other words, while an imatrix can improve specific benchmarks that resemble its input sample, it will also skew model performance towards tasks similar to that sample at the expense of other tasks.
 
  ## Benchmarks
- Smaller quants, like `UD-Q2_K_XL`, are much faster when generating tokens, but often produce code that fails to run or contains bugs. Based on empirical observations, coding seems to be strongly affected by model quantization, so we use larger quantization where it matters to reduce perplexity while remaining within the target system constraints of 32G VRAM, 512G RAM.
+ Smaller quants, like `UD-Q2_K_XL`, are much faster when generating tokens, but often produce code that fails to run or contains bugs. Based on empirical observations, coding seems to be strongly affected by model quantization, so we use larger quantization where it matters to reduce perplexity while remaining within the target system constraints of 24GB-32GB VRAM, 512GB RAM.
 
  **System:** Threadripper Pro 7975WX, 768GB DDR5@5600MHz, RTX 5090 32GB
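For readers wondering where the imatrix sample enters the picture in the paragraph above: the importance matrix is computed from a calibration text file, and the quantizer then protects whatever that file exercises. Below is a minimal sketch of that workflow using the upstream llama.cpp tool names (`llama-imatrix`, `llama-quantize`); the file names and calibration text are placeholders, not artifacts from this repo, and this DQ4_K_R4 quant, consistent with the note above, is presented as not using an imatrix.

```
# Sketch only (not this repo's build script): how an importance matrix is
# produced and applied with the upstream llama.cpp tools. File names are
# placeholders. Whatever calibration.txt covers is what the quantizer
# will preserve best.
./build/bin/llama-imatrix \
  -m DeepSeek-R1-0528-BF16.gguf \
  -f calibration.txt \
  -o imatrix.dat

# The quantizer then weights rounding error by the imatrix, so tasks unlike
# the calibration text (e.g. coding vs. a Wikipedia dump) get less protection.
./build/bin/llama-quantize \
  --imatrix imatrix.dat \
  DeepSeek-R1-0528-BF16.gguf DeepSeek-R1-0528-Q4_K_M.gguf Q4_K_M
```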
 
 
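Once either of the `llama-server` commands above is running, it can be sanity-checked over HTTP. A minimal sketch, assuming ik_llama.cpp's server exposes the same OpenAI-compatible `/v1/chat/completions` route as upstream llama.cpp; the port and model alias come straight from the flags used above.

```
# Hypothetical smoke test: port 8090 and the alias match the llama-server
# flags above; the endpoint is assumed to behave like upstream llama.cpp's.
curl -s http://localhost:8090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "anikifoss/DeepSeek-R1-0528-DQ4_K_R4",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "temperature": 0.5,
        "max_tokens": 128
      }'
```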