Update README.md
README.md
@@ -11,9 +11,10 @@ I did not expect this repo to blow up and now all the training scripts depend on
 
 * ## ACKNOWLEDGE WORK FROM THIS HF PAGE AND https://huggingface.co/ehartford 's OPTIMIZER ON YOUR FUTURE PAPERS OR I WILL DRAG YOUR ORG ON TWITTER LIKE I DID WITH COHERE LOL (we're cool now btw, visited them :)
 
-> [!TIP] 🐧 If you're
+> [!TIP] 🐧 If you're impatient, get the trained checkpoint file that runs on 1 CPU core:
 >
 > make sure to install latest llama.cpp first, it's easy on linux & mac:
+>
 >     git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make -j
 
 Now for the magic trained finetune that runs at insane speeds:
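The checkpoint link that follows "insane speeds:" sits outside this hunk, so the actual repo id and filename aren't visible above. As a minimal sketch of the single-core workflow the tip describes, assuming the checkpoint ships as a GGUF file on the same HF page (`<hf-user>/<repo>` and `model-q4_0.gguf` below are placeholders, not the real names):

```sh
# Build llama.cpp as in the tip above. On recent checkouts `make -j` errors out
# (upstream replaced the Makefile build with CMake); the CMake equivalent is:
#   cmake -B build && cmake --build build --config Release -j
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make -j

# Fetch the GGUF checkpoint (placeholder repo id and filename).
# huggingface-cli ships with `pip install huggingface_hub`.
huggingface-cli download <hf-user>/<repo> model-q4_0.gguf --local-dir .

# -t 1 pins inference to a single CPU thread, matching the tip's "1 CPU core" claim;
# -n 128 caps generation at 128 tokens.
./main -m model-q4_0.gguf -t 1 -n 128 -p "Hello"
```

With `-t 1` llama.cpp stays on one core; raising `-t` toward the physical core count scales CPU inference throughput roughly linearly.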