michelkao committed on
Commit 92c347b · verified · 1 Parent(s): b356e20

Upload 7 files
.gitattributes CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-GGUF.F16.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-GGUF.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-GGUF.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-GGUF.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.2-3B-GGUF.F16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf40c2eb881176d721935df7284971456734c0c22e065f98f8954c7475fb1583
+ size 6433683744
Llama-3.2-3B-GGUF.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:433a360b918088f01da04b95ade4e58af335c6b6c35f5165981c149bc0906f1f
+ size 2019373344
Llama-3.2-3B-GGUF.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cbc32d62a5cc48b21607cc79b677409b39705832a75525b48346523d71613a8f
+ size 2322149664
Llama-3.2-3B-GGUF.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:53938886b87fb5668ca88292495071714e8d55093440d33421b9e7620210bbb5
+ size 3421894944
README.md CHANGED
@@ -1,3 +1,123 @@
- ---
- license: unknown
- ---
+ ---
+ license: creativeml-openrail-m
+ datasets:
+ - yahma/alpaca-cleaned
+ language:
+ - en
+ base_model:
+ - meta-llama/Llama-3.2-3B
+ pipeline_tag: text-generation
+ library_name: transformers
+ tags:
+ - Llama3.2-3B
+ - Llama-cpp
+ - F16
+ - 16-bit
+ - Q4
+ - Q5
+ - Q8
+ - 8-bit
+ - 4-bit
+ ---
+ ## Llama-3.2-3B-GGUF Model Files
+
+ | File Name | Size | Description | Upload Status |
+ |------------------------------------|---------|---------------------------------|----------------|
+ | `.gitattributes` | 1.78 kB | Git attributes file | Uploaded |
+ | `Llama-3.2-3B-GGUF.F16.gguf` | 6.43 GB | Full-precision (F16) model | Uploaded (LFS) |
+ | `Llama-3.2-3B-GGUF.Q4_K_M.gguf` | 2.02 GB | Quantized Q4_K_M model (medium) | Uploaded (LFS) |
+ | `Llama-3.2-3B-GGUF.Q5_K_M.gguf` | 2.32 GB | Quantized Q5_K_M model (medium) | Uploaded (LFS) |
+ | `Llama-3.2-3B-GGUF.Q8_0.gguf` | 3.42 GB | Quantized Q8_0 model | Uploaded (LFS) |
+ | `README.md` | 42 Bytes| Initial commit README | Uploaded |
+
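The file sizes in the table double as a rough sizing guide for choosing a quantization. The helper below is an illustrative sketch: the sizes come from the table, but the 1.5x headroom factor for KV cache and runtime overhead is an assumption, not a benchmark.

```python
# File sizes (GB) taken from the table above, ordered highest precision first.
QUANTS = [
    ("Llama-3.2-3B-GGUF.F16.gguf", 6.43),
    ("Llama-3.2-3B-GGUF.Q8_0.gguf", 3.42),
    ("Llama-3.2-3B-GGUF.Q5_K_M.gguf", 2.32),
    ("Llama-3.2-3B-GGUF.Q4_K_M.gguf", 2.02),
]

def pick_quant(free_gb: float, headroom: float = 1.5):
    """Return the highest-precision file whose size (with headroom) fits, else None.

    The headroom multiplier is a rough assumption for KV cache and runtime
    overhead; measure on your own hardware for anything precise.
    """
    for name, size_gb in QUANTS:
        if size_gb * headroom <= free_gb:
            return name
    return None

# pick_quant(16.0) favors F16; pick_quant(4.0) falls back to a smaller quant.
```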
+ # Run with Ollama 🦙
+
+ ## Overview
+
+ Ollama is a tool for running GGUF models locally with minimal setup. This guide walks you through downloading, installing, and running your own GGUF models in a few minutes.
+
+ ## Table of Contents
+
+ - [Download and Install Ollama](#download-and-install-ollama)
+ - [Steps to Run GGUF Models](#steps-to-run-gguf-models)
+   - [1. Create the Model File](#1-create-the-model-file)
+   - [2. Add the Template Command](#2-add-the-template-command)
+   - [3. Create and Patch the Model](#3-create-and-patch-the-model)
+ - [Running the Model](#running-the-model)
+ - [Sample Usage](#sample-usage)
+
+ ## Download and Install Ollama 🦙
+
+ To get started, download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your Windows or Mac system.
+
+ ## Steps to Run GGUF Models
+
+ ### 1. Create the Model File
+ First, create a model file and name it appropriately. For example, name it `metallama`.
+
+ ### 2. Add the Template Command
+ In your model file, include a `FROM` line that specifies the GGUF file you want to build on. For instance:
+
+ ```bash
+ FROM Llama-3.2-3B-GGUF.F16.gguf
+ ```
+
+ Ensure that the GGUF file is in the same directory as your model file.
+
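A model file can carry more than the `FROM` line. As an illustrative sketch (the `TEMPLATE` and `PARAMETER` values here are assumptions to adapt, not values shipped with this repository), a fuller `metallama` model file might look like:

```bash
FROM Llama-3.2-3B-GGUF.Q4_K_M.gguf

# Optional: prompt template and sampling defaults (illustrative values).
TEMPLATE """{{ .Prompt }}"""
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
```

`FROM`, `TEMPLATE`, and `PARAMETER` are standard Ollama Modelfile instructions; pointing `FROM` at the Q4_K_M file trades some quality for a much smaller memory footprint than F16.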
+ ### 3. Create and Patch the Model
+ Open your terminal and run the following command to create and register your model:
+
+ ```bash
+ ollama create metallama -f ./metallama
+ ```
+
+ Once the process succeeds, you will see a confirmation message.
+
+ To verify that the model was created, list all models with:
+
+ ```bash
+ ollama list
+ ```
+
+ Make sure that `metallama` appears in the list of models.
+
+ ---
+
+ ## Running the Model
+
+ To run your newly created model, use the following command in your terminal:
+
+ ```bash
+ ollama run metallama
+ ```
+
+ ### Sample Usage
+
+ In the command prompt, you can execute:
+
+ ```bash
+ D:\>ollama run metallama
+ ```
+
+ You can interact with the model like this:
+
+ ```plaintext
+ >>> write a mini passage about space x
+ Space X, the private aerospace company founded by Elon Musk, is revolutionizing the field of space exploration.
+ With its ambitious goals to make humanity a multi-planetary species and establish a sustainable human presence in
+ the cosmos, Space X has become a leading player in the industry. The company's spacecraft, like the Falcon 9, have
+ demonstrated remarkable capabilities, allowing for the transport of crews and cargo into space with unprecedented
+ efficiency. As technology continues to advance, the possibility of establishing permanent colonies on Mars becomes
+ increasingly feasible, thanks in part to the success of reusable rockets that can launch multiple times without
+ sustaining significant damage. The journey towards becoming a multi-planetary species is underway, and Space X
+ plays a pivotal role in pushing the boundaries of human exploration and settlement.
+ ```
+
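Besides the interactive prompt, a running Ollama instance also serves a local HTTP API (by default on `localhost:11434`). A minimal sketch, assuming the `metallama` model created above and an Ollama server running locally:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate endpoint; stream=False asks for one JSON reply."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """POST the prompt to a locally running Ollama server and return the reply text."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running server:
# print(generate("metallama", "write a mini passage about space x"))
```

This mirrors the interactive session above, but lets you script prompts from any program instead of typing them at the `>>>` prompt.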
+ ---
+
+ ## Conclusion
+
+ With these simple steps, you can easily download, install, and run your own models using Ollama. Whether you're exploring the capabilities of Llama or building your own custom models, Ollama makes it accessible and efficient.
config.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "model_type": "llama"
+ }
gitattributes ADDED
@@ -0,0 +1,39 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-GGUF.F16.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-GGUF.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-GGUF.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-GGUF.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text