sky-2002 committed (verified)
Commit a26c628 · 1 Parent(s): 9148a38

Upload folder using huggingface_hub

Files changed (4)
  1. .gitattributes +1 -0
  2. README.md +85 -61
  3. image.png +3 -0
  4. tokenizer.model +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ image.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -7,150 +7,174 @@ tags: []
  <!-- Provide a quick summary of what the model is/does. -->

- ## Model Details

- ### Model Description

  <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

  <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
  - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses

  <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- ### Direct Use

  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ### Downstream Use [optional]

  <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

- [More Information Needed]

- ### Out-of-Scope Use

  <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

- [More Information Needed]

  ## Bias, Risks, and Limitations

  <!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]

- ### Recommendations

  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

- ## How to Get Started with the Model

  Use the code below to get started with the model.

- [More Information Needed]

- ## Training Details

- ### Training Data

  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]

- ### Training Procedure

  <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- #### Preprocessing [optional]

- [More Information Needed]

- #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

- #### Speeds, Sizes, Times [optional]

  <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

- [More Information Needed]

- ## Evaluation

  <!-- This section describes the evaluation protocols and provides the results. -->

- ### Testing Data, Factors & Metrics

- #### Testing Data

  <!-- This should link to a Dataset Card if possible. -->

- [More Information Needed]

- #### Factors

  <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

- [More Information Needed]

- #### Metrics

  <!-- These are the evaluation metrics being used, ideally with a description of why. -->

- [More Information Needed]

  ### Results

  [More Information Needed]

- #### Summary

- ## Model Examination [optional]

  <!-- Relevant interpretability work for the model goes here -->

- [More Information Needed]

- ## Environmental Impact

  <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- - **Hardware Type:** [More Information Needed]
  - **Hours used:** [More Information Needed]
  - **Cloud Provider:** [More Information Needed]
  - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]

- ## Technical Specifications [optional]

  ### Model Architecture and Objective
@@ -168,32 +192,32 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
  [More Information Needed]

- ## Citation [optional]

  <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

- **BibTeX:**

  [More Information Needed]

  **APA:**

- [More Information Needed]

- ## Glossary [optional]

  <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]

- ## Model Card Authors [optional]

- [More Information Needed]

  ## Model Card Contact

- [More Information Needed]
 
  <!-- Provide a quick summary of what the model is/does. -->

+ ## Model Details
+ An experimental 145M-parameter pre-trained base model for Marathi, inspired by SmolLM2 and its architecture.

+ Pre-trained on the verified Marathi split of the [`ai4bharat/sangraha`](https://huggingface.co/datasets/ai4bharat/sangraha) dataset, roughly 2.8 billion tokens.

+ Note: This is an experimental model; further pre-training and task-specific instruction fine-tuning will follow.

+ ## How to use
+ ```python
+ # Load the tokenizer and model directly from the Hub
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("sky-2002/Marathi-SmolLM2-145M")
+ model = AutoModelForCausalLM.from_pretrained("sky-2002/Marathi-SmolLM2-145M")
+
+ # Encode a Marathi prompt and generate a continuation
+ sentence = "पुणे विद्यापीठाने म्हटले आहे"
+ inputs = tokenizer(sentence, return_tensors="pt")
+ output = model.generate(**inputs, max_length=50)
+ print(tokenizer.batch_decode(output, skip_special_tokens=True))
+ ```

+ ### Model description, data and training details

+ **Architecture**: SmolLM2-based

+ **Tokenizer**: Uses the `sarvamai/sarvam-1` tokenizer, since it has been trained on Indic languages and has a lower fertility rate (tokens per word) than existing multilingual tokenizers.

+ **Training dataset**:
+ The training dataset covers the following domains.

+ ![Training data domain distribution](image.png)
  <!-- Provide a longer summary of what this model is. -->

+ **Training**:
+ - Trained on an A100 using the Modal platform.
+ - Trained for 1 epoch on the verified Marathi split of the sangraha dataset, covering ~5.8M samples.

+ This model can generate coherent text, especially in domains similar to those in the training dataset.
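The tokenizer choice above is motivated by fertility, i.e. the average number of tokens a tokenizer emits per word (lower is better for a given language). A minimal sketch of the computation; the token counts here are hypothetical stand-ins, since the real values would come from `len(tokenizer.tokenize(sentence))` with the actual tokenizer loaded:

```python
# Fertility = tokens produced per whitespace-separated word.
# The counts 6 and 14 below are hypothetical, for illustration only.

def fertility(num_tokens: int, sentence: str) -> float:
    """Average tokens per word for one sentence."""
    return num_tokens / len(sentence.split())

sentence = "पुणे विद्यापीठाने म्हटले आहे"  # 4 words
print(fertility(6, sentence))   # hypothetical Indic-aware tokenizer -> 1.5
print(fertility(14, sentence))  # hypothetical generic multilingual tokenizer -> 3.5
```

A tokenizer with lower fertility packs more text into the same context window, which matters for a model trained with short contexts.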
 
 
+ <!-- ### Model Sources [optional]

  <!-- Provide the basic links for the model. -->

+ <!-- - **Repository:** [More Information Needed]
  - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed] -->

+ <!-- ## Uses

  <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

+ <!-- ### Direct Use -->

  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

+ <!-- [More Information Needed] -->

+ <!-- ### Downstream Use [optional] -->

  <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

+ <!-- [More Information Needed] -->

+ <!-- ### Out-of-Scope Use -->

  <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

+ <!-- [More Information Needed] -->

  ## Bias, Risks, and Limitations
+ This model was trained on roughly 2.8B tokens with a context length of 512, owing to compute constraints during training.
+ It often produces gibberish when the prompt is unrelated to the training domains or is phrased conversationally.
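Given the 512-token training context, over-long prompts are a practical failure mode. A sketch of a guard that keeps prompt plus generation inside the window; `CONTEXT_LENGTH` and `clip_to_context` are illustrative helpers, not part of the model's API (with the real tokenizer you would pass `truncation=True, max_length=512` when encoding):

```python
# Keep the most recent prompt tokens so prompt + new tokens fit in 512.
CONTEXT_LENGTH = 512  # assumed from the training setup described above

def clip_to_context(token_ids, max_new_tokens):
    """Drop the oldest tokens so the prompt leaves room for generation."""
    budget = CONTEXT_LENGTH - max_new_tokens
    return token_ids[-budget:] if len(token_ids) > budget else token_ids

prompt_ids = list(range(600))  # hypothetical over-long prompt
clipped = clip_to_context(prompt_ids, max_new_tokens=50)
print(len(clipped))  # 462 tokens, leaving room for 50 generated ones
```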
83
 
  <!-- This section is meant to convey both technical and sociotechnical limitations. -->

+ <!-- [More Information Needed] -->

+ <!-- ### Recommendations

  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

+ <!-- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. -->

+ <!-- ## How to Get Started with the Model

  Use the code below to get started with the model.

+ [More Information Needed] -->

+ <!-- ## Training Details

+ ### Training Data -->

  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

+ <!-- [More Information Needed]

+ ### Training Procedure -->

  <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

+ <!-- #### Preprocessing [optional]

+ [More Information Needed] -->

+ <!-- #### Training Hyperparameters -->

+ <!-- - **Training regime:** [More Information Needed] fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

+ <!-- #### Speeds, Sizes, Times [optional] -->

  <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

+ <!-- [More Information Needed] -->

+ <!-- ## Evaluation -->

  <!-- This section describes the evaluation protocols and provides the results. -->

+ <!-- ### Testing Data, Factors & Metrics

+ #### Testing Data -->

  <!-- This should link to a Dataset Card if possible. -->

+ <!-- [More Information Needed]

+ #### Factors -->

  <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

+ <!-- [More Information Needed]

+ #### Metrics -->

  <!-- These are the evaluation metrics being used, ideally with a description of why. -->

+ <!-- [More Information Needed]

  ### Results

  [More Information Needed]

+ #### Summary -->

+ <!-- ## Model Examination [optional] -->

  <!-- Relevant interpretability work for the model goes here -->

+ <!-- [More Information Needed]

+ ## Environmental Impact -->

  <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

+ <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

+ - **Hardware Type:** A100
  - **Hours used:** [More Information Needed]
  - **Cloud Provider:** [More Information Needed]
  - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed] -->

+ <!-- ## Technical Specifications [optional]

  ### Model Architecture and Objective

  [More Information Needed]

+ ## Citation [optional] -->

  <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

+ <!-- **BibTeX:**

  [More Information Needed]

  **APA:**

+ [More Information Needed] -->

+ <!-- ## Glossary [optional] -->

  <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

+ <!-- [More Information Needed]

+ ## More Information [optional] -->

+ <!-- [More Information Needed]

+ ## Model Card Authors [optional] -->

+ <!-- [More Information Needed]

  ## Model Card Contact

+ [More Information Needed] -->
image.png ADDED

Git LFS Details

  • SHA256: bbaeb8d073c5180a750a215c39b7547005b78d517c63fbb005befedd14c78ecf
  • Pointer size: 131 Bytes
  • Size of remote file: 504 kB
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4cd33409a577e8b416247587b0f5bd7a3eec245a1f18d4ec7793ff299ad3fbe2
+ size 1935856