nbeerbower lbourdois committed on
Commit cdaf915 · verified · 1 Parent(s): dc0152d

Improve language tag (#1)


- Improve language tag (07873051a5e3ca4adb0cb112d0853c908166ead9)


Co-authored-by: Loïck BOURDOIS <[email protected]>

Files changed (1)
  1. README.md +64 -50
README.md CHANGED
@@ -1,51 +1,65 @@
- ---
- base_model:
- - huihui-ai/Qwen2.5-1.5B-Instruct-abliterated
- - Qwen/Qwen2.5-1.5B
- - EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0
- library_name: transformers
- tags:
- - mergekit
- - merge
- license: apache-2.0
- ---
- # EVA-abliterated-TIES-Qwen2.5-1.5B
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [huihui-ai/Qwen2.5-1.5B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-1.5B-Instruct-abliterated)
- * [EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: huihui-ai/Qwen2.5-1.5B-Instruct-abliterated
-     parameters:
-       weight: 1
-       density: 1
-   - model: EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0
-     parameters:
-       weight: 1
-       density: 1
- merge_method: ties
- base_model: Qwen/Qwen2.5-1.5B
- parameters:
-   weight: 1
-   density: 1
-   normalize: true
-   int8_mask: true
- dtype: bfloat16
-
-
+ ---
+ base_model:
+ - huihui-ai/Qwen2.5-1.5B-Instruct-abliterated
+ - Qwen/Qwen2.5-1.5B
+ - EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ license: apache-2.0
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+ # EVA-abliterated-TIES-Qwen2.5-1.5B
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) as a base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [huihui-ai/Qwen2.5-1.5B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-1.5B-Instruct-abliterated)
+ * [EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: huihui-ai/Qwen2.5-1.5B-Instruct-abliterated
+     parameters:
+       weight: 1
+       density: 1
+   - model: EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0
+     parameters:
+       weight: 1
+       density: 1
+ merge_method: ties
+ base_model: Qwen/Qwen2.5-1.5B
+ parameters:
+   weight: 1
+   density: 1
+   normalize: true
+   int8_mask: true
+ dtype: bfloat16
+
+
  ```
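
For reference, the merged model described by this card loads like any other `transformers` causal LM. Below is a minimal sketch, assuming the model is published under a repo id such as `nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B` (inferred from the card title and commit author rather than stated in this commit, so substitute the actual id if it differs):

```python
# Minimal sketch: load the TIES-merged model with transformers.
# The repo id below is an assumption (taken from the card title and commit
# author, not stated in this commit); replace it with the actual repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

# Qwen2.5-based chat models ship a chat template; apply it before generating.
messages = [{"role": "user", "content": "Summarize TIES merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

To reproduce the merge itself, the YAML configuration above is typically saved to a file and passed to mergekit's `mergekit-yaml` command-line tool; see the mergekit repository linked in the card for the exact invocation.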