sabato-nocera committed · Commit a5ed5d2 · verified · 1 Parent(s): 57a7311

Dear model owner(s),
We are a group of researchers investigating the usefulness of sharing AIBOMs (Artificial Intelligence Bills of Materials) to document AI models. An AIBOM is a machine-readable, structured list of the components (e.g., datasets and models) that make up an AI model, intended to enhance transparency in AI-model supply chains.

To pursue the above-mentioned objective, we identified popular models on HuggingFace and, based on your model card (and some configuration information available on HuggingFace), we generated your AIBOM according to the CycloneDX (v1.6) standard (see https://cyclonedx.org/docs/1.6/json/). AIBOMs are generated as JSON files using the following open-source supporting tool: https://github.com/MSR4SBOM/ALOHA (technical details are available in the research paper: https://github.com/MSR4SBOM/ALOHA/blob/main/ALOHA.pdf).

The JSON file in this pull request is your AIBOM (see https://github.com/MSR4SBOM/ALOHA/blob/main/documentation.json for details on its structure).
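For example, the key fields of the AIBOM can be read with a few lines of Python. This is a minimal sketch using only the standard library; the file name matches the JSON file added in this pull request, and the fields referenced are part of the structure documented above.

```python
import json

# Load the AIBOM added in this pull request.
with open("NousResearch_Hermes-2-Theta-Llama-3-8B.json") as f:
    bom = json.load(f)

print(bom["bomFormat"], bom["specVersion"])  # CycloneDX 1.6

# The documented model is the root component in the metadata section.
model = bom["metadata"]["component"]
print(model["name"])                                  # NousResearch/Hermes-2-Theta-Llama-3-8B
print(model["modelCard"]["modelParameters"]["task"])  # text-generation

# Training datasets are listed by "bom-ref" and described in detail
# in the top-level "components" array.
for dataset in model["modelCard"]["modelParameters"]["datasets"]:
    print("dataset ref:", dataset["ref"])
```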

The submitted AIBOM matches the model's current information; when the model evolves, it can easily be regenerated with the aforementioned AIBOM generator tool.

We are opening this pull request containing an AIBOM of your AI model and hope you will consider it. We would also like to hear your opinion on the usefulness (or not) of AIBOMs through a 3-minute anonymous survey: https://forms.gle/WGffSQD5dLoWttEe7.

Thanks in advance, and regards,
Riccardo D’Avino, Fatima Ahmed, Sabato Nocera, Simone Romano, Giuseppe Scanniello (University of Salerno, Italy),
Massimiliano Di Penta (University of Sannio, Italy),
The MSR4SBOM team

NousResearch_Hermes-2-Theta-Llama-3-8B.json ADDED
@@ -0,0 +1,123 @@
+ {
+   "bomFormat": "CycloneDX",
+   "specVersion": "1.6",
+   "serialNumber": "urn:uuid:1fb15494-8c2a-42cf-b6ed-198a23dddb02",
+   "version": 1,
+   "metadata": {
+     "timestamp": "2025-06-05T09:40:14.245042+00:00",
+     "component": {
+       "type": "machine-learning-model",
+       "bom-ref": "NousResearch/Hermes-2-Theta-Llama-3-8B-5d3d90dc-3d8e-561a-9d4c-36a9712abc04",
+       "name": "NousResearch/Hermes-2-Theta-Llama-3-8B",
+       "externalReferences": [
+         {
+           "url": "https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B",
+           "type": "documentation"
+         }
+       ],
+       "modelCard": {
+         "modelParameters": {
+           "task": "text-generation",
+           "architectureFamily": "llama",
+           "modelArchitecture": "LlamaForCausalLM",
+           "datasets": [
+             {
+               "ref": "teknium/OpenHermes-2.5-1a7eb3be-7eaa-5577-91f6-d4ad0d639c6c"
+             }
+           ]
+         },
+         "properties": [
+           {
+             "name": "library_name",
+             "value": "transformers"
+           },
+           {
+             "name": "base_model",
+             "value": "NousResearch/Hermes-2-Pro-Llama-3-8B"
+           }
+         ]
+       },
+       "authors": [
+         {
+           "name": "NousResearch"
+         }
+       ],
+       "licenses": [
+         {
+           "license": {
+             "id": "Apache-2.0",
+             "url": "https://spdx.org/licenses/Apache-2.0.html"
+           }
+         }
+       ],
+       "description": "Hermes-2 \u0398 (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit. Hermes-2 \u0398 is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model to form a new model, Hermes-2 \u0398, combining the best of both worlds of each model.",
+       "tags": [
+         "transformers",
+         "safetensors",
+         "llama",
+         "text-generation",
+         "Llama-3",
+         "instruct",
+         "finetune",
+         "chatml",
+         "DPO",
+         "RLHF",
+         "gpt4",
+         "synthetic data",
+         "distillation",
+         "function calling",
+         "json mode",
+         "axolotl",
+         "merges",
+         "conversational",
+         "en",
+         "dataset:teknium/OpenHermes-2.5",
+         "base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
+         "base_model:finetune:NousResearch/Hermes-2-Pro-Llama-3-8B",
+         "license:apache-2.0",
+         "autotrain_compatible",
+         "text-generation-inference",
+         "endpoints_compatible",
+         "region:us"
+       ]
+     }
+   },
+   "components": [
+     {
+       "type": "data",
+       "bom-ref": "teknium/OpenHermes-2.5-1a7eb3be-7eaa-5577-91f6-d4ad0d639c6c",
+       "name": "teknium/OpenHermes-2.5",
+       "data": [
+         {
+           "type": "dataset",
+           "bom-ref": "teknium/OpenHermes-2.5-1a7eb3be-7eaa-5577-91f6-d4ad0d639c6c",
+           "name": "teknium/OpenHermes-2.5",
+           "contents": {
+             "url": "https://huggingface.co/datasets/teknium/OpenHermes-2.5",
+             "properties": [
+               {
+                 "name": "language",
+                 "value": "eng"
+               },
+               {
+                 "name": "pretty_name",
+                 "value": "OpenHermes 2.5"
+               }
+             ]
+           },
+           "governance": {
+             "owners": [
+               {
+                 "organization": {
+                   "name": "teknium",
+                   "url": "https://huggingface.co/teknium"
+                 }
+               }
+             ]
+           },
+           "description": "Dataset Card for Dataset Name\n\nThis is the dataset that made the OpenHermes 2.5 and Nous Hermes 2 series of models.\nSupport me on GitHub sponsors <3 : https://github.com/sponsors/teknium1\n\nDataset Details\n\nDataset Description\n\nThe Open Hermes 2/2.5 and Nous Hermes 2 models have made significant advancements of SOTA LLMs over recent months, and are underpinned by this exact compilation and curation of many open source datasets and custom created synthetic datasets.\u2026 See the full description on the dataset page: https://huggingface.co/datasets/teknium/OpenHermes-2.5."
+         }
+       ]
+     }
+   ]
+ }
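As a usage note, the dataset references in the model card above can be resolved against the top-level "components" array. The sketch below is a hedged example in standard-library Python (no CycloneDX tooling assumed) that cross-links the model to its training dataset:

```python
import json

with open("NousResearch_Hermes-2-Theta-Llama-3-8B.json") as f:
    bom = json.load(f)

# Sanity-check the fields this AIBOM declares.
assert bom["bomFormat"] == "CycloneDX" and bom["specVersion"] == "1.6"

# Index top-level components by "bom-ref" so dataset references from
# the model card can be resolved to their full entries.
by_ref = {c["bom-ref"]: c for c in bom.get("components", [])}

model = bom["metadata"]["component"]
for dataset in model["modelCard"]["modelParameters"]["datasets"]:
    entry = by_ref[dataset["ref"]]
    url = entry["data"][0]["contents"]["url"]
    print(f'{model["name"]} uses dataset {entry["name"]} ({url})')
```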