Update README.md
README.md CHANGED

@@ -4,14 +4,15 @@ license: other
 license_name: nvidia-open-model-license
 license_link: >-
   https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
-
 pipeline_tag: text-generation
 language:
-
+- en
 tags:
-
-
-
+- nvidia
+- llama-3
+- pytorch
+base_model:
+- nvidia/Llama-3.1-Minitron-4B-Width-Base
 ---
 
 
@@ -42,7 +43,7 @@ GOVERNING TERMS: Your use of this model is governed by the [NVIDIA Open Model Li
 
 **Model Dates:** Trained between August 2024 and May 2025
 
-**Data Freshness:** The pretraining data has a cutoff of 2023
+**Data Freshness:** The pretraining data has a cutoff of June 2023.
 
 
 ## Use Case:
@@ -268,7 +269,7 @@ Prompts have been sourced from either public and open corpus or synthetically ge
 
 ## Evaluation Datasets
 
-We used the datasets listed below to evaluate Llama-3.1-Nemotron-Nano-
+We used the datasets listed below to evaluate Llama-3.1-Nemotron-Nano-4B-v1.1.
 
 **Data Collection for Evaluation Datasets:** Hybrid: Human/Synthetic
 
@@ -387,4 +388,4 @@ NVIDIA believes Trustworthy AI is a shared responsibility and we have establishe
 
 For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](explainability.md), [Bias](bias.md), [Safety & Security](safety.md), and [Privacy](privacy.md) Subcards.
 
-Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
+Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
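The metadata this commit fills in (`language`, `tags`, `base_model`) lives in the YAML front matter between the leading `---` fences of README.md, which the Hugging Face Hub reads to index the model. As a minimal sketch (not part of the commit), the list-valued keys can be pulled out with stdlib string handling alone; a real consumer would use PyYAML or `huggingface_hub.ModelCard` instead, and the `front_matter_lists` helper below is hypothetical.

```python
# Hypothetical sketch: extract the list-valued keys from the front matter
# added in this commit. Stdlib-only; real tooling would use a YAML parser.
README_HEAD = """\
---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
base_model:
- nvidia/Llama-3.1-Minitron-4B-Width-Base
---
"""

def front_matter_lists(text: str) -> dict:
    """Return the list-valued keys from the block between the first two
    `---` lines (e.g. language, tags, base_model)."""
    lines = text.splitlines()
    assert lines[0] == "---"
    end = lines.index("---", 1)  # closing fence of the front matter
    result, current = {}, None
    for line in lines[1:end]:
        if line.startswith("- ") and current is not None:
            result[current].append(line[2:].strip())  # list item
        elif line.endswith(":") and not line.startswith((" ", "-")):
            current = line[:-1]   # key that opens a YAML list
            result[current] = []
        else:
            current = None        # scalar value or continuation line

    return result

meta = front_matter_lists(README_HEAD)
print(meta["tags"])  # the three tags added by this commit
```

Note that `base_model` pointing at `nvidia/Llama-3.1-Minitron-4B-Width-Base` is what makes the Hub render the model-tree lineage for this card.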