Update README.md
README.md (changed)
@@ -61,45 +61,13 @@ The pipeline we used to produce the data and models is fully open-sourced!
We provide [all instructions](https://nvidia.github.io/NeMo-Skills/openmathreasoning1/)
to fully reproduce our results, including data generation.

-
-
-To run inference with CoT mode, you can use this example code snippet.
-
-```python
-import transformers
-import torch
-
-model_id = "nvidia/OpenMath-Nemotron-14B-Kaggle"
-
-pipeline = transformers.pipeline(
-    "text-generation",
-    model=model_id,
-    model_kwargs={"torch_dtype": torch.bfloat16},
-    device_map="auto",
-)
-
-messages = [
-    {
-        "role": "user",
-        "content": "Solve the following math problem. Make sure to put the answer (and only answer) inside \\boxed{}.\n\n" +
-                   "What is the minimum value of $a^2+6a-7$?"},
-]
-
-outputs = pipeline(
-    messages,
-    max_new_tokens=4096,
-)
-print(outputs[0]["generated_text"][-1]['content'])
-```
-
-To run inference with TIR or GenSelect modes, we highly recommend to use our
+## How to use the models?
+
+This model will always use code execution to solve math problems, so we highly recommend running inference with our
[reference implementation in NeMo-Skills](https://nvidia.github.io/NeMo-Skills/openmathreasoning1/evaluation/).

Please note that these models have not been instruction tuned on general data and thus might not provide good answers outside of the math domain.

-
## Citation

If you find our work useful, please consider citing us!
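The added text recommends the NeMo-Skills reference implementation because this model always solves problems with code execution (TIR, tool-integrated reasoning): it writes Python mid-solution, expects that code to be executed, and continues from the execution output. The snippet below is only a rough sketch of such a loop on top of plain `transformers` generation; the `<tool_call>`/`<tool_output>` delimiters, the prompt wording, and the unsandboxed `exec` are illustrative assumptions rather than the model's documented protocol, so use the linked NeMo-Skills pipeline (which also covers GenSelect) for any real evaluation.

```python
# Illustrative sketch of a tool-integrated reasoning (TIR) loop: the model
# writes Python, we execute it, append the output, and let it continue.
# The <tool_call>/<tool_output> delimiters are placeholders, not the model's
# documented format; NeMo-Skills defines the real protocol and sandboxing.
import contextlib
import io
import re

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/OpenMath-Nemotron-14B-Kaggle"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

question = (
    "Solve the following math problem. Make sure to put the answer "
    "(and only answer) inside \\boxed{}.\n\n"
    "What is the minimum value of $a^2+6a-7$?"
)
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    tokenize=False,
    add_generation_prompt=True,
)

solution = ""
for _ in range(4):  # allow a few execute-and-continue rounds
    inputs = tokenizer(prompt + solution, return_tensors="pt").to(model.device)
    generated = model.generate(**inputs, max_new_tokens=4096)
    text = tokenizer.decode(
        generated[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    # A production implementation stops generation right after the code block
    # (stop strings); here we simply truncate at the first block we find.
    match = re.search(r"<tool_call>(.*?)</tool_call>", text, flags=re.DOTALL)
    if match is None or "\\boxed" in text[: match.start()]:
        solution += text
        break  # final boxed answer reached, or no code was requested
    solution += text[: match.end()]
    # Execute the code and capture its stdout (no sandbox -- demo only).
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(match.group(1), {})
    solution += f"\n<tool_output>\n{buf.getvalue()}\n</tool_output>\n"

print(solution)
```

For the sample question, the expected boxed answer is $-16$, since $a^2+6a-7=(a+3)^2-16$ attains its minimum at $a=-3$.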
@@ -134,7 +102,7 @@ This model is intended to facilitate research in the area of mathematical reasoning

Huggingface 04/23/2025 <br>

-
+### Model Architecture: <br>

**Architecture Type:** Transformer decoder-only language model <br>

@@ -145,7 +113,7 @@ Huggingface 04/23/2025 <br>

This model has 1.5B model parameters. <br>

-
+### Input: <br>

**Input Type(s):** Text <br>

@@ -157,7 +125,7 @@ Huggingface 04/23/2025 <br>


-
+### Output: <br>

**Output Type(s):** Text <br>

@@ -173,7 +141,7 @@ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems


-
+### Software Integration: <br>

**Runtime Engine(s):** <br>

@@ -195,7 +163,7 @@ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems


-
+### Model Version(s):

[OpenMath-Nemotron-1.5B](https://huggingface.co/nvidia/OpenMath-Nemotron-1.5B)