Tags: Text Generation · Transformers · Safetensors · English · Japanese · llama · conversational · text-generation-inference
Taishi-N324 committed
Commit 8d5357f · verified · 1 Parent(s): f3d14a1

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED

```diff
@@ -22,6 +22,7 @@ We use approximately 200 billion tokens that were sampled from a large Japanese
 coding contents, etc (see the Training Datasets section of the base model) for continual pre-training.
 The instruction-tuned models (Instruct) were built by supervised fine-tuning (SFT) on the synthetic data specially built for Japanese.
 See the Swallow Model Index section to find other model variants.
+
 **Note**: [Llama-3.1-Swallow-8B-Instruct-v0.2](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2) model was continually pre-trained from the [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). All other Llama 3.1 Swallow models were pre-trained from their respective base models.
 
 
```
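The note in the diff points readers to the Instruct variant referenced above. As a minimal sketch (not part of this commit, and assuming the model follows the standard `transformers` chat workflow implied by its `conversational` tag), it could be loaded and prompted like this; the dtype, device placement, and sampling settings are illustrative only:

```python
# Minimal usage sketch for the model referenced in the diff above.
# Assumes the standard Hugging Face transformers chat workflow; dtype and
# generation parameters are illustrative, not taken from the commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat prompt with the model's own chat template (Japanese example:
# "What is the capital of Japan?").
messages = [{"role": "user", "content": "日本の首都はどこですか?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```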