SmolLM2 score0_mix_rephrased_from_beginning-300B-mbs16-gbs1024-16feb

Model Details

  • Architecture: SmolLM2
  • Parameters: 360M

Training Configuration

attention_logit_softcapping: null
attention_scores_scalar: null
attn_bias: false
bias: false
block_size: 8192
final_logit_softcapping: null
gelu_approximate: none
head_size: 64
hf_config:
  name: SmolLM2-360M
  org: HuggingFaceTB
intermediate_size: 2560
lm_head_bias: false
mlp_class_name: LLaMAMLP
n_embd: 960
n_expert: 0
n_expert_per_token: 0
n_head: 15
n_layer: 32
n_query_groups: 5
name: SmolLM2-360M
norm_class_name: RMSNorm
norm_eps: 1.0e-05
norm_qk: false
padded_vocab_size: 49152
padding_multiple: 512
parallel_residual: false
post_attention_norm: false
post_mlp_norm: false
rope_adjustments: null
rope_base: 100000
rope_condense_ratio: 1
rotary_percentage: 1.0
scale_embeddings: false
shared_attention_norm: false
sliding_window_layer_placing: null
sliding_window_size: null
vocab_size: 49152
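
The values above describe a grouped-query attention transformer: 15 query heads share 5 key/value groups, each with head size 64 (15 × 64 = 960 = n_embd). As a sanity check, here is a minimal sketch in plain Python that derives the projection shapes and a rough parameter count from the configuration alone. It assumes tied input/output embeddings (standard for SmolLM2) and omits the small RMSNorm weights, so the total is an estimate rather than an official count.

# Rough parameter count derived from the configuration above.
# Assumes tied embeddings; omits RMSNorm weights (negligible).
# No bias terms, per bias/attn_bias: false.
n_embd, n_head, n_query_groups, head_size = 960, 15, 5, 64
n_layer, intermediate_size, vocab_size = 32, 2560, 49152

assert n_head * head_size == n_embd  # 15 query heads x 64 = 960

q = n_embd * n_head * head_size               # query projection: 960 -> 960
kv = 2 * n_embd * n_query_groups * head_size  # key/value: 960 -> 320 each
o = n_head * head_size * n_embd               # output projection: 960 -> 960
mlp = 3 * n_embd * intermediate_size          # LLaMAMLP: gate, up, down
embed = vocab_size * n_embd                   # shared with lm_head when tied

total = n_layer * (q + kv + o + mlp) + embed
print(f"~{total / 1e6:.0f}M parameters")      # ~362M, matching the 360M label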

Model Loading and Revision System

This repository hosts multiple revisions of the model. To load a specific revision, pass the revision parameter to from_pretrained. For example:

from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "locuslab/base-smollm2-360m-score0_mix_rephrased_from_beginning-300B-mbs16-gbs1024-16feb"

# Load the model weights and matching tokenizer from the "final" revision.
model = AutoModelForCausalLM.from_pretrained(repo_id, revision="final")
tokenizer = AutoTokenizer.from_pretrained(repo_id, revision="final")

Replace "final" with the desired revision.
