best_bvv_unfrozen_ru
Model summary
This repository contains the model and associated resources from the papers cited below.
best_bvv_unfrozen_ru is a 500M-parameter causal language model (LM) for Russian (and some English), trained as an open proof-of-concept for the "frozen embeddings" paradigm. This version uses fully trainable token embeddings (the standard setup) and serves as a baseline for direct comparison with the corresponding frozen-embedding model Bochkov/best_bvv_ru.
- Architecture: Transformer, rotary positional encoding
- Vocabulary: Custom Unicode-based, 131,072 tokens
- Embedding: Unfrozen (trainable, classic setup); see the sketch after this list
- Pretraining data: 9B tokens, predominantly Russian (Wikipedia, SQuAD 2.0, TriviaQA, NQ, etc.), with 10% SFT (instruction/factual Q&A) data mixed in
- Purpose: Compare learning capacity and generalization of full vs. frozen-embedding LMs on small data
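As a quick sanity check of the unfrozen setup, the sketch below loads the checkpoint and inspects the input-embedding matrix. This is a minimal, hypothetical example rather than part of the official card: it assumes the trust_remote_code model class follows the standard PreTrainedModel interface (get_input_embeddings), and note that requires_grad on a freshly loaded checkpoint only reflects how the module initializes itself, since the frozen/unfrozen distinction was applied during training.

```python
# Minimal sketch (assumption: the custom model class exposes the standard
# get_input_embeddings() accessor, which is not guaranteed for
# trust_remote_code checkpoints).
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Bochkov/best_bvv_unfrozen_ru", trust_remote_code=True
)

emb = model.get_input_embeddings()
print(emb.weight.shape)          # expected roughly (131072, hidden_size), per the card
print(emb.weight.requires_grad)  # trainable here; freezing is a training-time choice,
                                 # so a loaded frozen-embedding model may still report True
```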
Key results
- MMLU (avg): 11.37% (±0.18%)
- ARC-e: 20.56%
- ARC-c: 24.18%
- C-Sense: 18.79%
- SQUAD: 13.55%
- BLEU [en-ru]: 8.40%
Intended use
- Research & benchmarking: Designed to benchmark the "frozen embeddings" paradigm against traditional trainable-embedding LMs under realistic, small-data conditions.
- Comparison: Use alongside Bochkov/best_bvv_ru for ablation studies, transfer/interlingua research, and MoE fusion experiments (see the sketch after this list).
- NOT for production! This model is for research and experimentation only. Text quality is moderate, and factual hallucinations are possible.
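For the ablation use case, a simple starting point is to generate from the same Russian prompt with both checkpoints and compare the outputs. The snippet below is a hedged sketch, not an official evaluation recipe: it assumes both repositories load through the same AutoModelForCausalLM / trust_remote_code path shown in the usage example below, and the prompt is an arbitrary illustration.

```python
# Sketch of a side-by-side ablation run: same prompt, same sampling seed,
# unfrozen-embedding baseline vs. frozen-embedding counterpart.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

prompt = "Однажды в Москве"  # "Once upon a time in Moscow"

for repo in ("Bochkov/best_bvv_unfrozen_ru", "Bochkov/best_bvv_ru"):
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True).eval()
    inputs = tokenizer(prompt, return_tensors="pt")
    torch.manual_seed(0)  # fix the sampling seed so the comparison is less noisy
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=50,
            do_sample=True,
            temperature=0.8,
            top_p=0.95,
        )
    print(repo, "->", tokenizer.decode(out[0]))
```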
🧑‍🔬 Citation & Concept
If you use or build upon this demo, please cite:
@article{bochkov2025emergent,
title={Emergent Semantics Beyond Token Embeddings: Transformer {LM}s with Frozen Visual Unicode Representations},
author={Andrey Bochkov},
journal={Transactions on Machine Learning Research},
issn={2835-8856},
year={2025},
url={https://openreview.net/forum?id=Odh8IynO1o},
note={}
}
@misc{bochkov2025growingtransformersmodularcomposition,
title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
author={A. Bochkov},
year={2025},
eprint={2507.07129},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2507.07129},
}
This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs: a step toward modular, fusable, multilingual LMs.
Example Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained('Bochkov/best_bvv_unfrozen_ru', trust_remote_code=True).to('cuda')
tokenizer = AutoTokenizer.from_pretrained('Bochkov/best_bvv_unfrozen_ru')
inputs = tokenizer("Hello, мир! ", return_tensors="pt").to('cuda')
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True
)
print(tokenizer.decode(outputs[0]))
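The tokenizer uses the custom Unicode-based vocabulary of 131,072 tokens described above, so it can be instructive to see how it segments Russian text. The snippet below is a small, hypothetical inspection sketch (standard Hugging Face tokenizer API assumed; the exact segmentation behaviour is not documented in this card):

```python
# Sketch: inspect the custom Unicode-based tokenizer. The printed token
# boundaries are illustrative, not documented behaviour.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Bochkov/best_bvv_unfrozen_ru")

print(len(tokenizer))  # vocabulary size, expected 131072 per the model card

text = "Привет из Москвы!"  # "Greetings from Moscow!"
ids = tokenizer(text).input_ids
print(ids)
print(tokenizer.convert_ids_to_tokens(ids))
```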