---
license: apache-2.0
tags:
- causal-lm
- russian
- chinese
- mixture-of-experts
- frozen-embeddings
- research
- demonstration
- low-resource
pipeline_tag: text-generation
library_name: transformers
---
# BVV-MoE: Mixture-of-Experts LLM with Frozen Shared Embeddings (Russian + Chinese, Demo-Scale)
This repository contains the model and associated resources from the following papers:

- [📚 Paper: Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations](https://huggingface.co/papers/2507.04886)
- [📚 Paper: Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate](https://huggingface.co/papers/2507.07129)
- [💻 Code](https://github.com/AVBochkov/Embeddings)

**Model size**: ~0.9B parameters

**Languages**: Russian, Chinese, some English

---
## Model Summary
**best_bvv_moe** is a demonstration-scale Mixture-of-Experts (MoE) decoder-only causal language model combining two independently trained models (Russian and Chinese) with strictly frozen, shared **visual/Unicode-based token embeddings**.
- Each "expert" was pre-trained on a small subordinate corpus (English-Russian, English-Chinese) with ~9B total tokens, mixing 10% SFT-like samples, using the same, fully frozen embedding matrix for all languages.
- After separate training, the two models were seamlessly merged at the transformer block level using a "mean logits" MoE fusion approach – thanks to the shared frozen token embeddings, no retraining/alignment of embeddings was needed.
- This model is a **conceptual/research artifact**, designed to illustrate that frozen, non-semantic embeddings enable combining multilingual LMs into a working MoE model *without catastrophic loss* of performance.
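
As a rough, illustrative sketch of the "mean logits" fusion idea (not the exact merge script used for this checkpoint): two causal LM experts that share the same tokenizer and frozen vocabulary can be fused at inference time by averaging their next-token logits. The checkpoint names `expert-ru-checkpoint` and `expert-zh-checkpoint` below are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The shared tokenizer/vocabulary is what makes logit averaging well-defined.
tokenizer = AutoTokenizer.from_pretrained('Bochkov/best_bvv_moe')

# Placeholder expert checkpoints (hypothetical names, shown for illustration only).
expert_ru = AutoModelForCausalLM.from_pretrained('expert-ru-checkpoint', trust_remote_code=True).eval()
expert_zh = AutoModelForCausalLM.from_pretrained('expert-zh-checkpoint', trust_remote_code=True).eval()

@torch.no_grad()
def mean_logits_next_token(prompt: str) -> int:
    inputs = tokenizer(prompt, return_tensors='pt')
    # Each expert scores the same token ids because the vocabulary is shared.
    logits_ru = expert_ru(**inputs).logits[:, -1, :]
    logits_zh = expert_zh(**inputs).logits[:, -1, :]
    fused = (logits_ru + logits_zh) / 2  # simple mean over experts
    return int(fused.argmax(dim=-1))
```

The merged checkpoint published in this repository exposes the fused model through the standard `generate()` interface (see Example Usage below).
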
---
## Key Features
- **Frozen, Unicode/visual token embeddings**: All tokens, across all supported languages, share the same frozen embedding matrix, based on Unicode and visual forms rather than statistical co-occurrence (a minimal freezing sketch follows this list).
- **Direct Mixture-of-Experts merge**: The two language models (Russian- and Chinese-oriented) are combined *without retraining* via simple logits averaging, made possible by the strictly shared embeddings.
- **Demo-scale**: Trained on a modest dataset (9B tokens), with a small SFT fraction (~10%), intended to illustrate the principle, not to maximize absolute scores.
- **Comparison available**: Separately released standard (unfrozen embeddings) models for direct comparison of convergence and generalization.
- **Extremely "clean" codebase**: No reliance on exotic pipeline tricks; clear transformer architecture, easy to review and experiment with.
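
The freezing step itself can be expressed in a few lines of PyTorch. This is a minimal sketch for intuition only: the construction of the actual visual/Unicode embedding matrix is described in the linked papers, so a random placeholder matrix stands in for it here, and the sizes are illustrative.

```python
import torch
import torch.nn as nn

vocab_size, d_model = 4096, 512  # illustrative sizes, not this model's real config

# Placeholder for the shared visual/Unicode-derived vectors
# (their real construction is described in the linked papers).
precomputed = torch.randn(vocab_size, d_model)

# freeze=True marks the weight as non-trainable, so gradient updates
# never touch the embedding matrix during expert pre-training.
embedding = nn.Embedding.from_pretrained(precomputed, freeze=True)
assert embedding.weight.requires_grad is False
```

Because every expert reuses this identical, frozen matrix, only the transformer blocks differ between experts, which is what allows the later logits-level merge without any embedding re-alignment.
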
---
## Use Case / Intended Purpose
This model is **not** an end-user chatbot solution.
Its purpose is:
- To **demonstrate** new possibilities in LM architecture:
  - Multilingual/multimodal MoE with frozen, shared embeddings
  - Modular, "plug-and-play" scaling and mixing of LMs
  - Comparison of convergence behaviour between frozen and unfrozen/learnable embeddings
- To serve as a **reference implementation** for research communities investigating model unification, low-resource language mixing, or the question of where "meaning" emerges inside LLM architectures.

---
## Evaluation
Benchmark scores (test set, mean ± std across tasks):

| Benchmark | Score |
|-----------|-------|
| MMLU      | 23.44% ± 0.28% |
| ARC-e     | 23.74% ± 1.02% |
| ARC-c     | 25.28% ± 2.07% |
| C-SENSE   | 19.69% ± 1.13% |
| SQuAD     | 19.73% ± 1.45% |

Translation BLEU:

| Direction | BLEU |
|-----------|------|
| en-ru     | 6.52% ± 0.62% |
| ru-en     | 6.22% ± 0.38% |
| en-zh     | 2.93% ± 0.34% |
| zh-en     | 4.95% ± 0.59% |

## 🧑‍🔬 Citation & Concept
If you use or build upon this demo, please cite:
```bibtex
@article{bochkov2025emergent,
  title={Emergent Semantics Beyond Token Embeddings: Transformer {LM}s with Frozen Visual Unicode Representations},
  author={Andrey Bochkov},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025},
  url={https://openreview.net/forum?id=Odh8IynO1o}
}

@misc{bochkov2025growingtransformersmodularcomposition,
  title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
  author={A. Bochkov},
  year={2025},
  eprint={2507.07129},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2507.07129}
}
```
This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs — a step toward modular, fusable, multilingual LMs.
## Example Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The merged MoE model uses a custom architecture, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained('Bochkov/best_bvv_moe', trust_remote_code=True).to('cuda')
tokenizer = AutoTokenizer.from_pretrained('Bochkov/best_bvv_moe')

inputs = tokenizer("Hello, мир! ", return_tensors="pt").to('cuda')
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(outputs[0]))
```