---
license: apache-2.0
tags:
  - causal-lm
  - russian
  - chinese
  - mixture-of-experts
  - frozen-embeddings
  - research
  - demonstration
  - low-resource
pipeline_tag: text-generation
library_name: transformers
---

# BVV-MoE: Mixture-of-Experts LLM with Frozen Shared Embeddings (Russian + Chinese, Demo-Scale)

This repository contains the model and associated resources from the following papers:

📚 Paper: [Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations](https://openreview.net/forum?id=Odh8IynO1o)

📚 Paper: [Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate](https://arxiv.org/abs/2507.07129)

💻 Code

Model size: ~0.9B parameters
Languages: Russian, Chinese, some English


## Model Summary

best_bvv_moe is a demonstration-scale Mixture-of-Experts (MoE) decoder-only causal language model combining two independently trained models (Russian and Chinese) with strictly frozen, shared visual/Unicode-based token embeddings.

  • Each "expert" was pre-trained on a small, separate bilingual corpus (English-Russian, English-Chinese) of ~9B total tokens, with ~10% SFT-like samples mixed in, using the same fully frozen embedding matrix for all languages.
  • After separate training, the two models were seamlessly merged at the transformer block level using a "mean logits" MoE fusion approach: thanks to the shared frozen token embeddings, no retraining or alignment of embeddings was needed (see the sketch after this list).
  • This model is a conceptual/research artifact, designed to illustrate that frozen, non-semantic embeddings enable combining multilingual LMs into a working MoE model without catastrophic loss of performance.
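As a rough illustration of the "mean logits" fusion, here is a minimal, hypothetical sketch. The names `MeanLogitsMoE`, `expert_ru`, and `expert_zh` are illustrative assumptions, not the actual classes in this repository:

```python
import torch
import torch.nn as nn

class MeanLogitsMoE(nn.Module):
    """Toy sketch: merge two decoder-only experts that share one frozen
    embedding matrix by averaging their output logits."""

    def __init__(self, shared_embed: nn.Embedding, expert_ru: nn.Module, expert_zh: nn.Module):
        super().__init__()
        self.embed = shared_embed    # identical and frozen in both experts
        self.expert_ru = expert_ru   # transformer blocks + LM head (Russian-oriented)
        self.expert_zh = expert_zh   # transformer blocks + LM head (Chinese-oriented)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        h = self.embed(input_ids)              # same token representations feed both experts
        logits_ru = self.expert_ru(h)          # each expert maps hidden states to vocab logits
        logits_zh = self.expert_zh(h)
        return (logits_ru + logits_zh) / 2.0   # "mean logits" fusion: no retraining or alignment
```

Because both experts see identical input representations, averaging their logits is a well-defined operation over the same vocabulary, which is what makes the merge possible without any joint training.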

## Key Features

  • Frozen, Unicode/visual token embeddings: All tokens (for all supported languages) share the same frozen embedding matrix, based on Unicode and visual glyph forms, not statistical co-occurrence (a toy illustration follows this list).
  • Direct Mixture-of-Experts merge: Two language models (Russian-, Chinese-oriented) are combined without retraining via simple logits averaging, made possible by the strictly-shared embeddings.
  • Demo-scale: Trained on a modest dataset (9B tokens), with a small SFT fraction (~10%), intended to illustrate the principle, not to maximize absolute scores.
  • Comparison available: Separately released standard (unfrozen embeddings) models for direct comparison of convergence and generalization.
  • Extremely "clean" codebase: No reliance on exotic pipeline tricks; clear transformer architecture, easy to review and experiment with.
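For intuition only, a minimal sketch of what "frozen, non-semantic embeddings" means operationally. The released model derives its vectors from Unicode code points and visual glyph forms as described in the papers; the fixed random surrogate below is an assumption used purely to keep the example self-contained:

```python
import torch
import torch.nn as nn

def build_frozen_embedding(vocab_size: int, dim: int) -> nn.Embedding:
    """Toy stand-in for the frozen token-embedding matrix: the essential
    property is only that the weights are fixed and never trained."""
    g = torch.Generator().manual_seed(0)                  # deterministic surrogate for visual features
    weight = torch.randn(vocab_size, dim, generator=g)
    emb = nn.Embedding(vocab_size, dim, _weight=weight)   # shared across all experts / languages
    emb.weight.requires_grad_(False)                      # strictly frozen during training
    return emb
```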

## Use Case / Intended Purpose

This model is not an end-user chatbot solution.
Its purpose is:

  • To demonstrate new possibilities in LM architecture:
    • Multilingual/multimodal MoE with frozen, shared embeddings
    • Modular, "plug-and-play" scaling and mixing of LMs
    • Comparison between frozen and unfrozen/learnable embeddings in real convergence
  • As a reference implementation for research communities investigating model unification, low-resource language mixing, or the question of where "meaning" emerges inside LLM architectures.

## Evaluation

Benchmark results (test set mean ± std; MMLU averaged across tasks):

  • MMLU: 23.44% ± 0.28%
  • ARC-e: 23.74% ± 1.02%
  • ARC-c: 25.28% ± 2.07%
  • C-SENSE: 19.69% ± 1.13%
  • SQuAD: 19.73% ± 1.45%

BLEU:

  • en-ru: 6.52% ± 0.62%
  • ru-en: 6.22% ± 0.38%
  • en-zh: 2.93% ± 0.34%
  • zh-en: 4.95% ± 0.59%
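For reference, a minimal sketch of how a "mean ± std" figure over per-task scores can be aggregated. The per-task numbers below are placeholders, not the actual evaluation results:

```python
import statistics

# Hypothetical per-task accuracies (%) for one benchmark, e.g. MMLU subtasks.
per_task_scores = [23.1, 23.9, 23.3, 23.4]

mean = statistics.mean(per_task_scores)
std = statistics.stdev(per_task_scores)  # sample standard deviation across tasks
print(f"MMLU: {mean:.2f}% ± {std:.2f}%")
```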

## 🧑‍🔬 Citation & Concept

If you use or build upon this demo, please cite:

```bibtex
@article{bochkov2025emergent,
      title={Emergent Semantics Beyond Token Embeddings: Transformer {LM}s with Frozen Visual Unicode Representations},
      author={Andrey Bochkov},
      journal={Transactions on Machine Learning Research},
      issn={2835-8856},
      year={2025},
      url={https://openreview.net/forum?id=Odh8IynO1o}
}

@misc{bochkov2025growingtransformersmodularcomposition,
      title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
      author={A. Bochkov},
      year={2025},
      eprint={2507.07129},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.07129}
}
```

This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs — a step toward modular, fusable, multilingual LMs.

## Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained('Bochkov/best_bvv_moe', trust_remote_code=True).to('cuda')
tokenizer = AutoTokenizer.from_pretrained('Bochkov/best_bvv_moe')

inputs = tokenizer("Hello, мир! ", return_tensors="pt").to('cuda')
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True
)
print(tokenizer.decode(outputs[0]))
```
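Since the merged experts cover both languages, the same call pattern works for a Chinese prompt. The snippet below reuses the `model` and `tokenizer` loaded above; the sampling parameters are simply copied from the example and can be adjusted:

```python
# Chinese prompt through the same merged model and tokenizer as above.
inputs = tokenizer("你好，世界！", return_tensors="pt").to('cuda')
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True
)
print(tokenizer.decode(outputs[0]))
```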