Matryoshka Text Embedding v1
A multilingual text embedding model with Matryoshka Representation Learning, allowing flexible embedding dimensions from 64D to 1024D.
Model Overview
This model implements Matryoshka Representation Learning, enabling you to truncate embeddings to different dimensions while maintaining good performance. This allows you to balance accuracy, speed, and storage based on your specific needs.
Key Features
- Flexible Dimensions: Choose from 7 different embedding sizes (64D, 128D, 256D, 384D, 512D, 768D, 1024D)
- Multilingual Support: Trained on 100+ languages
- Base Architecture: XLM-RoBERTa
- Max Sequence Length: 8192 tokens
Quick Start
Installation
```bash
pip install sentence-transformers
```
Basic Usage
```python
from sentence_transformers import SentenceTransformer

# Load model
model = SentenceTransformer("lumees/lumees-matryoshka-embedding-v1")

# Full precision (1024D)
embeddings = model.encode(["Your text here"])

# Balanced mode (512D) - recommended for most use cases
embeddings = model.encode(["Your text here"], truncate_dim=512)

# Fast mode (256D) - for high-throughput applications
embeddings = model.encode(["Your text here"], truncate_dim=256)

# Ultra-fast mode (128D) - for real-time applications
embeddings = model.encode(["Your text here"], truncate_dim=128)
```
Performance Benchmarks
SciFact (Scientific Document Retrieval)
| Dimension | NDCG@10 | Relative Performance |
|---|---|---|
| 1024D | 0.6308 | 100.0% |
| 768D | 0.6277 | 99.5% |
| 512D | 0.6114 | 96.9% |
| 384D | 0.6035 | 95.7% |
| 256D | 0.5614 | 89.0% |
| 128D | 0.4732 | 75.0% |
| 64D | 0.3317 | 52.6% |
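As a rough guide to the accuracy/storage trade-off, the relative-performance column is each score divided by the 1024D score, and float32 storage grows linearly with dimension. A small illustrative calculation using the numbers from the table above:

```python
# NDCG@10 values copied from the SciFact table; float32 = 4 bytes per dimension
ndcg = {1024: 0.6308, 768: 0.6277, 512: 0.6114, 384: 0.6035,
        256: 0.5614, 128: 0.4732, 64: 0.3317}
for dim, score in ndcg.items():
    print(f"{dim:>5}D  {score / ndcg[1024]:6.1%} of full NDCG@10  "
          f"{dim * 4:>5} bytes/vector (float32)")
```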
STSBenchmark (English Semantic Similarity)
- Spearman: 0.8506 (1024D)
- Pearson: 0.8381 (1024D)
STS17 (Multilingual Semantic Similarity)
Average Spearman Correlation across languages: 0.8096
Performance by language pair (1024D):
- Spanish (es-es): 0.8808
- English (en-en): 0.8740
- English-German (en-de): 0.8245
- Korean (ko-ko): 0.8210
- Dutch-English (nl-en): 0.8190
- French-English (fr-en): 0.8157
- Italian-English (it-en): 0.8152
- Arabic (ar-ar): 0.8056
- Spanish-English (es-en): 0.7660
- English-Turkish (en-tr): 0.7484
- English-Arabic (en-ar): 0.7191
Use Cases
High Accuracy Applications (768D-1024D)
- Scientific literature search
- Legal document retrieval
- Medical information systems
Balanced Production (512D) - Recommended
- General web search
- E-commerce product search
- Content recommendation engines
- Knowledge base retrieval
High-Throughput Systems (256D-384D)
- Real-time search APIs
- Large-scale document indexing
- Social media search
Mobile & Edge Devices (64D-128D)
- Mobile applications
- IoT devices
- Browser-based search
- Resource-constrained environments
Advanced Usage
Semantic Search
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("lumees/lumees-matryoshka-embedding-v1")

# Index documents with 512D (good accuracy/speed balance)
documents = [
    "Artificial intelligence is transforming healthcare.",
    "Machine learning models require large datasets.",
    "Quantum computing promises exponential speedups."
]
doc_embeddings = model.encode(documents, truncate_dim=512)

# The query must use the same dimension as the indexed documents
query = "How is AI used in medicine?"
query_embedding = model.encode(query, truncate_dim=512)

# Compute cosine similarities and pick the best match
similarities = util.cos_sim(query_embedding, doc_embeddings)[0]
top_result = int(similarities.argmax())
print(f"Most relevant: {documents[top_result]}")
```
Integration with FAISS
```python
import faiss  # pip install faiss-cpu (or faiss-gpu)

# Reuses `model`, `documents`, and `query` from the semantic search example above.
# Create 512D embeddings and convert to float32, which FAISS expects
embeddings = model.encode(documents, truncate_dim=512).astype("float32")

# Build a flat inner-product index; with L2-normalized vectors,
# inner product is equivalent to cosine similarity
dimension = 512
index = faiss.IndexFlatIP(dimension)
faiss.normalize_L2(embeddings)
index.add(embeddings)

# Encode, normalize, and search the query the same way
query_embedding = model.encode(query, truncate_dim=512).astype("float32").reshape(1, -1)
faiss.normalize_L2(query_embedding)
distances, indices = index.search(query_embedding, k=3)
```
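IndexFlatIP performs exact search; because the vectors are L2-normalized first, the inner-product scores are equivalent to cosine similarities. For very large corpora, an approximate index such as faiss.IndexIVFFlat or faiss.IndexHNSWFlat trades a small amount of recall for much faster queries.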
Technical Details
Architecture
- Base: XLM-RoBERTa transformer encoder
- Embedding Dimensions: 1024 (full) with 7 supported truncation levels
- Max Sequence Length: 8192 tokens
- Vocabulary Size: 250,002 tokens
- Parameters: ~568M
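These properties can be checked at load time; a minimal sketch using standard sentence-transformers attributes (expected values shown as comments):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("lumees/lumees-matryoshka-embedding-v1")
print(model.get_sentence_embedding_dimension())  # expected: 1024
print(model.max_seq_length)                      # expected: 8192
print(model.tokenizer.vocab_size)                # expected: 250002
```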
Training
- Technique: Matryoshka Representation Learning
- Languages: 100+ languages
- Max Input Length: 8192 tokens
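The exact training recipe is not published in this card, but Matryoshka Representation Learning is commonly implemented in sentence-transformers by wrapping a base loss in MatryoshkaLoss over the supported dimensions. A rough sketch, with a hypothetical placeholder base encoder rather than this model's actual setup:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Hypothetical base encoder; not this model's actual training configuration
base_model = SentenceTransformer("path/to/multilingual-base-encoder")

# Apply the base loss at every supported truncation level so that each prefix
# of the embedding remains useful on its own
inner_loss = MultipleNegativesRankingLoss(base_model)
loss = MatryoshkaLoss(
    base_model,
    inner_loss,
    matryoshka_dims=[1024, 768, 512, 384, 256, 128, 64],
)
```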
Model Files
- pytorch_model.bin - Model weights
- config.json - Model configuration
- tokenizer.json - Tokenizer configuration
- lumees_config.json - Matryoshka-specific configuration
License
This model is released under the CC-BY-NC-4.0 (Creative Commons Attribution-NonCommercial 4.0 International) license.
See the LICENSE file for full details and acknowledgments.
Acknowledgments
This model builds upon important foundational work:
- XLM-RoBERTa: Base architecture for multilingual representations
- BAAI: For their contributions through the RetroMAE and BGE-M3 papers
- Matryoshka Representation Learning: Training methodology (Kusupati et al., 2022)
Citation
If you use this model in your research or application, please cite:
```bibtex
@misc{matryoshka-text-embedding-v1,
  title={Matryoshka Text Embedding v1},
  author={Hasan Kurşun and Kerem Berkay Yanık},
  year={2025},
  url={https://huggingface.co/lumees/lumees-matryoshka-embedding-v1},
  organization={Lumees},
  contact={[email protected]},
  website={https://lumees.io}
}
```
Evaluation results (self-reported)
- SciFact (test set): NDCG@10 0.631, NDCG@5 0.606, NDCG@3 0.578, NDCG@1 0.510
- STSBenchmark (test set): Spearman 0.851, Pearson 0.838
- STS17 (test set): Spearman 0.874 (en-en), 0.881 (es-es), 0.821 (ko-ko), 0.806 (ar-ar)