
Vector Database Dataset

An embeddings dataset generated for vector database training and evaluation, available in multiple formats.

Dataset Summary

This dataset contains 500,000 text samples with high-quality vector embeddings generated using Qwen/Qwen3-Embedding-8B from the wikimedia/wikipedia dataset. The dataset is designed for vector database training, similarity search, and retrieval tasks.

Dataset Structure

  • Base dataset: 500,000 samples with embeddings
  • Query dataset: 100,000 query samples
  • Embedding dimension: 4096

Supported Tasks

  • Feature Extraction: Use embeddings directly for downstream tasks
  • Similarity Search: Find similar documents using vector similarity
  • Text Retrieval: Build search and retrieval systems
  • Vector Database Training: Train and evaluate vector database systems

Languages

  • English (primary)

Repository Structure

πŸ“ parquet/

Contains parquet files compatible with HuggingFace dataset viewer:

  • base.parquet - Main dataset with text and embeddings
  • queries.parquet - Query subset for evaluation

πŸ“ fvecs/

Contains .fvecs files for DiskANN compatibility:

  • base.fvecs - Base vectors in fvecs format
  • queries.fvecs - Query vectors in fvecs format

πŸ“ fbin/

Contains .fbin files for DiskANN compatibility:

  • base.fbin - Base vectors in fbin format
  • queries.fbin - Query vectors in fbin format

πŸ“ diskann/

Contains pre-built DiskANN index files:

  • gt_*.fbin - Ground truth file
  • index_*.index - DiskANN index files
  • Additional index metadata files

Usage

Loading with HuggingFace Datasets

from datasets import load_dataset

# Load the dataset (uses parquet files automatically)
dataset = load_dataset("maknee/wikipedia_qwen_8b")
base_data = dataset['base'] 
query_data = dataset['queries']

# Access embeddings and texts
import numpy as np
embeddings = np.array(base_data['embedding'])
texts = base_data['text']
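With the embeddings loaded, a basic cosine-similarity search can be sketched as below. The snippet uses a small synthetic matrix so it runs standalone; swap in the `embeddings` array from above for real use (the helper name `top_k_cosine` is illustrative, not part of the dataset tooling):

```python
import numpy as np

def top_k_cosine(query: np.ndarray, corpus: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k corpus vectors most similar to the query."""
    # Normalize so the dot product equals cosine similarity
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q
    # argpartition is O(n); fully sort only the top k
    top = np.argpartition(-scores, k)[:k]
    return top[np.argsort(-scores[top])]

# Demo with random vectors standing in for the real 4096-d embeddings
rng = np.random.default_rng(0)
corpus = rng.standard_normal((1000, 64)).astype(np.float32)
query = corpus[42] + 0.01 * rng.standard_normal(64).astype(np.float32)
print(top_k_cosine(query, corpus, k=3))
```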

Using .fvecs files with DiskANN

# Download and use .fvecs files
from huggingface_hub import hf_hub_download

base_fvecs = hf_hub_download(repo_id="maknee/wikipedia_qwen_8b", filename="fvecs/base.fvecs")
query_fvecs = hf_hub_download(repo_id="maknee/wikipedia_qwen_8b", filename="fvecs/queries.fvecs")

# Load with your DiskANN pipeline
# (Implementation depends on your DiskANN setup)
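If your pipeline needs the raw vectors, the standard .fvecs layout (each record is a little-endian int32 dimension prefix followed by that many float32 components) can be read with NumPy. A minimal sketch, using a tiny synthetic file for the round-trip demo:

```python
import numpy as np

def read_fvecs(path: str) -> np.ndarray:
    """Read an .fvecs file: each record is [int32 dim][dim x float32]."""
    raw = np.fromfile(path, dtype=np.int32)
    dim = raw[0]
    # Each record occupies dim + 1 int32-sized slots
    records = raw.reshape(-1, dim + 1)
    # Drop the per-record dim prefix, reinterpret the rest as float32
    return records[:, 1:].copy().view(np.float32)

# Round-trip demo with a tiny synthetic file
vectors = np.arange(12, dtype=np.float32).reshape(3, 4)
with open("demo.fvecs", "wb") as f:
    for v in vectors:
        np.int32(v.size).tofile(f)
        v.tofile(f)
print(read_fvecs("demo.fvecs").shape)  # (3, 4)
```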

Using .fbin files with DiskANN

# Download and use .fbin files
from huggingface_hub import hf_hub_download

base_fbin = hf_hub_download(repo_id="maknee/wikipedia_qwen_8b", filename="fbin/base.fbin")
query_fbin = hf_hub_download(repo_id="maknee/wikipedia_qwen_8b", filename="fbin/queries.fbin")

# Load with DiskANN
# .fbin format is the native DiskANN format with header: [num_vectors, dimensions] followed by vectors
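Following the header layout described above (two int32 values for [num_vectors, dimensions], then the float32 data), a minimal NumPy reader might look like this (the helper name `read_fbin` is illustrative):

```python
import numpy as np

def read_fbin(path: str) -> np.ndarray:
    """Read a .fbin file: [int32 num][int32 dim] header, then num*dim float32."""
    with open(path, "rb") as f:
        num, dim = np.fromfile(f, dtype=np.int32, count=2)
        return np.fromfile(f, dtype=np.float32, count=num * dim).reshape(num, dim)

# Round-trip demo with a tiny synthetic file
vectors = np.arange(8, dtype=np.float32).reshape(2, 4)
with open("demo.fbin", "wb") as f:
    np.array(vectors.shape, dtype=np.int32).tofile(f)
    vectors.tofile(f)
print(read_fbin("demo.fbin").shape)  # (2, 4)
```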

Using Pre-built DiskANN Index

# Download index files
from huggingface_hub import hf_hub_download
import os

# Create local directory for index
os.makedirs("diskann_index", exist_ok=True)

# Download all index files (adjust filenames as needed)
index_files = ["gt_100.fbin", "index_64_100_256_disk.index"]  # Example names
for filename in index_files:
    hf_hub_download(
        repo_id="maknee/wikipedia_qwen_8b",
        filename=f"diskann/{filename}",
        local_dir="diskann_index"
    )

# Use with DiskANN search
# (Implementation depends on your DiskANN setup)

File Formats

  • Parquet: Efficient columnar format, compatible with pandas/HuggingFace
  • fvecs: Binary format for vector data, used by many vector search libraries
  • fbin: Native DiskANN binary format with header containing metadata
  • DiskANN: Optimized index format for fast similarity search

Dataset Information

  • Embedding model: Qwen/Qwen3-Embedding-8B
  • Source: wikimedia/wikipedia
  • Size: 500,000 base samples, 100,000 query samples
  • Embedding dimension: 4096
  • Formats: Parquet, FVECS, FBIN, DiskANN indices

Citation

If you use this dataset in your research, please cite:

@dataset{huggingface_embeddings_maknee_wikipedia_qwen_8b,
  title={Vector Database Embeddings Dataset},
  author={Henry Zhu},
  year={2025},
  url={https://huggingface.co/datasets/maknee/wikipedia_qwen_8b}
}

License

This dataset is released under the MIT License. See the LICENSE file for details.


Generated with the DiskANN embedding generation tool.
