---
configs:
- config_name: unicode_16d
  data_files:
  - split: train
    path: data/unicode/train_16d-*.parquet
- config_name: unicode_19d
  data_files:
  - split: train
    path: data/unicode/train_19d-*.parquet
- config_name: unicode_22d
  data_files:
  - split: train
    path: data/unicode/train_22d-*.parquet
- config_name: unicode_24d
  data_files:
  - split: train
    path: data/unicode/train_24d-*.parquet
- config_name: unicode_25d
  data_files:
  - split: train
    path: data/unicode/train_25d-*.parquet
- config_name: unicode_29d
  data_files:
  - split: train
    path: data/unicode/train_29d-*.parquet
- config_name: unicode_32d
  data_files:
  - split: train
    path: data/unicode/train_32d-*.parquet
- config_name: unicode_50d
  data_files:
  - split: train
    path: data/unicode/train_50d-*.parquet
- config_name: unicode_64d
  data_files:
  - split: train
    path: data/unicode/train_64d-*.parquet
- config_name: unicode_100d
  data_files:
  - split: train
    path: data/unicode/train_100d-*.parquet
- config_name: unicode_128d
  data_files:
  - split: train
    path: data/unicode/train_128d-*.parquet
- config_name: unicode_256d
  data_files:
  - split: train
    path: data/unicode/train_256d-*.parquet
- config_name: unicode_500d
  data_files:
  - split: train
    path: data/unicode/train_500d-*.parquet
- config_name: unicode_512d
  data_files:
  - split: train
    path: data/unicode/train_512d-*.parquet
- config_name: unicode_1024d
  data_files:
  - split: train
    path: data/unicode/train_1024d-*.parquet
- config_name: unicode_1280d
  data_files:
  - split: train
    path: data/unicode/train_1280d-*.parquet
- config_name: unicode_2048d
  data_files:
  - split: train
    path: data/unicode/train_2048d-*.parquet
- config_name: unicode_2500d
  data_files:
  - split: train
    path: data/unicode/train_2500d-*.parquet
- config_name: unicode_4096d
  data_files:
  - split: train
    path: data/unicode/train_4096d-*.parquet
- config_name: wordnet_eng_16d
  data_files:
  - split: train
    path: data/wordnet_eng/train_16d-*.parquet
- config_name: wordnet_eng_19d
  data_files:
  - split: train
    path: data/wordnet_eng/train_19d-*.parquet
- config_name: wordnet_eng_22d
  data_files:
  - split: train
    path: data/wordnet_eng/train_22d-*.parquet
- config_name: wordnet_eng_24d
  data_files:
  - split: train
    path: data/wordnet_eng/train_24d-*.parquet
- config_name: wordnet_eng_25d
  data_files:
  - split: train
    path: data/wordnet_eng/train_25d-*.parquet
- config_name: wordnet_eng_29d
  data_files:
  - split: train
    path: data/wordnet_eng/train_29d-*.parquet
- config_name: wordnet_eng_32d
  data_files:
  - split: train
    path: data/wordnet_eng/train_32d-*.parquet
- config_name: wordnet_eng_50d
  data_files:
  - split: train
    path: data/wordnet_eng/train_50d-*.parquet
- config_name: wordnet_eng_64d
  data_files:
  - split: train
    path: data/wordnet_eng/train_64d-*.parquet
- config_name: wordnet_eng_100d
  data_files:
  - split: train
    path: data/wordnet_eng/train_100d-*.parquet
- config_name: wordnet_eng_128d
  data_files:
  - split: train
    path: data/wordnet_eng/train_128d-*.parquet
- config_name: wordnet_eng_256d
  data_files:
  - split: train
    path: data/wordnet_eng/train_256d-*.parquet
- config_name: wordnet_eng_500d
  data_files:
  - split: train
    path: data/wordnet_eng/train_500d-*.parquet
- config_name: wordnet_eng_512d
  data_files:
  - split: train
    path: data/wordnet_eng/train_512d-*.parquet
- config_name: wordnet_eng_1024d
  data_files:
  - split: train
    path: data/wordnet_eng/train_1024d-*.parquet
- config_name: wordnet_eng_1280d
  data_files:
  - split: train
    path: data/wordnet_eng/train_1280d-*.parquet
- config_name: wordnet_eng_2048d
  data_files:
  - split: train
    path: data/wordnet_eng/train_2048d-*.parquet
- config_name: wordnet_eng_2500d
  data_files:
  - split: train
    path: data/wordnet_eng/train_2500d-*.parquet
- config_name: wordnet_eng_4096d
  data_files:
  - split: train
    path: data/wordnet_eng/train_4096d-*.parquet
dataset_info:
  features:
  - name: token_id
    dtype: int32
  - name: token
    dtype: string
  - name: definition
    dtype: string
  - name: volume
    dtype: float32
  - name: cardinal_id
    dtype: int8
  - name: crystal
    sequence: float32
  - name: origin
    dtype: string
  - name: unicode_codepoint
    dtype: int32
  - name: synset_id
    dtype: string
  - name: language
    dtype: string
license: apache-2.0
task_categories:
- feature-extraction
tags:
- symbolic-embeddings
- geometric-embeddings
- vocabulary
- unicode
- wordnet
size_categories:
- 100K<n<1M
---

My deepest apologies if you had already built a solution around the old layout and this change broke your code. The intention is to make this dataset convenient to use, not to require a multi-layered workaround just to access it.

# Geometric Vocabulary Embeddings

This is the **complete unified collection** of all geometric vocabulary embeddings, resharded with optimized shard sizes to avoid rate limits and improve loading performance.

## 🚀 Optimizations

- **Pooled small shards**: files smaller than 50 MB were combined until they reached ~100k rows
- **Split large shards**: files larger than 250 MB were split
- **Target shard size**: ~100,000 rows per file
- **Result**: 56 optimized shards (down from 191 original files)

## 📊 Available Dimensions

**19 dimensions available:** `16d`, `19d`, `22d`, `24d`, `25d`, `29d`, `32d`, `50d`, `64d`, `100d`, `128d`, `256d`, `500d`, `512d`, `1024d`, `1280d`, `2048d`, `2500d`, `4096d`

## 🎯 Usage

```python
from datasets import load_dataset

# Load a specific dimension config (64d is fairly weak without patch projection).
ds = load_dataset("AbstractPhil/geometric-vocab", name="unicode_64d")
print(ds.keys())  # dict_keys(['train'])

# Load the train split directly.
ds = load_dataset("AbstractPhil/geometric-vocab", name="unicode_64d", split="train")
# for item in ds: ... do stuff

# Streaming is advised: this is a fair-sized dataset, though not a particularly
# large footprint by default.
ds = load_dataset("AbstractPhil/geometric-vocab", name="unicode_64d", split="train", streaming=True)
```

Formerly:

```python
ds = load_dataset("AbstractPhil/geometric-vocab", "unicode", split="train_512d")
# My deepest apologies for the additional problems imposed by this incorrect split layout.
```

The new per-config layout effectively prevents you from accidentally downloading all the splits at once, with no weird janky workarounds required.

## Data Format and Code

```python
import numpy as np

def _deterministic_pentachoron(center_vec: np.ndarray) -> np.ndarray:
    d = center_vec.shape[0]
    proposals = np.stack([
        center_vec,
        np.roll(center_vec, 1),
        np.roll(center_vec, 3) * np.sign(center_vec + 1e-8),
        np.roll(center_vec, 7) - center_vec,
        np.roll(center_vec, 11) + center_vec,
    ], 0).astype(np.float32)

    # Normalize rows with the L1 norm
    norms = np.sum(np.abs(proposals), axis=1, keepdims=True) + 1e-8
    Q = proposals / norms

    # Gram-Schmidt orthogonalization with L1 re-normalization
    for i in range(5):
        for j in range(i):
            Q[i] -= np.dot(Q[i], Q[j]) * Q[j]
        Q[i] /= (np.sum(np.abs(Q[i])) + 1e-8)

    # Apply scaling factors to spread the vertices
    gamma = np.array([1.0, 0.9, -0.8, 1.1, 1.2], np.float32)
    X = np.zeros((5, d), np.float32)
    for i in range(5):
        X[i] = center_vec + gamma[i] * Q[i]

    # Center the pentachoron
    return X - X.mean(0, keepdims=True)
```

This function is currently hosted in the lattice_geometry repo and it is imperfect; keep in mind it is meant as a starting point.
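Each row's `volume` field stores the Cayley-Menger volume of the token's simplex. As a sanity check against the generator above, here is a minimal sketch of the standard Cayley-Menger computation for a 5-vertex simplex, reusing `_deterministic_pentachoron`. The exact normalization used when the dataset was generated is not documented here, so treat this `cayley_menger_volume` helper as an assumption, not the canonical generator code.

```python
import math
import numpy as np

def cayley_menger_volume(X: np.ndarray) -> float:
    """Volume of the simplex spanned by the rows of X (k+1 vertices in d dims)."""
    k = X.shape[0] - 1  # simplex order: 4 for a pentachoron
    # Squared pairwise distances between vertices
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Bordered Cayley-Menger matrix: a 0/1 border around the distance matrix
    B = np.ones((k + 2, k + 2), dtype=np.float64)
    B[0, 0] = 0.0
    B[1:, 1:] = d2
    coeff = (-1.0) ** (k + 1) / (2.0 ** k * math.factorial(k) ** 2)
    v2 = coeff * np.linalg.det(B)
    return float(np.sqrt(max(v2, 0.0)))  # clamp tiny negatives from round-off

# Example: volume of a crystal built from a random 64d center vector
center = np.random.default_rng(0).standard_normal(64).astype(np.float32)
print(cayley_menger_volume(_deterministic_pentachoron(center)))
```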
## 📦 Dataset Structure

Each token is embedded as a 5-vertex simplex (a pentachoron) in n-dimensional space:

```python
# Load and use; this should paste straight into Colab and work with no fuss.
# streaming=False downloads the split; HF datasets currently cannot stream a split from disk.
from datasets import load_dataset
import numpy as np

ds = load_dataset("AbstractPhil/geometric-vocab", name="unicode_64d", split="train", streaming=False)

test_crystal = {}

# This is NOT for production use. It only demonstrates loading the repo, preparing
# one crystal, and then breaking out of the loop.
# For production you will want to batch with workers, prefetch, and use proper
# acceleration or a multi-GPU-capable pipeline (see the sketch at the end of this card).
for item in ds:
    token = item["token"]           # The token: a raw string or character depending on the config; for unicode it is a character.
    crystal_flat = item["crystal"]  # Flattened array; reshape it into its true form.
    crystal = np.array(crystal_flat).reshape(5, 64)  # 5 vertices x 64 dimensions
    volume = item["volume"]         # Cayley-Menger volume, used to calculate trajectory and
                                    # delta so combination variants don't overlap.
    test_crystal = {"token": token, "crystal": crystal, "volume": volume}
    break

print("Test case;\n")
print(test_crystal)
```

## 🔄 Migration from Legacy Repositories

This optimized dataset replaces all the individual per-dimension repositories, with better shard organization for improved performance.

## 📄 License

Apache 2.0
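## ⚡ Batched Loading Sketch

As the production note in the loading example above says, real workloads should batch with workers and prefetch rather than iterate item by item. Below is a minimal sketch of one way to do that with a streamed split and a PyTorch `DataLoader`; the `collate` helper, batch size, and worker count are illustrative assumptions, not part of this dataset's API.

```python
import numpy as np
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader

# Stream the split; HF's streaming IterableDataset works with a torch DataLoader.
ds = load_dataset("AbstractPhil/geometric-vocab", name="unicode_64d", split="train", streaming=True)

def collate(batch):
    # Reshape each flattened crystal to 5 vertices x 64 dims and stack into one tensor.
    crystals = np.stack([np.asarray(ex["crystal"], dtype=np.float32).reshape(5, 64) for ex in batch])
    volumes = np.asarray([ex["volume"] for ex in batch], dtype=np.float32)
    return {
        "token": [ex["token"] for ex in batch],
        "crystal": torch.from_numpy(crystals),  # (B, 5, 64)
        "volume": torch.from_numpy(volumes),    # (B,)
    }

# num_workers=0 keeps the sketch simple; raise it once you shard the stream per worker.
loader = DataLoader(ds, batch_size=256, collate_fn=collate, num_workers=0)
batch = next(iter(loader))
print(batch["crystal"].shape)  # torch.Size([256, 5, 64])
```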