---
license: mit
dataset_info:
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  - name: negative
    dtype: string
  - name: sim_pos
    dtype: float64
  - name: sim_neg
    dtype: float64
  - name: len_anc
    dtype: int64
  - name: len_pos
    dtype: int64
  - name: len_neg
    dtype: int64
  splits:
  - name: train
    num_bytes: 614206347
    num_examples: 1000000
  download_size: 308842392
  dataset_size: 614206347
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Arabic 1 Million Triplets (curated):

This is a curated dataset for Arabic ColBERT and SBERT models (among other uses).

In addition to the `anchor`, `positive` and `negative` columns, the dataset has two extra columns, `sim_pos` and `sim_neg`, which are the cosine similarities between the anchor (query) and the positive and negative examples, respectively. The last three columns are the lengths (in words) of the `anchor`, `positive` and `negative` examples. Length uses a simple split on spaces, not tokens.
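
As a quick way to inspect these columns, the dataset can be loaded with the `datasets` library. This is a minimal sketch, and the repository ID below is a placeholder that should be replaced with this dataset's actual path on the Hub:

```python
from datasets import load_dataset

# Placeholder repo ID -- replace with this dataset's actual path on the Hub.
ds = load_dataset("user/arabic-1m-triplets-curated", split="train")

row = ds[0]
print(row["anchor"])
print("sim_pos:", row["sim_pos"], "sim_neg:", row["sim_neg"])

# len_* columns are whitespace word counts, not tokenizer tokens.
print(row["len_anc"], len(row["anchor"].split()))
```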

The cosine similarities were computed with the embedding model [AbderrahmanSkiredj1/Arabic_text_embedding_for_sts](https://huggingface.co/AbderrahmanSkiredj1/Arabic_text_embedding_for_sts), itself inspired by [Omar Nicar](https://huggingface.co/Omartificial-Intelligence-Space), who made the [first Arabic SBERT embeddings model](https://huggingface.co/Omartificial-Intelligence-Space/Arabert-all-nli-triplet-Matryoshka) and a triplets dataset based on NLI.
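
For reference, similarities of this kind can be reproduced with `sentence-transformers` along these lines. This is only a sketch: the exact encoding settings used to produce `sim_pos` and `sim_neg` are not documented here, and the example triplet is made up.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("AbderrahmanSkiredj1/Arabic_text_embedding_for_sts")

# Made-up example triplet (not taken from the dataset).
anchor = "ما هي عاصمة فرنسا؟"
positive = "عاصمة فرنسا هي باريس."
negative = "القاهرة هي عاصمة مصر."

emb = model.encode([anchor, positive, negative], convert_to_tensor=True)
sim_pos = util.cos_sim(emb[0], emb[1]).item()
sim_neg = util.cos_sim(emb[0], emb[2]).item()
print("sim_pos:", sim_pos, "sim_neg:", sim_neg)
```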

# Why another dataset?

While training an Arabic ColBERT model using a sample from the mMARCO dataset, I noticed retrieval issues. It is true that all of these triplet datasets are translated, but the quality was not up to expectation. I took the dataset used by the embedding model (NLI plus some 300K examples) and 1 million samples from mMARCO, removed lines that contained separate Latin words/phrases, and sampled 1 million rows of the combined data. Then I added the similarity and length columns.

This should enable researchers and users to filter based on several criteria (including hard negatives), as in the sketch below. This is not to say that the model used for the similarities was perfect: in some cases, examples annotated as negative were identical to the anchor/query. Adding the similarity columns took more time than training models.
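
As one illustration, a hard-negative style filter could keep rows where the negative is semantically close to the anchor yet clearly below the positive, and drop very short passages. The thresholds here are arbitrary assumptions rather than recommended values, and the repository ID is again a placeholder:

```python
from datasets import load_dataset

ds = load_dataset("user/arabic-1m-triplets-curated", split="train")  # placeholder repo ID

hard = ds.filter(
    lambda ex: ex["sim_pos"] > ex["sim_neg"]      # drop rows where the "negative" is closer than the positive
    and ex["sim_neg"] > 0.5                       # negative still close to the anchor -> hard negative
    and ex["sim_pos"] - ex["sim_neg"] > 0.05      # but with a clear margin in favor of the positive
    and ex["len_pos"] >= 3
    and ex["len_neg"] >= 3                        # skip very short passages
)
print(len(hard))
```

Tightening or loosening these thresholds trades off triplet difficulty against the number of rows kept.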

# Arabic SBERT and ColBERT models:

Filtered subsets based on certain criteria show impressive performance. Models will be uploaded and linked from here when ready.

If you saw earlier versions of triplets datasets under this account, they have been removed in favor of this one. If you downloaded or duplicated a triplets dataset from this account prior to Saturday 3 PM Jerusalem time on July 27th, 2024, you are advised to get the updated version.