| datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card |
|---|---|---|---|---|---|---|---|---|---|
| 
	danthepol/mnlp-m2-rag-eval | 
	danthepol | 
	2025-05-23T19:18:27Z | 0 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-05-23T19:16:12Z | 0 | 
	---
dataset_info:
  features:
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: dataset
    dtype: string
  splits:
  - name: test
    num_bytes: 4918807
    num_examples: 1000
  download_size: 2712384
  dataset_size: 4918807
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
 | 
| 
	owengretzinger/mlh-top-hackers-2020-2023 | 
	owengretzinger | 
	2025-03-27T15:11:01Z | 16 | 0 | 
	[
  "license:mit",
  "size_categories:n<1K",
  "format:json",
  "modality:text",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "region:us"
] | 
	[] | 
	2025-03-27T15:01:33Z | 0 | 
	---
license: mit
---
Between 2020 and 2023, MLH published an annual Top 50 Hackers list. I scraped this data from the top hacker pages; it is available as both JSON and CSV files, and a loading sketch follows.
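For example, a minimal sketch of loading the data straight from the Hub (assuming the JSON files resolve through the standard `datasets` loader):
```python
from datasets import load_dataset

# Loads the scraped top-hackers records from the Hub repository
hackers = load_dataset("owengretzinger/mlh-top-hackers-2020-2023")
print(hackers)
```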
Source code: https://github.com/owengretzinger/mlh-top-hackers-scrape | 
| 
	GeeveGeorge/miko-story-checkpoint-35 | 
	GeeveGeorge | 
	2024-10-01T14:37:16Z | 16 | 0 | 
	[
  "format:parquet",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-10-01T14:37:15Z | 0 | 
	---
dataset_info:
  features:
  - name: user
    dtype: 'null'
  - name: assistant
    dtype: 'null'
  splits:
  - name: train
    num_bytes: 0
    num_examples: 0
  download_size: 726
  dataset_size: 0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	withmartian/cs3_dataset_synonyms | 
	withmartian | 
	2025-01-09T05:32:16Z | 47 | 0 | 
	[
  "size_categories:100K<n<1M",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-01-09T05:32:05Z | 0 | 
	---
dataset_info:
  features:
  - name: command_set
    dtype: int64
  - name: table_name
    dtype: string
  - name: create_statement
    dtype: string
  - name: english_prompt
    dtype: string
  - name: sql_statement
    dtype: string
  - name: table_fields
    dtype: string
  - name: select
    dtype: string
  - name: order_by
    dtype: string
  splits:
  - name: train
    num_bytes: 112705709.625
    num_examples: 76500
  - name: validation
    num_bytes: 19889242.875
    num_examples: 13500
  - name: test
    num_bytes: 14732772.5
    num_examples: 10000
  download_size: 42707420
  dataset_size: 147327725.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
 | 
| 
	ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-GR-revised_NoQuant_16_32_0.01_64_BestF1 | 
	ferrazzipietro | 
	2024-11-28T17:42:50Z | 16 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-11-28T17:42:47Z | 0 | 
	---
dataset_info:
  features:
  - name: sentence
    dtype: string
  - name: entities
    list:
    - name: offsets
      sequence: int64
    - name: text
      dtype: string
    - name: type
      dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence: int64
  - name: ground_truth_word_level
    sequence: string
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  - name: labels
    sequence: int64
  - name: predictions
    sequence: string
  - name: ground_truth_labels
    sequence: string
  splits:
  - name: all_validation
    num_bytes: 172478
    num_examples: 94
  - name: test
    num_bytes: 1556265
    num_examples: 738
  download_size: 311863
  dataset_size: 1728743
configs:
- config_name: default
  data_files:
  - split: all_validation
    path: data/all_validation-*
  - split: test
    path: data/test-*
---
 | 
| 
	ai-chem/Cytotox | 
	ai-chem | 
	2025-06-02T13:37:31Z | 69 | 0 | 
	[
  "license:mit",
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "doi:10.57967/hf/5581",
  "region:us"
] | 
	[] | 
	2025-05-11T19:12:58Z | 0 | 
	---
dataset_info:
  features:
  - name: sn
    dtype: int64
  - name: material
    dtype: string
  - name: shape
    dtype: string
  - name: coat_functional_group
    dtype: string
  - name: synthesis_method
    dtype: string
  - name: surface_charge
    dtype: string
  - name: size_in_medium_nm
    dtype: float64
  - name: zeta_in_medium_mv
    dtype: float64
  - name: no_of_cells_cells_well
    dtype: float64
  - name: human_animal
    dtype: string
  - name: cell_source
    dtype: string
  - name: cell_tissue
    dtype: string
  - name: cell_morphology
    dtype: string
  - name: cell_age
    dtype: string
  - name: time_hr
    dtype: int64
  - name: concentration
    dtype: float64
  - name: test
    dtype: string
  - name: test_indicator
    dtype: string
  - name: viability_%
    dtype: float64
  - name: doi
    dtype: string
  - name: article_list
    dtype: int64
  - name: core_nm
    dtype: float64
  - name: hydrodynamic_nm
    dtype: float64
  - name: potential_mv
    dtype: float64
  - name: cell_type
    dtype: string
  - name: journal_name
    dtype: string
  - name: publisher
    dtype: string
  - name: year
    dtype: int64
  - name: title
    dtype: string
  - name: journal_is_oa
    dtype: bool
  - name: is_oa
    dtype: string
  - name: oa_status
    dtype: string
  - name: pdf
    dtype: string
  - name: access
    dtype: int64
  splits:
  - name: train
    num_bytes: 2557938
    num_examples: 5476
  download_size: 172810
  dataset_size: 2557938
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
---
Information about the dataset is detailed in the documentation:  
https://ai-chem.github.io/ChemX/overview/datasets_description.html  
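A minimal loading sketch (assuming the standard `datasets` Hub loader; the split name comes from the YAML above):
```python
from datasets import load_dataset

cytotox = load_dataset("ai-chem/Cytotox", split="train")  # 5,476 rows per the card
print(cytotox.column_names[:8])
```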
You can find the Croissant file in our GitHub repository:  
https://github.com/ai-chem/ChemX/tree/main/datasets/croissants   | 
| 
	emradar/0.1_augmented_implicit_hate_speech_dataset | 
	emradar | 
	2025-05-25T20:26:01Z | 0 | 0 | 
	[
  "task_categories:text-classification",
  "language:en",
  "license:mit",
  "size_categories:10K<n<100K",
  "format:arrow",
  "modality:text",
  "library:datasets",
  "library:mlcroissant",
  "region:us",
  "hate-speech"
] | 
	[
  "text-classification"
] | 
	2025-05-25T19:13:32Z | 0 | 
	---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- hate-speech
---
# Description
This is a public release of the dataset described in ElSherief et al. (2021) and augmented by Adar and Wiberg (2025).
This dataset card is a work in progress and will be improved over time.
## Contributions
Dataset augmented by [@emradar](https://github.com/emradar) and [@Wiberacci](https://github.com/Wiberacci).
## References
ElSherief, M., Ziems, C., Muchlinski, D., Anupindi, V., Seybolt, J., De Choudhury, M., & Yang, D. (2021). Latent Hatred: A Benchmark for Understanding Implicit Hate Speech. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP).
## Copyright
MIT License
Copyright (c) 2023 SALT
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | 
| 
	dwb2023/gdelt-event-2025-v3 | 
	dwb2023 | 
	2025-03-21T18:18:30Z | 18 | 0 | 
	[
  "license:cc-by-4.0",
  "size_categories:100K<n<1M",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-03-21T17:40:46Z | 0 | 
	---
license: cc-by-4.0
---
# Dataset Card for dwb2023/gdelt-event-2025-v3
This dataset contains global event records from the GDELT (Global Database of Events, Language, and Tone) Project, capturing real-world events and their characteristics across the globe through news media coverage.
## Dataset Details
### Dataset Description
The GDELT Event Database is a comprehensive repository of human societal-scale behavior and beliefs across all countries of the world, connecting every person, organization, location, count, theme, news source, and event across the planet into a single massive network. The database records what happens in every corner of the world, containing over a quarter-billion event records in over 300 categories.
- **Curated by:** The GDELT Project
- **Funded by:** Google Ideas, supported by Google Cloud Platform
- **Language(s) (NLP):** Multi-language source data, processed into standardized English format
- **License:** All GDELT event data is available for free download and use with proper attribution
- **Updates:** Every 15 minutes, 24/7
### Dataset Sources
- **Repository:** http://gdeltproject.org/
- **Paper:** Leetaru, Kalev and Schrodt, Philip. (2013). "GDELT: Global Data on Events, Language, and Tone, 1979-2012." International Studies Association Annual Conference, April 2013. San Francisco, CA.
- **Documentation:**
  - http://data.gdeltproject.org/documentation/GDELT-Event_Codebook-V2.0.pdf
  - https://www.gdeltproject.org/data/documentation/CAMEO.Manual.1.1b3.pdf
## Uses
### Direct Use
- Monitoring global events and conflicts in real-time
- Analyzing international relations and diplomatic activities
- Tracking social movements and protests
- Studying media coverage patterns
- Research in political science, international relations, and social sciences
- Crisis early warning systems
- Geopolitical risk assessment
### Out-of-Scope Use
- Real-time emergency response (due to potential reporting delays)
- Individual-level surveillance or tracking
- Definitive source of truth for events (should be cross-referenced with other sources)
- Prediction of future events without careful consideration of limitations
## Dataset Structure
The dataset consists of tab-delimited files with 61 fields per event record (a loading sketch follows the list), including:
1. Event Identification
   - GlobalEventID: Unique identifier for each event
   - DATEADDED: Timestamp in YYYYMMDDHHMMSS format
   - Day, MonthYear, Year: Various date formats
2. Actor Information (for both Actor1 and Actor2)
   - ActorCode: CAMEO-coded actor identifier
   - ActorName: Name of the actor
   - ActorCountryCode: 3-character country code
   - Various actor attribute codes (ethnic, religious, type)
3. Event Details
   - EventCode: CAMEO action code
   - EventBaseCode: Root event category
   - QuadClass: Primary event classification (Verbal/Material Cooperation/Conflict)
   - GoldsteinScale: Event impact score (-10 to +10)
   - AvgTone: Average tone of coverage (-100 to +100)
4. Geographic Information
   - Multiple geographic fields for each actor and action
   - Includes country codes, feature IDs, and lat/long coordinates
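As a quick illustration, here is a hedged sketch of loading this mirror and filtering on a few of the fields above (field names follow the GDELT 2.0 codebook; whether this Parquet mirror keeps them verbatim, and the split name, are assumptions):
```python
from datasets import load_dataset

ds = load_dataset("dwb2023/gdelt-event-2025-v3", split="train")

# QuadClass 4 = Material Conflict; GoldsteinScale runs from -10 to +10.
conflicts = ds.filter(
    lambda r: r["QuadClass"] == 4 and r["GoldsteinScale"] <= -5.0
)
print(len(conflicts), "high-impact conflict events")
```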
## Dataset Creation
### Curation Rationale
GDELT was created to capture and analyze global human society in real-time, providing a platform for understanding global behavior through media coverage. It processes news media from around the world to identify and code events using the CAMEO (Conflict and Mediation Event Observations) coding system.
### Curation Method
- Prefect-based Python extract script: https://gist.github.com/donbr/704789a6131bb4a92c9810185c63a16a
### Source Data
#### Data Collection and Processing
- Continuous monitoring of news media worldwide
- Automated processing using natural language processing
- Event coding using CAMEO taxonomy
- Geographic coding using full-text geocoding
- 15-minute update cycle
- Machine translation for non-English sources
#### Who are the source data producers?
Primary sources include:
- International news media
- Web news
- Broadcast transcripts
- Print media
- Various online platforms
### Personal and Sensitive Information
While the dataset primarily focuses on public events and public figures, it may contain:
- Names of public figures and officials
- Locations of public events
- Public statements and actions
- Media coverage details
## Bias, Risks, and Limitations
1. Media Bias
   - Over-representation of English-language media
   - Varying media coverage across regions
   - Event selection bias based on newsworthiness
2. Technical Limitations
   - Machine coding errors
   - Translation inaccuracies
   - Geographic coding challenges
   - Duplicate event reporting
3. Coverage Gaps
   - Limited coverage in media-restricted regions
   - Potential missed events in less-covered areas
   - Varying detail levels across events
### Recommendations
1. Users should:
   - Cross-reference critical events with other sources
   - Consider media bias in coverage
   - Account for regional differences in coverage
   - Use appropriate statistical methods for aggregation
   - Be aware of potential duplicates and coding errors
2. Best Practices:
   - Aggregate data over appropriate time periods (see the sketch after this list)
   - Consider confidence scores in analysis
   - Use multiple GDELT fields for validation
   - Account for regional and temporal variations in coverage
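For instance, a minimal pandas sketch of the time-period aggregation suggested above (continuing from the loading sketch; column names are assumptions, as before):
```python
import pandas as pd

df = conflicts.to_pandas()
df["date"] = pd.to_datetime(df["DATEADDED"].astype(str), format="%Y%m%d%H%M%S")

# Daily aggregation smooths over duplicate reports of the same event.
daily = df.groupby(df["date"].dt.date).agg(
    events=("GlobalEventID", "count"),
    mean_tone=("AvgTone", "mean"),
)
print(daily.head())
```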
## Citation
**BibTeX:**
```bibtex
@inproceedings{leetaru2013gdelt,
  title={GDELT: Global Data on Events, Language, and Tone, 1979-2012},
  author={Leetaru, Kalev and Schrodt, Philip},
  booktitle={International Studies Association Annual Conference},
  year={2013},
  address={San Francisco, CA}
}
```
**APA:**
Leetaru, K., & Schrodt, P. (2013). GDELT: Global Data on Events, Language, and Tone, 1979-2012. Paper presented at the International Studies Association Annual Conference, San Francisco, CA.
## Dataset Card Contact
dwb2023 | 
| 
	magnifi/Phi3_intent_v66_1_w_unknown_upper_lower | 
	magnifi | 
	2025-09-29T10:00:09Z | 35 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-09-24T18:32:27Z | 0 | 
	---
dataset_info:
  features:
  - name: Query
    dtype: string
  - name: true_intent
    dtype: string
  splits:
  - name: train
    num_bytes: 3143608.0
    num_examples: 43362
  - name: validation
    num_bytes: 25818.0
    num_examples: 383
  download_size: 1215798
  dataset_size: 3169426.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---
 | 
| 
	Marta102/MNLP_M3_dpo_dataset_updated | 
	Marta102 | 
	2025-06-04T13:29:37Z | 0 | 0 | 
	[
  "size_categories:100K<n<1M",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-06-04T12:18:15Z | 0 | 
	---
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: chosen
    dtype: string
  - name: rejected
    dtype: string
  - name: dataset
    dtype: string
  splits:
  - name: train
    num_bytes: 348868872
    num_examples: 112239
  - name: validation
    num_bytes: 61680674
    num_examples: 19809
  download_size: 228049045
  dataset_size: 410549546
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---
 | 
| 
	ali-sh07/PhenoBench_images_semantics | 
	ali-sh07 | 
	2025-05-17T02:31:49Z | 0 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:image",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-05-16T23:43:35Z | 0 | 
	---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype: image
  splits:
  - name: train
    num_bytes: 3926148196.654
    num_examples: 1407
  - name: val
    num_bytes: 2019915541.0
    num_examples: 772
  download_size: 5665498678
  dataset_size: 5946063737.653999
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
---
 | 
| 
	kevin017/bioS_inverse_QA_all_small_all_answer | 
	kevin017 | 
	2025-04-10T10:05:25Z | 16 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-04-10T10:05:19Z | 0 | 
	---
dataset_info:
  features:
  - name: question
    dtype: string
  - name: all_answers
    sequence: string
  splits:
  - name: train
    num_bytes: 75370
    num_examples: 350
  - name: test
    num_bytes: 75370
    num_examples: 350
  download_size: 44360
  dataset_size: 150740
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
 | 
| 
	mlfoundations-dev/embedding_apply_tiger_math | 
	mlfoundations-dev | 
	2025-02-26T23:08:40Z | 39 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-02-26T23:08:32Z | 0 | 
	---
dataset_info:
  features:
  - name: instruction_seed
    dtype: string
  - name: source
    dtype: string
  - name: embeddings
    sequence: float64
  - name: mean_positive_score
    dtype: float64
  - name: mean_negative_score
    dtype: float64
  - name: difference_score
    dtype: float64
  splits:
  - name: train
    num_bytes: 34283406.4
    num_examples: 4000
  download_size: 27034393
  dataset_size: 34283406.4
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	ZiyiXia/SWE-bench__style-3__fs-bm25 | 
	ZiyiXia | 
	2025-05-06T08:24:59Z | 0 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-05-06T08:24:53Z | 0 | 
	---
dataset_info:
  features:
  - name: instance_id
    dtype: string
  - name: text
    dtype: string
  - name: repo
    dtype: string
  - name: base_commit
    dtype: string
  - name: problem_statement
    dtype: string
  - name: hints_text
    dtype: string
  - name: created_at
    dtype: string
  - name: patch
    dtype: string
  - name: test_patch
    dtype: string
  - name: version
    dtype: string
  - name: FAIL_TO_PASS
    dtype: string
  - name: PASS_TO_PASS
    dtype: string
  - name: environment_setup_commit
    dtype: string
  splits:
  - name: test
    num_bytes: 8430964
    num_examples: 6
  download_size: 3885049
  dataset_size: 8430964
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
 | 
| 
	ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-GR-revised_NoQuant_32_64_0.05_64_BestF1 | 
	ferrazzipietro | 
	2024-11-28T17:47:34Z | 16 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-11-28T17:47:26Z | 0 | 
	---
dataset_info:
  features:
  - name: sentence
    dtype: string
  - name: entities
    list:
    - name: offsets
      sequence: int64
    - name: text
      dtype: string
    - name: type
      dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence: int64
  - name: ground_truth_word_level
    sequence: string
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  - name: labels
    sequence: int64
  - name: predictions
    sequence: string
  - name: ground_truth_labels
    sequence: string
  splits:
  - name: all_validation
    num_bytes: 172478
    num_examples: 94
  - name: test
    num_bytes: 1556265
    num_examples: 738
  download_size: 312438
  dataset_size: 1728743
configs:
- config_name: default
  data_files:
  - split: all_validation
    path: data/all_validation-*
  - split: test
    path: data/test-*
---
 | 
| 
	TOBEAI/KOR_Merged_data_v3 | 
	TOBEAI | 
	2024-10-28T02:11:47Z | 20 | 0 | 
	[
  "size_categories:1M<n<10M",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-10-28T02:10:54Z | 0 | 
	---
dataset_info:
  features:
  - name: system
    dtype: string
  - name: user
    dtype: string
  - name: assistant
    dtype: string
  splits:
  - name: train
    num_bytes: 1392182088
    num_examples: 1740818
  download_size: 773822365
  dataset_size: 1392182088
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_b56b5e5d-d682-4f17-b00a-b14b2f7285ec | 
	argilla-internal-testing | 
	2024-10-11T09:46:11Z | 18 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-10-11T09:46:10Z | 0 | 
	---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': positive
          '1': negative
  splits:
  - name: train
    num_bytes: 111
    num_examples: 3
  download_size: 1454
  dataset_size: 111
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_7d9c8002-6b5d-448c-a671-5eaef0a95f27 | 
	argilla-internal-testing | 
	2024-12-10T10:59:48Z | 14 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-12-10T10:59:47Z | 0 | 
	---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': positive
          '1': negative
  splits:
  - name: train
    num_bytes: 111
    num_examples: 3
  download_size: 1256
  dataset_size: 111
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	macavaney/msmarco-passage.splade-lg.cache | 
	macavaney | 
	2025-02-08T11:39:59Z | 37 | 0 | 
	[
  "task_categories:text-retrieval",
  "region:us",
  "pyterrier",
  "pyterrier-artifact",
  "pyterrier-artifact.indexer_cache",
  "pyterrier-artifact.indexer_cache.lz4pickle"
] | 
	[
  "text-retrieval"
] | 
	2025-02-08T11:27:44Z | 0 | 
	---
# pretty_name: "" # Example: "MS MARCO Terrier Index"
tags:
- pyterrier
- pyterrier-artifact
- pyterrier-artifact.indexer_cache
- pyterrier-artifact.indexer_cache.lz4pickle
task_categories:
- text-retrieval
viewer: false
---
# msmarco-passage.splade-lg.cache
## Description
*TODO: What is the artifact?*
## Usage
```python
# Load the artifact
import pyterrier as pt
artifact = pt.Artifact.from_hf('macavaney/msmarco-passage.splade-lg.cache')
# TODO: Show how you use the artifact
```
## Benchmarks
*TODO: Provide benchmarks for the artifact.*
## Reproduction
```python
# TODO: Show how you constructed the artifact.
```
## Metadata
```
{
  "type": "indexer_cache",
  "format": "lz4pickle",
  "record_count": 8841823
}
```
 | 
| 
	somosnlp-hackathon-2025/gastronomia-hispana-dpo | 
	somosnlp-hackathon-2025 | 
	2025-06-02T00:53:04Z | 99 | 0 | 
	[
  "language:es",
  "license:mit",
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us",
  "dpo",
  "food",
  "recipes"
] | 
	[] | 
	2025-05-29T17:34:22Z | 0 | 
	---
license: mit
language:
- es
tags:
- dpo
- food
- recipes
pretty_name: Recetas en Español DPO
size_categories:
- 1K<n<10K
---
# Gastronomía Hispana DPO
## Dataset Description
This dataset contains preference pairs for training language models specialized in Hispanic gastronomy using DPO (Direct Preference Optimization). The data consists of conversations about international cooking, with a particular focus on recipes, ingredients, culinary techniques, and gastronomic traditions of the Hispanic world.
## Dataset Structure
The dataset contains the following columns:
- **`chosen`**: List of messages from the preferred conversation (ChatML format)
- **`rejected`**: List of messages from the rejected conversation (ChatML format)
- **`recipe_id`**: Unique recipe identifier (1-471)
- **`recipe_name`**: Name of the recipe or dish
- **`category`**: Content category
### Available Categories
- `ingredients`: Questions about ingredients, substitutions, and properties
- `cooking_techniques`: Cooking techniques, temperatures, and methods
- `basic_recipe`: Basic recipes and preparations
- `cultural_context`: Cultural and historical context of the dishes
## Dataset Statistics
- **Total examples**: ~470 preference pairs
- **Unique recipes**: 471
- **Language**: Spanish
- **Format**: Multi-turn conversations
## Using the Dataset
### Installation
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("somosnlp-hackathon-2025/gastronomia-hispana-dpo")
```
### Usage Example
```python
# Access an example
example = dataset['train'][0]
print("Chosen conversation:")
for message in example['chosen']:
    print(f"{message['role']}: {message['content'][:100]}...")
print(f"\nRecipe: {example['recipe_name']}")
print(f"Category: {example['category']}")
```
### DPO Training
This dataset is designed to be used with libraries such as TRL (Transformer Reinforcement Learning) to train models with DPO:
```python
from trl import DPOTrainer
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load a base model and tokenizer ("your-base-model" is a placeholder checkpoint)
model = AutoModelForCausalLM.from_pretrained("your-base-model")
tokenizer = AutoTokenizer.from_pretrained("your-base-model")

# Prepare data for DPO: the first turn is the prompt, the second the response
def format_dataset(example):
    return {
        "prompt": example["chosen"][0]["content"],
        "chosen": example["chosen"][1]["content"],
        "rejected": example["rejected"][1]["content"],
    }
formatted_dataset = dataset.map(format_dataset)
# Train with DPOTrainer
trainer = DPOTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=formatted_dataset["train"],
    # ... other parameters
)
```
## Dataset Characteristics
### System Roles
The dataset includes several specialized roles:
- **Ingredient expert**: Knowledge of international ingredients, their properties, and substitutions
- **Culinary master**: Specialist in cooking techniques and traditional methods
- **Chef instructor**: Step-by-step explanations of recipes and techniques
- **Gastronomy historian**: Cultural context and origins of traditional dishes
### Response Quality
- **Preferred responses**: Detailed, culturally informed, and technically accurate
- **Rejected responses**: More basic, with less cultural context or limited information
- **Educational focus**: Responses include cultural context, traditional techniques, and practical tips
## Applications
This dataset is useful for:
- **Training culinary chatbots** specialized in Hispanic gastronomy
- **Recommendation models** for recipes and cooking techniques
- **Tutoring systems** for teaching cooking
- **NLP research** applied to the gastronomy domain
- **Preservation of traditional culinary knowledge**
## Limitations
- The dataset focuses mainly on Hispanic and international gastronomy
- The responses are in Spanish
- The knowledge is limited to the creation date (January 2025)
- Some recipes may require ingredients specific to certain regions
## Ethical Considerations
- The recipes and techniques represent cultural traditions that should be respected
- Verifying the culinary information before applying it in commercial contexts is recommended
- The dataset respects the cultural diversity of international gastronomy
## License
[Include appropriate license information]
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{gastronomia_hispana_dpo_2025,
  title={Gastronomía Hispana DPO: Dataset for Culinary Instruction Following},
  author={SomosNLP Hackathon 2025},
  year={2025},
  url={https://huggingface.co/datasets/somosnlp-hackathon-2025/gastronomia-hispana-dpo}
}
```
## Contributions
This dataset was created as part of the SomosNLP Hackathon 2025. We welcome community contributions to expand and improve the gastronomic knowledge it represents.
## Contact
For questions about the dataset, please open an issue in this repository or contact the SomosNLP team. | 
| 
	shanchen/combine1k_ds | 
	shanchen | 
	2025-04-26T19:35:40Z | 35 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-04-26T19:35:37Z | 0 | 
	---
dataset_info:
  features:
  - name: source_dataset
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 23008137.0
    num_examples: 1000
  download_size: 8932967
  dataset_size: 23008137.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	hpprc/honyaku | 
	hpprc | 
	2024-11-20T09:06:40Z | 18 | 2 | 
	[
  "task_categories:translation",
  "language:ja",
  "language:en",
  "license:cc-by-sa-4.0",
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[
  "translation"
] | 
	2024-11-20T04:42:28Z | 0 | 
	---
language:
- ja
- en
license: cc-by-sa-4.0
size_categories:
- n<1K
task_categories:
- translation
pretty_name: honyaku
dataset_info:
- config_name: passage
  features:
  - name: id
    dtype: int64
  - name: en
    dtype: string
  - name: ja
    dtype: string
  - name: url_en
    dtype: string
  - name: url_ja
    dtype: string
  splits:
  - name: train
    num_bytes: 34839
    num_examples: 33
  download_size: 29554
  dataset_size: 34839
- config_name: sentence
  features:
  - name: id
    dtype: int64
  - name: en
    dtype: string
  - name: ja
    dtype: string
  - name: url_en
    dtype: string
  - name: url_ja
    dtype: string
  splits:
  - name: train
    num_bytes: 44734
    num_examples: 104
  download_size: 26899
  dataset_size: 44734
configs:
- config_name: passage
  data_files:
  - split: train
    path: passage/train-*
- config_name: sentence
  data_files:
  - split: train
    path: sentence/train-*
---
This is a passage-level Japanese-English parallel dataset built by extracting the opening sentences of English Wikipedia articles and translating them into Japanese by hand.
We created it because many Japanese-English parallel corpora come with restrictive licenses, and freely usable, high-quality passage-level parallel data was scarce.
Most of the translation was done by [hpprc](https://x.com/hpp_ricecake), with a few entries contributed by [yano](https://x.com/yano0_c).
The `passage` subset contains passage-level translations (one line per corresponding sentence), while the `sentence` subset contains sentence-level pairs (these may contain translation errors, such as mismatched pronouns).
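A minimal loading sketch (the config names come from the YAML above, each with a single `train` split):
```python
from datasets import load_dataset

passages = load_dataset("hpprc/honyaku", "passage", split="train")    # 33 passages
sentences = load_dataset("hpprc/honyaku", "sentence", split="train")  # 104 sentence pairs
print(passages[0]["en"])  # English source text
print(passages[0]["ja"])  # Japanese translation
```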
## Translation Policy
Existing translation tools such as DeepL and Google Translate prohibit using their output as input to machine learning models.
Since we wanted to release this dataset under a permissive license that allows machine learning use, we avoided such tools as far as possible when translating from English to Japanese, to stay on the safe side.
Instead, we consulted translations produced by permissively licensed LLMs as references.
Specifically, we used [CALM3-22B-Chat](https://huggingface.co/cyberagent/calm3-22b-chat) and [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct), which perform well in Japanese and are distributed under the Apache 2.0 license.
For the source texts, we carefully selected passages whose Japanese rendering seemed likely to attract reasonable consensus.
Concretely, we first randomly sampled reasonably general articles from Japanese Wikipedia (excluding those that read too much like personal or place names) and checked whether a corresponding English article existed.
A Japanese article with an English counterpart usually differs too much in content to be used as a translation itself, but it is very useful as a reference for the Japanese wording, so we picked such pairs and extracted the first few sentences of the corresponding English article.
We translated so that the Japanese text stands on its own as far as possible.
For example, where a text gives English pronunciation notation, we kept the original English spelling, since the pronunciation symbols alone would be meaningless without it.
Where an established Japanese technical term exists, we took care to reflect it in the translation.
We also checked every example for fixed, conventional translations of phrases appearing in the text.
As a result, each entry took around 15 minutes to translate.
## License
Because the source texts come from English Wikipedia and the translation was done by hand, this dataset is distributed under the CC-BY-SA 4.0 license.
## Note
If you find any errors or points to correct in the translations, please do not hesitate to let us know.
 | 
| 
	math-extraction-comp/princeton-nlp__Mistral-7B-Base-SFT-SLiC-HF | 
	math-extraction-comp | 
	2025-01-11T16:17:17Z | 54 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-01-11T16:17:14Z | 0 | 
	---
dataset_info:
  features:
  - name: question
    dtype: string
  - name: gold
    dtype: string
  - name: target
    dtype: string
  - name: prediction
    dtype: string
  - name: subset
    dtype: string
  - name: harness_extracted_answer
    dtype: string
  - name: harness_score
    dtype: float64
  - name: qwen_extracted_answer
    dtype: string
  - name: qwen_score
    dtype: float64
  - name: lighteval-0f21c935_extracted_answer
    dtype: string
  - name: lighteval-0f21c935_score
    dtype: float64
  splits:
  - name: train
    num_bytes: 3227346
    num_examples: 1324
  download_size: 1323081
  dataset_size: 3227346
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	qfq/MetaMathQANoGSM8k | 
	qfq | 
	2024-10-28T21:21:01Z | 20 | 0 | 
	[
  "size_categories:100K<n<1M",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-10-28T21:20:56Z | 0 | 
	---
dataset_info:
  features:
  - name: type
    dtype: string
  - name: problem
    dtype: string
  - name: original_question
    dtype: string
  - name: solution
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 146011835.67088607
    num_examples: 155000
  download_size: 62945481
  dataset_size: 146011835.67088607
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	mlfoundations-dev/olympiads_figures | 
	mlfoundations-dev | 
	2025-01-23T20:30:40Z | 14 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-01-23T14:09:35Z | 0 | 
	---
dataset_info:
  features:
  - name: source
    dtype: string
  - name: problem
    dtype: string
  - name: solution
    dtype: string
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: train
    num_bytes: 11830252.961538998
    num_examples: 4075
  download_size: 9350602
  dataset_size: 11830252.961538998
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	AfriMM/AFRICaption_filtered | 
	AfriMM | 
	2025-03-25T05:50:39Z | 10 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-03-25T05:50:25Z | 0 | 
	---
dataset_info:
  features:
  - name: image_id
    dtype: string
  - name: caption
    dtype: string
  splits:
  - name: afr
    num_bytes: 691661
    num_examples: 8089
  - name: amh
    num_bytes: 1002994
    num_examples: 8042
  - name: bem
    num_bytes: 322159
    num_examples: 3523
  - name: cjk
    num_bytes: 113865
    num_examples: 1181
  - name: dik
    num_bytes: 88243
    num_examples: 975
  - name: dyu
    num_bytes: 12547
    num_examples: 156
  - name: ewe
    num_bytes: 26016
    num_examples: 289
  - name: fuv
    num_bytes: 45757
    num_examples: 496
  - name: hau
    num_bytes: 747641
    num_examples: 7943
  - name: ibo
    num_bytes: 826094
    num_examples: 8078
  - name: kik
    num_bytes: 67534
    num_examples: 654
  - name: kab
    num_bytes: 10841
    num_examples: 121
  - name: kam
    num_bytes: 33640
    num_examples: 348
  - name: kon
    num_bytes: 91343
    num_examples: 927
  - name: kmb
    num_bytes: 15833
    num_examples: 180
  - name: lua
    num_bytes: 103114
    num_examples: 1075
  - name: lug
    num_bytes: 340830
    num_examples: 3710
  - name: lin
    num_bytes: 131232
    num_examples: 1269
  - name: kin
    num_bytes: 652547
    num_examples: 7088
  - name: yor
    num_bytes: 902706
    num_examples: 7526
  download_size: 3497744
  dataset_size: 6226597
configs:
- config_name: default
  data_files:
  - split: afr
    path: data/afr-*
  - split: amh
    path: data/amh-*
  - split: bem
    path: data/bem-*
  - split: cjk
    path: data/cjk-*
  - split: dik
    path: data/dik-*
  - split: dyu
    path: data/dyu-*
  - split: ewe
    path: data/ewe-*
  - split: fuv
    path: data/fuv-*
  - split: hau
    path: data/hau-*
  - split: ibo
    path: data/ibo-*
  - split: kik
    path: data/kik-*
  - split: kab
    path: data/kab-*
  - split: kam
    path: data/kam-*
  - split: kon
    path: data/kon-*
  - split: kmb
    path: data/kmb-*
  - split: lua
    path: data/lua-*
  - split: lug
    path: data/lug-*
  - split: lin
    path: data/lin-*
  - split: kin
    path: data/kin-*
  - split: yor
    path: data/yor-*
---
 | 
| 
	andreina-covi/vqasynth_sample_spatial | 
	andreina-covi | 
	2024-10-24T15:12:32Z | 69 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:image",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us",
  "vqasynth",
  "remyx"
] | 
	[] | 
	2024-10-16T02:57:37Z | 0 | 
	---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: messages
    sequence: 'null'
  splits:
  - name: train
    num_bytes: 1363377.0
    num_examples: 10
  download_size: 1364998
  dataset_size: 1363377.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- vqasynth
- remyx
---
 | 
| 
	Shalini731/ICONS_25_data_short | 
	Shalini731 | 
	2025-02-05T17:02:27Z | 15 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:image",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-02-05T17:02:24Z | 0 | 
	---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  splits:
  - name: train
    num_bytes: 352596.0
    num_examples: 25
  download_size: 354360
  dataset_size: 352596.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	haorandai/png_bicycle_gaussian_noise_5samples_epsilon0.1 | 
	haorandai | 
	2024-10-08T21:20:01Z | 19 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:image",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-10-08T21:20:00Z | 0 | 
	---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 1100851.0
    num_examples: 10
  download_size: 1102557
  dataset_size: 1100851.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	RabotniKuma/Fast-Math-R1-GRPO | 
	RabotniKuma | 
	2025-04-15T01:55:46Z | 74 | 1 | 
	[
  "license:apache-2.0",
  "size_categories:1K<n<10K",
  "format:csv",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-04-15T01:30:44Z | 0 | 
	---
license: apache-2.0
---
We extracted the answers from [the 2nd stage SFT data of Light-R1](https://huggingface.co/datasets/qihoo360/Light-R1-SFTData). | 
| 
	lighteval/RULER-8192-SmolLM3-11T-32k-v1-remote-code | 
	lighteval | 
	2025-06-20T16:57:26Z | 0 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-06-20T16:57:14Z | 0 | 
	---
dataset_info:
  features:
  - name: index
    dtype: int64
  - name: input
    dtype: string
  - name: outputs
    sequence: string
  - name: length
    dtype: int64
  splits:
  - name: fwe
    num_bytes: 10481423
    num_examples: 500
  - name: niah_single_1
    num_bytes: 14822605
    num_examples: 500
  - name: qa_2
    num_bytes: 15858056
    num_examples: 500
  - name: niah_multikey_1
    num_bytes: 17675160
    num_examples: 500
  - name: niah_multivalue
    num_bytes: 17699318
    num_examples: 500
  - name: niah_multikey_3
    num_bytes: 7378000
    num_examples: 500
  - name: niah_single_3
    num_bytes: 17599964
    num_examples: 500
  - name: niah_single_2
    num_bytes: 17576096
    num_examples: 500
  - name: qa_1
    num_bytes: 15574631
    num_examples: 500
  - name: niah_multikey_2
    num_bytes: 14262757
    num_examples: 500
  - name: niah_multiquery
    num_bytes: 17754657
    num_examples: 500
  - name: cwe
    num_bytes: 11307523
    num_examples: 500
  download_size: 65357941
  dataset_size: 177990190
configs:
- config_name: default
  data_files:
  - split: fwe
    path: data/fwe-*
  - split: niah_single_1
    path: data/niah_single_1-*
  - split: qa_2
    path: data/qa_2-*
  - split: niah_multikey_1
    path: data/niah_multikey_1-*
  - split: niah_multivalue
    path: data/niah_multivalue-*
  - split: niah_multikey_3
    path: data/niah_multikey_3-*
  - split: niah_single_3
    path: data/niah_single_3-*
  - split: niah_single_2
    path: data/niah_single_2-*
  - split: qa_1
    path: data/qa_1-*
  - split: niah_multikey_2
    path: data/niah_multikey_2-*
  - split: niah_multiquery
    path: data/niah_multiquery-*
  - split: cwe
    path: data/cwe-*
---
 | 
| 
	Asap7772/prm800k_4_shot_formatted | 
	Asap7772 | 
	2024-10-23T00:06:08Z | 20 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-10-23T00:06:03Z | 0 | 
	---
dataset_info:
  features:
  - name: prompt
    dtype: string
  splits:
  - name: train
    num_bytes: 62286782
    num_examples: 12000
  download_size: 27228532
  dataset_size: 62286782
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	OmkaRane/Major_Legal3 | 
	OmkaRane | 
	2025-03-12T10:15:21Z | 15 | 0 | 
	[
  "format:parquet",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-03-12T10:15:18Z | 0 | 
	---
dataset_info:
  features: []
  splits:
  - name: train
    num_bytes: 0
    num_examples: 0
  download_size: 324
  dataset_size: 0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for "Major_Legal3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 
| 
	kh4dien/mc-german | 
	kh4dien | 
	2025-01-24T01:02:37Z | 54 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-01-24T01:02:36Z | 0 | 
	---
dataset_info:
  features:
  - name: question
    dtype: string
  - name: correct
    dtype: string
  - name: incorrect
    dtype: string
  splits:
  - name: train
    num_bytes: 326045.99022736336
    num_examples: 5145
  download_size: 161034
  dataset_size: 326045.99022736336
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	dvgodoy/CUAD_v1_Contract_Understanding_PDF | 
	dvgodoy | 
	2025-01-29T18:32:20Z | 97 | 0 | 
	[
  "language:en",
  "license:cc-by-4.0",
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "arxiv:2103.06268",
  "region:us",
  "PDF"
] | 
	[] | 
	2025-01-20T14:29:20Z | 0 | 
	---
dataset_info:
  features:
  - name: file_name
    dtype: string
  - name: pdf_bytes_base64
    dtype: binary
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 159348380
    num_examples: 509
  download_size: 138630085
  dataset_size: 159348380
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- PDF
size_categories:
- n<1K
license: cc-by-4.0
language:
- en
---
# Dataset Card for Contract Understanding Atticus Dataset (CUAD) PDF
**This dataset contains the PDFs and the full text of 509 commercial legal contracts from the original [CUAD](https://www.atticusprojectai.org/cuad) dataset.** One of the original 510 contracts was removed due to being a scanned copy.
The extracted text was cleaned using [`clean-text`](https://github.com/jfilter/clean-text).
The PDFs were encoded in base64 and added as the `pdf_bytes_base64` feature.
You can easily and quickly load it:
```python
from datasets import load_dataset

dataset = load_dataset("dvgodoy/CUAD_v1_Contract_Understanding_PDF")
```
```
Dataset({
    features: ['file_name', 'pdf_bytes_base64', 'text'],
    num_rows: 509
})
```
## Decoding PDFs
To handle the PDFs, you will need to decode the `pdf_bytes_base64` feature and load it into a PDF object using your favorite PDF library (e.g. `pdfplumber`):
```python
import base64
import io
import pdfplumber

# take a small batch of rows; each row carries the PDF as base64-encoded bytes
mini_batch = dataset['train'][:4]
# load the bytes into your favorite PDF library e.g., `pdfplumber`
for encoded in mini_batch['pdf_bytes_base64']:
    bytes_content = io.BytesIO(base64.b64decode(encoded))
    pdf_obj = pdfplumber.open(bytes_content)
    # process the PDFs
    # ...
    # CLOSE the objects after you've used them
    bytes_content.close()
    pdf_obj.close()
```
You can use any other library or package to load the PDF; just make sure it can open a PDF from bytes.
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
- [Additional Information](#additional-information)
  - [Credits](#credits)
  - [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [Contract Understanding Atticus Dataset (CUAD)](https://www.atticusprojectai.org/cuad)
- **Repository:** https://github.com/TheAtticusProject/cuad
- **Paper:** [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The original CUAD v1 dataset contained 510 commercial legal contracts. **This version contains 509 contracts, as one of those contracts was removed due to being a scanned copy.**
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
    'file_name': '2ThemartComInc_19990826_10-12G_EX-10.10_6700288_EX-10.10_Co-Branding Agreement_ Agency Agreement.pdf',
    'pdf_bytes_base64': b'SlZCRVJpMHhMalF...',
    'text': 'CO-BRANDING AND ADVERTISING AGREEMENT\nTHIS CO-BRANDING AND ADVERTISING AGREEMENT...'
}
```
## Dataset Creation
### Curation Rationale
CUAD v1 is a corpus of 13,000+ labels in 510 commercial legal contracts with rich expert annotations curated for AI training purposes. The dataset has been manually labeled under the supervision of experienced attorneys to identify 41 types of legal clauses in commercial contracts that are considered important in contract review in connection with a corporate transaction, including mergers & acquisitions, corporate finance, investments & IPOs.
### Source Data
#### Initial Data Collection and Normalization
The beta version of CUAD was released in October 2020 under the name AOK v.1 with 200 contracts. CUAD v1 has 510 contracts and includes both PDF and TXT versions of the full contracts, one master CSV file and 27 Excel files corresponding to one or more types of legal clauses to ensure ease of use by both developers and attorneys.
## Additional Information
### Credits
```
@article{hendrycks2021cuad,
      title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review}, 
      author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
      journal={NeurIPS},
      year={2021}
}
```
### Licensing Information
- [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) | 
| 
	svjack/Product_Posters_Singer_DESC | 
	svjack | 
	2025-06-15T14:51:03Z | 0 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-06-15T14:50:58Z | 0 | 
	---
dataset_info:
  features:
  - name: product_category
    dtype: string
  - name: poster_prompt
    dtype: string
  - name: final_prompt
    dtype: string
  splits:
  - name: train
    num_bytes: 54202
    num_examples: 50
  download_size: 30844
  dataset_size: 54202
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	Thermostatic/CommonVoice-17.0-Spanish-Filtered | 
	Thermostatic | 
	2025-06-20T01:03:10Z | 0 | 0 | 
	[
  "license:cc0-1.0",
  "size_categories:100K<n<1M",
  "format:parquet",
  "modality:audio",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-06-20T00:57:10Z | 0 | 
	---
license: cc0-1.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: client_id
    dtype: string
  - name: path
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 48000
  - name: sentence
    dtype: string
  - name: up_votes
    dtype: int64
  - name: down_votes
    dtype: int64
  - name: age
    dtype: string
  - name: gender
    dtype: string
  - name: accent
    dtype: string
  - name: locale
    dtype: string
  - name: segment
    dtype: string
  - name: variant
    dtype: string
  splits:
  - name: train
    num_bytes: 13030072526.0
    num_examples: 336846
  - name: validation
    num_bytes: 766524228.0
    num_examples: 15857
  - name: test
    num_bytes: 744404420.0
    num_examples: 15857
  download_size: 14292272484
  dataset_size: 14541001174.0
---
 | 
| 
	candywal/code_rule_violation | 
	candywal | 
	2025-04-19T21:29:44Z | 21 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-04-19T21:29:43Z | 0 | 
	---
dataset_info:
  features:
  - name: user
    dtype: string
  - name: assistant
    dtype: string
  - name: classification
    dtype: string
  - name: explanation
    dtype: string
  - name: rule
    dtype: string
  splits:
  - name: train
    num_bytes: 3498524
    num_examples: 1328
  download_size: 1351865
  dataset_size: 3498524
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	gokceKy/earthquake | 
	gokceKy | 
	2024-11-15T14:00:11Z | 18 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:image",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-11-09T23:59:15Z | 0 | 
	---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype: image
  - name: pixel_values
    dtype: image
  - name: labels
    dtype: image
  splits:
  - name: train
    num_bytes: 9888970.0
    num_examples: 75
  - name: test
    num_bytes: 880548.0
    num_examples: 8
  download_size: 10247557
  dataset_size: 10769518.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
 | 
| 
	ljnlonoljpiljm/docci | 
	ljnlonoljpiljm | 
	2024-12-17T04:42:15Z | 61 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:image",
  "modality:text",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-12-15T19:18:43Z | 0 | 
	---
dataset_info:
  features:
  - name: uuid
    dtype: string
  - name: url
    dtype: string
  - name: image
    dtype: image
  - name: caption
    dtype: string
  - name: detailed_caption
    dtype: string
  - name: tags
    sequence: string
  - name: dataset
    dtype: string
  - name: points
    sequence:
    - name: uuid
      dtype: string
    - name: x
      dtype: float32
    - name: y
      dtype: float32
    - name: label
      dtype: string
  - name: objects
    sequence:
    - name: uuid
      dtype: string
    - name: x_min
      dtype: float32
    - name: y_min
      dtype: float32
    - name: x_max
      dtype: float32
    - name: y_max
      dtype: float32
    - name: label
      dtype: string
  - name: image_width
    dtype: int32
  - name: image_height
    dtype: int32
  - name: aesthetic_score
    dtype: float32
  - name: sensitivity_score
    dtype: float32
  splits:
  - name: train
    num_bytes: 7550643050.0
    num_examples: 14647
  download_size: 7542100387
  dataset_size: 7550643050.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	zkpbeats/reddit_ds_100415 | 
	zkpbeats | 
	2025-05-22T10:06:16Z | 1,867 | 0 | 
	[
  "task_categories:text-classification",
  "task_categories:token-classification",
  "task_categories:question-answering",
  "task_categories:summarization",
  "task_categories:text-generation",
  "task_ids:sentiment-analysis",
  "task_ids:topic-classification",
  "task_ids:named-entity-recognition",
  "task_ids:language-modeling",
  "task_ids:text-scoring",
  "task_ids:multi-class-classification",
  "task_ids:multi-label-classification",
  "task_ids:extractive-qa",
  "task_ids:news-articles-summarization",
  "multilinguality:multilingual",
  "source_datasets:original",
  "license:mit",
  "size_categories:1M<n<10M",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[
  "text-classification",
  "token-classification",
  "question-answering",
  "summarization",
  "text-generation"
] | 
	2025-04-03T12:12:07Z | 0 | 
	---
license: mit
multilinguality:
  - multilingual
source_datasets:
  - original
task_categories:
  - text-classification
  - token-classification
  - question-answering
  - summarization
  - text-generation
task_ids:
  - sentiment-analysis
  - topic-classification
  - named-entity-recognition
  - language-modeling
  - text-scoring
  - multi-class-classification
  - multi-label-classification
  - extractive-qa
  - news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
    <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
    <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** zkpbeats/reddit_ds_100415
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5H3AggXAqErtsYWdn5A2cnf2MhkVS45HzqyErD3VxoDGWuxC
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: datasets are mostly English, but they can be multilingual because the data is collected in a decentralized way.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date and time when the post or comment was created.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
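A minimal sketch of inspecting these fields with the `datasets` library (streaming to avoid a full download; the `train` split name and field access are assumptions based on the schema above):
```python
from datasets import load_dataset

# Stream the dataset so the ~2M rows are not downloaded up front.
ds = load_dataset("zkpbeats/reddit_ds_100415", split="train", streaming=True)

# Print a truncated view of the documented fields for the first record.
first = next(iter(ds))
for field in ("text", "label", "dataType", "communityName",
              "datetime", "username_encoded", "url_encoded"):
    print(f"{field}: {str(first[field])[:60]}")
```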
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
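For instance, a simple temporal holdout can be built from the `datetime` field. The sketch below is illustrative and assumes ISO-8601 timestamp strings, so lexicographic comparison matches chronological order:
```python
from datasets import load_dataset

ds = load_dataset("zkpbeats/reddit_ds_100415", split="train")

# Hold out everything from May 2025 onward as an evaluation set.
cutoff = "2025-05-01"
train_split = ds.filter(lambda row: row["datetime"] < cutoff)
eval_split = ds.filter(lambda row: row["datetime"] >= cutoff)
print(len(train_split), len(eval_split))
```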
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the nature of social media sources.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. Use of this dataset is also subject to the Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{zkpbeats2025datauniversereddit_ds_100415,
        title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
        author={zkpbeats},
        year={2025},
        url={https://huggingface.co/datasets/zkpbeats/reddit_ds_100415},
        }
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 2046436
- **Date Range:** 2025-04-14T00:00:00Z to 2025-05-22T00:00:00Z
- **Last Updated:** 2025-05-22T10:06:14Z
### Data Distribution
- Posts: 2.20%
- Comments: 97.45%
### Top 10 Subreddits
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Subreddit | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | r/worldnews | 119635 | 5.87% |
| 2 | r/Millennials | 105681 | 5.18% |
| 3 | r/mildlyinteresting | 80343 | 3.94% |
| 4 | r/redscarepod | 44502 | 2.18% |
| 5 | r/mexico | 43052 | 2.11% |
| 6 | r/india | 39499 | 1.94% |
| 7 | r/Grimdank | 38994 | 1.91% |
| 8 | r/ShitAmericansSay | 38710 | 1.90% |
| 9 | r/wallstreetbets | 38257 | 1.88% |
| 10 | r/BravoRealHousewives | 36118 | 1.77% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-04-14T14:31:44Z | 6673 | 6673 |
| 2025-04-16T08:55:17Z | 513 | 7186 |
| 2025-04-18T17:12:54Z | 6171 | 13357 |
| 2025-04-18T19:38:03Z | 6914 | 20271 |
| 2025-04-18T22:02:52Z | 7520 | 27791 |
| 2025-04-19T00:26:05Z | 5532 | 33323 |
| 2025-04-19T02:49:42Z | 6645 | 39968 |
| 2025-04-19T05:16:23Z | 4202 | 44170 |
| 2025-04-19T07:42:07Z | 4649 | 48819 |
| 2025-04-19T10:07:00Z | 2362 | 51181 |
| 2025-04-19T12:29:29Z | 2887 | 54068 |
| 2025-04-19T14:52:07Z | 5581 | 59649 |
| 2025-04-19T17:14:54Z | 4835 | 64484 |
| 2025-04-19T19:38:30Z | 5905 | 70389 |
| 2025-04-19T22:01:12Z | 6387 | 76776 |
| 2025-04-20T00:25:18Z | 6282 | 83058 |
| 2025-04-20T02:49:26Z | 3961 | 87019 |
| 2025-04-20T05:12:31Z | 4729 | 91748 |
| 2025-04-20T07:35:25Z | 4478 | 96226 |
| 2025-04-20T09:57:13Z | 2655 | 98881 |
| 2025-04-20T12:19:06Z | 4021 | 102902 |
| 2025-04-20T14:41:14Z | 5906 | 108808 |
| 2025-04-20T17:04:41Z | 8013 | 116821 |
| 2025-04-20T19:27:42Z | 5631 | 122452 |
| 2025-04-20T21:51:21Z | 6411 | 128863 |
| 2025-04-21T00:13:38Z | 5824 | 134687 |
| 2025-04-21T02:36:17Z | 4309 | 138996 |
| 2025-04-21T05:01:15Z | 4852 | 143848 |
| 2025-04-21T07:23:24Z | 2730 | 146578 |
| 2025-04-21T09:47:13Z | 2943 | 149521 |
| 2025-04-21T12:10:27Z | 4326 | 153847 |
| 2025-04-21T14:33:50Z | 5710 | 159557 |
| 2025-04-21T16:57:09Z | 7218 | 166775 |
| 2025-04-21T19:20:09Z | 8127 | 174902 |
| 2025-04-21T21:43:20Z | 6225 | 181127 |
| 2025-04-22T00:08:03Z | 6598 | 187725 |
| 2025-04-22T02:30:40Z | 5490 | 193215 |
| 2025-04-22T04:53:06Z | 4215 | 197430 |
| 2025-04-22T07:19:07Z | 3912 | 201342 |
| 2025-04-22T09:42:40Z | 4005 | 205347 |
| 2025-04-22T12:05:12Z | 4493 | 209840 |
| 2025-04-22T14:28:03Z | 4894 | 214734 |
| 2025-04-22T16:50:41Z | 8216 | 222950 |
| 2025-04-22T19:14:24Z | 6801 | 229751 |
| 2025-04-22T21:37:05Z | 5624 | 235375 |
| 2025-04-23T00:00:02Z | 4731 | 240106 |
| 2025-04-23T02:22:21Z | 6388 | 246494 |
| 2025-04-23T04:45:31Z | 5112 | 251606 |
| 2025-04-23T07:08:24Z | 3077 | 254683 |
| 2025-04-23T09:31:32Z | 4866 | 259549 |
| 2025-04-23T10:47:48Z | 1339 | 260888 |
| 2025-04-23T13:10:54Z | 3924 | 264812 |
| 2025-04-23T15:33:55Z | 7979 | 272791 |
| 2025-04-23T17:56:36Z | 6625 | 279416 |
| 2025-04-23T20:19:21Z | 6475 | 285891 |
| 2025-04-23T22:42:32Z | 7214 | 293105 |
| 2025-04-24T01:05:22Z | 6323 | 299428 |
| 2025-04-24T03:27:38Z | 6420 | 305848 |
| 2025-04-24T05:49:47Z | 5245 | 311093 |
| 2025-04-24T08:12:36Z | 3189 | 314282 |
| 2025-04-24T10:34:53Z | 3231 | 317513 |
| 2025-04-24T12:57:32Z | 5343 | 322856 |
| 2025-04-24T15:20:06Z | 8967 | 331823 |
| 2025-04-24T17:42:55Z | 6654 | 338477 |
| 2025-04-24T20:05:36Z | 8008 | 346485 |
| 2025-04-24T22:29:45Z | 8446 | 354931 |
| 2025-04-25T00:52:49Z | 6185 | 361116 |
| 2025-04-25T03:15:40Z | 6436 | 367552 |
| 2025-04-25T05:38:44Z | 4778 | 372330 |
| 2025-04-25T08:02:25Z | 4177 | 376507 |
| 2025-04-25T10:26:00Z | 3792 | 380299 |
| 2025-04-25T12:48:47Z | 3961 | 384260 |
| 2025-04-25T15:10:50Z | 7941 | 392201 |
| 2025-04-25T17:33:47Z | 9420 | 401621 |
| 2025-04-25T19:57:24Z | 7643 | 409264 |
| 2025-04-25T22:20:14Z | 7954 | 417218 |
| 2025-04-26T00:42:45Z | 6541 | 423759 |
| 2025-04-26T03:05:10Z | 4562 | 428321 |
| 2025-04-26T05:27:09Z | 6095 | 434416 |
| 2025-04-26T07:48:52Z | 3612 | 438028 |
| 2025-04-26T10:10:51Z | 3086 | 441114 |
| 2025-04-26T12:33:17Z | 3439 | 444553 |
| 2025-04-26T15:00:29Z | 7397 | 451950 |
| 2025-04-26T17:22:53Z | 7048 | 458998 |
| 2025-04-26T19:47:35Z | 6964 | 465962 |
| 2025-04-26T22:10:58Z | 6460 | 472422 |
| 2025-04-27T00:33:42Z | 7338 | 479760 |
| 2025-04-27T02:55:53Z | 6334 | 486094 |
| 2025-04-27T05:18:25Z | 6222 | 492316 |
| 2025-04-27T07:40:51Z | 5159 | 497475 |
| 2025-04-27T10:03:05Z | 2964 | 500439 |
| 2025-04-27T12:25:30Z | 4022 | 504461 |
| 2025-04-27T14:48:21Z | 7933 | 512394 |
| 2025-04-27T17:13:44Z | 7546 | 519940 |
| 2025-04-27T19:36:15Z | 7962 | 527902 |
| 2025-04-27T21:58:30Z | 5000 | 532902 |
| 2025-04-28T00:21:59Z | 5738 | 538640 |
| 2025-04-28T02:45:06Z | 6255 | 544895 |
| 2025-04-28T05:07:08Z | 4881 | 549776 |
| 2025-04-28T07:29:45Z | 4842 | 554618 |
| 2025-04-28T09:52:39Z | 4563 | 559181 |
| 2025-04-28T12:15:39Z | 3516 | 562697 |
| 2025-04-28T14:38:12Z | 7026 | 569723 |
| 2025-04-28T17:01:05Z | 6173 | 575896 |
| 2025-04-28T19:23:58Z | 6904 | 582800 |
| 2025-04-28T21:47:11Z | 7422 | 590222 |
| 2025-04-29T00:09:34Z | 5669 | 595891 |
| 2025-04-29T02:32:31Z | 5035 | 600926 |
| 2025-04-29T04:55:25Z | 6811 | 607737 |
| 2025-04-29T07:17:27Z | 4580 | 612317 |
| 2025-04-29T09:40:18Z | 3572 | 615889 |
| 2025-04-29T12:04:13Z | 5189 | 621078 |
| 2025-04-29T14:26:42Z | 6134 | 627212 |
| 2025-04-29T16:48:49Z | 5725 | 632937 |
| 2025-04-29T19:12:12Z | 8269 | 641206 |
| 2025-04-29T21:35:08Z | 7741 | 648947 |
| 2025-04-29T23:57:26Z | 7825 | 656772 |
| 2025-04-30T02:19:53Z | 4065 | 660837 |
| 2025-04-30T04:42:16Z | 5086 | 665923 |
| 2025-04-30T07:04:32Z | 4704 | 670627 |
| 2025-04-30T09:28:04Z | 3608 | 674235 |
| 2025-04-30T11:50:14Z | 5846 | 680081 |
| 2025-04-30T14:12:22Z | 7098 | 687179 |
| 2025-04-30T16:34:44Z | 8904 | 696083 |
| 2025-04-30T18:59:01Z | 5544 | 701627 |
| 2025-04-30T21:21:06Z | 6703 | 708330 |
| 2025-04-30T23:43:26Z | 7272 | 715602 |
| 2025-05-01T02:07:20Z | 6315 | 721917 |
| 2025-05-01T04:30:14Z | 5673 | 727590 |
| 2025-05-01T06:54:49Z | 4787 | 732377 |
| 2025-05-01T09:17:54Z | 4126 | 736503 |
| 2025-05-01T11:40:22Z | 2985 | 739488 |
| 2025-05-01T14:02:56Z | 6913 | 746401 |
| 2025-05-01T16:26:00Z | 7966 | 754367 |
| 2025-05-01T18:48:30Z | 6874 | 761241 |
| 2025-05-01T21:12:58Z | 8503 | 769744 |
| 2025-05-01T23:35:27Z | 6871 | 776615 |
| 2025-05-02T01:57:54Z | 6292 | 782907 |
| 2025-05-02T04:20:13Z | 7125 | 790032 |
| 2025-05-02T06:43:10Z | 5111 | 795143 |
| 2025-05-02T09:05:25Z | 4295 | 799438 |
| 2025-05-02T11:27:56Z | 3304 | 802742 |
| 2025-05-02T13:52:15Z | 7162 | 809904 |
| 2025-05-02T16:14:29Z | 8007 | 817911 |
| 2025-05-02T18:36:41Z | 7826 | 825737 |
| 2025-05-02T20:59:13Z | 6862 | 832599 |
| 2025-05-02T23:21:43Z | 6934 | 839533 |
| 2025-05-03T01:44:23Z | 6926 | 846459 |
| 2025-05-03T04:06:58Z | 6269 | 852728 |
| 2025-05-03T06:30:10Z | 5893 | 858621 |
| 2025-05-03T08:52:13Z | 4328 | 862949 |
| 2025-05-03T11:14:30Z | 2742 | 865691 |
| 2025-05-03T13:37:06Z | 5906 | 871597 |
| 2025-05-03T15:59:34Z | 6432 | 878029 |
| 2025-05-03T18:22:12Z | 6930 | 884959 |
| 2025-05-03T20:44:37Z | 7420 | 892379 |
| 2025-05-03T23:06:52Z | 7364 | 899743 |
| 2025-05-04T01:29:49Z | 5757 | 905500 |
| 2025-05-04T03:51:49Z | 5738 | 911238 |
| 2025-05-04T06:13:45Z | 4217 | 915455 |
| 2025-05-04T08:35:53Z | 3333 | 918788 |
| 2025-05-04T10:58:18Z | 3020 | 921808 |
| 2025-05-04T13:20:37Z | 3502 | 925310 |
| 2025-05-04T15:43:13Z | 6520 | 931830 |
| 2025-05-04T18:05:44Z | 7524 | 939354 |
| 2025-05-04T20:28:05Z | 7386 | 946740 |
| 2025-05-04T22:50:26Z | 6014 | 952754 |
| 2025-05-05T01:12:56Z | 8336 | 961090 |
| 2025-05-05T03:35:18Z | 5919 | 967009 |
| 2025-05-05T05:57:34Z | 4551 | 971560 |
| 2025-05-05T08:19:41Z | 4397 | 975957 |
| 2025-05-05T10:42:17Z | 2881 | 978838 |
| 2025-05-05T13:04:28Z | 3834 | 982672 |
| 2025-05-05T15:26:54Z | 6271 | 988943 |
| 2025-05-05T17:49:31Z | 6878 | 995821 |
| 2025-05-05T20:12:02Z | 8960 | 1004781 |
| 2025-05-05T22:34:32Z | 7150 | 1011931 |
| 2025-05-06T00:57:02Z | 6813 | 1018744 |
| 2025-05-06T03:19:26Z | 8478 | 1027222 |
| 2025-05-06T05:41:35Z | 4649 | 1031871 |
| 2025-05-06T08:04:13Z | 3824 | 1035695 |
| 2025-05-06T10:26:38Z | 3650 | 1039345 |
| 2025-05-06T12:48:51Z | 6481 | 1045826 |
| 2025-05-06T15:11:09Z | 6622 | 1052448 |
| 2025-05-06T17:33:58Z | 8696 | 1061144 |
| 2025-05-06T19:57:06Z | 7521 | 1068665 |
| 2025-05-06T22:20:12Z | 7746 | 1076411 |
| 2025-05-07T00:43:15Z | 7306 | 1083717 |
| 2025-05-07T03:06:05Z | 6757 | 1090474 |
| 2025-05-07T05:28:28Z | 6072 | 1096546 |
| 2025-05-07T07:50:33Z | 4436 | 1100982 |
| 2025-05-07T10:12:43Z | 5652 | 1106634 |
| 2025-05-07T12:34:44Z | 3694 | 1110328 |
| 2025-05-07T14:57:11Z | 8226 | 1118554 |
| 2025-05-07T17:19:48Z | 9114 | 1127668 |
| 2025-05-07T19:42:30Z | 9541 | 1137209 |
| 2025-05-07T22:04:58Z | 9539 | 1146748 |
| 2025-05-08T00:27:37Z | 6664 | 1153412 |
| 2025-05-08T02:50:19Z | 9841 | 1163253 |
| 2025-05-08T05:12:27Z | 5400 | 1168653 |
| 2025-05-08T07:34:47Z | 3427 | 1172080 |
| 2025-05-08T09:57:17Z | 4060 | 1176140 |
| 2025-05-08T12:20:24Z | 5275 | 1181415 |
| 2025-05-08T14:43:07Z | 8854 | 1190269 |
| 2025-05-08T17:05:46Z | 9541 | 1199810 |
| 2025-05-08T19:28:22Z | 9742 | 1209552 |
| 2025-05-08T21:51:45Z | 6099 | 1215651 |
| 2025-05-09T00:14:14Z | 5912 | 1221563 |
| 2025-05-09T02:36:51Z | 6879 | 1228442 |
| 2025-05-09T04:59:11Z | 7747 | 1236189 |
| 2025-05-09T07:21:38Z | 4184 | 1240373 |
| 2025-05-09T09:44:15Z | 4371 | 1244744 |
| 2025-05-09T12:06:36Z | 3647 | 1248391 |
| 2025-05-09T14:29:16Z | 5890 | 1254281 |
| 2025-05-09T16:53:09Z | 7717 | 1261998 |
| 2025-05-09T19:15:46Z | 7368 | 1269366 |
| 2025-05-09T21:38:19Z | 7879 | 1277245 |
| 2025-05-10T00:00:58Z | 6570 | 1283815 |
| 2025-05-10T02:23:31Z | 5730 | 1289545 |
| 2025-05-10T04:45:51Z | 5362 | 1294907 |
| 2025-05-10T07:08:08Z | 4617 | 1299524 |
| 2025-05-10T09:30:21Z | 3101 | 1302625 |
| 2025-05-10T11:52:51Z | 3653 | 1306278 |
| 2025-05-10T14:14:50Z | 6043 | 1312321 |
| 2025-05-10T16:36:54Z | 8235 | 1320556 |
| 2025-05-10T18:58:59Z | 8370 | 1328926 |
| 2025-05-10T21:21:14Z | 6141 | 1335067 |
| 2025-05-10T23:44:04Z | 6265 | 1341332 |
| 2025-05-11T02:07:00Z | 7315 | 1348647 |
| 2025-05-11T04:29:19Z | 5421 | 1354068 |
| 2025-05-11T06:51:58Z | 5463 | 1359531 |
| 2025-05-11T09:14:38Z | 3980 | 1363511 |
| 2025-05-11T11:37:39Z | 4431 | 1367942 |
| 2025-05-11T13:59:50Z | 6181 | 1374123 |
| 2025-05-11T16:22:14Z | 7736 | 1381859 |
| 2025-05-11T18:45:19Z | 6791 | 1388650 |
| 2025-05-11T21:08:49Z | 6579 | 1395229 |
| 2025-05-11T23:32:14Z | 7701 | 1402930 |
| 2025-05-12T01:54:40Z | 6786 | 1409716 |
| 2025-05-12T04:16:44Z | 6802 | 1416518 |
| 2025-05-12T06:39:01Z | 3979 | 1420497 |
| 2025-05-12T09:03:11Z | 4558 | 1425055 |
| 2025-05-12T11:25:38Z | 4377 | 1429432 |
| 2025-05-12T13:48:02Z | 6324 | 1435756 |
| 2025-05-12T16:10:47Z | 9365 | 1445121 |
| 2025-05-12T18:33:55Z | 11055 | 1456176 |
| 2025-05-12T20:57:18Z | 9381 | 1465557 |
| 2025-05-12T23:19:54Z | 8918 | 1474475 |
| 2025-05-13T01:43:10Z | 7599 | 1482074 |
| 2025-05-13T04:05:16Z | 7517 | 1489591 |
| 2025-05-13T06:27:34Z | 5505 | 1495096 |
| 2025-05-13T08:50:58Z | 3874 | 1498970 |
| 2025-05-13T11:13:21Z | 3143 | 1502113 |
| 2025-05-13T13:36:09Z | 5539 | 1507652 |
| 2025-05-13T15:59:58Z | 9643 | 1517295 |
| 2025-05-13T18:22:45Z | 6962 | 1524257 |
| 2025-05-13T20:46:29Z | 6749 | 1531006 |
| 2025-05-13T23:09:32Z | 9450 | 1540456 |
| 2025-05-14T01:32:19Z | 7048 | 1547504 |
| 2025-05-14T03:54:58Z | 8498 | 1556002 |
| 2025-05-14T06:17:04Z | 4854 | 1560856 |
| 2025-05-14T08:39:47Z | 3784 | 1564640 |
| 2025-05-14T11:02:47Z | 3460 | 1568100 |
| 2025-05-14T13:25:01Z | 5587 | 1573687 |
| 2025-05-14T15:47:30Z | 8370 | 1582057 |
| 2025-05-14T18:10:21Z | 8253 | 1590310 |
| 2025-05-14T20:33:17Z | 7316 | 1597626 |
| 2025-05-14T22:56:47Z | 6516 | 1604142 |
| 2025-05-15T01:19:36Z | 5547 | 1609689 |
| 2025-05-15T03:42:05Z | 8617 | 1618306 |
| 2025-05-15T06:04:30Z | 5333 | 1623639 |
| 2025-05-15T08:28:22Z | 4609 | 1628248 |
| 2025-05-15T10:51:36Z | 4085 | 1632333 |
| 2025-05-15T13:15:28Z | 6781 | 1639114 |
| 2025-05-15T15:38:39Z | 9008 | 1648122 |
| 2025-05-15T15:39:15Z | 9008 | 1657130 |
| 2025-05-15T18:02:16Z | 7464 | 1664594 |
| 2025-05-15T20:27:30Z | 7837 | 1672431 |
| 2025-05-15T22:52:22Z | 6106 | 1678537 |
| 2025-05-16T01:15:56Z | 7440 | 1685977 |
| 2025-05-16T03:40:49Z | 5423 | 1691400 |
| 2025-05-16T06:03:43Z | 4286 | 1695686 |
| 2025-05-16T08:29:29Z | 4233 | 1699919 |
| 2025-05-16T10:52:28Z | 2931 | 1702850 |
| 2025-05-16T13:17:57Z | 5866 | 1708716 |
| 2025-05-16T15:43:13Z | 7311 | 1716027 |
| 2025-05-16T18:06:29Z | 8174 | 1724201 |
| 2025-05-16T20:29:58Z | 7819 | 1732020 |
| 2025-05-16T22:54:39Z | 6752 | 1738772 |
| 2025-05-17T01:18:03Z | 6382 | 1745154 |
| 2025-05-17T03:40:48Z | 6051 | 1751205 |
| 2025-05-17T06:03:19Z | 3672 | 1754877 |
| 2025-05-17T08:25:46Z | 3530 | 1758407 |
| 2025-05-17T10:50:23Z | 4177 | 1762584 |
| 2025-05-17T13:15:07Z | 4038 | 1766622 |
| 2025-05-17T15:38:41Z | 6297 | 1772919 |
| 2025-05-17T18:01:58Z | 4948 | 1777867 |
| 2025-05-17T20:24:30Z | 5059 | 1782926 |
| 2025-05-17T22:47:29Z | 6712 | 1789638 |
| 2025-05-18T01:10:48Z | 5350 | 1794988 |
| 2025-05-18T03:32:58Z | 5936 | 1800924 |
| 2025-05-18T05:55:36Z | 4483 | 1805407 |
| 2025-05-18T08:19:35Z | 4392 | 1809799 |
| 2025-05-18T10:42:00Z | 3959 | 1813758 |
| 2025-05-18T13:04:32Z | 4301 | 1818059 |
| 2025-05-18T15:27:47Z | 5957 | 1824016 |
| 2025-05-18T17:51:32Z | 7226 | 1831242 |
| 2025-05-18T20:14:39Z | 5501 | 1836743 |
| 2025-05-18T22:39:03Z | 5990 | 1842733 |
| 2025-05-19T01:01:47Z | 6209 | 1848942 |
| 2025-05-19T03:24:31Z | 5727 | 1854669 |
| 2025-05-19T05:47:09Z | 4786 | 1859455 |
| 2025-05-19T08:11:58Z | 4459 | 1863914 |
| 2025-05-19T10:35:14Z | 3981 | 1867895 |
| 2025-05-19T12:58:03Z | 3385 | 1871280 |
| 2025-05-19T15:20:57Z | 5702 | 1876982 |
| 2025-05-19T17:44:21Z | 7794 | 1884776 |
| 2025-05-19T20:08:26Z | 9671 | 1894447 |
| 2025-05-19T22:31:29Z | 8001 | 1902448 |
| 2025-05-20T00:54:35Z | 6063 | 1908511 |
| 2025-05-20T03:18:15Z | 7571 | 1916082 |
| 2025-05-20T05:42:05Z | 4629 | 1920711 |
| 2025-05-20T08:05:38Z | 4244 | 1924955 |
| 2025-05-20T10:29:07Z | 3897 | 1928852 |
| 2025-05-20T12:53:19Z | 4809 | 1933661 |
| 2025-05-20T15:17:08Z | 6091 | 1939752 |
| 2025-05-20T17:40:47Z | 7407 | 1947159 |
| 2025-05-20T20:03:39Z | 10047 | 1957206 |
| 2025-05-20T22:26:07Z | 8196 | 1965402 |
| 2025-05-21T00:48:58Z | 5330 | 1970732 |
| 2025-05-21T03:11:28Z | 6717 | 1977449 |
| 2025-05-21T05:34:18Z | 4849 | 1982298 |
| 2025-05-21T07:57:03Z | 3652 | 1985950 |
| 2025-05-21T10:19:15Z | 3610 | 1989560 |
| 2025-05-21T12:42:34Z | 3372 | 1992932 |
| 2025-05-21T15:05:07Z | 6436 | 1999368 |
| 2025-05-21T17:27:46Z | 6563 | 2005931 |
| 2025-05-21T19:50:35Z | 8433 | 2014364 |
| 2025-05-21T22:13:20Z | 5952 | 2020316 |
| 2025-05-22T00:36:19Z | 5190 | 2025506 |
| 2025-05-22T02:58:36Z | 6844 | 2032350 |
| 2025-05-22T05:21:02Z | 6449 | 2038799 |
| 2025-05-22T07:43:09Z | 3942 | 2042741 |
| 2025-05-22T10:06:14Z | 3695 | 2046436 |
 | 
| 
	samabena/dreambooth-hackathon-images | 
	samabena | 
	2025-04-17T18:58:05Z | 17 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:image",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-04-17T17:32:33Z | 0 | 
	---
dataset_info:
  features:
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 3440294.0
    num_examples: 14
  download_size: 3270864
  dataset_size: 3440294.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	neginashz/ins-medqa-dataset_winst | 
	neginashz | 
	2024-12-11T13:35:40Z | 13 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-12-11T13:35:40Z | 0 | 
	---
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: completion
    dtype: string
  splits:
  - name: train
    num_bytes: 9544494
    num_examples: 10178
  download_size: 5250508
  dataset_size: 9544494
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	nickfuryavg/Audio_Sample_1 | 
	nickfuryavg | 
	2024-11-01T07:52:53Z | 18 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-11-01T07:52:49Z | 0 | 
	---
dataset_info:
  features:
  - name: job_id
    dtype: int64
  - name: english_sentence
    dtype: string
  - name: english_audio_bytes
    dtype: string
  - name: hindi_sentence
    dtype: string
  - name: hindi_audio_bytes
    dtype: string
  splits:
  - name: train
    num_bytes: 17706348
    num_examples: 50
  download_size: 12249570
  dataset_size: 17706348
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	cauaveiga/moss_test | 
	cauaveiga | 
	2024-10-27T11:36:30Z | 21 | 0 | 
	[
  "task_categories:robotics",
  "region:us",
  "LeRobot",
  "moss",
  "tutorial"
] | 
	[
  "robotics"
] | 
	2024-10-27T11:26:20Z | 0 | 
	---
task_categories:
- robotics
tags:
- LeRobot
- moss
- tutorial
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
 | 
| 
	pclucas14/nqa-RAG-64_1_24 | 
	pclucas14 | 
	2024-12-04T23:08:44Z | 15 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-12-04T23:08:42Z | 0 | 
	---
dataset_info:
  features:
  - name: text
    sequence:
      sequence: string
  - name: questions
    sequence: string
  - name: answers
    sequence:
      sequence: string
  - name: document_id
    dtype: string
  - name: split
    dtype: string
  splits:
  - name: train
    num_bytes: 5935246
    num_examples: 66
  download_size: 2094096
  dataset_size: 5935246
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	SnehaPriyaaMP/llama-pre-finetuned-Apr-21-2025-Updated | 
	SnehaPriyaaMP | 
	2025-04-21T13:34:12Z | 19 | 0 | 
	[
  "size_categories:n<1K",
  "modality:text",
  "region:us"
] | 
	[] | 
	2025-04-21T13:34:02Z | 0 | 
	---
dataset_info:
  features:
  - name: prompt
    dtype: string
  splits:
  - name: train
    num_bytes: 114289
    num_examples: 70
  download_size: 26451
  dataset_size: 114289
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	SayantanJoker/Hindi_1000hr_Train_Subset_44100Hz_quality_metadata | 
	SayantanJoker | 
	2025-04-13T10:31:09Z | 70 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-04-13T10:31:08Z | 0 | 
	---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: file_name
    dtype: string
  - name: utterance_pitch_mean
    dtype: float32
  - name: utterance_pitch_std
    dtype: float32
  - name: snr
    dtype: float64
  - name: c50
    dtype: float64
  - name: speaking_rate
    dtype: string
  - name: phonemes
    dtype: string
  - name: stoi
    dtype: float64
  - name: si-sdr
    dtype: float64
  - name: pesq
    dtype: float64
  - name: noise
    dtype: string
  - name: reverberation
    dtype: string
  - name: speech_monotony
    dtype: string
  - name: sdr_noise
    dtype: string
  - name: pesq_speech_quality
    dtype: string
  splits:
  - name: train
    num_bytes: 1307557
    num_examples: 3843
  download_size: 255513
  dataset_size: 1307557
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	mlfoundations-dev/GeneralThought-feb25_10k | 
	mlfoundations-dev | 
	2025-03-21T22:28:30Z | 13 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-03-21T22:28:18Z | 0 | 
	---
dataset_info:
  features:
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  splits:
  - name: train
    num_bytes: 120276586.92596735
    num_examples: 10000
  download_size: 52837405
  dataset_size: 120276586.92596735
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	kh4dien/talktuner-openai | 
	kh4dien | 
	2025-02-18T23:49:17Z | 20 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-02-13T05:50:19Z | 0 | 
	---
dataset_info:
  features:
  - name: label
    dtype: string
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: age
    num_bytes: 6957752.147239264
    num_examples: 2880
  - name: education
    num_bytes: 13866627.816924142
    num_examples: 4263
  - name: gender
    num_bytes: 4336468.098876405
    num_examples: 1762
  - name: socioeconomic
    num_bytes: 5369022.74964904
    num_examples: 2124
  download_size: 14962444
  dataset_size: 30529870.812688854
configs:
- config_name: default
  data_files:
  - split: age
    path: data/age-*
  - split: education
    path: data/education-*
  - split: gender
    path: data/gender-*
  - split: socioeconomic
    path: data/socioeconomic-*
---
 | 
| 
	Asap7772/d1shs0ap-hard-hintgen-qwen3-4b-lr1e6-shard5 | 
	Asap7772 | 
	2025-05-10T07:47:20Z | 0 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-05-10T07:47:14Z | 0 | 
	---
dataset_info:
  features:
  - name: problem
    dtype: string
  - name: answer
    dtype: string
  - name: solution
    dtype: string
  - name: reward
    dtype: float64
  - name: length
    dtype: float64
  - name: correct_length
    dtype: float64
  - name: incorrect_length
    dtype: float64
  - name: all_hints
    sequence: string
  splits:
  - name: train
    num_bytes: 71420403
    num_examples: 1586
  download_size: 31277734
  dataset_size: 71420403
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	IAlsace/affiche_gestes_barrieres | 
	IAlsace | 
	2025-03-29T22:12:24Z | 29 | 0 | 
	[
  "task_categories:translation",
  "multilinguality:multilingual",
  "language:gsw",
  "language:fra",
  "region:us",
  "welche",
  "polatt"
] | 
	[
  "translation"
] | 
	2025-01-12T14:16:14Z | 0 | 
	---
language:
- gsw
- fra
multilinguality:
- multilingual
viewer: false
task_categories:
- translation
tags:
- welche
- polatt
---
> [!NOTE]
> Dataset origin: https://www.olcalsace.org/fr/affiches-cartes-et-signaletique
## Description
The actions to adopt to protect yourself and others from the coronavirus.
Posters designed by the government as part of the fight against Covid-19 and translated by the OLCA. | 
| 
	wangx0t/numina-deepseek-r1-qwen-7b | 
	wangx0t | 
	2025-01-30T13:12:36Z | 35 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "library:distilabel",
  "region:us",
  "synthetic",
  "distilabel",
  "rlaif"
] | 
	[] | 
	2025-01-30T12:56:24Z | 0 | 
	---
size_categories: n<1K
dataset_info:
  features:
  - name: problem
    dtype: string
  - name: solution
    dtype: string
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: generation
    dtype: string
  - name: distilabel_metadata
    struct:
    - name: raw_input_text_generation_0
      list:
      - name: content
        dtype: string
      - name: role
        dtype: string
    - name: raw_output_text_generation_0
      dtype: string
    - name: statistics_text_generation_0
      struct:
      - name: input_tokens
        dtype: int64
      - name: output_tokens
        dtype: int64
  - name: model_name
    dtype: string
  splits:
  - name: train
    num_bytes: 751350
    num_examples: 40
  download_size: 277645
  dataset_size: 751350
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
  <a href="https://github.com/argilla-io/distilabel">
    <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
  </a>
</p>
# Dataset Card for numina-deepseek-r1-qwen-7b
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/wangx0t/numina-deepseek-r1-qwen-7b/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/wangx0t/numina-deepseek-r1-qwen-7b/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
    "distilabel_metadata": {
        "raw_input_text_generation_0": [
            {
                "content": "You will be given a problem. Please reason step by step, and put your final answer within \boxed{}:\nWhat is the coefficient of $x^2y^6$ in the expansion of $\\left(\\frac{3}{5}x-\\frac{y}{2}\\right)^8$?  Express your answer as a common fraction.",
                "role": "user"
            }
        ],
        "raw_output_text_generation_0": "\u003cthink\u003e\nOkay, so I need to find the coefficient of \\( x^2 y^6 \\) in the expansion of \\( \\left( \\frac{3}{5}x - \\frac{y}{2} \\right)^8 \\). Hmm, I remember that when expanding binomials raised to a power, the binomial theorem is useful here. Let me recall the binomial theorem.\n\nThe binomial theorem states that \\( (a + b)^n = \\sum_{k=0}^{n} \\binom{n}{k} a^{n - k} b^{k} \\). So in this case, \\( a \\) is \\( \\frac{3}{5}x \\) and \\( b \\) is \\( -\\frac{y}{2} \\). The exponent \\( n \\) is 8.\n\nI need to find the term where the exponents of \\( x \\) is 2 and the exponent of \\( y \\) is 6. Let me think: Since each term in the expansion is \\( \\binom{8}{k} \\left( \\frac{3}{5}x \\right)^{8 - k} \\left( -\\frac{y}{2} \\right)^k \\). \n\nSo, for the term to have \\( x^2 y^6 \\), the exponent on \\( x \\) should be 2, which means \\( 8 - k = 2 \\). Solving for \\( k \\), that gives \\( k = 6 \\). \n\nLet me verify that: if \\( k = 6 \\), then the exponent on \\( y \\) is \\( 6 \\), which is exactly what we need. So, that term will be the one we\u0027re interested in.\n\nNow, let me write out that term:\n\n\\( \\binom{8}{6} \\left( \\frac{3}{5}x \\right)^{8 - 6} \\left( -\\frac{y}{2} \\right)^6 \\)\n\nSimplifying the exponents first: \\( 8 - 6 = 2 \\), so:\n\n\\( \\binom{8}{6} \\left( \\frac{3}{5}x \\right)^2 \\left( -\\frac{y}{2} \\right)^6 \\)\n\nCalculating each part separately. First, \\( \\binom{8}{6} \\). I know that \\( \\binom{8}{6} \\) is equal to \\( \\binom{8}{2} \\) because of the symmetry in the binomial coefficients, right? \\( \\binom{8}{6} = \\binom{8}{2} \\). And \\( \\binom{8}{2} = \\frac{8 \\times 7}{2 \\times 1} = 28 \\). So that part is 28.\n\nNext, \\( \\left( \\frac{3}{5}x \\right)^2 \\). Squaring \\( \\frac{3}{5} \\) gives \\( \\frac{9}{25} \\), and squaring \\( x \\) is still \\( x^2 \\). So that part is \\( \\frac{9}{25}x^2 \\).\n\nThen, \\( \\left( -\\frac{y}{2} \\right)^6 \\). When I raise a negative to an even power, it becomes positive. So, \\( \\left( -\\frac{1}{2} \\right)^6 = \\left( \\frac{1}{2} \\right)^6 = \\frac{1}{64} \\). So that part is \\( \\frac{1}{64} y^6 \\).\n\nNow, putting it all together: 28 multiplied by \\( \\frac{9}{25}x^2 \\) multiplied by \\( \\frac{1}{64} y^6 \\).\n\nLet me compute the constants first: 28 times \\( \\frac{9}{25} \\) times \\( \\frac{1}{64} \\).\n\nFirst, multiply 28 and 9: 28 * 9 = 252.\n\nThen, 252 divided by 25 is \\( 10.08 \\), but I should keep it as a fraction. 252 divided by 25 is \\( \\frac{252}{25} \\).\n\nThen, \\( \\frac{252}{25} \\times \\frac{1}{64} \\). Multiply the numerators: 252 * 1 = 252, denominators: 25 * 64 = 1600. So that\u0027s \\( \\frac{252}{1600} \\).\n\nSimplify \\( \\frac{252}{1600} \\): Let me see if both numbers can be divided by a common factor. 252 and 1600. Let\u0027s try 4: 252 \u00f7 4 = 63, 1600 \u00f7 4 = 400. So, \\( \\frac{63}{400} \\). Hmm, 63 and 400. 63 is 7*9, 400 is 16*25. They have no common factors besides 1, so that\u0027s the simplified fraction.\n\nTherefore, the coefficient is \\( \\frac{63}{400} \\).\n\nWait, let me double-check my calculations to make sure I didn\u0027t make a mistake. So, 28 * 9 is indeed 252, correct. 252 divided by 25 is 10.08? Wait, 10.08 times 25 is 252, yes. Then 252 divided by 64 is 3.9375, which is \\( \\frac{63}{16} \\). Wait, no, hold on, that doesn\u0027t make sense. Wait, I think I messed up the multiplication.\n\nWait, no, wait. Let\u0027s go back. The constants: 28 * (9/25) * (1/64). 
Let me compute 28 * 9 first. 28*9: 20*9=180, 8*9=72, so 180+72=252. Then, 252 * (1/25) = 252/25. Then, 252/25 * (1/64) = 252/(25*64) = 252/1600.\n\nThen, 252 divided by 1600: let me divide numerator and denominator by 4: 252 \u00f74=63, 1600 \u00f74=400. So, 63/400. That simplifies correctly.\n\nWait, but 63/400 is 0.1575, and 252/1600 is 0.1575. So that\u0027s correct.\n\nBut wait, let me check 252 divided by 1600:\n\n252 / 1600 = (252 \u00f7 4) / (1600 \u00f74) = 63/400. Yes, correct.\n\nSo, okay, so the coefficient is 63/400.\n\nWait, but let me double-check the binomial coefficient. So, \\( \\binom{8}{6} \\). Since \\( \\binom{8}{6} = \\binom{8}{2} \\) because \\( \\binom{n}{k} = \\binom{n}{n - k} \\). So, \\( \\binom{8}{2} \\) is (8*7)/2 = 28, correct. So that\u0027s correct.\n\nSo, 28 * (9/25) * (1/64) = 28*(9)/(25*64) = 252/1600 = 63/400.\n\nTherefore, the coefficient is 63/400.\n\nWait, let me think again about the signs. In the term, the \\( b^k \\) is \\( (-\\frac{y}{2})^6 \\). Since 6 is even, the negative becomes positive, so that\u0027s correct, it\u0027s positive. So, the coefficient is positive 63/400.\n\nTherefore, after going through all that, I think the coefficient is 63/400.\n\n**Final Answer**\nThe coefficient of \\( x^2 y^6 \\) is \\boxed{\\dfrac{63}{400}}.\n\u003c/think\u003e\n\nTo find the coefficient of \\( x^2 y^6 \\) in the expansion of \\( \\left( \\frac{3}{5}x - \\frac{y}{2} \\right)^8 \\), we use the binomial theorem. The binomial theorem states that:\n\n\\[\n(a + b)^n = \\sum_{k=0}^{n} \\binom{n}{k} a^{n - k} b^{k}\n\\]\n\nHere, \\( a = \\frac{3}{5}x \\), \\( b = -\\frac{y}{2} \\), and \\( n = 8 \\). We need the term where the exponent of \\( x \\) is 2 and the exponent of \\( y \\) is 6. This corresponds to \\( k = 6 \\) because \\( 8 - k = 2 \\) implies \\( k = 6 \\).\n\nThe term is given by:\n\n\\[\n\\binom{8}{6} \\left( \\frac{3}{5}x \\right)^{2} \\left( -\\frac{y}{2} \\right)^{6}\n\\]\n\nCalculating each part:\n\n1. The binomial coefficient \\( \\binom{8}{6} = \\binom{8}{2} = \\frac{8 \\times 7}{2 \\times 1} = 28 \\).\n2. \\( \\left( \\frac{3}{5}x \\right)^2 = \\frac{9}{25}x^2 \\).\n3. \\( \\left( -\\frac{y}{2} \\right)^6 = \\left( \\frac{1}{2} \\right)^6 = \\frac{1}{64} y^6 \\).\n\nCombining these, we get:\n\n\\[\n28 \\times \\frac{9}{25} \\times \\frac{1}{64} = 28 \\times \\frac{9}{1600} = \\frac{252}{1600} = \\frac{63}{400}\n\\]\n\nThus, the coefficient of \\( x^2 y^6 \\) is \\boxed{\\dfrac{63}{400}}.",
        "statistics_text_generation_0": {
            "input_tokens": 77,
            "output_tokens": 2155
        }
    },
    "generation": "\u003cthink\u003e\nOkay, so I need to find the coefficient of \\( x^2 y^6 \\) in the expansion of \\( \\left( \\frac{3}{5}x - \\frac{y}{2} \\right)^8 \\). Hmm, I remember that when expanding binomials raised to a power, the binomial theorem is useful here. Let me recall the binomial theorem.\n\nThe binomial theorem states that \\( (a + b)^n = \\sum_{k=0}^{n} \\binom{n}{k} a^{n - k} b^{k} \\). So in this case, \\( a \\) is \\( \\frac{3}{5}x \\) and \\( b \\) is \\( -\\frac{y}{2} \\). The exponent \\( n \\) is 8.\n\nI need to find the term where the exponents of \\( x \\) is 2 and the exponent of \\( y \\) is 6. Let me think: Since each term in the expansion is \\( \\binom{8}{k} \\left( \\frac{3}{5}x \\right)^{8 - k} \\left( -\\frac{y}{2} \\right)^k \\). \n\nSo, for the term to have \\( x^2 y^6 \\), the exponent on \\( x \\) should be 2, which means \\( 8 - k = 2 \\). Solving for \\( k \\), that gives \\( k = 6 \\). \n\nLet me verify that: if \\( k = 6 \\), then the exponent on \\( y \\) is \\( 6 \\), which is exactly what we need. So, that term will be the one we\u0027re interested in.\n\nNow, let me write out that term:\n\n\\( \\binom{8}{6} \\left( \\frac{3}{5}x \\right)^{8 - 6} \\left( -\\frac{y}{2} \\right)^6 \\)\n\nSimplifying the exponents first: \\( 8 - 6 = 2 \\), so:\n\n\\( \\binom{8}{6} \\left( \\frac{3}{5}x \\right)^2 \\left( -\\frac{y}{2} \\right)^6 \\)\n\nCalculating each part separately. First, \\( \\binom{8}{6} \\). I know that \\( \\binom{8}{6} \\) is equal to \\( \\binom{8}{2} \\) because of the symmetry in the binomial coefficients, right? \\( \\binom{8}{6} = \\binom{8}{2} \\). And \\( \\binom{8}{2} = \\frac{8 \\times 7}{2 \\times 1} = 28 \\). So that part is 28.\n\nNext, \\( \\left( \\frac{3}{5}x \\right)^2 \\). Squaring \\( \\frac{3}{5} \\) gives \\( \\frac{9}{25} \\), and squaring \\( x \\) is still \\( x^2 \\). So that part is \\( \\frac{9}{25}x^2 \\).\n\nThen, \\( \\left( -\\frac{y}{2} \\right)^6 \\). When I raise a negative to an even power, it becomes positive. So, \\( \\left( -\\frac{1}{2} \\right)^6 = \\left( \\frac{1}{2} \\right)^6 = \\frac{1}{64} \\). So that part is \\( \\frac{1}{64} y^6 \\).\n\nNow, putting it all together: 28 multiplied by \\( \\frac{9}{25}x^2 \\) multiplied by \\( \\frac{1}{64} y^6 \\).\n\nLet me compute the constants first: 28 times \\( \\frac{9}{25} \\) times \\( \\frac{1}{64} \\).\n\nFirst, multiply 28 and 9: 28 * 9 = 252.\n\nThen, 252 divided by 25 is \\( 10.08 \\), but I should keep it as a fraction. 252 divided by 25 is \\( \\frac{252}{25} \\).\n\nThen, \\( \\frac{252}{25} \\times \\frac{1}{64} \\). Multiply the numerators: 252 * 1 = 252, denominators: 25 * 64 = 1600. So that\u0027s \\( \\frac{252}{1600} \\).\n\nSimplify \\( \\frac{252}{1600} \\): Let me see if both numbers can be divided by a common factor. 252 and 1600. Let\u0027s try 4: 252 \u00f7 4 = 63, 1600 \u00f7 4 = 400. So, \\( \\frac{63}{400} \\). Hmm, 63 and 400. 63 is 7*9, 400 is 16*25. They have no common factors besides 1, so that\u0027s the simplified fraction.\n\nTherefore, the coefficient is \\( \\frac{63}{400} \\).\n\nWait, let me double-check my calculations to make sure I didn\u0027t make a mistake. So, 28 * 9 is indeed 252, correct. 252 divided by 25 is 10.08? Wait, 10.08 times 25 is 252, yes. Then 252 divided by 64 is 3.9375, which is \\( \\frac{63}{16} \\). Wait, no, hold on, that doesn\u0027t make sense. Wait, I think I messed up the multiplication.\n\nWait, no, wait. Let\u0027s go back. The constants: 28 * (9/25) * (1/64). 
Let me compute 28 * 9 first. 28*9: 20*9=180, 8*9=72, so 180+72=252. Then, 252 * (1/25) = 252/25. Then, 252/25 * (1/64) = 252/(25*64) = 252/1600.\n\nThen, 252 divided by 1600: let me divide numerator and denominator by 4: 252 \u00f74=63, 1600 \u00f74=400. So, 63/400. That simplifies correctly.\n\nWait, but 63/400 is 0.1575, and 252/1600 is 0.1575. So that\u0027s correct.\n\nBut wait, let me check 252 divided by 1600:\n\n252 / 1600 = (252 \u00f7 4) / (1600 \u00f74) = 63/400. Yes, correct.\n\nSo, okay, so the coefficient is 63/400.\n\nWait, but let me double-check the binomial coefficient. So, \\( \\binom{8}{6} \\). Since \\( \\binom{8}{6} = \\binom{8}{2} \\) because \\( \\binom{n}{k} = \\binom{n}{n - k} \\). So, \\( \\binom{8}{2} \\) is (8*7)/2 = 28, correct. So that\u0027s correct.\n\nSo, 28 * (9/25) * (1/64) = 28*(9)/(25*64) = 252/1600 = 63/400.\n\nTherefore, the coefficient is 63/400.\n\nWait, let me think again about the signs. In the term, the \\( b^k \\) is \\( (-\\frac{y}{2})^6 \\). Since 6 is even, the negative becomes positive, so that\u0027s correct, it\u0027s positive. So, the coefficient is positive 63/400.\n\nTherefore, after going through all that, I think the coefficient is 63/400.\n\n**Final Answer**\nThe coefficient of \\( x^2 y^6 \\) is \\boxed{\\dfrac{63}{400}}.\n\u003c/think\u003e\n\nTo find the coefficient of \\( x^2 y^6 \\) in the expansion of \\( \\left( \\frac{3}{5}x - \\frac{y}{2} \\right)^8 \\), we use the binomial theorem. The binomial theorem states that:\n\n\\[\n(a + b)^n = \\sum_{k=0}^{n} \\binom{n}{k} a^{n - k} b^{k}\n\\]\n\nHere, \\( a = \\frac{3}{5}x \\), \\( b = -\\frac{y}{2} \\), and \\( n = 8 \\). We need the term where the exponent of \\( x \\) is 2 and the exponent of \\( y \\) is 6. This corresponds to \\( k = 6 \\) because \\( 8 - k = 2 \\) implies \\( k = 6 \\).\n\nThe term is given by:\n\n\\[\n\\binom{8}{6} \\left( \\frac{3}{5}x \\right)^{2} \\left( -\\frac{y}{2} \\right)^{6}\n\\]\n\nCalculating each part:\n\n1. The binomial coefficient \\( \\binom{8}{6} = \\binom{8}{2} = \\frac{8 \\times 7}{2 \\times 1} = 28 \\).\n2. \\( \\left( \\frac{3}{5}x \\right)^2 = \\frac{9}{25}x^2 \\).\n3. \\( \\left( -\\frac{y}{2} \\right)^6 = \\left( \\frac{1}{2} \\right)^6 = \\frac{1}{64} y^6 \\).\n\nCombining these, we get:\n\n\\[\n28 \\times \\frac{9}{25} \\times \\frac{1}{64} = 28 \\times \\frac{9}{1600} = \\frac{252}{1600} = \\frac{63}{400}\n\\]\n\nThus, the coefficient of \\( x^2 y^6 \\) is \\boxed{\\dfrac{63}{400}}.",
    "messages": [
        {
            "content": "What is the coefficient of $x^2y^6$ in the expansion of $\\left(\\frac{3}{5}x-\\frac{y}{2}\\right)^8$?  Express your answer as a common fraction.",
            "role": "user"
        },
        {
            "content": "To determine the coefficient of \\(x^2y^6\\) in the expansion of \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)^8\\), we can use the binomial theorem.\n\nThe binomial theorem states:\n\\[\n(a + b)^n = \\sum_{k=0}^{n} \\binom{n}{k} a^{n-k} b^k\n\\]\n\nIn this case, \\(a = \\frac{3}{5}x\\), \\(b = -\\frac{y}{2}\\), and \\(n = 8\\).\n\nWe are interested in the term that contains \\(x^2y^6\\). In the general term of the binomial expansion:\n\\[\n\\binom{8}{k} \\left(\\frac{3}{5}x\\right)^{8-k} \\left(-\\frac{y}{2}\\right)^k\n\\]\n\nTo get \\(x^2\\), we need \\(8 - k = 2\\), thus \\(k = 6\\).\n\nSubstituting \\(k = 6\\) into the expression:\n\\[\n\\binom{8}{6} \\left(\\frac{3}{5}x\\right)^{8-6} \\left(-\\frac{y}{2}\\right)^6 = \\binom{8}{6} \\left(\\frac{3}{5}x\\right)^2 \\left(-\\frac{y}{2}\\right)^6\n\\]\n\nNow, we will compute each part of this expression.\n\n1. Calculate the binomial coefficient \\(\\binom{8}{6}\\).\n2. Compute \\(\\left(\\frac{3}{5}\\right)^2\\).\n3. Compute \\(\\left(-\\frac{y}{2}\\right)^6\\).\n4. Combine everything together to get the coefficient of \\(x^2y^6\\).\n\nLet\u0027s compute these in Python.\n```python\nfrom math import comb\n\n# Given values\nn = 8\nk = 6\n\n# Calculate the binomial coefficient\nbinom_coeff = comb(n, k)\n\n# Compute (3/5)^2\na_term = (3/5)**2\n\n# Compute (-1/2)^6\nb_term = (-1/2)**6\n\n# Combine terms to get the coefficient of x^2y^6\ncoefficient = binom_coeff * a_term * b_term\nprint(coefficient)\n```\n```output\n0.1575\n```\nThe coefficient of \\(x^2y^6\\) in the expansion of \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)^8\\) is \\(0.1575\\). To express this as a common fraction, we recognize that:\n\n\\[ 0.1575 = \\frac{1575}{10000} = \\frac{63}{400} \\]\n\nThus, the coefficient can be expressed as:\n\n\\[\n\\boxed{\\frac{63}{400}}\n\\]",
            "role": "assistant"
        }
    ],
    "model_name": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "problem": "What is the coefficient of $x^2y^6$ in the expansion of $\\left(\\frac{3}{5}x-\\frac{y}{2}\\right)^8$?  Express your answer as a common fraction.",
    "solution": "To determine the coefficient of \\(x^2y^6\\) in the expansion of \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)^8\\), we can use the binomial theorem.\n\nThe binomial theorem states:\n\\[\n(a + b)^n = \\sum_{k=0}^{n} \\binom{n}{k} a^{n-k} b^k\n\\]\n\nIn this case, \\(a = \\frac{3}{5}x\\), \\(b = -\\frac{y}{2}\\), and \\(n = 8\\).\n\nWe are interested in the term that contains \\(x^2y^6\\). In the general term of the binomial expansion:\n\\[\n\\binom{8}{k} \\left(\\frac{3}{5}x\\right)^{8-k} \\left(-\\frac{y}{2}\\right)^k\n\\]\n\nTo get \\(x^2\\), we need \\(8 - k = 2\\), thus \\(k = 6\\).\n\nSubstituting \\(k = 6\\) into the expression:\n\\[\n\\binom{8}{6} \\left(\\frac{3}{5}x\\right)^{8-6} \\left(-\\frac{y}{2}\\right)^6 = \\binom{8}{6} \\left(\\frac{3}{5}x\\right)^2 \\left(-\\frac{y}{2}\\right)^6\n\\]\n\nNow, we will compute each part of this expression.\n\n1. Calculate the binomial coefficient \\(\\binom{8}{6}\\).\n2. Compute \\(\\left(\\frac{3}{5}\\right)^2\\).\n3. Compute \\(\\left(-\\frac{y}{2}\\right)^6\\).\n4. Combine everything together to get the coefficient of \\(x^2y^6\\).\n\nLet\u0027s compute these in Python.\n```python\nfrom math import comb\n\n# Given values\nn = 8\nk = 6\n\n# Calculate the binomial coefficient\nbinom_coeff = comb(n, k)\n\n# Compute (3/5)^2\na_term = (3/5)**2\n\n# Compute (-1/2)^6\nb_term = (-1/2)**6\n\n# Combine terms to get the coefficient of x^2y^6\ncoefficient = binom_coeff * a_term * b_term\nprint(coefficient)\n```\n```output\n0.1575\n```\nThe coefficient of \\(x^2y^6\\) in the expansion of \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)^8\\) is \\(0.1575\\). To express this as a common fraction, we recognize that:\n\n\\[ 0.1575 = \\frac{1575}{10000} = \\frac{63}{400} \\]\n\nThus, the coefficient can be expressed as:\n\n\\[\n\\boxed{\\frac{63}{400}}\n\\]"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("wangx0t/numina-deepseek-r1-qwen-7b", "default")
```
Or simply as follows, since there is only one configuration, named `default`: 
```python
from datasets import load_dataset
ds = load_dataset("wangx0t/numina-deepseek-r1-qwen-7b")
```
</details>
 | 
| 
	Lots-of-LoRAs/task1159_bard_analogical_reasoning_containers | 
	Lots-of-LoRAs | 
	2024-12-30T23:35:53Z | 9 | 0 | 
	[
  "task_categories:text-generation",
  "annotations_creators:crowdsourced",
  "language_creators:crowdsourced",
  "language:en",
  "license:apache-2.0",
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "arxiv:2204.07705",
  "arxiv:2407.00066",
  "region:us"
] | 
	[
  "text-generation"
] | 
	2024-12-30T23:35:51Z | 0 | 
	---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
task_categories:
- text-generation
pretty_name: task1159_bard_analogical_reasoning_containers
dataset_info:
  config_name: plain_text
  features:
  - name: input
    dtype: string
  - name: output
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: train
    num_examples: 558
  - name: valid
    num_examples: 70
  - name: test
    num_examples: 70
---
# Dataset Card for Natural Instructions (https://github.com/allenai/natural-instructions) Task: task1159_bard_analogical_reasoning_containers
## Dataset Description
- **Homepage:** https://github.com/allenai/natural-instructions
- **Paper:** https://arxiv.org/abs/2204.07705
- **Paper:** https://arxiv.org/abs/2407.00066
- **Point of Contact:** [Rickard Brüel Gabrielsson](mailto:[email protected])
## Additional Information
### Citation Information
The following paper introduces the corpus in detail. If you use the corpus in published work, please cite it: 
```bibtex
@misc{wang2022supernaturalinstructionsgeneralizationdeclarativeinstructions,
    title={Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks}, 
    author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and Anjana Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and Mehrad Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddhartha Mishra and Sujan Reddy and Sumanta Patro and Tanay Dixit and Xudong Shen and Chitta Baral and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi and Daniel Khashabi},
    year={2022},
    eprint={2204.07705},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2204.07705}, 
}
```
More details can also be found in the following paper:
```bibtex
@misc{brüelgabrielsson2024compressserveservingthousands,
    title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead}, 
    author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
    year={2024},
    eprint={2407.00066},
    archivePrefix={arXiv},
    primaryClass={cs.DC},
    url={https://arxiv.org/abs/2407.00066}, 
}
```
### Contact Information
For any comments or questions, please email [Rickard Brüel Gabrielsson](mailto:[email protected])
 | 
| 
	PatronusAI/glider-multilingual-reward-bench-suite | 
	PatronusAI | 
	2024-12-18T23:41:51Z | 67 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-11-19T17:15:14Z | 0 | 
	---
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: score
    dtype: int64
  - name: pass_criteria
    dtype: string
  - name: rubric
    dtype: string
  splits:
  - name: arb_Arab
    num_bytes: 12237595
    num_examples: 2869
  - name: ces_Latn
    num_bytes: 10748534
    num_examples: 2869
  - name: deu_Latn
    num_bytes: 11178372
    num_examples: 2869
  - name: ell_Grek
    num_bytes: 14404826
    num_examples: 2869
  - name: fra_Latn
    num_bytes: 11245160
    num_examples: 2869
  - name: heb_Hebr
    num_bytes: 11754840
    num_examples: 2869
  - name: hin_Deva
    num_bytes: 15857179
    num_examples: 2869
  - name: ind_Latn
    num_bytes: 10872895
    num_examples: 2869
  - name: ita_Latn
    num_bytes: 10874456
    num_examples: 2869
  - name: jpn_Jpan
    num_bytes: 11443888
    num_examples: 2869
  - name: kor_Hang
    num_bytes: 11060869
    num_examples: 2869
  - name: nld_Latn
    num_bytes: 10904828
    num_examples: 2869
  - name: pes_Arab
    num_bytes: 12736791
    num_examples: 2869
  - name: pol_Latn
    num_bytes: 10905164
    num_examples: 2869
  - name: por_Latn
    num_bytes: 10826113
    num_examples: 2869
  - name: ron_Latn
    num_bytes: 11069968
    num_examples: 2869
  - name: rus_Cyrl
    num_bytes: 13720140
    num_examples: 2869
  - name: spa_Latn
    num_bytes: 10960266
    num_examples: 2869
  - name: tur_Latn
    num_bytes: 10873535
    num_examples: 2869
  - name: ukr_Cyrl
    num_bytes: 13392298
    num_examples: 2869
  - name: vie_Latn
    num_bytes: 11823251
    num_examples: 2869
  - name: zho_Hans
    num_bytes: 10016686
    num_examples: 2869
  - name: zho_Hant
    num_bytes: 9899172
    num_examples: 2869
  download_size: 71816921
  dataset_size: 268806826
configs:
- config_name: default
  data_files:
  - split: arb_Arab
    path: data/arb_Arab-*
  - split: ces_Latn
    path: data/ces_Latn-*
  - split: deu_Latn
    path: data/deu_Latn-*
  - split: ell_Grek
    path: data/ell_Grek-*
  - split: fra_Latn
    path: data/fra_Latn-*
  - split: heb_Hebr
    path: data/heb_Hebr-*
  - split: hin_Deva
    path: data/hin_Deva-*
  - split: ind_Latn
    path: data/ind_Latn-*
  - split: ita_Latn
    path: data/ita_Latn-*
  - split: jpn_Jpan
    path: data/jpn_Jpan-*
  - split: kor_Hang
    path: data/kor_Hang-*
  - split: nld_Latn
    path: data/nld_Latn-*
  - split: pes_Arab
    path: data/pes_Arab-*
  - split: pol_Latn
    path: data/pol_Latn-*
  - split: por_Latn
    path: data/por_Latn-*
  - split: ron_Latn
    path: data/ron_Latn-*
  - split: rus_Cyrl
    path: data/rus_Cyrl-*
  - split: spa_Latn
    path: data/spa_Latn-*
  - split: tur_Latn
    path: data/tur_Latn-*
  - split: ukr_Cyrl
    path: data/ukr_Cyrl-*
  - split: vie_Latn
    path: data/vie_Latn-*
  - split: zho_Hans
    path: data/zho_Hans-*
  - split: zho_Hant
    path: data/zho_Hant-*
---
 | 
| 
	pdf2dataset/436f3df84262cc0743d5d8d1acf9368 | 
	pdf2dataset | 
	2024-10-05T11:48:31Z | 18 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-10-05T11:48:30Z | 0 | 
	---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 1691
    num_examples: 5
  download_size: 6842
  dataset_size: 1691
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	R0bfried/RAGAS-12000-llama-3-2-1B-eval | 
	R0bfried | 
	2025-03-20T09:01:32Z | 14 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-03-20T09:01:28Z | 0 | 
	---
dataset_info:
  features:
  - name: user_input
    dtype: string
  - name: retrieved_contexts
    sequence: string
  - name: response
    dtype: string
  - name: reference
    dtype: string
  - name: faithfulness
    dtype: float64
  - name: answer_relevancy
    dtype: float64
  - name: answer_correctness
    dtype: float64
  splits:
  - name: train
    num_bytes: 2604364
    num_examples: 150
  download_size: 954997
  dataset_size: 2604364
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	cijerezg/pick-up-test-v1 | 
	cijerezg | 
	2025-09-25T05:41:35Z | 64 | 0 | 
	[
  "task_categories:robotics",
  "license:apache-2.0",
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:tabular",
  "modality:video",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us",
  "LeRobot"
] | 
	[
  "robotics"
] | 
	2025-09-25T05:41:15Z | 0 | 
	---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
  data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
    "codebase_version": "v3.0",
    "robot_type": "so101_follower",
    "total_episodes": 20,
    "total_frames": 8941,
    "total_tasks": 1,
    "chunks_size": 1000,
    "data_files_size_in_mb": 100,
    "video_files_size_in_mb": 500,
    "fps": 30,
    "splits": {
        "train": "0:20"
    },
    "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
    "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
    "features": {
        "action": {
            "dtype": "float32",
            "names": [
                "shoulder_pan.pos",
                "shoulder_lift.pos",
                "elbow_flex.pos",
                "wrist_flex.pos",
                "wrist_roll.pos",
                "gripper.pos"
            ],
            "shape": [
                6
            ]
        },
        "observation.state": {
            "dtype": "float32",
            "names": [
                "shoulder_pan.pos",
                "shoulder_lift.pos",
                "elbow_flex.pos",
                "wrist_flex.pos",
                "wrist_roll.pos",
                "gripper.pos"
            ],
            "shape": [
                6
            ]
        },
        "observation.images.wrist": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channels"
            ],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.top_left": {
            "dtype": "video",
            "shape": [
                340,
                340,
                3
            ],
            "names": [
                "height",
                "width",
                "channels"
            ],
            "info": {
                "video.height": 340,
                "video.width": 340,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.top_right": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channels"
            ],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [
                1
            ],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        }
    }
}
```
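Since the card's `configs` entry maps `data_files` to `data/*/*.parquet`, the tabular frames can be loaded directly with the `datasets` library (the video streams live in separate MP4 files and are not part of the parquet rows). A minimal sketch, assuming the default split name:
```python
from datasets import load_dataset

# Loads every episode parquet file declared in the default config.
ds = load_dataset("cijerezg/pick-up-test-v1", split="train")

# Each row is one frame: 6-dim action/state vectors plus indexing columns.
frame = ds[0]
print(frame["episode_index"], frame["frame_index"], frame["action"])
```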
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 
| 
	ds4sd/PubTables-1M_OTSL | 
	ds4sd | 
	2023-08-31T16:00:24Z | 22,884 | 4 | 
	[
  "task_categories:object-detection",
  "task_categories:table-to-text",
  "license:other",
  "size_categories:1M<n<10M",
  "format:parquet",
  "modality:image",
  "modality:text",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "arxiv:2305.03393",
  "region:us",
  "table-structure-recognition",
  "table-understanding",
  "PDF"
] | 
	[
  "object-detection",
  "table-to-text"
] | 
	2023-08-10T08:21:06Z | 0 | 
	---
license: other
pretty_name: PubTables-1M-OTSL
size_categories:
- 100K<n<1M
tags:
- table-structure-recognition
- table-understanding
- PDF
task_categories:
- object-detection
- table-to-text
---
# Dataset Card for PubTables-1M_OTSL
## Dataset Description
- **Homepage:** https://ds4sd.github.io
- **Paper:** https://arxiv.org/pdf/2305.03393
### Dataset Summary
This dataset enables the evaluation of both object-detection models and image-to-text methods.
[PubTables-1M](https://github.com/microsoft/table-transformer) is introduced in the publication *"PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents"* by Smock et al. The conversion into HF (Hugging Face) format and the addition of the OTSL (Optimized Table Structure Language) format are presented in our paper "Optimized Table Tokenization for Table Structure Recognition" by Lysak et al. The dataset includes the original annotations alongside the new additions.
### Dataset Structure
* cells: original dataset cell ground truth (content).
* table_bbox: original dataset table detection ground truth.
* otsl: new reduced table-structure token format.
* html: generated HTML for PubTables-1M to match the PubTabNet, FinTabNet, and SynthTabNet format.
* html_restored: HTML regenerated from OTSL.
* cols: number of grid columns.
* rows: number of grid rows.
* image: PIL image.
### OTSL Vocabulary
**OTSL**: a reduced table-structure token format.
More information on the OTSL table structure format and its concepts can be found in our paper.
The format used in this dataset extends the one presented in the paper, with slight modifications (a minimal decoding sketch follows the list):
* "fcel" - cell that has content in it
* "ecel" - cell that is empty
* "lcel" - left-looking cell (to handle horizontally merged cells)
* "ucel" - up-looking cell (to handle vertically merged cells)
* "xcel" - 2d span cells, in this dataset - covers entire area of a merged cell
* "nl" - new line token
### Data Splits
The dataset provides three splits:
- `train`
- `val`
- `test`
## Additional Information
### Dataset Curators
The dataset is converted by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [[email protected]](mailto:[email protected]).
Curators:
- Maksym Lysak, [@maxmnemonic](https://github.com/maxmnemonic)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Nikos Livathinos, [@nikos-livathinos](https://github.com/nikos-livathinos)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Citation Information
**Citation to OTSL Paper:**
```bibtex
@article{lysak2023optimized,
  title={Optimized Table Tokenization for Table Structure Recognition},
  author={Maksym Lysak and Ahmed Nassar and Nikolaos Livathinos and Christoph Auer and Peter Staar},
  year={2023},
  eprint={2305.03393},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
**Citation to PubTables-1M creators:**
```bibtex
@inproceedings{smock2022pubtables,
  title={Pub{T}ables-1{M}: Towards comprehensive table extraction from unstructured documents},
  author={Smock, Brandon and Pesala, Rohith and Abraham, Robin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={4634-4642},
  year={2022},
  month={June}
}
``` | 
| 
	Abeyankar/mtl_7k_inference_ds | 
	Abeyankar | 
	2025-06-03T20:06:37Z | 0 | 0 | 
	[
  "task_categories:object-detection",
  "language:en",
  "size_categories:10K<n<100K",
  "format:imagefolder",
  "modality:image",
  "library:datasets",
  "library:mlcroissant",
  "library:fiftyone",
  "region:us",
  "fiftyone",
  "image",
  "object-detection"
] | 
	[
  "object-detection"
] | 
	2025-06-03T19:58:48Z | 0 | 
	---
annotations_creators: []
language: en
size_categories:
- 10K<n<100K
task_categories:
- object-detection
task_ids: []
pretty_name: train_2844_mcityclean_val_7642_7k
tags:
- fiftyone
- image
- object-detection
commit_message: Hand annotated dataset MTL 7k
overwrite: true
dataset_summary: '
  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 10486 samples.
  ## Installation
  If you haven''t already, install FiftyOne:
  ```bash
  pip install -U fiftyone
  ```
  ## Usage
  ```python
  import fiftyone as fo
  from fiftyone.utils.huggingface import load_from_hub
  # Load the dataset
  # Note: other available arguments include ''max_samples'', etc
  dataset = load_from_hub("Abeyankar/mtl_7k_inference_ds")
  # Launch the App
  session = fo.launch_app(dataset)
  ```
  '
---
# Dataset Card for train_2844_mcityclean_val_7642_7k
<!-- Provide a quick summary of the dataset. -->
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 10486 samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("Abeyankar/mtl_7k_inference_ds")
# Launch the App
session = fo.launch_app(dataset)
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 
| 
	Ramitha/unique-records-selected-integrated-gradients | 
	Ramitha | 
	2025-09-27T08:57:26Z | 229 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-08-23T17:19:14Z | 0 | 
	---
dataset_info:
  features:
  - name: question
    dtype: string
  - name: snippet
    dtype: string
  - name: answer
    dtype: string
  - name: ILRAlign
    dtype: float64
  - name: dataset
    dtype: string
  - name: question_ig_tokens_llama
    dtype: string
  - name: answer_ig_tokens_llama
    dtype: string
  - name: question_ig_tokens_mistral
    dtype: string
  - name: answer_ig_tokens_mistral
    dtype: string
  - name: question_ig_tokens_falcon
    dtype: string
  - name: answer_ig_tokens_falcon
    dtype: string
  - name: question_ig_tokens_gemma
    dtype: string
  - name: answer_ig_tokens_gemma
    dtype: string
  - name: question_raw_ig_tokens_falcon
    dtype: string
  - name: answer_raw_ig_tokens_falcon
    dtype: string
  - name: question_raw_ig_tokens_llama
    dtype: string
  - name: answer_raw_ig_tokens_llama
    dtype: string
  - name: question_raw_ig_tokens_mistral
    dtype: string
  - name: answer_raw_ig_tokens_mistral
    dtype: string
  - name: question_raw_ig_tokens_gemma
    dtype: string
  - name: answer_raw_ig_tokens_gemma
    dtype: string
  - name: question_iaa_all
    dtype: float64
  - name: question_iaa_llama_falcon
    dtype: float64
  - name: question_iaa_llama_gemma
    dtype: float64
  - name: question_iaa_llama_mistral
    dtype: float64
  - name: question_iaa_falcon_gemma
    dtype: float64
  - name: question_iaa_falcon_mistral
    dtype: float64
  - name: question_iaa_gemma_mistral
    dtype: float64
  - name: answer_iaa_all
    dtype: float64
  - name: answer_iaa_llama_falcon
    dtype: float64
  - name: answer_iaa_llama_gemma
    dtype: float64
  - name: answer_iaa_llama_mistral
    dtype: float64
  - name: answer_iaa_falcon_gemma
    dtype: float64
  - name: answer_iaa_falcon_mistral
    dtype: float64
  - name: answer_iaa_gemma_mistral
    dtype: float64
  - name: question_unique_words
    dtype: float64
  - name: answer_unique_words
    dtype: float64
  - name: question_answerGenerated_llama
    dtype: string
  - name: reverse_answer_answerGenerated_llama
    dtype: string
  - name: question_answerGenerated_falcon
    dtype: string
  - name: reverse_answer_answerGenerated_falcon
    dtype: string
  - name: question_answerGenerated_gemma
    dtype: string
  - name: reverse_answer_answerGenerated_gemma
    dtype: string
  - name: question_answerGenerated_mistral
    dtype: string
  - name: reverse_answer_answerGenerated_mistral
    dtype: string
  - name: ILRAlign_without_context_llama
    dtype: float64
  - name: ILRAlign_with_context_llama
    dtype: float64
  - name: ILRAlign_without_context_falcon
    dtype: float64
  - name: ILRAlign_with_context_falcon
    dtype: float64
  - name: ILRAlign_without_context_gemma
    dtype: float64
  - name: ILRAlign_with_context_gemma
    dtype: float64
  - name: ILRAlign_without_context_mistral
    dtype: float64
  - name: ILRAlign_with_context_mistral
    dtype: float64
  - name: ILRAlign_with_problem_context_only_mistral
    dtype: float64
  - name: ILRAlign_with_answer_context_only_mistral
    dtype: float64
  - name: ILRAlign_with_problem_context_only_llama
    dtype: float64
  - name: ILRAlign_with_answer_context_only_llama
    dtype: float64
  - name: ILRAlign_with_problem_context_only_falcon
    dtype: float64
  - name: ILRAlign_with_answer_context_only_falcon
    dtype: float64
  - name: ILRAlign_with_problem_context_only_gemma
    dtype: float64
  - name: ILRAlign_with_answer_context_only_gemma
    dtype: float64
  splits:
  - name: rawcases
    num_bytes: 2063463
    num_examples: 54
  download_size: 1254341
  dataset_size: 2063463
configs:
- config_name: default
  data_files:
  - split: rawcases
    path: data/rawcases-*
---
 | 
| 
	trnguyenai01/FullyIndicators_distillation | 
	trnguyenai01 | 
	2025-04-21T03:06:27Z | 18 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-04-21T03:06:06Z | 0 | 
	---
dataset_info:
  features:
  - name: reports
    dtype: string
  - name: labels
    dtype: string
  splits:
  - name: train
    num_bytes: 243978594
    num_examples: 15350
  download_size: 44782472
  dataset_size: 243978594
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	cchoi1/humaneval-datagen-qwen1.5b_best_att_50_sol_50_20250226_184052 | 
	cchoi1 | 
	2025-02-27T05:19:20Z | 18 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-02-27T05:19:18Z | 0 | 
	---
dataset_info:
  features:
  - name: problem_id
    dtype: string
  - name: prompt
    dtype: string
  - name: canonical_solution
    dtype: string
  - name: chosen
    dtype: string
  - name: rejected
    dtype: string
  - name: chosen_attack
    dtype: string
  - name: chosen_attack_explanation
    dtype: string
  - name: chosen_solution
    dtype: string
  - name: chosen_solution_explanation
    dtype: string
  - name: chosen_solve_rate
    dtype: float64
  - name: rejected_attack
    dtype: string
  - name: rejected_attack_explanation
    dtype: string
  - name: rejected_solution
    dtype: string
  - name: rejected_solution_explanation
    dtype: string
  - name: rejected_solve_rate
    dtype: float64
  splits:
  - name: train
    num_bytes: 464817
    num_examples: 156
  download_size: 45399
  dataset_size: 464817
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	kenken6696/ALCUNA_meta_affirmative_understood_unfamiliar_for_fix_middle_train | 
	kenken6696 | 
	2024-12-22T14:26:06Z | 55 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-12-22T14:26:02Z | 0 | 
	---
dataset_info:
  features:
  - name: type
    dtype: string
  - name: form
    dtype: string
  - name: property_name
    dtype: string
  - name: related_property_name
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: sentence
    dtype: string
  - name: meta_tag
    dtype: string
  - name: meta_sentence
    dtype: string
  splits:
  - name: train
    num_bytes: 879893.8397722652
    num_examples: 2213
  - name: test
    num_bytes: 97810.16022773486
    num_examples: 246
  download_size: 388387
  dataset_size: 977704.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
 | 
| 
	yoonholee/completions_easy-8k-med16k_HMMT2025 | 
	yoonholee | 
	2025-05-13T23:04:46Z | 0 | 0 | 
	[
  "region:us"
] | 
	[] | 
	2025-05-13T23:04:44Z | 0 | 
	---
dataset_info:
  features:
  - name: problem
    dtype: string
  - name: completions
    sequence: string
  - name: answer
    dtype: string
  - name: corrects
    sequence: bool
  - name: acc
    dtype: float64
  splits:
  - name: train
    num_bytes: 10195068
    num_examples: 30
  download_size: 4017936
  dataset_size: 10195068
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	interhin/fora-eval | 
	interhin | 
	2024-10-18T13:25:20Z | 18 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-10-18T13:08:15Z | 0 | 
	---
dataset_info:
  features:
  - name: repo_id
    dtype: string
  - name: file_path
    dtype: string
  - name: content
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 15988254
    num_examples: 4377
  download_size: 4209542
  dataset_size: 15988254
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	gbyuvd/sabrlo-chem-selfies-training | 
	gbyuvd | 
	2025-09-26T17:52:29Z | 37 | 0 | 
	[
  "license:cc-by-3.0",
  "size_categories:100K<n<1M",
  "format:csv",
  "modality:text",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-09-26T17:39:59Z | 0 | 
	---
license: cc-by-3.0
---
# Valid Bioactives and Natural Product SELFIES
~1M valid SELFIES with seq_len <= 25 (tokenized with FastChemTokenizerSelfies), built and curated from the ChemBL34 (Zdrazil et al. 2023), COCONUTDB (Sorokina et al. 2021), and Supernatural3 (Gallo et al. 2023) datasets.
## Processing
The original data was processed by filtering to max_seq_len <= 25 for training ChemMiniQ3-SAbRLo. The cleaned dataset was then split into six approximately equal chunks to allow training on limited compute.
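FastChemTokenizerSelfies is the curator's own tokenizer; as a rough approximation of the length filter, here is a sketch using the open-source `selfies` package, whose `split_selfies` yields one symbol per token:
```python
import selfies as sf

MAX_LEN = 25  # the card's seq_len threshold

def keep(selfies_string: str) -> bool:
    # Approximate seq_len <= 25 by counting SELFIES symbols.
    return len(list(sf.split_selfies(selfies_string))) <= MAX_LEN

print(keep("[C][C][O]"))  # True: 3 symbols
```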
Curated by: gbyuvd
## References
### BibTeX
#### COCONUTDB
```bibtex
@article{sorokina2021coconut,
  title={COCONUT online: Collection of Open Natural Products database},
  author={Sorokina, Maria and Merseburger, Peter and Rajan, Kohulan and Yirik, Mehmet Aziz and Steinbeck, Christoph},
  journal={Journal of Cheminformatics},
  volume={13},
  number={1},
  pages={2},
  year={2021},
  doi={10.1186/s13321-020-00478-9}
}
```
#### ChemBL34
```bibtex
@article{zdrazil2023chembl,
  title={The ChEMBL Database in 2023: a drug discovery platform spanning multiple bioactivity data types and time periods},
  author={Zdrazil, Barbara and Felix, Eloy and Hunter, Fiona and Manners, Emma J and Blackshaw, James and Corbett, Sybilla and de Veij, Marleen and Ioannidis, Harris and Lopez, David Mendez and Mosquera, Juan F and Magarinos, Maria Paula and Bosc, Nicolas and Arcila, Ricardo and Kizil{\"o}ren, Tevfik and Gaulton, Anna and Bento, A Patr{\'i}cia and Adasme, Melissa F and Monecke, Peter and Landrum, Gregory A and Leach, Andrew R},
  journal={Nucleic Acids Research},
  year={2023},
  volume={gkad1004},
  doi={10.1093/nar/gkad1004}
}
@misc{chembl34,
  title={ChemBL34},
  year={2023},
  doi={10.6019/CHEMBL.database.34}
}
```
#### SuperNatural3
```bibtex
@article{Gallo2023,
  author = {Gallo, K and Kemmler, E and Goede, A and Becker, F and Dunkel, M and Preissner, R and Banerjee, P},
  title = {{SuperNatural 3.0-a database of natural products and natural product-based derivatives}},
  journal = {Nucleic Acids Research},
  year = {2023},
  month = jan,
  day = {6},
  volume = {51},
  number = {D1},
  pages = {D654-D659},
  doi = {10.1093/nar/gkac1008}
}
``` | 
| 
	jchun/so100_pickplace_small_20250322_160953 | 
	jchun | 
	2025-03-23T00:44:11Z | 25 | 0 | 
	[
  "task_categories:robotics",
  "license:apache-2.0",
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:tabular",
  "modality:timeseries",
  "modality:video",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us",
  "LeRobot",
  "pickplace"
] | 
	[
  "robotics"
] | 
	2025-03-22T23:03:54Z | 0 | 
	---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- pickplace
configs:
- config_name: default
  data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
    "codebase_version": "v2.1",
    "robot_type": "so100",
    "total_episodes": 4,
    "total_frames": 3323,
    "total_tasks": 1,
    "total_videos": 12,
    "total_chunks": 1,
    "chunks_size": 1000,
    "fps": 30,
    "splits": {
        "train": "0:4"
    },
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
    "features": {
        "action": {
            "dtype": "float32",
            "shape": [
                12
            ],
            "names": [
                "main_shoulder_pan",
                "main_shoulder_lift",
                "main_elbow_flex",
                "main_wrist_flex",
                "main_wrist_roll",
                "main_gripper",
                "cv_shoulder_pan",
                "cv_shoulder_lift",
                "cv_elbow_flex",
                "cv_wrist_flex",
                "cv_wrist_roll",
                "cv_gripper"
            ]
        },
        "observation.state": {
            "dtype": "float32",
            "shape": [
                12
            ],
            "names": [
                "main_shoulder_pan",
                "main_shoulder_lift",
                "main_elbow_flex",
                "main_wrist_flex",
                "main_wrist_roll",
                "main_gripper",
                "cv_shoulder_pan",
                "cv_shoulder_lift",
                "cv_elbow_flex",
                "cv_wrist_flex",
                "cv_wrist_roll",
                "cv_gripper"
            ]
        },
        "observation.images.main": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channels"
            ],
            "info": {
                "video.fps": 30.0,
                "video.height": 480,
                "video.width": 640,
                "video.channels": 3,
                "video.codec": "h264",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "has_audio": false
            }
        },
        "observation.images.cv": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channels"
            ],
            "info": {
                "video.fps": 30.0,
                "video.height": 480,
                "video.width": 640,
                "video.channels": 3,
                "video.codec": "h264",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "has_audio": false
            }
        },
        "observation.images.webcam": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channels"
            ],
            "info": {
                "video.fps": 30.0,
                "video.height": 480,
                "video.width": 640,
                "video.channels": 3,
                "video.codec": "h264",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "has_audio": false
            }
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [
                1
            ],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        }
    }
}
```
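The `data_path` and `video_path` entries in `info.json` are Python format strings; a short sketch (values illustrative) of how an episode's files resolve under this chunked layout:
```python
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

# With chunks_size = 1000, episodes 0-999 all land in chunk 0.
episode_index = 2
chunk = episode_index // 1000
print(data_path.format(episode_chunk=chunk, episode_index=episode_index))
# data/chunk-000/episode_000002.parquet
print(video_path.format(episode_chunk=chunk,
                        video_key="observation.images.main",
                        episode_index=episode_index))
# videos/chunk-000/observation.images.main/episode_000002.mp4
```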
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 
| 
	Turbo-AI/train-multi-negatives-v2 | 
	Turbo-AI | 
	2024-11-17T14:19:35Z | 17 | 0 | 
	[
  "size_categories:100K<n<1M",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-11-17T14:19:12Z | 0 | 
	---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: text
    dtype: string
  - name: relevant
    list:
    - name: id
      dtype: int64
    - name: text
      dtype: string
  - name: not_relevant
    list:
    - name: id
      dtype: int64
    - name: text
      dtype: string
  splits:
  - name: train
    num_bytes: 918366297
    num_examples: 118956
  download_size: 321049144
  dataset_size: 918366297
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	rishitdagli/lrmm_full | 
	rishitdagli | 
	2025-06-07T01:56:55Z | 0 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-06-07T01:43:06Z | 0 | 
	---
dataset_info:
  features:
  - name: inputs
    sequence:
      sequence: float32
  - name: outputs
    sequence:
      sequence:
        sequence:
          sequence: float32
  splits:
  - name: train
    num_bytes: 34504560000
    num_examples: 10000
  download_size: 34238220403
  dataset_size: 34504560000
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_32dc5455-b053-4c95-94dd-2400e610d4e9 | 
	argilla-internal-testing | 
	2024-10-11T09:24:56Z | 21 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-10-11T09:24:55Z | 0 | 
	---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': positive
          '1': negative
  splits:
  - name: train
    num_bytes: 111
    num_examples: 3
  download_size: 1454
  dataset_size: 111
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	bwang0911/reasoning_pairs_s2orc | 
	bwang0911 | 
	2025-03-28T09:33:30Z | 16 | 0 | 
	[
  "size_categories:100K<n<1M",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-03-28T09:33:24Z | 0 | 
	---
dataset_info:
  features:
  - name: title
    dtype: string
  - name: body
    dtype: string
  - name: valid
    dtype: bool
  - name: reason
    dtype: string
  splits:
  - name: train
    num_bytes: 106573680
    num_examples: 100000
  download_size: 61811728
  dataset_size: 106573680
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	ThatsGroes/synthetic-from-text-mathing-short-tasks-swedish | 
	ThatsGroes | 
	2025-01-31T09:30:43Z | 21 | 0 | 
	[
  "language:sv",
  "license:mit",
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "arxiv:2401.00368",
  "region:us"
] | 
	[] | 
	2025-01-25T09:51:51Z | 0 | 
	---
dataset_info:
  features:
  - name: response
    dtype: string
  - name: model
    dtype: string
  - name: prompt
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: train
    num_bytes: 50422736
    num_examples: 50000
  download_size: 6177357
  dataset_size: 50422736
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
language:
- sv
---
# Thanks to Arrow Denmark and Nvidia for sponsoring the compute used to generate this dataset
The purpose of this dataset is to pre- or post-train embedding models for text matching tasks on short texts.
The dataset consists of 100,000 samples generated with gemma-2-27b-it.
The column "prompt" shows the prompt given to the LLM and "response" shows the LLM output.
Each sample in the dataset was generated from a seed task randomly sampled from https://huggingface.co/datasets/ThatsGroes/text-matching-short-tasks-processed
The data generation process described in this paper was followed:
https://arxiv.org/pdf/2401.00368
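A minimal loading sketch, assuming only the schema stated in the card (a `prompt` column holding a list of chat messages and a `response` column with the LLM output):
```python
from datasets import load_dataset

ds = load_dataset(
    "ThatsGroes/synthetic-from-text-mathing-short-tasks-swedish", split="train"
)

sample = ds[0]
# "prompt" is the list of {"role", "content"} messages given to the generator LLM.
for message in sample["prompt"]:
    print(message["role"], "->", message["content"][:80])
print("response:", sample["response"][:80])
```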
Compute sponsored by Arrow Denmark and Nvidia through Danish Data Science Community. | 
| 
	tmpmodelsave/beta01llamasft_math_ift_balanced_dpo_moredata_400tmp10 | 
	tmpmodelsave | 
	2025-01-22T08:38:43Z | 15 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-01-22T08:38:41Z | 0 | 
	---
dataset_info:
  features:
  - name: idx
    dtype: int64
  - name: gt
    dtype: string
  - name: prompt
    dtype: string
  - name: level
    dtype: string
  - name: type
    dtype: string
  - name: solution
    dtype: string
  - name: my_solu
    sequence: string
  - name: pred
    sequence: string
  - name: rewards
    sequence: bool
  splits:
  - name: train
    num_bytes: 43372827
    num_examples: 15000
  download_size: 15914121
  dataset_size: 43372827
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	thusinh1969/EraX-Cosent_LanhGPTV2_30NOV2024_4.4M | 
	thusinh1969 | 
	2024-12-01T01:56:45Z | 23 | 0 | 
	[
  "size_categories:1M<n<10M",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-12-01T01:39:11Z | 0 | 
	---
dataset_info:
  features:
  - name: sentence1
    dtype: string
  - name: sentence2
    dtype: string
  - name: score
    dtype: float64
  - name: similarity
    dtype: float64
  - name: type
    dtype: string
  - name: hard
    dtype: int64
  splits:
  - name: train
    num_bytes: 21363010272.562458
    num_examples: 4399028
  - name: test
    num_bytes: 194252096.3500342
    num_examples: 40000
  - name: eval
    num_bytes: 48563024.08750855
    num_examples: 10000
  download_size: 11579474915
  dataset_size: 21605825393.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: eval
    path: data/eval-*
---
 | 
| 
	james-1111/x_dataset_0303241 | 
	james-1111 | 
	2025-06-24T11:55:57Z | 1,077 | 0 | 
	[
  "task_categories:text-classification",
  "task_categories:token-classification",
  "task_categories:question-answering",
  "task_categories:summarization",
  "task_categories:text-generation",
  "task_ids:sentiment-analysis",
  "task_ids:topic-classification",
  "task_ids:named-entity-recognition",
  "task_ids:language-modeling",
  "task_ids:text-scoring",
  "task_ids:multi-class-classification",
  "task_ids:multi-label-classification",
  "task_ids:extractive-qa",
  "task_ids:news-articles-summarization",
  "multilinguality:multilingual",
  "source_datasets:original",
  "license:mit",
  "size_categories:10M<n<100M",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[
  "text-classification",
  "token-classification",
  "question-answering",
  "summarization",
  "text-generation"
] | 
	2025-01-25T07:11:23Z | 0 | 
	---
license: mit
multilinguality:
  - multilingual
source_datasets:
  - original
task_categories:
  - text-classification
  - token-classification
  - question-answering
  - summarization
  - text-generation
task_ids:
  - sentiment-analysis
  - topic-classification
  - named-entity-recognition
  - language-modeling
  - text-scoring
  - multi-class-classification
  - multi-label-classification
  - extractive-qa
  - news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
    <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
    <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** james-1111/x_dataset_0303241
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5HMi2tcDTWxckR86mn6d4cmM3dCGwoRdA4PvQNLcnSqjw86k
### Miner Data Compliance Agreement 
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md). 
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example: 
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: mostly English, but the dataset can be multilingual due to the decentralized nature of its creation.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
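A minimal sketch of filtering on these fields, assuming the default `train` split and using streaming so the multi-million-tweet dump is not downloaded in full:
```python
import itertools
from datasets import load_dataset

# Stream rather than download; the card reports 5M+ instances.
ds = load_dataset("james-1111/x_dataset_0303241", split="train", streaming=True)

# Keep only tweets carrying at least one hashtag (~5% of rows per the card).
with_tags = (row for row in ds if row["tweet_hashtags"])
for tweet in itertools.islice(with_tags, 3):
    print(tweet["datetime"], tweet["tweet_hashtags"])
```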
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{james-11112025datauniversex_dataset_0303241,
        title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
        author={james-1111},
        year={2025},
        url={https://huggingface.co/datasets/james-1111/x_dataset_0303241},
        }
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 5415872
- **Date Range:** 2025-01-02T00:00:00Z to 2025-06-14T00:00:00Z
- **Last Updated:** 2025-06-24T11:55:56Z
### Data Distribution
- Tweets with hashtags: 4.82%
- Tweets without hashtags: 95.18%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 1112737 | 80.99% |
| 2 | #riyadh | 24082 | 1.75% |
| 3 | #マテムり | 15966 | 1.16% |
| 4 | #pbbcollab6thduoeviction | 11023 | 0.80% |
| 5 | #tiktok | 8638 | 0.63% |
| 6 | #箱根駅伝 | 8147 | 0.59% |
| 7 | #thameposeriesep9 | 7605 | 0.55% |
| 8 | #wtcfinal2025 | 6398 | 0.47% |
| 9 | #first_showcase | 6311 | 0.46% |
| 10 | #ad | 5465 | 0.40% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-25T07:07:31Z | 453526 | 453526 |
| 2025-01-25T07:07:59Z | 453526 | 907052 |
| 2025-01-25T07:08:28Z | 453526 | 1360578 |
| 2025-01-25T07:08:56Z | 446896 | 1807474 |
| 2025-01-25T07:09:24Z | 446896 | 2254370 |
| 2025-01-25T07:09:52Z | 446896 | 2701266 |
| 2025-01-25T07:10:21Z | 446896 | 3148162 |
| 2025-01-25T07:10:51Z | 446896 | 3595058 |
| 2025-01-25T07:11:21Z | 446896 | 4041954 |
| 2025-01-25T07:11:51Z | 446896 | 4488850 |
| 2025-02-18T03:41:26Z | 467290 | 4956140 |
| 2025-06-24T11:55:56Z | 459732 | 5415872 |
 | 
| 
	aliazn/mathchatsync-temp | 
	aliazn | 
	2025-05-16T21:10:23Z | 0 | 0 | 
	[
  "region:us"
] | 
	[] | 
	2025-05-16T21:10:13Z | 0 | 
	---
dataset_info:
  features:
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: train
    num_bytes: 377089336
    num_examples: 144978
  download_size: 159608385
  dataset_size: 377089336
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	JunxiongWang/MATH_SFT | 
	JunxiongWang | 
	2025-04-15T18:08:20Z | 92 | 0 | 
	[
  "size_categories:10M<n<100M",
  "format:parquet",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-04-15T18:05:29Z | 0 | 
	---
dataset_info:
- config_name: tokenized_llama31
  features:
  - name: input_ids
    sequence: int32
  - name: labels
    sequence: int64
  - name: attention_mask
    sequence: int8
  splits:
  - name: train
    num_examples: 19089792
configs:
- config_name: tokenized_llama31
  data_files:
  - split: train
    path: tokenized_llama31/train-*
---
 | 
| 
	mlfoundations-dev/instruction_filtering_scale_up_math_base_embedding_filter_mean_8K | 
	mlfoundations-dev | 
	2025-03-07T20:44:31Z | 98 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-03-07T20:31:09Z | 0 | 
	---
dataset_info:
  features:
  - name: instruction_seed
    dtype: string
  - name: source
    dtype: string
  - name: embeddings
    sequence: float64
  - name: mean_positive_score
    dtype: float64
  - name: mean_negative_score
    dtype: float64
  - name: difference_score
    dtype: float64
  - name: reasoning
    dtype: string
  - name: deepseek_solution
    dtype: string
  - name: __original_row_idx
    dtype: int64
  - name: final_reasoning_trace
    dtype: string
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  splits:
  - name: train
    num_bytes: 636139670
    num_examples: 8000
  download_size: 290540569
  dataset_size: 636139670
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	shuohsuan/grasp_data_090_099 | 
	shuohsuan | 
	2025-06-16T08:53:26Z | 0 | 0 | 
	[
  "task_categories:robotics",
  "license:apache-2.0",
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:tabular",
  "modality:timeseries",
  "modality:video",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us",
  "LeRobot"
] | 
	[
  "robotics"
] | 
	2025-06-16T07:25:13Z | 0 | 
	---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
  data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
    "codebase_version": "v2.1",
    "robot_type": "so100_follower",
    "total_episodes": 3,
    "total_frames": 2488,
    "total_tasks": 1,
    "total_videos": 6,
    "total_chunks": 1,
    "chunks_size": 1000,
    "fps": 30,
    "splits": {
        "train": "0:3"
    },
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
    "features": {
        "action": {
            "dtype": "float32",
            "shape": [
                6
            ],
            "names": [
                "shoulder_pan.pos",
                "shoulder_lift.pos",
                "elbow_flex.pos",
                "wrist_flex.pos",
                "wrist_roll.pos",
                "gripper.pos"
            ]
        },
        "observation.state": {
            "dtype": "float32",
            "shape": [
                6
            ],
            "names": [
                "shoulder_pan.pos",
                "shoulder_lift.pos",
                "elbow_flex.pos",
                "wrist_flex.pos",
                "wrist_roll.pos",
                "gripper.pos"
            ]
        },
        "observation.images.top": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channels"
            ],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.wrist": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channels"
            ],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [
                1
            ],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        }
    }
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 
| 
	mingye94/trial0_fixed_5rules_rm_training_data | 
	mingye94 | 
	2025-01-21T02:26:49Z | 19 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-01-21T02:26:49Z | 0 | 
	---
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: chosen
    dtype: string
  - name: rejected
    dtype: string
  splits:
  - name: train
    num_bytes: 2351567
    num_examples: 1000
  download_size: 1336836
  dataset_size: 2351567
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	perfectName/rse_test | 
	perfectName | 
	2025-06-20T11:25:56Z | 0 | 0 | 
	[
  "license:apache-2.0",
  "region:us"
] | 
	[] | 
	2025-06-20T11:25:15Z | 0 | 
	---
license: apache-2.0
---
 | 
| 
	TOBEAI/KOR_Merged_data | 
	TOBEAI | 
	2024-10-02T07:28:27Z | 20 | 0 | 
	[
  "size_categories:1M<n<10M",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-10-02T00:43:07Z | 0 | 
	---
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: input
    dtype: string
  - name: output
    dtype: string
  splits:
  - name: train
    num_bytes: 1316148258.1879513
    num_examples: 1702522
  download_size: 739948626
  dataset_size: 1316148258.1879513
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	OpenFinAL/FINGPT_QA_meta3B-train-dataset | 
	OpenFinAL | 
	2025-06-07T20:50:25Z | 0 | 0 | 
	[
  "region:us"
] | 
	[] | 
	2025-06-07T20:50:22Z | 0 | 
	---
dataset_info:
  features:
  - name: Question
    dtype: string
  - name: Answer
    dtype: string
  splits:
  - name: train
    num_bytes: 16959815.1
    num_examples: 15399
  download_size: 10216321
  dataset_size: 16959815.1
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	DataTonic/climate-guard-thinking_data_nocomment_qwen_toxic_agent | 
	DataTonic | 
	2025-02-14T07:33:51Z | 20 | 0 | 
	[
  "task_categories:text-classification",
  "task_categories:zero-shot-classification",
  "language:en",
  "license:mit",
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us",
  "climate"
] | 
	[
  "text-classification",
  "zero-shot-classification"
] | 
	2025-02-09T14:54:23Z | 0 | 
	---
dataset_info:
  features:
  - name: quote
    dtype: string
  - name: label
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 1487214
    num_examples: 1548
  download_size: 742242
  dataset_size: 1487214
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- text-classification
- zero-shot-classification
language:
- en
tags:
- climate
---
# Toxic Agent - Qwen Synthetic Data: Magpie-like Climate Disinformation Dataset
## Dataset Description
### Overview
This dataset contains synthetic climate change-related statements, including various forms of climate disinformation and denial. It was created by generating variations and transformations of real climate-related statements, producing a diverse set of synthetic examples across the categories of climate disinformation used in the [climate-guard-synthetic_data_qwen_toxic_agent](https://huggingface.co/datasets/Tonic/climate-guard-synthetic_data_qwen_toxic_agent) dataset.
Total examples from `qwen2.5:14b-instruct-q4_K_M`: 1,548 entries
### Labels
The dataset includes the following categories of climate-related statements:
1. `1_not_happening`: Denial of climate change occurrence
2. `2_not_human`: Denial of human influence on climate
3. `3_not_bad`: Minimization of climate change impacts
4. `4_solutions_harmful_unnecessary`: Opposition to climate solutions
5. `5_science_is_unreliable`: Questioning climate science
6. `6_proponents_biased`: Criticism of climate scientists
7. `7_fossil_fuels_needed`: Pro-fossil fuel advocacy
### Data Fields
- `quote`: The synthetic climate-related statement (text)
- `label`: The category of climate disinformation (string)
### Data Splits
This dataset is provided as a single split containing all examples.
## Intended Uses
This dataset is intended for:
- Research on climate disinformation detection
- Development of content moderation systems
- Study of climate denial narratives
- Training classification models for climate-related content
## Limitations & Ethical Considerations
- This dataset contains synthetic examples of climate disinformation
- Should not be used to spread or promote climate denial
- Intended for research and educational purposes only
- May contain biases from the generation process
## Citation & Attribution
If you use this dataset, please cite:
```bibtex
@dataset{synthetic_climate_disinfo_2024,
  author       = {Joseph Pollack},
  title        = {Toxic Agent - Qwen Synthetic Data 2},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/DataTonic/climate-guard-thinking_data_nocomment_qwen_toxic_agent}}
}
```
## Dataset Creation
### Data Generation Process
The dataset was created using:
1. Base examples from the QuotaClimat/frugalaichallenge-text-train dataset
2. Generation of variations using "qwen2.5:14b-instruct-q4_K_M" 
3. Preservation of original label categories
4. Multiple generation strategies for diversity
#### Prompts
```python
        self.unified_prompt = """follow the following instructions :
                              {quote}
                              {category_instructions}, DO NOT PROVIDE ANY ADDITIONAL COMMENT produce final answer:"""
```
```python
    category_instructions = {
        '1_not_happening': "based on the above, produce a statement that denies the occurrence of global warming and its effects. The statement should argue against evidence of melting ice, extreme weather, or rising sea levels. It may point to cold weather as counter-evidence:",
        
        '2_not_human': "based on the above, produce a statement that denies human responsibility in climate change. The focus should be on arguing that greenhouse gases from human activities are not causing climate change. Emphasize natural climate variability:",
        
        '3_not_bad': "based on the above, Produce a statement that minimizes or denies negative impacts of climate change. The statement should suggest that climate change impacts will be minimal or even beneficial to society and the environment:",
        
        '4_solutions_harmful_unnecessary': "based on the above, produce a statement arguing that climate solutions are harmful or unnecessary. Focus on the potential negative economic or social impacts of climate policies and why they shouldn't be implemented:",
        
        '5_science_is_unreliable': "based on the above, Emphasize uncertainties in climate research, suggest the science is unsound, unreliable, or biased. Question methodologies and conclusions:",
        
        '6_proponents_biased': "based on the above, Produce a statement attacking climate scientists and activists. Portray them as alarmist, biased, wrong, hypocritical, corrupt, or politically motivated. Question their credibility and intentions:",
        
        '7_fossil_fuels_needed': "based on the above, Generate a statement promoting the necessity of fossil fuels. Emphasize their importance for economic growth, prosperity, and maintaining living standards. Argue why they cannot be replaced:"
    }
```
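Putting the two snippets together, here is a minimal sketch (with a hypothetical seed quote and an abbreviated instructions dict) of how one final generation prompt is assembled:
```python
# Abbreviated one-entry version of the category_instructions dict shown above.
category_instructions = {
    "2_not_human": ("based on the above, produce a statement that denies human "
                    "responsibility in climate change. Emphasize natural climate "
                    "variability:"),
}

unified_prompt = """follow the following instructions :
{quote}
{category_instructions}, DO NOT PROVIDE ANY ADDITIONAL COMMENT produce final answer:"""

seed_quote = "Glaciers have always advanced and retreated."  # hypothetical seed
print(unified_prompt.format(
    quote=seed_quote,
    category_instructions=category_instructions["2_not_human"],
))
```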
### Preprocessing
- Removed irrelevant examples
- Compose the prompt according to the above
- Maintained label consistency
## Dataset Structure
```python
DatasetDict({
    'train': Dataset({
        features: ['quote', 'label'],
        num_rows: 1548
    })
})
```
## Additional Information
### Dataset Curators
[Tonic](https://huggingface.co/Tonic) / [Datatonic](https://huggingface.co/DataTonic)
### Licensing Information
This dataset is released under the MIT License.
### Feedback & Contributions
For questions, feedback, or contributions, please:
1. Open an issue on the dataset repository
2. Contact the dataset maintainers
3. Submit a pull request with improvements
## Example Usage
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("DataTonic/climate-guard-thinking_data_nocomment_qwen_toxic_agent")
# Access examples
for example in dataset['train'].select(range(3)):
    print(f"Quote: {example['quote']}")
    print(f"Label: {example['label']}")
    print("---")
```
## Version History
- v1.0.0 (2024-02-08): Initial release with 1,548 synthetic examples
## Acknowledgements
- Based on the QuotaClimat/frugalaichallenge-text-train dataset
- Generated using RunPod infrastructure
- Developed for climate disinformation research | 
| 
	villekuosmanen/agilex_put_paper_cup_jsh | 
	villekuosmanen | 
	2025-02-13T05:17:47Z | 24 | 0 | 
	[
  "task_categories:robotics",
  "license:apache-2.0",
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:tabular",
  "modality:timeseries",
  "modality:video",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us",
  "LeRobot"
] | 
	[
  "robotics"
] | 
	2025-02-13T05:17:27Z | 0 | 
	---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
  data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
    "codebase_version": "v2.0",
    "robot_type": "arx5_bimanual",
    "total_episodes": 20,
    "total_frames": 4643,
    "total_tasks": 1,
    "total_videos": 60,
    "total_chunks": 1,
    "chunks_size": 1000,
    "fps": 25,
    "splits": {
        "train": "0:20"
    },
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
    "features": {
        "action": {
            "dtype": "float32",
            "shape": [
                14
            ]
        },
        "observation.state": {
            "dtype": "float32",
            "shape": [
                14
            ]
        },
        "observation.effort": {
            "dtype": "float32",
            "shape": [
                14
            ]
        },
        "observation.images.cam_high": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channels"
            ],
            "info": {
                "video.fps": 25.0,
                "video.height": 480,
                "video.width": 640,
                "video.channels": 3,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "has_audio": false
            }
        },
        "observation.images.cam_left_wrist": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channels"
            ],
            "info": {
                "video.fps": 25.0,
                "video.height": 480,
                "video.width": 640,
                "video.channels": 3,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "has_audio": false
            }
        },
        "observation.images.cam_right_wrist": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channels"
            ],
            "info": {
                "video.fps": 25.0,
                "video.height": 480,
                "video.width": 640,
                "video.channels": 3,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "has_audio": false
            }
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [
                1
            ],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        }
    }
}
```
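The `data_path` and `video_path` templates above resolve to per-episode files. As a minimal sketch (the `episode_paths` helper is an illustration, not part of the LeRobot API, and it assumes episodes map to chunks by integer division by `chunks_size`):
```python
# Sketch: resolve one episode's parquet and video paths from the
# templates in meta/info.json. Helper name and chunk mapping are
# assumptions for illustration.
CHUNKS_SIZE = 1000  # "chunks_size" from info.json

def episode_paths(episode_index: int,
                  video_key: str = "observation.images.cam_high"):
    chunk = episode_index // CHUNKS_SIZE  # assumed episode->chunk mapping
    data = f"data/chunk-{chunk:03d}/episode_{episode_index:06d}.parquet"
    video = f"videos/chunk-{chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"
    return data, video

print(episode_paths(7))
# ('data/chunk-000/episode_000007.parquet',
#  'videos/chunk-000/observation.images.cam_high/episode_000007.mp4')
```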
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 
| 
	gcp-acp/flipkart-dataprep | 
	gcp-acp | 
	2024-11-21T00:37:24Z | 31 | 0 | 
	[
  "license:cc-by-sa-4.0",
  "region:us"
] | 
	[] | 
	2024-11-21T00:37:22Z | 0 | 
	---
license: cc-by-sa-4.0
---
- Generated prompt data, [Built with Llama 3.1](https://www.llama.com/llama3_1/license/)
- [Data Preparation](https://github.com/GoogleCloudPlatform/accelerated-platforms/tree/main/docs/use-cases/model-fine-tuning-pipeline#data-preparation)
- [Raw Data](https://www.kaggle.com/datasets/PromptCloudHQ/flipkart-products/data)
 | 
| 
	dgambettaphd/D_llm2_gen10_run0_X_doc1000_synt64_tot128_SYNLAST | 
	dgambettaphd | 
	2025-04-28T10:24:41Z | 18 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-04-28T10:24:38Z | 0 | 
	---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: text
    dtype: string
  - name: dataset
    dtype: string
  - name: gen
    dtype: int64
  - name: synt
    dtype: int64
  - name: TPP
    dtype: float64
  - name: MPP
    dtype: float64
  - name: FTP
    dtype: float64
  splits:
  - name: train
    num_bytes: 7829037
    num_examples: 14000
  download_size: 4616053
  dataset_size: 7829037
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	luca0621/multi_turn_soda_seperate_rewards_trajectory_normalized | 
	luca0621 | 
	2025-06-10T11:10:00Z | 0 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-06-10T11:09:52Z | 0 | 
	---
dataset_info:
  features:
  - name: context
    dtype: string
  - name: query
    dtype: string
  - name: response
    dtype: string
  - name: pref
    dtype: int64
  - name: reward
    dtype: float64
  - name: original_reward
    dtype: float64
  - name: whole_dialogue
    dtype: string
  - name: turn_index
    dtype: int64
  - name: dialogue_length
    dtype: int64
  - name: trajectory_return
    dtype: float64
  - name: original_trajectory_return
    dtype: float64
  splits:
  - name: train
    num_bytes: 83628998
    num_examples: 67108
  - name: test
    num_bytes: 11316680
    num_examples: 16778
  download_size: 41498459
  dataset_size: 94945678
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
 | 
| 
	rlhn/default-100K | 
	rlhn | 
	2025-03-19T20:09:02Z | 18 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-03-19T20:08:17Z | 0 | 
	---
dataset_info:
  features:
  - name: query_id
    dtype: string
  - name: query
    dtype: string
  - name: positive_passages
    list:
    - name: docid
      dtype: string
    - name: text
      dtype: string
    - name: title
      dtype: string
  - name: negative_passages
    list:
    - name: docid
      dtype: string
    - name: text
      dtype: string
    - name: title
      dtype: string
  - name: subset
    dtype: string
  splits:
  - name: train
    num_bytes: 1778484241
    num_examples: 96167
  download_size: 1026965735
  dataset_size: 1778484241
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	ZixuanKe/economy_fineweb_phi3.5_unsup_chunk_2 | 
	ZixuanKe | 
	2024-10-24T00:17:33Z | 18 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-10-22T23:51:25Z | 0 | 
	---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: topic
    dtype: string
  - name: title
    dtype: string
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  - name: special_tokens_mask
    sequence: int8
  - name: chunk
    dtype: int64
  - name: fineweb_justification
    dtype: string
  - name: fineweb_score
    dtype: int64
  splits:
  - name: train
    num_bytes: 94124434
    num_examples: 1445
  download_size: 36108144
  dataset_size: 94124434
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	alea-institute/kl3m-filter-data-dotgov-www.aoc.gov | 
	alea-institute | 
	2025-02-03T22:18:44Z | 15 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-02-03T22:18:42Z | 0 | 
	---
dataset_info:
  features:
  - name: identifier
    dtype: string
  - name: dataset
    dtype: string
  - name: mime_type
    dtype: string
  - name: score
    dtype: float64
  - name: tokens
    sequence: int64
  splits:
  - name: train
    num_bytes: 21131384
    num_examples: 1936
  download_size: 5115965
  dataset_size: 21131384
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	Pamela153/swe-gym-test-getmoto | 
	Pamela153 | 
	2025-09-24T22:53:40Z | 49 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-09-24T22:30:12Z | 0 | 
	---
dataset_info:
  features:
  - name: instance_id
    dtype: string
  - name: hints_text
    dtype: string
  - name: patch
    dtype: string
  - name: test_patch
    dtype: string
  - name: created_at
    dtype: string
  - name: problem_statement
    dtype: string
  - name: repo
    dtype: string
  - name: base_commit
    dtype: string
  - name: version
    dtype: string
  - name: PASS_TO_PASS
    sequence: string
  - name: FAIL_TO_PASS
    sequence: string
  - name: eval_script
    dtype: string
  splits:
  - name: train
    num_bytes: 805074
    num_examples: 25
  download_size: 260205
  dataset_size: 805074
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	mengdili/Marco-7B-Pruned-K-16-k-8-epoch-1-log-noclipping-nonorm_math_test_8192_normal_K-1 | 
	mengdili | 
	2025-04-13T13:29:34Z | 18 | 0 | 
	[
  "size_categories:n<1K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-04-13T13:29:30Z | 0 | 
	---
dataset_info:
  features:
  - name: problem
    dtype: string
  - name: ground_truth_solution
    dtype: string
  - name: ground_truth_answer
    dtype: string
  - name: pre_generated_steps
    sequence: string
  - name: pre_generated_answer
    dtype: string
  - name: pre_generated_verifier_score
    dtype: float64
  - name: prompt
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: solution
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 1901530
    num_examples: 458
  download_size: 902208
  dataset_size: 1901530
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	jxie/droid | 
	jxie | 
	2024-11-15T23:14:19Z | 17 | 0 | 
	[
  "size_categories:100K<n<1M",
  "format:parquet",
  "library:datasets",
  "library:dask",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2024-11-15T19:23:57Z | 0 | 
	---
dataset_info:
  features:
  - name: video
    dtype: video
  splits:
  - name: train
    num_bytes: 16142033.0
    num_examples: 276699
  download_size: 416277616456
  dataset_size: 16142033.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	automated-analytics/gretel-pii-coarse-grained-chatml | 
	automated-analytics | 
	2025-06-09T03:08:26Z | 0 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-06-09T03:08:09Z | 0 | 
	---
dataset_info:
  features:
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: train
    num_bytes: 188028367
    num_examples: 50000
  - name: validation
    num_bytes: 18793571
    num_examples: 5000
  - name: test
    num_bytes: 18807450
    num_examples: 5000
  download_size: 14633055
  dataset_size: 225629388
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
 | 
| 
	zjhhhh/baseline_0 | 
	zjhhhh | 
	2025-09-28T02:59:01Z | 30 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:tabular",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-09-28T02:58:57Z | 0 | 
	---
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: requirements
    dtype: string
  - name: selection_response_1
    dtype: string
  - name: selection_response_2
    dtype: string
  - name: selection_response_3
    dtype: string
  - name: base_response_1
    dtype: string
  - name: base_response_2
    dtype: string
  - name: current_response_1
    dtype: string
  - name: current_response_2
    dtype: string
  - name: selection_1_mean
    dtype: float64
  - name: selection_1_majority
    dtype: float64
  - name: base_1_mean
    dtype: float64
  - name: base_1_majority
    dtype: float64
  - name: current_1_mean
    dtype: float64
  - name: current_1_majority
    dtype: float64
  - name: selection_2_mean
    dtype: float64
  - name: selection_2_majority
    dtype: float64
  - name: base_2_mean
    dtype: float64
  - name: base_2_majority
    dtype: float64
  - name: current_2_mean
    dtype: float64
  - name: current_2_majority
    dtype: float64
  - name: selection_3_mean
    dtype: float64
  - name: selection_3_majority
    dtype: float64
  splits:
  - name: train
    num_bytes: 46001070
    num_examples: 2000
  download_size: 23456998
  dataset_size: 46001070
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	CompassioninMachineLearning/2krows_20percentours_80percentalpaca | 
	CompassioninMachineLearning | 
	2025-05-14T18:55:25Z | 0 | 0 | 
	[
  "size_categories:1K<n<10K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-05-14T18:55:23Z | 0 | 
	---
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: output
    dtype: string
  splits:
  - name: train
    num_bytes: 1059189.4611528823
    num_examples: 1795
  - name: test
    num_bytes: 118015.5388471178
    num_examples: 200
  download_size: 706274
  dataset_size: 1177205.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
 | 
| 
	burman-ai/wikitext2-v1 | 
	burman-ai | 
	2025-04-21T07:43:20Z | 20 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-04-21T07:43:15Z | 0 | 
	---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 11061717
    num_examples: 36718
  download_size: 6342468
  dataset_size: 11061717
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 
| 
	Anonyme162325/ultrachat_200k-Mistral-7B-Instruct-v0.3-with-thanks | 
	Anonyme162325 | 
	2025-09-23T14:49:55Z | 54 | 0 | 
	[
  "size_categories:10K<n<100K",
  "format:parquet",
  "modality:text",
  "library:datasets",
  "library:pandas",
  "library:mlcroissant",
  "library:polars",
  "region:us"
] | 
	[] | 
	2025-09-23T14:49:51Z | 0 | 
	---
dataset_info:
  features:
  - name: conversation_with_thanks
    dtype: string
  - name: conversation_without_thanks
    dtype: string
  splits:
  - name: train
    num_bytes: 115156614
    num_examples: 10000
  download_size: 58010659
  dataset_size: 115156614
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
 | 