---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: The Pile GitHub
size_categories: []
source_datasets:
- original
tags: []
task_categories:
- text-generation
- fill-mask
- text-classification
task_ids: []
configs:
- config_name: all
  default: true
  data_files:
  - split: train
    path: data/train/*/*.parquet
  - split: test
    path: data/test/*/*.parquet
  - split: validation
    path: data/validation/*/*.parquet
- config_name: assembly
  data_files:
  - split: train
    path: data/train/Assembly/*.parquet
  - split: test
    path: data/test/Assembly/*.parquet
  - split: validation
    path: data/validation/Assembly/*.parquet
- config_name: batchfile
  data_files:
  - split: train
    path: data/train/Batchfile/*.parquet
  - split: test
    path: data/test/Batchfile/*.parquet
  - split: validation
    path: data/validation/Batchfile/*.parquet
- config_name: c#
  data_files:
  - split: train
    path: data/train/C#/*.parquet
  - split: test
    path: data/test/C#/*.parquet
  - split: validation
    path: data/validation/C#/*.parquet
- config_name: c++
  data_files:
  - split: train
    path: data/train/C++/*.parquet
  - split: test
    path: data/test/C++/*.parquet
  - split: validation
    path: data/validation/C++/*.parquet
- config_name: c
  data_files:
  - split: train
    path: data/train/C/*.parquet
  - split: test
    path: data/test/C/*.parquet
  - split: validation
    path: data/validation/C/*.parquet
- config_name: cmake
  data_files:
  - split: train
    path: data/train/CMake/*.parquet
  - split: test
    path: data/test/CMake/*.parquet
  - split: validation
    path: data/validation/CMake/*.parquet
- config_name: cobol
  data_files:
  - split: train
    path: data/train/COBOL/*.parquet
  - split: test
    path: data/test/COBOL/*.parquet
  - split: validation
    path: data/validation/COBOL/*.parquet
- config_name: css
  data_files:
  - split: train
    path: data/train/CSS/*.parquet
  - split: test
    path: data/test/CSS/*.parquet
  - split: validation
    path: data/validation/CSS/*.parquet
- config_name: csv
  data_files:
  - split: train
    path: data/train/CSV/*.parquet
  - split: test
    path: data/test/CSV/*.parquet
  - split: validation
    path: data/validation/CSV/*.parquet
- config_name: clojure
  data_files:
  - split: train
    path: data/train/Clojure/*.parquet
  - split: test
    path: data/test/Clojure/*.parquet
  - split: validation
    path: data/validation/Clojure/*.parquet
- config_name: coffescript
  data_files:
  - split: train
    path: data/train/CoffeScript/*.parquet
  - split: test
    path: data/test/CoffeScript/*.parquet
  - split: validation
    path: data/validation/CoffeScript/*.parquet
- config_name: dm
  data_files:
  - split: train
    path: data/train/DM/*.parquet
  - split: test
    path: data/test/DM/*.parquet
  - split: validation
    path: data/validation/DM/*.parquet
- config_name: dart
  data_files:
  - split: train
    path: data/train/Dart/*.parquet
  - split: test
    path: data/test/Dart/*.parquet
  - split: validation
    path: data/validation/Dart/*.parquet
- config_name: dockerfile
  data_files:
  - split: train
    path: data/train/Dockerfile/*.parquet
  - split: test
    path: data/test/Dockerfile/*.parquet
  - split: validation
    path: data/validation/Dockerfile/*.parquet
- config_name: elixir
  data_files:
  - split: train
    path: data/train/Elixir/*.parquet
  - split: test
    path: data/test/Elixir/*.parquet
  - split: validation
    path: data/validation/Elixir/*.parquet
- config_name: erlang
  data_files:
  - split: train
    path: data/train/Erlang/*.parquet
  - split: test
    path: data/test/Erlang/*.parquet
  - split: validation
    path: data/validation/Erlang/*.parquet
- config_name: fortran
  data_files:
  - split: train
    path: data/train/Fortran/*.parquet
  - split: test
    path: data/test/Fortran/*.parquet
  - split: validation
    path: data/validation/Fortran/*.parquet
- config_name: go
  data_files:
  - split: train
    path: data/train/Go/*.parquet
  - split: test
    path: data/test/Go/*.parquet
  - split: validation
    path: data/validation/Go/*.parquet
- config_name: groovy
  data_files:
  - split: train
    path: data/train/Groovy/*.parquet
  - split: test
    path: data/test/Groovy/*.parquet
  - split: validation
    path: data/validation/Groovy/*.parquet
- config_name: html
  data_files:
  - split: train
    path: data/train/HTML/*.parquet
  - split: test
    path: data/test/HTML/*.parquet
  - split: validation
    path: data/validation/HTML/*.parquet
- config_name: haskell
  data_files:
  - split: train
    path: data/train/Haskell/*.parquet
  - split: test
    path: data/test/Haskell/*.parquet
  - split: validation
    path: data/validation/Haskell/*.parquet
- config_name: ini
  data_files:
  - split: train
    path: data/train/INI/*.parquet
  - split: test
    path: data/test/INI/*.parquet
  - split: validation
    path: data/validation/INI/*.parquet
- config_name: json
  data_files:
  - split: train
    path: data/train/JSON/*.parquet
  - split: test
    path: data/test/JSON/*.parquet
  - split: validation
    path: data/validation/JSON/*.parquet
- config_name: java
  data_files:
  - split: train
    path: data/train/Java/*.parquet
  - split: test
    path: data/test/Java/*.parquet
  - split: validation
    path: data/validation/Java/*.parquet
- config_name: javascript
  data_files:
  - split: train
    path: data/train/JavaScript/*.parquet
  - split: test
    path: data/test/JavaScript/*.parquet
  - split: validation
    path: data/validation/JavaScript/*.parquet
- config_name: julia
  data_files:
  - split: train
    path: data/train/Julia/*.parquet
  - split: test
    path: data/test/Julia/*.parquet
  - split: validation
    path: data/validation/Julia/*.parquet
- config_name: kotlin
  data_files:
  - split: train
    path: data/train/Kotlin/*.parquet
  - split: test
    path: data/test/Kotlin/*.parquet
  - split: validation
    path: data/validation/Kotlin/*.parquet
- config_name: lisp
  data_files:
  - split: train
    path: data/train/Lisp/*.parquet
  - split: test
    path: data/test/Lisp/*.parquet
  - split: validation
    path: data/validation/Lisp/*.parquet
- config_name: lua
  data_files:
  - split: train
    path: data/train/Lua/*.parquet
  - split: test
    path: data/test/Lua/*.parquet
  - split: validation
    path: data/validation/Lua/*.parquet
- config_name: makefile
  data_files:
  - split: train
    path: data/train/Makefile/*.parquet
  - split: test
    path: data/test/Makefile/*.parquet
  - split: validation
    path: data/validation/Makefile/*.parquet
- config_name: markdown
  data_files:
  - split: train
    path: data/train/Markdown/*.parquet
  - split: test
    path: data/test/Markdown/*.parquet
  - split: validation
    path: data/validation/Markdown/*.parquet
- config_name: matlab
  data_files:
  - split: train
    path: data/train/Matlab/*.parquet
  - split: test
    path: data/test/Matlab/*.parquet
  - split: validation
    path: data/validation/Matlab/*.parquet
- config_name: none
  data_files:
  - split: train
    path: data/train/None/*.parquet
  - split: test
    path: data/test/None/*.parquet
  - split: validation
    path: data/validation/None/*.parquet
- config_name: ocaml
  data_files:
  - split: train
    path: data/train/OCaml/*.parquet
  - split: test
    path: data/test/OCaml/*.parquet
  - split: validation
    path: data/validation/OCaml/*.parquet
- config_name: objective-c
  data_files:
  - split: train
    path: data/train/Objective-C/*.parquet
  - split: test
    path: data/test/Objective-C/*.parquet
  - split: validation
    path: data/validation/Objective-C/*.parquet
- config_name: php
  data_files:
  - split: train
    path: data/train/PHP/*.parquet
  - split: test
    path: data/test/PHP/*.parquet
  - split: validation
    path: data/validation/PHP/*.parquet
- config_name: pascal
  data_files:
  - split: train
    path: data/train/Pascal/*.parquet
  - split: test
    path: data/test/Pascal/*.parquet
  - split: validation
    path: data/validation/Pascal/*.parquet
- config_name: perl
  data_files:
  - split: train
    path: data/train/Perl/*.parquet
  - split: test
    path: data/test/Perl/*.parquet
  - split: validation
    path: data/validation/Perl/*.parquet
- config_name: powershell
  data_files:
  - split: train
    path: data/train/PowerShell/*.parquet
  - split: test
    path: data/test/PowerShell/*.parquet
  - split: validation
    path: data/validation/PowerShell/*.parquet
- config_name: prolog
  data_files:
  - split: train
    path: data/train/Prolog/*.parquet
  - split: test
    path: data/test/Prolog/*.parquet
  - split: validation
    path: data/validation/Prolog/*.parquet
- config_name: python
  data_files:
  - split: train
    path: data/train/Python/*.parquet
  - split: test
    path: data/test/Python/*.parquet
  - split: validation
    path: data/validation/Python/*.parquet
- config_name: r
  data_files:
  - split: train
    path: data/train/R/*.parquet
  - split: test
    path: data/test/R/*.parquet
  - split: validation
    path: data/validation/R/*.parquet
- config_name: ruby
  data_files:
  - split: train
    path: data/train/Ruby/*.parquet
  - split: test
    path: data/test/Ruby/*.parquet
  - split: validation
    path: data/validation/Ruby/*.parquet
- config_name: rust
  data_files:
  - split: train
    path: data/train/Rust/*.parquet
  - split: test
    path: data/test/Rust/*.parquet
  - split: validation
    path: data/validation/Rust/*.parquet
- config_name: sql
  data_files:
  - split: train
    path: data/train/SQL/*.parquet
  - split: test
    path: data/test/SQL/*.parquet
  - split: validation
    path: data/validation/SQL/*.parquet
- config_name: scala
  data_files:
  - split: train
    path: data/train/Scala/*.parquet
  - split: test
    path: data/test/Scala/*.parquet
  - split: validation
    path: data/validation/Scala/*.parquet
- config_name: shell
  data_files:
  - split: train
    path: data/train/Shell/*.parquet
  - split: test
    path: data/test/Shell/*.parquet
  - split: validation
    path: data/validation/Shell/*.parquet
- config_name: swift
  data_files:
  - split: train
    path: data/train/Swift/*.parquet
  - split: test
    path: data/test/Swift/*.parquet
  - split: validation
    path: data/validation/Swift/*.parquet
- config_name: toml
  data_files:
  - split: train
    path: data/train/TOML/*.parquet
  - split: test
    path: data/test/TOML/*.parquet
  - split: validation
    path: data/validation/TOML/*.parquet
- config_name: tex
  data_files:
  - split: train
    path: data/train/Tex/*.parquet
  - split: test
    path: data/test/Tex/*.parquet
  - split: validation
    path: data/validation/Tex/*.parquet
- config_name: typescript
  data_files:
  - split: train
    path: data/train/TypeScript/*.parquet
  - split: test
    path: data/test/TypeScript/*.parquet
  - split: validation
    path: data/validation/TypeScript/*.parquet
- config_name: verilog
  data_files:
  - split: train
    path: data/train/Verilog/*.parquet
  - split: test
    path: data/test/Verilog/*.parquet
  - split: validation
    path: data/validation/Verilog/*.parquet
- config_name: visual_basic
  data_files:
  - split: train
    path: data/train/Visual Basic/*.parquet
  - split: test
    path: data/test/Visual Basic/*.parquet
  - split: validation
    path: data/validation/Visual Basic/*.parquet
- config_name: xml
  data_files:
  - split: train
    path: data/train/XML/*.parquet
  - split: test
    path: data/test/XML/*.parquet
  - split: validation
    path: data/validation/XML/*.parquet
- config_name: yaml
  data_files:
  - split: train
    path: data/train/YAML/*.parquet
  - split: test
    path: data/test/YAML/*.parquet
  - split: validation
    path: data/validation/YAML/*.parquet
---

# Dataset Card for The Pile GitHub

## Table of Contents
- [Dataset Card for The Pile GitHub](#dataset-card-for-the-pile-github)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
  - [Additional Information](#additional-information)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [EleutherAI](https://pile.eleuther.ai)
- **Repository:** [GitHub](https://github.com/andstor/the-pile-github)
- **Paper:** [arXiv](https://arxiv.org/abs/2101.00027)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

This is the GitHub subset of the EleutherAI/The Pile dataset, containing source files from GitHub repositories.
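Each language has its own config, with Parquet files laid out per split as declared in the YAML header. As a minimal sketch, the mapping from a language directory to its split globs can be reconstructed like this (the Hugging Face Hub id in the comment is assumed from the repository link and not verified here):

```python
def data_files(language_dir: str) -> dict:
    """Map each split to its Parquet glob for a given language directory,
    mirroring the layout declared in the YAML header."""
    return {
        split: f"data/{split}/{language_dir}/*.parquet"
        for split in ("train", "test", "validation")
    }

print(data_files("Python")["train"])  # data/train/Python/*.parquet

# Loading a single language config with the `datasets` library would then
# look roughly like (hub id assumed, downloads data):
# from datasets import load_dataset
# ds = load_dataset("andstor/the_pile_github", "python", split="train")
```

Loading the `all` config instead pulls every language directory via the `data/<split>/*/*.parquet` globs.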
The programming languages are identified using the [guesslang library](https://github.com/yoeo/guesslang). A total of 54 programming languages are included in the dataset.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The following languages are covered by the dataset:

```
'Assembly', 'Batchfile', 'C', 'C#', 'C++', 'CMake', 'COBOL', 'CSS', 'CSV',
'Clojure', 'CoffeeScript', 'DM', 'Dart', 'Dockerfile', 'Elixir', 'Erlang',
'Fortran', 'Go', 'Groovy', 'HTML', 'Haskell', 'INI', 'JSON', 'Java',
'JavaScript', 'Julia', 'Kotlin', 'Lisp', 'Lua', 'Makefile', 'Markdown',
'Matlab', 'None', 'OCaml', 'Objective-C', 'PHP', 'Pascal', 'Perl',
'PowerShell', 'Prolog', 'Python', 'R', 'Ruby', 'Rust', 'SQL', 'Scala',
'Shell', 'Swift', 'TOML', 'TeX', 'TypeScript', 'Verilog', 'Visual Basic',
'XML', 'YAML'
```

Since the [guesslang library](https://github.com/yoeo/guesslang) used for language identification reports a guessing accuracy of just above 90%, some files will inevitably be misclassified.

## Dataset Structure

### Data Instances

[More Information Needed]

```
{
  'text': ...,
  'meta': {'language': ...}
}
```

### Data Fields

- `text` (`string`): the source code.
- `meta` (`dict`): metadata about the source code.
  - `language` (`string`): the programming language of the source code.

### Data Splits

[More Information Needed]

|                         | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| Input Sentences         |       |            |      |
| Average Sentence Length |       |            |      |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

The data is purely a subset of the [EleutherAI/The Pile dataset](https://huggingface.co/datasets/the_pile). See the original [datasheet](https://arxiv.org/abs/2201.07311) for more details.

## Additional Information

### Licensing Information

The Pile dataset was released on January 1st, 2021. It is licensed under the MIT License.
See the [datasheet](https://arxiv.org/abs/2201.07311) for more details.

### Citation Information

```
@article{pile,
  title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
  author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
  journal={arXiv preprint arXiv:2101.00027},
  year={2020}
}
```

### Contributions

Thanks to [@andstor](https://github.com/andstor) for adding this dataset.