Guide: How to share your data on the BoAmps repository

This guide explains, step by step, how to share BoAmps-format reports on this public Hugging Face repository.

Prerequisites

Before starting, make sure you have:

  • A Hugging Face account
  • The BoAmps report files (JSON) you want to upload

Method 1: Hugging Face Web Interface

  1. Log in to Hugging Face

  2. Go to the BoAmps dataset repository (boavizta/open_data_boamps)

  3. Navigate to the files: click on "Files and versions", then open the "data" folder

  4. Click on "Contribute", then "Upload files"

  5. Drop your BoAmps-format report files (please name them clearly) and give the pull request a name (e.g. "10 reports on image classification"). You can add an extended description, but this is optional. File names should follow this format (a naming sketch follows this list): "report_<publisher>_<taskStage>_<taskFamily>_<infraType>_<reportID>.json"

  • <publisher>: the name of the publisher (this can be useful for gathering reports written by the same person, but you can of course create a pseudonym if you wish to remain anonymous)
  • <taskStage>: mandatory field of the report
  • <taskFamily>: mandatory field of the report
  • <infraType>: mandatory field of the report
  • <reportID>: optional
  6. At the bottom of the page, click on "Open a Pull Request".
  7. You should see your PR created under "Community" > "Pull requests". Now just wait for our team to validate it. Thank you very much for your participation and your commitment to more frugal AI, in full transparency!
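
If you want to generate the recommended file name automatically, here is a minimal Python sketch that reads a report and assembles the name from its own fields. The field paths used (header.publisher.name, header.reportId, task.taskStage, task.taskFamily, infrastructure.infraType) and the function name boamps_filename are illustrative assumptions about the report layout, not a fixed API; adapt them to your actual reports.

import json
from pathlib import Path

def boamps_filename(report_path: str) -> str:
    # Build the suggested "report_<publisher>_<taskStage>_<taskFamily>_<infraType>_<reportID>.json"
    # name from fields inside a BoAmps report. The field paths below are assumptions
    # about the report layout; adjust them if your reports are structured differently.
    report = json.loads(Path(report_path).read_text(encoding="utf-8"))
    publisher = report["header"]["publisher"]["name"]
    task_stage = report["task"]["taskStage"]
    task_family = report["task"]["taskFamily"]
    infra_type = report["infrastructure"]["infraType"]
    report_id = report["header"].get("reportId", "0")  # <reportID> is optional
    return f"report_{publisher}_{task_stage}_{task_family}_{infra_type}_{report_id}.json"

# Hypothetical usage, with a local file name chosen for illustration:
# print(boamps_filename("my_report.json"))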

Method 2: Git (Command Line)

  1. Clone the repository
  2. Create a branch
  3. Add your files
  4. Create a PR
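
If you prefer to script the contribution rather than using git or the web interface, here is a minimal sketch using the huggingface_hub Python library, which can upload a file and open the pull request in one call. The local and remote file names below are hypothetical examples; authenticate first (for instance with huggingface-cli login or an HF_TOKEN environment variable).

from huggingface_hub import HfApi

api = HfApi()

# Upload one BoAmps report into the "data" folder of the dataset and open a
# pull request in the same call. The file names are hypothetical examples.
api.upload_file(
    path_or_fileobj="report_alice_training_imageClassification_privateCloud_001.json",
    path_in_repo="data/report_alice_training_imageClassification_privateCloud_001.json",
    repo_id="boavizta/open_data_boamps",
    repo_type="dataset",
    create_pr=True,
    commit_message="Add 1 report on image classification",
)

The resulting pull request then appears under "Community" > "Pull requests", exactly as with Method 1.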