The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    TypeError
Message:      Couldn't cast array of type
struct<generated_data: int64, data_fps: int64, video_fps: int64, commit_hash: string, total_frames: int64, mocap_raw_data_source: struct<capMachine: string, operator: string, object: string, gesture: string, sequence: string>, mano_hand_shape: fixed_size_list<element: float>[10], object_move_start_frame: int64, object_move_end_frame: int64, train_info: struct<reward_value: float, current_step: int64, trajectory_length: int64>>
to
{'generated_data': Value('int64'), 'data_fps': Value('int32'), 'video_fps': Value('int32'), 'commit_hash': Value('string'), 'mocap_raw_data_source': {'capMachine': Value('string'), 'operator': Value('string'), 'object': Value('string'), 'gesture': Value('string'), 'sequence': Value('string')}, 'total_frames': Value('int32'), 'alignment_rmse_mean': Value('float32'), 'mano_hand_shape': List(Value('float32'), length=10), 'camera1': List(Value('float32'), length=6), 'camera2': List(Value('float32'), length=6)}
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1404, in compute_config_parquet_and_info_response
                  fill_builder_info(builder, hf_endpoint=hf_endpoint, hf_token=hf_token, validate=validate)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 577, in fill_builder_info
                  ) = retry_validate_get_features_num_examples_size_and_compression_ratio(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 496, in retry_validate_get_features_num_examples_size_and_compression_ratio
                  validate(pf)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 534, in validate
                  raise TooBigRowGroupsError(
              worker.job_runners.config.parquet_and_info.TooBigRowGroupsError: Parquet file has too big row groups. First row group has 1863260755 which exceeds the limit of 300000000
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1815, in _prepare_split_single
                  for _, table in generator:
                                  ^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 691, in wrapped
                  for item in generator(*args, **kwargs):
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/packaged_modules/parquet/parquet.py", line 106, in _generate_tables
                  yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
                                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/packaged_modules/parquet/parquet.py", line 73, in _cast_table
                  pa_table = table_cast(pa_table, self.info.features.arrow_schema)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2224, in cast_table_to_schema
                  cast_array_to_feature(
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 1795, in wrapper
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2092, in cast_array_to_feature
                  raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}")
              TypeError: Couldn't cast array of type
              struct<generated_data: int64, data_fps: int64, video_fps: int64, commit_hash: string, total_frames: int64, mocap_raw_data_source: struct<capMachine: string, operator: string, object: string, gesture: string, sequence: string>, mano_hand_shape: fixed_size_list<element: float>[10], object_move_start_frame: int64, object_move_end_frame: int64, train_info: struct<reward_value: float, current_step: int64, trajectory_length: int64>>
              to
              {'generated_data': Value('int64'), 'data_fps': Value('int32'), 'video_fps': Value('int32'), 'commit_hash': Value('string'), 'mocap_raw_data_source': {'capMachine': Value('string'), 'operator': Value('string'), 'object': Value('string'), 'gesture': Value('string'), 'sequence': Value('string')}, 'total_frames': Value('int32'), 'alignment_rmse_mean': Value('float32'), 'mano_hand_shape': List(Value('float32'), length=10), 'camera1': List(Value('float32'), length=6), 'camera2': List(Value('float32'), length=6)}
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1427, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 993, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1702, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
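
The traceback actually records two independent problems with the uploaded Parquet files. The first, TooBigRowGroupsError, is a size issue: the first row group alone weighs in at 1,863,260,755 (presumably bytes, so roughly 1.9 GB), while the viewer streams row groups and rejects any above 300,000,000. The second, the TypeError that ultimately aborts generation, is a schema issue: the trajectory_meta_data struct is not identical across files. The file being cast carries object_move_start_frame, object_move_end_frame, and train_info with int64 counters, while the features inferred earlier expect alignment_rmse_mean, camera1, and camera2 with int32 counters, and datasets cannot cast one struct layout to the other. A minimal pyarrow sketch for confirming both problems locally follows; the file paths are placeholders, not taken from this dataset.

    # Sketch: check row-group sizes and per-file schemas with pyarrow.
    # The parquet paths are placeholders for the dataset's actual files.
    import pyarrow.parquet as pq

    LIMIT = 300_000_000  # the viewer's row-group limit, per the error above

    for path in ["train-00000.parquet", "train-00001.parquet"]:
        pf = pq.ParquetFile(path)
        meta = pf.metadata
        # TooBigRowGroupsError: flag any row group over the limit.
        for i in range(meta.num_row_groups):
            size = meta.row_group(i).total_byte_size
            flag = "OVER" if size > LIMIT else "ok"
            print(f"{path} row group {i}: {size:,} bytes [{flag}]")
        # TypeError: this struct type must be identical in every file.
        print(path, pf.schema_arrow.field("trajectory_meta_data").type)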


Preview columns: trajectory_meta_data (dict) and sequence_info (dict); each preview row's two values appear on consecutive lines below.
{"generated_data":1761026212,"data_fps":100,"video_fps":30,"commit_hash":"257ce3bf8dc6153d5e73cf4609(...TRUNCATED)
{"timestamp":[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,1(...TRUNCATED)
{"generated_data":1761026213,"data_fps":100,"video_fps":30,"commit_hash":"257ce3bf8dc6153d5e73cf4609(...TRUNCATED)
{"timestamp":[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,1(...TRUNCATED)
{"generated_data":1761026214,"data_fps":100,"video_fps":30,"commit_hash":"257ce3bf8dc6153d5e73cf4609(...TRUNCATED)
{"timestamp":[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,1(...TRUNCATED)
{"generated_data":1761026214,"data_fps":100,"video_fps":30,"commit_hash":"257ce3bf8dc6153d5e73cf4609(...TRUNCATED)
{"timestamp":[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,1(...TRUNCATED)
{"generated_data":1761026222,"data_fps":100,"video_fps":30,"commit_hash":"257ce3bf8dc6153d5e73cf4609(...TRUNCATED)
{"timestamp":[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,1(...TRUNCATED)
{"generated_data":1761026224,"data_fps":100,"video_fps":30,"commit_hash":"257ce3bf8dc6153d5e73cf4609(...TRUNCATED)
{"timestamp":[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,1(...TRUNCATED)
{"generated_data":1761026225,"data_fps":100,"video_fps":30,"commit_hash":"257ce3bf8dc6153d5e73cf4609(...TRUNCATED)
{"timestamp":[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,1(...TRUNCATED)
{"generated_data":1761026227,"data_fps":100,"video_fps":30,"commit_hash":"257ce3bf8dc6153d5e73cf4609(...TRUNCATED)
{"timestamp":[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,1(...TRUNCATED)
{"generated_data":1761026232,"data_fps":100,"video_fps":30,"commit_hash":"257ce3bf8dc6153d5e73cf4609(...TRUNCATED)
{"timestamp":[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,1(...TRUNCATED)
{"generated_data":1761026234,"data_fps":100,"video_fps":30,"commit_hash":"257ce3bf8dc6153d5e73cf4609(...TRUNCATED)
{"timestamp":[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,1(...TRUNCATED)
End of preview.
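
Both problems have to be fixed in the files themselves, since the viewer only reads what was uploaded. The sketch below, offered as one plausible repair rather than a verified recipe, rewrites a file with bounded row groups; it assumes each file fits in memory and that rows are roughly uniform in size. The schema mismatch is easier to fix where the files are generated, by emitting the same trajectory_meta_data fields in every sequence.

    # Sketch: rewrite one parquet file with row groups capped well under the
    # viewer's 300 MB limit. Paths and the headroom target are illustrative.
    import pyarrow.parquet as pq

    src, dst = "train-00000.parquet", "train-00000.fixed.parquet"
    table = pq.read_table(src)

    # Assumption: rows are roughly uniform, so bytes-per-row predicts group size.
    bytes_per_row = max(1, table.nbytes // max(1, table.num_rows))
    rows_per_group = max(1, 200_000_000 // bytes_per_row)  # aim for ~200 MB groups

    pq.write_table(table, dst, row_group_size=rows_per_group)

    # This addresses TooBigRowGroupsError only; the TypeError still requires a
    # single trajectory_meta_data struct layout shared by every file.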