test3434234

This is a merged speech dataset containing 848 audio segments from 2 source datasets.

Dataset Information

  • Total Segments: 848
  • Speakers: 2
  • Languages: tr
  • Emotions: angry, happy, neutral
  • Original Datasets: 2

Dataset Structure

Each example contains:

  • audio: Audio file (WAV format, original sampling rate preserved)
  • text: Transcription of the audio
  • speaker_id: Speaker identifier, remapped to be unique across all merged datasets
  • emotion: Detected emotion label (angry, happy, or neutral in this dataset)
  • language: Language code (tr for this dataset)

Usage

Loading the Dataset

from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Codyfederer/test3434234")

# Access the training split
train_data = dataset["train"]

# Example: Get first sample
sample = train_data[0]
print(f"Text: {sample['text']}")
print(f"Speaker: {sample['speaker_id']}")
print(f"Language: {sample['language']}")
print(f"Emotion: {sample['emotion']}")

# Play audio (requires audio libraries)
# sample['audio']['array'] contains the audio data
# sample['audio']['sampling_rate'] contains the sampling rate
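
If you want to listen to a segment outside of a notebook, one option is to write the decoded array back to a WAV file. This is a minimal sketch, assuming the soundfile package is installed (it is not a dependency of this dataset):

import soundfile as sf

# Write the first sample's decoded audio to a local WAV file
audio = train_data[0]["audio"]  # dict with "array" and "sampling_rate"
sf.write("sample_0.wav", audio["array"], audio["sampling_rate"])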

Alternative: Load from JSONL

from datasets import Dataset, Audio, Features, Value
import json

# Load the JSONL file
rows = []
with open("data.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

features = Features({
    "audio": Audio(sampling_rate=None),
    "text": Value("string"),
    "speaker_id": Value("string"),
    "emotion": Value("string"),
    "language": Value("string")
})

dataset = Dataset.from_list(rows, features=features)
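
Note that the audio column in data.jsonl holds relative paths, so run this from the repository root so the files can be found; the Audio feature then decodes each path when a row is accessed. A quick check:

# Audio is decoded lazily when a row is accessed
first = dataset[0]
print(first["audio"]["sampling_rate"], first["audio"]["array"].shape)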

Repository Contents

The dataset includes:

  • data.jsonl - Main dataset file with all columns (JSON Lines)
  • *.wav - Audio files under audio_XXX/ subdirectories
  • load_dataset.txt - Python script for loading the dataset (rename to .py to use)

JSONL keys:

  • audio: Relative audio path (e.g., audio_000/segment_000000_speaker_0.wav)
  • text: Transcription of the audio
  • speaker_id: Unique speaker identifier
  • emotion: Detected emotion
  • language: Language code
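
For illustration, a single line of data.jsonl has roughly this shape (the text value below is a placeholder; the other values follow the patterns documented above):

{"audio": "audio_000/segment_000000_speaker_0.wav", "text": "...", "speaker_id": "speaker_0", "emotion": "neutral", "language": "tr"}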

Speaker ID Mapping

Speaker IDs have been made unique across all merged datasets to avoid conflicts. For example:

  • Original Dataset A: speaker_0, speaker_1
  • Original Dataset B: speaker_0, speaker_1
  • Merged Dataset: speaker_0, speaker_1, speaker_2, speaker_3

Original dataset information is preserved in the metadata for reference.
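
The exact remapping is handled by the builder, but conceptually it is a per-source renumbering. A minimal sketch of how such a remapping could be implemented (illustrative only, not the actual Vyvo Dataset Builder code):

def remap_speaker_ids(sources):
    """Give each source dataset's speakers new, globally unique IDs."""
    merged = []
    next_id = 0
    for rows in sources:  # one list of row dicts per source dataset
        local_to_global = {}
        for row in rows:
            original = row["speaker_id"]
            if original not in local_to_global:
                local_to_global[original] = f"speaker_{next_id}"
                next_id += 1
            merged.append({**row, "speaker_id": local_to_global[original]})
    return merged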

Data Quality

This dataset was created using the Vyvo Dataset Builder with:

  • Automatic transcription and diarization
  • Quality filtering for audio segments
  • Music and noise filtering
  • Emotion detection
  • Language identification
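
Because every row carries these emotion and language labels, you can slice the merged data yourself once it is loaded as in the Usage section above; a minimal sketch using the standard datasets filter API:

# Keep only the segments labeled "happy" (this dataset's emotions: angry, happy, neutral)
happy_subset = train_data.filter(lambda ex: ex["emotion"] == "happy")
print(f"{len(happy_subset)} happy segments out of {len(train_data)}")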

License

This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

Citation

@dataset{vyvo_merged_dataset,
  title={test3434234},
  author={Vyvo Dataset Builder},
  year={2025},
  url={https://huggingface.co/datasets/Codyfederer/test3434234}
}

This dataset was created using the Vyvo Dataset Builder tool.
