ivrit-ai/yi-whisper-large-v3
Automatic Speech Recognition
See more details on the source dataset card.
This is a derived dataset, structured for Whisper training:
- Excludes low-quality segments (judged by the probabilities from the text-audio auto-alignment process)
- Encodes timestamps along segments of text, plus the previous segment's text
- Audio encoded at a 16 kHz sample rate, mono
- Total audio duration: ~19h
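The timestamp encoding above can be sketched as follows. This is a minimal illustration, not the dataset's actual build tooling; `encode_segments` is a hypothetical helper. Whisper timestamp tokens quantize time to 0.02 s steps:

```python
def format_ts(seconds: float) -> str:
    # Whisper timestamp tokens are quantized to 0.02 s steps
    return f"<|{round(seconds / 0.02) * 0.02:.2f}|>"

def encode_segments(segments):
    """Encode (start, end, text) segments into a Whisper-style
    timestamped transcript string (hypothetical helper)."""
    return "".join(
        f"{format_ts(start)}{text}{format_ts(end)}"
        for start, end, text in segments
    )

print(encode_segments([(0.0, 2.4, "shalom"), (2.4, 5.1, "olam")]))
# <|0.00|>shalom<|2.40|><|2.40|>olam<|5.10|>
```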
License: other
Each example in the dataset contains:
- audio: An audio column containing:
  - bytes: The audio data encoded in MP3 format
  - path: A string identifier derived from the source entry ID
- transcript: A string containing the text, potentially with Whisper-style timestamp tokens (e.g., <|0.00|>text<|2.40|>) if has_timestamps is true
- metadata: A dictionary containing:
  - seek: Float indicating the start time of this slice in the original source audio
  - source: String identifier for the source of the audio (name of podcast, production system, etc.)
  - entry_id: Unique identifier for the source entry
  - has_prev: Boolean indicating whether this slice has a transcript from the previous slice within the audio source
  - has_timestamps: Boolean indicating whether the transcript contains timestamp tokens
  - prev_transcript: String containing the transcript of the previous slice (empty if has_prev is false)
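When has_timestamps is true, a transcript string can be split back into timed segments. The sketch below is a minimal example using only the token format shown above; `parse_transcript` is a hypothetical helper, not part of the dataset tooling:

```python
import re

# Matches one <|start|>text<|end|> span; timestamps have two decimals
PAIR = re.compile(r"<\|(\d+\.\d{2})\|>(.*?)<\|(\d+\.\d{2})\|>")

def parse_transcript(transcript: str):
    """Split a Whisper-style timestamped transcript into
    (start, end, text) triples (hypothetical helper)."""
    return [
        (float(start), float(end), text)
        for start, text, end in PAIR.findall(transcript)
    ]

print(parse_transcript("<|0.00|>shalom<|2.40|><|2.40|>olam<|5.10|>"))
# [(0.0, 2.4, 'shalom'), (2.4, 5.1, 'olam')]
```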