Dataset Viewer
Auto-converted to Parquet
Column                  Type          Range
discussion_title        string        lengths 15 – 149
discussion_url          string        lengths 55 – 178
discussion_topic_id     int64         11.3k – 169k
discussion_category     int64         2 – 69
discussion_created_at   date string   2021-11-01 15:54:32 – 2025-10-25 07:31:09
thread                  list          lengths 3 – 20
question                string        lengths 77 – 20.5k
solution                string        lengths 24 – 23.2k
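The rows below can also be pulled programmatically once the dataset is on the Hub. A minimal sketch with the `datasets` library follows; the repository id is a placeholder, since the actual dataset name is not shown in this view:

```python
# Minimal sketch: load the Parquet-converted dataset and inspect one row.
# "username/forum-discussions" is a placeholder repo id, not the real dataset name.
from datasets import load_dataset

ds = load_dataset("username/forum-discussions", split="train")

row = ds[0]
print(row["discussion_title"], row["discussion_url"])
print("posts in thread:", len(row["thread"]))   # `thread` is a list of post dicts
print(row["question"][:200])                    # extracted question text (HTML)
print(row["solution"][:200])                    # accepted/solution post text (HTML)
```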
Problem with pyannote/speaker-diarization-3.1
https://discuss.huggingface.co/t/problem-with-pyannote-speaker-diarization-3-1/169415
169,415
5
2025-10-25T07:31:09.724000Z
[ { "id": 244110, "name": "MAJH", "username": "aldkela", "avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/4bbf92/{size}.png", "created_at": "2025-10-25T07:31:09.796Z", "cooked": "<p>Hello, I am trying to make some code with pyannote/speaker-diarization-3.1 but I got some error that I cannot handle now….</p>\n<p>This is the code I made below, I only used function “speaker_diarization” this time..</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">import pandas as pd\nfrom transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline\n\nfrom pyannote.audio import Pipeline\n\n\n\nfrom pathlib import Path\nimport os, sys\n\nffmpeg_dll_dir = Path(r\"C:\\Users\\majh0\\miniconda3\\Library\\bin\") \nassert ffmpeg_dll_dir.exists(), ffmpeg_dll_dir\nos.add_dll_directory(str(ffmpeg_dll_dir)) \n\n\nimport torch, torchcodec, platform, subprocess\nprint(\"exe:\", sys.executable)\nprint(\"torch\", torch.__version__, \"torchcodec\", torchcodec.__version__, \"py\", platform.python_version())\nsubprocess.run([\"ffmpeg\", \"-version\"], check=True)\nprint(\"cuda torch?\",torch.cuda.is_available())\n\n\n\n\ndef whisper_stt(\n audio_file_path: str,\n output_file_path: str = \"./output.csv\",\n):\n device = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32\n model_id = \"openai/whisper-large-v3-turbo\"\n\n model = AutoModelForSpeechSeq2Seq.from_pretrained(\n model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True\n )\n model.to(device)\n\n processor = AutoProcessor.from_pretrained(model_id)\n\n pipe = pipeline(\n \"automatic-speech-recognition\",\n model=model,\n tokenizer=processor.tokenizer,\n feature_extractor=processor.feature_extractor,\n torch_dtype=torch_dtype,\n device=device,\n return_timestamps=True, \n chunk_length_s=10, \n stride_length_s=2, \n )\n\n result = pipe(audio_file_path)\n df = whisper_to_dataframe(result, output_file_path)\n\n return result, df\n\n\n\ndef whisper_to_dataframe(result, output_file_path):\n start_end_text = []\n\n for chunk in result[\"chunks\"]:\n start = chunk[\"timestamp\"][0]\n end = chunk[\"timestamp\"][1]\n text = chunk[\"text\"]\n start_end_text.append([start, end, text])\n df = pd.DataFrame(start_end_text, columns=[\"start\", \"end\", \"text\"])\n df.to_csv(output_file_path, index=False, sep=\"|\")\n \n return df\n\n\ndef speaker_diarization(\n audio_file_path: str,\n output_rttm_file_path: str,\n output_csv_file_path: str,\n):\n pipeline = Pipeline.from_pretrained(\n \"pyannote/speaker-diarization-3.1\",\n token=\"\")\n\n if torch.cuda.is_available():\n pipeline.to(torch.device(\"cuda\"))\n print(\"Using CUDA\")\n else:\n print(\"Using CPU\")\n \n print(\"torch version:\", torch.__version__)\n print(\"compiled with cuda:\", torch.version.cuda)\n print(\"cuda available:\", torch.cuda.is_available())\n\n out = pipeline(audio_file_path)\n ann = out.speaker_diarization\n\n # dump the diarization output to disk using RTTM format\n with open(output_rttm_file_path, \"w\", encoding=\"utf-8\") as rttm:\n ann.write_rttm(rttm)\n\n df_rttm = pd.read_csv(\n output_rttm_file_path,\n sep=' ',\n header=None,\n names=['type', 'file', 'chnl', 'start', 'duration', 'C1', 'C2', 'speaker_id', 'C3', 'C4']\n)\n \n\n df_rttm['end'] = df_rttm['start'] + df_rttm['duration']\n\n\n df_rttm[\"number\"] = None\n df_rttm.at[0, \"number\"] = 0\n\n\n for i in range(1, len(df_rttm)):\n if df_rttm.at[i, \"speaker_id\"] != df_rttm.at[i-1, \"speaker_id\"]:\n 
df_rttm.at[i, \"number\"] = df_rttm.at[i-1, \"number\"] + 1\n else:\n df_rttm.at[i, \"number\"] = df_rttm.at[i-1, \"number\"]\n\n\n\n df_rttm_grouped = df_rttm.groupby(\"number\").agg(\n start=pd.NamedAgg(column=\"start\", aggfunc=\"min\"),\n end=pd.NamedAgg(column=\"end\", aggfunc=\"max\"),\n speaker_id=pd.NamedAgg(column=\"speaker_id\", aggfunc=\"first\")\n )\n\n df_rttm_grouped['duration'] = df_rttm_grouped['end'] - df_rttm_grouped['start']\n df_rttm_grouped = df_rttm_grouped.reset_index(drop=True)\n\n\n df_rttm_grouped.to_csv(output_csv_file_path, sep=',', index=False, encoding='utf-8')\n\n return df_rttm_grouped\n\n\n\n\n\nif __name__ == \"__main__\":\n # result, df = whisper_stt(\n # \"./chap05/guitar.wav\",\n # \"./chap05/guitar.csv\",\n # )\n\n # print(df)\n\n\n audio_file_path = \"./chap05/guitar.wav\"\n stt_output_file_path = \"./chap05/guitar.csv\"\n rttm_file_path = \"./chap05/guitar.rttm\"\n rttm_csv_file_path = \"./chap05/guitar_rttm.csv\"\n\n df_rttm = speaker_diarization(\n audio_file_path,\n rttm_file_path,\n rttm_csv_file_path\n )\n\n print(df_rttm)\n</code></pre>\n<p>After running this code, it gives me error like below..</p>\n<pre><code class=\"lang-auto\">(venv) PS C:\\GPT_AGENT_2025_BOOK&gt; &amp; C:/GPT_AGENT_2025_BOOK/venv/Scripts/python.exe c:/GPT_AGENT_2025_BOOK/chap05/whisper_stt.py\nC:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\pyannote\\audio\\core\\io.py:47: UserWarning: \ntorchcodec is not installed correctly so built-in audio decoding will fail. Solutions are:\n* use audio preloaded in-memory as a {'waveform': (channel, time) torch.Tensor, 'sample_rate': int} dictionary;\n* fix torchcodec installation. Error message was:\n\nCould not load libtorchcodec. Likely causes:\n 1. FFmpeg is not properly installed in your environment. We support\n versions 4, 5, 6 and 7.\n 2. The PyTorch version (2.9.0+cu126) is not compatible with\n this version of TorchCodec. Refer to the version compatibility\n table:\n https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.\n 3. 
Another runtime dependency; see exceptions below.\n The following exceptions were raised as we tried to load libtorchcodec:\n\n[start of libtorchcodec loading traceback]\nFFmpeg version 8: Could not load this library: C:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\torchcodec\\libtorchcodec_core8.dll\nFFmpeg version 7: Could not load this library: C:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\torchcodec\\libtorchcodec_core7.dll\nFFmpeg version 6: Could not load this library: C:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\torchcodec\\libtorchcodec_core6.dll\nFFmpeg version 5: Could not load this library: C:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\torchcodec\\libtorchcodec_core5.dll\nFFmpeg version 4: Could not load this library: C:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\torchcodec\\libtorchcodec_core4.dll\n[end of libtorchcodec loading traceback].\n warnings.warn(\nexe: C:\\GPT_AGENT_2025_BOOK\\venv\\Scripts\\python.exe\ntorch 2.9.0+cu126 torchcodec 0.8.0 py 3.12.9\nffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers\nbuilt with gcc 10.2.1 (GCC) 20200726\nconfiguration: --disable-static --enable-shared --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf\nlibavutil 56. 51.100 / 56. 51.100\nlibavcodec 58. 91.100 / 58. 91.100\nlibavformat 58. 45.100 / 58. 45.100\nlibavdevice 58. 10.100 / 58. 10.100\nlibavfilter 7. 85.100 / 7. 85.100\nlibswscale 5. 7.100 / 5. 7.100\nlibswresample 3. 7.100 / 3. 7.100\nlibpostproc 55. 7.100 / 55. 7.100\ncuda torch? True\nUsing CUDA\ntorch version: 2.9.0+cu126\ncompiled with cuda: 12.6\ncuda available: True\nC:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\torch\\backends\\cuda\\__init__.py:131: UserWarning: Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = 'tf32' \nor torch.backends.cuda.matmul.fp32_precision = 'ieee'. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, torch.backends.cudnn.allow_tf32 = True, allowTF32CuDNN() and allowTF32CuBLAS() will be deprecated after Pytorch 2.9. 
Please see https://pytorch.org/docs/main/notes/cuda.html#tensorfloat-32-tf32-on-ampere-and-later-devices (Triggered internally at C:\\actions-runner\\_work\\pytorch\\pytorch\\pytorch\\aten\\src\\ATen\\Context.cpp:85.)\n return torch._C._get_cublas_allow_tf32()\nC:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\pyannote\\audio\\utils\\reproducibility.py:74: ReproducibilityWarning: TensorFloat-32 (TF32) has been disabled as it might lead to reproducibility issues and lower accuracy.\nIt can be re-enabled by calling\n &gt;&gt;&gt; import torch\n &gt;&gt;&gt; torch.backends.cuda.matmul.allow_tf32 = True\n &gt;&gt;&gt; torch.backends.cudnn.allow_tf32 = True\nSee https://github.com/pyannote/pyannote-audio/issues/1370 for more details.\n\n warnings.warn(\nTraceback (most recent call last):\n File \"c:\\GPT_AGENT_2025_BOOK\\chap05\\whisper_stt.py\", line 156, in &lt;module&gt;\n df_rttm = speaker_diarization(\n ^^^^^^^^^^^^^^^^^^^^\n File \"c:\\GPT_AGENT_2025_BOOK\\chap05\\whisper_stt.py\", line 94, in speaker_diarization\n out = pipeline(audio_file_path)\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\pyannote\\audio\\core\\pipeline.py\", line 440, in __call__\n track_pipeline_apply(self, file, **kwargs)\n File \"C:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\pyannote\\audio\\telemetry\\metrics.py\", line 152, in track_pipeline_apply\n duration: float = Audio().get_duration(file)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\pyannote\\audio\\core\\io.py\", line 273, in get_duration\n metadata: AudioStreamMetadata = get_audio_metadata(file)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\pyannote\\audio\\core\\io.py\", line 86, in get_audio_metadata\n metadata = AudioDecoder(file[\"audio\"]).metadata\n ^^^^^^^^^^^^\nNameError: name 'AudioDecoder' is not defined\n</code></pre>\n<p>It says torchcodec is not installed so auodio decoding will fail.. 
but strange thing is that it tells me the version of torch codec as below….</p>\n<pre><code class=\"lang-auto\">C:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\pyannote\\audio\\core\\io.py:47: UserWarning: \ntorchcodec is not installed correctly so built-in audio decoding will fail.\n\n\n(...)\n\n[end of libtorchcodec loading traceback].\n warnings.warn(\nexe: C:\\GPT_AGENT_2025_BOOK\\venv\\Scripts\\python.exe\ntorch 2.9.0+cu126 torchcodec 0.8.0 py 3.12.9\nffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers\nbuilt with gcc 10.2.1 (GCC) 20200726\n</code></pre>\n<p>and more strange thing is that this code actually worked pretty well without any problem in Jupyternote book… and last picture is the result..</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/1/6/16e615d060caba5985d089d7d1fae229383905ee.png\" data-download-href=\"/uploads/short-url/3gzsuRerXGquP8haz4cPzLTewJE.png?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/1/6/16e615d060caba5985d089d7d1fae229383905ee.png\" alt=\"image\" data-base62-sha1=\"3gzsuRerXGquP8haz4cPzLTewJE\" width=\"690\" height=\"264\" data-dominant-color=\"1E1F1F\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1026×394 21 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/9/a/9ad2487ccbcd0deffda12cf8393ee7b4f563d586.png\" data-download-href=\"/uploads/short-url/m5C3IKEV9BXzbF2iR89wAJ7difQ.png?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/9/a/9ad2487ccbcd0deffda12cf8393ee7b4f563d586.png\" alt=\"image\" data-base62-sha1=\"m5C3IKEV9BXzbF2iR89wAJ7difQ\" width=\"690\" height=\"374\" data-dominant-color=\"202122\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1070×581 29.3 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/c/8/c8b3f19a75ddacfd3fac5d3c8da4d6c941adbfc0.png\" data-download-href=\"/uploads/short-url/sDv1lTkSQy0ehRarqfUk6JLiXDy.png?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/c/8/c8b3f19a75ddacfd3fac5d3c8da4d6c941adbfc0.png\" alt=\"image\" data-base62-sha1=\"sDv1lTkSQy0ehRarqfUk6JLiXDy\" width=\"690\" height=\"499\" data-dominant-color=\"2F2F2F\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">724×524 12.5 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p>It is hard to understand for me because I didn’t change any environment setting… and I just almost copied 
and pasted the code from the Jupyternote book..</p>\n<p>Thank you so much for the help in advance…</p>", "post_number": 1, "post_type": 1, "posts_count": 8, "updated_at": "2025-10-25T07:56:14.768Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 48, "reads": 5, "readers_count": 4, "score": 246, "yours": false, "topic_id": 169415, "topic_slug": "problem-with-pyannote-speaker-diarization-3-1", "display_username": "MAJH", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 2, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 105819, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-speaker-diarization-3-1/169415/1", "reactions": [ { "id": "eyes", "type": "emoji", "count": 1 }, { "id": "heart", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": false, "title_is_group": null, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 244112, "name": "system", "username": "system", "avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png", "created_at": "2025-10-25T07:31:53.165Z", "cooked": "", "post_number": 2, "post_type": 3, "posts_count": 8, "updated_at": "2025-10-25T07:31:53.165Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 0, "reads": 5, "readers_count": 4, "score": 1, "yours": false, "topic_id": 169415, "topic_slug": "problem-with-pyannote-speaker-diarization-3-1", "display_username": "system", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [], "moderator": true, "admin": true, "staff": true, "user_id": -1, "hidden": false, "trust_level": 4, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-speaker-diarization-3-1/169415/2", "reactions": [], "current_user_reaction": null, "reaction_users_count": 0, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": null, "action_code": "visible.disabled", "via_email": null }, { "id": 244126, "name": "system", "username": "system", "avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png", "created_at": "2025-10-25T07:56:14.176Z", "cooked": "", "post_number": 3, "post_type": 3, "posts_count": 8, "updated_at": "2025-10-25T07:56:14.176Z", 
"reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 0, "reads": 4, "readers_count": 3, "score": 0.8, "yours": false, "topic_id": 169415, "topic_slug": "problem-with-pyannote-speaker-diarization-3-1", "display_username": "system", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [], "moderator": true, "admin": true, "staff": true, "user_id": -1, "hidden": false, "trust_level": 4, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-speaker-diarization-3-1/169415/3", "reactions": [], "current_user_reaction": null, "reaction_users_count": 0, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": null, "action_code": "visible.enabled", "via_email": null }, { "id": 244133, "name": "MAJH", "username": "aldkela", "avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/4bbf92/{size}.png", "created_at": "2025-10-25T08:44:46.837Z", "cooked": "<p>I am so sorry for this…</p>\n<p>I uploaded a few threads with the same topic….</p>\n<p>Please ignore this thread..</p>\n<p>I am really sorry for this inconvenience…</p>", "post_number": 4, "post_type": 1, "posts_count": 8, "updated_at": "2025-10-25T14:59:09.677Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 2, "reads": 3, "readers_count": 2, "score": 70.6, "yours": false, "topic_id": 169415, "topic_slug": "problem-with-pyannote-speaker-diarization-3-1", "display_username": "MAJH", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 105819, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-speaker-diarization-3-1/169415/4", "reactions": [ { "id": "+1", "type": "emoji", "count": 1 }, { "id": "heart", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 244136, "name": "John Smith", "username": "John6666", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png", "created_at": "2025-10-25T08:53:27.062Z", "cooked": "<p>Problems frequently occur in Windows environments.<br>\nSpecifically, issues related to DLLs can arise because Python 3.8 and later no longer reference the Windows <code>PATH</code> environment 
variable.</p>\n<p><a href=\"https://huggingface.co/datasets/John6666/forum2/blob/main/torchcodec_windows_error_1.md\">Several workarounds exist, such as explicitly specifying the path within the code, adjusting the DLL location, or using methods that don’t require DLLs</a>.</p>", "post_number": 5, "post_type": 1, "posts_count": 8, "updated_at": "2025-10-25T08:53:27.062Z", "reply_count": 1, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 3, "reads": 3, "readers_count": 2, "score": 35.6, "yours": false, "topic_id": 169415, "topic_slug": "problem-with-pyannote-speaker-diarization-3-1", "display_username": "John Smith", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": [ { "url": "https://huggingface.co/datasets/John6666/forum2/blob/main/torchcodec_windows_error_1.md", "internal": false, "reflection": false, "title": "torchcodec_windows_error_1.md · John6666/forum2 at main", "clicks": 5 } ], "read": true, "user_title": "Regular", "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 52272, "hidden": false, "trust_level": 3, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-speaker-diarization-3-1/169415/5", "reactions": [ { "id": "heart", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": true, "topic_accepted_answer": true, "can_vote": null, "title_is_group": false, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 244194, "name": "MAJH", "username": "aldkela", "avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/4bbf92/{size}.png", "created_at": "2025-10-26T03:54:02.655Z", "cooked": "<p>Hello!</p>\n<p>I just changed the code “out = pipeline(audio_file)” to the one you gave me</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">waveform, sr = torchaudio.load(audio_file_path)\n\nout = pipeline({\"waveform\": waveform, \"sample_rate\": sr})\n</code></pre>\n<p>It magically works!!</p>\n<p>By the way, How did you find the solution that fast? 
and even you made this document so fast!</p>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/datasets/John6666/forum2/blob/main/torchcodec_windows_error_1.md\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/datasets/John6666/forum2/blob/main/torchcodec_windows_error_1.md\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/c/7/c73620b9c0ca5fc732b60c6f27a1a431c5bfe565_2_690x372.png\" class=\"thumbnail\" alt=\"\" data-dominant-color=\"6853C0\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/datasets/John6666/forum2/blob/main/torchcodec_windows_error_1.md\" target=\"_blank\" rel=\"noopener\">torchcodec_windows_error_1.md · John6666/forum2 at main</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>Did you used the Chat GPT to find the solution?</p>\n<p>Anyways, Thank you so much for your help again and I think you are really good at programming!</p>", "post_number": 6, "post_type": 1, "posts_count": 8, "updated_at": "2025-10-26T03:54:02.655Z", "reply_count": 0, "reply_to_post_number": 5, "quote_count": 0, "incoming_link_count": 0, "reads": 2, "readers_count": 1, "score": 15.4, "yours": false, "topic_id": 169415, "topic_slug": "problem-with-pyannote-speaker-diarization-3-1", "display_username": "MAJH", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": [ { "url": "https://huggingface.co/datasets/John6666/forum2/blob/main/torchcodec_windows_error_1.md", "internal": false, "reflection": false, "title": "torchcodec_windows_error_1.md · John6666/forum2 at main", "clicks": 1 } ], "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 105819, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-speaker-diarization-3-1/169415/6", "reactions": [ { "id": "confetti_ball", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": { "id": 52272, "username": "John6666", "name": "John Smith", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png" }, "action_code": null, "via_email": null }, { "id": 244195, "name": "John Smith", "username": "John6666", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png", "created_at": "2025-10-26T04:23:33.479Z", "cooked": "<blockquote>\n<p>By the way, How did you find the solution that fast? and even you made this document so fast!</p>\n</blockquote>\n<p>Yeah. 
Since it was an error I recognized from a similar case, I fed my prior knowledge to <code>GPT-5 Thinking</code> and had it search for it. I then formatted that Markdown in Python and output it.<img src=\"https://emoji.discourse-cdn.com/apple/grinning_face.png?v=14\" title=\":grinning_face:\" class=\"emoji\" alt=\":grinning_face:\" loading=\"lazy\" width=\"20\" height=\"20\"><br>\nI think Gemini can do it too…</p>", "post_number": 7, "post_type": 1, "posts_count": 8, "updated_at": "2025-10-26T07:46:05.096Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 0, "reads": 2, "readers_count": 1, "score": 60.4, "yours": false, "topic_id": 169415, "topic_slug": "problem-with-pyannote-speaker-diarization-3-1", "display_username": "John Smith", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": "Regular", "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 52272, "hidden": false, "trust_level": 3, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-speaker-diarization-3-1/169415/7", "reactions": [ { "id": "heart", "type": "emoji", "count": 1 }, { "id": "open_mouth", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": false, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 244244, "name": "system", "username": "system", "avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png", "created_at": "2025-10-26T16:23:43.476Z", "cooked": "<p>This topic was automatically closed 12 hours after the last reply. 
New replies are no longer allowed.</p>", "post_number": 8, "post_type": 3, "posts_count": 8, "updated_at": "2025-10-26T16:23:43.476Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 1, "reads": 1, "readers_count": 0, "score": 5.2, "yours": false, "topic_id": 169415, "topic_slug": "problem-with-pyannote-speaker-diarization-3-1", "display_username": "system", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [], "moderator": true, "admin": true, "staff": true, "user_id": -1, "hidden": false, "trust_level": 4, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-speaker-diarization-3-1/169415/8", "reactions": [], "current_user_reaction": null, "reaction_users_count": 0, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": null, "action_code": "autoclosed.enabled", "via_email": null } ]
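For readers hitting the same warning, the root cause in the thread above is that pyannote's optional torchcodec import failed, so `AudioDecoder` was never defined when `io.py` tried to use it. A quick check run outside pyannote can confirm whether torchcodec's core libraries load at all; the import path below is an assumption based on torchcodec's public API rather than anything quoted in the thread:

```python
# Diagnostic sketch: verify torchcodec itself before involving pyannote.
# If the decoder import raises, the NameError in pyannote's io.py is expected,
# because pyannote's own (guarded) torchcodec import failed the same way.
import torch
import torchcodec

print("torch", torch.__version__, "torchcodec", torchcodec.__version__)
from torchcodec.decoders import AudioDecoder  # fails if the libtorchcodec DLLs cannot load
print("torchcodec decoder loads OK")
```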
<p>Hello, I am trying to make some code with pyannote/speaker-diarization-3.1 but I got some error that I cannot handle now….</p> <p>This is the code I made below, I only used function “speaker_diarization” this time..</p> <pre data-code-wrap="python"><code class="lang-python">import pandas as pd from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline from pyannote.audio import Pipeline from pathlib import Path import os, sys ffmpeg_dll_dir = Path(r"C:\Users\majh0\miniconda3\Library\bin") assert ffmpeg_dll_dir.exists(), ffmpeg_dll_dir os.add_dll_directory(str(ffmpeg_dll_dir)) import torch, torchcodec, platform, subprocess print("exe:", sys.executable) print("torch", torch.__version__, "torchcodec", torchcodec.__version__, "py", platform.python_version()) subprocess.run(["ffmpeg", "-version"], check=True) print("cuda torch?",torch.cuda.is_available()) def whisper_stt( audio_file_path: str, output_file_path: str = "./output.csv", ): device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model_id = "openai/whisper-large-v3-turbo" model = AutoModelForSpeechSeq2Seq.from_pretrained( model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True ) model.to(device) processor = AutoProcessor.from_pretrained(model_id) pipe = pipeline( "automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, torch_dtype=torch_dtype, device=device, return_timestamps=True, chunk_length_s=10, stride_length_s=2, ) result = pipe(audio_file_path) df = whisper_to_dataframe(result, output_file_path) return result, df def whisper_to_dataframe(result, output_file_path): start_end_text = [] for chunk in result["chunks"]: start = chunk["timestamp"][0] end = chunk["timestamp"][1] text = chunk["text"] start_end_text.append([start, end, text]) df = pd.DataFrame(start_end_text, columns=["start", "end", "text"]) df.to_csv(output_file_path, index=False, sep="|") return df def speaker_diarization( audio_file_path: str, output_rttm_file_path: str, output_csv_file_path: str, ): pipeline = Pipeline.from_pretrained( "pyannote/speaker-diarization-3.1", token="") if torch.cuda.is_available(): pipeline.to(torch.device("cuda")) print("Using CUDA") else: print("Using CPU") print("torch version:", torch.__version__) print("compiled with cuda:", torch.version.cuda) print("cuda available:", torch.cuda.is_available()) out = pipeline(audio_file_path) ann = out.speaker_diarization # dump the diarization output to disk using RTTM format with open(output_rttm_file_path, "w", encoding="utf-8") as rttm: ann.write_rttm(rttm) df_rttm = pd.read_csv( output_rttm_file_path, sep=' ', header=None, names=['type', 'file', 'chnl', 'start', 'duration', 'C1', 'C2', 'speaker_id', 'C3', 'C4'] ) df_rttm['end'] = df_rttm['start'] + df_rttm['duration'] df_rttm["number"] = None df_rttm.at[0, "number"] = 0 for i in range(1, len(df_rttm)): if df_rttm.at[i, "speaker_id"] != df_rttm.at[i-1, "speaker_id"]: df_rttm.at[i, "number"] = df_rttm.at[i-1, "number"] + 1 else: df_rttm.at[i, "number"] = df_rttm.at[i-1, "number"] df_rttm_grouped = df_rttm.groupby("number").agg( start=pd.NamedAgg(column="start", aggfunc="min"), end=pd.NamedAgg(column="end", aggfunc="max"), speaker_id=pd.NamedAgg(column="speaker_id", aggfunc="first") ) df_rttm_grouped['duration'] = df_rttm_grouped['end'] - df_rttm_grouped['start'] df_rttm_grouped = df_rttm_grouped.reset_index(drop=True) 
df_rttm_grouped.to_csv(output_csv_file_path, sep=',', index=False, encoding='utf-8') return df_rttm_grouped if __name__ == "__main__": # result, df = whisper_stt( # "./chap05/guitar.wav", # "./chap05/guitar.csv", # ) # print(df) audio_file_path = "./chap05/guitar.wav" stt_output_file_path = "./chap05/guitar.csv" rttm_file_path = "./chap05/guitar.rttm" rttm_csv_file_path = "./chap05/guitar_rttm.csv" df_rttm = speaker_diarization( audio_file_path, rttm_file_path, rttm_csv_file_path ) print(df_rttm) </code></pre> <p>After running this code, it gives me error like below..</p> <pre><code class="lang-auto">(venv) PS C:\GPT_AGENT_2025_BOOK&gt; &amp; C:/GPT_AGENT_2025_BOOK/venv/Scripts/python.exe c:/GPT_AGENT_2025_BOOK/chap05/whisper_stt.py C:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\pyannote\audio\core\io.py:47: UserWarning: torchcodec is not installed correctly so built-in audio decoding will fail. Solutions are: * use audio preloaded in-memory as a {'waveform': (channel, time) torch.Tensor, 'sample_rate': int} dictionary; * fix torchcodec installation. Error message was: Could not load libtorchcodec. Likely causes: 1. FFmpeg is not properly installed in your environment. We support versions 4, 5, 6 and 7. 2. The PyTorch version (2.9.0+cu126) is not compatible with this version of TorchCodec. Refer to the version compatibility table: https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec. 3. Another runtime dependency; see exceptions below. The following exceptions were raised as we tried to load libtorchcodec: [start of libtorchcodec loading traceback] FFmpeg version 8: Could not load this library: C:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\torchcodec\libtorchcodec_core8.dll FFmpeg version 7: Could not load this library: C:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\torchcodec\libtorchcodec_core7.dll FFmpeg version 6: Could not load this library: C:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\torchcodec\libtorchcodec_core6.dll FFmpeg version 5: Could not load this library: C:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\torchcodec\libtorchcodec_core5.dll FFmpeg version 4: Could not load this library: C:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\torchcodec\libtorchcodec_core4.dll [end of libtorchcodec loading traceback]. warnings.warn( exe: C:\GPT_AGENT_2025_BOOK\venv\Scripts\python.exe torch 2.9.0+cu126 torchcodec 0.8.0 py 3.12.9 ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers built with gcc 10.2.1 (GCC) 20200726 configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf libavutil 56. 51.100 / 56. 51.100 libavcodec 58. 91.100 / 58. 
91.100 libavformat 58. 45.100 / 58. 45.100 libavdevice 58. 10.100 / 58. 10.100 libavfilter 7. 85.100 / 7. 85.100 libswscale 5. 7.100 / 5. 7.100 libswresample 3. 7.100 / 3. 7.100 libpostproc 55. 7.100 / 55. 7.100 cuda torch? True Using CUDA torch version: 2.9.0+cu126 compiled with cuda: 12.6 cuda available: True C:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\torch\backends\cuda\__init__.py:131: UserWarning: Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = 'tf32' or torch.backends.cuda.matmul.fp32_precision = 'ieee'. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, torch.backends.cudnn.allow_tf32 = True, allowTF32CuDNN() and allowTF32CuBLAS() will be deprecated after Pytorch 2.9. Please see https://pytorch.org/docs/main/notes/cuda.html#tensorfloat-32-tf32-on-ampere-and-later-devices (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\Context.cpp:85.) return torch._C._get_cublas_allow_tf32() C:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\pyannote\audio\utils\reproducibility.py:74: ReproducibilityWarning: TensorFloat-32 (TF32) has been disabled as it might lead to reproducibility issues and lower accuracy. It can be re-enabled by calling &gt;&gt;&gt; import torch &gt;&gt;&gt; torch.backends.cuda.matmul.allow_tf32 = True &gt;&gt;&gt; torch.backends.cudnn.allow_tf32 = True See https://github.com/pyannote/pyannote-audio/issues/1370 for more details. warnings.warn( Traceback (most recent call last): File "c:\GPT_AGENT_2025_BOOK\chap05\whisper_stt.py", line 156, in &lt;module&gt; df_rttm = speaker_diarization( ^^^^^^^^^^^^^^^^^^^^ File "c:\GPT_AGENT_2025_BOOK\chap05\whisper_stt.py", line 94, in speaker_diarization out = pipeline(audio_file_path) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\pyannote\audio\core\pipeline.py", line 440, in __call__ track_pipeline_apply(self, file, **kwargs) File "C:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\pyannote\audio\telemetry\metrics.py", line 152, in track_pipeline_apply duration: float = Audio().get_duration(file) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\pyannote\audio\core\io.py", line 273, in get_duration metadata: AudioStreamMetadata = get_audio_metadata(file) ^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\pyannote\audio\core\io.py", line 86, in get_audio_metadata metadata = AudioDecoder(file["audio"]).metadata ^^^^^^^^^^^^ NameError: name 'AudioDecoder' is not defined </code></pre> <p>It says torchcodec is not installed so auodio decoding will fail.. but strange thing is that it tells me the version of torch codec as below….</p> <pre><code class="lang-auto">C:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\pyannote\audio\core\io.py:47: UserWarning: torchcodec is not installed correctly so built-in audio decoding will fail. (...) [end of libtorchcodec loading traceback]. 
warnings.warn( exe: C:\GPT_AGENT_2025_BOOK\venv\Scripts\python.exe torch 2.9.0+cu126 torchcodec 0.8.0 py 3.12.9 ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers built with gcc 10.2.1 (GCC) 20200726 </code></pre> <p>and more strange thing is that this code actually worked pretty well without any problem in Jupyternote book… and last picture is the result..</p> <p><div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/1/6/16e615d060caba5985d089d7d1fae229383905ee.png" data-download-href="/uploads/short-url/3gzsuRerXGquP8haz4cPzLTewJE.png?dl=1" title="image" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/1/6/16e615d060caba5985d089d7d1fae229383905ee.png" alt="image" data-base62-sha1="3gzsuRerXGquP8haz4cPzLTewJE" width="690" height="264" data-dominant-color="1E1F1F"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">image</span><span class="informations">1026×394 21 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p> <p><div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/9/a/9ad2487ccbcd0deffda12cf8393ee7b4f563d586.png" data-download-href="/uploads/short-url/m5C3IKEV9BXzbF2iR89wAJ7difQ.png?dl=1" title="image" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/9/a/9ad2487ccbcd0deffda12cf8393ee7b4f563d586.png" alt="image" data-base62-sha1="m5C3IKEV9BXzbF2iR89wAJ7difQ" width="690" height="374" data-dominant-color="202122"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">image</span><span class="informations">1070×581 29.3 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p> <p><div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/c/8/c8b3f19a75ddacfd3fac5d3c8da4d6c941adbfc0.png" data-download-href="/uploads/short-url/sDv1lTkSQy0ehRarqfUk6JLiXDy.png?dl=1" title="image" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/c/8/c8b3f19a75ddacfd3fac5d3c8da4d6c941adbfc0.png" alt="image" data-base62-sha1="sDv1lTkSQy0ehRarqfUk6JLiXDy" width="690" height="499" data-dominant-color="2F2F2F"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">image</span><span class="informations">724×524 12.5 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p> <p>It is hard to understand for me because I didn’t change any environment setting… and I just almost copied and pasted the code from the Jupyternote book..</p> <p>Thank you so much for the help in advance…</p>
<p>Problems frequently occur in Windows environments.<br> Specifically, issues related to DLLs can arise because Python 3.8 and later no longer reference the Windows <code>PATH</code> environment variable.</p> <p><a href="https://huggingface.co/datasets/John6666/forum2/blob/main/torchcodec_windows_error_1.md">Several workarounds exist, such as explicitly specifying the path within the code, adjusting the DLL location, or using methods that don’t require DLLs</a>.</p>
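Of these, the no-DLL route is the one confirmed later in the thread: decode the audio yourself and hand the pipeline an in-memory waveform dictionary, exactly as the UserWarning suggests. A minimal sketch, with the token and file path as placeholders:

```python
# Workaround confirmed in the thread: preload audio with torchaudio and pass a
# {"waveform", "sample_rate"} dict so pyannote never needs the torchcodec/FFmpeg DLLs.
import torch
import torchaudio
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    token="hf_xxx",  # placeholder Hugging Face access token
)
if torch.cuda.is_available():
    pipeline.to(torch.device("cuda"))

waveform, sample_rate = torchaudio.load("./chap05/guitar.wav")  # placeholder path
out = pipeline({"waveform": waveform, "sample_rate": sample_rate})
# The rest of speaker_diarization() (RTTM export, DataFrame post-processing)
# stays the same as in the original script.
```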
QLoRA - model isn't training
https://discuss.huggingface.co/t/qlora-model-isnt-training/169337
169,337
5
2025-10-22T11:19:32.837000Z
[ { "id": 243954, "name": "Anton Bartash", "username": "antbartash", "avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/46a35a/{size}.png", "created_at": "2025-10-22T11:19:32.912Z", "cooked": "<p>Hi everyone,<br>\nI’ve been trying to switch from LoRA to QLoRA on an Nvidia T4, but I’m running into an issue where the evaluation loss stays completely flat, while the training loss fluctuates around its initial value.</p>\n<p>My LoRA setup works fine, but adding <code>bnb_config</code>, <code>model.gradient_checkpointing_enable()</code>, and <code>model = prepare_model_for_kbit_training(model)</code> causes the issue described above.<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/5/d/5d755be17cacac8fc8637104730fdb9b8cb38d49.jpeg\" data-download-href=\"/uploads/short-url/dkLQoooAVBLFYkiL9asE9DmfI5r.jpeg?dl=1\" title=\"1000000396\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/d/5d755be17cacac8fc8637104730fdb9b8cb38d49_2_690x454.jpeg\" alt=\"1000000396\" data-base62-sha1=\"dkLQoooAVBLFYkiL9asE9DmfI5r\" width=\"690\" height=\"454\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/d/5d755be17cacac8fc8637104730fdb9b8cb38d49_2_690x454.jpeg, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/d/5d755be17cacac8fc8637104730fdb9b8cb38d49_2_1035x681.jpeg 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/d/5d755be17cacac8fc8637104730fdb9b8cb38d49_2_1380x908.jpeg 2x\" data-dominant-color=\"1D1D1D\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">1000000396</span><span class=\"informations\">1455×959 167 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p>Since the non-quantized version runs without problems, I don’t think the issue is related to the LoRA config, dataset, or formatting functions. The number of trainable parameters is non-zero for both the LoRA and QLoRA setups.</p>\n<p>Below is the code I’m using for QLoRA. 
Any help would be appreciated!</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">ds_train_with_assistant_content = ds_train.map(construct_message_with_assistant_content)\nds_valid_with_assistant_content = ds_valid.map(construct_message_with_assistant_content)\n\nbnb_config = BitsAndBytesConfig(\n load_in_4bit=True,\n bnb_4bit_use_double_quant=True,\n bnb_4bit_quant_type=\"nf4\",\n bnb_4bit_compute_dtype=torch.bfloat16\n)\n\ncheckpoint = \"Qwen/Qwen3-0.6B\"\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\nmodel = AutoModelForCausalLM.from_pretrained(\n checkpoint,\n device_map=\"auto\",\n quantization_config=bnb_config\n)\n\nmodel.config.use_cache = False\nmodel.gradient_checkpointing_enable()\nmodel = prepare_model_for_kbit_training(model)\nmodel.enable_input_require_grads()\n\n\ntimestamp = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')\nRUN_NAME = f'qlora-final-model-all-linear-r64-{timestamp}'\nwandb.init(\n project=os.environ[\"WANDB_PROJECT\"],\n name=RUN_NAME,\n # id=run_id, # resume previous run if available\n resume=\"allow\", # allows resuming crashed run\n)\n\n\nRESUME_TRAINING = False\nOUTPUT_DIR = \"./qlora-final_model_all_linear_r64-output\"\nPER_DEVICE_BATCH_SIZE = 2 # higher values --&gt; OOM\n\noptimizer = 'paged_adamw_8bit'\neffective_batch_size = 16\nlearning_rate = 1e-5\nweight_decay = 0.0\nbetas = (0.9, 0.9999)\nwarmup_ratio = 0.2\nepochs = 1\ngradient_accumulation_steps = int(effective_batch_size / PER_DEVICE_BATCH_SIZE)\nlora_r = 16*4\nlora_alpha = 64*4\nlora_dropout = 0.01\n\n\ntraining_args = TrainingArguments(\n output_dir=OUTPUT_DIR,\n per_device_train_batch_size=PER_DEVICE_BATCH_SIZE,\n gradient_accumulation_steps=gradient_accumulation_steps,\n learning_rate=learning_rate,\n optim=optimizer, \n num_train_epochs=epochs,\n weight_decay=weight_decay,\n lr_scheduler_type=\"cosine\",\n warmup_ratio=warmup_ratio,\n save_strategy=\"steps\",\n save_steps=gradient_accumulation_steps*5,\n save_total_limit=2,\n eval_strategy=\"steps\",\n eval_steps=gradient_accumulation_steps*5,\n logging_strategy=\"steps\",\n logging_steps=gradient_accumulation_steps*5,\n report_to=['wandb'],\n run_name=RUN_NAME,\n bf16=True,\n # fp16=True,\n # fp16_full_eval=True,\n metric_for_best_model=\"eval_loss\",\n greater_is_better=False,\n max_grad_norm=1,\n load_best_model_at_end=True,\n gradient_checkpointing=True,\n gradient_checkpointing_kwargs={\"use_reentrant\": False}\n)\n\n\npeft_config = LoraConfig(\n r=lora_r,\n lora_alpha=lora_alpha,\n lora_dropout=lora_dropout,\n bias=\"none\",\n task_type=\"CAUSAL_LM\",\n target_modules='all-linear'\n)\n# model.requires_grad_(False) # freeze base weights (precautionary)\nmodel_peft = get_peft_model(model, peft_config) # inject a LoRA adapter\nprint_trainable_parameters(model_peft)\n\ntrainer = SFTTrainer(\n model=model_peft,\n train_dataset=ds_train_with_assistant_content,\n eval_dataset=ds_valid_with_assistant_content,\n formatting_func=formatting_func,\n args=training_args,\n callbacks=[EarlyStoppingCallback(early_stopping_patience=25)]\n)\n\n\n# Training setup summary\ndataset_size = len(ds_train_with_assistant_content)\nsteps_per_epoch = dataset_size // (PER_DEVICE_BATCH_SIZE * gradient_accumulation_steps)\ntotal_steps = steps_per_epoch * epochs\nwarmup_steps = int(total_steps * warmup_ratio)\n\nprint(\"===== Training Setup Summary =====\")\nprint(f\"Num epochs: {epochs}\")\nprint(f\"Effective batch size: {effective_batch_size}\")\nprint(f\"Per-device batch size: {PER_DEVICE_BATCH_SIZE}\")\nprint(f\"Gradient accumulation: 
{gradient_accumulation_steps}\")\nprint(f\"Dataset size: {dataset_size}\")\nprint(f\"Steps per epoch: {steps_per_epoch}\")\nprint(f\"Total training steps: {total_steps}\")\nprint(f\"Warmup steps: {warmup_steps}\")\nprint(f\"Logging steps: {training_args.logging_steps}\")\nprint(\"===================================\")\nprint(f\"Start time: {datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}\")\n\n\n# Training\nlast_checkpoint = None\nif RESUME_TRAINING and os.path.isdir(OUTPUT_DIR):\n last_checkpoint = get_last_checkpoint(OUTPUT_DIR)\n\nif last_checkpoint is not None:\n print(f\"Resuming training from checkpoint: {last_checkpoint}\")\n trainer.train(resume_from_checkpoint=last_checkpoint)\nelse:\n print(\"Starting fresh training run\")\n trainer.train()\n\nprint(f\"End time: {datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}\")\n\n\n# WandB logging of eval metrics\nfor log in trainer.state.log_history:\n if 'eval_loss' in log:\n wandb.log({\n \"eval_loss\": log['eval_loss'],\n \"eval_perplexity\": math.exp(log['eval_loss']),\n \"step\": log['step'],\n \"learning_rate\": learning_rate,\n \"weight_decay\": weight_decay,\n \"betas\": betas,\n \"warmup_ratio\": warmup_ratio,\n \"effective_batch_size\": effective_batch_size,\n \"optimizer\": optimizer\n })\n\nwandb.finish() # finish the run</code></pre>", "post_number": 1, "post_type": 1, "posts_count": 4, "updated_at": "2025-10-22T11:19:32.912Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 32, "reads": 8, "readers_count": 7, "score": 36.4, "yours": false, "topic_id": 169337, "topic_slug": "qlora-model-isnt-training", "display_username": "Anton Bartash", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 106030, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/qlora-model-isnt-training/169337/1", "reactions": [ { "id": "eyes", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": false, "title_is_group": null, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 243957, "name": "John Smith", "username": "John6666", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png", "created_at": "2025-10-22T12:52:50.634Z", "cooked": "<blockquote>\n<p>Nvidia T4</p>\n</blockquote>\n<p>Since T4 doesn’t natively support <code>torch.bfloat16</code>, using <code>torch.float16</code>/ <code>fp16=True</code> instead might resolve the error. 
No other major issues appear to exist.</p>", "post_number": 2, "post_type": 1, "posts_count": 4, "updated_at": "2025-10-22T12:52:50.634Z", "reply_count": 1, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 1, "reads": 8, "readers_count": 7, "score": 11.4, "yours": false, "topic_id": 169337, "topic_slug": "qlora-model-isnt-training", "display_username": "John Smith", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": "Regular", "bookmarked": false, "actions_summary": [], "moderator": false, "admin": false, "staff": false, "user_id": 52272, "hidden": false, "trust_level": 3, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/qlora-model-isnt-training/169337/2", "reactions": [], "current_user_reaction": null, "reaction_users_count": 0, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": false, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 243998, "name": "Anton Bartash", "username": "antbartash", "avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/46a35a/{size}.png", "created_at": "2025-10-23T07:19:01.516Z", "cooked": "<p>Thanks for the suggestion<br>\nIt turned out the issue was environment-related — I was able to get the expected results using the exact same code on Colab. In my local environment, clearing the caches for transformers, torch, etc., and upgrading all the libraries resolved the problem.</p>", "post_number": 3, "post_type": 1, "posts_count": 4, "updated_at": "2025-10-23T07:19:01.516Z", "reply_count": 0, "reply_to_post_number": 2, "quote_count": 0, "incoming_link_count": 1, "reads": 7, "readers_count": 6, "score": 21.2, "yours": false, "topic_id": 169337, "topic_slug": "qlora-model-isnt-training", "display_username": "Anton Bartash", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 106030, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/qlora-model-isnt-training/169337/3", "reactions": [ { "id": "+1", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": true, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": { "id": 52272, "username": "John6666", "name": "John Smith", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png" }, "action_code": null, "via_email": null }, { "id": 244071, "name": "system", "username": "system", "avatar_template": 
"https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png", "created_at": "2025-10-24T18:16:57.733Z", "cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>", "post_number": 4, "post_type": 3, "posts_count": 4, "updated_at": "2025-10-24T18:16:57.733Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 1, "reads": 2, "readers_count": 1, "score": 0, "yours": false, "topic_id": 169337, "topic_slug": "qlora-model-isnt-training", "display_username": "system", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [], "moderator": true, "admin": true, "staff": true, "user_id": -1, "hidden": false, "trust_level": 4, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/qlora-model-isnt-training/169337/4", "reactions": [], "current_user_reaction": null, "reaction_users_count": 0, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": null, "action_code": "autoclosed.enabled", "via_email": null } ]
<p>Hi everyone,<br> I’ve been trying to switch from LoRA to QLoRA on an Nvidia T4, but I’m running into an issue where the evaluation loss stays completely flat, while the training loss fluctuates around its initial value.</p> <p>My LoRA setup works fine, but adding <code>bnb_config</code>, <code>model.gradient_checkpointing_enable()</code>, and <code>model = prepare_model_for_kbit_training(model)</code> causes the issue described above.<br> <div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/5/d/5d755be17cacac8fc8637104730fdb9b8cb38d49.jpeg" data-download-href="/uploads/short-url/dkLQoooAVBLFYkiL9asE9DmfI5r.jpeg?dl=1" title="1000000396" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/d/5d755be17cacac8fc8637104730fdb9b8cb38d49_2_690x454.jpeg" alt="1000000396" data-base62-sha1="dkLQoooAVBLFYkiL9asE9DmfI5r" width="690" height="454" srcset="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/d/5d755be17cacac8fc8637104730fdb9b8cb38d49_2_690x454.jpeg, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/d/5d755be17cacac8fc8637104730fdb9b8cb38d49_2_1035x681.jpeg 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/d/5d755be17cacac8fc8637104730fdb9b8cb38d49_2_1380x908.jpeg 2x" data-dominant-color="1D1D1D"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">1000000396</span><span class="informations">1455×959 167 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p> <p>Since the non-quantized version runs without problems, I don’t think the issue is related to the LoRA config, dataset, or formatting functions. The number of trainable parameters is non-zero for both the LoRA and QLoRA setups.</p> <p>Below is the code I’m using for QLoRA. 
Any help would be appreciated!</p> <pre data-code-wrap="python"><code class="lang-python">ds_train_with_assistant_content = ds_train.map(construct_message_with_assistant_content) ds_valid_with_assistant_content = ds_valid.map(construct_message_with_assistant_content) bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) checkpoint = "Qwen/Qwen3-0.6B" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained( checkpoint, device_map="auto", quantization_config=bnb_config ) model.config.use_cache = False model.gradient_checkpointing_enable() model = prepare_model_for_kbit_training(model) model.enable_input_require_grads() timestamp = datetime.now().strftime('%Y-%m-%d_%H-%M-%S') RUN_NAME = f'qlora-final-model-all-linear-r64-{timestamp}' wandb.init( project=os.environ["WANDB_PROJECT"], name=RUN_NAME, # id=run_id, # resume previous run if available resume="allow", # allows resuming crashed run ) RESUME_TRAINING = False OUTPUT_DIR = "./qlora-final_model_all_linear_r64-output" PER_DEVICE_BATCH_SIZE = 2 # higher values --&gt; OOM optimizer = 'paged_adamw_8bit' effective_batch_size = 16 learning_rate = 1e-5 weight_decay = 0.0 betas = (0.9, 0.9999) warmup_ratio = 0.2 epochs = 1 gradient_accumulation_steps = int(effective_batch_size / PER_DEVICE_BATCH_SIZE) lora_r = 16*4 lora_alpha = 64*4 lora_dropout = 0.01 training_args = TrainingArguments( output_dir=OUTPUT_DIR, per_device_train_batch_size=PER_DEVICE_BATCH_SIZE, gradient_accumulation_steps=gradient_accumulation_steps, learning_rate=learning_rate, optim=optimizer, num_train_epochs=epochs, weight_decay=weight_decay, lr_scheduler_type="cosine", warmup_ratio=warmup_ratio, save_strategy="steps", save_steps=gradient_accumulation_steps*5, save_total_limit=2, eval_strategy="steps", eval_steps=gradient_accumulation_steps*5, logging_strategy="steps", logging_steps=gradient_accumulation_steps*5, report_to=['wandb'], run_name=RUN_NAME, bf16=True, # fp16=True, # fp16_full_eval=True, metric_for_best_model="eval_loss", greater_is_better=False, max_grad_norm=1, load_best_model_at_end=True, gradient_checkpointing=True, gradient_checkpointing_kwargs={"use_reentrant": False} ) peft_config = LoraConfig( r=lora_r, lora_alpha=lora_alpha, lora_dropout=lora_dropout, bias="none", task_type="CAUSAL_LM", target_modules='all-linear' ) # model.requires_grad_(False) # freeze base weights (precautionary) model_peft = get_peft_model(model, peft_config) # inject a LoRA adapter print_trainable_parameters(model_peft) trainer = SFTTrainer( model=model_peft, train_dataset=ds_train_with_assistant_content, eval_dataset=ds_valid_with_assistant_content, formatting_func=formatting_func, args=training_args, callbacks=[EarlyStoppingCallback(early_stopping_patience=25)] ) # Training setup summary dataset_size = len(ds_train_with_assistant_content) steps_per_epoch = dataset_size // (PER_DEVICE_BATCH_SIZE * gradient_accumulation_steps) total_steps = steps_per_epoch * epochs warmup_steps = int(total_steps * warmup_ratio) print("===== Training Setup Summary =====") print(f"Num epochs: {epochs}") print(f"Effective batch size: {effective_batch_size}") print(f"Per-device batch size: {PER_DEVICE_BATCH_SIZE}") print(f"Gradient accumulation: {gradient_accumulation_steps}") print(f"Dataset size: {dataset_size}") print(f"Steps per epoch: {steps_per_epoch}") print(f"Total training steps: {total_steps}") print(f"Warmup steps: {warmup_steps}") print(f"Logging steps: 
{training_args.logging_steps}") print("===================================") print(f"Start time: {datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}") # Training last_checkpoint = None if RESUME_TRAINING and os.path.isdir(OUTPUT_DIR): last_checkpoint = get_last_checkpoint(OUTPUT_DIR) if last_checkpoint is not None: print(f"Resuming training from checkpoint: {last_checkpoint}") trainer.train(resume_from_checkpoint=last_checkpoint) else: print("Starting fresh training run") trainer.train() print(f"End time: {datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}") # WandB logging of eval metrics for log in trainer.state.log_history: if 'eval_loss' in log: wandb.log({ "eval_loss": log['eval_loss'], "eval_perplexity": math.exp(log['eval_loss']), "step": log['step'], "learning_rate": learning_rate, "weight_decay": weight_decay, "betas": betas, "warmup_ratio": warmup_ratio, "effective_batch_size": effective_batch_size, "optimizer": optimizer }) wandb.finish() # finish the run</code></pre>
<p>Thanks for the suggestion<br> It turned out the issue was environment-related — I was able to get the expected results using the exact same code on Colab. In my local environment, clearing the caches for transformers, torch, etc., and upgrading all the libraries resolved the problem.</p>
Problem with pyannote.audio==3.1.0
https://discuss.huggingface.co/t/problem-with-pyannote-audio-3-1-0/169326
169,326
5
2025-10-21T13:54:38.497000Z
[ { "id": 243920, "name": "MAJH", "username": "aldkela", "avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/4bbf92/{size}.png", "created_at": "2025-10-21T13:54:38.567Z", "cooked": "<p>Hello, I was trying to use model named pyannote/speaker-diarization-3.1</p>\n<p>so I installed some libraries as below</p>\n<pre><code class=\"lang-auto\">%pip install pyannote.audio==3.1.0\n%pip install numpy==1.26\n</code></pre>\n<p>Here is the result and I think I installed this properly…</p>\n<pre><code class=\"lang-auto\">Collecting pyannote.audio==3.1.0\n Using cached pyannote.audio-3.1.0-py2.py3-none-any.whl.metadata (7.8 kB)\nRequirement already satisfied: asteroid-filterbanks&gt;=0.4 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (0.4.0)\nRequirement already satisfied: einops&gt;=0.6.0 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (0.8.1)\nRequirement already satisfied: huggingface-hub&gt;=0.13.0 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (0.35.3)\nRequirement already satisfied: lightning&gt;=2.0.1 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (2.5.5)\nRequirement already satisfied: omegaconf&lt;3.0,&gt;=2.1 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (2.3.0)\nRequirement already satisfied: pyannote.core&gt;=5.0.0 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (6.0.1)\nRequirement already satisfied: pyannote.database&gt;=5.0.1 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (6.1.0)\nRequirement already satisfied: pyannote.metrics&gt;=3.2 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (4.0.0)\nRequirement already satisfied: pyannote.pipeline&gt;=3.0.1 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (4.0.0)\nRequirement already satisfied: pytorch-metric-learning&gt;=2.1.0 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (2.9.0)\nRequirement already satisfied: rich&gt;=12.0.0 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (14.2.0)\nRequirement already satisfied: semver&gt;=3.0.0 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (3.0.4)\nRequirement already satisfied: soundfile&gt;=0.12.1 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (0.13.1)\nRequirement already satisfied: speechbrain&gt;=0.5.14 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (1.0.3)\nRequirement already satisfied: tensorboardX&gt;=2.6 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (2.6.4)\nRequirement already satisfied: torch&gt;=2.0.0 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (2.9.0+cu126)\nRequirement already satisfied: torch-audiomentations&gt;=0.11.0 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (0.12.0)\nRequirement already satisfied: torchaudio&gt;=2.0.0 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (2.9.0)\nRequirement already satisfied: torchmetrics&gt;=0.11.0 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from pyannote.audio==3.1.0) (1.8.2)\nRequirement already satisfied: antlr4-python3-runtime==4.9.* in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from 
omegaconf&lt;3.0,&gt;=2.1-&gt;pyannote.audio==3.1.0) (4.9.3)\nRequirement already satisfied: PyYAML&gt;=5.1.0 in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from omegaconf&lt;3.0,&gt;=2.1-&gt;pyannote.audio==3.1.0) (6.0.3)\nRequirement already satisfied: numpy in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from asteroid-filterbanks&gt;=0.4-&gt;pyannote.audio==3.1.0) (1.26.0)\nRequirement already satisfied: typing-extensions in c:\\gpt_agent_2025_book\\venv\\lib\\site-packages (from asteroid-filterbanks&gt;=0.4-&gt;pyannote.audio==3.1.0) (4.15.0)\n...\n Uninstalling numpy-2.3.4:\n Successfully uninstalled numpy-2.3.4\nSuccessfully installed numpy-1.26.0\nNote: you may need to restart the kernel to use updated packages.\nOutput is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\npyannote-core 6.0.1 requires numpy&gt;=2.0, but you have numpy 1.26.0 which is incompatible.\npyannote-metrics 4.0.0 requires numpy&gt;=2.2.2, but you have numpy 1.26.0 which is incompatible.\n</code></pre>\n<p>I ran this code to load the ffmpeg</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">from pathlib import Path\nimport os, sys\n\nffmpeg_dll_dir = Path(r\"C:\\Users\\majh0\\miniconda3\\Library\\bin\") \nassert ffmpeg_dll_dir.exists(), ffmpeg_dll_dir\nos.add_dll_directory(str(ffmpeg_dll_dir)) \n\nimport torch, torchcodec, platform, subprocess\nprint(\"exe:\", sys.executable)\nprint(\"torch\", torch.__version__, \"torchcodec\", torchcodec.__version__, \"py\", platform.python_version())\nsubprocess.run([\"ffmpeg\", \"-version\"], check=True)\nprint(\"cuda torch?\",torch.cuda.is_available())\n</code></pre>\n<p>and the result looks fine to me..</p>\n<pre><code class=\"lang-auto\">exe: c:\\GPT_AGENT_2025_BOOK\\venv\\Scripts\\python.exe\ntorch 2.9.0+cu126 torchcodec 0.8.0 py 3.12.9\ncuda torch? 
True\n</code></pre>\n<p>I ran this code and it gave me an error as below…</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\"># instantiate the pipeline\nimport torch\nfrom pyannote.audio import Pipeline\npipeline = Pipeline.from_pretrained(\n \"pyannote/speaker-diarization-3.1\",\n token=\"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\")\n\n\nif torch.cuda.is_available():\n pipeline.to(torch.device(\"cuda\"))\n print(\"Using CUDA\")\nelse:\n print(\"Using CPU\")\n</code></pre>\n<pre><code class=\"lang-auto\">---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\nCell In[3], line 3\n 1 # instantiate the pipeline\n 2 import torch\n----&gt; 3 from pyannote.audio import Pipeline\n 4 pipeline = Pipeline.from_pretrained(\n 5 \"pyannote/speaker-diarization-3.1\",\n 6 token=\"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\")\n 9 if torch.cuda.is_available():\n\nFile c:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\pyannote\\audio\\__init__.py:29\n 25 except ImportError:\n 26 pass\n---&gt; 29 from .core.inference import Inference\n 30 from .core.io import Audio\n 31 from .core.model import Model\n\nFile c:\\GPT_AGENT_2025_BOOK\\venv\\Lib\\site-packages\\pyannote\\audio\\core\\inference.py:36\n 33 from pyannote.core import Segment, SlidingWindow, SlidingWindowFeature\n 34 from pytorch_lightning.utilities.memory import is_oom_error\n---&gt; 36 from pyannote.audio.core.io import AudioFile\n 37 from pyannote.audio.core.model import Model, Specifications\n 38 from pyannote.audio.core.task import Resolution\n...\n 49 - a \"str\" or \"Path\" instance: \"audio.wav\" or Path(\"audio.wav\")\n (...) 56 integer to load a specific channel: {\"audio\": \"stereo.wav\", \"channel\": 0}\n 57 \"\"\"\n\nAttributeError: module 'torchaudio' has no attribute 'set_audio_backend'\n</code></pre>\n<p>I have checked the document and it says I need to install <a href=\"https://github.com/pyannote/pyannote-audio\" rel=\"noopener nofollow ugc\"><code>pyannote.audio</code></a> <code>3.1</code></p>\n<p>I don’t know why this thing doesn’t work…. 
I tried to solve this problem for 3hrs changing version of pyannote.audio but this thing didn’t give me solution..</p>\n<p>Do I need to delete venv and reinstall it clearly..?</p>\n<p>Thank you so much for the help in advance..</p>", "post_number": 1, "post_type": 1, "posts_count": 6, "updated_at": "2025-10-21T14:42:42.475Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 84, "reads": 5, "readers_count": 4, "score": 221, "yours": false, "topic_id": 169326, "topic_slug": "problem-with-pyannote-audio-3-1-0", "display_username": "MAJH", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 2, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": [ { "url": "https://github.com/pyannote/pyannote-audio", "internal": false, "reflection": false, "title": "GitHub - pyannote/pyannote-audio: Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding", "clicks": 0 } ], "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 105819, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-audio-3-1-0/169326/1", "reactions": [ { "id": "eyes", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": false, "title_is_group": null, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 243939, "name": "John Smith", "username": "John6666", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png", "created_at": "2025-10-22T02:49:32.789Z", "cooked": "<p>Seems library version incompatibility…</p>\n<hr>\n<p>Your import error comes from an API removal in torchaudio and an incompatible NumPy pin. Fix by upgrading <code>pyannote.audio</code> and undoing the NumPy downgrade. Keep your Torch 2.9 stack.</p>\n<h1><a name=\"p-243939-tldr-fix-1\" class=\"anchor\" href=\"#p-243939-tldr-fix-1\"></a>TL;DR fix</h1>\n<pre data-code-wrap=\"bash\"><code class=\"lang-bash\"># clean conflicting pins\npip uninstall -y pyannote.audio pyannote.core pyannote.metrics pyannote.pipeline pyannote.database numpy\n\n# install a compatible, modern set\npip install --upgrade \"numpy&gt;=2.3\" \"pyannote.audio&gt;=4.0.1\" --prefer-binary\n# keep your existing torch==2.9.*, torchaudio==2.9.* and torchcodec\n</code></pre>\n<p><code>pyannote.audio&gt;=4</code> removed the old torchaudio backend call and uses FFmpeg via <code>torchcodec</code>, so the import works on torchaudio≥2.2. NumPy≥2.x satisfies <code>pyannote-core</code> and <code>pyannote-metrics</code>. (<a href=\"https://github.com/pyannote/pyannote-audio/releases\" title=\"Releases · pyannote/pyannote-audio\">GitHub</a>)</p>\n<p>Then restart the kernel once. 
Verify:</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\"># refs:\n# - torchaudio dispatcher notes: https://docs.pytorch.org/audio/main/torchaudio.html\n# - pyannote model card: https://huggingface.co/pyannote/speaker-diarization-3.1\nimport torchaudio, torchcodec\nprint(\"backends:\", torchaudio.list_audio_backends()) # should show 'ffmpeg' and/or 'soundfile'\nfrom pyannote.audio import Pipeline\npipe = Pipeline.from_pretrained(\"pyannote/speaker-diarization-3.1\", token=\"hf_xxx\") # do not hardcode secrets\n</code></pre>\n<p><code>set_audio_backend</code> was deprecated, then removed in torchaudio 2.2+, which is why <code>pyannote.audio==3.1.0</code> fails to import on your current torchaudio. (<a href=\"https://docs.pytorch.org/audio/main/torchaudio.html\" title=\"Torchaudio 2.8.0 documentation\">PyTorch Docs</a>)</p>\n<h1><a name=\"p-243939-why-your-install-failed-2\" class=\"anchor\" href=\"#p-243939-why-your-install-failed-2\"></a>Why your install failed</h1>\n<ul>\n<li><code>pyannote.audio==3.1.0</code> calls <code>torchaudio.set_audio_backend(\"soundfile\")</code>. That function is gone in torchaudio≥2.2, so import raises <code>AttributeError</code>. Upgrading pyannote fixes it because 4.x removed that path. (<a href=\"https://github.com/pyannote/pyannote-audio/issues/1576\" title=\"Removing torchaudio.set_audio_backend(”soundfile”) #1576\">GitHub</a>)</li>\n<li>You forced <code>numpy==1.26</code>. Current pyannote ecosystem components require NumPy≥2.0 (core) and ≥2.2.2 (metrics). Pip warned correctly. Use NumPy≥2.3. (<a href=\"https://github.com/huggingface/transformers/issues/41230\" title=\"Consider forking and maintaining pyctcdecode #41230\">GitHub</a>)</li>\n</ul>\n<h1><a name=\"p-243939-if-you-must-stay-on-pyannoteaudio310-not-recommended-3\" class=\"anchor\" href=\"#p-243939-if-you-must-stay-on-pyannoteaudio310-not-recommended-3\"></a>If you must stay on <code>pyannote.audio==3.1.0</code> (not recommended)</h1>\n<p>Pick one, not both:</p>\n<pre data-code-wrap=\"bash\"><code class=\"lang-bash\"># Legacy stack that still has set_audio_backend\npip install \"torch&lt;=2.1.2\" \"torchaudio&lt;=2.1.2\" \"numpy&gt;=2.0,&lt;3\" \"pyannote.audio==3.1.0\"\n</code></pre>\n<p>or a temporary shim:</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\"># WARNING: local hack to import 3.1.0 with new torchaudio\nimport torchaudio\nif not hasattr(torchaudio, \"set_audio_backend\"):\n torchaudio.set_audio_backend = lambda *a, **k: None\n torchaudio.get_audio_backend = lambda: \"soundfile\"\nfrom pyannote.audio import Pipeline\n</code></pre>\n<p>The first aligns versions to when the API existed. The second bypasses the call so you can upgrade later. (<a href=\"https://docs.pytorch.org/audio/main/torchaudio.html\" title=\"Torchaudio 2.8.0 documentation\">PyTorch Docs</a>)</p>\n<h1><a name=\"p-243939-gating-and-ffmpeg-checks-4\" class=\"anchor\" href=\"#p-243939-gating-and-ffmpeg-checks-4\"></a>Gating and FFmpeg checks</h1>\n<ul>\n<li>Accept the model terms for <code>pyannote/speaker-diarization-3.1</code> on Hugging Face and pass a valid token, or downloads will fail. (<a href=\"https://huggingface.co/pyannote/speaker-diarization-3.1\" title=\"pyannote/speaker-diarization-3.1\">Hugging Face</a>)</li>\n<li><code>pyannote.audio&gt;=4</code> expects FFmpeg via <code>torchcodec</code>. You already verified FFmpeg and <code>torchcodec</code>, which matches the 4.x I/O design. 
(<a href=\"https://github.com/pyannote/pyannote-audio/releases\" title=\"Releases · pyannote/pyannote-audio\">GitHub</a>)</li>\n</ul>\n<h1><a name=\"p-243939-sanity-test-end-to-end-5\" class=\"anchor\" href=\"#p-243939-sanity-test-end-to-end-5\"></a>Sanity test end-to-end</h1>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\"># refs in comments:\n# https://huggingface.co/pyannote/speaker-diarization-3.1\n# https://docs.pytorch.org/audio/main/torchaudio.html\nimport torch\nfrom pyannote.audio import Pipeline\npipe = Pipeline.from_pretrained(\"pyannote/speaker-diarization-3.1\", token=\"hf_xxx\")\nif torch.cuda.is_available():\n pipe.to(\"cuda\")\nresult = pipe(\"sample.wav\") # 16 kHz mono recommended\nprint(result)\n</code></pre>\n<p>The model card confirms “pyannote.audio version 3.1 or higher,” so using 4.x is valid and simpler on modern Torch. (<a href=\"https://huggingface.co/collinbarnwell/pyannote-speaker-diarization-31\" title=\"collinbarnwell/pyannote-speaker-diarization-31\">Hugging Face</a>)</p>\n<h1><a name=\"p-243939-extra-context-and-references-6\" class=\"anchor\" href=\"#p-243939-extra-context-and-references-6\"></a>Extra context and references</h1>\n<ul>\n<li>Torchaudio 2.2+ removed <code>set_audio_backend</code> and switched to a dispatcher. That is the precise cause of your <code>AttributeError</code>. (<a href=\"https://docs.pytorch.org/audio/main/torchaudio.html\" title=\"Torchaudio 2.8.0 documentation\">PyTorch Docs</a>)</li>\n<li>pyannote 4.x release notes: removed <code>sox</code>/<code>soundfile</code> backends; use FFmpeg or in-memory audio. Explains why 4.x works on Windows with <code>torchcodec</code>. (<a href=\"https://github.com/pyannote/pyannote-audio/releases\" title=\"Releases · pyannote/pyannote-audio\">GitHub</a>)</li>\n<li>NumPy≥2 requirement in the pyannote stack. Avoid forcing 1.26. (<a href=\"https://github.com/huggingface/transformers/issues/41230\" title=\"Consider forking and maintaining pyctcdecode #41230\">GitHub</a>)</li>\n</ul>\n<p>Deleting the venv is optional. 
Uninstall→reinstall with the versions above and one kernel restart is sufficient.</p>", "post_number": 2, "post_type": 1, "posts_count": 6, "updated_at": "2025-10-22T02:50:15.452Z", "reply_count": 1, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 4, "reads": 4, "readers_count": 3, "score": 15.8, "yours": false, "topic_id": 169326, "topic_slug": "problem-with-pyannote-audio-3-1-0", "display_username": "John Smith", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": [ { "url": "https://github.com/pyannote/pyannote-audio/releases", "internal": false, "reflection": false, "title": "Releases · pyannote/pyannote-audio · GitHub", "clicks": 1 }, { "url": "https://github.com/pyannote/pyannote-audio/issues/1576", "internal": false, "reflection": false, "title": "Removing torchaudio.set_audio_backend(\"soundfile\") · Issue #1576 · pyannote/pyannote-audio · GitHub", "clicks": 1 }, { "url": "https://github.com/huggingface/transformers/issues/41230", "internal": false, "reflection": false, "title": "Consider forking and maintaining pyctcdecode or switch to torchaudio.models.decoder · Issue #41230 · huggingface/transformers · GitHub", "clicks": 0 }, { "url": "https://huggingface.co/pyannote/speaker-diarization-3.1", "internal": false, "reflection": false, "title": "pyannote/speaker-diarization-3.1 · Hugging Face", "clicks": 0 }, { "url": "https://docs.pytorch.org/audio/main/torchaudio.html", "internal": false, "reflection": false, "title": "torchaudio — Torchaudio 2.8.0 documentation", "clicks": 0 }, { "url": "https://huggingface.co/collinbarnwell/pyannote-speaker-diarization-31", "internal": false, "reflection": false, "title": "collinbarnwell/pyannote-speaker-diarization-31 · Hugging Face", "clicks": 0 } ], "read": true, "user_title": "Regular", "bookmarked": false, "actions_summary": [], "moderator": false, "admin": false, "staff": false, "user_id": 52272, "hidden": false, "trust_level": 3, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-audio-3-1-0/169326/2", "reactions": [], "current_user_reaction": null, "reaction_users_count": 0, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": false, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 243955, "name": "MAJH", "username": "aldkela", "avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/4bbf92/{size}.png", "created_at": "2025-10-22T12:34:52.198Z", "cooked": "<p>Hello! Thank you so much!! I realized.. I should read the error msg properly to solve the problem!!! 
xD</p>\n<p>I have one more problem….</p>\n<p>I made a code as below..</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">from pathlib import Path\nimport os, sys\n\nffmpeg_dll_dir = Path(r\"C:\\Users\\majh0\\miniconda3\\Library\\bin\") \nassert ffmpeg_dll_dir.exists(), ffmpeg_dll_dir\nos.add_dll_directory(str(ffmpeg_dll_dir)) \n\nimport torch, torchcodec, platform, subprocess\nprint(\"exe:\", sys.executable)\nprint(\"torch\", torch.__version__, \"torchcodec\", torchcodec.__version__, \"py\", platform.python_version())\nsubprocess.run([\"ffmpeg\", \"-version\"], check=True)\nprint(\"cuda torch?\",torch.cuda.is_available())\n\n# instantiate the pipeline\nimport torch\nfrom pyannote.audio import Pipeline\n\npipeline = Pipeline.from_pretrained(\n \"pyannote/speaker-diarization-3.1\",\n token=\"my token\")\n\n\nif torch.cuda.is_available():\n pipeline.to(torch.device(\"cuda\"))\n print(\"Using CUDA\")\nelse:\n print(\"Using CPU\")\n\naudio_file =\"./guitar.wav\"\ndiarization = pipeline(audio_file)\n\n# dump the diarization output to disk using RTTM format\nwith open(\"./guitar.rttm\", \"w\", encoding=\"utf-8\") as rttm:\n diarization.write_rttm(rttm)\n</code></pre>\n<p>this thing gave me error as below…</p>\n<pre><code class=\"lang-auto\">---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\nCell In[15], line 6\n 4 # dump the diarization output to disk using RTTM format\n 5 with open(\"./guitar.rttm\", \"w\", encoding=\"utf-8\") as rttm:\n----&gt; 6 diarization.write_rttm(rttm)\n\nAttributeError: 'DiarizeOutput' object has no attribute 'write_rttm'\n</code></pre>\n<p>This thing is hard to understand for me… because I literally typed “diarization.write_rttm(rttm)” same with the example of this document like picture below <a href=\"https://huggingface.co/pyannote/speaker-diarization-3.1\">https://huggingface.co/pyannote/speaker-diarization-3.1</a></p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/e/1/e12f6fb814a9818839879f59f631cf0ed994b78d.png\" data-download-href=\"/uploads/short-url/w853TGQotS8EsELlrorkptlyDgN.png?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/e/1/e12f6fb814a9818839879f59f631cf0ed994b78d.png\" alt=\"image\" data-base62-sha1=\"w853TGQotS8EsELlrorkptlyDgN\" width=\"690\" height=\"324\" data-dominant-color=\"202222\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">768×361 15.6 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p>the name of the function “write_rttm” has changed? then is there any way to check the new name of it..?</p>\n<p>or did I make another mistake again..?</p>\n<p>I think I am bothering you too much.. 
but thank you so much for your help..</p>", "post_number": 3, "post_type": 1, "posts_count": 6, "updated_at": "2025-10-22T12:34:52.198Z", "reply_count": 0, "reply_to_post_number": 2, "quote_count": 0, "incoming_link_count": 0, "reads": 4, "readers_count": 3, "score": 15.8, "yours": false, "topic_id": 169326, "topic_slug": "problem-with-pyannote-audio-3-1-0", "display_username": "MAJH", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": [ { "url": "https://huggingface.co/pyannote/speaker-diarization-3.1", "internal": false, "reflection": false, "title": "pyannote/speaker-diarization-3.1 · Hugging Face", "clicks": 0 } ], "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 105819, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-audio-3-1-0/169326/3", "reactions": [ { "id": "+1", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": { "id": 52272, "username": "John6666", "name": "John Smith", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png" }, "action_code": null, "via_email": null }, { "id": 243956, "name": "John Smith", "username": "John6666", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png", "created_at": "2025-10-22T12:48:54.185Z", "cooked": "<p>It seems like a partial hit.<img src=\"https://emoji.discourse-cdn.com/apple/sweat_smile.png?v=14\" title=\":sweat_smile:\" class=\"emoji\" alt=\":sweat_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"> The cause is a specification change due to a library version upgrade, but it appears to be because the returned object changed, not because the function itself changed.</p>\n<hr>\n<p>You’re on <code>pyannote.audio</code> 4.x. In 4.x the pipeline returns a <strong><code>DiarizeOutput</code></strong> object, not an <code>Annotation</code>. The <code>Annotation</code> lives at <code>output.speaker_diarization</code>. <code>write_rttm</code> is a method of <code>Annotation</code>, so call it there.</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">from pyannote.audio import Pipeline\nimport torch\n\npipeline = Pipeline.from_pretrained(\n \"pyannote/speaker-diarization-3.1\",\n token=\"YOUR_HF_TOKEN\"\n)\nif torch.cuda.is_available():\n pipeline.to(\"cuda\")\n\nout = pipeline(\"./guitar.wav\") # out is DiarizeOutput\nann = out.speaker_diarization # this is an Annotation\n\nwith open(\"./guitar.rttm\", \"w\", encoding=\"utf-8\") as f:\n ann.write_rttm(f)\n</code></pre>\n<p>Evidence</p>\n<ul>\n<li>The current README shows usage as <code>output = pipeline(...); for turn, spk in output.speaker_diarization: ...</code>, proving the wrapper return type in 4.x. 
(<a href=\"https://github.com/pyannote/pyannote-audio\" title=\"GitHub - pyannote/pyannote-audio: Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding\">GitHub</a>)</li>\n<li><code>write_rttm</code> is defined on <code>pyannote.core.Annotation</code>, not on the wrapper. (<a href=\"https://pyannote.github.io/pyannote-core/_modules/pyannote/core/annotation.html\" title=\"Source code for pyannote.core.annotation\">pyannote.github.io</a>)</li>\n<li>The model card snippet you followed is the legacy 3.1 example that returned an <code>Annotation</code> directly. That is why your call failed on 4.x. (<a href=\"https://huggingface.co/pyannote/speaker-diarization-3.1\" title=\"pyannote/speaker-diarization-3.1\">Hugging Face</a>)</li>\n</ul>\n<p>Option if you want the old behavior: pin to the legacy stack (<code>pyannote.audio==3.1.x</code>) where <code>pipeline(...)</code> returns an <code>Annotation</code>, and the snippet <code>diarization.write_rttm(...)</code> works as-is. Note 4.x introduced several breaking changes, including API renames. (<a href=\"https://github.com/pyannote/pyannote-audio/releases\" title=\"Releases · pyannote/pyannote-audio\">GitHub</a>)</p>", "post_number": 4, "post_type": 1, "posts_count": 6, "updated_at": "2025-10-22T12:48:54.185Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 2, "reads": 2, "readers_count": 1, "score": 25.4, "yours": false, "topic_id": 169326, "topic_slug": "problem-with-pyannote-audio-3-1-0", "display_username": "John Smith", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": [ { "url": "https://huggingface.co/pyannote/speaker-diarization-3.1", "internal": false, "reflection": false, "title": "pyannote/speaker-diarization-3.1 · Hugging Face", "clicks": 1 }, { "url": "https://github.com/pyannote/pyannote-audio", "internal": false, "reflection": false, "title": "GitHub - pyannote/pyannote-audio: Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding", "clicks": 1 }, { "url": "https://pyannote.github.io/pyannote-core/_modules/pyannote/core/annotation.html", "internal": false, "reflection": false, "title": "pyannote.core.annotation — pyannote.core 6.0.2.dev0+gb83999a4e.d20250916 documentation", "clicks": 1 }, { "url": "https://github.com/pyannote/pyannote-audio/releases", "internal": false, "reflection": false, "title": "Releases · pyannote/pyannote-audio · GitHub", "clicks": 0 } ], "read": true, "user_title": "Regular", "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 52272, "hidden": false, "trust_level": 3, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-audio-3-1-0/169326/4", "reactions": [ { "id": "+1", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": true, "topic_accepted_answer": true, "can_vote": null, "title_is_group": false, 
"reply_to_user": null, "action_code": null, "via_email": null }, { "id": 244024, "name": "MAJH", "username": "aldkela", "avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/4bbf92/{size}.png", "created_at": "2025-10-23T18:31:44.078Z", "cooked": "<p>Hello, finally it works!!!</p>\n<p>I thought I made mistake again.. I didn’t even think there was a change due to a library version upgrade..</p>\n<p>Thank you so much now I can use this model without any problem!!!</p>", "post_number": 5, "post_type": 1, "posts_count": 6, "updated_at": "2025-10-23T18:31:44.078Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 1, "reads": 2, "readers_count": 1, "score": 20.4, "yours": false, "topic_id": 169326, "topic_slug": "problem-with-pyannote-audio-3-1-0", "display_username": "MAJH", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 105819, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-audio-3-1-0/169326/5", "reactions": [ { "id": "confetti_ball", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 244046, "name": "system", "username": "system", "avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png", "created_at": "2025-10-24T06:32:17.200Z", "cooked": "<p>This topic was automatically closed 12 hours after the last reply. 
New replies are no longer allowed.</p>", "post_number": 6, "post_type": 3, "posts_count": 6, "updated_at": "2025-10-24T06:32:17.200Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 1, "reads": 1, "readers_count": 0, "score": 0.2, "yours": false, "topic_id": 169326, "topic_slug": "problem-with-pyannote-audio-3-1-0", "display_username": "system", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [], "moderator": true, "admin": true, "staff": true, "user_id": -1, "hidden": false, "trust_level": 4, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/problem-with-pyannote-audio-3-1-0/169326/6", "reactions": [], "current_user_reaction": null, "reaction_users_count": 0, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": null, "action_code": "autoclosed.enabled", "via_email": null } ]
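<p>One detail worth pulling out of the replies above: the pipeline snippets hardcode the access token, and the answer explicitly warns against that. A minimal variant, assuming the token is exported in an environment variable (the name <code>HF_TOKEN</code> is a common convention here, not something the API requires):</p>
<pre data-code-wrap="python"><code class="lang-python">import os
from pyannote.audio import Pipeline

# Read the gated-model token from the environment instead of hardcoding it.
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    token=os.environ["HF_TOKEN"],
)
</code></pre>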
<p>Hello, I was trying to use model named pyannote/speaker-diarization-3.1</p> <p>so I installed some libraries as below</p> <pre><code class="lang-auto">%pip install pyannote.audio==3.1.0 %pip install numpy==1.26 </code></pre> <p>Here is the result and I think I installed this properly…</p> <pre><code class="lang-auto">Collecting pyannote.audio==3.1.0 Using cached pyannote.audio-3.1.0-py2.py3-none-any.whl.metadata (7.8 kB) Requirement already satisfied: asteroid-filterbanks&gt;=0.4 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (0.4.0) Requirement already satisfied: einops&gt;=0.6.0 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (0.8.1) Requirement already satisfied: huggingface-hub&gt;=0.13.0 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (0.35.3) Requirement already satisfied: lightning&gt;=2.0.1 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (2.5.5) Requirement already satisfied: omegaconf&lt;3.0,&gt;=2.1 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (2.3.0) Requirement already satisfied: pyannote.core&gt;=5.0.0 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (6.0.1) Requirement already satisfied: pyannote.database&gt;=5.0.1 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (6.1.0) Requirement already satisfied: pyannote.metrics&gt;=3.2 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (4.0.0) Requirement already satisfied: pyannote.pipeline&gt;=3.0.1 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (4.0.0) Requirement already satisfied: pytorch-metric-learning&gt;=2.1.0 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (2.9.0) Requirement already satisfied: rich&gt;=12.0.0 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (14.2.0) Requirement already satisfied: semver&gt;=3.0.0 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (3.0.4) Requirement already satisfied: soundfile&gt;=0.12.1 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (0.13.1) Requirement already satisfied: speechbrain&gt;=0.5.14 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (1.0.3) Requirement already satisfied: tensorboardX&gt;=2.6 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (2.6.4) Requirement already satisfied: torch&gt;=2.0.0 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (2.9.0+cu126) Requirement already satisfied: torch-audiomentations&gt;=0.11.0 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (0.12.0) Requirement already satisfied: torchaudio&gt;=2.0.0 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (2.9.0) Requirement already satisfied: torchmetrics&gt;=0.11.0 in c:\gpt_agent_2025_book\venv\lib\site-packages (from pyannote.audio==3.1.0) (1.8.2) Requirement already satisfied: antlr4-python3-runtime==4.9.* in c:\gpt_agent_2025_book\venv\lib\site-packages (from omegaconf&lt;3.0,&gt;=2.1-&gt;pyannote.audio==3.1.0) (4.9.3) Requirement already satisfied: PyYAML&gt;=5.1.0 in c:\gpt_agent_2025_book\venv\lib\site-packages (from omegaconf&lt;3.0,&gt;=2.1-&gt;pyannote.audio==3.1.0) (6.0.3) Requirement already satisfied: numpy in c:\gpt_agent_2025_book\venv\lib\site-packages (from 
asteroid-filterbanks&gt;=0.4-&gt;pyannote.audio==3.1.0) (1.26.0) Requirement already satisfied: typing-extensions in c:\gpt_agent_2025_book\venv\lib\site-packages (from asteroid-filterbanks&gt;=0.4-&gt;pyannote.audio==3.1.0) (4.15.0) ... Uninstalling numpy-2.3.4: Successfully uninstalled numpy-2.3.4 Successfully installed numpy-1.26.0 Note: you may need to restart the kernel to use updated packages. Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings... ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. pyannote-core 6.0.1 requires numpy&gt;=2.0, but you have numpy 1.26.0 which is incompatible. pyannote-metrics 4.0.0 requires numpy&gt;=2.2.2, but you have numpy 1.26.0 which is incompatible. </code></pre> <p>I ran this code to load the ffmpeg</p> <pre data-code-wrap="python"><code class="lang-python">from pathlib import Path import os, sys ffmpeg_dll_dir = Path(r"C:\Users\majh0\miniconda3\Library\bin") assert ffmpeg_dll_dir.exists(), ffmpeg_dll_dir os.add_dll_directory(str(ffmpeg_dll_dir)) import torch, torchcodec, platform, subprocess print("exe:", sys.executable) print("torch", torch.__version__, "torchcodec", torchcodec.__version__, "py", platform.python_version()) subprocess.run(["ffmpeg", "-version"], check=True) print("cuda torch?",torch.cuda.is_available()) </code></pre> <p>and the result looks fine to me..</p> <pre><code class="lang-auto">exe: c:\GPT_AGENT_2025_BOOK\venv\Scripts\python.exe torch 2.9.0+cu126 torchcodec 0.8.0 py 3.12.9 cuda torch? True </code></pre> <p>I ran this code and it gave me an error as below…</p> <pre data-code-wrap="python"><code class="lang-python"># instantiate the pipeline import torch from pyannote.audio import Pipeline pipeline = Pipeline.from_pretrained( "pyannote/speaker-diarization-3.1", token="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") if torch.cuda.is_available(): pipeline.to(torch.device("cuda")) print("Using CUDA") else: print("Using CPU") </code></pre> <pre><code class="lang-auto">--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[3], line 3 1 # instantiate the pipeline 2 import torch ----&gt; 3 from pyannote.audio import Pipeline 4 pipeline = Pipeline.from_pretrained( 5 "pyannote/speaker-diarization-3.1", 6 token="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") 9 if torch.cuda.is_available(): File c:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\pyannote\audio\__init__.py:29 25 except ImportError: 26 pass ---&gt; 29 from .core.inference import Inference 30 from .core.io import Audio 31 from .core.model import Model File c:\GPT_AGENT_2025_BOOK\venv\Lib\site-packages\pyannote\audio\core\inference.py:36 33 from pyannote.core import Segment, SlidingWindow, SlidingWindowFeature 34 from pytorch_lightning.utilities.memory import is_oom_error ---&gt; 36 from pyannote.audio.core.io import AudioFile 37 from pyannote.audio.core.model import Model, Specifications 38 from pyannote.audio.core.task import Resolution ... 49 - a "str" or "Path" instance: "audio.wav" or Path("audio.wav") (...) 
56 integer to load a specific channel: {"audio": "stereo.wav", "channel": 0} 57 """ AttributeError: module 'torchaudio' has no attribute 'set_audio_backend' </code></pre> <p>I have checked the document and it says I need to install <a href="https://github.com/pyannote/pyannote-audio" rel="noopener nofollow ugc"><code>pyannote.audio</code></a> <code>3.1</code></p> <p>I don’t know why this thing doesn’t work…. I tried to solve this problem for 3hrs changing version of pyannote.audio but this thing didn’t give me solution..</p> <p>Do I need to delete venv and reinstall it clearly..?</p> <p>Thank you so much for the help in advance..</p>
<p>It seems like a partial hit.<img src="https://emoji.discourse-cdn.com/apple/sweat_smile.png?v=14" title=":sweat_smile:" class="emoji" alt=":sweat_smile:" loading="lazy" width="20" height="20"> The cause is a specification change due to a library version upgrade, but it appears to be because the returned object changed, not because the function itself changed.</p> <hr> <p>You’re on <code>pyannote.audio</code> 4.x. In 4.x the pipeline returns a <strong><code>DiarizeOutput</code></strong> object, not an <code>Annotation</code>. The <code>Annotation</code> lives at <code>output.speaker_diarization</code>. <code>write_rttm</code> is a method of <code>Annotation</code>, so call it there.</p> <pre data-code-wrap="python"><code class="lang-python">from pyannote.audio import Pipeline import torch pipeline = Pipeline.from_pretrained( "pyannote/speaker-diarization-3.1", token="YOUR_HF_TOKEN" ) if torch.cuda.is_available(): pipeline.to("cuda") out = pipeline("./guitar.wav") # out is DiarizeOutput ann = out.speaker_diarization # this is an Annotation with open("./guitar.rttm", "w", encoding="utf-8") as f: ann.write_rttm(f) </code></pre> <p>Evidence</p> <ul> <li>The current README shows usage as <code>output = pipeline(...); for turn, spk in output.speaker_diarization: ...</code>, proving the wrapper return type in 4.x. (<a href="https://github.com/pyannote/pyannote-audio" title="GitHub - pyannote/pyannote-audio: Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding">GitHub</a>)</li> <li><code>write_rttm</code> is defined on <code>pyannote.core.Annotation</code>, not on the wrapper. (<a href="https://pyannote.github.io/pyannote-core/_modules/pyannote/core/annotation.html" title="Source code for pyannote.core.annotation">pyannote.github.io</a>)</li> <li>The model card snippet you followed is the legacy 3.1 example that returned an <code>Annotation</code> directly. That is why your call failed on 4.x. (<a href="https://huggingface.co/pyannote/speaker-diarization-3.1" title="pyannote/speaker-diarization-3.1">Hugging Face</a>)</li> </ul> <p>Option if you want the old behavior: pin to the legacy stack (<code>pyannote.audio==3.1.x</code>) where <code>pipeline(...)</code> returns an <code>Annotation</code>, and the snippet <code>diarization.write_rttm(...)</code> works as-is. Note 4.x introduced several breaking changes, including API renames. (<a href="https://github.com/pyannote/pyannote-audio/releases" title="Releases · pyannote/pyannote-audio">GitHub</a>)</p>
How to make my customized pipeline consumable for Transformers.js
https://discuss.huggingface.co/t/how-to-make-my-customized-pipeline-consumable-for-transformers-js/169036
169,036
5
2025-10-08T15:06:33.223000Z
[ { "id": 243309, "name": "Sicheng Mao", "username": "alephpi", "avatar_template": "/user_avatar/discuss.huggingface.co/alephpi/{size}/54288_2.png", "created_at": "2025-10-08T15:06:33.311Z", "cooked": "<p>Hi community,</p>\n<p>Here is my image-to-text pipeline:</p>\n<p>(<em>customized</em> means not a registered one in official Transformers)</p>\n<p>A <em>customized</em> Image processor,</p>\n<p>A VisionEncoderDecoder, with a <em>customized</em> vision encoder that inherits the PretrainedModel and a MBartDecoder,</p>\n<p>A WordLevel tokenizer (yes I haven’t used a MBartTokenizer and I have distilled my own one for specific corpus).</p>\n<p>I want to consume this pipeline in Transformers.js, however I notice that all examples given in Transformers.js documentation seem like pulling from a ready made Transformers pipeline with official components and configurations, <strong>I just wonder is it possible to turn my customized pipeline consumable for Transformers.js, or to what extent my pipeline could be partially turned to?</strong></p>\n<p>My guess is that the I should make my own image preprocessing step and send the image input tensor to the model, in that way, which kind of js libraries you recommend to use? (It won’t be very intensive, just simply resize and normalize things plus a crop-white-margin function which doesn’t exist in Transformers’ image processors).</p>\n<p><strong>Also just to be sure, is my VisionEncoderDecoder possible to export to an onnx format to be consumable for Transformers.js?</strong></p>\n<p>Of course my model should be possible to run in browser (and that’s the whole point for me to do this), as it has only 20M parameters (way less than the showcase in Transformers.js)</p>\n<p>Thanks for your help in advance!</p>", "post_number": 1, "post_type": 1, "posts_count": 12, "updated_at": "2025-10-08T15:19:25.343Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 26, "reads": 9, "readers_count": 8, "score": 21.6, "yours": false, "topic_id": 169036, "topic_slug": "how-to-make-my-customized-pipeline-consumable-for-transformers-js", "display_username": "Sicheng Mao", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 2, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": [ { "url": "https://discuss.huggingface.co/t/load-model-from-platform-other-than-hf-hub-and-display-a-progress-bar-by-from-pretrained-in-transformers-js/169364", "internal": true, "reflection": true, "title": "Load model from platform other than HF Hub and display a progress bar by `from_pretrained()` in Transformers.js", "clicks": 0 } ], "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 104516, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/how-to-make-my-customized-pipeline-consumable-for-transformers-js/169036/1", "reactions": [ { "id": "eyes", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": false, "title_is_group": null, "reply_to_user": 
null, "action_code": null, "via_email": null }, { "id": 243331, "name": "John Smith", "username": "John6666", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png", "created_at": "2025-10-08T23:15:26.000Z", "cooked": "<p>It <a href=\"https://huggingface.co/datasets/John6666/forum1/blob/main/transformer_js_custom_pipeline_1.md\">seems possible</a>. For Transoformers.js, there’s a dedicated channel on the HF Discord, so asking there would be the most reliable option.</p>", "post_number": 2, "post_type": 1, "posts_count": 12, "updated_at": "2025-10-08T23:15:26.000Z", "reply_count": 2, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 0, "reads": 8, "readers_count": 7, "score": 26.4, "yours": false, "topic_id": 169036, "topic_slug": "how-to-make-my-customized-pipeline-consumable-for-transformers-js", "display_username": "John Smith", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": [ { "url": "https://huggingface.co/datasets/John6666/forum1/blob/main/transformer_js_custom_pipeline_1.md", "internal": false, "reflection": false, "title": "transformer_js_custom_pipeline_1.md · John6666/forum1 at main", "clicks": 2 } ], "read": true, "user_title": "Regular", "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 52272, "hidden": false, "trust_level": 3, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/how-to-make-my-customized-pipeline-consumable-for-transformers-js/169036/2", "reactions": [ { "id": "heart", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": true, "topic_accepted_answer": true, "can_vote": null, "title_is_group": false, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 243351, "name": "Sicheng Mao", "username": "alephpi", "avatar_template": "/user_avatar/discuss.huggingface.co/alephpi/{size}/54288_2.png", "created_at": "2025-10-09T05:47:31.103Z", "cooked": "<p>Thanks let me check!</p>", "post_number": 3, "post_type": 1, "posts_count": 12, "updated_at": "2025-10-09T05:47:31.103Z", "reply_count": 0, "reply_to_post_number": 2, "quote_count": 0, "incoming_link_count": 0, "reads": 8, "readers_count": 7, "score": 16.4, "yours": false, "topic_id": 169036, "topic_slug": "how-to-make-my-customized-pipeline-consumable-for-transformers-js", "display_username": "Sicheng Mao", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 104516, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": 
"/t/how-to-make-my-customized-pipeline-consumable-for-transformers-js/169036/3", "reactions": [ { "id": "+1", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": { "id": 52272, "username": "John6666", "name": "John Smith", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png" }, "action_code": null, "via_email": null }, { "id": 243504, "name": "Sicheng Mao", "username": "alephpi", "avatar_template": "/user_avatar/discuss.huggingface.co/alephpi/{size}/54288_2.png", "created_at": "2025-10-13T17:27:00.991Z", "cooked": "<p>Hi John,<br>\nI try to follow your export script and I made to export 1 onnx file with the following:</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">register_tasks_manager_onnx = TasksManager.create_register(\"onnx\")\n@register_tasks_manager_onnx(\"my_hgnetv2\", *[\"feature-extraction\"])\nclass HGNetv2OnnxConfig(ViTOnnxConfig):\n @property\n def inputs(self):\n return {\"pixel_values\": {0: \"batch\"}} # only dynamical axis is needed to list here\n @property\n def outputs(self):\n return {\"last_hidden_state\": {0: \"batch\"}}\n\ndef export_onnx():\n path='./model'\n model = VisionEncoderDecoderModel.from_pretrained(path)\n onnx_config_constructor = TasksManager.get_exporter_config_constructor(\n exporter=\"onnx\",\n model=model,\n task=\"image-to-text\",\n library_name=\"transformers\",\n exporter_config_kwargs={\"use_past\": True},\n )\n onnx_config = onnx_config_constructor(model.config)\n out = Path(\"./model/onnx\")\n out.mkdir(exist_ok=True)\n\n inputs, outputs = export(model, \n onnx_config, \n out/\"model.onnx\", \n onnx_config.DEFAULT_ONNX_OPSET,\n input_shapes={\"pixel_values\": [1, 3, 384, 384]},\n )\n print(inputs)\n print(outputs)\n</code></pre>\n<p>However, I don’t know how to export to trio .onnx file with the cli, since within the python script, I can register the customized config, but I don’t know how to register it with cli…</p>", "post_number": 4, "post_type": 1, "posts_count": 12, "updated_at": "2025-10-13T17:27:47.078Z", "reply_count": 1, "reply_to_post_number": 2, "quote_count": 0, "incoming_link_count": 0, "reads": 7, "readers_count": 6, "score": 21.2, "yours": false, "topic_id": 169036, "topic_slug": "how-to-make-my-customized-pipeline-consumable-for-transformers-js", "display_username": "Sicheng Mao", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 2, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 104516, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/how-to-make-my-customized-pipeline-consumable-for-transformers-js/169036/4", "reactions": [ { "id": "eyes", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, 
"topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": { "id": 52272, "username": "John6666", "name": "John Smith", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png" }, "action_code": null, "via_email": null }, { "id": 243505, "name": "Sicheng Mao", "username": "alephpi", "avatar_template": "/user_avatar/discuss.huggingface.co/alephpi/{size}/54288_2.png", "created_at": "2025-10-13T17:54:45.869Z", "cooked": "<p>Oh I see, it’s here <a href=\"https://huggingface.co/docs/optimum-onnx/onnx/usage_guides/export_a_model#customize-the-export-of-official-transformers-models\" class=\"inline-onebox\">Export a model to ONNX with optimum.exporters.onnx</a> and we need to use <code>main_export</code> instead of <code>export</code></p>", "post_number": 5, "post_type": 1, "posts_count": 12, "updated_at": "2025-10-13T17:54:45.869Z", "reply_count": 1, "reply_to_post_number": 4, "quote_count": 0, "incoming_link_count": 0, "reads": 5, "readers_count": 4, "score": 21, "yours": false, "topic_id": 169036, "topic_slug": "how-to-make-my-customized-pipeline-consumable-for-transformers-js", "display_username": "Sicheng Mao", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": [ { "url": "https://huggingface.co/docs/optimum-onnx/onnx/usage_guides/export_a_model#customize-the-export-of-official-transformers-models", "internal": false, "reflection": false, "title": "Export a model to ONNX with optimum.exporters.onnx", "clicks": 0 } ], "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 104516, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/how-to-make-my-customized-pipeline-consumable-for-transformers-js/169036/5", "reactions": [ { "id": "+1", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": { "id": 104516, "username": "alephpi", "name": "Sicheng Mao", "avatar_template": "/user_avatar/discuss.huggingface.co/alephpi/{size}/54288_2.png" }, "action_code": null, "via_email": null }, { "id": 243509, "name": "Sicheng Mao", "username": "alephpi", "avatar_template": "/user_avatar/discuss.huggingface.co/alephpi/{size}/54288_2.png", "created_at": "2025-10-13T20:49:24.000Z", "cooked": "<p>Finally I use the following:</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">def export_onnx():\n path='./model'\n out = Path(\"./model/trio_onnx\")\n out.mkdir(exist_ok=True)\n\n main_export(\n path,\n task=\"image-to-text\",\n output=out,\n )\n</code></pre>\n<p>However, this can only export to <code>encoder_model.onnx</code> and <code>decoder_model.onnx</code>, since I have no idea how the <code>use_past=True</code> can be injected with main_export’s argument(The example in the above link doesn’t work out), I monkey-patched the source code to make it export to trio onnx.</p>", "post_number": 6, "post_type": 1, "posts_count": 12, "updated_at": 
"2025-10-13T20:49:24.000Z", "reply_count": 0, "reply_to_post_number": 5, "quote_count": 0, "incoming_link_count": 0, "reads": 5, "readers_count": 4, "score": 16, "yours": false, "topic_id": 169036, "topic_slug": "how-to-make-my-customized-pipeline-consumable-for-transformers-js", "display_username": "Sicheng Mao", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 104516, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/how-to-make-my-customized-pipeline-consumable-for-transformers-js/169036/6", "reactions": [ { "id": "+1", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": { "id": 104516, "username": "alephpi", "name": "Sicheng Mao", "avatar_template": "/user_avatar/discuss.huggingface.co/alephpi/{size}/54288_2.png" }, "action_code": null, "via_email": null }, { "id": 243513, "name": "John Smith", "username": "John6666", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png", "created_at": "2025-10-13T23:14:53.440Z", "cooked": "<p>For Transformer.js:</p>\n<hr>\n<p>Use <code>main_export()</code> <strong>with</strong> <code>custom_onnx_configs</code> and <code>with_behavior(..., use_past=True)</code> to get the trio. Do not monkey-patch.</p>\n<h1><a name=\"p-243513-background-and-context-1\" class=\"anchor\" href=\"#p-243513-background-and-context-1\"></a>Background and context</h1>\n<ul>\n<li>Why a “trio”: seq2seq generation needs a one-off <strong>decoder</strong> for the first token and a <strong>decoder_with_past</strong> for subsequent tokens so KV-cache is reused. This is the supported pattern. (<a href=\"https://discuss.huggingface.co/t/when-exporting-seq2seq-models-with-onnx-why-do-we-need-both-decoder-with-past-model-onnx-and-decoder-model-onnx/33354\" title=\"When exporting seq2seq models with ONNX, why do we ...\">Hugging Face Forums</a>)</li>\n<li>Where to set it: Optimum’s exporter lets you pass <strong>custom_onnx_configs</strong> to <code>main_export()</code> and choose behaviors per subgraph: <code>\"encoder\"</code>, <code>\"decoder\"</code>, and <code>\"decoder with past\"</code>. You can also disable post-processing so files are kept separate. (<a href=\"https://huggingface.co/docs/optimum-onnx/onnx/usage_guides/export_a_model\" title=\"Export a model to ONNX with optimum.exporters.onnx\">Hugging Face</a>)</li>\n<li>Transformers.js expects this layout. Public web-ready repos ship <code>onnx/{encoder_model.onnx, decoder_model.onnx, decoder_with_past_model.onnx}</code> or a merged decoder. 
(<a href=\"https://huggingface.co/Xenova/vit-gpt2-image-captioning\" title=\"Xenova/vit-gpt2-image-captioning\">Hugging Face</a>)</li>\n</ul>\n<h1><a name=\"p-243513-minimal-correct-export-no-patches-2\" class=\"anchor\" href=\"#p-243513-minimal-correct-export-no-patches-2\"></a>Minimal, correct export (no patches)</h1>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\"># refs:\n# - Export guide (custom_onnx_configs + with_behavior + no_post_process):\n# https://huggingface.co/docs/optimum-onnx/onnx/usage_guides/export_a_model\n# - main_export reference:\n# https://huggingface.co/docs/optimum-onnx/en/onnx/package_reference/export\n\nfrom pathlib import Path\nfrom transformers import AutoConfig\nfrom optimum.exporters.onnx import main_export\nfrom optimum.exporters.tasks import TasksManager\n\nmodel_dir = \"./model\" # your VisionEncoderDecoder checkpoint\nout = Path(\"./model/trio_onnx\"); out.mkdir(parents=True, exist_ok=True)\n\n# Build an ONNX config for your model+task\ncfg = AutoConfig.from_pretrained(model_dir)\nctor = TasksManager.get_exporter_config_constructor(\n model_type=cfg.model_type, backend=\"onnx\", task=\"image-to-text\" # vision→text task\n)\nonnx_cfg = ctor(config=cfg, task=\"image-to-text\")\n\n# Ask explicitly for the three subgraphs\ncustom_onnx_configs = {\n \"encoder_model\": onnx_cfg.with_behavior(\"encoder\"),\n \"decoder_model\": onnx_cfg.with_behavior(\"decoder\", use_past=False),\n \"decoder_with_past_model\": onnx_cfg.with_behavior(\"decoder\", use_past=True),\n}\n\n# Export. Keep trio separate (avoid automatic merge).\nmain_export(\n model=model_dir,\n task=\"image-to-text\",\n output=str(out),\n custom_onnx_configs=custom_onnx_configs,\n no_post_process=True,\n)\n</code></pre>\n<p>Why this works: Optimum documents <code>custom_onnx_configs</code> and <code>with_behavior(\"decoder\", use_past=True)</code> to emit <code>decoder_with_past_model.onnx</code>; <code>no_post_process=True</code> prevents the exporter from merging decoders. (<a href=\"https://huggingface.co/docs/optimum-onnx/onnx/usage_guides/export_a_model\" title=\"Export a model to ONNX with optimum.exporters.onnx\">Hugging Face</a>)</p>\n<h1><a name=\"p-243513-verify-and-align-with-transformersjs-3\" class=\"anchor\" href=\"#p-243513-verify-and-align-with-transformersjs-3\"></a>Verify and align with Transformers.js</h1>\n<ul>\n<li>Check the output folder contains exactly: <code>encoder_model.onnx</code>, <code>decoder_model.onnx</code>, <code>decoder_with_past_model.onnx</code>. This mirrors working web repos. (<a href=\"https://huggingface.co/Xenova/vit-gpt2-image-captioning/tree/main/onnx\" title=\"Xenova/vit-gpt2-image-captioning at main\">Hugging Face</a>)</li>\n<li>Use that folder structure in your web model repo. Xenova’s captioner card recommends this layout for browser use. (<a href=\"https://huggingface.co/Xenova/vit-gpt2-image-captioning\" title=\"Xenova/vit-gpt2-image-captioning\">Hugging Face</a>)</li>\n</ul>\n<h1><a name=\"p-243513-common-failure-modes-and-fixes-4\" class=\"anchor\" href=\"#p-243513-common-failure-modes-and-fixes-4\"></a>Common failure modes and fixes</h1>\n<ul>\n<li><strong>Only two files produced</strong>: you didn’t request the with-past behavior. Add the <code>custom_onnx_configs</code> dict as above. 
(<a href=\"https://huggingface.co/docs/optimum-onnx/onnx/usage_guides/export_a_model\" title=\"Export a model to ONNX with optimum.exporters.onnx\">Hugging Face</a>)</li>\n<li><strong>Decoder files merged</strong>: remove the merge by setting <code>no_post_process=True</code>. The doc names this exact flag. (<a href=\"https://huggingface.co/docs/optimum-onnx/onnx/usage_guides/export_a_model\" title=\"Export a model to ONNX with optimum.exporters.onnx\">Hugging Face</a>)</li>\n<li><strong>Unsure which tasks your model supports</strong>: query <code>TasksManager.get_supported_tasks_for_model_type(model_type, \"onnx\")</code> and pick the vision→text task. The export guide shows this workflow. (<a href=\"https://huggingface.co/docs/optimum-onnx/onnx/usage_guides/export_a_model\" title=\"Export a model to ONNX with optimum.exporters.onnx\">Hugging Face</a>)</li>\n<li><strong>Why two decoders at all</strong>: first-token vs subsequent tokens. Author of Transformers.js explains the duplication and runtime need. (<a href=\"https://discuss.huggingface.co/t/when-exporting-seq2seq-models-with-onnx-why-do-we-need-both-decoder-with-past-model-onnx-and-decoder-model-onnx/33354\" title=\"When exporting seq2seq models with ONNX, why do we ...\">Hugging Face Forums</a>)</li>\n</ul>\n<h1><a name=\"p-243513-optional-merged-decoder-5\" class=\"anchor\" href=\"#p-243513-optional-merged-decoder-5\"></a>Optional: merged decoder</h1>\n<p>Some exporters can produce a single <strong><code>decoder_model_merged.onnx</code></strong> that handles both first and subsequent tokens. If you prefer that, omit <code>no_post_process=True</code>. The public ViT-GPT2 repo shows merged and split variants side by side. (<a href=\"https://huggingface.co/Xenova/vit-gpt2-image-captioning/tree/main/onnx\" title=\"Xenova/vit-gpt2-image-captioning at main\">Hugging Face</a>)</p>", "post_number": 7, "post_type": 1, "posts_count": 12, "updated_at": "2025-10-13T23:14:53.440Z", "reply_count": 1, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 0, "reads": 6, "readers_count": 5, "score": 6, "yours": false, "topic_id": 169036, "topic_slug": "how-to-make-my-customized-pipeline-consumable-for-transformers-js", "display_username": "John Smith", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": [ { "url": "https://huggingface.co/docs/optimum-onnx/onnx/usage_guides/export_a_model", "internal": false, "reflection": false, "title": "Export a model to ONNX with optimum.exporters.onnx", "clicks": 1 }, { "url": "https://huggingface.co/Xenova/vit-gpt2-image-captioning/tree/main/onnx", "internal": false, "reflection": false, "title": "Xenova/vit-gpt2-image-captioning at main", "clicks": 0 }, { "url": "https://huggingface.co/Xenova/vit-gpt2-image-captioning", "internal": false, "reflection": false, "title": "Xenova/vit-gpt2-image-captioning · Hugging Face", "clicks": 0 }, { "url": "https://discuss.huggingface.co/t/when-exporting-seq2seq-models-with-onnx-why-do-we-need-both-decoder-with-past-model-onnx-and-decoder-model-onnx/33354", "internal": true, "reflection": false, "title": "When exporting seq2seq models with ONNX, why do we need both decoder_with_past_model.onnx and decoder_model.onnx?", "clicks": 0 } ], "read": true, "user_title": "Regular", "bookmarked": false, "actions_summary": [], 
"moderator": false, "admin": false, "staff": false, "user_id": 52272, "hidden": false, "trust_level": 3, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/how-to-make-my-customized-pipeline-consumable-for-transformers-js/169036/7", "reactions": [], "current_user_reaction": null, "reaction_users_count": 0, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": false, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 243560, "name": "Sicheng Mao", "username": "alephpi", "avatar_template": "/user_avatar/discuss.huggingface.co/alephpi/{size}/54288_2.png", "created_at": "2025-10-14T08:55:40.490Z", "cooked": "<p>Well, I still cannot make this work, by debugging, I find that the main_export() will take me to <code>optimum.exporters.utils._get_submodels_and_export_configs()</code>, and an error raises here</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\"> # When specifying custom export configs for supported transformers architectures, we do\n # not force to specify a custom export config for each submodel.\n for key, custom_export_config in custom_export_configs.items():\n models_and_export_configs[key] = (models_and_export_configs[key][0], custom_export_config)\n</code></pre>\n<p>where the <code>custom_export_configs</code> is the one we passed in with <code>use_past</code> injected, while the <code>models_and_export_configs</code>, generated here</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\"> # TODO: this succession of if/else strongly suggests a refactor is needed.\n if (\n task.startswith(TasksManager._ENCODER_DECODER_TASKS)\n and model.config.is_encoder_decoder\n and not monolith\n ):\n models_and_export_configs = get_encoder_decoder_models_for_export(model, export_config)\n</code></pre>\n<p>doesn’t contain the key “decoder_with_past”, where the default <code>export_config</code> generated here</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\"> export_config_constructor = TasksManager.get_exporter_config_constructor(\n model=model, exporter=exporter, task=task, library_name=library_name\n )\n export_config = export_config_constructor(\n model.config,\n int_dtype=int_dtype,\n float_dtype=float_dtype,\n preprocessors=preprocessors,\n )\n</code></pre>\n<p>with a default <code>use_past=False</code>, therefore would not generate a config for “decoder_with_past”.<br>\nAnd actually here is what I monkey_patched during the debugging.</p>\n<p>I think there is a high dependency between the export config and model config in optimum library, where I although use a customized encoder but still the VisionEncoderDecoder Config as the outermost config, which leads me to the <code>not custom_architecture</code> config processing logic here, which leads to the above error, which may not considered as a normal scenario in design.</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\"> if not custom_architecture:\n if library_name == \"diffusers\":\n export_config = None\n models_and_export_configs = get_diffusion_models_for_export(\n model, int_dtype=int_dtype, float_dtype=float_dtype, exporter=exporter\n )\n else:\n export_config_constructor = TasksManager.get_exporter_config_constructor(\n model=model, exporter=exporter, task=task, library_name=library_name\n )\n export_config = export_config_constructor(\n model.config,\n int_dtype=int_dtype,\n 
float_dtype=float_dtype,\n preprocessors=preprocessors,\n )\n\n export_config.variant = _variant\n all_variants = \"\\n\".join(\n [f\" - {name}: {description}\" for name, description in export_config.VARIANTS.items()]\n )\n logger.info(f\"Using the export variant {export_config.variant}. Available variants are:\\n{all_variants}\")\n\n # TODO: this succession of if/else strongly suggests a refactor is needed.\n if (\n task.startswith(TasksManager._ENCODER_DECODER_TASKS)\n and model.config.is_encoder_decoder\n and not monolith\n ):\n models_and_export_configs = get_encoder_decoder_models_for_export(model, export_config)\n elif task.startswith(\"text-generation\") and not monolith:\n models_and_export_configs = get_decoder_models_for_export(model, export_config)\n elif model.config.model_type == \"sam\":\n models_and_export_configs = get_sam_models_for_export(model, export_config)\n elif model.config.model_type == \"speecht5\":\n models_and_export_configs = get_speecht5_models_for_export(model, export_config, model_kwargs)\n elif model.config.model_type == \"musicgen\":\n models_and_export_configs = get_musicgen_models_for_export(model, export_config)\n else:\n models_and_export_configs = {\"model\": (model, export_config)}\n\n # When specifying custom export configs for supported transformers architectures, we do\n # not force to specify a custom export config for each submodel.\n for key, custom_export_config in custom_export_configs.items():\n models_and_export_configs[key] = (models_and_export_configs[key][0], custom_export_config)\n</code></pre>", "post_number": 8, "post_type": 1, "posts_count": 12, "updated_at": "2025-10-14T09:00:23.165Z", "reply_count": 1, "reply_to_post_number": 7, "quote_count": 0, "incoming_link_count": 0, "reads": 4, "readers_count": 3, "score": 20.8, "yours": false, "topic_id": 169036, "topic_slug": "how-to-make-my-customized-pipeline-consumable-for-transformers-js", "display_username": "Sicheng Mao", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 2, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 104516, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/how-to-make-my-customized-pipeline-consumable-for-transformers-js/169036/8", "reactions": [ { "id": "+1", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": { "id": 52272, "username": "John6666", "name": "John Smith", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png" }, "action_code": null, "via_email": null }, { "id": 243569, "name": "Sicheng Mao", "username": "alephpi", "avatar_template": "/user_avatar/discuss.huggingface.co/alephpi/{size}/54288_2.png", "created_at": "2025-10-14T09:27:23.844Z", "cooked": "<p>Alright, actually we don’t need those verbose configs, just change the task from “image-to-text” to “image-to-text-with-past” will solve the issue 
(no monkey-patch)</p>\n<pre><code class=\"lang-auto\">def export_onnx():\n path='./model'\n out = Path(\"./model/trio_onnx\")\n out.mkdir(exist_ok=True)\n main_export(\n path,\n task=\"image-to-text-with-past\", # to get trio onnx model, use \"-with-past\", otherwise use \"image-to-text\"\n output=out,\n )\n</code></pre>", "post_number": 9, "post_type": 1, "posts_count": 12, "updated_at": "2025-10-14T09:27:35.932Z", "reply_count": 0, "reply_to_post_number": 8, "quote_count": 0, "incoming_link_count": 0, "reads": 3, "readers_count": 2, "score": 15.6, "yours": false, "topic_id": 169036, "topic_slug": "how-to-make-my-customized-pipeline-consumable-for-transformers-js", "display_username": "Sicheng Mao", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 104516, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/how-to-make-my-customized-pipeline-consumable-for-transformers-js/169036/9", "reactions": [ { "id": "+1", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": { "id": 104516, "username": "alephpi", "name": "Sicheng Mao", "avatar_template": "/user_avatar/discuss.huggingface.co/alephpi/{size}/54288_2.png" }, "action_code": null, "via_email": null }, { "id": 243573, "name": "John Smith", "username": "John6666", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png", "created_at": "2025-10-14T11:37:36.605Z", "cooked": "<p>Great. 
<a href=\"https://discuss.huggingface.co/t/what-does-the-decoder-with-past-values-means/21088/2\">About <code>_with_past</code></a></p>", "post_number": 10, "post_type": 1, "posts_count": 12, "updated_at": "2025-10-14T11:37:36.605Z", "reply_count": 1, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 0, "reads": 4, "readers_count": 3, "score": 5.8, "yours": false, "topic_id": 169036, "topic_slug": "how-to-make-my-customized-pipeline-consumable-for-transformers-js", "display_username": "John Smith", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": [ { "url": "https://discuss.huggingface.co/t/what-does-the-decoder-with-past-values-means/21088/2", "internal": true, "reflection": false, "title": "What does the decoder with past values means", "clicks": 1 } ], "read": true, "user_title": "Regular", "bookmarked": false, "actions_summary": [], "moderator": false, "admin": false, "staff": false, "user_id": 52272, "hidden": false, "trust_level": 3, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/how-to-make-my-customized-pipeline-consumable-for-transformers-js/169036/10", "reactions": [], "current_user_reaction": null, "reaction_users_count": 0, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": false, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 244005, "name": "Sicheng Mao", "username": "alephpi", "avatar_template": "/user_avatar/discuss.huggingface.co/alephpi/{size}/54288_2.png", "created_at": "2025-10-23T09:33:46.333Z", "cooked": "<p>Hi John,</p>\n<p>I’ve finally succeeded in implementing the above things. 
Thanks for your help!<br>\nYet I still have some other questions and I think I’d better create a new discussion.</p>", "post_number": 11, "post_type": 1, "posts_count": 12, "updated_at": "2025-10-23T09:36:01.027Z", "reply_count": 0, "reply_to_post_number": 10, "quote_count": 0, "incoming_link_count": 0, "reads": 2, "readers_count": 1, "score": 15.4, "yours": false, "topic_id": 169036, "topic_slug": "how-to-make-my-customized-pipeline-consumable-for-transformers-js", "display_username": "Sicheng Mao", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 2, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 104516, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/how-to-make-my-customized-pipeline-consumable-for-transformers-js/169036/11", "reactions": [ { "id": "confetti_ball", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": { "id": 52272, "username": "John6666", "name": "John Smith", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png" }, "action_code": null, "via_email": null }, { "id": 244029, "name": "system", "username": "system", "avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png", "created_at": "2025-10-23T21:34:35.488Z", "cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>", "post_number": 12, "post_type": 3, "posts_count": 12, "updated_at": "2025-10-23T21:34:35.488Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 0, "reads": 1, "readers_count": 0, "score": 0.2, "yours": false, "topic_id": 169036, "topic_slug": "how-to-make-my-customized-pipeline-consumable-for-transformers-js", "display_username": "system", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [], "moderator": true, "admin": true, "staff": true, "user_id": -1, "hidden": false, "trust_level": 4, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/how-to-make-my-customized-pipeline-consumable-for-transformers-js/169036/12", "reactions": [], "current_user_reaction": null, "reaction_users_count": 0, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": null, "action_code": "autoclosed.enabled", "via_email": null } ]
<p>Hi community,</p> <p>Here is my image-to-text pipeline:</p> <p>(<em>customized</em> means not a registered one in official Transformers)</p> <p>A <em>customized</em> image processor,</p> <p>A VisionEncoderDecoder, with a <em>customized</em> vision encoder that inherits from PreTrainedModel, and an MBartDecoder,</p> <p>A WordLevel tokenizer (I haven’t used an MBartTokenizer; I have distilled my own one for a specific corpus).</p> <p>I want to consume this pipeline in Transformers.js. However, all the examples in the Transformers.js documentation seem to pull from a ready-made Transformers pipeline with official components and configurations, so <strong>I wonder: is it possible to make my customized pipeline consumable by Transformers.js, and if not fully, to what extent can it be partially converted?</strong></p> <p>My guess is that I should implement my own image preprocessing step and send the image input tensor to the model. In that case, which JS libraries would you recommend? (It won’t be very intensive, just resizing and normalization plus a crop-white-margin function that doesn’t exist in Transformers’ image processors.)</p> <p><strong>Also, just to be sure, can my VisionEncoderDecoder be exported to an ONNX format that Transformers.js can consume?</strong></p> <p>My model should certainly be able to run in the browser (that’s the whole point of doing this), as it has only 20M parameters (far fewer than the showcase models in Transformers.js).</p> <p>Thanks for your help in advance!</p>
<p>It <a href="https://huggingface.co/datasets/John6666/forum1/blob/main/transformer_js_custom_pipeline_1.md">seems possible</a>. For Transformers.js, there’s a dedicated channel on the HF Discord, so asking there would be the most reliable option.</p>
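<p>Before handing the exported files to Transformers.js, a quick onnxruntime smoke test can confirm the trio is usable. The sketch below is an illustration rather than code from the thread: the <code>./model/trio_onnx</code> path, the <code>pixel_values</code>/<code>last_hidden_state</code> names and the 384×384 input size follow the export config discussed above, while the decoder feed and the start token id are assumptions, so check them against the printed input/output names and your <code>generation_config.json</code>.</p>
<pre data-code-wrap="python"><code class="lang-python">import numpy as np
import onnxruntime as ort

onnx_dir = "./model/trio_onnx"  # assumed export folder from main_export above

encoder = ort.InferenceSession(f"{onnx_dir}/encoder_model.onnx")
decoder = ort.InferenceSession(f"{onnx_dir}/decoder_model.onnx")

# Inspect the real input/output names; adjust the feeds below if they differ.
for label, sess in [("encoder", encoder), ("decoder", decoder)]:
    print(label,
          [i.name for i in sess.get_inputs()],
          [o.name for o in sess.get_outputs()])

# Encoder pass on a dummy image tensor (batch, channels, height, width).
pixel_values = np.random.rand(1, 3, 384, 384).astype(np.float32)
hidden = encoder.run(None, {"pixel_values": pixel_values})[0]
print("encoder output:", hidden.shape)

# One decoder step. The start token id (2, as for MBart) is an assumption;
# take the real value from your generation_config.json.
input_ids = np.array([[2]], dtype=np.int64)
logits = decoder.run(None, {"input_ids": input_ids,
                            "encoder_hidden_states": hidden})[0]
print("first-step logits:", logits.shape)
</code></pre>
<p>If both steps produce sensibly shaped tensors, the same <code>onnx/</code> folder layout can be uploaded and loaded from Transformers.js.</p>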
Issue with TorchCodec when fine-tuning Whisper ASR model
https://discuss.huggingface.co/t/issue-with-torchcodec-when-fine-tuning-whisper-asr-model/169315
169,315
5
2025-10-21T07:37:40.941000Z
[ { "id": 243905, "name": "Ong Jun Rong", "username": "junnyrong", "avatar_template": "/user_avatar/discuss.huggingface.co/junnyrong/{size}/54763_2.png", "created_at": "2025-10-21T07:37:41.012Z", "cooked": "<p>Hello,</p>\n<p>In the past I have been fine tuning the Whisper-tiny ASR model using these guides:</p>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://learnopencv.com/fine-tuning-whisper-on-custom-dataset/\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/2/0/204a927c63845be135413775d0411d987adb24fe.png\" class=\"site-icon\" alt=\"\" data-dominant-color=\"A6CBE1\" width=\"32\" height=\"32\">\n\n <a href=\"https://learnopencv.com/fine-tuning-whisper-on-custom-dataset/\" target=\"_blank\" rel=\"noopener nofollow ugc\" title=\"01:00PM - 06 August 2024\">LearnOpenCV – Learn OpenCV, PyTorch, Keras, Tensorflow with code, &amp;... – 6 Aug 24</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:600/338;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/c/7/c7750586d9d05f878edd84a6a1a6665ae37136e0.gif\" class=\"thumbnail animated\" alt=\"\" data-dominant-color=\"EDEFF6\" width=\"690\" height=\"388\"></div>\n\n<h3><a href=\"https://learnopencv.com/fine-tuning-whisper-on-custom-dataset/\" target=\"_blank\" rel=\"noopener nofollow ugc\">Fine Tuning Whisper on Custom Dataset</a></h3>\n\n <p>Fine tuning Whisper on a custom dataset involving Air Traffic Control audio and diving deep into the dataset &amp; training code to understand the process.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/blog/fine-tune-whisper\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/blog/fine-tune-whisper\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/337;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/2X/d/d023324d5f93c9a490894d8ec915989a7a655572_2_690x337.jpeg\" class=\"thumbnail\" alt=\"\" data-dominant-color=\"B0CEC7\" width=\"690\" height=\"337\"></div>\n\n<h3><a href=\"https://huggingface.co/blog/fine-tune-whisper\" target=\"_blank\" rel=\"noopener\">Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>It was all working fine, I was able do everything locally like loading a pre-trained Whisper-tiny model and also my own dataset until recently when I updated the modules. 
I have been getting errors like these:</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/3/e/3e0ff636781aeeb1fdff900eafe2f60051f3ea6c.png\" data-download-href=\"/uploads/short-url/8R1NFqqbFyJBPlB72gGxCx6yM68.png?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/3/e/3e0ff636781aeeb1fdff900eafe2f60051f3ea6c.png\" alt=\"image\" data-base62-sha1=\"8R1NFqqbFyJBPlB72gGxCx6yM68\" width=\"690\" height=\"298\" data-dominant-color=\"252727\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1430×618 30.9 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p>I have tried falling back and testing the samples provided by the guides and they also seem to have broke and started giving the same error. I also tried running them on Google Colab where it will crash when trying to run a cell like this:</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/c/2/c2cf5b03a21c3eacb8d525f29c49f087a917a64e.png\" data-download-href=\"/uploads/short-url/rNmSXqNLVggnt0RblKjzDtL6meO.png?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/c/2/c2cf5b03a21c3eacb8d525f29c49f087a917a64e.png\" alt=\"image\" data-base62-sha1=\"rNmSXqNLVggnt0RblKjzDtL6meO\" width=\"690\" height=\"398\" data-dominant-color=\"3C3C3B\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">693×400 11.8 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p>I would like to know if anyone else is also facing the same issue and if there are any solutions for it. 
Thanks in advance!</p>", "post_number": 1, "post_type": 1, "posts_count": 4, "updated_at": "2025-10-21T07:37:41.012Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 21, "reads": 4, "readers_count": 3, "score": 50.8, "yours": false, "topic_id": 169315, "topic_slug": "issue-with-torchcodec-when-fine-tuning-whisper-asr-model", "display_username": "Ong Jun Rong", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": [ { "url": "https://learnopencv.com/fine-tuning-whisper-on-custom-dataset/", "internal": false, "reflection": false, "title": "Fine Tuning Whisper on Custom Dataset", "clicks": 2 }, { "url": "https://huggingface.co/blog/fine-tune-whisper", "internal": false, "reflection": false, "title": "Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers", "clicks": 1 } ], "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 105467, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/issue-with-torchcodec-when-fine-tuning-whisper-asr-model/169315/1", "reactions": [ { "id": "eyes", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": false, "title_is_group": null, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 243907, "name": "John Smith", "username": "John6666", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png", "created_at": "2025-10-21T08:37:37.072Z", "cooked": "<p>This error appears to stem from changes to the audio backend in the datasets library. The quickest workaround may be to install using <code>pip install datasets==3.6.0</code>. Additionally, if using version <code>4.0.0</code> or later, <strong>builder script-type datasets can no longer be used directly from the Hub</strong>. <a href=\"https://huggingface.co/lhoestq/datasets\">You will need to find and use datasets that have been converted to the standard type beforehand</a>. If the original datasets were standard datasets, the latter issue should not be a problem.</p>\n<p>Additionally, since Transformers underwent significant changes around version <code>4.49.0</code>, if you encounter errors related to Whisper, <strong>rolling <code>transformers</code> back to version <code>4.48.3</code> or earlier would be the simplest workaround</strong>. Of course, rewriting for the new version is preferable… but for a temporary fix.</p>\n<hr>\n<p>Your error started after upgrading to <strong><img src=\"https://emoji.discourse-cdn.com/apple/hugs.png?v=14\" title=\":hugs:\" class=\"emoji\" alt=\":hugs:\" loading=\"lazy\" width=\"20\" height=\"20\"> Datasets 4.x</strong>. 4.x <strong>switched audio decoding to TorchCodec</strong>, which <strong>loads FFmpeg at runtime</strong> and also <strong>requires a matching torch↔torchcodec pair</strong>. 
Accessing or printing an <code>Audio</code> column now triggers that decode path, so if FFmpeg is missing or versions don’t line up, you see the probe-and-fail chain (<code>core7 → core6 → core5 → core4 ... Could not load torchcodec</code>). On Windows this is more brittle, and early 4.0 notes even said Windows was not supported yet. (<a href=\"https://huggingface.co/docs/datasets/en/audio_load\" title=\"Load audio data\">Hugging Face</a>)</p>\n<h1><a name=\"p-243907-why-it-broke-now-1\" class=\"anchor\" href=\"#p-243907-why-it-broke-now-1\"></a>Why it broke now</h1>\n<ul>\n<li><strong>Behavior change in Datasets 4.x</strong>: audio is decoded on access via TorchCodec + FFmpeg. Older 3.x used a different backend. Printing an example decodes it. (<a href=\"https://huggingface.co/docs/datasets/en/audio_load\" title=\"Load audio data\">Hugging Face</a>)</li>\n<li><strong>New runtime requirements</strong>: TorchCodec expects FFmpeg on the system and a compatible <code>torch</code> version. The README documents FFmpeg support and the torch↔torchcodec matrix. (<a href=\"https://github.com/meta-pytorch/torchcodec\" title=\"GitHub - meta-pytorch/torchcodec: PyTorch media decoding and encoding\">GitHub</a>)</li>\n<li><strong>Windows caveat</strong>: initial 4.0 release notes warned “not available for Windows yet; use datasets&lt;4.0.” This explains why your previously working Windows setup started failing after upgrade. (<a href=\"https://github.com/huggingface/datasets/releases\" title=\"Releases · huggingface/datasets\">GitHub</a>)</li>\n</ul>\n<h1><a name=\"p-243907-typical-root-causes-2\" class=\"anchor\" href=\"#p-243907-typical-root-causes-2\"></a>Typical root causes</h1>\n<ol>\n<li><strong>FFmpeg missing or wrong major</strong>. TorchCodec supports FFmpeg majors <strong>4–7</strong> on all platforms, with <strong>8</strong> only on macOS/Linux. Missing or mismatched DLLs yields your exact probe sequence. (<a href=\"https://github.com/meta-pytorch/torchcodec\" title=\"GitHub - meta-pytorch/torchcodec: PyTorch media decoding and encoding\">GitHub</a>)</li>\n<li><strong>Torch↔TorchCodec mismatch</strong>. Use the official matrix. Example: <code>torchcodec 0.7 ↔ torch 2.8</code>; <code>0.8 ↔ 2.9</code>. (<a href=\"https://github.com/meta-pytorch/torchcodec\" title=\"GitHub - meta-pytorch/torchcodec: PyTorch media decoding and encoding\">GitHub</a>)</li>\n<li><strong>Fresh 4.0 regressions</strong>. Multiple reports show 3.x works then 4.x fails until TorchCodec+FFmpeg are added and versions pinned. (<a href=\"https://github.com/huggingface/datasets/issues/7678\" title=\"To support decoding audio data, please install 'torchcodec'.\">GitHub</a>)</li>\n</ol>\n<h1><a name=\"p-243907-fixes-and-workarounds-3\" class=\"anchor\" href=\"#p-243907-fixes-and-workarounds-3\"></a>Fixes and workarounds</h1>\n<p>Pick one path. 
Keep it pinned.</p>\n<h2><a name=\"p-243907-a-fastest-unblock-on-windows-4\" class=\"anchor\" href=\"#p-243907-a-fastest-unblock-on-windows-4\"></a>A) Fastest unblock on Windows</h2>\n<pre data-code-wrap=\"bash\"><code class=\"lang-bash\"># Downgrade Datasets to pre-TorchCodec behavior\npip install \"datasets&lt;4.0.0\" # release notes flagged Windows not ready\n# https://github.com/huggingface/datasets/releases/tag/4.0.0\n</code></pre>\n<p>(<a href=\"https://github.com/huggingface/datasets/releases\" title=\"Releases · huggingface/datasets\">GitHub</a>)</p>\n<h2><a name=\"p-243907-b-stay-on-datasets-4x-and-make-it-work-5\" class=\"anchor\" href=\"#p-243907-b-stay-on-datasets-4x-and-make-it-work-5\"></a>B) Stay on Datasets 4.x and make it work</h2>\n<pre data-code-wrap=\"bash\"><code class=\"lang-bash\"># Windows CPU: install FFmpeg and match versions\nconda install -c conda-forge \"ffmpeg&lt;8\" # README recommends conda FFmpeg\npip install \"torch==2.8.*\" \"torchcodec==0.7.*\" # matrix: 0.7 &lt;-&gt; 2.8\n# https://github.com/meta-pytorch/torchcodec#installing-torchcodec\n</code></pre>\n<p>If you need CUDA on Windows, use the experimental conda package:</p>\n<pre data-code-wrap=\"bash\"><code class=\"lang-bash\">conda install -c conda-forge \"ffmpeg&lt;8\" \"torchcodec=*=*cuda*\"\n# https://github.com/meta-pytorch/torchcodec#installing-cuda-enabled-torchcodec\n</code></pre>\n<p>(<a href=\"https://github.com/meta-pytorch/torchcodec\" title=\"GitHub - meta-pytorch/torchcodec: PyTorch media decoding and encoding\">GitHub</a>)</p>\n<h2><a name=\"p-243907-c-linux-or-colab-6\" class=\"anchor\" href=\"#p-243907-c-linux-or-colab-6\"></a>C) Linux or Colab</h2>\n<pre data-code-wrap=\"bash\"><code class=\"lang-bash\"># Colab VM or Linux\napt-get update &amp;&amp; apt-get install -y ffmpeg\npip install -U \"datasets[audio]\" \"torch==2.8.*\" \"torchcodec==0.7.*\"\n# HF docs: audio decoding uses TorchCodec + FFmpeg\n# https://huggingface.co/docs/datasets/en/audio_load\n</code></pre>\n<p>(<a href=\"https://huggingface.co/docs/datasets/en/audio_load\" title=\"Load audio data\">Hugging Face</a>)</p>\n<h2><a name=\"p-243907-d-bypass-decoding-while-you-train-7\" class=\"anchor\" href=\"#p-243907-d-bypass-decoding-while-you-train-7\"></a>D) Bypass decoding while you train</h2>\n<p>Avoid TorchCodec until your env is fixed.</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">from datasets import Audio\n# Option 1: disable globally\nds = ds.decode(False) # https://huggingface.co/docs/datasets/en/package_reference/main_classes#datasets.Dataset.decode\n# Option 2: disable per column\nds = ds.cast_column(\"audio\", Audio(decode=False)) # https://huggingface.co/docs/datasets/en/about_dataset_features\n</code></pre>\n<p>These return paths/bytes rather than decoded arrays, so printing items won’t invoke TorchCodec. 
(<a href=\"https://huggingface.co/docs/datasets/en/package_reference/main_classes\" title=\"Main classes\">Hugging Face</a>)</p>\n<h1><a name=\"p-243907-sanity-checks-8\" class=\"anchor\" href=\"#p-243907-sanity-checks-8\"></a>Sanity checks</h1>\n<pre data-code-wrap=\"bash\"><code class=\"lang-bash\">python - &lt;&lt;'PY'\nimport subprocess, sys\nimport torch\nprint(\"python:\", sys.version)\nprint(\"torch:\", torch.__version__)\ntry:\n import torchcodec\n print(\"torchcodec:\", torchcodec.__version__)\nexcept Exception as e:\n print(\"torchcodec import failed:\", e)\nsubprocess.run([\"ffmpeg\", \"-hide_banner\", \"-version\"])\nPY\n# Matrix and FFmpeg policy:\n# https://github.com/meta-pytorch/torchcodec#installing-torchcodec\n</code></pre>\n<p>(<a href=\"https://github.com/meta-pytorch/torchcodec\" title=\"GitHub - meta-pytorch/torchcodec: PyTorch media decoding and encoding\">GitHub</a>)</p>\n<h1><a name=\"p-243907-context-from-your-linked-thread-9\" class=\"anchor\" href=\"#p-243907-context-from-your-linked-thread-9\"></a>Context from your linked thread</h1>\n<p>Your screenshots show Datasets 4.x decoding an <code>Audio</code> column, TorchCodec probing FFmpeg 7→6→5→4, then failing. That matches the new 4.x behavior and the FFmpeg/compatibility requirements above. (<a href=\"https://discuss.huggingface.co/t/issue-with-torchcodec-when-fine-tuning-whisper-asr-model/169315\" title=\"Issue with TorchCodec when fine-tuning Whisper ASR model - Beginners - Hugging Face Forums\">Hugging Face Forums</a>)</p>\n<h1><a name=\"p-243907-extra-references-and-pitfalls-10\" class=\"anchor\" href=\"#p-243907-extra-references-and-pitfalls-10\"></a>Extra references and pitfalls</h1>\n<ul>\n<li><strong>Release notes roundup</strong>: breaking changes, removal of scripts, and the Windows note. Useful if other 4.0 changes surfaced after your upgrade. (<a href=\"https://newreleases.io/project/github/huggingface/datasets/release/4.0.0\" title=\"huggingface/datasets 4.0.0 on GitHub\">NewReleases</a>)</li>\n<li><strong>Known mismatch/FFmpeg pitfalls</strong>: reports of brew-FFmpeg conflicts and version-mismatch guidance from TorchCodec maintainers. (<a href=\"https://github.com/pytorch/torchcodec/issues/570\" title=\"torchcodec not compatible with brew-installed ffmpeg #570\">GitHub</a>)</li>\n<li><strong>PyTorch/Torchaudio migration</strong>: decoding is consolidating on TorchCodec (<code>load_with_torchcodec</code> exists as a bridge). Aligns your stack with where the ecosystem is going. 
(<a href=\"https://docs.pytorch.org/audio/main/torchaudio.html\" title=\"Torchaudio 2.8.0 documentation\">PyTorch Documentation</a>)</li>\n</ul>", "post_number": 2, "post_type": 1, "posts_count": 4, "updated_at": "2025-10-21T08:37:37.072Z", "reply_count": 1, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 5, "reads": 3, "readers_count": 2, "score": 15.6, "yours": false, "topic_id": 169315, "topic_slug": "issue-with-torchcodec-when-fine-tuning-whisper-asr-model", "display_username": "John Smith", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": [ { "url": "https://huggingface.co/docs/datasets/en/audio_load", "internal": false, "reflection": false, "title": "Load audio data", "clicks": 1 }, { "url": "https://github.com/huggingface/datasets/issues/7678", "internal": false, "reflection": false, "title": "To support decoding audio data, please install 'torchcodec'. · Issue #7678 · huggingface/datasets · GitHub", "clicks": 1 }, { "url": "https://newreleases.io/project/github/huggingface/datasets/release/4.0.0", "internal": false, "reflection": false, "title": "huggingface/datasets 4.0.0 on GitHub", "clicks": 0 }, { "url": "https://huggingface.co/lhoestq/datasets", "internal": false, "reflection": false, "title": "lhoestq (Quentin Lhoest)", "clicks": 0 }, { "url": "https://github.com/meta-pytorch/torchcodec", "internal": false, "reflection": false, "title": "GitHub - meta-pytorch/torchcodec: PyTorch media decoding and encoding", "clicks": 0 }, { "url": "https://docs.pytorch.org/audio/main/torchaudio.html", "internal": false, "reflection": false, "title": "torchaudio — Torchaudio 2.8.0 documentation", "clicks": 0 }, { "url": "https://github.com/huggingface/datasets/releases", "internal": false, "reflection": false, "title": "Releases · huggingface/datasets · GitHub", "clicks": 0 }, { "url": "https://github.com/pytorch/torchcodec/issues/570", "internal": false, "reflection": false, "title": "torchcodec not compatible with brew-installed ffmpeg · Issue #570 · meta-pytorch/torchcodec · GitHub", "clicks": 0 }, { "url": "https://huggingface.co/docs/datasets/en/package_reference/main_classes", "internal": false, "reflection": false, "title": "Main classes", "clicks": 0 } ], "read": true, "user_title": "Regular", "bookmarked": false, "actions_summary": [], "moderator": false, "admin": false, "staff": false, "user_id": 52272, "hidden": false, "trust_level": 3, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/issue-with-torchcodec-when-fine-tuning-whisper-asr-model/169315/2", "reactions": [], "current_user_reaction": null, "reaction_users_count": 0, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": true, "topic_accepted_answer": true, "can_vote": null, "title_is_group": false, "reply_to_user": null, "action_code": null, "via_email": null }, { "id": 243937, "name": "Ong Jun Rong", "username": "junnyrong", "avatar_template": "/user_avatar/discuss.huggingface.co/junnyrong/{size}/54763_2.png", "created_at": "2025-10-22T01:45:23.750Z", "cooked": "<p>I was pulling my hair thinking it has something to do with TorchCodec’s versioning, it never came to me that it might have been datasets! 
Thank you so much for the detailed explanation too, that solved my issue <img src=\"https://emoji.discourse-cdn.com/apple/smile.png?v=14\" title=\":smile:\" class=\"emoji\" alt=\":smile:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>", "post_number": 3, "post_type": 1, "posts_count": 4, "updated_at": "2025-10-22T01:45:23.750Z", "reply_count": 0, "reply_to_post_number": 2, "quote_count": 0, "incoming_link_count": 0, "reads": 2, "readers_count": 1, "score": 15.4, "yours": false, "topic_id": 169315, "topic_slug": "issue-with-torchcodec-when-fine-tuning-whisper-asr-model", "display_username": "Ong Jun Rong", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [ { "id": 2, "count": 1 } ], "moderator": false, "admin": false, "staff": false, "user_id": 105467, "hidden": false, "trust_level": 1, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/issue-with-torchcodec-when-fine-tuning-whisper-asr-model/169315/3", "reactions": [ { "id": "confetti_ball", "type": "emoji", "count": 1 } ], "current_user_reaction": null, "reaction_users_count": 1, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": { "id": 52272, "username": "John6666", "name": "John Smith", "avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png" }, "action_code": null, "via_email": null }, { "id": 243964, "name": "system", "username": "system", "avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png", "created_at": "2025-10-22T13:45:34.064Z", "cooked": "<p>This topic was automatically closed 12 hours after the last reply. 
New replies are no longer allowed.</p>", "post_number": 4, "post_type": 3, "posts_count": 4, "updated_at": "2025-10-22T13:45:34.064Z", "reply_count": 0, "reply_to_post_number": null, "quote_count": 0, "incoming_link_count": 1, "reads": 1, "readers_count": 0, "score": 5.2, "yours": false, "topic_id": 169315, "topic_slug": "issue-with-torchcodec-when-fine-tuning-whisper-asr-model", "display_username": "system", "primary_group_name": null, "flair_name": null, "flair_url": null, "flair_bg_color": null, "flair_color": null, "flair_group_id": null, "badges_granted": [], "version": 1, "can_edit": false, "can_delete": false, "can_recover": false, "can_see_hidden_post": false, "can_wiki": false, "link_counts": null, "read": true, "user_title": null, "bookmarked": false, "actions_summary": [], "moderator": true, "admin": true, "staff": true, "user_id": -1, "hidden": false, "trust_level": 4, "deleted_at": null, "user_deleted": false, "edit_reason": null, "can_view_edit_history": true, "wiki": false, "post_url": "/t/issue-with-torchcodec-when-fine-tuning-whisper-asr-model/169315/4", "reactions": [], "current_user_reaction": null, "reaction_users_count": 0, "current_user_used_main_reaction": false, "can_accept_answer": false, "can_unaccept_answer": false, "accepted_answer": false, "topic_accepted_answer": true, "can_vote": null, "title_is_group": null, "reply_to_user": null, "action_code": "autoclosed.enabled", "via_email": null } ]
<p>Hello,</p>
<p>In the past I have been fine-tuning the Whisper-tiny ASR model using these guides:</p>
<ul>
<li><a href="https://learnopencv.com/fine-tuning-whisper-on-custom-dataset/">Fine Tuning Whisper on Custom Dataset (LearnOpenCV, 6 Aug 2024)</a></li>
<li><a href="https://huggingface.co/blog/fine-tune-whisper">Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers (Hugging Face blog)</a></li>
</ul>
<p>It was all working fine, and I was able to do everything locally, like loading a pre-trained Whisper-tiny model and my own dataset, until recently when I updated the modules. I have been getting errors like these:</p>
<p>[screenshot: error output, 1430×618]</p>
<p>I have tried falling back and re-running the samples provided by the guides, and they also seem to have broken and started giving the same error. I also tried running them on Google Colab, where it crashes when trying to run a cell like this:</p>
<p>[screenshot: the failing cell, 693×400]</p>
<p>I would like to know if anyone else is also facing the same issue and if there are any solutions for it. Thanks in advance!</p>
<p>This error appears to stem from changes to the audio backend in the datasets library. The quickest workaround may be to install using <code>pip install datasets==3.6.0</code>. Additionally, if using version <code>4.0.0</code> or later, <strong>builder script-type datasets can no longer be used directly from the Hub</strong>. <a href="https://huggingface.co/lhoestq/datasets">You will need to find and use datasets that have been converted to the standard type beforehand</a>. If the original datasets were standard datasets, the latter issue should not be a problem.</p> <p>Additionally, since Transformers underwent significant changes around version <code>4.49.0</code>, if you encounter errors related to Whisper, <strong>rolling <code>transformers</code> back to version <code>4.48.3</code> or earlier would be the simplest workaround</strong>. Of course, rewriting for the new version is preferable… but for a temporary fix.</p> <hr> <p>Your error started after upgrading to <strong><img src="https://emoji.discourse-cdn.com/apple/hugs.png?v=14" title=":hugs:" class="emoji" alt=":hugs:" loading="lazy" width="20" height="20"> Datasets 4.x</strong>. 4.x <strong>switched audio decoding to TorchCodec</strong>, which <strong>loads FFmpeg at runtime</strong> and also <strong>requires a matching torch↔torchcodec pair</strong>. Accessing or printing an <code>Audio</code> column now triggers that decode path, so if FFmpeg is missing or versions don’t line up, you see the probe-and-fail chain (<code>core7 → core6 → core5 → core4 ... Could not load torchcodec</code>). On Windows this is more brittle, and early 4.0 notes even said Windows was not supported yet. (<a href="https://huggingface.co/docs/datasets/en/audio_load" title="Load audio data">Hugging Face</a>)</p> <h1><a name="p-243907-why-it-broke-now-1" class="anchor" href="#p-243907-why-it-broke-now-1"></a>Why it broke now</h1> <ul> <li><strong>Behavior change in Datasets 4.x</strong>: audio is decoded on access via TorchCodec + FFmpeg. Older 3.x used a different backend. Printing an example decodes it. (<a href="https://huggingface.co/docs/datasets/en/audio_load" title="Load audio data">Hugging Face</a>)</li> <li><strong>New runtime requirements</strong>: TorchCodec expects FFmpeg on the system and a compatible <code>torch</code> version. The README documents FFmpeg support and the torch↔torchcodec matrix. (<a href="https://github.com/meta-pytorch/torchcodec" title="GitHub - meta-pytorch/torchcodec: PyTorch media decoding and encoding">GitHub</a>)</li> <li><strong>Windows caveat</strong>: initial 4.0 release notes warned “not available for Windows yet; use datasets&lt;4.0.” This explains why your previously working Windows setup started failing after upgrade. (<a href="https://github.com/huggingface/datasets/releases" title="Releases · huggingface/datasets">GitHub</a>)</li> </ul> <h1><a name="p-243907-typical-root-causes-2" class="anchor" href="#p-243907-typical-root-causes-2"></a>Typical root causes</h1> <ol> <li><strong>FFmpeg missing or wrong major</strong>. TorchCodec supports FFmpeg majors <strong>4–7</strong> on all platforms, with <strong>8</strong> only on macOS/Linux. Missing or mismatched DLLs yields your exact probe sequence. (<a href="https://github.com/meta-pytorch/torchcodec" title="GitHub - meta-pytorch/torchcodec: PyTorch media decoding and encoding">GitHub</a>)</li> <li><strong>Torch↔TorchCodec mismatch</strong>. Use the official matrix. Example: <code>torchcodec 0.7 ↔ torch 2.8</code>; <code>0.8 ↔ 2.9</code>. 
(<a href="https://github.com/meta-pytorch/torchcodec" title="GitHub - meta-pytorch/torchcodec: PyTorch media decoding and encoding">GitHub</a>)</li> <li><strong>Fresh 4.0 regressions</strong>. Multiple reports show 3.x works then 4.x fails until TorchCodec+FFmpeg are added and versions pinned. (<a href="https://github.com/huggingface/datasets/issues/7678" title="To support decoding audio data, please install 'torchcodec'.">GitHub</a>)</li> </ol> <h1><a name="p-243907-fixes-and-workarounds-3" class="anchor" href="#p-243907-fixes-and-workarounds-3"></a>Fixes and workarounds</h1> <p>Pick one path. Keep it pinned.</p> <h2><a name="p-243907-a-fastest-unblock-on-windows-4" class="anchor" href="#p-243907-a-fastest-unblock-on-windows-4"></a>A) Fastest unblock on Windows</h2> <pre data-code-wrap="bash"><code class="lang-bash"># Downgrade Datasets to pre-TorchCodec behavior pip install "datasets&lt;4.0.0" # release notes flagged Windows not ready # https://github.com/huggingface/datasets/releases/tag/4.0.0 </code></pre> <p>(<a href="https://github.com/huggingface/datasets/releases" title="Releases · huggingface/datasets">GitHub</a>)</p> <h2><a name="p-243907-b-stay-on-datasets-4x-and-make-it-work-5" class="anchor" href="#p-243907-b-stay-on-datasets-4x-and-make-it-work-5"></a>B) Stay on Datasets 4.x and make it work</h2> <pre data-code-wrap="bash"><code class="lang-bash"># Windows CPU: install FFmpeg and match versions conda install -c conda-forge "ffmpeg&lt;8" # README recommends conda FFmpeg pip install "torch==2.8.*" "torchcodec==0.7.*" # matrix: 0.7 &lt;-&gt; 2.8 # https://github.com/meta-pytorch/torchcodec#installing-torchcodec </code></pre> <p>If you need CUDA on Windows, use the experimental conda package:</p> <pre data-code-wrap="bash"><code class="lang-bash">conda install -c conda-forge "ffmpeg&lt;8" "torchcodec=*=*cuda*" # https://github.com/meta-pytorch/torchcodec#installing-cuda-enabled-torchcodec </code></pre> <p>(<a href="https://github.com/meta-pytorch/torchcodec" title="GitHub - meta-pytorch/torchcodec: PyTorch media decoding and encoding">GitHub</a>)</p> <h2><a name="p-243907-c-linux-or-colab-6" class="anchor" href="#p-243907-c-linux-or-colab-6"></a>C) Linux or Colab</h2> <pre data-code-wrap="bash"><code class="lang-bash"># Colab VM or Linux apt-get update &amp;&amp; apt-get install -y ffmpeg pip install -U "datasets[audio]" "torch==2.8.*" "torchcodec==0.7.*" # HF docs: audio decoding uses TorchCodec + FFmpeg # https://huggingface.co/docs/datasets/en/audio_load </code></pre> <p>(<a href="https://huggingface.co/docs/datasets/en/audio_load" title="Load audio data">Hugging Face</a>)</p> <h2><a name="p-243907-d-bypass-decoding-while-you-train-7" class="anchor" href="#p-243907-d-bypass-decoding-while-you-train-7"></a>D) Bypass decoding while you train</h2> <p>Avoid TorchCodec until your env is fixed.</p> <pre data-code-wrap="python"><code class="lang-python">from datasets import Audio # Option 1: disable globally ds = ds.decode(False) # https://huggingface.co/docs/datasets/en/package_reference/main_classes#datasets.Dataset.decode # Option 2: disable per column ds = ds.cast_column("audio", Audio(decode=False)) # https://huggingface.co/docs/datasets/en/about_dataset_features </code></pre> <p>These return paths/bytes rather than decoded arrays, so printing items won’t invoke TorchCodec. 
(<a href="https://huggingface.co/docs/datasets/en/package_reference/main_classes" title="Main classes">Hugging Face</a>)</p> <h1><a name="p-243907-sanity-checks-8" class="anchor" href="#p-243907-sanity-checks-8"></a>Sanity checks</h1> <pre data-code-wrap="bash"><code class="lang-bash">python - &lt;&lt;'PY' import subprocess, sys import torch print("python:", sys.version) print("torch:", torch.__version__) try: import torchcodec print("torchcodec:", torchcodec.__version__) except Exception as e: print("torchcodec import failed:", e) subprocess.run(["ffmpeg", "-hide_banner", "-version"]) PY # Matrix and FFmpeg policy: # https://github.com/meta-pytorch/torchcodec#installing-torchcodec </code></pre> <p>(<a href="https://github.com/meta-pytorch/torchcodec" title="GitHub - meta-pytorch/torchcodec: PyTorch media decoding and encoding">GitHub</a>)</p> <h1><a name="p-243907-context-from-your-linked-thread-9" class="anchor" href="#p-243907-context-from-your-linked-thread-9"></a>Context from your linked thread</h1> <p>Your screenshots show Datasets 4.x decoding an <code>Audio</code> column, TorchCodec probing FFmpeg 7→6→5→4, then failing. That matches the new 4.x behavior and the FFmpeg/compatibility requirements above. (<a href="https://discuss.huggingface.co/t/issue-with-torchcodec-when-fine-tuning-whisper-asr-model/169315" title="Issue with TorchCodec when fine-tuning Whisper ASR model - Beginners - Hugging Face Forums">Hugging Face Forums</a>)</p> <h1><a name="p-243907-extra-references-and-pitfalls-10" class="anchor" href="#p-243907-extra-references-and-pitfalls-10"></a>Extra references and pitfalls</h1> <ul> <li><strong>Release notes roundup</strong>: breaking changes, removal of scripts, and the Windows note. Useful if other 4.0 changes surfaced after your upgrade. (<a href="https://newreleases.io/project/github/huggingface/datasets/release/4.0.0" title="huggingface/datasets 4.0.0 on GitHub">NewReleases</a>)</li> <li><strong>Known mismatch/FFmpeg pitfalls</strong>: reports of brew-FFmpeg conflicts and version-mismatch guidance from TorchCodec maintainers. (<a href="https://github.com/pytorch/torchcodec/issues/570" title="torchcodec not compatible with brew-installed ffmpeg #570">GitHub</a>)</li> <li><strong>PyTorch/Torchaudio migration</strong>: decoding is consolidating on TorchCodec (<code>load_with_torchcodec</code> exists as a bridge). Aligns your stack with where the ecosystem is going. (<a href="https://docs.pytorch.org/audio/main/torchaudio.html" title="Torchaudio 2.8.0 documentation">PyTorch Documentation</a>)</li> </ul>
[HF Space not starting] Repeatedly crashes: @semmyKG]
https://discuss.huggingface.co/t/hf-space-not-starting-repeatedly-crashes-semmykg/169242
169,242
24
2025-10-17T14:59:37.863000Z
[{"id":243751,"name":"Researcher","username":"semmyk","avatar_template":"/user_avatar/discuss.huggin(...TRUNCATED)
"<p>[HF Space repeatedly crashes: <a href=\"https://huggingface.co/spaces/semmyk/semmyKG\">semmyKG</(...TRUNCATED)
"<p>In <a href=\"https://huggingface.co/spaces/semmyk/semmyKG/blob/main/README.md\"><code>README.md<(...TRUNCATED)
Cannot load torchcodec
https://discuss.huggingface.co/t/cannot-load-torchcodec/169260
169,260
5
2025-10-19T10:22:29.688000Z
[{"id":243788,"name":"MAJH","username":"aldkela","avatar_template":"https://avatars.discourse-cdn.co(...TRUNCATED)
"<p>Hello, I have some problem making some program and here is the code I made below</p>\n<pre data-(...TRUNCATED)
"<p>When using Python in a Windows environment, particularly with venv, conda, or Jupyter, DLL error(...TRUNCATED)
WARN Status Code: 500
https://discuss.huggingface.co/t/warn-status-code-500/169281
169,281
9
2025-10-20T07:24:36.364000Z
[{"id":243832,"name":"ロマン","username":"concretejungles","avatar_template":"/user_avatar/discus(...TRUNCATED)
"<p>Running a simple <code>hf download Qwen/Qwen3-4B</code> in colab, I keep getting infinite retrie(...TRUNCATED)
"<p>I solved the issue by <strong>disabling xet</strong>, like this:</p>\n<p><code>export HF_HUB_DIS(...TRUNCATED)
Hybrid Resonance Algorithm for Artificial Superintelligence
https://discuss.huggingface.co/t/hybrid-resonance-algorithm-for-artificial-superintelligence/169264
169,264
7
2025-10-19T11:19:56.732000Z
[{"id":243794,"name":"bit","username":"olegbits","avatar_template":"https://avatars.discourse-cdn.co(...TRUNCATED)
"<p>GRA-ASI: Hybrid Resonance Algorithm for Artificial Superintelligence**</p>\n<h3><a name=\"p-2437(...TRUNCATED)
"<p>Certainly! Here is the <strong>full English translation</strong> of your request and the detaile(...TRUNCATED)
Replacing attention class with identical subclass creates hallucinations
https://discuss.huggingface.co/t/replacing-attention-class-with-identical-subclass-creates-hallucinations/169215
169,215
6
2025-10-16T11:23:27.606000Z
[{"id":243707,"name":"Alexander Jephtha","username":"AlexJephtha","avatar_template":"https://avatars(...TRUNCATED)
"<p>I’m writing a custom versions of LlamaModels, and for one of those approaches I want to overwr(...TRUNCATED)
"<p>SOLUTION: With SDPA attention, passing in an attention_mask with value not equal to none overrid(...TRUNCATED)