# Flash Attention - CUDA 12, PyTorch 2.6, Python 3.10
flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu12torch2.6cxx11abiFALSE-cp310-cp310-linux_x86_64.whl

# Core ML/AI Libraries
torch==2.6.0
torchvision
transformers==4.57.1
transformers-stream-generator
accelerate
xformers

# Hugging Face
huggingface_hub
hf_xet
spaces

# Vision & Image Processing
qwen-vl-utils
albumentations
opencv-python
pyvips
pyvips-binary
pillow
timm
einops
supervision

# Document Processing
docling-core
python-docx
pymupdf
pdf2image
markdown
html2text

# PDF Generation
reportlab
fpdf

# Text Processing
sentencepiece
num2words

# Utilities
loguru
requests
httpx
click

# Web Interface
gradio

# Model Fine-tuning
peft

# Video Processing
av
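
# The flash-attn entry above pins a prebuilt wheel, so it only installs cleanly when the
# runtime matches what the wheel filename encodes: CUDA 12, torch 2.6, CPython 3.10,
# cxx11abiFALSE, linux_x86_64. A minimal sketch (an assumption, not part of this file)
# to confirm the environment after `pip install -r requirements.txt`:
#
#   import torch
#
#   print("torch:", torch.__version__)            # expected 2.6.0 (CUDA 12 build)
#   print("CUDA available:", torch.cuda.is_available())
#
#   try:
#       import flash_attn
#       print("flash-attn:", flash_attn.__version__)  # expected 2.7.3
#   except ImportError:
#       # Wheel did not match the runtime; re-check CUDA / torch / Python versions.
#       print("flash-attn not importable")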