---
language:
- en
- zh
license: apache-2.0
library_name: transformers
base_model: Qwen/Qwen2.5-3B
tags:
- qwen2.5
- text-generation
- pytorch
- multilingual
pipeline_tag: text-generation
---
# Qwen 2.5 3B - QNN Ready

The original Qwen 2.5 3B model, prepared for QNN conversion.

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("marcusmi4n/qwen2.5-3b-original")
tokenizer = AutoTokenizer.from_pretrained("marcusmi4n/qwen2.5-3b-original")

# Generate a short continuation of the prompt
inputs = tokenizer("Hello, I am", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
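
The same generation can also be run through the `pipeline` helper, matching the `text-generation` pipeline tag above. The prompt and token budget below are illustrative, not part of this repository.

```python
from transformers import pipeline

# High-level wrapper around the same checkpoint
generator = pipeline("text-generation", model="marcusmi4n/qwen2.5-3b-original")
print(generator("Hello, I am", max_new_tokens=40)[0]["generated_text"])
```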
## Features

- Original, unmodified model weights
- Safetensors format
- Ready for QNN conversion (see the export sketch below)
- Multilingual support (English and Chinese)
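
## Exporting toward QNN (sketch)

One common path to QNN is to first export the model to ONNX and then run Qualcomm's converter (AI Engine Direct SDK) on the resulting graph. The snippet below is a minimal sketch of that export step only, under assumptions not stated in this repository: the output filename, the fixed-shape dummy input, and the opset version are illustrative, and the QNN conversion itself is not shown.

```python
# Minimal sketch of an ONNX export as a staging step before QNN conversion.
# Filename, opset, and dummy-input shape are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "marcusmi4n/qwen2.5-3b-original"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Disable the KV cache and dict-style outputs so tracing yields a plain
# logits tensor; on-device QNN deployments generally expect static shapes.
model.config.use_cache = False
model.config.return_dict = False
model.eval()

dummy = tokenizer("Hello, I am", return_tensors="pt")

with torch.no_grad():
    torch.onnx.export(
        model,
        (dummy["input_ids"], dummy["attention_mask"]),
        "qwen2.5-3b.onnx",  # weights over 2 GB spill into external data files
        input_names=["input_ids", "attention_mask"],
        output_names=["logits"],
        opset_version=17,
    )
```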