|
|
---
datasets:
- NeelNanda/pile-10k
base_model:
- google/gemma-3-12b-it
---
|
|
## Model Details |
|
|
|
|
|
This model is an int4 quantized version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) with group_size 128 and symmetric quantization, generated by the [intel/auto-round](https://github.com/intel/auto-round) algorithm.
|
|
|
|
|
Please follow the license of the original model. |
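For intuition about what these settings mean, the sketch below (illustrative only, not the AutoRound algorithm itself; the helper name and tensor shapes are made up) round-trips a weight tensor through group-wise symmetric int4 quantization: every group of 128 weights shares a single scale, and values are rounded onto the signed 4-bit grid.

```python
import torch

def int4_sym_quant_dequant(weight: torch.Tensor, group_size: int = 128) -> torch.Tensor:
    """Round-trip a weight tensor through group-wise symmetric int4 quantization (illustrative)."""
    w = weight.reshape(-1, group_size)                              # one row per group of 128 weights
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 7   # shared scale per group, no zero point
    q = torch.clamp(torch.round(w / scale), -8, 7)                  # signed 4-bit integer range
    return (q * scale).reshape(weight.shape)                        # dequantized approximation

w = torch.randn(4, 512)
w_hat = int4_sym_quant_dequant(w)
print((w - w_hat).abs().max())  # worst-case rounding error introduced by int4
```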
|
|
|
|
|
### Inference on CPU/XPU/CUDA |
|
|
|
|
|
Requirements |
|
|
|
|
|
```bash
pip install 'auto-round>0.5.1'
pip install 'transformers>=4.52'
```
|
|
|
|
|
~~~python
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from PIL import Image
import requests
import torch

model_id = "OPEA/gemma-3-12b-it-int4-AutoRound"

model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()

processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image",
             "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

# Tokenize the multimodal chat prompt and move it to the model's device.
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)
model.to(torch.bfloat16)
input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]  # keep only the newly generated tokens

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
"""Here's a detailed description of the image:

**Overall Impression:**

The image is a close-up shot of a vibrant garden scene, focusing on a pink cosmos flower with a bumblebee visiting it. The overall feel is natural, colorful, and slightly wild.

**Main Elements:**

* **Cosmos Flower:** The primary focus is a large, pink cosmos flower. It has broad, slightly ruffled petals in a soft pink hue. The center of the flower is a bright"""
~~~
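The example above relies on `device_map="auto"`. To pin inference to one of the backends named in the heading (CPU, Intel XPU, or CUDA), one option is to pass the device explicitly. This is a sketch that reuses the imports and `model_id` from the snippet above and assumes the chosen backend is available in your PyTorch build:

```python
# Sketch: force a specific backend instead of device_map="auto".
# Use "cpu", "xpu" (Intel GPUs), or "cuda", depending on your hardware.
device = "cuda"
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map=device
).eval()
```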
|
|
|
|
|
|
|
|
## Generate the model |
|
|
|
|
|
Here is a sample command to reproduce the model:
|
|
|
|
|
```bash
auto-round-mllm \
    --model google/gemma-3-12b-it \
    --device 0 \
    --bits 4 \
    --format 'auto_round' \
    --output_dir "./tmp_autoround"
```
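For scripted workflows, a rough Python equivalent is sketched below. It assumes the `AutoRoundMLLM` class exposed by recent auto-round releases; constructor arguments can differ between versions, so treat this as a starting point rather than the exact recipe used to produce this checkpoint.

```python
# Sketch only: assumes the AutoRoundMLLM API from recent auto-round releases;
# check the intel/auto-round README for the exact signature in your version.
from auto_round import AutoRoundMLLM
from transformers import AutoProcessor, AutoTokenizer, Gemma3ForConditionalGeneration
import torch

model_name = "google/gemma-3-12b-it"
model = Gemma3ForConditionalGeneration.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
processor = AutoProcessor.from_pretrained(model_name)

# int4, group_size 128, symmetric -- matching the settings described above.
autoround = AutoRoundMLLM(model, tokenizer, processor, bits=4, group_size=128, sym=True)
autoround.quantize()
autoround.save_quantized("./tmp_autoround", format="auto_round")
```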