---
datasets:
- NeelNanda/pile-10k
base_model:
- google/gemma-3-12b-it
---
## Model Details

This model is an int4, symmetrically quantized version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) with `group_size=128`, generated by the [intel/auto-round](https://github.com/intel/auto-round) algorithm.

Please follow the license of the original model.
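
For orientation, here is a minimal round-to-nearest sketch of what group-wise symmetric int4 quantization means. This is an illustration only: AutoRound itself additionally tunes the rounding and clipping ranges with signed gradient descent, and the function names below are hypothetical, not part of the auto-round API.

~~~python
import torch

def quantize_sym_int4(w: torch.Tensor, group_size: int = 128):
    # Split the flattened weights into groups that each share one scale.
    groups = w.reshape(-1, group_size)
    # Symmetric int4: integers in [-8, 7], scale chosen so the group max lands at 7.
    scale = (groups.abs().amax(dim=1, keepdim=True) / 7.0).clamp(min=1e-8)
    q = torch.clamp(torch.round(groups / scale), -8, 7).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Reconstruction: W_hat = scale * q, one scale per group of 128 weights.
    return q.float() * scale

w = torch.randn(512 * 128)
q, scale = quantize_sym_int4(w)
print((w - dequantize(q, scale).reshape(-1)).abs().max())  # small round-trip error
~~~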

### Inference on CPU/XPU/CUDA

Requirements

```bash
pip install 'auto-round>0.5.1'
pip install 'transformers>=4.52'
```
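
To confirm the environment picked up compatible versions, a standard pip check works (nothing here is specific to this model):

```bash
pip show auto-round transformers | grep -E '^(Name|Version)'
```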

~~~python
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
import torch

model_id = "OPEA/gemma-3-12b-it-int4-AutoRound"

# Load the int4 checkpoint; auto-round must be installed for the quantized kernels.
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto").eval()

processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image",
             "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

# The processor fetches the image URL and builds the multimodal prompt.
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)
input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]  # keep only the newly generated tokens

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
"""Here's a detailed description of the image:

**Overall Impression:**

The image is a close-up shot of a vibrant garden scene, focusing on a pink cosmos flower with a bumblebee visiting it. The overall feel is natural, colorful, and slightly wild.

**Main Elements:**

*   **Cosmos Flower:** The primary focus is a large, pink cosmos flower. It has broad, slightly ruffled petals in a soft pink hue. The center of the flower is a bright"""
~~~
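
The snippet above uses `device_map="auto"` and lets accelerate place the weights. To pin inference to one of the backends in the heading, passing an explicit device map should work as well; `"cpu"` and `"cuda:0"` are standard device strings, while `"xpu"` assumes a PyTorch build with Intel XPU support:

~~~python
import torch
from transformers import Gemma3ForConditionalGeneration

model_id = "OPEA/gemma-3-12b-it-int4-AutoRound"

# Pin the model to one backend instead of letting accelerate decide.
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="cpu",  # or "cuda:0" / "xpu", depending on your hardware
).eval()
~~~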


## Generate the model

Here is a sample command to reproduce the model.

```bash
auto-round-mllm \
    --model google/gemma-3-12b-it \
    --device 0 \
    --bits 4 \
    --format 'auto_round' \
    --output_dir "./tmp_autoround"
```
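
Once the command finishes, the checkpoint written under `./tmp_autoround` should load with the same inference code shown earlier, only with the local path swapped in. Depending on the auto-round version, the files may land in a model-named subdirectory, so use whatever path the tool reports:

~~~python
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

# Sketch: load the locally generated checkpoint instead of the Hub model.
local_path = "./tmp_autoround"  # adjust if auto-round-mllm wrote a subdirectory
model = Gemma3ForConditionalGeneration.from_pretrained(
    local_path, torch_dtype=torch.bfloat16, device_map="auto").eval()
processor = AutoProcessor.from_pretrained(local_path)
~~~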