# n24q02m/Qwen3-Embedding-0.6B-GGUF

GGUF-quantized version of Qwen/Qwen3-Embedding-0.6B for use with `qwen3-embed` and `llama-cpp-python`.
## Available Variants

| Variant | File | Size | Description |
|---|---|---|---|
| Q4_K_M | `qwen3-embedding-0.6b-q4-k-m.gguf` | 378 MB | 4-bit quantization (recommended) |
## Usage

### qwen3-embed

```bash
pip install qwen3-embed[gguf]
```

```python
from qwen3_embed import TextEmbedding

model = TextEmbedding("n24q02m/Qwen3-Embedding-0.6B-GGUF")

# Default embeddings (1024-dim)
embeddings = list(model.embed(["Hello world"]))

# MRL: reduce embedding dimension
embeddings_256 = list(model.embed(["Hello world"], dim=256))  # 256-dim

# Query embedding with instruction prefix
query_emb = list(model.query_embed("What is machine learning?"))
```
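Once you have a query embedding and document embeddings, ranking is typically done by cosine similarity. A minimal, dependency-free sketch (the vectors below are illustrative stand-ins for `model.embed()` / `model.query_embed()` output, which are 1024-dim in practice):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Illustrative small vectors; in real use these come from the model.
query_vec = [0.2, 0.1, 0.4]
doc_vec = [0.1, 0.3, 0.5]
score = cosine_similarity(query_vec, doc_vec)  # value in [-1, 1]
```

Scores closer to 1 indicate higher semantic similarity between the query and the document.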
### llama-cpp-python (direct)

```python
from llama_cpp import Llama

model = Llama(
    model_path="qwen3-embedding-0.6b-q4-k-m.gguf",
    embedding=True,
    pooling_type=3,  # LLAMA_POOLING_TYPE_LAST
    n_ctx=32768,
)
result = model.create_embedding("Hello world")
```
## Conversion Details

- Source: Qwen/Qwen3-Embedding-0.6B
- Method: `convert_hf_to_gguf.py` (F16) + `llama-quantize` (Q4_K_M)
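The two-step conversion above can be sketched as follows. This is a sketch, not the exact commands used: file names and paths are assumptions, and it presumes a local llama.cpp checkout with the source model already downloaded to a local directory.

```shell
# 1) Convert the HF checkpoint to an F16 GGUF (script ships with llama.cpp).
python convert_hf_to_gguf.py ./Qwen3-Embedding-0.6B \
    --outfile qwen3-embedding-0.6b-f16.gguf --outtype f16

# 2) Quantize the F16 GGUF down to Q4_K_M.
./llama-quantize qwen3-embedding-0.6b-f16.gguf \
    qwen3-embedding-0.6b-q4-k-m.gguf Q4_K_M
```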
## Related

- ONNX variants: n24q02m/Qwen3-Embedding-0.6B-ONNX