Field-Adaptive Query Generator

A fine-tuned text generation model that produces diverse, relevant search queries from presentation template metadata. It uses LoRA adapters to efficiently adapt Google Gemma-3-4B-IT for query generation as part of the Field-Adaptive Dense Retrieval framework.

Model Description

This model generates 8 different search queries from presentation template metadata, including titles, descriptions, industries, categories, and tags. It serves as a key component of the Field-Adaptive Dense Retrieval system for structured documents.

Base Model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
Model Type: Causal Language Model with LoRA
Language: English
License: Apache 2.0

Usage

With Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "mudasir13cs/Field-adaptive-query-generator"
)
tokenizer = AutoTokenizer.from_pretrained(
    "mudasir13cs/Field-adaptive-query-generator"
)

# Format prompt using Gemma chat template
prompt = """<start_of_turn>user
Generate 8 different search queries that users might use to find this presentation template:
    Title: Modern Business Presentation
    Description: This modern business presentation template features a minimalist design...
    Industries: Business, Marketing
    Categories: Corporate, Professional
    Tags: Modern, Clean, Professional
<end_of_turn>
<start_of_turn>model
"""

# Tokenize the prompt and sample a completion; moderate temperature keeps the queries diverse
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
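
The decoded string above still contains the prompt. To recover only the eight generated queries, you can decode just the newly generated tokens and split on newlines; this short sketch continues from the variables in the example above:

# Strip the prompt tokens, keep only the newly generated completion
prompt_length = inputs["input_ids"].shape[1]
completion = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True)

# One query per line, no numbering or bullets
queries = [line.strip() for line in completion.strip().splitlines() if line.strip()]
print(queries)  # expected: a list of 8 query strings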

With llama.cpp

# Download the GGUF model
huggingface-cli download mudasir13cs/Field-adaptive-query-generator-gguf \
    query-generator-q4_k_m.gguf --local-dir . --local-dir-use-symlinks False

# Run inference
./llama-cli -m query-generator-q4_k_m.gguf \
    -p "<start_of_turn>user
Generate 8 different search queries that users might use to find this presentation template:
    Title: Modern Business Presentation
    Description: This modern business presentation template features a minimalist design...
    Industries: Business, Marketing
    Categories: Corporate, Professional
    Tags: Modern, Clean, Professional
<end_of_turn>
<start_of_turn>model
"

With Ollama

# Import model to Ollama
ollama create field-adaptive-query-generator -f Modelfile

# Run inference
ollama run field-adaptive-query-generator "<start_of_turn>user
Generate 8 different search queries that users might use to find this presentation template:
    Title: Modern Business Presentation
    Description: This modern business presentation template features a minimalist design...
    Industries: Business, Marketing
    Categories: Corporate, Professional
    Tags: Modern, Clean, Professional
<end_of_turn>
<start_of_turn>model
"

Expected Output Format

The model generates exactly 8 queries, one per line, with no numbering or bullets:

business presentation template
modern corporate slides
professional marketing presentation
blue gradient business template
minimalist corporate design
marketing pitch template
geometric business slides
clean professional presentation

Prompt Format

Always use the Gemma chat template format:

<start_of_turn>user
Generate 8 different search queries that users might use to find this presentation template:
    Title: [Template Title]
    Description: [Template Description]
    Industries: [Industry1, Industry2]
    Categories: [Category1, Category2]
    Tags: [Tag1, Tag2, Tag3]

    Include a mix of:
    - Short queries (2-3 words)
    - Medium queries (4-6 words)
    - Natural language queries
    - Industry-specific queries
    - Use-case based queries
    - Style-based queries

    Format: Return exactly 8 queries, one per line, no numbering or bullets.
<end_of_turn>
<start_of_turn>model
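
To build this prompt programmatically, a small helper can fill the template from a metadata record. The function below (build_query_prompt) is an illustrative helper, not part of the released code:

def build_query_prompt(meta: dict) -> str:
    """Fill the Gemma chat template with presentation template metadata."""
    return (
        "<start_of_turn>user\n"
        "Generate 8 different search queries that users might use to find this presentation template:\n"
        f"    Title: {meta['title']}\n"
        f"    Description: {meta['description']}\n"
        f"    Industries: {', '.join(meta['industries'])}\n"
        f"    Categories: {', '.join(meta['categories'])}\n"
        f"    Tags: {', '.join(meta['tags'])}\n"
        "\n"
        "    Include a mix of:\n"
        "    - Short queries (2-3 words)\n"
        "    - Medium queries (4-6 words)\n"
        "    - Natural language queries\n"
        "    - Industry-specific queries\n"
        "    - Use-case based queries\n"
        "    - Style-based queries\n"
        "\n"
        "    Format: Return exactly 8 queries, one per line, no numbering or bullets.\n"
        "<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

The returned string can be passed directly to the tokenizer, llama.cpp, or Ollama as shown in the Usage section.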

Model Details

  • Architecture: Google Gemma-3-4B-IT with LoRA adapters
  • Training: Parameter-Efficient Fine-Tuning (PEFT) with LoRA
  • LoRA Rank: 16
  • LoRA Alpha: 32
  • Training Epochs: 3
  • Learning Rate: 2e-4
  • Batch Size: 4
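
As a rough sketch, these hyperparameters correspond to a PEFT configuration like the one below; target_modules and anything not listed above are assumptions rather than the exact training script:

from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings matching the rank/alpha listed above; target_modules is an assumption
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

# Optimizer and schedule values from the list above; everything else left at defaults
training_args = TrainingArguments(
    output_dir="query-generator-lora",
    num_train_epochs=3,
    learning_rate=2e-4,
    per_device_train_batch_size=4,
)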

Evaluation

  • BLEU Score: ~0.75
  • ROUGE Score: ~0.80
  • Performance: Optimized for query generation quality in structured document retrieval
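
The exact evaluation script is not released; scores of this kind can be computed with the Hugging Face evaluate library against reference queries, roughly as follows (the prediction/reference pairs here are illustrative only):

import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

# predictions: queries generated by the model; references: ground-truth queries per template
predictions = ["business presentation template", "modern corporate slides"]
references = [["business presentation templates"], ["modern corporate slide deck"]]

print(bleu.compute(predictions=predictions, references=references))
print(rouge.compute(predictions=predictions, references=references))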

Citation

Paper

@article{field_adaptive_dense_retrieval,
  title={Field-Adaptive Dense Retrieval of Structured Documents},
  author={Mudasir Syed},
  journal={DBPIA},
  year={2024},
  url={https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE12352544}
}

Model

@misc{field_adaptive_query_generator,
  title={Field-adaptive-query-generator for Presentation Template Query Generation},
  author={Mudasir Syed},
  year={2024},
  howpublished={Hugging Face},
  url={https://huggingface.co/mudasir13cs/Field-adaptive-query-generator}
}

Base Model

@misc{gemma_3_4b_it,
  title={Gemma: Open Models Based on Gemini Research and Technology},
  author={Gemma Team and others},
  year={2024},
  howpublished={Hugging Face},
  url={https://huggingface.co/google/gemma-3-4b-it}
}

Author

Mudasir Syed (mudasir13cs)
