---
pipeline_tag: text-generation
tags:
- Thinking
- Hugston
---
Hugston-Qwen3-30B-A3B-Thinking-2507
This is a converted and quantized version created by the Hugston Team with Quanta (see GitHub to learn more about it). It is a crude, proof-of-concept implementation for converting and quantizing a .safetensors LLM model to GGUF.
Quantization was performed using an automated method, which reduces processing time. This model was made possible by https://Hugston.com.
This model was converted and quantized by the Hugston Team.
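Quanta itself is documented on the Hugston GitHub. As a rough illustration of the same safetensors-to-GGUF workflow, here is a minimal sketch using llama.cpp's standard tooling (convert_hf_to_gguf.py and llama-quantize); it is not the Quanta implementation, and all paths, file names, and the quantization type are assumptions.

```python
# Minimal sketch of a safetensors -> GGUF conversion and quantization,
# using llama.cpp's standard tools. This is NOT the Quanta tool itself;
# directory names, output files, and the quant type are illustrative.
import subprocess

HF_MODEL_DIR = "Qwen3-30B-A3B-Thinking-2507"       # local safetensors checkpoint (assumed path)
F16_GGUF = "qwen3-30b-a3b-thinking-f16.gguf"       # intermediate full-precision GGUF
QUANT_GGUF = "qwen3-30b-a3b-thinking-Q4_K_M.gguf"  # quantized output (4-bit example)

# 1. Convert the Hugging Face safetensors checkpoint to GGUF.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", HF_MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# 2. Quantize the GGUF (Q4_K_M shown; 5-bit, 6-bit, and 8-bit types work the same way).
subprocess.run(
    ["./llama-quantize", F16_GGUF, QUANT_GGUF, "Q4_K_M"],
    check=True,
)
```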
HugstonOne Enterprise Edition
You can use this model with HugstonOne Enterprise Edition, where it has been tested.
Watch HugstonOne coding and previewing in action: https://vimeo.com/1121493834?share=copy&fl=sv&fe=ci
Usage
- Download the HugstonOne app from Hugston.com or from https://github.com/Mainframework
- Download the model from https://hugston.com/explore?folder=llm_models or from Hugging Face
- If you already have the LLM model downloaded, choose it by clicking Pick Model in HugstonOne
- Then click Load Model in CLI or Server mode (a programmatic loading sketch follows this list)
- For multimodal use you need a VL/multimodal LLM model with its mmproj file in the same folder
- Select the model and select the mmproj file
- Note: if the mmproj file sits in the same folder as non-multimodal models, those models will not load unless the mmproj file is moved out of the folder
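If you prefer to load the quantized GGUF outside HugstonOne, the following is a minimal sketch using the llama-cpp-python bindings. The model file name, context size, and GPU setting are assumptions; adjust them to the quantization you downloaded and to your hardware.

```python
# Minimal sketch: loading the quantized GGUF with llama-cpp-python
# (an alternative to the HugstonOne CLI/Server workflow above).
from llama_cpp import Llama

llm = Llama(
    model_path="Hugston-Qwen3-30B-A3B-Thinking-2507-Q4_K_M.gguf",  # assumed file name
    n_ctx=8192,        # context window; lower this if you run out of memory
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is in one sentence."}]
)
print(response["choices"][0]["message"]["content"])
```

For multimodal GGUF models, the mmproj file is loaded alongside the main model (in llama-cpp-python this goes through its multimodal chat handlers); with HugstonOne you simply select the mmproj file as described above.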
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit
