Text Generation · Transformers · GGUF · English

Tags: shining-valiant, shining-valiant-3, valiant, valiant-labs, qwen, qwen-3, qwen-3-4b, 4b, reasoning, code, code-reasoning, science, science-reasoning, physics, biology, chemistry, earth-science, astronomy, machine-learning, artificial-intelligence, compsci, computer-science, information-theory, ML-Ops, math, cuda, deep-learning, agentic, LLM, neuromorphic, self-improvement, complex-systems, cognition, linguistics, philosophy, logic, epistemology, simulation, game-theory, knowledge-management, creativity, problem-solving, architect, engineer, developer, creative, analytical, expert, rationality, conversational, chat, instruct, llama-cpp, gguf-my-repo
Update README.md
This model was converted to GGUF format from [`ValiantLabs/Qwen3-4B-ShiningValiant3`](https://huggingface.co/ValiantLabs/Qwen3-4B-ShiningValiant3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/ValiantLabs/Qwen3-4B-ShiningValiant3) for more details on the model.

---

Shining Valiant 3: Qwen3-1.7B, Qwen3-4B, Qwen3-8B

Shining Valiant 3 is a science, AI design, and general reasoning specialist built on Qwen 3.

- Finetuned on our newest science reasoning data, generated with DeepSeek R1 0528.
- AI to build AI: our high-difficulty AI reasoning data makes Shining Valiant 3 your companion for building with current AI tech and discovering new innovations and improvements.
- Improved general and creative reasoning to supplement problem-solving and general chat performance.
- Small model sizes allow running on local desktop and mobile, plus fast server inference.

---

## Use with llama.cpp

Install llama.cpp through brew (works on macOS and Linux):
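The install-and-run flow above can be sketched as shell commands. The `--hf-repo` and `--hf-file` values below are placeholders — the actual GGUF repo id and quantization filename are not stated here, so substitute your own; the flags themselves (`--hf-repo`, `--hf-file`, `-p`, `-c`) are standard llama.cpp CLI options.

```shell
# Install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Run a one-off prompt with llama-cli; replace the placeholder repo id
# and quant filename with the actual GGUF artifacts.
llama-cli --hf-repo YOUR-USER/Qwen3-4B-ShiningValiant3-GGUF \
  --hf-file qwen3-4b-shiningvaliant3-q4_k_m.gguf \
  -p "Explain entropy in information theory."

# Or serve an OpenAI-compatible HTTP endpoint (default port 8080)
# with a 2048-token context window:
llama-server --hf-repo YOUR-USER/Qwen3-4B-ShiningValiant3-GGUF \
  --hf-file qwen3-4b-shiningvaliant3-q4_k_m.gguf \
  -c 2048
```

Both commands download and cache the GGUF file from the Hugging Face Hub on first use.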