---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
base_model:
- mistralai/Devstral-Small-2507
- mistralai/Magistral-Small-2506
pipeline_tag: text-generation
tags:
- merge
- programming
- code generation
- Codestral
- code
- moe
- coding
- coder
- chat
- mistral
- mixtral
- mixture of experts
- mistral moe
- 2X24B
- reasoning
- thinking
- Devstral
- Magistral
library_name: transformers
license: apache-2.0
---

# Mistral-2x24B-MOE-Magistral-2506-Devstral-2507-1.1-Coder-Reasoning-Ultimate-44B

This repo contains the full-precision source code, in "safetensors" format, to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats. The source code can also be used directly.

This is a monster coder in MOE (Mixture of Experts) 2x24B configuration with full reasoning (which can be turned on/off): the two best Mistral coders at 24B combined into one model that is stronger than the sum of its parts. Both models code together, with Magistral reinforcing Devstral's coding power and contributing full reasoning/thinking which can be turned on or off.

Info on each model is below, followed by info on the MOE model, settings, etc.

---

QUANTS:

---

GGUF? GGUF Imatrix? Other? See under "model tree" (upper right) and click on "quantizations". New quants will appear there automatically.

---

# Devstral Small 1.1

Devstral is an agentic LLM for software engineering tasks built under a collaboration between [Mistral AI](https://mistral.ai/) and [All Hands AI](https://www.all-hands.dev/) 🙌. Devstral excels at using tools to explore codebases, editing multiple files and powering software engineering agents. The model achieves remarkable performance on SWE-bench, which positions it as the #1 open source model on this [benchmark](#benchmark-results).

It is finetuned from [Mistral-Small-3.1](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503), therefore it has a long context window of up to 128k tokens. As a coding agent, Devstral is text-only and the vision encoder was removed before fine-tuning from `Mistral-Small-3.1`.

For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.

Learn more about Devstral in our [blog post](https://mistral.ai/news/devstral-2507).

**Updates compared to [`Devstral Small 1.0`](https://huggingface.co/mistralai/Devstral-Small-2505):**
- Improved performance, please refer to the [benchmark results](#benchmark-results).
- `Devstral Small 1.1` is still great when paired with OpenHands. This new version also generalizes better to other prompts and coding environments.
- Supports [Mistral's function calling format](https://mistralai.github.io/mistral-common/usage/tools/).

## Key Features:
- **Agentic coding**: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
- **Lightweight**: with its compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32GB RAM, making it an appropriate model for local deployment and on-device use.
- **Apache 2.0 License**: Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window**: A 128k context window.
- **Tokenizer**: Utilizes a Tekken tokenizer with a 131k vocabulary size.

## Benchmark Results

### SWE-Bench

Devstral Small 1.1 achieves a score of **53.6%** on SWE-Bench Verified, outperforming Devstral Small 1.0 by +6.8% and the second best state of the art model by +11.4%.
| Model              | Agentic Scaffold   | SWE-Bench Verified (%) |
|--------------------|--------------------|------------------------|
| Devstral Small 1.1 | OpenHands Scaffold | **53.6**               |
| Devstral Small 1.0 | OpenHands Scaffold | *46.8*                 |
| GPT-4.1-mini       | OpenAI Scaffold    | 23.6                   |
| Claude 3.5 Haiku   | Anthropic Scaffold | 40.6                   |
| SWE-smith-LM 32B   | SWE-agent Scaffold | 40.2                   |
| Skywork SWE        | OpenHands Scaffold | 38.0                   |
| DeepSWE            | R2E-Gym Scaffold   | 42.2                   |

When evaluated under the same test scaffold (OpenHands, provided by All Hands AI 🙌), Devstral exceeds far larger models such as Deepseek-V3-0324 and Qwen3 235B-A22B.

![SWE Benchmark](assets/swe_benchmark.png)

## Usage

We recommend using Devstral with the [OpenHands](https://github.com/All-Hands-AI/OpenHands/tree/main) scaffold. You can use it either through our API or by running locally.

### API

Follow these [instructions](https://docs.mistral.ai/getting-started/quickstart/#account-setup) to create a Mistral account and get an API key.

Then run these commands to start the OpenHands docker container.

```bash
export MISTRAL_API_KEY=

mkdir -p ~/.openhands-state && echo '{"language":"en","agent":"CodeActAgent","max_iterations":null,"security_analyzer":null,"confirmation_mode":false,"llm_model":"mistral/devstral-small-2507","llm_api_key":"'$MISTRAL_API_KEY'","remote_runtime_resource_factor":null,"github_token":null,"enable_default_condenser":true}' > ~/.openhands-state/settings.json

docker pull docker.all-hands.dev/all-hands-ai/runtime:0.48-nikolaik

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.48-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands:/.openhands \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.48
```

### Local inference

The model can also be deployed with the following libraries:
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
- [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`LMStudio`](https://lmstudio.ai/): See [here](#lmstudio)
- [`llama.cpp`](https://github.com/ggml-org/llama.cpp): See [here](#llama.cpp)
- [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)

#### vLLM (recommended)
**_Installation_**

Make sure to install [`vLLM >= 0.9.1`](https://github.com/vllm-project/vllm/releases/tag/v0.9.1):

```
pip install vllm --upgrade
```

Also make sure to have installed [`mistral_common >= 1.7.0`](https://github.com/mistralai/mistral-common/releases/tag/v1.7.0):

```
pip install mistral-common --upgrade
```

To check:

```
python -c "import mistral_common; print(mistral_common.__version__)"
```

You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).

**_Launch server_**

We recommend that you use Devstral in a server/client setting.

1. Spin up a server:

```
vllm serve mistralai/Devstral-Small-2507 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```

2. To ping the server, you can use a simple Python snippet.

```py
import requests
import json
from huggingface_hub import hf_hub_download


url = "http://:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}

model = "mistralai/Devstral-Small-2507"


def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt


SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "",
            },
        ],
    },
]

data = {"model": model, "messages": messages, "temperature": 0.15}

# Devstral Small 1.1 supports tool calling. If you want to use tools, follow this:
# tools = [ # Define tools for vLLM
#     {
#         "type": "function",
#         "function": {
#             "name": "git_clone",
#             "description": "Clone a git repository",
#             "parameters": {
#                 "type": "object",
#                 "properties": {
#                     "url": {
#                         "type": "string",
#                         "description": "The url of the git repository",
#                     },
#                 },
#                 "required": ["url"],
#             },
#         },
#     }
# ]
# data = {"model": model, "messages": messages, "temperature": 0.15, "tools": tools} # Pass tools to payload.

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
```
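If you send the commented-out `tools` payload above, the reply may contain `tool_calls` instead of plain text. Below is a hedged sketch of handling both cases, following the OpenAI-compatible response schema that vLLM serves; the `handle_message` helper and the canned example message are illustrative only.

```python
# Sketch: handle a reply from the server above when tools were included in the payload.
# handle_message() takes response.json()["choices"][0]["message"] from the snippet above.
import json


def handle_message(message: dict) -> None:
    """Print either the plain answer or the tool calls the model asked for."""
    if message.get("tool_calls"):
        for call in message["tool_calls"]:
            name = call["function"]["name"]
            args = json.loads(call["function"]["arguments"])  # arguments arrive as a JSON string
            print(f"model requested tool '{name}' with arguments {args}")
            # e.g. run the git clone here, then send the result back as a "tool" role message
    else:
        print(message["content"])


# Example with a canned (illustrative) response shape:
handle_message({
    "tool_calls": [
        {"function": {"name": "git_clone", "arguments": "{\"url\": \"https://github.com/mistralai/mistral-common\"}"}}
    ]
})
```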
#### Mistral-inference
Make sure to have `mistral_inference >= 1.6.0` installed.

```bash
pip install mistral_inference --upgrade
```

**_Download_**

```python
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Devstral')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Devstral-Small-2507", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```

**_Chat_**

You can run the model using the following command:

```bash
mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300
```

You can then prompt it with anything you'd like.
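If you prefer to drive `mistral_inference` from Python rather than the `mistral-chat` CLI, a minimal sketch along these lines should work, assuming the standard `mistral_inference` generation entry points; adjust paths, prompt and token limits to your setup.

```python
# Minimal sketch (assumption: standard mistral_inference Python API) for chatting
# with the weights downloaded above, as an alternative to the mistral-chat CLI.
from pathlib import Path

from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

mistral_models_path = Path.home().joinpath('mistral_models', 'Devstral')

# Load the Tekken tokenizer and the model weights from the downloaded folder.
tokenizer = MistralTokenizer.from_file(str(mistral_models_path / "tekken.json"))
model = Transformer.from_folder(mistral_models_path)

# Encode a single-turn chat request (the prompt is an example).
request = ChatCompletionRequest(messages=[UserMessage(content="Write a Python function that reverses a string.")])
tokens = tokenizer.encode_chat_completion(request).tokens

# Generate and decode the completion.
out_tokens, _ = generate(
    [tokens],
    model,
    max_tokens=300,
    temperature=0.15,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.decode(out_tokens[0]))
```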
#### Transformers
Make sure to have installed `mistral-common >= 1.7.0` to use our tokenizer.

```bash
pip install mistral-common --upgrade
```

Then load our tokenizer along with the model and generate:

```python
import torch

from mistral_common.protocol.instruct.messages import (
    SystemMessage, UserMessage
)
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM


def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt


model_id = "mistralai/Devstral-Small-2507"

SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")

tokenizer = MistralTokenizer.from_hf_hub(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(
        messages=[
            SystemMessage(content=SYSTEM_PROMPT),
            UserMessage(content=""),
        ],
    )
)

output = model.generate(
    input_ids=torch.tensor([tokenized.tokens]),
    max_new_tokens=1000,
)[0]

decoded_output = tokenizer.decode(output[len(tokenized.tokens):])
print(decoded_output)
```
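On GPU you will likely want to load the 24B weights in bfloat16 and place them automatically. A hedged variation of the loading step above, using standard `transformers` options rather than anything specific to this repo:

```python
# Sketch: same flow as above, but loading in bfloat16 across available GPUs.
# torch_dtype / device_map are standard transformers options; values are examples.
import torch
from transformers import AutoModelForCausalLM

model_id = "mistralai/Devstral-Small-2507"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory versus float32
    device_map="auto",           # spreads layers over the available GPUs
)

# When generating, move the token ids to the model's device first:
# input_ids = torch.tensor([tokenized.tokens]).to(model.device)
# output = model.generate(input_ids=input_ids, max_new_tokens=1000)[0]
```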
#### LM Studio
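LM Studio can load the GGUF quants of this model and expose them through its built-in OpenAI-compatible local server. A minimal client sketch, assuming the local server is enabled on LM Studio's default port 1234 and the model identifier below is replaced with the name LM Studio shows for the loaded GGUF:

```python
# Sketch: query a model loaded in LM Studio via its OpenAI-compatible local server.
# Assumptions: the local server is enabled on the default port 1234, and the model
# identifier below matches whatever name LM Studio shows for the loaded GGUF.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # replace with the model name shown in LM Studio
    messages=[
        {"role": "user", "content": "Write a Python function that checks if a number is prime."},
    ],
    temperature=0.15,
)
print(response.choices[0].message.content)
```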
#### llama.cpp
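Once GGUF quants are available (see the QUANTS note above), they can be run with llama.cpp or its Python bindings. A hedged sketch with `llama-cpp-python`; the GGUF filename is a placeholder, and the chat formatting comes from the template embedded in the GGUF:

```python
# Sketch: run a GGUF quant of this model with llama-cpp-python.
# The model_path is a placeholder; point it at whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./Mistral-2x24B-MOE-Coder.Q4_K_M.gguf",  # placeholder filename
    n_ctx=32768,      # context window; raise toward 131072 if you have the RAM
    n_gpu_layers=-1,  # offload all layers to GPU if possible
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that merges two sorted lists."}],
    temperature=0.15,
    max_tokens=2048,
)
print(out["choices"][0]["message"]["content"])
```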
### OpenHands (recommended)

#### Launch a server to deploy Devstral Small 1.1

Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use OpenHands to interact with `Devstral Small 1.1`.

In the case of the tutorial we spun up a vLLM server with the command:

```bash
vllm serve mistralai/Devstral-Small-2507 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```

The server address should be in the following format: `http://:8000/v1`

#### Launch OpenHands

You can follow installation of OpenHands [here](https://docs.all-hands.dev/modules/usage/installation).

The easiest way to launch OpenHands is to use the Docker image:

```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.48-nikolaik

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.48-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands:/.openhands \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.48
```

Then, you can access the OpenHands UI at `http://localhost:3000`.

#### Connect to the server

When accessing the OpenHands UI, you will be prompted to connect to a server. You can use the advanced mode to connect to the server you launched earlier.

Fill in the following fields:
- **Custom Model**: `openai/mistralai/Devstral-Small-2507`
- **Base URL**: `http://:8000/v1`
- **API Key**: `token` (or any other token you used to launch the server, if any)
See settings ![OpenHands Settings](assets/open_hands_config.png)
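Before filling these fields in, it can help to confirm that the OpenAI-compatible endpoint actually answers. A hedged sanity check; the `base_url` below is an example, substitute the host of the server you launched:

```python
# Sketch: quick check that the vLLM/Ollama endpoint OpenHands will use is live.
# The base_url is a placeholder; substitute the host of the server you launched.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token")

# List the served models; you should see mistralai/Devstral-Small-2507 here.
for m in client.models.list().data:
    print(m.id)

# One-line smoke test before wiring the endpoint into OpenHands.
reply = client.chat.completions.create(
    model="mistralai/Devstral-Small-2507",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    temperature=0.15,
)
print(reply.choices[0].message.content)
```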
### Cline

#### Launch a server to deploy Devstral Small 1.1

Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use Cline to interact with `Devstral Small 1.1`.

In the case of the tutorial we spun up a vLLM server with the command:

```bash
vllm serve mistralai/Devstral-Small-2507 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```

The server address should be in the following format: `http://:8000/v1`

#### Launch Cline

You can follow installation of Cline [here](https://docs.cline.bot/getting-started/installing-cline). Then you can configure the server address in the settings.
See settings ![Cline Settings](assets/cline_config.png)
### Examples

#### OpenHands: Understanding Test Coverage of Mistral Common

We can start the OpenHands scaffold and link it to a repo to analyze test coverage and identify badly covered files. Here we start with our public `mistral-common` repo.

After the repo is mounted in the workspace, we give the following instruction:

```
Check the test coverage of the repo and then create a visualization of test coverage. Try plotting a few different types of graphs and save them to a png.
```

The agent will first browse the code base to check test configuration and structure.

![mistral common coverage - prompt](assets/mistral_common_coverage/prompt.png)

Then it sets up the testing dependencies and launches the coverage test:

![mistral common coverage - dependencies](assets/mistral_common_coverage/dependencies.png)

Finally, the agent writes the necessary code to visualize the coverage, export the results and save the plots to a png.

![mistral common coverage - visualization](assets/mistral_common_coverage/visualization.png)

At the end of the run, the following plots are produced:

![mistral common coverage - coverage distribution](assets/mistral_common_coverage/coverage_distribution.png)
![mistral common coverage - coverage pie](assets/mistral_common_coverage/coverage_pie.png)
![mistral common coverage - coverage summary](assets/mistral_common_coverage/coverage_summary.png)

and the model is able to explain the results:

![mistral common coverage - navigate](assets/mistral_common_coverage/navigate.png)

#### Cline: build a video game

First initialize Cline inside VSCode and connect it to the server you launched earlier.

We give the following instruction to build the video game:

```
Create a video game that mixes Space Invaders and Pong for the web.

Follow these instructions:
- There are two players, one at the top and one at the bottom. The players are controlling a bar to bounce a ball.
- The first player plays with the keys "a" and "d", the second with the right and left arrows.
- The invaders are located at the center of the screen. They should look like the ones in Space Invaders. Their goal is to shoot at the players randomly. They cannot be destroyed by the ball that passes through them. This means that invaders never die.
- The players' goal is to avoid shootings from the space invaders and send the ball to the edge of the other player.
- The ball bounces on the left and right edges.
- Once the ball touches one of the player's edges, that player loses.
- Once a player is touched 3 times or more by a shooting, the player loses.
- The winning player is the last one standing.
- Display on the UI the number of times a player touched the ball, and the remaining health.
```

![space invaders pong - prompt](assets/space_invaders_pong/prompt.png)

The agent will first create the game:

![space invaders pong - structure](assets/space_invaders_pong/base_structure.png)

Then it will explain how to launch the game:

![space invaders pong - task completed](assets/space_invaders_pong/task%20completed.png)

Finally, the game is ready to be played:

![space invaders pong - game](assets/space_invaders_pong/game.png)

Don't hesitate to iterate or give more information to Devstral to improve the game!

---

# Magistral Small 1.0

---

Building upon Mistral Small 3.1 (2503), **with added reasoning capabilities**, undergoing SFT from Magistral Medium traces and RL on top, it's a small, efficient reasoning model with 24B parameters.

Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.
Learn more about Magistral in our [blog post](https://mistral.ai/news/magistral/).

The model was presented in the paper [Magistral](https://huggingface.co/papers/2506.10910).

## Key Features
- **Reasoning:** Capable of long chains of reasoning traces before providing an answer.
- **Multilingual:** Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.
- **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window:** A 128k context window, **but** performance might degrade past **40k**. Hence we recommend setting the maximum model length to 40k.

## Benchmark Results

| Model | AIME24 pass@1 | AIME25 pass@1 | GPQA Diamond | Livecodebench (v5) |
|-------|---------------|---------------|--------------|--------------------|
| Magistral Medium | 73.59% | 64.95% | 70.83% | 59.36% |
| Magistral Small  | 70.68% | 62.76% | 68.18% | 55.84% |

## Sampling parameters

Please make sure to use:
- `top_p`: 0.95
- `temperature`: 0.7
- `max_tokens`: 40960

## Basic Chat Template

We highly recommend including the default system prompt used during RL for the best results; you can edit and customise it if needed for your specific use case.

```
[SYSTEM_PROMPT]system_prompt

A user will ask you to solve a task. You should first draft your thinking process (inner monologue) until you have derived the final answer. Afterwards, write a self-contained summary of your thoughts (i.e. your summary should be succinct but contain all the critical steps you needed to reach the conclusion). You should use Markdown to format your response. Write both your thoughts and summary in the same language as the task posed by the user. NEVER use \boxed{} in your response.

Your thinking process must follow the template below:
<think>
Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate a correct answer.
</think>

Here, provide a concise summary that reflects your reasoning and presents a clear final answer to the user. Don't mention that this is a summary.

Problem:

[/SYSTEM_PROMPT][INST]user_message[/INST]<think>
reasoning_traces
</think>
assistant_response[INST]user_message[/INST]
```

*`system_prompt`, `user_message` and `assistant_response` are placeholders.*

We invite you to choose, depending on your use case and requirements, between keeping reasoning traces during multi-turn interactions or keeping only the final assistant response.
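For the multi-turn case, if you decide to keep only the final assistant responses, one way to do it is to strip the `<think>...</think>` blocks from previous assistant turns before resending the history. A minimal sketch using a plain regex (not an official mistral-common utility):

```python
# Sketch: drop <think>...</think> reasoning traces from earlier assistant turns
# so that only final answers are carried into the next request.
# This is a plain-regex approach, not an official mistral-common helper.
import re

THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)


def strip_reasoning(messages: list[dict]) -> list[dict]:
    cleaned = []
    for msg in messages:
        if msg["role"] == "assistant" and isinstance(msg["content"], str):
            cleaned.append({**msg, "content": THINK_BLOCK.sub("", msg["content"]).strip()})
        else:
            cleaned.append(msg)
    return cleaned


# Example: the second request only re-sends the summary part of the first answer.
history = [
    {"role": "user", "content": "What is 17 * 24?"},
    {"role": "assistant", "content": "<think>17*24 = 17*20 + 17*4 = 340 + 68 = 408</think>\n17 * 24 = 408."},
    {"role": "user", "content": "And divided by 2?"},
]
print(strip_reasoning(history))
```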
***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth.***

## Usage

The model can be used with the following frameworks:

### Inference

- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [below](#vllm-recommended)

In addition, the community has prepared quantized versions of the model that can be used with the following frameworks (*alphabetically sorted*):
- [`llama.cpp`](https://github.com/ggml-org/llama.cpp): https://huggingface.co/mistralai/Magistral-Small-2506_gguf
- [`lmstudio` (llama.cpp, MLX)](https://lmstudio.ai/): https://lmstudio.ai/models/mistralai/magistral-small
- [`ollama`](https://ollama.com/): https://ollama.com/library/magistral
- [`unsloth` (llama.cpp)](https://huggingface.co/unsloth): https://huggingface.co/unsloth/Magistral-Small-2506-GGUF

### Training

Fine-tuning is possible with (*alphabetically sorted*):
- [`axolotl`](https://github.com/axolotl-ai-cloud/axolotl): https://github.com/axolotl-ai-cloud/axolotl/tree/main/examples/magistral
- [`unsloth`](https://github.com/unslothai/unsloth): https://docs.unsloth.ai/basics/magistral

### Other

You can also use Magistral with:
- [`kaggle`](https://www.kaggle.com/models/mistral-ai/magistral-small-2506): https://www.kaggle.com/models/mistral-ai/magistral-small-2506

### vLLM (recommended)

We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) to implement production-ready inference pipelines.

**_Installation_**

Make sure you install the latest [`vLLM`](https://github.com/vllm-project/vllm/) code:

```
pip install -U vllm \
    --pre \
    --extra-index-url https://wheels.vllm.ai/nightly
```

Doing so should automatically install [`mistral_common >= 1.6.0`](https://github.com/mistralai/mistral-common/releases/tag/v1.6.0).

To check:

```
python -c "import mistral_common; print(mistral_common.__version__)"
```

You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).

Serve the model as follows:

```
vllm serve mistralai/Magistral-Small-2506 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```

Ping the model as follows:

```py
from openai import OpenAI
from huggingface_hub import hf_hub_download

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 40_960

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id


def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt


SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

query = "Write 4 sentences, each with at least 8 words. Now make absolutely sure that every sentence has exactly one word less than the previous sentence."
# or try out other queries
# query = "Exactly how many days ago did the French Revolution start? Today is June 4th, 2025."
# query = "Think about 5 random numbers. Verify if you can combine them with addition, multiplication, subtraction or division to 133"
# query = "If it takes 30 minutes to dry 12 T-shirts in the sun, how long does it take to dry 33 T-shirts?"

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": query}
]

stream = client.chat.completions.create(
    model=model,
    messages=messages,
    stream=True,
    temperature=TEMP,
    top_p=TOP_P,
    max_tokens=MAX_TOK,
)

print("client: Start streaming chat completions...")
printed_content = False

for chunk in stream:
    content = None
    # Check if the chunk contains content
    if hasattr(chunk.choices[0].delta, "content"):
        content = chunk.choices[0].delta.content
    if content is not None:
        if not printed_content:
            printed_content = True
            print("\ncontent:", end="", flush=True)
        # Extract and print the content
        print(content, end="", flush=True)

# content:
# Alright, I need to write 4 sentences where each one has at least 8 words and each subsequent sentence has one fewer word than the previous one.
# ...
# Final boxed answer (the four sentences):
# \[
# \boxed{
# \begin{aligned}
# &\text{1. The quick brown fox jumps over lazy dog and yells hello.} \\
# &\text{2. I saw the cat on the stair with my hat.} \\
# &\text{3. The man in the moon came down quickly today.} \\
# &\text{4. A cat sat on the mat today patiently.}
# \end{aligned}
# }
# \]
```

https://huggingface.co/mistralai/Magistral-Small-2507

---

## Mistral-2x24B-MOE-Power-Devstral-Magistral-Reasoning-Ultimate-44B

SETTINGS

---

Max context is 128k (131072). For reasoning, I strongly suggest a minimum 8k context window if reasoning is on.

REASONING SYSTEM PROMPT (optional):

```
A user will ask you to solve a task. You should first draft your thinking process (inner monologue) until you have derived the final answer. Afterwards, write a self-contained summary of your thoughts (i.e. your summary should be succinct but contain all the critical steps you needed to reach the conclusion). You should use Markdown and Latex to format your response. Write both your thoughts and summary in the same language as the task posed by the user.

Your thinking process must follow the template below:
<think>
Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate a correct answer.
</think>
```

GENERAL:

All versions have a default of 2 experts activated. The number of active experts can be adjusted in LM Studio and other AI apps.

Suggest 2-4 generations, especially if using 1 expert (all models).

Models will accept a "simple prompt" as well as very detailed instructions; however, for larger projects I suggest using Q6/Q8 quants / optimized quants.

Suggested settings (see the sketch at the end of this section):
- Temp .5 to .7 (or lower)
- topk: 20, topp: .8, minp: .05
- rep pen: 1.1 (can be lower)
- Jinja Template (embedded) or CHATML template.
- A System Prompt is not required. (ran tests with blank system prompt)

For additional settings, usage information, benchmarks etc also see:

https://huggingface.co/mistralai/Devstral-Small-2505

and/or

https://huggingface.co/mistralai/Magistral-Small-2506

---

For more information / other Qwen/Mistral Coders / additional settings see:

---

[ https://huggingface.co/DavidAU/Qwen2.5-MOE-2x-4x-6x-8x__7B__Power-CODER__19B-30B-42B-53B-gguf ]

---
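As referenced in the list above, here is a minimal sketch of the suggested samplers sent to an OpenAI-compatible server (vLLM, llama.cpp server, LM Studio, etc.). The endpoint and model name are placeholders, and `top_k` / `min_p` / repetition penalty are not part of the core OpenAI schema, so they are passed as extra fields; whether they are honoured depends on the backend.

```python
# Sketch: the suggested sampler settings sent to an OpenAI-compatible endpoint.
# base_url and model name are placeholders; extra_body fields depend on the backend.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token")  # placeholder endpoint

response = client.chat.completions.create(
    model="Mistral-2x24B-MOE-Coder",  # placeholder model name
    messages=[{"role": "user", "content": "Write a unit test for a binary search function."}],
    temperature=0.6,   # suggested range .5 to .7 (or lower)
    top_p=0.8,
    extra_body={
        "top_k": 20,
        "min_p": 0.05,
        "repetition_penalty": 1.1,  # can be lower
    },
)
print(response.choices[0].message.content)
```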

## Help, Adjustments, Samplers, Parameters and More

---

CHANGE THE NUMBER OF ACTIVE EXPERTS:

See this document:

https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts

Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:

In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern", set the "Smoothing_factor" to 1.5:

: in KoboldCpp -> Settings -> Samplers -> Advanced -> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: in Silly Tavern this is called: "Smoothing"

NOTE: For "text-generation-webui" -> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model).

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor").
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.

Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers

This is a "Class 1" model.

For all settings used for this model (including specifics for its "class"), example generation(s), and an advanced settings guide (which often addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s), please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

That document also covers all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model.
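For completeness, one way to experiment with the active-expert count outside of a GUI is a GGUF key/value override at load time. The sketch below uses `llama-cpp-python`; the `llama.expert_used_count` key name and the GGUF filename are assumptions (inspect your quant's metadata to confirm the key), and the document linked above remains the reference for app-specific methods.

```python
# Sketch: override the number of active experts when loading a GGUF quant.
# Assumptions: the quant stores the expert count under "llama.expert_used_count"
# (check your GGUF's metadata) and the filename below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./Mistral-2x24B-MOE-Coder.Q4_K_M.gguf",  # placeholder quant
    n_ctx=16384,
    n_gpu_layers=-1,
    kv_overrides={"llama.expert_used_count": 2},  # default is 2; try 1 for speed, 3-4 for quality
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a mixture-of-experts layer does."}],
    temperature=0.6,
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```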