---
language:
- en
pipeline_tag: text-generation
tags:
- esper
- esper-2
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- code
- code-instruct
- python
- dev-ops
- terraform
- azure
- aws
- gcp
- architect
- engineer
- developer
- conversational
- chat
- instruct
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
- sequelbox/Titanium
- sequelbox/Tachibana
- sequelbox/Supernova
model_type: llama
model-index:
- name: ValiantLabs/Llama3.1-8B-Esper2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-Shot)
      type: Winogrande
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 75.85
      name: acc
license: llama3.1
---
**[ESPER 3 COMING SOON! Click here to support our open-source dataset and model releases!](https://huggingface.co/spaces/sequelbox/SupportOpenSource)**



Esper 2 is a DevOps and cloud architecture code specialist built on Llama 3.1 8b.

- Expertise-driven: an AI assistant focused on AWS, Azure, GCP, Terraform, Dockerfiles, pipelines, shell scripts, and more!
- Real-world problem solving and high-quality code-instruct performance within the Llama 3.1 Instruct chat format.
- Finetuned on synthetic [DevOps-instruct](https://huggingface.co/datasets/sequelbox/Titanium) and [code-instruct](https://huggingface.co/datasets/sequelbox/Tachibana) data generated with Llama 3.1 405b.
- Overall chat performance supplemented with [generalist chat data.](https://huggingface.co/datasets/sequelbox/Supernova)

Try our code-instruct AI assistant [Enigma!](https://huggingface.co/ValiantLabs/Llama3.1-8B-Enigma)
## Version

This is the **2024-10-02** release of Esper 2 for Llama 3.1 8b.

Esper 2 is now available for [Llama 3.2 3b!](https://huggingface.co/ValiantLabs/Llama3.2-3B-Esper2)

Esper 2 will be coming to more model sizes soon :)
## Prompting Guide

Esper 2 uses the [Llama 3.1 Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) prompt format. The example script below can be used as a starting point for general chat:

```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-Esper2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Hi, how do I optimize the size of a Docker image?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=2048,
)

print(outputs[0]["generated_text"][-1])
```
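If you prefer to work with the model and tokenizer directly rather than through `pipeline`, the sketch below shows an equivalent flow using `AutoTokenizer` and `AutoModelForCausalLM` with the tokenizer's built-in chat template. The example prompt and the generation settings (such as `max_new_tokens`) are illustrative assumptions, not recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/Llama3.1-8B-Esper2"

# Load the tokenizer and model; bfloat16 + device_map="auto" mirrors the pipeline example above.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Write a Terraform snippet that creates a private S3 bucket."},
]

# apply_chat_template formats the messages in the Llama 3.1 Instruct prompt style
# and appends the assistant header so the model continues as the assistant.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# max_new_tokens is an illustrative choice; adjust to your use case.
outputs = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```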
## The Model

Esper 2 is built on top of Llama 3.1 8b Instruct, improving performance through high-quality DevOps, code, and chat data in the Llama 3.1 Instruct prompt style.

Our current version of Esper 2 is trained on DevOps data from [sequelbox/Titanium](https://huggingface.co/datasets/sequelbox/Titanium), supplemented by code-instruct data from [sequelbox/Tachibana](https://huggingface.co/datasets/sequelbox/Tachibana) and general chat data from [sequelbox/Supernova.](https://huggingface.co/datasets/sequelbox/Supernova)
|  | |
| Esper 2 is created by [Valiant Labs.](http://valiantlabs.ca/) | |
| [Check out our HuggingFace page for Shining Valiant 2 Enigma, and our other Build Tools models for creators!](https://huggingface.co/ValiantLabs) | |
| We care about open source. | |
| For everyone to use. | |
| We encourage others to finetune further from our models. |