---
language:
- ta
- en
license: llama2
tags:
- TensorBlock
- GGUF
base_model: abhinand/tamil-llama-13b-instruct-v0.1
model-index:
- name: tamil-llama-13b-instruct-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 54.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 79.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.22
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 7.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1
name: Open LLM Leaderboard
---
[Website](https://tensorblock.co) | [Twitter](https://twitter.com/tensorblock_aoi) | [Discord](https://discord.gg/Ej5NmeHFf2) | [GitHub](https://github.com/TensorBlock) | [Telegram](https://t.me/TensorBlock)
## abhinand/tamil-llama-13b-instruct-v0.1 - GGUF
This repo contains GGUF format model files for [abhinand/tamil-llama-13b-instruct-v0.1](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
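If you want to reproduce a compatible runtime, the sketch below builds llama.cpp at the referenced commit. It is a minimal example, assuming `git` and a standard CMake toolchain are available; any later llama.cpp release should also load these files.
```shell
# Build llama.cpp at the commit these GGUF files were verified against
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout a6744e43e80f4be6398fc7733a01642c846dce1d  # commit b4242
cmake -B build
cmake --build build --config Release -j
```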
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [tamil-llama-13b-instruct-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/tamil-llama-13b-instruct-v0.1-GGUF/blob/main/tamil-llama-13b-instruct-v0.1-Q2_K.gguf) | Q2_K | 4.949 GB | smallest, significant quality loss - not recommended for most purposes |
| [tamil-llama-13b-instruct-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/tamil-llama-13b-instruct-v0.1-GGUF/blob/main/tamil-llama-13b-instruct-v0.1-Q3_K_S.gguf) | Q3_K_S | 5.762 GB | very small, high quality loss |
| [tamil-llama-13b-instruct-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/tamil-llama-13b-instruct-v0.1-GGUF/blob/main/tamil-llama-13b-instruct-v0.1-Q3_K_M.gguf) | Q3_K_M | 6.441 GB | very small, high quality loss |
| [tamil-llama-13b-instruct-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/tamil-llama-13b-instruct-v0.1-GGUF/blob/main/tamil-llama-13b-instruct-v0.1-Q3_K_L.gguf) | Q3_K_L | 7.032 GB | small, substantial quality loss |
| [tamil-llama-13b-instruct-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/tamil-llama-13b-instruct-v0.1-GGUF/blob/main/tamil-llama-13b-instruct-v0.1-Q4_0.gguf) | Q4_0 | 7.479 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tamil-llama-13b-instruct-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/tamil-llama-13b-instruct-v0.1-GGUF/blob/main/tamil-llama-13b-instruct-v0.1-Q4_K_S.gguf) | Q4_K_S | 7.537 GB | small, greater quality loss |
| [tamil-llama-13b-instruct-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/tamil-llama-13b-instruct-v0.1-GGUF/blob/main/tamil-llama-13b-instruct-v0.1-Q4_K_M.gguf) | Q4_K_M | 7.980 GB | medium, balanced quality - recommended |
| [tamil-llama-13b-instruct-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/tamil-llama-13b-instruct-v0.1-GGUF/blob/main/tamil-llama-13b-instruct-v0.1-Q5_0.gguf) | Q5_0 | 9.096 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tamil-llama-13b-instruct-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/tamil-llama-13b-instruct-v0.1-GGUF/blob/main/tamil-llama-13b-instruct-v0.1-Q5_K_S.gguf) | Q5_K_S | 9.096 GB | large, low quality loss - recommended |
| [tamil-llama-13b-instruct-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/tamil-llama-13b-instruct-v0.1-GGUF/blob/main/tamil-llama-13b-instruct-v0.1-Q5_K_M.gguf) | Q5_K_M | 9.354 GB | large, very low quality loss - recommended |
| [tamil-llama-13b-instruct-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/tamil-llama-13b-instruct-v0.1-GGUF/blob/main/tamil-llama-13b-instruct-v0.1-Q6_K.gguf) | Q6_K | 10.814 GB | very large, extremely low quality loss |
| [tamil-llama-13b-instruct-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/tamil-llama-13b-instruct-v0.1-GGUF/blob/main/tamil-llama-13b-instruct-v0.1-Q8_0.gguf) | Q8_0 | 14.006 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/tamil-llama-13b-instruct-v0.1-GGUF --include "tamil-llama-13b-instruct-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/tamil-llama-13b-instruct-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
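Once a file is downloaded, it can be run with any llama.cpp build from the commit referenced above or later. Below is a minimal sketch; the prompt text and generation length are illustrative, and `llama-cli` is assumed to have been built as shown earlier (or installed from a recent llama.cpp release).
```shell
# Example: run the Q2_K file downloaded above with llama.cpp's CLI
./llama-cli \
  -m MY_LOCAL_DIR/tamil-llama-13b-instruct-v0.1-Q2_K.gguf \
  -p "Translate to Tamil: Hello, how are you?" \
  -n 128
```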