jmajkutewicz/zephyr-7b-dpo_dataset-mix
Text Generation · PEFT · Safetensors · 4 datasets · English · mistral · lora · dpo · alignment · conversational
License: apache-2.0
zephyr-7b-dpo_dataset-mix · 337 MB · 1 contributor · 2 commits
Latest commit: jmajkutewicz, "Upload folder using huggingface_hub" (9ae5e0a, verified), about 2 months ago
| File                      | Size      | Last commit                         | Date               |
|---------------------------|-----------|-------------------------------------|--------------------|
| .gitattributes            | 1.52 kB   | initial commit                      | about 2 months ago |
| README.md                 | 2.08 kB   | Upload folder using huggingface_hub | about 2 months ago |
| adapter_config.json       | 740 Bytes | Upload folder using huggingface_hub | about 2 months ago |
| adapter_model.safetensors | 336 MB    | Upload folder using huggingface_hub | about 2 months ago |
| config.json               | 654 Bytes | Upload folder using huggingface_hub | about 2 months ago |
| special_tokens_map.json   | 551 Bytes | Upload folder using huggingface_hub | about 2 months ago |
| tokenizer.json            | 1.8 MB    | Upload folder using huggingface_hub | about 2 months ago |
| tokenizer_config.json     | 1.42 kB   | Upload folder using huggingface_hub | about 2 months ago |
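The adapter_config.json and adapter_model.safetensors files above indicate this repo is a PEFT/LoRA adapter rather than a full model. A minimal sketch of how such an adapter is typically loaded, assuming the `peft` and `transformers` libraries are installed (PEFT resolves the base model from adapter_config.json automatically; the heavy download is deferred into a function so the sketch reads without a GPU):

```python
# Repo id as shown on the model page.
ADAPTER_ID = "jmajkutewicz/zephyr-7b-dpo_dataset-mix"

def load_adapter():
    # Imports deferred so nothing heavy runs until explicitly called.
    import torch
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    # AutoPeftModelForCausalLM reads adapter_config.json, fetches the
    # referenced base model, and attaches the LoRA weights on top.
    model = AutoPeftModelForCausalLM.from_pretrained(
        ADAPTER_ID,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    # The repo ships its own tokenizer files (tokenizer.json etc.),
    # so the tokenizer can be loaded from the adapter repo directly.
    tokenizer = AutoTokenizer.from_pretrained(ADAPTER_ID)
    return model, tokenizer

if __name__ == "__main__":
    model, tokenizer = load_adapter()
```

This is a sketch under stated assumptions, not instructions from the model card itself; check the repo's README.md for the author's intended usage and base model.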