
lmsys/vicuna-33b-v1.3

Tags: Text Generation · Transformers · PyTorch · llama · text-generation-inference
Community (13 discussions)

Adding `safetensors` variant of this model
#13 opened over 1 year ago by SFconvertbot

Adding Evaluation Results
#11 opened about 2 years ago by leaderboard-pr-bot

When can we expect a Vicuna variant of the CodeLlama-2 34B model?
👍 1
#10 opened about 2 years ago by perelmanych

Failed. Reason: The primary container for production variant AllTraffic did not pass the ping health check
#9 opened about 2 years ago by Shivam1410

Bigger is NOT always better...
👍 1 · 5 replies
#8 opened over 2 years ago by MrDevolver

Adding `safetensors` variant of this model
#6 opened over 2 years ago by mmahlwy3

Adding `safetensors` variant of this model
#5 opened over 2 years ago by mmahlwy3

How much GPU memory is required for deployment?
2 replies
#3 opened over 2 years ago by chenfeicqq
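For context on the memory question above, a back-of-the-envelope estimate of weight memory for a 33B-parameter model at common precisions can be sketched as follows (weights only; the KV cache and activations add further overhead in practice, so treat these as lower bounds):

```python
# Rough weight-memory estimate for a 33B-parameter model at several
# precisions. This is an approximation: real deployments also need
# memory for the KV cache, activations, and framework overhead.
def weight_memory_gib(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for a given precision."""
    return n_params * bytes_per_param / 1024**3

N = 33e9  # approximate parameter count of vicuna-33b

for name, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{name}: ~{weight_memory_gib(N, bytes_per_param):.0f} GiB")
```

Under this estimate, fp16 weights alone need roughly 60+ GiB, which is why 8-bit and 4-bit quantization (as asked about in discussion #2 below) is the usual route for single-GPU deployment.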

Is there a 4-bit quantized version for FastChat?
6 replies
#2 opened over 2 years ago by ruradium

Prompt format?
10 replies
#1 opened over 2 years ago by Thireus
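On the prompt-format question above: Vicuna v1.1+ models are generally served with FastChat's `vicuna_v1.1` conversation template (a system sentence, then alternating `USER:`/`ASSISTANT:` turns). The sketch below is an assumption based on that template; `fastchat.conversation.get_conv_template("vicuna_v1.1")` is the source of truth, and `build_prompt` is a hypothetical helper name:

```python
# Sketch of the Vicuna v1.1-style prompt layout (assumed from FastChat's
# "vicuna_v1.1" template; verify against fastchat.conversation).
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def build_prompt(turns):
    """turns: list of (user_msg, assistant_msg) pairs; use None for the
    assistant slot of the final turn so the model completes from there."""
    parts = [SYSTEM]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        if assistant_msg is None:
            parts.append("ASSISTANT:")  # generation continues from here
        else:
            parts.append(f"ASSISTANT: {assistant_msg}</s>")
    return " ".join(parts)

print(build_prompt([("What is the capital of France?", None)]))
```

The trailing `ASSISTANT:` with no content is what cues the model to generate its reply; completed turns are terminated with the `</s>` end-of-sequence token.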