BAAI / bge-m3

Tags: Sentence Similarity · sentence-transformers · PyTorch · ONNX · xlm-roberta · feature-extraction · text-embeddings-inference
Community discussions (132)

On multi-stage training
3 replies · #21 opened over 1 year ago by xxxcliu

What is the relationship between bge-M3 and baai_general_embedding?
2 replies · #20 opened over 1 year ago by biaodiluer

How many GPUs are required to fine-tune bge-m3 on 1 million triplets?
4 replies · #18 opened over 1 year ago by wilfoderek

How do you suggest using ColBERT vectors?
1 reply · #16 opened almost 2 years ago by EquinoxElahin
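The ColBERT-vectors question above refers to bge-m3's multi-vector output: per-token embeddings that are compared with ColBERT-style late interaction rather than a single dot product. As a minimal sketch of that scoring step (not the model's API), the snippet below implements MaxSim with plain NumPy; `maxsim_score` and the toy token matrices are illustrative names and data, not anything shipped by the model.

```python
import numpy as np

def maxsim_score(q_vecs: np.ndarray, d_vecs: np.ndarray) -> float:
    """ColBERT-style late-interaction (MaxSim) score.

    For each query token vector, take its maximum similarity over all
    document token vectors, then sum across query tokens.
    """
    sims = q_vecs @ d_vecs.T          # (num_q_tokens, num_d_tokens)
    return float(sims.max(axis=1).sum())

# Toy per-token vectors (in practice these come from the model,
# one row per token, typically L2-normalized).
q = np.array([[1.0, 0.0], [0.0, 1.0]])
d_close = np.array([[0.9, 0.1], [0.1, 0.9]])   # similar to the query
d_far = np.array([[-1.0, 0.0], [0.0, -1.0]])   # dissimilar

assert maxsim_score(q, d_close) > maxsim_score(q, d_far)
```

Because each query token matches its own best document token, MaxSim rewards fine-grained term-level overlap that a single pooled embedding can wash out.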

There may be a few little bugs
👍 2
3 replies · #14 opened almost 2 years ago by prudant

Serving the model
2 replies · #13 opened almost 2 years ago by prudant

Datasets
1 reply · #11 opened almost 2 years ago by AbdelkerimDassi

Issue while fine-tuning the embedding model because of use_reentrant = True
2 replies · #10 opened almost 2 years ago by DamianS89

Optimize inference speed
6 replies · #9 opened almost 2 years ago by CoolWP

OOM occurs when converting the model to TorchScript
1 reply · #8 opened almost 2 years ago by LeeJungHoon

Add benchmark to MTEB
👍 1
6 replies · #7 opened almost 2 years ago by sam-gab

Base model
16 replies · #6 opened almost 2 years ago by ambivalent02

It is now working in Colab
3 replies · #5 opened almost 2 years ago by LeeJungHoon

How does Chinese dense retrieval performance compare with BGE V1.5?
3 replies · #3 opened almost 2 years ago by TianyuLLM

OOMs on an 8 GB GPU, is that normal?
4 replies · #2 opened almost 2 years ago by tanimazsin130