Column schema for the records below (string columns show min–max lengths, "classes" gives the number of distinct values, and ⌀ marks nullable columns):

| Column | Dtype | Range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0–1.86M |
| model_id | string | lengths 5–133 |
| likes | int64 | 0–12.5k |
| trendingScore | float64 | 0–479 |
| private | bool | 1 class |
| downloads | int64 | 0–142M |
| tags | string | lengths 13–28.7k |
| pipeline_tag | string | 54 classes |
| library_name | string | 629 classes |
| createdAt | date | 2022-03-02 23:29:04 to 2025-07-12 17:14:09 |
| parent_model | string | lengths 2–3.4k |
| finetune_parent | string | lengths 2–109 |
| quantized_parent | string | lengths 2–105 |
| adapter_parent | string | lengths 2–110 |
| merge_parent | string | lengths 2–3.4k |
| license | string | 79 classes |
| region | string | 3 classes |
| arxiv_id | float64 | 0–2.51k ⌀ |
| card | string | lengths 1–18.3M ⌀ |
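To make the schema concrete, here is a minimal querying sketch with pandas; the file name is hypothetical and assumes the records were exported to Parquet:

```python
import pandas as pd

# Hypothetical export of the records below; adjust the path to your copy.
df = pd.read_parquet("hf_models_metadata.parquet")

# Example query against the schema above: public models that ship a model
# card, sorted by download count.
with_cards = df[(~df["private"]) & df["card"].notna()]
print(with_cards.sort_values("downloads", ascending=False)[["model_id", "downloads"]].head())
```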
**Row 1,860,400**
model_id: New-videos-Amira-Ishtara-viral-Video-Links/ORIGINAL.FULL.VIDEOS.Amira.Ishtara.Viral.Video.Official.Tutorial
likes: 0 | trendingScore: 0 | private: false | downloads: 0 | tags: ['region:us']
pipeline_tag: null | library_name: null | createdAt: 2025-07-12T17:09:25.000Z
parent_model: [] | finetune_parent: [] | quantized_parent: [] | adapter_parent: [] | merge_parent: []
license: null | region: us | arxiv_id: null | card: null
**Row 1,860,401**
model_id: rizki8/longt5-lora-findsum
likes: 0 | trendingScore: 0 | private: false | downloads: 0 | tags: ['region:us']
pipeline_tag: null | library_name: null | createdAt: 2025-07-12T17:09:38.000Z
parent_model: [] | finetune_parent: [] | quantized_parent: [] | adapter_parent: [] | merge_parent: []
license: null | region: us | arxiv_id: null | card: null
**Row 1,860,402**
model_id: New-Priyanka-Pandit-Viral-Video/FULL.VIDEOS.Priyanka.Pandit.Viral.Video.Official.Tutorial
likes: 0 | trendingScore: 0 | private: false | downloads: 0 | tags: ['region:us']
pipeline_tag: null | library_name: null | createdAt: 2025-07-12T17:10:12.000Z
parent_model: [] | finetune_parent: [] | quantized_parent: [] | adapter_parent: [] | merge_parent: []
license: null | region: us | arxiv_id: null | card: null
**Row 1,860,403**
model_id: RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf
likes: 0 | trendingScore: 0 | private: false | downloads: 0 | tags: ['gguf', 'endpoints_compatible', 'region:us', 'conversational']
pipeline_tag: null | library_name: null | createdAt: 2025-07-12T17:10:45.000Z
parent_model: [] | finetune_parent: [] | quantized_parent: [] | adapter_parent: [] | merge_parent: []
license: null | region: us | arxiv_id: null
card:
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
oh_teknium_scaling_down_ratiocontrolled_0.1 - GGUF
- Model creator: https://huggingface.co/mlfoundations-dev/
- Original model: https://huggingface.co/mlfoundations-dev/oh_teknium_scaling_down_ratiocontrolled_0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q2_K.gguf) | Q2_K | 2.96GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q3_K.gguf) | Q3_K | 3.74GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q4_0.gguf) | Q4_0 | 4.34GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q4_K.gguf) | Q4_K | 4.58GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q4_1.gguf) | Q4_1 | 4.78GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q5_0.gguf) | Q5_0 | 5.21GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q5_K.gguf) | Q5_K | 5.34GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q5_1.gguf) | Q5_1 | 5.65GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q6_K.gguf) | Q6_K | 6.14GB |
| [oh_teknium_scaling_down_ratiocontrolled_0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf/blob/main/oh_teknium_scaling_down_ratiocontrolled_0.1.Q8_0.gguf) | Q8_0 | 7.95GB |
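For readers who want to try one of these files, here is a minimal download sketch using huggingface_hub; the choice of Q4_K_M is illustrative, not a recommendation:

```python
from huggingface_hub import hf_hub_download

# Fetch a single quant from the table above.
path = hf_hub_download(
    repo_id="RichardErkhov/mlfoundations-dev_-_oh_teknium_scaling_down_ratiocontrolled_0.1-gguf",
    filename="oh_teknium_scaling_down_ratiocontrolled_0.1.Q4_K_M.gguf",
)
print(path)  # local file path, usable with llama.cpp or llama-cpp-python
```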
Original model description:
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: oh_teknium_scaling_down_ratiocontrolled_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# oh_teknium_scaling_down_ratiocontrolled_0.1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the mlfoundations-dev/oh_teknium_scaling_down_ratiocontrolled_0.1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 3.0
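For reference, the effective batch size is consistent: 8 (per device) × 8 (devices) × 8 (gradient accumulation) = 512. Below is a hedged sketch of the same configuration as transformers TrainingArguments; the output_dir is hypothetical:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="oh_teknium_scaling_down_ratiocontrolled_0.1",  # hypothetical
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    lr_scheduler_type="constant",
    num_train_epochs=3.0,
    seed=42,
)
# Launched across 8 GPUs, this yields 8 * 8 * 8 = 512 samples per optimizer step.
```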
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.68 | 0.9778 | 33 | 0.6524 |
| 0.6045 | 1.9667 | 66 | 0.6236 |
| 0.5201 | 2.9556 | 99 | 0.6307 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
**Row 1,860,404**
model_id: phospho-app/yyyy76514263-ACT-data_first_13_07-g8wsr
likes: 0 | trendingScore: 0 | private: false | downloads: 0 | tags: ['phosphobot', 'act', 'region:us']
pipeline_tag: null | library_name: null | createdAt: 2025-07-12T17:10:50.000Z
parent_model: [] | finetune_parent: [] | quantized_parent: [] | adapter_parent: [] | merge_parent: []
license: null | region: us | arxiv_id: null
card:
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Training process failed with exit code 1:
return self.transform(batch)
File "/lerobot/lerobot/common/datasets/utils.py", line 272, in hf_transform_to_torch
items_dict[key] = [x if isinstance(x, str) else torch.tensor(x) for x in items_dict[key]]
File "/lerobot/lerobot/common/datasets/utils.py", line 272, in <listcomp>
items_dict[key] = [x if isinstance(x, str) else torch.tensor(x) for x in items_dict[key]]
RuntimeError: Could not infer dtype of NoneType
wandb:
wandb: 🚀 View run act at: https://wandb.ai/lawyiyang08-null/phospho-ACT/runs/4alaer7z
wandb: Find logs at: ../data/phospho-app/yyyy76514263-ACT-data_first_13_07-g8wsr/1752340250.588072/wandb/run-20250712_191105-4alaer7z/logs
```
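For context, the root cause is reproducible in isolation: torch.tensor() cannot convert a Python None, so any missing value in a batch column triggers exactly this error. A minimal sketch (the items list is hypothetical):

```python
import torch

# A batch column containing a missing value, as the traceback suggests.
items = [1.0, None, 2.0]

try:
    [x if isinstance(x, str) else torch.tensor(x) for x in items]
except RuntimeError as e:
    print(e)  # Could not infer dtype of NoneType
```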
## Training parameters:
- **Dataset**: [yyyy76514263/data_first_13_07](https://huggingface.co/datasets/yyyy76514263/data_first_13_07)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 60
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
**Row 1,860,405**
model_id: cedricgaudron/scanner-tickets
likes: 0 | trendingScore: 0 | private: false | downloads: 0 | tags: ['safetensors', 't5', 'region:us']
pipeline_tag: null | library_name: null | createdAt: 2025-07-12T17:11:50.000Z
parent_model: [] | finetune_parent: [] | quantized_parent: [] | adapter_parent: [] | merge_parent: []
license: null | region: us | arxiv_id: null
card:
---
language: fr
license: mit
tags:
- t5
- invoice
- receipt
- document-information-extraction
- ocr
pipeline_tag: text2text-generation
---
# 🧾 Scanner Tickets – Automatic data extraction
This T5 model was trained to **automatically extract key fields from OCR text of invoices and receipts**.
## 📌 Extracted fields:
- 🧾 **Type**: invoice or receipt
- 💸 **Total amount**
- 📅 **Date**
- 🏢 **Vendor**
- 🔢 **SIRET**
- 🔢 **VAT number**
- #️⃣ **Invoice or receipt number**
## 🔍 Usage example
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("cedricgaudron/scanner-tickets")
model = T5ForConditionalGeneration.from_pretrained("cedricgaudron/scanner-tickets")
texte = """CARREFOUR
TOTAL TTC : 24,75€
Date : 12/06/2024
SIRET : 123 456 789 00012
TVA : FR 12 345678912"""
input_ids = tokenizer("Extrais les données suivantes en format JSON :\n" + texte, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
**Row 1,860,406**
model_id: Amal17/NusaBERT-concate-BiGRU-NusaParagraph-emot
likes: 0 | trendingScore: 0 | private: false | downloads: 0 | tags: ['license:apache-2.0', 'region:us']
pipeline_tag: null | library_name: null | createdAt: 2025-07-12T17:13:42.000Z
parent_model: [] | finetune_parent: [] | quantized_parent: [] | adapter_parent: [] | merge_parent: []
license: apache-2.0 | region: us | arxiv_id: null
card:
---
license: apache-2.0
---
**Row 1,860,407**
model_id: jackrvn/bidirectional-dialect-translator
likes: 0 | trendingScore: 0 | private: false | downloads: 0
tags: ['transformers', 'safetensors', 't5', 'text2text-generation', 'arxiv:1910.09700', 'autotrain_compatible', 'text-generation-inference', 'endpoints_compatible', 'region:us']
pipeline_tag: text-generation | library_name: transformers | createdAt: 2025-07-12T17:13:59.000Z
parent_model: [] | finetune_parent: [] | quantized_parent: [] | adapter_parent: [] | merge_parent: []
license: null | region: us | arxiv_id: 1910.097
card:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
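Pending an official snippet, here is a minimal loading sketch inferred only from the repository tags (transformers, t5, text2text-generation); the prompt format is a guess and the example input is hypothetical:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "jackrvn/bidirectional-dialect-translator"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Hypothetical input; the card does not document the expected prompt.
inputs = tokenizer("translate: example sentence", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```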
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**Row 1,860,408**
model_id: Amal17/NusaBERT-concate-BiGRU-NusaParagraph-topic
likes: 0 | trendingScore: 0 | private: false | downloads: 0 | tags: ['license:apache-2.0', 'region:us']
pipeline_tag: null | library_name: null | createdAt: 2025-07-12T17:14:00.000Z
parent_model: [] | finetune_parent: [] | quantized_parent: [] | adapter_parent: [] | merge_parent: []
license: apache-2.0 | region: us | arxiv_id: null
card:
---
license: apache-2.0
---
**Row 1,860,409**
model_id: ond-ai/ond-agent-1.3-8b-ckpt-1
likes: 0 | trendingScore: 0 | private: false | downloads: 0 | tags: ['region:us']
pipeline_tag: null | library_name: null | createdAt: 2025-07-12T17:14:07.000Z
parent_model: [] | finetune_parent: [] | quantized_parent: [] | adapter_parent: [] | merge_parent: []
license: null | region: us | arxiv_id: null
card:
---
tags:
- text-generation
---
**Row 1,860,410**
model_id: jackrvn/biderectional-dialect-translator
likes: 0 | trendingScore: 0 | private: false | downloads: 0 | tags: ['transformers', 'arxiv:1910.09700', 'endpoints_compatible', 'region:us']
pipeline_tag: null | library_name: transformers | createdAt: 2025-07-12T17:14:09.000Z
parent_model: [] | finetune_parent: [] | quantized_parent: [] | adapter_parent: [] | merge_parent: []
license: null | region: us | arxiv_id: 1910.097 | card: null