# DACTYL Classifiers
A collection of 10 trained AI-generated text classifiers. *Pretrained* models are trained with binary cross-entropy (BCE) loss; *finetuned* models are optimized with deep X-risk objectives.
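Below is a minimal usage sketch for the pretrained checkpoint referenced in the configuration that follows (`ShantanuT01/dactyl-distilroberta-base-pretrained`). It assumes the model exposes a single-logit classification head (consistent with the `BCEWithLogitsLoss` training objective), so a sigmoid turns the logit into the probability that the input text is AI-generated:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "ShantanuT01/dactyl-distilroberta-base-pretrained"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

text = "A passage whose origin we want to score."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logit = model(**inputs).logits.squeeze()  # single logit assumed
print(f"P(AI-generated) = {torch.sigmoid(logit).item():.3f}")
```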
## Training Configuration

The pretrained DistilRoBERTa detector was trained with the following configuration:

```json
{
"training_split": "training",
"evaluation_split": "testing",
"results_path": "bce-distilroberta-base.csv",
"num_epochs": 5,
"model_path": "distilroberta-base",
"tokenizer": "distilroberta-base",
"optimizer": "AdamW",
"optimizer_type": "torch",
"optimizer_args": {
"lr": 2e-05,
"weight_decay": 0.01
},
"loss_fn": "BCEWithLogitsLoss",
"reset_classification_head": false,
"loss_type": "torch",
"loss_fn_args": {},
"needs_loss_fn_as_parameter": false,
"save_path": "ShantanuT01/dactyl-distilroberta-base-pretrained",
"training_args": {
"batch_size": 64,
"needs_sampler": false,
"needs_index": false,
"shuffle": true,
"sampling_rate": null,
"apply_sigmoid": false
},
"best_model_path": "best-distilroberta-model"
}
```
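A minimal sketch of the training loop this configuration describes. Only the hyperparameters (base model, optimizer, loss, batch size, epoch count) come from the config above; the toy dataset and the 1.0 = AI-generated label convention are assumptions for illustration:

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilroberta-base",
    num_labels=1,  # single logit per text, as BCEWithLogitsLoss expects
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
loss_fn = torch.nn.BCEWithLogitsLoss()

# Toy examples standing in for the DACTYL training split (hypothetical).
# Assumed label convention: 1.0 = AI-generated, 0.0 = human-written.
train_data = [
    ("An example of human-written text.", 0.0),
    ("An example of AI-generated text.", 1.0),
]
loader = DataLoader(train_data, batch_size=64, shuffle=True)

model.train()
for epoch in range(5):  # num_epochs from the config
    for texts, labels in loader:
        batch = tokenizer(list(texts), padding=True, truncation=True,
                          return_tensors="pt")
        logits = model(**batch).logits.squeeze(-1)
        loss = loss_fn(logits, labels.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```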
## Evaluation Results

Per-generator performance of the pretrained DistilRoBERTa classifier on the testing split. AP is average precision, AUC is the area under the ROC curve, and OPAUC and TPAUC are the one-way and two-way partial AUC, which restrict scoring to the low-false-positive (and, for TPAUC, high-true-positive) operating region. The *overall* row aggregates all generators.

| Generator | AP | AUC | OPAUC | TPAUC |
|---|---|---|---|---|
| DeepSeek-V3 | 0.998699 | 0.999922 | 0.999197 | 0.992808 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-RedditWritingPrompts-testing | 0.810519 | 0.989865 | 0.968408 | 0.702748 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-abstracts-testing | 0.894979 | 0.988233 | 0.963348 | 0.660559 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-news-testing | 0.700377 | 0.966824 | 0.902982 | 0.138836 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-reviews-testing | 0.440967 | 0.95985 | 0.849814 | 6.54236e-06 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-student_essays-testing | 0.134024 | 0.904907 | 0.747968 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-tweets-testing | 0.791893 | 0.984326 | 0.933165 | 0.387968 |
| claude-3-5-haiku-20241022 | 0.98282 | 0.99758 | 0.988231 | 0.886102 |
| claude-3-5-sonnet-20241022 | 0.997363 | 0.999605 | 0.998469 | 0.985682 |
| gemini-1.5-flash | 0.981901 | 0.997486 | 0.988071 | 0.884501 |
| gemini-1.5-pro | 0.962404 | 0.993594 | 0.974546 | 0.752917 |
| gpt-4o-2024-11-20 | 0.984701 | 0.997702 | 0.989714 | 0.900309 |
| gpt-4o-mini | 0.999182 | 0.99994 | 0.999584 | 0.996457 |
| llama-3.2-90b | 0.967473 | 0.991852 | 0.978925 | 0.795645 |
| llama-3.3-70b | 0.992379 | 0.99847 | 0.99536 | 0.955419 |
| mistral-large-latest | 0.997743 | 0.999722 | 0.998687 | 0.987735 |
| mistral-small-latest | 0.998081 | 0.99951 | 0.998917 | 0.989988 |
| overall | 0.994427 | 0.995509 | 0.984938 | 0.853827 |
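For reference, here is a sketch of how the four metrics could be reproduced for one generator's test subset. AP and AUC come directly from scikit-learn; the partial-AUC variants are hand-rolled below, and the range parameters (`max_fpr=0.2`, `min_tpr=0.8`) are hypothetical, since the collection does not state the ranges used:

```python
import numpy as np
from sklearn.metrics import auc, average_precision_score, roc_auc_score, roc_curve

def one_way_partial_auc(y_true, y_score, max_fpr=0.2):
    """Area under the ROC curve restricted to FPR <= max_fpr, normalized to [0, 1]."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    cut = np.interp(max_fpr, fpr, tpr)          # TPR at the FPR cutoff
    keep = fpr <= max_fpr
    f = np.append(fpr[keep], max_fpr)
    t = np.append(tpr[keep], cut)
    return auc(f, t) / max_fpr

def two_way_partial_auc(y_true, y_score, max_fpr=0.2, min_tpr=0.8):
    """Area over the region {FPR <= max_fpr, TPR >= min_tpr}, normalized to [0, 1]."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    cut = np.interp(max_fpr, fpr, tpr)
    keep = fpr <= max_fpr
    f = np.append(fpr[keep], max_fpr)
    t = np.clip(np.append(tpr[keep], cut), min_tpr, 1.0) - min_tpr
    return auc(f, t) / (max_fpr * (1.0 - min_tpr))

# Toy scores; y_true uses 1 = AI-generated, y_score = sigmoid outputs.
y_true = np.array([0, 0, 0, 1, 1, 1])
y_score = np.array([0.1, 0.4, 0.2, 0.9, 0.8, 0.7])
print("AP   :", average_precision_score(y_true, y_score))
print("AUC  :", roc_auc_score(y_true, y_score))
print("OPAUC:", one_way_partial_auc(y_true, y_score))
print("TPAUC:", two_way_partial_auc(y_true, y_score))
```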