# DACTYL Text Detector

A `distilroberta-base` classifier fine-tuned for machine-generated text detection with LibAUC's two-way partial-AUC objective (`tpAUC_KL_Loss` optimized by `SOTAs`).

## Training Configuration

```json
{
    "training_split": "training",
    "evaluation_split": "testing",
    "results_path": "libauc-distilroberta-base.csv",
    "num_epochs": 1,
    "model_path": "ShantanuT01/dactyl-distilroberta-base-pretrained",
    "tokenizer": "distilroberta-base",
    "optimizer": "SOTAs",
    "optimizer_type": "libauc",
    "optimizer_args": {
        "lr": 1e-05
    },
    "loss_fn": "tpAUC_KL_Loss",
    "reset_classification_head": true,
    "loss_type": "libauc",
    "loss_fn_args": {
        "data_len": 466005
    },
    "needs_loss_fn_as_parameter": false,
    "save_path": "ShantanuT01/dactyl-distilroberta-base-finetuned",
    "training_args": {
        "batch_size": 64,
        "needs_sampler": true,
        "needs_index": true,
        "shuffle": false,
        "sampling_rate": 0.5,
        "apply_sigmoid": true
    },
    "best_model_path": "best-tpauc-model-distilroberta-base"
}
```
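The configuration above maps onto LibAUC's partial-AUC pipeline: `tpAUC_KL_Loss` as the objective, the `SOTAs` optimizer (constructed without the loss function, per `"needs_loss_fn_as_parameter": false`), and a `DualSampler` that fixes the positive ratio per batch (`"sampling_rate": 0.5`). Below is a minimal sketch of how these pieces might be wired together; `train_dataset` (a PyTorch dataset yielding `(index, inputs, label)`), the single-logit head, and the loss's exact argument order are assumptions to verify against your LibAUC version, not part of this card.

```python
# Minimal sketch, assuming: a PyTorch `train_dataset` yielding
# (index, encoded_inputs, label); a single-logit classification head;
# and current LibAUC constructor signatures (verify against your version).
import torch
from torch.utils.data import DataLoader
from libauc.losses import tpAUC_KL_Loss
from libauc.optimizers import SOTAs
from libauc.sampler import DualSampler
from transformers import AutoModelForSequenceClassification

# "reset_classification_head": true — load the pretrained backbone with a
# freshly initialized one-logit head.
model = AutoModelForSequenceClassification.from_pretrained(
    "ShantanuT01/dactyl-distilroberta-base-pretrained",
    num_labels=1,
    ignore_mismatched_sizes=True,
)

loss_fn = tpAUC_KL_Loss(data_len=466005)        # "loss_fn_args"
optimizer = SOTAs(model.parameters(), lr=1e-5)  # "needs_loss_fn_as_parameter": false

# "needs_sampler": true — DualSampler controls the positive:negative ratio
# per batch ("sampling_rate": 0.5), which the partial-AUC loss relies on.
sampler = DualSampler(train_dataset, batch_size=64, sampling_rate=0.5)
loader = DataLoader(train_dataset, batch_size=64, sampler=sampler)

model.train()
for index, inputs, labels in loader:            # "needs_index": true
    logits = model(**inputs).logits.squeeze(-1)
    preds = torch.sigmoid(logits)               # "apply_sigmoid": true
    loss = loss_fn(preds, labels.float(), index)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```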

## Results

| Model | AP score | AUC score | OPAUC score | TPAUC score |
|---|---|---|---|---|
| DeepSeek-V3 | 0.997783 | 0.999901 | 0.998984 | 0.991968 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-RedditWritingPrompts-testing | 0.385431 | 0.965153 | 0.925687 | 0.38438 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-abstracts-testing | 0.875288 | 0.985992 | 0.971279 | 0.75316 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-news-testing | 0.546747 | 0.90218 | 0.847515 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-reviews-testing | 0.385851 | 0.935973 | 0.835005 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-student_essays-testing | 0.0376482 | 0.806328 | 0.611652 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-tweets-testing | 0.594489 | 0.976732 | 0.917376 | 0.310269 |
| claude-3-5-haiku-20241022 | 0.982784 | 0.996476 | 0.989317 | 0.899193 |
| claude-3-5-sonnet-20241022 | 0.994605 | 0.999284 | 0.997636 | 0.982059 |
| gemini-1.5-flash | 0.980723 | 0.997217 | 0.987467 | 0.879463 |
| gemini-1.5-pro | 0.964801 | 0.994032 | 0.976397 | 0.771443 |
| gpt-4o-2024-11-20 | 0.983534 | 0.997288 | 0.989648 | 0.903522 |
| gpt-4o-mini | 0.996224 | 0.999868 | 0.998646 | 0.992356 |
| llama-3.2-90b | 0.971754 | 0.992887 | 0.982388 | 0.831708 |
| llama-3.3-70b | 0.990585 | 0.998126 | 0.99489 | 0.953972 |
| mistral-large-latest | 0.995819 | 0.999661 | 0.998056 | 0.984268 |
| mistral-small-latest | 0.996215 | 0.999667 | 0.998216 | 0.986264 |
| overall | 0.992324 | 0.993112 | 0.982549 | 0.834284 |
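Rows break out the detector's scores by text generator, with "overall" covering the full testing split. AP and AUC are standard average precision and ROC-AUC; OPAUC restricts the ROC curve to a false-positive-rate budget (one-way partial AUC) and TPAUC additionally requires a minimum true-positive rate (two-way partial AUC). The card does not state which FPR/TPR thresholds were used, so the `max_fpr=0.1` / `min_tpr=0.9` values in the sketch below are placeholders, and the synthetic `y_true`/`y_score` exist only to make it runnable.

```python
# Hedged sketch of the four reported metrics; the thresholds and the
# synthetic data are placeholders, not the evaluation protocol of this card.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, roc_curve

def two_way_pauc(y_true, y_score, max_fpr=0.1, min_tpr=0.9):
    """ROC area with FPR <= max_fpr and TPR >= min_tpr, normalized to [0, 1]."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    mask = fpr <= max_fpr
    fpr_c = np.append(fpr[mask], max_fpr)                       # close region at max_fpr
    tpr_c = np.append(tpr[mask], np.interp(max_fpr, fpr, tpr))
    gain = np.clip(tpr_c - min_tpr, 0.0, None)                  # only TPR above min_tpr counts
    area = np.sum((gain[1:] + gain[:-1]) * np.diff(fpr_c)) / 2  # trapezoid rule
    return area / (max_fpr * (1.0 - min_tpr))

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                          # placeholder labels
y_score = np.clip(0.5 * y_true + rng.normal(0.25, 0.2, 1000), 0.0, 1.0)

print("AP:   ", average_precision_score(y_true, y_score))
print("AUC:  ", roc_auc_score(y_true, y_score))
# sklearn's max_fpr variant returns the McClish-standardized one-way pAUC.
print("OPAUC:", roc_auc_score(y_true, y_score, max_fpr=0.1))
print("TPAUC:", two_way_pauc(y_true, y_score))
```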
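To score a text with the fine-tuned detector, a minimal inference sketch follows. It assumes the single-logit head implied by the training configuration and that a higher sigmoid score means "machine-generated"; the label convention is an assumption, as it is not documented on this card.

```python
# Minimal inference sketch; the single-logit head and the label convention
# (higher score = machine-generated) are assumptions, not documented here.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")  # per "tokenizer"
model = AutoModelForSequenceClassification.from_pretrained(
    "ShantanuT01/dactyl-distilroberta-base-finetuned"
)
model.eval()

text = "Example passage to score."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits.squeeze()
score = torch.sigmoid(logits).item()  # "apply_sigmoid": true during training
print(f"detector score: {score:.4f}")
```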