DACTYL Text Detector

Training Configuration

{
    "training_split": "training",
    "evaluation_split": "testing",
    "results_path": "bce-debertav3-large.csv",
    "num_epochs": 1,
    "model_path": "best-debertav3-model",
    "tokenizer": "microsoft/deberta-v3-large",
    "optimizer": "AdamW",
    "optimizer_type": "torch",
    "optimizer_args": {
        "lr": 2e-5,
        "weight_decay": 0.01
    },
    "loss_fn": "BCEWithLogitsLoss",
    "reset_classification_head": false,
    "loss_type": "torch",
    "loss_fn_args": {},
    "needs_loss_fn_as_parameter": false,
    "save_path": "ShantanuT01/dactyl-deberta-v3-large-pretrained",
    "training_args": {
        "batch_size": 16,
        "needs_sampler": false,
        "needs_index": false,
        "shuffle": true,
        "sampling_rate": null,
        "apply_sigmoid": false
    },
    "best_model_path": "best-debertav3-model",
    "eval_batch_size": 8
}
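The configuration pairs `"loss_fn": "BCEWithLogitsLoss"` with `"apply_sigmoid": false`: the loss consumes raw logits and applies the sigmoid internally in a numerically stable form. As a minimal sketch (not the detector's actual training code), the per-example loss that `BCEWithLogitsLoss` computes can be written in pure Python as:

```python
import math

def bce_with_logits(logit, target):
    # Numerically stable binary cross-entropy on a raw logit x and target t:
    #   max(x, 0) - x * t + log(1 + exp(-|x|))
    # This avoids computing sigmoid(x) explicitly, which overflows for large |x|.
    return max(logit, 0.0) - logit * target + math.log1p(math.exp(-abs(logit)))
```

For example, `bce_with_logits(0.0, 1.0)` is `log 2 ≈ 0.6931`, the loss of a maximally uncertain prediction on a positive example.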

Results

| Model | AP Score | AUC Score | OPAUC Score | TPAUC Score |
|---|---|---|---|---|
| DeepSeek-V3 | 0.996018 | 0.999827 | 0.998268 | 0.98686 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-RedditWritingPrompts-testing | 0.70532 | 0.959644 | 0.910133 | 0.160828 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-abstracts-testing | 0.724459 | 0.980279 | 0.927757 | 0.411551 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-news-testing | 0.42023 | 0.934212 | 0.840951 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-reviews-testing | 0.320366 | 0.953743 | 0.834918 | 7.85083e-05 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-student_essays-testing | 0.0152029 | 0.79227 | 0.510376 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-tweets-testing | 0.468615 | 0.975752 | 0.856127 | 0.0610184 |
| claude-3-5-haiku-20241022 | 0.972541 | 0.996902 | 0.982338 | 0.834824 |
| claude-3-5-sonnet-20241022 | 0.992147 | 0.999509 | 0.995912 | 0.964305 |
| gemini-1.5-flash | 0.962676 | 0.995395 | 0.977143 | 0.784304 |
| gemini-1.5-pro | 0.919276 | 0.984604 | 0.949869 | 0.528626 |
| gpt-4o-2024-11-20 | 0.973855 | 0.996913 | 0.983887 | 0.84817 |
| gpt-4o-mini | 0.994224 | 0.999781 | 0.998007 | 0.987244 |
| llama-3.2-90b | 0.938608 | 0.98916 | 0.960872 | 0.62814 |
| llama-3.3-70b | 0.977713 | 0.997508 | 0.986066 | 0.869837 |
| mistral-large-latest | 0.992386 | 0.999551 | 0.99628 | 0.968953 |
| mistral-small-latest | 0.993747 | 0.999556 | 0.997151 | 0.977661 |
| overall | 0.989294 | 0.991859 | 0.971624 | 0.730174 |
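The first two columns are standard ranking metrics: average precision (AP) and area under the ROC curve (AUC); OPAUC and TPAUC appear to be partial-AUC variants that restrict the curve to a high-performance operating region. As a minimal pure-Python sketch (the actual results were presumably computed with a library such as scikit-learn), the two standard metrics can be computed as:

```python
def roc_auc(labels, scores):
    # AUC as the probability that a random positive outscores a random negative
    # (Mann-Whitney U formulation); ties count as half a win.
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(labels, scores):
    # AP: mean of precision evaluated at the rank of each positive example,
    # scanning predictions in descending score order.
    ranked = sorted(zip(scores, labels), reverse=True)
    hits, total = 0, 0.0
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            total += hits / rank
    return total / hits
```

For instance, with labels `[0, 0, 1, 1]` and scores `[0.1, 0.4, 0.35, 0.8]`, `roc_auc` returns 0.75 (three of the four positive/negative pairs are ordered correctly) and `average_precision` returns 5/6.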
Model size: 0.4B parameters (F32 tensors, Safetensors format)
