Drug Review Sentiment Analysis with BERT

A bert-base-uncased model fine-tuned to classify pharmaceutical drug reviews as positive or negative in sentiment, with strong held-out performance. Developed as part of the DRUG_FEEDBACK_NLP project.

Model Overview

  • Architecture: bert-base-uncased fine-tuned
  • Input: Drug review text
  • Output: Binary sentiment (0 = negative, 1 = positive)
  • Labeling rule: ratings > 7 are labeled positive (1); ratings ≤ 7 are labeled negative (0)
  • Max Sequence Length: 512 tokens
  • GitHub Project: DRUG_FEEDBACK_NLP
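A minimal usage sketch with the transformers pipeline API, using the model ID from this card. The label names (LABEL_0/LABEL_1) are the transformers defaults and are an assumption unless the model config overrides them; the example review text is illustrative.

```python
# Hedged sketch: run sentiment inference with the transformers pipeline.
# Model ID is taken from this card; truncation settings match the card's
# stated 512-token maximum sequence length.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="serguntsov/bert-base-uncased-drugsCom_raw-cls",
    truncation=True,   # reviews longer than 512 tokens are truncated
    max_length=512,
)

result = clf("This medication cleared my symptoms within a week, no side effects.")
print(result)  # a list with one {'label': ..., 'score': ...} dict
```

Passing `truncation=True` keeps long reviews within BERT's 512-token limit rather than raising an error at inference time.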

Performance

Metric     Score
ROC-AUC    0.967
F1-Score   0.936
Precision  0.932
Recall     0.939
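For reference, the four reported metrics can be computed with scikit-learn. The labels and scores below are illustrative toy values, not the model's real predictions; ROC-AUC is computed from probabilities, while the other three use thresholded predictions.

```python
# Sketch: computing the card's metrics with scikit-learn on toy data.
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score

y_true  = [1, 0, 1, 1, 0, 1, 0, 0]                     # gold sentiment labels
y_score = [0.9, 0.2, 0.8, 0.3, 0.4, 0.7, 0.1, 0.55]    # model P(positive)
y_pred  = [int(s > 0.5) for s in y_score]              # default 0.5 decision threshold

print("ROC-AUC  :", roc_auc_score(y_true, y_score))    # ranking quality of scores
print("F1-score :", f1_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
```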
Model Files

  • Format: Safetensors
  • Model size: 0.1B parameters
  • Tensor type: F32

Training Data

Trained on the Drugs.com drug review dataset (drugsCom_raw), as reflected in the model ID serguntsov/bert-base-uncased-drugsCom_raw-cls.
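The labeling rule from the Model Overview can be sketched as a preprocessing step. The `review` and `rating` column names follow the drugsCom_raw schema; the rows below are illustrative examples, not real dataset entries.

```python
# Sketch of the card's labeling rule: ratings > 7 -> positive (1),
# all other ratings (including 7 itself) -> negative (0).
import pandas as pd

df = pd.DataFrame({
    "review": [
        "Worked wonders for my migraines.",
        "Terrible nausea, had to stop after two days.",
        "Okay overall, but the side effects were rough.",
    ],
    "rating": [9.0, 2.0, 7.0],
})

df["label"] = (df["rating"] > 7).astype(int)   # strict threshold at 7
print(df[["rating", "label"]])
```

Note the threshold is strict: a rating of exactly 7 falls on the negative side.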