Phi-2 Financial Sentiment Analyzer with Reasoning

⚠️ Disclaimer (Experimental Model)

This model was developed as an experimental learning project to explore parameter-efficient fine-tuning (LoRA/QLoRA), instruction-tuning, and the end-to-end Hugging Face workflow (training, saving adapters, and model publishing).

The primary goal of this project was hands-on learning, not production deployment. While the model demonstrates the fine-tuning pipeline and reasoning-style outputs, its performance is limited by a small dataset and experimental setup.

Fine-tuned Phi-2 (2.7B) for financial sentiment analysis with natural language explanations.

Model Description

This model analyzes financial news and provides:

  • Sentiment classification (Positive/Negative/Neutral)
  • Natural language reasoning explaining the sentiment
  • Domain-specific understanding of financial terminology

Training Details

  • Base Model: microsoft/phi-2 (2.7B parameters)
  • Fine-tuning Method: QLoRA (4-bit quantization + LoRA adapters); a configuration sketch follows this list
  • Dataset: 4,000 financial news samples from Twitter Financial News Sentiment
  • Training Time: ~50-60 minutes on a T4 GPU (2 epochs)
  • Trainable Parameters: 0.4% of total parameters (10M out of 2.7B)
  • Hardware: Google Colab T4 GPU (16GB VRAM)
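
As referenced above, here is a minimal sketch of how a QLoRA setup like this is typically configured with the bitsandbytes + peft stack. It is illustrative, not the original training script; in particular, the target module names are an assumption that depends on the Phi implementation in use.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Quantize the frozen base model to 4-bit (NF4), QLoRA-style
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
base = prepare_model_for_kbit_training(base)

# Attach LoRA adapters (r/alpha/dropout match the hyperparameters listed below)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # assumed names for the HF Phi architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # ~10M trainable, ~0.4% of 2.7B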

Usage

Option 1: Using PEFT (Recommended)

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

# Load LoRA adapters
model = PeftModel.from_pretrained(base_model, "prasanna030/phi2-financial-sentiment-lora")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

# Analyze sentiment
def analyze_sentiment(news_text):
    prompt = f'''Instruct: Analyze the sentiment of this financial news and explain your reasoning:
"{news_text}"

Output:'''
    
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=80,
        do_sample=True,          # temperature only takes effect when sampling
        temperature=0.7,
        repetition_penalty=1.2,
        pad_token_id=tokenizer.eos_token_id  # phi-2 defines no pad token
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example
result = analyze_sentiment("Apple revenue exceeded expectations by 15 percent")
print(result)
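
If the adapters will be used for serving, they can optionally be folded into the base weights with peft's merge_and_unload, which removes the adapter indirection at inference time:

# Optional: merge the LoRA weights into the base model for faster inference
merged = model.merge_and_unload()
merged.save_pretrained("phi2-financial-sentiment-merged")  # hypothetical output path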

Option 2: Direct Loading

from transformers import AutoModelForCausalLM, AutoTokenizer

# Requires peft to be installed: transformers resolves the adapter repo and
# loads the microsoft/phi-2 base model plus the LoRA weights automatically.
model = AutoModelForCausalLM.from_pretrained(
    "prasanna030/phi2-financial-sentiment-lora",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("prasanna030/phi2-financial-sentiment-lora")

Example Outputs

Input: "Tesla stock surged 20 percent after record quarterly deliveries"

Output:

Sentiment: Positive
Reasoning: This statement suggests positive investor sentiment with language pointing to growth or success. The significant stock surge and record deliveries indicate strong financial performance.

Input: "Company reports declining revenue and upcoming layoffs"

Output:

Sentiment: Negative
Reasoning: The text reflects bearish sentiment with concerning indicators about market or company performance. Declining revenue and layoffs suggest potential challenges ahead.
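
Because the model emits the "Sentiment: ... / Reasoning: ..." format shown above, the label can be pulled out for downstream use with a small parser; this regex-based sketch assumes the output matches the examples:

import re

def parse_sentiment(generated_text):
    """Extract the sentiment label from the model's formatted output."""
    match = re.search(r"Sentiment:\s*(Positive|Negative|Neutral)", generated_text, re.IGNORECASE)
    return match.group(1).capitalize() if match else None

# e.g. parse_sentiment(result) -> "Positive"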

Limitations

  • Trained on only 4,000 samples (a relatively small dataset)
  • May not handle highly nuanced or sarcastic financial commentary
  • Best suited for straightforward financial news analysis
  • English language only

Training Hyperparameters

  • Learning rate: 2e-4
  • Batch size: 4 (per device)
  • Gradient accumulation steps: 2 (effective batch size 8; see the Trainer sketch after this list)
  • Epochs: 2
  • LoRA rank (r): 16
  • LoRA alpha: 32
  • LoRA dropout: 0.05
  • Max sequence length: 256 tokens
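
For context, these values map onto a Trainer configuration roughly as follows; this is a sketch, and any argument not listed above (output directory, fp16, logging cadence) is an assumption:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi2-financial-sentiment-lora",  # assumed path
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,   # effective batch size 8
    num_train_epochs=2,
    fp16=True,                       # T4 GPUs do not support bf16
    logging_steps=25,                # assumed
)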

Citation

@misc{phi2-financial-sentiment,
  author = {Prasanna},
  title = {Phi-2 Financial Sentiment Analyzer with Reasoning},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/prasanna030/phi2-financial-sentiment-lora}
}

License

This model inherits the MIT license from the Phi-2 base model.
