Powell-Phi3-Mini: Jerome Powell Style Language Model

Hugging Face · License: MIT · GPU Training · Fine-tuning

🎯 Summary

Powell-Phi3-Mini is a fine-tuned language model that replicates Federal Reserve Chair Jerome Powell's distinctive communication style, tone, and strategic hedging patterns. The project demonstrates modern LLM fine-tuning techniques, parameter-efficient training methods, and responsible AI development, showcasing industry-ready machine learning engineering skills.


πŸš€ Key Features & Capabilities

Style Mimicry & Linguistic Analysis

  • βœ… Authentic Communication Style: Replicates Powell's cautious, data-dependent rhetoric
  • βœ… Strategic Hedging Patterns: Maintains appropriate uncertainty in speculative scenarios
  • βœ… Domain-Specific Responses: Handles economic and monetary policy discussions contextually
  • βœ… Refusal Training: Appropriately declines to provide financial advice or policy predictions (to an extent)

Technical Implementation

  • βœ… Efficient Architecture: Built on Microsoft Phi-3-mini-4k-instruct (3.8B parameters)
  • βœ… Scalable Training: LoRA r=16, alpha=32 configuration optimized for consumer GPUs
  • βœ… Deployment Flexibility: Available as lightweight adapter or full merged model
  • βœ… Integration Ready: One-line inference with Hugging Face Transformers

πŸ’» Implementation Examples

Production-Ready Merged Model

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the merged model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("BoostedJonP/powell-phi3-mini")
model = AutoModelForCausalLM.from_pretrained("BoostedJonP/powell-phi3-mini", device_map="auto")

# Economic analysis prompt
prompt = "How is the current labor market affecting your inflation outlook?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # follow device_map placement
response = model.generate(**inputs, max_new_tokens=200, do_sample=True)
print(tokenizer.decode(response[0], skip_special_tokens=True))
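
The "one-line inference" mentioned above refers to the Transformers pipeline API. A minimal sketch is shown here; the generation parameters are illustrative assumptions, not the settings used for the hosted demo.

from transformers import pipeline

# Wrap the merged checkpoint in a text-generation pipeline
generator = pipeline(
    "text-generation",
    model="BoostedJonP/powell-phi3-mini",
    device_map="auto",
)

# Ask a monetary-policy question and print the sampled continuation
output = generator(
    "What is your current assessment of inflation risks?",
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
)
print(output[0]["generated_text"])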

πŸ“Š Technical Specifications & Training Pipeline

Model Architecture

| Component | Specification |
|---|---|
| Base Model | microsoft/Phi-3-mini-4k-instruct (3.8B parameters) |
| License | MIT License (commercial use approved) |
| Fine-tuning Method | QLoRA with PEFT integration |
| Context Length | 4,096 tokens |
| Training Hardware | NVIDIA Tesla P100 (16GB VRAM) |

Training Configuration

| Hyperparameter | Value | Rationale |
|---|---|---|
| LoRA Rank (r) | 16 | Balances adapter capacity against parameter count |
| LoRA Alpha | 32 | 2x rank for stable training |
| Dropout Rate | 0.05 | Light regularization to curb overfitting |
| Learning Rate | 1.5e-4 | Conservative rate for stable convergence |
| Scheduler | Cosine decay | Smooth learning-rate reduction |
| Training Epochs | 3 | Limits overfitting on the specialized domain |
| Sequence Length | 1,536 tokens | Sized to fit the dataset |
| Precision | Mixed fp16 | ~2x memory efficiency with maintained accuracy |
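
The configuration above maps onto a PEFT-based QLoRA setup roughly as follows. This is a reconstruction for illustration, not the exact training script; the 4-bit quantization settings and the target modules are assumptions typical of QLoRA on Phi-3.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit, as in QLoRA (quantization settings are assumed)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA settings from the table above; target modules are an assumed choice for Phi-3
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# Trainer hyperparameters mirroring the table (fp16, cosine schedule, 3 epochs)
training_args = TrainingArguments(
    output_dir="powell-phi3-mini-adapter",
    learning_rate=1.5e-4,
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    fp16=True,
)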

Dataset & Methodology


πŸ“ˆ Performance Metrics & Evaluation

Quantitative Results

| Metric | Baseline (Phi-3) | Powell-Phi3-Mini | Improvement |
|---|---|---|---|
| Powell-style Classification | NA | NA | NA |
| Economic Domain Accuracy | NA | NA | NA |
| Response Coherence (BLEU) | NA | NA | NA |

Qualitative Assessment

  • NA

🌐 Deployment & Access

πŸš€ Live Demo

Try the Powell-Phi3-Mini Interactive Demo →

πŸ“¦ Model Downloads

  • Adapter Version: BoostedJonP/powell-phi3-mini-adapter
  • Merged Model: BoostedJonP/powell-phi3-mini (Full Model - 7.4GB)
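
To use the adapter version listed above, the LoRA weights can be attached to the base model with PEFT. This is a minimal sketch assuming the adapter repository name shown in the list; the merge step is optional and only needed for standalone deployment.

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct", device_map="auto"
)
model = PeftModel.from_pretrained(base, "BoostedJonP/powell-phi3-mini-adapter")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# Optionally fold the adapter into the base weights for a single merged checkpoint
merged = model.merge_and_unload()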

πŸ”— Resources


βš–οΈ Responsible AI & Legal Compliance

Ethical Considerations

  • ⚠️ No Official Affiliation: Not endorsed by or affiliated with the Federal Reserve System
  • ⚠️ Educational Purpose Only: Designed for research, education, and demonstration purposes
  • ⚠️ No Financial Advice: Model responses should not be interpreted as investment guidance
  • ⚠️ Transparency: All training data sourced from public domain government transcripts

Licensing & Usage Rights

  • Base Model License: MIT License (Microsoft Phi-3)
  • Fine-tuned Weights: MIT License (Commercial use permitted)
  • Training Data: Public domain (U.S. government works)
  • Usage: Unrestricted for research, education, and commercial applications

πŸ‘¨β€πŸ’» Connect & Collaborate
