# Model Card for FinoAI — Financial Intelligence LLM
FinoAI is a privacy-first, explainable financial reasoning model fine-tuned from the Meta-Llama-3-8B base model using parameter-efficient fine-tuning (PEFT) with LoRA adapters.
It acts as a secure, autonomous AI financial advisor capable of forecasting, anomaly detection, and policy-grounded recommendations across personal and enterprise finance contexts.
## Model Details
### Model Description
FinoAI is a hybrid AI model that integrates Graph Neural Ordinary Differential Equations (GNN-ODEs) with a multi-stage Large Language Model reasoning pipeline (Planner → Executor → Fact-Guard).
The model performs continuous-time financial forecasting, investment planning, and anomaly detection while maintaining user privacy through federated learning and differential privacy mechanisms.
It is designed for both consumer (B2C) and enterprise (B2B) deployment scenarios, supporting API, web, and voice-based interfaces.
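The card names the pipeline stages but not their interfaces; the sketch below is one illustrative way to wire Planner → Executor → Fact-Guard together. The `generate` and `is_grounded` callables are placeholder assumptions, not part of FinoAI's released API.

```python
# Illustrative Planner -> Executor -> Fact-Guard wiring; the stage names come
# from this card, everything else (prompts, callables) is an assumption.

def planner(question: str, generate) -> list[str]:
    """Decompose a financial question into an ordered list of reasoning steps."""
    plan = generate(f"Break this financial question into numbered steps:\n{question}")
    return [line.strip() for line in plan.splitlines() if line.strip()]

def executor(steps: list[str], generate) -> str:
    """Carry out each planned step, threading intermediate results forward."""
    context = ""
    for step in steps:
        context += generate(f"Context so far:\n{context}\nCarry out this step:\n{step}") + "\n"
    return context

def fact_guard(draft: str, is_grounded) -> str:
    """Keep claims the retrieval layer can ground; flag everything else."""
    checked = []
    for claim in draft.splitlines():
        checked.append(claim if is_grounded(claim) else f"[UNVERIFIED] {claim}")
    return "\n".join(checked)

def answer(question: str, generate, is_grounded) -> str:
    """Run a question through all three stages."""
    return fact_guard(executor(planner(question, generate), generate), is_grounded)
```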
- Developed by: S Kunal Achintya Reddy
- Model Type: Financial reasoning and forecasting LLM
- Languages: English
- License: MIT
- Fine-tuned from model: meta-llama/Meta-Llama-3-8B
- Frameworks: PyTorch, PEFT, Hugging Face Transformers, LangChain
- Version: 1.0 (October 2025)
## Uses
### Direct Use
FinoAI can be used directly for the following (a minimal loading sketch follows this list):
- Personalized financial advisory and planning
- Debt optimization and anomaly detection
- Investment forecasting and policy compliance queries
- Conversational financial assistants or embedded fintech copilots
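A minimal loading sketch for the use cases above, assuming the LoRA adapter is published under the `globalnebula/FinoLLama` repository id shown on this page; adjust the dtype, device, and generation settings to your environment.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model_id = "globalnebula/FinoLLama"  # assumed adapter repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoPeftModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bf16 training precision listed below
    device_map="auto",
)

prompt = "I earn 80,000 INR a month. How should I split savings, investments, and debt repayment?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```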
### Downstream Use
- Fine-tuning or domain adaptation for specific markets (e.g., insurance, SME credit scoring)
- Embedding as a reasoning layer in enterprise fintech dashboards
- Integration with federated finance apps requiring privacy guarantees
### Out-of-Scope Use
- Licensed financial advice without human review
- Predictive trading or speculative financial activities
- Processing personally identifiable financial data without consent
## Bias, Risks, and Limitations
- FinoAI’s outputs depend on data quality and may reflect inaccuracies in the financial documents used for training.
- The model is not a certified financial advisor and should be used only as a decision-support tool.
- While differential privacy mitigates leakage risk, outputs should not be used for regulated decision-making without compliance oversight.
- Model performance may degrade in underrepresented financial systems or local languages.
### Recommendations
Users and developers should:
- Use the model for advisory and educational purposes, not regulatory or transactional decision-making.
- Ensure interpretability modules (Fact-Guard and RAG explainability) remain active during deployment.
- Periodically retrain with updated financial datasets to avoid model drift.
## Training Details
### Training Data
The model was trained on a proprietary curated dataset drawn from:
- Publicly available financial documents (RBI guidelines, SEBI reports, OECD datasets)
- Educational finance materials (tax codes, investment fundamentals, risk management data)
- Synthetic dialogues and case studies generated using reinforcement-based reasoning for advisor simulation
Data was curated using a custom financial web crawler built for regulatory document scraping and normalization.
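The crawler itself is not released; as a rough illustration of the fetch-and-normalize step it performs, assuming plain-HTML regulatory pages:

```python
import re
import requests

def fetch_and_normalize(url: str) -> str:
    """Download a regulatory page and reduce it to clean, whitespace-normalized text."""
    html = requests.get(url, timeout=30).text
    text = re.sub(r"<script.*?</script>|<style.*?</style>", " ", html, flags=re.S)  # drop scripts/styles
    text = re.sub(r"<[^>]+>", " ", text)       # strip remaining tags
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace
```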
### Training Procedure
#### Preprocessing
- Data cleaned, tokenized, and formatted into structured “context → reasoning → insight” triplets (sketched after this list).
- Outliers filtered using statistical anomaly detection.
- Financial equations standardized using symbolic formatting.
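A sketch of the triplet layout mentioned above; the field names and prompt template are illustrative assumptions, not the exact preprocessing code used for FinoAI.

```python
def format_triplet(example: dict) -> str:
    """Render one training example in the context -> reasoning -> insight layout."""
    return (
        f"### Context:\n{example['context']}\n\n"
        f"### Reasoning:\n{example['reasoning']}\n\n"
        f"### Insight:\n{example['insight']}"
    )

sample = {
    "context": "Monthly income 1.2L INR; credit-card debt of 3L at 36% APR.",
    "reasoning": "Debt interest far exceeds expected market returns, so repayment dominates.",
    "insight": "Clear the card balance before increasing equity investments.",
}
print(format_triplet(sample))
```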
#### Training Hyperparameters
- Base model: Meta-Llama-3-8B
- Fine-tuning: LoRA (r=32, alpha=16)
- Batch size: 64
- Learning rate: 2e-4
- Optimizer: AdamW
- Precision: bf16 mixed precision
- Epochs: 5
- Training cost: 9.28 USD (RunPod A100, 6.2 GPU-hours)
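These hyperparameters map onto a PEFT/Transformers setup roughly as follows; `target_modules` and the other unlisted settings are assumptions.

```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed; not stated in this card
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="finoai-lora",
    per_device_train_batch_size=64,  # card lists batch size 64; per-device split assumed
    learning_rate=2e-4,
    num_train_epochs=5,
    bf16=True,
    optim="adamw_torch",
)
```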
#### Speeds, Sizes, Times
- Trainable parameters: ~120M
- Checkpoint size: ~2.5 GB
- Average training speed: 420 tokens/sec
- Total training time: ~6 hours