# Financial Transaction Categorization Dataset
A comprehensive worldwide dataset for financial transaction categorization with 4.5+ million records across 10 categories, 5 countries, and 5 currencies.
## Dataset Overview
- Total Records: 4,501,043 transactions
- Categories: 10 financial categories
- Countries: 5 countries (USA, UK, Canada, Australia, India)
- Currencies: 5 currencies (USD, GBP, CAD, AUD, INR)
- File Format: Parquet (optimized for fast loading and storage)
- Total Size: ~71 MB (compressed)
## Dataset Structure
The dataset is now consolidated into a single optimized parquet file:
| File | Records | Size | Description |
|---|---|---|---|
| `default/train/0000.parquet` | 4,501,043 | 71 MB | Complete dataset in Parquet format |
| `categories.json` | - | 5.3 KB | Category definitions and keywords |
| `dataset_info.json` | - | 1.0 KB | Dataset metadata and statistics |
## Categories
The dataset includes 10 comprehensive financial categories; a minimal keyword-matching sketch follows the list:
- Food & Dining - Restaurants, groceries, fast food, coffee shops, food delivery
- Transportation - Gas, rideshare, airlines, public transport, car rental
- Shopping & Retail - Online shopping, electronics, retail, fashion, home & garden
- Entertainment & Recreation - Streaming, gaming, movies, music, sports
- Healthcare & Medical - Medical, pharmacy, dental, vision, fitness
- Utilities & Services - Electricity, water, gas, internet & phone, cable
- Financial Services - Banking, insurance, credit cards, investments, taxes
- Income - Salary, freelance, business, investments, government benefits
- Government & Legal - Taxes, licenses, legal services, government fees
- Charity & Donations - Charitable, religious, community, political donations
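The keyword hints above lend themselves to a simple rule-based baseline. The sketch below is illustrative only; its keyword lists are abbreviated and hypothetical rather than the full definitions shipped in `categories.json`:

```python
# Minimal keyword-matching baseline (keyword lists here are abbreviated and hypothetical)
KEYWORDS = {
    "Food & Dining": ["restaurant", "mcdonald", "coffee", "grocery", "pizza"],
    "Transportation": ["uber", "gas", "airline", "transit", "car rental"],
    "Entertainment & Recreation": ["netflix", "spotify", "cinema", "gaming"],
}

def categorize(description: str) -> str:
    """Return the first category whose keywords appear in the description."""
    text = description.lower()
    for category, keywords in KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "Unknown"

print(categorize("McDonald's #1234"))      # Food & Dining
print(categorize("Netflix Subscription"))  # Entertainment & Recreation
```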
## Geographic Coverage
| Country | Currency | Sample Transactions |
|---|---|---|
| USA | USD | McDonald's, Uber, Amazon, Netflix |
| UK | GBP | Tesco, Shell, ASDA, BBC iPlayer |
| Canada | CAD | Tim Hortons, Petro-Canada, Loblaws |
| Australia | AUD | Coles, Woolworths, Bunnings, Telstra |
| India | INR | Big Bazaar, Ola, Flipkart, Zomato |
## Dataset Schema
Each record contains the following fields:
```json
{
  "transaction_description": "string",
  "category": "string",
  "country": "string",
  "currency": "string"
}
```
### Example Records
```csv
transaction_description,category,country,currency
McDonald's #1234,Food & Dining,USA,USD
Uber Ride,Transportation,UK,GBP
Amazon Purchase,Shopping & Retail,CANADA,CAD
Netflix Subscription,Entertainment & Recreation,AUSTRALIA,AUD
Pharmacy Purchase,Healthcare & Medical,INDIA,INR
```
## Usage
### Loading the Dataset
#### Python (Pandas)
```python
import pandas as pd

# Load the complete dataset
df = pd.read_parquet('default/train/0000.parquet')
print(f"Total records: {len(df):,}")
print(f"Columns: {list(df.columns)}")
```
#### Python (Hugging Face Datasets)
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("mitulshah/transaction-categorization")

# Access the data
train_data = dataset['train']
print(f"Dataset size: {len(train_data):,}")
```
#### Chunked Processing (Memory Efficient)
`pandas.read_parquet` does not support chunked reads, so iterate over record batches with pyarrow instead:
```python
import pyarrow.parquet as pq

# Stream the Parquet file in record batches instead of loading it all into memory
parquet_file = pq.ParquetFile('default/train/0000.parquet')
for batch in parquet_file.iter_batches(batch_size=10_000):
    chunk = batch.to_pandas()
    print(f"Processing {len(chunk)} records...")
    # Your analysis code here
```
### Data Analysis Examples
#### Category Distribution
```python
import pandas as pd

# Load and analyze category distribution
df = pd.read_parquet('default/train/0000.parquet')
category_counts = df['category'].value_counts()
print(category_counts)
```
#### Country Analysis
```python
# Analyze transactions by country (uses the df loaded in the previous example)
country_analysis = df.groupby(['country', 'category']).size().unstack(fill_value=0)
print(country_analysis)
```
## Use Cases
This dataset is perfect for:
- Machine Learning: Train classification models for transaction categorization
- Financial Analysis: Study spending patterns across different regions
- NLP Research: Text classification and merchant name analysis
- Data Science: Exploratory data analysis and visualization
- Business Intelligence: Market research and consumer behavior analysis
- Academic Research: Financial behavior studies and economic research
## Dataset Statistics
### Record Distribution
- Total Records: 4,501,043
- Unique Descriptions: ~1.4M unique transaction descriptions
- Category Balance: Well-distributed across all 10 categories
- Geographic Distribution: Balanced representation across 5 countries
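These record-level figures can be recomputed directly from the Parquet file. A minimal check, using the same file path as the loading examples above:

```python
import pandas as pd

# Recompute the headline record statistics
df = pd.read_parquet('default/train/0000.parquet')
print(f"Total records: {len(df):,}")
print(f"Unique descriptions: {df['transaction_description'].nunique():,}")
print(df['category'].value_counts(normalize=True).round(3))  # category balance
print(df['country'].value_counts(normalize=True).round(3))   # geographic distribution
```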
### File Sizes
- `default/train/0000.parquet`: 71 MB (4,501,043 records)
- Total: ~71 MB (compressed)
## Technical Details
### Data Quality
- No Duplicates: All records are unique
- Consistent Schema: All files follow the same structure
- Valid Categories: All categories match the defined taxonomy
- Country-Currency Pairs: Validated country-currency combinations (a re-check sketch follows this list)
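The country-currency check can be reproduced by users. A minimal sketch, assuming the five pairs listed under Geographic Coverage (country values are upper-cased before lookup in case the stored casing differs):

```python
import pandas as pd

# Expected country -> currency pairs from the Geographic Coverage table
EXPECTED = {"USA": "USD", "UK": "GBP", "CANADA": "CAD", "AUSTRALIA": "AUD", "INDIA": "INR"}

df = pd.read_parquet('default/train/0000.parquet')
# Upper-case country names before lookup, then flag rows whose currency does not match
mismatches = df[df['currency'] != df['country'].str.upper().map(EXPECTED)]
print(f"Mismatched country-currency rows: {len(mismatches)}")
```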
### Performance Optimizations
- Parquet Format: Optimized columnar storage for fast loading and analysis
- Compression: Built-in compression reduces file size by ~66%
- Chunked Processing: Support for memory-efficient processing
- Fast Queries: Columnar format enables efficient filtering and aggregation (see the sketch below)
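As an illustration of the filtering benefit noted above, a recent pandas with the pyarrow engine can read only selected columns and push simple row filters down to the Parquet file. A minimal sketch (the country spelling is taken from the Geographic Coverage table; adjust it if the stored values differ):

```python
import pandas as pd

# Read only two columns and only one country's rows; pyarrow applies the
# column selection and row filter while reading the Parquet file
df_india = pd.read_parquet(
    'default/train/0000.parquet',
    engine='pyarrow',
    columns=['transaction_description', 'category'],
    filters=[('country', '==', 'India')],
)
print(f"Indian transactions loaded: {len(df_india):,}")
```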
## Dataset Creation & Methodology
### Curation Rationale
This dataset was created to address the need for a comprehensive, standardized dataset for financial transaction categorization that:
- Covers multiple countries and currencies
- Uses consistent categorization schema
- Includes high-quality, manually curated data
- Is suitable for both research and production use
### Source Data
The dataset combines data from multiple sources:
- Synthetic Generation: 4.5M+ records generated using comprehensive merchant templates (a simplified sketch follows this list)
- External Integration: Real transaction data from external Hugging Face datasets
- Country-specific Data: Curated data for USA, UK, Canada, Australia, and India
- Quality Validation: Duplicate prevention and data integrity checks
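The template-based generator itself is not published in this card; the sketch below is only a simplified illustration of the idea, using a tiny, hypothetical template set rather than the actual pipeline:

```python
import random

# Hypothetical, tiny merchant-template set; the real pipeline uses far more comprehensive templates
TEMPLATES = {
    ("USA", "USD"): [("McDonald's #{n}", "Food & Dining"), ("Uber Ride", "Transportation")],
    ("India", "INR"): [("Zomato Order #{n}", "Food & Dining"), ("Ola Ride", "Transportation")],
}

def synth_record(rng: random.Random) -> dict:
    """Pick a country, pick a merchant template, and fill in a store/order number."""
    (country, currency), merchants = rng.choice(list(TEMPLATES.items()))
    template, category = rng.choice(merchants)
    return {
        "transaction_description": template.replace("{n}", str(rng.randint(1000, 9999))),
        "category": category,
        "country": country,
        "currency": currency,
    }

rng = random.Random(42)
print([synth_record(rng) for _ in range(2)])
```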
### Data Quality Assurance
- No Duplicates: Hash-based duplicate detection implemented (a sketch follows this list)
- Schema Consistency: All files follow the same structure
- Category Validation: All categories match the defined taxonomy
- Country-Currency Pairs: Validated country-currency combinations
- Anonymized Data: No personally identifiable information
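The exact deduplication code is not included here; the following is a minimal sketch of hash-based duplicate detection over the four schema fields, shown as one plausible implementation:

```python
import hashlib
import pandas as pd

df = pd.read_parquet('default/train/0000.parquet')

def row_hash(row: pd.Series) -> str:
    """Hash the concatenated schema fields into a stable per-record fingerprint."""
    key = "|".join(row[['transaction_description', 'category', 'country', 'currency']])
    return hashlib.sha256(key.encode('utf-8')).hexdigest()

# Row-wise hashing is slow on 4.5M rows, so fingerprint a sample here;
# for a full pass, df.duplicated() gives the same answer without hashing
sample = df.sample(n=100_000, random_state=0)
duplicates = sample.apply(row_hash, axis=1).duplicated().sum()
print(f"Duplicate fingerprints in sample: {duplicates}")
```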
## Use Cases & Applications
### Direct Use
This dataset can be used directly for:
- Training transaction classification models
- Building personal finance applications
- Developing banking transaction categorization systems
- Research in financial NLP and text classification
### Downstream Applications
Potential downstream applications include:
- Fraud detection systems
- Expense tracking applications
- Budgeting and financial planning tools
- Business intelligence and analytics
- Academic research in fintech and financial behavior
### Out-of-Scope Use
This dataset should not be used for:
- Identifying specific individuals or accounts
- Training models that could compromise financial privacy
- Any application that requires access to actual financial data
## Bias, Risks, and Limitations
### Known Limitations
- Geographic Bias: The dataset focuses on 5 major countries and may not represent all global financial patterns
- Currency Bias: Only 5 currencies are represented
- Category Granularity: The 10-category schema may be too broad for some specialized applications
- Temporal Bias: Data represents a specific time period and may not reflect current trends
### Recommendations
Users should:
- Validate model performance on their specific data
- Consider fine-tuning for domain-specific applications
- Be aware of potential geographic and cultural biases
- Regularly update models with new data
## Training & Evaluation
### Recommended Metrics
For model evaluation, consider these metrics (a snippet computing them follows the training example below):
- Accuracy: Overall classification accuracy
- F1-score: Macro and weighted F1-scores
- Precision and Recall: Per-category performance
- Confusion Matrix: Detailed error analysis
### Model Training Tips
```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier

# Load and prepare data
df = pd.read_parquet('default/train/0000.parquet')
X = df['transaction_description']
y = df['category']

# Split data (stratified so every category appears in both splits)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Vectorize text
vectorizer = TfidfVectorizer(max_features=10000)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# Train model
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train_vec, y_train)
```
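Continuing the sketch above, the metrics listed under Recommended Metrics can be computed with scikit-learn on the held-out split:

```python
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix, f1_score

# Evaluate the model trained above
y_pred = model.predict(X_test_vec)
print(f"Accuracy:    {accuracy_score(y_test, y_pred):.3f}")
print(f"Macro F1:    {f1_score(y_test, y_pred, average='macro'):.3f}")
print(f"Weighted F1: {f1_score(y_test, y_pred, average='weighted'):.3f}")
print(classification_report(y_test, y_pred))  # per-category precision and recall
print(confusion_matrix(y_test, y_pred))       # detailed error analysis
```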
## Additional Resources
- Categories: See `categories.json` for detailed category definitions and keywords (a loading sketch follows this list)
- Metadata: See `dataset_info.json` for complete dataset statistics
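The exact structure of `categories.json` is not reproduced in this card; a minimal loading sketch, assuming it maps category names to their definitions and keyword lists:

```python
import json

# Inspect the category taxonomy (assumed shape: category name -> definition/keywords)
with open('categories.json', encoding='utf-8') as f:
    categories = json.load(f)

for name, spec in categories.items():
    print(name, spec)
```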
## Contributing
This dataset is actively maintained. If you find issues or have suggestions:
- Check the existing issues
- Create a new issue with a detailed description
- Follow the contribution guidelines
## Citation
If you use this dataset in your research, please cite it as:
```bibtex
@dataset{financial_transaction_categorization_2025,
  title={Financial Transaction Categorization Dataset},
  author={Mitul Shah},
  year={2025},
  url={https://huggingface.co/datasets/mitulshah/transaction-categorization},
  note={A comprehensive worldwide dataset for financial transaction categorization with 4.5M+ records}
}
```
## License
This dataset is released under the MIT License. See the license file for details.
## Acknowledgments
- Data Sources: Synthetic generation + real transaction data from external sources
- Categories: Based on comprehensive financial transaction taxonomy
- Validation: Duplicate prevention and data quality checks implemented
## Contact
- Dataset Maintainer: Mitul Shah
- Repository: mitulshah/transaction-categorization
- Last Updated: October 14, 2025
If you find this dataset useful, please consider giving it a star!