AI & ML interests
Frontier research on safe and aligned intelligence.
Organization Card
Lexsi Labs drives frontier research in aligned and safe AI. Our goal is to build AI systems that are transparent, reliable, and value-aligned, combining interpretability, alignment, and governance to enable trustworthy intelligence at scale.
Research Focus
- Aligned & Safe AI: Frameworks for self-monitoring, interpretable, and alignment-aware systems.
- Explainability & Alignment: Faithful, architecture-agnostic interpretability and value-aligned optimization across tabular, vision, and language models.
- Safe Behaviour Control: Techniques for fine-tuning, pruning, and behavioural steering in large models.
- Risk & Governance: Continuous monitoring, drift detection, and fairness auditing for responsible deployment.
- Tabular & LLM Research: Foundational work on tabular intelligence, in-context learning, and interpretable large language models.
Papers
- Orion-MSP: Multi-Scale Sparse Attention for Tabular In-Context Learning
  Paper • 2511.02818 • Published • 9
- TabTune: A Unified Library for Inference and Fine-Tuning Tabular Foundation Models
  Paper • 2511.02802 • Published • 9
- Interpretability as Alignment: Making Internal Understanding a Design Principle
  Paper • 2509.08592 • Published
- Interpretability-Aware Pruning for Efficient Medical Image Analysis
  Paper • 2507.08330 • Published
Datasets
None public yet