LFM2-8B-A1B — MLX 3-bit (Apple Silicon)

Maintainer / Publisher: Susant Achary
Upstream model: LiquidAI/LFM2-8B-A1B
This repo (MLX 3-bit): mlx-community/LFM2-8B-A1B-3bit-MLX

This repository provides an Apple-Silicon-optimized MLX build of LFM2-8B-A1B at 3-bit quantization.
3-bit is an excellent size↔quality sweet spot on many Macs—very small memory footprint with surprisingly solid answer quality and snappy decoding.


🔎 What is LFM2-8B-A1B?

  • Architecture: Mixture-of-Experts (MoE) Transformer.
  • Size: ~8B total parameters with ~1B active per token (the “A1B” naming commonly indicates ~1B active params).
  • Why MoE? Per token, only a subset of experts is activated → lower compute per token while retaining a larger parameter pool for expressivity.

Memory reality on a single device: Even though ~1B parameters are active at a time, all experts typically reside in memory in single-device runs. Plan RAM based on total parameters, not just the active slice.
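A quick back-of-envelope comparison (illustrative only: a flat 0.375 bytes/param for 3-bit, quantization scales ignored) makes the distinction concrete:

# Illustrative arithmetic: resident weight memory vs. the "active" slice at 3-bit.
total_params, active_params = 8e9, 1e9        # ~8B total, ~1B active per token
bytes_per_param = 3 / 8                       # 3 bits ≈ 0.375 bytes
print(f"all experts resident: ~{total_params * bytes_per_param / 1e9:.1f} GB")   # ~3.0 GB of weights in RAM
print(f"active slice per tok: ~{active_params * bytes_per_param / 1e9:.1f} GB")  # ~0.4 GB does the compute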


📦 What’s in this MLX build

  • config.json (MLX), mlx_model*.safetensors (3-bit shards)
  • Tokenizer: tokenizer.json, tokenizer_config.json
  • Metadata: model_index.json (and/or processor metadata as applicable)

Target: macOS on Apple Silicon (M-series) using Metal/MPS.
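To confirm the Metal backend is active before downloading weights, a minimal check with MLX (assumes mlx and mlx-lm are installed, e.g. via pip install mlx-lm):

# Sanity-check that MLX sees the Apple-Silicon GPU.
import mlx.core as mx

print(mx.default_device())        # expect a GPU device on M-series Macs
print(mx.metal.is_available())    # True when the Metal backend is usable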


✅ Intended use

  • General instruction following, chat, and summarization
  • RAG back-ends and long-context assistants on device
  • Schema-guided structured outputs (JSON) where low RAM is a priority
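For the structured-output case, a minimal prompt-level sketch with the mlx-lm Python API (the schema and field names here are hypothetical, and nothing constrains decoding, so validate the result):

import json
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/LFM2-8B-A1B-3bit-MLX")

# Hypothetical schema: request JSON in the prompt, then validate it afterwards.
prompt = (
    "Return JSON with keys 'title' (string) and 'tags' (list of strings). "
    "Respond with JSON only.\n\nText: <your text>"
)
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}], add_generation_prompt=True
    )

raw = generate(model, tokenizer, prompt=prompt, max_tokens=256)
data = json.loads(raw)            # raises if the model strayed from JSON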

⚠️ Limitations

  • 3-bit is lossy: the RAM and latency savings come with some accuracy trade-off vs 6/8-bit.
  • For very long contexts and/or batching, KV-cache can dominate memory—tune max_tokens and batch size.
  • Add your own guardrails/safety for production deployments.

🔢 RAM planning (3-bit, MoE, MLX)

The numbers below are realistic ranges intended as practical starting points; verify on your own machine.

Rule-of-thumb components

  • Weights (3-bit): total_params × 0.375 bytes → for 8B params ≈ ~3.0 GB
  • Runtime overhead: MLX graph/tensors/metadata → ~0.6–1.0 GB
  • KV-cache: grows with context × layers × heads × dtype → ~0.8–2.5+ GB
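The sketch below combines these components into a rough estimator. The layer/head counts are placeholders rather than LFM2's real configuration (read the actual values from config.json), so treat the output as an order-of-magnitude check:

# Back-of-envelope peak-RAM estimate for a quantized model on a single device.
def estimate_peak_ram_gb(
    total_params=8e9,     # all experts count toward resident weights
    weight_bits=3.5,      # ~3 bits/param plus per-group scales/zeros
    ctx_len=8192,         # prompt + generated tokens
    n_kv_layers=32,       # placeholder -- check config.json
    n_kv_heads=8,         # placeholder -- check config.json
    head_dim=128,         # placeholder -- check config.json
    kv_bytes=2,           # fp16 K/V entries
    overhead_gb=0.8,      # MLX graph / buffers / tokenizer
):
    weights_gb = total_params * weight_bits / 8 / 1e9
    kv_gb = ctx_len * n_kv_layers * n_kv_heads * head_dim * 2 * kv_bytes / 1e9   # K and V
    return weights_gb + kv_gb + overhead_gb

print(f"~{estimate_peak_ram_gb():.1f} GB estimated peak at 8k context")   # ~5.4 GB with these placeholders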

Indicative peak RAM (batch=1)

| Context window | Estimated peak RAM |
|---|---|
| 4k tokens | ~4.4–5.5 GB |
| 8k tokens | ~5.2–6.6 GB |
| 16k tokens | ~6.5–8.8 GB |

For ≤2k windows you may see ~4.0–4.8 GB. Larger windows/batches increase KV-cache and peak RAM.


🧭 Precision choices for LFM2-8B-A1B (lineup planning)

While this card is 3-bit, teams often publish multiple precisions. Use this table as a planning guide (8B MoE LM; actuals depend on context/batch/prompts):

| Variant | Typical peak RAM | Relative speed | Typical behavior | When to choose |
|---|---|---|---|---|
| 3-bit (this repo) | ~4.4–8.8 GB | 🔥🔥🔥🔥 | Direct, concise, great latency | Default on 8–16 GB Macs |
| 6-bit | ~7.5–12.5 GB | 🔥🔥 | Best quality under quant | Choose if RAM allows |
| 8-bit | ~9.5–12+ GB | 🔥🔥 | Largest quantized size / highest fidelity | When you prefer simpler 8-bit workflows |

MoE caveat: MoE lowers compute per token; unless experts are paged/partitioned, memory still scales with total parameters on a single device.
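If you need one of the other precisions locally, mlx-lm can quantize the upstream checkpoint directly. A sketch (the output directory name is arbitrary; group size 64 is the mlx-lm default):

# Produce a 6-bit MLX build of the upstream model; set q_bits to 4 or 8 for other variants.
from mlx_lm import convert

convert(
    "LiquidAI/LFM2-8B-A1B",
    mlx_path="LFM2-8B-A1B-6bit-mlx",   # arbitrary local output path
    quantize=True,
    q_bits=6,
    q_group_size=64,
)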


🚀 Quickstart (CLI — MLX)

Deterministic generation

python -m mlx_lm.generate \
  --model mlx-community/LFM2-8B-A1B-3bit-MLX \
  --prompt "Summarize the following in 5 concise bullet points:\n<your text>" \
  --max-tokens 256 \
  --temp 0.0 \
  --seed 0

MLX selects the Metal GPU automatically on Apple Silicon, so no device flag is needed.
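The same generation is available from the mlx-lm Python API (a minimal sketch; the prompt is a placeholder):

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/LFM2-8B-A1B-3bit-MLX")

prompt = "Summarize the following in 5 concise bullet points:\n<your text>"
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}], add_generation_prompt=True
    )

text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)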