NER-Standard πŸ€–

A high-performance Named Entity Recognition model with superior accuracy and structured JSON output.

Built by Minibase - Train and deploy small AI models from your browser. Browse all models and datasets on the Minibase Marketplace.

πŸ“‹ Model Summary

Minibase-NER-Standard is a high-performance language model fine-tuned for Named Entity Recognition (NER) tasks. It automatically identifies and extracts named entities from text, outputting them in structured JSON format with proper entity type classification for persons, organizations, locations, and miscellaneous terms.

Key Features

  • 🎯 Excellent NER Performance: 95.1% F1 score on entity recognition tasks
  • πŸ“Š Entity Classification: Accurately categorizes PERSON, ORG, LOC, and MISC entities
  • πŸ“ Optimized Size: 369MB (Q8_0 quantized)
  • ⚑ Efficient Inference: 323.3ms average response time
  • πŸ”„ Local Processing: No data sent to external servers
  • πŸ—οΈ Structured JSON Output: Clean, parseable entity extraction results

πŸš€ Quick Start

Local Inference (Recommended)

  1. Install llama.cpp (if not already installed):

    # Clone and build llama.cpp
    # (newer llama.cpp releases build with CMake instead of make; see the llama.cpp README)
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    make
    
    # Return to project directory
    cd ../NER_small
    
  2. Download the GGUF model:

    # Download model files from HuggingFace
    wget https://huggingface.co/Minibase/NER-Small/resolve/main/model.gguf
    wget https://huggingface.co/Minibase/NER-Small/resolve/main/ner_inference.py
    wget https://huggingface.co/Minibase/NER-Small/resolve/main/config.json
    wget https://huggingface.co/Minibase/NER-Small/resolve/main/tokenizer_config.json
    wget https://huggingface.co/Minibase/NER-Small/resolve/main/generation_config.json
    
  3. Start the model server:

    # Start llama.cpp server with the GGUF model
    ../llama.cpp/llama-server \
      -m model.gguf \
      --host 127.0.0.1 \
      --port 8000 \
      --ctx-size 2048 \
      --n-gpu-layers 0
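
Optionally, confirm the server is ready before sending requests; recent llama.cpp builds expose a /health endpoint (host and port as configured above):

    import requests

    # Readiness check: the server reports ok once the model has finished loading
    print(requests.get("http://127.0.0.1:8000/health", timeout=5).json())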
    
  4. Make API calls:

    import requests
    
    # NER tagging via REST API
    response = requests.post("http://127.0.0.1:8000/completion", json={
        "prompt": "Instruction: Identify and tag all named entities in the following text. Use BIO format with entity types: PERSON, ORG, LOC, MISC.\n\nInput: John Smith works at Google in New York.\n\nResponse: ",
        "max_tokens": 512,
        "temperature": 0.1
    })
    
    result = response.json()
    print(result["content"])
    # Output: "John B-PERSON\nSmith I-PERSON\nworks O\nat O\nGoogle B-ORG\nin O\nNew B-LOC\nYork I-LOC\n. O"
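
The tag-per-line output folds naturally back into entity spans; a minimal sketch (bio_to_entities is an illustrative helper, not part of the model's tooling):

    def bio_to_entities(tagged: str) -> list:
        """Collapse 'token TAG' lines in BIO format into entity spans (illustrative)."""
        entities, current = [], None
        for line in tagged.splitlines():
            parts = line.rsplit(" ", 1)
            if len(parts) != 2:
                continue
            token, tag = parts
            if tag.startswith("B-"):
                if current:
                    entities.append(current)
                current = {"text": token, "type": tag[2:]}
            elif tag.startswith("I-") and current:
                current["text"] += " " + token
            else:  # an 'O' tag closes any open entity
                if current:
                    entities.append(current)
                    current = None
        if current:
            entities.append(current)
        return entities

    print(bio_to_entities("John B-PERSON\nSmith I-PERSON\nworks O\nat O\nGoogle B-ORG"))
    # [{'text': 'John Smith', 'type': 'PERSON'}, {'text': 'Google', 'type': 'ORG'}]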
    

Python Client (Recommended)

# Download and use the provided Python client
from ner_inference import NERClient

# Initialize client (connects to local server)
client = NERClient()

# Tag entities in text
text = "Apple Inc. was founded by Steve Jobs in Cupertino, California."
entities = client.extract_entities(text)

print(entities)
# Output (character offsets are Python-style slices, end-exclusive): [
#   {"text": "Apple Inc.", "type": "ORG", "start": 0, "end": 10},
#   {"text": "Steve Jobs", "type": "PERSON", "start": 26, "end": 36},
#   {"text": "Cupertino", "type": "LOC", "start": 40, "end": 49},
#   {"text": "California", "type": "LOC", "start": 51, "end": 61}
# ]

# Batch processing
texts = [
    "Microsoft announced a new CEO.",
    "Paris is the capital of France."
]
all_entities = client.extract_entities_batch(texts)
print(all_entities)
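
If you prefer not to download ner_inference.py, a minimal stand-in that talks to the same server is easy to write. A sketch (the method name mirrors the client above, but this version returns the raw model output rather than parsed spans, and the prompt wording is an assumption):

import requests

class MinimalNERClient:
    """Bare-bones stand-in for the provided NERClient (illustrative, not the official client)."""

    def __init__(self, base_url="http://127.0.0.1:8000"):
        self.base_url = base_url

    def extract_entities(self, text):
        # Prompt shape borrowed from the Quick Start example above
        prompt = (
            "Instruction: Identify and tag all named entities in the following text. "
            "Use BIO format with entity types: PERSON, ORG, LOC, MISC.\n\n"
            f"Input: {text}\n\nResponse: "
        )
        resp = requests.post(
            f"{self.base_url}/completion",
            json={"prompt": prompt, "n_predict": 512, "temperature": 0.1},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["content"]  # raw model output; post-process as needed

client = MinimalNERClient()
print(client.extract_entities("Paris is the capital of France."))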

Direct llama.cpp Usage

# Alternative: Use llama.cpp directly without server
import subprocess
import json

def extract_entities_with_llama_cpp(text: str) -> str:
    prompt = f"Instruction: Identify and tag all named entities in the following text. Use BIO format with entity types: PERSON, ORG, LOC, MISC.\n\nInput: {text}\n\nResponse: "

    # Run llama.cpp directly
    cmd = [
        "../llama.cpp/llama-cli",
        "-m", "model.gguf",
        "--prompt", prompt,
        "--ctx-size", "2048",
        "--n-predict", "512",
        "--temp", "0.1",
        "--log-disable"
    ]

    result = subprocess.run(cmd, capture_output=True, text=True, cwd=".")
    # llama-cli echoes the prompt before the generation; strip it if you only want the tags
    return result.stdout.strip()

# Usage
result = extract_entities_with_llama_cpp("John Smith works at Google in New York.")
print(result)

πŸ“Š Benchmarks & Performance

Overall Performance (100 samples)

| Metric | Score | Description |
|--------|-------|-------------|
| NER F1 Score | 95.1% | Overall entity recognition performance |
| Precision | 91.5% | Accuracy of entity predictions |
| Recall | 100.0% | Perfect recall: finds all relevant entities |
| Average Latency | 323.3ms | Response time performance |

Entity Recognition Performance

  • Entity Identification Accuracy: 100% (100/100 correct predictions when entities are found)
  • Evaluation Methodology: JSON parsing with exact entity type matching (see the sketch below)
  • Output Format: Structured JSON (e.g., {"PER": ["John Smith"], "ORG": ["Google"]})
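
The exact-match comparison is straightforward to reproduce; here is a sketch of per-entity scoring under that methodology (our own illustration, not the benchmark harness itself):

import json

def exact_match_counts(predicted_json, expected):
    """Count true positives, predicted entities, and gold entities using
    exact string + entity-type matching (illustrative, not the official harness)."""
    predicted = json.loads(predicted_json)
    tp = sum(1 for etype, names in predicted.items()
             for name in names if name in expected.get(etype, []))
    return tp, sum(len(v) for v in predicted.values()), sum(len(v) for v in expected.values())

tp, n_pred, n_gold = exact_match_counts(
    '{"PER": ["John Smith"], "ORG": ["Google"]}',
    {"PER": ["John Smith"], "ORG": ["Google"], "LOC": ["New York"]},
)
print(f"precision={tp / n_pred:.2f} recall={tp / n_gold:.2f}")  # precision=1.00 recall=0.67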

Performance Insights

  • βœ… Excellent F1 Score: 95.1% overall performance
  • βœ… Perfect Recall: 100% of expected entities are found
  • βœ… High Precision: 91.5% accuracy on identified entities
  • βœ… Structured Output: Clean JSON format with proper entity categorization
  • βœ… High Reliability: Consistent performance across different entity types
  • βœ… Production Ready: Excellent for real-world NER applications

πŸ’‘ Examples

Here are real examples from the benchmark evaluation showing the structured JSON output:

🏒 Business Example

Input:

Microsoft Corporation announced that Satya Nadella will visit London next week.

Output:

{"PER": ["Satya Nadella"], "ORG": ["Microsoft Corporation"], "LOC": ["London"]}

Analysis: Perfect entity identification and categorization - 3/3 entities correctly identified and classified.

πŸŽ“ Academic Example

Input:

The University of Cambridge is located in the United Kingdom and was founded by King Henry III.

Output:

{"PER": ["King Henry III"], "ORG": ["University of Cambridge"], "LOC": ["United Kingdom"]}

Analysis: All 3 entities correctly identified and properly categorized by type.

πŸ’Ό Professional Example

Input:

John Smith works at Google in New York and uses Python programming language.

Output:

{"PER": ["John Smith"], "ORG": ["Google"], "LOC": ["New York"], "MISC": ["Python"]}

Analysis: Complete entity extraction with accurate type classification for all 4 entities found.
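
The instruction string that elicits this JSON format is not reproduced above; one plausible prompt shape, parsed straight into a Python dict, is sketched below (the exact wording is an assumption, so check it against your own outputs):

import json
import requests

# Hypothetical instruction for JSON-style output; the exact training prompt
# is not documented here, so treat this wording as an assumption.
prompt = (
    "Instruction: Extract all named entities from the following text and return "
    "them as JSON with keys PER, ORG, LOC, MISC.\n\n"
    "Input: Microsoft Corporation announced that Satya Nadella will visit London next week.\n\n"
    "Response: "
)

resp = requests.post("http://127.0.0.1:8000/completion",
                     json={"prompt": prompt, "n_predict": 256, "temperature": 0.1})
entities = json.loads(resp.json()["content"])
print(entities.get("ORG"))  # expected: ["Microsoft Corporation"]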

πŸ—οΈ Technical Details

Model Architecture

  • Architecture: LlamaForCausalLM
  • Parameters: ~360M (0.4B class)
  • Context Window: 2,048 tokens
  • Max Position Embeddings: 2,048
  • Quantization: Q8_0 (GGUF format)
  • File Size: 369MB
  • Memory Requirements: 8GB RAM minimum, 16GB recommended

Training Details

  • Base Model: Custom-trained Llama architecture
  • Fine-tuning Dataset: Mixed-domain entity recognition data
  • Training Objective: Named entity extraction and listing
  • Optimization: Quantized for efficient inference
  • Model Scale: Small capacity optimized for speed

System Requirements

| Component | Minimum | Recommended |
|-----------|---------|-------------|
| Operating System | Linux, macOS, Windows | Linux or macOS |
| RAM | 8GB | 16GB |
| Storage | 400MB free space | 500MB free space |
| Python | 3.8+ | 3.10+ |
| Dependencies | llama.cpp | llama.cpp, requests |

Notes:

  • βœ… CPU-only inference supported but slower
  • βœ… GPU acceleration provides significant speed improvements (see the launch sketch below)
  • βœ… Apple Silicon users get Metal acceleration automatically
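
To take advantage of GPU acceleration, raise --n-gpu-layers when starting the server. A sketch launching it from Python (binary and model paths are assumptions matching the Quick Start layout):

import subprocess

# Start llama-server with layers offloaded to the GPU (paths are assumptions
# matching the Quick Start layout; adjust for your setup).
server = subprocess.Popen([
    "../llama.cpp/llama-server",
    "-m", "model.gguf",
    "--host", "127.0.0.1",
    "--port", "8000",
    "--ctx-size", "2048",
    "--n-gpu-layers", "99",  # offload as many layers as fit in VRAM
])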

πŸ“œ Citation

If you use NER-Standard in your research, please cite:

@misc{ner-standard-2025,
  title={Named Entity Recognition - Standard: High-Performance Named Entity Recognition Model},
  author={Minibase AI Team},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/Minibase/NER-Standard}
}

🀝 Community & Support

πŸ’¬ Join our Discord

πŸ“‹ License

This model is released under the Apache License 2.0.

πŸ™ Acknowledgments

  • CoNLL-2003 Dataset: Used for training and evaluation
  • llama.cpp: For efficient local inference
  • Hugging Face: For model hosting and community
  • Our amazing community: For feedback and contributions

Built with ❀️ by the Minibase team

Making AI more accessible for everyone
