---
tags:
- spacy
- token-classification
language:
- en
license: mit
model-index:
- name: en_core_web_md
  results:
  - task:
      name: NER
      type: token-classification
    metrics:
    - name: NER Precision
      type: precision
      value: 0.8531330602
    - name: NER Recall
      type: recall
      value: 0.8448016827
    - name: NER F Score
      type: f_score
      value: 0.8489469314
  - task:
      name: POS
      type: token-classification
    metrics:
    - name: POS Accuracy
      type: accuracy
      value: 0.9736958159
  - task:
      name: SENTER
      type: token-classification
    metrics:
    - name: SENTER Precision
      type: precision
      value: 0.9144345238
    - name: SENTER Recall
      type: recall
      value: 0.8918134442
    - name: SENTER F Score
      type: f_score
      value: 0.9029823331
  - task:
      name: UNLABELED_DEPENDENCIES
      type: token-classification
    metrics:
    - name: Unlabeled Dependencies Accuracy
      type: accuracy
      value: 0.9186827918
  - task:
      name: LABELED_DEPENDENCIES
      type: token-classification
    metrics:
    - name: Labeled Dependencies Accuracy
      type: accuracy
      value: 0.9007
---
Details: https://spacy.io/models/en#en_core_web_md
English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer.
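A minimal usage sketch (assuming the package is already installed in your environment, e.g. via `python -m spacy download en_core_web_md` or a direct wheel install):

```python
import spacy

# Load the full pipeline: tok2vec, tagger, parser, attribute_ruler, lemmatizer, ner.
nlp = spacy.load("en_core_web_md")

doc = nlp("Apple is looking at buying U.K. startup for $1 billion.")

# Fine-grained tags, coarse POS, and lemmas from tagger/attribute_ruler/lemmatizer.
for token in doc:
    print(token.text, token.tag_, token.pos_, token.lemma_)

# Named entities from the ner component.
for ent in doc.ents:
    print(ent.text, ent.label_)
```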
| Feature | Description |
|---|---|
| Name | en_core_web_md |
| Version | 3.2.0 |
| spaCy | >=3.2.0,<3.3.0 |
| Default Pipeline | tok2vec, tagger, parser, attribute_ruler, lemmatizer, ner |
| Components | tok2vec, tagger, parser, senter, attribute_ruler, lemmatizer, ner |
| Vectors | 684,830 keys, 20,000 unique vectors (300 dimensions) |
| Sources | OntoNotes 5 (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston); ClearNLP Constituent-to-Dependency Conversion (Emory University); WordNet 3.0 (Princeton University); GloVe Common Crawl (Jeffrey Pennington, Richard Socher, and Christopher D. Manning) |
| License | MIT |
| Author | Explosion |
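The 20,000 static GloVe vectors back spaCy's standard similarity API. A short sketch of what that enables (the example texts are illustrative):

```python
import spacy

nlp = spacy.load("en_core_web_md")

doc1 = nlp("I like salty fries and hamburgers.")
doc2 = nlp("Fast food tastes very good.")

# Doc similarity: cosine similarity of the averaged word vectors.
print(doc1.similarity(doc2))

# Each in-vocabulary token carries a 300-dimensional vector.
token = doc1[2]  # "salty"
print(token.has_vector, token.vector.shape)  # True (300,)
```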
### Label Scheme

114 labels for 4 components:

| Component | Labels |
|---|---|
| tagger | $, '', ,, -LRB-, -RRB-, ., :, ADD, AFX, CC, CD, DT, EX, FW, HYPH, IN, JJ, JJR, JJS, LS, MD, NFP, NN, NNP, NNPS, NNS, PDT, POS, PRP, PRP$, RB, RBR, RBS, RP, SYM, TO, UH, VB, VBD, VBG, VBN, VBP, VBZ, WDT, WP, WP$, WRB, XX, \`\` |
| parser | ROOT, acl, acomp, advcl, advmod, agent, amod, appos, attr, aux, auxpass, case, cc, ccomp, compound, conj, csubj, csubjpass, dative, dep, det, dobj, expl, intj, mark, meta, neg, nmod, npadvmod, nsubj, nsubjpass, nummod, oprd, parataxis, pcomp, pobj, poss, preconj, predet, prep, prt, punct, quantmod, relcl, xcomp |
| senter | I, S |
| ner | CARDINAL, DATE, EVENT, FAC, GPE, LANGUAGE, LAW, LOC, MONEY, NORP, ORDINAL, ORG, PERCENT, PERSON, PRODUCT, QUANTITY, TIME, WORK_OF_ART |
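The senter labels (I, S) come from the trained sentence segmenter, which ships with the package but is disabled by default; sentence boundaries normally come from the parser. A sketch of swapping them, plus looking up label glosses with `spacy.explain` (assuming the standard spaCy 3 pipe-toggling API):

```python
import spacy

nlp = spacy.load("en_core_web_md")

# Swap the parser for the lighter senter to get sentence boundaries
# without a full dependency parse.
nlp.disable_pipe("parser")
nlp.enable_pipe("senter")

doc = nlp("This is a sentence. This is another one.")
print([sent.text for sent in doc.sents])

# spacy.explain maps label names to human-readable glosses.
print(spacy.explain("GPE"))    # Countries, cities, states
print(spacy.explain("acomp"))  # adjectival complement
```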
### Accuracy

| Type | Score |
|---|---|
| TOKEN_ACC | 99.93 |
| TOKEN_P | 99.57 |
| TOKEN_R | 99.58 |
| TOKEN_F | 99.57 |
| TAG_ACC | 97.37 |
| SENTS_P | 91.44 |
| SENTS_R | 89.18 |
| SENTS_F | 90.30 |
| DEP_UAS | 91.87 |
| DEP_LAS | 90.07 |
| ENTS_P | 85.31 |
| ENTS_R | 84.48 |
| ENTS_F | 84.89 |
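These scores are self-reported from the model's training run. To reproduce them on your own annotated data, spaCy's built-in evaluation can be used; a minimal sketch, where `dev.spacy` is a hypothetical path to a DocBin of gold-annotated Docs:

```python
import spacy
from spacy.tokens import DocBin
from spacy.training import Example

nlp = spacy.load("en_core_web_md")

# "dev.spacy" is a placeholder path to a DocBin of gold-annotated Docs.
gold_docs = DocBin().from_disk("dev.spacy").get_docs(nlp.vocab)

# Pair a freshly tokenized doc (prediction side) with each gold doc;
# nlp.evaluate runs the pipeline and scores it against the references.
examples = [Example(nlp.make_doc(gold.text), gold) for gold in gold_docs]
scores = nlp.evaluate(examples)

# Keys mirror the table above: token_acc, tag_acc, sents_f,
# dep_uas, dep_las, ents_p, ents_r, ents_f, ...
print(scores["ents_f"], scores["dep_las"])
```

The same numbers can be produced from the command line with `python -m spacy evaluate en_core_web_md dev.spacy`.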