DLL Catalog Author Reconciliation Model

The purpose of this model is to automate the reconciliation of bibliographic metadata with records in the DLL Catalog.

The DLL Catalog maintains authority records for authors and work records for works. Each work is linked to its author (if known), and each individual item record must be linked to the relevant authority records and work records.

The Problem

The problem is linking incoming metadata for individual items to their corresponding author and work records in the DLL Catalog. The metadata that we acquire from other sites comes in different forms, with different spellings of authors' names and of the titles of their works. Reconciling one or two records does not take much time, but more often than not we acquire many thousands of records at once, which creates a significant bottleneck in the process of publishing records in the catalog.

The Proposed Solution

The authority and work records in the DLL Catalog contain multiple variant spellings of author names and work titles, and new variants are added to the records as we encounter them. In other words, we already have a labeled dataset that can be used to train a model to identify names and titles and match them to the unique identifiers of the DLL Catalog's authority and work records.
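
For illustration, the training data can be thought of as pairs of variant spellings and identifiers. The record structure and identifiers in this sketch are hypothetical, not the DLL Catalog's actual schema:

```python
# Hypothetical authority records: each maps a DLL identifier
# to the variant spellings attested for that author.
authority_records = [
    {"dll_id": "A0123", "variants": ["Vergil", "Virgil", "P. Vergilius Maro"]},
    {"dll_id": "A0456", "variants": ["Lucan", "M. Annaeus Lucanus", "Lucano"]},
]

# Flatten into (text, label) pairs suitable for supervised training.
training_pairs = [
    (variant, record["dll_id"])
    for record in authority_records
    for variant in record["variants"]
]
```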

Accurate and reliable author identification will also make the second goal, reconciling titles with their corresponding work records, easier: the author's name can be used to narrow the field of potential matches to the works by that author, reducing the chance of false positives on works with the same or similar titles. For example, both Julius Caesar and Lucan wrote works called Bellum Civile, and several authors wrote works known generically as Carmina.
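
As a sketch of that second step (again with hypothetical identifiers and a made-up works list), resolving the author first turns an ambiguous title into a single candidate:

```python
# Hypothetical work records; two different authors wrote a "Bellum Civile".
works = [
    {"work_id": "W0001", "author_id": "A0100", "title": "Bellum Civile"},  # Julius Caesar
    {"work_id": "W0002", "author_id": "A0456", "title": "Bellum Civile"},  # Lucan
]

def candidate_works(author_id, works):
    """Restrict title matching to works attributed to one author."""
    return [w for w in works if w["author_id"] == author_id]

# Once the author is resolved to Lucan, the title is no longer ambiguous.
print(candidate_works("A0456", works))  # only Lucan's Bellum Civile remains
```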

The Model

After preliminary experiments with sequential neural network models using bag-of-words, term frequency-inverse document frequency (tf-idf), and custom word embedding encodings, I settled on a pretrained BERT model (Devlin et al. 2018). Specifically, I am using Hugging Face's DistilBERT base multilingual (cased) model, which is based on work by Sanh et al. (2020).
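
In outline, fine-tuning treats author identification as text classification, with one class per DLL author identifier. The following is a minimal sketch using the transformers library; the number of labels is a placeholder, not the actual count:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# One class per author in the DLL Catalog's authority records.
num_authors = 1000  # placeholder; set to the actual number of authors

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-multilingual-cased",
    num_labels=num_authors,
)

# Encode a variant spelling and score it against the candidate authors.
inputs = tokenizer("P. Vergilius Maro", return_tensors="pt")
logits = model(**inputs).logits  # shape: (1, num_authors)
```

The fine-tuned result is published as sjhuskey/distilbert_multilingual_cased_latin_author_identifier and can be loaded with from_pretrained in the same way.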

Emissions

Here is the codecarbon output from training on Google Colab with an A100 runtime (duration in seconds, power in watts, energy in kWh, emissions in kg CO₂-equivalent):

timestamp: 2025-06-12T16:36:57
project_name: codecarbon
run_id: a4b09711-f89f-4264-abad-e08e30dd32b1
duration: 2197.286738872528
emissions: 0.0148805916450009
emissions_rate: 6.772257521854648e-06
cpu_power: 42.5
gpu_power: 0.0
ram_power: 9.000000000000002
cpu_energy: 0.0259392851730187
gpu_energy: 0
ram_energy: 0.0054927221614122
energy_consumed: 0.0314320073344309
country_name: United States
country_iso_code: USA
os: macOS-15.5-arm64-arm-64bit
python_version: 3.10.9
codecarbon_version: 2.2.2
cpu_count: 12
cpu_model: Apple M4 Pro
longitude: -97.4536
latitude: 35.2144
ram_total_size: 24.0
tracking_mode: machine
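
For reference, a report like the one above can be produced by wrapping the training loop in a codecarbon tracker. The training function below is a hypothetical placeholder for the actual fine-tuning loop:

```python
from codecarbon import EmissionsTracker

def train():
    """Placeholder for the actual fine-tuning loop."""
    pass

tracker = EmissionsTracker(project_name="codecarbon")
tracker.start()
try:
    train()
finally:
    emissions_kg = tracker.stop()  # total emissions in kg CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.4f} kg CO2-eq")
```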
