
Quantifying the advantage of domain-specific pre-training on named entity recognition tasks in materials science

Amalie Trewartha, Nicholas Walker, Haoyan Huo, Sanghoon Lee, Kevin Cruse, +5 others

Abstract

A bottleneck in efficiently connecting new materials discoveries to established literature has arisen due to an increase in publications. This problem may be addressed by using named entity recognition (NER) to extract structured summary-level data from unstructured materials science text. We compare the performance of four NER models on three materials science datasets. The four models include a bidirectional long short-term memory (BiLSTM) model and three transformer models (BERT, SciBERT, and MatBERT) with increasing degrees of domain-specific materials science pre-training. MatBERT improves over the other two BERT-base models by 1%–12%, implying that domain-specific pre-training provides measurable advantages. Despite its relative architectural simplicity, the BiLSTM model consistently outperforms BERT, perhaps due to its domain-specific pre-trained word embeddings. Furthermore, the MatBERT and SciBERT models outperform the original BERT model to a greater extent in the small-data limit. MatBERT's higher-quality predictions should accelerate the extraction of structured data from materials science literature.
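
The comparison above is between token-classification (NER) models. As a rough illustration of how such a fine-tuned BERT-style model is applied at inference time, the sketch below uses the Hugging Face transformers pipeline API. The checkpoint path and example sentence are hypothetical placeholders; the authors distribute MatBERT weights through their own repository rather than under a standard Hub identifier.

    # Minimal sketch: running a fine-tuned BERT-style NER model with the
    # Hugging Face transformers pipeline. The checkpoint path is a placeholder
    # for a MatBERT model fine-tuned for token classification.
    from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

    checkpoint = "path/to/matbert-ner-checkpoint"  # hypothetical local path
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForTokenClassification.from_pretrained(checkpoint)

    # aggregation_strategy="simple" merges sub-word tokens back into whole entity spans.
    ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
                   aggregation_strategy="simple")

    text = "LiFePO4 was synthesized by a solid-state reaction at 700 degrees C."
    for entity in ner(text):
        print(entity["entity_group"], entity["word"], round(entity["score"], 3))

Each predicted span (e.g., a material name or a synthesis operation, depending on the tag set the model was trained with) can then be collected into the structured summary-level records described in the abstract.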

Authors (10)

Amalie Trewartha
Nicholas Walker
Haoyan Huo
Sanghoon Lee
Kevin Cruse
John Dagdelen
Alex Dunn
K. Persson
G. Ceder
Anubhav Jain

Citation Format

Trewartha, A., Walker, N., Huo, H., Lee, S., Cruse, K., Dagdelen, J. et al. (2022). Quantifying the advantage of domain-specific pre-training on named entity recognition tasks in materials science. Patterns, 3(4), 100488. https://doi.org/10.1016/j.patter.2022.100488

Journal Information
Publication Year
2022
Language
English
Total Citations
146
Source Database
Semantic Scholar
DOI
10.1016/j.patter.2022.100488
Access
Open Access