arXiv Open Access 2021

Linguistically Informed Masking for Representation Learning in the Patent Domain

Sophia Althammer, Mark Buckley, Sebastian Hofstätter, Allan Hanbury

Abstract

Domain-specific contextualized language models have demonstrated substantial effectiveness gains on domain-specific downstream tasks such as similarity matching, entity recognition, and information retrieval. However, successfully applying such models in highly specific language domains requires domain adaptation of the pre-trained models. In this paper we propose the empirically motivated Linguistically Informed Masking (LIM) method, which focuses domain-adaptive pre-training on the linguistic patterns of patents, a highly technical sublanguage. We quantify the relevant differences between patent, scientific, and general-purpose language, and demonstrate for two different language models (BERT and SciBERT) that domain adaptation with LIM leads to systematically improved representations, evaluating the domain-adapted representations of patent language on two independent downstream tasks: IPC classification and similarity matching. We further demonstrate the impact of balancing learning from different information sources during domain adaptation for the patent domain. We make the source code as well as the domain-adaptively pre-trained patent language models publicly available at https://github.com/sophiaalthammer/patent-lim.
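The abstract does not spell out the masking mechanics, but the core idea, biasing the masked-language-modelling objective toward domain-characteristic linguistic units instead of masking uniformly at random, can be sketched as follows. This is a minimal illustration under stated assumptions: the `lim_masking_probs` helper, the noun-based heuristic, and the `noun_boost` parameter are hypothetical, not the paper's exact formulation.

```python
import random

def lim_masking_probs(tagged_tokens, base_rate=0.15, noun_boost=2.0):
    """Assign each token a masking probability, up-weighting nouns.

    Hypothetical illustration of Linguistically Informed Masking:
    tokens carrying technical patent content (here crudely approximated
    by NOUN part-of-speech tags) are masked more often than the uniform
    15% rate used in standard BERT pre-training.
    """
    return [
        min(1.0, base_rate * noun_boost) if tag == "NOUN" else base_rate
        for _, tag in tagged_tokens
    ]

def apply_masks(tagged_tokens, probs, rng):
    """Replace sampled tokens with [MASK] according to per-token probabilities."""
    return [
        "[MASK]" if rng.random() < p else tok
        for (tok, _), p in zip(tagged_tokens, probs)
    ]

# Example: a POS-tagged patent-style phrase (tags would normally come
# from a tagger such as spaCy, omitted here to stay self-contained).
sentence = [
    ("the", "DET"), ("semiconductor", "NOUN"), ("substrate", "NOUN"),
    ("is", "VERB"), ("etched", "VERB"), ("anisotropically", "ADV"),
]
probs = lim_masking_probs(sentence)
masked = apply_masks(sentence, probs, random.Random(0))
```

Sampling per-token probabilities rather than a fixed set of positions keeps the procedure a drop-in replacement for standard random masking in an MLM data pipeline.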

Citation Format

Althammer, S., Buckley, M., Hofstätter, S., & Hanbury, A. (2021). Linguistically Informed Masking for Representation Learning in the Patent Domain. arXiv:2106.05768. https://arxiv.org/abs/2106.05768

Journal Information
Publication Year: 2021
Language: English (en)
Source Database: arXiv
Access: Open Access ✓