Document-Level Relation Extraction with Entity Enhancement and Context Refinement

Meng Zou, Qiang Yang, Jianfeng Qu, Zhixu Li, An Liu, Lei Zhao, Zhigang Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Document-level Relation Extraction (DocRE) is the task of extracting relational facts expressed across an entire document. Despite its popularity, the task still poses two major difficulties: (i) how to learn more informative embeddings for entity pairs, and (ii) how to capture from the document the crucial context describing the relation between an entity pair. To tackle the first challenge, we propose to encode the document with a task-specific pre-trained encoder, whose pre-training involves three tasks. One novel task is designed to learn relation semantics from diverse expressions by utilizing relation-aware pre-training data, while the other two, Masked Language Modeling (MLM) and Mention Reference Prediction (MRP), are adopted to strengthen the encoder's capacity for text understanding and coreference capturing. To address the second challenge, we craft a hierarchical attention mechanism that refines the context for entity pairs, considering both the embeddings from the encoder and the sequential distance information of mentions in the given document. Extensive experiments on the benchmark dataset DocRED verify that our method achieves better performance than the baselines.
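The abstract's context-refinement idea — weighting mention-level evidence by both its embedding and its sequential distance from the entity pair — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the norm-based relevance score, and the `decay` penalty are illustrative assumptions standing in for learned components.

```python
import math

def distance_aware_attention(mention_vecs, distances, decay=0.1):
    """Toy distance-aware attention over mention embeddings.

    Each mention is scored by its vector norm (a stand-in for a learned
    relevance score), penalized in proportion to its sequential distance
    from the entity pair; a softmax over these scores yields weights,
    and the refined context is the weighted sum of mention embeddings.
    All names and the scoring rule are illustrative assumptions.
    """
    # Relevance score per mention, discounted by distance.
    scores = [sum(x * x for x in v) ** 0.5 - decay * d
              for v, d in zip(mention_vecs, distances)]
    # Numerically stable softmax over the mention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of mention embeddings -> refined context vector.
    dim = len(mention_vecs[0])
    context = [sum(w * v[i] for w, v in zip(weights, mention_vecs))
               for i in range(dim)]
    return weights, context
```

With two identical mention embeddings, the mention closer to the entity pair receives the larger weight, which is the intended effect of folding distance into the attention scores.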
Original language: English (US)
Title of host publication: Web Information Systems Engineering – WISE 2021
Publisher: Springer International Publishing
Pages: 347-362
Number of pages: 16
ISBN (Print): 9783030915599
DOIs
State: Published - Jan 1 2022
