Relation Semantic Guidance and Entity Position Location for Relation Extraction
| Main Authors: | , , , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | SpringerOpen, 2024-12-01 |
| Series: | Data Science and Engineering |
| Subjects: | |
| Online Access: | https://doi.org/10.1007/s41019-024-00268-5 |
| Summary: | Abstract Relation extraction is a research hotspot in natural language processing that aims at acquiring structured knowledge. However, existing methods still grapple with the problem of entity overlapping: they treat relation types as inconsequential labels, overlooking the fact that the relation type strongly influences the entity types, which keeps the performance of these models from improving further. Furthermore, current models handle the fine-grained aspect of entity positioning inadequately, which directly leads to ambiguity in entity boundary localization and uncertainty in relation inference. In response to these challenges, a relation extraction model is proposed that is guided by relational semantic cues and focused on entity boundary localization. The model uses an attention mechanism to align relation semantics with sentence information, so as to obtain the semantic expression most relevant to the target relation instance. It then incorporates an entity locator that exploits additional positional features, thereby enhancing the model's ability to pinpoint entity start and end tags. Consequently, this approach effectively alleviates the problem of entity overlapping. Extensive experiments are conducted on the widely used NYT and WebNLG datasets. The results show that the proposed model outperforms the baselines in F1 score on both datasets, with improvements of up to 5.50% and 2.80%, respectively. |
| ISSN: | 2364-1185, 2364-1541 |
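The summary describes two components: an attention mechanism that aligns a relation's semantics with the sentence, and an entity locator that tags entity start and end positions. The following is a minimal numpy sketch of those two ideas only; it is not the authors' implementation, and all names (`relation_guided_attention`, `entity_locator`, `W_start`, `W_end`) and the toy dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation_guided_attention(H, r):
    """Align a relation embedding r (d,) with token embeddings H (n, d).

    The attention weights emphasize the tokens most relevant to the
    target relation, yielding relation-aware token representations.
    """
    scores = H @ r                   # (n,) relevance of each token to r
    alpha = softmax(scores)          # (n,) attention weights, sum to 1
    context = alpha[:, None] * H     # (n, d) relation-weighted tokens
    return alpha, context

def entity_locator(context, W_start, W_end):
    """Score each token as a candidate entity start / end boundary
    (binary tagging over positions, as in span-boundary schemes)."""
    start_logits = context @ W_start  # (n,)
    end_logits = context @ W_end      # (n,)
    return start_logits > 0, end_logits > 0

# toy example: 6 tokens with 8-dim embeddings, one relation type
n, d = 6, 8
H = rng.normal(size=(n, d))          # stand-in for encoder outputs
r = rng.normal(size=d)               # stand-in for a relation embedding
alpha, context = relation_guided_attention(H, r)
starts, ends = entity_locator(context,
                              rng.normal(size=d),   # hypothetical start scorer
                              rng.normal(size=d))   # hypothetical end scorer
```

In a trained model the boundary scorers would be learned jointly with the attention, so that entities overlapping across different relations receive different start/end tags under different relation embeddings; the random weights here only demonstrate the data flow.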