- Session 2: Multi-task Learning, NLP, Computer Vision, Applications -- Day 2 (Nov. 18), talks: 09:00-11:00 (5th floor Hall 2), poster session: 11:00-13:30
- Poster number: Mon25
Yashen Wang (China Academy of Electronics and Information Technology of CETC)
This paper proposes an accurate text-enhanced knowledge graph (KG) representation model that uses textual information to enrich knowledge representations. In particular, a mutual attention mechanism between the KG and text is proposed to learn more accurate textual representations, which in turn improve the knowledge graph representation, within a unified parameter-sharing semantic space. Unlike conventional joint models, no complicated linguistic analysis or strict alignment between the KG and text is required to train the model. Experimental results show that the proposed model achieves state-of-the-art performance on both link prediction and triple classification tasks, and significantly outperforms previous text-enhanced knowledge representation models.
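The idea of attention between a KG entity and its description text can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the dot-product scoring, the sigmoid gate, and the function names are assumptions; it only assumes the KG and text embeddings live in a shared space of the same dimension, as the unified semantic space suggests.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def text_enhanced_entity(entity_emb, word_embs):
    """Illustrative attention between a KG entity and its description text.

    entity_emb: (d,)   KG-side embedding of the entity
    word_embs:  (n, d) embeddings of the n words in the entity's text
    Returns a text-enhanced entity representation of shape (d,).
    """
    # Text-side attention: weight each word by its affinity to the entity.
    scores = word_embs @ entity_emb            # (n,) dot-product scores
    weights = softmax(scores)                  # (n,) attention weights
    text_repr = weights @ word_embs            # (d,) attention-pooled text vector

    # Combine the structural and textual views with an element-wise gate
    # (the gating choice is hypothetical, not taken from the paper).
    gate = 1.0 / (1.0 + np.exp(-(entity_emb * text_repr)))
    return gate * entity_emb + (1.0 - gate) * text_repr
```

The enhanced entity vector can then be plugged into any standard KG scoring function for link prediction or triple classification.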