Zhihuan Yan et al.

Knowledge graph entity alignment (EA) is the task of identifying and linking entities from different knowledge graphs (KGs) that refer to the same real-world object. However, structural heterogeneity between KGs and the scarcity of training data have long been two major challenges impeding entity alignment. The advent of Large Language Models (LLMs) presents new avenues for entity knowledge completion and unsupervised EA, owing to their extensive background knowledge and strong ability to process semantic information. However, directly applying LLMs to these two challenges is nontrivial for the following reasons: 1) without appropriate constraints, they blindly enrich entity knowledge; 2) they can generate noisy labels that mislead the alignment. To this end, this paper presents a novel entity alignment framework, named LLM-Align, to effectively leverage LLMs for unsupervised entity alignment. First, a Constrained entity Information Enrichment (CIE) technique is devised, which employs the attributes and relationships already present in the KGs to constrain the generation process of the LLMs, thereby alleviating structural heterogeneity between aligned entities. Next, a Code-formatted Prompt Template (CPT) is designed to assist the LLMs in labeling aligned entity pairs from candidate pairs generated via both semantic and structural similarities. Finally, a Combinatorial Optimization based entity pair Refinement (COR) technique is conceived to further improve the quality of the annotated entity pairs, which are used to train a base EA model; the newly obtained entity pairs are iteratively added to the training data to further enhance alignment performance. Extensive experiments on benchmark datasets of various scales demonstrate the advantages of LLM-Align.
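To make the code-formatted prompting idea concrete, the sketch below renders a candidate-pair labeling task as a structured (JSON) prompt. This is an illustrative assumption of what a CPT-style prompt could look like, not the paper's actual template; the function name, field names, and instruction wording are all hypothetical.

```python
import json

def build_cpt_prompt(source_entity: dict, candidates: list) -> str:
    """Render a candidate-labeling task as a code-like (JSON) prompt.

    Presenting entities as structured records rather than free text is one
    way to constrain the LLM and make its output easier to parse; the exact
    schema here is a hypothetical stand-in for the paper's CPT design.
    """
    task = {
        "task": "entity_alignment_labeling",
        "source_entity": source_entity,
        "candidate_entities": candidates,
        "instruction": (
            "Return the index of the candidate that refers to the same "
            "real-world object as source_entity, or -1 if none match."
        ),
    }
    return json.dumps(task, indent=2, ensure_ascii=False)

# Toy example: candidates retrieved via semantic/structural similarity.
src = {
    "name": "Paris",
    "attributes": {"country": "France"},
    "relations": ["capital_of: France"],
}
cands = [
    {"index": 0, "name": "Paris, Texas", "attributes": {"country": "USA"}},
    {"index": 1, "name": "Paris", "attributes": {"country": "France"}},
]
prompt = build_cpt_prompt(src, cands)
```

The LLM's answer to such a prompt would yield a labeled pair (here, candidate 1), which LLM-Align's COR step would then refine before adding it to the training data.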