
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23)

Survey Track

Generalizing to Unseen Elements: A Survey on Knowledge Extrapolation for Knowledge Graphs
Mingyang Chen^1, Wen Zhang^2, Yuxia Geng^1, Zezhong Xu^1, Jeff Z. Pan^3, Huajun Chen^1,4,5*
^1 College of Computer Science and Technology, Zhejiang University
^2 School of Software Technology, Zhejiang University
^3 School of Informatics, The University of Edinburgh
^4 Donghai Laboratory
^5 Alibaba-Zhejiang University Joint Institute of Frontier Technologies
{mingyangchen, zhang.wen, gengyx, xuzezhong, huajunsir}@zju.edu.cn, [email protected]

Abstract

Knowledge graphs (KGs) have become valuable knowledge resources in various applications, and knowledge graph embedding (KGE) methods have garnered increasing attention in recent years. However, conventional KGE methods still face challenges when it comes to handling unseen entities or relations during model testing. To address this issue, much effort has been devoted in various fields of KGs. In this paper, we use a set of general terminologies to unify these methods and refer to them collectively as Knowledge Extrapolation. We comprehensively summarize these methods, classified by our proposed taxonomy, and describe their interrelationships. Additionally, we introduce benchmarks and provide comparisons of these methods based on aspects that are not captured by the taxonomy. Finally, we suggest potential directions for future research.

1 Introduction

A knowledge graph (KG) [Pan et al., 2017] comprises triples in the form of facts, such as (head entity, relation, tail entity), with entities and relations represented as nodes and edges in a graph. Despite their simplicity, KGs are increasingly recognized as valuable knowledge resources for various applications.

Knowledge graph embedding (KGE) is a dominant field that has recently gained massive attention for representing elements (i.e., entities and relations) of knowledge graphs in latent vector spaces. Conventional KGE methods are evaluated on entities and relations occurring in the triples used for model training, assuming a fixed entity and relation set. However, this assumption does not always hold in real-world applications [Hamaguchi et al., 2017; Xiong et al., 2018]. Knowledge graphs are dynamic, with new entities and relations emerging over time. Furthermore, the number of knowledge graphs is increasing. Conventional KGE methods are incapable of handling emerging elements of KGs because they cannot map unseen elements to a proper position in the vector space of trained elements. This inability to generalize to unseen elements limits their usefulness in addressing the evolving nature of KGs.

Recently, many studies have focused on generalizing to unseen elements of KGs in various scenarios. For instance, some research has concentrated on predicting missing triples for out-of-knowledge-base (OOKB) entities [Hamaguchi et al., 2017; Wang et al., 2019]. Additionally, some inductive relation prediction methods and toolkits have studied generalizing to entirely new KGs with unseen entities [Teru et al., 2020; Chen et al., 2022b; Zhang et al., 2023]. Furthermore, problems of generalizing to unseen relations have also been deeply investigated, especially in low-resource settings such as few-shot [Xiong et al., 2018; Chen et al., 2019] and zero-shot [Geng et al., 2021] scenarios.

Therefore, the problem of generalizing to unseen elements has garnered increasing attention in the knowledge graph field. However, investigations of this problem have used different scenarios and terminologies, such as OOKB, inductive, few-shot, and zero-shot. Furthermore, no existing literature systematically summarizes this area. Based on our research in this field, we ask the following question: Can we use a general set of terminologies to comprehensively summarize works in this area and make comparisons to provide insights for future research? In this paper, we present the first survey of recent studies on generalizing to unseen elements in KGs and refer to them collectively as Knowledge Extrapolation. Specifically, we begin by providing background on knowledge extrapolation and unifying related definitions. Next, we offer an overview of knowledge extrapolation methods and categorize them using our proposed taxonomy. Additionally, we summarize commonly used benchmarks, compare methods, and suggest directions for future research.

* Corresponding author.

2 Preliminary

2.1 Knowledge Graph Embedding

To begin, we provide a formal definition of knowledge graphs. Knowledge graphs are commonly represented as graph structures, in which nodes represent entities corresponding to specific concepts, and edges are labeled by relations. Formally, a knowledge graph can be denoted as


G = (E, R, T), where E is the set of entities, R is the set of relations, and T is the set of triples. A triple describes a connection between two entities through a relation, and can be represented as (h, r, t) ∈ E × R × E. In this context, both entities and relations are considered elements within a KG, as is common in the RDF [Pan, 2009] format.

Based on knowledge graphs, the goal of conventional knowledge graph embedding methods is to embed the elements in E and R into continuous vector spaces while preserving the inherent structure of a KG [Wang et al., 2017]. Link prediction is a common task for evaluating the effectiveness of a KGE method, and we use it as an example to introduce the basics since most existing works focus on it. However, exploring KGE and knowledge extrapolation on other KG-related tasks is a pressing need (cf. §6.1).

Figure 1: Training (a) and test (b) sets for conventional knowledge graph embedding. Example test sets for the entity extrapolation setting (c) and the relation extrapolation setting (d). Note that there may be various kinds of support information about unseen elements in the support set; we use relevant triples as examples.

In practice, as shown in Figure 1, there is a training set G^tr = (E^tr, R^tr, T^tr) (a) and a test set G^te = (E^te, R^te, T^te) (b). The goal of link prediction is to train the model to score positive triples higher than negative triples and to generalize the model to the test set. This can be expressed as min_θ E_{(h,r,t)∈T^te}[ℓ_θ(h, r, t)], where ℓ_θ(h, r, t) ∝ −s(h, r, t) + s(h′, r′, t′) is the loss function, s is the score function, and θ represents the learnable parameters, including entity and relation embeddings. We use h, r, t to denote the embeddings of h, r, t, and (h′, r′, t′) ∉ T^tr to denote negative samples.

However, conventional KGE methods cannot deal with new entities and relations during the test since they do not have proper embeddings for new elements. Knowledge extrapolation focuses on solving this problem, and we describe the details next.

2.2 Knowledge Extrapolation Settings

Methods aimed at knowledge extrapolation attempt to conduct link prediction on unseen elements. To unify existing works on handling these unseen elements, we introduce a set of general terms. Specifically, during knowledge extrapolation, there are two sets used for testing: one provides support information about the unseen elements (such as their structural or textual features), and the other evaluates the model's link prediction ability, much like the original T^te. We refer to these sets as the support set S^te and the query set Q^te, respectively, and the test set is formulated as G^te = (E^te, R^te, S^te, Q^te). While different works may use varying terminology, they must all involve these two sets during knowledge extrapolation. For convenience, we refer to them uniformly as the support and query sets.

In this work, we categorize existing approaches for handling unseen elements into two types: Entity Extrapolation and Relation Extrapolation. As illustrated in Figure 1, we use the term "entity extrapolation" to refer to situations where previously unseen entities appear in the test set, and "relation extrapolation" to describe scenarios where previously unseen relations are present in the test set.

Recent years have seen fast growth in knowledge extrapolation for KGs. Early work on rule learning [Galárraga et al., 2013; Meilicke et al., 2018] addressed entity extrapolation via rules that are independent of specific entities. Later, researchers [Xie et al., 2016; Shah et al., 2019] used textual descriptions to handle unseen entities. With GNNs becoming popular for KGs, methods [Hamaguchi et al., 2017; Wang et al., 2019] investigated aggregating knowledge from seen entities to unseen entities. GraIL [Teru et al., 2020] used subgraph encoding to handle unseen entities in a completely new KG, with many methods following this paradigm. Other works [Chen et al., 2022b; Galkin et al., 2022a] explored encoding entity-independent information for unseen entities. Meanwhile, some models [Wang et al., 2021c; Wang et al., 2021a] incorporated pre-trained language models to encode textual descriptions for unseen entities. Recent research has also considered few-shot settings for handling unseen entities [Baek et al., 2020; Zhang et al., 2021]. For relation extrapolation, pioneering works [Xiong et al., 2018; Chen et al., 2019] have often focused on few-shot settings. Other works represent unseen relations using textual descriptions [Qin et al., 2020] or ontologies [Geng et al., 2021] without using support triples, which is referred to as zero-shot learning for KGs.

3 Methods

In this section, we describe the existing works on knowledge extrapolation in detail. As shown in Figure 2, we categorize these methods based on their model design. For each category of methods, we begin by introducing its general idea and then delve into the specifics of existing methods.

3.1 Entity Extrapolation

Entity Encoding-based Entity Extrapolation

Conventional knowledge graph embedding methods typically learn an embedding lookup table for entities. However, this paradigm hinders models from extrapolating to unseen entities, as there are no reasonable embeddings for them in the learned table. To handle unseen entities, an intuitive approach is to learn to encode entities rather than learn fixed embedding tables. These learned encoders can operate on the support set of entities to produce reasonable embeddings for them. The score function of entity encoding-based entity extrapolation for (h, r, t) ∈ Q^te is formulated as:

s(f(h, S^te), r, f(t, S^te))    (1)
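To make the entity encoding idea concrete, here is a minimal NumPy sketch in the spirit of MEAN-style neighbor aggregation with a TransE-like score: an unseen entity is embedded by averaging messages from its seen neighbors in the support set, and the resulting vector is then scored like any trained embedding. The toy embeddings, the entity and relation names, and the choice of TransE are illustrative assumptions, not the exact setup of any specific surveyed method.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy embedding tables for *seen* elements (randomly initialized stand-ins
# for trained KGE parameters; names are purely illustrative).
entity_emb = {e: rng.normal(size=DIM) for e in ["paris", "france", "berlin"]}
relation_emb = {r: rng.normal(size=DIM) for r in ["capital_of", "located_in"]}

def encode_unseen(entity, support, entity_emb, relation_emb):
    """A MEAN-style encoder f(e, S^te): embed an unseen entity by averaging
    messages from its seen neighbors, assuming a TransE geometry (h + r ≈ t)."""
    msgs = []
    for h, r, t in support:
        if t == entity and h in entity_emb:    # unseen entity is the tail: t ≈ h + r
            msgs.append(entity_emb[h] + relation_emb[r])
        elif h == entity and t in entity_emb:  # unseen entity is the head: h ≈ t - r
            msgs.append(entity_emb[t] - relation_emb[r])
    return np.mean(msgs, axis=0)

def score(h_vec, r, t_vec):
    """TransE score s(h, r, t) = -||h + r - t||; higher means more plausible."""
    return -np.linalg.norm(h_vec + relation_emb[r] - t_vec)

# Support set S^te mentions the unseen entity "lyon" once.
support = [("lyon", "located_in", "france")]
lyon = encode_unseen("lyon", support, entity_emb, relation_emb)

# Queries from Q^te: the encoder places "lyon" so that the supported triple
# scores higher than an implausible alternative.
s_true = score(lyon, "located_in", entity_emb["france"])
s_false = score(lyon, "located_in", entity_emb["berlin"])
```

With a single support triple the aggregation is exact, so the supported query triple receives the maximal score; with several neighbors the mean yields a compromise position, which is the intuition behind the learned aggregators surveyed below.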


Knowledge Extrapolation
- Entity Extrapolation
  - Entity Encoding
    - From Structural Information: MEAN [Hamaguchi et al., 2017], LAN [Wang et al., 2019], [Bhowmik and de Melo, 2020], [Albooyeh et al., 2020], CFAG [Wang et al., 2022a], ARGCN [Cui et al., 2022], QBLP [Ali et al., 2021], GEN [Baek et al., 2020], HRFN [Zhang et al., 2021], INDIGO [Liu et al., 2021], MorsE [Chen et al., 2022b], NodePiece [Galkin et al., 2022a]
    - From Other Information: DKRL [Xie et al., 2016], ConMask [Shi and Weninger, 2018], OWE [Shah et al., 2019], KEPLER [Wang et al., 2021c], StAR [Wang et al., 2021a], BLP [Daza et al., 2021], SimKGC [Wang et al., 2022b], StATIK [Markowitz et al., 2022]
  - Subgraph Predicting: GraIL [Teru et al., 2020], CoMPILE [Mai et al., 2021], TACT [Chen et al., 2021], ConGLR [Lin et al., 2022], SNRI [Xu et al., 2022], BertRL [Zha et al., 2022], RMPI [Geng et al., 2023], PathCon [Wang et al., 2021b], NBFNet [Zhu et al., 2021], RED-GNN [Zhang and Yao, 2022]
  - Rule Learning: AMIE [Galárraga et al., 2013], RuleN [Meilicke et al., 2018], AnyBURL [Meilicke et al., 2019], Neural LP [Yang et al., 2017], DRUM [Sadeghian et al., 2019], CBGNN [Yan et al., 2022]
- Relation Extrapolation
  - Relation Encoding
    - From Structural Information: MetaR [Chen et al., 2019], GANA [Niu et al., 2021]
    - From Other Information: ZSGAN [Qin et al., 2020], OntoZSL [Geng et al., 2021], DMoG [Song et al., 2022], HAPZSL [Li et al., 2022], DOZSL [Geng et al., 2022]
  - Entity Pair Matching: GMatching [Xiong et al., 2018], FSRL [Zhang et al., 2020], FAAN [Sheng et al., 2020], MetaP [Jiang et al., 2021], P-INT [Xu et al., 2021], GraphANGEL [Jin et al., 2022], CSR [Huang et al., 2022]

Figure 2: Taxonomy of knowledge extrapolation for knowledge graphs.
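The taxonomy in Figure 2 can also be read as a nested data structure. The sketch below (method names taken from the figure; the dict layout and lookup helper are assumptions for illustration) supports programmatic lookup of where a method sits:

```python
# Figure 2 as a nested mapping: category -> subcategory -> methods.
# Citations are omitted; see the figure for the corresponding references.
TAXONOMY = {
    "Entity Extrapolation": {
        "Entity Encoding": {
            "From Structural Information": ["MEAN", "LAN", "CFAG", "ARGCN", "QBLP",
                                            "GEN", "HRFN", "INDIGO", "MorsE", "NodePiece"],
            "From Other Information": ["DKRL", "ConMask", "OWE", "KEPLER",
                                       "StAR", "BLP", "SimKGC", "StATIK"],
        },
        "Subgraph Predicting": ["GraIL", "CoMPILE", "TACT", "ConGLR", "SNRI",
                                "BertRL", "RMPI", "PathCon", "NBFNet", "RED-GNN"],
        "Rule Learning": ["AMIE", "RuleN", "AnyBURL", "Neural LP", "DRUM", "CBGNN"],
    },
    "Relation Extrapolation": {
        "Relation Encoding": {
            "From Structural Information": ["MetaR", "GANA"],
            "From Other Information": ["ZSGAN", "OntoZSL", "DMoG", "HAPZSL", "DOZSL"],
        },
        "Entity Pair Matching": ["GMatching", "FSRL", "FAAN", "MetaP",
                                 "P-INT", "GraphANGEL", "CSR"],
    },
}

def find(method, tree, path=()):
    """Return the taxonomy path of a method name, or None if it is absent."""
    for key, sub in tree.items():
        if isinstance(sub, dict):
            hit = find(method, sub, path + (key,))
            if hit:
                return hit
        elif method in sub:
            return path + (key,)
    return None
```

For example, `find("GraIL", TAXONOMY)` yields the path ("Entity Extrapolation", "Subgraph Predicting"), mirroring the figure.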

where f denotes the learnable encoder, and r denotes the relation embedding or relation-dependent parameters. Note that not all methods encode both head and tail entities.

Existing works have designed various encoding models f that correspond to different types of information in the support set S^te. If the support set contains triples about unseen entities, then f encodes those entities from structural information. On the other hand, if the support set contains other types of information (e.g., textual descriptions) about unseen entities, we refer to this situation as encoding unseen entities from other information.

Encode from structural information. Some methods assume that unseen entities are connected to seen entities and focus on transferring knowledge from the latter to the former by aggregating neighbors for the unseen entities. For example, Hamaguchi et al. [2017] apply a graph neural network to propagate representations between entities and represent unseen emerging entities using different transition and pooling functions. To build more effective neighbor aggregators for emerging entities, LAN [Wang et al., 2019] uses attention weights based on logic rules and neighbor entity embeddings that take query relations into account. VN Network [He et al., 2020] conducts virtual neighbor prediction based on logic and symmetric path rules to make the neighborhoods of unseen entities denser. Bhowmik and de Melo [2020] propose a graph transformer architecture that aggregates neighbor information to learn entity embeddings and use policy gradients to find symbolic reasoning paths for explainable reasoning. Albooyeh et al. [2020] design a training procedure resembling what is expected of the model at test time. CFAG [Wang et al., 2022a] first uses an aggregator to obtain entity representations and then applies a GAN to generate entity representations based on the query relation. ARGCN [Cui et al., 2022] considers representing new entities that emerge in multiple batches. QBLP [Ali et al., 2021] explores the benefits of employing hyper-relational KGs for entity extrapolation.

In addition to considering connections between seen and unseen entities, recent works have also started to explore relations between unseen entities. GEN [Baek et al., 2020] considers unseen entities with only a few associated triples and tackles link prediction both between seen and unseen entities and between unseen entities, designing a meta-learning framework that learns to extrapolate the knowledge of a given KG to unseen entities. Following GEN, HRFN [Zhang et al., 2021] first coarsely represents unseen entities by meta-learned hyper-relation features and then uses a GNN to obtain fine-grained embeddings for the unseen entities.

In certain scenarios, unseen entities have no connection with seen entities and form a completely new KG. Consequently, some methods assume that the training and test entity sets do not overlap, i.e., E^te ∩ E^tr = ∅. In these scenarios, the methods do not focus on aggregating representations from seen entities for unseen entities. Instead, they learn to encode entity-independent information to output entity embeddings. MorsE [Chen et al., 2022b] uses an entity initializer and a GNN modulator to model entity-independent meta-knowledge, producing high-quality embeddings for unseen entities, and learns such meta-knowledge by meta-learning. NodePiece [Galkin et al., 2022a] also treats connected relations as entity-independent information and tries an MLP and a Transformer to encode relational contexts for unseen entities. Unlike the above methods, which encode entities individually, INDIGO [Liu et al., 2021] encodes the support triples and query candidate triples into a node-annotated graph whose nodes correspond to pairs of entities, and uses a GNN to update node features that indicate the query triple predictions.

Encode from other information. In addition to using relevant triples as support sets for unseen entities, other types of information can also be useful for encoding unseen entities. The textual description is commonly used for encoding un-


seen entities. DKRL [Xie et al., 2016] embodies entity descriptions for knowledge graph embedding and jointly trains structure-based embeddings and description-based embeddings. ConMask [Shi and Weninger, 2018] applies relationship-dependent content masking to a given entity description and uses the representation extracted from the description to predict the most suitable entities from an existing entity set. OWE [Shah et al., 2019] trains traditional knowledge graph embeddings and word embeddings independently and learns a transformation from entities' textual information to their embeddings, enabling unseen entities with text to be mapped into the KGE space. KEPLER [Wang et al., 2021c] uses the same pre-trained language model to encode entities and texts, and jointly optimizes it with knowledge embedding and masked language modeling objectives. StAR [Wang et al., 2021a] is a structure-augmented text representation model that encodes triples with a siamese-style textual encoder. Since it encodes the text of entities, it can easily extrapolate to unseen entities using their text. BLP [Daza et al., 2021] treats a pre-trained language model as an entity encoder and finetunes it via link prediction. SimKGC [Wang et al., 2022b] uses contrastive learning with different negative samplings to improve the efficiency of text-based knowledge graph embedding. StATIK [Markowitz et al., 2022] combines a language model and a message-passing graph neural network as an encoder and uses TransE as the decoder to score triples.

Subgraph Predicting-based Entity Extrapolation

The aforementioned entity encoding-based methods commonly treat the head entity, relation, and tail entity in a triple individually. However, some research provides another view that does not explicitly encode entities. Instead, these methods treat the head and tail entity in a triple together and encode the relational subgraph between them. This perspective assumes that the semantics underlying the subgraph between two entities can be used to predict their relation. The ability to encode the subgraph of two entities can be extrapolated to unseen entities since subgraph structures are independent of specific entities. For subgraph predicting-based methods, the score function can be formulated as:

s(f(h, t, S^te), r)    (2)

where f is responsible for subgraph extraction and encoding; r denotes the relation embedding or relation-dependent parameters; and s is responsible for predicting whether the subgraph is related to a specific relation. This research line mainly encodes subgraphs for unseen entities based on triples from support sets, so these methods assume that support sets contain structural information about unseen entities.

One line of this research starts from GraIL [Teru et al., 2020], which uses a GNN to reason over the enclosing subgraph between target nodes and learns entity-independent relational semantics for entity extrapolation. Specifically, for predicting the relation between two unseen entities, GraIL first extracts the enclosing subgraph around them, then labels each entity in this subgraph based on the distance between that entity and the two unseen entities. Finally, it uses a GNN to score the subgraph with candidate relations. CoMPILE [Mai et al., 2021] proposes a node-edge communicative message-passing mechanism that replaces the original GNN to consider the importance of relations. TACT [Chen et al., 2021] considers correlations between relations in a subgraph and encodes a Relational Correlation Network (RCN) to enhance the encoding of the enclosing subgraph. ConGLR [Lin et al., 2022] formulates a context graph representing relational paths from a subgraph and then uses two GCNs to process it together with the enclosing subgraph. Since extracted enclosing subgraphs can be sparse and some surrounding relations are neglected, SNRI [Xu et al., 2022] leverages the complete neighbor relations of entities in a subgraph via neighboring relational features and neighboring relational paths. BertRL [Zha et al., 2022] leverages pre-trained language models to encode reasoning paths to enhance subgraph semantics. RMPI [Geng et al., 2023] utilizes the relation semantics defined in the KG's ontological schema [Wiharja et al., 2020] to extend to handling both unseen entities and relations.

The path between two entities can also be viewed as a subgraph and used to indicate the relation between them. PathCon [Wang et al., 2021b] proposes a relational message passing framework and uses it to encode relational contexts and relational paths for two given entities to predict the missing relation. NBFNet [Zhu et al., 2021] uses the Message and Aggregate operations from GNNs to parameterize the generalized Bellman-Ford algorithm for representing the paths between two entities. RED-GNN [Zhang and Yao, 2022] proposes a relational directed graph (r-digraph) to preserve the local information between entities and recursively constructs the r-digraph between one query and any candidate entity, making the reasoning process efficient.

Rule Learning-based Entity Extrapolation

Several works have explored learning rules from knowledge graphs, as these logical rules can inherently extrapolate to unseen entities since they are independent of specific entities. Rule learning-based methods can be divided into two categories. The purely symbolic methods generate rules from existing knowledge through statistics and filter them with predefined indicators. AMIE [Galárraga et al., 2013] proposes predefined indicators, including head coverage and confidence, to filter possible rules, which are generated with three different atom operators. RuleN [Meilicke et al., 2018] and AnyBURL [Meilicke et al., 2019] generate possible rules by sampling the path from the head entity to the tail entity of each triple and filtering the rules using confidence and head coverage. The other kind of method combines neural networks and symbolic rules. Neural LP [Yang et al., 2017] and DRUM [Sadeghian et al., 2019] are based on TensorLog and use neural networks to simulate rule paths. CBGNN [Yan et al., 2022] views logical rules as cycles from the perspective of algebraic topology and learns rules in the space of cycles.

3.2 Relation Extrapolation

Relation Encoding-based Relation Extrapolation

Similar to entity extrapolation, the shortcoming of conventional knowledge graph embedding methods on relation extrapolation is that they cannot give unseen relations reasonable embeddings. However, since some observed information


from the support set for unseen relations can be utilized, encoding such information to embed unseen relations is an intuitive solution. The score function of relation encoding-based methods is formulated as follows:

s(h, f(r, S^te), t)    (3)

where f is an encoder that transforms related support information for an unseen relation r into its embedding, and h and t are entity embeddings or entity-dependent parameters. Depending on the type of information used for relation encoding, we also categorize these methods into encoding from structural information and encoding from other information.

Encode from structural information. Methods in this category assume that there are some example triples (i.e., structural information) for unseen relations in the support set, and that the connected head and tail entities of unseen relations can reveal their semantics. These methods mainly focus on transferring representations from entities to unseen relations. MetaR [Chen et al., 2019] learns relation-specific meta information from support triples to replace the missing representations of unseen relations. More precisely, for a specific unseen relation, MetaR first encodes its entity pairs and outputs relation meta. Then, a rapid gradient descent step is applied to the relation meta following optimization-based meta-learning, which produces good generalization performance on that unseen relation. To leverage more structural information provided by the connected entities of unseen relations, GANA [Niu et al., 2021] uses a gated and attentive neighbor aggregator to obtain entity representations. These entity representations are used to generate general representations for unseen relations via an attentive Bi-LSTM encoder. Furthermore, it applies the training paradigm of optimization-based meta-learning with TransH as the score function.

Encode from other information. In addition to using structural information from relevant triples, methods in this category rely on support information for unseen relations from external resources, including textual descriptions and ontological schemas. ZSGAN [Qin et al., 2020] leverages Generative Adversarial Networks (GANs) to generate plausible relation embeddings for unseen relations conditioned on their encoded textual descriptions. Since a KG is often accompanied by an ontology as its schema, OntoZSL [Geng et al., 2021] is proposed to synthesize unseen relation embeddings conditioned on the embeddings of an ontological schema, which includes the semantics of the KG relations. DOZSL [Geng et al., 2022] proposes a property-guided disentangled ontology embedding method to extract more fine-grained inter-class relationships between seen and unseen relations in the ontology. It uses a GAN-based generative model and a GCN-based propagation model to integrate disentangled embeddings for generating embeddings for unseen relations. Besides leveraging generative models such as GANs to obtain plausible relation embeddings from support information, DMoG [Song et al., 2022] learns a linear mapping matrix to transform the encoded textual and ontological features, and HAPZSL [Li et al., 2022] uses an attention mechanism to encode relation descriptions into valid relation prototype representations.

Entity Pair Matching-based Relation Extrapolation

Another solution, instead of encoding the relation directly, is to encode the head and tail entity pairs of an unseen relation and then match these encoded entity pairs with entity pairs in the query set to predict whether they are connected by the same unseen relation. The score function of entity pair matching-based methods for (h, r, t) can be formulated as follows:

s(f(PAIR(r, S^te)), f(h, t)),  where PAIR(r, S^te) = {(h_i, t_i) | (h_i, r, t_i) ∈ S^te}    (4)

Here, PAIR(r, S^te) is used to get the head-tail entity pairs from the support set for the unseen relation, f encodes head-tail entity pairs, and s scores the entity pair matching. Since this research line involves encoding entity pairs for unseen relations, the support sets are commonly assumed to consist of relevant triples for these unseen relations.

GMatching [Xiong et al., 2018] uses entity embeddings and entity local graph structures to represent entity pairs and learns a metric model to discover more similar triples. FSRL [Zhang et al., 2020] uses a relation-aware heterogeneous neighbor encoder that considers the different impacts of an entity's neighbors, and a recurrent autoencoder aggregation network to aggregate multiple reference entity pairs in the support set. FAAN [Sheng et al., 2020] uses an adaptive neighbor encoder to encode entities and a Transformer to encode entity pairs; furthermore, an adaptive matching processor is introduced to attentively match a query entity pair to multiple support entity pairs. MetaP [Jiang et al., 2021] learns relation-specific meta patterns from the entity pairs of a relation; such patterns are captured by convolutional filters on (pre-trained or randomly initialized) representations of entity pairs. Furthermore, the subgraphs between entity pairs are also expressive. P-INT [Xu et al., 2021] leverages the paths between an entity pair to represent it and calculates the path similarity between support entity pairs and query entity pairs to predict the query triples about unseen relations. GraphANGEL [Jin et al., 2022] extracts 3-cycle and 4-cycle patterns to represent an unseen relation; specifically, for a query triple with an unseen relation, it extracts target patterns, supporting patterns, and refuting patterns for query entity pairs and calculates the similarity between the target patterns and both the supporting and refuting patterns. CSR [Huang et al., 2022] uses connection subgraphs to represent entity pairs. It finds the shared connection subgraph among the support triples and tests whether it connects query triples.

4 Benchmarks

In this section, we briefly describe some datasets used to evaluate models for knowledge extrapolation. The following four datasets are commonly used by different knowledge extrapolation models, and they have various assumptions for diverse application scenarios. We show the usage details in Table 1 and describe them as follows.

① WN11-{Head/Tail/Both}-{1,000/3,000/5,000} These are nine datasets derived from WordNet11, created by Hamaguchi et al. [2017]. These datasets are used to conduct entity extrapolation experiments, and they assume that

Dataset | Type | Proposed by | Used by
① | Ent | MEAN [Hamaguchi et al., 2017] | MEAN [Hamaguchi et al., 2017], LAN [Wang et al., 2019], VN Network [He et al., 2020], INDIGO [Liu et al., 2021]
② | Ent | GraIL [Teru et al., 2020] | GraIL [Teru et al., 2020], CoMPILE [Mai et al., 2021], TACT [Chen et al., 2021], ConGLR [Lin et al., 2022], SNRI [Xu et al., 2022], BertRL [Zha et al., 2022], RMPI [Geng et al., 2023], MorsE [Chen et al., 2022b], NodePiece [Galkin et al., 2022a], NBFNet [Zhu et al., 2021], RED-GNN [Zhang and Yao, 2022], INDIGO [Liu et al., 2021], RuleN [Meilicke et al., 2018], Neural LP [Yang et al., 2017], DRUM [Sadeghian et al., 2019], CBGNN [Yan et al., 2022]
③ | Rel | GMatching [Xiong et al., 2018] | GMatching [Xiong et al., 2018], MetaR [Chen et al., 2019], FSRL [Zhang et al., 2020], FAAN [Sheng et al., 2020], GANA [Niu et al., 2021], MetaP [Jiang et al., 2021], P-INT [Xu et al., 2021], CSR [Huang et al., 2022]
④ | Rel | ZSGAN [Qin et al., 2020] | ZSGAN [Qin et al., 2020], OntoZSL [Geng et al., 2021], DOZSL [Geng et al., 2022], DMoG [Song et al., 2022], HAPZSL [Li et al., 2022]

Table 1: The usage of four commonly used benchmarks described in §4. The column ‘Type’ indicates the dataset is used for entity or relation
extrapolation.

unseen entities are all connected to training entities, since triples that contain two unseen entities in support sets are discarded. Head/Tail/Both denotes the position of unseen entities in test triples, and 1,000/3,000/5,000 is the number of triples used for generating the unseen entities.

② {WN18RR/FB15k-237/NELL995}-{v1/2/3/4} These 12 datasets are generated by Teru et al. [2020] from WN18RR, FB15k-237, and NELL995. Each original dataset is sampled into four versions with an increasing number of entities and relations. In these datasets, the entity sets for training and testing are disjoint, so they evaluate entity extrapolation models under the assumption that unseen entities are not connected to training entities.

③ NELL-One/Wiki-One These two datasets were developed by Xiong et al. [2018], originally to evaluate the few-shot relation prediction task. The relation sets for training and testing are disjoint, so models evaluated on these datasets must be capable of relation extrapolation. For each unseen relation during the test, the number of support triples (i.e., k) is often specified, and the task is called k-shot link prediction.

④ NELL-ZS/Wiki-ZS These two datasets are presented by Qin et al. [2020]; ‘ZS’ denotes that they evaluate zero-shot learning for link prediction, i.e., relation extrapolation with information other than triples related to unseen relations. Each relation in these datasets has a textual description, and this information is viewed as the support set for the unseen relation. Furthermore, besides textual information, Geng et al. [2021] add ontological schemas to these two datasets as side information for relations.

Moreover, many other datasets have been provided by different works to demonstrate effectiveness in specific aspects. LAN [Wang et al., 2019] and VN Network [He et al., 2020] construct datasets from FB15k and YAGO37, respectively. GEN [Baek et al., 2020] and HRFN [Zhang et al., 2021] develop datasets to demonstrate link prediction on both seen and unseen entities. MaKEr [Chen et al., 2022a] and RMPI [Geng et al., 2023] construct datasets showing that they can handle both unseen entities and relations at test time. ARGCN [Cui et al., 2022] and LKGE [Cui et al., 2023] create datasets that simulate the growth of knowledge graphs with emerging entities and relations.

5 Comparison and Discussion

5.1 Assumption on Entity Extrapolation

As described in §3.1, there are two different assumptions about entity extrapolation. One assumption is that unseen entities in support sets are connected to seen entities (i.e., E^te ∩ E^tr ≠ ∅), while the other is that unseen entities form entirely new KGs in support sets and are not connected to seen entities (i.e., E^te ∩ E^tr = ∅). For convenience, we refer to these two assumptions as semi-entity extrapolation and fully-entity extrapolation. Generally, methods designed for fully-entity extrapolation can handle semi-entity extrapolation problems, but not vice versa. We discuss the ability of entity extrapolation methods to handle these two assumptions as follows.

Most semi-entity extrapolation models are entity encoding-based methods that encode unseen entities from structural information, since they often design modules for transferring knowledge from seen entities to unseen entities by aggregating representations from seen entities [Hamaguchi et al., 2017; Wang et al., 2019; Wang et al., 2022a]. Some other methods that encode entities from structural information can also handle fully-entity extrapolation by designing entity-independent encoding procedures [Chen et al., 2022b; Galkin et al., 2022a].

Furthermore, methods that encode unseen entities from other information (e.g., textual descriptions) can usually solve fully-entity extrapolation problems, since such side information is sufficient for encoding entity embeddings and the encoding is independent of specific entities [Wang et al., 2021a; Wang et al., 2022b]. Subgraph predicting-based methods and rule learning-based methods are also capable of tackling fully-entity extrapolation, since subgraphs and rules are entity-independent.
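The two assumptions of §5.1 can be checked mechanically from a benchmark's raw triples. The sketch below is illustrative only (the function names and toy triples are ours, not taken from any surveyed system): it computes the overlap between the training entity set and the entities appearing in support triples, which is non-empty under semi-entity extrapolation and empty under fully-entity extrapolation.

```python
# Illustrative sketch: decide whether a benchmark exercises semi- or
# fully-entity extrapolation. Triples are (head, relation, tail) tuples.

def entity_set(triples):
    """Collect all entities mentioned in a set of triples."""
    return {h for h, _, _ in triples} | {t for _, _, t in triples}

def extrapolation_setting(train_triples, support_triples):
    """Return 'semi' if support triples attach unseen entities to seen
    (training) entities, and 'fully' if the support set forms an entirely
    new KG whose entities are disjoint from the training entities."""
    seen = entity_set(train_triples)
    support_entities = entity_set(support_triples)
    return "semi" if seen & support_entities else "fully"

train = [("a", "r1", "b"), ("b", "r2", "c")]
semi_support = [("x", "r1", "b")]    # unseen 'x' attached to seen 'b'
fully_support = [("x", "r1", "y")]   # entirely new entities

print(extrapolation_setting(train, semi_support))   # -> semi
print(extrapolation_setting(train, fully_support))  # -> fully
```

Under this check, datasets of family ① fall into the semi setting, while the ② family satisfies the fully-disjoint condition.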


5.2 Information in Support Set

Various types of information have been explored to build support sets for unseen elements, including triples, textual descriptions, and ontologies. Here, we compare these three widely used types of support information.

Triples, which provide structural information, are an intuitive type of support information for unseen elements, since unseen elements usually emerge together with other elements in the form of triples rather than alone. Knowledge from seen elements provided by triples can be utilized by unseen elements. For example, in entity extrapolation, surrounding seen relations can provide type information for unseen entities [Chen et al., 2022b]; in relation extrapolation, connected seen entities can reveal the characteristics of unseen relations [Xiong et al., 2018; Chen et al., 2019].

Textual descriptions are also common for KGs, since many KGs are constructed from text data. Textual descriptions naturally provide the ability to extrapolate to unseen elements, and the typical procedure is to use text encoders (e.g., pre-trained language models) to transform text descriptions into embeddings. That is, the encoder f in Eq. (1) or Eq. (3) can be treated as a text encoder. Both entity extrapolation [Wang et al., 2021a; Daza et al., 2021] and relation extrapolation [Qin et al., 2020] can benefit from textual descriptions.

Ontologies are typically used as prior knowledge about the correlation between seen and unseen elements and have been widely used to handle unseen relations in existing works. An ontology is often represented as a graph encompassing relation hierarchies and constraints on relation domains and ranges. The embeddings of unseen relations can be generated from an ontology with various techniques, such as GANs [Geng et al., 2021] or disentangled representation learning [Geng et al., 2022].

Besides using one kind of support information alone, some methods apply different kinds together. For example, BertRL [Zha et al., 2022] uses both subgraph structures and textualized paths to predict relations between unseen entities, exploiting both structural triples and textual descriptions. Similarly, RMPI [Geng et al., 2023] uses subgraphs and ontological schemas together to tackle both unseen entities and relations.

6 Future Prospects

6.1 Diverse Application

Most existing knowledge extrapolation methods are evaluated by simple link prediction on test sets. Even though the link prediction task can show the effectiveness of models and help knowledge graph completion, it is valuable to explore how to generalize to unseen KG elements in diverse applications. GNN-QE [Galkin et al., 2022b] studies answering logical queries, expressed in a subset of first-order logic, with unseen entities at test time. Meanwhile, ContEA [Wang et al., 2022c] targets the entity alignment task under the growth of KGs. We argue that besides these in-KG applications, more common out-of-KG tasks, such as KG-enhanced question answering [Chen et al., 2022b], including those based on LLMs [Hu et al., 2023], can be explored.

6.2 Multi-modal Support Information

Multi-modal knowledge graphs are a current research topic that has been explored in recent literature. While existing knowledge extrapolation methods have primarily focused on using natural language as support information for unseen elements, few works address the potential of utilizing visual information. However, we believe that images can also help generalize KGE to unseen elements, since they can be universally understood by pre-trained encoders. Additionally, hyper-relational KGs [Ali et al., 2021], where triples can be instantiated with a set of qualifiers, can provide different modal information.

6.3 Entity and Relation Extrapolation

Existing research on knowledge extrapolation primarily focuses on solving entity extrapolation and relation extrapolation separately, but in real-world applications, unseen entities and relations may emerge simultaneously. One feasible solution is to effectively integrate methods for entity extrapolation and relation extrapolation, and the existing literature has made some attempts. Specifically, MaKEr [Chen et al., 2022a] uses meta-learning to learn to encode unseen entities and relations based on a set of training tasks with simulated unseen entities and relations. Furthermore, by combining the capability of entity extrapolation from subgraph encoding with relation extrapolation based on the ontology of relations, RMPI [Geng et al., 2023] provides a nascent exemplar of this perspective.

6.4 Temporal and Lifelong Setting

In practical applications, some KGs include temporal constraints, which necessitates considering temporal information when scoring a triple [Bourgaux et al., 2021; Liang et al., 2022]. Temporal KGs also face the challenge of emerging elements due to their dynamic nature. To address this issue, FILT [Ding et al., 2022] defines a problem of entity extrapolation in temporal KGs and utilizes a time-aware graph encoder and entity concept information to obtain embeddings for unseen entities. Additionally, existing knowledge extrapolation works typically assume one-time extrapolation, where all unseen elements emerge simultaneously in a single batch. However, recent literature, such as ARGCN [Cui et al., 2022] and LKGE [Cui et al., 2023], considers scenarios where unseen elements emerge in a multi-batch and lifelong manner.

7 Conclusion

Recent years have seen an increasing amount of research on generalizing to unseen elements in KGs from diverse perspectives. In this paper, we provide a comprehensive survey of these works and summarize them under a general set of terminologies. We categorize existing methods using our proposed systematic taxonomy and list commonly used benchmarks along with the methods that employ them. We also provide method comparisons and discussions from perspectives commonly mentioned in the existing literature. Finally, we suggest several potential research directions. We believe that this exploration can provide a clear overview of the field and facilitate future research.


Acknowledgments

This work is supported by the National Natural Science Foundation of China (NSFCU19B2027, NSFC91846204), the joint project DH-2022ZY0012 from Donghai Lab, and the Chang Jiang Scholars Program (J2019032). Mingyang Chen is supported by the China Scholarship Council (No. 202206320309).

Contribution Statement

Mingyang Chen and Wen Zhang contributed equally and share first authorship.

References

[Albooyeh et al., 2020] M. Albooyeh, R. Goel, and S. M. Kazemi. Out-of-sample representation learning for knowledge graphs. In EMNLP, 2020.
[Ali et al., 2021] M. Ali, M. Berrendorf, M. Galkin, V. Thost, T. Ma, V. Tresp, and J. Lehmann. Improving inductive link prediction using hyper-relational facts. In ISWC, 2021.
[Baek et al., 2020] J. Baek, D. Bok Lee, and S. Ju Hwang. Learning to extrapolate knowledge: Transductive few-shot out-of-graph link prediction. In NeurIPS, 2020.
[Bhowmik and de Melo, 2020] R. Bhowmik and G. de Melo. Explainable link prediction for emerging entities in knowledge graphs. In ISWC, 2020.
[Bourgaux et al., 2021] C. Bourgaux, A. Ozaki, and J. Z. Pan. Geometric models for (temporally) attributed description logics. In DL, 2021.
[Chen et al., 2019] M. Chen, W. Zhang, W. Zhang, Q. Chen, and H. Chen. Meta relational learning for few-shot link prediction in knowledge graphs. In EMNLP, 2019.
[Chen et al., 2021] J. Chen, H. He, F. Wu, and J. Wang. Topology-aware correlations between relations for inductive link prediction in knowledge graphs. In AAAI, 2021.
[Chen et al., 2022a] M. Chen, W. Zhang, Z. Yao, X. Chen, M. Ding, F. Huang, and H. Chen. Meta-learning based knowledge extrapolation for knowledge graphs in the federated setting. In IJCAI, 2022.
[Chen et al., 2022b] M. Chen, W. Zhang, Y. Zhu, H. Zhou, Z. Yuan, C. Xu, and H. Chen. Meta-knowledge transfer for inductive knowledge graph embedding. In SIGIR, 2022.
[Cui et al., 2022] Y. Cui, Y. Wang, Z. Sun, W. Liu, Y. Jiang, K. Han, and W. Hu. Inductive knowledge graph reasoning for multi-batch emerging entities. In CIKM, 2022.
[Cui et al., 2023] Y. Cui, Y. Wang, Z. Sun, W. Liu, Y. Jiang, K. Han, and W. Hu. Lifelong embedding learning and transfer for growing knowledge graphs. In AAAI, 2023.
[Daza et al., 2021] D. Daza, M. Cochez, and P. Groth. Inductive entity representations from text via link prediction. In WWW, 2021.
[Ding et al., 2022] Z. Ding, J. Wu, B. He, Y. Ma, Z. Han, and V. Tresp. Few-shot inductive learning on temporal knowledge graphs using concept-aware information. In AKBC, 2022.
[Galárraga et al., 2013] L. A. Galárraga, C. Teflioudi, K. Hose, and F. M. Suchanek. AMIE: Association rule mining under incomplete evidence in ontological knowledge bases. In WWW, 2013.
[Galkin et al., 2022a] M. Galkin, E. G. Denis, J. Wu, and W. L. Hamilton. NodePiece: Compositional and parameter-efficient representations of large knowledge graphs. In ICLR, 2022.
[Galkin et al., 2022b] M. Galkin, Z. Zhu, H. Ren, and J. Tang. Inductive logical query answering in knowledge graphs. In NeurIPS, 2022.
[Geng et al., 2021] Y. Geng, J. Chen, Z. Chen, J. Z. Pan, Z. Ye, Z. Yuan, Y. Jia, and H. Chen. OntoZSL: Ontology-enhanced zero-shot learning. In WWW, 2021.
[Geng et al., 2022] Y. Geng, J. Chen, W. Zhang, Y. Xu, Z. Chen, J. Z. Pan, Y. Huang, F. Xiong, and H. Chen. Disentangled ontology embedding for zero-shot learning. In KDD, 2022.
[Geng et al., 2023] Y. Geng, J. Chen, W. Zhang, J. Z. Pan, M. Chen, H. Chen, and S. Jiang. Relational message passing for fully inductive knowledge graph completion. In ICDE, 2023.
[Hamaguchi et al., 2017] T. Hamaguchi, H. Oiwa, M. Shimbo, and Y. Matsumoto. Knowledge transfer for out-of-knowledge-base entities: A graph neural network approach. In IJCAI, 2017.
[He et al., 2020] Y. He, Z. Wang, P. Zhang, Z. Tu, and Z. Ren. VN Network: Embedding newly emerging entities with virtual neighbors. In CIKM, 2020.
[Hu et al., 2023] N. Hu, Y. Wu, G. Qi, D. Min, J. Chen, J. Z. Pan, and Z. Ali. An empirical study of pre-trained language models in simple knowledge graph question answering. CoRR, 2023.
[Huang et al., 2022] Q. Huang, H. Ren, and J. Leskovec. Few-shot relational reasoning via connection subgraph pretraining. In NeurIPS, 2022.
[Jiang et al., 2021] Z. Jiang, J. Gao, and X. Lv. MetaP: Meta pattern learning for one-shot knowledge graph completion. In SIGIR, 2021.
[Jin et al., 2022] J. Jin, Y. Wang, K. Du, W. Zhang, Z. Zhang, D. Wipf, Y. Yu, and Q. Gan. Inductive relation prediction using analogy subgraph embeddings. In ICLR, 2022.
[Li et al., 2022] X. Li, J. Ma, J. Yu, T. Xu, M. Zhao, H. Liu, M. Yu, and R. Yu. HAPZSL: A hybrid attention prototype network for knowledge graph zero-shot relational learning. Neurocomputing, 2022.
[Liang et al., 2022] K. Liang, L. Meng, M. Liu, Y. Liu, W. Tu, S. Wang, S. Zhou, X. Liu, and F. Sun. Reasoning over different types of knowledge graphs: Static, temporal and multi-modal. CoRR, 2022.
[Lin et al., 2022] Q. Lin, J. Liu, F. Xu, et al. Incorporating context graph with logical reasoning for inductive relation prediction. In SIGIR, 2022.
[Liu et al., 2021] S. Liu, B. C. Grau, I. Horrocks, and E. V. Kostylev. INDIGO: GNN-based inductive knowledge graph completion using pair-wise encoding. In NeurIPS, 2021.
[Mai et al., 2021] S. Mai, S. Zheng, Y. Yang, and H. Hu. Communicative message passing for inductive relation reasoning. In AAAI, 2021.
[Markowitz et al., 2022] E. Markowitz, K. Balasubramanian, M. Mirtaheri, M. Annavaram, A. Galstyan, and G. Ver Steeg. StATIK: Structure and text for inductive knowledge graph completion. In NAACL-HLT, 2022.
[Meilicke et al., 2018] C. Meilicke, M. Fink, Y. Wang, D. Ruffinelli, R. Gemulla, and H. Stuckenschmidt. Fine-grained evaluation of rule- and embedding-based systems for knowledge graph completion. In ISWC, 2018.
[Meilicke et al., 2019] C. Meilicke, M. W. Chekol, D. Ruffinelli, et al. Anytime bottom-up rule learning for knowledge graph completion. In IJCAI, 2019.
[Niu et al., 2021] G. Niu, Y. Li, C. Tang, R. Geng, J. Dai, Q. Liu, H. Wang, J. Sun, F. Huang, and L. Si. Relational learning with gated and attentive neighbor aggregator for few-shot knowledge graph completion. In SIGIR, 2021.
[Pan et al., 2017] J. Z. Pan, G. Vetere, J. M. Gomez-Perez, and H. Wu, editors. Exploiting Linked Data and Knowledge Graphs for Large Organisations. Springer, 2017.
[Pan, 2009] J. Z. Pan. Resource Description Framework. In Handbook of Ontologies. 2009.
[Qin et al., 2020] P. Qin, X. Wang, W. Chen, C. Zhang, W. Xu, and W. Y. Wang. Generative adversarial zero-shot relational learning for knowledge graphs. In AAAI, 2020.
[Sadeghian et al., 2019] A. Sadeghian, M. Armandpour, P. Ding, and D. Wang. DRUM: End-to-end differentiable rule mining on knowledge graphs. In NeurIPS, 2019.
[Shah et al., 2019] H. Shah, J. Villmow, A. Ulges, U. Schwanecke, and F. Shafait. An open-world extension to knowledge graph completion models. In AAAI, 2019.
[Sheng et al., 2020] J. Sheng, S. Guo, Z. Chen, et al. Adaptive attentional network for few-shot knowledge graph completion. In EMNLP, 2020.
[Shi and Weninger, 2018] B. Shi and T. Weninger. Open-world knowledge graph completion. In AAAI, 2018.
[Song et al., 2022] R. Song, S. He, S. Zheng, S. Gao, K. Liu, Z. Yu, and J. Zhao. Decoupling mixture-of-graphs: Unseen relational learning for knowledge graph completion by fusing ontology and textual experts. In COLING, 2022.
[Teru et al., 2020] K. K. Teru, E. G. Denis, and W. L. Hamilton. Inductive relation prediction by subgraph reasoning. In ICML, 2020.
[Wang et al., 2017] Q. Wang, Z. Mao, B. Wang, and L. Guo. Knowledge graph embedding: A survey of approaches and applications. TKDE, 2017.
[Wang et al., 2019] P. Wang, J. Han, C. Li, and R. Pan. Logic attention based neighborhood aggregation for inductive knowledge graph embedding. In AAAI, 2019.
[Wang et al., 2021a] B. Wang, T. Shen, G. Long, T. Zhou, Y. Wang, and Y. Chang. Structure-augmented text representation learning for efficient knowledge graph completion. In WWW, 2021.
[Wang et al., 2021b] H. Wang, H. Ren, and J. Leskovec. Relational message passing for knowledge graph completion. In KDD, 2021.
[Wang et al., 2021c] X. Wang, T. Gao, Z. Zhu, Z. Liu, J. Li, and J. Tang. KEPLER: A unified model for knowledge embedding and pre-trained language representation. TACL, 2021.
[Wang et al., 2022a] C. Wang, X. Zhou, S. Pan, L. Dong, Z. Song, and Y. Sha. Exploring relational semantics for inductive knowledge graph completion. In AAAI, 2022.
[Wang et al., 2022b] L. Wang, W. Zhao, Z. Wei, and J. Liu. SimKGC: Simple contrastive knowledge graph completion with pre-trained language models. In ACL, 2022.
[Wang et al., 2022c] Y. Wang, Y. Cui, W. Liu, et al. Facing changes: Continual entity alignment for growing knowledge graphs. In ISWC, 2022.
[Wiharja et al., 2020] K. Wiharja, J. Z. Pan, M. J. Kollingbaum, and Y. Deng. Schema aware iterative knowledge graph completion. Journal of Web Semantics, 2020.
[Xie et al., 2016] R. Xie, Z. Liu, J. Jia, H. Luan, and M. Sun. Representation learning of knowledge graphs with entity descriptions. In AAAI, 2016.
[Xiong et al., 2018] W. Xiong, M. Yu, S. Chang, X. Guo, and W. Y. Wang. One-shot relational learning for knowledge graphs. In EMNLP, 2018.
[Xu et al., 2021] J. Xu, J. Zhang, X. Ke, Y. Dong, H. Chen, C. Li, and Y. Liu. P-INT: A path-based interaction model for few-shot knowledge graph completion. In EMNLP (Findings), 2021.
[Xu et al., 2022] X. Xu, P. Zhang, Y. He, et al. Subgraph neighboring relations infomax for inductive link prediction on knowledge graphs. In IJCAI, 2022.
[Yan et al., 2022] Z. Yan, T. Ma, L. Gao, Z. Tang, and C. Chen. Cycle representation learning for inductive relation prediction. In ICML, 2022.
[Yang et al., 2017] F. Yang, Z. Yang, and W. W. Cohen. Differentiable learning of logical rules for knowledge base reasoning. In NeurIPS, 2017.
[Zha et al., 2022] H. Zha, Z. Chen, and X. Yan. Inductive relation prediction by BERT. In AAAI, 2022.
[Zhang and Yao, 2022] Y. Zhang and Q. Yao. Knowledge graph reasoning with relational digraph. In WWW, 2022.
[Zhang et al., 2020] C. Zhang, H. Yao, C. Huang, M. Jiang, Z. Li, and N. V. Chawla. Few-shot knowledge graph completion. In AAAI, 2020.
[Zhang et al., 2021] Y. Zhang, W. Wang, W. Chen, J. Xu, A. Liu, and L. Zhao. Meta-learning based hyper-relation feature modeling for out-of-knowledge-base embedding. In CIKM, 2021.
[Zhang et al., 2023] W. Zhang, Z. Yao, M. Chen, Z. Huang, and H. Chen. NeuralKG-ind: A Python library for inductive knowledge graph representation learning. In SIGIR, 2023.
[Zhu et al., 2021] Z. Zhu, Z. Zhang, L. P. Xhonneux, and J. Tang. Neural Bellman-Ford networks: A general graph neural network framework for link prediction. In NeurIPS, 2021.
