Introduction
For real-world applications, there exist many irregular data structures that can be effectively modeled via the graph structure. To extract useful information from graph-structured data, graph neural networks (GNNs) have been proposed to address various learning tasks. Existing GNNs have achieved promising performance in real applications such as node classification (Xiao et al. 2022; Tian et al. 2023), community detection (Qiu et al. 2022), traffic forecasting (Jiang and Luo 2022) and biological problems (Bongini et al. 2023).

Figure 1: Intuitive comparison between our work and the existing graph NAS work. (a) Graph NAS is dedicated to the study of normal graphs. (b) Our work is based on hypergraphs.

Graph neural networks model pairwise connections between two data samples via a normal graph. However, the data structures in real-world tasks may involve relations beyond pairwise ones, which cannot be effectively modeled by normal graphs. Unlike an edge in a normal graph, a hyperedge in a hypergraph can connect an arbitrary number of vertices. Thus, hypergraph-based learning methods are more flexible and natural for extracting useful information from complex graph data, and they are attracting increasing attention from researchers.

In recent years, hypergraph neural networks (HGNNs) have become a popular learning tool for extracting complex patterns from non-Euclidean data. The first such model was proposed in (Feng et al. 2019), which designs a hyperedge convolution operation to leverage the high-order correlations across data. Since then, a variety of hypergraph neural networks have been proposed for different learning tasks, including but not limited to image retrieval (Zeng et al. 2023), the quadratic assignment problem (Wang, Yan, and Yang 2021), biomedical science (Klimm, Deane, and Reinert 2021; Saifuddin et al. 2022), keypoint matching (Kim et al. 2022) and node classification (Bai, Zhang, and Torr 2021; Gao et al. 2022).

Although a lot of progress has been made in the literature,
it is still intractable to design an effective hypergraph network in practice. Similar to CNNs, which rely heavily on the design of their architectures, the results of hypergraph neural networks primarily depend on the architecture design, including vertex feature aggregation (Arya et al. 2020) and hyperedge feature aggregation (Jiang et al. 2019; Gao et al. 2022). Obtaining the optimal architecture usually involves thousands of reiterative cross-validation steps and requires extensive domain knowledge and expert effort, which makes it a labor-intensive task.

Until very recently, neural architecture search (NAS) (Zoph and Le 2017) has become an effective tool to address the aforementioned issue. Through learning in expressive search spaces with efficient search strategies, automatic graph learning via NAS has achieved promising progress on various graph data analysis tasks (Gao et al. 2021; Huan, Quanming, and Weiwei 2021; Zheng et al. 2023; Gao et al. 2023). The success of NAS in deriving optimal neural architectures for GNNs motivates us to use NAS to search for the best architectures for HGNNs. To the best of our knowledge, there is still a research gap in developing hypergraph NAS for automatic hypergraph learning. Perhaps the most direct approach to implementing hypergraph NAS is to employ the NAS methods commonly used for normal graphs, but this straightforward solution incurs some issues. First, the inherent structural differences between normal graphs and hypergraphs make graph-based search spaces unsuitable for hypergraphs, as shown in Fig. 1. The key to hypergraph architecture design lies in an effective way to aggregate vertex and hyperedge features. Unlike graphs, which only allow two nodes to connect, hypergraphs allow an arbitrary number of nodes to form a hyperedge, so graph-based feature aggregation methods cannot be used directly. Second, simply utilizing existing general architecture selection strategies limits search performance and results in suboptimal HGNN architectures.

To address the above challenges, we propose a novel hypergraph neural architecture search method, namely HyperNAS, for automatic hypergraph learning. The proposed model defines a search space suitable for hypergraphs, which can well emulate artificially designed HGNN architectures. HyperNAS then designs a differentiable search algorithm and adopts the advanced one-shot NAS paradigm to train a supernet containing all candidate architectures. Furthermore, a hypergraph structure-aware distance criterion is introduced as a guideline for obtaining an optimal hypergraph architecture. Extensive experiments on benchmark graph and hypergraph datasets demonstrate the effectiveness of the proposed framework. Experimental results on the transfer learning task further demonstrate the power of HyperNAS. Our contributions are summarized as follows:

1. We propose a hypergraph neural architecture search method, termed HyperNAS, to enable automatic hypergraph learning. To the best of our knowledge, this is the first attempt to apply NAS to hypergraphs.
2. By designing a search space suitable for hypergraphs, HyperNAS emulates existing human-designed HGNN architectures. To select the optimal HGNN architecture guided by the hypergraph structure, we derive a hypergraph structure-aware distance criterion to obtain a powerful HGNN architecture in a leave-one-out manner.
3. Extensive experimental results on benchmark graph and hypergraph datasets demonstrate that the proposed method is capable of designing optimal neural architectures that outperform manually designed graph and hypergraph architectures as well as graph NAS methods.

Related Work

Hypergraph Neural Networks

Existing hypergraph neural architectures mainly contain three types of operators: hypergraph construction, vertex feature aggregation, and hyperedge feature aggregation. Inspired by convolutional neural networks, the hypergraph neural network (HGNN) (Feng et al. 2019) is the first deep learning model for hypergraphs; it leverages the hypergraph Laplacian to represent hypergraphs from a spectral graph perspective. (Zhang, Zou, and Ma 2020; Bai, Zhang, and Torr 2021) generalize the convolutional operation or attention mechanism to hypergraphs. HGNN+ (Gao et al. 2022) enables the learning of optimal representations in a single hypergraph framework by bridging multi-modal/multi-type data and hyperedge groups.
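To make the spectral hyperedge convolution concrete, the following is a minimal PyTorch-style sketch. It assumes the commonly used normalized propagation form X' = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Θ; the toy hypergraph, dimensions, and weights are illustrative, not implementation details of the cited papers.

```python
import torch

def hgnn_conv(X, H, w_e, Theta):
    # One HGNN-style hyperedge convolution:
    #   X' = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Theta
    # X: (n, d) vertex features, H: (n, m) incidence matrix,
    # w_e: (m,) hyperedge weights, Theta: (d, d') projection matrix.
    Dv_inv_sqrt = torch.diag((H * w_e).sum(dim=1).pow(-0.5))  # weighted vertex degrees
    De_inv = torch.diag(H.sum(dim=0).pow(-1.0))               # hyperedge degrees
    W = torch.diag(w_e)
    return Dv_inv_sqrt @ H @ W @ De_inv @ H.t() @ Dv_inv_sqrt @ X @ Theta

# Toy hypergraph: 5 vertices, 3 hyperedges, 8-dim features projected to 4 dims.
H = torch.tensor([[1., 0., 0.],
                  [1., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 1.],
                  [0., 0., 1.]])
X = torch.randn(5, 8)
out = torch.relu(hgnn_conv(X, H, torch.ones(3), torch.randn(8, 4)))  # (5, 4)
```

The incidence matrix H takes the place of the adjacency matrix of a normal graph, which is precisely the structural difference that motivates a hypergraph-specific search space.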
On the other hand, some studies are devoted to the dynamic modification of the hypergraph structure for feature embedding. DHGNN (Jiang et al. 2019) utilizes a k-means clustering strategy to update the hypergraph structure based on local and global features, respectively. (Yin et al. 2022) extracts features of historical context content to provide guidance for dynamic hypergraph construction. (Yao et al. 2022) introduces the attention mechanism to achieve alternate updates of vertices and hyperedges. Despite the desirable success of HGNNs, substantial domain knowledge is required to manually design their architectures. In this paper, we instead utilize NAS to search for feature aggregation operators for automatic hypergraph learning.

Neural Architecture Search

NAS-RL (Zoph and Le 2017) and MetaQNN (Baker et al. 2016) are considered the pioneers in the field of NAS; they leverage reinforcement learning (RL) to search for suitable network architectures. However, these methods spend hundreds of GPU days or even more computing resources on searching architectures. Darts (Liu, Simonyan, and Yang 2018) exploits differentiable NAS by relaxing discrete architectures into a continuous space and jointly learning supernet weights and architecture weights, which greatly reduces the amount of computation and speeds up the search. Furthermore, some works adopt evolutionary algorithms (EA) (Real et al. 2019), Bayesian optimization (BO) (White, Neiswanger, and Savani 2021), random search (Xie et al. 2018) and hybrid strategies (Yang et al. 2020). With the improvement of search efficiency, NAS has been applied to object detection (Guo et al. 2020), image classification (He et al. 2023), text-to-image synthesis (Li et al. 2022) and other fields.
Figure 2: An illustration of the proposed HyperNAS framework. (Best viewed in color) The supernet is generated based on the
constructed search space. Furthermore, the hypergraph structure is used to guide the architecture selection process, resulting in
a superior architecture.
With the guidance of NAS, many studies have focused on extending neural architecture search to GNNs. GraphNAS (Gao et al. 2021) designs a search space that includes operators from state-of-the-art GNNs and leverages reinforcement learning to solve the challenging problem of applying NAS to graphs. Concurrently, researchers have used further search strategies in the field of graph NAS, such as Bayesian optimization (Yoon et al. 2020), evolutionary learning (Li and King 2020), and random search (You, Ying, and Leskovec 2020). SANE (Huan, Quanming, and Weiwei 2021) uses a gradient-based search strategy, which greatly improves the search efficiency. (Zheng et al. 2023; Gao et al. 2023) further apply NAS to heterophilic graphs.

In summary, considering that NAS-based search has achieved satisfactory results for CNNs, RNNs, and GNNs, we attempt to apply NAS to the design of hypergraph neural architectures in this article.

Methodology

Definitions and Notations

A hypergraph is an extension of a graph, and a normal graph is a special case of a hypergraph. An edge in a graph connects only two vertices, whereas each hyperedge in a hypergraph can connect an arbitrary number of vertices. A hypergraph G is a pair G = (V, E), where V = {v_1, ..., v_n} is a set of elements, termed vertices, and E = {e_i = (v_1, ..., v_k)} is a set of non-empty subsets of V, called hyperedges. k denotes the number of vertices in a hyperedge and d the dimension of the vertex features. A hypergraph G can be described by an |V| × |E| incidence matrix H. D_v and D_e denote the diagonal matrices of vertex degrees and hyperedge degrees, respectively. Each hyperedge is assigned a weight w_e. The feature embedding is represented as X = {x_1, ..., x_n}, where x_i (i = 1, ..., n) is the feature of the i-th sample. X_v denotes the vertex features and X_e the hyperedge features. For k vertices, the k × k transformation matrix learned from the vertex features by a multi-layer perceptron (MLP) is M_t, which takes into account the information of both the vertices and the channels.

Search Space

Fig. 2 shows the framework of the proposed model. Although a graph is a special case of a hypergraph, the search spaces of previously proposed graph NAS methods cannot be directly applied to hypergraph NAS. Thus, it is crucial to design a search space suitable for hypergraph neural architecture search. As shown in Table 1, in order to design an expressive search space suitable for hypergraphs, we focus on three key parts: vertex aggregation, hyperedge aggregation, and skip-connection aggregation, introduced as follows (a code sketch of the two aggregation steps is given after the list):

• Vertex Aggregation: The feature of a hyperedge is obtained by aggregating the features of the vertices in the hyperedge. Specifically, the hyperedge feature can be calculated by

X_e = conv(Merg(X_v^{(1)}, ..., X_v^{(k)})),   (1)

where X_v^{(i)} is the feature of the i-th vertex in a hyperedge. The Merg(·) mechanism merges the messages of all the vertices, and the conv(·) operator indicates that a 1-dimensional convolution is used to compact the derived result, as shown in Fig. 3. We denote the set of vertex aggregators by O_v.

• Hyperedge Aggregation: We regard each vertex as a center point c and aggregate the hyperedge features associated with it to obtain the high-order feature of c, denoted as X_h. The attention mechanism (Kim et al. 2020) is employed to generate the weights for each hyperedge in different ways. The high-order feature is calculated as

X_h = \sum_{i=0}^{m} w_e^{(i)} X_e^{(i)},   (2)
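A minimal sketch of the two aggregation steps in Eqs. (1) and (2) follows (PyTorch-style; the choice of Merg as stacking, the 1-D convolution configuration, and the attention scoring function are illustrative assumptions rather than the exact HyperNAS operators):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 8  # vertex feature dimension

# Eq. (1): vertex aggregation. Here Merg(.) stacks the k vertex features of a
# hyperedge and conv(.) compacts the stack with a 1-D convolution.
def vertex_aggregation(vertex_feats, conv):
    # vertex_feats: (k, d) features of the k vertices in one hyperedge
    merged = vertex_feats.t().unsqueeze(0)       # (1, d, k) stacked vertex messages
    return conv(merged).squeeze(0).squeeze(-1)   # (d,) hyperedge feature X_e

# Eq. (2): hyperedge aggregation. For a center vertex c, the m incident
# hyperedge features are combined with attention weights w_e^(i).
def hyperedge_aggregation(center_feat, hyperedge_feats, att):
    # center_feat: (d,), hyperedge_feats: (m, d)
    pairs = torch.cat([center_feat.expand_as(hyperedge_feats), hyperedge_feats], dim=-1)
    w = F.softmax(att(pairs).squeeze(-1), dim=0)           # attention weight per hyperedge
    return (w.unsqueeze(-1) * hyperedge_feats).sum(dim=0)  # (d,) high-order feature X_h

k, m = 3, 4
conv = nn.Conv1d(d, d, kernel_size=k)  # kernel spans the k vertices of this toy hyperedge
att = nn.Linear(2 * d, 1)              # scores a (center, hyperedge) feature pair
X_e = vertex_aggregation(torch.randn(k, d), conv)
X_h = hyperedge_aggregation(torch.randn(d), torch.randn(m, d), att)
```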
Type                    Method                                    Accuracy (%)
                                                                  Cora   Citeseer   Pubmed
Human-designed models   GCN (Welling and Kipf 2017)               81.4   70.9       79.0
                        GAT (Velickovic et al. 2017)              83.0   72.5       79.0
                        HGNN (Feng et al. 2019)                   81.6   71.9       80.1
                        DHGNN (Jiang et al. 2019)                 82.5   70.0       79.9
                        HCHA (Bai, Zhang, and Torr 2021)          82.7   71.2       78.4
                        HGNN+ (Gao et al. 2022)                   83.3   73.0       80.7
Graph NAS               GraphNAS (Gao et al. 2021)                83.2   73.5       80.3
                        SANE (Huan, Quanming, and Weiwei 2021)    83.6   73.9       81.0
                        HGNAS++ (Gao et al. 2023)                 83.5   73.8       81.1
Hypergraph NAS          Random (ours)                             82.8   73.6       80.7
                        HyperNAS-RL (ours)                        83.3   73.7       80.9
                        HyperNAS (ours)                           83.9   74.1       81.3

Table 4: Comparison of accuracies on citation networks. "Random" and "HyperNAS-RL" represent two variants of HyperNAS.
LR (%)   #Train   Accuracy (%)
                  GCN    GAT    HGNN   DHGNN   HyperNAS
std      140      81.4   83.0   81.6   82.5    83.9
2        54       69.6   74.8   75.4   76.9    79.1
5.2      140      77.8   79.4   79.7   80.2    82.4
10       270      79.9   81.5   80.0   81.6    84.5
20       540      81.4   83.5   80.1   83.6    84.9
30       812      81.9   84.5   82.0   85.0    85.6
44       1200     82.0   85.2   81.9   85.6    86.1

Table 5: Node classification accuracies for different splits on Cora. "LR" represents the label rate, "#Train" the number of training samples, and "std" the Cora standard split. The standard split uses a fixed training set, whereas the training sets of the other experiments are randomly selected. Regardless of how the training set is chosen, all experiments share the same validation and test sets.
Method                                    Accuracy (%)
                                          Coauthor-CS   Photos   Computers
GCN (Welling and Kipf 2017)               85.8          92.4     75.3
GAT (Velickovic et al. 2017)              85.9          91.0     76.1
GMI (Peng et al. 2020)                    83.9          91.7     78.2
MVGRL (Hassani and Khasahmadi 2020)       86.1          89.7     79.6
GCA (Zhu et al. 2021)                     89.4          91.8     81.5
DHGNN (Jiang et al. 2019)                 89.1          92.1     81.9
SANE (Huan, Quanming, and Weiwei 2021)    89.4          92.5     82.1
HyperNAS (ours)                           89.7          92.7     82.5

Table 7: Performance comparisons of transferring HyperNAS architectures designed on the citation networks to other datasets.
Selection strategy        Accuracy (%)
                          Cora   Citeseer   Pubmed
Weight size-based         83.5   73.8       80.9
Validation loss-based     83.3   73.6       80.8
Hyper-guided (ours)       83.9   74.1       81.3
Acknowledgements

This work was supported by National Key R&D Program of China (No. 2022ZD0118202), in part by the National Natural Science Foundation of China (Nos. 62072386, U21B2027, 61972186, U23A20388 and 62266028), in part by Yunnan Provincial Major Science and Technology (Nos. 202302AD080003, 202202AD080003 and 202303AP140008), in part by the General Projects of Basic Research in Yunnan Province (Nos. 202301AS070047, 202301AT070471). We also thank Yayao Hong for valuable discussions.

References

Arya, D.; Gupta, D. K.; Rudinac, S.; and Worring, M. 2020. Hypersage: Generalizing inductive representation learning on hypergraphs. arXiv preprint arXiv:2010.04558.

Bai, S.; Zhang, F.; and Torr, P. H. 2021. Hypergraph convolution and hypergraph attention. Pattern Recognition, 110: 107637(1–8).

Baker, B.; Gupta, O.; Naik, N.; and Raskar, R. 2016. Designing Neural Network Architectures using Reinforcement Learning. In Proceedings of International Conference on Learning Representations.

Bergstra, J.; and Bengio, Y. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(2): 281–305.

Bongini, P.; Pancino, N.; Scarselli, F.; and Bianchini, M. 2023. BioGNN: How Graph Neural Networks Can Solve Biological Problems. In Artificial Intelligence and Machine Learning for Healthcare, 211–231.

Feng, Y.; You, H.; Zhang, Z.; Ji, R.; and Gao, Y. 2019. Hypergraph Neural Networks. In Proceedings of the AAAI Conference on Artificial Intelligence.

Gao, Y.; Feng, Y.; Ji, S.; and Ji, R. 2022. HGNN+: General hypergraph neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3): 3181–3199.

Gao, Y.; Wang, M.; Tao, D.; Ji, R.; and Dai, Q. 2012. 3-D object retrieval and recognition with hypergraph analysis. IEEE Transactions on Image Processing, 21(9): 4290–4303.

Gao, Y.; Yang, H.; Zhang, P.; Zhou, C.; and Hu, Y. 2021. Graph neural architecture search. In Proceedings of International Joint Conference on Artificial Intelligence.

Gao, Y.; Zhang, P.; Zhou, C.; Yang, H.; Li, Z.; Hu, Y.; and Philip, S. Y. 2023. HGNAS++: Efficient architecture search for heterogeneous graph neural networks. IEEE Transactions on Knowledge and Data Engineering.

Guo, J.; Han, K.; Wang, Y.; Zhang, C.; Yang, Z.; Wu, H.; Chen, X.; and Xu, C. 2020. Hit-Detector: Hierarchical trinity architecture search for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Hassani, K.; and Khasahmadi, A. H. 2020. Contrastive multi-view representation learning on graphs. In Proceedings of International Conference on Machine Learning.

He, X.; Yao, J.; Wang, Y.; Tang, Z.; Cheung, K. C.; See, S.; Han, B.; and Chu, X. 2023. NAS-LID: Efficient Neural Architecture Search with Local Intrinsic Dimension. In Proceedings of the AAAI Conference on Artificial Intelligence.

Huan, Z.; Quanming, Y.; and Weiwei, T. 2021. Search to aggregate neighborhood for graph neural network. In Proceedings of IEEE International Conference on Data Engineering.

Huang, Y.; Liu, Q.; and Metaxas, D. 2009. Video object segmentation by hypergraph cut. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.

Ji, S.; Feng, Y.; Ji, R.; Zhao, X.; Tang, W.; and Gao, Y. 2020. Dual channel hypergraph collaborative filtering. In Proceedings of ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.

Jiang, J.; Wei, Y.; Feng, Y.; Cao, J.; and Gao, Y. 2019. Dynamic Hypergraph Neural Networks. In Proceedings of International Joint Conference on Artificial Intelligence.

Jiang, W.; and Luo, J. 2022. Graph neural network for traffic forecasting: A survey. Expert Systems with Applications, 207: 117921(1–28).

Kim, E.-S.; Kang, W. Y.; On, K.-W.; Heo, Y.-J.; and Zhang, B.-T. 2020. Hypergraph attention networks for multimodal learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Kim, J.; Oh, S.; Cho, S.; and Hong, S. 2022. Equivariant Hypergraph Neural Networks. In Proceedings of European Conference on Computer Vision.

Klimm, F.; Deane, C. M.; and Reinert, G. 2021. Hypergraphs for predicting essential genes using multiprotein complex data. Journal of Complex Networks, 9(2): cnaa028(1–25).

Li, W.; Wen, S.; Shi, K.; Yang, Y.; and Huang, T. 2022. Neural architecture search with a lightweight transformer for text-to-image synthesis. IEEE Transactions on Network Science and Engineering, 9(3): 1567–1576.

Li, Y.; and King, I. 2020. AutoGraph: Automated graph neural network. In Proceedings of International Conference on Neural Information Processing.

Liu, H.; Simonyan, K.; and Yang, Y. 2018. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055.

Peng, Z.; Huang, W.; Luo, M.; Zheng, Q.; Rong, Y.; Xu, T.; and Huang, J. 2020. Graph representation learning via graphical mutual information maximization. In Proceedings of The Web Conference.

Qiu, C.; Huang, Z.; Xu, W.; and Li, H. 2022. VGAER: Graph neural network reconstruction based community detection. arXiv preprint arXiv:2201.04066.

Real, E.; Aggarwal, A.; Huang, Y.; and Le, Q. V. 2019. Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence.

Saifuddin, K. M.; Bumgardnerr, B.; Tanvir, F.; and Akbas, E. 2022. HyGNN: Drug-Drug Interaction Prediction via Hypergraph Neural Network. arXiv preprint arXiv:2206.12747.

Sen, P.; Namata, G.; Bilgic, M.; Getoor, L.; Galligher, B.; and Eliassi-Rad, T. 2008. Collective classification in network data. AI Magazine, 29(3): 93–93.

Shchur, O.; Mumme, M.; Bojchevski, A.; and Günnemann, S. 2018. Pitfalls of graph neural network evaluation. arXiv preprint arXiv:1811.05868.

Tian, Y.; Dong, K.; Zhang, C.; Zhang, C.; and Chawla, N. V. 2023. Heterogeneous graph masked autoencoders. In Proceedings of the AAAI Conference on Artificial Intelligence.

Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y.; et al. 2017. Graph Attention Networks. Stat, 1050(20): 10–48550.

Wang, R.; Cheng, M.; Chen, X.; Tang, X.; and Hsieh, C.-J. 2021. Rethinking Architecture Selection in Differentiable NAS. In Proceedings of International Conference on Learning Representations.

Wang, R.; Yan, J.; and Yang, X. 2021. Neural graph matching network: Learning Lawler's quadratic assignment problem with extension to hypergraph and multiple-graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9): 5261–5279.

Welling, M.; and Kipf, T. N. 2017. Semi-supervised Classification with Graph Convolutional Networks. In Proceedings of International Conference on Learning Representations.

White, C.; Neiswanger, W.; and Savani, Y. 2021. Bananas: Bayesian optimization with neural architectures for neural architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence.

Xiao, S.; Wang, S.; Dai, Y.; and Guo, W. 2022. Graph neural networks in node classification: survey and evaluation. Machine Vision and Applications, 33(1): 1–19.

Xie, S.; Zheng, H.; Liu, C.; and Lin, L. 2018. SNAS: Stochastic neural architecture search. arXiv preprint arXiv:1812.09926.

Yadati, N.; Nimishakavi, M.; Yadav, P.; Nitin, V.; Louis, A.; and Talukdar, P. 2019. HyperGCN: A new method for training graph convolutional networks on hypergraphs. Advances in Neural Information Processing Systems.

Yang, Z.; Cohen, W.; and Salakhudinov, R. 2016. Revisiting semi-supervised learning with graph embeddings. In Proceedings of International Conference on Machine Learning.

Yang, Z.; Wang, Y.; Chen, X.; Shi, B.; Xu, C.; Xu, C.; Tian, Q.; and Xu, C. 2020. Cars: Continuous evolution for efficient neural architecture search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Yao, F.; Sun, X.; Liu, N.; Tian, C.; Xu, L.; Hu, L.; and Ding, C. 2022. Hypergraph-Enhanced Textual-Visual Matching Network for Cross-Modal Remote Sensing Image Retrieval via Dynamic Hypergraph Learning. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 16: 688–701.

Yao, Q.; Xu, J.; Tu, W.-W.; and Zhu, Z. 2020. Efficient neural architecture search via proximal iterations. In Proceedings of the AAAI Conference on Artificial Intelligence.

Yin, N.; Feng, F.; Luo, Z.; Zhang, X.; Wang, W.; Luo, X.; Chen, C.; and Hua, X.-S. 2022. Dynamic hypergraph convolutional network. In Proceedings of the IEEE International Conference on Data Engineering.

Yoon, M.; Gervet, T.; Hooi, B.; and Faloutsos, C. 2020. Autonomous graph mining algorithm search with best speed/accuracy trade-off. In Proceedings of the IEEE International Conference on Data Mining.

You, J.; Ying, Z.; and Leskovec, J. 2020. Design space for graph neural networks. Advances in Neural Information Processing Systems, 33: 17009–17021.

Zeng, Y.; Jin, Q.; Bao, T.; and Li, W. 2023. Multi-Modal Knowledge Hypergraph for Diverse Image Retrieval. In Proceedings of the AAAI Conference on Artificial Intelligence.

Zhang, R.; Zou, Y.; and Ma, J. 2020. Hyper-SAGNN: A self-attention based graph neural network for hypergraphs. In Proceedings of International Conference on Learning Representations.

Zhang, Z.; Lin, H.; Gao, Y.; and BNRist, K. 2018. Dynamic hypergraph structure learning. In Proceedings of International Joint Conference on Artificial Intelligence.

Zheng, X.; Zhang, M.; Chen, C.; Zhang, Q.; Zhou, C.; and Pan, S. 2023. Auto-HeG: Automated Graph Neural Network on Heterophilic Graphs. In Proceedings of the ACM Web Conference.

Zhou, D.; Huang, J.; and Schölkopf, B. 2006. Learning with hypergraphs: Clustering, classification, and embedding. Advances in Neural Information Processing Systems, 19.

Zhu, Y.; Xu, Y.; Yu, F.; Liu, Q.; Wu, S.; and Wang, L. 2021. Graph contrastive learning with adaptive augmentation. In Proceedings of the Web Conference.

Zoph, B.; and Le, Q. V. 2017. Neural architecture search with reinforcement learning. In Proceedings of International Conference on Learning Representations.