Convolutional Layers
Many different types of graph convolutional layers have been proposed in the literature. Choosing the right layer for your application can involve a lot of exploration. Some of the most commonly used layers are GCNConv and GATv2Conv. Multiple graph convolutional layers are typically stacked together to create a graph neural network model (see GNNChain).
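For instance, a minimal sketch of such a stacked model, where the graph and all feature sizes (3 input features, 16 hidden, 2 output) are arbitrary example values:
using GraphNeuralNetworks, Flux

g = rand_graph(10, 30)              # random graph with 10 nodes and 30 edges
x = randn(Float32, 3, g.num_nodes)  # 3 features per node
model = GNNChain(GCNConv(3 => 16, relu),
                 GCNConv(16 => 16, relu),
                 Dense(16, 2))      # node-level readout
y = model(g, x)                     # size: (2, num_nodes)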
The table below lists all graph convolutional layers implemented in GraphNeuralNetworks.jl. It also highlights some additional capabilities beyond basic message passing:
- Sparse Ops: implements message passing as multiplication by a sparse adjacency matrix instead of the gather/scatter mechanism. This can lead to better CPU performance, but it is not yet supported on GPU.
- Edge Weights: supports scalar weights (or equivalently scalar features) on edges.
- Edge Features: supports feature vectors on edges.
Layer | Sparse Ops | Edge Weights | Edge Features |
---|---|---|---|
AGNNConv | | | |
CGConv | | | ✓ |
ChebConv | | | |
EdgeConv | | | |
GATConv | | | ✓ |
GATv2Conv | | | ✓ |
GatedGraphConv | ✓ | | |
GCNConv | ✓ | ✓ | |
GINConv | ✓ | | |
GMMConv | | | ✓ |
GraphConv | ✓ | | |
MEGNetConv | | | ✓ |
NNConv | | | ✓ |
ResGatedGraphConv | | | |
SAGEConv | ✓ | | |
SGConv | ✓ | ✓ | |
Docs
GraphNeuralNetworks.AGNNConv
— Type
AGNNConv(init_beta=1f0)
Attention-based Graph Neural Network layer from the paper Attention-based Graph Neural Network for Semi-Supervised Learning.
The forward pass is given by
\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} \mathbf{x}_j\]
where the attention coefficients $\alpha_{ij}$ are given by
\[\alpha_{ij} =\frac{e^{\beta \cos(\mathbf{x}_i, \mathbf{x}_j)}} {\sum_{j'}e^{\beta \cos(\mathbf{x}_i, \mathbf{x}_{j'})}}\]
with the cosine similarity defined by
\[\cos(\mathbf{x}_i, \mathbf{x}_j) = \frac{\mathbf{x}_i \cdot \mathbf{x}_j}{\lVert\mathbf{x}_i\rVert \lVert\mathbf{x}_j\rVert}\]
and $\beta$ a trainable parameter.
Arguments
- init_beta: The initial value of $\beta$.
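Examples
A minimal usage sketch, assuming a small random graph and an arbitrary feature size; since the layer takes no dimension arguments, the output size equals the input size:
using GraphNeuralNetworks

g = rand_graph(5, 6)
x = rand(Float32, 4, g.num_nodes)
l = AGNNConv()  # uses the default init_beta = 1f0
y = l(g, x)     # size: (4, num_nodes)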
GraphNeuralNetworks.CGConv
— Type
CGConv((in, ein) => out, act=identity; bias=true, init=glorot_uniform, residual=false)
CGConv(in => out, ...)
The crystal graph convolutional layer from the paper Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Performs the operation
\[\mathbf{x}_i' = \mathbf{x}_i + \sum_{j\in N(i)}\sigma(W_f \mathbf{z}_{ij} + \mathbf{b}_f)\, act(W_s \mathbf{z}_{ij} + \mathbf{b}_s)\]
where $\mathbf{z}_{ij}$ is the concatenation of node and edge features $[\mathbf{x}_i; \mathbf{x}_j; \mathbf{e}_{j\to i}]$ and $\sigma$ is the sigmoid function. The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.
Arguments
- in: The dimension of input node features.
- ein: The dimension of input edge features. If ein is not given, assumes that no edge features are passed as input in the forward pass.
- out: The dimension of output node features.
- act: Activation function.
- bias: Add learnable bias.
- init: Weights' initializer.
- residual: Add a residual connection.
Examples
g = rand_graph(5, 6)
x = rand(Float32, 2, g.num_nodes)
e = rand(Float32, 3, g.num_edges)
l = CGConv((2, 3) => 4, tanh)
y = l(g, x, e) # size: (4, num_nodes)
# No edge features
l = CGConv(2 => 4, tanh)
y = l(g, x) # size: (4, num_nodes)
GraphNeuralNetworks.ChebConv
— Type
ChebConv(in => out, k; bias=true, init=glorot_uniform)
Chebyshev spectral graph convolutional layer from the paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.
Implements
\[X' = \sum^{K-1}_{k=0} W^{(k)} Z^{(k)}\]
where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:
\[Z^{(0)} = X \\ Z^{(1)} = \hat{L} X \\ Z^{(k)} = 2 \hat{L} Z^{(k-1)} - Z^{(k-2)}\]
with $\hat{L}$ the scaled_laplacian.
Arguments
- in: The dimension of input features.
- out: The dimension of output features.
- k: The order of the Chebyshev polynomial.
- bias: Add learnable bias.
- init: Weights' initializer.
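Examples
A minimal usage sketch, with arbitrary example values for the feature sizes and the polynomial order:
using GraphNeuralNetworks

g = rand_graph(5, 6)
x = rand(Float32, 3, g.num_nodes)
l = ChebConv(3 => 5, 4)  # Chebyshev expansion of order k = 4
y = l(g, x)              # size: (5, num_nodes)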
GraphNeuralNetworks.EdgeConv
— Type
EdgeConv(nn; aggr=max)
Edge convolutional layer from the paper Dynamic Graph CNN for Learning on Point Clouds.
Performs the operation
\[\mathbf{x}_i' = \square_{j \in N(i)}\, nn([\mathbf{x}_i; \mathbf{x}_j - \mathbf{x}_i])\]
where nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.
Arguments
- nn: A (possibly learnable) function.
- aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
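Examples
A minimal usage sketch. Since nn receives the concatenation $[\mathbf{x}_i; \mathbf{x}_j - \mathbf{x}_i]$, its input size must be twice the node feature dimension; the Dense layer below is an arbitrary example choice:
using GraphNeuralNetworks, Flux

g = rand_graph(5, 6)
x = rand(Float32, 3, g.num_nodes)
l = EdgeConv(Dense(2 * 3 => 5); aggr = max)  # nn maps 2*3 inputs to 5 outputs
y = l(g, x)                                  # size: (5, num_nodes)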
GraphNeuralNetworks.GATConv
— Type
GATConv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
GATConv((in, ein) => out, ...)
Graph attentional layer from the paper Graph Attention Networks.
Implements the operation
\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W \mathbf{x}_j\]
where the attention coefficients $\alpha_{ij}$ are given by
\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W \mathbf{x}_i; W \mathbf{x}_j]))\]
with $z_i$ a normalization factor.
In case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as
\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W_e \mathbf{e}_{j\to i}; W \mathbf{x}_i; W \mathbf{x}_j]))\]
Arguments
- in: The dimension of input node features.
- ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
- out: The dimension of output node features.
- σ: Activation function. Default identity.
- bias: Learn the additive bias if true. Default true.
- heads: Number of attention heads. Default 1.
- concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
- negative_slope: The parameter of LeakyReLU. Default 0.2.
- add_self_loops: Add self loops to the graph before performing the convolution. Default true.
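Examples
A minimal usage sketch with arbitrary sizes; with concat=true (the default) the output feature dimension is out * heads:
using GraphNeuralNetworks

g = rand_graph(5, 6)
x = rand(Float32, 3, g.num_nodes)
l = GATConv(3 => 5; heads = 2)
y = l(g, x)  # size: (5 * 2, num_nodes)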
GraphNeuralNetworks.GATv2Conv
— Type
GATv2Conv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
GATv2Conv((in, ein) => out, ...)
GATv2 attentional layer from the paper How Attentive are Graph Attention Networks?.
Implements the operation
\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W_1 \mathbf{x}_j\]
where the attention coefficients $\alpha_{ij}$ are given by
\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU([W_2 \mathbf{x}_i; W_1 \mathbf{x}_j]))\]
with $z_i$ a normalization factor.
In case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as
\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU([W_3 \mathbf{e}_{j\to i}; W_2 \mathbf{x}_i; W_1 \mathbf{x}_j])).\]
Arguments
- in: The dimension of input node features.
- ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
- out: The dimension of output node features.
- σ: Activation function. Default identity.
- bias: Learn the additive bias if true. Default true.
- heads: Number of attention heads. Default 1.
- concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
- negative_slope: The parameter of LeakyReLU. Default 0.2.
- add_self_loops: Add self loops to the graph before performing the convolution. Default true.
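Examples
A minimal usage sketch with edge features, using arbitrary example sizes:
using GraphNeuralNetworks

g = rand_graph(5, 6)
x = rand(Float32, 3, g.num_nodes)
e = rand(Float32, 4, g.num_edges)
l = GATv2Conv((3, 4) => 5)  # 3 node features, 4 edge features
y = l(g, x, e)              # size: (5, num_nodes)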
GraphNeuralNetworks.GCNConv
— Type
GCNConv(in => out, σ=identity; [bias, init, add_self_loops, use_edge_weight])
Graph convolutional layer from the paper Semi-supervised Classification with Graph Convolutional Networks.
Performs the operation
\[\mathbf{x}'_i = \sum_{j\in N(i)} a_{ij} W \mathbf{x}_j\]
where $a_{ij} = 1 / \sqrt{|N(i)||N(j)|}$ is a normalization factor computed from the node degrees.
If the input graph has weighted edges and use_edge_weight=true, then $a_{ij}$ will be computed as
\[a_{ij} = \frac{e_{j\to i}}{\sqrt{\sum_{j \in N(i)} e_{j\to i}} \sqrt{\sum_{i \in N(j)} e_{i\to j}}}\]
The input to the layer is a node feature array X of size (num_features, num_nodes) and optionally an edge weight vector.
Arguments
- in: Number of input features.
- out: Number of output features.
- σ: Activation function. Default identity.
- bias: Add learnable bias. Default true.
- init: Weights' initializer. Default glorot_uniform.
- add_self_loops: Add self loops to the graph before performing the convolution. Default false.
- use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.
Examples
# create data
s = [1,1,2,3]
t = [2,3,1,1]
g = GNNGraph(s, t)
x = randn(3, g.num_nodes)
# create layer
l = GCNConv(3 => 5)
# forward pass
y = l(g, x) # size: 5 × num_nodes
# convolution with edge weights
w = [1.1, 0.1, 2.3, 0.5]
y = l(g, x, w)
# Edge weights can also be embedded in the graph.
g = GNNGraph(s, t, w)
l = GCNConv(3 => 5, use_edge_weight=true)
y = l(g, x) # same as l(g, x, w)
GraphNeuralNetworks.GINConv
— Type
GINConv(f, ϵ; aggr=+)
Graph Isomorphism convolutional layer from the paper How Powerful are Graph Neural Networks?
Implements the graph convolution
\[\mathbf{x}_i' = f_\Theta\left((1 + \epsilon) \mathbf{x}_i + \sum_{j \in N(i)} \mathbf{x}_j \right)\]
where $f_\Theta$ typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.
Arguments
- f: A (possibly learnable) function acting on node features.
- ϵ: Weighting factor.
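Examples
A minimal usage sketch, with an arbitrary small MLP as the function f:
using GraphNeuralNetworks, Flux

g = rand_graph(5, 6)
x = rand(Float32, 3, g.num_nodes)
nn = Chain(Dense(3 => 8, relu), Dense(8 => 8))  # the learnable function f
l = GINConv(nn, 0.01f0)                         # ϵ = 0.01
y = l(g, x)                                     # size: (8, num_nodes)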
GraphNeuralNetworks.GMMConv
— Type
GMMConv((in, ein) => out, σ=identity; K=1, bias=true, init=glorot_uniform, residual=false)
Graph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs. Performs the operation
\[\mathbf{x}_i' = \mathbf{x}_i + \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^K \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j\]
where $w^a_{k}(e^a)$ for feature a and kernel k is given by
\[w^a_{k}(e^a) = \exp(-\frac{1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))\]
$\Theta_k, \mu^a_k, (\Sigma^{-1})^a_k$ are learnable parameters.
The input to the layer is a node feature array x of size (num_features, num_nodes) and an edge pseudo-coordinate array e of size (num_features, num_edges). The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.
Arguments
- in: Number of input node features.
- ein: Number of input edge features.
- out: Number of output features.
- σ: Activation function. Default identity.
- K: Number of kernels. Default 1.
- bias: Add learnable bias. Default true.
- init: Weights' initializer. Default glorot_uniform.
- residual: Residual connection. Default false.
Examples
# create data
s = [1,1,2,3]
t = [2,3,1,1]
g = GNNGraph(s,t)
nin, ein, out, K = 4, 10, 7, 8
x = randn(Float32, nin, g.num_nodes)
e = randn(Float32, ein, g.num_edges)
# create layer
l = GMMConv((nin, ein) => out, K=K)
# forward pass
l(g, x, e)
GraphNeuralNetworks.GatedGraphConv
— Type
GatedGraphConv(out, num_layers; aggr=+, init=glorot_uniform)
Gated graph convolution layer from the paper Gated Graph Sequence Neural Networks.
Implements the recursion
\[\mathbf{h}^{(0)}_i = [\mathbf{x}_i; \mathbf{0}] \\ \mathbf{h}^{(l)}_i = GRU(\mathbf{h}^{(l-1)}_i, \square_{j \in N(i)} W \mathbf{h}^{(l-1)}_j)\]
where $\mathbf{h}^{(l)}_i$ denotes the $l$-th hidden variables passing through GRU. The dimension of input $\mathbf{x}_i$ needs to be less than or equal to out.
Arguments
- out: The dimension of output features.
- num_layers: The number of recursion steps.
- aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
- init: Weight initialization function.
GraphNeuralNetworks.GraphConv
— Type
GraphConv(in => out, σ=identity; aggr=+, bias=true, init=glorot_uniform)
Graph convolution layer from the paper Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.
Performs:
\[\mathbf{x}_i' = W_1 \mathbf{x}_i + \square_{j \in \mathcal{N}(i)} W_2 \mathbf{x}_j\]
where the aggregation type is selected by aggr.
Arguments
- in: The dimension of input features.
- out: The dimension of output features.
- σ: Activation function.
- aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
- bias: Add learnable bias.
- init: Weights' initializer.
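Examples
A minimal usage sketch with arbitrary sizes and mean aggregation as an example choice:
using GraphNeuralNetworks, Flux
using Statistics: mean

g = rand_graph(5, 6)
x = rand(Float32, 3, g.num_nodes)
l = GraphConv(3 => 5, relu; aggr = mean)
y = l(g, x)  # size: (5, num_nodes)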
GraphNeuralNetworks.MEGNetConv
— Type
MEGNetConv(ϕe, ϕv; aggr=mean)
MEGNetConv(in => out; aggr=mean)
Convolution from the paper Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals. In the forward pass, takes as inputs node features x and edge features e and returns updated features x' and e' according to
\[\mathbf{e}_{i\to j}' = \phi_e([\mathbf{x}_i;\, \mathbf{x}_j;\, \mathbf{e}_{i\to j}]),\\ \mathbf{x}_{i}' = \phi_v([\mathbf{x}_i;\, \square_{j\in \mathcal{N}(i)}\,\mathbf{e}_{j\to i}']).\]
aggr defines the aggregation to be performed.
If the neural networks ϕe and ϕv are not provided, they will be constructed from the in and out arguments instead, as multi-layer perceptrons with one hidden layer and relu activations.
Examples
g = rand_graph(10, 30)
x = randn(3, 10)
e = randn(3, 30)
m = MEGNetConv(3 => 3)
x′, e′ = m(g, x, e)
GraphNeuralNetworks.NNConv
— Type
NNConv(in => out, f, σ=identity; aggr=+, bias=true, init=glorot_uniform)
The continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.
Performs the operation
\[\mathbf{x}_i' = W \mathbf{x}_i + \square_{j \in N(i)} f_\Theta(\mathbf{e}_{j\to i})\,\mathbf{x}_j\]
where $f_\Theta$ denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f will return a batched array of matrices of size (out, in, num_edges). For convenience, functions returning a single (out*in, num_edges) matrix are also allowed.
Arguments
- in: The dimension of input features.
- out: The dimension of output features.
- f: A (possibly learnable) function acting on edge features.
- aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
- σ: Activation function.
- bias: Add learnable bias.
- init: Weights' initializer.
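Examples
A minimal usage sketch; the Dense layer below is an arbitrary example choice for f, returning a flattened (out*in, num_edges) matrix:
using GraphNeuralNetworks, Flux

g = rand_graph(5, 6)
nin, ein, nout = 3, 4, 5
x = rand(Float32, nin, g.num_nodes)
e = rand(Float32, ein, g.num_edges)
f = Dense(ein => nout * nin)      # maps each edge feature vector to out*in entries
l = NNConv(nin => nout, f, relu)
y = l(g, x, e)                    # size: (5, num_nodes)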
GraphNeuralNetworks.ResGatedGraphConv
— Type
ResGatedGraphConv(in => out, act=identity; init=glorot_uniform, bias=true)
The residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.
The layer's forward pass is given by
\[\mathbf{x}_i' = act\big(U\mathbf{x}_i + \sum_{j \in N(i)} \eta_{ij} V \mathbf{x}_j\big),\]
where the edge gates $\eta_{ij}$ are given by
\[\eta_{ij} = sigmoid(A\mathbf{x}_i + B\mathbf{x}_j).\]
Arguments
- in: The dimension of input features.
- out: The dimension of output features.
- act: Activation function.
- init: Weight matrices' initializing function.
- bias: Learn an additive bias if true.
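Examples
A minimal usage sketch with arbitrary example sizes:
using GraphNeuralNetworks, Flux

g = rand_graph(5, 6)
x = rand(Float32, 3, g.num_nodes)
l = ResGatedGraphConv(3 => 5, relu)
y = l(g, x)  # size: (5, num_nodes)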
GraphNeuralNetworks.SAGEConv
— Type
SAGEConv(in => out, σ=identity; aggr=mean, bias=true, init=glorot_uniform)
GraphSAGE convolution layer from the paper Inductive Representation Learning on Large Graphs.
Performs:
\[\mathbf{x}_i' = W \cdot [\mathbf{x}_i; \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]\]
where the aggregation type is selected by aggr.
Arguments
- in: The dimension of input features.
- out: The dimension of output features.
- σ: Activation function.
- aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
- bias: Add learnable bias.
- init: Weights' initializer.
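Examples
A minimal usage sketch with arbitrary sizes and the default mean aggregation:
using GraphNeuralNetworks
using Statistics: mean

g = rand_graph(5, 6)
x = rand(Float32, 3, g.num_nodes)
l = SAGEConv(3 => 5; aggr = mean)
y = l(g, x)  # size: (5, num_nodes)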
GraphNeuralNetworks.SGConv
— Type
SGConv(in => out, k=1; [bias, init, add_self_loops, use_edge_weight])
SGC layer from the paper Simplifying Graph Convolutional Networks. Performs the operation
\[H^{K} = (\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2})^K X \Theta\]
where $\tilde{A}$ is $A + I$.
Arguments
- in: Number of input features.
- out: Number of output features.
- k: Number of hops. Default 1.
- bias: Add learnable bias. Default true.
- init: Weights' initializer. Default glorot_uniform.
- add_self_loops: Add self loops to the graph before performing the convolution. Default false.
- use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.
Examples
# create data
s = [1,1,2,3]
t = [2,3,1,1]
g = GNNGraph(s, t)
x = randn(3, g.num_nodes)
# create layer
l = SGConv(3 => 5; add_self_loops = true)
# forward pass
y = l(g, x) # size: 5 × num_nodes
# convolution with edge weights
w = [1.1, 0.1, 2.3, 0.5]
y = l(g, x, w)
# Edge weights can also be embedded in the graph.
g = GNNGraph(s, t, w)
l = SGConv(3 => 5, add_self_loops = true, use_edge_weight=true)
y = l(g, x) # same as l(g, x, w)