grb.model.dgl¶
grb.model.dgl.gat¶
- class grb.model.dgl.gat.GAT(in_features, out_features, hidden_features, n_layers, n_heads, activation=<function leaky_relu>, layer_norm=False, feat_norm=None, adj_norm_func=None, feat_dropout=0.0, attn_dropout=0.0, residual=False, dropout=0.0)[source]¶
Bases: Module
Graph Attention Networks (GAT)
- Parameters
in_features (int) – Dimension of input features.
out_features (int) – Dimension of output features.
hidden_features (int or list of int) – Dimension of hidden features. List if multi-layer.
n_layers (int) – Number of layers.
n_heads (int) – Number of attention heads.
layer_norm (bool, optional) – Whether to use layer normalization. Default: False.
activation (func of torch.nn.functional, optional) – Activation function. Default: torch.nn.functional.leaky_relu.
feat_norm (str, optional) – Type of features normalization, choose from [“arctan”, “tanh”, None]. Default: None.
adj_norm_func (func of utils.normalize, optional) – Function that normalizes adjacency matrix. Default: None.
feat_dropout (float, optional) – Dropout rate for input features. Default: 0.0.
attn_dropout (float, optional) – Dropout rate for attention. Default: 0.0.
residual (bool, optional) – Whether to use residual connection. Default: False.
dropout (float, optional) – Dropout rate during training. Default: 0.0.
- forward(x, adj)[source]¶
- Parameters
x (torch.Tensor) – Tensor of input features.
adj (torch.SparseTensor) – Sparse tensor of adjacency matrix.
- Returns
x – Output of layer.
- Return type
torch.Tensor
- property model_name¶
- property model_type¶
- training: bool¶
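The snippet below is a minimal usage sketch based only on the signature and parameter types documented above; it is not taken from the GRB examples. The random features and self-loop adjacency are placeholders, the assumption that hidden_features takes one entry per hidden layer is unverified, and DGL must be installed as the backend.

```python
import torch
import torch.nn.functional as F
from grb.model.dgl.gat import GAT

n_nodes, in_feats, n_classes = 100, 32, 7

# Two-layer GAT; hidden_features passed as a list since the model is multi-layer.
model = GAT(in_features=in_feats,
            out_features=n_classes,
            hidden_features=[64],
            n_layers=2,
            n_heads=4,
            activation=F.leaky_relu,
            feat_dropout=0.1,
            attn_dropout=0.1)

x = torch.randn(n_nodes, in_feats)                 # placeholder node features
i = torch.arange(n_nodes).repeat(2, 1)             # self-loop edge indices, shape (2, n_nodes)
adj = torch.sparse_coo_tensor(i, torch.ones(n_nodes),
                              (n_nodes, n_nodes))  # sparse adjacency, per the forward() docs
out = model(x, adj)                                # expected shape: (n_nodes, n_classes)
```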
grb.model.dgl.gcn¶
- class grb.model.dgl.gcn.GCN(in_features, out_features, hidden_features, activation=<function relu>, layer_norm=False)[source]¶
Bases: Module
Graph Convolutional Networks (GCN)
- forward(x, adj, dropout=0)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- property model_type¶
- training: bool¶
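A sketch along the same lines, assuming the same sparse-adjacency convention as the GAT example above (this docstring omits parameter details); note that here dropout is passed at call time, per the documented forward(x, adj, dropout=0) signature.

```python
import torch
import torch.nn.functional as F
from grb.model.dgl.gcn import GCN

model = GCN(in_features=32, out_features=7,
            hidden_features=[64], activation=F.relu)

x = torch.randn(100, 32)                            # same placeholder graph as above
i = torch.arange(100).repeat(2, 1)
adj = torch.sparse_coo_tensor(i, torch.ones(100), (100, 100))
out = model(x, adj, dropout=0.5)                    # dropout applied during this call
```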
grb.model.dgl.gin¶
- class grb.model.dgl.gin.ApplyNodeFunc(mlp)[source]¶
Bases: Module
Update the node feature h_v with MLP, batch normalization, and ReLU.
- forward(h)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class grb.model.dgl.gin.GIN(in_features, hidden_features, out_features, learn_eps=True, neighbor_pooling_type='sum', num_mlp_layers=1)[source]¶
Bases: Module
Graph Isomorphism Network (GIN) model
- forward(x, adj, dropout=0)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- property model_type¶
- training: bool¶
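A hedged sketch following the documented signature; note the argument order differs from GAT and GCN (in_features, hidden_features, out_features). Whether hidden_features must be a list here is an assumption carried over from the other classes.

```python
import torch
from grb.model.dgl.gin import GIN

model = GIN(in_features=32, hidden_features=[64], out_features=7,
            learn_eps=True,                  # learn the epsilon weighting of self-features
            neighbor_pooling_type='sum',     # sum aggregation, as in the GIN paper
            num_mlp_layers=2)

x = torch.randn(100, 32)                     # same placeholder graph as above
i = torch.arange(100).repeat(2, 1)
adj = torch.sparse_coo_tensor(i, torch.ones(100), (100, 100))
out = model(x, adj, dropout=0.1)             # dropout passed at call time, per the signature
```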
- class grb.model.dgl.gin.MLP(num_layers, input_dim, hidden_dim, output_dim)[source]¶
Bases: Module
MLP with linear output
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
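MLP and ApplyNodeFunc are the building blocks used inside GIN. The sketch below shows how they compose, assuming ApplyNodeFunc infers its batch-norm size from the wrapped MLP's output dimension (an assumption about internals not stated in this docstring).

```python
import torch
from grb.model.dgl.gin import MLP, ApplyNodeFunc

mlp = MLP(num_layers=2, input_dim=32, hidden_dim=64, output_dim=64)
apply_fn = ApplyNodeFunc(mlp)        # wraps the MLP with BN and ReLU

h = torch.randn(100, 32)             # a batch of node features
h_new = apply_fn(h)                  # expected shape: (100, 64)
```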
grb.model.dgl.grand¶
- class grb.model.dgl.grand.GRAND(in_features, out_features, hidden_features, n_layers=2, s=1, k=3, temp=1.0, lam=1.0, feat_norm=None, adj_norm_func=None, node_dropout=0.0, input_dropout=0.0, hidden_dropout=0.0)[source]¶
Bases: Module
Graph Random Neural Networks (GRAND)
- Parameters
in_features (int) – Dimension of input features.
out_features (int) – Dimension of output features.
hidden_features (int or list of int) – Dimension of hidden features. List if multi-layer.
n_layers (int) – Number of layers.
s (int) – Number of augmentation samples.
k (int) – Number of propagation steps.
temp (float) – Temperature for sharpening the averaged predictions. Default: 1.0.
lam (float) – Coefficient of the consistency regularization loss. Default: 1.0.
feat_norm (str, optional) – Type of features normalization, choose from [“arctan”, “tanh”, None]. Default: None.
adj_norm_func (func of utils.normalize, optional) – Function that normalizes adjacency matrix. Default: None.
node_dropout (float) – Dropout rate on node features.
input_dropout (float) – Dropout rate of the input layer of the MLP.
hidden_dropout (float) – Dropout rate of the hidden layers of the MLP.
- forward(x, adj)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- property model_name¶
- property model_type¶
- training: bool¶
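A final sketch based on the documented signature. Since GRAND's random propagation (controlled by s, k, and node_dropout) is stochastic during training, the example runs in eval mode for a deterministic forward pass; the placeholder graph follows the same convention as the examples above.

```python
import torch
from grb.model.dgl.grand import GRAND

model = GRAND(in_features=32, out_features=7, hidden_features=64,
              n_layers=2,
              s=2,                   # augmentation samples
              k=4,                   # propagation steps
              node_dropout=0.5, input_dropout=0.5, hidden_dropout=0.5)

x = torch.randn(100, 32)             # same placeholder graph as above
i = torch.arange(100).repeat(2, 1)
adj = torch.sparse_coo_tensor(i, torch.ones(100), (100, 100))

model.eval()                         # disable stochastic augmentation for inference
out = model(x, adj)                  # expected shape: (100, 7)
```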