grb.model.torch¶
grb.model.torch.appnp¶
Torch module for APPNP.
- class grb.model.torch.appnp.APPNP(in_features, out_features, hidden_features, n_layers, layer_norm=False, activation=<function relu>, edge_drop=0.0, alpha=0.01, k=10, feat_norm=None, adj_norm_func=<function GCNAdjNorm>, dropout=0.0)[source]¶
Bases:
Module
Approximated Personalized Propagation of Neural Predictions (APPNP)
- Parameters
in_features (int) – Dimension of input features.
out_features (int) – Dimension of output features.
hidden_features (int or list of int) – Dimension of hidden features. List if multi-layer.
n_layers (int) – Number of layers.
layer_norm (bool, optional) – Whether to use layer normalization. Default: False.
activation (func of torch.nn.functional, optional) – Activation function. Default: torch.nn.functional.relu.
feat_norm (str, optional) – Type of feature normalization, choose from [“arctan”, “tanh”, None]. Default: None.
adj_norm_func (func of utils.normalize, optional) – Function that normalizes the adjacency matrix. Default: GCNAdjNorm.
edge_drop (float, optional) – Rate of edge drop during training. Default: 0.0.
alpha (float, optional) – Teleport (restart) probability of the personalized PageRank propagation; refer to the original paper. Default: 0.01.
k (int, optional) – Number of propagation steps; refer to the original paper. Default: 10.
dropout (float, optional) – Dropout rate during training. Default: 0.0.
- forward(x, adj)[source]¶
- Parameters
x (torch.Tensor) – Tensor of input features.
adj (torch.SparseTensor) – Sparse tensor of adjacency matrix.
- Returns
x – Output of model (logits without activation).
- Return type
torch.Tensor
- property model_name¶
- property model_type¶
Indicate type of implementation.
- training: bool¶
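Example (illustration only): APPNP first applies an MLP to the node features, then propagates the resulting predictions with an approximate personalized PageRank governed by alpha and k. A minimal pure-Python sketch of just the propagation step, using dense list-of-lists matrices in place of the module's sparse tensors (`appnp_propagate` is a hypothetical helper, not part of grb):

```python
def appnp_propagate(adj, h, alpha, k):
    # k steps of personalized PageRank propagation:
    #   z <- (1 - alpha) * (adj @ z) + alpha * h, starting from z = h,
    # where h holds the MLP's predictions and adj is the normalized adjacency.
    n, d = len(h), len(h[0])
    z = [row[:] for row in h]
    for _ in range(k):
        az = [[sum(adj[i][j] * z[j][f] for j in range(n)) for f in range(d)]
              for i in range(n)]
        z = [[(1 - alpha) * az[i][f] + alpha * h[i][f] for f in range(d)]
             for i in range(n)]
    return z
```

With alpha=1 the propagation reduces to the raw MLP predictions; with small alpha it mixes in up to k hops of neighborhood information.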
grb.model.torch.gcn¶
Torch module for GCN.
- class grb.model.torch.gcn.GCN(in_features, out_features, hidden_features, n_layers, activation=<function relu>, layer_norm=False, residual=False, feat_norm=None, adj_norm_func=<function GCNAdjNorm>, dropout=0.0)[source]¶
Bases:
Module
Graph Convolutional Networks (GCN)
- Parameters
in_features (int) – Dimension of input features.
out_features (int) – Dimension of output features.
hidden_features (int or list of int) – Dimension of hidden features. List if multi-layer.
n_layers (int) – Number of layers.
layer_norm (bool, optional) – Whether to use layer normalization. Default: False.
activation (func of torch.nn.functional, optional) – Activation function. Default: torch.nn.functional.relu.
residual (bool, optional) – Whether to use residual connections. Default: False.
feat_norm (str, optional) – Type of feature normalization, choose from [“arctan”, “tanh”, None]. Default: None.
adj_norm_func (func of utils.normalize, optional) – Function that normalizes the adjacency matrix. Default: GCNAdjNorm.
dropout (float, optional) – Dropout rate during training. Default: 0.0.
- forward(x, adj)[source]¶
- Parameters
x (torch.Tensor) – Tensor of input features.
adj (torch.SparseTensor) – Sparse tensor of adjacency matrix.
- Returns
x – Output of model (logits without activation).
- Return type
torch.Tensor
- property model_name¶
- property model_type¶
Indicate type of implementation.
- training: bool¶
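Example (illustration only): the default adj_norm_func, GCNAdjNorm, corresponds to the standard symmetric GCN normalization D^{-1/2}(A + I)D^{-1/2} with self-loops added before computing degrees. A dense pure-Python sketch of that normalization, under the assumption that GCNAdjNorm follows the usual GCN formulation (`gcn_adj_norm` is a hypothetical helper, not grb's implementation):

```python
import math

def gcn_adj_norm(adj):
    # Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}.
    # adj: dense square 0/1 adjacency as a list of lists.
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]                      # add self-loops
    deg = [sum(row) for row in a_hat]                # degrees of A + I
    d_inv_sqrt = [1.0 / math.sqrt(d) if d > 0 else 0.0 for d in deg]
    return [[d_inv_sqrt[i] * a_hat[i][j] * d_inv_sqrt[j] for j in range(n)]
            for i in range(n)]
```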
- class grb.model.torch.gcn.GCNConv(in_features, out_features, activation=None, residual=False, dropout=0.0)[source]¶
Bases:
Module
GCN convolutional layer.
- Parameters
in_features (int) – Dimension of input features.
out_features (int) – Dimension of output features.
activation (func of torch.nn.functional, optional) – Activation function. Default: None.
residual (bool, optional) – Whether to use residual connections. Default: False.
dropout (float, optional) – Dropout rate during training. Default: 0.0.
- forward(x, adj)[source]¶
- Parameters
x (torch.Tensor) – Tensor of input features.
adj (torch.SparseTensor) – Sparse tensor of adjacency matrix.
- Returns
x – Output of layer.
- Return type
torch.Tensor
- training: bool¶
- class grb.model.torch.gcn.GCNGC(in_features, out_features, hidden_features, n_layers, activation=<function relu>, layer_norm=False, residual=False, feat_norm=None, adj_norm_func=<function GCNAdjNorm>, dropout=0.0)[source]¶
Bases:
Module
Graph Convolutional Networks for graph classification (GCNGC)
- Parameters
in_features (int) – Dimension of input features.
out_features (int) – Dimension of output features.
hidden_features (int or list of int) – Dimension of hidden features. List if multi-layer.
n_layers (int) – Number of layers.
layer_norm (bool, optional) – Whether to use layer normalization. Default: False.
activation (func of torch.nn.functional, optional) – Activation function. Default: torch.nn.functional.relu.
residual (bool, optional) – Whether to use residual connections. Default: False.
feat_norm (str, optional) – Type of feature normalization, choose from [“arctan”, “tanh”, None]. Default: None.
adj_norm_func (func of utils.normalize, optional) – Function that normalizes the adjacency matrix. Default: GCNAdjNorm.
dropout (float, optional) – Dropout rate during training. Default: 0.0.
- forward(x, adj, batch_index=None)[source]¶
- Parameters
x (torch.Tensor) – Tensor of input features.
adj (torch.SparseTensor) – Sparse tensor of adjacency matrix.
batch_index (torch.Tensor, optional) – Batch index indicating which graph each node belongs to, used for graph-level readout. Default: None.
- Returns
x – Output of model (logits without activation).
- Return type
torch.Tensor
- property model_name¶
- property model_type¶
Indicate type of implementation.
- training: bool¶
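Example (illustration only): for graph-level classification, node representations are pooled into one vector per graph using a batch index that maps each node to its graph. A minimal pure-Python sketch of such a mean-pooling readout (`global_mean_pool` is a hypothetical helper, sketched under the assumption that GCNGC uses a readout of this kind):

```python
def global_mean_pool(x, batch_index):
    # Mean-pool node features into per-graph features.
    # x: node feature matrix; batch_index[i] is the graph id of node i.
    n_graphs = max(batch_index) + 1
    d = len(x[0])
    sums = [[0.0] * d for _ in range(n_graphs)]
    counts = [0] * n_graphs
    for feats, g in zip(x, batch_index):
        counts[g] += 1
        for f in range(d):
            sums[g][f] += feats[f]
    return [[s / counts[g] for s in sums[g]] for g in range(n_graphs)]
```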
grb.model.torch.gin¶
Torch module for GIN.
- class grb.model.torch.gin.GIN(in_features, out_features, hidden_features, n_layers, n_mlp_layers=2, activation=<function relu>, layer_norm=False, batch_norm=True, eps=0.0, feat_norm=None, adj_norm_func=None, dropout=0.0)[source]¶
Bases:
Module
Graph Isomorphism Network (GIN)
- Parameters
in_features (int) – Dimension of input features.
out_features (int) – Dimension of output features.
hidden_features (int or list of int) – Dimension of hidden features. List if multi-layer.
n_layers (int) – Number of layers.
n_mlp_layers (int) – Number of layers in each MLP.
layer_norm (bool, optional) – Whether to use layer normalization. Default: False.
batch_norm (bool, optional) – Whether to apply batch normalization. Default: True.
eps (float, optional) – Initial weight of the central node in the aggregation, i.e. the epsilon in (1 + eps) · h_v; refer to the original paper. Default: 0.0.
activation (func of torch.nn.functional, optional) – Activation function. Default: torch.nn.functional.relu.
feat_norm (str, optional) – Type of feature normalization, choose from [“arctan”, “tanh”, None]. Default: None.
adj_norm_func (func of utils.normalize, optional) – Function that normalizes the adjacency matrix. Default: None.
dropout (float, optional) – Rate of dropout during training. Default: 0.0.
- forward(x, adj)[source]¶
- Parameters
x (torch.Tensor) – Tensor of input features.
adj (torch.SparseTensor) – Sparse tensor of adjacency matrix.
- Returns
x – Output of model (logits without activation).
- Return type
torch.Tensor
- property model_name¶
- property model_type¶
Indicate type of implementation.
- training: bool¶
- class grb.model.torch.gin.GINConv(in_features, out_features, activation=<function relu>, eps=0.0, batch_norm=True, dropout=0.0)[source]¶
Bases:
Module
GIN convolutional layer.
- Parameters
in_features (int) – Dimension of input features.
out_features (int) – Dimension of output features.
activation (func of torch.nn.functional, optional) – Activation function. Default: torch.nn.functional.relu.
eps (float, optional) – Initial weight of the central node in the aggregation; refer to the original paper. Default: 0.0.
batch_norm (bool, optional) – Whether to apply batch normalization. Default: True.
dropout (float, optional) – Rate of dropout during training. Default: 0.0.
- forward(x, adj)[source]¶
- Parameters
x (torch.Tensor) – Tensor of input features.
adj (torch.SparseTensor) – Sparse tensor of adjacency matrix.
- Returns
x – Output of layer.
- Return type
torch.Tensor
- training: bool¶
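Example (illustration only): a GIN layer computes MLP((1 + eps) · h_v + Σ_{u ∈ N(v)} h_u), so eps weights the node's own representation against the sum of its neighbors'. A minimal pure-Python sketch of the aggregation that precedes the MLP (`gin_aggregate` is a hypothetical helper, not grb's implementation):

```python
def gin_aggregate(adj, h, eps=0.0):
    # GIN aggregation: (1 + eps) * h_v + sum over neighbors u of h_u.
    # The layer's MLP is applied to the returned matrix.
    n, d = len(h), len(h[0])
    return [[(1 + eps) * h[i][f] + sum(adj[i][j] * h[j][f] for j in range(n))
             for f in range(d)]
            for i in range(n)]
```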
grb.model.torch.graphsage¶
Torch module for GraphSAGE.
- class grb.model.torch.graphsage.GraphSAGE(in_features, out_features, hidden_features, n_layers, activation=<function relu>, layer_norm=False, feat_norm=None, adj_norm_func=<function SAGEAdjNorm>, mu=2.0, dropout=0.0)[source]¶
Bases:
Module
Inductive Representation Learning on Large Graphs (GraphSAGE)
- Parameters
in_features (int) – Dimension of input features.
out_features (int) – Dimension of output features.
hidden_features (int or list of int) – Dimension of hidden features. List if multi-layer.
n_layers (int) – Number of layers.
layer_norm (bool, optional) – Whether to use layer normalization. Default: False.
activation (func of torch.nn.functional, optional) – Activation function. Default: torch.nn.functional.relu.
feat_norm (str, optional) – Type of feature normalization, choose from [“arctan”, “tanh”, None]. Default: None.
adj_norm_func (func of utils.normalize, optional) – Function that normalizes the adjacency matrix. Default: SAGEAdjNorm.
mu (float, optional) – Hyper-parameter; refer to the original paper. Default: 2.0.
dropout (float, optional) – Rate of dropout during training. Default: 0.0.
- forward(x, adj)[source]¶
- Parameters
x (torch.Tensor) – Tensor of input features.
adj (torch.SparseTensor) – Sparse tensor of adjacency matrix.
- Returns
x – Output of model (logits without activation).
- Return type
torch.Tensor
- property model_name¶
- property model_type¶
Indicate type of implementation.
- training: bool¶
- class grb.model.torch.graphsage.SAGEConv(in_features, pool_features, out_features, activation=None, mu=2.0, dropout=0.0)[source]¶
Bases:
Module
SAGE convolutional layer.
- Parameters
in_features (int) – Dimension of input features.
pool_features (int) – Dimension of pooling features.
out_features (int) – Dimension of output features.
activation (func of torch.nn.functional, optional) – Activation function. Default: None.
mu (float, optional) – Hyper-parameter; refer to the original paper. Default: 2.0.
dropout (float, optional) – Rate of dropout during training. Default: 0.0.
- forward(x, adj)[source]¶
- Parameters
x (torch.Tensor) – Tensor of input features.
adj (torch.SparseTensor) – Sparse tensor of adjacency matrix.
- Returns
x – Output of layer.
- Return type
torch.Tensor
- training: bool¶
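Example (illustration only): GraphSAGE builds each node's representation by combining its own features with an aggregate of its neighbors' features. A minimal pure-Python sketch of the paper's mean aggregator with self/neighbor concatenation (`sage_mean_aggregate` is a hypothetical helper; grb's SAGEConv uses a pooling variant parameterized by mu, which this sketch does not reproduce):

```python
def sage_mean_aggregate(adj, h):
    # GraphSAGE-mean: for each node, concatenate its own features with
    # the mean of its neighbors' features: [h_v || mean(h_u, u in N(v))].
    n, d = len(h), len(h[0])
    out = []
    for i in range(n):
        deg = sum(adj[i])
        neigh = [sum(adj[i][j] * h[j][f] for j in range(n)) / deg if deg > 0
                 else 0.0
                 for f in range(d)]
        out.append(h[i] + neigh)  # list concatenation doubles the width
    return out
```

A linear layer is then applied to the concatenated vectors to produce the layer output.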
grb.model.torch.robustgcn¶
grb.model.torch.sgcn¶
Torch module for SGCN.
- class grb.model.torch.sgcn.SGCN(in_features, out_features, hidden_features, n_layers, activation=<built-in method tanh of type object>, feat_norm=None, adj_norm_func=<function GCNAdjNorm>, layer_norm=False, batch_norm=False, k=4, dropout=0.0)[source]¶
Bases:
Module
Simplifying Graph Convolutional Networks (SGCN)
- Parameters
in_features (int) – Dimension of input features.
out_features (int) – Dimension of output features.
hidden_features (int or list of int) – Dimension of hidden features. List if multi-layer.
n_layers (int) – Number of layers.
layer_norm (bool, optional) – Whether to use layer normalization. Default: False.
batch_norm (bool, optional) – Whether to apply batch normalization. Default: False.
activation (func of torch.nn.functional, optional) – Activation function. Default: torch.tanh.
k (int, optional) – Number of propagation hops; refer to the original paper. Default: 4.
feat_norm (str, optional) – Type of feature normalization, choose from [“arctan”, “tanh”, None]. Default: None.
adj_norm_func (func of utils.normalize, optional) – Function that normalizes the adjacency matrix. Default: GCNAdjNorm.
dropout (float, optional) – Rate of dropout during training. Default: 0.0.
- forward(x, adj)[source]¶
- Parameters
x (torch.Tensor) – Tensor of input features.
adj (torch.SparseTensor) – Sparse tensor of adjacency matrix.
- Returns
x – Output of model (logits without activation).
- Return type
torch.Tensor
- property model_name¶
- property model_type¶
Indicate type of implementation.
- training: bool¶
- class grb.model.torch.sgcn.SGConv(in_features, out_features, k)[source]¶
Bases:
Module
SGCN convolutional layer.
- Parameters
in_features (int) – Dimension of input features.
out_features (int) – Dimension of output features.
k (int) – Number of propagation hops; refer to the original paper.
- forward(x, adj)[source]¶
- Parameters
x (torch.Tensor) – Tensor of input features.
adj (torch.SparseTensor) – Sparse tensor of adjacency matrix.
- Returns
x – Output of layer.
- Return type
torch.Tensor
- training: bool¶
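Example (illustration only): SGC removes the nonlinearities between layers, so the model reduces to propagating the features k times with the normalized adjacency S and then applying a single linear classifier: softmax(S^k X W). A minimal pure-Python sketch of the S^k X propagation (`sgc_propagate` is a hypothetical helper, not grb's implementation):

```python
def sgc_propagate(adj_norm, x, k):
    # SGC feature pre-propagation: X <- S^k X, where S is the normalized
    # adjacency; a single linear layer is then applied to the result.
    n, d = len(x), len(x[0])
    out = [row[:] for row in x]
    for _ in range(k):
        out = [[sum(adj_norm[i][j] * out[j][f] for j in range(n))
                for f in range(d)]
               for i in range(n)]
    return out
```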
grb.model.torch.tagcn¶
Torch module for TAGCN.
- class grb.model.torch.tagcn.TAGCN(in_features, out_features, hidden_features, n_layers, k, activation=<function leaky_relu>, feat_norm=None, adj_norm_func=<function GCNAdjNorm>, layer_norm=False, batch_norm=False, dropout=0.0)[source]¶
Bases:
Module
Topology Adaptive Graph Convolutional Networks (TAGCN)
- Parameters
in_features (int) – Dimension of input features.
out_features (int) – Dimension of output features.
hidden_features (int or list of int) – Dimension of hidden features. List if multi-layer.
n_layers (int) – Number of layers.
k (int) – Number of hops, i.e. powers of the adjacency matrix used by the polynomial filter; refer to the original paper.
layer_norm (bool, optional) – Whether to use layer normalization. Default: False.
batch_norm (bool, optional) – Whether to apply batch normalization. Default: False.
activation (func of torch.nn.functional, optional) – Activation function. Default: torch.nn.functional.leaky_relu.
feat_norm (str, optional) – Type of feature normalization, choose from [“arctan”, “tanh”, None]. Default: None.
adj_norm_func (func of utils.normalize, optional) – Function that normalizes the adjacency matrix. Default: GCNAdjNorm.
dropout (float, optional) – Rate of dropout during training. Default: 0.0.
- forward(x, adj)[source]¶
- Parameters
x (torch.Tensor) – Tensor of input features.
adj (torch.SparseTensor) – Sparse tensor of adjacency matrix.
- Returns
x – Output of model (logits without activation).
- Return type
torch.Tensor
- property model_name¶
- property model_type¶
Indicate type of implementation.
- training: bool¶
- class grb.model.torch.tagcn.TAGConv(in_features, out_features, k=2, activation=None, batch_norm=False, dropout=0.0)[source]¶
Bases:
Module
TAGCN convolutional layer.
- Parameters
in_features (int) – Dimension of input features.
out_features (int) – Dimension of output features.
k (int, optional) – Number of hops; refer to the original paper. Default: 2.
activation (func of torch.nn.functional, optional) – Activation function. Default: None.
batch_norm (bool, optional) – Whether to apply batch normalization. Default: False.
dropout (float, optional) – Rate of dropout during training. Default: 0.0.
- forward(x, adj)[source]¶
- Parameters
x (torch.Tensor) – Tensor of input features.
adj (torch.SparseTensor) – Sparse tensor of adjacency matrix.
- Returns
x – Output of layer.
- Return type
torch.Tensor
- training: bool¶
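Example (illustration only): a TAGConv layer applies a polynomial filter over the normalized adjacency, Σ_{j=0}^{k} A^j X W_j, which is equivalent to concatenating [X, AX, ..., A^k X] and applying one linear layer. A minimal pure-Python sketch of that concatenated-input construction (`tag_filter_features` is a hypothetical helper, not grb's implementation):

```python
def tag_filter_features(adj_norm, x, k):
    # Build the TAGConv input [X, A X, ..., A^k X] per node;
    # a single linear layer (one weight block per hop) is applied afterwards.
    n, d = len(x), len(x[0])
    powers = [[row[:] for row in x]]  # A^0 X = X
    for _ in range(k):
        prev = powers[-1]
        powers.append([[sum(adj_norm[i][j] * prev[j][f] for j in range(n))
                        for f in range(d)]
                       for i in range(n)])
    # concatenate the k+1 hop features for each node
    return [sum((p[i] for p in powers), []) for i in range(n)]
```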