Belle II Software light-2406-ragdoll
GlobalLayer Class Reference

Public Member Functions

def __init__ (self, nfeat_in_dim, efeat_in_dim, gfeat_in_dim, gfeat_hid_dim, gfeat_out_dim, num_hid_layers, dropout, normalize=True)
 
def forward (self, x, edge_index, edge_attr, u, batch)
 

Public Attributes

 nonlin_function
 Non-linear activation.
 
 num_hid_layers
 Number of hidden layers.
 
 dropout_prob
 Dropout probability.
 
 normalize
 Normalization flag.
 
 lin_in
 Input linear layer.
 
 lins_hid
 Intermediate linear layers.
 
 lin_out
 Output linear layer.
 
 norm
 Batch normalization.
 

Detailed Description

Updates global features in MetaLayer:

.. math::
    u_{i}^{'} = \phi^{u}(\rho^{e \to u}(e), \rho^{v \to u}(v), u)

with

.. math::
    \rho^{e \to u}(e) = \frac{\sum_{i,j=1,\ i \neq j}^{N} e_{ij}}{N \cdot (N-1)},\\
    \rho^{v \to u}(v) = \frac{\sum_{i=1}^{N} v_{i}}{N},

where :math:`\phi^{u}` is a neural network of the form

.. figure:: figs/MLP_structure.png
    :width: 42em
    :align: center

Args:
    nfeat_in_dim (int): Node features input dimension (number of node features in input).
    efeat_in_dim (int): Edge features input dimension (number of edge features in input).
    gfeat_in_dim (int): Global features input dimension (number of global features in input).
    gfeat_hid_dim (int): Global features dimension in hidden layers.
    gfeat_out_dim (int): Global features output dimension.
    num_hid_layers (int): Number of hidden layers.
    dropout (float): Dropout rate :math:`r \in [0,1]`.
    normalize (bool): Whether to apply batch normalization to the output (default True).

:return: Updated global features tensor.
:rtype: `Tensor <https://pytorch.org/docs/stable/tensors.html#torch.Tensor>`_

Definition at line 254 of file geometric_layers.py.
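A minimal usage sketch (not part of the source): it assumes GlobalLayer can be imported from geometric_layers and builds a toy batch of two fully connected three-node graphs; all feature dimensions are arbitrary. Only the constructor signature and the forward arguments documented below are taken from this page.

import torch

from geometric_layers import GlobalLayer  # assumed import path

# Toy batch: two fully connected graphs of 3 nodes each
# (N = 6 nodes, E = 12 directed edges, B = 2 graphs).
layer = GlobalLayer(
    nfeat_in_dim=8,    # F_x
    efeat_in_dim=4,    # F_e
    gfeat_in_dim=2,    # F_u
    gfeat_hid_dim=16,
    gfeat_out_dim=2,
    num_hid_layers=2,
    dropout=0.1,
    normalize=True,
)

x = torch.randn(6, 8)                       # node features [N, F_x]
edge_index = torch.tensor(
    [[0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
     [1, 2, 0, 2, 0, 1, 4, 5, 3, 5, 3, 4]]  # nodes 3-5 form the second graph
)                                           # [2, E]
edge_attr = torch.randn(12, 4)              # edge features [E, F_e]
u = torch.randn(2, 2)                       # global features [B, F_u]
batch = torch.tensor([0, 0, 0, 1, 1, 1])    # node-to-graph assignment [N]

layer.eval()                                # disable dropout for a reproducible pass
out = layer(x, edge_index, edge_attr, u, batch)
print(out.shape)                            # torch.Size([2, 2]) -> [B, gfeat_out_dim]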

Constructor & Destructor Documentation

◆ __init__()

def __init__ (   self,
  nfeat_in_dim,
  efeat_in_dim,
  gfeat_in_dim,
  gfeat_hid_dim,
  gfeat_out_dim,
  num_hid_layers,
  dropout,
  normalize = True 
)
Initialization.

Definition at line 287 of file geometric_layers.py.

297     ):
298         """
299         Initialization.
300         """
301         super(GlobalLayer, self).__init__()
302 
303 
304         self.nonlin_function = F.elu
305 
306         self.num_hid_layers = num_hid_layers
307 
308         self.dropout_prob = dropout
309 
310         self.normalize = normalize
311 
312 
313         self.lin_in = nn.Linear(
314             nfeat_in_dim + efeat_in_dim + gfeat_in_dim, gfeat_hid_dim
315         )
316 
317         self.lins_hid = nn.ModuleList(
318             [
319                 nn.Linear(gfeat_hid_dim, gfeat_hid_dim)
320                 for _ in range(self.num_hid_layers)
321             ]
322         )
323 
324         self.lin_out = nn.Linear(gfeat_hid_dim, gfeat_out_dim, bias=not normalize)
325 
326         if normalize:
327 
328             self.norm = nn.BatchNorm1d(gfeat_out_dim)
329 
330         _init_weights(self, normalize)
331 
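For the dimensions used in the usage sketch above, the constructor assembles the stack below (an illustration, not source code; the ELU activation and dropout are applied functionally in forward() and are therefore not modules here):

    lin_in   : Linear(14, 16)    # 14 = nfeat_in_dim + efeat_in_dim + gfeat_in_dim
    lins_hid : 2 x Linear(16, 16)
    lin_out  : Linear(16, 2, bias=False)
    norm     : BatchNorm1d(2)

Note that lin_out is created with bias=not normalize: when batch normalization follows, its learnable affine shift makes a separate output bias redundant. _init_weights is a helper defined elsewhere in geometric_layers.py; its body is not shown on this page.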

Member Function Documentation

◆ forward()

def forward (   self,
  x,
  edge_index,
  edge_attr,
  u,
  batch 
)
Called internally by PyTorch to propagate the input through the network.
 - x: [N, F_x], where N is the number of nodes.
 - edge_index: [2, E] with max entry N - 1.
 - edge_attr: [E, F_e]
 - u: [B, F_u]
 - batch: [N] with max entry B - 1.

Node features are averaged over each graph

Definition at line 332 of file geometric_layers.py.

332     def forward(self, x, edge_index, edge_attr, u, batch):
333         """
334         Called internally by PyTorch to propagate the input through the network.
335         - x: [N, F_x], where N is the number of nodes.
336         - edge_index: [2, E] with max entry N - 1.
337         - edge_attr: [E, F_e]
338         - u: [B, F_u]
339         - batch: [N] with max entry B - 1.
340 
341         Node features are averaged over each graph
342         """
343         node_mean = scatter(
344             x, batch, dim=0, reduce="mean"
345         )
346         # Edge features are averaged over their target nodes
347         edge_mean = scatter(
348             edge_attr, edge_index[1], dim=0, reduce="mean"
349         )
350         # ... and then over each graph
351         edge_mean = scatter(
352             edge_mean, batch, dim=0, reduce="mean"
353         )
354         out = (
355             torch.cat([u, node_mean, edge_mean], dim=1)
356             if u.shape != torch.Size([0])
357             else torch.cat([node_mean, edge_mean], dim=1)
358         )
359 
360         out = self.nonlin_function(self.lin_in(out))
361         out = F.dropout(out, self.dropout_prob, training=self.training)
362 
363         out_skip = out
364 
365         for lin_hid in self.lins_hid:
366             out = self.nonlin_function(lin_hid(out))
367             out = F.dropout(out, self.dropout_prob, training=self.training)
368 
369         if self.num_hid_layers > 1:
370             out += out_skip
371 
372         if self.normalize:
373             out = self.nonlin_function(self.norm(self.lin_out(out)))
374         else:
375             out = self.lin_out(out)
376 
377         return out
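The two edge scatters implement :math:`\rho^{e \to u}` in two steps: edge features are first averaged onto their target nodes (edge_index[1]) and then averaged per graph, which for a fully connected graph of N nodes equals the sum over all :math:`e_{ij}` divided by N(N-1); node features are averaged per graph directly. A minimal check, reusing the tensors from the usage sketch above and assuming scatter is the one from torch_geometric.utils:

import torch
from torch_geometric.utils import scatter  # assumed origin of scatter()

node_mean = scatter(x, batch, dim=0, reduce="mean")                     # [B, F_x] = [2, 8]
edge_to_node = scatter(edge_attr, edge_index[1], dim=0, reduce="mean")  # [N, F_e] = [6, 4]
edge_mean = scatter(edge_to_node, batch, dim=0, reduce="mean")          # [B, F_e] = [2, 4]

# Per-graph node mean agrees with averaging the rows directly:
assert torch.allclose(node_mean[0], x[:3].mean(dim=0))
# For a fully connected graph the two-step edge mean equals the sum of
# its 6 directed edges divided by N * (N - 1) = 6:
assert torch.allclose(edge_mean[0], edge_attr[:6].sum(dim=0) / 6)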

Member Data Documentation

◆ dropout_prob

dropout_prob

Dropout probability.

Definition at line 308 of file geometric_layers.py.

◆ lin_in

lin_in

Input linear layer.

Definition at line 313 of file geometric_layers.py.

◆ lin_out

lin_out

Output linear layer.

Definition at line 324 of file geometric_layers.py.

◆ lins_hid

lins_hid

Intermediate linear layers.

Definition at line 317 of file geometric_layers.py.

◆ nonlin_function

nonlin_function

Non-linear activation.

Definition at line 304 of file geometric_layers.py.

◆ norm

norm

Batch normalization.

Definition at line 328 of file geometric_layers.py.

◆ normalize

normalize

Normalization flag.

Definition at line 310 of file geometric_layers.py.

◆ num_hid_layers

num_hid_layers

Number of hidden layers.

Definition at line 306 of file geometric_layers.py.


The documentation for this class was generated from the following file:
geometric_layers.py