Belle II Software development
EncoderBlock Class Reference
Inheritance diagram for EncoderBlock:

Public Member Functions

def __init__ (self, input_dim, num_heads, dim_feedforward, dropout, which_linear)
 Constructor.
 
def forward (self, x)
 Forward pass.
 

Public Attributes

 which_linear
 Callable that constructs the linear layers (used in the attention block and the MLP).
 
 self_attn
 Multi-head self-attention layer.
 
 linear_net
 Two-layer MLP.
 
 norm1
 First layer normalization, applied before the attention layer.
 
 norm2
 Second layer normalization, applied before the MLP.
 
 dropout
 Dropout layer applied to the attention and MLP outputs.
 

Detailed Description

Transformer encoder block: a pre-layer-norm multi-head self-attention layer followed by a two-layer MLP, each wrapped in a residual connection with dropout.

Definition at line 905 of file ieagan.py.

Constructor & Destructor Documentation

◆ __init__()

def __init__ (   self,
  input_dim,
  num_heads,
  dim_feedforward,
  dropout,
  which_linear 
)

Constructor.

Inputs:
    input_dim - Dimensionality of the input
    num_heads - Number of heads to use in the attention block
    dim_feedforward - Dimensionality of the hidden layer in the MLP
    dropout - Dropout probability to use in the dropout layers
    which_linear - Callable used to construct the linear layers

Definition at line 909 of file ieagan.py.

909 def __init__(self, input_dim, num_heads, dim_feedforward, dropout, which_linear):
910 """
911 Inputs:
912 input_dim - Dimensionality of the input
913 num_heads - Number of heads to use in the attention block
914 dim_feedforward - Dimensionality of the hidden layer in the MLP
915 dropout - Dropout probability to use in the dropout layers
916 """
917 super().__init__()
918
919
920 self.which_linear = which_linear
921
922 self.self_attn = MultiheadAttention(
923 input_dim, input_dim, num_heads, which_linear
924 )
925
926
927 self.linear_net = nn.Sequential(
928 self.which_linear(input_dim, dim_feedforward),
929 nn.Dropout(dropout),
930 nn.ReLU(inplace=True),
931 self.which_linear(dim_feedforward, input_dim),
932 )
933
934 # Layers to apply in between the main layers
935
936 self.norm1 = nn.LayerNorm(input_dim)
937
938 self.norm2 = nn.LayerNorm(input_dim)
939
940 self.dropout = nn.Dropout(dropout)
941
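A minimal construction sketch (the dimensions below are illustrative, and passing nn.Linear as which_linear is an assumption; MultiheadAttention is the attention class defined earlier in ieagan.py):

import torch
from torch import nn

# Illustrative hyperparameters; which_linear could also be a spectrally
# normalized linear-layer wrapper instead of plain nn.Linear.
block = EncoderBlock(
    input_dim=64,
    num_heads=4,
    dim_feedforward=256,
    dropout=0.1,
    which_linear=nn.Linear,
)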

Member Function Documentation

◆ forward()

def forward (   self,
  x 
)

Forward pass: applies pre-norm multi-head self-attention and the two-layer MLP, each with a residual connection and dropout.

Definition at line 943 of file ieagan.py.

943 def forward(self, x):
944 # Attention part
945 x_pre1 = self.norm1(x)
946 attn_out = self.self_attn(x_pre1)
947 x = x + self.dropout(attn_out)
948 # x = self.norm1(x)
949
950 # MLP part
951 x_pre2 = self.norm2(x)
952 linear_out = self.linear_net(x_pre2)
953 x = x + self.dropout(linear_out)
954 # x = self.norm2(x)
955
956 return x
957
958
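Continuing the construction sketch above, the block is shape-preserving; the input is assumed to have shape (batch, sequence, input_dim), with illustrative sizes:

x = torch.randn(8, 16, 64)   # (batch, sequence, input_dim), illustrative shape
out = block(x)               # residual structure keeps the shape: (8, 16, 64)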

Member Data Documentation

◆ dropout

dropout

Dropout layer applied to the attention and MLP outputs.

Definition at line 940 of file ieagan.py.

◆ linear_net

linear_net

Two-layer MLP.

Definition at line 927 of file ieagan.py.

◆ norm1

norm1

First layer normalization, applied before the attention layer.

Definition at line 936 of file ieagan.py.

◆ norm2

norm2

Second layer normalization, applied before the MLP.

Definition at line 938 of file ieagan.py.

◆ self_attn

self_attn

Attention layer.

Definition at line 922 of file ieagan.py.

◆ which_linear

which_linear

Callable that constructs the linear layers.

Definition at line 920 of file ieagan.py.


The documentation for this class was generated from the following file:

ieagan.py