Belle II Software development
EncoderBlock Class Reference
Inheritance diagram for EncoderBlock:

Public Member Functions

 __init__ (self, input_dim, num_heads, dim_feedforward, dropout, which_linear)
 Constructor.
 
 forward (self, x)
 Forward pass of the encoder block.
 

Public Attributes

 which_linear = which_linear
 Linear layer constructor.
 
 self_attn
 Attention layer.
 
 linear_net
 Two-layer MLP.
 
 norm1 = nn.LayerNorm(input_dim)
 First layer normalization (applied before the attention sublayer).
 
 norm2 = nn.LayerNorm(input_dim)
 Second layer normalization (applied before the MLP sublayer).
 
 dropout = nn.Dropout(dropout)
 Dropout layer applied to both residual branches.
 

Detailed Description

Transformer encoder block: a multi-head self-attention sublayer followed by a two-layer MLP, each preceded by layer normalization and added back to its input through a dropout-regularized residual connection (pre-LN arrangement).

Definition at line 933 of file ieagan.py.
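
For orientation, a minimal usage sketch (not part of ieagan.py); it assumes EncoderBlock is importable from that module and that a plain torch.nn.Linear is an acceptable which_linear factory:

    import torch
    import torch.nn as nn

    from ieagan import EncoderBlock  # adjust the import path to your installation

    # Hypothetical dimensions chosen for illustration only.
    block = EncoderBlock(
        input_dim=64,            # feature dimension of each token
        num_heads=4,             # attention heads; assumed to divide the attention embedding dim
        dim_feedforward=128,     # hidden width of the two-layer MLP
        dropout=0.1,             # dropout probability
        which_linear=nn.Linear,  # any callable taking (in_features, out_features)
    )

    x = torch.randn(8, 40, 64)   # (batch, sequence length, input_dim)
    out = block(x)               # residual connections preserve the shape: (8, 40, 64)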

Constructor & Destructor Documentation

◆ __init__()

__init__ ( self,
input_dim,
num_heads,
dim_feedforward,
dropout,
which_linear )

Constructor.

Inputs:
    input_dim - Dimensionality of the input
    num_heads - Number of heads to use in the attention block
    dim_feedforward - Dimensionality of the hidden layer in the MLP
    dropout - Dropout probability to use in the dropout layers

Definition at line 937 of file ieagan.py.

937 def __init__(self, input_dim, num_heads, dim_feedforward, dropout, which_linear):
938     """
939     Inputs:
940         input_dim - Dimensionality of the input
941         num_heads - Number of heads to use in the attention block
942         dim_feedforward - Dimensionality of the hidden layer in the MLP
943         dropout - Dropout probability to use in the dropout layers
944     """
945     super().__init__()
946
947
948     self.which_linear = which_linear
949
950     self.self_attn = MultiheadAttention(
951         input_dim, input_dim, num_heads, which_linear
952     )
953
954
955     self.linear_net = nn.Sequential(
956         self.which_linear(input_dim, dim_feedforward),
957         nn.Dropout(dropout),
958         nn.ReLU(inplace=True),
959         self.which_linear(dim_feedforward, input_dim),
960     )
961
962     # Layers to apply in between the main layers
963
964     self.norm1 = nn.LayerNorm(input_dim)
965
966     self.norm2 = nn.LayerNorm(input_dim)
967
968     self.dropout = nn.Dropout(dropout)
969
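
Note that the docstring does not describe which_linear; the body uses it as a layer factory, calling self.which_linear(input_dim, dim_feedforward). A hedged sketch of a compatible factory (the name sn_linear is made up here for illustration; the actual choice in ieagan.py may differ):

    import torch.nn as nn
    from torch.nn.utils import spectral_norm

    def sn_linear(in_features, out_features):
        # Illustrative factory: any callable taking (in_features, out_features)
        # and returning a linear-like nn.Module can be passed as which_linear.
        return spectral_norm(nn.Linear(in_features, out_features))

    block = EncoderBlock(64, 4, 128, 0.1, which_linear=sn_linear)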

Member Function Documentation

◆ forward()

forward ( self,
x )

Forward pass of the encoder block.

Definition at line 971 of file ieagan.py.

971 def forward(self, x):
972     # Attention part
973     x_pre1 = self.norm1(x)
974     attn_out = self.self_attn(x_pre1)
975     x = x + self.dropout(attn_out)
976     # x = self.norm1(x)
977
978     # MLP part
979     x_pre2 = self.norm2(x)
980     linear_out = self.linear_net(x_pre2)
981     x = x + self.dropout(linear_out)
982     # x = self.norm2(x)
983
984     return x
985
986
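
The commented-out norm calls after each residual addition show that normalization is applied before the sublayers (pre-LN) rather than after them. A small sanity check, assuming the block instance from the usage sketch above:

    import torch

    block.eval()                   # disable dropout for a deterministic pass
    x = torch.randn(2, 10, 64)     # (batch, tokens, input_dim)
    with torch.no_grad():
        y = block(x)
    assert y.shape == x.shape      # both residual branches preserve the input shape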

Member Data Documentation

◆ dropout

dropout = nn.Dropout(dropout)

Dropout layer applied to both residual branches.

Definition at line 968 of file ieagan.py.

◆ linear_net

linear_net
Initial value:
= nn.Sequential(
      self.which_linear(input_dim, dim_feedforward),
      nn.Dropout(dropout),
      nn.ReLU(inplace=True),
      self.which_linear(dim_feedforward, input_dim),
  )

Two-layer MLP.

Definition at line 955 of file ieagan.py.

◆ norm1

norm1 = nn.LayerNorm(input_dim)

First layer normalization (applied before the attention sublayer).

Definition at line 964 of file ieagan.py.

◆ norm2

norm2 = nn.LayerNorm(input_dim)

Second layer normalization (applied before the MLP sublayer).

Definition at line 966 of file ieagan.py.

◆ self_attn

self_attn
Initial value:
= MultiheadAttention(
      input_dim, input_dim, num_heads, which_linear
  )

Attention layer.

Definition at line 950 of file ieagan.py.

◆ which_linear

which_linear = which_linear

Linear layer constructor.

Definition at line 948 of file ieagan.py.


The documentation for this class was generated from the following file:
ieagan.py