Belle II Software development
PriorModel Class Reference
Inheritance diagram for PriorModel:

Public Member Functions

 __init__ (self, int n_output)
 
torch.tensor forward (self, torch.tensor x)
 

Public Attributes

 hidden1 = Linear(9, 128)
 Linear layer with 9 inputs and 128 outputs.
 
 act1 = ReLU()
 ReLU activation layer.
 
 hidden2 = BatchNorm1d(128)
 Batch normalization layer.
 
 hidden3 = Linear(128, 64)
 Linear layer with 128 inputs and 64 outputs.
 
 act2 = ReLU()
 ReLU activation layer.
 
 hidden4 = BatchNorm1d(64)
 Batch normalization layer.
 
 hidden5 = Linear(64, 32)
 Linear layer with 64 inputs and 32 outputs.
 
 act3 = ReLU()
 ReLU activation layer.
 
 hidden6 = BatchNorm1d(32)
 Batch normalization layer.
 
 hidden7 = Linear(32, n_output)
 Linear layer with 32 inputs and one output for each particle in the particle list.
 
 act4 = Softmax(dim=1)
 Softmax activation layer.
 

Detailed Description

PyTorch model for PID prior probability calculation.

Attributes:
    hidden1: Linear layer with 9 inputs and 128 outputs.
    act1: A ReLU activation layer.
    hidden2: A batch normalization layer.
    hidden3: Linear layer with 128 inputs and 64 outputs.
    act2: A ReLU activation layer.
    hidden4: A batch normalization layer.
    hidden5: Linear layer with 64 inputs and 32 outputs.
    act3: A ReLU activation layer.
    hidden6: A batch normalization layer.
    hidden7: Linear layer with 32 inputs and n_output outputs.
    act4: A softmax activation layer.

Definition at line 108 of file priorDataLoaderAndModel.py.

Constructor & Destructor Documentation

◆ __init__()

__init__ ( self,
int n_output )
Initialize the PID prior probability model.

Parameter:
    n_output (int): Number of output nodes.

Definition at line 127 of file priorDataLoaderAndModel.py.

127 def __init__(self, n_output: int):
128 """
129 Initialize the PID prior probability model.
130
131 Parameter:
132 n_output (int): Number of output nodes.
133
134 """
135 super().__init__()
136
137 self.hidden1 = Linear(9, 128)
138 kaiming_uniform_(self.hidden1.weight, nonlinearity="relu")
139
140 self.act1 = ReLU()
141
142 self.hidden2 = BatchNorm1d(128)
143
144 self.hidden3 = Linear(128, 64)
145 kaiming_uniform_(self.hidden3.weight, nonlinearity="relu")
146
147 self.act2 = ReLU()
148
149 self.hidden4 = BatchNorm1d(64)
150
151 self.hidden5 = Linear(64, 32)
152 kaiming_uniform_(self.hidden5.weight, nonlinearity="relu")
153
154 self.act3 = ReLU()
155
156 self.hidden6 = BatchNorm1d(32)
157
158 self.hidden7 = Linear(32, n_output)
159 xavier_uniform_(self.hidden7.weight)
160
161 self.act4 = Softmax(dim=1)
162
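The constructor pairs each weight initializer with the activation that follows the layer: Kaiming (He) uniform for the ReLU-fed hidden layers, Xavier (Glorot) uniform for the softmax output layer. A minimal standalone sketch of the same pattern, with layer sizes taken from the listing above (the seed and the output width of 6 are arbitrary illustration choices, not values from the source):

```python
import torch
from torch.nn import Linear
from torch.nn.init import kaiming_uniform_, xavier_uniform_

torch.manual_seed(0)  # arbitrary seed, for reproducibility only

# Hidden layer followed by a ReLU: He initialization keeps the
# activation variance stable through the rectifier.
hidden = Linear(9, 128)
kaiming_uniform_(hidden.weight, nonlinearity="relu")

# Output layer followed by a Softmax: Xavier initialization is the
# conventional choice when no ReLU follows.
output = Linear(32, 6)  # 6 is a hypothetical n_output
xavier_uniform_(output.weight)

print(hidden.weight.shape, output.weight.shape)
```

Note that `kaiming_uniform_` and `xavier_uniform_` modify the weight tensors in place, so they are applied after each `Linear` layer is constructed, exactly as in the listing above.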

Member Function Documentation

◆ forward()

torch.tensor forward ( self,
torch.tensor x )
Gives PID prior probabilities for the input features.

Parameter:
    x (torch.tensor): A 2D tensor containing features for a particle as a row.

Returns:
    A torch tensor containing PID prior probabilities for the provided features.

Definition at line 163 of file priorDataLoaderAndModel.py.

163 def forward(self, x: torch.tensor) -> torch.tensor:
164 """
165 Gives PID prior probabilities for the input features.
166
167 Parameter:
168 x (torch.tensor): A 2D tensor containing features for a particle as a row.
169
170 Returns:
171 A torch tensor containing PID prior probabilities for the provided features.
172 """
173 x = self.hidden1(x)
174 x = self.act1(x)
175 x = self.hidden2(x)
176 x = self.hidden3(x)
177 x = self.act2(x)
178 x = self.hidden4(x)
179 x = self.hidden5(x)
180 x = self.act3(x)
181 x = self.hidden6(x)
182 x = self.hidden7(x)
183 x = self.act4(x)
184 return x
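As a sanity check, the documented stack can be reproduced and run end to end. The sketch below condenses the listed layers into a standalone module (`PriorSketch` and `n_output=6` are illustration choices, not names from the source) and verifies that the forward pass returns one probability row per input particle:

```python
import torch
from torch.nn import Module, Linear, ReLU, BatchNorm1d, Softmax


class PriorSketch(Module):
    """Standalone re-creation of the documented layer stack."""

    def __init__(self, n_output: int):
        super().__init__()
        self.hidden1 = Linear(9, 128)
        self.act1 = ReLU()
        self.hidden2 = BatchNorm1d(128)
        self.hidden3 = Linear(128, 64)
        self.act2 = ReLU()
        self.hidden4 = BatchNorm1d(64)
        self.hidden5 = Linear(64, 32)
        self.act3 = ReLU()
        self.hidden6 = BatchNorm1d(32)
        self.hidden7 = Linear(32, n_output)
        self.act4 = Softmax(dim=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Same ordering as the listing: Linear -> ReLU -> BatchNorm
        x = self.hidden2(self.act1(self.hidden1(x)))
        x = self.hidden4(self.act2(self.hidden3(x)))
        x = self.hidden6(self.act3(self.hidden5(x)))
        return self.act4(self.hidden7(x))


model = PriorSketch(n_output=6).eval()  # eval(): BatchNorm uses running stats
x = torch.randn(5, 9)                   # 5 particles, 9 features each
probs = model(x)
print(probs.shape)                      # torch.Size([5, 6])
```

Because the last layer is a `Softmax(dim=1)`, every row of the output is a probability distribution over the `n_output` particle hypotheses.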

Member Data Documentation

◆ act1

act1 = ReLU()

ReLU activation layer.

Definition at line 140 of file priorDataLoaderAndModel.py.

◆ act2

act2 = ReLU()

ReLU activation layer.

Definition at line 147 of file priorDataLoaderAndModel.py.

◆ act3

act3 = ReLU()

ReLU activation layer.

Definition at line 154 of file priorDataLoaderAndModel.py.

◆ act4

act4 = Softmax(dim=1)

Softmax activation layer.

Definition at line 161 of file priorDataLoaderAndModel.py.
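`Softmax(dim=1)` normalizes across the columns, so each row of the network output becomes a probability distribution over the particle hypotheses. A small illustration with arbitrary logits:

```python
import torch
from torch.nn import Softmax

softmax = Softmax(dim=1)  # dim=1: normalize each row across its columns
logits = torch.tensor([[1.0, 2.0, 3.0],
                       [0.0, 0.0, 0.0]])
probs = softmax(logits)
print(probs.sum(dim=1))  # each row sums to 1
```

A row of identical logits, like the second row above, maps to a uniform distribution.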

◆ hidden1

hidden1 = Linear(9, 128)

Linear layer with 9 inputs and 128 outputs.

Definition at line 137 of file priorDataLoaderAndModel.py.

◆ hidden2

hidden2 = BatchNorm1d(128)

Batch normalization layer.

Definition at line 142 of file priorDataLoaderAndModel.py.

◆ hidden3

hidden3 = Linear(128, 64)

Linear layer with 128 inputs and 64 outputs.

Definition at line 144 of file priorDataLoaderAndModel.py.

◆ hidden4

hidden4 = BatchNorm1d(64)

Batch normalization layer.

Definition at line 149 of file priorDataLoaderAndModel.py.

◆ hidden5

hidden5 = Linear(64, 32)

Linear layer with 64 inputs and 32 outputs.

Definition at line 151 of file priorDataLoaderAndModel.py.

◆ hidden6

hidden6 = BatchNorm1d(32)

Batch normalization layer.

Definition at line 156 of file priorDataLoaderAndModel.py.

◆ hidden7

hidden7 = Linear(32, n_output)

Linear layer with 32 inputs and one output for each particle in the particle list.

Definition at line 158 of file priorDataLoaderAndModel.py.


The documentation for this class was generated from the following file: priorDataLoaderAndModel.py