Belle II Software light-2509-fornax


Public Member Functions

 __init__ (self, mlp, mom_init=.9, mom_max=.99, mom_epochs=200, lr_init=.05, lr_min=1e-6, lr_dec_rate=.976, stop_epochs=10, min_epochs=200, max_epochs=1000, wd_coeffs=None, change_optimizer=None, staircase=True, smooth_cross_entropy=False)
 initialize (self, data_set)
 __call__ (self, x)
 get_optimizer (self, epoch=0)
 loss (self, predict_y, true_y)
Public Attributes

 mlp = mlp
   mlp net
 wd_coeffs = wd_coeffs
   weight decay coefficients
 global_step = tf.Variable(0, trainable=False, name='global_step', dtype=tf.int64)
   global step
 c_mom_init = tf.constant(mom_init, dtype=tf.float32)
   initial momentum
 c_mom_max = tf.constant(mom_max, dtype=tf.float32)
   maximum momentum
 c_mom_epochs = tf.constant(mom_epochs, dtype=tf.float32)
   momentum epochs
 tuple c_mom_dec_rate = (self.c_mom_max - self.c_mom_init) / tf.cast(self.c_mom_epochs, tf.float32)
   momentum decay rate
 c_lr_init = tf.constant(lr_init, dtype=tf.float32)
   initial learning rate
 c_lr_min = tf.constant(lr_min, dtype=tf.float32)
   minimum learning rate
 c_lr_dec_rate = tf.constant(lr_dec_rate, dtype=tf.float32)
   learning rate decay rate
 c_stop_epochs = stop_epochs
   number of epochs without improvement for early termination
 c_staircase = staircase
   use staircase
 batches_per_epoch = None
   batches per epoch; unknown until initialize is called
 list optimizers = []
   define multiple optimizers
 optimizer_change_epochs = change_optimizer
   epochs at which the optimizer is changed
 min_epochs = min_epochs
   min epochs
 max_epochs = max_epochs
   max epochs
 termination_criterion = None
   termination criterion
 list recent_params = []
   recent params
 best_value = np.inf
   the best value is set to a default start value, then updated by the termination criterion
 step_countdown = self.c_stop_epochs
   step countdown
 smooth_cross_entropy = smooth_cross_entropy
   True for a small epsilon addition, False for a clipped network output
 bool is_initialized = False
   check if initialized
Protected Member Functions

 _default_termination_criterion (self, monitoring_param, epoch, prop_dec=1e-5)
 _get_learning_rate (self)
 _get_momentum (self)
 _set_optimizer (self)
define the default model
Definition at line 198 of file tensorflow_dnn_model.py.
__init__ (self,
          mlp,
          mom_init = .9,
          mom_max = .99,
          mom_epochs = 200,
          lr_init = .05,
          lr_min = 1e-6,
          lr_dec_rate = .976,
          stop_epochs = 10,
          min_epochs = 200,
          max_epochs = 1000,
          wd_coeffs = None,
          change_optimizer = None,
          staircase = True,
          smooth_cross_entropy = False)
Initialization function.
:param mlp: network model.
:param mom_init: initial momentum
:param mom_max: maximum momentum
:param mom_epochs: momentum epochs
:param lr_init: initial learning rate
:param lr_min: minimum learning rate
:param lr_dec_rate: learning rate decay factor
:param stop_epochs: number of epochs without improvement required for early termination
:param min_epochs: minimum number of epochs for training
:param max_epochs: maximum number of epochs for training
:param wd_coeffs: weight decay coefficients. If not None, must have one per mlp layer.
:param change_optimizer: epochs at which the optimizer is changed
:param staircase: use staircase learning rate decay
:param smooth_cross_entropy: True for a small epsilon addition, False for a clipped network output
Definition at line 203 of file tensorflow_dnn_model.py.
__call__ (self, x)
Call the mlp
Definition at line 336 of file tensorflow_dnn_model.py.
_default_termination_criterion (self, monitoring_param, epoch, prop_dec = 1e-5)   [protected]
early stopping criterion
:param monitoring_param: the parameter to monitor for early termination
:param epoch: the current epoch
:param prop_dec:
:return:
Definition at line 342 of file tensorflow_dnn_model.py.
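The concrete rule lives at the source line quoted above. Purely as a hedged sketch, the snippet below shows one common way to implement a proportional-decrease early-stopping criterion using the attributes documented in this class (best_value starting at np.inf, step_countdown reset from c_stop_epochs, min_epochs/max_epochs bounds); the class and method names here are hypothetical, not the verbatim implementation.

```python
import numpy as np

# Hedged sketch of a proportional-decrease early-stopping rule; not the
# verbatim implementation of _default_termination_criterion.
class EarlyStoppingSketch:
    def __init__(self, stop_epochs=10, min_epochs=200, max_epochs=1000):
        self.stop_epochs = stop_epochs
        self.min_epochs = min_epochs
        self.max_epochs = max_epochs
        self.best_value = np.inf           # default start value
        self.step_countdown = stop_epochs  # patience counter

    def should_stop(self, monitoring_param, epoch, prop_dec=1e-5):
        """Return True once the monitored value stops improving."""
        if epoch < self.min_epochs:
            return False                   # never terminate too early
        if monitoring_param < self.best_value * (1. - prop_dec):
            self.best_value = monitoring_param
            self.step_countdown = self.stop_epochs   # improvement: reset patience
        else:
            self.step_countdown -= 1                 # no sufficient improvement
        return self.step_countdown <= 0 or epoch >= self.max_epochs
```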
_get_learning_rate (self)   [protected]
Returns the learning rate at the current global step.
Definition at line 367 of file tensorflow_dnn_model.py.
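As a hedged illustration of such a schedule, the sketch below computes an exponentially decayed learning rate with an optional staircase and a lower bound, built from the hyperparameters documented above (c_lr_init, c_lr_dec_rate, c_lr_min, c_staircase, global_step, batches_per_epoch). The function name and exact form are assumptions, not the verbatim method.

```python
import tensorflow as tf

# Illustrative exponential decay with optional staircase and a lower bound;
# a sketch using the documented hyperparameters, not the verbatim method.
def learning_rate(global_step, batches_per_epoch, lr_init=.05,
                  lr_dec_rate=.976, lr_min=1e-6, staircase=True):
    epochs = tf.cast(global_step, tf.float32) / batches_per_epoch
    if staircase:
        epochs = tf.math.floor(epochs)   # decay once per completed epoch
    lr = lr_init * tf.pow(lr_dec_rate, epochs)
    return tf.maximum(lr, lr_min)        # never fall below lr_min

# e.g. after 1000 steps with 100 batches per epoch (10 epochs of decay)
print(learning_rate(tf.constant(1000, tf.int64), 100.).numpy())
```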
_get_momentum (self)   [protected]
Returns the momentum at the current global step.
Definition at line 376 of file tensorflow_dnn_model.py.
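A minimal sketch of such a schedule, assuming a linear ramp from mom_init towards mom_max over mom_epochs as suggested by the c_mom_* attributes above; details of the real method may differ.

```python
import tensorflow as tf

# Sketch of a momentum ramp from mom_init towards mom_max over mom_epochs,
# mirroring the c_mom_* attributes above; illustrative only.
def momentum(global_step, batches_per_epoch, mom_init=.9, mom_max=.99,
             mom_epochs=200., staircase=True):
    epochs = tf.cast(global_step, tf.float32) / batches_per_epoch
    if staircase:
        epochs = tf.math.floor(epochs)
    dec_rate = (mom_max - mom_init) / mom_epochs     # cf. c_mom_dec_rate
    return tf.minimum(mom_init + dec_rate * epochs, mom_max)
```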
_set_optimizer (self)   [protected]
set optimizers
Definition at line 391 of file tensorflow_dnn_model.py.
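The specific optimizers booked by the class are not listed on this page. Purely as an assumed illustration, booking could look like the following, with get_optimizer (below) switching between the entries at the epochs given by change_optimizer; the choice of SGD-with-momentum and Adam here is hypothetical.

```python
import tensorflow as tf

# Hypothetical booking of several optimizers; the concrete optimizer classes
# and settings are assumptions, not taken from the documented source.
optimizers = [
    tf.keras.optimizers.SGD(learning_rate=.05, momentum=.9),
    tf.keras.optimizers.Adam(learning_rate=1e-3),
]
optimizer_change_epochs = [400]   # e.g. switch to the second optimizer at epoch 400
```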
get_optimizer (self, epoch = 0)
Get the optimizer. If multiple optimizers are booked, returns the one appropriate for the epoch.
:param epoch: current epoch.
Definition at line 400 of file tensorflow_dnn_model.py.
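A minimal sketch of this selection logic, assuming optimizer_change_epochs holds the epochs at which to advance to the next booked optimizer:

```python
# Hedged sketch of epoch-based optimizer selection; illustrative only.
def select_optimizer(optimizers, optimizer_change_epochs, epoch=0):
    if not optimizer_change_epochs or len(optimizers) == 1:
        return optimizers[0]
    # count how many change points have already been passed
    index = sum(1 for change_epoch in optimizer_change_epochs if epoch >= change_epoch)
    return optimizers[min(index, len(optimizers) - 1)]

# With the example booking above: epochs 0-399 use the first optimizer,
# epoch 400 onwards the second.
```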
initialize (self, data_set)
Finalises initialization based on data_set-specific information (number of batches per epoch).
Definition at line 316 of file tensorflow_dnn_model.py.
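A hedged sketch of what this finalisation amounts to: the data set only needs to provide the number of batches per epoch, which fixes the step-based schedules above. The data_set interface used here (a batches attribute) is an assumption for illustration.

```python
# Illustrative finalisation step; the data_set.batches attribute is assumed.
def initialize(model, data_set):
    model.batches_per_epoch = data_set.batches  # number of batches per epoch
    model.is_initialized = True                 # schedules can now be evaluated
```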
loss (self, predict_y, true_y)
calculate the loss
:param predict_y: predicted labels
:param true_y: true labels
Definition at line 418 of file tensorflow_dnn_model.py.
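The two stabilisation variants named under smooth_cross_entropy can be sketched as follows, with an optional weight-decay term that takes one coefficient per layer as required by __init__. This is an illustration of the technique; the function name, the weights/epsilon arguments, and the exact form are assumptions, not the verbatim loss of the class.

```python
import tensorflow as tf

# Hedged sketch of a binary cross-entropy loss with either a small epsilon
# added inside the logarithms or a clipped network output, plus optional
# per-layer weight decay. Not the verbatim implementation.
def cross_entropy_loss(predict_y, true_y, weights=None, wd_coeffs=None,
                       smooth_cross_entropy=False, epsilon=1e-10):
    if smooth_cross_entropy:
        # epsilon addition keeps the logarithms finite
        ce = -(true_y * tf.math.log(predict_y + epsilon)
               + (1. - true_y) * tf.math.log(1. - predict_y + epsilon))
    else:
        # clip the network output away from 0 and 1 instead
        y = tf.clip_by_value(predict_y, epsilon, 1. - epsilon)
        ce = -(true_y * tf.math.log(y) + (1. - true_y) * tf.math.log(1. - y))
    total = tf.reduce_mean(ce)
    if wd_coeffs is not None and weights is not None:
        # one weight-decay coefficient per layer's weight tensor
        total += tf.add_n([c * tf.reduce_sum(tf.square(w))
                           for c, w in zip(wd_coeffs, weights)])
    return total
```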
batches_per_epoch = None
batches per epoch; unknown at construction, needs to be set with initialize.
Definition at line 276 of file tensorflow_dnn_model.py.

best_value = np.inf
the best value is set to a default start value, then updated by the termination criterion
Definition at line 301 of file tensorflow_dnn_model.py.

c_lr_dec_rate = tf.constant(lr_dec_rate, dtype=tf.float32)
learning rate decay rate
Definition at line 267 of file tensorflow_dnn_model.py.

c_lr_init = tf.constant(lr_init, dtype=tf.float32)
initial learning rate
Definition at line 261 of file tensorflow_dnn_model.py.

c_lr_min = tf.constant(lr_min, dtype=tf.float32)
minimum learning rate
Definition at line 264 of file tensorflow_dnn_model.py.

tuple c_mom_dec_rate = (self.c_mom_max - self.c_mom_init) / tf.cast(self.c_mom_epochs, tf.float32)
momentum decay rate
Definition at line 258 of file tensorflow_dnn_model.py.

c_mom_epochs = tf.constant(mom_epochs, dtype=tf.float32)
momentum epochs
Definition at line 255 of file tensorflow_dnn_model.py.

c_mom_init = tf.constant(mom_init, dtype=tf.float32)
initial momentum
Definition at line 249 of file tensorflow_dnn_model.py.

c_mom_max = tf.constant(mom_max, dtype=tf.float32)
maximum momentum
Definition at line 252 of file tensorflow_dnn_model.py.

c_staircase = staircase
use staircase
Definition at line 273 of file tensorflow_dnn_model.py.

c_stop_epochs = stop_epochs
number of epochs without improvement for early termination
Definition at line 270 of file tensorflow_dnn_model.py.

global_step = tf.Variable(0, trainable=False, name='global_step', dtype=tf.int64)
global step
Definition at line 245 of file tensorflow_dnn_model.py.

bool is_initialized = False
check if initialized
Definition at line 310 of file tensorflow_dnn_model.py.

max_epochs = max_epochs
max epochs
Definition at line 292 of file tensorflow_dnn_model.py.

min_epochs = min_epochs
min epochs
Definition at line 289 of file tensorflow_dnn_model.py.

mlp = mlp
mlp net
Definition at line 236 of file tensorflow_dnn_model.py.

optimizer_change_epochs = change_optimizer
epochs at which the optimizer is changed
Definition at line 282 of file tensorflow_dnn_model.py.

optimizers = []
define multiple optimizers
Definition at line 279 of file tensorflow_dnn_model.py.

list recent_params = []
recent params
Definition at line 298 of file tensorflow_dnn_model.py.

smooth_cross_entropy = smooth_cross_entropy
True for a small epsilon addition, False for a clipped network output.
Definition at line 307 of file tensorflow_dnn_model.py.

step_countdown = self.c_stop_epochs
step countdown
Definition at line 304 of file tensorflow_dnn_model.py.

termination_criterion = None
termination criterion
Definition at line 295 of file tensorflow_dnn_model.py.

wd_coeffs = wd_coeffs
weight decay coefficients
Definition at line 242 of file tensorflow_dnn_model.py.