Belle II Software development
ONNXExpert Class Reference

Expert for the ONNX MVA method. More...

#include <ONNX.h>

Inheritance diagram for ONNXExpert:
Expert

Public Member Functions

virtual void load (Weightfile &weightfile) override
 Load the expert from a Weightfile.
 
virtual std::vector< float > apply (Dataset &testData) const override
 Apply this expert onto a dataset.
 
virtual std::vector< std::vector< float > > applyMulticlass (Dataset &testData) const override
 Apply this expert onto a dataset and return multiple outputs.
 

Protected Attributes

GeneralOptions m_general_options
 General options loaded from the weightfile.
 

Private Member Functions

void run (ONNXTensorView &view) const
 Run the current inputs through the ONNX model. Retrieves and fills the buffers from the view.
 

Private Attributes

Ort::Env m_env
 Environment object for ONNX session.
 
Ort::SessionOptions m_sessionOptions
 ONNX session configuration.
 
std::unique_ptr< Ort::Session > m_session
 The ONNX inference session.
 
Ort::RunOptions m_runOptions
 Options to be passed to Ort::Session::Run.
 
const char * m_inputNames [1] = {"input"}
 Input tensor names.
 
const char * m_outputNames [1] = {"output"}
 Output tensor names.
 

Detailed Description

Expert for the ONNX MVA method.

Definition at line 149 of file ONNX.h.

Member Function Documentation

◆ apply()

std::vector< float > apply ( Dataset & testData) const
overridevirtual

Apply this expert onto a dataset.

Parameters
testData	dataset

Implements Expert.

Definition at line 45 of file ONNX.cc.

{
  auto view = ONNXTensorView(testData, 1);
  std::vector<float> result;
  result.reserve(testData.getNumberOfEvents());
  for (unsigned int iEvent = 0; iEvent < testData.getNumberOfEvents(); ++iEvent) {
    testData.loadEvent(iEvent);
    run(view);
    result.push_back(view.outputData()[0]);
  }
  return result;
}

◆ applyMulticlass()

std::vector< std::vector< float > > applyMulticlass ( Dataset & testData) const
overridevirtual

Apply this expert onto a dataset and return multiple outputs.

Parameters
testData	dataset

Reimplemented from Expert.

Definition at line 58 of file ONNX.cc.

{
  auto view = ONNXTensorView(testData, m_general_options.m_nClasses);
  std::vector<std::vector<float>> result(testData.getNumberOfEvents(),
                                         std::vector<float>(m_general_options.m_nClasses));
  for (unsigned int iEvent = 0; iEvent < testData.getNumberOfEvents(); ++iEvent) {
    testData.loadEvent(iEvent);
    run(view);
    auto outputs = view.outputData();
    for (unsigned int iClass = 0; iClass < m_general_options.m_nClasses; ++iClass) {
      result[iEvent][iClass] = outputs[iClass];
    }
  }
  return result;
}

◆ load()

void load ( Weightfile & weightfile)
overridevirtual

Load the expert from a Weightfile.

Parameters
weightfile	containing all information necessary to build the expert

Implements Expert.

Definition at line 17 of file ONNX.cc.

{
  std::string onnxModelFileName = weightfile.generateFileName();
  weightfile.getFile("ONNX_Modelfile", onnxModelFileName);
  weightfile.getOptions(m_general_options);

  // Ensure single-threaded execution, see
  // https://onnxruntime.ai/docs/performance/tune-performance/threading.html
  //
  // InterOpNumThreads is probably optional (not used in ORT_SEQUENTIAL mode).
  // Also, with batch size 1 and ORT_SEQUENTIAL mode, MLP-like models will
  // always run single-threaded, but maybe not e.g. graph networks, which can
  // run in parallel on nodes. Here, setting IntraOpNumThreads to 1 is
  // important to ensure single-threaded execution.
  m_sessionOptions.SetIntraOpNumThreads(1);
  m_sessionOptions.SetInterOpNumThreads(1);
  m_sessionOptions.SetExecutionMode(ORT_SEQUENTIAL); // default, but make it explicit

  m_session = std::make_unique<Ort::Session>(m_env, onnxModelFileName.c_str(), m_sessionOptions);
}

◆ run()

void run ( ONNXTensorView & view) const
private

Run the current inputs through the ONNX model. Retrieves and fills the buffers from the view.

Definition at line 38 of file ONNX.cc.

{
  m_session->Run(m_runOptions,
                 m_inputNames, view.inputTensor(), 1,
                 m_outputNames, view.outputTensor(), 1);
}

Member Data Documentation

◆ m_env

Ort::Env m_env
private

Environment object for ONNX session.

Definition at line 179 of file ONNX.h.

◆ m_general_options

GeneralOptions m_general_options
protectedinherited

General options loaded from the weightfile.

Definition at line 70 of file Expert.h.

◆ m_inputNames

const char* m_inputNames[1] = {"input"}
private

Input tensor names.

Definition at line 199 of file ONNX.h.

◆ m_outputNames

const char* m_outputNames[1] = {"output"}
private

Output tensor names.

Definition at line 204 of file ONNX.h.

◆ m_runOptions

Ort::RunOptions m_runOptions
private

Options to be passed to Ort::Session::Run.

Definition at line 194 of file ONNX.h.

◆ m_session

std::unique_ptr<Ort::Session> m_session
private

The ONNX inference session.

Definition at line 189 of file ONNX.h.

◆ m_sessionOptions

Ort::SessionOptions m_sessionOptions
private

ONNX session configuration.

Definition at line 184 of file ONNX.h.


The documentation for this class was generated from the following files: