Belle II Software  release-06-02-00
Local Class Reference
Inheritance diagram for Local:
Collaboration diagram for Local:

Classes

class  LocalResult
 

Public Member Functions

def __init__ (self, *, backend_args=None, max_processes=1)
 
def join (self)
 
def max_processes (self)
 
def max_processes (self, value)
 
def submit (self, job)
 
def get_submit_script_path (self, job)
 

Static Public Member Functions

def run_job (name, working_dir, output_dir, script)
 

Public Attributes

 pool
 The actual Pool object of this instance of the Backend.
 
 max_processes
 The size of the multiprocessing process pool.
 
 backend_args
 The backend args that will be applied to jobs unless the job specifies them itself.
 

Static Public Attributes

string submit_script = "submit.sh"
 Default submission script name.
 
string exit_code_file = "__BACKEND_CMD_EXIT_STATUS__"
 Default exit code file name.
 
dictionary default_backend_args = {}
 Default backend_args.
 

Private Member Functions

def _ (self, job)
 
def _ (self, job)
 
def _ (self, jobs)
 
def _create_parent_job_result (cls, parent)
 
def _add_wrapper_script_setup (self, job, batch_file)
 
def _add_wrapper_script_teardown (self, job, batch_file)
 

Static Private Member Functions

def _add_setup (job, batch_file)
 

Private Attributes

 _max_processes
 Internal attribute of max_processes.
 

Detailed Description

Backend for local processes, i.e. jobs that run on the same machine but in a subprocess.

Note that you should call the self.join() method to close the pool and wait for any
running processes to finish before exiting your program. Once you've called join() you will have to
set up a new instance of this backend to create a new pool. If you don't call `Local.join`
somewhere yourself, the main Python process might end before your pool is done.

Keyword Arguments:
    max_processes (int): Size of the process pool that spawns the subjobs, default=1.
        This is the maximum number of simultaneous subjobs.
        Avoid setting it to a large number, or to more than the number of available cores;
        doing so won't crash the program, but it will degrade performance.

Definition at line 894 of file backends.py.
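
For orientation, a minimal usage sketch is given below, assuming the `caf.backends` module path used by the Belle II calibration framework; the exact `Job` setup (command and directory attributes) is an assumption for illustration and may differ from the real interface.

    # Minimal sketch; the Job attribute names here are assumptions for illustration.
    from caf.backends import Job, Local

    job = Job("example_job")
    job.cmd = ["echo", "Hello from the Local backend"]   # assumed attribute
    job.working_dir = "/tmp/example_job"                 # assumed attribute
    job.output_dir = "/tmp/example_job/output"           # assumed attribute

    backend = Local(max_processes=2)  # pool of up to 2 simultaneous subjobs
    backend.submit(job)
    backend.join()  # close the pool and wait for running processes to finish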

Constructor & Destructor Documentation

◆ __init__()

def __init__ (   self,
  *,
  backend_args = None,
  max_processes = 1 
)
 

Definition at line 910 of file backends.py.

Member Function Documentation

◆ _() [1/3]

def _ (   self,
  job 
)
private
Submission of a `SubJob` for the Local backend

Definition at line 991 of file backends.py.

◆ _() [2/3]

def _ (   self,
  job 
)
private
Submission of a `Job` for the Local backend

Definition at line 1022 of file backends.py.

◆ _() [3/3]

def _ (   self,
  jobs 
)
private
Submit method of Local() that takes a list of jobs instead of just one and submits each one.

Definition at line 1064 of file backends.py.

◆ _add_setup()

def _add_setup (   job,
  batch_file 
)
staticprivateinherited
Adds setup lines to the shell script file.

Definition at line 777 of file backends.py.

◆ _add_wrapper_script_setup()

def _add_wrapper_script_setup (   self,
  job,
  batch_file 
)
privateinherited
Adds lines to the submitted script that help with job monitoring/setup. Mostly here so that we can insert
`trap` statements for Ctrl-C situations.

Definition at line 784 of file backends.py.
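
As an illustration of the idea (a hypothetical sketch, not the framework's exact lines), such a setup step could append a `trap` so that an interrupted job still records a failure exit code, reusing the `exit_code_file` name from the static attributes above.

    # Hypothetical sketch of a wrapper-script setup writer.
    def add_wrapper_script_setup_sketch(job, batch_file):
        # On SIGINT (Ctrl-C) or SIGTERM, record a non-zero exit code before dying.
        batch_file.write('trap "echo 1 > __BACKEND_CMD_EXIT_STATUS__; exit 1" INT TERM\n')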

◆ _add_wrapper_script_teardown()

def _add_wrapper_script_teardown (   self,
  job,
  batch_file 
)
privateinherited
Adds lines to the submitted script that help with job monitoring/teardown. Mostly here so that we can
write the exit code of the job command out to a file. This lets us know whether the command succeeded
or failed even if the backend server/monitoring database purges the data about our job, e.g. if PBS
removes job information too quickly we might otherwise never know whether a job succeeded or failed
without some kind of exit file.

Definition at line 809 of file backends.py.
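
Again as a hypothetical sketch (not the framework's exact lines), the teardown could append a shell line that writes the last command's exit status into the `exit_code_file` named in the static attributes above.

    # Hypothetical sketch of a wrapper-script teardown writer.
    def add_wrapper_script_teardown_sketch(job, batch_file):
        # $? holds the exit status of the job command that ran just before this line.
        batch_file.write("echo $? > __BACKEND_CMD_EXIT_STATUS__\n")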

◆ _create_parent_job_result()

def _create_parent_job_result (   cls,
  parent 
)
private
We want to be able to call `ready()` on the top-level `Job.result`, so this method needs to exist
so that a Job.result object is actually created. It will be mostly empty; it simply updates subjob
statuses and allows the use of ready().

Reimplemented from Backend.

Definition at line 1100 of file backends.py.

◆ get_submit_script_path()

def get_submit_script_path (   self,
  job 
)
inherited
Construct the Path object of the bash script file that we will submit. It will contain
the actual job command, wrapper commands, setup commands, and any batch directives.

Definition at line 830 of file backends.py.
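
A minimal sketch of such a construction, assuming the script lives in the job's working directory and reusing the `submit_script` attribute from the static attributes above; the real method may place the script elsewhere.

    # Hypothetical sketch of the path construction.
    from pathlib import Path

    def get_submit_script_path_sketch(backend, job):
        return Path(job.working_dir, backend.submit_script)  # e.g. <working_dir>/submit.sh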

◆ join()

def join (   self)
Closes and joins the Pool, letting you wait for all results currently
still processing.

Definition at line 952 of file backends.py.
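
In multiprocessing terms this corresponds to the standard close-then-join pattern, sketched here on a bare multiprocessing.Pool.

    import multiprocessing

    if __name__ == "__main__":
        pool = multiprocessing.Pool(processes=1)
        pool.close()  # stop accepting new work
        pool.join()   # block until all submitted work has finished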

◆ max_processes() [1/2]

def max_processes (   self)
Getter for max_processes

Definition at line 963 of file backends.py.

◆ max_processes() [2/2]

def max_processes (   self,
  value 
)
Setter for max_processes. We also check for a previous Pool(); if one exists, we wait for it to join
and then create a new one with the new value of max_processes.

Definition at line 970 of file backends.py.
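
A hypothetical sketch of that getter/setter pair (not the class's actual code):

    import multiprocessing

    class LocalSketch:
        def __init__(self, max_processes=1):
            self.pool = None
            self.max_processes = max_processes  # goes through the setter below

        @property
        def max_processes(self):
            return self._max_processes

        @max_processes.setter
        def max_processes(self, value):
            # Join any previous pool before replacing it with one of the new size.
            if self.pool:
                self.pool.close()
                self.pool.join()
            self._max_processes = value
            self.pool = multiprocessing.Pool(processes=value)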

◆ run_job()

def run_job (   name,
  working_dir,
  output_dir,
  script 
)
static
The function that is used by multiprocessing.Pool.apply_async during process creation. This runs a
shell command in a subprocess and captures the stdout and stderr of the subprocess to files.

Definition at line 1074 of file backends.py.
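
A hypothetical sketch of what such a function can look like; the real stdout/stderr file names and shell invocation may differ.

    import subprocess
    from pathlib import Path

    def run_job_sketch(name, working_dir, output_dir, script):
        # Run the submit script in the job's working directory,
        # capturing its stdout and stderr to files.
        with open(Path(working_dir, "stdout"), "w") as f_out, \
             open(Path(working_dir, "stderr"), "w") as f_err:
            subprocess.run(["/bin/bash", str(script)], cwd=working_dir,
                           stdout=f_out, stderr=f_err)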

◆ submit()

def submit (   self,
  job 
)
 

Reimplemented from Backend.

Definition at line 984 of file backends.py.


The documentation for this class was generated from the following file:
backends.py