pibronic.server package

Submodules

pibronic.server.job_boss module

provides a number of job-submission-related functions; a helper module for job submission that assumes the server uses SLURM

class pibronic.server.job_boss.PimcSubmissionClass(input_FS, input_param_dict=None)[source]

Bases: pibronic.server.job_boss.SubmissionClass

Class that stores all the logic involved in submitting one or more jobs to a server running SLURM; should be self-consistent

MAX_SAMPLES_PER_JOB = 100000
param_dict = {'bead_list': [0], 'block_size': 1000, 'cores_per_socket': 4, 'cpus_per_task': 4, 'hostname': None, 'id_data': 0, 'id_rho': 0, 'memory_per_node': 20, 'number_of_beads': 0, 'number_of_blocks': 0, 'number_of_modes': 0, 'number_of_samples': 0, 'number_of_samples_overall': 0, 'number_of_samples_per_job': 0, 'number_of_states': 0, 'partition': None, 'path_data': '', 'path_rho': '', 'path_root': '', 'path_scratch': '', 'script_name': 'pimc.py', 'temperature_list': [0.0], 'total_memory': 20, 'wait_param': ''}
prepare_paths()[source]

fills the param_dict with values from the given FileStructure

setup_blocks_and_jobs()[source]

calculates the following and stores the results in the param_dict (see the sketch below):

- the number of blocks per job
- the number of samples per job
- the number of jobs
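
A minimal sketch of the kind of arithmetic this method performs, assuming hypothetical stand-in names (total_samples, block_size, max_samples_per_job) for the corresponding param_dict entries; the real method reads and writes keys such as 'number_of_samples_overall' and 'block_size':

    from math import ceil

    def split_samples_into_jobs(total_samples, block_size, max_samples_per_job=100000):
        """Illustrative only: divide an overall sample count into per-job blocks."""
        samples_per_job = min(total_samples, max_samples_per_job)
        # round samples_per_job down to a whole number of blocks
        blocks_per_job = samples_per_job // block_size
        samples_per_job = blocks_per_job * block_size
        number_of_jobs = ceil(total_samples / samples_per_job)
        return blocks_per_job, samples_per_job, number_of_jobs

    # e.g. 1,000,000 samples in blocks of 1,000, capped at 100,000 samples per job
    print(split_samples_into_jobs(10**6, 10**3))  # -> (100, 100000, 10)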

submit_job_array()[source]

for each temperature, submit an array of jobs over the beads

submit_jobs()[source]

submit jobs at different temperatures and bead values
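
One plausible shape for these two methods is an outer loop over temperatures with a SLURM job array covering the beads at each temperature; the helper name, the script name, and the exact sbatch flags below are assumptions, not the package's actual internals:

    import subprocess

    def submit_all(temperature_list, bead_list, script="submit.sh"):
        for temperature in temperature_list:
            # one array job covers every bead value at this temperature
            cmd = [
                "sbatch",
                f"--array=0-{len(bead_list) - 1}",
                f"--export=TEMPERATURE={temperature}",
                script,
            ]
            subprocess.run(cmd, check=True)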

pibronic.server.job_boss.SIGUSR1_handle(signum, frame)[source]

signal handler; uses job_state_lock to record the signal by setting the boolean job_almost_done_flag
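
A minimal sketch of the signal-handling pattern the docstring describes; the module-level names job_state_lock and job_almost_done_flag come from the docstring, everything else is an assumption:

    import signal
    import threading

    # assumed module-level state mirroring the names in the docstring
    job_state_lock = threading.Lock()
    job_almost_done_flag = False

    def SIGUSR1_handle(signum, frame):
        """Record that SIGUSR1 arrived (e.g. SLURM warning that the job will end soon)."""
        global job_almost_done_flag
        with job_state_lock:
            job_almost_done_flag = True

    # install the handler for SIGUSR1
    signal.signal(signal.SIGUSR1, SIGUSR1_handle)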

class pibronic.server.job_boss.SubmissionClass(input_FS, input_param_dict=None)[source]

Bases: object

MAX_SAMPLES_PER_JOB = 10000
construct_job_command(params)[source]

constructs the job submission command from the given params

node_dict = {'feynman': <function prepare_job_nlogn>, 'graham': <function prepare_job_graham>, 'nlogn': <function prepare_job_nlogn>, 'orca': <function prepare_job_orca>}
param_dict = {'bead_list': [0], 'block_size': 1, 'cores_per_socket': 1, 'cpus_per_task': 1, 'delta_beta': 0.0002, 'hostname': '', 'id_data': 0, 'id_rho': 0, 'memory_per_node': 1, 'number_of_beads': 0, 'number_of_blocks': 0, 'number_of_links': 0, 'number_of_modes': 0, 'number_of_samples': 0, 'number_of_samples_overall': 0, 'number_of_samples_per_job': 0, 'number_of_states': 0, 'partition': None, 'path_data': '', 'path_rho': '', 'path_root': '', 'path_scratch': '', 'script_name': None, 'temperature_list': [0.0], 'total_memory': 1, 'wait_param': ''}
prepare_paths()[source]

fills the param_dict with values from the given FileStructure

verify_hostname_is_valid(hostname)[source]

checks the hostname against a pre-defined dictionary; this should alert the user if they are trying to submit jobs on a server without first preparing a job-submission wrapper
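
An illustrative check against a mapping like the node_dict attribute above; the exact exception type and message are assumptions:

    def verify_hostname_is_valid(hostname, node_dict):
        """Raise if no job-submission wrapper has been prepared for this host."""
        if hostname not in node_dict:
            raise ValueError(
                f"Hostname '{hostname}' has no job-submission wrapper; "
                f"known hosts are: {sorted(node_dict)}"
            )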

pibronic.server.job_boss.assert_partition_exists(partition_name)[source]

only checks that the given string is listed as a partition by sinfo
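
A plausible sketch of such a check using the SLURM CLI (the actual implementation may differ):

    import subprocess

    def partition_exists(partition_name):
        """Ask sinfo for the partition list and check membership."""
        # '-h' suppresses the header, '-o %P' prints one partition name per line;
        # the default partition is marked with a trailing '*'
        out = subprocess.run(["sinfo", "-h", "-o", "%P"],
                             capture_output=True, text=True, check=True).stdout
        partitions = {line.rstrip("*") for line in out.split()}
        return partition_name in partitions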

pibronic.server.job_boss.check_acct_state(id_job)[source]

returns the recorded state of the job (from SLURM) as a string

pibronic.server.job_boss.check_running_state(id_job)[source]

returns the running state of the job (from SLURM) as a string
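
These two queries map naturally onto the sacct and squeue commands; a hedged sketch of how they could be implemented (the package's actual commands and output handling may differ):

    import subprocess

    def acct_state(id_job):
        """Recorded state from SLURM accounting (sacct), e.g. 'COMPLETED', 'FAILED'."""
        out = subprocess.run(
            ["sacct", "-n", "-X", "-j", str(id_job), "-o", "State"],
            capture_output=True, text=True, check=True,
        ).stdout
        return out.strip()

    def running_state(id_job):
        """Current state from the queue (squeue), e.g. 'PENDING', 'RUNNING'."""
        out = subprocess.run(
            ["squeue", "-h", "-j", str(id_job), "-o", "%T"],
            capture_output=True, text=True, check=True,
        ).stdout
        return out.strip()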

pibronic.server.job_boss.check_slurm_output(path_root, id_job)[source]

checks the output file from SLURM for errors: memory issues, incorrect arguments, etc.

pibronic.server.job_boss.extract_id_job_from_output(out)[source]

returns the job id contained in the str argument 'out', or None if no job id is present; a warning is raised when no job id can be found
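
sbatch reports submissions as "Submitted batch job <id>", so a sketch of this helper could be a simple regular-expression search (the exact pattern used by the package is an assumption):

    import re
    import warnings

    def extract_id_job(out):
        """Pull the job id out of sbatch output such as 'Submitted batch job 123456'."""
        match = re.search(r"Submitted batch job (\d+)", out)
        if match is None:
            warnings.warn("no job id found in sbatch output")
            return None
        return int(match.group(1))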

pibronic.server.job_boss.get_hostname()[source]

returns the hostname of the server's cluster (from SLURM) as a string

pibronic.server.job_boss.get_path_of_job_boss_directory()[source]

returns the absolute path to the directory holding job_boss.py

pibronic.server.job_boss.get_path_to_python_executable()[source]

returns the absolute path to the python executable currently executing this script
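
These two path helpers most likely reduce to the standard idioms below; this is an assumption about intent, not a statement about the actual implementation:

    import os
    import sys

    # directory holding the current file (job_boss.py in the real module)
    path_of_this_directory = os.path.dirname(os.path.abspath(__file__))

    # absolute path to the Python interpreter executing this script
    path_to_python = sys.executable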

pibronic.server.job_boss.prepare_job_compute_canada(param_dict)[source]

wrapper for jobs on Compute Canada servers

pibronic.server.job_boss.prepare_job_feynman(param_dict)[source]

wrapper for job_boss job submission

pibronic.server.job_boss.prepare_job_graham(param_dict)[source]

wrapper for job_boss job submission

pibronic.server.job_boss.prepare_job_nlogn(param_dict)[source]

wrapper for job_boss job submission

pibronic.server.job_boss.prepare_job_orca(param_dict)[source]

wrapper for job_boss job submission

pibronic.server.job_boss.serialize_BoxData_dictionary(parameter_dictionary)[source]

wrapper for the call to BoxData's json_serialize(); takes a dictionary of parameters and returns a string which is a 'serialized' version of those parameters. When the submitted job eventually executes, it initializes a BoxData (or child) object using that 'serialized' string
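
The mechanism described is a round trip through a serialized string: the submitting process serializes the parameter dictionary, and the job later rebuilds its state from that string. A generic sketch of the idea, using plain json rather than BoxData's actual json_serialize():

    import json

    # submitting side: turn the parameter dictionary into a string that can be
    # embedded in the job's command line or environment
    parameter_dictionary = {"number_of_beads": 12, "temperature": 300.0}
    serialized = json.dumps(parameter_dictionary)

    # executing side (inside the submitted job): rebuild the parameters and use
    # them to initialize the data object
    restored = json.loads(serialized)
    assert restored == parameter_dictionary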

pibronic.server.job_boss.submit_job(command, parameter_dictionary)[source]

crafts the job submission command; performs no error checking
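
A hedged sketch of crafting a submission command from a parameter dictionary with the keys shown in param_dict above; the choice of sbatch flags here is an assumption, not the package's actual command:

    def craft_sbatch_command(params):
        """Build an sbatch command list from a param_dict-style dictionary."""
        return [
            "sbatch",
            f"--partition={params['partition']}",
            f"--cpus-per-task={params['cpus_per_task']}",
            f"--mem={params['total_memory']}G",
            f"--job-name=D{params['id_data']}_R{params['id_rho']}",
            params["script_name"],
        ]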

pibronic.server.job_boss.subprocess_run_wrapper(cmd, **kwargs)[source]

wrapper for the subprocess.run function, allowing for different implementations on different Python versions
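
One plausible reason for such a wrapper is that subprocess.run() only gained the capture_output and text keyword arguments in Python 3.7; a compatibility shim along these lines is an assumption about intent, not the actual implementation:

    import subprocess
    import sys

    def run_wrapper(cmd, **kwargs):
        if sys.version_info >= (3, 7):
            return subprocess.run(cmd, capture_output=True, text=True, **kwargs)
        # older interpreters: spell out the equivalent keyword arguments
        return subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                              universal_newlines=True, **kwargs)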

pibronic.server.job_boss.subprocess_submit_asynch_wrapper(cmd, **kwargs)[source]

wrapper for the subprocess.Popen function, allowing for different implementations on different Python versions

pibronic.server.job_boss.synchronize_with_job(id_job, job_type='default')[source]

synchronizes with a submitted job
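
Synchronizing with a submitted job typically means polling SLURM until the job leaves the queue; a minimal illustrative loop (poll interval, state set, and return value are all assumptions):

    import subprocess
    import time

    ACTIVE_STATES = {"PENDING", "CONFIGURING", "RUNNING", "COMPLETING"}

    def wait_for_job(id_job, poll_interval=30):
        """Block until squeue no longer reports the job as active."""
        while True:
            out = subprocess.run(["squeue", "-h", "-j", str(id_job), "-o", "%T"],
                                 capture_output=True, text=True).stdout.strip()
            if out not in ACTIVE_STATES:
                return out  # empty string once the job has left the queue
            time.sleep(poll_interval)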

pibronic.server.server module

Should hold all server-related enums

class pibronic.server.server.ServerExecutionParameters[source]

Bases: enum.Enum

An enumeration.

A = 'number_of_states'
BlkS = 'block_size'
D = 'id_data'
N = 'number_of_modes'
P = 'number_of_beads'
R = 'id_rho'
T = 'temperature'
X = 'number_of_samples'
beta = 'beta'
dB = 'delta_beta'
nBlk = 'number_of_blocks'
tau = 'tau'
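
The enum maps short keys to full parameter names; a small usage sketch (the example dictionary below is hypothetical):

    from pibronic.server.server import ServerExecutionParameters as SEP

    # translate short keys into full parameter names
    args = {"P": 12, "T": 300.0, "X": 100000}
    params = {SEP[key].value: value for key, value in args.items()}
    # params == {'number_of_beads': 12, 'temperature': 300.0, 'number_of_samples': 100000}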