How to use start_lambda method in localstack

Best Python code snippets using localstack_python. The snippets below, taken from open-source projects, show how a start_lambda function or parameter is defined and used in practice.

parallel_streams_sync_optimisation.py

Source: GitHub


# Create ARCHER job files based on parameters passed
# parallel: This experiment measures the effectiveness of optimisation techniques on parallel streaming partitioning and benchmarks it to global static partitioning (zoltan)
# this differs from parallel_streams_hyperPraw in the scale (larger graphs, larger parallelisation scales)
# Factors to characterise:
#   impact of window-based streaming (updating only in batches)
#   impact of local updates only (only send remote updates when a pin has never been seen before, otherwise only update the local data structure)
# fixed parameters
#   lambda = 0.85 to start with
#   imbalance tolerance = 1.2
#   staggered start
import sys
import math

# job templates
template_1 = '''#!/bin/bash --login
# name of the job
#PBS -N '''
template_2 = '''
# how many nodes
#PBS -l select='''
template_3 = ''':bigmem='''
template_4 = '''
# walltime
#PBS -l walltime=24:00:00
# budget code
#PBS -A e582

REPETITIONS=1
SIMS_PER_TRIAL=2
PROCESSES='''
template_5 = '''
EXPERIMENT_NAME='''
template_6 = '''
MESSAGE_SIZE='''
template_7 = '''
# This shifts to the directory that you submitted the job from
cd $PBS_O_WORKDIR

# bandwidth matrix creation
# bandwidth probing parameters
SIZE=512
ITERATIONS=20
WINDOW=10
# renaming is necessary to avoid clashes between simultaneous jobs
ORIGINAL_BM_FILE="results_mpi_send_bandwidth_"$PROCESSES
aprun -n $PROCESSES mpi_perf $SIZE $ITERATIONS $WINDOW
for p in $(seq 1 10)
do
    FILENAME="results_mpi_send_bandwidth_"$p"_"$PROCESSES
    if [ ! -f $FILENAME ]; then
        BM_FILE="results_mpi_send_bandwidth_"$p"_"$PROCESSES
        break
    fi
done
mv $ORIGINAL_BM_FILE $BM_FILE

run_experiment() {
    HYPERGRAPH_FILE="$1"
    SEED="$2"
    E_SIM_STEPS="$3"
    H_SIM_STEPS="$4"
    START_LAMBDA="$5"
    COMPLETE_WINDOW_SCALING="$6"
    GRAPH_STREAM="inverted_"$HYPERGRAPH_FILE
    # run single stream baseline for hyperPraw
    aprun -n $PROCESSES hyperPraw -n $EXPERIMENT_NAME"_hyperPraw_bandwidth_1" -h $HYPERGRAPH_FILE -i 25 -m 1200 -p hyperPrawVertex -t $E_SIM_STEPS -x $H_SIM_STEPS -s $SEED -k $MESSAGE_SIZE -e $GRAPH_STREAM -P -K 1 -g 1 -b $BM_FILE -W -r $START_LAMBDA -q $SIMS_PER_TRIAL -H
    sleep 1
    # global hypergraph partitioning baseline (zoltan)
    aprun -n $PROCESSES hyperPraw -n $EXPERIMENT_NAME"_zoltanVertex_1" -h $HYPERGRAPH_FILE -i 25 -m 1200 -p zoltanVertex -t $E_SIM_STEPS -x $H_SIM_STEPS -s $SEED -k $MESSAGE_SIZE -e $GRAPH_STREAM -P -K 1 -g 1 -b $BM_FILE -r $START_LAMBDA -q $SIMS_PER_TRIAL -H
    sleep 1
    # run parallel versions
    NUM_PARALLEL_EXPERIMENTS=4
    MAX_PROCESSES="12"
    FACTOR="2"
    for p in $(seq 1 $NUM_PARALLEL_EXPERIMENTS)
    do
        # window based streaming tests
        aprun -n $PROCESSES hyperPraw -n $EXPERIMENT_NAME"_hyperPraw_bandwidth_w1_"$MAX_PROCESSES -h $HYPERGRAPH_FILE -i 15 -m 1200 -p hyperPrawVertex -t $E_SIM_STEPS -x $H_SIM_STEPS -s $SEED -k $MESSAGE_SIZE -e $GRAPH_STREAM -P -K $MAX_PROCESSES -g 1 -b $BM_FILE -W -r $START_LAMBDA -q $SIMS_PER_TRIAL -H
        sleep 1
        aprun -n $PROCESSES hyperPraw -n $EXPERIMENT_NAME"_hyperPraw_bandwidth_w3_"$MAX_PROCESSES -h $HYPERGRAPH_FILE -i 15 -m 1200 -p hyperPrawVertex -t $E_SIM_STEPS -x $H_SIM_STEPS -s $SEED -k $MESSAGE_SIZE -e $GRAPH_STREAM -P -K $MAX_PROCESSES -g 3 -b $BM_FILE -W -r $START_LAMBDA -q $SIMS_PER_TRIAL -H
        sleep 1
        if [[ $COMPLETE_WINDOW_SCALING == "yes" ]]; then
            aprun -n $PROCESSES hyperPraw -n $EXPERIMENT_NAME"_hyperPraw_bandwidth_w10_"$MAX_PROCESSES -h $HYPERGRAPH_FILE -i 15 -m 1200 -p hyperPrawVertex -t $E_SIM_STEPS -x $H_SIM_STEPS -s $SEED -k $MESSAGE_SIZE -e $GRAPH_STREAM -P -K $MAX_PROCESSES -g 10 -b $BM_FILE -W -r $START_LAMBDA -q $SIMS_PER_TRIAL -H
            sleep 1
            aprun -n $PROCESSES hyperPraw -n $EXPERIMENT_NAME"_hyperPraw_bandwidth_w20_"$MAX_PROCESSES -h $HYPERGRAPH_FILE -i 15 -m 1200 -p hyperPrawVertex -t $E_SIM_STEPS -x $H_SIM_STEPS -s $SEED -k $MESSAGE_SIZE -e $GRAPH_STREAM -P -K $MAX_PROCESSES -g 20 -b $BM_FILE -W -r $START_LAMBDA -q $SIMS_PER_TRIAL -H
            sleep 1
        fi
        MAX_PROCESSES=$(($MAX_PROCESSES * $FACTOR))
    done
}

for p in $(seq 1 $REPETITIONS)
do
    SEED=$RANDOM
    # synthetic graphs
    run_experiment "huge_uniform_dense_c96.hgr" $SEED 1 0 850 "yes"
    run_experiment "huge_uniform_packed_c192.hgr" $SEED 1 0 1050 "no"
done
'''
if len(sys.argv) < 7:
    print("Input error: usage -> python generate_archer_job.py jobName min_processes num_experiments geometric_step big_mem[true|false] message_size")
    exit()
test_name = sys.argv[1]
min_processes = int(sys.argv[2])
num_experiments = int(sys.argv[3])
geometric_step = int(sys.argv[4])
big_mem = (sys.argv[5] == "true" or sys.argv[5] == "True")
message_size = int(sys.argv[6])

process_counts = [min_processes * geometric_step ** (n-1) for n in range(1, num_experiments+1)]
print("Generating experiments")
print(process_counts)

for p in process_counts:
    nodes = max(int(math.ceil(p / 24)), 1)
    writebuffer = open("archer_job_" + test_name + "_" + str(p) + ".sh", 'w')
    writebuffer.write(template_1 + test_name)
    writebuffer.write(template_2 + str(nodes))
    writebuffer.write(template_3 + str(big_mem).lower())
    writebuffer.write(template_4 + str(p))
    writebuffer.write(template_5 + test_name)
    writebuffer.write(template_6 + str(message_size))
...
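Note that START_LAMBDA here is just the fifth positional argument of the bash run_experiment function, forwarded to the hyperPraw binary via its -r flag (the header comment suggests a starting lambda of 0.85; the calls pass 850 and 1050). The other moving part is the job-size arithmetic: process counts grow geometrically from min_processes, and each job requests ceil(p / 24) ARCHER nodes, 24 cores per node. A standalone sketch of that arithmetic, with hypothetical argument values in the comment:

import math

# Hypothetical invocation: python generate_archer_job.py sync_opt 24 3 2 true 1024
min_processes, num_experiments, geometric_step = 24, 3, 2

# geometric progression of process counts: 24 * 2^(n-1)
process_counts = [min_processes * geometric_step ** (n - 1)
                  for n in range(1, num_experiments + 1)]
print(process_counts)                                        # [24, 48, 96]

# one ARCHER node hosts 24 processes, so round the node count up
print([max(math.ceil(p / 24), 1) for p in process_counts])   # [1, 2, 4]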


dtpnn.py

Source: GitHub


import numpy as np
from nmf.norms import norm_Frobenius
from nmf.pgrad import project, dFnorm_H, dH_projected_norm2, pgd_subproblem_step_condition
from nmf.mult import update_empty_initials
from time import time as get_time

# an unsuccessful attempt to implement an NMF algorithm based on the paper [https://doi.org/10.1016/j.neunet.2018.03.003]
# the code in this file does not produce any meaningful result; it is excluded from the analysis

def factorize_Fnorm(V, inner_dim, n_steps=10000, epsilon=1e-6,
                    record_errors=False, W_init=None, H_init=None):
    W, H = update_empty_initials(V, inner_dim, W_init, H_init)
    dFWt = dFnorm_H(H @ V.T, H @ H.T, W.T)
    dFH = dFnorm_H(W.T @ V, W.T @ W, H)
    norm_dFpWt_2 = dH_projected_norm2(dFWt, W.T)
    norm_dFpH_2 = dH_projected_norm2(dFH, H)
    pgrad_norm = np.sqrt(norm_dFpWt_2 + norm_dFpH_2)
    min_pgrad_main = epsilon * pgrad_norm
    min_pgrad_W = max(1e-3, epsilon) * pgrad_norm
    min_pgrad_H = min_pgrad_W
    start_time = get_time()
    err = norm_Frobenius(V - W @ H)
    errors = [(get_time() - start_time, err)]
    for i in range(n_steps):
        if pgrad_norm < min_pgrad_main:
            break
        W, l_W, min_pgrad_W, norm_dFpWt_2 = \
            dtpnn_subproblem_H(V.T, H.T, W.T, min_pgrad_H)
        W = W.T
        H, l_H, min_pgrad_H, norm_dFpH_2 = \
            dtpnn_subproblem_H(V, W, H, min_pgrad_W)
        err = norm_Frobenius(V - W @ H)
        if record_errors:
            errors.append((get_time() - start_time, err))
        pgrad_norm = np.sqrt(norm_dFpWt_2 + norm_dFpH_2)
    if record_errors:
        return W, H, np.array(errors)
    else:
        return W, H

def dtpnn_subproblem_H(V, W, H, min_pgrad, start_lambda=1):
    l = start_lambda
    WtV = W.T @ V
    WtW = W.T @ W
    dF = dFnorm_H(WtV, WtW, H)
    H_new, l = dtpnn_subproblem_step(WtW, H, dF, l)
    H = H_new
    dF = dFnorm_H(WtV, WtW, H)
    norm_dFpH_2 = dH_projected_norm2(dF, H)
    return H, l, min_pgrad, norm_dFpH_2

def dtpnn_subproblem_step(WtW, H, dF, start_lambda, beta=0.5, max_lambda_search_iters=30):
    alpha = 0.01
    max_l = beta ** -max_lambda_search_iters
    min_l = beta ** max_lambda_search_iters
    l = np.clip(start_lambda, min_l, max_l)
    H_new = next_value(H, dF, l)
    C = pgd_subproblem_step_condition(WtW, H, H_new, dF, l * alpha)
    should_increase = C <= 0
    while max_l >= l >= min_l:
        if should_increase:
            l = l / beta
        else:
            l = l * beta
        H_prev = H_new
        H_new = next_value(H, dF, l)
        C = pgd_subproblem_step_condition(WtW, H, H_new, dF, l * alpha)
        if should_increase:
            if not C <= 0 or (H_prev == H_new).all():
                l = l * beta
                H_new = H_prev
                break
        else:
            if C <= 0:
                break
    return H_new, l

def next_value(H, dFH, l):
...
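Here start_lambda is the initial step size for a projected-gradient line search: dtpnn_subproblem_step evaluates the step condition C, then either grows l (divides by beta) or shrinks it (multiplies by beta) until the condition flips. The same idea in isolation, as a minimal self-contained sketch on a toy quadratic (backtracking_step and the objective f are illustrative stand-ins, not part of the nmf package shown above):

import numpy as np

def backtracking_step(x, grad, f, start_lambda=1.0, beta=0.5, alpha=0.01, max_iters=30):
    # shrink the step size l from start_lambda until a sufficient-decrease
    # (Armijo-style) condition holds, projecting onto the non-negative orthant
    l = start_lambda
    x_new = np.maximum(x - l * grad, 0)
    for _ in range(max_iters):
        if f(x_new) <= f(x) + alpha * np.dot(grad, x_new - x):
            break
        l *= beta
        x_new = np.maximum(x - l * grad, 0)
    return x_new, l

# toy objective f(x) = 0.5 * ||x - 1||^2 with gradient x - 1
f = lambda x: 0.5 * np.sum((x - 1.0) ** 2)
x = np.array([3.0, -2.0])
x_new, l = backtracking_step(x, x - 1.0, f)
print(x_new, l)  # [1. 1.] 1.0 -- the condition already holds at the first trial step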


manager.py

Source: GitHub


...
        self.lambdas = {}
        self.remote = RemoteControl()
        self.remote.register_command('reload', self.reload_lambda)
        self.remote.register_command('deactivate', self.deactivate_lambda)

    def start_lambda(self, _id):
        self.processes[_id] = LambdaProcess(
            data=self.lambdas[_id]
        )
        self.processes[_id].daemon = True
        self.processes[_id].start()
        print('[MAIN] Starting Lambda {}'.format(self.lambdas[_id]['name']))

    def kill_lambda(self, _id):
        process = self.processes.pop(_id, None)
        if process:
            process.terminate()

    def reload_lambda(self, _id):
        l = get_lambda(_id)
        if _id in self.lambdas:
            print('[MAIN] Terminating Lambda {}'.format(self.lambdas[_id]['name']))
            self.kill_lambda(_id)
        if l:
            self.lambdas[_id] = l
            self.start_lambda(_id)
        else:
            self.lambdas.pop(_id, None)

    def deactivate_lambda(self, _id, reason):
        print('[MAIN] Deactivating lambda {} due to {}'.format(_id, reason))
        deactivate_lambda(_id, reason)
        self.reload_lambda(_id)  # Kill and Remove

    def load_lambdas(self):
        lambdas = fetch_active_lambdas()
        print('[MAIN] Booting up {} lambdas'.format(len(lambdas)))
        self.processes = {}
        self.lambdas = {}
        for i in lambdas:
            _id = str(i['_id'])
            i['_id'] = _id
            self.lambdas[_id] = i
            self.start_lambda(_id)
        print('[MAIN] Bootup complete.')

    def run(self):
        self.load_lambdas()
        self.remote.run()

    def terminate(self):
        for i in self.processes.values():
            try:
                i.terminate()
            except Exception as e:
                print("Process can't terminate: " + repr(e))
...
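The start_lambda method in this manager wraps process creation: it builds a daemonised LambdaProcess around the stored lambda record, keeps the handle in self.processes, and starts it; kill_lambda pops the handle and terminates the process. Since LambdaProcess, RemoteControl, and the fetch/deactivate helpers are not shown, the sketch below reproduces just the start/kill pattern with multiprocessing.Process directly (MiniManager, _worker, and the sample record are hypothetical stand-ins):

import time
from multiprocessing import Process

def _worker(data):
    # stand-in for LambdaProcess.run(): do the lambda's work here
    print('[WORKER] Running Lambda {}'.format(data['name']))
    time.sleep(1)

class MiniManager:
    def __init__(self):
        self.processes = {}
        self.lambdas = {'1': {'_id': '1', 'name': 'demo'}}

    def start_lambda(self, _id):
        # daemonise so workers die with the manager, and keep the handle
        process = Process(target=_worker, args=(self.lambdas[_id],))
        process.daemon = True
        self.processes[_id] = process
        process.start()
        print('[MAIN] Starting Lambda {}'.format(self.lambdas[_id]['name']))

    def kill_lambda(self, _id):
        process = self.processes.pop(_id, None)
        if process:
            process.terminate()

if __name__ == '__main__':
    manager = MiniManager()
    manager.start_lambda('1')
    time.sleep(2)
    manager.kill_lambda('1')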


