How to use results_dir method in avocado

Best Python code snippet using avocado_python
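
For context: in Avocado itself, the results directory is where each job writes its artifacts (job.log, results.json, and so on). By default every job gets its own directory under ~/avocado/job-results, and a latest symlink tracks the most recent job. A minimal sketch of locating it after a run, assuming the avocado CLI is installed and can resolve /bin/true as an executable test:

import os
import subprocess

# Run a trivial job; Avocado creates a fresh job-<timestamp>-<id> results
# directory and repoints the "latest" symlink at it.
subprocess.run(["avocado", "run", "/bin/true"], check=True)

results_dir = os.path.expanduser("~/avocado/job-results/latest")
print(os.path.realpath(results_dir))    # the resolved job-<timestamp>-<id> directory
print(sorted(os.listdir(results_dir)))  # artifacts such as job.log, results.json

The snippets below show the same idea in user code, where results_dir is a plain config key or variable naming the directory that outputs are written to.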

machine_learning.smk

Source: machine_learning.smk (GitHub)

rule train_test_split:
    input:
        config["dataset_dir"] + "dataset/full_classification.npy"
    output:
        config["dataset_dir"] + "dataset/train.h5py"
    threads:
        1
    message:
        "Splitting training data into test and train dataset"
    log:
        config["results_dir"] + "logs/train_test_split/train_test_split.log"
    benchmark:
        config["results_dir"] + "benchmarks/train_test_split/train_test_split.benchmark.txt"
    script:
        "../scripts/machine_learning/train_test_split.py"

rule sgd_classifier:
    input:
        config["dataset_dir"] + "dataset/train.h5py"
    output:
        model=config["results_dir"] + "models/SGD.sav",
        metrics=config["results_dir"] + "model_metrics/SGD.sav",
        report=config["results_dir"] + "model_metrics/SGD_report.sav"
    threads:
        1
    message:
        "Training SGD classifier"
    log:
        config["results_dir"] + "logs/models/sgd_classifier.log"
    benchmark:
        config["results_dir"] + "benchmarks/models/sgd_classifier.benchmark.txt"
    script:
        "../scripts/machine_learning/sgd_classifier.py"

rule sgd_classifier_manual:
    input:
        config["dataset_dir"] + "dataset/train.h5py"
    output:
        model=config["results_dir"] + "models/SGDmanual.sav",
        metrics=config["results_dir"] + "model_metrics/SGDmanual.sav",
        report=config["results_dir"] + "model_metrics/SGDmanual_report.sav"
    threads:
        1
    message:
        "Training SGD classifier"
    log:
        config["results_dir"] + "logs/models/sgd_manual_classifier.log"
    benchmark:
        config["results_dir"] + "benchmarks/models/sgd_manual_classifier.benchmark.txt"
    script:
        "../scripts/machine_learning/sgd_classifier_manual.py"

rule gaussian_nb:
    input:
        config["dataset_dir"] + "dataset/train.h5py"
    output:
        model=config["results_dir"] + "models/GaussianNB.sav",
        metrics=config["results_dir"] + "model_metrics/GaussianNB.sav",
        report=config["results_dir"] + "model_metrics/GaussianNB_report.sav"
    threads:
        1
    message:
        "Training Gaussian Naive Bayes classifier"
    log:
        config["results_dir"] + "logs/models/gaussian_nb.log"
    benchmark:
        config["results_dir"] + "benchmarks/models/gaussian_nb.benchmark.txt"
    script:
        "../scripts/machine_learning/gaussiannb.py"

rule zero_r:
    input:
        config["dataset_dir"] + "dataset/train.h5py"
    output:
        model=config["results_dir"] + "models/ZeroR.sav",
        metrics=config["results_dir"] + "model_metrics/ZeroR.sav",
        report=config["results_dir"] + "model_metrics/ZeroR_report.sav"
    threads:
        1
    message:
        "Training ZeroR classifier"
    log:
        config["results_dir"] + "logs/models/zero_r.log"
    benchmark:
        config["results_dir"] + "benchmarks/models/zero_r.benchmark.txt"
    script:
        "../scripts/machine_learning/zero_r.py"

rule multinomial_nb:
    input:
        config["dataset_dir"] + "dataset/train.h5py"
    output:
        model=config["results_dir"] + "models/MultinomialNB.sav",
        metrics=config["results_dir"] + "model_metrics/MultinomialNB.sav",
        report=config["results_dir"] + "model_metrics/MultinomialNB_report.sav"
    threads:
        1
    message:
        "Training Multinomial Naive Bayes classifier"
    log:
        config["results_dir"] + "logs/models/multinomial_nb.log"
    benchmark:
        config["results_dir"] + "benchmarks/models/multinomial_nb.benchmark.txt"
    script:
        "../scripts/machine_learning/multinomialnb.py"

rule decision_tree:
    input:
        config["dataset_dir"] + "dataset/train.h5py"
    output:
        model=config["results_dir"] + "models/DecisionTree.sav",
        metrics=config["results_dir"] + "model_metrics/DecisionTree.sav",
        report=config["results_dir"] + "model_metrics/DecisionTree_report.sav"
    threads:
        1
    message:
        "Training Decision Tree classifier"
    log:
        config["results_dir"] + "logs/models/DecisionTree.log"
    benchmark:
        config["results_dir"] + "benchmarks/models/DecisionTree.benchmark.txt"
    script:
        "../scripts/machine_learning/decisiontree.py"

rule SVM:
    input:
        config["dataset_dir"] + "dataset/train.h5py"
    output:
        model=config["results_dir"] + "models/SVM.sav",
        metrics=config["results_dir"] + "model_metrics/SVM.sav",
        report=config["results_dir"] + "model_metrics/SVM_report.sav"
    threads:
        1
    message:
        "Training SVM classifier"
    log:
        config["results_dir"] + "logs/models/SVM.log"
    benchmark:
        config["results_dir"] + "benchmarks/models/SVM.benchmark.txt"
    script:
        "../scripts/machine_learning/SVM.py"

rule SVM_sampling:
    input:
        config["dataset_dir"] + "dataset/full_classification.npy"
    output:
        model=config["results_dir"] + "models/SVMsampling.sav",
        metrics=config["results_dir"] + "model_metrics/SVMsampling.sav",
        report=config["results_dir"] + "model_metrics/SVMsampling_report.sav"
    threads:
        1
    message:
        "Training SVM_sampling classifier"
    log:
        config["results_dir"] + "logs/models/SVMsampling.log"
    benchmark:
        config["results_dir"] + "benchmarks/models/SVMsampling.benchmark.txt"
    script:
        "../scripts/machine_learning/SVM_sampling.py"

# rule Kernel:
#     input:
#         config["dataset_dir"] + "dataset/train.h5py"
#     output:
#         model=config["results_dir"] + "models/Kernel.sav",
#         metrics=config["results_dir"] + "model_metrics/Kernel.sav"
#     threads:
#         1
#     message:
#         "Training SVM classifier"
#     log:
#         config["results_dir"] + "logs/models/Kernel.log"
#     benchmark:
#         config["results_dir"] + "benchmarks/models/Kernel.benchmark.txt"
#     script:
#         ...
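
In this Snakefile, config["results_dir"] is just a key in Snakemake's global config dict; every model, metrics, log, and benchmark path is built by prefixing it. A minimal sketch of how that key is typically supplied, assuming a hypothetical config.yaml next to the Snakefile with entries like dataset_dir: "data/" and results_dir: "results/":

configfile: "config.yaml"  # populates the global `config` dict

rule all:
    input:
        # requesting one final file; its path is rooted at results_dir
        config["results_dir"] + "models/SGD.sav"

The same keys can also be overridden at invocation time, e.g. snakemake --config results_dir=other_results/.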


main.py

Source: main.py (GitHub)

import os
import shutil
from pathlib import Path

from plotting import plot_counts_by_question_type, plot_count_over_time, plot_counts_by_team_and_member
from utils import get_chat_data, get_talk_data, calculate_participation_score


def handle_chats(filename, results_dir, rounds, round_names, category_names):
    teams = get_chat_data(filename)
    plot_counts_by_question_type(teams, "Chat Count by question type and category", rounds, category_names,
                                 round_names,
                                 os.path.join(results_dir, "chats_vs_category_qtype.png") if results_dir is not None else None)
    plot_count_over_time(teams, "Chat Count vs time for each team",
                         os.path.join(results_dir, "chats_count_vs_time.png") if results_dir is not None else None)
    plot_counts_by_team_and_member(teams, 'Chat count by team member',
                                   os.path.join(results_dir, "chats_vs_member_teams.png") if results_dir is not None else None)
    calculate_participation_score(teams,
                                  os.path.join(results_dir, "chat_participation_scores.txt") if results_dir is not None else None)


def handle_talks(filename, results_dir, rounds, round_names, category_names):
    teams = get_talk_data(filename)
    plot_counts_by_question_type(teams, "Talk Activity by question type and category", rounds, category_names,
                                 round_names,
                                 os.path.join(results_dir, "talks_vs_category_qtype.png") if results_dir is not None else None)
    plot_count_over_time(teams, "Talk Activity vs time for each team",
                         os.path.join(results_dir, "talks_count_vs_time.png") if results_dir is not None else None)
    plot_counts_by_team_and_member(teams, 'Talk Activity by team member',
                                   os.path.join(results_dir, "talks_vs_member_teams.png") if results_dir is not None else None)
    calculate_participation_score(teams,
                                  os.path.join(results_dir, "talk_participation_scores.txt") if results_dir is not None else None)


if __name__ == "__main__":
    rounds = [
        [[1664431635289, 1664431799384], [1664431799384, 1664431963479], [1664431963479, 1664432127574],
         [1664432127574, 1664432291670]],
        [[1664432678581, 1664432806428], [1664432806428, 1664432934275], [1664432934275, 1664433062122],
         [1664433062122, 1664433189971]]
    ]
    round_names = ["picture", "category", "trivia", "who am i"]
    category_names = ["geography", "science"]
    results_dir = "results"
    if results_dir is not None:
        if os.path.exists(results_dir):
            shutil.rmtree(results_dir)
        Path(results_dir).mkdir(exist_ok=True)
    handle_chats("chat_data_2.json", results_dir, rounds, round_names, category_names)
    handle_talks("talk_data_2.json", results_dir, rounds, round_names, category_names)
    # geography = 1664431635289 -> 1664432291670
    # picture   = 1664431635289 -> 1664431799384
    # category  = 1664431799384 -> 1664431963479
    # trivia    = 1664431963479 -> 1664432127574
    # whoami    = 1664432127574 -> 1664432291670
    # science   = 1664432678581 -> 1664433189971
    # picture   = 1664432678581 -> 1664432806428
    # category  = 1664432806428 -> 1664432934275
    # trivia    = 1664432934275 -> 1664433062122
    ...
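
The script threads results_dir through every helper and repeats the same guard, os.path.join(results_dir, name) if results_dir is not None else None, so that passing results_dir=None turns off file output entirely (each plotting and scoring helper then receives None as its destination). That guard is easy to factor out; a minimal, self-contained sketch using a hypothetical out_path helper:

import os

def out_path(results_dir, filename):
    # Build an output path only when a results directory was given;
    # otherwise return None so the caller can skip writing to disk.
    return os.path.join(results_dir, filename) if results_dir is not None else None

print(out_path("results", "chats_count_vs_time.png"))  # results/chats_count_vs_time.png
print(out_path(None, "chats_count_vs_time.png"))       # None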


Automation Testing Tutorials

Learn to execute automation testing from scratch with the LambdaTest Learning Hub: from setting up the prerequisites and running your first automation test to following best practices and diving deeper into advanced test scenarios. The LambdaTest Learning Hub compiles step-by-step guides to help you become proficient with different test automation frameworks such as Selenium, Cypress, and TestNG.

LambdaTest Learning Hubs:

YouTube

You can also refer to the video tutorials on the LambdaTest YouTube channel for step-by-step demonstrations from industry experts.

Run Avocado automation tests on the LambdaTest cloud grid

Perform automation testing on 3000+ real desktop and mobile devices online.

