How to use baseline method in Playwright Python

Best Python code snippets using playwright-python
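A quick note before the snippets: playwright-python does not ship a method literally called "baseline" — the files below are open-source projects that implement their own baseline comparisons. If what you want is baseline (visual regression) testing with Playwright in Python, the usual pattern is to store a reference screenshot once and compare later runs against it. Below is a minimal sketch of that pattern; the URL and baseline path are placeholders, and a real suite would use an image-diff library (e.g. pixelmatch or Pillow) for tolerant comparison instead of the byte-for-byte check shown here.

# Minimal baseline-screenshot sketch (illustrative pattern, not a Playwright API).
# The first run records baseline.png; later runs compare fresh screenshots to it.
from pathlib import Path
from playwright.sync_api import sync_playwright

BASELINE = Path("baseline.png")  # hypothetical location for the stored baseline

def capture(url: str) -> bytes:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        shot = page.screenshot(full_page=True)
        browser.close()
    return shot

if __name__ == "__main__":
    current = capture("https://example.com")   # placeholder URL
    if not BASELINE.exists():
        BASELINE.write_bytes(current)          # first run: record the baseline
        print("Baseline recorded")
    elif BASELINE.read_bytes() == current:     # exact compare; use an image-diff
        print("Matches baseline")              # library for tolerance in practice
    else:
        print("Differs from baseline")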

compare.py

Source: compare.py (GitHub)


#!/usr/bin/env python
"""
compare.py - versatile benchmark output compare tool
"""
import argparse
from argparse import ArgumentParser
import sys
import gbench
from gbench import util, report
from gbench.util import *

def check_inputs(in1, in2, flags):
    """
    Perform checking on the user provided inputs and diagnose any abnormalities
    """
    in1_kind, in1_err = classify_input_file(in1)
    in2_kind, in2_err = classify_input_file(in2)
    output_file = find_benchmark_flag('--benchmark_out=', flags)
    output_type = find_benchmark_flag('--benchmark_out_format=', flags)
    if in1_kind == IT_Executable and in2_kind == IT_Executable and output_file:
        print(("WARNING: '--benchmark_out=%s' will be passed to both "
               "benchmarks causing it to be overwritten") % output_file)
    if in1_kind == IT_JSON and in2_kind == IT_JSON and len(flags) > 0:
        print("WARNING: passing optional flags has no effect since both "
              "inputs are JSON")
    if output_type is not None and output_type != 'json':
        print(("ERROR: passing '--benchmark_out_format=%s' to 'compare.py`"
               " is not supported.") % output_type)
        sys.exit(1)

def create_parser():
    parser = ArgumentParser(
        description='versatile benchmark output compare tool')
    subparsers = parser.add_subparsers(
        help='This tool has multiple modes of operation:',
        dest='mode')
    parser_a = subparsers.add_parser(
        'benchmarks',
        help='The most simple use-case, compare all the output of these two benchmarks')
    baseline = parser_a.add_argument_group(
        'baseline', 'The benchmark baseline')
    baseline.add_argument(
        'test_baseline',
        metavar='test_baseline',
        type=argparse.FileType('r'),
        nargs=1,
        help='A benchmark executable or JSON output file')
    contender = parser_a.add_argument_group(
        'contender', 'The benchmark that will be compared against the baseline')
    contender.add_argument(
        'test_contender',
        metavar='test_contender',
        type=argparse.FileType('r'),
        nargs=1,
        help='A benchmark executable or JSON output file')
    parser_a.add_argument(
        'benchmark_options',
        metavar='benchmark_options',
        nargs=argparse.REMAINDER,
        help='Arguments to pass when running benchmark executables')
    parser_b = subparsers.add_parser(
        'filters', help='Compare filter one with the filter two of benchmark')
    baseline = parser_b.add_argument_group(
        'baseline', 'The benchmark baseline')
    baseline.add_argument(
        'test',
        metavar='test',
        type=argparse.FileType('r'),
        nargs=1,
        help='A benchmark executable or JSON output file')
    baseline.add_argument(
        'filter_baseline',
        metavar='filter_baseline',
        type=str,
        nargs=1,
        help='The first filter, that will be used as baseline')
    contender = parser_b.add_argument_group(
        'contender', 'The benchmark that will be compared against the baseline')
    contender.add_argument(
        'filter_contender',
        metavar='filter_contender',
        type=str,
        nargs=1,
        help='The second filter, that will be compared against the baseline')
    parser_b.add_argument(
        'benchmark_options',
        metavar='benchmark_options',
        nargs=argparse.REMAINDER,
        help='Arguments to pass when running benchmark executables')
    parser_c = subparsers.add_parser(
        'benchmarksfiltered',
        help='Compare filter one of first benchmark with filter two of the second benchmark')
    baseline = parser_c.add_argument_group(
        'baseline', 'The benchmark baseline')
    baseline.add_argument(
        'test_baseline',
        metavar='test_baseline',
        type=argparse.FileType('r'),
        nargs=1,
        help='A benchmark executable or JSON output file')
    baseline.add_argument(
        'filter_baseline',
        metavar='filter_baseline',
        type=str,
        nargs=1,
        help='The first filter, that will be used as baseline')
    contender = parser_c.add_argument_group(
        'contender', 'The benchmark that will be compared against the baseline')
    contender.add_argument(
        'test_contender',
        metavar='test_contender',
        type=argparse.FileType('r'),
        nargs=1,
        help='The second benchmark executable or JSON output file, that will be compared against the baseline')
    contender.add_argument(
        'filter_contender',
        metavar='filter_contender',
        type=str,
        nargs=1,
        help='The second filter, that will be compared against the baseline')
    parser_c.add_argument(
        'benchmark_options',
        metavar='benchmark_options',
        nargs=argparse.REMAINDER,
        help='Arguments to pass when running benchmark executables')
    return parser

def main():
    # Parse the command line flags
    parser = create_parser()
    args, unknown_args = parser.parse_known_args()
    if args.mode is None:
        parser.print_help()
        exit(1)
    assert not unknown_args
    benchmark_options = args.benchmark_options
    if args.mode == 'benchmarks':
        test_baseline = args.test_baseline[0].name
        test_contender = args.test_contender[0].name
        filter_baseline = ''
        filter_contender = ''
        # NOTE: if test_baseline == test_contender, you are analyzing the stdev
        description = 'Comparing %s to %s' % (test_baseline, test_contender)
    elif args.mode == 'filters':
        test_baseline = args.test[0].name
        test_contender = args.test[0].name
        filter_baseline = args.filter_baseline[0]
        filter_contender = args.filter_contender[0]
        # NOTE: if filter_baseline == filter_contender, you are analyzing the
        # stdev
        description = 'Comparing %s to %s (from %s)' % (
            filter_baseline, filter_contender, args.test[0].name)
    elif args.mode == 'benchmarksfiltered':
        test_baseline = args.test_baseline[0].name
        test_contender = args.test_contender[0].name
        filter_baseline = args.filter_baseline[0]
        filter_contender = args.filter_contender[0]
        # NOTE: if test_baseline == test_contender and
        # filter_baseline == filter_contender, you are analyzing the stdev
        description = 'Comparing %s (from %s) to %s (from %s)' % (
            filter_baseline, test_baseline, filter_contender, test_contender)
    else:
        # should never happen
        print("Unrecognized mode of operation: '%s'" % args.mode)
        parser.print_help()
        exit(1)
    check_inputs(test_baseline, test_contender, benchmark_options)
    options_baseline = []
    options_contender = []
    if filter_baseline and filter_contender:
        options_baseline = ['--benchmark_filter=%s' % filter_baseline]
        options_contender = ['--benchmark_filter=%s' % filter_contender]
    # Run the benchmarks and report the results
    json1 = json1_orig = gbench.util.run_or_load_benchmark(
        test_baseline, benchmark_options + options_baseline)
    json2 = json2_orig = gbench.util.run_or_load_benchmark(
        test_contender, benchmark_options + options_contender)
    # Now, filter the benchmarks so that the difference report can work
    if filter_baseline and filter_contender:
        replacement = '[%s vs. %s]' % (filter_baseline, filter_contender)
        json1 = gbench.report.filter_benchmark(
            json1_orig, filter_baseline, replacement)
        json2 = gbench.report.filter_benchmark(
            json2_orig, filter_contender, replacement)
    # Diff and output
    output_lines = gbench.report.generate_difference_report(json1, json2)
    print(description)
    for ln in output_lines:
        print(ln)

import unittest

class TestParser(unittest.TestCase):
    def setUp(self):
        self.parser = create_parser()
        testInputs = os.path.join(
            os.path.dirname(
                os.path.realpath(__file__)),
            'gbench',
            'Inputs')
        self.testInput0 = os.path.join(testInputs, 'test1_run1.json')
        self.testInput1 = os.path.join(testInputs, 'test1_run2.json')

    def test_benchmarks_basic(self):
        parsed = self.parser.parse_args(
            ['benchmarks', self.testInput0, self.testInput1])
        self.assertEqual(parsed.mode, 'benchmarks')
        self.assertEqual(parsed.test_baseline[0].name, self.testInput0)
        self.assertEqual(parsed.test_contender[0].name, self.testInput1)
        self.assertFalse(parsed.benchmark_options)

    def test_benchmarks_with_remainder(self):
        parsed = self.parser.parse_args(
            ['benchmarks', self.testInput0, self.testInput1, 'd'])
        self.assertEqual(parsed.mode, 'benchmarks')
        self.assertEqual(parsed.test_baseline[0].name, self.testInput0)
        self.assertEqual(parsed.test_contender[0].name, self.testInput1)
        self.assertEqual(parsed.benchmark_options, ['d'])

    def test_benchmarks_with_remainder_after_doubleminus(self):
        parsed = self.parser.parse_args(
            ['benchmarks', self.testInput0, self.testInput1, '--', 'e'])
        self.assertEqual(parsed.mode, 'benchmarks')
        self.assertEqual(parsed.test_baseline[0].name, self.testInput0)
        self.assertEqual(parsed.test_contender[0].name, self.testInput1)
        self.assertEqual(parsed.benchmark_options, ['e'])

    def test_filters_basic(self):
        parsed = self.parser.parse_args(
            ['filters', self.testInput0, 'c', 'd'])
        self.assertEqual(parsed.mode, 'filters')
        self.assertEqual(parsed.test[0].name, self.testInput0)
        self.assertEqual(parsed.filter_baseline[0], 'c')
        self.assertEqual(parsed.filter_contender[0], 'd')
        self.assertFalse(parsed.benchmark_options)

    def test_filters_with_remainder(self):
        parsed = self.parser.parse_args(
            ['filters', self.testInput0, 'c', 'd', 'e'])
        self.assertEqual(parsed.mode, 'filters')
        self.assertEqual(parsed.test[0].name, self.testInput0)
        self.assertEqual(parsed.filter_baseline[0], 'c')
        self.assertEqual(parsed.filter_contender[0], 'd')
        self.assertEqual(parsed.benchmark_options, ['e'])

    def test_filters_with_remainder_after_doubleminus(self):
        parsed = self.parser.parse_args(
            ['filters', self.testInput0, 'c', 'd', '--', 'f'])
        self.assertEqual(parsed.mode, 'filters')
        self.assertEqual(parsed.test[0].name, self.testInput0)
        self.assertEqual(parsed.filter_baseline[0], 'c')
        self.assertEqual(parsed.filter_contender[0], 'd')
        self.assertEqual(parsed.benchmark_options, ['f'])

    def test_benchmarksfiltered_basic(self):
        parsed = self.parser.parse_args(
            ['benchmarksfiltered', self.testInput0, 'c', self.testInput1, 'e'])
        self.assertEqual(parsed.mode, 'benchmarksfiltered')
        self.assertEqual(parsed.test_baseline[0].name, self.testInput0)
        self.assertEqual(parsed.filter_baseline[0], 'c')
        self.assertEqual(parsed.test_contender[0].name, self.testInput1)
        self.assertEqual(parsed.filter_contender[0], 'e')
        self.assertFalse(parsed.benchmark_options)

    def test_benchmarksfiltered_with_remainder(self):
        parsed = self.parser.parse_args(
            ['benchmarksfiltered', self.testInput0, 'c', self.testInput1, 'e', 'f'])
        self.assertEqual(parsed.mode, 'benchmarksfiltered')
        self.assertEqual(parsed.test_baseline[0].name, self.testInput0)
        self.assertEqual(parsed.filter_baseline[0], 'c')
        self.assertEqual(parsed.test_contender[0].name, self.testInput1)
        self.assertEqual(parsed.filter_contender[0], 'e')
        self.assertEqual(parsed.benchmark_options[0], 'f')

    def test_benchmarksfiltered_with_remainder_after_doubleminus(self):
        parsed = self.parser.parse_args(
            ['benchmarksfiltered', self.testInput0, 'c', self.testInput1, 'e', '--', 'g'])
        self.assertEqual(parsed.mode, 'benchmarksfiltered')
        self.assertEqual(parsed.test_baseline[0].name, self.testInput0)
        self.assertEqual(parsed.filter_baseline[0], 'c')
        self.assertEqual(parsed.test_contender[0].name, self.testInput1)
        self.assertEqual(parsed.filter_contender[0], 'e')
        self.assertEqual(parsed.benchmark_options[0], 'g')

if __name__ == '__main__':
    # unittest.main()
    main()
# vim: tabstop=4 expandtab shiftwidth=4 softtabstop=4
# kate: tab-width: 4; replace-tabs on; indent-width 4; tab-indents: off;
...
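Going by the three subcommands defined in create_parser() above, typical invocations look like the following (the file and filter names are placeholders for your own Google Benchmark executables, JSON dumps, and benchmark-name regexes):

compare.py benchmarks base.json contender.json
compare.py filters ./benchmark BM_Old BM_New
compare.py benchmarksfiltered ./base BM_Foo ./contender BM_Foo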


add_data.py

Source: add_data.py (GitHub)


# fingertraining
# Stefan Hochuli, 31.08.2021,
# Folder: File: add_data.py
#
from speck_weg.models import (TrainingThemeModel, TrainingProgramModel, TrainingExerciseModel,
                              TrainingProgramExerciseModel, UserModel,
                              WorkoutSessionModel, WorkoutExerciseModel)
from speck_weg.db import CRUD

if __name__ == '__main__':
    db = CRUD(drop_all=True)
    usr1 = UserModel(name='Stefan', weight=72)
    tth1 = TrainingThemeModel(name='Beastmaker 1000', sequence=1)
    tpr1 = TrainingProgramModel(name='5a', sequence=2, training_theme=tth1)
    tpr2 = TrainingProgramModel(name='5b', sequence=3, training_theme=tth1)
    tpr3 = TrainingProgramModel(name='5c', sequence=4, training_theme=tth1)
    tex1 = TrainingExerciseModel(name='Jugs', baseline_sets=1, baseline_repetitions=7, baseline_duration=7, user=usr1)
    tex2 = TrainingExerciseModel(name='Open 3 tief', baseline_sets=1, baseline_repetitions=7, baseline_duration=7, user=usr1)
    tex3 = TrainingExerciseModel(name='Open 4 tief', baseline_sets=1, baseline_repetitions=7, baseline_duration=7, user=usr1)
    tex4 = TrainingExerciseModel(name='Half Crimp tief', description='Vorderstes Glied', baseline_sets=1, baseline_repetitions=7,
                                 baseline_duration=7, user=usr1)
    tex5 = TrainingExerciseModel(name='Sloper 20°', baseline_sets=1, baseline_repetitions=7, baseline_duration=7, user=usr1)
    tex6 = TrainingExerciseModel(name='Open 3 mittel', baseline_sets=1, baseline_repetitions=7, baseline_duration=7, user=usr1)
    tex7 = TrainingExerciseModel(name='Open 2 tief (Zeig-/Mittelfinger)',
                                 description='Zeige- / Mittelfinger oder Mittel- / Ringfinger',
                                 baseline_sets=1, baseline_repetitions=7, baseline_duration=7, user=usr1)
    tex8 = TrainingExerciseModel(name='Open 4 halb-tief ', description='Vorderstes Glied',
                                 baseline_sets=1, baseline_repetitions=7, baseline_duration=7, user=usr1)
    tex9 = TrainingExerciseModel(name='Open 4 mittel', baseline_sets=1, baseline_repetitions=7, baseline_duration=7, user=usr1)
    # 5a
    tpe1 = TrainingProgramExerciseModel(training_program=tpr1, training_exercise=tex1, sequence=1)
    tpe2 = TrainingProgramExerciseModel(training_program=tpr1, training_exercise=tex1, sequence=2)
    tpe3 = TrainingProgramExerciseModel(training_program=tpr1, training_exercise=tex2, sequence=3)
    tpe4 = TrainingProgramExerciseModel(training_program=tpr1, training_exercise=tex2, sequence=4)
    tpe5 = TrainingProgramExerciseModel(training_program=tpr1, training_exercise=tex3, sequence=5)
    tpe6 = TrainingProgramExerciseModel(training_program=tpr1, training_exercise=tex3, sequence=6)
    # 5b
    tpe7 = TrainingProgramExerciseModel(training_program=tpr2, training_exercise=tex4, sequence=1)
    tpe8 = TrainingProgramExerciseModel(training_program=tpr2, training_exercise=tex5, sequence=2)
    tpe9 = TrainingProgramExerciseModel(training_program=tpr2, training_exercise=tex4, sequence=3)
    tpe10 = TrainingProgramExerciseModel(training_program=tpr2, training_exercise=tex6, sequence=4)
    tpe11 = TrainingProgramExerciseModel(training_program=tpr2, training_exercise=tex7, sequence=5)
    tpe12 = TrainingProgramExerciseModel(training_program=tpr2, training_exercise=tex8, sequence=6)
    # 5c
    tpe13 = TrainingProgramExerciseModel(training_program=tpr3, training_exercise=tex4, sequence=1)
    tpe14 = TrainingProgramExerciseModel(training_program=tpr3, training_exercise=tex6, sequence=2)
    tpe15 = TrainingProgramExerciseModel(training_program=tpr3, training_exercise=tex9, sequence=3)
    tpe16 = TrainingProgramExerciseModel(training_program=tpr3, training_exercise=tex7, sequence=4)
    tpe17 = TrainingProgramExerciseModel(training_program=tpr3, training_exercise=tex5, sequence=5)
    tpe18 = TrainingProgramExerciseModel(training_program=tpr3, training_exercise=tex9, sequence=6)
    # Warmup
    tpr4 = TrainingProgramModel(name='Warmup Schultern / Arme', training_theme=tth1, sequence=1)
    tex10 = TrainingExerciseModel(name='Terraband diagonal', baseline_sets=3, baseline_repetitions=10)
    tex11 = TrainingExerciseModel(name='Terraband Schulter 1/4 Drehung', baseline_sets=3, baseline_repetitions=10)
    tex12 = TrainingExerciseModel(name='Terraband Ellbogen 1/4 Drehung', baseline_sets=3, baseline_repetitions=10)
    tex13 = TrainingExerciseModel(name='Liegestützen', baseline_sets=3, baseline_repetitions=10)
    tex14 = TrainingExerciseModel(name='Schulter anspannen 2 Arme', description='3s halten', baseline_sets=3, baseline_repetitions=10, user=usr1)
    tex15 = TrainingExerciseModel(name='Schulter anspannen 1 Arm', description='3s halten', baseline_sets=3, baseline_repetitions=5, user=usr1)
    tex16 = TrainingExerciseModel(name='Klimmzug', baseline_sets=3, baseline_repetitions=5, user=usr1)
    tpe19 = TrainingProgramExerciseModel(training_program=tpr4, training_exercise=tex10, sequence=1)
    tpe20 = TrainingProgramExerciseModel(training_program=tpr4, training_exercise=tex11, sequence=2)
    tpe21 = TrainingProgramExerciseModel(training_program=tpr4, training_exercise=tex12, sequence=3)
    tpe22 = TrainingProgramExerciseModel(training_program=tpr4, training_exercise=tex13, sequence=4)
    tpe23 = TrainingProgramExerciseModel(training_program=tpr4, training_exercise=tex14, sequence=5)
    tpe24 = TrainingProgramExerciseModel(training_program=tpr4, training_exercise=tex15, sequence=6)
    tpe25 = TrainingProgramExerciseModel(training_program=tpr4, training_exercise=tex16, sequence=7)
    tth101 = TrainingThemeModel(name='Test', sequence=2)
    tpr101 = TrainingProgramModel(name='Test', training_theme=tth101, sequence=1)
    tex101 = TrainingExerciseModel(name='weight', baseline_sets=2, baseline_repetitions=3, baseline_custom_weight=3.)
    tex102 = TrainingExerciseModel(name='body_weight duration', baseline_sets=3, baseline_repetitions=3, user=usr1, baseline_duration=11)
    tex103 = TrainingExerciseModel(name='repetitions', baseline_sets=3, baseline_repetitions=2)
    tpe101 = TrainingProgramExerciseModel(training_program=tpr101, training_exercise=tex101, sequence=1)
    tpe102 = TrainingProgramExerciseModel(training_program=tpr101, training_exercise=tex102, sequence=2)
    tpe103 = TrainingProgramExerciseModel(training_program=tpr101, training_exercise=tex103, sequence=3)
    # workout
    wse1 = WorkoutSessionModel(training_program=tpr101)
    wex1 = WorkoutExerciseModel(workout_session=wse1, training_exercise=tex101, sequence=1, set=1, weight=1.5, repetitions=3)
    wex2 = WorkoutExerciseModel(workout_session=wse1, training_exercise=tex101, sequence=1, set=2, weight=3, repetitions=6)
    wex3 = WorkoutExerciseModel(workout_session=wse1, training_exercise=tex102, sequence=2, set=1, weight=72, duration=22, repetitions=3)
    wex4 = WorkoutExerciseModel(workout_session=wse1, training_exercise=tex102, sequence=2, set=2, weight=36, duration=11, repetitions=3)
    wex5 = WorkoutExerciseModel(workout_session=wse1, training_exercise=tex102, sequence=2, set=3, weight=36, duration=22, repetitions=3)
    wex6 = WorkoutExerciseModel(workout_session=wse1, training_exercise=tex103, sequence=3, set=1, repetitions=2)
    wex7 = WorkoutExerciseModel(workout_session=wse1, training_exercise=tex103, sequence=3, set=1, repetitions=4)
    objects = [
        usr1, tth1, tpr1, tpr2, tpr3,
        tex1, tex2, tex3, tex4, tex5, tex6, tex7, tex8, tex9,
        tpe1, tpe2, tpe3, tpe4, tpe5, tpe6,
        tpe7, tpe8, tpe9, tpe10, tpe11, tpe12,
        tpe13, tpe14, tpe15, tpe16, tpe17, tpe18,
        tpr4, tex10, tex11, tex12, tex13, tex14, tex15,
        tpe19, tpe20, tpe21, tpe22, tpe23, tpe24,
        tth101, tpr101, tex101, tex102, tex103,
        tpe101, tpe102, tpe103,
        wse1, wex1, wex2, wex3, wex4, wex5, wex6, wex7,
    ]
...


MetricsDriver.py

Source: MetricsDriver.py (GitHub)


#!/usr/bin/env python
# ----------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT license. See LICENSE.md file in the project root for full license information.
# ---------------------------------------------------------
# This script extracts information (hardware used, final results) contained in the baselines files
# and generates a markdown file (wiki page)
import sys, os, re
import TestDriver as td
try:
    import six
except ImportError:
    print("Python package 'six' not installed. Please run 'pip install six'.")
    sys.exit(1)
thisDir = os.path.dirname(os.path.realpath(__file__))
windows = os.getenv("OS") == "Windows_NT"

class Baseline:
    def __init__(self, fullPath, testResult="", trainResult=""):
        self.fullPath = fullPath
        self.cpuInfo = ""
        self.gpuInfo = ""
        self.testResult = testResult
        self.trainResult = trainResult

    # extracts results info. e.g.
    # Finished Epoch[ 5 of 5]: [Training] ce = 2.32253198 * 1000 err = 0.90000000 * 1000 totalSamplesSeen = 5000 learningRatePerSample = 2e-06 epochTime=0.175781
    # Final Results: Minibatch[1-1]: err = 0.90000000 * 100 ce = 2.32170486 * 100 perplexity = 10.1930372
    def extractResultsInfo(self, baselineContent):
        trainResults = re.findall('.*(Finished Epoch\[ *\d+ of \d+\]\: \[Training\]) (.*)', baselineContent)
        if trainResults:
            self.trainResult = Baseline.formatLastTrainResult(trainResults[-1])[0:-2]
        testResults = re.findall('.*(Final Results: Minibatch\[1-\d+\]:)(\s+\* \d+)?\s+(.*)', baselineContent)
        if testResults:
            self.testResult = Baseline.formatLastTestResult(testResults[-1])[0:-2]

    # extracts cpu and gpu info from baseline content. e.g.:
    # CPU info:
    #     CPU Model Name: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
    #     Hardware threads: 12
    # GPU info:
    #
    #     Device[0]: cores = 2496; computeCapability = 5.2; type = "Quadro M4000"; memory = 8192 MB
    #     Device[1]: cores = 96; computeCapability = 2.1; type = "Quadro 600"; memory = 1024 MB
    #     Total Memory: 33474872 kB
    def extractHardwareInfo(self, baselineContent):
        startCpuInfoIndex = baselineContent.find("CPU info:")
        endCpuInfoIndex = baselineContent.find("----------", startCpuInfoIndex)
        cpuInfo = re.search("^CPU info:\s+"
                            "CPU Model (Name:\s*.*)\s+"
                            "(Hardware threads: \d+)\s+"
                            "Total (Memory:\s*.*)\s+", baselineContent[startCpuInfoIndex:endCpuInfoIndex], re.MULTILINE)
        if cpuInfo is None:
            return
        self.cpuInfo = "\n".join(cpuInfo.groups())
        startGpuInfoIndex = baselineContent.find("GPU info:")
        endGpuInfoIndex = baselineContent.find("----------", startGpuInfoIndex)
        gpuInfoSnippet = baselineContent[startGpuInfoIndex:endGpuInfoIndex]
        gpuDevices = re.findall("\t\t(Device\[\d+\]: cores = \d+; computeCapability = \d\.\d; type = .*; memory = \d+ MB)[\r\n]?", gpuInfoSnippet)
        if not gpuDevices:
            return
        gpuInfo = [device for device in gpuDevices]
        self.gpuInfo = "\n".join(gpuInfo)

    @staticmethod
    def formatLastTestResult(line):
        return line[0] + line[1] + "\n" + line[2].replace('; ', '\n').replace(' ', '\n')

    @staticmethod
    def formatLastTrainResult(line):
        epochsInfo, parameters = line[0], line[1]
        return epochsInfo + '\n' + parameters.replace('; ', '\n')

class Example:
    allExamplesIndexedByFullName = {}

    def __init__(self, suite, name, testDir):
        self.suite = suite
        self.name = name
        self.fullName = suite + "/" + name
        self.testDir = testDir
        self.baselineList = []

        self.gitHash = ""

    @staticmethod
    def discoverAllExamples():
        testsDir = thisDir
        for dirName, subdirList, fileList in os.walk(testsDir):
            if 'testcases.yml' in fileList:
                testDir = dirName
                exampleName = os.path.basename(dirName)
                suiteDir = os.path.dirname(dirName)
                # suite name will be derived from the path components
                suiteName = os.path.relpath(suiteDir, testsDir).replace('\\', '/')
                example = Example(suiteName, exampleName, testDir)
                Example.allExamplesIndexedByFullName[example.fullName.lower()] = example

    # it returns a list with all baseline files for current example
    def findBaselineFilesList(self):
        baselineFilesList = []
        oses = [".windows", ".linux", ""]
        devices = [".cpu", ".gpu", ""]
        flavors = [".debug", ".release", ""]
        for o in oses:
            for device in devices:
                for flavor in flavors:
                    candidateName = "baseline" + o + flavor + device + ".txt"
                    fullPath = td.cygpath(os.path.join(self.testDir, candidateName), relative=True)
                    if os.path.isfile(fullPath):
                        baseline = Baseline(fullPath)
                        baselineFilesList.append(baseline)
        return baselineFilesList

# extracts information for every example and stores it in Example.allExamplesIndexedByFullName
def getExamplesMetrics():
    Example.allExamplesIndexedByFullName = list(sorted(Example.allExamplesIndexedByFullName.values(), key=lambda test: test.fullName))
    allExamples = Example.allExamplesIndexedByFullName
    print("CNTK - Metrics collector")
    for example in allExamples:
        baselineListForExample = example.findBaselineFilesList()
        six.print_("Example: " + example.fullName)
        for baseline in baselineListForExample:
            with open(baseline.fullPath, "r") as f:
                baselineContent = f.read()
                gitHash = re.search('.*Build SHA1:\s([a-z0-9]{40})[\r\n]+', baselineContent, re.MULTILINE)
                if gitHash is None:
                    continue
                example.gitHash = gitHash.group(1)
                baseline.extractHardwareInfo(baselineContent)
                baseline.extractResultsInfo(baselineContent)
            example.baselineList.append(baseline)

# creates a list with links to each example result
def createAsciidocExampleList(file):
    for example in Example.allExamplesIndexedByFullName:
        if not example.baselineList:
            continue
        file.write("".join(["<<", example.fullName.replace("/", "").lower(), ",", example.fullName, ">> +\n"]))
    file.write("\n")

def writeMetricsToAsciidoc():
    metricsFile = open("metrics.adoc", 'wb')
    createAsciidocExampleList(metricsFile)
    for example in Example.allExamplesIndexedByFullName:
        if not example.baselineList:
            continue
        metricsFile.write("".join(["===== ", example.fullName, "\n"]))
        metricsFile.write("".join(["**Git Hash: **", example.gitHash, "\n\n"]))
        metricsFile.write("[cols=3, options=\"header\"]\n")
        metricsFile.write("|====\n")
        metricsFile.write("|Log file / Configuration | Train Result | Test Result\n")
        for baseline in example.baselineList:
            pathInDir = baseline.fullPath.split(thisDir)[1][1:]
            metricsFile.write("".join(["|link:../blob/", example.gitHash[:7], "/Tests/EndToEndTests/", pathInDir, "[",
                                       baseline.fullPath.split("/")[-1], "] .2+|", baseline.trainResult.replace("\n", " "), " .2+|",
                                       baseline.testResult.replace("\n", " "), "|\n"]))
            cpuInfo = "".join(["CPU: ", re.sub("[\r]?\n", ' ', baseline.cpuInfo)])
            gpuInfo = re.sub("[\r]?\n", ' ', baseline.gpuInfo)
            if gpuInfo:
                metricsFile.write("".join([cpuInfo, " GPU: ", gpuInfo]))
            else:
                metricsFile.write(cpuInfo)
            metricsFile.write("\n|====\n\n")

# ======================= Entry point =======================
six.print_("==============================================================================")
Example.discoverAllExamples()
getExamplesMetrics()
...
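In short, the script above walks the directory tree beneath its own location for folders containing a testcases.yml, collects every baseline[.os][.flavor][.device].txt file found there, extracts the build SHA1, hardware details, and final train/test results from each, and writeMetricsToAsciidoc() renders the collected metrics into per-example tables in metrics.adoc (the visible entry point is truncated before that call).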


aggregate_rule_ranks.py

Source: aggregate_rule_ranks.py (GitHub)


#!/usr/bin/python
# coding=utf-8
# -*- encoding: utf-8 -*-
import sys, codecs, copy, commands, os;
sys.stdin = codecs.getreader('utf-8')(sys.stdin);
sys.stdout = codecs.getwriter('utf-8')(sys.stdout);
sys.stderr = codecs.getwriter('utf-8')(sys.stderr);

baseline_score = 0.0;
rule_centre = {};
rule_scores = {};
rule_content = {};

if len(sys.argv) < 3: #{  # was `< 2`: both arguments below are required
    print 'rank_candidate_rules.py <rules.rlx> <temporary dir>';
    sys.exit(-1);
#}
rules_rlx_file = sys.argv[1];
temporary_dir = sys.argv[2];
rule_aggregate_file = sys.argv[2] + '/rule.aggregate.txt';
rule_ranks_file = sys.argv[2] + '/rules.ranked.txt';
###############################################################################
for line in file(rules_rlx_file).read().split('\n'): #{
    if line.count('SUBSTITUTE') > 0: #{
        rule_id = line.split(':')[1].split(' ')[0];
        rule_scores[rule_id] = -10.0;
        rule_centre[rule_id] = line.split('"')[1];
        rule_content[rule_id] = line;
    #}
#}
print >>sys.stderr, '+ Read', len(rule_scores), 'rules.';
###############################################################################
# Calculate the total score of the baseline.
baseline_lines = [];
baseline_lines_hash = {};
if not os.path.isfile(temporary_dir + '/baseline.out'): #{
    # If the file does not exist, then they need to run generate_rule_diffs.py
    print 'Please run generate_rule_diffs.py before rank_candidate_rules.py';
    sys.exit(-1);
#}
baseline_lines = file(temporary_dir + '/baseline.out').read().split('\n');
for line in baseline_lines: #{
    if len(line) < 2: #{
        continue;
    #}
    line_score = float(line.split('||')[0].strip());
    line_id = line.split('||')[1].split('[')[1].split(':')[0];
    baseline_lines_hash[line_id] = line_score;
    baseline_score = baseline_score + line_score;
#}
print >>sys.stderr, '+ Total score for ' + str(len(baseline_lines)) + ' sentences is ' + str(baseline_score);
###############################################################################
centres = set();
for centre in rule_centre: #{
    centres.add(rule_centre[centre]);
#}
print >>sys.stderr, '+ There are ' + str(len(centres)) + ' rule centres.';
###############################################################################
print >>sys.stderr, '+ Processing ' + str(len(rule_scores)) + ' rules.'
def rule_id_compare(x, y): #{
    x = int(x.replace('r', ''));
    y = int(y.replace('r', ''));
    return x - y;
#}
sorted_rule_ids = list(rule_scores);
sorted_rule_ids.sort(cmp=rule_id_compare);
#baseline_tagged_set = set(file(baseline_tagged_file).read().split('\n'));
count = 0;
rules_phrases = {};
for ag_line in file(temporary_dir + '/rules.ranked.txt').read().split('\n'): #{
    if len(ag_line) < 2: #{
        continue;
    #}
    # -3.78711 || [5970:0:3:26 || r1 ].[] ...
    #
    # rules_phrases['r1'][5970] = -3.78711;
    #
    prob = float(ag_line.split('||')[0].strip());
    phid = int(ag_line.split('[')[1].split(':')[0].strip());
    rule_id = ag_line.split('||')[2].split(']')[0].strip();
    #print rule_id, phid, prob;
    if rule_id not in rules_phrases: #{
        rules_phrases[rule_id] = {};
    #}
    rules_phrases[rule_id][phid] = prob;
#}
baseline_phrases = {};
for baseline_line in file(temporary_dir + '/baseline.out').read().split('\n'): #{
    if len(baseline_line) < 2: #{
        continue;
    #}
    # -3.41661 || [10:0:4:12 || ].[] Finnar Defended months...
    #
    # baseline_phrases[5970] = -3.41661;
    #
    prob = float(baseline_line.split('||')[0].strip());
    phid = int(baseline_line.split('[')[1].split(':')[0].strip());
    #print 'baseline', phid, prob;
    baseline_phrases[phid] = prob;
#}
def rule_id_compare(x, y): #{
    x = int(x.replace('r', ''));
    y = int(y.replace('r', ''));
    return x - y;
#}
sorted_rule_ids = list(rules_phrases);
sorted_rule_ids.sort(cmp=rule_id_compare);
all_rule_probs = {};
for rule_id in sorted_rule_ids: #{
    rule_score = 0.0;
    for phid in baseline_phrases: #{
        if phid in rules_phrases[rule_id]: #{
            rule_score = rule_score + rules_phrases[rule_id][phid];
        #}
        else: #{
            rule_score = rule_score + baseline_phrases[phid];
        #}
    #}
    #print >>sys.stderr, (rule_score - baseline_score), rule_id, baseline_score, rule_score;
    all_rule_probs[rule_id] = (rule_score - baseline_score);
#}
alist = sorted(all_rule_probs.iteritems(), key=lambda (k, v): (v, k), reverse=True);
for rule in alist: #{
    # ('r1942', -38.076839999979711)
    #print '%.5f\t%s\t%s' % (rule[1], rule[0], rule_content[rule[0]]);
    print '%.4f\t%s' % (rule[1], rule_content[rule[0]]);
#}
# current_rule_score = 0.0;
# for line in file(temporary_dir + '/rule.ranked.' + rule_id).read().split('\n'): #{
#     if len(line) > 2: #{
#         current_rule_score = current_rule_score + float(line.split('||')[0].strip());
#     #}
# #}
#
# # baseline_lines_hash
#
# bl_intersect_score = 0.0;
# for line in list(intersection_baseline_rules): #{
#     if len(line) > 2: #{
#         bl_line_id = line.split('||')[0].strip().split(':')[0].strip('[');
#         bl_intersect_score = bl_intersect_score + baseline_lines_hash[bl_line_id];
#     #}
# #}
#
# bl_diff = (current_rule_score + bl_intersect_score) - baseline_score;
# print '++ rule_score: ' + str(current_rule_score) + '; bl_intersect: ' + str(bl_intersect_score);
# print '++ bl_score: ' + str(baseline_score) + '; bl_with_rule: ' + str(current_rule_score + bl_intersect_score) + '; bl_diff: ' + str(bl_diff);
#
# rule_scores[rule_id] = current_rule_score + bl_intersect_score;
#
# print intersection_baseline_rules;
# print diff_baseline_rules;
#}
# WHY ARE WE RANKING TWICE ?
# WE ALREADY KNOW THE SCORES...
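Note that this last snippet is Python 2 code (print statements, print >> redirection, the commands module, dict.iteritems(), and cmp=-based sorting); it will not run under Python 3 without porting.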


Playwright tutorial

LambdaTest’s Playwright tutorial will give you a broader idea of the Playwright automation framework, its unique features, and its use cases, with examples to deepen your understanding of Playwright testing. This tutorial gives A-to-Z guidance, from installing the Playwright framework to best practices and advanced concepts.

Chapters:

  1. What is Playwright: Playwright is comparatively new but has gained considerable popularity. Get to know some history of Playwright, along with some interesting facts connected with it.
  2. How To Install Playwright: Learn in detail which basic configuration and dependencies are required to install Playwright and run a test. Get step-by-step directions for installing the Playwright automation framework.
  3. Playwright Futuristic Features: Launched in 2020, Playwright quickly gained huge popularity because of compelling features such as the Playwright Test Generator and Inspector, the Playwright Reporter, and the Playwright auto-waiting mechanism. Read up on those features to master Playwright testing.
  4. What is Component Testing: Component testing in Playwright is a unique feature that allows a tester to test a single component of a web application without integrating it with other elements. Learn how to perform component testing in the Playwright automation framework.
  5. Inputs And Buttons In Playwright: Every website has input boxes and buttons; learn about testing inputs and buttons in different scenarios, with examples (a combined sketch follows this list).
  6. Functions and Selectors in Playwright: Learn how to launch the Chromium browser with Playwright. Also, gain a better understanding of some important functions like “BrowserContext,” which allows you to run multiple browser sessions, and “newPage,” which opens a page to interact with.
  7. Handling Alerts and Dropdowns in Playwright: Playwright can interact with different types of alerts and pop-ups, such as simple, confirmation, and prompt dialogs, and different types of dropdowns, such as single-select and multi-select. Get hands-on with handling alerts and dropdowns in Playwright testing.
  8. Playwright vs Puppeteer: Get to know the differences between the two testing frameworks: how they differ from one another, which browsers they support, and what features they provide.
  9. Run Playwright Tests on LambdaTest: Playwright testing with LambdaTest takes test performance to the next level. You can run multiple Playwright tests in parallel on the LambdaTest test cloud (a connection sketch also follows this list). Get a step-by-step guide to running your Playwright tests on the LambdaTest platform.
  10. Playwright Python Tutorial: The Playwright automation framework supports all major languages, such as Python, JavaScript, TypeScript, and .NET. However, Python end-to-end testing with Playwright has various advantages thanks to Python’s versatility. Get the hang of Playwright Python testing with this chapter.
  11. Playwright End To End Testing Tutorial: Get hands-on with Playwright end-to-end testing and learn to use some exciting features such as Trace Viewer, debugging, networking, component testing, visual testing, and many more.
  12. Playwright Video Tutorial: Watch video tutorials on Playwright testing from experts and get an in-depth explanation of Playwright automation testing.
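As referenced in chapters 5-7 above, here is a short combined sketch of contexts and pages, inputs and buttons, dialog handling, and dropdown selection in Playwright Python. The URL and selectors are placeholders for your own application; everything else uses documented playwright-python sync-API calls.

# Sketch for chapters 5-7: contexts/pages, inputs and buttons, dialogs, dropdowns.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context()          # isolated session ("BrowserContext")
    page = context.new_page()                # "newPage": open a tab to drive
    page.goto("https://example.com/form")    # placeholder URL

    page.fill("#name", "Stefan")             # type into an input box
    page.select_option("#country", "CH")     # pick from a single-select dropdown
    page.on("dialog", lambda d: d.accept())  # auto-accept alert/confirm/prompt
    page.click("button[type=submit]")        # click a button that may open a dialog

    context.close()
    browser.close()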
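And for chapter 9, this is roughly how a Playwright Python test connects to LambdaTest's cloud grid. The wss endpoint and capability keys follow LambdaTest's documented pattern at the time of writing but should be treated as assumptions to verify against their current docs; LT_USERNAME and LT_ACCESS_KEY are placeholder environment variables for your credentials.

# Rough sketch: connecting playwright-python to the LambdaTest cloud grid.
import json
import os
import urllib.parse
from playwright.sync_api import sync_playwright

capabilities = {
    "browserName": "Chrome",        # assumed capability names; check LambdaTest docs
    "browserVersion": "latest",
    "LT:Options": {
        "platform": "Windows 10",
        "user": os.environ["LT_USERNAME"],        # placeholder env vars
        "accessKey": os.environ["LT_ACCESS_KEY"],
        "name": "Playwright baseline demo",
    },
}

with sync_playwright() as p:
    browser = p.chromium.connect(               # CDP connection to the remote grid
        "wss://cdp.lambdatest.com/playwright?capabilities="
        + urllib.parse.quote(json.dumps(capabilities))
    )
    page = browser.new_page()
    page.goto("https://example.com")
    print(page.title())
    browser.close()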

Run Playwright Python automation tests on the LambdaTest cloud grid

Perform automation testing on 3000+ real desktop and mobile devices online.

Try LambdaTest now!

Get 100 automation testing minutes FREE!

Next-Gen App & Browser Testing Cloud
