How to use the mytest1 method in Lemoncheesecake

Best Python code snippets using lemoncheesecake
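
The four snippets below come from different projects on GitHub and all revolve around a test or package named mytest1: a CGAT pipeline-testing pipeline, an openEuler OBS package-manager test suite, the OpenStack murano test runner, and lemoncheesecake's own tests for its report diff command. For orientation, here is a minimal sketch of how a test named mytest1 could be declared in a lemoncheesecake suite of your own; the module name mysuite.py, the test description, and the checked values are illustrative assumptions, not taken from the snippets below:

# mysuite.py - hypothetical minimal lemoncheesecake suite declaring "mytest1"
import lemoncheesecake.api as lcc
from lemoncheesecake.matching import check_that, equal_to

@lcc.test("My first test")
def mytest1():
    # the test's reported path will be "mysuite.mytest1"
    check_that("sum", 1 + 1, equal_to(2))

Assuming a lemoncheesecake project is already set up, "lcc run mysuite.mytest1" should run just this test, and the JSON reports produced by such runs are what the diff command tested in test_cmd_diff.py below compares.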

pipeline_testing.py

Source: pipeline_testing.py (GitHub)


1"""=================================================2pipeline_testing - automated testing of pipelines3=================================================4This pipeline executes other pipelines for testing purposes.5Overview6========7This pipeline implements automated testing of CGAT pipelines. The8pipeline downloads test data from a specified URL, runs the associated9pipeline for each data set and compares the output with a reference.10The results are collected in a report.11Tests are setup in the pipeline configuration file.12Usage13=====14See :ref:`PipelineSettingUp` and :ref:`PipelineRunning` on general15information how to use CGAT pipelines.16In order to run all tests, simply enter an empty directory and type::17 python <srcdir>/pipeline_testing.py config18Edit the config files as required and then type::19 python <srcdir>/pipeline_testing.py make full20 python <srcdir>/pipeline_testing.py make build_report21The first command will download the data and run the pipelines while22the second will build a summary report.23Configuration24-------------25The pipeline requires a configured :file:`pipeline.ini` file.26Tests are described as section in the configuration file. A test27section starts with the prefix ``test_``. For following example is a28complete test setup::29 [test_mytest1]30 # pipeline to run31 pipeline=pipeline_mapping32 # pipeline target to run (default is 'full')33 # multiple targets can be specified as a comma separated list.34 target=full35 # filename suffixes to checksum36 regex_md5=gtf.gz,bed.gz,tsv.gz,bam,nreads37 # regular expression of files to be excluded from38 # test for difference. Use | to separate multiple39 # regular expressions.40 regex_only_exist=rates.gff.gz41This configuration will run the test ``mytest1``. The associated42pipeline is :doc:`pipeline_mapping` and it will execute the target43``make full``. To check if the pipeline has completed successfully, it44will compare all files ending with any of the suffixes specified45(``gtf.gz``, ``bed.gz``, etc). The comparison will be done by building46a checksum of the whole file ignoring any comments (lines starting47with a ``#``).48Some files will be different at every run, for example if they use49some form of random initialization. Thus, the exact test can be50relaxed for groups of files. Files matching the regular expression in51``regex_linecount`` will test if a file exists and the number of lines52are identitical. Files matching the regular expressions in53``regex_exist`` will thus only be tested if they exist or not.54The test expects a file called :file:`test_mytest1.tgz` with the55test data at the download URL (parameter ``data_url``).56To define a default test for a pipeline, simply name the57test ``test_<pipeline name>``, for example::58 [test_mapping]59 regex_md5=gtf.gz,bed.gz,tsv.gz,bam,nreads60Note that setting the ``target`` and ``pipeline`` options is61not necessary in this case as the default values suffice.62Input data63----------64The input data for each test resides a compressed tar-ball. The input65data should uncompress in a directory called :file:`<testname>.dir`66The tar-ball need also contain a file :file:`<testname>.ref`67containing the md5 checksums of files of a previous run of the test68that is being used as a reference.69The input data should contain all the data that is required for70running a test within a directory. 
It is best to minimize dependencies71between tests, though there is a mechanism for this (see below).72For example, the contents of a tar-ball will look light this::73 test_mytest1.dir/ # test data root74 test_mytest1.dir/Brain-F2-R1.fastq.gz # test data75 test_mytest1.dir/Brain-F1-R1.fastq.gz76 test_mytest1.dir/hg19.fasta # genomic data77 test_mytest1.dir/hg19.idx78 test_mytest1.dir/hg19.fa79 test_mytest1.dir/hg19.fa.fai80 test_mytest1.dir/pipeline.ini # pipeline configuration file81 test_mytest1.dir/indices/ # configured to work in test dir82 test_mytest1.dir/indices/bwa/ # bwa indices83 test_mytest1.dir/indices/bwa/hg19.bwt84 test_mytest1.dir/indices/bwa/hg19.ann85 test_mytest1.dir/indices/bwa/hg19.pac86 test_mytest1.dir/indices/bwa/hg19.sa87 test_mytest1.dir/indices/bwa/hg19.amb88 test_mytest1.ref # reference file89The reference file looks like this::90 test_mytest1.dir/bwa.dir/Brain-F2-R2.bwa.bam 0e1c4ee88f0249c21e16d93ac496eddf91 test_mytest1.dir/bwa.dir/Brain-F1-R2.bwa.bam 01bee8af5bbb5b1d13ed82ef1bc3620d92 test_mytest1.dir/bwa.dir/Brain-F2-R1.bwa.bam 80902c87519b6865a9ca98278728097293 test_mytest1.dir/bwa.dir/Brain-F1-R1.bwa.bam 503c99ab7042a839e56147fb1a221f2794 ...95This file is created by the test pipeline and called96:file:`test_mytest1.md5`. When setting up a test, start with an empty97files and later add this file to the test data.98Pipeline dependencies99---------------------100Some pipelines depend on the output of other pipelines, most notable101is :doc:`pipeline_annotations`. To run a set of pipelines before other102pipelines name them in the option ``prerequisites``, for example::103 prerequisites=prereq_annnotations104Pipeline output105===============106The major output is in the database file :file:`csvdb`.107Code108====109"""110from ruffus import files, transform, suffix, follows, merge, collate, regex, mkdir, jobs_limit111import sys112import pipes113import os114import re115import glob116import tarfile117import pandas118import CGAT.Experiment as E119import CGAT.IOTools as IOTools120###################################################121###################################################122###################################################123# Pipeline configuration124###################################################125# load options from the config file126import CGATPipelines.Pipeline as P127P.getParameters(128 ["%s/pipeline.ini" % os.path.splitext(__file__)[0],129 "../pipeline.ini",130 "pipeline.ini"])131PARAMS = P.PARAMS132# obtain prerequisite generic data133@files([(None, "%s.tgz" % x)134 for x in P.asList(PARAMS.get("prerequisites", ""))])135def setupPrerequisites(infile, outfile):136 '''setup pre-requisites.137 These are tar-balls that are unpacked, but not run.138 '''139 to_cluster = False140 track = P.snip(outfile, ".tgz")141 # obtain data - should overwrite pipeline.ini file142 statement = '''143 wget --no-check-certificate -O %(track)s.tgz %(data_url)s/%(track)s.tgz'''144 P.run()145 tf = tarfile.open(outfile)146 tf.extractall()147@files([(None, "%s.tgz" % x)148 for x in P.CONFIG.sections()149 if x.startswith("test")])150def setupTests(infile, outfile):151 '''setup tests.152 This method creates a directory in which a test will be run153 and downloads test data with configuration files.154 '''155 to_cluster = False156 track = P.snip(outfile, ".tgz")157 if os.path.exists(track + ".dir"):158 raise OSError('directory %s.dir already exists, please delete' % track)159 # create directory160 os.mkdir(track + ".dir")161 # run pipeline config162 pipeline_name = 
PARAMS.get(163 "%s_pipeline" % track,164 "pipeline_" + track[len("test_"):])165 statement = '''166 (cd %(track)s.dir;167 python %(pipelinedir)s/%(pipeline_name)s.py168 %(pipeline_options)s config) >& %(outfile)s.log169 '''170 P.run()171 # obtain data - should overwrite pipeline.ini file172 statement = '''173 wget --no-check-certificate -O %(track)s.tgz %(data_url)s/%(track)s.tgz'''174 P.run()175 tf = tarfile.open(outfile)176 tf.extractall()177 if not os.path.exists("%s.dir" % track):178 raise ValueError(179 "test package did not create directory '%s.dir'" % track)180def runTest(infile, outfile):181 '''run a test.182 Multiple targets are run iteratively.183 '''184 track = P.snip(outfile, ".log")185 pipeline_name = PARAMS.get(186 "%s_pipeline" % track,187 "pipeline_" + track[len("test_"):])188 pipeline_targets = P.asList(189 PARAMS.get("%s_target" % track,190 "full"))191 # do not run on cluster, mirror192 # that a pipeline is started from193 # the head node194 to_cluster = False195 template_statement = '''196 (cd %%(track)s.dir;197 python %%(pipelinedir)s/%%(pipeline_name)s.py198 %%(pipeline_options)s make %s) 1> %%(outfile)s 2> %%(outfile)s.stderr199 '''200 if len(pipeline_targets) == 1:201 statement = template_statement % pipeline_targets[0]202 P.run(ignore_errors=True)203 else:204 statements = []205 for pipeline_target in pipeline_targets:206 statements.append(template_statement % pipeline_target)207 P.run(ignore_errors=True)208# @follows(setupTests)209# @files([("%s.tgz" % x, "%s.log" % x)210# for x in P.asList(PARAMS.get("prerequisites", ""))])211# def runPreparationTests(infile, outfile):212# '''run pre-requisite pipelines.'''213# runTest(infile, outfile)214@follows(setupTests, setupPrerequisites)215@files([("%s.tgz" % x, "%s.log" % x)216 for x in P.CONFIG.sections()217 if x.startswith("test") and218 x not in P.asList(PARAMS.get("prerequisites", ""))])219def runTests(infile, outfile):220 '''run a pipeline with test data.'''221 runTest(infile, outfile)222@transform(runTests,223 suffix(".log"),224 ".report")225def runReports(infile, outfile):226 '''run a pipeline report.'''227 track = P.snip(outfile, ".report")228 pipeline_name = PARAMS.get(229 "%s_pipeline" % track,230 "pipeline_" + track[len("test_"):])231 statement = '''232 (cd %(track)s.dir; python %(pipelinedir)s/%(pipeline_name)s.py233 %(pipeline_options)s make build_report) 1> %(outfile)s 2> %(outfile)s.stderr234 '''235 P.run(ignore_errors=True)236def compute_file_metrics(infile, outfile, metric, suffixes):237 """apply a tool to compute metrics on a list of files matching238 regex_pattern."""239 if suffixes is None or len(suffixes) == 0:240 E.info("No metrics computed for {}".format(outfile))241 IOTools.touchFile(outfile)242 return243 track = P.snip(infile, ".log")244 # convert regex patterns to a suffix match:245 # prepend a .*246 # append a $247 regex_pattern = " -or ".join(["-regex .*{}$".format(pipes.quote(x))248 for x in suffixes])249 E.debug("applying metric {} to files matching {}".format(metric,250 regex_pattern))251 if metric == "file":252 statement = '''find %(track)s.dir253 -type f254 -not -regex '.*\/report.*'255 -not -regex '.*\/_.*'256 \( %(regex_pattern)s \)257 | sort -k1,1258 > %(outfile)s'''259 else:260 statement = '''find %(track)s.dir261 -type f262 -not -regex '.*\/report.*'263 -not -regex '.*\/_.*'264 \( %(regex_pattern)s \)265 -exec %(pipeline_scriptsdir)s/cgat_file_apply.sh {} %(metric)s \;266 | perl -p -e "s/ +/\\t/g"267 | sort -k1,1268 > %(outfile)s'''269 
P.run()270@follows(runReports)271@transform(runTests,272 suffix(".log"),273 ".md5")274def buildCheckSums(infile, outfile):275 '''build checksums for files in the build directory.276 Files are uncompressed before computing the checksum277 as gzip stores meta information such as the time stamp.278 '''279 track = P.snip(infile, ".log")280 compute_file_metrics(281 infile,282 outfile,283 metric="md5sum",284 suffixes=P.asList(P.asList(PARAMS.get('%s_regex_md5' % track, ""))))285@transform(runTests,286 suffix(".log"),287 ".lines")288def buildLineCounts(infile, outfile):289 '''compute line counts.290 Files are uncompressed before computing the number of lines.291 '''292 track = P.snip(infile, ".log")293 compute_file_metrics(294 infile,295 outfile,296 metric="wc -l",297 suffixes=P.asList(P.asList(PARAMS.get('%s_regex_linecount' % track, ""))))298@transform(runTests,299 suffix(".log"),300 ".exist")301def checkFileExistence(infile, outfile):302 '''check whether file exists.303 Files are uncompressed before checking existence.304 '''305 track = P.snip(infile, ".log")306 compute_file_metrics(307 infile,308 outfile,309 metric="file",310 suffixes=P.asList(P.asList(PARAMS.get('%s_regex_exist' % track, ""))))311@collate((buildCheckSums, buildLineCounts, checkFileExistence),312 regex("([^.]*).(.*)"),313 r"\1.stats")314def mergeFileStatistics(infiles, outfile):315 '''merge all file statistics.'''316 to_cluster = False317 infiles = " ".join(sorted(infiles))318 statement = '''319 %(pipeline_scriptsdir)s/merge_testing_output.sh320 %(infiles)s321 > %(outfile)s'''322 P.run()323@merge(mergeFileStatistics,324 "md5_compare.tsv")325def compareCheckSums(infiles, outfile):326 '''compare checksum files against existing reference data.327 '''328 to_cluster = False329 outf = IOTools.openFile(outfile, "w")330 outf.write("\t".join((331 ("track", "status",332 "job_finished",333 "nfiles", "nref",334 "missing", "extra",335 "different",336 "different_md5",337 "different_lines",338 "same",339 "same_md5",340 "same_lines",341 "same_exist",342 "files_missing",343 "files_extra",344 "files_different_md5",345 "files_different_lines"))) + "\n")346 for infile in infiles:347 E.info("working on {}".format(infile))348 track = P.snip(infile, ".stats")349 logfiles = glob.glob(track + "*.log")350 job_finished = True351 for logfile in logfiles:352 is_complete = IOTools.isComplete(logfile)353 E.debug("logcheck: {} = {}".format(logfile, is_complete))354 job_finished = job_finished and is_complete355 reffile = track + ".ref"356 # regular expression of files to test only for existence357 regex_exist = PARAMS.get('%s_regex_exist' % track, None)358 if regex_exist:359 regex_exist = re.compile("|".join(P.asList(regex_exist)))360 regex_linecount = PARAMS.get('%s_regex_linecount' % track, None)361 if regex_linecount:362 regex_linecount = re.compile("|".join(P.asList(regex_linecount)))363 regex_md5 = PARAMS.get('%s_regex_md5' % track, None)364 if regex_md5:365 regex_md5 = re.compile("|".join(P.asList(regex_md5)))366 if not os.path.exists(reffile):367 raise ValueError('no reference data defined for %s' % track)368 cmp_data = pandas.read_csv(IOTools.openFile(infile),369 sep="\t",370 index_col=0)371 ref_data = pandas.read_csv(IOTools.openFile(reffile),372 sep="\t",373 index_col=0)374 shared_files = set(cmp_data.index).intersection(ref_data.index)375 missing = set(ref_data.index).difference(cmp_data.index)376 extra = set(cmp_data.index).difference(ref_data.index)377 different = set(shared_files)378 # remove those for which only check for existence379 if 
regex_exist:380 same_exist = set([x for x in different381 if regex_exist.search(x)])382 different = set([x for x in different383 if not regex_exist.search(x)])384 else:385 same_exist = set()386 # select those for which only check for number of lines387 if regex_linecount:388 check_lines = [x for x in different389 if regex_linecount.search(x)]390 dd = (cmp_data['nlines'][check_lines] !=391 ref_data['nlines'][check_lines])392 different_lines = set(dd.index[dd])393 different = different.difference(check_lines)394 dd = (cmp_data['nlines'][check_lines] ==395 ref_data['nlines'][check_lines])396 same_lines = set(dd.index[dd])397 else:398 different_lines = set()399 same_lines = set()400 # remainder - check md5401 if regex_md5:402 check_md5 = [x for x in different403 if regex_md5.search(x)]404 dd = (cmp_data['md5'][check_md5] !=405 ref_data['md5'][check_md5])406 different_md5 = set(dd.index[dd])407 dd = (cmp_data['md5'][check_md5] ==408 ref_data['md5'][check_md5])409 same_md5 = set(dd.index[dd])410 else:411 different_md5 = set()412 same_md5 = set()413 if job_finished and (len(missing) + len(extra) +414 len(different_md5) + len(different_lines) == 0):415 status = "OK"416 else:417 status = "FAIL"418 outf.write("\t".join(map(str, (419 track,420 status,421 job_finished,422 len(cmp_data),423 len(ref_data),424 len(missing),425 len(extra),426 len(different_md5) + len(different_lines),427 len(different_md5),428 len(different_lines),429 len(same_md5) + len(same_lines) + len(same_exist),430 len(same_md5),431 len(same_lines),432 len(same_exist),433 ",".join(missing),434 ",".join(extra),435 ",".join(different_md5),436 ",".join(different_lines),437 ))) + "\n")438 outf.close()439@jobs_limit(PARAMS.get("jobs_limit_db", 1), "db")440@transform(compareCheckSums,441 suffix(".tsv"),442 ".load")443def loadComparison(infile, outfile):444 '''load comparison data into database.'''445 P.load(infile, outfile)446@jobs_limit(PARAMS.get("jobs_limit_db", 1), "db")447@transform(mergeFileStatistics,448 suffix(".stats"),449 "_results.load")450def loadResults(infile, outfile):451 '''load comparison data into database.'''452 P.load(infile, outfile, options="--add-index=file")453@jobs_limit(PARAMS.get("jobs_limit_db", 1), "db")454@transform(mergeFileStatistics,455 suffix(".ref"),456 "_reference.load")457def loadReference(infile, outfile):458 '''load comparison data into database.'''459 P.load(infile, outfile, options="--add-index=file")460@follows(runTests, runReports)461def run_components():462 pass463@follows(run_components, loadComparison, loadResults, loadReference)464def full():465 pass466@files(None, 'reset.log')467def reset(infile, outfile):468 '''remove all data in pipeline.'''469 to_cluster = False470 statement = '''471 rm -rf prereq_* ctmp*;472 rm -rf test_* _cache _static _templates _tmp report;473 rm -f *.log csvdb *.load *.tsv'''474 P.run()475###################################################################476###################################################################477###################################################################478# primary targets479###################################################################480@follows(mkdir("report"))481def build_report():482 '''build report from scratch.'''483 E.info("starting report build process from scratch")484 P.run_report(clean=True)485@follows(mkdir("report"))486def update_report():487 '''update report.'''488 E.info("updating report")489 P.run_report(clean=False)490@follows(update_report)491def publish_report():492 '''publish report.'''493 
E.info("publishing report")494 P.publish_report()495def main(argv=None):496 if argv is None:497 argv = sys.argv498 P.main(argv)499if __name__ == "__main__":...


test_package.py

Source: test_package.py (GitHub)


#!/bin/env python3
# -*- encoding=utf8 -*-
#******************************************************************************
# Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
# licensed under the Mulan PSL v2.
# You can use this software according to the terms and conditions of the Mulan PSL v2.
# You may obtain a copy of Mulan PSL v2 at:
#     http://license.coscl.org.cn/MulanPSL2
# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
# PURPOSE.
# See the Mulan PSL v2 for more details.
# Author: wangchong
# Create: 2021-04-23
# ******************************************************************************
"""
test all: python3 -m pytest -s test_package.py
test one class: python3 -m pytest -s test_package.py::TestCase
test one class one case: python3 -m pytest -s test_package.py::TestCase::test_1
"""
import os
import sys
import pytest

current_path = os.path.join(os.path.split(os.path.realpath(__file__))[0])
sys.path.append(os.path.join(current_path, ".."))
from func import SetEnv, CommonFunc
from core.package_manager import OBSPkgManager

S = SetEnv()
C = CommonFunc()
obs_meta_path = None
P = None

class TestCase(object):
    def setup_class(self):
        S.set_oscrc()
        S.set_gitee()
        global obs_meta_path
        global P
        obs_meta_path = C.pull_from_gitee_repo(S.gitee_info["user"], S.gitee_info["passwd"], \
                "https://gitee.com/{0}/obs_meta".format(S.gitee_info["user"]), "master", "obs_meta")
        kw = {"obs_meta_path": obs_meta_path,
              "gitee_user": S.gitee_info["user"],
              "gitee_pwd": S.gitee_info["passwd"],
              "sync_code": False,
              "branch2": "", "project2": "",
              "check_yaml": "", "check_meta": "",
              }
        P = OBSPkgManager(**kw)
        for i in range(1, 3):
            cmd = "osc list | grep home:{0}:test{1}".format(S.obs_info["user"], i)
            if os.system(cmd) == 0:
                cmd = "osc api -X DELETE /source/home:{0}:test{1}".format(S.obs_info["user"], i)
                ret = os.system(cmd)
            file_msg = """
<project name=\\"home:{0}:test{1}\\">
    <title/>
    <description/>
    <person userid=\\"{2}\\" role=\\"maintainer\\"/>
</project>
            """.format(S.obs_info["user"], i, S.obs_info["user"])
            cmd = "echo \"{0}\" > {1}/_meta_test".format(file_msg, obs_meta_path)
            if os.system(cmd) == 0:
                assert True
            else:
                assert False, "fail to exec cmd:{0}".format(cmd)
            cmd = "osc api -X PUT /source/home:{0}:test{1}/_meta -T {2}/_meta_test".format(S.obs_info["user"], \
                    i, obs_meta_path)
            if os.system(cmd) == 0:
                assert True
            else:
                assert False, "fail to exec cmd:{0}".format(cmd)
            cmd = "cd {0} && mkdir -p multi-version/test-rock/home:{1}:test{2}".format(obs_meta_path, S.obs_info["user"], i)
            if os.system(cmd) == 0:
                assert True
            else:
                assert False, "fail to exec cmd:{0}".format(cmd)
            C.commit_to_gitee_repo(obs_meta_path, "multi-version/test-rock/home:{0}:test{1}".format(S.obs_info["user"], i))

    def teardown_class(self):
        S.unset_oscrc()
        S.unset_gitee()
        for i in range(1, 3):
            cmd = "osc api -X DELETE /source/home:{0}:test{1}".format(S.obs_info["user"], i)
            if os.system(cmd) == 0:
                assert True
            else:
                assert False, "fail to delete home:{0}:test{1} after testing".format(S.obs_info["user"], i)
        cmd = "rm -fr {0}".format(obs_meta_path)
        if os.system(cmd) == 0:
            assert True
        else:
            assert False, "fail to exec cmd:{0}".format(cmd)

    def test_1(self):
        """
        test for creating package for multi-version
        """
        assert os.path.exists(obs_meta_path), "{0} not exist".format(obs_meta_path)
        for i in range(1, 3):
            file_msg = """
<services>
    <service name=\\"tar_scm_kernel_repo\\">
        <param name=\\"scm\\">repo</param>
        <param name=\\"url\\">next/test-rock/mytest{0}</param>
    </service>
</services>
            """.format(i)
            prj_path = os.path.join(obs_meta_path, "multi-version/test-rock/home:{0}:test{1}".format(S.obs_info["user"], i))
            cmd = "cd {0} && mkdir mytest{1} && echo \"{2}\" > mytest{3}/_service".format(prj_path, i, file_msg, i)
            if os.system(cmd) == 0:
                assert True
            else:
                assert False, "fail to exec cmd:{0}".format(cmd)
        C.commit_to_gitee_repo(obs_meta_path, \
                "multi-version/test-rock/home:{0}:test1/mytest1/_service".format(S.obs_info["user"]), \
                "multi-version/test-rock/home:{0}:test2/mytest2/_service".format(S.obs_info["user"]))
        P.obs_pkg_admc()
        for i in range(1, 3):
            cmd = "osc list home:{0}:test{1} mytest{2} _service".format(S.obs_info["user"], i, i)
            if os.system(cmd) == 0:
                assert True
            else:
                assert False, "fail to create package mytest{0} in project home:{1}:test{2}".format(i, S.obs_info["user"], i)

    def test_2(self):
        """
        test for modify package _service for multi-version
        """
        assert os.path.exists(obs_meta_path), "{0} not exist".format(obs_meta_path)
        cmd = "cd {0} && sed -i 's/mytest1/mytest1-new/g' \
                multi-version/test-rock/home:{1}:test1/mytest1/_service".format(obs_meta_path, S.obs_info["user"])
        if os.system(cmd) == 0:
            assert True
        else:
            assert False, "fail to exec cmd:{0}".format(cmd)
        C.commit_to_gitee_repo(obs_meta_path, "multi-version/test-rock/home:{0}:test1/mytest1/_service".format(S.obs_info["user"]))
        P.obs_pkg_admc()
        cmd = "osc api -X GET /source/home:{0}:test1/mytest1/_service".format(S.obs_info["user"])
        ret = os.popen(cmd).read()
        if "mytest1-new" in ret:
            assert True
        else:
            assert False, "fail to modify package _service"

    def test_3(self):
        """
        test for change package project for multi-version
        """
        assert os.path.exists(obs_meta_path), "{0} not exist".format(obs_meta_path)
        cmd = "cd {0} && mv multi-version/test-rock/home:{1}:test1/mytest1 \
                multi-version/test-rock/home:{2}:test2/".format(obs_meta_path, S.obs_info["user"], S.obs_info["user"])
        if os.system(cmd) == 0:
            assert True
        else:
            assert False, "fail to exec cmd:{0}".format(cmd)
        C.commit_to_gitee_repo(obs_meta_path, \
                "multi-version/test-rock/home:{0}:test1/mytest1".format(S.obs_info["user"]), \
                "multi-version/test-rock/home:{0}:test2/mytest1".format(S.obs_info["user"]))
        P.obs_pkg_admc()
        cmd = "osc list home:{0}:test1 | grep ^mytest1$".format(S.obs_info["user"])
        ret1 = os.popen(cmd).read()
        cmd = "osc list home:{0}:test2 | grep ^mytest1$".format(S.obs_info["user"])
        ret2 = os.popen(cmd).read()
        if "mytest1" not in ret1 and "mytest1" in ret2:
            assert True
        else:
            assert False, "fail to change package project"

    def test_4(self):
        """
        test for delete package _service for multi-version
        """
        assert os.path.exists(obs_meta_path), "{0} not exist".format(obs_meta_path)
        cmd = "cd {0} && rm -f multi-version/test-rock/home:{1}:test2/mytest1/_service".format(
                obs_meta_path, S.obs_info["user"])
        if os.system(cmd) == 0:
            assert True
        else:
            assert False, "fail to exec cmd:{0}".format(cmd)
        C.commit_to_gitee_repo(obs_meta_path, \
                "multi-version/test-rock/home:{0}:test2/mytest1/_service".format(S.obs_info["user"]))
        P.obs_pkg_admc()
        cmd = "osc api -X GET /source/home:{0}:test2/mytest1/_service".format(S.obs_info["user"])
        if os.system(cmd) != 0:
            assert True
        else:
            assert False, "fail to delete package _service"

    def test_5(self):
        """
        test for delete package for multi-version
        """
        assert os.path.exists(obs_meta_path), "{0} not exist".format(obs_meta_path)
        cmd = "cd {0} && rm -rf multi-version/test-rock/home:{1}:test2/mytest1".format(
                obs_meta_path, S.obs_info["user"])
        if os.system(cmd) == 0:
            assert True
        else:
            assert False, "fail to exec cmd:{0}".format(cmd)
        C.commit_to_gitee_repo(obs_meta_path, \
                "multi-version/test-rock/home:{0}:test2/mytest1".format(S.obs_info["user"]))
        P.obs_pkg_admc()
        cmd = "osc list home:{0}:test2 | grep mytest1".format(S.obs_info["user"])
        if os.system(cmd) != 0:
            assert True
        else:
            ...
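
Nearly every step in this suite shells out with os.system and translates the exit status into "assert True / assert False". If the suite were refactored, that repeated pattern could collapse into a single helper built on subprocess; this is a hypothetical sketch, not part of the original code:

# cmd_helper.py - hypothetical replacement for the repeated os.system pattern
import subprocess

def run_cmd(cmd, errmsg=None):
    """Run a shell command; fail the enclosing test with a readable message."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    assert result.returncode == 0, (
        errmsg or "fail to exec cmd:{0}\nstderr: {1}".format(cmd, result.stderr))
    return result.stdout

# mirroring test_2 above:
# out = run_cmd("osc api -X GET /source/home:user:test1/mytest1/_service")
# assert "mytest1-new" in out, "fail to modify package _service"

As a side benefit, capture_output keeps the osc output off the pytest console unless a check actually fails.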


test_test_runner.py

Source: test_test_runner.py (GitHub)


# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import sys

import fixtures
import mock
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import importutils
import six
import testtools

from murano.cmd import test_runner
from murano import version

CONF = cfg.CONF
logging.register_options(CONF)
logging.setup(CONF, 'murano')

class TestCaseShell(testtools.TestCase):
    def setUp(self):
        super(TestCaseShell, self).setUp()
        self.auth_params = {'username': 'test',
                            'password': 'test',
                            'project_name': 'test',
                            'auth_url': 'http://localhost:5000'}
        self.args = ['test-runner.py']
        for k, v in six.iteritems(self.auth_params):
            k = '--os-' + k.replace('_', '-')
            self.args.extend([k, v])
        sys.stdout = six.StringIO()
        sys.stderr = six.StringIO()
        self.useFixture(fixtures.MonkeyPatch('keystoneclient.v3.client.Client',
                                             mock.MagicMock))
        dirs = [os.path.dirname(__file__),
                os.path.join(os.path.dirname(__file__), os.pardir, os.pardir,
                             os.pardir, os.pardir, os.pardir, 'meta')]
        self.override_config('load_packages_from', dirs, 'engine')

    def tearDown(self):
        super(TestCaseShell, self).tearDown()
        CONF.clear()

    def override_config(self, name, override, group=None):
        CONF.set_override(name, override, group, enforce_type=True)
        self.addCleanup(CONF.clear_override, name, group)

    def shell(self, cmd_args=None, exitcode=0):
        orig = sys.stdout
        orig_stderr = sys.stderr
        sys.stdout = six.StringIO()
        sys.stderr = six.StringIO()
        args = self.args
        if cmd_args:
            cmd_args = cmd_args.split()
            args.extend(cmd_args)
        with mock.patch.object(sys, 'argv', args):
            result = self.assertRaises(SystemExit, test_runner.main)
            self.assertEqual(result.code, exitcode,
                             'Command finished with error.')
        stdout = sys.stdout.getvalue()
        sys.stdout.close()
        sys.stdout = orig
        stderr = sys.stderr.getvalue()
        sys.stderr.close()
        sys.stderr = orig_stderr
        return (stdout, stderr)

    def test_help(self):
        stdout, _ = self.shell('--help')
        usage = """usage: murano-test-runner [-h] [--config-file CONFIG_FILE]
                          [--os-auth-url OS_AUTH_URL]
                          [--os-username OS_USERNAME]
                          [--os-password OS_PASSWORD]
                          [--os-project-name OS_PROJECT_NAME]
                          [-l [</path1, /path2> [</path1, /path2> ...]]] [-v]
                          [--version]
                          <PACKAGE_FQN>
                          [<testMethod1, className.testMethod2> [<testMethod1, className.testMethod2"""  # noqa
        self.assertIn(usage, stdout)

    def test_version(self):
        _, stderr = self.shell('--version')
        self.assertIn(version.version_string, stderr)

    @mock.patch.object(test_runner, 'LOG')
    def test_increase_verbosity(self, mock_log):
        self.shell('io.murano.test.MyTest1 -v')
        mock_log.logger.setLevel.assert_called_with(logging.DEBUG)

    @mock.patch('keystoneclient.v3.client.Client')
    def test_os_params_replaces_config(self, mock_client):
        # Load keystone configuration parameters from config
        importutils.import_module('keystonemiddleware.auth_token')
        self.override_config('admin_user', 'new_value', 'keystone_authtoken')
        self.shell('io.murano.test.MyTest1 io.murano.test.MyTest2')
        mock_client.assert_has_calls([mock.call(**self.auth_params)])

    def test_package_all_tests(self):
        _, stderr = self.shell('io.murano.test.MyTest1 -v')
        # NOTE(efedorova): May be, there is a problem with test-runner, since
        # all logs are passed to stderr
        self.assertIn('io.murano.test.MyTest1.testSimple1.....OK', stderr)
        self.assertIn('io.murano.test.MyTest1.testSimple2.....OK', stderr)
        self.assertIn('io.murano.test.MyTest2.testSimple1.....OK', stderr)
        self.assertIn('io.murano.test.MyTest2.testSimple2.....OK', stderr)
        self.assertNotIn('thisIsNotAtestMethod', stderr)

    def test_package_by_class(self):
        _, stderr = self.shell(
            'io.murano.test.MyTest1 io.murano.test.MyTest2 -v')
        self.assertNotIn('io.murano.test.MyTest1.testSimple1.....OK', stderr)
        self.assertNotIn('io.murano.test.MyTest1.testSimple2.....OK', stderr)
        self.assertIn('io.murano.test.MyTest2.testSimple1.....OK', stderr)
        self.assertIn('io.murano.test.MyTest2.testSimple2.....OK', stderr)

    def test_package_by_test_name(self):
        _, stderr = self.shell(
            'io.murano.test.MyTest1 testSimple1 -v')
        self.assertIn('io.murano.test.MyTest1.testSimple1.....OK', stderr)
        self.assertNotIn('io.murano.test.MyTest1.testSimple2.....OK', stderr)
        self.assertIn('io.murano.test.MyTest2.testSimple1.....OK', stderr)
        self.assertNotIn('io.murano.test.MyTest2.testSimple2.....OK', stderr)

    def test_package_by_test_and_class_name(self):
        _, stderr = self.shell(
            'io.murano.test.MyTest1 io.murano.test.MyTest2.testSimple1 -v')
        self.assertNotIn('io.murano.test.MyTest1.testSimple1.....OK', stderr)
        self.assertNotIn('io.murano.test.MyTest1.testSimple2.....OK', stderr)
        self.assertIn('io.murano.test.MyTest2.testSimple1.....OK', stderr)
        self.assertNotIn('io.murano.test.MyTest2.testSimple2.....OK', stderr)

    def test_service_methods(self):
        _, stderr = self.shell(
            'io.murano.test.MyTest1 io.murano.test.MyTest1.testSimple1 -v')
        self.assertIn('Executing: io.murano.test.MyTest1.setUp', stderr)
        self.assertIn('Executing: io.murano.test.MyTest1.tearDown', stderr)

    def test_package_is_not_provided(self):
        _, stderr = self.shell(exitcode=2)
        self.assertIn('murano-test-runner: error: too few arguments', stderr)

    def test_wrong_parent(self):
        _, stderr = self.shell(
            'io.murano.test.MyTest1 io.murano.test.MyTest3 -v', exitcode=1)
        self.assertIn('Class io.murano.test.MyTest3 is not inherited from'
                      ' io.murano.test.TestFixture. Skipping it.', stderr)
    ...
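
The shell() helper above captures the runner's output by swapping sys.stdout and sys.stderr by hand and restoring them afterwards. On Python 3 the same capture can be expressed with the standard library's context managers; this is a sketch of the equivalent pattern, not murano code:

# capture_sketch.py - contextlib-based equivalent of TestCaseShell.shell()
import contextlib
import io

def run_captured(func):
    """Call func() and return whatever it wrote to stdout and stderr."""
    out, err = io.StringIO(), io.StringIO()
    with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
        with contextlib.suppress(SystemExit):  # CLI entry points call sys.exit()
            func()
    return out.getvalue(), err.getvalue()

# stdout, stderr = run_captured(test_runner.main)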


test_cmd_diff.py

Source: test_cmd_diff.py (GitHub)


from helpers.cli import cmdout
from helpers.report import report_in_progress_path
from helpers.report import make_test_result, make_suite_result, make_report
from lemoncheesecake.cli import main
from lemoncheesecake.reporting.backends.json_ import save_report_into_file
from lemoncheesecake.testtree import flatten_tests
from lemoncheesecake.cli.commands.diff import compute_diff

def check_diff(diff, added=[], removed=[], status_changed=[]):
    assert [t.path for t in diff.added] == added
    assert [t.path for t in diff.removed] == removed
    for test_path, old_status, new_status in status_changed:
        assert test_path in [t.path for t in diff.status_changed[old_status][new_status]]

def test_added_test():
    suite_1 = make_suite_result("mysuite", tests=[make_test_result("mytest1")])
    suite_2 = make_suite_result("mysuite", tests=[make_test_result("mytest1"), make_test_result("mytest2")])
    tests_1 = list(flatten_tests([suite_1]))
    tests_2 = list(flatten_tests([suite_2]))
    diff = compute_diff(tests_1, tests_2)
    check_diff(diff, added=["mysuite.mytest2"])

def test_removed_test():
    suite_1 = make_suite_result("mysuite", tests=[make_test_result("mytest1"), make_test_result("mytest2")])
    suite_2 = make_suite_result("mysuite", tests=[make_test_result("mytest1")])
    tests_1 = list(flatten_tests([suite_1]))
    tests_2 = list(flatten_tests([suite_2]))
    diff = compute_diff(tests_1, tests_2)
    check_diff(diff, removed=["mysuite.mytest2"])

def test_passed_to_failed():
    suite_1 = make_suite_result("mysuite", tests=[make_test_result("mytest1", status="passed")])
    suite_2 = make_suite_result("mysuite", tests=[make_test_result("mytest1", status="failed")])
    tests_1 = list(flatten_tests([suite_1]))
    tests_2 = list(flatten_tests([suite_2]))
    diff = compute_diff(tests_1, tests_2)
    check_diff(diff, status_changed=[["mysuite.mytest1", "passed", "failed"]])

def test_failed_to_passed():
    suite_1 = make_suite_result("mysuite", tests=[make_test_result("mytest1", status="failed")])
    suite_2 = make_suite_result("mysuite", tests=[make_test_result("mytest1", status="passed")])
    tests_1 = list(flatten_tests([suite_1]))
    tests_2 = list(flatten_tests([suite_2]))
    diff = compute_diff(tests_1, tests_2)
    check_diff(diff, status_changed=[["mysuite.mytest1", "failed", "passed"]])

def _split_lines(lines, separator):
    group = []
    for line in lines:
        if line == separator:
            if len(group) > 0:
                yield group
                group = []
        else:
            group.append(line)

def test_diff_cmd(tmpdir, cmdout):
    old_report = make_report([
        make_suite_result("mysuite", tests=[
            make_test_result("mytest1"),
            make_test_result("mytest2", status="failed"),
            make_test_result("mytest3")
        ])
    ])
    old_report_path = tmpdir.join("old_report.json").strpath
    save_report_into_file(old_report, old_report_path)
    new_report = make_report([
        make_suite_result("mysuite", tests=[
            make_test_result("mytest1", status="failed"),
            make_test_result("mytest2"),
            make_test_result("mytest4")
        ])
    ])
    new_report_path = tmpdir.join("new_report.json").strpath
    save_report_into_file(new_report, new_report_path)
    assert main(["diff", old_report_path, new_report_path]) == 0
    lines = cmdout.get_lines()
    splitted = _split_lines(lines, "")
    added = next(splitted)
    removed = next(splitted)
    status_changed = next(splitted)
    assert "mysuite.mytest4" in added[1]
    assert "mysuite.mytest3" in removed[1]
    assert "mysuite.mytest2" in status_changed[1]
    assert "mysuite.mytest1" in status_changed[2]

def test_diff_cmd_test_run_in_progress(report_in_progress_path, cmdout):
    assert main(["diff", report_in_progress_path, report_in_progress_path]) == 0
    ...
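
Outside the test suite, the same diff is available directly: programmatically through lemoncheesecake.cli.main, exactly as test_diff_cmd drives it, or as "lcc diff old_report.json new_report.json" from a shell. The report paths below are illustrative placeholders for any two lemoncheesecake JSON reports:

# diff_example.py - invoking the diff command the way test_diff_cmd does
from lemoncheesecake.cli import main

# prints added, removed and status-changed tests in blank-line-separated groups
exit_code = main(["diff", "old_report.json", "new_report.json"])
assert exit_code == 0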


Automation Testing Tutorials

Learn to execute automation testing from scratch with the LambdaTest Learning Hub, right from setting up the prerequisites and running your first automation test to following best practices and diving deeper into advanced test scenarios. The Learning Hub compiles step-by-step guides to help you become proficient with different test automation frameworks such as Selenium, Cypress, and TestNG.


YouTube

You can also refer to the video tutorials on the LambdaTest YouTube channel for step-by-step demonstrations from industry experts.

Run Lemoncheesecake automation tests on LambdaTest cloud grid

Perform automation testing on 3000+ real desktop and mobile devices online.

