How to use the test_command_syntax method in molecule

Best Python code snippets using molecule_python

predict_blood_donation.py

Source: predict_blood_donation.py (GitHub)


...
# In[61]:

# Print target incidence proportions, rounding output to 3 decimal places
transfusion.target.value_counts(normalize=True).round(3)

# In[62]:

get_ipython().run_cell_magic('nose', '', '\ndef strip_comment_lines(cell_input):\n """Returns cell input string with comment lines removed."""\n return \'\\n\'.join(line for line in cell_input.splitlines() if not line.startswith(\'#\'))\n\nlast_input = strip_comment_lines(In[-2])\nlast_output = _\n\ndef test_command_syntax():\n assert \'transfusion.target\' in last_input, \\\n "Did you call \'value_counts()\' method on \'transfusion.target\' column?"\n assert \'value_counts(normalize=True)\' in last_input, \\\n "Did you use \'normalize=True\' parameter?"\n assert \'round\' in last_input, \\\n "Did you call \'round()\' method?"\n assert \'round(3)\' in last_input, \\\n "Did you call \'round()\' method with the correct argument?"\n assert last_input.find(\'value\') < last_input.find(\'round\'), \\\n "Did you chain \'value_counts()\' and \'round()\' methods in the correct order?"\n\ndef test_command_output():\n try:\n assert "0.762" in last_output.to_string()\n except AttributeError:\n assert False, \\\n "Please use transfusion.target.value_counts(normalize=True).round(3) to inspect proportions, not the display() or print() functions."\n except AssertionError:\n assert False, \\\n "Hmm, the output of the cell is not what we expected. You should see 0.762 in your output."')

# ## 6. Splitting transfusion into train and test datasets
# <p>We'll now use <code>train_test_split()</code> method to split <code>transfusion</code> DataFrame.</p>
# <p>Target incidence informed us that in our dataset <code>0</code>s appear 76% of the time. We want to keep the same structure in train and test datasets, i.e., both datasets must have 0 target incidence of 76%. This is very easy to do using the <code>train_test_split()</code> method from the <code>scikit learn</code> library - all we need to do is specify the <code>stratify</code> parameter. In our case, we'll stratify on the <code>target</code> column.</p>

# In[63]:

# Import train_test_split method
from sklearn.model_selection import train_test_split

# Split transfusion DataFrame into
# X_train, X_test, y_train and y_test datasets,
# stratifying on the `target` column
X_train, X_test, y_train, y_test = train_test_split(
    transfusion.drop(columns='target'),
    transfusion.target,
    test_size=0.25,
    random_state=42,
    stratify=transfusion.target
)

# Print out the first 2 rows of X_train
X_train.head(2)

# In[64]:

get_ipython().run_cell_magic('nose', '', '\nlast_output = _\n\ndef test_train_test_split_loaded():\n assert \'train_test_split\' in globals(), \\\n "\'train_test_split\' function not found. Please check your import statement."\n\ndef test_X_train_created():\n correct_X_train, _, _, _ = train_test_split(transfusion.drop(columns=\'target\'),\n transfusion.target,\n test_size=0.25,\n random_state=42,\n stratify=transfusion.target)\n assert correct_X_train.equals(X_train), \\\n "\'X_train\' not created correctly. Did you stratify on the correct column?"\n \ndef test_head_output():\n try:\n assert "1750" in last_output.to_string()\n except AttributeError:\n assert False, \\\n "Please use X_train.head(2) as the last line of code in the cell to inspect the data, not the display() or print() functions."\n except AssertionError:\n assert False, \\\n "Hmm, the output of the cell is not what we expected. You should see 1750 in the first 2 rows of the X_train DataFrame."')

# ## 7. Selecting model using TPOT
# <p><a href="https://github.com/EpistasisLab/tpot">TPOT</a> is a Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.</p>
# <p><img src="https://assets.datacamp.com/production/project_646/img/tpot-ml-pipeline.png" alt="TPOT Machine Learning Pipeline"></p>
# <p>TPOT will automatically explore hundreds of possible pipelines to find the best one for our dataset. Note, the outcome of this search will be a <a href="https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html">scikit-learn pipeline</a>, meaning it will include any pre-processing steps as well as the model.</p>
# <p>We are using TPOT to help us zero in on one model that we can then explore and optimize further.</p>

# In[65]:

# Import TPOTClassifier and roc_auc_score
from tpot import TPOTClassifier
from sklearn.metrics import roc_auc_score

# Instantiate TPOTClassifier
tpot = TPOTClassifier(
    generations=5,
    population_size=20,
    verbosity=2,
    scoring='roc_auc',
    random_state=42,
    disable_update_check=True,
    config_dict='TPOT light'
)
tpot.fit(X_train, y_train)

# AUC score for tpot model
tpot_auc_score = roc_auc_score(y_test, tpot.predict_proba(X_test)[:, 1])
print(f'\nAUC score: {tpot_auc_score:.4f}')

# Print best pipeline steps
print('\nBest pipeline steps:', end='\n')
for idx, (name, transform) in enumerate(tpot.fitted_pipeline_.steps, start=1):
    # Print idx and transform
    print(f'{idx}. {transform}')

# In[66]:

get_ipython().run_cell_magic('nose', '', '\ndef strip_comment_lines(cell_input):\n """Returns cell input string with comment lines removed."""\n return \'\\n\'.join(line for line in cell_input.splitlines() if not line.startswith(\'#\'))\n\nlast_input = strip_comment_lines(In[-2])\n\ndef test_TPOTClassifier_loaded():\n assert \'TPOTClassifier\' in globals(), \\\n "\'TPOTClassifier\' class not found. Please check your import statement."\n \ndef test_roc_auc_score_loaded():\n assert \'roc_auc_score\' in globals(), \\\n "\'roc_auc_score\' function not found. Please check your import statement."\n\ndef test_TPOTClassifier_instantiated():\n assert isinstance(tpot, TPOTClassifier), \\\n "\'tpot\' is not an instance of TPOTClassifier. Did you assign an instance of TPOTClassifier to \'tpot\' variable?"\n\ndef test_pipeline_steps_printed():\n assert \'{idx}\' in last_input, \\\n "Did you use {idx} variable in the f-string in the print statement?"\n assert \'{transform}\' in last_input, \\\n "Did you use {transform} variable in the f-string in the print statement?"')

# ## 8. Checking the variance
# <p>TPOT picked <code>LogisticRegression</code> as the best model for our dataset with no pre-processing steps, giving us the AUC score of 0.7850. This is a great starting point. Let's see if we can make it better.</p>
# <p>One of the assumptions for linear regression models is that the data and the features we are giving it are related in a linear fashion, or can be measured with a linear distance metric. If a feature in our dataset has a high variance that's an order of magnitude or more greater than the other features, this could impact the model's ability to learn from other features in the dataset.</p>
# <p>Correcting for high variance is called normalization. It is one of the possible transformations you do before training a model. Let's check the variance to see if such transformation is needed.</p>

# In[67]:

# X_train's variance, rounding the output to 3 decimal places
X_train.var().round(3)

# In[68]:

get_ipython().run_cell_magic('nose', '', '\ndef strip_comment_lines(cell_input):\n """Returns cell input string with comment lines removed."""\n return \'\\n\'.join(line for line in cell_input.splitlines() if not line.startswith(\'#\'))\n\nlast_input = strip_comment_lines(In[-2])\nlast_output = _\n\ndef test_command_syntax():\n assert \'X_train\' in last_input, \\\n "Did you call \'var()\' method on \'X_train\' DataFrame?"\n assert \'var\' in last_input, \\\n "Did you call \'var()\' method?"\n assert \'round(3)\' in last_input, \\\n "Did you call \'round()\' method with the correct argument?"\n assert last_input.find(\'var\') < last_input.find(\'round\'), \\\n "Did you chain \'var()\' and \'round()\' methods in the correct order?"\n\ndef test_var_output():\n try:\n assert "2114363" in last_output.to_string()\n except AttributeError:\n assert False, \\\n "Please use X_train.var().round(3) to inspect the variance, not the display() or print() functions."\n except AssertionError:\n assert False, \\\n "Hmm, the output of the cell is not what we expected. You should see 2114363 in your output."')

# ## 9. Log normalization
# <p><code>Monetary (c.c. blood)</code>'s variance is very high in comparison to any other column in the dataset. This means that, unless accounted for, this feature may get more weight by the model (i.e., be seen as more important) than any other feature.</p>
# <p>One way to correct for high variance is to use log normalization.</p>

# In[69]:

# Import numpy
import numpy as np

# Copy X_train and X_test into X_train_normed and X_test_normed
X_train_normed, X_test_normed = X_train.copy(), X_test.copy()

# Specify which column to normalize
col_to_normalize = X_train_normed.var().idxmax(axis=1)

# Log normalization
for df_ in [X_train_normed, X_test_normed]:
    # Add log normalized column
    df_['monetary_log'] = np.log(df_[col_to_normalize])
...
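In this snippet, test_command_syntax runs inside a %%nose cell magic and reads the graded cell's source from IPython's In[-2] history entry. As a rough sketch only (not part of the original project), the same string-based checks can be written as a standalone pytest test that receives the cell source as an explicit string; the check_command_syntax helper and the hard-coded cell text below are illustrative assumptions.

def strip_comment_lines(cell_input):
    """Return the cell input string with comment lines removed."""
    return '\n'.join(line for line in cell_input.splitlines()
                     if not line.startswith('#'))

def check_command_syntax(last_input):
    # The same substring checks the notebook's test_command_syntax performs
    assert 'transfusion.target' in last_input
    assert 'value_counts(normalize=True)' in last_input
    assert 'round(3)' in last_input
    assert last_input.find('value') < last_input.find('round')

def test_command_syntax():
    # Hypothetical cell source standing in for the notebook's In[-2] entry
    cell_source = (
        "# Print target incidence proportions\n"
        "transfusion.target.value_counts(normalize=True).round(3)"
    )
    check_command_syntax(strip_comment_lines(cell_source))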


test_command.py

Source: test_command.py (GitHub)


...327 ("driver/podman", "podman", "default"),328 ],329 indirect=["scenario_to_test", "driver_name", "scenario_name"],330)331def test_command_syntax(scenario_to_test, with_scenario, scenario_name):332 options = {"scenario_name": scenario_name}333 cmd = sh.molecule.bake("syntax", **options)334 pytest.helpers.run_command(cmd)335@pytest.mark.parametrize(336 "scenario_to_test, driver_name, scenario_name",337 [338 ("driver/docker", "docker", "default"),339 ("driver/docker", "docker", "multi-node"),340 ("driver/delegated", "delegated", "default"),341 ("driver/podman", "podman", "default"),342 ],343 indirect=["scenario_to_test", "driver_name", "scenario_name"],344)345def test_command_test(scenario_to_test, with_scenario, scenario_name, driver_name):...
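In molecule's own functional tests, test_command_syntax simply bakes and runs the `molecule syntax` command for every parametrized scenario through the sh library, relying on the scenario_to_test and with_scenario fixtures to prepare the scenario directory. As a minimal sketch of the same idea without molecule's test harness (those fixtures are omitted and the scenario name is just the default placeholder), the subprocess-based variant below shells out to the CLI directly and asserts a clean exit:

import subprocess

import pytest

@pytest.mark.parametrize("scenario_name", ["default"])
def test_command_syntax(scenario_name):
    # Equivalent to running `molecule syntax --scenario-name <name>` from a
    # project directory that contains a molecule/<name>/molecule.yml scenario.
    result = subprocess.run(
        ["molecule", "syntax", "--scenario-name", scenario_name],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, result.stderr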


test_parser.py

Source: test_parser.py (GitHub)


...
    expected = Union((
        Statement(c, expression),
    ))
    assert expected == res


def test_command_syntax():
    res = parser('.load_csv(A, "http://myweb/file.csv", B)')
    expected = Union(
        (
            Command(
                Symbol("load_csv"),
                (Symbol("A"), Constant("http://myweb/file.csv"), Symbol("B")),
                (),
            ),
        )
    )
    assert res == expected
    res = parser('.load_csv("http://myweb/file.csv")')
    expected = Union(
        (
...
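In this parser test suite, test_command_syntax asserts that a `.load_csv(...)` command string is parsed into a Command node wrapped in a Union. A parametrized variant covering both command strings might look like the sketch below; it assumes parser, Command, Symbol, Constant, and Union are importable from the project under test (the import path is omitted), and that the truncated single-argument case wraps its argument in a one-element tuple in the same way as the three-argument case.

import pytest

# parser, Command, Symbol, Constant and Union are assumed to come from the
# project under test; the exact import path is omitted here.

@pytest.mark.parametrize(
    "source, args",
    [
        ('.load_csv(A, "http://myweb/file.csv", B)',
         (Symbol("A"), Constant("http://myweb/file.csv"), Symbol("B"))),
        ('.load_csv("http://myweb/file.csv")',
         (Constant("http://myweb/file.csv"),)),
    ],
)
def test_command_syntax(source, args):
    expected = Union((Command(Symbol("load_csv"), args, ()),))
    assert parser(source) == expected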


Automation Testing Tutorials

Learn to execute automation testing from scratch with the LambdaTest Learning Hub, from setting up the prerequisites and running your first automation test to following best practices and diving into advanced test scenarios. The LambdaTest Learning Hubs compile step-by-step guides to help you become proficient with different test automation frameworks such as Selenium, Cypress, and TestNG.

LambdaTest Learning Hubs:

YouTube

You can also refer to the video tutorials on the LambdaTest YouTube channel for step-by-step demonstrations from industry experts.

Run molecule automation tests on the LambdaTest cloud grid

Perform automation testing on 3000+ real desktop and mobile devices online.

