How to use the resolve_from_str method in Pytest

Best Python code snippets using pytest. Note: the first snippet (resolvers.py) comes from a third-party project that defines its own local resolve_from_str lambda; the cacheprovider.py and paths.py snippets show where pytest itself calls and defines the helper.

resolvers.py

Source: resolvers.py (GitHub)


1"""2Resolução dos algoritmos chamados nas queries.3"""4import logging5import itertools6import spacy7from difflib import get_close_matches as closest_token8from nltk import sent_tokenize, word_tokenize9from lisa_processing.util.nlp import (stemming, text_classifier,10 get_word_offense_level, remove_stopwords,11 remove_punctuations, get_offense_level,12 get_tokens_pol, is_stopword,13 get_word_polarity, detailed_stopword_removal)14from lisa_processing.util.normalizer import Normalizer15from lisa_processing.util.tools import (get_entity_description,16 get_pos_tag_description)17logger = logging.getLogger('lisa')18logger.info('Loading Spacy...')19SPACY = spacy.load('pt')20logger.info('Done!')21class Resolver:22 """23 Classe que contém métodos resolutivos par ao processamento dos algoritmos24 utilizados pela API.25 """26 @staticmethod27 def resolve_lemming(input_data):28 """29 Resolução do processamento de lemming.30 return : <list>31 """32 normalizer = Normalizer()33 lemma_from_list = lambda texts: SPACY(normalizer.list_to_string(texts))34 execute = {35 str: SPACY,36 list: lemma_from_list37 }38 tokens = execute.get(type(input_data))(input_data)39 return [token.lemma_ for token in tokens]40 @staticmethod41 def resolve_stemming(input_data):42 """43 Resolução do processamento de stemming44 return : <list>45 """46 normalizer = Normalizer()47 stem_from_list = lambda text_list: stemming(48 Resolver.resolve_tokenize(normalizer.list_to_string(text_list))49 )50 stem_from_str = lambda text: stemming(51 Resolver.resolve_tokenize(text)52 )53 execute = {54 str: stem_from_str,55 list: stem_from_list56 }57 return execute.get(type(input_data))(input_data)58 @staticmethod59 def resolve_dependency_parse(input_data):60 """61 Resolução do processamento de Dependency Parsing62 param : input_data : <str> ou <list>63 return <list> de <dict>64 """65 normalizer = Normalizer()66 resolve_from_list = lambda text_list: SPACY(67 normalizer.list_to_string(text_list)68 )69 execute = {70 str: SPACY,71 list: resolve_from_list72 }73 tokens = execute.get(type(input_data))(input_data)74 result = []75 for token in tokens:76 result.append({77 'element': token.text,78 'children': [str(child) for child in token.children],79 'ancestors': [str(anc) for anc in token.ancestors]80 })81 return result82 @staticmethod83 def resolve_lexical_text_classifier(input_data):84 """85 Resolve a análise de sentimentos de um texto utilizando o86 algoritmo léxico.87 param : text : <str>88 return : <float>89 """90 normalizer = Normalizer()91 classify_from_list = lambda tokens: text_classifier(92 normalizer.list_to_string(tokens)93 )94 execute = {95 str: text_classifier,96 list: classify_from_list97 }98 return execute.get(type(input_data))(input_data)99 @staticmethod100 def resolve_sentence_segmentation(input_data):101 """102 Resolve a fragmentação de sentenças a partir de:103 - uma lista de textos <list>; ou104 - um texto puro (<str>)105 return : <list> : Lista de sentenças (<str>)106 """107 list_sent_tokenize = lambda sent_list: list(108 itertools.chain(*[sent_tokenize(sent) for sent in sent_list])109 )110 execute = {111 str: sent_tokenize,112 list: list_sent_tokenize113 }114 return execute.get(type(input_data))(input_data)115 @staticmethod116 def resolve_tokenize(input_data):117 """118 Resolve a atomização de palávras a partir de:119 - uma lista de sentenças <list>; ou120 - um texto puro (<str>)121 return : <list> : Lista de tokens (<str>)122 """123 list_tokenize = lambda texts: list(124 itertools.chain(*[word_tokenize(text) for text in texts])125 )126 
execute = {127 str: word_tokenize,128 list: list_tokenize129 }130 return execute.get(type(input_data))(input_data)131 @staticmethod132 def resolve_remove_stopwords(input_data):133 """134 Resolve a remoção de palávras vazias a partir de:135 - uma lista de sentenças <list>; ou136 - um texto puro (<str>)137 return : <list> : Lista de tokens com significado semântico (<str>)138 """139 normalizer = Normalizer()140 remove_sw_from_str = lambda text: remove_stopwords(141 Resolver.resolve_tokenize(text)142 )143 remove_sw_from_list = lambda text_list: remove_stopwords(144 Resolver.resolve_tokenize(normalizer.list_to_string(text_list))145 )146 execute = {147 str: remove_sw_from_str,148 list: remove_sw_from_list149 }150 return execute.get(type(input_data))(input_data)151 @staticmethod152 def resolve_remove_puncts(input_data):153 """154 Resolve o processamento para remocão de pontuações a partir de:155 - uma lista de sentenças <list>; ou156 - um texto puro (<str>)157 return : <list> : Lista de tokens (<str>)158 """159 normalizer = Normalizer()160 remove_puncts_from_str = lambda text: remove_punctuations(161 Resolver.resolve_tokenize(text)162 )163 remove_puncts_from_list = lambda text_list: remove_punctuations(164 Resolver.resolve_tokenize(normalizer.list_to_string(text_list))165 )166 execute = {167 str: remove_puncts_from_str,168 list: remove_puncts_from_list169 }170 return execute.get(type(input_data))(input_data)171 @staticmethod172 def resolve_word_offense(input_data):173 """174 Resolve o processamento de identificação de palávras ofensivas.175 param : input_data : <str> or <list>176 return : <list> : Lista de <dict>177 """178 normalizer = Normalizer()179 resolve_from_list = lambda text_list: Resolver.resolve_tokenize(180 normalizer.list_to_string(text_list)181 )182 resolve_from_str = lambda text: Resolver.resolve_tokenize(text)183 execute = {184 str: resolve_from_str,185 list: resolve_from_list186 }187 tokens = execute.get(type(input_data))(input_data)188 pairs = get_word_offense_level(tokens)189 output = []190 for pair in pairs:191 # o processamento retorna o radical, ams queremos ot ermo completo192 token = closest_token(pair[0], tokens)193 # se não houver use o própprio radical194 full_token = token or pair195 output.append({196 'token': full_token[0],197 'is_offensive': bool(pair[1]),198 'value': pair[1]199 })200 return output201 @staticmethod202 def resolve_text_offense(input_data):203 """204 Resolve o processamento de classificação de um texto como ofensivo.205 param : input_data : <str> ou <list>206 return <dict>207 """208 normalizer = Normalizer()209 resolve_from_list = lambda token_list: get_offense_level(210 normalizer.list_to_string(token_list)211 )212 execute = {213 list: resolve_from_list,214 str: get_offense_level215 }216 is_offensive, average = execute.get(type(input_data))(input_data)217 return {'is_offensive': is_offensive, 'average': average}218 @staticmethod219 def resolve_word_polarity(input_data):220 """221 Resolve o processamento de polarização de palávras.222 param : input_data : <str> ou <list>223 return : <list> de <dict>224 """225 resolve_from_string = lambda text: get_tokens_pol(226 Resolver.resolve_tokenize(text)227 )228 execute = {229 str: resolve_from_string,230 list: get_tokens_pol231 }232 output = execute.get(type(input_data))(input_data)233 return output234 @staticmethod235 def resolve_named_entity(input_data):236 """237 Resolve o processamento de entidades nomeadas.238 param : input_data : <list> ou <str>239 return : <list> de <dict>240 """241 normalizer = 
Normalizer()242 resolve_from_string = lambda text: [{243 'token': ent.text,244 'entity': ent.label_,245 'description': get_entity_description(ent.label_)246 } for ent in SPACY(text).ents]247 resolve_from_list = lambda text_list: resolve_from_string(248 normalizer.list_to_string(text_list)249 )250 execute = {251 str: resolve_from_string,252 list: resolve_from_list253 }254 return execute.get(type(input_data))(input_data)255 @staticmethod256 def resolve_part_of_speech(input_data):257 """258 Resolve a marcação POS na entrada fornecida.259 param : input_data : <list> ou <str>260 return : <list> de <dict>261 """262 normalizer = Normalizer()263 resolve_from_string = lambda text: [{264 'token': token.text,265 'tag': token.pos_,266 'description': get_pos_tag_description(token.pos_)267 } for token in SPACY(text)]268 resolve_from_list = lambda text_list: resolve_from_string(269 normalizer.list_to_string(text_list)270 )271 execute = {272 str: resolve_from_string,273 list: resolve_from_list274 }275 return execute.get(type(input_data))(input_data)276 @staticmethod277 def resolve_token_inspection(input_data):278 """279 Resolução da inspeção de tokens.280 param : input_data : <list> ou <str>281 return : <list> de <dict>282 """283 normalizer = Normalizer()284 resolve_from_string = lambda text: [{285 'token': token.text,286 'is_alpha': token.is_alpha,287 'is_ascii': token.is_ascii,288 'is_currency': token.is_currency,289 'is_digit': token.is_digit,290 'is_punct': token.is_punct,291 'is_space': token.is_space,292 'is_stop': is_stopword(token.text),293 'lemma': token.lemma_,294 'pos_tag': get_pos_tag_description(token.pos_),295 'vector': token.vector,296 'polarity': get_word_polarity(token.text),297 'is_offensive': get_offense_level(token.text)[0],298 'root': stemming([token.text])[0]299 } for token in SPACY(text)]300 resolve_from_list = lambda text_list: resolve_from_string(301 normalizer.list_to_string(text_list)302 )303 execute = {304 str: resolve_from_string,305 list: resolve_from_list306 }307 return execute.get(type(input_data))(input_data)308 @staticmethod309 def resolve_datailed_stopword_removal(input_data):310 """311 Resolve a remoção detalhada de palávras vazias a partir de:312 - uma lista de sentenças <list>; ou313 - um texto puro (<str>)314 return : <dict> : Dicionário contendo detalhes da operação315 """316 normalizer = Normalizer()317 remove_sw_from_str = lambda text: detailed_stopword_removal(318 Resolver.resolve_tokenize(text)319 )320 remove_sw_from_list = lambda text_list: detailed_stopword_removal(321 Resolver.resolve_tokenize(normalizer.list_to_string(text_list))322 )323 execute = {324 str: remove_sw_from_str,325 list: remove_sw_from_list326 }327 return execute.get(type(input_data))(input_data) 328 @staticmethod329 def resolve_similarity(first, second):330 """331 Resolve a comparação de similaridade entre332 dois termos.333 """334 first = SPACY(first)335 second = SPACY(second)336 return first.similarity(second)337 @staticmethod338 def resolve_sentiment_batch_extraction(input_data):339 """340 Resolve a extração de sentimentos em lote.341 param : input_data : <list>342 return : <dict>343 """344 positive_sentiments = []345 negative_sentiments = []346 neutral_sentiments = []347 for data in input_data:348 extraction = {'text': data, 'sentiment': text_classifier(data)}349 if extraction['sentiment'] > 0:350 positive_sentiments.append(extraction)351 elif extraction['sentiment'] < 0:352 negative_sentiments.append(extraction)353 else:354 neutral_sentiments.append(extraction)355 # total de amostras 
avaliadas356 count = len(positive_sentiments) + \357 len(negative_sentiments) + \358 len(neutral_sentiments)359 # Calcula o sentimento total360 positives = [data.get('sentiment', 0) for data in positive_sentiments]361 negatives = [data.get('sentiment', 0) for data in negative_sentiments]362 # Neutros são sempre 0 então somamos apenas positivos e negativos363 total_sentiment = sum(positives + negatives)364 # A média de sentimento é a razão do total pelo número de possibilidades365 mean_sentiment = total_sentiment / count366 return {367 'count': count,368 'positive_sentiments': positive_sentiments,369 'negative_sentiments': negative_sentiments,370 'neutral_sentiments': neutral_sentiments,371 'total_sentiment': total_sentiment,372 'mean_sentiment': mean_sentiment373 }374 @staticmethod375 def resolve_char_count(input_data):376 """377 Resolve a conbtagem de caracteres.378 param : input_data : <str>379 return : <list>380 """...
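
Every resolver above follows the same dispatch idiom: build an execute table mapping input types to handlers, then select a handler with type(input_data). Here is a minimal, self-contained sketch of that idiom, mirroring resolve_tokenize (it assumes nltk and its 'punkt' tokenizer data are installed; note that, as in the original, execute.get returns None for unsupported types, so the sketch adds an explicit error):

# A minimal sketch of the type-dispatch idiom used throughout Resolver.
# Assumes nltk and its 'punkt' tokenizer data are installed.
import itertools
from nltk import word_tokenize

def tokenize(input_data):
    list_tokenize = lambda texts: list(
        itertools.chain(*[word_tokenize(text) for text in texts])
    )
    execute = {
        str: word_tokenize,   # plain text: tokenize directly
        list: list_tokenize,  # list of texts: tokenize each, then flatten
    }
    handler = execute.get(type(input_data))
    if handler is None:
        # the original would raise "'NoneType' object is not callable" here
        raise TypeError('expected str or list, got %s' % type(input_data).__name__)
    return handler(input_data)

print(tokenize('one plain text'))                    # ['one', 'plain', 'text']
print(tokenize(['first sentence.', 'second one.']))  # flattened token list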


cacheprovider.py

Source: cacheprovider.py (GitHub)


...
        cachedir.mkdir()
        return cls(cachedir, config)

    @staticmethod
    def cache_dir_from_config(config):
        return resolve_from_str(config.getini("cache_dir"), config.rootdir)

    def warn(self, fmt, **args):
        from _pytest.warnings import _issue_config_warning
        from _pytest.warning_types import PytestWarning

        _issue_config_warning(
            PytestWarning(fmt.format(**args) if args else fmt), self._config
        )

    def makedir(self, name):
        """ return a directory path object with the given name. If the
        directory does not yet exist, it will be created. You can use it
        to manage files, e.g. to store/retrieve database
        dumps across test sessions.

        :param name: must be a string not containing a ``/`` separator.
            Make sure the name contains your plugin or application
            identifiers to prevent clashes with other cache users.
...
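
cache_dir_from_config is where pytest wires resolve_from_str into the cache plugin: the cache_dir ini value is resolved against config.rootdir, so a relative setting lands inside the rootdir while absolute, tilde-prefixed, or env-var values are honored as written. A hedged illustration of the effect (the paths and ini value here are invented; see the paths.py snippet below for the helper itself):

# Illustration only: how a cache_dir ini value flows through resolve_from_str.
# Given a pytest.ini containing:
#   [pytest]
#   cache_dir = .cache/pytest
# cache_dir_from_config(config) effectively computes:
from pathlib import Path

rootdir = Path('/home/user/project')   # stand-in for config.rootdir
ini_value = '.cache/pytest'            # stand-in for config.getini('cache_dir')
cache_dir = rootdir / ini_value        # relative values anchor to rootdir
print(cache_dir)                       # /home/user/project/.cache/pytest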


paths.py

Source: paths.py (GitHub)


from .compat import Path
from os.path import expanduser, expandvars, isabs

def resolve_from_str(input, root):
    assert not isinstance(input, Path), "would break on py2"
    root = Path(root)
    input = expanduser(input)
    input = expandvars(input)
    if isabs(input):
        return Path(input)
    else:
...
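
The function expands ~ and environment variables first, then keeps absolute inputs as-is; the branch elided above handles the relative case. A minimal self-contained sketch of the same behavior, assuming (as in pytest's repository) that the relative case joins the input onto root:

# Sketch of resolve_from_str's behavior, assuming the elided else-branch
# anchors relative inputs to root (as in pytest's repository).
from os.path import expanduser, expandvars, isabs
from pathlib import Path

def resolve_from_str_sketch(input_str, root):
    root = Path(root)
    input_str = expanduser(input_str)   # '~/cache' -> '/home/user/cache'
    input_str = expandvars(input_str)   # '$TMP/c'  -> '/tmp/c' (if TMP=/tmp)
    if isabs(input_str):
        return Path(input_str)          # absolute inputs ignore root
    return root.joinpath(input_str)     # relative inputs anchor to root

print(resolve_from_str_sketch('.pytest_cache', '/project'))  # /project/.pytest_cache
print(resolve_from_str_sketch('/tmp/cache', '/project'))     # /tmp/cache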


Pytest Tutorial

Looking for an in-depth tutorial around pytest? LambdaTest's detailed pytest tutorial covers everything from setting up the pytest framework to automation testing. Delve deeper into pytest testing by exploring advanced use cases like parallel testing, pytest fixtures, parameterization, executing multiple test cases from a single file, and more.

Chapters

  1. What is pytest
  2. Pytest installation: Want to start pytest from scratch? See how to install and configure pytest for Python automation testing.
  3. Run first test with pytest framework: Follow this step-by-step tutorial to write and run your first pytest script.
  4. Parallel testing with pytest: A hands-on guide to parallel testing with pytest to improve the scalability of your test automation.
  5. Generate pytest reports: Reports make it easier to understand the results of pytest-based test runs. Learn how to generate pytest reports.
  6. Pytest Parameterized tests: Create and run your pytest scripts while avoiding code duplication and increasing test coverage with parameterization.
  7. Pytest Fixtures: Check out how to implement pytest fixtures for your end-to-end testing needs.
  8. Execute Multiple Test Cases: Explore different scenarios for running multiple test cases in pytest from a single file.
  9. Stop Test Suite after N Test Failures: See how to stop your test suite after n test failures in pytest using the --maxfail command-line option and the @pytest.mark.incremental conftest recipe (a minimal example follows this list).
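
As a quick taste of chapters 6 and 9 above, here is a minimal, self-contained sketch (the test file name and values are invented for illustration): a parameterized test, plus the command-line option that stops a run after two failures.

# test_sample.py - a minimal parameterized pytest example (hypothetical file).
import pytest

@pytest.mark.parametrize('value, expected', [
    (2, 4),
    (3, 9),
    (4, 16),
])
def test_square(value, expected):
    assert value * value == expected

# Run it, stopping after at most two failures:
#   pytest test_sample.py --maxfail=2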

YouTube

Skim through the pytest tutorial playlist below to get started with automation testing using the pytest framework.

https://www.youtube.com/playlist?list=PLZMWkkQEwOPlcGgDmHl8KkXKeLF83XlrP

