How to use the since_time method in lisa

Best Python code snippet using lisa_python

datastore.py

Source: datastore.py (GitHub)


# Copyright (c) 2013, Psiphon Inc.
# All rights reserved.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
'''
There are currently three tables in our Mongo DB:
- diagnostic_info: Holds diagnostic info sent by users. This typically includes
  info about client version, OS, server response time, etc. Data in this table
  is permanent. The idea is that we can mine it to find out relationships
  between Psiphon performance and user environment.
- email_diagnostic_info: This is a little less concrete. The short version is:
  This table indicates that a particular diagnostic_info record should be
  formatted and emailed. It might also record additional information (like the
  email ID and subject) about the email that should be sent. Once the diagnostic_info
  has been sent, the associated record is removed from this table.
- stats: A dumb DB that is really just used for maintaining state between stats
  service restarts.
'''
import datetime
from pymongo import MongoClient
import numpy
import pytz
_connection = MongoClient()
_db = _connection.maildecryptor
#
# The tables in our DB
#
# Holds diagnostic info sent by users. This typically includes info about
# client version, OS, server response time, etc. Data in this table is
# permanent. The idea is that we can mine it to find out relationships between
# Psiphon performance and user environment.
_diagnostic_info_store = _db.diagnostic_info
# This table indicates that a particular diagnostic_info record should be
# formatted and emailed. It might also record additional information (like the
# email ID and subject) about the email that should be sent. Once the
# diagnostic_info has been sent, the associated record is removed from this
# table.
_email_diagnostic_info_store = _db.email_diagnostic_info
# Single-record DB that stores the last time a stats email was sent.
_stats_store = _db.stats
# Stores info about autoresponses that should be sent.
_autoresponder_store = _db.autoresponder
# Time-limited store of email address to which responses have been sent. This
# is used to help us avoid sending responses to the same person more than once
# per day (or whatever).
_response_blacklist_store = _db.response_blacklist
# A store of the errors we've seen. Printed into the stats email.
_errors_store = _db.errors
#
# Create any necessary indexes
#
# This index is used for iterating through the diagnostic_info store, and
# for stats queries.
# It's also a TTL index, and purges old records.
DIAGNOSTIC_DATA_LIFETIME_SECS = 60*60*24*7*26 # half a year
_diagnostic_info_store.ensure_index('datetime', expireAfterSeconds=DIAGNOSTIC_DATA_LIFETIME_SECS)
# We use a TTL index on the response_blacklist collection, to expire records.
_BLACKLIST_LIFETIME_SECS = 60*60*24 # one day
_response_blacklist_store.ensure_index('datetime', expireAfterSeconds=_BLACKLIST_LIFETIME_SECS)
# Add a TTL index to the errors store.
_ERRORS_LIFETIME_SECS = 60*60*24*7*26 # half a year
_errors_store.ensure_index('datetime', expireAfterSeconds=_ERRORS_LIFETIME_SECS)
# Add a TTL index to the email_diagnostic_info store.
_EMAIL_DIAGNOSTIC_INFO_LIFETIME_SECS = 60*60 # one hour
_email_diagnostic_info_store.ensure_index('datetime', expireAfterSeconds=_EMAIL_DIAGNOSTIC_INFO_LIFETIME_SECS)
# More lookup indexes
_diagnostic_info_store.ensure_index('Metadata.platform')
_diagnostic_info_store.ensure_index('Metadata.version')
#
# Functions to manipulate diagnostic info
#
def insert_diagnostic_info(obj):
    obj['datetime'] = datetime.datetime.now()
    return _diagnostic_info_store.insert(obj)
def insert_email_diagnostic_info(diagnostic_info_record_id,
                                 email_id,
                                 email_subject):
    obj = {'diagnostic_info_record_id': diagnostic_info_record_id,
           'email_id': email_id,
           'email_subject': email_subject,
           'datetime': datetime.datetime.now()
           }
    return _email_diagnostic_info_store.insert(obj)
def get_email_diagnostic_info_iterator():
    return _email_diagnostic_info_store.find()
def find_diagnostic_info(diagnostic_info_record_id):
    if not diagnostic_info_record_id:
        return None
    return _diagnostic_info_store.find_one({'_id': diagnostic_info_record_id})
def remove_email_diagnostic_info(email_diagnostic_info):
    return _email_diagnostic_info_store.remove({'_id': email_diagnostic_info['_id']})
#
# Functions related to the autoresponder
#
def insert_autoresponder_entry(email_info, diagnostic_info_record_id):
    if not email_info and not diagnostic_info_record_id:
        return
    obj = {'diagnostic_info_record_id': diagnostic_info_record_id,
           'email_info': email_info,
           'datetime': datetime.datetime.now()
           }
    return _autoresponder_store.insert(obj)
def get_autoresponder_iterator():
    while True:
        next_rec = _autoresponder_store.find_and_modify(remove=True)
        if not next_rec:
            raise StopIteration()
        yield next_rec
def remove_autoresponder_entry(entry):
    return _autoresponder_store.remove(entry)
#
# Functions related to the email address blacklist
#
def check_and_add_response_address_blacklist(address):
    '''
    Returns True if the address is blacklisted, otherwise inserts it in the DB
    and returns False.
    '''
    now = datetime.datetime.now(pytz.timezone('UTC'))
    # Check and insert with a single command
    match = _response_blacklist_store.find_and_modify(query={'address': address},
                                                      update={'$setOnInsert': {'datetime': now}},
                                                      upsert=True)
    return bool(match)
#
# Functions for the stats DB
#
def set_stats_last_send_time(timestamp):
    '''
    Sets the last send time to `timestamp`.
    '''
    _stats_store.update({}, {'$set': {'last_send_time': timestamp}}, upsert=True)
def get_stats_last_send_time():
    rec = _stats_store.find_one()
    return rec['last_send_time'] if rec else None
def get_new_stats_count(since_time):
    assert(since_time)
    return _diagnostic_info_store.find({'datetime': {'$gt': since_time}}).count()
def get_stats(since_time):
    if not since_time:
        # Pick a sufficiently old date
        since_time = datetime.datetime(2000, 1, 1)
    ERROR_LIMIT = 500
    return {
        'since_timestamp': since_time,
        'now_timestamp': datetime.datetime.now(),
        'new_android_records': _diagnostic_info_store.find({'datetime': {'$gt': since_time}, 'Metadata.platform': 'android'}).count(),
        'new_windows_records': _diagnostic_info_store.find({'datetime': {'$gt': since_time}, 'Metadata.platform': 'windows'}).count(),
        'stats': _get_stats_helper(since_time),
        # The number of errors is unbounded, so we're going to limit the count.
        'new_errors': [_clean_record(e) for e in _errors_store.find({'datetime': {'$gt': since_time}}).limit(ERROR_LIMIT)],
    }
def add_error(error):
    _errors_store.insert({'error': error, 'datetime': datetime.datetime.now()})
def _clean_record(rec):
    '''
    Remove the _id field. Both alters the `rec` param and returns it.
    '''
    if '_id' in rec:
        del rec['_id']
    return rec
def _get_stats_helper(since_time):
    raw_stats = {}
    #
    # Different platforms and versions have different structures
    #
    cur = _diagnostic_info_store.find({'datetime': {'$gt': since_time},
                                       'Metadata.platform': 'android',
                                       'Metadata.version': 1})
    for rec in cur:
        propagation_channel_id = rec.get('SystemInformation', {})\
                                    .get('psiphonEmbeddedValues', {})\
                                    .get('PROPAGATION_CHANNEL_ID')
        sponsor_id = rec.get('SystemInformation', {})\
                        .get('psiphonEmbeddedValues', {})\
                        .get('SPONSOR_ID')
        if not propagation_channel_id or not sponsor_id:
            continue
        response_checks = [r['data'] for r in rec.get('DiagnosticHistory', [])
                           if r.get('msg') == 'ServerResponseCheck'
                           and r.get('data').get('responded') and r.get('data').get('responseTime')]
        for r in response_checks:
            if type(r['responded']) in (str, unicode):
                r['responded'] = (r['responded'] == 'Yes')
            if type(r['responseTime']) in (str, unicode):
                r['responseTime'] = int(r['responseTime'])
        if ('android', propagation_channel_id, sponsor_id) not in raw_stats:
            raw_stats[('android', propagation_channel_id, sponsor_id)] = {'count': 0, 'response_checks': [], 'survey_results': []}
        raw_stats[('android', propagation_channel_id, sponsor_id)]['response_checks'].extend(response_checks)
        raw_stats[('android', propagation_channel_id, sponsor_id)]['count'] += 1
    # The structure got more standardized around here.
    for platform, version in (('android', 2), ('windows', 1)):
        cur = _diagnostic_info_store.find({'datetime': {'$gt': since_time},
                                           'Metadata.platform': platform,
                                           'Metadata.version': {'$gt': version}})
        for rec in cur:
            propagation_channel_id = rec.get('DiagnosticInfo', {})\
                                        .get('SystemInformation', {})\
                                        .get('PsiphonInfo', {})\
                                        .get('PROPAGATION_CHANNEL_ID')
            sponsor_id = rec.get('DiagnosticInfo', {})\
                            .get('SystemInformation', {})\
                            .get('PsiphonInfo', {})\
                            .get('SPONSOR_ID')
            if not propagation_channel_id or not sponsor_id:
                continue
            response_checks = (r['data'] for r in rec.get('DiagnosticInfo', {}).get('DiagnosticHistory', [])
                               if r.get('msg') == 'ServerResponseCheck'
                               and r.get('data').get('responded') and r.get('data').get('responseTime'))
            survey_results = rec.get('Feedback', {}).get('Survey', {}).get('results', [])
            if type(survey_results) != list:
                survey_results = []
            if (platform, propagation_channel_id, sponsor_id) not in raw_stats:
                raw_stats[(platform, propagation_channel_id, sponsor_id)] = {'count': 0, 'response_checks': [], 'survey_results': []}
            raw_stats[(platform, propagation_channel_id, sponsor_id)]['response_checks'].extend(response_checks)
            raw_stats[(platform, propagation_channel_id, sponsor_id)]['survey_results'].extend(survey_results)
            raw_stats[(platform, propagation_channel_id, sponsor_id)]['count'] += 1
    def survey_reducer(accum, val):
        accum.setdefault(val.get('title', 'INVALID'), {}).setdefault(val.get('answer', 'INVALID'), 0)
        accum[val.get('title', 'INVALID')][val.get('answer', 'INVALID')] += 1
        return accum
    stats = []
    for result_params, results in raw_stats.iteritems():
        response_times = [r['responseTime'] for r in results['response_checks'] if r['responded']]
        mean = float(numpy.mean(response_times)) if len(response_times) else None
        median = float(numpy.median(response_times)) if len(response_times) else None
        stddev = float(numpy.std(response_times)) if len(response_times) else None
        quartiles = [float(q) for q in numpy.percentile(response_times, [5.0, 25.0, 50.0, 75.0, 95.0])] if len(response_times) else None
        failrate = float(len(results['response_checks']) - len(response_times)) / len(results['response_checks']) if len(results['response_checks']) else 1.0
        survey_results = reduce(survey_reducer, results['survey_results'], {})
        stats.append({
            'platform': result_params[0],
            'propagation_channel_id': result_params[1],
            'sponsor_id': result_params[2],
            'mean': mean,
            'median': median,
            'stddev': stddev,
            'quartiles': quartiles,
            'failrate': failrate,
            'response_sample_count': len(results['response_checks']),
            'survey_results': survey_results,
            'record_count': results['count'],
        })
    return stats
#
# Functions related to the sqlexporter
#
def get_sqlexporter_diagnostic_info_iterator(start_datetime):
    cursor = _diagnostic_info_store.find({'datetime': {'$gt': start_datetime}})
    cursor.sort('datetime')...
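In this snippet, since_time is nothing more than a datetime lower bound: every stats function passes it into a Mongo "$gt" query against the documents' datetime field. A minimal usage sketch, assuming the module above is saved as datastore.py and a MongoDB instance is reachable on localhost:

import datetime
import datastore  # the Psiphon module shown above

# Use the last stats-email send time as since_time, or fall back to a sufficiently old date.
since_time = datastore.get_stats_last_send_time() or datetime.datetime(2000, 1, 1)

# Count diagnostic records newer than since_time, then build the full stats dict.
print(datastore.get_new_stats_count(since_time))
stats = datastore.get_stats(since_time)
print(stats['new_android_records'], stats['new_windows_records'])

# Record that a stats email has just been sent, so the next run starts from here.
datastore.set_stats_last_send_time(datetime.datetime.now())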


cbapi.py

Source: cbapi.py (GitHub)


'''
a full-featured API library to allow downloading and presenting organization and people data from Crunchbase
'''

import pandas as pd
import json
import requests
import threading

RAPIDAPI_KEY = "4166b8c4fcmsh77ea0f96ec19fe0p14a98ajsn8b6e21c88565"


def trigger_api_orgs(page = "", since_time = "", name="", query="", domain = "", locations="", types=""):
    '''trigger Crunchbase API for organization data of certain pages'''

    querystring = {
        "page": page, # query specific pages, if not specified then query all pages
        "updated_since": since_time, # query data that is updated since "updated_since" timestamp
        "name": name, # full-text search of organization name, aliases
        "query": query, # full-text search of organization name, aliases, short description
        "domain_name": domain, # full-text search of organization domain_name, e.g. "www.amazon.com"
        "locations": locations, # filter by locations, e.g. "China,Beijing"
        "organization_types": types # filter by organization types, e.g. "company", "investor", "school", "group"
    }

    headers = {
        'x-rapidapi-host': "crunchbase-crunchbase-v1.p.rapidapi.com",
        'x-rapidapi-key': RAPIDAPI_KEY
    }

    url = "https://crunchbase-crunchbase-v1.p.rapidapi.com/odm-organizations"

    response = requests.request("GET", url, headers=headers, params=querystring)
    response_orgs = pd.DataFrame(json.loads(response.text)) # response in dataframe, using json.loads()

    return response_orgs


def threader_orgs(result_list, page = "", since_time = "", name="", query="", domain = "", locations="", types=""):
    '''threader function to trigger Crunchbase API for organization data of certain pages'''
    response_orgs = trigger_api_orgs(page, since_time, name, query, domain, locations, types)
    result_list.append(pd.DataFrame(list(pd.DataFrame(response_orgs["data"]["items"])["properties"])))


def get_orgs(page = "", since_time = "", name="", query="", domain = "", locations="", types=""):
    '''trigger Crunchbase API for organization data of all pages'''

    response_orgs = trigger_api_orgs(page, since_time, name, query, domain, locations, types)

    if page != "":
        result = pd.DataFrame(list(pd.DataFrame(response_orgs["data"]["items"])["properties"])) # dataframe of organization data of certain pages

    else:
        num_pages = response_orgs['data']['paging']['number_of_pages'] # number of pages
        print("number of pages: ", num_pages)
        threads = [None for _ in range(num_pages)] # list of threads
        result_list = [] # list of organization data of each page in dataframe

        for i in range(num_pages):
            threads[i] = threading.Thread(target = threader_orgs, args = (result_list, str(i+1), since_time, name, query, domain, locations, types))
            threads[i].start() # start all threads

        for i in range(num_pages):
            threads[i].join() # join all threads

        result = pd.concat(result_list, axis = 0) # dataframe of organization data of all pages

    # set index to "name"
    result.set_index(result["name"], inplace=True)
    return result


def trigger_api_ppl(page = "", since_time = "", name="", query="", locations="", socials="", types=""):
    '''trigger Crunchbase API for people data of certain pages'''

    querystring = {
        "page": page, # query specific pages, if not specified then query all pages
        "updated_since": since_time, # query data that is updated since "updated_since" timestamp
        "name": name, # full-text search of name only
        "query": query, # full-text search of name, title, company
        "locations": locations, # filter by locations, e.g. "China,Beijing"
        "socials": socials, # filter by social media identity, e.g. "ronconway"
        "types": types # filter by type, e.g. "investor"
    }

    headers = {
        'x-rapidapi-host': "crunchbase-crunchbase-v1.p.rapidapi.com",
        'x-rapidapi-key': RAPIDAPI_KEY
    }

    url = "https://crunchbase-crunchbase-v1.p.rapidapi.com/odm-people"

    response = requests.request("GET", url, headers=headers, params=querystring)
    response_ppl = pd.DataFrame(json.loads(response.text)) # response in dataframe, using json.loads()

    return response_ppl


def threader_ppl(result_list, page = "", since_time = "", name="", query="", locations="", socials="", types=""):
    '''threader function to trigger Crunchbase API for people data of certain pages'''
    response_ppl = trigger_api_ppl(page, since_time, name, query, locations, socials, types)
    result_list.append(pd.DataFrame(list(pd.DataFrame(response_ppl["data"]["items"])["properties"])))


def get_ppl(page="", since_time = "", name="", query="", locations="", socials="", types=""):
    '''trigger Crunchbase API for people data of all pages'''

    response_ppl = trigger_api_ppl(page, since_time, name, query, locations, socials, types)

    if page != "":
        result = pd.DataFrame(list(pd.DataFrame(response_ppl["data"]["items"])["properties"])) # dataframe of people data of certain pages

    else:
        num_pages = response_ppl['data']['paging']['number_of_pages'] # number of pages
        print("number of pages: ", num_pages)
        threads = [None for _ in range(num_pages)] # list of threads
        result_list = [] # list of people data of each page in dataframe

        for i in range(num_pages):
            threads[i] = threading.Thread(target = threader_ppl, args = (result_list, str(i+1), since_time, name, query, locations, socials, types))
            threads[i].start() # start all threads

        for i in range(num_pages):
            threads[i].join() # join all threads

        result = pd.concat(result_list, axis = 0) # dataframe of people data of all pages

    # set index to "first_name last_name"
    result.set_index(result["first_name"] + " " + result["last_name"], inplace=True)
    return result
    ...


line_bot.py

Source: line_bot.py (GitHub)


# from pprint import pprint as pp
import datetime
import requests
from linebot import LineBotApi, WebhookHandler
from linebot.models import TextSendMessage
import os
from dotenv import load_dotenv
load_dotenv()
line_api_key = os.environ["LineBotApiKey"]
line_bot_api = LineBotApi(line_api_key)
# WebhookHandlerKey = os.environ["WebhookHandlerKey"]
# handler= WebhookHandler('WebhookHandlerKey')
line_bot_list = []
now = datetime.datetime.now()
JST = datetime.timezone(datetime.timedelta(hours=+9), "JST")
def notify_todays_illegal():
    checker_for_bot()
    print("start notifications")
    print(line_bot_list)
    if line_bot_list:
        text = ""
        # line_bot_list holds flat triples of (store, time, status); build one Japanese
        # notice per triple: "Today at {store}, '{status}' was processed at {time}."
        for i in [u for u in range(len(line_bot_list)) if u % 3 == 0]:
            text += f"本日、{line_bot_list[i]} で\n{line_bot_list[i+1]} に\n『{line_bot_list[i+2]}』の処理が行われました。\n"
        messages = TextSendMessage(text=text)
        line_bot_api.broadcast(messages=messages, notification_disabled=True)
def checker_for_bot():
    def convert_str_to_datetime_jst(str_time: str):
        paid_at = str(datetime.datetime.fromisoformat(str_time.replace("Z", "+00:00")).astimezone(JST))[:19] # ex. 2021-01-02 19:51:40
        paid_at = paid_at.split('-') # ex. [2021, 01, 02 19:51:40]
        # append Japanese date markers: 年 (year), 月 (month), 日 (day)
        paid_at[0] += "年"
        paid_at[1] += "月"
        paid_at = "".join(paid_at).replace(' ', '日 ')
        return paid_at
    # Maps Ubiregi checkout statuses to Japanese labels:
    # close -> "paid", delete -> "aborted", cancel -> "cancelled".
    def translate_status_to_japanese(status_msg: str):
        if status_msg == "close":
            status = "会計済み"
            return status
        elif status_msg == "delete":
            status = "中断"
            return status
        elif status_msg == "cancel":
            status = "取り消し"
            return status
    fes_key = os.environ["UBIREGI_FES_KEY"]
    garage_key = os.environ["UBIREGI_GARAGE_KEY"]
    tourou_key = os.environ["UBIREGI_TOUROU_KEY"]
    ichi_key = os.environ["UBIREGI_ICHI_KEY"]
    nakame_key = os.environ["UBIREGI_NAKAME_KEY"]
    api_key_dic = {
        "FES": fes_key,
        "Garage": garage_key,
        "灯籠": tourou_key,
        "一目": ichi_key,
        "中目黒": nakame_key
    }
    for storename, api_key in api_key_dic.items():
        query = "accounts/current/checkouts"
        API_Endpoint = "https://ubiregi.com/api/3/{}".format(query)
        headers = {"X-Ubiregi-Auth-Token": api_key, "Content-Type": "application/json"}
        since_time = now.astimezone(JST)
        since_time = datetime.datetime.isoformat(since_time, timespec="seconds")
        since_time = since_time.replace(since_time[11:19], "04:00:00")
        # since_time = f"{day}T04:00:00+09:00"
        # since_time = f"2021-02-10T04:00:00+09:00"
        until_time = now.astimezone(JST)
        until_time = datetime.datetime.isoformat(until_time, timespec="seconds")
        # until_time = f"{str(day+datetime.timedelta(days=1))}T04:00:00+09:00"
        # until_time = f"2021-02-11T04:00:00+09:00"
        # "取得データ日時" = "data retrieval window"
        print(f"取得データ日時:{since_time} ~ {until_time}")
        params = {
            "since": f"{since_time}",
            "until": f"{until_time}",
            "total_count": "true",
        }
        res = requests.get(API_Endpoint, headers=headers, params=params).json()
        for s in res["checkouts"]:
            if s["status"] == "delete" or s["status"] == "cancel":
                paid_at = convert_str_to_datetime_jst(s["paid_at"])
                status = translate_status_to_japanese(s["status"])
                line_bot_list.append(f"【{storename}】")
                line_bot_list.append(paid_at[-8:])
                line_bot_list.append(status)
if __name__ == "__main__":...
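In this bot, since_time is an ISO-8601 string in JST: the code takes the current timestamp and overwrites its time-of-day portion with 04:00:00, so the query window covers everything since 4 a.m. today. A small sketch of just that windowing logic, independent of the Ubiregi API call:

import datetime

JST = datetime.timezone(datetime.timedelta(hours=+9), "JST")
now = datetime.datetime.now(JST)

# Same construction as checker_for_bot(): pin the window start to 04:00:00 JST today.
since_time = datetime.datetime.isoformat(now, timespec="seconds")
since_time = since_time.replace(since_time[11:19], "04:00:00")
until_time = datetime.datetime.isoformat(now, timespec="seconds")

print(since_time)  # e.g. 2021-02-10T04:00:00+09:00
print(until_time)  # e.g. 2021-02-10T19:51:40+09:00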


Automation Testing Tutorials

Learn to execute automation testing from scratch with the LambdaTest Learning Hub, right from setting up the prerequisites and running your first automation test to following best practices and diving deeper into advanced test scenarios. The LambdaTest Learning Hub compiles step-by-step guides to help you become proficient with different test automation frameworks, e.g. Selenium, Cypress, and TestNG.


YouTube

You can also refer to the video tutorials on the LambdaTest YouTube channel for step-by-step demonstrations from industry experts.

Run lisa automation tests on LambdaTest cloud grid

Perform automation testing on 3000+ real desktop and mobile devices online.

Try LambdaTest Now!!

Get 100 minutes of automation testing FREE!!

Next-Gen App & Browser Testing Cloud
