How to use last method in Cypress

Best JavaScript code snippets using Cypress
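The page's topic is Cypress's `.last()` command, which yields the final element of a matched set. A minimal sketch of its use (this runs only inside the Cypress test runner; the page URL and selector are assumptions for illustration):

```javascript
// Sketch of Cypress .last() usage; '/todo' and 'li.todo-item' are
// hypothetical — substitute the page and selector under test.
describe('last() example', () => {
  it('yields the final element of a matched set', () => {
    cy.visit('/todo');
    // .last() narrows the matched elements down to the final one.
    cy.get('li.todo-item').last().should('be.visible');
    // .last() also accepts an options object, e.g. a custom timeout.
    cy.get('li.todo-item').last({ timeout: 4000 });
  });
});
```

Note that `.last()` operates on the previously yielded subject, so it must be chained off a query such as `cy.get()` rather than called on `cy` directly.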

PerformanceAnalytics_Class.py

Source: PerformanceAnalytics_Class.py (GitHub)


```python
# Python Script --------------------------------------------------------------
# Title: Performance Analytics
# coding: utf-8
# ---
import pandas as pd
import datetime
import time
import math
import csv
import numpy as np
import scipy
from scipy.stats import trim_mean, kurtosis
from scipy.stats.mstats import mode, gmean, hmean
from scipy.stats import norm
from pandas.tseries.offsets import BDay
import ta as ta


class Performance():
    def __init__(self, Reuters):
        self.reuters = Reuters

        # actual price data
        url_csv = "http://matterhorn-lab.herokuapp.com/download/" + str(self.reuters)
        prices_data = pd.read_csv(url_csv, sep=",")
        start, stop, step = 0, -14, 1
        prices_data["Date"] = prices_data["Date"].str.slice(start, stop, step)

        prices_data = prices_data[::-1].reset_index()
        prices_data = prices_data.drop(['index'], axis=1)
        prices_data = prices_data.sort_values(["Date"], ascending=[1]).reset_index()
        prices_data = prices_data.drop(['index'], axis=1)

        # static
        stock_info = pd.read_csv('data/DB_Stock_Info.csv', sep=';')
        key_ratios = pd.read_csv('data/DB_Stock_Key_Ratios.csv', sep=';')

        # get the number of business days
        c_size = len(prices_data.columns)
        r_size = prices_data.shape[0]

        date_data = prices_data.iloc[:, 0]
        today = date_data.iloc[r_size-1:r_size]
        today = pd.to_datetime(today)
        today = datetime.date(int(today.dt.year), int(today.dt.month), int(today.dt.day))

        # calculate days yesterday
        yesterday = today - BDay(1)
        # calculate days last week
        lastweek = today - BDay(5)
        # calculate days since start month
        startmonth = datetime.date(int(today.strftime('%Y')), int(today.strftime('%m')), 1)
        days_start_month = np.busday_count(startmonth, today)
        # calculate days last month
        lastmonth = datetime.date(int(today.strftime('%Y')), int(today.strftime('%m')) - 1, int(today.strftime('%d')))
        days_last_month = np.busday_count(lastmonth, today)
        # calculate days since start year
        yearstart = datetime.date(int(today.strftime('%Y')), 1, 1)
        days_start_year = np.busday_count(yearstart, today)
        # calculate days one year
        lastyear = datetime.date(int(today.strftime('%Y')) - 1, int(today.strftime('%m')), int(today.strftime('%d')))
        days_last_year = np.busday_count(lastyear, today)
        # calculate days three years
        last3years = datetime.date(int(today.strftime('%Y')) - 3, int(today.strftime('%m')), int(today.strftime('%d')))
        days_last_3years = np.busday_count(last3years, today)
        # calculate days five years
        last5years = datetime.date(int(today.strftime('%Y')) - 5, int(today.strftime('%m')), int(today.strftime('%d')))
        days_last_5years = np.busday_count(last5years, today)
        # calculate days ten years
        last10years = datetime.date(int(today.strftime('%Y')) - 10, int(today.strftime('%m')), int(today.strftime('%d')))
        days_last_10years = np.busday_count(last10years, today)

        # calculate returns
        prices = prices_data.iloc[:, 1:c_size]
        # returns = math.log(prices/prices.shift(1))
        # prices_year = prices.iloc[r_size-days_year:r_size]
        price_change = pd.DataFrame(prices.values[r_size-1] - prices)
        price_change.columns = [Reuters]
        returns = prices.pct_change(1)

        # calculate price and return today
        returns_today = returns.iloc[r_size-1:r_size]
        prices_today = prices.iloc[r_size-1:r_size]
        price_change_today = price_change.iloc[r_size-1:r_size]

        # calculate price and return yesterday
        returns_yesterday = returns.iloc[r_size-2:r_size]
        prices_yesterday = prices.iloc[r_size-2:r_size]
        cum_return_yesterday = prices_yesterday.loc[r_size-1] / prices_yesterday.loc[r_size-2] - 1
        average_return_yesterday = np.mean(returns_yesterday)
        price_change_yesterday = price_change.iloc[r_size-2:r_size-1]

        # calculate price and return last week
        returns_last_week = returns.iloc[r_size-5:r_size]
        prices_last_week = prices.iloc[r_size-5:r_size]
        cum_return_last_week = prices_last_week.loc[r_size-1] / prices_last_week.loc[r_size-5] - 1
        average_return_last_week = np.mean(returns_last_week)
        price_change_last_week = price_change.iloc[r_size-5:r_size]
        vola_last_week = np.std(returns_last_week)
        sharpe_ratio_last_week = average_return_last_week / vola_last_week

        # calculate price and return since start month
        returns_start_month = returns.iloc[r_size-days_start_month:r_size]
        prices_start_month = prices.iloc[r_size-days_start_month:r_size]
        cum_return_start_month = prices_start_month.loc[r_size-1] / prices_start_month.loc[r_size-days_start_month] - 1
        average_return_start_month = np.mean(returns_start_month)
        price_change_start_month = price_change.iloc[r_size-days_start_month:r_size]
        vola_start_month = np.std(returns_start_month)
        sharpe_ratio_start_month = average_return_start_month / vola_start_month

        # calculate price and return last month
        returns_last_month = returns.iloc[r_size-days_last_month:r_size]
        prices_last_month = prices.iloc[r_size-days_last_month:r_size]
        cum_return_last_month = prices_last_month.loc[r_size-1] / prices_last_month.loc[r_size-days_last_month] - 1
        average_return_last_month = np.mean(returns_last_month)
        price_change_last_month = price_change.iloc[r_size-days_last_month:r_size]

        # calculate price and return since start year
        returns_start_year = returns.iloc[r_size-days_start_year:r_size]
        prices_start_year = prices.iloc[r_size-days_start_year:r_size]
        cum_return_start_year = prices_start_year.loc[r_size-1] / prices_start_year.loc[r_size-days_start_year] - 1
        average_return_start_year = np.mean(returns_start_year)
        price_change_start_year = price_change.iloc[r_size-days_start_year:r_size]
        vola_start_year = np.std(returns_start_year)
        sharpe_ratio_start_year = average_return_start_year / vola_start_year

        # calculate price and return one year
        returns_last_year = returns.iloc[r_size-days_last_year:r_size]
        prices_last_year = prices.iloc[r_size-days_last_year:r_size]
        cum_return_last_year = prices_last_year.loc[r_size-1] / prices_last_year.loc[r_size-days_last_year] - 1
        average_return_last_year = np.mean(returns_last_year)
        price_change_last_year = price_change.iloc[r_size-days_last_year:r_size]
        vola_last_year = np.std(returns_last_year)
        sharpe_ratio_last_year = average_return_last_year / vola_last_year

        # calculate price and return three years
        returns_last_3years = returns.iloc[r_size-days_last_3years:r_size]
        prices_last_3years = prices.iloc[r_size-days_last_3years:r_size]
        cum_return_last_3years = prices_last_3years.loc[r_size-1] / prices_last_3years.loc[r_size-days_last_3years] - 1
        average_return_last_3years = np.mean(returns_last_3years)
        price_change_last_3years = price_change.iloc[r_size-days_last_3years:r_size]
        vola_last_3years = np.std(returns_last_3years)
        sharpe_ratio_last_3years = average_return_last_3years / vola_last_3years

        # calculate price and return five years
        returns_last_5years = returns.iloc[r_size-days_last_5years:r_size]
        prices_last_5years = prices.iloc[r_size-days_last_5years:r_size]
        cum_return_last_5years = prices_last_5years.loc[r_size-1] / prices_last_5years.loc[r_size-days_last_5years] - 1
        average_return_last_5years = np.mean(returns_last_5years)
        price_change_last_5years = price_change.iloc[r_size-days_last_5years:r_size]
        vola_last_5years = np.std(returns_last_5years)
        sharpe_ratio_last_5years = average_return_last_5years / vola_last_5years

        # calculate price and return ten years
        returns_last_10years = returns.iloc[r_size-days_last_10years:r_size]
        prices_last_10years = prices.iloc[r_size-days_last_10years:r_size]
        cum_return_last_10years = prices_last_10years.loc[r_size-1] / prices_last_10years.loc[r_size-days_last_10years] - 1
        average_return_last_10years = np.mean(returns_last_10years)
        price_change_last_10years = price_change.iloc[r_size-days_last_10years:r_size]
        vola_last_10years = np.std(returns_last_10years)
        sharpe_ratio_last_10years = average_return_last_10years / vola_last_10years

        # all time
        cum_return_all = prices.loc[r_size-1] / prices.loc[3] - 1
        average_return_all = np.mean(returns)
        vola_all = np.std(returns)
        sharpe_ratio_all = average_return_all / vola_all

        # year high, low and range
        year_high = prices_last_year.max()
        year_low = prices_last_year.min()
        range_low_high = year_high - year_low
        range_percent = range_low_high / year_high

        # investment of 10000 CHF
        help_investment = returns
        help_investment = help_investment.drop(help_investment.index[0:2])
        help_invest = [0] * (c_size - 1)
        help_investment.iloc[0] = help_invest
        investment = (1 + help_investment).cumprod() * 10000

        # descriptive statistics
        mean = np.mean(returns_last_year)
        std = np.std(returns_last_year)
        Z_99 = norm.ppf([0.01])

        # Value at Risk
        Covar_Var = -(mean - Z_99 * std)
        n_sims = 1000000
        SimVar = []
        for i in range(c_size - 1):
            np.random.seed(i)
            random_numbers = np.random.normal(0, 1, n_sims)
            sim_returns = mean[i] + std[i] * random_numbers
            SimVar = (np.percentile(sim_returns, 1))

        HistVar = []
        for i in range(0, r_size - days_last_year):
            help_VaR = returns.iloc[r_size-days_last_year-i:r_size-i]
            HistVar.append(np.percentile(help_VaR, 1))

        df_HistVar = {}
        df_HistVar = {"Name": HistVar}
        HistVar = pd.DataFrame(HistVar)

        # Expected Shortfall
        cutoff = int(round(days_last_year * 0.01, 0))

        ES = []
        for i in range(0, r_size - days_last_year):
            help_ES = returns.Price.iloc[r_size-days_last_year-i:r_size-i]
            losses = help_ES.sort_values()
            expectedloss = np.mean(losses.iloc[0:cutoff])
            ES.append(expectedloss)

        data_ES = {}
        data_ES = {"Name": ES}
        ES = pd.DataFrame(ES)

        # Drawdown
        Roll_Max = prices.cummax()
        Daily_Drawdown = (prices / Roll_Max - 1.0)
        Max_Daily_Drawdown = Daily_Drawdown.cummin()

        Daily_Drawdown = abs(Daily_Drawdown)
        Max_Daily_Drawdown = abs(Max_Daily_Drawdown)

        # Key Ratios
        key_ratios.columns = ["Name", "ABBN.S", "ADEN.S", "ALCC.S", "CSGN.S", "GEBN.S", "GIVN.S",
                              "LHN.S", "LONN.S", "NESN.S", "NOVN.S", "CFR.S", "ROG.S", "SGSN.S",
                              "SIKA.S", "UHR.S", "SLHN.S", "SRENH.S", "SCMN.S", "UBSG.S", "ZURN.S"]

        key_ratios_clean = key_ratios["NESN.S"]
        price_earnings_ratio = key_ratios_clean.iloc[4]

        # price/book ratio
        price_book_ratio = key_ratios_clean.iloc[5]

        # return on equity ratio
        return_on_equity_ratio = key_ratios_clean.iloc[12]

        # dividend yield - indicated annual dividend divided by closing price
        dividend_yield_ratio = key_ratios_clean.iloc[8]

        # debt-to-equity ratio
        debt_equity_ratio = key_ratios_clean.iloc[20]

        # =====================================================================
        # Sort all analysis from above to get dataframes which are used on the
        # webpage for the tables and figures

        # Overview: Data for Figure Annual Performance
        Data = {'Date': [lastmonth, lastyear, last3years, last5years, last10years],
                'Price': [cum_return_last_month.Price, cum_return_last_year.Price, cum_return_last_3years.Price, cum_return_last_5years.Price, cum_return_last_10years.Price],
                }

        self.df_annual_perf = pd.DataFrame(Data, columns=['Date', 'Price'])

        # Table Price Performance
        Data_Price_Performance = {
            'Name': ["Placeholder"],
            '1 Month': [cum_return_last_month.Price],
            '1 Year': [cum_return_last_year.Price],
            '3 Years': [cum_return_last_3years.Price],
            '5 Years': [cum_return_last_5years.Price],
            '10 Years': [cum_return_last_10years.Price],
            'Since Inception': [cum_return_all.Price],
        }

        self.df_Price_Performance = pd.DataFrame(Data_Price_Performance, columns=['1 Month', '1 Year', '3 Years', '5 Years', '10 Years', 'Since Inception'])

        # Overview: Hypothetical Growth
        V2007 = investment.iloc[r_size - 3 - days_last_year*12].Price
        V2008 = investment.iloc[r_size - 3 - days_last_year*11].Price
        V2009 = investment.iloc[r_size - 3 - days_last_year*10].Price
        V2010 = investment.iloc[r_size - 3 - days_last_year*9].Price
        V2011 = investment.iloc[r_size - 3 - days_last_year*8].Price
        V2012 = investment.iloc[r_size - 3 - days_last_year*7].Price
        V2013 = investment.iloc[r_size - 3 - days_last_year*6].Price
        V2014 = investment.iloc[r_size - 3 - days_last_year*5].Price
        V2015 = investment.iloc[r_size - 3 - days_last_year*4].Price
        V2016 = investment.iloc[r_size - 3 - days_last_year*3].Price
        V2017 = investment.iloc[r_size - 3 - days_last_year*2].Price
        V2018 = investment.iloc[r_size - 3 - days_last_year].Price
        V2019 = investment.iloc[r_size - 3].Price

        hypothetical_growth = {'Date': ['2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019'],
                               'Value': [10000, V2007, V2008, V2009, V2010, V2011, V2012, V2013, V2014, V2015, V2016, V2017, V2018, V2019]
                               }

        self.df_hypothetical_growth = pd.DataFrame(hypothetical_growth, columns=['Date', 'Value'])

        # Overview: Figure Average Annual Performance
        annual_perf_average = {'Date': [lastmonth, lastyear, last3years, last5years, last10years],
                               'Price': [average_return_last_month.Price*252, average_return_last_year.Price*252, average_return_last_3years.Price*252, average_return_last_5years.Price*252, average_return_last_10years.Price*252],
                               }
        self.df_annual_perf_average = pd.DataFrame(annual_perf_average, columns=['Date', 'Price'])

        # Overview: Figure Risk Potential
        # Define quantiles for graph
        q0 = -1.854444201294828
        q1 = -0.8269888130616426
        q2 = 0.22536003249425604
        q3 = 0.6619326773878177
        q4 = 1.1356494832642325
        SR = sharpe_ratio_last_year.Price * math.sqrt(252)

        # Define values for figure
        if SR < q1:
            self.SR_q = 0.09
        elif SR >= q1 and SR < q2:
            self.SR_q = 0.29
        elif SR >= q2 and SR < q3:
            self.SR_q = 0.49
        elif SR >= q3 and SR < q4:
            self.SR_q = 0.69
        elif SR >= q4:
            self.SR_q = 0.89

        Data_Current_Statistic = {"Name": ["Price change yesterday", "Average Annual Return", "Average Daily Volatility", "1 Year Volatility", "1 Year Sharpe Ratio"],
                                  "Numbers": [round(float(price_change_yesterday.values[0]), 2), round(average_return_all.Price*252*100, 2).astype(str) + '%', round(vola_last_year.Price, 3), round(vola_last_year.Price * math.sqrt(252), 3), round(sharpe_ratio_last_year.Price * math.sqrt(252), 3)]
                                  }

        self.df_Current_Statistic = pd.DataFrame(Data_Current_Statistic, columns=["Name", "Numbers"])

        # Price Performance: Table Historic Prices
        Avg_price = pd.DataFrame.mean(prices)
        Data_Historic_Prices = {"Name": ["Current Price", "Price Last Year", "Average Price", "Year High", "Year Low"],
                                "Numbers": [prices_today.Price.iloc[0], prices_last_year.Price.iloc[0], Avg_price.Price, year_high.Price, year_low.Price]
                                }

        self.df_Historic_Prices = pd.DataFrame(Data_Historic_Prices, columns=["Name", "Numbers"])
        self.df_Historic_Prices.Numbers = round(self.df_Historic_Prices.Numbers, 2)

        # Price Performance: Figure Price Development
        date_data_clean = pd.DataFrame(date_data)
        date_data_clean["Prices"] = prices

        self.df_Performance_Graph = date_data_clean

        # Price Performance: Figure Returns
        date_data_clean2 = pd.DataFrame(date_data)
        date_data_clean2["Returns"] = returns

        self.df_Return_Graph = date_data_clean2

        # Price Performance: Key Ratios
        Data_Key_Ratios = {"Name": ["Price Earnings Ratio", "Price Book Ratio", "Return on Equity", "Debt Equity Ratio", "Dividend Yield Ratio"],
                           "Numbers": [round(price_earnings_ratio, 2), round(price_book_ratio, 2), round(return_on_equity_ratio, 2), round(debt_equity_ratio*100, 2).astype(str) + '%', round(dividend_yield_ratio*100, 2).astype(str) + '%']
                           }
        self.df_Key_Ratios = pd.DataFrame(Data_Key_Ratios, columns=["Name", "Numbers"])

        # Risk Measures: Table 1
        Data_Risk_Measures1 = {"Name": ["Sharpe Ratio Last Year", "Sharpe Ratio Total", "Daily Drawdown", "Max Daily Drawdown"],
                               "Numbers": [round(sharpe_ratio_last_year.Price * math.sqrt(252), 2), round(sharpe_ratio_all.Price * math.sqrt(r_size), 2), round(Daily_Drawdown.Price.iloc[-1]*100, 2).astype(str) + '%', round(Max_Daily_Drawdown.Price.iloc[-1]*100, 2).astype(str) + '%']
                               }
        self.df_Risk_Measure1 = pd.DataFrame(Data_Risk_Measures1, columns=["Name", "Numbers"])

        # Risk Measures: Table 2
        Data_Risk_Measures2 = {"Name": ["Historic Value at Risk", "Simulated Value at Risk", "Parametric Value at Risk", "Expected Shortfall"],
                               "Numbers": [round(float(HistVar.values[0]), 4), round(SimVar, 4), round(Covar_Var.Price, 4), round(float(ES.values[0]), 4)]
                               }
        self.df_Risk_Measure2 = pd.DataFrame(Data_Risk_Measures2, columns=["Name", "Numbers"])

        # Risk Measures: Value at Risk
        data_VaR = pd.DataFrame(df_HistVar, columns=["Name"])
        data_VaR = data_VaR[::-1].reset_index()
        data_VaR = data_VaR.drop(['index'], axis=1)
        Date_VaR = pd.DataFrame(date_data.iloc[days_last_year:r_size]).reset_index()
        Date_VaR = Date_VaR.drop(['index'], axis=1)
        Date_VaR["Price"] = data_VaR
        self.df_VaR = Date_VaR

        # Risk Measures: Expected Shortfall
        Data_ES = pd.DataFrame(data_ES, columns=["Name"])
        Data_ES = Data_ES[::-1].reset_index()
        Data_ES = Data_ES.drop(['index'], axis=1)
        Date_ES = pd.DataFrame(date_data.iloc[days_last_year:r_size]).reset_index()
        Date_ES = Date_ES.drop(['index'], axis=1)
        Date_ES["Price"] = Data_ES
        self.df_ES = Date_ES

        # Risk Measures: Drawdown
        date_data_clean1 = pd.DataFrame(date_data)
        date_data_clean1["Max_DD"] = Max_Daily_Drawdown
        date_data_clean1["DD"] = Daily_Drawdown
        date_data_clean1["Roll_Max"] = Roll_Max
        self.df_Max_Daily_Drawdown = date_data_clean1

        # Technical
        b = prices.Price
        bollinger_mavg = ta.bollinger_mavg(b)
        bollinger_hband = ta.bollinger_hband(b)
        bollinger_lband = ta.bollinger_lband(b)
        bollinger_hband_indicator = ta.bollinger_hband_indicator(b)
        bollinger_lband_indicator = ta.bollinger_lband_indicator(b)

        rsi = ta.rsi(b)
        aroon_up = ta.aroon_up(b)
        aroon_down = ta.aroon_down(b)

        # Technical Analysis: Table Technical Analysis
        aroon_up_today = aroon_up.values[r_size-2]
        aroon_down_today = aroon_down.values[r_size-2]
        if aroon_up_today > aroon_down_today:
            if aroon_up_today > 50:
                aroon_text = 'The Aroon Indicator detects a current strong upwards trend'
            else:
                aroon_text = 'The Aroon Indicator detects a current weak upwards trend'
        else:
            if aroon_down_today > 50:
                aroon_text = 'The Aroon Indicator detects a current strong downwards trend'
            else:
                aroon_text = 'The Aroon Indicator detects a current weak downwards trend'

        rsi_today = rsi.values[r_size-2]
        if rsi_today > 70:
            rsi_text = 'The Relative Strength Index detects a current overvaluation of the stock'
        elif rsi_today > 30 and rsi_today < 70:
            rsi_text = 'The Relative Strength Index detects no current overvaluation or undervaluation of the stock'
        else:
            rsi_text = 'The Relative Strength Index detects a current undervaluation of the stock'

        bollinger_hband_indicator_today = bollinger_hband_indicator.values[r_size-2]
        bollinger_lband_indicator_today = bollinger_lband_indicator.values[r_size-2]
        if bollinger_hband_indicator_today > bollinger_lband_indicator_today:
            bollinger_text = 'The Bollinger Band Oscillator detects that the current price is higher than the upper Bollinger Band and therefore recommends buying the stock'
        elif bollinger_lband_indicator_today > 0:
            bollinger_text = 'The Bollinger Band Oscillator detects that the current price is lower than the lower Bollinger Band and therefore recommends selling the stock'
        else:
            bollinger_text = 'The Bollinger Band Oscillator detects that the current price is between the lower and upper Bollinger Bands and therefore recommends no trading activity in the stock'

        TechnicalAnalysis = {"Name": ["Bollinger Band:", "Relative Strength Index:", "Aroon Indicator:"],
                             "Implications": [bollinger_text, rsi_text, aroon_text]
                             }
        self.df_TechnicalAnalysis = pd.DataFrame(TechnicalAnalysis, columns=["Name", "Implications"])

        # Technical Analysis: Figure Bollinger
        Date_Bollinger = pd.DataFrame(date_data)
        Date_Bollinger["mavg"] = bollinger_mavg
        Date_Bollinger["hband"] = bollinger_hband
        Date_Bollinger["lband"] = bollinger_lband
        self.df_BollingerBands = Date_Bollinger

        # Technical Analysis: Figure RSI
        Date_RSI = pd.DataFrame(date_data)
        Date_RSI["RSI"] = rsi

        df_RSI = Date_RSI.drop(Date_RSI.index[0:14]).reset_index()
        self.df_RSI = df_RSI.drop(['index'], axis=1)

        # Technical Analysis: Figure Aroon
        Date_aroon = pd.DataFrame(date_data)
        Date_aroon["aroon_up"] = aroon_up
        Date_aroon["aroon_down"] = aroon_down
        self.df_AroonIndicator = Date_aroon
```


0003_auto_20170616_1404.py

Source: 0003_auto_20170616_1404.py (GitHub)


```python
# -*- coding: utf-8 -*-
# Generated by Django 1.11.1 on 2017-06-16 14:04
from __future__ import unicode_literals

from django.db import migrations, models


# Every model below gains the same auto-updating `last_updated` column,
# so the AddField operations are generated in a loop.
MODEL_NAMES = [
    'address', 'agency', 'agent', 'attribute', 'attributeoption',
    'attributerange', 'bid', 'bidcollection', 'branding',
    'broadbandtechnology', 'category', 'charity', 'contactdetails',
    'currentshippingpromotion', 'dealer', 'dealership',
    'dealershiplistingcounts', 'dealershipphonenumbers', 'dealershowroom',
    'district', 'embeddedcontent', 'fixedpriceofferdetails',
    'fixedpriceofferrecipient', 'flatmate', 'flatmateadjacentsuburbids',
    'flatmateadjacentsuburbnames', 'flatmatephotourls', 'flatmates',
    'foundcategory', 'geographiclocation', 'largebannerimage',
    'listeditemdetail', 'locality', 'member', 'memberprofile',
    'memberrequestinformation', 'membershipdistrict', 'membershiplocality',
    'motorwebbasicreport', 'openhome', 'option', 'optionset',
    'optionsetvalues', 'photo', 'photourl', 'properties', 'property',
    'propertyadjacentsuburbids', 'propertyadjacentsuburbnames',
    'propertyphotourls', 'question', 'questions', 'refunddetails', 'sale',
    'searchparameter', 'shippingoption', 'simplememberprofile',
    'sponsorlink', 'store', 'storepromotion', 'suburb',
    'suburbadjacentsuburbs', 'variant', 'variantdefinition',
    'variantdefinitionsummary', 'viewingtime', 'viewingtimes',
]


class Migration(migrations.Migration):

    dependencies = [
        ('raw', '0002_viewingtimes'),
    ]

    operations = [
        migrations.AddField(
            model_name=name,
            name='last_updated',
            field=models.DateTimeField(auto_now=True),
        )
        for name in MODEL_NAMES
    ]
```
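Since every operation in this migration is identical apart from `model_name`, the list of operations can be built programmatically. A sketch of that idea (`build_operations` and the four-entry name list are my own illustrative names; in a real migration you would pass `migrations.AddField` and `models.DateTimeField` as the factories, which are injected here so the snippet stays importable without Django installed):

```python
# Hypothetical helper: one AddField per model, each adding an
# auto-updating `last_updated` timestamp (auto_now=True refreshes
# the value on every save()).
MODEL_NAMES = ['address', 'agency', 'agent', 'attribute']  # abbreviated

def build_operations(add_field, datetime_field):
    """Build the repeated AddField operations from a list of model names.

    add_field / datetime_field stand in for migrations.AddField and
    models.DateTimeField so the sketch has no Django dependency.
    """
    return [
        add_field(
            model_name=name,
            name='last_updated',
            field=datetime_field(auto_now=True),
        )
        for name in MODEL_NAMES
    ]
```

With the real Django factories passed in, the returned list can be assigned directly to `Migration.operations`.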


expressions.py

Source: expressions.py (GitHub)


```python
"""This module contains the expressions applicable for CronTrigger's fields."""

from calendar import monthrange
import re

from apscheduler.util import asint

__all__ = ('AllExpression', 'RangeExpression', 'WeekdayRangeExpression',
           'WeekdayPositionExpression', 'LastDayOfMonthExpression')

WEEKDAYS = ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun']
MONTHS = ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov', 'dec']


class AllExpression(object):
    value_re = re.compile(r'\*(?:/(?P<step>\d+))?$')

    def __init__(self, step=None):
        self.step = asint(step)
        if self.step == 0:
            raise ValueError('Increment must be higher than 0')

    def validate_range(self, field_name):
        from apscheduler.triggers.cron.fields import MIN_VALUES, MAX_VALUES

        value_range = MAX_VALUES[field_name] - MIN_VALUES[field_name]
        if self.step and self.step > value_range:
            raise ValueError('the step value ({}) is higher than the total range of the '
                             'expression ({})'.format(self.step, value_range))

    def get_next_value(self, date, field):
        start = field.get_value(date)
        minval = field.get_min(date)
        maxval = field.get_max(date)
        start = max(start, minval)

        if not self.step:
            next = start
        else:
            distance_to_next = (self.step - (start - minval)) % self.step
            next = start + distance_to_next

        if next <= maxval:
            return next

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.step == other.step

    def __str__(self):
        if self.step:
            return '*/%d' % self.step
        return '*'

    def __repr__(self):
        return "%s(%s)" % (self.__class__.__name__, self.step)


class RangeExpression(AllExpression):
    value_re = re.compile(
        r'(?P<first>\d+)(?:-(?P<last>\d+))?(?:/(?P<step>\d+))?$')

    def __init__(self, first, last=None, step=None):
        super(RangeExpression, self).__init__(step)
        first = asint(first)
        last = asint(last)
        if last is None and step is None:
            last = first
        if last is not None and first > last:
            raise ValueError('The minimum value in a range must not be higher than the maximum')
        self.first = first
        self.last = last

    def validate_range(self, field_name):
        from apscheduler.triggers.cron.fields import MIN_VALUES, MAX_VALUES

        super(RangeExpression, self).validate_range(field_name)
        if self.first < MIN_VALUES[field_name]:
            raise ValueError('the first value ({}) is lower than the minimum value ({})'
                             .format(self.first, MIN_VALUES[field_name]))
        if self.last is not None and self.last > MAX_VALUES[field_name]:
            raise ValueError('the last value ({}) is higher than the maximum value ({})'
                             .format(self.last, MAX_VALUES[field_name]))
        value_range = (self.last or MAX_VALUES[field_name]) - self.first
        if self.step and self.step > value_range:
            raise ValueError('the step value ({}) is higher than the total range of the '
                             'expression ({})'.format(self.step, value_range))

    def get_next_value(self, date, field):
        startval = field.get_value(date)
        minval = field.get_min(date)
        maxval = field.get_max(date)

        # Apply range limits
        minval = max(minval, self.first)
        maxval = min(maxval, self.last) if self.last is not None else maxval
        nextval = max(minval, startval)

        # Apply the step if defined
        if self.step:
            distance_to_next = (self.step - (nextval - minval)) % self.step
            nextval += distance_to_next

        return nextval if nextval <= maxval else None

    def __eq__(self, other):
        return (isinstance(other, self.__class__) and self.first == other.first and
                self.last == other.last)

    def __str__(self):
        if self.last != self.first and self.last is not None:
            range = '%d-%d' % (self.first, self.last)
        else:
            range = str(self.first)

        if self.step:
            return '%s/%d' % (range, self.step)
        return range

    def __repr__(self):
        args = [str(self.first)]
        if self.last != self.first and self.last is not None or self.step:
            args.append(str(self.last))
        if self.step:
            args.append(str(self.step))
        return "%s(%s)" % (self.__class__.__name__, ', '.join(args))


class MonthRangeExpression(RangeExpression):
    value_re = re.compile(r'(?P<first>[a-z]+)(?:-(?P<last>[a-z]+))?', re.IGNORECASE)

    def __init__(self, first, last=None):
        try:
            first_num = MONTHS.index(first.lower()) + 1
        except ValueError:
            raise ValueError('Invalid month name "%s"' % first)

        if last:
            try:
                last_num = MONTHS.index(last.lower()) + 1
            except ValueError:
                raise ValueError('Invalid month name "%s"' % last)
        else:
            last_num = None

        super(MonthRangeExpression, self).__init__(first_num, last_num)

    def __str__(self):
        if self.last != self.first and self.last is not None:
            return '%s-%s' % (MONTHS[self.first - 1], MONTHS[self.last - 1])
        return MONTHS[self.first - 1]

    def __repr__(self):
        args = ["'%s'" % MONTHS[self.first - 1]]  # -1: MONTHS is zero-based
        if self.last != self.first and self.last is not None:
            args.append("'%s'" % MONTHS[self.last - 1])
        return "%s(%s)" % (self.__class__.__name__, ', '.join(args))


class WeekdayRangeExpression(RangeExpression):
    value_re = re.compile(r'(?P<first>[a-z]+)(?:-(?P<last>[a-z]+))?', re.IGNORECASE)

    def __init__(self, first, last=None):
        try:
            first_num = WEEKDAYS.index(first.lower())
        except ValueError:
            raise ValueError('Invalid weekday name "%s"' % first)

        if last:
            try:
                last_num = WEEKDAYS.index(last.lower())
            except ValueError:
                raise ValueError('Invalid weekday name "%s"' % last)
        else:
            last_num = None

        super(WeekdayRangeExpression, self).__init__(first_num, last_num)

    def __str__(self):
        if self.last != self.first and self.last is not None:
            return '%s-%s' % (WEEKDAYS[self.first], WEEKDAYS[self.last])
        return WEEKDAYS[self.first]

    def __repr__(self):
        args = ["'%s'" % WEEKDAYS[self.first]]
        if self.last != self.first and self.last is not None:
            args.append("'%s'" % WEEKDAYS[self.last])
        return "%s(%s)" % (self.__class__.__name__, ', '.join(args))


class WeekdayPositionExpression(AllExpression):
    options = ['1st', '2nd', '3rd', '4th', '5th', 'last']
    value_re = re.compile(r'(?P<option_name>%s) +(?P<weekday_name>(?:\d+|\w+))' %
                          '|'.join(options), re.IGNORECASE)

    def __init__(self, option_name, weekday_name):
        super(WeekdayPositionExpression, self).__init__(None)
        try:
            self.option_num = self.options.index(option_name.lower())
        except ValueError:
            raise ValueError('Invalid weekday position "%s"' % option_name)

        try:
            self.weekday = WEEKDAYS.index(weekday_name.lower())
        except ValueError:
            raise ValueError('Invalid weekday name "%s"' % weekday_name)

    def get_next_value(self, date, field):
        # Figure out the weekday of the month's first day and the number of days in that month
        first_day_wday, last_day = monthrange(date.year, date.month)

        # Calculate which day of the month is the first of the target weekdays
        first_hit_day = self.weekday - first_day_wday + 1
        if first_hit_day <= 0:
            first_hit_day += 7

        # Calculate what day of the month the target weekday would be
        if self.option_num < 5:
            target_day = first_hit_day + self.option_num * 7
        else:
            target_day = first_hit_day + ((last_day - first_hit_day) // 7) * 7

        if target_day <= last_day and target_day >= date.day:
            return target_day

    def __eq__(self, other):
        return (super(WeekdayPositionExpression, self).__eq__(other) and
                self.option_num == other.option_num and self.weekday == other.weekday)

    def __str__(self):
        return '%s %s' % (self.options[self.option_num], WEEKDAYS[self.weekday])

    def __repr__(self):
        return "%s('%s', '%s')" % (self.__class__.__name__, self.options[self.option_num],
                                   WEEKDAYS[self.weekday])


class LastDayOfMonthExpression(AllExpression):
    value_re = re.compile(r'last', re.IGNORECASE)

    def __init__(self):
        super(LastDayOfMonthExpression, self).__init__(None)

    def get_next_value(self, date, field):
        return monthrange(date.year, date.month)[1]

    def __str__(self):
        return 'last'

    def __repr__(self):
        ...
```
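The `last` handling in the apscheduler expressions above reduces to `calendar.monthrange`; the same logic can be sketched standalone (the helper names below are illustrative, not apscheduler's API):

```python
from calendar import monthrange
from datetime import date

def last_day_of_month(d: date) -> int:
    # monthrange returns (weekday of the 1st, number of days in the month);
    # the day count is exactly what LastDayOfMonthExpression returns
    return monthrange(d.year, d.month)[1]

def last_weekday_of_month(d: date, weekday: int) -> int:
    # weekday: 0 = Monday ... 6 = Sunday, matching the WEEKDAYS list order
    first_wday, n_days = monthrange(d.year, d.month)
    # day-of-month of the first occurrence of the target weekday
    first_hit = weekday - first_wday + 1
    if first_hit <= 0:
        first_hit += 7
    # jump forward in whole weeks to the last occurrence that still fits
    return first_hit + ((n_days - first_hit) // 7) * 7

print(last_day_of_month(date(2024, 2, 10)))         # 29 (leap-year February)
print(last_weekday_of_month(date(2024, 2, 10), 4))  # 23 (last Friday of Feb 2024)
```

This mirrors the `else` branch of `WeekdayPositionExpression.get_next_value` for the `'last'` option.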

single_strategy.py

Source: single_strategy.py (GitHub)
```python
# encoding: utf-8
# author: gao-ming
# time: 2019/7/14--22:02
# desc:


def stragtegy_MACD(df):
    macd_df = df[['DIF', 'DEA', 'MACD']]
    macd_list = df['MACD']
    # var_macd_list = []
    # i = 1
    # while 1:
    #     var_macd = macd_list[-i] - macd_list[-1-1]
    #     var_macd_list.append(var_macd)
    last_macd = macd_df.iloc[-1, :]
    last_DIF = last_macd['DIF']
    last_DEA = last_macd['DEA']
    last_MACD = last_macd['MACD']
    up_trend = False
    if last_DIF > last_DEA and last_MACD > 0:
        up_trend = 'uptrend'
    else:
        up_trend = 'sideways or downtrend'
    var_macd_mark = []
    for i in range(1, len(macd_list)):
        var_macd = macd_list[-i] - macd_list[-i - 1]
        if i == 1:
            try:
                first_mark = var_macd / abs(var_macd)
            except ZeroDivisionError:
                first_mark = 0
        if var_macd > 0:
            var_macd_mark.append(1)
        elif var_macd < 0:
            var_macd_mark.append(-1)
        else:
            var_macd_mark.append(0)
    if first_mark == 1:
        trend_add = 'strengthening'
    else:
        trend_add = 'weakening'
    res = {
        'trend': up_trend + '--' + trend_add,
    }
    return res


def strategy_BOLL(df, n: int = 20):
    """
    Judge the price boundary: price movement has certain bounds.
    :param df:
    :param n: length of data to use, default 20
    :return: upper/lower band prices; near the upper or lower band, a suggested
             price range for action, otherwise 0
    """
    boll_df = df.iloc[-n:, :]
    mid_s = boll_df['boll_mid']
    last_mid = mid_s[-1]
    # Project the next period's Bollinger bands
    next_mid = mid_s[-1] * 2 - mid_s[-2]
    next_up = boll_df['boll_up'][-1] * 2 - boll_df['boll_up'][-2]
    next_dn = boll_df['boll_dn'][-1] * 2 - boll_df['boll_dn'][-2]
    last_close = df['close'][-1]
    risk_rate_income, section = 0, 0
    # Near the upper band: sell, and give a price range
    if abs(last_close - next_up) < last_mid * 0.025:
        section = (next_up - 0.025 * last_mid, next_up + 0.025 * last_mid)
        # Sell: the reward/risk ratio is very small
        risk_rate_income = 0.01
    # Near the lower band: buy, and give a buy range
    if abs(last_close - next_dn) < last_mid * 0.025:
        section = (next_dn - 0.025 * last_mid, next_dn + 0.025 * last_mid)
        try:
            risk_rate_income = (next_mid - last_close) / (last_close - next_dn) - 1
        except ZeroDivisionError:
            risk_rate_income = 6
    if risk_rate_income > 5 or risk_rate_income < -1:
        risk_rate_income = 5
    res = {
        'next_up': next_up,
        'next_dn': next_dn,
        # reward/risk ratio
        'risk_income': '%.2f' % risk_rate_income,
        # reference price range
        'section': section,
    }
    return res


def strategy_KDJ(df):
    """
    Judge the probability boundary: overbought means a fall is more likely,
    oversold means a rise is more likely.
    :param df:
    :return:
    """
    kdj_df = df[['kdj_K', 'kdj_D']]
    kdj_df['K-D'] = kdj_df['kdj_K'] - kdj_df['kdj_D']
    last_kdj = kdj_df.iloc[-1, :]
    kdj_K = last_kdj['kdj_K']
    kdj_D = last_kdj['kdj_D']
    too_much = False
    if kdj_K > 80 or kdj_D > 80:
        too_much = 'entered the overbought zone'
    if kdj_K < 20 or kdj_D < 20:
        too_much = 'entered the oversold zone'
    res = {
        'kdj_K': kdj_K,
        'kdj_D': kdj_D,
    }
    if too_much:
        res['kdj_res'] = too_much
    return res


def strategy_RSI(df):
    rsi_df = df[['RSI_6', 'RSI_12', 'RSI_24']]
    pass


def strategy_MA(df, n: int = 20):
    """
    Moving-average strategy: judge the trend.
    :param df:
    :return:
    """
    try:
        ma_df = df[f'close_MA_{n}']
    except Exception:
        raise Exception('Invalid MA period or data!')
    # Uptrend, flat, downtrend; the final trend result is returned at the end
    # mark_up, mark_line, mark_dn, trend = 0, 0, 0, 0
    # print('0', mark_up)
    # Tolerance band for the mid-line: moves within it count as normal fluctuation
    stand_value = ma_df[-1] * 0.007
    var_mid_list = []
    var_mid_mark = []
    for i in range(2, n):
        # Difference of the mid-line between consecutive periods
        var_mid = ma_df[-i] - ma_df[1 - i]
        var_mid_list.append(var_mid)
        # Judge the trend
        if abs(var_mid) < stand_value:
            mark = 0   # flat
        elif var_mid > stand_value:
            mark = 1   # uptrend
        else:
            mark = -1  # downtrend
        var_mid_mark.append(mark)
    last_mark = var_mid_mark[0]
    # Store the trend and whether it is strengthening or weakening
    trend_res = [last_mark, -1]
    if var_mid_mark[0] * var_mid_mark[1] > 0:
        if abs(var_mid_list[0]) > abs(var_mid_list[1]):
            trend_res[1] = 1  # trend strengthening
    # Count how many periods the trend has persisted
    trend_num = 0
    for i in range(1, len(var_mid_list)):  # bounded by the list length to avoid IndexError
        trend_num += 1
        if var_mid_list[i] * var_mid_list[i - 1] < 0:
            break
    suggest = 'close position' if last_mark == -1 else 'hold or swing trade'
    trend_dict = {
        '1': 'up',
        '0': 'flat',
        '-1': 'down',
    }
    trend_add = {
        '-1': 'weakening',
        '1': 'strengthening',
    }
    # str(): the dict keys are strings, but trend_res holds ints
    trend_judge = trend_dict[str(trend_res[0])] + ' ' + trend_add[str(trend_res[1])]
    res = {
        'suggest': suggest,
        # trend
        'trend': trend_judge,
        # how long the trend has lasted
        'trend_num': trend_num,
    }
    return res


def strategy_VOL(df):
    """
    Judge by trading volume.
    :param df: dataframe containing volume and its moving average
    :return: judgement result:
        'abnormal': whether volume is abnormal,
        'vol_status': whether volume is rising or falling,
        'period': how long the state has lasted,
    """
    vol_df = df[['vol', 'vol_MA_10']]
    # Detect abnormal volume
    vol_abnormal = 0
    last_vol = vol_df.iloc[-1, :]
    if last_vol['vol'] / last_vol['vol_MA_10'] > 2:
        vol_abnormal = 'abnormal volume'
    # Record whether volume is above or below its moving average
    var_vol_MA_sign = []
    for i in range(1, 50):
        last_vol = vol_df.iloc[-i, :]
        if last_vol['vol'] - last_vol['vol_MA_10'] >= 0:
            var_vol_MA_sign.append(1)
        else:
            var_vol_MA_sign.append(-1)
    if var_vol_MA_sign[0] == 1:
        vol_status = 'volume rising'
    else:
        vol_status = 'volume falling'
    # Record the indices where the sign changes
    change_index = []
    for i in range(len(var_vol_MA_sign) - 1):
        if var_vol_MA_sign[i] * var_vol_MA_sign[i + 1] < 0:
            change_index.append(i)
    # Alternation must not exceed 3 periods, otherwise the streak is over
    i = 0
    while 1:
        if change_index[i + 1] - change_index[i] > 3:
            vol_period = i
            break
        i += 2
    vol_period = change_index[vol_period]
    res = {
        'abnormal': vol_abnormal,
        'vol_status': vol_status,
        'period': vol_period,
    }
    ...
```

ewma.py

Source: ewma.py (GitHub)
```python
#!/usr/bin/env python
# python 3
## @file: EWMA.py
# @par:
# @author: Luke Gary
# @company:
# @date: 2018/12/12
# @brief:
# @verbatim:
################################################################
# @copyright
# Copyright 2018 [Luke Gary] as an unpublished work.
# All Rights Reserved.
################################################################
#/

import math


##
## @brief Class for 0-6th order cascade exponential weighted moving average
##
class EWMA:
    def __init__(self, coeff, initialValue):
        ##
        ## ewma3 states for coefficient optimization
        ##
        self.last_ewma3 = [0.0, 0.0, 0.0]
        ##
        ## ewma6 states for coefficient optimization
        ##
        self.last_ewma6 = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
        ##
        ## default coefficients to 1.0 so the order can be from 0 - 6
        ## since cascade elements will pass input signal to output with a=1
        ##
        self.coeff = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
        for c in range(0, len(coeff)):
            if c >= len(self.coeff):
                print(f'EWMA Coefficients Length Mismatch! len(coeff) = {len(coeff)}, max is 6')
                break
            self.coeff[c] = coeff[c]
        ##
        ## realtime filter states
        ##
        self.states = [0, 0, 0, 0, 0, 0, 0]
        self.preload(initialValue)

    def preload(self, value):
        for i in range(len(self.states)):
            self.states[i] = value

    ##
    ## @brief calculate single EWMA element
    ##
    ## @param self The object
    ## @param alpha filter coefficient
    ## @param this current input sample
    ## @param last last output sample from this stage (feedback)
    ##
    ## @return EWMA result
    ##
    def ewma(self, alpha, this, last):
        return (float(alpha) * float(this)) + ((1.0 - float(alpha)) * float(last))

    ##
    ## @brief calculate 6th order cascade ewma
    ##
    ## @param self The object
    ## @param inputValue Raw input sample
    ##
    ## @return output of 6th cascade element
    ##
    def calculate(self, inputValue):
        self.states[0] = float(inputValue)
        self.states[1] = self.ewma(float(self.coeff[0]), self.states[0], self.states[1])
        self.states[2] = self.ewma(float(self.coeff[1]), self.states[1], self.states[2])
        self.states[3] = self.ewma(float(self.coeff[2]), self.states[2], self.states[3])
        self.states[4] = self.ewma(float(self.coeff[3]), self.states[3], self.states[4])
        self.states[5] = self.ewma(float(self.coeff[4]), self.states[4], self.states[5])
        self.states[6] = self.ewma(float(self.coeff[5]), self.states[5], self.states[6])
        return self.states[6]

    def get_last_output(self):
        return self.states[6]

    def model_ewma3_preload(self, v):
        self.last_ewma3[0] = v
        self.last_ewma3[1] = v
        self.last_ewma3[2] = v

    ##
    ## @brief ewma 3rd order for IIR Model Fitting via SciPy Optimize
    ##
    ## @param self The object
    ## @param y0 The input value
    ## @param a coeff a
    ## @param b coeff b
    ## @param c coeff c
    ##
    ## @return IIR output
    ##
    def model_ewma3(self, y0, a, b, c):
        y1 = self.ewma(a, y0, self.last_ewma3[0])
        y2 = self.ewma(b, y1, self.last_ewma3[1])
        y3 = self.ewma(c, y2, self.last_ewma3[2])
        self.last_ewma3[0] = y1
        self.last_ewma3[1] = y2
        self.last_ewma3[2] = y3
        return y3

    def model_ewma6_preload(self, v):
        self.last_ewma6[0] = v
        self.last_ewma6[1] = v
        self.last_ewma6[2] = v
        self.last_ewma6[3] = v
        self.last_ewma6[4] = v
        self.last_ewma6[5] = v

    ##
    ## @brief ewma 6th order for IIR Model Fitting via SciPy Optimize
    ##
    ## @param self The object
    ## @param y0 The Input Value
    ## @param a coeff a
    ## @param b coeff b
    ## @param c coeff c
    ## @param d coeff d
    ## @param e coeff e
    ## @param f coeff f
    ##
    ## @return output of the 6th cascade element
    ##
    def model_ewma6(self, y0, a, b, c, d, e, f):
        # feedback must come from last_ewma6 (the original read last_ewma3,
        # which has only three elements and would raise IndexError)
        y1 = self.ewma(a, y0, self.last_ewma6[0])
        y2 = self.ewma(b, y1, self.last_ewma6[1])
        y3 = self.ewma(c, y2, self.last_ewma6[2])
        y4 = self.ewma(d, y3, self.last_ewma6[3])
        y5 = self.ewma(e, y4, self.last_ewma6[4])
        y6 = self.ewma(f, y5, self.last_ewma6[5])
        self.last_ewma6[0] = y1
        self.last_ewma6[1] = y2
        self.last_ewma6[2] = y3
        self.last_ewma6[3] = y4
        self.last_ewma6[4] = y5
        self.last_ewma6[5] = y6
        return y6

    def get_cutoff(self, Fs=1.0):
        # fc = Fs/(2*pi) * acos(1 - a^2 / (2*(1 - a))) for each cascade coefficient
        # (the original wrote Fs/2*math.pi, which evaluates to Fs*pi/2)
        x = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
        for i in range(6):
            try:
                x[i] = (Fs / (2 * math.pi)) * math.acos(
                    1.0 - (math.pow(self.coeff[i], 2) / (2.0 * (1.0 - self.coeff[i]))))
            except Exception:
                print("filter tap not initialized")
        return x


def test():
    import numpy as np
    import matplotlib.pyplot as plt
    alpha = 0.1523347
    beta = 0.0547115
    gamma = 0.059647154
    coeff = [alpha, beta, gamma]
    filt = EWMA(coeff, 0)
    print(filt.get_cutoff(Fs=33.333))
    # test func model for optimization
    t = np.arange(0, 100, 0.1)
    inputs = []
    for i in range(0, len(t)):
        if i < len(t) / 2:
            v = -1.0
        else:
            v = 1.0
        inputs.append(v)
    outputs = []
    realishOutputs = []
    filt.model_ewma3_preload(inputs[0])
    filt.preload(inputs[0])
    for i in range(0, len(inputs)):
        outputs.append(filt.model_ewma3(inputs[i], *coeff))
        realishOutputs.append(filt.calculate(inputs[i]))
    plt.figure()
    plt.plot(inputs, label='in')
    plt.plot(outputs, label='optimout')
    plt.plot(realishOutputs, label='realish', linestyle=':')
    plt.legend(loc='best')
    plt.show()


if __name__ == '__main__':
    ...
```

batch_utils.py

Source: batch_utils.py (GitHub)
```python
import numpy as np


class Batch_Loader(object):
    def __init__(self, train_triples, n_entities, batch_size=100, neg_ratio=0,
                 contiguous_sampling=False):
        self.train_triples = train_triples
        self.batch_size = batch_size
        self.n_entities = n_entities
        self.contiguous_sampling = contiguous_sampling
        self.neg_ratio = neg_ratio
        self.idx = 0
        self.new_triples = np.empty((self.batch_size * (self.neg_ratio + 1), 3)).astype(np.int64)
        self.new_labels = np.empty((self.batch_size * (self.neg_ratio + 1))).astype(np.float32)

    def __call__(self):
        if self.contiguous_sampling:
            if self.idx >= len(self.train_triples):
                self.idx = 0
            b = self.idx
            e = self.idx + self.batch_size
            this_batch_size = len(self.train_triples[b:e])
            self.new_triples[:this_batch_size, :] = self.train_triples[b:e, :]
            self.new_labels[:this_batch_size] = 1.0
            self.idx += this_batch_size
            last_idx = this_batch_size
        else:
            idxs = np.random.randint(0, len(self.train_triples), self.batch_size)
            self.new_triples[:self.batch_size, :] = self.train_triples[idxs, :]
            self.new_labels[:self.batch_size] = 1.0
            last_idx = self.batch_size
        if self.neg_ratio > 0:
            rdm_entities = np.random.randint(0, self.n_entities, last_idx * self.neg_ratio)
            rdm_choices = np.random.random(last_idx * self.neg_ratio)
            self.new_triples[last_idx:(last_idx * (self.neg_ratio + 1)), :] = \
                np.tile(self.new_triples[:last_idx, :], (self.neg_ratio, 1))
            self.new_labels[last_idx:(last_idx * (self.neg_ratio + 1))] = \
                np.tile(self.new_labels[:last_idx], self.neg_ratio)
            for i in range(last_idx):
                for j in range(self.neg_ratio):
                    cur_idx = i * self.neg_ratio + j
                    if rdm_choices[cur_idx] < 0.5:
                        self.new_triples[last_idx + cur_idx, 0] = rdm_entities[cur_idx]
                    else:
                        self.new_triples[last_idx + cur_idx, 2] = rdm_entities[cur_idx]
                    self.new_labels[last_idx + cur_idx] = -1
            last_idx += cur_idx + 1
        train = {
            "heads": self.new_triples[:last_idx, 0],
            "relations": self.new_triples[:last_idx, 1],
            "tails": self.new_triples[:last_idx, 2],
            "labels": self.new_labels[:last_idx]
        }
        return train


class Extended_Batch_Loader(object):
    def __init__(self, train_triples, n_entities, n_relations, batch_size=100, neg_ratio=0,
                 contiguous_sampling=False):
        self.train_triples = train_triples
        self.batch_size = batch_size
        self.n_entities = n_entities
        self.n_relations = n_relations
        self.contiguous_sampling = contiguous_sampling
        self.neg_ratio = neg_ratio
        self.idx = 0
        self.new_triples = np.empty((self.batch_size * (self.neg_ratio * 2 + 1), 3)).astype(np.int64)
        self.new_labels = np.empty((self.batch_size * (self.neg_ratio * 2 + 1))).astype(np.float32)

    def __call__(self):
        if self.contiguous_sampling:
            if self.idx >= len(self.train_triples):
                self.idx = 0
            b = self.idx
            e = self.idx + self.batch_size
            this_batch_size = len(self.train_triples[b:e])
            self.new_triples[:this_batch_size, :] = self.train_triples[b:e, :]
            self.new_labels[:this_batch_size] = 1.0
            self.idx += this_batch_size
            last_idx = this_batch_size
        else:
            idxs = np.random.randint(0, len(self.train_triples), self.batch_size)
            self.new_triples[:self.batch_size, :] = self.train_triples[idxs, :]
            self.new_labels[:self.batch_size] = 1.0
            last_idx = self.batch_size
        if self.neg_ratio > 0:
            rdm_entities = np.random.randint(0, self.n_entities, last_idx * self.neg_ratio)
            rdm_relations = np.random.randint(0, self.n_relations, last_idx * self.neg_ratio)
            rdm_choices = np.random.random(last_idx * self.neg_ratio)
            self.new_triples[last_idx:(last_idx * (self.neg_ratio * 2 + 1)), :] = \
                np.tile(self.new_triples[:last_idx, :], (self.neg_ratio * 2, 1))
            self.new_labels[last_idx:(last_idx * (self.neg_ratio * 2 + 1))] = \
                np.tile(self.new_labels[:last_idx], self.neg_ratio * 2)
            for i in range(last_idx):
                for j in range(self.neg_ratio):
                    cur_idx = i * self.neg_ratio + j
                    if rdm_choices[cur_idx] < 0.5:
                        self.new_triples[last_idx + cur_idx, 0] = rdm_entities[cur_idx]
                    else:
                        self.new_triples[last_idx + cur_idx, 2] = rdm_entities[cur_idx]
                    self.new_labels[last_idx + cur_idx] = -1
            offset = cur_idx + 1
            for i in range(last_idx):
                for j in range(self.neg_ratio):
                    cur_idx = i * self.neg_ratio + j
                    self.new_triples[last_idx + offset + cur_idx, 1] = rdm_relations[cur_idx]
                    self.new_labels[last_idx + offset + cur_idx] = -1
            last_idx += offset + cur_idx + 1
        train = {
            "heads": self.new_triples[:last_idx, 0],
            "relations": self.new_triples[:last_idx, 1],
            "tails": self.new_triples[:last_idx, 2],
            "labels": self.new_labels[:last_idx]
        }
        ...
```

depth_correction.py

Source: depth_correction.py (GitHub)
```python
import math

from reference import read_chrom_sizes


def write_corrected_bedgraph(input_bedgraph, chrom_sizes, output_bedgraph,
                             y_int, scalar, mean_log, sd_log, slope):
    '''
    Correct values in a depth bedGraph file.

    Correction is performed using the sum of a log-normal cumulative
    distribution function and linear function.
    '''
    def print_line(chromosome, start, end, value, OUT):
        if value:  # Only prints non-zero values
            OUT.write(
                '{}\t{}\t{}\t{}\n'.format(
                    chromosome,
                    str(start - 1),
                    str(end),
                    str(value),
                )
            )

    last_pos = None
    last_val = None
    last_chr = None
    last_start = None

    with open(input_bedgraph) as f, open(output_bedgraph, 'w') as OUT:
        for line in f:
            chromosome, start, end, coverage = line.strip().split()[:4]
            start = int(start) + 1  # Convert from zero-based
            end = int(end)

            for position in range(start, end + 1):
                relative_pos = min(
                    position - 1, chrom_sizes[chromosome] - position)
                if relative_pos == 0:
                    relative_pos = 1

                corr = float(coverage) / (
                    scalar * (
                        0.5 + 0.5 * math.erf(
                            (mean_log - math.log(relative_pos))
                            / (math.sqrt(2) * sd_log)
                        )
                    ) + y_int + (slope * relative_pos)
                )
                value = int(math.floor(corr))

                if not last_chr and not last_val and not last_pos and \
                        not last_start:

                    last_pos = position
                    last_val = value
                    last_chr = chromosome
                    last_start = position

                else:

                    if chromosome != last_chr or value != last_val or \
                            position != last_pos + 1:

                        print_line(
                            last_chr, last_start, last_pos, last_val, OUT)
                        last_start = position

                    last_pos = position
                    last_val = value
                    last_chr = chromosome
    ...
```

thing_utils.py

Source: thing_utils.py (GitHub)
```python
from datetime import datetime

import pytz


def make_last_modified():
    last_modified = datetime.now(pytz.timezone('GMT'))
    last_modified = last_modified.replace(microsecond=0)
    return last_modified


def last_modified_key(thing, action):
    return 'last_%s_%s' % (str(action), thing._fullname)


def last_modified_date(thing, action):
    """Returns the date that should be sent as the last-modified header."""
    from pylons import g
    cache = g.permacache

    key = last_modified_key(thing, action)
    last_modified = cache.get(key)
    if not last_modified:
        # if there is no last_modified, add one
        last_modified = make_last_modified()
        cache.set(key, last_modified)
    return last_modified


def set_last_modified(thing, action):
    from pylons import g
    key = last_modified_key(thing, action)
    ...
```
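thing_utils.py caches the computed timestamp under a per-thing key; the pattern can be sketched with a plain dict standing in for pylons' `g.permacache` (the dict and the sample fullname below are assumptions for illustration):

```python
from datetime import datetime, timezone

# Plain-dict stand-in for the permacache (assumption: any get/set cache works here)
_cache = {}

def make_last_modified():
    # Truncate microseconds so the value round-trips cleanly through an HTTP header
    return datetime.now(timezone.utc).replace(microsecond=0)

def last_modified_key(fullname, action):
    return 'last_%s_%s' % (action, fullname)

def last_modified_date(fullname, action):
    key = last_modified_key(fullname, action)
    last_modified = _cache.get(key)
    if not last_modified:
        # No cached value yet: compute one and store it
        last_modified = make_last_modified()
        _cache[key] = last_modified
    return last_modified

first = last_modified_date('t3_abc', 'comments')
again = last_modified_date('t3_abc', 'comments')
print(first == again)  # True: the cached value is reused
```

The key point is that the timestamp is computed once per (thing, action) pair and served from the cache afterwards.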

Using AI Code Generation
```javascript
describe('My First Test', function() {
  it('Does not do much!', function() {
    cy.contains('type').click()
    cy.url().should('include', '/commands/actions')
    cy.get('.action-email')
      .type('
```

Using AI Code Generation
```javascript
describe('My First Test', function() {
  it('Does not do much!', function() {
    expect(true).to.equal(true)
  })
})
it('Does not do much!', function() {
  expect(true).to.equal(true)
})
it('Does not do much!', function() {
  expect(true).to.equal(true)
})
it('Does not do much!', function() {
  expect(true).to.equal(true)
})
it('Does not do much!', function() {
  expect(true).to.equal(true)
})
it('Does not do much!', function() {
  expect(true).to.equal(true)
})
it('Does not do much!', function() {
  expect(true).to.equal(true)
})
it('Does not do much!', function() {
  expect(true).to.equal(true)
})
it('Does not do much!', function() {
  expect(true).to.equal(true)
})
it('Does not do much!', function() {
  expect(true).to.equal(true)
})
it('Does not do much!', function() {
  expect(true).to.equal(true)
})
it('Does not do much!', function() {
  expect(true).to.equal(true)
})
it('Does not do much!', function() {
  expect(true).to.equal(true)
})
it('Does not do much!', function() {
  expect(true).to.equal(true)
})
it('Does not do much!', function() {
  expect(true).to.equal
```

Using AI Code Generation
```javascript
describe('My First Test', function() {
  it('Does not do much!', function() {
    expect(true).to.equal(true)
  })
})
{
}
describe('My First Test', function() {
  it('Does not do much!', function() {
    expect(true).to.equal(true)
  })
})
describe('My First Test', function() {
  it('Visits the Kitchen Sink', function() {
  })
})
describe('My First Test', function() {
  it('Visits the Kitchen Sink', function() {
    cy.contains('type')
  })
})
describe('My First Test', function() {
  it('Visits the Kitchen Sink', function() {
    cy.contains('type').click()
  })
})
describe('My First Test', function() {
  it('Visits the Kitchen Sink', function() {
    cy.contains('type').click()
    cy.url()
      .should('include', '/commands/actions')
  })
})
describe('My First Test', function() {
  it('Finds an element', function() {
    cy.contains('type').click()
    cy.url()
      .should('include', '/commands/actions')
    cy.get('.action-email')
      .type('
```

Using AI Code Generation
```javascript
describe('Test', () => {
  it('should do something', () => {
    cy.contains('type').click()
    cy.url().should('include', '/commands/actions')
    cy.get('.action-email')
      .type('
```

Using AI Code Generation
```javascript
it('should click on the link "type"', () => {
  cy.contains('type').click()
})
it('should click on the link "type"', () => {
  cy.get('.home-list > :nth-child(1) > a').click()
})
it('should click on the link "type"', () => {
  cy.contains('type').click()
})
it('should click on the link "type"', () => {
  cy.get('.home-list > :nth-child(1) > a').click()
})
it('should click on the link "type"', () => {
  cy.contains('type').click()
})
it('should click on the link "type"', () => {
  cy.get('.home-list > :nth-child(1) > a').click()
})
it('should click on the link "type"', () => {
  cy.contains('type').click()
})
it('should click on the link "type"', () => {
  cy.get('.home-list > :nth-child(1) > a').click()
})
it('should click on the link "type"', () => {
  cy.contains('type').click()
})
it('should click on the link "type"', () => {
  cy.get('.home-list > :nth-child(1) > a').click()
})
```
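None of the generated snippets above actually call the `.last()` command this page is about. A minimal sketch of `.last()`, which yields the final element from the previous subject (the selector and expected text below are placeholders, not taken from any real app):

```javascript
describe('last() demo', () => {
  it('asserts on the final item of a list', () => {
    // .last() narrows the queried elements down to the final one
    cy.get('.todo-list li')               // placeholder selector
      .last()
      .should('contain.text', 'Walk the dog') // placeholder expectation
  })
})
```

`.last()` requires a prior DOM-yielding command such as `cy.get()`; it cannot start a command chain on its own.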

Cypress Tutorial

Cypress is a renowned JavaScript-based, open-source, easy-to-use end-to-end testing framework primarily used for testing web applications. Cypress is a relatively new player in the automation testing space and has been gaining much traction lately, as evidenced by the project's Forks (2.7K) and Stars (42.1K). LambdaTest's Cypress Tutorial offers step-by-step guides that take you from the basics all the way to running automation tests on LambdaTest.

Chapters:

  1. What is Cypress? - An introduction to the Cypress end-to-end testing framework.
  2. Why Cypress? - Learn why Cypress might be a good choice for testing your web applications.
  3. Features of Cypress Testing - Learn about features that make Cypress a powerful and flexible tool for testing web applications.
  4. Cypress Drawbacks - Although Cypress has many strengths, it has a few limitations that you should be aware of.
  5. Cypress Architecture - Learn more about Cypress architecture and how it is designed to be run directly in the browser, i.e., it does not have any additional servers.
  6. Browsers Supported by Cypress - Cypress ships with the Electron browser and also runs in Chrome-family browsers and Firefox. Learn which browsers Cypress supports.
  7. Selenium vs Cypress: A Detailed Comparison - Compare and explore some key differences in terms of their design and features.
  8. Cypress Learning: Best Practices - Take a deep dive into some of the best practices you should use to avoid anti-patterns in your automation tests.
  9. How To Run Cypress Tests on LambdaTest? - Set up a LambdaTest account, and now you are all set to learn how to run Cypress tests.

Certification

You can elevate your expertise with end-to-end testing using the Cypress automation framework and stay one step ahead in your career by earning a Cypress certification. Check out our Cypress 101 Certification.

YouTube

Watch this complete three-hour tutorial to learn the basics of Cypress and its various commands for Cypress testing at LambdaTest.

Run Cypress automation tests on LambdaTest cloud grid

Perform automation testing on 3000+ real desktop and mobile devices online.
