H-Barrio

All Tomorrow's Parties - Identifying Largest Market Moves Using Deep Learning (Part I)

Updated: Feb 2, 2021

A song by the Velvet Underground, a favorite of Andy Warhol, a William Gibson novel. Note that we believe the best Gibson-related title is 'No Map for These Territories', which might inspire a post in the future.


In this publication we are going to try to really enjoy all tomorrow's parties: we will attempt to predict the largest movers (both winners and losers) in the market one day ahead using past price action data. Our hypothesis is that past price action data (price, volume and bid/ask) contains information regarding the near-term future behaviour of a stock, and that this information can be used to make accurate predictions of unusually large price moves.


The territory is, apparently, unmapped: there are no readily available research papers on very short-term momentum effects, on the causes of 'too-large' daily price displacements, or on their predictability using market data. The topic is apparently left in the hands of technical analysts and the daily market recommendations of general stock market web pages.

We will break this analysis down into several posts, as reaching the end goal will require multiple steps. The process will be divided into four separate publications:

  1. This post: generation of the target values.

  2. Generation of the features.

  3. Construction of the machine learning model, a deep neural network in this case.

  4. Evaluation of the model.

We will use QuantConnect's research environment to develop our model with a small subset of stock tickers; if we used full-resolution data with all S&P 500 constituents right away, we would end up with huge data frames that are impractical to use due to their relatively long computing times. We can develop our hypothesis with a small subset of companies initially and, once the model is working for low volumes of data, extend it. This is a way to save time and to test each of the model's steps incrementally before committing to a lengthy training and validation session.


Let's find out what the targets for this 'limited' model are going to be. We will find the daily 'tops' and 'flops' (this is what marketscreener.com calls them, for example) over 5 years within a ten-company sample of the Dow index. This gives us the non-trivial case of predicting the result for more than three companies. Any model can be trivialized to a single company and then extended to two companies, but real extension capabilities are only achieved with a model with three or more components. Two companies can result in a non-trivial but 'binarized' model that could be difficult to extend to three or more. By using three or more companies from the initial stages of modelling we ensure that the model is easily extendable to any number of companies. Basically, one is nothing, two is just binary and three is infinity.


The period will be January 2010 to January 2015; we will use dates before 2015 to train and test our model, as it will be validated with out-of-sample data in the period 2015 to 2020 through backtesting:

# In the QuantConnect research environment; naming the QuantBook
# instance 'self' lets algorithm code snippets be pasted unchanged.
self = QuantBook()
tickers = ['MMM', 'AA', 'AXP', 'T', 'BAC', 'BA', 'CAT',
           'CVX', 'CSCO', 'KO']
symbols = {}

# AddEquity returns a Security object; keep them keyed by ticker
for ticker in tickers:
    symbols[ticker] = self.AddEquity(ticker)

start = datetime(2010, 1, 1)
end = datetime(2015, 1, 1)
daily_history = self.History(self.Securities.Keys, start, end,
                             Resolution.Daily).round(2)

This yields a data frame indexed both by symbol and day:

An example of historical prices.
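As a quick, optional sanity check, the multi-index structure can be inspected directly. This is a minimal sketch; the Coca-Cola lookup is just an illustrative choice:

# Level 0 of the index is the symbol, level 1 is the time stamp
print(daily_history.index.names)

# History rows for a single company, e.g. Coca-Cola
print(daily_history.loc[symbols['KO'].Symbol].head())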

At this point we are only interested in the daily returns for each of the days and each of the companies so we extract them to a daily returns dataframe:

# Close-to-close return per symbol; the first day of each symbol is NaN
daily_history['1D_Return'] = daily_history.groupby('symbol')['close'].pct_change(1)
daily_returns = daily_history[['1D_Return']].dropna()

From this data frame it is easy to obtain the largest daily gainer and loser close-to-close: the dataframe can be unstacked and the '1D_Return' level dropped from its columns. The top and flop for each day are then the column indexes of the maximum and minimum values in the row (idxmax and idxmin). We also have to map the indexes through the inverse symbol dictionary to recover the original ticker strings, as idxmax and idxmin will return the string representation of the pandas dataframe column header, masking the original type completely:

import numpy as np

daily_returns = daily_returns.unstack(0)
daily_returns.columns = daily_returns.columns.droplevel()

# Column label of the largest gain (top) and largest loss (flop) per day
tops = daily_returns.idxmax(axis=1)
flops = daily_returns.idxmin(axis=1)

# Inverse map from security ID string back to ticker, used for display;
# tops and flops keep the raw column labels for the lookups below.
inv_map = {str(v.ID): k for k, v in symbols.items()}
print('Tops:\n', np.vectorize(inv_map.get)(tops))
print('\nFlops:\n', np.vectorize(inv_map.get)(flops))

The results are two pandas series that contain our targets. For each day we will attempt to predict the next day's top and flop. We will feed past-day data to the model and assign a 1, 0 or -1 depending on the following day's performance: 1 for a top, -1 for a flop and 0 for neither. We have to check that the largest mover was indeed a positive move, and likewise that the largest loser was a negative one; otherwise we may end up with 'tops' at negative returns and 'flops' on the positive side, a configuration typical of strong, market-level moves that would probably confuse our model. With these results in our hands we will be able to enter long or short positions accordingly. This is our version of an 'encoding' for the multiclass classification problem we have gotten ourselves into:

import pandas as pd

# One column per symbol, initialized to 0 (neither top nor flop)
symbol_idxs = set(daily_history.index.get_level_values(0))
targets = pd.DataFrame(0, index=tops.index, columns=symbol_idxs)

for date in targets.index:
    # Label a top only if the day's largest move was actually a gain,
    # and a flop only if the day's largest loss was actually negative.
    if daily_returns.loc[date, tops[date]] > 0:
        targets.loc[date, tops[date]] = 1
    if daily_returns.loc[date, flops[date]] < 0:
        targets.loc[date, flops[date]] = -1

targets.rename(columns=inv_map, inplace=True)

The target dataframe is flexible enough to let us train our model for the tops only, for the flops only, or for both at the same time: forcing the model to predict 1 or 0, or 1, 0 and -1. The most appropriate approach will be investigated during model development, as the prediction task can vary from a regression problem to a classification problem, or to a set of classification problems. In any case, replacing the values in the targets dataframe lets us adjust them quickly.



These target alterations are easily achieved using chained replace statements on the dataframe:

# Note: this reuses the name 'flops'; the idxmin series above is no longer needed
flops = targets.replace(1, 0).replace(-1, 1)

This will generate the 'flops' category classification dataframe:


In the same manner, this code will generate the 'tops' category classification:

tops = targets.replace(-1, 0)
tops

Now we can inspect how prevalent each of these 10 companies is, within this group, as a top or flop during the 2010-2015 period.

targets.describe()

and:

targets.apply(pd.value_counts)

yield the following dataframes:

All these companies have their time to shine or fall from grace in this period: 3M (MMM) makes the fewest appearances in the list of tops and flops, while Bank of America (BAC) tops the charts. There may be a pattern emerging even with this low number of companies.
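As an illustration, the appearance counts behind this observation can be ranked directly from the targets dataframe built above; a minimal sketch:

# Days on which each company appears as either a top (1) or a flop (-1)
appearances = (targets != 0).sum().sort_values(ascending=False)
print(appearances)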


It is now possible to extend this analysis to all S&P 500 companies by hard-coding them into the research environment. This is an approximation to the S&P 500 company tickers as of October 2020, in list format, in case it is useful for readers (some symbols may result in an error when added and can be safely removed at this stage; see the sketch after the list):

sp500_tickers = ['MMM','AOS','ABT','ABBV','ABMD','ACN','ATVI','ADBE','AAP','AMD','AES','AFL','A','APD','AKAM','ALK','ARE','ALB','ALXN','ALGN','ALLE','LNT','ALL','GOOGL','GOOG','MO','AMZN','AMCR','AEE','AAL','AEP','AXP','AIG','AMT','AWK','AMP','ABC','AME','AMGN','APH','ADI','ANSS','ANTM','AON','APA','AIV','AAPL','AMAT','APTV','ADM','ANET','AJG','AIZ','T','ATO','ADSK','ADP','AZO','AVB','AVY','BKR','BLL','BAC','BAX','BDX','BRK.B','BBY','BIO','BIIB','BLK','BA','BKNG','BWA','BXP','BSX','BMY','AVGO','BR','BF.B','CHRW','COG','CDNS','CPB','COF','CAH','KMX','CCL','CARR','CAT','CBOE','CBRE','CDW','CE','CNC','CNP','CTL','CERN','CF','SCHW','CHTR','CVX','CMG','CB','CHD','CI','CINF','CTAS','CSCO','C','CFG','CTXS','CME','CMS','KO','CTSH','CL','CMCSA','CMA','CAG','CXO','COP','ED','STZ','CPRT','GLW','CTVA','COST','COTY','CCI','CSX','CMI','CVS','DHI','DHR','DRI','DVA','DE','DAL','XRAY','DVN','DXCM','FANG','DLR','DFS','DISCA','DISCK','DISH','DG','DLTR','D','DPZ','DOV','DOW','DTE','DUK','DRE','DD','DXC','ETFC','EMN','ETN','EBAY','ECL','EIX','EW','EA','EMR','ETR','EOG','EFX','EQIX','EQR','ESS','EL','RE','EVRG','ES','EXC','EXPE','EXPD','EXR','XOM',
'FFIV','FB','FAST','FRT','FDX','FIS','FITB','FRC','FE','FISV','FLT','FLIR','FLS','FMC','F','FTNT','FTV','FBHS','FOXA','FOX','BEN','FCX','GPS','GRMN','IT','GD','GE','GIS','GM','GPC','GILD','GPN','GL','GS','GWW',
'HRB','HAL','HBI','HIG','HAS','HCA','PEAK','HSIC','HES','HPE','HLT','HFC','HOLX','HD','HON','HRL','HST','HWM','HPQ','HUM','HBAN','HII','IEX','IDXX','INFO','ITW','ILMN','INCY','IR','INTC','ICE','IBM','IFF','IP',
'IPG','INTU','ISRG','IVZ','IPGP','IQV','IRM','JBHT','JKHY','J','SJM','JNJ','JCI','JPM','JNPR','KSU','K','KEY','KEYS','KMB','KIM','KMI','KLAC','KSS','KHC','KR','LB','LHX','LH','LRCX','LW','LVS','LEG','LDOS',
'LEN','LLY','LNC','LIN','LYV','LKQ','LMT','L','LOW','LYB','MTB','MRO','MPC','MKTX','MAR','MMC','MLM','MAS','MA','MXIM','MKC','MCD','MCK','MDT','MRK','MET','MTD','MGM','MCHP','MU','MSFT','MAA','MHK','TAP',
'MDLZ','MNST','MCO','MS','MSI','MSCI','MYL','NDAQ','NOV','NTAP','NFLX','NWL','NEM','NWSA','NWS','NEE','NLSN','NKE','NI','NBL','NSC','NTRS','NOC','NLOK','NCLH','NRG','NUE','NVDA','NVR','ORLY','OXY','ODFL',
'OMC','OKE','ORCL','OTIS','PCAR','PKG','PH','PAYX','PAYC','PYPL','PNR','PBCT','PEP','PKI','PRGO','PFE','PM','PSX','PNW','PXD','PNC','PPG','PPL','PFG','PG','PGR','PLD','PRU','PEG','PSA','PHM','PVH','QRVO',
'QCOM','PWR','DGX','RL','RJF','RTX','O','REG','REGN','RF','RSG','RMD','RHI','ROK','ROL','ROP','ROST','RCL','SPGI','CRM','SBAC','SLB','STX','SEE','SRE','NOW','SHW','SPG','SWKS','SLG','SNA','SO','LUV','SWK',
'SBUX','STT','STE','SYK','SIVB','SYF','SNPS','SYY','TMUS','TROW','TTWO','TPR','TGT','TEL','FTI','TDY','TFX','TXN','TXT','BK','CLX','COO','HSY','MOS','TRV','DIS','TMO','TIF','TJX','TSCO','TT','TDG','TFC',
'TWTR','TYL','TSN','USB','UDR','ULTA','UAA','UA','UNP','UAL','UNH','UPS','URI','UHS','UNM','VLO','VAR','VTR','VRSN','VRSK','VZ','VRTX','VFC','VIAC','V','VNO','VMC','WRB','WAB','WBA','WMT','WM','WAT','WEC',
'WFC','WELL','WST','WDC','WU','WRK','WY','WHR','WMB','WLTW','WYNN','XEL','XRX','XLNX','XYL','YUM','ZBRA','ZBH','ZION','ZTS']
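A minimal sketch for adding these tickers defensively, assuming the same QuantBook session as above; any symbol that raises an error is simply skipped, as suggested:

# Add each S&P 500 ticker, skipping any that cannot be resolved
sp500_symbols = {}
for ticker in sp500_tickers:
    try:
        sp500_symbols[ticker] = self.AddEquity(ticker)
    except Exception as error:
        print(f'Skipping {ticker}: {error}')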

Applying the same treatment to the S&P 500 companies, we obtain the following ranking for the period; these are the S&P 500 companies that appeared the most times in the tops/flops list from 2010 to 2015:

Netflix (NFLX) replaced the New York Times (NYT) in the index in December 2010, which may be one of several possible reasons why it sits at the top of the list, a list filled with technology companies and airlines. We will not venture any conclusion at this point and will let the future machine learning model extract its own, if there are any conclusions to be extracted. At the bottom of the table there are 98 companies that do not appear even a single day as a top or flop in this five-year period; The Coca-Cola Company (KO) and PepsiCo (PEP) are among them.
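That count can be verified with the same appearance logic used earlier; a hypothetical check, assuming the full-universe targets dataframe from the S&P 500 run is named sp500_targets:

# Companies that never appear as a top or flop in the period
never_movers = (sp500_targets != 0).sum() == 0
print(never_movers.sum(), 'companies never appear')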


We now have what we were looking for, and we are confident it can be scaled to many company tickers. We are ready to find input data to act as prediction features, and a prediction model that can learn which of these companies moved the most in each period and, hopefully, predict future tops and flops with some reliability. We will set up these input data and factors in our next publication.


Remember that information in ostirion.net does not constitute financial advice; we do not hold positions in any of the companies or assets that we mention in our posts at the time of posting. If you are in need of algorithmic model development, deployment, verification or validation, do not hesitate to contact us.





