Top Skills for Nailing a Quant or Trader Interview


Want to secure a quant or trader's job? The following are the areas you should focus on to land your dream job.

 

1) Equity Derivatives/Options

Derivatives are heavily traded instruments. Knowledge of option pricing models, the Greeks, volatility, hedging, and various option strategies is a must.

2) Programming

Sound programming skills are required for backtesting and for writing low-latency, highly efficient code.

3) Statistics & Probability

Probability and statistics form a key part of trading. Basic statistics, time series, multivariate analysis, etc. are used for formulating strategies and for risk management.

4) Markets and the Economy

A good understanding of how markets and the economy work is essential.

5) Numerical & Brain Teasers

Numerical and thinking questions test the ability to work out the answer with sound reasoning.

6) Questions about You

These are asked to determine if you are a good fit for the job.

7) Job Awareness Questions

Job Awareness questions evaluate your understanding of the job profile.

Get Free Career Advice from a leading HFT firm’s Head of Technology and a panel of quants/traders on 25th Jan at 6 PM IST by registering here: http://bit.do/algowebinar


Implementing Pairs Trading Using Kalman Filter


By Dyutiman Das

This article is the final project submitted by the author as a part of his coursework in Executive Programme in Algorithmic Trading (EPAT™) at QuantInsti. Do check our Projects page and have a look at what our students are building.

Introduction

Some stocks move in tandem because the same market events affect their prices. However, idiosyncratic noise might make them temporarily deviate from the usual pattern and a trader could take advantage of this apparent deviation with the expectation that the stocks will eventually return to their long term relationship. Two stocks with such a relationship form a “pair”. We have talked about the statistics behind pairs trading in a previous article.

This article describes a trading strategy based on such stock pairs. The rest of the article is organized as follows: we discuss the basics of trading an individual pair, describe the overall strategy that chooses which pairs to trade, and present some preliminary results. At the end, we describe possible ways of improving the results.

Pair trading

Let us consider two stocks, x and y, such that

y = \alpha + \beta x + e

\alpha and \beta are constants and e is white noise. The parameters {\alpha, \beta} could be obtained from a linear regression of prices of the two stocks with the resulting spread 

e_{t} = y_{t} - (\alpha + \beta x_{t})

Let the standard deviation of this spread be \sigma_{t}. The z-score of this spread is

z_{t} = e_{t}/\sigma_{t}

Trading Strategy

The trading strategy is as follows: when the z-score rises above a threshold, say 2, the spread is shorted, i.e. we sell 1 unit of y and buy \beta units of x. We expect the relationship between x and y to hold in the future, so the z-score should eventually come back down to zero, or even go negative, at which point the position can be closed. By selling the spread when it is high and closing out the position when it is low, the strategy hopes to be statistically profitable. Conversely, if the z-score falls below a lower threshold, say -2, the strategy goes long the spread, i.e. buys 1 unit of y and sells \beta units of x, and when the z-score rises to zero or above, the position is closed, realizing a profit.
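The author's code is not shared with readers; the following is a minimal sketch of this entry/exit logic, assuming a precomputed z-score series and using the illustrative thresholds from the text:

```python
import numpy as np

def zscore_positions(zscore, entry=2.0):
    """Map a z-score series to spread positions.
    -1 = short the spread (sell 1 unit of y, buy beta units of x),
    +1 = long the spread (buy 1 unit of y, sell beta units of x),
     0 = flat. Positions open at +/- entry and close when the
     z-score crosses back through zero."""
    position = 0
    positions = []
    for z in zscore:
        if position == 0:
            if z > entry:
                position = -1
            elif z < -entry:
                position = 1
        elif position == -1 and z <= 0:
            position = 0
        elif position == 1 and z >= 0:
            position = 0
        positions.append(position)
    return np.array(positions)
```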

There are a couple of issues which make this simple strategy difficult to implement in practice:

  1. The constants \alpha and \beta are not constants in practice and vary over time. They are not market observables and hence have to be estimated with some estimates being more profitable than others.
  2. The long term relationship can break down: the spread can move from one equilibrium to another, so that the changing {\alpha, \beta} gives an “open short” signal while the spread keeps rising toward a new equilibrium; by the time the close signal comes, the spread is above the entry value, resulting in a loss.

Both of these facts are unavoidable and the strategy has to account for them.

Determining Parameters

The parameters {\alpha, \beta} can be estimated from the intercept and slope of a linear regression of the prices of y against the prices of x. Note that linear regression is not symmetric: regressing x against y does not give the inverse of these parameters, so the pair (x, y) is not the same as (y, x). While most authors use ordinary least squares regression, some use total least squares, since they assume that the prices of both stocks contain some intraday noise. However, the main issue with this approach is that we have to pick an arbitrary lookback window.

In this project, we have used the Kalman filter, which is related to an exponential moving average. This is an adaptive filter which updates itself iteratively and produces \alpha, \beta, e and \sigma simultaneously. We use the Python package pykalman, whose EM method calibrates the covariance matrices over the training period.
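The author's implementation is not shared; the sketch below shows one common way to set up such an adaptive regression with pykalman, treating {\beta, \alpha} as a random-walk state. The delta parameter (the assumed state noise) is an illustrative choice, not a value from the article:

```python
import numpy as np
from pykalman import KalmanFilter

def kalman_regression(x, y, delta=1e-4):
    """Online estimates of alpha_t and beta_t in y_t = alpha_t + beta_t * x_t + e_t.
    x, y are 1-d arrays of prices."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)

    # Observation matrix at time t is [x_t, 1], so the 2-d state is [beta_t, alpha_t]
    obs_mat = np.vstack([x, np.ones(len(x))]).T[:, np.newaxis, :]
    trans_cov = delta / (1.0 - delta) * np.eye(2)   # random-walk state noise

    kf = KalmanFilter(n_dim_obs=1, n_dim_state=2,
                      initial_state_mean=np.zeros(2),
                      initial_state_covariance=np.ones((2, 2)),
                      transition_matrices=np.eye(2),
                      observation_matrices=obs_mat,
                      observation_covariance=1.0,
                      transition_covariance=trans_cov)

    # kf = kf.em(y, n_iter=5) could be used to calibrate the covariance
    # matrices over a training period, as described in the text.
    state_means, state_covs = kf.filter(y)
    beta, alpha = state_means[:, 0], state_means[:, 1]
    spread = y - (alpha + beta * x)                  # e_t from the article
    return alpha, beta, spread
```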

Another question that comes up is whether to regress prices or returns. The latter choice requires holding an equal dollar amount in both the long and short positions, i.e. the portfolio would have to be rebalanced every day, increasing transaction costs, slippage, and bid/ask spread. Hence we have chosen to use prices, which is justified in the next subsection.

Stability of the Long Term Relationship

The stability of the long term relationship is assessed by testing whether the pair is co-integrated. Note that even if a pair is not co-integrated outright, it might be for the proper choice of the leverage ratio. Once the parameters have been estimated as above, the spread time series e_{t} is tested for stationarity by the augmented Dickey-Fuller (ADF) test. In Python, we obtain this from the adfuller function in the statsmodels module. The result gives the t-statistics at different significance levels. We found that not many pairs were selected at the 1% level, so we chose 10% as our threshold.
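As a simple illustration (the project's own code is not shared), the check with statsmodels' adfuller might look like this, using the 10% threshold from the text:

```python
from statsmodels.tsa.stattools import adfuller

def spread_is_stationary(spread, pvalue_threshold=0.10):
    """Run the augmented Dickey-Fuller test on the spread and compare the
    p-value against the chosen threshold (10% as discussed above)."""
    adf_stat, pvalue = adfuller(spread)[:2]
    return pvalue < pvalue_threshold
```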

One drawback is that to perform the ADF test we have to choose a lookback period which reintroduces the parameter we avoided using the Kalman filter.

Choosing Sectors and Stocks

The trading strategy deploys an initial amount of capital. To diversify the investment, five sectors are chosen: financials, biotechnology, automotive, and so on. A training period is chosen, and the capital allocated to each sector is decided based on a minimum-variance portfolio approach. Apart from this initial allocation, each sector is traded independently, so the discussion below is limited to a single sector, namely financials.

Within the financial sector, we choose about n = 47 names based on large market capitalization. We are looking for stocks with high liquidity, a small bid/ask spread, the ability to short the stock, etc. Once the stock universe is defined we can form n(n-1) pairs, since, as mentioned above, (x, y) is not the same as (y, x). In our financial portfolio, we would like to maintain up to five pairs at any given time. On any day that we want to enter into a position (for example the starting date) we run a screen on all n(n-1) pairs and select the top pair(s) according to criteria such as those discussed next.

Choosing Pairs

For each pair, the signal is obtained from the Kalman filter and we check whether |e_{t}| > n_{z} \sigma_{t}, where n_{z} is the z-score threshold to be optimized. This ensures that the pair has an entry point. We perform this test first since it is inexpensive. If the pair has an entry point, we then choose a lookback period and perform the ADF test.

The main goal of this procedure is not only to determine the list of pairs which meet the standards, but to rank them according to metrics related to the expected profitability of the pairs.

Once the ranking is done we enter into the positions corresponding to the top pairs until we have a total of five pairs in our portfolio.

Results

In the following, we calibrated the Kalman filter over calendar year 2011 (Cal11) and then used the calibrated parameters to trade in calendar year 2012 (Cal12), keeping only one stock pair in the portfolio.

In the tests shown, we kept the maximum allowed drawdown per trade at 9%, but allowed a maximum loss of 6% per trade in one strategy and only 1% in the other. Performance improves with the tightening of the maximum allowed loss per trade: the Sharpe ratio (assuming a zero benchmark) was 0.64 and 0.81 respectively, while the total P&L was 9.14% and 14%.

The thresholds were chosen based on the simulation in the training period.

Future Work

  1. Develop better screening criteria to identify the pairs with the best potential. I already have several ideas and this will be ongoing research.
  2. Optimize the lookback window and the buy/sell z-score thresholds.
  3. Gather more detailed statistics in the training period. At present, I am gathering statistics of only the top 5 pairs (based on my selection criteria). In future, I should record statistics of all pairs that pass, which will indicate which trades are most profitable.
  4. In the training period, I am measuring profitability by the total P&L of the trade, from entry till the exit signal is reached. However, I should also record the maximum profit so that I can determine an earlier exit threshold.
  5. Run the simulation over several years, i.e. calibrate on one year and then test on the next. This will generate several years' worth of out-of-sample tests. Other windows to optimize are the length of the training period and how frequently the Kalman filter has to be recalibrated.
  6. Expand the methodology to other sectors beyond financials.
  7. Explore other filters besides the Kalman filter.

Next Steps

If you are a coder or a tech professional looking to start your own automated trading desk, learn automated trading from live interactive lectures by practitioners. The Executive Programme in Algorithmic Trading (EPAT™) covers training modules like Statistics & Econometrics, Financial Computing & Technology, and Algorithmic & Quantitative Trading. Enroll now!

 

Note:

The work presented in the article has been developed by the author, Mr. Dyutiman Das. The underlying codes which form the basis for this article are not being shared with the readers. For readers who are interested in further readings on implementing pairs trading using Kalman Filter, please find the article below.

Link: Statistical Arbitrage Using the Kalman Filter by Jonathan Kinlay

 


Market Impact Cost


By Milind Paradkar

Market impact cost, a very important component of trading costs, is closely tracked by portfolio managers as it can make or break a fund's performance. In this post, we will throw some light on market impact cost, and identify its sources and the different means adopted by portfolio managers to mitigate it.


Recommended Quant Readings for you – Best of 2016!


As 2016 nears its finish line, here we are with the list of recommended reading on our blog with the top-rated blog posts, as voted by you! Enjoy the last few days doing what you love most! Read on.

System Architecture of Algorithmic Trading

This one is straight out of a lecture in the curriculum of QuantInsti's Executive Programme in Algorithmic Trading (EPAT™). It compares the traditional trading structure with algorithmic trading architecture and highlights the complexities in the latter. The post explains the three core components of the trading server: the Complex Event Processing Engine (the brain), the Order Management System (the limbs), and the Data Storage component. The life cycle of the entire system is also explained so that readers understand what happens when a data package is received from the exchange, where trading decisions happen, how risk is monitored, and how orders are managed.

Backtesting platforms for quants

There are many platforms out there, and for beginners it is often confusing to pick the most relevant one. The post lists the USPs of the available platforms so that you can make an informed choice before you start using a platform for backtesting. It is important to make this decision carefully, as you will need to spend enough time on one platform to get comfortable with it!

A Pairs Trading quant project, a working strategy with open sourced code in R

In this highly insightful article, QuantInsti's EPAT™ graduate Jacques Joubert shares his project work on Statistical Arbitrage in the R programming language. Readers who are more comfortable in Excel can download a pairs trading model in Excel here to get started. He talks briefly about the history of Statistical Arbitrage before moving on to the strategy and its R Markdown write-up.

Algorithmic Trading Strategies

What are the different algo trading strategies? What are the strategy paradigms and modelling ideas associated with each strategy? How do we build an algo trading strategy? These are some of the key questions answered in this in-depth article. QuantInsti's article on Algorithmic Trading Strategies covers the following:

  • Momentum based strategies
  • Arbitrage
  • Statistical Arbitrage
  • Market Making
  • Machine Learning Based

Python as programming language for DIY traders

Python has emerged as one of the most popular programming languages for algorithmic traders. In this set of articles, we have talked about Zipline, building technical indicators, and the benefits of learning Python for trading. The articles came to light around the webinar on automated trading using Python conducted by Dr. Yves Hilpisch. This year, we also had Dr. Hui Liu conducting a webinar on implementing Python on Interactive Brokers' C++ based API. Both Dr. Yves and Dr. Hui, two renowned names in the field of automated trading, have joined QuantInsti's impressive line-up of outstanding faculty for EPAT™.

Learn Machine Learning for Trading

Machine Learning and Artificial Intelligence are among the most sought-after streams of technology in this era. As trading has become automated, machine learning has become critical for staying competitive in the market. From fetching historical information to placing orders to buy or sell in the market, machine learning is an integral part of automated trading, and we have covered it in detail on our blog.

5 things to know about Algorithmic Trading to get started (in India)

As algorithmic trading picks up pace in India, more and more conventional traders and beginners want to know about this lucrative field. However, owing to the shortage of resources in the market, QuantInsti decided to put together a basic article for amateurs who want to step into the world of algorithmic trading. Explained in simple language, this article covers all the things one needs to know before starting algorithmic trading.

Next Step

We would love to hear from you about why you liked any or all of these. If you would like to read something specific in 2017, all suggestions are welcome!

 


A Glimpse Into Features of High Frequency Data


By Milind Paradkar

As the race to zero latency continues, high frequency data, a key component in HFT, remains under the scanner of researchers and quants across markets. Beginners to algorithmic trading often find the terms high frequency trading (HFT), latency, market microstructure, noise, etc. being tossed around on numerous algorithmic trading sites, in research papers, and in quant literature. This post aims to unravel some of these terms for our readers. We will take a brief overview of the features of high frequency data, some of which include:

  • Irregular time intervals between observations
  • Market microstructure noise
  • Non-normal asset return distributions (e.g. fat tail distributions)
  • Volatility clustering and long memory in absolute values of returns
  • High computational loads and related “Big data” problems

1) Irregular time intervals between observations

On any given trading day, liquid markets generate thousands of ticks which form the high frequency data. By nature, this data is irregularly spaced in time and is humongous compared to the regularly spaced end-of-the-day (EOD) data.

[Figure: example of tick-by-tick data for the AUD/JPY pair. Source: Pepperstone.com]

High frequency trading (HFT) involves analyzing this data to formulate trading strategies which are implemented with very low latencies. It is therefore essential that the mathematical tools and models incorporate the features of high frequency data, such as irregular time series and the others outlined below, to arrive at the right trading decisions. Let us cover some of the other features that define high frequency data.

2) Market Microstructure Noise

Market microstructure noise is a phenomenon observed in high frequency data that relates to the observed deviation of the traded price from the underlying (base) price. The presence of noise makes high frequency estimates of some parameters, like realized volatility, very unstable. Noise in high frequency data can result from various factors, including:

  1. Bid-Ask Bounce
  2. Asymmetry of information
  3. Discreteness of price changes
  4. Order arrival latency

 

Let us look at the concept of Bid-Ask Bounce, which is one of the causes of Noise.

Bid-Ask bounce – Bid-ask bounce occurs when the traded price keeps alternating between the bid price and the ask price (or vice versa). The price movement takes place only inside the bid-ask spread, which gives rise to the bounce effect. This bid-ask bounce gives rise to high volatility readings even if the price stays within the bid-ask window.

[Figures: bid-ask bounce and the resulting volatility readings]
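To make the effect concrete, here is a small illustrative simulation (the numbers are made up, not taken from the article): the mid price never moves, yet the tick-by-tick returns show non-zero volatility purely from the bounce between bid and ask.

```python
import numpy as np

np.random.seed(0)
mid = 100.0                                    # "true" price, held constant
half_spread = 0.01                             # bid = 99.99, ask = 100.01
sides = np.random.choice([-1.0, 1.0], 1000)    # each trade hits the bid or lifts the ask
trade_prices = mid + sides * half_spread
tick_returns = np.diff(np.log(trade_prices))

# Realized volatility of tick returns is positive even though the mid price is flat
print("std of tick returns:", tick_returns.std())
```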

3) Fat tail distributions

High frequency data exhibit fat tail distributions. To understand fat tails we first need to understand a normal distribution. A normal distribution assumes that all values in a sample are distributed equally above and below the mean. Thus, about 99.7% of all values fall within three standard deviations of the mean, and therefore there is only about a 0.3% chance of an extreme event occurring.

Many financial models such as Modern Portfolio Theory, the Efficient Markets Hypothesis, and the Black-Scholes option pricing model assume normality. However, real market events in the past have shown us that unpredictable human behavior makes the marketplace less than perfect. This gives rise to extreme events, and consequently to fat tail distributions and the risks that come with them.

By definition, a fat tail is a probability distribution which predicts movements of three or more standard deviations more frequently than a normal distribution. Quant analysts doing HFT need to model the tail risks to avoid big losses, and hence tail risk hedging assumes importance in HFT.

The plot shown below illustrates a fat tail distribution vis-à-vis a normal distribution.

 

[Figure: fat tail vs. normal distribution. Source: lexicon.ft.com]

4) Volatility clustering and long memory in absolute values of returns

High frequency data exhibits volatility clustering and long memory effects in absolute values of returns.

Volatility Clustering – In finance, volatility clustering refers to the observation, noted by Mandelbrot (1963), that “large changes tend to be followed by large changes, of either sign, and small changes tend to be followed by small changes.”

Long-range dependence (Long memory) – Long-range dependence (LRD), also called long memory or long-range persistence, is a phenomenon that may arise in the analysis of spatial or time series data. It relates to the rate of decay of statistical dependence of two points with increasing time interval or spatial distance between the points. A phenomenon is usually considered to have long-range dependence if the dependence decays more slowly than an exponential decay, typically a power-like decay.

5) High computational loads and related “Big data” problems

HFT players rely on microsecond/nanosecond latency and have to deal with enormous amounts of data. Utilizing big data for HFT comes with its own set of problems, and HFT firms need state-of-the-art hardware and the latest software technology to handle it; otherwise the processing time can exceed acceptable standards.

To Conclude

These were some of the features underlying high frequency data that HFT models need to take into account. If you want to learn various aspects of Algorithmic trading then check out the Executive Programme in Algorithmic Trading (EPAT™). The course covers training modules like Algorithmic & Quantitative Trading, Statistics & Econometrics, and Financial Computing & Technology. Enroll now!


Creating Heatmap Using Python Seaborn


Python Data Visualization – Creating Heatmap using Seaborn

By Milind Paradkar

In our previous blog we talked about Data Visualization in Python using Bokeh. Now, let's take our series on Python data visualization forward and cover another cool data visualization package. In this post we will use the Python Seaborn package to create heatmaps, which can be used for various purposes, including by traders for tracking markets.

Seaborn for Python Data Visualization

Seaborn is a Python data visualization library based on Matplotlib. It provides a high-level interface for drawing attractive statistical graphics. Because seaborn is built on top of Matplotlib, the graphics can be further tweaked using Matplotlib tools and rendered with any of the Matplotlib backends to generate publication-quality figures. [1]

Types of plots that can be created using seaborn include:

  • Distribution plots
  • Regression plots
  • Categorical plots
  • Matrix plots  
  • Timeseries plots

The plotting functions operate on Python dataframes and arrays containing a whole dataset, and internally perform the necessary aggregation and statistical model-fitting to produce informative plots.[2]

[Figure: example seaborn plots. Source: seaborn.pydata.org]

What is a heatmap?

A heatmap is a two-dimensional graphical representation of data where the individual values that are contained in a matrix are represented as colors. The seaborn package allows for creation of annotated heatmaps which can be tweaked using Matplotlib tools as per the creator’s requirement.

[Figure: annotated heatmap]

 

Python Heatmap Code

We will create a seaborn heatmap for a group of 30 pharmaceutical company stocks listed on the National Stock Exchange of India Ltd (NSE). The heatmap will display the stock symbols and their respective single-day percentage price changes.

We collate the required market data on pharma stocks and construct a comma-separated values (CSV) file comprising the stock symbols and their respective percentage price change in the first two columns of the CSV file.

Since we have 30 pharma companies in our list, we will create a heatmap matrix of 6 rows and 5 columns. Further, we want our heatmap to display the percentage price change for the stocks in descending order. To that effect, we arrange the stocks in descending order in the CSV file and add two more columns which indicate the position of each stock on the X & Y axes of our heatmap.

Import the required Python packages

We import the following Python packages:

 

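The original code was shown as an image; a likely set of imports, matching the libraries used in the rest of this walkthrough, would be:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```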

Load the dataset

We read the dataset using the read_csv function from pandas, and visualize the first ten rows using the print statement.

 

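A sketch of this step, assuming a hypothetical file name for the CSV described above:

```python
# Hypothetical file name; the first two columns hold the symbol and % change,
# and two more columns (X, Y) hold each stock's position on the heatmap grid.
df = pd.read_csv("pharma_heatmap.csv")
print(df.head(10))
```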

Create a Python Numpy array

Since we want to construct a 6 x 5 matrix, we create NumPy arrays of that shape from the “Symbol” and “Change” columns.

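Continuing the sketch (the column names are assumptions based on the description above):

```python
symbol = np.asarray(df["Symbol"]).reshape(6, 5)
percentage = np.asarray(df["Change"]).reshape(6, 5)
```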

Create a Pivot in Python

The pivot function is used to create a new derived table from the given dataframe object “df”. The function takes three arguments: index, columns, and values. The cell values of the new table are taken from the column given as the values parameter, which in our case is the “Change” column.

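A sketch of the pivot, assuming the grid-position columns are named “X” and “Y”:

```python
# Rows come from "Y", columns from "X", and cell values from "Change"
result = df.pivot(index="Y", columns="X", values="Change")
```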

Create an Array to Annotate the Heatmap

In this step we create an array which will be used to annotate the heatmap. We call the flatten method on the “symbol” and “percentage” arrays to flatten the nested arrays in one line. The zip function then pairs each flattened symbol with its flattened percentage change, and inside a Python for loop we use the format function to combine the stock symbol and the percentage price change value as required.

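A sketch of this annotation array, built from the arrays created earlier:

```python
labels = np.asarray(
    ["{0}\n{1:.2f}%".format(sym, chg)
     for sym, chg in zip(symbol.flatten(), percentage.flatten())]
).reshape(6, 5)
```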

Create the Matplotlib figure and define the plot

We create an empty Matplotlib plot and define the figure size. We also add a title to the plot, set the title's font size, and set its distance from the plot using the set_position method.

We wish to display only the stock symbols and their respective single-day percentage price change. Hence, we hide the ticks on the X & Y axes and also remove both axes from the heatmap plot.

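A sketch of the figure set-up (the figure size and title text are assumptions):

```python
fig, ax = plt.subplots(figsize=(12, 7))
title = "Pharma stocks - single-day % price change"
ttl = ax.set_title(title, fontsize=18)
ttl.set_position([0.5, 1.05])          # push the title slightly above the plot

# Hide the ticks and remove the axes, since only the annotated cells should show
ax.set_xticks([])
ax.set_yticks([])
ax.axis("off")
```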

Create the Heatmap

In the final step, we create the heatmap using the heatmap function from the Python seaborn package. The heatmap function takes the following arguments:

  • data – 2D dataset that can be coerced into an ndarray. If a Pandas DataFrame is provided, the index/column information will be used to label the columns and rows.
  • annot – an array of the same shape as data which is used to annotate the heatmap.
  • cmap – a Matplotlib colormap name or object. This maps the data values to the color space.
  • fmt – string formatting code to use when adding annotations.
  • linewidths – sets the width of the lines that will divide each cell.
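Putting the pieces together, the call might look like this (the colormap choice is an assumption; fmt is left empty because annot is an array of preformatted strings):

```python
sns.heatmap(result, annot=labels, fmt="", cmap="RdYlGn",
            linewidths=0.30, ax=ax, cbar=False)
plt.show()
```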

[Figure: final seaborn heatmap for the 30 pharma stocks]

Here’s our final output of the seaborn heatmap for the chosen group of pharmaceutical companies. Looks pretty neat and clean, doesn’t it? A quick glance at this heatmap and one can easily make out how the market is faring for the period.

Download the Python Heatmap Code

Readers can download the entire Python code plus the Excel file using the download button provided below and create their own custom heatmaps. With a little tweak in the Python code you can create heatmaps of any size, for any market index, or for any period. The heatmap can be used in live markets by connecting a real-time data feed to the Excel file that is read in the Python code.

To Conclude

As illustrated by the heatmap example above, seaborn is easy to use and its plots can be tweaked to one's requirements. You can refer to the seaborn documentation for creating other impressive charts that you can put to use for analyzing the markets.

Next Step

Python data visualization is just one of the elements covered in the vast domain of algorithmic trading. To understand the patterns, one must be well-versed in the basics. Want to know more about algorithmic trading? Click here to check out more about Algorithmic Trading.

Download Python Code:

  • Data Visualization using Seaburn.rar
    • Pharma Heatmap using Seaburn.py
    • Pharma Heatmap.data



Put-Call Parity in Python Programming Language


Put Call Parity in Python

We talked about the Covered Call Strategy and the Long Call Butterfly Strategy in our previous articles on the blog. Now, we shall talk about put-call parity.

The put-call parity principle defines the relationship between the price of a European put option and a European call option, both having the same underlying asset, strike price, and expiration date.

If there is a deviation from put-call parity, then it would result in an arbitrage opportunity. Traders would take advantage of this opportunity to make riskless profits till the time the put-call parity is established again.

The put-call parity principle can be used to validate an option pricing model. If the option prices as computed by the model violate the put-call parity rule, such a model can be considered to be incorrect.

Understanding Put Call Parity

To understand put-call parity, consider a portfolio “A” comprising a call option and cash. The amount of cash held equals Ke^{-rT}, the present value of the call strike price. Consider another portfolio “B” comprising a put option and the underlying asset. S0 is the initial price of the underlying asset and ST is its price at expiration. Let “r” be the risk-free rate and “T” the time to expiration. At time “T” the cash will be worth K (the strike price) given the risk-free rate “r”.

Portfolio A = Call option + Cash

Portfolio B = Put option + Underlying Asset

[Figure: put-call parity payoff diagram]

If the share price is higher than K the call option will be exercised. Else, cash will be retained. Hence, at “T” portfolio A’s worth will be given by max(ST, K).

If the share price is lower than K, the put option will be exercised. Else, the underlying asset will be retained. Hence, at “T”, portfolio B’s worth will be given by max(ST, K).

If the two portfolios are equal at time “T”, then they should be equal at any time. This gives us the put-call parity equation –

C + Ke^{-rT} = P + S0

When put-call parity principle gets violated, traders will try to take advantage of the arbitrage opportunity. An arbitrage trader will go long on the undervalued portfolio and short the overvalued portfolio to make a risk-free profit.

Python code used for plotting the charts:
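The original code was shown as an image; below is a minimal sketch (with assumed parameter values) that plots the expiry values of the two portfolios, both of which equal max(ST, K):

```python
import numpy as np
import matplotlib.pyplot as plt

K = 100                                  # strike price (assumed)
ST = np.linspace(50, 150, 101)           # range of underlying prices at expiry

call_payoff = np.maximum(ST - K, 0)      # long call at expiry
put_payoff = np.maximum(K - ST, 0)       # long put at expiry
portfolio_A = call_payoff + K            # call + cash worth K at expiry
portfolio_B = put_payoff + ST            # put + underlying asset

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
axes[0].plot(ST, portfolio_A)
axes[0].set_title("Portfolio A: call + cash")
axes[1].plot(ST, portfolio_B)
axes[1].set_title("Portfolio B: put + underlying")
for ax in axes:
    ax.set_xlabel("Underlying price at expiry (ST)")
plt.show()
```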

 

Next Step

This was a brief explanation of put-call parity, wherein we provided the Python code for plotting the constituents of the put-call parity equation. In future posts we will cover and attempt to illustrate other derivatives concepts using Python. Our Executive Programme in Algorithmic Trading (EPAT) includes dedicated lectures on Python and derivatives. To know more about EPAT, check the EPAT course page or feel free to contact our team at contact@quantinsti.com for queries on EPAT.

Download Python Code:

  • Put_Call_Parity.rar
    • putcallparity.py



Pairs Trading on ETF – EPAT Project Work


This article is the final project submitted by the author as part of his coursework in Executive Programme in Algorithmic Trading (EPAT™) at QuantInsti. You can check out our Projects page and have a look at what our students are building after reading this article.

About the Author

Edmund Ho did his Bachelor's in Commerce at the University of British Columbia and completed his Master's in Investment Management at the Hong Kong University of Science and Technology. Edmund was enrolled in the 27th batch of EPAT™, and this report is part of his final project work.

Project Summary

ETFs are very popular for pairs trading simply because they eliminate firm-specific factors. On top of that, most ETFs are shortable, so we don't have to worry about the short constraint. In this project, we try to build a portfolio using 3 ETF pairs in the oil (USO vs XLE), technology (XLK vs IYW), and financial (XLF vs PSCF) sectors.

Over the long run, the overall performance of commodity producers is highly correlated with the commodities themselves. In the short term, they may diverge due to an individual company's performance or the overall equity market, and hence short-term arbitrage opportunities may exist. In the technology sector, we attempt to seek mispricing between two large-cap technology ETFs. Last, we attempt to see if an arbitrage opportunity exists between the large- and mid-cap financial ETFs.

Pair 1 – Oil Sector USO vs XLE

Cointegration Test

[Charts: price series, regression, and spread returns for USO vs XLE]

The above charts were generated in R Studio. The in-sample data covers the period between Jan 1st, 2011 and Dec 31st, 2014.

First, we plot the prices of the pair, which gives the impression that both price series are quite similar. Then we perform the regression analysis for USO vs XLE (return USO = Beta * Return XLE + Residual) and find the beta, or hedge ratio, to be 0.7493. Next, we apply the hedge ratio to generate the spread returns. We can see the spread returns deviate closely around 0, which shows the characteristic cointegrating pattern. Finally, we apply the Augmented Dickey-Fuller test with a threshold of 0.2 and check whether the pair passes the ADF test. The results are as follows:

Augmented Dickey-Fuller Test

data:(spread) Dickey-Fuller = -3.0375, Lag order = 0, p-value = 0.1391

alternative hypothesis: stationary

[1] “The spread is likely Cointegrated with a pvalue of 0.139136738842547”

With a p-value of 0.1391, the pair satisfies the cointegration test, and we will go ahead and back-test the pair in the next section.

Strategy Back-Testing

[Charts: in-sample back-testing results for USO/XLE]

The above back-testing results were generated in R Studio. The back-testing period used the same in-sample data as the cointegration test. Our trading strategy is relatively simple, as follows (a sketch of the logic appears after the list):

  • If the spread is more than +/- 1.5 standard deviations away from its mean, computed over a rolling lookback period of 120 days, then we go short/long accordingly.
  • At all times, only 1 open position.
  • Close the long/short position when the spread reverts to its mean/moving average.
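The project's back-test was written in R (see the download at the end of the article); the following is a minimal Python sketch of the same entry/exit rules, assuming a precomputed spread series:

```python
import numpy as np
import pandas as pd

def pair_positions(spread, lookback=120, entry_z=1.5):
    """Return spread positions: -1 = short spread, +1 = long spread, 0 = flat.
    Entries occur when the spread is more than entry_z rolling standard
    deviations from its rolling mean; exits occur when it reverts to the mean."""
    roll_mean = spread.rolling(lookback).mean()
    roll_std = spread.rolling(lookback).std()

    position = 0
    positions = []
    for s, m, sd in zip(spread, roll_mean, roll_std):
        if np.isnan(sd):
            positions.append(0)           # not enough history yet
            continue
        if position == 0:
            if s > m + entry_z * sd:
                position = -1             # spread too high: short it
            elif s < m - entry_z * sd:
                position = 1              # spread too low: long it
        elif (position == -1 and s <= m) or (position == 1 and s >= m):
            position = 0                  # spread reverted: close the trade
        positions.append(position)
    return pd.Series(positions, index=spread.index)
```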

[Charts: in-sample performance summary for USO/XLE vs SPY]

The above back-testing results were generated in R Studio using the PerformanceAnalytics package. During the in-sample back-testing period, the strategy achieved a cumulative return of 121.03%, while the SPY (S&P 500) had a cumulative return of 61.78%. This translates into annualized returns of 22% and 12.82%, respectively. In terms of risk, the strategy had a much lower annualized standard deviation of 11.63% vs. 15.35% for SPY. The worst drawdown for the strategy was 6.39% vs. 19.42% for SPY. The annualized Sharpe ratio was superior for our strategy at 1.89 vs. 0.835 for SPY. Please note that none of the above calculations factor in transaction costs.

Out of sample test

For the out-of-sample period between Jan 1st, 2015 and Dec 31, 2015, the pair did not pass the ADF test, as suggested by a high p-value of 0.3845. This could be explained by the sharp decline in crude oil prices while the equity market persisted in an uptrend. If we look at the spread returns, they at first seem to cointegrate around 0, but with a much larger deviation, as suggested by the chart below.

[Chart: out-of-sample spread returns for USO vs XLE]

The actual spread obviously does not suggest a cointegrating pattern, as indicated by the high p-value. Next, we go ahead and back-test the same strategy using the out-of-sample data despite the pair failing the cointegration test. The hedge ratio was found to be 1.1841. The key back-testing results, generated in R Studio, are as follows:

                                   USO and XLE Stat Arb    SPY
Annualized Return                  0.09862893              -0.007623957
Cumulative Return                  0.09821892              -0.007593818
Annualized Sharpe Ratio (Rf=0%)    0.5632756               -0.04884633
Annualized Standard Deviation      0.1750989                0.1560804
Worst Drawdown                     0.1706643                0.1228571

 

At first glance, the strategy seems to outperform the SPY in all aspects, but because the lookback period was kept the same as in the in-sample back-test (120 days, for consistency), the strategy had only 1 trade during the out-of-sample period, which may not reflect the situation going forward. However, this shows that it is not necessary to have a perfectly cointegrating pair in order to extract profit opportunities. In reality, only a few perfect pairs would pass the test.

Pair 2 – Large Cap Technology XLK vs. IYW

Cointegration Test

[Charts: price series, regression, and spread returns for XLK vs IYW]

It should not be a surprise that XLK and IYW have a strong linear relationship, as demonstrated by the regression analysis with a hedge ratio of 0.903. The two large-cap technology ETFs are very similar in nature except for size, volume, expense ratio, etc. However, if we take a closer look at the actual return spreads, they do not seem to satisfy the cointegration test. If we run the ADF test, the result shows they are not likely to be cointegrated, with a p-value of 0.5043.

Augmented Dickey-Fuller Test

data: (spread) Dickey-Fuller = -2.1748, Lag order = 0, p-value = 0.5043

alternative hypothesis: stationary

[1] “The spread is likely NOT Cointegrated with a pvalue of 0.504319921216107”

The purpose of running the strategy on this pair is to see if there is any mispricing (short-term deviation) in the pair that we can profit from. In the USO and XLE example, we observed that profit opportunities may still exist despite the pair failing the cointegration test. Here, we go ahead and test the pair to see if any profit opportunity exists.

Back-testing Result

[Charts: back-testing results for XLK/IYW vs SPY]

                                   XLK and IYW Stat Arb    SPY
Annualized Return                  -0.003581305             0.1282006
Cumulative Return                  -0.01420635              0.6177882
Annualized Sharpe Ratio (Rf=0%)    -0.2839318               0.8347157
Annualized Standard Deviation       0.01261326              0.153586
Worst Drawdown                      0.02235892              0.1942388

The back-testing results illustrate that the strategy performed very poorly during the back-testing period between Jan 1st, 2011 and Dec 31, 2014. This demonstrates that the two ETFs are so highly correlated with each other that it is very hard to extract profit opportunities from them. In order for a statistical arbitrage strategy to work, we need a pair with some volatility in its spread, but the spread should eventually show a mean-reverting pattern. In the next section, we perform the same analysis on the financial ETFs.

Pair 3 – Financial Sectors XLF vs. PSCF

Cointegration Test

[Charts: price series, regression, and spread returns for XLF vs PSCF]

In this pair, we attempt to seek a trading opportunity between the large-cap financial ETF XLF and the small-cap financial ETF PSCF. The price series show a very similar pattern. In terms of regression analysis, they obviously show a strong correlation, with a hedge ratio of 0.9682. The spread return also illustrates some cointegrating pattern, with the spread deviating around 0. The ADF test, with the threshold set at 80% confidence, shows the pair is likely to be cointegrated, with a p-value of 0.1026.

Augmented Dickey-Fuller Test

 

data: (spread) Dickey-Fuller = -3.1238, Lag order = 0, p-value = 0.1026

alternative hypothesis: stationary

 

[1] “The spread is likely Cointegrated with a pvalue of 0.102608136882834”

Back-testing Result

[Charts: back-testing results for XLF/PSCF vs SPY]

                                   XLF and PSCF Stat Arb    SPY
Annualized Return                  0.01212355               0.1282006
Cumulative Return                  0.04923268               0.6177882
Annualized Sharpe Ratio (Rf=0%)    0.1942203                0.8347157
Annualized Standard Deviation      0.06242163               0.153586
Worst Drawdown                     0.07651392               0.1942388

Although the pair satisfies the cointegration test with a low p-value, the back-testing results demonstrate below-average performance when compared to the index return.

Conclusion

In this project, we chose 3 different pairs of ETFs to back-test our simple mean-reverting strategy. The back-test results show superior performance on USO/XLE, but not on the other two pairs. We can conclude that for the pairs trading strategy to work, we do not need a pair that shows a strong linear relationship, but a long-term mean-reverting pattern is essential for obtaining a decent result. In the pair XLK/IYW, we attempted to seek mispricing between the two issuers; however, in the efficient US ETF market, mispricing on such big ETFs is very rare, and hence the strategy performs very poorly on this pair. On the other hand, the correlation and cointegration tests on the pair XLF/PSCF suggest the pair is an ideal candidate for the statistical arbitrage strategy; however, the back-testing results show otherwise. In any statistical arbitrage strategy we are essentially trading the volatility of the spread, and if there is not enough volatility around the spread to begin with, as in XLK/IYW, the profit opportunity is trivial. In the pair USO/XLE, the volatility around the spread is ideal and the cointegration test shows the pair has a mean-reverting pattern, so it is not a surprise that this pair prevails in the back-testing results.

Next Step

Read about other strategies in this article on Algorithmic Trading Strategy Paradigms. If you also want to learn more about Algorithmic Trading, then click here.

  • Project_Download.rar
    • Project_Cointegration_Test.R
    • Project_Backtest_Test.R

