Posts

Showing posts with the label Covariance matrix

Compute Minimum Variance Portfolio

This post will describe how to compute the minimum variance portfolio. The minimum variance portfolio is closely related to the maximum Sharpe ratio portfolio: it is a special case in which the returns of all assets are assumed to be equal, so the optimiser focuses only on the variances. If the asset returns are supplied as well, the optimiser incorporates both factors when computing the optimal result.

Physical meaning: These three short videos offer an excellent explanation:
1. Video 1: https://www.youtube.com/watch?v=3SzYWjLDCSY
2. Video 2: https://www.youtube.com/watch?v=y-hADD-nwb4
3. Video 3: https://www.youtube.com/watch?v=fNIoVdDNFmk

Practical use: Asset managers can use this formula to determine the weights in which they should hold different assets to minimise variance. What is interesting is that we need only the covariance matrix to compute the minimum variance portfolio. This ...
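The point above, that only the covariance matrix is needed, can be sketched with the standard closed-form solution w = Σ⁻¹·1 / (1ᵀ·Σ⁻¹·1). The function name and the toy covariance numbers below are illustrative, not from the post:

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum variance weights: w = inv(cov)·1 / (1'·inv(cov)·1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)  # solve a linear system instead of inverting
    return w / w.sum()              # normalise so the weights sum to 1

# Toy 3-asset covariance matrix (illustrative numbers only)
cov = np.array([[0.040, 0.006, 0.010],
                [0.006, 0.090, 0.012],
                [0.010, 0.012, 0.160]])
w = min_variance_weights(cov)
```

Note that the resulting portfolio variance w·Σ·w can never exceed the variance of the least volatile single asset.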

Denoise a Covariance Matrix using Targeted Shrinkage

This post will describe how to denoise a covariance matrix using targeted shrinkage. The constant residual eigenvalue method is still the preferred approach, as targeted shrinkage slightly distorts the eigenvalues representing signal.

Practical use: The eigenvalues are naturally stored in decreasing order. Once we find the range of eigenvalues that corresponds to signal (by fitting to the Marcenko–Pastur distribution), we can reset all eigenvalues below this threshold to a constant value. This essentially vacuums all the eigenvalues containing random noise into a single eigenvalue. These transformed eigenvalues are used with the original eigenvectors to recreate a denoised covariance matrix for further processing.

Compute using Python:
# File: Denoise-Targeted-Shrinkage.py
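The targeted-shrinkage idea can be sketched as follows: rebuild the signal and noise eigen-components separately, then shrink only the off-diagonal part of the noise component, leaving the signal eigenvalues untouched. The function name and toy matrix are illustrative, not from the post's code:

```python
import numpy as np

def denoise_targeted_shrinkage(corr, n_facts, alpha=0.0):
    """Shrink only the noise-related eigen-components of a correlation matrix.
    alpha=0 removes all off-diagonal noise; alpha=1 leaves the matrix unchanged."""
    eVal, eVec = np.linalg.eigh(corr)        # eigh returns ascending order
    eVal, eVec = eVal[::-1], eVec[:, ::-1]   # descending: signal eigenvalues first
    # rebuild the signal part (top n_facts) and the noise part (the rest)
    signal = eVec[:, :n_facts] @ np.diag(eVal[:n_facts]) @ eVec[:, :n_facts].T
    noise = eVec[:, n_facts:] @ np.diag(eVal[n_facts:]) @ eVec[:, n_facts:].T
    # keep the noise diagonal intact, shrink only its off-diagonal entries
    return signal + alpha * noise + (1 - alpha) * np.diag(np.diag(noise))

corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
denoised = denoise_targeted_shrinkage(corr, n_facts=1, alpha=0.5)
```

Because the noise diagonal is kept intact, the denoised matrix retains a unit diagonal, and setting alpha=1 recovers the original matrix exactly.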

Denoise a Covariance Matrix using Constant Residual Eigen Value Method

This post will describe how to denoise a covariance matrix using the constant residual eigenvalue method.

Practical use: The eigenvalues are naturally stored in decreasing order. Once we find the range of eigenvalues that corresponds to signal (by fitting to the Marcenko–Pastur distribution), we can reset all eigenvalues below this threshold to a constant value. This essentially vacuums all the eigenvalues containing random noise into a single eigenvalue. These transformed eigenvalues are used with the original eigenvectors to recreate a denoised covariance matrix for further processing.

Compute using Python:
# File: Denoise-Constant-Residual-Eigen.py
import numpy as np, pandas as pd
import matplotlib.pyplot as plt
#---------------------------------------------------
def mpPDF(var, q, pts):
    # Marcenko–Pastur pdf; q = T/N
    eMin, eMax = var * (1 - (1. / q) ** .5) ** 2, var * (1 + (1. / q) ** .5) ** 2
    eVal = np.linspace(eMin, eMax, pts)
    pdf = q / (2 * np.pi * var * eVal) * ...
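The denoising step itself can be sketched as follows: replace the noise eigenvalues below the signal cut-off with their mean, rebuild the matrix from the transformed eigenvalues and the original eigenvectors, and rescale so the diagonal is exactly one. The function name and toy matrix are illustrative, not from the post's code:

```python
import numpy as np

def denoise_constant_residual(corr, n_facts):
    """Replace the noise eigenvalues (below the signal cut-off) by their mean,
    then rebuild and rescale the correlation matrix."""
    eVal, eVec = np.linalg.eigh(corr)        # eigh returns ascending order
    eVal, eVec = eVal[::-1].copy(), eVec[:, ::-1]  # descending: signal first
    eVal[n_facts:] = eVal[n_facts:].mean()   # vacuum the noise into one constant level
    corr1 = eVec @ np.diag(eVal) @ eVec.T
    d = np.sqrt(np.diag(corr1))
    return corr1 / np.outer(d, d)            # rescale so the diagonal is exactly 1

corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
denoised = denoise_constant_residual(corr, n_facts=1)
```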

Identify noise in Correlation and Covariance matrices

This post will describe how to identify and remove noise from a correlation or covariance matrix.

Practical use: Correlation and covariance matrices are used as the basis for many quant finance algorithms. If these matrices are corrupted by noise, the results could be invalid. With an easy workflow to identify the presence of noise in our dataset, we can then take steps to denoise.

Physical meaning: We first need a template of what to examine, what good looks like, and a way to compare our covariance matrix against this template, to determine whether action is required.

What to examine: We will examine the probability density function of the eigenvalues of our covariance/correlation matrices. This is an excellent video on the meaning of the probability density function: https://www.khanacademy.org/math/statistics-probability/random-variables-stats-library/random-variables-continuous/v/probability-density-functions

What good looks like: The Marcenko–Pastur Theorem ...
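The comparison described above can be sketched as follows: the Marcenko–Pastur density gives the theoretical eigenvalue distribution of a purely random correlation matrix, so empirical eigenvalues falling outside its support [eMin, eMax] suggest signal rather than noise. The function name and simulation parameters are illustrative assumptions:

```python
import numpy as np

def mp_pdf(var, q, pts=1000):
    """Marcenko–Pastur density for the eigenvalues of a random correlation matrix.
    q = T/N (observations per variable); var is ~1 for standardized returns."""
    eMin = var * (1 - (1.0 / q) ** 0.5) ** 2
    eMax = var * (1 + (1.0 / q) ** 0.5) ** 2
    eVal = np.linspace(eMin, eMax, pts)
    pdf = q / (2 * np.pi * var * eVal) * np.sqrt((eMax - eVal) * (eVal - eMin))
    return eVal, pdf

# Eigenvalues of a purely random correlation matrix should fall near [eMin, eMax]
rng = np.random.default_rng(0)
T, N = 10_000, 100
x = rng.normal(size=(T, N))
eig = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))
grid, pdf = mp_pdf(var=1.0, q=T / N)
```

In real return data, a few large eigenvalues typically sit well above eMax; those are the signal eigenvalues the denoising posts above preserve.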