Corporate leverage targets

The following regression of leverage ratios at quarter $t+1$ on the same at quarter $t$ $$\left(\frac{D}{V}\right)_{t+1} = \gamma \left(\frac{D}{V}\right)^* + (1-\gamma) \left(\frac{D}{V}\right)_{t} + \epsilon_{t+1}$$ should describe reasonably well the behavior of a corporation that partially adjusts its leverage ratio from where it currently is towards a long-term target $\left(\frac{D}{V}\right)^*,$ where $\gamma$ measures the speed of adjustment. For more discussion, see section 4.6 of my notes.
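
To see the mechanics, here is a minimal simulation of the partial-adjustment equation with made-up values for $\gamma$ and the target (illustrative only, not estimates of anything):

import numpy as np
rng = np.random.default_rng(0)
gamma_sim, target_sim, T = 0.10, 0.15, 100         # hypothetical speed of adjustment and target
lev = np.empty(T)
lev[0] = 0.40                                      # start well above the target
for t in range(T - 1):
    # partial adjustment toward the target plus a white-noise shock
    lev[t + 1] = gamma_sim * target_sim + (1 - gamma_sim) * lev[t] + rng.normal(0, 0.01)
# the simulated series drifts from 0.40 down toward 0.15 over time

With $\gamma=0.10$, roughly 10% of the remaining gap to the target closes each quarter.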

Here $D$ is the value of all non-operating liabilities (see below for details) while $V$ is enterprise value. The current leverage ratio matters because adjusting it, whether by issuing new debt or retiring existing debt, is costly and takes time. The white-noise term $\epsilon$ simply reflects the fact that everyone has a plan until they get punched in the mouth (Mike Tyson, 1987).

Notice that if $\gamma=0$ the equation degenerates to $$\left(\frac{D}{V}\right)_{t+1} = \left(\frac{D}{V}\right)_{t} + \epsilon_{t+1},$$ which is known as a random walk. A corporation whose leverage ratio follows a random walk has no long-term target. Finding evidence of a long-term leverage target, therefore, amounts to rejecting the hypothesis that $\gamma=0$.
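
A quick sketch of the $\gamma=0$ case (again with made-up numbers): the shocks simply accumulate, and nothing pulls the series back to any target.

import numpy as np
rng = np.random.default_rng(1)
eps = rng.normal(0, 0.01, size=200)     # white-noise shocks
random_walk = 0.25 + np.cumsum(eps)     # gamma = 0: leverage is its starting value plus accumulated shocks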

Our main goal is to estimate $\gamma$ and $\left(\frac{D}{V}\right)^*.$ For that we'll need a few standard python modules.

In [11]:
import pandas as pd # pandas is excellent for creating and manipulating dataframes, R-style
import numpy as np # great for array manipulations and simulations
import matplotlib.pyplot as plt #graphing module with matlab-like properties
%matplotlib inline 
import requests # to make requests to webpages (like mine)
import statsmodels.api as sm # module full of standard statistical and econometric models, including OLS and time-series stuff

Next we import the data from my webpage and look at it a bit.

In [12]:
pd.options.display.float_format = '{:,.2f}'.format # this is the financial format we like
df = pd.read_csv('http://erwan.marginalq.com/index_files/tea_files/IBMlast.csv')
df.describe() # summary stats for our newly imported dataset 
Out[12]:
FF_CASH_ST FF_DEBT_ST FF_DEBT_LT FF_PFD_STK FF_MKT_VAL
count 105.00 105.00 105.00 105.00 105.00
mean 9,705.50 9,561.03 22,885.37 103.47 144,308.22
std 4,781.39 3,300.76 11,446.97 224.58 48,768.45
min 3,033.00 4,050.00 9,478.00 0.00 31,853.30
25% 6,861.00 6,987.00 14,828.00 0.00 121,991.00
50% 9,756.00 9,181.00 18,775.00 0.00 146,355.00
75% 11,764.00 12,315.00 28,478.00 247.00 170,714.00
max 46,408.00 20,543.00 62,391.00 1,091.00 240,675.00
In [13]:
df[0:4]
Out[13]:
tic DATE FF_CASH_ST FF_DEBT_ST FF_DEBT_LT FF_PFD_STK FF_MKT_VAL
0 IBM 3/31/1994 8951 11942 14937 1091 31,853.30
1 IBM 6/30/1994 8571 10357 14892 1091 34,360.80
2 IBM 9/30/1994 10804 10240 14077 1091 40,881.90
3 IBM 12/31/1994 10554 9570 12548 1081 43,196.70

Next we create the variables we need. Notice the use of pandas' shift method to easily create a lagged variable.

In [14]:
df['FF_PFD_STK']=df['FF_PFD_STK'].fillna(0) # make sure NaNs become zeros so as not to lose data
df['V']=df['FF_DEBT_ST']+df['FF_DEBT_LT']+df['FF_PFD_STK']+df['FF_MKT_VAL']-df['FF_CASH_ST'] # enterprise value (EV)
df['D']=df['FF_DEBT_ST']+df['FF_DEBT_LT']+df['FF_PFD_STK']-df['FF_CASH_ST'] # non-operating liabilities net of cash (net debt)
df['Lev']=df['D']/df['V'] # leverage ratio D/V
df['LevLag']=df.Lev.shift(1)
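
As a quick sanity check (not part of the original notebook output), the lag lines up as expected: the first LevLag is missing and every later row carries the previous quarter's Lev.

df[['DATE', 'Lev', 'LevLag']].head()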

Next we plot leverage. From the chart alone it is hard to tell whether IBM has a long-term tendency to return to some target. There appear to be three regimes: high leverage early on, low leverage in the middle, and high leverage again recently. A regime-switching model would clearly be a better way to represent IBM's leverage history, but for illustration we will estimate our model as is.

In [15]:
datenew=np.asarray([1994.125+ i*.25 for i in range(len(df['DATE']))])    # a date variable that makes for a prettier chart
plt.plot(datenew,df['Lev'])       
Out[15]:
[line chart of IBM's leverage ratio (Lev) by quarter, 1994-2020]

Now we run our regression of leverage on lagged leverage.

In [16]:
y=df.loc[1:,'Lev'] # note that this excludes the first row of data since lag is missing for that line
x=df.loc[1:,'LevLag']
x=sm.add_constant(x) # we run a standard OLS with constant 
mod=sm.OLS(y,x)
res=mod.fit()
res.summary()
Out[16]:
OLS Regression Results
Dep. Variable: Lev R-squared: 0.845
Model: OLS Adj. R-squared: 0.843
Method: Least Squares F-statistic: 555.9
Date: Tue, 01 Dec 2020 Prob (F-statistic): 4.43e-43
Time: 12:41:41 Log-Likelihood: 237.71
No. Observations: 104 AIC: -471.4
Df Residuals: 102 BIC: -466.1
Df Model: 1
Covariance Type: nonrobust
coef std err t P>|t| [0.025 0.975]
const 0.0121 0.006 1.966 0.052 -0.000 0.024
LevLag 0.9164 0.039 23.577 0.000 0.839 0.994
Omnibus: 34.854 Durbin-Watson: 1.921
Prob(Omnibus): 0.000 Jarque-Bera (JB): 101.621
Skew: 1.148 Prob(JB): 8.58e-23
Kurtosis: 7.264 Cond. No. 16.3


Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

Now we need to convert those coefficients into the objects we are trying to estimate. Recall our model:$$\left(\frac{D}{V}\right)_{t+1} = \gamma \left(\frac{D}{V}\right)^* + (1-\gamma) \left(\frac{D}{V}\right)_{t} + \epsilon_{t+1}.$$ To get $\gamma$ we simply compute one minus the coefficient on lagged leverage, and we can then back out $\left(\frac{D}{V}\right)^*$ from the fact that the constant is an estimate of $\gamma \left(\frac{D}{V}\right)^*$. The python algebra below gives: $$\left[\gamma,\left(\frac{D}{V}\right)^*\right] \approx [0.084,0.145].$$

In [17]:
gamma=1-res.params['LevLag'] # gamma is one minus the coefficient on lagged leverage
DtoVstar=res.params['const']/gamma # the constant estimates gamma*(D/V)*, so divide by gamma
print('gamma estimates to %.3f or so' %gamma)
print('DtoVstar estimates to %.3f or so' %DtoVstar)
gamma estimates to 0.084 or so
DtoVstar estimates to 0.145 or so
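
One common way to interpret a speed of adjustment like this (an aside, not part of the original analysis) is the implied half-life of a deviation from target, $\ln(0.5)/\ln(1-\gamma)$, which here comes out to roughly 8 quarters:

half_life = np.log(0.5) / np.log(1 - gamma)   # quarters needed to close half of any gap to the target
print('implied half-life is about %.1f quarters' % half_life)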

Now we perform a Dickey-Fuller test of the null hypothesis H0: unit root. The default specification in the module below (a regression with a constant) is essentially the model we ran, so there is no need to specify options; it's all baked in. We get a high p-value: we cannot reject the hypothesis that $\gamma=0$ in favor of the alternative that $\gamma>0$ with any sort of confidence, which is hardly surprising given the graph above.

In [18]:
from statsmodels.tsa.stattools import adfuller
adf_test = adfuller(df['Lev'])

# print(adf_test[0])
print('The p-value of H0 is %.5f' %adf_test[1])
The p-value of H0 is 0.22468
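
For completeness, adfuller also returns the test statistic and the critical values (elements 0 and 4 of the returned tuple); a minimal sketch to display them:

print('ADF test statistic: %.3f' % adf_test[0])
for level, cv in adf_test[4].items():          # critical values at the 1%, 5% and 10% levels
    print('  %s critical value: %.3f' % (level, cv))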

What if we exclude the latest leverage surge? Once we restrict the analysis to data through 2017, we can reject the hypothesis that $\gamma=0$ with high confidence. Slicing the data until we get the p-value we want is data-mining of the worst kind, but the result is consistent with what we saw in the chart above, namely the idea that IBM held its leverage in check until 2017, when something clearly changed.

In [19]:
df['DATE'] = pd.to_datetime(df['DATE'])
mask = (df['DATE'] > '1994-1-1') & (df['DATE'] <= '2017-12-31') # boolean mask keeping observations through 2017

adf_test2=adfuller(df['Lev'].loc[mask])

print('The p-value of H0 is now %.5f' %adf_test2[1])
The p-value of H0 is now 0.00024