This notebook computes steady states for the bank-only economy and for the economy with both banks and direct lenders in BMQ. It also computes transitions from the first steady state to the second. Fantastic research assistance from Jiahui Nui, MSFE student at the University of Wisconsin -- Madison, is gratefully acknowledged. Any and all remaining coding errors are the authors'.
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import expm
from scipy.stats import lognorm
from scipy.optimize import fsolve
from scipy.integrate import quad
import scipy.optimize as opt
import pandas as pd
import math
# declare global parameters
rho=0.05 # rate of time preference
w=1.97 # wage rate for borrowers
y=3 # project output when successful
sigma=0.1 # project death rates
rL=0.05 # bank funding cost
rH=0.055 # direct lender funding cost
lam=4.8 # verification process completion rate
c=1.23 # fixed effort cost of approaching a lender
# we also need a distribution for project success rate
# We will start with a log-normal distribution truncated to [0, 1]
loc=-4.43 # location of the distribution
# (Be very careful here: this is not the same as the loc parameter in scipy!
# Scipy is confusing in this respect; the log-scale location is best handled via the scale parameter, see below)
disp=2 # dispersion of the distribution
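# Quick sanity check on the scipy parameterization (a minimal sketch, not part of the model):
# with the log-scale location passed through scale=np.exp(loc), the median of the untruncated
# lognormal should equal np.exp(loc), since a lognormal's median is its scale parameter.
assert np.isclose(lognorm.median(disp, loc=0, scale=np.exp(loc)), np.exp(loc))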
# let's plot the histogram for a good visual
fig,ax=plt.subplots(1,1)
s=disp;
scale=np.exp(loc)
trunc=lognorm.cdf(1,s,0, scale);
x = np.linspace(0,1, 1000)
ax.plot(x, 1/trunc*lognorm.pdf(x, s,0,scale),
'r-', lw=2, alpha=0.5, label='lognorm pdf')
# plt.title("Distribution of project quality (p) at birth")
plt.xlabel(r'$p$')
plt.ylabel('density')
plt.show()
# now we compute the bank only steady state
# first find the threshold quality level
f = lambda p: lam/(lam+rho+sigma)*(p *(y-rL/p*(1+sigma/rho))/(sigma+rho) +(1-p)*w/(sigma+rho)) - c -w/(sigma+rho)
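# In words, pbar is the project quality at which a newborn is indifferent between
# approaching a bank and remaining a wage earner; the code solves
#   lam/(lam+rho+sigma) * [ p*(y - (rL/p)*(1+sigma/rho))/(sigma+rho) + (1-p)*w/(sigma+rho) ] - c
#     = w/(sigma+rho),
# i.e. the expected value of seeking bank funding (discounted for the verification delay and
# net of the fixed effort cost c) equals the discounted value of earning the wage w.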
pbar=fsolve(f, 0.5)[0] # fsolve returns an array; keep the scalar root
pbar=min(pbar,0.995) # cap the cutoff to avoid problems near the upper support bound
# compute the mass of active projects
def pdensity(x):
return 1/trunc*lognorm.pdf(x, s,0,scale)
deltaB, err = quad(pdensity,pbar, 1) # call quad to integrate f from pbar to 1
#little checkeroo along the way
print("The following number should be 1 if truncation performed correctly: %.2f" %quad(pdensity,0, 1)[0])
print("\n")
# now compute the success rate of active projects
def xpdensityB(x):
return x*pdensity(x)/deltaB
etaB, err = quad(xpdensityB,pbar, 1) # call quad to integrate xpdensity from pbar to 1
# now compute the average producer income
def incB(x):
return (y-rL*(1+sigma/rho)/x)*pdensity(x)/deltaB
netincomeB, err = quad(incB,pbar, 1) # call quad to integrate incB from pbar to 1
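# To summarize the steady-state objects just computed (restating the integrals above):
#   deltaB     = integral_{pbar}^{1} phi(p) dp                      fraction of newborn projects above the cutoff
#   etaB       = (1/deltaB) * integral_{pbar}^{1} p phi(p) dp       average success rate among them
#   netincomeB = (1/deltaB) * integral_{pbar}^{1} (y - (rL/p)*(1+sigma/rho)) phi(p) dp
# where phi is the truncated lognormal density pdensity.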
# now we have all we need to compute the transition/intensity matrix (this is where Jiahui's preliminary work comes in super handy)
piB= np.array([[-sigma, sigma * deltaB, sigma*(1-deltaB)],
[lam*etaB, -lam -sigma *(1-deltaB), lam*(1-etaB)+ sigma*(1-deltaB)],
[0, sigma * deltaB, -sigma * deltaB]]) # intensity (generator) matrix of the Markov chain across states
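# Optional cross-check (a minimal sketch, not needed for the results below): each row of an
# intensity matrix must sum to zero, and the invariant distribution can also be read off
# directly as the left null vector of piB, without propagating the chain forward in time.
assert np.allclose(piB.sum(axis=1), 0)
eigvals, eigvecs = np.linalg.eig(piB.T)            # left eigenvectors of piB
stat = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
stat = stat / stat.sum()                           # normalize to a probability vector
print('Invariant distribution from the null space of piB:', stat)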
# and now, finally, we can compute transitions to the one and only bank-only steady state
# starting from a distribution when all projects are abandoned
# initial distributions do not matter much
# Assume initial conditions
mu_0 = np.array([0, 0, 1]) # Initial distribution of states
# states order: funded, in-verification, abandoned
# Define time interval
t_start = 0
t_end = 40
num_points = 1000
dt = (t_end - t_start) / num_points # our practical definition of an instant
# Function computing mu_t at each time point
def compute_mu_t():
# Initialize mu_t values
mu_t_values = [mu_0]
# Propagate the distribution exactly using the matrix exponential of the generator
for i in range(1, num_points):
t = i * dt
# For a time-homogeneous Markov chain the distribution evolves as mu_t = mu_0 @ expm(piB * t):
# the matrix exponential of the intensity matrix times t gives the exact transition
# probabilities over an interval of length t, so no time-stepping approximation is needed.
# Compute the transition matrix at time t
transition_matrix = expm(piB * t)
# Compute mu_t using transition matrix and initial distribution
mu_t = np.dot(mu_0, transition_matrix)
# Append mu_t to the list
mu_t_values.append(mu_t)
return np.array(mu_t_values)
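# An equivalent (and cheaper) way to propagate the distribution, shown here only as a sketch:
# because the chain is time homogeneous, expm(piB * dt) can be computed once and applied
# repeatedly, instead of recomputing expm(piB * t) at every time point.
def compute_mu_t_incremental():
    step = expm(piB * dt)                 # one-instant transition matrix
    mu_t_values = [mu_0]
    for _ in range(1, num_points):
        mu_t_values.append(mu_t_values[-1] @ step)
    return np.array(mu_t_values)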
# Compute mu_t at each time point
mu_t_values = compute_mu_t()
# display policy functions
print('Policy functions in steady state \n')
print('The project quality cutoff is pbar=%.2f' %pbar)
print('The fraction of newborns that seek bank funding is deltaB=%.2f' %deltaB)
print('The fraction of activated projects that succeed is etaB=%.2f' %etaB)
print('The charge off rate is %.2f' %(100*mu_t_values[999, 1]*lam*(1-etaB)*
(1-np.exp(-(sigma+lam)))/(mu_t_values[999, 0]*(lam+sigma))))
print('The producer premium over wages is %.2f' %(100*(netincomeB)/w))
np.set_printoptions(precision=4)
print('The invariant distribution of [funded,in verification, abandoned] is ', mu_t_values[999])
# Plot time evolution of mu_t
plt.figure(figsize=(10, 6))
plt.plot(np.linspace(t_start, t_end, num_points), mu_t_values[:, 0], label='Funded')
plt.plot(np.linspace(t_start, t_end, num_points), mu_t_values[:, 1], label='Being verified')
# plt.plot(np.linspace(t_start, t_end, num_points), mu_t_values[:, 2], label='Abandoned')
plt.xlabel('Time in years')
plt.ylabel(r'Mass under $\nu_t$')
# plt.title('Time Evolution of $\nu_t$')
plt.legend()
plt.show()
The following number should be 1 if truncation performed correctly: 1.00

Policy functions in steady state

The project quality cutoff is pbar=0.39
The fraction of newborns that seek bank funding is deltaB=0.03
The fraction of activated projects that succeed is etaB=0.60
The charge off rate is 1.36
The producer premium over wages is 138.74
The invariant distribution of [funded,in verification, abandoned] is  [1.5953e-02 5.6198e-04 9.8348e-01]
# first we find the cutoff below which borrowers prefer direct lenders to banks
# guess a qD
qD=rH/etaB*(1+sigma/rho) # this is what the price would be if the success rate of direct lenders was the same as that of banks
# and now we look for the fixed point we need
# unlike most papers in econ, we won't use bisection here, because it could easily converge to the wrong root
# instead we do an exhaustive search
# a bit expensive, but the only way to get the right answer safely
gridsize=200
qtry=np.linspace(rL/1*(1+sigma/rho),4*rL/1*(1+sigma/rho), gridsize) # since rL<rH, the lower bound is below any price at which direct lenders can break even, so it is low enough
# whether the upper bound is high enough will need to be found out below
# recall that no solution at all may exist
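# For each candidate price qD, the loop below recovers, in order:
#   pbar(qD): the quality at which a borrower is indifferent between a bank loan and a
#             direct loan at price qD (root of fB below),
#   plow(qD): the quality at which a direct loan is just worth the effort cost c (root of f below),
#   deltaD, etaD: the mass and average success rate of projects funded directly,
#             i.e. with quality in [plow, pbar],
# and then checks the direct lenders' break-even condition
#   gap(qD) = qD - (rH/etaD)*(1+sigma/rho),
# whose zero (if any) is the equilibrium direct lending price.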
gaptry=np.zeros(gridsize)
etaDtry=np.zeros(gridsize)
etaBtry=np.zeros(gridsize)
deltaDtry=np.zeros(gridsize)
deltaBtry=np.zeros(gridsize)
plowtry=np.zeros(gridsize)
pbartry=np.zeros(gridsize)
netincomeBtry=np.zeros(gridsize)
# ready for recording purposes
# we could use list comprehension below
# but writing out an explicit loop helps clarity
gap=-5
i=0
# while gap<0: # use this opening if pressed for time
while i<gridsize:
qD=qtry[i]
# find the bank threshold quality level
fB = lambda p: lam/(lam+rho+sigma)*(p *(y-rL/p*(1+sigma/rho))/(sigma+rho) +(1-p)*w/(sigma+rho)) \
-(p*(y-qD)/(sigma+rho)+(1-p)*w/(sigma+rho))
pbar=fsolve(fB, 0.5)[0] # keep the scalar root
pbar=min(pbar,0.995) # to avoid issues; the graphs below will show whether we have to worry about it or not
# now compute the average producer income
#fig,ax=plt.subplots(1,1)
#x = np.linspace(0.1,1, 100)
#ax.plot(x, f(x),'r-', lw=2, alpha=0.5)
#plt.show()
# we also need the threshold past which people choose not to seek funding
f = lambda p: (p*(y-qD)/(sigma+rho)+(1-p)*w/(sigma+rho))-c - w/(sigma+rho)
plow=fsolve(f, 0.5)[0] # keep the scalar root
plow=min(plow,pbar) # if plow>pbar, no one prefers direct lending and the direct interval is empty
#now we measure the fraction of projects that are between plow and pbar
deltaD, err = quad(pdensity,plow, pbar)
#now we measure the fraction of projects funded directly that succeed
def xpdensityD(x):
return x*pdensity(x)/deltaD # note: normalized by deltaD, the mass of projects funded directly
etaD, err = quad(xpdensityD,plow, pbar)
#now we measure the fraction of projects that are above pbar
deltaB, err = quad(pdensity,pbar, 1)
def incB(x):
return (y-rL*(1+sigma/rho)/x)*pdensity(x)/deltaB
netincomeBtry[i], err = quad(incB,pbar, 1) # call quad to integrate incB from pbar to 1
#now we measure the fraction of bank funded projects that succeed
def xpdensityB(x):
return x*pdensity(x)/deltaB
etaB, err = quad(xpdensityB,pbar, 1)
# now we check the break-even condition
# I already selected parameters so that we are almost at equilibrium
gap=qD-rH/etaD*(1+sigma/rho)
#print("at iteration " + str(i) + " the gap is:", gap)
#print("plow is:", plow)
#print("pbar is:", pbar)
#print("etaD is:", etaD)
#print("\n")
# record the mapping so it can be displayed below
etaDtry[i]=etaD
etaBtry[i]=etaB
deltaDtry[i]=deltaD
deltaBtry[i]=deltaB
plowtry[i]=plow
pbartry[i]=pbar
gaptry[i]=gap
i +=1
if i==gridsize:
gap=1
# now we display
fig, axs=plt.subplots(2, 2,figsize=(12, 8))
axs[0,0].plot(qtry,plowtry,label='lower')
axs[0,0].plot(qtry,pbartry,label='upper')
axs[0,0].legend()
axs[0,1].plot(qtry,etaDtry,label='direct lenders')
axs[0,1].plot(qtry,etaBtry,label='banks')
axs[0,1].legend()
axs[1,0].plot(qtry,deltaDtry,label='direct lenders')
axs[1,0].plot(qtry,deltaBtry,label='banks')
axs[1,0].legend()
axs[1,1].plot(qtry,gaptry)
axs[0,0].set_title('Project quality thresholds')
axs[0,1].set_title('Success rate')
axs[1,0].set_title('Mass of projects')
axs[1,1].set_title(r'Fixed point gap: $q_D-\frac{r_H}{\eta_D} (1+\frac{\sigma}{\rho})$')
axs[1,1].axhline(y=0, color='r', linestyle='dotted')
# axs[0,0].set_xlabel('qD')
# axs[0,1].set_xlabel('qD')
axs[1,0].set_xlabel(r'$q_D$')
axs[1,1].set_xlabel(r'$q_D$')
axs[0,0].set_xlim([0.15,0.56])
axs[0,1].set_xlim([0.15,0.56])
axs[1,0].set_xlim([0.15,0.56])
axs[1,1].set_xlim([0.15,0.56])
RuntimeWarning: divide by zero encountered in xpdensityD (deltaD is zero at some grid points where no projects choose direct funding).
IntegrationWarning: extremely bad integrand behavior at some points of the integration interval in quad(xpdensityD, plow, pbar).
# first we find the middle equilibrium in the above chart
# the only interesting one for our purposes
iout=-1000
for i in range(1, gridsize):
    if gaptry[i]*gaptry[i-1]<0 and pbartry[i]<0.995: # looks for the first crossing with a positive mass of bank borrowers
        iout=i
        break
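# Optional refinement (a sketch, not used below): once the sign change is bracketed between
# grid points iout-1 and iout, the crossing can be located more precisely by linear
# interpolation of the gap rather than taking the grid point itself. qD_star is a
# hypothetical name introduced only for this illustration.
if iout > 0:
    qD_star = qtry[iout-1] - gaptry[iout-1]*(qtry[iout]-qtry[iout-1])/(gaptry[iout]-gaptry[iout-1])
    print('Linearly interpolated equilibrium price qD_star = %.4f' % qD_star)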
deltaD=deltaDtry[iout]
deltaB=deltaBtry[iout]
etaD=etaDtry[iout]
etaB=etaBtry[iout]
netincomeB=netincomeBtry[iout]
# now get the invariant distribution from the bank-only case
mubank=mu_t_values[999]
mu_0D = np.array([0, mubank[0], mubank[1], mubank[2]])
# initialize the distribution now that direct lenders are part of the economy
# now we have all we need to compute the transition/intensity matrix
# States are ordered as follows: funded directly, bank funded, in verification, abandoned
piD= np.array([[-sigma*(1-deltaD), 0, sigma*deltaB, sigma*(1-deltaB-deltaD)],
[sigma*deltaD, -sigma, sigma*deltaB, sigma*(1-deltaB-deltaD)],
[sigma*deltaD, lam * etaB, -lam -sigma * (1-deltaB), sigma*(1-deltaB-deltaD)+lam*(1-etaB)],
[sigma*deltaD, 0, sigma*deltaB ,-sigma*(deltaB+deltaD)]]) # intensity (generator) matrix of the Markov chain across states
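# Same sanity check as in the bank-only case (a minimal sketch): the rows of a generator must sum to zero.
assert np.allclose(piD.sum(axis=1), 0)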
# Define time interval
t_start = 0
t_end = 60
num_points = 1000
dt = (t_end - t_start) / num_points # our practical definition of an instant
# Function computing mu_t at each time point
def compute_mu_tD():
# Initialize mu_t values
mu_t_valuesD = [mu_0D]
# Propagate the distribution exactly using the matrix exponential of the generator
for i in range(1, num_points):
t = i * dt
# As before, the distribution of the time-homogeneous chain evolves as mu_t = mu_0D @ expm(piD * t),
# so the matrix exponential gives the exact transition probabilities over an interval of length t.
# Compute the transition matrix at time t
transition_matrix = expm(piD * t)
# Compute mu_t using transition matrix and initial distribution
mu_t = np.dot(mu_0D, transition_matrix)
# Append mu_t to the list
mu_t_valuesD.append(mu_t)
return np.array(mu_t_valuesD)
# Compute mu_t at each time point
mu_t_valuesD = compute_mu_tD()
# Compute average producer income in steady state
ProducerIncome=(mu_t_valuesD[999, 0]*(y-qtry[iout])+mu_t_valuesD[999, 1]*netincomeB)/(mu_t_valuesD[999, 0]+mu_t_valuesD[999, 1])
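# ProducerIncome is the cross-sectional average flow income of active producers:
# directly funded producers keep y - qD (qtry[iout] is the equilibrium price found above),
# bank funded producers earn netincomeB on average, and the weights are the steady-state masses.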
# Plot time evolution of mu_t
plt.figure(figsize=(10, 6))
plt.plot(np.linspace(t_start, t_end, num_points), mu_t_valuesD[:, 0], label='Funded directly')
plt.plot(np.linspace(t_start, t_end, num_points), mu_t_valuesD[:, 1], label='Bank funded')
plt.plot(np.linspace(t_start, t_end, num_points), mu_t_valuesD[:, 2], label='Being verified')
# plt.plot(np.linspace(t_start, t_end, num_points), mu_t_valuesD[:, 3], label='Abandoned')
plt.xlabel('Time in years')
plt.ylabel('Mass')
# plt.title('Time Evolution of mu_t')
plt.legend()
plt.show()
# Now we chart aggregate outcomes
fig, axs=plt.subplots(1, 2,figsize=(12, 8))
axs[0].plot(np.linspace(t_start, t_end, num_points),(mu_t_valuesD[:,0]+mu_t_valuesD[:,1])*y)
# axs[0].set_xlabel('time')
axs[0].set_title('Value added')
axs[0].set_xlabel('Time in years')
axs[1].plot(np.linspace(t_start, t_end, num_points),mu_t_valuesD[:,0]/((mu_t_valuesD[:,0]+mu_t_valuesD[:,1])*y),label='direct loans')
axs[1].plot(np.linspace(t_start, t_end, num_points),mu_t_valuesD[:,1]/((mu_t_valuesD[:,0]+mu_t_valuesD[:,1])*y),label='bank loans')
axs[1].set_xlabel('Time in years')
axs[1].set_title('Lending as a fraction of value-added')
axs[1].legend()
#axs[2].plot(np.linspace(t_start, t_end, num_points),(100*mu_t_valuesD[:, 2]*lam*(1-etaB)*
# (1-np.exp(-(sigma+lam)))/(mu_t_valuesD[:, 1]*(lam+sigma))))
# axs[2].set_xlabel('time')
# axs[2].set_title('Bank charge-off rates')
# axs[2].legend()
#defaultD=(1-etaD)
#defaultB=(1-etaB)
#axs[1,0].plot(np.linspace(t_start, t_end, num_points),(mu_t_valuesD[:,0]*defaultD+mu_t_valuesD[:,0]*defaultB)/((mu_t_valuesD[:,0]+mu_t_valuesD[:,1])))
# axs[1,1].plot(np.linspace(t_start, t_end, num_points),mu_t_valuesD[:,1]/((mu_t_valuesD[:,0]+mu_t_valuesD[:,1])*y),label='bank loans')
#axs[1,0].set_xlabel('time')
#axs[1,0].set_title('AggregateDefault rates')
print('Policy functions and key moments in steady state \n')
print('The bank quality cutoff is pbar=%.2f' %pbartry[iout])
print('The direct lender quality cutoff is plow=%.2f' %plowtry[iout])
print('The fraction of newborns that seek bank funding is deltaB=%.2f' %deltaB)
print('The fraction of newborns that seek direct funding is deltaD=%.2f' %deltaD)
print('The fraction of bank funded projects that succeed is etaB=%.2f' %etaB)
print('The fraction of direct lender funded projects that succeed is etaD=%.2f' %etaD)
print('The charge off rate among banks is %.2f' %(100*mu_t_valuesD[999, 2]*lam*(1-etaB)*
(1-np.exp(-(sigma+lam)))/(mu_t_valuesD[999, 1]*(lam+sigma))))
print('The producer premium over wages is %.2f' %(100*(ProducerIncome)/w))
print('The fraction of producers in the labor force is %.2f' %(100*(mu_t_valuesD[999,0]+mu_t_valuesD[999,1])))
np.set_printoptions(precision=4)
print('The invariant distribution of [funded directly, bank funded, in verification, abandoned] is ', mu_t_valuesD[999])
Policy functions and key moments in steady state

The bank quality cutoff is pbar=0.53
The direct lender quality cutoff is plow=0.30
The fraction of newborns that seek bank funding is deltaB=0.02
The fraction of newborns that seek direct funding is deltaD=0.02
The fraction of bank funded projects that succeed is etaB=0.72
The fraction of direct lender funded projects that succeed is etaD=0.40
The charge off rate among banks is 0.80
The producer premium over wages is 134.25
The fraction of producers in the labor force is 3.57
The invariant distribution of [funded directly, bank funded, in verification, abandoned] is  [2.4726e-02 1.0954e-02 3.1799e-04 9.6400e-01]