By Stefan Stoyanov, Solutions Manager, Experian Decision Analytics
Both linear and logistic regression models were built in order to determine which of them provides the best estimate of the observed LGD.
• Linear regression (continuous dependent variable): in order to meet the statistical requirements of OLS regression, an attempt was made to transform the LGD to normality, or at least to symmetry, by using a Box-Cox type of transformation function.
• Logistic regression: the LGD was transformed to a binary dependent variable using two methods:
o Uniform random number: if LGD > random number then LGD_Binary = 1 (Bads); else LGD_Binary = 0 (Goods); see the sketch after this list
data binary_lgd;                    /* output dataset name assumed */
    set import_data;
    /* expand each account into a weighted "bad"/"good" pair */
    bin_lgd = 1; _freq = lgd;       /* "bad" record, weighted by the observed LGD */
    output;
    bin_lgd = 0; _freq = 1 - lgd;   /* "good" record, weighted by 1 - LGD */
    output;
run;
• Manual cut-off: if LGD > 0.2 then LGD_Binary = 1 (Bads); else LGD_Binary = 0 (Goods)
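A minimal sketch of the uniform random number transformation in SAS, reusing the import_data and lgd names from the data step above:

data lgd_random;
    set import_data;
    if _n_ = 1 then call streaminit(12345);   /* fix the seed for reproducibility */
    u = rand('uniform');                      /* one uniform draw per account */
    lgd_binary = (lgd > u);                   /* 1 = Bad, 0 = Good */
run;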
Power transformation of the LGD distribution
• The LGD usually has a highly non-normal distribution, often with a U shape and spikes at the two tails of the distribution.
• However, the basic properties of least squares regression do not require normality.
• Non-normality does not affect the estimation of the regression parameters. The least squares estimates are still BLUE (best linear unbiased estimates) if the other regression assumptions are met.
• Non-normality does affect the tests of significance and the confidence interval estimates of the regression parameters.
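Such a Box-Cox search could, for example, be run with PROC TRANSREG; a sketch, assuming hypothetical predictors x1-x3 and noting that Box-Cox requires a strictly positive response (observed LGDs of exactly zero would need a small shift):

data shifted;
    set import_data;
    lgd_shifted = lgd + 0.001;    /* shift assumed; Box-Cox needs a positive response */
run;

proc transreg data=shifted;
    model boxcox(lgd_shifted / lambda=-2 to 2 by 0.25) = identity(x1 x2 x3);
run;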
Binary transformation of the LGD using uniform random numbers
Theoretical beta distribution parameters: the estimation of the alpha (a) and beta (b) parameters of the theoretical beta distribution was based on the first two moments of the observed LGD distribution.
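Matching the first two moments of a Beta(a, b) distribution to the sample mean m and variance v gives a = m(m(1 − m)/v − 1) and b = (1 − m)(m(1 − m)/v − 1). A minimal sketch of this moment matching:

proc means data=import_data noprint;
    var lgd;
    output out=mom mean=m var=v;
run;

data beta_params;
    set mom;
    k = m * (1 - m) / v - 1;    /* positive whenever v < m*(1-m) */
    a = m * k;                  /* alpha */
    b = (1 - m) * k;            /* beta  */
run;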
Functional calibration of the logistic regression scores to estimated LGD
o The OLS regression model provides direct LGD estimates, whereas the logistic regression models provide indirect LGD estimates. Hence, it is necessary to calibrate the logistic regression scores to direct LGD estimates in order to be able to compare the two types of models.
o After obtaining the functional relationship between the logistic regression scores and the LGD, it is possible to assign an estimated LGD to each individual score.
o Due to the small sample size, all data were used for model development. The models were validated using bootstrap techniques.
o The following calibration tests were used to validate the LGD models (see the sketch after this list):
– Spearman's rank correlation
– Mean squared error
– R-square
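These three tests can be computed with standard procedures; a sketch, assuming a hypothetical dataset scored holding the observed lgd and the calibrated estimate lgd_hat:

/* Spearman's rank correlation between observed and estimated LGD */
proc corr data=scored spearman;
    var lgd lgd_hat;
run;

/* mean squared error */
data err;
    set scored;
    sq_err = (lgd - lgd_hat)**2;
run;
proc means data=err mean;
    var sq_err;                  /* the reported mean is the MSE */
run;

/* R-square of observed on estimated LGD */
proc reg data=scored;
    model lgd = lgd_hat;
run;
quit;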
Since the 1990s, most of the large international banks have set up comprehensive credit risk management systems, in particular in order to measure and monitor the risks they hold on each business line. One of the goals of these systems is to allocate capital to each business line and to compute the overall capital of the bank. All these techniques are known under the generic acronym RAROC (Risk Adjusted Return On Capital); implicitly, this methodology focuses on economically based estimates of credit risk, taking into account both the individual risks and the portfolio view of the bank. The aim of the RAROC methodology is twofold:
1. Risk management: in financial theory, the bank aims at reaching its optimal capital structure, finding the proportion of equity to assets that minimizes the cost of funding. The RAROC methodology is used for determining the overall capital requirement of the bank and the contribution of each business line to the total risk of the bank. This process is called capital allocation.
2. Performance measurement: the RAROC framework computes the profitability of each transaction or business line for the shareholder. The performance measurement is the result of the interplay between revenues on the one hand and risk components on the other.
Consider the case of a AA rated bank that wants to capitalize its portfolio in a manner consistent with its AA rating target. This amount of capital is of course driven by all the stand-alone risks included in the portfolio, but also benefits from the internal diversification of this portfolio. In this sense, this equity capital requirement is called economic capital.
Whereas the expected loss is the average loss that the bank expects to incur on its portfolio, the economic capital refers to the unanticipated losses that occur in extreme situations or market conditions. The economic capital is the cushion required above the expected loss for the bank to remain solvent in the event of extreme losses on the bank's portfolio.
There are many available criteria for defining economic capital, but generally, economic capital is defined as the amount required to cushion the portfolio up to a given confidence level. The required confidence level depends on the target rating of the bank. For instance, if the bank's portfolio has an average maturity of 2.5 years, the confidence level is around 99.9% for a AA- target rating. From a mathematical viewpoint, the economic capital is linked to the Credit Value at Risk (CVaR) of the portfolio and to the expected loss (EL) of the portfolio by the relationship:

EC = VaR(99.9%) − EL
The portfolio loss distribution is generally obtained
by Monte-Carlo simulations.
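Given such a simulated loss distribution, the EC calculation reduces to a percentile and a mean; a sketch, assuming a hypothetical dataset sim_losses with one simulated portfolio loss per row in a variable loss:

proc univariate data=sim_losses noprint;
    var loss;
    output out=ec_calc mean=el pctlpts=99.9 pctlpre=var_;
run;

data ec_calc;
    set ec_calc;
    ec = var_99_9 - el;    /* EC = credit VaR at 99.9% minus expected loss */
run;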
Portfolio managers make the distinction between
marginal capital and incremental capital of a transaction. The incremental
capital is the additional amount of equity capital required when the
transaction is added to the portfolio, whereas the marginal capital is equal to
the contribution of the transaction to the total capital once this
transaction is included inside the portfolio.
To make this more precise, we call P the reference portfolio and Mx a marginal transaction with nominal amount x. Finally, we call EC(A) the economic capital of portfolio A. The incremental capital of the transaction Mx is:

ECi = EC(P + Mx) − EC(P)
An attractive property of the marginal capital compared to the incremental capital is that the sum of the marginal capital over all the transactions of the portfolio is equal to the economic capital of the portfolio. This property leads to an easy capital allocation, on condition that we are able to compute the marginal capital contributions accurately.
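Incremental capital, in particular, can be obtained by running the capital calculation twice; a sketch, assuming hypothetical simulated loss datasets sim_p (portfolio P) and sim_pmx (portfolio P + Mx), each with a loss variable:

%macro ec(data=, out=);
proc univariate data=&data noprint;
    var loss;
    output out=&out mean=el pctlpts=99.9 pctlpre=var_;
run;
data &out;
    set &out;
    ec = var_99_9 - el;    /* EC = VaR(99.9%) - EL */
run;
%mend ec;

%ec(data=sim_p,   out=ec_p);
%ec(data=sim_pmx, out=ec_pmx);

data incremental;
    merge ec_p  (keep=ec rename=(ec=ec_p))
          ec_pmx(keep=ec rename=(ec=ec_pmx));
    ec_incr = ec_pmx - ec_p;    /* ECi = EC(P + Mx) - EC(P) */
run;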
What the bank needs is to include risk adjustments into the traditional performance measures. There are many ways of doing that. One of them is to introduce risk elements into the traditional Return On Capital (ROC) ratio, defined by:

ROC = revenues / allocated capital
Allocated capital is the regulatory capital that the bank has to allocate to the transaction of interest. Both revenues and allocated capital do not take any risk sensitivity into account. There are several possibilities to introduce risk sensitivity in this equation, in the numerator and in the denominator. Depending on where we include a risk adjustment, we are led to different ratios such as RAROC, RORAC and RARORAC (risk-adjusted performance measures, RAPM). The most popular performance measure is the Risk Adjusted Return On Risk Adjusted Capital (RARORAC), obtained by correcting the revenues by the anticipated losses on the transaction, and by replacing the allocated capital by the marginal economic capital of the transaction:
RORAC = (financial income − financial costs) / scaled economic capital allocation
RAROC = (financial income − financial costs − expected losses) / scaled economic capital allocation
The management may also be interested in the value
created by a marginal transaction or a business line. EVA (Economic
Value Added) is the relevant performance measure for value creation. It is
equal to the revenues less the cost of capital.
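The ratios above and EVA can be computed per transaction in a single pass; a sketch with hypothetical variable names for income, costs, expected loss, allocated regulatory capital, economic capital and a hurdle rate standing in for the cost of capital:

data rapm;
    set transactions;                         /* dataset name assumed */
    revenues = fin_income - fin_cost;
    roc    = revenues / alloc_capital;        /* no risk adjustment            */
    rorac  = revenues / econ_capital;         /* risk-adjusted denominator     */
    raroc  = (revenues - expected_loss) / econ_capital;   /* both sides adjusted */
    eva    = revenues - hurdle_rate * econ_capital;       /* revenues less cost of capital */
run;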
The main characteristic of this model is its reliance on financial ratios. A statistical technique is used in order to assign risk weights to several financial ratios that differentiate between defaulting and successful companies. For example, 22 financial ratios were tested while developing the Altman model (1968), which is widely used both in the academic literature and in practice.
A logit model is a popular statistical model, which is widely used for the measurement of PD for corporate customers, mainly for two reasons. First, the output of the logit model can be directly interpreted as the probability of default. Second, this model can be verified easily. Hence, the recommendation is to use a logit model.
The event of default must be clearly defined. Historically, the definition used for rating models was bankruptcy, as this information was readily available and models built on it are powerful predictors. However, the definition of default may also include delays in payments and other situations in which the bank does not receive full payment.
Ratios are calculated to standardize the available information. For example, the ratio "Earnings per Total Assets" makes it possible to compare the profitability of firms of different sizes. In addition to ratios that reflect different financial aspects of the borrowers, dynamic ratios that compare current and past levels of particular balance sheet items can be extremely useful in predicting the event of default. Input ratios represent the most important credit risk factors (leverage, liquidity, productivity, turnover, level of activity, profitability, firm size, growth rates and leverage development); see the sketch below.
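A sketch of such a logit PD model, assuming a hypothetical dataset corporates with a binary default flag and a few of the ratio families listed above as inputs:

proc logistic data=corporates;
    model default(event='1') = leverage liquidity profitability;
    output out=scored_pd predicted=pd;    /* the prediction is directly the PD */
run;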
This is a classification model, which is used to decide which of the existing customers are in danger of defaulting in the near or medium-term future.
Behavior scoring models are derived from a retrospective statistical analysis of the credit performance of individual accounts. The purpose of the statistical analysis is to find the most predictive set of data elements that distinguish the good credit risks from the poor credit risks. Behavior scoring models evaluate the creditworthiness of existing customers. The output of behavior models is the probability that an ongoing account will be delinquent and/or written off and/or experience bankruptcy and/or be sent to a collection agency and/or exhibit some other type of derogatory payment behavior over a specified period of time (behavior probability). Behavior models are effective risk management tools and can be used to adjust credit limits and to decide on the marketing and operational strategy to be applied to each customer.
The extra performance variables in behavioral scoring systems include the following: the current balance owed by the account and various averages of this balance; the amount repaid by the account during the last month, six months, etc.; the amount of new credit extended; and the usage of credit facilities over similar periods. Other variables refer to the status of the account, for example, the number of times it has exceeded its credit limit, the number of dunning letters that have been sent, and the time that has passed since the last repayment was made. Thus, there can be a large number of similar performance variables with strong correlation. The statistical convention is to include only a few of these similar variables in the scoring system and to use only those that have the greatest impact.
A common definition of a bad account is an account that has missed three, possibly consecutive, months of payments during the outcome period.
A particular point of time is chosen as the observation point. A period preceding the observation point, for example the previous 12 to 18 months (minimum 6 months) is chosen as the performance period. The characteristics of performance during this period are used as explanatory variables. A point in time, for example, 12 months after the observation point is chosen as the outcome point. The customer is classified as good or bad depending on its status at the outcome point.
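A sketch of this good/bad flagging, assuming a hypothetical monthly dataset payments with account_id, a month counter, a missed_payment indicator and the observation point obs_month, and using the three-missed-payments definition above:

data window;
    set payments;                                       /* one row per account per month */
    where month > obs_month and month <= obs_month + 12;   /* 12-month outcome period */
run;

proc means data=window noprint nway;
    class account_id;
    var missed_payment;                 /* 1 if the payment was missed that month */
    output out=flags sum=n_missed;
run;

data flags;
    set flags;
    bad = (n_missed >= 3);              /* bad = three or more missed payments */
run;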
Factor models are a well-established technique from multivariate statistics, applied in credit risk models for identifying the underlying drivers of correlated defaults and for reducing the computational effort involved in calculating correlated losses. Assume we have two firms A and B which are positively correlated; for example, let A be DaimlerChrysler and let B stand for BMW. Then it is quite natural to explain the positive correlation between A and B by the correlation of A and B with an underlying factor: we could think of the automotive industry as an underlying factor having a significant impact on the economic future of the two companies. Of course, there are probably some more underlying factors driving the riskiness of A and B.
We additionally want the underlying factors to be interpretable, in order to identify the reasons why two companies experience a downturn or upturn at about the same time. For example, assume that the automotive industry comes under pressure. Then we can expect that companies A and B also come under pressure, because their fortune is tied to the automotive industry. The part of the volatility of a company's financial success (e.g., as incorporated in its asset value process) related to systematic factors like industries or countries is called the systematic risk of the firm. The part of the firm's asset volatility that cannot be explained by systematic influences is called the specific or idiosyncratic risk of the firm. For limited liability companies, default is expected to occur if the asset value (i.e., the value of the firm) is not sufficient to cover the firm's liabilities. Why should this be so? From the identity
Asset value = Value of equity + Value of liabilities
and the rule that equity holders receive the residual value of the firm, it follows that the value of equity is negative if the asset value is smaller than the value of liabilities. If you have something with negative value, and you can give it away at no cost, you are more than willing to do so. This is what equity holders are expected to do: they exercise the walk-away option that they have because of limited liability and leave the firm to the creditors. As the asset value is smaller than the value of liabilities, the creditors' claims are not fully covered, meaning that the firm is in default. The walk-away option can be priced with standard approaches from option pricing theory. This is why structural models are also called option-theoretic or contingent-claim models. Another common name is Merton models, because it was Robert C. Merton (1974) who first applied option theory to the problem of valuing a firm's liabilities in the presence of default and limited liability.
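A sketch of the resulting one-factor default model, with illustrative values for the factor loading (the systematic share of asset volatility), the unconditional PD, and the portfolio and scenario sizes:

data onefactor;
    call streaminit(2024);
    rho = 0.20;                           /* share of systematic (factor) risk  */
    pd  = 0.02;                           /* unconditional default probability  */
    c   = quantile('normal', pd);         /* default threshold on asset returns */
    do scenario = 1 to 10000;
        z = rand('normal');               /* systematic factor, e.g. automotive */
        defaults = 0;
        do firm = 1 to 100;
            eps = rand('normal');         /* idiosyncratic (specific) risk      */
            a   = sqrt(rho)*z + sqrt(1 - rho)*eps;   /* standardized asset return */
            if a < c then defaults = defaults + 1;   /* asset value below liabilities */
        end;
        output;
    end;
    keep scenario defaults;
run;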