Credit default swaps

In a credit default swap (CDS), the protection seller, the provider of credit protection,
receives a payment in return for the obligation to make a payment that is contingent
on the occurrence of a credit event for a reference entity. The size of the payment
reflects the decline in value of a reference asset issued by the reference entity. A
credit event is normally a payment default, bankruptcy or insolvency, failure to pay,
or receivership. It can also include a restructuring or a ratings downgrade. A
reference asset can be a loan, security, or any asset upon which a ‘dealer price’ can
be established. A dealer price is important because it allows both parties to a
transaction to observe the degree of loss in a credit instrument. In the absence of a
credit event, there is no obligation for the protection seller to make any payment,
and the seller collects what amounts to an option premium. Credit hedgers will
receive a payment only if a credit event occurs; they do not have any protection
against market value declines of the reference asset that occur without a credit event.
Figure 11.2 shows the obligations of the two parties in a CDS.

In the figure the protection buyer looks to reduce risk of exposure to XYZ. For
example, it may have a portfolio model that indicates that the exposure contributes
excessively to overall portfolio risk. It is important to understand, in a portfolio
context, that the XYZ exposure may well be a high-quality asset. A concentration in
any credit risky asset, regardless of quality, can pose unacceptable portfolio risk.
Hedging such exposures may represent a prudent strategy to reduce aggregate
portfolio risk.
The protection seller, on the other hand, may find the XYZ exposure helpful in
diversifying its own portfolio risks. Though each counterparty may have the same
qualitative view of the credit, their own aggregate exposure profiles may dictate
contrary actions.
If a credit event occurs, the protection seller must pay an amount as provided in
the underlying contract. There are two methods of settlement following a credit event:
(1) cash settlement; and (2) physical delivery of the reference asset at par value. The
reference asset typically represents a marketable obligation that participants in a
credit derivatives contract can observe to determine the loss suffered in the event of
default. For example, a default swap in which a bank hedges a loan exposure to a
company may designate a corporate bond from that same entity as the reference
asset. Upon default, the decline in value of the corporate bond should approximate
the loss in the value of the loan, if the protection buyer has carefully selected the
reference asset.
Cash-settled transactions involve a credit event payment (CEP) from the protection
seller to the protection buyer, and can work in two different ways. The terms of the
contract may call for a fixed dollar amount (i.e. a ‘binary’ payment). For example, the
contract may specify a credit event payment of 50% upon default; this figure is
negotiated and may, or may not, correspond to the expected recovery amount on the
asset. More commonly, however, a calculation agent determines the CEP. If the two
parties do not agree with the CEP determined by the calculation agent, then a dealer
poll determines the payment. The dealer poll is an auction process in which dealers
‘bid’ on the reference asset. Contract terms may call for five independent dealers to
bid, over a three-day period, 14 days after the credit event. The average price that
the dealers bid will reflect the market expectation of a recovery rate on the reference
asset. The protection seller then pays par value less the recovery rate. This amount
represents the estimate of loss on assuming exposure to the reference asset. In both
cases, binary payment or dealer poll, the obligation is cash-settled because the
protection seller pays cash to settle its obligation.
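The dealer-poll arithmetic described above can be sketched in a few lines. This is a minimal illustration, not market practice in full; the notional amount, the five dealer bids, and the function name are illustrative assumptions.

```python
# Sketch of a cash-settled credit event payment (CEP) via dealer poll.
# Dealer bids are quoted as fractions of par; all figures are illustrative.

def cash_settlement_payment(notional, dealer_bids):
    """Protection seller pays par less the average dealer bid (the recovery estimate)."""
    recovery = sum(dealer_bids) / len(dealer_bids)  # market's recovery-rate estimate
    return notional * (1.0 - recovery)

# Five independent dealers bid on the reference asset after the credit event.
bids = [0.42, 0.40, 0.45, 0.38, 0.41]  # average recovery of 41.2% of par
payment = cash_settlement_payment(10_000_000, bids)
print(payment)  # par value less the recovery rate, on a $10 million notional
```

The payment is simply notional times one minus the recovery rate, which is why careful selection of the reference asset matters: the dealer-poll recovery should track the loss on the hedged exposure.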
In the second method of settlement, a physical settlement, the protection buyer
may deliver the reference asset, or other asset specified in the contract, to the
protection seller at par value. Since the buyer collects the par value for the defaulted
asset, if it delivers its underlying exposure, it suffers no credit loss.
CDSs allow the protection seller to gain exposure to a reference obligor, but absent
a credit event, do not involve a funding requirement. In this respect, CDSs resemble
and are economically similar to standby letters of credit, a traditional bank credit
product.
Credit default swaps may contain a materiality threshold. The purpose of this is to
avoid credit event payments for technical defaults that do not have a significant
market impact. Such contracts specify that the protection seller make a credit event
payment to the protection buyer only if a credit event has occurred and the price of
the reference asset has fallen by some specified amount. Thus, a payment is conditional upon a
specified level of value impairment, as well as a default event. Given a default, a
payment occurs only if the value change satisfies the threshold condition.
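The two-part trigger described above reduces to a simple conjunction. A minimal sketch, with an assumed 5% decline threshold and illustrative prices:

```python
# Sketch of a materiality threshold: the credit event payment is due only
# when a credit event has occurred AND the reference asset's price decline
# satisfies the threshold. All figures are illustrative assumptions.

def payment_due(credit_event, initial_price, current_price, threshold):
    """True only when both the default trigger and the value test are met."""
    decline = (initial_price - current_price) / initial_price
    return credit_event and decline >= threshold

print(payment_due(True, 100.0, 96.0, 0.05))   # default, but only a 4% drop
print(payment_due(True, 100.0, 80.0, 0.05))   # default and a 20% drop
print(payment_due(False, 100.0, 80.0, 0.05))  # price fell, but no credit event
```

Only the second case obliges the protection seller to pay: a technical default with little price impact, or a price decline without a credit event, triggers nothing.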
A basket default swap is a special type of CDS. In a basket default swap, the
protection seller receives a fee for agreeing to make a payment upon the occurrence
of the first credit event to occur among several reference assets in a basket. The
protection buyer, in contrast, secures protection against only the first default among
the specified reference assets. Because the protection seller pays out on one default,
of any of the names (i.e. reference obligors), a basket swap represents a more
leveraged transaction than other credit derivatives, with correspondingly higher fees.
Basket swaps represent complicated risk positions due to the necessity to understand
the correlation of the assets in the basket. Because a protection seller can lose on
only one name, it would prefer the names in the basket to be as highly correlated as
possible. The greater the number of names in the basket and the lower the correlation
among the names, the greater the likelihood that the protection seller will have to
make a payment.
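The correlation effect on a first-to-default basket can be illustrated with a small Monte Carlo sketch. This is not a pricing model; a one-factor Gaussian copula is assumed for the default dependence, and the basket size, default probability, and correlation values are illustrative.

```python
# Monte Carlo sketch of the correlation effect in a first-to-default basket,
# using a one-factor Gaussian copula. Basket size, default probability, and
# correlation values are illustrative assumptions, not market data.
import math
import random

def first_to_default_prob(n_names, p_default, rho, trials=50_000, seed=7):
    """Probability that at least one name in the basket defaults."""
    # Default threshold c with Phi(c) = p_default, found by bisection (stdlib only).
    lo, hi = -8.0, 8.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p_default:
            lo = mid
        else:
            hi = mid
    c = (lo + hi) / 2
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        m = rng.gauss(0, 1)  # common systematic factor shared by all names
        for _ in range(n_names):
            z = math.sqrt(rho) * m + math.sqrt(1 - rho) * rng.gauss(0, 1)
            if z < c:       # this name defaults
                hits += 1
                break       # only the first default matters to the basket
    return hits / trials

low_corr = first_to_default_prob(5, 0.10, rho=0.1)
high_corr = first_to_default_prob(5, 0.10, rho=0.9)
print(round(low_corr, 3), round(high_corr, 3))
```

With five names at a 10% default probability, the nearly independent basket triggers roughly 1 − 0.9⁵ ≈ 41% of the time, while the highly correlated basket triggers far less often, which is why the protection seller prefers the names to be as highly correlated as possible.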
The credit exposure in a CDS generally goes in one direction. Upon default, the
protection buyer will receive a payment from, and thus is exposed to, the protection
seller. The protection buyer in a CDS will suffer a default-related credit loss only if
both the reference asset and the protection seller default simultaneously. A default
by either party alone should not result in a credit loss. If the reference entity defaults,
the protection seller must make a payment. If the protection seller defaults, but the
reference asset does not, the protection purchaser has no payment due. In this event,
however, the protection purchaser no longer has a credit hedge, and may incur
higher costs to replace the protection if it still desires a hedge. The protection seller’s
only exposure to the protection buyer is for periodic payments of the protection fee.
Dealers in credit derivatives, who may have a large volume of transactions with other
dealers, should monitor this ‘receivables’ exposure.

What are credit derivatives?

Credit derivatives permit the transfer of credit exposure between parties, in isolation
from other forms of risk. Banks can use credit derivatives either to assume or to
reduce (hedge) credit risk. Market participants refer to credit hedgers as protection
purchasers, and to providers of credit protection (i.e. the party who assumes credit
risk) as protection sellers.
There are a number of reasons market participants have found credit derivatives
attractive. First, credit derivatives allow banks to customize the credit exposure
desired, without having a direct relationship with a particular client, or that client
having a current funding need. Consider a bank that would like to acquire a two-year
exposure to a company in the steel industry. The company has corporate debt
outstanding, but its maturity exceeds two years. The bank can simply sell protection
for two years, creating an exposure that does not exist in the cash market. However,
the flexibility to customize credit terms also bears an associated cost. The credit
derivative is less liquid than an originated, directly negotiated, cash market exposure.
Additionally, a protection seller may use only publicly available information in
determining whether to sell protection. In contrast, banks extending credit directly to
a borrower typically have some access to the entity’s nonpublic financial information.
Credit derivatives allow a bank to transfer credit risk without adversely impacting
the customer relationship. The ability to sell the risk, but not the asset itself, allows
banks to separate the origination and portfolio decisions. Credit derivatives therefore
permit banks to hedge the concentrated credit exposures that large corporate relationships,
or industry concentrations created because of market niches, can often
present. For example, banks may hedge existing exposures in order to provide
capacity to extend additional credit without breaching internal, in-house limits.
There are three principal types of credit derivative products: credit default swaps,
total return swaps, and credit-linked notes. A fourth product, credit spread options,
is not a significant product in the US bank market.

Legal and cultural issues

Unlike most financial derivatives, credit derivative transactions require extensive
legal review. Banks that engage in credit derivatives face a variety of legal issues,
such as:
1 Interpreting the meaning of terms not clearly defined in contracts and
confirmations when unanticipated situations arise
2 The capacity of counterparties to contract and
3 Risks that reviewing courts will not uphold contractual arrangements.
Although contracts have become more standardized, market participants continue
to report that transactions often require extensive legal review, and that many
situations require negotiation and amendments to the standardized documents.
Until recently, very few default swap contracts were triggered because of the relative
absence of default events. The recent increase in defaults has led to more credit
events, and protection sellers generally have met their obligations without threat of
litigation. Nevertheless, because the possibility for litigation remains a significant
concern, legal risks and costs associated with legal transactional review remain
obstacles to greater participation and market growth.
Cultural issues also have constrained the use of credit derivatives. The traditional
separation within banks between the credit and treasury functions has made it
difficult for many banks to evaluate credit derivatives as a strategic risk management
tool. Credit officers in many institutions are skeptical that the use of a portfolio
model, which attempts to identify risk concentrations, can lead to more effective
risk/reward decision making. Many resist credit derivatives because of a negative
view of derivatives generally.
Over time, bank treasury and credit functions likely will become more integrated,
with each function contributing its comparative advantages to more effective risk management decisions. As more banks use credit portfolio models and credit derivatives,
credit portfolio management may become more ‘equity-like’. As portfolio managers
buy and sell credit risk in a portfolio context, to increase diversification and to
make the portfolio more efficient, however, banks increasingly may originate exposure
without maintaining direct borrower relationships. As portfolio management evolves
toward this model, banks will face significant cultural challenges. Most banks report
at least some friction between credit portfolio managers and line lenders, particularly
with respect to loan pricing. Credit portfolio managers face an important challenge.
They will attempt to capture the diversification and efficiency benefits offered by the
use of more quantitative techniques and credit derivatives. At the same time, these
risk managers will try to avoid diminution in their qualitative understanding of
portfolio risks, which less direct contact with obligors may imply.

Limited ability to hedge illiquid exposures

Credit derivatives can effectively hedge credit exposures when an underlying borrower
has publicly traded debt (loans or bonds) outstanding that can serve as a reference
asset. However, most banks have virtually all their exposures to firms that do not
have public debt outstanding. Because banks lend to a large number of firms without
public debt, they currently find it difficult to use credit derivatives to hedge these
illiquid exposures. As a practical matter, banks are able to hedge exposures only for
their largest borrowers. Therefore, the potential benefits of credit derivatives largely
remain at this time beyond the reach of community banks, where credit concentrations
tend to be largest.

Credit risk complacency and hedging costs

The absence of material domestic loan losses in recent years, the current strength of
the US economy, and competitive pressures have led not only to a slippage in
underwriting standards but also in some cases to complacency regarding asset
quality and the need to reduce credit concentrations. Figure 11.1 illustrates the
‘lumpy’ nature of credit losses on commercial credits over the past 15 years. It plots
charge-offs of commercial and industrial loans as a percentage of such loans.

Over the past few years, banks have experienced very small losses on commercial
credits. However, it is also clear that when the economy weakens, credit losses can
become a major concern. The threat of large losses, which can occur because of
credit concentrations, has led many larger banks to attempt to measure their credit
risks on a more quantitative, ‘portfolio’, basis.
Until recently, credit spreads on lower-rated, non-investment grade credits had contracted sharply. Creditors believe lower credit spreads indicate reduced credit
risk, and therefore less need to hedge.

Even when economic considerations indicate a bank should hedge a credit exposure,
creditors often choose not to buy credit protection when the hedge cost exceeds
the return from carrying the exposure. In addition, competitive factors and a desire
to maintain customer relationships often cause banks to originate credit (funded or
unfunded) at returns that are lower than the cost of hedging such exposures in the
derivatives market. Many banks continue to have a book value, as opposed to an
economic value, focus.

Application of risk-based capital rules

Regulators have not yet settled on the most appropriate application of risk-based
capital rules for credit derivatives, and banks trying to use them to reduce credit risk
may find that current regulatory interpretations serve as disincentives.5 Generally,
the current rules do not require capital based upon economic risk. For example,
capital rules neither differentiate between high- and low-quality assets nor do they
recognize diversification efforts. Transactions that pose the same economic risk may
involve quite different regulatory capital requirements. While the Basel Committee
has made the review of capital requirements for credit derivatives a priority, the
current uncertainty of the application of capital requirements has made it difficult
for banks to measure fully the costs of hedging credit risk.6

Difficulty of measuring credit risk

Measuring credit risk on a portfolio basis is difficult. Banks traditionally measure
credit exposures by obligor and industry. They have only recently attempted to define
risk quantitatively in a portfolio context, e.g. a Value-at-Risk (VaR) framework.3
Although banks have begun to develop internally, or purchase, systems that measure
VaR for credit, bank managements do not yet have confidence in the risk measures
the systems produce. In particular, measured risk levels depend heavily on underlying
assumptions (default correlations, amount outstanding at time of default,
recovery rates upon default, etc.), and risk managers often do not have great
confidence in those parameters. Since credit derivatives exist principally to allow for
the effective transfer of credit risk, the difficulty in measuring credit risk and the
absence of confidence in the results of risk measurement have appropriately made
banks cautious about using credit derivatives. Such difficulties have also made bank
supervisors cautious about the use of banks’ internal credit risk models for regulatory
capital purposes.
Measurement difficulties explain why banks have not, until very recently, tried to
implement measures to calculate Value-at-Risk (VaR) for credit. The VaR concept,
used extensively for market risk, has become so well accepted that bank supervisors
allow such measures to determine capital requirements for trading portfolios.4 The
models created to measure credit risk are new, and have yet to face the test of an
economic downturn. Results of different credit risk models, using the same data, can
vary widely. Until banks have greater confidence in parameter inputs used to measure
the credit risk in their portfolios, they will, and should, exercise caution in using
credit derivatives to manage risk on a portfolio basis. Such models can only complement,
but not replace, the sound judgment of seasoned credit risk managers.
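The parameter sensitivity described above can be made concrete with a minimal one-period loss simulation. A one-factor default model is assumed; the portfolio size, default probability, recovery rate, and confidence level are all illustrative inputs, and the point is only that the measured VaR moves sharply with the assumed correlation.

```python
# Minimal one-period credit portfolio loss simulation, illustrating how the
# measured VaR depends on an assumed default correlation. Portfolio size,
# default probability, recovery rate, and confidence level are illustrative.
import math
import random

def credit_var(n_loans=50, exposure=1.0, p_default=0.02, recovery=0.4,
               rho=0.2, level=0.99, trials=5_000, seed=11):
    """Portfolio loss at the given percentile under a one-factor default model."""
    # Default threshold c with Phi(c) = p_default, found by bisection.
    lo, hi = -8.0, 8.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p_default:
            lo = mid
        else:
            hi = mid
    c = (lo + hi) / 2
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        m = rng.gauss(0, 1)  # systematic factor driving default correlation
        defaults = sum(
            1 for _ in range(n_loans)
            if math.sqrt(rho) * m + math.sqrt(1 - rho) * rng.gauss(0, 1) < c
        )
        losses.append(defaults * exposure * (1.0 - recovery))
    losses.sort()
    return losses[int(level * trials)]

# The same portfolio under two different correlation assumptions.
print(credit_var(rho=0.05), credit_var(rho=0.5))
```

The same portfolio produces a markedly fatter loss tail, and hence a much larger measured VaR, under the higher correlation assumption, which is exactly why weak confidence in such parameters warrants caution.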

Size of the credit derivatives market and impediments to growth

The first credit derivative transactions occurred in the early 1990s, as large derivative
dealers searched for ways to transfer risk exposures on financial derivatives. Their
objective was to be able to increase derivatives business with their largest counterparties.
The market grew slowly at first. More recently, growth has accelerated as
banks have begun to use credit derivatives to make portfolio adjustments and to
reduce risk-based capital requirements.
As discussed in greater detail below, there are four credit derivative products:
credit default swaps (CDS), total return swaps (TRS), credit-linked notes (CLNs) and
credit spread options. Default swaps, total return swaps and credit spread options are
over-the-counter transactions, while credit-linked notes are cash market securities.
Market participants estimate the current global market for credit derivatives will
reach $740 billion by the year 2000.2 Bank supervisors in the USA began collecting
credit derivative information in Call Reports as of 31 March 1997. Table 11.1 tracks
the quarterly growth in credit derivatives for both insured US banks, and all institutions
filing Call Reports (which includes uninsured US offices of foreign branches).
The table’s data reflect substantial growth in credit derivatives. Over the two years
US bank supervisors have collected the data, the compounded annual growth rates of
notional credit derivatives for US insured banks and for all reporting entities (including
foreign branches and agencies) were 216.2% and 137.2%, respectively.
Call Report data understates the size of the credit derivatives market. First, it
includes only transactions for banks domiciled in the USA. It does not include the
activities of banks domiciled outside the USA, or any non-commercial banks, such
as investment firms. Second, the data includes activity only for off-balance sheet
transactions; therefore, it completely excludes CLNs.

Activity in credit derivatives has grown rapidly over the past two years. Nevertheless,
the number of institutions participating in the market remains small. As with
financial derivatives, credit derivatives activity in the US banking system is concentrated
in a small group of dealers and end-users. As of 31 March 1999, only 24
insured banking institutions, and 38 uninsured US offices (branches and agencies)
of foreign banks reported credit derivatives contracts outstanding. Factors that
account for the narrow institutional participation include:
1 Difficulty of measuring credit risk
2 Application of risk-based capital rules
3 Credit risk complacency and hedging costs
4 Limited ability to hedge illiquid exposures and
5 Legal and cultural issues.
An evaluation of these factors helps to set the stage for a discussion of credit
derivative products and risk management issues, which are addressed in subsequent
sections.

Risk management of credit derivatives

Credit risk is the largest single risk in banking. To enhance credit risk management,
banks actively evaluate strategies to identify, measure, and control credit concentrations.
Credit derivatives, a market that has grown from virtually zero in 1993 to an
estimated $350 billion at year end 1998,1 have emerged as an increasingly popular
tool. Initially, banks used credit derivatives to generate revenue; more recently, they
have also used them as capital and credit risk management tools. This
chapter discusses the types of credit derivative products, market growth, and risks.
It also highlights risk management practices that market participants should adopt
to ensure that they use credit derivatives in a safe and sound manner. It concludes
with a discussion of a portfolio approach to credit risk management.

Credit derivatives can allow banks to manage credit risk more effectively and
improve portfolio diversification. Banks can use credit derivatives to reduce undesired
risk concentrations, which historically have proven to be a major source of bank
financial problems. Similarly, banks can assume risk, in a diversification context, by
targeting exposures having a low correlation with existing portfolio risks. Credit
derivatives allow institutions to customize credit exposures, creating risk profiles
unavailable in the cash markets. They also enable creditors to take risk-reducing
actions without adversely impacting the underlying credit relationship.

Users of credit derivatives must recognize and manage a number of associated
risks. The market is new and therefore largely untested. Participants will undoubtedly
discover unanticipated risks as the market evolves. Legal risks, in particular, can be
much higher than in other derivative products. Similar to poorly developed lending
strategies, the improper use of credit derivatives can result in an imprudent credit
risk profile. Institutions should avoid material participation in the nascent credit
derivatives market until they have fully explored, and developed a comfort level with,
the risks involved. Originally developed for trading opportunities, these instruments
recently have begun to serve as credit risk management tools. This chapter primarily
deals with the credit risk management aspects of banks’ use of credit derivatives.
Credit derivatives have become a common element in two emerging trends in how
banks assess their large corporate credit portfolios. First, larger banks increasingly
devote human and capital resources to measure and model credit portfolio risks
more quantitatively, embracing the tenets of modern portfolio theory (MPT). Banks have pursued these efforts to increase the efficiency of their credit portfolios and look
to increase returns for a given level of risk or, conversely, to reduce risks for a given
level of returns. Institutions adopting more advanced credit portfolio measurement
techniques expect that increased portfolio diversification and greater insight into
portfolio risks will result in superior relative performance over the economic cycle.
The second trend involves tactical bank efforts to reduce regulatory capital requirements
on high-quality corporate credit exposures. The current Basel Committee on
Bank Supervision Accord (‘Basel’) requirements of 8% for all corporate credits,
regardless of underlying quality, reduce banks’ incentives to make higher quality
loans. Banks have used various securitization alternatives to reconcile regulatory
and economic capital requirements for large corporate exposures. Initially, these
securitizations took the form of collateralized loan obligations (CLOs). More recently,
however, banks have explored ways to reduce the high costs of CLOs, and have
begun to consider synthetic securitization structures.

The synthetic securitization structures banks employ to reduce regulatory capital
requirements for higher-grade loan exposures use credit derivatives to purchase
credit protection against a pool of credit exposures. As credit risk modeling efforts
evolve, and banks increasingly embrace a MPT approach to credit risk management,
banks increasingly may use credit derivatives to adjust portfolio risk profiles.

Conclusion

The rapid proliferation of credit risk models, including credit risk management
models, has resulted in sophisticated models which provide crucial information to
credit risk managers (see Table 10.1). In addition, many of these models have focused
attention on the inadequacy of current credit risk management practices. Firms
should continue to improve these models but keep in mind that models are only
one tool of credit risk management. While many banks have already successfully
implemented these models, we are a long way from having a ‘universal’ credit risk
management model that handles all the firm’s credit risky assets.
Author’s note
This paper is an extension of Richard K. Skora, ‘Modern credit risk modeling’, presented at
the meeting of the Global Association of Risk Professionals. 19 October 1998.
Note
1 Of course, implementing and applying a model is a crucial step in realizing the benefits of
modeling. Indeed, there is a feedback effect: the practicalities of implementation and application
affect many decisions in the modeling process.

Capital and regulation

Regulators seek to ensure that our financial system is safe while, at the same time,
allowing it to prosper. To ensure that safety, regulators insist that a bank hold sufficient capital
to absorb losses. This includes losses due to market, credit, and all other risks. The
proper amount of capital raises interesting theoretical and practical questions. (See,
for example, Matten, 1996 or Pratt, 1998.) Losses due to market or credit risk show
up as losses to the bank’s assets. A bank should have sufficient capital to absorb
not only losses during normal times but also losses during stressful times.
In the hope of protecting our financial system and standardizing requirements
around the world the 1988 Basel Capital Accord set minimum requirements for
calculating bank capital. It was also the intent of regulators to make the rules simple.
The Capital Accord specified that regulatory capital is 8% of risk-weighted assets.
The risk weights were 100%, 50%, 20%, or 0% depending on the asset. For example,
a loan to an OECD bank would have a risk weighting of 20%. Even at the time the
regulators knew there were shortcomings in the regulation, but it had the advantage
of being simple.
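The Accord's arithmetic described above is easy to sketch. The example portfolio is illustrative, and the risk-weight table below lists only the representative categories the text mentions:

```python
# Sketch of the 1988 Basel Capital Accord arithmetic: regulatory capital is
# 8% of risk-weighted assets. The portfolio below is an illustrative example.

RISK_WEIGHTS = {
    "oecd_government": 0.00,       # 0% weight
    "oecd_bank": 0.20,             # 20% weight, as in the loan example above
    "residential_mortgage": 0.50,  # 50% weight
    "corporate_loan": 1.00,        # 100% weight, regardless of credit quality
}

def required_capital(positions, ratio=0.08):
    """Regulatory capital = ratio * sum of (amount * risk weight)."""
    rwa = sum(amount * RISK_WEIGHTS[kind] for kind, amount in positions)
    return ratio * rwa

portfolio = [("corporate_loan", 100.0), ("oecd_bank", 100.0)]
print(required_capital(portfolio))  # 8% of 120 in risk-weighted assets
```

Note that the corporate loan attracts the full 8% charge whether the borrower is AAA or near default, which is the flat-weight shortcoming that drives the arbitrage discussed next.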
The changes in banking since 1988 have proved the Capital Accord to be very
inadequate – Jones and Mingo (1998) discuss the problems in detail. Banks use
exotic products to change their regulatory capital requirements independent of their
actual risk. They are arbitraging the regulation. Now there is arbitrage across
banking, trading, and counterparty bank books as well as within individual books
(see Irving, 1997).
One of the proposals from the industry is to allow banks to use their own internal
models to compute regulatory credit risk capital similar to the way they use VaR
models to compute add-ons to regulatory market risk capital. Some of the pros and
cons of internal models are discussed in Irving (1997). The International Swaps and
Derivatives Association (1998) has proposed a model. Their main point is that
regulators should embrace models as soon as possible and they should allow the
models to evolve over time.
Regulators are examining ways to correct the problems in existing capital regulation.
It is a very positive development that the models, and their implementation, will
be scrutinized before making a new decision on regulation.
The biggest mistake the industry could make would be to adopt a one-size-fits-all
policy. Arbitrarily adopting any of these models would certainly stifle creativity. More
importantly, it could undermine responsibility and authority of those most capable
of carrying out credit risk management.

Risk calculation engine

The last component is the risk calculation engine, which calculates the
expected returns and multivariate distributions that are then used to calculate the
associated risks and the optimal portfolio. Since the distributions are not normal,
this portion of the portfolio model requires some ingenuity.
One method of calculation is Monte Carlo simulation. This is exemplified in
many of the above-mentioned models. Another method of calculating the probability
distribution is numerical. One starts by approximating the probability distribution
of losses for each asset by a discrete probability distribution. This is a reasonable
simplification because one is mainly interested in large, collective losses – not
individual firm losses.
Once the individual probability distributions have been discretized, there is a
well-known computation called convolution for computing the aggregate probability
distribution. This numerical method is easiest when the probability distributions are
independent – which in this case they are not. There are tricks and enhancements
to the convolution technique to make it work for nonindependent distributions.
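The convolution step is straightforward in the independent case, which can be sketched as follows; the two-point loss distributions are illustrative, and the factor-model adjustments for nonindependence that the text mentions are beyond this sketch.

```python
# Sketch of aggregating discrete loss distributions by convolution, for the
# independent case only. Loss amounts and probabilities are illustrative.

def convolve(dist_a, dist_b):
    """Combine two independent discrete loss distributions {loss: probability}."""
    out = {}
    for loss_a, prob_a in dist_a.items():
        for loss_b, prob_b in dist_b.items():
            total = loss_a + loss_b
            out[total] = out.get(total, 0.0) + prob_a * prob_b
    return out

# Each asset loses 0 with probability 0.95 or 10 with probability 0.05.
asset = {0: 0.95, 10: 0.05}
portfolio = convolve(asset, asset)
print(sorted(portfolio.items()))  # probabilities for total losses of 0, 10, 20
```

Repeating the convolution asset by asset builds the aggregate loss distribution for the whole portfolio, which is why discretizing the individual distributions first makes the computation tractable.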
The risk calculation engine of CreditRisk+ uses convolution. It models the
nonindependence of defaults with a factor model. It assumes that there is a finite
number of factors which describe nonindependence. Such factors would come from
the firm’s country, geographical location, industry, and specific characteristics.

Exposure model

The exposure model is depicted in Figure 10.6. This portion of the model aggregates
the portfolio of assets across business lines and legal entities and any other appropriate
category. In particular, netting across a counterparty would take into account
the relevant jurisdiction and its netting laws. Without fully aggregating, the model
cannot accurately take into account diversification or the lack of diversification.
Only after the portfolio is fully aggregated and netted can it be correctly priced. At
this point the market risk pricing model and credit risk pricing model can actually
price all the credit risky assets.
The exposure model also calculates for each asset the appropriate time period,
which roughly corresponds to the amount of time it would take to liquidate the asset.
Having a different time period for each asset not only increases the complexity of the
model, it also raises some theoretical questions. Should the time period corresponding
to an asset be the length of time it takes to liquidate only that asset? To liquidate
all the assets in the portfolio? Or to liquidate all the assets in the portfolio in a time
of financial crisis? The answer is difficult. Most models simply use the same time
period, usually one year, for all exposures. One year is considered an appropriate
amount of time for reacting to a credit loss whether that be liquidating a position or
raising more capital. There is an excellent discussion of this issue in Jones and
Mingo (1998). Another responsibility of the exposure model is to report the portfolio’s
various concentrations.

Market risk pricing model

The market risk pricing model is analogous to the credit risk pricing model, except
that it is limited to assets without credit risk. This component models changes in
market rates such as credit-riskless US Treasury interest rates. To price all the
credit risky assets completely and accurately it is necessary to have both a market
risk pricing model and credit risk pricing model.
Most models, including CreditMetrics, CreditRisk+, Portfolio Manager, and Portfolio
View, have a dynamic credit rating model but lack a credit risk pricing model
and market risk pricing model. While the lack of these components partially cripples
some models, it does not completely disable them. As such, these models are best
suited to products such as loans that are most sensitive to major credit events like
credit rating migration including defaults. Two such models for loans only are
discussed in Spinner (1998) and Belkin et al. (1998).

Credit risk pricing model

The next major component of the model is the credit risk pricing model, which is
depicted in detail in Figure 10.5. This portion of the model together with the market
risk model will allow the credit risk management model to calculate the relevant
return statistics.
The credit risk pricing model is necessary because the price of credit risk has two
components. One is the credit rating that was handled by the previous component,
the other is the spread over the riskless rate. The spread is the price that the market
charges for a particular credit risk. This spread can change without the underlying
credit risk changing and is affected by supply and demand.
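The role of the spread can be made concrete with a small sketch. All numbers below are illustrative assumptions, not market data, and the simple annual-compounding discount is only one possible convention:

```python
# Sketch: price of a credit-risky zero-coupon bond as a discount at the
# riskless rate plus the credit spread.  Illustrative numbers only.

def bond_price(face, riskless_rate, spread, maturity):
    """Discount the face value at the riskless rate plus the credit spread."""
    return face / (1.0 + riskless_rate + spread) ** maturity

# A 5-year zero with face 100, a 5% riskless rate, and a 2% credit spread.
risky = bond_price(100.0, 0.05, 0.02, 5)
riskless = bond_price(100.0, 0.05, 0.00, 5)

# A widening spread lowers the price even if the credit rating is unchanged,
# which is exactly the supply-and-demand effect described in the text.
assert risky < riskless
```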
The credit risk pricing model can be based on econometric models or any of the
popular risk-neutral pricing models which are used for pricing credit derivatives.
Most risk-neutral credit pricing models are transplants of risk-neutral interest rate
pricing models and do not adequately account for the differences between credit risk
and interest rate risk. Nevertheless, these risk-neutral models seem to be popular.
See Skora (1998a,b) for a description of the various risk-neutral credit risk pricing
models.
Roughly speaking, static models are sufficient for pricing derivatives which do not
have an option component and dynamic models are necessary for pricing derivatives
which do have an option component. As far as credit risk management models are
concerned, they all need a dynamic credit risk term structure model. The reason is
that the credit risk management model needs both the expected return of each asset
as well as the covariance matrix of returns. So even if one had both the present price
of the asset and the forward price, one would still need to calculate the probability
distribution of returns.
So the credit risk model calculates the credit risky term structure, that is, the yield
curve for the various credit risky assets. It also calculates the corresponding term
structure for the end of the time period as well as the distribution of the term
structure. One way to accomplish this is by generating a sample of what the term
structure may look like at the end of the period. Then by pricing the credit risky
assets off these various term structures, one obtains a sample of what the price of
the assets may be.
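The sampling procedure just described can be sketched as follows. The multiplicative lognormal shock to the spread and every numerical input are assumptions chosen purely for illustration, not a recommended dynamics:

```python
import math
import random

# Sketch: generate a sample of end-of-period credit spreads and reprice a
# credit-risky zero-coupon bond off each one, giving a sample of possible
# end-of-period prices.  Parameters are illustrative assumptions.

def price_off_curve(face, riskless, spread, maturity):
    return face / (1.0 + riskless + spread) ** maturity

rng = random.Random(0)
riskless, base_spread = 0.05, 0.02

# One possible end-of-period spread per scenario: a lognormal shock.
end_prices = [
    price_off_curve(100.0, riskless,
                    base_spread * math.exp(0.4 * rng.gauss(0.0, 1.0)), 4)
    for _ in range(10_000)
]
mean_price = sum(end_prices) / len(end_prices)
worst_price = min(end_prices)
```

The resulting sample of prices is exactly the input the portfolio model needs to form a distribution of returns.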
Since credit spreads do not move independently of one another, the credit risk
pricing model, like the asset credit risk model, also has a correlation component.
Again depending on the assets in the portfolio, it may be possible to economize and
combine this component with the previous one.
Finally, the choice of inputs can be historical, econometric or market data. The
choice depends on how the portfolio selection model is to be used. If one expects to
invest in a portfolio and divest at the end of the time period, then one needs to
calculate actual market prices. In this case the model must be calibrated to market
data. At the other extreme, if one were using the portfolio model to simply calculate
a portfolio’s risk or the marginal risk created by purchasing an additional asset, then
the model may be calibrated to historical, econometric, or market data – the choice
is the risk manager’s.

Asset credit risk model

The first component is the asset credit risk model that contains two main subcomponents:
the credit rating model and the dynamic credit rating model. The credit
rating model calculates the credit riskiness of an asset today while the dynamic
credit rating model calculates how that riskiness may evolve over time. This is
depicted in more detail in Figure 10.4. For example, if the asset is a corporate bond,
then the credit riskiness of the asset is derived from the credit riskiness of the issuer.
The credit riskiness may be in the form of a probability of default or in the form of a
credit rating. The credit rating may correspond to one of the international credit
rating services or the institution’s own internal rating system.

An interesting point is that the credit riskiness of an asset can depend on the
particular structure of the asset. For example, the credit riskiness of a bond depends
on its seniority as well as its maturity. (Short- and long-term debt of the same issuer
may have different credit ratings.) The credit risk does not necessarily need to be
calculated. It may be inputted from various sources or modeled from fundamentals.
If it is inputted it may come from any of the credit rating agencies or the institution’s
own internal credit rating system. For a good discussion of banks’ internal credit
rating models see Treacy and Carey (1998).
If the credit rating is modeled, then there are numerous choices – after all, credit
risk assessment is as old as banking itself. Two examples of credit rating models are
the Zeta model, which is described in Altman, Haldeman, and Narayanan (1977),
and the Lambda Index, which is described in Emery and Lyons (1991). Both models
are based on the entity’s financial statements.

Another well-publicized credit rating model is the EDF Calculator. The EDF model
is based on Robert Merton’s (1974) observation that a firm’s assets are the sum of
its equity and debt, so the firm defaults when the assets fall below the face value of
the debt. It follows that debt may be thought of as a short option position on the
firm’s assets, so one may apply the Black–Scholes option theory.
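A minimal sketch of this Merton-style calculation is given below. It is not the proprietary EDF Calculator, only the textbook distance-to-default formula it builds on; the firm's asset value, drift, volatility, and debt face value are all illustrative assumptions:

```python
import math

# Sketch of a Merton-style default probability: the firm defaults if its
# asset value falls below the face value of its debt at the horizon.
# All parameters are illustrative; the EDF Calculator is far more elaborate.

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_default_prob(assets, debt_face, mu, sigma, horizon):
    """Probability that lognormal asset value ends below the debt face value."""
    d2 = (math.log(assets / debt_face) + (mu - 0.5 * sigma ** 2) * horizon) \
         / (sigma * math.sqrt(horizon))
    return norm_cdf(-d2)

# Firm with assets 140, debt 100, 8% asset drift, 25% volatility, 1-year horizon.
pd = merton_default_prob(140.0, 100.0, 0.08, 0.25, 1.0)
```

Note that raising the asset volatility raises the default probability, which is the short-option intuition in the text.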
Of course, real bankruptcy is much more complicated and the EDF Calculator
accounts for some of these complications. The model’s strength is that it is calibrated
to a large database of firm data including firm default data. The EDF Calculator
actually produces a probability of default which, if one likes, can be mapped to
discrete credit ratings. Since the EDF model is proprietary there is little public
information on it. The interested reader may consult Crosbie (1997) to get a rough
description of its workings. Nickell, Perraudin, and Varotto (1998) compare various
credit rating models including EDF.

To accurately measure the credit risk it is essential to know both the credit
riskiness today as well as how that credit riskiness may evolve over time. As was
stated above, the dynamic credit rating model calculates how an asset’s credit
riskiness may evolve over time. How this component is implemented depends very
much on the assets in the portfolio and the length of the time period for which risk
is being calculated. But if the asset’s credit riskiness is not being modeled explicitly,
it is at least implicitly being modeled somewhere else in the portfolio model, for
example in a pricing model – changes in the credit riskiness of an asset are reflected
in the price of that asset.

Of course, changes in credit riskiness of various assets are related. So Figure 10.4
also depicts a component for the correlation of credit rating which may be driven by
any number of variables including historical, econometric, or market variables.
The oldest dynamic credit rating model is the Markov model for credit rating
migration. The appeal of this model is its simplicity. In particular, it is easy to
incorporate non-independence of two different firms' credit rating changes.
The portfolio model CreditMetrics (J.P. Morgan, 1997) uses this Markov model.
The basic assumption of the Markov model is that a firm’s credit rating migrates at
random up or down like a Markov process. In particular, the migration over one
time period is independent of the migration over the previous period. Credit risk
management models based on a Markov process are implemented by Monte Carlo
simulation.
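The Monte Carlo implementation can be sketched in a few lines. The three-state transition matrix (ratings A and B plus default D) is an illustrative assumption, not the CreditMetrics matrix:

```python
import random

# Sketch: one-period Markov credit rating migration by Monte Carlo.
# The transition probabilities below are illustrative assumptions.

TRANSITIONS = {
    "A": [("A", 0.90), ("B", 0.08), ("D", 0.02)],
    "B": [("A", 0.05), ("B", 0.85), ("D", 0.10)],
    "D": [("D", 1.00)],                      # default is absorbing
}

def migrate(rating, rng):
    """Draw next period's rating given this period's rating."""
    u, cum = rng.random(), 0.0
    for next_rating, p in TRANSITIONS[rating]:
        cum += p
        if u < cum:
            return next_rating
    return TRANSITIONS[rating][-1][0]

rng = random.Random(42)
end_ratings = [migrate("B", rng) for _ in range(100_000)]
default_rate = end_ratings.count("D") / len(end_ratings)
# The simulated default rate should sit near the 10% matrix entry for B.
```

The key Markov assumption is visible in `migrate`: the draw depends only on the current rating, not on how the firm arrived there.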

Unfortunately, there has been recent research showing that the Markov process is
a poor approximation to the credit rating process. The main reason is that the credit
rating is influenced by the economy, which moves through business cycles. Thus the
probability of downgrade, and hence default, is greater during a recession. Kolman
(1998) gives a non-technical explanation of this fact. Also Altman and Kao (1991)
mention the shortcomings of the Markov process and propose two alternative processes.
Nickell, Perraudin, and Varotto (1998a,b) give a more thorough criticism of
Markov processes by using historical data. In addition, the credit rating agencies
have published insightful information on their credit rating and how they evolve over
time. For example, see Brand, Rabbia and Bahar (1997) or Carty (1997).
Another credit risk management model, CreditRisk+, models only two states: non-default
and default (CSFP, 1997). But this is only a heuristic simplification. Rolfes
and Broeker (1998) have shown how to enhance CreditRisk+ to model a finite
number of credit rating states. The main advantage of the CreditRisk+ model is that
it was designed with the goal of allowing for an analytical implementation as opposed
to Monte Carlo.

The last model we mention is Portfolio View (McKinsey, 1998). This model is based
on econometric models and looks for relationships between the general level of default
and economic variables. Of course, predicting any economic variable, including the
general level of defaults, is one of the highest goals of research economics. Risk
managers should proceed with caution when they start believing they can predict
risk factors.

As mentioned above, it is the extreme events that most affect the risk of a portfolio
of credit risky assets. Thus it would make sense that a model which more accurately
measures extreme events would be a better one. Wilmott (1998) devised such a
model called CrashMetrics. This model is based on the theory that the correlation
between events differs between times of calm and times of crisis, so it tries to model
the correlation during times of crisis. This theory shows great promise. See Davidson
(1997) for another discussion of the various credit risk models.

Value-at-Risk

Before going into more detail about credit risk management models, it would be
instructive to say a few words about Value-at-Risk. The credit risk management
modeling framework shares many features with this other modeling framework called
Value-at-Risk. This has resulted in some confusion and mistakes in the industry,
so it is worthwhile explaining the relationship between the two frameworks.
Notice we were careful to write framework because Value-at-Risk (VaR) is a
framework. There are many different implementations of VaR and each of these
implementations may be used differently.
Since about 1994 bankers and regulators have been using VaR as part of their
risk management practices. Specifically, it has been applied to market risk management.
The motivation was to compute a regulatory capital number for market risk.
Given a portfolio of assets, Value-at-Risk is defined to be a single monetary capital
number which, for a high degree of confidence, is an upper bound on the amount of
gains or losses to the portfolio due to market risk. Of course, the degree of confidence
must be specified and the higher that degree of confidence, the higher the capital
number. Notice that if one calculates the capital number for every degree of confidence
then one has actually calculated the entire probability distribution of gains or losses
(see Best, 1998).
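The quantile definition above can be sketched directly. The normally distributed P&L sample is a toy assumption; a real implementation would use the firm's own P&L model:

```python
import random

# Sketch: Value-at-Risk as a quantile of a simulated P&L distribution.
# The Gaussian daily P&L sample (in dollars) is a toy assumption.

random.seed(1)
pnl = [random.gauss(0.0, 1_000_000.0) for _ in range(100_000)]

def value_at_risk(pnl_sample, confidence):
    """Loss threshold exceeded with probability (1 - confidence)."""
    losses = sorted(-x for x in pnl_sample)          # positive = loss
    index = int(confidence * len(losses))
    return losses[min(index, len(losses) - 1)]

var_95 = value_at_risk(pnl, 0.95)
var_99 = value_at_risk(pnl, 0.99)

# A higher degree of confidence gives a higher capital number, and scanning
# over every confidence level recovers the entire loss distribution.
assert var_99 > var_95
```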
Specific implementation of VaR can vary. This includes the assumptions, the
model, the input parameters, and the calculation methodology. For example, one
implementation may calibrate to historical data and another to econometric data.
Both implementations are still VaR models, but one may be more accurate and
useful than the other. For a good debate on the utility of VaR models see
Kolman (1997).
In practice, VaR is associated with certain assumptions. For example, most VaR
implementations assume that market prices are normally distributed or that losses
are independent. These assumptions are based more on convenience than on empirical
evidence: normal distributions are easy to work with.
Value-at-Risk has a corresponding definition for credit risk. Given a portfolio of
assets, Credit Value-at-Risk is defined to be a single monetary capital number which,
for a high degree of confidence, is an upper bound on the amount of gains or losses
to the portfolio due to credit risk.
One should immediately notice that both the credit VaR model and the credit risk
management model compute a probability distribution of gains or losses. For this
reason many risk managers and regulators do not distinguish between the two.
However, there is a difference between the two models. Though the difference may
be more one of the mind-set of the users, it is important.
The difference is that VaR models put too much emphasis on distilling one number
from the aggregate risks of a portfolio. First, according to our definition, a credit risk
management model also computes the marginal effect of a single asset and
computes optimal portfolios, which assist in making business decisions. Second, a
credit risk management model is a tool designed to assist credit risk managers in a
broad range of dynamic credit risk management decisions.
This difference between the models is significant. Indeed, some VaR proponents
have been so driven to produce that single, correct capital number that they have
ignored more important risk management issues. This is why we
have stated that the model, its implementation, and their applications are important.
Both bankers and regulators are currently investigating the possibility of using the
VaR framework for credit risk management. Lopez and Saidenberg (1998) propose a
methodology for generating credit events for the purpose of testing and comparing
VaR models for calculating regulatory credit capital.

A framework for credit risk management models

This section provides a framework in which to understand and evaluate credit risk
management models. We will describe all the components of a complete (or nearly
complete) credit risk model. Figure 10.3 labels the major components of a credit risk
model.
While at present there is no model that can do everything in Figure 10.3, this
description will be a useful reference by which to evaluate all models. As will be seen
below, portfolio models have a small subset of the components depicted in Figure
10.3. Sometimes by limiting itself to particular products or particular applications,
a model is able to either ignore a component or greatly simplify it. Some models
simply settle for an approximately correct answer. More detailed descriptions and
comparisons of some of these models may be found in Gordy (1998), Koyluoglu and
Hickman (1998), Lopez and Saidenberg (1998), Lentino and Pirzada (1998), Locke
(1998), and Crouhy and Mark (1998).
The general consensus seems to be that we stand to learn much more about credit
risk. We have yet to even scratch the surface in bringing high-powered, mathematical
techniques to bear on these complicated problems. It would be a mistake to settle
for the existing state of the art and believe we cannot improve. Current discussions
should promote original, customized solutions and thereby encourage active credit
risk management.

Adapting portfolio selection theory to credit risk management

Risk management distinguishes between market risk and credit risk. Market risk is
the risk of price movement due either directly or indirectly to changes in the prices
of equity, foreign currency, and US Treasury bonds. Credit risk is the risk of price
movement due to credit events. A credit event is a change in credit rating or perceived
credit rating, which includes default. Corporate, municipal, and certain sovereign
bonds contain credit risk.

In fact, it is sometimes difficult to distinguish between market risk and credit risk.
This has led to debate over whether the two risks should be managed together, but
this question will not be debated here. Most people are in agreement that the risks
are different, and risk managers and their models must account for the differences.
As will be seen below, our framework for a credit risk management model contains a
market risk component.
There are several reasons why Markowitz’s portfolio selection model is most easily
applied to equity assets. First, the model is what is called a single-period portfolio
model that tells one how to optimize a portfolio over a single period, say, a single day.
This means the model tells one how to select the portfolio at the beginning of the
period and then one holds the portfolio without changes until the end of the period.
This is not a disadvantage when the underlying market is liquid. In this case, one
just reapplies the model over successive periods to determine how to manage the
portfolio over time. Since transaction costs are relatively small in the equity markets,
it is possible to frequently rebalance an equity portfolio.
A second reason the model works well in the equity markets is that equity returns
seem to be nearly normally distributed. While much research on equity assets shows
that their returns are not perfectly normal, many people still successfully apply
Markowitz’s model to equity assets.

Finally, the equity markets are very liquid and deep. As such there is a lot of data
from which to deduce expected returns and covariances of returns.
These three conditions of the equity markets do not apply to the credit markets.
Credit events tend to be sudden and result in large price movements. In addition,
the credit markets are sometimes illiquid and have large transaction costs. As a
result many of the beautiful theories of market risk models do not apply to the credit
markets. Since credit markets are illiquid and transaction costs are high, an
appropriate single period can be much longer than a single day. It can be as long as
a year. In fact, a reasonable holding period for various instruments will range from a
day to many years.

The assumption of normality in Markowitz's portfolio model helps in another way. It
is obvious how to compare two normal distributions, namely, less risk is better than
more risk. In the case of, say, credit risk, when distributions are not normal, it is not
obvious how to compare two distributions. For example, suppose two assets have
probability distributions of losses with the same mean but standard deviations of $8
and $10, respectively. In addition, suppose they have maximum potential losses of
$50 and $20, respectively. Which is less risky? It is difficult to answer and depends
on an economic utility function for measuring risk. The theory of utility functions is
another field of study and we will not discuss it further. Any good portfolio theory for
credit risk must allow for the differences between market and credit risk.

Review of Markowitz’s portfolio selection theory

Harry Markowitz (1952, 1959) developed the first and most famous portfolio selection
model which showed how to build a portfolio of assets with the optimal risk and
return characteristics.
Markowitz’s model starts with a collection of assets for which it is assumed one
knows the expected returns and risks as well as all the pair-wise correlation of the
returns. Here risk is defined as the standard deviation of return.
It is a fairly strong assumption to assume that these statistics are known. The
model further assumes that the asset returns follow a multivariate normal
distribution, so, in particular, each asset's return is normally distributed.
Thus the assets are completely described by their expected return and their pairwise
covariances of returns
E[ri]    and    Covariance(ri, rj) = E[ri rj] - E[ri]E[rj]
respectively, where ri is the random variable of return for the ith asset. Under these
assumptions Markowitz shows for a target expected return how to calculate the exact
proportion to hold of each asset so as to minimize risk, or equivalently, how to
minimize the standard deviation of return. Figure 10.2 depicts the theoretical
workings of the Markowitz model. Two different portfolios of assets held by two
different institutions have different risk and return characteristics.
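With only two assets the calculation can be carried out by hand, which makes a useful sketch of the mechanics: a target expected return pins the weights exactly, and the covariance then determines the portfolio risk. The returns, volatilities, and correlation below are illustrative assumptions:

```python
import math

# Sketch: two-asset Markowitz portfolio.  The target return fixes the
# weights, and the covariance of returns gives the portfolio risk.
# All inputs are illustrative assumptions.

r1, r2 = 0.10, 0.04        # expected returns E[ri]
s1, s2 = 0.20, 0.08        # standard deviations of returns
rho = 0.3                  # correlation of returns

def portfolio_risk(target):
    """Standard deviation of the two-asset portfolio hitting the target return."""
    w1 = (target - r2) / (r1 - r2)       # weight solving the return constraint
    w2 = 1.0 - w1
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2.0 * w1 * w2 * rho * s1 * s2
    return math.sqrt(var)

# A 7% target needs a 50/50 mix; with imperfect correlation, diversification
# keeps the portfolio risk below the average of the individual risks.
risk = portfolio_risk(0.07)
assert risk < 0.5 * (s1 + s2)
```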
While one may slightly relax the assumptions in Markowitz’s theory, the assumptions
are still fairly strong. Moreover, the results are sensitive to the inputs; two
users of the theory who disagree on the expected returns and covariance of returns
may calculate widely different portfolios. In addition, the definition of risk as the
standard deviation of returns is only reasonable when returns are a multi-normal
distribution. Standard deviation is a very poor measure of risk. So far there is no
consensus on the right probability distribution when returns are not a multi-normal
distribution.
Nevertheless, Markowitz’s theory survives because it was the first portfolio theory
to quantify risk and return. Moreover, it showed that mathematical modeling could
vastly improve portfolio theory techniques. Other portfolio selection models are
described in Elton and Gruber (1991).

Functionality of a good credit risk management model

A credit risk management model tells the credit risk manager how to allocate scarce
credit risk capital to various businesses so as to optimize the risk and return
characteristics of the firm. It is important to understand that optimize does not mean
minimize risk; otherwise every firm would simply invest its capital in riskless assets.
Optimize means: for any given target return, minimize the risk.
A credit risk management model works by comparing the risk and return characteristics
between individual assets or businesses. One function is to quantify the
diversification of risks. Being well-diversified means that the firm has no concentrations
of risk to, say, one geographical location or one counterparty.
Figure 10.1 depicts the various outputs from a credit risk management model. The
output depicted by credit risk is the probability distribution of losses due to credit
risk. This reports, for each capital number, the probability that the firm may lose that
amount of capital or more. The greater the capital number, the smaller the probability. Of
course, a complete report would also describe where and how those losses might
occur so that the credit risk manager can take the necessary prudent action.
The marginal statistics explain the effect of adding or removing one asset from the
portfolio. They report the new risks and profits. In particular, they help the firm decide
whether it likes that new asset or what price it should pay for it.
The last output, optimal portfolio, goes beyond the previous two outputs in that it
tells the credit risk manager the optimal mix of investments and/or business
ventures. The calculation of such an output would build on the data and calculation
of the previous outputs.
Of course, Figure 10.1 is a wish list of outputs. Actual models may only produce some of the outputs for a limited number of products and asset classes. For example,
present technology only allows one to calculate the optimal portfolio in special
situations with severe assumptions. In reality, firms attain or try to attain the optimal
portfolio through a series of iterations involving models, intuition, and experience.
Nevertheless, Figure 10.1 will provide the framework for our discussion.
Models, in most general terms, are used to explain and/or predict. A credit risk
management model is not a predictive model. It does not tell the credit risk manager
which business ventures will succeed and which will fail. Models that claim predictive
powers should be used by the firm’s various business units and applied to individual
assets. If these models work and the associated business unit consistently exceeds
its profit targets, then the business unit would be rewarded with large bonuses and/
or increased capital. Regular success within a business unit will show up at the
credit risk management level. So it is not a contradiction that the business unit may
use one model while risk management uses another.
Credit risk management models, in the sense that they are defined here, are used
to explain rather than predict. Credit risk management models are often criticized
for their failure to predict (see Shirreff, 1998). But this is an unfair criticism. One
cannot expect these models to predict credit events such as credit rating changes or
even defaults. Credit risk management models can predict neither individual credit
events nor collective credit events. For example, no model exists for predicting an
increase in the general level of defaults.
While this author is an advocate of credit risk management models and has seen
many banks realize their benefits, a caution is warranted: there are risks associated with developing models. At present many institutions are rushing
to lay claim to the best and only credit risk management model. Such ambitions may
actually undermine the risk management function for the following reasons.
First, when improperly used, models are a distraction from the other responsibilities
of risk management. In the bigger picture the model is simply a single component,
though an important one, of risk management. Second, a model may undermine risk
management if it leads to a complacent, mechanical reliance on the model. And more
subtly it can stifle competition. The risk manager should have the incentive to
innovate just like any other employee.

Motivation

Banks are expanding their operations around the world; they are entering new
markets; they are trading new asset types; and they are structuring exotic products.
These changes have created new opportunities along with new risks. While banking
is always evolving, the current fast rate of change is making it a challenge to respond
to all the new opportunities.
Changes in banking have brought both good and bad news. The bad news includes
the very frequent and extreme banking debacles. In addition, there has been a
divergence between international and domestic regulation as well as between regulatory
capital and economic capital. More subtly, banks have wasted many valuable resources correcting problems and repairing outdated models and methodologies.
The good news is that the banks which are responding to the changes have been
rewarded with a competitive advantage. One response is the investment in risk
management. While risk management is not new, not even in banking, the current
rendition of risk management is new.
Risk management takes a firmwide view of the institution’s risks, profits, and
opportunities so that it may ensure optimal operation of the various business units.
The risk manager has the advantage of knowing all the firm’s risks extending across
accounting books, business units, product types, and counterparties. By aggregating
the risks, the risk manager is in the unique position of ensuring that the firm
may benefit from diversification. Risk management is a complicated, multifaceted
profession requiring diverse experience and problem-solving skills (see Bessis, 1998).
The risk manager is constantly taking on new challenges. Whereas yesterday a
risk manager may have been satisfied with being able to report the risk and return
characteristics of his firm’s various business units, today he or she is using that
information to improve his firm’s business opportunities.
Credit risk is traditionally the main risk of banks. Banks are in the business of
taking credit risk in exchange for a certain return above the riskless rate. As one
would expect, banks deal in the greatest number of markets and types of products.
Banks, above all other institutions, including corporations, insurance companies,
and asset managers, face the greatest challenge in managing their credit risk. One
of the credit risk managers’ tools is the credit risk management model.

Credit risk management models

Financial institutions are just beginning to realize the benefits of credit risk management
models. These models are designed to help the risk manager project risk,
measure profitability, and reveal new business opportunities.
This chapter surveys the current state of the art in credit risk management
models. It provides the reader with the tools to understand and evaluate alternative
approaches to modeling. The chapter describes what a credit risk management model
should do, and it analyses some of the popular models. We take a high-level approach
to analysing models and do not spend time on the technical difficulties of their
implementation and application.
We conclude that the success of credit risk management models depends on sound
design, intelligent implementation, and responsible application of the model. While
there has been significant progress in credit risk management models, the industry
must continue to advance the state of the art. So far the most successful models
have been custom designed to solve the specific problems of particular institutions.
As a point of reference we refer to several credit risk management models which
have been promoted in the industry press. The reader should not interpret this as
either an endorsement of these models or as a criticism of models that are not cited
here, including this author’s models. Interested readers should pursue their own
investigation and can begin with the many references cited below.


Appendix: Intra- and interday P&L

For the purposes of backtesting, P&L from positions held from one day to the next
must be separated from P&L due to trading during the day. This is because market
risk measures only measure risk arising from the fluctuations of market prices and
rates with a static portfolio. To make a meaningful comparison of P&L with risk, the
P&L in question should likewise be the change in value of a static portfolio from close
of trading one day to close of trading the next. This P&L will be called interday P&L.
Contributions from trades during the day will be classified as intraday P&L. This
appendix aims to give unambiguous definitions for inter- and intraday P&L, and
show how they could be calculated for a portfolio. The basic information required for
this calculation is as follows:
- Prices of all instruments in the portfolio at the close of the previous business day.
  This includes the prices of all OTC instruments, and the price and number held
  of all securities or exchange traded contracts.
- Prices of all instruments in the portfolio at the close of the current business day.
  This also includes the prices of all OTC instruments, and the price and number
  held of all securities or exchange traded contracts.
- Prices of all OTC contracts entered into during the day. Price and amount of security
  traded for all securities trades (including exchange traded contract trades).
The definitions shown are for single-security positions. They can easily be extended
by summing together P&L for each security to form values for a whole portfolio. OTC
contracts can be treated similarly to securities, except that they only have one
intraday event. This is the difference between the value when the contract is entered
into and its value at the end of that business day.
Inter- and intraday P&L for a single-security position can be defined as follows:
Interday P&L = N(t0)(P(t1) - P(t0))
where
N(t) = number of units of security held at time t
P(t) = price of security at time t
t0 = close of yesterday
t1 = close of today
This is also the definition of synthetic P&L.
Intraday P&L is the total value of the day’s transactions marked to market at the
end of the day. For a position in one security, this could be written:
Intraday P&L = (N(t1) - N(t0))P(t1) - Σi ΔNi Pi
where the sum runs over all trades i = 1, ..., number of trades, and
ΔNi = number of units of security bought in trade i
Pi = price paid per unit of security in trade i
The first term is the value of net amount of the security bought during the day
valued at the end of the day. The second term can be interpreted as the cost of
purchase of this net amount, plus any profit or loss made on trades during the day.
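As a concrete illustration, the two definitions above can be sketched in a few lines of Python. The function names and the simple (units, price) trade records are illustrative, not from the text:

```python
def interday_pnl(n_yesterday: float, p_yesterday: float, p_today: float) -> float:
    """Interday P&L = N(t0) * (P(t1) - P(t0)); also the synthetic P&L."""
    return n_yesterday * (p_today - p_yesterday)

def intraday_pnl(n_yesterday: float, n_today: float, p_today: float,
                 trades: list) -> float:
    """Intraday P&L = (N(t1) - N(t0)) * P(t1) minus the cost of the day's trades.
    Each trade is a (units bought, price paid) pair."""
    cost = sum(dn * price for dn, price in trades)
    return (n_today - n_yesterday) * p_today - cost

# Example: held 100 units, price moves 50 -> 52; bought 20 units at 51 intraday.
inter = interday_pnl(100, 50.0, 52.0)               # 100 * 2 = 200
intra = intraday_pnl(100, 120, 52.0, [(20, 51.0)])  # 20*52 - 20*51 = 20
```

The intraday figure here is the mark-to-market gain on the units bought during the day, consistent with the interpretation in the text.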

Conclusion

This chapter has reviewed the backtesting process, giving practical details on how to
perform backtesting. The often difficult task of obtaining useful profit and loss figures
has been discussed in detail with suggestions on how to clean available P&L figures
for backtesting purposes. Regulatory requirements have been reviewed, with specific
discussion of the Basel Committee regulations, and the UK (FSA) and Swiss (EBK)
regulations. Examples were given of how backtesting graphs can be used to pinpoint
problems in P&L and risk calculation. The chapter concluded with a brief overview
of backtesting information available in the annual reports of some investment banks.

Review of backtesting results in annual reports

Risk management has become a focus of attention in investment banking over the
last few years. Most annual reports of major banks now have a section on risk
management covering credit and market risk. Many of these now include graphs
showing the volatility of P&L figures for the bank, and some show backtesting graphs.
Table 9.2 shows a summary of what backtesting information is present in annual
reports from a selection of banks.
Table 9.2 Backtesting information in annual reports

Company               Date   Risk management   P&L      Backtesting
                             section           graph    graph
Dresdner Bank         1997   Yes               No       No
Merrill Lynch         1997   Yes               Yes (a)  No
Deutsche Bank         1997   Yes               No       No (b)
J. P. Morgan          1997   Yes               Yes      No (c)
Lehman Brothers       1997   Yes               Yes (d)  No
ING Group             1997   Yes               No       No
ABN AMRO Holding      1997   Yes               No       No
Credit Suisse Group   1997   Yes               Yes      Yes
Sanwa Bank            1998   Yes               No (e)   Yes

(a) Merrill Lynch’s P&L graph is of weekly results. It shows 3 years’ results year by year for comparison.
(b) Deutsche Bank show a graph of daily value at risk.
(c) J. P. Morgan gives a graph of Daily Earnings at Risk (1-day holding period, 95% confidence interval) for two years. The P&L histogram shows average DEaR for 1997, rebased to the mean daily profit.
(d) Lehman Brothers’ graph is of weekly results.
(e) Sanwa Bank also show a scatter plot with risk on one axis and P&L on the other. A diagonal line indicates the confidence interval below which a point would be an exception.
Of the banks that compare risk to P&L, J. P. Morgan showed a number of exceptions
(12 at the 95% level) that was consistent with expectations. They interpret their Daily
Earnings at Risk (DEaR) figure in terms of volatility of earnings, and place the
confidence interval around the mean daily earnings figure of $12.5 million. The shift
of the P&L base for backtesting obviously affects the number of exceptions. It compensates for earnings that carry no market risk, such as fees and commissions, but it overstates the number of exceptions that would be obtained from a clean P&L figure with the average profit subtracted. Comparing to the average DEaR could over- or understate the number of exceptions relative to a comparison of each day’s P&L with the previous day’s risk figure.
Credit Suisse Group show a backtesting graph for their investment bank, Credit
Suisse First Boston. This graph plots the 1-day, 99% confidence interval risk figure
against P&L (this is consistent with requirements for regulatory reporting). The graph
shows no exceptions, and only one loss that even reaches close to half the 1-day,
99% risk figure. The Credit Suisse First Boston annual review for 1997 also shows a
backtesting graph for Credit Suisse Financial Products, Credit Suisse First Boston’s
derivative products subsidiary. This graph also shows no exceptions, and has only
two losses that are around half of the 1-day, 99% risk figure. Such graphs show that
the risk figure measured is overestimating the volatility of earnings. However, the
graph shows daily trading revenue, not clean or hypothetical P&L prepared specially
for backtesting. In a financial report, it may make more sense to show actual trading
revenues than a specially prepared P&L figure that would be more difficult to explain.
Sanwa Bank show a backtesting graph comparing 1-day, 99% confidence interval
risk figures with actual P&L (this is consistent with requirements for regulatory
reporting). Separate graphs are shown for the trading and banking accounts. The
trading account graph shows only one loss greater than half the risk figure, while
the banking account graph shows one exception. The trading account graph shows
an overestimate of risk relative to volatility of earnings, while the banking account
graph is consistent with statistical expectations.
The backtesting graphs presented by Credit Suisse Group and Sanwa Bank indicate
a conservative approach to risk measurement. There are several good reasons for this:
• It is more prudent to overestimate, rather than underestimate, risk. This is especially so as market risk measurement systems in general do not have several years of proven performance.
• From a regulatory point of view, overestimating risk is acceptable, whereas an underestimate is not.
• Risk measurement methods may include a margin for extreme events and crises.
Backtesting graphs for 1998 will probably show some exceptions.
This review of annual reports shows that all banks reviewed have risk management
sections in their annual reports. Backtesting information was only given in a few cases,
but some information on volatility of P&L was given in over half the reports surveyed.

Systems requirements

A backtesting system must store P&L and risk data, and be able to process it into a suitable form. It is useful to be able to produce backtesting graphs and exception statistics. The following data should be stored:
• P&L figures broken down by:
  – Business unit (trading desk, trading book)
  – Source of P&L (fees and commissions, provisions, intraday trading, interday trading)
• Risk figures broken down by:
  – Business unit
The backtesting system should be able at a minimum to produce backtesting
graphs, and numbers of exceptions at each level of the business unit hierarchy. The
system should be able to process information in a timely way. Data must be stored
so that at least 1 year’s history is available.
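A minimal sketch of the exception count such a system should produce, comparing each day's risk figure with the following day's P&L as backtesting requires. The function name and simple list layout are illustrative:

```python
def count_exceptions(risk: list, pnl: list) -> int:
    """Count days whose loss exceeds the previous day's 1-day risk figure.
    risk[t] (reported at the close of day t) is compared with pnl[t+1],
    the P&L over the following trading day."""
    return sum(1 for r, p in zip(risk[:-1], pnl[1:]) if -p > r)

# A loss of 12 against a risk figure of 10 is one exception.
n = count_exceptions([10.0, 10.0, 10.0], [0.0, -12.0, 5.0])
```

In practice the same function would be applied at each level of the business unit hierarchy over at least a one-year history.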

Risk measured is too high

Possible causes
It is often difficult to aggregate risk across risk factors and broad risk categories in a
consistent way. Choosing too conservative a method for aggregation can give risk
figures that are much too high. An example would be using a simple sum across
delta, gamma, and vega risks, then also using a simple sum between interest rate,
FX, and equity risk. In practice, losses in these markets would probably not be
perfectly correlated, so the risk figure calculated in this way would be too high.
A similar cause is that figures received for global aggregation may consist of
sensitivities from some business units, but risk figures from others. Offsetting and
diversification benefits between business units that report only total risk figures
cannot be measured, so the final risk figure is too high.
Solutions
Aggregation across risk factors and broad risk categories can be done in a number of ways. None of these is perfect, and this chapter will not discuss the merits of each in detail. Possibilities include:
• Historical simulation
• Constructing a large correlation matrix including all risk factors
• Assuming zero correlation between broad risk categories (regulators would require quantitative evidence justifying this assumption)
• Assuming some worst-case correlation (between 0 and 1) that could be applied to the risks (rather than the sensitivities) no matter whether long or short positions were held in each broad risk category.
To gain full offsetting and diversification benefits at a global level, sensitivities must
be collected from all business units. If total risks are reported instead, there is no
practical way of assessing the level of diversification or offsetting present.
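To make the effect of these aggregation choices concrete, here is a small numerical sketch with two invented stand-alone risk figures for two broad risk categories:

```python
import math

# Stand-alone risk figures for two broad risk categories (invented numbers).
r_ir, r_fx = 100.0, 80.0   # interest rate risk, FX risk

# Simple sum: implicitly assumes losses occur simultaneously (perfect correlation).
simple_sum = r_ir + r_fx                      # 180.0

# Zero correlation between the categories.
zero_corr = math.sqrt(r_ir ** 2 + r_fx ** 2)  # about 128.1

# An assumed worst-case correlation between 0 and 1, applied to the risks.
rho = 0.5
worst_case = math.sqrt(r_ir ** 2 + r_fx ** 2 + 2 * rho * r_ir * r_fx)  # about 156.2
```

The simple sum is the most conservative figure; if realized losses are not perfectly correlated, it overstates the aggregate risk, which is exactly the effect described above.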
P&L has a positive bias
There may be exceptions on the positive but not the negative side. Even without
exceptions, the P&L bars are much more often positive than negative (Figure 9.7).

Solutions

To identify missing risk factors, the risk measurement method should be compared
with the positions held by the trading desk in question. It is often helpful to discuss
sources of risk with traders, as they often have a good idea of where the main risks of their positions lie. A risk factor could be missed if instruments’ prices depend on
a factor outside the four broad risk categories usually considered (e.g. prices of
mortgage backed securities depend on real estate values). Also, positions may be
taken that depend on spreads between two factors that the risk measurement system
does not distinguish between (e.g. a long and a short bond position both fall in the
same time bucket, and appear to hedge each other perfectly).
When volatility increases suddenly, a short observation period could be substituted
for the longer observation period usually used for calculating extreme moves. Most
regulators allow this if the overall risk figure increases as a result.
Mapping of business units for P&L and risk calculation should be the same. When
this is a problem with banking and trading book positions held for the same trading
desk, P&L should be broken down so that P&L arising from trading book positions
can be isolated.

Possible causes

There may be risk factors that are not included when risk is measured. For instance,
a government bond position hedged by a swap may be considered riskless, when
there is actually swap spread risk. Some positions in foreign currency-denominated
bonds may be assumed to be FX hedged. If this is not so, FX fluctuations are an
extra source of risk.
Especially if the problem only shows up on a recent part of the backtesting graph,
the reason may be that volatilities have increased. Extreme moves used to calculate
risk are estimates of the maximum moves at the 99% confidence level of the
underlying market prices or rates. Regulatory requirements specify a long observation
period for extreme move calculation (at least one year). This means that a sharp
increase in volatility may not affect the size of extreme moves used for risk measurement
much even if these are recalculated. A few weeks of high volatility may have a
relatively small effect on extreme moves calculated from a two-year observation
period. Figure 9.5 shows the risk and return on a position in the S&P 500 index. The
risk figure is calculated using two years of historical data, and is updated quarterly.
The period shown is October 1997 to October 1998. Volatility in the equity markets
increased dramatically in September and October 1998. The right-hand side of the
graph shows several exceptions as a result of this increase in volatility.
The mapping of business units for P&L reporting may be different from that used
for risk reporting. If extra positions are included in the P&L calculation that are
missing from the risk calculation, this could give a risk figure that is too low to
explain P&L fluctuations. This is much more likely to happen at a trading desk level
than at the whole bank level. This problem is most likely to occur for trading desks
that hold a mixture of trading book and banking book positions. Risk calculations
may be done only for the trading book, but P&L may be calculated for both trading
and banking book positions.

Analysis of backtesting graphs

Backtesting graphs prepared at a trading desk level as well as a whole bank level
can make certain problems very clear. A series of examples shows how backtesting
graphs can help check risk and P&L figures in practice, and reveal problems that
may not be easily seen by looking at separate risk and P&L reports. Most of the
examples below have been generated synthetically using normally distributed P&L,
and varying risk figures. Figure 9.5 uses the S&P 500 index returns and risk of an
index position.

Benefits of backtesting beyond regulatory compliance

Displaying backtesting data
Stating a number of exceptions over a given period gives limited insight into the
reliability of risk and P&L figures. How big were the exceptions? Were they closely
spaced in time, or separated by several weeks or months? A useful way of displaying
backtesting data is the backtesting graph (see Figure 9.2). The two lines represent
the 1-day 99% risk figure, while the columns show the P&L for each day. The P&L is
shifted in time relative to the risk so that the risk figure for a particular day is
compared with the P&L for the following trading day. Such a graph shows not only
how many exceptions there were, but also their timing and magnitude. In addition,
missing data, or unchanging data can be easily identified.
Many banks show a histogram of P&L in their annual report. This does not directly
compare P&L fluctuations with risk, but gives a good overall picture of how P&L was distributed over the year. Figure 9.3 shows a P&L histogram that corresponds to the
backtesting graph in Figure 9.2.
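A backtesting graph of this kind could be produced along the following lines. The data here are synthetic (normally distributed P&L against a flat 99% risk figure), and the use of matplotlib is an assumption, not something the text prescribes:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
days = 250
pnl = rng.normal(0.0, 1.0, days)   # synthetic daily P&L
risk = np.full(days, 2.33)         # flat 1-day, 99% risk figure

# Shift: the risk figure from day t is drawn against the P&L of day t+1.
x = np.arange(1, days)
fig, ax = plt.subplots(figsize=(10, 4))
ax.bar(x, pnl[1:], color="grey", label="P&L")
ax.plot(x, risk[:-1], "r-", label="+risk")
ax.plot(x, -risk[:-1], "r-")
ax.legend(loc="upper right")
fig.savefig("backtest.png")

# An exception is a loss larger than the previous day's risk figure.
exceptions = int((-pnl[1:] > risk[:-1]).sum())
```

Gaps in the bars would reveal missing data, and a flat run of identical bars would reveal stale data, which is why the graph is such a useful diagnostic.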

Backtesting to support specific risk measurement

In September 1997, the Basel Committee on Banking Supervision released a modification
(1997a,b) to the Amendment to the Capital Accord to include market risks
(1996) to allow banks to use their internal models to measure specific risk for
capital requirements calculation. This document specified additional backtesting
requirements to validate specific risk models. The main points were:
• Backtesting must be done at the portfolio level on portfolios containing significant specific risk.
• Exceptions must be analysed. If the number of exceptions falls in the red zone for any portfolio, immediate action must be taken to correct the model. The bank must demonstrate that it is setting aside sufficient capital to cover extra risk not captured by the model.
FSA and EBK regulations on the backtesting requirements to support specific risk
measurement follow the Basel Committee paper very closely.

EBK regulations

The Eidgenössische Bankenkommission, the Swiss regulator, gives its requirements
in the document Richtlinien zur Eigenmittelunterlegung von Marktrisiken (Regulations
for Determining Market Risk Capital) (1997). The requirements are generally closely
in line with the Basel Committee requirements. Reporting of exceptions is on a quarterly basis unless the number of exceptions is greater than four, in which case the regulator must be informed immediately. The bank is free to choose whether dirty, clean, or synthetic P&L is used for backtesting. The chosen P&L, however, must be free from components that systematically distort the backtesting results.

FSA regulations

The Financial Services Authority (FSA) is the UK banking regulator. Its requirements
for backtesting are given in section 10 of the document Use of Internal Models to
Measure Market Risks (1998). The key points of these regulations that clarify or go
beyond the requirements of the Basel Committee are now discussed.
• When a bank is first seeking model recognition (i.e. approval to use its internal market risk measurement model to set its market risk capital requirement), it must supply 3 months of backtesting data.
• When an exception occurs, the bank must notify its supervisor orally by close of business two working days after the loss is incurred.
• The bank must supply a written explanation of exceptions monthly.
• A result in the red zone may lead to an increase in the multiplication factor greater than 1, and may lead to withdrawal of model recognition.
The FSA also explains in detail how exceptions may be allowed to be deemed
‘unrecorded’ when they do not result from deficiencies in the risk measurement
model. The main cases when this may be allowed are:
• Final P&L figures show that the exception did not actually occur.
• A sudden increase in market volatility led to exceptions that nearly all models would fail to predict.
• The exception resulted from a risk that is not captured within the model, but for which regulatory capital is already held.
Other capabilities that the bank ‘should’ rather than ‘must’ have are the ability to analyse P&L (e.g. by option greeks), and to break backtesting down to the trading book level. The bank should also be able to do backtesting based on hypothetical P&L
(although not necessarily on a daily basis), and should use clean P&L for its daily
backtesting.

Regulatory requirements

The Basel Committee on Banking Supervision sets out its requirements for backtesting
in the document Supervisory framework for the use of ‘backtesting’ in conjunction
with the internal models approach to market risk capital requirements (1996b). The
key points of the requirements can be summarized as follows:
• Risk figures for backtesting are based on a 1-day holding period and a 99% confidence interval.
• A 1-year observation period is used for counting the number of exceptions.
• The number of exceptions is formally tested quarterly.
The committee also urges banks to develop the ability to use synthetic P&L as well
as dirty P&L for backtesting.
The result of the backtesting exercise is a number of exceptions. This number is
used to adjust the multiplier used for calculating the bank’s capital requirement for
market risk. The multiplier is the factor by which the market risk measurement is
multiplied to arrive at a capital requirement figure. The multiplier can have a
minimum value of 3, but under unsatisfactory backtesting results can have a value
up to 4. Note that the value of the multiplier set by a bank’s local regulator may also
be increased for other reasons. Table 9.1 (Table 2 from Basel Committee on Banking
Supervision (1996b)) provides guidelines for setting the multiplier.
The numbers of exceptions are grouped into zones. A result in the green zone is
taken to indicate that the backtesting result shows no problems in the risk measurement
method. A result in the yellow zone is taken to show possible problems. The
bank is asked to provide explanations for each exception, the multiplier will probably
be increased, and risk measurement methods kept under review. A result in the red
zone is taken to mean that there are severe problems with the bank’s risk measurement
model or system. Under some circumstances, the local regulator may decide that there is an acceptable reason for an exception (e.g. a sudden increase in market
volatilities). Some exceptions may then be disregarded, as they do not indicate
problems with risk measurement.
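The zone boundaries and plus factors of Table 9.1 can be sketched as a simple lookup. The figures below are those commonly quoted from the Basel Committee (1996b) table (green: 0 to 4 exceptions, yellow: 5 to 9, red: 10 or more over a 250-day year) and should be checked against the original before use:

```python
def zone(exceptions: int) -> str:
    """Traffic-light zone for the number of exceptions in a 250-day year."""
    if exceptions <= 4:
        return "green"
    return "yellow" if exceptions <= 9 else "red"

def basel_multiplier(exceptions: int) -> float:
    """Multiplier = 3 plus a zone-dependent 'plus factor', capped at 4."""
    plus = {5: 0.40, 6: 0.50, 7: 0.65, 8: 0.75, 9: 0.85}
    if exceptions <= 4:
        return 3.0                          # green zone: no add-on
    return 3.0 + plus.get(exceptions, 1.0)  # red zone: full plus factor of 1
```

A local regulator may set a higher multiplier than this schedule for other reasons, as noted above.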
Local regulations are based on the international regulations given in Basel Committee
on Banking Supervision (1996b) but may be more strict in some areas.

Further P&L analysis for option books

P&L analysis (or P&L attribution) breaks down P&L into components arising from
different sources. The above breakdown removes unwanted components of P&L so
that a clean P&L figure can be calculated for backtesting. Studying these other
components can reveal useful information about the trading operation. For instance,
on a market-making desk, does most of the income come from fees and commissions
and spreads as expected, or is it from positions held from one day to the next? A
change in the balance of P&L from different sources could be used to trigger a further
investigation into the risks of a trading desk.
The further breakdown of interday P&L is now considered. In many cases, the P&L
analysis would be into the same factors as are used for measuring risk. For instance,
P&L from a corporate bond portfolio could be broken down into contributions from
treasury interest rates, movements in the general level of spreads, and the movements
of specific spreads of individual bonds in the portfolio. An equity portfolio could have
P&L broken down into one component from moves in the equity index, and another
from movement of individual stock prices relative to the index. This type of breakdown
allows components of P&L to be compared to general market risk and specific risk
separately. More detailed backtesting can then be done to demonstrate the adequacy
of specific risk measurement methods.

P&L for options can be attributed to delta, gamma, vega, rho, theta, and residual terms. The option price changes from one day to the next according to the changes in the underlying price, the volatility input to the model, interest rates, and time, and this change can be broken down accordingly. The breakdown for a single option can be written as follows:

Δc = (∂c/∂S)ΔS + ½(∂²c/∂S²)(ΔS)² + (∂c/∂σ)Δσ + (∂c/∂r)Δr + (∂c/∂t)Δt + Residual

where c is the option price, S the underlying price, σ the implied volatility, r the interest rate, and t time.
This formula can also be applied to a portfolio of options on one underlying. For a
more general option portfolio, the greeks relative to each underlying would be
required. If most of the variation of the price of the portfolio is explained by the
greeks, then a risk measurement approach based on sensitivities is likely to be
effective. If the residual term is large, however, a full repricing approach would be
more appropriate. The breakdown of P&L allows more detailed backtesting to validate
risk measurement methods by risk factor, rather than just at an aggregate level.
When it is possible to see what types of exposure lead to profits and losses,
problems can be identified. For instance, an equity options desk may make profits
on equity movements, but losses on interest rate movements.
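The attribution formula above can be sketched for a single call option, here using Black–Scholes prices with finite-difference greeks purely as an illustration (the text does not prescribe a pricing model):

```python
import math

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (illustrative model only)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def attribute(S0, S1, sig0, sig1, r0, r1, T0, T1, K, h=1e-4):
    """Break the day's price change into delta, gamma, vega, rho, theta
    contributions plus a residual, using central finite differences."""
    c0 = bs_call(S0, K, T0, r0, sig0)
    c1 = bs_call(S1, K, T1, r1, sig1)
    delta = (bs_call(S0 + h, K, T0, r0, sig0) - bs_call(S0 - h, K, T0, r0, sig0)) / (2 * h)
    gamma = (bs_call(S0 + h, K, T0, r0, sig0) - 2 * c0
             + bs_call(S0 - h, K, T0, r0, sig0)) / h ** 2
    vega = (bs_call(S0, K, T0, r0, sig0 + h) - bs_call(S0, K, T0, r0, sig0 - h)) / (2 * h)
    rho = (bs_call(S0, K, T0, r0 + h, sig0) - bs_call(S0, K, T0, r0 - h, sig0)) / (2 * h)
    theta = (bs_call(S0, K, T0 + h, r0, sig0) - bs_call(S0, K, T0 - h, r0, sig0)) / (2 * h)
    dS, dsig, dr, dT = S1 - S0, sig1 - sig0, r1 - r0, T1 - T0
    explained = (delta * dS + 0.5 * gamma * dS ** 2
                 + vega * dsig + rho * dr + theta * dT)
    return c1 - c0, explained, (c1 - c0) - explained

# One day on an at-the-money call: spot 100 -> 101, vol 20% -> 21%, one day passes.
total, explained, residual = attribute(100.0, 101.0, 0.20, 0.21,
                                       0.05, 0.05, 0.5, 0.5 - 1 / 365, 100.0)
```

For this small move the residual (cross-greek and higher-order effects) is a tiny fraction of the total change, the situation in which a sensitivity-based risk measure is likely to be effective.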

Clean P&L

Clean P&L for backtesting purposes is calculated by removing unwanted components
from the dirty P&L and adding any missing elements. This is done to the greatest possible
extent given the information available. Ideally, the clean P&L should not include:
• Fees and commissions
• Profits or losses from bid–mid–offer spreads
• Provisions
• Income from intraday trading

The clean P&L should include:
• Interday P&L
• Daily funding costs
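The cleaning step can be summarized in one illustrative function. The sign conventions are assumptions: fees, spreads, and intraday income are taken to enter dirty P&L as gains, provisions as losses, and the daily funding cost as a missing charge:

```python
def clean_pnl(dirty: float, fees: float, spread_pnl: float,
              provisions_taken: float, intraday_pnl: float,
              funding_cost: float) -> float:
    """Strip income that carries no market risk, add back provisions
    (which appear in dirty P&L as losses), and charge the daily funding
    cost missing from the reported figure."""
    return dirty - fees - spread_pnl + provisions_taken - intraday_pnl - funding_cost

# Dirty P&L of 100 with 10 of fees, 5 of spread income, 2 of provisions taken,
# 20 of intraday income and a 3 funding charge leaves 64 of clean interday P&L.
clean = clean_pnl(100.0, 10.0, 5.0, 2.0, 20.0, 3.0)
```

In practice each component is only removed to the extent the information is available, as the text notes.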
Synthetic or hypothetical P&L
Instead of cleaning the existing P&L figures, P&L can be calculated separately for
backtesting purposes. Synthetic P&L is the P&L that would occur if the portfolio was
held constant during a trading day. It is calculated by taking the positions from the
close of one trading day (exactly the positions for which risk was calculated), and
revaluing these using prices and rates at the close of the following trading day.
Funding positions should be included. This gives a synthetic P&L figure that is
directly comparable to the risk measurement. This could be written:
Synthetic P&L = P0(t1) − P0(t0)
where
P0(t0) is the value of the portfolio held at time 0, valued with the market prices as of time 0
P0(t1) is the value of the portfolio held at time 0, valued with the market prices as of time 1
The main problem with calculating synthetic P&L is valuing the portfolio with
prices from the following day. Some instruments in the portfolio may have been sold,
so to calculate synthetic P&L, market prices must be obtained not just for the
instruments in the portfolio but for any that were in the portfolio at the end of the
previous trading day. This can mean extra work for traders and business unit control
or accounting staff. The definition of synthetic P&L is the same as that of interday
P&L given in the Appendix.
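A sketch of the synthetic P&L calculation, with invented positions and prices. Note that today's prices are needed for every instrument held at the previous close, even if it was sold during the day:

```python
def synthetic_pnl(positions: dict, prices_t0: dict, prices_t1: dict) -> float:
    """P0(t1) - P0(t0): revalue the t0 closing portfolio at t1 prices
    and subtract its value at t0 prices."""
    v0 = sum(n * prices_t0[name] for name, n in positions.items())
    v1 = sum(n * prices_t1[name] for name, n in positions.items())
    return v1 - v0

positions = {"BOND_A": 1000, "BOND_B": -500}   # t0 closing positions (invented)
p0 = {"BOND_A": 99.5, "BOND_B": 101.2}
p1 = {"BOND_A": 99.8, "BOND_B": 101.0}         # must cover all t0 holdings
result = synthetic_pnl(positions, p0, p1)      # 1000*0.3 - 500*(-0.2), about 400
```

Because the positions are exactly those for which risk was calculated, this figure is directly comparable to the 1-day risk measure.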

Realized and unrealized P&L

P&L is usually also separated into realized and unrealized P&L. In its current form,
backtesting only compares changes in value of the portfolio with value at risk. For this
comparison, the distinction between realized and unrealized P&L is not important. If
backtesting were extended to compare cash-flow fluctuations with a cash-flow at risk
measure, this distinction would be relevant.

Intraday trading

Some trading areas (e.g. FX trading) make a high proportion of their profits and
losses by trading during the day. Daily risk reports only report the risk from end of
day positions being held to the following trading day. For these types of trading, daily
risk reporting does not give an accurate picture of the risks of the business.
Backtesting is based on daily risk figures and a 1-day holding period. It should use
P&L with contributions from intra-day trading removed. The Appendix gives a detailed
definition of intra- and interday P&L with some examples.
It may be difficult to separate intraday P&L from the general P&L figures reported.
For trading desks where intraday P&L is most important, however, it may be possible
to calculate synthetic P&L relatively easily. Synthetic P&L is based on revaluing
positions from the end of the previous day with the prices at the end of the current
day (see below for a full discussion). Desks where intraday P&L is most important
are FX trading and market-making desks. For these desks, there are often positions
in a limited number of instruments that can be revalued relatively easily. In these
cases, calculating synthetic P&L may be a more practical alternative than trying to
calculate intraday P&L based on all trades during the day, and then subtracting it
from the reported total P&L figure.

Funding

When a trading desk buys a security, it requires funding. Often, funding is provided
by the bank’s treasury desk. In this case, it is usually not possible to match up
funding positions to trading positions or even identify which funding positions belong
to each trading desk. Sometimes, funding costs are not calculated daily, but a
monthly average cost of funding is given. In this case, daily P&L is biased upwards
if the trading desk overall requires funds, and this is corrected by a charge for
funding at the month end. For backtesting, daily funding costs should be included
with daily P&L figures. The monthly funding charge could be distributed retrospectively.
However, this would not give an accurate picture of when funding was actually
required. Also, it would lead to a delay in reporting backtesting exceptions that would
be unacceptable to some regulators.

Provisions

When a provision is taken, an amount is set aside to cover a possible future loss.
For banking book positions that are not marked to market (e.g. loans), provisioning
is a key part of the portfolio valuation process. Trading positions are marked to
market, though, so it might seem that provisioning is not necessary. There are
several situations, however, where provisions are made against possible losses.
• The portfolio may be marked to market at mid-prices and rates. If the portfolio had to be sold, the bank would only receive the bid prices. A provision of the mid–bid spread may be taken to allow for this.
• For illiquid instruments, market spreads may widen if an attempt is made to sell a large position. Liquidity provisions may be taken to cover this possibility.
• High-yield bonds pay a substantial spread over risk-free interest rates, reflecting the possibility that the issuer may default. A portfolio of a small number of such bonds will typically show steady profits from this spread with occasional large losses from defaults. Provisions may be taken to cover losses from such defaults.
When an explicit provision is taken to cover one of these situations, it appears as
a loss. For backtesting, such provisions should be removed from the P&L figures.
Sometimes, provisions may be taken by marking the instrument to the bid price or
rate, or to an even more conservative price or rate. The price of the instrument may
not be marked to market daily. Price testing controls verify that the instrument is
priced conservatively, and therefore, there may be no requirement to price the
instrument except to make sure it is not overvalued. From an accounting point of
view, there is no problem with this approach. However, for backtesting, it is difficult
to separate out provisions taken in this way, and recover the mid-market value of
the portfolio. Such implicit provisions smooth out fluctuations in portfolio value, and
lead to sudden jumps in value when provisions are reevaluated. These jumps may
lead to backtesting exceptions despite an accurate risk measurement method. This
is illustrated in Figure 9.9 (on p. 281).

Fees and commissions

When a trade is carried out, a fee may be payable to a broker, or a spread may be
paid relative to the mid-market price of the security or contract in question. Typically,
in a market making operation, fees will be received, and spreads will result in a
profit. For a proprietary trading desk, in contrast, fees would usually be paid, and
spreads would be a cost. In some cases, fees and commissions are explicitly stated
on trade tickets. This makes it possible to separate them from other sources of profit
or loss. Spreads, however, are more difficult to deal with. If an instrument is bought
at a spread over the mid-price, this is not generally obvious. The price paid and the
time of the trade are recorded, but the current mid-price at the time of the trade is
not usually available. The P&L from the spread would become part of intraday P&L,
which would not impact clean P&L. To calculate the spread P&L separately, the mid-price would have to be recorded with the trade, or it would have to be calculated
afterwards from tick-by-tick security price data. Either option may be too onerous to
be practical.
Fluctuations in fee income relate to changes in the volume of trading, rather than
to changes in market prices. Market risk measures give no information about risk
from changes in fee income, therefore fees and commissions should be excluded
from P&L figures used for backtesting.

Dirty or raw P&L

As noted above, P&L calculated daily by the business unit control or accounting
department usually includes a number of separate contributions.

Profit and loss calculation for backtesting

When market risk is calculated, it gives the loss in value of a portfolio over a given
holding period with a given confidence level. This calculation assumes that the
composition of the portfolio does not change during the holding period. In practice,
in a trading portfolio, new trades will be carried out. Fees will be paid and received,
securities bought and sold at spreads below or above the mid-price, and provisions
may be made against possible losses. This means that P&L figures may include
several different contributions other than those related to market risk measurement.
To compare P&L with market risk in a meaningful way, there are two possibilities.
Actual P&L can be broken down so that (as near as possible) only contributions from
holding a position from one day to the next remain. This is known as cleaning the
P&L. Alternatively, the trading positions from one day can be revalued using prices
from the following day. This produces synthetic or hypothetical P&L. Regulators
recognize both these methods. If the P&L cleaning is effective, the clean figure should
be almost the same as the synthetic figure. The components of typical P&L figures,
and how to clean them, or calculate synthetic P&L are now discussed.