Monday, March 14, 2011

Capital charge determination

One of the most important developments in financial risk management over the last
few years has been the emergence of risk capital measures such as Bankers
Trust's Capital-at-Risk (CaR) or J.P. Morgan's Value-at-Risk (VaR) and daily earnings
at risk (DEaR). The calculation of risk capital is essential for financial institutions:
it allows consistent risk comparisons across diverse positions, supports the determination
of capital adequacy, and underpins the performance measurement and evaluation of business
units or strategies on a risk-adjusted basis. It is now regarded as 'best practice' for
market risk measurement by financial institutions. However, one should remember
that Value-at-Risk is only a quick-and-dirty snapshot measure of risk. Believing
that it is possible to collapse the multiple dimensions of financial risk into one single
number (or even just a few) is itself an example of model risk.
First, there is a large set of basic approaches to measuring VaR, such as historical
simulation, delta-normal (also called analytic or variance–covariance, later extended
to delta–gamma for non-linear instruments), and Monte Carlo simulation. All these methods
have been extended to trade off speed of computation against accuracy. They all
assume liquid markets and constant portfolios over a given holding period. They
involve statistical assumptions, approximations and estimates, and are subject to a
strong implementation risk from a software point of view. As evidenced by Mahoney
(1995) and Hendricks (1996), the results of different implementations of Value-at-Risk
can differ dramatically, even when they use the same methodology and rely
on the same dataset! However, this should not reduce the value of the information
they provide.
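
To make these differences concrete, here is a minimal Python sketch (the return series, confidence level and portfolio value are purely illustrative assumptions) that estimates a one-day Value-at-Risk on the same data with two of the approaches above, historical simulation and delta-normal:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=1000) * 0.01   # hypothetical fat-tailed daily returns
confidence = 0.99
position_value = 1_000_000                          # assumed portfolio value

# Historical simulation: read the loss quantile straight from the empirical distribution.
hist_var = -np.quantile(returns, 1 - confidence) * position_value

# Delta-normal (variance-covariance): assume normality and scale the standard deviation.
z = norm.ppf(confidence)
param_var = (z * returns.std() - returns.mean()) * position_value

print(f"Historical-simulation VaR: {hist_var:,.0f}")
print(f"Delta-normal VaR:          {param_var:,.0f}")

Even though both estimates are computed from exactly the same dataset, they will generally disagree, which is the kind of divergence documented by Mahoney (1995) and Hendricks (1996).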
Second, from an ideal point of view, the Holy Grail for any bank or financial
institution is to have a single model covering all its assets and liabilities across the
world. In practice, the goal of such a universal model is rarely achieved. Each
business, each country, each trading desk has a local model that fits its needs and
makes very simplified assumptions for (apparently) secondary issues. For instance,
an equity derivative trader may use a sophisticated multi-factor equity model, while
a bond trader will focus instead on a multi-factor yield curve model. Both models will
perform well in their respective tasks, but as we move across businesses, for instance
to assess the Value-at-Risk of the entire firm, these models have to be integrated.
When a set of disparate and mutually inconsistent models is used together, the overall result is
generally inconsistent. Therefore, analysing model risk is crucial, whatever market
risk approach is used.
Evidently, the tail behaviour of the distribution of asset returns plays a central
role in estimating Value-at-Risk. Mathematically, the problem is complicated. First,
it is very difficult to estimate the tail of a distribution from (by definition) a limited
number of points. Second, violations of statistical assumptions may not affect the
average behaviour of a model, but they do produce non-robust estimates under extreme
conditions. Finally, the use of the law of large numbers and the central limit theorem to
justify normality is generally dangerously deceptive, as it leads to an underestimation
of Value-at-Risk that worsens as one moves further into the tails.
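
This last point can be illustrated with a small sketch: a Student-t distribution with 4 degrees of freedom (an assumed fat-tailed alternative) is rescaled to the same variance as a normal distribution, and the two quantiles are compared at increasing confidence levels. The normal quantile may even exceed the fat-tailed one at moderate confidence levels, but it falls increasingly short as one moves further into the tail:

from scipy.stats import norm, t

sigma = 0.01                                   # assumed daily volatility
nu = 4                                         # assumed degrees of freedom (tail heaviness)
t_scale = sigma / (nu / (nu - 2)) ** 0.5       # rescale the t so both have the same variance

for conf in (0.95, 0.99, 0.999):
    normal_q = norm.ppf(conf, scale=sigma)
    fat_q = t.ppf(conf, df=nu, scale=t_scale)
    print(f"{conf:.3f}: normal {normal_q:.4f} vs Student-t {fat_q:.4f} (ratio {fat_q / normal_q:.2f})")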
In addition, existing capital risk models are subject to important critiques. Local
approximations (such as Taylor series expansions and truncations) and the use of
local sensitivity measures (such as the greeks) make most models inconsistent with
large market events. Reliance on pricing models and distributional assumptions is
heavy. The stability of parameters over time is a crucial assumption. However, all these
models provide very valuable information in spite of their limitations. Many of their
adverse trade-offs can be mitigated through the design of adequate risk management
procedures.
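
A hedged sketch of the first critique: a Black-Scholes call (all parameter values are assumed for illustration) is revalued exactly and through first- and second-order Taylor approximations built from finite-difference greeks. The approximations track a small move well, remain acceptable for a moderate one, and break down badly for an extreme move:

import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, vol):
    # Black-Scholes price of a European call.
    d1 = (np.log(S / K) + (r + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
    d2 = d1 - vol * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S0, K, T, r, vol = 100.0, 100.0, 0.25, 0.02, 0.20   # illustrative parameters
eps = 0.01
p0 = bs_call(S0, K, T, r, vol)
delta = (bs_call(S0 + eps, K, T, r, vol) - bs_call(S0 - eps, K, T, r, vol)) / (2 * eps)
gamma = (bs_call(S0 + eps, K, T, r, vol) - 2 * p0 + bs_call(S0 - eps, K, T, r, vol)) / eps**2

for shock in (-0.02, -0.10, -0.30):                 # small, large and extreme market moves
    dS = S0 * shock
    exact = bs_call(S0 + dS, K, T, r, vol) - p0
    print(f"move {shock:+.0%}: exact {exact:+.2f}, "
          f"delta {delta * dS:+.2f}, delta-gamma {delta * dS + 0.5 * gamma * dS**2:+.2f}")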
Within certain limits, banks are now allowed to build their own Value-at-Risk
models to complement other existing models, such as the regulator’s building-block
approach, the covariance method with a normal portfolio, the delta-normal method,
the delta–gamma methods, or simulation-based models. An increasing challenge for
risk managers is to protect their institution’s capital against the realities of marking
to model versus marking to market. Although institutions begin with the same
generic definition of Value-at-Risk, their calculation methods and results differ widely.
Faced with the increasing complexity of products and the diversity of models, regulators
themselves have adopted a very pragmatic approach. Banks may calculate their ten-day-ahead
Value-at-Risk using their own internal models. To ensure that banks
use adequate internal models, regulators have introduced the ideas of exceptions,
backtesting and multipliers. An exception occurs when the effective loss exceeds the
Value-at-Risk calculated by the model. The market risk capital charge is computed using
the bank’s own estimate of the Value-at-Risk, times a multiplier that depends on the
number of exceptions over the last 250 days. As noted by the Basel Committee on
Banking Supervision (1996), the multiplier varies between 3 and 4, depending on
the magnitude and the number of exceptions (which must remain below a threshold;
otherwise, the bank’s model is declared inaccurate).
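
The backtesting scheme just described can be sketched in a few lines of Python. The plus factors below are those tabulated in the Basel Committee's 1996 backtesting framework, while the daily P&L and Value-at-Risk series are illustrative assumptions:

import numpy as np

def basel_multiplier(n_exceptions: int) -> float:
    # Map the number of exceptions over 250 days to the supervisory multiplier (3 to 4).
    plus_factor = {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0,         # green zone
                   5: 0.40, 6: 0.50, 7: 0.65, 8: 0.75, 9: 0.85}    # yellow zone
    if n_exceptions >= 10:                                          # red zone: model deemed inaccurate
        return 4.0
    return 3.0 + plus_factor[n_exceptions]

rng = np.random.default_rng(1)
daily_pnl = rng.normal(0.0, 1.0, 250)     # hypothetical daily P&L (in millions)
daily_var = np.full(250, 2.0)             # hypothetical 99% one-day VaR estimates (in millions)

exceptions = int((-daily_pnl > daily_var).sum())    # losses exceeding the reported VaR
multiplier = basel_multiplier(exceptions)
capital_charge = multiplier * daily_var[-1]         # simplified: multiplier times the bank's VaR

print(f"exceptions: {exceptions}, multiplier: {multiplier}, capital charge: {capital_charge:.2f}")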
Whatever form these penalties or Value-at-Risk adjustments take, they generally result in an
overestimation of the capital charge and are nothing more than simple ad hoc safety
procedures to account for the impact of model risk. A bank might use an inadequate
or inappropriate model, but the resulting impact will be mitigated by adjusting the
capital charge.
