Several approaches to the model-building process have been reviewed in the previous
section. This brief survey is, by necessity, far from complete. However, a general
comment should have emerged from the presentation: at each step, we have very quickly reached the limits of traditional modeling. Most financial models built within such a framework are motivated primarily by their mathematical tractability or their statistical fit rather than by their ability to describe reality. Many choices are made where the trade-off between reduced mathematical complexity and lost generality appears favourable. One should always
remember that the first goal of a model should not be to improve its statistical fit,
but to provide a reasonable approximation of reality. Therefore, it is important to
think more carefully about the actual physical or behavioural relationships connecting
a given model to reality.
What should be the properties of an ideal model? First, it should be theoretically
consistent, both internally and with the widely accepted theories in the field. Unless
it is a path-finder, a model that contradicts every existing model in its field should
be considered suspect and must prove its superiority. An ideal model should also be
flexible, simple, realistic, and well specified, in the sense that its inputs should be
observable and easily estimable, and should afford a direct economic or financial
interpretation. It should provide a good fit to the existing market data (if any), but
not necessarily an exact one. One should not forget that liquidity effects, tax effects,
bid–ask spreads and other market imperfections can often lead to ‘errors’ in the
quotations.10 Arbitrageurs exploit these errors, and including them in a perfectly
fitted model would result in a model with built-in arbitrage opportunities! Finally, it
should allow for an efficient and tractable pricing and hedging of financial instruments.
Of course, analytical methods are preferred, but numerical algorithms are also acceptable provided they do not impose an excessive computational burden.
Unfortunately, in practice, all these conditions are rarely met, and a trade-off has
to be made (see Figure 14.1). If we take the example of fixed-income derivatives,
single-factor time-invariant models do not fit the term structure well, cannot explain certain humped yield curves, do not allow for particular volatility structures, and cannot match cap and swaption prices simultaneously.11 But they provide simple analytical solutions for pricing and hedging bonds and bond options. In fact, the choice of model will depend on its specific use. For interest rates, the important questions are: What is the main goal of the model? How many factors do we really need? Which factors? Is the model's incremental complexity justified in light of its pricing and risk-management effectiveness?
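To make the tractability argument concrete, recall a standard textbook result, added here only for illustration: under the Vasicek (1977) model with risk-neutral short-rate dynamics

$$ \mathrm{d}r_t = \kappa(\theta - r_t)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t , $$

the time-$t$ price of a zero-coupon bond maturing at $T$ is available in closed form,

$$ P(t,T) = A(t,T)\,e^{-B(t,T)\,r_t}, \qquad B(t,T) = \frac{1 - e^{-\kappa (T-t)}}{\kappa}, $$
$$ \ln A(t,T) = \Bigl(\theta - \frac{\sigma^2}{2\kappa^2}\Bigr)\bigl(B(t,T) - (T-t)\bigr) - \frac{\sigma^2}{4\kappa}\,B(t,T)^2 , $$

and comparable closed forms exist for European options on such bonds. This is precisely the tractability that single-factor models trade against their limited ability to fit observed term structures.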
Very often, the best model will also vary across time. An interesting study related
to this is Hegi and Mujynya (1996). The authors attempted to select the ‘best’ model
to explain the behaviour of the one-month Euro–Swiss franc rate from a range of popular one-factor models. The set of potential candidates included the Merton (1973) model, the Vasicek (1977) model, the Cox, Ingersoll and Ross (1985) square-root process, the Dothan (1978) model, the geometric Brownian motion, the Brennan and Schwartz (1980) model, the Cox, Ingersoll and Ross (1980) variable-rate model, and the Cox (1975) constant elasticity of variance model. The parameters were estimated each month from weekly data over a
moving period of five or ten years. The results (see Figure 14.2) clearly show the
supremacy of the Vasicek (1977) and constant elasticity of variance models. But they
also indicate that the best model depends crucially on the time period considered
and on the length of the historical data used for estimation. Clearly, historical data
can include unique episodes that may not repeat in the future.
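Although the study itself does not present them this way, all of these candidate processes can be written as restrictions of a single short-rate diffusion, in the spirit of Chan, Karolyi, Longstaff and Sanders (1992):

$$ \mathrm{d}r_t = (\alpha + \beta r_t)\,\mathrm{d}t + \sigma\, r_t^{\gamma}\,\mathrm{d}W_t . $$

Merton (1973) corresponds to β = γ = 0, Vasicek (1977) to γ = 0, the Cox, Ingersoll and Ross (1985) square-root process to γ = 1/2, Dothan (1978) to α = β = 0 and γ = 1, the geometric Brownian motion to α = 0 and γ = 1, Brennan and Schwartz (1980) to γ = 1, the Cox, Ingersoll and Ross (1980) variable-rate model to α = β = 0 and γ = 3/2, and the Cox (1975) constant elasticity of variance model to α = 0. Seen this way, the 'best model' question becomes a question about which parameter restrictions the data support in a given period.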
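The study's estimation procedure is not detailed here; purely as an illustrative sketch (assuming an Euler discretisation estimated by ordinary least squares, with hypothetical function names), a rolling re-estimation of, say, the Vasicek (1977) model from weekly observations over a five-year moving window might look as follows:

import numpy as np

def estimate_vasicek(rates, dt):
    # Hypothetical helper: OLS estimation of the Vasicek model on the Euler
    # discretisation  r[i+1] - r[i] = kappa*(theta - r[i])*dt + sigma*sqrt(dt)*eps.
    r = np.asarray(rates, dtype=float)
    x, y = r[:-1], np.diff(r)
    slope, intercept = np.polyfit(x, y, 1)       # fit y = intercept + slope * x
    kappa = -slope / dt
    theta = intercept / (kappa * dt)
    resid = y - (intercept + slope * x)
    sigma = resid.std(ddof=2) / np.sqrt(dt)      # two parameters already estimated
    return kappa, theta, sigma

def rolling_estimates(rates, dt=1/52, window=5*52, step=4):
    # Re-estimate roughly every month (four weekly observations) on a moving window.
    return [estimate_vasicek(rates[end - window:end], dt)
            for end in range(window, len(rates) + 1, step)]

The same rolling-window logic applies to the other candidate models; only the discretised regression changes.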