The discussion above has shown that even with freedom to choose policies there are
plenty of ways in which reported accounting information can be misleading. For
external reporting there are the additional restrictions of legal regulations and
accounting standards, as well as the complications of separate legal entities. A given
trading desk frequently trades in several legal entities, the entity for each individual
trade being chosen based on where it is most tax-efficient.
While the paradigm of current-cost accounting for published accounts was investigated
in some detail during the inflation of the 1970s, it has sunk with very little
trace and virtually all published sets of accounts are prepared on a historic cost
basis. However, that would present a most misleading picture for most trading
houses. Treatments vary between countries: in the UK, for example, banks use the
‘true and fair’ override, in other words breaking the accounting rules laid down by
law in order to use more appropriate methods. The particular, more appropriate, method
used is marking the trading assets and liabilities to market prices. In Switzerland
the individual entity statutory accounts are prepared on a historic cost basis and so
are of little use to understanding profitability, but group accounts are published
using International Accounting Standards which do permit marking to market.
This is not the place for a discussion of the meaning of published accounts, but if
one assumes that they are for the benefit of current and potential investors, who in
a commonsense way define profit as being the amount by which the net assets of the
group have grown, then all the earlier discussion about FX seems unnecessary here
since the net assets approach will give the answer that they seek. However, there is
one important difference, which is that the published consolidated profits of a group of
companies are generally computed from the accounts of the individual entities
rather than from the whole list of assets and liabilities. Thus a profit made in
GBP in a Swiss subsidiary of an American bank may be converted first to CHF and
then to USD. This raises the likelihood that at least one of the conversions will be
done using the average rate method as discredited above. If so, then the reported
profit can depend on which entity a given transaction is booked in, which is not
sensible.
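To make the booking dependence concrete, here is a minimal Python sketch with invented average rates (none of these figures come from the text): converting the same GBP profit directly to USD at an average GBP/USD rate gives a different number from converting it first to CHF and then to USD, because period-average rates taken separately need not be mutually consistent.

```python
# Hypothetical period-average rates -- invented purely for illustration.
profit_gbp = 100.0

avg_gbp_usd = 1.50      # assumed average GBP/USD over the period
avg_gbp_chf = 2.20      # assumed average GBP/CHF over the period
avg_chf_usd = 0.70      # assumed average CHF/USD over the period

# Booked in a US entity: a single conversion at the average GBP/USD rate.
usd_direct = profit_gbp * avg_gbp_usd                    # 150.0

# Booked in the Swiss subsidiary: converted first to CHF, then to USD,
# each leg at its own average rate.
usd_via_chf = profit_gbp * avg_gbp_chf * avg_chf_usd     # 154.0

print(usd_direct, usd_via_chf)   # the same transaction reports two different profits
```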
We see that the position correctly shows no profit if we view both sides in any one
currency; here both the USD and GBP views are shown. However, if one had to produce a
profit number using just the Zurich reported loss of CHF200 and the Sydney reported
loss of AUD330 (both of which are correctly calculated) it is difficult to see how one
would ever come up with 0!
Management accounting is generally indifferent to whether an instrument is off-balance
sheet. The distinction matters more for published accounts, and the
effect can be seen in the following example, which compares the effect of a futures
position with that of a ‘cash’ position in the underlying, and also of a margined
‘cash’ position. As can be seen, the numbers appearing on the balance sheet are
much smaller for the off-balance-sheet futures position.
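The comparison table referred to above is not reproduced in this extract. The following sketch uses invented figures (the notional, margin and financing percentages are all assumptions) purely to illustrate the balance-sheet point: the same economic exposure produces a full-notional asset for an outright cash position, a large gross asset and liability for a margined cash position, and only the margin for the off-balance-sheet future.

```python
# All figures below are invented for illustration; they are not the book's example.
notional = 10_000_000            # assumed economic exposure to the underlying
initial_margin_rate = 0.05       # assumed futures initial margin
financed_fraction = 0.90         # assumed borrowing against a margined cash purchase

# Outright 'cash' position: the full value of the securities is an asset.
cash_assets = notional                                   # 10,000,000

# Margined 'cash' position: the securities remain an asset, but the financing
# appears as a liability, so the gross balance sheet stays large.
margined_assets = notional                               # 10,000,000
margined_liabilities = notional * financed_fraction      # 9,000,000

# Futures position: the contract is off balance sheet; only the margin shows.
futures_assets = notional * initial_margin_rate          # 500,000

print(cash_assets, (margined_assets, margined_liabilities), futures_assets)
```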
Readers of accounts have no way of telling from the balance sheet how many such
instruments are held. The notes to the accounts give some information about
notional values, but often at such an aggregated level that it is difficult to
interpret.
Trading groups also have other assets, of which the most material is often property.
These are generally not marked to market. Integrated finance houses which have
significant income from, say, asset management or corporate finance activities will
generally report those activities using a historical-cost method. The resultant mixing
of historical-cost and market values makes the published accounts very hard to
interpret.
Netting (for balance sheet)
All the above complications concerning internal consistency are most evident to the
accounting function which puts together all the assets and liabilities reported by the
separate trading desks to produce the group, or legal entity, balance sheets on which
regulatory capital is based. This is difficult when, as is common, information for
different trading desks is stored on separate systems. The resultant information may
lack accuracy especially if the trading function cares little for the result, and so is
not helpful.
The greatest lack of accuracy in practice concerns determining which assets and
liabilities can be netted. As capital charges are based on netted assets, and trading
organizations are usually short of capital, it is in the interest of the organization to net
as much as possible, but this places heavy demands on systems.
While traders rarely care about the accounting function in its preparation of the
balance sheets, that changes when trading desks are charged for their use of capital
since the capital charge will generally be based on numbers calculated by the
accounting function. Thus introducing charging for capital may result in the trading
function being more helpful to the accountants, thereby reducing accounting risk.
Such behavioural aspects of risk management are important, and should not be
neglected in the pursuit of mathematical measures of risk.
Intragroup timing
Counterintuitively, it may add distortion to price the same instrument at the same
price in all businesses in all time zones. This is most disturbing for most accountants
who learn early in their training that internal balances must always net out on
consolidation.
We consider the extreme example that there is a position between the offices in
Tokyo and New York which is exactly matched by positions with the outside world at
each end. If Tokyo positions are valued at the end of the trading day in Tokyo, and
New York positions at the end of the trading day in New York, we would see:
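The table that originally followed is not reproduced here; the sketch below uses assumed prices to show the consolidation problem. Each office is perfectly hedged and reports zero profit at its own close, yet the two sides of the internal position, valued at different times, fail to net out.

```python
# Assumed prices; the only point is that the market moves between the two closes.
units = 100
price_at_tokyo_close = 50.0     # assumed price when Tokyo marks its books
price_at_ny_close = 52.0        # assumed price when New York marks its books

# Tokyo: long the internal position against New York, short the same amount externally.
tokyo_internal = +units * price_at_tokyo_close   # +5,000
tokyo_external = -units * price_at_tokyo_close   # -5,000 -> Tokyo P&L is zero

# New York: the mirror image, but marked at the later New York close.
ny_internal = -units * price_at_ny_close         # -5,200
ny_external = +units * price_at_ny_close         # +5,200 -> New York P&L is zero

# On consolidation the internal balances should cancel, but do not:
print(tokyo_internal + ny_internal)              # -200 of spurious difference
```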
Non-simultaneous market closes
A particular hazard regarding certain instruments is the definition of closing price.
If distinct but related instruments (e.g. the underlying stocks, and the future on the
index consisting of those stocks) are traded on markets which close at different times
then a genuinely arbitraged position between the two markets will show noise in the
reported profit which may well be material.
While this is easy enough for one trading desk to allow for, by taking prices on the
later-closing exchange at the time of the earlier close, or by using theoretical prices
for instruments on the earlier-closing exchange derived from prices on the later close,
such an approach will again cause matching problems unless applied consistently
across an organization. Adopting one approach for daily reporting and a different
approach for month-end or year-end is not a good idea since it leads to everyone
having less confidence in any of the numbers.
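A minimal numerical sketch of the timing noise, with invented prices: a hedged long-basket/short-future position is marked with the stocks at their 16:00 close and the future at its 16:15 close, and the drift in between shows up as reported profit or loss even though the position is locked in.

```python
# Invented closing levels; carry and basis are ignored to keep the point visible.
basket_close_prev, future_close_prev = 1000.0, 1000.0   # yesterday's closes

basket_close_1600 = 1010.0     # today's stock-market close
future_close_1615 = 1014.0     # futures close 15 minutes later, after further drift

# Long the basket, short the future, in equal size.
pnl_basket = basket_close_1600 - basket_close_prev       # +10
pnl_future = -(future_close_1615 - future_close_prev)    # -14

print(pnl_basket + pnl_future)   # -4: timing noise that will reverse, not a real loss
```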
The bid–offer spread
The standard prudent accounting methodology is to value long positions at bid, and
short positions at offer. However, what about positions between two trading desks,
which clearly have no overall profit effect, but will show a loss under the standard
policy? Then there are trades between two books within one trading desk, which may
have been entered merely to neaten up books but will cause a loss to be reported
until the positions expire. The obvious approach is to price all positions at mid-market,
which has justification if the organization is a market-maker. However, it is
unlikely that the organization is a market-maker in all the instruments on its books.
The answer to this is to price at mid-market with provisions for the spread. When
there are offsetting positions within the organization no such provisions are made.
The method of implementation of this policy has implications for accountability. If
the provisions are made at a level above the books then there is the problem that the
sum of the profits of the books will be greater than the total, which is a sure way to
arguments over bonuses. If the provisions are made at book level, then the profit in
a given book can vary due purely to whether another book closed out its offsetting
position.
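The accountability problem can be seen in a small sketch with invented positions and an assumed half-spread. The provision the organization actually needs relates only to its net external position; where that provision is held determines whether book profits add up to the group total, and whether one book's provision can jump because of another book's actions.

```python
half_spread = 0.5          # assumed half of the bid-offer spread, per unit
book_a = +100              # book A long 100 units
book_b = -60               # book B short 60 units, offsetting internally

net_external = book_a + book_b                          # +40 faces the market
group_provision = abs(net_external) * half_spread       # 20 is all the group needs

# Held centrally, above the books: each book reports a clean mid-market profit,
# so the sum of the book profits exceeds the group total by 20.

# Pushed down to book level (say, charged to book A, which carries the net):
provision_a_now = abs(book_a + book_b) * half_spread    # 20
provision_a_if_b_closes = abs(book_a) * half_spread     # 50 once B closes its short,
                                                        # though A has done nothing
print(group_provision, provision_a_now, provision_a_if_b_closes)
```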
A related point concerns volatility. As volatility is an input parameter for OTC
option-pricing models, for valuation purposes it should be derived as far as possible
from prices for traded instruments. However, there will be a bid–offer spread on the implied volatility. Should the volatility parameter be the bid, offer or mid implied
volatility? As with absolute prices, the recommended approach is to use mid-market
with provisions for the fact that one cannot usually close out at mid. However, with
volatility, not only are the bid–offer spreads typically very wide, but there are the
extra complexities that the market is frequently one-sided and the volatility parameter
also affects the hedge ratios calculated by the models.
Consistency with trading strategy
Valuation and accounting policies are intimately linked. Many of the examples below
may seem rather detailed but they can lead to big swings in reported profits and
have often taken up many hours which could be used for other work.
There are some businesses where the rule that all instruments should be valued
using market rates causes complications. For example, arbitrage businesses would
never show any profit until the market moved back into line or the instruments
expired. Given the history of supposedly arbitraged trading strategies resulting in
large losses, this may not be regarded by senior management as a particular problem,
but the theory is worth thinking through.
If exactly the same instrument can be bought in one situation for a price lower
than it can be sold in another, then the rational action is to buy it at the lower price
and sell at the higher. As the instrument is exactly the same, the net position will be
zero, and there will be a positive cash balance representing the profit.
If the two instruments are very similar, and react in the same way to changes in
underlying prices, but are not quite identical then it still makes sense to buy at the
lower price and sell at the higher one. However, this time the instrument positions
will not net off and if prices have not moved, the difference between the values of the
positions will be exactly the opposite of the cash received, so the net profit will be
reported as zero. Eventually the market prices of the two instruments will become
closer, or they will turn into identical offsetting amounts of cash (or the strategy will
be shown not to be a perfect arbitrage!) and the profit will be recognized. Indeed in
the time before expiry it is possible that the prices will move further away from
equality and a loss will be reported. This is not popular with traders who want to be
able to report that they have locked in a ‘risk-free’ profit with their trading at
inception.
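A worked sketch with invented prices makes the recognition pattern explicit: the cash received at inception is exactly offset by the mark-to-market of the two legs, so no profit appears on day one, and a widening of the gap shows up as a loss even though the convergence trade may still pay off.

```python
# Invented prices for two nearly identical instruments.
buy_price, sell_price = 98.0, 100.0
cash = sell_price - buy_price                 # +2 received up front

# Marked at the same (unchanged) prices: long worth 98, short worth -100.
mtm_positions = 98.0 - 100.0                  # -2
print(cash + mtm_positions)                   # 0.0: no profit recognized at inception

# If the gap widens to 97 / 100.5 before converging, a loss is reported:
mtm_positions_later = 97.0 - 100.5
print(cash + mtm_positions_later)             # -1.5, despite the expected +2 at expiry
```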
A theoretical example of such a situation would be where an OTC instrument is
identical to the sum of two or more listed instruments. The listed instruments can
be valued at market prices. If the OTC valuation model uses inputs such as implied
volatility derived from other listed instruments and the market prices of the various
listed instruments are inconsistent, then we will end up with a profit/loss being
reported when it is certain that none will be eventually realized. This is a good
illustration that markets are not always perfect, and that accounts based on marking
to market may also not be perfect. It is easy to suggest that human judgement should
be left to identify and correct such situations, but once one introduces the potential
for judgement to override policy, there is a danger that its application becomes too
common, thus reducing the objectivity of reports.
Such considerations often lead to trading desks seeking to ensure that their
positions are valued using a basis which is internally consistent to reduce ‘noise’ in
the reported profit. However, for different trading desks to value the same instrument
differently causes great problems for central accounting functions which may be
faced with values which are supposed to net off, but do not do so. The benefit of
smoothing within a trading desk is usually obtained at the cost of introducing
inconsistency for group reporting.
A particular case where instruments may be valued differently without either
trading desk even seeing that there might be a problem is in options on interest rate
instruments. In some situations the underlying may be regarded as the price of a
bond, while in others it may be the yield on a bond. Most option valuation methodologies
require the assumption that the distribution of the underlying is log-normal.
As price and yield are more or less reciprocal, it is not possible for both of them to
be log-normal, and indeed different values will be obtained from two such systems
for the same instrument.
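A quick simulation illustrates the incompatibility (the maturity, yield level and volatility are assumed for illustration only): if the yield is drawn from a lognormal distribution, the log of the corresponding zero-coupon price is visibly skewed, so the price itself cannot also be lognormal, and systems built on the two assumptions will value the same option differently.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 10                                           # assumed bond maturity
# Lognormal yields around 5% with an assumed 20% volatility.
yields = 0.05 * np.exp(0.2 * rng.standard_normal(1_000_000))

prices = (1 + yields) ** (-n_years)                    # zero-coupon price per yield

log_prices = np.log(prices)
skew = np.mean((log_prices - log_prices.mean()) ** 3) / log_prices.std() ** 3
print(skew)   # clearly non-zero: log(price) is not normal, so price is not lognormal
```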
Valuing all the listed instruments involved using market prices rather than theoretical
prices is the correct approach. The reason why this can be said to have
accounting risk is that the reporting of the loss may lead senior management to
order a liquidation of the positions, which will definitely crystallize that loss, when
holding the instruments to maturity would have resulted in a profit. While there is
much dispute in the academic and business community, this is arguably at least
part of what happened at Metallgesellschaft in 1993.
Internal consistency
As with funding costs, the effects discussed in this subsection are usually very small
compared to the gross positions involved, but due to margins being very thin, can be
very large in comparison to the net profits of trading.
(The example table is not reproduced here; it showed a gain to Treasury of 4.)
If each trading desk went independently to the money-market the effective rate of
interest received would be 9.33%, but by netting before going to the market the bank
can achieve 10%. The treasury desk takes the difference as profit.
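A sketch of the arithmetic, with assumed position sizes chosen to reproduce the 9.33% and 10% figures quoted (they are not necessarily the figures in the original table): one desk has 500 to deposit, another needs to borrow 200, and the market pays 10% on deposits but charges 11% on borrowings.

```python
deposit, borrowing = 500.0, 200.0        # assumed desk positions
market_deposit_rate = 0.10               # market rate earned on deposits
market_borrow_rate = 0.11                # market rate paid on borrowings

# Each desk deals with the market independently:
interest_independent = deposit * market_deposit_rate - borrowing * market_borrow_rate
print(interest_independent / (deposit - borrowing))     # 0.0933... -> 9.33% on the net

# Treasury nets internally first and places only the net amount in the market:
interest_netted = (deposit - borrowing) * market_deposit_rate
print(interest_netted / (deposit - borrowing))          # 0.10 -> 10% on the net
# The difference is the gain that the treasury desk keeps as its profit.
```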
One could argue that since the bank as a whole deposits at 10%, Trading Line B
should only be charged 10% rather than 11%, but with rational traders this would
merely lead to an unstable game of cat and mouse, with each desk trying to be continually on the minority
side, which is best avoided. While other trading desks should be encouraged to use
the in-house treasury rather than outside banks, there is a danger that, if this is a fixed
rule, treasury will widen its spreads to make extra profit at the expense of the
other trading desks. It is therefore usual to leave trading desks the right to go outside.
The threat is usually sufficient to make treasury’s rates reasonable and so the right
is never exercised.
Some trading houses have no internal cash accounts, but use other data for
calculating funding costs. This makes it virtually impossible to perform accurate and
transparent calculations of the amount that each business line should be charged/
credited, and as such is a significant risk. As we saw above, segregated cash accounts
are also needed for calculating FX exposure and the effect on profit of FX rate
movements.
If a trading desk knows, or at least expects, that it will have a long or short cash
position for some time, then it will usually be able to obtain a better rate for a
term loan/deposit rather than continually rolling over the overnight position. Such
positions will usually also be taken with the in-house treasury desk.
While bond-trading and money-market desks tend to be very aware of the effect on
values and profits of the different ways of calculating and paying interest, and the
movement in market rates, such sophistication does not often extend to systems
used for internal cash accounts. Thus accrual accounting is common in these
systems.
This can cause mismatch problems. Say a trading desk buys an instrument which
is effectively an annuity, and values it by discounting the expected future cash flows.
The trading desk takes out a term loan from the treasury desk to make the purchase.
If nothing else happens other than that interest rates fall, then the value calculated
for the instrument will rise, but the value of the term loan from treasury will not. In
order to get accurate profit numbers it is necessary to value all instruments in all
books using market rates, including internal term loans and deposits. Once again,
systems need to reflect the policies adopted.
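The mismatch can be shown with a small discounting sketch (the cash flows, maturity and rates are all assumed): a desk buys an annuity paying 100 a year for five years, funded by a treasury term loan whose repayments mirror those cash flows exactly, and rates then fall from 8% to 6% with nothing else happening.

```python
def pv_annuity(payment, n, r):
    """Present value of n level annual payments discounted at rate r."""
    return sum(payment / (1 + r) ** t for t in range(1, n + 1))

payment, years = 100.0, 5
rate_at_purchase, rate_now = 0.08, 0.06

purchase_price = pv_annuity(payment, years, rate_at_purchase)   # also the loan principal
annuity_value_now = pv_annuity(payment, years, rate_now)

# Internal loan kept on an accrual basis: its carrying value does not move,
# so a sizeable "profit" appears even though the position is economically flat.
print(round(annuity_value_now - purchase_price, 2))     # roughly +22

# Internal loan also marked to market: the mirrored cash flows revalue identically.
loan_value_now = pv_annuity(payment, years, rate_now)
print(round(annuity_value_now - loan_value_now, 2))     # 0.0
```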
Funding
While the definition of cash is fairly obvious for most businesses, this is less so for a
bank. The money-market business line of a bank has both deposits and obligations
with other banks and other market participants, of various maturities yielding
various rates. Other business lines deal in other instruments which are settled in
what they view as cash. Clarity is much improved by having a treasury desk with
which all other business lines have accounts which they regard as their ‘bank
accounts’. The treasury desk then functions as the in-house bank, charging interest
to those business lines that have borrowed from it, and crediting interest to those
which have deposited funds with it. Because of the fine margins in many trading
businesses this interest charge is usually a very material proportion of the profit/
loss of many business lines, and so its accuracy is extremely important.
The treasury desk will usually charge a spread to its internal customers, reflecting
the fact that there is a spread between the borrowing and lending rates in the market,
and will make a profit even if it charges no larger spread than the market, as we see
from the following example:
Cross-currency products
A ledger which maintains all entries in the transaction currency can handle all the
pitfalls mentioned above and can give the FX exposures of the business line so long
as all the instruments are mono-currency. The exposures are simply the balances
on the profit accounts.
However, there are some products which involve more than one currency. Forward
FX trades are the simplest and the population includes currency swaps as well
as the more exotic instruments often referred to as quantos. Terminology is not
standardized in this area, but an example of such an instrument would be an option
which paid out USD1000 × max(St − S0, 0), where St and S0 are the final and strike
values of a stock quoted in a currency other than USD.
Currency exposure reporting is not easy for systems with such cross-currency
instruments. If FX delta is to be calculated by the risk system performing recalculation
then the risk system must have cash balances in it (including those related to
the profit remittances discussed above), which is not often the case. Performing
recalculation on the market instruments while ignoring cash balances can give very
misleading reports of exposure, resulting in mis-hedging. The best approach for FX
delta is therefore to have such instruments valued in component currencies, so that
the ledger will indeed show the correct exposure. FX gamma can only be obtained by
recalculation within the valuations system (the FX gamma on a cash balance is zero).
Note that not every product which pays out in a currency different from that of the
underlying is a true cross-currency instrument. For example, if the payout is (any
function of the underlying alone) × (spot rate on expiry) then there is no actual FX
risk for the seller, since one can execute an FX spot trade at the spot rate on expiry,
effectively on behalf of the client. This is an example where complexity is sometimes
seen where it does not exist, in what has been called ‘phantom FX’.
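The distinction can be expressed as two payoff functions (the strike, final price and expiry FX rate below are assumed inputs; the USD1000-per-point figure echoes the quanto example above): the quanto pays foreign-currency price moves in USD at a fixed conversion, leaving the seller with FX risk, while the ‘phantom FX’ product merely converts a foreign-currency amount at the expiry spot rate, which the seller can replicate with a spot trade at that moment.

```python
def quanto_call_payout(ST, S0, usd_per_point=1000.0):
    """Genuine cross-currency payout: each point of the foreign-currency
    underlying is worth a fixed USD amount, so the seller carries FX risk."""
    return usd_per_point * max(ST - S0, 0.0)

def phantom_fx_payout(ST, S0, fx_spot_at_expiry):
    """Payout defined in the foreign currency but converted at the expiry spot
    rate: the seller can sell that foreign-currency amount at the same spot
    rate, so there is no genuine FX exposure."""
    return max(ST - S0, 0.0) * fx_spot_at_expiry

print(quanto_call_payout(ST=120.0, S0=100.0))                        # 20000.0 USD
print(phantom_fx_payout(ST=120.0, S0=100.0, fx_spot_at_expiry=0.9))  # 18.0 USD
```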
The importance of policy
Senior management needs to decide on a policy for profit remittance by each business
line. Too often this policy is not clearly communicated, even if it is made. It is
important that remuneration of traders is based on numbers which are aligned with
the interests of the shareholders. If the businesses do not have any requirement to
pay over their profits to head office at the end of each period then the net assets
method, which gave a result of −200, is the only correct approach. If the businesses
do have a requirement to pay over their exact profits (and receive their losses) at the
end of each month (or day or year) then the use of the closing exchange rate appears
appropriate. In fact, if properly implemented, the two methods would give much
closer answers because if the profits have to be paid over at the end of each period,
there would never have been an opening balance of GBP900. Indeed, if the policy is
that the profits are paid over in the currency in which they arise, then the two
methods will give exactly the same answer. Paying over the profit in the reporting
currency is a policy that may be easier for everybody to understand. If the amount
required is obtained by the business line selling exactly all its profit in the transaction
currency then the closing exchange-rate method gives exactly the same answer as
the net asset approach. If the business line does not make its FX position totally flat
at the end of each period, then there will be an exposure which does affect the
wealth of the organization, and the closing-rate method will miss this. Thus all the
transactions related to profit remittance should be reflected in the cash accounts in
the ledger system and the net assets method used.
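Since the earlier worked example is not reproduced in this extract, the sketch below uses assumed figures chosen to be consistent with the GBP900 opening balance and the −200 result quoted: it compares the net assets method with the closing-rate method, and shows that remitting the profit each period removes the difference.

```python
opening_gbp = 900.0                 # opening GBP balance, as quoted in the text
profit_gbp = 100.0                  # assumed GBP profit earned this period
opening_rate, closing_rate = 2.00, 1.60   # assumed USD-per-GBP rates

# Net assets method: change in the USD value of the GBP balances.
net_assets = (opening_gbp + profit_gbp) * closing_rate - opening_gbp * opening_rate
print(net_assets)                   # -200.0

# Closing-rate method: translate only the period's GBP profit at the closing rate.
print(profit_gbp * closing_rate)    # 160.0 -- a very different answer

# With profits remitted every period there is no opening GBP balance,
# and the two methods agree:
print(profit_gbp * closing_rate - 0.0, profit_gbp * closing_rate)   # 160.0 160.0
```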
Profit remittance policies are often less than clear because they are embedded in
systems calculations. This is particularly true of any policy which involves a daily
remittance. The logic will have been perfectly clear to the founding sponsor of the
system, but even if the programmers understood it and implemented it correctly,
later users may never have really understood the motivation behind the numbers
which the computer spews out. Such embedding is a good example of system logic
risk. Such risk is usually thought of in conjunction with complicated valuation
models, but we see here that it can also apply to accounting systems, which are
generally considered to be simple.
Policies which say that all FX risk is transferred to the FX desk are effectively
policies of daily remittance in transaction currency. It is important that whatever
policy is chosen is clearly understood by all trading personnel, otherwise avoidable
FX risks may be hedged out, or even unnecessarily introduced. Policies which do not
involve daily remittance need to specify the exact timing of remittances, and the
treatment of any FX gains or losses which arise between the end of the month/year
and the date of payment.
Potential pitfalls
Accounting for the fact that business is carried out in different currencies should
not be difficult, but it is probably the greatest single cause of accounting faults
causing unwanted risk in trading houses today. This may be because it is an area
where mark-to-market accounting involves a very different approach from typical
historical-cost accounting and most accountants find it very difficult to ignore the
distorting rules they have learnt in their training for incorporating the results of
foreign-currency subsidiaries into published group accounts.
The worked examples which follow are deliberately simple and extreme, since this
is the easiest way to see the effect of different treatments. Some of the complications
which arise when considering interaction between groups of assets and liabilities are
considered in later subsections, but in this subsection we deliberately isolate individual
assets. As mentioned above, any implementation should be in a double-entry
system, but it is easier to read the simpler presentation below. Many people’s
response to these sorts of examples is ‘Of course I would never follow the incorrect
route’, but they fail to see that in real life these situations are clouded by all sorts of
other complications, and that such mistakes are being made underneath.
We start a period with 9 oz of gold when the gold price is USD300/oz, and as a
result of buying and selling gold for other things end with 10 oz of gold when the gold
price is USD250/oz (for the purposes of this example assume that cash balances at
beginning and end are both zero). Few people would compute the profit as anything
other than:
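Completing the arithmetic implied by the example: closing net assets of 10 oz × USD250/oz = USD2500, less opening net assets of 9 oz × USD300/oz = USD2700, giving a profit of −USD200, i.e. a loss of USD200.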
Uses
The purpose of accounting is to be able to list the assets and liabilities of the
organization. This is the balance sheet. The increase in the net assets in a period
(plus any amounts paid out as dividends) is the amount by which wealth has
increased. This is the profit for the period. This simple paradigm should be constantly
remembered in the following discussion. Those concerned with setting accounting
standards for historical-cost accounting spend many hours defining what is an
asset or a liability, and how their values should be measured. For mark-to-market
accounting these definitions are much simpler:
• An asset is something that has value to the holder and its market value is the
amount of cash which a rational independent third party would pay now to
acquire it.
• A liability is an obligation to another party and its market value is the amount of
cash which a rational independent third party would accept now in exchange for
taking on the obligation.
Traders and management care about the reported profit figures because they
determine their bonuses. Most traders are not interested in the balance sheet, instead
preferring to judge their exposures from the ‘greeks’, the sensitivities. Regulators and
senior management are more likely to consider what information the balance sheet
can give regarding exposures, and about adherence to capital requirements. Balance
sheet information and profit numbers are used together to calculate returns on
capital. These numbers are often important to senior management in determining
where to shrink or expand business activities.
Regulatory capital requirements typically apply at the level of individual legal
entities, as well as of groups. Trading businesses frequently consist of many legal
entities and so it is important that the balance sheet can be accurately split by legal
entity. This may sound simple, but most decision-making systems and any daily
ledgers support only an individual business line. Combined with the fact that traders
care little for legal-entity split, it can be a major challenge to obtain and consolidate
this information on a timely basis.
On the other hand, some of the calculations of capital are often only implemented
in systems owned by the financial reporting department, which tend to be more
focused on the split between legal entities than on the split between business lines.
Thus while in many more advanced organizations there is a wish to charge the usage
of capital to the business lines which use it, the information required to do so may
not be available. As a result, such capital charges may be calculated on a somewhat
ad-hoc basis, leading to inaccuracies and disputes which distract staff time from
earning money for the business.
Any profit calculation that does not involve subtracting two balance sheets carries
an increased risk of being erroneous. Unfortunately it is all too common to find that
daily profits are reported based on spreadsheets which may not capture all the assets
and liabilities. When the financial reporting department attempts to add up the net
assets at the end of the month, there are numerous differences which nobody ever
has time to fully clear.
The solution to these problems is to prepare the daily profits from trading ledgers
which use the double-entry discipline, and have all information split by legal entity
and business line. If these trading ledgers feed the general ledger at the end of the
month then reconciliation issues become vastly more manageable. The importance
of keeping the ledgers split by transaction currency and of reconciling internal
positions on a daily basis is explained in the following subsections.
A separate risk is that trading lines attempt to ‘smooth’ their profit, since senior
management asks them awkward questions if profits go up and down. In some
cases swings are indeed due to inaccurate accounting, often because of inadequate
systems, and so an assumption may arise that all such swings are errors. In fact
many well-hedged positions will show volatility of earnings, and users of the accounting
reports should be educated not to panic at such moves.
Internal reporting
Many accountants need to unlearn some rules they have picked up in ‘normal’
accounting. The differences are greater for the Japanese and continental European
methods of accounting than for the Anglo-Saxon methods. It is unfortunate that
while many of the rules of historical-cost accounting are not applicable to trading
businesses, many organizations have thrown the baby out with the bathwater by
also abandoning for their daily reporting the double-entry record-keeping codified
by Pacioli in Italy in 1494. Double-entry provides a welcome discipline which is valid
whatever accounting conventions are used.
All serious players use mark-to-market accounting (where the value of assets and
liabilities is recalculated daily based on market parameters) for most of their trading
business, although this may require exceptions (see discussion of arbitrage businesses
below). The remaining trading entities which may still be using accrual
accounting (where a profit is calculated at the time of trade, and recognized over the
lifetime of the position, while losses are recognized as soon as they occur) are those
which are subsidiary businesses of organizations whose main business is other than
trading.
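As a minimal illustration of the two recognition patterns just described (the margin, trade life and daily moves are assumed, and the asymmetric early recognition of losses under accrual accounting is ignored for simplicity):

```python
expected_margin, life_days = 10.0, 5
daily_mtm_moves = [4.0, -2.0, 5.0, 1.0, 2.0]   # assumed daily revaluations, summing to 10

# Mark-to-market accounting: each day's revaluation is that day's profit.
mtm_profile = daily_mtm_moves

# Accrual accounting: the margin identified at trade time is spread over the life.
accrual_profile = [expected_margin / life_days] * life_days

print(mtm_profile)       # [4.0, -2.0, 5.0, 1.0, 2.0]
print(accrual_profile)   # [2.0, 2.0, 2.0, 2.0, 2.0] -- same total, different timing
```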
Accounting risk
Accounting risk is the risk that inappropriate accounting information causes suboptimal
decisions to be made and may be due to inappropriate policy, faulty interpretation
of policy, or plain error. We can distinguish accounting risk from fraud,
which is deliberate manipulation of reported numbers, although the faulty interpretation
of policy in a computer system can facilitate fraud, as would appear to have
happened at Kidder Peabody in 1994.
Salomon Brothers recorded a USD250 million hit as due to accounting errors in
1994, but most errors which are discovered are wrapped up in other results, for the
same reason that frauds are unless they are really enormous: management judges
the cost of the damage to reputation to be too high. Merger provisions are an obvious
dumping ground. Most of the problems discussed in this chapter are unresolved at
many institutions today due to lack of understanding.
Dealing with accounting risk is not something that concerns only the financial
reporting function – decisions which need to be made on accounting matters affect
valuation and risk-reporting systems, interbook dealing mechanisms, and the relationship
of Treasury to trading businesses. Senior management needs to understand
the issues and make consistent decisions.
There is continual tension in most trading organizations between traders and
trading management who want to show profits as high as possible, and the risk,
control, and accounting departments, who are charged by senior management with
ensuring that what is reported is ‘true’.
Most risk information is concerned with looking at what may happen in the future.
Accounting is showing what has already happened. While traders are typically
focused purely on the future, the past does matter. One has to judge performance
based on what has already happened, rather than just on what is promised for the
future. Anything other than the net profit number is generally more relevant for
management and the control functions than for traders.
Direct losses from accounting risk typically happen when a large profit is reported
for a business and significant bonuses are paid out to the personnel involved before
it is discovered that the actual result was a much smaller profit or even a loss. In
addition, extra capital and resources may be diverted to an area which is apparently
highly profitable with the aim of increasing the business’s overall return on capital.
However, if the profitability is not real, the extra costs incurred can decrease the
business’s overall profit. As these losses are dispersed across an organization they
do not receive the same publicity as trading mistakes. Such risks are magnified
where the instruments traded have long lives, so that several years’ bonuses may
have been paid before any losses are recognized. The risk of profits being understated
is generally much lower, because those responsible for generating them will be keen
to show that they have done a good job, and will investigate with great diligence any
problems with the accounting producing unflattering numbers.
Indirect losses from the discovery and disclosure of accounting errors arise only
for those rare mistakes which are too large to hide. The additional losses come from
the associated poor publicity which may cause a downgrade in reputation. This can
result in increased costs for debt and for personnel. Regulators may require expensive
modification to systems and procedures to reduce what they see as the risk of
repetition. In addition, announcements of such failings can lead to the loss of jobs
for multiple levels of management.
In many large trading organizations the daily and monthly accounting functions
have become extremely fragmented, and it is very difficult to see a complete picture
of all the assets and liabilities of the organization. Especially in organizations which
have merged, or expanded into areas of business removed from their original areas,
there is often inconsistency of approach, which may result in substantial distortion
of the overall results. There is often a tendency for individual business lines within
an organization to paint the best possible picture of their own activities. Considerable
discipline is required from head office to ensure that the overall accounts are valid.
In many trading organizations the accounting function is held in low regard. As a
result, many of the people who are best at understanding the policy complications
which arise in accounting find that they are better rewarded in other functions.
These individuals may therefore act in an economically rational manner and move
to another function. The resulting shortage of thinkers in the accounting function
can increase the risk of problems arising.
Many non-accountants find it astonishing that there could be more than one
answer to the question ‘How much profit have we made?’. This is a very reasonable
attitude. This chapter seeks to explain some examples of where accountants commonly
tangle themselves, sometimes unnecessarily.
The main focus in this chapter is on the risks that arise from choice and interpretation
of policy in accounting performed for internal reporting in organizations for
whom the provision or use of financial instruments is the main business. These are
the organizations which knowingly take on risk, either making a margin while
matching with an offsetting risk, or gaining an excess return by holding the risk.
This is the main area which affects risk managers and is illustrated by several
common areas in which problems arise. FX, funding, and internal consistency are
areas where many accountants in banks will be well used to seeing multiple arguments.
We then briefly consider external reporting by such organizations, and the
additional complexities for internal and external reporting by ‘end-users’ – the
organizations which use financial instruments to reduce their risks.
Example: Transaction accounts
The treasury is offered money from transaction accounts by the commercial
bank department. What should the treasury pay for this money? The ideas we
have in mind are all more or less inspired by time series analysis.
1 Calculate a quantile (or another statistic such as the standard deviation) of the
balances over a chosen period of past history. The portion of today’s balance
corresponding to this quantile could be lent to the treasury at the O/N rate, and
the remaining part at the O/N rate plus a spread. The choice of quantile and
spread is a business decision (a sketch of this approach follows the list).
2 The term structure of liquidity for demand deposits as described above can
be used for transfer pricing in the following sense. The proportion of the
balance which can be invested at a given term is determined by the term
structure. The interest rate used is taken from the yield curve at that term
plus or minus a spread. Again, the spread is a business decision.
3 Instead of taking quantiles as in 2, it is also possible to use the minimum
balances in this period.
4 The next approach uses a segmentation hypothesis of the customers which
contribute to the balance. Customers who do not receive any or do receive
only very low interest on their accounts are likely to be insensitive to interest
rate changes. Changes in market interest rates do not have to be passed
through to those customers. On the other hand, customers who receive
interest rates close to the market will be very interest sensitive. Now it is
obvious that the non-interest-sensitive accounts will be worth more to the
bank than the interest-sensitive ones. The business then has to decide what
price should be paid for the portions of the balance with different interest rate
sensitivities. The transfer price of the clearing balance can then be calculated
as the weighted average of the individual rates.
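As a rough illustration of approach 1, the following Python sketch splits today’s balance using a low quantile of the historical balances and prices the two parts as described; the quantile level, the O/N rate and the spread are placeholder assumptions, not recommendations.

import numpy as np

def split_balance(past_balances, todays_balance, quantile=0.05, on_rate=0.02, spread=0.01):
    # Stable part: the level the balance has stayed above in most of the history,
    # read here as a low quantile of past balances (an assumption about the wording above).
    past = np.asarray(past_balances, dtype=float)
    stable_part = min(np.quantile(past, quantile), todays_balance)
    volatile_part = todays_balance - stable_part
    # Following the wording of approach 1: the stable part is lent at the O/N rate,
    # the remainder at the O/N rate plus a spread; also return the blended transfer price.
    blended = (stable_part * on_rate + volatile_part * (on_rate + spread)) / todays_balance
    return stable_part, volatile_part, blended

history = [95, 103, 88, 110, 97, 92, 105, 99, 90, 101]   # made-up past balances
print(split_balance(history, todays_balance=100.0))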
Transfer pricing of liquidity
Here we want to highlight the question of what price for liquidity the treasury unit of
a financial institution should quote to the different entities of the bank. This question
is closely related to the role of the treasury in the bank: is it merely a servicing
unit which provides liquidity to the different business lines at the best price, or is it
a trading unit which is entitled to take active positions on the yield curve? The
following example should highlight the ideas.
Example: Demand deposits
In order to determine the term structure of liquidity for demand deposits one
has to create a model for the deposit balance. The approach we propose here is
in the spirit of the techniques as described in the sections above. In order to
estimate the proportion of the demand deposits that can be used over given time
horizons, one has to answer questions like: ‘What is the lowest possible balance
in the next K days with a p% probability?’ One way to attack the problem is to
calculate the p quantiles of the distribution of the K day balance returns. An
alternative consists of looking for the minimal balance in the last K days. The
minimum of this balance and the actual balance can then be lent out for K days.
This method has the advantage of being applicable also to balance sheet items
which are uncorrelated to interest rate changes, but it has the same weaknesses
as all historic simulations. These weaknesses are very well known from the
estimation of volatility or the calculation of VaR and can be summed up by the wise
saying: ‘The past is not the future.’
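As a minimal sketch of the two estimates just described, the following Python fragment computes (a) a p-quantile of historical K-day balance changes applied to today’s balance and (b) the minimum balance over the last K days; the use of simple differences rather than relative returns, and all parameter values, are assumptions for illustration only.

import numpy as np

def lendable_for_k_days(balances, k, p=0.05):
    b = np.asarray(balances, dtype=float)
    today = b[-1]
    # (a) p-quantile of the K-day balance changes: a pessimistic move over K days.
    k_day_changes = b[k:] - b[:-k]
    worst_change = np.quantile(k_day_changes, p)          # typically negative
    quantile_estimate = max(today + worst_change, 0.0)
    # (b) the minimum balance observed in the last K days, capped at today's balance.
    minimum_estimate = min(b[-k:].min(), today)
    return quantile_estimate, minimum_estimate

history = [100, 98, 104, 101, 99, 103, 97, 102, 100, 105]  # made-up daily balances
print(lendable_for_k_days(history, k=5, p=0.05))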
Example: Plain vanilla interest rate swap
For interest rate risk measurement the variable leg of an interest rate swap
would ‘end’ at the next fixing date of the variable interest rate. For interest rate
considerations the notional amount could be exchanged at that date. For liquidity
risk measurement, the variable leg of a swap matures at the end of the lifetime
of the swap. The payments can be estimated using the forward rates.
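To make the liquidity view of the floating leg concrete, the following Python sketch estimates the unknown floating payments from forward rates implied by a discount curve; the flat continuously compounded zero rate and the annual payment dates are simplifying assumptions.

import math

def floating_leg_liquidity_cashflows(notional, years, zero_rate=0.03):
    df = lambda t: math.exp(-zero_rate * t)          # illustrative discount factors
    flows = []
    for t in range(1, years + 1):
        fwd = df(t - 1) / df(t) - 1.0                # simple forward rate for the period [t-1, t]
        flows.append((t, notional * fwd))            # estimated floating payment at year t
    return flows

for date, amount in floating_leg_liquidity_cashflows(100.0, 5):
    print(f"year {date}: estimated floating cash flow {amount:.2f}")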
In order to optimize their liquidity management, the treasury function of a FI
is faced with the problem of determining the term structure of liquidity for their
assets and liabilities. For many investment banking products, such as derivatives
and fixed-income products this is a straightforward task as payment dates are
often known in advance and the estimated amounts can be derived from the
pricing formulae. An exception is obviously the money market and repo business.
A more challenging task (and more important in terms of liquidity) is the term
structure of liquidity for the classic commercial banking products, such as
transaction accounts, demand deposits, credit card loans or mortgages (prepayments)
as these products have no determined maturity.
Term structure of liquidity
One of the main tasks in A/L management is the classification of balance sheet
items according to their maturities. The reason for this is twofold and has consequences for the concept of ‘maturity’. The first reason is the management of interest
rate risk and the second is the management of liquidity risk. For liquidity risk
management purposes classical gap analysis is misleading as the term structure of
interest rates and liquidity will differ considerably for many financial instruments.
We give an example to clarify this.
Funding
The use of the liquidity portfolio as a liquidity reserve is based on the assumption
that the cash equivalent for that portfolio can be funded by the normal credit line
based upon a good credit standing. As a result existing inventory will be free for
funding purposes. Based on a normal yield curve the inventory will be funded for
shorter periods producing a return on the spread difference. The funding period will
be rolled every 3–6 months.
If additional funding is needed the inventory can be used as collateral to acquire
additional liquidity from other counterparties. Normally funds are received at a lower
interest rate because the credit risk is reduced to that of the issuer, which is in most
cases better than the FI’s own credit risk. In the best case one will receive the mark
to market value without a haircut (sell-buy-back trade). In the case of repo trades
there will be a haircut (trades with the central bank).
The worst case would occur if funds are borrowed in one transaction and upon the
rollover date a ‘temporary illiquidity’ occurs. In the meantime the credit rating of the
FI may have deteriorated. For all future fundings with this FI, counterparties will
demand both collateral and a haircut. This means that if 10 units of cash are
required for clearing and 100 units for the liquidity funding the FI will receive 90
units against every 100 units of collateral. The result is that due to the increased
funding of the liquidity portfolio, the original liquidity gap of 10 units increased by
an additional 10 units to 20. As such, the liquidity portfolio failed to fulfil its purpose
of supplying additional necessary reserves. In the case of a rise in interest rates, the
price of the liquidity portfolio will also fall and may, for example, result in the FI
receiving only 80 units; the total additional funding gap for the portfolio would now
be 20 units and total difference 30 (see Figure 15.9).
If the funding of the liquidity portfolio is carried out incorrectly it can lead to an
increase of the liquidity gap. Therefore, it would be beneficial to structure the funding
into several parts, funded over different periods and with different counterparties.
One part should be covered by ‘own capital’ and the market risk should be hedged.
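The arithmetic of the example above can be laid out in a few lines of Python; the function and its figures simply mirror the text and are not a model of any particular institution.

def funding_gap(clearing_need=10.0, funding_need=100.0,
                collateral_face=100.0, haircut=0.10, price_factor=1.0):
    collateral_value = collateral_face * price_factor        # mark-to-market value of the collateral
    cash_received = collateral_value * (1.0 - haircut)       # cash raised after the haircut
    return clearing_need + funding_need - cash_received      # residual gap still to be funded

print(funding_gap())                        # haircut only: 10 + 100 - 90 = 20 units
print(funding_gap(price_factor=80.0/90.0))  # price fall so that only 80 units are received: gap of 30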
Volume
The volume of the liquidity portfolio should be large enough to cover a potential
liquidity gap for as long as such a gap is expected to persist.
Availability
Depending on how quickly liquidity is required, the inventory of the portfolio should be
split and held with different custodians. For example:
Ω The credit line with the Bundesbank is dependent on the volume of pledgeable
securities held in the depot with the Bundesbank.
Ω If a repo is done with the market one part of the inventory should be held with a
national or an international custodian to ensure settlement.
Ω For settlement purposes the securities should be deliverable easily in national
settlement systems and in/between international depositary systems.
‘Real’ currency of liquidity portfolios
As financial institutions usually trade in many different currencies, they also have
to manage their liquidity risk in many different currencies. Nevertheless, they usually
do not hold liquidity portfolios in all these currencies for reasons of cost. Therefore,
the question arises, which currencies and locations are optimal for liquidity portfolios?
In practice the liquidity portfolios are located in the regional head offices and
are denominated in the main currencies (USD, EUR, JPY).
Liquidity of liquidity portfolios
The liquidity portfolio is a part of the liquidity reserve of the FI; therefore it should
contain securities which will:
Ω Be pledgeable/repoable by the central bank or other counterparties
Ω Have a large issue volume
Ω Have an effective market
Ω Have no credit risk
Ω Be issued by international well-known issuers
These characteristics normally enable the FI to get liquidity quickly and at relatively
low cost from the central bank or other counterparties.
Forecast bias
Forecast bias, the tendency of our model consistently to over- or underestimate
the realizations, is a property of the distributions addressed by the separation of upper
and lower quantiles. For a forecast which consistently overestimates the realizations
to a greater extent than it underestimates them:
|Hk| > |Lk| for all k
and similarly, for a forecast which consistently underestimates the realizations to a
greater extent than it overestimates them:
|Lk| > |Hk| for all k
Visually, the forecast DCL value would appear to lie closer to one edge of the
cumLk/cumHk envelope. This would probably be a good indicator that the model
parameters need some form of adjustment, depending on the extent of bias present,
in order to rebalance future forecasts.
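A crude bias check along these lines can be written as follows; H and L here stand in for the upper and lower deviations |Hk| and |Lk| described above, and averaging them over the history is an assumption made purely for illustration.

import numpy as np

def forecast_bias(forecast, realized):
    err = np.asarray(realized, dtype=float) - np.asarray(forecast, dtype=float)
    over = np.abs(err[err < 0]).mean() if (err < 0).any() else 0.0    # forecast above realization
    under = np.abs(err[err > 0]).mean() if (err > 0).any() else 0.0   # forecast below realization
    if over > under:
        return 'forecast tends to overestimate (hugging one edge of the envelope)'
    if under > over:
        return 'forecast tends to underestimate (hugging the other edge of the envelope)'
    return 'no obvious bias'

print(forecast_bias([10, 12, 11, 13], [9, 11, 10, 12]))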
Liquidity portfolios
Treasury units of financial institutions often hold special portfolios in order to be
able to generate liquidity for different maturities quickly and at low costs. Depending
on the credit rating of the FI the liquidity portfolio can have positive, negative or no
carry.
Quantile selection and ELaR/DyLaR correction
The preferred level of confidence for behavioral DyLaR can be determined from the
analysis of past projections over the desired period of 6, 12 or 24 months. For
methodological purposes the adoption of the 12-month period or 250 working days
would be in line with current methods of VaR analysis. Periodically the projected
upper and lower DyLaR values will lie outside of the desired Lk and Hk envelope, as
should be the case for any quantile selection of less than 100%. This leads to the
requirement of a self-correcting mechanism for any forecast of behavioral DyLaR.
Under these circumstances, the previous maximum quantile value *Hk is self-repairing,
since for cob+1 the maximum value for *Hk = Cexpected(D, m, k) − Creal(D, k, k)
is replaced by the *Hk calculated for cob−1. Should a significant change occur in
the cash flows of any portfolio, the result will be a spreading of the cumLk/cumHk
envelope, leading to a more cautious estimate.
From ELaR to DyLaR
Extending from our ELaR platform for predicting cash flows into the future we arrive
at the point of needing to consider new business, such business being classified as
all activities forward of cob as previously defined. In looking at the dynamic component,
we are now based at a portfolio level, since it is not possible to predict new
business on an individual transaction basis. Previously we saw that:
DCL = ECL + dynamic cash flows    (15.34)
At this stage we encounter the same problem as before, the forecast value of cash
flows, which now incorporates new business as well as existing, has an uncertain
path. The inescapable reality is that all models based on financial markets operate
in an often chaotic manner. That is, small parameter movements can lead to
previously unrecognized paths. Allowing for this is a development of our concept of
ELaR to bring us to Dynamic Liquidity at Risk (DyLaR).
In a broad sense, the behavioral model is one possible representation of DCL. In
that case DCL is not analysed in terms of its individual components (ECL and new
business), as it ideally would be; instead it is treated as an ongoing whole, based on
the trends and periodicity of a given portfolio.
The blending of behaviors
The weakness of any model designed to measure a specific trend is that it stands
independently alone. Each model until this point addresses only one aspect of the
time series. In reality, the data will more than likely have other faces. Realizing that
each series is dependent upon different variables and that no one model can be
consistently used to achieve the desired ‘exactness’ needed, a method of blending
behaviors can be constructed. Working on the proviso that all models have the
potential to produce a forecast of future realities with varying degrees of accuracy a
blend approach weighting the working models may be written as:
Blend model_k = a·BM1_k + b·BM2_k + c·BM3_k for k days in the future
where BM1, BM2, BM3 are the predefined behavioral models and a, b, c are the
optimized weightings that historically give the fit of least error as defined in ‘basis of
trend analysis’. By the nature of the base models it is then valid to presume that, to
some extent, a + b + c ≈ 1.
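One possible reading of ‘the fit of least error’ is an ordinary least-squares fit of the weights against the realized series, as in the Python sketch below; the unconstrained solve and the toy data are assumptions, and in practice one might constrain the weights to sum to one.

import numpy as np

def blend_weights(bm1, bm2, bm3, realized):
    X = np.column_stack([bm1, bm2, bm3])                  # one column per behavioral model
    w, *_ = np.linalg.lstsq(X, np.asarray(realized, dtype=float), rcond=None)
    return w                                              # (a, b, c), often close to summing to 1

def blend_forecast(w, bm1_k, bm2_k, bm3_k):
    return w[0] * bm1_k + w[1] * bm2_k + w[2] * bm3_k

bm1 = [1.0, 1.1, 0.9, 1.2]; bm2 = [0.8, 1.0, 1.0, 1.1]; bm3 = [1.2, 1.2, 0.8, 1.3]
realized = [1.0, 1.1, 0.9, 1.2]                           # made-up history
w = blend_weights(bm1, bm2, bm3, realized)
print(w, blend_forecast(w, 1.05, 0.95, 1.10))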
Use of further behavioral information
Regardless of the robustness and strength of any forecast, without the addition
of new information a limit of model accuracy will always be reached. A study of the
behavior of a series can in many cases be broken down to ‘grass roots’ levels.
Beneath the analysis of a total or accumulated position there could, for instance, be
strong underlying counterparty trends. Their behavior could take the form of regular
payment structures, transaction openings, or a trend in the portfolio itself.
In line with this is the study of any correlation existing between more than one time
series. The detection of one or both of these behavioral features leads to the logical
progression of the modeling procedure.
Direct frequency analysis
Partitioning a significantly large historical time series into components according to
the duration or length of the intervals within the series is one approach to spectral
or frequency analysis. Considering the series to be the sum of many simple
sinusoids with differing amplitudes, wavelengths and starting points allows a
number of the fundamentals to be combined to construct an approximating
forecasting function. The Fourier transform uses the knowledge that if an infinite
series of sinusoids are calculated so that they are orthogonal or statistically independent
of one another, the sum will be equal to the original time series itself. The
expression for the frequency function F(ν) obtained from the time function f(t) is represented
as:
F(ν) = ∫_{−∞}^{+∞} f(t) cos(2πνt) dt − i ∫_{−∞}^{+∞} f(t) sin(2πνt) dt    (15.33)
The downside of such an approach for practical usage is that the combination of a
limited number of sine functions results in a relatively smooth curve. Periodicity in
financial markets will more than likely be a spike function of payments made and
not a gradual and uniform inflow/outflow.
In addition, the Fourier transform works optimally on a metrically spaced time
series. The calendar is not a metric series and even when weekends and holidays are
removed in an attempt to normalize the series, the length of the business month
remains variable. The result is that any Fourier function will have a phase distortion
over longer periods of time if a temporal correction or time stretching/shrinking
component is not implemented. However, as a means of determining the fundamental
frequencies of a series, the Fourier transform is still a valuable tool, which can be
used for periodicity analysis and as a basis for the construction of a periodic model.
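In practice one would typically use a discrete Fourier transform rather than the integral directly; the following Python sketch uses numpy's FFT to pick out the dominant period of a synthetic daily series with an embedded five-day cycle, which is an illustrative example rather than real data.

import numpy as np

rng = np.random.default_rng(0)
n = 250                                                   # roughly one business year of days
t = np.arange(n)
series = 10 * np.sin(2 * np.pi * t / 5) + rng.normal(0, 2, n)

spectrum = np.abs(np.fft.rfft(series - series.mean()))    # magnitude of each frequency component
freqs = np.fft.rfftfreq(n, d=1.0)                         # frequencies in cycles per day
dominant = freqs[np.argmax(spectrum[1:]) + 1]             # skip the zero-frequency bin
print(f"dominant period: about {1.0 / dominant:.1f} days")  # expect roughly 5 days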
Basis of periodicity and frequency analysis
For systems incorporating or aggregating payment structures, which are often based
on market parameters, cash flows occurring at regular intervals in time would be
expected. Armed with this knowledge and depending upon the nature of the periodicity,
the expected value of such a series can be projected with a greater degree of confidence for either specific points in the forecast window or for the entire forecast
period as a whole.
At this stage, a series containing regular payments (e.g. mortgages, personal and
company loans, and sight deposit balance sheet movements) can be addressed and
quantified for projection purposes. The construction of a periodic model for a
calendar-based time series would logically consider dates on a weekly, monthly,
quarterly, half-yearly and yearly basis. It would also incorporate a corrective feature
for the often-occurring situation of due payments falling on weekends or public
holidays. In its simplest form, a monthly periodic model takes the average
of historical values from a selected time window to create a forward forecast (a sketch follows).
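A minimal version of such a monthly periodic model, assuming the day of the month as the periodic slot and ignoring the weekend/holiday correction mentioned above, might look as follows.

from collections import defaultdict
from statistics import mean

def monthly_periodic_model(history):
    # history: list of (day_of_month, cash_flow) pairs; returns day -> average flow.
    slots = defaultdict(list)
    for day, flow in history:
        slots[day].append(flow)
    return {day: mean(flows) for day, flows in slots.items()}

observations = [(1, 120), (15, -40), (1, 130), (15, -35), (1, 110), (15, -45)]  # made-up flows
model = monthly_periodic_model(observations)
print(model[1], model[15])   # forecast flows for the 1st and 15th of the next month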
Basis of trend analysis
The initial analysis of a time series is to look at the trends that exist within the data
itself. Following this logic, it seems intuitively reasonable to construct a curve
(surface) through a portion of the existing data so that deviations from it are
minimized in some way. This curve can then be projected forward from cob as a
forecast of the expected future tendency of the series.
Having agreed upon the characteristics of the desired trend line, its construction
can be defined simply as the requirement to reduce, over the history, the standard deviation
of the errors between the behavioral function and the mean of the real values.
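As one simple instance of such a trend model, the Python sketch below fits a low-order polynomial through the history by least squares and projects it beyond the close of business; the quadratic degree and the horizon are illustrative choices.

import numpy as np

def trend_forecast(series, horizon, degree=2):
    y = np.asarray(series, dtype=float)
    x = np.arange(len(y))
    coeffs = np.polyfit(x, y, degree)                   # least-squares trend through the data
    future_x = np.arange(len(y), len(y) + horizon)      # days beyond the observed history
    return np.polyval(coeffs, future_x)

balances = [100, 102, 101, 105, 107, 106, 110, 112]     # made-up daily balances
print(trend_forecast(balances, horizon=3))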
Behavioral modeling: DCL
The construction of a behavioral model is the progression from an understanding of
movements within the historical data to a quantitative projection forward of cob
based on this knowledge. Ideally the link between any financial series and existing
market parameters (e.g. LIBOR, PIBOR etc.) will be strong enough to construct a
robust forecast. For non-maturing assets and liabilities, however, the correlation
to other series is often small. The alternative, therefore, is an analysis of
the account balances or cash flows themselves.
Does this reflect ‘the risk we meant’?
Reality tells us that it is of course possible to have huge positive or negative cash
flows on a future margin account. So is our ECL forecast CF(d, k) = 0 of any use to
compute the risk? Is this zero cash flow really the risk we meant?
What we are able to forecast with ECL is the most likely cash flow for this date,
but it does not tell us anything about the actual risk in the cash flow in a worst
case in either direction. Before making any statements about actual
liquidity risk, it is necessary to estimate the likelihood of special events;
e.g. how likely is it that a cash flow will be over/under a defined amount? Or, given
a specified quantile, what is the biggest/smallest cash flow we can expect?
No free lunch: arbitrage freedom
Considering a simple future as an example for what the ECL could be, we get the
following result: Using the cash flow notation introduced above, every clean cash
flow on day k can be written as: CF(d, k) = CFfix(d, k) + CFvar(d, k) with CFfix(d, k) = 0.
(We are not considering the initial margin in this example for the sake of simplicity.)
When we are looking for the ECL we try to determine the value of CF(d, k).
Assuming that we are trying to make a forecast for a specific day k in the future, we
have in general the possibilities:
either CFvar(d, k) > 0
or CFvar(d, k) = 0
or CFvar(d, k) < 0
Assume there is no arbitrage possible in this market and CFvar(d, k) > 0. As the future
price is its own forward, one could enter into this instrument at a zero price and
generate a risk-free profit by simply selling it at CFvar(d, k) > 0. For the complementary
reason the cash flow cannot be CFvar(d, k) < 0. That means CFvar(d, k) = 0 must be the
ECL for a future cash flow (i.e. it is the most likely value of a forward ‘future’ cash
flow).
Behavioral model
The analysis of an existing time series for the construction of a forward projection is
a means of determining the ‘behavior’ of the series. This is the investigation of
historical data for the existence of trends, periodicity or frequencies of specific events,
the correlation to other time series and the autocorrelation to itself. We can look at
any portfolio of non-maturing assets and liabilities as such a series. The balance of
customer sight deposits is an appropriate example, as neither payment dates nor
amounts are known in advance and the correlation to existing interest rate curves is
negligible.
The general tendency of the data can be used not only to interpolate between data
points but also to extrapolate beyond the data sequence. Essentially, this is the
construction of a projected forecast or ‘behavioral model’ of the series itself.
An understanding of the trends in a broad sense is an answer to ‘What is the
behavior of the time series?’ The investigation as to ‘why’ it is the behavior leads to
an understanding of other stochastic processes driving the evolution of the series
itself and is the logical progression towards ELaR/DyLaR.
Using term structure models for interest rates
Another approach, which is especially relevant for commercial business, is to look
for correlations of the balance to interest rates. If a significant correlation between
the deposit balances and the short rate (O/N or 3M-deposit rate) exists, it is possible
to build a simple regression model based on this rate. It is also feasible to build more
sophisticated regression models using past balances, a time trend and changes in
interest rates.
The crucial point is now the following. As soon as we have defined such a model,
based on the short rate, one of the now classic term structure models for interest
rates as proposed by Cox, Ingersoll and Ross or Heath, Jarrow and Morton can be
used to forecast the future demand deposit behavior. One can then calculate the
sensitivity of the demand deposits to bumps in the yield curve. These sensitivities
can then be used to assign probabilities of duration for different levels of the demand
deposits.
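As a toy illustration of the regression step, the following Python fragment fits deposit balances against a short rate and revalues the balance under a bumped-rate scenario; a full term structure model such as CIR would supply the simulated short-rate paths, which a single assumed scenario rate stands in for here.

import numpy as np

short_rate = np.array([0.020, 0.022, 0.025, 0.027, 0.030, 0.028])   # historical O/N rates (made up)
balance = np.array([105.0, 103.0, 100.0, 98.0, 95.0, 97.0])         # deposit balances (made up)

beta, alpha = np.polyfit(short_rate, balance, 1)        # balance = alpha + beta * short_rate
print(f"sensitivity to the short rate: {beta:.1f} per unit of rate")

scenario_rate = 0.030 + 0.010                           # yield curve bumped by 100bp (assumption)
print(f"forecast balance after the bump: {alpha + beta * scenario_rate:.1f}")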
Monte Carlo simulation
A Monte Carlo simulation can be used in liquidity risk to simulate a variety of
different scenarios for the portfolio cash flow for a target date (in the future). The
basic concept of a Monte Carlo simulation is to simulate repeatedly a random process
for a financial variable (e.g. interest rates, volatilities etc.) in a given value range of
this variable. Using enough repetitions the result is a distribution of possible cash
flows on the target date.
In order to get a simulation running it is necessary to find a particular stochastic
model which describes the behavior of the cash flows of the underlying position, by
using any or all of the above-mentioned financial variables.
In general, the Monte Carlo approach for ECL that we suggest here is much the same
as that used in the VaR concept. Instead of considering the PV, the focus is
now on the cash flow simulation itself.
Having reached this point it is crucial to find a suitable stochastic model which:
Ω Describes the cash flow behavior of the underlying instrument
Ω Describes the cash flow development of new business (necessary for DCL)
Ω Uses few enough parameters for the computation to remain efficient
Based on the distribution generated by the Monte Carlo run the following becomes
clear:
Ω The mean of the distribution is a forecast for ECL or DCL
Ω The tails at a given confidence level (e.g. the 99% quantile, below which 99%
of the simulated cash flows lie, and correspondingly the 1% quantile) define
the two limits of the envelope encompassing expected future cash flows (a sketch follows these points).
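A minimal sketch of this idea, assuming for illustration that the target-date cash flow is simply normally distributed around an expected value, is given below; the distributional choice, the volatility and the confidence level are placeholders.

import numpy as np

def monte_carlo_ecl(mean_flow, vol, n_paths=100_000, confidence=0.99, seed=0):
    rng = np.random.default_rng(seed)
    flows = rng.normal(mean_flow, vol, n_paths)          # simulated cash flows on the target date
    return {
        'ecl': flows.mean(),                             # forecast for ECL or DCL
        'upper': np.quantile(flows, confidence),         # upper edge of the envelope
        'lower': np.quantile(flows, 1.0 - confidence),   # lower edge of the envelope
    }

print(monte_carlo_ecl(mean_flow=0.0, vol=25.0))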
Simplifying the problem
In order to find a starting point, we reduce the complexity of the situation by treating
the (theoretical) situation where we have to deal only with:
Ω One currency
Ω One payment system
Ω One legal entity
Ω No internal deals
Ω Only existing business (no new business)
Ω K days into the future
Later we will expand the problem again.
Cash flow liquidity risk: redefinition
We regard cash inflows (i.e. paid in favour of our central bank account) as being
positive and cash outflows as negative. Deals between the bank’s entities that are
not executed via third parties (internal deals) are treated like regular transactions.
As they match out, it has to be ensured that such deals are completely reported (by
both parties).
Definition:
Cash liquidity risk is the risk of economic losses resulting from the fact that the sum of
all inflows and outflows of a day t plus the central bank account’s balance B_{t−1} of the
previous day is not equal to a certain anticipated (desired) amount.
This definition aims at manifestations of cash liquidity risk such as:
1 Only being able to
Ω Raise funds at rates higher than or
Ω Place funds at rates lower than (credit ranking adjusted) market rates (opportunity
costs)
2 Illiquidity: not being able to raise enough funds to meet contractual obligations
(as a limit case of the latter, funding rates rise to infinity)
3 Having correctly anticipated a market development but ending up with a ‘wrong’
position.
Regardless of whether cash liquidity risk manifests gradually as in 1 or absolutely as in 2
(where 2 can be seen as an ‘infinite limit’ of 1), the probability of the occurrence of 2
can be derived as a continuous, monotonic function of 1.
In addition to analysing our liquidity position relative to the market, we need to
estimate our projected liquidity position and the degree of its uncertainty for predictable
periods of market fluctuations.
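The definition above translates directly into a small calculation; the names and figures in this Python sketch are illustrative only.

def cash_liquidity_gap(previous_balance, inflows, outflows, anticipated):
    # Inflows are positive and outflows negative, following the sign convention above.
    actual = previous_balance + sum(inflows) + sum(outflows)
    return actual - anticipated          # a non-zero result signals cash liquidity risk

print(cash_liquidity_gap(previous_balance=50.0,
                         inflows=[120.0, 30.0],
                         outflows=[-140.0, -45.0],
                         anticipated=20.0))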
Lack and excess of liquidity: symmetric approach
The idea here is to move from a purely illiquidity-orientated view of liquidity risk to a view that covers both insufficient and excess liquidity. Both cases could lead to situations where we have to bear economic losses relative to market rates: we might only be able to attract funds at ‘high’ rates, or only be able to place excess funds at sub-market rates.5 Another very good reason to consider ‘over-liquidity’ is the fact that excess funds have to be lent out and thus create credit risk if not collateralized.
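As a rough sketch of this symmetric view (the rates, day-count convention and position size are assumptions, not figures from the text), the opportunity cost can be computed against the market rate for both a shortage and an excess:

```python
# Sketch of the symmetric cost of a liquidity shortage or excess, measured against
# the (credit-adjusted) market rate. Rates, day count and position are assumptions.
def opportunity_cost(position, market_rate, own_borrow_rate, own_place_rate, days=1):
    """Daily cost of funding a shortage above market, or placing an excess below market."""
    yf = days / 360.0
    if position < 0:    # shortage: we can only borrow at our (higher) rate
        return -position * (own_borrow_rate - market_rate) * yf
    return position * (market_rate - own_place_rate) * yf   # excess placed below market

print(opportunity_cost(-100e6, 0.030, 0.034, 0.027))   # shortage of 100m -> ~1111 per day
print(opportunity_cost(+100e6, 0.030, 0.034, 0.027))   # excess of 100m  -> ~833 per day
```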
Re-approaching the problem
The following outlines a methodology to measure, evaluate and manage liquidity risk consistently. Although it is tailored for a bank, it can quite easily be adapted for other kinds of FIs.
Conclusions
The approach of characterizing liquidity as the probability of being solvent is straightforward. Nevertheless it is incomplete in two senses:
1 After having quantified a potential future lack of funds, we have not yet clarified how large the risk triggered by this shortage is for the FI. Analysing the FI's counterbalancing capacities could give a solution: the probability that the lack exceeds the ability to raise funds can be determined. In a VaR-like approach we would try to determine the maximal forward deficit of funds that does not exceed the existing counterbalancing capacities – within a predefined probability (see the sketch below).
2 Once we have gathered this knowledge, we are still left with the problem of its economic impact. One way to turn the information into a policy could be to establish limits that restrict the business; another would be to increase the counterbalancing capacity. Both approaches are costly, but beyond that we have to compare the actual expenses against the potential loss of at least the equity capital if the FI ends its existence by becoming insolvent.
A solution will be developed in the next section.
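A minimal sketch of the VaR-like idea in point 1, assuming that a distribution of forward funding deficits has already been simulated (e.g. from the FPS) and that the counterbalancing capacity is a single known figure; both are assumptions made only for illustration.

```python
# VaR-like sketch for point 1: probability that a forward funding deficit exceeds the
# counterbalancing capacity, and the deficit not exceeded at 99% confidence.
# The simulated deficits and the capacity figure are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
deficits = np.maximum(rng.normal(50.0, 40.0, 100_000), 0.0)   # simulated funding gaps
capacity = 150.0                                              # counterbalancing capacity

p_exceed = (deficits > capacity).mean()        # probability of an uncovered gap
deficit_99 = np.quantile(deficits, 0.99)       # maximal deficit within 99% probability
print(f"P(gap > capacity) = {p_exceed:.4f}, 99% deficit = {deficit_99:.1f}")
```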
Analysis of the liquifiability of assets
If the FPS has provided us with a good understanding of potential future liquidity gaps, we then have to investigate our ability to generate cash. The natural way is to increase our liabilities: we have to measure our ability to generate cash in time by means of secured or unsecured borrowing. On the other hand, we have to classify all assets with respect to our ability to generate cash by selling or repoing them, and the speed with which this can be done.
Assessment of the quality of the FPS
Nobody would expect a FPS to be totally correct and in fact this never happens in
reality. ‘How correct is the FPS?’ is the crucial question. The answer has to be given
ex ante. Therefore we have to:
Ω Estimate errors due to shortcomings in data (incorrect/incomplete reporting) and
Ω Evaluate the uncertainty arising from our incorrect/incomplete assumptions (no
credit and operational risk) as well as deviations stemming from unpredictable
developments.
The above will lead us to a distribution of the FPS. From that we will be able to
deduce expectations for the most probable FPS and upper and lower limits of the FPS.
New deals
Up to now the FPS predicts the cash flows that will occur in the future if no new business arises. In order to predict the cash flows that will actually happen, we have to include new deals. Some can be forecast with high probability (especially those initiated by ourselves); others might be harder to tackle. We differentiate between:
Ω Renewal of existing deals. Example: customers tend to renew some percentage of their term deposits. Ideally a relation between the existing deal and the new deal can be detected (a minimal sketch follows this list).
Ω Totally new deals. This could be a new customer placing a deposit or trading a new instrument, as well as a new line of business starting.
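A minimal sketch of the renewal idea, assuming that a single historical roll-over fraction applies to all maturing term deposits; the rate and the amounts are hypothetical.

```python
# Sketch of the renewal idea: an assumed roll-over fraction of maturing term deposits
# is expected to be renewed; the remainder becomes an expected outflow. Figures are
# hypothetical.
maturing = {"day 1": 80.0, "day 2": 120.0}     # term deposits maturing per day
renewal_rate = 0.65                            # assumed historical roll-over fraction

expected_renewed = {d: a * renewal_rate for d, a in maturing.items()}
expected_outflow = {d: a * (1.0 - renewal_rate) for d, a in maturing.items()}
print(expected_renewed, expected_outflow)
```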
Existing deals
Although the business is completely described, the anticipated cash flows have different grades of certainty (we do not treat credit and operational risks here). There are two kinds (a small classification sketch follows the examples):
Ω Known CF, which are known in time and amount. Example: a straight bond; we do not consider selling the bond, as that would be a change of business.
Ω Contingent CF, which are unknown in time and/or amount.
Example A: A forward rate agreement (FRA).
The forward CF depends on the as yet unknown market rates of the underlying on the fixing date – nevertheless the payment date is known.
Example B: A European option sold.
As above, with the difference that the counterparty has to decide whether he or she wants to exercise the option (if certain market circumstances prevail).
Example C: An American option.
As above, with the difference that the payment date is not known.
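An illustrative classification sketch along these lines; the data structure and the field names are assumptions introduced only for the example, not a structure taken from the text.

```python
# Illustrative classification of anticipated cash flows by grade of certainty;
# field names and figures are assumptions, not taken from the text.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CashFlow:
    amount: Optional[float]    # None while the amount is still unknown (e.g. FRA before fixing)
    pay_date: Optional[date]   # None while the payment date is unknown (e.g. American option)

    @property
    def contingent(self) -> bool:
        return self.amount is None or self.pay_date is None

bond_coupon = CashFlow(50_000.0, date(2025, 6, 30))   # known CF: amount and date fixed
fra_settlement = CashFlow(None, date(2025, 3, 15))    # contingent: date known, amount not
american_option = CashFlow(None, None)                # contingent: neither known
print(bond_coupon.contingent, fra_settlement.contingent, american_option.contingent)
```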
Evaluation of the forward payment structure (FPS)
The first step is to collect all cash flows likely to arise from deals already existing.
That means we have to treat only business that is on the balance sheet. Alterations
of existing deals such as a partial early repayment of a loan will be treated as new
business.
Measurement of insolvency risk
Measuring insolvency risk can be broken down into three tasks:
Ω Evaluating the forward payment structure of the FI to gain a first forecast of the
FI’s exposure to a critical lack of funds,
Ω Assessing the correctness of this projection in order to come to an understanding
of the nature and magnitude of possible departure from reality in the forecast,
and finally
Ω Analysing the structure of assets of the FI to estimate its counterbalancing
capacity.
Systemic reasons
In contrast to the specific reasons, the systemic reasons are totally out of the control of the FI. There could be:
Ω A lack of CBM in the system itself. This is quite unlikely, but it happened in Germany after the Herstatt crisis, when the Bundesbank kept central bank money so short that many banks were unable to hold the required minimum reserves. Nevertheless this could also be categorised as
Ω A failure in the mutual exchange mechanism of central bank money: although the central bank allots sufficient money to the market, some market participants hold bigger balances than they need. The reason could be an aversion to credit risk hindering their lending out of surplus funds, or it could lie in the anticipation of upcoming market shortages that are covered forward.
Ω A technical problem: payment systems fail to distribute the money properly as planned by the market participants, thus leaving them with unintended positions and/or unfulfilled payment obligations.
Specific reasons
There are various reasons leading to an insufficient ability of the FI to rebalance its
shortages:
Ω Its rating could be downgraded. If sufficient collateral is available in time the FI
could switch from unsecured to secured borrowing.
Ω There may be rumours about its solvency. The ability to attract funds from other
liquid market participants is weakened. Liquid assets that can be sold or repoed
instantly could restore the confidence of the market.
Ω Even with undoubted standing it could be hard to raise cash: the credit limits other institutions hold for the FI may already be fully utilized.
Some of these reasons are hardly under the control of the FI. Others, such as the building up of collateral and liquid asset holdings, are at least controllable if handled ex ante.
Insufficient counterbalancing capacity
The FI is not able to raise enough funds to balance its central bank account. There
are a variety of possible reasons for that. Again they can be ordered as intrinsic, specific and systemic, reflecting the decreasing ability of the FI to manage them.
Intrinsic reasons
The FI is not able to raise enough funds because:
Ω Too much money was raised in the past by means of unsecured borrowing (other
FIs’ credit lines are not large enough)
Ω Liquid assets are not available at the right time and place, with the appropriate legal framework.
Systemic: the payment process is disturbed
These events are even more out of the control of the FI. Counterparties pay as
scheduled but the payments simply do not get through. Reasons might be technical
problems as well as the unwillingness of the central bank to fix them. The problem
is truly systemic.
Specific: unexpected loss of funds caused by counterparties
A FI might find that expected inflows do not actually come in (a counterparty becomes insolvent), thus causing an unexpected shortage of funds. On the other hand, an unexpected outflow might be caused by the decision of a counterparty to withdraw funds (withdrawal of savings deposits during a run on a bank). These factors are out of the FI's control when they happen; nevertheless there is a certain possibility of steering them in advance by selecting customers.
Lack of central bank money (CBM)
A temporary lack of CBM does not necessarily mean upcoming insolvency; the relation between the shortage and the capacity to attract external funds is crucial. Nevertheless it is a necessary condition for insolvency: if the central bank account is long, there is no need to act immediately. Again, this does not mean the institution will remain solvent in the future. We have to investigate further into the term structure of solvency. There are intrinsic, specific and systemic reasons for becoming insolvent, and they are decreasingly manageable for a FI.
How does insolvency occur?
In general, accounts with a central bank cannot be ‘overdrawn’. The status of
insolvency is finally reached if a contractual payment cannot be executed by any
means because not enough central bank funds are available. In any case, insolvency
might exist in a hidden or undetected state if such contractual payments are not
initiated (and therefore their failure cannot be detected by the central bank) but
nevertheless constitute a severe breach of terms, leaving the other institution unclear
whether it was only an operational problem leading to a lack of inflow or something
more problematic.
P/L non-neutrality
It is intuitively clear that the task of every liquidity manager is to minimize the risk of his or her institution becoming insolvent. He or she achieves this by attaining the highest possible liquidity for the institution. So far so good, but as always there is a trade-off: liquidity is not free. Maximizing CF+ as well as minimizing CF− puts restrictions on the business that normally result in smaller profits or even losses. Ensuring CF− does not fall below a certain threshold generally triggers direct costs.3 As a consequence, liquidity management turns out to be the task of maximizing the liquidity of the bank under the constraint of minimizing costs.
Solvency
Annual income twenty pounds, annual expenditure nineteen nineteen six, result happiness.
Annual income twenty pounds, annual expenditure twenty pounds ought and six,
result misery. (Charles Dickens, David Copperfield)
A financial institution is defined as being solvent if it is able to meet its (payment)
obligations; consequently it is insolvent if it is not able to meet them. Insolvency ‘in
the first degree’ basically means ‘not having enough money’ but even if the liquid
assets seem to properly cover the liabilities, insolvency can stem from various
technical reasons:
Ω Insolvency in time – incoming payments on central bank accounts do not arrive
in time, so there is not enough coverage to initiate outgoing payments
Ω Insolvency in a particular currency – this can occur by simple FX cash mismanagement:
the institution is unexpectedly long in one currency and short in another;
but it could also be the result of an inability to buy the short currency, due to
exchange restrictions etc.
Ω Insolvency in a payment system – even if there is enough central bank money in one payment system to cover the shortage in another payment system (in the same currency), the two may not necessarily be netted.
In practice, of course, it makes a big difference if a FI is insolvent in the first degree
or ‘only’ technically. ‘Friendly’ institutions will have good economic reasons to ‘help
out’ with funds; or central banks might act as lenders of last resort and thus bring
back the FI into the status of solvency. Nevertheless it is very hard to differentiate
consistently between those grades of insolvency. Concentrating on the end of a
payment day makes things clear: the FI is either solvent or not (by whatever means
and transactions), tertium non datur. In such a digital system it does not make sense
to distinguish between ‘very’ and ‘just’ solvent, a differentiation we want to make for
grades of liquidity.
Being only ‘generally liquid’ could mean having the required funds available later,
in another currency or another place or payment system or nostro account. In any
case, they are not available where and when they are required – a third party is
needed to help out.
Liquidity of financial institutions
In the context of a FI, liquidity can have different meanings. It can describe:
Ω Funds (central bank money held with the central bank directly or with other
institutions)
Ω The ability of the FI itself to attract such funds
Ω The status of the FI's central bank account at a certain moment in the payment process (i.e. whether there are enough funds).
We will concentrate on the last of these and refine the description by introducing the concept of solvency.
Example: Different prices of liquidity in different situations
If we have a look at the spread of a 29Y versus 30Y US Treasury Bond, we see it
rising from 5 bp to 25 bp within two months (Oct.–Dec. 98). Credit and option characteristics
of both bonds are the same (those spreads are zero for our bond trader).
The difference between the interest components can be neglected, or at least does not explain the large difference. A possible answer is that the ‘relative liquidity relation’ (the one we meant intuitively to capture with this approach) between both bonds can be assumed constant during this period. Nevertheless the ‘spot price’ of liquidity changed dramatically during this period – quite understandably, given the general situation in the markets, with highlights such as the emerging markets crisis in Asia, LTCM, the MBS business of American investment banks and others.
Another naive measure of liquidity is the tradability of bonds. An issue of which only a relatively small amount is held for trading purposes (assuming the rest is locked away in longer-term investment portfolios) will tend towards higher liquidity premiums.
But does the liquidity premium depict the amount of ‘tradable’ bonds in the
market? It can be observed that bonds with a high liquidity margin are traded heavily
– in contrast to the theory. The explanation is that traders do not determine the
‘right’ price for a financial object; they try to ‘determine’ whether the spot price is
higher or lower than a future price: buy low, sell high. The same is true for liquidity spreads. A high spread does not necessarily reduce the amount of trading: if the spread is anticipated to be stable or even to widen, it can even spur trading. All in
all we have to deal with the term structure of liquidity spreads. Unfortunately such a
curve does not exist, the main reason being that loan and deposit markets are
complementary markets from a bank’s point of view and, moreover, segregated into
different classes of credit risk.
First approach
Introduction: different types of liquidity
The concept of liquidity is used in two quite different ways. It is used in one way to
describe financial instruments and their markets. A liquid market is one made up of
liquid assets; normal transactions can be easily executed – the US treasury market
for on-the-run bonds is an especially good example. Liquidity is also used in the
sense of the solvency of a company. A business is liquid if it can make payments
from its income stream, either from the return on its assets or by borrowing the
funds from the financial markets.
The liquidity risk we consider here is about this second kind of liquidity. Financial
institutions are particularly at risk from a liquidity shortfall, potentially ending in
insolvency, simply due to the size of their balance sheets relative to their capital.
However, a financial institution with sufficient holdings of liquid assets (liquid in the
sense of the first type) may well be able to sell or lend such assets quickly enough to generate cash and so avoid insolvency.
The management of liquidity risk is about the measurement and understanding of
the liquidity position of the organization as a whole. It also involves understanding
the different ways that a shortfall can arise, and what can be done about it. These
ideas will be explored in more depth in the following sections.
Liquidity of financial markets and instruments
It would seem straightforward to define the concept of market liquidity and the
liquidity of financial instruments.
A financial market is liquid if the instruments in this market are liquid; a financial instrument is liquid if it can be traded at the ‘market price’ at all times, in normal or near-normal market amounts.
A new trend: uncertain parameter models
Finally, we will say a few words on a new trend that has been recently introduced in
academic research that could help in assessing model risk for a given position: the
uncertain volatility models. Work in the area was pioneered by Avellaneda, Levy, and
Paras (1995) for stocks and by Lhabitant, Martini, and Reghai (1998) for interest rate
contingent claims.
The new idea here is to build pricing and hedging models that work regardless of
the true world’s underlying model. As an example, should we consider the volatility
as constant, deterministic, or stochastic? This is difficult to answer, since the
volatility itself is a non-observable quantity! Rather than using a specific and probably
misspecified model, the uncertain volatility models take a very pragmatic view: if you
are able to bound the volatility between two arbitrary values, they will provide you
with valid prices and hedging parameters, whatever the underlying model. Rather
than specifying the behaviour of the volatility, you just specify a confidence interval
for its value. The volatility may evolve freely between these two bounds.15 The
resulting pricing and hedging models are therefore much more robust than the
traditional ones, as they do not assume any particular stochastic process or require
going through an estimation procedure.
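As a minimal sketch of the idea, for a single plain-vanilla call (a convex payoff) the price bounds implied by a volatility band are simply the Black–Scholes values at the two extreme volatilities; general portfolios would require the nonlinear PDE of the uncertain volatility model, which is not attempted here, and all figures are assumptions.

```python
# Minimal sketch for a single plain-vanilla call with volatility bounded in
# [vol_lo, vol_hi]: because the payoff is convex, the Black-Scholes values at the two
# extreme volatilities bound the price. General portfolios would require the
# nonlinear PDE of the uncertain volatility model; all figures here are assumptions.
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S, K, T, r, vol):
    d1 = (log(S / K) + (r + 0.5 * vol * vol) * T) / (vol * sqrt(T))
    d2 = d1 - vol * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

S, K, T, r = 100.0, 105.0, 1.0, 0.03
vol_lo, vol_hi = 0.15, 0.25                      # assumed volatility band
print(f"price band: [{bs_call(S, K, T, r, vol_lo):.2f}, {bs_call(S, K, T, r, vol_hi):.2f}]")
```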
Conclusions
The application of mathematics to important problems related to financial derivatives
and risk management has expanded rapidly over the past few years. The increasing
complexity of the financial products coupled with the vast quantity of available
financial data explains why both practitioners and academics have found that the
language of mathematics is extremely powerful in modeling the returns and risks in
the application of risk management techniques. The result from this global trend is a
profusion of highly technical mathematical models, rather confusing for the financial
community. Which model to adopt? The past decade is full of examples where undue
reliance on inadequate models led to losses. The collapse of Long Term Capital
Management, which relied heavily on mathematical models for its investment strategy,
has raised some important concerns among model users. The mathematics in a
model may be precise, but they are useless if the model itself is inadequate or wrong.
Over-reliance on the security of mathematical models is simply an invitation for
traders to build up large and directional positions!
The most dangerous risks are those we do not even think about, and they tend to be there when we least expect them. Model risk is one of them. Furthermore, it now appears to be present everywhere, and we have to live with it. Virtually
all market participants are exposed to model risk and must learn to deal with it.
Regulators are also aware of it. Proposition 6 of the Basel Committee Proposal (1997)
is directly in line: ‘It is essential that banks have interest rate risk measurement
systems that capture all material sources of interest rate risk and that assess the
effect of interest rates changes in ways which are consistent with the scope of their
activities. The assumptions underlying the system should be clearly understood by
risk managers and bank management.’ Similar recommendations apply to any other
market.
However, uncertainty about the appropriate model should not necessarily reduce
the willingness of financial institutions to take risk. It should simply add one star in
the galaxy of risks because this risk is only an additional one, which can be priced,
hedged, and managed.
Identify responsibilities and set key controls for model risk
Of course, a strong and independent risk oversight group is the first step towards
fighting model risk. Given the range of models, assumptions and data used in any
bank, such a group has a substantial role to play. But risk managers often lack the
time and resources to effectively accomplish a comprehensive and extensive model
review, audit and test. It is therefore extremely important to define relevant roles and
responsibilities, and to establish a written policy on model adoption and use. The
policy should typically define who is able to implement, test, and validate a model,
and who is in charge of keeping careful track of all the models that are used, knowing
who uses them, what they are used for, who has written the mathematics and/or
the code, who is allowed to modify them, and who will be impacted by a change. It
should also set rules to ensure that any change is verified and implemented on a
consistent basis across all appropriate existing models within the institution. Without
this, installing new software or performing a software upgrade may have dramatic
consequences on the firm’s capital market activities.
Be aware of operational risk consequences
One should also be aware of the possible consequences of operational risk for model risk.
Several banks use very sophisticated market or credit risk models, but they do not
protect their systems from an incorrect data input for some ‘unobservable’ parameters,
such as volatility or correlation. For instance, in the NatWest case, the major problem was that the bank was using a single volatility number for all GBP/DEM options, whatever the exercise price and maturity. This wiped out the smile effect, which was very important for out-of-the-money deals.
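A minimal sketch of the effect described, assuming a plain Black–Scholes call for simplicity (an FX option would normally be priced with the Garman–Kohlhagen variant) and an invented smile: the same out-of-the-money option is priced once with the single at-the-money volatility and once with the higher volatility implied by the smile.

```python
# Sketch of the flat-volatility problem: the same out-of-the-money call priced once
# with a single at-the-money volatility and once with the (higher) volatility implied
# by the smile. Plain Black-Scholes is used for simplicity (an FX option would use the
# Garman-Kohlhagen variant); the smile and all figures are invented.
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S, K, T, r, vol):
    d1 = (log(S / K) + (r + 0.5 * vol * vol) * T) / (vol * sqrt(T))
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d1 - vol * sqrt(T))

S, K_otm, T, r = 2.50, 2.80, 0.5, 0.04
flat_vol, smile_vol = 0.10, 0.14
print(bs_call(S, K_otm, T, r, flat_vol))    # value booked with the flat ATM volatility
print(bs_call(S, K_otm, T, r, smile_vol))   # value consistent with the smile
```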
The problem can also happen with multiple yield curve products. In the 1970s,
Merrill Lynch had to book a US$70 million loss because it underpriced the interest
component and overpriced the principal component of a 30-year strip issue.14 The
problem was simply that the par-yield curve used to price the components was
different from the annuity and the zero yield curves.
Many model risk problems also occur in the calibration of the model in use. This
is particularly true for interest rate derivatives, where complex models fail to capture
deep out-of-the-money behaviour.
To reduce the impact of operational matters on model risk, and particularly the problem of inaccurate data, it is essential to implement, as far as possible, a central and automatic capture of data, with regular data validation by the mid-office in agreement with traders and dealers.
Aggregate carefully
A financial institution may use different models for different purposes. Analytical
tractability, ease of understanding, and simplicity in monitoring transactions can
favour the use of distinct models for separate tasks. However, mixing or aggregating
the results of different models is risky. For instance, with respect to market prices,
an asset might appear as overvalued using one model and undervalued using an
alternative one. In such a case, the two arbitrage opportunities are just model-induced illusions.
The problem of model risk in the aggregation process is crucial for derivatives,
which are often decomposed into a set of simpler building blocks for pricing or
hedging purposes. The pieces together may not behave as the combined whole, as
was shown by some recent losses in the interest rate derivatives and mortgage-backed securities sectors.
Whenever possible, mark to market
Marking to market is the process of regularly evaluating a portfolio on the basis of
its prevailing market price or liquidation value. Even if you use hedge accounting, a person separate from the dealer should perform it on a regular basis, using quotes from multiple dealers, and immediately report increasing divergences between market and theoretical results. Unfortunately, mark to market is often transformed into mark to model, since illiquid or complex assets are priced only according to proprietary in-house models.
When mark to market is impossible (typically when the trading room is the unique or leading market for an instrument), one should identify the sources of the majority of the risk and reserve a cushion between the mark-to-model value and those of other models.
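A minimal sketch of such an independent check, with a hypothetical tolerance threshold and invented quotes: the model value is compared with the median of several dealer quotes and flagged if the divergence exceeds the threshold.

```python
# Sketch of an independent price check: the model value is compared with the median of
# several dealer quotes and flagged if the divergence exceeds a tolerance. The threshold
# and quotes are invented.
from statistics import median

def check_divergence(model_value, dealer_quotes, threshold=0.01):
    """Relative divergence of the model value from the median dealer quote."""
    consensus = median(dealer_quotes)
    divergence = abs(model_value - consensus) / abs(consensus)
    return divergence, divergence > threshold

div, breached = check_divergence(103.50, [101.90, 102.05, 101.75])
print(f"divergence {div:.2%}, report to risk management: {breached}")
```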
Use non-parametric techniques to validate the model
A statistician once said that parametric statistics finds exact solutions to ‘approximate
problems’, while non-parametric statistics finds approximate solutions to ‘exact
problems’. Non-parametric tests make very few assumptions, and are generally much
more powerful and easier to understand than parametric tests. In addition, nonparametric
statistics often involve less computational work and are easier to apply
than other statistical techniques. Therefore, they should be used whenever possible
to validate or invalidate the parametric assumptions.
Their results may sometimes be surprising. For instance, Ait-Sahalia (1996a,b)
estimates the diffusion coefficient of the US short-term interest rate non-parametrically
by comparing the marginal density implied by a set of models with the one
implied by the data, given a linear specification for the drift. His conclusions are that
the fit is extremely poor. Indeed, the non-parametric tests reject ‘every parametric model of the spot rate previously proposed in the literature’, that is, all linear-drift
short-term interest rate models!
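A far simpler non-parametric check in the same spirit (not Ait-Sahalia's density comparison) is to compare the distribution assumed by a parametric model with the empirical one, for example with a Kolmogorov–Smirnov test; the data here are simulated and the set-up is only an illustration.

```python
# A far simpler non-parametric check than Ait-Sahalia's density comparison, for
# illustration only: a Kolmogorov-Smirnov test of a normal specification against
# simulated, fat-tailed "observed" rate changes.
from scipy import stats

observed = stats.t.rvs(df=4, scale=0.01, size=2000, random_state=1)  # assumed data
ks_stat, p_value = stats.kstest(observed, "norm",
                                args=(observed.mean(), observed.std(ddof=1)))
print(f"KS statistic {ks_stat:.4f}, p-value {p_value:.4f}")
# A small p-value is evidence against the parametric (normal) specification.
```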
Stress test the model
Before using a model, one should perform a set of scenario analyses to investigate the effect of extreme market conditions on a case-by-case basis. The G30 states
that dealers ‘regularly perform simulations to determine how their portfolio would
perform under stress conditions’. Stress tests should ‘measure the impact of market
conditions, however improbable, that might cause market gaps, volatility swings, or
disruption of major relationships, or might reduce liquidity in the face of unfavourable
market linkages, concentrated market making, or credit exhaustion’.
Scenario analysis is appealing for its simplicity and wide applicability. Its major
drawback is the strong dependence on the ability to select the appropriate extreme
scenarios. These are often based on historical events (Gulf War, 1987 crash, European
currency crisis, Mexican peso devaluation, Russian default, etc.) during an
arbitrarily chosen time period. Unfortunately, on the one hand, history may not
repeat itself in the future; on the other, for complex derivatives, extreme scenarios
may be difficult to identify.13 Of course, increasing the number of scenarios to
capture more possible market conditions is always a solution, but at the expense of
computational time.
In addition, stress testing should not only focus on extreme market events. It
should also test the impact of violations of the model's hypotheses, and how sensitive the model's answers are to its assumptions. What happens if prices jump, correlations
behave differently, or liquidity evaporates? Model stress testing is as important as
market stress testing. If the effects are unacceptable, the model needs to be revised.
Unfortunately, the danger is that one does not really suspect a model until
something dramatic happens. Furthermore, there is no standard way of carrying out
stress model risk testing, and no standard set of scenarios to be considered. Rather,
the process depends crucially on the qualitative judgement and experience of the
model builder.
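A minimal sketch of such stress testing, assuming a trivially simple valuation function and invented scenarios: the same position is revalued under shocks, including one that violates the model's implicit smooth-market assumption (a sudden gap in the discount rate).

```python
# Sketch of stress testing a valuation: the same bond position revalued under invented
# scenarios, including a gap that violates the model's implicit smooth-market assumption.
def bond_value(face, coupon, years, y):
    """Price of an annual-coupon bond discounted at a flat yield y."""
    return sum(face * coupon / (1 + y) ** t for t in range(1, years + 1)) + face / (1 + y) ** years

base_yield = 0.04
scenarios = {"base": 0.0, "parallel +100bp": 0.01, "crisis gap +300bp": 0.03}

base_value = bond_value(1_000_000, 0.05, 10, base_yield)
for name, shock in scenarios.items():
    value = bond_value(1_000_000, 0.05, 10, base_yield + shock)
    print(f"{name:>18}: {value:,.0f} ({value - base_value:+,.0f})")
```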
Revise and update the model regularly
Outdated models can also give rise to model risk. Once implemented and accepted, a
model should not be considered as a finished manufactured good, but should be
revised and updated on a regular basis. Would you buy risk management software if
the vendor does not update it regularly? So why not do this as well with your models?
A related issue is the update of input parameters: the environment is changing.
Therefore, the input parameters should be revised and updated as frequently as
necessary.
Here again, a good signal is the acceptance of a model in the marketplace. If the
majority of the market uses similar data inputs and assumptions for comparable
activities, this is a good signal that the model is still up to date.
Use a model for what it is made for
Most models were initially created for one specific purpose. Things start breaking
down when a model is used outside its range of usefulness. Applying an existing
model to a new field, a new product, or a new market should not be considered as a
straightforward operation, but must be performed as cautiously as the development
of a new model from scratch.
Many model risk issues happen when dealers extend an existing business or enter
a new one. They also attempt to recycle the models and tools with which they are
familiar. Unfortunately, a model can be good in one area and bad in another. As an
example, compare a pricing model versus a Value-at-Risk model. Pricing errors are
not translated in the Value-at-Risk estimates, since these focus only on price
variations and not on price levels. Therefore, a good model for Value-at-Risk will not
necessarily be a good pricing model!
Know your model
Users should always understand the ideas behind a model. Treating a model as a
black box is definitely the wrong approach. Devices that mysteriously come out with
an answer are potential sources of model risk, and matters are even worse when the
people who built the black box are no longer in contact with it.
Of course, many senior managers do not understand sophisticated mathematical
models. But that is not the key issue. The key is to understand the risks associated with
a model and its limits. Essential questions are: ‘Are you comfortable with the model’s
results? Which variables have a high likelihood of change? Where and why does a
small change in a variable cause a large variation in the results? What is the model’s
acceptance in the marketplace?’
Check your data accuracy and integrity
The quality of a model’s results depends heavily on the quality of the data feed.
Garbage in, garbage out (GIGO) is the law in risk management. Therefore, all data
inputs to a model should be checked and validated carefully. For instance, should
you use a bid, an ask or a mid-price? To discount, should you build a zero-coupon
curve, a swap curve or a government bond curve? The difference can be quite
substantial for illiquid securities, and can affect all subsequent computations.
The number of data sources should also be kept to a minimum. Even in a market as
liquid as US Treasuries, end-of-day prices can differ by up to 5 basis
points across sources. Combined with a little leverage, this can result in a 2% valuation
error on a simple 10-year position. Another important problem is data synchronicity,
particularly for portfolios spanning several currencies or time zones.
Where should you get the US dollar–Swiss franc exchange rate: at the New York close or
the Zurich close? Using non-simultaneous price inputs can lead to mispricing,
or create artificial arbitrage opportunities.
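A back-of-the-envelope reconstruction of the 10-year example (the duration and leverage figures below are assumptions chosen to illustrate the order of magnitude, not numbers given in the text):

    duration = 8.0      # assumed modified duration of a 10-year Treasury
    yield_gap = 0.0005  # 5 basis point discrepancy between pricing sources
    leverage = 5.0      # assumed leverage on the position

    price_error = duration * yield_gap     # error as a fraction of position value (~0.40%)
    equity_error = price_error * leverage  # error as a fraction of equity (~2.00%)

    print(f"price error:  {price_error:.2%}")
    print(f"equity error: {equity_error:.2%}")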
Whenever possible, prefer simple models
The power of any model – at least in finance – is directly proportional to its simplicity.
Obscurity is the first sign of something being amiss, and a lack of clarity at least
invites scepticism. Remember that more sophisticated models are not necessarily
better than simpler ones. A simple model may be perfectly adequate for a specific
job. Someone driving from, say, Paris to Monte Carlo will not be delayed much if he
ignores the earth’s curvature. The conclusion might be different for a plane or a
satellite. It is the same in the risk management industry. There is a fundamental
difference between being approximately right and precisely wrong.
Define clearly your ‘model risk’ metric and a benchmark
The first step in assessing model risk is to define a complete model risk metric.
What are the criteria used to qualify a model as ‘good’ or ‘bad’? Which goal should a
model pursue? The answer can vary widely across applications. For pricing purposes,
one might consider minimizing the difference between the results of a model and
market prices or a given benchmark. The latter is not always possible: for stock
options, for instance, the Black and Scholes (1973) model appears to be relatively robust
and is widely accepted as a benchmark, while for interest rate options there are
many different models of the term structure of interest rates but little agreement on
any natural benchmark. For hedging purposes, one may want to minimize the
terminal profit and loss of a hedged position, its average daily variation or its
volatility. Others will focus instead on the maximum possible loss, the
drawdown or the probability of losing money. And for regulatory capital, some
institutions will prefer to minimize the required capital, while others will prefer to be
safe and reduce the number or probability of exceptions.
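To illustrate how much the choice of metric matters (using a hypothetical daily P&L series for a hedged position), the sketch below computes several of the candidate criteria mentioned above side by side; two models could easily rank differently depending on which one is chosen.

    import random
    import statistics

    random.seed(7)
    daily_pnl = [random.gauss(0.01, 1.0) for _ in range(250)]   # hypothetical hedge P&L

    cumulative, peak, max_drawdown = 0.0, 0.0, 0.0
    for pnl in daily_pnl:
        cumulative += pnl
        peak = max(peak, cumulative)
        max_drawdown = max(max_drawdown, peak - cumulative)

    metrics = {
        "terminal P&L": sum(daily_pnl),
        "average daily variation": statistics.mean(abs(p) for p in daily_pnl),
        "volatility": statistics.stdev(daily_pnl),
        "maximum daily loss": -min(daily_pnl),
        "maximum drawdown": max_drawdown,
        "probability of a losing day": sum(p < 0 for p in daily_pnl) / len(daily_pnl),
    }
    for name, value in metrics.items():
        print(f"{name}: {value:.3f}")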
How can you manage and control model risk?
Managing and controlling model risk is a difficult task, which should be performed
on a case-by-case basis. The following should therefore not be read as a set
of recipes, but rather as the beginning of a checklist for model risk management and
control. Perhaps surprisingly, the process depends crucially on the
personal judgement and experience of the model builder.
How can you detect model risk?
Unfortunately, there is no unique method for detecting model risk.
The essential problem is that the model-building process is a multi-step procedure,
whereas model risk is generally assessed at the end of the chain. The result is an
amalgamation of conceptually distinct discrepancy terms. All the successive errors in
the model-building process are aggregated into a single real-valued variable, typically
the average difference between some empirically observed value (for instance, an
option price) and the result of the model. This makes it difficult to detect which
aspects of the model, if any, are seriously misspecified.
However, some signals should be monitored carefully, as they are good early
indicators of model risk. For instance, a model that performs poorly out of
sample while its in-sample performance was excellent, or time-varying parameters
that vary too much, should arouse suspicion. Using all the degrees of freedom in a
model to fit the data often results in over-parametrization. If we need, say, two
degrees of freedom and three or more are available, we can simply use the third
degree to absorb the model errors as a time-varying component.
It is now widely known that models with time-varying parameters show excellent
in-sample performance, particularly for pricing and hedging purposes. But all the
model risk is concentrated in the time-varying parameters. This is particularly true
if the parameter is not observable, such as a mean-reversion rate for interest rates, a
risk-aversion parameter for a given investor, or a risk premium for a given source of risk.
In a sense, these models are built specifically to fit an arbitrary exogenous set of
data.
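The over-fitting warning above can be checked mechanically. The sketch below (using assumed data and the numpy library, not anything prescribed in the text) fits a parsimonious model and a heavily parametrized one on an in-sample window and compares their errors out of sample; the extra degrees of freedom typically look impressive in sample and deteriorate out of sample.

    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(0.0, 2.0, 100)
    y = 2.0 * x + rng.normal(0.0, 0.3, size=x.size)   # hypothetical 'market' observations

    x_in, y_in = x[:70], y[:70]     # in-sample (calibration) window
    x_out, y_out = x[70:], y[70:]   # out-of-sample (validation) window

    def rmse(coeffs, xs, ys):
        # Root-mean-square error of a fitted polynomial on a data set.
        return float(np.sqrt(np.mean((np.polyval(coeffs, xs) - ys) ** 2)))

    for degree in (1, 9):           # two versus many degrees of freedom
        coeffs = np.polyfit(x_in, y_in, degree)
        print(f"degree {degree}: in-sample RMSE {rmse(coeffs, x_in, y_in):.3f}, "
              f"out-of-sample RMSE {rmse(coeffs, x_out, y_out):.3f}")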