Value at Risk
Although value at risk (VaR) can be calculated by a variety of methods, it is usually expressed as a single value (an amount of a given currency). This value is interpreted as the minimum loss that should be expected x% of the time over a given interval, under a certain set of assumptions. VaR is often misinterpreted as a "maximum loss". It is in fact the minimum loss that one should expect in a small fraction of instances; the maximum loss the portfolio could suffer over the period is often much greater and far more difficult (if not impossible) to estimate.
If a portfolio has a 2% one-day VaR of $100,000, then that portfolio should be expected to see a loss of at least $100,000 over a one-day period 2% of the time.
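This interpretation can be sketched in a few lines; the daily P&L series below is synthetic, generated purely for illustration (in practice it would be the portfolio's actual or modelled P&L):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical year (252 trading days) of daily portfolio P&L in dollars;
# the scale is an arbitrary assumption for this example.
pnl = rng.normal(loc=0.0, scale=50_000, size=252)

# The 2% one-day VaR is the loss at the 2nd percentile of the P&L
# distribution: on roughly 2% of days, the loss is at least this large.
var_2pct = -np.percentile(pnl, 2)
print(f"2% one-day VaR: ${var_2pct:,.0f}")
```

Note that nothing in this calculation says how bad the worst 2% of days actually are, which is the misinterpretation warned about above.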
- In 1945, Leavens offered a quantitative method for calculating VaR in a paper on the benefits of diversification. This may have been the first VaR measure published.
- In 1952, Markowitz and Roy independently published papers with similar VaR measures for practical portfolio optimization at a given level of risk.
- In 1971, Lietaer applied a VaR measure in practice to estimate foreign exchange risk.
- In 1972, Dusak used simple VaR measures for futures portfolios.
- In 1975, the SEC implemented a Uniform Net Capital Rule (UNCR), and VaR measures underpinned the SEC’s new system.
- In 1986 and 1987, Garbade assessed internal capital requirements using VaR measures.
- In 1988, the Basel Committee published the 1988 Basel Accord.
- During the late 1980s, JP Morgan developed a firm-wide VaR system.
- In 1992, when setting capital requirements for credit and market risks, the SFA used a crude VaR measure to determine market risk capital charges for equity, fixed income, foreign exchange and commodities positions.
- In April 1993, the Basel Committee released a document requiring banks to identify a trading book and hold capital against trading-book market risks and organization-wide foreign exchange exposures. Capital charges for the trading book were loosely based on a building-block VaR measure.
- A variance-covariance model assumes that the change in portfolio value depends linearly on the risk factors and that their returns are normally distributed. Because real asset returns are not normally distributed, the normality assumption can make the result inaccurate.
- A historical simulation, or HVaR, draws a distribution curve from past (historical) data. It is arguably more reliable than a variance-covariance model because the VaR is computed entirely from actual historical price changes rather than a fitted distribution. The method is not perfect, however: the accuracy of the result depends on the time period chosen, and historical data cannot reflect everything that may happen in the future.
- A Monte Carlo simulation uses a random scenario generator and builds a distribution curve from the results. It is not based on an assumption of normality in returns: it starts from historical data but can incorporate subjective judgment and other information to improve the forecasted probability distribution. In addition, Monte Carlo simulations can assess the VaR of any type of portfolio and are flexible enough to cover options and option-like securities.
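The three approaches above can be sketched side by side. This is only an illustrative comparison: the portfolio is assumed to be a single $1M position, and the 500-day return history (mean, volatility, sample size) is synthetic:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
confidence = 0.99         # 99% confidence level (illustrative choice)
position = 1_000_000      # single $1M position (illustrative assumption)

# Hypothetical history of 500 daily returns for the asset.
returns = rng.normal(0.0005, 0.01, size=500)

# 1. Variance-covariance: assume normality, so VaR is a z-score
#    times the mean and volatility estimated from the sample.
z = NormalDist().inv_cdf(1 - confidence)   # about -2.33 at 99%
var_parametric = -(returns.mean() + z * returns.std()) * position

# 2. Historical simulation: read the empirical 1st-percentile return
#    straight from the data, with no distributional assumption.
var_historical = -np.percentile(returns, (1 - confidence) * 100) * position

# 3. Monte Carlo: simulate many scenarios from a chosen model (here a
#    normal fitted to history, but any model could be plugged in) and
#    take the percentile of the simulated distribution.
sims = rng.normal(returns.mean(), returns.std(), size=100_000)
var_monte_carlo = -np.percentile(sims, (1 - confidence) * 100) * position

print(f"parametric:  ${var_parametric:,.0f}")
print(f"historical:  ${var_historical:,.0f}")
print(f"monte carlo: ${var_monte_carlo:,.0f}")
```

With returns this well-behaved the three numbers land close together; the methods diverge precisely when returns are skewed or fat-tailed, which is where the choice of method matters.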
VaR methodology is sometimes criticized as being too simplistic and hence dangerous for risk managers to rely upon. These criticisms arise mainly from three areas:
- As noted above, VaR can offer a false sense of security: if x% of daily losses are expected to be less than $Y, then VaR equally implies that (100 - x)% of losses should be expected to be greater than $Y, with no indication of how much greater.
- VaR methodology relies on historical price data to estimate volatilities and correlations. These historical estimates often break down in times of significant market stress.
- VaR methodology assumes that price returns are normally distributed (equivalently, that prices are log-normally distributed). Much research has shown that many asset classes exhibit skewed return distributions with fatter tails than the normal distribution.
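The first criticism can be made concrete: two return series can have similar VaR while losses beyond the VaR threshold differ sharply. In this illustrative sketch (both samples are synthetic), a fat-tailed Student-t series is compared with a normal one of the same volatility:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Two synthetic daily-return samples with the same standard deviation:
# normal (thin tails) vs Student-t with 3 degrees of freedom (fat tails).
normal_r = rng.normal(0, 0.01, n)
t3_r = rng.standard_t(3, n) * (0.01 / np.sqrt(3))  # rescale t(3) to std 0.01

def var_and_tail_mean(r, level=0.99):
    """99% VaR and the average loss on the days the VaR is breached."""
    var = -np.percentile(r, (1 - level) * 100)
    tail_mean = -r[r <= -var].mean()
    return var, tail_mean

var_n, tail_n = var_and_tail_mean(normal_r)
var_t, tail_t = var_and_tail_mean(t3_r)
print(f"normal:     VaR={var_n:.4f}, mean loss beyond VaR={tail_n:.4f}")
print(f"fat-tailed: VaR={var_t:.4f}, mean loss beyond VaR={tail_t:.4f}")
```

The two VaR figures are of similar size, but the average loss beyond VaR is markedly larger for the fat-tailed series; that difference is exactly the information a single VaR number omits.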