In half a century, nothing much has changed in asset allocation, except for one thing: risk.

Modern asset allocation began with the mean-variance model pioneered by Nobel laureate Harry Markowitz in the 1950s, which illustrates how diversification across relatively uncorrelated assets smooths returns. Given a set of assets with known expected returns and covariances, one can construct an efficient portfolio: one that maximizes expected return for a given level of risk. So, over a 10-year investment period, one could expect a 10% return on stocks, give or take a few bad years, with an annualized risk (standard deviation) of 16%.
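As a rough illustration of the mechanics (not any particular fund's model), the classic mean-variance problem has a closed-form solution via its first-order conditions. The asset names, return and risk numbers, and the 0.2 correlation below are all hypothetical:

```python
import numpy as np

def efficient_weights(mu, Sigma, target):
    """Minimum-variance weights hitting a target expected return,
    with weights summing to 1 (shorting allowed in this sketch)."""
    n = len(mu)
    ones = np.ones(n)
    # KKT system for: minimize w' Sigma w
    #   subject to mu' w = target and 1' w = 1
    A = np.block([
        [2 * Sigma, mu[:, None], ones[:, None]],
        [mu[None, :], np.zeros((1, 2))],
        [ones[None, :], np.zeros((1, 2))],
    ])
    b = np.concatenate([np.zeros(n), [target, 1.0]])
    return np.linalg.solve(A, b)[:n]

# Hypothetical two-asset universe: "stocks" (10% expected return,
# 16% volatility) and "bonds" (5% return, 6% volatility).
mu = np.array([0.10, 0.05])
vols = np.array([0.16, 0.06])
corr = np.array([[1.0, 0.2], [0.2, 1.0]])
Sigma = corr * np.outer(vols, vols)
w = efficient_weights(mu, Sigma, target=0.08)
```

With only two assets and two constraints the answer is pinned down exactly (here, 60/40); with more assets, the same system trades risk off across the whole covariance matrix.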

But classically conceived asset allocation has become less a tool to mitigate risk and more a vehicle to discover previously unrecognized risks or patterns of risk.

The problem is that returns are highly volatile over short periods of time; risk less so. The giant California pension fund, CalPERS, which is in the midst of reconfiguring its asset allocation policy, recently released figures that gave the industry pause. From 1984 to 2009, the annualized risk of S&P 500 stocks was 15.53%. From 2000 to 2009, the risk was slightly higher, at 16.2%. However, that slight shift in risk accompanied a dramatic difference in returns: 10.28% annualized from 1984 to 2009, but negative 1% from 2000 to 2009.
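The arithmetic behind that contrast is easy to reproduce. Below is a minimal sketch with two made-up five-year return series, deliberately constructed so that both have identical volatility but very different compound returns:

```python
import numpy as np

def annualized_stats(annual_returns):
    """Geometric mean return and sample volatility from a series of
    annual total returns (as decimals, e.g. 0.10 for 10%)."""
    r = np.asarray(annual_returns)
    growth = np.prod(1 + r) ** (1 / len(r)) - 1  # compound (geometric) mean
    vol = np.std(r, ddof=1)                      # sample standard deviation
    return growth, vol

# Illustrative, fabricated series: deviations from the mean are
# identical in both, so measured "risk" is identical too.
boom = [0.25, -0.05, 0.30, -0.10, 0.20]   # compounds to a solid gain
bust = [0.10, -0.20, 0.15, -0.25, 0.05]   # compounds to a loss
```

Run on these series, the volatility estimates match to machine precision while one portfolio compounds upward and the other downward, which is the CalPERS point in miniature: standard deviation is a far more stable statistic than realized return.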

Many explanations can account for the drop. On the basis of fundamentals, stocks through the decade were still overvalued, so investors still face a protracted period of retrenchment, a reversion to the mean. U.S. stocks are trading slightly above their long-term average P/E ratio, suggesting there may be further mean reversion ahead.

Measuring risk
But then, there’s parameter mismatch, at least when it comes to forecasting. Risk is defined as standard deviation or volatility. Standard deviation appears to be more stable — or more predictable — than return streams. As Jorge Mina at RiskMetrics suggests in a 2005 paper, “There is extensive empirical evidence that volatilities and correlations can be forecasted with reasonable accuracy, but expected returns are much harder to forecast and there are no reliable methods for their estimation.”

Clearly, there has been a mismeasurement of risk. First, the conventional definition of risk doesn’t incorporate all the risks that matter. Second, asset allocation tends to depend on a normal, or bell-curved, distribution of returns. The normal distribution doesn’t work, CalPERS CIO Joe Dear said earlier this year at a conference sponsored by the Milken Institute.

In a presentation to the trustees of the pension plan last month, CalPERS staff pointed to two things about the plan’s asset allocation model:

• “Model views risk in terms of volatility (standard deviation). Other risks (liquidity, leverage) need to be considered in portfolio choice.

• “Total risk of the portfolio tends to be dominated by equity risk. Focus on asset allocation and return tends to mask risk diversification.”

More particularly, the staff presentation argued, “MVO [mean-variance optimization] is most sensitive to changes in expected returns. The expected return tends to have the highest forecast error among the assumptions, particularly in the case of equities.”

Given how much returns depend on previously undefined risk, it’s little wonder that investors have been re-examining the measurement of risk for a decade or more.

Sometimes it takes the form of risk budgeting. That requires determining whether the return a specific asset class or manager contributes to the portfolio delivers on the risk assumed. The practice derives from proprietary trading desks, which manage overall firm risk by parceling it out across individual traders, lest one bad trade blow up the firm.

For a pension fund, explains another Nobel laureate, William Sharpe, “[t]o obtain an expected return, it must take on some risk. One may think of the optimal set of investments as maximizing expected return for a given level of overall portfolio risk. The level of portfolio risk provides the risk budget, and the goal is to allocate this budget across investments in an optimal manner. Once a risk budget is in place, the manager can monitor the portfolio components to assure that risk positions do not diverge from those stated in the risk budget by more than pre-specified amounts.”
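Sharpe's description can be made concrete with the standard Euler decomposition of portfolio volatility into per-asset risk contributions, the usual starting point for a risk budget. The 60/40 weights and covariance numbers below are hypothetical:

```python
import numpy as np

def risk_contributions(w, Sigma):
    """Each asset's contribution to total portfolio volatility.
    The contributions sum exactly to the portfolio standard deviation."""
    w = np.asarray(w)
    port_vol = np.sqrt(w @ Sigma @ w)
    marginal = Sigma @ w / port_vol      # d(vol) / d(w_i)
    return w * marginal                  # Euler decomposition

# Hypothetical 60/40 stock/bond portfolio.
vols = np.array([0.16, 0.06])
corr = np.array([[1.0, 0.2], [0.2, 1.0]])
Sigma = corr * np.outer(vols, vols)
w = np.array([0.60, 0.40])
rc = risk_contributions(w, Sigma)
share = rc / rc.sum()
```

With these assumed numbers, equities account for roughly 90% of total portfolio risk despite a 60% capital weight, which is exactly the "total risk dominated by equity risk" problem the CalPERS staff flagged.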

This type of analysis can be helpful in understanding portfolio characteristics. For institutional and retail investors, it may turn out that, if risk is indeed rewarded, investors have been placing smaller bets on certain asset classes or managers than they truly merit. By contrast, the asset classes they mechanically allocate to, according to the policy portfolio for a pension fund or the investment policy statement for an individual investor, could lead to bad bets: bets that add nothing to portfolio performance because the risk is disproportionate to the returns. The risk/reward tradeoff, as measured by the Sharpe ratio, is low.

The relative stability of volatility and correlations leads to tools that can correct expected-return forecasts. Sharpe has tried, through reverse optimization, to reverse-engineer an expected rate of return from the performance characteristics of select asset managers. Should that be incorporated into investment decisions?
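In its textbook form, reverse optimization asks: what expected returns would make the weights actually held mean-variance optimal? The answer is proportional to the covariance matrix times the weights. This is only a sketch of that general idea, with an assumed risk-aversion coefficient and hypothetical numbers, not Sharpe's specific procedure:

```python
import numpy as np

def implied_returns(w, Sigma, risk_aversion=3.0):
    """Reverse optimization: the expected returns under which the
    observed weights w would be mean-variance optimal. The
    risk-aversion coefficient is an assumed, not observed, number."""
    return risk_aversion * Sigma @ np.asarray(w)

# Same hypothetical 60/40 stock/bond portfolio as above.
vols = np.array([0.16, 0.06])
corr = np.array([[1.0, 0.2], [0.2, 1.0]])
Sigma = corr * np.outer(vols, vols)
w = np.array([0.60, 0.40])
mu_implied = implied_returns(w, Sigma)
```

The appeal is that the fragile input, the return forecast, is derived from the comparatively stable inputs, covariances and holdings, rather than estimated directly.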

CalPERS has provided a set of three return forecasts to its board of trustees. Now comes the hard part of factoring in risk — or rather, the risk/reward tradeoff. Check back in December, when CalPERS has its ad hoc risk management committee meeting.

(09/28/10)