

Wednesday, Oct 16, 2013

Learning Risk and the "Limits to Forecasting and Prediction" With the Santa Fe Institute

Last October, I had the privilege to attend the Santa Fe Institute and Morgan Stanley's Risk Conference, and it was one of my most inspiring learning experiences of the year (read last year's post on the conference, and separately, my writeup of Ed Thorp's talk about the Kelly Criterion). It's hard not to marvel at the brainpower concentrated in a room with some of the best practitioners from a variety of disciplines ranging from finance to physics to computer science and beyond, and I would like to thank Casey Cox and Chris Wood for inviting me to these special events.

I first learned about the Santa Fe Institute (SFI) from Justin Fox's The Myth of the Rational Market. Fox concludes his historical narrative of economics, and of the role the efficient market hypothesis played in leading the field astray, with a note of optimism about SFI's application of physics to financial markets. Fox highlights the initial resistance of economists to the idea of physics-based models (including Paul Krugman's lament about "Santa Fe Syndrome") before explaining how the profession has in fact made a tangible shift towards thinking about markets in a complex, adaptive way. As Fox explains:

These models tend to be populated by rational but half-informed actors who make flawed decisions, but are capable of learning and adapting. The result is a market that never settles down into a calmly perfect equilibrium, but is constantly seeking and changing and occasionally going bonkers. To name just a few such market models...: "adaptive rational equilibrium," "efficient learning," "adaptive markets hypothesis," "rational belief equilibria." That, and Bill Sharpe now runs agent-based market simulations...to see how they play out.

The fact that Bill Sharpe has evolved toward a dynamic, rather than equilibrium-based, perspective on markets, and that Morgan Stanley now hosts a conference in conjunction with SFI, is telling as to how far this amazing multi-disciplinary organization has pushed the field of economics (and importantly, SFI's contributions extend well beyond economics to areas including anthropology, biology, linguistics, data analytics, and much more).

Last year's focus on behavioral economics provided a nice foundation upon which to learn about the "limits to forecasting and prediction." The conference once again commenced with John Rundle, a physics professor at UC-Davis with a specialty in earthquake prediction, speaking about some successful and some failed natural disaster forecasts (Rundle operates a great site called OpenHazards). Rundle first offered a distinction between forecasting and prediction: whereas a prediction is a statement validated by a single observation, a forecast is a statement for which multiple observations are required to establish a confidence level.

He then offered a decomposition of risk into its two subcomponents: Risk = Hazard x Exposure. The hazard component relates to your forecast (i.e. the potential for being wrong), while the exposure component relates to the magnitude of your risk (i.e. how much you stand to lose should your forecast be wrong). I find this a particularly meaningful breakdown considering how many people colloquially conflate hazard with risk, while ignoring the multiplier effect of exposure.
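To make the multiplier effect concrete, here is a minimal sketch of the Risk = Hazard x Exposure decomposition; the probabilities and dollar amounts are purely illustrative assumptions, not figures from the talk.

```python
# A minimal sketch of Rundle's Risk = Hazard x Exposure decomposition.
# The numbers below are illustrative assumptions, not from the talk.
def expected_loss(hazard_probability, exposure):
    """Hazard (chance the forecast is wrong) times exposure (amount at stake)."""
    return hazard_probability * exposure

# Same hazard, very different risk once exposure is accounted for:
print(expected_loss(hazard_probability=0.10, exposure=1_000_000))  # 100000.0
print(expected_loss(hazard_probability=0.10, exposure=10_000))     # 1000.0
```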

As I did last year, I'll share my notes from the presentations below. Again, I want to make clear that my notes are geared towards my practical needs and are not meant as a comprehensive summation of each presentation. I will also look to do a second post summing up some of the questions and thoughts inspired by my attendance at the conference, for the truly great learning experiences tend to raise even more questions than they answer.

Antti Ilmanen, AQR Capital

With Forecasting, Strategic Beats Tactical, and Many Beats Few

Small but persistent edges can be magnified by diversification (and, to a lesser extent, time). The bad news is that near-term predictability is limited (and humility is needed), and long-term forecasts that are right might not set up good trades. I interpret this to mean that the short term is the domain of randomness, while in the long term, even when we can make an accurate prediction, the market most likely has already priced it in.

Intuitive predictions inherently take longer time-frames. Further, there is performance decay whereby good strategies fade over time. In order to properly diversify, investors must combine some degree of leverage with shorting. Ilmanen likes to combine momentum and contrarian strategies, and prefers forecasting cross-sectional trades rather than directional ones.

When we make long-term forecasts for financial markets, we have three main anchors upon which to build: history, theory, and current conditions. For history, we can use average returns over time; for theory, we can use the CAPM; and for current conditions, we can apply the DDM. Such forecasts are as much art as science, and the relative weights of each input depend on your time horizon (i.e. the longer your timeframe, the less current conditions matter for the eventual accuracy of your forecast).

Historically the Equity Risk Premium (ERP) has averaged approximately 5%, and in today's environment the inverse Shiller CAPE (aka the cyclically adjusted earnings yield) is approximately 5%, meaning that 4-5% long-run returns in equity markets are justifiable, though ERPs have varied over time. Another way to look at projected returns is through the expected return of a 60/40 (60% equities / 40% bonds) portfolio. This is Ilmanen's preferred methodology, and in today's low-rate environment the prospect is for a 2.6% long-run return.
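The 60/40 arithmetic is simple enough to sketch out. The equity and bond inputs below are my own illustrative assumptions, back-solved to land near the 2.6% figure cited, not Ilmanen's actual estimates.

```python
# A back-of-the-envelope sketch of the 60/40 expected-return arithmetic.
# Both return inputs are illustrative assumptions, not Ilmanen's numbers.
equity_real_return = 0.040   # assumed long-run real equity return (in the spirit of ~1/CAPE, less drags)
bond_real_return = 0.005     # assumed long-run real bond return in a low-rate environment

expected_6040 = 0.60 * equity_real_return + 0.40 * bond_real_return
print(f"60/40 expected long-run real return: {expected_6040:.1%}")   # 2.6% with these inputs
```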

In forecasting and market positioning, "strategic beats tactical." People are attracted to contrarian signals, though the reality of contrarian forecasting is disappointing. The key is to try to get the long term right, while humbly approaching the tactical part. Value signals like the CAPE tend to be very useful for forecasting. To highlight this, Ilmanen shared a chart of 1/CAPE vs. the next five-year real return.

Market timing strategies have "sucked" in recent decades. In equity, bond and commodity markets alike, Sharpe Ratios have been negative for timing strategies. In contrast, value + momentum strategies have exhibited success in timing US equities in particular, though most of the returns happened early in the sample and were driven more by the momentum coefficient than value. Cheap starting valuations have resulted in better long-run returns due to the dual forces of yield capture (getting the earnings yield) and mean reversion (value reverting to longer-term averages). 

Since the 1980s, trend-following strategies have exhibited positive long-run returns. Such strategies work best over 1-12 month periods, but not longer. Cliff Asness of AQR says one of the biggest problems with momentum strategies is that people don't embrace them until too late in each investment cycle, at which point they are least likely to succeed. However, even in down market cycles, momentum strategies provided better tail-risk protection than other theoretically safe assets like gold or Treasuries. This was true in eight of the past 10 "tail-risk periods," including the Great Recession.

In an ode to diversification, Ilmanen suggested that investors "harvest many premia you believe in," across both alternative asset classes and traditional capital markets. Stocks, bonds and commodities exhibit similar Sharpe Ratios over long time-frames, and thus equal-weighting an allocation to each asset class would result in a higher Sharpe than the average of the constituent parts. We can take this one step further and diversify amongst strategies, in addition to asset classes, with the four main strategies being value, momentum, carry (aka high yield) and defensive.
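The arithmetic behind that claim is worth seeing once. Below is a minimal sketch, with assumed volatilities, Sharpe Ratios and a single assumed pairwise correlation (none of these are AQR's figures), showing how an equal-weight mix of imperfectly correlated assets carries a higher Sharpe than its average constituent.

```python
# Why an equal-weight mix of imperfectly correlated assets can have a higher
# Sharpe ratio than the average of its parts. All inputs are illustrative assumptions.
import numpy as np

sharpe = np.array([0.4, 0.4, 0.4])    # stocks, bonds, commodities (assumed equal Sharpe)
vol = np.array([0.15, 0.07, 0.20])    # assumed annualized volatilities
corr = 0.2                            # assumed pairwise correlation
weights = np.array([1/3, 1/3, 1/3])   # equal-weight allocation

excess_returns = sharpe * vol                                                # implied excess returns
cov = np.outer(vol, vol) * (np.full((3, 3), corr) + (1 - corr) * np.eye(3))  # covariance matrix

port_sharpe = (weights @ excess_returns) / np.sqrt(weights @ cov @ weights)
print(f"Portfolio Sharpe: {port_sharpe:.2f} vs. average constituent Sharpe: {sharpe.mean():.2f}")
```

With these inputs the equal-weight portfolio's Sharpe comes out around 0.56 against a 0.40 average, and the gap widens as correlations fall.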

Over the long-run, low beta strategies in equities have exhibited high returns, though at the moment low betas appear historically expensive relative to normal times.  That being said, value as a signal has not been useful historically in market-timing.

If there are some strategies that exhibit persistently better returns, why don't all investors use them? Ilmanen highlighted the "4 c's" of conviction, constraints, conventionality and capacity as reasons for opting out of successful investment paths.

 

Henry Kaufman, Henry Kaufman & Company

The Forecasting Frenzy

Forecasting is a long-standing human endeavor, and the forecaster in the business/economics arena is in the same vein as soothsayers and palm readers. In recent years, the number of forecasters and forecasts alike has grown tremendously. Sadly, forecasting continues to fail due to the following four behavioral biases:

  1. Herding--forecasts fluctuate minimally around a mean, and few are ever able to anticipate dramatic changes. When too many do anticipate dramatic changes, the path itself can change, preventing such predictions from coming true.
  2. Historical bias--forecasts rest on the assumption that the future will look like the past. While economies and markets have exhibited broad repetitive patterns, history "rhymes, but does not repeat."
  3. Bias against bad news--No one institutionally predicts negative events, as optimism is a key biological mechanism for survival. Plus, negative predictions are often hard to act upon. When Kaufman warned of interest rate spikes and inflation in the 1970s, people chose to tune him out rather than embrace the uncomfortable reality. 
  4. Growth bias--stakeholders in all arenas want continued expansion and growth at all times, even when it is impractical.

Collectively, the frenzy of forecasts has far outpaced our ability to forecast. With long-term forecasting, there is no scientific process for making such predictions. An attempt to project future geopolitical events based on the past is a futile exercise. In economics, fashions contribute to unsustainable momentums, both up and down, that lead to considerable challenges in producing accurate forecasts.

Right now, Kaufman sees some worrying trends in finance. First is the politicization of monetary policy, which he fears will not reverse soon. The tactics the Fed is undertaking today are unprecedented and becoming entrenched. The idea of forward guidance in particular is very dangerous, for it relies entirely upon forecasts. Since it's well established that even expert forecasts are often wrong, logic dictates that the entire concept of forward guidance is premised on a shaky foundation. Second, monetary policy has eclipsed fiscal policy as our go-to remedy for economic troubles. This is so because people like the quick and easy fixes offered by monetary solutions, as opposed to the much slower fiscal ones. In reality, the two (fiscal and monetary policy) should be coordinated. Third, economists are not paying enough attention to increasing financial concentration. There are fewer key financial institutions, and each is bigger than what used to be regarded as big. If/when the next one fails and the government runs it through the wind-down process, those assets will end up in the hands of the remaining survivors, further concentrating the industry.

The economics profession should simply focus on whether we as a society will have more or less freedom going forward. Too much of the profession instead focuses on what the next datapoint will be. In the grand scheme of things, the next datapoint is completely irrelevant, especially when the "next" completely ignores any revisions to prior data. There is really no functional or useful purpose for this type of activity.

 

Bruce Bueno de Mesquita, New York University

The Predictioneer's Game

The standard approach to making predictions, or designing policy around questions about the future, is to "ask the expert." Experts today are simply dressed-up oracles. They know facts, history and details, but forecasts require insight and methods that experts simply don't have. The accuracy of experts is no better than throwing darts.

Good predictions should use logic and evidence, and a better way to do this is using game theory. This works because people are rationally self-interested, have values and beliefs, and face constraints. Experts simply cannot analyze emotions or account for skills and clout in answering tough geopolitical questions. That being said, game theory is not a substitute for good judgment and it cannot replace good internal debate.

People in positions of power have influencers (like a president and his/her cabinet). In a situation with 10 influencers, there are roughly 3.6 million possible interactions in a complex adaptive situation (meaning what one person says can change what another thinks and does). In any single game, there are 16 x (N^2 - N) possible predictions, where N is the number of players.
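A quick check of those combinatorics, as I understand them: the 16 x (N^2 - N) formula is stated in the talk, while reading the "3.6 million interactions" as the number of orderings of 10 influencers (10!) is my own assumption about where that figure comes from.

```python
# Checking the combinatorics quoted above. The 16 * (N^2 - N) formula comes from
# the talk; interpreting "3.6 million interactions" as 10! is my own assumption.
from math import factorial

N = 10
print(f"16 * (N^2 - N) possible predictions for N = {N}: {16 * (N**2 - N):,}")  # 1,440
print(f"Possible orderings of {N} influencers (10!): {factorial(N):,}")          # 3,628,800
```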

In order to build a model that can make informed predictions, you need to know who the key influencers are. Once you know this, you must then figure out: 1) what they want on the issue; 2) how focused they are on that particular problem; 3) how influential each player could be, and to what degree they will exert that influence; and, 4) how resolved each player is to find an answer to the problem.  Once this information is gathered, you can build a model that can predict with a high degree of accuracy what people will do.  To make good predictions, contrary to what many say, you do not need to know history. It is much like a chessmaster who can walk up to a board in the middle of a game and still know what to do next.
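To make the mechanics a bit more tangible, here is a toy illustration of how inputs like these (each player's position, salience and clout) might roll up into a baseline prediction via a weighted mean; this is my own simplified sketch with hypothetical players, not Bueno de Mesquita's actual model, and it ignores resolve and the iterative bargaining rounds entirely.

```python
# A toy weighted-mean baseline, NOT Bueno de Mesquita's actual model.
# The players and their attributes are hypothetical.
players = [
    # (name, position on a 0-100 issue scale, salience 0-1, clout 0-1)
    ("Leader",     20, 0.9, 0.8),
    ("Minister",   70, 0.5, 0.6),
    ("Opposition", 90, 0.3, 0.4),
]

def baseline_prediction(players):
    """Weighted mean of positions, weighting each player by clout * salience."""
    weights = [clout * salience for _, _, salience, clout in players]
    positions = [position for _, position, _, _ in players]
    return sum(w * p for w, p in zip(weights, positions)) / sum(weights)

print(f"Baseline predicted outcome: {baseline_prediction(players):.1f}")  # ~40 on the 0-100 scale
```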

With this information, people can make better, more accurate predictions on identified issues, while also gaining a better grasp for timing. This can help people in a game-theory situation come up with strategies to overcome impediments in order to reach desired objectives.

Bueno de Mesquita then shared the following current predictions:

  • Senkaku Island dispute between China and Japan - As a relevant aside, Xi Jinping's power will shrink over the next three years. Japan should let its claims rest for now, rather than push. It will take two years to find a resolution, which will most likely include a joint venture between Japan and China to exploit the natural gas reserves.
  • Argentina - The "improvements" in today's business behavior are merely aesthetic in advance of the key mid-term elections. Kirchner is marginalizing political rivals, and could make a serious move to consolidate power for the long term.
  • Mexico - There is a 55% chance of a Constitutional amendment to open up energy, a 10% chance of no reform, and a 35% chance for international oil companies to get deep water drilling rights.  Mexico is likely to push through reforms in fiscal policy, social security, energy, labor and education, and looks to have a constructive backdrop for economic growth.
  • Syria with or without Assad will be hostile to the Western world.
  • China will look increasingly inward, with modest liberalization on local levels of governance and a strengthening Yuan.
  • The Eurozone will have an improving Spain and a higher likelihood that the Euro currency will be here to last.
  • Egypt is on the path to autocracy.
  • South Africa is at risk of turning into a rigged autocracy.

 

Aaron Clauset, University of Colorado and SFI

Challenges of Forecasting with Fat-Tailed Data

(Please note: statistics is most definitely not my strong suit. The content in Clauset's talk was very interesting, though some of it was over my head. I will therefore try my best to summarize the substance based on my understanding of it)

In attempting to predict fat-tail events, we are essentially trying to "predict the unpredictable." Fat tails exhibit high variance, so the average of a sample does not represent the individual observations well. In such samples, there is a substantial gap between the two extremes of the data, and we see these distributions in book sales (best-sellers like Harry Potter), earthquakes (power law distributions), market crashes, terror attacks and wars. With earthquakes, we know a lot about the physics behind them and how they are distributed, whereas with war we know only that it follows some statistical pattern, and the data is dynamic rather than fixed, because certain events influence subsequent events.

Clauset approached the question of modeling rare events through an attempt to ascertain how probable 9/11 was, and how likely another such attack is. The two sides of answering this question are building a model (to discover how probable it was) and making a prediction (to forecast how likely another would be). For the purposes of the model, one cares only about large events, because they have disproportionate consequences. When analyzing the data, we don't know what the distribution of the upper tail looks like because there simply are not enough datapoints. To overcome these problems, the modeler needs to separate the tail from the body, build multiple tail models, bootstrap the data and repeat.
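Here is a rough sketch of that recipe on synthetic data: fit a power-law tail above a cutoff, bootstrap the sample, and repeat. The data, cutoff, threshold and event count are all illustrative assumptions, not Clauset's dataset or results.

```python
# A sketch of the separate-the-tail / fit / bootstrap recipe on synthetic data.
# Everything here (data, x_min, threshold, event count) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
event_sizes = (rng.pareto(2.0, size=1_000) + 1) * 10   # synthetic heavy-tailed event sizes

def tail_exceedance_prob(sample, x_min, x):
    """P(a single event >= x) under a continuous power law fit to the tail above x_min."""
    tail = sample[sample >= x_min]
    alpha = 1 + len(tail) / np.sum(np.log(tail / x_min))   # maximum-likelihood tail exponent
    p_tail = len(tail) / len(sample)                        # chance an event lands in the tail at all
    return p_tail * (x / x_min) ** (1 - alpha)

threshold = 2_500   # a hypothetical "9/11-sized" event on this synthetic scale
boot = [tail_exceedance_prob(rng.choice(event_sizes, size=len(event_sizes), replace=True),
                             x_min=50, x=threshold)
        for _ in range(1_000)]

p_single = np.median(boot)
p_at_least_one = 1 - (1 - p_single) ** 10_000   # across an assumed record of 10,000 events
print(f"Per-event exceedance probability: {p_single:.1e}")
print(f"Chance of at least one such event in the record: {p_at_least_one:.1%}")
```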

In Clauset's analysis, he found that 9/11 was not an outlier based on both the model and the prediction. There is a greater than 1% chance of such an event happening. While this may sound small, it is within the realm of possible outcomes, and as such it deserves some attention. This has implications for policymakers: because such an event is a statistical possibility, we should frame our response within a context that acknowledges this reality.

There are some caveats to this model, however. An important one is that terrorism is not a stationary process, and events can create feedback loops which drive ensuing events. Further, events that appear independent in the data are not actually so. When forecasting fat tails, model uncertainty is always a big problem. Statistical uncertainty is a second one, due to the lack of enough data points and the large fluctuations in the tails themselves. Yet there is still useful information within the fat tails which can inform our understanding of them.

 

Philip Tetlock, University of Pennsylvania

Geopolitical Forecasting Tournaments Test the Limits of Judgment and Stretch the Boundaries of Science

I summarized Tetlock's talk at last year's SFI Risk Conference, so I suggest checking out those notes on the IARPA Forecasting Tournament as well. The tournament has several goals/benefits: 1) making explicit one's implicit theories of good judgment; 2) getting people in the habit of treating beliefs like testable hypotheses; and 3) helping people discover the drivers of probabilistic accuracy. (All of the above are reasons I would love to participate in the next round.) With regard to each area there are important lessons.

There is a spectrum that runs from perfectly predictable on the left to perfectly unpredictable on the right, and no person or system can perfectly predict everything. In any prediction, there is a trade-off between false positives and correct hits. This is called the accuracy function. 

With the forecasting tournament, people get to put their pet theories to the test. This can help improve the "assertion-to-evidence" ratios in debates between opposing schools of thought (for example, the Keynesians vs. the Hayekians). Such a tournament would be a great way to hold opposing schools accountable to their predictions, while also eliciting evidence as to why events are expected to transpire in a given way.

In the tournament, the participants are judged using a Brier Score, a measure that originated in weather forecasting to determine accuracy on probabilistic predictions over time. The people who perform best tend to have a persistence in good performance. The top 2% of performers from one year demonstrated minimal regression to the mean, leading to the conclusion that predictions are 60% skill and 40% luck on the luck/skill spectrum.

There are tangible benefits to interaction and collaboration. The groups with the smartest, most open-minded participants consistently outperformed all others. Those who used probabilistic reasoning in making predictions were amongst the best performers. IARPA concentrated the talent of some of the best performers in order to see if these "super teams" could beat the "wisdom of crowds." Super teams did win quite handily. Ability homogeneity, rather than being a problem, enhanced success. Elitist algorithms were used to generate forecasts by "extremizing" the forecasts of the best forecasters and weighting them most heavily (five people each at a .7 would combine to approximately a .85, based on the non-correlation of their successes). Slight digression: it was interesting sitting behind Ilmanen during this lecture and seeing him nod his head, as this theme resonated perfectly with his point about diversification in a portfolio resulting in the portfolio's Sharpe Ratio being above the average of its constituent parts.
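A minimal sketch of the extremizing idea: average the best forecasters' probabilities, then push the consensus away from 0.5, on the logic that agreement among partly independent forecasters is itself evidence. The transform and exponent below are generic illustrative choices, not the tournament's actual aggregation algorithm.

```python
# Extremized aggregation: average the probabilities, then push away from 0.5.
# The exponent and example forecasts are illustrative assumptions.
def extremized_aggregate(probs, a=2.5):
    """Mean probability p transformed to p^a / (p^a + (1 - p)^a)."""
    p = sum(probs) / len(probs)
    return p**a / (p**a + (1 - p)**a)

forecasts = [0.70, 0.65, 0.75, 0.70, 0.68]   # five forecasters leaning the same way
print(f"Simple average:       {sum(forecasts) / len(forecasts):.2f}")
print(f"Extremized aggregate: {extremized_aggregate(forecasts):.2f}")
```

With these inputs the simple average of roughly .70 moves to just under .90, in the spirit of the .7-to-.85 example above.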

There are three challenges when thinking about the value of a forecasting tournament. First, automation from machines is getting better, so why bother with people? While this is important, human judgment is still a very valuable tool and can actually improve the performance of these algorithms. Second, efficient market theory argues that what can be anticipated is already "priced in," so there should be little economic value to a good prediction anyway. Yet markets and people alike have very poor peripheral vision, and good prediction can in fact be valuable in that context. Last, game theory models like Bueno de Mesquita's can distill inputs through their own framework; while this may be a challenge, such modeling probably works even better as a complementary endeavor.

Friday, Oct 12, 2012

Learning Risk and Behavioral Economics with the Santa Fe Institute

This week I had the privilege to attend the Santa Fe Institute's conference in conjunction with Morgan Stanley entitled Risk: The Human Factor.  There was quite the lineup of speakers, on topics ranging from Federal Reserve policy to prospect theory to fMRIs of the brain's mechanics behind prediction.  The topics flowed together nicely and I believe helped cohesively construct an important lesson: rules-based systems are an outstanding, albeit imperfect, way for people and institutions alike to increase the capacity for successful prediction and for controlling risk.  In the past on this blog, I have spoken about the essence of financial markets as a means through which to raise capital.  However, in many key respects, financial markets have become a living being in their own right, and as presently orchestrated are vehicles where humans engage in continuous prediction and risk management, thus making the lessons learned from the SFI speakers amazingly important ones.

This notion of financial markets as living beings in SFI’s parlance can be described as a “complex adaptive system” and is precisely what SFI is geared towards learning about.  While financial markets (and human beings) are complex adaptive systems, SFI is a multi-disciplinary organization that seeks to understand such systems in many contexts, including financial markets, but also in biology, anthropology, social structures, genetics, chemistry, drug discovery and all else where the concepts can be applied. 

To highlight the multi-disciplinary nature of the event, John Rundle, one of the co-organizers and a physics professor at the University of California, Davis, with a special background in earthquake simulation and prediction, introduced the theme for the day. Dr. Rundle presented results for his trading strategy founded upon his theories for earthquake prediction.  The strategy was built upon asking the following question: can models for market risk be constructed that implicitly or explicitly account for human risk?  Seems like things are off to a great start.

Some of the coolest, most interesting moments came during the Q&A sessions, where this year's presenters, some past presenters, and many brilliant minds from finance, including Michael Mauboussin, Bill Miller and Marty Whitman, had the opportunity to engage each other on their theses, refining and expounding upon each other's ideas.  Sitting in the room and absorbing conversations like John Rundle speaking with Ed Thorp during an intermission about their own risk management perspectives and how to maximize the Kelly Criterion in investments was a surreal experience that I sadly cannot impart in this blog post, but I hope to channel its spirit in sharing some of the important ideas I learned. Further, I'd like to invite any of you readers out there to add your own thoughts in the comments below.

Let’s start with the first presentation and walk through the day together.  In each subsection, I will give the presenter and their lecture title, followed by some notes from the lecture that I felt were relevant to my practical needs (this is not meant to be a thorough overview of each and all presentations).  I will type up my notes from Ed Thorp’s presentation in its own blog post, for there seemed to be considerable interest from fellow Twitterers on that one lecture in particular.

David Laibson, Harvard University

Can We Control Ourselves?

Does society have the capacity to prepare for demographic change?  Experiments consistently show that people want the right thing, particularly when the question is presented as one of future choice.  However, when faced with the very same choice in the present, we fail to make the right decision, the very decision we would make for longer-term planning purposes.  There is a behavioral reason for this: we want the right thing, but the present gets the full brunt of the emotional and psychological weight, while planning is not nearly as influenced by the emotional element.  As a result, humans have a knack for making terrific plans with no follow-through.

There is a neural foundation for this, as we have two systems (this is derivative of the idea presented in Daniel Kahneman's Thinking, Fast and Slow):

  • The planning and focused system
  • The dopamine reward system based on immediate satisfaction

How can we help people follow-through on their goals in planning as it pertains to saving for retirement?

  • We can change the system from opt-in to auto-enroll, also known as the Nudge, an idea presented by behavioral economists Richard Thaler and Cass Sunstein in the book Nudge: Improving Decisions about Health, Wealth, and Happiness.
  • We can use what’s called “active choice” and punish inaction, such that people must call and make a decision about their savings, rather than delaying it.
  • Make enrollment quicker by taking away the 30 minute paperwork barrier.

Which is most effective?

  • 40% participate with opt-in
  • 50% participate with an easier process
  • 70% enroll with active choice
  • 90% participate with a nudge

To that end, we were presented with information that showed people recognize self-control problems and opt for less liquid savings options if given the choice, EVEN IF the returns are exactly the same.  That is, people acknowledge their inability to control the itch to break their well-made plans.

Vincent Reinhart, Managing Director and Chief U.S. Economist at Morgan Stanley

FED Behavior and Its Implications

  1. Our paradigm for monetary policy:
    1. We have an expectation for the path of the economy and the Fed sets policy to meet that expectation
    2. The difference in policy over 2 successive actions follows a random walk. You can only acquire so much new information about the economy over the course of six weeks, making decisions based primarily on prior knowledge.
    3. The puzzle of persistence:
      1. Despite the random walk on decision-making, a chart of the Fed Funds Rate doesn’t actually follow a random walk.  It is a persistent path, whereby if the interest rate went down the prior month, it is more likely to go down again in the present month.
      2. The source of persistence:
        1. If there is persistence, and policies are predictable, then there should be ways to generate returns off of it.  Prices would then be driven to fundamental value by arbitrage.  However, in central banking there is no arbitrage opportunity, because the mechanisms are confined to just the Fed and commercial banks, with no open market participation.
        2. While many talk about recent actions being "unprecedented," this is unequivocally not true.  These actions are very consistent with central bank behavior: QE and its ilk are balance sheet actions.
          1. The Fed previously had a larger balance sheet as a % of GDP, in the mid-1940s.
  2. Policy decisions are made by committees:
    1. Larger committees lead to less variance
    2. The right model to think about this is the committee as a jury, not a sample of policy options. The committees deliberate and take the best argument.
    3. There is a hierarchy of status in the Fed, including titles and media-friendliness, that leads to greater degrees of influence for some members over others.  This creates the perfect setting for herding outcomes.
    4. Thus the random walk fails.
    5. Why have we not had a strong bounce-back from this recession?
      1. Milton Friedman talks of “plucking on a string” whereby a big drop should lead to a big bounce.
        1. There are serious problems with this analogy:
          1. An equal percent decline and rise will not get you back to your starting point: (1 - x) * (1 + x) = 1 - x². For example, a 20% drop followed by a 20% gain leaves you at 96% of where you started.
          2. A "pluck" in physics never gets you back to your starting point either, as there is a transfer of energy in the transition from down to up.
          3. The observation that recessions should work like a plucked string was misguided, since it rested on a small sample covering ONLY 1946-1983, neither looking at the prior 100 years nor updating for the past 30 years.
  3. After severe financial crisis, recoveries are consistently very poor.
  4. What is the best paradigm for decision-making?
    1. Rules consistently do better than discretion.
    2. From June to December that conversation has started to change, and QE3 is far more analogous to a rules-based system. However, we don’t yet have enough information on when or how the rules will end.

I had the opportunity to ask a question, so I asked whether NGDP targeting would be such an optimal rules-based system, and if QE3 was something akin to NGDP.  Reinhart answered that while QE3 does get us closer to a rules-based system it is not like NGDP.  He further asserted that he wouldn’t necessarily be in favor of NGDP targeting, and that a system of NGDP targeting would be an implicit, under-the-radar way for the Fed to let the market know it will slacken on the inflation coefficient of its dual mandate.

Philip Tetlock, University of Pennsylvania

The IARPA Forecasting Tournament: How Good (Bad) Can Expert Political Judgment Become Under Favorable (Unfavorable) Conditions?

In the 1980s, the government funded a study looking into how well experts predict global events.  Today, that experiment is being recreated as the IARPA Forecasting Tournament, with a focus on forecasting global events of interest to the US government.  The experiment uses the Brier Score, first developed for weather forecasters, in order to gauge accuracy.  The best Brier score is 0, a dart-throwing chimp registers a 0.5, and the worst possible score is 2.
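For reference, here is a minimal sketch of the (multi-category) Brier score that produces exactly that 0 / 0.5 / 2 scale; the example forecasts are illustrative, not tournament data.

```python
# The multi-category Brier score: 0 is perfect, 0.5 is the dart-throwing chimp
# on a yes/no question, 2 is confidently wrong. Example forecasts are illustrative.
def brier_score(forecast, outcome):
    """Sum of squared differences between forecast probabilities and the realized outcome."""
    return sum((f - o) ** 2 for f, o in zip(forecast, outcome))

# A yes/no question that resolved "yes" (outcome = [1, 0]):
print(brier_score([1.0, 0.0], [1, 0]))   # 0.0  perfect
print(brier_score([0.5, 0.5], [1, 0]))   # 0.5  the chimp
print(brier_score([0.0, 1.0], [1, 0]))   # 2.0  confidently wrong
```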

Types of prediction ceilings:

  • Perfectly predictable events (100% ability to predict)
  • Partly predictable events
  • Perfectly unpredictable

In the first year of the tournament, the average score in the baseline was 0.37, better than the chimp, but not quite perfect.  The best algorithms score 0.17 and sit 0.29 units away from the truth.

The top-performing groups of participants had the following traits in common (note: collaboration was welcomed and fostered by the moderators).  I'm injecting my opinion here, but I find these to be very important goals for any organization attempting to compete in an arena where prediction is important (in this case, for investors the lessons can be particularly apt).

  1. The best participants
  2. Collaboration whereby people actually work together and deliberate about their predictions.
  3. A training in probabilistic reasoning a la Kahneman’s ideas in Thinking, Fast and Slow
  4. Combine the training and teamwork
  5. Elitist aggregation methods whereby more weight is added to the best predictors/experts in certain areas when combining predictions to make one uniform “best” effort at prediction.

Two lessons/observations:

  • Teams and algorithms consistently outperform individuals.
  • Forecasters consistently tend to over-predict change.

Elke Weber, Columbia University

Individual and Cultural Differences in Perceptions of Risk

In finance we think of risk as volatility. Culturally however, risk is a parameter, not a model.  Risk is therefore subjective and intuitive on an individual level.  Further, when faced with extreme outcomes, emotion becomes an increasingly more powerful force on perceptions of risk.  It is the perceptions of risk that drive behavior, and these perceptions exist on a relative, not absolute scale. Humans are biologically wired to that end.

Weber’s Law (not Elke Weber, an earlier Weber): the differences in the magnitude required to perceive two stimuli is proportional to the starting point. i.e. all differences are measured by a relating the new position to the original.

Familiarity actually works to reduce perceptions of risk, but not risk itself. Experts in a certain field tend to underestimate risks due to familiarity.  Return expectations and perceived riskiness predict choice, NOT the expectation of volatility (i.e. risk is perceived on a relative scale, not through the formulaic calculation of volatility).

Cultural differences—Shanghai vs. US MBA students:

  • The collectivist nature of Chinese culture mitigates the damaging effects of risk gone awry. This is called the “cushion hypothesis.”  As a result, Chinese MBA students tend to be more risk-seeking.
    • Families in China tend to help their members far more than in the US when it comes to transferables (people help mitigate the risk of a money-based decision gone wrong, but cannot do so on risky health decisions).
    • Risk was consistently based on relative perceptions of risk within the context of the safety net.

In the animal kingdom, the most basic way to perceive risk is through experience.  Small-probability events tend to be underweighted by experience, but overweighted by perception.

  • When small probability events hit, the recency bias makes people overweight the chances it will happen again.
  • Experience-based perceptions of risk tend to be more volatile.
  • Studies show that crises (like the Great Depression) do have an enduring impact on how risk is perceived.

I had the opportunity to ask Dr. Weber a question. I asked her how the point that familiarity tends to lead people to overlook risk can be reconciled with the value investing concept of sticking to a core competency: whether, by focusing on a core competency, investors were increasing risk rather than mitigating it.  Dr. Weber rightly observed that focusing on a core competency has some distinctions from mere familiarity, in that the idea is to work in areas where one has the most skill, but that there could very well be such a connection. In fact, she thought my question was "very interesting" and worth further exploration.

Nicholas Barberis, Yale University

Prospect Theory Applications in Finance

Can we do better in financial markets by replacing expected utility with prospect theory?

Some core elements of prospect theory in finance:

  1. People care about gains and losses, not absolute levels of performance
  2. People are more sensitive to losses than gains
  3. People weight probabilities in a non-linear way (i.e. they overweight low probability, underweight high probability). 
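Point 3, the non-linear weighting of probabilities, is easy to picture with a standard one-parameter weighting function of the Tversky-Kahneman form; the gamma value below is an illustrative assumption, not a number from the talk.

```python
# A standard one-parameter probability weighting function (Tversky-Kahneman form).
# The gamma value is an illustrative assumption.
def decision_weight(p, gamma=0.65):
    """Overweights small probabilities and underweights large ones."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"objective p = {p:.2f} -> decision weight = {decision_weight(p):.2f}")
```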

There is little support for beta as a predictor of returns.  Prospect theory instead focuses on the idea that a security's (or index's) own skewness will be priced based on the scale of the left or right tail.

  • In positively skewed stocks, people tend to overweight small chances of big success, and thus get low returns as a result (and vice versa).
  • As a result, big right skewness should have a low average return, and this is borne out in IPOs, out-of-the-money options, distressed stocks and volatile stocks.
  • Probability weighting in prospect theory is a better predictor of returns.
  • If people are loss averse, as prospect theory holds, the equity premium will be higher.
  • Overall, the market is negatively skewed, thus probability weighting produces a higher equity risk premium overall.

The Disposition Effect – people sell stocks that have gone up far more quickly than they sell stocks that have gone down.

  • Do people get pleasure/pain from realizing gains/losses? i.e. realization utility. The model predicts that:
    • There is greater turnover in bull markets as a result.
    • There is a greater propensity for selling above historical level of highs.
    • There is a preference for volatile stocks.
    • Momentum is also preferred.

Gregory Berns, Emory University

When Brains are Better than People: Using fMRI to Predict Markets

Dr. Berns started with a history of using blood pressure to ascertain where, how and why certain stimuli impact the brain.  Today we can use fMRI to clearly see activity in specific regions of the brain, and this provides a nice window into how the brain works.  Blood flow to regions of the brain changes based on which part of the brain is active/engaged at any given point in time.  Animals in the wild that are most adept at prediction can survive far better in changing environments than those who cannot.

Contrary to conventional wisdom, dopamine is not directly correlated to pleasure. Dopamine in fact is correlated to the anticipation (i.e. the delta) of pleasure.  It is the changes in dopamine levels which lead to decisions.  Dr. Berns showed a fascinating slide using the corking and drinking of a fine wine to illustrate this point.  It is in the moment of opening the bottle of wine that people experience the dopamine release, rather than during the pouring of the glass or taking the first sip.

Dr. Barberis had mentioned fMRI and its application to measuring the disposition effect, and here Dr. Berns confirmed and illustrated it.  There are three explanations for why the disposition effect happens:

  1. People’s risk preference
  2. The realization utility (i.e. people like realizing gains, loathe realizing losses)
  3. Mean reversion

Using fMRI, we can see that there are different approaches to the disposition effect depending on how and where the brain reacts (note: boy do I wish I had these slides, because the images are amazing in highlighting the effects).  People tend to fall into 2 camps—those who are influenced by the disposition effect, and those who are not.  fMRI shows that in those who ARE influenced by the effect, the blood flow is most active in the stem of the brain, the area where dopamine is released.  In those who are NOT impacted by the disposition effect, there is brain activity in a much broader portion of the cerebrum (the bigger part of the brain).

This effect was studied using fMRI in 2 contexts involved in understanding prediction.

  • Music: people were given fMRIs while many songs were played, to analyze where in the brain activity was triggered. Only years later, when one of the obscure songs became a hit, did Dr. Berns check his data; it showed that this hit song did in fact induce a higher degree of activity in the brain. The brain data correlated with eventual success better than stated likeability did.
  • Markets: MBA students were given fMRIs while simulating the ownership of stocks into earnings. Their reactions were tested for beats or misses.  The tests demonstrated that negative surprises hurt far more than positive ones feel good.  This could be a major explanatory force behind the disposition effect.

 

 

Please note: I apologize for any formatting errors. This post was drafted in Word and did not transfer very cleanly into the Squarespace format. In the interest of sharing the ideas in a timely manner, I will go ahead and publish before I have the chance to clean up all the spacing, tabbing, etc.  Please enjoy the content and try to look past the messy spacing.