
Entries in justin fox (2)


Learning Risk and the "Limits to Forecasting and Prediction" With the Santa Fe Institute

Last October, I had the privilege of attending the Santa Fe Institute and Morgan Stanley's Risk Conference, and it was one of my most inspiring learning experiences of the year (read last year's post on the conference, and separately, my writeup of Ed Thorp's talk about the Kelly Criterion). It's hard not to marvel at the brainpower concentrated in a room with some of the best practitioners from multi-disciplinary fields ranging from finance to physics to computer science and beyond, and I would like to thank Casey Cox and Chris Wood for inviting me to these special events.  

I first learned about the Santa Fe Institute (SFI) from Justin Fox's The Myth of the Rational Market. Fox concludes his historical narrative of economics and the role the efficient market hypothesis played in leading the field astray with a note of optimism about the SFI's application of physics to financial markets. Fox highlights the initial resistance of economists to the idea of physics-based models (including Paul Krugman's lament about "Santa Fe Syndrome") before explaining how the profession has in fact taken a tangible shift towards thinking about markets in a complex, adaptive way.  As Fox explains:

These models tend to be populated by rational but half-informed actors who make flawed decisions, but are capable of learning and adapting. The result is a market that never settles down into a calmly perfect equilibrium, but is constantly seeking and changing and occasionally going bonkers. To name just a few such market models...: "adaptive rational equilibrium," "efficient learning," "adaptive markets hypothesis," "rational belief equilibria." That, and Bill Sharpe now runs agent-based market simulations...to see how they play out.

The fact that Bill Sharpe has evolved toward a dynamic, rather than equilibrium-based, perspective on markets, and that Morgan Stanley now hosts a conference in conjunction with SFI, is telling as to how far this amazing multi-disciplinary organization has pushed the field of economics (and importantly, SFI's contributions extend well beyond economics to areas including anthropology, biology, linguistics, data analytics, and much more). 

Last year's focus on behavioral economics provided a nice foundation upon which to learn about the "limits to forecasting and prediction." The conference once again commenced with John Rundle, a physics professor at UC-Davis with a specialty in earthquake prediction, speaking about some successful and some failed natural disaster forecasts (Rundle operates a great site called OpenHazards). Rundle first offered a distinction between forecasting and prediction: whereas a prediction is a statement validated by a single observation, a forecast is a statement for which multiple observations are required to establish a confidence level.

He then offered a decomposition of risk into its two components: Risk = Hazard x Exposure. The hazard component relates to your forecast (i.e. the potential for being wrong), while the exposure relates to the magnitude of your risk (i.e. how much you stand to lose should your forecast be wrong). I find this a particularly meaningful breakdown considering how many people colloquially conflate hazard with risk while ignoring the multiplier effect of exposure.
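Rundle's decomposition is simple enough to express directly. A minimal sketch (the numbers are illustrative, not from the talk):

```python
# Rundle's decomposition: risk = hazard (chance the forecast is wrong)
# times exposure (how much you stand to lose if it is wrong).
# The figures below are illustrative, not from the talk.

def risk(hazard: float, exposure: float) -> float:
    """Expected loss from an adverse event."""
    return hazard * exposure

# Identical hazard, very different risk once exposure is accounted for:
print(risk(hazard=0.10, exposure=1_000))    # 100.0
print(risk(hazard=0.10, exposure=500_000))  # ~50,000
```

The multiplier effect is immediate: the same 10% hazard produces risk that differs by a factor of 500 depending on exposure.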

As I did last year, I'll share my notes from the presentations below. Again, I want to make clear that my notes are geared towards my practical needs and are not meant as a comprehensive summation of each presentation. I will also look to do a second post summing up some of the questions and thoughts inspired by my attendance at the conference, for truly great learning experiences tend to raise even more questions than they answer.

Antti Ilmanen, AQR Capital

With Forecasting, Strategic Beats Tactical, and Many Beats Few

Small but persistent edges can be magnified by diversification (and to a lesser extent, time). The bad news is that near-term predictability is limited (and humility is needed), and long-term forecasts that turn out right might not set up good trades. I interpret this to mean that the short term is the domain of randomness, while in the long term, even when we can make an accurate prediction, the market has most likely already priced it in.

Intuitive predictions inherently take longer time-frames. Further, there is performance decay whereby good strategies fade over time. In order to properly diversify, investors must combine some degree of leverage with shorting. Ilmanen likes to combine momentum and contrarian strategies, and prefers forecasting cross-sectional trades rather than directional ones.

When we make long-term forecasts for financial markets, we have three main anchors upon which to build: history, theory, and current conditions. For history, we can use average returns over time; for theory, we can use the CAPM; and for current conditions, we can apply the DDM. Such forecasts are as much art as science, and the relative weights of each input depend on your time horizon (i.e. the longer your timeframe, the less current conditions matter to the eventual accuracy of your forecast).

Historically the Equity Risk Premium (ERP) has averaged approximately 5%, and in today's environment the inverse Shiller CAPE (aka the cyclically adjusted earnings yield) is approximately 5%, meaning that 4-5% long-run returns in equity markets are justifiable, though ERPs have varied over time. Another way to look at projected returns is through the expected return of a 60/40 (60% equities / 40% bonds) portfolio. This is Ilmanen's preferred methodology, and in today's low-rate environment it points to a 2.6% long-run return.
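The 60/40 arithmetic is a simple weighted blend. A back-of-envelope sketch (the input estimates are my own hedged assumptions, not Ilmanen's actual figures):

```python
# Back-of-envelope long-run return for a 60/40 portfolio.
# The inputs are illustrative assumptions, not Ilmanen's figures.
equity_expected = 0.045   # ~4.5%, roughly the inverse-CAPE yield discussed above
bond_expected   = 0.000   # near-zero real expected return in a low-rate world

portfolio = 0.60 * equity_expected + 0.40 * bond_expected
print(f"{portfolio:.1%}")  # 2.7%, in the neighborhood of the 2.6% figure cited
```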

In forecasting and market positioning, "strategic beats tactical." People are attracted to contrarian signals, though the reality of contrarian forecasting is disappointing. The key is to try and get the long-term right, while humbly approaching the tactical part of it. Value signals like the CAPE tend to be very useful for forecasting. To highlight this, Ilmanen shared a chart of the 1/CAPE vs. the next five year real return.

Market timing strategies have "sucked" in recent decades. In equity, bond and commodity markets alike, Sharpe Ratios have been negative for timing strategies. In contrast, value + momentum strategies have exhibited success in timing US equities in particular, though most of the returns happened early in the sample and were driven more by the momentum coefficient than value. Cheap starting valuations have resulted in better long-run returns due to the dual forces of yield capture (getting the earnings yield) and mean reversion (value reverting to longer-term averages). 

Since the 1980s, trend-following strategies have exhibited positive long-run returns. Such strategies work best over 1-12 month periods, but not longer. Cliff Asness of AQR says one of the biggest problems with momentum strategies is how people don't embrace them until too late in each investment cycle, at which point they are least likely to succeed. However, even in down market cycles, momentum strategies provided better tail-risk protection than did other theoretically safe assets like gold or Treasuries.  This was true in eight of the past 10 "tail-risk periods," including the Great Recession.

In an ode to diversification, Ilmanen suggested that investors "harvest many premia you believe in," including alternative asset classes and traditional capital markets. Stocks, bonds and commodities exhibit similar Sharpe Ratios over long time-frames, and thus equal-weighting an allocation to each asset class would result in a higher Sharpe than the average of the constituent parts. We can take this one step farther and diversify amongst strategies, in addition to asset classes, with the four main strategies being value, momentum, carry (aka high yield) and defensive.

Over the long-run, low beta strategies in equities have exhibited high returns, though at the moment low betas appear historically expensive relative to normal times.  That being said, value as a signal has not been useful historically in market-timing.

If there are some strategies that exhibit persistently better returns, why don't all investors use them? Ilmanen highlighted the "4 c's" of conviction, constraints, conventionality and capacity as reasons for opting out of successful investment paths.


Henry Kaufman, Henry Kaufman & Company

The Forecasting Frenzy

Forecasting is a long-standing human endeavor, and the forecaster in the business/economics arena comes from the same lineage as soothsayers and palm readers. In recent years, the number of forecasters and forecasts alike has grown tremendously. Sadly, forecasting continues to fail due to the following four behavioral biases:

  1. Herding--forecasts minimally fluctuate around a mean, and few are ever able to anticipate dramatic changes. When too many do anticipate dramatic changes, the path itself can change preventing such predictions from coming true.
  2. Historical bias--forecasts rest on the assumption that the future will look like the past. While economies and markets have exhibited broad repetitive patterns, history "rhymes, but does not repeat."
  3. Bias against bad news--No one institutionally predicts negative events, as optimism is a key biological mechanism for survival. Plus, negative predictions are often hard to act upon. When Kaufman warned of interest rate spikes and inflation in the 1970s, people chose to tune him out rather than embrace the uncomfortable reality. 
  4. Growth bias--stakeholders in all arenas want continued expansion and growth at all times, even when it is impractical.

Collectively, the frenzy of forecasts has far outpaced our ability to forecast. With long-term forecasting, there is no scientific process for making such predictions. An attempt to project future geopolitical events based on the past is a futile exercise. In economics, fashions contribute to unsustainable momentums, both up and down, that lead to considerable challenges in producing accurate forecasts.

Right now, Kaufman sees some worrying trends in finance. First is the politicization of monetary policy, which he fears will not reverse soon. The tactics the Fed is undertaking today are unprecedented and becoming entrenched. The idea of forward guidance in particular is very dangerous, for it relies entirely upon forecasts; since it's well established that even expert forecasts are often wrong, logic dictates that the entire concept of forward guidance rests on a shaky foundation. Second, monetary policy has eclipsed fiscal policy as our go-to remedy for economic troubles, because people prefer the quick and easy fixes offered by monetary solutions to the much slower fiscal ones. In reality, the two should be coordinated. Third, economists are not paying enough attention to increasing financial concentration. There are fewer key financial institutions, and each is bigger than what used to be regarded as big. If and when the next one fails and the government runs it through the wind-down process, its assets will end up in the hands of the remaining survivors, further concentrating the industry.

The economics profession should simply focus on whether we as a society will have more or less freedom going forward. Too much of the profession instead focuses on what the next datapoint will be. In the grand scheme of things, the next datapoint is completely irrelevant, especially when the "next" completely ignores any revisions to prior data.  There is really no functional, or useful purpose for this type of activity.


Bruce Bueno de Mesquita, New York University

The Predictioneer's Game

The standard approach to making predictions or designing policy around questions about the future is to "ask the expert." But experts today are simply dressed-up oracles: they know facts, history and details, yet forecasts require insight and methods that experts don't have. The accuracy of experts is no better than throwing darts. 

Good predictions should use logic and evidence, and a better way to do this is using game theory. This works because people are rationally self-interested, have values and beliefs, and face constraints. Experts simply cannot analyze emotions or account for skills and clout in answering tough geopolitical questions. That being said, game theory is not a substitute for good judgment and it cannot replace good internal debate.

People in positions of power have influencers (like a president and his/her cabinet). In a situation with 10 influencers, there are 3.6 million possible interactions that exist in a complex adaptive situation (meaning what one person says can change what another thinks and does). In any single game, there are 16 x (N^2-N) possible predictions, where N is the number of players.

In order to build a model that can make informed predictions, you need to know who the key influencers are. Once you know this, you must then figure out: 1) what they want on the issue; 2) how focused they are on that particular problem; 3) how influential each player could be, and to what degree they will exert that influence; and, 4) how resolved each player is to find an answer to the problem.  Once this information is gathered, you can build a model that can predict with a high degree of accuracy what people will do.  To make good predictions, contrary to what many say, you do not need to know history. It is much like a chessmaster who can walk up to a board in the middle of a game and still know what to do next.
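A common textbook simplification of this kind of model scores each influencer on position, salience (focus) and clout (influence), then takes a weighted average as the expected outcome. The sketch below is my own illustration of that idea, not Bueno de Mesquita's actual proprietary model:

```python
# Toy "expected outcome" in the spirit of the model described: each
# influencer has a position on the issue (0-100 scale), a salience
# (how focused they are on it), and clout (how influential they are).
# The forecast is the salience-and-clout-weighted mean position.
# Players and numbers are hypothetical.

players = [
    # (position, salience, clout)
    (80, 0.9, 0.7),
    (30, 0.4, 1.0),
    (55, 0.8, 0.5),
]

weight_sum = sum(s * c for _, s, c in players)
forecast = sum(p * s * c for p, s, c in players) / weight_sum
print(round(forecast, 1))
```

The complex-adaptive part of the real model comes from iterating: after each round, players observe one another's revealed positions and update, so the weighted outcome shifts game by game.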

With this information, people can make better, more accurate predictions on identified issues, while also gaining a better grasp for timing. This can help people in a game-theory situation come up with strategies to overcome impediments in order to reach desired objectives.

Bueno de Mesquita then shared the following current predictions:

  • Senkaku Island dispute between China and Japan - As a relevant aside, Xi Jinping's power will shrink over the next three years. Japan should let their claims rest for now, rather than push. It will take two years to find a resolution, which will most likely include a joint venture between Japan and China for exploitation of the natural gas reserves.
  • Argentina - The "improvements" in today's business behavior are merely cosmetic in advance of the key mid-term elections. Kirchner is marginalizing political rivals, and could make a serious move to consolidate power for the long term.
  • Mexico - There is a 55% chance of a Constitutional amendment to open up energy, a 10% chance of no reform, and a 35% chance for international oil companies to get deep water drilling rights.  Mexico is likely to push through reforms in fiscal policy, social security, energy, labor and education, and looks to have a constructive backdrop for economic growth.
  • Syria with or without Assad will be hostile to the Western world.
  • China will look increasingly inward, with modest liberalization on local levels of governance and a strengthening Yuan.
  • The Eurozone will have an improving Spain and a higher likelihood that the Euro currency will be here to last.
  • Egypt is on the path to autocracy.
  • South Africa is at risk of turning into a rigged autocracy.


Aaron Clauset, University of Colorado and SFI

Challenges of Forecasting with Fat-Tailed Data

(Please note: statistics is most definitely not my strong suit. The content in Clauset's talk was very interesting, though some of it was over my head. I will therefore try my best to summarize the substance based on my understanding of it)

In attempting to predict fat-tail events, we are essentially trying to "predict the unpredictable." Fat tails exhibit high variance, so the average of a sample can badly misrepresent the distribution. In such samples, there is a substantial gap between the two extremes of the data, and we see these distributions in book sales (best-sellers like Harry Potter), earthquakes (power-law distributions), market crashes, terror attacks and wars. With earthquakes, we know a lot about the physics behind them and how they are distributed, whereas with war we know it follows some statistical pattern, but the data is dynamic rather than fixed, because certain events influence subsequent events.

Clauset approached the question of modeling rare events through an attempt to ascertain how probable 9/11 was, and how likely another such event is. The two sides of answering this question are building a model (to discover how probable it was) and making a prediction (to forecast how likely another would be). For the purposes of the model, one cares only about large events, because they have disproportionate consequences. When analyzing the data, we don't know what the distribution of the upper tail looks like because there simply are not enough datapoints. To overcome these problems, the modeler needs to separate the tail from the body, build multiple tail models, bootstrap the data and repeat.
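That recipe (threshold off the tail, fit it, bootstrap for uncertainty) can be sketched on synthetic heavy-tailed data. The Pareto data, the Hill-style estimator and all parameters below are my own illustrative choices, not Clauset's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "event severity" data with a heavy Pareto tail (pdf ~ x^-3).
data = rng.pareto(a=2.0, size=5_000) + 1.0

# 1. Separate the tail from the body at a threshold x_min.
x_min = np.quantile(data, 0.95)
tail = data[data >= x_min]

# 2. Hill/MLE-style estimator for the power-law density exponent alpha.
def fit_alpha(tail, x_min):
    return 1.0 + len(tail) / np.sum(np.log(tail / x_min))

# 3. Bootstrap: resample the tail and refit to get uncertainty on alpha.
alphas = []
for _ in range(200):
    sample = rng.choice(tail, size=len(tail), replace=True)
    alphas.append(fit_alpha(sample, x_min))

print(fit_alpha(tail, x_min), np.percentile(alphas, [2.5, 97.5]))
```

With a fitted tail exponent and its bootstrap interval in hand, one can then ask how surprising an observed extreme event is under the model, which is the spirit of the 9/11 analysis.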

In Clauset's analysis, he found that 9/11 was not an outlier according to both the model and the prediction. There is a greater than 1% chance of such an event happening. While this may sound small, it is within the realm of possible outcomes, and as such it deserves attention. This has implications for policymakers: since such an event is a statistical possibility, we should frame our response within a context that acknowledges this reality.

There are some caveats to this model, however. An important one is that terrorism is not a stationary process, and events can create feedback loops which drive ensuing events. Further, events that appear independent in the data are not actually so. When forecasting fat tails, model uncertainty is always a big problem. Statistical uncertainty is a second one, due to the lack of data points and the large fluctuations in the tails themselves. Yet still, there is useful information within the fat tails which can inform our understanding of them. 


Philip Tetlock, University of Pennsylvania

Geopolitical Forecasting Tournaments Test the Limits of Judgment and Stretch the Boundaries of Science

I summarized Tetlock's talk at last year's SFI Risk Conference, so I suggest checking out those notes on the IARPA Forecasting Tournament as well. The tournament has several goals/benefits: 1) making explicit one's implicit theories of good judgment; 2) getting people in the habit of treating beliefs as testable hypotheses; and 3) helping people discover the drivers of probabilistic accuracy. (All of the above are reasons I would love to participate in the next round.) With regard to each area there are important lessons. 

There is a spectrum that runs from perfectly predictable on the left to perfectly unpredictable on the right, and no person or system can perfectly predict everything. In any prediction, there is a trade-off between false positives and correct hits. This is called the accuracy function. 

With the forecasting tournament, people get to put their pet theories to the test. This can help improve the "assertion-to-evidence" ratios in debates between opposing schools of thought (for example, the Keynesians vs. the Hayekians). Tournaments would be a great way to hold opposing schools of thought accountable to their predictions, while also eliciting evidence as to why events are expected to transpire in a given way.

In the tournament, participants are judged using a Brier Score, a measure that originated in weather forecasting to assess the accuracy of probabilistic predictions over time. The best performers tend to exhibit persistence: the top 2% of performers from one year demonstrated minimal regression to the mean, leading to the conclusion that predictions are 60% skill and 40% luck on the luck/skill spectrum.
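For concreteness, the common binary form of the Brier score is just the mean squared error of probability forecasts against outcomes (the tournament's exact scoring variant may differ; this is a sketch of the standard 0-to-1 form):

```python
# Brier score: mean squared error between forecast probabilities and
# outcomes (1 if the event happened, 0 if not). Lower is better;
# always answering 50/50 scores 0.25 on binary questions.

def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A sharp, well-calibrated forecaster vs. a coin-flipper (made-up data):
outcomes   = [1, 0, 1, 1, 0]
sharp      = [0.9, 0.1, 0.8, 0.7, 0.2]
coin_flips = [0.5] * 5

print(brier_score(sharp, outcomes))       # 0.038
print(brier_score(coin_flips, outcomes))  # 0.25
```

Note that the score rewards both calibration and decisiveness: hedging everything at 50% caps you at 0.25, while confident wrong answers are punished quadratically.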

There are tangible benefits to interaction and collaboration. The groups with the smartest, most open-minded participants consistently outperformed all others, and those who used probabilistic reasoning in making predictions were amongst the best performers. IARPA concentrated some of the best performers together in order to see if these "super teams" could beat the "wisdom of crowds." Super teams did win, quite handily; ability homogeneity, rather than being a problem, enhanced success. Elitist algorithms were used to generate forecasts by "extremizing" the forecasts from the best forecasters and weighting those most heavily (5 people with a .7 Brier would upgrade to approximately a .85 based on the non-correlation of their success). Slight digression: it was interesting sitting behind Ilmanen during this lecture and seeing him nod his head, as this theme resonated perfectly with his point about diversification in a portfolio resulting in a Sharpe Ratio above the average of its constituent parts.
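A standard way to "extremize" a pooled forecast in the aggregation literature is to raise the odds to a power greater than one; the transform and exponent below are an illustrative sketch, not the tournament's actual algorithm:

```python
# "Extremizing" a pooled forecast: if several skilled, largely uncorrelated
# forecasters independently say 70%, the combined evidence justifies a more
# extreme probability than 70%. One standard transform raises the odds to a
# power a > 1. The exponent here is an illustrative choice.

def extremize(p: float, a: float = 2.0) -> float:
    return p**a / (p**a + (1 - p)**a)

pooled = 0.70
print(round(extremize(pooled), 2))  # 0.84, in the spirit of the .7 -> ~.85 upgrade
```

The intuition mirrors the Sharpe point: when the forecasters' errors are uncorrelated, their agreement carries more information than any single forecast, so the aggregate should be more confident than its average input.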

There are three challenges when thinking about the value of a forecasting tournament. First, automation from machines is getting better, so why bother with people? While this is important, human judgment is still a very valuable tool and can actually improve the performance of these algorithms. Second, efficient market theory argues that what can be anticipated is already "priced in," so there should be little economic value to a good prediction anyway; yet markets and people alike have very poor peripheral vision, and good prediction can in fact be valuable in that context. Last, game theory models like Bueno de Mesquita's can distill inputs from their own framework. While this may be a challenge, it probably works even better as a complementary endeavor.


Is Stock Picking Dead?

In these times of macro-volatility, it’s a line heard more each day: “stock picking is dead.”  The reasons listed are plentiful, but all focus on the increased correlations across the stock market and various asset classes today.  There is certainly an element of truth to the notion that in these volatility storms, everything moves closer together, but the claim that stock picking is dead is not a necessary outcome of the idea that correlations are higher.  There are factors that go beyond just what is happening in today’s market that have led to this misguided conclusion that stock picking is dead, so let’s take a deeper look at what’s behind this financially harmful directive.

Just the other day, Jason Zweig, in his Wall Street Journal column, took a look at whether “index funds are complicating the market,” and if so, what the consequences are.   This is an important conversation, and relates directly to whether stock picking is in fact dead, or not.

Who makes the claim:

There are several different groups behind this claim, and they are important.  In terms of economics, one of the core arguments for the efficient market hypothesis (EMH) is the notion that there is no alpha (outperformance), as prices reflect all known information, and therefore it is nearly impossible to beat the broader indices.  In a rational and efficient world, where information is ubiquitous, why would anyone sell to another with exactly the same information?  Or so the EMHers ask.  

In the stock market, indexers are a direct outgrowth of the EMH. Burton Malkiel’s Random Walk is probably the most visible bridge between the intellectual economics community and actual asset allocators.  Indexers carry out the theoretical consequences of the efficient market theory in buying broad baskets of stocks designed to mirror the performance of the economy at large. 

Next up is the class of speculators who call themselves technical analysts.  There are many kinds of technical analysts; I use technicals myself as an important tool for timing and scaling into positions, and as a basic risk-management tool, and I do not mean those types of technical analysts.  More particularly, I am speaking of the technicians who use technicals as their one and only metric through which to buy a stock.  Stock picking doesn't matter to these people, because they believe that charts rule the market and therefore only charts matter.  They buy the best-looking lines on charts, and sell the worst, without "picking" a company based on its own intrinsic metrics.  Their intellectual ancestors are Charles Dow and Roger Babson.

Last are the perma-bears.  They believe that correlations are high, because volatility is here to stay as a direct result of some sort of serious global economic malady that will lead to the “end of the economic world” as we know it.  You’ll know these people based on their zealotry for gold as an asset class, complete conviction that they know exactly why everything is going to shit when markets are crashing, and ongoing claims that the market is “inflated, manipulated and rigged” whenever prices are moving higher. To them, everything is on its way to zero, so why bother?

Taking the other side:

The EMHers like to ask: what information could one person possibly possess, which the counterparty does not, that makes buying Company A's stock worthwhile for oneself while selling it is worthwhile for the other side?  The problem with this question is that there are far more reasons why a market participant would sell a stock than disagreement over the known information pertaining to the value of a particular business. 

This is a crucial point in why the efficient market theory breaks down.  It only works in an environment when people are buying and selling based on the same information, focusing on the same companies, and thinking about only factors related to the underlying businesses.  There are plenty of people who don’t base their decisions on “information” at all, and are acting based on emotion like sellers in a panic or buyers in a bubble.  There are other examples outside the emotional realm: Indexers, macro-traders, technicians, and short-term speculators.  Importantly, in practice, indexing itself is a major source of company-specific inefficiencies, rather than a factual outgrowth of the EMH.  Legendary value investor, Seth Klarman, has specifically referenced his affinity for investment opportunities that directly arise out of inefficiencies during index rebalancing (Hat tip to Distressed Debt Investing), and this is but one example.

Beyond styles, there are many reasons that people may sell (or buy) stocks that have nothing to do with company-specific beliefs.  This list includes, but is not limited to, young workers’ deposits into 401(k)s, retirees selling savings to live off of, and mutual funds that are forced to sell shares in a spin-off which is worth a price below the fund’s mandated minimum capitalization.  All of the above transactions have nothing to do with what the person inspiring the purchase (or sale) of an equity thinks about that one particular company.  Again, this is important.

Why we have markets in the first place:

While financial headlines swing from "the end of the financial world" to dramatic rescues, it's necessary to take a step back and think about what investing in markets is all about.  Stock picking is precisely that. 

When we look to the essence of the stock market, we realize that it is an arena for companies to sell fractional ownership interests in exchange for the capital necessary to build, grow and/or maintain a business (forget for a second that most IPOs today are to cash out early investors).  This is not a transaction based on the belief that one person is wrong and the other right, nor is it a situation where one person must lose at the other’s expense.  It’s really supposed to be a win/win for the buyer and seller—the buyer gets an ownership stake in a company, and the seller gets the capital they need to deploy in order to increase the value of the buyer’s ownership interest.  At the end of the day, unless people are in markets to invest in companies themselves, then everything else is illusory, and that’s not really the case.

Somehow, the meaning of markets has been so greatly abstracted by the proliferation of types of participants that this existential fact ends up forgotten.   In many ways, it’s thanks to actual inefficiencies in markets that people learned ways beyond just investing in businesses in order to make substantial sums of money in capital markets.  This is why they say that “stock picking” aka investing in companies is dead.

Pulling it All Together:

These other strategies have made money for periods of time, but the most consistent across all time periods is investing in strong businesses for the long term.  This is not to discount the efficacy and importance of other types of analysis.  In some ways, it is the generalist who is best equipped for long-term investment, for knowledge of macroeconomics, technicals, etc. is an invaluable tool for a company-specific investor.  It’s important for any kind of investor to understand who the market participants are and why events play out as they do.  Use these other strategies as tools to maximize the returns from investing in really solid long-term businesses, not as ends in and of themselves.

In 2010, I interviewed Justin Fox (now the editorial director of the Harvard Business Review Group) shortly after he wrote The Myth of the Rational Market, a comprehensive history of competing investment theories and how they evolved into competing schools of economic thought.  Fox concludes that Benjamin Graham-style value investing is the only consistent strategy through all time, but asserts that it is psychologically draining to consistently impose that discipline on oneself, and therefore people tend to drift dangerously toward alternatives.  The book is an excellent read that traces the history of investing and economics from Irving Fisher to Benjamin Graham to Eugene Fama all the way through the physics-driven Santa Fe Institute.  In my interview with Fox, we had the following exchange that I think aptly sums up the problem with the question "is stock picking dead?" and why the answer is conclusively no:

Elliot: Now, correct me if I’m wrong but one of the concepts that I took to be a semi-conclusion in “The Myth of the Rational Market” was the idea that the Benjamin Graham model of value investing has withstood the test of time, that people who are able to take the information and come up with what they think is a fair value and have the ability to ignore what “Mr. Market” is telling them on a daily basis have an edge over time.  Do you think that’s a valid interpretation?

Justin: Absolutely – I completely think that’s true.  And I think one of the interesting things that is sort of common sense, but that finance scholars have finally started studying and recognizing, is that one of the big reasons why it’s really hard for a professional investor to stick it out as a value investor is that it requires being unfashionable and going against the crowd.  And unless you’ve either built up this incredible reputation—although even that doesn’t really help you that much in investing, as people forget your reputation after a year or two if you fail to beat the market—or you get a situation like Berkshire Hathaway where it’s actually not the investors’ discretionary money that you’re investing, but the cash flow from Berkshire.  There’s really no way anybody can discipline Buffett except over maybe a really long period, and that gives you the freedom to do it.

The flip side of that is that as an investor you have a situation where there’s so little control over the investment decisions.  That’s the difficult situation that fund investors fear.  There’s a reason why people invest in mutual funds.  They like the flexibility of being able to take their money out.  But the very fact that they do that, and that if you’re some value fund and it’s 1999, everybody wants to take their money out of your fund regardless of your own performance; that’s exactly why it’s hard to be a value investor, and a good economic reason for why value investing works.  Beyond that is the individual as a value investor.  Obviously, you don’t have to worry about customers, but you just need a pretty strong constitution and maybe a different psychological wiring than most people to be able to stick that out.