Books by Taleb, Nassim Nicholas

Taleb, Nassim Nicholas. Antifragile. New York: Random House, 2012. ISBN 978-0-8129-7968-8.
This book is volume three in the author's Incerto series, following Fooled by Randomness (February 2011) and The Black Swan (January 2009). It continues to explore the themes of randomness, risk, and the design of systems (physical, economic, financial, and social) which perform well in the face of uncertainty and infrequent events with large consequences. He begins by posing the deceptively simple question, “What is the antonym of ‘fragile’?”

After thinking for a few moments, most people will answer with “robust” or one of its synonyms such as “sturdy”, “tough”, or “rugged”. But think about it a bit more: does a robust object or system actually behave in the opposite way to a fragile one? Consider a teacup made of fine china. It is fragile—if subjected to more than a very limited amount of force or acceleration, it will smash into bits. It is fragile because application of such an external stimulus, for example by dropping it on the floor, will dramatically degrade its value for the purposes for which it was created (you can't drink tea from a handful of sherds, and they don't look good sitting on the shelf). Now consider a teacup made of stainless steel. It is far more robust: you can drop it from ten kilometres onto a concrete slab and, while it may be slightly dented, it will still work fine and look OK, maybe even acquiring a little character from the adventure. But is this really the opposite of fragility? The china teacup was degraded by the impact, while the stainless steel one was not. But are there objects and systems which improve as a result of random events: uncertainty, risk, stressors, volatility, adventure, and the slings and arrows of existence in the real world? Such a system would not be robust, but would be genuinely “anti-fragile” (which I will subsequently write without the hyphen, as does the author): it welcomes these perturbations, and may even require them in order to function well or at all.

Antifragility seems an odd concept at first. Our experience is that unexpected events usually make things worse, and that the inexorable increase in entropy causes things to degrade with time: plants and animals age and eventually die; machines wear out and break; cultures and societies become decadent, corrupt, and eventually collapse. And yet if you look at nature, antifragility is everywhere—it is the mechanism which drives biological evolution, technological progress, the unreasonable effectiveness of free market systems in efficiently meeting the needs of their participants, and just about everything else that changes over time, from trends in art, literature, and music, to political systems, and human cultures. In fact, antifragility is a property of most natural, organic systems, while fragility (or at best, some degree of robustness) tends to characterise those which were designed from the top down by humans. And one of the paradoxical characteristics of antifragile systems is that they tend to be made up of fragile components.

How does this work? We'll get to physical systems and finance in a while, but let's start out with restaurants. Any reasonably large city in the developed world will have a wide variety of restaurants serving food from numerous cultures, at different price points, and with ambience catering to the preferences of their individual clientèles. The restaurant business is notoriously fragile: the culinary preferences of people are fickle and unpredictable, and restaurants which fall behind the times frequently go under. And yet, among the population of restaurants in a given area at a given time, customers can usually find what they're looking for. The restaurant population, taken as a whole, is antifragile, even though it is composed of fragile individual restaurants which come and go with the whims of diners: whatever those whims may be at the moment, they will be catered to by one or more among the current, ever-changing population of restaurants.

Now, suppose instead that some Food Commissar in the All-Union Ministry of Nutrition carefully studied the preferences of people and established a highly-optimised and uniform menu for the monopoly State Feeding Centres, then set up a central purchasing, processing, and distribution infrastructure to optimise the efficient delivery of these items to patrons. This system would be highly fragile, since while it would deliver food, there would be no feedback based upon customer preferences, and no competition to respond to shifts in taste. The result would be a mediocre product which, over time, was less and less aligned with what people wanted, and hence would have a declining number of customers. The messy and chaotic market of independent restaurants, constantly popping into existence and disappearing like virtual particles, exploring the culinary state space almost at random, does, at any given moment, satisfy the needs of its customers, and it responds to unexpected changes by adapting to them: it is antifragile.

Now let's consider an example from metallurgy. If you pour molten metal from a furnace into a cold mould, its molecules, which were originally jostling around at random at the high temperature of the liquid metal, will rapidly freeze into a structure with small crystals randomly oriented. The solidified metal will contain grain boundaries wherever two crystals meet, each forming a weak spot where the metal can potentially fracture under stress. The metal is hard, but brittle: if you try to bend it, it's likely to snap. It is fragile.

To render it more flexible, it can be subjected to the process of annealing, where it is heated to a high temperature (but below melting), which allows the molecules to migrate within the bulk of the material. Existing grains will tend to grow, align, and merge, resulting in a ductile, workable metal. But critically, once heated, the metal must be cooled on a schedule which provides sufficient randomness (molecular motion from heat) to allow the process of alignment to continue, but not to disrupt already-aligned crystals. Here is a video from Cellular Automata Laboratory which demonstrates annealing. Note how sustained randomness is necessary to keep the process from quickly “freezing up” into a disordered state.

In another document at this site, I discuss solving the travelling salesman problem through the technique of simulated annealing, which is analogous to annealing metal, and like it, is a manifestation of antifragility—it doesn't work without randomness.
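
For the curious, here is a minimal sketch of the technique in Python. It is not the implementation discussed in that document: the random cities, the 2-opt move, and the cooling schedule are all illustrative assumptions.

```python
# Minimal simulated-annealing sketch for the travelling salesman problem.
# Cities, move set, and cooling schedule are illustrative assumptions.
import math
import random

random.seed(42)
cities = [(random.random(), random.random()) for _ in range(25)]

def tour_length(tour):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

tour = list(range(len(cities)))
initial = tour_length(tour)
temperature = 1.0
while temperature > 1e-4:
    # Propose a 2-opt move: reverse a randomly chosen segment of the tour.
    i, j = sorted(random.sample(range(len(tour)), 2))
    candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
    delta = tour_length(candidate) - tour_length(tour)
    # Always accept improvements; accept some worsening moves while "hot".
    # That injected randomness is what keeps the search from freezing into
    # a mediocre local minimum: no randomness, no annealing.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        tour = candidate
    temperature *= 0.999  # slow geometric cooling

print(f"initial tour {initial:.3f}, annealed tour {tour_length(tour):.3f}")
```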

When you observe a system which adapts and prospers in the face of unpredictable changes, it will almost always do so because it is antifragile. This is a large part of how nature works: evolution isn't able to predict the future and it doesn't even try. Instead, it performs a massively parallel, planetary-scale search, where organisms, species, and entire categories of life appear and disappear continuously, but with the ecosystem as a whole constantly adapting itself to whatever inputs may perturb it, be they a wholesale change in the composition of the atmosphere (the oxygen catastrophe at the beginning of the Proterozoic eon around 2.45 billion years ago), asteroid and comet impacts, or ice ages.

Most human-designed systems, whether machines, buildings, political institutions, or financial instruments, are the antithesis of those found in nature. They tend to be highly-optimised to accomplish their goals with the minimum of resources, and to be sufficiently robust to cope with any stresses they may be expected to encounter over their design life. These systems are not antifragile: while they may be designed not to break in the face of unexpected events, at best they will survive them; unlike natural systems, they will rarely benefit from them.

The devil's in the details, and if you reread the last paragraph carefully, you may be able to see the horns and pointed tail peeking out from behind the phrase “be expected to”. The problem with the future is that it is full of all kinds of events, some of which are un-expected, and whose consequences cannot be calculated in advance and aren't known until they happen. Further, there's usually no way to estimate their probability. It doesn't even make any sense to talk about the probability of something you haven't imagined could happen. And yet such things happen all the time.

Today, we are plagued, in many parts of society, with “experts” the author dubs fragilistas. Often equipped with impeccable academic credentials and with powerful mathematical methods at their fingertips, afflicted by the “Soviet-Harvard delusion” (overestimating the scope of scientific knowledge and the applicability of their modelling tools to the real world), they are blind to the unknown and unpredictable, and they design and build systems which are highly fragile in the face of such events. A characteristic of fragilista-designed systems is that they produce small, visible, and apparently predictable benefits, while incurring invisible risks which may be catastrophic and occur at any time.

Let's consider an example from finance. Suppose you're a conservative investor interested in generating income from your lifetime's savings, while preserving capital to pass on to your children. You might choose to invest, say, in a diversified portfolio of stocks of long-established companies in stable industries which have paid dividends for 50 years or more, never skipping or reducing a dividend payment. Since you've split your investment across multiple companies, industry sectors, and geographical regions, your risk from an event affecting one of them is reduced. For years, this strategy produces a reliable and slowly growing income stream, while appreciation of the stock portfolio (albeit less than high flyers and growth stocks, which have greater risk and pay small dividends or none at all) keeps you ahead of inflation. You sleep well at night.

Then 2008 rolls around. You didn't do anything wrong. The companies in which you invested didn't do anything wrong. But the fragilistas had been quietly building enormous cross-coupled risk into the foundations of the financial system (while pocketing huge salaries and bonuses, and bearing none of the risk themselves), and when it all blows up, in one sickening swoon, you find the value of your portfolio has been cut by 50%. In a couple of months, you have lost half of what you worked for all of your life. Your “safe, conservative, and boring” stock portfolio happened to be correlated with all of the other assets and, when the foundation of the system started to crumble, suffered along with them. The black swan landed on your placid little pond.

What would an antifragile investment portfolio look like, and how would it behave in such circumstances? First, let's briefly consider a financial option. An option is a financial derivative contract which gives the purchaser the right, but not the obligation, to buy (“call option”) or sell (“put option”) an underlying security (stock, bond, market index, etc.) at a specified price, called the “strike price” (or just “strike”). If a call option has a strike above, or a put option a strike below, the current price of the security, it is called “out of the money”; otherwise it is “in the money”. The option has an expiration date, after which, if not “exercised” (the buyer asserts his right to buy or sell), the contract expires and the option becomes worthless.

Let's consider a simple case. Suppose Consolidated Engine Sludge (SLUJ) is trading for US$10 per share on June 1, and I buy a call option to buy 100 shares at US$15/share at any time until August 31. For this right, I might pay a premium of, say, US$7. (The premium depends upon sellers' perception of the volatility of the stock, the term of the option, and the difference between the current price and the strike price.) Now, suppose that sometime in August, SLUJ announces a breakthrough that allows them to convert engine sludge into fructose sweetener, and their stock price soars on the news to US$19/share. I might then decide to exercise the option, paying US$1500 for the 100 shares, and immediately sell them at US$19/share, realising a profit of US$400 on the shares or, subtracting the cost of the option, US$393 on the trade. Since my original investment was just US$7, this represents a return of 5614% on the original investment, or 22457% annualised. If SLUJ never touches US$15/share, come August 31, the option will expire unexercised, and I'm out the seven bucks. (Since options can be bought and sold at any time and prices are set by the market, it's actually a bit more complicated than that, but this will do for understanding what follows.)
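
If you'd like to check the arithmetic, a few lines of Python suffice; every number here comes from the hypothetical trade above.

```python
# Checking the arithmetic of the hypothetical SLUJ call-option trade.
SHARES = 100
STRIKE = 15.00         # US$/share
PREMIUM = 7.00         # total cost of the option
FINAL_PRICE = 19.00    # US$/share after the good news

exercise_cost = SHARES * STRIKE                    # 1500: buy at the strike
sale_proceeds = SHARES * FINAL_PRICE               # 1900: sell at market
profit_on_shares = sale_proceeds - exercise_cost   # 400
net_profit = profit_on_shares - PREMIUM            # 393

return_pct = net_profit / PREMIUM * 100   # ~5614% on the US$7 at risk
annualised = return_pct * 4               # the June-August term is one quarter

print(round(net_profit), round(return_pct), round(annualised))
# 393 5614 22457
```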

You might ask yourself what would motivate somebody to sell such an option. In many cases, it's an attractive proposition. If I'm a long-term shareholder of SLUJ and have found it to be a solid but non-volatile stock that pays a reasonable dividend of, say, two cents per share every quarter, by selling the call option with a strike of 15 I pocket an immediate premium of seven cents per share, increasing my income from owning the stock over the option's term from two cents to nine cents per share: a factor of 4.5. For this, I give up the right to any appreciation should the stock rise above 15, but that seems to be a worthwhile trade-off for a stock as boring as SLUJ (at least prior to the news flash).

A put option is the mirror image: if I bought a put on SLUJ with a strike of 5, I'll only make money if the stock falls below 5 before the option expires.

Now we're ready to construct a genuinely antifragile investment. Suppose I simultaneously buy out of the money put and call options on the same security, a so-called “long straddle”. Now, as long as the price remains between the strike prices of the put and call, both options will expire worthless, but if the price either rises above the call strike or falls below the put strike, the corresponding option will be in the money, and it pays off more the further the underlying price moves outside the band defined by the two strikes. This is, then, a pure bet on volatility: it loses a small amount of money as long as nothing unexpected happens, but when a shock occurs, it pays off handsomely.
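
Here is a sketch of the payoff at expiration. The strikes and premium are illustrative numbers, not market data, and time value and transaction costs are ignored. (Strictly speaking, a position built from out of the money options at two different strikes is usually called a “strangle”; the logic is the same.)

```python
# Net profit per share, at expiration, of a long out-of-the-money
# put plus call on the same underlying. All numbers are illustrative.
def volatility_bet(price, put_strike, call_strike, premiums):
    """Payoff as a function of the underlying's price at expiry."""
    put_value = max(put_strike - price, 0.0)    # put pays below its strike
    call_value = max(price - call_strike, 0.0)  # call pays above its strike
    return put_value + call_value - premiums

for price in (3, 5, 10, 15, 25):
    print(price, volatility_bet(price, put_strike=5, call_strike=15,
                                premiums=0.25))
# Loses only the small premium while the price stays in the 5-15 band,
# pays off increasingly the further it escapes in either direction.
```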

Now, the premiums on deep out of the money options are usually very modest, so an investor with a portfolio like the one I described who was clobbered in 2008 could have, for a small sum every quarter, purchased put and call options on, say, the Standard & Poor's 500 stock index, expecting to usually have them expire worthless, but under circumstances such as those which halved the value of his portfolio, they would pay off enough to compensate for the shock. (If worried only about a plunge, he could, of course, have bought just the put option and saved money on premiums, but here I'm describing a pure example of antifragility being used to cancel fragility.)

I have only described a small fraction of the many topics covered in this masterpiece, and described none of the mathematical foundations it presents (which can be skipped by readers intimidated by equations and graphs). Fragility and antifragility together form one of those concepts, simple once understood, which profoundly change the way you look at a multitude of things in the world. When a politician, economist, business leader, cultural critic, or any other supposed thinker or expert advocates a policy, you'll learn to ask yourself, “Does this increase fragility?”, and you'll have the tools to answer the question. Further, the book provides an intellectual framework to support many of the ideas and policies which libertarians and advocates of individual liberty and free markets instinctively endorse, founded in the way natural systems work. It is particularly useful in demolishing “green” schemes which aim at replacing the organic, distributed, adaptive, and antifragile mechanisms of the market with coercive, top-down, and highly fragile central planning which cannot possibly have sufficient information to work even in the absence of unknowns in the future.

There is much to digest here, and the ramifications of some of the clearly-stated principles take some time to work out and fully appreciate. Indeed, I spent more than five years reading this book, a little bit at a time. It's worth taking the time and making the effort to let the message sink in, to figure out how what you've learned applies to your own life, and to act accordingly. As Fat Tony says, “Suckers try to win arguments; nonsuckers try to win.”

April 2018

Taleb, Nassim Nicholas. The Black Swan. New York: Random House, 2007. ISBN 978-1-4000-6351-2.
If you are interested in financial markets, investing, the philosophy of science, modelling of socioeconomic systems, theories of history and historicism, or the rôle of randomness and contingency in the unfolding of events, this is a must-read book. The author largely avoids mathematics (except in the end notes) and makes his case in quirky and often acerbic prose (there's something about the French that really gets his goat) which works effectively.

The essential message of the book, explained by example in a wide variety of contexts, is (and I'll be rather more mathematical here in the interest of concision) that while many (but certainly not all) natural phenomena can be well modelled by a Gaussian (“bell curve”) distribution, phenomena in human society (for example, the distribution of wealth, population of cities, book sales by authors, casualties in wars, performance of stocks, profitability of companies, frequency of words in language, etc.) are best described by scale-invariant power law distributions. While Gaussian processes converge rapidly upon a mean and standard deviation, and rare outliers have little impact upon these measures, in a power law distribution the outliers dominate.

Consider this example. Suppose you wish to determine the mean height of adult males in the United States. If you go out and pick 1000 men at random and measure their height, then compute the average, absent sampling bias (for example, picking them from among college basketball players), you'll obtain a figure which is very close to that you'd get if you included the entire male population of the country. If you replaced one of your sample of 1000 with the tallest man in the country, or with the shortest, his inclusion would have a negligible effect upon the average, as the difference from the mean of the other 999 would be divided by 1000 when computing the average. Now repeat the experiment, but try instead to compute mean net worth. Once again, pick 1000 men at random, compute the net worth of each, and average the numbers. Then, replace one of the 1000 by Bill Gates. Suddenly Bill Gates's net worth dwarfs that of the other 999 (unless one of them randomly happened to be Warren Buffett, say)—the one single outlier dominates the result of the entire sample.
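
A small simulation makes the contrast vivid. The distributions here are assumptions chosen for illustration (Gaussian heights; a Pareto tail for net worth); only the qualitative behaviour matters.

```python
# One outlier versus the sample mean: Gaussian height against
# power-law net worth. Parameters are illustrative assumptions.
import random

random.seed(1)
N = 1000

# Heights in centimetres: Gaussian, mean 175, standard deviation 7.
heights = [random.gauss(175, 7) for _ in range(N)]
before = sum(heights) / N
heights[0] = 272                      # swap in the tallest man on record
print(round(before, 2), round(sum(heights) / N, 2))  # mean barely moves

# Net worth in dollars: Pareto tail (alpha near 1.16, the "80/20" exponent).
worths = [30_000 * random.paretovariate(1.16) for _ in range(N)]
before = sum(worths) / N
worths[0] = 100_000_000_000           # swap in Bill Gates
print(round(before), round(sum(worths) / N))  # one outlier dominates the mean
```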

Power laws are everywhere in the human experience (heck, I even found one in AOL search queries), and yet so-called “social scientists” (Thomas Sowell once observed that almost any word is devalued by preceding it with “social”) blithely assume that the Gaussian distribution can be used to model the variability of the things they measure, and that extrapolations from past experience are predictive of the future. The entry of many people trained in physics and mathematics into the field of financial analysis has swelled the ranks of those who naïvely assume human action behaves like inanimate physical systems.

The problem with a power law is that as long as you haven't yet seen the very rare yet stupendously significant outlier, it looks pretty much like a Gaussian, and so your model based upon that (false) assumption works pretty well—until it doesn't. The author calls these unimagined and unmodelled rare events “Black Swans”—you can see a hundred, a thousand, a million white swans and consider each as confirmation of your model that “all swans are white”, but it only takes a single black swan to falsify your model, regardless of how much data you've amassed and how long it has correctly predicted things before it utterly failed.

Moving from ornithology to finance, one of the most common causes of financial calamities in the last few decades has been the appearance of Black Swans, wrecking finely crafted systems built on the assumption of Gaussian behaviour and extrapolation from the past. Much of the current calamity in hedge funds and financial derivatives comes directly from strategies for “making pennies by risking dollars” which never took into account the possibility of the outlier which would wipe out the capital at risk (not to mention that of the lenders to these highly leveraged players who thought they'd quantified and thus tamed the dire risks they were taking).

The Black Swan need not be a destructive bird: for those who truly understand it, it can point the way to investment success. The original business concept of Autodesk was a bet on a Black Swan: I didn't have any confidence in our ability to predict which product would be a success in the early PC market, but I was pretty sure that if we fielded five products or so, one of them would be a hit on which we could concentrate after the market told us which was the winner. A venture capital fund does the same thing: because the upside of a success can be vastly larger than what you lose on a dud, you can win, and win big, while writing off 90% of all of the ventures you back. Investors can fashion a similar strategy using options and option-equivalent investments (for example, resource stocks with a high cost of production), diversifying a small part of their portfolio across a number of extremely high risk investments with unbounded upside while keeping the bulk in instruments (for example sovereign debt) as immune as possible to Black Swans.

There is much more to this book than the matters upon which I have chosen to expound here. What you need to do is lay your hands on this book, read it cover to cover, think it over for a while, then read it again—it is so well written and entertaining that this will be a joy, not a chore. I find it beyond charming that this book was published by Random House.

January 2009

Taleb, Nassim Nicholas. Fooled by Randomness. 2nd ed. New York: Random House, [2004] 2005. ISBN 978-0-8129-7521-5.
This book, which preceded the author's bestselling The Black Swan (January 2009), explores a more general topic: randomness and, in particular, how humans perceive and often misperceive its influence in their lives. As with all of Taleb's work, it is simultaneously quirky, immensely entertaining, and so rich in wisdom and insights that you can't possibly absorb them all in a single reading.

The author's central thesis, illustrated from real-world examples, tests you perform on yourself, and scholarship in fields ranging from philosophy to neurobiology, is that the human brain evolved in an environment in which assessment of probabilities (and especially conditional probabilities) and nonlinear outcomes was unimportant to reproductive success, and consequently our brains adapted to make decisions according to a set of modular rules called “heuristics”, which researchers have begun to tease out by experimentation. While our brains are capable of abstract thinking and, with the investment of time required to master it, mathematical reasoning about probabilities, the parts of the brain we use to make many of the important decisions in our lives are the much older and more instinctual parts from which our emotions spring. This means that otherwise apparently rational people may do things which, if looked at dispassionately, appear completely insane and against their rational self-interest. This is particularly apparent in the world of finance, in which the author has spent much of his career, and which offers abundant examples of individual and collective delusional behaviour both before and after the publication of this work.

But let's step back from the arcane world of financial derivatives and consider a much simpler and easier to comprehend investment proposition: Russian roulette. A diabolical billionaire makes the following proposition: play a round of Russian roulette (put one cartridge in a six shot revolver, spin the cylinder to randomise its position, put the gun to your temple and pull the trigger). If the gun goes off, you don't receive any payoff and besides, you're dead. If there's just the click of the hammer falling on an empty chamber, you receive one million dollars. Further, as a winner, you're invited to play again on the same date next year, when the payout if you win will be increased by 25%, and so on in subsequent years as long as you wish to keep on playing. You can quit at any time and keep your winnings.

Now suppose a hundred people sign up for this proposition, begin to play the game year after year, and none chooses to take their winnings and walk away from the table. (For connoisseurs of Russian roulette, this is the variety of the game in which the cylinder is spun before each shot, not where the live round continues to advance each time the hammer drops on an empty chamber: in that case there would be no survivors beyond the sixth round.) For each round, on average, 1/6 of the players are killed and out of the game, reducing the number who play next year. Out of the original 100 players in the first round, one would expect, on average, around 83 survivors to participate in the second round, where the payoff will be 1.25 million.

What do we have, then, after ten years of this game? Again, on average, we expect around 16 survivors, each of whom will be paid more than seven million dollars for the tenth round alone, and who will have collected a total of more than 33 million dollars over the ten year period. If the game were to go on for twenty years, we would expect around 3 survivors from the original hundred, each of whom would have “earned” more than a third of a billion dollars.
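
The expected values quoted above follow directly from iterating the game as described; here is a sketch which does just that.

```python
# Expected survivors and cumulative winnings in the Russian-roulette game:
# 100 players, a 1-in-6 chance of death per round, a payout starting at
# US$1 million and growing 25% per year. Numbers are from the text above.
players = 100.0
payout = 1_000_000.0
cumulative = 0.0
for year in range(1, 21):
    players *= 5 / 6          # on average, 1/6 of the players die each round
    cumulative += payout      # each survivor banks this year's payout
    if year in (1, 10, 20):
        print(year, round(players), round(payout), round(cumulative))
    payout *= 1.25            # the prize grows 25% for next year's round
# year  1: 83 survivors, US$1 million each
# year 10: 16 survivors, ~US$7.45 million that round, ~US$33.25 million total
# year 20:  3 survivors, ~US$343 million in total: over a third of a billion
```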

Would you expect these people to be regular guests on cable business channels, sought out by reporters from financial publications for their “hot hand insights on Russian roulette”, or lionised for their consistent and rapidly rising financial results? No—they would be immediately recognised as precisely what they were: lucky (and consequently very wealthy) fools who, each year they continue to play the game, run the same 1 in 6 risk of blowing their brains out.

Keep this Russian roulette analogy in mind the next time you see an interview with the “sizzling hot” hedge fund manager who has managed to obtain 25% annual return for his investors over the last five years, or when your broker pitches a mutual fund with a “great track record”, or you read the biography of a businessman or investor who always seems to have made the “right call” at the right time. All of these are circumstances in which randomness, and hence luck, plays an important part. Just as with Russian roulette, there will inevitably be big winners with a great “track record”, and they're the only ones you'll see because the losers have dropped out of the game (and even if they haven't yet, they aren't newsworthy). So the question you have to ask yourself is not how great the track record of a given individual is, but rather the size of the original cohort from which the individual was selected at the start of the period of the track record. The rate at which hedge fund managers “blow up” and lose all of their investors' money in one disastrous market excursion is less than that of the players blown away in Russian roulette, but not by all that much. There are a lot of trading strategies which will yield high and consistent returns until they don't, at which time they suffer sudden and disastrous losses which are always reported as “unexpected”. Unexpected by the geniuses who devised the strategy, the fools who put up the money to back it, and the clueless journalists who report the debacle, but entirely predictable to anybody who modelled the risks being run in the light of actual behaviour of markets, not some egghead's ideas of how they “should” behave.

Shall we try another? You go to your doctor for a routine physical, and as part of the laboratory work on your blood, she orders a screening test for a rare but serious disease which afflicts only one person in a thousand but which can be treated if detected early. The screening test has a 5% false positive rate (in 5% of the people tested who do not actually have the disease, it erroneously says that they do) and a 0% false negative rate (if you have the disease, the test will always report that you do). You return to the doctor's office for the follow-up visit and she tells you that you tested positive for the disease. What is the probability you actually have it?

Spoiler warning: Plot and/or ending details follow.  
Did you answer 95%? If you did, you're among the large majority of people, not just among the general population but also practising clinicians, who come to the same conclusion. And you'd be just as wrong as them. In fact, the odds you have the disease are a little less than 2%. Here's how it works. Let's assume an ensemble of 10,000 randomly selected people are tested. On average, ten of these people will have the disease, and all of them will test positive for it (no false negatives). But among that population, 500 people who do not have the disease will also test positive due to the 5% false positive rate of the test. That means that, on average (it gets tedious repeating this, but the natterers will be all over me if I don't do so in every instance), there will be, of 10,000 people tested, a total of 510 positive results, of which 10 actually have the disease. Hence, if you're the recipient of a positive test result, the probability you have the disease is 10/510, or a tad less than 2%. So, before embarking upon a demanding and potentially dangerous treatment regime, you're well advised to get some other independent tests to confirm that you are actually afflicted.
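
The same result falls out of Bayes' theorem directly, using only the numbers given in the example.

```python
# The screening-test arithmetic via Bayes' theorem, with exactly the
# prevalence and error rates given in the example above.
prevalence = 1 / 1000        # one person in a thousand has the disease
false_positive_rate = 0.05   # 5% of healthy people test positive anyway
false_negative_rate = 0.0    # the diseased always test positive

p_positive = (prevalence * (1 - false_negative_rate)
              + (1 - prevalence) * false_positive_rate)
p_disease_given_positive = (prevalence * (1 - false_negative_rate)
                            / p_positive)

print(f"{p_disease_given_positive:.4f}")   # 0.0196 -- a tad less than 2%
```
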
Spoilers end here.  
In making important decisions in life, we often rely upon information from past performance and reputation without taking into account how much those results may be affected by randomness, luck, and the “survivor effect” (the Russian roulette players who brag of their success in the game are necessarily those who aren't yet dead). When choosing a dentist, you can be pretty sure that a practitioner who is recommended by a variety of his patients whom you respect will do an excellent job drilling your teeth. But this is not the case when choosing an oncologist, since all of the people who give him glowing endorsements are necessarily those who did not die under his care, even if their survival is due to spontaneous remission instead of the treatment they received. In such a situation, you need to, as it were, interview the dead alongside the survivors, or, that being difficult, compare the actual rate of survival among comparable patients with the same condition.

Even when we make decisions with our higher cognitive faculties rather than animal instincts, it's still easy to get it wrong. While the mathematics of probability and statistics have been put into a completely rigorous form, there are assumptions in how they are applied to real world situations which can lead to the kinds of calamities one reads about regularly in the financial press. One of the reasons physical scientists transmogrify so easily into Wall Street “quants” is that they are trained and entirely comfortable with statistical tools and probabilistic analysis. The reason they so frequently run off the cliff, taking their clients' fortunes in the trailer behind them, is that nature doesn't change the rules, nor does she cheat. Most physical processes will exhibit well behaved Gaussian or Poisson distributions, with outliers making a vanishingly small contribution to mean and median values. In financial markets and other human systems none of these conditions obtain: the rules change all the time, and often change profoundly before more than a few participants even perceive they have; any action in the market will provoke a reaction by other actors, often nonlinear and with unpredictable delays; and in human systems the Pareto and other wildly non-Gaussian power law distributions are often the norm.

We live in a world in which randomness reigns in many domains, and where we are bombarded with “news and information” which is probably in excess of 99% noise to 1% signal, with no obvious way to extract the signal except with the benefit of hindsight, which doesn't help in making decisions on what to do today. This book will dramatically deepen your appreciation of this dilemma in our everyday lives, and provide a philosophical foundation for accepting the rôle randomness and luck plays in the world, and how, looked at with the right kind of eyes (and investment strategy) randomness can be your friend.

February 2011

Taleb, Nassim Nicholas. Skin in the Game. New York: Random House, 2018. ISBN 978-0-425-28462-9.
This book is volume four in the author's Incerto series, following Fooled by Randomness (February 2011), The Black Swan (January 2009), and Antifragile (April 2018). In it, he continues to explore the topics of uncertainty, risk, decision making under such circumstances, and how both individuals and societies winnow out what works from what doesn't in order to choose wisely among the myriad alternatives available.

The title, “Skin in the Game”, is an aphorism which refers to an individual's sharing the risks and rewards of an undertaking in which they are involved. This is often applied to business and finance, but it is, as the author demonstrates, a very general and powerful concept. An airline pilot has skin in the game along with the passengers. If the plane crashes and kills everybody on board, the pilot will die along with them. This ensures that the pilot shares the passengers' desire for a safe, uneventful trip and inspires confidence among them. A government “expert” putting together a “food pyramid” to be vigorously promoted among the citizenry and enforced upon captive populations such as school children or members of the armed forces has no skin in the game. If his or her recommendations create an epidemic of obesity, type 2 diabetes, and cardiovascular disease, that probably won't happen until after the “expert” has retired and, in any case, civil servants are not fired or demoted based upon the consequences of their recommendations.

Ancestral human society was all about skin in the game. In a small band of hunter/gatherers, everybody can see and is aware of the actions of everybody else. Slackers who do not contribute to the food supply are likely to be cut loose to fend for themselves. When the hunt fails, nobody eats until the next kill. If a conflict develops with a neighbouring band, those who decide to fight instead of running away or surrendering are in the front line of the battle and will be the first to suffer in case of defeat.

Nowadays we are far more “advanced”. As the author notes, “Bureaucracy is a construction by which a person is conveniently separated from the consequences of his or her actions.” As populations have exploded, layers and layers of complexity have been erected, removing authority ever farther from those under its power. We have built mechanisms which have immunised a ruling class of decision makers from the consequences of their decisions: they have little or no skin in the game.

Less than a third of all Roman emperors died in their beds. Even though they were at the pinnacle of the largest and most complicated empire in the West, they regularly paid the ultimate price for their errors either in battle or through palace intrigue by those dissatisfied with their performance. Today the geniuses responsible for the 2008 financial crisis, which destroyed the savings of hundreds of millions of innocent people and picked the pockets of blameless taxpayers to bail out the institutions they wrecked, not only suffered no punishment of any kind, but in many cases walked away with large bonuses or golden parachute payments and today are listened to when they pontificate on the current scene, rather than being laughed at or scorned as they would be in a rational world. We have developed institutions which shift the consequences of bad decisions from those who make them to others, breaking the vital feedback loop by which we converge upon solutions which, if not perfect, at least work well enough to get the job done without the repeated catastrophes that result from ivory tower theories being implemented on a grand scale in the real world.

Learning and Evolution

Being creatures who have evolved large brains, we're inclined to think that learning is something that individuals do, by observing the world, drawing inferences, testing hypotheses, and taking on knowledge accumulated by others. But the overwhelming majority of creatures who have ever lived, and of those alive today, do not have large brains—indeed, many do not have brains at all. How have they learned to survive and proliferate, filling every niche on the planet where environmental conditions are compatible with biochemistry based upon carbon atoms and water? How have they, over the billions of years since life arose on Earth, inexorably increased in complexity, most recently producing a species with a big brain able to ponder such questions?

The answer is massive parallelism, exhaustive search, selection for survivors, and skin in the game, or, putting it all together, evolution. Every living creature has skin in the ultimate game of whether it will produce offspring that inherit its characteristics. Every individual is different, and the process of reproduction introduces small variations in progeny. Change the environment, and the characteristics of those best adapted to reproduce in it will shift and, eventually, the population will consist of organisms adapted to the new circumstances. The critical thing to note is that while each organism has skin in the game, many may, and indeed must, lose the game and die before reproducing. The individual organism does not learn, but the species does and, stepping back another level, the ecosystem as a whole learns and adapts as species appear, compete, die out, or succeed and proliferate. This simple process has produced all of the complexity we observe in the natural world, and it works because every organism and species has skin in the game: its adaptation to its environment has immediate consequences for its survival.

None of this is controversial or new. What the author has done in this book is to apply this evolutionary epistemology to domains far beyond its origins in biology—in fact, to almost everything in the human experience—and demonstrate that both success and wisdom are generated when this process is allowed to work, but failure and folly result when it is thwarted by institutions which take the skin out of the game.

How does this apply in present-day human society? Consider one small example of a free market in action. The restaurant business is notoriously risky. Restaurants come and go all the time, and most innovations in the business fall flat on their face and quickly disappear. And yet most cities have, at any given time, a broad selection of restaurants with a wide variety of menus, price points, ambiance, and service to appeal to almost any taste. Each restaurant has skin in the game: those which do not attract sufficient customers (or, having once been successful, fail to adapt when customers' tastes change) go out of business and are replaced by new entrants. And yet for all the churning and risk to individual restaurants, the restaurant “ecosystem” is remarkably stable, providing customers options closely aligned with their current desires.

To a certain kind of “expert” endowed with a big brain (often crammed into a pointy head), found in abundance around élite universities and government agencies, all of this seems messy, chaotic, and (the horror!) inefficient. Consider the money lost when a restaurant fails, the cooks and waiters who lose their jobs, having to find a new restaurant to employ them, the vacant building earning nothing for its owner until a new tenant is found—certainly there must be a better way. Why, suppose instead we design a standardised set of restaurants based upon a careful study of public preferences, then roll out this highly-optimised solution to the problem. They might be called “public feeding centres”. And they would work about as well as the name implies.

Survival and Extinction

Evolution ultimately works through extinction. Individuals who are poorly adapted to their environment (or, in a free market, companies which poorly serve their customers) fail to reproduce (or, in the case of a company, to survive and expand). This leaves a population better adapted to its environment. When the environment changes, or a new innovation appears (for example, electricity in an age dominated by steam power), a new sorting out occurs which may see the disappearance of long-established companies that failed to adapt to the new circumstances. It is a tautology that the current population consists entirely of survivors, but there is a deep truth within this observation which is at the heart of evolution. As long as there is a direct link between performance in the real world and survival—skin in the game—evolution will work to continually optimise and refine the population as circumstances change.

This evolutionary process works just as powerfully in the realm of ideas as in biology and commerce. Ideas have consequences, and for the process of selection to function, those consequences, good or ill, must be borne by those who promulgate the idea. Consider inventions: an inventor who creates something genuinely useful and brings it to market (recognising that there are many possible missteps and opportunities for bad luck or timing to disrupt this process) may reap great rewards which, in turn, will fund elaboration of the original invention and development of related innovations. The new invention may displace existing technologies and cause them, and those who produce them, to become obsolete and disappear (or be relegated to a minor position in the market). Both the winner and loser in this process have skin in the game, and the outcome of the game is decided by the evaluation of the customers expressed in the most tangible way possible: what they choose to buy.

Now consider an academic theorist who comes up with some intellectual “innovation” such as “Modern Monetary Theory” (which basically says that a government can print as much paper money as it wishes to pay for what it wants without collecting taxes or issuing debt as long as full employment has not been achieved). The theory and the reputation of those who advocate it are evaluated by their peers: other academics and theorists employed by institutions such as national treasuries and central banks. Such a theory is not launched into a market to fend for itself among competing theories: it is “sold” to those in positions of authority and imposed from the top down upon an economy, regardless of the opinions of those participating in it. Now, suppose the brilliant new idea is implemented and results in, say, total collapse of the economy and civil society. What price do those who promulgated the theory and implemented it pay? Little or nothing, compared to the misery of those who lost their savings, jobs, houses, and assets in the calamity. Many of the academics will have tenure and suffer no consequences whatsoever: they will refine the theory, or else publish erudite analyses of how the implementation was flawed and argue that the theory “has never been tried”. Some senior officials may be replaced, but will doubtless land on their feet and continue to pull down large salaries as lobbyists, consultants, or pundits. The bureaucrats who patiently implemented the disastrous policies are civil servants: their jobs and pensions are as eternal as anything in this mortal sphere. And, before long, another bright, new idea will bubble forth from the groves of academe.

(If you think this hypothetical example is unrealistic, see the career of one Robert Rubin. “Bob”, during his association with Citigroup between 1999 and 2009, received total compensation of US$126 million for his “services” as a director, advisor, and temporary chairman of the bank, during which time he advocated the policies which eventually brought it to the brink of collapse in 2008 and vigorously fought attempts to regulate the financial derivatives which eventually triggered the global catastrophe. During his tenure at Citigroup, shareholders of its stock lost 70% of their investment, and eventually the bank was bailed out by the federal government using money taken by coercive taxation from cab drivers and hairdressers who had no culpability in creating the problems. Rubin walked away with his “winnings” and paid no price, financial, civil, or criminal, for his actions. He is one of the many poster boys and girls for the “no skin in the game club”. And lest you think that, chastened, the academics and pointy-heads in government would regain their grounding in reality, I have just one phrase for you, “trillion dollar coin”, which “Nobel Prize” winner Paul Krugman declared to be “the most important fiscal policy debate of our lifetimes”.)

Intellectual Yet Idiot

A cornerstone of civilised society, dating from at least the Code of Hammurabi (c. 1754 B.C.), is that those who create risks must bear those risks: an architect whose building collapses and kills its owner is put to death. This is the fundamental feedback loop which enables learning. When it is broken, when those who create risks (academics, government policy makers, managers of large corporations, etc.) are able to transfer those risks to others (taxpayers, those subject to laws and regulations, customers, or the public at large), the system does not learn; evolution breaks down; and folly runs rampant. This phenomenon is manifested most obviously in the modern proliferation of the affliction the author calls the “intellectual yet idiot” (IYI). These are people who are evaluated by their peers (other IYIs), not tested against the real world. They are the equivalent of a list of movies chosen based upon the opinions of high-falutin' snobbish critics as opposed to box office receipts. They strive for the approval of others like themselves and, inevitably, spiral into ever more abstract theories disconnected from ground truth, ascending ever higher into the sky.

Many IYIs achieve distinction in one narrow field and then assume that qualifies them to pronounce authoritatively on any topic whatsoever. As was said by biographer Roy Harrod of John Maynard Keynes,

He held forth on a great range of topics, on some of which he was thoroughly expert, but on others of which he may have derived his views from the few pages of a book at which he happened to glance. The air of authority was the same in both cases.

Still other IYIs have no authentic credentials whatsoever, but derive their purported authority from the approbation of other IYIs in completely bogus fields such as gender and ethnic studies, critical anything studies, and nutrition science. As the author notes, riding some of his favourite hobby horses,

Typically, the IYI get first-order logic right, but not second-order (or higher) effects, making him totally incompetent in complex domains.

The IYI has been wrong, historically, about Stalinism, Maoism, Iraq, Libya, Syria, lobotomies, urban planning, low-carbohydrate diets, gym machines, behaviorism, trans-fats, Freudianism, portfolio theory, linear regression, HFCS (High-Fructose Corn Syrup), Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, marathon running, selfish genes, election-forecasting models, Bernie Madoff (pre-blowup), and p values. But he is still convinced his current position is right.

Doubtless, IYIs have always been with us (at least since societies developed to such a degree that they could afford some fraction of the population who devoted themselves entirely to words and ideas)—Nietzsche called them “Bildungsphilister”—but since the middle of the twentieth century they have been proliferating like pond scum, and now hold much of the high ground in universities, the media, think tanks, and senior positions in the administrative state. They believe their models (almost always linear and first-order) accurately describe the behaviour of complex dynamic systems, and that they can “nudge” the less-intellectually-exalted and credentialed masses into virtuous behaviour, as defined by them. When the masses, having a limited tolerance for fatuous nonsense and for being scolded by those who have been consistently wrong about, well, everything, dare to push back and vote for candidates and causes which make sense to them and seem better aligned with the reality they see on the ground, they are accused of—gasp—populism, and must be guided in the proper direction by their betters, their uncouth speech silenced in favour of the cultured “consensus” of the few.

One of the reasons we seem to have many more IYIs around than we used to, and why they have more influence over our lives, is scaling. As the author notes, “it is easier to macrobull***t than microbull***t”. A grand theory which purports to explain the behaviour of billions of people in a global economy over a period of decades is impossible to test or verify analytically or by simulation. An equally silly theory that describes things within people's direct experience is likely to be immediately rejected out of hand as the absurdity it is. This is one reason decentralisation works so well: when you push decision making down as close as possible to individuals, their common sense asserts itself and immunises them from the blandishments of IYIs.

The Lindy Effect

How can you sift the good and the enduring from the mass of ephemeral fads and bad ideas that swirl around us every day? The Lindy effect is a powerful tool. Lindy's delicatessen in New York City was a favoured hangout for actors who observed that the amount of time a show had been running on Broadway was the best predictor of how long it would continue to run. A show that has run for three months will probably last for at least three months more. A show that has made it to the one year mark probably has another year or more to go. In other words, the best test for whether something will stand the test of time is whether it has already withstood the test of time. This may, at first, seem counterintuitive: a sixty year old person has a shorter expected lifespan remaining than a twenty year old. The Lindy effect applies only to nonperishable things such as “ideas, books, technologies, procedures, institutions, and political systems”.
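
One way to make this concrete, consistent with the power-law connection noted below: if the lifetimes of nonperishable things are assumed to follow a Pareto (power-law) distribution, the expected remaining life of something that has survived to age t works out to t/(alpha - 1), growing in proportion to age. The parameters in this sketch are illustrative assumptions.

```python
# Lindy under an assumed Pareto lifetime: survival S(t) = (t0 / t)**alpha
# for t >= t0 and alpha > 1 gives expected remaining life t / (alpha - 1),
# which grows with age. Both parameters are illustrative assumptions.
ALPHA, T0 = 1.5, 1.0   # tail exponent and minimum age, in years

def expected_remaining(t):
    """E[T - t | T > t] for a Pareto(ALPHA, T0) lifetime, t >= T0."""
    return t / (ALPHA - 1)

for age in (1, 10, 100):
    print(f"survived {age:>3} years -> expect {expected_remaining(age):.0f} more")
# A book continuously in print for a century is expected to outlast this
# season's best-seller: the opposite of human actuarial tables.
```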

Thus, a book which has been in print continuously for a hundred years is likely to be in print a hundred years from now, while this season's hot best-seller may be forgotten a few years hence. The latest political or economic theory filling up pages in the academic journals and coming onto the radar of the IYIs in the think tanks, media punditry, and (shudder) government agencies, is likely to be forgotten and/or discredited in a few years while those with a pedigree of centuries or millennia continue to work for those more interested in results than trendiness.

Religion is Lindy. If you disregard all of the spiritual components to religion, long-established religions are powerful mechanisms to transmit accumulated wisdom, gained through trial-and-error experimentation and experience over many generations, in a ready-to-use package for people today. One disregards or scorns this distilled experience at one's own great risk. Conversely, one should be as sceptical about “innovation” in ancient religious traditions and brand-new religions as one is of shiny new ideas in any other field.

(A few more technical notes…. As I keep saying, “Once Pareto gets into your head, you'll never get him out.” It's no surprise to find that the Lindy effect is deeply related to the power-law distribution of many things in human experience. It's simply another way to say that the lifetime of nonperishable goods is distributed according to a power law just like incomes, sales of books, music, and movie tickets, use of health care services, and commission of crimes. Further, the Lindy effect is similar to J. Richard Gott's Copernican statement of the Doomsday argument, with the difference that Gott provides lower and upper bounds on survival time for a given confidence level predicted solely from a random observation that something has existed for a known time.)

Uncertainty, Risk, and Decision Making

All of these observations inform dealing with risk and making decisions based upon uncertain information. The key insight is that in order to succeed, you must first survive. This may seem so obvious as to not be worth stating, but many investors, including those responsible for blow-ups which make the headlines and take many others down with them, forget this simple maxim. It is deceptively easy to craft an investment strategy which will yield modest, reliable returns year in and year out—until it doesn't. Such strategies tend to be vulnerable to “tail risks”, in which an infrequently-occurring event (such as 2008) can bring down the whole house of cards and wipe out the investor and the fund. Once you're wiped out, you're out of the game: you're like the loser in a Russian roulette tournament who, after the gun goes off, has no further worries about the probability of that event. Once you accept that you will never have complete information about a situation, you can begin to build a strategy which will prevent your blowing up under any set of circumstances, and may even be able to profit from volatility. This is discussed in more detail in the author's earlier Antifragile.

The Silver Rule

People and institutions who have skin in the game are likely to act according to the Silver Rule: “Do not do to others what you would not like them to do to you.” This rule, combined with putting the skin of those “defence intellectuals” sitting in air-conditioned offices into the games they launch in far-off lands around the world, would do much to save the lives and suffering of the young men and women they send to do their bidding.

August 2019