In the 20th century, the philosopher Karl Popper argued that we get closer to the truth through negative instances, not through verification, and that our knowledge does not grow from a series of confirmatory observations. He contradicted the logical positivists, who held that the only meaningful statements are those that can be measured and empirically verified. Popper suggested that verification is not enough to establish truth. For example, finding a single malignant tumor proves that a patient has cancer, but the absence of such a tumor cannot establish with certainty that the patient is cancer-free.

Nassim Taleb was born in 1960 in Amioun, Lebanon. He belonged to an influential political family of middle-class intellectuals, but its wealth and power declined when the unexpected civil war broke out in 1975. Taleb soon left to study in France and the United States. When The Black Swan was published in 2007, Taleb became a celebrity, invited to lecture and answer questions at universities and conferences. Of course, everyone wants to know how to prepare for an unexpected event, but Taleb pointed out that people might misunderstand his point: it is unattainable to know something before it is known. The Lebanese civil war, for example, could not have been foreseen. After centuries in which Muslims and Christians lived together, everything turned upside down overnight; they took up arms and killed each other.
The fields of economics, statistics, and finance depend on induction, seeking to make predictions from patterns in past events. One of the models they use is the bell curve, or Gaussian, named after the 19th-century German mathematician Carl Friedrich Gauss. The bell curve is a graph shaped like a bell, often used as a first approximation to describe random variables: the large majority of results cluster near the middle, or average. For example, suppose we had a group of 1,000 people chosen at random. The weights of these people fit comfortably within a normal Gaussian bell curve distribution. One person's weight does not significantly affect anyone else's, and even if we imagine a person weighing four times the average, between 300 and 400 kilos, that would still be a small fraction of the total weight of the sample.

Weight and height suit a bell curve distribution, but most real-life events do not. Building on the ideas of Hume and Popper, Taleb argues that the bell curve model does not account for improbable but significant events. For instance, book sales are poorly represented by the bell curve because one author may sell several times more than many other authors combined. This author's sales belong to "Extremistan," the term Taleb uses to describe scenarios haunted by extreme outliers.

In 2008, a year after the publication of The Black Swan, the financial crisis occurred and proved that Taleb wasn't arguing for the sake of controversy. Black swan events often happen in real life when no one is ready, such as the market crash of 1987 or the financial crisis of 2008, or wars that erupt unseen, such as World War I and the Lebanese Civil War, the two examples Taleb cites most frequently.
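The contrast between the two regimes, what Taleb calls Mediocristan (the weight example) and Extremistan (book sales), can be sketched with a small simulation. The distributions below are illustrative assumptions, not figures from the book: weights as a normal distribution with a 75 kg mean, sales as a heavy-tailed Pareto distribution. The point is only that the single largest observation matters little in one regime and can dominate in the other.

```python
import random

random.seed(0)

# Mediocristan: weights of 1,000 randomly chosen people, modeled
# (an illustrative assumption) as normal with mean 75 kg, sd 15 kg.
weights = [random.gauss(75, 15) for _ in range(1000)]
heaviest_share = max(weights) / sum(weights)

# Extremistan: sales per author for 1,000 authors, modeled (again,
# an illustrative assumption) as a heavy-tailed Pareto distribution.
sales = [random.paretovariate(1.1) for _ in range(1000)]
best_seller_share = max(sales) / sum(sales)

print(f"Heaviest person's share of total weight: {heaviest_share:.2%}")
print(f"Best-selling author's share of total sales: {best_seller_share:.2%}")
```

Run repeatedly with different seeds, the heaviest person's share stays a fraction of a percent, while the best-selling author's share swings wildly and can be a large chunk of the whole market.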
Taleb argues that unexpected events, not expected ones, play a prominent role in changing the course of history, from religion to the economy to our personal lives.
The human mind constructs the illusion of comprehending complexity that is in fact too hard to understand. As a result, chaotic events seem regular and predictable through reliance on hindsight and over-classification, which Taleb calls "Platonicity," after the Greek philosopher Plato. In short, Platonicity is the practice of classifying complex events and issues and assigning them simple causes. Platonicity makes us think that we understand more than we do, but this does not hold everywhere. The models and structures we build, being mental maps of reality, are not always wrong, but they are wrong in some applications, and we won't discover a map's errors until after the event, which can be disastrous.

For example, knowing that a hurricane destroyed their home may not stop someone from rebuilding on the same site if that person thinks storms are rare. Yet a storm in the upcoming rainy season may destroy the house again, or heavy rain may cause the nearby river to overflow and wash it away, or perhaps an earthquake will. But if this person grasps the complexity of the factors that could destroy the home, whether the location, the management of nearby rivers and streams, or the building design, they may take preventive measures, building the house on solid foundations regardless of which cause eventually produces the catastrophe.

When we say an event is unpredictable, the unpredictability is relative, especially if some people are preparing the event in secret. If September 11 was a black swan to its targets, it was not to the team of attackers. Even though the security services could not predict the attack by examining past terrorist raids or airplane disasters, the event was not purely random; it was unknown to one person or group but not to another.

Black swans also vary in size. Some affect a narrow domain of life, while others have enormous consequences. For example, people once believed that the earth was the center of the universe. Everything was going well until Copernicus proposed that the sun was the center. His discovery was a black swan that affected all of humanity.
We are attracted to stories over truths, which distorts our mental representation of the world, particularly when it comes to rare events.
We humans are good at turning the world around us into stories; our minds tend to simplify it to reduce its complexity, and one way we do this is through the narrative fallacy. The limits on our ability to see events without explaining them make us create narratives based on past events, believing that the past is a suitable representation of the future. We look for causes for events and then create stories that explain those causes, and when we remember the past, we focus on the facts that support the accepted narrative and neglect those that contradict it. The problem is that what we neglect can play a tremendous role in undermining this causal narrative.

Imagine a turkey living on a farm. The farmer has fed him for 12 months and let him wander around freely. If the past is its guide, the turkey has no reason to think that tomorrow will be different; it will believe that the general rule of life is to be fed every day by the kind human race. But tomorrow is Thanksgiving Day: he is beheaded, stuffed with herbs, and put in the oven. If things have been going one way, it does not mean they will stay that way. The narrative fallacy makes wars and economic crashes seem understandable and predictable and prevents us from grasping unexpected events.

Because of this complexity, and our inclination to simplify in order to reduce it, we focus on dramatic events that are highly unlikely to affect us, such as rare pharmacological complications of vaccination, while forgetting infection, which is far more frequent. Similarly, many people confuse the statement "almost all terrorists are Muslims" with "almost all Muslims are terrorists." Suppose the first statement is correct and 99 percent of terrorists are Muslims. With more than one billion Muslims worldwide, say there are 10 thousand terrorists; that is one in a hundred thousand, meaning only 0.001 percent of Muslims are terrorists.
This logical mistake makes a person overestimate the odds of a randomly chosen Muslim individual being a terrorist by close to fifty thousand times.
Our thinking and reactions depend on the domain of the object or event. We react to a piece of information not according to its logical merit but according to the framework that surrounds it and how it registers in our emotional system.
If someone took a nap on a railroad or a highway and was not killed, is that evidence that taking naps on railroads is risk-free? Likewise, if someone were watching the turkey on the farm before Thanksgiving Day, they would say there is no evidence of the possibility of a significant event in the turkey's life. Knowledge, even when it is exact, does not always lead to appropriate action, because we tend to forget how to process it properly when we are not paying attention, even when we are experts. Statisticians tend to leave their brains in the classroom and commit the most trivial inferential errors once they are out on the streets.

In 1971, the psychologists Daniel Kahneman and Amos Tversky plied statistics professors with statistical questions. One question was similar to the following: assume you live in a town with two hospitals, one large and the other small. On a given day, 60 percent of those born in one of the two hospitals are boys. Which hospital is it likely to be? Many statisticians made the equivalent of the mistake of choosing the large hospital during casual conversation. That answer contradicts the very basis of statistics: large samples are more stable and should fluctuate less from the long-term average, here 50 percent for each sex, than smaller samples, so the correct answer is the small hospital. Had this question appeared on their own exams, these statisticians might have failed.
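The hospital question can also be settled by simulation. The daily birth counts below (15 for the small hospital, 45 for the large) are hypothetical choices of mine; any pair with the same ratio shows the same effect: lopsided days are markedly more common in the smaller sample.

```python
import random

random.seed(1)

def share_of_days_over_60_boys(births_per_day, days=10_000):
    """Fraction of simulated days on which more than 60% of births are boys."""
    count = 0
    for _ in range(days):
        # Each birth is a boy with probability 0.5, independently.
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.60:
            count += 1
    return count / days

# Hypothetical sizes: the small hospital delivers 15 babies a day,
# the large one 45.
small = share_of_days_over_60_boys(15)
large = share_of_days_over_60_boys(45)
print(f"Small hospital: {small:.1%} of days exceed 60% boys")
print(f"Large hospital: {large:.1%} of days exceed 60% boys")
```

The small hospital sees roughly twice as many such extreme days, which is exactly what the law of large numbers predicts.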
Influential discoveries can also be Black Swans.
The inventions around us do not come from someone sitting in an office and making them according to a plan. Instead, you find something you are not looking for, and it changes the world. For example, in 1965, two radio astronomers at Bell Labs in New Jersey who were mounting a large antenna were bothered by background noise, like the hiss you hear when you have bad reception. They could not get rid of the noise, even after they cleaned the bird excrement off the dish, since they were quite convinced that bird droppings were behind it. It took a long time until they realized that the noise they were hearing was the universe's background radiation, which provided evidence for the big bang.
People, in general, are satisfied with what they know; yet the more information you give them on a topic, especially experts in a field, the more assumptions they will make, and the worse off they will be.
Our beliefs are sticky, and we are not likely to change our minds. When we develop an opinion based on weak evidence, or none at all, we have difficulty absorbing later information that contradicts it. Two mechanisms play a role here: confirmation bias, the inclination to process information by focusing on the part that is consistent with one's existing beliefs, and belief perseverance, the tendency not to reverse opinions one already holds. In a 1965 experiment, Stuart Oskamp, a psychologist (Ph.D., Stanford University) whose research focused on attitudes and attitude change, supplied clinical psychologists with successive files containing increasing amounts of patient information. The psychologists' diagnostic ability did not improve with the additional information; instead, they became more confident in their original diagnoses. As another example of this overconfidence effect, people who refuse a vaccine will not change their decision even as more evidence proves them wrong; rather than admit being mistaken about their anti-vaccine attitude, they may interpret the contradictory evidence as support.
Taleb explains why the Gaussian bell curve distribution cannot foresee black swans, and why the fractal model may be more helpful.
The book does not suggest that everything is random and that we should stop trying to predict the future, but it shows how the models experts use are not qualified to predict black swans. Black swans are not some objectively defined phenomenon, like rain or a car crash; they lie outside the tunnel of possibilities. And while it is not easy to compute their odds, we can still reduce their effects. Taleb proposes that we convert them into "grey swans," referring to extreme events that can be modeled by fractal geometry.

"Fractal" is a word coined by the mathematician Benoit Mandelbrot, and fractality is the repetition of geometric patterns that look the same at different scales, revealing smaller versions of themselves; small parts, in some cases, resemble the whole. The typical example of a fractal in nature is the tree. The trunk is the origin point of the fractal, and each set of branches growing off that main trunk subsequently has branches of its own, which continue to grow and branch further until they become small enough to be twigs. This cycle forms a near-infinite pattern, and each branch resembles a smaller-scale version of the whole tree.

The fractal model differs from the bell curve model when it comes to predictions: the fractal has numerical measures that are preserved across scales, so, unlike with the Gaussian, the ratios stay the same. In the weight example, if you know that the total weight of two people is 160 kilos, your best guess is that they weigh 80 kilos each, not 20 and 140 kilos; an adult weighing less than 40 or 50 kilos is very rare. But in the domain of books, if I told you that two authors sold a total of a million copies, the most likely outcome is that one of them sold 990,000 copies and the other 10,000. This is far more likely than each having sold 500,000 copies.
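The two guesses can be compared numerically. The parameters below are illustrative assumptions of mine, not values from the book: an 80 kg mean with a 15 kg standard deviation for weight, and a Pareto exponent of 1.5 with a 1,000-copy floor for sales. Under these assumptions, the even split dominates in the Gaussian world and the lopsided split dominates in the fractal one.

```python
import math

# Gaussian model of adult weight (assumed mean 80 kg, sd 15 kg):
# compare the likelihood of an even 80/80 split of a 160 kg total
# against a lopsided 20/140 split.
def normal_pdf(x, mu=80.0, sigma=15.0):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

even = normal_pdf(80) * normal_pdf(80)
lopsided = normal_pdf(20) * normal_pdf(140)
print(f"Gaussian: even split is {even / lopsided:.0e} times more likely")

# Power-law (fractal) model of book sales, with an assumed Pareto
# exponent alpha = 1.5 and a minimum of 1,000 copies sold.
def pareto_pdf(x, alpha=1.5, x_min=1_000.0):
    return alpha * x_min ** alpha / x ** (alpha + 1)

even_sales = pareto_pdf(500_000) * pareto_pdf(500_000)
lopsided_sales = pareto_pdf(990_000) * pareto_pdf(10_000)
print(f"Power law: lopsided split is {lopsided_sales / even_sales:.0f} times more likely")
```

Under the Gaussian, a 20/140 split sits four standard deviations out on each side, so the even split is millions of times more likely; under the power law, the ratio flips and the lopsided split wins by a factor in the thousands.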
The bell curve does not account for such a possibility because it focuses on the ordinary and deals with exceptions later. The fractal model, in a sense, takes the exception as its starting point and treats the normal as subordinate. The book's central argument is to be cautious when making predictions, and it points at the blind spots that experts in finance and economics are oblivious to when making their forecasts. The book achieved its goal of attracting attention and brought the discussion of uncertainty, randomness, and improbability to a broader range of readers. Moreover, the financial crisis of 2008 contributed to its enormous success. The book sold more than 3 million copies in the first three years after publication, was translated into more than 30 languages, and spent 36 weeks as a New York Times bestseller.