Think Twice
TT1: Inside/Outside View
TT2: Biases – Overview
TT3: Experts and the Crowd
TT4: Situational Awareness
TT5: More is Different
TT6: Evidence of Circumstance
TT7: Grand Ah Whooms
TT8: Sorting Luck from Skill
TT1: Inside/Outside View
We have a tendency to favor the inside view over the outside view. The inside view considers a problem by focusing on the specific task and on information that is close at hand, and it makes predictions based on that narrow and unique set of inputs. These inputs may include anecdotal evidence and fallacious perceptions. Such assessments are generally built into the evaluation model and usually paint an optimistic view.
The outside view asks whether there are similar situations that can provide a statistical basis for making a decision. Rather than seeing a problem as unique, it looks for comparable problems and asks what happened in them. The outside view is generally considered an unnatural way of thinking because it disregards cherished, close-at-hand information, but it provides a valuable reality check for decision makers.
Three reasons why people take the inside view:
1. Illusion of superiority: the least capable people tend to have the largest gap between what they think they can do and what they can actually achieve. When people do rank themselves poorly on a trait, they tend to dismiss that trait as unimportant.
2. Optimism: most people see their own future as brighter than that of others.
3. Illusion of control: people behave as if chance events are subject to their control. People who believe they have control perceive their odds of success as better than they actually are; people who don't have that sense of control don't show the same bias.
Anecdote: a short, interesting story about a real person or event; an account generally considered unreliable. A positive anecdote paired with a low statistical success rate tends to be more persuasive than a negative anecdote paired with a high success rate. We are susceptible to anecdotes, especially from friends and from sources we consider reliable. Although anecdotes often have no real merit for the subject at hand, they pull us toward the inside view.
Planning fallacy: people have difficulty estimating the time and cost of jobs, and when they are wrong they usually underestimate both. Only about one quarter of people use base-rate data when making projections. While people are poor at making judgments about themselves, they tend to be fairly good at judging other people; we tend to think of ourselves as different from, or better than, the people around us.
How to invoke the outside view (a small numerical sketch follows the steps):
1. Select a reference class. Find a group broad enough to be statistically meaningful but narrow enough to be useful.
2. Assess the distribution of outcomes. Take a close look at success and failure rates; study the distribution, including the average and the extremes. The statistical rates of success and failure must remain reasonably stable over time for the reference class to be valid. Also be careful if small perturbations can lead to large-scale change: when cause and effect are hard to determine, the reliability of the reference class suffers.
3. Make a prediction. With valid data and an awareness of the distribution, you can make a forecast. Estimate your chance of success or failure, and recognize that your probabilities will still be somewhat optimistic.
4. Assess the reliability of your prediction and fine-tune. When cause and effect are clear, you can have more confidence in your prediction; if predictions are off, adjustments need to be made. The best decisions arise from sameness, not uniqueness.
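As a rough illustration of these steps, here is a minimal Python sketch built around a hypothetical reference class of past project durations; the numbers and the 70/30 base-rate weighting are illustrative assumptions of mine, not figures from the book.

```python
# A rough sketch of the four outside-view steps, using a hypothetical
# reference class of past project durations (all numbers are illustrative).
import statistics

# Step 1: select a reference class of comparable past projects (months).
reference_class = [10, 12, 14, 15, 18, 22, 30]

# Step 2: assess the distribution of outcomes.
base_rate = statistics.mean(reference_class)
low, high = min(reference_class), max(reference_class)

# Step 3: make a prediction anchored on the base rate, then adjust modestly
# for the inside view (the 70/30 weighting is an arbitrary assumption).
inside_view_estimate = 9
forecast = 0.7 * base_rate + 0.3 * inside_view_estimate

# Step 4: compare forecasts to actual outcomes over time and fine-tune.
print(f"base rate: {base_rate:.1f} months (range {low}-{high})")
print(f"adjusted forecast: {forecast:.1f} months vs. inside view of {inside_view_estimate}")
```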
TT2: Biases – Overview
Anchoring bias: even after the bias is explained, you can still see it in effect. Psychologists attribute this to anchoring being a predominantly subconscious process. Anchoring is symptomatic of insufficient consideration of alternatives, and failure to entertain other options or possibilities can lead to dire consequences, including unwarranted confidence in final decisions.
Reasoning: People reason from a set of premises and only
consider compatible possibilities. As a
result, people fail to consider what they believe is false.
A problem's presentation significantly influences our approach, evaluation, and choice. How it is described, how the subject feels about it, and their individual knowledge all contribute. This is supported by prospect theory.
Mental models are internal representations of an external reality. They are incomplete and inherently trade detail for speed. Mental models replace more cumbersome reasoning processes. Ill-conceived mental models are difficult to alter and consistently produce poor decisions.
Our brains are very good at getting answers quickly and often efficiently. Getting to a solution expeditiously means homing in on what seem to us the most likely outcomes and leaving out much of what could be. These processes have worked well for us in the natural world, but they are not as well suited to today's environment.
Key
example: giving real estate agents
houses to appraise. Same house,
different initial listings. Most agents
anchored on the listing price, then adjust.
However, agents insisted their appraisals were independent. The bias is pernicious because we are so
unaware of it.
Representativeness heuristic: we often rush to conclusions based on representative categories in our minds.
Availability heuristic: judging the probability of an event based on what is readily available in memory. We tend to weigh a probability heavily if we have seen an instance of it recently.
We also tend to extrapolate inappropriately from past results. Though statistically a few data points don't represent a confirmed trend, our brains are sure they do. This is tough to overcome: pattern recognition is deeply rooted in our minds, in conjunction with predictive behaviors.
Models based on the past assume the future will be like the past. As such, we tend to anticipate without giving suitable consideration to other possibilities.
Cognitive dissonance: the rigidity that comes with the innate human desire to be internally and externally consistent. It arises when an individual holds two cognitions (attitudes, beliefs, opinions) that are psychologically inconsistent. The dissonance causes mental discomfort that our minds seek to reduce, usually with the least amount of mental effort.
The primary way we deal with it is to justify our actions. The author indicates a little self-delusion is fine when the stakes are low, but significantly problematic when the stakes are high.
Confirmation bias: seeking outside information to affirm a position or belief we already hold. The bias offers us a couple of benefits:
1. It permits us to stop thinking about the issue and gives us a mental break.
2. It frees us from the consequences of reason and possibly from some responsibility.
The first allows us to avoid thinking, the second to avoid acting.
However, people tend to be selective in their exposure and retention. Reinforcement of our ideas tends to create a positive stimulus, while contrary ideas tend to produce negative emotions. Because it is easy to filter out contrary ideas, we should be aware of the bias and keep problem solving in a broader context.
There is some discussion of stress, in the form of a time bias: stressful situations that focus people on the short term hamper their ability to make decisions for the long run.
Incentives: a factor, financial or otherwise, that encourages a particular decision or action. Incentives can cause conflicts of interest that compromise a person's ability to consider alternatives.
Subprime example: the incentives that led to it.
1. People with poor credit could buy a house.
2. Lenders earned fees on the loans they made to people with poor credit.
3. Banks bought individual mortgages, bundled them, and sold them to investors, for a fee.
4. Rating agencies were paid a fee to rate the mortgage-backed securities, many of which were deemed AAA.
5. Investors in AAA-rated mortgage-backed securities earned higher returns than on other AAA investments.
The subprime event clearly illustrates that what may be beneficial for parts of a system is not necessarily beneficial for the system as a whole.
Key example: accountants were asked to review accounting procedures at a firm. Half were told they were hired by the company, half by an outside firm. Those hired by the company were about 30% more likely to be less critical. Once again, the bias created by the incentive is so ingrained that people are completely oblivious to it.
How to avoid biases:
1. Explicitly consider alternatives.
2. Seek dissent.
3. Avoid making decisions at emotional extremes.
4. Keep track of previous decisions.
5. Understand incentives.
Humans tend to consider too few alternatives. Most cases present an obvious choice. However, biases hinder our ability to view reality objectively, and the hindrance increases with the complexity of the system and with increasing uncertainty.
TT3: Experts and the Crowd
It is impossible to find any domain in which humans clearly outperform crude extrapolation algorithms, much less sophisticated statistical ones.
Despite the above statement, we still tend to rely on experts. Most people have a hard time assimilating broad statistical evidence into the judgment at hand. The wisdom of crowds generally provides better forecasts than experts do.
Diversity Prediction Theorem: collective error = average individual error − prediction diversity. Squared errors are used so that negative and positive values do not cancel each other out. Average individual error captures the accuracy of the individual guesses. Prediction diversity reflects the dispersion of the guesses, or how different they are from one another. Collective error is the difference between the correct answer and the average guess. You can reduce the collective error either by increasing ability or by increasing diversity; both matter. This implies a healthier understanding of markets. A diverse collective always beats the average individual and frequently beats everyone.
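The theorem is an exact identity, which a short Python check makes concrete; the true value and the guesses below are made up for illustration.

```python
# Numerical check of the Diversity Prediction Theorem with made-up guesses.
truth = 100.0
guesses = [80.0, 95.0, 110.0, 130.0]
n = len(guesses)

crowd = sum(guesses) / n                                      # average guess
collective_error = (crowd - truth) ** 2                       # squared error of the crowd
avg_individual_error = sum((g - truth) ** 2 for g in guesses) / n
prediction_diversity = sum((g - crowd) ** 2 for g in guesses) / n

# collective error = average individual error - prediction diversity
print(collective_error)                                       # 14.0625
print(avg_individual_error - prediction_diversity)            # 14.0625
```

The identity also shows why a crowd with little diversity can only do well if its average individual error is already small.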
For the crowd to be effective, three things must be in place: diversity, aggregation, and incentives. Incentives help reduce individual errors; diversity reduces the collective error; aggregation ensures that everyone's information is brought together.
Inappropriately relying on intuition: intuition can play a part in successful decision making; the trick is to know when it guides you right and when it leads you astray.
Kahneman's two systems of decision making:
1. Experiential system: fast, automatic, effortless, associative, and difficult to control or modify.
2. Analytical system: slower, serial, effortful, and deliberately controlled.
The experiential system uses perception and intuition to generate impressions of objects or problems, and the individual may not be able to explain them. The analytical system is involved in all judgments, whether or not the individual is conscious of the decision.
So,
intuition is a judgment that reflects an impression.
Experts accumulate experience through which the analytical system is aggregated into the experiential system. Effectively, experts internalize the salient features of the system they are dealing with, which frees up higher brain functions.
Traits of experts:
1. They perceive patterns in their area of expertise.
2. They solve problems faster than novices do.
3. They represent problems at a deeper level.
4. They can solve problems qualitatively.
Intuition
works well in stable environments where feedback is clear and cause and effect
relationships are linear.
Intuition fails with a changing system, especially one that involves phase transitions. Intuition is losing relevance in an increasingly complex world.
Note: true experts become experts by actively using deliberate practice to train their experiential system. It is repetitive, has clear feedback, and is not a whole lot of fun. Most experts do not come close to meeting these criteria.
Neither crowds nor statistics should be used with blind faith. In many cases experts use numbers with no predictive value in their evaluations and forecasts. This mismatch extends to other areas where people evaluate and rank others on questions or tests that do not forecast performance, for example job interviews.
If breakdowns occur, the collective error can multiply and dramatically undercut the wisdom of crowds. A loss of diversity is usually the main culprit, as we are social and imitative. Information cascades occur when people start following others; these cascades explain fads, booms, and busts.
Diversity can also break down in smaller groups if there is a dominant personality, if facts are absent, or if the group tends to think alike.
How can you make the expert squeeze work for you?
1. Match the problem you face with the most appropriate solution: which approaches work best for this kind of problem?
2. Seek diversity. Knowing a little about a lot tends to make you a better predictor, though it lacks the depth of those who know a lot about a little. What separates levels of expertise is not what experts think, but how they think. Diversity increases predictive ability.
3. Use technology when possible. Identify the nature of the problem and determine the best method to solve it. Technology can do better when the data are valid, and it helps eliminate biases.
TT4: Situational Awareness
Group
decisions, even poor ones, influence our individual decisions. About one third of people will conform to the
group answer, even if it is wrong. The
real question is what is going on in the heads of the people who do conform.
1. Distortion of judgment: people conclude their perception is wrong and the group's decision is right.
2. Distortion of action: people suppress their own knowledge in order to conform with the group.
3. Distortion of perception: people are not aware that the majority opinion distorts their estimates.
Studies indicate the group's decision affects the individual's perception, not necessarily the higher functions of judgment or action. Seeing becomes what the group wants you to see. People who remained independent of the group's decision showed higher activity in the amygdala, which is known for preparing the body for action (fight or flight). This suggests that standing alone is unpleasant.
Our situation influences our decisions enormously. This is hard to avoid because it is largely unconscious. Making good decisions in the face of subconscious pressure requires a high degree of knowledge and self-awareness.
If someone
gives you a cue word, it creates an associative path and it will color your
decision.
People around us also influence us, through:
1. Asymmetric information: information someone else knows but you don't.
2. Peer pressure, or the desire to be part of a group (a collection of interdependent individuals). Conformity does lead to a breakdown of diversity within a group.
Fundamental attribution error: the tendency to explain behavior based on an individual's disposition rather than their situation. We naturally associate bad behavior with bad character, except in ourselves. Although we like to believe our choices are independent of our circumstances, the evidence strongly suggests otherwise.
Situational awareness does not manifest itself the same way across cultures (Eastern versus Western thought):
1. Easterners focus on the environment, Westerners on the individual.
2. Westerners believe they are more in control.
3. Easterners are more open to change.
The differences are attributed to two distinct philosophical traditions.
Example: an employee shoots his boss over some action on the boss's part, and the media treat it differently. The West focuses on the individual: a bad temper or mental instability. The East focuses on relationships: he didn't get along with coworkers, or was perhaps influenced by a similar recent act.
It is a
mistake to believe our decisions are independent of our experiences
(situation).
Subliminal
advertising does not work as the link is weak.
Effective priming must be sufficiently strong and the individual must be
in an environment that sparks behavior.
Default setting: people tend to go with the default option rather than opting out.
It is a mistake to assume that people decide what is best for them independent of how the choice is framed. Choice architecture: we can nudge people toward a particular decision based on how the choices are arranged for them. Structuring choices creates a context for decision making, and this can also be applied to large groups.
It is a
mistake to rely on immediate emotional reactions to risk instead of an
impartial judgment of possible future outcomes.
Affect: how a positive or negative emotional impression of a stimulus influences decisions.
How we feel
about something influences how we decide about it. If we feel strongly about something, we tend
to go with system 1 (fast, automated, reflexive) rather than system 2 (slow,
analytical, reflective). Highly
situational and largely unconscious.
Two core principles:
1. When the outcomes of an opportunity lack potent affective meaning, people tend to overweight probabilities. Example: a system that saves 150 lives versus a system that saves 98% of 150 lives (147 lives). The 98% version, although objectively worse, scored as the better system because "98%" is a potent positive stimulus.
2. When outcomes are vivid, people pay too little attention to probabilities and too much attention to the outcomes (as in gambling).
Probability
insensitivity: paying more attention to
the outcomes than to the probability of the outcome.
Final mistake: explaining behavior by focusing on people's disposition rather than considering the situation, a restatement of the fundamental attribution error. The situation is generally much more powerful than people, especially Westerners, acknowledge. The combination of the group and the setting lays the groundwork for behavior that can dramatically deviate from the norm.
Factors affecting the situation:
1. Situational power is most likely in novel settings, where there are no previous behavioral guidelines.
2. Rules, which can emerge through interaction, can create a means to dominate and suppress others, because people justify their actions as merely conforming to the rules.
3. People who are asked to play a certain role for a prolonged period of time may not be able to break out of that role later on.
4. In situations that lead to negative behavior, there is often an enemy.
Power of inertia: inertia, or resistance to change, also shows how the situation shapes real-world decisions. There is a discussion of "we've always done it that way," which can be changed through a fresh look, often by outsiders.
Discussion
on how strict regulation can hamper change and sometimes counter common sense.
Coping with situational bias:
1. Be aware of your situation. (a) Create a positive environment: focus on process, keep stress in check, provide thoughtful choices, and make sure to defuse forces that encourage negative behavior. (b) Cope with subconscious influences: this requires awareness of the influences, the motivation to deal with them, and the willingness to address possibly poor decisions.
2. Consider the situation first and the individual second. Evaluate the decisions of others by starting with the situation and then turning to the individuals.
3. Watch out for the institutional imperative: imitating what your peers are doing. Companies tend to want to be in the in-group. Also watch incentives: executives can reap financial benefits from following the crowd, and if this dynamic is in effect it is difficult not to be drawn into it.
4. Avoid inertia. Periodically review your processes.
We tend to:
1. Think of ourselves as good decision makers.
2. Think we weigh the facts, consider the alternatives, and select the best course of action.
3. Think we are immune from the influences of others.
4. Convince ourselves that facts and experience carry the day.
In reality, decision making is:
1. Inherently a social exercise.
2. Influenced by primes, defaults, affect, and the behaviors of those around us, mostly at a subconscious level.
A thoughtful
decision maker becomes aware of the biases and influences and finds ways to
manage them.
TT5: More is Different
Complex adaptive system:
1. Consists of a group of heterogeneous agents (neurons in a brain, bees in a hive). Heterogeneity means each agent has different and evolving decision rules that both reflect the environment and attempt to anticipate change in it.
2. The agents interact with each other, and their interactions create structure, also referred to as emergence.
3. The system that emerges behaves like a higher-level system and has properties and characteristics distinct from those of the underlying agents themselves.
Even though
individuals may be inept, the colony as a whole is smart. The whole is greater than the sum of the
parts.
The behavior
of complex aggregates of elementary particles is not to be understood in terms
of simple extrapolations of a few particles.
At each new level of complexity, new properties appear. Don’t study the ant, study the colony.
Humans have
a deep desire to understand cause and effect.
In complex adaptive systems, there is no simple method for understanding
the whole by studying the parts.
Searching for simple agent-level causes of system-level effects is useless.
When a mind seeking links between cause and effect meets a system that conceals them, accidents will happen.
First
mistake: Inappropriately extrapolating
individual behavior to explain collective behavior.
Example: earnings capture the headlines as the driving
force in determining share price.
However, studies indicate cash flows may be a better driver. Both address the question from different
perspectives.
Earnings are what the media focus on; economists look at how the market is behaving. One group focuses on the components, the other on the aggregate. The opinion of the market is more relevant than listening to a few individuals: just because individuals can be consistently wrong doesn't mean the whole is wrong.
Market
irrationality does not flow from individual irrationality. Collective behavior matters more. You must carefully consider the unit of
analysis to make a proper decision.
In a complex system with many interconnected parts, a small tweak can have significant impacts on the rest of the system, or unforeseen consequences.
Second mistake: addressing one component of the system can have unintended consequences for the whole, even when the change arises from the best of intentions. The decision-making challenge remains for a couple of reasons:
1. Our modern world has more interconnected parts, so we encounter such systems more often and usually with greater consequences.
2. We still attempt to resolve problems in complex systems with a naïve understanding of cause and effect.
Third mistake: isolating performance without proper consideration of the individual's surrounding system. Analyzing results requires separating the relative contributions of the individual from those of the environment. When we err, we tend to overstate the role of the individual.
All three mistakes have the same root: a focus on an isolated part of a complex adaptive system without an appreciation of the system's dynamics. In addition, the frequency and scope of such systems are only growing in our world, which means you will encounter these mistakes more often.
Advice:
1. Consider the system at the correct level. "More is different." The most common mistake is to extrapolate from the individual agents rather than from the whole. To understand the stock market, understand it at the market level.
2. Watch for tightly coupled systems. Complex systems that are tightly coupled can fail rapidly, because one process in the system is closely followed by another (the nuclear power plant example). Most complex adaptive systems are loosely coupled: the removal of a few agents has little consequence. Markets tend to be loose, but booms and busts illustrate when the market becomes tight. Regulation (good or bad) can couple markets tightly. (A small simulation contrasting loose and tight coupling follows this list.)
3. Use simulations to create virtual worlds. Feedback in complex systems is difficult to obtain accurately due to limited information and a lack of clear cause-and-effect relationships.
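As a toy illustration of points 2 and 3 (an assumption of mine, not an example from the book), the sketch below simulates a chain of components in which a failure spreads to the next component with a given coupling probability: loosely coupled systems see small cascades, tightly coupled ones see large failures.

```python
# Toy model of failure cascades: one component fails, and the failure spreads
# to the next component with probability `coupling`.
import random

def average_cascade(coupling, n_components=100, trials=2000):
    total = 0
    for _ in range(trials):
        failed = 1                                   # one component fails at random
        while failed < n_components and random.random() < coupling:
            failed += 1                              # failure propagates to the next component
        total += failed
    return total / trials                            # mean cascade size

random.seed(0)
print("loosely coupled (p=0.20):", average_cascade(0.20))   # cascades stay small
print("tightly coupled (p=0.95):", average_cascade(0.95))   # cascades engulf much of the system
```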
Understanding how well-intentioned, intelligent people can create an outcome that no one expected and no one wants is a profound life lesson.
The orderly processes that create human judgment and intuition lead people to wrong decisions when they face complex and highly interacting systems.
Our innate desire to grasp cause and effect leads us to understand the system at the wrong level, resulting in predictable mistakes. Complex systems can work well with dumb agents, and they can fail spectacularly when well-meaning, intelligent people attempt to manage the system.
TT6: Evidence of Circumstance
People try
to cram the lessons or experiences from one situation into a different
situation. This strategy often fails
because decisions that work in one context fail miserably in another. The right answer to most questions is ‘it
depends.’
People ground their choices in theory: a belief that a certain action will lead to a satisfactory outcome. You can think of a decision as a theory: an explanation of cause and effect.
Theory building occurs in three stages:
1. Observation: carefully measuring a phenomenon and documenting the results. The goal is to set standards so the results can be reproduced.
2. Classification: simplifying and organizing the world into categories to clarify differences among phenomena, based predominantly on attributes.
3. Definition: describing the relationship between the categories and the outcomes.
Theories improve when researchers test predictions against real-world data. They also tend to improve as researchers start classifying by circumstances, not just by attributes, in the definition stage. Theories then advance beyond simple correlations and sharpen to define causes.
First
mistake: Embracing a strategy without
fully understanding the conditions under which it succeeds or fails.
Second
mistake: Failure to think properly about
competitive circumstances.
Third mistake: failure to distinguish between correlation and causality. This arises when researchers observe a correlation between two variables and assume one caused the other.
Three conditions must hold for causality between two variables:
1. X must occur before Y.
2. There must be a functional relationship between X and Y; cause and effect each take on at least two values (does the person smoke, does the person have lung cancer).
3. For X to cause Y, there cannot be a factor Z that causes both X and Y.
Fourth
mistake: inflexibility in the face of evidence that change is necessary.
Advice:
1. Ask whether the theory behind your decision making accounts for circumstances. People tend to carry decisions from successful past experiences into new situations. This is not good decision making if followed blindly, or if components of the new situation differ from the old.
2. Watch for the correlation trap. We innately look for cause and effect in complex systems, and we are not above making things up or perceiving them wrongly to make cause and effect work. This runs the risk of observing an event that occurred by chance and attributing it to a causal relationship.
3. Balance simple rules with changing conditions.
4. Remember there is no "best" practice.
TT7: Grand Ah Whooms
When positive feedback takes over: feedback can be positive or negative, and many healthy systems have both. Positive feedback promotes change; negative feedback resists it and provides stabilization. Too much of either can leave a system out of balance.
Positive feedback reinforces an initial change in the same direction.
Phase transitions: when small incremental changes cause large-scale effects. They occur in many complex systems where collective behavior emerges from the interaction of constituent parts. These systems have critical points, or thresholds, at which phase transitions occur.
Critical points are very important for proper counterfactual thinking, as in "what might have been": for every phase transition you saw, how many close calls were there? Large-scale outcomes are the result of internal workings, not external shocks, which is referred to as invisible invulnerability.
Distributions in many systems don't stray far from their averages. However, for some complex systems the distributions are heavily skewed, and the idea of an average holds little meaning.
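A small made-up comparison shows why: in a skewed distribution, one extreme observation pulls the mean far from the typical outcome, so the mean describes almost nobody.

```python
# Mean versus median in a well-behaved series and a heavily skewed one
# (numbers invented for illustration).
import statistics

normal_world = [9, 10, 10, 11, 10, 9, 11, 10]     # outcomes cluster near the average
skewed_world = [1, 1, 2, 1, 3, 2, 1, 989]          # one extreme outcome dominates

for name, data in [("clustered", normal_world), ("skewed", skewed_world)]:
    print(name, "mean:", statistics.mean(data), "median:", statistics.median(data))
```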
Black swan: an extreme outcome in a distribution; an outlier event that has a consequential impact and that humans try to explain after the fact. People recognize black swan events but not what propagates them.
Positive feedback leads to outcomes that are outliers, and critical points help explain our perpetual surprise at black swan events: we have a hard time understanding how small perturbations can lead to such large impacts.
What lies behind critical points? The answer can be found in the wisdom of crowds. Crowds tend to make accurate predictions when diversity, aggregation, and incentives are in place. Diversity: people having different ideas. Aggregation: a way to bring the group's information together. Incentives: rewards for being right.
Diversity is the most likely of the three to fail when humans are involved. It is important to note that crowds don't go from smart to dumb gradually. As diversity lessens, at first there is little impact, but at some point a further reduction in diversity hits a critical point and causes a large-scale change in the system.
During the run-up to a crash, population diversity falls. Agents begin using similar trading styles, and that common behavior is reinforced. The population becomes brittle, and a reduction in demand can have a huge impact on the market.
First mistake: induction, or how you logically go from specific observations to general conclusions. Popper argued that seeing lots of white swans doesn't prove that all swans are white, but seeing one black swan does prove that not all swans are white. Popper's point is that we are better off pursuing falsification than verification. However, we are not naturally inclined to try to falsify things.
When we think about something a certain way, it is hard for us to think of it in a different way; we have a strong tendency to stick to an established perspective and are slow to consider alternatives. Good outcomes provide us with confirming evidence that our strategy is good. This illusion feeds the overconfidence bias, and we are surprised when the strategy fails.
Second mistake: reductive bias, the tendency to oversimplify complex systems. People tend to think of systems as simple and linear even when they are complex and non-linear.
Mistake: Belief in prediction. Ours is the only world we know. Generally there is no way to test the
inevitability of the outcomes we see.
Social influence plays a huge part in success or failure. There is no guaranteed link between quality and commercial success; social influences tend to exacerbate both product successes and failures.
Inequality of outcomes is substantially greater in the social worlds than in the independent worlds, and the luck of the draw is decided early on. Our world represents one of many possible worlds, and small changes in initial conditions lead to big differences in outcomes. Social influence can be the engine of positive (change-oriented) feedback.
Advice:
1. Study the distribution of outcomes for the system you are dealing with. If evaluations include the extremes, black swans become gray. The key is to prepare for extremes: we are not necessarily scorched by black swans, but by failing to prepare for gray swans.
2. Look for phase-transition moments. A reduction in diversity, also referred to as coordinated behavior, increases the chance of a system failure.
3. Beware of forecasters. The accuracy of forecasts is dismal in systems with phase transitions, even from the experts.
4. Mitigate the downside, capture the upside. Betting too much in a system with extreme outcomes leads to ruin. Extremes occur on both the positive and negative sides, and consequences are more important than probabilities.
We tend to oversimplify complex systems and thus become mistake-prone. If you see such a system, slow your thinking down. When navigating through potentially many black swans, the key is to live to see another day.
TT8: Sorting Luck from Skill
We have
difficulty sorting luck from skill. The
result is that we often make predictable and natural mistakes.
Reversion to
the mean suggests things become more average over time, while a stable
distribution implies things don’t change.
Understanding that change and stability go together is the key to understanding reversion to the
mean.
Any system that requires both skill and luck will revert to the mean over time. Kahneman suggests the following equations:
Success = some talent + luck
Extreme success = some talent + a lot of luck
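A toy Monte Carlo sketch, not from the book, makes the reversion concrete: give each player a fixed talent plus fresh luck each season, and the best year-one performers land closer to the average in year two (all parameters below are arbitrary assumptions).

```python
# Toy Monte Carlo of "success = talent + luck": the top performers in year one
# fall back toward the average in year two (parameters are arbitrary).
import random

random.seed(1)
n_players = 1000
talent = [random.gauss(0, 1) for _ in range(n_players)]

def season(talent):
    return [t + random.gauss(0, 2) for t in talent]   # outcome = talent + fresh luck

year1, year2 = season(talent), season(talent)

# Take the top 10% of year-one performers and compare their two seasons.
cutoff = sorted(year1, reverse=True)[n_players // 10]
top = [i for i in range(n_players) if year1[i] >= cutoff]

print("top decile, year 1:", sum(year1[i] for i in top) / len(top))
print("same players, year 2:", sum(year2[i] for i in top) / len(top))   # closer to the mean
```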
When you ignore reversion to the mean, you make the following mistakes:
1. Thinking that you are special.
2. Misinterpreting what the data says: what looks like cause and effect is really reversion to the mean.
3. Making specific inferences from general observations.
Reversion to the mean also applies to economic indicators.
We tend to
observe financially successful companies, attach attributes (leadership,
visionary strategy, financial controls) to that success, and recommend that
others embrace those traits to achieve their own success.
In other words, we tie traits to success without understanding the role of luck. When results start to revert, we become critical of those same companies.
The effect is amplified by the media. The press's tendency to focus on extreme performance is so predictable that it has become a reliable counter-indicator.
Example: Forbes, BusinessWeek, and Fortune feature companies in bullish and bearish articles. Those in the bullish category had generated 40%+ returns over the previous two years, the opposite of the companies featured in bearish articles. Over the next two years, the bearish companies outperformed the bullish companies three to one.
Advice:
1. Evaluate the mix of skill and luck in the system you are analyzing. To evaluate it, ask yourself whether you can lose on purpose: if you can't, the outcome is mostly luck; if you can, skill is involved. When we win we tend to attribute it to skill, and when we lose, to bad luck.
2. Carefully consider the sample size. We tend to draw unfounded conclusions from small samples. The more luck contributes to outcomes, the larger the sample you need to distinguish luck from skill. Streaks are an indicator of skill. Favorable impressions lead to further interactions: although the next experience will revert toward the mean, more information is gathered by the recipient. Unfavorable impressions lead people not to interact or return, which ensures additional information will not be gathered.
3. Watch for change within the system or of the system.
4. Watch for the halo effect.
An
appreciation of the relative contributions of skill and luck will allow you to
think clearly about reversion to the mean.
When outcomes are good, prepare for the not so good and vice versa.
End Notes:
The value of this information pertains to higher-risk decisions. Low-impact decisions are normally straightforward and easy to make; we make them numerous times every day.
You should:
1. Prepare: learn about potential mistakes.
2. Recognize: identify them in context.
3. Apply: sharpen your ultimate decisions.