Like many people interested in the subject of this blog, I’ve been
reading and enjoying Daniel Kahneman’s best-selling book Thinking, Fast and Slow.
Those of us who dabble in the subject of critical thinking tend to assume
a classical understanding of the human makeup, one that sees people as
essentially rational creatures. And when
reason fails us, we tend to blame this failure on emotion or some other
component of our animal/non-reasoning self temporarily overwhelming the rationality
that makes us us.
Kahneman’s work in psychology (which won him the Nobel Prize when applied
to economics) contradicts (or at least confounds) these assumptions,
demonstrating as it does that our reason might actually be faulty (or, at least,
that it doesn’t work the way we think it does).
Kahneman (in landmark work done with his colleague Amos Tversky) posited
that our “mind” actually consists of two components: a fast-processing piece
which he names System 1, and a slower, more deliberate part named (you guessed
it) System 2. And unlike other attempts
to bifurcate or trifurcate the brain (into artistic vs. quantitative right and left hemispheres, or Freud’s Ego, Superego and Id), Kahneman’s fast System 1 and slow
System 2 seem to provide a great deal of rigorous descriptive
and predictive power.
Under this framework, System 1 processes information (such as
information coming in from the senses) lightning fast and attempts to make
sense of it via associations and stories.
You can experience the uncontrolled associative nature of System 1 the
next time you hear a familiar song and immediately (and without any deliberate
effort) remember the last time you heard it, the first time you heard it, a
dozen songs like it, and that great date when you danced to it in college. Stories provide a way for System 1 to create
coherence around sensory data and other input without having to engage the
more deliberate, concentrated attention of System 2.
This System 2 is extremely powerful, able to grab control and override
the association- and story-driven decisions of System 1 whenever it likes. The trouble is, deliberative System 2 doesn’t
like to do this very often, since it is a lazy system that would prefer to take
System 1 at its word whenever possible.
Times when this is not possible include situations where comprehension
requires statistical rather than story-based thinking, since System 1 doesn’t
really “do” probabilities. In fact, the tools
Kahneman uses to illustrate the distinction between the two Systems are bets or gambles that make no sense from a purely utilitarian point of
view, but become perfectly understandable once you see the decision on whether to
take those bets being made by a System 1 that doesn’t really get probability and
a System 2 that would rather not bother if it doesn’t have to.
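To make this concrete, here is the arithmetic behind one such gamble (the specific dollar amounts are my own illustration, not figures taken from the book). Suppose a coin flip loses you $100 on tails and wins you $150 on heads. A purely utilitarian calculator would work out the expected value:

$$E[\text{gamble}] = 0.5 \times (-\$100) + 0.5 \times (+\$150) = +\$25$$

and happily take the bet. Yet most people refuse it, because System 1 feels the prospective loss far more strongly than the equivalent gain, and a lazy System 2 rarely bothers to step in and run the numbers.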
Beyond statistics, this two-part model helps explain our susceptibility
to visual and cognitive illusions, such as the famous Müller-Lyer illusion (two horizontal lines of equal length that appear unequal because of the arrow-like fins at their ends):
Looking at this image, most of us “know” that the two parallel lines
are the same length, even though our own eyes register the
first as longer than the second (a perception produced by System 1 acting on
its own, which is what happens when children confront this illusion for the
first time). The reason we grownups
“know” the lines are of equal length is that our System 2 is pulling in not
visual imagery (which is what System 1 uses to process data), but data drawn
from memory, i.e., the specific memory of having experienced this illusion previously
as a child. (In fact, my memory recalls
not just this illusion, but the exact puzzle book where I saw it published, one
which had an elaborate maze on the cover whose overall shape resembled a British
toff wearing a bowler hat.)
This combination of a fast associative processor and a slower, lazier
deliberative processor leads to other types of illusions/errors, my favorite
being the response you get when you ask people how many of each animal
Moses brought onto the ark. This gag doesn’t work on the printed page,
but when you ask someone the question out loud, most people will confidently
announce “two,” and only afterwards feel sheepish that they mistook Moses for
Noah, the confusion arising because both names fall into the associative category
“famous Biblical figures with long-O sounds in their names.” (The best use of this trick came when I
asked the Moses question of my neighbor, the local Episcopal minister, who
launched into a long exegesis on the relevant chapters of Genesis before I
stopped her and told her that Moses never had an ark.)
But the slow processor, which must take over to perform certain tasks (such
as multiplying two-digit numbers in your head), can also cause errors and
omissions, one of the most famous of which is illustrated in this observational assessment.
Illusions aside, over the course of any given day, most of the mind’s
work is performed by System 1, with System 2 intervening only when necessary. Reading this piece, for example (no matter
how engrossed you might be by it), is pretty much a System 1 activity, given
that it consists of processing written information in a language you understand. In fact, System 2 has only really been
engaged during a small portion of the time you have spent with this piece
(when you were counting the basketball passes, if you clicked on the link
above).
All very intriguing, I hear you cry out. But what does that have to do with critical
thinking in general and critical thinking about the US election
specifically? Expect an answer to that
question next time.