Sunday, March 18, 2012

Nudge Nudge


Finishing up the discussion of Kahneman’s Thinking, Fast and Slow, I want to end on a cautionary note: the book’s model of a human mind broken into two parts (a fast, associative System 1 that does the bulk of our day-to-day cognitive work, and a slower, deliberative System 2 that analyzes deeply, handles counter-intuitive situations, and takes control when necessary) should be seen for what it is, a metaphor.

The author himself announces the metaphorical nature of his construct of the mind, telling us early on that his creation of the labels System 1 and System 2 was a deliberate artifice which, among other things, provides his alleged System 1 with some named characters for one of the stories it uses to achieve understanding.  And as much as the metaphor of fast and slow mental entities working in partnership helps explain a wide range of observable phenomena (such as our tendency to fall for visual and cognitive illusions), we shouldn’t lose sight of the fact that Kahneman’s is just one in a long line of metaphors used to describe human behavior.

For example, while it has become fashionable to sneer at Freud, his identification of a conscious and unconscious mind (not to mention the Ego, Superego and Id he posited) not only influenced both scientific and popular culture, it also served as the foundation for modern public relations and advertising when put to practical use by Freud’s nephew Edward Bernays.  And our culture’s fondness for the notion of a Left-Right creative-logical split in the brain (despite an absence of physical evidence for such a divide) simply shows how much we humans desire to break our world into neatly labeled categories.

Now it may very well turn out that Kahneman’s fast/slow dichotomy will withstand experimental scrutiny over time.  But work done since the original research (including a colleague’s thesis I’m currently reading that proposes three different modes of thinking vs. Kahneman’s measly two) is already building on and refining the original hypothesis.  So before we abandon previous theories and models and start thinking of ourselves entirely in fast and slow terms, it’s best to exercise some humility and accept that this model, while useful, may eventually be proven wrong.

Shifting gears to political matters (which are supposed to be the subject of this blog), I must also admit to becoming uncomfortable when the author began suggesting how his theories could be put to practical advantage.

If our minds work a certain way, the author asserts, why not take advantage of this phenomenon to gently move (or nudge) people towards socially acceptable or preferable behavior?

The classic example of this idea comes out of the near-100% volunteer rate for organ donation in certain European countries, vs. a far smaller rate for Americans.  This difference turns out not to be the result of differing levels of generosity between cultures, but rather a difference in forms – specifically, organ-donation volunteer forms, which are opt-out abroad (meaning volunteering for organ donation is the default choice) but opt-in here in the US.  So if telling your printer or web page designer to change which box is checked automatically can lead to such a huge increase in some socially beneficial good, why not apply the same technique to other societal problems (such as improving American saving habits by making enrollment in an automatic savings plan the default option)?

The trouble is that you pretty rapidly run out of causes where playing on universal cognitive hard-wiring will lead to unquestionably good outcomes.  Kahneman highlights the popularity of Richard Thaler and Cass Sunstein’s recent book Nudge, which shows how slight manipulations in how information is presented can gently push people towards making the right choices about their money, health and overall happiness.  And he’s particularly excited that one of the book’s authors is playing a role in the current US administration, providing an avenue to put these ideas into practice.

But who gets to decide what behaviors the population will be nudged into?  And who gets to determine where friendly nudging ends and outright manipulation begins?  Historically, powerful persuasive techniques such as classical rhetoric or Freudian science have been embraced by politicians and advertisers eager to bend us to their purposes.  So what is to prevent new techniques, based on what we think we know about our fast and slow processors, from simply becoming the friendliest option on a list that also includes propaganda and coercion?

It may be a fool’s errand, but if this blog is about anything, it’s about trying to engage that reasoning part of our minds (or our slow processor, if you like) much more often than we do when it comes time to make key decisions that can affect our future as individuals and as a country.  And while it would be far simpler if we could all be made to “do the right thing” by simply switching around some wording or some options on a form, fear over who gets to be the nudger makes the far messier option of trying to get people to think for themselves a preferable alternative.

Friday, March 9, 2012

Slow Down


Continuing with the discussion of Daniel Kahneman’s Thinking, Fast and Slow, I noted that the author’s construct of a brain separated into a fast, associative processor (System 1) that informs, but is occasionally overridden by, a powerful but lazy slow processor (System 2) helps explain a number of observable phenomena, such as our susceptibility to visual and cognitive illusions.

This model can also help us understand pan-human tendencies towards certain types of cognitive biases, notably confirmation bias, which makes us readily accept information that conforms to our existing beliefs but treat information that contradicts or confounds those beliefs with suspicion.  After all, if our fast associative System 1 uses stories to categorize and understand information coming at us from all directions, what makes more sense than to create a story that plays up our preferences and plays down or rejects our dislikes?

Making room in our minds for uncomfortable facts takes work, often involving looking at the world outside the framework of an easy-to-understand storyline and delving into uncertainties (or probabilities), which brings us into the realm of System 2.  Our slow processor does this work well, but engaging this lazy workhorse requires effort.  And if we can get along just fine believing a comfortable story about, say, our preferred presidential candidate representing all goodness and truth (while his opponent is a dishonest cad), why not simply accept fast System 1’s take on the matter and let slow System 2 enjoy the day off?

Kahneman’s discussion of psychology and behavioral economics can also inform other challenges that come into play during a presidential campaign, such as anchoring and framing. 

Anchoring involves putting out some information (usually numeric) that our mind automatically makes the starting point for further thinking about an issue.  An example the author uses: if I ask a group of people whether Gandhi was 140 years old when he died, all of them will, of course, answer no.  But if I then ask them how old they think he was when he died, they will tend to pick a significantly higher number than a group that was first asked the equally ridiculous anchoring question of whether the Indian leader died at age 9.

Anchoring is a powerful phenomenon which takes advantage of a System 1 that grabs the first information it receives and uses it to build out a story that is harder to edit or delete than to create.  And thus we stick to first impressions, even if those first impressions are wrong or were purposefully implanted in the conversation to anchor us.  This is why politicians and advertisers (whether communicating employment statistics or quantitative health information regarding food products) want to get their numbers entered into the conversation early, so that subsequent conversations will stay anchored where they prefer.

Fast and Slow also draws attention to our dramatic preference for frames of reference that highlight gains over losses.  For example, many people who would buy a five-dollar lottery ticket offering a 1-in-10 chance to win $100 would refuse a bet that offered them a 10% chance of winning $100 but a 90% chance of losing $5.  Even though the two bets are nearly identical in economic terms (actually, once you do the math, the second is better than the first), in psychological terms the notion of paying $5 for a lottery ticket is treated far more positively than losing $5 as the outcome of a failed bet.
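Running the numbers makes the point (the arithmetic here is my own, applied to the example above, with the ticket’s $5 price counted against the winnings):

$$
\begin{aligned}
\text{EV(lottery ticket)} &= 0.10 \times \$100 - \$5 = \$5.00 \\
\text{EV(bet)} &= 0.10 \times \$100 - 0.90 \times \$5 = \$5.50
\end{aligned}
$$

The bet comes out fifty cents ahead on average, yet buying the ticket feels like a purchase while losing the bet feels like a loss.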

Keep this in mind during the campaign season when candidates explain that they want to help you keep four-fifths of your money (rather than tax you at 20%) or perform what you suspect might be some mathematical sleight of hand.  In fact, whenever you are confronted by statistical data (from political friend or foe), it might make sense to stop what you’re doing, spend five minutes multiplying various two-digit numbers in your head (which tends to engage System 2), and then come back to the issue afresh.
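If you want a concrete warm-up of the sort Kahneman himself uses (17 × 24 is, if I remember right, the very problem he opens the book with), try working one like this through in your head rather than on paper:

$$
17 \times 24 = (17 \times 20) + (17 \times 4) = 340 + 68 = 408
$$

The deliberate, step-by-step feel of that calculation is System 2 at work.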

While I suspect that follow-up to the work of Kahneman and the researchers who came after him will help us further understand other political phenomena (such as our preference for the certain cadences and word choices reflected in the rules of rhetoric), I started to get uncomfortable with his final conclusions regarding how his discoveries could be applied in the real world – the subject of some final thoughts next time.

Saturday, March 3, 2012

Think Fast


Like many people interested in the subject of this blog, I’ve been reading and enjoying Daniel Kahneman’s best-selling book Thinking, Fast and Slow.

Those of us who dabble in the subject of critical thinking tend to assume a classical understanding of the human makeup, one that sees people as essentially rational creatures.  And when reason fails us, we tend to blame this failure on emotion or some other component of our animal/non-reasoning self temporarily overwhelming the rationality that makes us us. 

Kahneman’s work in psychology (which won him the Nobel Prize when applied to economics) contradicts (or at least confounds) these assumptions, demonstrating as it did that our reason might actually be faulty (or, at least, might not work the way we think it does).

Kahneman (in landmark work done with his colleague Amos Tversky) posited that our “mind” actually consists of two components: a fast-processing piece which he names System 1, and a slower, more deliberate part named (you guessed it) System 2.  And unlike other attempts to bifurcate or trifurcate the brain (into artistic vs. quantitative right and left hemispheres, or Freud’s Ego, Superego and Id), Kahneman’s fast System 1 and slow System 2 seem to provide a great deal of rigorous descriptive and predictive power.

Under this framework, System 1 processes information (such as information coming in from the senses) lightning fast and attempts to make sense of it via associations and stories.  You can experience the uncontrolled associative nature of System 1 the next time you hear a familiar song and immediately (and without any deliberate effort) remember the last time you heard it, the first time you heard it, a dozen songs like it, and that great date when you danced to it in college.  Stories provide a way for System 1 to create coherence around sensory data and other input, without having to engage the more deliberate concentrative power of System 2.

And this System 2 is extremely powerful, grabbing control and overriding the association- and story-driven decisions of System 1 whenever it likes.  The trouble is, deliberative System 2 doesn’t like to do this very often since it is a lazy system that would prefer to take System 1 at its word whenever possible.

Times when this is not possible include situations where understanding requires statistics rather than stories, since System 1 doesn’t really “do” probabilities.  In fact, the tools Kahneman uses to illustrate the distinction between the two Systems are bets or gambles which make no sense from a purely utilitarian point of view, but are perfectly understandable once you see decisions on whether to take those bets being made by a System 1 that doesn’t really get probability and a System 2 that would rather not bother if it didn’t have to.
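To make that concrete, consider a coin-toss gamble along the lines of those Kahneman presents (the exact stakes here are my own illustration): tails you lose $100, heads you win $150.  From a purely utilitarian standpoint the bet is clearly worth taking:

$$
\text{EV} = 0.5 \times \$150 - 0.5 \times \$100 = \$25
$$

Yet most people turn it down, which makes sense once you picture a System 1 that feels the potential $100 loss far more keenly than the potential $150 gain, and a System 2 that would rather not run the numbers at all.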

Beyond statistics, this two-part model helps explain our susceptibility to visual and cognitive illusions, such as this famous example:

[Image: the Müller-Lyer illusion – two horizontal lines of equal length, one bracketed by inward-pointing arrowheads and the other by outward-pointing ones]
Looking at this image, most of us “know” that the two parallel lines are the same length, regardless of the fact that our own eyes register the first as longer than the second (a perception that is confirmed by System 1 acting on its own, which is what happens when children confront this illusion for the first time).  The reason we grownups “know” the lines are of equal length is that our System 2 is pulling in not visual imagery (which is what System 1 uses to process data), but data drawn from memory, i.e., the specific memory of having experienced this illusion previously as a child.  (In fact, my memory recalls not just this illusion, but the exact puzzle book where I saw it published – one which had an elaborate maze on the cover whose overall shape resembled a British toff wearing a bowler hat.)

This combination of a fast associative processor and a slower, lazier deliberative processor leads to other types of illusions/errors, my favorite being the response you get when you ask people how many of each animal Moses brought onto the ark.  This gag doesn’t work on the printed page, but when you ask someone the question out loud, most of them will confidently announce “two,” and only afterwards feel sheepish that they mistook Moses for Noah, the confusion arising because both names fall into the associative category “famous Biblical figures with long-O sounds in their names.”  (My favorite use of this trick came when I asked the Moses question of my neighbor, the local Episcopal minister, who began a long exegesis on the relevant chapters of Genesis before I stopped her and told her Moses never had an ark.)

But the slow processor, which must take over to perform certain tasks (such as multiplying two-digit numbers in your head), can also cause errors and omissions, one of the most famous being illustrated in this observational assessment.

Illusions aside, during the course of any given day, most of the mind’s work is performed by System 1, with System 2 intervening only when necessary.  Reading this piece, for example (no matter how engrossed you might be by it), is pretty much a System 1 activity, given that it consists of processing written information in a language you understand.  In fact, System 2 has only really been engaged during a small portion of the time you’ve spent with this piece (when you were counting the basketball passes, if you clicked on the link above).

All very intriguing, I hear you cry out.  But what does that have to do with critical thinking in general and critical thinking about the US election specifically?  Expect an answer to that question next time.