Steven Pinker Has His Reasons

A few years ago, at the Princeton Club in Manhattan, I fell into a memorable chat with the Harvard psychologist Steven Pinker. His spouse, the philosopher Rebecca Goldstein, whom he was accompanying, had been invited onto a panel to discuss the conflict between religion and science and Einstein’s so-called “God letter,” which was being auctioned at Christie’s. (“The word God is for me,” Einstein wrote, “nothing more than the expression and product of human weakness …”) The panel produced a fascinating discussion, but it was only a prologue to the impromptu one I had with Pinker afterward.

Pinker had recently published Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. I was eager to pepper him with questions, mainly on religion, rationality, and evolutionary psychology. I remember I wanted Pinker’s take on something Harvey Whitehouse, one of the founders of the cognitive science of religion, told me in an interview—that my own little enlightenment, of becoming an atheist in college, was probably mostly a product of merely changing my social milieu. I wasn’t so much moved by rational arguments against the ethics and existence of God but by being distanced from my old life and meeting new, non-religious friends.

I recall Pinker almost pouncing on that argument, defending reason’s power to change our minds. He noted that people especially high in “intellectance,” a personality trait now more commonly called “openness to experience,” tend to be more curious, intelligent, and willing to entertain new ideas. I still think that Pinker’s way of seeing things made more sense of my experience in those heady days. I really was, for the first time, trying my best to think things through, and it was exhilarating. We talked until the event staff shelved the wine, and parted ways at a chilly midtown intersection.

OUTSMART YOURSELF: With countless measures, we can rationally outsmart our tendencies to favor short-term interests over long-term ones, Steven Pinker says. “Like, I don’t go shopping when I’m hungry, because then I’ll buy all kinds of impulsive items that I’d rather not have around the house to tempt me.” Image: Rebecca Goldstein

Recently Pinker joined me in another conversation, this time on Zoom, to discuss his new book Rationality: What It Is, Why It Seems Scarce, Why It Matters. Reading it, I got the sense that he was at last indulging a yearning to give reason his full and undivided attention. It’s made appearances in his previous books of course. “We are all intuitive physicists, biologists, engineers, psychologists, and mathematicians,” Pinker wrote, over two decades ago, in How the Mind Works. “Thanks to these inborn talents, we outperform robots and have wreaked havoc on the planet.” And more recently, in 2011’s The Better Angels of Our Nature and 2018’s Enlightenment Now, Pinker detailed reason’s role in realizing, via a problem-solving approach to human affairs, the relatively peaceful and prosperous world we now enjoy. Because of the fruits of science and free markets, among other things, Pinker says, we’ve never been less impoverished or plagued by violence—and rationality, he argues, is a big reason why.

Crystallizing the message of Rationality, Pinker alludes, in a pair of sentences toward the end of the book, to both George Orwell and Martin Luther King, Jr. “For all the vulnerabilities of human reason,” Pinker writes, “our picture of the future need not be a bot tweeting fake news forever. The arc of knowledge is a long one, and it bends toward rationality.” The more we know, the more reasonable we tend to become.

On Zoom, with a digital view of what may have been Massachusetts Bay at his back, Pinker explained, with characteristic precision and color, such things as how best to form beliefs, what evolution has to do with irrationality, and journalism’s inattention to genuine progress. He also gamely responded to criticisms of his work. (The Guardian recently dubbed him “one of the world’s most contentious thinkers.”) He offered his views on capitalism, environmental stewardship, having children amid climate change, and billionaires.

How do you define rationality?

I searched for a better definition of rationality than “the use of reason,” because there is a bit of circularity to that. I came up with “the use of knowledge to attain goals.”

You reference the philosopher Spinoza in your book. What do you make of his argument in The Ethics that we can live a kind of spiritually serene life if we follow the dictates of reason? Is following reason anything like a spiritual practice for you?

I checked with my local expert and spouse, Rebecca Goldstein, author of Betraying Spinoza, to double-check my understanding. Perhaps not surprisingly, we both think he was right. By understanding the source and function of one’s negative emotions, one can escape being enslaved to them, and that does grant a gift of serenity. Perhaps it’s even a kind of spirituality, though it’s not a word either of us would use.

Do you try to tamp down any feelings of hope or regret? You mention these emotions and wonder if it can be “rational” for them to drive our choices. It can be, you say, “depending on whether you think hope and regret are natural responses we should respect, like eating and staying warm, or evolutionary nuisances our rational powers should override.”

Certainly not hope; though as Spinoza might advise, it might be wise to consider what one should learn from a feeling of regret—what one did wrong and should not repeat—and move on, not stewing in self-recrimination.

One of the ways you say we can be more rational is by adopting a Bayesian approach to understanding the world. What is Bayesian reasoning?

It’s named after the Reverend Thomas Bayes. It is one of a family of what are called normative models; others include logic, probability theory, and game theory. Bayes’ rule, or Bayesian reasoning, is an optimal way to calibrate our beliefs in response to evidence. It assumes that people shouldn’t just hold a belief or reject it, but rather have a degree of credence in it, which can be captured as a probability between 0 and 1 that the belief is true. And it offers a way to adjust that credence upward or downward, depending on the strength of the evidence and on how probable the idea was to begin with: the prior probability.

Another ingredient is the likelihood: if the hypothesis is true, how likely are you to obtain the evidence that you are now seeing? In the case of, say, a medical test, that is the sensitivity of the test: if a person has the disease, what are the chances the test will show it? And a third is the commonness of the data: how often does that evidence occur across the board, whether the idea is true or false? So Bayes’ rule simply combines those three numbers, multiplying the prior by the likelihood and dividing by the commonness of the data, to deliver a credence estimate that reflects the balance between how likely the hypothesis was in the first place and what the latest evidence says about it.
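Pinker’s three ingredients can be combined in a short sketch of the medical-test case. The numbers below are illustrative assumptions, not figures from the book:

```python
# Bayes' rule: posterior = prior * likelihood / P(evidence)
# Illustrative numbers for a disease-screening test.

prior = 0.01           # base rate: 1% of the population has the disease
sensitivity = 0.90     # likelihood: P(positive test | disease)
false_positive = 0.09  # P(positive test | no disease)

# "Commonness of the data": how often a positive test occurs overall.
p_positive = sensitivity * prior + false_positive * (1 - prior)

posterior = sensitivity * prior / p_positive
print(f"P(disease | positive test) = {posterior:.3f}")
```

With these numbers, a positive result from a 90-percent-sensitive test yields a posterior of only about 9 percent, because the disease is rare to begin with. Neglecting that base rate is exactly the fallacy the conversation turns to next.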

Neuroscientists have made the argument that we actually have a Bayesian brain, that the brain implements Bayes’ rule in some sense, where your perception is guided by these calculations. Is there a way to reconcile that fact—that our brain is unconsciously following Bayes’ rule—with the more deliberate attempt to be more Bayesian, or rational, ourselves?

You’re right to note that there is a tension within cognitive science. The researchers who study judgment and decision making tend to emphasize our departures from Bayesian reasoning, in particular our base-rate neglect: of the three terms that I mentioned, the prior, the likelihood of the data, and the commonness of the data, the claim is that people tend to neglect the prior.

For example, in diagnosing a disease, they will focus on the extent to which the disease produces the symptoms you’re observing, or the result of the test you’ve administered, and forget about how rare or common the disease is in the population. Now that is indeed, as you point out, in tension with the idea, coming from a somewhat different community of researchers, that the basic computational principles of the brain are Bayesian, and that this is what allows us to be smart.

So one way to reconcile it is to say that a lot of the intuitive, unconscious, or experience-driven forms of cognition indeed follow Bayes’ rule, because that is the optimal way to adjust beliefs, but that what we don’t have is a general rule extracted from those different processes that we can apply across the board to novel problems when they’re presented to us in verbal form. So, we’re Bayesian when we get a general sense of what we’re likely to encounter through the day, when we develop concepts, when we learn words. But if I give you a medical-diagnosis problem in so many words, you may not have Bayes’ rule extracted as a general-purpose formula where you can plug in values for the data term, the hypothesis term, and crank out an answer.

That sounds like hard work.

Yes. The conspicuous failures of human cognition often occur in this “formal” rationality, in solving logic puzzles and probability problems, even in cases where people, in living their lives, tacitly obey the same principles they flout in researchers’ lab experiments. We’re good at detecting violations of logic when they are involved in enforcing, say, a social rule or a contract. Like, if you want to send a letter first-class, you’ve got to put on 50 cents of postage. If you imagine yourself as a postal worker, you’re pretty good at weeding out the letters that violate that logical rule. That’s “ecological” rationality, the ability to deal with recurring problems in one’s life, maybe even recurring problems for the species. But if you extract the same logical rule and now apply it to an arbitrary set of cards (if there’s a P on one side, there has to be a 3 on the other), then you see people make errors.
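The card puzzle Pinker alludes to is the classic Wason selection task. The falsification logic, which most people get wrong in the abstract version, can be sketched in a few lines (the card labels come from his example; the code itself is illustrative):

```python
# Wason selection task: the rule is "if a card has a P on one side,
# it has a 3 on the other." Letters appear on one side, numbers on the other.
# Only cards that could falsify the rule need to be turned over.

RULE = ("P", "3")  # (antecedent letter, consequent number)

def must_turn(visible: str, rule=RULE) -> bool:
    antecedent, consequent = rule
    if visible.isalpha():
        # A letter card can falsify the rule only if it shows the antecedent.
        return visible == antecedent
    # A number card can falsify the rule only if it is NOT the consequent:
    # its hidden side might bear the antecedent letter.
    return visible != consequent

cards = ["P", "Q", "3", "7"]
print([c for c in cards if must_turn(c)])  # ['P', '7']
```

The common error is turning over the “3” card, which cannot falsify the rule, while ignoring the “7” card, which can.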

Is there a difference between being rational and being intelligent?

Yes. If intelligence is a measure of raw brainpower, measured by tasks like repeating numbers backward, then it isn’t necessarily deployed toward rational goals. Empirical studies have tried to assess rationality independently of intelligence by assembling batteries of some of the common fallacies documented in cognitive psychology and behavioral economics (things like the gambler’s fallacy, confirmation bias, and the sunk-cost fallacy) and comparing individuals. You find there’s a correlation between traditional measures of intelligence and measures of rationality per se, but it is way less than 1. So there are lots of smart people who aren’t so rational, and vice versa.

Help us understand the draw of conspiracy theories—QAnon, for example. In Rationality, you mention signal detection, among other things. What’s the allure?

Signal detection theory—how to choose a cutoff for deciding how to react to uncertain information given the relative costs of misses and false alarms—offers a clue on why people can be paranoid about conspiracies. The most lethal aggression in tribal societies came from ambushes and surprise raids; that is, conspiracies, not pitched battles. Since presumably the cost of failing to detect a real conspiracy is greater than the cost of reacting to a false alarm, it’s conceivable that we’re primed to detect conspiracies on flimsy evidence.
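The cutoff logic of signal detection theory can be made concrete. The costs and probabilities below are illustrative assumptions, not figures Pinker gives:

```python
# Signal detection sketch: react to a possible threat when the expected
# cost of missing a real one exceeds the expected cost of a false alarm.
# Illustrative assumption: a missed conspiracy is 100x costlier than
# overreacting to an imagined one.

def should_react(p_threat: float,
                 cost_miss: float = 100.0,
                 cost_false_alarm: float = 1.0) -> bool:
    expected_cost_of_ignoring = p_threat * cost_miss
    expected_cost_of_reacting = (1 - p_threat) * cost_false_alarm
    return expected_cost_of_ignoring > expected_cost_of_reacting

print(should_react(0.02))   # True: even flimsy evidence triggers a reaction
print(should_react(0.001))  # False: below the rational cutoff
```

When misses are far costlier than false alarms, the optimal cutoff drops so low that reacting to weak evidence is, in this narrow sense, rational, which is the clue Pinker draws on.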

I don’t think that can explain fanciful conspiracies like QAnon, which have the hallmarks of entertainment and group-bonding rather than detection of danger. What the more wacko conspiracies have in common is that they cast an enemy in the role of depraved villain. Belief in them is not an assessment of the world that happens to be mistaken but an expression of moral condemnation of an enemy. “Hillary Clinton ran a child sex ring out of a pizzeria” is another way of saying, “Clinton is so depraved that that is the kind of thing she is capable of doing,” or even just, “Boo, Hillary!”

Many of us are incredulous and outraged that people can turn a moralistic sentiment into a bogus factual claim. But that’s because we have the historically unusual conviction that factual beliefs should be held only if one has good grounds for them.

For most of our existence, before there were historical archives and science and investigative journalism and reliable datasets, there was no way to verify claims about the hidden processes or the machinations of the powerful, so beliefs were held for other reasons: entertainment, enchantment, uplift, solidarity, mobilization. Even today many people protect religious faith and nationalist founding myths from factual scrutiny.

Crazy conspiracy theories are basically moralistic fables. Of course, sometimes they can cross the line into real, actionable beliefs, like the guy who burst into the Comet Ping-Pong Pizzeria with his guns blazing in a heroic attempt to rescue the children. But as the cognitive psychologist Hugo Mercier pointed out, most “believers” in Pizzagate merely spread the story around, or left a one-star review on Google, not what you’d expect from someone who literally believed that children were being raped in the basement.

What do you make of the mismatch hypothesis, the idea that much of our irrationality stems from living in a modern environment radically different in many ways from the conditions in which we evolved over eons?

The mismatch there is not so much between, say, the Pleistocene savannah and a 21st-century high-tech city. It’s rather between any domain that you just have to handle as an experiential human being and the formal school environment in which you have available to you formulas and algorithms and abstract rules. I think that is the crucial cut. In Rationality, I emphasized the rationality of the San, formerly called the Bushmen, the hunter-gatherers of the Kalahari, to show that it is not that they’re devoid of logic or probabilistic thinking. It’s just that they apply it in particular domains. You go to a modern Western school and you learn formulas that can apply to anything: card games, climate, disease, politics. And that’s what we did not evolve with, abstract formulas. That’s a gift of modern schooling.

You talk about how the San have a scientific mindset. But is that due to the rational powers of individual members of the tribe or to the evolution of cultural practices over time that prove adaptive? In his book The Secret of Our Success, your Harvard colleague Joseph Henrich writes, “complex adaptations can emerge precisely because natural selection has favored individuals who often place their faith in cultural inheritance—in accumulated wisdom implicit in the practices and beliefs derived from their forebears—over their own intuitions and personal experiences.” How might you resolve that apparent conflict between faith and reason?

I think by “faith” in that passage it just means accepting practices that have been ratified by your culture. It doesn’t mean believing them without a reason because if you’re in a successful society, the practices of that society are ways in which you can survive and prosper. And within every society, you do have people who question the current practices, including the San. Skepticism, questioning of practices, is something that you find in probably all cultures.

And here’s where I differ from my colleague, Joe Henrich. I think it’s a false dichotomy to pit human cognition against cultural practices. It’s not a coincidence that we’re the species that has developed complex cultural practices for technologies and for cooperation and large-scale social organization. Because we’re smart enough to formulate them, and we’re smart enough to learn and apply them. And of course the fact that we’re smart enough doesn’t mean that we can operate on nothing, that any single person can recreate all of their cultures. We don’t figure out everything from scratch, we rely on cultural knowledge and cultural habits, but we didn’t get culture from Martians, we didn’t get it from God’s commandments, we got it from the feats and accomplishments of human beings. And when we learn them, we don’t just execute some set of steps robotically. We internalize the principles and the goals, and we think them through when we apply them.

Henrich goes over examples where you have some process—of how to detoxify some food ingredient, say—that’s been passed down over generations and no one understands why each step is necessary. It’s unquestioned. And the result is good, because that practice has been selected over time. And other practices, like divination as a technique for hunting, implement a game theoretic or randomized process, where you can say why that’s rational. But these people are not being rational—they are just following this procedure that has evolved in their culture. Isn’t that in tension with what you’re saying?

Well, it wasn’t as if these practices arose like monkeys at a typewriter just trying things out at random. They were intuitions of what kinds of things are likely to work, which experiments are worth trying that radically narrow the space of possibilities, to prune the decision tree of options. And it explains why it’s our species that developed this complex culture and not just any old species. It doesn’t mean that they understand the science behind it. That had to wait for the invention of modern science. But there’s the ability to vary the techniques according to the circumstances, not to apply them robotically or mechanically, the ability to refine them when there are seasonal changes in the ecology. All of these depend on some underlying intuitions as to how they work at some level.

Recently on Jordan Peterson’s podcast, he suggested to you that we need something like religion to inspire people to care about society, particularly the liberal project of gradually, incrementally making progress in human well-being. Do you think he has any point there?

Well, certainly not for me. I have a little bit of fun in Enlightenment Now in saying, “Oh, reducing death from disease and famine and war, giving people longer, happier lives? Boring!” It’s a satire of the mindset that says we have to believe in gods and heroes and myths in order to improve humanity.

Now, as a question of tactics—influencing hearts and minds, propaganda, building a mass movement—do we need some kind of mythological misunderstanding of reality in order to get people’s hearts pumping? A somewhat cheeky way of putting it: Do we need to have young men in colored shirts saluting giant posters of John Stuart Mill? I tend not to think so. I guess I tend to agree with the presupposition of your question that making people better off ought to be inspiring in itself. If we could put an end to extreme poverty, to diseases like malaria, to war—which I don’t think is an unrealistic aspiration—to dealing with climate change, to me, that’s thrilling.

Is that going to be thrilling enough for other people?

Well, there have been some leaders who have at least momentarily built that kind of enthusiasm for what you might call prosaic, secular improvements in the human condition. Franklin Roosevelt, briefly. John F. Kennedy did get people to volunteer for the Peace Corps; he obviously has a mixed legacy. Barack Obama in his early years. It can be done. It’s hard to sustain because there’s so much cynicism and fatalism, and with the practical complications of implementing improvement in a difficult, messy world, that idealism tends to get cut down to size very quickly. Part of that is itself a problem of the cynicism and fatalism of our journalistic and intellectual elites, who don’t acknowledge improvement when it happens, and who themselves are not sufficiently mobilized by the prospect of incrementally improving our condition.

We tend to get distracted by utopian visions or by a cynical fatalism that says, “Let’s just burn the whole thing down because nothing can improve it.” So part of it has to be a mixture of commitment of people like you and me, the chattering classes, to the goal of improving human well-being together with the right mixture of leaders who sign onto that vision and have the right charisma, inspiration, rhetoric to get other people excited by it.

One irrational tendency you highlight is not putting enough thought toward our own future well-being. As an example, you point out that 50 percent of Americans nearing retirement age have saved nothing for retirement. But couldn’t that behavior be explained by people having to live paycheck to paycheck? If households aren’t saving, I’d assume it’s because after basic expenses, there’s little left to save.

Well, for those, yes. But that’s not all of them. And it’s a pretty widespread consensus among economists that people save too little. Now, of course, it doesn’t apply to people who have to consume their paycheck every month. We’re not talking about those people. We’re talking about the people who don’t save, and that’s one of the reasons why we empower our employers to withdraw something from our paycheck every month. We agree to that proactively because we know that we probably won’t put away enough.

I noticed that in a recent New York Times review of your book, that bit was picked out, and I think the reviewer said that the median household income in the data set that you rely on was $26,000. That seems pretty low for people to save money if they’re trying to, I don’t know, live comfortably.

Not too much should be read into that 50-percent figure. It was just a parenthetical in a discussion of how, in general, people who can afford to save often don’t save as much, partly because they don’t anticipate how much their savings will compound. The whole point of the discussion was that people fail to intuitively appreciate exponential growth, and therefore don’t value their future selves as much as they should.
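The compounding that linear intuition misses is easy to demonstrate. The contribution, rate, and horizon below are illustrative assumptions, not figures from the book:

```python
# People intuit linear growth, but savings compound exponentially.
# Illustrative assumptions: $5,000 saved per year at a 7% annual return.

contribution = 5_000
rate = 0.07
years = 40

balance = 0.0
for _ in range(years):
    balance = balance * (1 + rate) + contribution

total_contributed = contribution * years
print(f"contributed: ${total_contributed:,.0f}, final balance: ${balance:,.0f}")
```

Under these assumptions the final balance comes to roughly five times the $200,000 actually contributed, which is the kind of disparity a linear extrapolation never predicts.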

This was an extremely hostile reviewer from the very first sentence, who glommed onto every example, missing the point of the discussion. It’s pedantic, but, yes, the people who have nothing to save: It’s not a cognitive bias that they won’t save. But what about everyone else? Unless you want to say everyone saves the optimal amount, which is patently false, it’s irrelevant that there are some people, of course, who don’t have anything to save. That was just part of generally an adversarial approach to the whole book, where the reviewer did not appreciate when an example was just an example, not the main point of the discussion.

You write, in a section on critical thinking, that “intellectual cliques often revolve around a guru whose pronouncements become secular gospel.” It can seem like a lot of your critics maintain that you yourself have that standing among people who, for example, champion capitalism as an engine of progress and see no issues with the existence of philanthropic billionaires. How do you handle that sort of charge?

I don’t think anyone has treated me as an intellectual guru in the sense of saying, “Well, as Pinker has written on a certain page, therefore, that tells us what we ought to believe.” Which I have seen happen in academia, when there have been cults around gurus, like Foucault, Marx, Chomsky, Freud. I don’t see any evidence that that happens with me. Life would probably be a lot easier if it did, but I don’t think that’s been a problem. Unfortunately, I haven’t tried to cultivate that.

In terms of all the various caricatures and distortions that you just listed, I’d have to go through them one by one. But each one of them is something that I’ve explicitly disavowed. So in terms of capitalism, if the alternative is communism, then yes, I think there are actually excellent arguments why a market society is better than a top-down authoritarian, totalitarian dictatorship. I am guilty of that. Would I rather live in South Korea or North Korea? I would argue that South Korea really is more conducive to human flourishing. Not because I said so. I can point to reasons. Namely, lots of people would like to escape from North Korea to South Korea, not so much the other way around. Likewise comparing West Germany and East Germany.

I also note that when people talk about capitalism, they often talk about something that doesn’t exist: a kind of fantasy anarchical capitalism with no regulation or social safety net. Now, granted, there are people on the right who might advocate for that, Rand Paul-style right-wing libertarians. But in practice, that’s not what capitalism is anywhere, including the United States. When we talk about capitalism, we’re talking about a system with a lot of social redistribution and a lot of regulation of the environment, safety, and workplace protections. That is capitalism. Compared to the alternatives, it is better. Again, not because I said so, but because that’s what the evidence points to, and I’m open to evidence to the contrary.

When it comes to some billionaire philanthropists—again, what is the alternative? Should we have a system that confiscates all the wealth of entrepreneurs? I’m not sure that that would be a better system, although I’d be open to arguments. Should someone like Bill Gates, who’s credited with saving a hundred million lives, have instead endowed some museums and opera houses? I think he deserves some credit for deploying the money to save a hundred million lives instead, or to try and solve the very, very tough problems of climate change by investing in vulnerable startups that would have no chance otherwise, but might give us ways out of the climate crisis.

But should we be relying on philanthropists to fund problem-solving endeavors?

I don’t think we can depend on philanthropists, but on the other hand, I don’t think that governments should have a monopoly on improving the world, because governments are hamstrung by numerous constraints—interest groups, democracy, and populist movements. If it was only governments that we had, I think we’d be worse off than if we had a mixture of governments and private actors. Again, that’s not a guru-like pronouncement, it’s an argument.

What do you make of the possibility of us causing a sixth mass extinction? How does that figure in your picture of our long-term progress?

Clearly we should be preventing as many extinctions as we can. A number of biologists object to the idea that what we’re going through now, the so-called Anthropocene, is comparable by geological standards to the five great extinctions in terms of which species are affected and how many. And we clearly can’t save every species. There are trade-offs. As valuable as a species is, it would be implausible to treat every last one of them as sacrosanct. But clearly we should be protecting species-rich ecosystems to the extent that we can. And this is another example of perhaps too steep a discounting of the future, where short-term benefits come at the expense of long-term well-being, such as that of descendants who will enjoy the biodiversity of an ecologically rich planet.

Do you think capitalism is to blame for that—for short-term benefits being done at the expense of long-term well-being?

Pointing the finger at capitalism misdirects the criticism, because it’s not as if communist regimes were at the forefront of preserving species diversity; the main alternative to capitalism among developed societies was an ecological disaster. There has been a price that we’ve paid, probably too steep, and we probably should have paid more attention early on to gaining the advantages of economic development with fewer environmental costs. But it is a product of economic development, not of capitalism per se. It is a trade-off we face as societies develop, as they capture energy to improve human well-being: decimating extreme poverty, multiplying human lifespans, reducing child mortality, and enriching experience in many ways.

Some people think it’s irrational or irresponsible to be having kids in our era of climate change, since they contribute to population growth and climate change and perhaps would suffer in life on a planet broken by global warming. What do you think?

I would not say it’s irrational. This isn’t to say that harmful effects of climate change won’t happen; they surely will. But it’s a misguided calculus for anyone in the West. Your children will do just fine. They will live in a world whose climate impacts are felt more in low-lying parts of the global south, in Bangladesh and island nations. They’ll suffer compared to a world in which we had managed to avert global warming: there will be more sea walls, more retreat from coastal areas, more population migrations in other parts of the world. But it’s possible for climate change to do harm without that harm being so catastrophic that your children would be better off never having been born. And if your children are responsible citizens, if they vote for leaders who implement policies that mitigate climate change as we ought to do today, if they are smart enough to contribute to technological innovations that make disastrous climate change less likely, they can play a role in reducing it, and you’d be better off having them than not.

Can it be rational to be ignorant?

Well, a concrete example is the driver of the Brink’s truck that has a sign plastered on the door saying, “Driver does not know combination to safe.” That works in his interest because he can’t be extorted into opening the safe if he doesn’t know the combination. If a carjacker jumps in and says, “Open the safe or I’ll blow your brains out,” and the driver says, “Well, I don’t know the combination,” the carjacker could think that he’s lying. But he’s still better off not knowing the combination and for it to be known that he doesn’t know it. There are cases in which there can be an advantage to being ignorant or even irrational, to be the more stubborn, the more hotheaded member of a pair of negotiators to a degree, because of the peculiar game-theory logic of bargaining and negotiations. If you’re too much of a loose cannon, others can either just take you out or decide never to enter negotiations with you in the first place. It’s a calculated strategy of being irrational that may not pay off in the long run, but there can be those advantages.

What makes you think the long arc of knowledge bends toward rationality?

When was the last time you read about a human sacrifice, witch-burning, or exorcism? How many people believe in the divine right of kings, or omens in eclipses, comets, and rainbows? A century ago, our greatest writers extolled the beauty and spirituality of war, and the decadence and selfishness and effeminacy of peace. Heroes such as Theodore Roosevelt, Winston Churchill, and Woodrow Wilson avowed racist beliefs that today would make people’s flesh crawl. Women were barred from juries in rape trials because supposedly they would be embarrassed by the testimony. Homosexuality was a felony crime. At various times contraception, anesthesia, vaccination, life insurance, and blood transfusion were considered immoral. I could go on.

Brian Gallagher is an associate editor at Nautilus. Follow him on Twitter @bsgallagher.

Lead image: Josie Elias / Shutterstock
