What follows is a transcript and video of Julia’s talk from SK5. Super special thanks to Karou Negisa for typing them up for us!

Presenter: You may know our next speaker from her podcast, “Rationally Speaking” or her blog, “Measure of Doubt”, Julia Galef.

[audience applause]
Julia Galef: Hey, everyone. Can you hear me? [Inaudible] Can you hear me now?
Audience: Yes.
Galef: Great. So, my story begins…my story *will* begin…While we’re setting up, I’ll just tell a little more about myself. I am the host of the podcast “Rationally Speaking,” which is sponsored by the New York City Skeptics, but I’m also more recently the president of a new organization called the Center for Applied Rationality, so I’m no longer based in New York; I’m living in Berkeley, California, running this new organization more than full time. And so I fly back to New York every month and a half to record a bunch of podcast episodes and pass them out.
Alright, I think my story is about to begin. It is! My story begins in Burma on a hot day in the late 1920s where a young British officer named Eric Blair was stationed. Eric received some bad news. A work elephant had broken free of its chains and was on a rampage in the town and he was needed to track it down and if necessary subdue it.
So Eric got his gun, went off in search of the beast, and as he tracked it through the town he amassed a crowd of interested onlookers who were all following him, eagerly awaiting the confrontation. When he finally found the elephant, it was grazing in a field. And when he saw it he realized two things: first, that the elephant was now calm and posed no danger to anyone, and that he had neither the justification nor the desire to kill it. And second, that the crowd of now roughly 2,000 people behind him, waiting and watching, expected him to shoot the elephant, and if he didn’t he would look foolish and weak.
Eric shot the elephant in the head. It didn’t die, so he shot it again. And again and again, until he finally gave up and left it there to die a slow and painful death. Eric Blair later adopted the pen name George Orwell and he wrote about his experience in an essay entitled “Shooting an Elephant,” and this is the last line of the essay: “I often wondered whether any of the others had any inkling that I had done it solely to avoid looking a fool.” [Note: Galef’s spoken words are different than the ones projected on the screen]
So, what I think is striking about this story, what I want to highlight, is how clearly we can see these two separate decision-making systems, both wrestling for control of George Orwell’s decision.

The first had been programmed to avoid looking weak in front of your fellow primates; the second was able to reason that shooting the elephant would be unnecessary and cruel.
So the first – the system that is afraid of looking foolish – is one of many examples of a decision-making rule or impulse that evolved because it was evolutionarily beneficial to our genes. It helped our genes spread. But now, in a modern context, it gets triggered in ways in which it no longer applies, like being in front of a crowd of people who you don’t know, don’t need to impress, and are never going to see again.
So in this case, George Orwell, rather than reflecting on his values, deferred to this instinctual, ancient drive which evolved for a separate purpose – that of genetic proliferation – and in a separate time and place. And when we make decisions for our future and the future of our world, it’s crucial we don’t make the same mistake.
You’re probably familiar with some of the ways our brains go awry when they try to make decisions in the modern world – like, for example, the fact that we’re constantly craving fat and sugar, a drive that was definitely helpful to our ancestors back on the savannah, when they needed the motivation to get enough calories to survive, but which today, in our era of modern abundance, produces things like this.
[yummy sound, audience laughs]
So the thing is, this happens also with the way that we reason and evaluate claims. So when someone makes a claim to you, anything – “the moon is made of green cheese,” “marijuana causes cancer,” whatever – there are a lot of questions your brain could ask that would help you ascertain how much trust to put in the claim, how seriously to take it. Like, “Where did this person who’s making the claim get their information?” or “Can I think of any evidence that contradicts this claim?” or “What would the world look like if this claim were true compared to if it were false?”
But those are not the questions that the human brain automatically asks unprompted. Our brains instead prefer to ask hard hitting questions like, “How symmetric are the facial features of the person telling me this?”, which I wish was an exaggeration, but it is in fact the case that you will be given the benefit of the doubt more by jurors if you’re on trial, by the electorate if you’re campaigning, and by parents and teachers if you’re a child if you are facially attractive.
What else does our intuitive epistemology find persuasive? Well, personal, vivid anecdotes. So the claim that vaccination is responsible for the rise in the diagnoses of autism has been thoroughly disproven beyond a shred of reasonable doubt by study after large epidemiological study, and yet there are still tens of thousands of parents like celebrity Jenny McCarthy for whom that vast body of scientific research cannot hold a candle in terms of its evidentiary weight to a single emotionally fraught example of a child who developed autism shortly after being vaccinated. Which is why McCarthy likes to dismiss the scientific consensus by saying, “My son is my science.”
Another type of question that the human brain seems to unjustifiably put a lot of weight on is, “How do I feel emotionally about this issue that I’m considering?” So, for example, natural foods: people feel warm and positive about them, but that feeling is only very loosely connected to the questions that we actually care about, like, “How healthy is this compared to a conventional food?” or “How good is it for the environment as compared to a conventional food?”
And conversely, we have negative associations with nuclear power. It conjures up creepy things like mushroom clouds and Blinky the Fish. And that’s totally understandable, but again those emotional reactions are only roughly correlated with the really important question, which is, “How many lives does nuclear power save or cost compared to other sources of energy?”
So using intuitive emotional connections as a cue to – or as a proxy for – what the consequences of a decision are going to be was probably a pretty good heuristic back on the savannah, when the issues we were considering were, “Should I run away from the rapidly enlarging spot on the horizon?” or “Should I try to mate with that cavelady who makes me feel happy?” They’re just much less reliable proxies for the kinds of issues that really matter today. And relying on emotional cues as a heuristic for what to do is especially dangerous when the questions you’re considering involve large-scale loss of human life.
So, this is a scene from one of my favorite movies, The Third Man. On the right there is a character named Harry Lime, who’s played by Orson Welles, and he has been profiting in post-war Vienna by selling counterfeit medicine, and as a result hundreds of children have died from meningitis because they haven’t gotten the medicine they needed. So in this scene he’s up in a Ferris wheel with his old friend, Holly, and Holly is taking him to task for his actions.
[button clicks] Oh, that doesn’t work. [button clicks] [button clicks] Oh, that didn’t work, either. [button clicks] Alright, I will tell you what he says.
So Holly asks him, “Did you ever see one of your victims?” And Harry Lime sort of makes a shrugging expression and he opens the door to the Ferris wheel and gestures to Holly to look down, they’re high up now, high up in the sky, and he gestures to look down at the ground and he says, “Look down there at those dots moving around on the ground way below us. Would you really feel very much pity if one of those dots stopped moving forever? If somebody offered you $100,000 for every dot that stopped moving, would you really tell me, old man, to keep my money, or would you calculate how many dots you could afford to spare?”
And hopefully none of us are poised to become the next Harry Lime, but we do share one thing with him, which is our ability to feel completely emotionless about large-scale loss of human life if that information is presented to us in an abstract way, like dots on the ground or like numbers in a newspaper.
So Paul Slovic, who is a cognitive psychologist who has done a lot of great work on rationality and cognitive biases – he’s also on the board of advisors of my organization – some of his most striking and disturbing work is on this issue of scope insensitivity when information is presented abstractly. And he’s demonstrated repeatedly that people are more moved and more willing to donate to save a single child than to save a large group of children. And this phenomenon, way before Orson Welles illustrated it and before Paul Slovic proved it, was observed pithily by a writer you might have heard of called [difficult to understand Russian name that this transcriber can’t hear very well, but is not Stalin, sorry] who said, “One death, that’s a catastrophe. 100,000 dead, that’s a statistic.”
So a corollary to this phenomenon is that our brains can barely distinguish, emotionally, between vastly different orders of magnitude of human life. I couldn’t possibly say this better than Eliezer Yudkowsky, so I’m just going to quote him here: “The human brain cannot release enough neurotransmitters to feel emotion a thousand times as strong as the grief of one funeral. A prospective risk going from 10 million deaths to 100 million deaths does not multiply by ten the strength of our determination to stop it, it adds one more zero on paper for your eyes to glaze over.”
And the decisions we are increasingly faced with today are not the decisions our brains are optimized to make – not the decisions our brains evolved to make. How should we deal with risks like a warming climate or nuclear proliferation or the risk of a new pandemic emerging? What are the prospective consequences if we pursue technologies like nanotechnology or genetic engineering or cloning? What are the expected consequences if we don’t, and what are the corresponding probabilities on all of those possibilities?
These are decisions we’ve never had to make before – decisions on this scale of complexity and abstraction, and over such long time horizons. And we’ve never had to make decisions that have stakes this high: we can’t afford to get these wrong. So, that’s why I and several of my friends founded the Center for Applied Rationality.
[button clicks] Oops. Founded the Center [button clicks] for Applied Rationality [button clicks] Ok, there we go.
We’re an evidence-based non-profit founded to give people more understanding and control over their own decisions. The techniques that we teach are derived from probability theory and decision theory which provide us with models for the ideal way to reason based on evidence and to make decisions based on your goals. Then we combine them with research from cognitive science about how the brain actually reasons and makes decisions and where we tend to go awry. And then we turn these techniques into everyday skills which people can learn in workshops like this, skills like how to make more accurate predictions, how to weigh risks, how to avoid deceiving yourself and how to have productive arguments. So I’m going to share with you today a few principles that we at CFAR believe are crucial to actually making people more rational.
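[To make the earlier question – “What would the world look like if this claim were true compared to if it were false?” – concrete, here is a minimal sketch of the Bayesian model of reasoning from evidence that probability theory provides. The function name and numbers below are illustrative assumptions added for this transcript, not anything from the talk or from CFAR’s curriculum.]

```python
# Minimal sketch of Bayes' rule as a model for weighing evidence:
# how much should belief in a claim shift after seeing evidence that is
# more likely if the claim is true than if it is false?

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(claim | evidence) given a prior and the two likelihoods."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical example: a claim we initially think is unlikely (prior = 0.1),
# plus evidence we'd expect 80% of the time if the claim were true but only
# 20% of the time if it were false.
posterior = bayes_update(prior=0.1, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
print(f"Probability of the claim after the evidence: {posterior:.2f}")  # roughly 0.31
```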
Principle #1: People don’t like being told they’re not rational. [audience and Galef laugh] I know, we were surprised, too. Who knew?
This is the first panel of a Calvin and Hobbes strip, Calvin’s selling “a swift kick in the butt for the low price of $1.” This is the rest of the strip. So just replace “a swift kick in the butt” with “rationality” and you’ll have some sense of the challenge we’re up against here with our project.
But that brings me to principle #2 which is that if you want people to become more rational, you have to teach them how to use rationality on things that they care about. Their relationships, their health, their career choices, their finances, their day-to-day happiness. If people are going to go through the work, and it is work, of learning and practicing rationality, they’re going to do it because they expect it to pay off in terms of things that they value.
And it does pay off. It really pays to notice your own cognitive biases. Just to take one example, a student of rationality was offered a job in Silicon Valley that would pay him $70,000 per year more than he was currently earning. He was reluctant to take it because it would mean moving away from his hometown where his friends and his family lived. So he tried a simple trick that we call a reframing. He asked himself, “If I already had this job in Silicon Valley, how would I feel about taking a $70,000 per year pay cut in order to live closer to my friends and family?” And when framed like that, the answer was a much clearer, “Oh, no, I wouldn’t be willing to do that.” Which was really instructive because it suggested that his initial reluctance to take the job was based, much more than he realized, on what is called the “status quo bias,” in which we feel an attachment to whatever happens to be the case, regardless of what our actual preferences are.
It also pays to notice the unexamined beliefs that you have kicking around in your head – beliefs you might actually have been acting on and making important life decisions on, despite never having consciously asked yourself, “Do I believe this to be the case?” So, I teach a class called “Epistemic Spring Cleaning” in which we go through some of the common sources of unexamined beliefs, like our parents, our community, our culture, the fiction we grew up with, and we bring them out into the light and we ask, “Do we have good reason to believe these?”
So, the board after every one of these classes is just filled with cached beliefs that people have in many cases been acting on without ever thinking about, like “Intellectual pursuits are more virtuous,” “men don’t cry,” “you shouldn’t try to change people’s minds.”
And this is not to say that every belief you uncover during epistemic spring cleaning is one you should necessarily reject, just that any belief that is actively shaping your life decisions is one you should be consciously thinking about. So that’s another big part of rationality.
And the last thing I’ll say about why it’s important to teach people rationality on real-life domains is that it addresses one of the biggest challenges to becoming more rational, which is called “domain transfer.” That term refers to the fact that you can abstractly learn and completely understand a principle of rationality in one domain, like the classroom, and then completely fail to see when and how to apply it in real life.
So, I can personally testify to this. When I was an undergraduate I was considering going into academia and I asked a bunch of professors who I knew, “How do you like your jobs? Do you recommend it as a career for a young person like me?” And they were generally very positive [audience laughs], which I took to be a really encouraging sign. And it occurred to me years later that I had only been surveying professors who liked academia enough to stick around and all of the people who knew up front that academia was going to be awful or, you know, who didn’t like it and left, were not in the sample of people that I was actually talking to.
This is a textbook example of a basic statistical fallacy called selection bias in which the sample that you’re looking at is not representative of the population that you’re trying to learn about. And I was a statistics major, so this gives you some sense of the challenge of overcoming domain transfer, which is why we teach on the actual problems that people want to make decisions about—want to make good decisions about.
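[As a hedged illustration of the selection bias described above – the numbers and code are hypothetical additions to this transcript, not from the talk – the sketch below simulates surveying only the professors who stayed in academia and shows how that sample overstates how satisfying the career is for everyone who entered it.]

```python
# Illustrative sketch of selection bias: people enter academia with varying
# satisfaction, the unsatisfied ones tend to leave, and surveying only those
# who stayed gives a rosier picture than the population you care about.
import random

random.seed(0)
entrants = [random.uniform(0, 10) for _ in range(10_000)]  # satisfaction of everyone who entered
stayers = [s for s in entrants if s > 4]                   # the least satisfied have already left

print(f"Average satisfaction, everyone who entered: {sum(entrants) / len(entrants):.1f}")
print(f"Average satisfaction, those still around:   {sum(stayers) / len(stayers):.1f}")
# The second number is noticeably higher: the surveyed sample is not
# representative of the population the question was actually about.
```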
So, principle #3 of teaching rationality: Community is Key. And this isn’t just because it’s more fun to learn rationality around people who are your friends or who you’re forming bonds with, although it is. There’s a more fundamental reason why this is important. The most useful, the most important skill in learning rationality is not learning to overcome selection bias or status quo bias or any of the other biases. It’s a more meta skill and it’s the skill of actually wanting to figure out the truth more than you want to win a particular argument or more than you want to prove yourself right. And this is a skill that some few people seem to have been born with or seem to have been brought up with, but what about the rest of us?
Well, we’re all primates, as we know, and primates are influenced by the primates around them. And so what we at CFAR are doing, in the background of all of the specific classes that we run, teaching people about specific skills and biases, is that we’re creating this culture where people are rewarded and applauded for being willing to change their mind when they encounter good arguments or new evidence. This has been one of the most exciting things for me, to see this culture develop. I can’t tell you how gratifying it is to be around people who say things to each other like, “Oh, you’re right, that’s a good point, I’m changing my mind.” Or people who treat disagreements as opportunities to work together to figure out the truth, rather than as battles where the goal is to win.
One of my favorite embodiments of this principle comes from Richard Dawkins; it’s a story about his time in the Zoology Department at Oxford. This is the Golgi apparatus, which is a structure in the cell that distributes macromolecules around the cell. And when Dawkins was at Oxford, there was an elderly professor in the department who was famous for his claim that the Golgi apparatus was illusory, that it was an artifact of observation, that it didn’t actually exist.
So one day a visiting professor from the States came to give a talk at Oxford in which he presented new and compelling evidence that the Golgi apparatus was, in fact, real. So, as you can imagine, throughout the whole talk everyone is glancing over at the elderly professor like, “How’s he taking this? What’s he going to say?” And at the end of the talk, the elderly professor marches up to the front of the lecture hall and he extends his hand and he says, “My dear fellow, I wish to thank you. I have been wrong these 15 years.” And Dawkins describes how the lecture hall erupted in applause and he says, “To this day, the memory of the incident still brings a lump to my throat.”
And I’ll be honest, it brings a lump to my throat every single time I tell that story. That is the kind of person I want to be, that is the kind of person I want to inspire other people to be, [audience begins applauding] and that’s the kind of person I want making important decisions about the future of our world.
There’s one more thing about the project of improving human rationality that I find particularly inspiring. This is a scene from one of my other favorite movies; this is Rutger Hauer in Blade Runner, and he’s playing Roy, a replicant, which is essentially a sophisticated organic robot, created by humans to defend their colonies. So in this scene he’s reaching the end of his short, pre-programmed life, and the poignancy of his death scene comes from the contrast between the harsh truth that he recognizes and confronts – that he was just a machine, created by another species to serve their ends without any particular regard to his needs or desires – and the fact that he nevertheless feels as if his life has significance and meaning, and that, for lack of a better word, he has a soul. And to me the scene is really poignant because this is the situation in which we as human beings have found ourselves over the last 80 years.
This is the bitter pill that science has offered us in response to our questions about where we came from and what it all means. Turns out we’re survival machines created by our genes to make as many of them as possible. And to make matters creepier, our genes don’t particularly care about us or our well-being above and beyond our function as gene-copiers. They don’t particularly care if we’re healthy after we finish making copies of them. They don’t particularly care if we are happy. In fact, it’s probably better for the proliferation of our genes if we remain stuck on what’s called the “hedonic treadmill” in which we’re never satisfied with what we have and are always driven to get more and more but then quickly become dissatisfied with it once we get it. And our genes don’t care about strangers on the other side of the world or about the distant future of humanity.
But we care. We as autonomous individuals care. And that’s why it’s so important to have the ability to install new processes in our brains to replace some of the ones our genes have bestowed upon us, so we can fight for the things that we care about. And that’s why I see this process of becoming more rational as a crucial step in our quest for self-determination as a species.
There’s a cognitive psychologist named Keith Stanovich who’s also on our board of advisors, and he’s written a book expanding on this theme called The Robot’s Rebellion, which I highly recommend, and I’m going to close with a quote from Stanovich: “If you don’t want to be the captive of your genes, you had better be rational.”
We’re CFAR. Visit us online at appliedrationality.org and learn more about what we’re doing to further the quest of building a rational future. Thank you.


[audience applause]

