In a moment, we’re going to pretend that we live in a world in which state-sponsored torture (specifically by the U.S.) works.
To be clear, the available evidence suggests that we do not, in fact, live in such a world. Information extracted via torture is notoriously unreliable–it turns out that people will invent whatever wild fabrication it takes to get you to stop beating them or pretending to drown them or whatever awful thing we may or may not be doing to people these days, and aren’t really more likely to reveal the actual location of the rebel base than to just tell you it’s on Dantooine[1] and be done with it. But in the pretend world we’re constructing for the purposes of the point I promise I’ll get to, torture works great. Put the screws to somebody with sufficient ruthlessness and they will give you detailed information (between screams or sobs, presumably–torture is not, unfortunately, any more pleasant in this world) about their plans, org structure, diet, etc.
Another problem with torture in our world is the geopolitical ramifications: when other countries observe that, “huh, those guys sure seem to torture a lot of folks,” it makes them love us less and potentially even makes them more likely to torture our citizens. But let’s say that in this world, the U.S. has hyper-competent intelligence agencies which are incredibly discreet and a diplomatic corps which can perfectly smooth out any ruffled feathers.
Okay, so here we are in Tortureworld[2] and it’s finally time for the million-dollar question and the point of this essay’s introduction: do I, Doug Woos, support torture in this world? If we capture a terrorist and have reason to believe that he or she knows about a planned attack on Manhattan, do I think we should torture them?
The answer, as it happens, is no. I don’t think it’s worth sacrificing our principles, even at great cost to life and property. As somebody probably said, maybe on Star Trek: The Next Generation, if we win by compromising our values the victory is meaningless; we’ve fought for nothing.
But boy, am I sure glad we live here on planet Earth and not on Tortureworld. It’s way easier to argue against torture from a practical standpoint than from an ideological one. The former argument can cite studies and experts and stuff, while the latter relies on doofy phrases like “compromising our values.”
And here’s where I want to make my broader point, the actual reason I’m writing this essay and the idea that I’ve been playing with for a little while: isn’t it more intellectually honest to make the principled argument regardless of the world I happen to live in? Let’s say I’m not just trying to win an argument–I’m honestly trying to articulate my views to someone. Shouldn’t I present the ideological argument, and only the ideological argument, to explain my position? After all, that’s really why I think torture is bad news. The fact that it’s also impractical is just a fortuitous accident. Furthermore, shouldn’t I be a bit suspicious that the evidence turns out to align so neatly with my ideals? It seems likely that I will have a bias, perhaps an extreme one, toward taking data at face value when they provide additional arguments for something I already believe for non-data-driven reasons. This provides another reason for being honest with my interlocutors about the reason I oppose torture: so that they know that I’m unlikely to be objective when evaluating the data.
I think there are lots of examples of this phenomenon in policy discussions, on every side. When I argue for public and private efforts to increase diversity in science and technology fields, I’m likely to argue from a practical standpoint (diversity fosters innovation, under-served communities represent untapped markets, etc.) when in fact I think diversity in any community is a fundamental good in its own right. I think environmentalists (again, myself included) probably find anthropogenic climate change to be a convenient reason to argue against environmental disruption, when many of us have fuzzier notions about humanity’s obligation to preserve species and ecosystems. For an example from a camp I generally disagree with (because I get sick of only attacking myself[3]), I think libertarians do this pretty regularly: many, if not most, free-market libertarians believe that, for instance, taxation is just fundamentally wrong for all kinds of deep philosophical reasons–but rarely argue from that position, instead preferring to make economic arguments (which range, in my experience, from “argument based on insufficient sample sizes” to “argument based on Plato-esque just-so stories in which everyone happens to behave exactly like a libertarian would”).
Obviously some of this is inevitable–people are going to try to win debates, and will use any convincing argument in order to do so. But I think we can do some things to counteract this phenomenon. One, as I mentioned, is to explicitly acknowledge your biases when arguing. Another, scarier option is to consciously look for practical arguments that oppose your ideals. An example for me is racial profiling in policing: as far as I understand, it actually works pretty well (especially if your goal is filling jail cells with people who are guilty of relatively minor infractions, which–according to the broken windows theory or whatever–it kinda is). Even if it’s effective, though, I’m against it–I don’t think it’s something a just society can possibly engage in. Finding these arguments forces us to question our ideals and to learn more about our real reasons for supporting or opposing policies.
Before closing, I’ll briefly note a possible objection to this line of reasoning: that practical arguments for idealistic principles exist because those principles are correct; that the things we support because of our ideals will always work in practice (and the things we oppose will not); that we can be pragmatic and idealistic, having our cake and eating it too. This might be a defensible position, but probably requires a belief in something resembling an ultimately Just universe, which is a philosophical commitment that I doubt many people are willing to make. A perfectly Just universe might be nice, though–beats Tortureworld.
If I were Nassim Nicholas Taleb, I would give this phenomenon a super catchy, possibly animal-related name and then write like eight books about it[4], drawing more examples from finance, history, literature, sports, and probably anime or something (I may not have read all of his books). Instead, I think I’ll wrap up here. If you happen to be reading this (which, according to my occasional cursory forays into this site’s Google Analytics data, is deeply improbable) and have comments or criticisms, feel free to shoot me an email or twitter at me. And because it seems like the cool thing to do among my grad school cohort, I’ll promise a followup post (which I’ll never write) on, let’s say, the advantages and disadvantages of having ideals at all in an increasingly data-driven world.
Thanks for reading!
[1] What the hell is up with Dantooine, anyhow? Why does its name sound so much like Tatooine? How many ‘tooines are there? ↩
[2] Like Waterworld, but with tor–actually, quite a bit like Waterworld. There are lots of horrible things about that movie, but the name has got to be the worst, right? The producers probably struggled for months to come up with something better than their shitty working title (The Mariner? The Myth of Dryland? The End of Kevin Costner’s Credibility as a Dramatic Actor?) before just throwing in the towel upon remembering that their movie’s villains are called “Smokers” (because they smoke cigarettes, for Pete’s sake) and coming to the conclusion that naming is just not their strong suit. ↩
[3] And everyone involved in the production of Waterworld. ↩
[4] To be fair, this is a less interesting concept than the Black Swan problem and (as far as I know) has not been directly responsible for a global financial collapse, so let’s say three books. ↩