Welcome to Earth
In the comments [scroll down], Jesse Mazer has pressed me on how my conception of ethics and suffering applies to “intelligent aliens.”
if we met a species of intelligent aliens, you would say that torturing them is not bad because of the harm it causes to them, but only because of the harm it causes to the human torturers? […] You say “some secular humanists might not find it that absurd to define ‘the human’ genetically”, but I imagine you’d find relatively few secular humanists who’d say that intelligent aliens should occupy a lower rung on the ethical ladder simply by virtue of not sharing our glorious human DNA. And if we are defining ethics in terms of shared genetics, what argument would you make against a white supremacist who believes that he owes moral duties only to members of his own race?
These are good questions.
The first thing I thought of when reading them was the famous scene in Independence Day where Will Smith punches the alien in the face and utters the title of this blog post. Next I remembered the somewhat less famous scene in Mars Attacks! where the Martians run around Vegas lasering people to death while one particular Martian carries a repeating loudspeaker that says “Don’t shoot! We’re your friends!”
And the third thing I thought of, a bit later, was this:
Now. The point so far is that a being’s intelligence, taken alone, has little or at least indeterminate bearing on the question of when it is ethical to harm that being. One main problem is that defining ‘intelligence’ is itself a difficult, even silly, task without reference to human intelligence. Consider, in yet a fourth example from popular sci-fi, the film Starship Troopers. Were the giant bug enemies Earth faced in that story ‘intelligent’? Obviously yes. Were they intelligent in a way that made them fit the case Jesse wants us to consider, that is, in a way that made them human-like? In some ways, yes (consider the fear shown by the giant larva thing at the very end of the film). But fighting an invasion of giant bugs (we know that ants, for instance, are very intelligent) will probably call forth a different ethical response than fighting an invasion of near-human aliens like those that populate the Star Trek universe.
Sticking with Star Trek, let’s now consider the Borg: an intelligent alien ‘race’ that isn’t just non-human but isn’t wholly biological, either. When it comes to the possibility that the Borg and the human race have ethical parity, I’ve got to side with Picard: real ethical parity would mean peacefully submitting to assimilation. This, in fact, is what troubles me about Alex Wendt’s strain of constructivism in International Relations: the idea, best captured by the praise Wendt heaps on Mikhail Gorbachev, is that ethics involves the personal choice to surrender to the greater good of … humanity? Life? Sentience? Beings capable of flourishing? That’s not exactly clear. This is less of a shortcoming in a universe with only one human-like species, namely humans (as Peter Lawler and others have pointed out, as much as, say, Alasdair MacIntyre wants to emphasize our shared animality with rational animals like dolphins, there are no dolphin universities, and dolphin eros is unlike human eros). But in the universe Jesse is calling us to attend to, the possibility that some conceptions of ethics will lead humans to try to transcend their own humanity is a serious concern.
And as I’ve said, it’s not that human genes are necessary and sufficient for being human; it’s just that they’re necessary. For that reason, the argument I’d make against the white supremacist is that his conception of moral duty isn’t a real human ethics, precisely because it doesn’t take the human as its category of ethical analysis. That’s not a bad argument, but there are problems. First, a category of analysis isn’t a unit of analysis. So even within a ‘human ethics’ as I’ve described it, someone could still take what strikes many as an outrageously unethical approach to the suffering and death of individual human beings, or even of huge groups of them. British public philosopher John Gray has done just this, announcing that there are billions and billions too many individual humans on Earth, and that our only hope is to undergo a very substantial die-off.
But this takes us a little farther afield of the main issue, which is that the ethical status of suffering harm may be necessary to understanding why inflicting certain harms is wrong, but it isn’t sufficient; the ethical status of inflicting certain harms, by contrast, does get us to sufficiency for discussing a human ethics. There’s one more leap I could make at this point, but I think I’ll save it for later.