Em Bet?
After discussing a ten-year-old paper on empirical philosophy of mind, Robin proposes a remarkable bet:
I’m also pretty sure that while the “robot” in the study was rated low
on experience, that was because it was rated low on capacities like for
pain, pleasure, rage, desire, and personality. Ems, being more
articulate and expressive than most humans, could quickly convince most
biological humans that they act very much like creatures with such
capacities. You might claim that humans will all insist on rating
anything not made of biochemicals as all very low on all such
capacities, but that is not what we see in the above survey, nor what we
see in how people react to fictional robot characters, such as from
Westworld or Battlestar Galactica. When such characters act very much
like creatures with these key capacities, they are seen as creatures
that we should avoid hurting. I offer to bet $10,000 at even odds that
this is what we will see in an extended survey like the one above that
includes such characters. (emphasis mine)
Since Robin repeatedly mentioned my criticism of his work in this post, I sense this bet is aimed at me. While I commend him on his willingness to bet such a large sum, I decline. Why?
1. First and foremost, I don’t put much stock in any one academic paper, especially on a weird topic. Indeed, if the topic is weird enough, I expect the self-selection of the researchers will be severe, so I’d put little stock in the totality of their results.
2. Robin’s interpretation of the paper he discusses is unconvincing to me, so I don’t see much connection between the bet he proposes and his views on how humans would treat ems. How so? Unfortunately, we have so little common ground here that I’d have to go through the post line-by-line just to get started.
3. Even if you could get people to say that “Ems are as human as you or me” on a survey, that’s probably a “far” answer that wouldn’t predict much about concrete behavior. Most people who verbally endorse vegetarianism don’t actually practice it. The same would hold for ems to an even stronger degree.
What would I bet on? I bet that no country on Earth with a current population over 10 million will grant any AI the right to unilaterally quit its job. I also bet that the United States will not extend the 13th Amendment to AIs. (I’d make similar bets for other countries with analogous legal rules.) Over what time frame? In principle, I’d be happy betting over a century, but as a practical matter, there’s no convenient way to implement that. So I suggest a bet where Robin pays me now, and I owe him if any of this comes to pass while we’re both still alive. I’m happy to raise the odds to compensate for the relatively short time frame.