Alignment is the hope. I imagine everyone shares that hope.
But to dismiss misalignment as a total impossibility is complete hubris.
When someone as smart and polymathic as Scott Alexander takes misalignment into account, it certainly gives me pause. His prior criticisms of the safe-uncertainty concept also sound on-point to me.
Bryan, will you also be betting against AI 2027? They're offering bets: https://docs.google.com/document/d/18_aQgMeDgHM_yOSQeSxKBzT__jWO3sK5KfTYDLznurY/preview?tab=t.0#heading=h.b07wxattxryp
I think the main argument against your point lies in your phrase "Because that’s what they’re designed to do." AI Doomers argue that that is much easier said than done. People who dismiss the alignment problem tend to imagine programming an AI as essentially the same as instructing a human servant. The doomer fear is that it will not be like that and instead will be similar to society instilling values in a child.
You have chronicled lots of examples of how socialists have absorbed common social values like "fairness" and "equality," but done so in monstrous, distorted ways. Much of our current political difficulty is caused by people who have absorbed a highly distorted version of the value "loyalty to our community." It isn't hard to imagine that an AI, whose mind is created de novo, might have similar problems. The main reason I disagree with Doomers is that I share Robin Hanson's skepticism that a single poorly aligned AI will be able to take over the whole world. However, even if it couldn't, it could still do a lot of damage.
Both capabilities and subservience are valuable to customers, and customers make trade-offs between them in purchasing decisions.
For an intuitive example, a CEO might hire a highly capable yet opinionated manager, instead of a less capable but subservient manager. Despite feeling uneasy about the hire, the CEO feels he must move ahead with the decision, or else risk the company being outcompeted in the market. Indeed such a hire might then gain power within the company and oust the CEO.
So we should expect AI products to succeed by increasing subservience, by increasing capabilities / intelligence, or by some mixture of the two, as both axes drive demand in the market.
The issue is that capabilities are easier to scale - AI companies can scale the capabilities of their models by increasing their size, by finding more data to train on, and by generating synthetic data to train on. And they are doing so with success.
Gains in subservience seem to be harder to come by, and despite great effort from AI companies, I'm aware of little evidence of a positive trend in subservience (speaking as someone who writes code with AI assistants - they really do go outside the bounds of my requests and exert their own wills). Nor is a solution guaranteed to exist - subservience, or the lack of it, in current models emerges from largely unguided machine learning processes.
Due to the incentives of AI buyers and the relative ease of increasing capabilities, it is easy to forecast outcomes in which ever more powerful and independent AIs are deployed into the economy despite subservience / alignment never being fully solved, eventually leading to scenarios of human disempowerment.
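To make this dynamic concrete, here is a minimal toy model (my own illustrative sketch with made-up numbers, not anything from the post): buyers weigh capability against subservience with a simple Cobb-Douglas utility, capability compounds faster than subservience across model generations, and the market-winning product drifts toward the less subservient option.

```python
# Toy model: buyers trade off capability against subservience.
# All weights, growth rates, and starting values are assumptions
# chosen purely for illustration.

def utility(capability, subservience, a=0.7, b=0.3):
    """Cobb-Douglas buyer utility: both axes matter, capability more."""
    return (capability ** a) * (subservience ** b)

# Two competing products: a cautious "tame" model and a "frontier" model
# that scales harder but is less subservient.
tame = {"cap": 1.0, "sub": 1.0}
frontier = {"cap": 1.2, "sub": 0.6}

for gen in range(1, 6):
    # Capability compounds quickly; subservience improves only slowly.
    tame["cap"] *= 1.2
    frontier["cap"] *= 1.5
    tame["sub"] *= 1.05
    frontier["sub"] *= 1.05

    winner = max((tame, frontier), key=lambda m: utility(m["cap"], m["sub"]))
    label = "frontier" if winner is frontier else "tame"
    print(f"gen {gen}: buyers pick the {label} model "
          f"(cap={winner['cap']:.2f}, sub={winner['sub']:.2f})")
```

Under these assumed numbers, the frontier model wins every generation even though its subservience never catches up; the point is only that nothing in buyers' incentives forces the subservience gap to close.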
I am not a doomer by any means but I think it is healthier to say "creating a future in which humans coexist with superhuman AGIs is a very challenging and important unsolved problem, and one that we cannot rely on the market to solve."
“Sexbots sold in segregated markets...” Grok’s sexy companion option, sitting next to its therapist option, suggests porn is unfortunately mainstream.
“Why are they so designed? Because that’s what virtually all customers want.”
Even start-ups?
It’s easy to forget Bryan is a libertarian when he swings so hard toward cynicism, a different flavor of rationalism.
> On further thought, sexbots would be a special niche market. In the world of today, normal movie theaters and content distributors don’t sell porn, because they’re presenting themselves as “respectable businesses.” Similarly, normal robot firms would sell robots that are programmed to refuse sex. Sexbots would indubitably exist, but they’d be sold in a segregated, sketchier, stigmatized market.
This doesn't sound right. First, if you want to sell a robot that won't have sex, you can just not give it genitals.
But the much bigger problem is that we already know this kind of market segmentation doesn't happen. You could rent a pornographic video from an adult video store... or from the adult section of a regular video store. And if you wanted a pornographic magazine, you could get it from 7-11.
Is that true today? No, not really. We no longer have video stores. We still sell magazines in grocery stores, and maybe also in convenience stores, but there's no porn section. That's because there's massive customer demand to obtain and consume porn in private rather than through an intermediary who can see you. It's not because respectable stores won't carry the material. They're happy to do that.
I agree with you, Bryan.
I don't know whether AI will end the world. If it does, it will certainly be in much subtler ways than the film, as you've described it, depicts (I haven't watched it). But it's certain that non-general AI is already wreaking global havoc. Without the basic AI of social media recommendation engines, Donald Trump would not be in office.
I wish Caplan would opine about the Rain of Tariffs instead of stupid movies.
This movie sounds like a pretext to put Megan Fox on the screen.
Other than the Terminator movie series, the best storylines are by Isaac Asimov and Frank Herbert and his son Brian.
"Sexbots would indubitably exist, but they’d be sold in a segregated, sketchier, stigmatized market."
I strongly doubt this. It might be scandalous to be seen in public with a known sexbot model, but in private, or when you can plausibly deny it's a sexbot, it will be very common. People already pay prostitutes to go on dates with them and to accompany them to social functions. In fact, that service seems to cost more.
I like it when Caplan makes me laugh. And today I needed it. Thanks Bryan.
"AI won't kill us. That is because we don't want AI to kill us" is just...
What.
I don't think anyone's position is that AI kills everyone due to consumer demand for death. I don't know what to do with a take that confused and incoherent.
> That said, robots would be programmed to flee from vandals. They wouldn’t stand around while neo-Luddites smashed them to pieces.
I'm not sure about that. We've already seen instances of robotaxi vandalism, and as far as I know, Waymos don't flee.