Discussion about this post

Bassoe:

Yep. The problem is that safety-from-AI and safety-from-other-humans-monopolizing-AI-against-you are both x-risks, and their solutions are mutually contradictory.

Safety from AI requires extensively restricting access to the AI and the commands given to it, to avoid anything it could possibly misinterpret, while safety from an AI monopoly requires giving everyone access to AI, so that the majority of humanity isn't rendered economically and militarily irrelevant to an AI-monopolizing oligarchy.

The best-case scenario for the rest of the world is to become the AI monopolizer's rentist company-town serfs; the worst is to be genocided by their exterminist killbots.

https://jacobin.com/2011/12/four-futures

And the choice is entirely up to the monopolizers; there's jack shit anyone else can do to stop them. The oligarchy is also incentivized to be genocidal, because there literally aren't enough resources on earth to give everyone a first-world quality of life.

https://www.bbc.com/news/magazine-33133712

Spite is an underrated motive. If AI development is a choice between:

1. The rich use regulatory capture, ironically in the name of "AI safety," to monopolize AI. Once AI advances enough to consume the entire job market, everyone else is priced out of everything, revolts are violently suppressed by weaponized robots, and everyone but the rich starves to death while they enjoy a post-scarcity utopia built atop our mass graves.

2. Everyone has AI, meaning they can use it to create whatever products and services they want in the aftermath of the collapse of capitalism, and to provide MAD deterrence against exterminists.

...plenty of people are going to choose the second option, despite doing so being riskier for humanity as a whole since it means more doomsday buttons with more fingers on them.

This is unironically more survivable than the alternative.

https://www.angryflower.com/422.html

If the ruling oligarchy doesn't like it, they ought to be hashing out some kind of BGI now, so that having our own AIs to protect us once we've become economically redundant doesn't seem like our only chance of survival.

