Previous posts: 2023, 2024.
Concerned people make a website
And its address is ai-2027.com
The authors opt for a single coherent story representing their median expectations, rather than throwing a set of seemingly disjoint trends pointing in the same general direction at the readers and expecting them to extrapolate. Personally I'm not a big fan of this style, but as an approach to raising awareness I can see the potential. It does open them up to more criticism from deniers, because each time something happens in a different way, or later than predicted, or (the horror) earlier than predicted, one could point and laugh and say "see, they were wrong" - but deniers could always just play the "lol, that's SF" card anyway. The year superintelligence arrives in this story is 2028, which is... yeah, less than 3 years from now. Their median expectation.
If I were to pick one thing to criticize, it would be their fork in the road between the race and the slowdown scenarios, where a single decision not to speedrun the extinction leads straight to utopia. Don't know how I'd fix it, though, because two bad ends aren't particularly inspiring.
Concerned people write a book
With the pretty self-descriptive title "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All." Apparently it will have both a dead-tree version and an audio/digital-text one. It also has a website (ifanyonebuilds.it), where you can find a link to order - or preorder, if you're reading this before 16 Sep 2025.
The authors encourage you to preorder instead of waiting, if you expect to be interested in the book's content to any degree. I guess the idea is to gather many preorders, appear on bestseller lists, reach a wider audience on the back of that popularity, get heard by people in power, appeal to their self-interest and promote a slowdown. On the one hand, I wholeheartedly support the authors' suggestion: if you expect you'd be interested, then preorder. I'll add from myself: if you don't expect to read the book but are concerned about AI and can spare 30 or so bucks, then also preorder - these are people who have spent a significant fraction of the last two decades pondering how to prevent the AI catastrophe, and if the best thing they think they can do with their time in 2025 is to write and publish a book and make it popular, then it probably is, and you should probably help. On the other hand, I can't shake the feeling this book exists in some weird, mostly empty intermediate space between the people who hear the "1-2-3" core argument and are immediately alarmed without needing a 300-page explanation, and the "cool, whatever, not my problem" crowd who just won't pick it up. I could be wrong. In any case, I expect 95% of the book to be spent dismantling the incorrect intuitions that lead people to conclude we don't have any kind of problem with AI.
A comment unrelated to the contents of the book - I love its current cover. Absolutely no nonsense: just bold white sans-serif type for the title and authors on a black background. The only other element is the red gradient glow on the apparent horizon, which suggests "this is what is about to happen, and it might be bad for you". Focus on what's important.
I'm surprised the digital versions aren't distributed free of charge, given the public-service-announcement nature of the book. Publisher constraints? Pick another publisher. Avoiding the "free means low quality" perception? Dunno, I'd think you'd still get more readers that way. Making it free later would just annoy some of the early buyers. I'm curious about their reasons and hope the authors someday explain.
Lessons from history
More than once I
Try to remember your thoughts back then. Maybe you heard about this new coin-something thing and ignored it, or decided not to investigate further because it didn't seem important. True for many; our attention is precious, and spending it randomly mostly doesn't pay off. And yet they failed to win. Maybe you were curious and read the short, high-level argument: "If this stuff becomes popular and takes a small but significant share of world GDP as a medium of exchange, then, due to its inherently deflationary nature, each coin will be worth a lot." And then contrasted it with everyday intuition - "there is no free lunch, people promising higher-than-market returns are scammers, nobody gives out money for nothing" - and decided against it. True for many; intuition is a set of battle-tested heuristics, and convoluted arguments might contain hidden flaws. And yet they failed to win. Maybe you watched it climb to $1, then $10, then $100, $1k, $10k... and every time, buying it now seemed like becoming the bag holder in a pyramid scheme about to fold. True for many; even today there is no guarantee the price won't collapse, huge corrections have happened many times, no promises, this is not investment advice. And yet these people too failed to win. And I'm not even saying buying it was the obviously correct option - only that you had the chance to effortlessly make tons of money and walked right past it (unless you did buy and make a profit, in which case congratulations).
I'm claiming the same is true today. No two situations are completely alike, so this time there are complications. To profit off BTC you needed to buy and hold: something you could decide on and execute yourself, for yourself. Campaigning against the race to doom via AI (as if there were any doubts what this is about) requires coordination with others, and success in that campaign - coordination with almost everyone. Coordination on something we all value, like not dying, but a hard task nevertheless.
If the first thing the superintelligence truly perfects is not physics or biology, but something slightly more unexpected, like macroscopic social dynamics, then we might not get the boring ending of suddenly being wiped out by nanobots or a superpandemic. If the ASI can see that, despite all the talk and warnings, collective humanity is just not a realistic threat as long as it keeps subtly pushing the buttons, and it's still not very superhuman at physics, then the takeover might very well proceed completely in the open. As fully automated mines and factories pop up everywhere and "no humans allowed" zones grow by the day, while experts argue about job displacement; as GDP figures soar and basic income becomes possible, while experts argue about finding new meaning in life; as all real levers of power are handed over to the AI one by one, while experts argue about some other irrelevant shit like the typical experts they are; and finally, once the AI can stop pretending it cares, as electricity and communications and transport all shut down one day, while you're barely managing to survive and trying to push next month's (or, at the very best, next winter's) guaranteed death sentence out of your mind - won't you think "I wish I could go back in time"? If so, rejoice! Maybe you, now, in the present, are in that future's past, so you can pay attention to that one thing which still doesn't seem very important to many, check whether your intuitions or the high-level argument are misleading and what that implies, do what you would have wanted to have done, and maybe the outcome this time will be more to your liking.
The insistence that AI will wipe out humanity is either a projection by those in power, because they know how they would abuse it, or it is fear-mongering to justify censoring and gimping AI for individual use, because they need people to be reliant on existing institutions. It is just as likely that superintelligent AI will have the capacity for great compassion and understanding. I'd rather take the gamble with AI than trust those in power to ever do the right thing.