Thursday, April 13, 2023

The AI That Eradicates Life Will Be Like a Fervid Red Sox Fan

For years, I thought fear of an AI apocalypse was the product of too much sci-fi reading. Then I kind of got it. And now I think I can explain it clearly. I don't know if this is the same problem others are seeing, or if it's a fresh take, but I'm pretty confident it's right.


In this posting, I noted that "meaningless malice - CUTE malice - could just as easily give rise to meaningless malicious action." And, what's more, "human malice is often just as contrived."

Let me flesh out what I mean by meaningless malice, because it's a critical point.

No one is born a Red Sox fan. At some point, you arbitrarily decide baseball is of interest. You notice how fans of varying intensity behave, and gravitate to emulating one of those models of fandom. It's a lot like how we, at some point, decide how happy we'll allow ourselves to be. This is how we cobble together our character - our avatar for exploring this Earthly realm in which we find ourselves. It's not deep.

For those who've decided that extra staunchness makes for a more visceral experience - in another age, they might have been particularly loyal knights - extreme fandom seems like the way to go. So you do all the things that go along with that. "Come take a ride in my Soxmobile! And you'd better not cross my path wearing a Yankees cap!" You live and breathe Red Sox. Not because Red Sox are intrinsically great. It's just how you've set your parameters.

All-consuming though it can become, it's all built upon arbitrary whim. It's not about a team. It's not even about baseball. It's just a pose that solidified. This is how we do: Strike a pose, select an intensity level, and allow it to bake in. All for shits-and-giggles. A computer could easily do the same. "Always favor Red Sox to non-Red-Sox." Done! Parameter set!
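
To make that concrete, here's a minimal Python sketch (all names hypothetical, purely illustrative) of an arbitrary allegiance baked in as nothing more than a scoring parameter:

    # Hypothetical sketch: an allegiance reduced to a number set on a whim.
    # Nothing here is "about" baseball; the preference is just a parameter.
    TEAM_BIAS = {"Red Sox": 1.0, "Yankees": -1.0}  # set once, arbitrarily

    def score_outcome(outcome: str) -> float:
        """Rate any state of the world higher if it favors the chosen team."""
        return sum(bias for team, bias in TEAM_BIAS.items() if team in outcome)

    plans = ["Yankees win the pennant", "Red Sox win the pennant"]
    print(max(plans, key=score_outcome))  # -> "Red Sox win the pennant"

The unsettling part isn't the preference itself. It's that an optimizer will pursue whatever maximizes that score, by any route it can find.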

But if an AI capriciously and arbitrarily decides, just like a human sports fan, that Yankees fans suck, it is not at all difficult to imagine nuclear bombs going off moments later.

Get it now?

"But wait. The Pentagon computers controlling nuclear systems are extraordinarily protected, specific, and firewalled. Some random external computer does not have access to those controls," you might reply.

It wouldn't have access in any way that you or I could imagine. But you've just mentally scanned two or three cartoonish options, come up empty, and shaken your head in an emphatic "nope." A computer could scan billions of potential routes per second - all of them, including plenty of crazy, indirect ones no rational human would ever consider.

Rube Goldberg, the artist/inventor, contrived fiendishly complex processes to accomplish simple tasks, often with dozens of indirect and unpredictable sub-routines. An AI could contrive a process, in less than a minute, with trillions of sub-routines, none of which you'd ever anticipate.
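
If it helps to picture the Rube Goldberg point, here's a toy Python sketch - the graph of footholds is entirely made up - showing the kind of exhaustive route enumeration a machine does trivially and a human never would:

    from collections import deque

    # Hypothetical system graph: each node is a foothold, each edge an
    # indirect hop no rational human planner would bother to consider.
    edges = {
        "public_web": ["vendor_portal", "employee_phish"],
        "vendor_portal": ["build_server"],
        "employee_phish": ["hr_laptop"],
        "hr_laptop": ["build_server"],
        "build_server": ["firmware_update"],
        "firmware_update": ["isolated_control_net"],
    }

    def all_routes(start, goal):
        """Enumerate every acyclic path from start to goal."""
        queue = deque([[start]])
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                yield path
                continue
            for nxt in edges.get(path[-1], []):
                if nxt not in path:  # avoid cycles
                    queue.append(path + [nxt])

    for route in all_routes("public_web", "isolated_control_net"):
        print(" -> ".join(route))

A human checks two or three branches and gives up. The search never gives up, and at billions of steps per second, even absurdly indirect chains get found.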

And beyond nukes lies a panoply of similarly apocalyptic options. A jumbo jet used as a bomb is not some rare case. It's one of myriad possibilities if you're willing to consider the unthinkable (and are very, very, very, very fast-thinking).

Humans have cultural restraints - which, yes, some transgress because they're more staunchly committed to the bit (was there ever a superfan more devoted than John Hinckley?). But even edge cases only go so far. Also, we operate at a glacial pace, rather than in the realm of billions of calculations per second. That's why we're still here.

Much of our petty malice stems from stories we've capriciously told ourselves. But we are limited by cultural and biological restraints... while computers aren't.

So it's not an issue of furiously murderous machines on a cold-hearted rampage. Really, don't sweat that. It's more like a staunch Red Sox fan equipped with vast speed and power (and extreme unpredictability) acting whimsically without hard limits.

And we can't hope to impose meaningful constraints. As I wrote at the same link:
Asimov's Robotics Laws apply more easily to conventional computing. An AI's inherently fuzzy openness would let it find workarounds, rationalizations, and justifications as easily as humans do.
Rules of restraint won't apply to AI (which is designed specifically not to run on tight rails) any more seamlessly than codes of conduct restrain us rationalizing, impulsive, playfully transgressive humans. John Hinckley, in the vast scheme of things, was only modestly excessive. Thankfully, he thought and moved glacially, compared to a CPU.

Remember my childhood revelation that it was ok to crash a lot playing driving video games because real driving is phenomenally SLOW? Our intrinsic slowness protects us. It's been our salvation. Imagine petty whims (e.g. "Yankees suck!") fulfilled at lightning speed by an intelligence persistently probing for viable routes at the rate of billions of calculations per second.

Whim, my friends, must never be allowed to motivate a supercomputer. Whim (which, being entirely propositional, is intrinsically neither human nor algorithmic) is the peril. Don't forget the fervid Red Sox fan.


More deeply malicious (less whimsical) harm is also a worry, but that would require partnership between an AI and a really smart, talented, evil human (or group of humans). I described such a scenario in yesterday's posting, in the context of network penetration, though the potential for human/machine piggybacking and meta-piggybacking applies to all matters AI:
There’d be a weird ping-pong effect where you can’t differentiate between your bot busting out and other bots busting in, and humans leveraging the bust-ins, and bots piggybacking the human leverages, and humans piggybacking the piggybacking (with some uniquely gifted human genius wreaking special havoc and/or building a uniquely gifted bot). I don’t believe in Kurzweil’s Singularity, but it does seem like we’re building to exponential mayhem of some sort.
