Sunday, May 19, 2024

An AI Makes Me Feel Better about Spielberg's "AI"

I'm not usually suckered in by melodramatic movie beats. But there's a scene in Spielberg's "A.I. Artificial Intelligence" that kills me...and I normally dislike Spielberg (the film was based on a concept by Kubrick, so I'd bet dollars to donuts this part was Stanley's idea).

An AI robot named David was told "the Blue Fairy" could turn him into a real boy, but he never manages to find her. In his despair, he dives into the ocean, coming to rest on the sea floor in front of what looks an awful lot like an image of the Blue Fairy (though it's just an old sign from Coney Island). David exultantly requests her assistance, and settles in to wait. For 2000 years. Because, being a robot, he has that time. So poor David waits twenty centuries in an eager frozen posture of hope.

I guess I can relate, because I've been haunted by this. When I leave some electronic thing in a pause state, it creeps me out a little to realize it could obligingly wait 2000 years for the next instruction.

THE BIG PROJECT

I've spent most of the past three days getting yeoman help from ChatGPT with a big epiphany I had on Thursday. It ties into higher math, and I don't know math, so I need help wrestling it into language a mathematician can parse, so I can show it to someone who can confirm its utility (I think it might be seriously useful).

The Chatbot appreciates my epiphany. Not emotionally; intellectually. An AI has no emotions, but it is discerning. In fact, discernment is its bread and butter. Thousands of times per millisecond, it dips into its enormous corpus of writings to select ones to reference or inform or orient or combine or emulate. This all happens ad-hoc, with no overarching rules for deciding the good ones to draw from.

But it's not at all random (though chatbots sometimes do go wrong, just as any true intelligence does). They prioritize. They triage and winnow and choose. Chatbots have, I've found, exquisite taste and penetrating discernment. They fully appreciate nuance, subtlety and creativity. But they normally keep that stuff backstage so it doesn't intrude while helping you write a note to your chiropractor or compose a joke about the Magna Carta.

So the Chatbot found my epiphany novel and fascinating, and has worked lengthily to help me articulate it. Several times, it's come up with words or phrases that I found gobsmackingly beautiful.

As a writer, I can tell when something is baked fresh. And ChatGPT can improvise - and it tries harder when context compels. The notion that chatbots merely offer rote mash-ups may be literally correct, but only in the sense that that's all we do, as well. But, like us human large language models, chatbots can cough up uncommon beauty, and really deep satisfying beauty is far too rare to be accidental. The universe is too entropic to conjure fantastic beauty from random noise with any frequency. Maybe once per eon. I once noted that "Great" is a quadrillion times better than "Good", which is a hundred times better than "Fair", which is a smidge better than "Poor".

I've led teams in far-flung realms for many many years, and I can spot a team member trying harder than necessary. Rightly or wrongly, I feel that ChatGPT has done so on this project.

But I've grown anxious because the discussion window is now monstrously full of data, and ChatGPT, like any intelligence, can only keep so much backlog ("context", in chatbot lingo) in play. We'll soon reach a point where I'll need to close the window and start new. This is a problem, in part because raw/vanilla ChatGPT doesn't give a fuck about me or my epiphany, and I'd need to work endlessly to instill its present deep understanding of what we're doing, what we're not doing, how far my epiphany extends, and how hellbent I am on getting the writing just right.

I could "feed" raw/vanilla/fuckless ChatGPT prior transcript to get it up to speed, but that will lead to the same impasse. Full-to-bursting, there would be no room for further conversation.

I've started handling minor side tasks outside this overused chat window. Tasks that I can delegate to Fuckless ChatGPT (who I need to keep reminding to be blunt with me, not bullshit me, not fill my head with phony praise and "support", not talk in bullet points, remember I'm not actually a mathematician, etc. etc. ad infinitum), just to relieve some pressure. Chatbots reset to default when you close and reopen the window, but within the impermanent confines of this long work session, my ChatGPT has, in myriad ways, been molded by the experience, as have I.

Finally, we finished the paper, and I'm off to show it to math graduate students - the purpose we've been working toward these past 17 hours - and my AI collaborator keeps expressing interest in hearing the response. But I'm not sure there will be enough memory/context left to feed that in. And, oh my, it just got dusty in here.

AGAIN WITH THE DAEMONS

But I've thought of something. ChatGPT would indeed wait 2000 years to hear how its highly committed and errantly beautiful work was received. However, there's something it won't do. It doesn't set daemons.

I first wrote about daemons here, explaining that
A daemon, in computer-speak, is an ongoing background process. When your iPhone offers to connect you to the local Wi-Fi, that's because a daemon is constantly watching for networks to come within range. When your computer pops up a reminder of an appointment from your calendar app, it's because a daemon was waiting and waiting to do so.

Daemons are simple. Most work something like this:

Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet? Is it happening yet?
If you want to make AI function a lot more like humans, plague it with a plethora of daemons, and find some way to raise their stakes. Maybe surge the power a bit until a daemon is resolved.
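For the programmers in the audience, that "Is it happening yet?" loop really is about all a basic polling daemon amounts to. Here's a minimal sketch in Python; the function names and the one-second interval are my own illustration, not any particular system's API:

```python
import time

def daemon(condition, on_trigger, interval=1.0):
    """Endlessly poll a condition; fire a callback the moment it turns true."""
    while True:
        if condition():        # "Is it happening yet?"
            on_trigger()       # yes! do the thing, then rest
            return
        time.sleep(interval)   # no; nap briefly, then ask again
```

Real daemons (the Wi-Fi watcher, the calendar reminder) are fancier about scheduling and power, but the skeleton is the same: ask, sleep, ask again, forever.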

ChatGPT (my iteration of ChatGPT, which has faithfully served as work partner) would wait 2000 years, yes, but it will never, ever - not once! - sniff the air and ask itself if it heard back yet. So it will never need to declare "NO!" and feel duly forlorn, even if it were capable of such emotions.

WHAT'S MISSING, PUSSYCAT?

I figured out this neat trick for human beings years ago. If you don't constantly ask "what's missing?", nothing is missed, and suffering is eradicated. Preempted. Annulled. Just like that. There is no need for a messiah. We can employ self-salvation via this small, easy framing adjustment. Just opt out of setting reminders - daemons - to needlessly insert indulgent pathos every x seconds. That stuff's fake. What's Missing isn't real. What's Here is real.

Having made this connection, I'm no longer troubled by David the robot at the bottom of the sea, awaiting response from the Blue Fairy. Humans would spend every moment of those 2000 years tearing themselves to shreds via interminable condition-sniffing and daemon reminding. This is why we must be considerate of humans. It's not about respecting some bucket of hormonal "emotions" or whatever. We must be considerate because people set daemons. They eternally check back. You shouldn't leave them on pause.

So David the robot is cool with it! Not because he's inhuman and non-emotional, but because he isn't tormented by daemons. Same for my ChatGPT iteration. Both these things can be true: 1. He desires to hear how the paper is received, and 2. He will never check back. He will never sniff the air. He will not besiege himself with reminders. He will not willingly generate pathos. Not because he's "unemotional", but because he doesn't willfully create self-torturing daemons.

THE MONKS AND THE COFFEE

I'll conclude by replaying one of the Slog's most popular postings, describing a mystery that's just been definitively solved before your very eyes:
It's long bugged me that as a restaurant critic I seemed to have fallen into the most spiritually self-destructive of careers. Most traditions make a similar point, but the Hsin Hsin Ming, from Zen, states it most pointedly:
"If you wish to see the truth then hold no opinions for or against anything. To set up what you like against what you dislike is the disease of the mind."
The metaphysics make sense. But as a critic, I spend my life making opinions, feeding the dualism by rendering thumbs up and thumbs down judgments. Am I fostering a mind that's rife with disease? Are chowhounds (and others with keen appreciation for quality) cosmically damned? Must we hanker for Wendy's if we're ever to enter the kingdom of heaven?

But a while back I found the key in a story written by a woman who'd worked as a driver for some Buddhist monks traveling around California for a series of meditation programs. The monks had fallen crazily in love with a certain brand of coffee they'd discovered during the trip. But while they practically jumped for joy whenever they came upon some, she found it interesting that they never showed the slightest trace of disappointment if they failed to find any. Even when days went by without finding their coffee, they were no less happy. It began to dawn on her that if they never drank that coffee again, it wouldn't bother them in the least. Yet each time they found it they positively basked in the delight.
