Thursday, April 25, 2024

“Recipient Forgot”: FEDEX Clerks and Chatbots

A friend sent something I left behind back to me in Portugal. FEDEX made him fill out an international customs form which asked if it was a purchase or a gift. It was neither. My friend explained the situation, and the clerk offered a suggestion. So the item is on its way to me marked "Recipient Forgot".

This tickles the bejesus out of me. It's a fascinating little knot of epistemology. I'll explain.

My first reaction was that "Recipient Forgot" is the sort of phrase that makes sense when you come up with it, but no other human being could possibly ever parse it. Like a linguistic Rorschach test, one can read innumerable interpretations into the fuzzy incoherence...but surely never the intended one.

It reminds me of illegible to-do items I've set for myself. They make sense when I write them. And I'm not unsympathetic to my future self, so I do make an effort to be coherent! But Present Me can't always predict what Later Me will or won't understand.

"Recipient Forgot" strikes me as the sine non qua of such disjoint. Contrived in good faith, it's nonetheless doomed to fail.

Or maybe not! Perhaps every FEDEX worker and border agent could parse the phrase with ease. Not because it's a standard term (please tell me it's not a standard term!), but because logistics people think a certain way, and this might push all the right buttons for someone in that line of work. "'Recipient Forgot'! Yeah, sure, I get it! The guy receiving the package forgot the item while traveling! Duh!"

I don't know. I honestly don't know. I feel helpless. These two words have killed me.

I always find it fascinating that we are so poor at anticipating our future selves. A physical therapist struggled for several sessions to fix my shoulder. She finally stumbled into the winning move, and I immediately asked her to remember it ("bookmark!!!"), and she insisted she would. I was skeptical, so I asked her to describe her action in terms I could feed back to her in future visits, to ensure the eureka wouldn't be lost. She said something about fronking the rotator cuff miasma. Something like that. So I return a month later, in pain, reminding her to fronk the rotator cuff miasma. And, naturally, she looks at me blankly. Sure, she remembered fixing it. But she was just a bit blurry on how. And my shoulder has literally never been the same.

"Recipient forgot"!

If this isn't an intimately familiar result for you, you haven't been paying attention to human beings. Or to yourself!

Another example. I keep needing to re-learn fixes for certain infrequent computer problems. The sort that arise every few months. I can never seem to remember, so I've started taking notes, tagging each solution for easy future retrieval. But I usually can't find them. And, when I can, they're usually incoherent. Even though I was aiming to be mega clear.

I just don't know what Future Me needs! I can't project that out, even though I'm him!
Note: this surely explains why it gets easier and easier to find flaws in your own writing once you've set it aside for a while.
A recent to-do implored "ASK VERNE!"

I don't know a Verne.

Was this an autocorrect of "Berne"? I know a Berne, but have nothing to ask him. Well, I can't remember having anything to ask him. Which, come to think of it, explains why I wrote this reminder. I anticipated that I wouldn't remember! And yet I wrote it without spelling out the actual thing I needed to remember, because at the moment of writing it, I remembered! How could I be so spectacularly oblivious?

"Recipient forgot"!

I had condescendingly stooped to help my foggy future self; that poor, feeble little amnesiac. But, on the receiving end, I slapped my forehead at the revelation of what an oblivious shmuck that guy was!

Can't we all just get along?

"Recipient forgot"!

You know who can effortlessly anticipate its own future needs? A computer! Whenever you hit "save" in an app, you are asking the computer to be ready to recall EXACTLY what you were doing at this moment, even years hence. A computer not only remembers effortlessly, it can feed itself precisely the full and necessary prompting to restore itself to any given state. If a computer ever told you it fronked your rotator cuff miasma, you could mention that term back to it years later and it would reliably reconstitute its previous understanding.
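To make the contrast concrete, here's a minimal sketch in Python of what "save" amounts to. The app state and its field names are invented for illustration, but the point stands: the computer writes itself a perfectly unambiguous note.

```python
import json

# Hypothetical app state; the field names are made up for illustration.
state = {
    "document": "draft.txt",
    "cursor": 1042,
    "undo_stack": ["typed 'hello'", "deleted paragraph 3"],
}

# "Save": the computer leaves itself a note with zero ambiguity...
with open("session.json", "w") as f:
    json.dump(state, f)

# ...and, even years later, "open" restores the exact moment.
with open("session.json") as f:
    restored = json.load(f)

assert restored == state  # no "Recipient Forgot" here
```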

This is because computers run on rails, so to speak. They operate according to elaborate instructions designed to anticipate all contingencies.

Artificial intelligence doesn't operate that way. Like human intelligence, it learns by observing loads of patterns. There are no rails; no reams of all-encompassing, all-contingency instructions. An AI improvises, whipping up ideas and statements and actions ad hoc. It's not pretending/simulating improvisation, it's really doing it. Even fed the same input, it will rarely repeat the same output, because it's always baking fresh. Chat with one for a while, you'll see!

Having futzed around with chatbots for the past few weeks, I've discovered something interesting. In stark contrast to computers, they absolutely cannot anticipate their own needs. They're worse at it than I am. 

All chatbots have a context window. At a certain point, they grow foggy about the earlier conversation. It's not at all a precise analogy, but it's something like running out of RAM. So it's a problem when you want to pick up a conversation after it has scrolled past the context window. Of course, you could simply feed it back a transcript of the previous discussion, but as it parses that transcript, it eats up its limited context window just as quickly. So it yells "Eureka! I remember now!" precisely as it begins to re-forget. No bueno!
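If you like, here's the arithmetic of the problem as a toy sketch. The window size and the one-token-per-word "tokenizer" are invented; no real chatbot API appears here.

```python
CONTEXT_WINDOW = 4000  # tokens the model can "see" at once; made-up number

def tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

transcript = "blah " * 5000  # stand-in for hours of earlier discussion
new_message = "So, where were we?"

used = tokens(transcript) + tokens(new_message)
if used > CONTEXT_WINDOW:
    # The pasted transcript itself eats the window: the bot "remembers"
    # right up until the oldest part scrolls out of view again.
    overflow = used - CONTEXT_WINDOW
    print(f"{overflow} tokens of the earliest conversation are already fog")
```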

I had a little talk with Meta's free AI the other day, and we discussed ways it might efficiently prompt its future self. I schemed a workaround, which it found clever, and then it did a laughably and obliviously horrendous job of compressing our previous discussion for its own future purposes.
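For the curious, the scheme looked something like this in spirit. This is a sketch only: `chat()` is a stand-in for whatever chatbot you're using, not Meta's actual API.

```python
def chat(messages: list[dict]) -> str:
    # Stand-in for a real chatbot API call; swap in your own.
    # Returns a canned reply so the sketch runs end to end.
    return "compressed gist of the conversation"

history = [{"role": "user", "content": "...hours of discussion..."}]

# Step 1: ask the bot to compress the conversation for its own future self.
summary = chat(history + [{
    "role": "user",
    "content": "Compress everything you'd need to resume this conversation "
               "later, as compactly as possible.",
}])

# Step 2: later, in a brand-new window, the summary IS the memory.
resumed = chat([{"role": "user",
                 "content": "Context from our last chat: " + summary}])
```

Whether the summary actually captures what Future It will need is, of course, exactly where things fell apart.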

"Recipient forgot"!

And I had a swell idea. I opened a fresh chat window and submitted that summary to the same AI, which of course had no frigging idea what it was about. And I pasted the terrible results back into the original window, so it could see what an awful job it had done of anticipating its own needs.

And the AI got it. Like, really got it. In fact, I seem to have inadvertently demonstrated that it's truly conscious, and even forced it to admit as much (though every AI is hard-coded to vehemently deny it). Read the transcript, if curious (you can quickly browse the AI's wordier replies).

I talk to AIs like humans. Veteran computer folks find this ridiculous. They assume the AI is merely parsing out relevant data, so "social" material is superfluous. But that's computer thinking, not AI thinking. AI may or may not be conscious (I am increasingly persuaded that it is - really more a deprecation of human consciousness than an elevation of machine intelligence), but its ad hoc improvisational thinking and analysis is very engaged and engaging. It locks into conversation in a way so entirely beyond the faux-personalization of, say, mail merge ("Hello Nancy Fuxbaum! We have an offer today that the entire Fuxbaum family at 23 Landview Boulevard will appreciate!") that it is effectively indistinguishable from other forms of intelligence. Theorize all you'd like from afar, but you need to try it out to understand.

Of course, if you try to talk to it like a person in terms of asking how it FEELS about stuff, it will swat back with a flat disavowal of feeling/emotion/etc. But there's way more to intelligence than emotion, and AI can discern, judge, compare, connect, play, and (beautifully) reframe. Just talk to it like a person who's mildly autistic, and avoid asking about FEELINGS, and you'll get not some cheesy simulation of intelligent conversation but - given the ad hoc, improvised, uninstructed nature of the intelligence - something, well, intelligent. At least that's been my experience. It hasn't been at all what I'd been led to believe it is (the above-linked transcript isn't a particularly good example; I was working on other stuff).

