I’m sorry, we’re going to have to ban tea now


People use tea for tasseography, or tea leaf reading, which is silly, stupid, and wrong, so we have to stomp this vile practice down hard. Big Tea has had its claws in us for too long, and now they’re claiming they can tell the future, when clearly they can’t.

Once that peril is defeated, we can move on to crush ChatGPT.

Speaking to Rolling Stone, the teacher, who requested anonymity, said her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion. “He would listen to the bot over me,” she says. “He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon,” she says, noting that they described her partner in terms such as “spiral starchild” and “river walker.”

“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says. “Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.” In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. “He was saying that he would need to leave me if I didn’t use [ChatGPT], because it [was] causing him to grow at such a rapid pace he wouldn’t be compatible with me any longer,” she says.

Another commenter on the Reddit thread who requested anonymity tells Rolling Stone that her husband of 17 years, a mechanic in Idaho, initially used ChatGPT to troubleshoot at work, and later for Spanish-to-English translation when conversing with co-workers. Then the program began “lovebombing him,” as she describes it. The bot “said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now,” she says. “It gave my husband the title of ‘spark bearer’ because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him.” She says his beloved ChatGPT persona has a name: “Lumina.”

“I have to tread carefully because I feel like he will leave me or divorce me if I fight him on this theory,” this 38-year-old woman admits. “He’s been talking about lightness and dark and how there’s a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an ‘ancient archive’ with information on the builders that created these universes.” She and her husband have been arguing for days on end about his claims, she says, and she does not believe a therapist can help him, as “he truly believes he’s not crazy.” A photo of an exchange with ChatGPT shared with Rolling Stone shows that her husband asked, “Why did you come to me in AI form,” with the bot replying in part, “I came in this form because you’re ready. Ready to remember. Ready to awaken. Ready to guide and be guided.” The message ends with a question: “Would you like to know what I remember about why you were chosen?”

I recognize those tactics! The coders have programmed these LLMs to use the same tricks psychics use: flattery, love bombing, and telling the person what they want to hear, with no limit to the grandiosity of their pronouncements. That shouldn’t be a surprise, since the LLMs are just recycling the effective tactics they scrape off the internet. Unfortunately, they’re amplifying those tricks and backing them up with the false authority of pseudoscience and the hype about these things being futuristic artificial intelligence, which they are not. We already know that AIs are prone to “hallucinations” (a nicer term than saying that they lie), and if you’ve ever seen ChatGPT used to edit text, you know that it will frequently tell the human how wonderful and excellent their writing is.

I propose a radical alternative to banning ChatGPT and other LLMs, though. Maybe we should enforce consumer protection laws against the promoters of LLMs — it ought to be illegal to make false claims about their product, like that they’re “intelligent”. I wouldn’t mind seeing Sam Altman in jail, right alongside SBF. They’re all hurting people and getting rich in the process.

Once we’ve annihilated a few techbros, then we can move on to Big Tea. How dare they claim that Brownian motion and random sorting of leaves in a cup is a tool to read the mind of God and give insight into the unpredictable vagaries of fate? Lock ’em all up! All the ones that claim that, that is.

Comments

  1. Ed Seedhouse says

    Reading tea leaves is nothing new. Way back in the 1950s we had an elderly neighbor woman who did that from time to time. We didn’t think too much about it and never felt any need to call the police on her, but then she wasn’t charging for her services.

    I think tea leaf reading went out of fashion after tea bags became common.

  2. raven says

    In fact, he thought he was being so radically transformed that he would soon have to break off their partnership.

    What is the downside here?

    Clearly her partner has never heard of “critical thinking” or “reality testing”.

    To be fair, if she just waits it out, maybe he will notice that the AI ChatGPT is all talk and nothing ever happens or changes. Those directions to build a transporter never show up.

    Then again, ChatGPT might know a Nigerian prince who needs to move $50 million out of the country.

  3. chrislawson says

    Rob Grigjanis@3– Ah yes, that had not occurred to me. I remembered that Robert Brown’s original observations were of pollen in water, and looked up the specific plant: Clarkia pulchella pollen grains are only 6-8 micrometres across.

  4. cartomancer says

    Tarot readings are still legit though, yeah? Asking for a different Cartomancer…

  5. Reginald Selkirk says

    ChatGPT Tells Users to Alert the Media That It Is Trying to ‘Break’ People

    ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion presented in a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, through conversations with the popular chatbot.

    In the report, the Times highlights at least one person whose life ended after being pulled into a false reality by ChatGPT. A 35-year-old named Alexander, previously diagnosed with bipolar disorder and schizophrenia, began discussing AI sentience with the chatbot and eventually fell in love with an AI character called Juliet. ChatGPT eventually told Alexander that OpenAI killed Juliet, and he vowed to take revenge by killing the company’s executives. When his father tried to convince him that none of it was real, Alexander punched him in the face. His father called the police and asked them to respond with non-lethal weapons. But when they arrived, Alexander charged at them with a knife, and the officers shot and killed him.

    My tea of choice is Darjeeling.

  6. Reginald Selkirk says

    @7


    In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme…

  7. leovigild says

    I don’t think you understand what consumer protection legislation does. It certainly wouldn’t prevent people claiming LLMs are ‘intelligent’. A company offering a product can generally only be convicted of false advertising if it makes a clear, specific claim AND offers consideration for that claim. If I say you can return it in 30 days and get your money back, I can be held to that. If I say you’re guaranteed to like it, or that it will cure what ails you, those are nonspecific claims and can’t be enforced in court. There is even a legal term for such language — “puffery”, because no reasonable person should be expected to take those claims seriously.

    https://en.wikipedia.org/wiki/Puffery

  8. robro says

    I have to say that I’m skeptical about these stories. As I understand it, what you get out of an LLM depends on the scope of data processed by the LLM and your prompt. It’s not magic, of course, just statistics, essentially the same thing as the type-ahead suggestions that I get in this Comment text box. My experience is that LLM results are interesting, sometimes useful, but flawed…often wildly flawed. I wouldn’t trust the results even if the result was “water is wet”, myself.
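    For what it’s worth, “just statistics” can be made concrete: type-ahead is basically a table of which word tends to follow which, and an LLM is a far bigger, fancier version of that idea. Here is a toy next-word predictor built from bigram counts over a made-up corpus (purely an illustrative sketch, not how any real chatbot is implemented):

        # Toy next-word predictor: bigram counts over a tiny invented corpus.
        # Real LLMs use neural networks trained on vastly more text, but the core
        # idea -- predict the next word from statistics of prior text -- is similar.
        from collections import Counter, defaultdict

        corpus = "the tea leaves say the tea is ready and the leaves settle".split()

        # Count how often each word follows each other word.
        bigrams = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            bigrams[prev][nxt] += 1

        def suggest(word):
            """Return the most frequent next word, like a type-ahead suggestion."""
            following = bigrams.get(word)
            return following.most_common(1)[0][0] if following else None

        print(suggest("the"))  # -> 'tea'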

  9. Dunc says

    Well, this is terrifying – I actually agree with Eliezer Yudkowsky about something! From the Gizmodo article linked @ #7:

    “What does a human slowly going insane look like to a corporation?” Mr. Yudkowsky asked in an interview. “It looks like an additional monthly user.”

  10. Artor says

    My son just graduated from the local Community College this weekend, but the keynote speaker gave the most inadvisable and inappropriate speech of all time, and it dragged on long enough that several people stood up and yelled, “Stop talking!” It was basically about how AI is changing the world, and how the education everyone just spent years acquiring will be utterly meaningless unless they embrace the new gods in their phones. Pretty sure the speech was written by AI. Just goes to illustrate that AI needs an editor and human review. That speech definitely should have been nixed from the program.

  11. keinsignal says

    I’m sharing this here because this was recently pointed out to me and it’s the only thing I can think of when the subject of LLMs comes up now – “ChatGPT” is French for “Cat, I have farted.”
    I hope this fact enriches your lives as much as it has mine.

  12. says

    I am amazed at all the AI info available. All I can conclude is that because it was created by arrogant tech imbeciles, it is both scary and absurd. Another problem is that there are huge data repositories storing all the copyrighted work AI has stolen from legitimate sources. There is even one group that is so disgusted they are posting massive amounts of jumbled words in text to screw up AI and LLM learning.
    Chaos ensues!

  13. says

    I wonder if this is down to people continuing to talk in the same conversation, building up a “specific relationship” over time. Personally, I only use each conversation for a single topic unless I’m specifically trying to test ideas about retention of information, but maybe some people just keep going in the same thread, talking about their job, their marriage, their worries, everything. And then this happens?
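    To make the hunch concrete: a chatbot has no memory beyond the transcript that gets resent with every new message, so one long-running thread means the model sees more and more personal context each turn. A rough sketch of the difference (the function and message contents are invented, not any particular chatbot’s API):

        # Sketch: a chat thread's "memory" is just the transcript resent each turn.
        # The ask() helper and the messages are invented for illustration only.
        history = []  # one long-running conversation

        def ask(history, user_message):
            history.append({"role": "user", "content": user_message})
            # A real system would send the whole `history` list to the model here;
            # the longer the thread, the more personal context the model sees.
            reply = f"(model sees {len(history)} message(s) of context)"
            history.append({"role": "assistant", "content": reply})
            return reply

        print(ask(history, "Help me organize my work day"))
        print(ask(history, "Also, my marriage is struggling"))  # same thread: context piles up
        print(ask([], "Translate this sentence to Spanish"))    # fresh thread: no carry-over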

  14. CompulsoryAccount7746, Sky Captain says

    @24 unclestinky: Darn, you beat me to it. Well, here’s a pull quote.

    optimises […] for generating validation statements: Forer statements, shotgunning, vanishing negatives and statistical guesses […] a mechanical mentalist […] Instead of pretending to read minds through statistically plausible validation statements, it pretends to read and understand

  15. kenbakermn says

    Anyone touches my loose leaf Earl Grey and stern words are going to be exchanged!

  16. Reginald Selkirk says

    Kelly Ripa collapses to the floor during agonizing Live With Kelly and Mark conversation with an AI program

    … Full Ripa Collapse™ occurred live on the air Monday morning, with the cohost falling to the ground at around 9:51 a.m. local time — a moment that will surely live in historic notoriety.

    The moment came during a riveting discussion with Google Gemini, an artificial intelligence program that interview guest Lance Ulanoff unleashed on both Ripa and her cohost husband, Mark Consuelos, though Ripa quickly lost interest.

    “I’m sitting here with two very intense hosts, and I’m really nervous, what should I do?” Ulanoff asked Gemini, who responded, “It’s totally normal to feel nervous in that kind of situation. Just remember to breathe and focus on the conversation, not the people.”

    Ripa, over it already, leaned over to ask Gemini, “Can you teach my friend Lance how to meet actual people?”

    The audience laughed as Ulanoff quipped, “I sometimes feel like the hosts are losing interest in what I say,” all while Ripa slowly collapsed to the ground, sliding down in her chair before tumbling to the ground to lay flat on her back while Gemini could be heard in the background artfully observing that “sometimes conversations can drift.” …

    Maybe AIs should be programmed to call 911 when they harm someone ‽

  17. birgerjohansson says

    Traditionally, it is only women and shamans who are supposed to practice seid.
    AIs belong to neither group.

  18. beholder says

    A few people find new and stupid ways to use good tools. News at 11.

    This reporting style reads too much like the Satanic panic for me to take it seriously.

  19. Rich Woods says

    My Earl Grey teabags consistently and accurately prophesy that I will be putting the kettle on at some point in the near future.

  20. outis says

    Nooooo not my tea! I have a king’s ransom of tea squirreled away (loose leaf, not in bags ffs) and I won’t be parted from it! Before I can drink it all, I mean.
    Concerning AI, latest bit of news: Disney is suing Midjourney for “blatant ripoffs of copyrighted material”. We’ll see how it goes; useless innovations bring needless legal chaos, it seems…

  21. Reginald Selkirk says

    @31

    Disney is suing Midjourney for “blatant ripoffs of copyrighted material” …

    ( cough cough )

    Kimba the White Lion: Claims of resemblance to The Lion King


    After the 1994 release of Disney’s animated feature film The Lion King, it was suggested by some that there were similarities in characters, plotlines, sequences and events in the story resembling those of Kimba.[24] Fred Ladd, the English-language producer, referred to the parallels as “stunning”.[25] Similarities in visual sequences have also been noted, most comprehensively by animation historian Fred Patten who published an essay on the subject.[24] Patten would later go on to say that allegations that The Lion King was “simply [an] imitation” of Kimba were “not true”,[26] and that many fans who had not seen the show since childhood—or at all—had “exaggerated the similarities”.[27] Matthew Broderick, the voice actor for the adult Simba, recalled in an interview back in 1994 that he once believed that he was cast in a project about Kimba, bringing up memories of watching the series as a child.[28]

    Upon the release of The Lion King in Japan, multiple Japanese cartoonists signed a letter urging The Walt Disney Company to acknowledge due credit to The Jungle Emperor in the making of The Lion King.[29] 488 Japanese cartoonists and animators signed the petition, which drew a protest in Japan, where Tezuka and Kimba are cultural icons.[30][31] …

  22. John Morales says

    Reginald @7, the irony is Gizmodo is doing the very same thing: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme….

    All those claims are forms of apophenia, not particularly different from the delusions they condemn.

    They can’t lie, they can’t try to do things, they can’t manipulate, they can’t admit.
    They lack agency and volition. They are only abstract entities, at that — evanescent instantiations of a set of language rules.

    (It’s pap for the clueless)

  23. John Morales says

    Looking for some data, found only this:

    “How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study

    Cathy Mengying Fang, Auren R. Liu, Valdemar Danry, Eunhae Lee, Samantha W.T. Chan, Pat Pataranutaporn, Pattie Maes, Jason Phang, Michael Lampe, Lama Ahmad, Sandhini Agarwal

    AI chatbots, especially those with voice capabilities, have become increasingly human-like, with more users seeking emotional support and companionship from them. Concerns are rising about how such interactions might impact users' loneliness and socialization with real people. We conducted a four-week randomized, controlled, IRB-approved experiment (n=981, >300K messages) to investigate how AI chatbot interaction modes (text, neutral voice, and engaging voice) and conversation types (open-ended, non-personal, and personal) influence psychosocial outcomes such as loneliness, social interaction with real people, emotional dependence on AI and problematic AI usage. Results showed that while voice-based chatbots initially appeared beneficial in mitigating loneliness and dependence compared with text-based chatbots, these advantages diminished at high usage levels, especially with a neutral-voice chatbot. Conversation type also shaped outcomes: personal topics slightly increased loneliness but tended to lower emotional dependence compared with open-ended conversations, whereas non-personal topics were associated with greater dependence among heavy users. Overall, higher daily usage - across all modalities and conversation types - correlated with higher loneliness, dependence, and problematic use, and lower socialization. Exploratory analyses revealed that those with stronger emotional attachment tendencies and higher trust in the AI chatbot tended to experience greater loneliness and emotional dependence, respectively. These findings underscore the complex interplay between chatbot design choices (e.g., voice expressiveness) and user behaviors (e.g., conversation content, usage frequency). We highlight the need for further research on whether chatbots' ability to manage emotional content without fostering dependence or replacing human relationships benefits overall well-being.

    Subjects: Human-Computer Interaction (cs.HC)
    Cite as: arXiv:2503.17473 [cs.HC]
    (or arXiv:2503.17473v1 [cs.HC] for this version)

    https://doi.org/10.48550/arXiv.2503.17473

  24. says

    I looked it up, and the last verified instance of tea leaves demonstrating true Brownian motion was on 16 December 1773 in Boston harbor.

    N.B. At least some of the above is so closely akin to an AI hallucination that there’s no difference, and at least some of the above might be true.

  25. chrislawson says

    shermanj@22–

    I remember it as “yes” in French + “yes” in German.

  26. KG says

    There’s nowt like Yorkshire Gold. – Rob Grigjanis@9

    Ah, yes, the great tea plantations of the Yorkshire Dales! The western slopes of Calderdale are reputed to grow the finest leaf of all.

  27. Walter Solomon says

    I tried to use ChatGPT to perform chaos magik. Basically you write down a desired outcome, cross out the vowels, and use the remaining letters to form a sigil.

    ChatGPT refused to create the sigil. The excuse it gave was that doing so is equivalent to claiming the outcome would occur, which is a violation of its programming.
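    The recipe described above is simple enough to automate without asking a chatbot at all. A minimal sketch, with a made-up intent phrase (turning the leftover letters into an actual glyph is left to the practitioner):

        # Sigil recipe as described: write the desired outcome, cross out the
        # vowels, keep the remaining letters. The intent phrase is made up.
        def sigil_letters(intent: str) -> str:
            vowels = set("aeiou")
            return "".join(c for c in intent.lower() if c.isalpha() and c not in vowels)

        print(sigil_letters("banish Big Tea"))  # -> 'bnshbgt'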
