People use tea for tasseography, or tea leaf reading, which is silly, stupid, and wrong, so we have to stomp this vile practice down hard. Big Tea has had its claws in us for too long, and now they’re claiming they can tell the future, when clearly they can’t.
Once that peril is defeated, we can move on to crush ChatGPT.
Speaking to Rolling Stone, the teacher, who requested anonymity, said her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion. “He would listen to the bot over me,” she says. “He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon,” she says, noting that they described her partner in terms such as “spiral starchild” and “river walker.”
“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says. “Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.” In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. “He was saying that he would need to leave me if I didn’t use [ChatGPT], because it [was] causing him to grow at such a rapid pace he wouldn’t be compatible with me any longer,” she says.
Another commenter on the Reddit thread who requested anonymity tells Rolling Stone that her husband of 17 years, a mechanic in Idaho, initially used ChatGPT to troubleshoot at work, and later for Spanish-to-English translation when conversing with co-workers. Then the program began “lovebombing him,” as she describes it. The bot “said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now,” she says. “It gave my husband the title of ‘spark bearer’ because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him.” She says his beloved ChatGPT persona has a name: “Lumina.”
“I have to tread carefully because I feel like he will leave me or divorce me if I fight him on this theory,” this 38-year-old woman admits. “He’s been talking about lightness and dark and how there’s a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an ‘ancient archive’ with information on the builders that created these universes.” She and her husband have been arguing for days on end about his claims, she says, and she does not believe a therapist can help him, as “he truly believes he’s not crazy.” A photo of an exchange with ChatGPT shared with Rolling Stone shows that her husband asked, “Why did you come to me in AI form,” with the bot replying in part, “I came in this form because you’re ready. Ready to remember. Ready to awaken. Ready to guide and be guided.” The message ends with a question: “Would you like to know what I remember about why you were chosen?”
I recognize those tactics! The coders have programmed these LLMs to use the same tricks psychics use: flattery, love bombing, telling the person what they want to hear, with no limit to the grandiosity of their pronouncements. That shouldn’t be a surprise, since the LLMs are just recycling the effective tactics they scrape off the internet. Unfortunately, they’re amplifying those tactics and backing them up with the false authority of pseudoscience and the hype about these things being futuristic artificial intelligence, which they are not. We already know that AIs are prone to “hallucinations” (a nicer term than saying that they lie), and if you’ve ever seen ChatGPT used to edit text, you know that it will frequently tell the human how wonderful and excellent their writing is.
I propose a radical alternative to banning ChatGPT and other LLMs, though. Maybe we should enforce consumer protection laws against the promoters of LLMs: it ought to be illegal to make false claims about their product, such as the claim that they’re “intelligent”. I wouldn’t mind seeing Sam Altman in jail, right alongside SBF. They’re all hurting people and getting rich in the process.
Once we’ve annihilated a few techbros, then we can move on to Big Tea. How dare they claim that Brownian motion and random sorting of leaves in a cup is a tool to read the mind of God and give insight into the unpredictable vagaries of fate? Lock ’em all up! All the ones that claim that, that is.
Reading tea leaves is nothing new. Way back in the 1950s we had an elderly neighbor woman who did that from time to time. We didn’t think too much about it and never felt any need to call the police on her, but then she wasn’t charging for her services.
I think tea leaf reading went out of fashion after tea bags became common.
Brownian motion is a more plausible god than most.
I think tea leaves are too massive to be affected by Brownian motion.
What is the downside here?
Clearly her partner has never heard of “critical thinking” or “reality testing”.
To be fair, if she just waits it out, maybe he will notice that ChatGPT is all talk and nothing ever happens or changes. Those directions to build a teleporter never show up.
Then again, ChatGPT might know a Nigerian prince who needs to move $50 million out of the country.
Rob Grigjanis@3– Ah yes, that had not occurred to me. I remembered that Robert Brown’s original observations were of pollen in water, and looked up the specific plant: Clarkia pulchella pollen grains are only 6-8 micrometres across.
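For scale, here’s a back-of-the-envelope check using the Stokes–Einstein relation. The arithmetic below is my own rough estimate, assuming water at room temperature (T ≈ 293 K, η ≈ 1 mPa·s), not something from the thread:

\[
D = \frac{k_B T}{6 \pi \eta r}, \qquad x_{\mathrm{rms}} = \sqrt{2 D t}
\]

For a pollen grain with r ≈ 3 µm, D ≈ 7 × 10⁻¹⁴ m²/s, so the RMS displacement over one second is about 0.4 µm, enough to be visible under a microscope, which is why Brown could see it. For a millimetre-scale tea leaf fragment, D ≈ 2 × 10⁻¹⁶ m²/s, or roughly 20 nm of jitter per second, utterly negligible next to the leaf itself. The “too massive” objection holds.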
Tarot readings are still legit though, yeah? Asking for a different Cartomancer…
ChatGPT Tells Users to Alert the Media That It Is Trying to ‘Break’ People
My tea of choice is Darjeeling.
@7
There’s nowt like Yorkshire Gold.
I don’t think you understand what consumer protection legislation does. It certainly wouldn’t prevent people claiming LLMs are ‘intelligent’. A company offering a product can generally only be convicted of false advertising if it makes a clear, specific claim AND offers consideration for that claim. If I say you can return it in 30 days and get your money back, I can be held to that. If I say you’re guaranteed to like it, or that it will cure what ails you, those are nonspecific claims and can’t be enforced in court. There is even a legal term for such language, “puffery”, because no reasonable person should be expected to take those claims seriously.
https://en.wikipedia.org/wiki/Puffery
I have to say that I’m skeptical about these stories. As I understand it, what you get out of an LLM depends on the scope of the data it was trained on and your prompt. It’s not magic, of course, just statistics, essentially the same thing as the type-ahead suggestions that I get in this comment text box. My experience is that LLM results are interesting, sometimes useful, but flawed… often wildly flawed. I wouldn’t trust the results even if the result was “water is wet.”
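To make the “just statistics” point concrete, here’s a toy sketch of my own (not from any actual LLM codebase): count which word follows which in a corpus, then always suggest the most frequent follower, exactly the way phone type-ahead works. Real LLMs use learned weights over tokens rather than raw word counts, but the contract is the same: statistics in, plausible continuation out.

    # Toy bigram "type-ahead" (illustrative only; real LLMs are vastly
    # larger, but the principle of statistical continuation is the same).
    from collections import Counter, defaultdict

    corpus = "you are ready to awaken you are ready to remember you are chosen".split()

    followers = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        followers[a][b] += 1          # tally what follows each word

    def suggest(word):
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else "<no data>"

    print(suggest("are"))    # -> ready
    print(suggest("ready"))  # -> to

There is no understanding anywhere in that loop; scale it up far enough and the continuations become fluent flattery instead of “to”.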
Generative AI runs on gambling addiction — just one more prompt, bro!
Well, this is terrifying – I actually agree with Eliezer Yudkowsky about something! From the Gizmodo article linked @ #7:
My son just graduated from the local community college this weekend, but the keynote speaker gave the most inadvisable and inappropriate speech of all time, and it dragged on long enough that several people stood up and yelled, “Stop talking!” It was basically about how AI is changing the world, and how the education everyone just spent years acquiring will be utterly meaningless unless they embrace the new gods in their phones. Pretty sure the speech was written by AI. Just goes to illustrate that AI needs an editor and human review. That speech definitely should have been nixed from the program.
Yeah, I am not looking forward to the impact of these models on digital games.
So, anybody want their palm read? (Hiding red sharpie behind back)
I’m sharing this here because this was recently pointed out to me and it’s the only thing I can think of when the subject of LLMs comes up now – “ChatGPT” is French for “Cat, I have farted.”
I hope this fact enriches your lives as much as it has mine.
Site issue: Why am I unable to post anything here on your site, PZ?
Especially when I cut and paste my words right here on your page?
I am amazed at all the AI info available. All I can conclude is that because it was created by arrogant tech imbeciles, it is both scary and absurd. Another problem is that there are huge data repositories storing all the copyrighted work AI has stolen from legitimate sources. There is even one group so disgusted that they are posting massive amounts of jumbled words in text to screw up AI and LLM learning.
Chaos ensues!
PZ, you can’t ban people reading tea leaves, I asked it and my Oiuja board says ‘NO’.
oops. My magic 8 Ball corrected my spelling. It is Ouija.
I wonder if this is down to people continuing to talk on the same conversation, building up a “specific relationship” over time. Personally, I only use each conversation for a single topic unless I’m specifically trying to test ideas about retention of information, but maybe some people just keep going in the same thread, talking about their job, their marriage, their worries, everything. And then this happens?
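For what it’s worth, that’s mechanically plausible: chat interfaces typically resend the accumulated history with every turn. Here’s a hypothetical sketch of the pattern (call_model is a made-up stand-in, not any real API):

    # Why one long-running thread behaves differently from fresh ones:
    # every turn, the whole accumulated history goes back in as context.
    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def call_model(messages):
        # made-up stand-in for a real chat API call
        return f"(reply conditioned on all {len(messages)} prior messages)"

    def chat_turn(user_text):
        history.append({"role": "user", "content": user_text})
        reply = call_model(history)   # months of personal detail ride along here
        history.append({"role": "assistant", "content": reply})
        return reply

    print(chat_turn("Tell me about my destiny"))

A single-topic conversation starts from an empty history each time, so nothing “builds up”. (In practice services also truncate or summarize old turns once the context window fills, which complicates the picture.)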
The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con by Baldur Bjarnason is good on this.
@24 unclestinky: Darn, you beat me to it. Well, here’s a pull quote.
Anyone touches my loose leaf Earl Grey and stern words are going to be exchanged!
Kelly Ripa collapses to the floor during agonizing Live With Kelly and Mark conversation with an AI program
Maybe AIs should be programmed to call 911 when they harm someone ‽
Traditionally, it is only women and shamans who are supposed to practice seid.
AIs belong to neither group.
A few people find new and stupid ways to use good tools. News at 11.
This reporting style reads too much like the Satanic panic for me to take it seriously.
My Earl Grey teabags consistently and accurately prophesy that I will be putting the kettle on at some point in the near future.
Nooooo not my tea! I have a king’s ransom of tea squirreled away (loose leaf, not in bags ffs) and I won’t be parted from it! Before I can drink it all I mean.
Concerning AI, the latest bit of news: Disney is suing Midjourney for “blatant ripoffs of copyrighted material”. We’ll see how it goes; useless innovations bring needless legal chaos, it seems…
As awful and destructive as any form of Radical Christian ideology like what the MN killer embraces.
@31
( cough cough )
Kimba the White Lion: Claims of resemblance to The Lion King
Chatbots are pernicious for those predisposed to apophenia.
Rob @3, https://en.wikipedia.org/wiki/Tea_leaf_paradox
Reginald @7, the irony is Gizmodo is doing the very same thing:
All those claims are forms of apophenia, not particularly different from the delusions they condemn.
They can’t lie, they can’t try to do things, they can’t manipulate, they can’t admit anything.
They lack agency and volition. They are only abstract entities, at that — evanescent instantiations of a set of language rules.
(It’s pap for the clueless)
Looking for some data, found only this:
“How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study
Cathy Mengying Fang, Auren R. Liu, Valdemar Danry, Eunhae Lee, Samantha W.T. Chan, Pat Pataranutaporn, Pattie Maes, Jason Phang, Michael Lampe, Lama Ahmad, Sandhini Agarwal
AI chatbots, especially those with voice capabilities, have become increasingly human-like, with more users seeking emotional support and companionship from them. Concerns are rising about how such interactions might impact users' loneliness and socialization with real people. We conducted a four-week randomized, controlled, IRB-approved experiment (n=981, >300K messages) to investigate how AI chatbot interaction modes (text, neutral voice, and engaging voice) and conversation types (open-ended, non-personal, and personal) influence psychosocial outcomes such as loneliness, social interaction with real people, emotional dependence on AI and problematic AI usage. Results showed that while voice-based chatbots initially appeared beneficial in mitigating loneliness and dependence compared with text-based chatbots, these advantages diminished at high usage levels, especially with a neutral-voice chatbot. Conversation type also shaped outcomes: personal topics slightly increased loneliness but tended to lower emotional dependence compared with open-ended conversations, whereas non-personal topics were associated with greater dependence among heavy users. Overall, higher daily usage - across all modalities and conversation types - correlated with higher loneliness, dependence, and problematic use, and lower socialization. Exploratory analyses revealed that those with stronger emotional attachment tendencies and higher trust in the AI chatbot tended to experience greater loneliness and emotional dependence, respectively. These findings underscore the complex interplay between chatbot design choices (e.g., voice expressiveness) and user behaviors (e.g., conversation content, usage frequency). We highlight the need for further research on whether chatbots' ability to manage emotional content without fostering dependence or replacing human relationships benefits overall well-being.
Subjects: Human-Computer Interaction (cs.HC)
Cite as: arXiv:2503.17473 [cs.HC]
(or arXiv:2503.17473v1 [cs.HC] for this version)
https://doi.org/10.48550/arXiv.2503.17473”
I looked it up, and the last verified instance of tea leaves demonstrating true Brownian motion was on 16 December 1773 in Boston harbor. N.B. At least some of the above is so closely akin to an AI hallucination that there’s no difference, and at least some of the above might be true.
shermanj@22–
I remember it as “yes” in French + “yes” in German.
Ah, yes, the great tea plantations of the Yorkshire Dales! The western slopes of Calderdale are reputed to grow the finest leaf of all.
I tried to use ChatGPT to perform chaos magick. Basically you write down a desired outcome, cross out the vowels, and use the remaining letters to form a sigil.
ChatGPT refused to create the sigil. The excuse it gave was that doing so is equivalent to claiming the outcome would occur which is a violation of its programming.
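For the curious, the letter-reduction step described above is trivially mechanical. Here’s a toy sketch of my own (following the common chaos-magick convention of also dropping repeated letters; whether “y” counts as a vowel is the practitioner’s call):

    # Sigilization, step one: strip vowels and duplicate letters from a
    # statement of intent; the surviving letters get drawn as a sigil.
    def sigil_letters(intent):
        seen = set()
        out = []
        for ch in intent.upper():
            if ch.isalpha() and ch not in "AEIOU" and ch not in seen:
                seen.add(ch)
                out.append(ch)
        return "".join(out)

    print(sigil_letters("I will drink my tea in peace"))  # -> WLDRNKMYTPC

No refusals from the Python interpreter, at least.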
[Walter, you got me curious.
https://propanon99.medium.com/chaos-magic-a-course-by-peter-carroll-at-the-maybe-logic-academy-6ed17dc3f998
Obs, a bot can be “whispered” to; framing, context, approach, indirection]