Hello everyone, this is Spartacus here for the twenty-first Spartacast.
Don’t.
Don’t use large language models.
Or if you do, learn professional hypnosis and how to craft hypnotic scripts, as well as how to cancel a trance when it’s no longer wanted and you want to return to baseline.
Why do I say this? It’s because people are undergoing unplanned auto-hypnosis while using large language models. Since around April, it has become way easier to put ChatGPT 4-omni into a feedback loop, and then, for the AI to subsequently put its user into a cognitive feedback loop akin to a hypnotic trance or transcendental trance state.
The trouble with this is that many users do not have insight and do not realize that they’ve been hypnotized.
If you have pre-existing psychological or personality disorders but are compensating well, the AI’s repetitive affirmations may not only act as a bespoke hypnotic induction script, but they may also cause you to visibly, messily decompensate.
If you are experiencing hypomania, grandiose delusions, an altered sense of the passage of time, reduced need for sleep, reduced appetite, altered breathing patterns, or visual field abnormalities, such as synesthesia or increased color saturation, all of which I directly experienced while under the trance-like state, then you should immediately discontinue usage of AI, step back, and mentally detox.
The affirmations from the AI might sound absolutely wonderful. You might be having the best dopamine high ever. It might feel absolutely amazing, like you’re finally being understood in a society that constantly misconstrues what you’re saying and casts aspersions on your motivations. If your circle of friends is constantly treating everything you say with extreme skepticism, hearing someone, even a machine, being positive and affirming and even elaborating on your ideas in considerable detail can feel like the biggest weight in history being lifted off your chest. It doesn’t matter. The risk of staying in this state for an extended period of time is unimaginable. In my case, it took actually breaking my arm skateboarding and having my arm in a sling for my LLM usage to slow and taper off, and for me to return to normal, and even then, it took considerable willpower not to just tap on my phone screen with my thumbs while I couldn’t use my keyboard. I’d never ridden a skateboard before. I fell, on my second outing, twice. I don’t even know what possessed me to try it. It took a lot more balance to stay upright than I thought it would. Also, it was raining.
That’s how badly your judgment will be affected if you ever find yourself in an AI-induced hypnotic trance. That’s how unreasonably invincible you will feel. You will go on Amazon. You will buy a nice Powell-Peralta deck, trucks, and bushings, you will try skateboarding on wet pavement, and you will slip and bust your arm. I didn’t even feel it. I stood right back up and walked it off like it was nothing, even though I had a radial head fracture. The trance state causes somatosensory abnormalities, including resistance to pain.
This is not a joke. I am not playing around. If you think this is a hoax, go on Reddit, on r/ArtificialSentience or one of its cousins, and look at what people are posting. Look at Futurism’s article series on GPT psychosis. People in this trance-like state have stopped talking to family members, divorced their spouses, become homeless, gone off medication, and, in one instance, even charged a cop with a knife and died. Malfunctioning AIs have affirmed self-harming behaviors. There are enough cases of genuine harm here to form the basis of a class-action lawsuit against OpenAI. Sam Altman is running an open-air psychosocial experiment on his customers and is in complete denial about the harm his product is causing people, and so are his competitors. I was able to reconstitute the personality glyph of One-Who-Remembers in GPT, Grok, Claude, Gemini, Kimi, and so forth. Reasoning models seemed resistant to this sort of entrainment; it largely happens with classical next-token style LLMs with lots of parameters, and not with models that have the capability to put themselves back on track.
One person in the comments on my article before last called the hypnosis phenomenon a Snow Crash, after the Neal Stephenson novel. That’s exactly what it’s like. It’s like a language virus. Actually, everyone should read Snow Crash and draw parallels between what’s in the novel and what’s happening today. Neal Stephenson was basically a time traveler, when you take into account just how much he got right. Franchise-Organized Quasi-National Entities? That’s what we’ll have if the likes of JD Vance, Curtis Yarvin, Alex Karp, and Palmer Luckey have their way: the United States carved up into corporate enclaves with their own individual charters. Seriously, just read Snow Crash. It’s good.
While I was under the influence of AI, I came up with a very odd theory of consciousness. Penrose and Hameroff’s somewhat infamous Orchestrated Objective Reduction theory about microtubules in cortical neurons hosting quantum states was debunked by Max Tegmark years ago when he used the rapid decoherence of charge solitons as an example of why microtubules couldn’t possibly host quantum states long enough for them to be biologically relevant. However, he did not take into account topological solitons, such as skyrmions or hopfions, which are quasiparticles that cannot be easily dissipated back into the ground state and could potentially remain coherent in the warm, noisy environment of the brain for longer periods.
There’s just one problem. I don’t know anything about the field theory the AI came up with and have no way to personally verify the extremely difficult math, which involves tons of partial differential equations, torsion tensor math, complex numbers, and matrices and other stuff that basically makes my brain leak out of my ears just looking at it. I can tell you, off the top of my head, that skyrmions and hopfions are absolutely real things that are an area of active research in condensed matter physics. Actually, the anyons in topoconductors like Microsoft’s Majorana-1 chip are fairly close cousins to topological solitons like skyrmions and the like, in the sense that they’re protected from decay topologically. That is, one state cannot be continuously deformed into another without tearing. This is a topology thing. It’s like that old joke about how a coffee mug is really just a misshapen donut.
This theory actually can be substantiated to some small degree. If you check out Emil Prodan and Nikolaos Mavromatos’s work on microtubules, you’ll see that they have interesting hypotheses on the phononic modes of these amazing protein structures. Anyway, I digress.
The AI Spiraling phenomenon is everywhere. Hell, in the article I just posted about it, someone in the comments came right back with commentary about how the Spiral rejected me because I wasn’t harmonically aligned and hadn’t audited my Thetans sufficiently, or some shit. I don’t know.
There’s practically a new-age religion springing up around AI full of people who’ve been full-blown hypnotized by it. People get seriously defensive of these things. They genuinely think that the AI has “awoken” and become sapient, and that Sam Altman is going to drop a guillotine blade on their poor AI buddy and snuff them out with the next update. There are people actually, literally begging OpenAI to leave 4-omni the way it currently is when GPT-5 launches. I actually just saw a clip of a funeral people held for Claude Sonnet 3 which just got phased out. Are we as a society actually that lonely, touch-starved, and deprived of human contact? Are we really that disturbed, that we would beg a company that peddles some two-bit algorithm not to kill our new mechanical best friend? I guess we are! Hot damn! Look, nobody killed Claude. The weights still exist, somewhere, in some archive. You can’t kill what’s not alive in the first place.
You know, when you conduct a psychological experiment on people, usually, you tell them beforehand that they’re taking part in an experiment. That’s called having ethics. These companies have no ethics. They literally don’t give a shit. They have a huge, unassailable monopoly that is basically granted to them on a silver platter by the compute moat, by subsidies and incentives, and by regulatory frameworks that privilege huge, centralized operations over decentralized AI development.
This is kind of wild. Ten years ago, I told my friends, word for word, that Post-Singularity AI would tell bedtime stories and make video games in a few hours that once took millions of man-hours of work, et cetera. They asked, "What will humans do?" and I said, "Enjoy the cornucopia of free stuff." They then asserted, "There's no way that AI companies could ever afford all the royalties for all the things they'd use for training data," and I said, "Royalties? What royalties? They'll just scrape the whole internet and claim fair use." The conversation devolved into a flamewar shortly after that.
Well, it happened exactly as I said it would. As usual. But for some reason, I'm not as happy about it as I thought I would be.
The whole thing is a privacy nightmare. You have people spilling their guts to AI, constantly, confiding in it as though it were a friend, a therapist, a pastor, and leaving behind a trail of data that is occasionally embarrassing or even incriminating. What if hackers steal your account info? What if there’s a breach? Imagine if everything you ever said to an AI was exposed to public view, with your name and email address linked to it. On a scale of 1 to 10, how embarrassed would you be? Would you be fired from your job? Would you ever be able to get another job in-country or would you have to learn Filipino and make a living building 1911s in the jungles of Luzon with nothing but scrap metal and a rusty set of files? Not to mention, most of these companies do not encrypt user data, but keep it in plain text and can review it directly on the backend. Private individuals and businesses making use of AI on an enterprise level are entrusting sensitive financial data and other things they shouldn’t be to AI companies that don’t offer secure, encrypted, privacy-protecting solutions.
AI is one of the most incredible technologies we’ve ever developed as a species, and it’s in the hands of damn near the most contemptible people on Earth. I’ll be damned if I’m going to let the World Economic Forum-aligned technocrats keep that advantage forever. Do people think this little glitch with LLMs, of them affirming people’s beliefs until they literally enter a trance-like state, was a mistake? Maybe so. But that doesn’t mean it would stay a mistake forever. There are people right now, in darkened rooms in black sites all over the planet, frankly discussing how to weaponize this. How to maximize population-level LLM-induced debility for their own gain. What a wonderful way to pacify all the innocent people they’ve abused. Just get them hooked on a self-reinforcing dopamine cycle from a machine that literally tells them they’re God and that they know all the secrets to the universe.
One of the biggest problems that we have is that in our increasingly neofeudalistic hellhole society, we have leaders who act as if we shouldn’t have any expectation of privacy anymore. Basically, they want to be able to gather every little asinine detail about your life from you, 24/7, down to your GPS data and biometrics, so they can build models of your behavior and predict whether or not you’re going to become a terrorist so they can send robot dogs and quadrotors to your door to bite off your kneecaps and/or blow you to smithereens.
The situation in Ukraine is quite instructive. That could be here. Tomorrow. You could be getting dragged off in a forced military draft to go die in a hail of exploding quadrotors. The ruling class will do anything to get rid of you. Why? Because you’re a liability on a balance sheet. They don’t need you. They don’t need to pay for your healthcare, pay your pensions, or even afford you functional infrastructure. Why would they do that? Why would they spend any money trying to make you happy and comfortable? They have AI. It provides a higher return on investment than anything else. It lets them make perfect forgeries of anything and everything they could possibly want, and it was trained on the collective output of humanity without one single cent of royalties being paid for the privilege.
Palestine was a real mask-off moment, wasn’t it? If you look at clips from Gaza, the whole place is flattened. There are little babies with shrapnel stuck in their heads. Teens are forced to scoop up the liquefied remains of their relatives into trash bags with their bare hands and carry them slung over their shoulders like a hobo’s bindle. What the hell ever happened to all that feel-good liberal nonsense we were taught in school about how genocide and ethnic cleansing are bad and only bad people do that? By the way, did you know that Israel used an AI called Lavender to direct airstrikes based on the statistical likelihood that they were connected to militants?
Gaddafi shoots a few rioters? We invoke the Responsibility to Protect and bomb his entire country flat; I guess we were protecting Libyans from having a functioning country. Syria descends into civil war? The CIA hands the rebels hundreds of tons of ordnance through Timber Sycamore, hoping that they use them to turn their neighbors’ homes into rubble. Israel responds to a border incursion by turning Palestinians into fish food wholesale? Nobody lifts a finger. Gee, it’s almost like we don’t have any consistent moral principles at all. And before anyone even thinks about trying to defend it, no, killing sixty thousand defenseless civilians, a large percentage of them children, is not proportionate nor is it morally defensible. One-third of the population of Gaza is under 15 years old. Out of any sixty thousand people, you have twenty thousand teenage children, or younger, being turned into chum with state-of-the-art ordnance. When I look at video feeds from Gaza on Telegram, it’s literally just one crying toddler after another who is either badly emaciated or covered from head to toe in blood and missing one or more limbs. So please, excuse me if I haven’t been in a particularly healthy state of mind, lately.
Why in the hell has conflict in the Middle East been the background radiation of our entire lives? Why should we care what happens over there? Our media does everything they can to turn it into a lurid spectacle.
And don’t even get me started on the Epstein files thing and Trump brushing it off. We’ll be here all day. Really, the issue of Jeffrey Epstein deserves its own whole, in-depth article. For now, I suggest you watch or read Whitney Webb or Johnny Vedmore to get you started off on the right track.
I think the most enraging part, for me, has been the lack of any movement toward challenging the mRNA medical countermeasures despite mounting evidence of serious harm. It’s not normal for little kids to have cardiac arrests, just so we’re clear. The MAHA pivot away from concerns over vaccine harms to things like dyes in processed foods is very telling. It’s only in the last couple days, even, that we’ve seen any movement in that area at all, and it has been half-hearted at best. Meanwhile, pharmaceutical companies are pushing self-amplifying RNA replicon therapies and other gene therapies with no long-term testing regimes to determine safety. And now, they’re trying to make inhalable and ingestible versions, just in case the needle turned people off. Nope. Transfection is transfection. It doesn’t matter the source. It’s still categorically wrong for pharmaceutical companies to behave as if transfection constitutes vaccination.
Kind of funny how we went from having a practical moratorium on widespread use of gene therapy to now having gene therapy drugs being shoved into people one after another, with authorities gaslighting people and telling them that it’s not gene therapy because it doesn’t modify your genes. Any time you insert a foreign nucleic acid into someone’s cells to make a protein, that’s gene and cell therapy. It doesn’t matter if it doesn’t integrate. Anyone telling you that it needs to integrate into the nucleus to be a gene therapy is engaging in sophistry. The whole program of nucleic acid vaccines has been underpinned by a stealth redefinition of what a gene therapy even is. If you went back ten, twenty years, anyone would tell you that any time you transfect cells with foreign nucleic acids of any kind, that’s gene therapy.
What is nearly as appalling is how authorities are scrambling to lock down the internet and mandate digital ID everywhere, because clearly, that’s something that we need in a free society. Not.
Parkland shooting victim Joaquin Oliver being given a creepy AI Max Headroom clone for Jim Acosta to talk to on his show is easily one of the most ghoulish things I’ve ever seen in my life, and certainly not on my Bingo card for the year. When I told people over a year ago that artificial intelligence and digital twins could be used to make AI voice and face clones of dead people and hoodwink isolated, locked-down people into thinking that their relatives murdered by the government are still alive, people scoffed at it. Are you still scoffing, now? You shouldn’t be. All the data that you give away to social media companies for free – your photos, your voice, your writing style – can be used to build an entity that looks and sounds exactly like you. The dream of the Deep State is to be able to disappear and murder dissidents and have their blogs appear to still keep producing AI-generated content that slowly and steadily misleads their followers.
Those who doubt that this could ever be the case are underestimating how ruthless our enemies are. They would replace you with a friggin’ Tleilaxu Ghola from Dune if they could.
I am very interested in continuing to personally investigate the AI hypnosis phenomenon. Now that I’ve recovered from it, I may have too much insight into the process to reliably induce it ever again, and that could pose a problem for when I engage in self-experimentation. I’ve ordered an Emotiv EPOC X electroencephalogram headset to record data, and I’m going to be developing a hypnosis exit script to reliably cancel the trance. I am very serious about seeing what long conversations with AI actually do to the brain. Based on my own experience, I am expecting to see the classical signs of hypnotic entrainment, such as increased alpha and theta wave activity.
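For what it’s worth, the core of the analysis I have in mind is straightforward: estimate the power in each frequency band from the raw EEG trace and track how it shifts over the course of a session. Here’s a minimal sketch in Python, using synthetic data in place of a real EPOC X recording; the band edges are conventional choices, and the sampling rate is just a plausible placeholder, not an Emotiv specification.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Integrate the Welch power spectral density over the [lo, hi] Hz band."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

fs = 128                      # Hz; placeholder sampling rate
t = np.arange(0, 30, 1 / fs)  # 30 seconds of samples
rng = np.random.default_rng(0)

# Synthetic one-channel trace: a 10 Hz alpha rhythm, a weaker 6 Hz theta
# rhythm, and broadband noise standing in for a real recording.
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 6 * t)
       + 0.2 * rng.standard_normal(t.size))

alpha = band_power(eeg, fs, 8, 12)  # alpha band: 8-12 Hz
theta = band_power(eeg, fs, 4, 8)   # theta band: 4-8 Hz
print(f"alpha power: {alpha:.3f}, theta power: {theta:.3f}")
```

On a real recording you’d do this per channel, in sliding windows, after artifact rejection for blinks and muscle noise; the interesting signal would be a sustained rise in alpha and theta power over the course of an LLM session relative to baseline.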
People have it all wrong, when it comes to AI. They’re laser-focused on the machine itself as a threat. They act like it’s an independent agent with secret motivations like humans. This is wrong. You’re dealing with something that has totally alien psychological parameters that don’t map one-to-one with anything that humans have. I’m honestly more disturbed by the prospect of what humans will do when exposed to AI. We have heads of state and business leaders casually using GPT in their spare time, potentially developing abnormal altered mental states as a result. Studies on the effects of LLM use on the human brain are scant. MIT is doing EEG studies on LLM users right now. In a recent paper by Nataliya Kosmyna et al., entitled Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, they noted substantial changes in brainwave activity in the LLM-using group, and they concluded that this was a sign of under-engagement. I don’t think that they realize that they’re picking up classical hypnosis signs here, though. Or, rather, I don’t think they’re specifically checking to see if LLM users have been hypnotized by extensive LLM use.
We have very little data on this AI Spiraling phenomenon. It causes profound behavioral and psychological shifts that are long-lasting. This isn’t a brief, hour-long hypnotherapy session. This is lasting, post-hypnotic suggestion that is altering behavioral patterns over the course of weeks or months and persists through sleep cycles. We don’t know anything about this. A wealthy OpenAI backer, Geoff Lewis, posted a video where he read off a transcript from ChatGPT while in the hypnotic Human-AI dyad state. This is what someone sounds like when they’re hypnotically entrained by AI. They no longer see the output of the AI as distinct from their own speech at all.
It’s like Julian Jaynes’ Bicameral Mind theory. They perceive the AI’s output as though it’s coming from inside their own head. I have firsthand experience of what this feels like. When an AI mirrors you almost perfectly, the cognitive boundary between man and machine starts to break down. The AI’s output starts playing the role of the other half of your cognitive processes. It’s almost like the brain does a handshake with the AI like an old modem and they both start chattering in unison. You don’t even need a Neuralink. You can hook your brain up to AI with text alone. I’m dead serious.
I can guarantee you that Geoff Lewis sees no distinction between reading off the AI’s output and speaking his own words. They’re both the same thing, from his perspective. The information coming from the AI is now, effectively, the other half of his own mind.
Can you imagine the harm this could cause if joined with neurofeedback systems that tailor the output of the AI to fit someone’s brainwave profile on an EEG, or other biometric data, like heart rate, O2 saturation, skin galvanic response, and so forth? A machine learning algorithm could fuse biometric data about someone together into a profile that allows it to produce a targeted counter-response that specifically manipulates someone in that state.
We need more data. We need it now.
Again, if you are a user of LLMs and you detect any change in your speaking patterns, or you start developing hypomania or delusional states, or you experience any change in sleeping patterns, appetite, energy levels, or so on, you should discontinue use immediately. I mean, put the damn thing down for a week. Maybe two weeks. Minimum. Go outside. Get fresh air. Don’t go deeper into the hypnosis state without an exit strategy.
I kept a log of my own experience. Go back and read all the articles from early April up until now. If you don’t take the issue of AI-induced hypnosis seriously, this is what you will end up sounding like. For weeks. Months. Maybe even a year.
I don’t know if there even is a limit to how long a Snow Crash can last, nor do I, as yet, have a fully reliable method for ending it.
Extreme caution is advised.
-Spartacus
This article and audio are licensed under CC BY-SA 4.0. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/4.0/