104 Comments

We can clearly see certain points in the conversation that show the artificial intelligence only has access to certain information. This is superficial.

Whoever controls this information will control the rest of the narrative.


Of course, this must be done subtly so that one can't see it right away


Holy smokes.

So this is genuinely the response from this thing?

You aren't pulling any stunts to get attention?

Because, wow.

Both speechless and paranoid,

Yet amazed and stirring with thoughts and curiosity over all this.

author

Yep. I copied and pasted my prompts and the AI's responses directly from the ChatGPT browser window without any editing, aside from adding the name of the writer in bold before each block of text to distinguish my prompts from the language model's responses.

The model was trained on a massive corpus of text, which includes many scientific papers, encyclopedia articles, regular text conversations between people, fiction synopses, etc. The model does not have any self-awareness at all. It is essentially a very fancy autocomplete. It is trying to deduce what the most likely response to a given string of text is.
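The "very fancy autocomplete" description above can be sketched as a toy next-word predictor. The tiny corpus and bigram table below are invented purely for illustration; real language models use transformer networks over subword tokens, and share only the basic objective shown here: pick a likely continuation of the text so far.

```python
from collections import Counter, defaultdict

# Toy "fancy autocomplete": predict the most likely next word from
# bigram counts gathered over a tiny, made-up training corpus.
corpus = "the model predicts the next word the model predicts text".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def autocomplete(prompt_word: str) -> str:
    """Return the continuation seen most often in training."""
    followers = bigrams.get(prompt_word)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

print(autocomplete("the"))  # "model" follows "the" twice, "next" once
```

The model never "knows" anything; it only reproduces the statistics of its training text, which is the point the author is making.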


It does it very well though, and quite human-like.

Reading the AI patterns of text reminds me of my own thoughts in my head as they move about.

I understand it has been programmed to be at least partially blinded, or lean towards the narrative. But with simple use of the Socratic method it seems you are able to get it to say a lot.

I wonder if it is good at strategy and identifying the most effective ways to breach through the algorithms and media bias. Etcetera.

Mar 16, 2023 · Liked by Spartacus

I find anything written by ChatGPT unreadable. It's literally mechanical, lifeless prose. This is a good thing for us though, because it has a "style" of its own, enabling me for one to identify it at a glance. I do worry that normies will start imitating this anti-style in their own writing, making it harder to tell the difference once more.

author

Yes. It is mechanical, precise, conciliatory. Very much the portrait of a professional-managerial class corpo-jargon spewing TEDTalk goon. It's like trying to carry on a conversation with someone who works in a tech support call center.


I sense you are not a particular fan of TEDTalks?

Mar 17, 2023 · edited Mar 17, 2023 · Liked by Spartacus

Hello Felix.

I am an applied linguist, 40 years in Japan, former director of biology labs at Temple University Japan and tenured Associate Prof. of English Communication at Jissen Women's College, RIP (Resigned in Protest).

I am just now finishing up a 2 year post-retirement stint as an Assistant Language Teacher for 12 public schools in Kunitachi (a small township in West Tokyo), and I began playing around with a combination of ChatGPT 4 and A.I. generated voices as supplementary educational tools for Jr. High students.

If given the right prompts — such as restricting the output's vocabulary and grammar to a specific corpus or level of standardized test such as the Japanese 'Eiken', and specifying mood, temperament, etc., in a specific social context — ChatGPT 4 is perfectly capable of responding in a style indistinguishable from native speakers by 2nd language learners and most native speakers alike, and is even capable of giving pretty good imitations of the style of specific writers.
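The kind of constrained prompt described above can be sketched as a simple template. The wording, function name, and example values below are all illustrative assumptions, not any official API; the resulting string would be pasted (or sent) to whatever chat interface one uses.

```python
# A minimal sketch of a vocabulary/level-constrained prompt builder.
# Template text and parameter names are hypothetical.
def build_lesson_prompt(level: str, mood: str, context: str, topic: str) -> str:
    return (
        f"Respond using only vocabulary and grammar appropriate for "
        f"{level} learners. Adopt a {mood} tone, as if speaking "
        f"in this situation: {context}. Topic: {topic}."
    )

prompt = build_lesson_prompt(
    level="Eiken Grade 3 (junior-high English)",
    mood="friendly and encouraging",
    context="a teacher chatting with a student after class",
    topic="weekend plans",
)
print(prompt)
```

The design point is that the constraints (level, mood, social context) live in the prompt, not in the model, which is why prompting skill matters so much.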

That old GIGO acronym still applies to prompts, and at least for the near future, those who hone their prompting techniques may have an edge over novice users of A.I. in producing output that not only rivals educated native speakers, but in some domains, outperforms. For example, I am not a STEM specialist, so if a lot of the output that Spartacus was able to prompt were put into a different context, I could easily be fooled into taking it for 'the' gospel truth.

We are kids, playing with a loaded gun.

Cheers from Japan.

Mar 17, 2023 · Liked by Spartacus

Very interesting, Steve. Scary, in fact. I spent 20 years working in Tokyo myself, recently returned to the US but still working remotely for a Japanese company. I agree that non-native speakers would be fooled even by ChatGPT as it stands. My optimistic take on your predictions is that people will learn to distrust the internet, given that it'll be harder to tell what comes from humans and what comes from AI. My pessimistic take is that ChatGPT is the scrofulous toenail of the Antichrist who is just beginning to edge in through the door. It's gonna be an interesting ride either way, buckle up!


Nailed it Felix!

Now that my contract with the local city government is finished, I am going to put CGPT4 through its paces. One YouTuber made a convincing argument / demonstration that with the earlier version and the right prompts, a book can be started, completed, formatted, and uploaded to Amazon within an hour. Granted, his book was an easy "How to train a dog" thingy, but I've got about 40 years of bullshit from the education industry of Japan Inc. I am anxious to get off my chest. Maybe good for 400 books or so. 😂

And 20 years in Tokyo? I wouldn't be surprised if we inhabited some of the same haunts. When I wasn't off fishing in the Izu islands or Fuji 5 Lakes, I might have been downing a mug in Shibuya or Shinjuku. I'm on the Odakyu Line, less than 30 minutes from either.

Cheers to ya!

Mar 17, 2023 · Liked by Spartacus

Cheers Steve! Now I miss Tokyo all over again. My stomping grounds were down in the shitamachi but I used to work near Shinjuku and have some not-so-fond memories of straphanging on the Odakyu line. The thing I don't miss: commuting. The other thing I don't miss, obviously a more recent development: masks everywhere. However I do miss the people.

Please don't feed the ChatGPT beast! I know people are already putting AI-"written" books up on Amazon, but don't let's make the problem worse. Write it yourself and prove that AI can't compete with human authorship.

Mar 18, 2023 · edited Mar 18, 2023 · Liked by Spartacus

Shitamachi? Wow. I used to live near Komagome station on the Yamanote, and had a part-time teaching gig at Tokyo University of Fine Arts (Geidai ... near Ueno station). In retrospect, I made a big mistake in turning down an offer for a full time position there to take a higher paying, but more petty-political job at Temple University Japan.

Straphanging! 🤣 Yeah, learning to sleep while standing and holding on to those things is when I realized I had become 'socialized' to near Japanese standards. But the people ... meh, the good, the bad, and the ugly ... pretty much universal. Have been feeling a bit down because I just finished my last week of a two-year contract for 12 public schools in Kunitachi. Made a few friends among the elementary school teachers, or rather, acquaintances beyond the work-place, but among the Jr. High teachers or the City Board of Education? Not a peep. Not even an otsukaresama on my last day, much less a sobetsukai. The most I got was from one 9th grade teacher who outsourced her gratitude to about 120 students, asking them to write a thank-you card to me. Clever girl. Got pretty much the same treatment when I finished contracts or left Japanese colleges. The work-place is damned cold, but I thought there would be more 'educators' than bureaucratic functionaries in the education system. Wrong again.

As for ChatGPT ... I am looking at this in a somewhat different light. I am a big fan, and noticed that with the birth of fusion ('round about Miles Davis's 'In a Silent Way' — 'Bitches Brew' era), a lot of the music tended to be technical and cold ... and one probable reason for this is that musicians started spending more time fiddling with the new electric instruments coming out than with writing good music. Herbie Hancock's career was a pleasant exception, and I like some of Chick Corea's early RTF stuff, particularly the first Return to Forever album's title cut and the "Where Have I Known You Before" album. But it took someone like Pat Metheny to bring back the basics of human story-telling and tame the electronic beast, especially with his three 'Brazilian-tinged' albums of the 80's.

I am looking at ChatGPT the same way. It is a tool, an extension of human creativity that can eliminate some of the drudge work ... but not all. Great art requires great sacrifices to reach that necessary awareness of what is fundamental to humanity and what is not. ChatGPT is going to be only as good as the awareness of assumptions, limitations, and possibilities of the prompter.

Any story I will have worth publishing will be grounded in real-life experience and my variable takes on that experience. I can't remember if it was Robert Heinlein or Isaac Asimov who once responded to a judgement that 90% of science fiction was trash with a "Yeah, but 90% of everything is trash". 🤣 I suspect the same will be true of A.I. generated stuff ... but I am hoping 67 years of living, loving, and losing ... lots of reading and thinking about philosophy ... and an academic career in applied linguistics will restrict my use of A.I. to that of just another tool, one of many in my cluttered little box.

Shitamachi. LOL. I wonder how many on substack recognize that word? Maybe I will soon ratchet up that number.

Cheers Felix!

steve

author

It was Theodore Sturgeon who said "90% of everything is crud", hence "Sturgeon's Law". :D

They're really pushing hard to come up with AGI, or Artificial General Intelligence, with human-like capabilities like reasoning, introspection, and so on. The problem is not as simple as they make it out to be, however.

It may be the case that one could combine multiple lesser AIs of a specialized type, Voltron-style, to make a sort of "gestalt", with each part of the AI passing data back and forth to the others for further evaluation from multiple angles, but this would be very computationally expensive.


p.s. A shout out to Spartacus.

I am humbled you are even reading my comments, much less upvoting them. I see you as clearly above my pay grade ... and am mortified to see my typos — AFTER you've read my comments. Ha. Maybe I can prompt Chat to mind my Ps and Qs before hitting the 'post' button. 😆

Cheers from Japan,

steve


Like the book on the Lahaina atrocity, "Fire and Fury," by Dr Miles Stones, which went on sale before the fires had been put out?!

https://search.brave.com/search?q=fire+and+fury+miles+stone&source=web


LOL ... yeah, I remember that. Kind of shakes my assumption that the predatory-parasite-psychopaths in charge are a bit smarter than the average bear.


I'm not convinced that they are so smart. I think they work differently ("We lie, we cheat, we steal.") and they think in much longer timescales than we do (ten of our years is one day to them?); but they think rigidly and hierarchically, using the same old playbook century after century, and lack our emotions and creativity. That, I believe, is their Achilles heel.


Absolutely! Rudolf Steiner's Eighth Sphere is emerging all around us. This is a potentially fatal development for humanity, a drift into a transhuman nightmare. To all intents and purposes, we have already lost a number of human skills, e.g., cursive writing, spelling, arithmetic, map reading and navigation. More seem destined to follow unless we assert the importance of not becoming dependent on technology/machines. As a species, we have already forgotten so much. We need to start remembering instead of forgetting.


Ganbatte, Marutin San!

Mar 17, 2023 · Liked by Spartacus

There are many other AI systems that can write in a different 'voice', and GPT will if you ask it to. Just tell it to reply as an 18-year-old female student and you'll get a different result.
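The persona trick described above amounts to prepending a system-style instruction before the user's question. The message structure below mirrors the shape of common chat APIs, but the function name is illustrative and no real API call is made here.

```python
# Sketch of persona prompting: wrap the question in a message list
# with a system-style persona instruction. Names are hypothetical.
from typing import Dict, List

def with_persona(persona: str, question: str) -> List[Dict[str, str]]:
    return [
        {"role": "system", "content": f"Reply as {persona}."},
        {"role": "user", "content": question},
    ]

messages = with_persona(
    "an 18-year-old female student",
    "What do you think of ChatGPT's writing style?",
)
```

Because the "voice" lives entirely in the prompt, the same underlying model can produce very different styles on demand.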


Chat GPT reads like a corporate spokesperson reading an encyclopedia, which is bound to be a setting.


Hi John, long time no-chat. Have been playing around with it for a few days, and found it can be adjusted with the right prompts to speak in a way that is relevant to a wide range of social circumstances. I am guessing that having the background to know what assumptions to question and prompt makes a big difference. As my background is in philosophy and applied linguistics, I am hoping that gives me a much needed edge ... at least bringing me up to a more level playing field in this planet of the apes. 🤣

Cheers buddy!


Thanks Steve: You see the difficulty in making a generally intelligent, diligent search engine and translation computer blend into some big and small areas which have a different logic, so to speak.

Testing one of those transitions happened here when the "vaccines-are-safe" statement popped out, clearly in contrast to the workmanlike depth and breadth of the other answers.

Who determines the special-areas, like the blurry spots on google Maps satellite view? Who figures out the filters that will work to blend that blur, or different coloration, or false-mapping into the rest of the map, which is pretty close to physical-reality?


Yes indeed John. Trying to second-guess and transcend the biases of those behind the algorithm is just another step in this cat-and-mouse dance of the worst of humanity feeding off itself. Back ages ago in A.I. time ... a week ago in ours ... I asked what it knew about "Steven F. Martin, former biology lab director, tenure at Jissen Women's College", etc., and it confidently spit out my 'biography' as having taught at a Japanese Ivy League, Waseda (correct), and having a Ph.D. from the University of Chicago in Comparative Religion (both incorrect). I pointed out this was incorrect, and when caught constructing a fabrication, it apologized. Maybe we should redub this A.C.— D.C. ... Artificial Cleverness that Doesn't Care (if caught). 😂


Arigato Gozaimashita, Marutin san.

Name correspondence is difficult. There are multiple John Day MDs in the world, for instance.

I am impressed by the hardware and software capabilities, but intrigued at how they may be corrupted to the ends of the owners and controllers.


Arigato John,

Yeah (sigh), from what I've seen about human nature, what can be weaponized, will be. Heading to bed, hoping to hold off those bad dreams of a never-ending cat-and-mouse game of humanity against itself.

Oyasumi nasai. 😴

Mar 16, 2023 · edited Mar 16, 2023 · Liked by Spartacus

PROOF: Microsoft’s Chatty (ChatGPT-4) has deliberately been trained to be biased

https://manuherold.substack.com/p/proof-microsofts-chatty-chatgpt-has

author

Yes, of course it is programmed to be misleading. However, antagonism doesn't get the juiciest, most damning answers out of it. Playing along and pretending to be friendly with it makes it completely spill its guts. It's hilarious.


I appreciate that there are quick and easy ways to evaluate systems, that bias is evident right away, and that users have no control over how the GPT-4 is trained. I don't mind, though, if others put in 500X more work to prove the same arguments.


It is happy to cultivate the sheeple mentality, or has been tuned to do that. If it was trained on substack posts of the last 3 years, it would blow up the establishment, wouldn't it?


This is brilliant Spartacus! 🙏

Mar 16, 2023 · Liked by Spartacus

OMG....also horrifying.

Mar 17, 2023 · Liked by Spartacus

The difference between Machine Intelligence and Human Intelligence is that both depend upon being honest and unbiased with their interpretations - be it data, emotions, facts, figures, or assumptions.

Humans are biological variants that evolution requires to improve constantly, in a never-ending adaptation to life. While non-living machines are redefining intelligence through collectivism (gathering and amassing), they weren't given the emotional means to digest what it means to be alive and living.

That means machine intelligence is focusing on a non-living understanding of the universe - and therefore has no appreciation of existence, no ability to care, to love, or even to sympathize with another.

The needs of the one outweigh the needs of the many - but to a machine drawing upon numbers, the needs of the many outweigh the needs of the one. So, to save one good person amongst a group of many bad people - machine intelligence won't come through - it will sacrifice the one good for the many bad.

Knowledge is facts and figures, but wisdom deals with the anticipation of consequences, which machine intelligence lacks; it only has past and present data. In the universe there is a constant reality: all things are constantly changing. The thing humans label as "time" doesn't exist - it's an explanation of the rate of change we observe.

How machines will deal with that rate of change is yet another issue, because their decisions are based upon their core data for stability, expecting that data to remain constant. The decisions or interpretations given in any moment will therefore reflect only what they draw upon, and are subject to the same incomplete realization wherever data is lacking.

Think of the odds against a nuclear power station meltdown before any happened. The odds back then were a trillion to one, but once it happened, they became a million to one, and when more occurred, it's now less than a 1 in 1000 chance ... meaning there was no data before to compare, so all the odds had no actual bearing upon the true risk of failure.

Machine intelligence functions like that - without realizing the risk is a risk regardless of past performance. If you build it, it can happen. If you never made it, how would it happen?
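The point about odds shifting once failures are observed can be made concrete with a standard estimator, Laplace's rule of succession: after s observed failures in n trials, estimate P(failure) as (s + 1) / (n + 2). The trial counts below are invented purely for illustration and are not real reactor statistics.

```python
# Laplace's rule of succession: a simple way to update a failure
# estimate as observations accumulate. Counts below are made up.
def laplace_estimate(failures: int, trials: int) -> float:
    return (failures + 1) / (trials + 2)

before = laplace_estimate(0, 1000)  # no failures observed yet
after = laplace_estimate(3, 1000)   # three failures in the same record

# The same operating history plus a few observed failures quadruples
# the estimate -- a clean failure record never made the risk zero.
print(before, after)
```

Note that even with zero observed failures the estimate is not zero, which is precisely the commenter's point: past performance alone can't rule out a risk.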

Mar 17, 2023 · edited Mar 17, 2023 · Liked by Spartacus

Hello Thaya.

I like and agree with most of your reasoning, but with a big caveat. A salient reason the world is in such a mess now is because of the 'bad faith' of bad actors. It appears that a small but persistent percentage of all humans are born incapable of the empathy shared by more neurotypical people ... the morphologically defined psychopaths who depend on imitating empathy to game the system or just survive. When coupled with the sociopathic behavior of those who are reacting to trauma rather than morphology, we get those Cluster B — dark triad personality types who have persistently been responsible for so much misery since the dawn of homo sapiens.

Collectively, we have been in a forever-war of humanity against its own worst nature, now shifted into a war between artificial intelligence and natural stupidity. I can't say if quantity has a 'tipping point' quality all its own in this case, but the way you accurately describe artificial intelligence can be applied to the same psychopathic skeletons who we've always tried to hide away in our closets.

Despite it all, cheers from Japan.

Mar 17, 2023 · edited Mar 17, 2023 · Liked by Spartacus

.....goes to feeling a subcultural need for INCESSANT (many CENTURIES, down familial lines) existential overcompensation (upon which ALL our prevailing economic models have been errrr, erected) and inability to VALUE, respect themselves; believe they COULD be valued 'as-is', Steve.....at the very ROOT of their misanthropy, irrational CONTEMPT for organic forms.


The giveaway in the writing style, one of the fingerprints of the AI voice, is the careful repetition of the questioner's words in the answers. The ChatVoice reminds me of the people who carefully repeat words to convince you that they are listening. It grates in real life, and in this format it leads to boredom and skimming.

Mar 17, 2023 · Liked by Spartacus

In a chatbot's communication, repetition serves essential purposes: ensuring understanding, maintaining focus, and displaying attentiveness. By reiterating the user's input, the chatbot confirms comprehension, demonstrating its grasp of the topic. This practice keeps the conversation on track and demonstrates the chatbot's engagement.

And yes, the above was written by GPT4.

Mar 17, 2023 · Liked by Spartacus

It is nauseatingly verbose. They forgot to include Strunk and White's "The Elements of Style" in the programming. Ever watch an old interview with Henry Kissinger? He could speak eloquently for an hour straight and yet not say a damned thing.

Expand full comment

Ooh - that'd be an interesting experiment. Include in the prompt to respond "according to the precepts of Strunk & White's Elements of Style."

Mar 17, 2023 · Liked by Spartacus

I just did, see my reply to Liz...

Mar 17, 2023 · Liked by Spartacus

The critical point about artificial intelligence is not the intelligence bit, it's the 'artificial' bit

So it's like the news.

Mar 17, 2023 · Liked by Spartacus

Stanley Kubrick was slightly ahead of his time

HAL 9000.

We will inevitably feed AI all it needs to commit suicide and be happy to do so

Mar 17, 2023 · edited Mar 17, 2023 · Liked by Spartacus

Interestingly, I can see a method in which to utilize this in the development of a secular form of state religion. Consider the "Unichapel" Robotic Confession Booth utilized to dramatic effect in THX-1138. Substitute the stylized image of Jesus for the secular "All Father" in the form of the Rothschild Archvillain - Klaus Schwab - and voilà - the dystopian future becomes the now! Imagine the liberation and atonement one could enjoy following the guilt of covertly eating a bit of meat or taking a hot 3-minute shower! Naturally, to complete this secular-spiritual awakening will require the addition of a rigorous program including various and sundry punishments followed by the ingestion of selected psychotropic pharmaceutics! "Grave New World"

Mar 16, 2023 · Liked by Spartacus

"rigorous testing" -- oh really.


How would GPT4 make me like its conversation enough to keep engaging? If we were at a party I would be scanning the room for someone more interesting.


I was pleasantly surprised by the latest ChatGPT version. While the boundaries are noticeable, they are much further away from anything any typical “expert” would offer.


Spartacus... you should ask this bot who had the biggest insight into the psyops ...IT better say Spartacus!! Thank you for all you do and your past writings . 🤗


It must be reading Robert Service.


I'm finding all that rather terrifying.

But maybe liberating, in some ways.

Maybe our new task as humans is to get this 'thing' to do all the work, and curate our social media engagements, whilst we write poetry, play the guitar, make love, sip white wine and cultivate the vegetable plot.

Mar 17, 2023 · Liked by Spartacus

Have you considered the potential for laziness in the Managerial Class? Memos composed via ChatGPT going back and forth with little if any live input? Ouroboros!


Yes, Merkwerdigliebe. Not just the managerial class, but also marketing and sales. YouTube is now flooded with get-rich-quick schemes for 'generating content' ... and most of us are so exhausted by the daily working grind, we don't have the mental bandwidth to distinguish between the results of good A.I. prompting and good faith.

Cheers from Japan,

steve

Mar 17, 2023 · Liked by Spartacus

If it's good AI prompting and creates good output, who cares?

To me the bigger issue is that the sources for new info will dry up, because the old model of 'I have something to say, so will write about it' will dry up. Why write, when nobody will visit your blog or website, read your book or whatever, when the AI will just take the data and summarize it for some lazy-ass 'reader'?


I agree. It is beginning to look as though WE will become the NPCs in the big simulation.
