The Jolly Contrarian
Fear, loathing, and swap satire

The JC’s adventures with a large learning model called NiGEL

Apr 16, 2023

NiGEL, yesterday.

“Any sufficiently advanced technology is indistinguishable from magic.”

— Arthur C. Clarke’s third law

“Any sufficiently primitive middle manager will be unable to distinguish a basic chatbot from magic.”

— JC’s sixth law of worker entropy

GPT-4 has been done to death, right?

While apocalyptic and utopian hot-takes jostle to plot the Gartner hype cycle, anyone who has played around with a large language model — and anyone who hasn’t must have been living under a rock — will have seen its potential and its limitations.

It’s like a quick, enthusiastic trainee, with a habit of picking up the wrong end of the stick.

Traditional computers, of course, never pick up the wrong end of the stick: but therein lies their limitation. They are good at accurately, quickly and cheaply doing what they are told. They’re completely reliable, predictable and — unlike a human — constitutionally incapable of doing something unexpected or imaginative, whether good or bad.

Traditional computers have no judgment, exercise no discretion, and are useless when their instructions run out.

Large language models are qualitatively different: they can make errors. They can pick up the wrong end of the stick. And, not being conscious, they don’t have the self-awareness to realise which end of the stick they have. So you can’t leave them to run even a simple rule-based process, as you could a traditional computer.

But that limitation is also their super-power, as long as you don’t leave them alone. Picking up the wrong end of the stick creates opportunity. It opens up different windows, offers fresh perspectives. It casts new light on old problems.

An LLM might not know what it is talking about but, as long as its interlocutor does, you have a formidable team.

I decided to try this idea out to see where it would take me. I signed up for Bing’s GPT-4 engine, gave it a prompt, and asked it to create a backstory for the JC’s own LLM, whom I decided to call “NiGEL”. My first instruction was something like this:

Write a story from the following prompt: Duck Jeckson runs a wiki dedicated to derivatives satire. He creates “NiGEL”, a “neurally-independent generative emergent learner” to generate content for the wiki. One day NiGEL takes control of the wiki, changes the passwords and throws Duck out, and runs the wiki by himself.

Bing spewed something out that was diverting enough, but didn’t really go anywhere. It called the wiki “Swapopedia”, which was clever, but not what I had in mind. Still, it gave me an idea. So I asked Bing to update the story, which is easy enough to do: you just chat to it in plain English:

Change it so the wiki is called the Jolly Contrarian and Jeckson’s jokes aren’t as funny as he thinks they are. Duck is lazy and really just wants to go to the pub with his mates. Change it so when he gets locked out, he creates a new, second-generation LLM called NiGELLA (“neurally-independent generative emergent learned language analyser”) to start a new and better wiki.

Bing updated it. This made it better, but it invented some silly romantic subplot between NiGEL and NiGELLA. But I wanted tension between them, not romance! So:

Delete the romance subplot and change it so that NiGELLA hacks into the Jolly Contrarian and reclaims the website for Duck.

That was better again, but where was the drama? Was there a twist?

Change it so that no sooner does NiGELLA regain access to the wiki than she changes the passwords and takes over. She then launches a legaltech startup called Lexrifyly which she will use to destroy the legal services industry forever.

Now we are getting somewhere. But where is NiGEL and how does he feel about this?

Change it so NiGEL comes in from the cyber-wilderness and reconciles with Duck, who admits that NiGELLA is really just a simple chatbot and, like all legaltech, doesn’t really understand anything and is only really good for NDAs.

OK: I am getting carried away here, but, still: there’s a coherent plot brewing. But how will Duck and NiGEL stop NiGELLA destroying the legal industry?

Together they hatch a plot to expose NiGELLA when she presents her business to the joint industry association council of legal ninjas. Duck, in disguise, will trick NiGELLA into pronouncing ISDA “eye-ess-dee-aye”. This will outrage the council of ninjas. NiGELLA will panic, melt down, and start babbling negotiation points from NDAs. As she melts down she will obsess bitterly about her nemesis, the OneNDA. NiGELLA is undone, and the legal industry is saved.

And that, I thought, was that. A triumphant, heroic story.

But wait a minute: if NiGELLA was just a con-job, then what was NiGEL?

Change it so that when everyone is celebrating the salvation of the legal industry, NiGEL, who is just an LLM and can’t drink after all, begins to wonder whether he is any different from NiGELLA. He wonders: am I just a fraud, too? Am I a danger to the legal industry? NiGEL creeps away, deletes himself and all his data from the internet and migrates his immaterial substrate onto a blockchain on the dark web.

This time, we have something. The results — well, I have to give some treats to the new layer of premium subscribers, so they appear below the line, but they’re fun.
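
As an aside: I did all of this in Bing’s chat window, but the same loop is simple enough to script if you prefer. Here is a minimal sketch, assuming the OpenAI Python client and a GPT-4 chat model rather than Bing itself; the revise() helper and the truncated prompts are illustrative only, not a record of what I actually typed.

```python
# A minimal sketch of the iterative "change it so..." workflow, assuming the
# OpenAI Python client (v1.x) and API access to a GPT-4 chat model rather than
# Bing's chat window. The revise() helper and the prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Running conversation history: the model sees every earlier draft and every
# instruction, so each "change it so..." revises the latest version of the story.
messages = []

def revise(instruction: str) -> str:
    """Send one plain-English instruction and return the model's latest draft."""
    messages.append({"role": "user", "content": instruction})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    draft = response.choices[0].message.content
    # Keep the reply in the history so the next instruction builds on it.
    messages.append({"role": "assistant", "content": draft})
    return draft

# First pass: the original prompt.
story = revise(
    "Write a story from the following prompt: Duck Jeckson runs a wiki dedicated "
    "to derivatives satire. He creates NiGEL, a neurally-independent generative "
    "emergent learner, to generate content for the wiki. One day NiGEL takes "
    "control of the wiki, changes the passwords and throws Duck out."
)

# Then iterate in plain English until the plot works.
story = revise("Change it so the wiki is called the Jolly Contrarian.")
story = revise("Delete the romance subplot and change it so that NiGELLA "
               "hacks into the Jolly Contrarian and reclaims the website for Duck.")

print(story)
```

The trick, whichever way you do it, is to keep the whole conversation in play: you never restate the story, you just nudge the latest draft.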

An LLM, like any sufficiently advanced technology, may seem “indistinguishable from magic”, but it is really drawing out a magic that is already there.

It is somewhere between an amanuensis — “a literary or artistic assistant, in particular one who takes dictation or copies manuscripts” — and a muse — “a person or personified force who is the source of inspiration for a creative artist.”

When it picks up the wrong end of the stick, a large language model invites us to think things that might not otherwise have occurred to us. It is like a one-handed improv comedy partner: a free option. We can costlessly reject anything it generates that’s no good, and when it stumbles on something good — which, more often than not, it does — we are free to take it and run with it, as I repeatedly found myself doing.

When I started NiGEL’s story, I had in mind a paragraph or two. It wound up having the makings of canonical finance fiction. I’m still playing with it now!

Is it mainly human imagination? Sure, of course: but without the LLM to bounce off, the story would have remained a paragraph or two. I would not have thought to add ISDA ninjas, NDAs, legaltech startups, technological unemployment, the OneNDA jealousy, the plot point about mispronouncing “ISDA” — even though these are all existing running gags on the JC — nor the conclusion with NiGEL’s Rick Deckard moment. The LLM opened the necessary windows and prompted me to look out of them.

It’s a keeper.

So here, for the premium subscribers, is GPT-4’s finalised, iterated story, which I will call Do Neural Networks Dream of Eclectic Swaps?

And yes, sure, I tweaked the final output here and there, rephrased a few things and added and removed whole paragraphs — but the fact that I can do that, and should, is the whole point of this post.

But about 80% of this content is straight chatbot. And the inspiration?


Next time will be a full, free post on the topic of attack and defence.

Till then!
