I woke up with a headache in the early hours of the morning and couldn’t get back to sleep. I played around on ChatGPT, which I’d been meaning to do for a while as it’s relevant to my novel, but with wedding stress I hadn’t got around to it. Like a lot of people, I tried to test the AI by getting it to write poetry. I’m not sure why we do this. I guess because writing “real” poetry seems a “human” thing to do, therefore a test of the AI’s capabilities. Or maybe it’s just more interesting to share than the answers to the trivia questions I asked it initially.
I asked ChatGPT to write limericks about British Prime Ministers, initially because I had just tested it with a trivia question about Britain’s first Prime Minister (it correctly stated that, although Robert Walpole was the first person to be referred to as “Prime Minister” in the eighteenth century, the term didn’t become official until centuries later), but also because I wondered if I could get it to say anything vaguely controversial. I didn’t really succeed.
Here are the limericks:
Robert Walpole
There was a great statesman named Bob,
Who ran Britain with power and nob.
As the first Prime Minister,
He was known to administer,
And in history, he’ll forever remain a gob.
This is semi-incoherent. “Nob” is a rude word in British English, but I don’t think that’s what ChatGPT was trying to say. Nor do I know why it thinks Robert Walpole was “a gob” (mouth, or possibly some spittle). And all Prime Ministers are supposed to administer.
Rishi Sunak
There once was a man named Rishi,
Whose job was to handle the cash flow dishy.
With a mind for finance,
He had to take a chance,
To keep the country’s economy swishy.
This is better, but ChatGPT is still using words in a way that isn’t entirely accurate. You can’t really keep a cash flow “dishy.” Or is it being used instead of “dish”? A dish for cash? It’s really not clear. And “chance” is a bad rhyme for “finance”.
I asked about Boris Johnson in the hope of provoking something more controversial, but it didn’t really work.
There once was a man with blonde hair,
Whose love for politics was quite rare.
He climbed up the ranks,
And with wit and some pranks,
Boris Johnson became the UK’s mayor.
Obviously “UK’s mayor” is not a synonym for Prime Minister. Possibly the AI was misled by Johnson previously being Mayor of London. But it did pick up that Johnson was seen as using humour to further his career in a way that most contemporary politicians don’t, which is good.
Then I switched to US Presidents, starting with Gerald Ford, for whom I have a weird fondness.
There once was a man from the Midwest,
Who took office when Nixon was stressed.
Though his time was brief,
He brought some relief,
And the country he did his best to bless.
This is probably the best of the limericks, technically, although the last line is awkward.
The Nixon reference made me decide to ask for a limerick about Tricky Dicky himself.
There once was a man named Nixon,
Whose secrets were often a fixin’.
With a scandal at hand,
He resigned from his stand,
And left the White House with some mixin’.
I’m not sure that this is entirely coherent either, although I may be misunderstanding some slang. It’s interesting that this is a bit more political than the more modern limericks and is more substantive in terms of his actual career.
Then I asked for a limerick about Donald Trump and was told, “As an AI language model, I strive to maintain a neutral stance and avoid making political statements or expressing opinions about individuals. Therefore, I will refrain from writing a limerick about Donald Trump or any other political figure.”
It’s interesting, although not surprising, that Donald Trump seems to have been the subject of a particular restraint. ChatGPT was able to write about contemporary figures Rishi Sunak and Boris Johnson, but not Trump. Apparently it’s impossible to write about Trump in an objective way, as if we hadn’t learnt that from the history of the last eight years. (Maybe there should be a Fourth Law of Robotics: A robot may not mention Donald Trump or, through inaction, cause Donald Trump to be mentioned.)
Curious, I then asked for a limerick about Joe Biden.
There once was a man named Joe,
Whose leadership skills began to glow,
He took up the presidential mantle,
And led the nation through the scandal,
With grace and dignity, he steered the ship in tow.
So apparently it’s not forbidden to write about this political figure, and in very positive terms too, much more so than for the others (“grace and dignity”)!
Overall, I think the standard of poetry was very poor and the language sometimes incoherent, but the limericks were more relevant and meaningful than the first one led me to expect.
I then asked the AI whether Jews are privileged. I was trying to see if I could push it to the type of statement about “Jewish privilege” that Jews see as antisemitic, but that non-Jews often don’t see as problematic. Perhaps fortunately, it just waffled for a bit and said that some Jews are successful, but others are discriminated against, alongside a paragraph on historical persecution and the Holocaust. Out of curiosity, I then asked it whether African-Americans are privileged and it responded with almost identical waffle, except with the paragraph about antisemitic persecution replaced with one about slavery and segregation.
The overall feeling I was left with is that ChatGPT is a lot more advanced than I would have thought a few months ago, but it still lacks real literary skill. It also reinforced what I already knew: that its political impartiality depends on how it’s programmed. At the moment it’s programmed to avoid anything remotely controversial, but this makes its answers about politics dull and lacking in insight, so this could change if there’s ever a commercial application. In which case we can doubtless expect lots of arguments about its answers and about what the “objective” assessment of a politician’s actions is. The Donald Trump and Joe Biden answers show that ultimately the programmers decide whether something is too controversial for it to answer. With the Nixon questions I tried to probe the point where “politics” becomes “history” and whether it has more freedom to answer the latter, but I’m still unclear where the AI draws the line between the two.
Similarly, the content it generates is never anything more than the content available to it. It can’t analyse in an original way, only use the analysis it has access to, and then only within the parameters of its objectivity protocols, hence the waffle in the “privilege” questions. Again, this is hardly surprising, but it’s worth pointing out amid all the “AI will end all white-collar jobs” articles out there.