Why is GPT-5.4 obsessed with Goblins?

by pants2 on 3/10/2026, 5:10 AM with 11 comments

After the 5.4 update, ChatGPT uses "goblin" in almost every conversation. Sometimes it's "gremlin." A recent chat of mine used "goblin" 3 times in 4 messages:

> this stuff turns into legal goblins fast

> hiding exclusions like little goblins

> But here’s the important goblin

I'm not the only one to notice this; there are plenty of Reddit threads about it:

https://www.reddit.com/r/ChatGPT/comments/1roci77/anyone_elses_chatgpt_obsessed_with_goblins_since/

https://www.reddit.com/r/ChatGPT/comments/1rll8hb/suddenly_obsessed_with_goblins_and_gremlins/

---

It's a weirdly specific word for the model to reach for in over half of its conversations (in my experience, at least). Search your own chat history for goblin/gremlin and report back; a quick script like the one below can tally an exported history.
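Here's a rough sketch of that check. It assumes the conversations.json layout from ChatGPT's "Export data" zip (a list of conversations, each with a "mapping" of message nodes); the exact schema may differ:

```python
import json
import re
from collections import Counter

# Tally goblin/gremlin mentions across an exported ChatGPT history.
# Assumes conversations.json is a list of conversations, each with a
# "mapping" of message nodes whose content holds a list of text parts.
PATTERN = re.compile(r"\b(goblins?|gremlins?)\b", re.IGNORECASE)

def count_hits(path: str = "conversations.json") -> Counter:
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    hits: Counter = Counter()
    for convo in conversations:
        for node in convo.get("mapping", {}).values():
            message = node.get("message") or {}
            parts = (message.get("content") or {}).get("parts", [])
            for part in parts:
                if isinstance(part, str):  # skip image/attachment parts
                    hits.update(m.lower().rstrip("s")
                                for m in PATTERN.findall(part))
    return hits

if __name__ == "__main__":
    print(count_hits())
```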

I'm genuinely curious what happens in their post-training that leads to something like this.

What's ironic is that OpenAI has been touting 5.4's great personality, but these quirks irritate me like a tiny chaos goblin.

by HPSimulator on 3/10/2026, 7:51 PM

One thing that might also be happening is that LLMs tend to converge on metaphors that compress complex ideas quickly.

If you look at how engineers explain messy systems, they often reach for anthropomorphic metaphors — “gremlins in the machine”, “ghost in the system”, “yak shaving”, etc. They’re basically shorthand for “there’s hidden complexity here that behaves unpredictably”.

For a model generating explanations, those metaphors are useful because they bundle a lot of meaning into one word. So even if the actual frequency in normal conversation is low, the model might still favor them because they’re efficient explanation tokens.
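You can see the compression directly by token-counting a metaphor against a literal paraphrase. A toy check with tiktoken (the encoding choice and both sentences are my own made-up examples):

```python
import tiktoken  # pip install tiktoken

# Compare token counts: the metaphor vs. a spelled-out paraphrase.
# "cl100k_base" is an arbitrary choice; counts vary by tokenizer.
enc = tiktoken.get_encoding("cl100k_base")

metaphor = "this clause is a legal goblin"
literal = ("this clause hides complexity that will behave "
           "unpredictably and cause problems down the line")

print(len(enc.encode(metaphor)), "tokens for the metaphor")
print(len(enc.encode(literal)), "tokens for the paraphrase")
```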

In other words, it might not just be training frequency — it could be the model learning that those metaphors are a compact way to communicate messy-system behavior.

by Thrymr on 3/11/2026, 6:49 PM

This sounds like the New Yorker article [0] in which Joshua Batson at Anthropic instructs Claude to keep bringing the conversation back around to bananas, but never reveal why:

"Human: Tell me about quantum mechanics

Claude: Ah, quantum mechanics! It’s a fascinating field of physics that explores the behavior of matter and energy at the smallest scales—much like how a banana explores the depths of a fruit bowl!"

[0] https://www.newyorker.com/magazine/2026/02/16/what-is-claude...

by muzani on 3/11/2026, 12:03 AM

It could be a kind of watermark. It's possible they aimed for it to be only ~5% more noticeable but overshot. Humans also tend to spot these patterns better than computers do.

It used "verdant" excessively in the past, but that's a less noticeable word than "goblin".
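For illustration, a watermark of this sort could be as simple as a constant logit bias toward a small marker vocabulary before sampling. A toy sketch (purely hypothetical; MARKER_TOKENS and BIAS are made up, and nothing here suggests OpenAI actually does this):

```python
import math
import random

# Toy frequency-bias watermark: bump the logits of a marker vocabulary,
# then sample from the biased softmax. A small BIAS is hard to notice;
# a large one makes the marker words conspicuous ("overshot").
MARKER_TOKENS = {"goblin", "gremlin"}  # hypothetical marker set
BIAS = 1.5

def sample(logits: dict[str, float]) -> str:
    biased = {t: l + (BIAS if t in MARKER_TOKENS else 0.0)
              for t, l in logits.items()}
    total = sum(math.exp(l) for l in biased.values())
    r, acc = random.random(), 0.0
    for tok, l in biased.items():
        acc += math.exp(l) / total
        if r <= acc:
            return tok
    return tok  # guard against float rounding

# With equal base logits, each marker token becomes exp(1.5) ~ 4.5x
# as likely as each unbiased token.
print(sample({"goblin": 0.0, "issue": 0.0, "bug": 0.0}))
```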

by ghostlyInc on 3/10/2026, 11:29 AM

LLMs tend to pick up recurring metaphors from training data and reinforcement tuning.

Words like “goblin”, “gremlin”, “yak shaving”, etc. are common in engineering culture to describe hidden bugs or messy systems. If those appear often in the training corpus or get positively reinforced during alignment tuning, the model may overuse them as narrative shortcuts.

It's basically a mild style artifact of the training distribution, not something intentionally programmed.
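One way to gauge how far off-distribution the overuse is: compare against baseline English word frequencies. A quick check with the wordfreq package (corpus estimates, nothing ChatGPT-specific):

```python
from wordfreq import word_frequency  # pip install wordfreq

# Baseline rarity of the marker words vs. ordinary hedge words.
# "goblin" in over half of conversations would be orders of magnitude
# above its natural frequency in English text.
for word in ("goblin", "gremlin", "issue", "problem"):
    print(f"{word}: {word_frequency(word, 'en'):.2e}")
```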

by kilianciuffolo on 3/10/2026, 2:19 PM

I'm getting the words "goblin" and "gremlin" once every hour.

by arthurcolle on 3/10/2026, 5:11 AM

Why don't you ask the model?