

A surprising change in OpenAI's tools has caught the attention of developers and researchers. The company instructed its coding assistant, Codex, to avoid talking about goblins, gremlins, raccoons, and other unusual creatures unless the topic clearly demands it.
The instruction sounds humorous at first, but the reasoning behind it is serious. Engineers noticed that the model sometimes inserted these words into ordinary coding conversations, which created confusion and raised questions about reliability. The update is part of a broader effort to make advanced AI models safer and more accurate.
The goblin behavior appears to be tied to the model itself rather than to external prompts. Early analysis suggests the quirk was introduced into GPT-5.5 during training. Users have reported a higher frequency of outputs containing words like ‘goblin,’ ‘gremlin,’ and ‘troll’ in the newer model than in earlier versions.
This pattern suggested the issue was not random. It likely came from training data or tuning decisions made during development. That realization pushed engineers to act quickly and add safeguards.
Internal system prompts from the AI giant reveal that the company explicitly instructed Codex to avoid mythical references. The note reads, “Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query.” The rule appears multiple times in the system instructions; the repetition suggests the engineers wanted to be certain the model followed the guideline.
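To make that repetition concrete, here is a minimal Python sketch of how a team might restate a single safety rule at several points in a system prompt so it keeps being reinforced in long contexts. Only the rule text is quoted from the reported instruction; the build_system_prompt helper and the section strings are hypothetical, not OpenAI's actual code.

```python
# Hypothetical sketch: restating one safety rule at several points in a
# system prompt. Only the rule text comes from the reported instruction;
# the helper and section strings are illustrative, not OpenAI's code.

CREATURE_RULE = (
    "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, "
    "or other animals or creatures unless it is absolutely and unambiguously "
    "relevant to the user's query."
)

def build_system_prompt(sections: list[str]) -> str:
    """Interleave the rule after each prompt section so it appears repeatedly."""
    parts: list[str] = []
    for section in sections:
        parts.append(section)
        parts.append(CREATURE_RULE)  # restate the rule to reinforce it
    return "\n\n".join(parts)

if __name__ == "__main__":
    prompt = build_system_prompt([
        "You are Codex, a coding assistant.",
        "Prefer small, reviewable diffs.",
    ])
    print(prompt)
```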
The unusual behavior became prominent when a Google staff member shared a chat with GPT-5.5-powered Openclaw agents. The log showed the tool using the word ‘goblin’ multiple times a day, even as a stand-in for vague words like ‘thingy.’ The pattern made the responses read as odd and unprofessional.
What began as a minor oddity gradually became a real problem. To patch it, engineers added the prohibition to the instructions of the Codex command-line interface. The rule appears there more than once, a sign that developers don’t trust a single instruction to control the AI’s behavior.
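The article does not describe OpenAI's safeguards beyond the repeated prompt rule, but that same distrust of single instructions is why teams often layer a post-hoc output check on top. Below is a hypothetical Python sketch of such a guardrail: it scans a response for the banned creature words and flags it for regeneration or review. The word list mirrors the quoted rule; the function name and flow are assumptions, not OpenAI's implementation.

```python
import re

# Hypothetical output-side guardrail (not OpenAI's code): catch banned
# creature words that slip past the prompt-level rule.
BANNED_CREATURES = ["goblin", "gremlin", "raccoon", "troll", "ogre", "pigeon"]
BANNED_PATTERN = re.compile(
    r"\b(" + "|".join(BANNED_CREATURES) + r")s?\b", re.IGNORECASE
)

def contains_banned_creature(text: str) -> bool:
    """Return True if the model output mentions a banned creature word."""
    return BANNED_PATTERN.search(text) is not None

if __name__ == "__main__":
    response = "Renamed the goblin in the config loader to `settings_parser`."
    if contains_banned_creature(response):
        print("Flagged: regenerate or send for human review.")
```

A regex check like this is crude, but it acts as a backstop: anything the prompt rule misses is still caught before the response reaches the user.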
The incident quickly spread across social platforms. OpenAI CEO Sam Altman joined the discussion by posting a screenshot of a chat message captioned, “Start training GPT-6, you can have the whole cluster. Extra goblins.”
The humor drew attention, but engineers stressed that the issue was real. An OpenAI Codex developer later clarified that the creature-related behavior was not a marketing stunt but an unexpected side effect of training a powerful model. As AI tools move deeper into workplaces, even small glitches can carry serious consequences.