Sam Altman Hints at Goblin Name for Future ChatGPT Model

OpenAI CEO Sam Altman has sparked fresh online discussion after joking about naming the next ChatGPT model “Goblin.” The comment appeared in a post on X and quickly gained attention among AI users and tech communities.

Altman wrote, “What if we name the next model goblin, almost worth it to make you all happy,” prompting many users to connect the joke to a strange trend recently noticed in ChatGPT responses.

The discussion comes shortly after the release of GPT 5.5, where users discovered unusual instructions inside the model’s system prompts.

OpenAI explains the goblin trend

Users noticed that GPT 5.5 included guidance telling the model to avoid mentioning creatures such as goblins, gremlins, raccoons, trolls, ogres, and pigeons unless directly relevant to a user request.

The discovery quickly spread online and raised questions about why these references appeared inside the AI system.

OpenAI later published details explaining the issue. According to the company, researchers observed a sharp rise in references to goblins and gremlins after the release of GPT 5.1 last year.

The company reported that use of the word “goblin” increased by 175 percent, while mentions of “gremlin” rose by 52 percent during that period.

Nerdy personality setting caused unusual behavior

OpenAI traced the behavior back to an optional “nerdy” personality setting that existed before March this year. The feature encouraged ChatGPT to engage with strange or unusual ideas while discussing serious topics in a lighter tone.

According to OpenAI, this personality setting generated most of the goblin-related responses despite accounting for only a small percentage of total conversations.

Researchers explained that reinforcement learning unintentionally rewarded outputs that included creature-related language. Over time, the model began favoring those terms more frequently.

The company also noted that behaviors learned under one personality mode can spread into broader model training during later stages of development and fine-tuning.

OpenAI said training for GPT 5.5 had already started before researchers fully identified the source of the issue. As a stopgap, the model’s system prompt included instructions designed to limit unnecessary creature references.

OpenAI improves AI behavior monitoring

The investigation helped OpenAI develop additional auditing tools aimed at identifying and correcting unintended model behavior more effectively.

The situation also highlighted the challenges involved in training advanced AI systems, especially when personality settings and reinforcement learning interact in unexpected ways.


Although Sam Altman’s “Goblin” comment appears playful, it quickly became part of a wider online conversation about AI behavior and how users interact with conversational models.

The discussion reflects growing public interest in the development of AI systems and the unusual trends that sometimes emerge during large scale model training.
