How a Made-Up Word Taught Me That LLMs Are More Creative Than We Think
My curiosity led me to `unchartered` territories.

When I first dug into how large language models (LLMs) work, I ran into a recurring explanation: at their core, LLMs are just dealing with tokens.
You type something in, the model breaks your text into tokens, crunches probabilities, and spits out more tokens.
Simple. Mechanical. Almost… boring.
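That loop can be sketched in miniature. Everything below is invented for illustration: a four-"token" vocabulary and hand-picked probabilities, where a real model learns its probabilities from data over tens of thousands of tokens. But the rhythm is the same: look at the last token, sample the next one, repeat.

```python
import random

# Toy sketch only: this vocabulary and these probabilities are made up.
# A real LLM learns next-token probabilities from data, over a vocabulary
# of tens of thousands of tokens, but the generate loop looks like this.
VOCAB = ["fish", "swim", "deep", "<end>"]
NEXT_PROBS = {
    "fish": [0.10, 0.60, 0.20, 0.10],  # after "fish", "swim" is most likely
    "swim": [0.20, 0.10, 0.50, 0.20],
    "deep": [0.30, 0.20, 0.10, 0.40],
}

def generate(start_token, max_tokens=6, seed=42):
    """Repeatedly sample the next token until <end> or the length cap."""
    rng = random.Random(seed)
    tokens = [start_token]
    for _ in range(max_tokens):
        nxt = rng.choices(VOCAB, weights=NEXT_PROBS[tokens[-1]])[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate("fish")))
```

Crunch probabilities, emit tokens, stop. That really is the whole mechanical story.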
And naturally my skeptical brain went: "Okay, so what happens if you throw something at it that doesn't exist in its whole token universe?"
So I decided to run an experiment. One word. Pure nonsense. Completely new.
And that's how Brectha was born.
The Experiment
I coined "Brectha" and gave it a meaning:
"To breathe underwater."
A word no one has seen. Not in any dictionary, not in Tolkien, not in gaming lore. My expectation? The model would stumble. Maybe spit out, "I don't know that word," or worse, treat it as gibberish.
Instead, what happened actually shocked me.
What the AI Did with "Brectha"
Not only did the LLM take my word seriously, it… well, ran with it.
It created clean example sentences.
Built me a dictionary-style entry (part of speech, pronunciation, usage, even related forms like brecthing and brecthed).
Imagined how the word could enter pop culture: in sci-fi novels, VR games, meditation retreats, even marketing slogans.
Proposed a roadmap for how "Brectha" might weave itself into human language over the next decade.
What started as my scratch test suddenly looked like the early life of a real word. Here is a dictionary-style entry that Perplexity helped me craft:

Wait, How Is This Possible If It Only Knows Tokens?
Here's the cool part.
The LLM had never seen "Brectha" before.
Zero training examples.
But they have seen endless patterns of:
how words like breathe/breath behave,
how suffixes like -a or -tha often feel,
how definitions, examples, and cultural usage typically look.
So, when I dropped "Brectha" into the conversation, the model didn't need a historical definition. It built one on the fly using learned patterns.
That's the magic:
Tokens aren't limits. They're Lego bricks.
And the model has learned a million ways to snap them together into new shapes.
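That Lego-brick effect can be sketched with a toy example. The vocabulary and greedy longest-match segmenter below are invented for illustration; real tokenizers (BPE, WordPiece) learn their sub-word pieces from huge corpora. But the mechanism is the same: a word the system has never seen still decomposes into pieces it knows very well.

```python
# Illustrative sketch only: TOY_VOCAB is invented. Real tokenizers (BPE,
# WordPiece) learn their sub-word pieces from data, but the effect is
# similar: a brand-new word still splits into familiar fragments.
TOY_VOCAB = {"breath", "brec", "bre", "tha", "th",
             "a", "b", "c", "e", "h", "r", "t"}

def segment(word, vocab):
    """Greedy longest-match split of a word into known sub-word pieces."""
    word = word.lower()
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest candidate first
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # unknown character: fall back to itself
            i += 1
    return pieces

print(segment("Brectha", TOY_VOCAB))  # → ['brec', 'tha']
```

Nothing in that vocabulary is "Brectha," yet "Brectha" comes out covered. The model never faces a truly unknown word, only new arrangements of known bricks.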
The Possible Journey of "Brectha"
At this point, I thought: What if "Brectha" really took off? What would its journey look like?
The model even helped sketch a whimsical timeline:
2025
- Word is coined in an experiment ("Brectha" = to breathe underwater).
- Shared as a fun blog post.
- Early adopters chuckle and try using it in private conversations.
2026
- Sci-fi short stories and indie games pick it up.
- Diving enthusiasts start creating memes: "Learn to Brectha!"
2028
- A major VR game introduces "Brectha" as a special underwater ability.
- The word starts trending in gaming communities.
2030
- Meditation apps use "Brectha" metaphorically: "Brectha through stress."
- Influencers casually drop it into lifestyle and wellness content.
2032
- The word gets its first unofficial listing in online slang dictionaries.
- Pop culture adopts it: interviews, TV shows, and late-night comedy skits.
2035
- Language authorities recognize it in official dictionaries.
- Used both literally (new tech gadgets that help you brectha underwater) and figuratively ("Just brectha and relax").
Suddenly, a nonsense test word felt like it had a future… a simulated life cycle, a cultural footprint.
My "Aha!" Moment
Reading about LLMs, you'd think they're bound to whatever tokens live in their fixed vocabulary. Talking to them is a completely different experience.
They don't just repeat… they generalize.
They don't just memorize… they improvise.
They don't just accept old words… they welcome new ones.
That's why "Brectha" worked. The model didn't "know" it, but it could make sense of it instantly and let it grow into something bigger than just a test word.
And to me, that was pleasantly surprising. Like realizing your calculator not only solves your equation, but also writes you a story about where that number might live in the real world.
The Bigger Learning
This experiment taught me something valuable:
Yes, LLMs are statistical systems at the core.
But what emerges is a tool that feels creative, adaptive, even playful.
The token system that feels like a straitjacket in theory is a powerful foundation for endless recombination.
Because language itself is alive. We've always invented words… selfie, emoji, google it. And now, interestingly enough, LLMs are companions in playing this game of invention.
Wrapping Up
So here's my reflection: when you hear that "LLMs are only token machines," don't write them off. Go test them.
Coin your own word. See what happens.
I invented Brectha, expecting failure. Instead, I got a mini-dictionary, cultural lore, and even a sneak peek at its possible future in society. That felt less like rigid token math and more like co-creating with a language partner.
Who knows? Maybe one day we'll really need a word like "Brectha." And when that moment comes, we already gave it its first breath.
Like what you see?
Subscribe to my newsletter for curious minds at https://when.substack.com to stay in the loop whenever I write future articles.



