— by Chenuli Jayasinghe
Glitch is a text-generation model shaped after one ordinary person living an ordinary life in America… and that ordinariness is the whole point.

Glitch is an LLM[1] that isn’t chasing perfection or polish. It’s trying to hold memory, doubt, impulses, half-formed thoughts, and contradictions without sanding them off.
Glitch didn’t come from a lab, a research grant, or a team of PhDs polishing a benchmark chart. It came from a pretty simple question: what happens if you stop trying to build a “smart” AI and instead try to build a human one?
Glitch is an ongoing attempt to build an AI that feels closer to an actual person— not polished, not optimized, just human in the messy, everyday sense. It is biased, a bit cultural, and even dumb in some cases, because the human behind it— whose data were used in training— is a normal, real person who lives in New York.

Version 1 is already live[2]. It’s clumsy, inconsistent, and honestly pretty bad at a lot of things. But it’s the baseline: the “untrained brain” of Glitch.

Version 2 is coming soon, with sharper reasoning, clearer language, and a more coherent personality. Glitch isn’t trying to be the smartest model in the room— rather the opposite. It’s trying to act like someone you could actually talk to: flawed, reactive, opinionated, and unmistakably itself.
[1] Glitch v1 is not a full LLM. It’s a LoRA adapter stack on Llama-3-8B that defines the personality layer. The “human side” lives in the adapters; the base model provides the language and reasoning abilities.

[2] Available as GGUF on Hugging Face; click here to view.
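The adapter/base split in footnote [1] follows the standard LoRA formulation: the frozen base weights stay untouched, and each adapted layer adds a trained low-rank update on top. A toy NumPy sketch of that idea (all shapes, names, and values here are illustrative, not taken from Glitch itself):

```python
import numpy as np

# Minimal sketch of how a LoRA adapter modifies a frozen base weight.
# Real Llama-3-8B layers are far larger; these shapes are toy-sized.
d, r = 8, 2                      # hidden size, LoRA rank (r << d)
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))      # frozen base weight ("language and reasoning")
A = rng.normal(size=(r, d))      # trained low-rank factor (the adapter side)
B = np.zeros((d, r))             # B starts at zero, so the adapter is a no-op at init
alpha = 16                       # LoRA scaling hyperparameter

# Effective weight at inference: base plus scaled low-rank update.
W_eff = W + (alpha / r) * (B @ A)

# With B still zero, the adapter changes nothing yet:
assert np.allclose(W_eff, W)
```

Training only ever updates `A` and `B`, which is why the “personality” can ship as a small adapter stack while the base model stays shared and unchanged.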
© 2025 Chenuli Jayasinghe. All rights reserved.