LLM Problems Observed in Humans
A reflective essay exploring how classic LLM failure modes---limited context, overgeneration, poor generalization, and hallucination---are increasingly recognizable in everyday human conversation.
Published 7 Jan 2026. Written by Jakob Kastelic. While some are still discussing why computers will never be able to pass the Turing test, I find myself repeatedly facing the idea that as the model…