Commentary
In a world racing to build smarter machines, a group of children just reminded us what real intelligence looks like.
At the University of Washington, researchers recently put a group of 7- to 11-year-olds to the test. Their goal wasn’t to teach kids how to use artificial intelligence—but how to outthink it.
The children were asked to solve a series of visual logic puzzles—problems designed to test abstract reasoning, not memorization. The researchers then compared the children’s answers with those produced by generative AI tools such as ChatGPT.
The results were telling.
While the AI confidently offered incorrect answers, the children spotted the flaws almost immediately. Some even began “debugging” the machine—rewording prompts, testing different versions, and analyzing patterns of failure. One 9-year-old summed it up perfectly: “AI just keeps guessing.”
These kids weren’t fooled by the polished tone or fast responses. They were thinking for themselves. And they were winning.
That’s worth celebrating. But it’s also worth pausing to consider what happens next.
Behind the scenes, companies like OpenAI, Google DeepMind, and Anthropic are racing to make AI capable of reasoning better than humans. They’re not just aiming for machines that sound smart—they want machines that are smart: systems that can solve complex problems, reflect on their own logic, and outperform us in every mental task. And the only way to get there is by learning from us.
In this case, that means learning from children.
This study, which was designed to help kids recognize AI’s flaws, could just as easily become the blueprint for closing the gap. AI engineers now have a clear map of where children outperform machines—and how. It’s not hard to imagine that knowledge being used to train the next version of AI to “think more like a human child,” or worse—outthink one.
We’ve seen this pattern before. Human chess games trained the computers that now dominate grandmasters. Human drivers trained the algorithms powering self-driving cars. Human writing trained the large language models we rely on today. So it’s not alarmist to ask: are we preparing our children to stay ahead—or are we giving AI the edge to surpass them? This is the danger of our current trajectory: we are building increasingly powerful technologies while neglecting to build equally powerful wisdom in the people who use them.
Most adults today struggle to question what AI tells them—especially when it sounds confident. We were never taught how. But these children, armed with a visual puzzle and curious minds, saw what many grown-ups can’t: that sounding smart isn’t the same as being smart.
That’s the lesson we should be scaling, not just in schools but in society. In the age of artificial intelligence, the greatest form of defense isn’t a better app or a smarter algorithm. It’s a brain that knows when something doesn’t add up. That’s what these children had, and it’s what every parent, educator, and legislator should be fighting to protect: a child’s ability to think clearly, question confidently, and trust their own reasoning—even when the machine says otherwise.
Because the real danger isn’t that AI will become smarter than us. It’s that we’ll stop teaching our children how to be smart in the first place.
Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.