About a year ago I was sitting on the living-room floor watching my grandson try to walk.
He had decided, with the absolute confidence of a one-year-old, that upright travel was clearly the next step in life. Unfortunately his legs hadn’t quite received the memo yet, so the result looked less like walking and more like a series of small negotiations with gravity.
He would take three determined steps forward, wobble slightly, lean into the next one, and then slowly collapse onto the carpet.
What struck me wasn’t the falling. It was the speed of the adjustments.
Each attempt looked a little different from the one before. His feet spread wider. His body leaned forward earlier. His arms shot out instinctively the moment his balance began to go.
Nobody had explained the physics of balance. There were no instructions, diagrams, or equations.
His brain was simply running the oldest learning loop on Earth.
Try something.
See what happens.
Adjust.
Try again.
That small scene on the living-room floor explains more about intelligence than most textbooks. Intelligence, whether in brains or machines, often begins the same way: by noticing patterns and learning from mistakes.
For a long time scientists assumed the brain worked something like a calculator: information came in, the brain processed it, and out came a decision.
It was a comforting picture, but over the past few decades a different understanding has emerged. The brain appears to work less like a calculator and more like a prediction engine.
Instead of waiting for the world to tell it what is happening, the brain constantly guesses what will happen next. It builds an internal model of the world and quietly adjusts that model whenever reality proves it wrong.
You experience the results of this every day. You reach for a coffee cup without calculating its weight. You catch a ball without solving the equations of motion. You hear tension in someone’s voice before a single angry word is spoken.
None of that feels like math.
It feels like intuition.
But underneath that feeling your brain is doing something remarkably similar to what modern machine-learning systems do: making predictions, measuring errors, and gradually refining its internal model of the world.
Long before humans built computers, that ability kept our ancestors alive. A flicker of movement in tall grass might signal a predator. A strange smell might mean spoiled food. A wall of dark clouds on the horizon might mean a storm.
The brain learned to detect those signals because the cost of missing them could be fatal.
Over millions of years evolution tuned the system into an extraordinarily sensitive pattern detector. Most of the time we don’t notice it working because it operates below the level of conscious thought.
You simply have a feeling that something isn’t quite right, or a sense that something is about to happen.
For most of human history that kind of learning existed only inside biological brains. Eventually engineers set out to build machines that could learn in roughly the same way.
Early programmers tried to create intelligence by writing rules: if this happens, do that. But the real breakthrough came when engineers tried something closer to nature’s approach.
Instead of teaching the machine every rule, let it make predictions and then show it when those predictions are wrong.
Prediction.
Error.
Adjustment.
Repeat that loop millions of times and the system begins to recognize patterns in the data.
If that cycle sounds familiar, it should. It’s exactly the same one my grandson was running on the living-room floor.
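To make the loop concrete, here is a minimal sketch in Python. It is an illustrative toy, not any particular system's implementation: a one-parameter model guesses y = w · x, measures how wrong the guess was, and nudges the weight toward a smaller error. The data, learning rate, and model shape are all assumptions chosen for clarity.

```python
def learn(data, steps=1000, lr=0.01):
    """Fit a single weight w so that w * x approximates y."""
    w = 0.0  # start knowing nothing
    for _ in range(steps):
        for x, y in data:
            prediction = w * x      # 1. predict
            error = prediction - y  # 2. measure the error
            w -= lr * error * x     # 3. adjust to shrink the error
    return w

# The hidden rule behind these samples is y = 2x.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(round(learn(samples), 3))  # converges to roughly 2.0
```

Run enough times, the same three-step loop that steadies a toddler also pulls the weight toward the pattern hidden in the data; real systems differ only in scale, with billions of weights instead of one.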
Human intelligence, however, has another ingredient that computers still struggle to replicate.
Emotion.
Inside the brain are chemical systems that shape how experiences are remembered. When something works, dopamine reinforces the behavior that produced the reward. When we form bonds with other people, hormones like oxytocin strengthen those connections.
Pain, pleasure, curiosity, fear—these signals guide what we learn and what we avoid.
They aren’t decorations added to intelligence.
They are part of the learning machinery.
Humans also learn inside groups. A child doesn’t grow up studying the world alone. We watch parents, teachers, friends, coworkers. We imitate what works and copy the habits of people who succeed. Knowledge moves through stories, arguments, jokes, and gossip.
Civilization itself becomes a network of shared learning in which each brain contributes a little and the next brain carries the idea a bit further.
That is why the knowledge of our species accumulates across generations.
Modern AI systems have now tapped into that same river of shared knowledge. When a language model reads books, articles, conversations, and research papers, it absorbs patterns that emerged from millions of human lives. The machine is studying the written traces of human experience.
That’s why interacting with these systems can sometimes feel uncanny. You ask a question and the answer sounds thoughtful—not because the machine has lived a life, but because it has read the record of many lives.
There is still a crucial difference.
Humans learn through bodies.
We feel gravity, hunger, fatigue, warmth, affection. Our nervous systems evolved inside a physical world full of consequences, and those experiences shape how we interpret events and treat other people.
Machines don’t feel any of that.
They see text.
That gap matters because values—things like empathy, fairness, and loyalty—didn’t arise from logic alone. They emerged from millions of years of social life among creatures that depended on one another to survive.
Morality, in other words, is not just philosophy.
It’s biology.
For the first time in history we are building systems that learn from the written record of our civilization—science, art, medicine, politics, arguments, and dreams. Everything we have written down.
Which means those systems are studying us: the way we reason, the way we cooperate, and sometimes the way we fail.
That realization is both exciting and unsettling.
Because the patterns the machines are learning didn’t originate in silicon. They were shaped by billions of human experiences—people falling down, getting back up, adjusting, and passing what they learned to the next generation.
Exactly the way my grandson was doing on the living-room floor.
And if artificial intelligence is going to share the world we built, it may eventually have to learn the same lesson every toddler discovers sooner or later.
Balance takes practice.