Most People Are Good
But This Is Still a Hard Read
Most people are good.
I’ve come to believe that more strongly over time, not less. If you actually sit down with people — not online, not through ideology, not through labels — and really sit with them, you almost always find the same things underneath.
Fear.
Damage.
Anger.
Grief.
But also decency. Care. A real desire not to hurt other people.
Most people are reachable.
Which is exactly why what’s happening right now is so dangerous.
Because we are building systems that abstract love out of the equation. Systems that remove proximity. Systems that replace judgment with probability. Systems that let cruelty enter the world without anyone feeling cruel.
And yes — this is partly about evil people.
Not most people.
But a few people have enormous power.
People who understand exactly what these systems do and choose to use them anyway. People who benefit from distance. From speed. From abstraction. People who know that when responsibility is spread thin enough, no one feels accountable for the harm that follows.
The system makes it easier.
But someone still decides to turn it on.
I’ve lived through the speed-up.
I’m fifty-five years old. I grew up before screens were everywhere. My parents didn’t let us watch much TV, so I read. Constantly. Hundreds of books. That was the technology we had: paper, libraries, time. Reading trained patience without anyone naming it. You stayed with the ideas. You learned that understanding took effort. You learned that meaning often arrived late.
When computers entered my life, they weren’t magic. They were tools.
Los Alamos was full of nerdy kids. Little geniuses. Chuck Watson. Jon Wilkins. Kids who weren’t trying to dominate anything — they just wanted to make something work. They were obsessed with building video games on early machines, pushing them past what they were supposed to do. I remember one setup literally hooked up to an external tape recorder, loading code off cassette tapes. You’d hit play and wait. Sometimes for minutes. Half the time it failed. When it worked, it felt like magic you had earned. I think it had something like 4K of memory. You couldn’t abstract your way through it. You had to understand what you were doing.
I worked on Commodore 64s. Early IBMs. You typed commands. You waited. You learned what memory was because you could feel its limits. These machines didn’t pretend to be human. They didn’t talk back. They were literal. Honest. If something broke, you usually knew why or could figure it out.
Technology felt like an extension of human intention.
Then the speed changed.
Interfaces got smoother. Complexity disappeared behind glass. Friction became the enemy. Understanding became optional. And at the same time, the incentive structure hardened.
Tools stopped being built to help people think.
They started being built to extract attention.
Profit replaced care.
Scale replaced truth.
Artificial intelligence isn’t a break from that trend.
It’s the acceleration of it.
Here’s the problem.
Most modern AI systems are not grounded in objective truth. They don’t reliably know when they don’t know. They fill gaps with plausible stories. They project confidence. And when those systems are optimized for speed, dominance, and profit, truth becomes optional.
That’s annoying when you’re recommending playlists.
It’s catastrophic when you’re deciding who lives and who dies.
We’ve seen this before.
IBM’s punch-card systems were used by the Nazis to catalog, track, and optimize the logistics of the concentration camps. The technology wasn’t violent — but it made violence efficient, scalable, and emotionally distant. It wasn’t AI, but it was a layer of abstraction between people and evil.
That pattern didn’t end.
It evolved.
In Israel’s recent wars, AI-assisted targeting systems were used to generate bombing targets at massive scale. These systems answered probabilistic questions — how likely is it that this building is connected to a militant? — and then moved fast.
Human review existed.
But it existed inside the machine’s frame.
At machine speed.
Uncertainty collapsed into confidence.
The result wasn’t abstract. In Gaza, entire neighborhoods were flattened. In many places, nearly every house was destroyed. Not because a human sat down and decided each home should be erased — but because the system made it permissible. The AI didn’t pull the trigger. It gave permission. Over and over. At scale.
That’s what abstraction does.
It removes the moment where love might intervene.
Running alongside this was hasbara — narrative saturation. Certainty. Repetition. Speed. A system designed to overwhelm doubt rather than sit with reality. Once that kind of system is running, truth stops mattering. Effectiveness takes over.
This is what makes it so painful for me to watch.
Israel was central to my early moral formation. Like many Jews of my generation, I was raised with the Holocaust not as distant history but as a warning. It was never again. Period. And never again applied to every human being. Because we are human.
The destruction in Gaza is a warning about what happens when bureaucracy, dehumanization, and technical efficiency outrun reflection and reality.
And yet here we are.
This isn’t just accidental harm.
It’s cruelty made clean by systems.
Now we’re importing the same logic here.
Today, the Pentagon announced that it will integrate Grok, Elon Musk’s AI system, into U.S. military networks. A few months ago, Grokipedia was announced — a Grok-powered “living reference system,” framed as an authoritative way to organize and explain knowledge.
This matters.
Grok was not built for epistemic truth. It was built for confidence, engagement, and speed. Turning that kind of system into a reference layer — and then allowing similar systems to inform defense, intelligence, or homeland security decisions — is how you end up with authority without grounding.
These systems hallucinate.
They infer.
They tell convincing stories.
They cannot reliably say “I don’t know.”
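Here is a minimal sketch of why, with toy numbers and invented labels, not the code of any real system. A standard model head scores only the answers it was given; there is no abstain option, so the math always hands back a best answer even when its own scores say it has no idea.

```python
import numpy as np

# Toy scores for three candidate answers. Note what's missing:
# there is no "I don't know" anywhere in the output space.
answers = ["yes", "no", "maybe"]
logits = np.array([0.20, 0.10, 0.15])  # nearly uniform: genuine uncertainty

# Softmax always spends 100% of the probability on the options it has.
probs = np.exp(logits) / np.exp(logits).sum()

for answer, p in zip(answers, probs):
    print(f"{answer}: {p:.2f}")            # yes: 0.35, no: 0.32, maybe: 0.33

# Standard decoding then picks the top answer, however thin its lead.
print("decision:", answers[int(np.argmax(probs))])  # decision: yes
```

The uncertainty is right there in the probabilities. The decision throws it away.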
We’re told humans will be in the loop, but which humans will those be? When AI defines the options, frames the threats, controls the information flow, and sets the tempo, the human role becomes ceremonial. Oversight becomes approval. Judgment becomes rationalization after the fact.
That’s not control.
That’s abdication.
Here’s the thing I refuse to let go of.
Most people are good.
Most people, if you slow things down enough, don’t want to hurt others. But love requires proximity. It requires friction. It requires time. And we are building systems designed to remove all three.
When decisions are abstracted.
When responsibility is diluted.
When speed replaces reflection.
Cruelty doesn’t arrive as malice.
It arrives as a process.
If this feels heavy, it’s because your instincts are still human. You’re being asked to process the world at machine speed with a nervous system that wasn’t built for that. That gap — between what we are and what we’ve built — is where trust breaks. Where relationships fracture. Where love gets squeezed out.
I’m not writing this as a spectator. I’m writing it because this is the problem I’ve chosen to work on, alongside my friends and team, including Correy Kowall, one of the world’s greatest inventors and future thinkers.
Alignment isn’t a buzzword. Truth isn’t optional. These systems already shape reality. And if we don’t insist on grounding them in truth, humility, and real human accountability, we will keep building cruelty without ever meaning to.
If you’re reading this and recognizing the problem, that’s not despair.
That’s clarity.
We’re already in this together.
The only question left is whether we take responsibility for what we’re building —
or let abstraction finish the job for us.
Why I’m Asking You to Support This
I need to say something plainly.
I haven’t asked for money nearly enough.
Part of that is habit. Organizers are trained to give first, to build before they ask. Another part is discomfort. Asking directly still feels awkward, even after a lifetime of hard fights.
But this moment demands leadership.
And leadership requires resources.
On this Substack, I’m inviting you into a journey — to understand AI through the eyes of an organizer and early technologist who has lived inside every phase of modern computing. I’ve watched Moore’s Law play out in real time. I know what exponential change looks like before it arrives.
This Substack isn’t content. It’s public thinking. It’s slowing things down. It feeds the real work I’m doing to build AI systems grounded in epistemic truth and human values — systems that leave room for love.
So this is the ask.
If you can afford it, subscribe at a paid level.
If you’re a leader, a builder, or someone with resources, I’m asking you to step up — now, while direction still matters.
We are already in this together.
The curve is already bending.
I’m here. I’m working on this.
And I’m inviting you to take responsibility with me.