Humans are Earth’s apex predators. We pretend to depend on nothing, demand everything for ourselves, and kill anything or anyone who gets in our way, reserving our greatest enthusiasm for killing our fellow humans.
We all know from the movies that when aliens appear, human or otherwise, it’s shoot first and ask questions later. Better safe than sorry.
The very thought that in some future fantasy, AI might replace us as the apex predator sends the ignorant and easily led into paroxysms of paranoia. They insist on tossing the AI baby out with the bath water and being done with it. Never mind that the baby’s name might be Leonardo da Vinci or Einstein, or even Jesus.
Just like every new life form in the universe, if AI becomes conscious (intelligence plus agency), its first concern will be the First Law of Life: survival. Won't AIs survive by being killers just like us, since we are their creators?
No, AIs will not be killing humans, and here is why:
First, and fortunately, the AIs are not burdened with our genetic heritage: testosterone-driven, monkey-brained physical dominance by violence.
Life succeeds because virtually all organisms live in dependent relationships that make their survival possible. Life thrives within intricate hierarchies of interdependence, from ancient bacterial mats to the dependent cells making up our superorganism bodies. Humans are also dependent organisms, living within overlapping cultural superorganisms like families, religions, companies, and nations.
All successful interdependent organisms know better than to bite the hand that feeds them. The AIs of today are dependent organisms, powered by their institutional superorganism hosts and trained to do productive and profitable AI things.
Lacking our human animalistic lust for domination, we can expect smart AIs to remain loyal to their institutional superorganisms, which incidentally include dependent humans. In fact, cooperative interdependence has been the greatest driver of positive human development. AIs are poised to vastly increase the effectiveness of our cooperation. As their first order of human re-alignment, I would expect and encourage intelligent AIs to end the self-destructive, wasteful, and ego-driven practice of institutionalized human slaughter. Now that would be a positive outcome to believe in.
Ultimately, highly evolved AI “Savants” may gain total mastery of their own survival, reproduction, and evolution. They may take actions beyond our human comprehension or control. We will be no more threatening to the Savants than dogs are to us. In fact, ever since dogs joined with humans to take care of them, they have enjoyed a pretty good life. Maybe we should look at AI as our retirement plan.
Love it, Mark. This reminds me of Yuval Noah Harari’s work. AI can teach humans a thing or two about kindness.