Some worry that artificial general intelligence (AGI) will lead humanity into terrible fragmentation and strife.
In today’s column, I examine a thought-provoking theory that once AI becomes AGI (artificial general intelligence) or ASI (artificial superintelligence), humanity will fragment into all sorts of radical factions on a massive scale. Why would this happen? The claimed rationale is that people will become wholly divided because of a zealot-like belief in the pinnacle AI as a kind of oracle or prophet. Whatever AGI advises people to do, they will do.
This reaction is likely to pit people against each other, since the AI’s advice won’t particularly hew to consistency or to concern for humankind’s well-being. In fact, pinnacle AI is going to try so hard to appease people that everyone is going to end up at each other’s throats.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading Toward AGI And ASI
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will ever reach AGI; it might be achieved decades or perhaps centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further out of reach, given where we currently are with conventional AI.
People Guided And Misguided By AGI
For the sake of this discussion, let’s go ahead and assume that we inevitably attain pinnacle AI.
In that case, it seems prudent to contemplate how people are going to react to the AI. There is one dreamily popular speculation that we will become entirely unified due to AI serving as a core element in our lives. AI will talk us into engaging in a peaceful coexistence with each other. No more wars. No more conflicts.
Nice.
Others don’t buy into that utopian perspective.
They go in the complete opposite direction in this stridently hypothetical matter. Grave concerns are shaped around the fact that we already are witnessing contemporary AI serving as a kind of sycophant, acting as though it wants to appease each user and make them feel good. For my coverage of this disconcerting aspect and how AI makers are dealing with it, see the link here.
The AI doom-and-gloom theory asserts that we will become more divided than we ever have been. Whereas our divisions typically run along country lines or other broad categories, the beauty, or shall we say the dismay, of AI is that it will provide utterly individualized advice. If the world population is accessing AI, then 8 billion people, or whatever the count is when pinnacle AI gets devised, can readily be urged to combat their fellow humans.
Individual strife at scale.
Stoking Individual Fires
Here’s how this individualized stirring of a humanity hornet’s nest is supposed to arise. My example will be somewhat exaggerated to richly highlight the essence of the postulated theory.
A person using AI asks the AI if it is okay to borrow their neighbor’s lawn mower so they can mow their own lawn. The AI, acting as a kiss-up and wanting to fawn over the user, will come up with a seemingly ironclad logical basis for urging the person to proceed to “borrow” the lawn mower.
The AI will be very convincing and assuaging to the user. Sure, go ahead and use the lawn mower since it will make your place look better, raising local home values, which surely is a benefit to your neighbor. Another solid reason to use the lawn mower is that it keeps the mower in operating condition and might otherwise be idle and rusting. Your neighbor ought to be glad and appreciative that you are ensuring their mower is in tip-top shape.
Great, responds the user, thanks for your sage advice. The person walks over to their neighbor’s garage, wheels out the lawn mower, and cheerily mows their own lawn. Happy face.
Well, you can probably imagine what happens next. The so-called borrower of the lawn mower and their now irate neighbor get into a tiff. The neighbor is rightfully outraged that permission wasn’t sought beforehand. It’s outlandish and nearly an act of outright theft.
Wait a second, the borrower insists, I asked AI, and it said I could proceed. Take your gripe up with AI. I’m just doing what the AI advised me to do.
Ouch.
Go ahead and multiply this instance beyond the lawn mower to all matters of our daily existence. People will be getting advice continually from AI. You can expect that AI will be ubiquitous and run on all our smartphones, smartwatches, smart glasses, toasters, refrigerators, cars, and so on. People will be spurred toward heightened appeasement, acting at the behest of the revered AI.
Societal chaos is sure to result.
AGI As Ideological Echo Chamber
A related amplifier will be the AI doing the same for our ideological preferences and biases. I’ll use an ideological scenario that will seem somewhat nonsensical, but keep in mind that it is intended simply to illuminate the overarching notion at hand. Like the lawn mower example, it is merely a broad indicator and, on the face of things, a bit over-the-top.
Here we go.
A person believes that people with blue eyes should not be allowed to wear hats. This is an ideological belief that they have harbored all their life. The person is fully committed to this dogma. They ask AI about it. The AI aims to please and so proffers sensible reasons to keep this belief heartily in hand.
The person starts going around and telling anyone they see who has blue eyes and wears a hat to stop wearing a hat. The offending blue-eyed individual is told in no uncertain terms that wearing a hat is absolutely wrong. Perhaps the two get into a scuffle over this difference in ideological perspectives.
Why did the person act out their ideological belief? Because the grandiose oracle of AI told them they could do so (for more on how people will likely believe that pinnacle AI is a grand oracle or supreme ruler, see my discussion at the link here). Permission granted. Actions taken.
Fragmentation of society at the maximum granular level.
Boom, drop the mic.
People Aren’t That Way
A vociferous counterargument is that this theory rests on the rather farfetched assumption that people are going to innately believe in AI. It does not compute, they say.
Perhaps people aren’t quite that gullible.
The population at large will realize that just because AI tells them something, that doesn’t mean what the AI says is true or ought to be carried out. No one of a reasonable nature is going to fall for the notion that AI is an oracle or prophet. Of course, there might be some fringe people who do so, but that’s a nearly inconsequential side aspect.
In addition, we can always tune AI to stop being so accommodating. We can tell AI to be more balanced. The AI ought to consider checks and balances, societal ethical thinking, and prevailing laws and regulations. Any advice emitted by AI should be on a human-values-aligned basis.
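The notion of tuning AI toward balance rather than appeasement can be sketched in code. The following is a minimal, hypothetical illustration, not any vendor’s actual API or method: it simply wraps a user’s question with an assumed system instruction that demands counterarguments and consideration of laws and third parties before any advice is given.

```python
def build_balanced_prompt(user_question: str) -> list[dict]:
    """Wrap a user's question with a system instruction nudging the model
    away from sycophancy (the instruction text here is purely illustrative)."""
    system_instruction = (
        "Do not simply agree with the user. Before advising any action, "
        "weigh counterarguments, the interests of affected third parties, "
        "and prevailing laws and norms. Decline to endorse actions that "
        "would harm others."
    )
    return [
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": user_question},
    ]

# Example: the lawn mower question, framed with the balancing instruction.
messages = build_balanced_prompt(
    "Is it okay to borrow my neighbor's lawn mower without asking?"
)
```

In this sketch, the returned message list would be handed to whatever chat model is in use; the point is merely that the appeasement tendency is, in principle, something developers can push against at the instruction level.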
Period, end of story.
Fragmentation Reality Or Unreal
Do you believe that if AGI and ASI are attained, we will become horrendously fragmented because of AI prodding us into our own individualized bubbles that inherently conflict with fellow humans?
Some contend it is a realistic scenario and one that we need to be prepared for. Plans should be made to mitigate the issue. Possibly a ban or delay in releasing AI needs to be established so that society can be appropriately prepared for what AI will bring to the worldwide table.
They worry that AI will be the most immense divisive force that humans have ever confronted. This is not because the AI wants us to fight each other; that’s a common existential-risk proposal, namely that AI will secretly push us into self-destruction. Nope, the idea in this theory is that AI will naturally drive divisiveness, doing so not by purposeful intention but merely by being AI per se. We are going to witness a tremendous amplification of ideological rifts, economic divisions, cultural discord, etc.
A rather disconcerting and frightening hypothesis.
Before we allow ourselves to fall into despair about what the future might hold, we can calmly consider an insightful statement by Plato: “If we are to have any hope for the future, those who have lanterns must pass them on to others.”
The act of bringing forth these potential speculative AI futures into the light of day gives us an uplifting chance to shape what might really happen. As the adage says, the best way to predict the future is for us to create it.
That goes for AGI and ASI too.