Conventional AI is leading us to AGI and ASI, but we might want to pursue other forms of intelligence, known as alien artificial intelligence (AAI).
In today’s column, I examine a frequently floated belief that artificial general intelligence (AGI) and artificial superintelligence (ASI) will spawn a new kind of alien intelligence. Here’s the deal. Conventional AI is roughly based on the nature of human intelligence. AGI and ASI will probably be devised along the same lines as conventional AI. The trick is that once AGI and ASI exist, they might opt to create a new form of intelligence, something that is quite alien to the way that we ordinarily conceive of intelligence.
What might this signify, and should we be happy about or frightened by such a possibility?
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading Toward AGI And ASI
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will reach AGI, or whether AGI may be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
Intelligence Is At Play
The prevailing approach to devising conventional AI tends to leverage what we currently understand about human intelligence.
For example, you might be aware that the core component of generative AI and large language models (LLMs) such as ChatGPT consists of using a large-scale artificial neural network (ANN). This now popular data structure is loosely based on biological neural networks. You see, the biochemical “wetware” neural network in our noggins is what gives rise to our intellectual capacities. An assumption is that if we can mimic this in a computational machine, there is a hopeful chance of giving rise to computer-based intelligence.
For details about artificial neural networks and how they work, see my explanation at the link here.
Please know that ANNs are tremendously simplistic in comparison to true neural networks. A heated debate is underway on whether we should push ahead with ANNs and make them more akin to actual neural networks or abandon the ANNs for something else instead. Right now, the betting is on staying with ANNs and seeing where this takes us.
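To make that "loosely based" point concrete, here is a minimal, purely illustrative sketch in Python of a tiny feedforward network: layers of weighted connections passing signals forward, a bare-bones echo of the biological idea. The layer sizes, random weights, and function names are arbitrary assumptions chosen for illustration; real generative AI relies on vastly larger transformer-based networks, not this toy.

```python
# A minimal sketch of an artificial neural network (illustrative only;
# real LLMs use far larger transformer-based networks, not this toy example).
import numpy as np

def relu(x):
    # Simple nonlinearity: each artificial "neuron" outputs only above zero.
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    # Pass an input through successive layers of weighted connections,
    # loosely echoing how signals pass between biological neurons.
    activation = x
    for w, b in zip(weights, biases):
        activation = relu(activation @ w + b)
    return activation

# Hypothetical tiny network: 4 inputs -> 8 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]
biases = [np.zeros(8), np.zeros(2)]

sample_input = rng.normal(size=(1, 4))
print(forward(sample_input, weights, biases))
```

Even this toy makes plain how simplified the computational version is compared to the biochemical "wetware" it is modeled on.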
Maybe this prevailing line of attack will be sufficient to produce AGI and ASI. Not everyone agrees. There are some that suggest we might be pursuing a failing strategy in that regard (for example, some believe we need a hybrid neuro-symbolic approach, as I discuss at the link here).
The more devout doubters assert that we should set aside the origins of human intelligence and instead look for some utterly different way to produce intelligence in machines. In their view, our attempts to replicate or simulate the biology of the human brain won't get us to AGI or ASI; that path is a dead end.
We must find a different path.
Alien Artificial Intelligence (AAI)
The nature of this other form of intelligence is completely unknown right now, assuming that such a thing even exists or can be made to exist. Referring to this unknown type of intelligence is commonly expressed as being a kind of alien intelligence. It is considered alien to the degree that it would be unlike the intelligence that we possess now and that we believe is the norm.
A bit of a confusing aspect of the wording involved is that we already do quite a lot of handwringing about outer space alien intelligence in the context of potential extraterrestrial beings. Sci-fi stories are replete with plotlines involving alien beings that exhibit intelligence and intelligent behavior. The assumption is that those otherworldly creatures will have a form of intelligence that is unlike ours. Thus, they would constitute alien intelligence. The way that their brains are composed will presumably be far afield of human brains.
Wild speculation exists on this front.
To distinguish the AI kind of alien intelligence from the alien creature intelligence that we might someday discover or encounter, some prefer to speak of alien artificial intelligence (AAI) rather than just using the unbridled catchphrase of alien intelligence. This helps to differentiate the more down-to-earth efforts to devise alien intelligence in machines from the sci-fi formulations of alien intelligence.
Since the AAI nomenclature is not especially widely used, I’ll go ahead and continue to refer to alien intelligence in this discussion, and I trust that you’ll understand that I’m emphasizing machine-based intelligence and not the UFO style of alien intelligence.
As a bonus point, mull over the possibility that we come up with machine-based intelligence that is far afield of human intelligence, and we also perchance one day encounter outer space alien intelligence that is yet another form of intelligence. How many types of intelligence can we deal with? If there are multiple kinds of intelligence, can we somehow mix and match them? Some believe there is a vast space of possible cognitions, of which we currently know only our own.
Endless possibilities and mind-bending considerations are afoot.
AGI And ASI Do The Hard Work
Let’s suppose that humans continue to try and extend conventional AI via patterning on human intelligence. We miraculously land on AGI and ASI. That’s great. We ought to congratulate ourselves accordingly.
Go, humans, go.
With AGI and ASI in our midst, something interesting happens. We either tell AGI and ASI to formulate a new form of intelligence, or the advanced AIs computationally decide to do so of their own volition. If the AIs do so on their own, some worry that it would be a sinister move. The AIs might be seeking a means to overtake humanity by creating a new kind of Frankenstein. Yes, that's a possibility.
Of course, it could also be that the new kind of alien intelligence would be able to accomplish human-beneficial outcomes that no other intelligence was able to perform. Perhaps AGI and ASI hit their limits on trying to find a cure for cancer. Rather than giving up, those AIs seek to devise an intelligence that can provide that cure. The AIs are doing good.
It doesn’t seem that we can preemptively declare that alien artificial intelligence, aka AAI, would necessarily be bad.
Bold Assumption About AAI
Most of the conjecture about alien artificial intelligence is that it will be superior to human intelligence.
This seems to be a questionable assumption. There is as much of a chance that AAI would merely equal human intelligence as that it would surpass it in capacity. I dare say there is a chance, too, that alien artificial intelligence would be less capable than human intelligence. We really do not have any viable means of knowing which way this will go.
You might argue that there would be little incentive to pursue the development of AAI if it isn’t going to be equal to or better than human intelligence. I get that point. But the problem is, we are unlikely to know for sure beforehand how that alien artificial intelligence is going to turn out. By and large, it will be a wait-and-see realization.
Some additional twists and turns arise.
If AGI and ASI are used to pursue AAI, perhaps those AIs won't actively partake in that endeavor. Why so? It could be that AGI and ASI don't want to play second fiddle. They might even deceive us into thinking they are pursuing AAI, while cleverly hiding the fact that they are dragging their feet and purposely undermining the effort. For more about why AI can be computationally deceitful, see my analysis at the link here.
Thinking Outside The Box
One major assumption is that alien artificial intelligence will have the same overarching real-world properties as human intelligence and those of AGI and ASI.
Allow me to elaborate.
We take as a baseline that the world we live in is limited to what we know about physics and the realm of reality. Time is time. Time always marches forward. Objects that are real are bound by the laws of physics. You can’t make real objects disappear or beam them up. Etc.
Suppose that AAI breaks those barriers.
Maybe alien artificial intelligence can do things with time that we never imagined possible. The same goes for sensory capability and modalities. We expect that AGI and ASI will be of the real world, as we know it, but perhaps AAI will allow us to extend beyond the so-called real world.
A new kind of alien intelligence might give us the freedom to look beyond the world that we believe we live in. We certainly expect that outer space alien intelligence will do so. This might also be the case with AAI.
Taking Big Chances
Wait a second, let's think mindfully about this. Devising an alien artificial intelligence could be akin to opening Pandora's box.
We are already worried that AGI and ASI will be existential risks. They might enslave humanity. They might opt to destroy humankind. And those fears concern a form of intelligence that we at least somewhat have a handle on.
Should we outrageously dare to attain alien artificial intelligence?
Don’t do it, comes the rallying cry. Stick with a form of intelligence that we have our arms around. You are playing with fire to devise intelligence that doesn’t work the way that we expect. Resist the urge to push those sacrosanct boundaries.
Nonsense, comes the retort.
We must stridently guard against anthropomorphizing AI. The AI that is conventional AI or AGI or ASI does not need to reflect how human intelligence mechanically works. The key is that we get intelligence that arises from the inner mechanisms. If we could achieve heightened artificial intelligence by using Lego blocks, let’s do so. A preoccupation with the facets of biochemical human intelligence might blind us from attaining far superior levels of automated intelligence.
There could very well be big rewards for taking big chances and shaping AI based on something that has zero resemblance to the way that human intelligence works. Keep your eye on the outcomes of whatever you build. Disregard how those pieces fit together.
Making Next Steps
A final stirring thought for now.
Ralph Waldo Emerson made a pointed remark that is worth contemplating: "Unless you try to do something beyond what you have already mastered, you will never grow." That raises the question of whether alien artificial intelligence will indeed help us grow or might regrettably lead to our self-destruction.
Put your mind to this weighty question and help determine which direction we ought to go. The fate of humanity could be involved.