19 December 2016

Nautilus Magazine: It May Not Feel Like Anything To Be an Alien

The world Go, chess, and Jeopardy champions are now all AIs. AI is projected to outmode many human professions within the next few decades. And given the rapid pace of its development, AI may soon advance to artificial general intelligence—intelligence that, like human intelligence, can combine insights from different topic areas and display flexibility and common sense. From there it is a short leap to superintelligent AI, which is smarter than humans in every respect, even those that now seem firmly in the human domain, such as scientific reasoning and social skills. Each of us alive today may be one of the last rungs on the evolutionary ladder that leads from the first living cell to synthetic intelligence.

What we are only beginning to realize is that these two forms of superhuman intelligence—alien and artificial—may not be so distinct. The technological developments we are witnessing today may have all happened before, elsewhere in the universe. The transition from biological to synthetic intelligence may be a general pattern, instantiated over and over, throughout the cosmos. The universe’s greatest intelligences may be postbiological, having grown out of civilizations that were once biological. (This is a view I share with Paul Davies, Steven Dick, Martin Rees, and Seth Shostak, among others.) To judge from the human experience—the only example we have—the transition from biological to postbiological may take only a few hundred years. [...]

In light of this, contact with an alien intelligence may be even more dangerous than we think. Biological aliens might well be hostile, but an extraterrestrial AI could pose an even greater risk. It may have goals that conflict with those of biological life, have at its disposal vastly superior intellectual abilities, and be far more durable than biological life. [...]

The question of whether AIs have an inner life is key to how we value their existence. Consciousness is the philosophical cornerstone of our moral systems, central to our judgment of whether someone or something is a self or person rather than a mere automaton. Conversely, whether an AI is conscious may be key to how it values us: using its own subjective experience as a springboard, it could recognize in us the capacity for conscious experience. After all, to the extent we value the lives of other species, we do so because we feel an affinity of consciousness; thus most of us recoil from killing a chimp, but not from munching on an apple.
