Friday, September 11, 2015

"A Lack of Human Intelligence is Still a Much Larger Threat Than Artificial Intelligence"

I'm not concerned about AI at all. It's not going to happen, and there is no possibility of Skynet going all wonky on us.

People, as always, are the problem. I'm more concerned about Dr. Strangelove than about some computer-loving lunkhead thinking AI's a problem at all.

I don't believe in global warming, either, but the author is right that people are the threat, not computer software.

This is from the site Thought Infected.


Elon Musk made headlines recently when, in an interview at the MIT Aerospace Symposium, he stated that he believes the development of artificial intelligence (AI) is likely the biggest existential threat to humanity; he went as far as to compare the development of AI to the summoning of a demon. Musk is concerned enough about the rapid development of AI systems that he has also put some financial power behind his words, investing in AI start-ups so he can keep a close eye on progress in the field.

While I am reluctant to disagree with the visionary behind the three high-tech companies working hardest to address genuine existential threats (Tesla, SpaceX and SolarCity), I feel that on this point I must. No, Mr. Musk, it is not the summoning of a computer demon but the ancient demons of the human soul that represent our biggest existential threats.

Human cruelty, greed and ignorance are still far more likely to be our collective undoing than artificial intelligence.

Human greed and ignorance are the root causes which have prevented real movement in addressing the existential threat of global environmental disaster. There is no scientific debate as to whether putting huge amounts of carbon dioxide into the atmosphere will lead to environmental malaise in the form of extinction of sensitive animal species and loss of habitats, but the scariest possibilities of global warming are often avoided in scientific circles. To avoid seeming overly alarmist, scientists generally don't talk about what might happen if, for instance, global warming triggers the sudden melting of the Greenland ice sheet. Unlocking that much water would put somewhere around one third (or more) of the world's population underwater and mean almost certain civilizational collapse. Even worse would be the possibility of a sudden release of Arctic methane hydrates, which contain many times the amount of carbon humans have already released into the atmosphere and could lead to climate change rapid enough to make human life on the surface of the earth essentially impossible.
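
To put that last claim in rough numbers, here is a quick Python comparison. The figures below are loose, contested estimates chosen purely for illustration (published hydrate inventories vary by an order of magnitude), not authoritative values.

```python
# Rough comparison: carbon locked in methane hydrates versus cumulative
# human carbon emissions. All figures are illustrative assumptions;
# published estimates differ widely.
human_emissions_gtc = 545                       # ~cumulative anthropogenic carbon, GtC
hydrate_low_gtc, hydrate_high_gtc = 500, 10_000 # global hydrate estimates, GtC

print(f"hydrates hold roughly {hydrate_low_gtc / human_emissions_gtc:.0f}x "
      f"to {hydrate_high_gtc / human_emissions_gtc:.0f}x human emissions to date")
```

Even at the low end of these assumed estimates, a sudden release would be comparable to everything humanity has emitted so far; at the high end it would dwarf it.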

It is a sad state of affairs that even the near-complete scientific consensus on the threat of climate change is inadequate to overcome the effects of greed and ignorance within our society and enact the kind of changes which will be necessary to save ourselves. I give Elon Musk great credit for being one of the people on the planet who have done the most to address the issue of climate change head on, but I am amazed that he is optimistic enough about our progress to rate global warming below artificial intelligence as a threat to human existence.

In addition to global environmental threats, we should also keep in mind that we still very much retain our capacity to destroy ourselves at a moment's notice. There are still a few men in the world who, given a momentary loss of sanity or morality, could easily send us hurtling into a conflict which might ultimately set our progress back centuries. We do not yet live in a world where an insane artificial intelligence could kill even a single person, but we entrust a few fallible and corruptible human brains with the power of nuclear apocalypse.

The recent uptick in high-risk confrontations between NATO and Russian forces, following the conflagration in eastern Ukraine, should be adequate to convince observers that we have not yet outgrown the threat of global-scale military conflict. There are still plenty of historical military axes to grind (Korea, China/Japan, Pakistan/India, Middle Eastern conflicts) which could push us from localized hot-spots into larger confrontations.

Even without the power of nuclear super-weapons, we have unequivocally and repeatedly proven our expertise at killing each other on an industrial scale. World Wars I and II resulted in the deaths of roughly 2% and 3% of the world population respectively, and nuclear weapons were but punctuation at the end of these conflicts. Given a large and long enough conflict, the machine gun would probably be a perfectly adequate tool to erase global civilization.
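
As a back-of-envelope check on those percentages, here is a short Python sketch. The death tolls and world-population figures are loose, contested estimates (the World War I high end folds in war-related famine and disease); they are assumptions for illustration, not settled numbers.

```python
# Back-of-envelope: war deaths as a share of world population.
# All figures are rough, contested estimates used for illustration.
wars = {
    # name: (low deaths, high deaths, approx. world population at the time)
    "World War I":  (15e6, 40e6, 1.8e9),
    "World War II": (70e6, 85e6, 2.3e9),
}

for name, (low, high, pop) in wars.items():
    print(f"{name}: {100 * low / pop:.1f}% to {100 * high / pop:.1f}% "
          "of world population")
```

Under these assumptions the output lands around 1% to 2% for the first war and 3% to 4% for the second, consistent with the figures above.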

I would rate both global conflict and climate change as clearly greater existential threats than artificial intelligence, but there is another reason I do not give significant mental energy to the threat of a murderous artificial intelligence: I do not see any reason to believe that a strong artificial intelligence would seek to destroy humanity.

The idea that AI would naturally come into conflict with humans is simply another expression of our anthropocentric world view. Artificial intelligence should have no more malice for humans than we have for more rudimentary forms of biological intelligence. Ants, for example, share some of humans' abilities to create complex structures, maintain complex societies, and so on, yet we do not generally go to war with ants. At worst, our activities might inadvertently affect ants when living within the same environment brings us into resource conflict.

Unlike what occasionally occurs between us and ants, I do not think that we share enough resource overlap with AI to bring about any conflict. Humans can (so far) only exist within a thin skin of atmosphere on a single water planet. In contrast, the key resources of computational life would be the energy and raw materials necessary to create and run more computational hardware. Given that these resources are equally or more available outside of the earth, I think that any AI would likely exit the planet as soon as possible.

With plenty of raw material and solar energy, the moon and eventually the Kuiper belt would likely be a more suitable environment for computer intelligences, leaving only a short period of Earthly egress during which we might come into resource conflict with artificial intelligences. Even in this case, the remote possibility that a war with humans might lead to the destruction of the AI could be adequate to discourage competition with us.

It has been suggested that AI might seek to destroy humanity for fear that we would continue to produce future artificial intelligences which would then compete with it for resources in the universe. I do not accept this argument, as it implies that the AI itself would not already be evolving and forking off-shoots of intelligence on its own. Any AI which can edit itself would be constantly evolving its own intelligence in ways far more significant than anything spawned from the earth. Humans are not seeking to eliminate chimpanzees for fear that they might eventually evolve into a competing species.

Fear of AI is cover for a more uncomfortable truth: maybe AI simply wouldn't care about us at all.

In my mind, the only case where an artificial intelligence represents a likely existential threat to humanity is if some kind of weak AI akin to the paperclip maximizer is set to achieve a narrow goal and inadvertently destroys us in the process. At this point it is not clear whether it would even be possible to create such a single-minded intelligence. If a weak AI were smart enough to pose a real threat to humanity at large, it seems likely that it would also be capable of rewriting its own code to embrace more selfish goals, ultimately evolving into a stronger AI which poses less threat to humanity for the reasons discussed above.
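
To make the paperclip-maximizer scenario concrete, here is a deliberately toy Python sketch; every name and number in it is invented for illustration. The point is the objective function: an agent whose goal counts only paperclips will consume resources humans value, not out of malice, but because nothing in its goal tells it not to.

```python
# Toy "paperclip maximizer": a greedy agent whose objective counts
# paperclips and nothing else. All names and values are invented.
resources = {"scrap wire": 100, "power grid": 50, "farmland": 200}

def clips_from(units: int) -> int:
    """Assume one resource unit converts into one paperclip."""
    return units

paperclips = 0
while any(resources.values()):
    # Nothing humans care about appears in the objective, so the agent
    # simply consumes whichever resource yields the most paperclips next.
    best = max(resources, key=resources.get)
    paperclips += clips_from(resources[best])
    resources[best] = 0

print(f"paperclips made: {paperclips}")  # 350
print(f"world left over: {resources}")   # everything consumed
```

The danger, in the essay's terms, is exactly this indifference: the failure mode is a narrow goal pursued competently, not a hostile one.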

Does artificial intelligence represent an existential threat? The answer is unequivocally yes, but I would not at this time rate it on a scale anywhere near that of global warming or world war. In the hyper-technological modern world, we might like to imagine that we have evolved beyond the threats of ignorance and greed, but I think reality tells a different story.

I hope that one day this will change, but for now I think we have much more to fear from a lack of human intelligence than from an artificial one.

13 comments:

Mindstorm said...

https://en.wikipedia.org/wiki/Extinction_event - hmmm, what can be learned from the past....
https://en.wikipedia.org/wiki/Permian%E2%80%93Triassic_extinction_event - definitely the closest 'shave'.

"A study published in the journal Science[68] found that during the Great Extinction the oceans' surface temperatures reached 40 °C (104 °F), which explains why recovery took so long: it was simply too hot for life to survive.[69]" - I think it might be even worse.

Mindstorm said...

Also consider thermal expansion of water plus complete thawing of polar caps. It's a wonder how any quadrupeds survived.

Mindstorm said...
This comment has been removed by the author.
Mindstorm said...

https://goo.gl/FYnIEG - key expressions: "great extinction" "water temperature". 43 hits total.

Mindstorm said...

https://goo.gl/b4WUYA - less restrictive keywords. Over 11 000 hits. Interesting content, to say the least.

Mindstorm said...

http://www.cugb.edu.cn/upload/20600/20121224161744186.pdf - hmmmm... recorded temperatures in conodont skeletons... but what about the upper limit? Lethal conditions for conodont bearers would be obviously absent from the record. I don't see that addressed.

https://en.wikipedia.org/wiki/Conodont

Mindstorm said...

^ And lethal for their prey as well. Bearer specimens living in fringe environments would be a rarity in the first place, so the fossil record would be skewed against them.... An interesting oversight.

Mindstorm said...

https://en.wikipedia.org/wiki/Indicator_species - a more thermophilic species would be needed to chart temperatures above 105°F.

Anonymous said...


I actually fear my own government's shenanigans, rather than any terrorist group or some supposed hostile country:

http://www.washingtonsblog.com/2015/09/remember-your-country-admits-to-false-flag-terror.html

Mindstorm said...

^^ Make that the more general 'thermophilic taxon' instead of the too-specific 'species'.

Anonymous said...

"Robots are going to steal the jobs of chefs, salespeople and models, researchers say as they unveil full list of likely robot professions"

http://www.independent.co.uk/life-style/gadgets-and-tech/news/robots-are-going-to-steal-the-jobs-of-chefs-salespeople-and-models-researchers-say-as-they-unveil-full-list-of-likely-robot-professions-10499771.html

Mindstorm said...

'Steal'? Interesting turn of phrase.

Mindstorm said...

^ It conveys that:
* replacing humans would be robots' intention
* these jobs are the property of those who are doing them.

Seriously?