The Dangers of AGI

How dangerous is Artificial Intelligence? Chances are you are reading this on a device in your hand that already relies on AI for many of its functions.

Are Elon Musk and Stephen Hawking right in saying that AI is the greatest potential threat to humanity? Sam Harris (samharris.org) originally thought their pronouncements hyperbolic, but he now agrees, adding that the only prospect scarier than developing super-intelligent, self-learning, self-replicating AGI (Artificial General Intelligence) is not developing it, because it could solve our problems. 'However,' Dr. Harris points out, 'if we develop it to the extent that it is a million times faster than the greatest human minds, it could go through 20,000 years of human intellectual development in a week.'

Harris goes on to say that we will have to deprive AI of access to the internet at first; that we will have to solve our political problems so that this technology does not drive unemployment to 30%; and that we will have to program AI not to do what HAL did in the epic and prophetic film "2001: A Space Odyssey" (though he did not put it that way), because if you give AI instructions to protect humanity, it could wind up waging war on whatever members of humanity it sees as harmful. Sam Harris also worries that, as he put it, 'some of the people working on this are keyed up on Red Bull and apparently on the Asperger's spectrum; they have totally "drunk the Kool-Aid" on AI.'

This has to be heard.

Thanks for coming. What do you think?
