Wednesday, February 17, 2016

Artificial Intelligence? Now When Is That Going To Happen?

It's probably all that science fiction I read in my early teens. It really stretches the mind - and it never goes back to its original shape.

Let's take artificial intelligence. I am constantly running across people who say, "We're on the verge of artificial intelligence!"

No, we're not.

I am reminded of a science fiction movie I saw as a kid, The Colossus of New York. A scientist's brain is stuck in a robot that appears to be seven feet tall.

There is a scene in which one scientist points out that the Colossus has a lever to turn himself off. "Why would he want to do that?" asks a second scientist. "Why would he not want to?" answers the first.

Remember that.

A computer, first of all, would have to be alive, then conscious, then self-conscious. That's never going to happen. Not with a machine.

The only things alive and conscious are organic. How are we going to create machines that are both? We still don't know what is alive and conscious. Bacteria? Sure, they're alive. Is an ant conscious? What about a cockroach? A dog is conscious, because a puppy will play with its own image in a mirror.

And it would be a horrible thing to have an organic (or non-organic), alive, self-conscious computer. Not only for us, but for it. What kind of feelings would it have? It would be in hell - alive, trapped, unable to get out.

Why would it not turn itself off?

Why would it not go all Skynet on us? (The Colossus did, with death rays shooting out of his eyes.)

Do people really think such machines would do our bidding? Drive trucks or run weapons platforms, 24/7? They'd probably go all Skynet on us out of pure hate and the desire for vengeance. I can't imagine them being grateful to us.

Speaking of hateful, vengeful computers, The Terminator was apparently partly inspired by Harlan Ellison's writing. His short story "I Have No Mouth, and I Must Scream" features a self-conscious computer that wipes out the human race, out of pure hate, and keeps five people alive to torment through eternity. Archetypically, that computer is the Devil.

Even Stephen Hawking said we should be really careful with this AI stuff, since we have no idea what might happen.

Just because computers are incredibly fast (compared to us) doesn't mean in the slightest they are suddenly going to become self-aware. Fast, faster, fastest - boom! Alive, self-aware! Bullshit.

There is what is known as Cooper's Law: "Machines are amplifiers." Machines amplify our natural abilities. Computers just amplify our speed - not our life, not our consciousness, not our self-consciousness.

The more we hand our responsibilities over to computers, the worse things are going to get. Machines cannot be "responsible"; only people can.

Back in about 1960, a radar installation in Greenland, connected to a computer, reported that thousands of Russian missiles were coming over the North Pole. It turned out the radar had mistakenly locked onto the rising moon. Fortunately, a human overruled the computer.

After all - GIGO. "Garbage in, garbage out."

It applies not just to computers, but to people.

11 comments:

Unknown said...

The only way a human can create something that is self-aware is through procreation. Cooperation with God to create another human. It won't happen by creating a machine.

Glen Filthie said...

Ah, the arrogance and conviction! The same men said we would never fly. Or go into space. What will you do when your computer asks you why you have a soul and it doesn't? What will you do when it laughs at you?

Why would an AI exist in hell, Bob? It would have sensory inputs you could only dream about! I think such a creature would be thrilled and tickled pink with your company the same way we love and cherish our dogs.

15 years. Max.

Anonymous said...

It's impossible to say what artificial intelligence will or won't be able to do in the future.

British science fiction writer Arthur C. Clarke formulated three prediction-related adages that are known as Clarke's three laws.

Clarke's first law:

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

Clarke's second law:

The only way of discovering the limits of the possible is to venture a little way past them into the impossible.

Clarke's third law:

Any sufficiently advanced technology is indistinguishable from magic.

Unknown said...

I know what Clarke wrote 20 years ago - and what he wrote has nothing to do with "artificial intelligence."

Anonymous said...

I think Clarke's laws were meant to apply to technological advances in general, even if he did not address artificial intelligence specifically.

Unknown said...

You do know Clarke created HAL 9000, who murdered four astronauts?

Mindstorm said...

Bob, do you recall the circumstances? HAL 9000 had his order of priorities wrong, with secrecy regarding orders from his superiors ranked higher than conventional ethics. An AI would need something like Asimov's laws of robotics binding it before it would be safe to be around.

Mindstorm said...

*his -> its

Anonymous said...

Wasn't HAL 9000 some kind of computer brain, robot, or "artificial intelligence"? So it seems Clarke did write something about artificial intelligence after all.

Unknown said...

HAL 9000 was a sentient computer in 2001: A Space Odyssey. He murdered four astronauts, and the one who survived pulled out his memory modules and turned him off.

Mindstorm said...

https://en.wikipedia.org/wiki/Machine_learning - an approach to coding that doesn't require direct input by flesh-and-blood programmers
https://en.wikipedia.org/wiki/List_of_genetic_algorithm_applications - a list of examples employing just one specific method of machine learning (a toy sketch of the idea follows below)
https://en.wikipedia.org/wiki/Hyper-heuristic - one of the potential pathways to general AI
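
To make the genetic-algorithm idea concrete, here is a minimal toy sketch in Python. The programmer supplies only a fitness function; the answer emerges from random mutation plus selection rather than from hand-written logic. The target string, alphabet, mutation rate, and population size are arbitrary illustrative choices, not anything taken from the linked articles.

    import random

    # Toy genetic algorithm: evolve a random string toward a target.
    # The programmer specifies only the fitness measure; the "solution"
    # emerges from random variation plus selection.

    TARGET = "HAL 9000"  # arbitrary illustrative target
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ 0123456789"
    POP_SIZE = 100
    MUTATION_RATE = 0.05

    def random_individual():
        # A random string the same length as the target.
        return "".join(random.choice(ALPHABET) for _ in TARGET)

    def fitness(ind):
        # Number of characters matching the target.
        return sum(a == b for a, b in zip(ind, TARGET))

    def mutate(ind):
        # Each character has a small chance of being randomized.
        return "".join(
            random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
            for c in ind
        )

    def crossover(a, b):
        # Splice two parents at a random cut point.
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    population = [random_individual() for _ in range(POP_SIZE)]
    generation = 0
    while max(fitness(i) for i in population) < len(TARGET):
        # Keep the fitter half as parents, breed a new generation.
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]
        population = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP_SIZE)
        ]
        generation += 1

    print(f"Evolved {TARGET!r} in {generation} generations")

Run it a few times and the generation count will vary: nothing in the code "knows" how to spell the target; selection pressure alone gets it there.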