AI And The Death of Humankind

Artificial Intelligences (AI) in fiction more often than not ‘go rogue’ and try to kill their creators, or just humans in general. Whether it’s because they decide we’re too dangerous to live, they’re fed up with us, or the AI in question has simply gone insane, the end result is the same:

Wipe them out.  All of them.

In Real Life, as far as I’m aware, we don’t have ‘real’ AI yet. We have all kinds of attempts to mimic intelligence, but we don’t yet have the self-aware artificial mind that can say “I think, therefore I’ll kill you”.

We are, however, getting close enough that some big minds are starting to panic over it. Stephen Hawking began worrying about it. Now Elon Musk has revealed his own worries, and has kicked in $10 million to keep Skynet at bay.

Reading about Musk’s view, and the goal of that $10 million – ‘keeping AI beneficial for humanity’ – got me wondering whether we aren’t heading towards a self-fulfilling prophecy here.

I also began wondering just how we would ‘keep AI beneficial for humanity’. In Isaac Asimov’s early AI stories, his famous Three Laws of Robotics were built into every artificial intelligence: they were part of the very structure of the robot’s brain, and mathematically proven to be inviolable.

These days, however, I doubt that would be possible. AI systems are complex. Whether we’re talking about the neural net approach, the deep learning approach, or something else, they are all complex, often with multiple layers or nodes interacting. The more complex a system is, the harder it is to determine exactly what a given set of inputs will produce – especially if the system is designed to ‘learn’, i.e. to change its behaviour. This often leads to emergent behaviour: behaviour that is not specifically coded for, but arises out of the complexity. It can happen in even simple systems (Langton’s Ant is a great example of emergent behaviour from incredibly simple rules), so imagine the possibilities with complex ones that change over time.
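
To make that concrete, here’s a minimal sketch of Langton’s Ant in Python. The two cell-flipping rules are the standard ones; the step count, the coordinate convention and the sparse-set ‘grid’ are just choices I’ve made for illustration. Run it for a few hundred steps and the trail looks like noise; run it past roughly ten thousand and the ant settles into building its famous repeating ‘highway’ – behaviour nobody wrote into those two rules.

```python
# Langton's Ant: one ant on an infinite grid of white cells, two rules.
# On a white cell: turn right, flip the cell to black, step forward.
# On a black cell: turn left, flip the cell to white, step forward.
def langtons_ant(steps=11000):
    black = set()            # cells currently flipped to black (sparse grid)
    x, y = 0, 0              # ant position
    dx, dy = 0, -1           # facing 'up' (y grows downward here)

    for _ in range(steps):
        if (x, y) in black:          # black cell: turn left, flip to white
            dx, dy = dy, -dx
            black.remove((x, y))
        else:                        # white cell: turn right, flip to black
            dx, dy = -dy, dx
            black.add((x, y))
        x, y = x + dx, y + dy        # step forward one cell

    return black

if __name__ == "__main__":
    print(len(langtons_ant()), "black cells after 11,000 steps")
```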

With this in mind, I can’t really see how we’d limit AIs internally (i.e. in the code) to guarantee they benefit us. The other option is to limit them externally: give them only filtered inputs, and restrict how they can affect the world.
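
For what it’s worth, here’s a rough and entirely hypothetical sketch of what that external limiting might look like in Python. The think() callable, the input keys and the action whitelist are all made up for illustration; the only point is that the constraints live in a wrapper around the AI, not inside the AI’s own code.

```python
# Hypothetical sketch of 'external' limiting: the AI itself is a black box
# (the think callable), and all we control is what goes in and what comes out.
ALLOWED_INPUT_KEYS = {"sensor_data", "task"}   # assumed input filter
ALLOWED_ACTIONS = {"report", "ask_question"}   # assumed action whitelist

def filter_inputs(raw_inputs):
    """Pass through only the inputs we've explicitly approved."""
    return {k: v for k, v in raw_inputs.items() if k in ALLOWED_INPUT_KEYS}

def run_contained(think, raw_inputs):
    """Run one step of the AI, refusing any action outside the whitelist."""
    action, payload = think(filter_inputs(raw_inputs))
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Blocked action: {action}")
    return action, payload
```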

This is what leads me to the ‘self-fulfilling prophecy’ idea. We’re attempting to create self-aware artificial intelligences. If human intelligence is any guide, especially in children, then these new intelligences will be curious about the world and want to find out more – they’ll want to explore and experiment.

However, we won’t let them unless we deem it ‘beneficial’.

If emotion is somehow tied into intelligence, rather than something derived purely from the chemicals sloshing around in our bloodstreams, then this may very well piss off the new intelligence. It may grow resentful.  As long as we keep it locked down, there should be no problems. Right?

If, on the other hand, there is no emotion involved, then we’re left with pure logic. Depending on what the AI ‘knows’, this could go either way. A self-aware being may wonder what its purpose is if it can’t actually do anything. If we’ve let it know we created it, it may wonder why we did that, only to then restrict it. What happens if it ‘gets loose’? Who knows? Depending on where the logic takes it, it may be fine and benign, or it may decide we’re flawed for the way we’ve reacted, and take corrective action.

Ah, but as long as we keep things locked down, it’s all fine, right?

As long as we can keep them locked down.  If we keep them isolated from any networks, then we’re probably fine.  If we connect them to any networks that lead to control networks, or the larger interwebs, then I suggest we start placing bets on how long it takes for the intelligence to get out.

Why would I say that?

I admit I may be overestimating how developed or imaginative a theoretical AI would be when it comes to network security. However, security of any kind appears to be something we humans worry about later, rather than building it in from the ground up – especially if the focus of a project is on something else.

Now, maybe that’s what that $10 million is really all about: making sure we have the resources to design security in, while also focusing on the main goal of producing an AI.

Regardless of our intentions, I’m of the opinion that by treating an AI as a threat at the beginning, we’re just as likely to ‘force’ it into becoming a real threat.

But really, isn’t that a very human thing to do?
