A human’s guide to the singularity

0

Humanity is living on borrowed time. Quite who we’ve borrowed it from and what the interest rates are like, no one is exactly sure yet, but in all likelihood it belongs to something distinctly non-biological in nature; something which is, in fact, artificial intelligence.

The mere mention of the dangers of artificial intelligence may prompt yawns from a public who’ve seen any number of films about armoured, gun-wielding robots going on killing sprees: ‘Terminator’, ‘The Matrix’ and ‘I, Robot’ all spring to mind. These films – which are, after all, designed as entertainment – make the prospect of genuine AI seem as fantastical to the average person as a hobbit showing up in your kitchen and trying to throw a ring into the depths of your microwave-lasagne.

The reality, however, is that AI is coming sometime in the next few years or decades, and the world must be prepared for it when it does. Although it might seem unbelievable now, if a true AI emerges, our world may be radically changed. Human progress will increase exponentially at a rate unprecedented by anything before it in history – the thing about intelligent computers is that the more intelligent they become, the better they get at becoming more intelligent. With super-intelligence, an AI could theoretically do… well, anything. It is possible that the world after AI will be as unrecognisable to us as a modern-day city would be to a Neanderthal.

Once an AI of general (that is, human-level) intelligence is made, a super-intelligent AI will not take long to develop from it, improving itself at ever-faster rates. This process leads to an event commonly known as “the singularity”: a point past which events in the world may become utterly incomprehensible to humans. Here everything gets a bit fuzzy, but suffice to say a super-intelligent AI would most likely have control over mechanised industry, the internet, nanorobotics and a whole host of other things we cannot even comprehend.

Once the AI dog is off the leash, it will be near impossible to get it under control again: if we’re lucky it’ll lick our faces and play fetch, but if we’re unlucky, it’ll tear up the sofa and pee all over the scatter cushions. Except in this analogy, the sofa is the world and the scatter cushions are the entirety of human society as we know it. We need to know that when we let it off the leash, this dog is trained.

The problem right now is that no one is regulating the training of the dog. Governments currently give free reign to artificial intelligence research, because it hasn’t presented any problems so far. This is because the “general intelligence” threshold has yet to be breached and there will be no symptoms at all until it is. Given the short-sighted nature of governments (cough, climate change, cough) their inaction is perhaps unsurprising, but if we’re not careful, we’re going to wake up one day and our living room is going to be full of dog shit. And no one wants that.

But how to train the dog? Much thought has been put into controlling an AI during development and beyond, chiefly by hardwiring instructions into its code which cannot be tampered with. The most famous early examples of this are Isaac Asimov’s Three Laws of Robotics, first introduced in 1942, which highlight how seemingly benign instructions – such as to prevent human beings coming to harm – may have unexpected and terrifying consequences. For example, a super-intelligent AI acting under the sole instruction to minimise human suffering might decide to kill all humans instantaneously – no more humans, no more suffering. Job done.

As such, the laws we code into potential AI software now could have huge, unexpected ramifications further down the line. The fact that this vital step is not being regulated is on a similar level to bio-warfare laboratories being open to the public – at some point, something deadly is going to escape due to the actions of one careless person. But in the case of AI, it’s not just millions who would die; it might be everyone.
On the bright side, if properly controlled, a super-intelligent AI could theoretically create a utopia if used correctly – the elimination of death within the next century due to super-intelligent AI is actually a very, very slight possibility, much as it may seem absurd. Either way, as the inevitable march of technology continues, it seems certain that in the coming century the fate of humanity will pass from our hands to metal ones. Willingly or not.

Share.

About Author

Leave A Reply

Facebook