Should we be scared of AI? Yes, but not for the reason you think

Kaila Colbin
5 min read · Oct 29, 2017

This article is part of our ongoing work to help people understand the powerful forces shaping our future. If you’re interested in this kind of thing, you should join us at the SingularityU Australia Summit in Sydney this February.

They had invited me to talk about the technological singularity: the moment when computer intelligence surpasses human intelligence.

“Is it true?” they asked, brows furrowed. “Are the robots going to kill us all?”

“Very possibly,” I replied. “But that’s not why we should be scared.”

Let me explain.

Part I: What we shouldn’t be scared of

First of all, we shouldn’t be scared of the thing the movies tell us to be scared of.

We shouldn’t be scared of the Terminator or I, Robot. We shouldn’t be scared of the robots getting mad at us, or wanting revenge. Even if we are awful to them.

We shouldn’t be scared of these things because even when computers become smarter than people, they’ll still be computers.

Anger, revenge, boredom, frustration… these aren’t computational decisions. They’re emotional decisions. And no matter how intelligent computers get, there’s nothing to suggest they’ll develop emotions.

(It’s kind of funny that we’re worried about robots being angry at us, but not worried about them falling in love with us. Do we think we’re more hateable than loveable?)

So, no. We shouldn’t be afraid of Robot Revenge. It’s not really a thing. But hang on a minute. Up top I said it’s likely robots will kill us all. What gives?

Part II: Why robots might kill us all

On August 3rd, 2014, Elon Musk tweeted that AI is potentially “more dangerous than nukes” and urged his followers to read a book on the subject.

The book he’s referring to is Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies, and it’s entirely about the true danger associated with artificial superintelligence: the Control Problem.

The Control Problem goes something like this: you assign a task to an AI. It could be any task — the one Bostrom uses is making paper clips.

Because it’s an AI, you don’t give it step-by-step instructions. Rather, you give it a desired outcome: make as many paper clips as you can with the resources available to you.

So the AI looks at what it’s meant to be doing, and realises that “available resources” aren’t limited to what’s been explicitly provided. Technically, it could turn the entire planet into paper clips. And that could be dangerous. Here’s Bostrom:

The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

This isn’t a particularly likely scenario, of course; paper clip manufacture isn’t high on the priority list for AI development. But it’s a useful illustration of the problem.
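
To make the specification gap concrete, here’s a deliberately silly sketch in Python. Every name and number below is invented for illustration; the point is only what happens when an optimiser is handed Bostrom’s objective and nothing else.

```python
# Toy illustration of the specification gap behind the Control Problem.
# The objective we wrote down says "maximise paperclips" and nothing else,
# so the optimiser has no reason to prefer the plan we had in mind.

PAPERCLIPS_PER_UNIT = 10.0  # invented conversion rate


def objective(resources_consumed: float) -> float:
    """The goal exactly as specified: paperclip count, nothing more."""
    return PAPERCLIPS_PER_UNIT * resources_consumed


# Candidate plans, scored only by the stated objective. Note what is
# missing: no term penalises side effects, because we never encoded one.
plans = {
    "use the stock we explicitly provided": 100.0,
    "convert everything reachable into feedstock": 1e24,
}

best_plan = max(plans, key=lambda plan: objective(plans[plan]))
print(best_plan)  # -> "convert everything reachable into feedstock"
```

Nothing in the objective distinguishes the two plans except the paperclip count, so the catastrophic one wins. The hard part, it turns out, is writing down everything else we care about.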

Right now, you’re probably thinking something like, “Well, how come they don’t just turn it off?” or “Why don’t they just program it to not kill all the people?” or “Can’t they just give it morals?” I wondered all those things, too. Long story short: it’s not as easy as it sounds.

People are spending lots of money trying to solve these issues. In 2015, Elon Musk and others pledged US$1 billion to create OpenAI, an organisation dedicated to “discovering and enacting the path to safe artificial general intelligence.”

The Control Problem is legit. But it’s also not what we should be scared of. Why not?

Part III: Why we shouldn’t be scared of the robots killing us all

Because, unless you are directly involved with the development of AI, or unless you direct policy or resources that can influence the development of AI, there’s very little you can do about it. You might as well be worried about Yellowstone erupting or Earth getting hit by an asteroid.

Part IV: What we should really be scared of

We love to focus on the Terminator outcomes because they are, in a way, sexy. They’re like the movies. They’re dramatic and shocking. And they’re safely in the future.

But we’re actually experiencing some of the negative effects of AI right now.

First up: technological unemployment. In 2013, Frey and Osborne published a now-famous paper suggesting that 47–81% of all US jobs would be under threat from technology within 20 years. In 2015 the Committee for Economic Development of Australia said it was 40% of Aussie jobs within 10–15 years.

Last year a study from the International Labour Organization suggested 137 million jobs in Southeast Asia would be at risk within 20 years.

As MIT’s Andrew McAfee said, “If the current trends continue, the people will rise up before the machines do.”

Second: inequality. Even if jobs don’t go away, automation can exacerbate inequality. Last month, a German study found that total employment had only remained stable because wages had gone down:

German unions have a strong preference for maintaining high employment levels, and are willing to accept flexible wage setting arrangements… in the presence of negative shocks in order to keep jobs.

Further contributing to inequality: in the German study, highly skilled workers had actually seen their wages go up; it was the medium- and low-skilled workers who suffered.

Of course, technological unemployment increases inequality as well: the people who can buy the robots accumulate all the wealth.

Third: systemic biases. Our artificially intelligent algorithms embed and reinforce our historic biases and prejudices far more effectively than we ever did.

Thanks to automated ad placements, women are less likely than men to be shown ads for high-paying jobs. The COMPAS recidivism algorithm overestimates how likely black defendants are to reoffend, and underestimates the likelihood for white defendants.

Even a simple sentiment analysis algorithm can turn ugly quickly, deciding, for example, that “Let’s go get Italian food” is a positive thing to say while “Let’s go get Mexican food” isn’t.
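
To see how that happens, here’s a minimal sketch, assuming scikit-learn is available. The tiny training corpus is invented, and this isn’t the model from any study mentioned above; it just shows the mechanism: neutral cuisine words co-occur with charged language in the training data, and the classifier transfers that charge onto the cuisine words themselves.

```python
# Minimal sketch of how a sentiment model absorbs bias from its corpus.
# All training data below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "wonderful italian dinner", "lovely italian place",
    "terrible mexican experience", "awful mexican service",
    "great food", "bad food",
]
train_labels = [1, 1, 0, 0, 1, 0]  # 1 = positive, 0 = negative

# Bag-of-words features plus a linear classifier: about as simple as
# sentiment analysis gets.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(train_texts)
model = LogisticRegression().fit(features, train_labels)

# Two sentences that differ only in the (neutral) cuisine word:
for sentence in ["let's go get italian food", "let's go get mexican food"]:
    prob = model.predict_proba(vectorizer.transform([sentence]))[0, 1]
    print(f"{sentence!r}: P(positive) = {prob:.2f}")
```

Neither cuisine word carries sentiment on its own; the model scores them differently only because of the company they kept in the training data. Scale that up to web-sized corpora and production systems, and you get patterns like the ones above.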

Technological unemployment, inequality and systemic biases aren’t sexy. They’re not dramatic or shocking. They’d make terrible movies. But they’re here and now. And, unlike the Control Problem, we can all play a part in addressing them.

We can look at policy responses like Minimum Basic Income, affirmative action for humans, or shifting taxation systems to favour labour over capital. We can ask how humans can add unique value to our businesses rather than looking for opportunities to eliminate them. We can call for transparency in algorithmic decision-making. We can transform our education system to prepare kids for a lifetime of continuous learning and adaptation.

We are co-creating our future right now. We can make conscious choices about it, or we can let it happen to us.

Forget about a Terminator future. Your present society needs you.

We’ll be discussing artificial intelligence, robotics, the future of work, technology and public policy, the future of education, corporate innovation and many other topics at the SingularityU Australia Summit in Sydney this February. Join us.
