Should We Be Afraid Of Artificial Intelligence?


AI is everywhere, and it often comes in disguise. The Snapchat filter that adds dog ears to your selfie? That's AI, right there on your smartphone, in the form of face detection. From social media to medical imaging, most of us, whether we like it or not, have grown dependent on AI systems. The main drivers of recent progress in AI technology are decades of exponential growth in computing power, the availability of large data sets for training learning systems, advances in the implementation of learning algorithms, and increasing investment from industry.
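For a taste of what such a system looks like under the hood, here is a minimal sketch of classical face detection using OpenCV's bundled Haar cascade. Today's phone filters use deep-learning detectors instead, but the idea is the same: find the face, then anchor graphics to it. The image filename below is just a placeholder of mine.

```python
# Minimal sketch of classical face detection (illustrative only).
# "selfie.jpg" is a placeholder path, not a real file from this post.
import cv2

# OpenCV ships with pre-trained Haar cascade models for frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
image = cv2.imread("selfie.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is a bounding box (x, y, width, height) around a face.
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("selfie_faces.jpg", image)
```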

Has it had a largely positive net effect? I would argue probably yes. So far the advantages have largely outweighed the disadvantages. We should be happy that a lot of smart people are doing amazing things with these systems. But even though the potential risks of AI might seem unclear and somewhat far away, I will explain in the next sections why they are in fact imminent and require much more thought and resources than we allocate to them at the moment.

Superintelligence

The AI community makes an important distinction between narrow AI and general AI, the latter also called Artificial General Intelligence.

The AI systems we know today, such as the ones deployed in self-driving cars or in your smartphone, are all instances of narrow AI. The term "narrow" refers to the ability to perform only the specific task that the AI was designed for. A narrow AI for self-driving cars, for instance, can steer your car down the highway but fails at investing your stocks. But what would happen if systems become capable of all tasks that humans can perform? A loose definition of such a general AI is:
AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed.
Most experts believe that such a technology is at least a few decades away. That could mean it arrives within a lifetime or two, depending on how bullish one is about mid-term technological progress. Either way, this is a very small time frame in the grand scheme of things.

There are strong reasons to believe that a system which reaches our level of cognitive ability will not stop getting better. Once we are able to build and improve intelligent systems, we can employ AI agents to work on new AI technology themselves. At some point these agents will probably come up with an AI system that is superhuman, meaning at least as good as humans at all tasks and better at some.

Imagine having a few Albert Einsteins in silicon. Such an armada of intelligent agents would probably produce rapid progress in many fields, including medicine, physics and genetics. We should be happy, right?

The problem we have ignored so far is that a general AI will probably do its own moral reasoning. As the Oxford philosopher Nick Bostrom argues, a superintelligence may well surpass us even in moral thinking. Its moral motivations nevertheless have to be encoded by its creators beforehand, because a superintelligence might be so powerful that it cannot be stopped by humans. So as a society, we have to think about which values we want encoded in these systems and whether we are willing to accept them beyond a point of no return.

Automation And Mass Unemployment

One of the main points that politicians and media outlets mention about AI is its potential to make a large number of jobs redundant. Why is this a problem? One might argue that the destruction of jobs will necessarily lead to the creation of jobs in other areas, potentially higher up the value chain and requiring more abstract thought that machines are not (yet) capable of. Even if we suppose this argument is true, it will still lead to a period of societal transformation with large potential for political and economic instability, in which resentment among the population might grow and populist parties gain even more traction.

Depressingly, former U.S. Treasury Secretary Larry Summers predicts:
I expect that more than one-third of all men between 25 and 54 will be out of work at mid-century.
Some thinkers even go so far as to predict a time when most tasks will be solved by a general AI and we humans can largely just enjoy the fruits. As I see it, this is largely a question of timeline. Before we have created a superhuman intelligence, we will probably have to do most of the work ourselves, though many jobs will already be replaced by very focused, specialized systems. Then, once we have created a superhuman system, most jobs will be taken over by AI, except the ones that really require human-to-human connection.

In any case, like the industrial revolution, AI will reshape the relationship between capital and labor in the world economy. It is possible that a country's technological edge will matter more for international power in the future than its population size. Governments and organisations have to prepare for this next industrial revolution, and it won't be easy.

Interpretability Of Neural Networks

One of the main drivers of current advances in AI systems is the artificial neural network. Neural networks are loosely inspired by what we know about how the human brain works. More specifically, they are a family of algorithms that can - with little manual engineering - learn how to solve a variety of problems. Even though we are able to optimize them with some human intuition, it would be overconfident to say that we really understand how they arrive at their predictions.
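To make this concrete, here is a minimal sketch of training a small neural network on a toy dataset; the library, dataset and hyperparameters are illustrative choices of mine, not anything taken from the systems discussed above.

```python
# Minimal sketch: a small neural network learns a non-linear decision
# boundary from data alone, with no hand-crafted rules or features.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy two-class dataset with a curved, non-linear boundary.
X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of 16 units each; training adjusts thousands of weights.
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
```

Note that nothing in this code states how to separate the two classes; the network figures that out by itself, which is exactly why its inner workings are hard to inspect.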

There is a whole area of research that deals with interpreting the decision boundaries of neural networks. The very need for such research shows that they are largely a black box that 'magically' transforms input into output. In the end, we are mostly just happy that they work so well.

Simpler algorithms can at least be interpreted to some degree and map more intuitively onto how we ourselves make decisions. If we bet the future of the field on neural networks and reinforcement learning algorithms, however, we must place great importance on ensuring the safety of these systems.
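To illustrate the contrast, here is a hedged sketch comparing an interpretable model with a neural network on the same kind of toy data as above; the specific models and parameters are again my own illustrative choices.

```python
# Sketch: a shallow decision tree exposes human-readable rules, while a
# neural network's "explanation" is just a large collection of learned weights.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)

# The tree's learned rules can be printed and read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["x1", "x2"]))

# The network offers no comparable summary of why it predicts what it does.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0).fit(X, y)
print("learned weights in the network:", sum(w.size for w in net.coefs_))
```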

Technical Accidents

Okay, let's assume we have created a benign AI that we have aligned with our human values. Nothing can go wrong, right? I wouldn't be so sure about that. In fact, alarming messages are coming from AI researchers, who explain that current algorithms are prone to many risks arising from technical design issues.

Besides the issues specific to constructing learning agents, there are all the issues arising from insecure IT systems, which we have seen with increasing frequency over the last decade. Let's say there is a swarm of autonomous armed drones deployed for military use and someone finds weaknesses in their algorithms. Could we end up with an army of drones that was benign before but can now be turned against its creators? Until we "solve" IT security, we can't be fully confident in our ability to control intelligent systems.

Conclusion

We have touched upon several aspects of potentially dangerous AI systems and the question remains: What do we do about it? Should we be paralyzed by our fear and just stop developing AI systems altogether?

From my point of view, we shouldn't let ourselves be driven into panic. AI will probably have a hugely positive impact on us individually and on society at large. The gains in health and quality of life are going to be significant. Nevertheless, we should put time and money into ensuring that the AI systems of the future will ultimately benefit us.

I'll see you all tomorrow.

Buh-bye.
