Expectations for artificial intelligence are expanding, and people are making more and more attempts to use AI in a variety of areas, from finance and medical care to manufacturing.
However, some are sounding alarm bells, warning that AI will eventually exceed the capabilities of humans. Shojiro Nishio, president of Osaka University, has been engaged in computer research for many years.
He speaks on AI-related arguments and issues we can expect in the future.
What is AI?
AI is technology that uses computers and software to do the work of a human brain. When a person uses software to issue a command to a computer, the computer uses an enormous amount of data to make calculations and quickly produce an answer. Computers can also learn to give answers by themselves.
How do computers learn that?
They use methods such as deep learning, which imitates the mechanisms of the human brain. While repeating calculations many times as instructed by software, computers learn a pattern. They then become able to create judgement criteria themselves and produce answers independently of any instructions given to them via software. The reason AI is attracting attention is that the technological environment to make it possible now exists.
What kind of environment is that?
First of all, there has been an explosive increase in the amount of data available for calculations. The more reliable data a computer can draw on in its calculations, the more accurate its answers will be.
How large is that increase in data?
According to a survey by a US company, the amount of data that existed in the world in 2013 was about 4.4 trillion gigabytes – 1 gigabyte corresponds to the volume of information contained in up to 1,000 books. This amount is expected to grow to 44 trillion gigabytes by 2020. If you stored all that data on 7-millimetre-thick tablet devices with 128 gigabytes of storage each and stacked the tablets on top of one another, the 2013 stack would have reached about two-thirds of the way from the earth to the moon. By 2020, or so people say, it will be 6.6 times that distance.
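The tablet-stack comparison can be checked with a little arithmetic. The sketch below uses the figures from the interview plus one assumed reference value, the average earth–moon distance of roughly 384,400 km; it is a rough sanity check, not an exact calculation.

```python
# Rough check of the tablet-stack comparison from the interview.
# Assumed reference value: average earth-moon distance ~384,400 km.

GB_2013 = 4.4e12      # 4.4 trillion gigabytes (2013 figure)
GB_2020 = 44e12       # 44 trillion gigabytes (2020 projection)
TABLET_GB = 128       # storage per tablet, in gigabytes
TABLET_MM = 7         # thickness per tablet, in millimetres
MOON_KM = 384_400     # assumed average earth-moon distance, in km

def stack_km(total_gb):
    """Height of a stack of 128 GB, 7 mm tablets holding total_gb."""
    tablets = total_gb / TABLET_GB
    return tablets * TABLET_MM / 1e6   # millimetres -> kilometres

print(stack_km(GB_2013) / MOON_KM)   # ~0.63: about two-thirds of the way
print(stack_km(GB_2020) / MOON_KM)   # ~6.3: close to the "6.6 times" quoted
```

The 2013 stack comes out at roughly 240,000 km, which matches the "two-thirds of the way to the moon" claim; the 2020 projection, being ten times the data, lands near the quoted multiple.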
That’s an incredible amount of data. Computing power must also improve to keep up.
Indeed. It is said that computers’ processing power doubles every year and a half. In reality, however, it has been increasing even faster. The price of related equipment is also becoming cheaper.
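A doubling every year and a half compounds quickly. The sketch below shows the growth factor implied by that rate; the seven-year window is an illustrative choice (matching the 2013–2020 data figures above), not a number from the interview.

```python
# Growth implied by processing power doubling every 1.5 years.
DOUBLING_YEARS = 1.5

def growth_factor(years):
    """Relative processing power after `years`, starting from 1."""
    return 2 ** (years / DOUBLING_YEARS)

# Over the 7 years between the 2013 and 2020 data figures:
print(round(growth_factor(7), 1))  # -> 25.4, i.e. roughly a 25x increase
```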
Because these computers work like a human brain while leveraging huge amounts of data and great processing power, their productivity is in some cases higher than that of humans. That is why some wonder if AI will take jobs away from people.
Are there still any technical problems?
Yes, there is still a big challenge, namely what to do about the “black box” of AI.
What is that?
Sometimes humans cannot determine how AI arrived at a certain answer. In other words, it’s a black box.
An AI called AlphaGo won a Go match against a professional Go player from South Korea in March last year.
In May this year, it defeated a Chinese player who is said to be the best in the world. However, during those matches, the AI made moves that humans found difficult to understand. This is not limited to Go – similar things are happening in other areas as well.
That reminds me: there are people who worry that AI will suddenly “go mad”. However, it could be that AI is producing correct answers and people are simply unable to follow or understand its reasoning.
That’s one way of looking at it. I am more concerned about what happens if AI produces incorrect answers or develops in a way that humans cannot control. That sort of thing will happen more and more often as AI is applied to real-life situations. Take self-driving cars – it’s dangerous if you cannot explain why a car made the judgement it did.
Is there any way to solve the black box problem?
You can check all the processes that the computer went through, from the moment the software gave a command until the AI produced an answer. However, creating such software is difficult. Even if you do manage to create it, it will slow the computer’s processing until it is no longer practical to use.
How to improve this checking is a major issue for research.
How do people in other countries think about the black box?
They believe it’s a problem. The important thing is to have an international response.
What do you mean by that?
AI does not exist in isolation. Eventually, it will become connected to the Internet and affect things on a global scale.
The black box was on the agenda at the meeting of the information and communications technology ministers of the seven major advanced economies held in Takamatsu last spring.
At this meeting, Japan proposed the creation of international guidelines for AI development. The Internal Affairs and Communications Ministry announced a draft plan in June this year.
What kind of plan is it?
The plan stipulates that developers should make sure that they can explain the results of the judgements their AI makes, that they can control their AI, and so on. The ministry will now seek to form a consensus both in Japan and abroad.
Do you think it is enough to ask developers to follow such principles, or to issue guidelines? These things are not enforceable.
If you focus too much on making things enforceable, you could end up hindering AI development.
Why is that?
Researchers and companies will lose motivation.
Also, any measures taken to preserve transparency need to be compatible with intellectual property and trade secrets.
Checking whether something can be controlled also takes time and money. What matters is how we strike a balance on issues like these. However, as we wrestle with them, it is essential to keep the black box in mind – also from an ethical point of view.