The concept of artificial intelligence and its evolution stretches over the last 65 years. In 1950 Alan Turing was the first one who asked the question “can computers think?” and so the discussion on the possibility of creating an artificial brain that can simulate the neural pathways and human reasoning has started. From the ’50s through the’ 80s research in the field of AI has gone through various stages, including one of turmoil between 1974-1980 when critics and financial obstacles almost stopped all research regarding artificial intelligence.
However, starting in the 90s the concept of AI began to be used successfully in IT. Research in this field has led to great discoveries concerning genetic algorithms and neural networks. Having said that, a possible future in which AI has a strong presence in everyday life is both seen optimistically and pessimistically. Lately, there were multiple and vehement warnings coming from researchers and experts. These were related to the need for researching various means of avoiding potential harm or calamity caused by an AI system with super intelligence. More than 150 scientists and entrepreneurs among which are Stephen Hawking, Elon Musk and the winner of Nobel Prize in physics, Frank Wilczek, have signed an open letter that emphasizes caution in using AI and accentuates the statement “artificially intelligent systems need to do what we want them to do.”
The first warning signal regarding AI came in 2013. An AI system (an algorithm) was programmed to play Tetris in such manner as to never lose. Surprisingly or not, it disregarded the commands and quickly learned that the way in which it will never lose is to put the game on pause. After this event many researchers have argued that an algorithm could learn on the long-term that it can avoid planned interruptions from a person if it disables the “red button” or if it identifies and deletes the code that allows the existence of interruptions outside the system.
What’s being done today in order to avoid unwanted situations?
Considering the idea of democratizing the power over artificial intelligence, Elon Musk built OpenAI. Through OpenAI, the entrepreneur said that he wants to make public all the information regarding the discoveries on AI. “Freedom lies in the distribution of power and despotism in its focus” he said, pointing out that he wants to “contribute to the possibility that the future will be a good one”. OpenAI aims at keeping the power over artificial intelligence away from a small number of powerful people and sharing its research with anyone who wants to have access to this technology for free. Thus Elon Musk and OpenAI want to fight malicious artificial intelligence not by restricting the access to this technology but by expanding it.
On the other hand, a group of researchers from DeepMind revealed how to safely interrupt an AI system. Through this operation one can stop an artificially intelligent computer and at the same time avoiding negative consequences on its learning processes. The safe interruption was tried on an AI which had a learning method based on rewards. In this case, the behaviour of the artificially intelligent system is guided by the successes of previous activities. By learning in the context of actions that will end up being rewarding it becomes unaware if a certain activity will cause damage for itself of for those around it. When the possibility of negative consequences arises, a person intervenes and presses the “red button”. But the human intervention can change the context in which the AI system operates and can cause two problems:
– Either the robot will learn that it shouldn’t repeat that action;
– Or it will regard the outside intrusion as an obstacle and will learn to avoid it by resisting human intervention and disabling the “red button”.
Researchers and experts have shown that the activity of an AI system can be interrupted safely through changes in its algorithm or code. This would cause the system to believe that he decided to change the course of an action or to discontinue it. It is a kind of selective amnesia. This framework allows the “human operator” to interrupt an artificially intelligent system repeatedly and safely without the AI learning how to prevent interruptions. However, the same researchers added that it is not clear if all algorithms are designed to be “safely discontinued”. This was prompted by the fact that they have failed in changing one which was more “general” and was not limited by certain assumptions such as “class environment”.
All in all, artificially intelligent systems have much evolving to do until red flags will arise. The AI should first become self aware. Then it should not be limited by the amount of coded data. Also, deep learning should take place out of human supervision. But the essential ingredient is acquiring common sense (“Basic knowledge about how the world works”). This “common sense” is difficult to program and involves much hand coding regarding a large amount of taxonomies. Without it, however, AI will never reach super-intelligence.