Looking at how much time people spend with their gadgets, one could say that technology has already conquered the world. Meanwhile, some scientists are discussing the possibility of a machine uprising: the emergence of artificial intelligence (hereafter AI) that would wage war on humanity. So far, this threat seems too distant for us to take it seriously. Still, scientists have reasons to entertain it; all we can do is wait and see.
Yet we can see how rapidly technology is penetrating our lives. Our devices are becoming smarter, and with them come new threats: the loss of important data or even identity theft. It is still hard to imagine installing antivirus software on a refrigerator or a smart home, but protecting your computer and smartphone is entirely possible. For example, read a review of Norton antivirus and decide whether it fits your needs or whether you want something else in an antivirus solution.
Speaking of a machine uprising and AI, neuroscientists from the United States have conducted a study suggesting that AI could be endowed with a sense of its own "mortality": the computer would, in effect, be taught that its existence is fragile.
Many intelligent systems, such as neural networks, work with information, so they are relevant to nearly every sphere of human activity. Transport logistics, medical services, banking, financial transactions, industrial optimization, autonomous driving, and urban infrastructure are only a fraction of the areas where neural networks can be, and already are, used.
Yes, such systems have taken over part of human work. Trained robots can fly aircraft, handle legal cases, write journalistic texts, and even perform surgery. Of course, all of these are promising areas for the full application of AI, and their activities are still strictly supervised by humans.
AI's Sense of Its Own Fragility – What Does It Mean?
People have already taught computers to synthesize and analyze information at a basic level, and now researchers from the USA plan to go further and recreate the function of homeostasis in machines. The authors of the work argue that making a computer take care of itself could not only improve its performance but also help scientists explore the nature of machine feelings and consciousness.
For example, if AI can sense touch and physical pressure, it will be able to assess risks and hazards. Creating a set of such inputs, analogous to human touch, smell, hearing, vision, and taste, would allow the computer to look after its own survival and make decisions on a new intellectual level. The ultimate goal of this project is to create an AI, embodied in, say, an autonomous robot, that can make decisions based on something resembling human feelings. Another question is where the machine would get such feelings from, and how it would interact with a person once it knows about its own vulnerability.
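As a loose illustration (not the researchers' actual model), the homeostatic idea above can be sketched in a few lines of Python: an agent tracks an internal "integrity" variable, maps a touch/pressure reading to a risk score, and takes a protective action when the risk threatens its survival. All names, thresholds, and the damage formula here are hypothetical.

```python
# Toy homeostatic feedback loop (hypothetical sketch, not a real system).
# The agent monitors a simulated "integrity" value and withdraws when
# sensor input signals danger to it.

def assess_risk(pressure, pain_threshold=0.7):
    """Map a pressure reading in [0, 1] to a risk score in [0, 1]."""
    return max(0.0, pressure - pain_threshold) / (1.0 - pain_threshold)

def homeostatic_step(integrity, pressure):
    """One control step: risk erodes integrity; high risk triggers
    a protective 'withdraw' action instead of 'continue'."""
    risk = assess_risk(pressure)
    integrity -= 0.1 * risk  # sustained pressure damages the agent
    action = "withdraw" if risk > 0.5 else "continue"
    return integrity, action

integrity = 1.0
for pressure in [0.2, 0.5, 0.95]:
    integrity, action = homeostatic_step(integrity, pressure)
    print(f"pressure={pressure:.2f} integrity={integrity:.3f} action={action}")
```

Gentle contact leaves the agent's state untouched, while pressure near the maximum triggers the self-preserving "withdraw" response; this deviation-from-setpoint loop is the basic shape of homeostatic control.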
What Are the Risks?
Microsoft President Brad Smith recently warned that we already need a new Geneva Convention to limit the development of lethal autonomous weapons systems. When an armed drone fires on a target, that is not a manifestation of AI consciousness at all but the action of an algorithm written by a human who gave the machine the right to kill. Despite this, many countries, including the United States, China, Israel, South Korea, Russia, and the United Kingdom, develop and use autonomous weapons.
AI researchers have no consensus on when full-fledged artificial intelligence will emerge. A few years ago, at a conference in Puerto Rico, leading scientists working on AI were surveyed. Most experts said that something resembling strong AI could emerge as early as 2045, but some researchers were confident that hundreds of years separate us from that event.
Some IT professionals, entrepreneurs, and even philosophers believe that we will never be able to create an absolute form of AI, because we ourselves are part of an intelligent system and may be living in a computer simulation.
The most famous adherent of the idea that humans live in the matrix is the head of SpaceX and Tesla, Elon Musk, who is also closely associated with AI technology. In 2016, Musk said that there was "just one in a billion chances that we live in base reality."
The idea of giving AI a sense of its own fragility seems rather interesting, considering predictions of AI's rapid evolution. Simulating homeostasis in androids is reasonable because, at the very least, they would base their decisions not on simple orders and logic but on something resembling compassion and empathy. And who knows, maybe we will manage to coexist with AI peacefully.