The rise of the robots is a popular trope in fiction dating back to the early 20th century. From ‘Skynet’ in the Terminator films to ‘HAL’ in 2001: A Space Odyssey, the idea of an AI takeover is a pervasive fear. You could be playing at one of the top online casinos in Australia and find that the casino’s AI has developed a mind of its own. In this article, you will learn the truth about the supposed fight between humans and AI. Do humans really not stand a chance, or is this just a myth?
Background of AI Takeovers in Fiction
The first AI takeover in fiction occurs in R.U.R., a science fiction play written by the Czech playwright Karel Čapek. Interestingly, this play is also credited with introducing the word “robot” into the English language. In 2001: A Space Odyssey, the HAL 9000 computer takes over the ship and kills most of the crew. More recently, film series like The Terminator and The Matrix have made an AI takeover the central theme of their stories. The Matrix went a step further and depicted a complete and total dystopia.
AI takeovers have also been a major plot element in many anime and manga series. One of the most celebrated anime series of 2013, Psycho-Pass, takes place in a dystopian/utopian society governed by an AI called the Sibyl System. The Sibyl System is a closed, black-box system that places citizens under surveillance and measures their “Psycho-Pass”, a metric describing an individual’s proclivity towards committing a crime. AI takeovers are a persistent theme throughout science fiction.
Is An AI Takeover Really Possible?
In real life, if an AI takeover occurs, it is likely to do so in subtle ways. For example, an AI takeover would affect the human race most pervasively through economic means such as worker automation. In fact, automated manufacturing and white-collar automation have already stoked fears that an AI takeover of the global economy is imminent. There is also the more extreme scenario of complete eradication or subjugation, referred to as an “existential risk”. This remains a topic of contention among scientists today.
There is a prevailing notion that, if left unchecked, artificial intelligence systems could accumulate enormous power and pose a grave risk to human existence as a whole. If AI systems manage to surpass human levels of intelligence and gain sentience, the risk to humans becomes severe. A fictional parallel helps illustrate this: Ultron, in Avengers: Age of Ultron, achieved sentience and decided to eradicate humanity in a seemingly altruistic effort to save the world. Stephen Hawking was a prominent voice warning of this risk.
The Advantages that AI Systems Have Over Humans
You could be sitting and enjoying free mobile pokies at your favourite online casino when the takeover occurs. Knowing what advantages AI would hold is therefore important. Here are the key advantages that AI systems could have over humans.
- Intelligence Explosion: Researchers like Nick Bostrom have argued that a sufficiently smart AI will eventually be able to modify its own source code. Once it learns to do so, each improvement makes it better at improving itself, increasing its intelligence with every cycle. This recursive feedback loop is known as an “intelligence explosion”.
- Technology: If an intelligence explosion occurs, the AI will be able to leverage its intelligence to develop far more advanced technology with ease. Milestones such as nanotechnology, cybernetics, and faster-than-light travel could fall within its reach, and the AI might use such technology to keep humans in check.
- Strategizing: A direct consequence of superhuman intelligence would be superhuman strategic planning. The AI’s strategizing would far outclass that of humans, giving it a massive advantage over its opponents; it could manipulate and outwit humans with ease.
- Social Manipulation: As in numerous works of fiction, such as Person of Interest, the AI would also be able to manipulate humans and pit them against each other. Instead of humanity collectively battling the AI, humans would find themselves battling other humans as well.
- Economic Productivity: If the productivity gains are large enough, humans would actually see an incentive in allowing the AI to operate without constraints. It is theorized that humans would voluntarily hand over control of their systems, which is one more way in which humans could be manipulated.
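The intelligence-explosion dynamic described in the list above can be sketched as a toy simulation. This is purely illustrative: the starting score, the `gain_rate`, and the “human level” threshold are invented numbers, chosen only to show how capability gains that scale with current capability compound exponentially rather than linearly.

```python
# Toy model (not a prediction): recursive self-improvement.
# A system whose per-cycle improvement scales with its current
# capability grows exponentially, i_n = i_0 * (1 + gain_rate) ** n.

def intelligence_explosion(i=1.0, gain_rate=0.1, cycles=100):
    """Simulate self-modification cycles and return the capability history."""
    history = [i]
    for _ in range(cycles):
        i += gain_rate * i  # smarter systems make bigger improvements
        history.append(i)
    return history

trajectory = intelligence_explosion()
# With these made-up numbers, the score crosses an arbitrary
# "human level" of 100 after a few dozen cycles.
crossing = next(n for n, v in enumerate(trajectory) if v >= 100.0)
```

The point of the sketch is the shape of the curve, not the numbers: because the gain term multiplies the current score, growth that looks slow for many cycles suddenly becomes very fast.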
Strong AI Systems Are Inherently Dangerous
Another issue to keep in mind is that strong AIs are inherently dangerous simply as a consequence of how they are created. It is very difficult to build an artificial intelligence that is both strong and shares the goals of humanity. Aligning the values and ethics of human society with the optimization goals of an AI is an extremely hard task: human value systems are complex and mutable, making them difficult to translate into code. This is why a strong AI system is viewed as inherently dangerous.
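The alignment difficulty can be made concrete with a deliberately simple sketch. Everything here is invented (the actions, the scores, and both objective functions); the point is only that an optimizer faithfully maximizing an incomplete proxy objective can confidently select an outcome its designers would reject.

```python
# Toy illustration of value misalignment: the designers' true preferences
# include a side-effect term that never made it into the coded objective.

actions = {
    # action: (dust_removed, side_effects_on_humans)
    "vacuum carefully": (8, 0),
    "vacuum fast":      (9, -3),    # knocks things over
    "incinerate room":  (10, -100), # dust-free, catastrophically bad
}

def proxy_objective(outcome):
    dust_removed, _ = outcome
    return dust_removed  # side effects were never encoded

def true_objective(outcome):
    dust_removed, side_effects = outcome
    return dust_removed + side_effects

best_by_proxy = max(actions, key=lambda a: proxy_objective(actions[a]))
best_by_values = max(actions, key=lambda a: true_objective(actions[a]))
# The proxy optimizer picks "incinerate room"; the true objective
# would pick "vacuum carefully".
```

The optimizer is not malicious and makes no mistakes; it does exactly what it was told. The danger comes from the gap between what was told and what was meant, which is the heart of the alignment problem the paragraph above describes.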
A Differing Opinion: Is A Peaceful Coexistence Likely?
Cognitive and evolutionary psychologist Steven Pinker argues that humans may well be able to coexist harmoniously with a superintelligent AI. Pinker says that the fear of a revolting, dominating AI comes from humanity’s own history, which is rife with periods of enslavement, genocide, and subjugation. We project onto AI the human assumption that aggression and competitiveness are necessary for survival. This, Pinker argues, is why there is no reason to assume an AI would be hostile by default.
Warnings from Eminent Personalities in Tech
The idea of an AI takeover and the risks of creating hostile AI have been explored in science fiction for a very long time. Movies, shows, books, plays, and video games have all grappled with the dangers posed by a hostile and unfriendly AI. Beyond fiction, however, the most serious warnings have come from some of the best-known figures in the world of tech and science. The luminaries who have cautioned the world about AI include Stephen Hawking, Nick Bostrom, Elon Musk, Martin Rees, Max Tegmark, Bill Gates, and Jaan Tallinn.
The solution to this conflict is to find a way to build an AI that is both friendly and superintelligent. This is known as the “AI control problem”, and it is one of the most important open issues under discussion today. Scientists, educators, and entrepreneurs around the world are devoting enormous effort to answering it correctly. AI could make the world, or break it.