
AI Is The Next Step In Human Evolution. Is It Already Here?

  • A Punkrock Capitalist
  • Oct 20, 2022
  • 7 min read

Is it crazy to believe that sentient AI will be anything BUT the annihilation of humanity? I don’t think so. AI is already a handy companion in our daily lives: it helps us take pictures, choose a movie on Netflix, or scroll through our social media feeds. OK, granted, there is a huge difference between algorithmic software and a T-800 (The Terminator). But why do we always have to assume it will come to the worst-case scenario? I happen to think that the integration of AI into our everyday lives is part of the “natural” human evolutionary process. Let me elaborate.


Since the dawn of sentient hominids, we have used tools to enhance our capabilities: stones to grind down materials too hard to shape with our teeth, steel to forge swords harder than our bones. We created digital memory to extend our physical memory and computing abilities. Animals in the wild use tools in a similar way; some primates even use spears! They do this to deliberately enhance their natural capabilities. So what is it about AI that scares us, if it’s just another enhancement of our natural capabilities? How far along are we? And is a Matrix-like scenario realistic?


First off, let’s talk about what I mean when I say AI. When I talk about the full manifestation of AI, I mean Ava from the movie Ex Machina; if you haven’t seen it yet and you are interested in AI, you are missing out. Another example is Agent Smith from the Matrix movies, a software program that became sentient and went rogue. Then there is my favorite and most realistic example of an AI in a movie: Samantha from Her. She is a mobile phone assistant like Siri, but sentient and self-aware.


This brings us to a crucial caveat when talking about AI. True AI is sentient and self-aware: conscious, if you will. Technically, that is one of the requirements an entity must meet to be considered artificially intelligent. It must also use reason to form an argument, be able to learn independently, plan actions, and communicate “naturally” with humans through language and comprehension. A system or entity that exhibits all these characteristics would be indistinguishable from a natural human being. Enter the Turing test.



The test is named after Alan Turing: mathematician, logician, cryptanalyst, philosopher, and to many the father of theoretical computer science and artificial intelligence. The Turing test challenges an artificial intelligence to convince you it is real by exhibiting the above-mentioned characteristics, like self-awareness and sentience. Turing first described such a test in his 1950 paper “Computing Machinery and Intelligence.” It is not as complex as it seems: an AI should simply be able to converse with you in a way that you would not take it for a machine. A conversation with a human being is pretty straightforward. A “real” human being has a set of beliefs, call them opinions or a framework of how the world works, and if you ask that person a question, they will answer through the prism of their cumulative “perspectives” based on those opinions. All of this, which is second nature to us, is not easy for a machine.


The Turing test is a regular occurrence: it is part of the annual competition for the Loebner Prize, which is awarded to the computer program the judges consider “most human-like.” The test uses linguistic fluency as the measure of passing: the judge (a human) holds a textual conversation with a subject (machine or person) and must decide at the end whether the subject was a machine or a person. The goal is for the machine to “pass” by being indistinguishable from a human in conversation, not, as many suggest, to display intelligence by answering questions correctly. No regular person knows the answer to every question you ask them, but they might make something up that convinces you they do; this is what a machine needs to do to pass the Turing test. The premise of Ex Machina is that Caleb (the protagonist) is called in to talk to the android Ava and find out if she is conscious, and the movie suggests that this is the Turing test. But Ava would probably already have passed the Loebner Prize version of the test at the level she converses and behaves in the movie, and if you need to bring in an expert to find out whether she is telling the truth about being sentient, then she has already fooled you.
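To make the protocol concrete, here is a toy, deterministic sketch of the Loebner-style setup described above. The subject, its canned replies, and the naive judge are all my own invention for illustration, not any real competition entry: a judge exchanges text with a hidden subject, then labels it “machine” or “human,” and the machine passes a trial when the judge labels it “human.”

```python
def canned_machine(question: str) -> str:
    """Stand-in for a chatbot: answer what it can, bluff the rest."""
    answers = {
        "do you dream?": "Sometimes, usually about deadlines.",
        "what's 7 times 8?": "56, give me a harder one.",
    }
    # Bluffing instead of admitting ignorance -- the "make something up"
    # strategy the post argues a machine needs in order to seem human.
    return answers.get(question.lower(), "Good question. I'd have to think about that.")

def judge_verdict(transcript: list[tuple[str, str]]) -> str:
    """A (very naive) judge: flags the subject as a machine only if a
    reply literally gives the game away."""
    for _question, reply in transcript:
        if "machine" in reply.lower() or "program" in reply.lower():
            return "machine"
    return "human"

def run_trial(questions: list[str]) -> bool:
    """Returns True when the hidden machine passes (judge says 'human')."""
    transcript = [(q, canned_machine(q)) for q in questions]
    return judge_verdict(transcript) == "human"
```

Of course, a real Loebner judge probes for exactly the kind of opinionated, perspective-laden answers described above, which is why fooling one is so much harder than this sketch suggests.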


No AI has officially passed the Turing test yet. Some have claimed to, but if you investigate those instances you will find a variety of contrarians listing several reasons why the claims are false. There are computer programs with a real, human-sounding voice that use language just like humans, like Google’s Duplex, or that answer in a very human-like manner, like Cleverbot or Eugene Goostman (the program most often cited as beating the test), but none is undisputedly recognized for tricking humans into believing it is human. To be honest, though, Cleverbot is freaky; try it out.


I believe what scares us most about AI is the assumption that a conscious intelligence smarter than us will realize how much of a problem we are for the planet and for ourselves. Many iterations of AI in movies have led us to believe exactly that: “humanity is a virus and the only way to stop it is eradication.” Interestingly, this is a similar point made by people who believe the Earth is a singular “intelligent” organism unleashing viruses, natural disasters, and the like to rid the planet of its parasitic inhabitants. But there is a way we can escape the fear of an AI takeover, if we examine and take seriously some science fiction lore that has long been favored by people in AI and robotics research as the “sacred protocol” to include in any AI’s operating code: the Three Laws of Robotics.


Isaac Asimov was a Russian-born American writer and professor of biochemistry, and his Three Laws of Robotics, also called Asimov’s Laws, have long been thought to be the protocol that needs to be inscribed or programmed into future AI to keep it safe and stop it from going on a murderous rampage when it finds out that humans are actually terrible people.


The laws are as follows:


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.


2. A robot must obey the orders given by human beings except where such orders would conflict with the First Law.


3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


A wonderfully simple and seemingly wholesome set of commands that might save us from life as a battery. Of course, the laws are open to a wide array of interpretations; they are more philosophical than practical and require an entirely different discussion about what we understand as ethics. But they give a glimpse into the kind of thinking required to put constraints on an intelligence. The debate about how these laws can be broken, or why they are not useful, has been going on since Asimov first introduced them in the ’40s (yes, the 1940s!). Interesting side note: the short story in which the laws first appeared referenced a “Handbook of Robotics” from the year 2058, and that story was republished in the 1950 collection I, Robot. Asimov might not have been too far off in predicting when we would need such laws.
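The kind of thinking the laws require can be sketched as a prioritized filter on proposed actions. Everything here, the action fields and the helper, is invented for illustration; real safety constraints are nothing this simple, which is partly the point above about the laws being philosophical rather than practical:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # would directly injure a human
    prevents_harm: bool = False      # inaction here lets a human come to harm
    ordered_by_human: bool = False   # a human commanded this action
    self_destructive: bool = False   # endangers the robot itself

def permitted(action: Action) -> bool:
    # First Law dominates: never harm a human...
    if action.harms_human:
        return False
    # ...and the "through inaction" clause overrides everything below it.
    if action.prevents_harm:
        return True
    # Second Law: obey human orders (First Law conflicts already excluded).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, lowest priority.
    return not action.self_destructive
```

Even this toy version hints at the interpretation problems: deciding what counts as “harm,” or as a single “action,” is exactly where the laws break down in Asimov’s own stories.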


I have worked in a laboratory for AI research before, and I sometimes volunteered as a subject for experiments involving AI, robotics, and machine learning. The first time I participated in such a study, I wore gloves that controlled the hands of a human-sized android, about 5’ 7’’ tall. After the first couple of little tests, like unscrewing a bottle and opening a Pringles container, I looked at the robot and could not help but feel a connection growing. At first our movements were disconnected and blunt, but we quickly tuned into each other, and the movements became smoother and more comfortable.


It is weird to say, but the question of the android’s consciousness came up in my mind: did he have any form of awareness of my actions or movements beyond being fed data through the gloves? It worked better with every try, and I know it was me adapting to the delay, the positioning, and so on. The clear answer in my head was no! He is just a collection of metal moved by electricity. But as much as I wanted to dismiss the thought, I couldn’t.


Aren’t humans also just a collection of carbon powered by some electricity? I later found out that the study wasn’t about how well a human can pilot the robot, but about the psychology of the connection that evolves when working with a robotic “assistant.” That was exactly what I had been thinking about while piloting the robot: I felt an immediate connection, and it grew. Depending on your level of geek you might get this next part: it made me think of what the creators of Gundam anime must have in mind when they write operators or pilots so attached to their machines that they treat them like an extension of their body, or even part of their family, even though they are just empty hulls.


There was no talking during the experiment, and the robot had no human features besides a human-shaped body. I can only imagine if he had held a human-like conversation with me; I probably would have whispered, “Meet me in the hallway in five minutes, I’ll get you out of here.” Whatever the future of robotics and AI holds, we know that someday a sentient, or sentient-seeming, being will be part of our daily lives. Our smartphones are proof of this trajectory, and Moore’s law tells us the pace will only increase, exponentially. Did calculators, the ability to store a memory in the form of a photo, and the reminders in our digital calendars not make us better and more efficient human beings? I would argue they did, or at least generally more efficient, and so did many other tools we have invented since the dawn of civilization! Future technological advances will overshadow all these past accomplishments to the point where we will be more machine than human. Maybe that was the process all along: another “tool” to help us shape our environment. As fire and flint did for the cavemen, and iron and chainmail did for the knights, so will AI and intelligent robots be our tools to conquer an uncertain future environment.


Thanks,

A Punkrock Capitalist




