A Suggested Simplest Paradigm for Understanding Artificial Intelligence

By Salvador Garcia


Introduction

The term Artificial Intelligence (AI) is thrown about a lot these days. It can bring to mind computers that converse with us as though they were human, or memorable characters from science fiction movies such as HAL 9000 or C-3PO. Even the term sounds odd. How can there be an intelligence that is artificial? Isn’t the concept of intelligence directly indicative of life?

I am not going to try to provide a definition because AI is such a broad field that most likely I will leave out something important. Instead, let’s just look at AI as a way for computers to make decisions. If you think about it, isn’t this a good way to describe biological intelligence? We, as humans, make decisions all the time based on information, which we could also call knowledge. We know that a red light means stop, so when our car reaches an intersection where the light is red, we take action to stop it.

Computers also need knowledge, although when we talk about these silicon units we use the term “information”. The information is stored in a database of some sort. It could be a list of items stored in memory or it could be real-time data acquired from the outside world. Robots are a perfect example of AI systems, of varying degrees of complexity, that acquire their “knowledge” from the outside world.

This article will take you on a brief journey through AI and what it encompasses, hopefully spurring thoughts and ideas. A future article will provide a basic example of how AI can be implemented in a Windows desktop application.

 

[Figure 1]

When the coin is tossed it can land heads or tails; we’ll just ignore the remote possibility that it will land on its edge. This gives us an AI system that can make decisions based on two possible inputs, in other words, a binary system. The randomness removes any illusion of intelligence, but it remains a decision-making system based on (admittedly silly) input.

With this technique we can build a system that can provide two outcomes. This means that we can ask it any Yes/No question; however, the randomness of the decision makes it a rather useless application.

Implementing this in code is trivial. All we need is a text box for the user’s question, a random number generator to provide one of two numbers, and a mechanism to display either “Yes” or “No”.
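As a rough sketch of how trivial this is, here is the idea in Python rather than as a desktop application; the function name and the empty-question check are my own additions for illustration:

```python
import random

def magic_answer(question: str) -> str:
    """Return a random Yes/No answer, ignoring the question's content."""
    if not question.strip():
        return "Please ask a question."
    # random.randint(0, 1) plays the role of the coin toss:
    # 0 means tails ("No"), 1 means heads ("Yes").
    return "Yes" if random.randint(0, 1) == 1 else "No"

print(magic_answer("Will it rain tomorrow?"))
```

In a Windows desktop version, the text box would feed `magic_answer` and a label would display the result; the decision logic itself would be identical.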

Taking it Up a Notch

Let’s abandon the coin toss and move up to a more complex model. Our AI system can now provide the following answers: “Yes”, “No”, “Maybe”, “You’ll have to ask again”, “It is highly probable”, and “It’s highly unlikely”. This provides six possible outcomes. What sort of decision-making process should we have? The easy and “un-intelligent” way to do this is to generate one of six random numbers and provide the corresponding answer. Could there be a way to actually understand the question and decide on an answer based on it?

This now takes us to another level where the application actually understands the question to a certain point and is able to provide a more sensible answer. A question like “Will I ever get married?” could produce an answer of “Maybe”, but a question like “Do I have two heads?” should not.
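A minimal way to sketch this idea is a rule that fires before the random fallback. The list of “absurd” patterns below is entirely hypothetical, invented for illustration; a real system would need genuine language understanding rather than keyword matching:

```python
import random

ANSWERS = ["Yes", "No", "Maybe", "You'll have to ask again",
           "It is highly probable", "It's highly unlikely"]

# Hypothetical rule set: questions about impossible facts get a
# firm "No" instead of a random hedge.
ABSURD_PATTERNS = ["two heads", "live forever", "older than the universe"]

def answer(question: str) -> str:
    q = question.lower()
    if any(pattern in q for pattern in ABSURD_PATTERNS):
        return "No"                 # a rule fired: no randomness needed
    return random.choice(ANSWERS)   # otherwise fall back to chance

print(answer("Do I have two heads?"))      # always "No"
print(answer("Will I ever get married?"))  # one of the six answers
```

Even this crude filter shows the shift in kind: some answers are now decided by knowledge instead of chance.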

The Turing Test

We can still take AI to another level. Up to this point we have been limiting the user to questions that can be answered with either “Yes”, “No”, or some related locution (“Maybe”). The next step is to let the user type in any statement and then have the AI application respond with something that mimics the answer another person could give.

Alan Turing was a British mathematician born in 1912 who was instrumental in breaking Nazi Germany’s encrypted messages during World War II. He also proposed a test for distinguishing a computer from a human. The test involves three parties: two humans and one computer. The first human converses through a text-based channel with each of the other two parties, without knowing beforehand which is which. The computer passes the Turing Test if the first human cannot tell the difference between the other human and the computer. Figure 2 illustrates this process.

 

[Figure 2: The Turing Test]

 

To pass the Turing Test, the computer needs to understand natural language and respond to the party at the other end of the line in such a way that the party cannot tell the difference between a human and a computer. Here, AI needs to accomplish two tasks: the first is to interpret the text it is given, and the second is to respond intelligently.

In this case, our AI system acquires its information, namely the text it is given, plus a defined set of rules that it uses to understand that text. Using the given text and the aforementioned rules, it makes the necessary decisions to compile an answer. This is easily said, but implementing a system that is convincing is another matter entirely.

By repeatedly doing this, the AI application attempts to carry on an intelligent conversation with the user, mimicking the process that a person would employ to sustain it. In the Turing Test simulators and online avatars that a simple Google search turns up today, the conversation can become somewhat weird. However, if the AI system is intelligent enough to formulate human-like responses, it becomes ever more difficult for the person at the other end of the line to identify it as a computer.
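The classic, and very limited, way to fake this is pattern matching in the spirit of Joseph Weizenbaum’s ELIZA: match fragments of the input and reflect them back. The handful of rules below is my own invented sample, not a real chatbot’s rule set:

```python
import re

# A few hypothetical pattern -> response rules. {0} is filled with
# the text captured by the pattern's group, reflecting it back.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]
FALLBACK = "Tell me more."

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am worried about robots"))
```

The weirdness the article mentions comes precisely from the fallback: whenever no rule matches, the program has nothing intelligent to say and must deflect.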

Probability

An AI system needs information to make decisions, but in some cases there isn’t enough information directly related to the topic to make a conclusive decision. Here the AI system can use probability to help it reach some logical conclusion which it then uses to make a decision.

We can see probability as a scoring system from 0 to 100%, where an event is unlikely to happen as the score approaches 0 and very likely to happen as it approaches 100%. For example, we can be given a list of block colors: green, blue, red, pink, black, grey, yellow and brown. Then three blocks are placed in front of us, colored red, pink and blue. What is the probability that one of the blocks is blue? We don’t have to rack our brains to answer this one, but what if the blocks were inside boxes and we could not see the colors?

Although we can’t answer the question with certainty because we can’t see the blocks, we can still give an answer based on probability: blue is on the list of colors, so there is some chance that one of the three blocks is blue. What if we were asked the same question, but instead of blue, the color was violet? Given that this color is not in the list, we don’t need a complex probabilistic model to answer the question with certainty.
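The blocks example can be made concrete under one added assumption the article doesn’t state: that the three hidden blocks were drawn uniformly at random, without repeats, from the eight listed colors. Under that assumption the chance that blue is among them works out to 3 in 8:

```python
from math import comb

COLORS = ["green", "blue", "red", "pink", "black", "grey", "yellow", "brown"]

def prob_color_present(color: str, drawn: int, palette=COLORS) -> float:
    """Probability that `color` appears among `drawn` blocks picked
    uniformly at random, without replacement, from the palette."""
    if color not in palette:
        return 0.0   # violet is not on the list: certainty, no model needed
    n = len(palette)
    # P(color among drawn) = 1 - C(n-1, drawn) / C(n, drawn)
    return 1.0 - comb(n - 1, drawn) / comb(n, drawn)

print(prob_color_present("blue", 3))    # 0.375, i.e. 3 in 8
print(prob_color_present("violet", 3))  # 0.0
```

Note how the violet case short-circuits: the knowledge base alone settles the question, exactly as the article argues.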

We have used probability to make a decision. An AI system can apply the same technique, especially when there are unknown variables. This comes in handy when a robot’s sensors don’t provide enough information to make a straightforward decision about what to do next.

Robot AI

It could conceivably be “easy” to add an AI application to a robot’s computer system that would allow the robot to carry on a pseudo-intelligent conversation with someone. Given today’s technology, the robot could actually verbalize the compiled answers and even understand spoken words, but AI can play a more significant role. A robot has diverse inputs. Up to now, my discussion has covered AI systems that use a keyboard to let the user enter the conversation’s text, but a robot has more inputs than a keyboard (or a microphone, if it “understands” the spoken word). It also carries a number of sensors that allow it to survey its surroundings. These sensors can measure heat or light, determine whether the robot is near an obstacle, or distinguish between a person and a stone pillar, among other things. Figure 3 presents a rudimentary AI robotic system.

 

[Figure 3: A rudimentary AI robotic system]

 

The True Implications of Asimov's Three Laws of Robotics

The AI system could use all of these inputs to make decisions, and these decisions can be based on a set of rules. One such set was proposed by Boston University biochemistry professor and renowned science fiction writer Isaac Asimov. The irony is that his rules, which seem completely sensible at first glance, become unworkable in many situations, and it is exactly those situations that provided the basis for many of his most fascinating stories. Let’s use Asimov’s Three Laws of Robotics as a heuristic example to get a peek into a possible AI system. For the purpose of this illustration, let’s set aside the extreme, mind-blowing scenarios where these laws don’t hold up.

“A robot may not injure a human being or, through inaction, allow a human being to come to harm.” The AI system can determine, based on the information acquired, if the robot is about to crash into someone or if there is danger in the environment (a fire, for example) that could cause harm to someone. The AI system could then take action to either prevent harming the person or warn him or her.

“A robot must obey orders given it by human beings except where such orders would conflict with the First Law.” To make giving the robot commands efficient, it must have the necessary input systems; the most convenient for humans would be the spoken word. Whatever form the command takes, it will most likely follow the grammar of some human language. The AI system must be able to understand the command, and having done so, the robot must decide whether the order can be carried out without violating the First Law.

“A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” The robot needs to decide what actions to take, based on its surroundings, so that it will not come to harm. This may include not crashing into walls or falling into a hole. Once the AI system has decided which actions are safe to take it must validate them against the First and Second Laws to ensure that there is no conflict.

The above implies a hierarchy of decisions that an AI system must be equipped to handle; how well it handles them is another matter. Common to all three laws is that the robot needs to acquire data from its environment, and its logic needs to assign a priority to the information so that the AI core can determine the next step. Once that is decided, the robot can either discard the information or save it to build a memory of events. This memory can help with future decisions when not all data is available and probability analysis needs to be involved.
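The hierarchy can be sketched as checks applied in law order. Everything here is invented for illustration: in reality the hard part is filling in the boolean flags from sensor data, which this toy simply assumes has already been done:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, with flags assumed to come from
    (hypothetical) sensor processing upstream."""
    name: str
    harms_human: bool = False       # relevant to the First Law
    ordered_by_human: bool = False  # relevant to the Second Law
    harms_robot: bool = False       # relevant to the Third Law

def permitted(action: Action) -> bool:
    if action.harms_human:
        return False                # First Law always wins
    if action.ordered_by_human:
        return True                 # Second Law: obey if no human is harmed
    return not action.harms_robot   # Third Law: self-preservation comes last

print(permitted(Action("drive through crowd", harms_human=True)))  # False
print(permitted(Action("fetch coffee", ordered_by_human=True)))    # True
print(permitted(Action("walk off ledge", harms_robot=True)))       # False
```

The ordering of the `if` statements is the whole point: each law is only consulted when the laws above it are silent, mirroring the priority the article describes.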

 

Epilog

Where is AI today? The answer depends on whom you ask. Personally, I believe that it is in its infancy. Yes, we have come a long way, but the fact remains, we have a long way to go before AI can compete with the human mind. As an example, let's get extreme and consider a situation involving risk to life.

Consider the following scenario: a robot bus driver hurtling down the road suddenly finds a stalled car carrying a family immediately in front of it. The bus is full of passengers, and the only other avenue is straight into a packed restaurant. My first question is: what would you do? This is clearly a lose-lose scenario. No matter how noble you are, how strong, or how competent at driving the bus, someone is going to get hurt. Now consider the Three Laws of Robotics and an AI system. What will it decide? How can it make such a decision? AI will reach a milestone in its evolution when it is able to make a life-and-death decision in such a way that it is not distinguishable from a decision the human mind would make.

As mentioned earlier, the AI system needs information and the ability to decide based on it. In the case of the bus-driving robot, the amount of information that a human would assess and act upon in an instant is well beyond the capabilities of current AI systems. Here is where Moore’s Law comes to the rescue. This law basically states that the number of transistors in a microprocessor doubles roughly every two years (often quoted as every 18 months), while size and cost fall. Let’s look beyond the transistor count: what Moore was really saying is that as time moves forward, so do the advancements in technology. If the AI systems available today are not capable of making such crucial decisions, future technology will at some point provide AI with enough complexity and sophistication to make such a process a reality.

To see robot AI in action, navigate over to YouTube and search for “artificial intelligence robot.” What I found were many video demonstrations of robots conversing with humans, but none that used AI for anything beyond that. Simulating intelligent conversation is far removed from making decisions such as the one presented above. While it can be entertaining to converse with an AI robot, AI’s capabilities and potential must go far beyond that.

We can only imagine what further advancements in technology will bring to AI.