John Searle, a well-respected philosopher at the University of California, Berkeley, has advanced a controversial view of Artificial Intelligence (AI) and what it means. He distinguishes between strong and weak AI, a distinction most famously presented in his “Chinese Room” argument. Searle explores the following propositions:
Weak AI consists of programs and organized computational systems that are able, in a limited way, to learn through their experiences. However, Searle explains that this cannot truly be intelligence: the machine is merely executing algorithms supplied by humans to complete a process. So, when the machine seems to be ‘learning,’ it technically is not; it is only doing what it was built for. No matter how complex the system may be, even one such as IBM’s ‘Watson,’ Searle holds that it does not genuinely think. This brings us to strong Artificial Intelligence.
In order for something to qualify as strong Artificial Intelligence, at least in Searle’s book, it would have to be a direct duplicate of the human brain. Strong AI is, in essence, a mind itself, and not just a tool like weak AI. On this view, our brains are the strongest such machines known at the moment; they are, essentially, digital computers. Of course, building one is a feat that is (at least at the moment) impossible. The human brain is so complex that even we do not fully understand how it works. This is why Searle believes that creating strong AI is not feasible. Even then, the device would have to be a duplicate with the exact nerves and tissue that the brain uses, not electronics. Creating something like this would essentially be creating another human.
Thinking is also something that you have to consider when talking about Artificial Intelligence. Are computers really thinking? What constitutes thinking? Searle has views on this as well. He believes that strong AI has little to tell us about thinking since “it is not about machines but about programs, and no program by itself is sufficient for thinking” (Searle, 1980). Searle differentiates between machines and programs and makes it clear that any form of AI must be realized through a program running on a machine, not by the machine alone. Something has to be implemented on the machine for it to exhibit artificial intelligence, and no program by itself is sufficient for thinking. Only machines with the causal powers of brains, or with functions very similar to those of brains, could think without a program. This brings us to the ‘Chinese Room’ argument and how it questions the thinking and knowledge aspects of Artificial Intelligence.
Let’s say that we place a human in a room with an input slot and an output slot. Individuals outside the room pass in questions written in Chinese, and the human inside uses a symbol-to-symbol translation manual to pass out the correct answers in Chinese. Does the human know Chinese? Searle thinks not. This is the Chinese Room argument. There are multiple responses to his argument, including the systems reply, the robot reply, and the brain simulator reply. All three are very different and unique replies to Searle.
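To make the thought experiment concrete, the room’s procedure can be pictured as nothing more than a lookup table: a purely syntactic mapping from input symbols to output symbols, with meaning represented nowhere. Here is a minimal sketch in Python; the question-and-answer pairs are invented placeholders, not anything from Searle’s paper:

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The "manual" is just a syntactic mapping from input strings to output
# strings; these question/answer pairs are invented placeholders.
manual = {
    "你好吗？": "我很好。",    # "How are you?" -> "I am fine."
    "你会说中文吗？": "会。",  # "Do you speak Chinese?" -> "Yes."
}

def room(question: str) -> str:
    """Return the manual's answer, matching symbols only.

    Nothing here models meaning: the lookup succeeds or fails purely
    on whether the input string appears in the table.
    """
    return manual.get(question, "？")  # unknown input -> placeholder symbol

print(room("你好吗？"))  # prints 我很好。 (yet nothing here understands Chinese)
```

The outside observers see fluent answers, but every step is symbol matching; this is exactly the gap between syntax and semantics that the replies below dispute.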
Let’s start with the systems reply. This reply explains that the individual in the room is part of a system with ledgers, rules, and ‘data banks.’ As the reply puts it, “While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story” (Berkeley). Thinking about this from an artificial intelligence perspective, the systems reply tries to show that the human inside the room is only one part of a larger system. Chinese questions are inputted, the human (or, in this case, the algorithm) looks up which symbols correspond to the input and outputs the correct symbols, and this continues. While the human may not understand the information being handled, that does not matter, because understanding is attributed not to the individual alone but to the entire system.
Searle’s reply: Let’s say the human in the room memorizes the entire manual and no longer needs to refer to it to answer questions. The individual memorizes every part of the system, including the ledgers, rules, and ‘data banks’; this human then becomes the ‘system’ itself. Even then, Searle notes, “he understands nothing of the Chinese, and a fortiori neither does the system” (Searle, 1980). He goes on to explain that once the human has internalized the system, there is nothing in the system that the individual does not encompass, because the system is now a part of him. So if he does not understand the Chinese, then the system cannot understand it either. Overall, Searle grants that the reply is cleverly constructed, but because the individual encompasses the whole system and still understands nothing, he considers the response invalid.
Moving on to the robot reply: this reply takes a different stance from the other responses by agreeing with Searle’s distinction between strong and weak Artificial Intelligence, but it raises the following situation: “Suppose we put a computer inside a robot, and this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating, drinking - anything you like” (Yale). Within the reply, Yale asks: would this robot learn? Could it take in what it has seen, heard, and experienced and come back a more knowledgeable machine? Searle believes it would. However, he adds that he strongly believes the machine would still be incapable of understanding. Learning is one thing; we can, in principle, program a machine to learn and add to its databases, but we cannot program understanding. Searle personally believes this is impossible because understanding requires emotions, morals, values, and so on. After this robot has traveled the world and learned as much as it can, it would arrive at no meaning, because it would not understand: semantics cannot be produced by more syntax.
The final reply is the brain simulator reply. This focuses on the configuration of the program or system, in the sense that if we focus on the configuration rather than the material, we might accomplish thinking within a machine. For example, if we configure a program to simulate the actual sequences of nerves firing as they do in a native Chinese speaker’s brain, the replier believes we could create a machine that is, in theory, thinking just as a Chinese speaker would. Although this is one of the weaker replies of the three, I can see where they are coming from, as they are merely trying to mimic, or create a very close simulation of, the brain. Searle replies by asking: if we built this out of buckets, valves, and water, would we look at the system and believe that it is thinking or understanding? Of course not. Understanding is much more than just the configuration of the parts of a whole. Searle stresses that a simulation is just not the real thing.
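The intuition behind this reply can be illustrated with a toy sketch: a threshold ‘neuron’ is defined entirely by its configuration (weights and a threshold), so the very same computation could in principle be carried out in silicon, or by Searle’s water pipes and valves. The numbers below are arbitrary placeholders, not a model of any real neuron:

```python
# Toy sketch of the brain-simulator intuition: one threshold "neuron".
# Weights, threshold, and inputs are arbitrary placeholders; the point is
# that the computation is fixed by configuration alone, so the same step
# could be realized in any substrate, including water pipes and valves.
def neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

print(neuron([1, 0], [0.6, 0.4], 0.5))  # fires: 1
print(neuron([0, 1], [0.6, 0.4], 0.5))  # does not fire: 0
```

Searle’s water-pipe rejoinder accepts that the configuration can be copied this way and denies that copying it produces understanding.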
In my opinion, the systems reply is the most worthwhile counterargument because it makes sense from a logical perspective. The human on the inside is only one part of the whole system. To say that the human is the whole system is a very bold statement, because many things are needed for the human to complete the process. To get from point A to point B, you need all of the tools and resources required to complete the journey. Still, I can see where Searle is coming from: once the human memorizes everything, he no longer needs the other parts, as they have been internalized.
However, I feel that Searle’s view makes more sense from an artificial intelligence and computational standpoint than the more practical view the replier takes. In terms of Searle’s views on biological realism and cognition, I strongly agree with him. There is only so much a digital computer can do, and understanding is not something that is possible at the moment and may never be. To instill understanding into a machine is to instill emotions and values, things that have to be developed through unique experiences. Even if a machine were to go out and obtain unique experiences, it would not be able to make sense of them. The cognition of biological organisms and the cognition of digital machines are extremely different: one forms unique experiences and molds its personality around what it has experienced, while the other is programmed and merely completes a task, no matter how complex.