In the Black Mirror episode "Be Right Back," Ash and Martha are a couple in the process of moving into Ash's empty childhood home when Ash dies suddenly in a car accident. Racked with grief, Martha chooses to use a software technology that takes all of Ash's digital presence (videos, photos, audio, social media posts, etc.) and recreates his personality, allowing her to speak with this new A.I. version of him by phone and online. After a while, Martha decides she wants more and opts to upload the software into a fully functioning robotic clone of Ash. What she soon realizes is that what made Ash the person she knew and loved cannot be replicated by A.I. Unlike Ash, the robot does whatever Martha tells it to do and is overly agreeable.
Eventually, this causes her to regret her decision. She takes the robot to a cliffside and orders it to jump off. She tells the robot it is "just a performance of stuff that he performed without thinking, and it's not enough" (Harris). Is Martha right that the robot cannot think, and that this is why it could never be a duplicate of a human such as Ash? This paper will attempt to demonstrate, through Black Mirror's "Be Right Back," that artificial intelligence cannot be a thinking, conscious entity, supporting this claim with Descartes' theory of dualism. The current paradigm states that "consciousness is organically based and cannot be emulated by A.I." (Boss). Indeed, Descartes believed that the conscious mind and the body are two separate things and that, to be human, you need both (Matravers). Robot Ash certainly has a body that is separate from the A.I. software, but that software is still not a conscious mind; rather, it is an imitation, albeit an advanced one.
When Martha has sex with the robot, it is clearly a more lasting performance than Ash's (as was shown earlier in the episode). Even though the mechanics of having sex with the robot are better than with Ash, this only serves to highlight that the robot is better at something than Ash, and in doing so, it fails at being an accurate duplicate of him. Descartes believed that machines could "never express their thoughts or respond to the meaning of what has been said to them" (Irwin, Brown and Decker, Terminator and Philosophy: I'll Be Back, Therefore I Am 22). It is true that artificial intelligence can communicate; however, it is never really expressing its own conscious thoughts while doing so, since consciousness is what is missing. This brings us back to that cliffside. When Martha tells the robot to jump, it responds with, "I never expressed suicidal thoughts. Or self-harm," implying that it would jump only if human Ash had had those thoughts, and only for that reason. She replies, "Yeah, well, you aren't you, are you?" It then tells her the question is a difficult one to process (Harris). Without a mind, the robot cannot understand the nuances and gravity of this very human conversation and only interacts the way it perceives Martha wants it to. Until Martha yells at it for not having a normal human response to a request as drastic as jumping off a cliff, there is no emotional inflection or reaction on the robot's part. Is there a way to prove whether
In his 2011 Chronicle Review article "Programmed for Love," Jeffrey R. Young interviews Professor Sherry Turkle about her experience with what she calls "sociable robots." Turkle has spent 15 years studying robotics and their emergence into society. After extensive research and experimentation with the robots, she believes they will soon be programmed to perform specific tasks that a human would normally do. While this may seem like a positive step forward to some people, Turkle fears the worst. The article states that she finds this concept "demeaning, 'transgressive,' and damaging to our collective sense of humanity" (Young, par. 5). She attributes this view to her personal and professional experience with the robots. Turkle and her
The author's purpose in this essay is to consider whether or not laws should be made to protect robots. Throughout the essay he uses evidence from scientists who have done tests, which show how people act toward robots.
This article begins by outlining the tragic death of an artificial intelligence robot named Steve. Steve's accidental death by stairs raises a lot of new questions about robots and their rights. In his article, Leetaru discusses the range of questions that have been sparked not only by Steve's death but by the rise of advanced robot mechanics. While Silicon Valley is busy grinding out new plans and models of robots, especially security robots, how can we establish what a mechanical robot is entitled to? Leetaru offers many different scenarios pitting robots against aggressors, in hopes of showing that these rights should be outlined as the use of this technology rises. The article speculates how in the future, when these robots
In "Death by Robot," Robin Henig discusses what goes into the decision making of robots and the types of decisions a robot will have to make, including the difficult ones. For one, Henig describes the algorithm that goes into effect when a robot is in a sticky situation. For example, when a patient of the robot is asking for medicine, the robot has to check with the supervisor, but the supervisor is not reachable. This is a situation in which the robot is in a "hypothetical dilemma": the robot is commanded to make its patient pain-free, but only if it can get permission from the supervisor to give the patient medicine. Henig also discusses what experts in the emerging field of robot morality are doing so that robots are able to
A man and a woman, drenched in sweat, trudge away from a crumbling building as it tumbles to the ground. Usually, when humans hear the word robot, it brings to mind images of the world ending or various pieces of technology. In today's world, robots are being created to do the hard, dangerous things that most humans shouldn't do, such as welding or even painting. Those two activities can both be very harmful to the human body. When painting, the body can take a lot of toxins into the lungs, leaving workers unable to breathe. If a robot were to do that job, it wouldn't have to worry about toxins because it wouldn't be able to feel any pain. Eventually, humans won't even have to work, because robots are going to be doing all the jobs humans should
Descartes is a mind-body dualist who, in the Discourse on the Method, argues that humans are the only species that has a mind and intelligence. He states that animals are different in nature from humans and uses several arguments to defend his position. In this essay, I discuss Descartes' effort to show that humans are distinct from machines and animals. He presents two tests for determining whether a machine could pass as a human, and I will establish my view on each test.
Asimov’s short story “Reason” in I, Robot is about a fictional robot character which uses reason to perceive and question its own existence. Similarly to Descartes, a robot named “QT” embarks on a philosophical journey to rid himself of any preconceived beliefs and ideas that cannot be confirmed(verified?) for certain, accepting only axiomatic principles. Although Descartes and QT live in different time periods and environments, they both challenge their current society’s belief systems and the macro view of existence itself which leads them to different conclusions about the world they exist in. Cutie goes through three phases of philosophical belief shift, each representing one three Descartes meditations. In the short story “Reason”, Asimov supports the ideas portrayed in the first three meditations explored by Descartes, through the use of themes, symbolism, and Cutie’s actions, which dawns new light on the concept of creation and existence.
In order to more fully understand these principles, we can apply them to a more in-depth scenario. The show Black Mirror, created and produced by Charlie Brooker, is made up of episodes set in different realities, all depicting "the way we might be living in 10 minutes' time if we're clumsy": the near future with slightly more advanced technology ("Charlie Brooker"). The episodes follow plot lines that serve as warnings to be cautious with these advancements and the destruction they might cause. One of these episodes, "Be Right Back," focuses on a time when a program exists that mimics a person based on their social media, personal files, and even videos. Martha and Ash are
The ideas of Cosmopolitanism, Artificial Intelligence, and Factor X are all exceptionally complex. Each of them involves the advancement of society and how people interact with each other and with technology. In "Making Conversation," Kwame Anthony Appiah writes that "we have obligations to others…taking an interest in the practices and beliefs that lend them significance" (69); this idea is Cosmopolitanism. Cosmopolitanism is the ability to understand and accept the cultures and traditions of others. However different other cultures may be, cosmopolitanism allows one to accept the traditions of others without having to agree with their principles.
Humans and the AI that is possible now are truly one and the same. The human body is but an assembly of systems and preconditioned thinking, led through cause and effect. AI is the pinnacle of humanity's attempts at mimicking the creation of life through "artificial" thinking.
Many people have objections to Turing's test, objections that he counters in the latter part of his paper. In this particular objection, Professor Jefferson raises the idea that a machine cannot be deemed intelligent because it is unable to feel. A machine cannot compose art from thoughts or feelings, nor can a machine feel pleasure, guilt, or grief. Jefferson claims that until a machine is able to do these things, it cannot be considered as intelligent as the human brain. He claims that Turing's test does not
In his paper "Computing Machinery and Intelligence," Alan Turing sets out to answer the question of whether machines can think in the same way humans can by recasting the question in concrete terms. Put simply, Turing redefines the question by asking whether a machine can replicate the cognition of a human being. Yet some, such as John Searle, object to the notion that Turing's new question effectively captures the nature of machines' capacity for thought or consciousness. In his Chinese room thought experiment, Searle outlines a scenario implying that machines' apparent replication of human cognition does not yield conscious understanding. While Searle's Chinese room thought experiment demonstrates that a Turing test is not sufficient to establish that a machine can possess consciousness or thought, this argument does not prove that machines are absolutely incapable of consciousness or thought. Rather, given the ongoing uncertainty of the debate over machine intelligence, there can be no means to confirm or disconfirm the conscious experience of machines, nor, by extension of that principle, the consciousness of humans.
In "Minds, Brains, and Programs," John R. Searle presents his view that computers cannot have artificial intelligence (AI). Searle supports this claim about computers through a thought experiment he created, called the "Chinese Room," where he shows that computers are not independent operating systems and that they do not have minds. To better frame the experiment, Searle draws a contrast between strong and weak AI, which I will explain later in this paper. In what follows, I will explain what Searle's "Chinese Room" experiment is and what, according to him, it demonstrates. I will also argue that I agree with his conclusion, because I believe that computers cannot think.
Rene Descartes' "Discourse on the Method" focuses on distinguishing human rationality from that of animals and machines. He does so by explaining how neither animals nor machines possess the same mental faculties as humans. Descartes sets human rationality apart from non-humans, even though he agrees that the two closely resemble each other in their sense organs and physical functions (Descartes, p. 22). Nevertheless, the mechanical lacks a necessary aspect of the mind, which consequently separates it from humans. In "Discourse on the Method," Descartes argues that the noteworthy difference between humans and the mechanical is that machines only respond to the world through their sense organs, whereas humans possess the significant faculty of reasoning, which allows them to understand external inputs and information obtained from the surrounding environment. This creates a dividing line that separates humans from non-humans. In this paper, I will first distinguish the differences between the human and mechanical mind with regard to Descartes' "Discourse on the Method." Secondly, I will theorize a modern AI that could possess the concept of an intellectual mind, and then hypothesize a powerful AI that lacks the ability to understand its own intelligence. Lastly, I will explain why there are no such machines equivalent to the human mind. For humans don't possess all the
Artificial intelligence is the most controversial field in robotics. It is agreed that a robot can work on an assembly line, but whether a robot can be intelligent is debatable. Intelligence is described as the ability to adapt to new environments and situations and to understand the consequences and effects of one's actions (Pros 1). A robot with complete artificial intelligence would have the same thought process as a human being. Like humans, the robot would also have the ability to reason, learn, and formulate original ideas. Computers can already solve problems in a limited realm, while some modern robots have the ability to learn in a restricted capacity. Unlike humans, robots can solve complex problems every second of every day, without sleep or coffee breaks (Bowman 1). Developing artificial intelligence is not like creating an artificial heart: scientists do not