Recovered Writing, PhD in English, Dissertation Defense Opening Statement, May 15, 2012

This is the sixty-fourth post in a series that I call, “Recovered Writing.” I am going through my personal archive of undergraduate and graduate school writing, recovering those essays I consider interesting but that I am unlikely to revise for traditional publication, and posting those essays as-is on my blog in the hope of engaging others with these ideas that played a formative role in my development as a scholar and teacher. Because this and the other essays in the Recovered Writing series are posted as-is and edited only for web-readability, I hope that readers will accept them for what they are–undergraduate and graduate school essays conveying varying degrees of argumentation, rigor, idea development, and research. Furthermore, I dislike the idea of these essays languishing in a digital tomb, so I offer them here to excite your curiosity and encourage your conversation.

I prepared this brief statement to introduce the thinking behind the choices that I made on which writers to include and the emergent theme of the dissertation that would lead to my current research: technological ephemerality. This statement is part justification and part roadmap for where I am now and will be in the future.

To set the stage for making this statement, imagine me sitting at the head of a conference table. Behind me on a podium is a PowerBook 145 with Gibson’s eBook of Neuromancer, Count Zero, and Mona Lisa Overdrive open, alongside the big box for the Neuromancer video game adaptation from the late 1980s.

Dissertation Defense Opening Statement

Jason W. Ellis

15 May 2012

            I would like to thank you all for reading my dissertation, “Brains, Minds, and Computers in Literary and Science Fiction Neuronarratives,” and for meeting with me today. I am looking forward to your questions and our discussion. Before we begin, I would like to take this opportunity to describe my project’s goals, its origins, my research methods, and what I hope it accomplishes. As you will see, my iPad figures prominently in these things.

In my dissertation, I draw on my interdisciplinary interests in literary studies, science fiction studies, history of science and technology, and evolutionary psychology to situate science fiction’s emergence as a genre in the early twentieth century within the larger context of the human animal’s evolutionary co-development with technology. In a sense, I sought the raison d’être of the genre in a Darwinian and cognitive context. I believe the communal teaching aspect of science fiction to be an integral part of the genre itself, and it is this aspect that I gave the name “future prep.” From another perspective, I define science fiction as the kind of literature that performs this function. I also wanted to take one related thread from the genre’s overall development—that being brains, computers, and artificial intelligence—and trace it through the work of three significant writers, namely: Asimov, Dick, and Gibson.

My dissertation originates in part from my long interest in the biology of the human brain. Perhaps this is a byproduct of the conceptual metaphors that I learned in school and in books: that the brain was a type of computer and the computer a type of brain. We know that these are imperfect analogies, but you can imagine that they can have a strong influence on the development of a curious mind. Even at an early age, I strongly felt the link between brains and computers, as evidenced by a sustained performance that convinced my kindergarten classmates I was a robot. More recently, I fell into the physics of mind when I was in high school. Thanks to Stephen Hawking, I stumbled onto the work of his collaborator Roger Penrose, who had argued elsewhere that the brain is not a Turing-type computer and that quantum phenomena must play some part in the emergence of human consciousness. Much later, during my MA at the University of Liverpool, I made a deal with a friend in the neuroscience program to give me a digital copy of my brain in exchange for my participation in his study on the neural correlates of facial attractiveness. However, the most recent and profound shift in my thinking came about in a serendipitous way. During the preparation for my PhD exams, I met with Professor Clewell to discuss my readings for the postmodern theory exam. I recall our conversation veering toward computers and the human brain. I learned from Professor Clewell about the emergent discourse surrounding the human brain and the human experience from a Darwinist/evolutionary rather than a Freudian/psychological or Marxist/social perspective. As invested as my work up to that point was in cultural theory, I was very intrigued by the interdisciplinary possibilities that neuroscientific topics and evolutionary psychology might provide for my work in literary history. Without a doubt, this was a pivotal moment in the development of my dissertation.
It provided me a direction to expand the scope of my project from one author—originally the fiction of Philip K. Dick alone—to three by developing a new theory of the genre in terms of the human brain’s evolution. This was new territory for the literary history of science fiction, and I wanted to trek a new path into it.

The next stage was to select the literary focus of my research. I chose Dick’s work because I believe his awareness of the brain’s role in human experience and in our relationship with technology strongly connects to my theory of science fiction. Then, I selected Asimov as a connection between the early editors who shaped the genre and later writers including Dick, whose androids obviously respond to Asimov’s robots. Finally, I decided on Gibson because he reinvented Dick’s concerns about the technologization of the human experience in a more nuanced manner than Dick’s paranoiac division between the android and the human.

Research and writing of my dissertation presented its own challenges, but I was very pleased that part of the subject matter inspired my own processes of work. In my reading and research, I leveraged computer technology to my advantage to build efficiencies and speed into my work. In particular, I wanted to make all of my research—primary and secondary sources—available on my computer, iPad, and iPhone. The primary reason for this was to make it easier for me to track my research and use digital tools such as textual analysis software and keyword searches on materials I had read or skimmed. Having the materials on my various computing devices made it easy to search single or multiple documents quickly while taking notes or writing in Microsoft Word on my MacBook. Of course, my brain did the work of configuring, contemplating, and creating the dissertation itself.

The issue of obsolescence, which I discuss briefly in the concluding part of my dissertation, was also a driving force behind my efforts at digitization of my research materials. For example, the last half of the second chapter presented a unique problem—I needed to read the editorials of the old pulps—particularly Amazing Stories and Astounding—but these pulps are not widely available in library collections, and when they are, it can be difficult to handle and read them due to their extreme fragility. Luckily for my research, legions of science fiction pulp collectors have made much of this material available online as scanned copies. Obviously, there are tensions between the efforts of cultural preservationists and the Disney-fication of copyright law, but due to the nature of my research and its importance to the long literary history of science fiction, some of which is egregiously at risk of disappearing, I side with the preservationists. Unfortunately, the scanned materials were not always complete, but they did provide me with some useful evidence and clues to more. I filled these gaps with interlibrary loan requests that took several weeks to complete. For other primary sources, I was able to track down circulating text files—such as Asimov’s, Dick’s, and Gibson’s novels; others I purchased either through Amazon’s Kindle shop or Apple’s iBook store. I should note that I used these non-paginated materials for research purposes, and I cross-referenced any findings there with the physical copies that I own or borrowed from the library—the only exception being Dick’s Exegesis.

I also converted many sources on hand into digital copies for my personal use. Generally, I took photos of pages, created a PDF, and ran OCR software to generate searchable text. Due to my limited time, this was especially useful during my research trip to UC-Riverside’s Eaton Collection in February. In addition to my typewritten notes on my MacBook, I captured over 1000 pages of rare and interesting primary research for the Dick and Gibson chapters with my iPhone 4S’s built-in camera. Some of this research is included in the dissertation, but there is much left for me to review as I begin the process of transforming the dissertation into a publishable manuscript. This extra work paid off by revealing quotes overlooked during skimming or reading. While I am reading to you from my iPad, I also have my dissertation manuscript, primary sources, secondary sources, notes, and much more all available at the touch of my finger. However, I have to remain vigilant with my archival practices to ensure my access to my data now and in the future. It is also a challenge to find software that maintains compatibility and preserves my workflow.

As Gibson warns us in his afterword to the Neuromancer e-book, technology’s fate is obsolescence. As he foretold, it was nearly impossible to access his e-book in its original version. First, I had to wait several weeks to receive a copy of the e-book’s disk from one of the three American universities that hold it. Then, I had to find an older Macintosh with a floppy disk drive to read the disk and in turn allow me to read the e-book. Unfortunately, there are no Macs with floppy disk drives anywhere near Kent State. I turned to eBay to find an early PowerBook, but the first one I purchased was destroyed during shipping. Eventually, I was able to read the e-book with this PowerBook 145, but it took time, money, and know-how. What does the future hold for those of us who want to read the stories these technologies have to tell us, and what effects do these technologies have on our cognitive development? These are questions I plan to investigate following the dissertation.

In closing, I hope that my work on the literary history of science fiction accomplishes two things. First, I believe that science fiction’s roots run deep, and my dissertation is meant to show how it is a literature that emerges as a byproduct of powerful evolutionary forces of the development of the human brain in conjunction with the human animal’s co-evolution with technology. Second, I hope that my work facilitates further cross-discipline discussion and leads to additional research into the brain’s role in the emergence of human experience and the enjoyment of fiction—especially science fiction.

Recovered Writing: PhD in English, Semeiotics Final Paper, Deconstructing the Human/Machine Hierarchy in the Works of Asimov and Dick, Fall 2007

This is the thirty-fourth post in a series that I call, “Recovered Writing.” I am going through my personal archive of undergraduate and graduate school writing, recovering those essays I consider interesting but that I am unlikely to revise for traditional publication, and posting those essays as-is on my blog in the hope of engaging others with these ideas that played a formative role in my development as a scholar and teacher. Because this and the other essays in the Recovered Writing series are posted as-is and edited only for web-readability, I hope that readers will accept them for what they are–undergraduate and graduate school essays conveying varying degrees of argumentation, rigor, idea development, and research. Furthermore, I dislike the idea of these essays languishing in a digital tomb, so I offer them here to excite your curiosity and encourage your conversation.

As I wrote in my last Recovered Writing post here, I consider myself very fortunate to have taken Dr. Gene Pendleton’s ENG 75057 Semeiotics course. This is due in part to his acumen as a teacher with grit and in part to his philosophy background, which I believe enriched our seminar.

In this Recovered Writing post, I am including my final paper in Dr. Pendleton’s class. After discussing some ideas and my previous work on Isaac Asimov and Cold War doppelgangers, he suggested that I bring in Philip K. Dick’s Do Androids Dream of Electric Sheep? This paper helped me rethink some of my previous work in a totally new light.

Jason W. Ellis

Dr. Gene Pendleton

Semeiotics

Fall 2007

Deconstructing the Human/Machine Hierarchy in the Works of Asimov and Dick

            The fiction of Isaac Asimov and Philip K. Dick is often invoked in critical discourse to describe the rise of autonomous technology during the American Cold War (1945-1990).  The embodiment of the increasingly complex systems of Command, Control, Communications, Computers, and Intelligence (C4I) is featured in the Science Fiction (SF) image of the android.  An android is a synthetic being that to all outward appearance and behavior is human.  The internal construction of such a being may be mechanical or organic, but in either case, an android is a constructed object, rarely afforded subjectivity, despite the possibility that androids are self-aware, have subjective experience of the world, and in some cases, emotional responses.

Androids, or human-like robots, are a recurring theme in SF works.  By writing SF stories featuring androids and robots, SF authors directly engage the discussion surrounding autonomous technologies and the overarching networks within which that technology is situated.  These artificial beings are the embodiment of autonomous technology, and they double for humanity because they are constructed in our image.  Because androids are generally capable of making their own decisions, they challenge the authority of human mastery over technological artifice.  Additionally, androids challenge what it means to be human in a world populated by the real and the artificial.  If someone acts human and looks human, why is there any reason to question the validity of that person’s humanity?  The answer is that the existence of human-like robots makes the very concept of humanity suspect.  Thus, androids are a representation of autonomous technology that elicits anxiety over the loss of human control over technology.

Asimov constructs a utopic world around his robot and android creations in his collected Robot novels:  I, Robot (1950), The Caves of Steel (1954), The Naked Sun (1957), and The Robots of Dawn (1983).  Unlike the majority of pulp SF robots that destroy humanity, Asimov, along with his friend and editor, John W. Campbell, Jr., devised a system of laws that govern his robots.  However, Dick writes a bleaker picture into his dystopia, Do Androids Dream of Electric Sheep? (1968).  Dick’s androids have no such system to protect humanity from its synthetic doppelganger, and as a result, present an unleashed monstrous threat to humanity by their very existence.  As such, the works of these two authors heavily contrast with one another when juxtaposed.  Despite the apparent contradiction between the projects of these two authors, the representations of humanity and androids in their works follow a similar trajectory and promote a similar thesis:  humanity is better than machines.  This is a gross over-simplification that I will address in more depth in this paper, but at the root of this discussion is the fact that works by these authors promote these hierarchies:  human/machine, organic/synthetic, origin/derivative, soul/soulless, and presence/absence.  These hierarchies are deeply embedded within the Cold War and Cold War culture, but they continue to appear into the present through the ongoing Terminator films and the Wachowski Brothers’ Matrix series.  Where do these hierarchies come from?  Why are they perpetuated within SF, particularly those involving autonomous technologies such as androids?

Returning to Asimov’s and Dick’s works, deconstruction offers a significant approach to uncovering and exploring these binary-opposed hierarchies within the texts.  Jacques Derrida’s “processless” process of deconstruction provides for a reading of hierarchies within texts that obviates other variables of influence.  Derrida argues, “There is nothing outside the text” (Of Grammatology 158).  This statement means more than Derrida’s supposed logocentrism.  It completes Barthes’ claims that the author is dead, but it extends much further to the way in which we each cognize, understand, and respond to a given text.  It involves the way textual information and our responses to texts are laid down in the mind, even extending to the level of engrams, or the physical traces of memory in the brain.

Jacques Derrida’s attack on the metaphysics of presence and challenge to supplementarity and culturally created hierarchies are significant tools for the evaluation of the human/android hierarchy in the works of Asimov and Dick.  Finding différance and slippages underlying the concepts out of which the hierarchies are constructed is one step toward deconstruction.  Furthermore, I challenge the supposed supplements of humanity–technology, machines, and androids.  Each of these aspects of the androids and the hierarchies of human/android in the texts discussed below are unstable and open for debate.  After considering these texts, the human/machine hierarchy is a binary opposite of the base level, which is important to the application of deconstruction according to Derrida:

Henceforth, in order better to mark this interval…it has been necessary to analyze, to set to work, within the text of the history of philosophy, as well as within the so-called literary text…certain marks…that by analogy…I have called undecidables, that is, unities of simulacrum, “false” verbal properties (nominal or semantic) that can no longer be included within philosophical (binary) opposition, but which, however, inhabit philosophical opposition, resisting and disorganizing it, without ever constituting a third term…It is a question of re-marking a nerve, a fold, an angle that interrupts totalization:  in a certain place, a place of well-determined form, no series of semantic valences can any longer be closed or reassembled.  Not that it opens onto an inexhaustible wealth of meaning or the transcendence of semantic excess.  (Positions 42-43).

The results of this reading will present a particular view of these hierarchies deconstructed, but the work accomplished here adds to the discussion rather than provides a singular truth hidden and transcendent behind the human/android hierarchy.  Additionally, meanings are deferred, and hard answers aren’t always forthcoming.  However, this analysis begins a process of further discovery and potential for understanding.  The analysis will incorporate, “différance,” which is “neither a word nor a concept,” and, “With its a, différance more properly refers to what in classical language would be called the origin or production of differences and the differences between differences, the play [jeu] of differences” (“Différance” 279).  Studying différance through “the play of differences” is integral to deconstructing hierarchies.  It’s word play, and a play on the alleged natural hierarchies embedded in texts.  Also, Derrida writes, “The concept of play [jeu] remains beyond this opposition; on the eve and aftermath of philosophy, it designates the unity of chance and necessity in an endless calculus” (“Différance” 282).  The word play employed does not enter into the binary opposition under study, and it affords “chance and necessity in an endless calculus.”  Therefore, play is an on-going process that may bring up unexpected results, and it continually rises toward the asymptote on the edge of potential understanding.

Toward that goal, but not end, I employ a deconstructionist reading of the human/android narratives of two central Cold War SF authors:  Asimov and Dick.  The noir and detective fiction aspects of the novels further connect them within the cultural milieu in which they were originally published.  The first phase of the paper specifically addresses and undresses the human/machine hierarchies in Asimov’s Olivaw-Baley novels, which feature human and android detectives working a variety of hard-boiled cases.  The second phase concerns the human/android pairings in Dick’s Do Androids Dream of Electric Sheep? (Do Androids Dream).  In this novel, hierarchies are continually turned on end between human/machine, man/woman, and hunter/prey.  Throughout the paper, and culminating in the conclusion, I upturn these hierarchies in attempting to better understand the solutions to these problems:  What is the origin of the human/machine hierarchy?  Why is the human/machine hierarchy predominantly forwarded through the fictional concept of the android?  And, finally, what other concepts or ideas are bound up with these hierarchies and the traces associated with the texts that build them?

Deconstructing Asimov’s Detective Buddies and Human/Android Hierarchy

Isaac Asimov’s R. Daneel Olivaw-Elijah Baley novels create and reinforce the supposedly natural human/machine hierarchy.  These novels, The Caves of Steel, The Naked Sun, and The Robots of Dawn, span from the first phase to the final phase of the Cold War.  They incorporate the author’s own expertise as a scientist along with contemporary developments in cybernetics widely publicized by Norbert Wiener in Cybernetics:  Or Control and Communication in the Animal and the Machine (1948) and The Human Use of Human Beings (1950).

The Olivaw-Baley novels comprise a utopic vision of human-machine interaction in a far future founded on the human/machine hierarchy.  Baley grows to like his new partner through the trilogy of novels, ultimately defending him from those persons opposed to androids.  Underlying their relationship of human detective to android detective is the fact that Asimovian robots contain the Three Laws of Robotics, which problematizes Olivaw’s status as an android subject with a voice and agency to act and make its own choices.  This aspect is integral to an understanding of the human/machine hierarchy at play in these stories.

The novels take place in a far future where humans have colonized a significant portion of the galaxy.  Although the robots are instrumental in the process of colonization, humans remain fiercely divided on whether or not robots should exist at all.  Given that Asimov himself was very much in favor of the promising new technologies of his day (e.g., automation in manufacturing and computers), it is not surprising that he depicts the robots in his novels as utopic in nature.  His robots are the embodiment of these new technologies.  In order to make his robots “perfect people,” he constructed his robots with the Three Laws of Robotics, which he first made explicit in his short story, “Runaround”:

(1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

(2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (I, Robot 44-45)

The Three Laws provide each robot with an ethical system that must be obeyed because it is hardwired into its positronic brain.  Therefore, Asimovian robots represent the best of what humans can be, but at the same time they reveal what we are not.

R. Daneel Olivaw’s artificiality is revealed to the humans he works with, and this knowledge places Daneel automatically at the “back of the bus” and subservient to human wishes as delegated by the Three Laws.  He/It is what Asimov termed a “humaniform” robot.  Daneel has the appearance of a human from one of the fifty Spacer worlds (i.e., worlds originally populated by Earth people during a period of expansion in our future).  Daneel’s partner is Elijah Baley, a detective from Earth.  In The Caves of Steel, Baley describes Daneel as appearing “completely human” (83).  He later says:

The Spacers in those pictures had been, generally speaking, like those that were occasionally featured in the bookfilms:  tall, red-headed, grave, coldly handsome.  Like R. Daneel Olivaw, for instance (The Caves of Steel 94).

Baley even suggests that Daneel is secretly Dr. Sarton, the Spacer found dead in The Caves of Steel.  However, this is not the case.  Daneel was modeled after Dr. Sarton’s appearance.  This revelation prompts Daneel to reveal what lies beneath.  In Dr. Han Fastolfe’s office:

R. Daneel pinched the ball of his right middle finger with the thumb and forefinger of his left hand…just as the fabric of the sleeve had fallen in two when the diamagnetic field of its seam had been interrupted, so now the arm itself fell in two…There, under a thin layer of fleshlike material, was the dull blue gray of stainless steel rods, cords, and joints.  (The Caves of Steel 111)

As Baley passes out from the shock, it becomes clear that the “R.,” which stands for “Robot,” in front of Daneel’s name is in fact deserved!

R. Daneel Olivaw is paired as a binary opposite broadly with humanity.  He/It, along with his robot kin, mirrors humanity–opposites in a mirror looking back, disconcertingly similar, and evoking the uncanny.  When a character becomes aware of Daneel’s true being, it destabilizes that character’s understanding of the difference between robot and human.  Most of Asimov’s robots are very metal and very plastic.  They are the epitome of synthetic.  Daneel’s construction sets him apart from the apparently synthetic robots because he appears human.  Elijah Baley first greets Daneel at Spacetown thinking that he is a Spacer, a human who lives on a planet other than Earth.  Later Baley says to his superior, Commissioner Julius Enderby, “You might have warned me that he looked completely human” and he goes on to say “I’d never seen a robot like that and you had.  I didn’t even know such things were possible” (The Caves of Steel 83).  Elijah and most other humans are not aware that a human-form robot was a possibility.  Although Elijah comes to terms with Daneel, other characters desire to destroy humaniform robots.  Elijah’s wife is secretly a member of the Medievalists, a group that wants to do away with all robots, including Daneel.  Commissioner Enderby, also a Medievalist, murders Dr. Sarton, not because he wants to kill Sarton, but because he mistakes him for Daneel.

The more intimate binary opposition takes place between R. Daneel Olivaw and his human partner, Elijah Baley.  Before Elijah meets Daneel, he is confident in his own abilities as a detective.  After he partners with Daneel, however, he begins to call into question his own abilities and talents.  Robots are meant to be superior to humans, and Elijah extends this to his own profession, which is now being intruded on by an android.  Baley narrates at the beginning of The Caves of Steel:

            The trouble was, of course, that he was not the plain-clothes man of popular myth.  He was not incapable of surprise, imperturbable of appearance, infinite of adaptability, and lightning of mental grasp.  He had never supposed he was, but he had never regretted the lack before.

What made him regret it was that, to all appearances, R. Daneel Olivaw was that very myth, embodied.

He had to be.  He was a robot.  (The Caves of Steel 26-27)

This anxiety is one of the motivating factors behind The Robots of Dawn, when Elijah is brought in to investigate the murder of a humaniform robot like Daneel.  If Elijah fails, he will lose his job and be declassified.  The fear of declassification is dire to Elijah because he had seen his own father declassified when he was a child.  Therefore, the existence of humaniform robots subverts human superiority over humanity’s synthetic constructs.

R. Daneel Olivaw’s doppelganger pairing with the human Elijah Baley causes real concern for those persons directly threatened (i.e., ego and job prospects, not bodily) by robotic superiority.  However, the Olivaw-Baley novels, “illustrate Asimov’s faith that man and machine can form a harmonious relationship” (Warrick 61).  These novels promote a utopic vision of human-machine cooperation.  Therefore, the hierarchy of human/machine that Asimov is responding to is inverted within the texts.

That being said, Asimov’s human/machine hierarchy contains a built-in flaw for a full inversion–the Three Laws of Robotics.  R. Daneel Olivaw, with his/its human appearance, for all intents and purposes appears to want to work alongside humanity.  He/It appears to form a bond of friendship with his human partner, Baley.  He/It appears to make conscious decisions to protect Baley and other humans.  This appearance of intent comes from the imposition of the Three Laws.  They are built-in, integrated, and non-removable.  Robots and androids are constructed rather than developed, so they come preloaded with those laws as well as the experiences necessary for the fulfillment of their respective jobs (e.g., an android detective will have a different set of experiences/knowledge built-in than a garbage collecting robot).  Asimov’s robots and androids can have no original sin, and they cannot make choices outside the bounds of their hardwired programming.  Humanity’s imposition of these laws re-asserts the human/machine hierarchy within the texts.  Thus, utopia can be achieved in Asimov’s fictional world through the artificially constructed superiority of humans over machines by subjecting them to an existence of slavery to humanity’s laws for robots.

The Asimovian robot/android is a supplement to humanity, thus creating/reinforcing the assumed natural human/machine hierarchy.  They fulfill menial tasks as well as specialized jobs for which automated/autonomous labor is required/requested.  Humans build them, and the positronic brain of Asimov’s robots/androids is a human creation that approximates human thought in the anthropomorphized machine.  Furthermore, the positronic brain is a linguistic engine producing logical thought for the android.  Troubleshooting robots and androids is done both mechanically (i.e., employing spanners, wires, readouts, etc.) as well as with the talking-cure transplanted to diagnose the android (i.e., the field of robopsychology–the image of Susan Calvin comes to mind).  The law, superego, or symbolic order comes from the Three Laws of Robotics hardwired into the positronic brain.  The deus ex machina is a replication of human linguistic systems of signs–a semeiotics for anthropomorphized, embodied machines.

Apparently, R. Daneel Olivaw and the other androids/robots are derived from humanity.  Humans came first, and then the robots.  But, does that necessarily make androids supplemental to humans?  Androids behave and perform themselves as human.  They are more accomplished physically–faster, stronger, and incapable of experiencing fatigue.  Additionally, Asimovian robots and androids are more intelligent and capable of learning much more than humans, due to their potentially longer lifespan.  Why, then, are androids considered supplemental to humans when they are superior in many ways?  Deconstructing the human/machine hierarchy in Asimov’s stories is relatively easy considering the occasional critical displeasure over the simplicity of his works.  That aside, his novels represent the human/machine hierarchy in a way that reinforces its appearance elsewhere in pulp SF and SF film of the era, but it destabilizes the hierarchy in the way Asimov constructs his robots.  Their connections to humanity are paramount to an analysis of the human/machine hierarchy in these works, and it’s telling that Asimov resisted the “killer robot” image by giving his creation a conscience.  Unfortunately, that conscience makes the android subservient to humanity and therefore obviates its own subjectivity in favor of the supposedly superior human.

Deconstructing Do Androids Dream Human/Android and Hunter/Prey Hierarchies

Dick’s novel, Do Androids Dream of Electric Sheep? (Do Androids Dream), is a significant novel from the New Wave era of SF, which arguably began with Michael Moorcock’s editorship of New Worlds in 1964 and is characterized by literary experimentation, emphasis on the “soft” sciences (e.g., psychology, sociology, psionics, and philosophy), and more adult themes including sex, sexuality, and illicit drugs.[1]  Dick engages these New Wave and postmodern themes in his works, and diverges from the straight story of Asimov into new, unexplored territory.

Do Androids Dream was originally published in 1968, when the Cold War was entering its second phase of escalated tensions between East and West over Southeast Asia.  The military-industrial complex was sending armaments, materiel, and men to a far-off place to hold back the so-called “domino effect.”  It was released in the same year that President Lyndon B. Johnson signed the Civil Rights Act of 1968.  Whereas Asimov was probably directly influenced by Norbert Wiener’s early writings on cybernetics, Dick was probably aware of Wiener’s later work:  God & Golem, Inc.:  A Comment on Certain Points Where Cybernetics Impinges on Religion (1966).  Wiener’s metaphysics of cybernetics is apparent in Do Androids Dream as well as many of Dick’s middle and later works, which deal more explicitly with metaphysical questions of self, identity, existence, and religious experience.  Dick’s and Asimov’s works are, under the surface, allegories about the racial divide in America following World War II, but Dick problematizes the differences between android and human along the lines of psychology and metaphysical questions of existence and religion.  However, in both cases, the overarching thesis of the human/machine hierarchy is unavoidable and reinforced through the texts.

Do Androids Dream approaches the presupposed human/machine hierarchy from a more metaphysical trajectory than Asimov’s Olivaw-Baley novels.  The story takes place in San Francisco in the year 2021 following a devastating nuclear war that prompts the majority of the surviving population to emigrate to Mars.  The proverbial “40 acres and a mule” is provided by governments to sweeten and entice migration to another world.  The mule in Do Androids Dream is the android.  It is billed as a worker and companion–constructed to the needs/wants of the human settler.  These androids are produced by a number of companies, and they are continually improved upon.  These androids, or “andys” by the derogatory term, are part flesh and part machine.  If they are caught escaping their enforced servitude/slavery, they are “retired” (i.e., killed) by a human bounty hunter.  Locating escaped androids is problematic, because they appear and behave human.  Also, the corporations building them, such as the Rosen Corporation, continually strive to build more human-like androids, culminating with the latest design, the Nexus-6.  The only methods of detection are 1) reflex response, 2) the Voigt-Kampff Empathy Test, and 3) a bone marrow analysis.  All but the physically invasive test are potentially suspect because of biological and psychological variation in humans.

Again, why are humans supposedly superior to androids?  Humanity builds androids.  They are a commodity.  They are slave labor with a definite lifespan built-in due to technological limitations.  Humans are the masters and androids are the slaves.  For a slave to challenge the authority of the master, the android incurs the harshest penalty–death.  Furthermore, androids display what’s called a “flattening of affect” (Dick 37).  They don’t “actually” feel emotions–they can only approximate an appropriate human-inspired response.  For this reason, they are not believed to have a soul and cannot undergo fusion with the religious figure of Mercer through the technological mediation of the Empathy Box.  But what about schizophrenics with a similar “flattening of affect”?  Deckard’s superior warns him about this possibility:

The Leningrad psychiatrists…think that a small class of human beings could not pass the Voigt-Kampff scale.  If you tested them in line with police work, you’d assess them as humanoid robots.  You’d be wrong, but by then they’d be dead.  (Dick 38)

Similarly, these humans shouldn’t be able to worship with other humans.  Mercerism is supposedly cut off for these individuals.  This aspect of the schizophrenics isn’t addressed in Do Androids Dream, but Deckard responds to his superior’s concerns:

They’d be in institutions…They couldn’t conceivably function in the outside world; they certainly couldn’t go undetected as advanced psychotics–unless of course their breakdown had come recently and suddenly and no one had gotten around to noticing.  But that can’t happen.  (Dick 38)

So, these individuals with a “flattening of affect,” or no appropriate emotional response to a given situation, “couldn’t conceivably function in the outside world” according to Deckard.  However, the six androids he hunts integrate into daily life, hold jobs in some cases, and live their lives as best they can while looking over their shoulder for a bounty hunter on their trail.  Certainly not all schizophrenics can go unnoticed, but going by the DSM-IV-TR criteria, it seems clear that someone could maintain a modicum of self-sufficient life without the men in white coats chasing after them.  This indicates one aspect of the human/android hierarchy that breaks down under scrutiny.  Thus, experiencing emotion and affect is not necessarily something inherently human, and there is no underlying machineness that dictates that androids cannot experience emotion.

Let’s consider the human/machine hierarchy inverted in Do Androids Dream.  Again, like Asimov’s robots, the androids of Do Androids Dream are unique and talented.  For example, Luba Luft, an android, becomes a public opera singer whom Deckard later regrets retiring.  He thinks to himself after the act, “I don’t get it, how can a talent like that be a liability to our society?  But it wasn’t the talent, he told himself; it was she herself” (Dick 137).  She is a recognized singer, and Deckard enjoys hearing her sing during rehearsal.  Yet, he and another bounty hunter kill her, because “it was she herself,” an android.  Human superiority over the android slave marks the android for subjection or destruction depending on the android’s choice to comply or rebel.  Rebellion invokes the hierarchy of predator/prey, bounty hunter/android.  This new hierarchy is inverted during the last standoff between Deckard and the remaining three androids:  Pris Stratton, Irmgard Baty, and Roy Baty.  Pris makes the move to attack Deckard, using her resemblance to Rachael Rosen to her advantage.

Another example of android hierarchical inversion has to do with Roy and Irmgard Baty.  They are married androids, and when they are cornered Roy tries to draw Deckard away from his wife.  Deckard kills her first, and Roy lets out a scream of rage before his own death.  Who’s to say that Roy and Irmgard didn’t feel?  Who’s to say that they don’t really feel something (e.g., sadness, happiness, joy, etc.)?  The humans in the story have less feeling than some of the androids.  For example, Rick and Iran Deckard have a Penfield Mood Organ, a technological device that alters their moods.  In many ways, it’s debatable whether they could remain married without the artificial stimulation of the mood organ.  Phil Resch also addresses the “feelings” of androids, while under suspicion of being an android himself.  While tracking Luba Luft in an art gallery, he stops in front of a painting:

At an oil painting Phil Resch halted, gazed intently.  The painting showed a hairless, oppressed creature with a head like an inverted pear, its hands clapped in horror to its ears, its mouth open in a vast, soundless scream.  Twisted ripples of the creature’s torment, echoes of its cry, flooded out into the air surrounding it; the man or woman, whichever it was, had become contained by its own howl.  It had covered its ears against its own sound.  The creature stood on a bridge and no one else was present; the creature screamed in isolation.  Cut off by–or despite–its outcry.

[…]

“I think,” Phil Resch said, “that this is how an andy must feel.”  He traced in the air the convolutions, visible in the picture, of the creature’s cry.  “I don’t feel like that, so maybe I’m not an–”  He broke off as several persons strolled up to inspect the picture.  (Dick 130-131)

Edvard Munch’s The Scream (1893) is emblematic of being overwhelmed and of acting out against an oppressive or repressive force.  Here, it also serves to signify the emotional experience of androids in the novel.  What’s peculiar about this passage is that it is a human bounty hunter, perhaps questioning his own identity at this point, who nevertheless indicates that androids are capable of feeling.  That feeling is the one captured in one of the most oppressive and heavy expressionist paintings.  Another reading is that Resch is projecting his own stress and panic onto his prey.  In either case, the suggestion is made, which is disturbing considering Resch’s later cold-blooded killing of Luba Luft.  However, before that act, Deckard makes a token gesture of kindness toward Luba Luft.  After apprehending her with Resch’s help, she asks Deckard to buy her a print of the painting she was looking at.  After a pause, Deckard buys a book with a print of Munch’s Puberty (1895) inside for her, knowing that she will have to be “retired.”  She tells Deckard, “It’s very nice of you…There’s something very strange and touching about humans.  An android would never have done that” (Dick 133).  Deckard’s act is one of compassion, even for the condemned android in his possession.  Resch’s lack of affect toward androids is reinforced by his admission that he would never have made such a gesture.  However, he would do something even more dehumanizing, though from his perspective it isn’t such an act because it doesn’t involve another human.  Humans with artificial emotions, and androids with arguably emotional responses of love and self-preservation, serve to deconstruct the assumed human/machine hierarchy in Do Androids Dream.

The idea that humans can be attracted to androids, and the destabilization of human subjectivity by androids, further complicates the human/machine hierarchy.  Deckard’s human subjectivity is challenged during the episode at the fake Mission Street Police Station.  There, he’s surrounded and considered an android by a swarm of police officers.  However, these cops are actually androids, pretending to be police officers in a fake police station–a safe-house of sorts for wayward androids.  Again, the hierarchy is inverted.  Then, Deckard escapes with the help of Phil Resch, and an android, soon retired, tells Deckard that Resch is one of them.  During the process of revelation, the destabilization of human subjectivity passes from Deckard to Resch.  Resch begins to doubt he’s human.  His lack of affect toward killing androids seems to reinforce this view, because androids supposedly don’t care for one another (despite evidence in the story that contradicts that assumption).  However, things are turned around once again when Resch is diagnostically determined to be human by Deckard’s Voigt-Kampff Test.  He merely lacks any affect toward androids–something that Deckard begins to experience toward female androids including Luba Luft and Rachael Rosen.  This double inversion results in Deckard questioning his own abnormal affective response:

And he felt instinctively that he was right.  Empathy toward an artificial construct?  he asked himself.  Something that only pretends to be alive?  But Luba Luft had seemed genuinely alive; it had not worn the aspect of a simulation.  (Dick 141)

One shouldn’t be attracted to androids, because they aren’t human; they aren’t real.  However, Luba Luft “had seemed genuinely alive,” and didn’t seem like a “simulation.”  This is moving into the realm of Jean Baudrillard and his theorization of simulacra and simulation, but it’s an important digression for this discussion.  In Deckard’s postmodern world, the android is a simulacrum–a copy without an original, and an image that “has no relation to any reality whatsoever” (Baudrillard 6).  As mentioned before, her/its embodiment as an artificial life form is the only register for her destruction.  That signification is a cultural construct, just as considering slaves in the Old South as inhuman and not deserving of Constitutional protection was a cultural practice upheld in the hierarchies:  white/black, master/slave, free/captive.

Next, the human/android hierarchy and its analogous master/slave hierarchy are coupled with gender and sex hierarchies.  It’s Resch’s cold-hearted suggestion to Deckard that prompts his next move–to sleep with a female android before killing it.  Soon, Deckard has sex with Rachael Rosen, the Rosen Corporation’s in-house Nexus-6 model android, but his narrated descriptions of her seem like an attempt to put off that possibility.  He tries to resist a desire he clearly has for her/it.  This is made clearer in this example:

Rachael’s proportions, he noticed once again, were odd; with her heavy mass of dark hair, her head seemed large, and because of her diminutive breasts, her body assumed a lank, almost childlike stance.  But her great eyes, with their elaborate lashes, could only be those of a grown woman; there the resemblance to adolescence ended.  Rachael rested very slightly on the fore-part of her feet, and her arms, as they hung, bent at the joint:  the stance, he reflected, of a wary hunter of perhaps the Cro-Magnon persuasion.  The race of tall hunters, he said to himself.  No excess flesh, a flat belly, small behind and smaller bosom–Rachael had been modeled on the Celtic type of build, anachronistic and attractive.  Below the brief shorts her legs, slender, had a neutral, nonsexual quality, not much rounded off in nubile curves.  The total impression was good, however.  Although definitely that of a girl, not a woman.  Except for the restless, shrewd eyes.  (Dick 187)

“Childlike” is woven together with “grown woman.”  “Cro-Magnon” is juxtaposed with “Celtic type of build.”  Her girlish “flat belly, small behind and smaller bosom” gives Deckard an overall “good” impression.  Physically she’s described like a lanky teenage girl, but it’s her eyes that make her/it a woman to Deckard.  Her/Its eyes connect Deckard to her/its soul, the Nexus-6 control unit, and the artificially created brain impregnated with simulacral memories.  Nevertheless, the human/machine, male/female, hunter/prey hierarchy gets inverted.  Aroused, Rachael takes charge when Deckard tries to get out of having sex with her.  She demands, “Goddamn it, get into bed,” and he does (Dick 195).

Do Androids Dream illustrates the culturally contrived hierarchies of human/machine, master/slave, and dominant/submissive.  However, in each case, these hierarchies of binary opposites can be inverted through an analysis of the text in order to begin to understand them.  Deconstruction of these hierarchies opens things up for further discussion involving how they are presented in SF as well as how they come to be culturally instituted and replicated in works of fiction.

Introduction/Conclusion

Asimov’s SF detective fiction and Dick’s noir bounty hunters inhabit and promote Cold War human/machine hierarchies.  Asimov’s utopia of humanity and androids coexisting is undercut by the android’s loss of agency due to the Three Laws of Robotics.  Dick’s dystopian San Francisco provides a different set of possibilities where androids seem more human than human.  Certainly, Asimov’s work came first, but to say that Dick’s work is supplemental would be an error.  There are shared ideas, themes, and terminology in these works.[2]  Each SF work, sentence, and word carries with it traces of meaning, and no one particular word is privileged over another.  One idea is not privileged over another.  More importantly, the hierarchies present in these works mean something, but they cannot be assumed to be right, true, and natural.  The continuous process of deconstruction must be applied in order to open up these works and their embedded hierarchies for further analysis and understanding.  However, that understanding is not an end point any more than deconstruction is merely a process of reading.  It’s a way of thinking that leads to new avenues of inquiry, which is important to any cultural work including SF.  Deconstruction is only the beginning.

As a beginning, what’s next?  Cold War human/machine hierarchies are reinforced in a variety of media, including critical works that shouldn’t carry preexisting assumptions about the works in question.  The traces of meaning connected to “human” and “machine,” and the relation between the two, need further development.  How is that hierarchy presented in other works by Asimov and Dick, and are there other connections between these two significant SF authors related to this hierarchy?  How do hierarchies play out between SF authors and the literary movements with which a particular author is associated?  These and many other questions deserve further critical attention through an open-ended deconstructionist lens.  This won’t yield further hard facts, but it will lead to more compelling questions.  And that is where the play begins again.

Works Cited

Asimov, Isaac.  The Caves of Steel.  New York:  Bantam Doubleday Dell, 1954.

—.  I, Robot.  New York:  Gnome Press, 1950.

—.  The Naked Sun.  New York:  Bantam Doubleday Dell, 1957.

—.  The Robots of Dawn.  New York:  Doubleday, 1983.

Baudrillard, Jean.  Simulacra and Simulation.  Trans. Sheila Faria Glaser.  Ann Arbor:  University of Michigan Press, 1994.

Broderick, Damien.  Reading by Starlight:  Postmodern Science Fiction.  London:  Routledge, 1995.

Derrida, Jacques.  “Différance.”  Trans.  David B. Allison.  Literary Theory:  An Anthology.  2nd Edition.  Ed. Julie Rivkin and Michael Ryan.  Malden, MA:  Blackwell Publishing, 2004:  279-299.

—.  Of Grammatology.  Trans. Gayatri Chakravorty Spivak.  Baltimore:  Johns Hopkins UP, 1976.

—.  Positions.  Trans. Alan Bass.  Chicago:  University of Chicago Press, 1981.

Dick, Philip K.  Do Androids Dream of Electric Sheep?  New York:  Doubleday, 1968.

Munch, Edvard.  Puberty.  1895.  National Gallery, Oslo.  12 December 2007 <http://artchive.com/artchive/M/munch/puberty.jpg.html>.

—.  The Scream.  1893.  National Gallery, Oslo.  12 December 2007 <http://www.ibiblio.org/wm/paint/auth/munch/>.

McHale, Brian.  Constructing Postmodernism.  New York:  Routledge, 1992.

Warrick, Patricia S.  The Cybernetic Imagination in Science Fiction.  Cambridge, MA:  MIT Press, 1980.

Wiener, Norbert.  Cybernetics:  Or Control and Communication in the Animal and the Machine.  Cambridge:  MIT Press, 1948.

—.  God & Golem, Inc.:  A Comment on Certain Points Where Cybernetics Impinges on Religion.  Cambridge:  MIT Press, 1966.

—.  The Human Use of Human Beings.  Boston:  Houghton Mifflin, 1950.


[1] Brian McHale makes the case that New Wave SF, which began in the 1960s, was a precursor to true dialog between postmodernism and SF, and that it’s in the 1970s that, “SF and postmodernist mainstream fiction become one another’s contemporaries, aesthetically as well as chronologically, with each finally beginning to draw on the current phase of the other, rather than on some earlier and now dated phase” (228).

[2] Damien Broderick explores this idea more fully in his book, Reading by Starlight:  Postmodern Science Fiction (1995).  In that work, he extends Christine Brooke-Rose’s idea of the fantasy megastory to SF, and calls that shared collection of terminology the mega-text of SF.

Recovered Writing: My First Professional, Academic Presentation, “Monstrous Robots: Dualism in Robots Who Masquerade as Humans,” Monstrous Bodies Symposium, March 31-April 1, 2005

This is the thirtieth post in the “Recovered Writing” series.

Almost nine years ago, I gave my first academic conference presentation at the Monstrous Bodies Symposium—a continuation of Science Fiction-focused initiatives at Georgia Tech by Professor Lisa Yaszek. In addition to presenting, I organized the academic track of the symposium and recorded the sessions for the School of Literature, Communication, and Culture (now, Literature, Media, and Communication). After my presentation below, I am including a press release for the symposium that describes it in more detail along with our special guests: Paul di Filippo and Rhonda Wilcox.

My presentation, “Monstrous Robots: Dualism in Robots Who Masquerade as Humans,” continues the work that I began in the SF Lab the previous year and would continue in my undergraduate thesis. These ideas loomed large throughout the close of my undergraduate degree and my MA in Science Fiction Studies at the University of Liverpool. By the time that I was well into my PhD at Kent State University, I began thinking along parallel lines in terms of human-computer interaction and its effect on human brains and the “minds” of computers. Instead of thinking of doppelgängers and opposition, I reframed my thinking around co-evolution, evolutionary psychology, neuroscience of mind, and human-computer interaction. This presentation is another step in the development of my thinking and self along these lines.

Later, I will post another version of this essay that was revised for my first SFRA Conference in White Plains, NY in 2006.

Jason W. Ellis

Monstrous Bodies Symposium 2005

31 March 2005

Monstrous Robots:  Dualism in Robots Who Masquerade As Humans

Robots who masquerade as human in science fiction (SF) are monstrous bodies because they are humanity’s created doppelganger of itself, and as a result they reflect the best and the worst of what it means to be human.  These technological appropriations of what it means to be human are important because they are a space within SF where issues about the encroachment of science and technology on the borders of the human body were worked out after the end of World War II.

In order to explore these issues, I want to begin by defining the terminology that I will be using.  I define doppelganger as an unnatural double of a person or of humanity.  Human-like robots are the doppelganger of humanity because they mimic what it means to be human.  They appear human and they must perform themselves accordingly.  This doppelganger is haunting because its existence challenges what it means to be human.  If someone acts human and looks human, why is there any reason to question the validity of that person’s humanity?  The answer is that the existence of human-like robots makes the very concept of humanity suspect.  Robots are the product of their creators.  The double mirrors its creator by reflecting an extreme of human behavior.  This reflection is called dualism.  I define dualism as a doubled status such as good and evil or organic and synthetic.  Human-like robots are either very good or very bad, and this is determined by the nature of their creators.  Therefore, these robots tell us a great deal about the nature of their creators.

I will be examining two examples of human-like robots in SF literature and film.  The first is Isaac Asimov’s “humaniform” robot, R. Daneel Olivaw, from the Robot, Empire, and Foundation series of novels.  Daneel is best described as an android because he is a robot made in the appearance of a man.  His outer skin is not organic in nature.  The second human-like robot is James Cameron’s original Terminator from the film of the same name.  The Terminator is best called a cyborg because he is a fusion of man and machine (organic skin and hair covering a robotic interior).  The former is an example of a good android and the latter is an example of a bad cyborg.  These characters are doubles of humanity in their respective stories and they are also mirrors of one another.

Asimov began writing the robot novels that feature R. Daneel Olivaw in the 1950s, during the first phase of the Cold War.  The novels take place in a far future where humans have colonized a significant portion of the galaxy.  Although the robots are instrumental in the process of colonization, humans remain fiercely divided on whether or not robots should exist at all.  Given that Asimov himself was very much in favor of the promising new technologies of his day (e.g., automation in manufacturing and computers), it is not surprising that he depicts the robots in his novels as utopic in nature.  His robots are the embodiment of these new technologies.  In order to make his robots “perfect people,” he constructed them with the Three Laws of Robotics, which he first made explicit in his short story, “Runaround”:

(1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

(2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (I, Robot 44-45)

The Three Laws provide each robot with an ethical system that must be obeyed because it is hardwired into its positronic brain.  Therefore, Asimovian robots represent the best of what humans can be, but at the same time they reveal what we are not.

R. Daneel Olivaw is what Asimov termed a “humaniform” robot.  Daneel has the appearance of a human from one of the fifty Spacer worlds (i.e., worlds originally populated by Earth people during a period of expansion in our future).  Daneel’s partner in the novels The Caves of Steel, The Naked Sun, and The Robots of Dawn is Elijah Baley, a detective from Earth.  In The Caves of Steel, Baley describes Daneel as appearing “completely human” (83).  He later says, “The Spacers in those pictures had been, generally speaking, like those that were occasionally featured in the bookfilms:  tall, red-headed, grave, coldly handsome.  Like R. Daneel Olivaw, for instance” (94).  Baley even suggests that Daneel is secretly Dr. Sarton, the Spacer found dead in The Caves of Steel.  This, however, is not the case.  Daneel was modeled after Dr. Sarton’s appearance.  This revelation leads to Daneel revealing what lies beneath.  In Dr. Han Fastolfe’s office, “R. Daneel pinched the ball of his right middle finger with the thumb and forefinger of his left hand…just as the fabric of the sleeve had fallen in two when the diamagnetic field of its seam had been interrupted, so now the arm itself fell in two…There, under a thin layer of fleshlike material, was the dull blue gray of stainless steel rods, cords, and joints” (The Caves of Steel 111).  As Baley passes out from the shock, he realizes that the “R.,” which stands for “Robot,” in front of Daneel’s name is in fact deserved!

The broadest doubling that involves Daneel is that he is a mirror for humanity.  When a character becomes aware of Daneel’s true being, it destabilizes that character’s understanding of the difference between robot and human.  Most of Asimov’s robots are very metal and very plastic.  They are the epitome of synthetic.  Daneel’s construction sets him apart from the apparent synthetic robots because he appeared to be human.  Elijah Baley first greets Daneel at Spacetown thinking that he is a Spacer.  Later Baley says to his superior, Commissioner Julius Enderby, “You might have warned me that he looked completely human” and he goes on to say “I’d never seen a robot like that and you had.  I didn’t even know such things were possible” (The Caves of Steel 83).  Elijah and most other humans are not aware that a human form robot was a possibility.  Although Elijah comes to terms with Daneel, other characters are driven to destroy humaniform robots.  Elijah’s wife is secretly a member of the Medievalists, a group that wants to do away with all robots, including Daneel.  Commissioner Enderby, also a Medievalist, murders Dr. Sarton, not because he wants to kill Sarton, but because he mistakes him for Daneel.

Daneel is also the double of his human partner, Elijah Baley.  Before Elijah meets Daneel, he is confident in his own abilities as a detective.  After he partners with Daneel, however, he begins to call into question his own abilities and talents.  Robots are meant to be superior to humans, and Elijah extends this to his own profession, which is now being intruded upon by an android.  As the narrator relates at the beginning of The Caves of Steel:

The trouble was, of course, that he was not the plain-clothes man of popular myth.  He was not incapable of surprise, imperturbable of appearance, infinite of adaptability, and lightning of mental grasp.  He had never supposed he was, but he had never regretted the lack before.

What made him regret it was that, to all appearances, R. Daneel Olivaw was that very myth, embodied.

He had to be.  He was a robot (The Caves of Steel 26-27).

This anxiety is one of the motivating factors behind The Robots of Dawn, when Elijah is brought in to investigate the murder of a humaniform robot like Daneel.  If Elijah fails, he will lose his job and be declassified.  The fear of declassification is dire to Elijah because he had seen his own father declassified when he was only a boy.  Therefore, the existence of humaniform robots creates the situation that elicits this fear in Elijah.  Eventually Elijah warms up to his robot partner, but along the way Elijah often finds ways to make himself feel superior to robots by making Daneel follow unnecessary orders or by calling other robots by the derogatory label, “boy” (The Robots of Dawn 34).

James Cameron’s Terminator is a cyborg character that is born of a different cultural moment than Asimov’s robots.  The Terminator was originally released in 1984 while the Cold War was still in full swing and Ronald Reagan had been reelected President of the United States.  Even more significantly, The Terminator was riding the wave of office computing and robotic manufacturing.  Whereas Asimov viewed technology in utopic terms, Cameron only sees these technological advances as dystopic.  The Terminator would have been a film that the Medievalists of Asimov’s Robot novels would have lauded.

After the opening scene of the future wasteland of 2029, the Terminator arrives naked in Los Angeles of 1984.  J. P. Telotte writes that the “film’s title implies that its central concern is the technological threat, embodied in a killer cyborg which, for all of Arnold Schwarzenegger’s excess muscularity, disconcertingly blends in with the human:  speaks our language, crudely follows our basic customs, acts in roughly effective ways.  In fact, the film emphasizes just how easy it is to ‘pass’ for human in a world that judges that status so superficially” (172).  The Terminator has been given instructions to kill Sarah Connor in 1984 in order to prevent the birth of her future child who will lead humanity to victory over the machines.  He goes about doing this in a militarily calculated manner.  He obtains the weaponry and clothes that his mission requires.  The Terminator uses his human appearance and voice to blend into mid-1980s California.  Despite his robotic core, he is able to perform himself as human effectively enough to maintain the belief that he is human to those who passively interact with him.  Sarah Connor and Kyle Reese, the man sent back in time to save her, are the only persons that know what the Terminator really is.

The Terminator is a chillingly evil double of humanity.  Through the first part of the film the audience does not yet know exactly what lies beneath his skin.  We are treated to his superior strength, but only later in the film, after he has sustained damage, do we really begin to understand what lies beneath the surface.  The hard metal robot body that is under the soft organic skin is the true nature of the Terminator.  Without the skin he looks like the killing machines that greet the audience at the beginning of the movie.  The shining flying machines and the bone-crunching treads of the tank are siblings of the Cyberdyne Systems Model 101 Terminator.  The Terminator is the result of the military-industrial complex losing control of Skynet, a computer network of command-and-control systems integrated into the implements of American war making.  After Skynet becomes self-aware, it views humanity as its only threat.  Skynet then acts in its own best interest by appropriating humanity’s weapons of war in order to eliminate its creator.  In contrast to Asimov’s robots, the Terminator seems to be the direct result of machine rather than human construction.  In Terminator 3:  Rise of the Machines, smaller versions of the flying Terminator and tank Terminator are revealed to have been developed before Skynet launches its nuclear attack.  Therefore, it seems reasonable to assume that the cyborg Terminators were developed by Skynet for the purpose of infiltrating pockets of human habitation to wreak havoc by undermining the belief that what appears human actually is.  Again, the cyborg Terminator, like R. Daneel Olivaw, threatens what it means to be human by destabilizing the criteria used to distinguish human from machine.  But Cameron’s view is diametrically opposite Asimov’s in respect to machine agency.  Asimov’s robots are dedicated to helping humanity, but Cameron’s Skynet becomes self-aware on its own without any safeguards in place.  In Cameron’s look at the future, humanity loses control to the machines and must take that control back.

Another doubling is between the Terminator and Sarah’s protector, Kyle Reese.  The most obvious difference is that Reese is much smaller than the Terminator.  Additionally, Reese feels pain and he can be injured.  The Terminator sustains damage but unrelentingly follows its programming.  Because of the limitations placed on time travel, neither Reese nor the Terminator can bring any weaponry with them into the past.  The Terminator takes his weapons indiscriminately from a gun shop and in turn kills the proprietor.  Reese takes his first weapon, a revolver, from a police officer, and then he takes his second, a shotgun, from a parked police cruiser.  The other weapons that Reese and Sarah use are handmade explosives.  Reese uses ingenuity and resourcefulness to match the brute-force onslaught of the Terminator.  In effect, the Terminator itself is a weapon.

An interesting mirroring in The Terminator is between the machines and Sarah Connor.  On one level, the Terminator is the destructor.  Its mission is to go into the past and eradicate any instance of a “Sarah Connor” in the Los Angeles area.  Sarah, on the other hand, is told that she will give birth to John Connor, the future leader of the human resistance.  The Terminator tries to kill the woman who is capable of creation.  On a broader level, Skynet is capable of creation through production.  Skynet must have a means for building Terminators (cyborgs, airplanes, and tanks), and it must also have some creative capabilities because it created the mechanism for traveling into the past.  Thus, Skynet and Sarah run parallel in that each stands for its respective species and points toward the future.  Skynet wants to maintain its existence and the existence of its machine armies.  Sarah wants to live and know that humanity will continue with the help of her yet-to-be-born son, John.  The Terminator, as a creation of Skynet, is the means by which Skynet can strike at Sarah, because Skynet and Sarah’s futures are mutually exclusive.  Within the frame of the movies, machines and human beings are not meant to live together in harmony.  Another doubling between Sarah and the Terminator is that they are both covered in some way.  Telotte points out, “If the gradual stripping away of the Terminator’s human seeming warns us not to judge an android by its cover, the gradual emergence of Sarah’s character and potential as she responds to this threat reminds us that it is no more reliable to judge the human self by its various cultural trappings” (173).  The Terminator’s true robotic interior is revealed throughout the progression of the movie.  This is done “by seeing for ourselves how he sees…for the point-of-view shots reveal that the Terminator does not ‘see’ images but merely gathers ‘information’” (Pyle 232).  Additionally, the Terminator’s flesh is stripped away through gunfights and explosions that eventually reveal the cold metal of its endoskeleton.  Sarah’s cultural coverings are removed as well as she shifts from a clumsy waitress who freezes at the sight of the Terminator to the technologically adept mother of the future who triumphantly crushes the machine in a hydraulic press.

Finally, Cameron’s Terminator is the doppelganger of Asimov’s R. Daneel Olivaw.  The Terminator works toward the domination of machines over humanity, whereas Daneel works cooperatively with humans such as his partner and friend, Elijah Baley.  The text at the beginning of The Terminator states, “The machines rose from the ashes of the nuclear fire.  Their war to exterminate mankind had raged for decades, but the final battle would not be fought in the future.  It would be fought here, in our present.  Tonight.”  The machines (i.e., Skynet and the Terminator) mean to “exterminate mankind.”  On the other hand, Patricia Warrick writes, “The…robot detective novels…illustrate Asimov’s faith that man and machine can form a harmonious relationship” (61).  Both have their robotic selves hidden under a layer of flesh.  They perform themselves as human in order to fit in with the cultural surroundings in which they find themselves (e.g., 1980s Los Angeles or Asimov’s Earth encased in “caves of steel”).  The Terminator means to destroy humanity while Daneel wishes to work alongside humanity.

Both R. Daneel Olivaw and the Terminator are doppelgangers of humanity, of other characters in their respective works, and of each other.  They maintain a human appearance and performance in order to pass as human to the casual observer.  R. Daneel Olivaw is given his “humaniform” appearance in order to work with humans (Spacer and Earth person alike).  The Terminator uses his appearance as a sort of disguise in order to infiltrate humanity and kill from within.  Daneel represents the very best of human nature through cooperation and a moral imperative.  The Terminator represents the very worst of humanity through death dealing and a lack of moral standing.  Despite his best intentions, Daneel, who had no choice in how he was built, is still viewed as a threat by some.  The Terminator, who likewise had no choice in his appearance, is a real threat to humanity because he uses his appearance to get closer to his prey.  Therefore, the bodies of R. Daneel Olivaw and the Terminator are examples of monstrous bodies in SF because they assume an appearance and identity that destabilizes what it means to be human, and in so doing each has a unique nature that is dependent on that of its creators.

Works Cited

Asimov, Isaac.  The Caves of Steel.  New York:  Bantam Doubleday Dell, 1954.

—.  I, Robot.  New York:  Gnome Press, 1950.

—.  The Naked Sun.  New York:  Bantam Doubleday Dell, 1957.

—.  The Robots of Dawn.  New York:  Doubleday, 1983.

Pyle, Forest.  “Making Cyborgs, Making Humans:  Of Terminators and Blade Runners.”  Film Theory Goes to the Movies.  Ed. Jim Collins, et al.  New York:  Routledge, 1993.  227-241.

Short, Sue.  “The Measure of a Man?:  Asimov’s Bicentennial Man, Star Trek’s Data, and Being Human.”  Extrapolation 44:2 (Summer 2003):  209-223.

Telotte, J.P.  Replications:  A Robotic History of the Science Fiction Film.  Urbana, IL:  University of Illinois Press, 1995.

The Terminator.  Dir. James Cameron.  Orion Pictures, 1984.

Terminator 3:  Rise of the Machines.  Dir. Jonathan Mostow.  Warner Bros., 2003.

Warrick, Patricia S.  The Cybernetic Imagination in Science Fiction.  Cambridge, MA:  MIT Press, 1980.

——————–

Monstrous Bodies Press Release

What:  “Monstrous Bodies in Science, Fiction, and Culture: Celebrating 25 Years of the Fantastic in the Arts at Georgia Tech”

When:  March 31-April 1, 2005

Where:  Bill Moore Student Success Center and the Skiles Building, Georgia Institute of Technology

From March 31st through April 1st the School of Literature, Communication and Culture (LCC) will host a two-day symposium in which participants explore the meaning of monstrous bodies in science, fiction, and culture. The symposium, which will take place in the Bill Moore Student Success Center at the Georgia Institute of Technology, is free of charge and open to all interested parties.

The symposium celebrates both LCC’s ongoing commitment to the study of the fantastic in the arts and, more specifically, the pivotal role that LCC Professor Emeritus Irving F. “Bud” Foote played in shaping this commitment. Foote taught the first accredited science fiction class at Tech in the early 1970s and over the course of the next two decades brought a number of science fiction writers to Tech, including Frederik Pohl, Ursula K. Le Guin, Octavia Butler, and Kim Stanley Robinson. Upon his retirement in 1997, Foote donated 8,000 science fiction-related items to the Georgia Tech Library, and the Bud Foote Science Fiction Collection was born. With additional gifts from Georgia Tech alumni and science fiction authors such as David Brin and Kathleen Ann Goonan, the Bud Foote Collection is now one of the twenty largest research collections of its kind.

The Monstrous Bodies symposium will commemorate both Professor Foote’s legacy and LCC’s continued dedication to the study of the fantastic in the arts by featuring student research on and creative writing in science fiction, fantasy, horror, and the gothic. The symposium will also include art and film exhibits as well as presentations by local scholars, science fiction writers, editors, publishers, and artists from Adult Swim, Cartoon Network’s late-night cartoon programming for adult audiences.

Our special guests of honor are two leading figures in fantastic art and scholarship: science fiction author Paul di Filippo and popular culture expert Rhonda Wilcox. In 2004 Di Filippo received the Prix L’Imaginaire for his short story “Sisyphus and the Stranger”; other stories have been nominated for Hugo, Nebula, BSFA, Philip K. Dick, Wired Magazine, and World Fantasy Awards as well. Wilcox is the author of the forthcoming book Why Buffy Matters: The Art of Television and coeditor of Fighting the Forces: What’s at Stake in Buffy the Vampire Slayer and Slayage: The Online International Journal of Buffy Studies.

If you have any other questions or comments, contact conference coordinator Prof. Lisa Yaszek or conference assistant Amelia Shackelford.

For more information

On the symposium, please visit http://monstrousbodies.lcc.gatech.edu;

On the Bud Foote Science Fiction Collection, please visit http://sf.lcc.gatech.edu;

On previous student work in the Bud Foote Collection, please visit http://sciencefiction.lcc.gatech.edu.

Science Fiction, LMC3214: Golden Age, Part 2 and SF Film Lecture

In today’s class, I covered large swaths of background material on Ray Bradbury, Robert A. Heinlein, and Tom Godwin. Then, I gave the class a rough sketch of the development of SF film through the SF-film boom of the 1950s as preparation for tomorrow’s viewing of Forbidden Planet. After lecture, we discussed the readings from Monday and Tuesday: Asimov’s “Reason,” Bradbury’s “There Will Come Soft Rains,” Heinlein’s “All You Zombies–,” and Godwin’s “The Cold Equations.”

I was glad to hear that Godwin’s story connected emotionally with some students despite it being “hard SF.” There were also a number of students who preferred “All You Zombies–” and were already familiar with time travel narratives, which supported my lecture argument about Heinlein’s reliance on readers’ experience with the SF mega-text. One student said of Bradbury’s story, “This was the first story that made me feel sorry for a house.” After class, I had a great conversation with two students about Cold War anxieties and the shifting experiences of SF in film and television via new media.

Science Fiction, LMC 3214: Exam 1 and Lecture on Golden Age SF Part 1

Today, my students bravely wielded their pens and Blue Books to endure their first exam in our Science Fiction class. The exam covered Mary Shelley’s Frankenstein through the early SF film serials. The exam had twenty short and long answer questions. A few students completed the exam in the allotted 60 minutes, but I gave the rest of the class an additional 15 minutes to complete the test. I made it very clear that I could not give credit to illegible responses, so I think that the writing component slowed some students down. I will take this into consideration as I plan the second exam while making my lecture notes for the upcoming two weeks of class.

After the exam, I delivered the first part of my lecture on Golden Age SF. I covered a rough sketch of the Golden Age, John W. Campbell, Jr., and Isaac Asimov. In tomorrow’s class, I will lecture on Robert A. Heinlein, Tom Godwin, Ray Bradbury, and the maturation of SF film. We will discuss the readings for Monday and Tuesday, too: Asimov’s “Reason,” Bradbury’s “There Will Come Soft Rains,” Heinlein’s “All You Zombies–,” and Godwin’s “The Cold Equations.”

Asimov, Robots, and Christmas Sales

I am sitting at the mall Starbucks working on my dissertation while Y takes advantage of last-minute sales. I don’t get to work among the bustle as much as I used to, because the local Starbucks is always packed in Kent. Also, Scribbles is too far away for a walk.

I look around and I wonder if I will ever see a world where robots walk among us. Some folks, like David Levy, believe that this and more is right around the corner. However, I wonder whether pro-robotics folks, myself included, will find our enthusiasm challenged by the anti-robotic Luddites that Asimov writes about in his Robot, Empire, and Foundation stories. I say coexist, coevolve, and cooperate.

SFRA 2010, Saturday, Avatar and Empire

Saturday, June 26, was Yufang’s and my big day at SFRA 2010. We missed the first part of the conference because we were called to the Cleveland branch of USCIS for Yufang’s green card interview. Luckily, we arrived in time for the last full day of the conference, and Craig was kind enough to arrange the panels so that we were able to participate.

Our day began with the 11:00am paper session: Avatar and Empire. Mack Hassler expertly moderated the panel, which included presentations by me (“James Cameron’s Avatar and the Machine in the Garden: Reading Movie Narratives and Practices of Production”), Yufang (“A Certain Tendency of the Hollywood Cinema Concerning White Males, the Military, and the Alien Other: A Reading of Avatar Against Apocalypse Now”), and Jari Kakela (“Robots, Rationalism, and Endless Growth: The Role of Frontier Expansionism in Asimov’s Work”).

In terms of the theme of the conference, Jari’s presentation was right on the money. I enjoyed hearing his reading of Asimov with Turnerian manifest destiny. Before I made the switch to more contemporary science fictions, I cut my teeth with Asimov at Georgia Tech and in my first SFRA paper. Jari demonstrated that Asimov’s robot and Foundation stories still have much to offer us in thinking about the continuing American project of frontier expansion.

Yufang and I each had terrific responses to our essays. Janice Bogstad, Andrew Hageman, Richard Erlich, and this year’s Pilgrim Award winner Eric Rabkin, among others, provided some insightful comments and tough questions. In particular, Eric’s observations on the positive aspects of Avatar are important to keep in mind, even for those of us who made critical analyses of the film. Janice was quick to point out the difference between our works, particularly Yufang’s, as analyses and readings rather than attacks. We had a fantastic discussion during the panel, which carried over into the hallway afterwards.

I should also say that this was Yufang’s first SFRA, and it was the first time that we presented together at the same conference (though we have presented together before at the AGES Symposium at Kent State).

After the panel, we went to lunch with Mack and Sue Hassler, Adam Frisch, William Sun, and Jari. Then, it was off to the 2:00pm roundtable on Immigration, Alienation, and Arizona SB 1070.