Recovered Writing: PhD in English, Semeiotics Final Paper, Deconstructing the Human/Machine Hierarchy in the Works of Asimov and Dick, Fall 2007

This is the thirty-fourth post in a series that I call, “Recovered Writing.” I am going through my personal archive of undergraduate and graduate school writing, recovering those essays I consider interesting but that I am unlikely to revise for traditional publication, and posting those essays as-is on my blog in the hope of engaging others with these ideas that played a formative role in my development as a scholar and teacher. Because this and the other essays in the Recovered Writing series are posted as-is and edited only for web-readability, I hope that readers will accept them for what they are–undergraduate and graduate school essays conveying varying degrees of argumentation, rigor, idea development, and research. Furthermore, I dislike the idea of these essays languishing in a digital tomb, so I offer them here to excite your curiosity and encourage your conversation.

As I wrote in my last Recovered Writing post here, I consider myself very fortunate to have taken Dr. Gene Pendleton’s ENG 75057 Semeiotics course. This is due in part to his acumen as a teacher with grit, and in part to his philosophy background, which I believe enriched our seminar.

In this Recovered Writing post, I am including my final paper in Dr. Pendleton’s class. After discussing some ideas and my previous work on Isaac Asimov and Cold War doppelgangers, he suggested that I bring in Philip K. Dick’s Do Androids Dream of Electric Sheep? This paper helped me rethink some of my previous work in a totally new light.

Jason W. Ellis

Dr. Gene Pendleton


Fall 2007

Deconstructing the Human/Machine Hierarchy in the Works of Asimov and Dick

            The fiction of Isaac Asimov and Philip K. Dick is often invoked in critical discourse to describe the rise of autonomous technology during the American Cold War (1945-1990).  The embodiment of the increasingly complex systems of Command, Control, Communications, Computers, and Intelligence (C4I) is featured in the Science Fiction (SF) image of the android.  An android is a synthetic being that to all outward appearance and behavior is human.  The internal construction of such a being may be mechanical or organic, but in either case, an android is a constructed object, rarely afforded subjectivity, despite the possibility that androids are self-aware, have subjective experience of the world, and in some cases, emotional responses.

Androids, or human-like robots, are a recurring theme in SF works.  By writing SF stories featuring androids and robots, SF authors directly engage the discussion surrounding autonomous technologies and the overarching networks within which that technology is situated.  These artificial beings are the embodiment of autonomous technology, and they double for humanity because they are constructed in our image.  Because androids are generally capable of making their own decisions, they challenge the authority of human mastery over technological artifice.  Additionally, androids challenge what it means to be human in a world populated by the real and the artificial.  If someone acts human and looks human, why is there any reason to question the validity of that person’s humanity?  The answer is that the existence of human-like robots makes the very concept of humanity suspect.  Thus, androids are a representation of autonomous technology that elicits anxiety over the loss of human control over technology.

Asimov constructs a utopic world around his robot and android creations in his Robot books:  I, Robot (1950), The Caves of Steel (1954), The Naked Sun (1957), and The Robots of Dawn (1983).  Unlike the majority of pulp SF robots that destroy humanity, Asimov, along with his friend and editor, John W. Campbell, Jr., devised a system of laws that govern his robots.  Dick, however, paints a bleaker picture in his dystopia, Do Androids Dream of Electric Sheep? (1968).  Dick’s androids have no such system to protect humanity from its synthetic doppelganger, and as a result, they present an unleashed monstrous threat to humanity by their very existence.  As such, the works of these two authors heavily contrast with one another when juxtaposed.  Despite the apparent contradiction between the projects of these two authors, the representations of humanity and androids in their works follow a similar trajectory and promote a similar thesis:  humanity is better than machines.  This is a gross over-simplification that I will address in more depth in this paper, but at the root of this discussion is the fact that works by these authors promote these hierarchies:  human/machine, organic/synthetic, origin/derivative, soul/soulless, and presence/absence.  These hierarchies are deeply embedded within the Cold War and Cold War culture, but they continue to appear into the present through the ongoing Terminator films and the Wachowski Brothers’ Matrix series.  Where do these hierarchies come from?  Why are they perpetuated within SF, particularly in stories involving autonomous technologies such as androids?

Returning to Asimov’s and Dick’s works, there is a significant approach to uncovering and exploring these binarily opposed hierarchies within the texts.  Jacques Derrida’s “processless” process of deconstruction provides for a reading of hierarchies within texts that brackets other variables of influence.  Derrida argues, “There is nothing outside the text” (Of Grammatology 158).  This statement means more than Derrida’s supposed logocentrism.  It complements Barthes’s claim that the author is dead, but it extends much further to the way in which we each cognize, understand, and respond to a given text.  It involves the way textual information and our responses to texts are laid down in the mind, even extending to the level of engrams, or the physical traces of memory in the brain.

Jacques Derrida’s attack on the metaphysics of presence and his challenge to supplementarity and culturally created hierarchies are significant tools for the evaluation of the human/android hierarchy in the works of Asimov and Dick.  Finding différance and slippages underlying the concepts out of which the hierarchies are constructed is one step toward deconstruction.  Furthermore, I challenge the supposed supplements of humanity–technology, machines, and androids.  Each of these aspects of the androids and the hierarchies of human/android in the texts discussed below is unstable and open for debate.  After considering these texts, the human/machine hierarchy appears as a binary opposition at the base level, which is important to the application of deconstruction according to Derrida:

Henceforth, in order better to mark this interval…it has been necessary to analyze, to set to work, within the text of the history of philosophy, as well as within the so-called literary text…certain marks…that by analogy…I have called undecidables, that is, unities of simulacrum, “false” verbal properties (nominal or semantic) that can no longer be included within philosophical (binary) opposition, but which, however, inhabit philosophical opposition, resisting and disorganizing it, without ever constituting a third term…It is a question of re-marking a nerve, a fold, an angle that interrupts totalization:  in a certain place, a place of well-determined form, no series of semantic valences can any longer be closed or reassembled.  Not that it opens onto an inexhaustible wealth of meaning or the transcendence of semantic excess.  (Positions 42-43).

The results of this reading will present a particular view of these hierarchies deconstructed, but the work accomplished here adds to the discussion rather than provides a singular truth hidden and transcendent behind the human/android hierarchy.  Additionally, meanings are deferred, and hard answers aren’t always forthcoming.  However, this analysis begins a process of further discovery and potential for understanding.  The analysis will incorporate “différance,” which is “neither a word nor a concept,” and, “With its a, differance more properly refers to what in classical language would be called the origin or production of differences and the differences between differences, the play [jeu] of differences” (“Différance” 279).  Studying différance through “the play of differences” is integral to deconstructing hierarchies.  It’s word play, and a play on the alleged natural hierarchies embedded in texts.  Also, Derrida writes, “The concept of play [jeu] remains beyond this opposition; on the eve and aftermath of philosophy, it designates the unity of chance and necessity in an endless calculus” (“Différance” 282).  The word play employed does not enter into the binary opposition under study, and it affords “chance and necessity in an endless calculus.”  Therefore, play is an ongoing process that may bring up unexpected results, continually rising toward the asymptote on the edge of potential understanding.

Toward that goal, but not end, I employ a deconstructionist reading of the human/android narratives of two central Cold War SF authors:  Asimov and Dick.  The noir and detective fiction aspects of the novels further connect them within the cultural milieu in which they were originally published.  The first phase of the paper specifically addresses and undresses the human/machine hierarchies in Asimov’s Olivaw-Baley novels, which feature human and android detectives working a variety of hard-boiled cases.  The second phase concerns the human/android pairings in Dick’s Do Androids Dream of Electric Sheep? (Do Androids Dream).  In this novel, hierarchies are continually turned on end between human/machine, man/woman, and hunter/prey.  Throughout the paper, and culminating in the conclusion, I upturn these hierarchies in attempting to better understand the solutions to these problems:  What is the origin of the human/machine hierarchy?  Why is the human/machine hierarchy predominantly forwarded through the fictional concept of the android?  And, finally, what other concepts or ideas are bound up with these hierarchies and the traces associated with the texts that build them?

Deconstructing Asimov’s Detective Buddies and Human/Android Hierarchy

Isaac Asimov’s R. Daneel Olivaw-Elijah Baley novels create and reinforce the supposedly natural human/machine hierarchy.  These novels, The Caves of Steel, The Naked Sun, and The Robots of Dawn, span from the first phase to the final phase of the Cold War.  They incorporate the author’s own expertise as a scientist along with contemporary developments in cybernetics widely publicized by Norbert Wiener in Cybernetics:  Or Control and Communication in the Animal and the Machine (1948) and The Human Use of Human Beings (1950).

The Olivaw-Baley novels comprise a utopic vision of human-machine interaction in a far future founded on the human/machine hierarchy.  Baley grows to like his new partner through the trilogy of novels, ultimately defending him from those persons opposed to androids.  Underlying their relationship of human detective to android detective is the fact that Asimovian robots contain the Three Laws of Robotics, which problematizes Olivaw’s status as an android subject with a voice and agency to act and make its own choices.  This aspect is integral to an understanding of the human/machine hierarchy at play in these stories.

The novels take place in a far future where humans have colonized a significant portion of the galaxy.  Although the robots are instrumental in the process of colonization, humans remain fiercely divided on whether or not robots should exist at all.  Given that Asimov himself was very much in favor of the promising new technologies of his day (e.g., automation in manufacturing and computers), it is not surprising that he casts the robots in his novels as utopic in nature.  His robots are the embodiment of these new technologies.  In order to make his robots “perfect people,” he constructed them with the Three Laws of Robotics, which he first made explicit in his short story “Runaround”:

(1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

(2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (I, Robot 44-45)

The Three Laws provide each robot with an ethical system that must be obeyed because it is hardwired into its positronic brain.  Therefore, Asimovian robots represent the best of what humans can be, but at the same time they reveal what we are not.

R. Daneel Olivaw’s artificiality is revealed to the humans he works with, and this knowledge automatically places Daneel at the “back of the bus,” subservient to human wishes as dictated by the Three Laws.  He/It is what Asimov termed a “humaniform” robot.  Daneel has the appearance of a human from one of the fifty Spacer worlds (i.e., worlds originally populated by Earth people during a period of expansion in our future).  Daneel’s partner is Elijah Baley, a detective from Earth.  In The Caves of Steel, Baley describes Daneel as appearing “completely human” (83).  He later says:

The Spacers in those pictures had been, generally speaking, like those that were occasionally featured in the bookfilms:  tall, red-headed, grave, coldly handsome.  Like R. Daneel Olivaw, for instance (The Caves of Steel 94).

Baley even suggests that Daneel is secretly Dr. Sarton, the Spacer found dead in The Caves of Steel.  However, this is not the case.  Daneel was merely modeled after Dr. Sarton’s appearance.  This accusation prompts Daneel to reveal what lies beneath.  In Dr. Han Fastolfe’s office:

R. Daneel pinched the ball of his right middle finger with the thumb and forefinger of his left hand…just as the fabric of the sleeve had fallen in two when the diamagnetic field of its seam had been interrupted, so now the arm itself fell in two…There, under a thin layer of fleshlike material, was the dull blue gray of stainless steel rods, cords, and joints.  (The Caves of Steel 111)

As Baley passes out from the shock, it becomes clear that the “R.,” which stands for “Robot,” in front of Daneel’s name is in fact deserved!

R. Daneel Olivaw is paired as a binary opposite broadly with humanity.  He/It, along with his robot kin, mirrors humanity–opposites in a mirror looking back, disconcertingly similar, and evoking the uncanny.  When a character becomes aware of Daneel’s true being, it destabilizes that character’s understanding of the difference between robot and human.  Most of Asimov’s robots are very metal and very plastic.  They are the epitome of the synthetic.  Daneel’s construction sets him apart from these apparently synthetic robots because he appears human.  Elijah Baley first greets Daneel at Spacetown thinking that he is a Spacer, a human who lives on a planet other than Earth.  Later Baley says to his superior, Commissioner Julius Enderby, “You might have warned me that he looked completely human,” and he goes on to say, “I’d never seen a robot like that and you had.  I didn’t even know such things were possible” (The Caves of Steel 83).  Elijah and most other humans are not aware that a humaniform robot was even possible.  Although Elijah comes to terms with Daneel, other characters desire to destroy humaniform robots.  Elijah’s wife is secretly a member of the Medievalists, a group that wants to do away with all robots, including Daneel.  Commissioner Enderby, also a Medievalist, murders Dr. Sarton, not because he wants to kill Sarton, but because he mistakes him for Daneel.

The more intimate binary opposition takes place between R. Daneel Olivaw and his human partner, Elijah Baley.  Before Elijah meets Daneel, he is confident in his own abilities as a detective.  After he partners with Daneel, however, he begins to call into question his own abilities and talents.  Robots are meant to be superior to humans, and Elijah extends this to his own profession, which is now being intruded upon by an android.  As Baley narrates at the beginning of The Caves of Steel:

            The trouble was, of course, that he was not the plain-clothes man of popular myth.  He was not incapable of surprise, imperturbable of appearance, infinite of adaptability, and lightning of mental grasp.  He had never supposed he was, but he had never regretted the lack before.

What made him regret it was that, to all appearances, R. Daneel Olivaw was that very myth, embodied.

He had to be.  He was a robot.  (The Caves of Steel 26-27)

This anxiety is one of the motivating factors behind The Robots of Dawn, when Elijah is brought in to investigate the murder of a humaniform robot like Daneel.  If Elijah fails, he will lose his job and be declassified.  The fear of declassification is dire for Elijah because he saw his own father declassified when he was a child.  Therefore, the existence of humaniform robots subverts human superiority over humanity’s synthetic constructs.

R. Daneel Olivaw’s doppelganger pairing with the human Elijah Baley causes real concern for those persons directly threatened (i.e., in ego and job prospects, not bodily) by robotic superiority.  However, the Olivaw-Baley novels “illustrate Asimov’s faith that man and machine can form a harmonious relationship” (Warrick 61).  These novels promote a utopic vision of human-machine cooperation.  Therefore, the hierarchy of human/machine that Asimov is responding to is inverted within the texts.

That being said, Asimov’s human/machine hierarchy contains a built-in bar to a full inversion–the Three Laws of Robotics.  R. Daneel Olivaw, with his/its human appearance, for all intents and purposes appears to want to work alongside humanity.  He/It appears to form a bond of friendship with his human partner, Baley.  He/It appears to make conscious decisions to protect Baley and other humans.  This appearance of intent comes from the imposition of the Three Laws.  They are built-in, integrated, and non-removable.  Robots and androids are constructed rather than developed, so they come preloaded with those laws as well as the experiences necessary for the fulfillment of their respective jobs (e.g., an android detective will have a different set of experiences/knowledge built-in than a garbage collecting robot).  Asimov’s robots and androids can have no original sin, and they cannot make choices outside the bounds of their hardwired programming.  Humanity’s imposition of these laws re-asserts the human/machine hierarchy within the texts.  Thus, utopia can be achieved in Asimov’s fictional world through the artificially constructed superiority of humans over machines–by subjecting the machines to an existence of slavery under humanity’s laws for robots.

The Asimovian robot/android is a supplement to humanity, thus creating/reinforcing the assumed natural human/machine hierarchy.  These machines fulfill menial tasks as well as specialized jobs for which automated/autonomous labor is required/requested.  Humans build them, and the positronic brain of Asimov’s robots/androids is a human creation that approximates human thought in the anthropomorphized machine.  Furthermore, the positronic brain is a linguistic engine producing logical thought for the android.  Troubleshooting robots and androids is done both mechanically (i.e., employing spanners, wires, readouts, etc.) and with the talking-cure transplanted to diagnose the android (i.e., the field of robopsychology–the image of Susan Calvin comes to mind).  The law, superego, or symbolic order comes from the Three Laws of Robotics hardwired into the positronic brain.  The deus ex machina is a replication of human linguistic systems of signs–a semeiotics for anthropomorphized, embodied machines.

Ostensibly, R. Daneel Olivaw and the other androids/robots are derived from humanity.  Humans came first, and then the robots.  But does that necessarily make androids supplemental to humans?  Androids behave and perform themselves as human.  They are more accomplished physically–faster, stronger, and incapable of experiencing fatigue.  Additionally, Asimovian robots and androids are more intelligent and capable of learning much more than humans, thanks to their potentially longer lifespans.  Why, then, are androids considered supplemental to humans when they are superior in many ways?  Deconstructing the human/machine hierarchy in Asimov’s stories is relatively easy considering the occasional critical displeasure over the simplicity of his works.  That aside, his novels represent the human/machine hierarchy in a way that reinforces its appearance elsewhere in pulp SF and SF film of the era, but they destabilize the hierarchy in the way Asimov constructs his robots.  Their connections to humanity are paramount to an analysis of the human/machine hierarchy in these works, and it’s telling that Asimov resisted the “killer robot” image by giving his creation a conscience.  Unfortunately, that conscience makes the android subservient to humanity and therefore obviates its own subjectivity in favor of the supposedly superior human.

Deconstructing Do Androids Dream’s Human/Android and Hunter/Prey Hierarchies

Dick’s novel Do Androids Dream of Electric Sheep? (Do Androids Dream) is a significant novel from the New Wave era of SF, which arguably began with Michael Moorcock’s editorship of New Worlds in 1964 and is characterized by literary experimentation, an emphasis on the “soft” sciences (e.g., psychology, sociology, psionics, and philosophy), and more adult themes, including sex, sexuality, and illicit drugs.[1]  Dick engages these New Wave and postmodern themes in his works, and he diverges from the straight story of Asimov into new, unexplored territory.

Do Androids Dream was originally published in 1968, when the Cold War was entering its second phase of escalated tensions between East and West over Southeast Asia.  The military-industrial complex was sending armaments, materiel, and men to a far-off place to hold back the so-called “domino effect.”  The novel was released in the same year that President Lyndon B. Johnson signed the Civil Rights Act of 1968.  Whereas Asimov was probably directly influenced by Norbert Wiener’s early writings on cybernetics, Dick was probably aware of Wiener’s later work:  God & Golem, Inc.:  A Comment on Certain Points Where Cybernetics Impinges on Religion (1964).  Wiener’s metaphysics of cybernetics is apparent in Do Androids Dream as well as many of Dick’s middle and later works, which deal more explicitly with metaphysical questions of self, identity, existence, and religious experience.  Dick’s and Asimov’s works are, under the surface, allegories about the racial divide in America following World War II, but Dick problematizes the differences between android and human along the lines of psychology and metaphysical questions of existence and religion.  However, in both cases, the overarching thesis of the human/machine hierarchy is unavoidable and reinforced through the texts.

Do Androids Dream approaches the presupposed human/machine hierarchy from a more metaphysical trajectory than Asimov’s Olivaw-Baley novels.  The story takes place in San Francisco in the year 2021, following a devastating nuclear war that prompts the majority of the surviving population to emigrate to Mars.  The proverbial “40 acres and a mule” is provided by governments to sweeten and entice migration to another world.  The mule in Do Androids Dream is the android.  It is billed as a worker and companion–constructed to the needs/wants of the human settler.  These androids are produced by a number of companies, and they are continually improved upon.  These androids–known by the derogatory term “andys”–are part flesh and part machine.  If they are caught escaping their enforced servitude/slavery, they are “retired” (i.e., killed) by a human bounty hunter.  Locating escaped androids is problematic, because they appear and behave human.  Also, the corporations building them, such as the Rosen Corporation, continually strive to build more human-like androids, culminating with the latest design, the Nexus-6.  The only methods of detection are 1) reflex response, 2) the Voigt-Kampff Empathy Test, and 3) a bone marrow analysis.  All but the physically invasive test are potentially suspect because of biological and psychological variation in humans.

Again, why are humans supposedly superior to androids?  Humanity builds androids.  They are a commodity.  They are slave labor with a definite lifespan built-in due to technological limitations.  Humans are the masters and androids are the slaves.  When a slave challenges the authority of the master, the android incurs the harshest penalty–death.  Furthermore, androids display what’s called a “flattening of affect” (Dick 37).  They don’t “actually” feel emotions–they can only approximate an appropriate human-inspired response.  For this reason, they are not believed to have a soul and cannot undergo fusion with the religious figure of Mercer through the technological mediation of the Empathy Box.  But what about schizophrenics with a similar “flattening of affect”?  The bounty hunter Rick Deckard’s superior warns him about this possibility:

The Leningrad psychiatrists…think that a small class of human beings could not pass the Voigt-Kampff scale.  If you tested them in line with police work, you’d assess them as humanoid robots.  You’d be wrong, but by then they’d be dead.  (Dick 38).

Similarly, these humans shouldn’t be able to worship with other humans.  Mercerism is supposedly cut off from these individuals.  This aspect of the schizophrenics’ condition isn’t addressed in Do Androids Dream, but Deckard responds to his superior’s concerns:

They’d be in institutions…They couldn’t conceivably function in the outside world; they certainly couldn’t go undetected as advanced psychotics–unless of course their breakdown had come recently and suddenly and no one had gotten around to noticing.  But that can’t happen.  (Dick 38)

So, these individuals with a “flattening of affect,” or no appropriate emotional response to a given situation, “couldn’t conceivably function in the outside world,” according to Deckard.  However, the six androids he hunts integrate into daily life, hold jobs in some cases, and live their lives as best they can while looking over their shoulders for a bounty hunter on their trail.  Certainly not all schizophrenics can go unnoticed, but going by the DSM-IV-TR criteria, it seems clear that someone could maintain a modicum of self-sufficient life without the men in white coats chasing after them.  This indicates one aspect of the human/android hierarchy that breaks down under scrutiny.  Thus, experiencing emotion and affect is not necessarily something inherently human, and there is no underlying machineness that dictates that androids cannot experience emotion.

Let’s consider how the human/machine hierarchy is inverted in Do Androids Dream.  Again, like Asimov’s robots, the androids of Do Androids Dream are unique and talented.  For example, Luba Luft, an android, becomes a public opera singer whom Deckard later regrets retiring.  He thinks to himself after the act, “I don’t get it, how can a talent like that be a liability to our society?  But it wasn’t the talent, he told himself; it was she herself” (Dick 137).  She is a recognized singer, and Deckard enjoys hearing her sing during rehearsal.  Yet he and another bounty hunter kill her, because “it was she herself,” an android.  Human superiority over the android slave marks the android for subjection or destruction, depending on the android’s choice to comply or rebel.  Rebellion raises the hierarchy of predator/prey, bounty hunter/android.  This new hierarchy is inverted during the last standoff between Deckard and the remaining three androids:  Pris Stratton, Irmgard Baty, and Roy Baty.  Pris makes the first move to attack Deckard, using her similar appearance to Rachael Rosen to her advantage.

Another example of android hierarchical inversion has to do with Roy and Irmgard Baty.  They are married androids, and when they are cornered, Roy tries to draw Deckard away from his wife.  Deckard kills her first, and Roy lets out a scream of rage before his own death.  Who’s to say that Roy and Irmgard didn’t feel?  Who’s to say that they didn’t really feel something (e.g., sadness, happiness, joy, etc.)?  The humans in the story have less feeling than some of the androids.  For example, Rick and Iran Deckard have a Penfield Mood Organ, a technological device that alters their moods.  In many ways, it’s debatable whether they could remain married without the artificial stimulation of the mood organ.  Phil Resch also addresses the “feelings” of androids while under suspicion of being an android himself.  While tracking Luba Luft in an art gallery, he stops in front of a painting:

At an oil painting Phil Resch halted, gazed intently.  The painting showed a hairless, oppressed creature with a head like an inverted pear, its hands clapped in horror to its ears, its mouth open in a vast, soundless scream.  Twisted ripples of the creature’s torment, echoes of its cry, flooded out into the air surrounding it; the man or woman, whichever it was, had become contained by its own howl.  It had covered its ears against its own sound.  The creature stood on a bridge and no one else was present; the creature screamed in isolation.  Cut off by–or despite–its outcry.

“I think,” Phil Resch said, “that this is how an andy must feel.”  He traced in the air the convolutions, visible in the picture, of the creature’s cry.  “I don’t feel like that, so maybe I’m not an–”  He broke off as several persons strolled up to inspect the picture.  (Dick 130-131).

Edvard Munch’s The Scream (1893) is emblematic of being overwhelmed and of acting out against an oppressive or repressive force.  Here, it serves to signify the emotional experience of androids in the novel.  What’s peculiar about this passage is that it is a human bounty hunter, perhaps questioning his own identity at this point, who nevertheless indicates that androids are capable of feeling.  That feeling is embodied in one of the most oppressive and heavy expressionist paintings.  Another reading is that Resch is projecting his own stress and panic onto his prey.  In either case, the suggestion is made, which is disturbing considering Resch’s later cold-blooded killing of Luba Luft.  However, before that act, Deckard makes a token gesture of kindness toward Luba Luft.  After apprehending her with Resch’s help, she asks Deckard to buy her a print of the painting she was looking at.  After a pause, Deckard buys her a book containing a print of Munch’s Puberty (1895), knowing that she will have to be “retired.”  She tells Deckard, “It’s very nice of you…There’s something very strange and touching about humans.  An android would never have done that” (Dick 133).  Deckard’s act is one of compassion, even for the condemned android in his possession.  Resch’s lack of affect toward androids is reinforced by his admission that he would never have made such a gesture.  However, he would do something even more dehumanizing–though from his perspective it isn’t such an act, because it doesn’t involve another human.  Humans with artificial emotions and androids with arguably emotional responses of love and self-preservation serve to deconstruct the assumed human/machine hierarchy in Do Androids Dream.

The idea that humans can be attracted to androids, and the destabilization of human subjectivity by androids, further complicates the human/machine hierarchy.  Deckard’s human subjectivity is challenged during the episode at the fake Mission Street police station.  There, he’s surrounded and considered an android by a swarm of police officers.  However, these cops are actually androids, pretending to be police officers in a fake police station–a safe-house of sorts for wayward androids.  Again, the hierarchy is inverted.  Then, Deckard escapes with the help of Phil Resch, who, according to an android who is himself soon retired, is one of them.  During the process of revelation, the destabilization of human subjectivity passes from Deckard to Resch.  Resch begins to doubt that he’s human.  His lack of affect toward killing androids seems to reinforce this view, because androids supposedly don’t care for one another (yet there is evidence in the story that contradicts that assumption).  However, things are turned around once again when Resch is diagnostically determined to be human by Deckard’s Voigt-Kampff Test.  He merely lacks any affect toward androids–something that Deckard begins to experience toward female androids, including Luba Luft and Rachael Rosen.  This double inversion results in Deckard questioning his own abnormal affective response:

And he felt instinctively that he was right.  Empathy toward an artificial construct?  he asked himself.  Something that only pretends to be alive?  But Luba Luft had seemed genuinely alive; it had not worn the aspect of a simulation.  (Dick 141).

One shouldn’t be attracted to androids, because they aren’t human; they aren’t real.  However, Luba Luft “had seemed genuinely alive,” and she didn’t seem like a “simulation.”  This moves into the realm of Jean Baudrillard and his theorization of simulacra and simulation, but it’s an important digression for this discussion.  In Deckard’s postmodern world, the android is a simulacrum–a copy without an original, and an image that “has no relation to any reality whatsoever” (Baudrillard 6).  As mentioned before, her/its embodiment as an artificial life form is the only register for her destruction.  That signification is a cultural construct, just as considering slaves in the Old South inhuman and undeserving of Constitutional protection was a cultural practice upheld in the hierarchies:  white/black, master/slave, free/captive.

Next, the human/android hierarchy and its analogous master/slave hierarchy are coupled with gender and sex hierarchies.  It’s Resch’s cold-hearted suggestion to Deckard that prompts his next move–to sleep with a female android before killing it.  Soon, Deckard has sex with Rachael Rosen, the Rosen Corporation’s in-house Nexus-6 model android, but his narrated descriptions of her seem like an attempt to dismiss the possibility.  He tries to resist a desire he clearly has for her/it.  This is made clearer in this example:

Rachael’s proportions, he noticed once again, were odd; with her heavy mass of dark hair, her head seemed large, and because of her diminutive breasts, her body assumed a lank, almost childlike stance.  But her great eyes, with their elaborate lashes, could only be those of a grown woman; there the resemblance to adolescence ended.  Rachael rested very slightly on the fore-part of her feet, and her arms, as they hung, bent at the joint:  the stance, he reflected, of a wary hunter of perhaps the Cro-Magnon persuasion.  The race of tall hunters, he said to himself.  No excess flesh, a flat belly, small behind and smaller bosom–Rachael had been modeled on the Celtic type of build, anachronistic and attractive.  Below the brief shorts her legs, slender, and a neutral, nonsexual quality, not much rounded off in nubile curves.  The total impression was good, however.  Although definitely that of a girl, not a woman.  Except for the restless, shrewd eyes.  (Dick 187).

“Childlike” is woven together with “grown woman.”  “Cro-Magnon” is juxtaposed with “Celtic type of build.”  Her girlish “flat belly, small behind and smaller bosom” give Deckard an overall “good” impression.  Physically she’s described like a lanky teenage girl, but it’s her eyes that make her/it a woman to Deckard.  Her/Its eyes connect Deckard to her/its soul:  the Nexus-6 control unit, the artificially created brain impregnated with simulacral memories.  Nevertheless, the human/machine, male/female, hunter/prey hierarchy gets inverted.  Rachael’s arousal prompts her to take charge when Deckard tries to back out of having sex with her.  She demands, “Goddamn it, get into bed,” and he does (Dick 195).

Do Androids Dream illustrates the culturally contrived hierarchies of human/machine, master/slave, and dominant/submissive.  In each case, however, these hierarchies of binary opposites can be inverted through an analysis of the text, which is the beginning of understanding them.  Deconstructing these hierarchies opens further discussion of how they are presented in SF as well as how they come to be culturally instituted and replicated in works of fiction.


Asimov’s detective fiction SF and Dick’s noir bounty hunters inhabit and promote Cold War human/machine hierarchies.  Asimov’s utopia of humanity and androids coexisting is undercut by the androids’ loss of agency under the Three Laws of Robotics.  Dick’s dystopian San Francisco provides a different set of possibilities, where androids seem more human than human.  Certainly, Asimov’s work came first, but to call Dick’s work supplemental would be an error.  These works share ideas, themes, and terminology.[2]  Each SF work, sentence, and word carries traces of meaning, and no one word or idea is privileged over another.  More importantly, the hierarchies present in these works mean something, but they cannot be assumed to be right, true, and natural.  The continuous process of deconstruction must be applied in order to open up these works and their embedded hierarchies for further analysis and understanding.  However, that understanding is not an end point, any more than deconstruction is merely a process of reading.  It’s a way of thinking that leads to new avenues and new ways of thinking, which is important to any cultural work, including SF.  Deconstruction is only the beginning.

As a beginning, what’s next?  Cold War human/machine hierarchies are reinforced in a variety of media, including critical works that ought not bring preexisting assumptions to the works in question.  The traces of meaning connected to “human” and “machine,” and the relation between the two, need further development.  How is that hierarchy presented in other works by Asimov and Dick, and are there other connections between these two significant SF authors related to it?  How do hierarchies play out between SF authors and the literary movements with which they are associated?  These and many other questions deserve further critical attention through an open-ended deconstructionist lens.  This won’t yield further hard facts, but it will lead to more compelling questions.  And that is where the play begins again.



Works Cited

Asimov, Isaac.  The Caves of Steel.  New York:  Bantam Doubleday Dell, 1954.

—.  I, Robot.  New York:  Gnome Press, 1950.

—.  The Naked Sun.  New York:  Bantam Doubleday Dell, 1957.

—.  The Robots of Dawn.  New York:  Doubleday, 1983.

Baudrillard, Jean.  Simulacra and Simulation.  Trans. Sheila Faria Glaser.  Ann Arbor:  University of Michigan Press, 1994.

Broderick, Damien.  Reading by Starlight:  Postmodern Science Fiction.  London:  Routledge, 1995.

Derrida, Jacques.  “Différance.”  Trans.  David B. Allison.  Literary Theory:  An Anthology.  2nd Edition.  Ed. Julie Rivkin and Michael Ryan.  Malden, MA:  Blackwell Publishing, 2004:  279-299.

—.  Of Grammatology.  Trans. Gayatri Chakravorty Spivak.  Baltimore:  Johns Hopkins UP, 1976.

—.  Positions.  Trans. Alan Bass.  Chicago:  University of Chicago Press, 1981.

Dick, Philip K.  Do Androids Dream of Electric Sheep?  New York:  Doubleday, 1968.

Munch, Edvard.  Puberty.  1895.  National Gallery, Oslo.

—.  The Scream.  1893.  National Gallery, Oslo.

McHale, Brian.  Constructing Postmodernism.  New York:  Routledge, 1992.

Warrick, Patricia S.  The Cybernetic Imagination in Science Fiction.  Cambridge, MA:  MIT Press, 1980.

Wiener, Norbert.  Cybernetics:  Or the Control and Communication in the Animal and Machine.  Cambridge:  MIT Press, 1948.

—.  God & Golem, Inc.:  A Comment on Certain Points Where Cybernetics Impinges on Religion.  Cambridge:  MIT Press, 1966.

—.  The Human Use of Human Beings.  Boston:  Houghton Mifflin, 1950.

[1] Brian McHale makes the case that New Wave SF, which began in the 1960s, was a precursor to true dialog between postmodernism and SF, and that it is in the 1970s that “SF and postmodernist mainstream fiction become one another’s contemporaries, aesthetically as well as chronologically, with each finally beginning to draw on the current phase of the other, rather than on some earlier and now dated phase” (228).

[2] Damien Broderick explores this idea more fully in his book, Reading by Starlight:  Postmodern Science Fiction (1995).  In that work, he extends Christine Brooke-Rose’s idea of the fantasy megastory to SF, and calls that shared collection of terminology the mega-text of SF.

I am a professor of English at the New York City College of Technology, CUNY, whose teaching includes composition and technical communication, and whose research focuses on 20th/21st-century American culture, science fiction, neuroscience, and digital technology.
