

Please keep in mind that new technology like Generative AI (Gen AI) shouldn’t simply make your thinking or work easier, much less take the place of the singular abilities of human beings to grow cognitively, think creatively, or evaluate critically. If you use Gen AI simply to avoid work, you are doing it wrong. Instead, using Gen AI in the spirit of Douglas Engelbart’s “augmenting human intelligence” and Donna Haraway’s configuration of the cyborg points the way to a beneficial heightening of human possibility rather than a harmful erasure of the cognitive distinctions of humanity. If you use Gen AI, use it wisely and use it well. This post is the twelfth in this series.
In the science fiction novel Neuromancer, William Gibson explores the concept of cyborgs as beings who seamlessly integrate technology into their bodies and minds. Similarly, when students use generative AI tools to edit or paraphrase their writing, they risk integrating AI-generated changes that alter the meaning or tone of their work. This raises important questions about the role of AI in the writing process and the potential for losing one’s unique voice.
AI tools are designed to analyze and modify text based on patterns in their training data. While this can be helpful for improving grammar or clarity, it can also lead to unintended changes in meaning or tone. For example, a student might ask an AI tool to paraphrase a complex sentence, only to find that the tool has altered the nuance or emphasis of the original text. This can result in a piece of writing that no longer accurately reflects the student’s intentions or ideas.
This issue is reminiscent of the theme of identity in science fiction, where characters often grapple with the implications of merging human and machine. In works like Isaac Asimov’s I, Robot, the line between human consciousness and technological enhancement is increasingly blurred. Similarly, when students rely on AI tools to edit their writing, they risk blurring the line between their own voice and that of the machine.
To address this problem, students should approach AI-generated edits with caution. They should carefully review any changes made by the AI, ensuring that the meaning and tone of their writing remain intact. Reading the writing of others and doing more writing of one’s own helps each student recognize and develop their own voice as a writer.
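One concrete way to review AI-generated edits is to diff the AI’s version against the original before accepting it. A minimal sketch using Python’s standard difflib module (the two sentences are illustrative, not from any real tool):

```python
import difflib

# An original sentence and a hypothetical AI paraphrase of it.
original = "The study suggests, though does not prove, a link between sleep and memory."
ai_edit = "The study proves a link between sleep and memory."

# Diffing word by word shows exactly what changed, making shifts in
# meaning (here, "suggests" becoming "proves") easy to spot.
diff = list(difflib.unified_diff(
    original.split(), ai_edit.split(),
    fromfile="original", tofile="ai_edit", lineterm="",
))
for line in diff:
    print(line)
```

Even a quick word-level check like this makes it harder for a subtle change in hedging or emphasis to slip through unnoticed.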
In conclusion, while generative AI tools can be valuable editing assistants, they also pose a risk of altering the meaning or tone of a student’s writing. The cyborg student must approach these tools with discernment, ensuring that their unique voice is preserved in the process. By doing so, they can harness the benefits of AI while maintaining the integrity and authenticity of their own voice and ideas in their writing.

Please keep in mind that new technology like Generative AI (Gen AI) shouldn’t simply make your thinking or work easier, much less take the place of the singular abilities of human beings to grow cognitively, think creatively, or evaluate critically. If you use Gen AI simply to avoid work, you are doing it wrong. Instead, using Gen AI in the spirit of Douglas Engelbart’s “augmenting human intelligence” and Donna Haraway’s configuration of the cyborg points the way to a beneficial heightening of human possibility rather than a harmful erasure of the cognitive distinctions of humanity. If you use Gen AI, use it wisely and use it well. This post is the eleventh in this series.
In the science fiction film The Matrix, humans unknowingly live within a simulated reality created by machines. Similarly, when students input personal or private information into AI tools, they may be contributing to a vast, invisible dataset that could be used in unintended ways. This raises important questions about privacy and the responsible use of AI in academic writing.
Generative AI tools require input to generate responses, and this input is often incorporated into their systems for future use. While this allows the tools to improve over time, it also means that any sensitive or personal information provided by users could be shared or misused. For example, a student working on a sensitive topic might input detailed personal reflections or original ideas into an AI tool, only to have that information become part of the tool’s training data. This creates a privacy paradox: the more students rely on AI tools, the more they may be compromising their own privacy.
This issue is reminiscent of the theme of surveillance in science fiction, where individuals are constantly monitored and controlled by technological systems. In works like George Orwell’s 1984, the pervasive surveillance of the state undermines individual freedom and creativity. Similarly, the use of AI tools in academic writing could undermine students’ control over their own ideas and personal information.
To address this problem, students must be mindful of what they input into AI tools. They should avoid sharing sensitive or personal information and instead use the tools for general brainstorming or drafting. Running local Gen AI tools on one’s own computer or mobile device also keeps data on that device rather than sending it to a remote system.
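When a remote tool must be used, obvious identifiers can at least be stripped out first. A minimal redaction sketch in Python (the regex patterns and the sample text are illustrative, not a complete privacy solution):

```python
import re

# Illustrative patterns for two common identifiers; real redaction
# needs more care (names, addresses, and student IDs vary widely).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

draft = "Contact me at jane.doe@example.edu or 555-123-4567 about my draft."
clean = redact(draft)
print(clean)  # → Contact me at [EMAIL] or [PHONE] about my draft.
```

A pass like this is no substitute for simply leaving sensitive reflections out of a prompt, but it reduces what a remote system can collect.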
While generative AI tools offer powerful possibilities for academic writing, they also pose significant privacy risks. The cyborg student must approach these tools with caution, carefully considering what they share and how they protect their personal information. By doing so, they can use AI responsibly while safeguarding their privacy.

Please keep in mind that new technology like Generative AI (Gen AI) shouldn’t simply make your thinking or work easier, much less take the place of the singular abilities of human beings to grow cognitively, think creatively, or evaluate critically. If you use Gen AI simply to avoid work, you are doing it wrong. Instead, using Gen AI in the spirit of Douglas Engelbart’s “augmenting human intelligence” and Donna Haraway’s configuration of the cyborg points the way to a beneficial heightening of human possibility rather than a harmful erasure of the cognitive distinctions of humanity. If you use Gen AI, use it wisely and use it well. This post is the tenth in this series.
In the science fiction novel Do Androids Dream of Electric Sheep?, Philip K. Dick explores a world where advanced androids, nearly indistinguishable from humans, challenge the notion of humanity. Similarly, the rise of generative AI tools challenges our understanding of authorship and academic integrity. As students increasingly use these tools to assist with writing, they must navigate a gray area between acceptable use and academic dishonesty.
The issue of academic integrity arises because AI tools can generate original, coherent text based on a prompt. While this can be a powerful tool for brainstorming or overcoming writer’s block, it also raises questions about authorship. If a student submits work that includes AI-generated text without proper citation or permission, they may be violating academic integrity policies. This is particularly concerning because many institutions are still developing guidelines for the use of AI in academic writing.
Consider a student struggling to articulate their thoughts on a complex topic. They prompt an AI tool to help rephrase their ideas, and the AI generates a well-written paragraph that clearly expresses their points. The student then includes this paragraph in their paper without citation, assuming it is their own work. This scenario raises important questions about the boundaries between collaboration and cheating in the age of AI.
This dilemma is reminiscent of the theme of identity in science fiction, where characters often question what it means to be human. Similarly, students using AI tools must question what it means to be the author of their work. Are they still the sole authors if they rely on AI to generate text? How should they cite AI-generated content, and under what circumstances is it acceptable to use it?
To navigate this gray area, students must consult with their instructors and familiarize themselves with their institution’s policies on AI use, and they should always read the class syllabus and assignment prompts to ensure any AI-related policies are followed. They should also take steps to ensure transparency, such as disclosing the use of AI tools in their work and properly citing any generated content. If in doubt, ask!
The rise of generative AI tools challenges traditional notions of authorship and academic integrity. The cyborg student must navigate this complex landscape with care, ensuring that they use AI tools ethically and responsibly. By doing so, they can harness the power of AI while maintaining the integrity of their academic work.

Please keep in mind that new technology like Generative AI (Gen AI) shouldn’t simply make your thinking or work easier, much less take the place of the singular abilities of human beings to grow cognitively, think creatively, or evaluate critically. If you use Gen AI simply to avoid work, you are doing it wrong. Instead, using Gen AI in the spirit of Douglas Engelbart’s “augmenting human intelligence” and Donna Haraway’s configuration of the cyborg points the way to a beneficial heightening of human possibility rather than a harmful erasure of the cognitive distinctions of humanity. If you use Gen AI, use it wisely and use it well. This post is the ninth in this series.
Science and technology are not neutral or bias-free. While we might aim to elevate them above human biases, they are part of human culture and carry the weight of the best and worst of ourselves. Similarly, generative AI tools, trained on vast datasets that reflect the biases of society, can reproduce and amplify these distortions in their responses. This raises important questions about the role of AI in academic writing and the potential for perpetuating prejudice.
AI tools are not neutral; they reflect the biases present in their training data. For example, if a dataset contains stereotypical portrayals of certain groups, the AI will likely reproduce those stereotypes in its responses. This can result in biased or offensive content that undermines the credibility of a student’s work. Moreover, because AI-generated text is often polished and coherent, students may be less likely to question its content, thereby unintentionally perpetuating harmful ideas.
Consider a student writing a paper on gender roles in society. They prompt an AI tool to provide an analysis, and the AI responds with a well-written paragraph that reinforces outdated stereotypes. The student, assuming the AI is neutral, incorporates this analysis into their paper, potentially spreading biased ideas. This scenario highlights the danger of relying on AI without critically evaluating its responses.
It bears noting that while we influence the development of technology, it in turn influences human culture. In the case of AI tools, the technology is not only shaped by society but also actively reshapes it by amplifying existing biases. Students must recognize this dynamic and take steps to mitigate its impact on their work.
To address this problem, students should actively seek out diverse perspectives and critically evaluate AI-generated content. They can do this by comparing AI responses to credible sources and looking for inconsistencies or biases. Additionally, educators can play a crucial role by teaching students to recognize and challenge biases in AI-generated text. This might involve incorporating discussions of AI bias into the curriculum and providing tools for analyzing and addressing it.
While AI tools can be valuable writing assistants, they are not immune to the biases of the data they are trained on. The cyborg student must approach these tools with a critical eye, recognizing the potential for bias and taking steps to mitigate its impact. By doing so, they can produce work that is not only well-written but also equitable and inclusive.