Tag: Pedagogy

  • Generative AI for College Students Series: Watch Out for Fabricated Footnotes and Fake Citations

    an anthropomorphic cat as a professor wearing a suit and orange tie standing in front of a chalkboard in a classroom
    Image created with Stable Diffusion.

    Please keep in mind that new technology like Generative AI (Gen AI) shouldn’t simply make your thinking or work easier, much less take the place of the uniquely singular abilities of human beings to grow cognitively, think creatively, or evaluate critically. If you use Gen AI to simply avoid work, you are doing it wrong. Instead, using Gen AI in the spirit of Douglas Engelbart’s “augmenting human intelligence” and Donna Haraway’s configuration of the cyborg points the way to beneficial heightening of human possibility instead of harmful erasure of the cognitive distinctions of humanity. If you use Gen AI, use it wisely and use it well. This post is the eighth in this series.

    In the science fiction film Blade Runner, replicants—advanced androids indistinguishable from humans—question the nature of their existence. Similarly, students using generative AI tools to write papers may find themselves grappling with questions of authenticity, particularly when it comes to citations. While AI can generate well-formatted citations and quotes, these may be entirely fabricated, leading to academic dishonesty and intellectual confusion.

    The problem arises because AI tools do not “know” the sources they cite. They generate citations based on patterns in their training data, which may include errors, inaccuracies, or outright fabrications. For example, an AI might invent a book title, attribute a quote to a nonexistent author, or misrepresent the content of a real source. These fabrications can be subtle and difficult to detect, even for experienced scholars.

    Imagine a student writing a paper on the ethics of artificial intelligence. They prompt an AI tool to include a quote from a prominent philosopher. The AI responds with a quote that seems relevant and includes a properly formatted citation. However, when the student checks the source, they discover that the philosopher never wrote those words, or that the book cited does not exist. This scenario is not only frustrating but also undermines the integrity of the student’s work.

    This issue mirrors the theme of simulacra in science fiction—copies without originals. In Jean Baudrillard’s theory of simulacra, representations of reality become more important than reality itself. AI-generated citations are simulacra of academic integrity, creating a false appearance of legitimacy. Just as replicants in Blade Runner question their humanity, students must question the authenticity of AI-generated citations.

    To combat this problem, students must adopt a cautious approach to AI-generated citations. They should avoid prompting AI tools to generate citations outright and instead use AI to assist with finding credible sources. For example, a student could ask the AI to suggest relevant authors or topics, then locate and verify those sources independently. This approach ensures that the citations are accurate and legitimate.
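    One small, offline sanity check worth knowing: book citations usually carry an ISBN, and the ISBN-13 check digit can be validated without looking anything up. A fabricated ISBN will often fail this test, though passing it only means the number is well-formed, not that the book or citation is real. A minimal sketch in Python (the function name is my own invention, not from any library):

```python
def isbn13_is_plausible(isbn: str) -> bool:
    """Check the ISBN-13 check digit: digits weighted 1,3,1,3,...
    must sum to a multiple of 10. Passing means only that the
    number is well-formed, not that the book exists."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return total % 10 == 0

print(isbn13_is_plausible("978-0-306-40615-7"))  # valid check digit -> True
print(isbn13_is_plausible("978-0-306-40615-8"))  # wrong check digit -> False
```

    Treat this as a first filter only; the decisive step is still locating the source itself in a library catalog or database.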

    In conclusion, while AI tools can be powerful assistants, they are not substitutes for human judgment and critical thinking. The cyborg student must learn to use these tools selectively, always prioritizing accuracy and authenticity. By doing so, they can maintain the integrity of their academic work and avoid the dangers of fabricated footnotes.

  • Generative AI for College Students Series: Beware Hallucinations and Falsehoods

    an anthropomorphic cat as a professor in a business suit lecturing in front of a classroom with a chalkboard behind him
    Image created with Stable Diffusion.

    Please keep in mind that new technology like Generative AI (Gen AI) shouldn’t simply make your thinking or work easier, much less take the place of the uniquely singular abilities of human beings to grow cognitively, think creatively, or evaluate critically. If you use Gen AI to simply avoid work, you are doing it wrong. Instead, using Gen AI in the spirit of Douglas Engelbart’s “augmenting human intelligence” and Donna Haraway’s configuration of the cyborg points the way to beneficial heightening of human possibility instead of harmful erasure of the cognitive distinctions of humanity. If you use Gen AI, use it wisely and use it well. This post is the seventh in this series.

    In the realm of science fiction, the concept of cyborgs, beings that blend human and machine, often explores the tension between human intelligence and technological advancement. Today, as students increasingly rely on generative AI tools to assist with writing, they are, in a sense, becoming cyborgs of the academic world. These tools can produce coherent, polished text, but they also carry significant risks. One of the most concerning issues is the tendency of AI to “hallucinate,” or generate information that is entirely fabricated or misleading. This phenomenon is not just a glitch; it’s a fundamental limitation of how these tools operate.

    Generative AI works by predicting and reassembling language patterns from vast datasets, often drawn from the internet. While the internet is a rich source of information, it is also a breeding ground for misinformation, conspiracy theories, and outright falsehoods. When AI tools process this data, they don’t distinguish between fact and fiction. They simply mimic the patterns they find. The result is responses that may look authoritative and well-written but are, in reality, partly or entirely inaccurate.

    Gen AI presents its responses without qualification or equivocation: even wrong or made-up information in a given response is delivered as if it were irrefutable.

    Consider a student working on a research paper about climate change. They prompt an AI tool to provide a summary of recent findings. The AI responds with a polished paragraph that includes specific statistics and citations. The problem? Some of those statistics might be fabricated, and the citations could refer to nonexistent studies. The student, unaware of the fabrication, incorporates this information into their paper, potentially spreading misinformation.

    This issue is reminiscent of the theme of unreliable narration in science fiction. In works like Philip K. Dick’s Ubik, reality itself is unstable, and characters must navigate a world where information is constantly shifting and misleading. Similarly, students using AI tools must navigate a landscape where the line between truth and fiction is increasingly blurred. The AI, like the narrator in Dick’s novel, presents a version of reality that may not correspond to actual facts.

    To avoid falling into this trap, students must approach AI-generated information with skepticism. They should verify any claims made by the AI by cross-referencing with credible sources. In other words, they must act as human fact-checkers, ensuring that the information they use is accurate. This process requires a critical eye and a willingness to question even the most plausible-sounding responses. Reading Gen AI responses with a healthy dose of skepticism and engaging in research written by authorities in those fields will help students verify those responses while gaining information literacy and research skills.

    The cyborg student, armed with both human critical thinking and the power of AI, must learn to use these tools responsibly. By doing so, they can harness the benefits of AI while avoiding the pitfalls of misinformation. The bottom line: Gen AI is a good tool that can work with text in various ways, especially text that you supply it with, but it shouldn’t be relied on as a knowledge base; it isn’t designed to serve as a reference in the way an encyclopedia, database, or textbook is.

  • Generative AI for College Students Series: Translation and Bridging Language Gaps

    Image created with Stable Diffusion.

    Please keep in mind that new technology like Generative AI (Gen AI) shouldn’t simply make your thinking or work easier, much less take the place of the uniquely singular abilities of human beings to grow cognitively, think creatively, or evaluate critically. If you use Gen AI to simply avoid work, you are doing it wrong. Instead, using Gen AI in the spirit of Douglas Engelbart’s “augmenting human intelligence” and Donna Haraway’s configuration of the cyborg points the way to beneficial heightening of human possibility instead of harmful erasure of the cognitive distinctions of humanity. If you use Gen AI, use it wisely and use it well. This post is the sixth in this series.

    Science fiction often features devices that break language barriers, such as Douglas Adams’ Babel Fish in The Hitchhiker’s Guide to the Galaxy (1979) or Star Trek’s popularization of the SF concept of a universal translator. AI translation tools bring this concept closer to reality. While not perfect, AI can translate texts, helping students communicate across languages and access knowledge written in languages that they don’t know. However, the concerns about Gen AI accuracy hold even more strongly when translating texts from one language to another, where the output might contain inaccuracies of phrasing, meaning, and fact.

    Gen AI can serve as a bridge for students working with or writing about multilingual sources, offering translations that facilitate understanding. While these translations may not always capture the subtleties of idioms or cultural context, they can open a world of ideas and provide a foundation for further exploration.

    For instance, a student researching a Spanish-language novel could use Gen AI to translate key passages, then analyze how the original language contributes to the text’s tone and meaning.

    For language learners, Gen AI translation can help students understand troublesome passages in readings or translate their native language writing into the language that they are learning. In both cases, students should use this as an aid for learning and not a plagiarism tool.

    While Gen AI translations are not perfect, they open doors to global perspectives and ideas.

    Students should use these tools with awareness of their limitations, supplementing Gen AI translations with human expertise when possible.

  • Generative AI for College Students Series: Revising Your Writing

    an anthropomorphic tabby cat wearing a blue sweatshirt is writing notes in a notebook in a library
    Image created with Stable Diffusion.

    Please keep in mind that new technology like Generative AI (Gen AI) shouldn’t simply make your thinking or work easier, much less take the place of the uniquely singular abilities of human beings to grow cognitively, think creatively, or evaluate critically. If you use Gen AI to simply avoid work, you are doing it wrong. Instead, using Gen AI in the spirit of Douglas Engelbart’s “augmenting human intelligence” and Donna Haraway’s configuration of the cyborg points the way to beneficial heightening of human possibility instead of harmful erasure of the cognitive distinctions of humanity. If you use Gen AI, use it wisely and use it well. This post is the fifth in this series.

    Students can take a cyborg approach to writing by using Gen AI as a powerful tool for refining written work. By inputting a draft into an AI tool, students can receive feedback on grammatical errors and suggestions for improving sentence structure. While these suggestions may sometimes feel a bit generic, they are often free of the grammatical mistakes that can plague even the most attentive writers. This makes AI particularly useful for catching oversights that might otherwise go unnoticed.

    Gen AI can assist in refining drafts by checking for grammatical errors, improving sentence clarity, and suggesting alternative phrasing. This process is akin to the cyborg’s ability to enhance physical or cognitive abilities through technology. For example, a student could input a draft into an AI tool and receive feedback on sentence-level improvements.

    However, it’s important to approach AI-generated feedback critically. While AI excels at identifying technical errors, it may lack the nuance to fully capture a writer’s unique voice or stylistic choices. Students should compare their original drafts with AI-edited versions, reflecting on what changes improve clarity and which ones compromise their tone.

    It’s also important to recognize that AI is not a substitute for human judgment. The decision to accept or reject its suggestions should remain firmly in the hands of the writer. After all, writing is a deeply personal act, shaped by individual perspectives and styles. AI can offer valuable insights, but it cannot replicate the unique voice and intent that a human brings to their work. And for any substantive learning to take place, students need to reflect, revise, and incorporate what they learn into future writing.

    One of the most promising aspects of using Gen AI is the opportunity it provides for reflection and growth. After using an AI tool to edit a piece of writing, students can compare their original draft with the AI’s version. They can also query the Gen AI system about what edits were made and why. This exercise can reveal patterns in the changes made by the AI, such as a tendency to simplify complex sentences or standardize certain phrasings. By examining these patterns, students can gain insights into their own writing habits and consider whether these changes enhance or detract from their intended message.

    For instance, a student might notice that the AI consistently alters their use of passive voice to active voice. This could prompt them to think about the impact of voice choice on their writing’s clarity and tone. Similarly, if the AI frequently suggests synonyms for certain words, the student might reflect on whether these alternatives better convey their intended meaning or if they lose some nuance in the process.
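    This kind of draft-versus-revision comparison can even be done mechanically. Python’s standard difflib module shows exactly which lines an AI editor changed, which makes patterns like passive-to-active rewrites easy to spot. A minimal sketch (the sample sentences are invented for illustration):

```python
import difflib

# An original draft and a hypothetical AI-edited version, line by line.
original = [
    "The experiment was conducted by the team over two weeks.",
    "Results was recorded in a shared spreadsheet.",
]
revised = [
    "The team conducted the experiment over two weeks.",
    "Results were recorded in a shared spreadsheet.",
]

# unified_diff marks removed lines with "-" and added lines with "+",
# so each change the editor made is visible side by side.
diff = list(difflib.unified_diff(original, revised,
                                 fromfile="draft", tofile="ai_edit",
                                 lineterm=""))
for line in diff:
    print(line)
```

    Reading the - and + pairs together is where the reflection happens: the first change swaps passive for active voice, while the second fixes subject-verb agreement.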

    This process of comparison and reflection encourages students to think critically about their writing choices and to develop a more nuanced understanding of style and syntax. It also underscores the idea that writing is a series of deliberate decisions, and that even when AI offers a suggestion, the writer retains the final say.

    The integration of Gen AI into the writing process represents a significant shift in how we approach composition and revision. It challenges us to rethink our assumptions about creativity, originality, and the role of technology in education. While some may worry that AI could diminish the uniqueness of human writing, I believe that it has the potential to enhance it—if used thoughtfully.

    By embracing the cyborg model, where human and machine collaborate to produce something greater than either could alone, students can harness the strengths of both worlds. AI can provide technical precision and objective feedback, while the human writer contributes creativity, empathy, and depth. This partnership can lead to writing that is not only more polished but also more expressive and impactful. Most importantly, for learning to take place, the student must reflect on their workflow and on the suggestions from their Gen AI model: don’t accept revisions blindly, but ask why one phrasing works better than another, weigh each revision before accepting it, and practice the observed changes in future writing.

    In this new era of writing, the cyborg writer—part human, part machine—stands as a testament to the adaptability and resilience of the craft. By embracing this hybrid approach, students can navigate the evolving landscape of writing with confidence by leveraging the best of both worlds to produce unique work that communicates their ideas clearly to their intended audiences.

  • Generative AI for College Students Series: Enhancing Understanding by Summarizing Texts

    an anthropomorphic tuxedo cat wearing a green sweatshirt is writing notes in a notebook in a library

    Please keep in mind that new technology like Generative AI (Gen AI) shouldn’t simply make your thinking or work easier, much less take the place of the uniquely singular abilities of human beings to grow cognitively, think creatively, or evaluate critically. If you use Gen AI to simply avoid work, you are doing it wrong. Instead, using Gen AI in the spirit of Douglas Engelbart’s “augmenting human intelligence” and Donna Haraway’s configuration of the cyborg points the way to beneficial heightening of human possibility instead of harmful erasure of the cognitive distinctions of humanity. If you use Gen AI, use it wisely and use it well. This post is the fourth in this series.

    Generative AI can be a valuable tool for summarizing longer texts, aiding students in understanding key points before engaging with the full material. By summarizing dense academic articles or complex novels, AI provides a roadmap that highlights main arguments, supporting details, and conclusions. This process mirrors how cyborgs in science fiction use enhanced sensors or implants to process information more efficiently, blending human and machine capabilities to achieve better results.

    But it’s essential for students to follow through after reading summaries by then reading the original text. The summary primes the student’s brain to be more receptive to and engaged with the source material, but it can’t take the place of reading the text itself for a deeper understanding.

    For instance, a student analyzing Mary Shelley’s Frankenstein could use AI to summarize each chapter, then use those summaries to guide their identification of themes, motifs, and character development. AI can condense Victor Frankenstein’s complex narrative into clear, digestible sections, making it easier for students to trace the monster’s evolution from a rejected creature to a vengeful being. This tool doesn’t replace the need for close reading but rather enhances it by providing a framework for deeper analysis.

    Another example could be Isaac Asimov’s Foundation. This novel spans galaxies and centuries, with intricate political and mathematical concepts. AI summarization can help students break down the novel into manageable parts, such as the fall of the Galactic Empire or the rise of the Foundation. By focusing on key events and Hari Seldon’s psychohistorical predictions, students can better understand the novel’s exploration of societal change and human ingenuity. The AI acts as a cyborg-like enhancement, allowing students to process vast amounts of information more effectively.
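    In practice, a long novel or article has to be split into pieces that fit within a Gen AI tool’s input limits before it can be summarized section by section. A minimal word-count chunker, sketched in Python (the function name and limit are illustrative, not taken from any particular tool):

```python
def chunk_text(text: str, max_words: int = 500) -> list[str]:
    """Split text into chunks of at most max_words words,
    breaking on paragraph boundaries where possible."""
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        # Start a new chunk when adding this paragraph would
        # exceed the limit (a single oversized paragraph still
        # becomes its own chunk).
        if count + words > max_words and current:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

    Each chunk can then be summarized separately, and the summaries read in order serve as the kind of roadmap the examples above describe.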

    Similarly, Ursula K. Le Guin’s The Left Hand of Darkness presents a unique challenge with its exploration of gender identity and political intrigue on the planet Gethen. AI summarization can highlight the novel’s central themes, such as Ambassador Genly Ai’s struggles to understand the Gethenians’ androgynous society and the political tensions between nations. This clarity can help students focus on Le Guin’s nuanced commentary on human nature and societal structures, using the summaries as a starting point for their own insights.

    The true value of AI lies in its ability to free students from the initial challenge of parsing complex texts, allowing them to delve into deeper analysis by already having their bearings when they begin the text. It would certainly be better (in my opinion) for students to pick up a book or article and trudge through it in the snow, uphill both ways, but I can’t deny that this technology might help students who don’t have a background as readers or who have difficulties with one text but not others. By using AI as a tool, students can embrace a cyborg-like approach to learning, combining human critical thinking with machine efficiency. This blend enables them to explore themes, analyze motifs, and develop their own interpretations, fostering a richer understanding of the literature.