Category: Pedagogy

  • CUNY Graduate Center Classroom for Interactive Technology and Pedagogy I

    panoramic view of a seminar style classroom from a corner opposite the door, two chalkboards, overhead projector, tv in back

    Last night, I stuck around after my first co-taught Interactive Technology and Pedagogy I class at the CUNY Graduate Center. The class went very well. Students demonstrated that they had done the reading, and some brought deep perspectives from their disciplines to bear on our first discussion of technology and media.

    Thankfully, no classes were scheduled in the room afterwards, so after everyone cleared out, I used the classroom to meet with a City Tech student over Zoom for her Individualized Study of ENG3790 Information Architecture.

  • Fallback to Debian 12 Bookworm

    While I’m keeping Debian 13 Trixie on my media center computer, I’ve decided to fall back to Debian 12 Bookworm on my laptop and workstation. The more I used Trixie on those machines, the more I realized that some things I relied on just weren’t working right. Once that software gets updated, I’ll try Trixie again, but for now, especially while I’m frantically getting things ready for classes to begin next Tuesday, I’ll rely on tried-and-true Bookworm.

  • CUNY Graduate Center ID Picked Up, Teaching There and City Tech This Fall

    front entrance of the cuny graduate center, multi-story building with stone facade and columns rising from the sidewalk to the second floor

    I visited the CUNY Graduate Center on 5th Avenue in Manhattan to get an adjunct faculty ID made, because I’ll be co-teaching Interactive Technology and Pedagogy I: History and Theory (ITCP 70010) this fall. This course is part of the Interactive Technology and Pedagogy (ITP) Certificate Program, and when I was offered the opportunity to contribute to the program, I jumped at it! It aligns with some of the work that I do in the Professional and Technical Writing Program at City Tech, which involves using technology for communication and learning about the history of digital technologies, and it is a kindred program to the learning-to-teach-with-technology aspects of the Brittain Fellowship at Georgia Tech. I’m excited to work with the program’s graduate students beginning in a couple of weeks.

  • Generative AI for College Students Series: Beware Hallucinations and Falsehoods

    an anthropomorphic cat as a professor in a business suit lecturing in front of a classroom with a chalkboard behind him
    Image created with Stable Diffusion.

    Please keep in mind that new technology like Generative AI (Gen AI) shouldn’t simply make your thinking or work easier, much less take the place of the uniquely singular abilities of human beings to grow cognitively, think creatively, or evaluate critically. If you use Gen AI to simply avoid work, you are doing it wrong. Instead, using Gen AI in the spirit of Douglas Engelbart’s “augmenting human intelligence” and Donna Haraway’s configuration of the cyborg points the way to beneficial heightening of human possibility instead of harmful erasure of the cognitive distinctions of humanity. If you use Gen AI, use it wisely and use it well. This post is the seventh in this series.

    In the realm of science fiction, the concept of cyborgs, beings that blend human and machine, often explores the tension between human intelligence and technological advancement. Today, as students increasingly rely on generative AI tools to assist with writing, they are, in a sense, becoming cyborgs of the academic world. These tools can produce coherent, polished text, but they also carry significant risks. One of the most concerning issues is the tendency of AI to “hallucinate,” or generate information that is entirely fabricated or misleading. This phenomenon is not just a glitch; it’s a fundamental limitation of how these tools operate.

    Generative AI works by predicting and reassembling language patterns from vast datasets, often drawn from the internet. While the internet is a rich source of information, it is also a breeding ground for misinformation, conspiracy theories, and outright falsehoods. When AI tools process this data, they don’t distinguish between fact and fiction. They simply mimic the patterns they find. The result is responses that may look authoritative and well-written but are, in reality, partly or entirely inaccurate.
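
    To make that concrete, here is a deliberately tiny, hypothetical Python sketch of pattern-based text generation (a toy word-frequency model, not how any production Gen AI system is actually built). The corpus, the generate function, and the sample output are all invented for illustration; the point is simply that nothing in this kind of pipeline ever checks whether the generated text is true, because it only reproduces patterns from whatever text it was fed.

    ```python
    import random
    from collections import defaultdict, Counter

    # Toy "training" text: two contradictory sentences mashed together.
    corpus = (
        "the study found warming of two degrees "
        "the study found no warming at all"
    ).split()

    # Count which word tends to follow which in the corpus.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def generate(start, length=7):
        """Generate text by repeatedly sampling a likely next word.

        Nothing here verifies facts; the output only reflects patterns
        in whatever text the counts were built from.
        """
        words = [start]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:
                break
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))
    # One possible output: "the study found no warming of two degrees"
    # Fluent-looking, pattern-based, and never checked against reality.
    ```

    Scaled up by many orders of magnitude, with neural networks instead of word counts, this is roughly why a polished-sounding response can still be wrong: fluency comes from frequency, not from verification.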

    Gen AI presents its responses without qualification or equivocation. The potentially wrong, made-up information in a given response is presented as if it were irrefutable.

    Consider a student working on a research paper about climate change. They prompt an AI tool to provide a summary of recent findings. The AI responds with a polished paragraph that includes specific statistics and citations. The problem? Some of those statistics might be fabricated, and the citations could refer to nonexistent studies. The student, unaware of the fabrication, incorporates this information into their paper, potentially spreading misinformation.

    This issue is reminiscent of the theme of unreliable narration in science fiction. In works like Philip K. Dick’s Ubik, reality itself is unstable, and characters must navigate a world where information is constantly shifting and misleading. Similarly, students using AI tools must navigate a landscape where the line between truth and fiction is increasingly blurred. The AI, like the narrator in Dick’s novel, presents a version of reality that may not correspond to actual facts.

    To avoid falling into this trap, students must approach AI-generated information with skepticism. They should verify any claims made by the AI by cross-referencing with credible sources. In other words, they must act as human fact-checkers, ensuring that the information they use is accurate. This process requires a critical eye and a willingness to question even the most plausible-sounding responses. Reading Gen AI responses with a healthy dose of skepticism and engaging with research written by authorities in the relevant fields will help students verify those responses while building information literacy and research skills.

    The cyborg student, armed with both human critical thinking and the power of AI, must learn to use these tools responsibly. By doing so, they can harness the benefits of AI while avoiding the pitfalls of misinformation. The bottom line is that Gen AI is a good tool that can work with text in various ways, especially text that you supply it with, but it shouldn’t be relied on as a knowledge base, because it isn’t designed to be a reference in the same way that an encyclopedia, database, or textbook is.

  • Generative AI for College Students Series: Translation and Bridging Language Gaps

    Image created with Stable Diffusion.

    Please keep in mind that new technology like Generative AI (Gen AI) shouldn’t simply make your thinking or work easier, much less take the place of the uniquely singular abilities of human beings to grow cognitively, think creatively, or evaluate critically. If you use Gen AI to simply avoid work, you are doing it wrong. Instead, using Gen AI in the spirit of Douglas Engelbart’s “augmenting human intelligence” and Donna Haraway’s configuration of the cyborg points the way to beneficial heightening of human possibility instead of harmful erasure of the cognitive distinctions of humanity. If you use Gen AI, use it wisely and use it well. This post is the sixth in this series.

    Science fiction often features devices that break language barriers, such as Douglas Adams’ Babel Fish in The Hitchhiker’s Guide to the Galaxy (1979) or the universal translator popularized by Star Trek. AI translation tools bring this concept closer to reality. While not perfect, AI can translate texts, helping students communicate across languages and access knowledge written in languages that they don’t know. However, the concerns about Gen AI accuracy hold even more strongly when translating texts from one language to another, which might produce inaccuracies in phrasing, thought, and facts.

    Gen AI can serve as a bridge for students working with or writing about multilingual sources, offering translations that facilitate understanding. While these translations may not always capture the subtleties of idioms or cultural context, they can open a world of ideas and provide a foundation for further exploration.

    For instance, a student researching a Spanish-language novel could use Gen AI to translate key passages, then analyze how the original language contributes to the text’s tone and meaning.

    For language learners, Gen AI translation can help them understand troublesome passages in readings or translate writing in their native language into the language that they are learning. In both cases, students should use this as an aid for learning and not as a plagiarism tool.

    While Gen AI translations are not perfect, they open doors to global perspectives and ideas.

    Students should use these tools with awareness of their limitations, supplementing Gen AI translations with human expertise when possible.