Tag: Pedagogy

  • Joan Slonczewski Added to Yet Another Science Fiction Textbook (YASFT)

    An image of a woman walking through a tunnel toward an ocean beach under a sky filled with stars, inspired by Joan Slonczewski's novel A Door Into Ocean. Created with Stable Diffusion.

    I added a whole new section on the Hard SF writer Joan Slonczewski (they/them/theirs) to the Feminist SF chapter of the OER Yet Another Science Fiction Textbook (YASFT). It gives students an overview of their background as a scientist, writer, and Quaker, and it discusses three representative novels from their oeuvre: A Door Into Ocean (1986), Brain Plague (2000), and The Highest Frontier (2011). As with the Afrofuturism chapter, I brought in more cited critical analysis of Slonczewski’s writing, using full parenthetical citations instead of a works cited list or footnotes.

    Slonczewski’s A Door Into Ocean was the inspiration for the image above that I created using Stable Diffusion. It took the better part of a day to create the basic structure of the image, then to inpaint specific details such as the woman’s footprints in the sand, and finally to feed the inpainted image back into SD’s ControlNet to produce the final image.

  • University of Pittsburgh’s Cathedral of Learning

    Cathedral of Learning building at the University of Pittsburgh in 2010, photo taken from a distance.

    In 2010, Y and I went on a day trip to Pittsburgh to look around before going to Ikea to pick up some new furniture. My favorite place in Pittsburgh is the Cathedral of Learning at the University of Pittsburgh, which I used often when I lived there, so we made it one stop on our itinerary.

    From a distance, it is an easy-to-spot landmark for getting around the University of Pittsburgh campus.

    Cathedral of Learning at the University of Pittsburgh, photo taken at its base near the entrance.

    Standing at its entrance, you cannot escape the building’s magnitude. And to think that this gigantic building–the second tallest educational building in the world–is dedicated to learning.

    Interior of the Cathedral of Learning at the University of Pittsburgh.

    Its first-floor interior study space is equally impressive. This cavernous space lends itself to individual and collaborative work.

    From an upper floor, you can look east to see Carnegie Mellon University.

    Sitting in the big chair in the study area of the Cathedral of Learning at the University of Pittsburgh.

    Before leaving, Y took a photo of me sitting in one of the big chairs in the study area on the first floor of the Cathedral of Learning.

    I think that all universities should invest in basic studying and learning spaces where students can work individually and together. It can be something as architecturally impressive as the Cathedral of Learning, or it could be something designed around sustainability and efficiency, such as Georgia Tech’s Clough Undergraduate Learning Commons. Whatever form it takes, it should center on students and their needs, whether they live on campus or commute. Essentially, students need space to study, work, and collaborate outside of the classroom.

  • Yorick, a Human Skull and Brain Teaching Aid for Cognition and Neuronarrative Related Lessons

    Anatomically correct human skull with working jaw and brain, front view
        Alas, poor Yorick! I knew him, Horatio: a fellow
        of infinite jest, of most excellent fancy: he hath
        borne me on his back a thousand times; and now, how
        abhorred in my imagination it is! my gorge rises at
        it. Here hung those lips that I have kissed I know
        not how oft. Where be your gibes now? your
        gambols? your songs? your flashes of merriment,
        that were wont to set the table on a roar? Not one
        now, to mock your own grinning? quite chap-fallen?
        Now get you to my lady's chamber, and tell her, let
        her paint an inch thick, to this favour she must
        come; make her laugh at that. -Shakespeare, Hamlet

    I bring my trusted skull and brain model nicknamed Yorick to my writing and science fiction classes when I want to talk about something related to cognition–e.g., how our attentional focus works, the cognitive costs of switching between cognitive tasks, the time delay from sensory perception to processing to conscious awareness, where the speech regions–Broca’s area and Wernicke’s area–are located, etc. Yorick’s skull and multi-component brain give students something that they can see, feel, and manipulate as it gets passed around the classroom.

    And when students leave a hat behind, Yorick gets a treat.

    human skull model wearing a knitted Michael Kors hat
  • Update on the Search for Space Station L-4: A Conversation with Steve Lenzen

    Skylab Orbital Workshop Interior, Smithsonian Air and Space Museum, Washington, DC. Photo taken in 2008.

    As I wrote last week here, I reached out to Steve Lenzen via postal mail about Space Station L-4, the Earth Sciences Educational Program from 1977, after I found his contact information on an archived version of GPN’s website. He worked at GPN from 1976 to 2006, and he co-founded Destination Education. He kindly replied to me via email with important details about the history of GPN and why it might be impossible to find a copy of the series. He explains:

    "The series was produced by Children's Television International, which was owned by Ray Gladfelter. When Ray was "winding down" his career, GPN took over distribution because Ray was an old friend of our director at the time. When Ray died, many, many years ago his old friend had also retired and GPN ceased distribution. Actually, GPN had ceased distribution years before that because there was no demand."
    
    "Back when Ray produced the series many or most of the PBS Stations broadcast programs specifically designed for use in the classroom. This mode of getting educational programming into the classroom was started before the age of VHS and Betamax. The introduction of Betamax and then VHS is what led to the "death" of 16mm film and subsequently PBS stations airing a block of programs designed specifically for in classroom use. Starting in the late 80's, teachers were demanding that PBS Stations air only new, up-to-date programs depicting current hair styles, clothes, etc. If a series did not meet this criteria, teachers did not want it."
    
    "Due to the lack of storage space, once a series was pulled from distribution GPN destroyed the submaster it had. The copyright holder/producer usually had a master. Space Station L-4 was pulled out of distribution long before advent of DVD which meant it cost of lot of money to keep old master, usually 2" Quad, 1" Helical, or Betamax in storage. As a result, the copyright owner also destroyed their copy."
    
    "After Ray's death, his son . . . took control of Children's Television International. . . . The company, CTI, was, out of business by then so all he could do was find a place to give the tapes or destroy them."

    My next move is to reach out to Ray Gladfelter’s son. I will report back with any developments.

    If you’re unfamiliar with Space Station L-4, there are details about the show in my 2013 interview with Paul Lally, its producer, writer, and director, here.

  • All In on Artificial Intelligence

    An anthropomorphic cat wearing coveralls, working with advanced computers. Image generated with Stable Diffusion.

    As I wrote recently about my summertime studying and documented in my generative artificial intelligence (AI) bibliography, I am learning all that I can about AI–how it’s made, how we should critique it, how we can use it, and how we can teach with it. As with any new technology, the more that we know about it, the better equipped we are to master it and debate it in the public sphere. I don’t think that fear and ignorance about a new technology are good positions to take.

    Like many others, I see AI as an inevitable step forward in how we use computers and what we can do with them. However, I don’t think that these technologies should only be under the purview of big companies and their (predominantly) man-child leaders. Having more money and market control does not mean one is a more ethical practitioner with AI. In fact, it seems that some industry leaders are calling for more governmental oversight and regulation not because they have real worries about AI’s future development but because they are in a leadership position in the field and can likely shape how the industry is regulated through their connections with would-be regulators (i.e., the revolving door between industry and government seen in other regulatory agencies).

    Of course, having no money or market control in AI does not mean one is potentially more ethical with AI either. But ensuring that there are open, transparent, and democratic AI technologies creates the potential for a less skewed playing field. While there’s the potential for abuse of these technologies, making them available to all creates the possibility for many others to use AI for good. Additionally, if we were to keep AI behind locked doors, only those with access (legal or not) would control the technology, and there’s nothing to stop other countries and good/bad actors in those countries from using AI however they see fit–for good or ill.

    To play my own small role in studying AI, using generative AI, and teaching about AI, I wanted to build my own machine learning-capable workstation. Before making any upgrades, I spent the past few months maxing out what I could do with an Asus Dual RTX 3070 8GB graphics card and 64GB of RAM. I experimented primarily with Stable Diffusion image generation models using Automatic1111’s stable-diffusion-webui and LLaMA text generation models using Georgi Gerganov’s llama.cpp. An 8GB graphics card like the NVIDIA RTX 3070 provides a lot of horsepower with its 5,888 CUDA cores and fast memory bandwidth to its on-board memory. Unfortunately, the on-board memory is too small for larger models or for augmenting models with multiple LoRAs and the like. For text generation, you can split the model’s layers between the graphics card’s memory and your system’s RAM, but this is inefficient and slow compared to having the entire model loaded in the graphics card’s memory. Therefore, a video card with a significant amount of VRAM is a better solution.
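
    As a concrete, minimal sketch of what that layer-splitting looks like, here is how one might do it with the llama-cpp-python bindings rather than the llama.cpp binary itself; the model filename and the number of offloaded layers are hypothetical placeholders, not my exact setup.

        # Minimal sketch of partial GPU offloading with the llama-cpp-python
        # bindings (pip install llama-cpp-python). The model file and layer
        # count are hypothetical placeholders.
        from llama_cpp import Llama

        llm = Llama(
            model_path="models/llama-65b.q4_0.bin",  # hypothetical quantized model file
            n_gpu_layers=40,  # layers kept in the card's VRAM; the rest stay in system RAM
            n_ctx=2048,       # context window size
        )

        output = llm("Define science fiction in one sentence.", max_tokens=64)
        print(output["choices"][0]["text"])

    The fewer layers that fit in VRAM, the more time is spent waiting on system RAM, which is why a card that can hold the whole model is so much faster.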

    Previous interior of my desktop computer with air cooling, 128GB RAM, and Asus Dual Geforce RTX 3070 8GB graphics card.

    For my machine learning-focused upgrade, I first swapped out my system RAM for 128GB of DDR4-3200 (4 x 32GB Corsair shown above). This allowed me to load a 65B-parameter model into system RAM and let my Ryzen 7 5800X 8-core/16-thread CPU perform the operations. The CPU usage while llama.cpp processes tokens looks like an EEG:

    CPU and memory graphs show high activity during AI inference.

    While running inference on the CPU was certainly useful for my initial experimentation and the CPU usage graph looks cool, it was exceedingly slow. Even an 8-core/16-thread CPU is ill-suited for AI inference, in part because it lacks the massive parallelism of graphics processing units (GPUs), but perhaps more importantly because of the system memory bottleneck, which is only 25.6 GB/s for DDR4-3200 RAM according to Transcend.
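
    A rough back-of-the-envelope calculation shows why that bandwidth figure matters: generating each token requires streaming essentially all of the model’s weights through the processor, so memory bandwidth divided by model size gives an optimistic upper bound on tokens per second. The model size below is an illustrative assumption, not a benchmark.

        # Optimistic upper bound on token generation speed:
        # tokens/sec <= memory bandwidth / model size in memory.
        model_size_gb = 40          # e.g., a 4-bit quantized 65B model (assumption)
        ddr4_bandwidth_gbps = 25.6  # DDR4-3200, per the Transcend figure above
        a6000_bandwidth_gbps = 768  # RTX A6000 VRAM bandwidth

        print(ddr4_bandwidth_gbps / model_size_gb)   # ~0.6 tokens/sec at best in system RAM
        print(a6000_bandwidth_gbps / model_size_gb)  # ~19 tokens/sec at best in VRAM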

    Video cards, especially those designed by NVIDIA, provide specialized parallel computing capabilities and enormous memory bandwidth between the GPU and video RAM (VRAM). NVIDIA’s CUDA is a very mature system for parallel processing that has been widely accepted as the gold standard for machine learning (ML) and AI development. CUDA is, unfortunately, closed source, but many open-source projects have adopted it due to its dominance within the industry.

    My primary objective when choosing a new video card was that it had enough VRAM to load a 65B LLaMA model (roughly 48GB). One option for doing this is to install two NVIDIA RTX 3090 or 4090 video cards, each with 24GB of VRAM, for a total of 48GB. This would solve my needs for running text generation models, but it would limit how I could use image generation models, which can’t be split between multiple video cards without a significant performance hit (if at all). So, a single card with 48GB of VRAM would be ideal for my use case.

    Three options that I considered were the Quadro RTX 8000, the A40, and the RTX A6000. The Quadro RTX 8000 uses the three-generation-old Turing architecture, while the A40 and RTX A6000 use the two-generation-old Ampere architecture (the latest Ada architecture was outside of my price range). The Quadro RTX 8000 has a memory bandwidth of 672 GB/s, while the A40 has 696 GB/s and the A6000 has 768 GB/s. Also, the Quadro RTX 8000 has far fewer CUDA cores than the other two cards: 4,608 vs. 10,752 (A40) and 10,752 (A6000).

    Considering the specs, the A6000 was the better graphics card, but the A40 was a close second. However, the A40, even found at a discount, would require a DIY forced-blower system, because it is designed to be used in rack-mounted servers with their own forced-air cooling systems. 3D-printed solutions that mate fans to the end of an A40 are available on eBay, or one could rig something DIY. But, for my purposes, I wanted a good card with its own cooling solution and a warranty, so I went with the A6000 shown below.
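
    To put the “roughly 48GB” figure in context, here is the kind of quick estimate one can make: parameter count times bits per weight, plus some headroom for the context (KV) cache and activations. The bit width and overhead below are illustrative assumptions, not exact measurements of any particular model file.

        # Approximate memory footprint of a quantized large language model.
        params = 65e9        # 65B parameters
        bits_per_weight = 5  # e.g., a 5-bit quantization (assumption)
        overhead_gb = 6      # rough allowance for context cache and activations (assumption)

        weights_gb = params * bits_per_weight / 8 / 1e9
        print(weights_gb)                # ~40.6 GB for the weights alone
        print(weights_gb + overhead_gb)  # ~46.6 GB total, which is why 48GB of VRAM is the target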

    nvidia A6000 video card

    Another benefit of the A6000 over the gaming performance-oriented 3090 and 4090 graphics cards is that it requires much less power–only 300 watts at load (vs. ~360 watts for the 3090 and 450 watts for the 4090). Even with this lower power draw, I only had a generic 700 watt power supply, and I wanted to protect my investment in the A6000 and ensure it had all of the power that it needed, so I opted to go with a recognized name-brand PSU–a Corsair RM1000x. It’s a modular PSU that can provide up to 1,000 watts to the system (it only provides what is needed–it isn’t using 1,000 watts constantly). You can see the A6000 and Corsair PSU installed in my system below.

    new computer setup with 128GB RAM and A6000 graphics card

    Now, instead of waiting 15-30 minutes for a response to a long prompt run on my CPU and system RAM, it takes mere seconds to load the model into the A6000’s VRAM and generate a response, as shown in the screenshot below of oobabooga’s text-generation-webui using the Guanaco-65B model quantized by TheBloke to provide definitions of science fiction for three different audiences. The tool running in the terminal in the lower right corner is NVIDIA’s System Management Interface, which can be launched by running “nvidia-smi -l 1”.

    text generation webui running on the a6000 video card
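
    If you would rather watch the card from Python instead of the nvidia-smi loop shown in that terminal, something like the following sketch works, assuming the nvidia-ml-py package (imported as pynvml) is installed; it is a monitoring convenience, not part of text-generation-webui itself.

        # Poll GPU utilization and VRAM usage via NVML, assuming the
        # nvidia-ml-py package (imported as pynvml) is installed.
        import time
        import pynvml

        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU (the A6000 in this build)

        for _ in range(5):  # take five one-second samples
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            print(f"GPU {util.gpu}% | VRAM {mem.used / 1e9:.1f} / {mem.total / 1e9:.1f} GB")
            time.sleep(1)

        pynvml.nvmlShutdown()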

    I’m learning the programming language Python now so that I can better understand the underlying code that makes many of these tools and AI algorithms work. If you are interested in getting involved in generative AI technology, I recently wrote about LinkedIn Learning as a good place to get started, but you can also check out the resources in my generative AI bibliography.