Category: Artificial Intelligence

  • Enjoyed Alien: Romulus Despite Too Damn Loud IMAX and Other Customers Who Were Annoying

    xenomorph alien made out of paper in origami style. Image created with Stable Diffusion.

    Yesterday, Y and I took the subway to Manhattan to watch the film Alien: Romulus on the IMAX screen at the AMC 14 on 34th Street.

    I thought that Alien: Romulus told an interesting story that threaded the needle of connecting the origin film Alien (1979), via the first Xenomorph we saw and the android Ash (Ian Holm), to Prometheus (2012) and Alien: Covenant (2017), via the black liquid (hints of the black oil from The X-Files) and the Engineers. The retrocomputers, ASCII text, and a computer with a 3.5″ floppy disk drive made it feel like the same world as Alien. I felt that some of the lines were corny, over-the-top, and unnecessary fan service, but overall, it was an interesting and sometimes exciting addition to the series.

    Unrelated to the film itself, I have some thoughts about the technologies of presentation and the communal experience of watching a film in a theater.

    First, movies shown in theaters, especially IMAX presentations, are played far too loud. Y and I last went to an IMAX film over 10 years ago, but remembering how that experience hurt both of our ears, we planned ahead and brought foam ear plugs. Even our ear plugs, which work wonders at eliminating noise in other settings, were just barely up to the task of keeping the volume of the film at tolerable levels. Let me put that another way: while wearing ear plugs, I could hear the film’s dialog, sound effects, and music just fine, and sometimes not so fine when it got loud enough to overpower the ear plugs. That’s too damn loud. It was only as we were leaving that Y thought we should have checked the decibel levels. Hindsight is 20-20.

    Second, I know that to some I might sound like an old man yelling at kids to get off my lawn, but those who have known me a long time know that I’ve been deadly serious about this since I started going to films as a kid: we owe our fellow theatergoers respect so that everyone can enjoy the film. Carrying on, talking, or using a phone during a movie disturbs others, so we shouldn’t do those things. Unfortunately, some of the other customers, who would have paid the same $30 per ticket we paid, don’t care about social norms or simple decency. It would be one thing if they were kids who didn’t know any better, but these were adults who acted like kids. Hell is other people, I suppose.

    Considering these things, I prefer to stay at home, where I can enjoy a film without ear plugs or annoying guests. Of course, that assumes the neighbors don’t act the fool, a problem I’ve tried my best to address by following these tips.

  • Reflected Buildings in Manhattan

    Glass windows on a high-rise building reflect the stone facade of the building on the opposite side of the street.

    Standing on E 41st St between 5th Ave. and Madison Ave. in Manhattan, I liked how the buildings on one side of the street created slightly distorted reflections of the buildings on the side where I was standing. The reflected images remind me of the wobbly, almost-straight lines in images generated by Stable Diffusion, like seeing a scene reflected in a mirror with slight imperfections: not quite a fun house mirror, but heading in that direction.

  • Awards and the Circulation of Cultural Capital

    a statue of a playful cat with a base that reads "Best Cat." Image created with Stable Diffusion.

    The 2024 Hugo Awards, announced at Worldcon in Glasgow, Scotland, and the Paris Olympics closing ceremony, both last night, reminded me of something that I wrote in 2008 about the circulation of cultural capital and the erosion of the science fiction genre via awards given to writers who are not considered SF writers. I had taken issue with popular, mainstream works winning genre awards that could, I believe, go further toward promoting authors within the genre instead of sending the field’s cultural capital to genre interlopers. My question was: should the metric of “best” be the only qualifier in an awards contest, or should the field do some policing of what counts as “best” when the circulation of cultural capital is considered?

  • Mark V. Shaney v1.0, a Probabilistic Text Generator for MS-DOS

    Mark V. Shaney v.1.0 running in DOSBox.

    Of the text generators that I’ve discussed this past year, Mark V. Shaney v1.0 (MARKV.EXE) is by far the simplest to use, but it is also one of the most advanced because of the weighted probability tables (Markov chains, on which the program’s name is a pun) that underpin how it generates text. I was able to obtain a copy from the TextWorx Toolshed archived on the Internet Archive’s Wayback Machine.

    MARKV.EXE (44,365 bytes) was developed in 1991 by Stefan Strack, who is now a Professor of Neuroscience and Pharmacology at the University of Iowa. In the MARKV.DOC (10,166 bytes) file that accompanied the executable, Strack writes, “Mark V. Shaney featured in the ‘Computer Recreations’ column by A.K.Dewdney in Scientific American. The original program (for a main-frame, I believe) was written by Bruce Ellis based on an idea by Don P. Mitchell. Dewdney tells the amusing story of a riot on net.singles when Mark V. Shaney’s ramblings were unleashed” (par. 2). Dewdney’s article on the Mark V. Shaney program appears in the June 1989 issue of Scientific American. The article that Strack mentions is available in the Internet Archive here. A followup with reader responses, including a reader’s experiment with rewriting Dewdney’s June 1989 article with MARKV.EXE, is in the January 1990 issue here.

    The program works by having the user feed a text into MARKV.EXE, which it “reads.” The reading pass generates a hashed table of probabilistic weights for the words in the original text, which can be saved. The program then uses that table and an initial numerical seed value to generate text until it encounters the last word in the input text or the user presses Escape. The larger the text (given memory availability), the more interesting the output, because more data allows the program to calculate better probability weights for word associations (i.e., which word has a higher chance of following a given word). Full details about how the program works can be found in the highly detailed and well-organized MARKV.DOC file included with the executable.
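
    To make that process concrete, here is a minimal sketch in Python of the general technique: a word-level Markov chain built from next-word counts. It is only an illustration of the idea, not a reimplementation of MARKV.EXE’s actual hashing or file formats; the file name bchrome.txt simply echoes the example below, and the helper names are my own.

        import random
        from collections import defaultdict

        def build_table(text):
            """Count, for every word, how often each other word follows it."""
            words = text.split()
            table = defaultdict(lambda: defaultdict(int))
            for current, following in zip(words, words[1:]):
                table[current][following] += 1
            return table, words[0], words[-1]

        def generate(table, start, stop, seed=None, max_words=200):
            """Walk the chain, picking each next word with probability
            proportional to how often it followed the current word."""
            rng = random.Random(seed)  # a fixed seed makes the run repeatable
            word, output = start, [start]
            while word != stop and len(output) < max_words:
                followers = table.get(word)
                if not followers:
                    break
                choices, weights = zip(*followers.items())
                word = rng.choices(choices, weights=weights)[0]
                output.append(word)
            return " ".join(output)

        with open("bchrome.txt") as f:
            table, first, last = build_table(f.read())
        print(generate(table, first, last, seed=2510))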

    Using DOSBox on Debian 12 Bookworm, I experimented by having MARKV.EXE read William Gibson’s “Burning Chrome” (1982). I pressed “R” for “Read,” entered the name of the text file (bchrome.txt), and pressed Enter.

    The program reported “reading” for a few minutes (running DOSBox at default settings).

    After completing its “reading,” the program reported stats on the table that it created using bchrome.txt: 9167 terms (608,675 bytes).

    I pressed “G,” and the program began to generate text based on the table of probabilities it had built from bchrome.txt, the text file containing the short story “Burning Chrome.” As the generated text flows across the screen, you can press “Esc” to stop it or any other key to pause it.

    After it finished writing the generated text to the screen, I pressed “S” to save it, and the program prompted me for a file name for the generated text: gibson.txt.

    Pressing “S” also gives the user an option to save the table for future use. I went with the default name, MARKKOV.MKV (not to be confused with a modern Matroska container file). The table can be loaded in MARKV.EXE on subsequent runs by pressing “L” and entering its name. When the user presses “Q,” the program exits back to DOS and displays the message, “The random number seed was x,” where x is the random number used in the generation of the text. If repeatability is important to you, make a note of that number and pass it with the -s modifier when running MARKV.EXE again (e.g., markv.exe -s2510).
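
    The Python sketch above can mirror that save-the-table-and-reuse-the-seed workflow. Again, this is only an analogy: the file name markov_table.pkl is a stand-in, not MARKV.EXE’s .MKV format, and the seed 2510 is just the example value from the command line above.

        import pickle

        # Save the table once, loosely analogous to writing the .MKV table file.
        # (Convert the nested defaultdicts to plain dicts so they pickle cleanly.)
        with open("markov_table.pkl", "wb") as f:
            pickle.dump({word: dict(followers) for word, followers in table.items()}, f)

        # On a later run, load the table and reuse the same seed for identical output.
        with open("markov_table.pkl", "rb") as f:
            saved_table = pickle.load(f)
        assert generate(saved_table, first, last, seed=2510) == generate(table, first, last, seed=2510)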

    Mark V. Shaney’s implementation of a Markov chain that builds a table of next-word probabilities from a small text sample is one example of the predecessors of large language models (LLMs) like LLaMA and ChatGPT. However, Mark V. Shaney’s word-association probabilities are far simpler than the much more complicated neural networks of LLMs (especially considering attention), which have many orders of magnitude more parameters and are trained on gargantuan data sets. Nevertheless, Mark V. Shaney is one part of the bigger picture of artificial intelligence and machine learning development that led to where we are now.

  • Where to Search for Open Educational Resources (OER)

    a ulysses butterfly folded origami style out of paper, resting on a book, in a wooded area. Image created with Stable Diffusion.

    The next academic year is just around the corner, so I wanted to give a shout out to the open educational resource (OER) that I published earlier this year, Yet Another Science Fiction Textbook (YASFT), a textbook of over 60,000 words on the history of SF literature that includes a syllabus, video lectures, and more.

    And, if you’re an educator needing open and free teaching materials and textbooks, here are some useful resources where you can find OERs: