Before we knew about Hurricane Helene, I had planned to visit my folks for two weeks to help out after my dad got out of the hospital for a back ailment. I took Amtrak’s 97 Silver Meteor from NYC to Jesup, Georgia (Sept. 24-25, 2024), and returned via the 98 Silver Meteor from Jesup to NYC (Oct. 8-9, 2024). Here are some pictures of the trains, sleeper car rooms, and meals.
97 Silver Meteor (NYP to JSP)
The 97 Silver Meteor sleeper cars didn’t have a toilet in the room like my previous Amtrak ride from JSP to NYC did. Otherwise, the car seemed of newer construction. However, the room door rattled constantly. I should have asked for something to wedge into the door to eliminate that noise (and sleep better as a result). I liked how the in-car sink had turn knobs, which work much better than push-button faucets (which give either too little water or an explosion of water). Because the dining car was full, I asked for dinner in my room. It came with all the trimmings and was delicious! The worst part of the trip was that someone in my car kept going out to the bridge between cars to smoke. They left the door open, smoke entered the car, and it set off the smoke alarms in the unoccupied rooms and in mine when I went to investigate. The culprit never revealed themselves.
98 Silver Meteor (JSP to NYP)
The Silver Meteor from JSP to NYP was much like my previous ride from SAV to NYP. The roomette was very similar–toilet in the room, push-button sink controls, older construction, and less vibration noise. I had breakfast in the dining car close to 7:00am. The omelet and fixings hit the spot! The downsides to this ride were that the water pressure on the sink faucet was far too high and the air conditioning was warm despite my changing the thermostat.
My absolute favorite piece of software for my 486DX2/66MHz computer with a CD-ROM drive was the Star Trek: The Next Generation Interactive Technical Manual (1994). Built using Macromedia Director and Apple Quicktime VR and distributed on CD-ROM for Macintosh and Windows 3.1, it presents an LCARS (Library Computer Access/Retrieval System) interface to the user for navigating through spaces aboard the USS Enterprise NCC-1701-D, viewing the exterior and interior three-dimensionally, reading technical information, hearing ambient starship sounds, and listening to audio from the Computer (Majel Barrett-Roddenberry) and Commander William Riker (Jonathan Frakes).
Before its release, I religiously carried around Rick Sternbach and Michael Okuda’s Star Trek: The Next Generation Technical Manual (1991)–a softcover, magazine-sized book about 1/2″ thick–that detailed the design and function of the 24th-century technology that went into the USS Enterprise NCC-1701-D. I filled it with marginalia and referenced it when I was drawing or discussing esoteric technical minutiae of Star Trek: TNG. It is an example of printed technical communication material about the science and technology scaffolding for the science fictional narratives of Star Trek: TNG. The Interactive Technical Manual added so much more to the experience by putting the user into the spaces described and illustrated on the two-dimensional pages of the Technical Manual. While the Interactive Technical Manual wasn’t nearly as portable as the Technical Manual, it felt like a revolutionary approach that, despite being static, continued to provide new and interesting experiences for the user based on the interactive paths and options (e.g., tour vs. explore; voice vs. no voice; jump vs. transit) selected.
For this post, I ran the Interactive Technical Manual on Macintosh System 7.5.5 emulated with SheepShaver on a Debian 12 Bookworm host. In the past, I got the Interactive Technical Manual running on Windows 3.1 installed in DOSBox, but I don’t currently have that setup on this computer. Using the included Quicktime with Quicktime VR is key to successfully running the software on either operating system setup.
After loading, the program gives the user two options: Guided Tour or Explore. The Guided Tour features Jonathan Frakes as Commander William Riker providing voiceovers as various points, equipment, and artifacts around the Enterprise are shown on screen. Explore takes the user directly to the Ship Exterior view with the LCARS navigation menu open on the right.
Ship Exterior is the landing page for the Explore option. The view of the Enterprise on the left is a Quicktime VR movie with options to rotate the ship up, down, left, or right, with gradations in between on each axis, which makes it feel like turning the ship around as a three-dimensional object. This was a mind-blowing feature to me at the time. Despite the low resolution and small color palette, looking at the Enterprise from all of these angles–many of which I had never seen in an episode of Star Trek: The Next Generation before–felt like the future. Using the LCARS navigation menu on the right and clicking on Location loads options for other places around the Enterprise to see and learn more about.
An important place to visit on the Enterprise is the Bridge. The image of the bridge on the left is a Quicktime VR video that allows the user to look around 360 degrees and click “forward” into other nearby views. Those other views are represented by the white squares in the legend on the lower right corner of the screen. Moving through the space of the bridge felt as close to being there as possible at that time. The closest that we’ve come to that today is the fan-made Studio 9 over 20 years later.
From the Bridge screen, clicking on the Parallels option opens cross-referenced information related to the Bridge. In this case, an Exterior Details view of the Bridge.
Another top spot to visit is Engineering. The user can click the forward arrow within the Quicktime VR video showing the entrance to Engineering on the left, or click the white squares on the legend in the lower right–each viewpoint features a 360-degree view from that vantage point and navigation arrows leading to the other nearby viewpoints.
Clicking forward from the entrance to Engineering leads to the warp core–the matter/anti-matter reactor that powers the Enterprise.
The Transporter Room is another must-see location within the Enterprise. This view is to the right of the one that the user first sees when entering the Transporter Room. It’s right behind the transporter control, facing the transporter pad.
Another innovative feature that helps the user conceptualize locations within the Enterprise is the Transit Mode between locations. Let’s say that I want to go to Lt. Commander Data’s quarters via Transit Mode. First, the screen on the right shows me where I currently am and then highlights the location of Data’s quarters. On the left, the camera backs out of the Transporter Room, travels down the hallway to the Turbolift, which opens, and the camera enters.
The camera shows a brief ride in the Turbolift, which then opens on the corridor for the deck where Data’s quarters are.
The camera moves down the corridor, turns at the door for Data’s quarters, the doors open, and the camera enters at the first of the Quicktime VR views there.
Inside Data’s quarters, the user can click through the Quicktime VR videos on the left or use the legend on the right. Note that Data’s Sherlock Holmes costume is hanging on the coat rack in the back right corner.
Overall, the Star Trek: The Next Generation Interactive Technical Manual is a well-thought-out, complete user experience that gives the user a different view and experience of the USS Enterprise D. Its adherence to a logical and self-contained user interface, consistently applied throughout the program, brings the user into the world of the future. Its aural and video features created an experience of being there–even though you were looking at a low-resolution 14″ monitor and hearing its audio through the low-quality beige speakers that came with your sound card. Its power was in overcoming the constraints of early 1990s personal computer hardware and software to create an experience for Star Trek fans with every affordance available at that time.
Finally, Keith Halper, the CD-ROM’s producer for Simon & Schuster Interactive, writes the following in the credits for the Interactive Technical Manual–exploring both what kind of software this is and what exactly it was intended to do:
I want to endeavor to encapsulate our goals in the Interactive Technical Manual for an interactive development community that will, without doubt, surpass our best efforts here in the flash of a tachyon beacon, and also for a Star Trek community to whom we owe our gravest responsibility.
This software is not quite a game, not quite a story, not quite a work of reference.

This is a fiction, with characters and scenes, but no preordained plot. Rather, a story unwinds--or more precisely, occurs--as you go. The struggles and events of the crews' lives are absent from this "episode". The mechanism by which a storyteller traditionally tells us about characters--and through them about both writer and audience--cannot exist in a totally non-linear experience. Yet, in your own exploration of the Enterprise here, of the environs and systems and quarters and art and artifacts, you may come to understand a story about the members of a particular starship. This story includes impressions of their world, thoughts regarding our relationship with these characters and the progress they represent, and about the hurdles we will overcome on our own journey through time, till their world is our world. It is a story that we come to understand by participating in the telling of it.

If this sounds odd, consider the thought that we tell a tale about ourselves by our actions. Ask yourself, what is the difference between your real life, and a story about your real life? In both there are characters and scenes, even changes in characters over time and in reaction to events. However, there is no plot. Things "happen," of course--you visit your family, your young nephew has grown, you get a job offer, you argue with your brother, perhaps make up and have a drink to celebrate--but events have no significance until they are strung together to suggest themes. There is no story until your older brother, Robert, ties together the events of your past and recent life, assails you for your past stoicism and says to you (or to someone we all know), "Jean-Luc, you are human after all." The crystallizing thought that connects perception and conception--that bridges the questions, what has happened? what does it mean?--contains the story. In an interactive story (and a good linear one), you, the reader, provide this insight.

So, let's you and I tell each other about the crew of the Enterprise and the world in which they live. Listen. Explore. Notice. Evaluate. What is present? What is missing? Mr. Roddenberry, Mr. Okuda, Mr. Berman, and Mr. Sternbach (in their Introductions) note that the Enterprise is a real vehicle--for story-telling. To visit the ship, or even to serve aboard her, you need only to participate in the story-telling taking place around you. While we are accustomed to visiting the Star Trek universe each week (at least twice a week these days), it is our hope to bring a little of the twenty-fourth century home to you; to create a space you can live in from time to time, and to help us remind each other of a bright star in the heavens by which to steer.
The Interactive Technical Manual was for me “a space you can live in from time to time.” It was an immersive and engaging way to escape the end of the 20th century and bask in the wonder and excitement that the 24th century might offer.
As I documented last year, I made a substantial investment in my computer workstation for doing local text and image generative AI work by upgrading to 128GB of DDR4 RAM and swapping out an RTX 3070 8GB video card for NVIDIA’s flagship workstation card, the RTX A6000 48GB video card.
After I used that setup to help me with editing the 66,000-word Yet Another Science Fiction Textbook (YASFT) OER, I decided to sell the A6000 to recoup that money (I sold it for more than I originally paid for it!) and purchase a more modest RTX 4060 Ti 16GB video card. It was challenging for me to justify the cost of the A6000 when I could still work, albeit more slowly, with lesser hardware.
Then, I saw Microcenter begin selling refurbished RTX 3090 24GB Founders Edition video cards. While these cards are three years old and used, they sell for 1/5 the price of an A6000 and have nearly identical specifications to the A6000 except for having only half the VRAM. I thought it would be slightly better than plodding along with the 4060 Ti, so I decided to list that card on eBay and apply the money from its sale to the price of a 3090.
As you can see above, the 3090 is a massive video card–occupying three slots as opposed to the two slots occupied by the 3070, A6000, and 4060 Ti shown below.
The next hardware investment that I plan to make is meant to increase the bandwidth of my system memory. The thing about generative AI–particularly text generative AI–is that it needs lots of memory and, just as importantly, lots of memory bandwidth. I currently have dual-channel DDR4-3200 memory (51.2 GB/s of bandwidth). If I upgrade to a dual-channel DDR5 system, the bandwidth will increase to a theoretical maximum of 102.4 GB/s. Another option is to go with a server/workstation Xeon or Threadripper Pro platform that supports 8-channel DDR4 memory, which would yield a bandwidth of 204.8 GB/s. Each doubling of bandwidth roughly translates to doubling how many tokens (the constituent word/letter/punctuation components that generative AI systems piece together to create sentences, paragraphs, etc.) per second a text generative AI can output using CPU + GPU inference (e.g., llama.cpp). If I keep watching for sales, I can piece together a DDR5 system with new hardware, but if I want to go with an eight-channel memory system, I will have to purchase the hardware used on eBay. I’m able to get work done now, so I will keep weighing my options and keep an eye out for a good deal.
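As a rough sketch of where those bandwidth numbers come from, the Python snippet below multiplies channel count by transfer rate and the 8 bytes moved per transfer on each 64-bit channel. The DDR5-6400 speed is my assumption for the dual-channel DDR5 option, since only the resulting 102.4 GB/s figure is given above.

```python
# Theoretical peak memory bandwidth: channels x transfers/second x 8 bytes per 64-bit channel.
# The DDR5-6400 speed is an assumed value for the dual-channel DDR5 option; the other
# configurations match the ones discussed above.

def bandwidth_gb_s(channels: int, mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    """Return theoretical peak bandwidth in GB/s (decimal gigabytes)."""
    return channels * mt_per_s * 1_000_000 * bytes_per_transfer / 1e9

configs = {
    "dual-channel DDR4-3200": (2, 3200),
    "dual-channel DDR5-6400 (assumed speed)": (2, 6400),
    "8-channel DDR4-3200 (Xeon/Threadripper Pro)": (8, 3200),
}

for name, (channels, speed) in configs.items():
    print(f"{name}: {bandwidth_gb_s(channels, speed):.1f} GB/s")  # 51.2, 102.4, 204.8
```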
Stable Diffusion is an image-generating AI model that can be used with different front-end software. I used Automatic1111’s stable-diffusion-webui to instruct and configure the model to create images. In its most basic operation, I type what I want to see in the output image into the positive prompt box, type what I don’t want to see into the negative prompt box, and click “Generate.” Based on the prompts and default parameters, an output image appears on the right that may or may not align with what I had in mind.
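For readers who would rather script this step than click through the web interface, here is a minimal sketch of the same text-to-image operation using stable-diffusion-webui’s optional HTTP API (available when the webui is launched with the --api flag). The prompts and parameter values below are illustrative placeholders, not the exact settings used for the images in this post.

```python
# Minimal sketch: send a txt2img request to stable-diffusion-webui's API
# (webui launched with --api). Prompts and parameters are placeholders.
import base64
import requests

payload = {
    "prompt": "illustration of a woman next to a textbook, highly detailed",  # positive prompt
    "negative_prompt": "blurry, extra fingers, watermark",                    # negative prompt
    "steps": 25,
    "width": 512,
    "height": 512,
    "cfg_scale": 7,
}

response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
response.raise_for_status()

# The API returns the generated images as base64-encoded strings.
for i, image_b64 in enumerate(response.json()["images"]):
    with open(f"output_{i}.png", "wb") as f:
        f.write(base64.b64decode(image_b64))
```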
For the positive prompt, I wrote:
illustration of a 40yo woman smiling slightly with a nervous expression and showing her teeth with strawberry-blonde hair and bangs, highly detailed, next to a textbook titled introduction to VLSI systems with microprocessor circuits on the cover, neutral background, <lora:age_slider_v6:1>
I began by focusing on the type of image (an illustration), then describing its subject (the woman), other details (the textbook), and the background (neutral). The last part in angle brackets is a LoRA, or low-rank adaptation. It further tweaks the model that I’m using, which in this case is Dreamshaper 5. This particular LoRA is an age slider, which works by inputting a number that corresponds with the physical appearance of the subject. A value of “1” presents as roughly middle-aged; a higher number looks older and a lower/negative number looks younger.
ControlNet is an extension to Automatic1111’s stable-diffusion-webui that helps guide the generative AI model to produce an output image more closely aligned with what the user had in mind. It employs different models focused on depth, shape, body poses, etc. to shape the output image’s composition.
For the Lynn Conway illustration, I used three different ControlNet units: depth (detecting what is closer and what is further away in an image), canny (one kind of edge detection for fine details), and lineart (another kind of edge detection for broader strokes). Giving each of these different levels of importance (control weight) and telling stable-diffusion-webui when to begin using a ControlNet (starting control step) and when to stop using a ControlNet (ending control step) during each image creation changes how the final image will look.
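To make those settings more concrete, here is a minimal sketch of how three such units could be assembled programmatically. The parameter names follow my understanding of the sd-webui-controlnet extension’s API, and the weights, control steps, model names, and the conway_composite.png filename are illustrative placeholders rather than the exact values I used.

```python
# Sketch: three ControlNet units (depth, canny, lineart), each with a control weight
# and starting/ending control steps. Values and model names are placeholders.
import base64

with open("conway_composite.png", "rb") as f:  # placeholder name for the composite guide image
    guide_image = base64.b64encode(f.read()).decode()

def controlnet_unit(module, model, weight, start, end):
    """One ControlNet unit: preprocessor, model, control weight, and start/end control steps."""
    return {
        "input_image": guide_image,
        "module": module,         # preprocessor: depth, canny, or lineart
        "model": model,           # ControlNet model to apply
        "weight": weight,         # control weight (importance of this unit)
        "guidance_start": start,  # starting control step (fraction of sampling steps)
        "guidance_end": end,      # ending control step
    }

units = [
    controlnet_unit("depth", "control_v11f1p_sd15_depth", 0.8, 0.0, 0.9),
    controlnet_unit("canny", "control_v11p_sd15_canny", 0.6, 0.0, 0.7),
    controlnet_unit("lineart", "control_v11p_sd15_lineart", 0.5, 0.1, 0.8),
]

# These units attach to a txt2img payload (like the earlier sketch) as:
# payload["alwayson_scripts"] = {"controlnet": {"args": units}}
```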
Typically, each ControlNet unit uses an image as input for its guidance on the generative AI model. I used the GNU Image Manipulation Program (GIMP) to create a composite image with a photo of Lynn Conway on the right and a photo of her co-authored textbook on the left (see the screenshot at the top of this post). Thankfully, Charles Rogers added his photo of Conway to Wikipedia under a CC BY-SA 2.5 license, which gives others the right to remix the photo with credit to the original author, which I’ve done. Because the photo of Conway cropped her right arm, I rebuilt it using the clone tool in GIMP.
I fed the image that I made into the three ControlNet units, and through trial and error with each unit’s settings, A1111’s stable-diffusion-webui output an image that I was happy with and used in yesterday’s post. I used a similar workflow to create the Jef Raskin illustration for this post, too.
Illustration of Lynn Conway with a copy of Mead and Conway’s Introduction to VLSI Systems. Conway’s likeness is based on Charles Rogers’s photo on Wikipedia, which he released under a CC BY-SA 2.5 license. Image created with Stable Diffusion.
This past weekend, The New York Times ran an obituary for Lynn Conway, half of the namesake for the Mead-Conway VLSI Revolution and co-author of the groundbreaking textbook Introduction to VLSI Systems (1980). She died at the age of 86.
What is so cool about the Mead-Conway VLSI chip design revolution is not only that it was the paradigm shift that made possible the next step in microprocessor design and fabrication–enabling electrical engineering and computer science students to do work that was previously the domain of physicists and the high-tech industry–but also that it was an under-the-radar pedagogical hack. Conway writes in the October 2018 issue of Computer:
"With all the pieces in place, an announcement was made on ARPANET to electrical engineering and computer science departments at major research universities about what became known as "MPC79." On the surface, while appearing to be official and institutionally based, it was done in the spirit of a classic "MIT hack"--a covert but visible technical stunt that stuns the pubic, who can't figure out how it was done or what did it. (I'd been an undergrad at MIT in the 1950s).
The bait was the promise of chip fabrication for all student projects. Faculty members at 12 research universities signed on to offer Mead-Conway VLSI design courses. This was bootleg, unofficial, and off the books, underscoring the principle that "it's easier to beg forgiveness than to get permission" (p. 69).
While this was a huge contribution to the development of the computer industry leading into the 1980s and beyond, it was only one of her many accomplishments: innovating an out-of-order instruction scheduling system for IBM, only to be fired in 1968 when she began transitioning to become a woman; starting her career over and eventually making her way to Xerox PARC; later joining the University of Michigan as a professor of electrical engineering and computer science and serving as associate dean of engineering; and becoming a transgender advocate later in life. She was recognized with many awards and honorary doctorates for her contributions to the field as an engineer and educator.