Over the weekend, I received two more pieces of my upcoming AI-focused workstation build to go along with the CPU and A4000 video cards. The first is a Gigabyte B650 Eagle AX ATX motherboard, which has four PCIe slots: one spaced for a 3-slot card like my 3090 and three spaced for single-slot cards like the A4000s. The second is a Silverstone Fara R1 V2 ATX mid-tower case, which was the least expensive steel case I could find with good ventilation and no glass window. My new Corsair DDR5 RAM won’t arrive until after Christmas, so the actual build will have to wait until then.
After receiving a new AMD Ryzen 7 7700 CPU earlier this week, today I received the three NVIDIA RTX A4000 16GB video cards pictured above, still in their antistatic bags, for my new AI workstation. Brand new, these cards run just over $1,000, but I got these refurbished ones from an eBay seller for just under $600 each. The three cards will work alongside my NVIDIA RTX 3090 Founders Edition 24GB card for a total of 72GB of VRAM, which will let me run large language models with low or no quantization at a much faster output rate than I currently manage by splitting a model between the 3090 and system RAM. The limited PCIe lanes on the Gigabyte motherboard I ordered shouldn’t be much of a bottleneck for inference work.
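Once everything is assembled, the software side should be straightforward: with device_map="auto", the Hugging Face accelerate library can shard a model’s layers across all four cards so the weights stay entirely in VRAM. Here is a rough sketch of what I have in mind (the model name below is just a placeholder, not a checkpoint I’ve committed to):

```python
# Rough sketch: shard one model across the 3090 + three A4000s, assuming
# the transformers and accelerate libraries are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "example-org/example-llm"  # placeholder model identifier

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision, i.e. no quantization
    device_map="auto",          # accelerate spreads layers across all visible GPUs
)

prompt = "Write a short poem about video cards."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")  # inputs start on the first GPU
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Layer-by-layer sharding like this only moves activations between cards at the split points, which is part of why limited PCIe bandwidth matters less for inference than it would for training.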
Brian Porter and Edouard Machery’s “AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably,” which appears in the open-access journal Scientific Reports, is a fascinating study of how non-expert humans rate human-written and Generative AI-made poetry. Interestingly, the study participants rated the Generative AI-made poems as more “human” than the poems actually written by humans, which included works by Geoffrey Chaucer, William Shakespeare, Samuel Butler, Lord Byron, Walt Whitman, Emily Dickinson, T.S. Eliot, Allen Ginsberg, Sylvia Plath, and Dorothea Lasky. While this quantitative approach provides some interesting talking points about the products of Generative AI, it seems to say more about the participants than about the computer-generated poems. What might the results look like with experts, literature graduate students, or undergraduates who had taken a class on poetry? And what might be revealed by analyzing the AI-penned poems in relation to the work of the respective poets, considering that the prompt was very generic?