
Brian Porter and Edouard Machery’s “AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably,” published in the open-access journal Scientific Reports, is a fascinating study of how non-expert humans rate human-written and Generative AI-written poetry. Interestingly, the participants rated the AI-generated poems as more “human” than the poems actually written by humans, a set that included works by Geoffrey Chaucer, William Shakespeare, Samuel Butler, Lord Byron, Walt Whitman, Emily Dickinson, T.S. Eliot, Allen Ginsberg, Sylvia Plath, and Dorothea Lasky. While this quantitative approach offers some interesting talking points about the products of Generative AI, it may say more about the participants than about the computer-generated poems. What might the results look like with experts, literature graduate students, or undergraduates who had taken a poetry class? And what might be revealed by analyzing the AI-penned poems against the work of the respective poets, considering that the prompt was very generic?





