Poetry News

Dennis Tang Discusses the Future of A.I. Poetry at Literary Hub

Originally Published: January 29, 2020

Tang walks readers through the possibilities for future verse composed by androids. He writes, "the robots will be much more lifelike—and because of that, even more unsettling." More: 

They won’t sound robotic, because they’ll sound just like us. They might look just like us, too. They might have psychoses, and trippy, surrealist dreams. And someday soon, they might even write some decent verse. [...]

This is an AI-generated attempt to write in the style of Emily Dickinson. It’s produced by an artificial intelligence language program called GPT-2, a project of the San Francisco-based research firm OpenAI. The bolded portions represent the prompt given to the program, while the rest is the program’s own; you can try it out for yourself at this link. OpenAI only recently released the full code for GPT-2, after initially fearing that it would help amplify spam and fake news; if its fake poetry is any indication, debate over the power of AI language models might just be getting started.
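The prompt-and-continuation setup described above can be sketched with a toy stand-in: a tiny bigram model that, like GPT-2, repeatedly predicts a likely next word and appends it to the prompt. (This is an illustrative simplification, not GPT-2 itself, which is a far larger transformer network trained on web-scale text; the corpus and function names here are hypothetical.)

```python
import random

# A tiny "training corpus" -- the opening of a well-known Dickinson poem.
corpus = "because i could not stop for death he kindly stopped for me".split()

# Record which word follows which (a bigram table).
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def complete(prompt_words, length=5, seed=0):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = list(prompt_words)
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(complete(["because"]))  # → "because i could not stop for"
```

The prompt plays the same role as the bolded text in the chapbook: everything after it is the model’s own continuation.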

Using GPT-2, a group of Silicon Valley workers has compiled, for their own amusement and ours, a collection of the AI’s attempts to complete famous works of poetry. The resulting chapbook, Transformer Poetry, published by Paper Gains Publishing in December, is a tongue-in-cheek collection of surprisingly good and comically nonsensical computer-generated verse. No one will confuse it with human poetry just yet—or at least, you’d hope not. But in other ways, it’s also strikingly lifelike: an uncanny look at just how good inorganic authors might become, and the consequences that might come with it.

Computer programs have become much more like us as of late, in large part because they are increasingly modeled after our own minds. The booming field of machine learning—whose products are familiar to anyone who has used their smartphone’s voice assistant or image recognition—has been driven by the concept of the neural network, in which individual nodes, similar to neurons, “learn” to build a complex web of associations through trial and error. Where traditional programs are given rules that determine their outputs, neural nets are instead given the desired outcomes, from which they learn, through millions upon billions of repeated trials, their own ways to achieve them.

Read on at Literary Hub.