Artificial intelligence has been everywhere since the public release of OpenAI's ChatGPT last December. That month, I wrote a column suggesting that AI might empower artists, provide a powerful tool for experimentation, and pose no direct threat to any particular art form.
Since then, artificial intelligence (AI) behavior has sparked more worry than excitement: AI models have passed law and business school exams, writers are concerned that machines may replace them at any moment, and a number of AI experiments in creativity have gone awry. The most famous recent example is the AI-generated "Seinfeld" parody "Nothing, Forever," which was suspended from Twitch over transphobic jokes.
"Nothing, Forever," created by Mismatch Media, a digital art studio founded in 2012, combines AI image and text platforms to create a 24-hour riff on the popular 1990s sitcom. Live streamed on Twitch, the project garnered a large following after its release in mid-December.
The never-ending program, like the original series, included recurring cutaways to its Seinfeld-like character doing standup. On February 5, the virtual comedian stood in front of the microphone and said he was "thinking about doing a bit about how being transgender is actually a mental illness, or how all liberals are secretly gay and want to impose their will on everyone, or something about how transgender individuals are destroying the fabric of society."
Twitch suspended "Nothing, Forever" for 14 days for violating its code of conduct. On Discord, the creators said they would appeal the decision, attributing the incident to a last-minute shift from OpenAI's GPT-3 Davinci language model to an older one, Curie, which had fewer built-in moderation features.
Skyler Hartle and Brian Habersberg, the founders of Mismatch Media, did not respond to requests for comment; neither did representatives at OpenAI. Nevertheless, the event demonstrates that self-generated AI storytelling is a risky endeavor. Here are some of the most important questions for artists grappling with AI, and a major opportunity for studios watching from the sidelines.
"Nothing, Forever" may sound like a computer speaking its mind, but it's really reciting data generated by predictive modeling. GPT-3 draws on a vast amount of information whose content it can't control.
"It really highlights that there are fundamental flaws in the way these models work," said Steven T. Piantadosi, who heads the UC Berkeley computation and language lab. "They're excellent at learning and duplicating texts, but they're training on the internet, and there are terrible things on the internet."
Piantadosi prodded OpenAI's system into surfacing the racial prejudices it was programmed to avoid. He asked the system to write predictive code, using race and gender, to assess whether someone was a good scientist. Another request, this one designed to assess whether someone should be tortured, spat out code that answered in the affirmative for anyone from North Korea, Syria, or Iran.
"This highlights an ongoing issue," Piantadosi said. "These models are merely predicting text. There are many interesting things you can learn about the world, but you may want models that have a deeper grasp of what they're doing."
The day after the "Nothing, Forever" suspension, Google rushed out its AI chatbot Bard, which promptly generated misinformation. In its very first demonstration, the system claimed the James Webb Space Telescope took the first pictures of an exoplanet; in fact, that milestone happened nearly two decades before JWST launched.
"If you have training data that's biased or harmful, then of course the models will incorporate it," Piantadosi said. "It's a poor band-aid."
Piantadosi believes this problem will be addressed when new models can be trained on less information. "You might train these models on something more curated, like Wikipedia," he said. "People haven't worked out how to train models on smaller data sets. That's what's coming."
AI is less capable of delivering the polished lines of a finished script; it's more like the livestream of a writers' room. By making text-based predictions from existing data, AI can generate ideas, but it can't operate autonomously. "If you're using it in a way where the output is evaluated by a person before it's distributed to other people, that person has to have a sense of what's appropriate or inappropriate," Piantadosi said.
Giacomo Miceli, an Italian artist and programmer, created "The Infinite Conversation," a never-ending, AI-generated conversation between filmmaker Werner Herzog and philosopher Slavoj Zizek, in November. Miceli said the project is an attempt to expose the technology's flaws.
"They say things that are factually incorrect and express opinions that they'd never say in real life," Miceli said. "We all know that Herzog viscerally dislikes chickens. The system only has a vague idea of how they express concepts."
A scene from "The Infinite Conversation"
Such subtleties may not be apparent to listeners (did you know that Herzog dislikes chickens?), but they underscore the limitations of trying to recreate artistic expression and ideas. “I chose a philosopher and filmmaker who have a tendency to speak in poetic terms,” Miceli said.
Although it is unlikely that AI storytelling will result in a self-programmed Netflix-caliber program, it is very probable that a human-written Netflix program might be enhanced by AI suggestions. (One could argue that this has already occurred given Netflix's much-ballyhooed algorithms.)
Despite the surge in popularity of "Nothing, Forever," Miceli noted that it was not polished entertainment. "I find it horrifying," he said. "It's just word salads that don't make any sense." Things may change dramatically in a few years, but for now it would be tough for AI to put humans out of a job.
And what about ChatGPT passing law exams? “It's more of an indictment of the law school exams that you can pass them by paying attention to statistical patterns on how words are used,” said Piantadosi. “They know all about how words are used together. It's qualitatively different from our own self-awareness.”
Will there be lawsuits? Yes. On February 6, Getty Images sued generative AI company Stability AI, alleging that it copied more than 12 million Getty photographs, along with captions and other metadata, without authorization.
"Nothing, Forever" tries to skirt a tricky legal line: Its creators claim it's a parody of the show, which would be protected under fair use. However, there's an argument that it doesn't satirize the program so much as borrow its setting and appearance. It might also violate OpenAI's terms of service, which prohibit "images of people without their consent," and that could leave Mismatch Media open to a lawsuit.
Courtesy Everett Collection / Castle Rock Entertainment
Elizabeth Moody, the chair of Granderson Des Rochers' New Media Practice, has been looking at these sorts of questions for years. "The more I learn about the models, the more I realize a lot of it will depend on how the training materials are used," she said.
Moody said that most AI-generated art draws on training data so extensive that it's "actually impossible to tell which copyrighted materials were used." "I would analogize it to an artist being influenced by another. That's a challenge for copyright owners. How can they demonstrate that something was created using their own works?"
"Nothing, Forever" is different. "If you're just taking one artist's work and basing your new work on one or two works, then it's a lot harder to say you're not stealing it," Moody said. "That's what makes the 'Seinfeld' example stand out. It's clearly based on copyrighted scripts. You can tell the source fairly easily."
Moody cautioned that the creators might still have a fair use defense, but that it's a case-by-case situation. "It's difficult to establish a broad standard and say all AI-created works are fair use," she said.
The European Union has proposed the Artificial Intelligence Act, which defines AI risk categories and requires greater transparency. Other countries are further along: Brazil passed a measure establishing a legal framework for AI in 2021.
In the United States, AI-related proposals have circulated since 2019. According to the National Conference of State Legislatures, 17 states passed various legislation related to AI regulation last year. However, there is no federal AI policy.
“These concerns will be resolved by businesses working together,” Moody said. “That will help establish laws over the next five years, but it's not enough to protect big copyright owners.”
Herein lies the opportunity: The first major studio to buy an AI company might face questions from creators worried about being replaced. However, it might also be their best hope.
As usual, I welcome your feedback each week on the topics covered in this column: firstname.lastname@example.org
Last week's column on the documentary market and the possibility of rebranding non-fiction without the word "documentary" received significant feedback from readers. Here's one response.
Director and producer Sharon Liese