Despite being dead, Salvador Dalí makes for a punctual employee.
For the past five years, a digital doppelgänger of the Surrealist has been on call at his eponymous museum in St. Petersburg, Florida. His only job requirement is to be his most affable self. When visitors pick up the receiver of his iconic Lobster Telephone, he chats with them about any topic of their choosing (within the bounds of propriety). He reads the day’s Tampa Bay Times. A tagline of the program is “Learn more about Salvador Dalí’s life from the person who knew him best: the artist himself.” This Dalí, however, was assembled by training generative artificial intelligence on the artist’s interviews, correspondence, archival footage, and documentaries. From his letters, the museum plucked the artist’s implicit consent: “If someday I may die, though it is unlikely, I hope the people in the cafes will say, ‘Dalí has died, but not entirely.’”
This is the current, cutting-edge museum experience and, given the proliferation of AI technology, its plausible future too. Like the Dalí Museum, the Musée d’Orsay in Paris launched a similar venture in 2023, giving audiences a virtual Vincent van Gogh. Both proved popular.
With AI still nascent and art production one of its most concrete applications, the technology and the art world have made uneasy bedfellows amid scant legislation and regulation. There have been controversies over authorship and copyright, and a string of lawsuits landing digital creators and tech companies in court. But the Ask Dalí and Hello Vincent apps raise ethical and existential quandaries that go beyond the merely legal, which itself remains far from resolved.
Simply put, should museums perform these digital resurrections? Does being the custodian of a collection permit stewardship of a soul?
“We believe Dalí himself would be playing with these technologies if they [had been] available in his lifetime. It is an absolute service to his spirit to be using these things,” Kathy Greif, the Dalí Museum’s chief operating officer and deputy director, told ARTnews. “We’re here to not only preserve but to prolong his legacy.”
Dalí, the human, was born in 1904 in Figueres, Spain, and died in 1989. Salvador Dalí the undying, however, was born in San Francisco, in 2019, in the laboratory of Goodby, Silverstein & Partners (GS&P), an advertising firm of the radical sort. GS&P’s About page professes the firm’s mission to be creating “mass intimacy.” A partnership with the Smithsonian, for example, entailed feeding one’s own face and voice into a program powered by AI. From the ether emerges you, as you may look in the year 2050. This older “you,” breaking the rules of Hollywood time travel, offers you advice.
“Can we bring Dalí back to life? That was the challenge given to us,” Martin Ludvigsen, GS&P’s UX director, told ARTnews. The Dalí Museum contracted GS&P, according to Ludvigsen, to develop two projects: the aforementioned Ask Dalí app and a video installation titled Dalí Lives. For the latter, Ludvigsen and his team intended to capture the artist’s physical quirks to help humanize the legend for young audiences.
“Dalí might be famous, but he wasn’t a prolific painter, at least compared to someone like van Gogh. There’re surprisingly few places to encounter his paintings in real life, and kids aren’t seeking out his films,” said Greif, by way of explanation.
“We wanted it to look like him, move like him, talk like him,” Ludvigsen added. “CGI is always getting better, but there’s always that impression of the uncanny valley.”
To create Dalí Lives, GS&P used a machine learning model known as a generative adversarial network (or GAN), in which two networks compete: one generates new data modeled on the content it is given (images, audio, video), while the other tries to tell the generated output from the real thing. “Best” here means most authentic, and most authentic means indistinguishable from whatever the system was trained on, whether a human voice, a face, or something else entirely. The GAN model, while powerful, introduces its own ethical complication: it is the same technology that produces deepfakes, synthetic media that mimics a person’s likeness, often for the purpose of misinformation or scamming, or sometimes just parody.
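For readers curious about the mechanics, the sketch below is a minimal, purely illustrative GAN training loop written in PyTorch. It has nothing to do with GS&P’s actual (proprietary) pipeline; the toy data, network sizes, and training settings are assumptions chosen only to show how the two networks push against each other.

```python
# Illustrative sketch only: a minimal GAN in PyTorch on toy data.
# All sizes and hyperparameters are assumptions for demonstration.
import torch
from torch import nn

latent_dim, data_dim, batch = 16, 64, 32  # assumed, not from any real system

# Generator: turns random noise into a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

criterion = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(batch, data_dim) + 3.0   # stand-in for "real" training data
    fake = G(torch.randn(batch, latent_dim))    # the generator's forgery

    # Discriminator learns to tell real from fake.
    opt_D.zero_grad()
    d_loss = (criterion(D(real), torch.ones(batch, 1))
              + criterion(D(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_D.step()

    # Generator learns to fool the discriminator:
    # "most authentic" means the discriminator labels the fake as real.
    opt_G.zero_grad()
    g_loss = criterion(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_G.step()
```

The two losses pull in opposite directions: the discriminator improves at spotting forgeries, which forces the generator to produce ever more convincing ones. That adversarial pressure is what makes the output so lifelike, and also what makes the same technique useful for deepfakes.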
“The world was different when we started this project,” Ludvigsen said. “When you’re talking about ethical considerations, 2019 vs. 2024 in technology might as well be centuries apart. Deepfakes were not controversial in the same way they are today.”
Five years on from 2019, deepfake technology has become highly sophisticated and accessible. The global engineering firm Arup was bilked out of $25 million this past February when scammers used deepfakes to impersonate several executives on a video call. Meanwhile, last month, Elon Musk shared a fake Kamala Harris campaign ad using computer-generated audio of her voice; it drew more than 150 million views on X. There is currently no comprehensive regulation of the technology in the United States. A 2020 law supports research on developing standards for GANs, and several recent bills have been introduced in Congress with the aim of establishing criminal penalties for the technology’s unauthorized use.
GS&P, for the record, agrees with those calling for regulation, and not just of GANs. The firm says it tries both to obtain consent and to establish that no ill intent is at play, and argues that both should be mandatory before anyone considers reviving a dead artist. “We’re talking about bringing people back from the dead,” Ludvigsen said. “I think anyone with a shred of [consciousness] understands that.” But how do you obtain consent from a dead artist? And why bother, when few estates have caught up to emerging technologies? Who judges intent? Surely not those who believe in the immutability of consciousness.
The internet, at least, doesn’t look kindly on misfires in this arena. In a 1998 interview with Guitar World, Prince was asked if he would ever consider playing alongside a dead artist, given advancements in digital editing. “That’s the most demonic thing imaginable,” Prince replied. “Everything is as it is, and it should be. If I was meant to jam with Duke Ellington, we would have lived in the same age.”
In 2018, two years after Prince’s death, TMZ reported that Justin Timberlake would play the Super Bowl halftime show alongside a hologram of the singer. (The game was to be held in Prince’s hometown of Minneapolis.) The rumor was swiftly denied by Prince’s former bandmate and ex-fiancée, Sheila E., but the plot thickened regardless: Page Six swore the hologram had been “100 percent ready” until fierce backlash on social media followed the TMZ report. Prince’s estate, which at the time was managed by a bank, said it gave Timberlake consent to use vocal elements from Prince’s recording of “I Would Die 4 U.” That audio ended up accompanying a monumental projection of Prince at the Super Bowl. The controversy, mired in conflicting reports and exacerbated by the murky stewardship of Prince’s estate, eventually died down. The sour taste lingered.
In a more recent example, Drake was hit with a cease-and-desist letter from the estate of Tupac Shakur after using AI-generated audio of the late rapper on his diss track “Taylor Made Freestyle.” The estate wrote that the unauthorized reproduction was a “flagrant violation of Tupac’s publicity and the estate’s legal rights.” Amid such controversies, the Partnership on AI, a nonprofit that works with Google, Amazon, and Creative Commons in efforts to legitimize AI, has promoted its guidelines on “Responsible Practices for Synthetic Media.” These include cultivating “AI literacy in order to distinguish between authentic and synthetic media,” emphasizing transparency and the need to obtain consent, and encouraging policymakers to introduce legislation that protects individual rights. There is already some precedent on the policy front: the ELVIS Act, passed by lawmakers in Tennessee, prohibits the use of AI to mimic an artist’s voice without explicit permission. Translating these guidelines to visual art would not be seamless, but it is possible.
Ludvigsen cited the Prince and Drake episodes as exactly the sort of situations GS&P sought to avoid during (and after) the creation of Dalí Lives and the later Ask Dalí. In the latter, visitors can ask Dalí about his art, his life, his death, even current events. The program was trained on voice samples drawn from archival interviews Dalí gave in English over his lifetime, as well as on translations of his writings, such as Diary of a Genius and The Secret Life of Salvador Dalí.
In a segment for NPR, a correspondent asked, “Why are the clocks in your paintings melting?”
“My dear questioner, think not of the clocks as merely melting. Picture them as a vast dream caressing consciousness,” the digital Dalí replied.
GS&P has it right: Dalí is an artist uniquely suited to this venture. He was fanciful and self-mythologizing, which helps in a case like this, since a weakness of learning machines is that they take the information fed to them at face value and act accordingly; his concerns were barely of this earth to begin with. The Musée d’Orsay, which debuted Hello Vincent in 2023, ran into trouble with Vincent van Gogh and his tragic biography. According to a report in the New York Times, so many visitors asked the artist why he killed himself that the museum had to tweak the program; now he steers the conversation in life-affirming directions. Whether there is anything unethical about summoning a spirit so desperate to find rest depends on what you believe about the afterlife.
Maybe it helps that the Dalí AI knows it’s AI and doesn’t seem concerned. “My spirit has always been floating around the cosmos,” he replies, when questioned about metaphysics.
“We want people to enjoy [the Dalí Lives and Ask Dalí programs], but it’s also our responsibility to make sure audiences know this is meant for entertainment. If they want to learn more, they should go to scholarly sources,” Dalí Museum COO Greif said.
Ludvigsen agreed, in his way. “The Dalí you meet is always optimistic, positive. We made him that way. I’m not sure meeting the real Dalí would have been as pleasant,” he said, laughing.