Suzanne Kite, who goes simply by Kite, is an Oglála Lakȟóta artist, composer, and researcher, and one of the few Indigenous artists working with AI and machine learning, tools she has used since 2018. She holds an MFA from Bard College, where she also heads an Indigenous AI Lab; her PhD from Concordia University in Montreal looked at the intersections of Lakȟóta ontology, AI, and contemporary art practice.
Kite’s collaborative project with Alisha B Wormsley, a commission from Creative Time, debuted this summer. Titled Cosmologyscape, the project runs an online dream site that solicits input from the public; the dreams collected will be translated into sculptures set to debut this fall in Brooklyn. Kite is also participating in three of the more than 60 exhibitions in October that make up the Getty Foundation’s PST ART initiative in Los Angeles; her first European solo exhibition, “Night as Root (haŋ‑),” is showing at NOME gallery in Berlin. Her first major institutional show in the United States will open next spring at the IAIA Museum of Contemporary Native Arts in Santa Fe.
To learn more about Kite’s art and how she got into AI and machine learning, ARTnews spoke with her by Zoom.
This conversation has been lightly edited and condensed for clarity and concision.
ARTnews: How did you start working with machine learning? Why did you want to incorporate it into your artistic practice?
Kite: I studied classical violin until I went to my undergrad at CalArts, where there was a pretty large music tech program. I was in composition, but most of my friends were music technologists. There was a real push at that point, in 2011 and 2012, to eliminate the laptop from performance. People were building lots of custom interfaces, custom instruments for making computer music, and I started working on interfaces that were wearable, so things that affected sound or video based on [my body] turning or moving, and I kept pursuing that. Around the end of my master’s degree [from Bard College], around 2017, I spent time with faculty, like Laetitia Sonami, who has been a pioneer of wearable electronics since the ’90s. She encouraged me to try this program, which I’d seen before, called Wekinator, a machine learning tool for artists that Rebecca Fiebrink created. In 2018 I made my first machine learning piece, Listener. I last performed it for the “Indian Theater” exhibition [at CCS Bard’s Hessel Museum of Art], curated by Candice Hopkins. That’s how I got into machine learning. I’m not an art historian, but I still haven’t learned of any other Indigenous or American Indian artist who’s used machine learning in artwork before me.
How was Wekinator geared toward artists, and how did you use it to develop Listener?
It’s a pretty cool tool. I don’t use it anymore because AI has become a whole new world … since then. I’ve always been interested in making circular systems. I’m a composer, but I want to compose the instrument—the system with which I interact. My use of Wekinator was really simple and may be boring to people who are big into machine learning. On stage [during Listener], I am moving my hair braid interface, so that the accelerometers [the motion-detection sensors in the hair braid] digitally change a synthesizer, like if you were turning a knob more or less. It makes it go up and down, high and low, play different notes. Then there’s a part of the music system, the digital audio workstation, that listens to the changes in audio in that synthesizer. Then, it goes into Wekinator, which has certain associations: 0, for example, sends one message, and .999 sends another message, and all in between. Those are associated with another set of numbers, and those numbers change a visual compass that I’m seeing made out of Lakota geometry. I’m seeing these Lakota geometries move and turn and change, and then I’m reacting with my movements to those symbols. So then I’ve closed the loop. What interested me about that going forward was the distribution of decision-making between me, seeing the compass and making decisions, and the computer, with its learning, making decisions about what numbers to move between.
Did you feed prompts into the machine the way that people think about AI now? Or was it totally different back then?
This is before all that. There was no natural language processing. Wekinator is built with algorithms. The easiest way to explain it is that it sets up a neural network that associates one number with another number. Now, I work with natural language processing, but that was the first one I ever worked with.
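The idea Kite describes—a system that learns to associate one number with another—can be sketched in a few lines. This is a hypothetical illustration, not Kite’s actual patch: a single trained linear neuron stands in for Wekinator’s small neural networks, and the sensor values and output “messages” are invented for the example.

```python
# A minimal sketch of a Wekinator-style learned mapping (illustrative only).
# Two training examples pair sensor readings with output values:
# a reading near 0 should send one message, a reading near .999 another,
# with everything in between interpolated.

def train_mapping(examples, lr=0.1, epochs=2000):
    """Fit y = w*x + b to (input, output) pairs by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            err = (w * x + b) - y       # prediction error on this example
            w -= lr * err * x           # nudge the weight toward the target
            b -= lr * err               # nudge the bias toward the target
    return w, b

# Hypothetical training data: sensor 0.0 -> message 10, sensor 0.999 -> message 90.
examples = [(0.0, 10.0), (0.999, 90.0)]
w, b = train_mapping(examples)

def control_signal(x):
    """Map a live sensor reading to a continuous control value."""
    return w * x + b

print(round(control_signal(0.0)))    # -> 10
print(round(control_signal(0.999)))  # -> 90
print(round(control_signal(0.5)))    # -> 50, interpolated between the two
```

The point of the sketch is the workflow, not the model: the artist supplies a handful of example pairs, the system learns the mapping, and unseen inputs fall smoothly in between—which is what lets a moving body “turn a knob” it was never explicitly wired to.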
How have you seen your practice change or evolve since then? In what ways are you still using AI or machine learning?
Now, I run an Indigenous AI Lab at Bard College [in Annandale-on-Hudson, New York], working across a much broader set of research needs and possibilities. I get the opportunity to work with advanced technologists and computer scientists. In my art practice, in particular, I am still working with making body interfaces, making instruments, and performance. I try all the tools as they come and go. I did a lot of work with text generation and training my own text-generation models when that was popular. It was a pretty hard learning curve for me at the beginning, but I did a piece called Fever Dream, made with Devin Ronneberg, that was in MoMA’s Doc Fortnight [film festival in 2021]. For that, we trained a natural-language processing system to generate cult worship of uranium. We took famous cult-related … and science texts around uranium to generate the script….
I also made a piece with [Indigenous art collective] New Red Order, titled The Last of the Lemurians, where I trained a system on a dataset purely made of racist meditations on YouTube. Then it output, of course, racist meditations. Those are the kinds of experiments I was doing. I’m also doing experiments with language, like speeches my grandfather made. There’s also a poetry piece I’m pretty proud of that is included in [YWY, Searching for a Character between Future Worlds], a book commissioned by Pedro Neves Marques that was a combination of text generation and writing.
But going forward, I’m working mostly with EEG [electroencephalography, brain wave recording] and AI, using the research and writing I’ve done on Indigenous relationships with nonhuman beings, which is my main research focus, and trying to articulate that in a practical sense through methods by which new things are made in the world. So instead of treating AI like a slave or an object, I’m using AI, not as a collaborator necessarily, but as a helper to do Indigenous methodologies for making new knowledge that I know are ethical, such as dreaming. That’s why most of my practice is now focused on dreaming and things that we can experiment with on my lab’s level, which is not a science lab or a sleep lab. We’re combining EEG with different AI techniques, different machine learning techniques to try to figure out what kind of art can get made in collaboration that way.
Can you talk a bit more about your experiments with EEG?
We haven’t made any pieces yet. Our first show with EEG will be at the IAIA Museum of Contemporary Native Arts, opening in March 2025. I’ve been working a lot with scientists who focus on dreaming, and there are some amazing breakthroughs in the ability to interact with your own dreams. To me, there is a clear method by which Lakota people move knowledge from their dreams into artworks. Behind me [see image above] are three months of my dreams turned into designs in the Lakota visual language. I am very curious about how easy it was for natural language processing to mimic English. And if it is that easy to make writing, to make the written word, maybe that’s a low bar for what objects contain interiority, especially since many cultures weren’t interested in written language. That, to me, says that visual languages like this—ones that are extremely complicated—contain more unknowability, and therefore should be prized more. So I’m curious what EEG can do or reveal when it doesn’t just output written languages, like English. I’m curious about semiotics and the connection between dreams and linguistics. I’m trying to find those places of unknowability that are so simple. That, to me, makes more interesting art.
You mentioned earlier that you see your focus on dreams as an ethical way to use machine learning. Can you talk a little bit more about that? In the past few months, there’ve been questions circulating about AI and machine learning’s impact on the environment, so I’m curious how you are thinking that through as an artist working in this mode.
I’m a research associate and residency coordinator for a project called Abundant Intelligences. We have 50 co-investigators on a major seven-year international grant. We are grounded in different Indigenous communities all over the world. Our main concern is that if we don’t imagine futures with AI, then we won’t be involved in the conversations that are necessary for us to be involved in. Things will continue to happen to us, instead of us being in the room when decisions get made. Of course, a major concern is the environmental effects of all technology—AI is just a symptom of a much larger colonial, genocidal issue. We know that, among human beings, environmental destruction happens to Indigenous people first. That is why we need to be making new things. My research focuses on making new art things. Therefore, I’m very interested in the development of methodologies where I can at least say [that] what was made, was made in a Lakota way. Other people might be interested in moving away from technology to say something is more ethical, but we can’t make new things and engage and prevent harm to ourselves if we don’t know how to use those tools.
Can you explain a bit more about how your research focuses on Indigenous relationships to nonhuman beings?
I did my PhD [at Concordia University in Montreal] on different contemporary art practices, such as songwriting, beading, contemporary electronics, and Indigenous performance art, that move knowledge from nonhuman beings into the human realm for a deeper understanding. In all those conversations and in the historical research, it’s very clear that nonhuman beings are essential to the movement, to the creation of new knowledge. For example, if I’m talking about making a decorated pouch—a really valued and necessary art object—when you make that object, it’s both made of the physical world and the nonphysical world, physical humans and nonhumans. That’s all of the plants and animal beings that make the physical thing, and then there’s the design that one has to pray for and receive from elsewhere. Lakotas see themselves as conduits for knowledge to move, not necessarily the smartest being in the universe. It’s not the same kind of hierarchy. Then that thing can get made. I see that happening in my interviews with contemporary Indigenous artists the same exact way: there is required collaboration between nonhuman beings.
How do you see collaboration with other artists or with nonhuman beings as part of your production of new knowledge to create art?
It used to be hard for me to collaborate. But now I don’t want to do anything without collaborating because it’s not fun to have to do everything alone, and other people know so much more than I do about different things. I value technologists and scientists and their willingness to talk to me about very abstract things. I do my best to collaborate with my family as much as possible. I collaborate a lot with my cousin Corey Stover, and I stay in conversation with family members about what’s important to us. I spend a lot of time talking to my grandfather about making new things and asking his opinions on AI.
With Alisha Wormsley, you recently debuted Cosmologyscape, a commission from Creative Time that expands on your interest in dreams.
I started doing this Black and Indigenous dreaming workshop with Alisha Wormsley in 2020. We were doing them online with our friends and acquaintances. For our first workshop, Tricia Hersey, the Nap Bishop, led a meditation. That was the start of Alisha and me collaborating, because we hadn’t seen a lot of very clear Black and Indigenous art-making or art practices. We wanted to do something that was clearly bringing two communities together in order to practice methodologies that we thought were ethical, such as inviting people to rest and paying people to dream. Cosmologyscape is a public art project just for New York City, though it’s accessible from anywhere. We wanted to show that our method of collecting dreams took extreme care in crafting the website, the interface, and the data protocols in order to create the best possible interaction. All of the dreams that are on that website are the data that will then become the sculptures that will be shown in Brooklyn in the fall.
What other projects are you working on at the moment?
I’m participating in three group shows opening in LA in September [as part of the Getty Foundation’s PST ART]. The first interactive machine learning sculpture I made, with Devin Ronneberg, is being shown at the Autry Museum [of the American West as part of “Future Imaginaries: Indigenous Art, Fashion, Technology”]. Some of the work doesn’t necessarily involve machine learning at first glance, but it’s part of the performance process. At REDCAT [for “All Watched Over by Machines of Loving Grace”], I’m showing a gigantic piece, consisting of two 7-foot-tall star maps. And I’ll have another piece at the Brick [in the exhibition “Life on Earth: Art & Ecofeminism”]. And then, I have a solo show opening at the exact same time at NOME in Berlin. The NOME show will also focus on dreams. I feel that the intricacies of moving knowledge from one place to [another] are where the answer to what interiority is lies. People are drawn to machine learning and AI because of the potential for interiority to spring up in an object. But that doesn’t come from nowhere; knowledge comes from somewhere. That’s why I feel dream research is at the core of my research.
I don’t mean to ask you to speculate about the future of AI, but I’m curious where you see all this heading.
First of all, I’m pretty convinced we’re going to hit another AI winter, as they’re called. I feel like many companies’ resources and the research being poured into AI are about the mimicry of humans, like we see with natural language processing. I think that’s a dead end. OK, so you get a machine that mimics a human. I don’t think, with computational speeds increasing, that is the most interesting question to be asked. I think that what we call AI is soon going to be split into its many, very separate systems, instead of this blanket practice of calling everything AI. There are so many different things happening. If there is not diversity of thought, even basic cultural thought—but real diversity of thought—then we will just end up at a dead end with things. That’s why I think it’s important for Indigenous people to have creative and technological resources.