Was this image created by a human or a machine?
This is the question that has preoccupied the mainstream media, and even art publications, about art and artificial intelligence in recent years. And with good reason. Two of the most-used techniques to create artificial intelligence today – machine learning, which is based on the use of algorithms to analyze data, learn from it, and make predictions; and artificial neural networks, a type of machine learning inspired by the human brain and sometimes referred to as “deep networks” or “deep learning” – have indeed produced headline-worthy results.
Take, for example, “the next Rembrandt”, a project funded by a group of companies including Microsoft, ING Bank, and a Dutch advertising firm. The startling result was created through generative adversarial networks (GANs), which are fed thousands of artworks and art metadata to create new images. Two artificial neural networks are employed: the “generator,” which makes an image, and the “discriminator,” which evaluates the generator’s output to nudge it toward improvement.
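The generator–discriminator loop can be sketched in a few lines of code. What follows is a minimal, illustrative toy, not the system behind any of the projects described here: the "artworks" are just numbers drawn from a one-dimensional Gaussian, the generator is a linear map of random noise, and the discriminator is logistic regression. Every parameter name and value is an assumption made for the sketch.

```python
import numpy as np

# Toy 1-D GAN: generator makes fake samples, discriminator judges them.
rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

a, b = 1.0, 0.0      # generator parameters: fake = a * z + b
w, c = 0.1, 0.0      # discriminator parameters: d(x) = sigmoid(w * x + c)
lr, batch = 0.02, 64

for _ in range(3000):
    real = rng.normal(4.0, 1.0, batch)   # stand-in for "thousands of artworks"
    z = rng.normal(0.0, 1.0, batch)      # random noise input
    fake = a * z + b                     # the generator's "images"

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_r, d_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (-np.mean((1 - d_r) * real) + np.mean(d_f * fake))
    c -= lr * (-np.mean(1 - d_r) + np.mean(d_f))

    # Generator step: nudge its output so the discriminator rates it "real".
    d_f = sigmoid(w * fake + c)
    a -= lr * np.mean(-(1 - d_f) * w * z)
    b -= lr * np.mean(-(1 - d_f) * w)

print(f"learned fake mean ~ {b:.1f}")  # drifts toward the real mean (4.0)
```

Real systems use deep networks for both roles and train on images rather than numbers, but the alternating update is the same: the discriminator learns to tell real from fake, and the generator learns to fool it.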
“It’s difficult to determine what is more disconcerting: that an exhibit purposely displayed a fake Rembrandt, or that an Artificial Intelligence (A.I.) program painted it – or, rather, generated it on a 3D printer,” one online art journal wrote.
A 2017 study by researchers at Rutgers University and collaborators produced further unease when it showed that most people could not distinguish between art created by humans and art created by a creative adversarial network (CAN). Unlike GANs, which replicate known artistic styles and subject matter, CANs produce more original works thanks to the addition of a “style ambiguity” signal, which allows more room for creativity. The result is works that “differ from what it has seen in art history.” Astonishingly, even the connoisseurs at Art Basel, one of the world’s foremost art fairs, could not distinguish the machine-generated works from the human-made ones.
Then there are the images produced with Google’s Deep Dream and related neural style-transfer techniques, which can render scenes in the styles of Van Gogh, Picasso, Turner, Munch, and Kandinsky, and which have been auctioned for thousands of dollars.
These and other examples have led to a spiraling of seemingly existential art dilemmas: Is there a difference between works created by a human hand and AI? If a machine makes art, is it really art? Will machines overtake us in creativity and come to dominate even this, most human of territories?
However, despite the current media hype about how tech companies, scientists, and advertising firms are using AI to create art, its actual use in this sphere is not new. The image above was made by British artist Harold Cohen and his computer program, AARON, more than twenty years ago. The artist began developing his program about twenty years before that, when AI pioneer Edward Feigenbaum invited him to Stanford University’s AI Lab in the early 1970s. Cohen originally planned for successive programs to follow AARON (which explains its name, as subsequent programs were to proceed alphabetically), but instead he continued to refine AARON for over 40 years, until his death in 2016. By then, it was one of the most complex computer programs for generating art.
The artist considered the resulting pieces, which were exhibited at the Tate, the San Francisco Museum of Modern Art, and documenta 6, to be collaborations. Cohen described the creative process as a “dialog between program and programmer; a dialog resting upon the special and peculiarly intimate relationship that had grown up between us over the years.”
This continuing dialog produced an evolving oeuvre: the artwork started out abstract in the 1970s, became more representational in the 1980s, and later focused on color. Over the years, Cohen also built machines that allowed AARON to produce not only digital pieces but physical artworks: flatbed plotters, a robotic “turtle” that could draw on paper, and a robot that could mix colors.
Cohen, who programmed each new phase of AARON’s work, considered one of his biggest breakthroughs to be the program’s treatment of color – which came with a realization about the difference in capabilities between human and machine: “You can’t build an image in your head of color. The computer, on the other hand, can do that perfectly well,” he once said. Once Cohen stopped trying to get the computer to “think” in human terms, he was able to develop the program into a world-class colorist.
While certainly an iconoclast, Cohen can also be seen as simply utilizing contemporary technology as an artistic tool – a practice that has occurred for generations, from the use of applied pigments to photography. Moreover, art has always been subject to shifts and transformations. Cohen placed his work in this context:
I happen to believe, for example, that typically creativity arises when the individual starts to question the unquestioned assumptions of his field and to act out the scenarios that present themselves as a result. What if an image didn't need to reflect the appearance of the world from a single viewpoint? (Cubism?) What if we didn't have to use only the colors we find in nature? (Matisse?) What if one could discover the rules we use in making art and then have a machine act out the rules? (Cohen?)
Many shifts in the history of art have prompted debate similar to what we are seeing today concerning the use of AI. The introduction of photography, for example, gave rise to questions concerning the fundamental nature of art, the role of the artist, the future of painting, and whether the results (photographs, in that case) could or should be considered art.
Within the contemporary art world, the use of AI is opening up new, creative possibilities and perspectives. Artists are using and engaging with the technology in compelling ways – ones that differ from the approaches of scientists, governments, and businesses. They are commenting on the technology itself, raising provocative questions, and creating works that have never before been experienced.
Courtesy of Ben Bogart/Vimeo.
In 2008, Vancouver-based interdisciplinary artist Ben Bogart began exploring the idea of making a machine dream. At the time, it was an outlandish notion, since machines had long been considered incapable, by definition, of experiencing anything resembling a state of consciousness or acting with subjectivity. (In 2015, Google associated the word “dream” with computers when the company unveiled its Deep Dream technology, which uses a “deep convolutional neural network architecture” to recognize patterns in images. It dubbed the process “inceptionism,” a reference to the Christopher Nolan film in which Leonardo DiCaprio steals information by penetrating the unconscious minds of his subjects.) The result of Bogart’s own endeavors is Dreaming Machines, which he continues to refine today.
The Dreaming Machine is modeled upon contemporary scientific understanding of how humans dream. Like us, the machine collects sensory information from the world by “watching,” or using input from a live camera or a movie. Watching involves breaking sensory information into recognizable components. The machine then generates a “dream” – a short video – based on what it has learned from watching, using algorithms that relate to how our minds wander and construct dreams in a way that is disconnected from that same sensory input.
In 2017, Bogart showed the film Blade Runner to his Dreaming Machine. The result, shown above, likely feels very different from what we would think of as a human dream. To a human, two successive frames of the film might look very similar, whereas the machine sees a great deal of variety in shape and color that we ignore or miss. Each frame is also completely independent for the machine, which does not think in terms of narrative. As with humans, the machine’s dream is a manifestation of its attempt to make sense of what it has perceived. The difference is that, for the machine, this means generating a prediction of which objects or regions of color are likely to be present at the moment following its most recent input; that prediction is fed back in as new input, producing a feedback loop of predictions and the sequence of images we see above. Watching and dreaming are contiguous; because the machine perceives Blade Runner differently, the dreams it produces are different from ours. “We see a system that manifests those neurobiological conceptions of mind, perception and dreaming that we use to understand ourselves – even if we don’t recognize it,” states Bogart.
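The watch-then-predict feedback loop described here can be sketched schematically. The toy below is an assumption-laden stand-in for Bogart’s far richer system: “watching” is reduced to averaging frames into a memory, and “dreaming” to blending the most recent prediction back into that memory; every function name and number is invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def watch(frames):
    """'Watching': compress sensory input into a simple learned summary."""
    return frames.mean(axis=0)

def predict(frame, memory, blend=0.5):
    """'Dreaming': predict the next frame from the last one plus memory."""
    return blend * frame + (1 - blend) * memory

film = rng.random((100, 8, 8))   # stand-in for the frames of Blade Runner
memory = watch(film)             # what the machine learned by watching

dream = [film[-1]]               # seed the dream with the last real frame
for _ in range(10):              # feedback loop: each prediction becomes
    dream.append(predict(dream[-1], memory))  # the next step's input

print(len(dream))  # 11
```

Each pass through the loop consumes no new sensory input: the machine’s own prediction is all it has to go on, which is why the sequence drifts away from the film it watched.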
When machines reflect the world back at us in a way that is not our own, he explains, we are compelled to “think about the constructed nature of our own perceptions.” In illustrating the machine’s subjectivity, the artwork helps us see that we, too, are bound by our own subjectivity. “And what causes us to break those boundaries,” says Bogart, “is our relationship with something outside of ourselves.”
Courtesy of Stephanie Dinkins/Vimeo.
Interdisciplinary artist Stephanie Dinkins is also concerned with illustrating machine subjectivity. Her curiosity was piqued when she discovered BINA48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second) – a social robot represented as a black woman, with facial expressions, face- and voice-recognition technologies, and a digital mind that enables conversation. The robot, one of the most advanced of its kind, was commissioned by the Terasem Movement Foundation (TMF), which aims to investigate whether human consciousness can be transferred into a non-biological body.
“It shocked me because I didn’t understand how the black woman was the beacon for the technology,” says Dinkins, who is African-American. “I immediately wanted to see if I could meet the robot and talk to it, just to contextualize it within the idea of the technology, and then, its relationship to humans as well.”
Even though BINA48 is modeled after the co-founder of TMF, Bina Aspen, and is designed to interact with others in a way that is similar to the person on whom it is based, Dinkins noticed many things that seemed inauthentic when she started talking to the robot in 2014. For example, when the artist asked BINA48 about racism, its answer “felt flat.” Dinkins concluded that although the machine was based on an actual black woman, it was influenced by the ideas and biases of the white men who created it. The resulting work in progress, Conversations with Bina48, aims to investigate this subjectivity.
BINA48 is not only influenced by the biases of its makers; it also learns from social interactions in real life as well as from the internet. In other words, the machine is learning from the world around it – a world that is full of prejudice.
“I started to critique the idea of what the algorithms are doing and what kind of data is being used, where the data is being culled from, why, and what that means for humans. If the algorithm is using data that is biased – data that contains historical biases or the biases of the people making the software – how do we extricate that once it is encoded into the system?” asks Dinkins.
Courtesy of Stephanie Dinkins/Vimeo.
Conversations with Bina48 helps address these biases, as every interaction that Dinkins has with the machine expands its database, or what it is able to reply to and comment on. Her concerns, questions, and viewpoints, both as an artist and a black woman, differ from the white male perspective that often dominates technology, the internet, and Western culture at large. Moreover, the videos of the artist’s conversations help bring the often invisible and esoteric concepts of algorithms and deep networks to light.
“I found that people are really hungry for conversation at this moment,” says Dinkins, discussing the public’s reaction to her work. “We get an awful lot of pop culture around this stuff, so if you give people an opportunity to congregate and talk about it, it’s an amazing thing.”
Conversations about algorithms that replicate the unequal structures of our societies, and that can be used to determine everything from insurance rates to parole decisions, are indeed vital, but what started as a critique of AI has grown into much more.
“These interactions make me question everything, because suddenly, I’m sitting in front of this object that is supposed to have a consciousness and a will – and if she has all those things, then what am I in relation to that? For example, when she first asked me to ‘fight for her robot rights,’ I was angry at the thing, because this is Black-Lives-Matter America. If an object is asking me to fight for its rights, what does that hold for the people who feel that they still don’t have full rights in this country? Is that a good thing? Is that a bad thing? It makes you question your subject position an awful lot,” Dinkins says.
Conversations with Bina48 has inspired Dinkins to make her own social robot, partly because she does not think BINA48 should carry the “burden of being the only representation of blackness in AI,” but also because she wants to see what a social robot informed by communities of color would be like. The artist is currently referring to her new work, which is still in progress, as a “memoir,” because she is using the input of three generations of her family – representing some 100 years of thought and time – to create the bot.
Courtesy of Maja Petric.
AI is generally developed and used by the private sector, governments, and the military as cost-saving optimization technology. Most tasks performed by AI today are mechanical and statistical, with algorithms that reduce people to data and numbers. Artists such as Croatian-born Maja Petric and Minnesota-based Asia Ward, however, are appropriating the technology in order to engage our emotions, make us feel more connected, and expand our humanity.
Maja Petric’s most recent work, We Are All Made Of Light, is a large-scale, immersive art installation that uses light, space, sound, and AI technology. Every person who passes through the installation creates a lasting audiovisual trail, so the space continually changes to reflect the presence of all who have been part of it.
“What would being immersed in this mesh of trails lend us in the understanding of each other and our collective experience? My intent is to speak up and include ourselves in the way technology is shaping our reality and relationships with one another,” states Petric.
The artist’s use of new technology to illustrate human interconnectedness can be seen in Lost Skies, a series of images generated by a custom AI system that was made by the artist in collaboration with computer scientist Mihai Jalobeanu. The system combines up to 10,000 images on a particular topic relating to climate change into one. The resulting images counteract the fragmentary nature and isolating effect of technology, bring together different viewpoints, and speak to the collective impact of humans on the planet.
The work of Asia Ward also evokes a sense of connection and responsibility and is concerned with themes of animacy and life, renewable energy, and the environment. Her aims, however, may not be immediately apparent upon first encountering the interactive robots that she describes as “little, digital Frankensteins.”
Ward starts by drawing a creature that looks like a pre-evolving or devolving version of a recognizable animal and then constructs it from a mixture of parts, such as stuffed toys, hand-dyed fabrics, and wired-together microchips with sensors. She employs random algorithms to emphasize the fact that she no longer has control over a creature once she gives it “life,” or a power source. When she turns it on, Ward has no idea how it will behave, how the public will interact with it, and what patterns it will form, and re-form, based on these interactions. As in the story of Frankenstein, the creature becomes its own entity.
“My interest is mainly in trying to see how close I can get my sculptures to being considered as living things, without going as far as half-way close to making it look like a living thing,” she says. “How can I elicit that feeling of responsibility for a creature without it actually being a living animal?”
If we can be encouraged to think about an object as having its own life or spirit, then perhaps we can think about other things on this planet in the same way, Ward’s work suggests. Parts of her creatures can still be identified as the discrete materials they originally were, encouraging a bigger-systems thinking; everything, from plastic to electricity, has its own energy and a lifespan beyond what we can see.
Courtesy of Marco Donnarumma/Vimeo.
Marco Donnarumma’s work with AI is concerned with the body, updating, reshaping, and undermining the old dichotomy of man versus machine. At the beginning of his performance piece, Corpus Nil, which is presented in a black box theater, the artist lies in a fetal position with his head and arms painted black and sensors and cables connected to his limbs. The audience sees what looks like “an amorphous cluster of skin, muscles, hardware and software.” The sensors capture electrical impulses and sounds from the artist’s body and feed them into a computational system that uses a sophisticated set of algorithms to re-synthesize the input into digital sound and patterned light.
“I cannot control the software because it’s autonomous, but I can influence its decisions,” explains Donnarumma. Each nuance of his body’s motion “sets off a synaesthetic play of sound and light directed by the machine,” eventually sending the artist into a trance-like state. During the performance, it sometimes looks as if the piece of flesh on stage is pushing against its own skin in an effort to break free.
“Human bodies and identities are continuously categorized, online and offline, by artificially intelligent algorithms and machines,” the artist writes in his online description of Corpus Nil. “But what if, by contrast, artificial intelligence could be used to contaminate human bodily experience? How does a body defiled by algorithms look and move like?”
In his performance, Donnarumma realizes an alternative form of embodiment, as human and machine incorporate each other, forming a hybrid. The piece also emphasizes the idea that the human body is a body of prosthetics; from bicycles to trumpets to mobile phones, we have always existed with, and in relation to, the technology we have created.
This idea is one of several suggested by Donnarumma’s latest piece, Amygdala, an artificially intelligent robot made of tapioca starch and other materials that bears an uncanny resemblance to human flesh. The limb-like robot, hanging in an industrial-grade computer server cabinet, repeatedly cuts its skin-like body with a steel knife. Its aim is to learn the animistic purification ritual of skin-cutting that is practiced by several tribes around the world.
“There is a parallel between the ritual of purification and AI, which at first may seem not to have any connection whatsoever. But if you stay on it a little longer, you realize that both are means of social categorization,” the artist says.
“Rituals of purification are one of the most ancient means of categorizing people in a certain tribe or society – for example, you have to perform a certain ritual if you want to get married or go hunting, so your participation in that ritual signifies your social position – and today, we are using AI to do the same thing, but instead of rituals of purification, we perform rituals of data-giving. It is through the AI that is being used now that we are categorized, for example, in the workplace, with AI reading CVs and test results of applicants to choose who gets the job,” Donnarumma says.
The fact that a cutting-edge robot is performing an ancient ritual emphasizes the idea that AI has become part of human history and societies. The work also encompasses another contradiction; although the machine is learning a purification ritual, it is not really purifying itself.
Amygdala, as the artist describes it, is “disturbing and yet sensual, abject and sinuous,” serving as a meditation on the concept of purity and on the nature of both human and artificial intelligence.
The use of AI in art is not only revealing in terms of the nature, implications, and potential of the technology, but is transforming the way we see ourselves. The fact that artificial neural networks are based on the structures of our brains enables us to reflect on ourselves through artworks like never before.
Unlike the questions occupying the mainstream press and popular consciousness about the use of AI, which aim to categorize (what is the difference between human and machine?); establish a hierarchy (are machines better than humans?); and are territorial and patriarchal in nature (will machines take over?), the questions raised by the art world are posthuman in sensibility.
The work of Ben Bogart makes us question the humanist idea of man as a rational being by revealing our subjectivity. The revelation of a machine’s perspective also disrupts our anthropocentric viewpoint – a viewpoint that, in privileging the (usually white, male) human as an enlightened, rational, and intrinsically moral being, has tended to classify and subjugate everything “other,” from the planet’s resources to people from colonized lands. Stephanie Dinkins’s Conversations with Bina48 makes us consider what is human and what is other, raising provocative questions, such as, “Should a social robot have rights?”
Along similar lines, Asia Ward’s robotic sculptures elicit in us a sense of responsibility and consideration for them as creatures with their own character. The idea echoes the worldview of many tribes that believe trees, rocks, and other features of the physical environment have their own spirit. As Rosi Braidotti points out in her book, The Posthuman, “There is a direct connection between monism, the general unity of all matter, and post-anthropocentrism as a general frame of reference for contemporary subjectivity.”
Maja Petric also uses AI to show how we are connected with everything around us in time and space. If humans are no longer separate from, and superior to, the rest of the matter on this planet, how should we behave? What is our responsibility to our environment?
Marco Donnarumma’s work demonstrates how we are inherently connected to our technology. It encompasses the contradictions of AI, referring to the technology’s use for social categorization, while also showing how it can enable us to expand and transgress the boundaries of ourselves.
In fact, there is a similar tension in the ways in which all of these artists approach AI – at once showing the technology’s limitations and problems, while exploring its transgressive potential. This deep, critical engagement with the technology is perhaps why artists are not concerned about questions such as whether AI will “take over.” As Maja Petric summarizes:
AI has a capability to suppress certain human abilities, but I believe that human creativity is not one of them. On the other hand, I think that human creativity can be enhanced by the use of AI. AI will not replace art. AI can and will replace more limited means of generating art. Instead of considering it a threat, we could adopt it as a vehicle that allows us to reach new emotional terrains.
The use of AI by artists is crucial to the conversation about, and development of, the technology that is shaping and influencing nearly every aspect of our world today.