AI, Automation, and Abstraction
In November 2022, the installation Unsupervised by the artist Refik Anadol was exhibited in the lobby of MoMA, New York. Aesthetically, it presented a dynamic display of abstract forms interpreting, via machine learning, MoMA’s archive of modern art. Conceptually, however, it sits at the nexus of a number of theoretical questions regarding the increasing use of artificial intelligence in art. Whereas the historical automatism of surrealism through abstract expressionism often purported to evoke the unconscious of the individual artist, the automatism of artworks such as Unsupervised claims to manifest a somehow non-human unconscious of modernity. Whereas the conceptual work of artists such as Sol LeWitt was algorithmic in an attempt to divest art of the heroic creator figure, artworks such as Unsupervised seem to replace the latter with the heroic machine. Moreover, this romantic rhetoric and visual language obfuscates both the immense amount of often emphatically exploitative labour required for such systems (Hito Steyerl) and the fact that the ‘visual knowledge’ of computer software tends to serve existing power interests (Wendy Hui Kyong Chun). Accordingly, this panel seeks to historicise the emerging aesthetics of AI art-making and to engage with questions such as how art historical methods might be employed to gain critical purchase on technological advances that threaten only to serve the status quo, and whether there is hope in current practice for the kind of subversive engagement with cutting-edge technological development that was often attributed to early new media artists such as Nam June Paik and Gretchen Bender.
Session Convenors:
Ian Rothwell, Edinburgh University
Daniel Neofetou, Northampton University
Speakers:
Lindsay Caplan, Brown University
The Death of the Author and Other Fantasies: Generative Art between Cybernetics and Psychoanalysis
Today, artificial intelligence has created two classes of artists: those who master machines and profit enormously; and those replaced by automation, their (unremunerated) intellectual labor abstracted, expropriated, and integrated into the mechanisms of AI. But many artists in the 1960s wanted to disperse their authorial agency and collectively mechanize artistic production. From Frieder Nake’s computer-generated plotter drawings to Gianni Colombo’s programmed environments and Liliane Lijn’s metamorphosing kinetic sculptures, artists used automated processes and analogies of “machine life” to insist on the collective conditions of creativity—to undermine the opposition between individual and collective that contemporary AI has reinscribed. Have claims that machines materialize the social basis of creativity become irrelevant—even dangerous? Does the decentering of individual agents have any critical place within the contemporary configuration of art, labor, and politics in the age of AI?
This paper argues that liberatory claims for collectivized authorship remain essential, but only if we reevaluate the conceptions of subjectivity, systematicity, and self-consciousness upon which they are based. To do so, this paper examines how Nake, Colombo, and Lijn use analogies between humans and machines to variably empower, decenter, and insist on the incommensurability of individuals. Turning to Jacques Lacan’s writing on cybernetics and psychoanalysis, I argue that the full range of types of “dead” authors has not only been inadequately recognized but actively disavowed. Ultimately, I show how the mutability of the dead author position has profound implications for our understanding of art, politics, collective organizing, and political imaginaries today.
Caitlin Chan, Stanford University
Beyond the window: Searching for invisible origins, from William Henry Fox Talbot’s “The Oriel Window” (1835) to Artificial Intelligence-generated images
In 1835, William Henry Fox Talbot set a small camera containing sensitized paper on the mantelpiece opposite a lattice window in his home. After a few hours of sunlight had branded the paper with the inverse silhouette of Lacock Abbey’s oriel window, the very first photographic negative emerged into view. Talbot’s prototype, in the bare announcement of its physical genesis, offers a unique conceptual framework through which to understand phenomenological responses to A.I.-generated images—what is the discomfort that arises upon feeling an overwhelming sense of source pressing from behind, while being unable to inhabit or identify this past visually? To probe this unease, I propose a conceptual reversal of Talbot’s “Oriel Window”, in which the viewer of the A.I.-generated image finds themselves on the other side of those latticed bars, trying to discern what distant window (What image training set? What natural-language prompt? What “region” of the latent space?) produced the image before them. In the same way that Talbot’s photograph indexes the sun’s passages and arrests in space to burn inverse shadows directly onto its surface, an A.I.-generated image is underwritten with the unsettling sense of its creation within the black box of latent space, a dissonance that never resolves. With this paper, I aim to historicize viewer response to A.I. aesthetics, and draw out new resonances, tensions, and ways of seeing through the unlikely comparison of a pioneering moment in image-making to the current landscape of A.I.-image generation.
Laura Leahy, Solent University
Give a Surrealist a Smart Phone
Central to the Surrealists’ raison d’être was a notion of the unconscious, influenced by Pierre Janet, Sigmund Freud, and others. They appropriated early 20th-century psychiatric techniques of automatism, and discarded aesthetics, skill, and intent (André Breton), to focus on accessing the unconscious of individual artists. Accordingly, the Surrealists challenged the habitual acceptance of reason and rationalism, which they argued served to organise the chaos after the First World War, by embracing a free flow of thought through the automatist line.
A century later, human thoughts entwined with technology, such as AI training datasets, warrant going beyond the Surrealist focus on the individual unconscious to a social unconscious. What Boris Groys refers to as the defunctionalisation of art disrupts AI tools, and emerging aesthetics of text-to-image generation aim to visually confront social images of arbitrary norms derived from previously unacknowledged consensus (Trigant Burrow). Thus, historical Surrealist approaches to automatism are appropriated in order to challenge habitually accepted technological rationalism, address mental images of a waking state to access a social unconscious, and defunctionalise technology. In this way, artworks such as Trevor Paglen’s From ‘Apple’ to ‘Anomaly’ (2019) reflect what is unseen within technology, surfacing both the socially disregarded human labour needed to bring the Internet and all its technologies to the screen (Hito Steyerl) and any social structure embedded, transmitted, and mirrored back to the world through the global network (Aranda, Wood and Vidokle). By surrealistically accessing socially unconscious mental images embedded in the structures of technology, the Internet’s visual language and its potential to obfuscate awareness are defunctionalised. Thus, a creative approach to automatism explores the premise of the Internet as the social unconscious of the 21st century, in order to open up new forms of visual knowledge production.
Andrew Murray, Open University
Electric States: Absorption and Absorptive Formulae in the Age of Artificial Intelligence
Simon Stålenhag’s digital paintings have been adapted for a mass audience. His book Tales from the Loop was the basis for a 2020 Amazon TV series, and his The Electric State has been made into a feature film (forthcoming in 2024). These books use digital paintings to narrate stories in which human and robotic characters traverse European and North American landscapes transformed by futuristic technologies. In this paper, I argue that Stålenhag, a vocal critic of the use of AI to produce art, draws on the genre of sublime landscape imagery in two of his books, The Electric State and the forthcoming Europa Mekano, to question the continued possibility of aesthetic experiences in a world in which AI can control and model consciousness. As Cornel Robu has observed, artificial intelligence is sublime in how it requires code surpassing human comprehensibility. In Stålenhag’s work, the sublimity of AI absorbs the consciousness of its beholders while also eliminating aesthetics from those absorbed states. The challenge AI poses to the aesthetics of absorption highlights the difficulty Michael Fried had in clarifying whether absorptive images could be reduced to definable formulae, and thereby rendered reproducible by a machine. Fried equivocated on this problem, yet often analysed such ‘absorptive formulae’, notably the tropes of the concealed face and the Rückenfigur, which recur in Stålenhag’s work. It is such ‘absorptive formulae’ – or even Pathosformulae, in Warburg’s terms – that have survived through the Western tradition and inform Stålenhag’s digital painting, even as he questions their continued validity.
Amanda Wasielewski, Uppsala University
Object Recognition: Photography and the Real after Generative AI
Jean Baudrillard famously defined simulation as “the generation by models of a real without origin or reality.” Re-reading this today, it sounds eerily like a definition of AI-generated ‘photographs’. Recent advances in generative AI, particularly generative adversarial networks (GANs) and diffusion models, allow for the creation of photographic-seeming images based on learning from vast datasets of digital and digitized photographs. Baudrillard’s choice of the words “generation” and “models” is thus fitting, given that AI-generated photographs constitute a semblance of the real, generated by machine learning models, that has no origin or reality. In other words, they are quintessentially hyperreal. Previous scholarship on generative AI has largely revolved around the political and social implications of producing convincing “fake” photographs. This paper, however, approaches these images from a theoretical perspective, focusing on the ontological issues they present. Framed by Baudrillard’s concepts of simulation and the hyperreal, I address the historically fraught relationship between photography and the real and its implications for the photographic image after the advent of generative AI. I argue that these synthetic and abstractly composite images gamely play the part of photographs even though they are produced without reflecting and refracting light from existing referents. For Baudrillard, photography is a process of objectification, and AI image generation ultimately reinforces the view of photography as objectifier. Vis-à-vis automation, photography is understood to reveal the latent objective world. AI-generated photographs are no different. They reveal the hidden facets of the data used to produce them—the object world of the dataset.
Martin Zeilinger, Abertay University
Abstraction as Adversarial Image-Making
This paper links AI art experiments with abstraction to debates regarding the politics of artificial intelligence. Specifically, I discuss abstraction in relation to ‘adversarial image-making,’ which, in machine vision contexts, refers to the creation of images that purposefully interfere with image recognition and the legibility of data. I argue that art historical perspectives on the critical valences of abstraction have much to offer to discourse on key AI pitfalls, including dataset bias, IP issues, and freedom of expression. Two examples anchor my discussion: Tom White’s series Synthetic Abstractions (2018), which revolves around AI-generated images that human viewers will perceive as abstract, but which trigger image recognition classifiers indicating pornographic content; and Nightshade, a tool developed at the University of Chicago that allows artists to inject images with invisible ‘data noise’ which can potentially break generative AI models. These examples frame the following guiding questions: How are human interpretive faculties and the ability for abstract expression impacted when what we see becomes subjected to the power of AI tools to filter, censor, or otherwise disrupt the circulation of visual information? What aesthetic and technical tactics are available to respond to such disruption?
In bridging aesthetics, critical perspectives on AI, and technical discussion, my paper links the aesthetic category of ‘abstraction’ to the computing concepts of ‘generalisation’ and ‘adversarial images.’ As I will argue, both of my examples build on long-standing aesthetics of visual indeterminacy, which are here mobilised in subversive practices that can highlight and potentially resist problematic tendencies, within the field of machine vision, towards biased simplification of image content, data appropriation, information blackboxing, and process obfuscation.