Tristan Dot is showing two installations at the Cambridge Festival that investigate the impact of AI on how we view modern life.
The fact that machines could impose their own ways of seeing on our human ways of being is often perceived as a threat...But it could also be seen as an opportunity to think beyond our human agency, and consider, for instance, how animals perceive the world.
Tristan Dot
Tristan Dot [2022] will be exhibiting two of his installations at this year’s Cambridge Festival. The two works, Am I Normal? and Dreamy Cops, investigate notions of AI, including computer vision, surveillance, the human body and normativity. Tristan is doing a PhD in digital art history at Cambridge Digital Humanities. His exhibition takes place from 11:00am-5:45pm on 15th March in the Faculty of English, West Road.
Q: What first attracted you to exploring the interface between computer science and art?
A: Since I started programming, I’ve been interested in using code in a generative way, to create small games or visualisations. When I studied machine learning, I was struck by the fact that the same family of models could be used to analyse and generate data. At the same time, these models were increasingly being used to monitor images on a large scale, for surveillance purposes. Certain artists and thinkers (including Harun Farocki and, after him, Trevor Paglen) had begun to criticise these new forms, which were becoming invisible to human eyes, but which could have a direct – alienating – impact on our lives. It was this combination of factors and influences that led me to explore the relationship between analysis and synthesis in computer vision.
Q: What is the aim of the Am I Normal? installation and where did the idea come from?
A: Am I Normal? is an interactive installation in which the spectators’ poses are analysed live, before being projected onto a wall. The idea is to expose what is usually hidden: the increasingly common monitoring of human poses and actions through machine learning. In France, the use of computer vision to analyse human behaviour in real time will be legally tested, on a large scale, in public spaces during the next Olympic Games.
But such methods are already being used – without a clear legal framework – in many places in Europe. Under the appearance of statistical objectivity, these algorithms actually reproduce patterns of domination, discriminating against minorities and atypical individuals. By quantifying what would be normal or abnormal in a specific behaviour, they create new self-fulfilling norms based on opaque processes and economic/political objectives.
The idea came in part from my work on Dreamy Cops, the political context in France, the apparent lack of public debate on these issues, and the work of various hacktivist artists.
Q: Dreamy Cops seems to be doing several things at once: for instance, the ephemeral nature of the images makes you question not only the meaning of images, but also the nature of humans in a digital world. Was that your intention?
A: Dreamy Cops explores the nature of digital images in neural networks, and the use of these same networks for human surveillance. In 2021, when I began working on this installation with Mathieu Rita, generative AI art was becoming increasingly popular. Neural networks for analysis and synthesis shared the same basic architecture (they were, at the time, based on convolutional operations). It then seemed pertinent to highlight the latent analytical objectives behind any AI-generated image.
So, yes, the ephemeral aspect of the images reflects both the nature of images evolving in an abstract mathematical space (the latent space of neural networks), and the way in which humans are inherently perceived through analysis – profit- and control-oriented – in our digital industry. Am I Normal deals with the same kind of questions.
Q: Another issue your work raises concerns surveillance by computers for computers. How can art address and provoke debate about fears of a post-human world?
A: Indeed, Trevor Paglen talks about invisible images: images produced by computers for computers. The fact that machines could impose their own ways of seeing on our human ways of being is often perceived as a threat, a potential alienation. But it could also be seen as a stimulating moment: an opportunity to think beyond our human agency, and consider, for instance, how animals perceive the world.
I think that artworks generated by machine learning, if they are at least a little self-reflexive, should question how images are analysed and created in the latent spaces of neural networks. If, at the same time, it is possible to expose these visual computations in an aesthetically pleasing form, then this brings human sensibility back in the loop, and makes all things more interesting.
Q: How does your work on these installations connect to your PhD?
A: My PhD aims to study the circulation of textile patterns during the 19th century in Britain. Part of my methodology involves using computer vision to trace visual correlations between ornamental motifs. I have to be self-reflexive about the tools I use, especially as they are new to my field. In the case of 19th-century textile production, which was based on massive systems of domination, and which induced various forms of standardisation, what are the implications of using neural networks to study it? What new forms of domination and standardisation are being looped in? What are the potential echoes between the textile industry during the industrial revolution, and our current digital industry? These are the kinds of questions I’m trying to keep in mind, and which are also, in a way, reflected in these installations.
Q: Are you working on any other installations?
A: Nothing precise at the moment! But I would love to mix image processing and weaving in a future installation.
*Several Gates Cambridge Scholars are speaking at the Cambridge Festival. Find out more here. The Festival runs until 28th March. Full programme here.