News

Meet the speakers: Katriona Beales, Artist

Meet artist Katriona Beales, who joins us to talk about the unintended consequences of AI at Future Sessions – The Air of Turbulence, Mon 8 July.

In the run-up to the Future Sessions – The Air of Turbulence symposium, we’re introducing you to our great lineup of speakers.

Today we meet Katriona Beales – an internationally exhibited sculptor who makes digital artefacts, moving image and installation responding to the social implications of new technologies.

Katriona will be in conversation with William Tunstall-Pedoe, AI entrepreneur and founder of Evi (now Amazon Alexa), on the topic of ‘Unintended Consequences’.

Hi Katriona, thanks for chatting with us. Could you tell us a bit about you and your practice?

As an artist, my practice is both about the making of digital and physical things and also pretty research-based. I’m drawn to explore the social effects of technologies – the way technologies are developed reflects the specific context, assumptions and biases in which they are birthed, and there are often unintended consequences which play out in the way they shape society and channel human behaviour. I made a large body of work, ‘Are We All Addicts Now?’ (shown at Furtherfield in 2017), exploring online behavioural addictions and the mechanisms by which platforms and interfaces are designed to be addictive in order to monetise interactions. I’m currently in a research process looking at AI systems, and through this started a conversation with William Tunstall-Pedoe, who developed Alexa. I showed the first part of this conversation at the V&A last year, and we are bringing a second iteration to the Future Sessions symposium.

What are the questions you’re interested in raising through your practice, and why is it important we have these conversations now?

I still believe knowledge is a form of power (not the only form, but still…) and many people (myself included) have limited understanding of how AI works and how many systems and decisions AI is already involved in. I am repeatedly drawn to questions of agency, autonomy and the space for collectivity – all areas which AI could enhance but which, in its current iteration, it seems to be undermining. So in order to intervene in these oblique structures surrounding us, we need to question these technologies and how they are being made and applied. I think it’s vital we have these conversations – not just in the arts or humanities – but as open dialogues with the tech sector, which operates under quite different assumptions.

What does ‘voice’ mean to you in the context of technology and digital culture?

I have been thinking a lot about voice in the context of voice-activated interfaces such as Siri or Alexa… but really I think of it much more broadly in terms of the politics of power, representation and diversity. Who has a voice? Who is recognised and heard, and who is able to speak? Voice is an extension of personhood, a projection of the self or a group of selves into space and time, creating a resonance or dissonance with the objects, places and people it interacts with. As such, it’s enmeshed in all the philosophical tussles around these interactions. There are also driving forces such as the accrual of capital, a kind of Power personified, and very much a Voice – it’s strident and resists scrutiny, carrying an irresistible logic of its own. Ultimately, when we understand who is speaking and why, we can make decisions about whether we need to listen or not.

The future is increasingly driven by algorithms… should we be concerned?

What concerns me is the lack of transparency in the decision-making that algorithms are doing. The sheer volume and complexity of calculations means the process by which decisions are actually made is incomprehensible to human minds, and with that I fear a loss of accountability. I think of the scene in Terry Gilliam’s Brazil where, due to an error, the wrong man is incarcerated and killed. But potentially this isn’t science fiction anymore, as you can see in the Met Police’s deployment of facial recognition software, which is 96% inaccurate. Errors have always been made by humans, including errors of incarceration, but in handing over decision-making en masse to algorithmic systems I fear the loss of the capacity for doubt.

In the world of tech and digital culture right now, what do you see as the most exciting opportunities and the biggest challenges or concerns?

My biggest fears at the moment are to do with the climate crisis and the rise of populism and fascism. There’s opportunity as never before to take action, individually and collectively, and it’s essential that we do.

Join us at Future Sessions: The Air of Turbulence to catch Katriona in conversation with William Tunstall-Pedoe, exploring questions of social justice, rising inequality and accountability in relation to the development of AI.