
ClimateProv: Playing with the AI

At ClimateProv human performers work with AI to live improvise scenarios related to climate change and its implications for humanity

A blog entry by the BeFantastic Within Fellows and creators of ClimateProv, one of several winning commissions due to premiere at the FutureFantastic Festival in Bangalore in March 2023, as part of the British Council’s India/UK Together Season of Culture.

 

Artists: Blessin Varkey, Gaurav Singh, Ranji David, Tajinder Dhami, Monica Hirano and Tiz Creel

May 2022. It was the culminating week of the BeFantastic Within Fellowship. Throughout the three weeks of the Fellowship, we explored the interplay of art and artificial intelligence through discussions, skill sharing and guided experiments, leading us to think about how AI might find its way into a live performance. From movement arts and dance to voice and text-based artwork, it was evident that the integration of artificial intelligence into traditional performance formats would conjure up interesting results.

Building blocks come together

After many rounds of virtual huddles and cautiously optimistic imagination, the building blocks of ClimateProv – improv theatre, generative AI and climate change – started falling into a cohesive structure, even if only at a conceptual level. The choice of improvisational theatre as the art form was informed by two reasons. Firstly, our personal encounters with conversations about climate change had led us to believe they evoked a sense of helplessness, despair and anxiety amongst those listening; improvisational theatre, on the other hand, focuses on humorous, heartfelt and vivid explorations of a subject matter. Secondly, improvisational theatre relies on a series of suggestions from the audience (i.e. an input) to spontaneously create a story (i.e. an output) based on certain improvisational principles and rules (i.e. the parameters).

The nature of improv, to extrapolate meaning from seemingly disconnected prompts to generate a cohesive narrative, is similar to the workings of an AI model

We began testing some of our ideas through brief playtests and quick experiments. The first of these involved training a GPT-3 model on the popular ‘Yes, and…’ improv game.

How does the game work? Human performers — and the AI, in this instance — tell a story together, starting each successive sentence with ‘Yes, and…’ and building the narrative further. As you can see in the video, the model is first trained on the ‘Yes, and…’ structure through a few rounds of human-generated text and then tasked with generating a response. This successful demonstration was a significant moment for us, as a coherent and responsive interplay between the human performers and the AI was critical for this project.
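To make the mechanics concrete, here is a minimal sketch of how such a ‘Yes, and…’ exchange might be set up with OpenAI’s Completion API. The seed lines, model name and sampling parameters below are illustrative assumptions, not the team’s actual configuration.

```python
# A minimal sketch of a few-shot "Yes, and..." prompt sent to GPT-3.
# The seed story, model name and sampling parameters are illustrative
# assumptions, not the project's actual setup.
import openai

openai.api_key = "YOUR_API_KEY"  # assumed to be provided by the caller

# A few human-written rounds establish the "Yes, and..." structure.
seed_story = (
    "Yes, and the city council announced that all the plastic in the harbour "
    "had started to move on its own.\n"
    "Yes, and the fisherfolk began negotiating with the plastic for safe passage.\n"
    "Yes, and "
)

response = openai.Completion.create(
    model="text-davinci-003",   # any GPT-3 completion model would do
    prompt=seed_story,
    max_tokens=60,
    temperature=0.9,            # a high temperature keeps the improv playful
    stop=["\n"],                # stop after the AI's single line
)

ai_line = "Yes, and " + response.choices[0].text.strip()
print(ai_line)
```

In a live round, the human performers’ latest line would simply be appended to the growing story before each new request, so the model always improvises from the full shared narrative.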

The diverse artistic and technical backgrounds of our team of six had already allowed us to begin imagining a wide array of possibilities for this project. A few weeks later, we received the news that our pitch had been successful and this project was officially greenlit for development! The question we faced changed from a speculative ‘what is possible?’ to an exciting ‘how to make this possible?’ And thus, we jumped into the next phase of our research where we decided to go back to the three building blocks of the project – improvisational theatre, generative AI and climate change – and take these one step at a time.

Wrangling with the AI

With generative AI, we decided to train our own model using GPT-3 and then integrate it into the performance. Additionally, we have been exploring text-to-image models like DALL·E Mini, DALL·E 2, Midjourney and Stable Diffusion. One interesting lesson while exploring these technologies has been the need to create highly specific and targeted text prompts to generate better images. For this, the team researched this guide on prompts for DALL·E.
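As a rough illustration of what “highly specific and targeted” means in practice, the sketch below expands a bare audience suggestion into a more detailed image prompt before sending it to an image-generation endpoint. The template wording and the use of OpenAI’s image API are our illustrative assumptions, not the production setup.

```python
# A sketch of turning a bare audience suggestion into a specific, targeted
# image prompt. The template fields and the OpenAI image endpoint are
# illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"

def build_prompt(subject: str) -> str:
    """Wrap an audience suggestion in style and framing details,
    since vague prompts tend to produce vague images."""
    return (
        f"{subject}, wide-angle documentary photograph, "
        "dramatic stormy sky, muted colour palette, highly detailed"
    )

suggestion = "polar bears moving out of the poles into a coastal city"
response = openai.Image.create(
    prompt=build_prompt(suggestion),
    n=1,
    size="512x512",
)
print(response["data"][0]["url"])  # URL of the generated image
```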

Here is some of the AI artwork we generated during rehearsals, with prompts based on climate change:

Another question confronting us at this stage was the visualisation, or rather, personification of the AI in the context of the live performance. Just as we have human performers on stage to tell a story, we wanted to present the AI to the audience as an equal collaborator in the process. To do this, we needed to synthesise a public-facing avatar of the AI that could be shown during the performance.

We decided to research how AI and related concepts have previously been visualised in mass media and popular culture. From Apple’s Siri to Stanley Kubrick’s 2001: A Space Odyssey, we found ourselves pondering more questions: Should our AI have a name? Should it have a personality? What is its relation or positionality with respect to climate change? Does it have a relationship with the other human performers?

Still navigating our way slowly through these questions, we shifted focus to the other pieces of the puzzle that needed to be addressed first. Simultaneously, we prioritised the development of the GPT-3 and other generative AI pieces so they could point us towards a more concrete direction.

Figuring out the spine of the story 

With improvisational theatre, there was a sense of comfort and familiarity. Blessin, Ranji and Gaurav have been working with improvisational formats for a long time and already had a sense of where to begin. We began researching ‘The Documentary!’, a long-form improvisational format created by Billy Merritt at the Upright Citizens Brigade Theatre, using it as the starting point for the performance’s narrative structure. A long-form improvisational format relies on taking suggestions from the audience at the show’s beginning to improvise the first few scenes, then using whatever came up in the previous scenes as the base for improvising each subsequent scene.

With ‘The Documentary!’ format as our guide, we created a new long-form format that begins by asking the audience about a fictional climate crisis in the world. For example, plastic is turning sentient or polar bears are moving out of the poles. Herein, a series of scenes unfold where the human performers and the AI begin to build a narrative around this fictional crisis.

Each scene incorporates the AI in different ways – through text generation or image generation – and feeds the output back to the human performers to weave into the action in real time.

Reflecting on the process so far

At the time of writing this blog, we have begun playtests with a group of improvisers in Bangalore, India who are rehearsing with our draft script structure to work out possible kinks and glitches. Our goal is to ultimately use a structure that allows for the creative integration of the AI into the story as an equal performer while making it fun, accessible and interesting for the audience.

Some of the challenges we can already foresee with using any of these models are high running costs (due to the frequency of testing and stabilising the output) and processing delays (which impede the “real-time” nature of the human–AI interaction). Another challenge is streamlining the data pipeline for the performance, which turns speech into text, feeds that input to the model, generates an output, and converts that output into presentation-ready text and voice for the AI performer. For real-time animation of the AI performer, we have been exploring Adobe Creative Cloud, Unreal Engine, Wav2Lip and other lip-syncing / animation models that give our AI performer a life of its own.
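A simplified sketch of that speech-to-text → GPT-3 → voice pipeline is shown below. The library choices here (speech_recognition for transcription, pyttsx3 for the voice) and all parameters are assumptions for illustration, not the production stack.

```python
# A simplified sketch of the speech -> text -> GPT-3 -> voice pipeline.
# Library choices (speech_recognition, pyttsx3) and parameters are
# illustrative assumptions, not the production setup.
import openai
import pyttsx3
import speech_recognition as sr

openai.api_key = "YOUR_API_KEY"

def listen() -> str:
    """Capture one utterance from a performer's microphone and transcribe it."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)

def respond(line: str) -> str:
    """Feed the transcribed line to GPT-3 and return the AI performer's reply."""
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"{line}\nYes, and ",
        max_tokens=60,
        temperature=0.9,
        stop=["\n"],
    )
    return "Yes, and " + completion.choices[0].text.strip()

def speak(text: str) -> None:
    """Voice the AI performer's reply on stage."""
    tts = pyttsx3.init()
    tts.say(text)
    tts.runAndWait()

if __name__ == "__main__":
    human_line = listen()
    ai_line = respond(human_line)
    speak(ai_line)  # the same text could also drive Wav2Lip for a lip-synced avatar
```

Each stage in this chain adds latency, which is why the processing delays mentioned above are a central concern for keeping the exchange genuinely improvisational.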

However, running the above pipeline in real time during the actual performance (and ensuring that everything runs properly!), alongside everything else a stage performance requires (lights, sound, projection, props), is the next big task ahead of us.

 

ClimateProv is the product of the BeFantastic Within Fellowship, an online programme fostering international collaborations between creatives in the UK and India, exploring AI technologies and creating provocative performance pieces amplified with creative AI.

As one of the BeFantastic winning commissions, ClimateProv will premiere in March 2023 at the FutureFantastic Festival, an exciting and ambitious new AI+Art festival, part of the British Council’s India/UK Together Season of Culture, celebrating the remarkable bond between the two countries and exploring our cultures, our shared planet and our relationship with the digital technologies that will shape our future together. It is conceptualised by Jaaga’s BeFantastic (India), in association with FutureEverything.
