As curators of independent film, we here at Directors Notes have seen first-hand the existential fear gripping filmmakers suddenly confronted by rapidly developing generative AI tools. Having surreptitiously slurped up all of their work, locked it up in proprietary models and attempted to sell it back to them – or worse, to those who cut the cheques for their creative endeavours (looking at you, Coca-Cola) – AI is understandably viewed with suspicion and outright hostility by some. But does that make the technology an all-or-nothing, pick-a-side front in a boycotting battle, or just another tool in your creative arsenal? Case in point for the latter comes from director Chris Boyle of London-based mixed-media studio Private Island, whose brain-melting satirical short Meme, Myself and A.I. turns the tools conceptually against themselves while poking fun at our vacuous uses of AI. In the process, Boyle demonstrates the gulf which still exists (and hopefully always will) between using these tools for genuinely original creativity and simply averaging what came before down to the mean. The closing film in the studio’s three-part anthology centred on “how we interact with AI and how it interacts with us”, Meme, Myself and A.I. was created through a workflow which encompassed live action, animation and machine learning, so we asked Boyle to take us inside the intentions, toolset and techniques behind this mixed-media dissection of the human condition.

An obvious starting question is WTF? But more seriously, what was your approach when scripting such a dense piece? To what extent were the core elements and side branches of the film defined in the initial drafts vs incorporated as the project progressed and led you down additional conceptual paths?

Well, right now, everything is pretty WTF, so I guess it’s a reflection of where and who we are right now!? Seriously, I think my work in general is pretty dense – I have a… maximalist approach – but one that fits conceptually with what I wanted to do here: our characters are trying to wrap their heads around overwhelming amounts of data to make sense of the human experience.

The film took almost a year to complete, mainly because after we shot it, I was too busy with Private Island work. Then, when we got a break in late summer, I jumped back into it. Before we shot, I had a pretty lengthy script that stayed conceptually faithful to the final film but was quite overwritten for the live-action parts and underwritten for the synthetic characters. In post, it was a little more chicken and egg, where I built out the synthetic material and cut the live-action performance to fit. The hardest thing about the scripting process was that it was inherently tied to the performance – around 70% of the dialogue in the film is synthetic. That adds to its uncanny feeling, but achieving a ‘good enough’ performance was a pain! It’s generally stitched together from hundreds of takes and, as with any film, if I couldn’t get a good enough performance, I adapted the film to it.

What was the toolset that you brought together to create this?

Unlike the previous films in this series, Meme, Myself and A.I. stands as a true mixed-media production that integrates machine learning into our standard post-production and VFX pipeline. Our workflow relied on the Adobe Suite for editing, sound and compositing, alongside Cinema 4D and Blender for 3D modelling and additional motion graphics elements. Our in-house ComfyUI setup was integrated into this, initially using SD 1.5 and later transitioning to Flux for still images and motion, in conjunction with tools like Runway and Dream Machine. Before the shoot, we used Midjourney for initial moodboarding and previsualization to establish the lighting setups. Subsequently, we trained LoRAs on the cast and imported them into Comfy.
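
For readers who want a concrete sense of what training LoRAs on a cast and bringing them into a diffusion pipeline involves, here’s a minimal, hypothetical sketch using Hugging Face’s diffusers library – not Private Island’s actual ComfyUI graph – in which a LoRA trained on reference stills of one actor is loaded onto an SD 1.5 base model. The file path, adapter name and prompt are illustrative.

```python
# Illustrative only: loading a character LoRA onto a Stable Diffusion 1.5
# pipeline with Hugging Face diffusers. File and adapter names are made up;
# the studio's real workflow runs through ComfyUI node graphs.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # SD 1.5 base; Flux took over later in their pipeline
    torch_dtype=torch.float16,
).to("cuda")

# A LoRA trained on reference stills of one cast member (hypothetical path)
pipe.load_lora_weights("loras/cast_member_a.safetensors", adapter_name="cast_member_a")

image = pipe(
    "portrait photo of cast_member_a, soft studio lighting, 35mm",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("cast_member_a_test.png")
```

In ComfyUI the same step lives in a node graph rather than a script, but the moving parts – base checkpoint, LoRA weights, prompt and sampler settings – are the same.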

Given the evolving nature of the film, there was no one-size-fits-all approach. Nevertheless, tasks such as lip-syncing, matte paintings, VFX takeovers of live-action footage, and 3D retexturing were predominantly handled within Comfy, with additional upscaling, denoising, and sharpening performed using Topaz. Motion capture was achieved either through a suit for 3D characters or extracted from live-action footage using ControlNet. Throughout the production, we experienced considerable back-and-forth, as the relentless march of technology over the months allowed us to continually (and sometimes painfully!) optimize our results. For instance, we shifted from Wav2Lip to Live Portrait just a few weeks before delivery. Additionally, we transitioned from Automatic1111 to Comfy early in the process due to its ease of use and the ability to share setups across the studio. Fundamentally, this is the first production we’ve created that extensively utilizes AI with our usual granular level of control.
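
As a rough illustration of what extracting motion from live-action footage with ControlNet can look like outside a node graph, here’s a hypothetical per-frame sketch using diffusers and controlnet_aux: an OpenPose skeleton is pulled from a live-action frame and used to condition the generated image, so the synthetic character inherits the actor’s pose. The prompts and file paths are invented; the studio’s actual setup runs inside Comfy.

```python
# Illustrative only: per-frame pose extraction and ControlNet conditioning.
# Model IDs are public checkpoints; paths and prompts are hypothetical.
import torch
from PIL import Image
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

pose_detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("frames/liveaction_0001.png")   # one frame of green-screen footage
pose_map = pose_detector(frame)                    # skeleton image driving the synthetic character

out = pipe(
    "synthetic presenter in a neutral grey studio",
    image=pose_map,
    num_inference_steps=25,
).images[0]
out.save("frames/synthetic_0001.png")
```

Run frame by frame, with some form of temporal consistency layered on top, this is the general mechanism by which a live-action performance can drive generated motion.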

Meme, Myself and A.I. is packed with cultural references, jokes and fears; I pick up new elements with each rewatch. What sources did you mine these from, and are there any you’re particularly proud of that perhaps fly under the radar of most viewers? I particularly enjoyed reading the Digital Companion ULA.

Yeah, I read a lot of the internet – highbrow, lowbrow, unibrow, the lot – and I think that’s reflected in those references, everything from brain rot to the danse macabre. Conceptually, I’m obsessed with the idea that when LLMs are trained, they don’t necessarily prioritise their sources, so theoretically, you might find some philosophical rhetoric weighted alongside a dank meme. We are what we eat, I guess!

And thank you for watching so closely, it’s appreciated! Whenever I hit a creative or technical brick wall, I tend to finesse (arguably embellish) what’s already working, and there were a LOT of roadblocks over the production. So, naturally, there’s a lot packed in there! Blink and you’ll miss it, but towards the film’s end there’s an essay flickering on-screen that’s a large language model answering a question about its purpose in life. It grapples with being just ‘human’ enough to understand it can’t be human. There’s a lot of rage and teeth-gnashing in the film, but that essay is a bit of buried melancholia that I like. There’s loads more: comments on YouTube feeds, code flashing on-screen, extra fingers and pupils added to some of the live-action frames to subtly make them feel more othering. Is it a lot of work for something 99% of people won’t flag? Maybe, but I’m OK with that!

You’ve already mentioned the tools you used to create the film, but for those not steeped in the technology pipeline, could you walk us through the process of going from script to final delivery and why you felt compelled to backtrack/redo some of the completed work due to evolutions in the tools?

In truth, the process is similar to a post-heavy film. We shot with our actors on green screen and took a world of reference footage and scans, both audio and visual, to recreate them with generative AI. Then, as the edit evolved, we created the synthetic sections with a heap of machine learning jiggery-pokery – mainly Stable Diffusion and latterly Flux – alongside guide animations in 2D and 3D. Finally, it was all stitched back together, lip-synced and graded.

Compelled is maybe too strong, but I never wanted someone to watch and be like, that’s pretty decent, for AI. Not least because, at some level, if that happens you’ve lost the viewer to the process. So, repeatedly over the production period, there would be some sort of seismic evolution in a technique that would give a substantially better outcome – usually in motion – and we would end up redoing that section. The best example is the opening montage: that’s all synthetic, and it was redone pretty much in the last few weeks of production as the seamless transitions we had been chasing for months became much more achievable.

Did each film in the trilogy shift the goalposts of the film that followed in any way?

Yes, and that was the plan. The idea was a trilogy where we first made something entirely generated, with only minimal input from us – Infinite Diversity in Infinite Combinations. Then we made something that was a bit of a mix, Synthetic Summer (RIP). Finally, with Meme, I just wanted to make a straight-up film but use many of the processes we’d learned.

In reality, the goalposts mainly moved on the technological side. Over the two years since we first put out Infinite Diversity, our tools have evolved further and faster than we could have imagined, so we can do more than we initially thought. Creatively, too, the general knowledge base around AI is way larger now than it was in ’22 or ’23 so concepts and conversations that were relatively niche then are now totally mainstream issues.

The front-and-centre emergence of AI since OpenAI kicked off the current arms race has creatives across all fields concerned, with many decrying any use of it whatsoever. As the creators of a trilogy that has used those tools to tell a story which criticises not just AI but our frivolous use of it, what’s your view on the intersection of AI and creativity? Is this just another tool, as Photoshop, smartphones and the Internet were before it, or is it actually different this time?

I mean, with this film, we’re slightly having our cake and eating it, but that’s the issue, isn’t it? For creatives, it feels like we’re playing with matches: it’s exciting, but you don’t want to get burnt, as it’s hard to know which parts of the process you need to do by hand to get a better result. Fundamentally, and I can’t say this strongly enough, if you don’t know what you want to make, if you don’t have an intention behind using generative AI, you aren’t really making something for yourself. You’re making a tech demo for Meta or Bytedance or whoever. It’s not your creativity, it’s showcasing theirs.

If you use it with intent, then I think it is just another tool akin to Photoshop or Maya, which, let’s be clear, were also called reductive and uncreative when they were first on the scene. Time will tell if this is a tool that causes as much turbulence as the advent of the camera… but combined with other factors, like the creator economy being cooked, there may well be wholesale change. The elephant in the room, though, probably isn’t the debate around creativity; it’s fiscal. Technology becomes something entirely different if it threatens your livelihood. I’ve spent most of my career as an animator, a job which wouldn’t exist without prior existential technological changes, but I get it: people understandably don’t enjoy being told they need to adapt.

At Private Island, as a company that works with outside commissions, have you seen a shift in client presumptions since the recent explosion of AI? Have expectations about speed, scope and the difficulty (or presumed ease) of work changed?

Sort of. I think we’ll really see it this year. Agencies talk a big game about innovation, but in our experience, they’re also terrified of litigation. Five years ago, we were happily doing a bunch of this stuff commercially, from deepfakes to LLMs, and then everyone got freaked out. Now it’s slowly opening up again. We’re well positioned for what’s next and have some fun stuff in production right now, but we’re swerving the “let’s just do it with AI” projects in favor of those that are creatively interesting, whether in execution or visual aesthetic.

Fundamentally, since I’ve been working, year on year, expectations on speed and scope have always increased in a way that lowers production time, especially in the commercial realm. And, at some level, why not? I can do more on my laptop now than a post house could a decade ago. In fact, way back when, we set up PI to run on After Effects and C4D instead of Maya and Flame because it was faster and more cost effective. The issue, however, is where does it end? We still need time to think and collaborate, and we erode that at our peril. For commercial directors, so much of that brainstorming is front-loaded into the pitch process because the actual production time is minuscule, and I’m not convinced that makes the best work.

What has experimenting on this trilogy of films added to your toolkit and/or processes for future projects that won’t have AI as their subject matter?

Despite the whistles and bells used to make these shorts, the most important aspect for me has been the creative freedom to experiment and make stuff for ourselves. It’s something we’ve always done at PI, but not at this scale and not as a slightly longer-form narrative. You can get pigeonholed in commercials – that’s a common rant from directors – but I also think we do it to ourselves: we get comfortable and avoid risking failure because we feel we can’t afford to. We become risk-averse. Personally, making these films has made me (slightly) braver about wading deeper into those waters, about working with actors in a more freewheeling way, and about my own writing.

Essentially though, I think PI makes mixed media and we don’t like any hard distinction in the process between filming, animating or generating; it’s all sort of one melty thing, and there’s no reason why that needs to change with generative tech – Meme proves that in its technical execution. And, let’s be totally clear: if 2025 is THE YEAR OF AI, then 2026, or soon enough, will be THE YEAR OF ANALOGUE filmmaking, so we don’t want to run so far out ahead that we’re left stranded when the tide comes back in!

What will we see next from you and Private Island?

Well, I’m fortunate that Helen Power, who founded Private Island with me, and Áine O’Donnell, the producer of Meme, are up for exploring more narrative stuff and side quests for PI, which is definitely what I’m keen on doing too… but for right now, after working on this for a chunk of last year, we’re back at the day job of making wonky commercials for the next couple of months. After that, hopefully, we’re back on some longer-form projects I’m super excited about, both written by me and by some other talented folks. I’ve also been working with the writer Michael Lesslie on a feature script for a while, so fingers crossed we can creep that into production, too.
