Anxiety around A.I. is running high at the moment, and that’s understandable given the way it can be seen to take existing art and generate something entirely new. It raises questions about what the future of being an artist means when there is software out there that can potentially, for argument’s sake, create original work. Director Patrick Hanser, AKA the musical artist Bacará, has created a music video for his latest single Cores (Colours) using A.I. software, but rather than striving for an artificially created realism he uses it to blend classic styles, drawing from the work of everyone from Kandinsky to Picasso to create a kaleidoscopic smorgasbord of artistic sensibilities. DN caught up with Hanser to discuss the creation of his video, the lengthy experimentation period he needed to work through, and his thoughts on the future of art and A.I.
How did this video come about, both as a filmmaking project and for you as a musician?
I’ve worked as a director for about six years now and I’ve always been in awe of the power music has to make your audience feel something in a scene. When collaborating with composers for my soundtracks, I would always try to get close to them to understand the process better because, honestly, I was just really fascinated by it. I never thought I would actually get into music because I didn’t play any instruments and felt I didn’t have the talent for it. But during the early days of the pandemic, I started messing around with an old guitar I had at home which I had never really played, and immediately a lot of songs started coming out, even though I didn’t really know how to play. So I bought a microphone, plugged it into GarageBand, mic’d up the little amp I had and started putting riffs into the software, which eventually became my musical project called Bacará.
It was all super casual, and after a couple of years of messing around with this little side project, I realized I had composed and produced about 80 demos. It just naturally occurred to me that I should release something, and Cores was the first song I had composed in my native language, Portuguese, as I’m Brazilian. I talked with a music producer friend of mine, Luigi Sucena, and he helped me realize my vision musically, expanding my demos and making me sound way better. I then decided to release the song, which is actually the first single off my debut album, Insects + Critters, which will come out at the end of the year.
And when you were conceiving an idea for a music video for Cores, what spurred you to use A.I. as a means of creation?
As soon as I realized I was actually going to release the music, my head started spinning with ideas for music videos and the visual side of things, obviously. From the start, I knew Bacará would be an opportunity for me to experiment both musically and visually, and kind of blur the line between my work as a director and as a musician. I like to use the analogy of music in the context of film and vice versa because the creative processes are so similar in my mind. When I was creating the music, I saw myself less as a composer, and more as a director, making creative decisions that would lead to an emotional outcome in whoever was listening, which at first was just myself.
I was challenging myself to use A.I. as a creative tool to reach an aesthetic I wanted.
Cores means colors in Portuguese, so I knew I had to make something that was vivid, energetic and kaleidoscopic. At the time, I was reading and watching a lot of stuff about A.I. and was just totally fascinated by the possibilities. I was really impressed by the first rounds of Midjourney and Stable Diffusion, and then it just clicked in my mind I should experiment and do something really colorful with A.I.
How did you find the process of creating the video through A.I.?
As much as I loved the potential of A.I., something about some of the images being created bothered me. It’s hard to put into words, but I feel like there’s an ‘A.I. texture’ that has kind of become ingrained in our collective unconscious and which I don’t really like. I feel like whenever you increase the level of detail or resolution in an A.I. image, it starts creating a lot of unnecessary information, these little artifacts that really bother me, and it just becomes clear it was made with A.I. Also, photorealistic A.I. doesn’t interest me, and I see a lot of what’s been done has gone towards that aesthetic of trying to replicate reality. I wanted to go the other way. So I went for a more classic design approach built on solid shapes, reminiscent of the works of Saul Bass, Kandinsky or Braque. I wanted to make something that could fool someone into thinking it was a more traditional type of animation, like the rotoscoping technique used in Richard Linklater’s Waking Life.
I was challenging myself to use A.I. as a creative tool to reach an aesthetic I wanted, and not the other way around, where I would just be hostage to its whims and its aesthetic output. At first, I didn’t know if it would work, and it took a lot of experimenting with prompts and image influence strengths to get the right images. I went through a lot of different aesthetic routes before landing on the one we see in the finished product. During the initial test rounds, I also noticed that my eyes kept disappearing in the A.I. images. That’s why I decided to wear the Kurt Cobain glasses, because they just pop out against messy backgrounds. And I also love Kurt and Nirvana, so it was a way to pay my respects.
How did you approach capturing the images that would be the backbone of the video?
Basically, the music video for Cores had to work as a normal video before ‘translating’ it to A.I. Since Bacará had not been born yet, no one knew about it and I didn’t have much money to invest in a music video. So I called my brother Brian, who’s 17, to film me playing the guitar in my backyard so I could run some tests. Honestly, I didn’t even think I would use those images, so we just filmed without really having a plan, but Brian ended up getting some really cool shots and camera movements and I decided to just use those. The rest of the music video is almost like a video diary of my life over the past six months: I went to the aquarium and got some images there. Went to a cool, colorful show and just started spinning my phone around to get some shots, even though the people around me probably thought I was crazy. Most of it was actually done on my phone, since I knew I would be filtering everything with A.I. later on. I just needed the basic image, the perspectives and the shapes. This allowed me to do some crazy twirling shots just by running around spinning my phone, whereas it’d be practically impossible, or at least super expensive, to do so with conventional film cameras.
I was challenging myself to use A.I. as a creative tool to reach an aesthetic I wanted, and not the other way around, where I would just be hostage to its whims and its aesthetic output.
There are also some film references in there, such as the opening, in which I got the ‘light tunnel’ sequence from 2001: A Space Odyssey, sped it up, and then did the A.I. treatment. Honestly, I don’t even know how this would work in terms of image rights, but I love the film so much I decided to keep it in, especially because my objective with Bacará is to unite the film and music worlds in a way. So I edited all those images together to create something dynamic, then made it 12FPS instead of 24FPS, exported all of the frames, and started the work of individually treating each frame to achieve the result I wanted. In total, there are about 2,800 frames in the music video, but I actually created more than 10,000 to get to the final result. That’s why it took me six months. The cover for the single was also done with A.I.
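For the technically curious, that export step, dropping the finished edit from 24FPS to 12FPS and dumping every frame as a still, might look something like this minimal sketch using ffmpeg (the file names here are assumptions, not from the production itself):

```python
import os
import subprocess

# Make sure the output folder exists before ffmpeg writes into it.
os.makedirs("frames", exist_ok=True)

# Resample the edited cut to 12 frames per second and export every frame
# as a numbered PNG, roughly 2,800 stills for a video of this length.
subprocess.run([
    "ffmpeg", "-i", "cores_edit.mp4",  # the edited live-action cut (assumed name)
    "-vf", "fps=12",                   # drop from 24FPS to 12FPS
    "frames/frame_%05d.png",           # one PNG per frame
], check=True)
```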
Are you able to break down how exactly you created these images through the use of A.I.?
I’ll go through the process of crafting a single frame. After filming and editing, I would do a color treatment over the original image because I found out in some tests that if you create an A.I. image of something without a lot of saturation, it’s a lot harder to color it later because there’s simply no information there. So color correction was actually the first thing I had to do. After coloring and exporting the frame, I would upload it to Playground A.I., which was the software I used. It has this really cool feature where you can upload an image as a reference for the final product you want and decide the influence level it will have on the final image. So if you set it to 100%, it would basically just spit the same image back out. And if you left it at 1%, it would create something super abstract. On top of that, there was the actual prompt, which is where you decide the style of the image. So my prompts ranged from “colorful guitar painting in the style of Kandinsky” to “man holding guitar painting in the style of Picasso” and things like that. I decided to only prompt dead artists’ styles, partly for aesthetic reasons: I really like the style of Kandinsky specifically and thought it lent itself really well to the song. But what’s really cool is that, sometimes, I would keep a prompt like “guitar painting Kandinsky” but the frame would be of my head. So it would try to find guitars in my mouth, my nose, my hair, creating this really interesting effect.
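Hanser worked through Playground A.I.’s interface rather than code, but the same image-to-image idea can be sketched with Stable Diffusion, which he mentions being impressed by, via Hugging Face’s diffusers library. One caveat: diffusers’ strength parameter runs opposite to Playground’s image influence, with higher values meaning more repainting rather than more of the reference. The model choice and file paths below are assumptions.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load an open-source Stable Diffusion image-to-image pipeline.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The color-corrected video frame acts as the reference image.
reference = Image.open("frames/frame_01142.png").convert("RGB")

# Playground's 'image strength' is how much of the reference survives;
# diffusers' `strength` is roughly the inverse (how much gets repainted),
# so a 34% image strength corresponds to a strength of about 0.66 here.
result = pipe(
    prompt="colorful guitar painting in the style of Kandinsky",
    image=reference,
    strength=0.66,
    guidance_scale=7.5,
).images[0]
result.save("styled_frame_01142.png")
```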
Another interesting thing I found out is that you could ‘ramp’ the image strength percentage in certain places, as if it were a keyframe, to create the effect of ‘losing control’ of the image. So, for example, I would start a sequence at frame 1142, where I had realized that 34% was the image strength that best balanced the style I wanted against the legibility of the basic shapes of the image. But at frame 1153, I knew there was a guitar part that kind of buzzed, like a vibrato. So what I would do is create ramps in the image strength, so that when I got to frame 1153 I would be at only 13% image strength, creating something super abstract, before quickly going back to 34%. And when you put it together, it creates this super cool effect, as if the guitar riff was influencing the image, making it more abstract for an instant before it goes back to its normal self. And it’s all so fast that your brain is still able to follow it, and it just works in a synesthetic kind of way because the music is literally influencing the image. I didn’t know any of this before starting, but while creating the sequences I realized a lot of the creative animation decisions came down to this: simple manipulations of the parameters the A.I. software was already giving me, used with an animator’s mindset, which would be equivalent to the squash and stretch principle in animation.
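To make that ramping concrete, here is a minimal sketch assuming simple linear interpolation between hand-placed keyframes. Only frames 1142 and 1153 and the 34%/13% values come from the interview; the helper function and the return point at frame 1158 are illustrative.

```python
def image_strength(frame, keyframes):
    """Linearly interpolate image strength between (frame, strength) keyframes."""
    keyframes = sorted(keyframes)
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    for (f0, s0), (f1, s1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return s0 + t * (s1 - s0)
    return keyframes[-1][1]

# Hold at 34%, dip to 13% on the buzzing guitar note at frame 1153 so the
# image goes abstract for an instant, then ramp back up to 34%.
keys = [(1142, 0.34), (1153, 0.13), (1158, 0.34)]
for f in range(1142, 1159):
    print(f, round(image_strength(f, keys), 3))
```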
I guess in this context it would be squashing and stretching the amount of influence the reference frame has over the final frame. Each frame took about one to two minutes to be created and downloaded. I would usually create two or three images simultaneously for each frame at slightly different image strengths and prompts, to decide which one was best. And I guess that was my job as a director during this project: just choosing which frame I liked better. I literally chose every frame of the entire music video, because for each frame there were at least three options, if not more whenever I didn’t like the initial three that came out, which happened a lot of the time. After finishing a shot, usually between 200 and 400 frames and about two to three days of work, I would put the frames back into Premiere Pro and see how it looked. And this was the scary part, because sometimes I would absolutely love the individual frames I was getting, but as soon as I saw them play back in motion, it would be completely incomprehensible or just plain bad. So a lot of the process was going back and reanimating with different prompts and image strength percentages to reach the perfect balance between style and clarity of shapes, so your brain can follow what’s going on.
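Put together, the per-frame candidate step he describes could be sketched like this, assuming a hypothetical generate() helper that wraps the image-to-image call above and returns the path of the saved output:

```python
def generate(frame_path: str, prompt: str, strength: float) -> str:
    ...  # img2img call as sketched earlier; saves and returns the new file path

# Render a few takes of the same frame at slightly different image strengths
# (values here in Playground-style 'image strength' terms, purely illustrative),
# then the 'directing' is just picking the best of the candidates by eye,
# or rerunning with a tweaked prompt when none of them hold up in motion.
frame = "frames/frame_01142.png"
prompt = "guitar painting Kandinsky"
candidates = [generate(frame, prompt, s) for s in (0.31, 0.34, 0.37)]
```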
It seems like making the video became a real learning process about the potential of working this way. Was there anything else you learnt or stumbled upon along the way?
Another super cool thing I realized was that long shots work better than quick cuts to convey speed. The first cut of the music video in live action was super dynamic, with a lot of cuts in quick succession and stuff like that. And when I tried animating that, it was simply incomprehensible. You sometimes wouldn’t even feel the cuts, because there was so much abstraction going on. So I made another cut of the video which just stayed on longer shots, and that worked so much better. I think Picasso was the one who said that going back to your childhood interests and sensibilities was one of your jobs as an artist, or something similar to that. So I always make it a point to show things I’m working on to my little cousins and nephews, so I can get an unadulterated opinion. When I showed part of the music video to my little cousin, who’s eight years old, his first reaction was to keep his eyes glued to the screen and say, “Wow, this is so fast”. That was when I knew I was going in the right direction. Even though there are barely any cuts (there are only fourteen shots in the entire video), it just gives this perception of speed because of the animation and, ironically, because it’s 12FPS and not 24FPS.
Once something can be done more easily and more perfectly through innovation, how can you push the traditional medium towards something that has never been done before?
So in this case, slowing down the frame rate and holding the shots for longer just makes it seem like it’s moving at blistering speed, because your brain is trying to figure out the shapes and colors and the perspective. If you cut too much, the audience wouldn’t even be able to understand what’s going on. And as soon as I noticed that in my first round of tests, I knew I had to focus on fewer, more memorable shots that would give the audience the opportunity to ‘figure out’ the image, just like a puzzle. And when I watched my little cousin looking at the image, I saw this expression on his face where he was really into it, trying to figure out what he was seeing. I think half the fun is just trying to figure out what you’re watching and realizing how your brain interprets movement and colors and shapes, which is perfect because that’s exactly what I was trying to convey with the song.
Having now worked with it intensely, what are your thoughts on the future of A.I. and its relationship with culture in general?
I think change is always scary, and I admit that even though I was fascinated by the possibilities of A.I., I was also frightened by its potential. But innovation is inevitable, and I’d rather embrace the change and try to use it as a tool than look at it as something we have to push back against. I always think of what an art professor once told me: when photography was invented, a lot of painters felt they would be out of a job, since now all you had to do was ‘click’ and you’d have a perfect image. They treated photography almost like ‘cheating’, even though now no one would say photography isn’t an art form in itself. So now that reality could be captured, what would painters do? He went on to say he didn’t think it was a coincidence that the impressionist movement came about right when photography was invented, and it blew my mind. I feel these two events are connected because once something can be done more easily and more perfectly through innovation, how can you push the traditional medium towards something that has never been done before?
I think half the fun is just trying to figure out what you’re watching and realizing how your brain interprets movement and colors and shapes.
I feel something similar is happening with A.I. because it’s made it possible for people like me, who don’t exactly have a background in animation or illustration, to make something like the music video for Cores. It has taken down a technical barrier. The question is, now that the barrier is down, what’s going to come out of it? Because if anyone can do it, what’s going to truly matter? In my opinion, the only thing left will be ideas and storytelling, connecting emotionally to whoever is watching. All else is decoration.
Much of the anxiety around A.I. stems from a fear of it replacing artists. Do you think there’s any chance of that at all?
You got me there. I’m a screenwriter as well, and the possibilities of A.I. storytelling in scripts, for example, are pretty scary, because I’ve seen some really interesting stuff come out of things like ChatGPT. So honestly, we might be celebrating our doom here. I tend to be optimistic and see this as an opportunity to free us from technical barriers and let us focus solely on ideas. But I do see the other side of the coin. People way smarter than me have already warned us of the threat of A.I. reaching superintelligence and, eventually, the singularity. I think we have to be cautious.
And, lastly, what’s coming up for you both in your filmmaking and your music?
On the music side of things, I’m gearing up for the release of my first full-length album, Insects + Critters, which will be accompanied by more music videos and also live shows. On the film end of things, I have a project in development called The Falls, an apocalyptic coming-of-age film in which teenagers are falling in love and doing teenage things while people start falling from the sky without explanation.