A transfixing amalgamation of dance and 3D depth data set to the sounds of electronic musician Frank Lead, Blossom Through from Arthur Valverde (who joined us last year with Statures of Gods) and Michael Gugger drops us into a strange netherworld in which a woman's virtual spirit attempts to cross over from another realm to reunite with her body. The film is a controlled mixture of live action footage and Xbox Kinect data, and I spoke to Michael about the pitfalls and possibilities of volumetric filmmaking and how he and Arthur worked together to create this distorted digital mirror dance piece.

Did the tech or the concept come first?

I would say the tech brought about the concept. Originally Arthur was the one to pitch and get the project. He wanted to use the DepthKit tech he had seen in my film The Performer and mix it with live action video, having these two characters, one virtual and one real, interact through movement and then fuse together.

Arthur brought me on before he submitted the pitch to make sure it was actually possible to do. When pitching to Frank Lead, Arthur put together a great treatment which broke down the film by each section of the music with imagery that acted like a storyboard. This kept us on track during production as well.

Given your individual specialities what was the directorial split of duties on the project?

It was very much a team effort, we didn’t really define things too much. We always consulted one another and took turns giving notes and trying things. Overall my main focus was figuring out the virtual aspect and most decisions I took were with that in mind. I think the two of us have different ways of working but that was never a problem. Collaborating with another director can be a very beneficial experience because I feel it allows you to reflect a lot more on how you work. I definitely learned quite a bit more about my strengths and weaknesses as a director.

How does the Kinect/DepthKit technology work?

The Xbox Kinect projects a pattern of infrared laser dots into the space through a lens. An adjacent sensor, dubbed the “Depth Camera”, then measures and records the distance to each of those points. DepthKit records all of this data in the form of thousands of images/depth maps. The software then processes that data and visualizes it as a 3D point cloud which you can move around using a virtual 3D camera. The software lets users set camera positions and create camera moves (as seen in the video). Essentially it is a volumetric capture tool that allows for a very unique look, and there are many parameters you can adjust in the software to achieve the look you're after.
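For readers curious what "processing depth maps into a point cloud" actually involves, here is a minimal illustrative sketch, not DepthKit's actual code, of how a single depth frame can be back-projected into 3D points using a standard pinhole camera model (the intrinsics and frame below are made-up, Kinect-ish values):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (in metres) into an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx        # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy        # pinhole model: Y = (v - cy) * Z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels where the sensor saw nothing

# Hypothetical 640x480 depth frame with rough, Kinect-like intrinsics
depth_frame = np.random.uniform(0.5, 4.0, size=(480, 640))
cloud = depth_to_point_cloud(depth_frame, fx=580.0, fy=580.0, cx=320.0, cy=240.0)
print(cloud.shape)   # about (307200, 3): one XYZ point per valid pixel
```

A virtual camera move like the ones in the film then amounts to re-rendering that cloud of points from a different viewpoint on each frame.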

Did the tech place any restrictions on the choreography developed for dancer Juliet Doherty?

Unfortunately, we weren’t able to bring on a choreographer but luckily we got to work with Juliet, who is a super talented, creative dancer and brought great energy to the whole thing. We had just one afternoon with Juliet to come up with the movement. This is where we added a bit more depth to the concept, answering questions like: who is this virtual being? What are its motives? Why are they fusing?

Arthur and I landed on “A woman’s virtual spirit is trapped in another realm and tries to reenter her human body.” This gave us all the motives and reasons for what was happening and how the choreography should play out. From there we just tried stuff out for each section of the music until we found something that felt right. It was actually quite a funny process since Arthur and I are definitely not dancers or choreographers.

The most challenging part was coming up with movement that would work with how we wanted to move the camera and how the real Juliet and the virtual Juliet would interact – especially for the first scene, which is roughly a one-minute take. We recorded our rehearsals and tested out the composition of the opening scene in After Effects, which helped us figure out if we needed to change things at the shoot.

The Kinect does have its restrictions, especially the size of the capture area, which is roughly 10ft x 10ft. You don’t want your subject at the far end of that though, because then you lose a lot of detail. We had Juliet perform the virtual choreography in place, as close to the sensor as possible while keeping her full body in frame.

For anyone who’s never attempted a Kinect film, could you tell us the practical ways in which the production needed to be structured to yield the best results with the DepthKit technology?

Just for the record, I am using an older version of DepthKit, so I cannot speak for the newest beta version which I know requires a lot more computing power.

In our case, all the ‘real footage’ was shot on another camera system which was operated by our DP Tomas Velasquez. I would then pop in with the DepthKit in between ‘real footage’ setups to capture the virtual scenes which corresponded to the live action scenes that were just shot. Our shot list was very important and constantly referenced to make sure we got all the coverage in real and virtual looks.

It was definitely important that we scheduled enough time to calibrate the software to the cameras; there are a lot of steps to that. In our case, I had to try multiple times until I got the whole calibration process right. For anyone interested in shooting a film using DepthKit, I would suggest you do a lot of testing and playing around with the technology beforehand. Here is a link to the calibration process.
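Michael doesn't break those steps down here, but as a rough, hypothetical sketch of the kind of work involved, the sample below shows a standard checkerboard-based intrinsic calibration of a single camera in Python/OpenCV. This is not DepthKit's actual workflow (which also aligns the HD camera to the depth sensor), and the pattern size, square size and folder of frames are all assumptions:

```python
import glob
import cv2
import numpy as np

pattern_size = (9, 6)       # inner corners of a hypothetical printed checkerboard
square_size = 0.025         # assumed side length of one square, in metres

# Ideal 3D corner positions on the board plane (z = 0)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calibration_frames/*.png"):   # hypothetical folder of stills
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

# Solve for the camera matrix (focal lengths, principal point) and lens distortion
ret, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print(camera_matrix)
```

Repeating this for each camera, and then solving for the offset between them, is what lets software like DepthKit line up the live action image with the depth data.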

It feels like having this mix of live action footage and data at your disposal would have opened up vast possibilities for visual experimentation during post. How did you rein that in and what guided your approach to the flow and structure of the final film?

Due to the experimental nature of the technology, the concept and most scenes really continued to develop through post. For example, all the colors, flares and much of the virtual camera movement were found in post. We began our post process by placing all the ‘real footage’ in the timeline so we had a structure and reference for how to compose and place all the virtual stuff. From there we kept building and experimenting. We would keep referencing back to the treatment to make sure we were sticking to the concept and not going too far down a rabbit hole of possibilities.

What will we see from you both next?

Michael: At the moment I have a few little projects in development with some local businesses here in Berlin. I am kind of exploring the space of portrait films to further practice my craft in narrative, branded content and commercial type work. I do see myself making a narrative short film with DepthKit in the near future though – I think that could be really cool.

Arthur: I’m working on a few projects at the moment, including a narrative short film. This experience was amazing and I really want to explore different formats and keep creating no matter the category. Let’s see which project will be next; I’m staying open to any opportunities that come my way.
