Final Project: 360 Video – The Southside
- by jschonbo
- December 11
- Larissa Urbiks and Julian Schonbohm
For our final project, we produced a 360 video about the Southside of Syracuse and incorporated spatial audio. Larissa concentrated on finding a protagonist, the interview, and the story, while Julian concentrated on the technical implementation. However, we consulted each other before every step of the project so that neither of us worked in a vacuum, and we did the editing together.
This blog post is divided between our two perspectives. Larissa begins with insights into the story and the work with our protagonist, Charlie.
The story takes place in a high-poverty neighborhood in Syracuse called the Southside. Before we met our protagonist and shot the scenes, I researched how the city came to such a high level of poverty. In that research, I identified the highway as a place that played an important role in this development, and I wanted to find other characteristic places that typify the poverty problem. Those would become the locations where we tell the story. Charlie helped us find these symbolic places in the Southside. Our story is a mix of those places, the interview with Charlie, and text overlays that give the viewer background.
Interviewing our protagonist was really hard. We had decided we didn’t want to appear in the video, so we had to hide while he was talking. Whenever we asked him a question, we went to hide somewhere and therefore couldn’t hear his answer. I asked him afterwards what he had talked about, but I couldn’t question him the way I would in a normal interview situation; I couldn’t react immediately. When I asked another question, I had to hide again, and the process repeated. I thought a lot about how to improve this but didn’t find a good solution. The best approach is probably a voice-over recorded separately, or a hiding place close enough that you can hear what the protagonist says and ask follow-up questions from there.
In the following, Julian shares his thoughts on the technical implementation, especially the work with the Ambisonics VR microphone.
Since we were already familiar with the cameras, handling them was no big deal. We used the Nikon KeyMission 360 for our project. During the shoots, Larissa operated the camera while I took care of the sound recording. For synchronization, we always used a clap at the beginning of each take.
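We lined the tracks up on the clap by hand in the edit, but the same alignment can be automated. Here is a minimal sketch, with hypothetical filenames, that cross-correlates the two recordings to find the offset of the clap transient:

```python
import numpy as np
import soundfile as sf
from scipy.signal import correlate

# Hypothetical filenames for one take's camera audio and recorder audio.
cam, sr = sf.read("camera_take01.wav")
mic, sr2 = sf.read("recorder_take01.wav")
assert sr == sr2, "both recordings must share one sample rate"

# Mix each file down to mono and look only at the first ten seconds,
# which is where the clap sits in our takes.
to_mono = lambda x: x.mean(axis=1) if x.ndim > 1 else x
a = to_mono(cam)[: 10 * sr]
b = to_mono(mic)[: 10 * sr]

# The peak of the cross-correlation gives the lag between the two claps.
lag = np.argmax(correlate(a, b, mode="full")) - (len(b) - 1)
print(f"shift the recorder track by {lag / sr:+.3f} s to line up with the camera")
```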
Using the microphone during recording was pretty easy; most of the issues came later, in the editing and conversion process. After setting up the digital recorder, we just had to figure out the best way to place the microphone.
We decided to place it below the camera, with the camera’s tripod and the mic stand as close together as possible. That way, we figured, the microphone would be the least noticeable.
We also placed the recorder on the ground between the three legs of the tripod, so that it would be hidden in the nadir as well. In a regular film production, the sound mixer usually has the opportunity to monitor the sound while recording. Obviously, this wasn’t possible for a 360 video, which is why I adjusted the levels before the actual recording and left a little extra headroom to avoid clipping the signal.
On the first day of recording, I forgot to set the media format to “poly mode,” which is why I ended up with four individual mono tracks for every take. Not that this was a serious problem; it just took me a little longer, since I had to combine the tracks into a single 4-channel .wav file. I used the digital audio workstation Reaper for this (see the sketch below). This could definitely have been avoided to save time.
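The same merge can also be scripted outside a DAW. A minimal sketch, assuming four equal-rate mono WAVs with hypothetical names:

```python
import numpy as np
import soundfile as sf

# Hypothetical filenames: the four mono capsule tracks of one take.
mono_files = ["take01_ch1.wav", "take01_ch2.wav",
              "take01_ch3.wav", "take01_ch4.wav"]

channels, rates = zip(*(sf.read(path) for path in mono_files))
assert len(set(rates)) == 1, "all four tracks must share one sample rate"

# Trim to the shortest track in case the recorder stopped them
# a few samples apart.
n = min(len(c) for c in channels)
poly = np.stack([c[:n] for c in channels], axis=1)  # shape: (samples, 4)

sf.write("take01_poly.wav", poly, rates[0])         # one 4-channel .wav
```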
Since the microphone delivers only the raw A-format, I still needed to convert the audio to B-format in order to get spatial audio in the final video on YouTube. I used the plugin provided by Sennheiser for this. Inside the plugin you can choose between two different formats, both specifications of B-format.
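Conceptually, the A-to-B conversion is a sum-and-difference matrix over the four tetrahedral capsule signals. The idealized first-order version looks roughly like this; the real plugin additionally applies capsule equalization and calibration, and the capsule order (front-left-up, front-right-down, back-left-down, back-right-up) is an assumption here, so treat it as a sketch only:

```python
import numpy as np
import soundfile as sf

# A-format: one channel per capsule, assumed order FLU, FRD, BLD, BRU.
a_format, sr = sf.read("take01_poly.wav")
flu, frd, bld, bru = a_format.T

w = flu + frd + bld + bru   # omnidirectional pressure
x = flu + frd - bld - bru   # front minus back
y = flu - frd + bld - bru   # left minus right
z = flu - frd - bld + bru   # up minus down

b_format = 0.5 * np.stack([w, x, y, z], axis=1)  # scale down to avoid clipping
sf.write("take01_bformat.wav", b_format, sr)
```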
YouTube requires the so-called AmbiX format, which uses a different channel order than the other so-called FuMa format. The plugin’s default setting is FuMa, which I once forgot to change. For this reason, I had to convert all the sound files again, which could also have been avoided.
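At first order, the difference between the two conventions comes down to channel order and the normalization of W: FuMa stores the channels as W, X, Y, Z with W attenuated by 3 dB, while AmbiX (ACN/SN3D) expects W, Y, Z, X at full level. A minimal sketch of the conversion, with a hypothetical filename:

```python
import numpy as np
import soundfile as sf

fuma, sr = sf.read("scene_fuma.wav")  # first-order B-format, FuMa: W, X, Y, Z
w, x, y, z = fuma.T

ambix = np.stack([
    w * np.sqrt(2),  # undo FuMa's -3 dB on W (SN3D normalization)
    y,               # AmbiX/ACN channel order is W, Y, Z, X
    z,
    x,
], axis=1)

sf.write("scene_ambix.wav", ambix, sr)
```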
The most challenging part, however, was importing and processing the 4-channel audio in Premiere. Since the whole topic of 3D audio is still pretty new, there isn’t much information out there. I’m glad I found a very good tutorial on YouTube that explains the process perfectly. It can also be found in my Independent Learning Research, so that future students can benefit from it.
In the end, I used the VR sequence presets that Premiere provides. With these, it was possible to import, process, and edit the 4-channel audio.
When exporting, I also forgot to remove a plugin from the master channel, which rendered the audio unusable. The “Binauralizer - Ambisonics” plugin exists only to preview how the mix will sound in stereo later; since it is for monitoring only, it has to be removed before exporting. This cost us time as well, since we had to render the whole video again.
Despite the couple of setbacks we had, I’m glad that we finally got the video to work, and the spatial audio works perfectly on YouTube. I should add that the spatial audio works only on YouTube, not in the GoPro VR Player. This is because the audio was exported according to YouTube’s requirements (AmbiX), and the GoPro VR Player doesn’t support spatial audio before version 3.0; the one installed in the lab is version 2.3.1.
One last thing about the sound: there is still some wind noise, which unfortunately can’t be avoided entirely when shooting outside in Syracuse. The provided windscreen helped reduce it but couldn’t keep the microphone from capturing it altogether. I added a low-cut filter in post-production to remove the bassy part of the noise and make it less noticeable.
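I applied that filter inside the editing software, but the same low-cut can be sketched in a few lines, assuming a hypothetical 80 Hz cutoff (wind rumble sits mostly below about 100 Hz):

```python
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, sr = sf.read("scene_ambix.wav")  # hypothetical filename

# 4th-order Butterworth high-pass ("low-cut") at 80 Hz.
sos = butter(4, 80, btype="highpass", fs=sr, output="sos")
filtered = sosfiltfilt(sos, audio, axis=0)  # zero-phase, per channel

sf.write("scene_ambix_lowcut.wav", filtered, sr)
```

Filtering all four channels identically keeps the Ambisonics channel relationships, and therefore the spatial image, intact.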
For the titles that give a short introduction before each scene, I wanted to add some music to make them more dramatic. I used the digital audio workstation Ableton Live for this, along with a couple of different synthesizers and plugins.