
My First Steps In Dolby Atmos - A Personal Journey

Making the leap into a new technology can be pretty daunting, especially when it’s just you and a computer in a garden studio without the benefit of colleagues to share learning with. Last year David Thomas decided to learn Dolby Atmos and this is his story of head scratching, googling and lightbulbs.

Back in April 2021, I was planning my summer's big 8-part audio drama series. The project is still awaiting its launch so I can't give more details at the moment, but I can tell you that it spans the whole of human history and treads beyond the bounds of this planet. So something I could get my teeth into, basically.

I’ve been making audio dramas using Pro Tools since 1994 and I’m always looking for a new spin to do my part in pushing the medium forward. One morning in early May 2021 I heard myself saying to the director “I think I can do this in Atmos”. As soon as the words were out of my mouth I knew there was no turning back. Everyone thought this was a great idea and, more importantly, as we’d been working together as a team for nearly a decade, I knew I’d have the support needed to allow me to experiment and hit a few dead ends along the way.

Recording On Location

We decided to record on location using a Neumann KU100 binaural head as our main dialogue mic, while I'd also gather atmospheres with a Rode NT-SF1 Ambisonic mic, both kindly loaned to me by The University of York. I won't go into the recording process here, but I'll summarise by saying that the audio acquisition phase produced mono, stereo, binaural, 5.0 and ambisonic assets, all gathered over three weeks recording a cast of 30 on hillsides, in derelict houses and in woodland, plus one final day in London's Air Edel studio 1 to capture about 10 scenes which needed to be background free.

Steep Learning Curve – From Stereo To Atmos In One Step

I've been working with stereo for almost 35 years so I was expecting a steep learning curve; I was coming to Atmos having never even used a surround panner before. To make things slightly trickier, most of the Atmos tutorials seem to be split between music workflows and quite complex post setups, and my need to produce a binaural headphone output seemed to fall between the two. Looking through the online video tutorials it soon became clear that the interface of Dolby Atmos Production Suite has changed a bit over the last few years, so finding videos covering recent versions of the software was, at that point, tricky. This has improved over the last year though.

After downloading the 30-day free trial of Dolby Atmos Production Suite (DAPS) I assumed that I should use the post-focussed workflow and audio routing/mapping, but in early 2021 most of the videos I found used an older send-and-return model to the renderer, which didn't match my software version and, in any case, had me a bit baffled.

I began to see the light when I found a video on YouTube from Maggie Tobin of the Women's Audio Mission. It was a very clear 50-minute tutorial on how to get Atmos working for music. I tried it and it worked, and even better I understood how it worked, which meant I could begin to make it my own.

When I paired this video with two video tutorials here on Production Expert, "Dolby Atmos - Setting Up Your Pro Tools Session And The Dolby Atmos Production Suite" and "Dolby Atmos - Using The Dolby Audio Bridge The Way Avid And Dolby Recommend", I began to have more confidence in my knowledge and felt I could commit to the project being in Atmos.

I hit an early snag trying to get timecode syncing to the renderer, but a bit of Googling revealed that the frame rate in my session was 25fps while Atmos was set to 24fps. Once that was fixed, I got going.


Which Workflow To Use - Music Or Post?

I began building test sessions, importing audio of various channel widths and getting to grips with how I could place audio in different parts of the 360° environment. For my purposes, I didn’t need a 10-channel speaker array as my programme output was designed solely for headphones, so I managed to get up and running with a minimal upfront cost. As a precaution, I did a couple of hours of dialogue test recording to see how binaural and ambisonic recordings would translate to speakers if needed, even though the brief for this project specified headphones only.

As you would expect the original binaural recordings were not great on speakers, quite mushy and phasey, but I was surprised to find that with other elements laid into the Atmos mix and a bit of EQ it wasn’t hard to get a mix that sounded acceptable on speakers. I certainly wouldn’t recommend using a binaural mic for loudspeaker output but I reckoned that if the client were to change their mind in post-production it wouldn’t be a disaster.

Over the course of the project, I gradually discovered that using Atmos to create binaural content in post-production, rather than recording with a binaural head, gives a mix which translates fairly transparently to loudspeakers, although the Dialogue/FX/Music balance tends to go slightly off. However, binaural recordings done the traditional way are incompatible with loudspeakers for anything other than very cursory listening.

I soon realised that although the ‘proper’ way to lay out a drama mix in Dolby Atmos was to use the post workflow, I could actually do everything I needed with the music workflow, which I found simpler, so I thought I’d start with that and see how I went.

My main question at this stage was: what's the difference between the two workflows, and why isn't there one unified method for creating an Atmos mix?

The default setup in the DAPS (Dolby Atmos Production Suite) renderer gives you one bed (shown above with a purple border), which uses 10 outputs, and 118 objects (shown as a blue circle around a grey dot when idle but connected, turning green when audio passes through), each using one output, giving the maximum of 128 outputs to the renderer. There are also another two which are only available for timecode.
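
Put in numbers, the default layout works out like this. A minimal Python sketch just to make the arithmetic visible; the exact numbering (the bed occupying the first ten paths, the timecode pair sitting after the 128 audio paths) is my assumption from my own sessions rather than anything taken from Dolby's documentation.

    # Minimal sketch of the default DAPS renderer path layout described above.
    # The exact numbering is my assumption, not from Dolby documentation.
    BED_CHANNELS = 10        # one bed (7.1.2, i.e. 10 channels), paths 1-10
    OBJECT_PATHS = 118       # objects, one path each, paths 11-128
    AUDIO_PATHS = BED_CHANNELS + OBJECT_PATHS   # 128, the maximum
    TIMECODE_PATHS = 2       # extra pair that carries timecode only, no audio

    print(f"Audio paths to the renderer: {AUDIO_PATHS}")                      # 128
    print(f"First object path: {BED_CHANNELS + 1}, last object path: {AUDIO_PATHS}")  # 11 and 128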

In the music workflow, the mapping to the renderer is carried out by the Dolby Atmos Music Panner plug-in. Towards the top left corner of the plug-in is a drop-down labelled "Object", allowing you to route a track directly to an object path in the renderer.

This allows a direct link between a track and an object, which is straightforward when you have a given number of instruments to place in a space.

As the name suggests the Dolby Atmos Music Panner also sends object positioning metadata to the renderer so you don’t need to use the built-in Pro Tools panners.

However, with the post workflow you're likely to have a less predictable number of tracks, and those tracks will almost certainly be routed via different busses to give submixes of differing channel widths for dialogue, FX, music etc, which will then collapse further into beds and objects before reaching the renderer.

Pro Tools I/O Setup screen showing the mapping to outputs.

The post-production workflow requires that you set up the mapping to the renderer in the I/O Setup before you start the project. This makes the routing and panning clearer when allocating tracks and submixes to beds and objects. You also gain the ability to automate a track so it can switch between a bed and an object at will, meaning you don't have to create copies of audio paths when you decide to add some 3D movement to a track.
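
If it helps, this is how I ended up picturing the two routing models side by side. It's purely a mental model sketched in Python; the track names, bus names and object numbers are invented for illustration and aren't anything Pro Tools or the renderer actually exposes.

    # Music workflow: the Dolby Atmos Music Panner on each track selects an
    # object path directly, so the mapping is simply track -> object path(s).
    music_workflow = {
        "Lead vocal": (11,),       # mono track, one object path
        "Synth pad":  (12, 13),    # stereo track, two object paths
        "Percussion": (14,),
    }

    # Post workflow: tracks feed submix busses of varying widths, and the I/O
    # Setup maps those busses onto beds and objects before the renderer.
    post_workflow = {
        "Dialogue tracks": {"bus": "DX 5.1",     "renderer": "Bed 1 (7.1.2)"},
        "Spot FX tracks":  {"bus": "FX objects", "renderer": "Objects 11-20"},
        "Music stems":     {"bus": "MX 7.1.2",   "renderer": "Bed 1 (7.1.2)"},
    }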

Using Surround Panners In Pro Tools

As I said earlier, I'd only worked in stereo before this experiment, so the whole idea of placing audio in a surround space was new to me. Here's a quick round-up of what I've learned about Pro Tools surround panners, with a little sketch of the rules after the list…

  • If you route a mono or stereo track to a bus with 4 channels or more then you get a surround panner icon where the pan pots usually are.

  • If you route a track with greater than 2 channels (i.e. LCR or above) then you don’t get surround panners. These are upmixed to fit within the wider path’s parameters.

  • You can either click and drag the red dot (or green if in automation read mode) to place the object…

  • …or, if you want finer control, you can click on the tiny fader icon next to the Output selector, which reveals a separate panning window. If you're routing mono or stereo to a standard surround path you can now move the object left-right/front-back, and if you're routing to a path with height channels (typically a 7.1.2 path) then you can also adjust the height of the sound.
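
Those rules boil down to something like the sketch below - my own rule of thumb written out in Python, not an official Avid specification.

    def panner_shown(source_channels: int, bus_channels: int) -> str:
        """Rough rule of thumb for which panner Pro Tools shows on a track,
        based on the observations above - not an official specification."""
        if bus_channels < 4:
            return "ordinary pan pots"
        if source_channels <= 2:
            # mono or stereo into a 4+ channel bus: surround panner appears,
            # with a height control if the bus has height channels (e.g. 7.1.2)
            return "surround panner"
        # LCR or wider sources get no surround panner; they're upmixed/mapped
        # to fit within the wider path instead
        return "no surround panner"

    print(panner_shown(1, 10))   # mono track to a 7.1.2 bus -> surround panner
    print(panner_shown(3, 10))   # LCR track to a 7.1.2 bus  -> no surround panner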

Extra Tools Needed

I soon realised that I needed some extra tools to get the most out of working in Atmos. 

For the Ambisonic recordings I needed the free "SoundField by Rode" plug-in. Again, I found that all the outdated tutorial videos showed the plug-in having an output width selector within the main plug-in window, but the newer releases actually have a different version of the plug-in for each output width (seen in the image above as 'Soundfield by Rode (1st Order Ambisonics)' followed by Mono, Stereo, Quad etc). I spent quite a lot of time trying to get that output width selector to display before I twigged what was going on.

I also downloaded the free trial of Liquidsonics Cinematic Rooms Professional (which I ended up buying). It's a very clean-sounding, Atmos-compatible reverb that manages to be impressive yet unobtrusive at the same time. I already had some 5.1 reverbs from Exponential Audio in my arsenal to give me some more surround reverb options too.

Free trials of Atmos plug-ins are really worth checking out for learning. For example, I experimented with Sound Particles Energy Panner but decided that although it was fun and impressive, it wasn’t what I needed for this particular project. But it wasn’t time wasted as I’ve learned what it’s good at and now know where to get it when needed.

Settings In The Dolby Binaural Renderer

One thing that you need to plan for if you're working specifically in binaural, rather than for speakers, is the allocation of binaural depth within the Dolby Atmos Binaural Render Mode setup. The renderer gives you four options: Off, Near, Mid and Far. These are designed to help place sounds in a specifically binaural space, i.e. they use Head Related Transfer Function (HRTF) processing to re-create how humans perceive objects in 3D space. I found that the implementation in Atmos presents mostly as a reverb, and as such it was unhelpful for creating positioning in outdoor spaces and had to be handled with care wherever it was used, as the binaural reverb often didn't match the space I'd put the actors in. It was more successful when creating indoor spaces, though.

It's also worth noting that the binaural settings for each object are set up with the session and cannot be automated. For example, if object 120 is set to ‘Far’ then you’re locked into that for the whole session. This means that if you get all enthusiastic and set up banks of objects to have all binaural placement options at your fingertips you’ll be committing a lot of resources early. My advice is to be frugal with your binaural object placement to start with and save some unallocated objects to give flexibility for unexpected additions later in the process.
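
In practice that meant roughing out an allocation plan before committing. Something like the outline below, where the object ranges and their uses are entirely hypothetical and only there to show the idea of holding spares back.

    # Hypothetical allocation plan: Binaural Render Mode (Off / Near / Mid / Far)
    # is fixed per object for the whole session, so decide up front and keep
    # a block of objects unallocated for late additions.
    binaural_plan = {
        "objects 11-40":  "Near",         # close dialogue
        "objects 41-60":  "Mid",          # spot FX around the listener
        "objects 61-70":  "Far",          # distant elements
        "objects 71-80":  "Off",          # anything that must stay untreated
        "objects 81-128": "unallocated",  # spares for whatever turns up later
    }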

What Will I Do Next Time?

I’ve now done two projects in Atmos resulting in about 5 hours of audio, the second of which, as I write this in July 2022, has just been broadcast by BBC Radio 3 under the title of ‘He Do The Waste Land in Different Voices’. We worked with the estate of T.S. Eliot to produce the first ever dramatisation of his famous poem ‘The Waste Land’. The Atmos treatment in this project was more subtle but the fact that it came on the heels of my first Atmos commission meant I was able to embed my learning and work fairly quickly to produce the finished project.

So, what will I do differently next time? Firstly, I think I’ve reached the limit of using the music workflow for post-production. It was a really great way to get me up and running and it gave me a good grounding for the future, but I now need to give myself the flexibility of more complex audio routing with more bussing of tracks before sending to objects. I’ll also use beds more and rely on objects less. I’ve been tending to route dialogue, foley and mono FX to objects and I see now that this isn’t necessarily a good idea. 118 objects sounds like loads but it’s amazing how quickly they get used up, especially if you’re creating stereo objects and using binaural distancing.
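
As a quick back-of-the-envelope example of why (the track counts here are invented, not taken from my sessions):

    # How quickly 118 object paths disappear once dialogue, foley and FX all
    # become objects - stereo objects cost two paths each.
    dialogue_objects = 30      # one mono object per principal character
    foley_objects    = 20
    mono_fx_objects  = 25
    stereo_fx_pairs  = 15      # each of these uses two object paths

    used = dialogue_objects + foley_objects + mono_fx_objects + (stereo_fx_pairs * 2)
    print(f"{used} of 118 object paths used, {118 - used} left")   # 105 used, 13 left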

I’m back to working on stereo projects for a while now but I always keep a test session on my desktop so that in an idle moment I can pop it open and try something new or check a workflow. I think the key is to keep coming back to it and adding little bits of learning on a regular basis so that when the call for another Atmos session comes, I’m ready.
