Production Expert


Spatial Audio And Using Audiomovers Binaural Renderer For Apple Music

In this article, Grammy-nominated producer and mixer Nathaniel Reichman tests the new Binaural Renderer For Apple Music from Audiomovers which is designed to make it easy to hear what your mix will sound like when it’s on Apple Music or playing on an Apple TV. Discover the workaround Nathaniel had been using (and why) and what he thinks of Audiomovers’ new solution…

Background

I’ve spent a lot of time listening to the differences in sound between a Dolby Atmos music mix on loudspeakers, in headphones on Apple Music, or in headphones on TIDAL and Amazon.* Here are the ways we can listen to music as consumers (spoiler alert: they’re all markedly different from each other):

  • Conventional stereo mix on loudspeakers.

  • Conventional stereo mix on headphones.

  • Dolby Atmos DD+JOC mix on loudspeakers.

  • Dolby Atmos AC-4 binaural mix on headphones.

  • Apple Spatial Audio binaural mix on headphones.

  • Apple Personalized Spatial Audio binaural mix on headphones.

Loudspeakers

Dolby Atmos DD+JOC in 7.1.4 (or 9.1.6 and higher) is an outstanding demonstration of why the music industry needs Atmos. The 7.1.4 playback fills the room in a wonderful and varied way, while the stereo playback sounds two-dimensional, thin and constrained by comparison. Yes, DD+JOC is a lossy compressed format, but in the great majority of cases, the musical experience is more satisfying than lossless stereo. Note that the DD+JOC version sounds almost identical on both Apple Music and TIDAL. In fact, the 7.1.4 recorded output from both services cancels to nearly -30 dB in a null test in my DAW. Many consumers do have home theater systems, sound bars, and advanced car audio systems, so I expect that the Atmos loudspeaker experience will be heard. But the majority of immersive music listening is happening on headphones.
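A null test like the one described above is simple to reproduce: invert one capture, sum it with the other, and measure the residual peak. Here is a minimal sketch using synthetic stand-in signals (the `mix_a`/`mix_b` data is hypothetical; in practice you would load the recorded outputs of each service):

```python
import numpy as np

def null_test_db(a: np.ndarray, b: np.ndarray) -> float:
    """Invert one signal, sum with the other, and report the residual
    peak in dBFS. Identical signals null completely (-inf)."""
    residual = a - b  # a + (inverted b)
    peak = float(np.max(np.abs(residual)))
    return float("-inf") if peak == 0.0 else 20 * np.log10(peak)

# Hypothetical stand-ins for two service captures of the same mix,
# differing only by a small codec-level deviation:
rng = np.random.default_rng(0)
mix_a = rng.uniform(-0.5, 0.5, 48000)
mix_b = mix_a + rng.uniform(-0.008, 0.008, 48000)

print(f"Null depth: {null_test_db(mix_a, mix_b):.1f} dBFS")
```

The deeper (more negative) the null, the closer the two captures are; a perfect cancellation only happens when the two signals are sample-aligned, so in a real DAW test you would time-align the captures first.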

Headphones

Dolby engineering says that listening to an Atmos mix on headphones in the AC-4 format used by TIDAL and Amazon is nearly identical to the binaural re-render available in the professional Dolby Atmos Renderer software. This mix is affected by the individual OFF/NEAR/MID/FAR choices for both beds and objects that a mixer makes while working with the renderer. Many engineers working not just in music, but also in film, television, and podcasts are familiar with the binaural mix parameters here and the overall sound aesthetic of this re-render.**

Apple engineering describes Spatial Audio in broad terms as comprising all of the technologies Apple uses in software and hardware to create an audio experience. But for the purposes of this article, Spatial Audio refers to the way Apple’s devices adapt Dolby Atmos ADM files for headphone listening. A binaural (2-channel) mix is derived from a virtualization of a 7.1.4 listening room. Apple has control over this virtualization and has improved it in the last couple of years in response to the needs of top music mixers. It is this virtualization that allows Apple to do headtracking. As a consumer, I enjoy watching a dynamic movie late at night on my AirPods Max with headtracking. When I turn my head to reach for the popcorn, I perceive the soundtrack as coming from the television. It’s unobtrusive and clean. But in my own opinion, headtracking is not necessary for music. Fortunately, it’s easy to turn off on your iOS device. But this is the catch: Apple can’t provide headtracking while using the Dolby AC-4 codec, hence their need to develop Spatial Audio as a consistent alternative in the Apple ecosystem.

To create Spatial Audio, Apple plays the mix in a virtual 7.1.4 room. That playback is what we hear on headphones.
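Conceptually, that virtualization amounts to treating each of the (12) channels as a speaker at a fixed position in a room, filtering it through a pair of head-related impulse responses for that position, and summing everything down to two ears. A minimal sketch of the idea (the HRIR data here is hypothetical; Apple's actual room model and filters are not public):

```python
import numpy as np

def virtualize(channels: np.ndarray, hrirs: np.ndarray) -> np.ndarray:
    """Fold a multichannel speaker feed down to binaural.

    channels: shape (12, n) -- the 7.1.4 feed, one row per virtual speaker.
    hrirs:    shape (12, 2, m) -- a left/right impulse-response pair per
              speaker position (hypothetical, measured or modeled).
    Returns a (2, n + m - 1) binaural mix.
    """
    n, m = channels.shape[1], hrirs.shape[2]
    out = np.zeros((2, n + m - 1))
    for spk in range(channels.shape[0]):
        for ear in range(2):  # 0 = left, 1 = right
            out[ear] += np.convolve(channels[spk], hrirs[spk, ear])
    return out
```

Headtracking then becomes a matter of rotating the virtual speaker positions (i.e. swapping in different HRIR pairs) as the listener's head moves, which is why the virtual-room approach enables it.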

Workflows

Back in 2021, when Apple introduced immersive listening in Apple Music, I wrote the article If You Are Mixing Dolby Atmos For Apple Music - Read This Now, and described a tedious process of making an MP4, Airdropping it to an iOS device, and playing it back in the Files app on Apple headphones. This was a slow and difficult way to audition a mix-in-progress. Things improved considerably with an update to Logic that allowed mixers to send a 7.1.4 feed live into Logic, and get a 2-channel binaural Spatial Audio output in real-time. This is how I’ve been working for a long time now. Jeff Komar describes many ways to do this in his excellent Avid MTRX guide, and at the bottom of this article (see Logic workflow***) you can read about my routing solution using Metric Halo audio interfaces.

Many mixers, including myself, have been using a second computer running Logic Pro in order to hear Spatial Audio during a mix. This setup works, but it’s an expensive headache.

Because headphone Spatial Audio is derived from a (12)-channel 7.1.4 feed, the routing necessary to make Spatial Audio with a plug-in is complicated. Audiomovers does support this workflow, but it involves the use of either aggregate devices or their Omnibus routing software. Those options are not well-suited to my professional studio. But Audiomovers’ more recent and very clever innovation is an application that sits between the Dolby Atmos Renderer and your audio interface. The first (12) channels are a 100% bit-accurate pass-through of the feed that goes to your loudspeakers. This feed is tapped to create Spatial Audio. And two more pass-through pairs are available to the user as “Aux input 1” and “Aux input 2”. The obvious choice for these pairs is to route live re-renders of stereo and Dolby binaural. This is a breakthrough in ease-of-use for mixers. You can put on your headphones and in one app get the following listening options:

  • Apple Spatial Audio binaural

  • Apple Spatial Audio binaural with headtracking

  • Apple Spatial Audio binaural personalized

  • Apple Spatial Audio binaural personalized with headtracking

  • Dolby Atmos AC-4 binaural

  • Stereo

This setup is much simpler and has many technical benefits, especially for single-system Atmos mixing rigs.
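The pass-through layout described above is easy to picture as a 16-channel feed: the first (12) channels are the 7.1.4 bed that is both passed to the loudspeakers and tapped for Spatial Audio, and the remaining four channels carry the two stereo aux pairs. A sketch of that split (the exact channel layout is an assumption based on the app's description):

```python
import numpy as np

def split_renderer_feed(frames: np.ndarray):
    """Split a 16-channel interleaved feed, shape (n_samples, 16):
    channels 0-11  -> bit-accurate 7.1.4 pass-through (tapped for binaural)
    channels 12-13 -> "Aux input 1" (e.g. a live stereo re-render)
    channels 14-15 -> "Aux input 2" (e.g. a live Dolby binaural re-render)
    """
    bed_714 = frames[:, 0:12]
    aux1 = frames[:, 12:14]
    aux2 = frames[:, 14:16]
    return bed_714, aux1, aux2
```

Because the bed channels are only tapped, not processed, the loudspeaker feed is untouched whether or not you are listening to the binaural output.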

Admittedly, I could get some of these listening options using my second computer and Logic. However, now I get the whole list, including headtracking, and you can set the output to wireless headphones, including AirPods/AirPods Max. I found it quite liberating to mix in my studio on the wireless AirPods Max, and I was pleased to find out that the additional latency is not too bad. You wouldn’t want to record live instruments this way, but you can make mix moves in real time on your control surface.

There’s almost no downside to leaving “Binaural Renderer for Apple Music” running all the time in a mixing setup. At $79.99 USD, this is a no-brainer, especially if you have clients visiting who will inevitably ask, “Okay, it sounds super-cool on speakers, but how will most people hear it when they put on their headphones?” I’m currently mixing a TV show that will be on Apple TV+ soon, and I’m going to set up a headphone distribution box on the stage to give the clients an accurate headphone experience. And while you can use any pair of headphones to hear all of these options, only Apple headphones support headtracking.

This mix is playing from the renderer straight into Audiomovers’ “Binaural Renderer for Apple Music” app and to my AirPods Max headphones.

Deep Dive

When I first became aware of Audiomovers’ efforts in this area, I was skeptical that a small third-party company could exactly replicate the virtualization of Apple’s Spatial Audio. I inquired whether they were simply wrapping the Atmos AU plug-in we see in Logic. But Audiomovers engineering has told me that they are utilizing Apple’s SDKs, which, in addition to my empirical listening tests, has given me assurance that the “Binaural Renderer for Apple Music” is true to the consumer experience. Not only that, but as Apple continues to develop and refine Spatial Audio (yes, it does sound different than it did on day one), those updates will be automatically integrated into the sound of “Binaural Renderer for Apple Music,” depending on your version of macOS. No action on the part of Audiomovers is necessary for their app to keep up-to-date with Apple’s development.

There is a subtle wrinkle in all of this, though: the Apple Music experience puts the DD+JOC mix into the Spatial Audio virtualization, while Audiomovers is putting a full-res PCM mix into the Spatial Audio virtualization. I have yet to do listening tests to see how large this difference is. For the true propellerheads out there, you could play a DD+JOC MP4 with Dolby Reference Player into “Binaural Renderer for Apple Music” to get something closer to bit-perfect. But that’s not a real-time solution and I doubt it’s worth the effort during a mix.

“Movie” mode will be especially useful in post-production. Apple uses a different Spatial Audio algorithm for movies and television shows.

Film and TV Post Production

Listening to movies on my AirPods Max with an Apple TV, I heard subtle differences in the binaural space compared to what I was hearing in my studio with Logic Pro. I didn’t give this too much thought, and chalked it up to decisions that might have been made on the mix stage. But when Audiomovers gave me a demo of the software, I saw that Apple provides a different Spatial Audio algorithm specifically designed for movies. Toggling that switch in “Binaural Renderer for Apple Music” revealed what I had heard but not fully understood in my living room. This makes the Audiomovers app especially valuable for the post-production community. And to the right of that switch is an option for hearing the way Atmos is rendered on multi-speaker devices like MacBooks and MacBook Pros.

For those of you listening on 9.1.6 (or higher) systems, I recommend sending a dedicated 7.1.4 re-render to the Audiomovers app. That workflow is a bit more complicated, and is probably best solved with Dante.

Conclusion

At the beginning of the Atmos content explosion, many producers and engineers were frustrated by the many different ways our work would be heard. Andrew Scheps famously said not to mix for Spatial Audio since it was a moving target. Audiomovers’ new application tracks that target tightly, takes the guesswork out of the consumer listening ecosystem, and makes our jobs easier and more enjoyable. Big thumbs up from me. And if you’re wondering whether all of this is worth the effort, consider the fact that all new iPhones come out of the box with Dolby Atmos turned on by default. Apple is a luxury brand, and Atmos is a luxury experience.

*Apple Music, TIDAL and Amazon reveal my North-American centric consumer experience. Globally, there are many more music streaming platforms that support Dolby Atmos playback.

**Dolby also offers their version of Personalized Spatial Audio in the form of their Personalized Head Related Transfer Function (PHRTF), but since this is not available to consumers, it is not directly relevant to this article.

***Logic Workflow

To use Logic as a real-time Spatial Audio monitoring solution, take an existing 7.1.4 render and route it into (12) mono auxiliary inputs in Logic. Pan each input channel to its associated location (L, C… Ltf, Rtf, etc.) and send them to a master bus with the “Atmos” plug-in on it. Set the plug-in to Spatial Audio. The output will be a 2-channel binaural mix that will sound a little odd on loudspeakers, but will be identical to the Spatial Audio heard on Apple Music. This is the virtualization I referred to above. Many engineers are doing this with a second computer dedicated to Logic, and are using Dante or MADI to get the channels there and back. I believe some clever engineers are doing this with Loopback on the same computer. For what it’s worth, I’m taking advantage of a Metric Halo feature called “SCP” (Satellite Computer Port), which lets you use a second computer on a Metric Halo interface in addition to your primary computer. I mult my speaker outputs, send those to Logic on my old Mac Pro, and return the 2-channel Spatial Audio mix back as a separate monitor source in Metric Halo to my main Mac Studio computer.
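The routing above boils down to a fixed mapping from renderer output channel to Logic aux input and pan position. A small sketch of that mapping (the channel order shown is the common 7.1.4 convention, but it's an assumption; verify it against your renderer's output layout before patching):

```python
# Common 7.1.4 channel order (assumed -- confirm with your renderer):
SPEAKERS_714 = [
    "L", "R", "C", "LFE",
    "Lss", "Rss", "Lsr", "Rsr",   # side and rear surrounds
    "Ltf", "Rtf", "Ltr", "Rtr",   # top front and top rear
]

def aux_routing(first_aux=1):
    """Map each of the (12) renderer outputs to a Logic aux input
    number and the pan position that input should be set to."""
    return {first_aux + i: name for i, name in enumerate(SPEAKERS_714)}

for aux, position in aux_routing().items():
    print(f"Aux input {aux:2d} -> pan to {position}")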
