Brief Summary
Are we at a point where, because of mobile devices, streaming and changes in lifestyle, headphones are the most popular way of listening to music? If so, why do we still treat reproduction over loudspeakers as the definitive version of a mix and headphone playback as secondary?
Going Deeper
If headphones aren’t yet the principal way music is consumed globally, then we can’t be far from that point. With mobile devices and streaming services as dominant as they are, a great deal of music is listened to by people on their own, whether on the move or in the company of others engaged in separate activities. While I lack definitive data on this, I can see how it might be that today as much music is heard through headphones as through speakers, and in some demographics possibly more.
To take the example of Apple’s AirPods, over 150 million pairs have been sold globally and an estimated 100 million pairs are in use daily. Total revenue from AirPods sales is estimated at over $12 billion. The sales data makes it clear that smart speakers also sell in huge numbers, but as we’ve commented on the site before, many smart speakers prioritise convenience over audio quality, with many summing at least the bottom end to mono.
The Difference Between Listening and Hearing
If we further subdivide time spent exposed to music into passive hearing and active listening, I’d say that having some tunes on over the smart speakers with the family doesn’t represent engaging with the music in the same way as an hour’s walk with earbuds in the morning. I do sit and listen to music on my studio monitors for pleasure, but not as much as I listen on my phone, and I’m an audio professional with a dedicated listening space. If I’m listening more on headphones than on speakers, that doesn’t bode well for consumers.
Stereo Reproduction
Then of course there is the question of what kind of experience the domestic speaker offers. We’ve discussed before how the convenience of smart speakers comes with compromises in audio quality: even if the reproduction is full bandwidth and relatively flat, it’s very unlikely that stereo imaging will be represented meaningfully. Many smart speakers have mono bass/mid drivers with stereo tweeters, and my pair of kitchen smart speakers is set up to run in dual mono. I could run them as a left/right pair, but they are placed to distribute music around a long kitchen; in stereo you would be hearing left or right but never both.
Stereo as presented over headphones also has issues. In our article Not Many Audio Professionals Know This Fact About Mixing On Headphones we discuss the importance of crosstalk between channels and how its absence from headphone playback affects the experience.
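To make the crosstalk idea concrete: on speakers each ear hears both channels, with the opposite channel arriving slightly later and quieter; on headphones each ear hears only its own channel. A crossfeed filter restores an approximation of that interaural bleed. Here is a minimal sketch in Python; the gain and delay values are illustrative assumptions, not a calibrated model.

```python
# Minimal crossfeed sketch. On speakers, each ear also hears the
# opposite channel, attenuated and delayed by the path around the head.
# Headphones lack this crosstalk; crossfeed blends a delayed,
# attenuated copy of each channel into the other to approximate it.
# These parameter values are illustrative, not a calibrated model.

SAMPLE_RATE = 44_100
CROSS_GAIN = 0.3          # opposite-channel level (roughly -10 dB)
CROSS_DELAY_SAMPLES = 13  # ~0.3 ms interaural time difference at 44.1 kHz

def crossfeed(left, right):
    """Return a new (left, right) pair with simple crossfeed applied."""
    out_l, out_r = [], []
    for i in range(len(left)):
        j = i - CROSS_DELAY_SAMPLES
        bleed_from_right = right[j] * CROSS_GAIN if j >= 0 else 0.0
        bleed_from_left = left[j] * CROSS_GAIN if j >= 0 else 0.0
        out_l.append(left[i] + bleed_from_right)
        out_r.append(right[i] + bleed_from_left)
    return out_l, out_r
```

A real implementation would also low-pass filter the bled signal, since the head shadows high frequencies far more than lows, but the delay-and-attenuate core is the essence of what headphone playback omits.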
Listening In Cars
Then there is the car. For those of us who still commute, that is valuable listening time; when I was still commuting it represented the majority of my weekly listening. Without the ‘forced idleness’ of driving, and the suitability of music for filling it, my total hours of quality listening time would be much reduced. Cars are significant.
The other significant thing about cars is that they represent a controlled environment: a space in which an audio system can be installed where the designer knows the position of the listener and the acoustic properties of the environment, much like a studio control room. Actually, it’s probably more accurate to say it’s like a movie theatre, as there are multiple listeners to cover and the space isn’t designed exclusively for listening. An automobile interior is a long way from an ideal acoustic environment, but unlike a domestic setting, where almost anything can happen to compromise reproduction (I recall seeing one speaker of a stereo system on the floor in the living room and the other placed in the kitchen…), the car offers a potential for predictable playback quality, and that is what makes it attractive for audio. Companies like Harman have invested heavily in automotive systems, and at a time when so many cars are so similar, partnerships with quality audio brands are an additional selling point. Indeed, one of the few places a well-heeled consumer is likely to have access to a multi-speaker Dolby Atmos install is in their car. Dolby certainly have high hopes for in-car Dolby Atmos; check out the video below.
Speakers Vs Rooms
Bruno Putzeys, the designer of the Kii Three monitoring system, said that it is relatively easy to make a speaker which measures flat in an anechoic chamber; what happens to that speaker when it gets out into the world is another matter. We don’t hear our speakers, we hear the combination of our speakers and the environment they (and we) are in. If you can control the effect of the room on the speaker’s output, you have more predictable, more controllable and ultimately better reproduction. Hence our preoccupation with acoustic treatment and speaker calibration.
However, if I’m right about my opening premise and the majority of content is, or soon will be, consumed via headphones, why are we resistant to using them as the definitive reference for what our mixes will sound like out in the world? Apple may not hold a majority of the smartphone market, but they are the biggest single manufacturer, so if you want to check a mix on the single playback device that represents the largest share of your audience, a pair of AirPods Pro would probably be that format. I can’t say I’m comfortable with that idea, but the reasoning seems to stand up enough to at least ask the question. Then there is the matter of ‘translation’, which, while we all know what it means, is more elusive when you try to pin it down. Speakers certainly don’t seem to be the best way to achieve translation, though: if we’re muting our multi-thousand-pound monitoring systems to run out to the car to check mixes, then something isn’t working…
Making Headphones Sound More Like Speakers
A great deal of work is being put into making headphones sound more like speakers, with mix-room-modelling headphone systems courtesy of Waves NX and Steven Slate VSX, or software-only modelling with tools like Acustica Sienna. But would we actually be better served by making headphone response more consistent between models and listeners? The Adaptive EQ used by the AirPods Pro and AirPods Max to calibrate the headphone’s response to the listener hints at the tech which could be implemented to personally calibrate headphones. Combined with the photogrammetry techniques used by brands like Dolby and Genelec to gather data on listeners’ physiology for personalised HRTF (PHRTF) creation, there seems to be much that could be done to refine responses and make the inherently controllable headphone even more consistent.
I’m following an argument here. I don’t like the idea, or the reality, of listening on headphones. But if I’m right that mixes will be heard on headphones more than on speakers, it seems sensible that they should principally be mixed on headphones. Doesn’t it?
Listen to our podcast with dialogue editor Korey Pereira and re-recording mixer Reid Caulfield for a post-production perspective on headphones.
Has your use of headphones shifted over the years, and have new developments in headphone technology had an impact on this? Share your thoughts in the comments.
A Word About This Article
As the Experts team considered how we could better help the community, we recognised that some of you are time-poor and don’t always have the time to read a long article or watch a long video. In 2023 we are going to try out articles that put the fast takeaway right at the start, with an opportunity to go deeper if you wish. Let us know if you like this idea in the comments.