Why Louder Sounds Better

All too often we hear complaints that the sound is too loud at concerts. Further, widespread enactment of sound control regulations often requires concert sound engineers to limit SPLs to mandated levels. However, despite regulations and a growing awareness on the part of concert engineers that high SPLs are dangerous, few seem able to turn it down. Though concerts are louder than ever-and many in the music business suffer some hearing loss-there seem to be hidden forces at work that encourage engineers to turn up the volume. We are painting ourselves into a corner.

This is not a trivial problem. Results of the free screenings offered annually at the AES convention by the House Ear Institute suggest that hearing loss accumulates in our industry like black T-shirts. The averages by age show several dB of loss above 2 kHz for each ten years of age, a process that accelerates with time. For those in their 40s, the mean threshold of hearing measured was reduced by 12 dB or more above 2.5 kHz. Pushing down all the sliders on the right side of a graphic EQ simulates the hearing of the typical middle-aged audio professional.

After four hours at 95 dBA in one day, a listener’s hearing is considered at risk. Under somewhat lax OSHA guidelines, exposure time should be cut in half for each 5 dB increase, so for levels averaging 100 dB, anything over two hours is considered dangerous. The European formula is even stricter, halving permissible exposure for each additional 3 dB. Clearly, we are able to hurt ourselves, our clients and their audiences with today’s concert sound levels.
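
The halving rule above reduces to a one-line formula. Here is a minimal sketch in Python, assuming an 8-hour allowance at the criterion level (OSHA uses a 90 dBA criterion with a 5 dB exchange rate; the stricter European-style formula halves per 3 dB):

```python
def permissible_hours(level_db, criterion_db, exchange_db, base_hours=8.0):
    """Permissible daily exposure: halves for each `exchange_db` of level
    above the `criterion_db` reference."""
    return base_hours / 2 ** ((level_db - criterion_db) / exchange_db)

# OSHA-style: 90 dBA criterion, 5 dB exchange rate
print(permissible_hours(95, 90, 5))   # 4.0 hours
print(permissible_hours(100, 90, 5))  # 2.0 hours
# Same level under a 3 dB exchange rate: well under an hour
print(permissible_hours(100, 90, 3))  # ~0.79 hours
```

The exchange rate matters far more than it looks: at 100 dBA the two formulas disagree by a factor of two and a half.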

There are many political and career forces that encourage engineers to turn it up-the guitarist’s girlfriend and the band’s manager spring to mind. Plus, the physiological and emotional impact of loud sound simply gets everyone’s heart beating faster. Bad venue acoustics or a terrible mix position often tempt a mixer to turn it up (not always a successful tactic). But there are also subtle mechanisms of human audio perception that tend to make the console’s faders “upwards sticky” and encourage higher concert levels.

The ear is not a linear device-its response varies with frequency. Hearing sensitivity peaks in the high-mids and falls off at the extremes, and the hearing curve also changes with volume, becoming slightly flatter at higher SPLs. In order to maintain a perceived balance between highs and lows (and mids and low-mids, and so on), a “flat” playback system may need to be EQ’d differently for different levels of reproduction. The “loudness” control on your stereo attempts to correct this problem by applying a progressive EQ that compensates for the well-known “equal loudness contours” of human sound perception. Because our ears become less sensitive to bass and treble at lower levels, a loudness control adds bass and treble when the hi-fi system is idling.
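
A home “loudness” control of this kind can be approximated as a level-dependent shelf boost. The sketch below uses illustrative round-number slopes and an assumed 100 dB reference level, not measured equal-loudness data:

```python
def loudness_comp_gains(playback_spl, reference_spl=100.0,
                        bass_db_per_10db=3.0, treble_db_per_10db=1.0):
    """Illustrative loudness compensation: as playback level drops below
    the reference SPL, add shelf gain to bass and treble to keep the
    perceived balance roughly constant. Slopes are made-up round numbers."""
    drop = max(0.0, reference_spl - playback_spl)
    bass_boost = bass_db_per_10db * drop / 10.0
    treble_boost = treble_db_per_10db * drop / 10.0
    return bass_boost, treble_boost

# 30 dB below reference: noticeable bass shelf, gentle treble shelf
print(loudness_comp_gains(70))   # (9.0, 3.0)
# At or above reference, no compensation
print(loudness_comp_gains(110))  # (0.0, 0.0)
```

Run in reverse, the same idea explains the concert problem: EQ set at a quiet level is effectively a loudness-compensated curve that no longer fits once the system is turned up.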

Some research indicates that the ear is more sensitive to these relative EQ changes than to the volume change itself. As a result, music or other familiar audio sources that sound correctly equalized at one level may sound a little “off” at a different volume. When the level of a concert sound mix changes by 10 dB, it can sound as if an invisible hand is reaching over to the P.A.’s system EQ and changing the curve by a few dB in many places.

It’s no surprise that our ears are especially sensitive in the octave around 4 kHz to begin with; this is the resonant frequency of the ear canal, and the range where hearing overload and damage occur first. But as a concert gets louder, the ear becomes even more sensitive there, and ears that have already sustained damage experience discomfort at these frequencies sooner.

Many already know this and regularly take out some high-mids in the normal course of adjusting their sound system’s EQ. But do they take out enough? It’s not just the compression driver’s response that needs to be tamed. At higher volumes there’s additional driver distortion, plus the way we hear it also needs to be taken into account. Above 105 dB, the ear itself begins to distort.

Is there a missing step in the pre-flight process? Talking into a microphone, listening to familiar playback material or running pink noise while observing an analyzer’s display usually occurs at a lower level than the system will be run for the show. Whatever method is used to adjust the system’s EQ, if adjustments are made at a lower volume than the performance, then those adjustments are likely to become inaccurate at higher levels.

The next step is to soundcheck the band. Individual instruments and voices are checked through the P.A., and then the entire group plays a few songs together, usually at a louder level than when the system EQ was set. Adjustments made to individual channel EQ incorporate the overall response of the system and the operator’s hearing at these higher levels. Some engineers soundcheck at an even higher level than they intend for the show because it’s easier to hear and quickly make adjustments. This process of adjusting channel EQ during soundcheck results in EQ corrections that overlay the system equalizer’s settings.

Now, the ability of the system EQ to be correctly adjusted in the first place is another discussion entirely, but let’s suppose it was perfect at the level you first checked it and listened to your CD. At the new, higher levels used for soundcheck, the ear’s response has changed a little, plus distortion in the system has increased. The P.A. gets a little more boomy, muddy and harsh, plus there’s the sound coming directly from the stage. Channel EQ used to make each instrument sound “good” by itself and in the mix incorporates everything heard at this level (with the room empty, but again another story, another time). You may even make a conscious, well-intentioned effort to take the master level down a few dB after soundcheck, because you know that you soundchecked a little too loud.

Well, now it’s show time and you’re relaxed and ready to move to the other side of your brain and simply mix the show and become one with the music. You’ll spend a few moments in the first song deciding if the system EQ is okay, but then you’ve got to get right to mixing and become the fifth Beatle, playing with effects, riding solos, checking inserts and tweaking the lead vocal. Somehow it never sounds quite right until the volume creeps up past a certain point.

One more aspect of hearing perception is that the relaxed listener is comfortable with higher levels. Levels that were set amid the tension of soundcheck are raised without alarm as the mix engineers (and band) relax and get “into their space” over the course of the show. When this physiological effect is combined with the better fit that channel EQ settings from soundcheck make with the system EQ as it’s turned up, it’s easy for levels to make their way past the loudest settings used at soundcheck.

Many recording engineers make efforts to manage control room levels, knowing that if it sounds good low, it will sound good loud: the mix gets fuller, warmer and a little less bright, while articulation in the high-mids improves vocal presence. This is an important part of the studio engineer’s craft, since he or she has no control over the level at which the final product will be heard. One tool that helps is a good pair of near-field monitors. For live shows, lower soundcheck levels can also help the mix sound better at lower volume, but it’s up to the band and engineers to work together by reducing stage volume as well.

One final thought on system optimization is that, ideally, it is a good idea to check the impulse response of a sound system. With the advent of computer-based analysis, it is possible to examine the phase response of concert systems. Correct alignment of not only the various speakers in a system, but also of the components in each frequency band, can result in a response that is less blurred and more coherent, improving intelligibility and transparency. For each speaker component, when wavelengths get larger than the transducers producing them, the signal lags behind. This is most apparent in subwoofers, where waveforms the length of a truck are coming out of 18-inch drivers. It is not uncommon for ten or more milliseconds of delay to be required on the mains to get them lined up with the subs. Dynamic instruments are more easily discerned when their reproduction has a singular arrival, allowing lower mix levels to sound good.
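
The main-to-sub alignment step might be sketched like this, assuming arrival times at the mix position have already been read off an impulse-response measurement (all the numbers here are hypothetical):

```python
SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 °C

def alignment_delay_ms(main_arrival_ms, sub_arrival_ms):
    """Delay to insert on the mains so their arrival lines up with the
    (typically later) sub arrival at the measurement position. Never
    returns a negative delay: if the subs arrive first, delay them instead."""
    return max(0.0, sub_arrival_ms - main_arrival_ms)

def delay_as_distance_m(delay_ms):
    """Equivalent acoustic path length for a given delay."""
    return delay_ms / 1000.0 * SPEED_OF_SOUND

# Hypothetical measurement: subs arrive 12 ms after the mains
d = alignment_delay_ms(34.0, 46.0)
print(d)                      # 12.0 ms of delay on the mains
print(delay_as_distance_m(d)) # ~4.1 m of equivalent path
```

Twelve milliseconds is roughly four meters of path difference, which is why the offset is inaudible as an echo yet still smears the low-end transients.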

And on the subject of subwoofers-the least efficient transducers that take up the most space-perhaps it’s time to rethink their use. Most reflex enclosures are tuned for the octave below 100 Hz, with response falling rapidly below that where the ear is also least sensitive. If these are adjusted at relatively low playback levels, when the concert starts the subwoofers may be too boomy and lack enough headroom for accurate performance. Additionally, distortion above the crossover point can further skew their response when driven to full output. Turning them down and carefully equalizing and aligning them with the mains can add more perceived power. Cleanly extending the lowest octave is one of the last great challenges to accurate sound reinforcement.

Many live sound engineers are familiar with the experience of listening to the tape of a loud show, only to find that what had seemed like a good performance was in fact plagued by out-of-tune instruments and off-key singing. Though the deficiencies of such live recordings are often blamed on the necessarily incomplete nature of board tapes-we are talking about “sound reinforcement,” after all-this only explains problems with mix balance or EQ. Critical bandwidth-the ear’s resolution for telling nearby frequencies apart-widens at high SPLs, and as a result many singers will pitch slightly flat in loud environments.

This extra reason why louder sounds better is also a barrier to improving the performance. If you’ve been in search of the missing “suck” knob, here it is. As volume increases, what might have sounded out of tune or off-key now sounds okay. The widening of critical bandwidth makes it harder to discern tones that are close to each other when it’s louder. Similarly, cramped rehearsal spaces can give false impressions. Another example is garage bands that go from clubs to larger venues and have trouble getting their sound right.
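
For a sense of scale, Zwicker’s standard approximation gives the critical bandwidth at moderate levels. It does not model the widening at high SPLs described above, but it shows how coarse the ear’s frequency grid already is before the volume makes it worse:

```python
def critical_bandwidth_hz(f_hz):
    """Zwicker's approximation of critical bandwidth at moderate levels:
    CB = 25 + 75 * (1 + 1.4 * (f/1000)^2)^0.69  (f in Hz, result in Hz)."""
    f_khz = f_hz / 1000.0
    return 25.0 + 75.0 * (1.0 + 1.4 * f_khz ** 2) ** 0.69

# Around 1 kHz the band is already ~160 Hz wide; by 4 kHz it is far wider
print(round(critical_bandwidth_hz(1000)))  # ~162 Hz
print(round(critical_bandwidth_hz(4000)))  # ~685 Hz
```

Anything that widens these bands further, as high SPLs do, blurs exactly the pitch distinctions a singer relies on.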

Another mechanism at work is the masking of one frequency range by another that is proportionately too loud. In the frequency response of a sound system, smooth peaks are preferable to sharp ones. It is increasingly understood that graphic equalizers do not have sufficient precision for smoothing out the response of sound systems. Their controls fall at fixed intervals on standard ISO frequencies, unlike the system’s response peaks, which-surprise!-rarely match the ISO marks. In the course of setting individual channel EQ, you often see many of the same boosts and cuts across the board, which simply act as corrective adjustments to the system EQ.
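
The mismatch between fixed graphic-EQ sliders and actual response peaks is easy to quantify. A small sketch, using the nominal 31-band ISO center frequencies and a hypothetical 287 Hz room mode:

```python
import math

# Nominal ISO 1/3-octave centers of a standard 31-band graphic EQ
ISO_CENTERS = [20, 25, 31.5, 40, 50, 63, 80, 100, 125, 160, 200, 250,
               315, 400, 500, 630, 800, 1000, 1250, 1600, 2000, 2500,
               3150, 4000, 5000, 6300, 8000, 10000, 12500, 16000, 20000]

def nearest_band(peak_hz):
    """Nearest graphic-EQ slider to a measured response peak, plus the
    offset in cents (100 cents = one semitone)."""
    band = min(ISO_CENTERS, key=lambda c: abs(math.log2(peak_hz / c)))
    cents = 1200 * math.log2(peak_hz / band)
    return band, round(cents)

# A hypothetical 287 Hz room mode falls between the 250 and 315 sliders
print(nearest_band(287))  # (315, -161)
```

A peak more than one and a half semitones away from the nearest slider can only be treated by pulling down a band that is mostly in the wrong place, which is the precision argument for parametric filters.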

If a system is optimized for a smoother response with precise parametric filters, perhaps the best use of a graphic is to quickly re-contour the P.A. to help its response at louder levels. In fact, this is how you find the best mix engineers using their graphic during a show. All EQ is subjective.

Perhaps a more precise system EQ tool for mix engineers would be a set of filters centered at frequencies where human audio perception changes with different sound pressure levels (with additional facility to compensate for increased component distortion at higher levels). It is worth noting that we are now seeing crossovers on the market with dynamic filters available on each output, and a few live engineers already use a mastering EQ across their mix bus.

Last but not least, the amount of headroom in contemporary sound systems has become a panacea for a multitude of sins. All of the previous suggestions may not have as much of an impact as a good, active mix. Move the faders, feel the force, Luke. In the past, engineers were forced to mix around the limits of their systems. Back in the days when mixers brought elements up and then back down, hundreds instead of thousands of watts were sufficient for quality sound. Today we often see the insertion of many channel compressors in attempts to create a console that mixes itself. It’s not unusual to find younger engineers “mixing” without touching the faders.

A static mix must be higher in volume for all its elements to be heard. Employing an active mix, as an alternative to simply achieving a balance where everything is heard equally, can help the show sound better at a lower volume. What would happen if the lighting guy just turned all the lights on? Organize the order of your input list so that individual channels can be turned up AND down without taking your fingers off other faders. As a last resort, you could try using your VCAs to mix. Eight fingers, eight faders. Cool, huh?

One final thought: You have heard the show hundreds of times, know all the words and need something extra to make it exciting, but your audience may have different needs. Try mixing for them.

Now all this may fall on deaf ears. Sure, I know some of you are already damaged goods, but it doesn’t have to get worse. Some of the best engineers have hearing problems and manage to compensate. The important thing is to manage your daily exposure so it doesn’t get any worse. It is possible to have a loud show that isn’t damaging. Recently I was the system tech at an outdoor show with a headliner whose engineer had mixed top arena rock bands for years. When the sound cop finally showed up halfway through the set, the engineer was forced to turn the volume down 10 dB. Because of outstanding engineering skills and mixing chops, the show sounded just as good at this lower level, perhaps better. I’ve heard this year’s lack of sell-out shows attributed to everything from high ticket prices to competition from entertainment alternatives. Is it possible that the decline of ticket sales in an otherwise growing economy can be attributed to disgruntled concertgoers? A quarter-million tiny hairs suspended in fluid, winding through the coil of the inner ear. This nonrenewable asset is our most precious resource in the concert business.