
John Meyer: A Lifetime of Sound Science

John Meyer is smart. Everyone who meets him comes away with that. And he can talk about an endless variety of subjects, learned and articulate in them all, with a rare ability to break down complex technical concepts into simple, often visual, analogies. He’s scientist smart, and at the same time infinitely accessible, often displaying a dry wit or drifting off into his passion for cameras and lenses and the visual arts. But his lifelong passion has been sound, sound reproduction in particular. Linear sound reproduction systems to be even more particular.

His background has been covered elsewhere—radio telephone license as a Berkeley teenager, hi-fi experience, finding a training ground at McCune Sound, one of the epicenters in the birth of the modern P.A. Building the Steve Miller Band’s P.A. for Monterey Pop, then experimenting with the Grateful Dead, Tower of Power, Metallica, Herbie Hancock and so many other artists. Developing the first commercially available self-powered studio monitor, the HD-1, then modifying the approach and applying it to concert sound with the MSL-4.

Through it all, from his early ‘70s research in Switzerland, to the development of the tri-amped JM3 at McCune, through the introduction of LYON this month at ISE, the concept of linearity runs as a thread, a theme in his lifelong passion for creating better sound no matter where the audience might be. Theater, cinema, concert hall, studio, school auditorium or baseball game—a linear approach, he would argue, benefits all.

This year marks the 35th anniversary of Meyer Sound, started by John and Helen Meyer in 1979. It’s been 25 years since the introduction of the HD-1. Just a few weeks ago, John Meyer was inducted into the TEC Awards Hall of Fame.

But he hasn’t slowed down one bit. He’s entered the studio, touring, theater, installation, and cinema markets over the past two decades. Heck, he built Constellation! And this month he’s back where it all began, with the introduction of LYON, a smaller sibling of LEO, to the live sound world. Because a conversation with John Meyer can dance and shift and move and circle, while always staying on point, we’ve broken down a few things he had to say recently from his Berkeley offices. What follows is in John’s own words.

A SYSTEMS APPROACH

When you think about what it takes to create a loudspeaker, it’s hard. It’s not just the loudspeaker, which is the mechanical part. There is the amplifier, the electronics and all the things that go around it so that it can convert someone singing into a microphone to being able to hear it up to a thousand feet away. How do we do that so that it sounds like it’s fun to go to, like listening to your hi-fi on a grand scale? Right away, you know it’s not something you can do on your own. In this world you need electronic people, mechanical engineers, now software engineers, so many disciplines.

Helen and John Meyer, partners in life, partners in business.

But what really matters is that we’re trying to create acoustical energy. A big kick drum might produce 5 to 10 acoustical watts of power. It’s real energy. Power is power. A piano might be a half watt or one watt of acoustical power. A full symphony at full power is about 60 acoustical watts. But acoustical power is what we want. We don’t hear power, but it takes power to push a field of air molecules electro-acoustically to create pressure. We basically have two considerations: How much AC power can we get? How much acoustical power do we need?

Let’s say we want to reproduce a drum at 10 acoustical watts. It moves the driver back and forth maybe a half an inch to create this power. So if we were 100 percent efficient, we would need 10 watts; if it’s 10 percent efficient, we need a 100-watt amplifier. At 1 percent efficiency, 1,000 watts. Why do we care? If we have a 1,000-watt amplifier and 1 percent efficiency, that energy has to go somewhere. We have 990 watts of power that has to be dissipated as heat, either in the amplifier or in the speaker. We have to get our efficiency up, or we have to dump it out. So the speaker and amplifier become a marriage.
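
As a back-of-the-envelope illustration of that arithmetic, here is a minimal Python sketch using the example figures above; the numbers are illustrative only, not actual driver or amplifier specifications.

```python
# Illustrative only: electrical power needed to deliver a given acoustic output.
def amplifier_power(acoustic_watts, efficiency):
    """Return (electrical watts required, watts dissipated as heat)."""
    electrical_watts = acoustic_watts / efficiency
    heat_watts = electrical_watts - acoustic_watts  # lost in the amplifier and driver
    return electrical_watts, heat_watts

for efficiency in (1.00, 0.10, 0.01):
    electrical, heat = amplifier_power(10, efficiency)  # 10 acoustical watts, the kick-drum example
    print(f"{efficiency:.0%} efficient: {electrical:.0f} W electrical, {heat:.0f} W of heat")
```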

The biggest problem in the early days was that you had power amplifier people making amplifiers and speaker companies building speakers. Even though people liked using certain amplifiers, we knew that we would have to integrate the two systems. There was a time when people thought the amplifier was as important as the loudspeaker. We knew these had to integrate, and we knew they would have to be powered. The only way the technology would evolve was through powered speakers.

So now we have to look at systems. What we really want to do is build a 10- or 20- or 50-watt amplifier, but that sounds weak and miserable and wimpy in the world of entertainment. But if we could do that, it would be huge! Amazing!

ON MEASUREMENT

In Switzerland, in the ’70s, we were studying how to measure so that we could know what we had achieved. At that time, there was a lot of controversy about how to measure loudspeakers, how to measure speakers in rooms. One idea was that you could set off an explosion, say a firecracker, record it for 10 seconds, do long-term analysis, and that gives you the entire characteristic of the long-term reverberation. But it would tend to ignore the little things.

For instance, you could run a 10-second analysis, say of noise through a speaker, take a graphic EQ and move it up and down real quickly a couple of times, and it won’t even register over the 10 seconds. It’s buried. The problem is you have to aim your measurement at what people can hear. They can hear the movement of the equalizer. The brain listens to long-term and short-term events. So we started to build an analyzer that would be more like the way we hear.

At that time, certain conditions came up when you were trying to differentiate one tone from another among very closely spaced tones. There seems to be a certain amount of masking, and that became the basis for third-octave analysis. But you can’t arbitrarily say that you only ever need third-octave analysis; there are only certain conditions where that’s true. The industry likes to make things simple, though, so it settled on third-octave analysis. That’s only good for some conditions.

So we started building analyzers where we could change from 1/48- to 1/24- to 1/3-octave, to full octave and things like that, to try to get a handle on what we should measure that people could judge subjectively. When we started the company, that was one of the first things we did. We worked with Stanford University to create an analyzer, a paper, and the whole idea of how to measure sound. Because if we don’t know how to measure it, we won’t know if we’ve achieved anything or how to reproduce it. With computers, even back in the ’70s, we could take the file on the A side, which is what we picked up with a microphone, then record the room with the B side and compare those two files. That analyzer won an R&D 100 Award in the ’80s.
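
For context, fractional-octave analysis just means dividing the spectrum into bands whose width scales with frequency. Here is a small sketch of what the different resolutions he mentions imply; the band-spacing convention (centers a ratio of 2^(1/n) apart) is a common one and is assumed here, not taken from the article.

```python
# Assumed convention: 1/n-octave band centers are spaced by a ratio of 2**(1/n).
def band_centers(n_per_octave, f_lo=20.0, f_hi=20000.0, f_ref=1000.0):
    """Return 1/n-octave band center frequencies between f_lo and f_hi, anchored at f_ref."""
    step = 2 ** (1.0 / n_per_octave)
    centers = []
    f = f_ref
    while f >= f_lo:          # walk down from the reference frequency
        centers.append(f)
        f /= step
    f = f_ref * step
    while f <= f_hi:          # then walk up
        centers.append(f)
        f *= step
    return sorted(centers)

print(len(band_centers(3)))    # about 30 third-octave bands across the audible range
print(len(band_centers(24)))   # well over 200 bands at 1/24 octave: much finer resolution
```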

ON LINEARITY

In concept, it’s pretty simple. What you put in is what you get out. That means that you can pick up, say, a voice and a violin, and those two signals feed an amplifier, then you can pull those out and send them to a loudspeaker, and the voice and the violin will sound separate. The linear system doesn’t merge them into new notes. In other words, it doesn’t create intermodulation products. The notes stay completely separate. It’s not a trivial thing to do. It’s a trivial thing to think about: We want to put two tones in and get two tones out and nothing else. But it’s hard to do.
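
Here is a small numerical sketch of that “two tones in, two tones out” idea (illustrative Python, not anything from Meyer Sound’s toolchain): pass a voice-like and a violin-like tone through a purely linear gain and through a soft-clipping stage, then count the spectral components that come out of each.

```python
import numpy as np

# Two input tones standing in for a voice and a violin (frequencies chosen for illustration).
fs = 48000
t = np.arange(fs) / fs                     # one second of audio; 1 Hz bin spacing
x = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 1320 * t)

linear = 2.0 * x                           # a linear system: just gain, nothing new created
nonlinear = np.tanh(2.0 * x)               # a soft-clipping stage, i.e. a nonlinear system

def significant_tones(signal, threshold_db=-40.0):
    """Count spectral components within threshold_db of the strongest one."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum_db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)
    return int(np.sum(spectrum_db > threshold_db))

print(significant_tones(linear))      # 2: only the original tones come back out
print(significant_tones(nonlinear))   # more than 2: harmonics and intermodulation products appear
```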

John Meyer at a listening demonstration of the new LYON system at the Bill Graham Civic Auditorium in San Francisco.

We like linear systems in engineering because they are easy to understand if you achieve them. But in loudspeakers, the minute the cone starts to move, it starts to shift some of the frequencies, so you have frequency modulation. We can’t really build something perfectly linear because the nature of the motion itself would change it.
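
For a rough sense of the scale of that effect, here is an illustrative Doppler-style estimate of how cone motion shifts a higher tone riding on the same driver; the interpretation and the numbers are assumptions for illustration, not figures from the article.

```python
# Illustrative only: a high tone reproduced by a cone that is also making large low-frequency excursions.
# The moving cone shifts the high tone's frequency by roughly f * v / c.
speed_of_sound = 343.0      # m/s in room-temperature air

def doppler_shift_hz(tone_hz, peak_cone_velocity_m_s):
    """Approximate peak frequency deviation of a tone emitted by a cone moving at this velocity."""
    return tone_hz * peak_cone_velocity_m_s / speed_of_sound

# e.g. 6 mm of peak excursion at 50 Hz gives a peak cone velocity of about 2*pi*50*0.006 ≈ 1.9 m/s
print(doppler_shift_hz(1000, 1.9))   # ≈ 5.5 Hz of frequency modulation on a 1 kHz tone
```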

If you start out in the linear world, linear theory, you don’t want anything that knowingly creates problems, like amplifiers that are clipping. The first thing you do is try to set up systems so that everything is running in its optimum range. The amplifiers are not running past their ability to produce power, they’re not hitting the rails. You get as close as you can.

But then air isn’t completely linear. At the normal levels we talk to each other, it’s very linear. But near the loudspeaker, you push air into nonlinear motion in order to be able to get enough power to project hundreds of meters. It then drops off with distance, just like a light bulb. As you move away from a speaker, the power goes down, regardless of how directional it is. Generally, if you’re 100 feet from a speaker and you move to 200 feet, the level drops by half, no matter how the speaker starts out. But in the near field it can get confusing.
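
That 100-foot-to-200-foot example is the familiar far-field, inverse-square falloff; a one-line check of it (point-source assumption, illustrative only):

```python
import math

# Far-field point-source assumption: doubling the distance halves the sound pressure.
def level_change_db(d_near, d_far):
    """Change in sound pressure level (dB) when moving from d_near to d_far (same units)."""
    return 20 * math.log10(d_near / d_far)

print(level_change_db(100, 200))   # about -6 dB: half the pressure at twice the distance
```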

Rooms, however, are linear. Reverberation is a linear phenomenon. But you have to be careful with that thought because even though it’s linear—meaning it behaves the same at low level or high level—we hear it differently. If you set off a big explosion in a room that’s real reverberant, it sounds a lot different than if you clap.

That’s why the cocktail effect is so bad in restaurants or in a church—when you have a few people in there, it’s okay, but when people come and add energy, it just gets louder and louder and louder. That’s the nature of reverb. Linear systems can’t change that, obviously, but one of the things we found interesting about listening to the new LYON at the Bill Graham Civic Auditorium is that it didn’t sound like you were in a big cavern. It sounded like you were more isolated from the space. So we’ll probably discover things as we continue to develop this.

REVERBERATION

Starting in the 1950s, we as an industry introduced the thought that you could add reverberation later to music, because it’s considered a different event. The whole industry started developing reverb units that you could add later. You put the musicians in a room that doesn’t have any reflections. You capture the original, then add early reflections or reverberation back to it, which is still kind of the thinking today. For a long time we’ve been trying to figure out why people didn’t like electronic reverb in physical systems.

It turns out, almost across the board, everyone decided that since reverberation was an audience experience and didn’t have anything to do with musicians, you could do a time-variant solution. Why? Well, it stabilizes the frequency response so you don’t have feedback.

We, however, got excited about this patent that came out of New Zealand to do electronic reverberation by more of a brute-force method, no time-variant solution. I wanted to try that out here. That’s why we built the Don Pearson Theatre [at our Berkeley headquarters], to test it out with musicians. Theoretically it would be equivalent to what a room would do. Rooms don’t do time-variant solutions, they just repeat echoes.

John Meyer testing the Meyer Sound 500 loudspeaker in 1986.

Echoes aren’t random; they are quite orderly. They do the same thing over and over again to a very high accuracy. This gave us the opportunity to try our theories with Constellation. We put a completely linear experiment together at the Pearson Theatre and we brought in a string quartet from UC Berkeley and didn’t tell them too much. Well, they liked one of the rooms we had copied that we knew they had played in. They recognized it, they liked it. Then we told them that when the donors came in we would turn it off, so it would be dead, like a studio, then we’ll turn it back on again. So we do that, and they say, “Well, you’ll need to give us a little time to adapt to it, to adjust.” I thought, “What? Adjust to what? My whole team is here.” They say, “Well, we can’t play the same way anymore because we can’t hear each other. It’s like a studio, and we’ll have to adjust our playing. We’ll have to stretch our notes.”

We think, “No, you can’t do that, just coast through it. Don’t change your playing.” We try that, they play, and it’s like driving through a tunnel with your eyes closed. No one I’ve met within the scientific community truly realized how much musicians interact with the space. How we missed this for 50 years is a really good question. The musicians knew this, but they didn’t know how to explain it. They knew it intuitively. One of the big dangers in bringing together science and intuition is that science doesn’t believe in intuition. We didn’t even consider that this might not be correct.

LOW FREQUENCIES

For a subwoofer to produce a drum sound, it also has to reproduce high frequencies. If you strip everything off at 80 cycles, or 100 cycles, there are only a couple of notes there, so you get sort of a thumpy, pink-noise, boom sound. Nothing interesting. To get that crack and the harmonics, you need to get up to 1,000 cycles. All subwoofers were tested this way in sub shootouts; everybody was doing it this way.

What we wanted to do is introduce a low-frequency element that was optimized, had no harmonics and stopped at 80 cycles. The broader the range a subwoofer has to cover, the more it becomes like a regular speaker, and you lose power. In order to make the thing more powerful, we don’t want it reproducing the harmonics that came from the drum, or producing harmonics of its own.

So we introduced the 1100, just a low-frequency element. If you’re miking a drum, the highs would go to LEO [the linear line array] and the lows would go to the 1100, integrated into the system. If you don’t strip out the high frequencies, then it’s not going to sound very good. If you send a sub feed that’s been stripped at 80 Hz, it’s not going to have any high end. Mixers immediately want to put some high end in with that sub. But what you want to do is take the stuff you just stripped out and send it to LEO. We took this out to the industry in the last year. It’s very powerful. It can be very simple, but it’s a new way of thinking.
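
A rough sketch of the kind of split he describes, in Python with scipy: lows below about 80 Hz go to the low-frequency element and the complementary high-passed signal goes to the main array. The filter type and order here (a Linkwitz-Riley-style split built from cascaded Butterworth sections) are assumptions for illustration, not Meyer Sound’s actual processing.

```python
import numpy as np
from scipy import signal

fs = 48000
crossover_hz = 80  # the 80-cycle split point mentioned above

# Assumed 4th-order Linkwitz-Riley-style split: two cascaded 2nd-order Butterworth sections per side.
sos_low = signal.butter(2, crossover_hz, btype="lowpass", fs=fs, output="sos")
sos_high = signal.butter(2, crossover_hz, btype="highpass", fs=fs, output="sos")

def split_feeds(drum_mic):
    """Return (sub_feed, mains_feed): content below ~80 Hz for the LF element, the rest for the mains."""
    sub_feed = signal.sosfilt(sos_low, signal.sosfilt(sos_low, drum_mic))
    mains_feed = signal.sosfilt(sos_high, signal.sosfilt(sos_high, drum_mic))
    return sub_feed, mains_feed

# Example input: a synthetic kick-drum-like burst with a low thump plus a higher-frequency attack.
t = np.arange(int(0.25 * fs)) / fs
drum = np.sin(2 * np.pi * 55 * t) * np.exp(-t * 12) + 0.3 * np.sin(2 * np.pi * 900 * t) * np.exp(-t * 40)
sub, mains = split_feeds(drum)
```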

LEO AND LYON

Trying to introduce [the concept of linear sound] 10 years ago would have been difficult. It was easier in cinema because they had more to gain. They have a fixed system behind the screen. They can’t change it out like P.A. people can.

So LEO was first. We thought we would start off with something for the big shows because there are a limited number of them, so it doesn’t become a major introduction. Instead, we created a collaborator program to work with people and introduce the concept, to make sure they understood what it was we were trying to accomplish with this system. It’s been out since 2012.

Then we thought we could build a system for smaller shows, from 2,000 seats to maybe 10,000. Something more flexible. It wouldn’t be as powerful, but it would have all the same properties. All the quality is the same, the resolution is the same. These two systems could work well together. You have LEO for the mains and you can use LYONs on the side. Then use the other half of the LEO for your other system and finish it off with LYON.
