Brian Eno once described the tape recorder as his “first instrument,” and has often cited minimalist composer Steve Reich’s cut-and-paste tape piece, “It’s Gonna Rain” (1965), as one of his main musical influences. These preferences led almost inevitably to one of the concepts for which he became best known: the recording studio as a musical instrument. Throughout his long career, Eno has consistently applied this instrument in ways that are unusual, innovative and often commercially very successful.
Born in 1948 with the eminently aristocratic name of Brian Peter George St. John le Baptiste de la Salle Eno, the Briton first caught the public eye in the early 1970s as a member of Roxy Music. After leaving that group, Eno pursued a solo career, which has included numerous avant-pop efforts and the ambient music for which he is best known. In addition, Eno has been involved in all manner of collaborations, ranging from guest appearances as a musician to full-scale productions. His credits include Ultravox, Jon Hassell, David Byrne, Toto, Harold Budd, the Neville Brothers, Peter Gabriel, Elvis Costello, INXS, Johnny Cash and, of course, his now classic work with David Bowie, Talking Heads and U2. Eno also became extensively involved in non-musical activities, from visual arts and video installations to lecturing and writing.
Given all of this diverse activity, it’s amazing that he still finds time to maintain a solo career. His latest CD, Another Day on Earth, was released last summer. It’s his first vocal album in more than two decades and appears to be his most deliberate effort at blending the ambient and song-based strands of his solo work. Normally a reluctant interviewee, Eno gave a round of interviews to promote his latest effort, offering Mix the opportunity to ask some probing questions about his new album and his attitudes about modern recording.
Tell me about your fascination with Steve Reich.
“It’s Gonna Rain” was one of the most important pieces of music in my life, and the whole idea of generative and ambient music really came out of that. With a generative piece, you set a machine going and it makes itself, and you as the composer are also the listener. The act of listening is the act of composing. When you’re hearing these complicated shifting patterns going on, it’s the aural equivalent of moiré illusions, and that very much impressed me.
What also impressed me was the different position it gave the composer. The old romantic idea is that the composer pours out these wonderful things to the passive you, the listener. It’s the idea of art as a kind of tube that the artist shouts down to the more-or-less thick listener at the end. Instead, with generative music, the composer becomes somebody who sets up a scenario of some kind and then lets it execute itself, and then [the composer] watches that just like any other listener. I’m absolutely uninterested in the idea of using music as a vehicle for presenting the performer’s personality.
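The phasing Eno describes in “It’s Gonna Rain” (two copies of the same tape loop, one running slightly faster, drifting apart and eventually realigning) can be sketched with simple arithmetic. The loop length and speed ratio below are hypothetical, not taken from the piece:

```python
# Two copies of the same tape loop, one deck running slightly fast.
# Track the phase offset between them over time.
loop_seconds = 1.8          # hypothetical loop length
speed_ratio = 1.01          # second deck runs 1% fast

def phase_offset(t):
    """Offset (in seconds) of the fast loop relative to the slow one at time t."""
    return (t * (speed_ratio - 1.0)) % loop_seconds

# The loops realign whenever the accumulated drift equals one full loop length:
realign_time = loop_seconds / (speed_ratio - 1.0)
print(round(realign_time, 1))  # 180.0 seconds for these values
```

The piece "makes itself" in exactly this sense: once the two decks are started, every intermediate pattern is determined by the drift, and the composer listens along with everyone else.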
Then why release a solo album, particularly a vocal album?
Five or six years ago, I noticed that I was starting to sing again and enjoying it. Also, certain technological developments have happened that give you the possibility to shape your voice, and that reawakened my interest. I always liked the idea of seeing what I was doing the way a playwright might think of a play or a novelist might think of a book. There are characters in there, but they’re not the novelist, they’re just characters in the book. And with the new voice-shaping technologies that are around now, you can suddenly make a voice that’s clearly not your own.
I have nothing to say. I have a lot to say when you’re asking me questions, but I don’t want to use music as a way of saying things. [Laughs] What I want to use music for is a way of making things happen to me. I want to make things that create emotional or mental conditions for me, and one of the most important conditions is surrender. My yardstick for what constitutes good music is that it changes me. Do I think, “Wow, that’s a new conception of how things could be,” or, “That’s a new set of feelings that I have never experienced before?”
On your new album, you treat your vocals with what sounds like a pitch-shifter on “And Then So Clear” and a vocoder on “Bottom-liners.” Can you provide some details?
Quite a lot of the vocal effects were done with the DigiTech Pro Vocalist, which I don’t think was ever very popular. It’s a stand-alone box, not a plug-in, and it has lots of interesting functions. It’s an intelligent harmonizer that you can run off a keyboard, so it will harmonize with the notes of the chords that you’re playing. You can have a group of voices following the chords. It also has a gender-changing function with which you can alter the formant structure of your voice. That’s what I did on “And Then So Clear.”
I also pitched the voice up an octave and played the melody line on the keyboard. The latter gave a very funny effect because it makes the change between notes slightly artificial in an interesting way. I also used the Pro Vocalist for the vocoder effect. Plus, I used various forms of Auto-Tune a lot. This is very interesting as an effect in that it gives this unnatural perfection to your voice.
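The “unnatural perfection” of hard pitch correction comes from quantizing the sung pitch to the nearest note of the scale with no glide. Here is a minimal conceptual sketch; it is not Auto-Tune's actual algorithm, which also has to detect the pitch and resynthesize the voice:

```python
import math

def snap_to_semitone(freq_hz, a4=440.0):
    """Hard pitch correction: quantize a frequency to the nearest
    12-tone equal-tempered semitone (the zero-glide setting that
    produces the robotic, perfectly in-tune effect)."""
    semis = round(12 * math.log2(freq_hz / a4))
    return a4 * 2 ** (semis / 12)

# A note sung slightly sharp of A4 snaps exactly onto 440 Hz:
print(round(snap_to_semitone(452.0), 2))  # 440.0
```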
Another Day on Earth is striking for its unhurried pace. In the context of the 3-second-attention-span paradigm that has come to dominate the entertainment industry, aren’t you scared of losing your listener?
I’ve come to realize that I can trust listeners. They don’t need to be constantly woken up. They’re quite happy to drift for a while and come back in when the music comes back in. In general, the listener wants much less than the creator. When you’re creating something, it’s very easy to get into a nervous state and think, “Oh god, here’s a whole bar where nothing happens,” and try to get more stuff in. But as a listener, you’re quite happy with these open spaces.
I noticed that years ago when I was experimenting with Revoxes and often found that I preferred the pieces played back at half-speed. This was not just because of the softer, more somber tonality, but simply because less happened.
Another Day on Earth sounds like an attempt at bridging the gap between your ambient and song-based work.
The track “How Many Worlds” is a very short song with a very long instrumental section. There’s just enough voice in there to make you hear it as a song, making it a bluff, a deceit, and there are a number of bluffs like that on the record. I learned this when I made Another Green World, which had 14 pieces on it, five of them vocal pieces. I noticed that everyone thought about it as a song record, and I was pleased about that because people bring more quality of attention to a “song” record than an “instrumental” record.
You can research this. If you have a painting that’s just a landscape, you see the eye moving in a very complex pattern as it scans it. If you put a figure in there, even if it is minute, then the eye will keep referring back to that. The same thing happens when we hear a voice. So for me, it was like I’ve been doing landscapes for a long time and now I have re-introduced some figures; i.e., the voice. Where are they going to fit? How big will they be? Is it going to be like the Mona Lisa, with a big figure in front of the backdrop, or more like a [John] Constable painting, where it’s just a tiny figure in a large landscape? And how can I destabilize that in some way — how can I put a voice in there and not make it the center of attention?
You recorded Another Day on Earth on a Mac with Logic software, but in the past you have been very critical of computers.
There’s still quite a lot that I hate about working with computers, but I think programs have improved a great deal. The objections I used to make have been taken on board by programmers. Programs are less menu-intensive than they used to be, and Logic is a very evolved program. I also think that plug-in instruments today are much better than the early ones. The problem remains with the interface with the computer keyboard. There are certain decisions that you make on a keyboard that you wouldn’t make on a guitar and vice versa. You have to stay aware when you start working with a computer that you’re on a very tilted playing field.
It’s very easy to do all these things that computers want you to do — like quantize or use equal temperament — if you’re working with a keyboard, or use endless tracks and editing options, and in that way have the computer determine what kind of music you’re making. This has been fatal for a lot of people because the number of options at every stage proliferates exponentially. What I often see in studios is that when one problem can’t be solved quickly — for instance, the lyric writing, which is always a problem — people start working on non-problems like, “Let’s try 38 different guitar parts on a song and let’s play around with these sounds in 150 ways each.” A huge amount of attention goes into re-cooking the bit of the track that doesn’t need attention. So you need to be very aware of the potential of technology to pull you into screwdriver mode.
Korg and Native Instruments are mentioned in the credits of Another Day on Earth.
These companies have both made contributions to solving the computer problems I’ve been talking about. I’m a big fan of Native Instruments’ FM7 program, which is sort of based on the Yamaha DX7. It’s the DX7 that I always wanted to have because you can suddenly connect things in different ways. With the FM7, you can also tune the keyboard in any way you want so you can make music in just intonation or Arabic intonation or whatever. Korg has its Kaoss pads, which are a way of taking sounds into the domain of muscular control. If you have a few Kaoss pads in-line, like I do, you can really start playing with sound itself, with the physical character of the sound. The pads are very intuitive: Anyone can learn to use them in a second. It’s immediately obvious what you do, and it immediately takes you into a completely different place, because when working with computers, you normally don’t use your muscles in that way. You’re focused on your head, and the 3 million years of evolution that resulted in incredible muscular skill doesn’t get a look-in.
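The retuning Eno mentions comes down to frequency ratios: equal temperament divides the octave into twelve identical steps, while just intonation uses pure whole-number ratios. A quick sketch of the gap between the two, using standard music-theory arithmetic (nothing here is specific to the FM7):

```python
import math

A4 = 440.0  # reference pitch in Hz

def equal_tempered(semitones_above_a4):
    """12-tone equal temperament: every semitone is a ratio of 2^(1/12)."""
    return A4 * 2 ** (semitones_above_a4 / 12)

# A just-intonation major third above A is a pure 5:4 ratio.
just_third = A4 * 5 / 4            # 550.0 Hz
et_third = equal_tempered(4)       # ~554.37 Hz

# The gap in cents (1200 cents per octave): the slight mistuning
# of the keyboard's compromise relative to the pure ratio.
cents = 1200 * math.log2(et_third / just_third)
print(round(cents, 1))  # ~13.7 cents
```

A retunable instrument simply replaces the `2 ** (n / 12)` rule with whatever ratio table you specify, which is what makes just or Arabic intonation possible from the same keyboard.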
Can you elaborate on your recording and mixing processes?
When I was playing parts live into the computer, I would do processing through external boxes. I’d also sometimes feed stuff out of my computer through the Kaoss pads. There’s a lot of plug-in processing going on. I’d usually print the processed track inside of the computer and then push it back in time, because when there’s a lot of processing, you get latency problems. I like working like that because I can do different things with the already-processed track.
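Pushing a printed track “back in time” amounts to a fixed sample shift: if the processing chain adds N samples of latency, the bounced audio lands N samples late, so you advance it by N. A minimal sketch over a plain list of samples, with an illustrative latency figure:

```python
def compensate_latency(printed, latency_samples):
    """Advance a bounced track by dropping the latency padding at its
    head and padding the tail with silence, keeping the length equal."""
    return printed[latency_samples:] + [0.0] * latency_samples

# A click that should fall on sample 0 but came back 3 samples late
# after passing through a processing chain:
printed = [0.0, 0.0, 0.0, 1.0, 0.5, 0.25]
aligned = compensate_latency(printed, 3)
print(aligned)  # [1.0, 0.5, 0.25, 0.0, 0.0, 0.0]
```

Modern DAWs do this automatically via plug-in delay compensation, but printing and nudging by hand, as described here, achieves the same alignment.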
What do you make of the objection that working exclusively inside of computers results in a flat and lifeless sound?
It’s interesting that after working in computers for a while, when you then listen to something that wasn’t made in a computer, it sometimes has a shocking, sparkling live-ness to it. But you simply have to accept that something happens when working with computers and you work within that constraint. If you’re a print artist, you know that lithographs will give you a different effect than silk screens. So I’m aware that in working with computers, you exclude certain sonic possibilities, as you do when working with analog tape.
In working with digital, you sacrifice certain possibilities of sonic range and depth, while in working with analog, you sacrifice all the operational freedom that comes with computers.
How would you evaluate the differences between analog and digital?
I’m not sure they’re so much to do with the internal characteristics of the medium as with the different ways you work when you’re using them. When you work with analog, you go for a performance because it’s too complicated to cut up tape and so on. So you tend to do takes until you get a good performance. But with digital, you say, “That’s a good bar, we’ll copy that a few times.”
Also, when you work with digital, you tend to work with people who aren’t sound engineers; they’re computer operators. They’re not people who have spent their lives listening to drum sounds and thinking, “I wonder how I can make that sound better — perhaps with this compressor instead of that one, or if I move that mic a little bit away from the drums.” I think that’s a different world.
I engineered Another Day on Earth myself because, otherwise, I would have had to spend six years in a studio and pay staff, and that would have become too expensive. But on the song “Under,” the drums were recorded by someone else a long time ago. When you listen to the album, the drums on “Under” are definitely the best-sounding drums on there, and that’s not only because it’s one of the world’s best drummers [Willie Green] playing, but also because he was recorded by an engineer who was very good at recording drums. But people who work with computers normally sit there on their own and are simultaneously being musician, engineer, composer — all these different jobs. It may be humbling to say, but perhaps we’re not all equally good at all these jobs and there’s a reason for calling in the experts.