On Friday, December 16, the Mercy College Department of Music Production and Recording Arts presented Lies in High Fidelity, a showcase for electronic music composed, recorded and performed by Mercy students. The production was a collaboration between three classes—Electronic Music Production III (a.k.a. MTEC-212), Electronic Music Performance (a.k.a. MTEC-318) and Live Sound Reinforcement (a.k.a. MTEC-345)—and was held in the Lecture Hall of the college’s Main Hall.
Guided by associate professor and program director Stephen B. Ward, associate director of Music Studios Sam Stauff, adjunct professor Jarrod Ratcliffe and myself, students handled both sides of the equation—performance and production—and were encouraged to treat the showcase as they would any professional production.
Studio night managers AJ Chiarella and Zaire Smith were an important part of the event, with Chiarella directing video for the livestream feed and in-house screen, and Smith solving a multitude of technical issues, as well as wrangling students when necessary(!).
Mercy students used a variety of tools to create their electronic music projects. As Ward explained, “All Electronic Music Performance solo and collaborative performances ended up as Ableton Live sessions. In several cases, the projects originated in other DAWs (FL Studio, Reason, Logic Pro X and so on) and were ported over to Ableton Live to take advantage of its performance capabilities and its easy integration with hardware controllers.
“There were also cases in which students started in Ableton Live but used particular plug-ins in their home setups that we don’t have, so they froze those tracks as audio files,” he continued. “Some performers simply used Ableton Live to launch a stereo mix of the project while they performed live on virtual instruments in Ableton Live. Other students used Live to add live effects to pre-recorded tracks as a way of remixing the project in performance.”
Students in Electronic Music Production also relied heavily on Ableton Live, though according to Ratcliffe, “One student, Stephen Burney, triggered his backing track and played/sang on top with live effects using MainStage. Another student, Bobby McPadden, had created scenes in Ableton and triggered those live directly from Ableton for his performance. We use a lot of Ableton in the class, but I also like to let my students use tools they are most comfortable with when they are making songs for the showcase.
“A lot of people used a combination of DAWs, some starting with Ableton’s sampler or synths, then moving to Logic or Pro Tools to mix,” he added. “At least one student, José Uezyoga, started in FL Studio for a lot of his sound design, then brought his bounced stems into Ableton Live for performance in session view.”
In addition to the audio recordings, students in Electronic Music Performance created video content for the showcase. “The pre-show,” Ward explained, “began with a series of Ambient Music videos from MTEC-318. Students created each of the videos for these projects. The show began with Prologue, featuring the voice-of-god (a.k.a. me), and the abstract video was created in Ableton Live Suite using Vizzie, a collection of video generation and processing plug-in objects included with Live. Many of the students’ ambient videos were also created and/or processed using Vizzie.”
“Most of the students in Electronic Music II worked in full-featured video editors including Final Cut Pro, DaVinci Resolve and Adobe Premiere,” Ratcliffe noted. “Videos were created after they had mixes of their songs, and they just pulled their mixes into the video editors. The footage was mostly royalty-free footage they sourced online. Two of the students had fairly involved performances and did not make their own videos, so I served as VJ for them, using Resolume Arena media server software.”
All of the projects were pre-loaded into two master playback computers, one for each class, enabling the Live Sound students to minimize changeover time between performers. A few pre-production meetings allowed the Live Sound students to generate an input list and create scenes in their digital mixers (Behringer X32s).
Several performers used live instruments along with their electronic content, so inputs were added for electric guitar, live toms and two vocal mics for announcements (two additional vocal mics were processed live via Ableton or MainStage; audio from these mics was merged with the backing tracks by the respective performers). I explained to the Live Sound students that even though they did not yet have access to the audio sources, they could still do “grunt work” on the scenes—i.e., naming channels, linking stereo pairs, and turning on HPFs and compressors for inputs where they’d likely need them.
An important component of the event was a livestream broadcast on the Mercy College YouTube channel. Having been made aware of the perils of creating a livestream mix from FOH by a recent guest lecturer, my class decided it would be smart to use a separate console in an off-stage location to mix the livestream (gently nudged in this direction by me, of course).
This opened the door to the concept of audio networking and using a common stage box connected to two consoles via AES50. It was a great opportunity for the students to learn about clocking, shared head amps, local versus remote inputs and more. Prior to the event, they spent a class creating a mock setup so that they’d be ready to deploy the gear on location.
Their planning paid off: Within about an hour of load-in on show day, the FOH console was passing audio. Initially they had some trouble getting the livestream console to recognize the digital stage box, but this was simply a matter of turning the console off and on again. Ideally, they’d have set up the console for the livestream in an isolated room well away from the stage, but they had to settle for setting it up backstage, with student Emma Armus mixing the stream using headphones.
Another task for the Live Sound students was recording the event in Pro Tools for archival purposes, as well as for a possible remix for the YouTube channel. The recording was facilitated by the X32’s USB port, which allows the mixer to appear as a valid audio interface in Pro Tools. As soon as the FOH console was up and running, my two students overseeing the recording were asked to create the PT session and figure out how to route the X32’s inputs (arriving from the stage box via AES50) into PT (which they did).
When soundcheck began, one of the Live Sound students was struggling to communicate from FOH to the performers on stage when another student suggested connecting a talkback microphone. One mic, one cable and two minutes later, they had the talkback mic routed into the monitors and were running soundchecks for each performer, including a fairly painless stereo monitor mix.
The Live Sound students encountered a few speed bumps, some of which I allowed to happen so that they would have a solid learning experience, as opposed to me constantly dictating what they should do. They responded like pros, taking the necessary steps to iron out audio issues such as monitor feedback, adding effects to the FOH mix as requested and tweaking levels. And they finished soundcheck with plenty of time for the dark stage/dinner break.
It was great to see students from all three classes interact and work together with the common goal of making the production a success. As Professor Ward mentioned that evening, “It takes a village to create a really good production.”