Computer companies rarely make audio or music hardware. There have been exceptions: If you’ve been around a while, you may recall that Atari included MIDI jacks on its ST computers; Apple once stuck its logo on a simple MIDI interface; and IBM marketed something called the “Music Feature,” which was actually a Yamaha four-op FM synth chip on a PC daughter card. But generally speaking, the companies making the general-purpose pizza boxes and laptops that have taken over our industry left others — from Creative Technology to Motorola to Kurzweil to Digigram — to worry about the chips, the cards, the breakout boxes and the other hardware add-ons that we must have in order to do sound generation and DSP.
Computer makers don’t normally do much in the way of audio and music software, either. On the PC side, they don’t make any software at all, because they don’t own the operating system. Microsoft, which does, has come up with several pieces of Windows (or is that “shards”?), such as DirectX and Windows Media Player, that are helpful to third-party developers. On the Mac side, Apple has historically made a few very tentative forays into the area, such as the lame QuickTime Musical Instruments and the positively comatose MIDI Manager. But by and large, most of that kind of development — even at the system level — has been left up to others like Steinberg for audio interconnection, and Opcode and Mark of the Unicorn for MIDI.
But under Apple’s latest — and as of last month, the default — operating system, OS X (which apparently can be pronounced “oh-ess-eks” or “oh-ess-10,” depending on your mood), that’s all changing. And considering Apple’s acquisition last summer of the well-regarded German software and hardware maker Emagic, it’s reasonable to assume that it’s going to be a big change.
A year ago in this column, I maintained that OS X wasn’t exactly ready for prime time when it came to music and audio applications, but with the introduction last fall of OS 10.2, that’s history. Oliver Masciarotte covered the technical ramifications of 10.2, dubbed “Jaguar,” in his November 2002 “Bitstream” column. (Apple has always used internal code names, some of them very clever, for its projects in development, but why this particular one has caught on so well with the public is beyond me. If you ask me, “Tsunami” was a whole lot cooler.)
I’m going to take a different tack and talk more about the business side of 10.2’s music and audio functionalities, and what it all means to the user and developer communities. To help me, I interviewed Dan Brown, who for the past three years has held the somewhat awkward title of “audio technologies manager, worldwide product marketing” at Apple Computer. Brown is also a member of the Board of Directors of the MIDI Manufacturers Association, which means that he’s aware of the issues from several different perspectives. And he’s willing to talk about them, or at least most of them. What follows are some of that conversation’s main points.
Why has it taken so long for many MIDI and audio developers to adopt OS X?
We believe by now, working with the developer community, that there are no longer any technical issues inherent in the OS that make it a problem for music, which is not to say that there haven’t been issues in the past. When it was first released, it was usable for real-time applications like audio, but we didn’t have Carbon [OS X-compatible] apps to test it on. So we had a chicken-and-egg situation, and we had to wait until the apps were ready before we could do any testing. And then we found that there were inefficiencies; those have been fixed.
Does Apple consider USB to be a viable way to send MIDI?
A USB MIDI class driver is good for consumer-level use. For professionals, we recommend using time-stamping drivers the way Emagic and Mark of the Unicorn do. The USB Implementers Forum, www.USB.org, owns the USB specification — we don’t — and they don’t support time stamping. It’s because they are more oriented toward consumer uses of USB and not the MI or media-production industries. So we can’t do it ourselves, but we’d love to see the developer community get together and deliver a class standard for time stamping that everyone can use.
What about MIDI over FireWire?
Yamaha is doing that with mLAN. Again, we don’t own the spec, but we certainly support it; and if people want to use mLAN, that’s fine with us, but that specification belongs to Yamaha. We can only implement what mLAN consists of.
Before OS X, the Opcode MIDI System, OMS, was the de facto standard to deal with MIDI interfaces and inter-application communication, even though Opcode essentially ceased to exist some years ago and no one has been supporting it. I heard at one point that Apple was interested in picking up that technology. How come it didn’t?
I can’t really speak to that.
But is OMS’ functionality part of OS X?
Yes, we deliver all of the MIDI services that were in OMS. We have an application called Audio MIDI Setup that recognizes all devices that are either class-compliant or have a driver loaded. That includes interfaces and USB keyboards and other instruments. We would like devices to be as class-compliant as possible so that they at least work with that software, without requiring any custom drivers. Maybe you’ll need custom drivers to take advantage of the full feature set of a device, but we’d like basic setup to be as painless as possible.
What about older applications that aren’t updated for OS X? Will they work?
Old software that has not been “carbonized” should run in OS X’s “classic” mode; but in that mode, hardware devices are not directly visible to applications. So you almost need two drivers: one to get into the OS X environment and one to get out to the ports. With that much going on, performance can’t be guaranteed. Applications just won’t run well, if they run at all. On the other hand, under OS X, carbonized native applications like Reason and Ableton Live run very well. I go home and use Reason almost every night. They’re so stable; it’s a revelation.
How is synchronization between applications handled in the operating system? Under OMS, you had to specify which devices handled what kind of timing data.
Synchronization is API [application programming interface] level functionality, meaning that it’s up to each application how to deal with it. There’s nothing special about managing synchronization data: The OS just treats it as time-sensitive, time-stamped data. If the application assigns it a high priority, then it will be passed accurately.
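To make that answer concrete, here is a minimal sketch of the idea Brown describes: sync messages get no special treatment, they are simply time-stamped events dispatched in timestamp order like everything else. This is an illustration of the concept only, not the actual Core MIDI API; the class and method names are my own invention.

```python
import heapq

# Illustrative sketch (not the Core MIDI API): time-stamped events,
# including sync data such as MIDI clock, go into one queue ordered
# by timestamp and are dispatched when their time arrives.
class EventQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker preserves insertion order for equal stamps

    def schedule(self, timestamp_ms, message):
        heapq.heappush(self._heap, (timestamp_ms, self._counter, message))
        self._counter += 1

    def dispatch_through(self, now_ms):
        """Pop every event whose timestamp has come due."""
        due = []
        while self._heap and self._heap[0][0] <= now_ms:
            ts, _, msg = heapq.heappop(self._heap)
            due.append((ts, msg))
        return due

q = EventQueue()
q.schedule(20.0, "note-on C4")
q.schedule(10.0, "MIDI clock")   # sync data: same queue, same rules
q.schedule(30.0, "note-off C4")
print(q.dispatch_through(25.0))  # events stamped 10 ms and 20 ms are due
```

The point of the sketch is the last line: the clock message and the note message come out in timestamp order from the same queue, which is all "nothing special about synchronization data" really means.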
What’s replacing the old Sound Manager to handle audio streams, and how is it different?
Sound Manager is useful for consumer-level playback in OS 9. It handles two channels of up to 16-bit, 48kHz audio. But professionals needed something more flexible, and so ASIO, EASI and Direct I/O were developed by various companies. They were critical. Now, we have Core Audio, which is functionally the same as Direct I/O or ASIO. It allows any number of channels of input and output, with up to 32-bit floating-point precision. We’ve made that middleware unnecessary. We haven’t prevented those companies from bringing their protocols up to OS X, but they don’t have to. And I don’t know why they would want to.
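Why does 32-bit floating point matter compared to Sound Manager’s 16-bit integers? The short answer is headroom. Here’s a little illustration of the general principle (ordinary DSP arithmetic, not Core Audio code): an integer pipeline clips the moment a gain stage pushes a sample past full scale, while a float pipeline carries values above nominal full scale (1.0) intact until the final conversion for output.

```python
# Illustrative sketch of the format difference, not Core Audio code.
INT16_MAX = 32767

def int16_gain(sample, gain):
    # Integer pipeline: boosting a near-full-scale sample clips immediately.
    return max(-32768, min(INT16_MAX, int(sample * gain)))

def float_gain(sample, gain):
    # Float pipeline: intermediate values above 1.0 survive...
    return sample * gain

def float_to_int16(sample):
    # ...and only clip at the final conversion for output.
    return max(-32768, min(INT16_MAX, int(round(sample * INT16_MAX))))

loud = 30000                                   # near full scale in 16-bit
print(int16_gain(loud, 2.0))                   # clipped hard at 32767
boosted = float_gain(loud / INT16_MAX, 2.0)    # about 1.83: over "full scale"
attenuated = float_gain(boosted, 0.5)          # later gain stage brings it back
print(float_to_int16(attenuated))              # 30000: the signal is intact
```

A 100-channel mix bus is exactly this situation multiplied: dozens of summed signals routinely exceed full scale mid-chain, and float precision lets the mixer sort it out before the D/A.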
What about plug-ins? Are you looking to adopt a common format for them, as well?
The reason there is such a variety of plug-in formats is purely marketing. If the developer community were to choose a common format — consolidating all of this redundancy — then that would eliminate all of the confusion. We have a protocol for plug-ins called Audio Units. It comes in two manifestations: one for DSP and the other for virtual instruments. We’re working with developers to standardize on that protocol. We want to give them the opportunity to use our platform and prevent the complexity from increasing.
The exception is hardware-specific formats, like TDM. We think Digidesign should bring it up to OS X, and they’re planning to, but we don’t get involved with manufacturers’ support of their proprietary hardware. So we can’t tell them, for example, which versions of their software should support what hardware.
Speaking of virtual instruments, how well does OS X deal with latency issues?
Latency is always a trade-off between the number of tracks, or the amount of CPU power you want to use, and time. How low can you set your buffers with 100 channels going, each with reverb, without the sound breaking up? There’s no real way yet to quantify a system’s latency according to specific parameters, because those benchmarks haven’t been developed. That’s the role of the music-technology press. If we had a benchmark, we could answer the question.
But we think that with Core Audio, we’ve delivered the best system in terms of throughput — less than three milliseconds. The Peabody Institute at Johns Hopkins University did some testing of audio throughput on different operating systems, expecting that Linux would have the smallest latency. To their surprise, we came out ahead. [Actually, Linux came out ahead when the system was run without additional CPU load; but under more typical conditions, Core Audio was indeed faster. You can see the report at http://gigue.peabody.jhu.edu/~mdboom/latency-icmc2001.pdf.]
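For readers who want to sanity-check that figure: the arithmetic behind buffer latency is simple and universal (this is general DSP math, not an Apple benchmark). Each I/O buffer adds buffer_frames / sample_rate seconds of delay, so the buffer sizes below are hypothetical examples, not measured Core Audio settings.

```python
# Buffer latency arithmetic: each I/O buffer of N frames at rate R
# contributes N / R seconds of delay.
def buffer_latency_ms(buffer_frames, sample_rate):
    return buffer_frames / sample_rate * 1000.0

# A 128-frame buffer at 44.1 kHz is about 2.9 ms, which is consistent
# with the "less than three milliseconds" throughput figure quoted above.
print(round(buffer_latency_ms(128, 44100), 2))   # 2.9
print(round(buffer_latency_ms(1024, 44100), 2))  # 23.22
```

This is also why Brown frames latency as a trade-off: shrinking the buffer cuts the delay, but gives the CPU less time per buffer to render all those reverb-laden channels before the sound breaks up.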
What was the motivation behind the acquisition of Emagic?
We do audio, and we want to do it better. Emagic had the expertise. But the amount of integration with Apple is not significantly different than that of other developers. They’re still in Germany, and we have no interest in changing that or subsuming them in the foreseeable future.
What about the Logic Windows users who feel like they’ve been abandoned?
Emagic is communicating with Logic users — we’re leaving that to them. They’re far more aware of the dynamics of communicating with their user base than we are after six months of owning the company, and they are making some very attractive offers.
As are Steinberg and Cakewalk.
There’s some thinking in the audio community that what you’re planning to do with Emagic is similar to what you did when you took over what became Final Cut Pro from Macromedia; that is, you took a big chunk of what had been Avid’s market. So are you hoping to see the same thing happen to Digidesign that happened to its parent company?
Communication between Emagic and Apple has improved, but not at the expense of other developers. We have a platform to promote, and we want it to be the platform for audio development, period. We’re interested in promoting native solutions, which a Mac can do all by itself. We’re looking to get applications that are scalable, so that people can use them at whatever level they want. We’re supporting everybody.
We’ve put in the hooks that make it easy for people to write software for the system. We are delivering the standards as much as we can. I’m looking forward to lots of little applications popping up and lots of creativity coming out.
What do you think of an application called Audio Hijack?
I think it’s delightful what they’re doing, although there are some serious copyright implications. That sort of innovation is the kind of thing I’m talking about. [And I’ll talk a lot more about it next month.]
Is there a good place to get independent information about audio in OS X?
There’s a group on Yahoo called daw-mac for people using Macs for audio. It’s platform-agnostic, which makes it really valuable. And do you know about www.osxaudio.com? It’s my home page. I look at it every day.
As I was finishing this month’s column, I found out that my old friend James Romeo died. Jim was a major character on the early MIDI scene. Armed with a Ph.D. in music composition from Harvard, he was the first person to replace a pit orchestra with MIDI sequencers and synths when he used a pair of Kurzweil 250s with Performer and a tempo-input device for a production of two operas by Claude Debussy. Naturally, he got tons of flak from the musicians’ union.
He wrote a very clever ear-training program for the Mac and got the attention of Coda Music Systems, which was developing Finale at the time. Jim ended up co-authoring the documentation for the first version of Finale, which was 1,500 pages.
He was also a pioneer in marketing royalty-free MIDI files of original and public-domain music, with hundreds of tunes in his catalog, and I once even found a CD of his library music in the checkout line at my local drug store.
In recent years, I’d lost touch with him, but I knew that he and his wife were putting most of their time and energy into writing and performing for their church. A year-and-a-half ago, he developed Lou Gehrig’s disease, and I found out about it through a mutual friend last summer. He passed away at home on December 2, at the age of 47.
Although Paul Lehrman is currently using four Macs and five operating systems, he’s not as confused as you might think. Thanks to Jerry Hsu and George Litterst for their help this month.