Multi-channel Recording

Multi-channel recording means recording more than two separate channels of audio at once on the same computer, synchronized to each other. Typically this does not work "out of the box" on Windows consumer systems, and it always requires appropriate hardware and drivers, together with recording software that can work with that hardware/driver combination.


Requirements

  • Hardware support: you need a sound card or external audio interface which has enough Analog to Digital Converters (ADCs) to do multi-channel recording. Most consumer cards only have one stereo pair of ADCs that is switched between various inputs such as Line-In and "Mic". You will need at least a semi-professional device to find support for multi-channel recording.
  • Driver support: the drivers for the device must make it possible to record more than two channels at once. This is more problematic than it might seem, because the standard sound interfaces for many operating systems were designed long before multi-channel recording was possible, and so only allow for up to two channels of recording. Also, consumer-level systems are not designed to achieve the low latencies and high throughputs needed for high quality multi-channel recordings.
  • Application support: the application you are recording into must support working with multiple channels of audio. Audacity supports recording however many channels the device offers (for example, 24). The number of channels desired can be selected in the Devices tab of Preferences. There are two current limitations:
    • Channel selection: You cannot select exactly which channels are used - Audacity will simply use the first ones it finds. You may need to increase the number of recording channels in Audacity preferences (possibly to the maximum supported by the device, even though you are only recording a subset of them) until all the channels you want are included. This may mean having to delete silent tracks after recording. Some audio interfaces, however, will display a "Multi" device. Selecting this as the recording device in Audacity should let you record all the channels at once automatically. To see how many input channels each device's drivers actually report, see the sketch after this list.
    • Channel to track allocation: Particular channels of the sound device cannot be recorded to particular tracks. After recording, multi-channel files can be exported using current Audacity, by choosing the appropriate mixdown option in Preferences (Import/Export tab). Playback support in Audacity is currently limited to stereo (2 channels), so all multi-channel recordings will be sent to your sound device in stereo. Your device can probably be configured as to whether the front left and front right speakers are used, or if output is duplicated to the surround channels. Offers from developers to help us add support for multi-channel playback are welcomed - to get in touch please join our developers' mailing list.
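
If you want to check how many input channels a device's drivers actually report before you start recording, the following sketch may help. It is not part of Audacity: it assumes the third-party Python "sounddevice" module, which wraps the same PortAudio library that Audacity uses, so the channel counts it prints should correspond to what Audacity can see.

  import sounddevice as sd

  # List every capture device together with the host API it belongs to and
  # the number of input channels its driver reports. The names and counts
  # printed depend entirely on your hardware and drivers.
  for index, dev in enumerate(sd.query_devices()):
      if dev["max_input_channels"] > 0:
          host = sd.query_hostapis(dev["hostapi"])["name"]
          print(f"{index:3d}  {host:<12}  {dev['max_input_channels']:2d} in  {dev['name']}")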

Crucially, available driver and application support for multi-channel audio (and whether you can use Audacity for multi-channel recording) depends on the operating system you are using. Please check the relevant section below for your particular system.


Windows

Windows Sound Interfaces

MME: The standard Windows MME (Multi Media Extensions) sound interface has been around since Windows 3.1. It supports up to two channels of recording, sample depths up to 16 bits, and sample rates up to 44100Hz. On playback, multiple applications can use the sound device at the same time, with all the audio being mixed and sample rate converted to 44100Hz in Windows before being sent to the audio interface. Nice and simple for going ping and utterly hopeless for multi-channel music production.

DirectSound: MME is also not much use for writing games, which is why, after the release of Windows 95, it became necessary to offer games manufacturers something better to persuade them off DOS. So DirectSound was born. This provided more flexible playback of audio, and later added multi-channel and surround sound playback for immersive game audio. Recording support was added later still. DirectSound offers somewhat lower latencies than MME, and the possibility of multi-channel recording on some devices.

ASIO: So in the meantime, serious audio recording and playback was left out in the cold. Proprietary solutions stepped into the gap, and Steinberg created the ASIO interface for bypassing the operating system entirely, and connecting audio applications direct to the audio interface. This gives very low latencies (because all the mixing and conversion involved in the MME interface is avoided), but means that only one application can use an audio interface at a time (no sharing between multiple applications, no system sounds).

Audacity supports ASIO but that support is not distributed in releases for licensing reasons. Audacity can be compiled with ASIO support as long as that build is not distributed to others.

WASAPI: The WASAPI application programming interface (API) was introduced with Windows Vista. WASAPI isolates audio more from the kernel, so providing greater stability; it allows a few further multi-channel devices to work without ASIO, and provides lower latency than MME and Windows DirectSound.

On the other hand, direct hardware access under WASAPI is limited to a WaveRT driver, which only a few built-in devices support (also, Audacity and many other audio programs do not support it). Latencies under WASAPI are higher than under WDM-KS, and MME and DirectSound are now both emulated on top of WASAPI. To compensate for this, [http://en.wikipedia.org/wiki/Windows_Store Windows Store] applications on Windows 8 can support offloading of audio processing to hardware, a capability that had been dropped with Vista. This is a necessary step for modern battery-dependent devices where software audio processing on the CPU would rapidly deplete battery life.

WASAPI has two significant benefits for Audacity.

External article about Windows APIs: For more on how Windows sound drivers work, and the different APIs, see this article by Claus Riethmüller. Note: after that page was written, DirectSound added recording support, as mentioned above.

Recording With Audacity

As distributed, Audacity comes with support for Windows MME and WDM drivers. MME drivers work fine for simple stereo recording and playback, and are available on all versions of Windows where Audacity will run. However, neither these nor most WDM drivers will provide multi-channel recording; if you try to send multiple inputs to Audacity with these, you will only be presented with a series of separate two-channel "recording devices", from which one can be chosen, rather than a single device offering the full number of input channels the interface actually has.
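
One way to see this splitting in practice is to list the capture devices grouped by host API. The sketch below again assumes the third-party "sounddevice"/PortAudio Python bindings rather than Audacity itself; under MME a multi-channel interface will typically appear as several separate two-channel entries, while another host API may expose the same interface as a single device with all of its channels.

  import sounddevice as sd

  # Group capture devices by Windows host API (MME, DirectSound, WASAPI, ...)
  # so the way each API presents the same physical interface is visible.
  for api in sd.query_hostapis():
      print(f"\n{api['name']}:")
      for dev_index in api["devices"]:
          dev = sd.query_devices(dev_index)
          if dev["max_input_channels"] > 0:
              print(f"  [{dev_index}] {dev['name']} "
                    f"({dev['max_input_channels']} input channels)")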


macOS

Mac Sound Interface

macOS is standardized on the Core Audio interface. Audacity fully supports Core Audio.

Recording With Audacity

Most hardware devices with the ability to record multiple channels should work with Audacity on Mac, provided they offer multiple channels under Core Audio - some only offer multiple channels using ASIO on Windows. A few devices which have been reported to record multi-channel into Audacity are listed below. Devices other than these may also work; if you have such a device, please let us know so we can consider adding it here.
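
As a rough illustration of what "providing multiple channels under Core Audio" means in practice, the sketch below records ten seconds from a multi-channel device using the third-party "sounddevice" and "soundfile" Python modules (neither is part of Audacity). The device name and channel count are placeholders - substitute whatever your own interface reports.

  import sounddevice as sd
  import soundfile as sf

  device_name = "My multi-channel interface"  # hypothetical: use your device's name
  channels = 8                                # hypothetical: channels your device reports
  fs = 48000                                  # sample rate in Hz

  # Capture ten seconds from all requested channels at once, then write the
  # result as a single multi-channel WAV file.
  recording = sd.rec(int(10 * fs), samplerate=fs, channels=channels,
                     device=device_name, dtype="float32")
  sd.wait()                                   # block until the capture finishes
  sf.write("multichannel_take.wav", recording, fs)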


Linux

Linux Sound Interfaces

The oldest sound driver interface in the Linux kernel is the OSS standard. This was the first serious attempt to provide a unified sound interface for *nix systems, and is also used on *BSD and some other Unix systems. It was designed in 1992 to provide an extended version of the card-specific SoundBlaster 16 interface. It made it into the Linux kernel; however, in 1998 the creator handed over maintenance to the kernel maintainers, and a commercially licensed fork was produced by 4Front Technologies. This was closed-source and cost money to install, so it attracted very little enthusiasm from the open-source community. The OSS drivers in the kernel source continued to be available, but few new drivers were being added, and many did not work very well.

A decision was made to start again from scratch, to address some of the limitations of the OSS interface (which, although it was being developed commercially, was stuck in 1998 as far as open source was concerned). Thus the Advanced Linux Sound Architecture, or ALSA, was born. This was designed to provide all the functionality of OSS, whilst also making it easier to support the increasing number of high sample rate, high bit depth, multi-channel audio interfaces. Latency was also a concern, with increasing demands for low-latency full-duplex operation from users. The majority of new sound development for Linux now uses ALSA rather than OSS, although code for other Unixes often still uses OSS, and for some reason developers of binary-only software for Linux always seem to use OSS. ALSA has been included in the Linux kernel since version 2.5.0, and is also available as independent releases from the ALSA project.