Audio mixing (recorded music)




A Sony DMX R-100 digital mixing console, as used in project studios


In sound recording and reproduction, audio mixing is the process of combining multitrack recordings into a final mono, stereo or surround sound product. The source tracks are blended together using various processes such as equalization and compression.[1] Audio mixing techniques and approaches vary widely and, depending on the skill level or intent of the mixer, can greatly affect the qualities of the resulting recording.[2]


Audio mixing techniques largely depend on music genres and the quality of sound recordings involved.[3] The process is generally carried out by a mixing engineer, though sometimes the record producer or recording artist may assist. After mixing, a mastering engineer prepares the final product for production.


Audio mixing may be performed on a mixing console or digital audio workstation.




Contents





  • 1 History


  • 2 Equipment

    • 2.1 Mixing consoles


    • 2.2 Outboard gear and plugins



  • 3 Multiple level controls in signal path

    • 3.1 Processes that affect levels


    • 3.2 Processes that affect frequency response


    • 3.3 Processes that affect time


    • 3.4 Processes that affect space



  • 4 Mixdown


  • 5 Mixing in surround sound


  • 6 References


  • 7 External links




History


In the late 19th century, Thomas Edison and Emile Berliner developed the first recording machines. The recording and reproduction process itself was almost entirely mechanical, with few or no electrical parts. Edison's phonograph cylinder system used a small horn terminated in a stretched, flexible diaphragm attached to a stylus, which cut a groove of varying depth into the malleable tin foil of the cylinder. Emile Berliner's gramophone system recorded music by inscribing spiraling lateral cuts onto a flat disc.[4]


Electronic recording became more widely used during the 1920s. It was based on the principles of electromagnetic transduction. Because a microphone could now be connected remotely to the recording machine, microphones could be positioned in more suitable places. The process improved further when the outputs of several microphones could be mixed before being fed to the disc cutter, allowing greater flexibility in the balance.[5]


Before the introduction of multitrack recording, all sounds and effects that were to be part of a record were mixed at once during a live performance. If the mix was not satisfactory, or if one musician made a mistake, the selection had to be performed again until the desired balance and performance were obtained. With the introduction of multitrack recording, the production of a modern recording changed into a process that generally involves three stages: recording, overdubbing, and mixing.[6]


Modern mixing emerged with the introduction of commercial multi-track tape machines, most notably when 8-track recorders were introduced during the 1960s. The ability to record sounds into separate channels meant that combining and treating these sounds could be postponed to the mixing stage.[7]


In the 1980s, home recording and mixing became more efficient. The 4-track Portastudio was introduced in 1979. Bruce Springsteen released the album Nebraska in 1982 using one. The Eurythmics topped the charts in 1983 with the song "Sweet Dreams (Are Made of This)", recorded by band member Dave Stewart on a makeshift 8-track recorder.[8] In the mid-to-late 1990s, computers replaced tape-based recording for most home studios, with the Power Macintosh proving popular.[9] At the same time, digital audio workstations, first used in the mid-1980s, began to replace tape in many professional recording studios.



Equipment



Mixing consoles




A simple mixing console



A mixer (mixing console, mixing desk, mixing board, or software mixer) is the operational heart of the mixing process.[10] Mixers offer a multitude of inputs, each fed by a track from a multitrack recorder. Mixers typically have 2 main outputs (in the case of two-channel stereo mixing) or 8 (in the case of surround).


Mixers offer three main functionalities.[10][11]


  1. Summing signals together, normally done by a dedicated summing amplifier or, in a digital mixer, by a simple algorithm (see the sketch after this list).

  2. Routing source signals to internal buses or to external processing units and effects.

  3. Providing on-board processing, such as equalization and compression.
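As a rough illustration of the first function, a digital mixer's summing stage can be modelled as adding sample values after applying each channel's gain. The following Python sketch is illustrative only; it assumes mono tracks stored as NumPy arrays, and the function name mix_bus is invented for the example.

```python
import numpy as np

def mix_bus(tracks, gains_db):
    """Sum equal-length mono tracks onto one bus after per-channel gain.

    tracks:   list of 1-D NumPy arrays with sample values in the range -1..1
    gains_db: per-track gain settings in decibels (0 dB = unity)
    """
    bus = np.zeros_like(tracks[0], dtype=float)
    for track, gain_db in zip(tracks, gains_db):
        bus += track * 10.0 ** (gain_db / 20.0)  # convert dB to linear amplitude
    return bus
```

In a real console the summed bus then passes through further gain stages, so headroom has to be managed to avoid clipping.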

Mixing consoles can be large and intimidating due to the exceptional number of controls. However, because many of these controls are duplicated (e.g. per input channel), much of the console can be learned by studying one small part of it. The controls on a mixing console will typically fall into one of two categories: processing and configuration. Processing controls are used to manipulate the sound. These can vary in complexity, from simple level controls, to sophisticated outboard reverberation units. Configuration controls deal with the signal routing from the input to the output of the console through the various processes.[12]


Digital audio workstations (DAWs) have many mixing features, and can potentially offer more processes than a major console. The distinction between a large console and a DAW equipped with a control surface is that a digital console typically consists of dedicated digital signal processors for each channel; it is thus designed not to "overload" under the burden of signal processing, which could crash the system or lose signals. DAWs dynamically assign resources such as digital signal processing power, and may run out if too many processes are in simultaneous use. Such an overload can often be solved by adding more DSP hardware to the DAW, although the cost of doing so may begin to approach that of a major console.[12]



Outboard gear and plugins


Outboard gear (analog) and software plugins (digital) can be inserted into the signal path to extend processing possibilities. Outboard gear and plugins fall into two main categories:[10][11]



  • Processors – these devices are normally connected in series to the signal path, so the input signal is replaced with the processed signal. Examples include equalization, panning, dynamic processing (compressors, gates, expanders, and limiters). However, some processors are also used in parallel, as is the case in techniques such as parallel compression/limiting (a.k.a. New York compression) and sidechain equalization.


  • Effects – broadly, any unit that has an effect on the signal; the term is mostly used for units connected in parallel to the signal path, which therefore add to the existing sounds rather than replace them. Common examples include reverb and delay. Some effects, such as chorus, flange, and vibrato, are more commonly used in series (see the routing sketch after this list).
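The distinction between series (insert) and parallel (send/return) routing can be sketched in a few lines of Python. This is a simplified model rather than any particular console's architecture, and the function names are invented for the example; the signals are assumed to be NumPy arrays or plain floats.

```python
def insert_effect(dry, process):
    """Series ("insert") routing: the processed signal replaces the dry signal."""
    return process(dry)

def send_return(dry, effect, send_db=-6.0, return_db=0.0):
    """Parallel ("send/return") routing: the effect output is added to the dry signal."""
    send = dry * 10.0 ** (send_db / 20.0)             # level sent to the effect
    wet = effect(send) * 10.0 ** (return_db / 20.0)   # level returned from the effect
    return dry + wet                                  # dry and wet are summed, not replaced
```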


Multiple level controls in signal path


A single signal can pass through a large number of level controls, such as an individual channel fader, a subgroup master fader, the master fader and the monitor volume control. According to audio engineer Tomlinson Holman, this multiplicity of controls creates problems: every stage has its own dynamic range, and each must be set correctly to avoid excessive noise or distortion. Finding the correct settings for this variety of controls can nonetheless be accomplished relatively quickly. Holman points to the scale of each control as a clue: with 0 dB as the nominal setting, many controls have some "gain in hand" above 0 dB, meaning a channel can be turned up from its nominal setting and still sound clean. Other controls, such as submasters and master level controls, are used for slight trims to the overall section-by-section balance or for the main fade-ins and fade-outs of the mix.[12]:174
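Because gain expressed in decibels simply adds along the signal path, the combined effect of a chain of level controls can be checked with basic arithmetic. A minimal sketch, using hypothetical settings:

```python
def chain_gain_db(*stage_gains_db):
    """Overall gain of a signal path: per-stage gains in dB add together."""
    return sum(stage_gains_db)

# Hypothetical settings: channel fader +3 dB, subgroup -2 dB, master 0 dB,
# monitor level -10 dB -> overall gain of -9 dB relative to nominal.
overall = chain_gain_db(3.0, -2.0, 0.0, -10.0)
```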



Processes that affect levels



  • Faders – used to attenuate or boost the level of signals.


  • Pan pots – A fundamental part of configuring a recording console is panning. Pan pots (panoramic potentiometers) place a sound among the channels: L, C, R, LS, and RS.[12]:174 They are used to pan signals to the left or right and, in surround, to the front or back.


  • Compressors – A compressor attenuates the level of a track once it passes beyond a set threshold. Its primary use in mixing is to limit the dynamic range of a track. Compressors are equipped with a number of controls, including the threshold, the amount of compression (ratio), and how quickly or slowly the compressor acts (attack and release); a simplified sketch follows this list.[12]:175


  • Expanders – An expander does the opposite of a compressor: it increases the dynamic range of a source, either across a wide range or restricted to a narrower region by its controls. Restricting expansion to low-level sounds helps minimize noise; this function is often referred to as downward expansion, noise gating, or keying, and it reduces the level of signals below a threshold set by a control. Noise gates can introduce audible problems: in a dialog recording with air-conditioning noise in the background, for example, the gate may remove the air-conditioner sound between lines of dialog, creating an exaggerated difference that is more noticeable than if the audio had been left unprocessed.[12]:176


  • Limiters – A limiter is a compressor with a Ratio of 10:1 or higher. Often referred to as a "brick-wall" limiter, some limiters have extremely high (or infinite) Ratios meaning that little to no audio surpasses the threshold. Limiters are most commonly used in mixing to strictly limit the maximum output volume of a track, buss, or overall mix. Limiters are especially useful in digital mixing to avoid clipping.[12]:176
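As a rough sketch of how a compressor's threshold, ratio, attack and release interact, the following Python function implements a heavily simplified feed-forward peak compressor. Real devices differ considerably; the parameter defaults and the function name are arbitrary choices for this example, and NumPy is assumed.

```python
import numpy as np

def compress(signal, sr=44100, threshold_db=-20.0, ratio=4.0,
             attack_s=0.01, release_s=0.1):
    """Heavily simplified feed-forward peak compressor (illustrative only)."""
    attack_coef = np.exp(-1.0 / (attack_s * sr))
    release_coef = np.exp(-1.0 / (release_s * sr))
    env = 0.0
    out = np.empty_like(signal, dtype=float)
    for i, x in enumerate(signal):
        level = abs(x)
        coef = attack_coef if level > env else release_coef
        env = coef * env + (1.0 - coef) * level      # smoothed level detector
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = max(level_db - threshold_db, 0.0)  # amount above the threshold
        gain_db = -over_db * (1.0 - 1.0 / ratio)     # gain reduction set by the ratio
        out[i] = x * 10.0 ** (gain_db / 20.0)
    return out
```

Setting the ratio very high turns this static curve into a crude limiter, while a downward expander or gate applies its gain reduction below, rather than above, the threshold.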

These items discussed thus far affect the level of audio signal. The most commonly used process is level control, which is used even on the simplest of mixers.[12]:177



Processes that affect frequency response


Processes that primarily affect the frequency response of the signal are generally seen as second in importance to level control. These processes clean up the audio signal, improve how signals blend with one another, adjust for the loudness effect, and generally create a more pleasant (or deliberately degraded) sound. There are two principal frequency-response processes: equalization and filtering.[12]:177



  • Equalizers – The simplest description of EQ is the process of altering the frequency response in a manner similar to what tone controls do on a stereo system. Professional EQs dissect the audio spectrum into three or four parts which may be called the low-bass, mid-bass, mid-treble, and high frequency controls.[12]:178


  • Filters – Filters are used to eliminate certain frequencies from the output; they strip away a chosen part of the audio spectrum. There are various types. A high-pass filter (low-cut) is used to remove excessive room noise at low frequencies. A low-pass filter (high-cut) can help isolate a low-frequency instrument recorded in a studio alongside others. A band-pass filter is a combination of high- and low-pass filters, also known as a telephone filter, because a sound lacking high and low frequencies resembles the quality of sound transmitted and received by telephone. (A minimal filter sketch follows this list.)[13]
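If SciPy is available, the filter types described above can be sketched with standard Butterworth designs. This is only one of many possible implementations, and the cutoff frequencies, filter order and function names are illustrative assumptions rather than fixed conventions.

```python
from scipy.signal import butter, sosfilt

def high_pass(signal, sr=44100, cutoff_hz=80.0, order=2):
    """Low-cut (high-pass) filter, e.g. to remove low-frequency room rumble."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, signal)

def low_pass(signal, sr=44100, cutoff_hz=5000.0, order=2):
    """High-cut (low-pass) filter."""
    sos = butter(order, cutoff_hz, btype="lowpass", fs=sr, output="sos")
    return sosfilt(sos, signal)

def band_pass(signal, sr=44100, low_hz=300.0, high_hz=3400.0, order=2):
    """"Telephone" band-pass filter: keeps roughly the 300 Hz to 3.4 kHz band."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, signal)
```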


Processes that affect time



  • Reverbs – Reverbs are used to simulate boundary reflections created in a real room, adding a sense of space and depth to otherwise 'dry' recordings. Another use is to distinguish among auditory objects; all sound having one reverberant character will be categorized together by human hearing in a process called auditory streaming. This is an important feature in layering sound, in depth, from in front of the speaker to behind it.[12]:181

Before the advent of electronic reverb and echo processing, physical means were used to generate the effects. An echo chamber, a large reverberant room, could be equipped with a speaker and at least two spaced microphones. Signals were then sent to the speaker and the reverberation generated in the room was picked up by the two microphones, constituting a "stereo return".[13]
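Reverberation algorithms are considerably more elaborate, but the broader family of time-based effects can be illustrated with a simple feedback delay (echo). The sketch below assumes NumPy and uses invented parameter names; it is not how a production reverb works.

```python
import numpy as np

def feedback_delay(signal, sr=44100, delay_s=0.25, feedback=0.4, mix=0.3):
    """Simple feedback delay ("echo"): repeats decay by the feedback factor."""
    d = int(delay_s * sr)
    buf = np.zeros(len(signal) + d)
    buf[: len(signal)] = signal
    for i in range(d, len(buf)):
        buf[i] += feedback * buf[i - d]         # each repeat feeds back into the delay line
    wet = buf[: len(signal)]                    # truncate the tail to the input length
    return (1.0 - mix) * signal + mix * wet     # blend dry and wet, as a parallel effect
```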



Processes that affect space



  • Panning – Static panning is used to control the location of phantom sources. This is achieved by applying a panning law that maps the balance of the loudspeakers' signal volumes, or their relative time shift, to a perceived location. Sources can be placed at any location between a pair of frontal or rear loudspeakers. Dynamic panning alters the volume or time balance of the loudspeaker pair to create the impression of moving sources. (A constant-power panning sketch follows this list.)


  • Pseudostereophony – Pseudostereophony techniques are applied to broaden the sound image. This way the apparent source width or the degree of listener envelopment is increased. A number of pseudostereo recording and mixing techniques are known from the viewpoint of audio engineers[14][15] and researchers.[16][17]
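A common panning law is the constant-power (sine/cosine) law, which keeps the perceived loudness roughly constant as a source moves between two speakers. A minimal sketch, assuming a mono NumPy signal and a position value from -1 (hard left) to +1 (hard right):

```python
import numpy as np

def constant_power_pan(mono, position):
    """Pan a mono signal between two speakers using a constant-power law."""
    angle = (position + 1.0) * np.pi / 4.0   # map -1..+1 onto 0..pi/2
    left = mono * np.cos(angle)              # cos^2 + sin^2 = 1, so power stays constant
    right = mono * np.sin(angle)
    return left, right
```

Dynamic panning amounts to varying the position value over time.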


Mixdown


The mixdown process converts a program with a multiple-channel configuration into a program with fewer channels. Common examples include downmixing from 5.1 surround sound to stereo, and from stereo to mono. In the former case, the left and right surround channels are blended with the left and right front channels, the centre channel is blended equally with the left and right channels, and the LFE channel is either mixed with the front signals or not used. Because these scenarios are so frequent, it is standard practice to verify the sound of such downmixes during production to ensure stereo and mono compatibility.
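A typical 5.1-to-stereo fold-down along the lines described above can be expressed as a weighted sum of the channels. The -3 dB (≈0.707) coefficients below are common defaults rather than a universal standard, the LFE is discarded by default, and the function name is invented; actual downmix coefficients vary by delivery format.

```python
def downmix_51_to_stereo(L, R, C, LFE, Ls, Rs,
                         centre_gain=0.707, surround_gain=0.707, lfe_gain=0.0):
    """Fold a 5.1 program (six equal-length channel arrays) down to a stereo pair."""
    left = L + centre_gain * C + surround_gain * Ls + lfe_gain * LFE
    right = R + centre_gain * C + surround_gain * Rs + lfe_gain * LFE
    return left, right
```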


The alternative channel configuration can be explicitly authored during the production process, with multiple channel configurations provided for distribution. For example, a stereo mix can be put on DVD-Audio discs or Super Audio CDs along with the surround mix.[18] Alternatively, the program can be automatically downmixed by the end consumer's audio system; for example, a DVD player or sound card may downmix a surround sound program to stereophonic sound (two channels) for playback through two speakers.[citation needed]



Mixing in surround sound


Any console with multiple buses (typically eight or more) can be used to create a 5.1 surround sound mix, but this may be frustrating if the device is not designed to facilitate signal routing, panning and processing in a surround sound environment. Whether working in an analog hardware, digital hardware, or DAW "in-the-box" mixing environment, the ability to pan mono or stereo sources, place effects in the 5.1 soundscape, and monitor multiple output formats without difficulty can make the difference between a successful and a compromised mix.[19] Mixing in surround is very similar to mixing in stereo except that there are more speakers, placed to "surround" the listener. In addition to the horizontal panoramic options available in stereo, mixing in surround lets the mix engineer pan sources within a much wider and more enveloping environment. In a surround mix, sounds can appear to originate from many more directions, or from almost any direction, depending on the number of speakers used, their placement and how the audio is processed.


There are two common ways to approach mixing in surround:



  • Expanded Stereo – With this approach, the mix will still sound very much like an ordinary stereo mix. Most of the sources such as the instruments of a band, the vocals, and so on, will still be panned between the left and right speakers, but lower levels might also be sent to the rear speakers in order to create a wider stereo image, while lead sources such as the main vocal might be sent to the center speaker. Additionally, reverb and delay effects will often be sent to the rear speakers to create a more realistic sense of being in a real acoustic space. In the case of mixing a live recording that was performed in front of an audience, signals recorded by microphones aimed at, or placed among the audience will also often be sent to the rear speakers to make the listener feel as if he or she is actually a part of the audience.


  • Complete Surround/All speakers are treated equally – Instead of following the traditional ways of mixing in stereo, this much more liberal approach lets the mix engineer do anything he or she wants. Instruments can appear to originate from anywhere, or even spin around the listener. When done appropriately and with taste, interesting sonic experiences can be achieved, as was the case with James Guthrie's 5.1 mix of Pink Floyd's The Dark Side of the Moon, albeit with input from the band.[20] This is a much different mix from the 1970s quadraphonic mix.

Naturally, these two approaches can be combined any way the mix engineer sees fit. Recently, a third approach to mixing in surround was developed by surround mix engineer Unne Liljeblad.



  • MSS – Multi Stereo Surround[21] – This approach treats the speakers in a surround sound system as a multitude of stereo pairs. For example, a stereo recording of a piano, created using two microphones in an ORTF configuration, might have its left channel sent to the left rear speaker and its right channel sent to the center speaker. The piano might also be sent to a reverb having its left and right outputs sent to the left front speaker and right rear speaker, respectively. Additional elements of the song, such as an acoustic guitar recorded in stereo, might have its left and right channels sent to a different stereo pair such as the left front speaker and the right rear speaker with its reverb returning to yet another stereo pair, the left rear speaker and the center speaker. Thus, multiple clean stereo recordings surround the listener without the smearing comb-filtering effects that often occur when the same or similar sources are sent to multiple speakers.


References




  1. ^ "Art of Mixing, Berklee College of Music". Retrieved 2017-09-02..mw-parser-output cite.citationfont-style:inherit.mw-parser-output qquotes:"""""""'""'".mw-parser-output code.cs1-codecolor:inherit;background:inherit;border:inherit;padding:inherit.mw-parser-output .cs1-lock-free abackground:url("//upload.wikimedia.org/wikipedia/commons/thumb/6/65/Lock-green.svg/9px-Lock-green.svg.png")no-repeat;background-position:right .1em center.mw-parser-output .cs1-lock-limited a,.mw-parser-output .cs1-lock-registration abackground:url("//upload.wikimedia.org/wikipedia/commons/thumb/d/d6/Lock-gray-alt-2.svg/9px-Lock-gray-alt-2.svg.png")no-repeat;background-position:right .1em center.mw-parser-output .cs1-lock-subscription abackground:url("//upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Lock-red-alt-2.svg/9px-Lock-red-alt-2.svg.png")no-repeat;background-position:right .1em center.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registrationcolor:#555.mw-parser-output .cs1-subscription span,.mw-parser-output .cs1-registration spanborder-bottom:1px dotted;cursor:help.mw-parser-output .cs1-hidden-errordisplay:none;font-size:100%.mw-parser-output .cs1-visible-errorfont-size:100%.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registration,.mw-parser-output .cs1-formatfont-size:95%.mw-parser-output .cs1-kern-left,.mw-parser-output .cs1-kern-wl-leftpadding-left:0.2em.mw-parser-output .cs1-kern-right,.mw-parser-output .cs1-kern-wl-rightpadding-right:0.2em


  2. ^ Strong, Jeff (2009). Home Recording For Musicians For Dummies (Third ed.). Indianapolis, Indiana: Wiley Publishing, Inc. p. 249.


  3. ^ Hepworth-Sawyer, Russ (2009). From Demo to Delivery: The Production Process. Oxford, United Kingdom: Focal Press. p. 109.


  4. ^ Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford, United Kingdom: Elsevier Inc. p. 168. ISBN 978-0-240-52163-3.


  5. ^ Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford, United Kingdom: Elsevier Inc. p. 169. ISBN 978-0-240-52163-3.


  6. ^ Huber, David Miles (2001). Modern Recording Techniques. Focal Press. p. 321. ISBN 0240804562.


  7. ^ "The emergence of multitrack recording". no publication date give. Retrieved June 17, 2018. Check date values in: |date= (help)


  8. ^ "Eurythmics: Biography". Artist Directory. Rolling Stone. 2010. Retrieved March 20, 2010.


  9. ^ "Studio Recording Software: Personal And Project Audio Adventures". studiorecordingsoftware101.com. 2008. Archived from the original on February 8, 2011. Retrieved March 20, 2010.


  10. ^ abc White, Paul (2003). Creative Recording (2nd ed.). Sanctuary Publishing. p. 335. ISBN 1-86074-456-7.


  11. ^ ab Izhaki, Roey (2008). Mixing Audio. Focal Press. p. 566. ISBN 978-0-240-52068-1.


  12. ^ abcdefghijk Holman, Tomlinson (2010). Sound for Film and Television (3rd ed.). Oxford, United Kingdom: Elsevier Inc. ISBN 978-0-240-81330-1.


  13. ^ ab Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford, United Kingdom: Elsevier Inc. p. 390. ISBN 978-0-240-52163-3.


  14. ^ Levitin, Daniel J. (2004). "Instrument (and vocal) recording tips and tricks". In Greenbaum, Ken; Barzel, Ronen. Audio Anecdotes. Natick: A K Peters. pp. 147–158.


  15. ^ Cabrera, Andrés (2011). "Pseudo-Stereo Techniques. Csound Implementations". CSound Journal. 2011 (14): Paper number 3. Retrieved 1 June 2018.


  16. ^ Faller, Christof (2005). Pseudostereophony Revisited (PDF). Audio Engineering Society Convention 118. Barcelona. Retrieved 1 June 2018.


  17. ^ Ziemer, Tim (2017). "Source Width in Music Production. Methods in Stereo, Ambisonics, and Wave Field Synthesis". In Schneider, Albrecht. Studies in Musical Acoustics and Psychoacoustics. Cham: Springer. pp. 299–340. doi:10.1007/978-3-319-47292-8_10. ISBN 978-3-319-47292-8. Retrieved 1 June 2018.


  18. ^ Bartlett, Bruce; Bartlett, Jenny (2009). Practical Recording Techniques (5th ed.). Oxford, United Kingdom: Focal Press. p. 484. ISBN 978-0-240-81144-4.


  19. ^ Huber, David Miles; Runstein, Robert (2010). Modern Recording Techniques (7th ed.). Oxford, United Kingdom: Focal Press. p. 559. ISBN 978-0-240-81069-0.


  20. ^ "Archived copy". Archived from the original on 2012-04-02. Retrieved 2011-11-12.CS1 maint: Archived copy as title (link)


  21. ^ "Surround Sound Mixing". www.mix-engineer.com. Retrieved 2010-01-12.




External links


  • Modern Mixing | Mixing Articles and Tutorials






