
Editing vs. Mixing: What’s the difference?

There are A LOT of terms thrown around within the music production landscape. Most of these terms are fairly concrete, describing specific services, jobs, or roles within the music production sequence. Some, however, are more ambiguous: depending on who you ask, you'll get different definitions. So, to clear things up a bit, we've decided to dive a little deeper into the terms editing and mixing.
 
 
 

What is Editing?

Here’s a quick definition I found scrolling through Google: “Audio editing is concerned with trimming the lengths of audio files, as well as adjusting amplitude levels. Sometimes it includes the addition of audio effects processing. Audio editing might apply to audio content that does not need to be mixed, or it might apply to mixed content that needs to be fit to some specification” (https://usv.edu/blog/beginners-guide-to-audio-production-audio-editing-vs-mixing/).
 
This offers a decent summary of the topic but, for our purposes, lacks specific context about what editing involves in popular music: hip-hop, rock, pop, etc. Oftentimes it’s tough to form a blanket statement that encompasses ALL genres and musical mediums, since you’d have to cover popular music, film music, video game music, avant-garde/niche genres, app development music, and so on. This might be one of the reasons it’s difficult to pinpoint an exact definition: editing norms in one genre or medium may include different factors and techniques than another. Here, we’ll give a general overview of audio editing in the context of the music we work with: popular music.
 

Editing within Popular Musical Genres

Not only does editing maintain its role in manipulating time and amplitude levels, as mentioned above, but it also includes things like breath reduction, comping, crossfading/fading, pitch correction, time quantization, de-clipping, looping, phase adjustment/alignment, take alignment, de-noising, clip gain, and more. Notice how most of these elements have to do with perfecting the overall performance captured in the recording. That’s because the main purpose of editing is to enhance the performance of the musician(s) or artist(s) within the recording, and to fix any mishaps that happened during recording as well. For example, if a slight amount of clipping occurs during a take, you can safely de-clip the area while maintaining the recording’s overall integrity with minimal artifacts. This WON’T, however, save a recording that is clipped for long durations, so editing has definite limits.
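To make one of these tasks concrete, here’s a minimal sketch of a linear crossfade between two takes in Python. Everything here (the function name, representing audio as plain lists of samples) is our own toy illustration, not any DAW’s actual machinery:

```python
def crossfade(tail, head, fade_len):
    """Splice two clips by ramping `tail` out while ramping `head` in
    over `fade_len` samples, hiding the edit point."""
    assert fade_len <= len(tail) and fade_len <= len(head)
    out = list(tail[:-fade_len])            # untouched part of the outgoing clip
    for i in range(fade_len):
        g = i / fade_len                    # fade-in gain rises 0.0 -> ~1.0
        out.append(tail[len(tail) - fade_len + i] * (1 - g) + head[i] * g)
    out.extend(head[fade_len:])             # untouched part of the incoming clip
    return out
```

Real editors usually offer equal-power curves as well as linear ones, but the idea is the same: no hard discontinuity at the splice, so no audible click.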
 
A good example of a common misconception about editing deals with pitch correction. More often than not, people place pitch correction in the mixing stage, and I’d argue they’re half-correct. In my opinion, pitch correction, when done as manual, graphical tuning (or hard tuning, as I say), occurs within the editing stage. When pitch correction is applied as automatic correction or “autotune,” however, it becomes more a part of the mixing stage, since it’s used as both an effect and pitch correction combined. I’m sure I’ll catch some flak for that, but this is where the ambiguity comes in!
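The core move behind automatic pitch correction is simple to sketch: detect a note’s frequency, then snap it to the nearest equal-tempered semitone. Here’s a toy Python version (our own illustration, assuming A4 = 440 Hz; real pitch correctors also handle pitch detection, retune speed, and scale/key constraints):

```python
import math

A4 = 440.0  # reference pitch in Hz

def snap_to_semitone(freq_hz):
    """Snap a detected frequency to the nearest note of the
    12-tone equal-tempered scale, referenced to A4 = 440 Hz."""
    semis = 12 * math.log2(freq_hz / A4)   # signed distance from A4, in semitones
    return A4 * 2 ** (round(semis) / 12)   # nearest whole semitone, back in Hz
```

A slightly sharp 446 Hz note snaps back down to A4 (440 Hz), while 460 Hz snaps up to the next semitone (about 466.16 Hz). The “retune speed” knob on an autotune plugin essentially controls how fast the pitch glides toward this target.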
 
 
 
Pitch correction plugins: Antares Autotune 8 (top of photo) and Melodyne (bottom of photo). Melodyne is purely graphical tuning, where one can edit specific notes and their timing, whereas Autotune offers both graphical and automatic tuning; displayed here is its automatic pitch correction mainstay.
 
One last thing to point out about the differences between editing in different genres: not all of the editing elements mentioned above apply to every genre or musical medium. For example, you likely wouldn’t need to de-breath anything within a film score, since there often isn’t a lead vocalist performing within the composition (although there are exceptions).
 

What is Mixing?

If there were a buzzword to top all buzzwords within music production, mixing might be in top competition to take the crown, although mastering would make for some stiff competition. Regardless, mixing is one of those things we all understand and agree upon, yet toss around as a “catch-all” when we don’t know what else to call something. It’s like a security blanket or a cop-out term. So, what exactly is mixing?
 
What better way to figure it out than to ask our old friend, Wikipedia!
 
In sound recording and reproduction, audio mixing is the process of optimizing and combining multitrack recordings into a final mono, stereo or surround sound product. In the process of combining the separate tracks, their relative levels (i.e. volumes) are adjusted and balanced and various processes such as equalization and compression are commonly applied to individual tracks, groups of tracks, and the overall mix. In stereo and surround sound mixing, the placement of the tracks within the stereo (or surround) field are adjusted and balanced.[1] Audio mixing techniques and approaches vary widely and have a significant influence on the final product.
You might be thinking, “Wait, I thought we were optimizing tracks during editing?” or “I thought we were balancing amplitudes in editing?” And sure, sometimes we are. However, during editing we’re focusing on the performance of an instrument, musician, or artist, not necessarily the sounds themselves. In mixing, we’re adjusting each sound relative to the mixture of all the instruments and elements present, post-performance, to create a holistically well-balanced piece that can be delivered as one coherent song and translates across different playback systems. What a mouthful!
 
 
 
The obligatory stock photo mixing console picture.
 

Mixing Overview

In addition to our lovely Wikipedia definition above, we can add a few more specific things to the mixing stage, including EQ, compression, saturation, stereo imaging, time-based effects (reverb, delay), parallel compression, balancing levels, creating busses/sends, automation, etc. There’s certainly a lot that goes into this stage, and I’m sure I’m forgetting some off the top of my head, but a good mix in combination with good editing is essential to an excellent end product. Of course, we’ve left out the tracking & performance, pre-production, sound selection, and songwriting stages, which all occur before the editing stage and add up to a substantial difference of their own. But for now, we’ll maintain our focus on editing and mixing.
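To make “balancing levels” and stereo placement concrete, here’s a toy Python mixdown that sums mono tracks into a stereo bus, with per-track gain in decibels and an equal-power pan law. All of the names and the track format here are our own illustration, not any DAW’s API:

```python
import math

def db_to_gain(db):
    # Convert a fader value in decibels to a linear amplitude multiplier
    # (e.g. -6 dB is roughly half amplitude, -20 dB is one tenth).
    return 10 ** (db / 20)

def mix(tracks):
    """Sum mono tracks into a stereo bus.
    Each track is (samples, gain_db, pan), with pan in [-1, 1]
    (-1 = hard left, 0 = center, +1 = hard right)."""
    n = max(len(samples) for samples, _, _ in tracks)
    left, right = [0.0] * n, [0.0] * n
    for samples, gain_db, pan in tracks:
        g = db_to_gain(gain_db)
        theta = (pan + 1) * math.pi / 4          # map pan to 0 .. pi/2
        gl, gr = math.cos(theta), math.sin(theta)  # equal-power pan gains
        for i, x in enumerate(samples):
            left[i] += x * g * gl
            right[i] += x * g * gr
    return left, right
```

With an equal-power pan law, a center-panned track lands at about 0.707 of its gain in each channel, so perceived loudness stays roughly constant as you sweep the pan. A real mix adds EQ, compression, and effects on top of this basic gain/pan structure.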
 
Taking a look at the editing and mixing lists side by side, it’s easy to notice the differences between the two. Here’s a decent summary:
 

Editing Elements (focused on performance):

- Comping
- Breath Reduction
- Fading/Crossfading
- Time Quantization
- Phase Alignment
- De-clipping
- De-noising
- Pitch Correction
- Take Alignment
- Looping
- Copying takes
 

Mixing Elements (focused on holistic sound):

- EQ
- Compression
- Saturation
- Stereo Imaging
- Time-based effects (reverb, delays)
- Parallel Compression
- Balancing Levels
- Creating busses/sends
- Automation
 

Final Thoughts

It’s definitely easy to get tripped up between the concepts of editing and mixing, especially if you’re just starting out and venturing into the music production landscape. However, once the staging framework and workflow are nurtured and developed, these things become second nature and make a lot more sense in the long run. Knowing more as you approach music will ultimately lead to better, more deliberate outcomes for your sound. That’s why we’re always happy to offer these services to get your music sounding the way you want. So, if you’ve made it this far, click this link to set up a price quote for these services, and we’ll take it from there!
 
If you ever have any questions, feel free to email us here: blog@studio222mi.com
