Here are my notes for 308 this week:
• 2 page Bach piece
• 8 systems long, any number of bars in a system
• Do edits for these 4 takes (on page)
o At the beginning of each system, put the measure number
• Bring up the tracks in PT, then make separate tracks for each take (it’s supposed to be summed mono, so make them into groups)
• When you find out where to make the breaks, drop memory locations there to mark them.
• When it’s all lined up correctly, take notes about:
o Fixing balance, if it’s much faster/slower than the other takes, etc.
• Also, can use elastic audio
o Polyphonic used for guitars, pianos, any program material that’s more than a monophonic voice.
o Rhythmic is for drums where it’s mainly detecting transients
o Monophonic is for violas, flutes, etc. monophonic voices.
o Varispeed acts like a tape deck. When you stretch it, it changes the pitch.
• This week’s assignment in lab:
o Compare the tempos (after doing all that shit), find balance between the two (tempo, dynamics), add cross fades
• Drum replacements in tracks without using a 3rd party application
o Drawing from a drum sample from the audio provided
o Can use it for reinforcement and replacement
o Good at making drums sound organic, though there are some times when a 3rd party may be necessary.
• First thing to do is:
o Create new mono audio track, find the hits you like
o Apple+E for cutting the part out, duplicate it and move it into the new audio track
o Then, the section is trimmed right where it hits the audio transient. Fade each side, Apple+C, then mute it.
o Example with two snare tracks:
Find the snare sound you like, cut it and duplicate it into two mono audio tracks (since it’s snare top and bottom)
Tab to the transient, then P and ; to move up and down between tracks.
. and , for nudging
o For vox
Choose polyphonic waveform mode.
Then, go to Analysis, and there are analysis points that appear.
• Select the region first before selecting Analysis
• Can raise/lower the sensitivity of the analysis points by changing the region's elastic properties.
After analysis, go to warp. The analysis points allow it to move within the region, compressing and expanding the audio accordingly.
In the labs, Andy and I finished with the work we needed to do fairly quickly, and we spent the rest of the time mixing some different sounds with the morphed drum hits we acquired. It was a lot of fun, and we did learn a lot of useful tricks this week.
Friday, December 3, 2010
Friday, November 19, 2010
Week of November 14
Steve Swallow, born in 1940, started out as a kid in New Jersey playing the piano, double bass, and trumpet (but mostly the bass). He studied composition at Yale under Donald Martino, where his love and knowledge of jazz technique and improvisation grew. In 1960, Swallow met Paul Bley and his wife, Carla, both esteemed jazz pianists and bandleaders, and toured with Bley for a while, as well as with Benny Goodman, Al Cohn, Zoot Sims, and many others. After that, he never went back to school until 1974, when he began teaching at Berklee College. In 1978 he re-joined the Carla Bley Band, with whom he still works extensively to this day. Swallow also toured with John Scofield from 1980 to 1984 (at first with drummer Adam Nussbaum, then just the two of them). He is a producer as well, producing music for many of the people he has already worked with, like Bley and Scofield. He continued to tour for a while in many locations, including Brazil, South Africa, and Italy. In 1996, he created the Steve Swallow Quintet, including Chris Potter (saxophone), Ryan Kisor (trumpet), Mick Goodrick (guitar), and Adam Nussbaum (drums). From what I’ve heard of him, he really enjoys the upper neck of the bass much more than the lower section. He’s very smooth, with technique that seems to mirror that of a six-string guitar.
Swallow Solo Video
Carla Bley comedic duet
Charles Mingus was born in Arizona in 1922 and grew up in Watts, California. He’s mostly known for his bass playing, but he was also highly skilled as a pianist, bandleader, and composer. He studied double bass with Herman Rheinshagen, former principal bassist of the New York Philharmonic, and composition with Lloyd Reese over five years. During his training, he also absorbed all he could from early jazz stars like Duke Ellington. In the 1940s, he was already touring with the bands of Louis Armstrong, Kid Ory, and Lionel Hampton. As the years progressed, he moved to New York, where he played and recorded with Charlie Parker, Miles Davis, Bud Powell, and even Duke Ellington, one of his childhood heroes. He was one of the few double bass players able to lead the group in which he played. In 1952, he opened Debut Records with Max Roach so he could more closely catalog and protect his extensive collection of original works. In 1971, he began teaching at the State University of New York at Buffalo. His playing is highly syncopated, and it sounds a lot like what should be the base standard for jazz basslines.
Solo Bass, early years
Devil’s Blues, 1975
Joy Division consisted of Ian Curtis (vocals), Peter Hook (bass), Stephen Morris (drums), and Bernard Albrecht/Sumner (guitar). They were a crazy punk band from England that formed in 1976 and were soon revered by bands like U2 and The Cure. None of them went to college or had any formal musical training, but they worked well enough together to make some paradigm-shifting new-wave music. They sound a lot like many punk bands I have heard, but they were one of the first to do it; other bands emulated the styles Joy Division was playing. That being said, they were actually fairly good at their respective instruments.
Shadowplay
Gabriel Roth started his formal education at NYU. He was a huge funk fiend, adamantly ignoring anything unrelated to funk; Roth is what many would call a funk purist. His ideas about music appear fairly limited, but what matters is how he executes them, with a full embrace of simple yet effective playing. He opened Daptone Records with Neal Sugarman just to make sure others would follow his philosophies.
Jaco Pastorius was born in 1951. He was an innovative bass player, though he lived for such a short time (dying in 1987). He didn’t read music until he was much older because he had no need for it. Jaco would mostly play along with the radio, picking up the technique and idiosyncrasies of whoever was playing at the time. This probably led to his unique playing, given that it’s likely an amalgam of everyone he heard. One of his favourite things to do was to play the melody along with a bass line, destroying the idea that a bass has to be a rhythm instrument with multiple people there to back you up. Jaco’s bass of choice was a fretless one, on which he extended the playing range of the instrument. He even has a technique named after him, the “Jaco Growl,” which involved using only the bridge pickup with multiple harmonics.
A Portrait of Tracy
Gary Burton is a professional vibraphone player who has performed with artists such as Dave Holland, Chick Corea, and Pat Metheny. In the 1970s he taught percussion and improvisation at Berklee College. He soon became the dean of the department and obtained a doctorate in music. He also created the Gary Burton Quartet, consisting of Larry Coryell (guitar), Roy Haynes (drums), and Steve Swallow (bass) in 1967.
Falling Grace
Bill Frisell is a contemporary jazz guitar player who is fluent in many different styles of music. He is said to play the guitar the way Miles Davis played the trumpet. He began by studying the clarinet, then moved on to guitar. His teacher Dale Bruning helped Bill apply the theory he had learned on clarinet to the guitar and opened up the world of jazz. He played in many school bands and festivals, school dances, and frat parties, and eventually ended up on the east coast to study at Berklee, where he met Pat Metheny and played in a top-40 band called The Boston Connection with Vinnie Colaiuta, a world-famous drummer, and Kermit Driscoll, a bass player who was a student of Jaco Pastorius in 1974. Kermit also attended and graduated from Berklee, and has played with countless orchestras. He has been a part of 28 Broadway shows, many film scores, and TV commercials. He currently teaches at SUNY Purchase College and Sarah Lawrence College.
Keep Your Eyes Open
Wednesday we did a lot of discussion, just like we did on Monday. Many more names were said, many more laughs shared. I'm planning on looking through a lot of the names and music we were discussing sometime this coming week.
Friday, November 5, 2010
Week of October 31, 2010
Here are Monday's and Wednesday's notes:
• First, patched Drew’s bass through the Millenia. Millenia out into ProTools 1, then 3 & 4 out to 2 Track in. Then, to get it into the box, PT 1 (A1) out to the Line 1 Inputs (#), then the Channel Insert Sends of the same # to the Millenia In, then the Millenia out into PT2 (A2)
o Can turn down fader in PT to make it so the signal doesn’t distort (as well as input/output of Millenia).
o Millenia as unity gain amplifier
o 3:1 ratio, everything else at noon
o Fastest release, fastest attack, 3:1 ratio, everything at noon
o Fast Attack, long release, 3:1 ratio, everything else at noon
Pre-recorded track is going out A1
Incoming track is coming in A2
We’re monitoring out of A3-4
o Threshold, attack, release at 9 (o’clock); ratio is 6:1
o Everything the same, slower attack, slower release
o Same attack (30ms), low threshold, release about a second, 9:1
Attack is slow enough, lets the original transient through. Release is slow enough that the compressor wants to go back to non-compression
o 10:1 ratio, fastest attack, slowest release
o Medium-fast attack, medium release, 6:1 ratio tends to work out best to get something of a decent dynamic range.
• New bass line!, with same patching
o 10:1 ratio, fastest attack, slowest release
o Super-low threshold,
• To get a duller sound on a snare (thud), you’d use a fast attack. For a crack, a slow attack. A higher ratio makes the EQ more dull
• Fast attack with medium release adds a lot of punch to it.
• To make everything more even, low threshold, pretty high ratio.
Snare is going out A1 into channel 40
Through Millenia, into A2
2 track is A3-4
• Fully clockwise with threshold, everything compressed, super fast attack and release, 10:1 ratio
o Tons of room because release is so fast, everything rushes up to unity gain
• Fastest attack, .5 sec release, 10:1, full threshold
• Medium attack, medium release, 10:1, full threshold
• Threshold on full, attack at 10, release at .7, ratio at 8:1
o Does a pretty decent job with vox. Unifies pieces, glues them together.
• 4:1 ratio, 2 second release, 2ms attack, threshold at about noon.
o So much louder because the threshold is so much higher. Less is getting compressed, and the makeup gain wasn’t changed. To get it back to ‘normal’, lower the threshold or turn down the makeup gain
o Remember, threshold and ratio are a function of each other.
• A lot of compressors have automatic gain reduction so it’s one less thing to worry about
• Fast release takes out a lot of the lower frequencies. Slow release keeps them around a lot longer.
• Super fast attack with a high makeup gain is ridiculous, basically.
Snare to be with a lot of room sound
Snare with a lot of wood/crack
Electric bass, crush them to get uniform sound
Bass, how to elongate sustain
Research into pumping and breathing.
In the labs, Andy and I were working a lot with the compressors and the Eventide. We got a lot accomplished, had a lot of fun experimenting with everything, and got some interesting sounds.
Friday, October 29, 2010
Week of October 24, 2010
Basically, this week is all about compression.
-controls maximum levels and maintains higher average loudness (soft becomes louder, loud becomes relatively softer)
-specialized amps used to control dynamic range (distance between loudest and softest part of a waveform)
-Flutes: difference between loudest and softest is about 3dB
-Voice: difference between loudest and softest is about 10dB
-Drums: difference between loudest and softest is about 15dB
-our ears act as compressors, responding to the average level of sounds
-compressors have detector circuits built in that respond like our ears
-brick wall limiting: pre-determined level is the peak, absolutely nothing above it
-Multiband compression: compress different frequency bands separately
-optical compressors: use photocells (light-sensitive resistors). Audio drives a light bulb; the louder the signal, the brighter the light, and vice versa. The light and photo resistor sit together in a light-proof box (super simple explanation). Then, makeup gain brings the soft parts up.
-Field Effect Transistor (FET): first to emulate tubes and the way they worked. Fast, clean, reliable.
-Voltage Controlled Amplifiers (VCA): most versatile and flexible of compressors. GREAT reaction time
-Vari-Gain: pretty much a catch-all for compressor circuits that don't use the three designs above.
-Digital Compressors: can achieve precision you can't get with an analog compressor. Some have built-in delay.
-Ratio: a way to express the degree to which the compressor reduces the dynamic range; the relationship between the level increase coming in and the level increase going out. 2:1 means that for every 2dB coming in above the threshold, only 1dB comes out above it (see the sketch after these notes). Going over 10:1 turns the compressor into a limiter
-Threshold: the level of incoming signal at which the compression amplifier turns from a unity gain amplifier into a gain-reducing compressor. The higher the threshold, the less of the input gets compressed. No effect on signal below the threshold. Once the threshold is reached, the dB over it are reduced based on the ratio. Knees are where the audio hits the threshold. A hard knee is sudden and abrupt; a soft knee is more gradual over the threshold. Changing the knee changes the envelope (ADSR), more specifically the A&R
-Attack time: the time it takes to compress after the threshold is reached. Ranges from under 1ms to over 100ms. Affects tone in terms of brightness. A fast attack clamps down on the signal. Depending on how you adjust it, the compressor can act as an EQ or a reverb.
-Release: the time it takes the compressor to return to unity once the signal drops below the threshold. Longer creates a darker sound, shorter makes it sound brighter. So, the compressor is released from gain reduction. Releases range from 20ms to 5sec. Really depends on tempo, program material, and instrument.
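Here is that sketch: the ratio and threshold definitions above reduce to one piece of static gain math. A minimal plain-Python illustration (the function name and numbers are mine, not from class):

```python
def compressed_level(input_db, threshold_db, ratio):
    """Static gain curve, hard knee, no makeup gain: dB in -> dB out."""
    if input_db <= threshold_db:
        return input_db                      # below threshold: unity gain
    over = input_db - threshold_db           # the dB over the threshold...
    return threshold_db + over / ratio       # ...come out divided by the ratio

print(compressed_level(-8, -10, 2.0))   # 2:1 -> 2dB over comes out 1dB over: -9.0
print(compressed_level(0, -10, 10.0))   # 10:1 -> 10dB over comes out 1dB over: limiter territory
```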
Tuesday's lab I had the first hour to myself, so I worked on grouping and recording stems some more, just to make sure I know what's happening. Then, when Andy came back in, we started experimenting with the outboard gear, and found some really interesting sounds using the Eventide and Distressor.
Wednesday we went into the studio and practiced using the Millenia and Distressor, incrementally changing the attack, release, ratio, threshold, and gain output. The patching we used was getting the track that is to be treated to the board using the line 1 inputs, then using the channel insert sends into the distressor, and then send it to a Pro Tools input onto a new audio track. Record, and compare. So, we did a lot of that, and it was a lot more interesting and fun than I thought it would be.
Thursday we worked more with the outboard gear. We did some of the critical listening with the compressor, but we enjoyed the strange mixing we were doing on Tuesday, and we decided to do it again. Not as many cool sounds, but we did learn a lot and have a good time.
Monday, October 18, 2010
Stems, Subgroups, Etc.
This is basically our week:
To Mix in the Box:
A 1-2 out of Pro Tools into 2 Track
Creating Submixes in Pro Tools
~Group instruments together (bass and guitars, vox and bgv, drums, keyboards & whatever, 1 channel for Aux Sends) or (vox and bgv, bass and drums, keys and guitar)
~Can create a Stereo Aux Channel, or to record them to stems, stereo audio channel
~Output Audio Path Selector: set the output to that particular bus
~Makes it easier to add inserts and effects, easier to navigate, and decreases how much we have to think about individual things
~Mastering engineers now accepting stems
Drum and bass tracks are out bus 5-6 and in A 1-2; guitar tracks are out bus 7-8 and in A 3-4. Then create 2 new stereo Aux tracks and send the drums and bass to one of them, the guitars to the other. On the Aux channel with the drums and bass, set the input as bus 5-6; on the Aux channel with guitars, set the input as bus 7-8. Then, create new stereo audio channels and set their inputs to bus 5-6 and bus 7-8. These are the in-the-box stems.
Then, we add two more corresponding audio tracks to go out of the box. We have to set their outputs as the Pro Tools outputs on the patch bay you want. Patching so far would be out Pro Tools 1-4 into Line 1 Inputs 37-40 (for convenience's sake). Pan 37 and 39 hard left, and 38 and 40 hard right. Take the faders and put them at unity gain. Go to the top of channel strips 37-40 at the group bus, then press the 1-2 button on the drums and bass channels and the 3-4 button on the guitar channels. Then go to group bus 1-2 (red faders) and turn them up to unity with the gain fully clockwise, and do the same with group bus 3-4. We also have to pan group pans 1 and 3 hard left and group pans 2 and 4 hard right. To give control to the Master fader (now that it works), just push the MIX button, and NOT the 2TK 1 (like we did before).
So, we sent the audio channels from Pro Tools into tracks on the board (as a subgroup), then sent the board channels into monitor groups (another subgroup). Next, we can record stems into Pro Tools by creating two new stereo audio channels in Pro Tools and setting their inputs as something like B 1-4 (1-2 for drums and bass, 3-4 for guitars). On the patch bay, that would be group outputs 1-4 (again, 1-2 is drums and bass, 3-4 is guitars) into Pro Tools A 9-10 for drums and bass and A 11-12 for guitars. Then, we record-enable the in-the-box stems and out-of-the-box stems. Record them, and you have the in-the-box and out-of-the-box stems.
Monday and Wednesday were dedicated to understanding this process. Tuesday in lab, Andy and I mixed Jonsey through the box with the methods listed above and recorded stems. Basically I did it, with Andy asking questions throughout. We undid it, and I re-did it again. Then we moved on to cleaning up Raw Tracks 4, which was pretty much an awful song. There is no other way to describe it. I'm not attacking it, exactly, just stating a fact. It was tracked horribly, the drummer never found a groove, the vocalists couldn't keep in time to save their own lives, and I don't even want to talk about the guitar tracks. But we still cleaned it up as much as we could. Thursday's lab we did the same thing as Tuesday's lab, but for pretty much an hour and a half. Then, so our heads wouldn't explode, we looked at our 306 projects on the MTA 980. Andy's was much more impressive than mine, as I don't have the MIDI experience that he does. His flanging, frequency sweeps, and crazy automation were picked up very well through the board. Once I start changing the velocities of some of the drums, mine should feel more natural. Adding effects like reverb would also help.
Friday, October 15, 2010
Week of October 10, 2010
In class we watched a DVD of the making of Bjork's album Medulla, released in 2004. The album itself is composed almost entirely of vocals. After the fall of the World Trade Center, she felt that her music needed to take a more primitive, primal style. After giving birth to Isadora, too, she wanted an album whose message was one of flesh, blood, and bone, which is where the name Medulla (Latin for marrow) originated. The engineer on this project, Nigel Goodrich, had been working with Bjork on her albums for a decade before, so it was almost natural for her to choose him. Bjork knew his style of mixing (and likewise, Goodrich knew how to translate her quirkiness), and they were able to make a great album. Bjork enjoys enlisting the help of others from time to time, especially if she knows someone who can bring something to the table that she can't bring herself. Every guest artist that appeared on this album was found by surfing the web. How Bjork works (as far as I can tell) is she puts everything that she can offer into an album, but she is also enlightened enough to know when there are others with capabilities outside her range. One of these was Rahzel, formerly of the Roots. She hadn't heard of him before, but was looking for more percussive elements of the vocal spectrum. She resisted calling him at first because she felt that beat-boxing would be too easy, less an artistic decision and more a fast solution. But, after doing all she could with Goodrich, she called him to help out on the album. His beat-boxing prowess brought a lot to the album as a whole, and even Bjork ended up enjoying the beat-boxing. Another artist she found was Dokaka, an internet sensation known for his vocal covers of other artists' songs. Other guest artists included Tagaq, Mike Patton, and Shlomo. Each of these artists was picked especially for their talents, and Bjork took advantage of those talents as often as she could during the recording of Medulla. An Inuit singing game is also featured in one of the songs, in which two women try to recreate the sounds around them and attempt to make the other mess up. Bjork was born in Iceland in 1965, where music is an integral part of the educational system. Everyone is brought up with a background in music, so musicians like Bjork may be more common over there than they are here in the US. She may have an extensive education in music, but on the DVD her descriptive directions never really seemed attainable, like asking for something to be more "waaaaah" (as she puts her hands together and apart, as if stretching something). The thing about an artist like Bjork, though, is that she has proven time and time again that she knows her stuff; it's everyone else's fault that they can't understand her.
Friday, October 8, 2010
Monday we were supposed to have our written assessment, but instead we went over the equipment we should be getting for a home studio. A lot of what was said was just numbers, but after looking them up, it makes a lot more sense. I'm most likely going to be getting Reason, so the Digidesign MBox isn't going to help me at all. But what sounds like a good combination of gear is the Behringer ADA8000 as a mic preamp and the RME Fireface 800 as an interface. Once I get the money, I'll start planning out my buying schedule. But, until then, I'll keep doing research on it.
Tuesday's lab, Andy and I chose to do the mono mix out of the board. I did most of the prep work, and we made it through with minimal problems. We used both Distressors, both Millenias, the PCM91, and the SPX90. So, we used all but five cables or so. One thing we need to figure out is why all of our board mixes are so loud. That's something we may need to ask about in the near future. The tracking for this song is much better than the songs by Apparently Nothing. I sort of wish we knew who performed this song. Their drum tracks are much more even, so strip-silencing them was much easier to do. We also moved a lot of the electronic drums around so they weren't so imposing on the rest of the mix.
Wednesday we had our assessment that was originally scheduled for Monday. The first two questions I had no problems answering. The first question was in regards to phase and the Haas trick. The second was about monitors, correct room treatment, and the repercussions of incorrect treatment. The third question asked about what Izhaki believes are the four objectives for recording engineers. They are to capture the mood of a piece, mostly by knowing what the mood of the piece is supposed to be. Another key element is to balance the piece correctly. The kick is much more important in a heavy rock song than a folk song. The third objective is definition, which is achieved through EQs and other processors. And, of course, interest needs to be captured. If the piece varies dynamically, instrumentally, and time-wise, it will be a much more interesting piece to listen to. The fourth question was about Izhaki's five different mixing domains that all mixing engineers have to work in. They are time, frequency, level, stereo, and depth. Time, because the tracks being mixed have a beginning, middle, and end. Frequency, because different instruments reside in different frequencies, and some of the frequencies of certain instruments need to be attenuated or amplified. Level, because the amount of each instrument is always important in relation to the other instruments. Stereo, because the positioning of each of the instruments from left to right is important to how they will be heard and interpreted. Depth is important for the same reasons the stereo image is important. Putting drums right up front isn't a smart move, as everything else will pale in comparison. I should have separated out my study techniques, though, because I was using acronyms to remember them, but I was getting all of the acronyms mixed up. There were the mixing objectives (MBDI), the domains (FTLSD), the things to consider when mixing (SEMEL), and others. I'll have to separate them out better next time.
Friday, September 24, 2010
Week of September 20, 2010
Here are the reading notes we've had this week:
Chapter 10: Software Mixers
Tracks
-Audio
-Aux
-MIDI
-Instrument
Mixer Strips
-Input Selection
-Output Selection
-Insert Slots
-Send Slots
Solos
Control Grouping
Audio Grouping
Sends and Effects
Naming Buses
Internal Architecture
-Integer Notation
~The highest amplitude a 16-bit sample can handle is 65,535 (2^16 − 1 quantization steps). Anything above this results in clipping
-Floating-Point Notation
~A floating-point sample can theoretically handle any amplitude
-How they work together
~Pro Tools allows two hot signals to be summed without clipping. When bouncing in Pro Tools, the audio is converted from float into integer. If you bounce onto a 16-bit file, you lose 54dB of range
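A small numpy illustration of the difference (the 0.9 levels are arbitrary; note that a signed 16-bit sample actually peaks at 32,767, out of 65,536 total steps):

```python
import numpy as np

peak = np.float32(0.9)
hot_sum = peak + peak          # 1.8: over full scale, but float carries it fine

# Converting to 16-bit integer on bounce is where the overshoot gets clipped
clipped = int(np.clip(hot_sum * 32767, -32768, 32767))
print(hot_sum, clipped)        # 1.8  32767  <- hard-clipped at the integer ceiling
```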
Dither
-To avoid producing repeating decimals, processors round off this data. Since the data is now incorrect and rounded off in the same way every time, distortion is produced. Dithering randomizes the rounding off so that a "low level of random noise" is created.
-Most audio sequencers ship with dither capabilities
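A rough numpy sketch of the principle, assuming a 16-bit step size: rounding the same way every time leaves an error correlated with the signal (distortion), while adding a little triangular (TPDF) noise before rounding decorrelates it:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(48000) / 48000
x = 0.3 * np.sin(2 * np.pi * 1000 * t)          # a plain 1kHz tone

step = 1 / 2**15                                # 16-bit quantization step

plain = np.round(x / step) * step               # same rounding every cycle -> distortion

tpdf = (rng.random(x.size) - rng.random(x.size)) * step
dithered = np.round((x + tpdf) / step) * step   # error is now "low level random noise"

# Dither's peak error is slightly larger, but it is benign noise, not distortion
print(np.abs(plain - x).max(), np.abs(dithered - x).max())
```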
Normalization and the Master Fader
-Normalization
~Brings the signal level on a track up so the highest peak hits maximum without clipping, but rounding errors can occur, resulting in distortion, especially with 16-bit files. Use with CAUTION. (See the sketch after this section.)
-Master Fader
~Scales mix output to the desired range of values
~Sometimes clipping will occur, even when no channels are overshooting the clipping threshold
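Normalization itself is a single gain calculation; a minimal numpy sketch (the -0.1dBFS target is an arbitrary example):

```python
import numpy as np

def normalize(audio, target_dbfs=-0.1):
    """Scale so the highest peak lands at target_dbfs (0dBFS = 1.0)."""
    peak = np.abs(audio).max()
    if peak == 0:
        return audio                       # silence: nothing to scale
    target = 10 ** (target_dbfs / 20)      # dB -> linear amplitude
    return audio * (target / peak)         # one gain applied to the whole track

x = np.array([0.05, -0.2, 0.1])
print(np.abs(normalize(x)).max())          # ~0.9886, i.e. -0.1dBFS
```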
Playback Buffer and Plugin Delay Compensation
-Playback Buffer
~Determines latency of input signals. Lower buffer size results in less latency, which is better for recording
~The mixdown should utilize a higher buffer size, because the system needs to read the information faster than it is played back
-Plugin Delay Compensation
~Plugins that run on DSP expansion, like a UAD card
~Plugin delay occurs when processing involves algorithms requiring more samples than available by each playback buffer
Chapter 11: Phase
What is Phase?
-Relationship between two or more waveforms, measured in degrees
-We only consider phase in relation to similar waveforms
-Identical waveforms are usually signs of duplication
~ex: Duplicated snare, one dry and one reverb
-Waveforms of the same event are two microphones capturing the same musical event (or recording)
~ex: A kick mic and overheads, both with kick in it
-3 Types of Phase Relationships between Similar Waveforms
~In phase or phase-coherent: waveforms start at exactly the same time
~Out of phase or phase-shifted: waveforms start at different times
~Phase inverted: both waveforms start at the same time, but amplitude is inverted
-Problems arise when similar phase shifted or phase inverted waveforms are summed
~Comb Filtering: if the phase offset is less than 35ms, certain frequencies are attenuated, causing tonal alteration and timbre change (see the sketch below)
~If waves are phase-inverted, level attenuation. If phase inverted and equal in amplitude, cancel each other out completely
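Here is the sketch: summing a tone with a short-delayed copy of itself shows the comb directly. Frequencies whose half-period matches the delay cancel; whole-period ones reinforce (numpy; the 1ms delay is an arbitrary example):

```python
import numpy as np

sr = 48000
delay = int(0.001 * sr)              # 1ms of delay -> first notch at 500Hz

def summed_peak(freq):
    t = np.arange(sr) / sr
    x = np.sin(2 * np.pi * freq * t)
    # Both test tones fit whole cycles into 1s, so np.roll's wrap is seamless
    return np.abs(x + np.roll(x, delay)).max()   # direct + delayed copy

print(summed_peak(500))              # ~0: half-period offset, fully cancelled
print(summed_peak(1000))             # ~2: whole-period offset, fully reinforced
```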
-Phase in Recorded Material
~Comb filtering caused by a mic a few feet from a guitar amp, picking up reflected frequencies as well as the direct sound, is not something a mixing engineer can do much to fix. Comb filtering caused by having two or more tracks of the same take of the same instrument can be treated by the mixing engineer:
(A) top/bottom front/back tracks: Microphones that are placed on opposite sides of an instrument are likely to pick up opposite sound pressures. Fix it by inverting the phase of one of the microphones.
(B) Close-mic and overheads: a close-miked kick or snare might interact with the overhead microphones to cause phase shifting or inversion. Fix it by taking the OH as a reference and making sure the kit is phase coherent
(C) Mic and Direct: the signal from a bass guitar that is recorded DI will travel much faster than a signal that goes from guitar to amplifier to microphone to console. Fix it by zooming in and nudging the track
Phase Problems During Mixdown:
-Delay caused by plug-ins
-Delay caused by digital to analog conversion when using outboard gear
-Short delays may cause comb filtering
-Equalizers cause delay in a specific range of frequencies
Tricks:
-Two mixing tricks are based on a stereo setup with two identical mono signals sent to opposite extremes, where one of the signals is either delayed or phase inverted
-Haas Trick
~Helmut Haas discovered that the direction of a sound is determined solely by the initial sound, providing that (1) successive sounds arrive within 1-35ms of the initial sound and (2) successive sounds are less than 10dB louder than the initial sound
%The original signal is panned to one extreme, and a duplicate is sent to the other extreme with a delay of 1-35ms
%One way involves panning a mono track hard to one channel, duplicating it, panning the duplicate hard to the opposite channel, and nudging the duplicate by a few milliseconds
%Second way involves loading a stereo delay on a mono track, setting one channel to have no delay and the other to have a short delay between 1-35ms
~Used to:
%Fatten sounds on instruments panned to the extremes making them sound more powerful
%As a panning alternative
%To create more realistic panning, since the human ear can use the amplitude, time, and frequency differences to locate sound
~The Haas trick's controls are the amount of delay and the level
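Both routes boil down to the same operation: duplicate, hard-pan, delay one side by 1-35ms. A minimal numpy sketch (the function name and the 15ms figure are my own choices):

```python
import numpy as np

def haas_widen(mono, sr=48000, delay_ms=15.0):
    """Dry signal hard left, duplicate hard right, nudged by delay_ms."""
    d = int(sr * delay_ms / 1000)                 # keep within 1-35ms
    left = np.concatenate([mono, np.zeros(d)])
    right = np.concatenate([np.zeros(d), mono])   # the delayed duplicate
    return np.stack([left, right], axis=1)        # (samples, 2) stereo

stereo = haas_widen(np.random.randn(48000))
print(stereo.shape)                               # (48720, 2)
```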
Out of Speakers Trick
-Like Haas Trick, but instead of delaying the wet signal, just invert the phase. Results in the sound coming from all around you rather than directly at you.
Chapter 12: Faders
Sliding Potentiometer
-Simplest basis for an analog fader
-The amplitude of the analog signal is represented as voltage
-Contains a resistive track with a conductive wiper that slides as the fader moves
~Different positions along the track provide different amounts of resistance
~Different degrees of level attenuation
-Cannot boost the audio signal passing through it (unless a fixed-gain amplifier is placed after it)
-Audio signal enters and leaves
VCA Fader
-Combination of a voltage controlled amplifier and a fader
-VCA is an active amplifier that audio signal passes through
~Amount of boost or attenuation is determined by DC voltage
-Fader only controls the amount of voltage sent to the amplifier
~No audio signal flows through the actual fader
-Allows a number of DC sources to be summed to a VCA
~Shortens the signal path
Digital Fader
-Determines a coefficient value by which samples are multiplied
~A coefficient of 2 results in a boost of around 6dB
~0.5 results in around 6dB attenuation
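The coefficient-to-dB relationship is just 20·log10 of the multiplier; a quick plain-Python check:

```python
import math

def coeff_to_db(c):
    return 20 * math.log10(c)     # linear multiplier -> dB change

print(coeff_to_db(2.0))           # +6.02dB: doubling the sample values
print(coeff_to_db(0.5))           # -6.02dB: halving them
```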
Scales
-Typical measurement is in the scale unit dB
~Strong relationship to how the human ear perceives loudness
-Scale is generally based on steps of around 10dB or 6dB
~6dB is approx. doubling the voltage (or sample value) or cutting it in half
~10dB is doubling or halving the perceived loudness
-The 0dB point is called unity gain
~Where the signal is neither boosted nor attenuated
-Most faders offer extra-gain
~Generally around 6, 10, 12dB boosts
~Only used if signal is still weak while at unity
-Area between -20dB and 0dB is the most crucial area
Level Planning
-Faders are made to go up and down
-When mixing, the levels start by coming up
~Generally ending up at around the same positions
-Problem
~A natural reaction to not being able to hear a track is to bring the fader up
%Bringing a snare up in the mix might begin masking vocals, so you bring up fader on vocals, then bass masked, etc.
~Eventually, end up back where you started
-Solutions
~Having a set plan for levels before bringing up faders so the extra-gains settings are left alone
~Setting the loudest track first and bringing up the rest of the tracks around it
Extremes - Inward Experiment
-Take the fader all the way down
-Bring it up gradually until the level seems reasonable
-Mark the fader position
-Take the fader all the way up (or to a point where the instrument is too loud)
-Bring it down gradually until the level seems reasonable
-Mark the fader positions
-You should now have two marks that set the limits of a level window. Now set the instrument's level within this window based on the importance of the instrument
Chapter 13: Panning
How Stereo Works
-Alan Dower Blumlein
~Researcher and engineer at EMI
~December 14, 1931, applied for a patent called "Improvements in and relating to Sound-transmission, Sound-recording, and Sound-reproduction System"
~Was looking for a 'binaural sound'; we call it 'stereo' today
~Ironically, the first stereo recording was published in 1958 (16 years after Blumlein's death and 6 years after EMI's patent rights had expired)
-Stereo Quick Facts
~We hear stereo based on three criteria: (EX: trumpet on your right)
%amplitude (sound will be louder in the R ear than the L)
%time/phase (sound will reach the L ear later than the R)
%frequency (fewer high frequencies in the L than the R)
~Sound from a central source in nature reaches our ears at the same time, with the same volume and frequencies. But, with two speakers, no center speaker, so phantom center
~Best stereo perception when the listening triangle is achieved
Pan Controls
-Pan Pot (Panoramic Potentiometer)
~First studio with a stereo system was Abbey Road, London
~Splits a mono signal L and R, and attenuates the side you're not favouring
-Pan Clock
~Hours generally span from 7:00 (L) to 17:00 (R)
-Panning Laws
~A console usually has only one panning law, but some inline consoles have one for channel path and one for monitor path
%two main principles:
^if two speakers emit the same signal at the same level, listener in the center will perceive a 3dB boost of what each speaker produces.
^when two channels summed in mono, half of each level is sent to each speaker
~0dB Pan Law: doesn't drop the levels of centrally panned signals. The instrument level will drop as we pan from the center outward, with 3dB increase of perceived loudness when centered
~-3dB Pan Law: when panned center, there is a 3dB dip (generally best option when stereo mixing)
~-6dB Pan Law: used for mono-critical applications. Provides uniform level in mono, but a 3dB dip when in stereo
~-4.5dB Pan Law: compromise between -3 and -6dB laws. 1.5dB center dip when in stereo, 1.5dB center boost in mono
~-2.5dB Pan Law: gives a 0.5dB boost when panning to the sides so instruments aren't louder when panning.
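Of these, the -3dB law is the usual equal-power choice for stereo mixing; a minimal numpy sketch of how it derives the two channel gains (the mapping and names are mine):

```python
import numpy as np

def equal_power_pan(pan):
    """-3dB pan law: pan in [-1, 1] -> (left_gain, right_gain)."""
    theta = (pan + 1) * np.pi / 4           # map pan to 0..90 degrees
    return np.cos(theta), np.sin(theta)     # gains keep total power constant

l, r = equal_power_pan(0.0)
print(20 * np.log10(l), 20 * np.log10(r))   # about -3.01dB per side at center
print(equal_power_pan(-1.0))                # (1.0, 0.0): hard left
```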
-Balance Pot
~Input is stereo, unlike the pan pot. The 2 input channels go through separate gain stages before reaching the stereo output. The pot position determines how much attenuation is applied to each channel.
~never cross-feeds the input signal from one channel to the output of the other
Mono Tracks
-Problem with dry mono track is it provides no spatial perception
-Dry mono tracks always sound out of place, so add reverb or some other spatial effect to blend it
-Some mono tracks include room or artificial reverb that doesn't sit well with a stereo reverb of the whole mix
Stereo Pairs
-Coincident Pair (XY) technique provides the best mono-compatibility; since the diaphragms of the two mics are so close together, there's no need to worry about phase complications
-Spaced Pair (AB) involves two mics a few feet apart, is certain to have phase issues, and is not mono-compatible
-Near-coincident pair is two mics angled AND spaced, with less drastic phase problems
Multiple mono tracks
-Multiple mics on the same instrument
-Mirrored panning widens and creates less focus on the instrument in the stereo image
-Same panning gives a more relative stereo image, and is easier to locate
Combinations
-Like mirrored panning but less extreme
Panning Techniques
-Look at the track sheet and get a basic idea of a tentative pan plan
-Panning strategies differ with every mix
-Small tweaks in the near final stages can greatly improve mix
-Panning instruments in the same place causes masking. Panning different directions and mirroring can avoid masking
-When panning, think of a sound stage or try to visualize an actual performance
-Center and extremes in the panning field tend to be the busiest areas where masking is more likely to occur
-Level and frequency balance are the main concern when panning
-Be aware of the rhythmic structure of the tracks and keep them balanced
-A close-to-perfect stereo mix is basically a good mono mix, although there is still room for imbalances
-Stereo effects (reverb/delays) can be panned towards the dry track to put the desired effect in clearer focus
-Mono effects benefit more from panning the effect farther from the dry track and enhance the stereo image
Beyond Pan Pots
-Autopanners: pans cyclically between the left and right sides
~Rate: Defined by Hz, cycles/second
~Depth: How far the signal will be panned. Higher setting = more apparent effect
~Waveform: Defines shape of panning modulation, how smooth/rigid the panning will sound
~Center: defines the center position of the panning modulation
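Those four parameters map directly onto a pan position driven by a low-frequency oscillator. A minimal numpy sketch, assuming a sine waveform and the -3dB pan law sketched earlier:

```python
import numpy as np

def autopan(mono, sr=48000, rate_hz=1.0, depth=1.0, center=0.0):
    """Cyclic panner: rate in Hz, depth 0..1, center offsets the sweep."""
    t = np.arange(mono.size) / sr
    pan = center + depth * np.sin(2 * np.pi * rate_hz * t)   # the waveform
    theta = (np.clip(pan, -1, 1) + 1) * np.pi / 4            # -3dB pan law
    return np.stack([mono * np.cos(theta), mono * np.sin(theta)], axis=1)

stereo = autopan(np.random.randn(2 * 48000), rate_hz=0.5)    # one full sweep in 2s
print(stereo.shape)                                          # (96000, 2)
```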
And here are the notes I took on Monday and Wednesday:
Echo/Delay/Effects Processing Automation
-Create dedicated Aux track
-Create send to that Aux track
-Go to waveform drop-down, choose your automation
Serial Compression
-Try two compressors, one at 2:1, the other at 5:1. Be fairly light on both of them
-TDM plugins run on dedicated DSP hardware for Pro Tools
-RTAS plugins run on the Mac's own processor, so there's slight latency
-OOPS (Out-of-Phase Stereo)
~Taking it out of phase to hear what's hidden (what we did with Hey Jude)
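OOPS is one subtraction: flip one channel's polarity and sum, and everything panned dead center cancels, leaving the sides. A toy numpy sketch (the "stereo" array here is hypothetical):

```python
import numpy as np

def oops(stereo):
    """Out-of-Phase Stereo: L - R removes the phantom center."""
    return stereo[:, 0] - stereo[:, 1]

vocal = np.random.randn(48000)            # "panned center": identical in L and R
guitar = np.random.randn(48000)           # "panned hard left": L only
stereo = np.stack([vocal + guitar, vocal], axis=1)
print(np.allclose(oops(stereo), guitar))  # True: the center vocal is gone
```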
And, for the rest of Track 2, our production schedule is something like:
Tuesday, September 21:
-Finish EQ, gates, compressors in Pro Tools
-Experiment with automation for Aux sends
Thursday, September 23:
-Mono mix through the board! Must be finished by end of class!
Tuesday, September 28:
-Stereo mix through the board!
Chapter 10: Software Mixers
Tracks
-Audio
-Aux
-MIDI
-Instrument
Mixer Strips
-Input Selection
-Output Selection
-Insert Slots
-Send Slots
Solos
Control Grouping
Audio Grouping
Sends and Effects
Naming Buses
Internal Architecture
-Integer Notation
~The highest amplitude a 16 bit sample can handle is 65,535. Anything above this results in clipping
-Floating-Point Notation
~16-bit sample can theoretically handle any amplitude
-How they work together
~Pro Tools allows two hot signals to be summed without clipping. When bouncing in Pro Tools, the audio is converted from float into integer. If you bounce onto a 16-bit file, you lose 54dB of range
Dither
-To avoid producing repeating decimals, processors round off this data. Since the data is now incorrect and rounded off in the same way every time, distortion is produced. Dithering randomizes the rounding off so that a "low level of random noise" is created.
-Most audio sequencers ship with dither capabilities
-To avoid producing repeating decimals, processors round off this data. Since the data is now incorrect and rounded off inn the same way every time, distortion is produced.
-Most audio sequencers ship with dither capabilities
Normalization and the Master Fader
-Normalization
~Brings all signal level on a track up to the highest peak, without clipping, but rounding errors can occur, resulting in distortion, especially with 16-bit files. Use with CAUTION.
-Master Fader
~Scales mix output to the desired range of values
~Sometimes clipping will occur, even when no channels are overshooting the clipping threshold
Playback Buffer and Plugin Delay Compensation
-Playback Buffer
~Determines latency of input signals. Lower buffer size results in less latency, which is better for recording
~The mixdown should utilize a higher buffer size, because the system needs to read the information faster than it is played back
-Plugin Delay Compensation
~Plugins that run on DSP expansion, like a UAD card
~Plugin delay occurs when processing involves algorithms requiring more samples than available by each playback buffer
Chapter 11: Phase
What is Phase?
-Relationship between two or more waveforms, measured in degrees
-We only consider phase in relation to similar waveforms
-Identical waveforms are usually signs of duplication
~ex: Duplicated snare, one dry and one reverb
-Waveforms of the same event are two microphones capturing the same musical event (or recording)
~ex: A kick mic and overheads, both with kick in it
-3 Types of Phase Relationships between Similar Waveforms
~In phase or phase-coherent: waveforms start at exactly the same time
~Out of phase or phase-shifted: waveforms start at different times
~Phase inverted: both waveforms start at the same time, but amplitude is inverted
-Problems arise when similar phase shifted or phase inverted waveforms are summed
~Comb Filtering: If phase off less than 35ms, frequencies attenuated, tonal alteration and timbre change
~If waves are phase-inverted, level attenuation. If phase inverted and equal in amplitude, cancel each other out completely
-Phase in Recorded Material
~Comb filtering caused by a mic a few feet from guitar amp, picking up reflected frequencies as well as the direct sound. Not much a mixing engineer can do to fix. Caused by having two or more tracks of the same take of the same instrument can be treated by the mixing engineer:
(A) top/bottom front/back tracks: Microphones that are placed on opposite sides of an instrument are likely to pick up opposite sound pressures. Fix it by inverting the phase of one of the microphones.
(B) Close-mic and overheads: Close-miced kick or snare might interact with overhead microphones to cause phase shifting or inversion. Fix it by taking the OH as a reference and make sure the kit is phase coherent
(C) Mic and Direct: The signal from a bass guitar that is recorded DI will travel much faster that a signal that goes from guitar to an amplifier to a microphone to your console. Fix it by zooming in and nudging the track
Phase Problems During Mixdown:
-Delay caused by plug-ins
-Delay caused by digital to analog conversion when using outboard gear
-Short delays may cause comb filtering
-Equalizers cause delay in a specific range of frequencies
Tricks:
-Two mixing tricks are based on a stereo setup with both identical mono signals being sent to a different extreme, and one of the signals is either delayed or phase inverted
-Haas Trick
~Helmut Haas discovered that the direction of the sound is determined solely by the initial sound providing that (1) successive sound arrive within 1-35ms from the initial sound and (2) successive sounds are less than 10dB louder than the initial sound
%Takes the original signal panned to one extreme, while a copy is sent to the other extreme with a delay of 1-35ms
%One way involves panning a mono track hard to one channel, duplicating it, and panning the duplicate hard to the opposite channel and nudging the duplicate by a few milliseconds
%Second way involves loading a stereo delay on a mono track, setting one channel to have no delay and the other to have a short delay between 1-35ms
~Used to:
%Fatten sounds on instruments panned to the extremes making them sound more powerful
%As a panning alternative
%To create more realistic panning, since the human ear can use the amplitude, time, and frequency differences to locate sound
~Haas Trick controls: the amount of delay and the level
Out of Speakers Trick
-Like Haas Trick, but instead of delaying the wet signal, just invert the phase. Results in the sound coming from all around you rather than directly at you.
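A minimal sketch of both tricks on a mono signal, assuming NumPy arrays and a known sample rate (the function names are mine): the Haas version delays one side by a few milliseconds, while the out-of-speakers version phase-inverts it instead:

```python
import numpy as np

def haas_stereo(mono, sample_rate=44100, delay_ms=15.0):
    """Haas trick: original hard left, a 1-35 ms delayed copy hard right."""
    delay = int(sample_rate * delay_ms / 1000.0)
    left = mono
    right = np.concatenate([np.zeros(delay), mono])[:len(mono)]
    return np.stack([left, right], axis=1)   # shape (samples, 2)

def out_of_speakers(mono):
    """Same layout, but the right side is phase-inverted, not delayed."""
    return np.stack([mono, -mono], axis=1)
```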
Chapter 12: Faders
Sliding Potentiometer
-Simplest basis for an analog fader
-The amplitude of the analog signal is represented in voltage
-Contains a resistive track with a conductive wiper that slides as the fader moves
~Different positions along the track provide different amounts of resistance
~Different degrees of level attenuation
-Cannot boost the audio signal passing through it (unless a fixed-gain amplifier is placed after it)
-Audio signal enters and leaves
VCA Fader
-Combination of a voltage controlled amplifier and a fader
-VCA is an active amplifier that audio signal passes through
~Amount of boost or attenuation is determined by DC voltage
-Fader only controls the amount of voltage sent to the amplifier
~No audio signal flows through the actual fader
-Allows a number of DC sources to be summed to a VCA
~Shortens the signal path
Digital Fader
-Determines a coefficient value by which samples are multiplied
~A coefficient of 2 (doubling the sample values) results in a boost of around 6dB
~0.5 results in around 6dB attenuation
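The coefficient-to-dB relationship is just 20·log10 of the coefficient; a quick check of the figures above:

```python
import math

def fader_gain_db(coefficient):
    """dB change produced by multiplying samples by a coefficient."""
    return 20.0 * math.log10(coefficient)

print(fader_gain_db(2.0))   # ~ +6.02 dB (doubling the sample values)
print(fader_gain_db(0.5))   # ~ -6.02 dB (halving them)
print(fader_gain_db(1.0))   #    0 dB   (unity gain)
```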
Scales
-Typical measurement is in the scale unit dB
~Strong relationship to how the human ear perceives loudness
-Scale is generally based on steps of around 10dB or 6dB
~6dB is approx. doubling the voltage (or sample value) or cutting it in half
~10dB is doubling or halving the perceived loudness
-The 0dB point is called unity gain
~Where the signal is neither boosted nor attenuated
-Most faders offer extra-gain
~Generally around 6, 10, 12dB boosts
~Only used if signal is still weak while at unity
-Area between -20dB and 0dB is the most crucial area
Level Planning
-Faders are made to go up and down
-When mixing, levels start by coming up
~Generally ending up at around the same positions
-Problem
~A natural reaction to not being able to hear a track is to bring the fader up
%Bringing a snare up in the mix might begin masking the vocals, so you bring up the vocal fader; then the bass is masked, and so on
~Eventually, end up back where you started
-Solutions
~Having a set plan for levels before bringing up faders, so the extra-gain settings are left alone
~Setting the loudest track first and bringing up the rest of the tracks around it
Extremes - Inward Experiment
-Take the fader all the way down
-Bring it up gradually until the level seems reasonable
-Mark the fader position
-Take the fader all the way up (or to a point where the instrument is too loud)
-Bring it down gradually until the level seems reasonable
-Mark the fader positions
-You should now have two marks that set the limits of a level window. Now set the instrument's level within this window based on the importance of the instrument
Chapter 13: Panning
How Stereo Works
-Alan Dower Blumlein
~Researcher and engineer at EMI
~December 14, 1931, applied for a patent called "Improvements in and relating to Sound-transmission, Sound-recording and Sound-reproducing Systems"
~Was looking for 'binaural sound'; we call it 'stereo' today
~Ironically, the first stereo recording was published in 1958 (16 years after Blumlein's death and 6 years after EMI's patent rights had expired)
-Stereo Quick Facts
~We hear stereo based on three criteria: (EX: trumpet on your right)
%amplitude (sound will be louder in the R ear than the L)
%time/phase (sound will reach L ear later than R)
% frequency (less high freq in L than R)
~Sound from a central source in nature reaches our ears at the same time, with the same volume and frequencies. With two speakers there is no center speaker, so we hear a phantom center
~Best stereo perception when the equilateral triangle is achieved
Pan Controls
-Pan Pot (Panoramic Potentiometer)
~First studio with a stereo system was Abbey Road, London
~Splits a mono signal into L and R, and attenuates the side you're not favouring
-Pan Clock
~Hours generally span from 7:00 (L) to 17:00 (R)
-Panning Laws
~A console usually has only one panning law, but some inline consoles have one for channel path and one for monitor path
%two main principles:
^if two speakers emit the same signal at the same level, a listener in the center will perceive a 3dB boost over what each speaker produces.
^when the two channels are summed in mono, half of each channel's level is sent to each speaker
~0dB Pan Law: doesn't drop the level of centrally panned signals. The instrument's level drops as we pan from the center outward, with a 3dB increase of perceived loudness when centered
~-3dB Pan Law: when panned center, there is a 3dB dip (generally the best option for stereo mixing; sketched after this list)
~-6dB Pan Law: used for mono-critical applications. Provides uniform level in mono, but a 3dB dip when in stereo
~-4.5dB Pan Law: compromise between -3 and -6dB laws. 1.5dB center dip when in stereo, 1.5dB center boost in mono
~-2.5dB Pan Law: gives a 0.5dB boost when panning to the sides so instruments aren't louder when panning.
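As a sketch of the -3dB law referenced above (one common constant-power implementation, not necessarily how any given console realizes it), sine/cosine gains put both channels at 0.707 (-3dB) when centered:

```python
import math

def pan_gains_constant_power(pan):
    """-3 dB pan law via sine/cosine gains.

    pan: -1.0 (hard left) .. 0.0 (center) .. +1.0 (hard right).
    At center both gains are cos(pi/4) = 0.707, a 3 dB dip per side,
    which the two speakers sum acoustically back to roughly unity.
    """
    angle = (pan + 1.0) * math.pi / 4.0      # maps pan to 0..pi/2
    return math.cos(angle), math.sin(angle)  # (left gain, right gain)

for p in (-1.0, 0.0, 1.0):
    l, r = pan_gains_constant_power(p)
    print(f"pan {p:+.1f}: L={l:.3f} R={r:.3f}")
```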
-Balance Pot
~Input is stereo, unlike pan pot. 2 input channels go through separate gain stages before reaching stereo output. Pot position determines how much attenuation applied on each channel.
~never cross-feeds the input signal from one channel to the output of the other
Mono Tracks
-Problem with dry mono track is it provides no spatial perception
-Dry mono tracks always sound out of place, so add reverb or some other spatial effect to blend them in
-Some mono tracks include room or artificial reverb that doesn't sit well with a stereo reverb of the whole mix
Stereo Pairs
-Coincident Pair (XY) technique provides the best mono-compatibility given that the diaphragms of the two mics are so close in proximity, and there's no need to worry about phase complications
-Spaced Pair (AB) involves two mics a few feet apart, is certain to have phase issues, and is not mono-compatible
-Near-coincident pair is two mics angled AND spaced, with less drastic phase problems
Multiple mono tracks
-Multiple mics on the same instrument
-Mirrored panning widens and creates less focus on the instrument in the stereo image
-Panning them to the same place gives a more focused stereo image that is easier to locate
Combinations
-Like mirrored panning but less extreme
Panning Techniques
-Look at the track sheet and get a basic idea of a tentative pan plan
-Panning strategies differ with every mix
-Small tweaks in the near final stages can greatly improve mix
-Panning instruments in the same place causes masking. Panning different directions and mirroring can avoid masking
-When panning, think of a sound stage or try to visualize an actual performance
-Center and extremes in the panning field tend to be the busiest areas, where masking is more likely to occur
-Level and frequency balance are the main concern when panning
-Be aware of the rhythmic structure of the tracks and keep them balanced
-A close-to-perfect stereo mix is basically a good mono mix, although there is still room for imbalances
-Stereo effects (reverbs/delays) can be panned towards the dry track to put the desired effect in clearer focus
-Mono effects benefit more from being panned farther from the dry track, which enhances the stereo image
Beyond Pan Pots
-Autopanners: pans cyclically between the left and right sides
~Rate: Defined by Hz, cycles/second
~Depth: How far the signal will be panned. Higher setting = more apparent effect
~Waveform: Defines shape of panning modulation, how smooth/rigid the panning will sound
~Center: defines the resting position around which the panning modulates (all four controls are sketched below)
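A minimal autopanner sketch with those four controls, assuming NumPy and a mono input (names and defaults are mine):

```python
import numpy as np

def autopan(mono, sample_rate=44100, rate_hz=1.0, depth=1.0,
            center=0.0, waveform="sine"):
    """Cyclically pan a mono signal between left and right.

    rate_hz: LFO cycles per second; depth: 0 (static) to 1 (full sweep);
    center: resting pan position; waveform: 'sine' (smooth) or 'square'
    (rigid jumps) -- matching the rate/depth/waveform/center controls.
    """
    t = np.arange(len(mono)) / sample_rate
    lfo = np.sin(2 * np.pi * rate_hz * t)
    if waveform == "square":
        lfo = np.sign(lfo)
    pan = np.clip(center + depth * lfo, -1.0, 1.0)  # -1 = L, +1 = R
    angle = (pan + 1.0) * np.pi / 4.0               # constant-power gains
    return np.stack([mono * np.cos(angle), mono * np.sin(angle)], axis=1)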
And here are the notes I took on Monday and Wednesday:
Echo/Delay/Effects Processing Automation
-Create dedicated Aux track
-Create send to that Aux track
-Go to waveform drop-down, choose your automation
Serial Compression
-Try two compressors, one at 2:1, the other at 5:1. Be fairly light on both of them
-TDM plugins run on dedicated DSP hardware for Pro Tools
-RTAS plugins run on the Mac's own processor, so there is slight latency
-OOPS (Out-of-Phase Stereo)
~Taking it out of phase to hear what's hidden (what we did with Hey Jude)
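OOPS itself is just subtracting one channel from the other; a one-line sketch, assuming a NumPy stereo array of shape (samples, 2):

```python
import numpy as np

def oops(stereo):
    """Out-of-Phase Stereo: L minus R cancels the center (shared) content,
    leaving what's 'hidden' in the sides, e.g. reverb and wide guitars."""
    return stereo[:, 0] - stereo[:, 1]
```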
And, for the rest of Track 2, our production schedule is something like:
Tuesday, September 21:
-Finish EQ, gates, compressors in Pro Tools
-Experiment with automation for Aux sends
Thursday, September 23:
-Mono mix through the board! Must be finished by end of class!
Tuesday, September 28:
-Stereo mix through the board!
Friday, September 17, 2010
Week of September 13, 2010
Notes taken on Monday:
-When compressing, make sure all of the quiet sounds (made louder by compression) are taken out, or there will be loud breaths. And put fades at the ends of regions
-Talk about the differences between two different mixes with partner, try and improve on it.
-If there is a hiss in the mix, you must go through one track at a time to find it. Go to wherever it is being sent, and adjust the faders in the sends. Check that the Line 1 button/pot is fully counter-clockwise. Then the fader of the channel. Turn all green knobs (aux sends) down to zero. Then check the master aux sends
~Input on SPX 90, turn it down. Same with Lexicon, Millenia
On our mix:
-250Hz EQ drop on opening guitars
-HPF across the board because it's way too boomy
-LFO (Low Frequency Oscillation)
~An inaudible sound could affect absolutely everything
-50Hz is a good HPF, but continue to attenuate bottom end with EQ
-Select all tracks, bring them all down if it's too overpowering
-HPF at 50Hz depressed, approximately -7dB around 80Hz
~Group 2 on hi-hat
~ex: Something sounds good at 80Hz dropped 15dB. Instead, drop at the different harmonics in the overtones maybe 2 or 3dB
Wednesday's class notes:
-Dial out the frequency range where vox don't make sense. Then, go to guitars and take out that frequency a bit (probably somewhere around 6dB or so)
-On the verb, put an HPF so low frequencies aren't amplified
-Try complementary EQs on guitars
~Can take a single guitar part, copy it, then do a complementary EQ with a delay on one
Production Schedule:
-Tuesday, Sept 14
~Housekeeping
~EQ at least the drums
-Thursday, Sept 16
~EQ, compression, etc.
-Tuesday, Sept 21
~Mix through the board
Here are the notes that were presented on Wednesday:
Chapter 6: Mixing Domains and Objectives
-Macromixing: Concerned with the overall mix
-Micromixing: Concerned with the individual treatment of each instrument
-The Five Main Domains of the Mixing Process
~Time
~Frequency
~Level
~Stereo
~Depth
-Space: the combination of stereo and depth
-Using various tools, we can manipulate the sonic aspects of instruments and their presentation in each domain
Mixing Objectives
-Four principal objectives in mixing
~Mood
~Balance
~Definition
~Interest
-Evaluating the quality of a mix starts with considering how coherent and appealing each domain is and then assessing how well each objective was accomplished within that domain
Mood
-Concerned with reflecting the emotional content of the music in the mix
-Of all the objectives, this one involves the most creativity and is the most central to the project
-Applying techniques and tools in ways that are not congruent with the project can destroy the project's emotional qualities
-Mixing engineers must be careful not to apply genre-specific techniques to a project that is not from that genre
Balance
-Balance normally refers to the balance in three domains
~Frequency balance: a lack of frequency balance can leave the mix wanting in an area
~Stereo image balance: an imbalance can throw off the stereo image
~Level balance: contains relative and absolute balance
-Any of these balances may be traded for interest or a creative effect, and usually only for a short period
Definition
-How distinct and recognizable sounds are
-Can be used to describe instruments as well as reverbs
-Not every mix requires a high degree of definition
-Also deals with how well each instrument is presented in relation to its timbre
Interest
-Arrangement changes and many other production techniques all result in variations that grab the attention of the listener
-A mix has to retain and increase the intrinsic interest in a production
-A mix can add or create interest
-Ways of achieving interest
~Automate levels
~Have a certain instrument play in certain sections only
~Use different EQ settings for the same instrument and toggle them between sections
~Apply more compression on the drum mix during the chorus
~Distort a bass guitar during the chorus
~Set different snare reverbs in different sections
Frequency Domain
-The Frequency Spectrum
~20Hz to 20kHz
~The four bands in the spectrum include low, low-mids, high-mids, highs
~The cutoffs are 250Hz, 2kHz, 6kHz
-Frequency Balance and Common Problems
~Achieving frequency balance is a prime challenge in most mixes
~The most common problems with frequency balance involve the extremes
~Most novices will come up with mixes that have these problems because of a lack of experience in evaluating these ranges
~Too many highs can make the mix brittle and cause problems in the mastering stage
~Lows are usually the hardest to stabilize and should be paid close attention when checking the mix on different systems. It is more common to have an excess of lows than a lack of them
~The low-mids present problems because most instruments have their fundamentals in this range
~Separation and definition are achieved with work done in this area
-Separation
~In our perception we want each instrument to have a defined position and size on the frequency spectrum
~Separate the instruments first then see if there are empty areas
-Definition
~Instruments may mask one another and therefore one instrument may lack definition
~We can equalize various instruments to increase their definition
~At the same time, equalization may be used to decrease definition
-Mood
~Low-frequency emphasis relates to a darker, more mysterious mood
~High frequency emphasis relates to happiness and liveliness
~Power is usually linked to the low frequencies, but can be achieved in all areas
~Equalization can be used to convey certain moods when applied to instruments
-Interest
~Can be achieved by momentarily cutting all of the low frequencies and then bringing them back in
~Shelving EQs can be automated to bring up the lows during an exciting section
~Brightening a chorus section may also add interest
Level Domain
-Levels
~The question is not how loud, but how loud compared to other instruments
~Setting relative levels between instruments is largely a matter of taste
~Only fledgling mixers truly get this wrong; adjusting levels comes naturally with experience
~Adjusting levels does not only involve moving faders, it also includes EQ and compressor settings
~Getting exceptional relative level balance is an art and requires practice
~Tip: To make everything louder in the mix, bring up the monitor level
-Levels and Balance
~Only very few mixes require all the instruments to be equally loud
~Setting relative level balance is determined by importance
~Steep level variations of the overall mix might be a problem
~Not automating the levels of a particular mix may cause the levels to rise and drop in disturbing ways
-Levels and Interest
~Some degree of level variation is needed to promote interest
~Even if we do not automate levels, the production itself may lend itself to louder choruses and softer verses
~Additional spice can be added by further automating these naturally occurring peaks and valleys throughout the project
~The options are endless
~Level automation is used to preserve overall level balance, but also to break it
-Levels, Mood, and Definition
~We should always ask ourselves what the emotional function of each instrument is and how it can enhance or damage the mood of the song, then set the levels accordingly
~The louder an instrument is, the more defined it appears in the mix
~However, an instrument with frequency deficiencies might not benefit from a level boost - it may still be undefined, just louder
~Bringing up the level of an instrument can cause a loss of definition in another
-Dynamic Processing
~Noticeable level changes within the performance of an instrument are of equal importance (micromixing)
~Level variations in a performance break the relative balance of a mix
~We can control this by using gain-riding, compression or both
~At the other end of the spectrum, over compression can lead to a lifeless performance and can end up sounding unnatural
Stereo Domain
-It's the imaginary space listeners perceive between left and right speakers. The stereo image can be seen as the whole mix or an individual instrument.
-We control the stereo's panoramic aspects with pan pots and reverb.
-Localization: awareness of where the sound is coming from (L or R)
-Stereo width: how much of the stereo field the image occupies
-Stereo focus: how focused the sounds are
-Stereo spread: how the instruments are spread across the stereo image
Stereo Balance
-Need to have balance between the left and right. If we pan most instruments to one side, the mix can lean toward one speaker. Balance also allows for the introduction of new instruments, giving them space to be heard
-Stereo frequency imbalance: even though the frequency balance between left and right is rarely identical, too much variation can cause image shifting
-Tracks or instruments with frequency variations can be put in the center to avoid shifting
-Stereo spread imbalance (I-Mix): when little of the sides is used to spread instruments, producing a near-monophonic I-Mix. The I-Mix is common in hip-hop. A mix can also be a V-Mix: a weak center with more on the sides
-W-Mix: a combination of a V and an I-Mix. The mix can be unpleasant due to intense focus on the right, left, and center with nothing in-between
-The stereo panorama can also have a specific area that is lacking music and left empty. This can be fixed by panning
Stereo Image and Other Objectives
-Stereo Image: can promote moods; for example, less natural panning can make a mix feel more powerful
Depth
-What instrument is behind or in front of the other instrument. A mix can have instruments close to each other making it tight, or have instruments with depth to make it spacious
-Reverb helps us create a sense of depth in the mix
-Coherent depth: creating a depth field while retaining each instrument's definition can be a challenge. Depth variations are uncommon unless done as an artistic device
-Sonic depth in nature: the aim is to re-create a natural space, or an artificial but appealing one
-Every pair of speakers has inherent depth, with sounds from the sides appearing closer than sounds from the center
Chapter 7: Monitoring - How did we get here?
Sound Reproduction
-An accurate monitoring environment is absolutely necessary for mixing. Specific room construction, room treatments and monitor placement make an ideal monitoring environment
-Loudspeaker drivers displace air in response to an incoming voltage that corresponds to a waveform. Two-way loudspeakers require two or more drivers, one for the low frequencies and one for the highs
-Low frequencies require a large rigid cone capable of displacing large masses of air
-Higher frequencies require a small diaphragm that can move rapidly
-A crossover splits the signal, usually around 2kHz, sending the highs through a high-pass filter to the small diaphragm and the lows through a low-pass filter onward to the large cone
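A sketch of that split using SciPy's Butterworth filters (my own illustration; an actual loudspeaker crossover is an analog network or dedicated DSP, and the order and frequency here are assumptions):

```python
from scipy.signal import butter, sosfilt

def two_way_crossover(x, sample_rate=44100, crossover_hz=2000.0, order=4):
    """Split a signal into a low band (woofer) and a high band (tweeter)."""
    lp = butter(order, crossover_hz, btype="lowpass",
                fs=sample_rate, output="sos")
    hp = butter(order, crossover_hz, btype="highpass",
                fs=sample_rate, output="sos")
    return sosfilt(lp, x), sosfilt(hp, x)  # (to large cone, to diaphragm)
```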
Auratones, Near Fields and Full Range Monitors
-Auratones are monitors with a well-defined mid-range response and are used by mixing engineers to simulate a domestic listening environment
-Near-field monitors provided clarity that Auratones and main speakers previously could not. The majority of mixes are done using near-field monitors.
-Full range monitors reproduce the complete audible range of sound, 20Hz-20kHz. They provide higher resolution than near-field monitors. Usually used to level out the low end of a mix
Choosing Monitors
-Active vs. Passive
~Passive: no integrated amplifier and must be fed with a speaker level signal that was amplified by an external amplifier (not inside the monitor)
*When connecting an amp to a monitor, the cables should be high quality and as short as possible. They should also be exactly the same length, or a stereo imbalance between the speakers is possible. The wiring must be properly placed (+ to +, - to -) or the phase will be inverted
**To check for an incorrect connection, watch the kick to see if the cone pulls inward rather than pushing outward
=Powered speakers: have a built-in amplifier; inputs are XLR or 1/4"
~Active: has its own amplifier. They are shielded to drain magnetic interference between the speaker and CRT computer screens, and some have A/D converters to prevent analog interference
~There is no guarantee that either monitor will perform better
Enclosure Designs & Specifications
-Dipole design: studio monitors with holes in their enclosure (vents, ports)
~Provide extended low frequency response but it is inaccurate
-Monopole design: monitors with no ports and sealed air within the enclosure. They provide better damping of the woofer cone, resulting in tighter bass response
The Room Factor
-"No pair of speakers sound the same, unless placed in the same room." High priced, high quality speakers will perform poorly in an acoustically untreated room.
-Room modes: the result of interaction between reflected low frequency waveforms. Waves travel out in a spherical fashion and bounce off of obstacles they encounter. When sound waves act upon two parallel walls, they will bounce from one to the other, losing energy each time, until they die out. However, continuous output from the monitors can keep reinforcing those waves.
~Standing waves: waves trapped between two parallel surfaces that interact causing attenuation and boosting at certain frequencies
~Always cause problems at the same frequencies, but the problems are not consistent throughout the whole room; every point in the room has its own frequency response. Room modes inhibit the speakers' ability to transmit the problematic frequencies of a mix (see the quick mode calculator below)
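The axial modes between two parallel walls fall at f = n·c/(2L); a quick calculator (my own arithmetic, assuming c ≈ 343 m/s):

```python
def axial_modes(wall_distance_m, count=5, speed_of_sound=343.0):
    """First few axial room-mode frequencies between two parallel walls."""
    return [n * speed_of_sound / (2 * wall_distance_m)
            for n in range(1, count + 1)]

print([round(f, 1) for f in axial_modes(4.0)])  # 4 m walls: 42.9, 85.8, ... Hz
```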
-Treating Room Modes
~Treat the reflections in your room with diffusion and absorption
~Diffusers scatter sound energy and break up low frequency standing waves
~Use absorbers on walls to catch sound energy and break up standing waves. Absorbers are more effective at higher frequencies
-Flutter echo: the result of interaction between reflected mid-high frequency waveforms. Use absorbers to eliminate flutter echo
-Early reflections: waves bouncing off of surfaces very close to the sound source, and then interfering with the direct waveforms causing phase issues
Positioning Monitors
-Where in the room?
~The positioning of the monitors is determined by the positioning of the listener
~Room modes affect the frequency response at the listening position
~Minor changes affect small rooms more, which unfortunately leaves little to no option for monitor placement
-The Equilateral Triangle
~Monitors placed on two vertexes of equal height
~Use a string/measuring tape to measure equal distances between monitors and focal point
~Speakers angled towards listener, 60 degree angles
~EXPERIMENT!!!
~Moving the speakers farther apart results in a wider stereo image with less of a focal point, whereas a narrower image allows for more of a center, but less feel of stereo
-How far?
~Monitors too close: if moving your head slightly causes a dramatic difference, there will be more phase difference between the left and right ears, resulting in a less solid stereo image
~Monitors too far: the further away they are, the wider the stereo image becomes, which can make panning decisions easier. The lower frequencies will be louder because they bounce off the back wall and return to superimpose on the direct sound
-Dampening Monitors
~Monitor Isolation Pads: made of dense acoustic foam and metal spikes on which the monitors rest. They isolate the monitor from the stand, ensuring the monitor performs with no vibrations (and preventing vibrations from passing into the stand as well)
Chapter 8: Meters
-Amplitude vs. Level
~Amplitude: describes changes in air pressure compared to the normal atmospheric pressure
=Microphones convert changes in air pressure to voltage
=An AD converter converts the voltage into discrete numbers
~Level: denotes the absolute magnitude of the signal
=Describing a signal's level in dB is a lot easier because dB is a logarithmic scale, expressing very large increments with very small numbers
=Level is responsible for adjusting a signal's amplitude
-Mechanical and Bar Meters
~Mechanical meter: made of a magnet and coil that move a needle dependent on the level of voltage being produced
=Have a scale of around 24dB
~Bar meter: visually measured with the use of LEDs, a plasma screen, or a control on a computer screen
=Has extra indicators that display the peak hold, peak level, and clip indicator
-Peak Meters
~Peak meters: display level of signal. Used to monitor the signal when there is a pre-defined limit
=On a digital system the highest level of peak meter is 0dB. Lowest level can be dependent on bit depth
=Analog equipment can have a scale higher than 0dB
-Average Meters
~Our ears perceive loudness according to the average level of a sound, not its peak level
=Peak meters tell us little about the loudness
=Are often mechanical meters that employ an RC circuit (resistor-capacitor)
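The peak-versus-average distinction in a few lines, assuming NumPy and float samples (a real meter adds ballistics, which this skips):

```python
import numpy as np

def peak_db(x):
    """What a peak meter shows: the highest instantaneous level."""
    return 20 * np.log10(np.max(np.abs(x)))

def rms_db(x):
    """Closer to what an average meter shows (and what we hear as loud)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

t = np.linspace(0, 1, 44100, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)
print(peak_db(sine))  # ~0 dB
print(rms_db(sine))   # ~-3 dB: same peak, lower average level
```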
-Phase Meters
~VU or peak meters are provided per channel for the stereo mix
~Phase meters
=common on large format consoles
=meter the phase coherency between left and right channels
=+1 means that both channels are outputting exactly the same signal; 0 means that the channels are outputting different signals; -1 means that the channels are completely phase inverted
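A correlation-style phase meter can be sketched as the normalized product of the two channels (my own simplification of what hardware meters do):

```python
import numpy as np

def phase_correlation(left, right):
    """Correlation between channels: +1 identical, ~0 unrelated, -1 inverted."""
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    if denom == 0:
        return 0.0
    return float(np.sum(left * right) / denom)

t = np.linspace(0, 1, 44100, endpoint=False)
sig = np.sin(2 * np.pi * 440 * t)
print(phase_correlation(sig, sig))   # +1.0
print(phase_correlation(sig, -sig))  # -1.0
```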
Chapter 9: Mixing Console
-Buses
o Common signal path where many signals can be mixed
o Typical buses
* Mix bus
* Group bus (or single record bus on CD)
* Aux bus
* Solo bus
* Processors vs. Effects
o A dry signal is the unaffected audio, while a wet signal is the affected audio
* For processors, can adjust the percentage used between wet and dry
o Processors: Made to alter the input signal and replace it with a processed signal
* Added with an insert point
* Include EQs, dynamic range processors (such as compressors, limiters, gates, expanders, and duckers), distortions, pitch correctors, faders, and pan pots
o Effects: Add something to the original sound. Takes signal and generates a new signal based on original one
* Added by using an auxiliary send
* Include time-based effects (such as reverb, delay, chorus, flanger), pitch-related effects (such as pitch shifters and harmonizers)
* Basic Signal Flow
o Step 1: Faders, pan pots, cut switch
* Each channel is fed from a track on the multitrack recorder. Signal travels from the line input socket, the fader, then the pan pot.
* Pan pots take the mono signal and send out a stereo signal, then sum it into the mix bus. Single fader alters the level of the stereo bus signal.
* Then, mix bus signal goes to two mono outputs on the back of the console (L, R)
o Step 2: Line gains, phase-invert and clip indicators
* Line-gain (or tape-trim) boosts/attenuates the level of the audio signal before it gets to the channel signal path
* Optimize the level of the incoming signal to the highest level possible without clipping (digital) or unwanted distortion (analog)
* Some engineers use the over-hot input because it adds appealing harmonic distortion
* Check phasing with the phase invert
* Don't always trust clip indicators, trust your ear above all else
o Step 3: On-board processors
* Quality dictates much of a console's value
* Include hpf, EQ, basic compressors at times
o Step 4: Insert points
* Many engineers prefer to use external insert points rather than in-board.
* Lets us insert devices into the signal path
* Each external unit can only be connected to one channel, but multiple tracks can use the unit through inserts.
* Can use multiple inserts on a single track
* Importance of Signal Flow Diagrams
o Step 5: Auxiliary sends
* Takes a copy of the signal on the channel path and sends it to an auxiliary bus
* Local aux controls are on the individual channels, containing:
* Level control: pot to control level of the copy sent to the aux bus
* Pre/post fader switch: determines if the signal is taken before or after the channel fader. Post-fader lets you control level of signal with channel fader. We often want aux effect level to correspond to instrument level, so we use post-fader feed. If pre-fader, the level is independent of the channel fader and will play regardless of channel fader level.
* Pan control: Aux buses can be mono or stereo. If stereo, pan pot available to determine how mono channel signal is panned to the aux bus
* On/off switch: Often called MUTE
* Master aux controls in master section. Same as the local ones, but no pre/post fader. Most have multiple auxiliary buses
o Step 6: FX returns (or aux returns)
* Dedicated stereo inputs that can be routed to the mix bus
* Provide quick and easy way to blend an effect return into the mix, but offer very limited functionality
* When possible, effects are better returned into the channels
* Groups
o Control grouping: Allocate a set of channels to a group, so moving one fader controls all of them
* VCA grouping: Consoles with motorized faders have master VCA group faders. Individual channel faders are then assigned to a VCA group
* Cutting or soloing VCA group affects each channel assigned to it
o Audio grouping
* To handle many signals, must sum a group of channels to a group bus (subgrouping). Group signal can then be processed and routed to the mix bus.
* Format: Channels:Groups:Mix-buses
* Ex: 16:8:2 denotes 16 channels, 8 group buses and 2 mix buses (or 1 stereo mix bus)
* Routing matrix: collection of buttons that can be situated either vertically next to fader or in its own area. Depress one, and the channel will be sent to the corresponding master group
* In-line grouping
* Ex: In a 24 track recording, drums may be ch 1-8. They are routed through the matrix to Channels 24 and 25 that now function as a group.
* Bouncing: by sending groups to yet another subgroup, we then send that final subgroup to an available audio track on the multitrack recorder
o In-line consoles
* The desk accommodates two types of signals:
* Live performance signals: Are sent to a group to be recorded onto the multitrack
* Multitrack signals: Already recorded information sent to a group
* In-line consoles and mixing
* Since the channel path is stronger than the monitor path, it's ideal to use the channel path for multitrack recording and return signals and use the monitor path for:
o Effects returns
* Ex: We can send a guitar to line 1 inputs to a delay unit and bring the delay back to the monitor path on the same channel strip/module
o Additional aux sends
* Ex: We can send the background vocals on a bus to a group, the group to the delay and/or reverb. The bus acts as a local aux send while the group channel acts as a master fader of what is being received.
o Signal copies
* Ex: Multiple snare tracks sent to a single channel through the monitor path
o The Monitor Section
* Monitor output
* To hear it, it needs to be sent from the mix output (we commonly use Pro Tools 1 - 2 on the patch bay, and MIX pressed on master channel) to the 2 Track Recorder (2TRK button on master channel). Then, to the monitor output (the actual monitors)
* Additional controls
* Cut: cuts the monitor output (for feedback, noise bursts, clicks/thumps, etc.)
* Dim: Attenuates monitor level by user-definable amount of dB (for audible convenience in studio).
* Mono: Sums the stereo output to mono (for phasing, masking issues).
* Speaker selection: Allows you to switch between different monitors (if you have them)
* Cut left, cut right: Mutes right or left monitor.
* Swap left/right: the left signal plays in the right speaker and vice versa (used to check stereo imbalance)
* Source selection: Determines where the speakers get the audio (mix bus, external outputs, aux bus)
o Solos
* Two types of solos:
* Destructive in-place (when one channel is soloed, every other channel is cut)
* Nondestructive
o PFL (takes a copy before the channel fader and pan pot, so mix levels and panning aren't engaged)
o AFL (takes a copy after the fader but before the pan, so it maintains levels, but not panning) or APL (takes a copy after the fader and pan, so both panning and levels are maintained)
* Solo safe
o Keeps a channel soloed permanently, even when other tracks soloed.
* Which solo?
o Destructive solo is favored for mixdown because when a track is soloed, the signal level remains the same as it previously was, as opposed to nondestructive solo where the signals may drop or rise in level.
o Correct Gain Structure
* Make sure that the signal is at its optimum level so 100% of the signal is sent and received
* Given that most analog gear gives off unwanted noise, just use the channel fader, not the processor's output. This will prevent the noise given off by the processor from being boosted.
o The Digital Console
* ADA vs. DA
* Digital consoles have fader-layer capabilities
* Allow complete control over automating any parameter
* External processing is still possible, but it is an option. On an analog console, it would be a necessity
-When compressing, make sure all of the quiet sounds (made louder by compression) are taken out, or there will be loud breaths. And put fades at ends of regions
-Talk about the differences between two different mixes with partner, try and improve on it.
-If there is a hiss in the mix, must one track at a time to find it. Go to wherever it is being sent, and adjust faders in sends. Check Line 1 button/pot is fully counter-clockwise. Then fader of the channel. Turn all green knobs (aux sends) down to zero. Then check on master aux sends
~Input on SPX 90, turn it down. Same with Lexicon, Millenia
On our mix:
-250k EQ drop on opening guitars
-HPF across board because it's way too boomy
-LFO (Low Frequency Oscillation)
~An inaudible sound could affect absolutely everything
-50Hz is a good HPF, but continue to attenuate bottom end with EQ
-Select all tracks, bring them all down if it's too overpowering
-HPF at 50Hz depressed, approximately -7dB around 80Hz
~Group 2 on hi-hat
~ex: Something sounds good at 80Hz dropped 15dB. Instead, drop at the different harmonics in the overtones maybe 2 or 3dB
Wednesday's class notes:
-Dial out the frequency range where vox don't make sense. Then, go to guitars and take out that frequency a bit (probably somewhere around 6dB or so)
-Verb, put a HPF so no amplification of low frequencies
-Try complimentary EQs on guitars
~Can take single guitar part, copy it, then do a complimentary EQ with a delay on one
Production Schedule:
-Tuesday, Sept 14
~Housekeeping
~EQ at least the drums
-Thursday, Sept 16
~EQ, compression, etc.
-Tuesday, Sept 21
~Mix through the board
Here are the notes that were presented on Wednesday:
Chapter 6: Mixing Domains and Objectives
-Macromixing: Concerned with the overall mix
-Micromixing: Concerned with the individual treatment of each instrument
-The Five Main Domains of the Mixing Process
~Time
~Frequency
~Level
~Stereo
~Depth
-Space: the combination of stereo depth
-Using various tools, we can manipulate the sonic aspects of instruments and their presentation in each domain
Mixing Objectives
-Four principal objectives in mixing
~Mood
~Balance
~Definition
~Interest
-Evaluatine the quality of a mix starts with considering how coherent and appealing each domain is and then assessing how well each objective was accomplished within that domain
Mood
-Concerned with reflecting the emotional content of the music in the mix
-Of all the objectives, this one involves the most creativity and is the most central to the project
-Applying techniques and tools in a way that are not congruent with the project can destroy the project's emotional qualities
-Mixing engineers must be careful not to apply genre specific techniques to a project that is not from that genre
Balance
-Balance normally refers to the balance in three domains
~Frequency balance: a lack of frequency balance can leave the mix wanting in an area
~Stereo image balance: an imbalance can throw off the stereo image
~Level balance: contains relative and absolute balance
-Any of these balances may be traded for interest or a creative effect. And usually happens for a short period
-Definition
~How distinct and recognizable sounds are
~Can be used to describe instruments as well as reverbs
~Not every mix requires a high degree of definition
~Also deals with how well each instrument is presented in relation to it's timbre
Interest
-Arrangement changes and many other production techniques all result in variations that grab the attention of the listener
-A mix has to retain and increase the intrinsic interest in a production
-A mix can add or create interest
-Ways of achieving interest
~Automate levels
~Have a certain instrument play in certain sections only
~Use different EQ settings for the same instrument and toggle them between sections
~Apply more compression on the drum mix during the chorus
~Distort a bass guitar during the chorus
~Set different snare reverbs in different sections
Frequency Domain
-The Frequency Spectrum
~20Hz to 20kHz
~The four bands in the spectrum include low, low-mids, high-mids, highs
~The cutoffs are 250Hz, 2kHz, 6kHz
-Frequency Balance and Common Problems
~Achieving frequency balance is a prime challenge in most mixes
~The most common problems with frequency balance involve the extremes
~Most novices will come up mixes with these problems because of lack of experience in evaluating these ranges
~Too many highs can make the mix brittle and cause problems in the mastering stage
~Lows are usually the hardest to stabilize and should be payed close attention to when checking the mix on different systems. It is more common to have an excess of lows than a lack of them
~The low-mids present problems because most instruments have their fundamentals in this range
~Separation and definition are achieved with work done in this area
-Separation
~In our perception we want each instrument to have a defined position and size on the frequency spectrum
~Separate the instruments first then see if there are empty areas
-Definition
~Instruments may mask one another and therefore one instrument may lack definition
~We can equalize various instruments to increase their definition
~At the same time, equalization may be used to decrease definition
-Mood
~Lo frequency emphasis relates to a darker, more mysterious mood
~High frequency emphasis relates to happiness and liveliness
~Power is usually linked to the low frequencies, but can be achieved in all areas
~Equalization can be used to convey certain moods when applied to instruments
-Interest
~Can be achieved by momentarily cutting all of the low frequencies and then bringing them back in
~Shelving EQs can be automated to bring up the lows during an exciting section
~Brightening a chorus section may also add interest
Level Domain
-Levels
~The question is not how loud, but how loud compared to other instruments
~Setting relative levels between instruments is highly a matter of taste
~Only fledgling mixers may truly get this wrong, adjusting levels comes naturally
~Adjusting levels dues not only involve moving faders, it also includes EQ and compressor settings
~Getting exceptional relative level balance is an art and requires practice
~Tip: To make everything louder in the mix, bring up the monitor level
-Levels and Balance
~Only very few mixes require all the instruments to be equally loud
~Setting relative level balance is determined by importance
~Steep level variations of the overall mix might be a problem
~Not automating the levels of a particular mix may cause the levels to rise and drop in disturbing ways
-Levels and Interest
~Some degree of level variation is needed to promote interest
~Even if we do not automate levels, the production itself may lend itself to louder choruses and softer verses
~Additional spice can be added by additionally automating these naturally occurring peaks and valleys through out the project
~The options are endless
~Level automation is used to preserve overall level balance, but also to break it
-Levels, Mood, and Definition
~We should always ask ourselves what is the emotional function of each instrument and how can it enhance or damage the mood of the song, then set the levels respectively
~The louder an instrument is the more defined it appears in the mix
~However, an instrument with frequency deficiencies might not benefit from a level boost - it may still be undefined, just louder
~Bringing up the level of an instrument can cause a loss of definition in another
-Dynamic Processing
~Noticeable level changes within the performance of an instrument are of equal importance (micromixing)
~Level variations in a performance break the relative balance of a mix
~We can control this by using gain-riding, compression or both
~At the other end of the spectrum, over compression can lead to a lifeless performance and can end up sounding unnatural
Stereo Domain
-It's the imaginary space listeners perceive between left and right speakers. The stereo image can be seen as the whole mix or an individual instrument.
-We control the stereo's panoramic aspects with pan pots and reverb.
-Localization: aware where the sound is coming from (L or R)
-Stereo width: how much the stereo image occupies the sound
-Stereo focus: how focus the sounds are
-Stereo spread: how the instruments are spread across the stereo image
Stereo Balance
-Need to have balance between the left and right. If we pan most instruments to one side the mix can turn to one speaker. Balance also allows for the introduction of new instruments, giving it space to be heard
-Stereo frequency imbalance: even though the frequency balance between left and right are rarely identical, sometimes having too much variation can cause image shifting
-Tracks or instruments with frequency variations can be put in the center to avoid shifting
-Stereo spread imbalance (I-Mix): when little of the sides are used to spread instruments and make monophonic I-Mix. I-Mix is used for hip-hop. The mix can also have a V-Mix, a weak center with more on the sides
-W-Mix: a combination of a V and I-Mix. The mix can be unpleasant due to intense focus on the right, left, and center and nothing in-between
-Stereo Panorama can also have a specific area lacking from music and be empty. This can be fixed by panning
Stereo Image and Other Objectives
-Stereo Image: can promote moods making it more powerful with making the panning less natural
Depth
-What instrument is behind or in front of the other instrument. A mix can have instruments close to each other making it tight, or have instruments with depth to make it spacious
-Reverb helps us create a sense of depth in the mix
-Coherent depth: making a depth a field and retaining an instrument's definition can be a challenge. Depth variations are uncommon unless done as an artistic method
-Sonic depth in nature is to re-create natural or artificial appealing sound
-Every pair of speaker system are integral depth, with sounds from the sides appearing closer than the sounds from the center
Chapter 7: Monitoring - How did we get here?
Sound Reproduction
-An accurate monitoring environment is absolutely necessary for mixing. Specific room construction, room treatments and monitor placement make an ideal monitoring environment
-Loudspeaker drivers displace air in response to an incoming voltage that corresponds to a waveform. Two-way loudspeakers require two or more drivers, one for the low frequencies and one for the highs
-Low frequencies require a large rigid cone capable of displacing large masses of air
-Higher frequencies require a small diaphragm than can move rapidly
-A crossover frequency splits the signal usually around 2kHz and sends one through a high pass filter to the small diaphragm, the other through a hpf to the small diaphragm, the other through a lpf and onward to the large cone
Aurotones, Near Fields and Full Range Monitors
-Aurotones are monitors with a well defined mid-range response and are used by mixing engineers to simulate a domestic listening environment
-Near-Field monitors provided clarity that aurotones and main speakers previously could not. The majority of mixes are done using near field monitors.
-Full range monitors reproduce the complete audible range of sound, 20Hz-20kHz. The provide higher resolution than near-field monitors. Usually used to level out th low end of a mix
Choosing Monitors
-Active vs. Passive
~Passive: no integrated amplifier and must be fed with a speaker level signal that was amplified by an external amplifier (not inside the monitor)
*When connecting an amp to a monitor, the cables should be high quality and as short as possible. Should be the exact same length or else a possible stereo imbalance between speakers. The wiring must be properly placed (+ to +, - to -) or there will be inverted phase
**To check for incorrect connection, check kick to see if cone pulls inward rather than pushing outward
=Powered speakers: has a built-in amplifier and its inputs are XLR or 1/4"
~Active: does have its own amplifier. They are shielded to drain magnetic interference between speaker and CRI computer screens screens and some have A/D converters to prevent analog interference
~There is no guarantee that either monitor will perform better
Enclosure Designs & Specifications
-Dipole design: studio monitors with holes on their enclousure (vents, ports)
~Provide extended low frequency response but it is inaccurate
-Monopole design: monitors with no ports and air within the enclosure. They provide better dampening of woofer cone resulting in tighter bass response
The Room Factor
-"No pair of speakers sound the same, unless placed in the same room." High priced, high quality speakers will perform poorly in an acoustically untreated room.
-Room modes: result of interaction between reflected low frequency waveforms. They travel out in a spherical fashion, and bounce off of obstacles they encounter. When sound waves act upon two parallel walls, they will bounce from one to the other, losing energy each time, until they die out. However, constant monitoring can reinforce those waves.
~Standing waves: waves trapped between two parallel surfaces that interact causing attenuation and boosting at certain frequencies
~Always cause problems at the same frequency, but the problems are not consistent throughout the whole room. Every point in the room has its own frequency response. Inhibit speaker's ability to transmit the problematic frequencies of a mix
-Treating Room Modes
~Treat the reflections in your room with diffusion and absorption
~Diffusers scatter sound energy and break up low frequency standing waves
~Use absorbers on walls to catch sound energy and break up standing waves. Absorbers are more effective at higher frequencies
-Flutter echo: the result of interaction between reflected mid-high frequency waveforms. Use absorbers eliminate flutter echo
-Early reflections: waves bouncing off of surfaces very close to the sound source, and then interfering with the direct waveforms causing phase issues
Positioning Monitors
-Where in the room?
~The positioning of the monitors is determined by the positioning of the listener
~Room modes affect the frequency response at the listening position
~Minor changes affect small rooms more, which unfortunately provides little to no option for monitor placement
-The Equilateral Triangle
~Monitors placed on two vertexes of equal height
~Use a string/measuring tape to measure equal distances between monitors and focal point
~Speakers angled towards listener, 60 degree angles
~EXPERIMENT!!!
~Moving the speakers farther apart results in a wider stereo image with less of a focal point. Whereas a more narrow image allows for more a center, but less feel of stereo
-How far?
~Monitors are close: if by moving head slightly causes dramatic difference, there will be more phase difference between left and right ears resulting in less solid stereo image
~Monitors are far: the further away they are, the wider the stereo image becomes, which can make panning decisions easier. The lower frequencies will be louder because they will bounce off back wall and return to super-impose on the direct sound
-Dampening Monitors
~Monitor Isolation Pads: Made of dense acoustic foam and metal spikes on which the monitors rest. They isolate the monitor from the stand, ensuring the monitor performs independently with no vibrations (presents vibrations onto stand as well)
Chapter 8: Meters
-Amplitude vs. Level
~Amplitude: describes changes in air pressure compared to the normal atmospheric pressure
=Microphones convert changes in air pressure to voltage
=An AD converter converts the voltage into discrete numbers
~Level: denotes the absolute magnitude of the signal
=Describing a signal's level in dB is a lot easier because using dB system is a log system, to express very large increments with very small numbers
=Level is responsible for adjusting a signals amplitude
-Mechanical and Bar Meters
~Mechanical meter: made of a magnet and coil that move a needle dependent on the level of voltage being produced
=Have a scale of around 24dB
~Bar meter: visually measured with the use of LEDs, a plasma screen, or a control on a computer screen
=Has extra indicators that display the peak hold, peak level, and clip indicator
-Peak Meters
~Peak meters: display level of signal. Used to monitor the signal when there is a pre-defined limit
=On a digital system the highest level of peak meter is 0dB. Lowest level can be dependent on bit depth
=Analog equipment can have a scale higher than 0dB
-Average Meters
~Our ears perceive loudness to the average level of sound not their peak levels
=Peak meters tell us little about the loudness
=Are often mechanical meters that employ an RC circuit (resistor-capacitor)
-Phase Meters
~VU or peak meters are provided per channel for the stereo mix
~Phase meters
=common on large format consoles
=meter the phase coherency between left and right channels
=+1 means that both channels are outputting exactly the same signal; 0 means that the channels are outputting different signals; -1 means that the channels are completely phase inverted
Chapter 9: Mixing Console
-Buses
o Common signal path where many signals can be mixed
o Typical buses
* Mix bus
* Group bus (or single record bus on CD)
* Aux bus
* Solo bus
* Processors vs. Effects
o A dry signal is the unaffected audio, while a wet signal is the affected audio
* For processors, can adjust the percentage used between wet and dry
o Processors: Made to alter the input signal and replace it with a processed signal
* Added with an insert point
* Include EQs, dynamic range processors (such as compressors, limiters, gates, expanders, and duckers), distortions, pitch correctors, faders, and pan pots
o Effects: Add something to the original sound. Takes signal and generates a new signal based on original one
* Added by using an auxiliary send
* Include time-based effects (such as reverb, delay, chorus, flanger), pitch-related effects (such as pitch shifters and harmonizers)
* Basic Signal Flow
o Step 1: Faders, pan pots, cut switch
* Each channel is fed from a track on the multitrack recorder. Signal travels from the line input socket, the fader, then the pan pot.
* Pan pots take the mono signal and send out a stereo signal, then sum it into the mix bus. Single fader alters the level of the stereo bus signal.
* Then, mix bus signal goes to two mono outputs on the back of the console (L, R)
o Step 2: Line gains, phase-invert and clip indicators
* Line-gain (or tape-trim) boosts/attenuates the level of the audio signal before it gets to the channel signal path
* Optimize the level of the incoming signal to the highest levels possible without clipping (digitally) or unwanted distortion (using analog)
* Some engineers use the over-hot input because it adds appealing harmonic distortion
* Check phasing with the phase invert
* Don't always trust clip indicators, trust your ear above all else
o Step 3: On-board processors
* Quality dictates much of a console's value
* Include hpf, EQ, basic compressors at times
o Step 4: Insert points
* Many engineers prefer to use external insert points rather than in-board.
* Lets us insert devices into the signal path
* Each external unit can only be connected to one channel, but multiple tracks can use the unit through inserts.
* Can use multiple inserts on a single track
* Importance of Signal Flow Diagrams
o Step 5: Auxiliary sends
* Takes a copy of the signal on the cannel path and sends it to an auxiliary bus
* Local aux controls are on the individual channels, containing:
* Level control: pot to control level of the copy sent to the aux bus
* Pre/post fader switch: determines if the signal is taken before or after the channel fader. Post-fader lets you control level of signal with channel fader. We often want aux effect level to correspond to instrument level, so we use post-fader feed. If pre-fader, the level is independent of the channel fader and will play regardless of channel fader level.
* Pan control: Aux buses can be mono or stereo. If stereo, pan pot available to determine how mono channel signal is panned to the aux bus
* On/off switch: Often called MUTE
* Master aux controls in master section. Same as the local ones, but no pre/post fader. Most have multiple auxiliary buses
o Step 6: FX returns (or aux returns)
* Dedicated stereo inputs that can be routed to the mix bus
* Provide quick and easy way to blend an effect return into the mix, but offer very limited functionality
* When possible, effects are better returned into the channels
* Groups
o Control grouping: Allocate a set of channels to a group, so moving one fader controls all of them
* VCA grouping: Consoles with motorized faders have master VCA group faders. Individual channel faders are then assigned to a VCA group
* Cutting or soloing VCA group affects each channel assigned to it
o Audio grouping
* To handle many signals, must sum a group of channels to a group bus (subgrouping). Group signal can then be processed and routed to the mix bus.
* Format: Channels:Groups:Mix-buses
* Ex: 16:8:2 denotes 16 channels, 8 group buses and 2 mix buses (or 1 stereo mix bus)
* Routing matrix: collection of buttons that can be situated either vertically next to fader or in its own area. Depress one, and the channel will be sent to the corresponding master group
* In-line grouping
* Ex: In a 24 track recording, drums may be ch 1-8. They are routed through the matrix to Channels 24 and 25 that now function as a group.
* Bouncing: by sending groups to yet another subgroup, we then send that final subgroup to an available audio track on the multitrack recorder
o In-line consoles
* The desk accommodates two types of signals:
* Live performance signals: Are sent to a group to be recorded onto the multitrack
* Multitrack signals: Already recorded information sent to a group
* In-line consoles and mixing
* Since the channel path is stronger than the monitor path, it's ideal to use the channel path for multitrack recording and return signals and use the monitor path for:
o Effects returns
* Ex: We can send a guitar to line 1 inputs to a delay unit and bring the delay back to the monitor path on the same channel strip/module
o Additional aux sends
* Ex: We can send the background vocals on a bus to a group, the group to the delay and/or reverb. The bus acts as a local aux send while the group channel acts as a master fader of what is being received.
o Signal copies
* Ex: Multiple snare tracks sent to a single channel through the monitor path
o The Monitor Section
* Monitor output
* To hear it, it needs to be sent from the mix output (we commonly use Pro Tools 1 - 2 on the patch bay, and MIX pressed on master channel) to the 2 Track Recorder (2TRK button on master channel). Then, to the monitor output (the actual monitors)
* Additional controls
* Cut: cuts monitor output. Feedback, noise bursts, clicks/thumps, etc.
* Dim: Attenuates monitor level by user-definable amount of dB (for audible convenience in studio).
* Mono: Sums the stereo output to mono (for phasing, masking issues).
* Speaker selection: Allows you to switch between different monitors (if you have them)
* Cut left, cut right: Mutes the left or right monitor, respectively.
* Swap left/right: Sends the left signal to the right speaker and the right signal to the left speaker (used to check stereo imbalance)
* Source selection: Determines where the speakers get the audio (mix bus, external outputs, aux bus). A sketch of these controls follows.
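A minimal sketch of the monitor-section controls above, operating on a stereo feed as numpy arrays (function and parameter names are hypothetical):

```python
import numpy as np

def monitor_out(left, right, dim_db=0.0, mono=False, swap=False,
                cut_left=False, cut_right=False):
    """Apply common monitor-section controls to a stereo monitor feed."""
    if mono:
        m = 0.5 * (left + right)      # mono sum exposes phasing/masking issues
        left, right = m, m.copy()
    if swap:
        left, right = right, left     # check stereo imbalance
    gain = 10 ** (dim_db / 20.0)      # negative dim_db attenuates (the DIM amount)
    left, right = left * gain, right * gain
    if cut_left:
        left = np.zeros_like(left)
    if cut_right:
        right = np.zeros_like(right)
    return left, right
```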
o Solos
* Two types of solos:
* Destructive in-place (when one channel is soloed, every other channel is cut)
* Nondestructive
o PFL (takes a copy before the channel fader and pan pot, so mix levels and panning aren't engaged)
o AFL (takes a copy after the fader but before the pan, so it maintains levels but not panning) or APL (takes a copy after the fader and pan, so both levels and panning are maintained). A sketch of the three copies follows this list.
* Solo safe
o Keeps a channel soloed permanently, even when other tracks are soloed.
* Which solo?
o Destructive solo is favored for mixdown because when a track is soloed, the signal level remains the same as it previously was, as opposed to nondestructive solo, where signals may drop or rise in level.
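A minimal sketch of where each nondestructive solo mode taps the channel, assuming a simple linear pan law (names and the pan convention are assumptions):

```python
def solo_copy(signal, fader, pan, mode="PFL"):
    """Return the (left, right) copy a solo bus hears from one channel.

    PFL: before fader and pan - raw level, centered.
    AFL: after the fader, before the pan - mix level, centered.
    APL: after fader and pan - mix level and panning both preserved.
    pan runs 0.0 (hard left) to 1.0 (hard right); 0.5 is center.
    """
    if mode == "PFL":
        gain, p = 1.0, 0.5
    elif mode == "AFL":
        gain, p = fader, 0.5
    elif mode == "APL":
        gain, p = fader, pan
    else:
        raise ValueError(mode)
    return signal * gain * (1.0 - p), signal * gain * p

# Destructive in-place solo is not a copy at all: it simply cuts every other
# channel, so the soloed signal keeps its exact mix level and panning.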
o Correct Gain Structure
* Make sure that the signal is at its optimum level so 100% of the signal is sent and received
* Given that most analog gear gives off unwanted noise, just use the channel fader, not the processor's output. This will prevent the noise given off by the processor from being boosted.
o The Digital Console
* ADA vs. DA
* Digital consoles have fader-layer capabilities
* Allow complete control over automating any parameter
* External processing is still possible, but it's optional; on an analog console it would be a necessity
Friday, September 10, 2010
Week of September 6, 2010
Chapter 1: Music and Mixing
Music-An Extremely Short Introduction
-The music unfolds itself with perfect freedom; but it is so heart-searching because we know all the time it runs along the quickest nerves of our life, our struggles and aspirations and sufferings and exaltations *Michael Allis
-Today, music rarely fails to produce emotions
-As mixing engineers, one of our prime functions/responsibilities is to help deliver the emotional context of a musical piece
-When approaching a mix, ask:
%What is this song about?
%What emotions are involved?
%What message is the artist trying to convey?
%How can I support/enhance the vibe?
%How should the listener respond to this piece of music?
-A mix can and should enhance the music, its mood, the emotions it entails, and the response it should incite
The Role and Importance of the Mix
-Mixing: A process in which multitrack material - whether recorded, sampled, or synthesized - is balanced, treated, and combined into a multichannel format, most commonly two-channel stereo
%A mix is a sonic presentation of emotions, creative ideas, and performance
-Everyone appreciates good sonic quality (cellphones, hi-fi systems for example)
-Live performances are final; once played, neither the music nor the equipment choices can be revised
-The mix plays an important role in an album's or track's success
-A mix is as good as its song
The Perfect Mix
-The excerpt set (20 seconds from different songs, different albums)
Chapter 2: Some Axioms and Other Gems
Louder Perceived Better
-Harvey Fletcher and W. A. Munson of Bell Labs (1933)
%Played a test frequency followed by a reference tone of 1kHz. Listener had to decide which one was louder. Wanted to see how loud certain frequencies had to be to sound as loud as 1kHz.
%Fletcher-Munson Curves
%The formal name for the outcome of these studies is the equal-loudness contours
-The louder music is played, the more lows and highs we perceive
-Two common beliefs that are false: 1) mids are key to a balanced mix at varying levels, and 2) a mix that sounds good at quiet levels is likely to sound good at loud levels.
-When quiet, we perceive more room noise. When loud, we perceive more direct sound.
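As an aside, the standard A-weighting curve (IEC 61672), which is loosely derived from one of the equal-loudness contours, can be computed directly. A minimal sketch, not part of the course material:

```python
import numpy as np

def a_weighting_db(f):
    """A-weighting in dB, roughly tracking the 40-phon equal-loudness contour."""
    f2 = np.asarray(f, dtype=float) ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * np.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * np.log10(ra) + 2.00

# The ear is least sensitive at the extremes: strongly negative at 50 Hz
# and 16 kHz, roughly 0 dB near 1 kHz.
print(a_weighting_db([50, 1000, 16000]))   # ~[-30.2, 0.0, -6.6] dB
```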
Percussives Weigh Less
-In a practical sense, sustained instruments require more attention due to their constant battle for audible dominance
Importance
-Ask yourself constantly: How important is it?
Natural vs. Artificial
-Patti Page, Confess, and the first multitracking
-'Natural' comes straight from the instrument
-It's a taste thing: you either like it or you don't.
Chapter 3: Learning to Mix
What Makes a Great Mixing Engineer?
-Vision: How do I want it to sound?
-Action: What equipment should I use? How should I use the equipment?
-Evaluation: Does it sound like what I want it to? Does it sound right? What is wrong with it?
%Approach #1: Okay, let's try to boost this frequency and see what happens.
%Approach #2: I can imagine the snare having less body and sounding crisper.
-Mixing vision ultimate question: How do I want it to sound?
-Having a mixing vision can make all the difference between the novice and the professional mixing engineer. While the novice shapes the sounds by trial and error, the professional imagines sounds and then achieves them with the mix.
-"What's wrong with it?"
-ABSOLUTELY NECESSARY
%Have the skill to critically evaluate sounds
%Master your own tools, have knowledge of other common tools
~But ultimately, jack of all trades, master of none
%Must have theoretical knowledge that can help you when a problem arises
~It is better to know what you can do, and how to do it, than to understand what you have done
%Interpersonal skills are a must
%Be able to work quickly and efficiently
Methods of Learning
-Read the damned book
-Reading and hearing
%Read the book, listen to examples
-Seeing and hearing
-DOING IT
Mixing Analysis
-Your CD collection contains hundreds of mixing lessons
-Continuously ask questions about what you're listening to
Reference Tracks
-Great idea to have a few mixes, learn them inside and out, fully analyse them, and have them readily accessible
-Using Reference Tracks
%As a source for imitation
%As a source of inspiration
%As an escape from a creative dead end
%As a reference for a finished mix
%To calibrate our ears to different listening environments
%To evaluate monitor models before purchase
-How to choose a reference track
%A good mix
~Important to choose more modern songs with newer, better equipment
%A contemporary mix
~Part of the game is keeping up with the trends
%Genre related
~Don't use a country track as a reference for a rap song
%A dynamic production
~A track that plays like three songs in one gives you that many more options
-Don'ts
%A characteristic mix
~The Strokes really only played one way; their mixes wouldn't help much unless you want to copy them
%Too busy/simple
~Too much and you can't discern what's going on. Too little and it won't be helpful
Chapter 5: Related Issues
Breaks
-Sometimes people go hours without feeling the need to take a break
-Brain-demanding, ear-demanding process
-Breaks let us forget (to a degree) the tracks we were just working on, so we can come back and work on something new
-Our ears get tired, so we bring down monitoring level
-Critical break: a day or two without listening to the song after completing it
%Clears our individual mixing from our heads, we have less sonic prejudice
Using Solos
-Novices tend to abuse/misuse solos, mixing isolated tracks without fitting them into the song
-The mix is a composite, so it's beneficial to adopt a mix-perspective approach; solos instead promote an element-perspective approach
-Lose the reference to the mix
%Compressing vox for example
%When the vox is soloed there's no volume reference; soloing the guitars along with it gives that perspective
%Panning decisions are useless unless heard with the rest of the mix
-Solos are useful, e.g. when hunting a buzz in the guitar track, but useless once it's found
Mono Listening
-Many TVs still play music in mono
-AM radio is mono; FM falls back to mono if the signal is too weak
-Large venues sum to mono because it's difficult to give a stereo feel to so many people, it's very expensive, and it makes little difference
-Mono-compatible mix: Mixes that translate well when summed to mono
%Few mixes are meant to be played in mono; the aim is to minimize the side effects that summing might bring about
-Many engineers install a single speaker for mono listening
%gives true mono (out of one speaker) rather than phantom mono (out of two speakers)
-When we sum in mono, balance of the mix changes
-Mono also helps in evaluating the stereo aspects of our mix (it can be easier to determine the authenticity of panning, various studio effects, and the overall impact of the stereo panorama)
-Many analogue and digital desks offer a mono switch
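A minimal sketch of a mono-compatibility check, comparing the level of the mono sum against the stereo original (the function and test signals are illustrative, not a class tool):

```python
import numpy as np

def mono_compat_db(left, right):
    """Level change (dB) when a stereo signal is summed to mono.
    Strongly negative values flag phase cancellation."""
    stereo_rms = np.sqrt(np.mean(left ** 2 + right ** 2) / 2.0)
    mono = 0.5 * (left + right)
    mono_rms = np.sqrt(np.mean(mono ** 2))
    return 20 * np.log10(max(mono_rms, 1e-12) / stereo_rms)

t = np.linspace(0, 1, 44100, endpoint=False)
sig = np.sin(2 * np.pi * 220 * t)
print(mono_compat_db(sig, sig))    # ~0 dB: identical channels sum coherently
print(mono_compat_db(sig, -sig))   # hugely negative: total cancellation
```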
Bouncing Submixes
-Recording the mix or submix
-Bouncing drums, for example, lets you play a single track rather than all of them; it frees up the compressors, EQ, reverb, and other tools; frees up channels (if using a board); and frees up CPU
-Options available when bouncing:
%File type: WAV (BWF if time stamp included)
%Bit depth: 32-bit float (or 24-bit integer)
%Sample rate: should match the project's sample rate
%File format: multiple-mono (stereo file stored as two mono files with a respective .L and .R extension) and stereo-interleaved (stereo file stored as a single stereo file)
%Realtime (online) vs. offline: a realtime bounce saves to disk as the session plays; it takes longer, but it's less prone to timing errors and lets you listen as it's being recorded
*When bouncing the final mix, it's best to leave any bit-depth or sample-rate conversion to the mastering stage, but if it's going straight to CD: 16-bit, 44.1kHz, stereo interleaved (sketched below)*
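A minimal sketch of that CD-delivery case using only the standard library's wave module plus numpy; the file name and test tones are hypothetical:

```python
import wave
import numpy as np

def bounce_to_cd_wav(path, left, right, sample_rate=44100):
    """Write a 16-bit, 44.1 kHz, stereo-interleaved WAV from two float tracks."""
    stereo = np.stack([left, right], axis=1)           # shape (frames, 2): L/R interleaved
    peak = max(float(np.max(np.abs(stereo))), 1e-9)
    pcm = (stereo / peak * 32767.0).astype(np.int16)   # peak-normalize, float -> 16-bit PCM
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)          # 2 bytes per sample = 16 bits
        w.setframerate(sample_rate)
        w.writeframes(pcm.tobytes())

t = np.linspace(0, 1, 44100, endpoint=False)
bounce_to_cd_wav("mix.wav", np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 554 * t))
```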
Mix Edits
-Very often we'll run into many versions of a single mix, called edits
-Common editing candidates:
%Album version: the least restricted mix
%Radio edit: heavily compressed or limited, since that's how radio will play it. Vox is commonly pushed up because radio is often heard in loud environments. Long songs are sometimes cut short with edits or fades. Radios have limited ability to translate/reproduce low frequencies
%Vox up/down: often two edits bounced with the vox differing by 1dB (see the sketch after this list)
%Club/LP versions: require centered bass content and minimal phase issues
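A minimal sketch of rendering vox up/down edits, assuming the vocal stem and the rest of the mix are available as separate signals (names hypothetical):

```python
def db_to_gain(db):
    """Convert a dB trim to a linear gain factor."""
    return 10 ** (db / 20.0)

def vox_edit(mix_minus_vox, vox, vox_trim_db):
    """Re-sum the vocal against the rest of the mix at a +/- dB trim."""
    return mix_minus_vox + vox * db_to_gain(vox_trim_db)

# 1 dB is only about a 12% change in linear gain, hence two separate bounces.
print(db_to_gain(+1.0), db_to_gain(-1.0))   # ~1.122, ~0.891
```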
Mastering
-Simply put, mastering is an art and a science reserved for the experts
-Mixes used to be submitted to mastering on 1/2" analog tapes. Later it was DATs; now it's CD/DVD or external drives
-Not professional to submit CDs because they're prone to mistakes
-A log should be included listing everything submitted
-The biggest problem in mastering is that each instance of processing affects all elements of the song
-The more the mix needs correction, the more distant the hope of perfection
-Has become common to submit mixes in subgroups (vox, rhythm, leads, residue mix [everything not in other groups])
-A full stereo mix should also be submitted, equal to the sum of all the stems with faders at unity (a quick check is sketched below)
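A minimal sketch of verifying that unity-summed stems reproduce the full mix; the residual tolerance is an arbitrary assumption:

```python
import numpy as np

def stems_match_mix(stems, full_mix, tol_db=-60.0):
    """True if the stems, summed at unity, equal the full stereo mix
    to within a residual `tol_db` below the mix level."""
    residual = full_mix - np.sum(stems, axis=0)
    mix_rms = np.sqrt(np.mean(full_mix ** 2))
    res_rms = np.sqrt(np.mean(residual ** 2))
    if res_rms == 0.0:
        return True
    return 20 * np.log10(res_rms / mix_rms) < tol_db
```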
For class on Wednesday, we gave the presentations on these first five chapters and went over the mixes we had worked on the week before. Some of the notes from the session:
-Fade the edits that we make to avoid the possibility of pops (a fade sketch follows these notes)
%Apple+F after selecting everything makes it faster to apply fades
-Exaggerate the lows and highs
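A minimal sketch of the edit-fading idea: short linear ramps at a region's edges so the waveform never jumps discontinuously at the edit point (the fade length is arbitrary):

```python
import numpy as np

def fade_edges(region, fade_samples=64):
    """Apply a short fade-in/out at a region's edit points to prevent clicks."""
    out = np.array(region, dtype=float)
    ramp = np.linspace(0.0, 1.0, fade_samples)
    out[:fade_samples] *= ramp
    out[-fade_samples:] *= ramp[::-1]
    return out
```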
Monday, August 30, 2010
Week of August 19, 2010
This Monday, we chose groups for the rest of the semester. Our first picks didn't work out for everyone, so we re-chose groups after some heavy whiteboarding, and I ended up with Andy. We're group 4, and our presentation for next week is chapter 5. This also means we are meeting for labs on Tuesdays and Thursdays from 2 - 4. I'm pretty stoked; we work fairly well together.
Save As -->*dropdown menu* mpa 308 raw tracks 1 -->Group 4-->Group_4_Raw_Tracks_1
To get to the tracks that we will be editing and mixing this semester, we must go through the csumbuser folder, then open the mpa 308 folder, then the raw tracks 1 folder. There we will find the ptf file we are working with this semester.
For right now, we are only working through Pro Tools and bypassing the board to get the Pro Tools session organized with some light mixing. So, for patching, we will only need to use the Pro Tools 1 & 2 out into the 2 Track.
These are the notes that I have written down:
TO DO BEFORE ANY FINE-TUNING:
-Give the tracks meaningful names so you can find them more quickly
-Clean up heads and tails for redundant sounds
-Make location points at top (enter-->name it-->enter). Give them meaningful names (verse 1, good chorus, weak bass, etc.)
-Command + E to delete everything before selected point on audio track
-Remember, tracks are there for choice, and we don't have to use all of them
-After drums are cleaner, group them (Command + G) and call them something distinct to differentiate between other drum groups you may use
-Strip Silence (Command + U) all the tracks you can to get rid of repetitive noise
-R & T to zoom
-Lead vs. Other Guitars:
>Lead centered by itself
>Lead centered with another guitar centered
>Lead centered with the other centered
>Lead centered with both guitars centered
>Lead centered with guitar 3 hard right
>Lead centered with guitar 3 hard left
>Lead centered with guitar 4 hard right
>Lead centered with guitar 4 hard left
>etc...
-Get two new stereo auxiliary channels up (Shift + Apple + N)
>On one channel, input and mix at 100% with reverb send
>Command and click the solo button to safe solo (the track will then always be solo'd)
>Make the other stereo aux channel with a delay send
-Send the vox tracks to the delay aux channel. Pan the lead vox and background vox to right and left (respectively) and send the delayed lead vox to the left and the delayed background vox to the right. It gives the effect of many more singers.
-Use an HPF on the background vox until they aren't stepping on the lead vox
-Add an HPF on the verb track, around 150 Hz, so there won't be a loss of distinction on the lower notes (a sketch follows this list)
-Complementary EQ to find the exact effect you're looking for
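Two minimal sketches for the items above: the delay trick for spreading vocals, and a generic high-pass filter for the reverb/background-vox sends. Both are illustrative stand-ins, not the plugins used in the lab, and all parameter values are assumptions:

```python
import numpy as np
from scipy.signal import butter, lfilter

def delay_spread(vox, fs=44100, delay_ms=30.0, mix=0.5):
    """Dry vox on one side, a short delayed copy on the other:
    the 'many more singers' effect from the notes above."""
    d = int(fs * delay_ms / 1000.0)
    delayed = np.concatenate([np.zeros(d), vox])[: len(vox)]
    return vox, mix * delayed          # (left, right)

def highpass(signal, cutoff_hz=150.0, fs=44100, order=2):
    """Butterworth HPF, e.g. ~150 Hz on the verb send to keep low notes distinct."""
    b, a = butter(order, cutoff_hz, btype="highpass", fs=fs)
    return lfilter(b, a, signal)
```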
So far, as Monday classes go, it was a little atypical since he won't be doing these kinds of lessons for much longer. As class dynamics go, they're just like they were in 307, but we're a little more experienced than before. That, and we're not going to be doing really any recording in this class.
Tuesday's lab, Andy and I worked on mixing the first song. We added a lot of EQ, which was expected. We didn't use the Bomb Factory compressor as often, but we did apply it to a few of the drum tracks, like the snare and toms, and some of the guitars, including the lead guitar. It was helpful to go through all of the EQ and compression practice. It is also a little bit less stressful since the music isn't our own. So, the weird vocal tracks and inconsistent kick aren't ideal, but we can feel better since our job is to improve it.
In Wednesday's class, we went right into the recording room to listen to how well our groups worked in the previous lab. Throughout the two hours, we continued to add to our list of things to do for our mixes:
-Group different instruments together (bass drum, snare, OH) and check for phasing issues. The more combinations the better.
-In most rock music, there are a lot of tonics and fifths, then eventually a third.
-Watch for extra noise in the tom tracks and take it out when able.
-Make sub-group with toms to get rid of the filler faster (if working with mono tom tracks and not the summed stereo track)
-Make sure the mid-range is scooped when it is applicable (when there are too many voices in the mid-range)
-Highlight all groups, bring them all the way down, then de-select it all, then go by groups. It makes the balancing easier to do.
-Shelving is good for guitars because they're all in the same tonal range.
-80-160Hz has a lot of power behind it on guitars
-In the compressor, 1 is slow and 7 is fast (for attacks and releases)
-For an extra rude sound, put the attack at 1 and the release at 7 (a generic compressor sketch follows this list)
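A minimal generic compressor sketch showing what the attack/release knobs control: the time constants of the envelope follower. It uses milliseconds rather than the 1-7 knob markings on the unit in class, and every parameter value here is an assumption:

```python
import numpy as np

def compress(signal, fs=44100, threshold_db=-20.0, ratio=4.0,
             attack_ms=10.0, release_ms=100.0):
    """Feed-forward compressor: a fast attack grabs transients sooner,
    a fast release lets the gain recover sooner (the 'rude' setting)."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(np.asarray(signal, dtype=float))
    for i, x in enumerate(signal):
        level = abs(x)
        coeff = atk if level > env else rel       # attack when rising, release when falling
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20 * np.log10(max(env, 1e-9))
        over_db = max(level_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio)  # reduce anything over the threshold
        out[i] = x * 10 ** (gain_db / 20.0)
    return out
```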
We also received the assignment that will be due next week. We have to mix the entire song in mono while checking for phasing issues, being selective of tracks, and using subtractive EQ to bring out the most important instruments. Also, we will have to take the mono track and 1) pan the guitars from center to left and right and back again for different sections, 2) create aux reverb and delay to spread vocals and back up vocals, and 3) send the snare to an aux and create a different sound, parallel to the first. Also, we have to transcribe the song. And, it's all to be done by Wednesday.
Thursday's class, the biggest thing to mention is the problems Andy and I were having with the stereo tom track. We both agreed that it seemed to work out the best for the mix, and we started taking out a lot of the extra noise that wasn't necessary. Unfortunately, there were a lot of other things going on in the tom tracks other than the toms (namely the cymbals) that were extremely difficult to hide/take out. When we would be playing the song, there was sometimes an increase in the volume of the cymbals. And it would generally be in the middle of a cymbal crash, so it was really hard to take out. We worked with the EQ and compression a lot to try and get it out as much as we could, as well as using the OH tracks. We're still going to be working at it a bit for next lab, but we do need to move on with the rest of our work. We finished the mono mix and everything that came with it, and we're hoping to finish up the last half of it on Tuesday. Andy and I also worked on the transcription on Friday, and we're going to work on it all day on Monday and Tuesday.