Basically, this week is all about compression.
-controls maximum levels and maintains a higher average loudness (soft parts become louder, loud parts become relatively softer)
-specialized amps used to control dynamic range (the distance between the loudest and softest parts of a waveform)
-Flutes: difference between loudest and softest is about 3dB
-Voice: difference between loudest and softest is about 10dB
-Drums: difference between loudest and softest is about 15dB
-our ears act as compressors, responding to average level of sounds
-compressors have detector circuits built in that respond to level, much like our ears do
-brick wall limiting: pre-determined level is the peak, absolutely nothing above it
-Multiband compression: compress different frequency bands separately
-optical compressors: use photocell resistors. The audio drives a light bulb; the louder the signal, the brighter the light, and vice versa. The light and photo resistor sit together in a light-proof box (super simple explanation). Then makeup gain brings the soft parts back up.
-Field Effect Transistor (FET): the first design to emulate tubes and the way they worked. Fast, clean, reliable.
-Voltage Controlled Amplifier (VCA): the most versatile and flexible of the compressor types, with a GREAT reaction time
-Vari-Gain: pretty much a catch-all for gain-control circuits that don't use any of the three designs above
-Digital Compressors: exaggerated versions of the analog designs, basically. You can get precision you can't get with an analog compressor. Some have built-in delay.
-Ratio: expresses the degree to which the compressor reduces the dynamic range; it's the ratio of the level increase coming in to the level increase going out. 2:1 means that for every 2 dB coming in above the threshold, only 1 dB comes out above it (so at 4:1, a signal 10 dB over the threshold comes out just 2.5 dB over). Going over 10:1 turns the compressor into a limiter.
-Threshold: the level of the incoming signal at which the compression amplifier turns from a unity-gain amplifier into a gain-reducing compressor. The higher the threshold, the less of the signal gets compressed; there's no effect on signal below the threshold. Once the threshold is reached, the dB over it are reduced based on the ratio. The knee is how the audio crosses the threshold: a hard knee is sudden and abrupt, while a soft knee brings the compression in more gradually around the threshold. Changing the knee changes the envelope (ADSR), more specifically the A and R.
-Attack time: the time it takes to compress after the threshold is reached. Ranges from under 1 ms to over 100 ms. Affects tone in terms of brightness: a fast attack clamps down on the signal. Depending on how you adjust it, the compressor can act like an EQ or a reverb.
-Release: the time it takes the compressor to return to unity gain once the signal drops below the threshold. A longer release creates a darker sound; a shorter one sounds brighter. The range of release times is about 20 ms to 5 sec, and the right setting really depends on tempo, program material, and instrument. (The sketch below pulls all of these controls together.)
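To make sure I actually understand how ratio, threshold, knee, attack, and release interact, here's a minimal Python sketch of a feed-forward compressor. This is just my own toy model of the math from class, not how any real unit (optical, FET, VCA, or otherwise) is wired, and all the function names and default numbers are mine:

```python
import numpy as np

def gain_curve_db(level_db, threshold_db=-20.0, ratio=4.0, knee_db=6.0):
    """Static curve: output level in dB for a given input level in dB."""
    over = level_db - threshold_db
    if over <= -knee_db / 2:      # below the knee: unity gain
        return level_db
    if over >= knee_db / 2:       # above the knee: fully compressed
        return threshold_db + over / ratio
    # inside the soft knee: blend the compression in gradually
    return level_db + (1 / ratio - 1) * (over + knee_db / 2) ** 2 / (2 * knee_db)

def compress(signal, fs, threshold_db=-20.0, ratio=4.0,
             attack_ms=10.0, release_ms=100.0, makeup_db=0.0):
    """Compress a mono float signal in [-1, 1] sampled at fs Hz."""
    signal = np.asarray(signal, dtype=float)
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    gain_db, out = 0.0, np.empty_like(signal)
    for i, x in enumerate(signal):
        level_db = 20 * np.log10(max(abs(x), 1e-9))             # detector
        target_db = gain_curve_db(level_db, threshold_db, ratio) - level_db
        # attack smoothing when gain reduction increases, release as it lets go
        coef = a_att if target_db < gain_db else a_rel
        gain_db = coef * gain_db + (1 - coef) * target_db
        out[i] = x * 10 ** ((gain_db + makeup_db) / 20)         # apply gain + makeup
    return out
```

Sanity check against the notes: with a threshold of -20 dB and ratio 4:1, an input at -10 dB (10 dB over) maps to -17.5 dB (2.5 dB over), and pushing the ratio way past 10:1 flattens the curve out like a limiter (brick-wall limiting would be ratio = infinity with a hard knee).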
Tuesday's lab I had the first hour to myself, so I worked on grouping and recording stems some more, just to make sure I know what's happening. Then, when Andy came back in, we started experimenting with the outboard gear, and found some really interesting sounds using the Eventide and Distressor.
Wednesday we went into the studio and practiced using the Millennia and the Distressor, incrementally changing the attack, release, ratio, threshold, and output gain. The patching we used: bring the track to be treated up on the board through the Line 1 inputs, run the channel insert send into the Distressor, and then send it to a Pro Tools input on a new audio track. Record, and compare. We did a lot of that, and it was a lot more interesting and fun than I thought it would be.
Thursday we worked more with the outboard gear. We did some critical listening with the compressor, but we had enjoyed the strange mixing we did on Tuesday, so we decided to do it again. Not as many cool sounds this time, but we learned a lot and had a good time.
Friday, October 29, 2010
Monday, October 18, 2010
Stems, Subgroups, Etc.
This is basically our week:
To Mix in the Box:
A 1-2 out of Pro Tools into 2 Track
Creating Submixes in Pro Tools
~Group instruments together (bass and guitars, vox and bgv, drums, keyboards & whatever, 1 channel for Aux Sends) or (vox and bgv, bass and drums, keys and guitar)
~Can create a Stereo Aux Channel, or, to record them as stems, a stereo audio channel
~Set the Output Audio Path Selector to that particular bus
~Makes it easier to add inserts and effects, easier to navigate, and decreases how much we have to think about individual things
~Mastering engineers are now accepting stems
Drum and bass tracks output to bus 5-6 (and eventually out A 1-2); guitar tracks output to bus 7-8 (and out A 3-4). Then create two new stereo Aux tracks and send the drums and bass to one, the guitars to the other: on the Aux channel with the drums and bass, set the input to bus 5-6; on the Aux channel with the guitars, set the input to bus 7-8. Then create new stereo audio channels with their inputs set to bus 5-6 and bus 7-8. These are the in-the-box stems.

Then we add two more corresponding audio tracks to go out of the box. We set their outputs to whichever Pro Tools outputs we want on the patch bay. The patching so far would be Pro Tools outputs 1-4 into Line 1 Inputs 37-40 (for convenience's sake). Pan 37 and 39 hard left and 38 and 40 hard right, and put the faders at unity gain. Go to the top of channel strips 37-40 at the group bus, press the 1-2 button on the drums and bass channels and the 3-4 button on the guitar channels. Then go to group bus 1-2 (the red faders), turn them up to unity with the gain fully clockwise, and do the same with group bus 3-4. We also pan group pans 1 and 3 hard left and group pans 2 and 4 hard right. To give control to the Master fader (now that it works), just push the MIX button, and NOT the 2TK 1 button (like we did before). So we sent the audio channels from Pro Tools into tracks on the board (as a subgroup), then sent the board channels into monitor groups (another subgroup).

Next, we can record stems back into Pro Tools by creating two new stereo audio channels in Pro Tools and setting their inputs to something like B 1-4 (1-2 for drums and bass, 3-4 for guitars). On the patch bay, that is group outputs 1-4 (again, 1-2 is drums and bass, 3-4 is guitars) into Pro Tools A 9-10 for drums and bass and A 11-12 for guitars. Then we record-enable the in-the-box and out-of-the-box stems, record them, and we have both sets of stems. The little routing map after this paragraph is how I keep all of it straight.
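Mostly to keep the routing straight in my own head, here's the whole patch written out as a little Python table. The dictionary keys are just labels I made up, not anything Pro Tools or the board actually calls these:

```python
# My summary of the stems patch; the key names are my own labels.
routing = {
    "in_the_box": {
        "drums_bass": {"track_outputs": "Bus 5-6", "aux_input": "Bus 5-6",
                       "stem_track_input": "Bus 5-6"},
        "guitars":    {"track_outputs": "Bus 7-8", "aux_input": "Bus 7-8",
                       "stem_track_input": "Bus 7-8"},
    },
    "out_of_the_box": {
        "drums_bass": {"pt_hw_outs": "A 1-2", "line_1_inputs": (37, 38),
                       "group_bus": "1-2", "back_into_pt": "A 9-10"},
        "guitars":    {"pt_hw_outs": "A 3-4", "line_1_inputs": (39, 40),
                       "group_bus": "3-4", "back_into_pt": "A 11-12"},
    },
    "panning": {"hard_left": [37, 39, "group 1", "group 3"],
                "hard_right": [38, 40, "group 2", "group 4"]},
}

# Print the signal flow for each stem so I can double-check it against the board.
for section, entries in routing.items():
    print(section)
    for name, hops in entries.items():
        print(f"  {name}: {hops}")
```

Writing it out this way made it obvious that the two stem pairs follow exactly the same path, just shifted over by one pair of buses.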
Monday and Wednesday were dedicated to understanding this process. Tuesday in lab, Andy and I mixed Jonsey through the box with the methods listed above and recorded stems. Basically I did it, with Andy asking questions throughout. We undid it, and I re-did it again. Then we moved on to cleaning up Raw Tracks 4, which was pretty much an awful song. There is no other way to describe it. I'm not attacking it, exactly, just stating a fact. It was tracked horribly, the drummer never found a groove, the vocalists couldn't keep in time to save their own lives, and I don't even want to talk about the guitar tracks. But we still cleaned it up as much as we could. Thursday's lab we did the same thing as Tuesday's lab, but for pretty much an hour and a half. Then, so our heads wouldn't explode, we looked at our 306 projects on the MTA 980. Andy's was much more impressive than mine, as I don't have the MIDI experience that he does. His flanging, frequency sweeps, and crazy automation were picked up very well through the board. Once I change the velocities of some of the drums, mine should feel more natural. Adding effects like reverb would also help.
Friday, October 15, 2010
Week of October 10, 2010
In class we watched a DVD on the making of Bjork's album Medulla, released in 2004. The album is composed almost entirely of vocals. After the fall of the World Trade Center, she felt that her music needed to take a more primitive, primal style, and after giving birth to her daughter Isadora, she wanted an album whose message was one of flesh, blood, and bone, which is where the name Medulla (Latin for marrow) originated. The engineer on this project, Nigel Godrich, had been working with Bjork on her albums for a decade before, so it was almost natural for her to choose him. Bjork knew his style of mixing (and likewise, Godrich knew how to translate her quirkiness), and they were able to make a great album.

Bjork enjoys enlisting the help of others from time to time, especially if she knows someone who can bring something to the table that she can't bring herself. Every guest artist that appeared on this album was found by surfing the web. How Bjork works (as far as I can tell) is she puts everything that she can offer into an album, but she is also enlightened enough to know when others have capabilities outside her own range. One of these was Rahzel, formerly of the Roots. She hadn't heard of him before, but was looking for more percussive elements of the vocal spectrum. She resisted calling him at first because she felt that beat-boxing would be too easy, less an artistic decision and more a fast solution. But after doing all she could with Godrich, she called him in to help on the album. His beat-boxing prowess brought a lot to the album as a whole, and even Bjork ended up enjoying the beat-boxing. Another artist she found was Dokaka, an internet sensation for his vocal covers of other artists' songs. Other guest artists included Tagaq, Mike Patton, and Shlomo. Each of these artists was picked especially for their talents, and Bjork took advantage of those talents as often as she could during the recording of Medulla. An Inuit singing game is also featured in one of the songs, in which two women try to recreate the sounds around them and make each other mess up.

Bjork was born in Iceland in 1965, where music is an integral part of the educational system. Everyone is brought up with a background in music, so musicians like Bjork may be more common there than they are here in the US. She may have an extensive education in music, but during the DVD her descriptive directions never really seemed attainable, like when she asked for something to be more "waaaaah" (putting her hands together and apart, as if stretching something). The thing about an artist like Bjork, though, is that she has proven time and time again that she knows her stuff; it's everyone else's fault that they can't understand her.
Friday, October 8, 2010
Monday we were supposed to have our written assessment, but instead we went over the equipment we should be getting for a home studio. A lot of what was said was just numbers, but after looking them up, it makes a lot more sense. I'm most likely going to be getting Reason, so the Digidesign MBox isn't going to help me at all. But what sounds like a good combination of gear is the Behringer ADA8000 as a mic preamp and the RME Fireface 800 as an interface. Once I get the money, I'll start planning out my buying schedule. But, until then, I'll keep doing research on it.
Tuesday's lab, Andy and I chose to do the mono mix out of the board. I did most of the prep work, and we made it through with minimal problems. We used both Distressors, both Millennias, the PCM91, and the SPX90, so we used all but five cables or so. One thing we need to figure out is why all of our board mixes are so loud; that's something we may need to ask about in the near future. The tracking for this song is much better than the songs by Apparently Nothing. I sort of wish we knew who performed it. The drum tracks are much more even, so strip-silencing them was much easier to do. We also moved a lot of the electronic drums around so they weren't so imposing on the rest of the mix.
Wednesday we had our assessment that was originally scheduled for Monday. The first two questions I had no problems answering: the first was in regards to phase and the Haas trick, and the second was about monitors, correct room treatment, and the repercussions of incorrect treatment.

The third question asked about what Izhaki believes are the four objectives for recording engineers. They are to capture the mood of a piece, mostly by knowing what the mood of the piece is supposed to be. Another key element is to balance the piece correctly; the kick is much more important in a heavy rock song than in a folk song. The third objective is definition, which is achieved through EQs and other processors. And, of course, interest needs to be captured. If the piece varies dynamically, instrumentally, and time-wise, it will be a much more interesting piece to listen to.

The fourth question was about Izhaki's five mixing domains that all mixing engineers have to work in: time, frequency, level, stereo, and depth. Time, because the tracks being mixed have a beginning, middle, and end. Frequency, because different instruments reside in different frequency ranges, and some frequencies of certain instruments need to be attenuated or amplified. Level, because how loud an instrument is always matters in relation to the other instruments. Stereo, because the positioning of each instrument from left to right is important to how it will be heard and interpreted. Depth is important for the same reasons the stereo image is important; putting the drums right up front isn't a smart move, as everything else will pale in comparison.

I should have separated out my study techniques, though, because I was using acronyms to remember all of this, and I was getting the acronyms mixed up. There were the mixing objectives (MBDI), the domains (FTLSD), the things to consider when mixing (SEMEL), and others. I'll have to separate them out better for myself next time.