The Basics

By Paul Kimbrel

Guerrilla Recording

The home digital recording revolution is a fairly recent and remarkable phenomenon. Until recently, high quality recording equipment was available only to rich and established artists. The most a peon artist could hope for was a 4-track recorder and a cassette tape. The idea that a person could “casually” enter the “professional sound” arena was unthinkable. If you wanted to become a professional audio engineer, you’d have to pay some serious dues as an intern for a large record company and darn near grovel to even sit behind a behemoth recording console.

But the digital revolution changed everything. Specifically, the personal computer. As computers got faster and sound card quality got better, the idea that a peon artist could record a high quality demo – or even a final album – became a reality.

My personal journey began in college when I realized that the biggest part of my recording-studio-to-be was the computer I already owned. I was 90% of the way (or so I thought) to having a professional recording studio. Sure, I needed those pesky mics and monitors, but my $5 Labtec microphone and gaming speakers should be enough, right?

Actually, yes, they were. But the quality was just a notch above those ancient 4-track recorders. Computers take the quality of the storage medium up by leaps and bounds, but they do nothing for the quality of the incoming signal or the outgoing reference. I recorded my first song with that junk and got exactly what I put into it.

My next step was to start upgrading a piece at a time. I got a better microphone. I got a better interface from the microphone to the computer. I got a better computer. I got better software. I got better monitors. Then I got a better room with sound insulation on the walls.

Bottom line… don’t wait until you have all the right gear to record. Use what you have. Borrow what you don’t have. Make music. Can’t play? Find a friend who’s itching to record a song. That’s what I did, and that’s how I met Chad Lemons, and how this whole studio thing got started.

My first recording computer used a Gravis Ultrasound sound card as the audio interface. The details differed, but it was functionally no different from any SoundBlaster or built-in sound interface you’ll find on a computer today. It had a line-in and a line-out. Yes, it had a mic-in, but don’t use those. They suck.

Hooking up my $5 Labtec mic was easy. It was built to be plugged into a computer. However, when I borrowed my first “real” studio microphone (a Shure SM-58), I had a small problem hooking it up to the computer. The next section (“Hook-up”) deals with how I did it. Suffice it to say, it involved adapters.

Adapters will get you miles down the road, but they will also drive you insane. A good studio always has a plethora of adapters to go from any connection to another. My next upgrade, though, was a mixing board. I built some connectors to go directly from my mixing board to the computer without adapters, and I could plug my microphones directly into the mixing board. I didn’t actually mix with the board per se. I simply used it as a preamp to give me more control over the mic signal going into the computer. It also allowed me to plug other things in, like a keyboard, without going through adapter hell.

Still, I was using those gaming speakers. I was plagued by the fact that I never had the sub-woofer properly adjusted on those things. One day, I’d have no bass, so I’d crank all the bass frequencies of my tracks. Then I’d pop them in my car stereo and watch woofer cones come flying out of the dash. Other days, I’d have the sub-woofer cranked up too high on the computer. I’d play my mixes on my HiFi and find that the sound had no depth, no oomph.

My next upgrade was my monitor speakers. I put some money into those things and it’s paid off tremendously. I also purchased a good set of reference headphones. Though I hate to admit it, I’ve mixed more with my headphones than with my speakers. That’s not always a good idea, and I’ll touch on why in my mixing tutorial.

Still, I’ve found that my studio didn’t magically appear overnight. And it’s still not where I want it to be. I’d like to upgrade my computer interface to be able to record 8 tracks at once without hot-wiring my two mixing boards. I’d like to have a Neumann microphone. I’d like better reference monitors. I’d really, really like a keyboard. But no one ever got to where they were headed by starting at the end.

Roles

Before you can really dive into the details of sound engineering, you first need to have the big picture. What is the “process” of recording sound? What does it mean to “engineer” sound?

Audio engineering is the practice of collecting sound, storing it, and editing it to achieve the desired result. The engineering role, specifically, involves the actual capture of a performance. As the engineer, you are responsible for hooking up the microphones, the recording gear, the effects units, etc. You are responsible for ensuring the performance is properly captured and saved. You are responsible for ensuring the recorded performance is not lost and is properly processed after the fact. There are several sub-roles within engineering, and during a recording session, you may take on one, or all, of these roles. You might even take on a side role of “producer” by orchestrating arrangements and stylistic approaches used by the performer.

Recording Engineer

The recording engineer is responsible for the actual capturing of sound. You are responsible for ensuring the raw performance is captured with all its nuances – good or bad. As long as you capture enough information for the next engineer to process, you’ve done your job.

The key to performing this job well is to understand this fundamental principle: once added, you cannot remove; once removed, you cannot add.

Here’s the principle in action. It’s very tempting to add effects, compression, or even “intonators” to a vocal track as it is being recorded. However, if you make any modification to your vocal signal on its way to the recorder, that modification gets recorded forever. If you wake up the next day and find you hate all that reverb, or the intonator was off on a few notes… you cannot undo it. It’s set in stone.

Likewise, if you put an equalizer on your vocal signal on the way to the recorder, you may be able to filter out some unwanted frequencies before the signal gets recorded. You might create a vocal sound that you love by filtering out the 1 kHz range of the vocals. But, again, you might wake up the next morning and find that you were wrong. Once removed, sound cannot be added back in. Sure… you can try and boost that which you removed, but if it’s not there, it won’t magically reappear. It’s set in stone.
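To make the “once removed, you cannot add” half of that principle concrete, here is a minimal sketch – not from the original article, and assuming Python with the numpy and scipy packages – in which a hypothetical “vocal” has its 1 kHz content notched out on the way to the recorder, and a later boost at 1 kHz cannot bring it back.

    import numpy as np
    from scipy import signal

    fs = 44100                        # sample rate in Hz
    t = np.arange(fs) / fs            # one second of audio
    # Hypothetical "vocal": a 200 Hz fundamental plus some presence at 1 kHz.
    vocal = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

    # On the way to the recorder, the 1 kHz range is notched out ("removed").
    b_cut, a_cut = signal.iirnotch(w0=1000, Q=5, fs=fs)
    recorded = signal.lfilter(b_cut, a_cut, vocal)

    # The next morning, try to "add it back": pass the recorded track through
    # and add extra gain in a narrow band around 1 kHz (a simple peaking boost).
    b_pk, a_pk = signal.iirpeak(w0=1000, Q=5, fs=fs)
    boosted = recorded + 3.0 * signal.lfilter(b_pk, a_pk, recorded)

    def level_at_1khz(x):
        # One-second signal, so the FFT bins are 1 Hz apart; bin 1000 is 1 kHz.
        spectrum = np.abs(np.fft.rfft(x)) / len(x)
        return 20 * np.log10(spectrum[1000] + 1e-12)

    for name, x in (("original", vocal), ("recorded", recorded), ("boosted", boosted)):
        print(f"{name:9s} 1 kHz level: {level_at_1khz(x):6.1f} dB")

The boosted figure stays far below the original: the boost only amplifies what little is left in that band; it does not recreate what the notch threw away.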

This is why the recording engineer should record tracks as raw and unaltered as possible. You can always add effects and intonation later, and if you screw it up, you can change it – because the original performance is raw and unchanged. Let the next engineer, the “mixing” engineer, deal with molding and shaping the sound.

Also remember that the old computer adage, “garbage in, garbage out,” applies here. If you give the mixing engineer a tinny sounding acoustic track, no amount of EQ or effects will fix it. Get as full a range of sound as you can and let the mixing engineer work out the rest.

Mixing Engineer

In my humble opinion, this role is the most fun. Here you take the raw materials you’ve been given… drum tracks, bass tracks, guitars, vocals… and you mix them together into a final song. Here is where the engineer can literally change the course of a song. Do you keep the song dry, or add lots of reverb and echo? Do you compress the vocal tracks, or retain their dynamic range? Here’s where the aesthetics of audio engineering come into play. And this is where the most contention comes in with respect to your job. Not everyone will agree on how the song should sound. Know your boundaries and respect the artists for whom you’re working. Try to achieve the vision of the artist through your mix.

There’s a lot of crossover here between live and studio engineers. In the studio, the mixing engineer takes prerecorded tracks and mixes them to a single entity that is passed to the mastering engineer. In the live setting, the mixing engineer takes the raw tracks from the performers, mixes them, and passes the final entity on to the audience directly. The principle is the same… take the raw materials, the raw sounds, and mix them into the final product. But the live mixing engineer has no do-overs. No rewind. No second chances. It’s a fickle profession.

But the studio mixing engineer can go back as often as needed to redo a mix until the final product is realized. However, the mixing engineer may have a skewed vision of the final product. The mixing engineer’s talent is in setting the relative levels of the raw material. How loud are the guitars relative to the vocals? When does the bass solo begin – is it turned up? How loud is the echo relative to the other tracks? But frequency response may not be a top priority. That’s where the “mastering” engineer comes into play.

Mastering Engineer

In the mixing engineer’s studio, he may be able to get the final product to sound phenomenal. But who’s to say it will sound that way on Joe Schmoe’s boom box? Or in his truck? Or on his TV? Once the final product is realized, it is sent to the mastering engineer for “tweaking.”

The mastering engineer fills a much more technical role that involves breaking the final mix down into its audio components, or frequencies. He evaluates the bass frequencies in relation to the mid and high frequencies. He compares the final product to products already in the marketplace. What will the final product actually sound like on a boom box? Or a HiFi? Or even worse… a car? He adjusts the final equalization and dynamics of the song to bring it within industry standards so that the final product sounds just as phenomenal on a Ford Tempo’s sound system as it did on the mixing engineer’s sound system. Sure, it won’t sound as pristine in the Ford Tempo, but it will sound as good as a Ford Tempo can make it sound (and it won’t blow out the speakers).
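For a rough idea of what that kind of evaluation can look like, here is a minimal sketch – assuming Python with the numpy and soundfile packages, and purely hypothetical file names – that compares how a final mix and a commercial reference split their energy between the low, mid, and high ranges.

    import numpy as np
    import soundfile as sf

    BANDS = {
        "low  (20-250 Hz)":  (20, 250),
        "mid  (250-4k Hz)":  (250, 4000),
        "high (4k-20k Hz)":  (4000, 20000),
    }

    def band_levels(path):
        data, fs = sf.read(path)
        if data.ndim > 1:
            data = data.mean(axis=1)               # fold stereo to mono
        power = np.abs(np.fft.rfft(data)) ** 2     # power spectrum
        freqs = np.fft.rfftfreq(len(data), 1 / fs)
        return {name: 10 * np.log10(power[(freqs >= lo) & (freqs < hi)].sum() + 1e-12)
                for name, (lo, hi) in BANDS.items()}

    for path in ("final_mix.wav", "commercial_reference.wav"):   # hypothetical files
        print(path)
        for name, level in band_levels(path).items():
            print(f"  {name}: {level:6.1f} dB")

The absolute numbers mean little on their own; it’s the low/mid/high balance, compared against the reference, that tells you whether the mix leans boomy or thin.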

Your Role(s)

Most often, you will take on the roles of recording and mixing engineer in your own studio. Or you may specialize in the role of a mastering engineer. Either way, there tends to be a division between the mixing and mastering roles in the recording industry. There are several reasons for this, but the primary reason you will encounter is that perfect monitoring solutions for a mixing studio are nigh impossible to achieve. As you mix, you may not be aware that your system is deficient in the 250-500 Hz range. You may not realize that your signal will be, on average, 10 dB quieter than other songs in the CD player.
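To see how far off your level really is, here is a minimal sketch – again assuming Python with the numpy and soundfile packages, and hypothetical file names – that compares the average (RMS) level of your mix against a reference track.

    import numpy as np
    import soundfile as sf

    def rms_dbfs(path):
        data, _ = sf.read(path)       # samples come back normalized to -1.0 .. 1.0
        if data.ndim > 1:
            data = data.mean(axis=1)  # fold stereo to mono
        return 20 * np.log10(np.sqrt(np.mean(data ** 2)))

    mine = rms_dbfs("my_mix.wav")                   # hypothetical file names
    reference = rms_dbfs("reference_track.wav")
    print(f"my mix:     {mine:6.1f} dBFS")
    print(f"reference:  {reference:6.1f} dBFS")
    print(f"difference: {mine - reference:+.1f} dB")

If the difference comes back around -10 dB, that’s the gap the mastering engineer would be closing.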

A mastering engineer’s studio tends to focus on the monitoring system almost exclusively. The monitor speakers there will be darn near perfect and reproduce the sound exactly as it is recorded. That allows him to make objective decisions on which frequencies to adjust to prevent consumer sound systems from blowing up from the latest hip-hop album. Or to prevent ears from literally bleeding from the latest headbanger’s anthem.

As you work through the various roles of engineer, try not to cross that boundary. Focus on recording and mixing, or focus on mastering. But don’t try to do all the roles for a single album. Another set of ears, at the very least, will ensure that you end up with the best product possible.

Aspects of Sound

Whenever I describe running sound, I break things down into two basic parts:

  1. Technical
  2. Aesthetics

The technical part of sound engineering consists of hooking up equipment, knowing what each knob does, how to route signals from start to finish, etc. The aesthetic part of sound engineering involves understanding what makes a mix “sound good.”

The technical part is very objective and clear cut. There’s not necessarily one way to do things, but there are definite “right ways” and “wrong ways.” It’s hard to argue one way or another so long as things work. If you can get a signal from point A to point B, it’s usually not a big deal how you did it, as long as it works.

Aesthetics, on the other hand, have been debated for years and will continue to be debated so long as we record sound. What sounds good to one person sounds awful to another, and what sounds awful to another sounds indifferent to the rest. There are a few “rules” that make a good starting point from which you can create your own sound, and a few “rules” that define industry standards. As the old adage goes, you must first learn the rules before you can break them. Then you break them to create your own unique sound.

Table of Contents | Next – Hook-up »