ARTIST SPOTLIGHT: J.R. Rotem | Producing In the Fast Lane

 

J.R. ROTEM ON THE GEAR AND TECHNIQUES HE USES TO MAKE HIT RECORDS
By Mike Levine

Thriving as a producer in the ultracompetitive L.A. pop and hip-hop scenes is no easy feat, but J.R. Rotem has proven himself more than equal to the task. Between his freelance productions and his own label, Beluga Heights (which is affiliated with Epic/Sony), Rotem has produced a host of major artists, including 50 Cent, Dr. Dre, Rihanna, Lil’ Kim, Jason Derulo, Sean Kingston, Lindsay Lohan, and Britney Spears.

Born in Johannesburg, South Africa, Rotem spent much of his childhood in Canada before moving to the San Francisco Bay Area when he was in junior high school. He began his musical life as a student of classical piano but moved on to jazz when he enrolled at the Berklee College of Music. After graduating, he embarked on a career as a jazz pianist in the Bay Area. At that point, producing was only a sidelight for him. “It just kind of seemed like a hobby,” he says.

But after a while, Rotem decided that playing jazz might not be what he really wanted to do. “I just felt like there was a ceiling or a wall, or maybe it wasn’t the ultimate perfect fit for my creativity. So I decided to start making beats. I didn’t really have any contacts; I was still in the Bay Area. But somehow my beats got into the hands of Dwayne Wiggins of Tony! Toni! Toné!, who was affiliated with Destiny’s Child, and one of my beats was bought by [Destiny’s Child] and ended up on their Survivor album. When that happened, it was a sign that there was a future for me in this.”

Rotem has built his career to the point where he is now finding and breaking artists himself through his record label. I had a chance to talk to him recently about his production techniques, his console-free studio, his choice of instruments, and a lot more.

Do you have your own setup or do you work in a commercial studio?
We have our own wing in a commercial studio. I’m the type of person who doesn’t like to travel or bounce around, so I don’t set up in different locations. I have one location that I come to every day; essentially, I have a residency in a commercial studio. I do everything in the same spot, every day.

Is it a studio based around a large-format console?
No. In my particular room, we don’t use an SSL at all. The epicenter of the entire thing is really just a Pro Tools rig and my keyboards.

You just find that’s an easier way to work?
Much easier as far as opening sessions and recalling them. I don’t use any hardware or outboard gear. Everything is digital. You can open up a session at 3 a.m. and it’s the same as it was before; no knobs to turn or anything like that.

No recall sheets, none of that stuff.
Exactly, it’s completely a digital thing.

What are some of your “go-to” instruments and plug-ins?
As far as hardware goes, I’m still very much a keyboard person. I could never fully rely on soft synths. As a jazz musician, I’m very sensitive to latency and timing. And obviously, nothing is like playing a real piano, where there’s an acoustic connection, but I find that even hardware keyboards feel more like playing something real than a MIDI controller triggering soft synths on a computer does. The keyboards I rely on are the Yamaha Motif, the Access [Virus], and the Korg Triton. I also use the Roland Fantom-G8 and a few other more analog-type keyboards.

And then I definitely have a very wide selection of soft synths that I also use. I’ll use Ivory for real pianos, Miroslav for real strings, and [Spectrasonics] Trilian for real bass. My drums are a combination of samples I have on an MPC and [Native Instruments] Battery. I’ll use [XLN Audio] Addictive Drums for acoustic-sounding drums, and I have [Native Instruments] B4 for organs. I have a big variety of soft instruments and a lot of synthy analog stuff: [reFX] Nexus, [Native Instruments] Massive, [Korg] Legacy, stuff like that.

So when you’re recording a keyboard part, do you just record the audio directly or do you also record the part via MIDI?
Everything is recorded as MIDI first. I have all of my instruments set up in a template, and I record into MIDI. I’ll build the base of the track and usually quantize everything in MIDI, unless something is specifically supposed to happen with a more natural feel. Then I’ll track from MIDI into audio. Sometimes the beat will be a 4- or 8-bar loop that’s simply tracked, copied, and pasted. Other times, if I’m working in more of a song format, playing different sections and editing them, maybe for the full duration of the song, then I’ll track all of that. Once it’s tracked, I’ll always add the bells and whistles on top. But it’s definitely a process of first recording into MIDI, then playing back that MIDI and bouncing it to audio.
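To make that quantize step concrete, here is a minimal Python sketch of snapping recorded MIDI note times to a grid, with a strength control for parts that should keep a more natural feel. The event layout, tick resolution, and parameter names are illustrative assumptions, not the internals of any particular DAW.

```python
# A sketch of grid quantization: note starts snap toward the nearest
# grid line. Events are (start_tick, pitch, velocity) tuples; the
# tick resolution and grid size are assumptions for illustration.

PPQ = 480             # ticks per quarter note, a common MIDI resolution
GRID = PPQ // 4       # sixteenth-note grid

def quantize(events, grid=GRID, strength=1.0):
    """Move each note's start toward the nearest grid line.

    strength=1.0 is a hard snap; lower values keep some of the
    original timing, for parts meant to feel more natural.
    """
    out = []
    for start, pitch, velocity in events:
        nearest = round(start / grid) * grid
        new_start = round(start + (nearest - start) * strength)
        out.append((new_start, pitch, velocity))
    return out

# A loosely played part: (tick, MIDI note number, velocity)
played = [(5, 60, 100), (233, 64, 90), (481, 67, 95), (715, 72, 88)]
print(quantize(played))                # hard snap: starts 0, 240, 480, 720
print(quantize(played, strength=0.5))  # half strength keeps some feel
```

Because the MIDI is locked in before the bounce, edits like this happen while the part is still notes, not audio.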

Rotem records all his keyboard parts to MIDI, putting his final sound choices off until later, but likes the feel of the keyboards’ internal sounds when he’s playing.
Photo: Joe Magnani for J Squared Photography

So the keyboard sound is mainly just for when you’re recording the part?
Just for the feel of what I’m playing.

So you’ll add soft synths later when you’re layering and when you’re choosing your final sounds?
Yes. I’ll either use them at that point or, if I really need inspiration and I’m really sick of all the sounds on my keyboards, I might start with a soft synth, mainly because I feel like I’m going to get to a sound I’ve maybe never heard. I know my keyboards so well that it’s very rare that I’ll stumble on a patch or a sound I haven’t heard, whereas my soft synths hold a lot more undiscovered sounds; I might hear a sound first and the newness of it will be the inspiration for a new beat or a new song. Otherwise, like you said, I’ll go to the soft synth afterward to replace sounds.

How involved do you get in the mixing and processing on your projects?
Very heavily involved. On many of my songs, I’m actually the one who mixes; I don’t always rely on an outside mixer. And even when I do, the song is usually 90 percent mixed already from the process of recording and tweaking it.

Let’s talk about working with the artist. Another role of a producer is to try to coax the best performance out of an artist. What’s your basic approach to working with people and trying to make them do their best?
I try to tailor it to who I’m working with, because different people want different things. Some people really want a solid framework: they want me to tell them the exact vocal arrangement, every harmony to sing, “this section is the lead,” “this section we’re going to double.” For the most part, I work with people who allow me to do that, so it’s very hands-on. I’ll have them keep singing, and I keep telling them, “No, do it with a little more personality here,” or “a little softer,” that kind of stuff. I’ll do a bunch of takes, and when they’re not there, I’ll comp all the takes into one that sounds like a magical take. That’s usually the way I work.

Do you do a lot of experimenting with different mics to see which one works best with the artist, or do you have a particular vocal chain that typically works well with vocals?
Yeah, I’m a creature of habit; I don’t do too much experimentation with different mics. We’ve basically relied on two mics for the most part. I think a Neumann U 67 was what we usually used. But lately, for the majority of vocalists, we’ve preferred a Sony C-800, a very popular mic that a lot of people are using. It cuts through; it seems to have a lot of high end. It sounds very current and very radio, and it makes things pop without needing as much EQ. We run that through, I believe, an Avalon M5 preamp.

What about processing vocals? What’s your favorite compressor to use?
The plug-ins I rely on are in the Waves Renaissance package. I would say the Waves Renaissance Compressor and Renaissance EQ (see Fig. 1) are on almost every track for me, vocals and instruments.

FIG. 1: For most EQ and compression chores, Rotem’s plug-ins of choice are the Waves Renaissance Compressor and Renaissance EQ.

What is it about them in particular that you like?
First of all, I’m just really used to them; I know how they sound and how to get what I want out of them. In general, I think they strike a good balance: easy to work with and understand, while giving you a certain kind of character, a certain kind of warmth, but not too much color. When I want to do something a little more dramatic, I have the SSL-type stuff that I use for EQ, and even for some compression, when I’m going for a more severe sound. And I have a compressor that I use, the Bomb Factory, that emulates the 1176. It depends: when I’m really trying to process a track, trying to get a piano to sound crazy compressed, and I want a very severe sound, I might not rely on the Renaissance stuff. But the Renaissance stuff is my go-to. I’m always cutting lows with it, and I’m always compressing. I would say I use those about 90 percent of the time.
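As a rough sketch of that “cut lows, then compress” go-to, here is some generic DSP in Python. This is not an emulation of the Waves Renaissance plug-ins; the 100Hz cutoff, -18dB threshold, and 4:1 ratio are placeholder settings, and the compressor omits the attack/release smoothing a real one would have.

```python
# Generic low-cut-plus-compression chain, with placeholder settings.
import numpy as np
from scipy.signal import butter, sosfilt

def low_cut(audio, sr, cutoff_hz=100.0):
    """High-pass filter that clears out energy below the cutoff."""
    sos = butter(2, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, audio)

def compress(audio, threshold_db=-18.0, ratio=4.0):
    """Static compression curve: levels above the threshold rise at
    1/ratio the rate. Real compressors add attack/release envelopes."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(audio) + eps)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)
    return audio * 10 ** (gain_db / 20)

sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
# test tone with some low-end rumble mixed in
track = 0.8 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
processed = compress(low_cut(track, sr))
```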

And what about [Antares] Auto-Tune and the T-Pain effect? Do you see that as something that’s going to keep going for a while?
I think at this point it’s safe to say that it’s dying in popularity. I think it’s also safe to say that everything in music is cyclical; sometimes cycles last even longer than you expected. But the T-Pain thing was very weird. At first just T-Pain was doing it, and it was very popular, and it was his sound. Then somehow everybody started doing it, it became an industry standard, a very popular sort of sound at the time, and it’s lasted a long time. But I would say at this point there’s a bit of a movement toward things sounding more organic than they have been.

Definitely, I still use Auto-Tune extensively. In fact, most people are so used to that sound that [they like to hear it] even while they’re recording scratch vocals. I never record through Auto-Tune; everything I record is unprocessed so that I can do whatever I want to it afterward. I never color it when I record, but they definitely want to hear their vocals played back through [Auto-Tune] in their headphones, the same way a vocalist wants to hear a little bit of reverb and delay on the vocal.

Beyond its role as a mixing tool, vocal tuning is now being used for tracking?
The way people work is influenced by the technology. In the ’60s and ’70s, if you couldn’t sing on pitch, it wasn’t likely that you’d have a career as an artist. Nowadays, with technology, it’s not a prerequisite [to sing on pitch]. I’m not saying that to talk badly about anybody. There are very, very talented people who can write very hip songs, who have identities as artists, and while they’re recording they want to hear their pitch being corrected. It is a crutch in a way, because it means they don’t have to focus on singing in tune, but they almost want to hear that correction as they’re singing. It makes them feel more confident and almost leads them to write and work in a different way.

So it’s opening things up in a way. What else is hip right now as far as processing a vocal goes?
I kind of feel like lately it depends on the artist, mostly in terms of doubling and quadrupling: [vocal] stacks. I think there’s been a trend of things sounding a little more organic. For a while it sort of became standard that every time you heard a vocal, it was quadded; you were hearing four vocals. Now a lot of people (whether it’s Kanye, whether it’s rappers, or other people who are more into an organic sound) will sometimes have just a lead vocal on a hook. And it’s weird. You get into a habit like, “Oh, yeah, it’s the hook, I have to stack it.” But there are no rules to it. Sometimes it is refreshing to hear just a single lead vocal on a hook, and maybe it can be super-dry, without reverb. I think a lot of people are also pitch-correcting with Melodyne [see Fig. 2]. It corrects pitch in a way that you can’t tell it’s been corrected, so it’s the opposite of the T-Pain thing: it makes someone sound like they actually sang in pitch, as opposed to hearing that digital thing. A lot of people are doing that. And like I said, with more organic-sounding music, people are opting not to stack things as much.

FIG. 2: When Rotem wants to correct pitch without it sounding corrected, he uses Celemony’s Melodyne editor.

And when you talk about stacking, you’re not just talking about the background vocals, but also the lead vocals? Just doubling and doubling and doubling?
Yes, sometimes. For instance, if I want a lead vocal that’s not overly processed yet is still a little thicker than the verse, I might have a lead vocal for the hook and then triple it. So I’ll have one lead vocal in the middle, which is the main one, and then two doubles, panned left and right, that are much softer than the lead. It doesn’t sound like you’re hearing three vocals all at the same level; you’re hearing one, but there’s a little bit of wideness from the panning and a little more texture because you’re hearing three. It’s all in the balance: the levels of the doubles are less than the lead.

Other times, I’ll look at a section and decide it’s definitely background, a call-and-response thing, where one part is like a background singer responding to the lead. I might stack the background with four vocals and pan them all the way around, like 100, 100, 50, 50, and have the lead [in the center]. Then it serves more like: okay, there’s a lead vocal and he has his background singers. Obviously, he was the one who sang the backgrounds; it’s not like I brought in other singers to do that. But that’s the effect, like one person standing at a mic with background singers.
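Here is a small NumPy sketch of the triple-tracked hook Rotem describes: the lead dead center at full level and two doubles panned out and several dB down, so the ear hears one wide vocal. The constant-power pan law, the hard-left/right pan positions, and the -6dB double level are assumptions for illustration, not his actual settings.

```python
# Lead center, two softer doubles panned out: one wide-sounding vocal.
import numpy as np

def pan_to_stereo(mono, pan, gain_db=0.0):
    """Constant-power pan: pan=-1.0 hard left, 0.0 center, +1.0 hard right."""
    gain = 10 ** (gain_db / 20)
    angle = (pan + 1.0) * np.pi / 4        # map pan to 0..pi/2
    left = mono * gain * np.cos(angle)
    right = mono * gain * np.sin(angle)
    return np.stack([left, right], axis=1)

def triple_tracked_hook(lead, double_l, double_r, double_gain_db=-6.0):
    """Mix three takes of the same line; the doubles sit well under the lead."""
    mix = pan_to_stereo(lead, 0.0)
    mix += pan_to_stereo(double_l, -1.0, double_gain_db)
    mix += pan_to_stereo(double_r, +1.0, double_gain_db)
    return mix

# placeholder audio standing in for three takes of the same hook line
n = 44100
takes = [np.random.randn(n) * 0.1 for _ in range(3)]
stereo_hook = triple_tracked_hook(*takes)
```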

The background singers are in unison at that point, not harmonized?
They could be in unison. I might have four voices in unison, and then I might stack harmony notes on top of those and just blend them. The overall texture of those backgrounds is distinct from the lead vocal; they sound very full and very wide, and they balance really well against the lead, which is right in the middle and right in your face.
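Reusing the hypothetical pan_to_stereo helper from the sketch above, the four-wide background bed maps directly onto the pan positions Rotem mentions, reading “100” as full left/right and “50” as half left/right (an assumed mapping of a DAW’s 0-100 pan scale); harmony takes could be summed into the same bed at a blended level.

```python
# Four unison background takes at pans 100L, 100R, 50L, 50R, sitting
# under the centered lead; the -4 dB stack level is a placeholder.
import numpy as np

bg_takes = [np.random.randn(44100) * 0.1 for _ in range(4)]  # placeholder audio
pans = (-1.0, +1.0, -0.5, +0.5)

bg_bed = sum(pan_to_stereo(take, pan, gain_db=-4.0)
             for take, pan in zip(bg_takes, pans))
```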


Mike Levine is EM’s editor and senior media producer.
