Synth Forum


Voice Volume?

5 Posts
3 Users
0 Likes
2,923 Views
Michael Trigoboff
Honorable Member
Topic starter
 

I need to balance the volumes of various Voices in a song I'm creating. It looks like the way to do this would be the Volume parameter in the Play Mode tab of the General tab of Voice Common Edit (Reference Manual, p. 55).

I'm wondering why the various Voices have values for this parameter that are not set to 127. I have sometimes run into a problem with this: I can't use a slider to make a Voice as loud as it needs to be compared to other voices because that Voice's Volume parameter is set too low.

Is there some reason why this parameter is not set to 127 for every Voice?

 
Posted : 06/02/2015 2:36 am
Pete Radd
Eminent Member
 

Hi Michael -

I would NOT recommend changing the volumes of individual voices in common edit mode.

In short, balancing the volumes of the various voices in your song is what we call mixing, and with each song you have a mixer available to you. It's such an important part of making music that there's actually a button on the machine that says "Mixer". When in Song mode, hit that baby and a mixer will appear. If you're familiar with a mixer in a PA system, the one pictured on screen works just like one of those. Read up on the mixer in the manual; that's where you really want to mix each song.

I hope this helps. - Pete Radd

 
Posted : 06/02/2015 7:41 am
Bad Mister
 

Sorry about the length of the response, but it's not a "how's the soup" type question. As you know, I got (deeply) into recording engineering to augment my musician's tool kit, so it is a subject I find fascinating and important. I hope you and others find this informative and helpful. "Voice volume" is equivalent to the musician's control over their own volume, like the volume knob on a Rhodes. "Part volume" is equivalent to the fader on a studio's mixing console, and is the engineer's artistic balancing control. How you approach these two separate volume controls is what this discussion is about. You are asking why all the VOICE volumes aren't set to maximum...

Is there some reason why this parameter is not set to 127 for every Voice?

Yes, there is. Of course, you can agree or disagree... Usually the side you come down on is a result of how you discover they are not all at 127. We can discuss some of the reasons from the standpoint of use case: MIDI mixing versus audio recording.

The reason is so that you can subjectively "mix". A flute (talking real world now) is not as loud as a trumpet. Nor are they recorded in the same way. Not all instrument sounds are miked the same or recorded/sampled from the same distance or with the same microphone. So not all instruments in the real world are at "127" relative to each other.

If you need more from an instrument (in your particular use case), turn it up. If your goal is individually tracking audio to an external recorder, expect to have to adjust levels for that purpose. A recording engineer often has to ask for more, or less, level from a musician. That is what VOICE volume is: the musician's output level from their instrument. The PART volume reflects the role the instrument plays in the ensemble; it is the engineer's control.
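To make the two stages concrete in MIDI terms, here is a minimal sketch using the mido Python library. The port name "Motif XF" is a placeholder for whatever your MIDI interface reports, and I'm assuming the standard MIDI convention that Part volume responds to Control Change 7 on the Part's receive channel; Voice volume, by contrast, is stored inside the Voice program itself and is not a channel message at all.

```python
# A minimal sketch, assuming the mido library and a hypothetical port
# name. PART volume is the engineer's fader: an ordinary MIDI CC 7
# message on the Part's channel. VOICE volume is the musician's knob,
# a parameter saved inside the Voice program, not a channel message.
import mido

with mido.open_output("Motif XF") as port:  # substitute your port name
    # Ride the fader on Part 1 (MIDI channel 0) for the lead.
    port.send(mido.Message("control_change", channel=0, control=7, value=100))
    # Pull the pad on Part 2 down so the lead can sit on top.
    port.send(mido.Message("control_change", channel=1, control=7, value=64))
```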

Random Facts: The job of the recording engineer is a unique task. Recording MIDI is vastly different from recording audio. Only the "audio guy" would ask for all things at 127. We sell Motif XFs to musicians, not engineers.

Okay, let's break that down...
If all Voices were at maximum volume and you began to combine them, you would quickly overload your mixdown device as you added instruments. If a single Voice, alone, fills the entire volume capacity, how will you possibly add fifteen more Voices to the mix? It is designed so the majority of users, who Yamaha recognizes are not professional engineers, can create a mix and then, without too much difficulty, play a lead instrument on top. (This is important)... the playing on top.
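To see why "everything at maximum" runs out of room, here is a toy numeric illustration (generic digital-audio arithmetic, not the XF's actual engine): digital audio has a hard ceiling, and full-scale signals summed together slam straight into it.

```python
# Toy headroom demo: sum a few full-scale sine "Parts" and watch the
# mix exceed the digital ceiling of 1.0, then repeat with conservative
# per-part levels. Illustrative only, not the XF's internal math.
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
parts = [np.sin(2 * np.pi * f * t) for f in (220, 277, 330, 440)]

loud_mix = sum(parts)                   # every part at "127"
print(np.max(np.abs(loud_mix)))         # well over 1.0 -- clipped

balanced = sum(0.2 * p for p in parts)  # conservative per-part levels
print(np.max(np.abs(balanced)))         # comfortably under the ceiling
```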

The levels of individual Voices can afford to be conservative because the noise floor is so very far down as to be a non-issue. Besides, when you combine instruments for use in a Performance with access to 4 arpeggiators, you want the "subjective" (musical) mix to rule when you initially call up a Part, not the "scientific" (all at max) mix. A lot of consideration was given to this balancing of levels.

"I can't make myself loud enough over the rest of my mix" has to be one of the most asked question in this kind of system.
Reason: (and this is based on experience) most folks new to audio mixing have no idea, no concept, as to why the slide controls are called "Faders", and not "Boosters". 🙂

I've made that joke before, but it's true. They bring all the instruments in and discover that their lead instrument cannot be heard... The proper solution is not to turn the lead instrument up more, but to turn everything else down. (But many are stymied by this rather obvious and simple solution.) In audio you are given a maximum, a ceiling; all your sound energy must remain below that ceiling. If everything is up near the ceiling, it makes for chaos.

Science of Sound
Sound is in motion, and levels naturally compress as sources combine... Here's what I mean: one piano playing is a certain volume; add a second piano playing the same thing at the same time, and the output does not double in level. (Thank goodness)... Sound piles up logarithmically. It takes approximately 10 pianos playing to double the perceived volume. So adding three or four instruments all at 127 would not overload the system, but by the time you get to 16 instruments you will be in serious trouble... particularly on downbeats, when the ensemble all hits together. To preserve the dynamic impact of that moment, you should treasure the quiet moments so the punch still has impact (i.e., not every portion of everything needs to be maxed out).
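The arithmetic behind the "ten pianos" figure comes from two common rules of thumb (stated here as assumptions, not XF specifics): N similar sources raise the level by roughly 10·log10(N) dB, and perceived loudness roughly doubles for every +10 dB.

```python
# Back-of-envelope math behind "ten pianos to double the volume",
# assuming the usual rules of thumb: N similar sources add about
# 10*log10(N) dB, and a perceived doubling takes roughly +10 dB.
import math

for n in (2, 4, 10, 16):
    print(f"{n:2d} sources -> +{10 * math.log10(n):.1f} dB")
# 2 -> +3.0 dB, 4 -> +6.0 dB, 10 -> +10.0 dB (the perceived doubling),
# 16 -> +12.0 dB -- still only a bit more than twice as loud as one.
```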

But certainly no single instrument alone should fill the entire dynamic space... So the concept is to leave the programmer plenty of room to increase certain instruments as necessary. Look, the typical musician knows only how to boost; the concept of fading, of turning down to balance things, is foreign and not initially intuitive.

Opinion: Listen to some of today's (ridiculous) mixes where every instrument is trying to fill all the bits of the system. The loudness war is real. "Why isn't everything turned up to maximum???"... Thank goodness everything is not turned up to maximum.

Common Myths: Audio / MIDI
Please adjust the instrument as necessary, when necessary. Remember, mixing MIDI is one thing; mixing audio is quite another. MIDI mixing is done totally (innocently) without meters, all by ear... totally subjective, musical. Audio mixing gets the eyes involved (often to the detriment of the whole thing) because folks get overly concerned about the audio record level reaching 0, even when they don't know what type of audio meter they are looking at; they just know "I need to get as close to zero as possible, right?" Yikes, I say!!! They are looking at a meter and actually making music-related decisions based on that information.

(Quick aside: in CUBASE you can have it display a waveform on your audio tracks... I speak to many who use that as a measure of how loud their track is... and this is wrong on many levels. An audio track in Cubase is not "loud". Loud is an analog word; it does not apply here. Your speakers can be "loud"; a digital audio waveform is a graphic. It does show relative levels, peaks and valleys, but it can be globally resized using the slide control to the right side of the Track's data. Resizing the graphic does not make the signal louder, nor does it change the output level. It is "eye candy". To know what level you are actually recording, you need to view a proper meter. The type of meter is also important... Is it a VU meter or is it a PEAK meter? Do you understand what you should be looking for from each?)

When tracking, you attempt to maximize the record level on each track, getting each as close to optimum as possible. Doing so makes a "non-musical mix", which is why it is not what gets monitored. It is what gets recorded to the individual audio tracks, but the engineer sets up a separate "monitor mix" which rebalances everything musically. The object is to optimize the levels of the individual instruments on their respective audio tracks, but in the monitor mix, and ultimately in the final mix, to use just enough of each to add up to a musically balanced ensemble result!

Follow this; it is at the heart of the issue: if each instrument Part is recorded to a separate audio track, and each is optimized as to record level, it will not sound like a musical experience when you play them all together. More like a sonic argument.

The Scientific versus the Subjective
The so-called "monitor mix" is what the engineer listens to, because it is subjectively rebalanced so that it sounds like a band playing musically with each other. What gets sent to the multitrack recorder is optimized (dry) levels... What the engineer monitors is a remixed, musically balanced monitor mix, one that reflects more of what the finished product will eventually sound like. The monitor mix can even have reverberation ... while the signal going to the multitrack is devoid of reverb (dry). Reverb is usually added at the time of the final MIXDOWN... But anything the engineer does to the "monitor mix" is not permanent. It is a separate mix generated to give an idea of what the final result will be, eventually. The final decisions (on things like reverb send amounts, and the musical balance of instrument levels) are made only once all the musical pieces are in place.

_ So your request for everything at 127 is a non-musical request: more absolute and objective than relative and subjective.
_ It is one that comes from viewing things in terms of the "extra" work you must do to balance the instruments when optimizing audio levels to the multitrack recorder (when you are wearing your audio engineering "hat"): gain staging your audio material.
_ And that is properly the job of the engineer. In one composition the flute is used in an ensemble of woodwinds, blended to make a sweet counter-melody; in another composition the flute is featured out front as the principal melody. If recording to separate tracks, you must "break" the MIDI mix balance and record each woodwind separately, optimized as to level (absolutely), then in the final MIXDOWN restore the balance of the ensemble (subjectively) so that it sounds like they are playing together, not shouting to be heard over each other.

It is the job of the audio person (engineer) to understand how this instrument is going to function, and to adjust the final level accordingly. When you have your musician "hat" on, your requirements are different. With your musician hat on, you record that backing string pad with very little velocity, which naturally results in a sound that melts nicely into place in your composition. Had you thought to maximize audio level first, your approach would have been different.

When you are not recording audio... then you are not so concerned with boosting levels to reach a value that only your eyes (not your ears) are telling you to reach. That's the difference... right there!

What type of meter are you using to set these record levels?
Peak meters show the rapid spikes often found in hammered, struck, or plucked instruments.
VU (volume unit) meters are purposefully less accurate (slower, because our eyes are not fast enough to see every level change) but show us a result more like what we are hearing. And as we move forward into the sonic future there will be more and more changes in the meters we use to "make our decisions" for us. I suggest you use your ears as the primary judge in all of this. And certainly when it comes to audio rendering, don't request that all instruments be at the same volume... When in nature does that ever happen? And if it did, it would be a unique thing, like the combination of instrument sounds you are currently working on. It's unique; it's music. Expect to have to mix.
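Here is a rough sketch of how differently the two meter types read the same material, using a decaying plucked-style tone. This is generic DSP arithmetic, with a block RMS average standing in for the VU meter's slow ballistics:

```python
# Peak vs. VU-style reading of a plucked (fast-attack, fast-decay)
# tone. The peak meter catches the initial spike; the RMS average,
# a stand-in for VU ballistics, reads far lower -- closer to what
# we actually hear.
import numpy as np

sr = 48000
t = np.linspace(0, 0.5, sr // 2, endpoint=False)
pluck = np.sin(2 * np.pi * 440 * t) * np.exp(-30 * t)  # fast decay

peak = np.max(np.abs(pluck))
rms = np.sqrt(np.mean(pluck ** 2))
print(f"peak: {20 * np.log10(peak):6.1f} dBFS")  # near 0 dBFS
print(f"rms : {20 * np.log10(rms):6.1f} dBFS")   # many dB lower
```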

If you do a composition with the same instrumentation as someone in Maine, what's the possibility that you both will mix the thing with the same levels? Less than zero.

The Mixing Voice... And why it's important
The MIXING VOICE is provided to allow you to customize each (Normal) Voice for the role (the part) it will play in this composition... which is unique. Use them, please. You can edit any VOICE while remaining in the context of your MIXING setup:
Press [MIXING] to view the XF's mixer screen
Press the Track Select button [1]-[16] to select the PART you wish to Edit
Press [F6] VOICE EDIT - this will allow you to customize the Voice in its current surroundings. You do not have to return to VOICE mode; you can edit this Voice (both Common and Element parameters) right here, in context. Adjust the Voice Volume as you require. You will find you have plenty of gain to do so.
Press [STORE]
If you are editing a Normal Voice you will be offered an MV location (Mixing Voice) that corresponds to the Part number you are editing. For example, if you are editing PART 4, the MV location will be 04. There are potentially 16 MV locations per MIXING setup. Optionally you can redirect the store routine to one of your 512 Normal USER 1-4 bank locations. But keeping the data with the MIXING makes it easy to find... Anytime you recall this Song/Pattern Mixing, all stored Mixing Voices are recalled.

Bad Mister's suggested workflow
_ Wear one hat at a time. First and foremost is your musician hat. Record your music; worry about rendering the audio later.
_ Customize each Normal Voice for this composition. Utilize the MIXING VOICE to store changes to the Insertion Effects, EQ, controllers, etc. for use in the current composition (you are never obligated to keep a Voice sounding as it does in Voice mode; the original programmer was not listening to how you are using that Voice now). Transfer your entire Mixing setup to the Cubase Sound Browser as a "VST PRESET", giving it the title of the song as a name. This will give you access to all your edits in the future.
_ When your music recording is complete... get your "producer" hat out and, while listening back, construct / map out a plan to render the project to audio.
_ Do you need to separate Parts to individual tracks? A good reason to do so is if you want/desire/need to further enhance what you have (few other good reasons exist). If you like the mix as you have it, MIXDOWN to stereo... Done!
_ If you require separate tracks on some items, map out what you are going to do, and write out the instructions for the guy with the "engineering" hat. 🙂
_ Proceed with the selective rendering and processing. And when done, make sure you have actually improved on what you had originally. Isolating a Part to its own audio track means you take responsibility for improving the result over what it was previously. Isolating gives you the potential to do so; make sure you evaluate your results.

 
Posted : 06/02/2015 1:44 pm
Michael Trigoboff
Honorable Member
Topic starter
 

Wow! Thanks for the great mini-course in music production.

I'm reminded of a Vietnamese student I had a few terms ago, whose first language was not English. I explained something to him via email, and he responded by saying, "That's a whole lot of make sense!"

When I'm setting up a Voice in a Mixing, when would I or wouldn't I use the "Parameter with Voice" setting? I saw what it says on page 109 of the Reference Manual, but I don't quite know what to make of those details.

 
Posted : 08/02/2015 11:05 pm
Bad Mister
 

When I'm setting up a Voice in a Mixing, when would I or wouldn't I use the "Parameter with Voice" setting?

You would use this when you have made some additions to the original programming of a VOICE in VOICE mode. There are parameters that are considered a deep part of the VOICE, and there are those that are tagged on as quick additions (or Offsets to the original data).

For example, each Element has its own Filter; the CUTOFF knob "adds" or "subtracts" an OFFSET value to the settings of each Element's Filter.
Deep programming (at the individual Element level) would be to go to the Element's FILTER itself and program the exact Cutoff Frequency per Element.
Offset programming applies a change to all 8 Elements' Filter settings together.

The CUTOFF knob does not control a Filter! ... If all of the Elements were set so that FILTER TYPE = thru, turning the CUTOFF knob would have no effect, because the CUTOFF knob does not control a FILTER itself... it controls any and all Filters that are assigned to Elements within the VOICE. It is a KNOB that applies an OFFSET to the various Element Filters. Say you turn the CUTOFF knob to +16... and STORE this to a USER location... this is an OFFSET to the values programmed in the original VOICE. If you want to carry this into a PERFORMANCE or MIXING, you need to set Parameter with Voice = ON.

So think of this type of editing of a Voice as via OFFSET (quick access) parameters versus those parameters that are deep within the Element itself.
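A conceptual sketch of that distinction in plain Python (the names and numbers are illustrative only, not Yamaha's internal data): the deep per-Element cutoffs live in their own table, and the knob is a single ganged offset applied across all of them.

```python
# Illustrative only -- not Yamaha's internals. "Deep" programming sets
# each Element's own Cutoff; the CUTOFF knob stores one OFFSET that is
# applied to every Element's value, clamped to the parameter range.
element_cutoffs = [64, 40, 90, 127, 20, 55, 70, 100]  # deep, per Element

def apply_offset(values, offset, lo=0, hi=127):
    # One knob, eight Elements: the same offset everywhere, clamped.
    return [min(hi, max(lo, v + offset)) for v in values]

print(apply_offset(element_cutoffs, +16))
# [80, 56, 106, 127, 36, 71, 86, 116] -- the Element already at 127
# stays pinned, and an Element with FILTER TYPE = thru would simply
# ignore its value: the knob controls no filter of its own.
```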

You can imagine that with 8 Oscillators per Normal Voice, it would be slow going to adjust the VOICE for a slower ATTACK (instead of a quick bow stroke, you want the strings to come in less abruptly) if you had to manually go into each Element and change the ATTACK segment of each of the 8 AEGs. Instead, all 8 Amplitude Envelope Generators are "ganged together" and controlled by the ATTACK knob, which "adds" or "subtracts" values from each of the 8 Amplitude Envelope Generators proportionally: it's QUICK ACCESS. It's a store-able OFFSET.

If you have edited any of these KNOB (quick access/offset) parameters and wish to have them applied to the VOICE when you move it to a PERFORMANCE PART, a SONG MIXING PART, or a PATTERN MIXING PART, then you would set "Parameter with Voice" = ON. This says: copy those quick access/offset parameters with the VOICE to my PART setup; apply the same offsets to the Voice in the PART.

You can also understand that ARPEGGIO assignments are not really a part of the VOICE in VOICE mode; they are only assigned that way. Any arpeggio can be applied to any VOICE. These are additions to the deep programming... they are simply associated with the VOICE in VOICE mode, not really a part of it. If you want to copy these additions to the original programming, you would set "Parameter with Voice" = ON in the PART to copy those ARPEGGIOS that are associated with the stored Voice in VOICE mode to your current PART.

"Parameter with Voice" covers a specific set of parameters that are applied as offsets to the deep programming of a VOICE. If you wish to carry them into one of the combination modes, set Parameter with Voice = ON prior to selecting the Voice for the PART.

 
Posted : 09/02/2015 5:46 pm
