# What is soundstage?



## sneaglebob

I hear people discussing the soundstage of headphones and I don't know what that is. So, what is it?


----------



## Head Injury

The sense that instruments are coming from some distance beyond the headphone driver. Usually it's divided into width and depth, and headphones have mostly width unless the recording is binaural (basically, designed for headphones).


----------



## sneaglebob

So is it good if headphones have "Above average soundstage"?


----------



## Head Injury

That depends on you. There's no agreed upon "neutral" size, because that depends on the recording more than anything.
   
  I'm more of an imaging guy myself. I want to be able to point in a direction and say "That instrument is coming from there." I don't care as much if it sounds like it's coming from a couple inches away or a football field.


----------



## PomPWNius

Depends if you like big soundstage, which depends on what music you listen to.


----------



## firev1

To give you a better reference: the "better" soundstage sounds closer to a good pair of speakers positioned optimally.


----------



## kiteki

Head Injury said:

> I don't care if it sounds like it's coming from a couple inches away or a football field.


----------



## kiteki

I think soundstage is a speaker term, since speakers can create holographic sounds that resemble a stage.
   
  An IEM or HP doesn't really have any of that soundstage; it can't make a holographic piano in front of you. So I call it soundspace instead, which I think is more accurate, but that's just me.


----------



## tinyman392

Sound Stage is defined (in my own words) as: the ability of a sound system (headphones, speakers, etc.) to create the sense, or illusion, that different instruments are placed around you at different angles and different distances.  Essentially, a sound stage allows a user to visualize the location of certain sounds in a given recording.
   
  Terms used with sound stage to describe it:

 Width: Your left to right is the width of the sound stage. 
 Height: Your up and down is the height of the sound stage.
 Forward/backward: Your front and back is this part of the sound stage.
   
  You're in the sound science section, so I'm going to give you my ideas on how it actually works.  Please note that these ideas are not 100% accurate; they are just generalizations based on every pair of headphones I've ever put on (from IEMs, to actual headphones, to earbuds).  I believe that it is based on the actual frequency curve of the sound system.  As you know, one way we actually determine distance is through the _loudness_ of something.  So if a certain frequency is softer than another on a pair of headphones, it'll create the illusion that it is further away.  What also seems to follow is that most headphones that have really great sound stage are not neutral or flat (and normally aren't even close).  To add, just about every headphone I've heard with sound stage has had a V/U-shaped curve to it.  This allows layering in the mid-range, which makes some things sound really close and others really far.  Again, this isn't 100% concrete.
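The loudness-as-distance idea above can be put in rough numbers. Under the free-field inverse-square law, level falls about 6 dB per doubling of distance, so a frequency band that a headphone reproduces that much softer could plausibly read as "further away". A minimal sketch (the function name and reference distance are mine, for illustration only):

```python
import math

def level_drop_db(distance_m, ref_distance_m=1.0):
    """Free-field level drop relative to the reference distance.

    Inverse-square law: intensity falls with distance squared,
    i.e. about 6 dB per doubling of distance.
    """
    return 20 * math.log10(distance_m / ref_distance_m)

# A source moved from 1 m to 8 m drops ~18 dB; a headphone whose
# response dips a band by a similar amount could mimic that cue.
for d in (1, 2, 4, 8):
    print(f"{d} m: {level_drop_db(d):+.1f} dB")
```

This is only the level component of distance perception; in real rooms the direct-to-reverberant ratio and high-frequency air absorption matter too.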


----------



## kiteki

If you down-convert stereo to mono there is still a soundstage, so it's not just a stereo illusion, obv.


----------



## NinjaSquirt

sneaglebob said:


> I hear people discussing the soundstage of headphones and I dont know what that is. so, what is it?
> 
> 
> 
> ...


 
  The illusion created by stereo speakers that instruments have a "place" on the stage, i.e. guitars left/right, vocals in the middle, drum kit pieces spread from left to right, with all kinds of other effects mixed in here and there. It only happens in stereo speaker systems. Headphones technically create a headstage, but the two terms are used interchangeably.
  
  Above average in size? Depth? Width? From the headphones I've heard, it's very difficult to get much depth. Width/size is good and artificially created on some headphones like the AKG K70x series. But most of a sound stage is determined by the recording. Also keep in mind there's no agreed-upon normal sound stage. Some people like width, some like depth, some like a small sound stage for a more engaging presentation. Tastes are all over the place.


----------



## kiteki

It's not only a stereo illusion though.


----------



## cer

It's an unquantifiable, subjective term, that's what it is.


----------



## nikp

ninjasquirt said:


> The illusion created by stereo speakers that instruments have "place" on the stage i.e. left/right guitar vocals in the middle, drum sets have pieces from the left to the right, all kinds of other effects are mixed in here and there.


 

 Isn't that instrument separation?


----------



## Willakan

Nah, instrument separation is a function of soundstage and clarity.


----------



## spkrs01

Sound stage is the recreation of a recorded musical event along the x, y, and z axes (width, depth, height), be it in mono or stereo.
   
  The x axis is the easiest to reproduce, the z the hardest. In large home systems you are entering deeply into diminishing returns to get that height.
   
  The best recordings for identifying sound stage are well-recorded orchestras such as the CSO, LPO, etc. Listen for where the wind, string, and other sections sit, and see if you can identify each section according to the arrangement of that orchestra.
   
  Imaging is what's placed within the sound stage. Here you should be able to pick up the individual characteristics of a given instrument and its size in relation to the others in the sound stage, i.e. a piano should sound much larger than a violin, etc.


----------



## spkrs01

Adding to my post above... two songs illustrating x, y, and z in sound staging:
   
  The Bodyguard OST, "I Have Nothing" (x and y): huge width and depth in this recording, beautifully layered; the timpani are miles away, right at the back of the stage/studio. A very good recording showcasing huge, theatrical sound staging. With the right equipment, there's no sibilance or sharpness.
   
  Chris Botti in Boston, "Hallelujah" (z axis): you can clearly hear his trumpet head and shoulders above the accompanying guitar and the orchestra. With IEMs/headphones the guitar and orchestra sit below ear level while his trumpet emanates above eye level.


----------



## TMRaven

Modest Mouse's "Bukowski" has a neat little effect that you'll only get with headphones and not speakers.  You'll hear a voice that appears to be behind you, but that's more a psychoacoustic thing and the placement of the headphone drivers relative to your ears.  On speakers, the voice is behind the speakers, of course.
   
  Corinne Bailey Rae's "Till It Happens to You" has a nice center image for her vocals.  Open-backed headphones do a better job of separating that center image from the rest of the layers in the soundstage, while speakers do the best job.
   
  Pink Floyd's "Time" does a good job of showing, with its intro, how sound can move between a pair of stereo speakers in a virtual space.
   
  When it comes to headphones, half the game is just that: psychoacoustics.  Headphones can't convey soundstage as well as a pair of speakers unless the recording was binaural.
   
  Googling Naturespace and searching YouTube for the Virtual Barbershop will get you some good material to truly test the soundstage of your headphones.


----------



## Iniamyen

I think we have a winner:
  
cer said:


> It's an unquantifiable, subjective term, that's what it is.


 

 Even though you can explain to someone what soundstage is, everyone has their own take. And let's face it: with headphones, you've usually got two drivers to work with. You can do left/right because you have left and right drivers, but you aren't going to get any up/down or in/out cues from the properties of the headphones alone. Any other spatial cues come from the recording, and describing them (i.e., describing the recording) usually ends up being pretty subjective.


----------



## kiteki

If you think you can hear soundstage with _one ear at a time_, then a microphone should be able to record that and transfer it to a 3D graph.
   
   
  If you think soundstage is a subjective stereo illusion, then you will have to program your subjective stereo illusion into the computer software in order for it to transfer that to a graph.
   
  In either case, it's quantifiable, it's not like measuring an emotional response from music, which is unquantifiable.


----------



## SixthFall

tinyman392 said:


> Sound Stage is defined (in my own words) as: The ability for a sound system (headphones, speakers, etc) to create the sense, or illusion, that different instruments are placed around you at different angles and different distances.  Essentially, a sound stage allows a user to visualize the location of certain sounds in a given recording.
> 
> Terms used with sound stage that describes it:
> 
> ...


 


  x14331424356


----------



## EddieE

In terms of recording, it's a combination of the stereo mix and how far the instrument/vocalist is from the microphone. Various digital filters and trickery can be used to augment or simulate this as well, but the old-school way is actual real-world distance plus stereo trickery.
   
  If you have a singer five meters away from the microphone and you put that recording entirely on the left channel, it is going to sound like they are a distance away to your left.
   
  If you have them sing close-mic and put it equally in both channels, it is going to sound like they are right in front of you (or in the case of headphones – right inside your skull).
   
  Then there are binaural recordings, where two mics are placed inside a dummy head where the ear drums would be. In this instance (with headphones on) the music should sound to the listener to be coming from wherever the instrument/vocalist really was in relation to the dummy head. You can get much more realistic (almost spooky) soundstage effects with binaural recordings played back on headphones.
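EddieE's two cases above (a singer five meters away hard-panned left, versus close-miced and equal in both channels) can be sketched as a toy mixer: a constant-power pan law for direction plus inverse-distance attenuation for level. The function and its pan convention are made up for illustration, not any real mixing console's API:

```python
import math

def place(pan, distance_m):
    """Return (left_gain, right_gain) for a mono source.

    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right
    distance_m: distance from the (virtual) microphone; amplitude
    falls as 1/r (i.e. inverse-square law on intensity) past 1 m.
    """
    # Constant-power pan law: left^2 + right^2 == 1 at any pan position.
    angle = (pan + 1.0) * math.pi / 4.0          # 0 .. pi/2
    left, right = math.cos(angle), math.sin(angle)
    attenuation = 1.0 / max(distance_m, 1.0)
    return left * attenuation, right * attenuation

# Close-miced, centered vocal: equal gain in both channels ("in your skull").
print(place(0.0, 1.0))    # ~ (0.707, 0.707)
# Singer five meters away, hard left: quiet, left channel only.
print(place(-1.0, 5.0))   # ~ (0.2, 0.0)
```

Real mixes layer reverb and EQ on top of this, but level panning plus distance attenuation is the skeleton of the "old-school" placement EddieE describes.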


----------



## kiteki

eddiee said:


> You can get much more realistic (almost spooky) soundstage effects with binaural recordings played back on headphones.


 

 Indeed you can, and all that is correct... until you assess soundstage in an HP or IEM as a mono signal.....


----------



## EddieE

kiteki said:


> Indeed you can, and all that is correct... until you assess soundstage in a HP or IEM as a mono signal.....


 

 The different distances from the mic at which the various instruments would play in a mono recording, plus your mind's own "ordering" of things, mean that there would still be a soundstage in mono. You can get greater directional placement with stereo.


----------



## EnOYiN

Some of you might find this review of the Omega II interesting. It explains things like headstage and soundstage pretty well in my opinion. It's quite a long read though.


----------



## john29302

Sure, up and down is heard and processed. The barbershop is in a room, and the ear trains the brain to take that reflective, sonar-type data and store it like a hard drive. Blind people are much better than us at this. But you sound as if you have made your mind up. It's very similar to running in the woods: some can read terrain faster than others and adjust at a rate of 2,000 decisions a minute, some less or more, and therefore they fall less and have more speed. If you can tell left and right, then up and down is possible; the perception of the waves is read by the brain. Maybe one can't pinpoint, but like me, I am a slop artist and proud of it. No anal know-it-all can touch this.


----------



## JefferyK

To me, soundstage is directional placement and image is the holographic thing.
   
  I have never experienced imaging with headphones, only speakers. Which is fine with me.


----------



## kiteki

To me sound-stage, or more accurately soundspace imho (in a headphone, not the recording), is the X/Y/Z space in which you hear the music.  If you envision that space like an onion, the layers are the layering.  Imaging is the movement of sounds within this space; imaging can be fuzzy or precise, like a hazy left-right sound, or a marble floating through the air.
   
  To me this soundspace is completely separate from the recording / stereo image, since you can still hear it in one IEM or headphone cup at a time, as evidence.
   
  The Audio Technica CK10 introduced me to imaging, at which point headphones started to sound lackluster.  The Shure SE425 is a good example of imaging too. I haven't heard speakers sound that precise, but the sound-field is much more enveloping of course, and you need to take room acoustics into account.
   
  A speaker junkie will use cross-feed to emulate speaker sound in headphones or IEMs.
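The cross-feed idea can be sketched in a few lines: each channel is mixed into the opposite ear at reduced level and with a small delay, roughly imitating how a left speaker also reaches the right ear. The delay and gain values below are illustrative guesses, not taken from any particular cross-feed product:

```python
def crossfeed(left, right, sample_rate=44100, delay_ms=0.3, gain=0.3):
    """Naive cross-feed: feed each channel, attenuated and slightly
    delayed, into the opposite ear.

    left, right: equal-length lists of samples in [-1, 1].
    delay_ms, gain: illustrative values only; real implementations
    also low-pass filter the bleed, since the head shadows highs.
    """
    delay = int(sample_rate * delay_ms / 1000)
    out_l, out_r = [], []
    for i in range(len(left)):
        bleed_r = right[i - delay] if i >= delay else 0.0
        bleed_l = left[i - delay] if i >= delay else 0.0
        out_l.append(left[i] + gain * bleed_r)
        out_r.append(right[i] + gain * bleed_l)
    return out_l, out_r

# A hard-panned click now appears, quieter and later, in the other ear too.
left = [1.0] + [0.0] * 99
right = [0.0] * 100
out_l, out_r = crossfeed(left, right)
```

The effect is to pull hard-panned images off the extreme ear positions toward something closer to a speaker presentation.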
   
  Afaik there is no technology or software which can measure the soundspace, layering, and imaging characteristics within an IEM or headphone; those Brüel & Kjær dummy heads only measure frequency response (volume balance), and square-wave response is not sufficient, clearly evidenced by data like this (two extremely different sounding IEMs)
   
To me, this is one of the reasons I find audio fascinating: it's an inexact science, still unsophisticated relative to what we hear, and there is no speaker system yet which is 100% transparent, i.e. they are all coloured in a sense.
   
  There are some scientists who like to purport that audio is a complete and finalized field, but then I don't see why there is such an extreme amount of controversy compared to a field like video cameras, or mathematics.  Audio is more like medicine to me.


----------



## money4me247

hey, just wondering: do you guys have any other song suggestions that illustrate soundstage? It's just fun to listen to them using different headphones.


----------



## ultrabike

money4me247 said:


> hey just wondering, you guys have other song suggestions that illustrate soundstage?? just fun to listen to them using different headphones


 
   
  Try this thread: http://www.head-fi.org/t/511850/awesome-binaural-albums


----------



## joeyjojo

money4me247 said:


> hey just wondering, you guys have other song suggestions that illustrate soundstage?? just fun to listen to them using different headphones


 
   
  Zombie thread!
   
  Soundstage only exists with headphones for binaurally recorded material. For stereo content mixed for loudspeakers I think people are usually talking about some kind of "fidelity", maybe instrument separation, etc.


----------



## money4me247

heh.... zombies are awesome.
   
  my friend was talking to me about his AKG K550 and saying how you can really tell where each sound is coming from and where it moves to. This is using FLAC files, not binaural recordings. So I was thinking that has to be soundstage, right?


----------



## ultrabike

Seems so. However, going by my brief impressions of the AKG K550, I can't say it did sound localization very well. IME open headphones in general do a much better job in that regard. There are a few exceptions of course.


----------



## money4me247

ultrabike said:


> Seems so. However, going by my brief impressions of the AKG K550, I can't say it did sound localization very well. IME open headphones in general do a much better job in that regard. There are a few exceptions of course.


 
  oh ya? I was under the impression that the AKG K550 was one of the exceptions: a closed headphone with good soundstage. What would you say are the exceptions?


----------



## ultrabike

money4me247 said:


> oh ya? i was under the impression that the akg k550 were one of the exceptions of a closed headphone w/ good soundstage. *what would you say are the exceptions?*


 

   
  The Paradox would be one (http://www.head-fi.org/t/633956/the-t50rp-paradox-reviews-discussion-mini-tour-impressions)


----------



## XxDobermanxX

Try the Sennheiser HD 800 and find out. 
  And to find out what is not soundstage, try the ATH-M50.


----------



## bigshot

joeyjojo said:


> Zombie thread!
> 
> Soundstage only exists with headphones for binaurally recorded material. For stereo content mixed for loudspeakers I think people are usually talking about some kind of "fidelity", maybe instrument separation, etc.




This


----------



## starstern

What's the difference in definition between a bright sound stage and a deep sound stage?
  Does bright refer to more mids, more warmness?
  Does deep refer to a denser brightness, so to say?


----------



## bigshot

Brightness refers to frequency response, not soundstage. Soundstage is either forward, meaning close miked... or recessed, meaning distant miked. Depth of soundstage is determined by the inclusion of room acoustics, usually distant miked.
   
  A lot of people mix unrelated sound terms together in a mulligan stew because they think it makes them sound smart. If you learn the terms, it's easy to sort those horse's hindquarters out.


----------



## SP Wild

Ambience exists in bass tones, midrange tones and treble tones.
   
  Inaccuracies in reproducing any of those ambience details lead to a subjective frequency response tilt, even when this response is measured flat.
   
  In order to recreate an accurate 3rd dimension,  the audio signal needs to be accurate in the 4th dimension.


----------



## bigshot

HA! Good one! Get your orders in for the fourth dimensional sound processor now!


----------



## jaddie

bigshot said:


> HA! Good one! Get your orders in for the fourth dimensional sound processor now!


 
  Already done.  You probably own one.  Dimension #4 is time. (ok, technically it's "spacetime", but if you're out of space, and you got no time...well, that's just tough). So any processor that deals with time is playing in the 4th dimension.  
   
  But what we really need is a processor that deals with another dimension.  A dimension not of sight or of sound, but of mind.


----------



## bigshot

Where is Rod Serling when you need him?!


----------



## jaddie

Oh, you KNOW where he is...


----------



## uchihaitachi

jaddie said:


> Oh, you KNOW where he is...


 
  Jaddie, setting speakers aside: for IEMs and headphones, what do you believe are the determinants of 'soundstage'? Are you of the opinion that they are entirely constructs of the mind? If so, would you say that the disparity in 'soundstage' that people perceive among headphones and IEMs is due to some head gear possessing different frequency responses, which lead to different representations of the recording, which in turn may lead to the perception of less or more 'soundstage'?


----------



## jaddie

Just briefly, been traveling all day...
uchihaitachi said:


> Jaddie, except for speakers, so for IEMs and headphones, what do you believe are the determinants of 'soundstage'?


 
  Kind of a long discussion, some of which I've posted before, but I'll get back to you on this when I'm not so tired.
uchihaitachi said:


> Are you of the opinion that they are entirely constructs of the mind?


 
  No, it's always a combination of the total hearing mechanism plus the brain.
uchihaitachi said:


> If so would you say that the disparity in 'soundstage' among headphones and IEMs that people perceive are due to some head gear possessing different frequency responses that lead to different representation of the recordings which in turn may lead to perception of a lack of or a more of 'soundstage'?


 
  I'm not all that convinced there is much disparity in soundstage between headphones.  The major qualities of the headphone listening perspective are so overwhelming and unnatural that the differences in soundstage should be quite minimal within different models of the same type.  There is a difference between types, though.  
   
  I'm afraid I don't survey many headphones.  Most of them are so bad I just don't bother.  The last time I auditioned about a dozen, I pretty much hated them all, and some were pretty pricey and famous.   I do have a few favorites, all of them for their neutrality.  I really hate colored headphone sound.  So I may not be the best one to judge how vastly different response characteristics are perceived as changes in soundstage, other than theoretically, which as I said, I'll have to get back to you on; it's been a 20-hour day so far.


----------



## jaddie

OK, more awake now.  
   
  The concept of soundstage is a bit complex.  If you start with the idea of a concert listener in an audience, he hears the position of each instrument on stage using his spatial hearing abilities, which are based on the direction of the sound, or angle of incidence. He hears the distance from the source based on direct and reflected sound; the timing, intensity, and spectral distribution of each; and, related to that, room acoustics made up of multiple reflections at various timings and spectral distributions, each arriving at the listener from a different angle of incidence.  The hearing mechanism, composed of the inner and outer ear, head, and chest, combined with processing in the brain, lets the listener perceive the position of the source in space, and sense the space around him, all in three dimensions, and might I add, *in real-time!*  Impressive processor, the human brain.  
   
  When that concert is recorded, a lot of that spatial hearing system is bypassed.  We don't have a head with ears and a chest (binaural recordings excepted), so we eliminate most of the 3D spatial and directional cues.  A stereo recording is never a reproduction of the original, but rather an acceptable artistic representation of it, and an entirely new audio event (see Ch. 2 and 3, "Sound Reproduction..." by Floyd Toole).  
   
  The soundstage produced by two speakers in a small room is influenced by the position of the speakers in the room, the position of the listener in the room, the direct and reflected sound of the speakers arriving at the listener, and whatever spatial information is included in the recording.  Notice what I mentioned last: the recording.  It's all pretty much artificial in the recording because we are limited to two channels, which is far from enough to reproduce a believable 3D sound field.  So, stereo recordings are a fake-out from the beginning, and soundstage in a stereo listening room can never be an accurate reproduction of the original.  (I'm deliberately ignoring 3D sound processing systems, which don't really apply here.) 
   
  Now we take a recording mixed and produced for two-channel stereo on speakers in a room, and play it on headphones, which again bypass the spatial hearing mechanisms of the outer ear, head, and chest, leaving the brain with most of its spatial input eliminated.  The perspective we get, regardless of what headphones or IEMs we use, is basically hyper-stereo, with most images placed on a line between our ears that goes right through the center of our skull.  To get any sound to image off that line, the sound has to have included in it, or around it in time, spatial cues that present at least some spatial information to the brain: something to tell it the sound is in front of us, near or far, or behind, or above, or left or right.  Some of that can be faked in a recording, but it's never quite right without the full spatial hearing mechanism.  
   
  Most of what we hear in headphones as spatial cues in stereo recordings is accidental.  Music is almost never mixed for headphones, because if it were, it wouldn't work well on speakers.  However, music mixed for stereo speakers translates acceptably, though completely differently, to headphone listening, so for now we have speaker-centric mixes. 
   
  So what changes the soundstage with different headphones or IEMs?  My guess would be that different headphone and IEM response curves emphasize different areas of the spectrum that contain spatial cues. A direct sound has a certain spectral distribution; a reflection of it (and there would be many, unless it's a completely dry, close-miced studio recording) would have a completely different spectral distribution.  To put it another way, the FR of reflections is very different from the FR of the direct sound.  Many reverberation spectra are concentrated mid-band, with high-frequency reflections being absorbed or diffused more in the room, and thus containing much less energy by comparison to the direct sound.  So a mid-heavy IEM may present a more "spacious" effect than a flat IEM.  
   
  You can take that reflection and reverb spectral-difference analysis just about to infinity, but the thing is, every recording is different, so I would also theorize that an IEM that presents a wide or deep soundstage on one recording may not on another.  It may even come down to genres of music having different generalized acoustic qualities.  For example, most modern vocal tracks are very dry, where vocals of 30 years ago were always recorded with some reverb around them.  If all you listened to was 70's rock, then that stereo vocal reverb might sound better on an IEM with more mid emphasis. If you listen to current music, you might not think that was true.  Just a theory, and I'm not going to be the one to prove it, unless somebody wants to fund a research project.  
   
  Sorry for the long-way-round, but hopefully more detail helps answer this and future questions.
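One of the directional cues described above, the arrival-time difference between the two ears, has a standard textbook approximation (Woodworth's formula). A small sketch, using commonly cited values for head radius and the speed of sound:

```python
import math

HEAD_RADIUS_M = 0.0875   # average adult head radius, a common textbook value
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def itd_seconds(azimuth_deg):
    """Woodworth's approximation of the interaural time difference
    for a distant source at the given azimuth
    (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# Straight ahead: both ears get the sound at the same time.
print(itd_seconds(0))          # 0.0
# Fully to the side: about 0.66 ms, near the largest ITD humans experience.
print(f"{itd_seconds(90) * 1000:.2f} ms")
```

A stereo mix carries level differences between channels but not these head-geometry time and spectral cues, which is one way to see why headphone images collapse onto the inter-ear line unless the recording (e.g. binaural) supplies them.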


----------



## uchihaitachi

Wow, as ever, thanks for the very extensive feedback. I think your hypothesis could not be more accurate. I have found in my experience with mid-centric IEMs that listening to vocals from, say, the Ella Fitzgerald era provides a much more spacious representation of the music, whereas with modern tracks it feels like the singer is merely inches away from my face.


----------



## bigshot

That's because back in the 50's most records were engineered with a whole band playing at once. They set up the mikes to capture a performance in real space. Today, everything is multi-tracked, one element at a time. The mike may be right up close to one instrument and far away from another. There is no real space. It's just laid into the mix in an artificial space, and that artificial space might change every few seconds.


----------



## uchihaitachi

bigshot said:


> That's because back in the 50's most records were engineered with a whole band playing at once. They set up the mikes to capture a performance in real space. Today, everything is multi-tracked, one element at a time. The mike may be right up close to one instrument and far away from another. There is no real space. It's just laid into the mix in an artificial space, and that artificial space might change every few seconds.


 
  Yeah... Such a shame


----------



## bigshot

Well it's technology. Back in the 50s, they didn't have any choice but to mike performances as a group. They were limited to four track recorders. Today, digital audio allows for an infinite number of channels, so they use it.


----------



## jaddie

You might find this article interesting.  It's about the evolution of the broadcast console, but on page 3 there's a photo of an old radio studio, and the description below it is about how it used to be done: one mic, no mixer, because they had to do it that way.  Nobody today would have the guts!
   
  http://www.thebdr.net/articles/prof/history/HPH-Consoles.pdf


----------



## uchihaitachi

bigshot said:


> Well it's technology. Back in the 50s, they didn't have any choice but to mike performances as a group. They were limited to four track recorders. Today, digital audio allows for an infinite number of channels, so they use it.


 
  I am even more impressed by the great classical pianists. Single take.... Amazing.


----------



## morethansense

Most definitely just reviving a dead thread here,
  
 I'm just a small-timer in the recording industry, and I'm really glad we're (some of us) moving from recording in near-anechoic chambers and acoustically dead studios and blending artificially, to recording in halls and live spaces altogether. Believe me when I say that I've used and heard tens of thousands of reverb plugins and units, and none of them come close to a good real hall. A lot of studios that I work with are beginning to realise the importance of mic-ing not just individual instruments, but also the room.


----------



## bigshot

Recording studios used to sometimes drag a monitor across the hall and into the restroom, mike the reflected sound, and mix it back into the track. That's a natural reverb, but it didn't work if someone was doing something in the next stall!


----------



## jeffnev

This is all very interesting. So if I was looking for the most in soundstage, which headphones would be best?
  
 Jeff


----------



## ab initio

There are also the stories about Jimmy Page micromanaging the studio recording of the Led Zeppelin albums back in the beginning of the '70s.

Things like mixing mics close to the individual band members' amps/drum kit with mics far from the band to capture the "sound of the band". Also, recording the drum track for "When the Levee Breaks" in the studio stairwell to get a really wet reverb. That drum track is a really great, obvious example for laypeople to hear what reverb is.

Fun stuff

Cheers


----------



## bigshot

jeffnev said:


> This is all very interesting. So if I was looking for the most in soundstage, which headphones would be best?




If you're looking for the most in soundstage, you need speakers, not headphones. Headphone soundstage is weak compared to speakers... so weak as to be pretty much nonexistent.


----------



## Ruben123

So many users banned... What's your opinion on sound stage by now? Width - I get it, but height and depth? Let alone in headphones?


----------



## frodeni

This might not be the correct use of terms. It would be nice to get things right and in agreement. Please fill me in if I am wrong.
  
 I don't know what the correct terminology is, but from my experience, the following makes sense to me. (I start off by talking about speaker sound.)
  
Sound stage: Some systems seem to disappear. There are no sounds "coming from the speaker," and if I close my eyes, it is actually hard to pinpoint where the speaker is. For such systems, the far right is about the straight line going from my head to the speaker, continuing behind the speaker. Some of these systems are not as great at drawing the depth.
  
 By depth, I mean the ability to place the instruments along the axes running on lines that intersect my head, or in the area behind the speakers. Some systems are great for listening to orchestral music, as instruments appear both back and forth in depth.
  
 A combination of reproduction in both depth and width is rare. It is also strongly dependent on the room in which the speakers are placed, and how everything is placed, in particular in the extended triangle made up by my head and the two speakers.
  
 Some speaker makers actively try to draw the sound to the front of the stage, like Monitor Audio. In that case, the soundstage could be described as close and personal, meaning nothing else than that the speaker is tuned to draw the music closer to the foreground. The foreground is the line running between the speakers.
  
 Other speakers, like the ones I am using, Snells, do it the old-fashioned way. They like to draw things more in depth. That is just awesome for big-stage music, like big choirs, but not so great for a single artist with a guitar.
  
 I do not know if I hit this correctly, but as I see it, width and depth, as to where instruments might possibly be positioned, are important for sound stage. As would be the way the sound stage is drawn, like front- or rear-heavy. That sort of makes sense to me. If I got this wrong, please let me know.
  
Imaging: This term is quite new to me, as I would not use it in my native tongue. The way I see it, how well instruments and voices are placed in the stereo perspective, their positioning, is independent of the sound stage itself, at least to some degree. By that I mean that even if the reproduction is wide and deep, instruments might not be placed with spectacular accuracy; where they sit, in both depth and width, might be a bit blurry. I am accustomed to calling this "perspective" in my native tongue. My understanding is that this would be imaging.
  
 I guess that body should fall under this as well? Let me explain. If an instrument is well reproduced, all its tones are placed correctly. In some recordings, the movement of the fingers pressing the strings of a guitar is placed to the right, while the fingers striking the strings sit slightly to the left. The sound of the wooden guitar body extends a bit further, as it should. That is a hyper-excellent recording with superb imaging. The expected sounds are also reproduced as expected, and in tune. When that happens, the guitar seems to be given body, and virtually exists.
  
 If I were to use a term like imaging, that would make sense to me. I might be wrong, though. It would be nice to nail this one.
  
Articulation: Articulation is related to imaging. In my experience, it is not about placement in the stereo perspective; it is about how well articulated the reproduction of voices and instruments is.
  
 By that I mean tone and details. Not just that things are reproduced, but that they are reproduced with precision. This is best illustrated by the horrific reproduction of percussion, cymbals in particular, under mp3 compression. They are reproduced, but that is it: the finer details are completely lost in the conversion, washed out into a high-pitched something lacking almost any articulation.
  
Separation: To me, this is the ability to separate instruments and vocals. If I focus, I am able to track an instrument and hear what it plays, and the easier that is, the better the separation.
  
 Then there is false separation. If I lossy-compress music, some instruments actually get easier to follow, but that often comes at a cost, as other instruments simply disappear. Filtering out other instruments is not improved separation.
  
Musicality: I do not think this has been mentioned, but musicality is, to me, the ability of the reproduction to draw me into the music, to engage me. Meaning what, exactly?
  
 To me, first and foremost, that everything is reproduced in harmony. By that I mean that harmonies are actually heard. Listening to a choir, the voices harmonize; even in pop, the instruments blend as they should, reproduced correctly in tone. When that is the case, I sort of forget everything else, as I am sensitive to it.
  
 Also, nothing should be off. If you have great speed and attack in the highs, slow and dull bass sounds off next to it. The infamous hiss of the HD800 is another example.
  
Details: Details are simpler to quantify. It is simply whether sounds are audibly reproduced. Accuracy has little to do with it. Some gear lifts all the lower-level sounds, which is a safe bet to improve detail, but not articulation, not musicality, not imaging. Just lifting the low-level treble usually brings out a lot of detail, like the breath of the woman singing.
  
 Great reproduction of detail is often accompanied by great dynamic range, by which I mean the ability to stay articulate across the entire dynamic range. If you listen to metal, the details remain. If you play classical music, articulation will be great for all instruments, both those playing loud and those playing soft, but each sound will be reproduced at its correct level. There is a huge difference in that.
  
 This is why, when people just throw out there that something is "more detailed," that is not necessarily a good thing.
  
Clarity: This is a tricky term. A silly way to describe it would be: the less haze in the reproduction, the better the clarity. Like removing foam covering the speaker. Only it is bloody tricky to pinpoint in the reproduction, and when a lossy mp3 sounds clearer, things suddenly turn messy on me.
  
 Clarity is the one thing that tricks me. My Note 3 mobile phone sounds clear as a bell, but why?
  
 Sound drawn closer to me, front-heavy that is, typically sounds clearer.
  
 Also, a simpler reproduction, highlighting the main traits of the instrument, often sounds clearer to me. It is like unsharp masking (USM) in photography, where less sharp looks sharper.
  
 It is unclear to me what makes up clarity, as far as my senses go.
  
 In my experience, real clarity is best described as the combination of other aspects and their synergy: sound stage, imaging, separation, articulation, and details. When they come together, the perceived clarity is of a completely different nature. That is the best I can do.
  
 It might sound silly, but I have learned not to trust my ears on "clarity." If I tune anything by my impression of it, I mess everything up.
  
 To add to my confusion, digital noise masked my rig at one point. The sound stage was tilted far back, and something was clearly off. Removing a greater part of the digital noise moved everything much more up front. I had mistaken digital noise for sound stage and imaging. Embarrassing, to say the least.
  
Headphones: This is really easy. Headphones give you a sound stage as if you had the speakers right up against your ears, which you do. Left to right passes through your head.
  
 Depth is relatively small, as the distance between the "speakers" is small, and the distance between you and the line running between them is practically nil.
  
 The HD800 has its drivers at a distance from the ear, resulting in a slightly longer path to your ears. By default, that offers a wider sound stage. The axis between the cups no longer passes through the middle of your head, but rather through the front part of it. (So I guess that means I hear voices in my head? Not sure I like the sound of that.)
  
 As for the rest of the terms as I have described them here, headphones excel.
  
 In fact, if anyone made user-specific vector correction to the sound and placed the sounds individually on the fly, headphones are the only current device that could reproduce 3D in any direction. If the listener had a directional sensor on their head, sounds would remain fixed relative to the listener, say one meter in front and slightly to the left, even as the listener turns their head.
  
 The limited sound stage of current tech is due to how the music is recorded and reproduced. Left to right is often just a difference in sound level. For real acoustic recordings, you also get the time delay between the microphones, and environmental reflections. But that is not matched to the listener's ears.
  
The ear picks up at least the following:

Level differences (as used in stereo recordings)
Time differences between the sound hitting each ear (acoustic recordings)
Directional movement of the source (as with an airplane moving toward or away from you: it sounds different)
  
 Example: Most people can pinpoint a plane in the sky (well, they would point to where it was three seconds ago, if the plane is 1000 meters away). At that distance, the sound level difference between the ears is tiny. Doubling the distance to the source halves the sound pressure, a drop of about 6 dB, so going from 500 m to 1000 m hardly changes the difference between the ears at all. But the sound will hit the ears with a slight time delta, and that difference in time is instrumental to the human ability to pinpoint sounds.
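To put rough numbers on the level and time differences above, here is a back-of-the-envelope Python sketch. The 0.18 m ear spacing and 343 m/s speed of sound are assumed round figures, not measured values:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air, roughly, at room temperature

def itd_seconds(azimuth_deg, ear_spacing_m=0.18):
    """Far-field interaural time difference: ITD = d * sin(theta) / c,
    where theta is 0 deg straight ahead and 90 deg fully to one side."""
    return ear_spacing_m * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

def spl_drop_db(near_m, far_m):
    """Free-field inverse-square law: sound pressure halves (about -6 dB)
    for every doubling of the distance to the source."""
    return 20 * math.log10(far_m / near_m)

print(f"ITD for a source 30 deg off-center: {itd_seconds(30) * 1000:.2f} ms")
print(f"Level drop from 500 m to 1000 m: {spl_drop_db(500, 1000):.1f} dB")
```

The maximum time difference, at 90°, comes out around half a millisecond, which the hearing resolves easily, while the roughly 6 dB drop from 500 m to 1000 m is the same at both ears and so carries almost no directional information.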
  
Even more impressive, the speed of sound varies quite a bit with temperature, but that does not seem to affect our hearing much.
  
 A plane moving toward you has a different pitch, as its movement compresses the sound waves. You hear this in particular as the plane passes overhead, as the pitch drops at that point. I lived near an airport, so this became second nature to me. The same applies to cars.
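That pitch change follows the standard Doppler formula for a moving source and a stationary listener. A small sketch; the 200 Hz tone and 70 m/s speed are just illustrative figures:

```python
SPEED_OF_SOUND = 343.0  # m/s, approximate

def doppler_shift(freq_hz, source_speed_mps, approaching):
    """Observed frequency for a source moving directly toward
    (approaching=True) or away from (approaching=False) a stationary
    listener: f' = f * c / (c -/+ v)."""
    sign = -1.0 if approaching else 1.0
    return freq_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND + sign * source_speed_mps)

# A 200 Hz engine tone at 70 m/s (~250 km/h):
print(doppler_shift(200, 70, approaching=True))   # higher pitch while approaching
print(doppler_shift(200, 70, approaching=False))  # lower pitch while receding
```

As the plane passes overhead, the observed frequency sweeps from the higher value down to the lower one, which is exactly the drop in pitch described above.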
  
 The headphone is the only current device that could possibly reproduce all of this, as by design it could reproduce the time delta. But the tech is not there yet.


----------



## Ruben123

Great post. You should put it on some wiki page or something.


----------



## GloriousLettuce

This is an awesome thread.
  
Soundstage feels like what it is - a soundstage - and for the sake of simplicity we can even call it "surround feel".
  
 But what it really is, I think, is the shaping of frequencies into "groups", or "separation", in combination with volume and panning. The reason you feel the guitar is right up front of your head is that it is panned mostly right, with a particular frequency range that convinces your brain of where it is placed. You can look up the "virtual barber shop" stereo test on YouTube to try this out.
  
 So I think a good soundstage is defined by a broad frequency range and good reproduction of it, along with how the listener interprets it (we each have our own unique ABC, an analog-to-brain converter), the track's signature (which itself contains many other blueprints from the producer's mixing process), and so on. This is why soundstage can be subjective, and experiences will differ between listeners, tracks, and sources; the headphone signature may not suit the track, for example, ruining the soundstage. So soundstage is not so much a standard headphone measurement as a RESULT of different elements in the chain (though some headphones are generally soundstage-rich).
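To make the "volume and panning" part concrete: the simplest form of panning is just a level difference between the two channels. A generic constant-power pan law, sketched in Python (not tied to any particular mixing tool):

```python
import math

def pan(sample, position):
    """Constant-power pan. position runs from -1.0 (hard left)
    to +1.0 (hard right); total power stays constant, so the image
    moves sideways without the source getting louder or quieter."""
    theta = (position + 1.0) * math.pi / 4.0  # maps -1..+1 onto 0..pi/2
    return sample * math.cos(theta), sample * math.sin(theta)

left, right = pan(1.0, 0.5)  # a source panned halfway to the right
print(left, right)           # right channel comes out louder than left
```

A mix engineer layers many such panned sources, each with its own level and spectral content, and the brain assembles the result into an apparent stage.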


----------



## frodeni

gloriouslettuce said:


> This is an awesome thread.
> 
> Soundstage feels as what it is - soundstage - for the sake of simplicity we can even call it "surround feel".
> 
> ...


 
  
 That is one way of seeing it. Sure.
  
 My problem with it is that it gives no quantifiable sonic signature; it is rather vague, general, and emotional.
  
 How would you describe this sonically? What specific traits would showcase it?
  
 How is that guitar "panned out", and how can a guitar be "panned mostly right, with a particular frequency range that convinces your brain to interpret where it's placed"? Are you not mixing sonic traits here?
  
 Why would anything count as sound stage just because it plays harmonically to you? What, then, would not matter for sound stage? Where are its limits? Where does the term start, and where does it end?
  
 To me, this is useless as a definition of the term, as it simply does not describe it in any useful, nor quantifiable, manner. Actually, it seems to be more a description of the total sonic experience than of this particular trait.
  
 "For the sake of simplicity we can even call it 'surround feel'." Yep. A feeling. As in emotion. But not analytic and reasoned. So as a reasoned, analytic definition, this simply does not work.
  
 But hey, you might feel this way. I am not arguing with that.


----------



## GloriousLettuce

This is because I think soundstage isn't real; it's an illusion made from all these elements - an experience. And considering our auditory system is a sense, we are talking about feelings here. What's useful is to think of it more broadly and see how many things influence what makes a soundstage, not just a 1-10 rating per headphone.


----------



## frodeni

gloriouslettuce said:


> This is cause I think soundstage isn't real, it's an illusion made from all these elements - an experience. And considering our auditory system is a sense, we are talking about feelings here. What's useful here is to think of it more broader and see how many things influence what makes a soundstage, not just 1-10 rating per headphone.


 
  
 Yeah. I just do not get which elements you mean make up this illusion of sound stage. Because I think that when you analyse it a bit, you will end up with every sonic trait imaginable. It seems more like you are speaking of the sensation of being at a concert, with the speakers reproducing that in front of you.
  
 But breaking that down into something limited and reasonable is close to impossible. The term is nearly borderless.
  
 And no, it is perfectly possible to break this sonic illusion down into reasonable parts. The parts just need to be quantifiable and reasonable. They must also be useful. Actually, I did exactly that a few posts ago.


----------



## RRod

gloriouslettuce said:


> This is cause I think soundstage isn't real, it's an illusion made from all these elements - an experience. And considering our auditory system is a sense, we are talking about feelings here. What's useful here is to think of it more broader and see how many things influence what makes a soundstage, not just 1-10 rating per headphone.


 
  
 One way to start looking at the problem is to consider the aspects of the sound arriving at our ears from speakers that change as the speakers move apart. frodeni already mentioned the differences in time delay and intensity that can happen. These are what crossfeed plugins try to compensate for. Another thing that changes is the frequency/spectral content of sound reaching our eardrum, since moving the speakers changes how the wavefront interacts with our body (ears, head, torso).
  
 Consider this graph, a smoothed frequency plot of data from one of the public HRTF databases. The colors represent different positions of the speakers relative to the listener's head (0° is straight ahead, 90° is full left). The dB values use 1 kHz as the 0 dB reference. What it shows is that, as a set of speakers is rotated around the head, the locations of the peaks and valleys in the frequency response of an impulse change.
  
 Individual responses deviate all over the place from the average. Still, we can hypothesize that certain aspects of what we call soundstage/headstage are related to how the design philosophy of a headphone matches up with aspects of these curves. Individual deviations in the range affected by the ears (>2 kHz) might therefore account for occasional disagreement with what is considered the norm.
  
 Just some things to think about. Like you said, Lettuce, it would be interesting to try and find those things that correlate with the perceived soundstage; here is one possibility.
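The crossfeed idea mentioned above can be sketched in a few lines: each output channel also receives a delayed, attenuated copy of the opposite input channel, mimicking the sound from one speaker reaching the far ear slightly later and quieter. This is a toy illustration, not any particular plugin's algorithm; the 13-sample delay (about 0.3 ms at 44.1 kHz) and 0.3 gain are made-up values:

```python
def crossfeed(left, right, delay_samples=13, gain=0.3):
    """Toy crossfeed over two equal-length lists of float samples:
    out_left = left + gain * delayed(right), and symmetrically."""
    out_left, out_right = [], []
    for i in range(len(left)):
        delayed_l = left[i - delay_samples] if i >= delay_samples else 0.0
        delayed_r = right[i - delay_samples] if i >= delay_samples else 0.0
        out_left.append(left[i] + gain * delayed_r)
        out_right.append(right[i] + gain * delayed_l)
    return out_left, out_right

# An impulse in the left channel leaks into the right channel 13 samples later:
impulse = [1.0] + [0.0] * 20
silence = [0.0] * 21
out_l, out_r = crossfeed(impulse, silence)
```

A real crossfeed or HRTF renderer also filters the delayed copy, since the head shadows high frequencies more than low ones; this sketch only models level and delay.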


----------



## frodeni

rrod said:


> ... Still, we can make a hypothesis that certain aspects of what we call soundstage/headstage are related to how the design philosophy of a headphone matches up with aspects of these curves. ...


 
  
 There is no consensus as to what soundstage is. Just read this thread.


----------



## RRod

frodeni said:


> There is no consensus as to what soundstage is. Just read this thread.


 
  
 "Just some things to think about. Like you said, Lettuce, it would be interesting to try and find those things that correlate with the perceived soundstage; here is one possibility."
  
 Just read my post.


----------



## frodeni

I did read your post, RRod. People do not see the need for a term that describes the scope and range of the sonic reproduction. They do not analyze the sound.
  
 They hear the term soundstage and start babbling about what it should mean, not realizing it is the only term in use that covers depth and width. They end up talking about feelings and opinions, and in all honesty, about almost every sonic trait at once, baked into this soundstage of theirs.
  
 If so, we have no term for width and depth: the sonic scope in space.
  
 I hope I am wrong, but your post did not seem to reflect mine, nor many recent posts in this thread; it did not reflect the same understanding of soundstage. Thus, no consensus. Thus your statement is based on yet another definition.
  
 I have proposed a definition of the term and argued for it. I have quantified how it materializes. I have described the terms necessary for most people to apply the definition. Just not in this thread, because the needed structure is not here. It is all a complete mess.
  
 Nice to finally hear people speaking of hypotheses, by the way. If you like that way of arguing, we really need to try to pull in that direction together, instead of this petty argument going on between us.
  
 My main concern is that we need to break things down into manageable pieces of sonic traits, and those need to be quantifiable, and thus describable. They need to be well defined and limited by nature, not overlapping like nuts. And they need to offer us a language with which we can describe the sound with accuracy and reason. A language for analytic reasoning, not this "the sound is intimate, almost too close and personal" craze.
  
 Most sonic traits are not objectively observable, not to my knowledge, but some are. And to be objectively observable, they must be sharply defined anyway.
  
 But as I have said, the necessary structure for that is not here, as this was never a thread for such tight reasoning to begin with.


----------



## RRod

I do agree that fuzzy language gets science nowhere (though it seems to get headphone makers quite the $$$). But how is frequency response something that isn't quantifiable or describable?


----------



## frodeni

rrod said:


> I do agree that fuzzy language gets science nowhere (though it seems to get headphone makers quite the $$$). But how is frequency response something that isn't quantifiable or describable?


 
  
 To my knowledge, there is no test for soundstage. Not as the term is most commonly used, as in width and depth. Unless high-res sampling or new computing power has offered something radically new.
  
 Frequency response is quantifiable, but translating it into a soundstage measurement has proven hard for quite some time now, like at least 20 years. Stereo interference is one issue in that regard.
  
 When things are understood, and when the tech is there, I am pretty confident that most subjective and objective data will merge. That has been the story so far. I simply have no issue with either.


----------



## icebear

Just my $0.02:
 Sound stage is the virtual recreation of the recording space in width, depth and height by our brain, processing the ambient information captured in the recording and translated back into sound by the transducers, either headphones or speakers. Consequently, this cannot be directly measured. Forget about that approach.
  
 About the claim that 3D reproduction is impossible with just 2 speakers ... that is complete BS in my book. We have only two ears and are perfectly capable of locating sources of sound in the real world, i.e. in 3D space. If, in a recording, a certain instrument instructed to play "from far away" is playing from a back room of the concert hall or from a third-tier position in the hall, this will clearly be audible as not coming from the same sound stage plane (in width and depth) as the rest of the orchestra. If it's not, it's time to upgrade ... poor wallet


----------



## RRod

icebear said:


> Just my $0.02:
> Sound stage is the virtual recreation of the recording space in width, depth and height by our brain processing the ambient information captured in the recording and translated back into sound by the transducers, either headphones or speakers. Consequently this can not be directly measured. Forget about that approach.
> 
> About the claim that 3D reproduction is impossible with just 2 speakers ... that is complete BS in my book. We have only two ears and are perfectly capable of locating sources of sound in real world, i.e. in 3D space. If in a recording a certain instrument with instructions "from far away" is playing from a back room of the concert hall or from a 3rd tier position in the hall, this will clearly be audible as not coming from the same sound stage plane (as in width and depth) as the rest of the orchestra. If it's not, it's time to upgrade ... poor wallet


 
  
 Sit in front of two speakers that are right next to each other, then have friends move them apart for you. You'll hear a widening of the soundstage. This will be due to the change in how the wavefront interacts with the body. It's reasonable to ask what aspects of headphone design might lead to similar effects, though exact quantification might never be obtainable.
  
 Totally agree about the second point. In fact, the technique of speaker cross-talk cancellation has exactly this task in mind.


----------



## frodeni

icebear said:


> Just my $0.02:
> Sound stage is the virtual recreation of the recording space in width, depth and height by our brain processing the ambient information captured in the recording and translated back into sound by the transducers, either headphones or speakers. Consequently this can not be directly measured. Forget about that approach.
> 
> About the claim that 3D reproduction is impossible with just 2 speakers ... that is complete BS in my book. We have only two ears and are perfectly capable of locating sources of sound in real world, i.e. in 3D space. If in a recording a certain instrument with instructions "from far away" is playing from a back room of the concert hall or from a 3rd tier position in the hall, this will clearly be audible as not coming from the same sound stage plane (as in width and depth) as the rest of the orchestra. If it's not, it's time to upgrade ... poor wallet


 
  
 There is no height in stereo. Stereo is 2D, not 3D. How do you claim a 3D signal is modulated to produce this 3D effect?
  
 What happens is that some speakers have the deeper-sounding drivers at the bottom, making some sounds appear to come from a lower place. That effect ruins the reproduction of the guitar, for instance. For the same reasons, a lot of speakers have their drivers placed in vertical balance.
  
 And what you describe as soundstage is actually imaging, not soundstage. Given what you wrote above, what would imaging be?
  
 We need something defining the scope of possible imaging, as in width and depth. If that is not soundstage, which it is most places outside Head-Fi, then what describes width and depth? Then we need a term to describe the accuracy with which sounds are reproduced by height and width. That is usually imaging. If not, we need a term for it. What would that be? And so on.
  
 People need to realize that what we need to describe are sonic traits, not what the words should mean by their face value. Going by face value, you cannot analyse a sonic signature, as you have no language to do so with.
  
 Soundstage as the reproduction of a stage with playing musicians would be made up of a ton of sonic traits. The same will probably happen in a thread of this kind about imaging. Or body. Not to mention clarity. In the end, nothing is clear and distinct, and everything overlaps like nuts.
  
 No wonder people struggle both to analyze and to describe a sonic character.


----------



## money4me247

I think there is a difference in soundstage with headphones, and there is an easy test anyone can do. You can noticeably affect the soundstage of open headphones by covering the earcups with your hands: you will hear a clear difference in the width and depth of where the music seems to come from. You can still image where the notes are coming from; they are just closer than before.


----------



## frodeni

money4me247 said:


> I think there is a difference in soundstage with headphones. There is an easy test that anyone can do for this. You can easily affect the soundstage of open headphones by covering the earcups with your hands. you will hear a noticeable difference in width and depth of where the music is coming from. You can still image where the notes are coming from, but they are just closer than previously


 
  
 That would be my understanding as well. If soundstage is the imaginary space of width and depth, that is.


----------



## icebear

frodeni said:


> There is no height in stereo. It is 2D, not 3D in stereo. How do claim a 3D signal is modulated to produce this 3D effect? (1)
> 
> What happens, is that some speakers have the deeper soundings elements at the bottom, making some sound appear as coming from a lower place. That effects ruins the reproduction of the guitar for instance. For the same reasons, a lot of speakers have elements placed in vertical balance. (2)
> 
> ...


 
 (1) I don't know how it works, but I have listened to examples that clearly let me locate a sound source above the rest of the orchestra, and that is with two-channel stereo.
  
 (2) TV sound bars, for example.
  
 Could you please put some info in your profile about what kind of setup you are listening with, so we have an idea why you hear or don't hear certain acoustic aspects?
  
 (3) For me, sound stage is the dimensions of the recording venue, and imaging is the position of the individual sound sources within this stage. This obviously limits a natural sound stage to live recordings, not something pieced together and artificially panned between the left and right channels.
  
 (4 a,b,c) You want to invent a new language and set the definitions differently because we need this? Have fun.
  
 (5) Yes indeed, this is how it actually works. It is a very complex interaction of time differences between the direct sound and the reflected sound bouncing off all the walls of the actual 3D space. There is no single factor you can measure from the signal to get a reading on sound stage.


----------



## frodeni

icebear said:


> .... For me sound stage are the dimensions of the recording venue and imaging is the position of the individual sound sources within this stage. ...


 
  
 Then we are in agreement. This is the de facto meaning of these terms. Formalizing it is not making a new language.
  


icebear said:


> (1) I don't know how it works I have listened to examples that clearly let me locate a sound source above the rest of the orchestra, and that is with two channel stereo.


 

  
 Listen: if you hear instruments above the rest of the orchestra, then you probably do. I simply do not know why. Ideally, you should not.
  
 A guitar often spans the range of all the drivers in a speaker. Tapping, or the bass, is reproduced low, sometimes near the floor. The guitar body is often rendered lower than the hand hitting the strings. Yet the singer might be at the height of the strings.
  
 Sometimes, hitting higher tones results in an increase in rendered height. The fingers on the guitar neck often produce highs for the tweeter, rendered up at its physical height.
  
 This happens all the time with two- or three-way speakers of classic design. 


icebear said:


> ...     Could you please put some info in your profile with what kind of set up you are listening, so we have an idea why you hear or don't hear certain acoustic aspects? ...


 
  
 Sure.


----------



## icebear

I can't really follow your description of guitar sound reproduction. If it sounds like you describe, then I guess you have a problem with your speakers.
  
 Get yourself THIS CD (not the regular edition) and have a listen to how live guitar sound can be captured with a lot of ambient information that lets you "hear" the space of the venue.


----------



## frodeni

icebear said:


> I can't really follow your decription of guitar sound reproduction.
> If it sounds like you describe, then I guess you have a problem with your speakers.
> 
> Get your self THIS CD (not the regular edition) and have a listen how live guitar sound can be captured with a lot of ambient information that let you "hear" the space of the venue.


 
  
 There has been absolutely nothing wrong with the 30 or so setups I have heard this on. And sure, this is a distortion, but things behave exactly as I described, because that is the expected behavior. The description in my last post was for the classic design with tweeter on top, then midrange, then bass. This is the way they behave. If you do not follow, how much of it is unclear? Where did you lose the description?
  
 It is just the result of music being reproduced at the height of the speaker drivers, which depends on the frequency.
  
 As for buying a CD for the sole purpose of hearing great sound, and not for the music, I do not do that anymore. Thanks for the tip anyway.


----------



## icebear

frodeni said:


> There has been absolutely *nothing wrong with the like 30 setups* I have heard this on. And sure, this is a distortion, and thinks should behave exactly as I described, as that is the expected behavior. *The descirpiton in the last post, was for the classic tweeter on top, then mid, then base designs. This is the way they behave. If you do not follow, how bad is it? Where did you loose the description?*
> 
> It is just the result of music being reproduced at the height of the speaker elements, and that depends upon the frequency.
> 
> As for buying a CD for the sole purpose of hearing great sound, and not for the music, I do not do that anymore. Thanks for the tip anyway.


 
 If a (multi-way) speaker is breaking up the different frequency ranges the way you describe, then the construction of the speaker, most likely the design of the crossover network, is screwed up. Listen with a single-driver speaker, e.g. a Lowther. If you want to come to terms with something like soundstage reproduction, you will have to eliminate obvious shortcomings of the equipment.
  
 When the sound audibly separates between the individual drivers, you might consider whether your judgment, quote, "there has been absolutely nothing wrong," is really the last word in accuracy.


----------



## frodeni

icebear said:


> If the (multi-way/driver) speaker is breaking up the different frequency parts the way you describe it, than the construction of the speaker, most likely the design of the crossover network is screwed up. Listen with a single driver speaker e.g. Lowther. If you want to get to terms with something like soundstage reproduction you will have to eliminate obvious shortcomings of the equipment.
> 
> When the sound is audibly separating between the individual drivers you might consider your judgment of quote "there has been absolutely nothing wrong" as possibly not the last word in accuracy.


 
 A short primer:
 A speaker pair with this setup:
  

  
 Sound input is then split between the different drivers, filtered by frequency range. For this particular speaker, the crossover circuitry is like this:
  

  
 Crossover frequencies are at 300 Hz and 3500 Hz.
  
 That leaves us with:

Tweeter at 3.5-20 kHz
Midrange at 300-3500 Hz
Bass at (40)-300 Hz
  
 I once read an entire book on this subject, but that was years ago. The basics are still the same: there is some overlap between the drivers near each crossover, where both drivers 1 and 2, or 2 and 3, reproduce sound, like at 298 Hz or 3550 Hz. The roll-off might be sharp or gentle.
  
 So any sound at, say, 6 kHz will be reproduced by the tweeter, sounds at 1 kHz by the midrange, and 150 Hz by the bass.
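That frequency-to-driver mapping can be written down directly. A sketch of the three-way split described above (the guitar note frequencies below are approximate):

```python
def driver_for(freq_hz, low_xover=300.0, high_xover=3500.0):
    """Which driver of the three-way speaker above carries a given
    frequency, ignoring the overlap region around each crossover."""
    if freq_hz < low_xover:
        return "bass"
    if freq_hz < high_xover:
        return "midrange"
    return "tweeter"

# Guitar fundamentals run from roughly E2 (~82 Hz) to well above 300 Hz,
# so they straddle the lower crossover and shift between drivers:
for freq in (82, 150, 330, 1000, 6000):
    print(freq, driver_for(freq))
```

This is exactly why a single instrument can be spread across drivers sitting at different physical heights.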
  
 As for soundstage, that will be the triangle running from your head and the speakers. It will be the extension of that triangle, running behind the speakers. It is flat and two-dimensional. Lets just keep the assumption of a acoustic dead room for now.
  
 By this speaker, it should be pretty obvious, that the tweeter is significantly higher positioned than the base. There would be a significant difference in height between all these elements.
  
 Playing a guitar, could produce sounds being reproduced by all three elements. Some by the tweeter, some by the midrange, and some by the base.
  
 These sound will emerge at the soundstage given by each element, which is at different heights.
  
 The high pitch finger play on the guitar neck, as when changing the grip, will be at both tweeter and midrange.
  
 The tune of the strings them selves, are like 70-700Hz, meaning both base and mid. So when moving up the tone scale of the guitar, the reproduction will move from the base elements to the midrange. By definition. During the cross over, the sounds will gradually move up in physical height.
  
 So the guitar will sit vertically between those two drivers, for both some of the tones and some of the drumming on the body.
  
 The tones coming from the guitar cover a range going into the heart of the midrange element. As you work your way up the guitar scale, the reproduction moves up in height, from the bass driver to the midrange.
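The climb from the bass driver to the midrange as you move up the scale can be illustrated with the standard equal-temperament note formula. The tuning math is textbook; treating 300Hz as a hard boundary is a simplification of the crossover described earlier:

```python
import math

# Standard equal-temperament tuning: f = 440 * 2**((midi - 69) / 12).
def note_freq(midi_note):
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# Standard-tuning open guitar strings: E2 A2 D3 G3 B3 E4.
open_strings = {"E2": 40, "A2": 45, "D3": 50, "G3": 55, "B3": 59, "E4": 64}

for name, midi in open_strings.items():
    f = note_freq(midi)
    # Simplified: below the 300 Hz crossover -> bass driver, above -> midrange.
    driver = "bass" if f < 300 else "midrange"
    print(f"{name}: {f:6.1f} Hz -> {driver} driver")
```

Only the open high E string (about 330Hz) lands above the 300Hz crossover; fretted notes further up the neck follow it into the midrange driver, which is the height shift described above.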
  
 The finger noise, meanwhile, will sometimes sit firmly with the tweeter, even further up in physical height.
  
 This is how things are expected to reproduce, and this is how they are reproduced. It is easily audible, and not just for a guitar. It manifests with any speaker with an off-center driver layout, which is most speakers.


----------



## icebear

You keep dissecting your sound impressions on your journey to describe and analyze sound stage and imaging.
 To make the problem easier to tackle you might seriously consider a single driver speaker, just a suggestion.
 The drivers and electronics in a '97 loudspeaker might be due for a check-up.
  
 I have never heard the effect that you describe.
 Sometimes ignorance can be bliss you know, I enjoy listening to music.
  
 I'm out of this one here, someone else please take over.


----------



## frodeni

icebear said:


> You keep dissecting your sound impressions on your journey to describe and analyze sound stage and imaging.
> To make the problem easier to tackle you might seriously consider a single driver speaker, just a suggestion.
> The drivers and electronics in a '97 loudspeaker might be due for a check-up.
> 
> ...


 
  
 Great, so you got that point.
  
 KEF makes such a speaker, but it was six times as expensive as the ones I got.
  
 Not hearing this effect is probably a blessing for some. For me, it is hardly the only effect I am aware of. I still enjoy the music.
  
 As for dissecting, that is right on the money too. That is just how I roll.
  
 Keep on enjoying the music! That is the spirit.


----------



## Ruben123

Does sound stage even EXIST at all?
  
 You can pan a sound by hand (mixing) or in real life (stereo microphone). That gives you left and right (width). The "depth" is as simple as louder sounds seeming nearer than softer sounds; to me that is a dynamic-range thing (the source recording or the amp's volume), and not per se the speakers themselves. The height seems like a joke to me, because the tweeter is usually mounted higher than the bass drivers, so the higher frequencies will always come from above.
 Of course, some of these things might be altered (err, all are altered) by frequency-response differences. Because of that, soundstage could differ.
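The width and depth cues described here can be sketched as a constant-power pan law plus inverse-distance attenuation. Both formulas are common textbook choices, not tied to any particular mixer:

```python
import math

def pan_gains(pan):
    """pan in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    Constant-power (sin/cos) pan law: L^2 + R^2 stays at 1."""
    angle = (pan + 1) * math.pi / 4  # map [-1, 1] -> [0, pi/2]
    return math.cos(angle), math.sin(angle)  # (left, right)

def distance_gain(distance_m, ref_m=1.0):
    """Inverse-distance attenuation: quieter reads as farther away."""
    return ref_m / max(distance_m, ref_m)

left, right = pan_gains(0.0)
print(left, right)         # equal gains at center, about 0.707 each
print(distance_gain(4.0))  # 0.25: a source 4 m away at quarter gain
```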
  
 Isn't it that simple?


----------



## Audioholic123

ruben123 said:


> Does sound stage even EXIST at all?
> 
> You can pan a sound, by hand (mixing) or in real life (stereo microphone). That will be your left and right (width). The "depth" is as easy as louder sounds seem to be nearer than softer sounds.
> 
> *Isn't it that simple?*


 
*Fundamentally... yes...*


----------



## frodeni

ruben123 said:


> Does sound stage even EXIST at all?
> 
> You can pan a sound by hand (mixing) or in real life (stereo microphone). That gives you left and right (width). The "depth" is as simple as louder sounds seeming nearer than softer sounds; to me that is a dynamic-range thing (the source recording or the amp's volume), and not per se the speakers themselves. The height seems like a joke to me, because the tweeter is usually mounted higher than the bass drivers, so the higher frequencies will always come from above.
> Of course, some of these things might be altered (err, all are altered) by frequency-response differences. Because of that, soundstage could differ.
> ...


 
  
 No.
  
 If that were the case, there would be plenty of depth with headphones, but there is not.
  
 The main positioning mechanism is the time difference between the two ears.
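That time delta (the interaural time difference, ITD) can be estimated with Woodworth's spherical-head approximation. The head radius and speed of sound below are typical textbook values, not measurements:

```python
import math

HEAD_RADIUS_M = 0.0875   # ~8.75 cm, an average head radius
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

def itd_seconds(azimuth_deg):
    """Interaural time difference for a distant source at the given
    azimuth (0 = straight ahead, 90 = directly to one side), using
    Woodworth's approximation ITD = (r/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

print(f"{itd_seconds(0) * 1e6:.0f} us")   # 0 us straight ahead
print(f"{itd_seconds(90) * 1e6:.0f} us")  # about 656 us directly to the side
```

Sub-millisecond arrival differences like these, not loudness alone, are what the brain leans on most for left-right localization.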
  
 As for whether the soundstage exists at all: you at least need to define it first, and then the definition has to hold up. From there on, it is endlessly debatable, all the way to the position that it does not exist at all, in the spirit of Descartes. It is easy to get lost in that line of thought, yet it is very useful for understanding the arguments.
  
 The soundstage is a sonic illusion, just like the screen you read this on is a visual illusion.


----------



## Audioholic123

frodeni said:


> No.
> 
> If that were the case, there would be plenty of depth with headphones, but there is not.
> 
> ...


 

 Fundamentally though, Ruben123 is right; he just didn't go into detail. The truth is, soundstage is decided by how a sound is recorded and then panned. Depth is influenced by volume and frequency content, and thirdly by distance. Yes, there is more to it, but really, these are the basic principles of soundstage. I used to record music in the Reason synthesizer program, so I have plenty of experience with this. If it helps, you should know that the spatial character of a passage of music comes down to our ears telling our brain where to localize the individual parts of a recording.
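The volume and frequency cues for depth can be sketched like this: a "far away" source is attenuated and loses high frequencies. The one-pole low-pass and the cutoff-versus-distance mapping are illustrative assumptions, not a description of Reason or any other tool:

```python
import math

SAMPLE_RATE = 44100

def one_pole_lowpass(samples, cutoff_hz):
    """Simple one-pole low-pass filter (standard RC discretization)."""
    x = math.exp(-2 * math.pi * cutoff_hz / SAMPLE_RATE)
    out, prev = [], 0.0
    for s in samples:
        prev = (1 - x) * s + x * prev
        out.append(prev)
    return out

def push_back(samples, distance_m):
    """Make a mono signal sound farther away: quieter and duller."""
    gain = 1.0 / max(distance_m, 1.0)                    # inverse-distance loudness
    cutoff = max(20000.0 / max(distance_m, 1.0), 500.0)  # duller with distance
    return [s * gain for s in one_pole_lowpass(samples, cutoff)]
```

Mix engineers combine cues like these with reverb to place a part "at the back" of a mix; the point above is that it all happens in the recording, before the speakers are involved.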
  
 I think we all just don't know how to use the term soundstage.


----------



## frodeni

audioholic123 said:


> Fundamentally though, Ruben123 is right; he just didn't go into detail. The truth is, soundstage is decided by how a sound is recorded and then panned. Depth is influenced by volume and frequency content, and thirdly by distance. Yes, there is more to it, but really, these are the basic principles of soundstage. I used to record music in the Reason synthesizer program, so I have plenty of experience with this. If it helps, you should know that the spatial character of a passage of music comes down to our ears telling our brain where to localize the individual parts of a recording.
> 
> I think we all just don't know how to use the term soundstage.


 
  
 I know that this is often how it is done, and that to some degree it works for speakers. But really, turning up the volume does not alter the soundstage much, which should tell us something.
  
 It is more like a painting technique, in which objects farther away are given less saturation and less detail. Claiming that such techniques make the painting 3D is a stretch, but it sure helps a bit.
  
 I am not particularly impressed with the music industry and how a lot of things seem to be understood there. A lot of things are done out of need, but the need itself is not understood, so the explanations for why things are done may be completely off.


----------



## Shaffer

Imagine sitting in a concert hall while the band/orchestra plays some distance away, as happens in real life. A stereo recording played through a capable system (not headphones) can recreate the three-dimensional perspective of such an event. That's a soundstage in a nutshell. Headphones cannot do this. Headphones do not soundstage. They project a soundfield, for lack of a better term, centered in one's head, not an illusion of sitting in an audience listening to a performance taking place on a stage.

Edit: text


----------



## Shaffer

FWIW, my system allows for a presentation of height, as do a number of others. Again, I'm not talking about headphones. Headphones do not soundstage; it's physically impossible. I do see in your profile that you do not own a system capable of producing a soundstage. As such, why are you even talking to me about this? Goodbye.


----------

