# To crossfeed or not to crossfeed? That is the question...



## jasonb

I thought it might be a nice idea to see who likes or dislikes crossfeed.
   
  Please vote and share your opinion either way. I wanna hear what people here have to say about it one way or another.


----------



## revolink24

I do use crossfeed. It makes the music sound more natural to my ears, and I always strive for natural sound.


----------



## jasonb

I must say that ever since first using it, I now have to use it. Music through headphones now without it just sounds strange to me.
   
  As you can see in my sig, I use an HTC Incredible smartphone as my source. I have an app on it called DSP Manager. It's a 5-band user-adjustable EQ, and it also has "virtual room effects", better known as crossfeed.
   
  link to the Android DSP page: http://www.bel.fi/~alankila/android-dsp/
   
  and an excerpt from that page about the "Virtual room effects" used in this app:
   
  "A delayed, lowpass-filtered version of the opposite channel is added to the current channel. The delay is achieved bs2b-style using a single high shelve filter giving about 0.5 ms delay. After that, the signal is mixed without phase delay with 12 dB attenuation. In addition, there is a small reverb based on Haas stereo widening effect of 30 ms ping-pong buffers."
   
  I have also used one of the crossfeed plugins that are available for winamp with good results as well.
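  For the curious, the quoted description (a delayed, lowpass-filtered copy of the opposite channel mixed in at -12 dB) can be sketched in a few lines of Python. This is only a rough illustration of the idea: the cutoff frequency and delay below are guesses for demonstration, not the app's actual values.

```python
import math

def crossfeed(left, right, fs=44100, cutoff_hz=700.0, delay_ms=0.3, atten_db=12.0):
    """Mix a delayed, lowpass-filtered copy of the opposite channel into
    each channel (a simplified sketch of the effect described above)."""
    g = 10 ** (-atten_db / 20.0)                  # -12 dB -> ~0.25 linear gain
    delay = max(1, int(fs * delay_ms / 1000.0))   # delay in whole samples
    a = math.exp(-2 * math.pi * cutoff_hz / fs)   # one-pole lowpass coefficient

    def lowpass(x):
        y, state = [], 0.0
        for s in x:
            state = (1 - a) * s + a * state
            y.append(state)
        return y

    def delayed(x):
        return [0.0] * delay + x[:-delay]

    fl = delayed(lowpass(left))
    fr = delayed(lowpass(right))
    out_l = [l + g * x for l, x in zip(left, fr)]
    out_r = [r + g * x for r, x in zip(right, fl)]
    return out_l, out_r
```

  Feed it a hard-panned signal and a quiet, delayed, darkened echo of it shows up in the other channel, which is the whole trick.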


----------



## leeperry

Most xfeed plugins color the sound to death...I've currently settled on VNoPhones, it's free and it doesn't color the sound whatsoever...it simply adds a very dim delayed signal of the opposite channel to make my brain happy. Most xfeed plugins have become room simulators...very funny for a few days, unbearable after a week or so IME.


----------



## Happy Camper

I do with my IEMs and the portable rig but not the home rig.


----------



## smeggy

I like crossfeed in theory, in practice I have not found one that doesn't make a mess of the sound in some way.


----------



## aimlink

I currently always use the crossfeed option on my Head UltraDesktop Amp.  
   
  My purist side used to scoff at it and did not like it, and for that reason I never used it for a long time. I'd switch it off very soon once I heard the introduced warmth in the sound, which was taking me away from what I considered to be the more neutral and therefore correct and superior presentation. It was, at the time, more about the technically correct sound and the feeling that I was listening to greater balance, rather than the sheer enjoyment of the music. Yes siree, I can now attest to the direct experience of something sounding like crap once I expected it to.
   
  However, once I realized that this issue of purity and balance is a moving target and very personal, rather than an objective achievement, I grew to prefer the crossfeed. I stopped looking at its effect on balance as a 'colouration' per se. I now much prefer my listening experience with it enabled. It sounds better to my ears.


----------



## xnor

I also use crossfeed, cmoy's implementation that is (though software based).


----------



## p a t r i c k

I have become a huge fan of cross-feed since I got my Meier-Audio StageDAC. To be honest this is the only implementation of cross-feed I have heard, but it is extremely good indeed and imho represents a great improvement in the quality of the signal; in fact, I think it produces a very much more accurate rendition.
   
  I had been very skeptical of the idea because I am old enough to remember "mono reprocessed for stereo" (ugh!) and of course I have the idea that surely processing the sound cannot be good.
   
  However the big problem with headphones is that by design they greatly disturb the stereo image created by the recording engineer/producer.
   
  Listening to classical music comparing with Meier-Audio cross-feed on and off is very interesting and reveals that the Meier-Audio cross-feed really does great work in correcting the problem inherent in headphone replay of music originally intended for stereo loudspeakers.
   
  There is a fairly standard "layout" for an orchestra: they are in front, and the sound for loudspeakers is usually mixed so that the listener has an ideal position a bit back from the position of the conductor. This position is not available in the concert hall, but it can be, and is, made available in the recordings.
   
  With headphones, however, this arrangement goes monstrously wrong. Musicians playing on the left or right of the orchestra end up playing as if they are right beside you on either side. The detail on their instruments becomes very misplaced and greatly emphasised over the rest of the orchestra.
   
  I have one recording which reveals to me just how wrong things are with headphone replay (without cross-feed) and that is Poème Symphonique for 100 Metronomes by György Ligeti. The performance I have is by Françoise Terrioux in 1962. "100 Metronomes" is exactly what you get for 19 mins 56 secs. The metronomes are placed in front of the listener and tick away very nicely. They should create this rather marvellous sonic sea of ticking. If I use headphones without cross-feed then there is, what sounds like, a great deal of surface noise and static as you would expect from a worn LP playing a historic recording. Turn on the cross-feed and this goes away and you can hear that the metronomes are in front of you and ticking away nicely. I think this "surface noise" from headphones without cross-feed is simply out-of-place detail from the metronomes at the right and left sides of the metronome array. This piece of music, with its defined sonic structure, is very illuminating for this fault with headphones.
   
  Listening to more conventional classical music I find that the cross-feed restores a great sense of integrity, particularly to music from the actual classical and late classical/early romantic periods.
   
  Some people complain that cross-feed loses a sense of air, but if you think about it there really shouldn't be a sense of air around the listener for classical music recordings. There are many microphones on the orchestra, but there is no microphone to record an atmosphere for the actual location of the listener. The sense of air that comes from using headphones without cross-feed is in fact the atmosphere from the microphones at the extreme right and left of the orchestra, which is wrongly placed right next to the right and left ears of the headphone-wearing listener. As soon as you turn on the cross-feed, that wrongly placed sense of air returns to the correct position.
   
  When I bought my Meier-Audio StageDAC I thought that it was a nice "extra" to have cross-feed, but I now find that I always use cross-feed and for me it always improves the listening.


----------



## jasonb

Nice. So far 6 say yes, 3 say no, and 1 has no idea what crossfeed is.
   
  keep it coming guys!


----------



## silverxxx

I like crossfeed on some songs, but since I listen to a wide range of genres, it has a detrimental effect on some. So I end up not using it. Try listening to techno with crossfeed on. That, and I can't have both VNoPhones and bass enhancer at the same time.


----------



## EddieE

I voted yes, but it depends on the crossfeed of course. I tried the crossfeed on Rockbox and thought it was pretty awful, but I love the subtle, well-realised crossfeed on my Meier amp. It's a barely perceptible change; it doesn't make the music sound different, just more realistic and less jarring.
   
  If I buy an amp that isn't a Meier in the future, it will be after I buy a Stage DAC so I won't lose it.


----------



## anetode

Depending on how the music's mixed, I sometimes opt to use the PIC function on a Lavry DA11. It's a very simple but effective fix that allows you to choose the apparent width (R/L placement) of the stereo image.
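A width control like that is commonly implemented as mid/side scaling. Here's a minimal sketch of the general idea in Python; to be clear, this is illustrative only and not necessarily how the DA11's PIC function actually works internally.

```python
def set_width(left, right, width=1.0):
    """Scale the side (L-R) component to change apparent stereo width.
    width=0.0 collapses to mono, 1.0 leaves the image unchanged,
    and values above 1.0 widen it."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2.0            # what the two channels share
        side = (l - r) / 2.0 * width   # what differs between them
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```

At width 1.0 the signal passes through untouched; at 0.0 both channels carry the same mono mix, and anything in between narrows the apparent R/L placement.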


----------



## jasonb

anetode said:


> Depending on how the music's mixed, I sometimes opt to use the PIC function on a Lavry DA11. It's a very simple but effective fix that allows you to choose the apparent width (R/L placement) of the stereo image.


 

Is it still technically crossfeed?


----------



## LFF

smeggy said:


> I like crossfeed in theory, in practice I have not found one that doesn't make a mess of the sound in some way.


 


  X 2!!


----------



## TheOtus

No, I don't... Never even tried. I don't see any reason really. I suppose the channel separation bothers some people, but not me.


----------



## xnor

Crossfeed is a step towards more natural, imo realistic sound. (If you've ever listened to speakers in a correct set-up, sitting in the sweet spot --> amazing stereo imaging, you should know what I'm talking about.) Also see http://www.linkwitzlab.com/headphone-xfeed.htm.


----------



## littletree76

There is always this doubt on my mind:
   
  If the audio engineer in the recording studio has done the mixing correctly during production, in other words if the recording is targeted at both speakers and headphones (subtle cross-feed does not have much effect on speakers anyway), then post-production processing such as cross-feed during playback becomes redundant. There would then be no controversy over whether cross-feed is needed in any playback equipment.
   
  Correct me if this statement is unrealistic or impractical. I would particularly like to hear opinions from practicing audio engineers.


----------



## Shike

Depends on the implementation.  If it rolls off the treble, or doesn't accentuate it, I'm fine. If it does, however, I find it intolerable in most situations.  I have the same thing with Dolby Headphone to some extent.  When using it with the K601 it's perfectly acceptable.  On my AD700s I get sibilance and ear fatigue after a short time.  Without it the AD700s don't seem to have it and are very listenable again.


----------



## Uncle Erik

Yes, I like Dr. Meier's crossfeed implementation quite well.
   
  Though I like dipolar radiation even better.  
   
  Crossfeed is a funny thing.  People either like it or not, but it seems that no one knows what side they fall on until they hear it.  Give it a listen and find out whether you do or not.


----------



## anadin

smeggy said:


> I like crossfeed in theory, in practice I have not found one that doesn't make a mess of the sound in some way.


 


  I totally agree.


----------



## GreatDane

I've used crossfeed with portable amps from Xin, Meier, HeadRoom & Practical Devices. I currently only have the XM5 and use it about 50% of the time with my Westone 3. I did have a Corda Cross at one time. I regret selling that now.


----------



## xnor

littletree76 said:


> There is always this doubt on my mind:
> 
> If the audio engineer in recording studio has done mixing correctly during production [...]


 

 That's the problem: *if.*
   
  I have many recordings that sound better (closer to what you hear with speakers) with crossfeed enabled. But most importantly, it kills fatigue!


----------



## SP Wild

The more I use crossfeed, the more I need crossfeed.  The less I use it, the less I need it.  After a long session with the K1000s, most cans sound wrong without it afterwards...but if my session starts without crossfeed, then I don't need it.  I will always have crossfeed available to me at all times.


----------



## misc.

I find it really is just dependent on the music you are listening to and the production.
   
  For older music, especially stuff that went over the top with stereo bias I find it works fantastically and greatly reduces fatigue.
   
  Otherwise for most modern or electronic recordings I find it is sort of redundant. It works a lot better with 'natural' sounds that come from instruments than it does with electronic and synthesized sounds.


----------



## jasonb

just bumping this up...


----------



## Very Legal

Tried crossfeed on my Rockboxed iPod Video, but the sound became mono-ish so I turned it off again. Since then I haven't touched it.


----------



## jasonb

Some work much better than others. I used to love the crossfeed option I have on my HTC Incredible, that was until I heard HeadRoom's crossfeed. HeadRoom's is much more subtle, but still does the trick. I am really liking HeadRoom's crossfeed.
  
very legal said:


> Tried crossfeed on my Rockboxed Ipod Video but the sound became monoish so i turned it off again. Since then i havent touched it.


----------



## Very Legal

Still don't understand why a smaller soundstage is preferable, because that's what's happening when you use crossfeed, right?
  
jasonb said:


> Some work much better than others. I used to love the crossfeed option I have on my HTC Incredible, that was until I heard HeadRoom's crossfeed. HeadRoom's is much more subtle, but still does the trick. I am really liking HeadRoom's crossfeed.
> 
> Quote:
> 
> ...


----------



## jasonb

Not really. It makes it a tad narrower, but it's not cutting the width in half or anything. What crossfeed tries to do is simulate speakers in front of you, instead of headphones an inch away from your eardrums. Crossfeed adds depth at the expense of a very tiny amount of width. It kinda makes the sound seem more in front of you than beside you. I think crossfeed greatly improves imaging and soundstage, HeadRoom's crossfeed anyway. I am a true believer in crossfeed when done right, and I actually think headphones sound very unnatural without it. I was never big into headphones until I first heard headphones with crossfeed.
   
  http://www.headphone.com/learning-center/about-headroom-crossfeed.php
  
very legal said:


> Still dont understand why a smaller soundstage is preferable, because thats whats happening when you use crossfeed, right?


----------



## Achmedisdead

I tried the Rockbox default crossfeed setting, and it rolled off the highs, and compressed the soundstage so that I didn't like the sound at all. I was listening to Electric Ladyland, which I've listened to countless times over the years, and I couldn't make it through the album with the xfeed enabled. I'll certainly try another implementation if I have the chance, but I don't know when that might be.


----------



## Very Legal

For now I really like the sound spread all around and inside my head. I think that's the beauty of headphones vs speakers.


----------



## Arctia

Depends on what headphone you have also.
   
  Crossfeed worked well for my K701. It took the edge off and provided a much smoother sound.
   
  For my HD800 however, all it did was greatly reduce the soundstage. Some headphones don't need crossfeed.


----------



## GreatDane

achmedisdead said:


> I tried the Rockbox default crossfeed setting, and it rolled off the highs, and compressed the soundstage so that I didn't like the sound at all. I was listening to Electric Ladyland, which I've listened to countless times over the years, and I couldn't make it through the album with the xfeed enabled. I'll certainly try another implementation if I have the chance, but I don't know when that might be.


 

 When I used RB crossfeed, I set the parameters so that the effect was at the minimum possible. Play with it and it gets much better.


----------



## stevenswall

BBE ViVA on my S9 seems to have some crossfeed (tested with a few hard-panned songs). I think it works quite well; it makes the soundstage more holographic/3D.


----------



## jasonb

heck yes it does!!
  
stevenswall said:


> BBE ViVA on my S9 seems to have some *crossfeed* (tested with a few hard panned songs) I think it works quite well; it *makes the soundstage more holographic/3D.*


----------



## edstrelow

As the term crossfeed is used in audio, it can mean any of a number of fundamentally different things so it is not appropriate to lump them all together.
   
  First you have simple "blend", i.e. mixing the left and right channels so that there is less stereo separation. The extreme setting of a blend control is single-channel monaural sound. I had a pre-amp that allowed this and it was of some use on a few old stereo recordings that had extreme separation, e.g. voice in one channel, instruments in another. Generally, however, I found it rarely useful.
   
  The other meanings of crossfeed are based on the proprietary techniques of the designers and do other kinds of black magic, including frequency response alteration as well as some blending. I suspect that much of the appeal of these systems has more to do with the black magic than the blending.
   
  One common claim which I totally disagree with is that by making phones sound more speaker like they will be more realistic. No - they will be more speaker like and that is far from giving a realistic spatial image.
   
  All conventional speakers suffer from inadvertent cross-feed which is simply an artifact of the speaker presentation and which causes each channel to feed both ears with the same signals. In effect your brain is getting hit from 4 signals rather than the 2 in the source. The 2 crossfeed signals are extraneous to the original 2 channel signals and simply degrade the sound.
   
  Headphones by comparison give only one channel to each ear and produce a more accurate spatial image. What they don't do is produce a sense of externalization, rather you get the in-the-ear effect that some complain about. But in other respects the headphone image is much clearer. Accordingly efforts to give phones speaker-like cross-feed are simply wrong in principle, just a way of buggering up the sound.
   
  However, I doubt that many commercial crossfeed systems really do provide speaker-like cross-feed, which also requires time delays of the cross-fed signal. Most are, I guess, simply blend, plus frequency tweaking, plus some other voodoo.
   
  From time to time efforts to get rid of speaker cross-feed are tried. Polk made its SDA speakers some years ago. I bought them and still have them because they do a pretty good job of giving a much more precise stereo image. Unfortunately they are no longer made.
   
  http://www.polkaudio.com/forums/showthread.php?t=45468
   
  Here are some other discussions of this issue.
   
http://news.cnet.com/8301-13645_3-20022412-47.html?tag=mncol;title
   
http://www.princeton.edu/3D3A/
   
http://www.freepatentsonline.com/6009178.html
   
http://kom.aau.dk/group/02gr960/docs/lspkpos02.pdf
   
http://www.isvr.soton.ac.uk/fdag/vap/html/xtalk.html


----------



## littletree76

edstrelow said:


> All conventional speakers suffer from inadvertent cross-feed which is simply an artifact of the speaker presentation and which causes each channel to feed both ears with the same signals. In effect your brain is getting hit from 4 signals rather than the 2 in the source. The 2 crossfeed signals are extraneous to the original 2 channel signals and simply degrade the sound.


 

  Meier Audio's StageDAC cross-feed switch comes with one option where a partial signal from one channel is subtracted from (instead of added to) the other channel. This option is meant for speakers, and it produces a much better sound stage and spatial image than normal stereo. With the cross-feed intensity switch set to minimum, the channel separation effect does not affect the headphone sound stage too much. Thus whether cross-feed is pleasant or not also depends on how much of it has been applied; too much of a good thing is bad for cross-feed.
   
  I suppose this is a manifestation of what you have claimed.


----------



## BlackbeardBen

edstrelow said:


> As the term crossfeed is used in audio, it can mean any of a number of fundamentally different things so it is not appropriate to lump them all together.
> 
> First you have simple "blend", i.e. mixing the left and right channels so that there is less stereo separation. The extreme setting of a blend control is single channel monaural sound. I had a pre-amp that alllowed this and it was of some use on a few old stereo recordings that had extreme separation, eg. voice in one cahnnel, instruments in another. Generally however I found it rarely useful.
> 
> ...


 


 I think the problem is that music used to be entirely mixed for listening from speakers, and to this day that remains a primary objective.
   
  Another attempt at nulling out the unwanted signal from the opposite speaker (in addition to the SDA speakers) is Bob Carver's Sonic Holography - Professor Choueiri's system looks like it does the exact same thing, except with software instead of circuitry, and probably with a whole lot more customizability.  Anyway, you can get a Sonic Holography processor or preamp pretty cheap these days - about $60 for the C-9 processor or $100-$200 for a C-1 or C-11 preamp - and they work with any speakers.
   
  The basic concept behind all of these systems is to introduce a phase-inverted copy of the opposite channel's signal, delayed by about 0.2 ms (the difference in time it takes for the sound to go to your other ear instead).  The goal is to cancel out the unwanted signal from the other speaker.  Of course, because the other ear _also_ hears the cancellation signal, it's not a perfect solution.  I'd be willing to bet that neither is Choueiri's.
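That phase-inverted trick can be sketched in a few lines of Python. This is a big simplification: real systems like Carver's (and presumably Choueiri's) also model the head's frequency shading, so the flat attenuation value here is just an illustrative assumption.

```python
def crosstalk_cancel(left, right, fs=44100, delay_ms=0.2, atten_db=3.0):
    """Inject a phase-inverted, delayed, attenuated copy of the opposite
    channel, aiming to cancel the acoustic crosstalk at the far ear."""
    g = 10 ** (-atten_db / 20.0)              # head-shadow attenuation (assumed flat)
    d = max(1, int(fs * delay_ms / 1000.0))   # ~0.2 ms interaural delay in samples
    dl = [0.0] * d + left[:-d]                # delayed copies of each channel
    dr = [0.0] * d + right[:-d]
    out_l = [l - g * x for l, x in zip(left, dr)]
    out_r = [r - g * x for r, x in zip(right, dl)]
    return out_l, out_r
```

And you can see the imperfection right in the code: the cancellation signal itself reaches the other ear too, which is exactly why it's not a perfect solution.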
   
  Oh, and for the record, this type of processing could be put in the recording, but it isn't because listening to it through headphones is entirely unnatural.
   
  The effect is interesting, to say the least.
   
  It isn't always an improvement - while the soundstage becomes absolutely huge in quite a few recordings, and it's crazy to hear sounds around and behind you (I tricked a friend into thinking I had a 5 channel SACD setup) - it's not always very natural sounding.  For example, the female backup vocalists on Clapton's _Lay Down Sally_ from _Slowhand_ sound like they're behind you...  It sounds cool, but it's a novelty that wears off quickly.
   
  It also has a tendency to diffuse the soundstage quite a bit - vocalists and other precisely located instruments become slightly more diffuse in location.
   
   

 But it remains that sound coming from headphones, a source so close to your ear, doesn't fully mimic the way our ears and brains are accustomed to hearing far-away sound sources in 3D space. Of course, speakers don't either...  They both have their failings in this respect, and since they fail in different manners, it's impossible to account for both in the recording.
   
   
   
  So yes, I do use crossfeed much of the time in Winamp.  The "HeadPlug MKII" plugin actually does a really good job, with adjustable amounts of crossfeed, delay, treble control, and more.
   
  No, it's not perfect - some recordings don't sound nearly as dynamic with it on.  But for listening to early stereo stuff it's a godsend - I can't listen to Cream without it!  For more modern mixes, it does do a good job of providing a sense of distance and bringing the sounds more in front of you, like at a live show.
   
  But like I said, sometimes I turn it off as it doesn't always sound better.


----------



## Strangelove424

Recently, I found a digital simulation of the Meier crossfeed for Foobar, and have been enjoying it. 

http://www.foobar2000.org/components/view/foo_dsp_meiercf

I'm not much of a crossfeed fan, but this one is very good, subtle but effective when not overdone. I am listening to the Jimi Hendrix Experience now and it's making a big difference. I don't know if I could enjoy this album on headphones otherwise.


----------



## 71 dB

I found cross-feed 5-6 years ago. It was kind of an awakening, a sudden realization of how unnatural headphone listening without proper cross-feed is. I felt stupid for not realizing it much sooner. I had the education and knowledge (acoustic engineering), but I never questioned headphone listening on a fundamental level. Just shows how important it is to question things... _everything_. Better late than never. At that point I wasn't much of a headphone guy, but cross-feed changed it all for me. 

So yes to _proper_ cross-feed. At bass frequencies our ears don't expect more than about a 3 dB difference in level (ILD), and the expected time difference (ITD) is less than about 650 microseconds. Going outside these limits causes spatial distortion in our brain. In everyday life, all the bass we hear is almost always mono, unless we feed signals with larger stereo separation into our ears with headphones. We learn to think the unnatural headphone sound is correct, but from a scientific point of view, considering how our hearing works, it is not. Cross-feed is not only about more pleasant and natural sound; it is about understanding what kind of biological creatures we are. It is about realizing how it's possible that something has been done wrong and can be fixed. Cross-feed is a great topic for showcasing the complexity of issues related to audio and listening.

After 5-6 years of cross-feed experimenting and thinking, I have learned things which I want to share here. If you disagree with me, please bring it up, because I am willing to learn more and correct mistakes of my thinking. 


*Spatial distortion*

Spatial distortion means spatial information, in the form of channel differences in a recording, that is "outside" what our spatial hearing expects. Our brain doesn't know how to interpret such information, and as a result the sound feels unnatural and tiring and the spatial image is wrinkled/fragmented. It seems to vary how strongly people "suffer" from spatial distortion; personally I don't want to experience it at all. The purpose of cross-feed is to remove spatial distortion, to scale spatial information so that it's within the expectation space of our spatial hearing, so that our brain can decode it normally, with ease, like any sound in our sonic environment.


*Proper cross-feed*

Each recording has its own proper cross-feed level. Monophonic recordings have a _negative_ proper cross-feed level, because we would like to create some channel separation to have stereophonic sound. Some (a few percent) of stereophonic recordings don't need cross-feed at all; they are simply recorded to have a "binaural" sound signature, using for example a Jecklin disk microphone setup. The majority of stereophonic recordings need cross-feed, some less, some more. Early stereophonic recordings typically had HUGE channel separation in order to demonstrate stereo sound, and these recordings require HUGE cross-feed. On the other hand, modern popular music is often mixed with headphone listening in mind and requires mild cross-feed, if any. So, there is an optimal level of cross-feed for every recording, depending on how it is produced. Not enough cross-feed means spatial distortion; too much cross-feed means narrowed, mono-like sound.


*Cross-feed level*

In my opinion, the typical cross-feed levels of, for example, headphone amps with cross-feed are quite conservative; many recordings require stronger cross-feed. The proper cross-feed level varies, according to my tests, between -1 dB (strong!) and -12 dB (weak!). Strong cross-feed works well with "ping-pong" stereo recordings and multichannel movie soundtracks, which contain a lot of channel separation after downmix to stereo due to out-of-phase surround channel information.
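As a back-of-envelope check on these numbers: with a simple frequency-independent cross-feed, the level setting directly determines how much channel difference survives on a hard-panned source. A sketch (ignoring the filtering and delay that real implementations add):

```python
import math

def residual_ild_db(crossfeed_level_db):
    """ILD remaining on a hard-panned source after mixing in the
    opposite channel at the given level (no filter, no delay)."""
    g = 10 ** (crossfeed_level_db / 20.0)   # e.g. -12 dB -> ~0.25 linear
    return 20 * math.log10(1.0 / g)

# A -12 dB cross-feed still leaves a 12 dB channel difference (weak),
# while -1 dB squeezes it down to 1 dB (strong, close to mono).
```

So -1 dB brings even a fully panned source inside the ~3 dB bass expectation, while -12 dB only tames the most extreme separation, which is why I call the first strong and the second weak.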


*Does cross-feed mess up the sound? Does it remove details?*

People who don't like cross-feed often say cross-feed messes up the sound. In my opinion these people have it logically backwards, but it's not their fault. Cross-feed is considered exotic, something extra to tinker with the sound. It is not. Loudspeaker listening causes strong acoustic cross-feed, because both ears hear the sound of both loudspeakers. Why don't people think loudspeakers mess up the sound because of this? It's because recordings are produced to take this into account. Recordings have strong channel differences because loudspeaker listening causes acoustic cross-feed. In other words, recordings are "anti-messed", and cross-feed (acoustic with loudspeakers or electric with headphones) messes with this anti-messed-up sound to create non-messed-up sound. It is true that cross-fed signals sound less detailed, but it's not because relevant information is lost; it's because spatial distortion is removed, information that never should have been there in the first place. Getting rid of spatial distortion makes it possible to notice the real musical information better. Our hearing is very good at detecting details in correctly cross-fed sounds, so proper cross-feed gives the best circumstances to notice as many details as possible. So, if you want to hear details in your music, you should favor proper cross-feed. If you prefer tiny details cloaked under spatial distortion, then cross-feed is not for you.


----------



## castleofargh

The idea that crossfeed is wrong because it changes the signature is indeed backward thinking. Pick a headphone that sounds more balanced with crossfeed, and now it's removing crossfeed that messes with the signature. 
But crossfeed as it's mostly implemented is a very simplified approach to the stereo issue of headphones plus music made for speakers. That doesn't mean doing nothing is a great idea, but it can be tricky to find a crossfeed that really works well for us. Ultimately we'll need to move on to more customized solutions if headphones are to become an actual hi-fi tool someday.


----------



## bigshot

The thing I learned with multichannel sound is that in the modern world, the old concepts of signal purity are just plain wrong. Purists refuse to use EQ or even tone controls and end up with a response curve created by a dice roll. Others refuse to use a DSP to create a room ambience or re-channel stereo to 5.1 and they end up with 2 dimensional sound. Still others approach system calibration like it's an order from God on high, and they never properly hear the hundreds and thousands of discs mastered slightly off spec.

I believe that tools have their purpose. We should be able to correct mistakes made by engineers, and we should be able to tailor the sound to fit our room and personal tastes. If you don't know how to use a tool properly, it's probably best to leave it alone. But if you're interested in learning about tools and use them to help sculpt your sound, that's a good thing.


----------



## endgame

bigshot said:


> The thing I learned with multichannel sound is that in the modern world, the old concepts of signal purity are just plain wrong. Purists refuse to use EQ or even tone controls and end up with a response curve created by a dice roll. Others refuse to use a DSP to create a room ambience or re-channel stereo to 5.1 and they end up with 2 dimensional sound. Still others approach system calibration like it's an order from God on high, and they never properly hear the hundreds and thousands of discs mastered slightly off spec.
> 
> I believe that tools have their purpose. We should be able to correct mistakes made by engineers, and we should be able to tailor the sound to fit out room and personal tastes. If you don't know how to use a tool properly, it's probably best to leave it alone. But if you're interested in learning about them and use them to help sculpt your sound, that's a good thing.



So much awesomeness in this post.


----------



## Zapp_Fan (Sep 18, 2017)

My own personal stance: Technically, it's changing the sound and therefore lower fidelity.  Therefore, I do not like the idea on a sheerly emotional / neurotic level.

Also, most mixing engineers (at least for pop / non-audiophile recordings) will try to produce a mix that sounds reasonable on headphones as well as speakers, so it's not as if headphones are a forgotten realm.  Most mixing engineers are resigned to the fact that their recordings will most frequently be heard on iPod earbuds. 

I'm guessing here, but crossfeed should be minimally necessary, or not necessary at all, for recordings where the soundstage is recorded using a small stereo or binaural setup.  Hopefully, a stereo image recorded this way should still sound somewhat natural in headphones, as the recording will still contain the intact stereo image / reverb of the space where the recording was made.

When you record with more than 2-3 mics and you're placing instruments artificially and building the soundstage from scratch (this is really a lot more common), the listener may or may not want crossfeed. Hard panning is generally only used as a special effect now, but if you're listening to old Beatles records, you will not hear the recording as intended without crossfeed, because at the time listening equipment was a lot more limited.  On the other hand, a fully synthesized electronic music track will not have a "natural" sound no matter what you listen with, as there never was one in the first place. 

At the end of the day, if it sounds good, it is good, right?  It is usually impossible to know if there is a "more correct" way to play any given recording, but it's not hard to decide whether you like a given effect or not.


----------



## bigshot

alex_aiwa_USA said:


> My own personal stance: Technically, it's changing the sound and therefore lower fidelity.  Therefore, I do not like the idea on a sheerly emotional / neurotic level.



If that's the case, you should sell your headphones and get a pair of studio monitors, because the difference between listening to music on headphones instead of the intended speakers is greater than any kind of signal processing. Put that emotional, neurotic energy to work on the bigger discrepancy. You'll at least get better sound for your trouble.


----------



## Zapp_Fan

bigshot said:


> If that's the case, you should sell your headphones and get a pair of studio monitors, because the difference between listening to music on headphones instead of the intended speakers is greater than any kind of signal processing.



You might be joking, but actually for the past 10 years or so most of my personal listening has been done on studio monitors...

and, I totally recognize that DSP can be a valuable part of a good listening setup... I don't question it, just personally feel a non-specific discomfort about it. 

However, I would argue that "intended speakers" is usually a very poorly defined category.  Audio pros know that their audience doesn't typically have nearfield monitors. Engineers *usually* make as few assumptions as possible about the ultimate listening conditions, meaning they want it to sound good on headphones and speakers alike.  In fact, it is seen as a major failure if your mix only sounds good on studio monitors.  However, now we again run into the fact that "sounds good" is entirely subjective.


----------



## Strangelove424

I have a hard enough time figuring out my own intents, let alone the intents of a mastering engineer who lived 50 years ago, when decent headphones were a gleam in an engineer's eye and the reference monitors they used were on a different technological level as well. Not to mention that intent and sacrifice are hard to tell apart. Was the treble or bass boosted to make up for lo-fi equipment? Did they drag the band in to do stereo mixes even though they loathed the idea of stereo? Intent is a psychological matter, and trying to discern it can be a neurotic enterprise. You could end up on a slippery slope, only buying vintage vinyl and listening through vintage gear. And boy would that suck. Intent and faithful reproduction are not the same thing.

Nevertheless, those who are all about intent, faith, and neutrality have to admit that many historic albums were never mastered with headphones in mind. For them, crossfeed is a more authentic way to listen over headphones.


----------



## Strangelove424 (Sep 18, 2017)

bigshot said:


> The thing I learned with multichannel sound is that in the modern world, the old concepts of signal purity are just plain wrong. Purists refuse to use EQ or even tone controls and end up with a response curve created by a dice roll. Others refuse to use a DSP to create a room ambience or re-channel stereo to 5.1 and they end up with 2 dimensional sound. Still others approach system calibration like it's an order from God on high, and they never properly hear the hundreds and thousands of discs mastered slightly off spec.
> 
> I believe that tools have their purpose. We should be able to correct mistakes made by engineers, and we should be able to tailor the sound to fit our room and personal tastes. If you don't know how to use a tool properly, it's probably best to leave it alone. But if you're interested in learning about these tools and using them to help sculpt your sound, that's a good thing.



+1 I'm big into DSP right now. I loaded Foobar up with DSPs and am going to town on them. I've never felt so empowered to customize my listening experience. Some people do tube rolling; I do plugin rolling. And things have never sounded so good. On that Jimi Hendrix album I mentioned above, I was using dynamic EQ for treble spikes, parametric EQ for tone adjustment, slickEQ for saturation/tube sound, and Meier crossfeed to stop the ping-pong. lol. The neutrality folks would probably freak out hearing that, but man did it sound gooood. If I were using speakers, I'd go for stereo->5.1 DSP too. I'm not exactly sure what Jimi Hendrix intended, but I assume he wanted me to enjoy his music, and that I did.


----------



## bigshot

alex_aiwa_USA said:


> Audio pros know that their audience doesn't typically have nearfield monitors. Engineers *usually* make as few assumptions as possible about the ultimate listening conditions, meaning they want it to sound good on headphones and speakers alike.



The only headphones I've seen in the studios I've worked in were the beaters in the booth used for playback. I never saw an engineer put headphones on. We would usually mix on the big full range monitors, then do a check on small speakers to make sure it worked well with cheaper systems. We never checked with headphones.


----------



## 71 dB

alex_aiwa_USA said:


> My own personal stance: Technically, it's changing the sound and therefore lower fidelity.  Therefore, I do not like the idea on a sheerly emotional / neurotic level.



Cross-feed definitely changes the sound. It would be pointless if it didn't do anything, so of course it _changes_ the sound. As for the lower fidelity, are you really thinking that, or is it an assumption that all changes to sound are for the worse? Is noise reduction always bad? Is filtering away a bass bump bad? Is removing spatial distortion bad? Did the producers of a recording intend spatial distortion to be part of the listening experience? If so, they fail miserably whenever the recording is listened to with loudspeakers, due to acoustic cross-feed, which by the way changes the sound significantly more than an average headphone cross-feeder does. Sometimes changes to sound are for the better, meaning higher fidelity, and cross-feed is IMO a good example of that.



alex_aiwa_USA said:


> Also, most mixing engineers (at least for pop / non-audiophile recordings) will try to produce a mix that sounds reasonable on headphones as well as speakers, so it's not as if headphones are a forgotten realm.  Most mixing engineers are resigned to the fact that their recordings will most frequently be heard on iPod earbuds.



You are correct: modern pop is often produced to contain only mild levels of spatial distortion (it depends on the producers), but is that all you want to listen to? Weak cross-feed might improve even these recordings, taming occasional bursts of spatial distortion. Spatial-distortion-free recordings do exist, and I do listen to them with cross-feed off. Knowing how much cross-feed is needed is important for best results (highest fidelity). Sometimes you don't need it at all, but most of the time spatial distortion is there, even in modern pop intended for headphones.



alex_aiwa_USA said:


> I'm guessing here, but crossfeed should be minimally necessary, or not necessary at all, for recordings where the soundstage is recorded using a small stereo or binaural setup.  Hopefully, a stereo image recorded this way should still sound somewhat natural in headphones, as the recording will still contain the intact stereo image / reverb of the space where the recording was made.



Correct. There are microphone setups that cause very little spatial distortion (such as OSS, ORTF and XY). Binaural recordings should be listened to with cross-feed off, but they are pretty rare. Don't cross-feed when there is no spatial distortion to remove! Cross-feed as little as possible to get rid of spatial distortion. It's like adjusting the colors on a TV set. You don't want colors pale or over-saturated. You want natural colors. Proper cross-feed means the sound contains just the right amount of spatial information (channel difference).

However, simple stereo setups (such as AB and Blumlein) can produce significant spatial distortion and strong cross-feed is needed to fix things.



alex_aiwa_USA said:


> When you record with more than 2-3 mics and you're placing instruments artificially and building the soundstage from scratch (this is really a lot more common), the listener may or may not want crossfeed. Hard panning is generally only used as a special effect now, but if you're listening to old Beatles records, you will not hear the recording as intended without crossfeed, because at the time listening equipment was a lot more limited.  On the other hand, a fully synthesized electronic music track will not have a "natural" sound no matter what you listen with, as there never was one in the first place.



I pretty much agree with this. However, electronic music can have a very natural-sounding spatial image thanks to advanced plugins that simulate acoustics. Spatial distortion is spatial distortion no matter the nature of the music, so fully synthesized music needs cross-feed just as much as totally acoustic music recorded in a real room. If there is spatial distortion, you need to fix it with cross-feed, be it jazz, EDM, classical or rock.



alex_aiwa_USA said:


> At the end of the day, if it sounds good, it is good, right?  It is usually impossible to know if there is a "more correct" way to play any given recording, but it's not hard to decide whether you like a given effect or not.



My take is that spatial distortion free sound is "correct", because it makes most sense considering how human hearing works and to me it sounds best (natural, realistic, fatigue-free, precise and detailed).


----------



## Zapp_Fan

71 dB said:


> As for the lower fidelity, are you really thinking that, or is it an assumption that all changes to sound are for the worse? Is noise reduction always bad?



I actually don't take a good/bad stance on this, for me the only ultimate truth in audio is "if it sounds good, it is good".  Now, the definition of "good" is an exercise left to the reader, but... when I say 'lower fidelity' I only mean this in the most technical sense, as in, the signal has been altered somehow and is a less-exact copy of the original.  

Anyway, I think that crossfeed is probably a very reasonable thing to do, to the extent that instruments in a mix are over-panned to create a wide or more spatialized image on loudspeakers, and therefore sound odd on headphones - which probably applies to a lot of recordings.  Probably my personal discomfort comes from the fact that it is difficult to characterize a perfect loudspeaker listening setup, therefore it is equally difficult to characterize a perfect crossfeed implementation.  For example, should you just do free-air filtering of high frequencies and 3 feet worth of delay?  Or do you also add early reflections from a virtual room?  If so, do you also go as far as adding actual reverb?  Even 50ms of reverb can really change how things sound... is it for the better?  It's a can of worms I'd prefer not to have to think about 
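
For scale (my own back-of-the-envelope numbers, not from the post): "3 feet worth of delay" comes out to well over 2 ms of acoustic travel time, several times the roughly 0.5 ms interaural delay that the DSP Manager excerpt quoted at the top of the thread aims for, which is part of why the design space feels like a can of worms. A quick sanity check:

```python
# Rough delay figures for crossfeed design (straight-line approximation,
# ignoring diffraction around the head).
SPEED_OF_SOUND_M_S = 343.0   # dry air at about 20 degrees C
METERS_PER_FOOT = 0.3048

def path_delay_ms(distance_m):
    """Milliseconds for sound to travel distance_m through air."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

three_feet_ms = path_delay_ms(3 * METERS_PER_FOOT)  # extra path from a distant speaker
head_ms = path_delay_ms(0.215)                      # ear-to-ear distance of a typical head
print(round(three_feet_ms, 2), round(head_ms, 2))   # about 2.67 and 0.63
```

So a crossfeed built around speaker-distance delays and one built around head-sized interaural delays are solving rather different problems, before reverb even enters the picture.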

Again, this is all just my personal feeling and I don't mean to imply anything negative about using crossfeed.  



bigshot said:


> The only headphones I've seen in the studios I've worked in were the beaters in the booth used for playback. I never saw an engineer put headphones on. We would usually mix on the big full range monitors, then do a check on small speakers to make sure it worked well with cheaper systems. We never checked with headphones.



I haven't worked in proper studios, but I have mixed a couple of albums... even in my limited experience, it's very true that 95% of mixing takes place on speakers, probably more. I made it a point to check on headphones (notably the crap Apple earbuds), but I will also admit that tweaking spatialization on headphones was not a priority at all.  It was a cursory check just to make sure nothing sounded totally bizarre or got lost.


----------



## 71 dB

alex_aiwa_USA said:


> I actually don't take a good/bad stance on this, for me the only ultimate truth in audio is "if it sounds good, it is good".  Now, the definition of "good" is an exercise left to the reader, but... when I say 'lower fidelity' I only mean this in the most technical sense, as in, the signal has been altered somehow and is a less-exact copy of the original.



What is the "original" signal? Isn't the signal from the microphone the "original" one? That signal is altered in many ways in music production. Finally you buy a CD containing that guitar on a track. You can name the CD the original signal, but if you listen to it with loudspeakers, that signal is altered pretty heavily by the acoustics of your room including acoustic cross-feed. When you listen to the same CD with headphones and use cross-feed, the signal is altered less. 



alex_aiwa_USA said:


> Anyway, I think that crossfeed is probably a very reasonable thing to do, to the extent that instruments in a mix are over-panned to create a wide or more spatialized image on loudspeakers, and therefore sound odd on headphones - which probably applies to a lot of recordings.  Probably my personal discomfort comes from the fact that it is difficult to characterize a perfect loudspeaker listening setup, therefore it is equally difficult to characterize a perfect crossfeed implementation.  For example, should you just do free-air filtering of high frequencies and 3 feet worth of delay?  Or do you also add early reflections from a virtual room?  If so, do you also go as far as adding actual reverb?  Even 50ms of reverb can really change how things sound... is it for the better?  It's a can of worms I'd prefer not to have to think about
> 
> Again, this is all just my personal feeling and I don't mean to imply anything negative about using crossfeed.



What I do is simple, straightforward cross-feed using passive circuits based on Linkwitz-Cmoy designs. That removes spatial distortion and ensures the sound is "natural". Extra tinkering might improve the sound even more, or it might mess things up. I don't feel the need to do extra things, because the sound is natural, detailed and pleasing. It just works for me.


----------



## Zapp_Fan

71 dB said:


> You can name the CD the original signal, but if you listen to it with loudspeakers, that signal is altered pretty heavily by the acoustics of your room including acoustic cross-feed.


Yes, I basically consider the recorded media (say a CD) to be the "original signal", which I realize is significantly distorted by basically all transducers and real-world listening scenarios. To be honest, I have a very loudspeaker-centric mentality.  My view has been that if you can eliminate distortion everywhere from your source to your loudspeaker, and you have good acoustic treatment in your space, then you have something approximating an ideal listening setup.  If you take that view further, headphones are REALLY ideal because they present ONLY direct signal to the ear with no acoustic crossfeed.

However, I had never considered headphone listening itself as inherently creating a form of distortion.  Viewed that way, you almost need crossfeed. Either that, or you accept an unnatural presentation of the audio to each ear (i.e. each ear treated separately) as valid... problematic.

Good discussion!


----------



## bigshot

There really isn't any reason to use cross-feed in a speaker setup. But you might want to use EQ, or various DSPs to improve the natural room ambience of either the recording or the listening room, or to re-channel stereo to multichannel sound. It's rare, but I occasionally run across recordings that require a little compression because the dynamics are too wide to listen to comfortably, or a little peak expansion if they are too compressed.

The "purity" theory only gets you so far. If you want music to really sound good, you might need to alter it to suit your room and equipment and your ears.


----------



## pinnahertz

bigshot said:


> The only headphones I've seen in the studios I've worked in were the beaters in the booth used for playback. I never saw an engineer put headphones on. We would usually mix on the big full range monitors, then do a check on small speakers to make sure it worked well with cheaper systems. We never checked with headphones.


+1.  Never used headphones for a studio mix.  No mix I'm aware of other than a binaural recording ever considered headphones.


----------



## 71 dB

alex_aiwa_USA said:


> Yes, I basically consider the recorded media (say a CD) to be the "original signal", which I realize is significantly distorted by basically all transducers and real-world listening scenarios. To be honest, I have a very loudspeaker-centric mentality.  My view has been that if you can eliminate distortion everywhere from your source to your loudspeaker, and you have good acoustic treatment in your space, then you have something approximating an ideal listening setup.  If you take that view further, headphones are REALLY ideal because they present ONLY direct signal to the ear with no acoustic crossfeed.



The problem is that the "original signal" such as a CD isn't problem-free. It is flawed, and I don't mean because the music on it sucks. The problem is that an arbitrary 2-channel signal doesn't match human hearing. Audio formats allow "original signals" to exist in a larger signal space than the one human hearing expects them to occupy. The correlation between the left and right channels can be anything between -1 and 1. In other words, you can have spatial information that doesn't exist for our hearing, because sounds heard in real environments can't take just any correlation value between -1 and 1. For low frequencies the correlation between the left and right ear is always very high, near 1 if not exactly 1. It can't be negative, not even zero. I can write the date January 32, but no such day exists. Similarly you can have crazy out-of-phase bass on a CD: signals that, as such, don't make sense to our hearing.
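
The correlation point is easy to check numerically. A small sketch (my own illustration, not from the post): a dual-mono bass tone sits at correlation +1, while an out-of-phase copy sits at -1, a value the two ears never receive at low frequencies in a real room.

```python
import numpy as np

def channel_correlation(left, right):
    """Pearson correlation between the two channels of a stereo signal."""
    l = left - left.mean()
    r = right - right.mean()
    return float(np.dot(l, r) / (np.linalg.norm(l) * np.linalg.norm(r)))

sr = 44100
t = np.arange(sr) / sr                  # one second of samples
bass = np.sin(2 * np.pi * 100 * t)      # 100 Hz tone

corr_mono = channel_correlation(bass, bass)           # same signal both channels -> +1
corr_out_of_phase = channel_correlation(bass, -bass)  # "crazy out-of-phase bass" -> -1
```

A recording is free to put anything in that [-1, 1] range, but real binaural hearing at low frequencies only ever sees the high end of it.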

Luckily this problem of original signals is pretty easily fixed. Loudspeakers fix it using acoustic cross-feed. If you use headphones, you don't have acoustic cross-feed, so you need to do electric cross-feed, or if the CD happens to be produced for headphones (binaural/monophonic etc. recording), you don't need to do anything, because there is nothing to fix.
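
The electric cross-feed described here can be sketched digitally in a few lines. This is only a toy version: the delay and attenuation figures come from the DSP Manager excerpt quoted in the opening post (~0.5 ms delay, low-pass filtering, about 12 dB attenuation), while the 700 Hz cutoff and the one-pole filter are my own simplifications; real designs like the Linkwitz/Cmoy circuits use tuned shelving networks.

```python
import numpy as np

def crossfeed(left, right, sr=44100, delay_ms=0.5, atten_db=-12.0, cutoff_hz=700.0):
    """Add a delayed, low-pass-filtered copy of the opposite channel to each channel."""
    delay = int(round(sr * delay_ms / 1000.0))
    gain = 10.0 ** (atten_db / 20.0)
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)  # one-pole low-pass coefficient

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = (1.0 - a) * s + a * acc
            y[i] = acc
        return y

    def delayed(x):
        return np.concatenate([np.zeros(delay), x[:len(x) - delay]])

    out_left = left + gain * delayed(lowpass(right))
    out_right = right + gain * delayed(lowpass(left))
    return out_left, out_right
```

Feed a hard-panned signal through this and the "silent" channel picks up some attenuated, delayed low-frequency content, which is essentially all a basic cross-feeder does.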



alex_aiwa_USA said:


> However, I had never considered headphone listening itself as inherently creating a form of distortion.  Viewed that way, you almost need crossfeed. Either that, or you accept an unnatural presentation of the audio to each ear (i.e. each ear treated separately) as valid... problematic.
> 
> Good discussion!



It happened to me too. Before 2012 or so I didn't realize there's a fundamental problem in headphone listening. I can still remember the moment when I suddenly realized the problem, because it was like a child finding out Santa Claus doesn't exist. You have it correct my friend, headphone listening requires proper cross-feed unless you accept spatial distortion. For me this isn't a huge problem, because I can design and construct cheap cross-feeders for myself. When I rip my CDs for my portable player, I pre-cross-feed the music in Audacity using a simple Nyquist plugin I wrote before exporting to mp3 files for the portable player. Cross-feed has opened a completely new world for me, exposing how great headphone listening can be when done right.

I'm glad you find this interesting.


----------



## bigshot (Sep 19, 2017)

The problem with the way a lot of people think about speakers is that they think of them like headphones- independent, isolated producers of sound for ears to hear. Speakers do a lot more than that. In fact, the sound of the room is just as important as the sound of the speakers. The goal of room treatment isn't to eliminate the sound of the room. That would be like building an anechoic chamber. The goal is to eliminate *unwanted* reflections... the kind that interfere with the sound. There are plenty of desirable things that rooms do that you don't want to eliminate. The room is what allows the sound of the music to bloom and fill the space. Headphones omit that part of the sound. Secondly, speakers don't just produce sound for ears to hear. There's a kinesthetic effect to the bass that you feel in your body. Without that, bass doesn't have the same impact, and headphones just can't do that. Thirdly, speakers exist in space. They provide an anchored soundstage and directional location to the sound. Headphones are one dimensional- a straight line through the middle of your head. Albums are generally mixed to create a directional speaker soundstage that exists in front of you in space. Headphones can't reproduce that either.

Headphones are great for hearing tiny details. If you shove your ear right up against your speaker, you can get that too. Headphones are also great at allowing you to listen to music without disturbing the people around you. Speakers aren't good for that at all. But even with cross feed and the best headphones in the world, headphones don't sound as natural as speakers. Crossfeed only makes the sound less directional along that one dimensional line down the middle of the head. It doesn't do anything to add the bloom of the room, produce kinesthetic thump or create a dimensional soundstage in space.


----------



## Zapp_Fan

71 dB said:


> Similarly you can have crazy out-of-phase bass on a CD: signals that, as such, don't make sense to our hearing.



True, but this is not necessarily a problem.  Unnatural or "special effect" type acoustic reproductions can still be artistically useful.  Ask Ryoji Ikeda, I am sure he's not concerned with what the human auditory system is equipped to process naturally.   I get what you're saying though, 2-channel reproduction has some real, inherent divergences from real life. 




bigshot said:


> In fact, the sound of the room is just as important as the sound of the speakers. The goal of room treatment isn't to eliminate the sound of the room. That would be like building an anechoic chamber. The goal is to eliminate *unwanted* reflections... the kind that interfere with the sound. There are plenty of desirable things that rooms do that you don't want to eliminate.


I mostly agree. It's very true that room influence is huge - in many rooms, more sound will reach the ear via reflection than directly from the loudspeaker.  I would argue that one should strive to eliminate reflections up to the extent that a really well-treated mixing studio does, at least, in an ideal case.  Having zero reflected sound is bad, but the amount of reverberation you get in your typical untreated, acoustically unfavorable room is arguably just as bad. 



bigshot said:


> Headphones are great for hearing tiny details. If you shove your ear right up against your speaker, you can get that too. Headphones are also great at allowing you to listen to music without disturbing the people around you. Speakers aren't good for that at all. But even with cross feed and the best headphones in the world, headphones don't sound as natural as speakers. Crossfeed only makes the sound less directional along that one dimensional line down the middle of the head. It doesn't do anything to add the bloom of the room, produce kinesthetic thump or create a dimensional soundstage in space.


Can't really argue with any of that.


----------



## 71 dB

alex_aiwa_USA said:


> True, but this is not necessarily a problem.  Unnatural or "special effect" type acoustic reproductions can still be artistically useful.  Ask Ryoji Ikeda, I am sure he's not concerned with what the human auditory system is equipped to process naturally.   I get what you're saying though, 2-channel reproduction has some real, inherent divergences from real life.



I don't know Ryoji Ikeda's art, but if it is based on spatial distortion then one can listen to his music cross-feed off, just like binaural stuff etc. Most of the music in the world as far as I know is not based on spatial distortion. Mozart hardly had headphones in mind while writing his Requiem…


----------



## Zapp_Fan

71 dB said:


> I don't know Ryoji Ikeda's art, but if it is based on spatial distortion then one can listen to his music cross-feed off, just like binaural stuff etc. Most of the music in the world as far as I know is not based on spatial distortion. Mozart hardly had headphones in mind while writing his Requiem…



It's basically a bunch of hard-panned beeps and tones, I guess the most normal name for it is 'noise music':   ... he uses little to no spatialization in his mixes, either, making any notion of naturalistic listening sort of beside the point... although I realize it's an extreme case


----------



## bigshot

alex_aiwa_USA said:


> I would argue that one should strive to eliminate reflections up to the extent that a really well-treated mixing studio does, at least, in an ideal case.  Having zero reflected sound is bad, but the amount of reverberation you get in your typical untreated, acoustically unfavorable room is arguably just as bad.



The purpose of a recording studio is completely different than the purpose of a listening room in a home. A studio depends on calibration and precise control of every element to be able to consistently capture a performance and build a mix from the captured sound. They want to isolate the sound so they can balance it and finesse the mix and not get extraneous ambience added to the recording that isn't intended. So they record in a sound proofed booth and mix in a carefully calibrated and treated mixing stage.

A listening room in a home doesn't need that kind of isolation and precision because the purpose is different- to present recordings of all types in a pleasing manner. The most natural sound of all is the sound of the real room that you inhabit and are familiar with. If that sound environment is complementary to music, the added ambience adds a level of natural presence that isn't in the recording itself. It allows the sound of the recording to bloom and inhabit space. If you want to isolate your listening experience to just the recording and nothing but the recording, headphones are fine for that. But that isn't the intent that the engineers are trying to create. They want the music to interact with your room and fill it with sound.

That said, there are good rooms and bad rooms. Good ones add euphonic ambience and bad ones have acoustics that muddle the sound with primary reflections or cancellation. I've been in a lot of fantastic recording studios, and although their equipment was top notch and the acoustics of the mixing stage were just about perfect for recording, it isn't necessarily the ideal for what a home system should sound like. Try as engineers might to be consistent, there is still a huge variation in the sound of different types of recordings. The listener has to abandon calibration at some point and create a balance for his particular room and circumstances.

I know it's common among audiophiles to cite the old adage "I want the sound they heard in the studio when they created the recording." But that is easier to say than to achieve. And I'm not convinced that even if you achieve it that you will be getting the absolute best sound that way. We could record symphony orchestras section by section in sound proofed booths... violins in one booth, brass in another, percussion in another... but the result would be a total gnocchi- a mulligan stew of sound. Take that same orchestra and put it in the Berlin Philharmonie and add a few microphones to capture the sound of the hall and it sounds great. Multichannel sound allows room ambiences to be altered. A living room can sound exactly like a philharmonic hall or gothic cathedral. That is the "fourth dimension" of sound that goes beyond a flat stereo soundstage and begins to create a dimensional sound field. Pursuing that is a lot more effective than pursuing the ideal of creating a duplicate of a recording studio in your home.


----------



## 71 dB

alex_aiwa_USA said:


> It's basically a bunch of hard-panned beeps and tones, I guess the most normal name for it is 'noise music':   ... he uses little to no spatialization in his mixes, either, making any notion of naturalistic listening sort of beside the point... although I realize it's an extreme case




Thanks! I think this kind of music benefits from cross-feed the most. This stuff sounds pretty awful without cross-feed. With cross-feed it becomes pleasant, just quite boring imo. I listen to Autechre when I want to listen to something of this sort.


----------



## Zapp_Fan

bigshot said:


> The purpose of a recording studio is completely different than the purpose of a listening room in a home. A studio depends on calibration and precise control of every element to be able to consistently capture a performance and build a mix from the captured sound. They want to isolate the sound so they can balance it and finesse the mix and not get extraneous ambience added to the recording that isn't intended. So they record in a sound proofed booth and mix in a carefully calibrated and treated mixing stage.



I mean, yes, nobody should listen in a recording booth, much too dead, it sounds bad.  But, a "just live enough" room with really flat loudspeakers is my idea of an 'ideal' listening setup.  Something like what was heard when the recording was mixed (as opposed to when parts were recorded).  Now, I concede that's not the most enjoyable setup possible for everyone, it's more like it's comforting for me to know the music has been minimally interfered with or altered.



71 dB said:


> Thanks! I think this kind of music benefits from cross-feed the most. This stuff sounds pretty awful without cross-feed. With cross-feed it becomes pleasant, just quite boring imo. I listen to Autechre when I want to listen to something of this sort.



I would take the other side of that.  For example, one of his albums is actually called "headphonics".  It features a good deal of hard-panned, simple tones.  It's easy to imagine that he intends the listener to sit through a lot of totally unnatural and challenging tones, rather than turn it into a more natural listening experience - and the title definitely suggests headphones as the preferred listening equipment!


----------



## 71 dB

alex_aiwa_USA said:


> I would take the other side of that.  For example, one of his albums is actually called "headphonics".  It features a good deal of hard-panned, simple tones.  It's easy to imagine that he intends the listener to sit through a lot of totally unnatural and challenging tones, rather than turn it into a more natural listening experience - and the title definitely suggests headphones as the preferred listening equipment!



Well, these are opinions and your opinion is just as good as anyone else's. I don't know his art apart from a couple of tracks I listened to thanks to your youtube link. Maybe he is a genius or an expert on spatial hearing, but generally speaking people who limit their "spatial expressions" to hard amplitude-panning tricks are not that wise/advanced on the issue. Real panning is about a careful combination of amplitude, phase and spectral tweaks.

Personally, I choose not to sit through a lot of totally unnatural (but not so challenging, from what I heard: short bursts of noise or sinusoids are hardly "challenging" in the 21st century) tones. There is too much great music in the world competing for my listening time to even consider wasting it on this. Sorry.


----------



## bigshot (Sep 20, 2017)

alex_aiwa_USA said:


> Now, I concede that's not the most enjoyable setup possible for everyone, it's more like it's comforting for me to know the music has been minimally interfered with or altered.



Well I can't speak to your comfort level. All I can speak about is sound. I don't lie in bed at night worrying if my room is altering my sound. I just listen and it sounds good. If there's a problem with the sound, I try to fix it. When I do that, I'm trying to make *my* room sound as good as it can. I'm not trying to guess what the mixer's room was like and shoehorn my room into conforming to that guess. I don't think I'm uncommon. I think a lot of audiophiles talk about not interfering with sound, but they don't have the slightest idea how to achieve that. Neither do I to be honest. Luckily, I don't even try. I just focus on getting great sound. I'm often listening to music that is over 50 years old. I don't want to hear it unaltered the way the original engineers heard it. I live in the 21st century with a lot of fabulous technology. I expect to hear it better than they did.


----------



## pinnahertz

alex_aiwa_USA said:


> Audio pros know that their audience doesn't typically have nearfield monitors. Engineers *usually* make as few assumptions as possible about the ultimate listening conditions, meaning they want it to sound good on headphones and speakers alike.  In fact, it is seen as a major failure if your mix only sounds good on studio monitors.





alex_aiwa_USA said:


> I haven't worked in proper studios, but have mixed a couple albums... even in my limited experience, it's very true that 95% of mixing takes place on speakers, probably more. I made it a point to check on headphones, (notably the crap Apple earbuds) but I will also admit that tweaking spatialization on headphones was not a priority at all.  It was a cursory check just to make sure nothing sounded totally bizarre or got lost.


It seems to me the two above quotes are somewhat at odds with each other.  Which is it, we mix for speakers and headphones, or we mix for speakers primarily then do a cursory check for headphones?  (It's the latter, BTW).

I'm kinda wishing there were fewer statements and assumptions about what audio pros do made by people who don't seem to actually know.


----------



## pinnahertz

71 dB said:


> The problem is that the "original signal" such as a CD isn't problem-free. It is flawed and I don't mean because the music on it sucks. The problem is that an arbitrary 2-channel signal doesn't match human hearing. Audio formats allow "original signals" to exist in larger signal spaces than the signal space of human hearing expects them to be. The correlation between left and right channel can be anything between -1 and 1. In other words you can have spatial information that doesn't exist for our hearing, because sounds heard in real environments just can't have an arbitrary correlation between -1 and 1. For low frequencies the correlation between left and right ear is always very high, near 1 if not 1. It can't be negative, not even zero. I can write the date January 32, but such a day does not exist. Similarly you can have crazy out-of-phase bass on a CD, signals that as such don't make sense for our hearing.
> 
> Luckily this problem of original signals is pretty easily fixed. Loudspeakers fix it using acoustic cross-feed. If you use headphones, you don't have acoustic cross-feed, so you need to do electric cross-feed, or if the CD happens to be produced for headphones (binaural/monophonic etc. recording), you don't need to do anything, because there is nothing to fix.
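The quoted correlation argument can be sketched numerically. The following is an illustrative sketch (not any poster's actual code): two hard-panned, unrelated tones have zero inter-channel correlation, and a simple -12 dB crossfeed mix pulls that correlation toward the positive values the quote says our hearing expects at low frequencies.

```python
import math

def correlation(left, right):
    """Zero-lag normalized cross-correlation of two channels, in [-1, +1]."""
    num = sum(l * r for l, r in zip(left, right))
    den = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    return num / den if den else 0.0

def crossfeed(left, right, gain=0.25):
    """Mix an attenuated (-12 dB is roughly 0.25) copy of the opposite channel into each side."""
    out_l = [l + gain * r for l, r in zip(left, right)]
    out_r = [r + gain * l for l, r in zip(left, right)]
    return out_l, out_r

# Two hard-panned "instruments", chosen as whole-cycle sines so they are exactly orthogonal:
N = 1000
left = [math.sin(2 * math.pi * 10 * t / N) for t in range(N)]
right = [math.sin(2 * math.pi * 20 * t / N) for t in range(N)]

print(correlation(left, right))              # ~0.0: decorrelated channels, unnatural for low-frequency hearing
print(correlation(*crossfeed(left, right)))  # ~0.47: pulled toward the positive range our ears expect
```

With gain g and orthogonal channels of equal energy, the post-crossfeed correlation works out to 2g/(1+g²), about 0.47 for g = 0.25.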


The idea that you can fix acoustic crosstalk by using crossfeed in loudspeakers by design or pre-processing has been a long-time study of mine, I've designed and tested several systems.  I have to disagree that the problem is easily fixed, though.  

Basically, what you can do is modify to some extent how sound from those speakers is localized by partially cancelling the direct signal from speakers that form localization cues we would normally use to localize the speakers themselves.  However, you're not fixing acoustic crosstalk, it's still all over the place.  But the cancellation only works in one very specific listening position, and is extremely fragile, being affected by room acoustics with reflective surfaces (pretty much a fact of life), and head position, especially when cancellation is the result of a signal processor and not speaker design.  All the loudspeaker crossfeed in the world can't compensate for unknown reflections.  But worse, what you get is something entirely new, not heard in the mixing environment, or the original acoustic environment either.  It no more represents "reality" than any other perspective, though it may be more pleasing to some.

As for headphone crossfeed, the Linkwitz circuit is just a somewhat frequency-selective reduction in separation, but that doesn't address the rather prominent headphone problem of mid-head localization, which is much harder to deal with.  But again, you're creating something new that was neither heard nor planned for before.  It might be pleasant, it might not, or anywhere between.  I personally would be frustrated with a crossfeed on/off switch, I'd need it to be variable.  In fact, one system I designed for this monitored the channel separation of the actual signals and changed crossfeed dynamically.  Worked ok, a definite improvement over a switch.


71 dB said:


> It happened to me too. Before 2012 or so I didn't realize there's a fundamental problem in headphone listening. I can still remember the moment when I suddenly realized the problem, because it was like a child finding out Santa Claus doesn't exist. You have it correct my friend, headphone listening requires proper cross-feed unless you accept spatial distortion.


I'm not sure I'd agree, because headphones always have a unique spatial presentation, which is part of the experience.  In fact, it used to be promoted!  Prog rock radio in the 1970s used to have "headphone hour" programming where widely separated mixes, whip-panned sounds, and lots of wacky effects were desirable.  We grew up with headphones being different, and for that reason the modification of the spatial perspective is more accepted.   I don't like applying the term "spatial distortion" to the headphone perspective, because distortion implies a reference undistorted original, but in recorded music that never exists.  Even the original mix in the original studio is synthetic.  Yes, headphones present differently, but neither perspective is actually undistorted.


71 dB said:


> For me this isn't a huge problem, because I can design and construct cheap cross-feeders for myself. When I rip my CDs for my portable player, I pre-cross-feed the music in Audacity using a simple Nyquist plugin I wrote before exporting to mp3 files for the portable player. Cross-feed has opened a completely new world for me, exposing how great headphone listening can be when done right.
> 
> I'm glad you find this interesting.


I was disappointed when I looked up some of the crossfeed models.  So basic, so crude.


----------



## Niouke

My amateur music is mixed for headphones as I don't have a decent monitoring system 

Jokes aside if I wanted to add crossfeed to my spotify music, what should I use on android, and windows?
Good read as usual.


----------



## 71 dB

pinnahertz said:


> The idea that you can fix acoustic crosstalk by using crossfeed in loudspeakers by design or pre-processing has been a long-time study of mine, I've designed and tested several systems.  I have to disagree that the problem is easily fixed, though.



Headphone cross-feed doesn't address any _acoustic_ problem, so I am not sure what the relevance here is.



pinnahertz said:


> Basically, what you can do is modify to some extent how sound from those speakers is localized by partially cancelling the direct signal from speakers that form localization cues we would normally use to localize the speakers themselves.  However, you're not fixing acoustic crosstalk, it's still all over the place.  But the cancellation only works in one very specific listening position, and is extremely fragile, being affected by room acoustics with reflective surfaces (pretty much a fact of life), and head position, especially when cancellation is the result of a signal processor and not speaker design.  All the loudspeaker crossfeed in the world can't compensate for unknown reflections.  But worse, what you get is something entirely new, not heard in the mixing environment, or the original acoustic environment either.  It no more represents "reality" than any other perspective, though it may be more pleasing to some.



Cancellation of loudspeaker crosstalk as a concept is familiar to me. I studied acoustics in the university and worked in the acoustics lab for almost a decade. However, I am not sure why you talk about loudspeaker cross-talk cancellation in a thread about cross-feed in headphone listening. Personally I am not that worried about loudspeaker cross-talk. It is a "natural" acoustic phenomenon that doesn't create unnatural signals to my ears. By making the listening room more absorbent and using more directional loudspeakers one can reduce cross-talk, if that is an issue. It will make the loudspeakers sound more headphone-like, but isn't it easier to just use headphones if that's what you want?

I think you confuse cross-talk and cross-feed in some places. 



pinnahertz said:


> As for headphone crossfeed, the Linkwitz circuit is just a somewhat frequency-selective reduction in separation, but that doesn't address the rather prominent headphone problem of mid-head localization, which is much harder to deal with.


The Linkwitz circuit, like pretty much all cross-feeders, is frequency selective because that is how our spatial hearing works. Our head is a frequency-selective barrier for sound. Cross-feeders also delay the cross-fed signals, typically about 0.2-0.3 ms, to simulate the delay caused by loudspeakers at ~30° angles. The delay is conveniently created by the low-pass filter. Cross-feeders are simple circuits, but they miraculously fix the problem, spatial distortion. Mid-head localization is partially fixed and depends on the recording itself. Acoustic recordings done in real acoustics such as classical music can sound pretty amazing after proper cross-feed, but not as amazing as real binaural recordings.
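As a rough illustration of the scheme described above (a minimal sketch: the `FC` and `GAIN` values are assumptions for illustration, not the actual Linkwitz or Nyquist-plugin component values), a one-pole lowpass at about 650 Hz contributes roughly 1/(2π·650) ≈ 0.24 ms of low-frequency group delay on the cross path:

```python
import math

SR = 44100   # sample rate, Hz
FC = 650.0   # assumed cutoff: a one-pole lowpass has ~1/(2*pi*FC) ≈ 0.24 ms group delay at low frequencies
GAIN = 0.3   # cross-path level; illustrative, not a measured circuit value

def one_pole_lowpass(samples, fc=FC, sr=SR):
    """First-order IIR lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * fc / sr)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def crossfeed(left, right):
    """Add a lowpassed, attenuated copy of the opposite channel to each side."""
    fl, fr = one_pole_lowpass(left), one_pole_lowpass(right)
    out_l = [l + GAIN * r for l, r in zip(left, fr)]
    out_r = [r + GAIN * l for r, l in zip(right, fl)]
    return out_l, out_r
```

Feeding an impulse into the left channel only shows the behaviour: the left output is untouched, while the right output receives a smeared, attenuated copy from the lowpassed cross path.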



pinnahertz said:


> But again, you're creating something new that was neither heard nor planned for before.


Cross-feed removes spatial distortion by scaling spatial information into the "value-space" our brain expects it to be. Not using cross-feed creates something that was not planned, spatial distortion. 



pinnahertz said:


> It might be pleasant, it might not, or anywhere between.


It sounds natural, realistic and fatigue-free. Drums sound like real drums in a room, not fake plastic toys. Short transient sounds are located in the sound image with pin-point accuracy instead of spreading all over the place because the brain doesn't know how to interpret crazy spatial cues. Cross-feed doesn't remove details, it removes spatial distortion revealing the tiny details of the music itself. If that's not desirable then I don't know what is.



pinnahertz said:


> I personally would be frustrated with a crossfeed on/off switch, I'd need it to be variable.


Same here. That's why my DIY cross-feed headphone adapter has got 6 cross-feed levels (+ off of course).



pinnahertz said:


> In fact, one system I designed for this monitored the channel separation of the actual signals and changed crossfeed dynamically.  Worked ok, a definite improvement over a switch.



Interesting. Did you limit the speed at which the cross-feed level changes? What are the benefits of dynamic cross-feed compared to constant cross-feed in your opinion? Any downsides?



pinnahertz said:


> I'm not sure I'd agree, because headphones always have a unique spatial presentation, which is part of the experience.  In fact, it used to be promoted!  Prog rock radio in the 1970s used to have "headphone hour" programming where widely separated mixes, whip-panned sounds, and lots of wacky effects were desirable.



Since when have marketing people understood anything about audio quality? I'm not going to suffer spatial distortion just because of some lunatic radio shows decades ago, when people didn't know what to do with stereo sound. Such wacky effects are childish.



pinnahertz said:


> We grew up with headphones being different, and for that reason the modification of the spatial perspective is more accepted. I don't like applying the term "spatial distortion" to the headphone perspective, because distortion implies a reference undistorted original, but in recorded music that never exists.  Even the original mix in the original studio is synthetic.  Yes, headphones present differently, but neither perspective is actually undistorted.



What you experienced in your childhood doesn't change scientific facts. We all have to admit sometimes that the way we have done things or thought about things has been wrong. That's how we learn, accepting new understanding. I listened to music wrong when young. I'm probably still doing something wrong, but hopefully just a tiny little bit. Cross-feed was a huge step for me. I don't believe spatial distortion was ever intended with headphones. It's an accident of stereo sound. In the late 50s and 60s people were so excited about stereo sound and the possibility to have huge channel separation that they didn't think about the consequences. It's something people simply ignore, not realizing how it destroys the potential of headphone listening. People get used to things and when somebody questions things they are in denial. Sad.

Spatial distortion happens in our brain, but it is just as real for the listener, just as pain is real for a person. You can listen to headphones the way you want, that's your business, but I feel _responsible_ to educate people about spatial distortion and how to significantly enhance headphone listening using cross-feed. I have science on my side. Open-minded people do get what I say.

If the sound from a cowbell spreads all over the place instead of being in one position in the sound image then I am going to call it spatial distortion. Spatial information gets distorted. A cowbell is not all around your head. It's on the left or right or in the center. It's in one place and it sounds like a real cowbell if you hear it like that, without spatial distortion. The thing exists and people should be educated about it.

Also, don't blame cross-feed for some crappy prog rock sounding weird because some anarchistic sound engineers using drugs liked to play with the knobs in the studio. Cross-feed makes miracles, but not miracles big enough to transform a badly produced rock album of the 70s into gold. Listen to some well recorded classical music (e.g. SACD by BIS label) with proper cross-feed and then you'll hear how good the result is.



pinnahertz said:


> I was disappointed when I looked up some of the crossfeed models.  So basic, so crude.


The market for headphone amps with cross-feed is miserable. Headphone amps are expensive, only a few models have cross-feed, and then only one or two levels are available. There's the SPL Phonitor, but that's very expensive. I recommend DIY cross-feed headphone adapters. Having a DIY cross-feeder between your source and headphone amp is another option.


----------



## Strangelove424 (Sep 21, 2017)

Niouke said:


> My amateur music is mixed for headphones as I don't have a decent monitoring system
> 
> Jokes aside if I wanted to add crossfeed to my spotify music, what should I use on android, and windows?
> Good read as usual.



I found a simulation of Meier "natural" crossfeed filter for foobar that sounds very good to me. That will work for Windows, not sure what to tell you about Android.

edit: just noticed you were specific to Spotify, so foobar plugins will be of no use, but I decided to keep the link here in case you give it a shot with Foobar. 

Plugin:
http://www.foobar2000.org/components/view/foo_dsp_meiercf

Explanation of Meier Crossfeed:
http://www.meier-audio.homepage.t-online.de/crossfeed.htm



pinnahertz said:


> I personally would be frustrated with a crossfeed on/off switch, I'd need it to be variable.  In fact, one system I designed for this monitored the channel separation of the actual signals and changed crossfeed dynamically.  Worked ok, a definite improvement over a switch.



You are essentially describing Meier crossfeed.



71 dB said:


> The market for headphone amps with cross-feed is miserable. Headphone amps are expensive, only a few models have cross-feed, and then only one or two levels are available. There's the SPL Phonitor, but that's very expensive. I recommend DIY cross-feed headphone adapters. Having a DIY cross-feeder between your source and headphone amp is another option.



Don't overlook DSP.


----------



## 71 dB

Strangelove424 said:


> You are essentially describing Meier cross feed.


Cross-feeders do spatialize sound depending on the incoming channel separation and Meier is no exception, but that's not dynamic cross-feed. Even Meier has a constant cross-feed level.

The difference between Meier (an "H-topology" cross-feeder) and Linkwitz-Cmoy (an "X-topology" cross-feeder) is that Meier distributes sound according to the channel difference while Linkwitz-Cmoy emphasizes 30° angles simulating loudspeaker listening. Meier gives a more vivid/aggressive/wide sound than Linkwitz-Cmoy, which is more calm and relaxed.
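One way to read this H-vs-X distinction in code (purely illustrative: these few lines are not the actual Meier or Linkwitz-Cmoy circuits, and the `k`/`g` values are made up) is that an "H-style" stage operates on the channel difference, while an "X-style" stage feeds each channel a copy of the opposite one:

```python
def h_style(left, right, k=0.3):
    """Separation control driven by the channel difference: shrink the L-R (side) signal."""
    out_l = [l - 0.5 * k * (l - r) for l, r in zip(left, right)]
    out_r = [r + 0.5 * k * (l - r) for l, r in zip(left, right)]
    return out_l, out_r

def x_style(left, right, g=0.25):
    """Feed an attenuated copy of the opposite channel straight across."""
    out_l = [l + g * r for l, r in zip(left, right)]
    out_r = [r + g * l for l, r in zip(left, right)]
    return out_l, out_r

# A mono (identical-channel) signal passes through the H-style stage untouched,
# while the X-style stage also boosts it by (1 + g) and so needs makeup gain:
mono = [0.5, -0.25, 0.125]
print(h_style(mono, mono)[0])  # unchanged
print(x_style(mono, mono)[0])  # scaled by 1.25
```

The mono case is the clearest behavioural difference between the two topologies: difference-driven processing leaves correlated content alone by construction.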



Strangelove424 said:


> Don't overlook DSP.


DSP is a great way to do cross-feed if you can. It's just that often you can't use one (Spotify?), so I do all my cross-feed with my DIY cross-feed headphone adapter, which is available at home no matter what the source is (CD, DVD, Blu-ray, Spotify, Youtube, TV,…). For portable music I pre-crossfeed the music before exporting to mp3 files* for my portable player (I use a Nyquist plugin I wrote for Audacity).

* In my opinion mp3s are "good enough" at bit rates of 192 kbps or more outdoors in noisy environments.


----------



## pinnahertz

71 dB said:


> Headphone cross-feed doesn't address any _acoustic_ problem, so I am not sure what the relevance here is.


The task, as I see it, is to get the in-head, hard left-right perspective of headphones back into a more natural presentation, in essence, a more acceptable, if artificial, acoustic space.


71 dB said:


> Cancellation of loudspeaker crosstalk as a concept is familiar to me. I studied acoustics in the university and worked in the acoustics lab for almost a decade. However, I am not sure why you talk about loudspeaker cross-talk cancellation in a thread about cross-feed in headphone listening.


You mentioned it, I quoted you.  You said it was easy, it's not.  You should know that.


71 dB said:


> Personally I am not that worried about loudspeaker cross-talk. It is a "natural" acoustic phenomenon that doesn't create unnatural signals to my ears. Making the listening room more absorbent and using more directional loudspeakers one can reduce cross-talk, if that is an issue.


No, it can't.  Both ears still hear both speakers, even in an anechoic chamber.


71 dB said:


> It will make the loudspeaker sound more headphone-like, but isn't it easier to just use headphones if that's what you want?


Even with speakers and as much crosstalk cancellation as you can manage, it's still a completely different perspective than headphones.


71 dB said:


> I think you confuse cross-talk and cross-feed in some places.


You brought it up and made misleading statements.  I know the difference.


71 dB said:


> The Linkwitz circuit, like pretty much all cross-feeders, is frequency selective because that is how our spatial hearing works. Our head is a frequency-selective barrier for sound. Cross-feeders also delay the cross-fed signals, typically about 0.2-0.3 ms, to simulate the delay caused by loudspeakers at ~30° angles. The delay is conveniently created by the low-pass filter.


This is one of the things that jumped out at me when I looked up the circuit.  The "delay" caused by the filters is not actually time delay; it's phase shift, which looks like delay when you look at one group of frequencies, but is not true time delay.  That time delay could be simulated well enough with an all-pass network, but not with a single-pole filter.  Sorry, I tried that 35 years ago.  It sort of works, but not well.  That's why I was disappointed.  You need a real DSP to do that well.
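The frequency dependence described here can be seen directly from the phase response of a first-order lowpass. In this sketch (with an assumed 650 Hz cutoff, an illustrative value), a true time delay would give the same number at every frequency, but the filter's phase delay falls off as frequency rises:

```python
import math

FC = 650.0  # assumed cutoff of the cross-path lowpass (illustrative)

def phase_delay_ms(f, fc=FC):
    """Phase delay of a first-order analog lowpass at frequency f, in milliseconds.
    A constant-delay element would return the same value at every frequency."""
    return math.atan(f / fc) / (2.0 * math.pi * f) * 1000.0

for f in (100.0, 650.0, 2000.0, 8000.0):
    print(f"{f:6.0f} Hz: {phase_delay_ms(f):.3f} ms")  # falls from ~0.24 ms to ~0.03 ms
```

So the "0.2-0.3 ms delay" only holds well below the cutoff; higher-frequency interaural cues get progressively less of it.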


71 dB said:


> Cross-feeders are simple circuits, but they miraculously fix the problem, spatial distortion. Mid-head localization is partially fixed and depends on the recording itself. Acoustic recordings done in real acoustics such as classical music can sound pretty amazing after proper cross-feed, but not as amazing as real binaural recordings.


At best that thing is an improvement, but it's not really doing what needs to be done.  The fact that certain recordings work better than others should tell you that.  Mid-head localization should not depend only on the recording, proper correction would place it outside the head all the time.  That's not what you have there.


71 dB said:


> Cross-feed removes spatial distortion by scaling spatial information into the "value-space" our brain expects it to be.


Change "removes" to "reduces", and we're good.  That circuit can't remove spatial distortion.  It can't even minimize it.


71 dB said:


> Not using cross-feed creates something that was not planned, spatial distortion.


In some cases I would agree, but certainly not all.  As I referred to earlier, there is material that while mixed on speakers was happily embraced on headphones as a new, if hyper-stereo, experience.  Remember, mixes are checked on headphones, especially today in contemporary popular music, since that's the market, but mixed on speakers, because mixing on speakers translates to a pleasing headphone experience, but not the other way 'round.



71 dB said:


> It sounds natural, realistic and fatigue-free. Drums sound like real drums in a room, not fake plastic toys. Short transient sounds are located in the sound image with pin-point accuracy instead of spreading all over the place because the brain doesn't know how to interpret crazy spatial cues. Cross-feed doesn't remove details, it removes spatial distortion revealing the tiny details of the music itself. If that's not desirable then I don't know what is.


To be completely fair, I appreciate your opinion, but do not share it.


71 dB said:


> Interesting. Did you limit the speed at which the cross-feed level changes? What are the benefits of dynamic cross-feed compared to constant cross-feed in your opinion? Any downsides?


Speed and degree are program-determined and variable.  The benefit is more consistent results, the down side is more consistent results.  It's just different.  I didn't develop the idea any further because the problem was the algorithm that determined the required crossfeed.  It turned out it's not just the amount, which was easy to quantify: to work well it needed delay, and the amount changes with program.  What morphed out of this was abandoning the idea in favor of an idea akin to the Smyth Realizer, but I didn't have DSP in those days, so I moved on.


71 dB said:


> Since when have marketing people understood anything about audio quality? I'm not going to suffer spatial distortion just because of some lunatic radio shows decades ago, when people didn't know what to do with stereo sound. Such wacky effects are childish.


Actually, those efforts were very successful.  Stereo headphones were fairly new, and a program featuring cool headphone mixes was a revenue source for broadcast.  BTW, you're expressing opinion again.  Thanks, but it's not fact, just opinion.  That's why there's an off switch on crossfeed!


71 dB said:


> What you experienced in your childhood doesn't change scientific facts. We all have to admit sometimes that the way we have done things or thought about things has been wrong. That's how we learn, accepting new understanding. I listened to music wrong when young. I'm probably still doing something wrong, but hopefully just a tiny little bit. Cross-feed was a huge step for me. I don't believe spatial distortion was ever intended with headphones. It's an accident of stereo sound. In the late 50s and 60s people were so excited about stereo sound and the possibility to have huge channel separation that they didn't think about the consequences. It's something people simply ignore, not realizing how it destroys the potential of headphone listening. People get used to things and when somebody questions things they are in denial. Sad.


Well, it wasn't childhood, but...
Your view is very rigid, very black and white.  If you want stereo done "right" then the only way you'll be satisfied is with binaural recordings made with mics in your own ears.  That works very well, but just for you. 

Recording and reproduction, especially in two-channel stereo, is very much a subjective art.  As you grow older (ok, sorry, just a return jab), you may realize there are lots of "rights" and "grays" in...well, everything.  And there are some absolute rights and wrongs.  Experience helps us to understand the difference.

Your statements show an understanding gap.  The generalizations are disturbing too, like the comment about the 50s and 60s, as if it was all huge ping-pong-ball stereo.  It wasn't; there are some very fine recordings from that time period, some even made with more than two recording channels so the phantom center could be brought under control.  Most of the stereo mic techniques we still use were introduced then.  And even earlier, Bell Labs research into stereophony (that doesn't mean two channels, BTW) showed that truly accurate spatial reproduction would require a grid of over 1000 microphones and recording channels, and a speaker grid to match.  They reduced the channel count until it was practical, and landed at the lower limit of 3.  That was the 1930s.  Give history some credit!


71 dB said:


> Spatial distortion happens in our brain, but it is just as real for the listener, just as pain is real for a person.


No, spatial distortion results from the way signals are transduced. 


71 dB said:


> You can listen to headphones the way you want, that's your business, but I feel _responsible_ to educate people about spatial distortion and how to significantly enhance headphone listening using cross-feed. I have science on my side. Open-minded people do get what I say.


Yeah, right, except you are not educating people with the whole story.  You've been rather definitive with your precepts, and I'm just pointing out that a few things are not so definitive. And your definition of what is "right" includes a half-baked attempt at crossfeed that doesn't take real time delay into consideration, nor the actual response curve of sound diffracting around a head, nor any thought of the angle of the phantom transducers.  That's not definitive, don't portray it as final.  It might not even be desirable!


71 dB said:


> If the sound from a cowbell spreads all over the place instead of being in one position in the sound image then I am going to call it spatial distortion. Spatial information gets distorted. A cowbell is not all around your head. It's on the left or right or in the center. It's in one place and it sounds like a real cowbell if you hear it like that, without spatial distortion. The thing exists and people should be educated about it.


What if the creator wanted it all around your head?  How would you know?  This is again another strong effort to categorize something that is far more subjective.


71 dB said:


> Also, don't blame cross-feed for some crappy prog rock sounding weird because some anarchistic sound engineers using drugs liked to play with the knobs in the studio.


Well, I didn't do that, but I think you just did.  "Crappy" could be your opinion, and reproducing a whip-panned guitar in headphones actually was the intent of the creators some times.  If you cross-feed that out, you've taken away their intention.  Is that the right thing to do?


71 dB said:


> Cross-feed makes miracles, but not miracles big enough to transform a badly produced rock album of the 70s into gold. Listen to some well recorded classical music (e.g. SACD by BIS label) with proper cross-feed and then you'll hear how good the result is.


Again...perhaps yes...perhaps no.  There's no way I can agree that crossfeed of the type you've defined is universally miraculous.  Just as I can't agree that all album rock in the 1970s is badly produced.  Not that you'll care or be impressed, but my recording background is in classical music, not 70s rock. 


71 dB said:


> The market for headphone amps with cross-feed is miserable. Headphone amps are expensive, only a few models have cross-feed, and then only one or two levels are available. There's the SPL Phonitor, but that's very expensive. I recommend DIY cross-feed headphone adapters. Having a DIY cross-feeder between your source and headphone amp is another option.


I'd probably look for a crossfeed DSP plugin and use whatever headphone amp you have.  At least then the crossfeed wouldn't be limited to a simple filter and whatever phase shift it creates. It could include head diffraction and time delay, and have all the variables that are actually required.  In truth, that's why I believe you don't find much crossfeed on commercial headphone amps: it's too complex to do well.
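For comparison with the analog circuits discussed above, a DSP cross path can use a true time delay, which a single-pole filter only approximates with phase shift. A minimal sketch (the function name and the gain/delay values are illustrative, not from any commercial product; a fuller implementation would also lowpass the delayed path to model head shadowing):

```python
SR = 44100  # sample rate, Hz

def crossfeed_delayed(left, right, gain=0.25, delay_ms=0.25, sr=SR):
    """Crossfeed whose cross path uses a true whole-sample time delay,
    the thing a single-pole filter's phase shift only approximates.
    The gain and delay values are illustrative."""
    d = max(1, round(delay_ms * sr / 1000.0))       # ~11 samples at 44.1 kHz
    delayed_r = [0.0] * d + right[:len(right) - d]  # right channel, delayed
    delayed_l = [0.0] * d + left[:len(left) - d]    # left channel, delayed
    out_l = [l + gain * r for l, r in zip(left, delayed_r)]
    out_r = [r + gain * l for r, l in zip(right, delayed_l)]
    return out_l, out_r
```

Unlike the filter version, an impulse fed to one channel arrives in the other as a single, cleanly delayed copy: the delay here is the same at every frequency.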

What we have here is a difference in opinion.  I respect that you love the Linkwitz crossfeed circuit, please respect that I feel it to be inadequate.  You feel crossfeed is essential and miraculous, I feel it is an occasional improvement in the implementation you've cited.  I also know from experience that there is normal stereo material that crossfeed would ruin in terms of what the creators intended.  I understand that from experiencing the culture of the era.  Correcting that would be inauthentic, just as trying to massage stereo out of a mono recording would also be inauthentic. 

Looks like we'll differ here.  Perhaps we should let it go at that.


----------



## castleofargh

Strangelove424 said:


> I found a simulation of Meier "natural" crossfeed filter for foobar that sounds very good to me. That will work for Windows, not sure what to tell you about Android.
> 
> edit: just noticed you were specific to Spotify, so foobar plugins will be of no use, but I decided to keep the link here in case you give it a shot with Foobar.
> 
> ...


On Windows, I believe Equalizer APO can offer system-wide crossfeed. Else there is always the option of a virtual cable and a VST host where you can use all the VSTs that can be used in foobar, and more, as foobar is limited in some ways (VST needs a GUI, 32-bit). Once you have such an implementation, you can decide to route anything through it, or not. 

On Android I still use Viper4Android, but I'm on some older version and haven't really looked into replacing it with something possibly better. It requires root, has a basic kind of crossfeed setting that's called something else, or can be used as a convolver. The issue with the convolution option is finding some files to use it with, at the right sample rate. But at least some options exist for those who don't know how to make their own stuff. Our fellow member @Joe Bloggs made such impulses and shared them some time back, and a few other people did the same.
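For readers tempted by the convolver route mentioned above, here is a toy sketch of what a crossfeed impulse-response pair could look like: a unit impulse for the direct path, plus a delayed, attenuated impulse for the cross path. All parameters are illustrative, not taken from any shared impulse set, and a real impulse would also lowpass-filter the cross path:

```python
import numpy as np

def crossfeed_impulse(fs=44100, delay_ms=0.3, atten_db=-12.0, length=64):
    """Toy crossfeed impulse-response pair for a convolver.
    direct: unit impulse (the channel itself, unchanged).
    cross:  delayed, attenuated impulse (the bleed into the other channel).
    Parameters are illustrative only; a realistic IR would also
    lowpass the cross path to mimic head shadowing."""
    direct = np.zeros(length)
    cross = np.zeros(length)
    direct[0] = 1.0
    d = int(round(fs * delay_ms / 1000.0))   # delay in samples
    cross[d] = 10.0 ** (atten_db / 20.0)     # -12 dB -> amplitude ~0.251
    return direct, cross
```

Convolving the left input with `direct` and the right input with `cross` (and vice versa), then summing, yields a basic delayed-and-attenuated crossfeed.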


----------



## bigshot

That was the longest line-by-line reply I’ve ever seen. I hope I live long enough to actually get around to reading it someday.


----------



## Strangelove424 (Sep 21, 2017)

71 dB said:


> Cross-feeders do spatialize sound depending on the incoming channel separation and Meier is no exception, but that's not dynamic cross feed. Even Meier has a constant cross feed level.
> 
> The difference of Meier (a "H-topology" cross-feeder) and Linkwitz-Cmoy (a "X-topology" cross-feeder) is that Meier distributes sound according to the channel difference while Linkwitz-Cmoy emphasizes 30° angles simulating loudspeaker listening. Meier gives more vivid/aggressive/wide sound than Linkwitz-Cmoy which is more calm and relaxed.



My understanding of dynamic is that it fluctuates depending on variables in the signal, either frequency or amplitude. What do you mean by dynamic?

I enjoy Meier crossfeed because it is the most subtle I have used. Just enough to make some albums less annoying or fatiguing. It maintains the spaciousness and L/R separation I believe is still faithful to the mastering while making it palatable on headphones. 



71 dB said:


> DSP is a great way to do cross-feed if you can. It's just that often you just can't use one (Spotify?) so I do all my cross-feed with my DIY cross-feed headphone adapter which is available at home no matter what the source is (CD, DVD, Blu-ray, Spotify, Youtube, TV,…) For portable music I pre-crossfeed the music before exporting them to mp3-files* for my portable player (I use a Nyquist-plugin I wrote for Audacity).
> 
> * In my opinion mp3s are "good enough" at bit rate 192 kbps or more outdoor in the noisy environment.



I like DSP because it gives complete control over range; I can set the crossover strength from 0-100 (usually have it ≤20 and mix it with many other DSPs). I like the ability to change at whim and listen on different setups, so I would personally feel uncomfortable re-encoding all my files permanently with a certain DSP setting. I think it's an interesting idea, I just wouldn't do it personally. I've struggled for a while to find plugins that worked in a playback environment so I would never have to deal with rendering files in order to apply DSPs. Everything I have (including VSTs I got to work in Foobar finally) functions in real time and doesn't require a render. Agree about mp3. I use Spotify in the low-quality setting when on the go. I never saw the purpose of trying to attain audio purity outdoors.


----------



## Strangelove424

alex_aiwa_USA said:


> It's basically a bunch of hard-panned beeps and tones, I guess the most normal name for it is 'noise music':   ... he uses little to no spatialization in his mixes, either, making any notion of naturalistic listening sort of beside the point... although I realize it's an extreme case



Well, this Ryoji Ikeda is certainly not as bad as Merzbow. Anyone ever hear Merzbow? I'm not going to put a link down, but if you are curious I will warn you: _turn down your volume!_ I found out about Merzbow after browsing the DR database one day and sorting results by the albums with the least dynamic range. Merzbow and a couple of other "noise artists" earned a prestigious 0 DR. Yep, that's 0 dB of range. Nada. Nil. Nichts. It'll drill your flippin' brain out. Japanese electronic music seems to be heavily influenced by noise.


----------



## pinnahertz

bigshot said:


> That was the longest line-by-line reply I’ve ever seen. I hope I live long enough to actually get around to reading it someday.


You could just ignore it.  That's what I do with posts I find aggravating.   Most of the time, anyway.


----------



## pinnahertz

Strangelove424 said:


> My understanding of dynamic is that it fluctuates depending on variables in the signal, either frequency or amplitude. What do you mean by dynamic?


I can't find anything that says the Meier crossfeed changes dynamically.  Looks like a user-adjustable setting, but the signal doesn't change it.  Did you find something that indicates it's signal-dependent?


----------



## Strangelove424

From Meier's website:

"Especially in the high frequency range, the delayed crossfeed signal interferes with the original input and attenuates specific frequencies. The frequency-curve is no longer flat but shows a larger number of dips. This is the so-called Comb-filter effect.

A unique feature of the crossfeed circuitry on the CORDA headphone amplifiers is that it "recognizes" the virtual positions of the instruments and singers in a recording. The sound of an instrument in the middle of the soundstage will be equally present in both audio-channels and isn't given any crossfeed. A crossfeed signal is only generated for instruments that are not placed at the center. The more off-center the instrument is placed, the stronger the crossfeed and the longer its delay. The frequency-curve is flat again and the Comb-filter effect is eliminated. This is called "natural crossfeed"."

...

"The original (standard) version is based on the small resistor-capacitor network shown in the figure to the right. It can be easily recognized that the left channel input signal will also be seen at the right channel output and vice versa. A mono signal will pass unaltered and without any delay to both outputs. This version is found on CORDA amplifiers designed/built until around 2005. Crossfeed is only given for signal components with frequencies up to 1 kHz.

With crossfeed activated, lower-frequency signals are no longer present in one channel only but are now more evenly distributed over both channels. They are less isolated and become a more integrated part of the soundstage. They no longer stand out, and this may feel as if the energy in the frequency range below 1 kHz is slightly reduced. From 2005 until now a second, slightly modified filter is used that automatically compensates for this apparent bass loss.

Psychoacoustic studies have shown that our sense of direction is mainly determined by the sonic components with frequencies up to 2 kHz. However, with a simple passive network, natural delay times can only be achieved for frequencies up to 1 kHz. In recent years, therefore, a technically more sophisticated filter was designed that allows crossfeed with appropriate delay times for signal components up to 3 kHz. This extended crossfeed filter can be found in the CLASSIC and in the (now discontinued) STAGEDAC."
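The "natural crossfeed" idea quoted above — no crossfeed for centered sources, more for off-center ones — can be illustrated with a toy numerical sketch. To be clear, this is not Meier's actual circuit: the coefficients and the instantaneous per-sample gain are purely illustrative (a real implementation would also filter and delay the cross signal):

```python
import numpy as np

def natural_crossfeed(left, right, max_feed=0.25):
    """Toy 'natural' crossfeed: feed each channel into the other in
    proportion to how different the channels are. A centered (mono)
    source has L == R, so no crossfeed is added and no comb filtering
    can occur; a hard-panned source gets the full max_feed amount.
    All coefficients are illustrative, not Meier's values."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    # separation: 0 when channels are identical, up to 1 when fully opposed
    denom = np.maximum(np.abs(left) + np.abs(right), 1e-12)
    separation = np.abs(left - right) / denom
    gain = max_feed * separation           # crossfeed grows off-center
    out_l = left + gain * right
    out_r = right + gain * left
    return out_l, out_r
```

With this scheme a mono signal passes through untouched, which is the property the quoted text highlights as eliminating the comb-filter effect.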


----------



## pinnahertz

Strangelove424 said:


> From Meier's website:
> 
> "Especially in the high frequency range, the delayed crossfeed signal interferes with the original input and attenuates specific frequencies. The frequency-curve is no longer flat but shows a larger number of dips. This is the so-called Comb-filter effect.
> 
> ...


Yes, I read that.  None of that indicates a dynamic change, though.  The fact that crossfeed changes depending on relative position is still a static function.  In fact, my experiments fundamentally did that by extracting a crossfeed signal by developing an L-R signal then summing it with L (2L-R), then inverting it and summing it with R (2R-L).  Inserting frequency response modifiers and time delay in the base L-R signal forms the approximation of head diffraction, and yes, the result is a comb filter of sorts.  But it's a fixed algorithm, there is no dynamic change. 
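The matrixing described above — deriving an L-R difference signal, shaping it, then folding it back as 2L-R and 2R-L — might be sketched as follows. The delay and attenuation values are placeholders standing in for the head-diffraction filtering, not the values from the original experiment:

```python
import numpy as np

def matrix_crossfeed(left, right, delay_samples=22, atten=0.25):
    """Sketch of the matrix approach: build a difference signal
    D = L - R, delay and attenuate it (a crude stand-in for the
    head-diffraction filter), then add it to L and subtract it
    from R. With delay_samples=0 and atten=1.0 this reduces to the
    plain 2L-R / 2R-L matrix described above."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    diff = left - right
    # shift the difference signal by delay_samples, zero-padded at the front
    delayed = np.concatenate([np.zeros(delay_samples), diff])[:len(diff)]
    shaped = atten * delayed               # placeholder for filtering
    return left + shaped, right - shaped
```

As noted in the post, the interaction of the shaped difference signal with the originals is what produces the comb-filter-like response.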

I played with dynamic change by adding variable gain control in the L-R because I found that widely separated material became over-compensated.  That helped, but didn't work reliably for everything.  The same process can be modified and applied to speaker acoustic crosstalk cancellation with similarly variable results. There were several products marketed that did this in the early 1980s. They only sort of worked because with speakers the phantom center image is confused by both ears hearing both speakers with two ITDs.  The basis for the Carver Sonic Holography device was to compensate for that condition.  It worked better, but the effect was much more fragile and affected by acoustics.  It didn't apply well to headphones because in headphones you only hear one transducer per ear. 

None of the above helped the middle of the head image issue, though.


----------



## 71 dB

pinnahertz said:


> The task, as I see it, is to get the in-head, hard left-right perspective of headphones back into a more natural presentation, in essence, a more acceptable, if artificial, acoustic space.


If you record music in real space or create "synthetic" sound image using advanced effects, it is likely to contain spatial cues to achieve this. Spatial distortion often messes it up in headphone listening, but we have the solution for that: Cross-feed. If your music doesn't contain proper spatial cues, it will cause this "in-head" left-right perspective. 



pinnahertz said:


> You mentioned it, I quoted you.  You said it was easy, it's not.  You should know that.



Huh?



pinnahertz said:


> No, it can't.  Both ears still hear both speakers, even in an anechoic chamber.



Of course, but the cross-talk is minimized to the point of appearing unnatural because most people aren't used to anechoic chambers.



pinnahertz said:


> Even with speakers and as much crosstalk cancellation as you can manage, it's still a completely different perspective than headphones.



Yes, but that doesn't mean you can't make headphones sound exactly like loudspeakers. Ever heard of HRTF?



pinnahertz said:


> You brought it up and make misleading statements.  I know the difference.



I brought it up? Huh?



pinnahertz said:


> This is one of the things that jumped out at me when I looked up the circuit.  The "delay" caused by the filters is not actually time delay, it's phase shift which looks like delay when you look at one group of frequencies, but is not time delay.  That time delay could be simulated well enough with an all-pass network, but not with a single pole filter.  Sorry, I tried that 35 years ago.  It sort of works, but not well.  That's why I was disappointed.  You need a real DSP to do that well.



You're right. You need a real DSP to do it well, but the phase shift is constant enough in the relevant frequency range to make it do what cross-feed is supposed to do: remove spatial distortion. At frequencies where the delay starts to fall off, our hearing moves from delay mode to amplitude mode in spatial hearing, so it doesn't matter much. For me it works very well and I will never go back to un-cross-fed sound. 
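The disagreement about phase shift versus true delay can be checked numerically: the group delay of a first-order (single-pole) lowpass is roughly constant well below the cutoff and falls away above it, so it mimics a fixed time delay only over a limited band. A small sketch, where the 700 Hz cutoff is an arbitrary example and not any particular circuit's value:

```python
import numpy as np

def lowpass_group_delay(f, fc):
    """Group delay (seconds) of a first-order RC lowpass with cutoff fc.
    H(jf) = 1 / (1 + j*f/fc), so phase = -atan(f/fc) and
    group delay = (1 / (2*pi*fc)) / (1 + (f/fc)**2).
    Roughly constant for f << fc, falling off above fc: it behaves
    like a time delay only in the low-frequency region."""
    f = np.asarray(f, dtype=float)
    return (1.0 / (2.0 * np.pi * fc)) / (1.0 + (f / fc) ** 2)

fc = 700.0  # illustrative cutoff in Hz
# delay at 100 Hz is close to the DC value; at 5 kHz it has collapsed
delays = lowpass_group_delay(np.array([100.0, 700.0, 5000.0]), fc)
```

This is consistent with both positions: the "delay" is real but band-limited, which matters less if interaural delay cues dominate only below a couple of kilohertz.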

I don't know what you expected 35 years ago. I suppose you need some serious DSP to turn some badly produced 70's rock albums lacking all important spatial cues into brilliant sonic experiences… …I'm into King Crimson so I know. I ignore the sonic crappiness and concentrate on the brilliant music. Having no spatial distortion helps a lot.



pinnahertz said:


> At best that thing is an improvement, but it's not really doing what needs to be done.  The fact that certain recordings work better than others should tell you that.  Mid-head localization should not depend only on the recording, proper correction would place it outside the head all the time.  That's not what you have there.



Well, I'll take that improvement! Of course I could have better HRTF DSP stuff, but I don't have it. That stuff is for millionaires. For the money, cross-feed gives an insane improvement.



pinnahertz said:


> Change "removes" to "reduces", and we're good.  That circuit can't remove spatial distortion.  It can't even minimize it.
> In some cases I would agree, but certainly not all.  As I referred to earlier, there is material that, while mixed on speakers, was happily embraced on headphones as a new, if hyper-stereo, experience.  Remember, mixes are checked on headphones, especially today in contemporary popular music, since that's the market, but mixed on speakers, because mixing on speakers translates to a pleasing headphone experience, but not the other way 'round.



Remove, reduce, whatever. In my experience at some point the spatial distortion really disappears when the sound is "mono enough", is masked by the music. So that's why I say remove. If I hear spatial distortion, I increase the level of cross-feed until I don't hear it anymore.



pinnahertz said:


> To be completely fair, I appreciate your opinion, but do not share it.



You do have some weird opinions and I don't know what to think of you. It's as if you want to disagree no matter what I say. I try to educate people about spatial distortion and you mess things up, making everything 1000 times more complex for other readers. The concept of spatial distortion is complex enough as it is. There is no need to talk about how first-order filters don't create "real" delays. It doesn't matter! The phase shift works, spatial distortion is reduced/removed = problem solved = enjoyable listening experience. 



pinnahertz said:


> Speed and degree are program-determined and variable.  The benefit is more consistent results; the downside is more consistent results.  It's just different.  I didn't develop the idea any further because the problem was the algorithm that determined the required crossfeed.  It turned out it's not just the amount, which was easy to quantify; to work well it needed delay, and the amount changes with program.  What morphed out of this was abandoning the idea in favor of an idea akin to the Smyth Realiser, but I didn't have DSP in those days, so I moved on.



That's why I keep my cross feeder simple. It works and does what I want.



pinnahertz said:


> Actually, those efforts were very successful.  Stereo headphones were fairly new, and a program featuring cool headphone mixes was a revenue source for broadcast.  BTW, you're expressing opinion again.  Thanks, but it's not fact, just opinion.  That's why there's an off switch on cross feed!



I don't know those broadcasts, so it's hard to give my opinion. I started listening to radio around 1987 (and I live in Finland). It is my experience that dynamic compression on radio broadcasts reduces channel separation, hence reducing spatial distortion. I don't hear very strong spatial distortion on radio. Some, but not much.



pinnahertz said:


> Well, it wasn't childhood, but...
> Your view is very rigid, very black and white.  If you want stereo done "right" then the only way you'll be satisfied is with binaural recordings made with mics in your own ears.  That works very well, but just for you.
> 
> Recording and reproduction, especially in two-channel stereo, is very much a subjective art.  As you grow older (ok, sorry, just a return jab), you may realize there are lots of "rights" and "grays" in...well, everything.  And there are some absolute rights and wrongs.  Experience helps us to understand the difference.
> ...


Huh? I can't have my favorite music as binaural recordings! I must live with what is possible. Having DIY headphone adapters as cross-feeders is possible, so that's what I have. I am 46 now. How old must I be to get rights and wrongs?

The history of stereo recordings is beside the point of cross-feed, the topic here. Of course good old recordings exist, but ping-pong was "the big thing" to lure people into stereo sound.

Understanding gap? I am the first to admit that. I know enough to know how little I know. How about you?



pinnahertz said:


> No, spatial distortion results from the way signals are transduced.



The way recordings are made creates signals that cause spatial distortion in the brain, but the recordings themselves, from a technical point of view, are distortion-free. Can't you even try to understand what I say? As a Finn my English might be less than perfect, but I think it's not that bad… ..God...



pinnahertz said:


> Yeah, right, except you are not educating people with the whole story.  You've been rather definitive with your precepts, and I'm just pointing out that a few things are not so definitive. And your definition of what is "right" includes a half-baked attempt at crossfeed that doesn't take real time delay into consideration, nor the actual response curve of sound diffracting around a head, nor any thought of the angle of the phantom transducers.  That's not definitive, don't portray it as final.  It might not even be desirable!


I'm happy to spend my time elsewhere if my education is not welcome. I don't get one penny out of this, so people can suffer spatial distortion if they want. It seems it was a mistake to join this forum if this is the mentality over here. I am talking to people who don't know much about spatial distortion, not to besserwissers who built dynamic cross-feeders 35 years ago. Tell me how to achieve the things you mention. What does it cost? $5000? My cross-feeder is $50. Bang for the buck!



pinnahertz said:


> What if the creator wanted it over head?  How would you know?  This is again another strong effort to categorize something that is far more subjective.



Ok. Over head = ceiling loudspeakers? Atmos sound or similar...



pinnahertz said:


> Well, I didn't do that, but I think you just did.  "Crappy" could be your opinion, and reproducing a whip-panned guitar in headphones actually was the intent of the creators sometimes.  If you cross-feed that out, you've taken away their intention.  Is that the right thing to do?
> Again...perhaps yes...perhaps no.  There's no way I can agree that crossfeed of the type you've defined is universally miraculous, just as I can't agree that all album rock in the 1970s was badly produced.  Not that you'll care or be impressed, but my recording background is in classical music, not 70s rock.



What if I don't like what they intended? Then I don't listen to it at all. I'm sure the same goes for you. Sorry, if you think my cross-feeder sucks. I feel bad about it. I thought I would feel good on this forum among people who understand me. I was mistaken. Life sucks, but at least I have properly cross-fed music to enjoy.



pinnahertz said:


> I'd probably look for a crossfeed DSP plugin and use whatever headphone amp you have.  At least then the crossfeed wouldn't be limited to a simple filter and whatever phase shift it creates. It could include head diffraction and time delay, and have all the variables that are actually required.  That, in truth, is why I believe you don't find much crossfeed on commercial headphone amps: it's too complex to do well.



Do what you wish. Headphone amps lack cross-feed because people haven't been educated enough about spatial distortion. I'm trying to change that, but you ruin it. Thanks a lot, man.



pinnahertz said:


> What we have here is a difference in opinion.  I respect that you love the Linkwitz crossfeed circuit, please respect that I feel it to be inadequate.  You feel crossfeed is essential and miraculous, I feel it is an occasional improvement in the implementation you've cited.  I also know from experience that there is normal stereo material that crossfeed would ruin in terms of what the creators intended.  I understand that from experiencing the culture of the era.  Correcting that would be inauthentic, just as trying to massage stereo out of a mono recording would also be inauthentic.



I thought you were against cross-feed in general, but you are just against Linkwitz? Of course one can use different cross-feeders if Linkwitz is not your cup of tea. Yes, I feel crossfeed is essential and miraculous. It revolutionized my headphone listening. Some recordings work best without cross-feed, but most of them benefit from it, and I don't believe spatial distortion is intended in over 99.9 % of all music. If spatial distortion is intended, then loudspeakers ruin it.

This was so frustrating...


----------



## 71 dB

pinnahertz said:


> None of the above helped the middle of the head image issue, though.


Crossfeed doesn't even try to fix that. You need floor reflections and other spatial cues to achieve that. However, if the recording itself has good spatial information, it helps when spatial distortion is removed with cross-feed.


----------



## Strangelove424

pinnahertz said:


> Yes, I read that.  None of that indicates a dynamic change, though.  The fact that crossfeed changes depending on relative position is still a static function.  In fact, my experiments fundamentally did that by extracting a crossfeed signal by developing an L-R signal then summing it with L (2L-R), then inverting it and summing it with R (2R-L).  Inserting frequency response modifiers and time delay in the base L-R signal forms the approximation of head diffraction, and yes, the result is a comb filter of sorts.  But it's a fixed algorithm, there is no dynamic change.



It also changes depending upon frequency, but maybe I'm not following your concept of static/dynamic. An algorithm is always fixed, it's the data or the variables that are changing, and that's what allows an algorithm to react dynamically. All the algorithm needs to account for are the relationships. I'm confused as to how you would go about creating a dynamic system without setting up relationships with an algorithm. There must have been some variable that your gain control responded to.


----------



## pinnahertz (Sep 21, 2017)

(reply to 71 dB's lengthy post...)

Well. Wow.

I think you've just pegged the superciliometer with that one.


----------



## pinnahertz

71 dB said:


> Crossfeed doesn't even try to fix that. You need floor reflections and other spatial cues to achive that. However, if the recording itself has good spatial information, it helps when spatial distortion is removed with cross-feed.


Yup.


----------



## pinnahertz

Strangelove424 said:


> It also changes depending upon frequency, but maybe I'm not following your concept of static/dynamic. An algorithm is always fixed, it's the data or the variables that are changing, and that's what allows an algorithm to react dynamically. All the algorithm needs to account for are the relationships. I'm confused as to how you would go about creating a dynamic system without setting up relationships with an algorithm. There must have been some variable that your gain control responded to.


Static would mean the specific amount of crossfeed maintains a fixed relationship to the input signal.  Dynamic would mean the specific amount of crossfeed is altered by some characteristic of the input signal.  

I already described how I did it with a variable gain element.


----------



## pinnahertz

71 dB said:


> I thought you were against cross-feed in general, but you are against Linkwitz? Of course one can use different cross-feeders if Linkwitz is not your cup of tea. Yes, I feel crossfeed is essential and miraculous. It revolutionized my headphone listening. Some recordings work best without cross-feed, but most of them benefit form it and I don't believe spatial distortion is intended in over 99.9 % of all music. If spatial distortion is intended then loudspeakers ruin it.


I believe the Linkwitz method is fatally flawed, inadequate, crude, and isn't at all miraculous.  I don't think all music benefits from that sort of crossfeed, and I don't think it is essential.  If it were, it would be included in every DMP.


71 dB said:


> This was so frustrating...


Finally something we agree on.


----------



## 71 dB

Strangelove424 said:


> My understanding of dynamic is that it fluctuates depending on variables in the signal, either frequency or amplitude. What do you mean by dynamic?



Dynamic would mean the resistors and capacitors were dynamic, a function of channel separation. If you input left only, you get output on both left and right so that left > right. That tells you the cross-feed level:

level = 20*log10(R/L)

That level is constant.
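The definition above can be written out directly: feed a signal into one channel only and compare what leaks into the other output against the main output. For example, a cross-feed signal at one quarter of the main channel's amplitude comes out at about -12 dB:

```python
import math

def crossfeed_level_db(l_amp, r_amp):
    """Cross-feed level as defined above: input left only, then
    level = 20*log10(R/L), where L and R are the output amplitudes.
    For a static cross-feeder this ratio is constant regardless of
    input level."""
    return 20.0 * math.log10(r_amp / l_amp)
```

So a fixed resistor-capacitor network gives one fixed level; changing the level means changing components (or switching between networks).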



Strangelove424 said:


> I enjoy Meier crossfeed because it is the most subtle I have used. Just enough to make some albums less annoying or fatiguing. It maintains the spaciousness and L/R separation I believe is still faithful to the mastering while making it palatable on headphones.



Meier's crossfeed is good in my opinion. Nice if you enjoy it. 



Strangelove424 said:


> I like DSP because it gives complete control over range, I can set the crossover strength from 0-100 (usually have it ≤20 and mix it with many other DSPs). I like the ability to change at whim and listen on different setups, so would personally feel uncomfortable re-encoding all my files permanently with a certain DSP setting. I think it's an interesting idea, I just wouldn't do it personally. I've struggled for a while to find plugins that worked in a playback environment so I would never have to deal with rendering files in order to apply DSPs. Everything I have (including VSTs I got to work in Foobar finally) function in real time, and don't require render. Agree about mp3. I use Spotify in low quality setting when on the go. I never saw the purpose of trying to attain audio purity outdoors.



I use a Mac so I don't have as much choice. I've never used foobar. The Vox player has cross-feeders.


----------



## 71 dB

pinnahertz said:


> I believe the Linkwitz method is fatally flawed, inadequate, crude, and isn't at all miraculous.  I don't think all music benefits from that sort of crossfeed, and I don't think it is essential.  If it were, it would be included in every DMP.
> 
> Finally something we agree on.


Linkwitz is really "easy" to implement in a headphone adapter, and having multiple cross-feed levels is easy. Your claim that it's "fatally flawed" is beyond me, but to each their own. I am not promoting "Linkwitz" only; I am promoting cross-feed generally. It just happens that Linkwitz is practical in headphone adapters. I also have a "line level" Meier between the pre-out and main-in of my amp, but it's one level only.  

Commercial products are often "stupid", because consumers aren't that wise.


----------



## 71 dB

pinnahertz said:


> (reply to 71 dB's lengthy post...)
> 
> Well. Wow.
> 
> I think you've just pegged the superciliometer with that one.



I might have misunderstood where you are coming from. It was difficult to see what it was you were criticizing. Sorry if it caused bad blood between us. I am not here to make enemies. I was very surprised by your opinions.


----------



## Strangelove424

pinnahertz said:


> Dynamic would mean the specific amount of crossfeed is altered by some characteristic of the input signal.



Are you saying channel separation and frequency are not characteristics of the input signal? Those are the variables that alter the amount of crossover and delay in the Meier system. 



pinnahertz said:


> I already described how I did it with a variable gain element.



I don't see that anywhere. I see a brief description of _what_ you did... "I played with dynamic change by adding variable gain control in the L-R because I found that widely separated material became over-compensated." And an account of others' failed attempts. But no explanation of _how_, and no description of a variable that your gain control responded to. 



71 dB said:


> Dynamic would mean the resistors and capacitors were dynamic and a function of channel separation. If you input Left only, you output left and right so that left > right. That tells you the cross feed level
> 
> level = 20*log10 (R/L). That level is constant.



Yes, a variable in the signal, a "function of channel separation". According to your own definition, the Meier design is dynamic.


----------



## 71 dB

Strangelove424 said:


> Yes, a variable in the signal, a "function of channel separation". According to your own definition, the Meier design is dynamic.


Dynamic in that sense, but not in the sense that history affects what happens now. Dynamic would mean cross-feed is stronger now because channel separation has been large, or vice versa.


----------



## pinnahertz (Sep 21, 2017)

Strangelove424 said:


> Are you saying channel separation and frequency are not characteristics of the input signal?


No, that would just be silly.


Strangelove424 said:


> Those are the variables that alter the amount of crossover and delay in the Meier system.


I don't see the variation in delay, and I only see a fixed crossfeed amount that varies in a linear relationship with amplitude and phase, and therefore the pan position of a sound.  My idea was to vary crossfeed based on more than just its position (amplitude and phase), in a nonlinear process.  I don't think Meier is doing that.



Strangelove424 said:


> I don't see that anywhere. I see a brief description of _what _you did... "I played with dynamic change by adding variable gain control in the L-R because I found that widely separated material became over-compensated." And an account of others' failed attempts. But no explanation of _how_, and no description of a variable that your gain control responded to.


Yes, that's what I did.  I'm not going to share the schematic; there's really no point at this stage.


Strangelove424 said:


> Yes, a variable in the signal, a "function of channel separation". According to your own definition, the Meier design is dynamic.


Well, to me since the amount of crossfeed is derived from a matrix, tracks the signal level exactly, and depends only on the relative amplitude and phase of a sound between channels, it's not dynamic, it's fixed.  In other words, the crossfeed signal has the same relationship to the primary signal regardless of level.


----------



## bigshot

pinnahertz said:


> You could just ignore it.



Great suggestion!


----------



## Strangelove424

71 dB said:


> Dynamic in that sense, but not in the sense that history affects what happens now. Dynamic would mean cross feed is stronger now because channels separation has been large or vice versa.



Dynamic means constant change or adaptation to the signal. I don't know why that would entail history. History implies stored data and adaptation over time. Artificial intelligence uses historical data to adapt, but I can't think of anything in audio that uses historic algorithms except Pandora's preference-learning algorithm or something like it. Can't begin to imagine how a crossover would use historic data, though.




pinnahertz said:


> I don't see the variation in delay, and I only see a fixed crossfeed amount that varies in a linear relationship with amplitude and phase and therefore the pan position of a sound.  My idea was to vary crossfeed based on more than just it's position, (amplitdued and phase), in a nonlinear process.  Meier isn't doing that I don't think.



Yes, to quote Meier again: "The more off-center the instrument is placed, the stronger the crossfeed and the longer its delay."



pinnahertz said:


> Well, to me since the amount of crossfeed is derived from a matrix, tracks the signal level exactly, and depends only on the relative amplitude and phase of a sound between channels, it's not dynamic, it's fixed.



I would consider a crossover fixed if it cannot vary its values (amplitude, frequency, timing) with the signal, and dynamic if it can. I've had rotten experiences with truly fixed crossovers. Meier is not a fixed crossover. Perhaps it's not adapting to the signal variables you want it to, but it is adapting nonetheless.


----------



## 71 dB

Strangelove424 said:


> Dynamic means constant change or adaptation to the signal. I don't know why that would entail history. History invokes stored data, and adaptation over time. Artificial intelligence uses historical data to adapt. But I can't think of anything in audio that uses historic algorithms except for Pandora's preference learning algorithm or something like it. Can't begin to imagine how a crossover would use historic data though.


You can't change the behavior very fast with the signal, because that would lead to noise-like artifacts. You need to slow down the change, and that requires integrating/averaging the signal with a reasonable time constant, perhaps 0.1 s or so. The channel difference signal (L-R) behaves quite randomly in the short term. At a given moment it can be almost anything. I have tried dynamic cross-feeding in Audacity and found it tricky, so I dropped the idea. I don't know how pinnahertz managed to build a dynamic cross-feeder by himself.


----------



## pinnahertz

Strangelove424 said:


> I would consider a crossover fixed if it cannot vary its values (amplitude, frequency, timing) with the signal, and dynamic if it can. I've had rotten experiences with truly fixed crossovers. Meier is not a fixed crossover. Perhaps it's not adapting to the signal variables you want it to, but it is adapting nonetheless.


Two things to make clear.  First, the Meier crossfeed is not adaptive: the crossfeed keeps a fixed relationship to the original, derived from a sound's relative channel level and phase.  The amount of crossfeed signal remains a fixed ratio to the original regardless of level.  That's not dynamic, that's fixed.

Second, I said my experiment with dynamic crossfeed didn't work well.  It failed.  That's why it's not in the Meier algorithm.  At this point I'm not advocating dynamic crossfeed, I'm just explaining the difference between it and a fixed crossfeed.  The Meier algorithm should work better than the Linkwitz because delay is involved, and crossfeed depends on relative channel level.


----------



## pinnahertz (Sep 22, 2017)

71 dB said:


> You can't change the behavior very fast with the signal, because that would lead to noise-like artifacts. You need to slow down the change, and that requires integrating/averaging the signal with a reasonable time constant, perhaps 0.1 s or so.


Um...no...that's not true.  But I don't think another exercise in countering superciliousness is worth my time.


71 dB said:


> The channel difference signal (L-R) behaves quite randomly in the short term. At a given moment it can be almost anything.


Um..again...no: if the signal you're trying to crossfeed is hard-panned L, the L-R signal will be mostly L and will follow the envelope of the dominant signal.


71 dB said:


> I have tried dynamic cross-feeding in Audacity and found it tricky, so I dropped the idea. I don't know how pinnahertz managed to build a dynamic cross-feeder by himself.


I built it with analog processing many, many years ago: analog hardware level detectors, gain control elements, etc. I have a background in audio processing. It didn't work well for many reasons, but noise modulation wasn't one of them.

If I were to do it today in DSP it would involve a complex program-dependent time constant and more than one band of dynamic gain control.  Except I wouldn't, because the whole idea is wrong!


----------



## RRod

Count me in the "why would you want dynamic crossfeed" category. I love crossfeed, though; lots of bang for the buck. Best results I got were altering the parameters to match up with the speaker positioning implied by the headphone's frequency response, which meant my HD800s liked a bit more aggressive crossfeed than my PM-3s.


----------



## 71 dB

pinnahertz said:


> Um...no...that's not true.  But I don't think another exercise in countering superciliousness is worth my time.
> Um..again...no, if the signal you're trying to crossfeed is hard-panned L, the L-R signal with be mostly L and follow the envelope of the dominant signal.



Of course L-R closely follows L if L is dominant. I have been thinking about this LR2MS transform:

a = 1/SQRT(2) = 0.707…

M = a*(L+R) = "Mono part"
S = a*(L-R) = "Side part"

Now, S can be compressed and filtered. Let's call the result Sc. We can then calculate the new processed stereo signal using the inverse transform (MS2LR):

L' = a*(M+Sc)
R' = a*(M-Sc)

This signal has more constant stereophonic width than the original, hopefully just under the amount that would cause spatial distortion.
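For anyone who wants to experiment, the transform round trip above can be sketched in a few lines of NumPy. The fixed `side_gain` here is only a hypothetical stand-in for the compression/filtering of S described in the post:

```python
import numpy as np

a = 1 / np.sqrt(2)  # normalization; makes the round trip unity gain

def ms_narrow(L, R, side_gain=0.5):
    """Narrow a stereo signal by attenuating the side (L-R) part.

    side_gain is a placeholder for the compression/filtering of S;
    a real implementation would process S dynamically, not with a
    fixed gain.
    """
    M = a * (L + R)      # "Mono part"
    S = a * (L - R)      # "Side part"
    Sc = side_gain * S   # stand-in for the processed side signal
    Lp = a * (M + Sc)    # inverse transform back to left/right
    Rp = a * (M - Sc)
    return Lp, Rp

# Hard-panned-left test signal: all energy in L.
L = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))
R = np.zeros_like(L)
Lp, Rp = ms_narrow(L, R)
# With side_gain=0.5 the output is Lp = 0.75*L, Rp = 0.25*L:
# some of L bleeds into the right channel, narrowing the image.
print(np.allclose(Lp, 0.75 * L), np.allclose(Rp, 0.25 * L))  # → True True
```

Note that with `side_gain=1` the round trip reproduces the input exactly, which is a handy check that the transform pair is consistent.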



pinnahertz said:


> I built it with analog processing many, many years ago.  Analog hardware level detectors, gain control elements, etc. I have background in audio processing. It didn't work well for many reasons, but noise modulation wasn't one of them.



That is very cool you tried it and I can understand why it didn't work.



pinnahertz said:


> If I were to do it today in DSP it would involve a complex program-dependent time constant and more than one band of dynamic gain control.  Except I wouldn't because the whole idea is wrong!



I also feel that dynamic cross-feed is perhaps going too far. Constant cross-feed seems to do the magic in removing spatial distortion, and being constant doesn't mess things up the way a dynamic solution could.


----------



## 71 dB

pinnahertz said:


> The Meier algorithm should work better than the Linkwitz because delay is involved, and crossfeed depends on relative channel level.


Meier and Linkwitz have different approaches to cross-feed. Linkwitz tries to simulate stereo loudspeaker listening, the idea being that since most recordings are mixed for loudspeakers, simulating that should work. Meier, on the other hand, redistributes sounds from left to right depending on how they are panned in the recording, as if we had many loudspeakers next to each other, each playing different parts (instruments) of the recording.

I find Meier more vivid/aggressive/wide than Linkwitz, which is more relaxed/calm/narrow/forward, so they have their own strengths. Meier is an "H"-topology cross-feeder, which makes it mono-neutral (it does absolutely nothing to a mono signal), but adding multiple cross-feed levels is difficult. Linkwitz is an "X"-topology cross-feeder, so it's not mono-neutral, but adding multiple cross-feed levels is easy. Linkwitz is also easy to incorporate into a headphone adapter, because it's quite flexible impedance-wise. Meier isn't.
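For readers who want to hear the basic effect, the generic digital idea common to these cross-feeders — a lowpass-filtered, attenuated copy of the opposite channel mixed into each side — can be sketched as below. This is neither Linkwitz's nor Meier's exact design, and the cutoff and level defaults are illustrative guesses only:

```python
import numpy as np

def simple_crossfeed(L, R, fs=44100, cutoff=700.0, level_db=-8.0):
    """Generic cross-feed sketch: mix a lowpass-filtered, attenuated
    copy of the opposite channel into each side.  Not Linkwitz's or
    Meier's actual topology; parameters are illustrative.
    """
    a = np.exp(-2 * np.pi * cutoff / fs)  # one-pole lowpass coefficient
    g = 10 ** (level_db / 20)             # cross-feed level as linear gain

    def lowpass(x):
        # Simple one-pole IIR lowpass, standing in for head shadowing.
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = (1 - a) * s + a * acc
            y[i] = acc
        return y

    return L + g * lowpass(R), R + g * lowpass(L)
```

A hard-panned sound then appears at reduced level in the opposite ear, mostly at low frequencies, which is roughly what loudspeaker listening does acoustically.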


----------



## Arpiben

71 dB said:


> Of course L-R follows closely L if L is dominant. I have been thinking this: LR2MS transform:
> 
> a = 1/SQRT(2) = 0.707…
> 
> ...



And what about if real signals are more like this:
L(t) = |L(t)|·e^(jφ_L(t))
R(t) = |R(t)|·e^(jφ_R(t))
In other words, what about phase?

To be clear, I have no knowledge of cross-feed. Maybe by simplification, maybe for some other reason, you are not taking phase into account.
I am just curious.


----------



## Strangelove424

pinnahertz said:


> Two things to make clear.  First, the Meier crossfeed is not adaptive, crossfeed remains the same ratio to the original affected by a fixed relationship derived from a sound's relative channel level and phase.  The amount of crossfeed signal remains a fixed ratio to the original regardless of level.  That's not dynamic, that's fixed.



The Meier algorithm itself is fixed, but the amount of crossfeed (and delay) varies with the signal. If you wanted the algorithm itself to vary, what variable would it change according to?



pinnahertz said:


> Second, I said my experiment with dynamic crossfeed didn't work well.  It failed.  That's why it's not in the Meier algorithm.  At this point I'm not advocating dynamic crossfeed, I'm just explaining the difference between it and a fixed crossfeed.  The Meier algorithm should work better than the Linkwitz because delay is involved, and crossfeed depends on relative channel level.



Well, I didn’t push for schematics. You could easily mention what input variable your gain responded to, but it’s ultimately your design, and I’m certainly not trying to force you to reveal anything you don’t want to. As long as you’re not referring to it here as a firm example of dynamic crossover, its success or failure is neither here nor there. 

But successful implementation aside, on a completely theoretical level, what would a truly dynamic crossfeed function like, and how would it improve crossfeed performance? 



pinnahertz said:


> It failed.  That's why it's not in the Meier algorithm.



Do you design for Meier?


----------



## Strangelove424

71 dB said:


> You can't change the behavior very fast with the signal, because that would lead to noise-like artifacts. You need to slow down the change, and that requires integrating/averaging the signal with a reasonable time constant, perhaps 0.1 s or so. The channel difference signal (L-R) behaves quite randomly in the short term. At a given moment it can be almost anything. I have tried dynamic cross-feeding in Audacity and found it tricky, so I dropped the idea. I don't know how pinnahertz managed to build a dynamic cross-feeder by himself.



Ok, understandable, but averaging does not distinguish fixed from dynamic. I'm still trying to figure out what you guys have in mind when you say "dynamic crossfeed", or more specifically what variables you want crossfeed to respond to.


----------



## castleofargh

my timid attempts at playing with mid/side were really not worth the effort. can't say it's wrong or bad in general, only that it was in my hands ^_^. 
I thought about applying an EQ to the middle signal so that I would perceive the singer in front of me, not up or down, and then trying different options to make something that would still retain channel separation for the rest and apply some close-enough HRTF profile for 30° speakers. 
most attempts ended in clear mistakes from me being too ignorant. delays and/or level issues were indeed among them, that and having the same instrument in 2 places at once in my mind when failing to completely filter it out of the mid channel... all clearly noob mistakes, and I went for most of them.

soon enough I was thinking something well known to people with no willpower and no achievements, "F it, I give up!". and since then I've been waiting for the Realiser A16 (typical lack of brain compensated with money). what I use ATM is to simply apply the HRTF impulses for 30° with a true stereo convolver. it gives marginally better results than Xnor's crossfeed with settings for my big empty head, but it was such a pain to try a bunch of HRTF impulses one after the other until one felt ok for my subjective image of instrument placements. if I were to start from scratch again, I'd enjoy basic crossfeed while eating Pringles instead.


----------



## 71 dB

Arpiben said:


> And what about if real signals are more like this:
> L(t) = |L(t)|·e^(jφ_L(t))
> R(t) = |R(t)|·e^(jφ_R(t))
> In other words, what about phase?
> ...


Real signals aren't like that. You get that form if you apply a Hilbert transform to make the signal complex (analytic). Phase, however, is of course real. What do you mean by taking phase into account?


----------



## 71 dB

Strangelove424 said:


> Ok, understandable, but averaging does not differentiate fixed or dynamic. I'm still trying to figure out what you guys have in mind when you say "dynamic crossfeed" or more specifically what variables you want crossfeed to respond to.



All normal cross-feeders (Linkwitz, Meier, etc.) are constant and they do their job. Pinnahertz built an analog dynamic cross-feeder long ago and it didn't work that well. I have thought about dynamic cross-feed on a theoretical level and found it troublesome and iffy. Since constant cross-feeders do their job (remove/reduce spatial distortion), there is no real need for dynamic cross-feeders, is there?


----------



## Arpiben

71 dB said:


> Real signals aren't like that. You get that if you do Hilbert transformation to make the signal complex. Phase however is of course real. What do you mean taking phase into account?



Well, my mistake. At first glance I thought you were dealing only with the real part of the L & R signals and leaving phase aside.


----------



## Strangelove424

71 dB said:


> I have thought about dynamic cross-feed on a theoretical level and found it troublesome and iffy. Since constant cross-feeders do their job (remove/reduce spatial distortion), there is no real need for dynamic cross-feeders, is there?



I don't know if there is a real need or not, since the concept of a dynamic crossfeed is purely theoretical at this point, and I'm still trying to get an idea of how it would behave hypothetically. Since the Meier plugin was a recent discovery for me, and the amps upon which they're based relatively new as well, I am open to the idea that crossover technique is something that can still be improved upon greatly.


----------



## pinnahertz

...and so it goes.


----------



## pinnahertz

Strangelove424 said:


> I don't know if there is a real need or not, since the concept of a dynamic crossfeed is purely theoretical at this point, and I'm still trying to get an idea of how it would behave hypothetically. Since the Meier plugin was a recent discovery for me, and the amps upon which they're based relatively new as well, I am open to the idea that crossover technique is something that can still be improved upon greatly.


Yes, improved upon.  The problem we seem to have is that the current common (some say normal) approaches are pretty huge approximations and therefore imperfect.  What results is something that covers the range from subjectively objectionable to miraculous.  I'd say there's room for improvement in there.


----------



## pinnahertz

Strangelove424 said:


> The Meier algorithm itself is fixed, but the amount of crossfeed (and delay) is dynamic to the signal. If you wanted the algorithm itself to vary, what variable would it change according to?


Backtracking again... I said it didn't work.  I tried varying crossfeed based on program dynamics and separation. 



Strangelove424 said:


> Well, I didn’t push for schematics. You could easily mention what input variable your gain responded to, but it’s ultimately your design, and I’m certainly not trying to force you to reveal anything you don’t want to. As long as you’re not referring to it here as a firm example of dynamic crossover, its success or failure is neither here nor there.


When you ask for details, where else would I go?  Block diagrams and schematics.  Why would I publish that if it didn't work?

I don't view my experiments as a failure, they provided valuable information.   As an improvement to fixed crossfeed, no I didn't think it accomplished the goal universally. 


Strangelove424 said:


> But successful implementation aside, on a completely theoretical level, what would a truly dynamic crossfeed function like, and how would it improve crossfeed performance?


I have no idea.  My particular goal was to create a processor that would do the crossfeed thing well but provide more consistent results when handling widely varying material.  I still think there's a point to doing that, but I also think there's a relatively microscopic market.


Strangelove424 said:


> Do you design for Meier?


Assuming you meant that as a compliment...thanks, but no.

There are actually a number of broadcast audio processors that perform a sort of crossfeed on a dynamic basis.  The goal is more consistent on-air sound.  I've tried them, and pretty much hate them all. They now even include algorithms for "fixing" mp3 compression artifacts!  And they work just about as well. 

We definitely live in a "just because you can doesn't mean you should" world now.


----------



## 71 dB

Arpiben said:


> Well my mistake, I was at first glance thinking you were only dealing with real part of L&R signals and letting aside phase.


You can think about phase with simple signals such as a sinusoid, but music is a different story.

If phase difference between L and R increases, (L+R) decreases and (L-R) increases.
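That relationship is easy to verify numerically with two sinusoids whose phase offset grows (the frequency and sample count here are arbitrary choices for illustration):

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs   # one second of samples
f = 50.0                 # integer number of cycles -> exact RMS values

def rms(x):
    return np.sqrt(np.mean(x ** 2))

L = np.sin(2 * np.pi * f * t)
for phase_deg in (0, 90, 180):
    R = np.sin(2 * np.pi * f * t + np.deg2rad(phase_deg))
    # As the phase offset grows, (L+R) shrinks and (L-R) grows.
    print(phase_deg, round(rms(L + R), 3), round(rms(L - R), 3))
```

At 0° offset all the energy is in the sum; at 180° it has all moved to the difference, with the crossover point at 90°.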


----------



## 71 dB

Strangelove424 said:


> I don't know if there is a real need or not, since the concept of a dynamic crossfeed is purely theoretical at this point, and I'm still trying to get an idea of how it would behave hypothetically. Since the Meier plugin was a recent discovery for me, and the amps upon which they're based relatively new as well, I am open to the idea that crossover technique is something that can still be improved upon greatly.



The problem with improving cross-feed is that you jump from simple circuits to DSP processors convolving HRTF files with the music and what not. That's not something everyone is willing to do. Simple cross-feed removes spatial distortion, taking headphone listening to a whole new level.

The correct way to improve things would be to have real binaural recordings for headphones, but we don't have that many, because binaural recordings sound weird on loudspeakers. So, the recordings are mostly for loudspeakers and we cross-feed them for headphones to get rid of spatial distortion.


----------



## pinnahertz

71 dB said:


> The problem with improving cross-feed is that you jump from simple circuits to DSP processors convolving HRTF files with the music and what not. That's not something everyone is willing to do. Simple cross-feed removes spatial distortion, taking headphone listening to a whole new level.
> 
> The correct way to improve things would be to have real binaural recordings for headphones, but we don't have that many, because binaural recordings sound weird on loudspeakers. So, the recordings are mostly for loudspeakers and we cross-feed them for headphones to get rid of spatial distortion.


This... after all we've posted....

Well, your opinion is not shared by everyone.  Simple cross-feed does not remove spatial distortion, it may reduce it.  The "whole new level" may be higher (better) or lower (worse) than a given recording without it.

Binaural recordings are correct only for a listener matching the specific HRTF that the recording was made with.  Other listeners' results vary widely, all the way to unacceptable.

We don't "get rid of spatial distortion" with cross-feed because the solution is only a general approximation of the inverse of the problem to begin with, but we may reduce it in some cases.

Every single DAP has at least a little DSP capability on board, certainly enough for improved cross-feed.  It goes without saying PCs could handle the process.  I'm surprised we aren't inundated with advanced, capable, and fully adjustable cross-feed plug-ins.  It can't be because of complexity, and the headphone listening market is huge and growing.  So why is cross-feed not universal at this late date?  Could it be that it's not universally desired or accepted?  I don't know, but I'm in the "not universally desired" camp.


----------



## 71 dB

A wide modification of Linkwitz using a fixed -3 dB cross-feed level and an additional treble cross-feed (channel difference limiter) at -25 dB works well with varying material. The wider you make the cross-feeder, the smaller the need to vary the cross-feed level for different material. For 30° it is about -11 dB … -1 dB. For 60° it is probably -5 dB … -2 dB, and for 90° it's just -3 dB.

Wide Linkwitz is Meier-like and actually pans sounds similarly, but it avoids the "aggressive" nature of Meier. The only downside seems to be that the soundstage isn't that deep and forward, but well-recorded music contains enough spatial cues to create a feeling of depth. Those who don't like cross-feed because it reduces the apparent width of the sound may find wide Linkwitz pleasing, because it does hardly anything to the width, just removes spatial distortion.

Wide cross-feed is an easy modification to "normal" Linkwitz: you recalculate the capacitors in the cross-feed section to lower the cross-feed cut-off frequency to 300-400 Hz (roughly double the capacitance) and add a resistor between the left and right channels for the treble cross-feed. If the resistors between output and ground are R, then this treble cross-feed resistor is about 17*R. The cross-feed level must of course be tuned to -3 dB. Below is the schematic (sorry, quick hand-drawing) of my wide Linkwitz headphone adapter:






R1 = 120 Ω
R2 = 2.2 Ω
R3 = 220 Ω
R4 = 100 Ω
R5 = 70 Ω (82 Ω and 470 Ω in parallel)
R6 = 37 Ω (22 Ω + 15 Ω)
C1 = 1.17 µF (three 390 nF capacitors in parallel)
C2 = 11.5 µF (6.8 µF and 4.7 µF in parallel)
K1 = 2 x ON/ON switch for cross-feed on/off.
The effective output impedance = R2 = 2.2 Ω, which is low enough for practically all headphones in the world (the most demanding cans need 4-5 Ω at most, so this is about half of that). One can add another switch in series with R6 to bypass the treble cross-feed, but this is the most versatile cross-feeder I know of that has only an on/off switch. This is the type of cross-feeder you can forget exists and just concentrate on enjoying the music, because it works with almost any material imo.
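As a quick sanity check of the 17*R rule above, treating the treble cross-feed path as a plain resistive divider (an assumption on my part, since the hand-drawn schematic isn't reproduced here) lands almost exactly on the quoted -25 dB:

```python
import math

# Opposite-channel signal through ~17*R into a node shunted by R to
# ground: the divider ratio is R / (R + 17*R) = 1/18.
k = 17
attenuation = 1 / (1 + k)
level_db = 20 * math.log10(attenuation)
print(round(level_db, 1))  # → -25.1, close to the -25 dB target
```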


----------



## 71 dB

pinnahertz said:


> This... after all we've posted….



So, should everyone just agree with you 100 % after all we've posted? We have a different approach: You are after perfection or something like that no matter what it takes or costs. I am after the most bang for the buck. Most of us aren't millionaires, so "most bang for the buck" is a rational approach to us.



pinnahertz said:


> Well, your opinion is not shared by everyone.  Simple cross-feed does not remove spatial distortion, it may reduce it.  The "whole new level" may be higher (better) or lower (worse) than a given recording without it.



Increasing the cross-feed level makes the signal more mono-like, and a mono signal contains zero spatial distortion. Proper cross-feed reduces the channel difference enough to make the sound free of spatial distortion. Too little cross-feed only reduces spatial distortion, but that is not proper cross-feed. Some recordings do sound better without cross-feed, but my experience is that that's something like 2 % of recordings (of my music collection anyway), and I have the off switch for those cases. So, for the 98 % of my music cross-feed makes things better, and for those 2 % I bypass cross-feed and my headphone adapter does nothing.

If you have developed better solutions then good for you, but I am very happy with the solutions I have. You haven't presented those better solutions much have you?



pinnahertz said:


> Binaural recordings are correct only for the listener matching the specific HRTF that the recording was made with.  Other listener results vary quite widely all the way to unacceptable.



That's one problem with them, but in general binaural recordings work quite well for anyone, just not perfectly unless you have the "correct" head shape/size.



pinnahertz said:


> We don't "get rid of spatial distortion" with cross-feed because the solution is only a general approximation of the inverse of the problem to begin with, but we may reduce it in some cases.



Why you insist on this is beyond me. Proper cross-feed removes the uncomfortable sensation and messiness of the stereo image that spatial distortion causes; hence I say proper cross-feed removes spatial distortion. Too little cross-feed only _reduces_ spatial distortion (the uncomfortable sensations and messiness of the stereo image remain, but are weaker), but even that is a big plus. My first cross-feeder was a one-level (-8.7 dB) model and it was too weak for many recordings, but even it revolutionized my headphone listening. I recognized the need for stronger (and weaker) cross-feed levels and modified it to 3 levels before building a completely new 6-level model. My approach is to keep things as simple as possible and only increase complexity if it's _relevant_.  I believe that is a healthy approach in audio, and in life in general.



pinnahertz said:


> Every signal DAP has at least a little DSP capability, certainly enough for improved cross-feed, on board.  It goes without saying PCs could handle the process.  I'm surprised we aren't inundated with advanced, capable, and fully adjustable cross-feed plug-ins.  It can't be because of complexity, and the headphone listening market is huge and growing.  So why is cross-feed not universal at this late date?  Could it be it's not universally desired or accepted?  I don't know, but I'm in the "not universally desired" camp.



I'm here to promote all kinds of cross-feed. If you have DSP capability then of course it's good to use it to cross-feed. Your posts questioning especially the Linkwitz cross-feeder do not have a positive effect on people recognizing the benefits of cross-feed. The reality is that most people are completely ignorant about audio. The concept of cross-feed is VERY difficult for people to understand. So, it takes a lot of education.

Nokia Lumia phones had Dolby Headphone processing, which is effectively cross-feeding, but people wanted iPhones and Android phones, so Windows phones lost.


----------



## pinnahertz

71 dB said:


> So, should everyone just agree with you 100 % after all we've posted?


No, of course not. But you state everything as immutable fact when the entire nature of generalized cross-feed is an approximation, with the results being entirely subjective. Don't expect everyone to agree with you either!


71 dB said:


> We have a different approach: You are after perfection or something like that no matter what it takes or costs. I am after the most bang for the buck. Most of us aren't millionaires, so "most bang for the buck" is a rational approach to us.


 You've misunderstood my "approach" entirely. Not surprising.


71 dB said:


> If you have developed better solutions then good for you, but I am very happy with the solutions I have. You haven't presented those better solutions much have you?


A careful reading of my posts would reveal that I've stated clearly that I have not developed a better solution.  Given that, I have nothing specific to present. But since I have worked on the problem, I recognize the failings of the approach you advocate, having tried it myself.


71 dB said:


> That's one problem with them, but in general binaural recordings work quite well for anyone, just not perfectly unless you have the "correct" head shape/size.


And in my opinion they universally present a novel but incomplete perspective unless made with the individual's HRTF. I've actually made binaural recordings both with an artificial head and pinna, and with my own head and pinna. The difference is not small, and forms the basis of my conjecture regarding the failings of recordings made with a generalized average HRTF.



71 dB said:


> Why you insist this is beyond me. Proper cross-feed removes the uncomfortable sensation and messiness of stereo image spatial distortion causes, hence I say proper cross-feed removes spatial distortion. Too little cross-feed only _reduces_ spatial distortion (uncomfortable sensations and messiness of stereo image remain, but are weaker), but even that is a big plus. My first cross-feeder was a one-level (-8.7 dB) model and it was too weak for many recordings, but Your posts questioning especially the Linkwitz cross-feeder do not have a positive effect on people recognizing the benefits of cross-feed.


I object to subjective opinion being stated as scientific fact, and frankly the arrogance of insisting on absolutes, when clearly you're working with a very significant approximation whose results could only be subjective, doesn't help either.


----------



## 71 dB

pinnahertz said:


> No, of course not. But you state everything as immutable fact when the entire nature of generalized cross-feed is an approximation, with the results being entirely subjective. Don't expect everyone to agree with you either!



I try to base my opinions on science, which is more or less objective. I don't want to promote BS. Cross-feed is an approximation and I never said it isn't, but that doesn't mean it doesn't remove spatial distortion, because it does. It is science based on human spatial hearing. Cross-feed doesn't give an exact stereo image along the lines of instruments being in exactly correct places and angles etc. It puts instruments in approximately correct places instead of those instruments spreading all over the place due to spatial distortion. That is what is _relevant_ and that's what we can relatively easily fix. You can keep whining, but I am happy with it.

I don't expect narrow-minded "purists" to agree with me. I don't expect besserwissers (know-it-alls) to agree with me. I expect open-minded people who are interested in improving their headphone listening experience, concentrating on relevant "bang for the buck" things, to be open to what I say.




pinnahertz said:


> You've misunderstood my "approach" entirely. Not surprising.



Sorry. I am new to this forum and I am only getting to know you based on what you say. So far your approach has been rather "harassing", and other members here seem much nicer and more relaxed. 80 % of my energy and time here goes to defending myself against your attacks (including this post!).



pinnahertz said:


> A careful reading of my posts would reveal that I've state clearly that I have not developed a better solution.  Given that, I have nothing specific to present. But since I have worked on the problem, I recognize the  failings of the approach you advocate, having tried that myself.



I did mention earlier that I have studied the problem of dynamic cross-feed too. I have also said I think dynamic cross-feed is "going too far" and it's best to settle for "normal" cross-feed methods, as those, when done properly, fix the _relevant_ problem: spatial distortion and messiness of the sound image. In audio you can't have everything 100 %, so it's best to know how to compromise in an optimal way. Understanding helps with that, and I have spent the last 5-6 years studying cross-feeding, so I'd say I know something about the issue. What I don't know is something I want to learn.



pinnahertz said:


> And in my opinion they universally present a novel but incomplete perspective unless made with the individual's HRTF. I've actually made binaural recordings both with an artificial head and pinna, and with my own head and pinna. The difference is not small, and forms the basis of my conjecture regarding the failings of recordings made with a generalized average HRTF.


Tell me something I don't know. Good luck finding your favorite music recorded using your own head. I know that's impossible, so I settle for cross-feeding recordings and having, if not 100 % authentic sound immersion, something that is enjoyable and free of the problems generated by excessive stereo separation.



pinnahertz said:


> I object to subjective opinion being stated as scientific fact, and frankly the arrogance of insisting absolutes when clearly you're working with a very significant approximation with results that could only be subjective, doesn't help either.



My subjective opinions on this issue are based on scientific facts/understanding and subjective experiences, so you can call them semi-objective if you want. Sometimes an approximation is all it takes to solve a problem.


----------



## bigshot (Sep 23, 2017)

71 dB said:


> So, should everyone just agree with you 100 % after all we've posted? We have a different approach: You are after perfection or something like that no matter what it takes or costs. I am after the most bang for the buck. Most of us aren't millionaires, so "most bang for the buck" is a rational approach to us.



Welcome to the rational rationalists club!

One of the things I've noticed about audiophiles is that some people never get past "This porridge is too hot" and "This porridge is too cold" to arrive at "This porridge is just right." The sound can't be good because it isn't a high sampling rate... Your headphones don't perform well above 20kHz so they are crappy... Until you master the time domain, nothing can sound good... I used to think only dyed-in-the-wool audiophools lay awake at night worrying about things that were completely inaudible, but I've come to learn that there's a faction of sound scientists that are just as absolutist on the opposite extreme. Your calibration is never precise enough... I can tell your system sounds like a dog's breakfast by looking at a photo of it... You aren't paying enough attention to my pet theory so you are absolutely wrong... Someone the other day was referring to it as "Audiophilia Nervosa". It appears to afflict people of all home audio religions and creeds.


----------



## 71 dB

bigshot said:


> Welcome to the rational rationalists club!



Thanks!


----------



## pinnahertz

71 dB said:


> I try to base my opinions on science which is more or less objective. I don't want to promote BS. Cross-feed is an approximation and I never said it isn't,


"Cross-feed removes spatial distortion" doesn't sound like it's an approximation.


71 dB said:


> I expect open-minded people who are interested in improving their headphone listening experience concentrating on relevant "bang for the buck" things being open to what I say.


But then won't you please extend that same open mindedness to those who approached your solution with an open mind but don't have the same reaction as you?



71 dB said:


> Sorry. I am new to this forum and I am only learning to know you based on what you say. So far your approach has been rather "harassing." and other members here seem much nicer and more relaxed. 80 % of my energy and time here goes defending myself against your attacks (including this post!).


This is the Sound Science forum; we need facts backed with proof, and opinions differentiated from facts.


71 dB said:


> I did mention earlier that I have studied the problem of dynamic cross-feed too.
> I have also said I think dynamic cross-feed is "going too far" and it's best to settle with "normal" cross-feed methods, as those, when done properly, do fix the _relevant_ problem: spatial distortion and messiness of the sound image. In audio you can't have everything 100 %, so it's best to know how to compromise in an optimal way. Understanding helps with that, and I have spent the last 5-6 years studying cross-feeding, so I'd say I know something about the issue. What I don't know is something I want to learn.


Yes, we mostly agree on this, but even though I wasn't happy with my results with dynamic cross-feed, I do think there may be some program-dependent adjustment that may be beneficial. I may have perceived a gap in your understanding of dynamic audio processing, but it's kind of a moot point since neither of us cares for our results anyway.


71 dB said:


> Tell me something I don't know. Good luck finding your favorite music recorded using your own head.


Sarcasm noted.  Silly, but noted.


71 dB said:


> I know that's impossible, so I settle for cross-feeding recordings and having, if not 100 % authentic sound immersion, something that is enjoyable and free of the problems generated by excessive stereo separation.


I recognize that as your opinion. Thanks.


71 dB said:


> My subjective opinions on this issue are based on scientific facts/understanding and subjective experiences, so you can call them semi-objective if you want. Sometimes approximation is all it takes to solve a problem.


I'll stick with calling them your subjective opinion. I didn't see a lot of scientific fact, but plenty of personal opinion and preference. I'm not hearing any statistical evidence of preference across a population group (as we have with target curves, for example). I am hearing one guy making opinions sound like fact through sheer conviction. That's not very scientific or open-minded.


----------



## 71 dB

Pinnahertz, what should I say to make you happy? Should I say that people should not use cross-feed because it's just approximative? What do you want? Frankly, I am done with you and this goes on in circles.


----------



## pinnahertz

71 dB said:


> Pinnahertz, what should I say to make you happy? Should I say that people should not use cross-feed because it's just approximative? What do you want? Frankly, I am done with you and this goes on in circles.


Just stop saying that crossfeed is so great and so absolute, as if it's universal fact. If some decide it's not for them, or they don't like what it does, don't jump all over their case. My main objection is your use of absolute terms when clearly the entire thing is an approximation with subjective results.


----------



## 71 dB (Sep 23, 2017)

pinnahertz said:


> Just stop saying that crossfeed is so great and so absolute, as if it's universal fact. If some decide it's not for them, or they don't like what it does, don't jump all over their case. My main objection is your use of absolute terms when clearly the entire thing is an approximation with subjective results.


Are you saying that the science of human spatial hearing does not objectively support use of cross-feed? It was this scientific aspect that made me realize the need of cross-feed in headphone listening.

My SUBJECTIVE opinion is that approximation works really well in cross-feed. Happy?


----------



## RRod

71 dB said:


> Are you saying that the science of human spatial hearing does not objectively support use of cross-feed? It was this scientific aspect that made me realize the need of cross-feed in headphone listening.
> 
> My SUBJECTIVE opinion is that approximation works really well in cross-feed. Happy?



The perfect is the enemy of the very good.


----------



## bigshot

And only God is perfect.


----------



## pinnahertz

71 dB said:


> Are you saying that the science of human spatial hearing does not objectively support use of cross-feed?


Did I say that?  Read again.


71 dB said:


> It was this scientific aspect that made me realize the need of cross-feed in headphone listening.


Yep, me too.


71 dB said:


> My SUBJECTIVE opinion is that approximation works really well in cross-feed. Happy?



Now, was that so hard?


----------



## bigshot

bigshot said:


> And only God is perfect.





pinnahertz said:


> Yep, me too.



I was thinking about doing a line by line pick apart but I decided to let it speak for itself!


----------



## leeperry (Sep 23, 2017)

VNoPhones VST remains king in my book, and yes, xfeed is mandatory for me; otherwise my brain wonders wth it's hearing in dual mono and not stereo, huh.


----------



## Strangelove424

pinnahertz said:


> Assuming you meant that as a compliment...thanks, but no.
> 
> There are actually a number of broadcast audio processors that perform a sort of crossfeed on a dynamic basis.  The goal is more consistent on-air sound.  I've tried them, and pretty much hate them all. They now even include algorithms for "fixing" mp3 compression artifacts!  And they work just about as well.



No, just a simple question. Why would you assume it was a compliment? I have no experience with your gear, so I am in no position to make a judgment. I asked because you said earlier "my experiment with dynamic crossfeed didn't work well. It failed. That's why it's not in the Meier algorithm." I was curious how it would get into the Meier algorithm in the first place unless you designed it for Meier.



pinnahertz said:


> We definitely live in a "just because you can doesn't 'mean you should" world now.



Capability vs. taste; sometimes it's hard to figure out which was exceeded. With modern capabilities it tends toward taste.


----------



## pinnahertz

Strangelove424 said:


> No, just a simple question. Why would you assume it was a compliment?


I respect the company, and  like to assume the best intentions of people.  My mistake. 


Strangelove424 said:


> I have no experience with your gear, so am in no position to make a judgment. I asked because you said earlier "my experiment with dynamic crossfeed didn't work well. It failed. That's why it's not in the Meier algorithm." I was curious how it would get into the Meier algorithm in the first place unless you designed for Meier.


Sorry, the way I stated that was indeed misleading.  What I meant is it didn't work, and so Meier wouldn't have included it for that reason.  I don't mean to imply I have any influence over what Meier does.


----------



## pinnahertz

bigshot said:


> I was thinking about doing a line by line pick apart but I decided to let it speak for itself!


I only do it because I know bigshot hates it.


----------



## Strangelove424

No worries, sorry if I was confused. I respect Meier too. They're one of the names that stood out to me in the amp market, though a bit expensive for my budget. That's one of the reasons I was glad to find a plugin version of their crossfeed.


----------



## 71 dB (Sep 24, 2017)

pinnahertz said:


> Now, was that so hard?



No, not hard at all, but mentioning the word "subjective" makes it sound as if there weren't objective scientific facts backing up what I say. It would be like trying to educate ignorant people about climate change saying "well, it's my subjective opinion that climate change is real."

Also, are we supposed to give our subjective opinion about cross-feed in this thread, or are we supposed to give our subjective opinion about what we think science says objectively about cross-feed? There is subjectivity somewhere to make the topic even meaningful, because without subjectivity the topic of this thread would be like "Is two plus two four or five? That is the question…"


----------



## pinnahertz (Sep 24, 2017)

71 dB said:


> No, not hard at all, but mentioning the word "subjective" makes it sound as if there weren't objective scientific facts backing up what I say. It would be like trying to educate ignorant people about climate change saying "well, it's my subjective opinion that climate change is real."


But if you said "Climate change is real!", would you not expect someone to say, "Show us the evidence!"? And if we had data that showed a global average temperature increase charted over time, then we have some objective data to support the statement.

You said, "Cross-feed is a miraculous improvement! Takes headphone listening to a whole new level!" but the data to back that up was one guy's opinion, stated firmly with conviction.  That's not objective data, so there's no support for the hypothesis that your cross-feed is perceived as an improvement by anyone other than you, let alone the other questions about when it's an improvement and how much.


71 dB said:


> Also, are we supposed to give our subjective opinion about cross-feed in this thread, or are we supposed to give our subjective opinion about what we think science says objectively about cross-feed? There is subjectivity somewhere to make the topic even meaningful, because without subjectivity the topic of this thread would be like "Is two plus two four or five? That is the question…"


If you post an opinion as fact backed by science and someone doesn't share your view, be prepared to offer scientific data to back up your claim.  If you don't have it, then post as opinion. Subjective results are easily quantified into objective data, which can then be simply compiled into an average, but that takes some organized testing.

Nobody has disputed how a cross-feed system does what it does.  What has been disputed is your evaluation of the results...how well cross-feed does what it does.  If this had been a peer-reviewed technical paper you'd have had the same reaction.  The defense given was to state years spent in research, deriding what others said, restating your position with conviction, and the entire thrust was to "educate" (possibly ignorant?) people with opinions and experiences that conflict with yours.

Now, why would that not go down well?


----------



## 71 dB (Sep 24, 2017)

pinnahertz said:


> But if you said "Climate change is real!", would you not expect someone to say, "Show us the evidence!"? And if we had data that showed a global average temperature increase charted over time, then we have some objective data to support the statement.


Of course, but the evidence for climate change does exist. Temperatures are rising, sea level is going up, and weather conditions are more extreme (ask people in Texas, Florida, Puerto Rico, etc.). The evidence is literally destroying people's lives!



pinnahertz said:


> You said, "Cross-feed is a miraculous improvement! Takes headphone listening to a whole new level!" but the data to back that up was one guy's opinion, stated firmly with conviction.  That's not objective data, so there's no support for the hypothesis that your cross-feed is perceived as an improvement by anyone other than you, let alone the other questions about when it's an improvement and how much.



Science tells us about spatial hearing and how cross-feed improves headphone listening. You yourself found cross-feeding because of the science of spatial hearing. I am not the only one enjoying cross-feed. I know a lot of people who admit cross-feed improves things. I think that's a strong case. Now, try to show how cross-feed does not take headphone listening to another level and we'll see how strong a case you have.

You keep telling how cross-feed is imperfect, but you fail to offer anything better. Cross-feed is what we have. Luckily I am very happy  with it. Yes, I don't have 100 % objective facts and data, but I have a practical solution to the problem of spatial distortion. I have even given here the schematics for a headphone adapter for anyone to use. I give solutions. You keep questioning them. Why? For what purpose? How does what you do help anyone? Your approach of making everything fuzzy and iffy only makes people confused. In one sentence you are for cross-feed, then you disagree when I say cross-feed is a miraculous improvement. What? Are you promoting cross-feed or not?



pinnahertz said:


> If you post an opinion as fact backed by science and someone doesn't share your view, be prepared to offer scientific data to back up your claim.  If you don't have it, then post as opinion. Subjective results are easily quantified into objective data, which can then be simply compiled into an average, but that takes some organized testing.



You do realize that those who disagree with me don't necessarily have scientific data to back up their claims either. Good luck demonstrating that science shows no cross-feed being better than cross-feed. I'm curious to see such attempts. However, I don't demand that anyone prove their opinions scientifically.



pinnahertz said:


> Nobody has disputed how a cross-feed system does what it does.  What has been disputed is your evaluation of the results...how well cross-feed does what it does.  If this had been a peer-reviewed technical paper you'd have had the same reaction.  The defense given was to state years spent in research, deriding what others said, restating your position with conviction, and the entire thrust was to "educate" (possibly ignorant?) people with opinions and experiences that conflict with yours.
> 
> Now, why would that not go down well?



If I put $50 into a DIY cross-feeder, the improvement is DECADES larger than if I invest the same money in cables or other improvements, so in that sense what you get for $50 is miraculous, and I don't think it's just my opinion.


----------



## pinnahertz

71 dB said:


> <snip!>


I edited the quote because we are now really locked in a circular loop. No need for me to respond again to issues I've already responded to.


71 dB said:


> If I put $50 into a DIY cross-feeder, the improvement is DECADES larger than if I invest the same money in cables or other improvements, so in that sense what you get for $50 is miraculous, and I don't think it's just _my_ opinion.


The above just underscored my point again. 

"DECADES larger" ....on your subjective scale?
"Miraculous and I don't think it's just my opinion." Ok, fine. Who else's is it? What percentage of a random group agrees with you?  "I don't think..." is somewhat less than scientific data. And your dataset consists of one data point, not even enough for a trend.


----------



## 71 dB (Sep 24, 2017)

pinnahertz said:


> "DECADES larger" ....on your subjective scale?
> "Miraculous and I don't think it's just my opinion." Ok, fine. Who else's is it? What percentage of a random group agrees with you?  "I don't think..." is somewhat less than scientific data. And your dataset consists of one data point, not even enough for a trend.



So, what does this mean? Perhaps it leads to the conclusion that people should not really bother with cross-feed, because the data shows there is only one person (me) who finds it miraculous.* The problem with that statement is that it's so lame I don't bother spreading it anywhere online. I watch YouTube videos of Loki The Red Fox instead.

What can we say about cross-feed that is definitive and objective?

* As if only miraculous improvements were worth $50. How about snake oil cables costing thousands and improving the sound only because of the placebo effect?


----------



## Mr Rick

I see crossfeed as akin to EQ. If it enhances your enjoyment of the music, use it.

BTW, I love to EQ.


----------



## pinnahertz (Sep 24, 2017)

71 dB said:


> So, what does this mean? Perhaps it leads to the conclusion that people should not really bother with cross-feed, because the data shows there is only one person (me) who finds it miraculous.*


You are WAY overthinking this, and have now attached your own meaning.


71 dB said:


> What can we say about cross-feed that is definitive and objective?


So far, nothing. So the possibilities are still open.


71 dB said:


> * As if only miraculous improvements were worth $50. How about snake oil cables costing thousands and improving the sound only because of the placebo effect?



 Now you're attempting to attach value to "miraculous", even though it remains undefined, and unproven.  Why the creative writing exercise?


----------



## 71 dB

pinnahertz said:


> So far, nothing. So the possibilities are still open.


That sucks. I spent years in university to be able to say something scientific and objective, to have authority in _something_, to feel that I have value and respect as a person.



pinnahertz said:


> Now you're attempting to attach value to "miraculous", even though it remains undefined, and unproven.  Why the creative writing exercise?


Well, you kind of have to do that when you decide what to do in life. I find cross-feed miraculous and snake oil cables not, so I say yes to cross-feed and no to snake oil cables.


----------



## pinnahertz (Sep 24, 2017)

71 dB said:


> That sucks. I spent years in university to be able to say something scientific and objective, to have authority in _something_, to feel that I have value and respect as a person.


Nothing was said or even implied about your education or value as a person! Don't make this more than it is. There is one issue here only, and it's not about any of the above.


71 dB said:


> I find cross-feed miraculous and snake oil cables not, so I say yes to cross-feed and no to snake oil cables.


There it is! Wonderful! I'm so happy I'm going to relax and listen to music on my headphones... _with cross-feed!_


----------



## 71 dB

pinnahertz said:


> Nothing was said or even implied about your education or value as a person! Don't make this more than it is. There is one issue here only, and it's not about any of the above.



Of course nothing of this sort has been said or even hinted at, but it's how this makes me _feel_. But that's my problem. Just venting my feelings here...



pinnahertz said:


> There it is! Wonderful! I'm so happy I'm going to relax and listen to music on my headphones... _with cross-feed!_



Well nice to hear that. Enjoy the music.


----------



## 71 dB (Sep 24, 2017)

_It's funny. My knowledge and understanding of spatial hearing makes me assume that I can generalize my experiences of cross-feed to everybody. If I didn't know about spatial hearing, but enjoyed cross-feed nevertheless, I wouldn't generalize as strongly, because I would emphasize my personal preferences instead of general theories of hearing as the explanation. _


----------



## pinnahertz

One observation: as iOS currently holds a significant share of the DAP and DAP-capable device market, it's odd that there are currently no apps with cross-feed. The one that was evidently the only offering in that technosphere is no longer available. So, no cross-feed for iOS!

Indicator? Opportunity? Both? Neither? Your call.


----------



## 71 dB (Sep 25, 2017)

Cross-feed should be incorporated into all portable devices capable of playing music with headphones, but that's not how capitalism works. In capitalism people have the freedom to be completely ignorant and consume whatever is "cool", for example the newest iOS product on the market. It's the superficial dumbing-down culture that makes it impossible to sell cross-feed to the masses or even educate them about it. In other words, Apple has calculated that cross-feed would not increase the cash flow enough to justify the effort.

(I'm not an Apple hater. I use a Mac Mini myself, and I think Apple products in general are good, perhaps overpriced, but it is a brand, and most of the time the design is top notch.)


----------



## Arpiben

71 dB said:


> Cross-feed should be incorporated into all portable devices capable of playing music with headphones, but that's not how capitalism works. In capitalism people have the freedom to be completely ignorant and consume whatever is "cool", for example the newest iOS product on the market. It's the superficial dumbing-down culture that makes it impossible to sell cross-feed to the masses or even educate them about it. In other words, Apple has calculated that cross-feed would not increase the cash flow enough to justify the effort.
> 
> (I'm not an Apple hater. I use a Mac Mini myself, and I think Apple products in general are good, perhaps overpriced, but it is a brand, and most of the time the design is top notch.)



Similarly, cross-feed's implementation in Lumia phones was not enough to change the fate of Nokia's phone division (unfortunately).


----------



## pinnahertz (Sep 25, 2017)

71 dB said:


> Cross-feed should be incorporated into all portable devices capable of playing music with headphones, but that's not how capitalism works. In capitalism people have the freedom to be completely ignorant and consume whatever is "cool", for example the newest iOS product on the market. It's the superficial dumbing-down culture that makes it impossible to sell cross-feed to the masses or even educate them about it. In other words, Apple has calculated that cross-feed would not increase the cash flow enough to justify the effort.
> 
> (I'm not an Apple hater. I use a Mac Mini myself, and I think Apple products in general are good, perhaps overpriced, but it is a brand, and most of the time the design is top notch.)


And yet "Sound Enhancer" has been in iTunes since the beginning.

I found cross-feed in the *Parrot Zik3 headphone app.* It goes beyond cross-feed into a bit of auralization and provides user-adjustable angles and room size. I believe it only works with Parrot headphones (I have a pair). I personally never found a setting that works to my liking, so I always turn it off. Probably just a poor design.

Yeah, Apple is a little stodgy when it comes to adding features they didn't think of.  We won't even have native FLAC support until iOS 11. But I wouldn't necessarily say they don't put in cross-feed because it wouldn't increase cash flow. They do a lot of things, many much bigger and more expensive to develop, that don't increase cash flow!  My hot button is designs that don't permit user upgrades and replacements of things like HDD/SSD, batteries, memory, etc.  That takes deliberate and special design, and the results do not add product appeal.  It's why I won't buy a new MacBook Pro (mine's 2013, and has been extensively upgraded).  Some might reason that the non-upgradable computer stimulates new sales, but if you have 3TB internal now, what new MacBook Pro are you going to buy to replace it?

The way you increase cash flow with cross-feed is to develop an app.  Yours, and their cash flow.   It's a wide open field, a market gap.  The first iOS app to make significant money was an audio app... fart machine.  So there's always room at the bottom...and middle...and top.


----------



## bigshot

The problem with doing audio apps for iOS is that things keep changing and developers don't want to keep up with updates. I bought an EQ app, and with the next OS upgrade it broke, and they never did an update to make it work again.


----------



## 71 dB

I know nothing about apps. I am an old fart using a "dummy phone" waiting to get old enough to die away from this icky world of selfies and constant competition. 



Arpiben said:


> Similarly, cross-feed's implementation in Lumia phones was not enough to change the fate of Nokia's phone division (unfortunately).



It wasn't. Wrong OS. Lumia phones also had great cameras and Rich Recording (undistorted recorded sound up to 140 dB!), but the wrong OS ruined it all and Lumia went, literally, out the Window. Nokia had no clue how to advertise Lumia phones (great engineers, idiotic bosses).


----------



## Arpiben (Sep 25, 2017)

Same situation in other divisions or companies. Uncorrelated management.
No fix or cross-feed available.


----------



## pinnahertz

71 dB said:


> I know nothing about apps. I am an old fart using a "dummy phone" waiting to get old enough to die away from this icky world of selfies and constant competition.


Me too, my youth was entirely tube-based. But I've embraced enough of the current tech to put some to good use.

Here's all I know about apps: if you can make one that isn't in the App Store 25 times over, and there's a clear need, purpose, and advantage, you can hire a kid app designer, pay him/her a percentage of your cut from the App Store, and fund your obsession, hobby, collection, travels, health care, whatever for a few years until someone notices you have the top app in its category and builds a better one. The hardest part is finding an app designer you can work with who understands your first language.

BTW, the equalizer apps are all nonsense too.


----------



## 71 dB

You are a youthtube person, not a youtube person, pinnahertz. 

Technological advancements allow so much today, but the potential is not fully used in a rational way.

Vinyl records use "elliptic filtering" to reduce channel separation (horizontal stylus movement) at bass frequencies. This filtering is actually a kind of cross-feed. I think this is one of the reasons why some people find the technically inferior vinyl sound more pleasing.
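As a rough illustration of the idea (not any cutting lathe's actual elliptic filter; the one-pole filter and the 250 Hz cutoff are assumptions for the sketch), bass mono-ing can be expressed in mid/side terms: low-pass the side (L minus R) signal and remove it, so the channels collapse to mono below the cutoff while keeping their separation above it.

```python
import math

def mono_bass(left, right, fs=44100, cutoff_hz=250.0):
    """Crude vinyl-style bass mono-ing sketch: strip the
    low-frequency part of the side (L-R) signal so that
    bass becomes mono while highs keep their separation."""
    # one-pole lowpass coefficient for the side channel
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)
    out_l, out_r, lo_side = [], [], 0.0
    for l, r in zip(left, right):
        mid = 0.5 * (l + r)
        side = 0.5 * (l - r)
        lo_side = (1.0 - a) * side + a * lo_side  # low-frequency side
        side_hi = side - lo_side                  # keep only the high side
        out_l.append(mid + side_hi)
        out_r.append(mid - side_hi)
    return out_l, out_r
```

Feeding it a constant out-of-phase (pure side) signal drives both outputs toward zero, while an identical, already-mono pair passes through untouched.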


----------



## castleofargh

I personally prefer even a bad crossfeed to nothing at all, but that's only because headphone panning annoys me a lot. I know several people who don't feel there is an issue, as they like headphones' presentation just as much if not more than speakers (seems weird to me, but hey, different people, different habits and tastes). I feel like simply getting a delay with some attenuation, close to what I need, is an actual improvement over nothing at all. But there is no point in pretending that basic crossfeed has it right either. You don't simulate HRTF and a room with a simple slope starting at a more or less arbitrary point, a delay picked by somebody else, and no reverb. Fairly often the center voice takes a hit with default crossfeed, and it can end up being just as annoying as the "double mono" effect from usual headphone use.
I personally use some sort of crossfeed on everything headphone- or IEM-related (ok, not on podcasts), so I don't need convincing about how I crave anything taking me away from default headphone stereo on most albums. But if I had to pick a side in that long ping-pong game between you and pinnahertz, I would more often take his side. Because while crossfeed is, at least to me, a step in the right direction and much better than nothing, it's an incomplete approach. Customization from measurements at the ear is the right approach. If speaker sound is what we desire, then room simulation is the right approach. And in front of it, crossfeed is more like a flawed toy.
Crossfeed is amazing because of how cheap and readily available it is compared to better stuff. But it's not the answer; it's a lousy band-aid until the industry admits that customized solutions are the only path toward actual headphone hi-fi.
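The "delay with some attenuation" described above fits in a few lines. Below is a minimal static crossfeed sketch, not any particular product's algorithm; the 700 Hz corner, 0.3 ms delay, and -8 dB bleed level are illustrative assumptions, roughly in the range typical crossfeeders use.

```python
import math

def crossfeed(left, right, fs=44100, cutoff_hz=700.0,
              delay_ms=0.3, atten_db=-8.0):
    """Minimal static crossfeed: mix a lowpass-filtered, delayed,
    attenuated copy of the opposite channel into each channel."""
    delay = max(1, int(fs * delay_ms / 1000.0))
    gain = 10 ** (atten_db / 20.0)
    # one-pole lowpass coefficient for the bleed path
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)

    def lowpass(signal):
        y, state = [], 0.0
        for x in signal:
            state = (1.0 - a) * x + a * state
            y.append(state)
        return y

    def delayed(signal):
        return [0.0] * delay + signal[:len(signal) - delay]

    l_bleed = delayed(lowpass(right))
    r_bleed = delayed(lowpass(left))
    out_l = [x + gain * b for x, b in zip(left, l_bleed)]
    out_r = [x + gain * b for x, b in zip(right, r_bleed)]
    return out_l, out_r
```

A hard-panned left click then shows up in the right ear slightly later, duller, and quieter, which is loosely what a speaker pair does acoustically; it does not attempt pinna filtering or room reverb.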


Vinyl accumulates so many issues and specificities that it's hard to attribute a reason to anything. The crosstalk is so bad that it's audible; that alone can't be without consequences for the perceived "stage".


----------



## bigshot

Crossfeed only addresses one of the problems with headphone listening. You can spend thousands on black boxes to make headphones closer to being speakers, but it still isn't going to be like speakers. I just take headphones for what they are and I don't fuss with it too much. I'm happy with my speakers, so I only use headphones when I have to keep quiet for other people or when I'm working on noise reduction on a transfer from an LP or 78.


----------



## pinnahertz

71 dB said:


> You are a youthtube person, not a youtube person, pinnahertz.


Good one!


71 dB said:


> Technological advancements allow so much today, but the potential is not fully used in a rational way.
> 
> Vinyl records use "elliptic filtering" to reduce channel separation (horizontal stylus movement) at bass frequencies. This filtering is actually a kind of cross-feed. I think this is one of the reasons why some people find the technically inferior vinyl sound more pleasing.


There are many reasons people find the vinyl experience more pleasing.  That certainly could be one of them, but the thing is, there was no fixed elliptic filter in the mastering chain. Bass mono-ing was done as needed, and to the degree needed, varying from none to mono below 250Hz.  Mostly the determining factor was how "loud" the record was cut, and how much bass modulation was a part of it.  Yes, the loudness war existed on vinyl!  Oh, and the basic channel separation of vinyl is not fantastic, so there's a bit of built-in crossfeed, albeit just reduced separation.

I think the discussion of vinyl preference is off topic here, but it is something I have delved deep into.  I found the results interesting.


----------



## pinnahertz

bigshot said:


> Crossfeed only addresses one of the problems with headphone listening. You can spend thousands on black boxes to make headphones closer to being speakers, but it still isn't going to be like speakers.


You should audition the Smyth.  Not easy to do, but if you ever get a chance, take it.  It might change your mind on a few things.


----------



## pinnahertz

castleofargh said:


> crossfeed is amazing because of how cheap and readily available it is compared to better stuff.


I'm not finding "readily available" on many platforms.  It might be cool to provide a list of cross-feeders and platforms they work on.


----------



## castleofargh

pinnahertz said:


> I'm not finding "readily available" on many platforms.  It might be cool to provide a list of cross-feeders and platforms they work on.


That's a good idea. I can't claim to be an expert though; I've tried several on PC, but that's where everybody would expect to find some in the first place.
On portable stuff, for me it started with Rockbox. Sony DAPs have some surround DSP settings, and the first one, "studio", is fairly close to a default crossfeed as the reverb is really minimal. On my Android phone I think the Neutron player had some crossfeed, but I usually end up using the player provided with my BT headphone, so I go mostly for Viper4Android (needs root and is system-wide). But for some time now that option has become something more than just crossfeed and doesn't really offer settings (not for that at least; else it has EQ, convolver, ...). I've been using a UHA760 DAC/amp for a few years and it has 2 crossfeed settings (analog stuff this time). So while I've clearly been aiming at stuff with crossfeed of sorts for some time now, I never felt like it was really hard to find.
On Mac I know nothing, for religious reasons, but I remember a buddy talking about CanOpener having crossfeed features?


----------



## 71 dB

castleofargh said:


> I personally prefer even a bad crossfeed to nothing at all, but that's only because headphone panning annoys me a lot.



"Headphone panning" means "illegal/impossible" spatial information which doesn't make sense to our brain. It is no wonder such information annoys.



castleofargh said:


> I know several people who don't feel there is an issue as they like headphones' presentation just as much if not more than speakers(seems weird to me, but hey, different people, different habits and taste).



Most people are not aware of the concept of spatial distortion, so they don't know the full potential of headphone listening (I was one of these people until 6 years ago). I personally think investing money in expensive headphones and listening to them without cross-feed while pretending the sound quality is good is silly. Those who know about cross-feed, have tried it, and still don't like it do not, in my opinion, fully understand what cross-feed is about, and they misunderstand what being a purist means. 



castleofargh said:


> I feel like simply getting a delay with some attenuation closes to what I need, is an actual improvement over nothing at all. but there is no point in pretending that basic crossfeed has it right either.



"Basic" cross-feed is right in the sense that it reduces or removes spatial distortion, making the signal "legal/possible" for the brain. In other words, a recording could have been made in the real world that sounds exactly the same when played back perfectly. How do you know what the sound originally was? Isn't the most important thing that the recording sounds as if it were identical to the original sound? If cross-feed places the drums 5° too far left, then what is the problem, really? The drums could have been 5° further left in the studio. At least the drums won't sound "fake" and "all over the place" because of spatial distortion.



castleofargh said:


> you don't simulate HRTF and a room with a simple slope starting at a more or less arbitrary point, a delay picked by somebody else, and no reverb.  fairly often the center voice takes a hit with default crossfeed and it can end up being just as annoying as the "double mono" effect from usual headphone use. I personally use some sort of crossfeed on everything headphone or IEM related(ok not on podcasts), so I don't need convincing about how I crave for anything taking me away from default headphone stereo and most albums. but if I had to pick a side in that long ping pong game between you and pinnahertz, I would more often take his side. because while crossfeed is at least to me a step in the right direction and much better than nothing, it's an incomplete approach. customization from measurements at the ear is the right approach. if speaker sound is what we desire, then room simulation is the right approach. and in front of it, crossfeed is more like a flawed toy.
> crossfeed is amazing because of how cheap and readily available it is compared to better stuff. but it's not the answer, it's a lousy band aid until the industry admits that customized solutions are the only path toward actual headphone hifi.



How much can we improve sound quality by going from basic cross-feed to a "customized solution"? Is it worth the increased complexity and price? I ask this because I am a "bang for the buck" guy.



castleofargh said:


> vinyls accumulate so many issues and specificities that it's hard to make up a reason to anything. the crosstalk is so bad that it's audible, that alone can't be without consequences on the perceived "stage".



Technically, vinyl is abysmally bad compared to digital audio formats, yet many still praise it as superior. This is an interesting issue, and I think I have found some explanations for it.


----------



## 71 dB

bigshot said:


> Crossfeed only addresses one of the problems with headphone listening.


What are the other problems not addressed by cross-feed?


----------



## 71 dB

pinnahertz said:


> I'm not finding "readily available" on many platforms.  It might be cool to provide a list of cross-feeders and platforms they work on.



My cross-feed is based on a DIY headphone adapter, but there is Vox, a player for Mac (and iPhone?). It has 3 different cross-feeders to choose from: Bauer stereo-to-binaural, Chu Moy and Jan Meier.


----------



## bigshot

pinnahertz said:


> You should audition the Smyth.  Not easy to do, but if you ever get a chance, take it.  It might change your mind on a few things.



The things I was thinking about in that were the kinesthetic feel of the bass in your chest, not getting sweaty ears from the ear cups, and the ability to share music with friends while you visit. I’m sure the SR solves the problems of directionality and room acoustics.


----------



## castleofargh

71 dB said:


> "Headphone panning" means "illegal/impossible" spatial information which doesn't make sense to our brain. It is no wonder such information annoys.
> 
> 
> 
> ...



*1* yeah, lack of experience and in general simply a poor choice of reference can most certainly lead to different ideas of what is right. but I know many people who simply don't seem to mind much. you and I simply aren't among them. 
*2* well, that's the whole hi-fi idea: getting a little closer to whatever reference. I'm like a very lazy audiophile in that respect, but the whole hi-fi biz runs on people who find small improvements very relevant and "worth it". *3* follows on from that: to each his own idea of what is worth it. I can only speak for myself and say that I'm rather cheap about audio; my headphone rig right now is an ODAC/O2 and an HD650, not exactly TOTL. but when I saw the Kickstarter campaign for the Realiser A16, I took my credit card out of my pocket and started filling in the required information to get one. ^_^ didn't need to think, and haven't for a second reconsidered since, despite how we'll probably get it in 2037 (well, that's in the game with Kickstarter). 
and I'm not even after any idea of perfection or hifi, I just wish to get something close to the sound of my speakers when it's late at night and I can't use them. because... neighbors. what I use now feels to me slightly better than using Xnor's crossfeed VST with settings that honestly worked pretty well for my head; I sure was glad to have found it at the time. and that felt slightly better than any other crossfeed, and any crossfeed felt better than nothing at all. what's worth it is the most subjective thing there is.


----------



## pinnahertz

Better sit down, 71....

I've just done a bit of cross-feed experimenting.  (Ok, you can get up now.) It had been years, and now I have software and a bit of free time today, so here's what I did.  

First, I got VOX on iOS, played a few tracks, and tried the 3 different crossfeed presets.  The results were so completely dependent on the original material it was unnerving.  The effect on hard-panned material was quite obvious, but on more contemporary mixes the effect ranged from nothing at all to just discernible.  

To my ears none of the presets was always an improvement: sometimes they were inaudible, sometimes I preferred the original, sometimes I liked the crossfeed version.  I didn't discern any significant differences between Chu Moy and Jan Meier, but the default, whatever that is, is slightly different.  There is no adjustment within a crossfeed method, and there really should be. 

Being the tweak head that I am, I opened a couple of tracks in Audition, where I have complete control over what's going on.  I simulated the Meier crossfeed, the Chu Moy, one of my own, and simple variable separation.  All sound different, and all work differently on each track.  Grrr.  

For each test track, each method had to be adjusted for optimum, and they didn't seem to track each other at all.  The one that was fairly consistent was the one I whipped up, derived from L-R/R-L, delayed, filtered, then mixed back in.  I then had control of crossfeed level, time delay, and filter response.  Once I had it dialed in, when images became relocated they were solid and palpable.  Other methods resulted in images that were somewhat smeared and vague.   However, the whole thing still needed to be customized per track.  Darn.  Well, expected of course.  I also tried inserting dynamic processing on the crossfeed signal.  Yeah, pretty much how I remembered it working: not well.  Relocated images begin to wander around, kind of swimmy.  But the control sample was just being derived from L-R, which is technically wrong, so that will have to be revisited to work right, likely by processing through a special side-chain.  

The other method that was pretty consistent was simple separation reduction (mixing L+R into the stereo mix); though less satisfying, it of course cured the hard-panning problem.
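For anyone who wants to play along, here's a rough sketch of the two approaches above. The delay, cutoff, and attenuation values are placeholders I picked for illustration, not anyone's tested preset; a real implementation would make all three adjustable.

```python
import numpy as np

def crossfeed(left, right, fs=44100, delay_ms=0.3, fc=700.0, atten_db=9.0):
    """Generic delayed-and-filtered crossfeed: the opposite channel is
    delayed, low-pass filtered, attenuated, and mixed into each side.
    All parameter defaults are illustrative, not a known preset."""
    d = int(round(delay_ms * 1e-3 * fs))
    g = 10.0 ** (-atten_db / 20.0)
    a = np.exp(-2.0 * np.pi * fc / fs)

    def lp(x):                       # one-pole low-pass, cutoff ~fc
        y = np.empty_like(x)
        s = 0.0
        for n, v in enumerate(x):
            s = (1 - a) * v + a * s
            y[n] = s
        return y

    def delayed(x):                  # integer-sample delay
        return np.concatenate([np.zeros(d), x[:len(x) - d]])

    out_l = left + g * lp(delayed(right))
    out_r = right + g * lp(delayed(left))
    return out_l, out_r

def reduce_separation(left, right, amount=0.3):
    """Simple separation reduction: blend some mono sum into each side."""
    mono = 0.5 * (left + right)
    return ((1 - amount) * left + amount * mono,
            (1 - amount) * right + amount * mono)
```

Feed a hard-panned signal through either one and the "dead" channel picks up a quieter copy of the other side, which is the whole trick.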

Test material, in part, was:
Beatles Flying from Magical Mystery Tour (hard panned instruments, hard center vocal)
Bob Dylan: Man of Constant Sorrow (hard panned guitar and vocal)
Diana Krall: Besame Mucho (contemporary mix)
I tried a few more too, and will still play a bit; perhaps I'll write a bit more about this.  So far it's kind of what I expected...sorry 71!!!

If anybody cares I can list my processing settings to get the algorithms right.  Otherwise this is just a brain dump.  Just do with it whatever you usually do with a dump of any sort.


----------



## bigshot (Sep 26, 2017)

I find that true of a lot of processing. I have about four DSPs that I juggle depending on the situation. And there are albums that have been mastered out of spec that I have to adjust EQ for. In multichannel, the volume of the rears relative to the fronts is sometimes wonky. There's no one-size-fits-all solution.

Maybe it would be best to have a crossfeed where the parameters are all flexible and you can just adjust to each recording as you see fit.


----------



## pinnahertz (Sep 26, 2017)

The Parrot app lets you couple tunings and crossfeed settings to individual tracks, with your settings maintained in a database you can even share.  So you can play a tune, adjust, and save, and it recalls those settings the next time you play the track.  The problem is it doesn't work without an internet connection.

edit: And its crossfeed sucks.


----------



## Malfunkt (Sep 26, 2017)

My brain has pretty much adapted to listening without any kind of crossfeed or emulation. When I'm listening to classical or jazz, without any processing, I may have improper spatialization, but timbre is intimately enjoyable. I listen often for melodic input, and so only a certain level of resolution is necessary for my brain to fill in the blanks and to become engaged.

For movies and some gaming, I now use Dolby Atmos for Headphones, which can handle crossfeed for 2-channel as well as simulate 5.1 and 7.1 sound in headphones. It works remarkably well with the right material. This is on PC, and I would still say that a lot of my listening happens without crossfeed. I also listen to a lot of electronic music, which typically suffers less from the spatial side effects of listening without crossfeed.

On iPhone/Android, I know @bigshot mentioned how many apps don't get ongoing support, and this is important. But I don't mind spending a bit of money now and then to get an app that has some DSP involved. I wish Apple built it right into their headphones, or even licensed Dolby Atmos, so that as we move forward with the future of VR and media (which will likely involve a lot of headphone listening) we can work toward some sort of standard for full spatialization.

For iOS and Android, I'm using a well-supported app called nPlayer (https://nplayer.com/). They recently implemented DTS-X, and using the 'normal' 'over-ear headphones' setting it is worth a listen. Much better than BS2B in my opinion. DTS-X will also work very well on multi-channel content, so it's perfect for watching movies on the go.

(above post typed while immensely enjoying a lowly Denon AH-D2000 without any DSP processing. Vox Player on Mac listening to Entheogenic 'Anima Mundi')


----------



## 71 dB

pinnahertz said:


> Better sit down, 71….   ...a lot of venting... ...sorry 71!!!



No need to apologize to me. I'm sorry to hear cross-feed is so problematic for you.


----------



## 71 dB

If you can use DSP to convolve HRTF impulse responses with the music, then of course do that. That's cross-feed too, just in a more sophisticated form.

I don't have any hardware or software to do that, so normal cross-feed is the best I can do. This is not a money issue. I have money, but in my opinion it's more important to save money for bad times/pension than to spend it all on fancy stuff while you are young. I enjoy my music on $200 cans with a $50 cross-feed headphone adapter. I think I can sustain that level of consumerism and still be able to pay my bills when old. If I won millions in the lottery and my life were completely secured financially, I would go for the Smyth and whatever luxury products are out there… …but I understand that's only for privileged people, and it doesn't even make people happier.

My message with cross-feed is that since basic cross-feed is quite affordable (for one guy I designed a simple DIY cross-feeder that cost $20 to build, and he was happy with it) and helps with spatial distortion, there is no reason to be without it. Cross-feed as such is not meant to make all your headphone fantasies come true.


----------



## pinnahertz

But would you spend $2 for DSP cross-feed done right?


----------



## 71 dB

pinnahertz said:


> But would you spend $2 for DSP cross-feed done right?



$2, $20, even $200 if it really is superior to simple cross-feed.


----------



## pinnahertz

The typical app is around $2. If someone wanted to popularize cross-feed and get it into millions of hands/ears, that's probably the best way.  The DSP load required is well within the capability of every smart device in the last few years.


----------



## bigshot

Add a good digital equalizer to it and I'd pay a ten spot


----------



## pinnahertz

What makes a "good digital equalizer" in your world?


----------



## bigshot

Either a graphic equalizer with at least 15 bands, or a parametric with at least 5 bands. Perhaps with auto-normalizing to avoid clipping.
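As a reference point for what one band of such a parametric EQ costs computationally, here is the standard peaking-EQ biquad from Robert Bristow-Johnson's audio EQ cookbook: five multiplies and four adds per sample per band, so even a dozen bands are trivial for any modern device. The function name and defaults below are mine.

```python
import math

def peaking_biquad(fs, f0, gain_db, q=1.0):
    """Peaking-EQ biquad coefficients (RBJ audio EQ cookbook form),
    the building block of one band of a parametric equalizer.
    Returns (b, a) with a[0] normalized to 1."""
    A = 10.0 ** (gain_db / 40.0)          # amplitude (sqrt of linear gain)
    w0 = 2.0 * math.pi * f0 / fs          # centre frequency in rad/sample
    alpha = math.sin(w0) / (2.0 * q)      # bandwidth term
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a0 = 1 + alpha / A
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha / A) / a0]
    return [x / a0 for x in b], a
```

A quick sanity check on the formula: at 0 dB gain the filter collapses to unity, and at any gain the DC response stays flat (sum of b equals sum of a), which is what keeps a peaking band from shifting overall level.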


----------



## 71 dB

pinnahertz said:


> The typical app is around $2. If someone wanted to popularize cross-feed and get it into millions of hands/ears, that's probably the best way.  The DSP load required is well within the capability of every smart device in the last few years.



Probably yes, but it wouldn't hurt if these smart devices had it integrated as a default. DSP load is low for "normal" cross-feed.

However, more advanced methods are another story. Convolution with HRTF impulse responses is much more demanding, and then there is of course the problem of using the right HRTFs.
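To make the comparison concrete, the convolution version looks like this: two FIR filters per ear instead of basic cross-feed's single one-pole filter, which is the extra load in question. The toy impulse responses below are placeholders (a bare delay-and-attenuate pair I made up), not measured HRIRs; real ones would come from a measurement set such as a KEMAR database.

```python
import numpy as np

def hrtf_render(left, right, h_ipsi, h_contra):
    """Render a stereo pair over two virtual speakers: each ear hears the
    same-side channel through h_ipsi and the opposite channel through
    h_contra. With measured HRIRs this is 'cross-feed by convolution'."""
    out_l = np.convolve(left, h_ipsi) + np.convolve(right, h_contra)
    out_r = np.convolve(right, h_ipsi) + np.convolve(left, h_contra)
    return out_l, out_r

# Placeholder 'HRIRs': a direct path, and an interaural path ~0.27 ms
# later and ~9 dB quieter. Real HRIRs also encode pinna/head filtering.
h_ipsi = np.zeros(64); h_ipsi[0] = 1.0
h_contra = np.zeros(64); h_contra[12] = 0.355
```

With these degenerate impulse responses the result reduces to plain delayed-and-attenuated cross-feed; swapping in measured HRIRs is what buys the extra realism, at the cost of longer filters.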


----------



## pinnahertz

71 dB said:


> Probably yes, but it wouldn't hurt if these smart devices had it integrated as a default. DSP load is low for "normal" cross-feed.
> 
> However, more advanced methods are another story. Convolution of HRTF impulse responses is much more demanding and then there is of course the problem of using right HRTFs.


You convolve once, then it's a low-load fixed filter that is easy to run.  It's been done, though not specifically with an impulse response for HRTF.  Audyssey's AMP app (now discontinued...nuts!) ran headphone EQ that started with them essentially taking an impulse response in their lab and developing the "tuning" filter; users then downloaded it and ran it on their iOS device.  It worked fine even on devices several generations behind current.  In fact, the app's issues had nothing to do with running the DSP filter; that was actually the easy part.  

I'm not sure why you're arguing about using the "right HRTF" when any generalized HRTF beats any 'normal' cross-feed. It's clearly impractical to measure every user's HRTF, so it must be a generalization, perhaps a choice of 2 or 3, like small-medium-large, or whatever. 

All I'm saying is the technology is in our hands now, and there's a market gap.  The missing critical information is market-perceived need, which is probably why it hasn't been done.  None of the details of the DSP algorithm are roadblocks at all.   It sounds like "Mr. Cross-Feed" is pushing back on the idea of a $2 DSP cross-feed app.  Huh.


----------



## 71 dB

pinnahertz said:


> You convolve once, then it's a low load fixed filter that is easy to run.  It's been done, though not specifically an impulse response for HRTF.  Audyssey's AMP app (now discontinued...nuts!) ran headphone EQ which started by essentially taking impulse response in their lab, developing the "tuning" filter, then users downloaded it and ran it on their iOS device.  It worked fine even on devices several generations previous to current.  In fact, the apps issues had nothing to do with running the DSP filter, that was actually the easy part.
> 
> I'm not sure why you're arguing about using the "right HRTF", when any generalized HRTF beats any 'normal' cross-feed. It's clearly impractical to measure every users HRTF, so it must be a generalization, perhaps a choice of 2 or 3, like small-medium-large, or whatever.



It depends on how well you want things done. Convolution with a real HRTF > approximate IIR or FIR of an HRTF > normal cross-feed. It's also a question of diminishing returns. Normal cross-feed takes you to 80%, approximate filters to 90%, and convolution with a real HRTF to 95% (the numbers illustrate the principle). If 90% is enough, why isn't 80% as well?



pinnahertz said:


> All I'm saying is the technology is in our hands now, and there's a market gap.  The missing critical information is market perceived need, which is probably why it hasn't been done.  None of the details of the DSP algorithm are road blocks at all.   It sounds like "Mr. Cross-Feed" is pushing back on the idea of a $2 DSP cross-feed app.  Huh.



Huh.


----------



## castleofargh

pinnahertz said:


> You convolve once, then it's a low load fixed filter that is easy to run.  It's been done, though not specifically an impulse response for HRTF.  Audyssey's AMP app (now discontinued...nuts!) ran headphone EQ which started by essentially taking impulse response in their lab, developing the "tuning" filter, then users downloaded it and ran it on their iOS device.  It worked fine even on devices several generations previous to current.  In fact, the apps issues had nothing to do with running the DSP filter, that was actually the easy part.
> 
> I'm not sure why you're arguing about using the "right HRTF", when any generalized HRTF beats any 'normal' cross-feed. It's clearly impractical to measure every users HRTF, so it must be a generalization, perhaps a choice of 2 or 3, like small-medium-large, or whatever.
> 
> All I'm saying is the technology is in our hands now, and there's a market gap.  The missing critical information is market perceived need, which is probably why it hasn't been done.  None of the details of the DSP algorithm are road blocks at all.   It sounds like "Mr. Cross-Feed" is pushing back on the idea of a $2 DSP cross-feed app.  Huh.


talking about that, there is a Waves NX app for cellphones. I didn't think about it because it's originally made for a little Bluetooth head-tracking toy that's faaaar from being bug-free, but you can still use it without the tracker and get their head simulation for free. they ask for 2 measurements of the head to set up their target. it's more intuitive than asking for delay, attenuation, where to roll off...
if you turn the room setting down, what's left is crossfeed.


----------



## pinnahertz

71 dB said:


> It depends on how well you want things done. Convolution with real HRTF > approximative IIR or FIR of a HRTF > normal cross-feed. It's also a question of diminishing returns. Normal cross-feed takes you to 80 %, approximative filters to 90 % and convolution with real HRTF to 95 % (numbers illustrate the principle). If 90 % is enough, why isn't 80 % as well?
> 
> 
> 
> Huh.


We've traded positions.


----------



## Zapp_Fan (Sep 28, 2017)

pinnahertz said:


> It seems to me the two above quotes are somewhat at odds with each other.  Which is it, we mix for speakers and headphones, or we mix for speakers primarily then do a cursory check for headphones?  (It's the latter, BTW).
> 
> I'm kinda wishing there were fewer statements and assumptions about what audio pros do made by people who don't seem to actually know.



Sorry for the slow uptake here, been quite busy.  This is a fair comment, and it bears mentioning that when people say "check on headphones" they usually mean for frequency response, not stereo image.



Strangelove424 said:


> Well, this Ryoki Ikeda is certainly not as bad as Merzbow. Anyone ever hear Merzbow? I'm not going to put a link down, but if you are curious I will warn you: _turn down your volume!_ I found the existence of Merzbow after browsing the DR database one day and organizing results by album with least dynamic range. Merzbow and a couple other "noise artists" earned a prestigious 0 DR. Yep, that's 0 db of range. Nada. Nill. Nichts. It'll drill your flippin' brain out. Japanese electronic music seems to be heavily influenced by noise.



Ha, if you think Merzbow is bad, try some Prurient... I don't listen to him, not sure why anyone does...



pinnahertz said:


> So why is cross-feed not universal at this late date?  Could it be it's not universally desired or accepted?  I don't know, but I'm in the "not universally desired" camp.



I'm relatively sure it's because manufacturers consider crossfeed to have only niche appeal and their customers aren't asking for it.  The vast bulk of the audio market is comprised of people who have no critical-listening experience.  Headphones you'd consider mediocre or even horrible make most people quite happy.  For most consumers, a crossfeed feature would be akin to putting a spoiler on a bicycle. I don't say this to denigrate the average listener in any way; it's just that most people wouldn't feel a need for crossfeed, or necessarily even appreciate the difference.

Those who are more engaged with sound quality are assumed to be capable enough to implement crossfeed themselves.  And this thread somewhat proves that, no?



pinnahertz said:


> You convolve once, then it's a low load fixed filter that is easy to run.


I think for real convolution this is not strictly true, but the general point stands: you could probably convolve with an HRTF IR on most smartphones today.

I wanted to bring up another question, which is related, but not strictly about crossfeed.

The site RTINGS.com has automated headphone testing and rating, and their sound quality ratings depend on whether a headphone is open or closed.  As I understand it, open headphones get an automatic bonus for the "critical listening" score.  They measure openness in part by measuring acoustic crosstalk within the room from ear to ear - for headphones! 

This struck me as odd, and I assume they're just using openness as a proxy for quality, and don't believe that audio leaking from one ear to the other is an objective measure of sound quality.  Rather, acoustic crosstalk on headphones is a measure of openness, which is a heuristic/proxy for overall sound quality.  That's my guess anyway...

Anyone know for sure what this metric is getting at?


----------



## Sc00p (Nov 10, 2017)

I'm not sure if this is all in my head. I have never really used any kind of crossfeed, mainly because the amps I chose didn't have this kind of feature, and I didn't seek it out as I have always read a lot of negativity surrounding it. Anyway, my current amp has it, and I thought I would try it out.
Before trying the crossfeed function: I prefer music through headphones rather than speakers. But for a long time, I have felt on many tracks that the right ear sounded too dominant, many songs sounding unbalanced or skewed. I started to wonder if it was my hearing, but if I change to mono there is no imbalance. This is across multiple amps, DACs, and headphones, so I knew it wasn't my gear.
I put up with it because, even though I don't hear it with speakers, my other preferences lean heavily towards headphones.
So, last night I decided to have a long listening session with the 3D crossfeed on my amp. I am inexperienced, but this seems to have fixed my problem. Everything sounds so much more balanced (not mono, it is still very much stereo), more natural.

Presuming this isn't in my head, what could be a possible explanation? Without crossfeed, e.g., a singer who is imaged in the middle is in the middle. But if the singer is imaged to come from both sides, it quite often feels louder from my right. Could it be something to do with timings? With crossfeed enabled, this annoying effect is gone for me.
Am I describing the effect others mean when they say "without crossfeed is unnatural sounding"?


----------



## Zapp_Fan

Sc00p said:


> Presuming this isn't in my head. What could be a possible explanation? without crossfeed eg, a singer who is imaged in the middle, is in the middle. But if the singer is imaged to come from both sides, it quite often feels to be louder from my right. Could it be something to do with timings? With crossfeed enabled. This annoying effect is gone for me.
> Am i describing an effect when others say "without crossfeed is unnatural sounding" ?



A couple of possibilities: one is that your headphones have a driver-matching problem, i.e. different frequency response in each ear, which can cause pretty noticeable imbalances like what you describe.  If you have nice headphones this is unlikely, but you can easily check by listening to a frequency sweep and seeing if the sound seems to move from side to side. 

Another is that the song is just mixed that way, with something harder left/right than sounds natural.  The crossfeed inherently diminishes that. 

Another possibility is that your ears have uneven frequency response; it's not unusual to develop that type of hearing loss over time.


----------



## 71 dB

Sc00p said:


> So, last night i decided to have a long listening session with the 3d crossfeed on my amp. I am inexperienced, but this seems to have fixed my problem. Everything sounds so much more balanced (not mono, it is still very much stereo), more natural.
> 
> Presuming this isn't in my head. What could be a possible explanation? without crossfeed eg, a singer who is imaged in the middle, is in the middle. But if the singer is imaged to come from both sides, it quite often feels to be louder from my right. Could it be something to do with timings? With crossfeed enabled. This annoying effect is gone for me.
> Am i describing an effect when others say "without crossfeed is unnatural sounding" ?



The idea of crossfeed is to make the sound more natural on headphones by reducing excessive stereo separation. When you listen to loudspeakers, there is acoustic crossfeed, because the left ear hears the right speaker and vice versa. So, if you find crossfeed natural, you are simply hearing it as intended.


----------



## SilverEars

Perhaps crossfeed is a better solution for movie watching with headphones?  Is there a good plug-in for Windows to try out for movies and videos?   For video and movie watching I prefer speakers, as they sound more natural.


----------



## bigshot

I think what you're reacting to is how 5.1 gets folded down into 2 channel for headphones. Dialogue works much better when it's isolated in that center channel speaker than when it is halfway between your ears with cans.


----------



## 71 dB

SilverEars said:


> Perhaps that's a better solution for movie watching with headphones?  Try crossfeed?  Is there a good plug-in for windows to try out for movies, videos?   For videos and movie watching I prefer speakers as it sounds more natural.



Multichannel movie sound has a lot of stuff going on in the rear channels. When those are downmixed to stereo, there's a lot of stereo separation, because the rear channels are encoded into the stereo out of phase. When I watch movies I usually use strong crossfeed for that reason, and the result is good, imo.
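The out-of-phase encoding being described can be sketched with a textbook-style Lt/Rt fold-down. The -3 dB (0.7071) coefficients are the commonly quoted values; actual encoders differ in detail (surround band-limiting, 90° phase shifting, and so on), so treat this as an illustration of the phase relationship, not a production downmixer.

```python
import numpy as np

def ltrt_downmix(L, R, C, Ls, Rs, k=0.7071):
    """Textbook-style Lt/Rt fold-down of 5 channels (LFE omitted):
    centre is added in phase to both sides, surrounds are added out of
    phase (minus on Lt, plus on Rt) so a ProLogic-type decoder can
    steer them back out."""
    S = k * (Ls + Rs)          # mono surround feed
    Lt = L + k * C - k * S     # surround subtracted here...
    Rt = R + k * C + k * S     # ...and added here: anti-phase
    return Lt, Rt
```

Surround-only content therefore comes out as Lt = -Rt, pure anti-phase: exactly the maximal-separation signal that makes strong crossfeed welcome on headphones.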


----------



## Malfunkt

SilverEars said:


> Perhaps that's a better solution for movie watching with headphones?  Try crossfeed?  Is there a good plug-in for windows to try out for movies, videos?   For videos and movie watching I prefer speakers as it sounds more natural.



For movie watching, even from Netflix with multi-channel sound, definitely check out Dolby Atmos for Headphones. Search for it in the Windows store. It really is excellent. It will work on stereo sources too, but it's best for multi-channel.


----------



## pinnahertz

71 dB said:


> Multichannel movie sound has a lot of stuff going on in rear channels. When those are downmixed to stereo, there's a lot of stereo separation, because rear channels are encoded to stereo out of phase. When I watch movies I usually use strong crossfeed for that reason and the result is good imo.


There are several downmix algorithms; a common one does result in Ls and Rs mixed out of phase, because that downmix is compatible with ProLogic decoding.  There are other algorithms, such as Dolby Headphone, that begin with the raw 5.1/7.1 or even 2.0 ProLogic (Dolby Stereo, if a legacy track) and result in virtual surround in headphones by processing the surround channel(s) with appropriate spatial cues.  

Hardly cross-feed, though.  Basic cross-feed of an LtRt mix or downmix will not localize surround properly at all, and will, if applied strongly enough, upset the intended mix balance of LCRS.  A lot more has to be done to get that right.


----------



## 71 dB

pinnahertz said:


> There are several downmix algorithms, a common one does result in Ls and Rs mixed out of phase, because that downmix is compatible with ProLogic decoding.  There are other algorithms, such as Dolby Headphone, that begin with the raw 5.1/7.1 or even 2.0 ProLogic (Dobly Stereo, if a legacy track), and result in virtual surround in headphones by processing the surround channel(s) with appropriate spatial cues.
> 
> Hardly cross-feed, though.  Basic cross-feed of an LtRt mix or downmix will not localize surround properly at all, and will, if applied strongly enough, upset the intended mix balance of LCRS.  A lot more has to be done to get that right.



Cross-feed of an Lt/Rt mix or downmix is the best I can do (on DVDs and Blu-rays) with the stuff I have. Fortunately for me it works really well despite your efforts.


----------



## pinnahertz

71 dB said:


> Fortunately for me it works really well despite your efforts.


My efforts?  To do what, exactly? All I did was explain the reality of that downmix situation.


----------



## 71 dB

pinnahertz said:


> My efforts?  To do what, exactly? All I did was explain the reality of that downmix situation.



Crossfeed is like a sonic umbrella: it doesn't stop the rain, but it keeps you from getting wet. My effort is to keep people dry; your effort is to have only sunny days. My way is to buy an umbrella, your way is to move to a paradise island where it never rains.

Also, I'm not speaking to anyone who has Dolby Headphone. I have never heard it, but I'm sure they are better off than I am. I am speaking to people who don't use any kind of separation-reduction system and don't even know about such systems or want to know more.


----------



## SilverEars (Nov 11, 2017)

bigshot said:


> I think what you're reacting to is how 5.1 gets folded down into 2 channel for headphones. Dialogue works much better when it's isolated in that center channel speaker than when it is halfway between your ears with cans.


I'm not actually referring to downmixed material specifically; this applies downmixed or not.

My JBLs sound perfectly fine (and more natural) out of 2 channels with a downmix for movies. It's just that headphones give movie sound a contained, closed-in quality, and yes, dialog sounds off. On headphones it sounds like the high frequencies of dialog are rolled off in comparison, which is what I mean by muffled-sounding.

For music, there are often times my headphone setup sounds clearer and better articulated than speakers, and better in general. IEMs specifically do clarity and articulation well, but with full-sized cans and their cups I often have issues with how the mids sound, congested or recessed at times. Also, tracks that probably need the open speaker sound can come across compressed in certain sounds when the left and right channels are isolated to each ear, which is an issue with headphones.


----------



## pinnahertz (Nov 11, 2017)

71 dB said:


> Crossfeed is like sonic umbrella: It doesn't stop the rain, but it keeps you from getting wet. My effort is to keep people dry. Your effort is to have only sunny days. My way is to buy an umbrella, your way is to move to a paradise island where it never rains.


My effort is to help people understand if it's really raining, and if the umbrella has holes in it or not.

Every island has weather issues, you just need to understand what they are and use the best tool properly.


71 dB said:


> Also, I don't speak to anyone with Dolby Headphone. I have never heard it, but I'm sure they are better off than I. I am speaking to people who don't use any kind of separation reduction system and don't even know about them or what to know more.


You might spend some energy looking into Dolby Headphone. Lots of research done by folks way smarter than either of us went into that.

If you really want to help people with their headphone experience you might try learning about the available tools.


----------



## RRod

Anyone have experience with any of the Atmos headphone stuff on a cell/tablet?


----------



## bigshot

SilverEars said:


> For music, there are often times, my headphones setup sounds clearer, and better articulated in comparison to speakers, sounds better in general. Iems specifically does clarity and articulation well, but full sized with the cups I often have issues with how mids sounds, congested or recessed at times.  Also for for tracks that probably needs the open speaker sounds, sounding compressed with certajn sounds if left and right channels are isolated to each ear, which is an issue with headphones.



It may be the kind of music you're listening to. I know when I listen to opera on headphones, it rarely sounds as good as on speakers. The complex blend of orchestral sound with voices and hall ambience can end up all getting jammed into the ear cups together and sound like mush. But when the same opera is played on speakers, the voices have direction and the ambience has room to bloom. That makes it easier to pick individual sounds out of the mix. Speakers are just better at complex blends of sound. Headphones are better with more straightforward mixes with everything carefully placed into carved out spots in the mix. If there's too much going on at the same time, it turns into a muddle.


----------



## SilverEars (Nov 11, 2017)

bigshot said:


> It may be the kind of music you're listening to. I know when I listen to opera on headphones, it rarely sounds as good as on speakers. The complex blend of orchestral sound with voices and hall ambience can end up all getting jammed into the ear cups together and sound like mush. But when the same opera is played on speakers, the voices have direction and the ambience has room to bloom. That makes it easier to pick individual sounds out of the mix. Speakers are just better at complex blends of sound. Headphones are better with more straightforward mixes with everything carefully placed into carved out spots in the mix. If there's too much going on at the same time, it turns into a muddle.


Actually, my experience is the opposite: speakers tend to sound unclear and muddled on tracks with too much going on.  I would describe it as the openness of speakers making certain sounds seem covered and conflicting. This happens to the least degree with my favorite IEMs, but with cans it's the cups that create a bit of muddiness, depending on the headphone.


----------



## bigshot

That could have something to do with the relative quality of your speakers compared to the relative quality of your headphones. It's a lot less expensive to buy really good headphones than it is really good speakers. My headphones cost over a grand, but my speaker system cost many times that.


----------



## Malfunkt

@71 dB and @pinnahertz , 71 dB's setup only gives him a stereo downmix, so I can understand that crossfeed helps in that situation. 

Definitely, without crossfeed, you really are missing the center channel.


----------



## Strangelove424

bigshot said:


> That could have something to do with the relative quality of your speakers compared to the relative quality of your headphones. It's a lot less expensive to buy really good headphones than it is really good speakers. My headphones cost over a grand, but my speaker system cost many times that.



Don't associate cost with performance. That's precisely the kind of wrongly held assumption we try to battle against in Sound Science. You can get good headphones for well under $1,000 and good speakers for well under many times that.

I've heard you observe yourself that headphones can offer up more detail, almost artificial amounts of it. That is my experience as well, but I don't prefer headphones to speakers, or speakers to headphones; I simply see each as offering its own unique listening experience. Regarding the extra detail, the proximity of the drivers to the ears means less attenuation of high frequencies, as does the low weight/inertia of the small drivers used. There are unique benefits to headphones, just as there are unique downsides.


----------



## 71 dB

pinnahertz said:


> You might spend some energy looking into Dolby Headphone. Lots of research done by folks way smarter than either of us went into that.



I did, years ago, when Nokia Lumia phones had it. I mean I never had such a phone, but I think I heard some YouTube demos online.


----------



## pinnahertz

Malfunkt said:


> @71 dB and @pinnahertz , 71db setups only gives him a stereo downmix, so I can understand that crossfeed helps in that situation.
> 
> Definitely, without crossfeed, you really are missing the center channel.


If he starts with an LtRt track, or 5.1 downmixed to LtRt, 71's crossfeed will help a bit with the center, which is encoded as L+R at -3 dB, but it will also partially cancel surround. It's important to understand that the net result is a remix that doesn't resemble anything the creators intended, though it may satisfy someone's basic need to hype the center a little.  Where this application differs from crossfeed on stereo music is that crossfeed results on music are erratic, whereas the LtRt downmix is very well standardized, so crossfeed modifies all tracks the same way. Of course creative intentions vary, so final results will vary too; what will be consistent is the relative LCRS balance modification.
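The center-boost/surround-cancel arithmetic can be shown numerically. This is a deliberately oversimplified, frequency-flat crossfeed (no delay or filtering, unlike any real plugin), just to show what happens to in-phase versus out-of-phase content; `simple_crossfeed` is an illustrative name, not an actual product.

```python
import numpy as np

def simple_crossfeed(l, r, atten_db=-12.0):
    """Oversimplified crossfeed: each output is its own channel plus
    an attenuated copy of the opposite channel (no delay, no filter),
    purely to illustrate the level arithmetic."""
    g = 10 ** (atten_db / 20)
    return l + g * r, r + g * l

g = 10 ** (-12 / 20)                       # crossfeed gain, ~0.251

center = np.ones(8)                        # center content: in phase in Lt/Rt
surr_l, surr_r = np.ones(8), -np.ones(8)   # surround content: out of phase

cl, cr = simple_crossfeed(center, center)
sl, sr = simple_crossfeed(surr_l, surr_r)

# In-phase (center) material is boosted by (1 + g), roughly +2 dB;
# out-of-phase (surround) material shrinks by (1 - g), roughly -2.5 dB:
assert np.allclose(cl, (1 + g) * center)
assert np.allclose(sl, (1 - g) * surr_l)
```

So relative to everything else, the center rises and the surround drops: a consistent, if unintended, LCRS rebalance.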


----------



## pinnahertz

71 dB said:


> I did years ago when Nokia Lumia phones had it. I mean I newer had such phone, but I think I heard some Youtube demos online.


Sounds more like you still need to study Dolby Headphone a bit more.


----------



## bigshot (Nov 12, 2017)

Strangelove424 said:


> Don't associate cost with performance.




Speakers are the one area of home audio that offers better quality for more money. That's because they're mechanical. It doesn't hold true for electronics. A circuit board is a circuit board.  But when you're working with voice coils and acoustics, it don't come cheap. Expensive speakers are expensive for a reason. I'm sure there are lousy expensive ones, but cheap ones aren't generally very good. You get what you pay for with speakers.

Great speakers sound more natural than great headphones. I'll take speakers over headphones any day of the week. They sound more real. I have good headphones, but they stay in the drawer most of the time because they don't hold a candle to my speakers. The only reason I wear them is when I'm editing and I don't want to annoy the people around me. I never listen to headphones for pleasure.


----------



## 71 dB

pinnahertz said:


> Sounds more like you still need to study Dolby Headphone a bit more.



Why don't you educate me about what I don't know, since you so much want me to learn? 
I have admitted many times that HRTF-convolution techniques can surpass normal crossfeed.
Crossfeed reduces/removes excessive stereo separation => flat silver screen, no fatigue.
HRTF-convolution techniques also create depth => a 3D movie instead of 2D.

A DIY crossfeeder is easy and cheap to build. Dolby Headphone, not so much.
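As a rough illustration of how simple a DIY crossfeed is in DSP terms, here is a sketch: each output gets a delayed, low-pass-filtered, attenuated copy of the opposite channel. The 0.5 ms delay and -12 dB attenuation loosely follow the DSP Manager description quoted earlier in this thread; the 700 Hz one-pole low-pass is my own assumption, and this is not 71 dB's actual circuit.

```python
import numpy as np

def diy_crossfeed(left, right, fs=44100, delay_ms=0.5,
                  atten_db=-12.0, cutoff_hz=700.0):
    """Toy crossfeed: each output channel gets a delayed,
    low-pass-filtered, attenuated copy of the opposite channel."""
    d = int(round(fs * delay_ms / 1000))   # delay in whole samples
    g = 10 ** (atten_db / 20)              # -12 dB -> ~0.251
    a = np.exp(-2 * np.pi * cutoff_hz / fs)

    def lowpass(x):                        # simple one-pole low-pass
        y = np.empty_like(x)
        acc = 0.0
        for i, v in enumerate(x):
            acc = (1 - a) * v + a * acc
            y[i] = acc
        return y

    def feed(x):                           # delayed, filtered, quiet copy
        return g * np.concatenate([np.zeros(d), lowpass(x)[:len(x) - d]])

    return left + feed(right), right + feed(left)
```

Feeding an impulse into the left channel produces nothing on the right output until the delay elapses, then a quiet, low-passed copy, which is the interaural cue the brain expects from a real source off to one side.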


----------



## jgazal (Nov 14, 2017)

71 dB said:


> If you can use DSP to *convolute HRTF-impulse responses* with the music then of course do that. *That's cross-feed too*, just in more sophisticated form.



Every time you and pinnahertz exchange ideas, we all learn. That’s dialectics.

I have some doubts.

Correct me if I am wrong, but you said that convolution of HRTF-impulse responses is a more sophisticated form of crossfeed.

I was thinking I could measure a binaural room impulse response with two microphones in my ears. Then I realized such a BRIR would "inseparably" contain not only my HRTF but also the early reflections and reverberation of the room I measured in. But then I could possibly (though far less practically) measure impulses inside an anechoic chamber and so measure only my HRTF.

So when you convolve a BRIR, I see the reason for adding electronic crosstalk: you are trying to replicate the whole system composed of the rig and the room itself.

But when and why would you want to add electronic crossfeed if you convolve a generic HRTF, or if you had the opportunity to measure your own pure HRTF?

Would it be at all useful to avoid acoustic crosstalk with speaker playback, or to avoid electronic crossfeed when you convolve even a BRIR for headphone playback?

What is the difference between a) computing the localization of a sound object in a separate track (within a digital stream of many sound-object tracks) using an HRTF with a density of, let's say, 720 measured coordinates (360/5 azimuth locations * 10 elevations), and b) interpolating* between two coordinates (+ and - 5 degrees azimuth, zero elevation) across 360 degrees of head-movement freedom when playing binaural content**?

* interpolation for head tracking purposes
** or content with natural ILD and ITD; I see you have a “DIY Jecklin disk microphone” in your profile...

Would you need electronic crosstalk in such headphone playback environments with such content (binaural recordings)?

What happens if a generic or your personalized HRTF has a density not of 720 but of, let's say, 16 coordinates, and you need interpolation to place sound objects in their designated locations? Is the function doing that interpolation the same function you would use for head tracking?

So if you have an HRTF of 16 coordinates and you apply interpolation not only to compute each sound object's location but also to rotate it according to the user's head, would you still need crossfeed when playing Atmos content? And what if you are playing third-order ambisonics content?

If you want my opinion, I guess Atmos is intrinsically limited in dealing with acoustic crosstalk, while ambisonics was already designed to deal with it, but I am curious to know what would happen if you did not add electronic crossfeed*** when convolving an HRTF for headphone playback of Atmos and higher-order ambisonics content.

Any idea?
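One piece of the questions above, what HRTF convolution actually does at playback, can be made concrete with a toy sketch: convolving a mono source with the left- and right-ear impulse responses (HRIRs) measured for some direction imposes that direction's ITD and ILD on the signal. The "HRIRs" below are fabricated placeholders, not measurements, and `render_binaural` is a hypothetical helper.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Place a mono source at the direction an HRIR pair was measured
    for, by convolving the source with each ear's impulse response."""
    return np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)

# Fabricated toy "HRIRs" for a source off to the left: the right ear
# hears the sound a few samples later (ITD) and at half the level (ILD).
hrir_l = np.array([1.0, 0.3])
hrir_r = np.array([0.0, 0.0, 0.0, 0.5, 0.15])

src = np.random.default_rng(0).standard_normal(256)
out_l, out_r = render_binaural(src, hrir_l, hrir_r)
```

A real system does this per sound object (or per virtual speaker) with measured or interpolated HRIR pairs, which is why no separate electronic crossfeed stage is needed: the interaural level and time differences are already baked into the filters.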


----------



## pinnahertz

71 dB said:


> Why don't you educate me what I don't know about it since you want so much me to learn?


It was an observation, not a desire.  In fact, I recognize that I cannot educate you.  You seem to do best by educating yourself.  Hence the suggestion.


71 dB said:


> I have admitted many times, that HRTF-convolution techniques can surpass normal crossfeed.


Hmm.  Well, I think we should just pass on counting those many times, shouldn't we?


71 dB said:


> 1. Crossfeed reduces/removes excessive stereo separation => 2. Flat silver screen, no fatique
> 3. HRTF-convolution techniques also create depth => 3D movie instead of 2D
> 
> 4. DIY crossfeeder is easy and cheap to build. Dolby Headphone not so much.


1. I think that may be a first for you, but yes!
2. The implication that a screen type has anything to do with fatigue is false.   The parallel between 3D movies and crossfeed is completely false.
3. We aren't talking about HRTF convolution only with Dolby Headphone (or Dolby Atmos for headphones), though.  It's not "creating" depth, it's placing sounds where they are intended.  Again, the 2D/3D movie analogy is not applicable at all.  Visual localization works completely differently than audio localization.


----------



## 71 dB

I don't think I want "my room" in the recording I am listening to, at least not always. I believe classical music recordings have all the spatial information there is to have, and crossfeed allows that information to enter my ears in a reasonable way. If it is something else, like rock or techno, I don't think "my room" is needed either. It is a more or less artificial soundstage, and I think reducing channel separation to natural levels is all that really matters. That is my _opinion_ and other people may differ.


----------



## jgazal (Nov 12, 2017)

71 dB said:


> I believe classical music recordings have all the spatial information there is to have and crossfeed allows that information to enter my ears in a reasonable way.



Do you believe that the following classical music recording arrangements will render equivalent spatial information for headphones playback with added electronic crosstalk?

A) your “DIY Jecklin disk microphone” [or  ORTF (French, 110 degrees apart) or NOS (Dutch, 90 degrees apart)] sitting at the conductor spot direct to disk;
B) these examples from Decca: The Decca Sound: Secrets Of The Engineers.

Which of them do you believe will sound better with and without crosstalk?


----------



## pinnahertz

71 dB said:


> I don't think I want "my room" in the recording I am listening to, at least not always. I believe *classical music recordings have all the spatial information there is to have* and crossfeed allows that information to enter my ears in a reasonable way.



Oh dear. Are you sure you don't want to rephrase this? I had written a reply but thought you might want to reconsider first.


----------



## jgazal (Nov 14, 2017)

71 dB said:


> 2. HRTF-convolution techniques also create depth => 3D movie instead of 2D





pinnahertz said:


> The parallel of 3D movies and crossfeed is completely false.



One would argue that, given standing waves and bass overhang at low frequencies and comb filtering from early reflections at mid and high frequencies, the analogy is indeed incorrect, or at least a crude parallel.

Nevertheless, when advocating his crosstalk-cancellation algorithm, Dr. Choueiri compares it to a stereoscope, a device people used to wear in order to perceive the 3D effect of stereoscopic pictures:

https://www.audiostream.com/content/bacch-prelude

What I would object to most is the claim that HRTF convolution alone, at playback, causes the 3D effect.

The synthesis of “binaural mixes (equivalent to binaural recordings produced through dummy heads or humans with in-ear microphones)”, played back with speakers (listeners HRTF acoustic convolution) and crosstalk cancellation also causes the 3D effect (perhaps imprecise rendering of elevation). 

Binaural recordings with dummy head microphones, played back with speakers (listeners HRTF acoustic convolution) and crosstalk cancellation also causes the 3D effect (perhaps with imprecise rendering of elevation). 

Regular stereo recordings with natural ILD and ITD, played back with speakers (listeners' HRTF acoustic convolution) and crosstalk cancellation, also cause the horizontal 360-degree soundstage effect, and probably struggle to render any elevation.

In the last three playback environments, the loudspeaker crosstalk cancellation algorithm may improve with electronic PRIR convolution.

Binaural recordings played back with headphones, HRTF convolution, without electronic crosstalk and with headtracking also causes the 3D effect (perhaps imprecise rendering of elevation).

Regular stereo recordings with natural ILD and ITD, played back with headphones, electronic HRTF convolution, with headtracking, but with lower level of electronic crosstalk than one would find with acoustical crosstalk, also causes the 3D effect (perhaps imprecise rendering of elevation). See:



Erik Garci said:


> By the way, I recently created a PRIR for stereo sources that simulates perfect crosstalk cancelation. To create it, I measured just the center speaker, and fed both the left and right channel to that speaker, but the left ear only hears the left channel because I muted the mic for the right ear when it played the sweep tones for the left channel, and the right ear only hears the right channel because I muted the mic for the left ear when it played the sweep tones for the right channel. The result is a 180-degree sound field, and sounds in the center come from the simulated center speaker directly in front you, not from a phantom center between two speakers, so they do not have comb-filtering artifacts as they would from a phantom center.
> 
> Binaural recordings sound amazing with this PRIR and head tracking.





Erik Garci said:


> Using the first PRIR, central sounds seem to be in front of you, and they move properly as you turn your head. However, far-left and far-right sounds stay about where they were. That is, they sound about the same as they did without a PRIR, and they don't move as you turn your head. In other words, far-left sounds stay stuck to your left ear, and far-right sounds stay stuck to your right ear. It's possible to shift the far-left and far-right sounds towards the front by using the Realiser's mix block, which can add a bit of the left signal to the front speaker for the right ear, and a bit of the right signal to the front speaker for the left ear.



Binaural recordings and regular stereo recordings played back with headphones, with electronic HRTF convolution, but adding electronic crosstalk and headtracking will render the external pan pot stereo effect we are used to perceive with regular speakers in a room.

Binaural recordings made by the own user played back with headphones, without HRTF convolution, without electronic crosstalk and with headtracking also causes the 3D effect (perhaps with more precise rendering of elevation).

Object based tracks mixed with a personalized HRTF convolution (one measured in an anechoic chamber), played back with headphones, without electronic crosstalk and with headtracking also causes the 3D effect (perhaps with more precise rendering of elevation depending on the HRTF density or the integration quality of the interpolation algorithm).

Higher order ambisonics with an PRIR convolution, played back with headphones, without electronic crosstalk and with headtracking also causes the 3D effect (perhaps with more precise rendering of elevation depending on the order used: 3rd 16 channels or 4th 32 channels with and recordings from eigenmikes). And with a little bit more of research and an array of eigenmikes perhaps soundfield navigation of recorded venues! (https://www.princeton.edu/3D3A/Publications/Tylka_POMA_NavigationEvaluation.html)

Pan-pot stereo recordings with unnatural ILD and ITD, played back with speakers and crosstalk cancellation, or played back with headphones without the addition of crossfeed, will sound odd, as 71 dB fears. See:



> *3 Is the 3D realism of BACCH™ 3D Sound the same with all types of stereo recordings?*
> (...)
> All other stereophonic recordings fall on a spectrum ranging from recordings that highly preserve natural ILD and ITD cues (these include most well-made recordings of “acoustic music” such as most classical and jazz music recordings) to recordings that contain artificially constructed sounds with extreme and unnatural ILD and ITD cues (such as the pan-potted sounds on recordings from the early days of stereo). For stereo recordings that are at or near the first end of this spectrum, BACCH™ 3D Sound offers the same uncanny 3D realism as for binaural recordings18. At the other end of the spectrum, the sound image would be an artificial one and the presence of extreme ILD and ITD values would, not surprisingly, lead to often spectacular sound images perceived to be located in extreme right or left stage, very near the ears of the listener or even sometimes inside of his head (whereas with standard stereo the same extreme recording would yield a mostly flat image restricted to a portion of the vertical plane between the two loudspeakers).
> (...)
> https://www.princeton.edu/3D3A/PureStereo/Pure_Stereose13.html#x28-1300013



So many possibilities; too difficult to write them all down without getting something wrong. Anxious to test them all.

P.s.: Edited several times to correct mistakes and to embrace more content x playback environment possibilities.


----------



## pinnahertz

jgazal said:


> One would argue that at low frequencies (standing waves and bass overhang) and with comb filtering from early reflections at mid and high frequencies the analogy is indeed incorrect or at least a crude parallel.
> 
> Nevertheless, when advocating his crosstalk-cancellation algorithm, Dr. Choueiri compares it to a stereoscope, a device people used to wear in order to perceive the 3D effect of stereoscopic pictures:
> .


To save time....”Depth perception arises from a variety of depth cues. These are typically classified into binocular cues that are based on the receipt of sensory information in three dimensions from both eyes and monocular cues that can be represented in just two dimensions and observed with just one eye.[2][3] Binocular cues include stereopsis, eye convergence, disparity, and yielding depth from binocular vision through exploitation of parallax. Monocular cues include size: distant objects subtend smaller visual angles than near objects, grain, size, and motion parallax.[4]” quote from Wiki. 

Visual depth perception is much more different than similar to spatial hearing. The parallels really don’t work.

The biggest problem with 3D imaging is the focus plane is fixed but the convergence distance is constantly changing and usually out of parity with the focus plane (the screen or aerial image distance). The problems in reproducing 3D audio are completely different.


----------



## jgazal

pinnahertz said:


> The biggest problem with 3D imaging is the focus plane is fixed but the convergence distance is constantly changing and usually out of parity with the focus plane (the screen or aerial image distance). The problems in reproducing 3D audio are completely different.



Please forgive me, I did not dig enough about it. Indeed it was in my to do list:



jgazal said:


> I just hope Smyth Research develops a way to seamlessly integrate the Realiser A16 with softwares that are able to display stereoscopic threedimensional pictures. I would love to use a BRIR and a VR headset with an stereoscopic picture of the measured room to match vision and audition. I asked them and they did not answer. Perhaps parallax errors, viewing angle of VR headsets and other difficulties with stereoscopic 3D 360 degrees images don’t allow a precise match between real speakers image and virtual speakers sound just yet. I am looking forward to anybody figuring this out.


----------



## bigshot

Yeah, visual depth perception is quite different from auditory depth perception, with one exception: the importance of head movement. We turn our heads to see depth, like a deer will cock its head back and forth to determine how far away that coyote is. We do the same thing with sound to perceive directionality and distance. Beyond that, it's best to think of depth in sound as either primary depth cues (slight phase, echo, and location information that exists in real space in the listening room) or secondary depth cues (echo information recorded into the music itself). The primary depth cues are real. The secondary ones are copied from a different space. If you combine the two well, secondary cues can greatly enhance the perceived depth in a recording. If the secondary cues don't jibe with the real-world environment, they can detract and just muddy up the sound.


----------



## 71 dB

jgazal said:


> Do you believe that the following classical music recording arrangements will render equivalent spatial information for headphones playback with added electronic crosstalk?
> 
> A) your “DIY Jecklin disk microphone” [or  ORTF (French, 110 degrees apart) or NOS (Dutch, 90 degrees apart)] sitting at the conductor spot direct to disk;
> B) these examples from Decca: The Decca Sound: Secrets Of The Engineers.
> ...



My short answer is: B) needs much stronger crossfeed than A), so without crossfeed A) sounds better.



pinnahertz said:


> Oh dear. Are you sure you don't want to rephrase this? I had written a reply but thought you might want to reconsider first.



Do you hear the acoustics of your living room when you are in the classical concert? The acoustics of the concert hall is all we need to capture.


----------



## pinnahertz (Nov 12, 2017)

71 dB said:


> Do you hear the acoustics of your living room when you are in the classical concert?


Edit: _Not if I attend the actual concert, but when listening to a recording,_ Yes, and so do you and everyone else.


71 dB said:


> The acoustics of the concert hall is all we need to capture.


Two problems: 1. We can't capture concert hall acoustics in any practical way that even begins to be the complete acoustic picture. 2. The goal of all recordings is to present an acceptable representation of something, good enough to suspend disbelief, not to replicate the original.  That's both practical and good because replication of the original is mostly impossible because it never actually existed at all.


----------



## 71 dB

pinnahertz said:


> Two problems: 1. We can't capture concert hall acoustics in any practical way that even begins to be the complete acoustic picture. 2. The goal of all recordings is to present an acceptable representation of something, good enough to suspend disbelief, not to replicate the original.  That's both practical and good because replication of the original is mostly impossible because it never actually existed at all.



Problems, problems, problems… how about putting headphones on, setting the proper crossfeed level, and just enjoying the music instead of thinking about these problems? As you said, it's good enough to suspend disbelief...


----------



## Strangelove424

bigshot said:


> Speakers are the one area of home audio that offers better quality for more money. That's because they're mechanical. It doesn't hold true for electronics. A circuit board is a circuit board.  But when you're working with voice coils and acoustics, it don't come cheap. Expensive speakers are expensive for a reason. I'm sure there are lousy expensive ones, but cheap ones aren't generally very good. You get what you pay for with speakers.
> 
> Great speakers sound more natural than great headphones. I'll take speakers over headphones any day of the week. They sound more real. I have good headphones, but they stay in the drawer most of the time because they don't hold a candle to my speakers. The only reason I wear them is when I'm editing and I don't want to annoy the people around me. I never listen to headphones for pleasure



A good engineer with a budget can make something that sounds as good as or better than a buffoon with pounds and pounds of beryllium. Yes, the construction and materials cost more than headphones, but you don't need to spend a fortune on speakers to get high fidelity, especially in today's age, with the prevalence of artificial materials and automated manufacturing. The "didn't spend enough" excuse is used to stymie all sorts of arguments: "You don't hear what I do because you didn't spend the money," and similar trains of thought. It's a statement that's impossible to respond to with any intelligence, and it defers to the superiority of the wallet, which is what spurs such high spending in this hobby. Spend enough money, and nobody on any forum can question you. Owners of summit gear are treated like royalty, their perspectives unquestioned. It's a toxic attitude in this hobby. I'm not saying you are one of those types, but you should not appeal to their logic.

I understand your preference for speakers. Many people feel the same way. That said, the premise of Head-Fi is headphone listening, and that can't be avoided. Some people (I'm imagining all sorts of apartment dwellers in LA, NY, Tokyo, London, etc.) simply can't own a speaker setup or listen to one loudly enough with much enthusiasm. Those people don't have much choice but to listen to headphones for pleasure. Personally, I'm able to listen to both headphones and speakers, and I get pleasure (and utility) out of both for different reasons.


----------



## pinnahertz (Nov 13, 2017)

71 dB said:


> Problems problems problems… …how about putting headphones on, setting proper crossfeed level and just enjoy the music instead of thinking about these problems? As you said, it's good enough to suspend disbelief...


How about when "setting proper crossfeed" means not using it at all?  I believe I gave an example of that...

But, you said:


71 dB said:


> I believe 1. *classical music recordings have all the spatial information there is to have* and 2. *crossfeed allows that information to enter my ears in a reasonable way*.



1. All the spatial information there is to have? Even modest familiarity with stereo microphone arrays should reveal the complete nonsense of that statement. ORTF/XY: less than a hemisphere.  Coincident pair: no ITD. M/S: no ITD.  The Decca Tree: scrambled ITD, and unique ILD, but capturing less than a hemisphere.  Spaced omnis: fully scrambled ITD and ILD. Spot mic: mono, no 3D spatial information. And those are the commonly used ones.  They all fall far short of capturing “all the spatial information there is to have”, but each is usable as an element for creating a believable mix.

2. Since the information isn’t even partially captured, it’s not going to enter your ears, crossfeed or not. Crossfeed corrects for one problem: widely separated mixes listened to on headphones.  There hasn't been a widely separated orchestral ping-pong style recording made commercially in perhaps half a century.

You’ve completely missed the purpose of recording and reproduction.  We aren’t trying to replicate the entire acoustic event, we are creating something new that represents the impression and feeling of the event when played in two-channel stereo on speakers in a typical home.  You don't need to capture all the spatial information there is to do that, which is fortunate because we can't grab even a fraction of it anyway.  That's why we have other solutions.   There is better spatial representation when 5.1 or greater is used, but we still aren't replicating the original, or even close to it.

I worked on a series of recordings for broadcast of a world-class orchestra in their home hall. The hall had, at that time, been tragically "ruinovated" to the point that the acoustic environment was no longer very good for concerts, being overly dry, with an assortment of other issues. We used many mics, including mono spots and various stereo pairs, and added (gasp!) artificial reverberation, mixed actively and judiciously, to create something that not even the live audience heard: good concert acoustics.  But the AKG BX20 reverb (yes, it was springs!) didn't generate any 3D space, it generated random space.  The Lexicon 224 that came shortly after did a better job, with the same result.  They accomplished our goal, but very little spatial information from the original hall existed in those recordings.


----------



## 71 dB

pinnahertz said:


> How about when "setting proper crossfeed" means not using it at all?  I believe I gave an example of that...
> 
> But, you said:
> 
> ...



You should be a lawyer. I feel like I'm in court when debating with you. Everything I say, you use against me.

1. Yes, you use the best option and that's it. What else can you do? You have it in stereo or, in better cases, in multichannel.

2. I believe it is captured almost completely, considering the sound is reproduced with stereophonic headphones or speakers. Even a single mono microphone in a room would be exposed to the acoustics, largely capturing the reverberation time as a function of frequency completely, while of course ignoring directional information. A stereo microphone pair captures a lot of that directional information, and multichannel microphone setups even more. Classical music in general doesn't suffer from strong stereo separation, but the microphone setups cause some excessive separation. For example, if you have an AB pair 7 feet apart, the ITD information will be exaggerated almost 10-fold in headphone listening. If you use ORTF, the ITD will be well scaled, but cardioid microphones will produce excessive ILD information. Well, of course you don't think about headphones. You think speakers, and that's why the result is often not so optimal for headphones. Crossfeed helps correct this. I like to use quite strong crossfeed on orchestral music and with string quartets for some reason. Solo piano is the rebel against crossfeed and often doesn't want much of it.
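As a rough numerical check on the "almost 10-fold" ITD claim above (a sketch; the 2.13 m mic spacing, 0.22 m ear spacing, and 343 m/s speed of sound are my assumptions, and head diffraction is ignored):

```python
C = 343.0                     # speed of sound in air, m/s (assumed)

def max_itd_ms(spacing_m):
    """Maximum time-of-arrival difference across two receivers for a
    distant source on their axis (path difference = spacing), in ms."""
    return spacing_m / C * 1000

ab_pair = max_itd_ms(2.13)    # AB omni pair about 7 ft apart
ears = max_itd_ms(0.22)       # rough ear spacing, diffraction ignored
print(f"AB pair: {ab_pair:.2f} ms, ears: {ears:.2f} ms, "
      f"ratio: {ab_pair / ears:.1f}x")
```

Since the path difference scales with receiver spacing, the exaggeration factor is simply the ratio of the spacings: a ~7 ft AB pair overstates ITD by roughly a factor of ten when played back on headphones.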

The purpose of recording is to sell recordings and to make money doing so. Most music listeners understand absolutely nothing about what we are talking about here. They'd have a hard time telling a ping-pong stereo recording apart from mono. Classical music is recorded in a way that tries to replicate the acoustic event as accurately as possible, while a recorded rock concert perhaps has another philosophy.

Your recording in the "ruinovated" acoustics may not have much real acoustic information, but hearing can be fooled, otherwise using spring reverbs would only ruin things. I don't believe our hearing expects 100 % accurate spatial information, because such a thing isn't unique: move to another seat in a concert hall and the acoustics your ears experience change. However, you don't find the changed acoustics strange, because our hearing is used to that. Movement changes acoustics ***. However, excessive stereo separation _never_ happens in real life, no matter where you sit in a concert hall. That's why crossfeed is so important.

*** I did some binaural test recordings with electret mics in my ears a couple of years ago. I walked outdoors and came home. The acoustics changed a lot from an open environment to a closed space. When I come home in everyday life, I don't pay much attention to this change because it's expected, but listening to the recording at my computer with headphones made the changes seem huge, because they were unexpected. I wasn't moving at all, so why were the acoustics changing so much?


----------



## bigshot

The thing about speakers and cost is that it all depends on the volume level and size of the room. If you have a small dorm room, near field monitors and sitting close will do, and that doesn't cost much at all. But if you have a good size space to fill and you want to get the volume up, it isn't inexpensive. Perhaps you're just defining expensive differently than I am. A decent set of speakers for a 5.1 system in a good sized living room would run between $5,000 and $10,000. To me, that qualifies as expensive. I'm not talking about speakers that are $15,000 apiece. I know that exists, but that seems like overkill to me. I can totally see spending a grand or two for a speaker though. (I've done it myself.)


----------



## Strangelove424

The required space does have an impact on cost. If you are nearfield, in the example you used for a dorm room, you can get a pair of powered monitors for relatively cheap, and hear everything there is to hear for $500 or less. A decent 5.1 system for an average living room, I think you could probably get away with spending $2,000-$3,000 (close to my own budget) and have an unbelievably good sounding system, including receiver and sub. It won't be a 1,000sq ft system, but for an average 15x15 living room it would do the trick. 

One day, I hope to spend about $10k on a truly high end system. I'm not saying that spending thousands of dollars on speakers is a waste. There are great speakers out there that fetch a hefty sum because they have solid engineering and materials behind them. But high price is not a guarantee of such quality, and low price doesn't necessarily mean you're missing out on much either.


----------



## bigshot (Nov 13, 2017)

I tried to get away with 2-3 grand in my living room. It would have been fine for TV, but not for music. You could probably do that if speaker size wasn't an issue and you were willing to buy used though. The hardest speaker to get cheap and good is the sub. You pretty much have to get one that costs more than $800 or $900 if you want it to sound decent with music. The cheaper ones are fine for bass rumble for movies, but they aren't flat and they sound really woofy. The center channel is really important too. I tried to cheap out there with a $150 Woot special and it just couldn't keep up with the volume from the mains. Dialogue in movies wasn't coming through over the music and if I boosted it to ride over the hump, it would flatten out. If you have money, it's always well spent on mains too. Rears are the ones that you can go cheap on, but not so much if you listen to a lot of multichannel music where the rears are just as active as the fronts. In that case, you need equally good speakers all around.

The biggest trick is volume in a good sized room. If you want to get it up to over 80dB, you need pretty darn good speakers to avoid distorting or flattening out, and you need a pretty powerful amp to push all those channels, especially with modern speakers that aren't terribly efficient. Cheap speakers can sound OK if you keep the volume down and you have a smaller space to fill, but finding really good loudspeakers for cheap is almost impossible unless you buy used stuff from the 70s. Price actually does have meaning with speakers. Cheap speakers are generally those little satellite jobs. It's impossible to get a reasonably flat response without huge frequency firebreaks in the sound. Bookshelves are better, but it's hard to fill a good sized space with them. Tower speakers are generally the best, but you're talking a lot more money when you have multiple drivers.


----------



## SilverEars (Nov 13, 2017)

You know what's interesting that I've run into?  There was a crappy video recording with high ambient noise that couldn't be articulated well by my nearfield JBL speakers, but my headphones managed it.  The articulation was much better through headphones, and of course detail was heard better, even minute ambient noise detail, but it did have that contained, environmental sound from the recording that the speakers didn't have.  I don't know which is more real, but headphones were better in the articulation sense.


----------



## bigshot

Probably a response difference in just the right place to cut down the noise.


----------



## Strangelove424 (Nov 13, 2017)

bigshot said:


> I tried to get away with 2-3 grand in my living room. It would have been fine for TV, but not for music. You could probably do that if speaker size wasn't an issue and you were willing to buy used though. The hardest speaker to get cheap and good is the sub. You pretty much have to get one that costs more than $800 or $900 if you want it to sound decent with music. The cheaper ones are fine for bass rumble for movies, but they aren't flat and they sound really woofy. The center channel is really important too. I tried to cheap out there with a $150 Woot special and it just couldn't keep up with the volume from the mains. Dialogue in movies wasn't coming through over the music and if I boosted it to ride over the hump, it would flatten out. If you have money, it's always well spent on mains too. Rears are the ones that you can go cheap on, but not so much if you listen to a lot of multichannel music where the rears are just as active as the fronts. In that case, you need equally good speakers all around.
> 
> The biggest trick is volume in a good sized room. If you want to get it up to over 80dB, you need pretty darn good speakers to avoid distorting or flattening out, and you need a pretty powerful amp to push all those channels, especially with modern speakers that aren't terribly efficient. Cheap speakers can sound OK if you keep the volume down and you have a smaller space to fill, but finding really good loudspeakers for cheap is almost impossible unless you buy used stuff from the 70s. Price actually does have meaning with speakers. Cheap speakers are generally those little satellite jobs. It's impossible to get a reasonably flat response without huge frequency firebreaks in the sound. Bookshelves are better, but it's hard to fill a good sized space with them. Tower speakers are generally the best, but you're talking a lot more money when you have multiple drivers.



Again, I think it comes down to the space you need to fill. I live close by Hsu subwoofers and went in there for an audition before I purchased. They seated me in a small listening room with a $1500 concrete subwoofer and their horn bookshelf speakers. The sub weighed about as much as I do, and the performance was insane. Way overkill, but the quality of the bass was superb. A sumptuous, melodic bass that captures all the gentle strumming of an upright bass. Did I purchase that $1500 subwoofer? No way in hell. I went with Hsu’s budget subwoofer for $400. Inside of my own room, which is bigger than Hsu’s demo room, did I notice a startling difference in performance between their mega sub and their budget one? Not really, same tightness, same richness, same detailed strumming of a bass. The bass quality was there, I just couldn’t get the SPL the concrete giant could. Which in my space is completely, utterly unnecessary, and I would even argue that it was unnecessary in Hsu’s own listening room. I asked them why they used such a huge sub. “Can never have too much sub” I was told. Sure. But my wallet would beg to differ. Hsu had a chart for comparing room size, SPL, and the sub that would do the job. I purchased the one they recommended for my space, and have been well served by it.

I brought the sub in to tighten the jacks recently and asked about their new models, having my eye on an upgrade. The helpful gentlemen who worked on my sub told me that the new models will go about 4-5Hz lower. I couldn’t justify spending another $500 for 5Hz into infrasonic territory. Then he mentioned higher SPL, but I didn’t think I needed any more of that either. He mentioned that while he was working on the sub, he saw I had my volume knob set at 9 o’clock. “Yeah”, I said “and even then it’s too loud for certain action movies.” “Probably don’t need the extra SPL then”. “Nah, probably not.”

Upgraditis avoided. Thanks to a damn good (enough) budget sub. I’ve decided I’m not getting a new one till mine craps out.


----------



## Strangelove424

SilverEars said:


> You know what's interesting that I've run into?  There was a crappy video recording with high ambient noise that couldn't be articulated well by my nearfield JBL speakers, but my headphones managed it.  The articulation was much better through headphones, and of course detail was heard better, even minute ambient noise detail, but it did have that contained, environmental sound from the recording that the speakers didn't have.  I don't know which is more real, but headphones were better in the articulation sense.



Likely an effect of different high frequency response, same as is true between different pairs of headphones. For instance, on HD600s I get a far more speaker-like presentation than on DT880s, which greatly emphasize high frequency detail and thus ambient noise. But this can be a boon when you need a microscope of sorts in order to reduce ambient noise in editing or do de-essing of vocals/dialogue. In that case, the DT880s provide a wonderfully magnified view of all the high frequency quirks that need to be addressed. But it's not a presentation I would use at the final mastering stage to decide balance. Headphones generally are bad for that stage.


----------



## pinnahertz

71 dB said:


> I am done with pinnahertz.... Why do I even bother?





71 dB said:


> I have online discussion board fatigue. I am burned out.... I will be a loser forever....Crossfeed is my life and I nearly destroyed one of my only sources of happiness on this board within a few weeks.





71 dB said:


> You should be a lawyer. I feel like I'm in court when debating with you. Everything I say, you use against me.


Just a short sampling to help explain this post. 

I'm sorry if I've caused you pain, suffering, burn-out, etc. My purpose is accuracy of information.  We appear to be at cross purposes, but it is never my intent to hurt anyone.  I'll keep my disagreements with your posts silent.  Whatever value they may have to others, they'll be fine with or without my contributions.  You will not, so with respect, I'll decline to respond.


----------



## jgazal (Nov 14, 2017)

@71 dB and @pinnahertz, I enjoyed the discussion.

Only in the sound science forum do we have the chance to talk about technical restraints and the readily available tools, or tools in the making, to overcome them.

For instance, by reading and posting in the thread:

A) Accuracy is subjective, I realized the importance of how content is produced;
B) How do we hear height in a recording with earphones, I noticed the role of spectral cues;
C) here, I am now eager to test what results one would get from acoustic crosstalk with 3rd-order ambisonics, rather than adding electronic crosstalk with headphones, when auralizing 3rd-order ambisonics.

Thanks to both of you and the head-fiers that started such threads for exposing such questions.

Edited to mention other very informative threads:
Are binaural recordings higher quality than normal ones?
About SQ


----------



## 71 dB

pinnahertz said:


> Just a short sampling to help explain this post.
> 
> I'm sorry if I've caused you pain, suffering, burn-out, etc. My purpose is accuracy of information.  We appear to be at cross purposes, but it is never my intent to hurt anyone.  I'll keep my disagreements with your posts silent.  Whatever value they may have to others, they'll be fine with or without my contributions.  You will not, so with respect, I'll decline to respond.



You don't need to apologize for anything, *pinnahertz*. I came here thinking my opinions represented accurate information. I tried to defend myself, but it seems that caused burn-out. I am confused now and need to process what happened, and do other things. That's why I will be less active here.



jgazal said:


> @71 dB and @pinnahertz, I enjoyed the discussion.
> 
> Only in the sound science forum do we have the chance to talk about technical restraints and the readily available tools, or tools in the making, to overcome them.
> 
> ...



You're welcome, *jgazal*! This has not gone as smoothly as I hoped, but I'm glad you got something out of it.


----------



## 71 dB

*Spatial deafness and the limits of real life ILD.*

I think headphone listening without crossfeed causes people _spatial deafness_. Just as slower speeds can seem very slow after driving fast for a while, exposing your ears to excessive spatial information with headphones may cause spatial deafness. Because of this, turning crossfeed on may make the sound feel quite monophonic, which causes negative reactions in some people, but those people make their judgement too fast. Listening to crossfed sound for a while makes the spatial deafness go away, and the spatiality of the recording "emerges" from the monophony. Crossfeed tends to show its benefits slowly. Hearing adjusts slowly to the lack of excessive spatial information and unnatural "special spatial effects". The lack of listening fatigue shows up only after a longer listening session. However, after the benefits of crossfeed have become clear, one doesn't want to go back to unnatural superstereophonic listening.

Our hearing is very sensitive to ILD and ITD information. At low frequencies no more than about 3 dB of ILD is needed. That is almost monophonic, but our spatial hearing is that sensitive, and of course, combined with the supportive and (below 800 Hz) actually more important ITD information, it is enough. Listening to loudspeakers doesn't produce ILD greater than this. Depending on the room modes and reverberation, the ILD with loudspeakers varies between 0 and 3 dB at low frequencies. This can be seen in HRTF measurements, where the difference between +90° and -90° horizontal angles at low frequencies is about 5 decibels for a sound 1 m (40") away, and even less for sounds at greater distance [1]:





We also see that for sounds only 12 cm (5") away, max ILD at low frequencies is almost 20 dB! That is why headphones without crossfeed sound so close and small, and why the soundstage gets bigger with crossfeed, which many may find counterintuitive. Narrowing stereo separation actually widens the sound up to a certain point, after which the sound becomes so monophonic that the soundstage collapses again. Having the optimum amount of ILD is the key! For typical speaker angles, HRTF responses look like this [2]:



Here LL is the left speaker to the left ear and LR the left speaker to the right ear. Here ILD up to 200 Hz is 0 dB! However, this is misleading, because room reverberation (modes) will produce some ILD, so that the final ILD is between 0 and about 3 dB. Totally monophonic bass isn't best with headphones. In my opinion a few decibels of ILD create a natural sensation of low frequencies behaving according to the laws of physics in an acoustic environment. The important point to understand here is that with speakers it doesn't matter whether you have 0 dB or 100 dB of channel separation at bass, because room acoustics + HRTF transform it to 0-3 dB for the listener (depending on the frequency) anyway, but with headphones (without crossfeed) it means the difference between "life and death", so wouldn't it be rational to optimize ILD at bass for headphones? That means limiting it to about 3 dB.

[1] https://www.researchgate.net/figure...sponses-of-the-HRTF-filters-used-to-spatially
[2] https://www.dirac.com/dirac-blog/how-to-make-headphones-stereo-compatible
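The kind of crossfeed debated throughout this thread can be sketched in a few lines of stdlib-only Python (a generic illustration, not any particular plugin; the one-pole filter, 700 Hz cutoff, and 8 dB attenuation are assumed, illustrative values in the range such plugins use):

```python
import math

def lowpass(x, fs, cutoff_hz):
    """One-pole low-pass filter: a simple stand-in for the shelving or
    low-pass filters real crossfeed plugins use."""
    a = math.exp(-2 * math.pi * cutoff_hz / fs)
    y, acc = [], 0.0
    for v in x:
        acc = (1 - a) * v + a * acc
        y.append(acc)
    return y

def crossfeed(left, right, fs=44100, cutoff_hz=700.0, atten_db=8.0):
    """Add a low-pass-filtered, attenuated copy of the opposite channel
    to each channel, reducing low-frequency ILD toward speaker-like values."""
    g = 10 ** (-atten_db / 20)                      # crossfeed gain, linear
    lp_r = lowpass(right, fs, cutoff_hz)
    lp_l = lowpass(left, fs, cutoff_hz)
    return ([lv + g * rv for lv, rv in zip(left, lp_r)],
            [rv + g * lv for rv, lv in zip(right, lp_l)])

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

# Hard-panned 100 Hz tone: infinite ILD before crossfeed, finite after
fs = 44100
left = [math.sin(2 * math.pi * 100 * n / fs) for n in range(4410)]
right = [0.0] * 4410
l_out, r_out = crossfeed(left, right)
ild_db = 20 * math.log10(rms(l_out) / rms(r_out))
print(f"ILD after crossfeed: {ild_db:.1f} dB")
```

With these settings the hard-panned tone comes out with roughly 8 dB of ILD instead of an infinite one; the residual ILD is set almost entirely by the attenuation parameter at frequencies well below the cutoff.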


----------



## pinnahertz

71 dB said:


> *Spatial deafness and the limits of real life ILD.*
> 
> I think headphone listening without crossfeed causes people _spatial deafness_. Just as slower speeds can seem very slow after driving fast for a while, exposing your ears to excessive spatial information with headphones may cause spatial deafness. Because of this, turning crossfeed on may make the sound feel quite monophonic, which causes negative reactions in some people, but those people make their judgement too fast. Listening to crossfed sound for a while makes the spatial deafness go away, and the spatiality of the recording "emerges" from the monophony. Crossfeed tends to show its benefits slowly. Hearing adjusts slowly to the lack of excessive spatial information and unnatural "special spatial effects". The lack of listening fatigue shows up only after a longer listening session. However, after the benefits of crossfeed have become clear, one doesn't want to go back to unnatural superstereophonic listening.


This is likely true, but a temporary condition.  Making up a new term like "spatial deafness" is alarmist. It already has a name: *Auditory Adaptation.* 

Personally, having tried listening with various types of crossfeed, and for extended periods, my personal experience is that only very few recordings sound better that way, and going back to no crossfeed on a recording that doesn't benefit from it is always an improvement, regardless of time spent listening, though eventually whatever I'm listening to becomes natural because of Auditory Adaptation.


71 dB said:


> Our hearing is very sensitive to ILD and ITD information. At low frequencies no more than about 3 dB of ILD is needed. That is almost monophonic, but our spatial hearing is that sensitive, and of course, combined with the supportive and (below 800 Hz) actually more important ITD information, it is enough. Listening to loudspeakers doesn't produce ILD greater than this. Depending on the room modes and reverberation, the ILD with loudspeakers varies between 0 and 3 dB at low frequencies. This can be seen in HRTF measurements, where the difference between +90° and -90° horizontal angles at low frequencies is about 5 decibels for a sound 1 m (40") away, and even less for sounds at greater distance [1]:


Kind of underestimating room modes here.  If they combine to create a null, a very slight difference in position makes a very big difference, much bigger than 3 dB.  If you're sitting in a null, ear spacing can be enough to do this.  Deep nulls occur in rooms all the time, depending on frequency and room dimensions.


71 dB said:


> We also see that for sounds only 12 cm (5") away, max ILD at low frequencies is almost 20 dB! That is why headphones without crossfeed sound so close and small, and why the soundstage gets bigger with crossfeed, which many may find counterintuitive.


Except...you've ignored how recordings are made and mixed.  A hard-panned, acoustically or electronically "dry" sound is indeed very rare, sort of an "effect" rather than a normal mix technique.  As soon as you put ambience around the sound and in the opposite channel that close perspective is mitigated.


71 dB said:


> Narrowing stereo separation actually widens the sound up to a certain point, after which the sound becomes so monophonic that the soundstage collapses again.


I strongly disagree, and anyone experimenting with this will readily see that narrowing separation by simply mixing both channels NEVER results in widening of the soundstage.  If you want to widen by reducing separation something else must happen too!
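The claim is easy to verify numerically (a sketch with synthetic signals, not taken from either poster): blending each channel with a fraction k of the other scales the side (L−R) component by |1−2k|, so plain mixing monotonically narrows the image and never widens it.

```python
import random

random.seed(0)
mid  = [random.gauss(0, 1) for _ in range(2000)]   # shared (center) content
side = [random.gauss(0, 1) for _ in range(2000)]   # stereo difference content
L = [m + s for m, s in zip(mid, side)]
R = [m - s for m, s in zip(mid, side)]

def std(x):
    mu = sum(x) / len(x)
    return (sum((v - mu) ** 2 for v in x) / len(x)) ** 0.5

side_std = std(side)
ratios = []
for k in (0.0, 0.25, 0.5):                         # plain channel blending
    Lk = [(1 - k) * a + k * b for a, b in zip(L, R)]
    Rk = [(1 - k) * b + k * a for a, b in zip(L, R)]
    side_k = [(a - b) / 2 for a, b in zip(Lk, Rk)] # recovered side signal
    ratios.append(std(side_k) / side_std)          # = |1 - 2k|
print(ratios)
```

The side level falls from 100% (k=0) to 50% (k=0.25) to zero (k=0.5, full mono); any perceived widening from crossfeed must therefore come from something other than the mixing itself, such as the filtering and delay.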


71 dB said:


> Having the optimum amount of ILD is the key! For typical speaker angles, HRTF responses look like this [2]:
> 
> Here LL is the left speaker to the left ear and LR the left speaker to the right ear. Here ILD up to 200 Hz is 0 dB! However, this is misleading, because room reverberation (modes) will produce some ILD, so that the final ILD is between 0 and about 3 dB. 1. Totally monophonic bass isn't best with headphones. 2. In my opinion a few decibels of ILD create a natural sensation of low frequencies behaving according to the laws of physics in an acoustic environment.  3. The important point to understand here is that with speakers it doesn't matter whether you have 0 dB or 100 dB of channel separation at bass, because room acoustics + HRTF transform it to 0-3 dB for the listener (depending on the frequency) anyway, but with headphones (without crossfeed) it means the difference between "life and death", so wouldn't it be rational to optimize ILD at bass for headphones? That means limiting it to about 3 dB.
> [1] https://www.researchgate.net/figure...sponses-of-the-HRTF-filters-used-to-spatially
> [2] https://www.dirac.com/dirac-blog/how-to-make-headphones-stereo-compatible


1. Mono bass results in far stronger bass response, though.  When bass arrives at both ears it becomes stronger, more solid, and more purposeful.  Listen to a strong bass line, then cut one ear; you'll easily hear the effect.

2. The acoustic environmental effects are far stronger than 3 dB, though.  And frequency-dependent.  This cannot be simply simulated with 3 dB of bass crossfeed.

3. Oh yes, it DOES matter!  Bass in rooms is a result of what happens in ALL speakers and the resulting room mode stimulation.  The best example of this is the theory of multiple subwoofers.


----------



## bigshot

I've had people tell me that it's impossible to fill a good sized room with bass with just one sub. But I'm doing just that in my room. The sub is a little to the right toed in a bit, but the bass fills the whole room evenly, and even bass solos on the left are clearly placed. It may be the power of my Sunfire sub, or some quirk of the shape of the room, I don't know enough about bass acoustics to figure out why. But I know enough about acoustics to not mess with a good thing!


----------



## pinnahertz

bigshot said:


> I've had people tell me that it's impossible to fill a good sized room with bass with just one sub. But I'm doing just that in my room. The sub is a little to the right toed in a bit, but the bass fills the whole room evenly, and even bass solos on the left are clearly placed. It may be the power of my Sunfire sub, or some quirk of the shape of the room, I don't know enough about bass acoustics to figure out why. But I know enough about acoustics to not mess with a good thing!


Bass filling the whole room with smooth response is impossible with one sub, but yours may work well at your listening position. Measurements would serve you well here.


----------



## castleofargh

Here is what I use in so-called "true stereo" for the convolution. Well, I flatten the low end (because I like it like that and it doesn't really impact localization) and apply EQ for whatever headphone/IEM I use, but that's it otherwise. It's based on the 30° impulse of some HRTF I found online that worked best for my head. The subject had a massive difference between left and right in some part of the response that I could totally notice, so I just used the left-ear values for both ears (well, mirrored).

The end result is no Realiser A16, but it's by far the best crossfeed I've had, and I've tested MANY. Customization is unavoidable to get better results, at least for poor me with my non-average head.
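For readers unfamiliar with the technique, the "true stereo" convolution described here can be sketched as follows (a minimal illustration; the toy 3-tap impulse responses are made up, while real HRIRs are hundreds of samples long and would be convolved via FFT):

```python
def convolve(x, h):
    """Direct discrete convolution (fine for a sketch; use FFT convolution
    for real-time audio)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def true_stereo(left, right, h_same, h_opp):
    """Each output ear hears both channels: its own channel through the
    same-side impulse response and the other channel through the
    opposite-side one (symmetric HRIRs assumed, as when one ear's measured
    response is mirrored to both sides)."""
    l_out = [a + b for a, b in zip(convolve(left, h_same), convolve(right, h_opp))]
    r_out = [a + b for a, b in zip(convolve(right, h_same), convolve(left, h_opp))]
    return l_out, r_out

# Toy impulse responses: the opposite side arrives later and attenuated
h_same = [1.0, 0.0, 0.0]
h_opp  = [0.0, 0.0, 0.3]           # ~2-sample delay, about -10.5 dB
l_out, r_out = true_stereo([1.0, 0.0, 0.0, 0.0], [0.0] * 4, h_same, h_opp)
print(l_out)  # → [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(r_out)  # → [0.0, 0.0, 0.3, 0.0, 0.0, 0.0]
```

With real HRIR pairs, the delay and attenuation become frequency-dependent, which is exactly what distinguishes this approach from the simple filtered crossfeeds discussed earlier in the thread.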


----------



## 71 dB

pinnahertz said:


> [1] This is likely true, but a temporary condition.  Making up a new term like "spatial deafness" is alarmist. It already has a name: *Auditory Adaptation.*
> 
> [2] Personally, having tried listening with various types of crossfeed, and for extended periods, my personal experience is that only very few recordings sound better that way, and going back to no crossfeed on a recording that doesn't benefit from it is always an improvement, regardless of time spent listening, though eventually whatever I'm listening to becomes natural because of Auditory Adaptation.
> 
> ...


[1] Spatial deafness is simply a subgenre of Auditory Adaptation.

[2] I really don't get this. To me recordings that work best without crossfeed are few and far between.

[3] I don't underestimate room modes! I have done research work on how to pre-filter a signal before it's fed to the speakers to reduce room modes. In pathological cases I suppose it's possible to have ILD bigger than 3 dB at a certain frequency and place in the room, and at a reduced SPL, masked by other frequencies.

[4] It wasn't rare at all in 1958! Even the newest compressed "headphone-friendly" pop has bass with ILD > 3 dB.

[5] You are clearly a person who knows a lot and has done a lot for decades, but for some reason you have these strange fights against me. The science is behind me. Larger ILD means the sound source is closer to one ear. It's not only self-evident, but measured HRTFs show it clearly. I really don't know what's wrong with you. 

[6]
1. Yes. However, a few decibels of ILD sounds more lively imo so I don't always go for mono bass.

2. Room modes typically create 10-15 dB peaks and even deeper dips. However, most of the time this doesn't affect ILD much, and when it does, it's at a low level, masked, and possibly below the hearing threshold anyway. Usually the modes become dense enough to transform into reverberation by 200 Hz, where the wavelength is about 2 m. You are splitting hairs. Yes, what I said doesn't work in every possible pathological situation, but in general it does, and those pathological situations are called "very bad acoustics." People tend to fix them, at least those who care about fidelity. Often all it takes is moving your speakers or chair a foot to make a difference.

3. Yes, when we talk about the quality of bass. I was talking about ILD only. The quality aspect does indeed support monophonic or near-monophonic bass. Reducing the ILD at bass in a recording, say from 10 dB to 3 dB, is a big step toward that.
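The 10 dB → 3 dB figure can be checked with a little arithmetic (a sketch assuming in-phase summation of the two channels, which is a reasonable approximation at bass wavelengths; the -6 dB crossfeed gain is my illustrative value):

```python
import math

def ild_after_crossfeed(ild_db, g):
    """ILD after adding a gain-g copy of the opposite channel to each side.
    Assumes in-phase (coherent) summation, a low-frequency approximation."""
    r = 10 ** (-ild_db / 20)          # quieter channel amplitude (louder = 1.0)
    return 20 * math.log10((1 + g * r) / (r + g))

# A crossfeed gain of about -6 dB takes 10 dB of ILD down to roughly 3 dB
g = 10 ** (-6 / 20)                   # ≈ 0.50
print(f"{ild_after_crossfeed(10, g):.2f} dB")
```

Two sanity checks on the formula: with g = 0 (no crossfeed) the ILD is unchanged, and with g = 1 (full mix to mono) it collapses to 0 dB.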

---------------------------------------

Crossfeeders don't serve coffee. They don't fix everything in the sound. They fix things related to excessive stereo separation and that's it. Why do I even need to say this?


----------



## 71 dB

bigshot said:


> I've had people tell me that it's impossible to fill a good sized room with bass with just one sub. But I'm doing just that in my room. The sub is a little to the right toed in a bit, but the bass fills the whole room evenly, and even bass solos on the left are clearly placed. It may be the power of my Sunfire sub, or some quirk of the shape of the room, I don't know enough about bass acoustics to figure out why. But I know enough about acoustics to not mess with a good thing!


Since subwoofers typically output only the lowest bass, we have problems with modes only in larger rooms; a small room behaves more like a pressure chamber, and it's easier to find a good placement for the sub. In larger rooms, more subs help in getting a flatter response and of course more SPL to fill the room.


----------



## 71 dB

castleofargh said:


> Here is what I use in so-called "true stereo" for the convolution. Well, I flatten the low end (because I like it like that and it doesn't really impact localization) and apply EQ for whatever headphone/IEM I use, but that's it otherwise. It's based on the 30° impulse of some HRTF I found online that worked best for my head. The subject had a massive difference between left and right in some part of the response that I could totally notice, so I just used the left-ear values for both ears (well, mirrored).
> 
> The end result is no Realiser A16, but it's by far the best crossfeed I've had, and I've tested MANY. Customization is unavoidable to get better results, at least for poor me with my non-average head.


What is the frequency resolution/FFT size? Which window function? Below 200 Hz it looks like junk due to the resolution limit/windowing.
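For a rough sense of the numbers behind that resolution limit (the 44.1 kHz sample rate and 4096-point FFT below are illustrative assumptions, not castleofargh's actual settings):

```python
def fft_resolution_hz(sample_rate: float, fft_size: int) -> float:
    """Spacing between adjacent FFT bins in Hz: delta_f = fs / N."""
    return sample_rate / fft_size

def bins_at_or_below(freq_hz: float, sample_rate: float, fft_size: int) -> int:
    """How many FFT bins fall at or below freq_hz."""
    return int(freq_hz / fft_resolution_hz(sample_rate, fft_size))

# A 4096-point FFT at 44.1 kHz gives ~10.8 Hz per bin,
# so only ~18 bins describe everything below 200 Hz.
res = fft_resolution_hz(44100, 4096)
low = bins_at_or_below(200, 44100, 4096)
```

With so few bins, plus the smearing of whatever window was applied, the low end of a measured HRTF is effectively unresolved.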


----------



## jgazal (Dec 2, 2017)

pinnahertz said:


> Except...you've ignored how recordings are made and mixed.  A hard-panned, acoustically or electronically "dry" sound is indeed very rare, sort of an "effect" rather than a normal mix technique.  As soon as you put ambience around the sound and in the opposite channel that close perspective is mitigated.





71 dB said:


> [4] Wasn't rare at all in 1958! Even the newest compressed "headphone-friendly" pop has ILD > 3 dB bass.
> 
> [5] You are clearly a person, who knows a lot and has done a lot for decades, but for some reason you have these strange fights against me. The science is behind me. Larger ILD means the sound source is close to the other ear. It's not only self-evident, but measured HRTFs show it clearly. I really don't know what's wrong with you.
> 
> Crossfeeders don't serve coffee. They don't fix everything in the sound. They fix things related to excessive stereo separation and that's it. Why do I even need to say this?



I don’t think he is trying to fight.

I believe he is trying to figure out what, in your opinion, is the percentage of recordings with unnatural ILD.

So please, do you mind describing which recordings from 1958 you are referring to and how they were mixed?

To have a numerical perspective, just forget the algorithms that choose music according to music preference and past choices and tell us: if anybody plays 100 tracks chosen randomly, how many, in your opinion, have unnatural ILD?

You write about crossfeed as if they (recordings with unnatural ILD) were the majority.

To put this into perspective, here is Professor Choueiri's opinion, which seems to be the opposite, in other words, that such recordings are the minority:



> *13 Is the 3D realism of BACCH™ 3D Sound the same with all types of stereo recordings?*
> 
> The stereophonic recording technique that is most accurate at spatially representing an acoustic sound field is, incontestably, the so-called “binaural” recording method, which uses a dummy head with high-quality microphones in its ears. Until the recent advent of BACCH™ 3D Sound, the only way for an audiophile to experience the spectacular 3D realism of binaural audio was through headphones. Many such recordings exist commercially, and more have recently been made thanks to the iPod revolution.
> 
> ...


----------



## 71 dB

jgazal said:


> I believe he is trying to figure out what, in your opinion, is the percentage of recordings with unnatural ILD.



He should know that by now. I have given the "magic" number 98 %. 



jgazal said:


> So please, do you mind to describe which recordings from 1958 are you referring to and how they were mixed?



A lot of recordings of that era were recorded hard panned, so that "half" of the instruments were on the left and "half" on the right, and perhaps some in the middle. Example: *Dave Brubeck*: _Jazz Impressions of Eurasia_.





jgazal said:


> To have a numerical perspective, just forget the algorithms that choose music according to music preference and past choices and tell us: if anybody plays 100 tracks chosen randomly, how many, in your opinion, have unnatural ILD?


98

...


----------



## jgazal (Dec 2, 2017)

71 dB said:


> He should know that by now. I have given the "magic" number *98 %*.



Then pinnahertz and Professor Choueiri are in disagreement with you. 



71 dB said:


> A lot of recordings of that era were recorded hard panned, so that "half" of the instruments were on the left and "half" on the right, and perhaps some in the middle. Example: *Dave Brubeck*: _Jazz Impressions of Eurasia_.




Anyway, thank you very much for that recording! Very cool!


----------



## bigshot

71 dB said:


> Since subwoofers typically output only the lowest bass, we have problems with modes mainly in larger rooms; a small room behaves more like a pressure chamber, and it's easier to find a good placement for the sub. In larger rooms, more subs help in getting a flatter response and of course more SPL to fill the room.



My room has a bar straight back off the end of it and a bathroom at a right angle to that. It's shaped sort of like an exponential horn. I find that there is a huge bass peak in the shower stall! The arrangement of the furniture in the room, the construction of the walls / position of the bookcases, and the high peaked ceiling seem to discourage primary reflections. Most of the reflections came off the concrete slab floor, but a thick oriental rug took care of that problem.


----------



## 71 dB

jgazal said:


> Then pinnahertz and Professor Choueiri are in disagreement with you.


Mr. pinnahertz maybe. I don't know about Professor Choueiri, who is talking about _speaker_ audio and how to get "headphone binaural" sound with speakers using "anti-crossfeed", i.e. crosstalk cancellation. That's completely different from what I'm talking about.

Nearly all recordings are produced for speakers, so it's no wonder they work just fine with speakers, but when you listen to them with headphones, without the support of acoustic or electronic crossfeed, the ILD/ITD problems show themselves.



jgazal said:


> Anyway, thank you very much for that recording! Very cool!



I'm glad you like it. Cool recording indeed.


----------



## RRod

Has there ever been an experiment done where a binaural recording was compared with a normal mic-ed/speaker-ed mix of the same material recorded on the same dummy head? Would seem like something useful for quantifying the ILD/ITD/spectral errors, at least for the head facing straight forward.


----------



## jgazal (Dec 2, 2017)

RRod said:


> Has there ever been an experiment done where a binaural recording was compared with a normal mic-ed/speaker-ed mix of the same material recorded on the same dummy head? Would seem like something useful for quantifying the ILD/ITD/spectral errors, at least for the head facing straight forward.



Very interesting question!

Do you mean:

1. Five speakers in a row of 2 meters parallel to the coronal plane of the dummy head.
2. Five sweeps/chirps from 20 Hz to 20 kHz are played one after another in each speaker and recorded by the dummy head (binaural master reference audio file).
3. Five sweeps/chirps from 20 Hz to 20 kHz are played one after another in each speaker and recorded by an ORTF microphone pair (ORTF master reference audio file).
4. The two farthest speakers in the same row now play the “binaural master reference audio file”, which is again recorded by the dummy head (playback acoustic crosstalk corrupted audio file A).
5. The two farthest speakers in the same row now play the “ORTF master reference audio file”, which is again recorded by the dummy head (playback acoustic crosstalk corrupted audio file B).
6. Compare playback acoustic crosstalk corrupted audio files A and B?

Dr. Choueiri might have done it, but I never saw any paper about such an experiment. Theoretically they already know the ILD/ITD/spectral cues from the dummy head itself (its HRTF is certainly in the research HRTF database), and recording engineers also know the ILD/ITD of an ORTF pair (it might be useful to place a foam disc between the mics).

I just can’t find graphs on the internet to compare such chains/paths.


----------



## pinnahertz (Dec 2, 2017)

71 dB said:


> [1] Spatial deafness is simply a subgenre of Auditory Adaptation.


Please cite a reference (other than your own).


71 dB said:


> [2] I really don't get this. To me recordings that work best without crossfeed are few and far between.


You are one opinion, and you have never once referenced anything that shows your opinion is widely held, or even held by anyone but you.  If crossfeed were so good, so essential, such a key function, then why hasn't it been standard on even 1% of all music players since the original Sony Soundabout (Walkman) of 1979?  It certainly could have been done, and at very low cost.  But it wasn't, hasn't been, and still isn't.  "Loudness Compensation" had far better market penetration, and that didn't work well either!


71 dB said:


> [3] I don't underestimate room modes! I have done research work how to pre-filter signal before it's fed to the speakers to reduce room modes. In pathological cases I suppose it's possible to have ILD bigger than 3 dB, at a certain frequency and place in the room and at reduced SPL level masked by other frequencies.


You can't reduce room modes with a pre-filter, and I'm sure you know that.  You can only reduce the results of room modes.  Room modes can only be reduced by physical means.


71 dB said:


> [4] Wasn't rare at all in 1958! Even the newest compressed "headphone-friendly" pop has ILD > 3 dB bass.


And exactly how many stereo recordings were released in 1958?  What percentage of the total releases ever does that make up?  Wouldn't it be reasonable to expect early stereo mixes to have a few issues until we learned how to handle the new medium?  You've "cherry-picked" an example, which is thus meaningless.



71 dB said:


> [5] You are clearly a person, who knows a lot and has done a lot for decades, but for some reason you have these strange fights against me. The science is behind me.


We have these "strange fights" because your science doesn't apply to reality, it's specifically targeting contrived conditions.

However, my objections are, and have been:

1. Promoting headphone cross-feed as if it were a "compensation" for some form of distortion, and as if it were a universal solution.  Such is not the case.  The application of cross-feed is highly generalized at best, inappropriate at worst, and never compensates properly for any speaker-mix condition because you have no idea what the intentions were in the first place, nor the precise monitoring conditions used in the mix.  You advance it as if it were a complementary equalizer (like RIAA) when it is at very best a coarse approximation based on uneducated assumptions.

2. You authenticate your "science" with your own passionate opinion, but offer no statistical evidence of listener preference for crossfeed.

3. Your opinions are (still!) stated as immutable fact.  And when others express their opinions you view them as personal attacks.

4. Your "science" is tightly targeted at a narrow set of conditions, and ignores the facts that actual mixes are as much art as science.  There's no "correcting" for "art".


71 dB said:


> Larger ILD means the sound source is close to the other ear. It's not only self-evident, but measured HRTFs show it clearly. I really don't know what's wrong with you.


No it does not.  ILD is not the sole proximity cue!  Your HRTF model may work in an anechoic space, but doesn't account for a real room of any size or reflective nature.  In other words, your HRTF alone doesn't model spatial hearing in real life.


71 dB said:


> [6]
> 1. Yes. However, *a few decibels of ILD sounds more lively* *imo* so I don't always go for mono bass.


Everyone please note the emphasized text.  It is exactly what it says: opinion.  That means there will be other conflicting opinions!  You want fact? Collect statistics, don't quote your own opinion!  The bulk of modern music producers and engineers disagree with you.


71 dB said:


> 2. Room modes typically create 10-15 dB peaks and even deeper dips. However, most of the time this doesn't affect ILD much, and when it does, it is at low level, masked and possibly below the hearing threshold anyway. Usually the modes become dense enough to transform into reverberation below 200 Hz, where the wavelength is about 2 m. You are splitting hairs.


No, not splitting hairs at all.  In fact, you are correct that modes create 10-15 dB peaks, and dips as deep as 30 dB or more.  You are incorrect in that reverberation below 200 Hz causes modes to become denser.  Simply not true, and if you'd measure a few small rooms, you'd know that.  Those deep dips are not correctable because room EQ systems must limit gain to only a few dB (Audyssey's gain limit is 9 dB, for example) because of what that kind of gain does to amplifier power requirements and speaker power handling.  If you'd bother to do a few real-room measurements you'd see radical dips that are very, very location specific.  In fact, that's why room EQ can only be properly done by averaging many measurement points.  However, ears sit in single positions, and therefore are subject to some rather deep frequency specific notches.
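The amplifier-power arithmetic behind such gain limits is simple dB-to-power conversion (a generic calculation, not Audyssey's documented internals):

```python
def power_ratio(gain_db: float) -> float:
    """Extra amplifier power needed for a given EQ boost: 10^(dB/10)."""
    return 10 ** (gain_db / 10)

# A 9 dB boost needs ~8x the amplifier power;
# filling a 30 dB modal dip would need 1000x.
print(round(power_ratio(9), 1), power_ratio(30))  # 7.9 1000.0
```

This is why EQ can tame peaks (cuts cost nothing) but cannot fill deep dips: the required boost is simply not available.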


71 dB said:


> Yes, what I said doesn't work in every possible pathological situation, but in general it does, and those pathological situations are called "very bad acoustics." People tend to fix them, at least those who care about fidelity. Often all it takes is moving your speakers or chair a foot to make a difference.


You really should get out and measure like two dozen rooms and see what reality is like.  No, those are not pathological situations, they are typical of single-sub rooms.  Yes, we do try to fix them with treatment and multiple subs, but if you're working with two-speaker stereo, you cannot move your chair much!  Sub-200 Hz dips occupy real estate, and your chair can only be moved along the center-line.  If you move a speaker, you'll also move the center line, but you can't move a speaker enough to mitigate room mode dips at the optimum LP! Sorry, experience shows the grim reality.  And that's just another reason why multiple-sub, multichannel audio wins hands down.


71 dB said:


> 3. Yes, when we talk about the quality of bass. I was talking about ILD only. The quality aspect does indeed support monophonic or near-monophonic bass. Reducing the ILD at bass in a recording say from 10 dB to 3 dB is a big step toward that.


But most bass is mixed with 0 dB channel difference (center), and with good reason.  There is absolutely no performance or perspective advantage to mixing bass 3 dB off center in stereo.


71 dB said:


> ---------------------------------------
> 
> Crossfeeders don't serve coffee. They don't fix everything in the sound. They fix things related to excessive stereo separation and that's it. Why do I even need to say this?


Cross-feed _changes stereo separation_, that's it.  Whether or not that's an improvement (a "fix") is _*entirely subjective*_, and varies with every recording from improvement all the way to detriment.

You have presented _no data to support the premise that cross-feed is either universally desired, or perceived as an improvement at all_, much less that it is over the (claimed) majority of recordings.  None.  Only one person's opinion: yours.

Why do I need to say _*that?*_


----------



## pinnahertz

RRod said:


> Has there ever been an experiment done where a binaural recording was compared with a normal mic-ed/speaker-ed mix of the same material recorded on the same dummy head? Would seem like something useful for quantifying the ILD/ITD/spectral errors, at least for the head facing straight forward.


Sure, there are a handful of recordings made simultaneously using both methods, but they're hard to compare since binaural fails on speakers.


----------



## pinnahertz

71 dB said:


> Since subwoofers typically output only the lowest bass, we have problems with modes mainly in larger rooms; a small room behaves more like a pressure chamber, and it's easier to find a good placement for the sub. In larger rooms, more subs help in getting a flatter response and of course more SPL to fill the room.


Define larger vs smaller.


----------



## jgazal (Dec 2, 2017)

pinnahertz said:


> Sure, there are a handful of recordings made simultaneously using both methods, but they're hard to compare since binaural fails on speakers.



Yes, that would imply a subjective assessment of both the musical recording and the subsequent filtering in each path.

Now I am curious to know what filter Chesky Records uses to make their binaural recordings compatible with regular speaker playback (with acoustic crosstalk).

It cannot be an inverse HRTF, otherwise it would ruin the 3D effect with crosstalk cancellation. Maybe some spectral smoothing, since elevation is not critical?

I am sure Professor Choueiri would suggest applying his BACCH filter within the master, but then the listener still needs to stay still:



> *20 Can BACCH™ 3D Sound be experienced without the BACCH™ 3D Sound Processor?*
> Yes. If a stereo signal is filtered through a BACCH™ 3D Sound processor and recorded it becomes a BACCH™ 3D Sound recording and does not require playback through a BACCH™ 3D Sound Processor. It can then be played back on any normal stereo system and can be heard in 3D with no special hardware or processing. (Such pre-processed BACCH™ 3D Sound recordings are generally made with non-customized (universal) u-BACCH filters in order to make them compatible with all stereo playback systems.)
> 
> This feature is piquing the interest of a number of leading recording and mixing engineers, and recording labels, who are interested in making new audio recordings in 3D or reissuing existing stereo recordings in 3D. The consumer can play these recordings in 3D on a regular stereo system without any specialized equipment.
> ...


----------



## pinnahertz

71 dB said:


> He should know that by now. I have given the "magic" number 98 %.


I do know that, but you've never backed that number up with data.  It's just your opinion, that's a statistic of 1. Do you realize what the margin of error on that is?


71 dB said:


> A lot of recordings of that era were recorded hard panned, so that "half" of the instruments were on the left and "half" on the right, and perhaps some in the middle. Example: *Dave Brubeck*: _Jazz Impressions of Eurasia_.


That's not a "lot", that's one.  How many stereo recordings were released in 1958?  What percentage of the total of all stereo recordings is that?


----------



## pinnahertz

jgazal said:


> Yes, that would imply a subjective assessment of both the musical recording and the subsequent filtering in each path.
> 
> Now I am curious to know what filter Chesky Records uses to make their binaural recordings compatible with regular speakers playback (with acoustic crosstalk).
> 
> ...


Did you read _*this*_?


----------



## jgazal (Dec 2, 2017)

pinnahertz said:


> Did you read _*this*_?



Thank you very much as usual, @pinnahertz!



> Some previous reporting has seemed to indicate the "+" is related to filters developed by Professor Edgar Choueiri of the 3-D Audio and Applied Acoustics (3D3A) Laboratory at Princeton University.
> This is not the case. The "+" indicates that the EQ changes due to the pinna effects on the tonal character imparted on the sound has been restored to neutral EQ using carefully chosen compensation curves.
> Read more at https://www.innerfidelity.com/conte...dphone-demonstration-disc#IzEmD8iz8lEHGfSo.99


----------



## 71 dB

pinnahertz said:


> Please cite a reference (other than your own).



No, I don't. I'm not here to do what you tell me to do. If you don't take my opinions seriously then don't. I'm beginning to think you are a tormentor. I have every right to come up with my own new terminology. For normal people, explaining the meaning is enough. Not for you, apparently.



pinnahertz said:


> You are one opinion, and you have never once referenced anything that shows your opinion is widely held, or even held by anyone but you.  If crossfeed were so good, so essential, such a key function, then why hasn't it been standard on even 1% of all music players since the original Sony Soundabout (Walkman) of 1979?  It certainly could have been done and very low cost.  But it wasn't, hasn't, and isn't still.  "Loudness Compensation" had a far better market penetration, and that didn't work well either!



That's the damn problem! Products aren't as good as they could be, because customers are ignorant and manufacturers offer what customers want, so customers become even more ignorant and the spiral into stupidity is complete. Crossfeed should have been EVERYWHERE from day one. Even I was totally ignorant of crossfeed until 2012, when I was 41! People need to be educated!

You are delusional if you think most recordings have adequate ILD for headphones. They are mixed for speakers, not headphones!



pinnahertz said:


> You can't reduce room modes with pre-filter, and I'm sure you know that.  You can only reduce the results of room modes.  Room modes can only be reduced with physical means.



Does a room mode exist in silence? It is a philosophical question… anyway, the result is what matters.



pinnahertz said:


> And exactly how many stereo recordings were released in 1958?  What percentage of the total releases ever does that make up?  Wouldn't be reasonable to expect early stereo mixes to have a few issues until we learned how to handle the new medium?  You've "cherry-picked" an example, which is thus meaningless.



If you think they were stunning binaural recordings, you are delusional. The thing is, it's not only 1958. King Crimson's 70s stuff has insane separation too. Tangerine Dream too. No matter what it is, I need crossfeed. Old, new, whatever. That's how it is. Not cherry-picking at all. This stuff was made for speakers. Miles Davis, Gato Barbieri, Herbie Hancock. Rose Royce. Carly Simon. Always too much ILD for headphones. Of my 1500 CDs, maybe 3 dozen work best without crossfeed.



pinnahertz said:


> We have these "strange fights" because your science doesn't apply to reality, it's specifically targeting contrived conditions.


Seems to apply to reality pretty well, because it is created _based_ on observations of reality.



pinnahertz said:


> However, my objects are, and have been:
> 
> 1. Promoting headphone cross-feed as if it were a "compensation" for some form of distortion, and as if it were a universal solution.  Such is not the case.  The application of cross-feed is highly generalized at best, inappropriate at least, and never compensates properly for any speaker-mix condition because you have no idea what the intentions were in the first place, nor the precise monitoring conditions used in the mix.  You advance it as if it were a complimentary equalizer (like RIAA) when it is at very best a coarse approximation based on uneducated assumptions.
> 
> ...



1. Yes, normal crossfeed is an approximation of the acoustic crossfeed that happens with speakers. An acoustic crossfeeder isn't "perfect" either, unless you listen to the recording in the same studio it was mixed in. We have to make do with less-than-perfect solutions in real life. As such, crossfeed is amazingly good at reducing ILD/ITD and making headphone sound more natural and less fatiguing. Crossfeed doesn't give 100 % of what was intended in the studio, but it gives a close-enough version of it that is pleasant and enjoyable. That is what counts.

2. Ask Andy Linkner (Andolink). I built him a crossfeeder in 2014. Also, if you read discussion boards about crossfeed you'll see how many people use crossfeed "all the time", just like me. So your attempt to make me look like a singular fool fails.

3. My "facts" may not be 100 % correct (understanding is refined every day), but I have put so much effort into this that it would be really strange if I were wrong. It would be like saying Einstein didn't understand relativity. I take your posts as personal attacks because instead of demonstrating my faults you try to discredit me. You have managed to correct me once (vinyl stylus movements and L+R, L-R). I really was mistaken that time and remembered the directions wrong. Mostly you just point out that your opinions differ from mine. I don't know about you, but I try to build my opinions on scientifically sound premises and fine-tune them if needed.

4. People correct "art" with acoustic crossfeed all the time and nobody cares. Room, speakers and listening position form a complex acoustic system, much more unpredictable than headphone crossfeed.
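A minimal sketch of what such an electronic crossfeeder does (my illustration in plain Python; the 700 Hz cutoff, 0.5 ms delay and -12 dB feed level are assumed values in the spirit of bs2b-style filters, not any particular product):

```python
import math

def one_pole_lowpass(x, cutoff_hz, fs):
    """Simple one-pole IIR lowpass filter."""
    a = math.exp(-2 * math.pi * cutoff_hz / fs)
    y, prev = [], 0.0
    for s in x:
        prev = (1 - a) * s + a * prev
        y.append(prev)
    return y

def crossfeed(left, right, fs=44100, cutoff_hz=700, delay_ms=0.5, feed_db=-12.0):
    """Mix a delayed, lowpass-filtered copy of the opposite channel into each channel."""
    g = 10 ** (feed_db / 20)                 # linear feed gain
    d = int(round(delay_ms * 1e-3 * fs))     # delay in samples
    fl = one_pole_lowpass(left, cutoff_hz, fs)
    fr = one_pole_lowpass(right, cutoff_hz, fs)
    pad = [0.0] * d
    dl = pad + fl[:len(fl) - d]              # delayed, filtered left
    dr = pad + fr[:len(fr) - d]              # delayed, filtered right
    out_l = [s + g * o for s, o in zip(left, dr)]
    out_r = [s + g * o for s, o in zip(right, dl)]
    return out_l, out_r
```

Feeding the delayed, filtered opposite channel at -12 dB is exactly the ILD/ITD reduction being argued about here; raising the feed level trades separation for a more "speaker-like" blend.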


----------



## 71 dB

pinnahertz said:


> 1. No it does not.  ILD is not the sole proximity cue!  Your HRTF model may work in an anechoic space, but doesn't account for a real room of any size or reflective nature.  In other words, your HRTF alone doesn't model spatial hearing in real life.
> 
> 2. Everyone please note the emphasized text.  It is exactly what it says: opinion.  That means there will be other conflicting opinions!  You want fact? Collect statistics, don't quote your own opinion!  The bulk of modern music producers and engineers disagree with you.
> 
> ...



1. Of course not, but it's the cue that most of the time goes wrong in headphone listening so it's the one that we want to fix with crossfeed.

2. You are the one who wanted me to use that word. Mono bass is easy: you just make your bass tracks mono. Some pop music has mono bass, some has large ILD, so it's up to the producer. I tend to have some ILD on my bass when making music, depending on how lively or calm I want the bass to be. That is my opinion. You have yours.

3. Standing waves become denser with frequency, and in a typical room they are so dense that the modes form reverberation around 150-200 Hz. Also, at higher frequencies the peaks and dips aren't that strong, because the acoustic losses increase.






4. Yes, the reality is bad. I agree. Of course multichannel wins. 

5. Most bass? That's not my experience. Some producers, such as Dr. Luke, do favor mono bass, but not all producers do. In a room you do have some ILD at bass. You even used room modes to demonstrate greater than 3 dB ILD, so I can see advantages to mixing bass at 3 dB to get some sense of acoustic environment. In free field, about 5 dB ILD at bass is all you get for sounds not very close to the head. In a room you have a diffuse sound field, so the reasonable level of ILD is between 0 and 5 dB, and that's where 3 dB shows up. You can have a little less or a little more, but that's your "default" level. I am really done explaining this. If you continue not getting my point, I have no choice but to consider you dumb.
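The 150-200 Hz transition mentioned in point 3 is consistent with the Schroeder frequency, the conventional crossover between modal and reverberant room behavior (the room volume and RT60 below are illustrative assumptions):

```python
import math

def schroeder_frequency(rt60_s: float, volume_m3: float) -> float:
    """Schroeder frequency in Hz: f_s = 2000 * sqrt(RT60 / V)."""
    return 2000 * math.sqrt(rt60_s / volume_m3)

# A typical living room, ~50 m^3 with an RT60 of 0.4 s:
print(round(schroeder_frequency(0.4, 50)))  # ~179 Hz
```

Below this frequency individual modes dominate; above it the modes overlap densely enough to behave statistically, as reverberation.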



pinnahertz said:


> Cross-feed _changes stereo separation_, that's it.  Whether or not that's an improvement (a "fix") is _*entirely subjective*_, and varies with every recording from improvement all the way to detriment.



Have I said otherwise? It's the same kind of improvement that acoustic crossfeed provides. That also changes stereo separation, so that a ping-pong stereo recording with huge separation becomes something with much smaller separation at our ears. The biology of our hearing suggests that reducing separation to natural levels should be subjectively an improvement, but of course individuals can hold twisted conceptions, because we are human beings, not machines.



pinnahertz said:


> You have presented _no data to support the premise that cross-feed is either universally desired, or perceived as an improvement at all_, much less that it is over the (claimed) majority of recordings.  None.  Only one person (your) opinion.



Crossfeed is not universally desired because people are too ignorant to understand it or even know about it. People are pretty mindless consumers, brainwashed by global companies to buy their products. It's easier to sell Fidget Spinners than crossfeeders. What problem does a Fidget Spinner fix? People like you make sure crossfeed will never become popular. There is no hope. Thanks a lot.


----------



## 71 dB

pinnahertz said:


> 1. I do know that, but you've never backed that number up with data.  It's just your opinion, that's a statistic of 1. Do you realize what the margin of error on that is?
> 
> 2. That's not a "lot", that's One.  How many stereo recordings were released in 1958?  What percentage of the total of all stereo recordings is that?



1. I read that number online somewhere and have no reason to disagree, so it's a statistic of 2. Your own numbers are a statistic of 1. I win. Seriously, this is silly, but you want to play this game. It doesn't matter how many people agree with me or you. 7 billion people can be wrong, and in many things are. Sometimes you are right alone, before the rest of the world catches up.

2. What difference would it make to list all of them? Everyone except you knows that stereo separation was often huge back then, because stereo was new and the stereo effect was maximized in a naive manner, just like with all new technology at first. Do you think they thought about headphones in 1958? No, they did not. All they thought about was boosting sales with stereo sound. Not much different from high-res downloads today.


----------



## jgazal (Dec 3, 2017)

71 dB said:


> No, I don't. I'm not here to do what you tell me to do. If you don't take my opinions serioustly then don't. I'm beginning to think you are a tormentor. I have every right to come up with my own new terminology. For normal people explaining the meaning is enough. Not to you apparently.



He does that because we are in the Sound Science forum, and usually most descriptors were already agreed upon by previous researchers, so that newcomers don't need to reinvent the wheel.



71 dB said:


> Mr. pinnaherz maybe. Don't know about Professor Choueiri, who is talking about _speaker_ audio and how to get "headphone binaural" sound with speakers using "anti-crossfeed", crosstalk canceling. That's completely different from what I'm talking about.





71 dB said:


> You are delusional if you think most recordings have adequate ILD for headphones. They are mixed for speakers, not headphones!
> 
> 1. Yes, normal crossfeed is an approximation of the acoustic crossfeed that happens with speakers. Acoustic crossfeeder isn't "perfect" either unless you listen to the recording in the same studio it was mixed in. We have to do with less than perfect solutions in real life. As such, crossfeed is amazingly good in reducing ILD/ITD and making headphone sound more natural and less fatiqueing. Crossfeed doesn't give 100 % what was intended in the studio, but it gives close-enough version of it that is pleasant and enjoyable. That is what counts.



I am failing to see why crosstalk is bad for speakers and good for headphones.

With speakers you add your own HRTF and the real-time filtering effect of your head movement. Professor Choueiri goes further and eliminates crosstalk.

With headphones you lose the head and torso components of your HRTF and add some filtering from the headphones themselves.

But adding crossfeed does not bring back your HRTF or the real-time filtering effect of your head movement. And most of the time the ILDs for each mixed instrument/track are already blended in the master.



71 dB said:


> Have I said otherwise? It's the same kind of improvement that acoustic crossfeed does. It also changes stereo separation so that a ping pong stereo recording with huge separation becomes something with much smaller separation in our ears. The biology of our hearing suggests that reducing separation to natural levels should be subjectively an improvement, but of course individuals can hold twisted conceptions, because we are human beings, not machines.
> 
> Crossfeed is not universally desired because people are too ignorant to understand it or even know about it. (...) People like you make sure crossfeed will never become popular. There is no hope. Thanks a lot.





71 dB said:


> All people except you know that the stereo separation was often huge back then, because stereo was new and the stereo effect was maximized in a naive manner just like all new technology at first. Do you think they thought about headphones in 1958? No, they did not. All they thought about was to boost sales with stereo sound. Not much different from high-res downloads today.



I see your point with “ping pong stereo recording” (do you really believe that 98% of musical recordings available nowadays have such strong stereo separation?).

But perhaps your emotional involvement with your previous efforts with crossfeed, hard work that deserves our highest respect, may have given you a bias to extrapolate its supposed improvement to 98% of stereo recordings.

I see that your desire is to further increase the crossfeed even with modern mixes, but I still fail to see why that would be beneficial instead of neutral or even detrimental.

So I guess pinnahertz is not fighting you or personally attacking you, but just trying to do the same thing you believe you are doing, which is educating consumers.

The beauty is that I cannot know for sure who is right, because there are too many variables. But every time you both argue I see a new subtlety I was failing to see previously. So don't feel bad about people disagreeing.


----------



## RRod (Dec 3, 2017)

jgazal said:


> Very interesting question!
> 
> Do you mean:
> 
> ...



Sure, that's more ascetic than the example in my mind but would be better for generalizable results. But such comparisons against several 'common' mic-ing (ortf, decca) and playback (2ch, headphone, 5.1) setups would be very interesting.


----------



## pinnahertz

71 dB said:


> No, I don't. I'm not here to do what you tell me to do. If you don't take my opinions seriously then don't. I'm beginning to think you are a tormentor. I have every right to come up with my own new terminology. For normal people explaining the meaning is enough. Not to you apparently.


You said, "[1] Spatial deafness is simply a subgenre of Auditory Adaptation." Where's the backup for that statement?  None?  OK, then "spatial deafness" isn't a real term.  You made it up.


71 dB said:


> That's the damn problem! Products aren't as good as they could be, because customers are ignorant and manufacturers offer what customers want and customers become even more ignorant and the spiral into stupidity is ready. Crossfeed should have been EVERYWHERE from day one. Even I was totally ignorant of crossfeed until 2012 when I was 41! People need to be educated!


Marketing 101: if it seems better, and costs nothing, it's a "feature" and raises value; you include it to differentiate your product and sell more.  Cross-feed is a very old idea, but it's never surfaced in products at all.  


71 dB said:


> You are delusional if you think most recordings have adequate ILD for headphones. They are mixed for speakers, not headphones!


Yeah. Me and, um...how many others?  I think the statement works both ways.


71 dB said:


> Does a room mode exist in silence? It is a philosophical question… …anyway, result is what matters.


No.  Silly question. 


71 dB said:


> 1. If you think they were stunning binaural recordings you are delusional. 2. The thing is it's not only 1958. King Crimson's 70's stuff has insane separation too. Tangerine Dream too. *No matter what it is I need crossfeed.* Old, new, whatever. That's how it is. Not cherry picking at all. This stuff was made for speakers. Miles Davis, Gato Barbieri, Herbie Hancock. Rose Royce. Carly Simon. Always too much ILD for headphones. *Of my 1500 CDs maybe 3 dozen work best without crossfeed.*


1. I don't think they are binaural recordings at all.

2. *That is your opinion!* You leave no room for anyone else's opinion, and project yours onto everyone.  I would check my attitude and see if it allows for and respects others' opinions.   This is a big key to "our problem". 


71 dB said:


> Seems to apply to reality pretty well, because it is created _based_ on observations of reality.


Clearly someone else sees it differently.  Does your humility allow for that difference?


71 dB said:


> 1. Yes, normal crossfeed is an approximation of the acoustic crossfeed that happens with speakers.


"Approximation"...this is a very broad and liberal use of that term, if we are still talking about your flavor of cross-feed.


71 dB said:


> Acoustic crossfeeder isn't "perfect" either unless you listen to the recording in the same studio it was mixed in.


...at which point you don't need cross-feed.


71 dB said:


> We have to make do with less-than-perfect solutions in real life.


Provided there actually is a problem for that solution.


71 dB said:


> As such, crossfeed is amazingly good at reducing ILD/ITD and making headphone sound more natural and less fatiguing. Crossfeed doesn't give 100 % of what was intended in the studio, but it gives a close-enough version of it that is pleasant and enjoyable. That is what counts.


Your opinion, no evidence yours is global, and mine is different.  Can you allow for that?


71 dB said:


> 2. Ask Andy Linkner (Andolink). I built him a crossfeeder in 2014. Also, if you read discussion boards about crossfeed you'll see how many use crossfeed "all the time" just like me. So, your attempt to make me look a singular fool fails.


One example doesn't have any statistical validity.


71 dB said:


> 3. My "facts" may not be 100 % correct (understanding is refined every day), but I have put effort in this so much, that if I was wrong it would be really strange. It would be like saying Einstein didn't understand relativity. I take your posts as personal attacks because instead of demostrating my faults you try to discredit me. You have managed to correct me once (vinyl stylus movements and L+R, L-R). I really was mistaken that time and remembered the directions wrong. Mostly you just point out that your opinions differ from mine. I don't know about you, but I try to build my opinions on scientifically sound premises and finetune them if needed.


You have put forth a lot of effort into solving a problem without any effort put into evaluating the global perception that it even is a problem in the first place.


71 dB said:


> 4. People correct "art" with acoustic crossfeed all the time and nobody cares. Room, speakers and listening position is a complex acoustic system, much more unpredictable than headphone crossfeed.


Headphone perspective is relatively stable across all headphones, and known even to the content creators.  There is also the possibility that it is what they intend, an already recognized and accounted-for compromise.  Can you recognize that someone else (the creators) may already have considered this and deemed it not a problem?


----------



## 71 dB

jgazal said:


> 1. He does that because we are in sound science forum and usually most descriptors were already agreed by previous researchers so that the newcomers don’t need to reinvent the wheel.
> 
> 2. I am failing to see why crosstalk is bad for speakers and good for headphones.
> 
> ...



1. The use of crossfeed in headphone listening hasn't been studied much. So, rather than reinventing the wheel, this is inventing the wheel.
2. Stereophonic sound can be divided into three classes: normal stereo (for loudspeakers), binaural (for headphones) and omnistereophonic (a compromise between normal stereo and binaural stereo). Normal stereo on headphones needs crossfeed. Binaural on speakers needs crosstalk canceling. Omnistereophonic sound works as it is on both speakers and headphones.
3. Yes.
4. Yes.
5. Crossfeed is a coarse approximation of HRTF. It models the fact that ILD is small at low frequencies and increases with frequency + it has ITD information. The coarse nature means that it's not "wrong" HRTF, because it doesn't even try to be real HRTF. It just scales ILD to natural levels. ILDs are mixed for speakers and are most of the time too large for headphones.
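The kind of filter described in point 5, a delayed, lowpass-filtered, attenuated copy of the opposite channel added to each side (as in the DSP Manager description quoted at the start of the thread), can be sketched in a few lines of Python. This is a hypothetical minimal implementation with illustrative parameter values, not the code of any plugin or product mentioned here:

```python
import numpy as np

def simple_crossfeed(left, right, sr=44100, atten_db=-7.0,
                     delay_ms=0.3, cutoff_hz=700.0):
    """Add a delayed, lowpass-filtered, attenuated copy of the
    opposite channel to each channel (illustrative values only)."""
    gain = 10 ** (atten_db / 20)              # linear crossfeed gain
    delay = int(round(sr * delay_ms / 1000))  # ITD as a sample delay
    a = np.exp(-2 * np.pi * cutoff_hz / sr)   # one-pole lowpass coefficient

    def lowpass(x):
        # Models ILD growing with frequency: highs are fed across less.
        y = np.empty_like(x)
        acc = 0.0
        for i, v in enumerate(x):
            acc = (1.0 - a) * v + a * acc
            y[i] = acc
        return y

    def feed(x):
        delayed = np.concatenate([np.zeros(delay), x[:len(x) - delay]])
        return gain * lowpass(delayed)

    return left + feed(right), right + feed(left)
```

Feeding an impulse through it shows the behaviour: each direct channel passes through unchanged, while the opposite channel receives a quiet, delayed, treble-rolled-off copy.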



jgazal said:


> 1. I see your point with “ping pong stereo recording” (do you really believe that 98% of musical recordings available nowadays have such strong stereo separation?).
> 
> 2. But perhaps your emotional involvement with your previous efforts with crossfeed, a hard work that deserves our highest respect, may have given you a bias to extrapolate its supposed improvement to 98% of stereo recordings.
> 
> ...



1. Don't be silly. Nobody does ping pong in 2017! It's extreme. A modern pop song requires much weaker crossfeed than a 1958 ping pong recording. This is how I define spatial distortion SD:





A ping pong recording has very strong spatial distortion, maybe 80 % or so. You need very strong crossfeed at level -1 dB or so. A modern pop song requires maybe crossfeed at level -7 dB, meaning the spatial distortion is "only" 20 %. The hearing threshold of spatial distortion is about 5-10 %, so below -11 dB crossfeeding becomes useless and the recording is practically free of spatial distortion. Red means severe channel separation and green mild. So, from 1958 to 2017 we have come from red to yellow, maybe, on average. However, because recordings are mixed for speakers, not many recordings are in the green area, and downmixed multichannel movie soundtracks contain A LOT of separation, because rear channels are encoded as channel difference.

2. No. My efforts and hard work come from the realization that almost all recordings benefit from crossfeed.
3. As long as the ILD is unnaturally large crossfeed is beneficial.
4. So what is the message of pinnahertz? Don't do anything?
5. I hope this is beneficial to people reading this board.
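Assuming the crossfeed "level" in point 1 means the attenuation applied to the crossfed opposite-channel signal (an assumption; the post does not define it precisely), the quoted dB figures map to linear mix gains as follows:

```python
def xfeed_linear_gain(level_db: float) -> float:
    """Convert a crossfeed level quoted in dB to the linear gain
    applied to the opposite-channel signal (hypothetical reading
    of the post's "level")."""
    return 10 ** (level_db / 20)

# Approximate gains for the levels quoted above:
#   -1 dB (very strong crossfeed)   -> ~0.89
#   -7 dB (modern pop)              -> ~0.45
#  -11 dB (claimed threshold of use) -> ~0.28
```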


----------



## bigshot (Dec 3, 2017)

pinnahertz said:


> If crossfeed were so good, so essential, such a key function, then why hasn't it been standard on even 1% of all music players since the original Sony Soundabout (Walkman) of 1979?  It certainly could have been done and very low cost.  But it wasn't, hasn't, and isn't still.



I've always wondered why equalization is only available on A/V receivers. It's probably the most important form of signal processing you can use, yet if it exists at all, it's in a primitive bass and treble dial.... and high end amps don't even have that! I guess you can't compare market penetration and usefulness.

If I listened to headphones more, I would like to have a good cross feed. I probably wouldn't use it all the time, but when I needed it, it would be handy. EQ I use all the time.

By the way, if you think no one does ping pong stereo any more, you don't listen to hip hop music!


----------



## 71 dB

pinnahertz said:


> You said, "[1] Spatial deafness is simply a subgenre of Auditory Adaptation." Where's the backup for that statement?  None?  OK, then "spatial deafness" isn't a real term.  You made it up.



All terms are made up, including Auditory Adaptation. I can make up a term such as RedC as long as I tell others it means "red car." You brought the term Auditory Adaptation here. I would have been happy with spatial deafness only, because that's the part of Auditory Adaptation relevant to crossfeed. You don't need to prove that A = B, if you define A = B by definition! I don't need to prove RedC is a subset of all cars, because my definition of RedC makes the claim true.

Seriously man, this is so pointless. Why can't you just accept my definitions and terms as they are? 



pinnahertz said:


> Marketing 101: if it seems better, and costs nothing, it's a "feature" and raises value, you included it differentiate your product and sell more.  Cross-feed is a very old idea, but it's never surfaced in products at all.


Huh? SPL monitor. Corda Jazz. Dolby Phone. Never surfaced in products? Crossfeed is more relevant than ever, because more people listen to headphones and the sound quality of headphones has improved so that things such as spatial distortion become one of the largest problems.



pinnahertz said:


> *2. That is your opinion! * You leave no room for anyone else's opinion, and project yours on everyone.  I would check my attitude and see if it allows for and respects others opinions.   This is a big key to "our problem".



Tracks can be analysed to see that the ILD-information is objectively unnatural in most cases. My opinions are based on objective facts. The proper crossfeed level can be argued, and even I may change my opinion about it from day to day, but your opinions are "too" far from the reality as I see it. All I can think of is that you suffer from severe spatial deafness. That would explain why only a few recordings in your opinion benefit from crossfeed.
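One crude way such an analysis could be done (a hypothetical sketch, not a method anyone in this thread actually describes) is to measure the average inter-channel level difference per frame:

```python
import numpy as np

def avg_ild_db(left, right, frame=4096, eps=1e-12):
    """Average absolute inter-channel level difference in dB per frame:
    a crude stand-in for the 'stereo separation' under discussion."""
    n = (min(len(left), len(right)) // frame) * frame
    L = left[:n].reshape(-1, frame)
    R = right[:n].reshape(-1, frame)
    rms_l = np.sqrt((L ** 2).mean(axis=1)) + eps  # per-frame RMS, left
    rms_r = np.sqrt((R ** 2).mean(axis=1)) + eps  # per-frame RMS, right
    return float(np.mean(np.abs(20 * np.log10(rms_l / rms_r))))
```

A hard-panned ping pong passage would score tens of dB; a mono passage scores 0 dB. Whether a given number is "unnatural" is, of course, exactly what is being disputed here.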

My attitude might seem arrogant, but this is how I am. I am very shy when I feel I don't know much and very confident when I feel I know what I am talking about.



pinnahertz said:


> Clearly someone else sees it differently.  Does your humility allow for that difference?



No, I don't think so. Not in THIS issue.



pinnahertz said:


> "Approximation"...this is a very broad and liberal use of that term, if we are still talking about your flavor of cross-feed.


Approximation of ILD and ITD information, not spectral cues. You need something else to get spectral cues.



pinnahertz said:


> ...at which point you don't need cross-feed.



Except the studio is an acoustic crossfeeder too. Both of your ears hear all the monitors. That's crossfeed.



pinnahertz said:


> Provided there actually is a problem for that solution.



Unfortunately there is: Spatial distortion.



pinnahertz said:


> Your opinion, no evidence yours is global, and mine is different.  Can you allow for that?



You don't need my approval of your opinions. You can have any opinions you want. However, if your opinions aren't rooted in reality you may find it difficult to justify them to someone who knows his/her stuff.



pinnahertz said:


> One example doesn't have any statistical validity.



How many do you want? A thousand? A million?



pinnahertz said:


> You have put forth a lot of effort in solving a problem without any put forth in evaluating the global perception that it even is a problem in the first place.



Six years ago I didn't know it was a problem, but people get wiser. I had no clue about the potential of headphone listening, very few have. People are blind to problems they don't understand. For years I didn't "connect the dots", but in 2012 I did. I wasn't spatially ignorant anymore. It was about time! Experiences like that make one humble.



pinnahertz said:


> Headphone perspective is relatively stable across all headphones, and known even to the content creators.  There is also the possibility that it is what they intend, already a recognized and accounted for compromise.  Can you recognized that someone else (the creators) may already have considered this and deemed it not a problem?



Why is it that by default everything other people do is "probably" correct, but my crossfeeding is by default iffy? I don't think your average music producer has as deep an understanding of human hearing and acoustics as someone who has studied that stuff in university. Understanding how to create catchy radio hits is their field of expertise. It does vary. Some producers demonstrate better understanding of spatiality than others, but somehow I like to use crossfeed with almost everything.


----------



## 71 dB

bigshot said:


> By the way, if you think no one does ping pong stereo any more, you don't listen to hip hop music!



I don't listen to hip hop so I didn't know...


----------



## castleofargh

eheh, I like the name ping pong stereo. it really describes it perfectly.


----------



## pinnahertz

71 dB said:


> 1. The use of crossfeed in headphone listening hasn't been studied much. So, rather than reinventing the wheel, this is inventing the wheel.
> 2. Stereophonic sound can be divided into three classes: normal stereo (for loudspeakers), binaural (for headphones) and omnistereophonic (a compromise between normal stereo and binaural stereo). *Normal stereo on headphones needs crossfeed.*


Unsubstantiated statement; at the very least the "need" is subjective and varies with both listener preference and the specific recording.


71 dB said:


> Binaural on speakers needs crosstalk canceling.


Yes, but the acoustic crosstalk cancellation is specific to the room, speaker and listener position.  It's certainly not ever just an ON vs OFF situation.


71 dB said:


> Omnistereophonic sound works as it is on both speakers and headphones.


...or it may just be an acceptable compromise for both speakers and headphones.


71 dB said:


> 5. Crossfeed is a coarse approximation of HRTF. It models the fact that ILD is small at low frequencies and increases with frequency + it has ITD information. *The coarse nature means that it's not "wrong" HRTF, because it doesn't even try to be real HRTF.*


It's not "Right" either then. 


71 dB said:


> It just scales ILD to natural levels. ILDs are mixed for speakers and are most of the time too large for headphones.


Here's where cross-feed collapses: it's not "right" for anything, exactly, and the need varies widely.  It cannot "scale ILD to natural levels" unless the natural level is known...and it isn't.



71 dB said:


> 1. Don't be silly. Nobody does ping pong in 2017! It's extreme. A modern pop song requires much weaker crossfeed than a 1958 ping pong recording. This is how I define spatial distortion SD:


Tend to agree here in general, though not in specific. 

Opinion.  Not fact at all!


71 dB said:


> A ping pong recording has very strong spatial distortion, maybe 80 % or so. You need very strong crossfeed at level -1 dB or so. A modern pop song requires maybe crossfeed at level -7 dB, meaning the spatial distortion is "only" 20 %. The hearing threshold of spatial distortion is about 5-10 %, so below -11 dB crossfeeding becomes useless and the recording is practically free of spatial distortion. Red means severe channel separation and green mild. So, from 1958 to 2017 we have come from red to yellow, maybe, on average. However, because recordings are mixed for speakers, not many recordings are in the green area, and downmixed multichannel movie soundtracks contain A LOT of separation, because rear channels are encoded as channel difference.


Opinion, opinion, opinion!!!!! It depends on the goal of the recording and the preference of the listener.  The chart and the above statements are unproven, opinion only.


71 dB said:


> 2. No. My efforts and hard work come from the realization that almost all recordings benefit from crossfeed.


That is opinion, and not agreed with by everyone.  There is no data to support this.


71 dB said:


> 3. As long as the ILD is unnaturally large crossfeed is beneficial.
> 
> 4. So what is the message of pinnahertz? Don't do anything?


We're on post 267 and you still don't know? Re-read.  I'm not suggesting cross-feed cannot be a benefit, I'm stating that your opinions, presented as facts, cannot be facts.  My personal preference strongly conflicts with yours.  Use it if it works, don't if it doesn't.  I object most emphatically to your definitive statements about the necessity of cross-feed!


71 dB said:


> 5. I hope this is beneficial to people reading this board.


I strongly doubt that.


----------



## 71 dB

castleofargh said:


> eheh, I like the name ping pong stereo. it really describes it perfectly.



Yes, I like the name too, but not the stereo image itself. It doesn't work well even with speakers!


----------



## pinnahertz

71 dB said:


> All terms are made up, including Auditory Adaptation. I can make up a term such as RedC as long as I tell others it means "red car." You brought the term Auditory Adaptation here. I would have been happy with spatial deafness only, because that's the part of Auditory Adaptation relevant to crossfeed. You don't need to prove that A = B, if you define A = B by definition! I don't need to prove RedC is a subset of all cars, because my definition of RedC makes the claim true.


If you look up Auditory Adaptation you will find a clear and universally accepted definition. If you look up Spatial Deafness, you will find you are misusing the term.


71 dB said:


> Seriously man, this is so pointless. Why can't you just accept my definitions and terms as they are?


Because they are understood only by you.  That makes them useless.



71 dB said:


> Huh? SPL monitor. Corda Jazz. Dolby Phone. Never surfaced in products?


What are those? How much market share do they have?  How about iPhone?  iPod? Sony products?  Sansa?  Anything mainstream at all? 


71 dB said:


> Crossfeed is more relevant than ever, because more people listen to headphones and the sound quality of headphones has improved so that things such as spatial distortion become one of the largest problems.


...or not.  Opinion again.



71 dB said:


> Tracks can be analysed to see that the ILD-information is objectively unnatural in most cases.


Have you now defined "objectively unnatural" in an artistic medium?  What do you use to objectively measure "unnatural"?  Perhaps you should look up the definition of "objective".


71 dB said:


> My opinions are based on objective facts. The proper crossfeed level can be argued, and even I may change my opinion about it from day to day, but your opinions are "too" far from the reality as I see it.


Your opinions are not based on anything except your opinions.  If it's an objective fact it won't change with your opinion.  Have you tested your theories on a large sample of listeners?  Have you determined general preferences?  No, and No!  You don't have a shred of objectivity here.


71 dB said:


> All I can think of is that you suffer from severe spatial deafness. That would explain why only a few recordings in your opinion benefit from crossfeed.


SOOO typical.  "I hear it and you don't, so you must be defective/deaf/stupid/<fill in any derogatory term here>."  Your attitude of superiority and arrogance is indeed monumental.

And you _would_ think that because if you believe it yourself, then it is absolutely right and all other views are wrong.   You don't see just a tiny bit wrong with that?

Well, here's news: I'm not spatially deaf, not at all!  I can easily and correctly localize sounds in a 360 degree sphere, and properly judge distance.  I've made binaural recordings, I've researched acoustic crosstalk cancellation decades before you were listening to headphones, I've mixed in 5.1/7.1.  I've designed and built many high-end surround systems and calibrated theaters.  Yeah, I'm spatially deaf as a post.  Right.  I also don't think cross-feed is "necessary" in most cases, but is beneficial in a few.  Do you know the difference between a physical defect and a difference of opinion?  Or, do I correctly assume that anyone who differs with your opinion must, by definition, be defective? 


71 dB said:


> My attitude might seem arrogant, but this is how I am. I am very shy when I feel I don't know much and very confident when I feel I know what I am talking about.


Yes, it seems arrogant, to the extreme.  I'm afraid you don't know what you don't know, which is actually true of everyone.  Realize that's true.



71 dB said:


> No, I don't think so. Not in THIS issue.


Not a surprise given the apparent lack of humility.


71 dB said:


> Except the studio is an acoustic crossfeeder too. Both of your ears hear all the monitors. That's crossfeed.


But that's already taken into account when the mix is created, it's already been compensated for.


71 dB said:


> Unfortunately there is: Spatial distortion.


I don't completely disagree that your (made up pet) term is real in some recordings and that cross-feed can mitigate it. I disagree with your universal application of it as the universal fix.  You have no factual backup for that at all.  Restating your strong opinion as fact doesn't constitute factual data or research.


71 dB said:


> You don't need my approval of your opinions. You can have any opinions you want. However, if your opinions aren't rooted in reality you may find it difficult to justify them to someone who knows his/her stuff.


I could actually just copy/paste that as a reply, but you may not get it because you haven't yet.  Your opinions are repeatedly stated as immutable fact, but you have no actual test results or research to back them up.  You have your theory, and analysis of what you believe is a "problem", but don't have any data to back up the fact that it is generally viewed as a problem.  In short, you have strong opinions you state as fact.  I have a problem with that.


71 dB said:


> How many do you want? A thousand? A million?


I'd like you to take your wild claims and scale them to reality.  That's going to take more than one cherry-picked example from an era when the stereo medium wasn't fully understood.



71 dB said:


> Six years ago I didn't know it was a problem, but people get wiser. I had no clue about the potential of headphone listening, very few have. People are blind to problems they don't understand. For years I didn't "connect the dots", but in 2012 I did. I wasn't spatially ignorant anymore. It was about time! Experiences like that make one humble.


6 years! Well, I've been researching acoustic crosstalk, headphone cross-feed, psychoacoustics, the reproduction of 3D sound, blah blah blah....it doesn't matter...since 1980.  Does that help you respect my opinion? No!  At this point it's pointless for us to throw our backgrounds at each other.  I don't really care if you've been at it for 60 years or 600 years, stating opinion as fact with zero backup is propaganda.  Arrogant self-righteousness and the blatant disrespect of others' opinions is abhorrent in a scientific community, or any community.


71 dB said:


> Why is it that by default everything other people do is "probably" correct, but my crossfeeding is by default iffy?


I didn't say everything other people do is "probably" correct.  I'm saying your cross-feeding is iffy because of the lack of supporting data.  You have not done the research.


71 dB said:


> I don't think your average music producer has as deep an understanding of human hearing and acoustics as someone who has studied that stuff in university. Understanding how to create catchy radio hits is their field of expertise. It does vary. Some producers demonstrate better understanding of spatiality than others, but somehow I like to use crossfeed with almost everything.


You are discussing three distinct properties: a producer's ability to make something the market demands, the understanding of human hearing and acoustics, and your preference for cross-feed.  I do not see that they must all be equally present for a successful recording.


----------



## pinnahertz

bigshot said:


> I've always wondered why equalization is only available on A/V receivers. It's probably the most important form of signal processing you can use, yet if it exists at all, it's in a primitive bass and treble dial.... and high end amps don't even have that! I guess you can't compare market penetration and usefulness.
> 
> If I listened to headphones more, I would like to have a good cross feed. I probably wouldn't use it all the time, but when I needed it, it would be handy. EQ I use all the time.


Auto EQ is only on A/V products for several reasons. While every single system would benefit from EQ, it's not found on stereo systems, for a few reasons. One: with stereo you're only dealing with two, usually identical, speakers. Two: two-channel stereo only actually works right in one tiny listening position, so it is generally accepted as mostly out of calibration because few will actually make a routine effort to sit in that position. And possibly three: EQ is viewed by the two-channel audiophool group as categorically bad...even though the entire speaker/room system is already applying drastic EQ.

Auto EQ made multichannel system setup much easier, and much more accurate, and compensated for difficult position issues with delay and different speaker types with EQ.  This seemed to happen at the same time that the problems with early noise/RTA based systems (that didn't hit the target well) were mitigated by sophisticated FFT-based systems.

Today you can buy a miniDSP product that does two channels, pretty much automatically.


----------



## jgazal (Dec 6, 2017)

71 dB said:


> 2. *Don't be silly.*





71 dB said:


> 4. Seriously man, this is so pointless. Why can't you just accept *my definitions* and terms as they are?





71 dB said:


> 5. *Except the studio is an acoustic crossfeeder too. Both of your ears hear all the monitors. That's crossfeed.*



I am not being silly.

I really have empathy for you.

So let me ease your pain, so that I can ease mine also.

So let's go together, but promise you are going to be patient with me.

Read what you had previously written:



71 dB said:


> Cancellation of *loudspeaker crosstalk* as a concept is familiar to me. I studied acoustics in the university and worked in the acoustics lab for almost a decade. However, I am not sure why you talk about loudspeaker cross-talk cancellation in a thread about cross-feed in headphone listening. Personally I am not that worried about loudspeaker cross-talk. It is a "natural" acoustic phenomenon that doesn't create unnatural signals to my ears. Making the listening room more absorbent and using more directional loudspeakers one can reduce cross-talk, if that is an issue. (...)
> 
> *I think you confuse cross-talk and cross-feed in some places.*



Is it loudspeaker crosstalk or crossfeed?

I thought feed was something that you do deliberately and not something that occurs without human intervention.

Now read what Professor Choueiri wrote:



> *10 How does BACCH™ 3D Sound work?*
> 
> Imagine a musician who stands on the extreme right of the stage of a concert hall and plays a single note. A listener sitting in the audience in front of stage center perceives the sound source to be at the correct location because his brain can quickly process certain audio cues received by the ears. The sound is heard by the right ear first and after a short time delay (called ITD) is heard by the left ear. Furthermore there is a difference in sound level between the two ears (called ILD) due to the sound having travelled a little longer to reach the left ear, and the presence of the listener’s head in the way. The ILD and ITD are the two most important types of cues for locating sound in 3D *and are to a good extent preserved by most stereophonic techniques*.
> 
> ...



Do you still think the "*Spatial distortion*" you want to address by adding electronic/digital crossfeed at headphone playback is worse than the fundamental problem Professor Choueiri wants to solve with acoustic crosstalk cancellation at speakers playback?

I don’t.

If you do, then this is the second time Dr. Choueiri is in disagreement with you.

Do you still think that "*the studio is an acoustic crossfeeder too*" instead of what Dr. Choueiri designates as "*crosstalk*", an "*important and fundamental problem*"?

If you do, then this is the third time Dr. Choueiri is in disagreement with you.

That’s why I believe pinnahertz insisted that you need to be precise with terms describing physical phenomena.



71 dB said:


> 1. Crossfeed (...) just scales ILD to* natural levels*. ILD is mixed for speakers and are most of the time too large for phones.



I would only say "*natural levels*" provided that you clearly state that your reference is stereophonic recordings played with loudspeakers without crosstalk cancellation.

I would say "standard levels with currently mainstream dsp-less playback environments".

And why one would need to state that?

Because dsp-less mastering and playback environments with the "*fundamental problem*" of "*crosstalk*" are not the state of the art anymore.



71 dB said:


> 6. Unfortunately there is: *Spatial distortion*.



I would *not* say "*Spatial distortion*" at all.

If you have stereophonic recording convolved with a binaural room impulse response (two measured speakers and two looking angles) with interpolation in real time to account for head movements and equalization to neutralize the headphone filtering, would you still add crossfeed?

If you want to monitor how your mix is going to sound in currently mainstream dsp-less playback environments, then certainly yes.

If you want to monitor how your mix is going to sound in state of the art playback environments, in which your listener has the same headphone rig as you have or speakers with a crosstalk cancellation device (for instance Professor Choueiri's BACCH-dSP or a phased array of beamforming transducers), then adding crossfeed to the “magic number” of “98%” of your recordings is certainly decreasing "*spatial and tonal fidelity*".

So now read this post again:



jgazal said:


> I don’t think he is trying to fight.
> 
> I believe he is trying to figure out what is in your opinion the percentage of recordings with unnatural ILD.
> 
> ...





71 dB said:


> 3. So what is the message of pinnahertz? Don't do anything?



I believe he agrees that stereophonic recordings with "*unnatural ILD*" (more than one would find if a real instrument were played at the position you want to pan it) may benefit, stating clearly that the reference for "*natural ILD levels*" *is* the real musical event and a dummy head and *not* the currently mainstream dsp-less mastering room environment.

The reference used to be the crosstalk-corrupted mastering environment. Now dsp allows such an ambitious reference as the real musical event, thus ILD experienced with acoustic crosstalk cannot be considered "*natural*".

I know @pinnahertz is going to disagree since he still uses the mastering room environment as the reference for fidelity, but I am just saying that, creative intent aside, state of the art dsp allows the mastering environment to be potentially faithful to the spatiality of the music event.



71 dB said:


> 7. Six years ago I didn't know it is a problem, but people get wiser. I had no clue about the potential of headphone listening, very few have. People are blind to problems they don't understand. For years I didn't "connect the dots",* but in 2012 I did*. *I wasn't spatially ignorant anymore*. It was about the time! *Experiences like that make one humble*.



I am afraid you still do not agree with Professor Choueiri that speaker playback with acoustic crosstalk is worse than the lack of crossfeed in headphone playback. And I am certain that Professor Choueiri is humbler than you.

I know it is hard to give up something that was so dear to you since 2012, but just let it go. We need to.

That is why I guess, I believe, I have the opinion that deliberately adding crossfeed at playback, digitally or electronically, may be beneficial only to "ping pong recordings", recordings with "strong stereo separation", recordings with "*unnatural ILD*".

P.s.: By *dsp-less* I mean mastering environments in which dsp is restricted to panning and synthetic reverberation and does not go further to cancel crosstalk in speakers or externalize virtual speakers on headphones.


----------



## bigshot

71 dB said:


> Yes, I like the name too, but not the stereo image itself. It doesn't work well even with speakers!



It works great for Sly and the Family Stone!


----------



## pinnahertz

jgazal said:


> I would only say "*natural levels*" provided that you clearly state that your reference is stereophonic recordings played with loudspeakers without crosstalk cancelation.
> 
> ...


 Technically correct, but nearly all mixing is still done without DSP correcting acoustic crosstalk of the monitor speakers in the mix environment.


jgazal said:


> I would *not* say "*Spatial distortion*" at all.
> 
> If you have stereophonic recording convolved with a binaural room impulse response (two measured speakers and two looking angles) with interpolation in real time to account for head movements and equalization to neutralize the headphone filtering, would you still add crossfeed?


No, that would pretty much do it right there.


jgazal said:


> If you want to monitor how your mix is going to sound in currently mainstream dsp-less playback environments, then certainly yes.


But nobody does that.  You could go nuts calling up one virtual room/system after another.  You can't make mix or mastering decisions that way.


jgazal said:


> If you want to monitor how your mix is going to sound in state of the art playback environments, in which your listener has the same headphone rig as you have or speakers with a crosstalk cancellation device (for instance Professor Choueiri's BACCH-dSP or a phased array of beamforming transducers), then adding crossfeed to the “magic number” of “98%” of your recordings is certainly decreasing "*spatial and tonal fidelity*".
> 
> So now read this post again:
> 
> ...


Actually, you've sort of made the point for me.  The question of where/when to "correct" depends entirely on the knowledge of what was done during the creation process, knowledge of the difference between that environment and the play environment, AND, ultimately, the intent of the creator vs the intent of the listener.

If it matters to the listener that he hear the content as it was intended, then obviously he needs to replicate the creative environment as nearly as practical.  However, the listener may not give a crap about what the creator intended and may think his preference is superior anyway, as in the case of 71dB.  Or, the listener may not perceive anything is amiss and joyfully listen to his Bose Lifestool system.

Whatever, we all know that virtually nobody will actually attempt to replicate the mix environment, so it's not expected.  Music is mixed on studio speakers, checked on other speakers and headphones, and whatever adjustments may help are made.  But thinking you can somehow correct for a relatively random set of conditions is simply delusional.  What's actually being done is the exercise of free choice and preference.

But "Spatial Distortion", whatever that is, is actually a relatively minor problem in playback.  There are such HUGE other problems, I find it strange anyone would lock into that one.  Frequency response of play systems is a very big problem, particularly with typical headphones and speakers.  It's all over the place, with excursions from any hope of match falling into the 20dB range sometimes.  And FR affects our mix, perspective, balance...pretty much the whole presentation.  And yet if you go and demo headphones, the ones that actually come at all close to good and smooth are very, very few.  Faced with that, and the lack of any clear subjective preference for cross-feed in general, it seems like beating a dead horse.


jgazal said:


> I am afraid you still do not agree with Professor Choueiri that acoustic crosstalk speakers playback is worse than the lack of crossfeed in headphone playback. And I am certain that Professor Choueiri is humbler than you.


Behind the arrogant stand a long line of the more humble.


jgazal said:


> I know it is hard to give up something that was so dear to you since 2012, but just let it go. We need to.
> 
> That is why I guess, I believe, I have the opinion that deliberatly adding, digitally or eletronically, crossfeed at playback may be beneficial only to "ping pong recordings", recordings with "strong stereo separation", recordings with "*unnatural ILD*".
> 
> P.s.: By *dsp-less *I mean mastering environments in which dsp in not restricted to panning and synthetic reverberation, but goes further to cancel crosstalk in speakers and externalize virtual speakers on headphones.


Nearly every mixing and mastering environment uses minimal DSP in the monitor chain, perhaps some precision EQ, until you get to multichannel rooms.  None use DSP crosstalk cancellation, and nobody makes mix decisions on virtualized speakers in headphones.


----------



## jgazal (Jan 26, 2018)

pinnahertz said:


> Technically correct, but nearly all mixing is still done without DSP correcting acoustic crosstalk of the monitor speakers in the mix environment.



Agree.



pinnahertz said:


> AND, ultimately, the intent of the creator vs the intent of the listener.



Agree that intent of the creator must be respected.



pinnahertz said:


> But "Spatial Distortion", whatever that is, is actually a relatively minor problem in playback.  There are such HUGE other problems, I find it strange anyone would lock into that one.  Frequency response of play systems is a very big problem, particularly with typical headphones and speakers.  It's all over the place, with excursions from any hope of match falling into the 20dB range sometimes.  And FR affects our mix, perspective, balance...pretty much the whole presentation.  And yet if you go and demo headphones, the ones that actually come at all close to good and smooth are very, very few.  Faced with that, and the lack of any clear subjective preference for cross-feed in general, it seems like beating a dead horse.



I agree that reference can’t be something that is not in mainstream use.

I was just trying to pull the argument to its limit.

But anyway, consider what Smyth and Choueiri say:



> *19 How does BACCH™ 3D Sound correct problems in audio playback?*
> 
> While the foremost goal of BACCH™ 3D Sound is 3D audio imaging, the BACCH filters at the heart of BACCH™ 3D Sound have the additional advantage of correcting, in both the time and frequency domains, many non-idealities in the playback chain, including loudspeaker coloration and resonances, listening room modes, spatial comb filtering, balance differences between channels, etc…
> 
> ...





jgazal said:


> In the following podcast interview (in English), Stephen Smyth explains concepts of acoustics, psychoacoustics and the features and compromises of the Realiser A16, like bass management, PRIR measurement, personalization of BRIRs, etc.
> 
> He also goes further and describes the lack of an absolute neutral reference for headphones and the convenience of virtualizing a room with state of the art acoustics, for instance “A New Laboratory for Evaluating Multichannel Audio Components and Systems at R&D Group, Harman International Industries Inc.” with your own PRIR and HPEQ for counter-filtering your own headphones (@Tyll Hertsens, a method that _personalizes_ room/pinnae and pinnae/headphones idiosyncratic coupling/filtering and keeps the acoustic basis for the Harman Listening Target Curve).
> 
> ...




Still very ideal scenarios. But who knows what the future is going to be like with easier ways to acquire personalized HRTFs instead of just binaural room impulse responses...

I agree that what really needs to be preserved from a real recorded event are mainly (a) the acoustic relationships between the hemispheres divided by the human sagittal plane (crosstalk cancellation with speakers or crossfeed-free externalization), (b) spectral cues (a _personal_ HRTF/HRIR/BRIR or, in a lower performance tier, a generic HRTF/HRIR/BRIR) and (c) bass response (for instance, the A16 allows direct bass, phase-delayed bass and even equalization in the time domain). But once those three factors are controlled to maintain localization fidelity, the overall frequency response and seat perspective do not necessarily need to be referenced to the real event.

Then it will be up to the creator to deliberately define the art, by creative intent, not only by carefully altering some of the frequency bands of real recorded events (dummy heads, baffled spaced microphones, ambisonics, eigenmikes), but also in synthetic recreations (stereophonic mixing or binaural synthesis with synthetic reverberations).

By “carefully” I mean choosing the recording room, where to place the musician and instrument, where to place the microphone etc. Anyway, if your aim is to keep elevation precision, I would be extra careful when altering some specific frequency bands with post equalization either of recordings made with microphone patterns that encode spatial info (dummy heads, sound field microphones or eigenmikes) or with synthetic mixing.

Correcting for such a “relatively random set of conditions” may seem “delusional”, but if we do not do it, virtual reality won’t work. And I believe consumers want it to work.

Edited to rephrase a confusing paragraph and to complete the idea of creative liberty of the producer...


----------



## jgazal (Dec 4, 2017)

pinnahertz said:


> But "Spatial Distortion", whatever that is, is actually a relatively minor problem in playback.





Erik Garci said:


> By the way, I recently created a PRIR for stereo sources that simulates perfect crosstalk cancellation. To create it, I measured just the center speaker, and fed both the left and right channel to that speaker, but the left ear only hears the left channel because I muted the mic for the right ear when it played the sweep tones for the left channel, and the right ear only hears the right channel because I muted the mic for the left ear when it played the sweep tones for the right channel. The result is a 180-degree sound field, and sounds in the center come from the simulated center speaker directly in front of you, not from a phantom center between two speakers, so they do not have comb-filtering artifacts as they would from a phantom center.
> 
> Binaural recordings sound amazing with this PRIR and head tracking.





jgazal said:


> How do you mute the opposite microphone?





Erik Garci said:


> To mute it I unplug the left or right microphone from the Y-junction between sweeps. I set the "post silence" to 8 seconds beforehand to give me enough time. To make it easier I plan to hook up an A/B switch.
> 
> I actually got the idea from a comment by Timothy Link in this Stereophile article about Dr. Choueiri's BACCH.
> http://www.stereophile.com/content/bacch-sp-3d-sound-experience
> ...





jgazal said:


> Can you describe that sensation of envelopment improvement you heard between the first PRIR and the "Hafler" PRIR?
> 
> Does the front soundstage keep believable in both PRIRs as you turn your head?





Erik Garci said:


> Using the first PRIR, central sounds seem to be in front of you, and they move properly as you turn your head. However, far-left and far-right sounds stay about where they were. That is, they sound about the same as they did without a PRIR, and they don't move as you turn your head. In other words, far-left sounds stay stuck to your left ear, and far-right sounds stay stuck to your right ear. *It's possible to shift the far-left and far-right sounds towards the front by using the Realiser's mix block, which can add a bit of the left signal to the front speaker for the right ear, and a bit of the right signal to the front speaker for the left ear.*
> 
> Using the Hafler PRIR, there seems to be a greater sense of space and ambience for all sounds. If the recording was matrix-encoded, some sounds extend beyond the far-left and far-right and wrap around you. Initially I noticed that far-left and far-right sounds moved too much when I turned my head, but after I increased the front speaker level to be 3 dB higher than the rear speaker level, they moved properly.



Why is this happening?

Would it be a minor problem if you wanted to match sound localization and image from a virtual reality headset?

I am just hoping the music industry will take a free ride on VR development.


----------



## jgazal (Dec 4, 2017)

pinnahertz said:


> (...) None use DSP crosstalk cancellation, and nobody makes mix decisions on virtualized speakers in headphones.



“Nobody” is such a strong word: https://www.soundonsound.com/reviews/smyth-research-realiser-a8 (published July 2013).

Though I agree that possibly very few make mix decisions on virtualized speakers in headphones *and* a PRIR set to avoid crossfeed.

I say very few because Professor Choueiri and Chesky Records' recording engineer may be the only two human beings doing that.


----------



## 71 dB

What are the scientifically proven facts of crossfeed that are not opinions? Human hearing has been studied for a long time. A lot of research has been done. I apply those "facts" to audio reproduction and to my ears it works. It works for Andy Linkner. It works for my friends.


----------



## 71 dB

pinnahertz said:


> The question of where/when to "correct" depends entirely on the knowledge of what was done during the creation process, knowledge of the difference between that environment and the play environment, AND, ultimately, the intent of the creator vs the intent of the listener.



Some aspects are more relevant than others. A creator can't expect his/her creation to be listened to in the exact same environment. A creator needs an understanding of the variation within typical listening environments and should optimize his/her creation to work as well as possible for that "set" of environments. People don't consume music in studios; they consume it in their living rooms and other everyday places.

The intents of creators are heavily affected by commercial demands which are often rather irrational, such as the loudness war. So, we should not blindly worship all intents of the creators. We can use our own heads.



pinnahertz said:


> However, the listener may not give  a crap about what the creator intended and may think his preference is superior anyway, as in the case of 71dB.


Creators have 3 options:

- mix for speakers
- mix for headphones
- mix for both and minimize the downsides of such a compromise

The default is speakers. As headphone listening has become more popular, maybe it's 60-70 % for speakers and 30-40 % for headphones for typical pop, but that is not a balanced compromise between the two and leaves some spatial distortion to be crossfed away for headphones. I'm sure most creators agree with me about proper crossfeed. It makes the sound more natural and less tiring, so why wouldn't they? As I have said before, spatial distortion is hardly the intent of any creator.



pinnahertz said:


> But "Spatial Distortion", whatever that is, is actually a relatively minor problem in playback.  There are such HUGE other problems, I find it strange anyone would lock into that one.  Frequency response of play systems is a very big problem, particularly with typical headphones and speakers.  It's all over the place, with excursions from any hope of match falling into the 20dB range sometimes.  And FR affects our mix, perspective, balance...pretty much the whole presentation.  And yet if you go and demo headphones, the ones that actually come at all close to good and smooth are very, very few.  Faced with that, and the lack of any clear subjective preference for cross-feed in general, it seems like beating a dead horse.



What HUGE problems? Listening with speakers suffers from acoustics problems and less than optimal placement of speakers and listening position. These problems are significant, but also hard to fix, although not impossible if you have money and an understanding wife. Fortunately, there is nothing unnatural about bad acoustics. Boomy bass is not unnatural, it just sounds bad. A fuzzy stereo image is not unnatural; the ILD and ITD information is a natural presentation of fuzziness. Frequency response errors hurt fidelity of course, but are totally natural. Spatial distortion however is unnatural and that makes it a HUGE problem. Fortunately it can be fixed with crossfeed pretty easily.
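For reference, the kind of crossfeed discussed throughout this thread (a delayed, lowpass-filtered copy of the opposite channel mixed in at a fixed attenuation) can be sketched in a few lines. This is a minimal toy implementation, not any particular plugin; the delay, cutoff and feed level below are illustrative values only:

```python
import math

def crossfeed(left, right, fs=44100, delay_ms=0.3, cutoff_hz=700.0, feed_db=-9.0):
    """Add a delayed, one-pole-lowpassed, attenuated copy of the opposite
    channel to each ear. Inputs are equal-length lists of float samples."""
    delay = int(fs * delay_ms / 1000)
    gain = 10 ** (feed_db / 20)
    # one-pole lowpass coefficient for the chosen cutoff
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / fs)

    def lowpass(x):
        out, acc = [], 0.0
        for s in x:
            acc += alpha * (s - acc)
            out.append(acc)
        return out

    def delayed(x):
        return [0.0] * delay + x[:len(x) - delay]

    feed_l = delayed(lowpass(left))    # what the right ear receives from the left channel
    feed_r = delayed(lowpass(right))   # what the left ear receives from the right channel
    out_l = [l + gain * fr for l, fr in zip(left, feed_r)]
    out_r = [r + gain * fl for r, fl in zip(right, feed_l)]
    return out_l, out_r
```

Fed a hard-panned signal (all left, silent right), the right output is no longer silent: after the delay it carries the lowpassed left channel about 9 dB down, which is exactly the ILD-scaling effect being argued over in this thread.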


----------



## pinnahertz

71 dB said:


> What are the scientifically proven facts of crossfeed that are not opinions? Human hearing has been studied for a long time. A lot of research done. I apply those "facts" to audio reproduction and to my ears it works. It works for Andy Linkner. It works for my friends.



You have not established:

1. A statistically significant portion of headphone listeners don't like the perspective

2. A statistically significant portion of headphone listeners prefer cross-feed...at all, much less 98% of all recordings.

You have established:

1. You like it so much you can't live without it

2. Your friends like it (could be any number from 2 up, but we don't know how it was tested, demo selections, biases...etc.)

Now, none of that is a problem _until_ you go and proclaim it as THE SOLUTION, and mandatory on 98% of all recordings ever made, proclaim the listening masses as ignorant, and accuse any with an opposing view as being somehow inept or impaired, and proclaim your own view as the only correct one.


----------



## bigshot (Dec 4, 2017)

71 dB said:


> A creator can't expect his/her creation to be listened to in an exact same environment. A creator needs understanding of what kind of variation you have within typical listening environments and optimize his/her creation to work as well as possible for such "set" of environments.




That's only true insofar that the sound in the studio isn't compromised. Creators usually focus on the main mix on the big monitors. They might do a pass at the end listening on small speakers, but that is mostly just to make sure nothing is blasting out. I've never seen anyone actually mix to anything but the main monitors in a mixing stage. Making the sound fit for various purposes is the job of the mastering engineer, and the creator usually has very little part in that.

As for the necessity of cross feed... I think DSPs will become more important as time goes by. They're already standard in multichannel AVRs, and I think DSPs for two channel will eventually be implemented as plugins for cell phones and DAPs for headphone listening as well. I don't think the fact that cross feed hasn't been standard up to now is any reason to think they won't be standard in the future. It just takes more sophisticated processing schemes designed specifically for how people listen to music today.


----------



## pinnahertz

71 dB said:


> Some aspects are more relevant than others. A creator can't expect his/her creation to be listened to in the exact same environment. A creator needs understanding of what kind of variation you have within typical listening environments and optimize his/her creation to work as well as possible for such "set" of environments. People don't consume music in studios. They consume it in their living rooms and other everyday places, but almost never in a studio.


Those who still have short term memory will recall that I have already stated that creators mix assuming few if any will listen to their work in a similar environment.


71 dB said:


> The intents of creators are heavily affected by commercial demands which are often rather irrational such as loudness war. So, we should not blindly worship all intends of the creators. We can use our own head.
> 
> Creators have 3 options:
> 
> ...


There are other options, like the one actually used: mix on speakers, check on headphones.  And by that I mean, assure the intent of the mix doesn't change on headphones, and accept the new perspective as valid (since not everyone agrees that "spatial distortion" is bad or even a problem).   


71 dB said:


> The default is speakers. As headphone listening has become more popular, maybe it's 60-70 % for speakers and 30-40 % headphones for typical pop, but that is not a balanced compromise between the two and leaves some spatial distortion to be crossfed away for headphones.


Where on earth do you get your statistics?


71 dB said:


> 1.  I'm sure most creators agree with me about proper crossfeed. 2. It makes the sound more natural and less tiring, so why wouldn't they? 3. As I have said before, spatial distortion hardly is the intent of any creator.


1. Why...because you are the God of Cross-Feed and unless they agree with you they are all imbeciles and should immediately be devoted to destruction?  You've already found one who disagrees...quite strongly, so you have two data points: yours and mine.  Not even enough to extrapolate a trend!
2. OMG!!!!   OPINION! OPINION! OPINION! OPINION!   Here's another: It flattens depth, removes ambiences, sucks the life out of the mix, alters mix balances...
3. How would you know?  Oh yeah, the God of Cross-Feed knows what everyone intends.  How incredibly disrespectful of the very creators on whom your listening depends.

OPINION!


71 dB said:


> What HUGE problems?


 Frequency response would be the biggest one.


71 dB said:


> Listening with speakers suffers from acoustics problems and less than optimal placement of speakers and listening position. These problems are significant, but also hard to fix, although not impossible if you have money and an understanding wife.


A symmetrical room layout with an LP in the "sweet spot" is not hard to do, it just isn't a priority.  It doesn't take a lot of money, or even that much understanding from your spouse.  It does take an interest and a priority.  The problem is it takes money for good speakers, and room acoustic treatment is highly disruptive to decor and layout.  All that's true, but basic system response is still the huge problem.


71 dB said:


> Fortunately, there is nothing unnatural about bad acoustics. Boomy bass is not unnatural, it just sounds bad.


Boomy bass is _a distortion of the mix!  Thin bass is also a distortion!  Over-emphasized upper mid is a distortion!_


71 dB said:


> Fuzzy stereo image is not unnatural, the ILD and ITD information is a natural presentation of fuzziness.


But that doesn't represent the original..._so it's a distortion!_


71 dB said:


> Frequency response errors hurt fidelity of course, but are totally natural.


This is absolutely and completely wrong.  Response errors are huge, and affect the mix balance to a far, far greater extent than anything else.  Spend a week mixing a song, then play it on a consumer system...see how you like your work being remastered by some random response curve!


71 dB said:


> Spatial distortion however is unnatural and that makes it a HUGE problem.


OPINION!  YOURS!  "Spatial distortion" has been accepted for decades, while frequency response has been worked on for decades.  The industry in general has prioritized response and fully ignored spatial distortion!  Why?  The reality is, the headphone perspective is fully accepted by most listeners, and has become an alternate version of "natural".  That's why there is no cross-feed on any mainstream music player.  Yet every respectable AVR has room EQ.  There are many ways of applying headphone EQ.  


71 dB said:


> Fortunately it can be fixed with crossfeed pretty easily.


The application of the "fix" is subjective and generalized, and requires variation with every track.  This is not true of response equalization.

You have defined the priority and magnitude of problems according to your own preference.  I see no point in presenting a different opinion since our resident egomaniac has made up his mind.

Yesterday, as a sanity check (readers will understand my need for this!) I listened to an early 1970s album all the way through using cross-feed.  Initially I was surprised because the first track heard with cross-feed presented a guitar moved out of its normal pan position into one elevated up about 45 degrees.  Interesting, but not right.  I listened to the entire album, and within a song or two, the cross-feed version no longer sounded "wrong"; in fact, it seemed like the band was less in my head, more in front of me.  Then, I listened again to the entire album without cross-feed.  Wow!  I was drawn into the music, the ambience surrounded me, I heard mix details missed before, and the music touched me far more deeply.  I enjoyed the entire experience more.  

The album was "Who's Next" by The Who.  

Now all of that is just my opinion, and I don't expect anyone to share it or agree with it.  It's just my preference.  But at least I gave it a shot, I opened the possibility that cross-feed might improve the experience.  I felt it didn't in that case.  However, I'm not done either.  I'll be listening to more and varied recordings with cross-feed.  Please know, this is not something new I'm doing now.  I've experimented with cross-feed for decades, various forms.  I keep giving it a chance.  I know it has merit, and I'll see if I can find where that is. And, where possible, I ask others opinions (not yours...I have yours pretty much in hand).  

I welcome others to do the same kind of comparison.  And I'll let you know when I find a track that cross-feed improves.


----------



## Zapp_Fan (Dec 4, 2017)

As someone who is currently working on feature sets for headphones, I can tell you that it's true: the reason crossfeed isn't implemented on headphones or DAPs is that most people don't know what it is, and so don't ask for it.  Whether it's really an essential feature for listening or not, I decline to make a definitive statement.

I've done a bit of market research on this very question, and the underlying problem is that it's pretty difficult to get the average person interested in better stereo imaging - unless they are physically listening to an A/B comparison in person.  Telling people that the stereo separation is better in marketing copy simply does not work on the typical consumer, at least in my limited but direct experience.  If it costs money to implement but the consumer doesn't care, you don't implement it.  People have a good sense of what they'd be willing to spend on more SPL, more bass, or a general notion of clearer, better sound, but claims about stereo imaging are not something that makes the wallets come out, for whatever reason.

All that said, I noticed that the companion app for the Sony WH‑1000XM2 wireless cans includes a spatialization feature; it does some kind of simulation of "club" and "arena" reverb-type sounds.  Unfortunately, I didn't personally find the effect very pleasing, nor was it something you'd recognize as crossfeed so much as a special effect.  They did, however, implement some pretty convincing HRTFs that offer front/back/left/right positioning of the sound, so a quality crossfeed scheme built into headphones is certainly within reach today. 

But, this does point the way to widespread adoption of crossfeed.  Headphone BT chipsets are getting powerful enough to run a relatively sophisticated crossfeed DSP,  and I expect companion apps will become more common for headphones over time, so I would say the prospects for crossfeed in the mass market are probably better than they've ever been - the reason being there is no marginal cost for implementing it as a DSP effect, you just need to pay your engineer to implement it once in the firmware.


----------



## gregorio

This thread seems to have become extremely polarised, with some pretty strange assertions which, rather typically of audiophile discussions, are based on personal opinion/perspective, incorrect facts and important facts which have simply been omitted/ignored/overlooked.



71 dB said:


> Crossfeeders don't serve coffee. They don't fix everything in the sound. They fix things related to excessive stereo separation and that's it. Why do I even need to say this?



I don't know why you think you need to say that, maybe because some of us realise/understand that crossfeeding does NOT fix things related to excessive stereo separation! When we listen to speakers, yes, we have the direct dry sound which, in the case of a sound panned right, is heard first in the right ear and then in the left ear at a lower level. With HPs we would only hear the sound in the right ear, and therefore crossfeeding some of it may seem like a good solution, and obviously some people feel (strongly apparently) that it does. However, there are a number of reasons why it doesn't:
1. There is not the expected time delay between left and right ears or the expected masking effect of the skull.
2. It's extremely rare for there to be only a direct dry sound on a recording. Sure, in the early days of stereo there was relatively little which could be done about this, but even in the 60's ways around it were commonly/ubiquitously employed: echo chambers, plates and other sympathetic resonators, and even reverb units (acoustic/analogue devices such as the EMT units). Recording and playback technology was improving and therefore recordings didn't need to be as dry as previously. The ping-pong effect never really died completely, it just became less obvious over time, using technology such as stereo delays and reverbs for example. Reverb units have been stereo for many years, in fact the EMT had a stereo return unit in the early 60's, so just crossfeeding a mix is going to damage, in some cases quite severely, the often delicate timing of left/right reflections.
3. Reverb again, but this time it's the reverb of the listening environments, both of them. There was a period in the late '70's and part way through the '80's when control room design fashion was for quite dead/dry acoustics and quite dry multi-track recordings, with a bunch of reverb and delays then added artificially. It was fairly quickly realised that this did not translate well to the consumer and so, for around the last 30 years or so, music studios have erred far more towards diffusion than absorption. The concept of mixes being created in relatively dead/dry control rooms is pretty much an audiophile myth. Most decent music control/mixing rooms have RTs (reverb times) broadly similar to those of the average sitting room. The application of reverb and delay effects is therefore based on the interaction of those effects with the hopefully neutral reverb of the control room. Obviously, consumers don't typically have appropriate diffusers in their listening environments and don't have neutral room reflections/reverb, but nevertheless what they do have is still an important interaction with the reflections/reverb on the recording. Hopefully it's obvious that crossfeeding does not accomplish this same feat; if anything, it can again just make matters worse.

Ultimately, every playback scenario has its strengths and weaknesses. Speakers are better because the music has been mixed and mastered for them, but they are badly compromised by relatively poor listening environments. Headphones cut out the worst of poor consumer listening environments but also throw out significant bits of the baby with the bath water. Crossfeeding "fixes" some of those issues but makes others worse. For many people it's a case of which disadvantages/weaknesses bother them the least. When I listen on HPs I know I'm going to get an artificially wide stereo image and a recording with less reverb and depth than intended; if I want it to sound more like it does through a good speaker system/environment, then I listen on a good speaker system/environment. I've heard some recordings on HPs which benefited from crossfeed, where crossfeed's deficiencies were less objectionable, and I've heard many others where I found the deficiencies too objectionable. Personally, I therefore avoid it; at least with just straight cans I know I'm getting the sound which the engineers in the studio checked and didn't think was too objectionable. That's just my personal preference though, what personally bothers me the least. It's obvious you've got pretty strong feelings on the matter, that you aren't much bothered by the deficiencies of crossfeeding while being impressed by the issues it does solve. That's up to you of course, but of course that's also only your personal preference. You seem to be getting a bit confused and carried away, apparently claiming as fact that it would be better for almost everyone and almost every recording.



71 dB said:


> Since subwoofers typically output only lowest bass, only in larger rooms we have problems with modes and small room behaves more like a pressure chamber and it's easier to find a good placement for the sub.



I'm not sure how you came to this conclusion, actually the reverse is true. As room size increases, the fundamental room mode frequencies and levels get lower as the distance between the main reflective surfaces increases. In large rooms, most cinemas for example, the lowest room modes are below 20Hz and therefore of no real concern. If small rooms were better, we'd all be mixing in them because it'd be cheaper. Generally, the smaller the room, the worse the acoustic issues and the more difficult they are to effectively treat.

G


----------



## bigshot (Dec 4, 2017)

My room is about 20 by 20 with a 20 foot high peaked roof and a 10 by 10 foot L off the rear on the right. It's a spit in the wind guesstimate, but I would bet the room mode frequency in my room is down below 30Hz based on my attempts to find the resonant frequency. A half octave at the very bottom isn't enough for me to really worry about, so I just tame it through EQ and furniture placement to lessen reflections and I don't worry about eliminating it through more extreme means. I do try to make sure that the sub has a clear shot at all the listening positions and I put it as close to the center of the front wall as I can, but not exactly in the center. (I find that symmetry sometimes causes problems.) It works for me.

Is there a way to estimate the resonant frequency based on room dimensions? I'm lousy at math, but I'd be interested to see if my guesstimate is in the proper ballpark.


----------



## bigshot

pinnahertz said:


> Behind the arrogant stand a long line of the more humble.



At Sound Science that line forms to the rear! The humble may be blessed, but they don’t get much air time around here!


----------



## bfreedma

bigshot said:


> My room is about 20 by 20 with a 20 foot high peaked roof and a 10 by 10 foot L off the rear on the right. It's a spit in the wind guesstimate, but I would bet the room mode frequency in my room is down below 30Hz based on my attempts to find the resonant frequency. A half octave at the very bottom isn't enough for me to really worry about, so I just tame it through EQ and furniture placement to lessen reflections and I don't worry about eliminating it through more extreme means. I do try to make sure that the sub has a clear shot at all the listening positions and I put it as close to the center of the front wall as I can, but not exactly in the center. (I find that symmetry sometimes causes problems.) It works for me.
> 
> Is there a way to estimate the resonant frequency based on room dimensions? I'm lousy at math, but I'd be interested to see if my guesstimate is in the proper ballpark.




Since I know you like Ethan's work....  http://realtraps.com/modecalc.htm

There are some online options as well - google "room mode calculator"


----------



## 71 dB

bigshot said:


> My room is about 20 by 20 with a 20 foot high peaked roof and a 10 by 10 foot L off the rear on the right. It's a spit in the wind guesstimate, but I would bet the room mode frequency in my room is down below 30Hz based on my attempts to find the resonant frequency. A half octave at the very bottom isn't enough for me to really worry about, so I just tame it through EQ and furniture placement to lessen reflections and I don't worry about eliminating it through more extreme means. I do try to make sure that the sub has a clear shot at all the listening positions and I put it as close to the center of the front wall as I can, but not exactly in the center. (I find that symmetry sometimes causes problems.) It works for me.
> 
> Is there a way to estimate the resonant frequency based on room dimensions? I'm lousy at math, but I'd be interested to see if my guesstimate is in the proper ballpark.








For an ideal rectangular room, the mode frequencies are f = (c/2)*SQRT((nW/W)² + (nL/L)² + (nH/H)²). Here c = 345 m/s (at typical room temperature) and W = L = 20 ft ≈ 6 m. The peaked roof and the "L" complicate things a bit; including the L-extension, 30 ft ≈ 9 m = W*.

For nW = 1, nL = 0 and nH = 0, we have 345*SQRT(1/36)/2 Hz = 345/(6*2) Hz ≈ 29 Hz.
For nW* = 1, nL = 0 and nH = 0, we have 345*SQRT(1/81)/2 Hz = 345/(9*2) Hz ≈ 19 Hz.
For nW = 1, nL = 1 and nH = 0, we have 345*SQRT(1/36 + 1/36)/2 Hz = 345/(6*SQRT(2)) Hz ≈ 41 Hz.
For nW* = 1, nL = 1 and nH = 0, we have 345*SQRT(1/81 + 1/36)/2 Hz ≈ 35 Hz.

The L-shape makes the calculations difficult, but you should find the lowest room modes at about 20, 30, 35 and 40 Hz depending on where you measure them in the room.
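For anyone who wants to grind through more mode orders than the hand calculation above, the rigid-wall rectangular-room formula is easy to script. A rough sketch only (the 4 m ceiling height below is a made-up placeholder, since the peaked roof makes a single height meaningless, and real walls are nowhere near ideally rigid):

```python
import itertools
import math

def room_modes(width_m, length_m, height_m, c=345.0, max_order=2):
    """Mode frequencies (Hz) of an ideal rigid-walled rectangular room:
        f = (c/2) * sqrt((nW/W)^2 + (nL/L)^2 + (nH/H)^2)
    Returns (frequency, (nW, nL, nH)) pairs, lowest first."""
    dims = (width_m, length_m, height_m)
    modes = []
    for n in itertools.product(range(max_order + 1), repeat=3):
        if n == (0, 0, 0):
            continue  # skip the trivial DC "mode"
        f = (c / 2) * math.sqrt(sum((ni / di) ** 2 for ni, di in zip(n, dims)))
        modes.append((round(f), n))
    return sorted(modes)

# 6 m x 6 m floor plan, placeholder 4 m ceiling:
for f, n in room_modes(6.0, 6.0, 4.0)[:6]:
    print(f, n)
```

The first axial modes it reports for the 6 m dimensions agree with the hand figures above (about 29 Hz axial, about 41 Hz tangential).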


----------



## 71 dB

gregorio said:


> I'm not sure how you came to this conclusion, actually the reverse is true. As room size increases, the fundamental room mode frequencies and levels get lower as the distance between the main reflective surfaces increases. In large rooms, most cinemas for example, the lowest room modes are below 20Hz and therefore of no real concern. If small rooms were better, we'd all be mixing in them because it'd be cheaper. Generally, the smaller the room, the worse the acoustic issues and the more difficult they are to effectively treat.
> 
> G


Small rooms are "problem free" only at the lowest bass, below the lowest room modes, but you reproduce the whole audio band in small rooms too. At 150 Hz you have problems! It's just not the sub producing those frequencies (say your cut-off frequency is 60 Hz). Your problems at 150 Hz are related to the main speakers, their placement and of course the acoustics at 150 Hz. If you work in a room so small that the lowest room mode is at 60 Hz, you can place your sub with a 60 Hz cut-off practically anywhere! That's about a 10' x 10' x 10' box at most.

A large room with its lowest room mode at 20 Hz doesn't save you from problems. You have modes at 40, 60, 80, 100, … Hz too, plus many others.
Studios are designed and engineered to have good acoustics: shape, damping, diffusers etc. Comparing your living room to a studio is like comparing your car to an F1 car.


----------



## 71 dB

gregorio said:


> I don't know why you think you need to say that, maybe because some of us realise/understand that crossfeeding does NOT fix things related to excessive stereo image! When we listen to speakers yes, we have the direct dry sound which in the case of a sound panned right is heard first in the right ear and then in the left ear at a lower level. With HPs we would only hear the sound in the right ear and therefore crossfeeding some of it may seem like a good solution and obviously some people feel (strongly apparently) that it does. However, there are a number of reasons why it doesn't:
> 1. There is not the expected time delay between left and right ears or the expected masking effect of the skull.
> 2. It's extremely rare for there to be only a direct dry sound on a recording. Sure, in the early days of stereo there was relatively little which could be done about this but even in the 60's ways around that were being commonly/ubiquitously employed; echo chambers, plates and other sympathetic resonators and even reverb units (acoustic/analogue devices such as the EMT units). Recording and playback technology was improving and therefore recordings didn't need to be as dry as previously. The ping/pong effect never really died completely, it just became less obvious over time, using technology such as stereo delays and reverbs for example. Reverb units been stereo for many years, in fact the EMT had a stereo return unit in the early 60's, so just crossfeeding a mix is going to damage, in some cases quite severely, the often delicate timing of left/right reflections.
> 3. Reverb again but this time it's the reverb of the listening environment, both of them. There was a period in the late '70's and part way through the '80's when control room design fashion was for quite dead/dry acoustics, quite dry multi-track recordings and then add a bunch of reverb and delays artificially. It was fairly quickly realised that did not result in good translation to the consumer and so, for around the last 30 years or so music studios have erred far more towards diffusion than absorption. The concept of mixes being created in relatively dead/dry controls rooms is pretty much an audiophile myth. Most decent music control/mixing rooms have RTs (reverb times) broadly similar to average to the average sitting room. The application of reverb and delay effects is therefore based on the interaction of those effects with the hopefully neutral reverb of the control room. Obviously, consumers don't typically have appropriate diffusers in their listening environments and don't have neutral room reflections/reverb but nevertheless, what they do have is still an important interaction with the reflections/reverb on the recording. Hopefully it's obvious that crossfeeding does not accomplish this same feat, if anything, it can again just make matters worse.


Direct dry sound from speakers: the sound at your left ear is hardly a decibel weaker than at your right ear at bass, and not much weaker at higher frequencies below 1 kHz. You sense the direction of sound below 1 kHz based on ITD.

1. All the crossfeeders I have been playing with do have the "expected" (about 250 µs) time delay between left and right ears, and the masking effect of the skull is simulated, not with HRTF accuracy, but simulated nevertheless. The ipsilateral signal is treble-boosted with a shelf filter and the contralateral signal is low-pass filtered.

2. That's why we don't crossfeed at level 0 dB, but at a lower level depending on the recording (proper crossfeed).

3. How does not using crossfeed accomplish anything? You don't have a room with headphones!
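The scheme described in point 1 (a short interaural delay plus a head-shadow low-pass on the opposite channel, mixed in at an attenuated level) can be sketched in a few lines. This is a minimal illustration, not any particular product's algorithm: the 250 µs delay, 700 Hz corner and -5 dB level are assumed example values, and the ipsilateral shelf boost is omitted for brevity.

```python
import numpy as np

def crossfeed(left, right, fs=44100, itd_us=250, cutoff_hz=700, level_db=-5.0):
    """Minimal crossfeed sketch: the opposite channel is delayed by an
    interaural time difference, low-pass filtered to mimic head shadowing,
    attenuated, and mixed into the current channel."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    delay = max(1, int(round(fs * itd_us * 1e-6)))   # ITD in samples
    gain = 10.0 ** (level_db / 20.0)                 # crossfeed mixing level

    def shadowed(x):
        # one-pole low-pass: y[n] = a*x[n] + (1 - a)*y[n-1]
        a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)
        y = np.empty_like(x)
        acc = 0.0
        for i, v in enumerate(x):
            acc = a * v + (1.0 - a) * acc
            y[i] = acc
        # delay by the ITD (pad with silence at the start)
        return np.concatenate([np.zeros(delay), y[:-delay]])

    return left + gain * shadowed(right), right + gain * shadowed(left)
```

With these values a hard-right-panned sound still arrives at the left ear, a fraction of a millisecond later, duller and several dB down, which is roughly what acoustic crossfeed between stereo speakers provides.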


----------



## 71 dB

pinnahertz said:


> Yesterday, as a sanity check (readers will understand my need for this!) I listened to an early 1970s album all the way through using cross-feed.  Initially I was surprised because the first track heard with cross-feed presented a guitar moved out of it's normal pan position into one elevated up about 45 degrees.  Interesting, but not right.  I listened to the entire album, and within a song or two, the cross-feed version no longer sounded "wrong", in fact, it seemed like the band was less in my head, more in front of me.  Then, I listened again to the entire album without cross-feed.  Wow!  I was drawn into the music, the ambience surrounded me, I heard mix details missed before, and the music touched me far more deeply.  I enjoyed the entire experience more.
> 
> The album was "Who's Next" by The Who.
> 
> ...



Do you hear these "mix details missed before" when you listen to speakers? Spatial distortion emphasizes channel difference, so what you hear is not real detail but overblown detail, out of proportion. That's "detail scale distortion" (yes, one more of my own terms!). The same details are in the crossfed sound, but at a lower level; at the correct level in fact, if you use proper crossfeed. At the correct level they don't "mask" monophonic detail: L+R and L-R details are balanced.

One thing people might not realize is that you need volume correction with crossfeed. Our hearing is sensitive to channel difference, and crossfeed reduces that, so perceived loudness drops. A few decibels are needed to raise the crossfed sound to the same perceived loudness level. That's one reason why crossfeed seems to make music less involving. Another reason is that spatial distortion functions as "special effects" and kind of masks the emptiness of the music itself. Does the music have artistic value, or is it all just hard-panned spatial tricks, smoke and mirrors?

I appreciate your open mind, pinnahertz. I'm not familiar with The Who's music, but I'm listening to the album on Spotify while writing this (Deluxe Edition). The album contains typical 70's spatial distortion and imo needs crossfeed at a level of about -1 dB. The music itself is quite nice!


----------



## pinnahertz

71 dB said:


> Do you hear these "mix details missed before" when you listen to speakers?


Yes.  The basic mix and balance matches speakers better without cross-feed.


71 dB said:


> Spatial distortion emphasizes channel difference so what you hear is not real detail, but overblown detail out of it's proportion.


No, the non-cross-feed version matches the speaker mix quite well.  Instruments are in wider positions and the sense of ambience is more immersive in headphones, otherwise the two are in reasonable parity.  Cross-feed throws the mix off.


71 dB said:


> That's "detail scale distortion" (Yes, one more of my own terms!)


 O     M    G.


71 dB said:


> The same details are at the crossfed sound, but at lower level, at correct level in fact if you use proper crossfeed. At correct level they don't "mask" monophonic detail. L+R and L-R details are balanced.


Not my impression at all, sorry.


71 dB said:


> One thing people might not realize is that you need volume correction with crossfeed. Our hearing is sensitive to channel difference and crossfeed reduces that so perceived loudness drops. A few decibels is needed to raise crossfed sound on the same perceived loudness level. That's one reason why crossfeed seems to make music less involving.


The cross-feeder I used already takes the level adjustment into account.  There's no perceived loudness difference.  What changes is the sense of 3D space.  Cross-feed kills that almost completely, and moves the mix slightly forward out of mid-head.  But it's then a 2D mix, very flat.


71 dB said:


> Another reason is that spatial distortion functions as "special effects" and kind of masks the emptyness of the music itself. Does the music have artistic value or is it all just hard panned spatial tricks, smoke and mirrors?


I'm not sure what you're saying.  I've already explained what I heard. 


71 dB said:


> I appreciate your open mind pinnahertz. I'm not familiar of The Who's music, but I'm listening to the album on Spotify while writing this (Deluxe Edition). The album contains typical 70's spatial distortion and imo needs crossfeed at level about -1 dB. Music itself is quite nice!


Well, it's not an unusual mix even for today.  A bunch of mono sources panned using ILD, with "space" added.

If you think about that last bit in your above post...I think you might actually land on a reason some of us may not appreciate cross-feed.  That record is one I knew from its release.  And yes, we had headphones back then!  They ran on coal fired steam, but we had them.  So much of what you hear as "wrong" I hear as "normal", because I've heard it that way for most of my life.  That makes it "normal" for me. 

Now, by extension, consider how much listening is done today on headphones, earphones.  I think your statistics are way, way off, but even if it's 50%, and those listeners have never heard anything else but non-cross-fed stereo in headphones, aren't you even a little afraid they will also feel that perspective is their "normal" and not like the flattened narrow perspective of cross-feed?  How long have they listened to music that way?  If I were you, that's where my concern and research would be.  Find out how people actually react to it in a bias controlled test. Which, by the way, probably means you can't administer the test yourself (bias control, you know).

As to the open mind... heck, if I don't learn something new in a given day I must be asleep or dead. And learning new stuff about audio has been pretty much my entire life. And it's self-serving too: if I learn something that makes my listening experience better, I'm going to do it. And that's the only reason I'm still messing with cross-feed after almost 40 years. So far, and I'm still open to other possibilities, I find its benefit isn't zero; it's just another tweak, a tool that should be used when appropriate. For me, "appropriate" is relatively rare. I would suggest you set your preconceptions aside and put your work into determining what "appropriate" means to the unwashed listening masses.


----------






## pinnahertz

71 dB said:


> Here c = 345 m/s (at typical room temperature), W = L = 20*0.30 m = 6 m. Peaked roof and "L" complicates things a bit. 30 foot is 9 m = W*
> 
> For nW = 1, nL = 0 and nH = 0, we have 345*SQRT(1/36)/2 Hz = 345/(6*2) Hz = 29 Hz.
> For nW* = 1, nL = 0, and nH = 0, we have 345*SQRT(1/81)/2 Hz = 345/(9*2) Hz = 19 Hz.
> ...


This is a pretty fine example of theory and calculation falling more than a little short of reality. The equations above assume the boundaries are absolutely rigid and 100% reflective, which they aren't. Every wall is partially permeable and, to make matters worse, it varies with frequency. At lower frequencies residential partitions become diaphragmatic, and partially absorptive. The only boundary types that come close to those assumed by the calculations are very thick reinforced concrete walls, ceilings and floors, and even those can be shown to be partially permeable. Ceilings and floors are also not equally reflective at all frequencies. Modal analysis must also take into account the position of the exciter(s) and the listener (or mic), plus non-parallel surfaces, and the L-shape will just confound prediction. Then you throw stuff in the room, and it's different again. I can promise you that the analysis above will have little relation to reality.

Only a couple of decades ago people used to do this with elaborate spreadsheets, charts, and graphs. They'd work out the best modal distribution and build that room. But none of it actually worked out that way once the room was built! Even the so-called "golden ratio" turned out to be a bit mythological because it didn't take the actual characteristics of walls into account. The best you can do is avoid multiple modes on 3 axes landing on the same frequency... sort of. You take your best shot, measure the effects in situ, react to the data if possible, and measure many, many positions. It turns out that modal analysis is pretty much just a math exercise.


----------



## pinnahertz

71 dB said:


> Small rooms are "problem free" only at lowest bass below the lowest room modes, but you reproduce whole audio band in small rooms too. At 150 Hz you have problems! It's just not the sub doing those frequencies (say your cut off frequency is 60 Hz). Your problems at 150 Hz are related to the main speakers, their placement and of course acoustics at 150 Hz. If you work in a room so small that the lowest room mode is at 60 Hz, you can place your sub with 60 Hz cut off practically anywhere! That's about a 10' x 10' x 10' box at most.
> 
> In a large room with lowest room mode at 20 Hz doesn't save you from problems. You have modes at 40, 60, 80, 100, … Hz too + many others.
> Studios are designed and engineered to have good acoustics. Shape, damping, diffractors etc. Comparing your living room to a studio is like comparing your car to a F1 car.


The reality is, it doesn't work exactly as expected. As posted earlier, the calculations you base your assumptions upon do not consider real rooms, real walls, and real construction materials.

You may find your assumptions about studios vs home listening rooms to be a bit wrong, too.  Yes, when we design studios we pay attention to a lot of acoustic issues that home designs ignore, but there exists a lot of granularity in both design and implementation.  

For a picture of home listening rooms, see the AES paper, "First Results from a Large-Scale Measurement Program for Home Theaters" by Tomlinson Holman and Ryan Green (November 2010).  The raw data set for the analysis in the paper came from measurements taken of 1000 rooms, using the Audyssey MultEQ (the Pro calibration version) where measurements of multiple positions both before and after correction could optionally be uploaded to a server.  There are some surprising results, such as that the assumption that homes are noisier environments than studios is probably wrong.


----------



## gregorio

71 dB said:


> [1] In a large room with lowest room mode at 20 Hz doesn't save you from problems. You have modes at 40, 60, 80, 100, … Hz too + many others.
> [2] Studios are designed and engineered to have good acoustics. Shape, damping, diffractors etc. Comparing your living room to a studio is like comparing your car to a F1 car.



1. That all depends on how you define a large room. In the given example of a cinema, where the minimum typical length of the room would be about 70', the fundamental axial mode would be around 8Hz, the first harmonic at 16Hz, neither of which are an audible problem and even the second harmonic at about 24Hz is not much of a problem.
2. There are comparisons between my car and an F1 car, both have 4 wheels for example, unlike a comparison between say a bicycle and an F1 car.



71 dB said:


> 1. All crossfeeders I have been playing with do have "expected" (about 250 µs) time delay between left and right ears and the masking effect of the skull is simulated, not on HRTF-accuracy, but simulated nevertheless.
> 2. That's why we don't crossfeed at level 0 dB, but at lower level depending on the recording (proper crossfeed)
> 3. How does not using crossfeed accomplish anything? You don't have a room with headphones!



1. "Expected" by whom? Even "HRTF-accuracy" is based on mean values unless you're going to have a personalised HRTF, which is potentially better, but achieving that potential is not straightforward.
2. And how would you judge what the proper level would be? There is no simple equation because each recording has different types, different combinations of types and different amounts of stereo reverb/delay effects, so that judgement would be entirely subjective by the individual for each recording and that subjective judgement is likely to vary from listening to listening as one focuses on different aspects of the recording. That's hardly a practical solution for the average consumer, even assuming there is an optimal/"proper" crossfeed level for any particular recording, which in many cases I don't believe there is.
3. Huh? That's my point, you don't have a room without crossfeed and you don't have a room with crossfeed either! A. Without crossfeed all you have is the stereo reverb and delay based effects actually applied to the recording without the intended interaction of the room. B. With crossfeed you still don't get that listening room interaction plus you get messed-up timing of the delay based effects actually applied to the recording. In this respect, not only does crossfeed not "fix things related to excessive stereo separation", it potentially just makes matters even worse, depending on the recording. I'd personally rather have A than B and as a bonus, not have to own and fiddle with a crossfeeder!

G


----------



## 71 dB

*I apologize if I can't respond to every message as this must be the most active online discussion board I have ever seen! *



gregorio said:


> 1. That all depends on how you define a large room. In the given example of a cinema, where the minimum typical length of the room would be about 70', the fundamental axial mode would be around 8Hz, the first harmonic at 16Hz, neither of which are an audible problem and even the second harmonic at about 24Hz is not much of a problem.
> 2. There are comparisons between my car and an F1 car, both have 4 wheels for example, unlike a comparison between say a bicycle and an F1 car.


1. A large room in Finland is probably small in the US, because we have small apartments in Finland (it takes energy to keep them warm in winter, so they aren't very large). I agree about the 8, 16 and 24 Hz modes being harmless.
2. I can compare myself to God: We both have a name. 







gregorio said:


> 1. "Expected" by whom? Even "HRTF-accuracy" is based on mean values unless you're going to have a personalised HRTF, which is potentially better but achieving that potential is not straight forward.


The fine details of HRTF vary from person to person, but the general shape is the same, dictated by our anatomy. If you change the cut-off frequency of a first-order low-pass filter by 10 %, people will hardly notice anything, but if you change the narrow spikes of an HRTF by that amount the result is completely wrong. Crossfeed doesn't even try to simulate the fine details. It scales spatial information, as a mapping function from "stereo space" to "human hearing space". In "stereo space" you can have any ILD. In "human hearing space" ILD is limited. So, for example at bass, you need to limit the "stereo space" ILD range of zero to infinity dB to about 0 to 5 dB, unless you want to create a soundstage where the kick drum is hit a few inches from one of your ears, something I believe nobody tries to achieve. A crossfeeder at level -5 dB does such mapping: 0 dB remains 0 dB while infinite dB becomes 5 dB.

The expected delay is a pretty trivial issue, because the size of the head doesn't vary much between people, and if someone has a very large or small head, it only translates into an angle error, which isn't serious. A guitarist playing at a 40° angle instead of 30° isn't serious. A person with a large head hears all recordings narrower than small-headed people with headphones, crossfeed or not. That's life. Crossfeed not being able to simulate your HRTF with 100 % accuracy is not a valid reason not to use it. It doesn't even try to do what it can't do, and it does what it can do: scale spatial information, and in doing so fix the serious problem of spatial distortion. Not using crossfeed doesn't do even that. It fixes nothing, letting all the problems reach your ears.
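The mapping claim above (0 dB stays 0 dB, an infinite input ILD collapses to about 5 dB at a -5 dB crossfeed level) can be checked with a little broadband arithmetic. A toy calculation only: it ignores the ITD delay and the head-shadow filtering, and assumes the two channels mix coherently with the same sign.

```python
import math

def mapped_ild(input_ild_db, crossfeed_level_db=-5.0):
    """ILD after mixing the opposite channel in at crossfeed_level_db.

    With left amplitude L = 1 and right amplitude R = 10^(-ILD/20),
    the mixed channels are L' = L + k*R and R' = R + k*L,
    where k is the crossfeed gain."""
    k = 10.0 ** (crossfeed_level_db / 20.0)   # crossfeed gain
    r = 10.0 ** (-input_ild_db / 20.0)        # right/left amplitude ratio
    return 20.0 * math.log10((1.0 + k * r) / (r + k))

for ild in (0.0, 6.0, 20.0, 1000.0):
    print(round(mapped_ild(ild), 2))
```

An input ILD of 0 dB maps to 0 dB, and as the input ILD grows without bound the output approaches -20*log10(k) = 5 dB, matching the 0 to 5 dB "human hearing space" range quoted for bass.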



gregorio said:


> 2. And how would you judge what the proper level would be? There is no simple equation because each recording has different types, different combinations of types and different amounts of stereo reverb/delay effects, so that judgement would be entirely subjective by the individual for each recording and that subjective judgement is likely to vary from listening to listening as one focuses on different aspects of the recording. That's hardly a practical solution for the average consumer, even assuming there is an optimal/"proper" crossfeed level for any particular recording, which in many cases I don't believe there is.


It's pretty easy when you learn to distinguish spatial distortion from the music. I simply use the weakest crossfeed level that makes the spatial distortion disappear, or leaves very, very little of it. In other words, I use the crossfeed level that makes the recording sound best in respect of spatial information. It's that simple! Sure, it takes some hours of listening and testing for a person to learn how spatial distortion diminishes as crossfeed gets stronger, but it is not difficult. Comments like this are a bit frustrating, because I feel people don't have a clue about these spatial things and how crossfeed addresses them. Don't think about what effects are used in the music. Our hearing doesn't detect reverb plugins. It detects ILD/ITD plus spectral cues over time. Only care about what kind of ILD/ITD information reaches your ears: is it within "human hearing space" or not?



gregorio said:


> 3. Huh? That's my point, you don't have a room without crossfeed and you don't have a room with crossfeed either! A. Without crossfeed all you have is the stereo reverb and delay based effects actually applied to the recording without the intended interaction of the room. B. With crossfeed you still don't get that listening room interaction plus you get messed-up timing of the delay based effects actually applied to the recording. In this respect, not only does crossfeed not "fix things related to excessive stereo separation", it potentially just makes matters even worse, depending on the recording. I'd personally rather have A than B and as a bonus, not have to own and fiddle with a crossfeeder!



So, in respect of having a room, no crossfeed and crossfeed are in the same position, except that crossfeed creates spatial cues of sound sources at about 30° angles, perhaps in a room. That's the problem of not using crossfeed: your recording can contain any kind of "spatial" information in "stereo space", which is the representation of real-life sounds as 2 × N matrices in digital form. "Stereo space" spatial information is incompatible with our hearing and must be mapped to "human hearing space", where we don't have wild 100 dB ILDs; ILDs are limited to about 0-5 dB at bass and about 0-30 dB at high frequencies. When we listen to speakers, this all happens "automatically" through acoustic crossfeed. With headphones we need crossfeed to make it happen. Our everyday sound world is more monophonic than people realise, closer to mono than ping-pong stereo. All you need to do is analyse binaural recordings to see this for yourself. When a bee flies near your ear you experience large ILDs, but you don't like it, do you?

What a crossfeeder does to the sound is give it the properties our hearing expects of sounds arriving at our ears. That's why the "messing-up" is beneficial, and that's why people don't say the stereo image with speakers is messed up, even though your left ear hears the right speaker and vice versa. Spatial information in "stereo space" MUST BE messed with to map it into "human hearing space". You can live your life without a crossfeeder, but that makes you spatially ignorant. I was such a person up until 2012. I managed, but I get so much more out of music now that I am spatially aware. Purism can be a good thing and it can be a bad thing; considering crossfeed as "messing up" timing is the bad kind of purism. How can you claim that crossfeed doesn't fix excessive stereo separation? It undeniably reduces stereo separation, so it makes excessive separation less excessive and hence fixes the problem.


----------



## 71 dB

pinnahertz said:


> This is a pretty fine example of theory and calculation falling more than a little short of reality.  The equations above assume the boundaries are absolutely rigid and 100% reflective, which they aren't.  Every wall is partially permeable, and to make matters worse, it varies with frequency.  At lower frequencies residential partitions become diaphragmatic, and partially absorptive.  The only boundary types that come close to those assumed by the calculations are very thick reinforced concrete walls, ceilings and floors, and even those can be shown to be partially permeable. Ceilings and floors are also not equally reflective at all frequencies.  Modal analysis must also take into account the position of the exciter (s), and the listener (or mic), and non-parallel surfaces, and the L shape will just confound prediction.  Then you throw stuff in the room, and it's different again.   I can promise you that the analysis above will have little relation to reality.
> 
> Only a couple of decades ago people used to do this with elaborate spread-sheets, charts, and graphs.  They'd work out the best modal distribution and build that room.  But none of it actually worked out that way once the room is built!   Even the so-called "golden ratio" turned out to be a bit mythological because it didn't take the actual characteristics of walls into account.  The best you can do is avoid multiple modes on 3 axis  from landing on the same frequency....sort of.   You take your best shot, measure the effects in situ, react to the data if possible, and measure many, many positions.   It turns out that modal analysis is pretty much just a math exercise only.



True. The real frequencies might be anything, but without knowing anything about the house, this is what can be calculated. Sorry if I gave the impression this was an exact calculation. It was more a demonstration of how "stupid" it is to ask online for the mode frequencies of your living room while giving only approximate measurements here and there. I regret even spending my time on that post.


----------



## 71 dB

pinnahertz said:


> Yes.  The basic mix and balance matches speakers better without cross-feed.


Really? Then you must have very strange headphones or speakers. Or maybe you're in denial that crossfeed actually works and have strange placebo effect due to that attitude. How can you be honest and tell me your speakers with acoustic crossfeed sound closer to headphones without cross-feed? Don't you see how insane such a claim is? What kind of ultra-directive magic speakers could produce ILDs over 10 dB at bass to your ears even in an anechoic chamber let alone in your room?



pinnahertz said:


> No, the non-cross-feed version matches the speaker mix quite well.  Instruments are in wider positions and the sense of ambience is more immersive in headphones, otherwise the two are in reasonable parity.  Cross-feed throws the mix off.


I give up. Do what you want. We simply disagree so much there is no hope for agreement. To me, speaker sound is narrower than headphone sound. Crossfeed makes the sound narrower and closer to speakers. I don't know why people want "wide". Usually the music is in front of you! Crossfeed gives a sensation of that to some extent, depending on the recording. It seems you don't understand anything I tell you about what crossfeed does. You remain clueless. Waste of time. I stop now and start doing other things I need to do in my life instead of wasting time with boneheads.


----------



## pinnahertz

71 dB said:


> Really? Then you must have very strange headphones or speakers. Or maybe you're in denial that crossfeed actually works and have strange placebo effect due to that attitude.


There it is again: "I hear it, you don't, so you must be deaf/defective/stupid...etc., etc."  I found this interesting too: "2. I can compare myself to God: We both have a name." 


71 dB said:


> How can you be honest and tell me your speakers with acoustic crossfeed sound closer to headphones without cross-feed?


I can't.  I didn't.  Read more carefully.


71 dB said:


> Don't you see how insane such a claim is? What kind of ultra-directive magic speakers could produce ILDs over 10 dB at bass to your ears even in an anechoic chamber let alone in your room?


Yes, that would be insane.  I'm not sure what you call lack of reading comprehension, probably not "insane", though.  Read my stuff again.


71 dB said:


> I give up. Do what you want. We simply disagree so much there is no hope for agreement. To me, speaker sound is narrower than headphone sound. Crossfeed makes the sound narrower and closer to speakers. I don't know why people want "wide". Usually the music is in front of you! Crossfeed gives a sensation of that to some extent, depending on the recording.


Speaker sound could be narrower than cross-feed headphone sound, or it could be wider.  It depends on the placement of the speakers, the LP, directivity and room acoustics.  I did not say, or mean to imply that cross-feed made the perspective narrower than speakers, but it is less dimensional, flatter, less involving than may be possible on speakers.  Yes, with speakers the sound is mostly in front of you, and with cross-feed you get a sense of that.  But I don't find that "better".  I like speakers, and I like non-cross-feed headphones, with a few exceptions.


71 dB said:


> It seems you don't understand anything I tell you about what crossfeed does. You remain clueless. Waste of time. I stop now and start doing other things I need to do in my life instead of wasting time with boneheads


You've confused lack of understanding with a difference of preference.  I don't think YOU are clueless!  I think we have a difference in preference!  You are projecting your preference on everyone else as fact, and if not accepted, then they are clueless boneheads.  And now you're lobbing insults.  Do you think reducing this to juvenile name-calling will change my mind or the mind of any other reader?  I'm afraid you haven't improved your credibility.


----------



## 71 dB

pinnahertz said:


> If you think about that last bit in your above post...I think you might actually land on a reason some of us may not appreciate cross-feed.  That record is one I knew from its release.  And yes, we had headphones back then!  They ran on coal fired steam, but we had them.  So much of what you hear as "wrong" I hear as "normal", because I've heard it that way for most of my life.  That makes it "normal" for me.



So, my crossfeeding is wrong because of your nostalgia for the hard-panned stereo sound of half a century ago? You can't tell nostalgia apart from real fidelity? Nothing wrong with nostalgia, I feel it too, but admit the irrationality of it. Crossfeed allows you to hear your old favorite albums more like they should always have been, and perhaps even discover some artistically rich music that doesn't rely on excessive spatial effects but on real musicality and creativity. Anyway, the _Who's Next_ album was a nice experience for me, not having been familiar with their music previously, and I liked it with crossfeed. With crossfeed off, however, it sounds pretty horrible imo; spatial distortion is over 75 % it seems. Typical for the genre and era.


----------



## pinnahertz

71 dB said:


> So, my crossfeeding is wrong because of your nostalgia for hard panned stereo sound some half a century ago?


That is NOT what I said!  Read it again.


71 dB said:


> You can't tell nostalgia apart from real fidelity?


Real fidelity?  What's this, another of your new made-up terms?  I do know what "fidelity" means.  I also know that in the absolute, it doesn't exist.  Every recording is a compromise, the best ones are compromises that represent the intent well.


71 dB said:


> Nothing wrong with nostalgia, I feel it too, but admit the irrationality of it. *Crossfeed allows you to hear your old favorite albums more like they should have always been* and perhaps even discover some artistically rich music that doesn't rely on excessive spatial effects but on real musicality and creativity.


Well, I guess the God of Cross-Feed has spoken...again.  


71 dB said:


> Anyway, _Who's Next_ album was a nice experience for me not familiar with their music previously and I liked it with crossfeed. However, crossfeed off it sounds pretty horrible imo, spatial distortion is over 75 % it seems. Typical for the genre and era.


We differ.  But I challenge you to state clearly the algorithm you use to quantify the objective measurement of "spatial distortion" over 75%.  If you state a number, there certainly MUST be a means of measurement.  Or is this something only the God of Cross-Feed knows, knowledge he retains for himself?  It's a number.  That means there's a measurement, or an opinion.  Which is it?  If it's a measurement, then anyone can duplicate it and arrive at the same number.  If it's an opinion, then expect it to be disagreed with. 

Who's Next sounds better to me without cross-feed.  Much better.  It is my preference, but others may like other options.  And you should research the group, listen to some of their other work, and other music of the era.  It was a very formative and inventive time.  And a LOT of people were discovering headphones for the first time!


----------



## Zapp_Fan

pinnahertz said:


> Every recording is a compromise, the best ones are compromises that represent the intent well.



I think this pretty much sums it up.  The idea of objectively higher fidelity to the original stereo image as recorded is more or less a farce, in my opinion.  Not least because stereo mixes are often at least partly, if not entirely, artificial.  Live sound is a 4-dimensional object: a 3D, smooth distribution of compressions and rarefactions in air that changes over time.  We're trying to experience this room-filling, ever-changing phenomenon by sampling a handful of isolated points within the room, messing with the 2-dimensional output (a mono recording), taking several of those and cramming them into two 2-dimensional containers (dual-channel audio files), and then adding crossfeed, or not, at the very end, just before jamming it all back through another set of transducers with innumerable problems of their own. 

My point is simply that there is so much distortion between the musician and the listener, crossfeed or not, that any notion of fidelity to a real stereo image (read: 3d distribution of sound in space) is like arguing about the photorealism of a portrait drawn in charcoal.  Sure, you can capture a lot of the essence of the original with good technique, and you can argue a lot about proper perspective, the right paper to use, and so forth... but it's still a black and white drawing of a living, breathing person.


----------



## Strangelove424

Zapp_Fan said:


> I think this pretty much sums it up.  The idea of objectively higher fidelity to the original stereo image as recorded is more or less a farce in my opinion.  Not least because stereo mixes are often at least partly, if not entirely artificial.  Live sound is a 4-dimensional object - a 3D, smooth distribution of compressions and rarefactions in air that changes over time.  We're trying to experience this room-filling, ever-changing phenomenon by sampling a handful of isolated points within the room, messing with the 2-dimensional output (a mono recording), taking several of those and cramming them into two 2-dimensional containers (dual channel audio files) and then adding crossfeed, or not, at the very end, just before jamming it back through another set of transducers with innumerable problems of their own.
> 
> My point is simply that there is so much distortion between the musician and the listener, crossfeed or not, that any notion of fidelity to a real stereo image (read: 3d distribution of sound in space) is like arguing about the photorealism of a portrait drawn in charcoal.  Sure, you can capture a lot of the essence of the original with good technique, and you can argue a lot about proper perspective, the right paper to use, and so forth... but it's still a black and white drawing of a living, breathing person.



Your conception of stereo as dual mono, or as two flat containers, is an oversimplification. Nor does it really describe my experience of stereo, at least on my own equipment. Since we are borrowing from visual contexts here, a 3-D image is created from stereoscopic vision, differences in perspective between left and right eyes. 3-D sound likewise comes from differences in L/R perception, or L/R differences the engineer baked into the mix. The depth and spaciousness is partially a byproduct of those two 'flat' perspectives being interpreted by the brain as something with depth. Throw 3+ extra channels in for 5.1, and you have an even more amazing illusion of dimension all around. I agree that reproduction has its limitations and cannot match a live performance, but comparing a room full of speakers to a charcoal drawing I think is a bit much. Do charcoal drawings pan?


----------



## bigshot

Mixes are definitely not real sound. They’re optimized for clarity.


----------



## 71 dB

pinnahertz said:


> But I challenge you to state clearly the algorithm with which you use to quantify the objective measurement of "spatial distortion" over 75%.  If you state a number, there certainly MUST be a means of measurement.  Or is this something only the God of Cross-Feed knows, knowledge he retains for himself?  It's a number.  That means there's a measurement, or an opinion.  Which is it?  If it's a measurement, then anyone can duplicate it and arrive at the same number.  If it's an opinion, then expect it to be disagreed with.



At least I have an algorithm to be challenged. It was first based on subjectivity: we find the optimal crossfeed level x and then calculate the distortion value SD from it:

SD = 100 * 10^(x/10) %.

To make things more objective, I have been testing filtering into octave bands and calculating a channel difference value D for each:

D = S / (S + M),

where S = abs(L-R) and M = abs(L+R).

D = 1 => L and R are identical but out of phase (antimono)
D = 0 => mono sound
D = 0.5 => S and M are equal

Now, there is a threshold value D' which represents the largest ILD of "human hearing space". At bass it's about -5 dB. So if for example R = 10^(-5/20) * L = 0.56 * L, we get:

M' = abs(L+R) = 1 + 0.56 = 1.56
S' = abs(L-R) = 1 - 0.56 = 0.44
D' = S' / (S'+M') = 0.44 / (1.56+0.44) = 0.22

Let's say I analyse a track and get D = 0.45. We want to crossfeed this so that it becomes 0.22. What is the crossfeed level x? It's a bit tricky mathematically, but the formula turns out to be:

x = 20*log10 ((D-D') / (D+D'-2*D*D')),

so we get x = 20*log10 ((0.45-0.22)/(0.45+0.22-2*0.45*0.22)) = 20*log10 (0.23/0.472) = -6.2 dB, and the corresponding spatial distortion is 100*10^(-0.62) = 24 %.

This is fairly objective, because it comes from HRTF measurements. It is not perfect and it's a work in progress. These calculations don't use ITD information at all; that's why the subjective method is more reliable imo.
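Put as code, the steps above look like this (my sketch of the formulas in this post, operating on per-band scalar amplitudes as in the worked example; function names are mine):

```python
import math

def channel_difference(L, R):
    """D = S / (S + M), with S = |L - R| and M = |L + R|.
    L and R are per-band amplitudes (scalars), as in the worked example."""
    S = abs(L - R)
    M = abs(L + R)
    return S / (S + M)

def crossfeed_level_db(D, D_prime):
    """Crossfeed level x (in dB) needed to reduce a channel difference D
    down to the threshold D' of "human hearing space"."""
    return 20.0 * math.log10((D - D_prime) / (D + D_prime - 2.0 * D * D_prime))

def spatial_distortion_pct(x_db):
    """Spatial distortion SD = 100 * 10^(x/10) %."""
    return 100.0 * 10.0 ** (x_db / 10.0)

# Worked example from the post: at bass, R is 5 dB below L.
L = 1.0
R = 10.0 ** (-5.0 / 20.0) * L          # ~0.56
D_prime = channel_difference(L, R)      # ~0.22
x = crossfeed_level_db(0.45, D_prime)   # ~-6.2 dB
sd = spatial_distortion_pct(x)          # ~24 %
```

Running it reproduces the numbers in the worked example above.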



pinnahertz said:


> Who's Next sounds better to me without cross-feed.  Much better. It is my preference, but others my like other options.  And you should research the group, listen to some of their other work, and other music of the era.  It was a very formative and inventive time.   And a LOT of people were discovering headphones for the first time!



Tangerine Dream, King Crimson, Herbie Hancock, Miles Davis and Carly Simon are my favorites of the era. Genesis and Pink Floyd don't do much for me. The 70's is a tiny fraction of all music history, so only a small fraction of my attention is focused on it. If you want an inventive time period in music, check out electronic dance music 1988-1993.

The Who is better without cross-feed? Sorry, your preferences are messed up.


----------



## Zapp_Fan (Dec 5, 2017)

Strangelove424 said:


> Your conception of stereo as dual mono, or as two flat containers, is an oversimplification. Nor does it really describe my experience of stereo, atleast on my own equipment. Since we are borrowing from visual contexts here, a 3-D image is created from stereoscopic vision, differences in perspective between left and right eyes. 3-D sound likewise comes from differences in L/R perception or L/R differences the engineer baked into the mix. The depth and spaciousness is partially a byproduct of those two 'flat' perspectives being interpreted through the brain as something with depth. Throw 3+ extra channels in for 5.1, and you have an even more amazing illusion of dimension all around. I agree that reproduction has its limitations, and cannot match a live performance, but to compare a room full of speakers to a charcoal drawing I think is a bit much. Do charcoal drawings pan?



OK, the charcoal drawing is over-dramatic; your analogy of a 3D movie is more accurate - it's somewhere between something shot with 2 cameras and something reconstructed from a set of 2D images. Still, comparing a 3D movie to the "fidelity" of being in a room with the actors is similar to what we're talking about here.  Sure, you can draw comparisons, but at the end of the day they are two entirely different experiences... 5.1 / Atmos is maybe analogous to a VR headset, I guess? 

To be fair, stereo is actually dual mono in most modern formats; it just so happens that the two signals are highly correlated in certain ways.
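That correlation claim is easy to sanity-check numerically. A hypothetical sketch with synthetic signals (a shared center component plus small independent side components, loosely mimicking a typical stereo mix; the 0.2 side level is an arbitrary assumption):

```python
import numpy as np

def interchannel_correlation(left, right):
    """Pearson correlation between the two channels of a stereo signal.
    Values near 1 mean near-mono; values near 0 mean widely decorrelated."""
    return float(np.corrcoef(left, right)[0, 1])

# Synthetic one-second "mix" at 44.1 kHz: a shared center component
# plus small independent left/right parts (illustrative, not a real recording).
rng = np.random.default_rng(0)
center = rng.standard_normal(44100)
left = center + 0.2 * rng.standard_normal(44100)
right = center + 0.2 * rng.standard_normal(44100)
# interchannel_correlation(left, right) comes out close to 1 here,
# i.e. the two "mono" channels are highly correlated.
```

With these parameters the theoretical correlation is 1/1.04 ≈ 0.96, so the two channels are indeed "dual mono" signals that are nearly identical.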


----------



## pinnahertz

71 dB said:


> At least I have an algorithm to be challenged. It was first based on subjectivity. We find the optimal crossfeed level x and then calculate the distortion value SD from that:
> 
> SD = 100 * 10 ^ ( x / 10 ) %.​


The term SD is subjective, and has not been evaluated.


71 dB said:


> To make things more objective, I have been testing filtering octave bands and calculating channel difference values D for them
> 
> D = S / (S + M),​
> where S = abs(L-R) and M = abs(L+R)
> ...


It may be based on partial HRTF measurements....but:


71 dB said:


> These calculations don't use ITD-information at all. That's why subjective method is more reliable imo.


So...then they aren't based on HRTF measurements!  And as soon as you claim subjectivity, you eliminate all possibility of measurement!

The very concept of "spatial distortion" is subjective, and there have not been any values assigned.  There hasn't even been any statistical analysis of subjective testing!  Your algorithm includes a subjective term, and is therefore subjective.  Enter even one subjective term and you no longer have a means of objectivity.


71 dB said:


> Tangerine Dream, King Crimson, Herbie Hancock, Miles Davis and Carly Simon are my favorites of the era. Genesis and Pink Floyd don't do much for me. The 70's is a tiny fraction of all music history, so only a small fraction of attention is focused into it. If you want inventive time period in music check out electric dance music 1988-1993.


I largely detest 1980s electronic dance music, with one or two notable exceptions.  I find most of it unlistenable.  I'm pretty sure that makes me an idiot in your view, right?  The 70s is a tiny fraction of music history, and it's also a tiny fraction of what I listen to.  I even choose a tiny fraction of that tiny fraction.  What is your point?  And have we finally found someone who doesn't like "Dark Side of the Moon?"  I would think that's just loaded with lovely "spatial distortion" for you to chew on.


71 dB said:


> The Who is better without cross-feed? Sorry, your preferences are messed up.


How can someone's preferences be messed up...unless they violate laws or the rights of others? 

Worse, when it comes to scientific research, you've limited yourself to a single data point: your own opinion.  OK, maybe two of your friends too.  That is about as unscientific as it gets.  Then you proclaim yourself absolutely right, and everyone differing with you as worse than wrong, they're deaf, defective, insane, all flavors of wrong, now including "messed up".  I'm sure the list of derogatory terms you've actually applied is far more extensive. 

Real scientific research tries to keep biases under control and out of a position to influence data, with the intent of uncovering the truth.  You have fully biased research here.  Actually, the term "research" would not even apply.  You have no idea what people prefer and why, and worse, you don't care, and even worse, you condemn them for having an opinion different than yours.  This is a very dangerous mind set, and has a long history of disaster, both for those with that mind set and those around them.  You will never uncover the truth that way.

I hope you are far removed from politics.


----------



## 71 dB

Some people think crossfeed "messes" with the sound. When the crossfed signal from the left channel is summed with the right channel we get a new right channel, but it's not "messed up", because the crossfed right channel was summed with the left channel too. Both channels change under the same crossfeed principles, and our hearing is able to decode what happened to the sound in the crossfeeding by comparing the new left and right channels. It helps that crossfeeders simulate what happens in reality, so the decoding is easy for our brain, much easier than decoding un-crossfed signals with excessive spatial information.

People assume things out of ignorance, lack of understanding, or to protect their twisted preferences. Some people think high-res audio is "better" than 16/44.1 because they lack understanding of digital audio and the limits of human hearing and audio reproduction technology. Similarly, some people think crossfeed is hardly ever beneficial, because they lack understanding of spatial hearing.

"Crossfeed messing up the sound" is a misunderstanding of human hearing. The sound without crossfeed is the messed-up one! Well, in 98 % of cases, that is...



Zapp_Fan said:


> To be fair, stereo is actually dual mono in most modern formats, it just so happens that the two signals are highly correlated in certain ways



That is very true. Spatial distortion is avoided if the correlation is strong enough and conforms with the "human hearing space."


----------



## Strangelove424

Zapp_Fan said:


> OK, charcoal drawing is over-dramatic, but your analogy of a 3D movie is more accurate - but it's somewhere between something shot with 2 cameras, and something reconstructed from a set of 2D images. But still, comparing a 3D movie to the "fidelity" of being in a room with the actors is similar to what we're talking about here.  Sure, you can draw comparisons, but at the end of the day, they are two entirely different experiences... 5.1 / Atmos is maybe analagous to a VR headset, I guess?



Yes, long before there was stereo audio, there were stereoscopes: people would place postcards with slightly offset perspectives into them, and the image turned 3D when peered into. "Stereo" generally refers to our biological capability to perceive depth using pairs of sensory data, and to reproducing sensory experiences that way, with depth. I would equate 5.1 to a virtualized experience as well. It's not interactive like VR, but it can create a convincing environment all around the listener.



Zapp_Fan said:


> To be fair, stereo is actually dual mono in most modern formats, it just so happens that the two signals are highly correlated in certain ways



They are mainly recorded with mono mics and mixed to stereo. I like it that way, personally. I think binaural mics sound weird the same way 3D movies look weird. Maybe that's just different strokes for different folks, though.


----------



## 71 dB

pinnahertz said:


> The term SD is subjective, and has not been evaluated.
> It may be based on partial HRTF measurements....but:
> So...then the aren't based on HRTF measurements!  And as soon as you claim subjectivity, you eliminate all possibility of measurement!
> 
> ...


I am the only one here presenting ANY kind of calculations, and I admit they are far from perfect. How do you justify your opinion that The Who sound best without crossfeed? I at least have some more or less clumsy calculations to demonstrate and justify why crossfeed is needed. Spatiality is a damn complex issue and I try hard to understand more every day. Are you trying, or are you just happy with the nostalgia of hard-panned stereo? How do you explain your claim that feeding excessive spatial information into your ears generates a 3-dimensional sound image similar to speakers? Speakers give you natural spatial information within "human hearing space", while headphones without crossfeed give you unnatural spatial information outside "human hearing space". How could these two give you more similar results than crossfeed, which scales the spatial information into an information space similar to that of the speakers? Your claims don't make sense to me and you have a lot of explaining to do to convert me. I try hard to give justifications for my claims, and if they are not enough for you then they aren't.


----------



## pinnahertz

71 dB said:


> I am the only one here presenting ANY kind of calculations and I admit they are far from perfect. How do you justify your opinion of The Who sounding best without crossfeed?


Once clearly identified as opinion, justification becomes unnecessary, unless one wishes to convince someone else to adopt the same opinion.


71 dB said:


> I have at least some more or less clumsy calculations to demonstrate and justify why crossfeed is needed.


No, you don't!  You have clumsy calculations that include your own made-up subjective parameter.  You have opinion even with calculations.  Your calculations do not prove need or benefit.


71 dB said:


> Spatiality is damn complex issue and I try hard to understand more everyday. Are you trying, or are you just happy with the nostalgia of hard panned stereo?


I'm trying to understand why someone would prefer cross-feed for a claimed 98% of all stereo recordings ever made.  My only resource is to listen to cross-feed, which I have done extensively.


71 dB said:


> How do you explain your claims that feeding excessive spatial information into your ears generate 3-dimensional sound image similar to speakers?


I don't have to.  That's not what I claimed.  I do not believe speakers present much in the way of 3 dimensional sound without extensive processing.


71 dB said:


> Speakers give you natural spatial information within "human hearing space" while headphones without crossfeed give you unnatural spatial information outside "human hearing space".


The spatial presentation of speakers is unique to speakers.  They don't present anything at all like a 3D space.  Headphones have that potential, but that potential is also impractical.  Headphones with cross-feed present another unique perspective, that of headphones with cross-feed.  The effect is not identical to speakers, though it may in certain circumstances be more similar in some aspects than non-cross-feed.  However, non-cross-feed headphones also present a unique perspective.  None of these perspectives represents reality.  All are compromises.   A good compromise will convey the core sense of the original creator.  _In my opinion_ I find that the unique non-cross-feed headphone perspective retained more of what the speaker perspective had in terms of the musical mix and balance than the cross-feed version, and added a strong immersive quality that I find adds to the entertainment of the recording.  I don't need to justify this, it's my subjective opinion.  Everyone here knows you don't agree.  You don't need to keep firing away at my one opinion.  My point here is you need to get more opinions...like a massive number of them...collected in a bias-controlled means. 


71 dB said:


> How could these two give you more similar results than crossfeed which scales the spatial information into similar information space to the speakers?


Because cross-feed changed the mix.  It changed the balance between instruments and sound sources.  It reduced distinction between instruments, it reduced depth of the mix, flattening it, and all of that reduced my enjoyment below that of my speakers.


71 dB said:


> Youe claims don't make sense to me and you have a lot of explaining to do to convert me.


I do not intend to convert you, or to make sense to you. 


71 dB said:


> I try hard giving my justifications for my claims and if they are not enough for you then they aren't.


The reason you have to justify your claims and I don't is that you present yours as "right", as "fact", cross-feed as mandatory for correct listening for 98% of all recordings.  That kind of claim demands justification and proof.  You've provided neither.  You've provided calculations that include your own subjective variable.  Proof would be a statistically significant number of listeners in a group that prefer cross-feed.

I, on the other hand, present my opinion.  Not as "right" or "fact" or as a means to demand anyone listen in any particular way.  I've expressed preference after testing, and I'm one guy, not statistically significant at all.  I'm not trying to convince anyone of anything EXCEPT for the fact that you are not presenting fact, you're presenting OPINION, of which I have a difference of opinion.  That shows that we have a 1:1 ratio, my opinion against yours.  It shows there's room beyond your cross-feed edicts for other means of enjoyable headphone listening. Your edict that all headphone listening is wrong without cross-feed is unsubstantiated, it lacks any support data taken from scientific testing.  It's your OPINION...presented as IMMUTABLE FACT! 

As I've said several times before in this thread...that's my only problem here.  I'm not anti-cross-feed, or anti-71 dB.  I appreciate the work you've done, and I'm still auditioning cross-feed myself.  I'm simply identifying your emphatic statements as OPINION, not fact, and trying to show that it may be possible for listeners to not prefer cross-feed for 98% of all stereo recordings on earth.  I'm trying to understand why you could possibly be so definitive of cross-feed, and I'm most puzzled by your need to elevate yourself above all others, and denigrate all others, while doing so.

But the burden of proof is on you.  Re-stating your opinions hasn't gotten you anywhere, has it?  Prove, with properly done research, that the average listener prefers cross-feed.  Heck, that's too hard.  Show, in a group big enough for statistical significance, what percentage prefers cross-feed.  You cannot supply that proof with equations containing your own made-up terms.  You cannot supply that proof by reiterating your opinions, no matter how much emphasis you apply.  It can only be supplied by actual research.  THAT is what is now required: do the work.  But until you do, I will continue to challenge your pseudo-facts, blanket claims, and statements that anyone who disagrees with you is deficient in some manner, or that you are somehow auditorily superior to the rest of the unwashed masses.


----------



## 71 dB

My calculations are based on HRTF measurements. Measurements aren't opinions. It's not an opinion that someone is 6' tall, but a measured fact. My discovery of crossfeed was the result of suddenly realizing how headphone listening easily causes excessive spatial information, and when I tried crossfeed I confirmed what the scientific knowledge of human hearing tells us. There are hard scientific facts behind my "opinions." I call it understanding rather than opinion. My claims may not be 100 % accurate and correct, but they are hopefully quite correct, and in time they will become more refined. 

How about other people? What percentage of the population even knows the basic principles of human hearing? Heck, most people don't even understand decibels! If you are used to crap you may think the crap is gold. Assume ignorance and stupidity. You can't ask people, because most of them are clueless. They need to be educated. It's sad that in our global capitalism wisdom isn't valued much. You are a valued customer as long as you have money and blindly buy what you are brainwashed to buy. Making money selling Fidget Spinners is easier, so that's what people are sold. I can only dream of a spatially cultured world where, instead of Fidget Spinners, all people used crossfeed to fully enjoy their music on headphones.

Asking for research data to back up that people want/need crossfeed is like asking children whether or not math should be taught at school. I wonder how that would turn out...


----------



## mindbomb (Dec 5, 2017)

This mirrors the debate between virtual surround sound and stereo in gaming. Except in that case, the deck is stacked even more against stereo, since more ambiguous sounds lead to a competitive disadvantage, and there clearly is a 3d environment that could be better represented on headphones. And yet it is still extremely difficult to get people to switch. It's classic baby duck syndrome.

For music, crossfeed seems like the least disruptive way to address the spatial problems of stereo recordings. The alternatives are that content creators start releasing more binaural content, which is specifically made for headphone users, or that headphones adopt more open form factors, like the AKG K1000 perhaps, or the Bose SoundWear.


----------



## ironmine

I agree with 71 dB. Once I found out about crossfeed technology (more than 10 years ago) and started using it, there was no way back.

I have tried many crossfeed plugins (basically almost all of them out there, except two or three), both Foobar components and VST plugins. I have a bunch of presets for them and can invoke them for instant comparison: Meier, Isone, Redline Monitor.  Before listening to an album critically, I briefly try the saved presets to see which one sounds better with that particular album.  I also try no crossfeed.  The "no crossfeed" variant is never the winner, and never even second place. Actually, it's always the worst and most horrible sounding.

Crossfeed technology is a major breakthrough in headphone listening. It's a revolution in the headphone listening experience. But most audiophiles are know-nothings; they are literally the most ignorant people on Earth when it comes to understanding how sound works. This is a good description of what a typical audiophile is: http://www.kenrockwell.com/audio/audiophile.htm

71 dB, which crossfeed do you use?


----------



## castleofargh

ironmine said:


> I agree with 71 dB. Once I found out the crossfeed technology (more than 10 years ago) and started using it, there is no way back.
> 
> I have tried many crossfeed plugins (basically, almost all of them out there, except two or three), both Foobar components and VST plugins, I have a bunch of presets for them and can invoke them for instant comparison.  Meier, Isone, Redline Monitor.  Before listening to an album critically, I try briefly saved presets to see which one sounds better with this particular album.  I also try no crossfeed.  The variant "no crossfeed" is never a winner. And never even the 2nd place. Actually, it's always the worst and the most horrible sounding.
> 
> ...


He shows it to you in his avatar ^_^. Seems like the good old Linkwitz design, but it's hard to say at this size; I don't have the "enhance picture" feature they always have in TV shows to solve cases.


----------



## pinnahertz

71 dB said:


> My calculations are based on HRTF-measurements. Measurements aren't opinions. It's not an opinion that someone is 6' tall, but a measured fact.


I'm not disputing HRTF in concept or the measurements involved.  But you are only using a small part of HRTF and ignoring the rest.


71 dB said:


> My discovery of crossfeed was a result of suddenly realizing how headphone listening easily causes excessive spatial information and when I tried crossfeed I confirmed what the scientific knowledge of human hearing tells us. There are hard scientific facts behind my "opinions." I call it understanding rather than opinions. My claims may not be 100 % accurate and correct, but they are hopefully quite correct and in time it gets more refined.


You can call it whatever you like.  It is just your opinion until you show the research.


71 dB said:


> How about other people? How many percent of population even know the basic principles of human hearing? Heck, most people don't even understand decibels! If you are used to crap you may think the crap is gold.


None of that relates to the question of the efficacy and general preference (or not) of cross-feed.


71 dB said:


> Assume ignorance and stupidity.


There's that arrogance again!


71 dB said:


> You can't ask people, because most of the them are clueless. They need to be educated.


Actually, good research would NOT educate them at all.  You'd simply present two choices for each of several music selections, one cross-fed, one not, and allow as much time as you like to get a preference.  It's actually better research if they are not educated.


71 dB said:


> It's sad that in our global capitalism wisdom isn't valued much. You are a valued customer as long as you have money and you blindly buy what you are brainwashed to buy. Making money selling Fidget Spinners is easier so that's what people are sold. I can only dream about a spatially cultured word where instead of Fidget Spinners all people used crossfeed to fully enjoy their music on headphones.


This line of reasoning is ridiculous.  People are not forced to buy blindly.   We live in a world where more consumers are better educated every day because of on-line shopping and customer reviews.  If you marketed a cross-feeder and sold it on Amazon you'd get at least some statistics, though somewhat biased.  You have none, and that has nothing whatever to do with global capitalism.


71 dB said:


> Asking research data to back up that people want/need crossfeed is like asking children whether or not math should be teached at school. I wonder how that would turn out...


That's funny.  Math is provable, and the correct answer is always preferable.  You have such an incredibly high opinion of your cross-feed!  It's nothing like math.  We all need at least some math every day, and some need a lot.  There's boundless proof of that.  Thus, math should be taught in school.  Not very hard to understand.  Cross-feed?  I don't know if anyone other than you and a couple of others like it.  I know I like it on an occasional recording, not on most.  That alone casts some doubt on your "cross-feed everything all the time" rule.


----------



## 71 dB

pinnahertz said:


> I'm not disputing HRTF in concept or the measurements involved.  But you are only using a small part of HRTF and ignoring the rest.



I'm using the relevant part. Excessive stereo separation is an issue mostly below 1 kHz. At higher frequencies the shadow effect of the head becomes strong and the phase difference becomes pretty meaningless. So the original "stereo space" information works more or less as it is. It's not the kind of major problem it is below 1 kHz. 
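As a rough illustration of why the sub-1 kHz region is where the argument focuses: down there, the interaural time difference (ITD) is the dominant localization cue, and a simple spherical-head model puts its natural maximum around 0.6-0.7 ms. Here is a minimal sketch using the Woodworth approximation; the head radius and speed of sound are assumed textbook values, not figures from this thread:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head approximation of the interaural time
    difference (ITD) for a far-field source at the given azimuth."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source fully to one side (90 degrees) yields roughly 0.66 ms of ITD;
# hard-panned headphone playback offers no ITD at all, only extreme ILD.
itd_side = woodworth_itd(90.0)
```

This is consistent with crossfeed designs that aim for interaural delays of a fraction of a millisecond (the bs2b description quoted at the start of the thread cites about 0.5 ms): natural hearing never presents larger time differences than this.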



pinnahertz said:


> You can call it whatever you like.  It is just your opinion until you show the research.



What counts as "research" to you? This is mental gymnastics that has taken years of thinking, calculating and testing/listening. Since this is my hobby, I haven't been the most systematic, because that would take the fun out of it. What's the point of a hobby if you don't enjoy doing it? So I don't have a neat, well-written PDF to show you. All I have is messy calculations all over, messy xls files of simulations and calculations, Audacity Nyquist plugins for testing ideas and processing tracks of music, DIY crossfeeders, listening experiences, etc. The only organised thing I have is the knowledge and understanding of the issue, and that's what I am sharing here. If all of this is worthless to you because it doesn't count as "research", then I don't have anything to offer you. Sorry.



pinnahertz said:


> Actually, good research would NOT educate them at all.  You'd simply present two choices for each of several music selections, one cross-fed, one not, and allow as much time as you like to get a preference.  It's actually better research if they are not educated.



Why educate people about the dangers of smoking? Let them smoke so researchers can monitor how fast they get lung cancer and die. The research was done long ago. We know how spatial hearing works. It's time to apply the knowledge and educate people about spatial distortion.



pinnahertz said:


> This line of reasoning is ridiculous.  People are not forced to buy blindly.   We live in a world where more consumers are better educated every day because of on-line shopping and customer reviews.  If you marketed a cross-feeder and sold it on Amazon you'd get at least some statistics, though somewhat biased.  You have none, and that has nothing whatever to do with global capitalism.



We are not as free as you think we are. The fact that you prefer the Who's Next album without crossfeed indicates you suffer from spatial deafness. You are probably blind to the problems of global capitalism too. 



pinnahertz said:


> That's funny.  Math is provable, and the correct answer is always preferable.  You have such an incredibly high opinion of your cross-feed!  It's nothing like math.  We all need at least some math every day, and some need a lot.  There's boundless proof of that.  Thus, math should be taught in school.  Not very hard to understand.  Cross-feed?  I don't know if anyone other than you and a couple of others like it.  I know I like it on an occasional recording, not on most.  That alone casts some doubt on your "cross-feed everything all the time" rule.



High opinion compared to no crossfeed. Crossfeed doesn't give you the best possible outcome imaginable, but for most recordings it means a clear improvement compared to not using it. DIY crossfeeders cost like 10-50 bucks to build depending on the level of sophistication, and for that money you get such a large improvement that it's hard to imagine a similar improvement elsewhere in audio for the money. Or you can use software crossfeeders for free or at very little cost. Plus, the use of crossfeed as a concept is supported by the scientific knowledge of spatial hearing. Even if every single recording from now on were produced for headphones (free of spatial distortion), 98 % of everything in stereo so far is plagued with spatial distortion to some degree.

*jasonb:* _"I must say that ever since first using it, I now have to use it. Music through headphones now without it just sounds strange to me."_
*revolink24:* _"I do use crossfeed. It makes the music sound more natural to my ears, and I always strive for natural sound."_
*aimlink:* _"I currently always use the crossfeed option on my Head UltraDesktop Amp."_
*xnor:* _"I also use crossfeed, cmoy's implementation that is (though software based)."_
*p a t r i c k:* _"I have become a huge fan of cross-feed since I got my Meier-Audio StageDAC"_
*EddieE:* _"I voted yes, but it depends on the crossfeed of course."_
*Uncle Erik:* _"Yes, I like Dr. Meier's crossfeed implementation quite well."_
*GreatDane:* _"I've used crossfeed with portable amps from Xin, Meier, HeadRoom & Practical Devices. I currently only have the XM5 and use it 50% with Westone 3. I did have a Corda Cross at one time. I regret selling that now."_


----------



## 71 dB

ironmine said:


> I agree with 71 dB. Once I found out the crossfeed technology (more than 10 years ago) and started using it, there is no way back.



That's nice to hear, and it's the statement of someone who cares more about natural, enjoyable music than about how we have learned to listen to headphones. 



ironmine said:


> I have tried many crossfeed plugins (basically, almost all of them out there, except two or three), both Foobar components and VST plugins, I have a bunch of presets for them and can invoke them for instant comparison.  Meier, Isone, Redline Monitor.  Before listening to an album critically, I try briefly saved presets to see which one sounds better with this particular album.  I also try no crossfeed.  The variant "no crossfeed" is never a winner. And never even the 2nd place. Actually, it's always the worst and the most horrible sounding.



Well, I do have some recordings (maybe 2 %) that sound best as they are, and I do listen to those without crossfeed, but almost always I want crossfeed and use it. I don't listen to music from the computer much for many reasons, so I don't use software crossfeeders much (I have the Vox player with its three crossfeeders).



ironmine said:


> The crossfeed technology is a major breakthrough in headphone listening. It's a revolution in headphone listening experience. But most audiophiles are Know Nothings, they are literally the most ignorant people on Earth when it comes to the understanding of how sound works. This is the good description of what a typical audiophile is: http://www.kenrockwell.com/audio/audiophile.htm



Yeah. I'm surprised by people who are serious about sound quality and have tried crossfeed for a long time but are still more or less against it and prefer no crossfeed. Why are these people unable to take the step crossfeed allows and enter the world of less or no spatial distortion? Reading your message is like reading my own writings, and it makes me feel more relaxed after all the pressure under pinnahertz' spatial distortion fandom. So, thanks!



ironmine said:


> 71 dB, which crossfeed do you use?



I mostly use two DIY headphone adapters with crossfeed connected to my AV amp's B-speaker terminals. One is based on the Linkwitz-Cmoy design (yep, in my avatar), but has 6 different crossfeed levels from -10 dB to -1 dB. It can also reduce channel separation at high frequencies, and it has an "almost mono"/mono switch (surprisingly helpful).

The other one is a modification of the Linkwitz-Cmoy with one fixed crossfeed level of -3 dB, but with the cutoff frequency dropped to about 300 Hz so that the phase shift rises to about 640 µs, creating a wide but "flat" soundstage. This kind of "widefeeder" is good for those who think crossfeed makes the sound too narrow. Not this one! The idea is that it produces the widest possible sound for headphones without spatial distortion, and it kind of simulates a multichannel speaker system rather than a stereo speaker system. To compensate for the low cutoff frequency, the crossfeeder has an (adjustable) "floor level" of flat channel mixing at -26 dB or -15 dB, which gives enough crossfeed above 300 Hz.

I take turns between these DIY crossfeeders, and the recording determines which one sounds better. Having a headphone adapter removes the need for a headphone amp. The 6-level model was more expensive to build, about 50 bucks. The other one was maybe half of that. Also, it doesn't matter what my sound source is, computer, CD player, Blu-ray player, TV, radio, … I can use the same crossfeeders for all of them!
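For readers without a soldering iron, the general shape of a Linkwitz/Cmoy-style crossfeed (a lowpass-filtered, attenuated copy of the opposite channel mixed into each side) can be sketched in a few lines of software. This is a loose illustration, not the circuit described above: the cutoff and level defaults are arbitrary, and the "floor level" mixing and 6-level switching from the post are omitted:

```python
import math

def crossfeed(left, right, fs=44100, cutoff_hz=700.0, level_db=-6.0):
    """Mix a one-pole lowpass-filtered, attenuated copy of the opposite
    channel into each channel (Linkwitz/Cmoy-style in spirit)."""
    gain = 10.0 ** (level_db / 20.0)
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)  # one-pole coefficient
    out_l, out_r = [], []
    lp_l = lp_r = 0.0  # lowpass filter state for each channel
    for l, r in zip(left, right):
        lp_l = (1.0 - a) * l + a * lp_l
        lp_r = (1.0 - a) * r + a * lp_r
        out_l.append(l + gain * lp_r)
        out_r.append(r + gain * lp_l)
    return out_l, out_r

# A hard-panned left signal now leaks into the right channel at a
# reduced, lowpass-filtered level instead of being entirely absent.
out_l, out_r = crossfeed([1.0] * 100, [0.0] * 100)
```

The lowpass filter also introduces a frequency-dependent phase lag on the crossfed path, which is where the interaural-delay behaviour of analog crossfeeders like these comes from.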

A picture and schematics of my 6 crossfeeder level DIY headphone adapter:


----------



## gregorio

71 dB said:


> Don't think about what effects are used in music. Our hearing doesn't detect reverb plugins. It detects ILD/ITD + spectral cues in time. Only care about what kind of ILD/ITD -information reaches your ears. Is it within "human hearing space" or not?



This is patently incorrect. If it were correct, reverb would be undetectable in mono (single speaker placed centrally), where there is effectively little or no ILD/ITD, which is clearly NOT the case. I'm not sure what you mean by "human hearing space" so I cannot respond to your question.



71 dB said:


> Our everyday sound world is more monophonic than people realise, closer to mono than ping pong stereo.



Again, no it is not. Our everyday world is NEVER monophonic, unless your "everyday world" is living in an anechoic chamber! 



71 dB said:


> [1] What a crossfeeder does to sound is what our hearing expects sounds coming to our ear having. [2] That's why the "messing-up" is beneficial. That's why people don't say the stereo image with speakers is messed up because your left ear hears right speaker and vice versa. [2a] Spatial information in "stereo space" MUST BE messed with to map it into "human hearing space". [3] You can live your life without a crossfeeder, but that makes you spatially ignorant.



1. Again, NO it is not! Our hearing does not expect just the simple crossfeeding of the sound, it would only expect that if normal, everyday life were living in an anechoic chamber. An anechoic chamber is in fact so alien to what human hearing "expects" that the first time inside one can be a very strange experience for many people, as the brain cannot accept the reality of what it's hearing and can start to play all kinds of weird tricks to make sense of it, even to the point of causing severe nausea in some people. What the brain actually expects is not just crossfeed but acoustic reflections. This is where it gets complicated because we have reflections in the recording itself and reflections of those reflections from the listening environment. 
2. That all depends on the messing-up! You appear to have a typical audiophile black-and-white approach, much the same as the "all distortion is bad" opinion. In fact, distortion is not "bad"; much/most distortion is not only "good" but absolutely essential, depending on the type of distortion, the amount and its context! Same with "messing-up", it ALL depends on how it's messed-up! What we hear in, say, the left ear from the right speaker is not just the right speaker signal delayed by some small amount (you quoted 250 µs) to compensate for ITD; what we hear (and absolutely expect to hear!) in our left ear is the reflections off the left wall of the room from the right speaker output, which is a delay in the several-milliseconds range, plus a frequency colouration of those reflections (which is completely unrelated to any head transfer functions). Messing-up with listening room reflections is beneficial because the recording has been created in an environment with room reflections in the first place, the delays and reverbs chosen/programmed according to how they interact with those room reflections. "Messing-up" the recording playback using ONLY a 250 µs delayed crossfeed is an entirely different sort of messing-up! It's just as likely, if not MORE likely, to be the opposite of beneficial. There CANNOT be a black or white answer to this because it ENTIRELY depends on what reverbs/delays have been employed in a mix and how they have been employed.
2a. True but, HOW that spatial information is messed-up is vital, not just any messing-up will do!

3. The argument can be made for exactly the opposite to your statement! Yes, in some cases crossfeeding might provide an improvement, particularly with early stereo mixes which often contained very rudimentary stereo information but you are stating that nearly all recordings benefit from crossfeed and the most likely, logical explanation for such a statement is that you are relatively insensitive to some of the spatial information on recordings and are therefore not bothered by the inappropriate "messing-up" which often occurs with crossfeeding. If this is the case, that would make you the one who is "spatially ignorant"!

I know your answer to the above, something along the lines of: "With no crossfeed you still haven't got any room interaction, but at least with crossfeed you've got one of the elements the brain expects." True, and in some cases that's enough for crossfeeding to work reasonably well, but in other cases it can do more harm than good. In some cases, taking away that room interaction does not have much of a detrimental effect on the spatial information; it all depends on the amount of reverb applied and the parameters of that reverb, such as diffusion, stereo spread, stereo width and the timing, positioning and relative balance of the early reflections. Almost all reverb algorithms already contain a fair amount of crossfeed to start with, although it's user configurable! Secondly, almost all mix engineers will check their mix using headphones, depending on the target media. Obviously the mix doesn't work exactly as we intend on HPs, and while we can't do much about the left/right positioning of the dry source sounds/instruments without compromising the reproduction on speakers, there are typically tweaks we can apply to the spatial information (reverbs, delays, compression, EQ, etc.) which may have little impact on the speaker reproduction but significantly improve the HP reproduction. Again, how much we can do depends on the mix in the first place, the types and amounts of spatial information employed and, of course, the time/effort dedicated to making such adjustments, which can vary from almost none at all to a considerable amount. Obviously, anything above "none at all" and you're going to damage/destroy it with your crossfeeding!



71 dB said:


> It helps that crossfeeders simulate what happens in reality so the decoding is easy for our brain, much much easier than decoding the un-crossfed signals with excessive spatial information.



This is so typical of many audiophiles; take a perception/preference and either just invent some complete nonsense to explain why it's the truth/real or, take some actual facts but ignore other vital facts to arrive at a more plausible/factual explanation which due to the omitted facts is still nonsense! This quoted statement falls into the latter category. Crossfeeding does indeed "simulate what happens in reality", the reality it simulates is what would happen if we listened to a recording on a stereo speaker system in an anechoic chamber and typically, nothing is more unreal or difficult for "our brain to decode" than listening in an anechoic chamber. This is the complete opposite of your quoted statement!!! AGAIN, in practice it all depends on the recording, our personal sensitivity to all the various types of spatial information and our personal preferences. Some audiophiles for example prefer and actively seek out the most "excessive spatial information" they can find, regardless of the meaning of the word "fidelity". 

I've no objection to your preference for crossfeed; what I object to is you taking what is simply your personal preference and trying to redefine it as objective fact which applies to everyone, and claiming that I and everyone else who does not share your preference are "spatially ignorant". In all likelihood, I'm far less spatially ignorant than you, because for the last 25 years creating and manipulating spatial information has been a significant part of what I do for a living. Additionally, your objective facts are not objective, they're clearly subjective, and last but not least, you're completely ignoring/omitting other, vital facts! You have a strongly held preference and have developed a fairly elaborate explanation to turn that preference into an absolute factual belief, so there's virtually no chance I'm going to have even the tiniest influence on your views. This post is mainly aimed at others, who might find some of what I've said interesting or thought provoking.  

G


----------



## WoodyLuvr

Strangelove424 said:


> Recently, I found a digital simulation of the Meier crossfeed for Foobar, and have been enjoying it.
> 
> http://www.foobar2000.org/components/view/foo_dsp_meiercf
> 
> I'm not much of a crossfeed fan, but this one is very good, subtle but effective when not overdone. I am listening to Jimi Hendrix experience now and it's making a big difference. I don't know if I could enjoy this album on headphones otherwise.





Strangelove424 said:


> +1 I'm big into DSP right not. I loaded Foobar up with DSPs and am going to town on them. I've never felt so empowered to customize my listening experience. Some people do tube rolling, I do plugin rolling. And things have never sounded so good. That Jimi Hendrix album I mentioned above, I was using dynamic EQ for treble spikes, parametric EQ for tone adjustment, slickEQ for saturation/tube sound, and Meier crossfeed to stop the ping pong. lol. The neutrality folks would probably freak out hearing that, but man did it sound gooood. If I was using speakers, I'd go for stereo->5.1 DSP too. I'm not exactly sure what Jimi Hendrix intended, but I assume he wanted me to enjoy his music, and that I did.





Strangelove424 said:


> I found a simulation of Meier "natural" crossfeed filter for foobar that sounds very good to me. That will work for Windows, not sure what to tell you about Android.



Good day sir.  I too have recently fallen into DSP/VST/component rolling and am having a blast!

What setting are you using on your Case's Meier Crossfeed component?  I am staying around the 15 to 18 level with my Nhoord Audio Red v2s, listening to predominantly ambient electronic or ambient classical.





Currently I have my DSP chain as follows:


----------



## 71 dB (Dec 6, 2017)

gregorio said:


> This is patently incorrect. If it were correct, reverb would be undetectable in mono (single speaker placed centrally), where there is effectively little or no ILD/ITD, which is clearly NOT the case. I'm not sure what you mean by "human hearing space" so I cannot respond to your question.



Reverberation is more than ILD/ITD, so maybe we misunderstood each other.



gregorio said:


> Again, no it is not. Our everyday world is NEVER monophonic, unless your "everyday world" is living in an anechoic chamber!


I didn't mean monophonic, but _closer to_ monophonic. Sorry for being unclear.



gregorio said:


> 1. Again, NO it is not! Our hearing does not expect just the simple crossfeeding of the sound, it would only expect that if normal, everyday life were living in an anechoic chamber. An anechoic chamber is in fact so alien to what human hearing "expects" that the first time inside one can be a very strange experience for many people, as the brain cannot accept the reality of what it's hearing and can start to play all kinds of weird tricks to make sense of it, even to the point of causing severe nausea in some people. What the brain actually expects is not just crossfeed but acoustic reflections. This is where it gets complicated because we have reflections in the recording itself and reflections of those reflections from the listening environment.
> 2. That all depends on the messing-up! You appear to have a typical audiophile black and white approach, much the same as the "all distortion is bad" opinion. In fact, distortion is not "bad", much/most distortion is not only "good" but absolutely essential, it depends on the type of distortion, the amount and it's context! Same with "messing-up", it ALL depends on how it's messed-up! What we hear in say the left ear from the right speaker is not just the right speaker signal delayed by some small amount (you quoted 250um) to compensate for ITD, what we hear (and absolutely expect to hear!) in our left ear is the reflections off the left wall of the room from the right speaker output, which is a delay in the several milliseconds range, plus a freq colouration of those reflections (which is completely unrelated to any head transfer functions). Messing-up with listening room reflections is beneficial because the recording has been created in an environment with room reflections in the first place, the delays and reverbs chosen/programmed according to how they interact with those room reflections. "Messing-up" the recording playback using ONLY a 250um delayed crossfeed is an entirely different sort of messing-up! It's just as likely, if not MORE likely to be the opposite of beneficial. There CANNOT be a black or white answer to this because it ENTIRELY depends on what reverbs/delays have been employed in a mix and how they have been employed.
> 2a. True but, HOW that spatial information is messed-up is vital, not just any messing-up will do!
> 
> 3. The argument can be made for exactly the opposite to your statement! Yes, in some cases crossfeeding might provide an improvement, particularly with early stereo mixes which often contained very rudimentary stereo information but you are stating that nearly all recordings benefit from crossfeed and the most likely, logical explanation for such a statement is that you are relatively insensitive to some of the spatial information on recordings and are therefore not bothered by the inappropriate "messing-up" which often occurs with crossfeeding. If this is the case, that would make you the one who is "spatially ignorant"!


1.


----------



## gregorio

@71 dB, I'm quite surprised at such a personal attack and rather than respond in kind, I'll let a mod take care of it. It's a shame that your extremism apparently results in you not properly reading what I posted and just arguing, even against your own quotes! Pretty much all recordings have spatial distortion; the only exception would possibly be binaural recordings, but they have their issues. Spatial distortion is not masking the mix, it's effectively a deliberate part of the mix, which you are then messing with! I also notice you completely ignored the fact that mixes are commonly somewhat modified for non-crossfeed HPs, and therefore crossfeeding would be destroying those modifications; and when a mix is not modified for headphones, it's very possibly because the artists and engineers want it to sound that way, or at least don't object to the way it sounds in headphones. Additionally, depending on what I'm mixing, yes, I often listen to music without reverberation, or rather, I listen to it with only the "spatial distortion" of my studio, as do most mix engineers. What do you think we start with when we start mixing? You think maybe we receive a recording with perfect reverberation and then spend hours adding "spatial distortion" to wreck it? And you call me delusional!

I'm in danger of going round in circles now, so I'll leave it there, except for one last point: You of course have absolutely no right whatsoever to object to my non-crossfeed preferences/attitude, this is not a fascist state with you as the dictator! I do not object to your preferences/attitude, except when that attitude attempts to dictate what my preference should be or attempts to state as objective fact what is clearly no more than a personal subjective opinion!

G


----------



## 71 dB

gregorio said:


> @71 dB, I'm quite surprised at such a personal attack and rather than respond in kind, I'll let a mod take care of it. It's a shame that your extremism apparently results in you not properly reading what I posted and just arguing, even against your own quotes! Pretty much all recordings have spacial distortion, the only exception would possibly be binaural recordings but they have their issues. Spacial distortion is not masking the mix, it's effectively a deliberate part of the mix, which you are then messing with! I also notice you completely ignored the fact that mixes are commonly somewhat modified for non-crossfeed HPs and therefore crossfeeding would be destroying those modifications and when a mix is not modified for headphones it's very possibly because the artists and engineers want it to sound that way or at least don't object to sounding the way it does in headphones. Additionally, depending on what I'm mixing, yes, I often listen to music without reverberation, or rather, I listen to it with only the "spacial distortion" of my studio, as do most mix engineers, what do you think we start with when we start mixing? You think maybe we receive a recording with perfect reverberation and we then spend hours adding "spatial distortion" to wreck it? And you call me delusional!
> 
> I'm in danger of going round in circles now, so I'll leave it there, except for one last point: You of course have absolutely no right whatsoever to object to my non-crossfeed preferences/attitude, this is not a fascist state with you as the dictator! I do not object to your preferences/attitude, except when that attitude attempts to dictate what my preference should be or attempts to state as objective fact what is clearly no more than a personal subjective opinion!
> 
> G


I'm sorry if I made you feel attacked, gregorio. That was not my intent in any way. I have read great posts by you on this board, and I believe we share the belief that 16/44.1 is all we need in consumer audio, for example.

It's true that modern mixes are "modified" for non-crossfeed HPs, but I still feel some tiny spatial distortion in them and prefer to take care of it with weak crossfeed. Most recordings have spatial distortion, but perhaps less than 20 % of them have very strong spatial distortion. For a lot of recordings it is mild, and perhaps I haven't brought that up enough.

I feel so tired and sick today and I don't know if I have made any sense here. I try, try and try and it never works. Why was I born? What is my place? Where do I belong? I'm so confused about life. Crossfeed gives some sense of control and purpose. That's why it hurts me so much when my beliefs are challenged.


----------



## castleofargh

@71 dB how about you change the end of your previous post https://www.head-fi.org/threads/to-...-is-the-question.518925/page-22#post-13897143, and @gregorio agrees to go easy on you about who's got the longest and stiffest spatial awareness from now on?


----------



## 71 dB

castleofargh said:


> @71 dB how about you change the end of your previous post https://www.head-fi.org/threads/to-...-is-the-question.518925/page-22#post-13897143, and @gregorio agrees to go easy on you about who's got the longest and stiffest spatial awareness from now on?



I did modify and censor my post so that it should be much less offensive. Sorry again. I'm a loser. I shouldn't be so sensitive to posts that start "No! Wrong." I should believe in myself and respond in a calm and respectful manner. People don't need to go easy on me. I need to be ready for the challenge.


----------



## Strangelove424 (Dec 6, 2017)

WoodyLuvr said:


> Good day sir.  I too have recently fell into DSP/VST/Components rolling and having a blast!
> 
> What setting are you using on your Case's Meier Crossfeed component?  I am staying around the 15 to 18 level with my Nhoord Audio Red v2s listening to predominately ambient electronic or ambient classical.
> 
> ...



Good day to you as well, sir! Glad to hear other people are getting excited about the DSP/VST rolling possibilities!  

This is my current headphone setup (for DT880s):

I don’t always use every plugin; SlickEQ isn't a common fixture, but I use it a lot more than I thought I would when I originally downloaded it. It’s a ‘saturation’ plugin (I have decided to stop saying “tube simulation” plugin and start saying “saturation” plugin in order to bring more positive connotations to using them). Anyway, it sounds great with older rock or jazz, and beats any tube amp in the world because it doesn’t smudge details: you get all the detail of an SS amp with just a hint of smoothness, and can EQ treble as need be. I have become a fan. I see you use something called TAL Tube. Saturation plugin? How is that in your experience? You can post a review here to let other people know about it (me included): https://www.head-fi.org/threads/can...ated-via-plugins.657769/page-11#post-13706466

I’ll get to your original question about Case’s Meier Crossfeed simulation finally. I use a general setting of 10, but will go as high as 15-20 for older recordings that sound extremely separated to me, typically early stereo era mastering (the Hendrix album being a great example). Sometimes I go as low as 7 for genres that I think benefit from a strong sense of channel separation, like electronic or hip hop. For that kind of music, I like to use a setting that’s low enough not to notice, just to reduce fatigue.

The only DSP presets I have that remain constant are EQ. I use Xnor’s Dynamic EQ to double de-peak the DT880s and then the GEQ7 equalizer to shape tone. GEQ7 has a permanent 2 dB boost in the mids, and sometimes for fun (ok, most of the time for fun) I also bump up the mid-bass for some thump too.

When I listen on speakers, I only use one DSP (except for receiver EQ, which I’ll ignore here). That DSP is Channel Mixer.

Channel Mixer is a wonderful plugin. I personally like it better than Pro Logic because it gives so much control. There’s no individual channel mixing or delay control for Dolby. That makes it hard to customize for individual systems.

I’m always glad to discuss DSP. I feel like the guy sitting in the back of the bar with the odd looking Hawaiian shirt on, and somebody finally asked me “where’d you get that awesome shirt from?”  “I’ll tell you all about it!”

Discovering all these plugins has really changed the way I listen to music, and added a new plateau of enjoyment for me.


----------



## pinnahertz

71 dB said:


> I feel so tired and sick today and I don't know if I have made any sense here. I try, try and try and it never works. Why was I born? What is my place? Where do I belong? I'm so confused about life. Crossfeed gives some sense of control and purpose. That's why it hurts me so much when my beliefs are challenged.


It's not your beliefs that are being challenged, it's your attempt to impose them on others that is being challenged and objected to.  It's your presenting your beliefs and opinions as fact, and mandatory for all headphone listening.  It's that anyone who doesn't agree with you is deemed inferior to you.  It's the quoting of your own made up statistics as proof of your opinion.  It's the use of your own made-up terminology to describe a perceived problem that you believe is universal, yet there's no attempt to even acknowledge that others may not agree it's a problem or perceive it's a problem (unless, of course, they are inferior to you). 

Your beliefs are not offensive. Your forcing of opinion on others, and labeling them as "wrong" if they don't accept your opinion, is _highly_ offensive. It's your lack of respect for anyone else with a different opinion, even though they may well be in possession of experience and knowledge that you don't have.

Arrogance and lack of humility will get your opinions challenged every time.  There's a big difference between explaining a belief, sharing an opinion, and telling everyone your way is the only right way and everyone else is wrong.

And this:


71 dB said:


> I feel so tired and sick today and I don't know if I have made any sense here. I try, try and try and it never works. Why was I born? What is my place? Where do I belong? I'm so confused about life. Crossfeed gives some sense of control and purpose. That's why it hurts me so much when my beliefs are challenged.


Seriously? This is just an audio forum; in the end it means very little, perhaps slightly more than the paper it's printed on (and there's no paper involved). Any of us should be able to walk away easily and not look back... something several of us do occasionally to reset from the craziness. I could list 100 things that are more important in life than any online forum. This forum will not answer any of the questions above, nor does anything posted here comment on them. It's not about the meaning of life; it's about the tech, knowledge, sharing info and the exchange of ideas. But it's also not about making rules and telling those with a dissenting voice that they are deaf if they don't agree.


----------



## bigshot

I think it's a fact that signal processing can improve the sound of many recordings. The specific form of signal processing that works best varies from recording to recording, but it's pretty clear that having a toolbox full of DSPs is a very good idea.


----------



## WoodyLuvr (Dec 7, 2017)

Strangelove424 said:


> Good day to you as well, sir! Glad to hear other people are getting excited about the DSP/VST rolling possibilities!
> 
> This is my current headphone setup (for DT880s):
> 
> ...


@Strangelove424  Appreciate the feedback, especially the information regarding the Meier Crossfeed setting. I will try lowering it to 7-8 again and see how that compares to my current 15-18 setting with my electronic tracks. I too like using the crossfeed feature mainly to reduce fatigue, and I sincerely believe it is working very well in that regard.

I used to use the KA Golden Equaliser GEQ-7 with my B&O H6 headphones; maybe I should re-install it. However, I have had good luck so far with the 31-Band Graphic Equalizer component for Foobar2k with my Nhoords.

As requested I added a post to that thread you linked regarding the TAL-Tube saturation plugin I have been using.

How does my DSP chain look?  Curious to hear if I have my components/plugins in the correct/logical order.


----------



## 71 dB

How does the Meier Crossfeed plugin setting translate into actual crossfeed level? What does "10" mean? Actual crossfeed level is negative, weak crossfeed being for example -11 dB, moderate something like -7 dB and very strong -1 dB. Where does "7", "10" or "20" fall on the actual scale?


----------



## WoodyLuvr

71 dB said:


> How does the Meier Crossfeed plugin setting translate into actual crossfeed level? What does "10" mean? Actual crossfeed level is negative, weak crossfeed being for example -11 dB, moderate something like -7 dB and very strong -1 dB. Where does "7", "10" or "20" fall on the actual scale?


Sorry, no idea, but I do know that the plugin originally had a setting from 1-10 and was then updated by its creator "Case" to have settings from 0-100.


----------



## gregorio (Dec 7, 2017)

71 dB said:


> Most recordings have spatial distortion, but perhaps less than 20 % of them have very strong spatial distortion. For a lot of recordings it is mild and perhaps I haven't brought that up enough.



I really don't understand exactly what you mean by "spatial distortion". On all popular music genres (rock, pop, metal, electronic genres, etc.) virtually all the spatial information is artificial, and not an artificial recreation of some spatial reality but an application of reverb without even any attempt at or concern for reality! The only concern when mixing and applying reverb to these genres is whether the result sounds subjectively good, not whether it sounds real or has any relationship to reality. In practice, pretty much all popular music mixes end up being a complete mish-mash of spatial information, simultaneously using several completely different reverbs; maybe a small-room type reverb on one instrument, a medium chamber on another, a plate on another, an arena-type left/right slapback echo/delay on the lead guitar, and all this at the same time, probably with all of them individually EQ'ed and processed so that even individually none of those reverbs would sound quite like a real acoustic space. And this isn't a new trend; popular genres have been mixed this way since the 1960's, all modern technology provides is a relatively cheap and almost unlimited variety of reverbs and reverb parameters. So, "spatial distortion" relative to what? Spatial distortion is pretty much the ONLY spatial information on EVERY commercial popular genre recording for at least 40 years, if not 50 or more. So your idea of strong or not so strong spatial distortion is simply your personal perception of what is ALWAYS effectively 100% spatial distortion! Even with acoustic genres, such as classical music recordings, we virtually never have spatial information which could ever be experienced in real life. Starting in the 1950's, setups such as the Decca Tree became one of the preferred methods for recording an orchestra: 3 semi omni-directional mics arranged in a triangle pattern with sides of about 1 1/2 m (5 feet), placed roughly 3 meters above the conductor's position.
You do not have 3 ears, they are not 5' apart and they're not 10' above the conductor! As time went on, outrigger mics and room mics were added to the typical setup, and then later still, spot mics targeting individual instruments or small groups of instruments in the ensemble were added and mixed together with the Decca Tree or other mic array. You do not have 20 or more ears which are simultaneously placed all over the recording venue. Unlike with popular music, the mixing of classical music does take at least some account of reality, or rather the perception of real/natural spatial information, but in practice what's really on the recording is again a complete mish-mash of spatial information! The audiophile belief of "natural" and/or "real" is just an illusion, a deliberate and typically carefully crafted illusion, but an illusion nevertheless; the only spatial information which is there in reality is always distorted! While it's flattering that our deliberate illusions are perceived as real or natural, it can also be rather disconcerting, and we don't see this in other artistic fields. For example, in the film world, the equivalent of audiophiles are called "film aficionados" and they too have their own terminology and some pretty strange and erroneous beliefs about what really happens in filmmaking, but unlike audiophiles, they realise films are not natural or real, that it's all a manufactured illusion.



71 dB said:


> Crossfeed gives some sense of control and purpose. That's why it hurts me so much when my beliefs are challenged.



Yes, I completely understand that to some it's a very important hobby, possibly even, in some cases, the only bright part of an otherwise depressing life. For me, it's somewhat different: it's not been an escape from or an addition to my real life; music and music/sound engineering is my real life, it has completely defined my entire life since my mid teens and I'm now in my early fifties. It's provided me with $500 a month for working 16 hours a day, seven days a week, and for several years close to a million dollars a year in income. It's provided me with amazing high points, amazing low points and some great experiences: winning the world youth music festival in Vienna in 1982, performing in the Royal Albert Hall for the first time, playing The Rite of Spring with the Royal Opera/Ballet, controlling the sound for a full house at the Hollywood Bowl, being a BAFTA finalist, recording and working with some great stars, arguing with Malcolm McLaren about music history and being backed up by Brian May, threatening to call security on Dave Gilmour unless he put his guitar down, putting a $4m violin on the floor in a corridor without its case, being complimented by Freddie Mercury, listening to Elton John trying to convince Robbie Williams he was gay but just didn't realise it (the funniest conversation I ever heard) and holding a friend's head after a gig while he died from an overdose. Just a few of the numerous memorable (and repeatable) experiences which have made up my rather colourful life, and although in some respects of the industry I'm a rather jaded cynic, in other respects I have at least as much determination and passion as I've ever had, and there have been one or two occasions when that passion was the only thing which stopped me doing something terminally stupid. So, I understand more than most about the importance you're talking about!
On the other hand, it's important to be open to facts we may not have fully appreciated previously and have a decent grasp of the difference between subjective preference and objective fact, something which can be difficult to achieve with music/sound recording and reproduction because it lies at the crossroads of art and science/technology and is routinely deliberately confused as a marketing tactic.

G


----------



## 71 dB

WoodyLuvr said:


> Sorry, no idea, but I do know that the plugin originally had a setting from 1-10 and was then updated by its creator "Case" to have settings from 0-100.



Well, if you generate for example a 20 Hz tone playing in the left channel only, crossfeed it with the plugin at levels 0, 20, 40, 60, 80 and 100, and record the result, it should be possible to analyse what the relation is.


----------



## 71 dB

gregorio said:


> I really don't understand exactly what you mean by "spatial distortion". On all popular music genres (rock, pop, metal, electronic genres, etc.) virtually all the spatial information is artificial, and not an artificial recreation of some spatial reality but an application of reverb without even any attempt at or concern for reality!
> 
> ...


Let's say you have 12 dB of ILD at bass on your recording. If we look at measured HRTFs, we see that such an ILD can only occur if the sound source is pretty close to your ears. It seems improbable that the producers of the record intended that: a kick drum a foot from your head. How does a kick drum sound if you listen to it that close? It sounds insanely loud, but on the recording it isn't that loud compared to the other sounds. If the ILD goes to 20 dB, the situation becomes pretty much impossible, because the kick drum would have to be entering your ear to create such a massive ILD, but kick drums don't fit into our ear canals! They are roughly 100 times too large for that, and if you had a miniature kick drum that did fit your ear canal, the spectrum of the sound it produces would be scaled a hundredfold up into treble/bat frequencies. Kick drums are expected to be located at some distance, meaning ILD at bass is limited to a few decibels. When you listen to speakers, all of this is automatically taken care of by acoustic crossfeed. Headphones don't have acoustic crossfeed apart from the little leak of open models, so you get what I call spatial distortion: too-large ILD levels. My opinion, not forcing it on others...
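As a back-of-envelope check of the figures above, bass-band ILD can be estimated by lowpass-filtering both channels and comparing their RMS levels. This is only a rough sketch: the one-pole filter, the 200 Hz cutoff and the toy signal are illustrative assumptions, not taken from any real mix:

```python
import math

def one_pole_lowpass(x, cutoff_hz, fs):
    """Crude first-order lowpass, used here to isolate the bass band."""
    a = math.exp(-2 * math.pi * cutoff_hz / fs)
    out, state = [], 0.0
    for s in x:
        state = (1 - a) * s + a * state
        out.append(state)
    return out

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def bass_ild_db(left, right, fs, cutoff_hz=200):
    """Interaural level difference below ~cutoff_hz, in dB
    (positive = left channel louder)."""
    lo_l = one_pole_lowpass(left, cutoff_hz, fs)
    lo_r = one_pole_lowpass(right, cutoff_hz, fs)
    return 20.0 * math.log10(rms(lo_l) / rms(lo_r))

# Toy example: a 60 Hz "kick" panned hard left at 4x the right
# channel's amplitude shows the ~12 dB of bass ILD discussed above.
fs = 48000
kick = [math.sin(2 * math.pi * 60 * n / fs) for n in range(fs)]
left = kick
right = [0.25 * s for s in kick]
print(round(bass_ild_db(left, right, fs), 1))  # prints 12.0
```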

Yes, but our brain still has to make sense of it. Artificial becomes easier if you have cues from reality, such as reasonable ILD. My ears don't like large ILD below 1 kHz, and scientific knowledge of human hearing does give explanations why (kick drums don't fit into ear canals). But that's just me. I don't know how your ears work.

I see it differently. To me, with speakers, acoustic crossfeed tames the excessive ILD levels, creating ILD values that make sense. Crossfeed with headphones does a similar (but not identical) thing. A hard-panned instrument becomes a spatially panned instrument with a reasonable combination of ILD and ITD information. That works great for me.

I don't get why acoustic crossfeed is fine, but electric crossfeed isn't. Both reduce excessive ILD. 

That is exactly why I need crossfeed: to transform "Decca Tree information" into something my ears and head understand. If they use, say, a Jecklin disk, then the spatial information is already compatible with my spatial hearing (spatial-distortion free) and I most probably listen to the recording with crossfeed off.

I don't get why we shouldn't make spatial information less distorted. If a recording was clearly made for speakers (having acoustic crossfeed) and clearly sounds horrible with headphones without crossfeed, then to me it's a no-brainer to use crossfeed, but that's just me… …it makes sense to me but apparently not to everybody. Maybe I just value my enjoyment of the music more than worshipping the hard-panned fetishes of music producers 50 years ago. Yeah, could be that.
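For concreteness, the kind of crossfeed being discussed — an attenuated, lowpass-filtered copy of the opposite channel mixed into each side, much like the DSP Manager description quoted at the start of the thread — can be sketched in a few lines. The -12 dB level and 700 Hz cutoff are illustrative choices, and a real design would normally also add a small interaural delay (a fraction of a millisecond):

```python
import math

def lowpass(x, cutoff_hz, fs):
    """First-order lowpass that band-limits the bled signal."""
    a = math.exp(-2 * math.pi * cutoff_hz / fs)
    out, state = [], 0.0
    for s in x:
        state = (1 - a) * s + a * state
        out.append(state)
    return out

def crossfeed(left, right, fs, level_db=-12.0, cutoff_hz=700):
    """Mix an attenuated, lowpass-filtered copy of the opposite
    channel into each channel, so mainly bass/mid ILD is reduced.
    Illustrative sketch, not any particular commercial plugin."""
    g = 10.0 ** (level_db / 20.0)
    bleed_r = lowpass(right, cutoff_hz, fs)
    bleed_l = lowpass(left, cutoff_hz, fs)
    out_l = [l + g * b for l, b in zip(left, bleed_r)]
    out_r = [r + g * b for r, b in zip(right, bleed_l)]
    return out_l, out_r
```

Feeding a hard-panned bass tone through this leaves the direct channel essentially untouched while the opposite channel receives it at roughly -12 dB, which is the ILD-taming effect described above.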


----------



## 71 dB

gregorio said:


> Yes, I completely understand that to some it's a very important hobby, possibly even, in some cases, the only bright part of an otherwise depressing life. For me, it's somewhat different: it's not been an escape from or an addition to my real life; music and music/sound engineering is my real life, it has completely defined my entire life since my mid teens and I'm now in my early fifties.
> 
> ...


My life is the opposite of yours: 25,000 euros a year working on something that doesn't interest me a bit, under bosses with narcissistic/psychopathic characteristics, or being unemployed as I am now. I don't even know what I want to do for a living. Educate people about spatial distortion? Nobody is going to pay me to do that, that much I know!

Your background and working history just blow my mind. A couple of years ago I didn't even know who Brian May was, until he played with Tangerine Dream and I Googled that he is the guitarist of Queen. That's _my_ reality. Your reality is being backed up by him. So different. No wonder our opinions differ too.


----------



## pinnahertz

With apologies...I know we've moved past this a little...



71 dB said:


> I'm using the relevant part. Excessive stereo separation is an issue mostly below 1 kHz. At higher frequencies the shadow effect of the head becomes strong and the phase difference becomes pretty meaningless. So, the original "stereo space" information works more or less as it is. It's not the kind of major problem it is below 1 kHz.


In HRTF the above-1 kHz effects are quite powerful. Yes, phase becomes meaningless as the wavelengths are too short, but diffraction and pinna effects are huge, and a major influence on virtual position and space. It's one reason why I've taken exception to your simplistic cross-feed circuit. In my personal research I've found that both ITD and diffraction are very, very important to cross-feed.


71 dB said:


> What counts "research" to you? This is mental gymnastics that has taken years of thinking and calculating and testing/listening.


Research means the same thing to me as it does to everyone else. Google the definition.
What you've concluded is that your cross-feed is mandatory for everyone listening to headphones. That's the part you have not proven. Research.


71 dB said:


> Since this is my hobby, I haven't been the most systematic, because that would take the fun out of it. What's the point of a hobby if you don't enjoy doing it? So, I don't have a neat well-written pdf to show you. All I have is messy calculations all over, messy xls-files of simulations and calculations, Audacity Nyquist plugins for testing ideas and processing tracks of music I make, DIY crossfeeders, listening experiences etc. The only organised thing I have is the knowledge and understanding I have of the issue, and that's what I am sharing here. If all of this is worthless to you because it doesn't count as "research" then I don't have anything to offer you. Sorry.


No, that's not my point at all.  Your work is not worthless.  It's your conclusion and firm application that is flawed and unsupported.  Research can be fun too.


71 dB said:


> Why educate people about the dangers of smoking? Let them smoke so researchers can monitor how fast they get lung cancer and die. The research has been done long ago. We know how spatial hearing works. It's time to apply the knowledge and educate people about spatial distortion.


The smoking data was accumulated from a large number of tests, studies, and data that included millions of people. The conclusions were supported by that data, and thus authoritative warnings could be placed on packaging, and sales restrictions and sanctions applied. Yours? 


71 dB said:


> We are not as free as you think we are. The fact that you prefer the Who's Next album without crossfeed indicates you suffer from spatial deafness. You are probably blind to the problems of global capitalism too.


I strongly object to your _*incessant personal attacks and denigration.*_  You have NO DATA to support your conclusions about my spatial hearing and NO DATA to support your insistence that your cross-feed is mandatory for all listeners.  

I Very Strongly object to your conclusions about my views on anything else!  We haven't discussed global capitalism, and we aren't going to.


71 dB said:


> High opinion compared to no crossfeed. Crossfeed doesn't give you the best possible outcome imaginable, but for most recordings  it means a clear improvement compared to not using it.


Unsubstantiated claim.


71 dB said:


> DIY crossfeeders cost like 10-50 bucks to build depending on the level of sophistication, and for that money you get such a large improvement that it's hard to imagine a similar improvement elsewhere in audio for the money.


The cost is irrelevant if the claim is unsubstantiated.


71 dB said:


> Or you can use software crossfeeders for free or very little cost.


Irrelevant. Lots of software is cost-free but provides no benefit.


71 dB said:


> Plus the use of crossfeed as a concept is supported by the scientific knowledge of spatial hearing.


The concept is supported, the need and desirability is not.


71 dB said:


> Even if every single recording from now on was produced for headphones (free of spatial distortion), 98 % of everything in stereo so far is plagued with spatial distortion to some degree.


Unsubstantiated claim.  Big time.


71 dB said:


> *jasonb:*_ "I must say that ever since first using it, I now have to use it. Music through headphones now without it just sounds strange to me."_
> *revolink24:* _"I do use crossfeed. It makes the music sound more natural to my ears, and I always strive for natural sound."_
> *aimlink:* "_I currently always use the crossfeed option on my Head UltraDesktop Amp."  _
> *xnor: *_"I also use crossfeed, cmoy's implementation that is (though software based)."_
> ...


Research would return a statistical basis that in most cases results in some form of bell-shaped curve, including the strong preferences through neutral to the negative ones. Objective statistical analysis of subjective testing always returns a range of data; it's never fully polarized. Your list includes no negatives, no information about the total data-set size, really no information at all. A list of random, unverified opinions is not substantiation. We don't know anything about how these people were introduced to cross-feed, what biases were applied and still exist, what music they listen to, etc., etc. Lacking any test information, we must conclude it was fully biased, sighted and uncontrolled. You can get better than a 50% preference for a placebo choice over an identical alternative under those conditions.

You show fully cherry-picked and incomplete data. I've even told you I do like cross-feed on some recordings! You have nothing here in terms of substantiation or proof of preference, and that's not research.


----------



## 71 dB

pinnahertz said:


> In HRTF the above 1kHz effects are quite powerful.



Of course. The ILD problems are below 1 kHz, and that's the frequency range crossfeed "fixes" while leaving the range above 1 kHz almost untouched.
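That frequency selectivity is easy to check numerically. Assuming the bleed path is an attenuator plus a first-order lowpass (as in simple bs2b-style designs; the -12 dB / 700 Hz values are illustrative, not any specific plugin), its magnitude response shows the opposite-channel feed sitting near the nominal level in the bass and falling steeply above the cutoff:

```python
import cmath
import math

def bleed_gain_db(freq_hz, fs=48000, level_db=-12.0, cutoff_hz=700):
    """Magnitude of the opposite-channel bleed path (attenuator plus
    one-pole lowpass) at a given frequency, in dB."""
    a = math.exp(-2 * math.pi * cutoff_hz / fs)
    w = 2 * math.pi * freq_hz / fs
    h = (1 - a) / (1 - a * cmath.exp(-1j * w))  # one-pole frequency response
    return level_db + 20.0 * math.log10(abs(h))

# Bleed level vs frequency: near -12 dB in the bass, rolling off
# above the cutoff, so the >1 kHz image is comparatively untouched.
for f in (100, 500, 1000, 4000, 10000):
    print(f, round(bleed_gain_db(f), 1))
```

With these assumed values the bleed sits near -12 dB at 100 Hz but is down past -30 dB by 10 kHz, which is the "fixes below 1 kHz, leaves the top mostly alone" behaviour described above.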



pinnahertz said:


> Yes, phase becomes meaningless as the wavelengths are too short, but diffraction and pinna effects are huge, and a major influence on virtual position and space. It's one reason why I've taken exception to your simplistic cross-feed circuit. In my personal research I've found that both ITD and diffraction are very, very important to cross-feed.



Crossfeed doesn't change pinna effects (it doesn't change the shape of the pinna). Sure, above 1 kHz the channels are crossfed at a low level, but how is that, even in theory, close to as detrimental as, for example, having headphones on your head changing the acoustics next to your ear, even changing the resonance in your ear canal a bit? Headphones always deliver the sound from the drivers, and you always have the diffractions you have. Move the headphones a bit on your head and the diffractions change. The change is pretty large at the highest frequencies, but nobody seems to care. People only care about the theoretical microscopic problems of crossfeed. I won't say more, to avoid insulting someone.


----------



## bigshot

I guess I'll stick with my speakers. This cross feed stuff sounds dangerous! Maybe cross feed caused those brain problems for the diplomats in Cuba!


----------



## Strangelove424

WoodyLuvr said:


> @Strangelove424  Appreciate the feedback; especially the information regarding the Meier Crossfeed setting.  I will try to lower it down to 7-8 again and see how that compares to my current 15-18 setting usage with my electronic tracks.  I too agree that I like using the crossfeed feature mainly to reduce fatigue and I sincerely believe it is working in that regard very well.
> 
> I used to use the KA Golden Equaliser GEQ-7 with my B&O H6 headphones; maybe I should re-install it. However, I have had good luck so far with the 31-Band Graphic Equalizer component from Foobar2k with my Nhoords.
> 
> ...



Doh, knew I forgot something!







If I understand it correctly, DSPs higher on the list are processed first. You can experiment with different chain orders, and get different sounds. I found this chain sounded the best, and my logic for ordering them was to do EQ and tone correction first, then apply spatial processing with crossfeed. 

I just read your review of the TAL plugin, and followed your link to download the dll of the VST. I'm very excited to give this a shot right now. Thanks for posting your review.


----------



## Zapp_Fan

I'm just going to chime in and say I personally would never use a VOS plugin (Slick EQ in this case) for playback/listening.  They're all wonderful plugins, but AFAIK all are deliberately designed to include various types of pleasant-sounding distortion beyond pure EQ.  I actually haven't used this particular plugin though; do you know if you can actually disable all of the saturation algorithms?


----------



## Strangelove424

I think you're missing the point. I use it _for_ the saturation algorithms. I have graphic and parametric EQ plugins for dedicated EQ. I was inspired by this thread to explore saturation plugins and have been charmed by the effects under certain circumstances.


----------



## pinnahertz

71 dB said:


> Of course. The ILD problems are below 1 kHz and that's the frequency range crossfeed "fixes" while leaving above 1 kHz almost untouched.


Interesting.  So you are correcting for hard-panned below 1kHz, but not correcting hard-panned above (I know it's a gradual transition).  If I may...why not?  Wouldn't hard-panned sounds above 1kHz be bothersome to you as well? 


71 dB said:


> Crossfeed doesn't change pinna effects (doesn't change the shape of pinna). Sure, above 1 kHz channels are crossfed at a low level, but how is that even close to as detrimental even in theory than for example having headphones in you head changing the acoustics next to your ear even changing the resonance in your ear canal a bit? Headphones always give the sound from the drivers and you always have the diffractions you have. Move the headphones a bit in your head and the diffractions change.


They are not comparable, though.  Both effects (overly wide separation) and HF response changes related to headphone position may be detrimental, but not in the same way.  There's no means of comparison without statistical analysis of subjective testing, and I guess we won't go there.


71 dB said:


> The change is pretty large at highest frequencies, but nobody seems to care.


How would you know, though?  Don't you reposition headphones for best (most pleasing) response?  Each time you put them on?  Some of us do that, and some of us do care. 


71 dB said:


> People only care about the theoretical microscopic problems of crossfeed. I don't say more to avoid insulting someone.


If that's really your belief, why continue to post?  And how would you know what people only care about? Have you asked a lot of them, or are you just referring to one or two who post here because they disagree with you? 

I really don't think you've avoided insulting the targeted "people".


----------



## WoodyLuvr

Strangelove424 said:


> Doh, knew I forgot something!
> 
> 
> 
> ...


Curious to see how you find it in comparison to your other saturation plugins.
Do you use Replay Gain and/or Advanced Limiter?


----------



## ironmine

Music without crossfeed sounds like a bee buzzing in the ear. 

This feeling is awful, weird and unnatural. 

Anti-crossfeed guys may say what they will, but it's not pleasant. 

It's not the way we hear sounds in the real world. If you listen to headphones without crossfeed for a long time, yes, you may get used to it.  That's what most headphone listeners have done, unfortunately: they have become accustomed to a wrong presentation of sound. That's why they resist. 

I am sure that 10 or 20 years from now, headphone listening without some sort of crossfeed will be regarded as an oddity, a thing of the past, like watching TV in black & white mode today.


----------



## ironmine

71 dB said:


> I mostly use two DIY headphone adapters with crossfeed connected to my AV-amp's B-speaker terminals. The other one is based on Linkwitz-Cmoy (yep, in my avatar), but has 6 different crossfeed levels from -10 dB to -1 dB. It also has the possibility to reduce channel separation at high frequencies and has "almost mono"/mono switch (surprisingly helpful).
> 
> The other one is a modification of Linkwitz-Cmoy with one fixed crossfeed level of -3 dB, but cut off frequency dropped to about 300 Hz so that the phase shift raises to about 640 µs creating a wide, but "flat" soundstage.



71 dB, thanks for your explanations and for the photo.

I am not really knowledgeable in electronics, so I cannot understand this diagram. If it were presented as a bunch of inter-connected VST plugins, I would understand it better.


----------



## ironmine (Dec 8, 2017)

WoodyLuvr said:


> How does my DSP chain look?  Curious to hear if I have my components/plugins in the correct/logical order



Hi WoodyLuvr,

(1) The first plugin should be a high-quality upsampler so that all further calculations are done with a higher precision and less quality loss. Use the ratios 2X or 4X or 8X. It means it's better to upsample 44/16 to 88/24 or 176/24 or 352/24, but not to 96/24 or 192/24 or 384/24. Of course, 32 bits are even better than 24 bits.

(2) The last three plugins in the chain must be:
a) Ditherer. Dither the signal to 16 or 24 bits depending on your DAC input specifications.
b) Volume control adjustment (not to exceed 0 dB or preferably even less, in order to avoid digital clipping)
c) Volume and clipping monitor

2b and 2c sometimes can be combined in the same plugin.

All your other processing plugins should be put in between (1) and (2).
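As a rough sketch of that ordering, with toy stand-in functions rather than real plugins (the dither stage is omitted for brevity, and the actual DSPs would of course be proper resamplers, EQs and limiters):

```python
# Toy stand-ins for the plugin roles described above. Samples are floats
# in the range [-1, 1]; each stage takes and returns a list of samples.

def upsample_2x(x):                 # (1) resampler goes first
    return [s for s in x for _ in (0, 1)]   # crude sample-repeat, demo only

def eq_boost(x):                    # "other processing" sits between (1) and (2)
    return [s * 1.5 for s in x]

def headroom_gain(x):               # (2b) volume adjustment: keep peaks below 0 dB
    peak = max(abs(s) for s in x)
    return [s / peak for s in x] if peak > 1.0 else x

def clip_monitor(x):                # (2c) final check that nothing clipped
    assert max(abs(s) for s in x) <= 1.0, "digital clipping!"
    return x

def run_chain(x, chain):
    for stage in chain:             # stages run in list order, top to bottom
        x = stage(x)
    return x

chain = [upsample_2x, eq_boost, headroom_gain, clip_monitor]
```

The point of the ordering is visible here: the boost stage pushes peaks over full scale, and only because the gain adjustment and clip monitor sit at the end of the chain does the output stay clean.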

My Foobar has this component installed - http://www.yohng.com/software/foobarvst.html
which connects to VST-chainer called Console ART-Teknika - http://www.console.jp/en/

ART-Teknika basically lets you load and connect almost any VST plugins. You just place VST plugins there, move them around and connect them to each other with virtual wires any way you want, so it's very graphic, visual and convenient. After you have connected all your plugins and set them up, you just save the preset in Console. So you can have a bunch of VST chains, each with its own parameters, and you can open them "on the fly" while the music is playing, for instant comparison.


----------



## WoodyLuvr (Dec 8, 2017)

@ironmine Thank you for the detailed advice... unfortunately much of it went right over my head!  LOL.  Though I believe I have the Advanced Limiter placed correctly, as per your advice, last in the chain for volume and clipping prevention.

Currently, the VST plugin adapter I use is foo_vst:






So besides missing a "resampler" first in the chain (are there any that you would recommend and do I really require it?) do I have the rest of my DSPs ordered correctly?  FYI: all of my files are MP3 256-320 kbps with sampling set at 24 bit 44,100 Hz for my DragonFly.




Please give me an example of your DSP chain so I may better understand your advice.

Regarding your Crossfeed VST (112 dB Redline Monitor) how does it compare with Meier Crossfeed?


----------



## ironmine

WoodyLuvr said:


> @ironmine Though I believe I have the Advanced Limiter placed correctly, as per your advice, last in the chain for volume and clipping prevention.



I am not sure what exactly the Advanced Limiter does, as I could not find a description of it and it does not have any controls.  I just hope it's not a compressor!

But, if you monitor the signal level and make sure that it stays below 0 dB or -1.0 dB, then you don't need any limiter, be it "advanced" or "stupid".



WoodyLuvr said:


> So besides missing a "resampler" first in the chain (are there any that you would recommend and do I really require it?) do I have the rest of my DSPs ordered correctly?



Yes, you are still missing a resampler, ditherer, volume adjustment and clip detector.



WoodyLuvr said:


> FYI: all of my files are MP3 256-320 kbps with sampling set at 24 bit 44,100 Hz for my DragonFly.



Why would you want to use such low-quality source material as compressed mp3 ??? It's not serious, man...


----------



## WoodyLuvr (Dec 8, 2017)

ironmine said:


> I am not sure what exactly the Advanced Limiter does, as I could not find the description of it and it does not have any controls.  I just hope it's not a compressor!
> 
> But, if you monitor the signal level and make sure that it stays below 0 dB or -1.0 dB, then you don't need any limiter, be it "advanced" or "stupid".


Apparently, it does nothing except detect and prevent clipping (especially from DSPs) and thus why it has been recommended to be set last in the DSP chain.  Also, it has been recommended if you are using Replay Gain in Foobar and have processing set at "Apply Gain" only under Playback Preferences.



ironmine said:


> Yes, you are still missing a resampler, ditherer, volume adjustment and clip detector.


If you don't mind please do a quick screen capture of your DSP list in Foobar so I can visualize and research all the plugins/components you are using.



ironmine said:


> Why would you want to use such low-quality source material as compressed mp3 ??? It's not serious, man...


Unfortunately, my ears are honestly unable to hear the difference between 256 kbps MP3 and FLAC/ALAC/WAV so I have simply not gone that route.  Plus, all my music is from Google Play which is limited to that.

@ironmine 
Thus far though with my plugins is my DSP chain correctly ordered you think?


----------



## pinnahertz (Dec 8, 2017)

ironmine said:


> Music without a crossfeed sounds like a bee buzzing in the ear.
> 
> This feeling is awful, weird and unnatural.
> 
> ...


Even with cross-feed, headphone listening isn't how we hear sounds in the real world.  If you're after the "real world", the only thing that even sort of works is binaural.  Otherwise, the "real world" isn't even a goal in any recording.  Cross-feed simply presents a different perspective on a totally artificial world, one that may or may not be a subjective improvement.


ironmine said:


> I am sure that 10 or 20 years from now, headphone listening without some sort of crossfeed would be regarded as an oddity, a thing of the past. It's like watching TV now in black & white mode.


Well let's see now.  We've had stereo recordings available to the consumer for 65 years (starting with tape in 1952) and stereo headphones for 59 years (Koss, 1958) with the current escalation of popularity beginning in 1979 with the Walkman (38 years).  Cross-feed has existed in one form or another for at least 39 years (the Apt-Holman preamp had variable cross-feed), and could at any time have been achieved with minimal, though not insignificant cost of manufacture.  Yet it's still extremely rare today.  I'm not sure another 10-20 years is going to change anything. 

Every single feature that is standard today was introduced as a major change and easily perceived audible improvement at an acceptable cost: the jump from mono to stereo, tape noise reduction, analog to digital audio, stereo to multi-channel, etc.  Unsuccessful were things like loudness compensation, which showed up in the 1960s but never really worked as intended and eventually vanished.  It has been replaced by DSP-based volume-aware loudness compensation, but those options aren't understood and so are ignored by most consumers.   And so far, higher-than-CD audio has not succeeded in dominating the music market, which is interesting, because it really is a low-cost upgrade with imperceptible improvement.

Where does cross-feed fit into this?  It's a clearly audible change, it's low cost, the concept is anything but new, and it has had numerous market tests, with free solutions available now.  Why isn't it already on every single iOS and Android device, which between them cover the vast majority of the world's headphone listening sources?  Lack of market awareness?  If it costs nothing but provides clear benefit, it becomes a marketing edge.  If it were presented as an advantage to consumers and included on a lot of products, the market would become aware...but that's not done.

Looking to the more esoteric devices in the headphone world, things like DAC/Amp combos, or headphone amps, we find very, very few include cross-feed.  Within Headphone.com's offering of 32 headphone amps, around 10% have cross-feed.  Their most expensive offering, the Sennheiser HDVD 800, has this sentence in its description:  "It provides balanced sound, maximum precision and impressive spacial accuracy."  _Spatial accuracy?_  That MUST be cross-feed, right? Nope, none at all.  One manufacturer who did include it on many past models, Headroom, has discontinued all of those products, with their current offering lacking cross-feed. 

So the question to answer is: why is cross-feed not standard today?  It can't be ignorance, at least not in the headphone amp market.  But it hasn't sustained its presence even there, with today's total number of products including fewer than ever with cross-feed.  The extrapolated statistics have the presence of cross-feed in headphone amps slowly approaching zero in 10 years or less.  Could that be because it's happening elsewhere, like in software?  Possibly, but not likely.  Headphone amps are very targeted to headphones, and would be the perfect place for a cross-feed option.  So would digital music players, but it's not there either...much.

I feel compelled to say at this point that I'm not really a cross-feed hater, nor am I opposed to it.  It's a tool that is good when appropriately and properly used.  I have it, I use it, and I try it on a lot of material, but usually choose to keep it off.  I would, however, rather have a cross-feed option than not.  Going forward, that's not looking like something that's going to happen.


----------



## 71 dB

pinnahertz said:


> Interesting.  So you are correcting for hard-panned below 1kHz, but not correcting hard-panned above (I know it's a gradual transition).  If I may...why not?  Wouldn't hard-panned sounds above 1kHz be bothersome to you as well?



ILD problems above 1 kHz are pretty insignificant. Some recordings do have such a "harsh" upper end that I find a little treble crossfeed beneficial in "softening" the sound, but generally the main issue is lower frequencies. 1 kHz here is a symbolic border between the "ILD problems" and "no ILD problems" areas. As you said, it's a gradual transition.



pinnahertz said:


> They are not comparable, though.  Both effects (overly wide separation) and HF response changes that are related to headphone position my be detrimental, but not in the same way.  There's no means of comparison without statistical analysis of subjective testing, and I guess we won't go there.
> How would you know, though?  Don't you reposition headphones for best (most pleasing) response?  Each time you put them on?  Some of us do that, and some of us do care.


I don't care and hardly even hear the difference. Moving your head while listening to speakers means similar changes to the sound. I try to tackle the relevant issues of sound reproduction.


----------



## jgazal (Dec 8, 2017)

This is going to be a wildly speculative post, but since nobody has risked giving an explanation, I will try.

Consider the following crosstalk cancellation with speakers and externalization with headphones:



jgazal said:


> > *13 Is the 3D realism of BACCH™ 3D Sound the same with all types of stereo recordings?*
> > (...)
> > All other stereophonic recordings fall on a spectrum ranging from recordings that highly preserve natural ILD and ITD cues (these include most well-made recordings of “acoustic music” such as most classical and jazz music recordings) to recordings that contain artificially constructed sounds with extreme and unnatural ILD and ITD cues (such as the pan-potted sounds on recordings from the early days of stereo). For stereo recordings that are at or near the first end of this spectrum, BACCH™ 3D Sound offers the same uncanny 3D realism as for binaural recordings18. At the other end of the spectrum, the sound image would be an artificial one and the presence of extreme ILD and ITD values would, not surprisingly, lead to often spectacular sound images perceived to be located in extreme right or left stage, very near the ears of the listener or even sometimes inside of his head (whereas with standard stereo the same extreme recording would yield a mostly flat image restricted to a portion of the vertical plane between the two loudspeakers).
> > (...)





jgazal said:


> Erik Garci said:
> 
> 
> > By the way, I recently created a PRIR for stereo sources that simulates perfect crosstalk cancelation. To create it, I measured just the center speaker, and fed both the left and right channel to that speaker, but the left ear only hears the left channel because I muted the mic for the right ear when it played the sweep tones for the left channel, and the right ear only hears the right channel because I muted the mic for the left ear when it played the sweep tones for the right channel. The result is a 180-degree sound field, (...).
> ...



In the second example, why are the hard panned sounds stuck to the side, and why do they not derotate according to the headtracking?

First, let’s see how the convolution, interpolation and headtracking work.

A standard 2-channel _personalized room impulse response (PRIR)_ comprises 12 impulses:

1.   looking center + left speaker playing + left ear measuring;
2.   looking center + left speaker playing + right ear measuring;
3.   looking center + right speaker playing + left ear measuring;
4.  looking center + right speaker playing + right ear measuring;
5.   looking left + left speaker playing + left ear measuring;
6.   looking left + left speaker + right ear measuring;
7.   looking left + right speaker + left ear measuring;
8.   looking left + right speaker + right ear measuring;
9.   looking right + left speaker + left ear measuring;
10. looking right + left speaker + right ear measuring;
11. looking right + right speaker + left ear measuring;
12. looking right + right speaker + right ear measuring.

ipsolateral impulses (1, 4, 5, 8, 9, 12)
contralateral impulses (2, 3, 6, 7, 10, 11).

If you have only three looking angles (i.e. -30, 0, +30), how does the convolution engine set the _frequency levels, time arrivals_ and _phase delays_ for any angle between -30 and 0 degrees, or between 0 and +30 degrees?

_Frequency levels, time arrivals_ and _phase_ _delays_ for any angle are calculated by an interpolation algorithm.

Then head-tracking helps to externalize sounds.

The standard PRIR convolution, interpolation and headtracking allow one to emulate exactly, with headphones, _how the measured room/speakers sound_.
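A minimal sketch of what such an interpolation algorithm might do, assuming a plain linear crossfade between the two nearest measured impulse responses. Real engines handle frequency levels, time arrivals and phase delays separately, so this is only a rough illustration, and it assumes the requested angle lies within the measured range:

```python
def interpolate_ir(ir_by_angle, angle):
    """Blend the two measured impulse responses nearest to `angle`.
    `ir_by_angle` maps a looking angle (degrees) to an impulse response
    given as a list of sample values. Hypothetical helper for illustration."""
    angles = sorted(ir_by_angle)
    lo = max(a for a in angles if a <= angle)   # nearest measured angle below
    hi = min(a for a in angles if a >= angle)   # nearest measured angle above
    if lo == hi:
        return list(ir_by_angle[lo])            # exactly on a measured angle
    w = (angle - lo) / (hi - lo)                # linear blend weight
    return [(1 - w) * p + w * q
            for p, q in zip(ir_by_angle[lo], ir_by_angle[hi])]
```

With measurements at -30, 0 and +30 degrees, a head position of -15 degrees would simply return the 50/50 blend of the -30 and 0 degree responses.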

Since the speakers were located at +/-30 degrees and contralateral impulses describe an important proportion of the acoustic crosstalk, the expression "_how the measured room/speakers sound_" above means a mostly flat image restricted to a portion of the vertical plane between the two virtual loudspeakers, even if the content has hard panned sounds. The virtual speakers, and the hard panned sounds within them, will derotate according to the head-tracking.

So if you are fond of crossfeed for headphones that emulates what it would be like to listen to standard stereo recordings in the old-fashioned way (i.e. with two speakers in a room and the corrupting acoustic crosstalk), just use a PRIR convolution without interpolation and headtracking.

So far, so good.

But Erik wanted to test with headphones how Professor Choueiri’s crosstalk cancellation would sound in his room.

What has he done?

Erik’s crossfeed-free PRIR also comprises 12 registries, but now both speakers are at 0 degrees and the contralateral impulse registries (2, 3, 6, 7, 10, 11) are left blank.

Since only ipsolateral impulses are computed in Erik’s crossfeed free PRIR, acoustic crosstalk info is lost and no crossfeed is added.

But there is another catch in Erik’s crossfeed free PRIR.

In standard PRIR’s, speakers are measured at +/-30 degrees.

On the one hand, ipsolateral impulses capture the _frequency levels, time arrivals_ and _phase delays_ introduced by _room reflections_ (the RIR component of his PRIR) and the _filtering effect of his pinna_, plus the scattering effect of the ipsolateral side of his face (two of the three HRTF components of his PRIR), varying according to the looking angles.

On the other hand, contralateral impulses capture _room reflections_ (the RIR component of his PRIR) and _acoustic crosstalk_ (note that acoustic crosstalk is a mix of room reflections and a third HRTF component). That third HRTF component is the effect of his head shadowing directional frequencies.

In Erik’s PRIR both speakers are measured at 0 degrees, with +/-30 looking angles.

On the one hand, ipsolateral impulses capture _frequency levels, time arrivals _and _phase delays_ introduced by all the _room reflections_ (RIR component of his PRIR), the _filtering effect of his pinna, _the scattering effect of the ipsolateral side of his face and the _effect of his __head shadowing directional frequencies_ varying in degree according to the looking angles.

On the other hand, the contralateral impulses, which in standard PRIRs would capture the effects of room reflections and acoustic crosstalk, are left blank, as I said before.

As he describes, he gets externalization with binaural recordings.

That is because natural ILD and ITD are already in the content. In other words, the effect of your head shadowing directional frequencies (or, as I said earlier, the acoustic relationships between the two hemispheres separated by the sagittal plane) was preserved in the content.

If Erik plays back, with his crossfeed-free PRIR, old stereo songs that contain hard panned sounds, why don't the hard panned sounds derotate?

Hard panned sounds don't have any ITD, because there is no reference for the original time in the opposite channel. Any difference in time will be heard just as if the hard panned instrument started to play later (which obviously does not affect artistic coherence with other sounds, as ITDs are in the range of microseconds). They don't have ILD either, because their full level is in one single channel.

Since Erik’s PRIR has ipsolateral impulses measured with the speakers at 0 degrees that, as I said, capture _the effect of his head shadowing directional frequencies_, the convolution engine can introduce, in the playback step, his personal ILD and ITD to center sounds while he rotates his head.

That happens because when he turns his head:

a) looking left (maximum looking angle is still 30 degrees), center sounds become more shadowed by left ipsolateral impulses and at the same time center sounds become less shadowed and more scattered by right ipsolateral impulses;

b) looking right (maximum looking angle is still 30 degrees), center sounds become less shadowed and more scattered by left ipsolateral impulses and at the same time become more shadowed by right ipsolateral impulses.

And why does that not work with hard panned sounds? Why, in such strong stereo-separation recordings, are hard panned sounds stuck to the side when using Erik’s crossfeed-free PRIR?

Because when he turns his head to look left (the maximum looking angle is still 30 degrees), hard panned sounds in the left channel become more shadowed by the left ipsolateral impulses, but absolutely nothing happens with those hard panned sounds at the right headphone driver, since no correlated sounds are fed into the convolution of the right ipsolateral impulses.

Note that the absence of crossfeed seems to make reflections more perceptible (so better room acoustics would yield no echo?):



Erik Garci said:


> By the way, after making the PRIR, the "window" setting should be reduced to 200ms to prevent the right ear from hearing a faint echo of the left ear's signal, and vice versa.



So it is clear that stereo recordings with hard panned sounds won’t result in a good spatial rendering with Erik’s PRIR.

What does Erik do to circumvent such error?

The Realiser A8 mix block allows mixing the left channel into the right channel, and vice versa, in rough steps (consider 0 as mute and 1 as full scale; the step is 0.1, and I don’t know how to express that logarithmically in dBFS).

If @Erik Garci mixes at least 0.1 of each channel into the other, then he is able “to shift the far-left and far-right sounds towards the front”.
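For the dBFS expression: a linear full-scale ratio converts to decibels as 20·log10(ratio), which gives exactly the -6 dB and -20 dB figures Erik quotes below for ratios of 0.5 and 0.1. A minimal sketch:

```python
import math

def ratio_to_db(ratio):
    """Convert a linear full-scale mix ratio to decibels: 20 * log10(ratio).
    E.g. 0.5 -> about -6 dB, 0.1 -> -20 dB."""
    return 20.0 * math.log10(ratio)
```

So a 0.1 mix step corresponds to a -20 dB feed of the opposite channel, in the same ballpark as typical crossfeed attenuations mentioned elsewhere in this thread.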

Do the hard panned sounds also start to derotate according to the headtracking?

Yes.

Why?

Because only by using ipsolateral impulses is Erik able to feed some level of the left channel’s hard panned sounds into the convolution of the right ipsolateral impulses.

Then why do the original far-left and far-right sounds shift directly towards the front? Why do they shift directly to the front and not gradually, going from “stuck to the sides” to rotating as if they were outside the +/-30 degree looking angles?

Because the ITD and ILD are derived from, and therefore limited by, the maximum time difference captured by the ipsolateral left and right looking angles.

Could the 0.1 mix ratio be higher than his natural ILD?

Yes.

That’s why, as far as I understood, Erik said:



Erik Garci said:


> That would be much easier than manually muting the microphones during measurements, and just about any PRIR could be used.
> 
> Allowing fractional values would be even better, such as 0.5 (-6 dB) or 0.1 (-20 dB).



What if the Realiser could mix both channels in increments of, say, 1 dB?

We are waiting for Smyth Research response.

Does anybody have an alternative explanation that is not expressed as programming code or equations?


----------



## 71 dB

ironmine said:


> Music without a crossfeed sounds like a bee buzzing in the ear.
> 
> This feeling is awful, weird and unnatural.
> 
> ...



Totally agree with this. The dominant problem of no crossfeed is this "bee buzzing", but there is more.

- Bass with too much ILD sounds fake. Crossfeed gives it "physicality", realism.
- The sound image is "broken" due to spatial distortion, and crossfeed is able to fix this. Spatial distortion masks real spatial information; crossfeed reduces/removes spatial distortion so that the real spatial information is revealed. Instruments are located with pinpoint accuracy instead of spreading all over the place (this requires a good recording though).
- Listening fatigue, which crossfeed is famous for reducing/removing.
- Without crossfeed the music is located mostly above the shoulders, near the ears, which sounds annoying.

Crossfeed makes the sound "jump" in front of me at a distance dictated by the spatial information of the recording, varying from 1' (30 cm) (crappy spatiality) to perhaps 5' (1.5 m) (excellent spatiality). The latter is almost like listening to speakers in terms of soundstage depth. If I sit in front of my speakers and imagine the sound coming from them, I can almost fool myself that I am listening to speakers, but only with recordings of excellent spatiality (typically a multichannel SACD of classical music). When I began using crossfeed, for some time I had a hard time believing it was possible to achieve an improvement of this scale using some simple electric circuits. It seemed like magic, but after thinking these things through I did understand why it is possible. Spatial distortion is very detrimental because our hearing is so sensitive to it; reducing it even a little helps a lot. Our spatial hearing can be fooled as long as we have reasonable spatial cues that make sense, and that's why the spatial information doesn't need to be perfect after crossfeed. As long as it makes sense, our hearing is able to "get the picture" and we are fine.

The problem of crossfeed is not that it "messes up" the sound (that's a misunderstanding of the whole concept). The problem is that each recording contains a different amount of spatial distortion, so for an optimal result one must be able to adjust the crossfeed level. If you don't crossfeed enough, you don't fully experience the real benefits, and if you crossfeed too much, you get a somewhat dull and "monophonic" sound. It's at the optimal level that crossfeed shines. If you have just one fixed level, say -8 dB, only those recordings with an optimum crossfeed level between -9 dB and -7 dB work optimally with it (1 dB accuracy seems to be enough imo), and that's probably at most 50 % of all your music, perhaps only 25 %. 

You are spot on saying headphone listeners have become accustomed to a wrong presentation of sound. That's exactly how it is. Pinnahertz even admitted to this, saying he wants to hear The Who with the same hard panned spatiality he used to hear in the 70's. The day I discovered crossfeed, I admitted to myself that I had been doing headphone listening wrong for years. It's crazy to assume we do things the right way from the beginning. At least I won't have listened to headphones without crossfeed all my life.


----------



## 71 dB

ironmine said:


> 71 dB, thanks for your explanations and for the photo.
> 
> I am not really knowledgeable in electronics, I cannot understand this diagram. If it were presented as a bunch of inter-connected VST plugins, I would understand it better


No problem. I have a degree in electrical engineering, so these diagrams aren't difficult for me, but of course they are if you don't know electronics. The schematic has more components than electrically needed, because I have multiplied some resistors to increase power-handling capability. This is a headphone adapter; my amp feeds it like a speaker, so we are talking about watts of power at peaks, with a small fraction of it sent to the headphones. So using 0.6 W metal film resistors I can increase power handling to, say, 2.4 W by using four 47 Ω resistors in a parallel/serial configuration instead of just one. Three 10 Ω resistors in series are just one 30 Ω resistor with 1.8 W of power handling. The power handling requirements were carefully analysed. However, the power handling capabilities are overkill, so my typical listening levels don't make the resistors warm enough to be noticeable by finger.
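The resistor arithmetic above can be checked in a few lines. The values come from the post itself; the 2-series x 2-parallel arrangement is my assumption about how the four 47 Ω resistors are wired so the total stays at 47 Ω:

```python
# Combining identical 0.6 W metal film resistors to raise power handling
# while keeping the target resistance, as described in the post.

def series(*rs):               # resistances add in series
    return sum(rs)

def parallel(*rs):             # reciprocals add in parallel
    return 1.0 / sum(1.0 / r for r in rs)

P_EACH = 0.6                   # watts per individual resistor

# Four 47-ohm resistors, two series pairs in parallel (assumed wiring):
r_2s2p = parallel(series(47.0, 47.0), series(47.0, 47.0))   # 47 ohms total
p_2s2p = 4 * P_EACH                                         # 2.4 W spread over four parts

# Three 10-ohm resistors in series:
r_3s = series(10.0, 10.0, 10.0)                             # 30 ohms total
p_3s = 3 * P_EACH                                           # 1.8 W spread over three parts
```

Because the dissipation spreads evenly over identical resistors, the network's power rating scales with the part count while the resistance stays at the design value.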


----------



## gregorio (Dec 8, 2017)

71 dB said:


> [1] Let's say you have 12 dB of ILD at bass on your recording. [2] It seems improbable that the producers of the record intented that, kickdrums a feet from your head. ...  Kickdrums are expected to be located to some distance meaning ILD at bass is limited to a few decibels. ... [2a] How does kickdrum sound if you listen to it that close? .... [3] When you listen to speakers, all of this is automatically taken care of by acoustic crossfeed.



1. That would be unusual.
2. You are just making that up, you have no idea. In fact in some genres the kick drum is processed very dry and designed/intended to sound close relative to most other elements of the mix.
2a. A real kick drum typically sounds COMPLETELY different, at any distance. In virtually all pop and rock of the last 40 years or so, the kick drum is heavily processed and sounds nothing like a real kick drum. You are again falling into the audiophile trap of making a comparison with reality, when there is no reality in the first place!
3. No, it's not! Firstly, a kick drum is typically panned centrally or near centrally, so there is very rarely a large ILD. Secondly, the reason a dry processed kick drum doesn't sound like it's in your head when reproduced with speakers is because of listening room reflections and has little or nothing to do with ILD! Thirdly, when I'm mixing a track with a very dry kick and when I listen with cans, if that kick sounds too present relative to the other elements in the mix, I'll typically add some amount of an "ambience" type reverb. This will likely have little or no impact when played on speakers as the effect will likely be overwhelmed by room reflections but it will move the kick back into a more desirable relative position when listening with cans. HP crossfeeding will damage that processing.



71 dB said:


> [1] Yes, but our brain still has to make sense of it. Artificial becomes easier if you have cues from the reality such as reasonable ILD. [2] My ears don't like large ILD below 1 kHz and for some reason scientific knowledge of human hearing does give explanations why (kickdrums don't fit into ear canals). [3] But that's just me. I don't know how your ears work.



1. Again, you are referencing reality, when there is no reality in the first place! The reason our brain can "make sense" of all the spatial information anarchy on a recording is because it was created/applied by a human being who also has a brain. And again, the mix would always be checked on cans and if what was applied for speaker reproduction doesn't make sense, then something can often be done to have it make better sense, assuming of course that "making sense" was the original goal, which it sometimes isn't!

2. Careful here! You are stating a preference and then attempting to support that preference with "scientific knowledge" but that scientific knowledge is inapplicable. What has a kick drum not fitting into your ear canal got to do with anything? Again, we are not talking about a real kick drum, we are talking about an imaginary, made-up kick drum which is nothing like a real kick drum! Additionally, you do realise that most manufactured/manipulated kick drums have a very significant component typically somewhere between 1kHz and 2kHz, the initial attack or click of the kick as opposed to the boomy resonance?

3. Exactly, it's just you (and a few others). You don't know how my ears work, you don't know how yours work, you don't know how the engineers and artists ears work or what they were trying to achieve and you don't know what a kick drum sound is!



71 dB said:


> [1] I see it differently. To me acoustic crossfeed tames the excessive ILD levels with speakers creating ILD values that make sense. [1a] Crossfeed with headphones does similar (but not identical) thing. [1b] A hard-panned instrument becomes a spatially panned instrument with reasonable combination of ILD and ITD information. [2] That works great for me. ... [3] I don't get why acoustic crossfeed is fine, but electric crossfeed isn't.



1. You can see it however differently you choose but you seeing it differently does NOT change the actual facts. The actual facts being that acoustic crossfeed is NOT the only or even necessarily the most important factor at play with speakers, unless you're talking about speakers in an anechoic chamber.
1a. No, crossfeed does a very different thing, it does NOT take into account the "acoustics" part of "acoustic crossfeed"!
1b. No, it does NOT become a "spatially panned instrument"! It will probably become more appropriately panned relative to its intended pan position with speakers, but it does not address the "spatial" part of your "spatially panned instrument" statement. Indeed, simply crossfeeding with an ITD may actually damage/destroy that "spatial" part more than not crossfeeding! You are ignoring the fact that in practice we don't just have ILD panning on an instrument; that instrument will also almost certainly have some sort of STEREO reverb (or stereo delay based effect), and the timing and levels of the reflections created by that reverb are likely to be damaged by a simple crossfeed and an ITD of 250 µs!

2. And that's fine. With some recordings it works better for me too but with others it doesn't and I don't know which recordings have had adjustments applied to help with non-crossfed HPs or have deliberately not been adjusted because the artists wanted it to sound that way when played back on HPs and that's why I don't use crossfeed.

3. That's obvious. Maybe you are somewhat insensitive/ignorant of the "spatial" part of "spatially panned instruments" and are therefore not bothered that you are often damaging/destroying it? Or, maybe I'm overly sensitive because I spend so much time working specifically on it? I'm not sure, which is why I state that my preference is not to use crossfeed and that using it can reduce fidelity BUT, I would never say that everyone will definitely benefit from not using crossfeed and that it would be idiotic to use it!

Let's use your vehicle analogy again: let's say studio playback is like a MotoGP race bike; playback on home speakers is not the same, more like a consumer superbike. Playback on HPs is like a car, a completely different vehicle. Now we could do something to that car to make it more like a motorbike. If we hacked off two of its wheels, it would have two wheels just like a motorbike, but of course it's not really anything like a motorbike; what we've really got is a nonsensical, un-drivable vehicle, and I personally would rather have just a normal car with all its relative faults.  ... This analogy is poor on numerous levels, one of which is that in practice such a two-wheeled car might sometimes give us a better experience than a normal car, but it serves the purpose of illustrating that just fixing one aspect/element doesn't necessarily get us something better.



71 dB said:


> [1] That is exactly why I need crossfeed to transform "Decca tree information" to something my ears and head understands. [2] If they use say a Jecklin Disk then the spatial information is already compatible with my spatial hearing (spatial distortion free) and I most probably listen to the recording crossfeed off.



1. No, you have that backwards! The Decca Tree mic array was invented BECAUSE it translates so well to what our ears/brains understand! It's entirely possible that a Decca Tree recording would sound subjectively better in HPs than speakers, which is why the use of outrigger mics along with a Decca Tree eventually became the popular/preferred setup.

2. You can't use a Jecklin Disk with a Decca Tree array, a Jecklin Disk only works with a specific type of mic arrangement: Two small diameter mics placed in a close stereo setup, like an ORTF pair for example. And, as with the more modern usage of a Decca Tree arrangement, there are probably very few classical/orchestral recordings in the last 30 years or more which only use such a simple mic setup, virtually always there would be a number of other mics mixed in.



71 dB said:


> [1] I don't get why we shouldn't make spatial information less distorted. [2] If a recording was clearly made for speakers (having acoustic crossfeed) and [2a] clearly sounds horrible with headphones without crossfeed then to me it's a nobrainer to use crossfeed, but that's just me… …it makes sense to me but apparently not to everybody.



1. Because applying crossfeed to HPs doesn't make the spatial information less distorted, it can make some aspects of the spatial information MORE distorted!
2. And how do you know if a recording "was CLEARLY made for speakers"? It probably has been primarily made for speakers but that doesn't mean that some aspects of the mix have not been modified so that it still works relatively well on HPs, un-crossfed HPs!
2a. "Horrible" is a purely subjective term, and exactly what makes a piece/type of music horrible to some is what makes it great to others! It's often difficult to judge what is just horrible (and therefore bad) and what is intentionally horrible (and therefore good, if we're after fidelity, but still potentially bad relative to a particular individual's preferences). That makes sense to me but apparently not to many audiophiles, for whom, if something sounds better (relative to their preferences), then it is good, period, for everyone, and they then simply redefine the word "fidelity" to equal their perceived "better".

G


----------



## Erik Garci

jgazal said:


> This is going to be a wild speculative post, but since nobody risked to give an explanation, I will try.


Great explanation. It gets even more interesting when you consider the Hafler PRIR. Basically you hear the sum from the center-front speaker (L+R to both ears), and you hear the differences from the center-back speaker (L-R to left ear, and R-L to right ear).

I made a Hafler PRIR where the center-back speaker was actually measured in front, but I turned my head in the opposite direction. I looked right instead of left and looked left instead of right. This way, the center-back speaker has the same spectral balance as the center-front speaker, and head-tracking helps me distinguish which sounds are from the front versus the back. Maybe Smyth can add a Hafler mode that works for any PRIR that has a center speaker or a closely-spaced pair.

My next plan is to add crosstalk-free height channels.

Technical note: The A8's mix block cannot produce differences on its own because it does not allow negative coefficients, thus the negative signals must be generated by another device (such as one with balanced outputs) and fed to the A8 on other channels.
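
The sum/difference routing described above can be sketched in a few lines (illustrative Python only, not Smyth's actual DSP; the function name is made up):

```python
# Hafler-style routing: the front feed carries the sum (L+R to both ears),
# the back feed carries the difference (L-R to the left ear, R-L to the
# right ear). Note the difference feeds need negative coefficients, which
# is exactly what the A8's mix block can't do on its own.

def hafler_route(left, right):
    """Split a stereo pair into front (sum) and back (difference) feeds."""
    front = [l + r for l, r in zip(left, right)]        # L+R, both ears
    back_left = [l - r for l, r in zip(left, right)]    # L-R, left ear
    back_right = [r - l for l, r in zip(left, right)]   # R-L, right ear
    return front, back_left, back_right

# A mono signal (L == R) has zero difference, so it appears only in front.
front, bl, br = hafler_route([0.5, 0.5], [0.5, 0.5])
```

Conversely, out-of-phase content (L == -R) cancels in the front feed and appears only in the back, which is why matrix-encoded material decodes so naturally this way.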


----------



## gregorio

ironmine said:


> (1) The first plugin should be a high-quality upsampler so that all further calculations are done with a higher precision and less quality loss. Use the ratios 2X or 4X or 8X. It means it's better to upsample 44/16 to 88/24 or 176/24 or 352/24, but not to 96/24 or 192/24 or 384/24. Of course, 32 bits are even better than 24 bits.
> 
> (2) The last three plugins in the chain must be:
> a) Ditherer. Dither the signal to 16 or 24 bits depending on your DAC input specifications.
> ...



1. No, that's not how processing works! You will NOT get either higher precision or less quality loss. Precision is dictated by the plugin's internal bit depth, which is either 32 or 64 bit float. So whether you feed a plugin with 16 bits of data or 16 bits plus 8 additional zeros, the precision and result are identical. Increasing the sample rate is likewise pointless: there's no data above the Nyquist point to process, and you don't magically get any from upsampling. The only time a higher sampling rate could make a difference is in the case of some non-linear processes, in which case any decent plugin will internally upsample anyway.

Actually, upsampling could (theoretically) make matters worse, as some plugins operate at a single internal sample rate. Say a plugin operates at 96kHz and you upsample to, say, 176kHz: the plugin downsamples to 96, performs its algorithm and, when complete, upsamples the result back to 176kHz to match the incoming sample rate. What have you gained by upsampling in the first place? You'd have been better off leaving it at 44.1kHz! You've fallen into the old audiophile trap of confusing the bit depth of the audio file with the bit depth of the processing environment! The precision has nothing to do with the bit depth of the file, only with the bit depth of the internal processing of the plugins and of the data connections between those plugins.
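
A tiny Python illustration of that point (the sample values are arbitrary; nothing here is plugin-specific): a 16-bit sample and the same sample zero-padded to 24 bits become exactly the same number once normalised into a float processing environment.

```python
# Padding a 16-bit PCM sample with 8 zero bits to make it "24-bit" adds
# no precision: both normalise to the identical float value.

def pcm16_to_float(sample):
    """Normalise a signed 16-bit PCM sample to [-1.0, 1.0)."""
    return sample / 32768.0          # divide by 2**15

def pcm24_to_float(sample):
    """Normalise a signed 24-bit PCM sample to [-1.0, 1.0)."""
    return sample / 8388608.0        # divide by 2**23

s16 = -12345                 # an arbitrary 16-bit sample
s24 = s16 * 256              # same sample "padded" with 8 zero LSBs

# Division by powers of two is exact in floating point, so these are
# bit-identical, not merely close.
assert pcm16_to_float(s16) == pcm24_to_float(s24)
```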

2. Dither should be the last step here, not the first! If you are changing the volume in your processing environment, which is presumably a 32 or 64bit environment, then the result of that volume change is obviously a 32 or 64bit word length, which you are then truncating to 16 or 24bit because you applied dither before the volume change. However, if you're outputting 24bit to your DAC there is no point in dithering, as truncation artefacts would be way below audibility, even at 16bit truncation artefacts would probably be inaudible. 
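
A sketch of the correct ordering, assuming a simple TPDF dither (illustrative only, not any specific product's ditherer): gain changes happen first in the high-precision domain, and dither plus word-length reduction come last.

```python
# Dither-then-quantize as the FINAL step. If a gain change followed the
# dithering, the result would be re-truncated undithered.
import random

def tpdf_dither_quantize(x, bits=16, rng=random.Random(0)):
    """Quantize a [-1, 1) float to `bits` with triangular-PDF dither."""
    q = 2.0 ** (1 - bits)                        # quantization step (1 LSB)
    dither = (rng.random() - rng.random()) * q   # TPDF noise, +/- 1 LSB peak
    return round((x + dither) / q) * q           # snap to the output grid

signal = [0.25, -0.1, 0.7071]    # arbitrary example samples
gain = 0.5
# Correct order: apply the volume change first, dither/quantize last.
out = [tpdf_dither_quantize(s * gain) for s in signal]
```

Each output value lands exactly on the 16-bit grid and stays within about 1.5 LSB of the gain-adjusted input, which is the whole point of doing this step last.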

G


----------



## pinnahertz

71 dB said:


> ILD problems above 1 kHz are pretty insignificant.


Sorry...huh?  ILD above 1kHz is very significant, and if you cross-feed to correct below 1kHz and ignore above, you'll skew the perceived position of the LF fundamental relative to the HF harmonic or attack content.  You'll break the image into separate sources!  I'm sure you can see that in a complete analysis of HRTF.  And of course, as I've mentioned before, I see no attempt at ITD "correction", which ignores half the spectrum.

I've experimented with ILD-based and ILD/ITD/HRTF-based cross-feed.  Care to guess which works better?  I know you can't whip that last one up with resistors and caps, but since there's a DSP or three pretty much in everything, it's not a problem, it's software.  And already done. 

Still not a preference of mine in either case.


71 dB said:


> Some recordings do have such a "harsh" upper end, that I find a little treble crossfeed beneficial in "softening" the sound, but generally the main issue is lower frequencies. 1 kHz here is a symbolic border between "ILD problems/no ILD problems" areas. As you said, it's gradual transition.


The "harshness" issue isn't fixed by cross-feed, though, either with ILD or ITD alone or in combination.  You have to work in the full HRTF.  And like I said, even then I don't think it works universally.


71 dB said:


> I don't care and hardly even hear the difference. Moving your head while listening to speakers means similar changes to sound. I try to tackle relevant issues of sound reproduction.


Interesting opportunity opens here.  If our positions were reversed on the above, would I be accused of being spatially deaf?

I hope that's over.


----------



## 71 dB

pinnahertz said:


> Even with cross-feed, headphone listening isn't how we hear sounds in the real world.  If you're after the "real world", the only thing that even sort of works is binaural.  Otherwise, the "real world" isn't even a goal in any recording.  Cross-feed simple presents a different perspective of a totally artificial world that may or may not be a subjective improvement.



True, only binaural can reach a very high level of real-world feel, but crossfeed modifies the sound to be, if not "real world", at least more natural and pleasant. The problem with binaural is that you need your own, or at least a very similar, HRTF signature for it to work really well. Crossfeed simulates HRTF at so coarse a level that the result works for anybody. "Real world" is even today a challenge in sound reproduction. Spatial distortion free sound is not.



pinnahertz said:


> Their most expensive offering, the Sennheiser HDVD 800, has this sentence in its description:  "It provides balanced sound, maximum precision and impressive spacial accuracy."  _Spatial accuracy?_  That MUST be cross-feed, right? Nope, none at all.


That's silly marketing talk. Without crossfeed the Sennheiser HDVD 800 doesn't offer spatial accuracy unless the recording happens to be one of the 2 % free of spatial distortion. Rich morons can spend their money on overpriced products like that. I have my 50 bucks DIY headphone adapter with 6-level crossfeed and enjoy real spatial accuracy on almost all recordings (some recordings are bad beyond hope of fixing). That's the benefit of having a degree in electrical engineering and acoustics.

Sennheiser Orpheus has 2-level crossfeed. For the price it's kind of the minimum...


----------



## 71 dB

This board is too active for me! I try to keep up with the pace...



gregorio said:


> 1. That would be unusual.
> 2. You are just making that up, you have no idea. In fact in some genres the kick drum is processed very dry and designed/intended to sound close relative to most other elements of the mix.
> 2a. A real kick drum typically sounds COMPLETELY different, at any distance. In virtually all pop and rock of the last 40 years or so, the kick drum is heavily processed and sounds nothing like a real kick drum. You are again falling into the audiophile trap of making a comparison with reality, when there is no reality in the first place!
> 3. No, it's not! Firstly, a kick drum is typically panned centrally or near centrally, so there is very rarely a large ILD. Secondly, the reason a dry processed kick drum doesn't sound like it's in your head when reproduced with speakers is because of listening room reflections and has little or nothing to do with ILD! Thirdly, when I'm mixing a track with a very dry kick and when I listen with cans, if that kick sounds too present relative to the other elements in the mix, I'll typically add some amount of an "ambience" type reverb. This will likely have little or no impact when played on speakers as the effect will likely be overwhelmed by room reflections but it will move the kick back into a more desirable relative position when listening with cans. HP crossfeeding will damage that processing.
> G


1. Perhaps. Even ILD larger than 6 dB starts to be too much imo.
2. Yeah, but to be in the middle. Dry and mono.
2a. Of course, but that doesn't change how spatial hearing works.
3. Yes, fortunately. Just explaining spatial distortion to you.


----------



## bigshot

Adding reverb softens harsh treble. Maybe he’s thinking of that


----------



## 71 dB

pinnahertz said:


> Sorry...huh?  ILD above 1kHz is very significant, and if you cross-feed to correct below 1kHz and ignore above, you'll skew the perceived position of the LF fundamental with the HF harmonic or attack content.  You'll break the image into separate sources!  I'm sure you can see that in a complete analysis of HRTF.  And of course, mentioned this before...I see no attempt at ITD "correction", ignoring half the spectrum.



You do realize the same kind of thing happens with speakers? Our spatial hearing isn't idiotic. It knows how to "get the picture" as long as the cues make sense. To me crossfeed produces very pinpoint accurate positions and that's not a mystery, because crossfed sound contains spatial information scaled to make sense. I believe the gradual transition helps hearing here. Without crossfeed the image is broken.

Too much ILD above 1 kHz isn't that serious, because lower frequencies mask and the overall level is lower. It's hard to hear that the ILD is 30 dB when it should be 20 dB. The tiny "missing" energy at contralateral ear is masked by other sounds. I believe the opposite is true too: A bit too little ILD above 1 kHz isn't serious. However, below 1 kHz it's so easy to hear that ILD is 8 dB when it should be 3 dB so proper crossfeed below 1 kHz is important.

Sorry, but some of your comments on crossfeed make it hard to believe you have been playing with it for decades. You sound like you heard about crossfeed for the first time in your life 2 weeks ago. The first thing one learns about crossfeed is that it happens at low frequencies and leaves higher frequencies almost untouched.
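
A minimal sketch of that kind of low-frequency-only crossfeed (illustrative Python, not any particular plugin or circuit; the 700 Hz cutoff and 8 dB attenuation are assumed example values, not anyone's actual design):

```python
# Each output channel gets a lowpass-filtered, attenuated copy of the
# OPPOSITE channel, so mainly frequencies below the cutoff are crossfed
# and the highs stay largely untouched.
import math

def crossfeed(left, right, fs=44100, cutoff=700.0, atten_db=8.0):
    a = math.exp(-2 * math.pi * cutoff / fs)   # one-pole lowpass coefficient
    g = 10 ** (-atten_db / 20.0)               # crossfeed gain (linear)
    out_l, out_r = [], []
    lp_l = lp_r = 0.0
    for l, r in zip(left, right):
        lp_l = (1 - a) * l + a * lp_l          # lowpassed left channel
        lp_r = (1 - a) * r + a * lp_r          # lowpassed right channel
        out_l.append(l + g * lp_r)             # left + filtered right
        out_r.append(r + g * lp_l)             # right + filtered left
    return out_l, out_r

# A hard-panned constant (bass-like) signal on the left leaks into the
# right channel at roughly -8 dB once the filter settles.
out_l, out_r = crossfeed([1.0] * 2000, [0.0] * 2000)
```

With real program material, content well above the cutoff is attenuated further by the lowpass, which is what keeps the treble image mostly intact.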


----------



## Strangelove424

WoodyLuvr said:


> Curious to see how you find it in comparison to your other saturation plugins.
> Do you use Replay Gain and/or Advanced Limiter?



TAL seems like a decent quality plugin, but I found that it created too much distortion for my taste. It's going for a classic over-driven guitar tube amp distortion. For a subtle yet sweet saturation effect, similar to the distortion you'd find in an audiophile tube amp, my favorite is still VORslickEQ (available here from Tokyo Dawn - I recommend DLing the version w/ no installer).

I do not use replay gain or limiters. I suppose a lot of this is very subjective, but I like to hear mastering decisions or even mistakes if they are there. If the album is loud, or soft, or even clipped, I like experiencing those things. I avoid DSP clipping by keeping EQ overall gain at "auto" when available, or manually set negative gain in situations where I boosted frequencies. I'm pretty careful not to go over the level I started at. If a song shows up with its own level problems or clipping though, I don't do anything to hide it.


----------



## 71 dB

gregorio said:


> 1. No, you have that backwards! The Decca Tree mic array was invented BECAUSE it translates so well to what our ears/brains understand! It's entirely possible that a Decca Tree recording would sound subjectively better in HPs than speakers, which is why the use of outrigger mics along with a Decca Tree eventually became the popular/preferred setup.
> 
> 2. You can't use a Jecklin Disk with a Decca Tree array, a Jecklin Disk only works with a specific type of mic arrangement: Two small diameter mics placed in a close stereo setup, like an ORTF pair for example. And, as with the more modern usage of a Decca Tree arrangement, there are probably very few classical/orchestral recordings in the last 30 years or more which only use such a simple mic setup, virtually always there would be a number of other mics mixed in.



1. Translates well when listened to with speakers or headphones? Decca Tree produces huge ITD values totally incompatible with our spatial hearing without crossfeed.
2. My cheap DIY Jecklin Disk:


Yeah, Decca Tree and Jecklin Disk don't mix. Night and day.



gregorio said:


> 1. Because applying crossfeed to HPs doesn't make the spatial information less distorted, it can make some aspects of the spatial information MORE distorted!



How? 



gregorio said:


> 2. And how do you know if a recording "was CLEARLY made for speakers"? It probably has been primarily made for speakers but that doesn't mean that some aspects of the mix have not been modified so that it still works relatively well on HPs, un-crossfed HPs!



The less "binaural" it sounds, the more evident it is that it's for speakers. If the recording has been "modified" enough for HPs then I listen to it without crossfeed of course, but that happens rarely.


----------



## jgazal (Dec 8, 2017)

pinnahertz said:


> Even with cross-feed, headphone listening isn't how we hear sounds in the real world.  If you're after the "real world", the *only* thing that even sort of works is binaural.  Otherwise, the "real world" isn't even a goal in any recording.





71 dB said:


> True, *only* binaural can reach very high level of real world feel, but crossfeed modified the sound to be if not "real world" at least more natural and pleasant. The problem with binaural is that you need your own or at least very similar HRTF-signature for it to work really well.



“Only” is such a strong word...

I would say that binaural is the commercially available type of recording that currently offers the lowest distortion, if the reference is the real recorded sound field.

I say lowest because, as pinnahertz suggests, “spatial distortion varies by degree”.

In the case of binaural recordings:



jgazal said:


> What happens when you play binaural recordings through headphones?
> 
> The cues from the dummy head HRTF and your own HRTF cues don’t match and as you turn your head cues remain the same and the 3D sound field collapses.
> 
> ...



Higher order ambisonics is an attempt to lower even further such distortion:



jgazal said:


> The second one is that at high frequencies the math proves that you need too many loudspeakers to be practical:
> 
> 
> 
> ...



The advantage of Ambisonics is that the HRTF is not filtered upstream by a dummy head microphone (where it would be baked in before a further room filtering and a second HRTF filtering over speakers, or before the problems of HRTF mismatch and head movement over headphones). With Ambisonics, the spatial information of the original sound field is encoded by coincident microphones (see eigenmikes), and the user's HRTF is filtered downstream in the reproduction chain: either acoustically, by the room/speakers/listener's head and torso interaction, or by convolving an interpolated PRIR with headphones or, potentially better, a high resolution/density personal HRTF:



jgazal said:


> If ambisonics spherical harmonics decoding already solves acoustic crosstalk at at low and medium frequencies, then the only potential negative variables would be the number of channels for high frequencies and the listening room early reflections. That would be in fact an advantage of convolving a high density/resolution HRTF (when you can decode to an arbitrary higher number of playback channels) instead of interpolated BRIR/HRIR (of sixteen discreet virtual sources for instance), when binauralizing ambisonics over headphones.
> 
> That is one of the reasons why this path may benefit from easier ways to acquire high density/resolution HRTF such as capturing biometrics and searching for close enough HRTF  in databases:



It seems contradictory that someone is still using the expression “spatial distortion” to describe the sound of conventional stereo rendering with two speakers and the consequent acoustic crosstalk, which a Professor of applied physics from Princeton describes as a fundamental problem...

It is even more contradictory to see someone criticizing the use of the expression “spatial accuracy” as “silly marketing talk” if what he does with the expression “spatial distortion” is very similar to justify supposed benefits of a crossfeed electronic circuit.

Instead of “spatial distortion”, I would just say “a more pleasant suspension of disbelief with stereo recordings”...

Do you really believe that an economically viable analog crossfeed electronic circuit can handle all the complex interaction that occurs in the _time domain_ without any digital convolution of binaural room impulses?

That’s why I believe gregorio and pinnahertz were tirelessly trying to say that a crossfeed circuit or plugin can do more harm than good.

Anyway, I will give you the benefit of doubt because you are advocating “a more pleasant suspension of disbelief with stereo recordings”...

71 dB, that’s why I said I have empathy for you: you just don’t want to give up that “spatial distortion” expression because you have put a lot of effort into your analog crossfeed circuit work.

And pinnahertz, please do not accuse me of being delusional if such a paradigm shift is important for matching visual cues and sound cues in virtual reality, unless we want to retrain our cortex every time we enter or leave virtual environments.



Erik Garci said:


> (...). It gets even more interesting when you consider the Hafler PRIR. (...)
> (...)



@Erik Garci, first, thank you very much for leading the tests!

In your opinion, which kind of recordings benefit most from a “Hafler PRIR”?



Erik Garci said:


> My next plan is to add crosstalk-free height channels.



I don’t know how Atmos/DTS:X/Auro height speakers will behave, but I am also curious about how Ambisonics would behave without crosstalk:



jgazal said:


> If ambisonics spherical harmonics decoding already solves acoustic crosstalk at at low and medium frequencies, then the only potential negative variables would be the number of channels for high frequencies and the listening room early reflections. That would be in fact an advantage of convolving a high density/resolution HRTF (when you can decode to an arbitrary higher number of playback channels) instead of interpolated BRIR/HRIR (of sixteen discreet virtual sources for instance), when binauralizing ambisonics over headphones.



In that case, things get more complex, as there would be ipsilateral and contralateral impulses, but also 3 sagittal impulses from each speaker located at 0 azimuth at any elevation (such speakers are therefore coincident with the sagittal plane).

That’s why my question to Smyth got confusing...

Hopefully the Realiser will allow testing the lowest distortion path hypothesis (here, here and here) of HOA eigenmike -> High resolution HRTF -> headphones.


----------



## pinnahertz

71 dB said:


> True, only binaural can reach very high level of real world feel, but crossfeed modified the sound to be if not "real world" at least more natural and pleasant. The problem with binaural is that you need your own or at least very similar HRTF-signature for it to work really well.


The way I see it, having recorded actual binaural with dummy heads and my own ears: yes, to work really well you need your own HRTF, but the general effect works for everyone.  The big problem with binaural is the lack of compatibility with normal speaker playback, which dictates that any given recording must be released both ways... and that's the deal-breaker, not the fact that it's imprecise without your personal HRTF.


71 dB said:


> Crossfeed simulates HRTF at so coarse level that the result *works for anybody. *


I think we've already disproven that. Amazing you even bother to state it that way at this point.


71 dB said:


> "Real world" is even today a challenge in sound reproduction. Spatial distortion free sound is not.


Neither is necessary for enjoyment or immersion in the art.


71 dB said:


> That's silly marketing talk. Without crossfeed the Sennheiser HDVD 800 doesn't offer spatial accuracy unless the recording happens to be one of the 2 % free of spatial distortion. Rich morons can spend their money on overpriced products like that. I have my 50 bucks DIY headphone adapter with 6-level crossfeed and enjoy real spatial accuracy on almost all recordings (some recordings are bad beyond hope of fixing). That's the benefit of having a degree in electrical engineering and acoustics.


Yes, of course, but that's not my point in mentioning it.  The point is: Where's the Cross-feed? 


71 dB said:


> Sennheiser Orpheus has 2 level crossfeed. For the price it's kind of the minimum..


I think I saw it on a FiiO product, if you want that kind of thing.

You don't actually build that cross-feeder and sell it for $50, do you? I would think the time alone would push it into the $100s.


----------



## Erik Garci

jgazal said:


> In your opinion, which kind of recordings benefit most from a “Hafler PRIR”?


Some live recordings sound great with crowd noise and hall reverb that come from the back. Recordings that were matrix-encoded sound great as well, and you might not realize which ones until you listen to them this way.

In addition, for 4.0 or 5.1 recordings, the effect can be flipped for the two discrete surround channels, Ls and Rs. Basically you hear the sum from the center-back speaker (Ls+Rs), and you hear the differences from the center-front speaker (Ls-Rs to left ear, and Rs-Ls to right ear).


----------



## pinnahertz

71 dB said:


> 1. Translates well when listened to with speakers or headphones? Decca Tree produces huge ITD values totally incompatible with our spatial hearing without crossfeed.


I agree the ITD in the Decca Tree is huge, but it also serves a purpose.  Remember, we aren't re-creating a reality, we're representing it by completely artificial means.  The Decca Tree is extensively used, and with good reason.  Be assured pretty much every other mic configuration has been tried.  
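
For a rough sense of the scale being argued about, here's a back-of-envelope Python sketch (the 2 m outer-mic spacing, 45-degree source angle and 8.75 cm head radius are assumed illustrative numbers, not measurements of any particular session):

```python
# Compare the arrival-time difference of a widely spaced mic pair with
# the largest delay a human head can produce between the two ears.
import math

C = 343.0   # speed of sound in air, m/s (at ~20 C)

def mic_pair_delay(spacing_m, angle_deg):
    """Arrival-time difference for a source off-axis of a spaced pair."""
    return spacing_m * math.sin(math.radians(angle_deg)) / C

def max_head_itd(radius_m=0.0875):
    """Woodworth's formula at 90 degrees: ITD = r * (theta + sin(theta)) / c."""
    theta = math.pi / 2
    return radius_m * (theta + math.sin(theta)) / C

decca = mic_pair_delay(2.0, 45)   # ~4.1 ms for a 2 m spaced pair
head = max_head_itd()             # ~0.66 ms, the natural interaural maximum
```

So a spaced array can easily produce inter-channel delays several times larger than anything our ears ever receive naturally, which is the "huge ITD" both sides are referring to.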


71 dB said:


> The less "binaural" it sounds, the more evident it is that it's for speakers. If the recording has been "modified" enough for HPs then I listen to it without crossfeed of course, but that happens rarely.


But you are assuming that you have complete knowledge of what the original intent was!  What if the intent was to have that bass line close to your left ear?  You hate that and can't accept it as intentional, but honestly, you really have no idea.  That's an extreme example, and doesn't happen much anyway, but my point is how the heck do you know what's right?  You insist on imposing your own values on everyone, listeners and creators alike.  Sorry, I just can't go along with that kind of self-righteous viewpoint.  I've always first and foremost wanted to hear what was intended, then move out from that slightly as the need may arise, which is fairly rare.  The only way I'd blindly accept cross-feed is if there was a note on the media that said, "When listening to this recording in headphones, please use 71 dB's cross-feeder."  Otherwise I'm fairly sure they did NOT intend it to be heard that way!

Remember _"Made Loud to be Played Loud?"_ No, of course you don't.


----------



## 71 dB

jgazal said:


> It is even more contradictory to see someone criticizing the use of the expression “spatial accuracy” as “silly marketing talk” if what he does with the expression “spatial distortion” is very similar to justify supposed benefits of a crossfeed electronic circuit.
> 
> Instead of “spatial distortion”, I would just say “a more pleasant suspension of disbelief with stereo recordings”…



“A more pleasant suspension of disbelief with stereo recordings" is fine in a semantic sense, but "spatial distortion" is a much shorter expression and kind of describes that bee-buzz "distortion."



jgazal said:


> Do you really believe that an economically viable analog crossfeed electronic circuit can handle all the complex interaction that occurs in the _time domain_ without any digital convolution of binaural room impulses?



I view the "complex interactions" _statistically_, and in that sense "analog crossfeed electronic circuits" do a fine job. Beliefs are beliefs. To me crossfeed does near miracles, so yes, it can handle it no matter how weird that seems to you. I don't even get the idea that crossfeed gets interactions wrong. Wrong compared to what? Speaker sound? Headphone sound without crossfeed? What is relevant in the sound we hear? Do you enjoy the guitar solo because it is located in exactly the correct position in the sound image (correct according to whom?) or because the guitarist is very good? In another room with other speakers the soundstage will be a little different. Which speaker setup is correct? Oh, neither, because in the studio we have the CORRECT setup and acoustics. That's where the damn album was mixed. So, should I tell myself that my own speakers + room are correct, but headphones + crossfeed are not? Both give sound without excessive ILD even if there are other differences. Headphones without crossfeed, on the other hand, produce excessive ILD and give bee buzz, fake bass, a broken sound image, listening fatigue and what not. Sorry guys, but I am going to call that WRONG if anything. That's why I use crossfeed and recommend it to others instead of "raw" headphone listening.



jgazal said:


> That’s why I believe gregorio and pinnahertz were tirelessly trying to say that a crossfeed circuit or plugin can do more harm than good.


gregorio and pinnahertz, in my opinion, are a) too puristic and b) don't place various things in proper perspective.



jgazal said:


> Anyway, I will give you the benefit of doubt because you are advocating “a more pleasant suspension of disbelief with stereo recordings”...
> 
> 71 dB, that’s why I said I have empathy for you: you just don’t want to give up that “spatial distortion” expression because you have put a lot of effort into your analog crossfeed circuit work.


Thanks!


----------



## jgazal (Dec 8, 2017)

71 dB said:


> Beliefs are beliefs. To me crossfeed does near miracles so yes, it can handle no matter how weird that seems to you. I don't even get the idea, that crossfeed gets interactions wrong. Wrong compared to what? Speaker sound?



Comparable to these two playback environments:



jgazal said:


> (...)
> So if you are fond of crossfeed for headphones that emulates what would be to listen to standard stereo recordings, in the old-fashioned way (i.e. with two speakers in a room and corrupting acoustic crosstalk), just use a PRIR convolution without interpolation and headtracking.
> 
> So far, so good.
> ...



This is verifiable with experiments. I bet bigshot, gregorio and pinnahertz already bought either a Realiser or a Bacch4Mac/BacchDSP to test such a hypothesis.

You can do it also! 



71 dB said:


> So, should I tell myself, that my own speakers + room is correct, but headphones + crossfeed is not? Both give sound without excessive ILD even if there are other differencies. Headphones without crossfeed on the other hand produce excessive ILD, give bee buzz, fake bass, broken sound image, listening fatique and what not. Sorry guys, but I am going to call that WRONG if anything. That's why I use crossfeed and recommend it to others instead of "raw" headphone listening.



 Now I clearly see your point.

Your claim is that headphones + crossfeed are more pleasant than speakers in a room with acoustic crosstalk.

Point taken. I am fine.


----------



## 71 dB

pinnahertz said:


> I agree the ITD in the Decca Tree is huge, but it also serves a purpose.  Remember, we aren't re-creating a reality, we're representing it by completely artificial means.  The Decca Tree is extensively used, and with good reason.  Be assured pretty much every other mic configuration has been tried.


With speakers it does serve a purpose, because acoustic crossfeed makes sure the ITDs at our ears are scaled to "allowed" levels. With headphones without crossfeed things go horribly wrong. ITDs of even 5 ms make no sense at all. Maybe elephants would consider such ITDs ok. I am not a fan of the Decca Tree and I consider it overrated.



pinnahertz said:


> But you are assuming that you have complete knowledge of what the original intent was!  What if the intent was to have that bass line close to your left ear?  You hate that and can't accept it as intentional, but honestly, you really have no idea.  That's an extreme example, and doesn't happen much anyway, but my point is how the heck do you know what's right?  You insist on imposing your own values on everyone, listeners and creators alike.  Sorry, I just can't go along with that kind of self-righteous viewpoint.  I've always first and foremost wanted to hear what was intended, then move out from that slightly as the need may arise, which is fairly rare.  The only way I'd blindly accept cross-feed is if there was a note on the media that said, "When listening to this recording in headphones, please use 71 dB's cross-feeder."  Otherwise I'm fairly sure they did NOT intend it to be heard that way!
> 
> Remember_ "Made Loud to be Played Loud?" _ No, of course you don't.



If the intent was to have that bass line close to my left ear then I think the producer is an idiot. Who in their right mind would have such intents? I am going to assume that the intent was something else, and that's what I get when I listen to the track with speakers. It probably still sounds crappy (ping-pongy), but at least excessive ILD is avoided. If excessive ILD is really intended for some sadistic reason, then listening with speakers is wrong! Stop listening to speakers, because you might miss the excessive ILD intended by lunatic producers! No, I am going to listen to speakers, or to headphones with crossfeed (unless of course the recording is free of spatial distortion), because that makes the most sense to me and, what's most important, provides the most enjoyable sound to me! Kind of a no-brainer.


----------



## 71 dB

jgazal said:


> Your claim is that headphones + crossfeed are more pleasant than speakers in a room with acoustic crosstalk.


No, not more "pleasant". More precise because room acoustics is not convoluted into the music.


----------



## jgazal (Dec 8, 2017)

71 dB said:


> No, not more "pleasant". More precise because room acoustics is not convoluted into the music.



But your headphone is a filter (see Realiser HPEQ) and your torso is also a filter and spectral cues usually are not represented in the content and moving your head is also a dynamic filter.

So “precise” is very debatable.


----------



## 71 dB

pinnahertz said:


> You don't actually build that cross-feeder and sell for $50, do you? I would think the time alone would push it into the $100s.


More than $100, but crossfeed is a great DIY project. The only crossfeeder I have sold was Andy Linkner's 6 level balanced crossfeeder with XLR-connectors and that was $300. It was so much work from design to shipment that it was a "one off".


----------



## castleofargh

An obvious argument against saying that crossfeed is an objective improvement is that it tries to fake 3D acoustics in a clearly oversimplified way; even if it were only a 2D system, we'd still be doing it too simply. Setting up a fixed delay to say we're having speakers at a given position, I'm fine with that. That delay should depend on the head of the user, but let's say we're setting it right individually, and I for one would be fine with that. We're neglecting head movement and the room, so at a psychoacoustic level it's not like we solved stereo, but it's a needed compromise, so I'm good with that. 
But then the FR correction would need to be close to our HRTF for those angles, instead of what's often more of a one-band EQ. We could argue that given how wrong a headphone's own signature will be, this doesn't matter. Or we could argue that because of how wrong it will be, we need a specific EQ even more: one that would account for my HRTF and for the headphone. 

In the end we can certainly think that Xfeed is a step in the right direction, if only because it feels better than nothing to many people (me included) on many albums. But to correctly remove the "spatial distortions", we wouldn't stack stuff over Xfeed to complete the correction. We would remove Xfeed and replace it with the more specific and proper solutions. So in that sense it's more of a walk sideways IMO ^_^.
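The basic recipe described in this thread (a fixed delay, a rough lowpass standing in for head shadowing, and attenuation of the opposite channel) can be sketched in a few lines. Every parameter value here is an illustrative placeholder, not taken from any actual plugin:

```python
import numpy as np

def simple_crossfeed(left, right, fs=44100, delay_ms=0.3,
                     atten_db=-8.0, cutoff_hz=700.0):
    """Minimal crossfeed sketch: each output channel is the input plus a
    delayed, lowpass-filtered, attenuated copy of the opposite channel.
    All parameters are made-up illustrative values."""
    delay = int(fs * delay_ms / 1000)    # interaural delay in samples
    gain = 10 ** (atten_db / 20)         # attenuation as a linear gain

    # One-pole lowpass standing in for head shadowing of high frequencies
    a = np.exp(-2 * np.pi * cutoff_hz / fs)

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = (1 - a) * s + a * acc
            y[i] = acc
        return y

    def delayed(x):
        return np.concatenate([np.zeros(delay), x[:-delay]]) if delay else x

    out_l = left + gain * delayed(lowpass(right))
    out_r = right + gain * delayed(lowpass(left))
    return out_l, out_r
```

With a hard-panned input (all signal on the left), the right output stays silent for the delay time and then carries only the dulled, attenuated bleed, which is exactly the "scaled-down ILD/ITD" effect being argued about here.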


----------



## bigshot

71 dB said:


> No, not more "pleasant". More precise because room acoustics is not convoluted into the music.



I studied design in college and one of the things they taught me was to be true to your materials. Don't use wood grain formica or force a square peg in a round hole. Use your tools to their strengths. Trying to make headphones more like speakers is like that. If you are going to use headphones, think about what it is that headphones do better than speakers and play to that. If you want a sound that is more like speakers, use speakers. The same is true in reverse. A room without any acoustic character might sound closer to headphones, but as speakers, it would sound pretty lousy.

Room acoustics are supposed to be convoluted into the music. It's designed for that. The purpose of room treatment isn't to remove all reflections. It's to remove the problematic ones. You need to have a certain amount of space for the sound to inhabit. That's a big part of what makes speakers sound better than headphones.


----------



## 71 dB

bigshot said:


> If you are going to use headphones, think about what it is that headphones do better than speakers and play to that.


I can listen to headphones at loud level in the middle of the night without disturbing anyone.


----------



## ironmine

gregorio said:


> 1. No, that's not how processing works! You will NOT get either higher precision or less quality loss. Precision is dictated by the plugin's internal bit depth, which is either 32 or 64 bit float. So whether you feed a plugin with 16 bits of data or 16 bits plus 8 additional zeros, the precision and result are identical. Increasing the sample rate is likewise pointless: there's no data above the Nyquist point to process and you don't magically get any from upsampling. The only time a higher sampling rate could make a difference is in the case of some non-linear processes, in which case any decent plugin will internally upsample anyway. Actually, upsampling could (theoretically) make matters worse, as some plugins operate at a single internal sample rate. Say for example a plugin operates at 96kHz: you upsample to say 176kHz, the plugin downsamples to 96, performs its algorithm and when complete upsamples the result back to 176kHz again, to match the incoming sample rate. What have you gained by upsampling in the first place? You'd have been better off if you'd left it at 44.1kHz! You've fallen into the old audiophile trap of confusing the bit depth of the audio file with the bit depth of the processing environment! The bit depth (and precision) has nothing to do with the bit depth of the file but with the bit depth of the internal processing of the plugins and the bit depth of the data connections between those plugins.
> 
> 2. Dither should be the last step here, not the first! If you are changing the volume in your processing environment, which is presumably a 32 or 64bit environment, then the result of that volume change is obviously a 32 or 64bit word length, which you are then truncating to 16 or 24bit because you applied dither before the volume change. However, if you're outputting 24bit to your DAC there is no point in dithering, as truncation artefacts would be way below audibility, even at 16bit truncation artefacts would probably be inaudible.
> 
> G



Gregorio, I advise you to read Bob Katz' book "Mastering Audio: The Art and the Science".

Modern high-quality VST plugins do not operate at fixed internal sample rates. At least, I have never heard of any (and I read the manuals for all the plugins I use). But I do have a number of state-of-the-art plugins which, as an option, perform internal oversampling to achieve higher precision.

1. Variant A = Taking the signal from 44/16 to 176/24, processing it with plugins and outputting the result to the DAC in 176/24.
2. Variant B = Leaving the signal in 44/16 format, processing it with plugins and outputting the result to the DAC in 44/16.

A will give a better, more accurate result than B.

2. No, dithering must not be the last step. Dithering can also cause digital clipping in certain cases. The last step must be the signal level & peak detection monitor. But you are right that volume adjustment must happen *before* dithering, not *after*. My mistake. So, the correct ending sequence in the processing chain must be as follows:

Signal level adjustment
Dithering
Signal level monitoring

The exact type of dithering (TPDF or noise-shaping) depends on whether the signal is to undergo any further processing. If there is further processing, choose TPDF. If not, choose noise-shaping. But, frankly speaking, dithering is more important for signals in 16-bit format. If the signal goes out to a DAC in 24-bit format, dithering can be neglected. As for me, I still apply it, even to 24-bit signals.
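The ordering discussed here (level changes upstream in the float domain, dither added immediately before the word length is reduced, with a final guard against dither-induced clipping) might be sketched like this. The function name and parameter choices are made up for illustration:

```python
import numpy as np

def tpdf_dither_to_16bit(x, seed=0):
    """Reduce float samples in [-1.0, 1.0] to 16-bit integers with TPDF
    dither. Illustrative sketch: dither is the last signal-altering step
    before the word length drops, and a final clip guards against the
    dither pushing a full-scale peak over range."""
    rng = np.random.default_rng(seed)
    scale = 2 ** 15
    # TPDF noise: difference of two uniforms, +/-1 LSB peak amplitude
    dither = rng.random(x.shape) - rng.random(x.shape)
    y = np.round(x * scale + dither)
    return np.clip(y, -scale, scale - 1).astype(np.int16)
```

The clip at the end is exactly the "dither can cause digital clipping" concern above: a full-scale sample plus dither would otherwise overflow the 16-bit range.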


----------



## pinnahertz

71 dB said:


> You do realize the same kind of thing happens with speakers?


Of course, but the material is mixed on speakers, that's the target.


71 dB said:


> Our spatial hearing isn't idiotic. It knows how to "get the picture" as long as the cues make sense. To me crossfeed produces very pinpoint accurate positions and that's not a mystery, because crossfed sound contains spatial information scaled to make sense. I believe the gradual transition helps hearing here. Without crossfeed the image is broken.


I could easily make the same argument in reverse: the image with cross-feed without ITD is broken.


71 dB said:


> Sorry, but some of your comments on crossfeed make it hard to believe you have been playing with it for decades. You sound like you heard about crossfeed 2 weeks ago for the first time in your life.


Again...I could make the same remark in reverse.  I'm afraid you'll either have to accept my history with it or reject it.  I really don't care.  It doesn't change my opinions, and it certainly won't change yours.


----------



## pinnahertz

71 dB said:


> With speakers it does serve purpose because acoustic crossfeed makes sure the ITDs at our ears are scaled to "allowed" levels. With headphones without crossfeed things go horribly wrong. ITDs of even 5 ms make no sense at all. Maybe elephants would consider such ITD ok. I am not a fan of Decca Tree and I consider it overrated.


Ok, but every classical music engineer will disagree with you.



71 dB said:


> If the intent was to have that bass line close to my left ear then I think the producer is an idiot. Who in their right mind would have such intents?


So, no room in your world for free artistic expression?  It's just "wrong"? 


71 dB said:


> I am going to assume that the intent was something else and that's what I get when I listen to the track with speakers. It propably still sounds crappy (ping pongy), but at least excessive ILD is avoided. If excessive ILD is really intented for some sadistic reason then listening with speakers is wrong! Stop listening to speakers, because you might miss excessive ILD intented by lunatic producers! No, I am going to listen to speakers or headphones with crossfeed (unless of course the recording is spatial distortion free) because that makes the most sense to me and what's most important provides the most enjoyable sound to me! Kind of a no-brainer.


Let's see: we have the "producer is an idiot", and not in their right mind.  We have "listening to speakers is wrong". We have "lunatic producers"...it never ends.  I guess it's just you against the world, man.  

So sad.


----------



## pinnahertz

71 dB said:


> I can listen to headphones at loud level in the middle of the night without disturbing anyone.


Great!  Now figure out how to talk about cross-feed without disturbing anyone.


----------



## pinnahertz

ironmine said:


> Modern high-quality VST plugins do not operate at fixed internal sample rates. At least, I never heard about them (and I read the manuals for all the plugins which I use). But I do have a number of the state-of-the-art plugins which, as an option, perform internal oversampling to achieve a higher precision.
> 
> 1. Variant A = Taking the signal from 44/16 to 176/24, processing it with plugins and outputting the result to the DAC in 176/24.
> 2. Variant B = Leaving the signal in 44/16 format, processing it with plugins and outputting the result to the DAC in 44/16.
> ...


Resampling to a different rate doesn't improve anything.  Adding a few bits at the bottom might.  Really, your point here is wrong.


ironmine said:


> 2. No, dithering must not be the last step. Dithering can also cause digital clipping in certain cases. The last step must be the signal level & peak detection monitor. But you are right that volume adjustment must happen *before *dithering, not *after*. My mistake. So, the correct ending sequence in the processing chain must be as follows:
> 
> Signal level adjustment
> Dithering
> Signal level monitoring


Looks like dithering is still the last step, as level monitoring isn't a process that changes anything.


ironmine said:


> The exact type of dithering (TPDF or noise-shaping) depends on whether the signal is to undergo any further processing.  If there is further processing, choose TPDF. If not, choose noise-shaping. But, frankly speaking, dithering more important for signals in 16-bit format. If the signal goes out to a DAC in 24-bit format, dithering can be neglected. As for me, I still apply it, even to 24-bit signals.


Why?  What DAC do you use that has the full 144 dB of dynamic range?


----------



## ironmine

This crossfeed - https://hydrogenaud.io/index.php/topic,90764.0.html
sounds so good. 112dB Redline Monitor finally has a decent rival.


----------



## WoodyLuvr (Dec 9, 2017)

Strangelove424 said:


> TAL seems like a decent quality plugin, but I found that it created too much distortion for my taste. It's going for a classic over-driven guitar tube amp distortion. For a subtle yet sweet saturation effect, similar to the distortion you'd find in an audiophile tube amp, my favorite is still VORslickEQ (available here from Tokyo Dawn - I recommend DLing the version w/ no installer).
> 
> I do not use replay gain or limiters. I suppose a lot of this is very subjective, but I like to hear mastering decisions or even mistakes if they are there. If the album is loud, or soft, or even clipped, I like experiencing those things. I avoid DSP clipping by keeping EQ overall gain at "auto" when available, or manually set negative gain in situations where I boosted frequencies. I'm pretty careful not to go over the level I started at. If a song shows up with its own level problems or clipping though, I don't do anything to hide it.


Apologies for going totally off subject; I moved my post to the more pertinent Sound Science Foobar2k Plug-In Thread.


----------



## ironmine

WoodyLuvr said:


> Yes, I would have to agree that there is sometimes some unwanted distortion on certain tracks... I will give TDR VOS SlickEQ a try.  This is a VST plugin that can be used from within Foobar2k, correct?
> 
> EDIT: Nevermind, I had missed the Windows (no installer VST option)
> 
> Which mode have you been using? Soviet?



I will try TDR VOS SlickEQ also!  I like testing these "digital toys".

Let's see how it compares to what I consider currently the best tube amplifier emulation plugins:






*Waves Tube Saturator v.1 (Tube Saturator Vintage)*





*Waves Tube Saturator v.2 *





*Nick Crow Labs Tube Driver*

I also like the sound of this equalizer (fat and juicy), but I am not sure whether it's trying to emulate any tubes:





*Softube Trident A-Range*

However, when I need an absolutely transparent, surgically precise EQ with no coloration, I use *Fab Filter Pro-Q2.*


----------



## WoodyLuvr

ironmine said:


> I will try TDR VOS SlickEQ also!  I like testing these "digital toys".
> 
> Let's see how it compares to what I consider currently the best tube amplifier emulation plugins:
> 
> ...


Awesome; thank you for sharing.  I'd really be interested in your thoughts on each.  I think we should move this tube plugin conversation over to the Tube Plugin Thread *here*.  I already moved my last post as well, as I really have strayed from the crossfeed topic. Cheers.


----------



## castleofargh

ironmine said:


> This crossfeed - https://hydrogenaud.io/index.php/topic,90764.0.html
> sounds so good.   112dB Redline Monitor finally has a decent rival.


"Finally" ^_^. It was one of the first xfeed VSTs I used in foobar. 



Mod talk: please refrain from making personal attacks like you did. Katz's book is a classic for sure, but you might want to also have a look at Head-Fi's rules. Not a best seller by any means, but fairly relevant to being on the forum.


----------



## 71 dB

pinnahertz said:


> Great!  Now figure out how to talk about cross-feed without disturbing anyone.


Oh, I didn't realize I am among special snowflakes who get triggered by microaggressions and need their safe space without pro-crossfeed propaganda.

Let's try talking about cross-feed without disturbing anyone:

_Hey guys, something must be wrong with my head, because I don't always want to listen to the music with headphones as it was intended; instead I mess with the intended excessive ILD and enjoy natural and fatigue-free sound. That can't be the intent, now can it? Who'd produce music to sound natural and fatigue-free? That's crazy, so something must be wrong with me. Should I see the doctor?
_
Is that better my friend, or are you still disturbed?


----------



## WoodyLuvr

Oh my, why is everyone so hostile and mean-spirited lately on Sound Science?


----------



## 71 dB

pinnahertz said:


> 1. Ok, but every classical music engineer will disagree with you.
> 
> 2. So, no room in your world for free artistic expression?  It's just "wrong"?
> 
> ...


1. I don't think BIS has ever used a Decca Tree, and Jürg Jecklin definitely doesn't disagree with me.
2. If excessive ILD is the main point of musical artistic expression, then maybe music isn't my cup of tea…
3. I don't consider producers idiots, because I don't have delusions that they mixed for headphones. What could you do in 1970? You didn't have DAWs with amazing spatial plugins. You had amplitude panning and you used it to get the kind of sound that appeared good with speakers at that time. Yes, listening to speakers is wrong if you want to hear the original excessive ILD. You need headphones without crossfeed for that. Listening to speakers involves acoustic crossfeed + room acoustics, which kind of regulates ILD at low frequencies: no matter what the recording is (mono, ping pong, anything), what comes to your ears has an ILD of about 3 dB at low frequencies. With headphones, mono gives 0 dB and ping pong gives maybe 50 dB (limited by the leakage of headphones).
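The dB figures in point 3 follow directly from the definition of ILD as a level ratio. A tiny calculation makes the headphone extremes explicit (the helper name and the 0.3% leakage figure are purely illustrative assumptions):

```python
import math

def ild_db(near, far):
    """Interaural level difference: the ratio between the two ear
    signal levels, expressed in dB (hypothetical helper)."""
    return 20 * math.log10(near / far)

# Mono on headphones: both ears get the identical signal -> 0 dB ILD.
# Hard-panned ("ping pong") material: the far ear hears only leakage.
# Assuming, for illustration, leakage at ~0.3% of the near-ear level,
# the ILD works out to roughly 50 dB.
```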


----------



## gregorio (Dec 9, 2017)

71 dB said:


> Our spatial hearing isn't idiotic. It knows how to "get the picture" as long as the cues make sense.



No, our brain will try to make sense of whatever it's given! Whether or not the "sense" it ends up creating is likeable/preferable, depends on the individual.



71 dB said:


> [1] True, only binaural can reach very high level of real world feel, but crossfeed modified the sound to be if not "real world" at least more natural and pleasant. [2] The problem with binaural is that you need your own or at least very similar HRTF-signature for it to work really well. [3] Crossfeed simulates HRTF at so coarse level that the result works for anybody. [4] "Real world" is even today a challenge in sound reproduction.



1. No, binaural is not good at creating a real world feel, it's only good at capturing and reproducing the real world of the sound which enters your ears!
2. No, that's only one of the problems with binaural.
3. It's a coarse simulation of a simulation, so pretty much it doesn't "work" for anybody! That doesn't mean to say it can't be preferred by some though.
4. It's not a particular challenge but that's irrelevant anyway because almost no one is trying to create a "real world" sound recording or reproduction because in the real world no one perceives the actual sound entering their ears!

You just keep coming back to referencing the "real world", a real world which usually does not exist and, even when it does, that's not what we're trying to record or reproduce anyway!



71 dB said:


> 2a. Of course, but that doesn't change how spatial hearing works.



Absolutely it does! It wouldn't make a difference if your hugely oversimplified concept of how "spatial hearing works" were correct but as it's incorrect, processing absolutely can and does change our spatial perception!



71 dB said:


> 1. Translates well when listened to with speaker or headphones? [1a] Decca Tree produces huge ITD values totally incompatible with our spatial hearing without crossfeed. ... ITDs of even 5 ms make no sense at all. [1b] Maybe elephants would consider such ITD ok. [1c] I am not a fan of Decca Tree ...
> [2] How?
> [3] The less "binaural" it sounds the more evident it is that it is for speakers. If the recording has been "modified" enough for HPs then I listen to it without crossfeed of course, but that happens rarely.



1. Potentially either, depending.
1a. Well, that depends. A Decca Tree would produce fairly large ITD values, which *might* be incompatible with our spatial hearing if it were just a two mic stereo pattern but it's not, it's a 3 mic array!! So, what ITDs of 5ms are you talking about? Firstly, a Decca Tree is never placed on the same horizontal plane as the instruments so at most we're talking about around 4ms timing difference between mics B and C but if we take the violins as an example, between mics A and B the difference would be variable. Some of the violins might have a timing difference up to about 4ms between mic A and B, other violins would have 0ms and most anywhere between 0 and about 3ms. Mic A is panned to the centre of the stereo image, in this example acting effectively as a crossfeed. Being a triangle, we've always got this type of crossfeed interaction between all the mics! Additionally, the mics are not just recording the instruments but also the reflections of those instruments from all the surfaces in the recording venue. So we've now got a bunch of signals, all over the place with varying timing differentials from 0ms up to about 2 secs or so. And furthermore, we've generally got a relatively small ILD difference between A, B and C, although that varies with frequency with a Decca Tree. But heck, let's just crossfeed it anyway!
1b. Yes, although according to your "simplified to the point of nonsense" it would have to be an elephant with 3 ears sitting 10 feet above the conductor!
1c. Yes, agreed. It can produce a rather dry and narrow/overly focused image, which is why we almost never use a Decca Tree on its own; even quite early on it was commonly paired with outrigger mics. But oh no, now our elephant is going to need 5 ears and two of them would need to be about 50 feet apart! Then there's room mics and spot mics and an elephant with up to around 40 ears placed in the most ridiculous positions! Questions: A. How do you know when you're listening to a recording which used only a Decca Tree? B. If the vast majority of the recordings you're listening to have a Decca Tree in combination with various other mics, how do you isolate the Decca Tree mics from all the others to conclude that you're not a fan of Decca Tree?
2. Already explained! We have a complex interaction of level differences, reflections and timing differences, plus frequency factors such as masking, absorption, cancellations and summing and you're reducing all of that down to a simplistic crossfeed with 250ms delay.
3. And how much is "enough" modification? Regardless of the amount of modification, you are damaging or destroying it with crossfeed. Effectively then, it's just your subjective preference for improving some aspects of spatial positioning while damaging/destroying others!
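The millisecond figures in 1a come straight from path-length geometry; a minimal sketch (assuming the usual ~343 m/s speed of sound in room-temperature air) makes the scale explicit:

```python
# Path-length geometry behind the ITD figures above. 343 m/s is the
# commonly used speed of sound in air at around 20 °C.
SPEED_OF_SOUND_M_S = 343.0

def arrival_delay_ms(extra_path_m):
    """Delay, in milliseconds, for sound that travels extra_path_m
    further to reach one microphone than another."""
    return 1000.0 * extra_path_m / SPEED_OF_SOUND_M_S

# ~1.5 m of extra path (a source far off to one side of spaced mics)
# already gives a bit over 4 ms, the order of magnitude quoted above.
```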



71 dB said:


> [1] With speakers it does serve purpose because acoustic crossfeed makes sure the ITDs at our ears are scaled to "allowed" levels. [2] With headphones without crossfeed things go horribly wrong.
> [3] If the intent was to have that bass line close to my left ear then I think the producer is an idiot. Who in their right mind would have such intents?



1. A. There are no "allowed" or disallowed levels, only levels we might or might not find subjectively more pleasing. B. Despite being told numerous times, you ignore the obvious fact that "acoustic crossfeed" has the word "acoustic" in it and therefore includes acoustics! You then repeatedly ask "how it's different", because obviously if you're going to ignore acoustics, the interaction and perception of acoustics, then there is no difference!
2. Things can go somewhat wrong, other things can go somewhat wrong caused by crossfeed. Which wrong things do you prefer? You've made clear your preference but that's not a preference shared by all and we don't have to be idiots or delusional not to share your preference!!
3. Again, your judgement and preference. Maybe I would think that producer was an idiot too, maybe the producer really is an idiot but maybe I've just missed the point. Almost all musical developments throughout the history of music (and it's production) have been negatively received by some/many. Regardless, I want to hear what the producer/artists intended, even if they were all idiots!



71 dB said:


> No, not more "pleasant". More precise because room acoustics is not convoluted into the music.



That's obviously not true, although maybe it is to you because you're ignoring room acoustics but then if you're going to ignore room acoustics how come you're making statements about it?



71 dB said:


> gregorio and pinnahertz in my opion are a) too puristic and b) don't place various things in proper perspective.



Make up your mind, which is it, are we too puristic or spatially ignorant? We can't be both! In actual fact we are neither. You are fixated on some "real world" which does not exist and which we're not trying to recreate anyway and therefore effectively talking about eliminating spatial distortion from recordings which are deliberately massively spatially distorted! But many of your assertions to support your preference are simply not true and/or based on incorrect conclusions/assumptions. I'm not even going to get into your 1kHz crossfeed crossover, which would sometimes result in some really bizarre artefacts but if you like it that way, that's your choice.

G


----------



## 71 dB

pinnahertz said:


> 1. Of course, but the material is mixed on speakers, that's the target.
> 2. I could easily make the same argument in reverse: the image with cross-feed without ITD is broken.
> 
> 3. Again...I could make the same remark in reverse.  I'm afraid you'll either have to accept my history with it or reject it.  I really don't care.  It doesn't change my opinions, and it certainly won't change yours.


1. Yes, that has been my point all the time. The material is mixed on/for speakers.
2. Without ITD? Crossfeed doesn't nullify ITD.
3. I accept your fine history. You must be talented and hard working according to what you have accomplished. Most of what you write on this board I agree with totally or learn from, but for some reason crossfeed is something we disagree a lot about. Maybe it's because I listen to modern classical recordings a lot and those recordings have great spatiality that makes crossfeed shine. I did not listen to 70's rock in my youth. My father is a jazz guy who thinks all rock music is for idiots. So instead of hearing The Who, my childhood was filled with Max Roach's drum solos and Clifford Brown's virtuosity on speakers. My music listening was passive up until about 1988 when I was 17. I got into acid house and the following trends of modern electronic dance music. I got into classical music in 1997. I didn't listen to any rock music until 2001 when I was 30. I had thought pretty much all music from the 70's sucked, but then I discovered Tangerine Dream and King Crimson in 2008, both almost unknown in Finland. Spotify is good for exploring. The point is I discovered the 70's music with excessive ILD at an older age, so I don't have childhood nostalgia for it like you seem to have. My concept of the 70's is more from TV, which of course was monophonic, but hey, we got a color TV in 1975 I think. I was 4.


----------



## WoodyLuvr

So to get back to the question crossfeed or not?  What is the consensus?


----------



## 71 dB

gregorio said:


> Already explained! We have a complex interaction of level differences, reflections and timing differences, plus frequency factors such as masking, absorption, cancellations and summing and you're reducing all of that down to a simplistic crossfeed with 250ms delay.


You call a hard panned ping pong stereo recording something that contains "complex interaction of level differences, reflections and timing differences"? Sorry, but it doesn't. Ping pong is spatial nonsense and it has to be modified into something that makes sense to our spatial hearing. Crossfeed is one method to do that. Crossfeed is simple, but at least it's _something_. No crossfeed is _nothing_. 250 µs (not 250 ms!) delays are much better than nothing. It amazes me how you keep thinking excessive ILD/ITD are ok. They are spatial nonsense, fit for elephants.
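For reference, the basic idea is tiny. A rough numpy sketch of a simple crossfeed: a delayed, lowpass-filtered copy of each channel fed into the other (the delay, cutoff and attenuation values below are illustrative only, not taken from any particular plugin):

```python
import numpy as np

def crossfeed(left, right, fs=44_100, delay_us=300, cutoff_hz=700, atten_db=-9.0):
    """Feed a delayed, lowpass-filtered copy of each channel into the other.

    Parameter values are illustrative, not from any particular plugin.
    """
    delay = max(1, int(round(fs * delay_us / 1e6)))   # delay in samples
    gain = 10 ** (atten_db / 20)                      # linear crossfeed gain

    # one-pole lowpass; coefficient derived from the cutoff frequency
    a = np.exp(-2 * np.pi * cutoff_hz / fs)

    def lp_delay(x):
        y = np.empty_like(x)
        state = 0.0
        for i, s in enumerate(x):
            state = (1 - a) * s + a * state           # lowpass
            y[i] = state
        return np.concatenate([np.zeros(delay), y[:-delay]])  # then delay

    out_l = left + gain * lp_delay(right)
    out_r = right + gain * lp_delay(left)
    return out_l, out_r
```

Run that on a hard-panned ping pong recording and even something this crude pulls the image into a place our spatial hearing can accept.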


----------



## 71 dB

WoodyLuvr said:


> So to get back to the question crossfeed or not?  What is the consensus?


Crossfeed is the most controversial topic in the history of humankind. There is no hope of consensus if you ask me. Peace between the Palestinians and Israel would be easier than consensus on crossfeed.


----------



## WoodyLuvr

castleofargh said:


> "finally" ^_^. it was one of the first xfeed VST I used on foobar.
> 
> 
> 
> modo talk: please refrain from making personal attacks like you did. Katz book is a classic for sure. but you might want to also have a look at headfi's rules. not a best seller by any mean, but fairly relevant to being in the forum.


You would recommend this VST (xfeed) over the Case Meier version?


----------



## ironmine (Dec 9, 2017)

castleofargh said:


> "finally" ^_^. it was one of the first xfeed VST I used on foobar.



This crossfeed would benefit greatly if it had a mid/side level adjustment. To my ears, it makes the center too hollow, so I like putting in front of this crossfeed some other plugin (e.g., Fabfilter Pro-Q2) that can reduce the level of the side channel.

Anyway, so far, 112 dB Redline Monitor remains my default crossfeed. It just preserves more details and the bass sounds fabulously natural in it. Also, it has a very user-friendly interface with the settings which are understandable and do their job nicely.  I can bring "speakers" closer for more details, or I can move them away for a more relaxed sound. I can also adjust the volume of the center to make the soundstage either flat or curved forward (with the center closer to me than the sides) or curved outwards (with the center distanced further from me than the sides).


----------



## WoodyLuvr

ironmine said:


> This crossfeed would benefit greatly if it had a mid/side level adjustment. To my ears, it makes the center too hollow, so I like putting in front of this crossfeed some other plugin (e.g., Fabfilter Pro-Q2) that can reduce the level of the side channel.
> 
> Anyway, so far, 112 dB Redline Monitor remains my default crossfeed. It just preserves more details and the bass sounds fabulously natural in it. Also, it has a very user-friendly interface with the settings which are understandable and do their job nicely.  I can bring "speakers" closer for more details, or I can move them away for a more relaxed sound. I can also adjust the volume of the center to make the soundstage either flat or curved forward (with the center closer to me than the sides) or curved outwards (with the center distanced further from me than the sides).


Okay, I will consider buying that then. How about the VST chainer Console by ART-Teknika (http://www.console.jp/en/), is that Windows 10 friendly as well?


----------



## 71 dB

gregorio said:


> That's obviously not true, although maybe it is to you because you're ignoring room acoustics but then if you're going to ignore room acoustics how come you're making statements about it?


A typical room has more reverberation and worse acoustics than a studio. The studio sits somewhere between the no-acoustics of headphones and the too-much-acoustics of living rooms. Both headphones and speakers are "wrong", just in opposite directions. In the studio we have acoustic crossfeed, and with speakers at home we have acoustic crossfeed. With headphones we have either nothing or something such as electronic crossfeed. If this opinion makes me ignorant then...



gregorio said:


> Make up your mind, which is it, are we too puristic or spatially ignorant? We can't be both! In actual fact we are neither. You are fixated on some "real world" which does not exist and which we're not trying to recreate anyway and therefore effectively talking about eliminating spatial distortion from recordings which are deliberately massively spatially distorted! But many of your assertions to support your preference are simply not true and/or based on incorrect conclusions/assumptions. I'm not even going to get into your 1kHz crossfeed crossover, which would sometimes result in some really bizarre artefacts but if you like it that way, that's your choice.
> 
> G


Purism and ignorance aren't mutually exclusive; in fact lack of understanding can lead to dogmatic purism. For example a purist may assume that downsampling hi-res audio to 16/44.1 reduces audible resolution because he/she doesn't understand what happens sonically and how 16/44.1 is all you need in consumer audio. Similarly someone may think crossfeed messes up spatial information. The opposite is true: messed-up spatiality is scaled to be less messed up. Elephant audio becomes human audio, just as downsampling 192 kHz to 44.1 kHz turns bat audio into human audio.

I have listened to my crossfeeders half a decade for countless hours and never have I encountered "really bizarre artefacts" unless you mean natural pleasant sound is a bizarre artefact. To me excessive ILD/ITD is the source of spatial artefacts. So yes, crossfeed is my choice.


----------



## 71 dB

gregorio said:


> 1. A. There are no "allowed" or disallowed levels, only levels we might or might not find subjectively more pleasing. B. Despite being told numerous times, you ignore the obvious fact that "acoustic crossfeed" has the word "acoustic" in it and therefore includes acoustics! You then repeatedly ask "how it's different", because obviously if you're going to ignore acoustics, the interaction and perception of acoustics, then there is no difference!
> 2. Things can go somewhat wrong, other things can go somewhat wrong caused by crossfeed. Which wrong things do you prefer? You've made clear your preference but that's not a preference shared by all and we don't have to be idiots or delusional not to share your preference!!
> 3. Again, your judgement and preference. Maybe I would think that producer was an idiot too, maybe the producer really is an idiot but maybe I've just missed the point. Almost all musical developments throughout the history of music (and its production) have been negatively received by some/many. Regardless, I want to hear what the producer/artists intended, even if they were all idiots!


1. The limit (set by physical reasons) for ITD is about 640 µs for sounds that don't originate very close to the head, and 700-800 µs for close sounds. To me mixing music to be very close to your head doesn't make much sense, so I'd limit ITD to 640 µs. If very close sound is the intent, then such intent isn't respected by speakers either. Acoustic crossfeed is of course different from electronic crossfeed, but both reduce ILD/ITD information following the same principle: more at low frequencies and less at higher frequencies. Crossfeed is much closer to acoustic crossfeed than no crossfeed, so why would anyone choose no crossfeed?
2. Which wrong do I prefer? Crossfeed wrong of course, because I don't even recognize it as being wrong, whereas spatial distortion feels very wrong. People who like crossfeed a lot exist in numbers, but even if I were the only one I would believe in it. How often are the masses right? The scientific facts of human hearing are what they are, despite the weird preferences people hold about spatiality in music for technical, cultural, nostalgic and commercial reasons.
3. Well, make a call to those producers and artists and ask whether they want their music listened to with or without crossfeed. The answer might surprise you. I don't need to call anyone, because I use my own head to figure out the proper crossfeed setting.
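The ~640 µs figure isn't arbitrary, by the way; it falls out of a simple spherical-head model. A rough sketch using Woodworth's formula (the 8.75 cm head radius is a common average value, real heads vary):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head ITD model for a distant source.

    ITD = (r / c) * (theta + sin(theta)), with theta the azimuth in radians,
    r the head radius and c the speed of sound in m/s.
    """
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

# maximum ITD: source at 90 degrees, directly to one side
max_itd_us = woodworth_itd(90) * 1e6
```

At 90° azimuth this gives roughly 650 µs, which is why ITDs much beyond that read as spatial nonsense to human hearing.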



gregorio said:


> That's obviously not true, although maybe it is to you because you're ignoring room acoustics but then if you're going to ignore room acoustics how come you're making statements about it?


I don't ignore room acoustics, but headphones don't have room acoustics! Do you want your no room acoustics with or without spatial distortion? That's the question here.


----------



## jgazal (Dec 17, 2017)

gregorio said:


> 4. It's not a particular challenge but that's irrelevant anyway because _*almost*_ no one is trying to create a "real world" sound recording or reproduction because in the real world no one perceives the actual sound entering their ears!
> 
> You just have to keep coming back to referencing the "real world", a real world which usually does not exist and even when it does, _*that's not what we're trying to record or reproduce anyway*_!



I am glad you used the word almost.

It is always a matter of reference or perspective, isn’t it?

But “almost no one” may soon become, or may already be, the wrong _*degree*_.

Let’s see a few examples:

*1. Sennheiser Ambeo *(first order ambisonics)



>




*2. YouTube VR*

YouTube VR recommends content with first-order ambisonics, probably downmixed to binaural with a generic HRTF. Potentially better with a Realiser crossfeed-free PRIR.
Attention: use the YouTube app to rotate your visual point of view in the monoscopic 360-degree video!
_*2.A. Cindy Crawford closet - Vogue*_

Important demonstration for wives!

Cindy's voice needs to derotate in order to enhance the immersion.

Mobile devices and tablets feed the tracking. Probably better with crosstalk cancellation on speakers or crossfeed-free externalization on headphones.

You may want to follow these instructions:



jgazal said:


> Try to listen with loudspeakers in the very near field at more and less +10 and -10 degrees apart and with two pillows one in front of your nose in the median plane and the other at the top of your head to avoid ceiling reflections (or get an IPad Air with stereo speakers, touch your nose in the screen and direct the loudspeakers sound towards your ears with the palm of your hands; your own head will shadow the crosstalk).





>




*2.B. Bewitched love - Manuel de Falla - Orchestre national d'Île-de-France*

Audio content that derotates according to the visual point of view! This is a spot-microphone mix, probably with binaural synthesis.



>




*2.C. Showcase/Showdown Eigenbeams/Ambisonics - mh acoustics*

Audio content that derotates according to the visual point of view! This is an Eigenmike recording, probably downmixed to first-order ambisonics, then downmixed again to binaural with a generic HRTF, and finally streamed through YouTube.

With HOA streaming, a personalized HRTF and a Realiser A16, this would be the lowest-distortion path available (what Professor Choueiri calls “being fooled by audio”).



>






>






>




Can you imagine that with a stereoscopic video?

Ping pong - compare with Ricoh below...



>




*3. Netflix VR *

Probably atmos bed and objects downmixed to binaural with generic HRTF. Potentially better with a Realiser crossfeed free PRIR.



>




*4. Google Daydream “Fantastic Beasts” VR*

Maybe  a mix of first order ambisonics and objects? Or maybe full binaural synthesis (similar to BACCH-dSP)? IDK.



>




*5. Ricoh Theta V with “spatial audio”*

Some frustrating content done with 360 monoscopic cameras in which the audio does not derotate.

Probably *not* Ambisonics.

Just shown to demonstrate the potential of home-made, music-oriented video distribution with spatial accuracy:



>




Ping pong - compare with eigenmike above...



>




You may see that those are not minor players in the entertainment market...



gregorio said:


> (...)
> 3. Again, (...). Maybe I would think that producer was an idiot too, maybe the producer really is an idiot but maybe I've just missed the point. *Almost all musical developments throughout the history of music (and its production) have been negatively received by some/many*. Regardless, I want to hear what the producer/artists intended, even if they were all idiots!
> (...)
> Make up your mind, which is it, are we too puristic or spatially ignorant? We can't be both! In actual fact we are neither. (...)
> G



If you think about Google Daydream VR, Netflix VR, Samsung VR, Facebook virtual hangouts etc, you will realize that they are whole highways in which immersive sound content formats can reach consumer’s virtual environments.

You just need to use them.

I guess consumers don’t care about the bit depth or the sample rate of distributed content, but I believe they are going to click on sponsored pages and happily pay for streaming/downloads that render audio spatial immersion that is accurately correlated with visual cues.

That is a huge incentive for mastering engineers to pursue spatial accuracy with reference to the original sound-field or virtually designed visual cues, don’t you think?

I know the number of streams on Spotify, Deezer, Pandora, TuneIn Radio etc. is higher than music streamed with video formats, but is that an intrinsic barrier?

Is there any other objection to distributing music content through such distribution channels and consumers’ virtual environments?

There is a reason why Google has chosen - ambisonics - to distribute vr content in YouTube VR and Daydream VR.

Perhaps you may financially benefit from reducing your negativity about shifting your reference to the real sound-field or virtually designed visual cues.


----------



## gregorio

71 dB said:


> [1] You call a hard panned ping pong stereo recording something that contains "complex interaction of level differences, reflections and timing differences"?
> [2] Crossfeed wrong of course because I don't even recognize it being wrong whereas spatial distortion feels very wrong.



1. Where did I call it that? In fact I called it something quite different but you've chosen to misrepresent what I said, why I wonder?

2. That is EXACTLY my point, thanks for admitting it! It's all about what YOU are able to "recognise" and of course conversely, what YOU are not able to recognise. I am able to recognise that it's wrong (and even more so if I only crossfed below 1kHz), while you are not able to recognise it's wrong. That's not the issue, the issue is that you're calling me spatially ignorant when you are the one incapable of recognising it and you're calling me and everyone else who disagrees with you idiotic and delusional based on YOUR INABILITY!! This is probably a good place to leave this line of discussion! 



jgazal said:


> It is always a matter of reference or perspective, isn’t it?



Particularly in this case, as I've been talking about commercial stereo music releases!

G


----------



## gregorio (Dec 9, 2017)

ironmine said:


> (A) Gregorio, I advise you to read Bob Katz' book "Mastering Audio: The Art and the Science".
> (B) Modern high-quality VST plugins do not operate at fixed internal sample rates. At least, I never heard about them (and I read the manuals for all the plugins which I use). But I do have a number of the state-of-the-art plugins which, as an option, perform internal oversampling to achieve a higher precision.
> (C) 1. Variant A = Taking the signal from 44/16 to 176/24, processing it with plugins and outputting the result to the DAC in 176/24.
> 2. Variant B = Leaving the signal in 44/16 format, processing it with plugins and outputting the result to the DAC in 44/16.
> ...



A. Why would you advise me to read a book I've already read and discussed with the author?

B. Some do have fixed sample rates. Most have variable rates, some have oversampling options and some upsample/downsample without mentioning it in the manual. Upsampling does NOT increase precision, how can there be higher precision from processing frequencies which do not exist! Why isn't that obvious to you?

C. What have those two variants got to do with your original statement or my response? An at least equally and possibly more accurate variant (which you haven't mentioned?) than your variant A would be: 44/16 (without upsampling), processing it with plugins and outputting the result to the DAC at 44/24. Although technically more accurate, it almost certainly wouldn't be audibly better than outputting at 44/16. Again, how can processing 16 bits + 8 additional zeros be higher precision than processing 16 bits + no additional zeros, when both are calculated at 64bit float precision?

D. I suggest YOU read Bob Katz' book "Mastering Audio: The Art and the Science"!!!
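A quick numpy sketch of point C: promoting a 16-bit sample to a 24-bit container just appends 8 zero bits, and once normalised the values are identical at 64-bit float precision (the sample values below are arbitrary):

```python
import numpy as np

# a few arbitrary 16-bit PCM sample values
s16 = np.array([12345, -32768, 1, 0], dtype=np.int16)

# "24-bit" container: the same samples shifted up by 8 bits,
# i.e. 16 significant bits followed by 8 zeros
s24 = s16.astype(np.int32) << 8

# normalise both to [-1, 1) at 64-bit float precision
f_from_16 = s16 / 32768.0        # divide by 2^15
f_from_24 = s24 / 8388608.0      # divide by 2^23

# bit-identical: the padding added no information
same = np.array_equal(f_from_16, f_from_24)
```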

G


----------



## jgazal

gregorio said:


> Particularly in this case, as I've been talking about commercial stereo music releases!
> G



I liked that!

Added a new video, now with music oriented content.


----------



## 71 dB

gregorio said:


> That is EXACTLY my point, thanks for admitting it! It's all about what YOU are able to "recognise" and of course conversely, what YOU are not able to recognise. I am able to recognise that it's wrong (and even more so if I only crossfed below 1kHz), while you are not able to recognise it's wrong. That's not the issue, the issue is that you're calling me spatially ignorant when you are the one incapable of recognising it and you're calling me and everyone else who disagrees with you idiotic and delusional based on YOUR INABILITY!! This is probably a good place to leave this line of discussion!



I don't care about errors I can't hear/notice, and even if I did, I'd have to compare pros and cons. The pros of crossfeed are massive. The cons are theoretical. So, I wasn't admitting that much. Maybe your ears are superior to my ears and somehow welcome excessive ILD/ITD information beyond the theories of spatial hearing, but I can only use my own ears to listen to music. My ears seem to obey the theories of human spatial hearing. So, I apologize for calling you ignorant while not realizing you are an überhuman.


----------



## castleofargh

WoodyLuvr said:


> You would recommend this VST (xfeed) over the Case Meier version?


it was my go to VST because it would let me have options others didn't have, or had but named or applied differently and I didn't always understand what they were doing. with my big weird head, I need more customization than the average guy who might be super happy with default crossfeed settings for the average human(I always suspected the horns to mess my HRTF).
anyway I happened to both understand and find some pleasing setting with Xnor's crossfeed. but 112dB is very nice too, the main flaw I found was that it's not free ^_^. but I used the trial period till the end, and I suggest you do too. for us amateur dudes, there is also the good old TB Isone suite(that you can also try for free if they didn't change that). it's perhaps one of the easiest approach because it has a lot of stuff packed together. and plenty of knobs to play with. nothing revolutionary IMO, and we can certainly just use various VSTs to get to the same sort of results or even more custom ones. but Isone is, again IMO, user friendly.

in the end it's really just a matter of how well something works for your own head/taste. the best stuff for me might feel like crap for you.  my criteria are usually to be able to set things to feel like the side sounds come from my speakers instead of other angles. and also to have a center image that isn't too screwed up. which isn't always as easy as it might seem. for that @ironmine's comment is on point. some old stuff had what I imagine to be massive comb effect(somehow I never bothered to measure the output to check). some stuff like the "studio" surround setting on sony walkmans started with a totally crippled center image. it was a massacre. and in the recent years, they have improved on that a little and now it's pretty nice(to me). but it's a bit more than just xfeed. in the end how they technically deal with the center image is only relevant in how right it feels to you. 



WoodyLuvr said:


> Oh my why is everyone so hostile and mean-spirited lately on Sound Science?


winter is coming, we all get depressed and grumpy. the science people talk about having less light, the change in temperature and stuff like that. but I totally blame Xmas songs coming way too soon. I can handle them for a family weekend, but I cannot live with that crap for 2 months!!!!! if there is a war on Xmas like I see on US tv, then how come I'm not protected by the Geneva convention as prisoner of war?  
we're having real problems here!


----------



## bigshot (Dec 9, 2017)

I'm getting more and more jolly as time goes by! I see the "regular suspects" tossing hand grenades into the discussions and I just laugh at them. Then I laugh at the inevitable sputtering outrage it causes. If you look at it all as a slapstick comedy, it's a lot of fun. Just imagine posters as the Marx Brothers and Margaret Dumont.






Don't point that beard at me! It might be loaded!


----------



## 71 dB

What I feel after weeks of debating here is what am I doing here? Am I contributing in any way? Can I help someone? All I manage to do is make other people feel bad. I feel bad myself too. Everything feels so pointless. I'm not going to change my mind about crossfeed at this point and neither will anyone else here, I believe, so what the *** are we doing? Why don't we spend the time we waste here on people important to us, people who are part of our life? Keep debating, people, if you want, but I have had enough of this. Use crossfeed or don't, it's your choice. I have made my choice and you know what it is...


----------



## bigshot

Some advice...

Don't grab on so hard.
Don't get even more argumentative when people get argumentative with you. That just escalates things.
Sort out the wheat from the chaff. Identify the people you can learn from and discuss things with them.
Don't fall into the trap of thinking the spotlight is on you and you're here just to present information. It should be a give and take.
If someone is behaving like the hind end of a donkey, you should only have to tell them that once. If you keep interfacing with the wrong end of the animal, you know what you'll end up covered with.
Being simple and direct is a virtue. If you find yourself obfuscating or blathering just to make a point, cut it all back and try to say it clearly in three sentences or less.
Every day is a new fresh day to find something interesting to read about and share.


----------



## jgazal

castleofargh said:


> with my big weird head, I need more customization than the average guy who might be super happy with default crossfeed settings for the average human(I always suspected the horns to mess my HRTF).





I bet it's no bigger than a Neumann KU!



castleofargh said:


> winter is coming, we all get depressed and grumpy. the science people talk about having less light, the change in temperature and stuff like that. but I totally blame Xmas songs coming way too soon. I can handle them for a family weekend, but I cannot live with that crap for 2 months!!!!! if there is a war on Xmas like I see on US tv, then how come I'm not protected by the Geneva convention as prisoner of war?
> we're having real problems here!



You should pin this post! Best post of 2017!!


----------



## ironmine

gregorio said:


> Upsampling does NOT increase precision, how can there be higher precision from processing frequencies which do not exist! Why isn't that obvious to you?





gregorio said:


> Although more technically more accurate, it almost certainly wouldn't be audibly better than outputting at 44/16. Again, how can processing 16 bits + 8 additional zeros be higher precision than processing 16 bits + no additional zeros, when both are calculated at 64bit float precision?



Processing in a higher sample/bit rate format helps to minimize aliasing, quantization noise, rounding-off errors, phase misalignment issues, etc. 
The more data points an algorithm has to work with, the more precise and accurate the result of calculation will be. 

That's why high-quality plugins offer oversampling options to increase the accuracy of processing. And that's why DAWs process signals at higher sample and bit rates. They need more resolution to achieve their best.

Processing audio means running thousands of mathematical calculations, where the results of one calculation depend on the results of a previous one.

A higher sample/bit rate is like having a calculator with more figures after the decimal point. The more figures there are after the decimal point, the less the rounding off error is. And if there are thousands of math calculations to do, these rounding off errors add to each other, resulting in a less precise result.

Consider this:
If a calculator (A) has only two figures after the decimal point, it cannot multiply 1.25343 x 1.54789 as precisely as a calculator (B) with four figures after the decimal point:

Reality:  1.25343 x 1.54789 = 1.9401717627

Calculator A: 1.25 x 1.55 = 1.9375, displayed as 1.94 (the rounding-off error is 1.9401717627 - 1.94 = 0.0001717627)
Calculator B: 1.2534 x 1.5479 = 1.9401379, displayed as 1.9401 (the rounding-off error is 1.9401717627 - 1.9401 = 0.0000717627)

The rounding-off error of calculator A is *about 2.4 times* the rounding-off error of calculator B.
The more math operations we do, the further we drift from the precise result, because rounding-off errors accumulate.

Now imagine doing thousands of such operations in one plugin, then passing the result to another plugin which runs thousands of other math calculations.
How about a chain of 5-6 plugins?

That's why I prefer to upsample the signal to as high a format as my DAC will accept (44/16 to 176/24).
Or you can do this (if your PC is powerful enough and if your plugins can work in such high resolution): upsample 44/16 to 352/32, process in 352/32 and then downsample to 176/24 before outputting to your DAC.
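You can see the accumulation directly with a toy numpy experiment: the same chain of gain changes run at 32-bit and 64-bit float precision drifts apart (the gain values are arbitrary, and this only illustrates wordlength, not sample rate; whether the difference is ever audible is of course the disputed point):

```python
import numpy as np

rng = np.random.default_rng(0)
x64 = rng.uniform(-1.0, 1.0, 100_000)      # "audio" at 64-bit float
x32 = x64.astype(np.float32)               # same signal at 32-bit float

# run the same long chain of gain changes at both precisions
gains = rng.uniform(0.5, 2.0, 50)
y64, y32 = x64.copy(), x32.copy()
for g in gains:
    y64 = y64 * g
    y32 = y32 * np.float32(g)

# undo the net gain: ideally both signals return exactly to the input
net = np.prod(gains)
err64 = np.max(np.abs(y64 / net - x64))
err32 = np.max(np.abs(y32 / np.float32(net) - x32))
```

err32 comes out orders of magnitude larger than err64, which is the accumulation argument in miniature.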


----------



## castleofargh

jgazal said:


> I bet is no bigger than Neumann KU!


the only reference I know for head size is from hats. I get my hats in euro size 62cm (don't know how much that is in finger units). and it's not the hair taking space for I shave the little I have left. so in that aspect I'm closer to a dummy head than most ^_^.


----------



## ironmine

gregorio said:


> Why would you advise me to read a book I've already read and discussed with the author?
> 
> I suggest YOU read Bob Katz' book "Mastering Audio: The Art and the Science"!!!



I've read it twice and periodically come back to it.

And you either did not read it, or read it but don't agree with the author, or you cannot even understand what you have read and how to apply it.

Some people are like that: they read, then they boast that they've read it or even "discussed it with the author", but in reality they did not understand it, do the opposite and continue spreading false information and harmful advice.

Please refer to Chapter 4 "Wordlengths and Dither" (page 49).
Also, Chapter 18 "High Sample Rates" (page 221).


----------



## castleofargh

more talking about audio and less about what you think of the other forum member please. I'm asking all of you to try and make some effort on this. it brings nothing to the table but a crappy atmosphere. if the only argument against a claim is "you suck so I'm right", we're in serious trouble. 


the way I see it, if a VST does something significantly better with oversampled signal, it will oversample the signal itself anyway and pick whatever rate works best for the job. if it doesn't, and bothers to do whatever it is that it does at all sample rates instead, maybe the guy considered the difference to be irrelevant? or maybe he actually thought it worked better that way than to resample before and resample after? when something is really better, I'd expect the documentation of a specific plugin to make mention of it. 
  as for fidelity, when applying various plugins set by ear, I feel like it might become more of a philosophical concept than a reference to actual objective fidelity. but as we have gone way beyond the simple crossfeed on a consumer playback, I'm not sure we can really count on a "one fit all" better choice.


----------



## ironmine (Dec 10, 2017)

castleofargh said:


> more talking about audio and less about what you think of the other forum member please. I'm asking all of you to try and make some effort on this. it brings nothing to the table but a ****ty atmosphere. if the only argument against a claim is "you suck so I'm right", we're in serious trouble.
> 
> 
> the way I see it, if a VST does something significantly better with oversampled signal, it will oversample the signal itself anyway and pick whatever rate works best for the job. if it doesn't, and bothers to do whatever it is that it does at all sample rates instead, maybe the guy considered the difference to be irrelevant? or maybe he actually thought it worked better that way than to resample before and resample after? when something is really better, I'd expect the documentation of a specific plugin to make mention of it.
> as for fidelity, when applying various plugins set by ear, I feel like it might become more of a philosophical concept than a reference to actual objective fidelity. but as we have gone way beyond the simple crossfeed on a consumer playback, I'm not sure we can really count on a "one fit all" better choice.



No, a VST does not usually oversample by itself, because oversampling significantly increases the computational load on the processor, and the maker of a VST does not know how powerful your computer is. Oversampling is usually optional; the user has to choose 2X, 4X, 8X or 16X oversampling.

Check, for example, Voxengo plugins (http://www.voxengo.com/product/harmonieq/features/)
One of the features: "Up to 8x oversampling".

Another example: http://wavearts.com/products/plugins/ts2/
Wave Arts Tube Saturator. It has a toggle switch "2X oversampling" and a remark: "Also there is a 2x oversampling mode using a very high quality resampler algorithm. _Use 2x mode to attenuate aliased distortion harmonics_."

And this is typical. You are offered to open a little drop-down menu and choose which oversampling your computer is fast enough to handle.
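The reason a nonlinear plugin benefits from oversampling can be sketched in a few lines of Python. This is a toy illustration only, not how Voxengo or Wave Arts actually implement it; the FIR filter and the tanh saturator are my own stand-ins. The idea: interpolate to 2x the rate, apply the nonlinearity there so its distortion harmonics have headroom below the new Nyquist, then lowpass and decimate back.

```python
import math

def lowpass_fir(x, taps=31, cutoff=0.25):
    # Windowed-sinc FIR lowpass; cutoff is a fraction of the sample rate.
    m = taps - 1
    h = []
    for n in range(taps):
        k = n - m / 2
        ideal = 2 * cutoff if k == 0 else math.sin(2 * math.pi * cutoff * k) / (math.pi * k)
        h.append(ideal * (0.54 - 0.46 * math.cos(2 * math.pi * n / m)))  # Hamming window
    # Centered convolution, zero-padded at the edges.
    return [sum(hn * x[i - n + m // 2]
                for n, hn in enumerate(h) if 0 <= i - n + m // 2 < len(x))
            for i in range(len(x))]

def saturate_2x(x):
    """Apply a tanh saturator with 2x oversampling."""
    # 1) Upsample: zero-stuff, lowpass to interpolate, restore the gain.
    up = []
    for s in x:
        up.extend([s, 0.0])
    up = [2.0 * v for v in lowpass_fir(up, cutoff=0.23)]
    # 2) Saturate at the higher rate: harmonics above the *original*
    #    Nyquist now land below the new one instead of folding back.
    shaped = [math.tanh(2.0 * v) for v in up]
    # 3) Lowpass away everything above the original Nyquist, decimate.
    return lowpass_fir(shaped, cutoff=0.23)[::2]
```

Running the same tanh at 1x instead would fold any harmonics above the original Nyquist straight back into the audible band, which is the "aliased distortion" the Tube Saturator remark refers to.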


----------



## ironmine

Those who want to make their crossfeed sound more live and less anechoic can try adding a reverb before the crossfeed plugin:





This is 112dB Redline Reverb. I chose the Studio Live Room preset, but I reduced the Wet/Dry mix to 15% and set the reverb decay time (RT60) to 0.35 sec.
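As a rough sketch of that signal chain (reverb first, then crossfeed), here is a minimal Python version. The single-comb reverb and the unfiltered crossfeed are simplistic stand-ins for Redline Reverb and a real crossfeed plugin; the 44.1 kHz rate is assumed, and the Wet/Dry and RT60 values are the ones mentioned above.

```python
FS = 44100  # assumed sample rate, Hz

def comb_reverb(x, rt60=0.35, delay_s=0.030, wet=0.15):
    # Toy single-comb reverb: feedback gain chosen so the
    # tail decays by 60 dB in rt60 seconds.
    d = int(delay_s * FS)
    g = 10 ** (-3.0 * d / (FS * rt60))
    buf = list(x)
    for i in range(d, len(buf)):
        buf[i] += g * buf[i - d]
    return [(1 - wet) * dry + wet * w for dry, w in zip(x, buf)]

def crossfeed(left, right, atten_db=12.0, delay_s=0.0005):
    # Add a delayed, attenuated copy of the opposite channel
    # (a real plugin would also lowpass the fed-across signal).
    d = max(1, int(delay_s * FS))
    g = 10 ** (-atten_db / 20.0)
    out_l = [v + (g * right[i - d] if i >= d else 0.0) for i, v in enumerate(left)]
    out_r = [v + (g * left[i - d] if i >= d else 0.0) for i, v in enumerate(right)]
    return out_l, out_r

def render(left, right):
    # Reverb first, then crossfeed, per the suggestion above.
    return crossfeed(comb_reverb(left), comb_reverb(right))
```

Putting the reverb first means the crossfeed also blends the reverb tails between the ears, which is part of what makes the result sound less anechoic.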


----------



## bigshot

castleofargh said:


> more talking about audio and less about what you think of the other forum member please.



I should have made it clear that I was speaking about internet forums in general, not specific people. There should be an instruction manual with internet forums, but there isn't. Some folks expect it to work the way they want it to work and they get upset when it doesn't. Understanding the dynamic requires being honest about your intentions for participating and how you plan to interact with others. That's something that takes thought. It doesn't just happen.


----------



## gregorio

71 dB said:


> [1] I don't care about errors I can't hear/notice, and even if I did, I'd have to compare pros and cons. [2] The pros of crossfeed are massive. The cons are theoretical.



1. So now you're agreeing with what I posted several pages back, yes, we have to compare the pros and cons for ourselves!
2. This is the sort of statement I object to! Just because you personally can't hear the cons does not make them "theoretical", you've just completely made that up, based solely on your own personal inability to hear them! If I were an old man and couldn't hear anything beyond 8kHz does that mean all frequencies above 8kHz are only "theoretical" or that they are real but I simply can't "recognise" them? Again, going round in circles back to where I started with you: I do hear the "cons", crossfed HPs do not sound perfect to me, un-crossfed HPs are also not perfect, even HRTF compensated HPs are also often imperfect, as is binaural and so is playback on speakers in a consumer environment. There is no perfect consumer playback!

G


----------



## bigshot

speakers > headphones > binaural

For some music, cross feed can be an improvement, I would imagine. For others, no. It depends.


----------



## 71 dB

ironmine said:


> Those who want to make their crossfeed to sound more live and less anechoic, try adding a reverb before a crossfeed plugin:



When making music I "render" the raw tracks and often first crossfeed and then add reverberation. The "direct sound" is strongly crossfed while the reverberation contains greater ILD. Summing these together ensures that the ILD levels stay low enough. Works nicely imo.


----------



## gregorio (Dec 10, 2017)

ironmine said:


> [1] Processing in a higher sample/bit rate format helps to (a) minimize aliasing, (b) quantization noise, rounding-off errors, (c) phase misalignment issues, etc.
> [2] The more data points an algorithm has to work with, the more precise and accurate the result of calculation will be.
> [2a] That's why high-quality plugins offer oversampling options to increase the accuracy of processing. [2b] And that's why DAW process signals at higher sample and bit rates. They need more resolution to achieve their best.
> [3] Processing audio means running thousands of mathematical calculations. Where the results of one calculation depend on the results of a previous calculation.
> ...



1a. Aliasing of what? There is nothing above 22.05kHz if you're feeding 16/44.1, upsampling does NOT magically recreate those frequencies ALREADY removed above the Nyquist point. Same with bit depth: If we've got a 16bit file and convert it to 24bits it does NOT magically generate data for those extra 8 bits, all it does is fill/pad those 8 bits with zeros!
1b. The quantisation/round-off error is ALWAYS in the LSB of the plugin format, which is 64bit float in many cases, 32bit float in others.
1c. Sample rate has nothing to do with phase.

2. That's of course nonsense! How does 1.25343 x 1.54789 give a less accurate result than 1.2534300000000 x 1.54789?
2a. Due to the above, your assertion obviously has nothing to do with why plugins upsample! There are 3 potential reasons plugins upsample: 1. The plugin is using some non-linear process which generates content above 22.05kHz, typically something like an analogue modelled compressor will do this to generate IMD in the audible band. 2. It might be more practical for a plugin to operate at a single sample rate and up/down sample its input to match, some convolution reverbs do this for example. 3. It could be purely marketing, to fool newbs who are gullible enough to believe that a higher sample rate must be better because it's a bigger number!
2b. DAWs do not operate at a higher sample rate! If you record in 44.1kHz, they operate at 44.1kHz. Their internal mix environment is commonly 64bit float, some are 32bit float or in some older DAWs it's 48bit fixed.

3. All of this is irrelevant nonsense!! Let's take your 1.25343 as our 16bit value, let's convert it to 24bit, so now we have something like 1.2534300000000. What happens if we were to feed those two values into a 64bit plugin? Our 16bit value gets padded with a whole bunch of zeros to create a 64bit word so that our 64bit plugin can actually process it, so now we have: 1.253430000000000... On the other hand, our 24bit word gets padded with a whole bunch of zeros to create a 64bit word so that our 64bit plugin can actually process it, so now we have: 1.253430000000000... In both cases we've got 1.25343 followed by exactly the same number of zeros, so WHAT'S THE DIFFERENCE??
The result of all the internal calculations of the plugin is also a 64bit float (because it's a 64bit plugin!). The quantisation error is in the LSB of that 64bit result (because it's a 64bit plugin!). The output of the plugin when it's finished all its calculations is also a 64bit float (because it's a 64bit plugin!), which either stays as a 64bit float if the data path between plugins is 64bit, or gets truncated to 32bit if that's the width of the data path.
The difference between a 16bit word and a 16bit word padded to 24bits is LITERALLY zero (or 8 zeros if you want to be really precise about it), and once input into a 64bit plugin even the number of trailing zeros is the same!!! The only way your examples and statements would make any sense would be if feeding a 16bit word to a 64bit plugin somehow magically changed all the plugin's internal coding/processing and turned it into a 16bit plugin, while feeding it a 24bit word magically turned it into a 24bit plugin. That's of course nonsense: all that happens is that those 16 or 24 bit words are padded with zeros to 64bit floats, and that 64bit plugin is always a 64bit plugin!

4. What's your DAC got to do with it? You are talking about the precision of plugin processing, not whether or not your DAC is incompetently designed, which is a completely different issue!

5. Ah, it seems like the suggestion in my previous post was incorrect. Instead, try a book which explains the very basics of digital first, and then you might correctly understand what's in Bob's book!
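The zero-padding equivalence in point 3 is easy to verify directly. A minimal Python check, with Python's 64-bit floats standing in for the plugin's internal format and an arbitrary sample value:

```python
def pcm_to_float(sample, bits):
    # Interpret a signed integer PCM sample as a float in [-1.0, 1.0).
    return sample / float(1 << (bits - 1))

s16 = -12345          # some 16-bit PCM sample
s24 = s16 << 8        # the same sample padded to 24 bits with zero LSBs

f16 = pcm_to_float(s16, 16)
f24 = pcm_to_float(s24, 24)
assert f16 == f24     # identical doubles: the padding carries no information

def plugin(x):
    # A trivial "64-bit plugin": all internal math is double precision.
    return (x * 0.5 + 0.1) * 1.7

assert plugin(f16) == plugin(f24)  # so any 64-bit processing agrees too
```

Both representations map to the exact same double, so every subsequent 64-bit calculation is bit-identical regardless of whether the input word was 16 or 24 bits wide.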

G


----------



## 71 dB

gregorio said:


> 1. So now you're agreeing with what I posted several pages back, yes, we have to compare the pro and cons for ourselves!
> 2. This is the sort of statement I object to! Just because you personally can't hear the cons does not make them "theoretical", you've just completely made that up, based solely on your own personal inability to hear them! If I were an old man and couldn't hear anything beyond 8kHz does that mean all frequencies above 8kHz are only "theoretical" or that they are real but I simply can't "recognise" them? Again, going round in circles back to where I started with you: I do hear the "cons", crossfed HPs do not sound perfect to me, un-crossfed HPs are also not perfect, even HRTF compensated HPs are also often imperfect, as is binaural and so is playback on speakers in a consumer environment. There is no perfect consumer playback!
> 
> G


1. Yes, agreed.
2. Yes, there is no perfect consumer playback, but in respect of just enjoying music it's pretty easy to have a perfect enough system these days. Headphones without crossfeed aren't perfect, headphones with crossfeed aren't perfect and speakers aren't perfect. I can only compare these three options to each other. To my ears, headphones without crossfeed lose to headphones with crossfeed and to speakers unless the spatial distortion of the recording is near zero or zero, but that happens rarely. If I had the chance to listen to the recording in the studio it was mixed in, perhaps I would hear what is wrong with the crossfed version, but I don't have that chance. Very few do. Most probably the studio doesn't even exist anymore as it was when the album was produced. Maybe they renovated it to improve the acoustics and upgraded the speakers to the newest Genelec models? So, I have those three options, and whether you like it or not, headphones without crossfeed lose hands down most of the time, no matter what cons crossfeed has for you. Maybe I should emphasize that my opinions about crossfeed do not necessarily apply to überhumans with the spatial hearing of elephants.


----------



## Strangelove424

bigshot said:


> I studied design in college and one of the things they taught me was to be true to your materials. Don't use wood grain formica or force a square peg in a round hole. Use your tools to their strengths. Trying to make headphones more like speakers is like that. If you are going to use headphones, think about what it is that headphones do better than speakers and play to that. If you want a sound that is more like speakers, use speakers. The same is true in reverse. A room without any acoustic character might sound closer to headphones, but as speakers, it would sound pretty lousy.



There is a lot of relevance in this comment. I like some crossfeed; I think it reduces fatigue by making things sound more natural acoustically. However, when people start talking about Smyth Realisers, room size algorithms, HRTF, etc., I have to admit that there is a part of me that becomes reactionary. It reminds me of VR, and I see a direct effect of gaming technology making its way into music listening through devices such as the Smyth Realiser. In gaming, the emphasis on spatial perception is important for either immersion or competition. Many of the headphone advancements, like HRTF functions or 3D tracking, came from that field. I think they are innovative solutions, but I do not understand why it is assumed that an ideal headphone experience is a mimic of speakers, and that in order to experience "reference" sound on headphones we must first artificially simulate an interaction of sound in space that was never really there. It is precisely this lack of space that gives headphones their own character, for both its positives and negatives. You get a presentation isolated from ambient/acoustic effects, which can be beneficial. It's an intimate and detailed presentation. When that gets traded for a sense of synthetic space, like wearing a TrackIR in a 3D game, the whole experience seems less authentic to me. Neither headphones nor speakers, but a Frankenstein in between. For me, it not only fails to suspend my disbelief, it puts me right in the uncanny valley!

I still maintain that I like some crossfeed, but I don't know how I feel about replicating speakers to the full extent with HRTFs, head trackers, binaural mics, etc. That is my personal preference though, and if others wish to pursue this technology I encourage them and will stay informed of their progress. I do not, however, think that should become the standard of headphone listening.


----------



## gregorio (Dec 10, 2017)

71 dB said:


> When making music I "render" the raw tracks and often first crossfeed and then add reverberation. The "direct sound" is strongly crossfed while the reverberation contains greater ILD. Summing these together ensures that the ILD levels stay low enough. Works nicely imo.



So let me get this straight. You crossfeed to correct the positional distortion but the spatial distortion, which you've been going on about for pages, you don't crossfeed in your own mixes, why is that? Why don't you apply crossfeed after you've added reverb and correct all that horrible, unacceptable spatial distortion? And, how do you know that in your opinion it "works nicely", are you an uberhuman with the spatial hearing of elephants?

Much of what you've said previously seems like nonsense, and now that you're contradicting your own arguments about the "spatial information/distortion", I can't see how it even makes sense to you!



71 dB said:


> Maybe I should emphasize that my opinions about crossfeed do not necessarily apply to überhumans with spatial hearing of elephants.



G


----------



## jgazal (Dec 10, 2017)

Strangelove424 said:


> There is a lot of relevance in this comment. I like some crossfeed, I think it reduces fatigue by making things sound more natural acoustically. However, when people start talking about smyth realizers, room size algorithms, HRTF, etc. I have to admit that there is a part of me that becomes reactionary.  (...), but I do not understand why it is being assumed that an ideal headphone experience is a mimic of speakers, and that in order to experience "reference" sound on headphones first we must artificially simulate the interaction of sound in space that was never really there. It is precisely this lack of space that gives headphones their own character, for both its positives and negatives. You get a presentation isolated from ambient/acoustic effects, which can be beneficial. It's an intimate and detailed presentation. (...)
> 
> I still maintain that I like some crossfeed, but I don't know how I feel about replicating speakers to the full extent with HRTFs, head trackers, binaural mics, etc. That is my personal preference though, and if others wish to pursue this technology I encourage them and will stay informed of their progress. I do not, however, think that should become the standard of headphone listening.



It is true that some prefer headphones over speakers.

I've never had a videogame and I don't use any regularly. But I find modeling the human auditory perception important for other uses that are not related to the entertainment industry (I could elaborate on this, but it is off topic).

There is a difference between convolving your own personal binaural room impulse response (PRIR, currently the state of the art) and convolving your own high resolution HRTF (the acquisition is far from trivial and is *not* mainstream).

If you distribute ambisonics you just need to downmix to stereo instead of downmixing to binaural with a generic HRTF. You still get what you prefer.

If, in the future, you choose convolution with a high resolution HRTF, you get what you desire plus elevation. Maybe you will find elevation attractive.

I guess it is also true that the majority in the future will choose binaural through two loudspeakers or a beamforming phased array of transducers, because consumers find multiple surround speakers or wearing headphones inconvenient.

P.s.: the Realiser lets you choose the reverberation window of any PRIR if you want critical monitoring. I believe it was designed with the pro audio industry in mind much more than the gaming portion of the entertainment industry.


----------



## 71 dB

gregorio said:


> So let me get this straight. You crossfeed to correct the positional distortion but the spatial distortion, which you've been going on about for pages, you don't crossfeed in your own mixes, why is that? Why don't you apply crossfeed after you've added reverb and correct all that horrible, unacceptable spatial distortion?


This is "rendering" individual tracks which can have spatial distortion, because only the whole mix containing all the tracks must be free of spatial disortion. I try to accomplish omnistereophonic sound which works for speakers and headphones _without_ crossfeed. If I crossfeed all tracks individually to have no spatial distortion, the whole mix is going to be probably too monophonic. It must be kept as wide as possible for speakers. Reverberation after crossfeed (means a plugin written by me to use ILD/ITD processing) keeps width for individual tracks and also sounds good. The final mix can always be crossfed a little if some spatial distortion remains. I have written plugins to reduce ILD below 315 Hz and to increase it above 1600 Hz so it's easy to optimaze ILD.


----------



## Markj

jasonb said:


> I thought it might be a nice idea to see who likes or dislikes crossfeed.
> 
> Please vote and share your opinion either way. I wanna hear what people here have to say about it one way or another.


I vote for crossfeed. My experience, from best to mild: #1 Holographic Audio Ear One with Ohman X-FEED, Beyerdynamic Headzone with headtracker, McIntosh Headphone Crossfeed Director HXD, Headroom Home amp. The HD 800 needs crossfeed to sound natural.


----------



## bigshot

I strongly believe that digital signal processing is the best way to improve sound quality today. We've gotten to the point where purity theories are obsolete. We can go to Walmart and buy a player for under $50 that sounds perfect to human ears. More money and better specs don't improve sound any more. Perfect reproduction is perfect. That means that if we want to improve sound, the best way to do that is to be able to sculpt it. Cross feed is a very basic way of doing that. I think in the future, there will be much more sophisticated ways to solve the problems cross feed is intended to help correct.


----------



## Strangelove424

jgazal said:


> It is true that some prefer headphones over speakers.



Yes, it is. I'm not necessarily one of those people (I value bass and unencumbered movement too much) but I can see how it could be for some.  



jgazal said:


> I guess is also true the majority in the future will choose binaural through two loudspeakers or beamforming phased array of transducers, because consumers find inconvenient multiple surround speakers or wearing headphones.
> 
> P.s.: the Realiser let's you choose the reverberation window of any PRIR if you want critical monitoring. I believe it was designed with the pro audio industry in mind much more than the gaming portion of the entertainment industry.



I wish consumers would suck it up a little and buy big, ugly speakers. More spending power in that market would be beneficial for all. The consumer resistance toward big speakers led to the creation of satellite systems and sound bars, and the proliferation of lifestyle systems by companies like Bose. I have seen virtualization technology used to fairly good effect on TV-embedded sound bars, but nothing even close to a decent set of speakers. Headphones are growing in popularity, and I see more opportunity to expand the hifi market there than anywhere else.

The reduction of the PRIR for critical monitoring reminds me of the Dolby Headphone DH-1 setting for "reference room". There is a tacit admission that a critical monitoring environment has the least amount of acoustic response possible. It is a treated room. The question is how treated? How large? And deciding these factors will be a somewhat arbitrary process. Are you going for Abbey Road or Gateway? This also brings up the fact that making a studio is an art in itself. All critical monitoring environments are not exact replicas of each other. So the question becomes "which environment do you choose to mimic and why?". And that's not a simple question to answer.

I'm also a bit apprehensive about using simulation algorithms in a critical environment. You are strictly relying upon Smyth's proprietary coding to simulate phase, and I'm not convinced their algorithm is a direct replacement for reality. It reminds me of how auto companies design and test cars in simulated physics environments in CAD. Yes, those simulations are very accurate, and get better year after year, but they are not perfect reflections of real world Newtonian physics. At some point, prototypes need to hit the track or be crashed into walls to see how they react to a truly physical world. I feel the same way about sound waves in space.


----------



## Strangelove424

bigshot said:


> I strongly believe that digital signal processing is the best way to improve sound quality today. We've gotten to the point where purity theories are obsolete. We can go to Walmart and buy a player for under $50 that sounds perfect to human ears. More money and better specs don't improve sound any more. Perfect reproduction is perfect. That means that if we want to improve sound, the best way to do that is to be able to sculpt it. Cross feed is a very basic way of doing that. I think in the future, there will be much more sophisticated ways to solve the problems cross feed is intended to help correct.



I totally agree. Fidelity is so 1980s. We're way beyond that now. The question is how do we proceed? I think conversations like this are great. Real, pragmatic conversations about how to improve music. And not even spending that much money to do it! Terms like "DSP rolling" and "DSP chain" will, I hope, become a more typical feature of the language here, and we can encourage that greatly. It's also important that we realize that one DSP chain or setting is not objectively superior to another. This pragmatic conversation needs to take place openly, and we must value each other's tastes and perceptions as much as our own. There is no "correct way", there is just "a way". If we can do that, I see us helping a lot of people here to get better sound, and learning more to achieve better sound ourselves.


----------



## WoodyLuvr

Would be great to have a DSP Chain & DSP Rolling How-To thread


----------



## jgazal (Dec 10, 2017)

Strangelove424 said:


> The reduction of PRIR for critical monitoring reminds me of the Dolby Headphone DH-1 setting for "reference room". There is a tacit admittance that a critical monitoring environment has the least amount of acoustic response possible. It is a treated room. The question is how treated? How large? And deciding these factors will be a somewhat arbitrary process. Are you going for Abby Road or Gateway? This also brings up the fact that making a studio is an art within itself. All critical monitoring environments are not exact replicas of each other. So the question becomes "which environment do you choose to mimic and why?". And that's not a simple question to answer.



I agree completely and that’s why I think future acquisition of HRTF with biometrics has an edge over PRIRs. Anyway, Smyth Research Realiser PRIRs cannot be compared with Dolby Headphone DH-1 at all, because the former are literally that (“personal binaural room impulse responses”) while the latter relied on generic BRIR/HRIR. Chances are the lower adoption of Dolby Headphone DH-1 had more to do with the lack of personalization than with the choice of mastering rooms in which the generic BRIR/HRIR were acquired.



Strangelove424 said:


> I'm also a bit apprehensive about using simulation algorithms in a critical environment. You are strictly relying upon Smyth's proprietary coding to simulate phase, and I'm not convinced their algorithm is a direct replacement for reality. It reminds me of how auto companies design and test cars in simulated physics environments in CAD. Yes, those simulations are very accurate, and get better year after year, but they are not perfect reflections of real world Newtonian physics. At some point, prototypes need to hit the track or be crashed into walls to see how they react to a truly physical world. I feel the same way about sound waves in space.



You’ve got me there, particularly having in mind the efficiency of the interpolation algorithm. You have to hear it for yourself. Nevertheless, here is what Professor Smyth says about your apprehension:



> SMYTH SVS
> HEADPHONE SURROUND MONITORING FOR STUDIOS
> PRIR look-angles
> SVS simplifies the personalisation process by acquiring a sparse set of PRIR measurements for each active loudspeaker. Typically the system measures these responses for three different head positions, at approximately -30º, 0º and +30º azimuthal angle.
> ...


----------



## bigshot

I'm betting that multichannel speaker systems will become more prevalent. New houses will most likely be built with media rooms that have built in sound systems and media servers that feed the whole house the same way that houses have electrical and plumbing. If you look at the layout of the typical house, it's changing. Separate dining rooms and living rooms are giving way to open floor plans that combine kitchen, dining and living room areas all into one. Home offices are also being designed into floor plans now. It's just a single step further to design those areas to incorporate networking and places designed specifically as a spot for the big screen TV along with multichannel audio and basic room treatment built right into the walls.


----------



## Strangelove424 (Dec 10, 2017)

WoodyLuvr said:


> Would be great to have a DSP Chain & DSP Rolling How-To thread



Ok, I think I'll start one up. I think I'd enjoy maintaining a thread like that, but I'll need help from different people on different platforms and players. I could update the original post with links to their posts or other threads. I will type up some kind of an intro to start with, and maybe a few basic links. It'd be nice to have a single repository of reviews, links, and help. Nobody can find all the DSPs out there on their own. Too many.  

@castleofargh, is it possible to edit an original post indefinitely or do editing capabilities get locked out after a certain point?


----------



## jgazal (Dec 10, 2017)

Strangelove424 said:


> @castleofargh, is it possible to edit an original post indefinitely or do editing capabilities get locked out after a certain point?



I have been doing that with the layman multimedia guide to immersive sound for the technically minded. Thank God, because there were a lot of errors (and there probably still are). Please help me find them if you have free time available.


----------



## Strangelove424

jgazal said:


> I agree completely and that’s why I think future acquisition of HRTF with biometrics has an edge over PRIR’s. Anyway, Smyth Research Realiser PRIR’s cannot be compared with Dolby Headphone DH-1 at all because the former are literally that (“personal binaural impulse responses”) while the latter relied on generic BRIR/HRIR. Chances are the lower adoption of Dolby Headphone DH-1 had more to do with the lack of personalization than with the choice of mastering rooms in which the generic BRIR/HRIR were acquired.
> 
> 
> 
> You’ve got me there, particularly having in mind the efficiency of the interpolation algorithm. You have to hear it by yourself. Nevertheless, here is what Professor Smyth says about you apprehension:



Oh, I'm certainly not trying to pass judgment on the Realizer based on my experience with the Dolby. I was just mentioning it in terms of reference simulations having less overall spatial processing. The Dolby algorithm isn't necessarily bad, I actually use it occasionally, but I would assume the Realizer is more natural. It's something I've been curious about trying. I cannot make any subjective claims until I do so. 



jgazal said:


> I have been doing that with the layman multimedia guide to immersive sound for the technically minded. Thanks god because there were a lot of errors (and probably still have errors). Please help me to find them if you have free time available.



Thanks for letting me know! I didn't see anything wrong, but I will mention any errors I find. I'm horrible with editing my own words. I only see what I meant to say, not what's actually there. Amazes me what gets by sometimes.


----------



## castleofargh

Strangelove424 said:


> There is a lot of relevance in this comment. I like some crossfeed, I think it reduces fatigue by making things sound more natural acoustically. However, when people start talking about smyth realizers, room size algorithms, HRTF, etc. I have to admit that there is a part of me that becomes reactionary. It reminds me of VR, and I see a direct effect from gaming technology making its way into music listening through devices such as the SmythRealizer. In gaming the emphasis on spatial perception is important for either immersion or competition. Many of the headphone advancments like HRTF functions or 3d tracking came from that field.  I think they are innovative solutions, but I do not understand why it is being assumed that an ideal headphone experience is a mimic of speakers, and that in order to experience "reference" sound on headphones first we must artificially simulate the interaction of sound in space that was never really there. It is precisely this lack of space that gives headphones their own character, for both its positives and negatives. You get a presentation isolated from ambient/acoustic effects, which can be beneficial. It's an intimate and detailed presentation. When that gets traded for a sense of synthetic space, like wearing a trackIR in a 3d game, the whole experience seems less authentic to me. Neither headphones or speakers, but a Frankenstein between. For me, it not only fails to suspend my disbelief, it puts me right in the uncanny valley!
> 
> I still maintain that I like some crossfeed, but I don't know how I feel about replicating speakers to the full extent with HRTFs, head trackers, binaural mics, etc. That is my personal preference though, and if others wish to pursue this technology I encourage them and will stay informed of their progress. I do not, however, think that should become the standard of headphone listening.


I'll answer for myself. the album was mastered using speakers, so I consider them to be the first meaningful reference. ideally I would want the sound of the speakers in the very studio where the master was done, while sitting where the engineer sat. that is my own idea of the sound as the artist intended. not some often poor sound coming out of giant speakers at a live concert. a live event is the true sound of the band playing in front of me, but it is not what I wish to replicate. because I don't really like that, and also because it is typically impossible when using a mixed and mastered album. so I aim for the next best thing, the sound the engineer heard when doing the mastering in the studio. I don't get that either, but I try to get close to it, and it starts with speaker sound.
I do believe that one day (soon) it will be a reality and albums will have the data (one way or another) to use on our devices and blend the result with our own HRTF. and maybe when that tech is everywhere, making an album will change too, and released albums will become the sound someone in a VIP seat heard at the live event, in some glorious room without spectators. or the sound as if you're next to the singer (although I don't think that would sound great). with the potential for good mimicry of a given space or given speakers comes all the potential to produce differently. so you should IMO see the Realiser as a foot in the door of future audio.
for now, if I can get at night on headphones the sound I get from my speakers during the day, that would already be mighty cool. anything beyond that is a bonus to me. ^_^


----------



## castleofargh

Strangelove424 said:


> @catleofargh, is it possible to edit an original post indefinitely or do editing capabilities get locked out after a certain point?


I have no idea ^_^. But a great many topics are edited on a regular basis, so if there is a limit I'd expect it to be real high. Do whatever you wish without worry.


----------



## ironmine (Dec 11, 2017)

gregorio said:


> 1a. Aliasing of what? There is nothing above 22.05kHz if you're feeding 16/44.1, upsampling does NOT magically recreate those frequencies ALREADY removed above the Nyquist point. Same with bit depth: If we've got a 16bit file and convert it to 24bits it does NOT magically generate data for those extra 8 bits, all it does is fill/pad those 8 bits with zeros!
> 1b. The quantisation/round-off error is ALWAYS in the LSB of the plugin format, which is 64bit float in many cases, 32bit float in others.
> 1c. Sample rate has nothing to do with phase.
> 
> ...



Please do not ascribe to me some primitive ideas as if I don't understand that converting the wordlength from 16 bit to 24 bit "recreates" extra 8 bits (or that upsampling "recreates" some frequencies). I may have some misconceptions but not as stupid as that.

Let's talk about the wordlength (bit rate) first.

I do understand that these extra bits will be "padded" with zeros.  But only initially, because, as we apply more and more processes to audio, these zeros will be quickly replaced with figures other than zeros. After several processes even a 64 bit wordlength will probably not be long enough to accurately represent the full result of all computations without rounding it off.

Do you agree that (from the technical, mathematical point of view - let's forget for now the debate whether we can hear it or not) after 16-bit audio is heavily processed by 32-bit or 64-bit plugins, the 24-bit representation of their final computational result will be more accurate/complete than the 16-bit representation? If so, do you agree that it's better to send out to the DAC the signal in 24-bit format rather than 16-bit?
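The point being argued can be illustrated with a toy round-trip (my own made-up numbers, not any specific plugin chain): process an exact 16-bit value in 64-bit float, then requantize the result to 16 vs 24 bits and compare how far each lands from the float result.

```python
# Toy illustration: hypothetical values, not any particular plugin.
def quantize(x, bits):
    """Round a full-scale float (-1..1) onto a signed fixed-point grid."""
    scale = 2 ** (bits - 1)
    return round(x * scale) / scale

sample_16 = 12345 / 32768            # an exact 16-bit PCM value
processed = sample_16 * 0.3 + 1e-7   # some float64 "plugin" arithmetic

err_16 = abs(processed - quantize(processed, 16))
err_24 = abs(processed - quantize(processed, 24))
print(err_16 > err_24)  # True: the 24-bit output sits closer to the float result
```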


----------



## ironmine

71 dB said:


> When making music I "render" the raw tracks and often first crossfeed and then add reverberation. The "direct sound" is strongly crossfed while reverberation contains greater ILD. Summing these together ensures that the ILD levels stay low enough. Works nicely imo.



Why do you think it's better to inject reverberations after the crossfeed? I tried both ways. Reverb after the crossfeed sounded worse to me. 

In theory, which way is more correct? 

What if we inject a bit of reverb before the crossfeed AND a bit after?


----------



## WoodyLuvr (Dec 11, 2017)

Strangelove424 said:


> Ok, I think I'll start one up. I think I'd enjoy maintaining a thread like that, but I'll need help from different people on different platforms and players. I could update the original post with links to their posts or other threads. I will type up some kind of an intro to start with, and maybe a few basic links. It'd be nice to have a single repository of reviews, links, and help. Nobody can find all the DSPs out there on their own. Too many.
> 
> @catleofargh, is it possible to edit an original post indefinitely or do editing capabilities get locked out after a certain point?


Awesome!  Looks great thus far.  May I suggest that you QUICKLY reserve two or three spots right under your initial opening post (by replying to your post two or three times and entering the following into each: "*RESERVED FOR FUTURE DATA STICKIES*").

If I may, here are a few suggestions:

- Add "Components" to Plugins (e.g. PLUGINS & COMPONENTS)
- Add "Samplers & Ditherers" under Plugins & Components
- Add "Emulators" under Plugins & Components (so that we can add tube saturators, reverb, pre-amps, and other emulation/saturation plugins)
- Add "VST Chainer" under Plugins & Components (e.g. KVR Audio Art Teknika Console)
- Add a third section entitled "ACTIVE DSP LIST ORDER". I am extremely interested in understanding what should go before what and why, as there seems to be conflicting information/opinion on this (e.g. I have read that volume control should always go before any limiter/clipper, that samplers should go before crossfeed (resampler > crossfeed > volume > limiter), and that limiters should always be the last in the chain). Should equalizers, saturators, and/or pre-amps go before or after crossfeed?
- Add this alternative VST adapter to your list: *Yohng Foobar2000 VST Wrapper*. I found that the standard VST 2.4 adapter sometimes acts up with some VST dll plugins in WIN10 (e.g. SlickEQ and Voxengo Marvel GEQ constantly crash and/or throw errors in VST 2.4 (WIN10) but work perfectly via George Yohng's VST Wrapper, and it is very easy to install and use).


----------



## 71 dB

ironmine said:


> 1. Why do you think it's better to inject reverberations after the crossfeed? I tried both ways. Reverb after the crossfeed sounded worse to me.
> 
> 2. In theory, which way is more correct?
> 
> 3. What if we inject a bit of reverb before the crossfeed AND a bit after?



1. With speakers you have direct sound, early reflections and reverberation. Putting reverb after crossfeed makes the "direct" sound and the reverb sound a bit different. Reverb not being crossfed leaves potential spatial distortion, but when you have dozens of audio tracks they mask each other a bit; the spatial distortion of the mix is less than the spatial distortion of the individual tracks. I tend to crossfeed the direct sound heavily; it's not just "crossfeed", but ILD/ITD based panoration. I also "shape" the ILD and keep it low at the lowest frequencies. It's a working strategy that suits me. I'm not a guru on this, I learn all the time while making music. I usually duplicate the raw track, which may contain some reverb already. The first copy is the direct sound and gets perhaps a floor reflection simulation (increases realism) and ILD/ITD panoration (crossfeed), while the second track gets reverb and possibly ILD reduction at bass if it's needed. Then I set proper levels for the tracks and mix them together.

2. Hard to say; a matter of taste. What sounds best is most correct. As reverberation is a diffuse soundfield, every single reflection should theoretically be crossfed according to its angle of arrival, which makes things insanely complicated. You could have "partial" reverbs that you crossfeed differently and then mix it all together. Whether it's worth it, I don't know.

3. Probably works nicely. Haven't tried. Good idea.
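For anyone wanting to experiment, the bare skeleton of a crossfeed (just delay and attenuation; the lowpass filtering and ILD shaping described above are deliberately left out) might look like the sketch below. The delay and gain values are placeholders, not anyone's recommended settings.

```python
def crossfeed(left, right, delay=22, gain=0.25):
    """Add the opposite channel, delayed and attenuated, to each channel.
    delay=22 samples is ~0.5 ms at 44.1 kHz; gain=0.25 is about -12 dB.
    Toy sketch only: a real crossfeed also lowpasses the crossfed part."""
    out_l, out_r = [], []
    for i in range(len(left)):
        xl = right[i - delay] if i >= delay else 0.0
        xr = left[i - delay] if i >= delay else 0.0
        out_l.append(left[i] + gain * xl)
        out_r.append(right[i] + gain * xr)
    return out_l, out_r

# a hard-panned left click leaks, softer and later, into the right channel
L = [1.0] + [0.0] * 99
R = [0.0] * 100
cl, cr = crossfeed(L, R)
print(cr[22])  # 0.25
```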


----------



## gregorio

ironmine said:


> [1] Please do not ascribe to me some primitive ideas as if I don't understand that converting the wordlength from 16 bit to 24 bit "recreates" extra 8 bits (or that upsampling "recreates" some frequencies). I may have some misconceptions but not as stupid as that.
> [2] I do understand that these extra bits will be "padded" with zeros.  But *only* initially, because, as we apply more and more processes to audio, these zeros will be quickly replaced with figures other than zeros. [2a] After several processes even a 64 bit wordlength will probably not be long enough to accurately represent the full result of all computations without rounding it off.
> [3] Do you agree that (from the technical, mathematical point of view - let's forget for now the debate whether we can hear it or not) after 16-bit audio is heavily processed by 32-bit or 64-bit plugins, the 24-bit representation of their final computational result will be more accurate/complete than the 16-bit representation? [3a] If so, do you agree that it's better to send out to the DAC the signal in 24-bit format rather than 16-bit?



1. There are only two choices: A. Ascribe to you those primitive ideas/stupid misconceptions, or B. Ascribe to you a decent basic understanding of the principles of plugin processing, in which case the only logical conclusion is that you were deliberately giving incorrect advice.

2. I don't get the "But only initially". You advised: "_The first plugin should be a high-quality upsampler so that all further calculations are done with a higher precision and less quality loss._" - The initial input into a subsequent plugin (and "further calculations") is the ONLY thing you are affecting with this advice. Whatever happens after this initial input into a 64bit plugin occurs in 64bit, NOT in 16, 24 or whatever other bit depth you input!
2a. This statement is also completely unrelated to whether or not you initially feed the plugin with a 16bit word or a 16bit word padded with zeros to 24bit. Even though it's unrelated, I'll answer it anyway, or rather, I've already answered it, twice! Yes, you will get quantisation error in the LSB of the 64bit. The truth of your statement depends on what you mean by "accurately represent": If you mean in terms of pure mathematics, no, 64bit would not be enough. If by "accurately represent" you are talking in terms of sound waves, which of course is the whole point of the mathematics in the first place, then yes, it's way more than enough. I'm not talking about audibility here but the sound waves themselves. A sound wave is the movement of billions of air molecules, and the quantisation error at the 64bit level represents an energy level significantly lower than that required to move billions of air molecules; therefore, it's unable to affect a sound wave. Obviously there's a cumulative effect of quantisation error, but you'd probably need many hundreds of 64bit plugins in series before you accumulated enough quantisation error energy to have any effect on a sound wave, and probably thousands for that effect to be potentially audible.

3. Mathematically, yes. But "technically", it would entirely depend on what processing we're talking about and what we're processing (the input audio file/s). BTW, by "technical" I'm not talking about audibility but the technical practicalities of sound wave reproduction. 
3a. Provided we're NOT talking about what's audible, then possibly. However, this is a question of the signal to noise ratios of an individual replay system and environment. Of course NONE of this is related to using a preliminary upsampling plugin, as we're now talking about the output of a plugin chain and therefore a bit depth defined by those plugins and by the processing environment in which they are employed (the bit depth of the data connections between plugins), NOT the bit depth of what we initially feed that processing chain! 

Your advice to use an upsampler as the first plugin to improve the precision of subsequent plugins/calculations was incorrect. Also incorrect was your advised positioning of the dither plugin, which you now appear to have accepted. All the points above are unrelated to my refuting these parts of your advice.
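The orders of magnitude behind 2a can be sanity-checked with the standard ~6 dB-per-bit rule of thumb (a float64 mantissa carries 52 bits):

```python
import math

def floor_db(bits):
    # level of one quantisation step relative to digital full scale
    return 20 * math.log10(2 ** -bits)

print(round(floor_db(16)))  # -96  dB: 16-bit truncation error
print(round(floor_db(24)))  # -144 dB: 24-bit truncation error
print(round(floor_db(52)))  # -313 dB: float64 LSB, far below any real sound wave
```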

G


----------



## ironmine

gregorio said:


> 2. I don't get the "But only initially". You advised: "_The first plugin should be a high-quality upsampler so that all further calculations are done with a higher precision and less quality loss._" - The initial input into a subsequent plugin (and "further calculations") is the ONLY thing you are affecting with this advice. Whatever happens after this initial input into a 64bit plugin occurs in 64bit, NOT in 16, 24 or whatever other bit depth you input!



No, it's not the ONLY thing. My advice affects not only the bit rate at the input to the VST-chainer, but also the sample rate (which we agreed not to talk about, for now). And, probably (this is what I am not sure about), it also affects the bit rate at the output of the VST-chainer.

After the dBpoweramp/SSRC upsampler in Foobar upsamples 44/16 audio to 176/32 (or 176/64? not sure about it), it passes the result to the VST-chainer in 32bit format. After the VST-chainer finishes its calculations it passes the result to Foobar (or to Foobar VST wrapper which links Foobar to VST-chainer? again, not sure) in 32bit format, right? Foobar truncates it to 24 bits and sends the data out to the DAC.

What would happen if I don't upsample and send the original 44/16 audio to the VST-chainer? I know that the extra bits will be padded with zeros at the input and the calculations will be done in 32bit, but what would happen at the output of the VST-chainer? Will the VST-chainer assume:  "Ok, the signal was handed to me in 16bit format, I've done my job in 32bit format, but now I need to truncate and hand back the result in the same wordlength (16) as I had received it"?

Or, will it disregard the initial bit rate (16) at the input and will return the new increased bit rate (32) from its output?


----------



## gregorio

ironmine said:


> [1] After the VST-chainer finishes its calculations it passes the result to Foobar (or to Foobar VST wrapper which links Foobar to VST-chainer? again, not sure) in 32bit format, right? [2] Foobar truncates it to 24 bits and sends the data out to the DAC.
> 
> [3] What would happen if I don't upsample and send the original 44/16 audio to the VST-chainer? I know that the extra bits will be padded with zeros at the input and the calculations will be done in 32bit, but what would happen at the output of the VST-chainer? Will the VST-chainer assume: "Ok, the signal was handed to me in 16bit format, I've done my job in 32bit format, but now I need to truncate and hand back the result in the same wordlength (16) as I had received it"?
> [3a] Or, will it disregard the initial bit rate (16) at the input and will return the new increased bit rate (32) from its output?



1. Either 32 or 64bit float, you'd have to refer to the manual or the developers for which, probably 32bit though.
2. That will depend: is Foobar 32 or 64bit? Is it outputting through its own driver or through, say, ASIO? If it's through its own driver it presumably has an output bit depth setting, otherwise it will output 32 bits to the driver.
3. It depends on the exact coding of the software. However, my previous answer also comes into play: if it has its own driver and provides an option for 16bit output and you select that option, it will output in that format. Otherwise it will assume a 32 or 64bit depth, most probably 32bit to maintain compatibility with virtually every DAW/audio software.
3a. This. Audio data within the PC environment is expected to be 32bit, even 16/24 bit data is encapsulated within 32bit frames so it would make no sense to truncate to 16 or even 24bit as it would just be padded with zeros again. Modern PC processors and data paths are 64bit but of course all the software and drivers in the chain need to be able to accept 64bit, if they don't they'll simply truncate to 32bit or fail/crash if they cannot recognise 64bit words.
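What "16/24 bit data encapsulated within 32bit frames" looks like in practice can be sketched with a generic round-trip (an illustration only, not Foobar's actual internals): an int16 sample fits exactly into a 32-bit float, so the wider container loses nothing.

```python
import struct

sample = -12345                          # a 16-bit PCM value
as_float = sample / 32768.0              # normalise to -1..1
frame = struct.pack('<f', as_float)      # one 4-byte (32-bit) float frame
back = round(struct.unpack('<f', frame)[0] * 32768.0)
print(back == sample)  # True: the 16-bit value survives the 32-bit trip
```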

G


----------



## WoodyLuvr

gregorio said:


> 2. That will depend, is Foobar 32 or 64bit?


If I am not mistaken Foobar2K is 32-bit.


----------



## Strangelove424

WoodyLuvr said:


> Awesome!  Looks great thus far.  May I suggest that you QUICKLY reserve two or three spots right under your initial opening post (by replying to your post two or three times and entering the following into each: "*RESERVED FOR FUTURE DATA STICKIES*").
> 
> If I may here are a few suggestions:
> 
> ...



Great ideas! Thank you very much for the suggestions WoodyLuvr, this is exactly the kind of help I was hoping I'd receive! I'm kind of kicking myself because I didn't read your post while I still had the chance to reserve spots under the OP, but oh well I guess. I updated a couple descriptions last night, am about to upload more screenshots, and will make the additions you mentioned. I will also change the current Saturation heading to a general Emulators heading, and include more subsections under Emulators as they come. I'm also going to be adding a Normalization section for peak and loudness, thinking of you with ReplayGain and others using Soundcheck. Keep the suggestions coming. Thanks again.


----------



## WoodyLuvr

Strangelove424 said:


> Great ideas! Thank you very much for the suggestions WoodyLuvr, this is exactly the kind of help I was hoping I'd receive!... ...I'm also going to be adding a Normalization section for peak and loudness, thinking of you with ReplayGain and others using Soundcheck. Keep the suggestions coming. Thanks again.


You are very welcome.  Looking forward to seeing that section, as well as more on Crossfeed as I am still uncertain on how to set the Meier Crossfeed slider... floating between 6 and 20.


----------



## ironmine (Dec 12, 2017)

gregorio said:


> This. Audio data within the PC environment is expected to be 32bit, even 16/24 bit data is encapsulated within 32bit frames so it would make no sense to truncate to 16 or even 24bit as it would just be padded with zeros again.



Ok.

If it is really so, then it is indeed unnecessary for the user to convert 16 bit audio to a higher bit depth, because not only will it be done automatically anyway, it will also be kept at the higher bit depth until the very output. (But it's still advisable to set up Foobar's output setting to 24bit or 32 bit, depending on how much your DAC can accept, because now, at the end of a processing chain, these lower bits - from 16 to 24 - contain useful information. They are not zeros anymore.)

But here we need to come back to the issue of upsampling (changing the frequency).

As I am still convinced that it's beneficial to upsample the frequency, let's say from 44 to 88 or 176, prior to feeding the audio data to a processing chain of plugins: we _anyway_ end up with not only a higher sample rate _but also a higher bit depth_, because, as was shown above, the audio data at the output of the upsampler will be 176/32, not 176/16.

Aleksey Vaneev (the author of Voxengo VST plugins):

"Almost all types of audio processes benefit from an oversampling: probably, only gain adjustment, panning and convolution plug-ins have no real use for it. An oversampling helps plug-ins to create more precise filters with minimized warping at highest frequencies, to reduce aliasing artifacts in compressors and saturators, to improve a level detection precision in peak compressors." (quoting from here)

A quote from Bob Katz's book "Mastering Audio: The Art and the Science": [quoted image not reproduced]
Also, consider this argument: _plugins oversample_ (optionally or automatically).
DACs _also_ oversample or upsample the signal.

https://en.wikipedia.org/wiki/Oversampling:
"Oversampling improves resolution, reduces noise and helps avoid aliasing and phase distortion by relaxing anti-aliasing filter performance requirements."

So, since plugins/DACs upsample/oversample anyway, _why don't we help them do this work_ (fully or at least partially) by upsampling the signal first ourselves with a high-quality upsampler such as the dBpoweramp/SSRC one available in Foobar? I don't think the quality of oversampling/upsampling inside plugins is superior to that of a dedicated upsampler. I am sure dBpoweramp/SSRC can increase the sample rate with a better result - measurements show it to be the best compared to similar upsamplers.

So, our signal, while starting its way from a modest 44/16, even without an upsampler in Foobar, _in any case_ undergoes these changes on its way:

Step A (audio file): 44/16
Step B (VST plugins - let's assume they oversample everything to 176): 176/32
Step C (at the Foobar output limited by the DAC driver, and at the DAC input): 44/24
Step D (inside the DAC - let's assume it oversamples all incoming signals to 352): 352/32

So, if our signal finishes its way as 352/32 in the DAC, why would it not be beneficial to insert our highest-quality upsampler between steps A and B to upsample the signal first to 88 or 176?

Please note that there is downsampling happening from step B to step C (if we don't upsample before the VST chain).

But if we upsample before our VST chain, then step C looks like this:

Step C (foobar output, limited by the DAC driver, and at the DAC input): 176/24

If we don't upsample, then the VST plugin has to do the full job, i.e. oversample 4X.
If we upsample to 88, then the VST plugin has to do only half of its job, i.e. oversample only 2X.
If we upsample to 176, then the VST plugin does not have to oversample at all.

What's wrong with my logic?


----------



## gregorio

ironmine said:


> But it's still advisable to set up the Foobar's output setting to 24bit or 32 bit, depending on how much your DAC can accept, because now, at the end of a processing chain, these lower bits - from 16 to 24, contain useful information. They are not zeros anymore.



No, they're not zeros anymore; in almost all cases they're reasonably random zeros and ones, i.e. noise. Also, don't forget that your DAC cannot resolve the last four bits or so. So really we're talking about what's in bits 16-20, assuming there is something "useful" there in the first place. If there is, then it's a question of what your amp, transducers (speakers/HP) and listening environment combined are capable of, and then finally, if there is still anything "useful" actually being produced, are your ears capable of hearing it? With noise-shaped dither we can achieve the equivalent of 20bit performance with 16bit anyway, and in the vast majority of cases most people can't hear the artefacts of just truncating 32 or 24bit to 16bit.

However, if you're doing this self-administered plugin processing as the end user, then you might as well not apply dither and just output the 32bit from Foobar. Unless Foobar is directly outputting to your DAC, which I assume it isn't (it's most likely outputting to a driver), there's no advantage (or disadvantage) to outputting 24bit from Foobar, so you might as well leave it at 32bit.
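As an aside, the effect of plain TPDF dither before truncation (without the noise shaping mentioned above) can be demonstrated with a toy sub-LSB signal; the numbers are illustrative only:

```python
import random
random.seed(0)        # deterministic for the example
LSB = 1 / 32768.0     # one 16-bit step

def to16(x, dither):
    if dither:
        # triangular-PDF dither, +/- 1 LSB wide (difference of two uniforms)
        x += (random.random() - random.random()) * LSB
    return round(x / LSB) * LSB

x = 0.3 * LSB                                  # a signal below one 16-bit step
plain = [to16(x, False) for _ in range(1000)]
dith = [to16(x, True) for _ in range(1000)]

print(all(p == 0.0 for p in plain))            # True: quiet detail simply vanishes
print(any(d != 0.0 for d in dith))             # True: dither preserves it, as noise
print(abs(sum(dith) / len(dith) / LSB - 0.3) < 0.1)  # True: the average tracks the signal
```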



ironmine said:


> What's wrong with my logic?



Several  things:

1. You appear to be confusing two different things here: Anti-alias and anti-imaging filters. To comply with Nyquist, we have to remove all freqs above the Nyquist Point to avoid aliasing. With oversampling our Nyquist Point is far higher, this requires a much simpler, cheaper and less damaging analogue filter and also spreads the dither noise over a wider band, much of which is then discarded when a secondary (digital) decimation filter is applied. This is why pro ADCs oversample into the multiple *m*Hz range, commonly somewhere around 22mHz. However, none of this is applicable in our case because what we're dealing with has already been anti-alias filtered and there is nothing above the 22.05kHz Nyquist Point of our 44.1/16 input file/signal!
1a. What Bob is talking about was absolutely true, for a number of years. I sometimes used to run sessions at 96kHz as many plugins operated audibly better at that rate, many/most soft-synths, compressors, some EQs and various others. There is now (and for quite a few years) no benefit to this as the processing power available to plugin developers has increased significantly and the coding is more sophisticated. Most plugins no longer benefit from a higher sample rate and those few which do (such as non-linear analogue modelling plugins), now simply up/down sample where necessary and can apply better filters when doing so. I no longer run sessions higher than 44.1 or 48kHz for audio quality reasons, I do so only if a client requires delivery of a higher sample rate.

2. In your "Step B", if we were to make that assumption, then what you've asserted might have some merit. However, you cannot make that assumption! Those plugins which do legitimately upsample would do so at x2 (either 88.2 or 96kHz) as any theoretical benefits of filters and any non-linear processes are perfectly addressed by those sample rates and going higher is just a waste of processing and potentially less accurate. Those plugins with a fixed sample rate (such as some convolution reverbs and some soft-synths/samplers for example) tend to operate at 96kHz. I know of no plugins which upsample to 176.4kHz legitimately and by "legitimately" I mean processors which upsample to that rate for any reason other than marketing. The only exception to this would be plugin processors designed to deal with intersample peaks, such as true-peak limiters, although some upsample well beyond 176kHz.

3. Your last paragraph; if we don't upsample to 176 and the plugins do automatically upsample to 176 then yes, there would be fewer applications of anti-alias filters. BUT:
A. If we're feeding the plugin a signal with no content above 22.05kHz and if the processor is not creating any content above that freq, then we're applying a smoother, higher frequency filter where there is no frequency content anyway, to a signal which already has (whatever) filter artefacts from being filtered to 22.05kHz to start with.
B. All plugins do not automatically upsample to 176. In the case of a plugin with a fixed sample rate of say 96kHz then: You upsample to 176, adding a filter in the process. The plugin downsamples, processes and upsamples again, adding two more filters to the process. That's the application of 3 filters where if you'd just fed the plugin 44.1 to start with, there would only have been two filters applied.
C. If the plugin does not upsample (and there's no reason to, in most cases) then: You're upsampling for no benefit, adding an unnecessary processing step, an additional filter and risking lower precision by operating at a higher than optimal sample rate.
D. If we were to upsample to 88.2, we'd be adding a filter. If the plugin is operating at 176.4, then it has to upsample from 88.2 to 176.4 and add another filter, then downsample to 88.2 and add another filter. If we'd just fed the plugin 44.1 to start with there would only be two filters applied, rather than three.

4. Your fear of up/down sampling and the effects of applying filters to this process is unwarranted. The filters in plugins today are far superior to those of 15 or so years ago and are audibly transparent.

5. In the case of a DAC oversampling to say 352kHz, then going from 44.1kHz to 352kHz is theoretically better than going from 44.1 to 176 and then from 176 to 352. It's one less processing step and filter application, the same as 3d above.
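The spectral "images" that each of the filters counted above must remove can be made visible with a toy DFT (pure illustration: naive zero-stuffing is only the first half of a real 2x upsampler, the second half being exactly such an anti-imaging filter):

```python
import cmath
import math

def dft_mag(x):
    """Brute-force DFT magnitudes, fine for a 32-point toy example."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

N = 16
tone = [math.cos(2 * math.pi * 2 * n / N) for n in range(N)]  # a tone in bin 2

up = [0.0] * (2 * N)
up[::2] = tone   # naive 2x upsample: insert a zero between every sample

peaks = [k for k, m in enumerate(dft_mag(up)) if m > 1.0]
print(peaks)  # [2, 14, 18, 30]: the original tone plus the images a filter must remove
```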

None of the above is absolutely set in stone, as of course it all depends on the skill/effort of the plugin developer.

G


----------



## 71 dB

Don't want to be "grammar nazi", but *m*Hz = 0.001 Hz and *M*Hz = 1 000 000 Hz


----------



## WoodyLuvr

@gregorio very informative post; thank you for taking the time to write that as I learned a lot! Cheers.


----------



## Zapp_Fan

ironmine said:


> Please do not ascribe to me some primitive ideas as if I don't understand that converting the wordlength from 16 bit to 24 bit "recreates" extra 8 bits (or that upsampling "recreates" some frequencies). I may have some misconceptions but not as stupid as that.
> 
> Let's talk about the wordlength (bit rate) first.
> 
> ...



I have to chime in here because 32/64 bit floating point noise is something we can't afford to start worrying about, whether or not it is technically more accurate.  Also, any reasonable VST host will be passing 32 bits between plugins before converting back to the output bitdepth at the end, so whether you change bit depth before the processing stage is irrelevant.  Still, probably, using 24-bit output is technically superior to 16-bit output, if any component in your system actually has that kind of noise performance... which is doubtful.


----------



## Whazzzup

well i use minimum setting crossfade on my TT, sometimes no crossfade


----------



## WoodyLuvr

Whazzzup said:


> well i use minimum setting crossfade on my TT, sometimes no crossfade


Is crossfade similar to crossfeed?


----------



## castleofargh

here is a hint, they are different words. like crosstalk, or crossfit. ^_^


----------



## Whazzzup

Yes feed


----------



## WoodyLuvr

LOL


----------



## Strangelove424

A crossfade is a kind of audio transition, maybe he got confused.


----------



## ironmine

gregorio said:


> 1. You appear to be confusing two different things here: Anti-alias and anti-imaging filters. To comply with Nyquist, we have to remove all freqs above the Nyquist Point to avoid aliasing. With oversampling our Nyquist Point is far higher, this requires a much simpler, cheaper and less damaging analogue filter and also spreads the dither noise over a wider band, much of which is then discarded when a secondary (digital) decimation filter is applied. This is why pro ADCs oversample into the multiple *m*Hz range, commonly somewhere around 22mHz. However, none of this is applicable in our case because what we're dealing with has already been anti-alias filtered and there is nothing above the 22.05kHz Nyquist Point of our 44.1/16 input file/signal!
> 1a. What Bob is talking about was absolutely true, for a number of years. I sometimes used to run sessions at 96kHz as many plugins operated audibly better at that rate, many/most soft-synths, compressors, some EQs and various others. There is now (and for quite a few years) no benefit to this as the processing power available to plugin developers has increased significantly and the coding is more sophisticated. Most plugins no longer benefit from a higher sample rate and those few which do (such as non-linear analogue modelling plugins), now simply up/down sample where necessary and can apply better filters when doing so. I no longer run sessions higher than 44.1 or 48kHz for audio quality reasons, I do so only if a client requires delivery of a higher sample rate.
> 
> 2. In your "Step B", if we were to make that assumption, then what you've asserted might have some merit. However, you cannot make that assumption! Those plugins which do legitimately upsample would do so at x2 (either 88.2 or 96kHz) as any theoretical benefits of filters and any non-linear processes are perfectly addressed by those sample rates and going higher is just a waste of processing and potentially less accurate. Those plugins with a fixed sample rate (such as some convolution reverbs and some soft-synths/samplers for example) tend to operate at 96kHz. I know of no plugins which upsample to 176.4kHz legitimately and by "legitimately" I mean processors which upsample to that rate for any reason other than marketing. The only exception to this would be plugin processors designed to deal with intersample peaks, such as true-peak limiters, although some upsample well beyond 176kHz.
> ...



You now argue just to win this debate; it's the logic of a Jesuit. You write contradictory things, without hesitation, depending on what you need to say to prove me wrong. Examples:

From what you say here: "_Most plugins no longer benefit from a higher sample rate and those few which do (such a non-linear analogue modelling plugins), now simply up/down sample where necessary and can apply better filters when doing so._"... "_Your fear of up/down sampling and the effects of applying filters to this process is unwarranted."_

By saying so, you invite us to conclude that upsampling and filtering nowadays are implemented at such a high quality level that we don't have to be afraid of any quality loss, even when the process is repeated over and over again. You imply that _*upsampling / filtering is either harmless or beneficial*_.

But in the other paragraphs, you say the opposite: "_The plugin downsamples, processes and upsamples again, adding two more filters to the process. That's the application of 3 filters where if you'd just fed the plugin 44.1 to start with, there would only have been two filters applied._"

So now, all of a sudden, an increased number of downsampling/upsampling & filtering processes is an ugly thing and we need to minimize it. You imply that *upsampling / filtering is harmful*.

When an upsampling/downsampling happens inside a plugin, it's good and beautiful. When it is done outside a plugin, it's suddenly bad and ugly. How is that?

When you need to strip my reasoning of any merit, you don't allow me to assume that plugins may upsample to 176; you write: "In your "Step B", if we were to make that assumption, then what you've asserted might have some merit. However, *you cannot make that assumption*!"

Several sentences later you write: "_*If the plugin is operating at 176.4*_, then it has to upsample from 88.2 to 176.4 and add another filter, then downsample to 88.2 and add another filter."

So now, just to prove me wrong, you are making the very same assumption which you wouldn't let me make earlier!

Seeing these logical flaws in your thinking, I do not find your arguments convincing. 

I still think that upsampling helps plugins do their job better. Bob Katz and Aleksey Vaneyev (the author of the highly respected Voxengo plugins) think the same. To my knowledge (based on the plugins which I have, and whose manuals I've read), if audio is 44 or 48, most plugins will offer x2 oversampling to improve accuracy, or will do it even without asking. They may not oversample 88 or 96 to 176 or 192, because 88 and 96 are already good enough, but, most probably, they will oversample 44 or 48 to 88 or 96.

So, if you need to process audio that is 44 or 48, I advise doing at least x2 upsampling of the signal before you feed it to the VST host for serious processing. Better to do it yourself, once, in an upsampler (dBpoweramp/SSRC) whose quality and precision have been confirmed to be top-notch, rather than trust plugins to do it themselves, probably many times over, in who knows what haphazard way. Remember that your 44 kHz or 48 kHz signal will be taken by your oversampling DAC to 352 or 384 kHz, or even into the MHz range, whether you want it or not. So, if you have to do heavy audio processing and upsampling is inevitable anyway, ask yourself this question: where do you want this processing to happen, BEFORE or AFTER upsampling? For me the choice is obvious. I prefer to do it before. This is what I do: I usually upsample x2 (to 88 or 96) and the result (sound), to my ears, is great (but a bit different from both 44 and 176).
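For readers who want to see what "x2 upsampling" actually involves, here is a minimal sketch of the classic zero-stuff-and-filter method in Python/NumPy. This is an idealised toy, not how dBpoweramp/SSRC actually implement it; real resamplers use much longer, better-designed filters:

```python
import numpy as np

def upsample_x2(x, taps=63):
    """Naive x2 upsampler: zero-stuff, then lowpass at the original Nyquist."""
    mid = taps // 2
    k = np.arange(taps) - mid
    # Windowed-sinc lowpass with cutoff at 1/4 of the new sample rate
    h = np.sinc(k / 2.0) * np.hamming(taps)
    h *= 2.0 / h.sum()               # gain of 2 compensates for zero-stuffing
    stuffed = np.zeros(2 * len(x))   # insert a zero between every sample;
    stuffed[::2] = x                 # this mirrors the spectrum above Nyquist
    return np.convolve(stuffed, h, mode="same")  # the filter removes the images
```

The quality of the result is entirely down to the filter design (length, window, ripple), which is exactly why dedicated resamplers can differ from quick in-plugin ones.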

You can find my arguments or Gregorio's more convincing, I don't care. If that's all you have to say, Gregorio, I am still unconvinced, sorry. But I read your messages with interest and look forward to them. Maybe I am not smart enough to connect all the dots and see the light of truth.


----------



## 71 dB

Strangelove424 said:


> A crossfade is a kind of audio transition, maybe he got confused.



The use of the cross… terms has been a little messy in this thread, myself included, I'm afraid.

Crosstalk => Unwanted. Happens electronically, audio channels leak to each other.
Crossfeed => Wanted. Happens electronically or acoustically.
Crossfade => One audio clip fades out as another fades in. Unrelated to crossfeed and crosstalk.


----------



## gregorio (Dec 13, 2017)

ironmine said:


> You now argue just to win this debate ...



Actually, I'd say that's exactly what YOU appear to be doing! Making up a whole bunch of my supposed implications, taken out of context, to suit your argument. For example:


ironmine said:


> Several sentences later you write: "_*If the plugin is operating at 176.4*_, then it has to upsample from 88.2 to 176.4 and add another filter, then downsample to 88.2 and add another filter." - So now, just to prove me wrong, you are making the very same assumption which you wouldn't let me make earlier!



What assumption have I made? I have stated what would happen given the circumstances YOU described, not that plugins do operate at 176. In fact, later in the post I specifically stated "I know of no plugins which upsample to 176.4kHz legitimately".



ironmine said:


> you invite us to make the conclusion that upsampling and filtering nowadays is implemented at such high quality level so we don't have to be afraid of any quality loss even when this process is repeated many times over and over again. You imply that _*upsampling / filtering is either harmless*_ _*or beneficial*_.



Nowhere do I imply filtering is beneficial and I'm also NOT implying that up/down sampling is always harmless, just as I did not imply there is no quantisation error with a 32 or 64bit plugin. However, many modern releases will already have had this repeated "many times over and over again"; a consumer, on the other hand, doesn't! I've done mixes with over 200 plugins, what consumer is ever going to do that? They're going to create a chain of just one or a handful of plugins and NOT repeat the process "many times over and over again"!



ironmine said:


> I still think that upsampling helps plugins do their job better. ... To my knowledge (based on the plugins which I have, and whose manuals I've read), if audio is 44 or 48, most plugins will offer x2 oversampling to improve accuracy or will do it even without asking.



You're free to think whatever you want but you haven't explained why adding trailing zeros one step early "improves accuracy" or how doubling the data and bandwidth, where no frequencies exist, "improves accuracy". Until you do, there is no basis for your advice to others!

G


----------



## bigshot

71 dB said:


> Crosstalk => Unwanted. Happens electronically, audio channels leak to each other.
> Crossfeed => Wanted. Happens electronically or acoustically.
> Crossfade => One audio clip fades out as another fades in. Unrelated to crossfeed and crosstalk.



You forgot one...

Cross Purposes => The way people discuss things in Sound Science


----------



## 71 dB

bigshot said:


> You forgot one...
> 
> Cross Purposes => The way people discuss things in Sound Science



Crossroad, crossdress, crossword,…


----------



## jgazal

bigshot said:


> You forgot one...
> 
> Cross Purposes => The way people discuss things in Sound Science


----------



## ironmine

gregorio said:


> I'm NOT implying that up/down sampling is always harmless



I am sorry, but you implied it when you wrote (let me quote you):

_"those few [plugins] which do [benefit from a higher sample rate] now simply up/down sample where necessary".
"Your fear of up/down sampling ... is unwarranted."
_
Looks like, for you, whatever happens in a plugin, including upsampling/downsampling, is harmless/beneficial. But when I propose to do the same upsampling outside plugins, you are against it. Why such prejudice?



gregorio said:


> you haven't explained why adding trailing zeros one step early "improves accuracy"



I already told you, it's not me who "adds zeros". It's the upsampler which increases the bit depth from 16 to 32 in the process of upsampling. Look: you take 44/16, feed it to the 32-bit upsampler, choose to upsample the signal x2 and, presto, you end up with 88/32 on your hands. That's simple. Going from 16-bit to 32-bit is the natural result of upsampling in the dBpoweramp/SSRC upsampler. What do you expect me to do? Truncate the upsampled signal back to 16-bit? It's not reasonable to do so, because these extra bits are not zeros any more; they contain useful info.
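Both sides' point about "trailing zeros" can be shown in a couple of lines: widening a sample from 16 to 32 bits adds no information by itself, and the extra bits only stop being zeros once some processing writes fractional results into them. The halving below is just a toy stand-in for any such processing:

```python
import numpy as np

s16 = np.int16(12345)        # a 16-bit PCM sample
s32 = np.int32(s16) << 16    # widened to 32 bits: same value scaled up,
                             # the low 16 bits are all zero (no new info)

# After processing with fractional results, the low bits become non-zero
# and genuinely carry information -- e.g. a toy gain change of 0.5:
halved = np.int32(np.float64(s32) * 0.5)
```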



gregorio said:


> or how doubling the data and bandwidth, where no frequencies exist, "improves accuracy".



The sample rate is doubled not for the purpose of recreating frequencies which don't exist. You know it perfectly well. Let me quote you:

"_Those plugins which do legitimately upsample would do so at x2 (either 88.2 or 96kHz) as any theoretical benefits of filters and any non-linear processes are perfectly addressed by those sample rates_" and "_I sometimes used to run sessions at 96kHz as many plugins operated audibly better at that rate, many/most soft-synths, compressors, some EQs and various others._"

The sample rate is doubled to improve the precision and accuracy of computations and to sound, in your words, "audibly better".

Why don't you ask plugin makers why they oversample inside their plugins? Don't these naive fools know that "there are no frequencies"?



gregorio said:


> Until you do, there is no basis for your advice to others!



No. Until you prove (to me, and also to Bob Katz and Aleksey Vaneyev) that re-sampling up and down, up and down, many times, as the audio data is passed from one plugin to another, is better than upsampling it once before entering a chain of VST plugins, you should refrain from discouraging users from using the dBpoweramp/SSRC upsampler in Foobar.



gregorio said:


> However, you cannot make that assumption! Those plugins which do legitimately upsample would do so at x2 (either 88.2 or 96kHz) as any theoretical benefits of filters and any non-linear processes are perfectly addressed by those sample rates



Ok! Let's suppose plugins oversample x2. I mean, if the signal is 88 or 96 or higher (176 or 192), they just operate at these rates. But if the incoming signal is 44 or 48, then they oversample by 2.

This is what happens if you follow my advice and upsample in Foobar prior to feeding the signal to a VST host (Foobar is 32-bit and the VST host is also 32-bit):

[screenshot: annotated diagram of the Foobar DSP chain with x2 upsampling before the VST host]

And this is how it looks if I follow your advice:

[screenshot: annotated diagram of the alternative chain, feeding 44.1 directly to the VST host]

----------



## gregorio (Dec 14, 2017)

ironmine said:


> [1] Looks like, for you, whatever happens in a plugin, including upsampling/downsampling, is harmless/beneficial. [2] But when I propose to do the same upsampling outside plugins, you are against it. Why such prejudice?



1. No, it is not harmless but it is/should be inaudible!
2. Why do it when it's unnecessary? Increasing sample rate just for the sake of increasing sample rate does not improve accuracy/precision, if anything it decreases accuracy!



ironmine said:


> I already told you, it's not me who "adds zeros". It's the upsampler which increases the bit depth from 16 to 32 in the process of upsampling.



Huh? Who is it who is inserting the upsampler? I've already covered bit depth and explained why there is absolutely no point in adding this additional process to your chain!



ironmine said:


> The sample rate is doubled to improve the precision and accuracy of computations and to sound, in your words, "audibly better".



No, doubling the sample rate does NOT improve precision or accuracy. You are either not reading what I'm posting, not understanding it or deliberately misrepresenting it, which is it? I said that many years ago there were numerous plugins which sounded "audibly better" at higher sample rates. How many plugins are you and others using which have not been updated in a decade? I also stated that there are very few plugins which benefit from upsampling and those which do, upsample for specific reasons, NOT because of computational precision or accuracy! For the third time (!): the few which benefit are those which perform non-linear processes AND create frequencies above 22.05kHz, such as a modelled analogue compressor which creates IMD as part of that modelling process. Such plugins will therefore up/down sample internally to temporarily increase frequency bandwidth; it has nothing to do with your supposed increase in accuracy/precision!



ironmine said:


> [1] Why don't you ask plugin makers why they oversample inside their plugins? Don't these naive fools know that "there are no frequencies"?



1. Duh, I have, how do you think I know what I'm stating, you think maybe I'm just making it all up because I have some irrational hatred of upsamplers? I've spoken with some of the very best, most respected developers in the business and in fact some of the same people from whom Bob Katz gets his information. AGAIN, I did NOT disagree with what Bob wrote, WHEN he wrote it. I disagree that what he said 15 years ago is still applicable today and I would think he does too! AGAIN, how many plugins do you use which have not been updated in a decade or so?

2. These "naive fools" you're talking about have shaped the industry. I think it's clear who the "naive fool" is here!

For everyone else: You do not need an upsampler as part of your plugin chain. Very few plugins benefit from upsampling and those that do will do it themselves, if necessary. If you have some burning desire to insert a dedicated upsampler then by all means do so, but you are costing yourself double or quadruple the processing resources for absolutely no benefit. If, by some chance, you do detect an actual audible benefit from inserting a dedicated upsampler, then examine your processing chain: you are using an incompetently coded/designed plugin. Generally, I wouldn't need to state that last sentence but I'm aware some audiophiles use free plugins and, while there are some excellent developers whose business model allows for high quality plugins which are free, this isn't always the case and there are certainly some shoddy/inexperienced/poor plugin developers out there. And of course, just because a plugin is not free isn't an absolute guarantee that it's competently coded/designed.

G


----------



## castleofargh

ironmine said:


> I am sorry, but you implied it when you wrote (let me quote you):
> 
> _"those few [plugins] which do [benefit from a higher sample rate] now simply up/down sample where necessary".
> "Your fear of up/down sampling ... is unwarranted."
> ...



to be clear for others, the 2 examples are manufactured! and the red annotations about how something will stay as-is or be upsampled x2 are either written without actual evidence that it happens (and it most likely doesn't), or it does happen that way because the settings of the VSTs, when available, were set to upsample x2. kind of like proving something happens by forcing it to happen with settings nobody would use otherwise.
if I was to know for a fact that most of the VSTs I use work at a known and identical sample rate, I would just set the VST host to that sample rate and be done with it (if the DAC happens to handle that sample rate). and if some VSTs allowed me to pick the internal sample rate myself, and no other consideration came along at all (CPU usage or whatever convolution stuff that could require following the rate of the impulse), then obviously I would also try to set them to the same sample rate, or maybe a higher one if I knew it somehow mattered for one of those VSTs in particular.
so we simplify things when possible and when there is no other reason to do something else. which is a little different from assuming that starting the VST chain with oversampling is always better. the proper answer to pre-oversampling or not is likely to be: "it depends".

as for bit depth, indeed almost everything works at least at 32bit. until whatever we have set at the output for the DAC. I believe everybody now agrees on at least that much.


----------



## ironmine (Dec 15, 2017)

Gregorio,

I want to come back to your previous post:



gregorio said:


> You appear to be confusing two different things here: Anti-alias and anti-imaging filters. To comply with Nyquist, we have to remove all freqs above the Nyquist Point to avoid aliasing. ... However, none of this is applicable in our case because what we're dealing with has already been anti-alias filtered and there is nothing above the 22.05kHz Nyquist Point of our 44.1/16 input file/signal!



Yes, the original 44/16 has been anti-alias filtered, but as we process audio with plugins, aliasing can be re-introduced into the signal. For example, even an algorithm as simple as a limiter can result in aliasing. Just read this explanation from FabFilter:
https://www.fabfilter.com/help/pro-l/using/oversampling.html

"The limiting algorithm often needs to make very quick changes to the audio in order to remove peaks while preserving transparency and apparent volume. These sudden changes can introduce aliasing, which causes distortion and generally reduces the quality of the audio signal. Oversampling is a way to reduce that aliasing by running the internal limiting process at a higher sample rate that is a multiple of the host's sample rate."

FabFilter also recommends oversampling for its compressor and expander:
https://www.fabfilter.com/help/pro-c/using/oversampling
https://www.fabfilter.com/help/pro-mb/using/oversampling

And for its gating algorithm, too:
https://www.fabfilter.com/help/pro-g/using/oversampling
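The mechanism FabFilter describes is easy to demonstrate: a non-linear operation such as hard clipping (a crude stand-in for a limiter) creates harmonics above Nyquist, and those fold back down as tones that were never in the input. A minimal NumPy illustration:

```python
import numpy as np

fs = 44100
f0 = 9000.0                 # a 9 kHz tone
N = 8192
n = np.arange(N)
x = np.sin(2 * np.pi * f0 * n / fs)

# Crude "limiter": hard clipping generates odd harmonics (27 kHz, 45 kHz, ...)
clipped = np.clip(1.5 * x, -1.0, 1.0)

spec = np.abs(np.fft.rfft(clipped * np.hanning(N)))
freqs = np.fft.rfftfreq(N, d=1.0 / fs)

# The 3rd harmonic (27 kHz) exceeds Nyquist (22.05 kHz) and aliases down to
# 44.1 - 27 = 17.1 kHz -- energy at a frequency that was never in the input.
alias_bin = np.argmin(np.abs(freqs - (fs - 3 * f0)))
quiet_bin = np.argmin(np.abs(freqs - 12000.0))  # no harmonic or alias lands here
```

Oversampling before the clip moves Nyquist up so those harmonics land below it and can be filtered off before downsampling, which is the point of the oversampling option FabFilter describes.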



gregorio said:


> 1. No, it [oversampling] is not harmless but it is/should be inaudible!



Well, if you do it once, it may be inaudible, but what if you do it several times? As an audiophile, I passionately care about the quality of playback and I try not to do anything that degrades the sound quality.

FabFilter also says that oversampling degrades the quality:

https://www.fabfilter.com/help/pro-l/using/oversampling.html:
"There are only two small drawbacks to oversampling: it increases CPU usage, and it can introduce a very slight pre-ring due to the phase-linear filtering that is needed. Generally this effect is so small that it's inaudible, but it's good to be aware of this and not blindly assume that oversampling is always better."

I don't care about CPU usage (my computer is powerful), but I do care about the effect of "slight pre-ring due to the phase-linear filtering that is needed" as a result of oversampling. I don't want this effect of pre-ringing to be multiplied by many instances of oversampling occurring in different plugins. If oversampling has such a negative impact upon sound quality, I would like to do it only once, in the highest quality upsampler that I trust, not many times in VST plugins, whose oversampling algorithms, I am sure, are not as advanced as a dedicated upsampler's. VST plugins try to boast as low a latency as possible, and high-quality oversampling works against that. But I don't care about latency while simply listening to music.
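The pre-ring FabFilter warns about comes from the symmetric (linear-phase) filters used for resampling: they respond before a transient arrives. Passing a step through a windowed-sinc lowpass shows it directly (a toy illustration, not any particular resampler's filter):

```python
import numpy as np

taps = 101
mid = taps // 2
k = np.arange(taps) - mid
h = np.sinc(k / 2.0) * np.hamming(taps)  # linear-phase (symmetric) lowpass
h /= h.sum()                             # unity gain at DC

x = np.zeros(400)
edge = 200
x[edge:] = 1.0                           # a step: the sharpest possible transient
y = np.convolve(x, h, mode="same")       # output ripples *before* the edge
```

A minimum-phase filter would ring only after the edge; a linear-phase filter splits its ringing symmetrically around it, hence pre-ring.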

You can see here http://src.infinitewave.ca/ that even costly software products often struggle to re-sample with high quality. So, what to expect from VST plugins? I would not be surprised to see mediocre results due to crude interpolation methods.

Oversampling results can differ greatly depending on how much effort goes into interpolating the missing samples:

[images: interpolation quality examples from the Lavry paper linked below]

http://lavryengineering.com/pdfs/lavry-sampling-oversampling-imaging-aliasing.pdf


----------



## ironmine

gregorio said:


> . For everyone else: You do not need an upsampler as part of your plugin chain.



Yeah, very "valuable" advice from the guy who still runs his sessions, as you confessed, at 44 or 48 kHz!
Much good it will do us...



gregorio said:


> .
> Very few plugins benefit from upsampling and those that do, will do it themselves, if necessary.



You conveniently do not mention the fact that not only will these plugins upsample if necessary, but they will inevitably downsample the data before they can pass it to the next plugin in the VST chain. If the VST host operates at 44, a plugin cannot pass the data to the next plugin at 88 or 176.


----------



## gregorio

castleofargh said:


> to be clear for others, the 2 examples are manufactured! and the red annotations about how something will stay as is or be upsampled X2 are either written without actual evidence that it happens(and it most likely doesn't), or does happen that way because again the settings of the VSTs when available were set to upsample X2.



In actual fact, his example plugin chain is not just a manufactured example, it's completely made-up nonsense! The RX De-Clip has a built-in True Peak limiter and is therefore running internally at a sample rate of at least 192kHz. So, he's upsampled (with a plugin omitted from his diagram!) to 88.2kHz and then the De-Clip upsamples it again. If he'd just fed the RX De-Clip 44.1kHz to start with, there would have been fewer upsampling steps/filters applied!

The signal exits this De-clip module and is sent to FabFilter Pro Q2 (an EQ plugin). EQ does not benefit from oversampling, the Q2 therefore does not oversample and has no setting to force it to oversample! After the Q2, the signal goes to a mic-preamp emulator. I'm not sure why anyone would want to put a mic-preamp plugin on a final mix but that's a matter of choice I suppose; as this is a non-linear analogue modelling type plugin it could theoretically benefit from upsampling, depending on exactly what non-linear processing it's doing and how it's doing it. Whether it does actually upsample or not I don't know and if it does upsample then it might well do so at a fixed rate, say 96kHz. There's a fair/good chance it does not upsample though, because as a mic-preamp plugin it's designed to be used on every input channel and upsampling could result in unacceptable CPU usage in such circumstances.

Then on to an algorithmic reverb, which definitely does not benefit from oversampling, in fact even 44.1kHz is higher than necessary and to reduce processing overhead some algorithmic reverbs downsample and then operate internally at a 32kHz sample rate! I'm not sure if any current plugin algorithmic reverbs actually do this any more but for certain, if this plugin even has an oversampling option it's purely a gimmick/marketing exercise and should be avoided. The Redline Monitor is a basic headphone crossfeeder which again would not benefit from a higher sample rate.

Clearly, ironmine's examples/annotations could not be manufactured even if you wanted to, he's just made it up! The RX De-Clip is not upsampling x2, more like about x4.3 or higher. The Q2 does not and cannot be forced to upsample. We don't know if the pre-amp plug upsamples and if it does, to what sample rate, etc. Even in his own cherry-picked processing chain example, initially upsampling to 88.2kHz is likely to be more detrimental than just feeding the original 44.1kHz! However, it's very unlikely in practise this inferior result would be audibly inferior.

Again, for everyone else: Do not insert an upsampling plugin, just feed your chain what you've got (say 44.1kHz) and let your plugins up/down sample when/IF they need to. I suppose an exception could be: If you are certain that ALL the plugins in your chain are operating at exactly x2 oversampling AND that they are doing so for legitimate reasons. Then sure, insert an upsampling plugin BUT rarely, if ever, can you be certain of both of these conditions, as so ably (and inadvertently!) demonstrated by ironmine! Therefore, the most likely outcome is that you will be significantly increasing your processing overhead in exchange for (theoretically) reducing audio quality! 



ironmine said:


> For example, even an algorithm as simple as a limiter can result in aliasing. Just read this explanation from FabFilter: ...



AGAIN, you appear to be quoting something you either have not understood or are deliberately misrepresenting, which is it???

1. I specifically already mentioned the need for oversampling with a true-peak limiter AND
2. This is what FabFilter actually state in the article to which you linked: "_We recommend 4x or possibly 8x oversampling for normal use as this will already drastically reduce aliasing and not cause extreme CPU usage. The 16x and 32x options may result in even higher audio quality, but for most systems this is simply too taxing to run in real-time especially if there are also other plug-ins in the session._". So, what benefit is there to upsampling x2 to 88.2kHz (and introducing a filter), just for the plugin to upsample again at an even higher rate, add another filter and then downsample back to 88.2??? You'd be better off just feeding the plugin the original 44.1kHz to start with!



ironmine said:


> [1] Yeah, very "valuable" advice from the guy who still runs his sessions, as you confessed, at 44 or 48 kHz!
> [2] You conveniently do not mention the fact that not only these plugins will upsample if necessary, but they will inevitable downsample the data before they can pass it to the next plugin in the vst-chain.



1. You are AGAIN either NOT reading or DELIBERATELY misrepresenting what's been stated! I am not "STILL" running my sessions at 44 or 48kHz. I already mentioned (and you have quoted!) that at one stage, for a number of years (starting around 1998), that I regularly ran sessions in 96kHz for audio quality reasons but that I have since stopped that practise and RETURNED to 44 or 48, as there are no longer any audible benefits.
2. Another lie/misrepresentation, I have consistently stated that in the given circumstances plugins up/down sample, NOT ONLY upsample!

You obviously have a strong (though erroneous) belief in upsampling, which you've arrived at through the marketing hype that higher sample rates must be better, and then supported that belief with outdated information, a typical audiophile trap. I don't object to this, in fact, this is one of the main reasons this forum exists: to present/discuss relevant information and facts, to separate out what is just marketing hype and help to avoid the audiophile traps. However, you've now progressed beyond just presenting/discussing the facts and are trying to defend your belief by simply making up nonsense facts and deliberately misrepresenting others, and that is NOT acceptable in this sub-forum!!

G


----------



## ironmine

I did not actually say that I know for sure whether those specific plugins which I showed in the above screenshots oversample or not. I cannot know that. I showed that chain only to illustrate my thought (scenario). I don't claim any knowledge about how much they oversample, either (if they do). But you seem so confident, you pretend that you know exactly what happens, for example, inside the RX De-Clip. Ok, let's check whether you lied or not when you said you know that RX De-Clip operates at 192 kHz:



gregorio said:


> In actual fact, his example plugin chain is not just a manufactured example, it's completely made-up nonsense! The RX De-Clip has a built-in True Peak limiter and is therefore running internally at a sample rate of at least 192kHz.



I converted a flac file in Foobar losslessly to another flac file. In the process of conversion, I applied the RX De-Clip plugin with its post-limiter activated. This VST plugin found 0 (zero) clips (because the file's peak was at -1.5 dB and the RX De-Clip threshold was set to -1 dB). The quality level was set to High. Then I bit-compared these two flac files in Foobar. Foobar checked them and reported that they are bit-identical except for a small offset. It means that the RX De-Clip does not upsample/downsample. If it did, the files would not be identical.

So, why did you say that you know the RX De-Clip upsamples? It does not.




gregorio said:


> So, he's upsampled (with a plugin omitted from his diagram!) to 88.2kHz



What? I did not omit the plugin. I showed it in the list of active DSPs in Foobar. Don't you see dBpoweramp/SSRC in my screenshot? It's the first in the list.

For your information, unlike Foobar components, VST plugins inside a VST host cannot be upsamplers. I mean, they can upsample/downsample internally only, but they cannot upsample and then pass the upsampled data to the next plugin. So your remark is funny. You just inadvertently revealed your ignorance about how VST hosts work.


----------



## gregorio

ironmine said:


> But you seem to be so confident, you pretend that you know exactly what happens, for example, inside the RX De-Clip. Ok, let's check whether you lied or not when you said that you know that RX De-Clip operates at 192 kHz ...So, why did you say that you know the RX De-Clip upsamples? It does not.



Yes it does, and it MUST! You obviously don't know what True Peak means or how it's measured and defined. The ITU defines a TP measurement as at least a x4 oversampling process based on a 48kHz sample rate, although Izotope's own dedicated TP meter/loudness plugin oversamples x9, and it's entirely likely that its De-Clip plugin does too. So, I don't know that De-Clip is operating at exactly 192kHz, it could be operating far higher, but it's certainly not working at 44.1 or 88.2!!
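For anyone following along, the idea of an oversampled true-peak measurement can be sketched as below. One hedge: ITU-R BS.1770 specifies a polyphase FIR interpolator at (at least) x4, whereas this sketch uses ideal FFT zero-padding purely as a stand-in:

```python
import numpy as np

def true_peak_db(x, oversample=4):
    """Estimate inter-sample peaks by reconstructing the waveform at a higher rate.

    Idealised FFT zero-padding stands in for the polyphase FIR interpolator
    that ITU-R BS.1770 actually specifies.
    """
    X = np.fft.rfft(x)
    n_up = oversample * len(x)
    Xp = np.zeros(n_up // 2 + 1, dtype=complex)
    Xp[: len(X)] = X                           # same spectrum, wider bandwidth
    y = np.fft.irfft(Xp, n=n_up) * oversample  # rescale for the longer transform
    return 20.0 * np.log10(np.max(np.abs(y)))
```

A sine at fs/4 whose samples straddle the crest reads about -3 dBFS on a plain sample-peak meter while its true peak is 0 dBFS, which is exactly why such meters must oversample.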

G


----------



## castleofargh

I was wondering about a few of those points. obviously any clipping check that doesn't oversample would be silly. but maybe the scan could be done on the oversampled signal, and then the gain or compression or whatever the VST does could be applied in a separate stream using the original signal instead of the oversampled one used for the scan?

as for VST hosts, I checked another one just now in case my assumptions were limited to the one I know best. and as I thought, the oversampling/downsampling, when done, seems internal to each plugin. they all output whatever I set the VST host to use and don't seem able to communicate another sample rate to the next VST in the chain. I was on 48kHz in the VST host settings and that's what I was reading out of each plugin.
on the other hand, I can reduce bit depth and read the new bit depth at the output of the plugin reducing it. but then the next plugin will typically turn it back to 32bit. not super interesting.
maybe more professional VST hosts, or at least DAWs, offer a little more versatility/coherence by letting a sample rate out of one VST and into the next even if it's not the default setting for the workspace? IDK.


----------



## Zapp_Fan

castleofargh said:


> I was wondering about a few of those points. obviously anything about clipping check that wouldn't oversample is silly. but maybe the scan could be done on oversampled signal and then only apply some gain or compression or whatever the VST does in a separate signal stream using the original signal instead of the oversampled one used for the scan?
> 
> as for VST host, I checked on another one just now in case my assumptions were limited to the one I know best. and as I thought, the oversampling/downsampling when done, seems internal to each plugin. they all output whatever I set the VST host to use and can't seem able to communicate another sample rate to the next VST in the chain. I was on 48khz in the VST host settings and that's what I was reading out of each plugin.
> on the other hand, I can reduce bit depth and read the new bit depth at the output of the plugin reducing it. but then the next plugin will typically turn it back to 32bit. not super interesting.
> maybe more professional VST hosts, or at least DAWs offer a little more versatility/coherence by letting a sample rate out of a vst and into the next even if it's not the default setting for the workspace? IDK.



My GUESS is they force the project/system-wide sample rate because some (most?) DAWs have wet/dry controls for each insert effect. If each VST didn't output the same sample rate, you'd have to mix two or more sample rates together; without resampling, I don't see how you could pass a mix of 44 and 192 to a single input further down the chain. However, I am not super-clever with software tricks, so it might be possible? Would be a question for the devs.
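Zapp_Fan's guess can be made concrete: a host-side wet/dry control sums the two streams sample-by-sample, which is only possible if every effect hands back audio at the host's rate. A hypothetical host helper (the names are illustrative, not any real DAW API):

```python
import numpy as np

def insert_effect(dry, effect, mix=0.5):
    """Apply an insert effect with a wet/dry control, as a host might."""
    wet = effect(dry)
    if len(wet) != len(dry):
        # Mixing streams at different sample rates is meaningless, so a host
        # must insist every plugin returns audio at the project rate.
        raise ValueError("effect must return audio at the host sample rate")
    return (1.0 - mix) * dry + mix * wet
```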


----------



## ironmine

gregorio said:


> Yes it does and it MUST!  You obviously don't know what True Peak means or how it's measured and defined. The ITU defines a TP measurement as at least a x4 oversampling process based on a 48kHz sample rate, although Izotope's own dedicated TP meter/loudness plugin oversamples x9, it's entirely likely that it's De-Clip plugin does too. So, I don't know DeClip is operating at 192kHz, it could be operating far higher but it's certainly not working at 44.1 or 88.2!!
> 
> G



Well, then you have to explain how the original flac file and the same flac file that has been put through the De-Clip plugin ended up being bit-identical. Is castleofargh's assumption correct?



castleofargh said:


> as for VST host, I checked on another one just now in case my assumptions were limited to the one I know best. and as I thought, the oversampling/downsampling when done, seems internal to each plugin. they all output whatever I set the VST host to use and can't seem able to communicate another sample rate to the next VST in the chain. I was on 48khz in the VST host settings and that's what I was reading out of each plugin.





alex_aiwa_USA said:


> My GUESS is they force the project/system-wide sample rate because some (most?) DAWs have wet/dry controls for each insert effect.  If each VST didn't output the same sample rate, you'd have to mix two or more sample rates together without resampling; I don't see how you can pass a mix of 44.1 and 192 to a single input further down the chain. However, I am not super-clever with software tricks, so it might be possible? Would be a question for the devs.



Yes, this is what I mean. Otherwise, VST chains such as the one I built below would not be technically possible. 
(Please don't jump at me if the logic of this specific VST chain does not make sense; I just show it as an illustration of my thoughts):





I inserted two instances of the same plugin.  One of them emulates 12AX7 and oversamples x2. The other one emulates 12AU7 and does not oversample. The outputs of these two plugins are summed at the input to the next plugin (Redline Monitor). 

So, I am sure that each plugin in the VST chain, regardless of how it resamples internally, must release data at its output at the sample rate the VST host is running at.
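As an illustration of that constraint, here is a toy Python model of an insert effect that oversamples internally but must hand audio back at the host's rate. The linear-interpolation upsampler and the `tanh` stage are stand-ins, not any real plugin's internals:

```python
import math

def plugin_process(block, factor=2):
    # Hypothetical insert effect. Internally it runs at host_rate * factor,
    # but the host only ever sees blocks at its own sample rate.
    up = []
    for i in range(len(block) - 1):          # crude 2x upsample by linear
        for k in range(factor):              # interpolation (a real plugin
            t = k / factor                   # would use a polyphase filter)
            up.append(block[i] * (1 - t) + block[i + 1] * t)
    up.append(block[-1])
    processed = [math.tanh(s) for s in up]   # nonlinear stage at the high rate
    return processed[::factor]               # decimate back to the host rate

host_block = [math.sin(2 * math.pi * 1000 * n / 48000) for n in range(512)]
out_block = plugin_process(host_block)
```

Whatever `factor` is, `out_block` comes back with one sample per input sample, which is why the next plugin in the chain only ever sees the host's sample rate.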

Here's a similar discussion: "Oversampling vs. Increased Sample Rate". I started reading it.
And this one too.


----------



## WoodyLuvr (Dec 16, 2017)

Could we discuss the how/why of the slider on Case's Meier Crossfeed DSP?  E.g. explain the scale of 0-100; what settings are best for certain genres; etc.


----------



## gregorio

castleofargh said:


> [1] I was wondering about a few of those points. obviously anything about clipping check that wouldn't oversample is silly. but maybe the scan could be done on oversampled signal and then only apply some gain or compression or whatever the VST does in a separate signal stream using the original signal instead of the oversampled one used for the scan?
> [2] as for VST host, I checked on another one just now in case my assumptions were limited to the one I know best. and as I thought, the oversampling/downsampling, when done, seems internal to each plugin. they all output whatever I set the VST host to use and don't seem able to communicate another sample rate to the next VST in the chain.



1. What RX DeClip is doing is taking a signal which has been clipped, the waveform peaks literally cut off by being slammed into the 0dBFS limit (for example), and then reconstructing those waveform peaks to what they would have been if there were no 0dBFS limit. It uses floating point math to do this because in floating point there is effectively no 0dBFS limit (with 32bit float the limit is about +1,520dBFS). I am presuming that you would need some degree of oversampling to achieve this reconstruction task; to make the thing work well you'd almost certainly need to calculate the inter-sample peaks (ISPs). Once this De-Clipping process is complete, you'd typically have an illegal but recoverable result (sample peaks above 0dBFS), so included in the plugin is a True Peak Limiter, also of course operating at 32bit float, to take those peaks back into the legal range and provide a dynamic range roughly equivalent to the input signal but without any clipping distortion (even from ISPs), hence the TP limiter rather than just a standard sample peak limiter. This TP Limiter absolutely must operate at very high sample rates (at least x4 oversampling) because it is specifically processing the ISPs to give a True Peak result. If it were not oversampling there would be no detectable or processable ISPs and it could only operate as a sample peak limiter, NOT a TP limiter. In other words, it's operating on ISPs at an oversampling rate, which defines it as a True Peak limiter in the first place. And, as mentioned previously, True Peak is defined by the ITU as at least x4 oversampling, and Izotope's TP detection/reporting in its dedicated metering plugin is done at x9 oversampling, so it's not unreasonable to assume its DeClip plugin is too, otherwise the TP level achieved in the DeClip plugin would not agree with the TP level reported by "Insight" (Izotope's dedicated metering plugin). 
Izotope is, AFAIK, the only plugin manufacturer which uses this x9 oversampling for its TP reporting; most do x8, some only do the required minimum and some provide the option to choose from x4 all the way up to x32.
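As a rough illustration of why detecting inter-sample peaks requires oversampling, here is a toy Python true-peak meter. The truncated sinc interpolation is a stand-in for the polyphase FIR a real BS.1770 meter would use; the x4 factor matches the ITU minimum:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def true_peak(samples, oversample=4):
    # Reconstruct the waveform between samples at `oversample`x density and
    # take the maximum absolute value, i.e. include inter-sample peaks.
    peak = 0.0
    for i in range(len(samples) * oversample):
        t = i / oversample
        v = sum(s * sinc(t - k) for k, s in enumerate(samples))
        peak = max(peak, abs(v))
    return peak

# A sine at fs/4 whose crests fall exactly between samples: every stored
# sample is ~0.707, but the reconstructed waveform peaks near 1.0.
samples = [math.sin(math.pi / 2 * (n + 0.5)) for n in range(64)]
sample_peak = max(abs(s) for s in samples)   # what a non-oversampling meter sees
tp = true_peak(samples)                      # what a true-peak meter sees
```

A plain sample-peak meter reports this signal roughly 3 dB below its real analogue crest, which is exactly the kind of error oversampled TP detection exists to catch.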

I just happen to know a fair bit about the Izotope RX suite of plugins because I use it frequently and have done since RX 2 Advanced was released about 7 years ago, and I'm now on RX 6 Advanced. I've also had correspondence with Alexey Lukin, the head of DSP coding for Izotope. Unlike most programmers for the big manufacturers, Alexey is quite open; most either don't want to talk about it or are legally constrained from doing so. However, "quite open" is a relative term: Alexey is willing to discuss general DSP principles in depth and go into *some* detail about what the Izotope plugins are doing under the hood, but there's ALWAYS a limit. On the one hand he wants to satisfy pro users' desire to know what's actually going on, and on the other hand he doesn't want to give any clues which might help competitors or potential competitors achieve similar performance more cheaply. It's conceivable that the DeClip plugin actually operates at two different sample rates. For example, x2 oversampling for the DeClipping and then x4 (or higher) for the TP limiting section. In this case though, I would expect it to always oversample x2 in the DeClip section, i.e. feed it 44.1 and the DeClip section would oversample to 88.2; feed it 88.2 and it would oversample to 176.4. I think it's moderately unlikely it's using two different sample rates though, and I think it's at least moderately unlikely Alexey would say if asked.

2. I'm not an expert or even particularly well informed about VST hosts, but the basic principles of digital audio processing environments are much the same across the board. I don't know of any hosts or DAW environments which allow output from a plugin at a sample rate or bit depth different to the environment. A plugin is an independent DSP program which can do pretty much anything it wants but is constrained to interfacing its input and output with the host environment. The host environment doesn't know or care what the plugin is doing, as long as it gets what it expects in terms of format and whatever other stipulations the original host environment owners/controllers demand. So internal bit depths and sample rates can be anything the plugin developer wants, as long as it can handle the data the host environment gives it and return its processed data in the format the environment expects. In practice, what's actually going on is all over the place and always has been. Software environments which allowed 3rd party plugins started really taking off about 20 years ago with Pro Tools TDM. Within a couple of years of TDM's release we started seeing "double precision" plugins, plugins operating internally at 48bit fixed but outputting to the environment's 24bit requirement, and by about 2002 plugins which benefited from this double bit depth accounted for a significant number of the most used plugins, often without the operator (engineer) knowing anything about it. So already, 15 years ago, we had the situation where the bit depth was all over the place: 24bit for recording, 8, 24 or 48bit for processing, back to 24bit again, then to the internal TDM summing mixer operating at 56bit, and finally recorded ("bounced down") at 24bit or 16bit. 
Today the situation is basically the same, although with different bit depths: still 24bit for recording, 32bit float data paths, 64bit internal plugin processing, back to 32bit float, then summed at 64bit and finally bounced down to 24bit or 16bit. The situation of plugin internal sample rates was more static for longer, as there's very significantly more processing overhead with higher sample rates relative to the processing overhead of doubling the bit depth (to say 64bit, as modern CPUs have an architecture designed and optimised for 64bit word lengths). So it wasn't until about 2005 that I recall starting to see plugins internally up/down sample; they simply operated at the session sample rate, and it took several more years before it was common. By about 2009 or so, the only time I needed to run a session at 88.2 or 96kHz to force a plugin to operate at an audibly superior sample rate was for a small number of soft-synths; nearly all the other plugins up/down sampled internally when required, with no audible benefit to feeding them native 88.2 or 96kHz. 



ironmine said:


> Well, then you have to explain how the original flac file and the same flac file that has been put through the De-Clip plugin ended up being bit-identical. Is Castle of Fargh's assumption correct?



Castleofargh's assumption about the fixed environment sample rate is correct, AFAIK. I can think of 3 or 4 potential explanations off the top of my head for your observation. However, none of them are that RX DeClip is not oversampling, because that is an impossibility by definition of a True Peak limiter!

G


----------



## ironmine

WoodyLuvr said:


> Could we discuss the how/why of the slider on Case's Meier Crossfeed DSP?  E.g. explain the scale of 0-100; what settings are best for certain genres; etc.



I don't think one can find a setting in Meier that would sound best with all albums.  As you move the slider more and more to the right, it will take more and more "buzzing bees" out of your ears and your perception of the music coming from the sides will improve, but at the cost of losing detail in the center. It's a trade-off.


----------



## WoodyLuvr

ironmine said:


> I don't think one can find a setting in Meier that would sound best with all albums.  As you move the slider more and more to the right, it will take more and more "buzzing bees" out of your ears and your perception of the music coming from the sides will improve, but at the cost of losing detail in the center. It's a trade-off.


Thank you for the explanation.  Between you and @Strangelove424 I have a better idea what to listen for when I am adjusting the slider.  Any other helpful listening cues would be most appreciated.


----------



## 71 dB

I'd say the main principle with crossfeed is not to pay attention to how _much_ the sound changes, but to whether the sound becomes more natural and pleasant to listen to. Don't assume the original sound is anywhere near perfect for headphones. It rarely is, since almost all recordings are mixed for speakers. The original sound should not be the mental "starting point", but one listening option competing against crossfed versions.

Trial and error works. Recordings with a consistent "spatiality" are easy, but some recordings have a contradictory spatial signature, and finding the optimal level of crossfeed can be a challenge.

The only kind of crossfeeder that seems to work with "all recordings" using a fixed crossfeed level (-3 dB or so) is the "widefeeder", a variation of Linkwitz-Cmoy with ~640 µs ITD. This is because for sounds coming from the side, -3 dB ILD at low frequencies is enough*, so you don't need stronger crossfeed for anything, and the large ITD is enough to keep the sound image wide for recordings which don't have that much excessive stereo separation to begin with. Room + speakers is also an ILD "regulator" which gives you almost the same amount of ILD (at least below 1 kHz) no matter what kind of recording you are playing.


* If the ITD is smaller, such as the typical 250 µs, stronger crossfeed (up to -1 dB) may be needed, because our hearing expects smaller ILD on sounds coming from ahead rather than from the sides.
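For reference, the basic Linkwitz/Cmoy-style mechanism being described (a delayed, lowpass-filtered, attenuated copy of the opposite channel added to each side) can be sketched in Python. The delay, attenuation and cutoff values below are illustrative, not anyone's canonical tuning:

```python
import math

def crossfeed(left, right, sr=44100, itd_us=640, atten_db=-3.0, cutoff_hz=700):
    # Each output channel = its own input + a delayed, lowpassed,
    # attenuated copy of the opposite channel.
    delay = max(1, round(itd_us * 1e-6 * sr))     # ITD as a sample delay (~28)
    gain = 10 ** (atten_db / 20)                  # ILD as a linear gain
    a = math.exp(-2 * math.pi * cutoff_hz / sr)   # one-pole lowpass coefficient

    def feed(channel):                            # lowpass, then delay
        y, line, out = 0.0, [0.0] * delay, []
        for s in channel:
            y = (1 - a) * s + a * y
            line.append(y)
            out.append(line.pop(0))
        return out

    from_r, from_l = feed(right), feed(left)
    out_l = [d + gain * c for d, c in zip(left, from_r)]
    out_r = [d + gain * c for d, c in zip(right, from_l)]
    return out_l, out_r

# Hard-panned click: only the left channel has signal.
out_l, out_r = crossfeed([1.0] + [0.0] * 99, [0.0] * 100)
```

After the delay, the right ear starts hearing an attenuated, lowpassed copy of the left channel, which is the leakage crossfeed is meant to supply; the direct signal passes through untouched.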


----------



## bigshot

I don't know much about headphones and even less about cross feed, but I know there's a DSP that rechannels stereo to 5.1 for my speaker system that works with any stereo music. (Not so great for movies if you want the dialogue to be focused behind the screen.) So I would think that a one size fits all DSP for headphone listening should be possible. It would probably involve more than just cross feed though.


----------



## 71 dB

bigshot said:


> I don't know much about headphones



Well, considering that you have made over 16 thousand posts on this board, dedicated for the most part to headphones, I might say you are either extremely modest about your knowledge or a really slow learner. 



bigshot said:


> and even less about cross feed, but I know there's a DSP that rechannels stereo to 5.1 for my speaker system that works with any stereo music. (Not so great for movies if you want the dialogue to be focused behind the screen.) So I would think that a one size fits all DSP for headphone listening should be possible. It would probably involve more than just cross feed though.



Speakers are a bit easier in this sense because room acoustics will always polish out the most problematic spatial aspects of the sound. With headphones you need to get it right, because the eardrums are an inch or two away!


----------



## bigshot

I used to be interested in headphones, but since I moved out of an apartment into my own home, I haven't had a lot of interest in them. I got a set I like and I use them with my computer. That is about it.


----------



## WoodyLuvr

Sacrilege!  Burn the witch!  Burn 'em all!


----------



## castleofargh

a real house without direct neighbors to try and kill you when you use speakers for too long? lucky!!!!!


> Wade Wilson: I watched my own birthday party through the keyhole of a locked closet, which also happened to be my...
> 
> Vanessa Carlysle: Your bedroom. Lucky. I slept in a dishwasher box.
> 
> Wade Wilson: [Gasps] You had a dishwasher....




a one-fits-all solution would still need enough customization to fit all people and most headphone signatures. the delay alone should, in the most basic concept of Xfeed, be based on sound sources at a 30° angle on each side (to match stereo speaker position), and how long the left speaker's sound takes to reach the right ear after reaching the left ear. meaning it's directly related to the size of the listener's head. 
and the same thinking goes for ILD cues. so we will always need some manner of customization, and it's at least one reason why some Xfeed plugins happen to please some people and not others.
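To put rough numbers on that head-size dependence, Woodworth's classic spherical-head approximation ties the interaural time difference to head radius and source azimuth. The 8.75 cm radius below is just a commonly quoted average, so treat the figures as ballpark:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    # Woodworth spherical-head model: ITD = (r / c) * (theta + sin(theta)),
    # with theta the source azimuth in radians and c the speed of sound.
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

itd_30 = itd_seconds(30)   # a speaker at the usual stereo 30 degrees
itd_90 = itd_seconds(90)   # a source fully to one side
```

This gives roughly 260 µs at 30° and roughly 650 µs at 90°, in the same neighbourhood as the 250 µs and ~640 µs figures quoted earlier in the thread; a bigger or smaller head shifts both values, which is one reason a single fixed crossfeed delay can't suit everyone.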


----------



## ironmine

71 dB said:


> Well, considering that you have made over 16 thousand posts on this board dedicated for the most part for headphones



Average 3.4 messages per day, for more than 13 years.
Wow.  I am impressed. I would call it a full time job!


----------



## bigshot

Plus I was banned for over a year for pointing out an emperor's nakedness! My stats slipped during the time I was on the bench.


----------



## ironmine

Does anybody use a reverb plugin before/after a crossfeed plugin?
Do we need to use one?
If you do, which one is it and what settings do you prefer?


----------



## castleofargh

I spent some time fooling around with the free version of Reverberate. and I preferred to have it before the xfeed (after felt more natural for the reverb itself but not for the Xfeed). and last year I used the app that came with the Waves NX head tracker, it had some reverb along with a bunch of other things and I enjoyed it for a few months, then didn't (dunno why). nowadays I don't use any on the PC but I might come back to it anytime. 
 I still have some in Viper4android on my phone.


----------



## Alexey Lukin

ironmine said:


> So, why did you say that you know the RX De-Clip upsamples? It does not.


RX De-clip's post-limiter does upsample the signal, but only in the side-chain, for level detection. The main signal path is kept at the original sampling rate, both for clipping restoration and post-limiting.
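That split, oversampled detection feeding a gain that is then applied to the untouched original-rate path, can be sketched as a toy Python limiter. This is only a structural sketch (a single static gain, a truncated-sinc side-chain), not iZotope's actual implementation:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def limit_static(samples, ceiling=0.9, oversample=4):
    # Side-chain: detect the peak on an oversampled reconstruction so
    # that inter-sample peaks are counted.
    peak = 0.0
    for i in range(len(samples) * oversample):
        t = i / oversample
        v = sum(s * sinc(t - k) for k, s in enumerate(samples))
        peak = max(peak, abs(v))
    # Main path: apply the resulting gain to the ORIGINAL-rate signal;
    # nothing in the audible path is ever resampled.
    gain = min(1.0, ceiling / peak) if peak > 0 else 1.0
    return [s * gain for s in samples]

# A signal whose stored samples are ~0.707 but whose reconstructed
# crest is ~1.0, so only an oversampled detector triggers the limiting.
clipped_risk = [math.sin(math.pi / 2 * (n + 0.5)) for n in range(64)]
limited = limit_static(clipped_risk)
```

A real limiter would use a windowed polyphase FIR and a time-varying gain envelope, but the point stands: the detector can run at any internal rate while the output stays at the input's sample rate.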



gregorio said:


> The ITU defines a TP measurement as at least a x4 oversampling process based on a 48kHz sample rate, although Izotope's own dedicated TP meter/loudness plugin oversamples x9, and it's entirely likely that its De-Clip plugin does too.


This is correct: iZotope's true peak detection is usually based on 9× oversampling (it may depend on the product). One property of ITU's recommended 4× oversampling filter is that it has a ±0.1 dB passband ripple, which may slightly over- and underestimate “real-world” peak levels, esp. near the Nyquist rate (see the pic below). iZotope's upsampler has a flat passband, so its true peak indications may slightly differ from other true peak meters, while still being BS.1770 compliant.






_BS.1770 true peak detection filter_


----------



## ironmine

Alexey Lukin said:


> RX De-clip's post-limiter does upsample the signal, but only in the side-chain, for level detection. The main signal path is kept at the original sampling rate, both for clipping restoration and post-limiting.



Hi Alexey!

It's great to see you here!


----------



## jgazal (Dec 22, 2017)

Erik Garci said:


> Great explanation. It gets even more interesting when you consider the Hafler PRIR. Basically you hear the sum from the center-front speaker (L+R to both ears), and you hear the differences from the center-back speaker (L-R to left ear, and R-L to right ear).
> 
> I made a Hafler PRIR where the center-back speaker was actually measured in front, but I turned my head in the opposite direction. I looked right instead of left and looked left instead of right. This way, the center-back speaker has the same spectral balance as the center-front speaker, and head-tracking helps me distinguish which sounds are from the front versus the back. Maybe Smyth can add a Hafler mode that works for any PRIR that has a center speaker or a closely-spaced pair.





Erik Garci said:


> Some live recordings sound great with crowd noise and hall reverb that come from the back. Recordings that were matrix-encoded sound great as well, and you might not realize which ones until you listen to them this way.
> 
> In addition, for 4.0 or 5.1 recordings, the effect can be flipped for the two discrete surround channels, Ls and Rs. Basically you hear the sum from the center-back speaker (Ls+Rs), and you hear the differences from the center-front speaker (Ls-Rs to left ear, and Rs-Ls to right ear).



Erik, please excuse me for bothering you again.

Have you tried to measure a one-real-speaker crosstalk-free PRIR with four virtual speakers (Front-Left and Front-Right at 0 degrees, and Back-Left and Back-Right at 180 degrees azimuth), using the same approach you described, i.e., the Back-Left and Back-Right virtual speakers are actually measured in front, but with your head turned in the opposite direction? Ipsilateral speakers at full mix.

On second thought, maybe just one left speaker at -90 degrees and one right speaker at +90 degrees?

Do you believe binaural (or even stereo) recordings may benefit from such arrangements?


----------



## Erik Garci

jgazal said:


> Have you tried to measure a one-real-speaker crosstalk-free PRIR with four virtual speakers (Front-Left and Front-Right at 0 degrees, and Back-Left and Back-Right at 180 degrees azimuth), using the same approach you described, i.e., the Back-Left and Back-Right virtual speakers are actually measured in front, but with your head turned in the opposite direction? Ipsilateral speakers at full mix.


I have measured it two ways. One PRIR was measured with the rear speaker behind (180 degrees) and normal head-tracking. Another PRIR was measured with the rear speaker in front (0 degrees) and reversed head-tracking.


jgazal said:


> On second thought, maybe just one left speaker at -90 degrees and one right speaker at +90 degrees?


Speakers at +/-90 degrees would not work well for head-tracking.


jgazal said:


> Do you believe binaural (or even stereo) recordings may benefit from such arrangements?


The Hafler method works well for 2-channel sources. I have not tried other methods and cannot think of any that would work better.


----------



## Jon Sonne (Dec 24, 2017)

ironmine said:


> Average 3.4 messages per day, for more than 13 years.
> Wow.  I am impressed. I would call it a full time job!



*castleofargh, bigshot and 71 dB* - I follow you guys, and I really appreciate your contributions to the sound science forum! @ironmine - I've just installed 112dB Redline Monitor, and I am really enjoying it with my new HD800S. The xfeed solved the problem of electric guitars sounding diffuse on the HD800S; now they sound powerful and crisp! I use the 90 degrees soundstage width and -1 dB phantom center. The speaker distance is turned off. In addition, I've added an EQ after the xfeed with a -5 dB peak at 5500 Hz and a -1 dB cut from 10 kHz to 20 kHz centered at 13 kHz. This works perfectly to remove the coloration in the treble. Thanks for recommending this wonderful VST plugin!

*Merry xfeedmas!*


----------



## WoodyLuvr

@Jon Sonne  FYI: It was @ironmine that recommended 112dB Redline Monitor


----------



## 71 dB

Thanks, *Jon Sonne!* So nice to hear somebody appreciating the stuff I write. 
I don't know if there is much else I can contribute, since I have said pretty much all I know about crossfeed, but there is always more to learn…

* Merry xFEEDmas to everyone! *


----------



## ironmine

Jon Sonne said:


> I've just installed 112dB redline monitor, and I am really enjoying it with my new HD800S.




I am glad that you like Redline Monitor.


----------



## 71 dB

*Spatial Wars*

*Episode I: Birth of Recorded Sound*
At the end of the 19th century, technical means to record sound are invented. The sound quality is very bad and monophonic, but nevertheless selling recorded music becomes a profitable business in the beginning of the 20th century. The biggest problems are poor frequency response and range, poor dynamic range, strong distortion and short playing time. People don't think about spatiality. A quarter into the new century, electric recording is invented. Stereophonic "live" sound had been demonstrated as early as 1881, but it took half a century to get recorded stereophonic sound, in the 1930s.

*Episode II: Stereophonic sound*
At the end of the 1950s, stereophonic sound finally makes its way to the consumer market. In order to sell people the wonder of stereo sound, the recordings tend to have very strong channel separation, "ping pong" stereophony. Most of the time people listened to these recordings with stereo loudspeakers, and even if the sound image was far from the potential of stereophony, it worked and people were excited. Sound engineers adopted an artistic style of wide stereo image created by hard amplitude panning. This style fundamentally contradicted the principles of human hearing. It is the dark side of spatiality. Easy and quick compared to real spatiality, which is based on the proper combination of ITD, ILD and other spatial cues. Sound reproduction in general had many other problems at that time, so hardly anyone paid attention to spatial problems. So, people were seduced into the dark side of spatiality. The balance of the sound (between the ears) was suddenly flawed after thousands of years of natural sounds and recorded monophonic sound.

*Episode III: Crossfeed*
Some smart people understood that excessive stereo separation is totally unnatural when listened to with headphones. People had been seduced to the dark side of spatiality by popular music and whatnot containing excessive stereophony, spatial distortion. Perhaps it was the loudspeaker manufacturers behind this evil plot to make headphone listening unnatural and tiring? Perhaps not. Perhaps it was simply a lack of wisdom and knowledge of the ways of human hearing. Anyway, wise people developed crossfeed to address the problem. In respect of reducing excessive channel separation it was a success, but most people were seduced too strongly to the dark side of spatiality. It was not easy to bring them back, perhaps not even possible. Many of them didn't want to hear their beloved ping pong recordings _without_ spatial distortion. Some other people did trust the wisdom of crossfeeders and were more open minded. They were able to hear the benefits of more natural spatiality. After decades the debate remains and the spatial war continues.


----------



## WoodyLuvr

*Spatula Wars?*


----------



## bigshot (Dec 30, 2017)

71 dB said:


> *Episode I: Birth of Recorded Sound*
> At the end of the 19th century, technical means to record sound are invented. The sound quality is very bad and monophonic, but nevertheless selling recorded music becomes a profitable business in the beginning of the 20th century. The biggest problems are poor frequency response and range, poor dynamic range, strong distortion and short playing time. People don't think about spatiality. A quarter into the new century, electric recording is invented. Stereophonic "live" sound had been demonstrated as early as 1881, but it took half a century to get recorded stereophonic sound, in the 1930s.



This isn't really an accurate description of acoustic recording and playback. Horn recording stripped off room acoustics, since the horn could only hear to a distance of 15 to 20 feet. But phonograph instruction manuals told consumers to put their phonograph in the corner of a live-sounding room to add natural spatial acoustics to the dry recording. The acoustic recording process captured clear and present sound with human voices. That's why singers like Caruso were so popular in recordings. With a loud-tone needle, the dynamic range was incredibly wide. There are certain records that go from a natural whisper all the way up to the point where your ears ring. You don't hear this with electrical transcription, only with acoustic sound boxes. The various record companies manufactured the records to suit the capabilities of the machine, and vice versa.

When you play back a Victor Caruso record on a good quality acoustic Victrola, the playback horn acts as an exact mirror of the recording horn, projecting the image of the singer about five or six feet in front of the phonograph. It's like an aural hologram. The effect is uncanny, making the hairs on the back of your neck stand up. The sound quality is definitely not bad. There was distortion, and surface noise, but it was organic sounding and attenuated by the diaphragm, not irritating like electronic noise at all. Until the mid 20s, all phonographs were acoustic, and phonograph technology advanced greatly until 1930. Victor introduced Orthophonic records in 1924, which were electrical recordings designed to be played on a machine with an exponential horn. The sound quality of this kind of acoustic playback was excellent. When you hear a record being played this way, you find yourself looking for the plug, but there isn't one. The exponential horn extends the low end response and projects the sound out, filling the room. It is quite definitely spatial, because the horn and room lend spatial presence to it.

Most people have no idea how sophisticated recording was in the acoustic era. That's because all they have heard are tinny-sounding CDs that are transferred flat. They were never intended to be played that way. People in the teens and 20s used acoustic phonographs that were designed to decode and enhance the sound cut into the shellac, and they placed the phonographs in a live room that added spatial presence and character to the sound. The idea was to play back natural sound, not necessarily record natural sound. That theory sounds backwards to us today, but it actually worked very well.

And of course Episode IV is multichannel audio. There was a record label that recorded the Rolls-Royce of phonographs on the stage of a theater with great acoustics. It was encoded in matrixed multichannel sound and I'm told the results were amazing. I've never heard any of these because the matrix format is a dinosaur today, but the only way to really capture the sound of an acoustic phonograph at its best would be to do it in multichannel.


----------



## pinnahertz

71 dB said:


> *Spatial Wars*
> 
> *Episode I: Birth of Recorded Sound*
> At the end of the 19th century, technical means to record sound are invented. The sound quality is very bad and monophonic, but nevertheless selling recorded music becomes a profitable business in the beginning of the 20th century. The biggest problems are poor frequency response and range, poor dynamic range, strong distortion and short playing time. *People don't think about spatiality.* A quarter into the new century, electric recording is invented. Stereophonic "live" sound had been demonstrated as early as 1881, but it took half a century to get recorded stereophonic sound, in the 1930s.
> ...


Your "Episodes" are pretty far off, and your perspective of "Spatial Awareness" is entirely limited to the small-ish world of recorded sound, and not even accurate for that.  In fact, spatial awareness, and even music composed specifically for sound source and direction, predates recorded sound significantly. 

Directional cues had already been written into antiphonal "call and response" church music of the late 1500s and early 1600s, showing that composers such as Giovanni Gabrieli had full spatial awareness in composition, and expected the performances of his music, which included choruses located behind the audience, to expose his audiences to such spatial stimulus.    In churches of the time it was not uncommon to have as many as four or more organ chambers, left, right, front, back, and even "swallows-nest" chambers located high overhead.

In 1830, Hector Berlioz first performed the now audiophile staple, Symphonie Fantastique, which included, in the score, directions for off-stage oboe parts, establishing the "depth" dimension.  A few years later in his "Requiem" he specified four orchestras in separate locations. 

In 1933 the Bell Labs stereo experiments established that multiple channels were required for accurate spatial representation, and they even specified that an array of an "infinite number of microphones" that fed a corresponding array of "an infinite number of loudspeakers" could replicate the full spatial experience.  That's 1933!  And NOT recorded; their experiments were with live music transmitted over phone lines equalized flat to 15kHz.  You may also recall that they determined that the absolute minimum number of speakers/channels required for accurate representation was 3: L, C, R.  The outgrowth of those concerts and experiments was a relationship between Stokowski (who actually was involved in the experiments) and Disney, leading to Fantasia, produced and presented in multichannel sound, something Disney himself cooked up by adding surround speakers. The film was released in 1940. 

To state that people didn't think about spatiality at the birth of recorded sound is simply ridiculous.   Live music clearly already contained, and was composed specifically for spatial awareness, and Bell's experiments and the entrance of surround sound in film indicates that composers and media creators were already aware that their audience could appreciate spatial effects, something that is inherent in human hearing, and predates even Gabrielli.  The fact that recorded media wasn't made available to consumers until 1950 has nothing whatever to do with anyone's spatial awareness!

In your "Episode II", you inaccurately represent audio engineering of recording as fully hard-panned, widely separated stereo recordings as the "norm".  Wrong.  Sure, there are those recordings, and more of them from that time than later.  Certainly there were more than a few stereo "demo" recordings that shoved wide channel separation in the face of consumers to sell the concept.  But orchestral and classical music was not recorded that way, and many other recordings were also not.  Again, the resulting base of recorded music is not an indication of "spatial awareness" as much as a necessity in selling a new concept: make it obviously different. 

And now you come to "Episode III", and this gem: "_*Some smart people*_ understood that excessive stereo separation is totally unnatural when listened to with headphones. *People had been seduced to the dark side of spatiality* with popular music and what not containing excessive stereophony, spatial distortion."

I'm not exactly sure what it is you're saying here, but based on your many other posts like this, I'd have to assume it is that only "smart people" understand that excessive stereo separation is wrong, and everyone else is stupid, and seduced to the dark side.  Seriously?  Have you no idea how offensive that statement is?  And after everything we've argued about pages ago????

You have continually defined what is right and wrong in recording with _your own opinions that ignore any and all creative and subjective opinions of anyone else._  I'm pretty sure the _"98% of all recordings need crossfeed"_ fabricated statistic is just a post or two away from its ugly reappearance.   It's a complete exaggeration, and how could it be otherwise?  The application of a rudimentary generalized crossfeed system must be varied according to the recording itself, and that requires...guess what?....subjective judgement....yes, opinion!  And THAT means, your 98% stat is wrong.   

And what have we argued about?  NOT that crossfeed is universally wrong.  NOT that it doesn't provide a benefit.  NOT that it shouldn't be tried in all its forms.  It's the presentation of personal opinion as immutable fact that continues to be the problem here!  Some may disagree with your fabricated terminology, statistics, and value of crossfeed, but if they do, they WILL be termed "stupid" or "seduced" by you!  Do you have any idea how offensive THAT is?  We've talked about the concept of grays, not blacks and whites.  We've talked about the fatal flaws of rudimentary crossfeed, and that "crossfeed" itself doesn't even define a single thing, that its application must be variable from zero on up to some arbitrary 100% point, that HRTF is important, yet not uniform, and on and on. 

It's all been done, and done to death, right here in these pages. 

If everyone is so spatially unaware, or should I say "unenlightened", then why have we had the independent development of 3 multichannel immersive audio systems in the past few years?  And, conversely, if crossfeed is so critical to life itself, where is it in the commercial market?  It's so simple it should be on everything, but it's not.

Until we all live in a police state that requires every headphone listener to employ your specific brand of crossfeed on 98% of all recordings, I'm going to take exception with your sweeping generalities, fabrications, inventions of concepts and terminology, and condemnations of anyone who has a different opinion.   I believe there's room for crossfeed, and room for not using any, depending on the recording, and the listener's preference.  I do NOT accept that rudimentary crossfeed is required for all recordings or our brains will be reduced to inert gelatinous masses.  I do NOT accept that any simple crossfeed is better than none, or that it is impractical to do something better.  

I also have absolutely no doubt that the above labels me a complete idiot by some here.  I would suggest neither idiocy nor fanaticism are a very long hike from each other....or from insanity.


----------



## jgazal

pinnahertz said:


> Your "Episodes" are pretty far off, and your perspective of "Spatial Awareness" is entirely limited to the small-ish world of recorded sound, and not even accurate for that.  In fact, spatial awareness, and even music composed specifically for sound source and direction, predates recorded sound significantly.
> 
> Directional cues had already been written into antiphonal "call and response" church music of the late 1500's and early 1600's, showing composers such as Giovanni Gabrielli had full spatial awareness in composition, and expected the performances of his music, which included choruses located behind the audience, to expose his audiences to such spatial stimulus.    In churches of the time it was not uncommon to have as many as four or more organ chambers, left, right, front, back, and even "swallows-nest" chambers located high overhead.
> 
> ...



Thank you for sharing that!


----------



## 71 dB

Excessive stereo separation at the ears requires the lack of environmental acoustics AND very close sound sources. Giovanni Gabrielli and Hector Berlioz did not suffer from such conditions. Nobody needs to be aware of excessive separation if it doesn't exist and in the past it didn't. Music was always heard in a way that involved acoustic crossfeed. When you put headphones on your head you can create a situation where acoustic crossfeed doesn't exist or is too weak. That's when the problems theoretically start.


----------



## pinnahertz

71 dB said:


> Excessive stereo separation at the ears requires the lack of environmental acoustics AND very close sound sources. Giovanni Gabrielli and Hector Berlioz did not suffer from such conditions. Nobody needs to be aware of excessive separation if it doesn't exist and in the past it didn't. Music was always heard in a way that involved acoustic crossfeed. When you put headphones on your head you can create a situation where acoustic crossfeed doesn't exist or is too weak. That's when the problems theoretically start.


But you imply that with the onset of widely separated stereo mixes in headphones, people suddenly became spatially unaware.  "_Sound reproduction in general had many other problems at that time, so hardly anyone paid attention to spatial problems. So, people were seduced into the dark side of spatiality."_  Not true.  The early highly separated stereo mixes were purposeful, intentional, deliberate, and achieved their goal, which had nothing whatever to do with a realistic spatial presentation in headphones!  You're simply misapplying the medium, then complaining about how evil it is when listened to incorrectly.  I'm surprised you aren't equally bitching about mono compatibility.  Play some stereo recordings in mono, and it's all wrong again in a different vector.  There's a method of remediation for that too, but no mention, huh? 

And not true there is such a thing as the "dark side" of spatiality, with all of those serious negative connotations.  That's your spin. 

You also stated outright that there was no spatial awareness in the early days of mono recording, "People don't think about spatiality." Yes, they did.  They were acutely aware of it, but had no practical means to present anything other than mono.  There's a difference between not being aware of something and simply not having the tools to deal with it. 

There is no such thing as acoustic cross-feed in live acoustic music.  The fact that both ears hear variants of the same sound source is NOT cross-feed, it's spatial hearing.  

When you put headphones on the experience may result in an artificial representation of the intent of the creators, who built the mix to be presented in an acoustic environment.  Or not, or anywhere in between.  It depends on the recording.  The theoretical problem is not universal. 

Please keep in mind that the application of cross-feed is not the correction of a problem (it can't be, as you can't correct for something that is not known in detail, and without knowing the creator's intent), it's a mitigation of a subjectively perceived problem that must be regulated based on subjective opinion relative to specific recordings.


----------



## 71 dB

pinnahertz said:


> But you imply that with the onset of widely separated stereo mixes in headphones, people suddenly became spatially unaware.  "_Sound reproduction in general had many other problems at that time, so hardly anyone paid attention to spatial problems. So, people were seduced into the dark side of spatiality."_  Not true.  The early highly separated stereo mixes were purposeful, intentional, deliberate, and achieved their goal, which had nothing whatever to do with a realistic spatial presentation in headphones!  You're simply misapplying the medium, then complaining about how evil it is when listened to incorrectly.  I'm surprised you aren't equally bitching about mono compatibility.  Play some stereo recordings in mono, and it's all wrong again in a different vector.  There's a method of remediation for that too, but no mention, huh?



When you hear excessive separation for the first time you can be seduced to the dark side. 
I have developed "vivid mono" for making stereo into mono. 



pinnahertz said:


> And not true there is such a thing as the "dark side" of spatiality, with all of those serious negative connotations.  That's your spin.
> 
> You also stated outright that there was no spatial awareness in the early days of mono recording, "People don't think about spatiality." Yes, they did.  They were acutely aware of it, but had no practical means to present anything other than mono.  There's a difference between not being aware of something and simply not having the tools to deal with it.
> 
> ...



The problem is too much ILD (more than 5-6 dB at low frequencies) and ITD (> 640 µs). Applying crossfeed reduces these values. So, it is a correction to a problem.
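
To put rough numbers on that claim, here is a toy sketch in plain Python (my own illustration, not code from any actual crossfeed plugin; the -12 dB mix level is simply the attenuation figure quoted earlier in this thread). It shows how feeding the opposite channel in at -12 dB collapses a hard-panned track's effectively infinite ILD down to 12 dB:

```python
import math

def rms(x):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def ild_db(left, right):
    """Interaural level difference between the channels, in dB."""
    return 20 * math.log10(rms(left) / rms(right))

def simple_crossfeed(left, right, mix_db=-12.0):
    """Feed each channel into the other at mix_db attenuation.
    Real crossfeeders also low-pass and delay the fed signal;
    this sketch keeps only the level term, which is what sets ILD."""
    g = 10 ** (mix_db / 20)
    out_l = [l + g * r for l, r in zip(left, right)]
    out_r = [r + g * l for l, r in zip(left, right)]
    return out_l, out_r

# A hard-panned tone: all of the signal in the left channel.
n = 4800
tone = [math.sin(2 * math.pi * 100 * i / 48000) for i in range(n)]
silence = [0.0] * n

l2, r2 = simple_crossfeed(tone, silence)
# Before crossfeed the right channel is silent (ILD effectively
# infinite); after it, the ILD equals the 12 dB mixing attenuation.
print(round(ild_db(l2, r2), 1))  # → 12.0
```

Whether 12 dB of residual ILD is "natural enough" is, of course, exactly the subjective question being argued in this thread.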


----------



## bigshot

I think it's a lot more beneficial to just get a good speaker system.


----------



## WoodyLuvr

bigshot said:


> I think it's a lot more beneficial to just get a good speaker system.


Beneficial to/for whom?  Really depends on the end user and their particular listening environment.  In my case, there is no speaker system in the world that would sound as good and/or work as well as a pair of headphones.  All relative.


----------



## pinnahertz

71 dB said:


> When you hear excessive separation for the first time you can be seduced to the dark side.


_What_ "dark side"?  Am I to assume you have once again created your own definition, albeit borrowed from a certain other creative work?  


71 dB said:


> I have developed "vivid mono" for making stereo into mono.


I cannot WAIT to hear what kind of nonsense that is! 


71 dB said:


> The problem is too much ILD (more than 5-6 dB at low frequencies) and ITD (> 640 µs).


None of that defines creative intent.


71 dB said:


> Applying crossfeed reduces these values. So, it is a correction to a problem.


A "reduction" is not a "correction".  A non-specific cross-feed cannot correct for an undefined problem.  Your figures do not define a problem as they do not include creative intent.

The problem is evaluated as such subjectively by each individual listener who may determine what, if any, cross-feed makes a subjective improvement to his experience.  What you evaluate subjectively as a horrific problem, someone else might evaluate as just fine, or anywhere in between.  But nobody would be "correct" in their evaluation except for the actual creator/artist.  And since you don't have them as a reference, your analytical application of a pseudo-correction would be a fabrication, a false correction of an undefined problem.  Don't get me wrong, your "correction" may in fact be perfect for you, but don't force it on anyone else.  The entire matter is subjective, there is no right/wrong. 

And no "dark side".  That is another of your offensive labels.  What if someone happens to like a recording, one of your 98%, without any crossfeed? 

I can't quite say how offensive it is for someone to present their subjective opinions as a religious tenet to which, if you differ, you are condemned.  

If you want to win people to your side, present evidence that your process is an improvement.  Don't bother with your theories, or attempting to justify your position by fabricating your own definitions, statistics or terminology.  Just present actual examples to listen to, and let others form their own opinions.  That's something that has yet to be done.


----------



## bigshot (Jan 1, 2018)

WoodyLuvr said:


> Beneficial to/for whom?  Really depends on the end user and their particular listening environment.  In my case, there is no speaker system in the world that would sound good and/or work as good as a pair of headphones.  All relative.



Practicality, perhaps. Sound quality all things being equal, no. The best speaker setups trounce the best headphone setups. If you live in an apartment with grouchy neighbors, you have to make do though.


----------



## castleofargh

different variables are better on each option. but when albums are mixed/mastered using speakers in a room, the expected playback system should involve speakers in a room. IMO everything else comes second to that. but ultimately, with different mixing/mastering habits, or with better than just crossfeed(real use of complete individual HRTF), then I agree that we'd achieve better fidelity with headphones. and maybe a subwoofer for the body shake^_^.


----------



## 71 dB

pinnahertz said:


> I cannot WAIT to hear what kind of nonsense that is!


Normal mono is just L+R and L-R gets lost. This makes the mono version sound "dull" compared to the original stereo version. Vivid mono sums L-R, high-pass filtered (1000 Hz) and delayed 0.5 ms, to L+R. This retains about 20 % of the vividness of stereo and a lot of L-R information. Better than nothing. The comb filtering effects created by summing the delayed L-R to L+R "simulate" stereophonic sound, creating "vividness." It's not hi-fi, but what can you do? It makes mobile phone ring tones made from stereophonic sources less dull. Call it nonsense if you want, but I'd like to hear about your "better" stereo to mono algorithm. Have one? L-R can be as much as 50 % of all information in a stereophonic track, so normal L+R mono is quite brutal. 
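
For the curious, here is a minimal Python sketch of "vivid mono" as described above. Only the 1000 Hz corner and the 0.5 ms delay come from that description; the one-pole filter type and everything else are my guesses at an implementation, not 71 dB's actual code:

```python
import math

FS = 48000  # sample rate in Hz (assumed)

def highpass_1pole(x, cutoff_hz, fs=FS):
    """One-pole high-pass filter (a guess; the description only
    gives the 1000 Hz corner, not the filter order or type)."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / fs
    a = rc / (rc + dt)
    y, prev_x, prev_y = [], 0.0, 0.0
    for s in x:
        prev_y = a * (prev_y + s - prev_x)
        prev_x = s
        y.append(prev_y)
    return y

def vivid_mono(left, right, delay_ms=0.5, cutoff_hz=1000.0, fs=FS):
    """Mono fold-down that keeps some of the side (L-R) signal:
    mono = (L+R) + delay(highpass(L-R))."""
    mid = [l + r for l, r in zip(left, right)]
    side = highpass_1pole([l - r for l, r in zip(left, right)],
                          cutoff_hz, fs)
    d = int(round(delay_ms * fs / 1000))     # 0.5 ms -> 24 samples
    side = [0.0] * d + side[:len(side) - d]  # delay the side signal
    return [m + s for m, s in zip(mid, side)]

# Anti-phase content (pure L-R) vanishes in a plain L+R fold-down,
# but survives, high-passed and delayed, in the "vivid" version.
n = FS // 10
l = [math.sin(2 * math.pi * 3000 * i / FS) for i in range(n)]
r = [-s for s in l]
plain = [a + b for a, b in zip(l, r)]  # identically zero
vivid = vivid_mono(l, r)
```

The delayed sum is also what produces the comb filtering mentioned above: the side signal arrives 24 samples late, so it reinforces some frequencies and cancels others.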



pinnahertz said:


> None of that defines creative intent.

If your creative intent is to have excessive ILD and ITD then you are a _headphone artist_ hating loudspeakers and natural sound.



pinnahertz said:


> A "reduction" is not a "correction".  A non-specific cross-feed cannot correct for an undefined problem.  Your figures do not define a problem as they do not include creative intent.



Semantics. Reduction or correction. Better than nothing. The problem is defined well enough, for example ILD is 12 dB at bass when it should be less than 6 dB.



pinnahertz said:


> The problem is evaluated as such subjectively by each individual listener who may determine what, if any, cross-feed makes a subjective improvement to his experience.  What you evaluate subjectively as a horrific problem, someone else might evaluate as just fine, or anywhere in between.  But nobody would be "correct" in their evaluation except for the actual creator/artist.  And since you don't have them as a reference, your analytical application of a pseudo-correction would be a fabrication, a false correction of an undefined problem.  Don't get me wrong, your "correction" may in fact be perfect for you, but don't force it on anyone else.  The entire matter is subjective, there is no right/wrong.



How about acoustic crossfeed? Room acoustics, the directivity of your speakers etc. define acoustic crossfeed. People just listen to their speakers without thinking about whether the acoustic crossfeed is correct (the same as in the studio where the record was mixed?). You are nitpicking too much, brother...



pinnahertz said:


> And no "dark side".  That is another of your offensive labels.  What if someone happens to like a recording, one of your 98%, without any crossfeed?
> 
> I can't quite say how offensive it is for someone to present their subjective opinions as a religious tenet to which, if you differ, you are condemned.
> 
> If you want to win people to your side, present evidence that your process is an improvement.  Don't bother with your theories, or attempting to justify your position by fabricating your own definitions, statistics or terminology.  Just present actual examples to listen to, and let others form their own opinions.  That's something that has yet to be done.



Science is on my side. All you have is "artistic intent". Please.


----------



## WoodyLuvr

bigshot said:


> Practicality, perhaps. Sound quality all things being equal, no. The best speaker setups trounce the best headphone setups. If you live in an apartment with grouchy neighbors, you have to make do though.


Again, the "best speaker setups trounce the best headphone setups" *only* in certain listening environments/situations... perhaps the reason why headphones were created in the first place.  I do get your point but your generalization still doesn't apply to all and most especially on a headphone oriented forum  !


----------



## bigshot

Well, I have very good headphones and a very good 5.1 speaker system, and for listening to music, the speakers are the way to go. I lived in an apartment for 30 years and I understand needing to contain sound so the neighbors don't get mad... well, at least containing it enough that they don't get *too* mad. But now that I have a dedicated listening room that isn't anywhere near where anyone is trying to sleep, I barely use my headphones any more. For listening to music, speakers are a no brainer. Headphones are best if other people don't want to hear what you are hearing.

A lot of the people in this forum have never actually heard a good multichannel speaker system. If they did have that opportunity, and they had a place in their home for it, they would put their cans in a drawer just like I did.


----------



## bfreedma

bigshot said:


> Well, I have very good headphones and a very good 5.1 speaker system, and for listening to music, the speakers are the way to go. I lived in an apartment for 30 years and I understand needing to contain sound so the neighbors don't get mad... well, at least containing it enough that they don't get *too* mad. But now that I have a dedicated listening room that isn't anywhere near where anyone is trying to sleep, I barely use my headphones any more. For listening to music, speakers are a no brainer. Headphones are best if other people don't want to hear what you are hearing.
> 
> A lot of the people in this forum have never actually heard a good multichannel speaker system. If they did have that opportunity, and they had a place in their home for it, they would put their cans in a drawer just like I did.



Agree regarding 5.1 (or more) being preferable but still tend to use headphones later in the evening to not disturb the family.  With two high output subs in my home theater, the bass does tend to be intrusive throughout the rest of the house.


----------



## pinnahertz

71 dB said:


> Normal mono is just L+R and L-R gets lost. This makes the mono version sound "dull" compared to the original stereo version. Vivid mono sums L-R, high-pass filtered (1000 Hz) and delayed 0.5 ms, to L+R. This retains about 20 % of the vividness of stereo and a lot of L-R information. Better than nothing. The comb filtering effects created by summing the delayed L-R to L+R "simulate" stereophonic sound, creating "vividness." It's not hi-fi, but what can you do? It makes mobile phone ring tones made from stereophonic sources less dull. Call it nonsense if you want, but I'd like to hear about your "better" stereo to mono algorithm. Have one? L-R can be as much as 50 % of all information in a stereophonic track, so normal L+R mono is quite brutal.


Just as I expected.  You've ignored the 6dB center/L+R build-up.  Problem known since the beginnings of consumer stereo.  The traditional correct fix is to sum using a 90 degree network resulting in a 3dB L+R sum, putting it back in balance with the rest of the mix.  Your "fix" won't achieve the correct balance as it relies on your interpretation of how much L-R there is, which is NOT a reliable indicator of mix balance.
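
The 6 dB vs 3 dB arithmetic is easy to verify numerically. In this Python sketch (my illustration of the principle, not any real broadcast summing network) the wideband 90° shift is idealized by generating the quadrature signal directly rather than running an actual phase-difference network:

```python
import math

def rms_db(x):
    """Level of a sample list in dB relative to unit RMS."""
    return 20 * math.log10(math.sqrt(sum(s * s for s in x) / len(x)))

n, f, fs = 48000, 440, 48000
# A centered (mono) source: identical signal in both channels.
left = [math.sin(2 * math.pi * f * i / fs) for i in range(n)]
right = list(left)

# Plain L+R: fully correlated signals add as voltages -> +6 dB.
plain = [l + r for l, r in zip(left, right)]

# Sum through an ideal 90-degree difference network, idealized here
# by generating the quadrature (cosine) channel directly -> +3 dB.
right_q = [math.cos(2 * math.pi * f * i / fs) for i in range(n)]
network = [l + r for l, r in zip(left, right_q)]

src = rms_db(left)
print(round(rms_db(plain) - src, 2))    # → 6.02, i.e. 20*log10(2)
print(round(rms_db(network) - src, 2))  # → 3.01, i.e. 20*log10(sqrt(2))
```

Uncorrelated (hard-panned) material also sums with about a +3 dB power build-up, which is why the 90° sum keeps center content in balance with the rest of the mix.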


71 dB said:


> If your creative intent is to have excessive ILD and ITD then you are a _headphone artist_ hating loudspeakers and natural sound.


You've completely missed two important points.  1. You have NO IDEA what the creative intent was.  2. If the creative intent is to have a very wide image on speakers and a super-wide image on headphones, they don't hate either one, they love both and have chosen to embrace the effects of both on their mix.  You would never know (and clearly should not assume) that condition exists. 

YOU are the one hating things and imposing YOUR value judgements on everyone else by denigrating them for their opinions!


71 dB said:


> Semantics. Reduction or correction.


No, not semantics, definitions.

Reduction : the action or fact of making a specified thing smaller or less in amount, degree, or size.
Correction: a change that rectifies an error or inaccuracy.

And...
Semantics: the historical and psychological study and the classification of changes in the signification of words or forms viewed as factors in linguistic development


71 dB said:


> Better than nothing.


That is YOUR OPINION, and is NOT universally shared! Have we not already established this?  Or do I need to go back to the last time we had this ridiculous argument and quote it to you?


71 dB said:


> The problem is defined well enough, for example ILD is 12 dB at bass when it should be less than 6 dB.


No, that defines the mathematical relationship of ILD in a specific instance.  Your definition ignores creative intent and subjective opinion!   Your definition ignores the resulting subjective effects and their desirability (or lack thereof) and general listener preference, which will vary widely for each individual AND each recording.


71 dB said:


> How about acoustic crossfeed? Room acoustics, the directivity of your speakers etc. define acoustic crossfeed. People just listen to their speakers without thinking about whether the acoustic crossfeed is correct (the same as in the studio where the record was mixed?). You are nitpicking too much, brother...


There is no such thing as "acoustic cross-feed"!  What you define above is "spatial hearing" and "localization".   That's how we hear, it's not an anomaly, or something artificially generated.  Yes, people don't usually think that their speakers and room acoustics match the studio or not.  That's not the point at all.  Everyone working in a studio already knows the target venue for music (film is different) will likely not match theirs!  That's part of the considerations applied in mixing!  You don't "undo" that, you accept it as their intentions!


71 dB said:


> Science is on my side. All you have is "artistic intent". Please.


No, your science has proven nothing.  If you want to apply science, then take your hypothesis and apply the scientific method!  All you've done is state a hypothesis as fact.  That could not be farther from science!  You've done no actual research into listener response and preference; it's just all about YOU and your theories. 

I'm not challenging your theory itself, or your math, or your analysis.  I'm challenging you to prove one thing: your *hypothesis of universal efficacy*.  You think you've "discovered" something, but you have only found what has been known for decades, yet has not, for some mysterious reason, been embraced with even a tiny fraction of the universality you think is so critical.  Why not?  It's not because it's difficult to do, or expensive, or heretofore unknown.  It's had its chance, and been voted down.  You're bucking the existing tide, but refuse to even begin to prove your point other than to re-argue the details of your marvelous "discovery" ad nauseam.

You have proven nothing other than a possible mental block.


----------



## 71 dB

bigshot said:


> A lot of the people in this forum have never actually heard a good multichannel speaker system. If they did have that opportunity, and they had a place in their home for it, they would put their cans in a drawer just like I did.



We believe you are the luckiest man alive and have the best multichannel system known to man. Other people aren't as lucky so we use headphones. Ok?


----------



## 71 dB

pinnahertz said:


> You have proven nothing other than a possible mental block.



Okay, now I believe I can't earn your respect. There is no reason for me to communicate with you. Sorry.


----------



## bigshot

71 dB said:


> Other people aren't as lucky so we use headphones. Ok?



Sure. Make do!


----------



## WoodyLuvr (Jan 2, 2018)

bigshot said:


> Well, I have very good headphones and a very good 5.1 speaker system, and for listening to music, the speakers are the way to go. I lived in an apartment for 30 years and I understand needing to contain sound so the neighbors don't get mad... well, at least containing it enough that they don't get *too* mad. But now that I have a dedicated listening room that isn't anywhere near where anyone is trying to sleep, I barely use my headphones any more. For listening to music, speakers are a no brainer. Headphones are best if other people don't want to hear what you are hearing.
> 
> A lot of the people in this forum have never actually heard a good multichannel speaker system. If they did have that opportunity, and they had a place in their home for it, they would put their cans in a drawer just like I did.


My issues/constraints are both physical and social:


- Wall-to-wall marble flooring with marble, brick, and tile walls... talk about an echo chamber.
- The main room I vegetate in (where I have luckily been able to claim a small space for myself) is actually one continuous hall containing an entry hall, large living/family room, dining area, large bar area, and then a full-sized western kitchen, with a strangely vaulted ceiling of varying angles/levels ranging from 4 to 8 meters in height.
- Tropics = open windows with fans or air conditioning.
- Two to four young Siamese females present in the house 24/7 (if you don't know, Siamese are extremely talkative, active, and noise-prone creatures to say the least!).
- All three (3) flat-screen televisions are running Thai soap operas and/or Thai music throughout the waking hours, whether or not they are being viewed.


And most importantly, most of my listening is at night-time when all the monsters are asleep and I am all alone... which means headphones.


----------



## pinnahertz

71 dB said:


> Okay, now I believe I can't earn your respect. There is no reason for me to communicate with you. Sorry.


What do you want me to do?  Draw you a road map?  I've already given you step-by-step instructions.  More than once.  

I believe you don't want to try to earn the respect of the members of a scientific community.  You just want to say you're right and everyone will believe you.  I know of no communities of any kind where that works. 

If you think things are tough here stop over to hydrogenaudio and try to pull that stuff.


----------



## pinnahertz

71 dB said:


> We believe you are the luckiest man alive and have the best multichannel system known to man. Other people aren't as lucky so we use headphones. Ok?


Who are you speaking for with that "we"?  

I think Bigshot probably has a great system and would love to hear it some time, but I seriously doubt it's the best known to man.  Nor does it need to be to accomplish the core goals, his or most people's. 

Oh...sorry, was that just your sour grapes sarcasm?   

I really don't get the banter about which is better, headphones or speakers, for whatever reasons.  Both have their points, neither is perfect no matter how much you spend.  Neither method does what the other does, it can't, and that's not the point anyway.  The point is presenting music in an enjoyable manner, and both can be enjoyable for different reasons.  The key issue has always been frequency response, and until relatively recently, headphones seemed to solve most of that with less expense.   But you can't really compare them to speakers because speaker systems and headphones have completely different purposes.  You can't walk down the street with your 5.1 system, now can you?  And you shouldn't wear headphones while driving.  You can't crank up the SPLs in certain environments without irritating others, and some music is highly personal, and sharing is not desired.  You can't get controlled reliable directional cues from headphones, but if you want a sound there, you can put a speaker there...if you settle the WAF issue.

What's better?  Both, just for different reasons.  Which gets you better sound for less money?  I still think probably headphones, but with certain compromises, and some definite limitations.  Mostly we can get good smooth speakers now, but positioning and the room are still darn difficult problems, so there's a big expense there if you want to get it right.  

I just don't get why argue about it, it's apples vs cumquats.   Both can be anything from fantastic to rotten.


----------



## 71 dB

pinnahertz said:


> What do you want me to do?  Draw you a road map?  I've already given you step-by-step instructions.  More than once.


You could stop shooting down everything I say. You could provide Lisp code for the Hilbert transform so I could try the 90° phase shift myself. I'm not sure I can figure it out on my own. Perhaps I could approximate it with all-pass filters? After giving it some thought I admit it is an interesting idea for sure to have a 90° phase shift prior to downmixing stereo to mono.



pinnahertz said:


> I believe you don't want to try to earn the respect of the members of a scientific community.  You just want to say you're right and everyone will believe you.  I know of no communities of any kind where that works.



No, I don't say I am right. It's you telling me I am wrong. We never debate as equals because you put yourself above me.

Maybe it's the sound engineers with their "artistic intents" who are wrong? At some point artistic intent goes outside what is reasonable to accept. I myself made music for years with excessive stereo separation because I was spatially ignorant and now I can see how wrong I was. I simply do not believe that most excessive stereo recordings are artistic intents. In my opinion excessive stereo recordings exist because:

- mixed for speakers, headphones ignored (less true nowadays)
- more appealing to spatially ignorant people (almost everybody)
- lack of sophisticated mixing tools (not true anymore)
- lack of understanding of the psychoacoustic problems of excessive separation.

I sense that you almost fear stereo sound with natural ILD and ITD, but that's not a limitation really because there's so much more you can do in music, so many other possibilities for artistic intent.


----------



## bigshot

WoodyLuvr said:


> My issues/constraints are both physical and social




Time to build a guest house in the back!


----------



## WoodyLuvr

bigshot said:


> Time to build a guest house in the back!


They would invade that as well!  LOL.


----------



## pinnahertz (Jan 3, 2018)

71 dB said:


> You could stop shooting down everything I say.


An exaggeration, but I can see how you might feel that way.


71 dB said:


> You could provide Lisp code for the Hilbert transform so I could try the 90° phase shift myself. I'm not sure I can figure it out on my own. Perhaps I could approximate it with all-pass filters? After giving it some thought I admit it is an interesting idea for sure to have a 90° phase shift prior to downmixing stereo to mono.


90 degree phase shift networks have been used in audio matrix systems for 50 years (4-2-4 quad, Dolby Stereo, etc.) I'm sure you can figure out how to do it.
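
For what it's worth, the all-pass guess is the classic route: practical 90° phase-difference networks are built from chains of first-order all-pass sections in each channel. Here is a minimal sketch in Python rather than Lisp (my own illustration, one section only, so the 90° shift is exact only at that section's corner frequency; a real network cascades several sections to hold roughly 90° across the audio band):

```python
import math

def allpass1(x, f0, fs):
    """First-order digital all-pass (bilinear transform of the
    analog prototype (w0 - s)/(w0 + s)): unity gain at every
    frequency, and a phase shift of exactly -90 degrees at f0.
        y[n] = c*x[n] + x[n-1] - c*y[n-1]
        c = (t - 1)/(t + 1),  t = tan(pi*f0/fs)
    """
    t = math.tan(math.pi * f0 / fs)
    c = (t - 1) / (t + 1)
    y, x1, y1 = [], 0.0, 0.0
    for s in x:
        out = c * s + x1 - c * y1
        x1, y1 = s, out
        y.append(out)
    return y

fs, f0, n = 48000, 1000, 48000
sine = [math.sin(2 * math.pi * f0 * i / fs) for i in range(n)]
shifted = allpass1(sine, f0, fs)

# Once the start-up transient dies out, the output is in quadrature
# with the input: orthogonal to the sine, at the same level.
skip = 2000  # discard the filter's start-up transient
dot = sum(a * b for a, b in zip(sine[skip:], shifted[skip:]))
norm = math.sqrt(sum(a * a for a in sine[skip:]) *
                 sum(b * b for b in shifted[skip:]))
print(round(abs(dot / norm), 3))  # → 0.0, i.e. a 90-degree shift
```

Running one such chain in L and a differently tuned chain in R, so their phase *difference* stays near 90°, is how the 4-2-4 matrix era did it.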


71 dB said:


> No, I don't say I am right. It's you telling me I am wrong.


Well, this is interesting!  I went back to look for examples where you have declared yourself right and anyone else who doesn't agree is an idiot (or words to that effect), claimed that 98% of all recorded stereo music benefits from cross-feed, and if anyone doesn't agree they're spatially deaf (or worse).  It looks like the thread has been "cleaned up" just a tad, large groups of posts are now gone, others edited.  But I do have all the originals in email notifications.  I guess citing all those many occurrences would not benefit the thread and would just get deleted again.  I can forward all of them to you privately if you like.


71 dB said:


> We never debate as equals because you put yourself above me.


I'm not the one proclaiming superiority of concept, intelligence or hearing ability.


71 dB said:


> Maybe it's the sound engineers with their "artistic intents" who are wrong? At some point artistic intent goes outside what is reasonable to accept.


Oh look! There's an example now!


71 dB said:


> I myself made music for year with excessive stereo separation because *I was spatially ignorant and now I can see how wrong I was.* I simply do not believe that most excessive stereo recordings are artistic intents. In my opinion excessive stereo recordings exists because:
> 
> - mixed for speakers, headphones ignored (less true nowadays)
> - more appealing to *spatially ignorant people (almost everybody)*
> ...


...and you don't see what I mean from the above?  I highlighted a few things to help you out.  The red one, in particular, keeps coming up again and again.  People's preferences make them ignorant, spatially or otherwise.


71 dB said:


> I sense that you almost fear stereo sound with natural ILD and ITD,


You've sensed pretty much everything incorrectly about me, so why not just add that one to the growing list?


71 dB said:


> but that's not a limitation really because there's so much more you can do in music, so much other possibilities for artistic intent.


And THAT  from the same one who denies that choices in stereo perspective in headphones could possibly be artistic intent!

Back to the deleted posts (about a month's worth) and edits for a second, I really have no issue with that, but it does indicate something.  It has become quite clear that what little educational benefit this thread might have once had has been obliterated by intense propaganda promoting a polarized, but scientifically unproven viewpoint.  I recall a time in the not too distant past on this forum when threads would have been locked for less.  If education is at all important, perhaps a return to the actual scientific method in the sound science forum should be considered rather than foot stamping, and the synthesis of terminology, statistics, and pseudofacts.


----------



## 71 dB

pinnahertz said:


> 90 degree phase shift networks have been used in audio matrix systems for 50 years (4-2-4 quad, Dolby Stereo, etc.) I'm sure you can figure out how to do it.



Yes, audio matrix systems use 90° phase shift, but it's not "trivial." The networks are an _approximation_ of 90° phase shift within a "narrow" frequency band. The question is whether this can give better results than my vivid mono algorithm.

I think this method could work:

(1) Break the original signal into narrow band partial signals (say 10 octave bands).
(2) Allpass filter each partial signal so that the 90° phase shift hits the octave band middle frequency.
(3) Construct the original phase-shifted signal by summing all these partial signals together.

Since the partial signals overlap a bit, and in every partial signal the frequencies below the middle frequency are phase shifted less than 90° while the frequencies above it are shifted more than 90°, summing these overlapping octave bands will give a pretty "flat" ~90° phase shift over the whole audio band.


----------



## WoodyLuvr

71 dB said:


> Yes, audio matrix systems use 90° phase shift, but it's not "trivial." The networks are an _approximation_ of 90° phase shift within a "narrow" frequency band. The question is whether this can give better results than my vivid mono algorithm.
> 
> I think this method could work:
> 
> ...


A quick question: your DIY crossfeed adapter is specifically made for the HD598? What year is your HD598? Any modifications? Cheers.


----------



## pinnahertz

71 dB said:


> Yes, audio matrix systems use 90° phase shift, but it's not "trivial." The networks are an _approximation_ of 90° phase shift within a "narrow" frequency band. The question is whether this can give better results than my vivid mono algorithm.


 No, that's incorrect.  They are relatively trivial, were realized with basic analog circuitry, and were pretty dead-on across the audio band.  


71 dB said:


> I think this method could work:
> 
> (1) Break the original signal into narrow band partial signals (say 10 octave bands).


Not necessary!


71 dB said:


> (2) Allpass filter each partial signal so that the 90° phase shift hits the octave band middle frequency.


Getting warm....


71 dB said:


> (3) Construct the original phase-shifted signal by summing all these partial signals together.
> 
> Since the partial signals overlap a bit, and in every partial signal the frequencies below the middle frequency are phase shifted less than 90° while the frequencies above it are shifted more than 90°, summing these overlapping octave bands will give a pretty "flat" ~90° phase shift over the whole audio band.


You're overthinking it.  Think about doing that in 1968.  And think not global spectral phase shift, but relative inter-channel phase.


----------



## 71 dB

WoodyLuvr said:


> A quick question: your DIY crossfeed adapter is specifically made for the HD598? What year is your HD598? Any modifications? Cheers.


Not made _specifically_ really for HD598. Works with any headphone I believe. I bought my HD 598 in 2011, serial number 0500001929. Earpads and headband padding renewed a few months ago. That's it.


----------



## 71 dB

pinnahertz said:


> No, that's incorrect.  They are relatively trivial, were realized with basic analog circuitry, and were pretty dead-on across the audio band.
> Not necessary!
> Getting warm....
> You're overthinking it.  Think about doing that in 1968.  And think not global spectral phase shift, but relative inter-channel phase.



I found this example of a 90° phase shifter:

[schematic image not reproduced]

Source: http://www.microwave.gr/content/view/48/67/

This works in the audio band 50-5000 Hz with a theoretical phase error of ±0.0607° (accurate as hell, but real-life tolerances make the error much bigger).


----------



## 71 dB

Okay, today I wrote a Nyquist plug-in that simulates the 90° phase shifter above. The phase shift looks very accurate, but there is high-frequency attenuation: 20 kHz is down about 4 dB. All I can think of causing this is accuracy problems with the low-pass filters. The Nyquist code looks like this:

;nyquist plug-in
;version 2
;type process
;name "Hilbert Transformer"
;action "Hilberting..."
;info "90 degrees phase shifter.\nWritten Jan. 4, 2018."

(setf sigl (aref s 0))
(setf sigr (aref s 1))

;; Left Channel All Pass Filters

(setf sigl (sim (lp sigl 51) (mult -0.5 sigl)))
(setf sigl (sim (lp sigl 201) (mult -0.5 sigl)))
(setf sigl (sim (lp sigl 677) (mult -0.5 sigl)))
(setf sigl (sim (lp sigl 2354) (mult -0.5 sigl)))
(setf sigl (mult 32 (sim (lp sigl 16442) (mult -0.5 sigl))))

;; Right Channel All Pass Filters

(setf sigr (sim (lp sigr 15) (mult -0.5 sigr)))
(setf sigr (sim (lp sigr 106) (mult -0.5 sigr)))
(setf sigr (sim (lp sigr 369) (mult -0.5 sigr)))
(setf sigr (sim (lp sigr 1246) (mult -0.5 sigr)))
(setf sigr (mult 32 (sim (lp sigr 4881) (mult -0.5 sigr))))


(if (arrayp s)
    (vector (abs-env sigl)
            (abs-env sigr)))
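
For anyone who wants to sanity-check the ~90° claim without running Nyquist: the inter-channel phase difference of the two all-pass chains can be computed analytically, since each ideal first-order all-pass contributes -2·arctan(f/f0) of phase. This is only a sketch of the ideal analog behaviour (the pole list is read off the code above, and the function names are mine); it ignores sampling effects, which is presumably where the HF droop comes from:

```python
import numpy as np

# Pole frequencies (Hz) of the two five-stage all-pass chains used in
# the Nyquist plug-in above (left and right channel respectively).
F_LEFT = [51, 201, 677, 2354, 16442]
F_RIGHT = [15, 106, 369, 1246, 4881]

def chain_phase(f, poles):
    """Phase (radians) at frequency f of a cascade of ideal first-order
    analog all-passes H(s) = (1 - s/w0)/(1 + s/w0); each stage adds
    -2*arctan(f/f0) of phase."""
    return sum(-2.0 * np.arctan(f / f0) for f0 in poles)

def interchannel_shift_deg(f):
    """Left-chain phase minus right-chain phase, in degrees."""
    return np.degrees(chain_phase(f, F_LEFT) - chain_phase(f, F_RIGHT))

for f in (100, 500, 2000):
    print(f"{f} Hz: {interchannel_shift_deg(f):.2f} deg")
```

In this quick check the difference stays within about half a degree of 90° from 100 Hz to 2 kHz, consistent with the claimed accuracy of the original network.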


----------



## adamlr

jasonb said:


> "A delayed, lowpass-filtered version of the opposite channel is added to the current channel. The delay is achieved bs2b-style using a single high shelve filter giving about 0.5 ms delay. After that, the signal is mixed without phase delay with 12 dB attenuation. In addition, there is a small reverb based on Haas stereo widening effect of 30 ms ping-pong buffers."
> 
> I have also used one of the crossfeed plugins that are available for winamp with good results as well.

You blew my mind with this. Sounds like it's a common deal but I've never heard of it.
You listen to your music through an effect that, in addition to playing the standard L and R, also sends them into each other, at a lower level, with a tiny delay, a filter and reverb?


----------



## 71 dB

adamlr said:


> You blew my mind with this. Sounds like it's a common deal but I've never heard of it.
> You listen to your music through an effect that, in addition to playing the standard L and R, also sends them into each other, at a lower level, with a tiny delay, a filter and reverb?



Think about what happens when you listen to loudspeakers.

Your left ear hears the (direct) sound from left speaker.
Your right ear hears the (direct) sound from right speaker.

But it doesn't end here.

Your left ear hears the sound from the right speaker, delayed because of an additional distance of about 10 cm (4 inches) and filtered because it goes "round" the head.
Your right ear hears the sound from the left speaker, delayed because of an additional distance of about 10 cm (4 inches) and filtered because it goes "round" the head.
Your ears also receive early reflections from surfaces, your upper body, furniture etc.
Your ears also hear the reverberation in the room.

Nobody thinks there's something funny about these things, and most recordings are mixed in studios with speakers for this kind of situation (the acoustic environment in a studio is much "better" and more controlled than a typical living room, but anyway…)

When you listen to headphones:

Your left ear hears the left channel.
Your right ear hears the right channel.

That's pretty much it. Open headphones leak some sound and there is very minor acoustic crosstalk happening, but unless you do something to the signal entering your headphones, none of the crosstalk, reflections or reverberation described above happens. That's why some people including me find headphone listening without crossfeed unnatural, annoying, spatially broken and tiring.
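
A crossfeed filter reintroduces a crude version of exactly that inter-aural crosstalk. Here is a minimal sketch in Python (the parameter values are illustrative only, not taken from any particular plugin, and the helper names are mine): each output channel gets the opposite channel delayed, low-pass filtered to mimic head shadowing, and attenuated.

```python
import numpy as np

FS = 44100  # sample rate in Hz

def simple_crossfeed(left, right, delay_ms=0.3, atten_db=8.0, cutoff_hz=700.0):
    """Naive crossfeed sketch: add a delayed, low-passed, attenuated copy
    of the opposite channel, mimicking what each ear hears from the far
    loudspeaker. All parameter values are illustrative."""
    delay = int(FS * delay_ms / 1000)
    gain = 10 ** (-atten_db / 20)

    # One-pole low-pass to imitate head shadowing of the crossed signal
    a = np.exp(-2 * np.pi * cutoff_hz / FS)

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, v in enumerate(x):
            acc = (1 - a) * v + a * acc
            y[i] = acc
        return y

    def shifted(x):
        # Delay by padding with zeros at the front
        return np.concatenate([np.zeros(delay), x[:len(x) - delay]])

    out_l = left + gain * shifted(lowpass(right))
    out_r = right + gain * shifted(lowpass(left))
    return out_l, out_r

# Hard-panned example: a source only in the left channel leaks,
# delayed and darkened, into the right channel after crossfeed.
t = np.arange(FS) / FS
left = np.sin(2 * np.pi * 440 * t)
out_l, out_r = simple_crossfeed(left, np.zeros_like(left))
```

Real crossfeeders (bs2b, the DSP Manager effect quoted earlier in the thread) shape the crossed signal more carefully, but the structure is the same.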


----------



## Erik Garci

jgazal said:


> I agree completely and that’s why I think future acquisition of HRTF with biometrics has an edge over PRIR’s.


Biometrics will be used by Creative's Super X-Fi for headphones. It will be demonstrated at CES next week.

https://us.creative.com/sxfi/


----------



## RRod

71 dB said:


> Think about what happens when you listen to loudspeakers.
> 
> Your left ear hears the (direct) sound from left speaker.
> Your right ear hears the (direct) sound from right speaker.
> ...



But aren't the mics already adding in acoustic crosstalk? If I record using binaural mics in my ears, for instance, I would want no contralateral content out of the playback speakers. It would seem that the argument is that more typical micing and mixing schemes are judged based on normal speaker playback.


----------



## 71 dB

RRod said:


> But aren't the mics already adding in acoustic crosstalk? If I record using binaural mics in my ears, for instance, I would want no contralateral content out of the playback speakers. It would seem that the argument is that more typical micing and mixing schemes are judged based on normal speaker playback.


More or less, yes. Depends on how the recording is mixed. You can record instruments with separate mono mics and hard-pan them, for example. Binaural recordings by definition have the correct amount of channel separation (correct spatial information) and should of course be listened to without crossfeed (that's why crossfeeders have an off/bypass switch). The thing is, binaural recordings are VERY rare. I think I have two CDs (out of my ~1500 discs) with binaural sound. Most recordings are recorded and mixed so that ILD and ITD content is too much for headphones without crossfeed. Stereo mics that are more than about 10 inches away from each other produce too much ITD and the directivity + set up of mics easily produce too much ILD.


----------



## jgazal (Jan 5, 2018)

71 dB said:


> Most recordings are recorded and mixed so that ILD and *ITD content is too much* for headphones without crossfeed. Stereo mics that are more than about 10 inches away from each other produce too much ITD and the directivity + set up of mics easily produce too much ILD.



Besides close spot microphones on each instrument, which only capture the mono direct field and are mixed downstream, what commonly used spaced stereo pair arrangements are more than 10 inches apart?

I have been told that usually ILD is not encoded:



gregorio said:


> I'm not really sure of the context of the statements you've quoted. But on the face of it, some/many appear to be nonsense.
> 
> The percentage of popular music recordings deliberately mixed with both ILD and ITD is tiny. At a guess, less than 1% and probably a lot less!
> 
> ...





pinnahertz said:


> 1. All the spatial information there is to have?
> 
> Even modest familiarity with stereo microphone arrays should reveal the complete nonsense of that statement.
> 
> ...



I see what you mean when choosing different spacing with A-B stereo pairs:



> http://www.dpamicrophones.com/mic-university/classical-orchestra-a-b-stereo



From your ~1500 discs how many were recorded with stereo A-B pairs spaced more than 17 cm?

And how do you know exactly how much ITD you need for each type of recording or mixing?


----------



## 71 dB

jgazal said:


> Besides close spot microphones on each instrument, which only capture the mono direct field and are mixed downstream, what commonly used spaced stereo pair arrangements are more than 10 inches apart?



AB pair can be close to each other, but also a few meters (10 feet!) away.



jgazal said:


> I have been told that usually ILD is not encoded:
> I see what you mean when choosing different spacing with A-B stereo pairs:
> From your ~1500 discs how many were recorded with stereo A-B pairs spaced more than 17 cm?
> And how do you know exactly how much ITD you need for each type of recording or mixing?



Maybe 500? The "needed" amount is 0-640 µs.


----------



## jgazal (Jan 5, 2018)

71 dB said:


> AB pair can be close to each other, but also a few meters (10 feet!) away.





71 dB said:


> Maybe 500? The "needed" amount is 0-640 µs.



I see it.

So do you digitally analyze the ILD and ITD parameters in the recording using some software that allows you to do that, and then you are able to know how the recording was made and how much ILD and ITD it has?

And then do you adjust your algorithm accordingly?

Do you mind telling me what software I can use to discover ILD and ITD from a given recording?

I believe the Realiser with a crossfeed-free PRIR would allow one to spatially perceive those ITD differences, but it would be nice to have software that allows one to numerically/quantitatively confirm such perceptions...


----------



## pinnahertz

71 dB said:


> Stereo mics that are more than about 10 inches away from each other produce too much ITD and the directivity + set up of mics easily produce too much ILD.


And yet many feel that recordings made with widely spaced stereo pairs and mono spot mics sound just fine on headphones!

There is SO much more to creating an aesthetically pleasing recording than microphone configuration, and to evaluate recordings in any way using only that information (usually unknown and only presumed anyway) would simply be the equivalent of viewing the night sky while looking through a toilet paper tube.

Many orchestras are recorded using combinations of X-Y pairs (with "correct" ITD by your definition), widely spaced omnis, spot mics, possibly even a coincident pair or two, all mixed together.  The goal of nearly every recording is not to clinically replicate an event, since that is impractical even using binaural techniques.  It is to present the sense of a performance that represents the intent of the creator given the limitations of stereo reproduction. A great many of these performances never existed acoustically at all! So how can you claim what is "correct"? Even when recording with panned mono sources (which is hardly ever done exclusively), they can be placed spatially with delay and reverb.

Since you cannot measure the ILD and ITD built into a recording, especially if there are multiple variations of each occurring simultaneously, and specific mic and mixing parameters are never published, coupled with the lack of full knowledge of the creator's intent for his work in every playback situation and system, the need for, desired amount and type of crossfeed must.... that's MUST.... be the result of subjective preference...only.  There cannot be any rules.  There cannot be any hard specifications, because of everything we don't know.

And around we go again, with more circularity in discussion than sound in 5.1 surround.


----------



## jgazal (Jan 5, 2018)

pinnahertz said:


> Since you cannot measure ILD and ITD built into a recording, especially if there are multiple variations of each occurring simultaneously, and specific mic and mixing parameters are never published, coupled with the lack of full knowledge of the creators intent for his work in every playback situation and system, the need for, desired amount  and type of crossfeed must...



Thank you for that information!!!


----------



## pinnahertz

jgazal said:


> Thank you for that information!!!


You'd think this would be completely obvious by now, but...guess not. Perhaps it's inertia from all the spin in the other direction.


----------



## jgazal

pinnahertz said:


> You'd think this would be completely obvious by now, but...guess not. Perhaps it's inertia from all the spin in the other direction.



Well, I could only have imagined that if one previously had a RIR in the exact location where the microphones were in the recording.

But as Claude Lévi-Strauss once said: "The scientist is not a person who gives the right answers, he is one who asks the right questions."


----------



## pinnahertz

jgazal said:


> Well, I could only have imagined that if one previously had a RIR in the exact location where the microphones were in the recording.


Measurements of any kind are not often part of recording sessions.


----------



## bigshot (Jan 5, 2018)

Tracking is a creative and evolutionary process. You don't know when you start what you are going to end up with. All you have is a demo of the song with the bare bones of it. Then you start building and tailoring tracks based on what you've recorded before.


----------



## 71 dB

jgazal said:


> I see it.
> 
> So do you digitally analyze the ILD and ITD parameters in the recording using some software that allows you to do that, and then you are able to know how the recording was made and how much ILD and ITD it has?
> 
> ...



Not really. I listen to the sound and set the crossfeed level so that spatial distortion just disappears. I have written an analyser for ILD, but simulating human hearing isn't easy, at least not for me.


----------



## pinnahertz

71 dB said:


> Not really. I listen to the sound and set the crossfeed level so that spatial distortion just disappears. I have written an analyser for ILD, but simulating human hearing isn't easy, at least not for me.


What is your indication that spatial distortion has disappeared?  

An analyzer that measured ILD would still not tell you how much cross-feed to apply to correct for spatial distortions anyway.


----------



## 71 dB (Jan 6, 2018)

pinnahertz said:


> And yet many feel that recordings made with widely spaced stereo pairs and mono spot mics sound just fine on headphones!



Yes, even I did feel that way, but then again I was spatially ignorant back then...



pinnahertz said:


> Many orchestras are recorded using combinations of X-Y pairs (with "correct" ITD by your definition), widely spaced omnis, spot mics, possibly even a coincident pair or two, all mixed together.  The goal of nearly every recording is not to clinically replicate an event, since that is impractical even using binaural techniques.  It is to present the sense of a performance that represents the intent of the creator given the limitations of stereo reproduction. A great many of these performances never existed acoustically at all! So how can you claim what is "correct"? Even when recording with panned mono sources (which is hardly ever done exclusively), they can be placed spatially with delay and reverb.



XY has zero ITD which is not optimal, because in real life we have ITD up to about 640 µs for sounds that aren't coming from very near. For speakers XY creates a narrow sound image because of the zero ITD information.

The purpose of music is enjoyment, and if for example use of crossfeed increases that enjoyment, it's hard for me to deny it is the right thing to do. I don't believe that most recording engineers understand (some do understand, e.g. Jürg Jecklin) that well human hearing. They understand how to get balanced sound.

Nothing wrong with panned mono tracks as long as you do it in a way that makes sense (ITD + ILD + perhaps some spectral stuff). I make computer music with very few recorded elements. I have to create the spatial information, and out of principle I use only free software, that is Garage Band and Audacity. I export raw tracks from Garage Band to be processed in Audacity using the plugins I have written. So, I am creating spatial information from scratch all the time and that's why I think I understand that stuff pretty well. If you use expensive professional software and effects you don't know what those effects are made of and you don't learn about human hearing.



pinnahertz said:


> Since you cannot measure ILD and ITD built into a recording, especially if there are multiple variations of each occurring simultaneously, and specific mic and mixing parameters are never published, coupled with the lack of full knowledge of the creators intent for his work in every playback situation and system, the need for, desired amount  and type of crossfeed must.... that's MUST.... be the resul of subjective preference...only.  There cannot be any rules.  There cannot be any hard specifications, because of everything we don't know.



What counts is how good it sounds. Our ears are amazing ITD/ILD detectors. That's why spatially enlightened listeners get annoyed by excessive ITD and ILD. We can have guidelines such as "limit ILD below 500 Hz to 6 dB" or "avoid ITD more than 640 µs below 1 kHz." ILD and ITD are easiest to control while recording and creating the music.

We can estimate the theoretical maximum ILD and ITD values of a given mic set-up. Say we have an AB pair with 1 m (40 inches) between the mics, placed at a distance where the orchestra to be recorded covers a ±45° angle. This means the sound from the far-left instruments arrives at the left mic sin(45°) * 1 m / 345 m/s ≈ 2 ms earlier than at the right mic. Omnidirectional mics create ILD due to the distance difference: if the mics are 5 meters away from the closest players, the theoretical maximum ILD is 20*log10(6/5) = 1.6 dB. So we see ILD won't be a problem, but ITD will be. For an XY pair it's the other way around, etc.
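
The back-of-the-envelope numbers above are easy to reproduce. A small sketch (the 345 m/s figure and the distances come from the post; the function names are mine):

```python
import math

SPEED_OF_SOUND = 345.0  # m/s, as used in the post

def max_itd_us(spacing_m, source_angle_deg):
    """Worst-case inter-mic time difference for a spaced omni (A-B) pair:
    path difference sin(angle) * spacing, converted to microseconds."""
    path_diff = math.sin(math.radians(source_angle_deg)) * spacing_m
    return path_diff / SPEED_OF_SOUND * 1e6

def omni_ild_db(near_m, far_m):
    """Level difference from the 1/r distance law between the two omnis."""
    return 20 * math.log10(far_m / near_m)

print(round(max_itd_us(1.0, 45) / 1000, 2), "ms")  # AB pair, 1 m apart: about 2 ms
print(round(omni_ild_db(5.0, 6.0), 1), "dB")       # 5 m vs 6 m: about 1.6 dB
```

Both values match the figures in the post, and both are far above/below the respective 640 µs and 6 dB guideline numbers, which is the point being made.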


----------



## 71 dB

pinnahertz said:


> What is your indication that spatial distortion has disappeared?



It sounds free of spatial distortion. It's no different from, say, harmonic distortion: if you can reduce it, at a certain point the distortion goes below the hearing threshold.



pinnahertz said:


> An analyzer that measured ILD would still not tell you how much cross-feed to apply to correct for spatial distortions anyway.



Actually it does if we know the threshold (target) value. I have written an analyser plugin for the "D" value:

D = S/(S+M),

where S is absolute value of L-R and M is absolute value of L+R.

D=0 means mono sound. D=1 means L and R are out of phase (antimono). If the target value for ILD is 3 dB, it means R = 0.7 * L.

M = L + R = L + 0.7*L = 1.7*L
S = L - R = L -0.7*L = 0.3*L
D = S / (S + M) = 0.3*L / 2*L = 0.15

Now, if the analysed D value is 0.4, for example, we can calculate the needed correction:

x = 20*log10 ((0.4-0.15)/(0.4+0.1-2*0.5*0.1)) = 20*log10 (0.25/0.4) = -4.1 dB.
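
If it helps, the D examples above can be reproduced numerically. One plausible reading (my assumption, since the post says "absolute value" without specifying) is that S and M are the RMS values of the difference and sum signals:

```python
import numpy as np

def d_value(left, right):
    """Width measure from the post: D = S/(S+M), with S and M taken as
    RMS of the difference (L-R) and sum (L+R) signals. Reading
    'absolute value' as RMS is an assumption."""
    s = np.sqrt(np.mean((left - right) ** 2))
    m = np.sqrt(np.mean((left + right) ** 2))
    return s / (s + m)

t = np.linspace(0, 1, 44100, endpoint=False)
sig = np.sin(2 * np.pi * 440 * t)

print(d_value(sig, sig))                  # identical channels (mono): 0.0
print(d_value(sig, -sig))                 # out of phase (antimono): 1.0
print(round(d_value(sig, 0.7 * sig), 2))  # 3 dB ILD example: 0.15
```

The coherent 3 dB ILD case gives D = 0.3/(0.3+1.7) = 0.15, matching the worked example in the post.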


----------



## pinnahertz

71 dB said:


> Yes, even I did feel that way, but then again I was spatially ignorant back then...


You have just defined anyone who doesn't share your opinion as "ignorant".

Again.

Your arrogance is offensive, even if someone partially agrees.



71 dB said:


> XY has zero ITD which is not optimal, because in real life we have ITD up to about 640 µs for sounds that aren't coming from very near. Also, the cardioid mics of XY create excessive ILD at low frequencies.


No, you need to examine the LF pattern of a few cardioid mics.  Besides, X-Y is only one of many stereo mic arrays. 


71 dB said:


> For speakers XY creates a narrow sound image because of the zero ITD information.


Again, incorrect. The image location of sound between speakers does not depend on ITD alone, or even primarily. If it did the ubiquitous "pan pot" wouldn't pan.


71 dB said:


> XY isn't that good for much anything, but it's not very bad either.


It's great for producing stereo recordings that are highly mono-compatible.  


71 dB said:


> The purpose of music is enjoyment, and if for example use of crossfeed increases that enjoyment, it's hard for me to deny it is the right thing to do.


Not always. And not for everyone. This is, again, arrogance.


71 dB said:


> I don't believe that most recording engineers understand (some do understand, e.g. Jürg Jecklin) that well human hearing. They understand how to get balanced sound.


Arrogance! To think that those that have a career in making commercial recordings are not as "enlightened" as you, a rank amateur!  Just incredible!



71 dB said:


> Nothing wrong with panned mono tracks as long as you do it in a way that makes sense (ITD + ILD + perhaps some spectral stuff).
> 
> ...


No pan pot works that way. You know that, so what you're doing here is dissing the entire recording industry relative to your own enlightenment.


----------



## gregorio

71 dB said:


> Science is on my side. All you have is "artistic intent". Please.



Huh? How many commercial music recordings do you have which have been made by scientists and/or to purely scientific principles? No, commercial music recordings are made by artists and "artistic intent" is always the fundamental and overriding concern. Your statement is therefore complete nonsense and pretty much backwards! And that's even if science were on your side to begin with!



71 dB said:


> [1] I sense that you almost fear stereo sound with natural ILD and ITD [2] but that's not a limitation really because there's so much more you can do in music, so much other possibilities for artistic intent.



1. I can't answer for pinnahertz but I certainly do! It would in effect mean dumping all the progress made in the last 50 years or so in music recording techniques, mixing and production, along with all the modern popular music genres, plus narrative film and TV sound. That's pretty scary to me!
2. Such as? I can't think of anything more limiting to artistic intent than dumping all the technical and artistic progress of the last 50 years!



71 dB said:


> [1] XY has zero ITD [2] which is not optimal, because in real life we have ITD up to about 640 µs for sounds that aren't coming from very near. [3] Also, the cardioid mics of XY create excessive ILD at low frequencies. [4] For speakers XY creates a narrow sound image because of the zero ITD information. [5] XY isn't that good for much anything ...



1. No it doesn't, but pretty close. 2. The opposite is true! As near to zero as possible is in fact optimal, to avoid phase artefacts.
3. No they don't! 4. No it doesn't! 5. Yes it is! In fact it's probably the most commonly used stereo mic technique.
3 and 4 depend entirely on: What you're recording, where you're recording it, how the XY pair is positioned relative to what you're recording and artistic intent! 

I'm not sure where you've got all this nonsense from, whether you've zero practical experience of mic'ing techniques and are just misinterpreting what you've read or if you're deliberately misrepresenting the facts just to support your opinion?



71 dB said:


> [1] What counts is how good it sounds. [2] Our ears are amazing ITD/ILD detectors. [2a] That's why spatially enlightened listeners get annoyed by excessive ITD and ILD.



1. Absolutely!
2. They are only potentially amazing given an exact HRTF and even then they can be pretty easily fooled.
2a. That's presumably why you've managed to convince yourself that you're spatially enlightened when in fact your statements indicate the exact opposite! For example:


71 dB said:


> Open headphones leak some sound and there is very minor acoustic crosstalk happening, but unless you do something to the signal entering your headphones, none of the crosstalk, reflections or reverberation described above happens. That's why some people including me find headphone listening without crossfeed unnatural, annoying, spatially broken and tiring.



With extremely few exceptions, whether you're listening to headphones with crossfeed, listening to them without crossfeed or even listening on speakers in an acoustically excellent room, the spatial information of what you're listening to (commercial music, TV/films) is ALWAYS "unnatural" and "spatially broken". The only explanation I can think of for why you seem completely unaware/oblivious/ignorant of this basic fact is that you are spatially ignorant/un-enlightened. 



71 dB said:


> [1] Nothing wrong with panned mono tracks as long as you do it in a way that makes sense [2] (ITD + ILD + perhaps some spectral stuff).



1. Again, absolutely agreed. 2. Nonsense, virtually all commercial music/audio consists at least partly, if not mostly or entirely, of panned mono tracks with little or no ITD, and a fair amount of it with no ILD either, and it makes perfect sense! Or more precisely, our brains can make perfect sense of it.



71 dB said:


> [1] I listen to the sound and set the crossfeed level so that spatial distortion just disappears.
> [2] I don't believe that most recording engineers understand (some do understand, e.g. Jürg Jecklin) that well human hearing.
> [2a] So, I am creating spatial information from scratch all the time and that's why I think I understand that stuff pretty well.
> [2b] If you use expensive professional software and effects you don't know what those effects are made of and you don't learn about human hearing.



1. Spatial distortion never disappears, unless you remove ALL the spatial information and that would sound pretty ridiculous. Maybe it just seems to disappear to you personally because you are unaware of it/spatially ignorant?
2. That's clearly complete nonsense. You somehow don't seem to realise that being a sound engineer is an extremely competitive career, or even that the two requisites which determine their success are the quality of their work and their efficiency. Neither of which they could achieve without excellent judgement of human hearing perception.
2a. You mean that's how you've managed to convince yourself that you know more than anyone else.
2b. And this is why this is all "clearly complete nonsense"! You're making a correlation where there is none. Is an F1 car mechanic a better driver than a professional F1 driver because he has a far better understanding of how the car works? What about a fighter plane designer being a better pilot than a fighter pilot, a tennis racket maker, a designer of surgical equipment, a piano manufacturer, was Stradivarius a legendary violinist? The best mixes are made by the best sound engineers/artists (typically using professional software), not by those who design/code that software. Isn't this obvious, do you really believe what you're saying or just making it up to justify your belief?

You've made it abundantly clear that you have a preference for the "unnatural, broken spatially" reproduction of HPs with crossfeed. I generally prefer the "unnatural, broken spatially" reproduction of HPs without crossfeed and better still, the "unnatural, broken spatially" reproduction of speakers in a good acoustic. So what all this comes down to is how each of us personally prefers their "unnatural, broken spatially" reproduction. What I object to is you coming out with all kinds of nonsense to try and contradict the basic facts and prove that your reproduction preference is in fact not "unnatural, broken spatially", simply because you're apparently unable to hear it, and then calling everyone who can hear it and/or does know the basic facts "spatially ignorant"! Please, enough already!

G


----------



## bigshot

> Yes it is!


No it isn't!


> You are wrong!


No I'm not!


> Yes it is!


No it isn't!


> You are wrong!


No I'm not!


> Yes it is!


No it isn't!


> You are wrong!


No I'm not!


----------



## pinnahertz

^^^^How is this helpful?


----------



## bigshot (Jan 6, 2018)

That's exactly what the recent posts look like to me. Go through and read just the replies without reading all the quotes. That's what most of us out here in forum land do. The amount of actual content has dropped precipitously. But if you're having fun, feel free to continue.


----------



## pinnahertz

Just trying to keep the discussion balanced, less polarized, and factual, as it has the tendency to escape the bounds of reality fairly often.

Yes, the usefulness of content has intersected the baseline and gone negative.

Fun? Well, as much fun as this is, I can't wait to take a break from this for a root canal I've been looking forward to...


----------



## 71 dB

pinnahertz said:


> No, you need to examine the LF pattern of a few cardioid mics.



You are right. Typically cardioids have about a -5 dB response from the side at low frequencies. That's not that excessive. Sorry about the confusion.


----------



## 71 dB

pinnahertz said:


> Again, incorrect. The image location of sound between speakers does not depend on ITD alone, or even primarily. If it did the ubiquitous "pan pot" wouldn't pan.



I mean narrow _compared_ to some other mic set ups. I wasn't clear enough. Sorry.


----------



## bigshot

pinnahertz said:


> Fun? Well as much fun as this is, I cant wait to take a break from this for a root canal I've been looking forward to...



If that doesn't work, you can always go over to the "I have some panels where should I put them?" thread and chat about absorption and diffusion. I've absorbed as much of that as I can take myself. I'm sitting out these dances until the uptempo song starts.


----------



## 71 dB

pinnahertz said:


> You have just defined anyone who doesn't share your opinion as "ignorant".
> 
> Again.
> 
> Your arrogance is offensive, even if someone partially agrees.



Yes, ignorant on one thing. I am ignorant about so many things I can't even count and I humbly admit it. 



pinnahertz said:


> Arrogance! To think that those that have a career in making commercial recordings are not as "enlightened" as you, a rank amateur!  Just incredible!



Please, don't be dramatic. People have their perspective and way of doing things, and it hardly ever covers all possible knowledge. I have put a lot of effort into spatiality, but that doesn't make me more "enlightened" than others, just "enlightened" on a different subject. Recording engineers have tons of knowledge and experience I don't have.


----------



## 71 dB

gregorio said:


> 1. Spatial distortion never disappears, unless you remove ALL the spatial information and that would sound pretty ridiculous. Maybe it just seems to disappear to you personally because you are unaware of it/spatially ignorant?
> 
> 2. That's clearly complete nonsense. You somehow don't seem to realise that being a sound engineer is an extremely competitive career or even that the two requisites which determine their success is the quality of their work and their efficiency. Neither of which they could achieve without excellent judgement of human hearing perception.
> 
> ...


1. Spatial distortion as I define it disappears when spatial information gets scaled to within human spatial hearing. The evolutionary process has shaped our hearing to decode the sounds coming from our environment. We hear these sounds around us in everyday life. Our spatial hearing expects a certain kind of correlation between the left and right ear. 

2. Competitive career for sure, but nobody knows/understands everything.

2a. No, not more than anyone else but "pretty well" as I said. 

2b. Both the mechanic and the driver are needed in F1 and the more they communicate with each other, the better. 

3. Clearly your definition of unnatural, broken spatiality differs from mine.
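The "excessive spatial information" both sides keep referring to can be pinned down with a number. Below is a rough Python sketch (my own illustration with toy values, not a model anyone in the thread proposed) of the broadband interaural level difference between two channels; hard panning produces ILDs far larger than a real source ever would at the ears:

```python
import numpy as np

def ild_db(left, right, eps=1e-12):
    """Broadband interaural level difference in dB between two channels.
    Positive means the left channel is louder. A crude single-number
    illustration, not a psychoacoustic model (real ILD is frequency-dependent)."""
    rms = lambda x: np.sqrt(np.mean(np.square(np.asarray(x, float))) + eps)
    return 20.0 * np.log10(rms(left) / rms(right))

# A source panned mostly to one side yields an ILD no natural head shadow produces:
sig = np.random.randn(44100)
print(round(ild_db(sig, 0.1 * sig), 1))  # prints 20.0 (dB)
```

Natural head shadowing at low frequencies produces only a few dB of ILD, which is why figures like the 20 dB above are the point of contention.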


----------



## pinnahertz

71 dB said:


> Yes, ignorant on one thing. I am ignorant about so many things I can't even count and I humbly admit it.


It's the "I'm spatially enlightened" and "nobody else is" bit that's a problem.  And yes, that IS what you're saying.



71 dB said:


> Please, don't be dramatic. People have their perspective and way of doing things and it hardly ever covers all possible knowledge. I have put a lot of effort on spatiality, but that doesn't make me more "enlightened" than others, just "enlightened" on a different subject.


You've repeatedly come across as "more enlightened than anyone else" on this subject, placing recording engineers globally below you on this subject.


71 dB said:


> Recording engineers have tons of knowledge and experience I don't have.


You think?  And yet you assume they have none of your knowledge?  You take your shots at stereo mic configurations, but have you ever recorded an orchestra? Then you apply your own metric and terminology to evaluate the entire body of recorded music as defective, requiring a corrective process, based on your own "enlightenment".   How hard would readers have to look to see humility in that?


----------



## gregorio (Jan 7, 2018)

71 dB said:


> 1. Spatial distortion as I define it disappears when spatial information gets scaled to within human spatial hearing. [1a] The evolutionary process has shaped our hearing to decode the sounds coming from our environment. We hear these sounds around us in everyday life. Our spatial hearing expects a certain kind of correlation between the left and right ear.
> 2a. No, not more than anyone else but "pretty well" as I said.
> 3. Clearly your definition of unnatural, broken spatiality differs from mine.



1. You define it that way because it appears TO YOU to disappear. However, that is an issue with your personal perception ability, of your "spatial ignorance" because it does NOT disappear, it's still ALWAYS there ...
1a. Yes, our hearing has evolved to decode the sounds of our environment and therefore expects natural acoustic information, including a certain kind of correlation between left and right ear. However, it virtually never gets that from ANY commercial music/audio! It never gets that under ANY listening conditions, including crossfeeding HPs, even crossfeeding HPs with the perfect HRTF or even listening in a theoretically perfect listening environment with theoretically perfect speakers!

2a. You did say, more than once, you are better/more enlightened than others, better even than those who do it for a living and not just better but so much better that those others are "ignorant" in comparison!

3. Yes, clearly! My definition is based on the fact that the acoustic information on commercial audio recordings is not natural, in the vast majority of cases it's not anywhere even vaguely close to natural and is therefore "broken spatially" relative to both what actually exists in our environment and to what our hearing has evolved to expect. Your definition seems to be based purely on your inability to discern this fact (with one type of reproduction). This of course makes you the one who is "spatially ignorant", not those of us who do not suffer from your inability!

We're going round in circles now, you've gone to considerable lengths to invent an explanation which completely avoids any consideration that your experience/preference is due to your personal inability/perception and for this reason you cannot conceive that any experience/preference which differs from yours could be based on anything other than ignorance. The problem is that many of the facts used in your explanation are clearly not facts, so going round in circles (ever smaller circles as your "facts" are discredited) is pretty much your only remaining option which doesn't require you to discard much of your "explanation". However, no one else here wants to see the thread going round in circles and you MUST STOP insulting others for an ignorance/inability which is yours, NOT ours!

G


----------



## 71 dB

pinnahertz said:


> It's the "I'm spatially enlightened" and "nobody else is" bit that's a problem.  And yes, that IS what you're saying.
> 
> You've repeatedly come across as "more enlightened than anyone else" on this subject, placing recording engineers globally below you on this subject.



I'd say anyone who recognizes and understands the benefits of crossfeed is spatially enlightened. 



pinnahertz said:


> You think?  And yet you assume they have none of your knowledge?  You take your shots at stereo mic configurations, but have you ever recorded an orchestra? Then you apply your own metric and terminology to evaluate the entire body of recorded music as defective, requiring a corrective process, based on your own "enlightenment".   How hard would readers have to look to see humility in that?



All I know is that it's impossible for anyone to know everything. I have not recorded orchestras. My work has been about other things; you know, there are other things to be done in the world besides recording orchestras. Everyone knows recordings are made for speakers, and that causes spatial problems with headphones. That's why some people, including me, use crossfeed. If that is arrogant then there's nothing I can do about it.


----------



## bigshot

71 dB said:


> I'd say anyone who recognizes and understands the benefits of crossfeed is spatially enlightened.



Then listening on speakers would have to qualify as spatial samadhi!


----------



## jgazal (Jan 7, 2018)

gregorio said:


> (...)
> 1a. Yes, our hearing has evolved to decode the sounds of our environment and therefore expects natural acoustic information, including a certain kind of correlation between left and right ear. However, it virtually never gets that from ANY commercial music/audio! *It never gets that under ANY listening conditions, including crossfeeding HPs, even crossfeeding HPs with the perfect HRTF or even listening in a theoretically perfect listening environment with theoretically perfect speakers!*
> (...)
> G



@gregorio

@RRod tried to ask @71 dB, in a subtle way, why he does not consider acoustic crosstalk in stereo playback a distortion.

@pinnahertz and you say that art is done given what the technology allows, and that real sound sources don’t need to be a reference and that, in fact, they must not be the reference.

I feel okay if you don’t want to have any real reference.

But I am trying to understand what you mean with your sentence in bold.

Do you mean that currently there is no technology to achieve a listening environment with natural acoustic information?

I am fine with that interpretation of your sentence.

Or do you mean that is theoretically impossible to create such listening environment and that it will never exist?

I don’t feel comfortable with this second interpretation of your sentence.

Another assertion in your sentence I don’t feel comfortable with is crossfeeding headphones when you can convolve with a “perfect” HRTF.

If you convolve with a personal binaural room impulse response, it makes sense to crossfeed the signals for headphone playback if you want to emulate the measured speakers in the measured room; that is ordinary acoustic crosstalk. That’s really important for recording and mastering engineers, so they can mix their work with headphones as if they were exactly in their mastering rooms. Engineers have been doing that with the Realiser A8 for years.

But if you have a high-density HRTF (either measured in an anechoic chamber or computationally generated from anthropometric features), why would you want to add crossfeed if you don’t have a mastering or listening room to emulate at all?

Why not let the mastering engineer decide whether to add crosstalk to the content, if he really wants to synthesize two virtual speakers the way they sound under the current standard?

The following research is from 2015:



> *Applications of 3-Dimensional Spherical Transforms to Acoustics and Personalization of Head-related Transfer Functions (HRTFs) *
> *Date*
> 
> September 29, 2015
> ...




Creative Labs and Lee Teck Chee are claiming that they are comparing anthropometric data and HRTF in large databases with the buzzword technology of the year (neural networks - AI).

As a layman, I really don’t know who to believe...

P.s.: Politis says that HRTFs vary in the near field, and I don’t see HRTF measurements at different distances (near, mid and far fields). My gut feeling is that this makes binaural synthesis an easier way to approximately (not precisely) convey proximity.
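The speaker-virtualization idea raised here (convolving virtual speakers with an HRTF) can be sketched in a few lines. The 4-tap impulse responses below are made-up placeholders (real HRIRs are measured and hundreds of taps long); the point is only to show where the ordinary acoustic crosstalk of speaker listening enters the math:

```python
import numpy as np

def virtualize_speakers(left, right, hrirs):
    """Sketch of speaker virtualization for headphones: each virtual speaker's
    signal is convolved with the impulse-response pair for that speaker's
    position, and the per-ear results are summed."""
    n = len(left) + len(hrirs["L"][0]) - 1   # 'full' convolution length
    out_l, out_r = np.zeros(n), np.zeros(n)
    for sig, spk in ((left, "L"), (right, "R")):
        out_l += np.convolve(sig, hrirs[spk][0])  # path to left ear
        out_r += np.convolve(sig, hrirs[spk][1])  # path to right ear
    return out_l, out_r

# Toy 4-tap "HRIRs" for a +/-30 degree speaker pair: the far ear receives a
# delayed, attenuated copy -- this is the acoustic crosstalk of stereo listening.
hrirs = {
    "L": (np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.5, 0.1])),
    "R": (np.array([0.0, 0.0, 0.5, 0.1]), np.array([1.0, 0.0, 0.0, 0.0])),
}
```

With real measured HRIR pairs per virtual speaker (and, for a Realiser-style emulation, room reflections baked into them), the same summation is what delivers the crosstalk that plain headphone playback lacks.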


----------



## bigshot (Jan 7, 2018)

It would be possible to capture a very realistic sound field and reproduce it. It would take a very specific kind of miking, and a custom speaker array that is precisely matched to it. It would be basically a "capture only" system. You couldn't edit or overdub or balance levels. The result would be realistic, but not very exciting. We hear realistic sound every moment of our lives. Recorded music is intended to be *better* than real... more organized, more balanced, more clear, more interesting sounding.

The problem isn't that realism is unattainable. It's that pursuing realism is a waste of a great deal of materials and effort for minimal returns. The first law of being an artist is to know how to use your medium to its strengths. Recording is no different.


----------



## jgazal (Jan 8, 2018)

bigshot said:


> It would be possible to capture a very realistic sound field and reproduce it. It would take a very specific kind of miking, and a custom speaker array that is precisely matched to it. It would be basically a "capture only" system. You couldn't edit or overdub or balance levels. The result would be realistic, but not very exciting. We hear realistic sound every moment of our lives. Recorded music is intended to be *better* than real... more organized, more balanced, more clear, more interesting sounding.
> 
> The problem isn't that realism is unattainable. It's that pursuing realism is a waste of a great deal of materials and effort for minimal returns. The first law of being an artist is to know how to use your medium to its strengths. Recording is no different.



After struggling with this idea, I concede.

I just mean that such a feeling may prevent engineers from acquiring familiarity with the new mixing and listening environments, which perhaps may also allow them to achieve better-than-real results.


----------



## bigshot

Engineers are really good at assembling a "bag of tricks"... techniques that work best in different situations. I'm sure most of them incorporate all kinds of ideas from immersive sound into their mixes. It's just that doing it strictly that way is impractical and limiting creatively.


----------



## gregorio

bigshot said:


> It would be possible to capture a very realistic sound field and reproduce it. It would take a very specific kind of miking, and a custom speaker array that is precisely matched to it. It would be basically a "capture only" system. You couldn't edit or overdub or balance levels. *The result would be realistic, but not very exciting. We hear realistic sound every moment of our lives.* Recorded music is intended to be *better* than real... more organized, more balanced, more clear, more interesting sounding.
> The problem isn't that realism is unattainable. It's that pursuing realism is a waste of a great deal of materials and effort for minimal returns. The first law of being an artist is to know how to use your medium to its strengths. Recording is no different.



I agree that the majority of the time we are trying to make it "better than real". However, there are some music genres where this isn't the case, effectively where we're trying to make it better so that it does sound real. Quite often in audiophile discussions the topic is brought around to the comparison of a live acoustic performance, such as orchestral music, with a recorded equivalent. The problem here is quite different to the "better than [and not even directly concerned with] real" which is the case with the non-acoustic genres. In the case of acoustic genres such as orchestral, I would re-word the part I've highlighted in bold to: "The result would often not appear to be entirely realistic or very exciting, because what we hear at an orchestral concert is not real in the first place!"

What actually enters our ears and what we perceive are two different things. Our brain will filter/reduce what it thinks is irrelevant, such as the constant noise floor of the audience for example, and increase the level of what it thinks is most important, such as what we are looking at (the instrument/s with the solo line for example). This isn't "real" at all, although of course it feels entirely real. Clearly, even with a theoretically perfect capture system, all we're going to record is the real sound waves, but when reproduced, the brain is generally not going to perceive those sound waves as it would in the live performance, because the visual cues and other biases which informed that perception are entirely different. So the trend over the decades has been to create an orchestral music product which sounds realistic relative to human perception, rather than just accurately capture the sound waves which would enter one's ears. To achieve this we use elaborate mic'ing setups which allow us to alter the relative levels of various parts of the orchestra in mixing (as our perception would in the live performance). 
However, a consequence of this is messed-up timing, as sound wave arrival times are going to vary between all the different mics (which are necessarily in significantly different positions). This is an unavoidable trade-off, we're always going to get messed-up spatial information but with careful adjustment during mixing we can hopefully end up with a mix which is not perceived to be too spatially messed-up (even though it still is). This "careful adjustment" is done mainly on speakers but is typically checked on HPs and further adjustments may be made if the illusion/perception of not being spatially messed-up is considered to be too negatively affected by HP presentation. This brings me back to what I stated previously, that pretty much whatever we listen to and however we're listening to it (speakers, HPs, HPs with crossfeed, etc.) we've always got messed-up timing, "spatial distortion" or whatever else you want to call it.

PS. I know you're probably aware of all this already bigshot.



71 dB said:


> I'd say anyone who recognizes and understands the benefits of crossfeed is spatially enlightened.



Following on from what I've just stated: In the case of acoustic genres such as orchestral music, where an illusion/perception of reality is a serious concern then whether or not crossfeeding is beneficial will depend on these variables: The various mic placements used to make the recording in the first place, the "careful adjustments" made during mixing, the further adjustments made when checking the mix on HPs and your personal perception/preference. From all this we can make certain statements/deductions:

1. One thing is certain, crossfeeding cannot and is NOT correcting/fixing "spatial distortion", it's there, baked into the recording and cannot be un-baked! All we're talking about therefore is just different presentations of that spatial distortion, not about a type of presentation which doesn't have spatial distortion.
2. While one may have a personal preference for crossfeed, it is likely to be contrary to the intent of the engineers/artists and to "fidelity", assuming the mix has been checked/adjusted on HPs. Unless of course that checking/adjusting was done with crossfeed but that would be exceptionally rare.
3. Anyone who believes/perceives that spatial distortion ceases to exist, disappears or is fixed by crossfeed is deluded and by definition NOT spatially enlightened but the exact opposite! Now, as it's all based on illusion/delusion in the first place, with any type of presentation, that's not as outrageously insulting as it appears. Nevertheless, in direct response to the quote: If you are ONLY able to "recognise and understand the benefits of crossfeed" but not able to recognise or understand its disadvantages, and not able to recognise and understand that you've still got spatial distortion, then you are CLEARLY NOT "spatially enlightened", you are (and must be) DELUDED!!! So again, enough with the "I'm spatially enlightened" BS, you're not, you're actually spatially deluded but just don't (and/or won't) realise it!

G


----------



## bigshot

The nice thing about messed-up spatial timing info in a mix is that when you play it back on speakers in your home, the sound of your own room gets overlaid, creating a layer of unified spatial timing that can mask some of the confusion inherent in the mix. That's an element that you can't predict in the studio, but it can go a good distance toward making a mix sound even better.


----------



## 71 dB

gregorio said:


> Following on from what I've just stated: In the case of acoustic genres such as orchestral music, where an illusion/perception of reality is a serious concern then whether or not crossfeeding is beneficial will depend on these variables: The various mic placements used to make the recording in the first place, the "careful adjustments" made during mixing, the further adjustments made when checking the mix on HPs and your personal perception/preference. From all this we can make certain statements/deductions:
> 
> 1. One thing is certain, crossfeeding cannot and is NOT correcting/fixing "spatial distortion", it's there, baked into the recording and cannot be un-baked! All we're talking about therefore is just different presentations of that spatial distortion, not about a type of presentation which doesn't have spatial distortion.
> 2. While one may have a personal preference for crossfeed, it is likely to be contrary to the intent of the engineers/artists and to "fidelity", assuming the mix has been checked/adjusted on HPs. Unless of course that checking/adjusting was done with crossfeed but that would be exceptionally rare.
> ...


1. If spatial distortion is defined as excessive ILD/ITD information, then crossfeed is able to reduce/remove spatial distortion. If you reduce ILD to 3-5 dB at low frequencies and to about 10 dB at 1 kHz, so that you have ILD that makes sense to human spatial hearing, you don't have spatial distortion.

If spatial distortion is defined as also containing other spatial aspects, then depending on what those aspects are, crossfeed may be unable to address them. For me, excessive ILD/ITD is THE problem of headphone listening, and fixing that problem is what I am after with crossfeed. The other aspects, whatever they are, do not matter because they don't ruin my listening enjoyment.

2. Nowadays mixes are probably "checked" * on HPs, but hardly in the 70's, when people hard-panned for speakers and that's it. Even today some mild crossfeed is beneficial to clean up the sound, because speakers are favored. Why should we have 100% trust in the intent of all sound engineers? Are they gods or normal human beings? Engineers may have tremendous knowledge and skills in many areas, but my claim is that knowing how to mix for headphones isn't their strongest virtue in general.

* What is checked isn't so much spatiality, but for example the sonic balance between various instruments.

3. I define spatial distortion so that crossfeed removes it. In my books, spatial distortion is spatial confusion created by the brain due to excessive spatial information. To me it is not the difference between the intent in the studio and what I hear. I don't know what they intended. I only know what I hear, and my brain evaluates how much it makes sense. If ILD at 200 Hz is 2 dB it makes sense; if it is 12 dB then it doesn't make sense, no matter what the engineers intended in the studio. If it doesn't make sense I don't enjoy listening, and I either stop listening or activate crossfeed to make it make sense.

Audio technology isn't even close to being able to let us hear exactly what the engineers intended in the studio. We'd need VERY accurate soundfield synthesis around the listener's head. We don't have that. What we have are the means to reproduce recordings to a certain level of accuracy, allowing a very enjoyable listening experience.

If you think listening without crossfeed is somehow what was intended, then which headphone model gives the correct version? All headphones render the spatiality differently.

I think I know what aspects of sound we can address technologically and which aspects are relevant, and you call me deluded. Life sucks and then you die. With crossfeed it sucks less. That's why I use it.
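For reference, a minimal crossfeed of the kind being debated (a delayed, lowpass-filtered, attenuated copy of the opposite channel mixed into each side, as described for the Android DSP plugin earlier in the thread) can be sketched as follows. The delay, cutoff, and attenuation values are illustrative assumptions, not any particular plugin's settings:

```python
import numpy as np

def crossfeed(left, right, sr=44100, delay_ms=0.5, atten_db=12.0, cutoff_hz=700.0):
    """Minimal crossfeed sketch: mix a delayed, lowpass-filtered, attenuated
    copy of the opposite channel into each channel. Parameter values are
    illustrative, not a specific product's settings."""
    delay = int(sr * delay_ms / 1000.0)   # ~0.5 ms, roughly an interaural delay
    gain = 10.0 ** (-atten_db / 20.0)     # 12 dB down ~= 0.25 linear

    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)  # one-pole lowpass coefficient

    def lowpass(x):
        # y[n] = (1 - a) * x[n] + a * y[n-1]  (mimics head shadowing)
        y, acc = np.empty_like(x), 0.0
        for i, s in enumerate(x):
            acc = (1.0 - a) * s + a * acc
            y[i] = acc
        return y

    def delayed(x):
        return np.concatenate([np.zeros(delay), x[:-delay]]) if delay else x

    out_l = left + gain * delayed(lowpass(right))
    out_r = right + gain * delayed(lowpass(left))
    return out_l, out_r
```

Run on a hard-panned track, the cross-fed copy pulls large low-frequency ILDs down toward values a head could actually produce; that, and nothing more, is what a filter like this does.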


----------



## bigshot

71 dB said:


> I define spatial distortion so that crossfeed removes it.



I would like to define the word "debt" so that my bank account could remove it!


----------



## 71 dB

bigshot said:


> I would like to define the word "debt" so that my bank account could remove it!



Let me guess: you are in debt because of your "better than headphones" loudspeakers? 

Headphone listening suffers from the problem of excessive spatial information. To solve this problem we need to scale the spatial information so that it is not excessive. Crossfeed does that. So there's a problem, and there is a solution for it. I just call this problem "spatial distortion", and that's why crossfeed is conveniently the solution to it. I also call the stuff that takes my hunger away after swallowing it "food". The thing that removes your debt is called "gigantic luck in the lottery/stock market."


----------



## gregorio (Jan 11, 2018)

71 dB said:


> If spatial distortion is defined as excessive ILD / ITD information then crossfeed is able to reduce/remove spatial distortion.
> For me excessive ILD / ITD is THE problem of headphone listening and fixing that problem is what I am after with crossfeed. The other aspects whatever they are do not matter because they don't ruin *my listening enjoyment.*



Thank you for so excellently proving my point! As you've clearly stated, it's all about your listening enjoyment, your perception! What "ruins" YOUR PERCEPTION and what doesn't. ... If we do indeed define spatial distortion as excessive timing delay, then there are massive time delays between all the 30+ mics used to record an orchestra, and crossfeed cannot possibly remove/reduce this without knowing:

How many mics were used in the mix? What was the distance between them all? And, what timing adjustments to all those mic inputs have the engineers already applied?

You do not know the answer to ANY of these questions, your crossfeed algorithm doesn't know the answer to ANY of these questions, and even if it knew the answers to all of them, it still can't deconstruct the recording and apply a correction/fix to all those mic inputs. So no, it's complete nonsense to state that crossfeed is able to reduce/remove the spatial distortion! In reality, your crossfeed isn't even attempting to address the excessive timing delays/spatial distortion, it's instead just trying to address the tiny timing delay between your ears!

What you've done is create your own definition of "spatial distortion" (which is what bigshot was referring to), a definition based SOLELY on YOUR PERSONAL PERCEPTION. Your definition appears to be that "Spatial distortion" is what "ruins" YOUR PERCEPTION and when YOUR PERCEPTION is not "ruined" there's no spatial distortion. In reality, as clearly explained, there is always tons of spatial distortion but it doesn't seem to "ruin" your "listening enjoyment" because you can't hear (are SPATIALLY IGNORANT of) it! Now, that's fine, that's just your inability to hear spatial distortion and therefore your preference. I've no objection to your preference, I just don't share it (because I can hear the spatial distortion). But I'm getting sick and tired of you making up nonsense facts and definitions, making up nonsense assertions about engineers, when you obviously don't even know the first thing about engineering and insulting everyone who doesn't share your spatial ignorance and preference!

G


----------



## bigshot

WELL! Isn't THAT spatial!


----------



## pinnahertz

71 dB said:


> Headphone listening suffers from the problem of excessive spatial information.


The qualification and evaluation of this as a "problem" is highly subjective, anything but absolute.


71 dB said:


> To solve this problem we need to scale the spatial information so that it is not excessive. Crossfeed does that. So, there's a problem and there is a solution for it.


Cross-feed is not _*the*_ solution.  It's a mitigation tool at best, the application of which is not universally accepted (example: your 98% preference, my preference is almost the inverse). 


71 dB said:


> I just call this problem "spatial distortion" and that's why crossfeed is conveniently the solution to it.


Your definition is not universally accepted, cross-feed is not "the solution", and not universally accepted either.


71 dB said:


> I also call the stuff that takes my hunger away after swallowing it "food". The thing that removes your debt is called "gigantic luck in the lottery/stock market."


You didn't make up "food"; it's a universally accepted, proven solution to the "problem" of hunger, and an essential element in the prevention of death.  Its benefits have been proven.

There are other far more acceptable and effective solutions to "debt", but the concept of debt is also not made up by you, and is universally accepted.


----------



## 71 dB

gregorio said:


> Thank you for so excellently proving my point! As you've clearly stated, it's all about your listening enjoyment, your perception! What "ruins" YOUR PERCEPTION and what doesn't. ... If we do indeed define spatial distortion as excessive timing delay, then there are massive time delays between all the 30+ mics used to record an orchestra, and crossfeed cannot possibly remove/reduce this without knowing:
> 
> How many mics were used in the mix? What was the distance between them all? And, what timing adjustments to all those mic inputs have the engineers already applied?
> 
> ...


How does a pair of speakers know how to play 30+ mics? How do acoustic crossfeed and room acoustics know? They don't. So why is not knowing only a problem when it's crossfeed?


----------



## jgazal (Jan 11, 2018)

gregorio said:


> I agree that the majority of the time we are trying to make it "better than real".
> 
> However, there are some music genres where this isn't the case, effectively where we're trying to make it better so that it does sound real.
> 
> ...



Thought experiment:

Imagine that you record an orchestra with an Eigenmike (32 capsules) placed at row A, seat 2, and that you convolve the highest possible number of virtual speakers with a high-density HRTF. At row A, seat 3, there is a listener who was born blind. At row A, seat 1, there is a viewer with normal eyesight. Finally, at row B, seat 2, you have a listener who recently acquired blindness. Full audience.

Questions:

Are you saying that the viewer with normal eyesight would only perceive, with headphone playback, a soundfield identical to the one he/she heard live if, and only if, he/she uses a “perfect” virtual reality headset displaying images from where he/she was seated?

Are you saying that blind listeners cannot precisely locate sounds at the live event, for instance, identify where the soloist is playing?

Are you saying that only blind listeners would perceive, with headphone playback, a soundfield identical to the one they heard live?

Are you saying that accuracy in locating sounds (at least in the horizontal plane) differs between a blind listener and a blindfolded viewer who has normal eyesight?

Are you saying that blind listeners are not capable of sound selective attention (cocktail party effect)?

Do you think that the born blind listener and the listener that recently acquired blindness will achieve different sound location accuracy?

I agree that vision can in some circumstances override sound cues. I also agree that vision is normally the sense that allows you to train your brain to locate sound sources with your ears, and that you can retrain your brain if your vision does not match your sound cues.

But I don’t know if that is the only route to create a _virtual_ soundfield map in your brain (or maybe is it a _neural network physical simulacrum_ of a soundfield map?).

Someone who was born blind can walk to his mother when she calls “my angel”. Some are capable of echolocation. Some play blind soccer.

But I don’t know if all psychoacoustics processing phenomena are caused by visual and sound cues ambiguities.

Are you sure you can claim *that*?


----------



## pinnahertz

71 dB said:


> How does a pair of speakers know how to play 30+ mics? How do acoustic crossfeed and room acoustics know? They don't. So why is not knowing only a problem when it's crossfeed?


Are you seriously asking this? 

OK...I'll break it down simply:

1. Mixes are performed on speakers in rooms.
2. The engineer responds to what he hears by using the tools he has. His judgements are based on listening to speakers in a room.
3. There is no such thing as "acoustic cross-feed".  You're making that one up.
4. The engineer hears both speakers in his room with both ears, and therefore knows what he hears, responds, and mixes accordingly.  The monitoring environment is _known_ by virtue of the fact that he's in it and using it to make mix decisions...yeah, of all 30 (or whatever) mics.
5. While the rooms used for mixing are mostly better acoustically than the typical home listening room, they are not all that dissimilar either.  
6. Today's mixes are checked on headphones.  If it's completely wrong, there will be a change.  If it's acceptable, there won't be.  Headphones comprise a very significant portion of the total listening environments.

On the other hand, when you apply headphone cross-feed you:

1. Have no idea what the intentions were when the recording was made
2. Have no idea how sources were mixed and placed, how many there were, or what the ITD or ILD is.
3. There are so many different ITDs and ILDs in use that there's no single set of ITD/ILD figures to work with
4. You are applying highly generalized cross-feed _according to personal taste_. That's not compensation or correction at all. _It's preference._

And on the _other_ hand, you have different fingers.


----------



## pinnahertz

jgazal said:


> Thought experiment:
> 
> Imagine that you record an orchestra with an eigenmike (32 capsules) placed at row A, seat 2, and that you convolve the highest possible number of virtual speakers with a high-density HRTF set. At row A, seat 3, there is a listener who was born blind. At row A, seat 1, there is a viewer with normal eyesight. Finally, at row B, seat 2, there is a listener who recently became blind. Full audience.
> 
> ...


Perhaps we should be asking you the question: "Is gregorio actually saying any of that?"  Or do _you_ need a lesson in reading comprehension? 

Please don't dumb down the discussion by putting words in people's mouths that they've never said or implied. I know it's tempting because the thread is so dumb already, but resist...resist...resist.


----------



## 71 dB

pinnahertz said:


> The qualification and evaluation of this as a "problem" is highly subjective, anything but absolute.
> Cross-feed is not _*the*_ solution.  It's a mitigation tool at best, the application of which is not universally accepted (example: your 98% preference, my preference is almost the inverse).
> Your definition is not universally accepted, cross-feed is not "the solution", and not universally accepted either. You didn't make up "food", it's a universally accepted, proven solution to the "problem" of hunger, and an essential element in the prevention of death. Its benefits have been proven. There are other far more acceptable and effective solutions to "debt", but the concept of debt is also not made up by you, and is universally accepted. Yada Yada Yada...



Your resistance to my posts is tiring. I happen to suffer from low self-esteem, and 90% of the time I read your responses I feel insecure about myself. That's why I don't fight back as much as I should, but now I feel it's time. Just because you have worked 188 years with Elton John doesn't mean you know/understand everything better than I do. I have studied these things quite a lot as a hobby. You, on the other hand, demonstrate a lack of understanding of spatial hearing every now and then, for example your disbelief that reducing channel separation can make the sound image wider. Any hard-panned ping-pong album from hell is an "artistic statement" to you, not to be questioned or corrected with crossfeed.

For me it doesn't matter how "universally" something is accepted. The masses are wrong all the time, and the spatiality of headphone listening is badly handled in the audio community, a totally ignored field that only a handful of people tackle, and we who try to tackle it face people like you.

Excessive stereo separation is overwhelmingly the biggest problem in headphone listening and crossfeed does fix it. Think about your opinions for a minute, man. You refute my logically and scientifically sound claims just because you learned to listen to your music hard-panned in the 70's? Is that intellectually honest? I learned to listen to my music with headphones without crossfeed too. I listened without crossfeed for years because I was spatially ignorant, but then it suddenly occurred to me that it's wrong! It's crazy! It means excessive stereo separation. I found crossfeed and learned to listen to headphones the right way, and it revolutionized my music listening habits and my enjoyment of music. All it took was my education, tendency to question things and open mind. The only weird thing is that it took me so long to realize the existence of spatial distortion.


----------



## pinnahertz

71 dB said:


> Your resistance to my posts is tiring.


Apparently, not quite tiring enough.


71 dB said:


> I happen to suffer from low self-esteem, and 90% of the time I read your responses I feel insecure about myself. That's why I don't fight back as much as I should, but now I feel it's time. Just because you have worked 188 years with Elton John doesn't mean you know/understand everything better than I do. I have studied these things quite a lot as a hobby. You, on the other hand, demonstrate a lack of understanding of spatial hearing every now and then, for example your disbelief that reducing channel separation can make the sound image wider. Any hard-panned ping-pong album from hell is an "artistic statement" to you, not to be questioned or corrected with crossfeed.


My point is, and has always been, it's not a question of if it is artistic or not, the point is *you don't know, but have taken it upon yourself to decide for the world.*


71 dB said:


> For me it doesn't matter how "universally" something is accepted. The masses are wrong all the time, and the spatiality of headphone listening is badly handled in the audio community, a totally ignored field that only a handful of people tackle, and we who try to tackle it face people like you.


I give you push-back for one reason: _*you leave absolutely no room for preference, your decisions and determinations are final, anyone else is wrong.*_  Get it?


71 dB said:


> Excessive stereo separation is overwhelmingly the biggest problem in headphone listening and crossfeed does fix it.


With all due respect (which has frankly diminished a tad), I disagree. Cross-feed doesn't "fix" it; it mitigates it sometimes, not so much at other times. The overwhelmingly biggest problem with headphone listening is erratic frequency response.


71 dB said:


> Think about your opinions for a minute, man. You refute my logically and scientifically sound claims just because you learned to listen to your music hard-panned in the 70's? Is that intellectually honest?


That's just your opinion.  I hardly listen to any of that music much now, and yet I still can't find many applications for cross-feed.  That's my opinion.


71 dB said:


> I learned to listen to my music with headphones without crossfeed too. I listened without crossfeed for years because I was spatially ignorant, but then it suddenly occurred to me that it's wrong! It's crazy! It means excessive stereo separation. I found crossfeed and learned to listen to headphones the right way, and it revolutionized my music listening habits and my enjoyment of music. All it took was my education, tendency to question things and open mind. The only weird thing is that it took me so long to realize the existence of spatial distortion.


So, then, why is it that I've spent literally days attempting to listen to cross-feed of various types and intensities, on various recordings, and yet I just don't respond the same way you do?  We both have the reference of listening to speakers, and listening to the real world around us.  I find cross-feed almost universally flattens the dimensional life out of recordings, makes them less involving, less immersive, less fun. 

You gave it a shot and like it.  I gave it a shot and mostly, with a few exceptions, don't like it.  I don't force my opinions on anyone as fact.  I DO counter your radical posts of opinion-as-fact because I feel the free world should be offered an opportunity to decide for themselves what's right in a situation where there is no overwhelming support for either side.  I counter you so we can achieve balance and fairness.  If I didn't, we might have a whole lot of readers who try cross-feed because of reading this thread, and both of them would wonder why they remain so spatially ignorant because they don't like it.  How is that helping?


----------



## jgazal (Jan 11, 2018)

pinnahertz said:


> Perhaps we should be asking you the question: "Is gregorio actually saying any of that?"  Or do _you_ need a lesson in reading comprehension?
> 
> Please don't dumb down the discussion by putting words in people's mouths that they've never said or implied. I know it's tempting because the thread is so dumb already, but resist...resist...resist.



This is what gregorio wrote:



gregorio said:


> (...)
> 2b. No, I am not saying acoustic virtual reality is a myth! I'm not sure where you've got that from? I am saying that because with popular music genres there is no "reality" to start with, then logically it's obviously impossible to emulate a reality which never existed. So, we cannot have a virtual reality of popular music, although we could in theory have a sort of "virtual non-reality" or "virtual surreality" but it's not clear how we could achieve even that in practice without musical compromises and avoiding it being no more than just a cheesy gimmick (as with some early stereo popular music mixes).
> 
> 3. To be honest, your questions, conclusions and statements indicate that you have relatively little understanding of our work. We do not "add value" ... putting a chassis, wheels and suspension on a car does not "add value" to a car because without a chassis, wheels and suspension you don't have a car in the first place, just an incomplete pile of car parts! Engineering is an intrinsic part of the creation of all popular music genres, not an added value. For example ...
> ...



So when I wrote about crosstalk cancellation filters, beamforming phased arrays of transducers and headphone externalization, I wrote something that might theoretically occur, and I was wrong.

But now he writes about acoustic music genres and the problem is mainly in *visual cues*?

So tell me, which is the worse problem: “_*visual cues and other biases” “in the live performance*_” or acoustic crosstalk in playback?

I am not trying to put words in his mouth. Sometimes an absurd argument is useful to express a mild idea.

What I am trying to say, *respectfully*, is that mixing without carefully considering ITD is a potential problem.

You say no because stereo acoustic crosstalk with speakers is ubiquitous, it happens in any “loudspeakers in a room” listening environment.

Fine, but you don’t have to rage at what I wrote.

Did I really dumb down the discussion? 

I will refrain from posting at all, then.

I have reading comprehension issues. And I am delusional.


----------



## pinnahertz

jgazal said:


> This is what gregorio wrote:
> 
> 
> 
> ...


Somehow you missed his point and instead focused on the one he correctly made regarding visual reinforcement of spatial hearing, then took it out of his balanced context over to the ridiculous. Those "Are you saying..." questions were way out of context.

I'm not raging, I'm asking you not to blow things out of proportion or take minor points out of context. This is a challenged thread that needs no more confusion or interference.

I'm not saying you should refrain from posting. That's also a polar extreme. Just keep it real.


----------



## rule42

I'm probably one of the 2 people @pinnahertz just mentioned who have read this thread, and I have been playing around with crossfeed for the last couple of days. Not a long time for sure, but here's my 2 cents' worth. Maybe it's because thus far I've mainly listened to tracks I'm very familiar with (and hence they sound a bit different/weird), but in general I'll say I'm not at all sold on it and conclude I'm either spatially dumb or, more likely, that my preference is not to crossfeed. I'll keep playing...
I've seen a lot of comments about people hearing swarms of bees and the like without crossfeed - I certainly don't get that! I like my cans (but prefer my speakers when I can use them). 
Also, could someone explain to me what this 'fatigue' is, as it seems to crop up a lot and is not in the site's glossary of terms. I have visions of people falling over with exhaustion when they've got cans on, whereas for me it always seems to be a fun and uplifting experience. Maybe I'm too involved with enjoying the music to notice.


----------



## ironmine

rule42 said:


> I'm probably one of the 2 people @pinnahertz just mentioned who have read this thread, and I have been playing around with crossfeed for the last couple of days. Not a long time for sure, but here's my 2 cents' worth. Maybe it's because thus far I've mainly listened to tracks I'm very familiar with (and hence they sound a bit different/weird), but in general I'll say I'm not at all sold on it and conclude I'm either spatially dumb or, more likely, that my preference is not to crossfeed. I'll keep playing...
> I've seen a lot of comments about people hearing swarms of bees and the like without crossfeed - I certainly don't get that! I like my cans (but prefer my speakers when I can use them).
> Also, could someone explain to me what this 'fatigue' is, as it seems to crop up a lot and is not in the site's glossary of terms. I have visions of people falling over with exhaustion when they've got cans on, whereas for me it always seems to be a fun and uplifting experience. Maybe I'm too involved with enjoying the music to notice.



It means that you've got used to shitty sound already.

You've been living in the audio equivalent of North Korea for too long and adapted to it (even started to enjoy it).

Take my advice, my friend: listen to your music with crossfeed for 2 weeks.

Don't compare crossfeed vs. no crossfeed. 

Don't play with crossfeed settings, don't compare different settings / different plugins, etc.

Just listen to your music through 112dB Redline Monitor (or any other decent crossfeed plugin) with default settings for at least 2 weeks.

Then try coming back to no-crossfeed crappy sound again. You won't be able to.


----------



## bigshot (Jan 11, 2018)

Fatigue generally means one of two things... 1) narrow frequency spikes, especially in the upper mids, that are not terribly audible but can make your ears hurt; and 2) a general way to describe anything you don't think you like (i.e.: bias).

I use DSPs all the time, but there are no DSPs that I would say work for everything. It's salt and pepper to taste.

ironmine, how do you know whether the sound you are used to is crappy, when anything different from what you're used to sounds crappy to you? That just means you can get used to anything and not know which is better.


----------



## rule42

@ironmine. Like I said, I'll keep playing with crossfeed, though TBH I doubt that I'll grant exclusivity for a 2-week trial. Having read quite a bit of the science section now, I'm sure after that length of time my brain could get used to almost anything. Could you or anyone else perhaps recommend a track or 2, or an album, that in your opinion brings out the best in crossfeed?


----------



## bigshot

Early Beatles with the harsh left/right stereo? (I prefer the mono mixes on that though.)


----------



## pinnahertz

bigshot said:


> Early Beatles with the harsh left/right stereo? (I prefer the mono mixes on that though.)


It's my understanding that those were never meant to be "stereo", but were released because there was no alternative actual stereo mix.


----------



## bigshot

Yeah. Four tracks thrown to whatever channel they felt like.


----------



## pinnahertz

bigshot said:


> Yeah. Four tracks thrown to whatever channel they felt like.


Some of their early stuff was before multi-track of any kind. They had 2-track machines, which they considered multi-track, and would record vocals on one track and the band on the other for later mixing to mono. Any additional layers were dubs mixed with new material, "reduction mixing". The stereo releases were an afterthought, so a lot of that 2-track stuff got out as stereo. Even on Sgt. Pepper (they only used 4-track machines and reduction mixing again), where they supposedly spent 3 weeks on the mono mix and then only 3 days on the stereo, it was clearly the afterthought.

Yeah, cross-feed works for that stuff.  Mono works even better, since that was really what the original mix was.


----------



## 71 dB

pinnahertz said:


> Are you seriously asking this?
> 
> OK...I'll break it down simply:
> 
> ...


My question was half-rhetorical.

1. Yep.
2. Yep, but the sound is different in his/her studio compared to your listening room.
3. What do you call the fact that your left ear hears the sound from the right speaker? I call it acoustic crossfeed.
4. Exactly! He/she mixes in a situation where acoustic crossfeed is happening! Not in a situation where the left ear doesn't hear the right channel!
5. Some people listen to their music in "reverberation chambers" compared to studios, but not all of course.
6. Totally agreed. Statistically, modern productions need much less crossfeed than older ones. Bass is often mixed mono, but between 200 and 1000 Hz the ILD tends to be just a bit too high. Weak crossfeed fixes that.
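As a concrete illustration of what a "weak crossfeed" of this kind does, here is a minimal sketch in the spirit of the bs2b-style description quoted earlier in the thread: a delayed, low-pass-filtered, attenuated copy of the opposite channel is added to each channel. The parameter defaults (0.5 ms delay, 12 dB attenuation, a ~700 Hz one-pole low-pass) are illustrative assumptions for this sketch, not a reference implementation of any particular plugin.

```python
import math

def crossfeed(left, right, fs=44100, delay_ms=0.5, atten_db=12.0, cutoff_hz=700.0):
    """Minimal crossfeed sketch: each output channel is the direct signal
    plus a delayed, low-pass-filtered, attenuated copy of the opposite
    channel. Parameter defaults are illustrative only."""
    d = int(round(fs * delay_ms / 1000.0))         # delay in whole samples
    g = 10.0 ** (-atten_db / 20.0)                 # attenuation as linear gain
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)  # one-pole low-pass coefficient

    def cross_path(opposite):
        y, state = [], 0.0
        for x in opposite:
            state = (1.0 - a) * x + a * state      # one-pole low-pass
            y.append(state)
        if d:
            y = [0.0] * d + y[:-d]                 # integer-sample delay
        return y

    cl, cr = cross_path(right), cross_path(left)
    out_left = [x + g * c for x, c in zip(left, cl)]
    out_right = [x + g * c for x, c in zip(right, cr)]
    return out_left, out_right
```

Feeding it a hard-panned left impulse shows the mechanism: the left output keeps the direct click, while the right output receives a quieter, darker copy about 22 samples (0.5 ms at 44.1 kHz) later, which is what reduces the ILD in the midrange.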



pinnahertz said:


> On the other hand, when you apply headphone cross-feed you:
> 
> 1. Have no idea what the intentions were when the recording was made
> 2. Have no idea how sources were mixed and placed, how many there were, or what the ITD or ILD is.
> ...



1. So I assume the intention was to have it sound good on speakers. Using crossfeed I can simulate speakers with respect to acoustic crossfeed.
2. Listeners should not be bothered with any of this information. Listeners are supposed to simply enjoy it.
3. ITD/ILD is complex and even noisy statistical information. When we scale it, we get scaled complex and even noisy statistical information, so everything is fine. Spatial hearing happens on a higher cognitive level, where the information is packed into a compressed form of higher-level information. Tiny details don't matter much; the overall picture does. Spatial hearing creates an interpretation based on spatial cues, and that's why it's important that the cues make sense. Scaling spatial information correctly helps a lot.
4. Personal taste, yes, but based on scientific facts and an understanding of spatial hearing. That makes it good taste, and I believe it's beneficial for all people to target that kind of taste.


----------



## 71 dB

rule42 said:


> I'm probably one of the 2 people @pinnahertz just mentioned who have read this thread, and I have been playing around with crossfeed for the last couple of days. Not a long time for sure, but here's my 2 cents' worth. Maybe it's because thus far I've mainly listened to tracks I'm very familiar with (and hence they sound a bit different/weird), but in general I'll say I'm not at all sold on it and conclude I'm either spatially dumb or, more likely, that my preference is not to crossfeed. I'll keep playing...
> I've seen a lot of comments about people hearing swarms of bees and the like without crossfeed - I certainly don't get that! I like my cans (but prefer my speakers when I can use them).
> Also, could someone explain to me what this 'fatigue' is, as it seems to crop up a lot and is not in the site's glossary of terms. I have visions of people falling over with exhaustion when they've got cans on, whereas for me it always seems to be a fun and uplifting experience. Maybe I'm too involved with enjoying the music to notice.



Listening to headphones without crossfeed means excessive spatial information, which makes your brain generate effects that don't belong to the music. When you use crossfeed, these effects disappear or at least get weaker. Some people conclude this means that crossfeed makes the music duller, less compelling. However, getting rid of the effects makes it easier to hear the real details in the music. If you always watch TV with color saturation set to max, you may find natural colors dull, but when your color saturation is correct you see the color shades correctly. It's about learning to appreciate natural things, smaller details instead of "in your face" stuff. Less is more when the other option is "too much".

The "bees" is the difference of crossfeed and no crossfeed. The sound changes because "bees go away". It's the sensation of something humming at your ears.

Fatigue means your ears and mind starts to prefer silence. Brain has to work hard to make sense of unnatural excessive spatial information.


----------



## gregorio

71 dB said:


> How does a pair of speakers know how to play 30+ mics? How do acoustic crossfeed and room acoustics know? They don't. So why is not knowing only a problem when it's crossfeed?



Huh? Have you not read anything I've written? I clearly stated, restated and it's been quoted in bold, that a pair of speakers doesn't "know" how to play 30+ mics, neither do HPs without crossfeed or HPs with crossfeed, which is why there will ALWAYS be very significant spatial distortion regardless of what playback you use, even if it's playback in the exact same studio it was mixed in! Now, for some reason you appear incapable of hearing that distortion when you listen to your HPs with crossfeed. In response to your observation you've come up with the explanation that crossfeed somehow fixes/cures the spatial distortion but it's nonsense, your crossfeed cannot magically know the answers to the questions it would need to know and even if it did, it still could not magically apply a fix. The actual issue is not an impossible/magical crossfeed algorithm, it's your faulty observation, the inability of your hearing/perception/preference to hear the spatial distortion which is still there, just presented differently!



jgazal said:


> Imagine that you record an orchestra with an eigenmic (32 capsules) placed at row A, seat 2, and you have and that you convolve the highest number possible of virtual speakers a high density HRTF. At row A, seat 3, there is a born blind listener. At row A, seat 1, there is a viewer with normal eyesight. Finally, at row B, seat 2 you have a listener that recently acquired blindness. Full audience.
> ... Are you sure you can claim *that*?



I'm not even sure I can claim to understand your post, let alone make the claim you're asserting! From what I can tell though, pinnahertz is correct, I'm not necessarily claiming any of that, and you seem to have missed the point of what I'm saying about how orchestral recordings are made and why they're made that way.

If we take your example, then:

1. I've got no idea how a person born blind will perceive a live orchestral performance.
2. I have a vague idea of how the recently blind person might be perceiving the performance.
3. I have a vague idea of how the listener with normal eyesight will perceive the performance, but not as vague as #2; I can make some generalised assumptions which will apply much of the time to the average audience member. And I've already explained those assumptions, but let's be a bit more specific: let's take an example of a section of music in which, say, all the strings are playing an accompanying role to a prominent/solo part for the french horn section. Our brain will rapidly latch on to this, our eyes and conscious attention will be drawn to the horns and brought more into focus, making the horn section clearer/louder relative to what we're not focusing on, the audience noise floor and to a lesser extent the strings for example. We're not really consciously aware of this effect, it sounds entirely natural/real because that's what our hearing does all the time with all sound.

With a sound recording (even a theoretically perfect one), we're listening in our sitting room, we don't have the same biases affecting our perception, certainly not the same sight, so we're not likely to have the same perception/experience (of this effect) or not experience it as strongly, so what are we to do?  Typically, we'd use another mic, placed appropriately near the horns so it picks up more of the horns relative to the strings and room acoustics and then, when mixing, bring this mic up a couple of dBs or so during this section of the music. This would make the horns very slightly louder and clearer than what our perfect recording would be but more in line with what our perception would do at the live event. The downside is that we're going to have a timing issue, the horn sound will arrive at our spot mic much earlier than it will arrive at our perfect mic setup (in row A, seat 2), maybe 20 milliseconds or more. So, we've seriously messed-up the timing (spatial distortion). Maybe the recording still sounds fine and we can leave it like that but almost certainly we'd apply some delay to the spot mic. Even if we applied the exact delay to that spot mic as the distance to the perfect mic setup would suggest (about 1ms per 1.1 feet), that would give us the correct arrival time but we'd still have spatial distortion because the early reflections and reverb from row A, seat 2 will be significantly different to the early reflections at the position of our spot mic. Interestingly though, applying the 1ms per 1.1 feet formula often doesn't work very well, or rather we use it as a starting point and adjust the amount of delay from there until it sounds right but what we end up with is therefore actually wrong by several/many milliseconds.
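The "about 1 ms per 1.1 feet" rule of thumb above falls directly out of the speed of sound (roughly 1100 ft/s in air). Here is a sketch of the starting-point calculation an engineer might make before adjusting the spot-mic delay by ear; the function name and default sample rate are hypothetical, introduced only for illustration:

```python
def spot_mic_delay(distance_ft, fs=48000, speed_ft_per_s=1100.0):
    """Rule-of-thumb alignment delay for a spot mic that sits distance_ft
    closer to the source than the main array: about 1 ms per 1.1 feet,
    i.e. sound travelling at roughly 1100 ft/s. Returns (milliseconds,
    samples); engineers then adjust by ear from this starting point."""
    delay_s = distance_ft / speed_ft_per_s   # travel-time difference
    return delay_s * 1000.0, int(round(delay_s * fs))
```

For a spot mic 22 feet closer than the main array this gives 20 ms, which matches the "maybe 20 milliseconds or more" figure above; as the text notes, the by-the-formula value is only a starting point and the final delay is set by ear.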

Now I'm sure you're going to say something like: why introduce that mic and all that timing error/spatial distortion in exchange for just a small gain in perception? My answer to that would be: you seem to have a real thing about timing error and spatial information/distortion. I'm not saying it's unimportant, it is important and we (engineers) spend a considerable amount of our time adjusting and manipulating it, but the absolute, perfect accuracy you seem to be craving simply isn't that important; the brain is quite easily deceived and constantly messes with that spatial information itself anyway, to increase clarity and for various other reasons. Relatively speaking it's a good exchange, a significant improvement in perception/the listening experience for a relatively insignificant amount of spatial distortion. And that is why the recording and mixing of orchestras has evolved to using more and more mics, starting in the early 1950s.



71 dB said:


> 1. So I assume the intention was to have it sound good on speakers. Using crossfeed I can simulate speakers with respect to acoustic crossfeed.
> [2] Tiny details don't matter much; the overall picture does. Spatial hearing creates an interpretation based on spatial cues, and that's why it's important that the cues make sense.
> [2a] Scaling spatial information correctly helps a lot.
> 4. Personal taste, yes, but based on scientific facts and an understanding of spatial hearing. [4a] That makes it good taste, and I believe it's beneficial for all people to target that kind of taste.


 
1. Huh? Pinnahertz correctly stated that mixes are typically checked and adjusted using headphones and you agreed! So clearly the intention was not only to have "it sound good on speakers"; if it were, why check with HPs? And AGAIN, crossfeed clearly does NOT simulate speakers, which invalidates your argument on its own, but in addition, crossfeeding will damage/destroy those adjustments ALREADY made to the mix when it was checked on headphones, or the intentions of the artists if it was deemed no adjustments were desirable!
2. Again, huh? It's because spatial hearing is an interpretation that it's NOT important that spatial cues make sense! The brain will make sense out of what you feed it, even if you feed it complete nonsense, which is exactly what virtually all popular music genres contain: spatial cues which make no sense whatsoever! How can a mix which simultaneously has the spatial cues of an arena, a medium-sized chamber and a toilet possibly make sense?
2a. How do you "scale spatial information correctly" for an arena, a medium-sized chamber and a toilet all at the same time?
4. Personal taste yes, but clearly no understanding and no scientific facts (just definitions and "facts" you've invented)!
4a. No it doesn't, it just makes it your personal taste, a personal taste based on your inability/spatial ignorance. How is that "beneficial for all people"?

G


----------



## pinnahertz

71 dB said:


> My question was half-rhetorical.
> 
> 3. What do you call the fact that your left ear hears the sound from right speaker? I call it acoustic crossfeed.


Of course you do! You are the master of made-up terminology! It's part of spatial hearing and localization. If it causes a problem you might term it "acoustic crosstalk", which is what we called it when working on "acoustic crosstalk cancellation", something that is the exact inverse of your cross-feed, and also interesting, sometimes desirable, and off-topic.


71 dB said:


> 5. Some people listen to their music in "reverberation chambers" compared to studios, but not all of course.


This has been researched, and it turns out, not so many.  The home listening environment is not far off from the studio.


71 dB said:


> 1. So I assume the intention was to have it sound good on speakers. Using crossfeed I can simulate speakers with respect to acoustic crossfeed.


No, your rudimentary cross-feed doesn't re-localize to a position of virtual speakers in front of you, it flattens dimensionality and collapses imaging to a position within the head, which is unnatural.


71 dB said:


> 2. Listeners should not be bothered with any of this information. Listeners are supposed to simply enjoy it.


"Bothered" with information about how many different mics, ITDs, and ILD there are?  Who's "bothered"?  The point is, if you don't know what you have, you can't "correct" for it.  You don't, so you can't. 


71 dB said:


> 3. ITD/ILD is complex and even noisy statistical information. When we scale it, we get scaled complex and even noisy statistical information, so everything is fine. Spatial hearing happens on a higher cognitive level, where the information is packed into a compressed form of higher-level information. Tiny details don't matter much; the overall picture does. Spatial hearing creates an interpretation based on spatial cues, and that's why it's important that the cues make sense. Scaling spatial information correctly helps a lot.


You've just contradicted yourself. How can you "scale" something about which you have no specific knowledge?  


71 dB said:


> 4. Personal taste yes, but based on scientific facts and understanding of spatial hearing. That makes it good taste and I believe it's beneficial for all people to target that kind of taste.


You cannot have good science without accounting for taste, and analyzing why it differs.  This is what you have ignored.  You have a poor and non-specific "correction" process that creates something that is different but still wrong, and yet you are firmly convinced it's scientifically correct.  It cannot BE correct if it's a generalized non-specific "correction"!  That's where taste comes in, which you summarily dismiss.

A similarity exists with the recent research into preferred frequency response in speakers and headphones by Harman.  It's been a long-standing concept that "we all hear differently", which never made any sense, but was held onto by transducer manufacturers who tried to capture their market with a specific sound signature.  Turns out, we do actually all hear the same, and when given the choice, choose a smooth and relatively flat response!  What happened to "taste"?  When you present a group of choices that don't have smooth, flat responses, listeners choose a compromise, but it's not consistent.  When you present smooth and flat, the majority picks that.  You've done NO LISTENING RESEARCH like this with your cross-feed, so you have not researched why some like it and some don't. Yet you repeatedly insist cross-feed is right, and a universal improvement.  You have absolutely nothing on which to base this!


----------



## pinnahertz

71 dB said:


> Listening headphones without crossfeed means excessive spatial information which makes your brain generate effects that don't belong to the music. When you use crossfeed these effects disappear or at least get weaker.


Sometimes, rarely, but not always.  You've ignored subjective preference here.


71 dB said:


> Some people conclude this means that crossfeed makes the music duller, less compelling.


I've never said "duller", a term that implies less HF response.  Less dimensional should be obvious, less involving and less interesting and fun gets more to the point.


71 dB said:


> However, getting rid of the effects makes it easier to hear the real details in the music.


Blending the channels together does not, and cannot make it easier to hear details!


71 dB said:


> If you always watch tv color saturation set to max, you may find natural colors dull, but when your color saturation is correct you see the color shades correctly. It's about learning to appreciate natural things, smaller detail instead of "in your face" stuff. Less is more when the other option is "too much".


No!  Because the color on TV has an obvious reference: life.  Don't confuse adaptation with random correction.


71 dB said:


> The "bees" is the difference of crossfeed and no crossfeed. The sound changes because "bees go away". It's the sensation of something humming at your ears.


I don't hear "bees".  Ever.  This is all you again.


71 dB said:


> Fatigue means your ears and mind starts to prefer silence. Brain has to work hard to make sense of unnatural excessive spatial information.


I experience listening fatigue when a system has peaky frequency response, high distortion, noise, or when the material has excessive dynamics processing.  I don't find non-cross-feed listening fatiguing.  Again, it's all you!


----------



## pinnahertz

@bigshot 
Sorry for all the blue lines.  Too many points to work out.  I could have summed it all up by just saying "I disagree completely", but then what's the fun in that?


----------



## bigshot

As long as everybody's enjoying themselves!


----------



## 71 dB

pinnahertz said:


> 1. Sometimes, rarely, but not always.  You've ignored subjective preference here.
> 2. I've never said "duller", a term that implies less HF response.  Less dimensional should be obvious, less involving and less interesting and fun gets more to the point.
> 3. Blending the channels together does not, and cannot make it easier to hear details!
> 4. No!  Because the color on TV has an obvious reference: life.  Don't confuse adaptation with random correction.
> ...



1. Your "sometimes, rarely, but not always" is also a subjective preference, except I give objective justification for my preference.
2. You find artifacts created by excessive stereo separation involving. Maybe your favorite music is so empty it needs such substance, or maybe your preferences are as naive as teenagers preferring excessive bass in their music? Maybe both? Mature taste prefers naturalness and balance.
3. Channels do not represent real sounds. Soundwaves in 3D space are real sound, and you always have correlation between the ears in real life. Headphone listening without crossfeed means we deny the 3D space acoustic process, natural correlation. Blending channels with crossfeed brings some of that back, gives correlation which makes things more natural. Spatial hearing is sensitive to interaural differences. We don't need huge ILDs. A few decibels of ILD + some ITD is enough for our spatial hearing to hear detail. Huge ILD/ITD is simply overkill; that only makes it more difficult to decode the spatial information.

Crossfeed "blends channels", but it's important to realize that 2 "new" channels are created from original 2 channels and the blending process for these 2 new channels differ from each other. The result is 2 channels with difference between them but also strong correlation. That's why it is good for our spatial hearing, because our hearing expects sounds like that. Correlation, but also difference in a certain way. It's natural.

4. Sounds have real life reference too. In real life you have only a few decibels of ILD at low frequencies. Why should it be any different when you are listening to headphones?
5. Well we spatially enlightened people do hear "bees". That's why we use crossfeed. You may think the bees are part of the music, but that's a wrong assumption.
6. It's all me and other crossfeeders. Stop acting like I was the only one on this planet promoting crossfeed. I'm just active online.
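
The "blending" described in point 3 can be sketched in a few lines: each output channel is the input plus an attenuated, delayed, lowpass-filtered copy of the opposite channel, which raises interaural correlation while preserving a level difference. The delay, cutoff, and attenuation values below are illustrative assumptions, not any particular plugin's settings:

```python
import numpy as np

def one_pole_lowpass(x, fc, fs):
    """Simple one-pole IIR lowpass, standing in for the head-shadow filter."""
    a = np.exp(-2.0 * np.pi * fc / fs)
    y = np.zeros_like(x)
    prev = 0.0
    for i, s in enumerate(x):
        prev = (1.0 - a) * s + a * prev
        y[i] = prev
    return y

def crossfeed(left, right, fs=44100, delay_ms=0.3, atten_db=12.0, fc=700.0):
    """Mix a delayed, lowpass-filtered, attenuated copy of the opposite
    channel into each channel, producing two new correlated channels."""
    d = int(round(delay_ms * 1e-3 * fs))          # interaural delay in samples
    g = 10.0 ** (-atten_db / 20.0)                # linear gain for the fed signal

    def shadow(x):
        delayed = np.concatenate([np.zeros(d), x[:len(x) - d]]) if d else x.copy()
        return g * one_pole_lowpass(delayed, fc, fs)

    return left + shadow(right), right + shadow(left)
```

A hard-panned source (signal only in the left channel) ends up present in both outputs, still louder on the left, which is the "reduced but nonzero ILD" result being argued about here.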


----------



## 71 dB

pinnahertz said:


> No, your rudimentary cross-feed doesn't re-localize to a position of virtual speakers in front of you, it flattens dimensionality and collapses imaging to a position within the head, which is unnatural.


This is one of the moments when I think "How can this dude know/understand so little?" I have already tried to teach you that reducing ILD at low frequencies can make the sound appear wider. Sounds very near your other ear create large ILD, because the relative distance difference to the ears is so huge. Those sounds are of course located near your head. When such a sound source is taken to a larger distance from the head, the relative distance difference gets smaller, meaning ILD drops. So, if you scale large ILD smaller with a crossfeeder, you get sound that appears to be further away from the head. This is not rocket science, but you don't seem to get it.
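
The geometry claim here can be checked with a toy calculation: treat the ears as two points about 18 cm apart on the line to the source and count only 1/r spreading loss. (This deliberately ignores frequency-dependent head shadowing, which dominates real ILD, so it illustrates only the distance argument itself.)

```python
import math

def ild_from_distance(source_dist_m, ear_separation_m=0.18):
    """Interaural level difference (dB) from 1/r spreading alone, for a
    source on the interaural axis. Real ILD also includes head shadowing,
    so this is a deliberate simplification."""
    near = source_dist_m                        # distance to the near ear
    far = source_dist_m + ear_separation_m      # distance to the far ear
    return 20.0 * math.log10(far / near)

# The spreading-loss level difference collapses with distance:
# 0.05 m -> ~13.3 dB, 1 m -> ~1.4 dB, 3 m -> ~0.5 dB
```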



pinnahertz said:


> "Bothered" with information about how many different mics, ITDs, and ILD there are?  Who's "bothered"?  The point is, if you don't know what you have, you can't "correct" for it.  You don't, so you can't.


I can scale excessive spatial information so that the information makes sense. The "raw" ITD/ILD information can be pretty meaningless in respect of human hearing, because the mic setups are so different from human anatomy. Acoustic crossfeed with speakers scales the raw information to humans, as does crossfeed in headphone listening. You don't seem to understand the difference between technical sound engineering and psychoacoustics.

Recording music with 30 mics all over the place is a way to capture the sonic information technically as well as possible, to be mixed and stored on a medium such as CD, but that's sonic information in a technical 2-channel form. Despite us having two ears, our hearing expects to receive sonic information in another form, acoustic form, soundwaves arriving at our ears from all directions. A technical 2-channel recording can be in "psychoacoustic" form if it's a binaural recording, but that's rare. Most recordings are not in "psychoacoustic" form. They are in technical form. That's why we need technical-to-psychoacoustic modification. With speakers that happens acoustically. With headphones, crossfeed does it.

So, all we need to know is whether the technical recording is in psychoacoustic form already or not, and how far from it it is. That tells how much crossfeed we need, or whether we need it at all. After getting into crossfeed it's pretty easy to hear how far from it a given recording is and how much crossfeed to use.



pinnahertz said:


> You've just contradicted yourself. How can you "scale" something about which you have no specific knowlege?


I have knowledge of it by listening to it. If I hear "bees" I turn crossfeed on and the "bees" disappear. It's that simple. If the sound is too bassy you cut bass with tone controls, and you don't worry about not knowing how the bass was recorded. Same with excessive spatiality.


----------



## pinnahertz

71 dB said:


> 1. Your "sometimes, rarely, but not always" is also a subjective preference, except I give objective justification for my preference.


Huh?  What "objective justification"?


71 dB said:


> 2. You find artifacts created by excessive stereo separation involving. Maybe your favorite music is so empty it needs such substance, or maybe your preferences are as naive as teenagers preferring excessive bass in their music? Maybe both? Mature taste prefers naturalness and balance.


Could you possibly avoid insulting people?  Seems like we've already established that I've been at this for 20 years longer than you, and have worked for 45 years in pro audio, broadcast and recording.  How is that not "mature"?


71 dB said:


> 3. Channels do not represent real sounds. Soundwaves in 3D space are real sound, and you always have correlation between the ears in real life. 1. Headphone listening without crossfeed means we deny the 3D space acoustic process, natural correlation. 2. Blending channels with crossfeed brings some of that back, gives correlation which makes things more natural.  3. Spatial hearing is sensitive to interaural differences. 4. We don't need huge ILDs. A few decibels of ILD + some ITD is enough for our spatial hearing to hear detail. Huge ILD/ITD is simply overkill; that only makes it more difficult to decode the spatial information.


1. You don't have the right 3D acoustic space or natural correlation on speakers either.  It's all an illusion, and a deliberate one.
2. Blending with cross-feed narrows separation, flattens whatever depth there is, and the resulting image lands mid-head.  That's NOT natural!
3. Yes
4. I'm not arguing the values or structure of your algorithm, I'm arguing its efficacy.


71 dB said:


> Crossfeed "blends channels", but it's important to realize that 2 "new" channels are created from original 2 channels and the blending process for these 2 new channels differ from each other. The result is 2 channels with difference between them but also strong correlation. That's why it is good for our spatial hearing, because our hearing expects sounds like that. Correlation, but also difference in a certain way. It's natural.


That's a very odd way to look at how summing works.  But no matter, the result is what I said in 2.


71 dB said:


> 4. Sounds have real life reference too.


Having trouble with the concept of "reference"?  Real life IS the reference.  You can't have a reference for the reference.  What would THAT be, sound in the vacuum of space?


71 dB said:


> In real life you have only a few decibels of ILD at low frequencies.


That's part of how spatial hearing works, a tiny portion of HRTF.  So?


71 dB said:


> Why should it be any different when you are listening to headphones?


Because we don't have a vast library of binaural recordings that take HRTF into account!  Cross-feed doesn't address the full HRTF, hence it's hobbled.


71 dB said:


> 5. Well we spatially enlightened people do hear "bees". That's why we use crossfeed. You may think the bees are part of the music, but that's a wrong assumption.
> 6. It's all me and other crossfeeders. Stop acting like I was the only one on this planet promoting crossfeed. I'm just active online.


I'm addressing you.  The "others" are a very quiet minority.


----------



## pinnahertz

71 dB said:


> This is one of the moments when I think "How can this dude know/understand so little?" I have already tried to teach you that reducing ILD at low frequencies can make the sound appear wider.


I have indeed learned your opinions.  This one, I believe, is incorrect.  You can never reduce ILD and have the result seem wider.  That's not how spatial hearing works, and not how I perceive it.  Could it actually be that "this dude" understands something you don't?


71 dB said:


> Sounds very near your other ear create large ILD, because the relative distance difference to ears is so huge. Those sounds are of course located near your head. When such a sound source is taken to a larger distance of the head, the relative distance difference gets smaller meaning ILD drops. So, if you scale large ILD smaller with crossfeeder, you get sound that appears to be further away from head. This is not rocket science, but you don't seem to get it.


I do get it, but what you have described is a tiny fraction of how spatial hearing works.  While it is true that more distant sources will present with less ILD, there's SO much more to aural depth perception that you are ignoring. In fact, reducing ILD by itself does not push the virtual source away from you at all!  Quite the reverse.  It places that source more toward the center of the skull, an unnatural position for any sound source. 


71 dB said:


> I can scale excessive spatial information so that the information makes sense. The "raw" ITD/ILD information can be pretty meaningless in respect of human hearing, because the mic setups are so different from human anatomy. Acoustic crossfeed with speaker scales the raw information to humans as does crossfeed in headphone listening.


You have this backwards when it comes to mixing in stereo.  There is no valid "raw information" for the speakers to scale, unless the recording was made with a single stereo pair with the ITD of a human skull.  That's just hardly ever done.  Again, your cross-feed does not move the headphone image to a speaker position, it collapses it to within the skull.


71 dB said:


> You don't seem to understand the difference between technical sound engineering and psychoacoustics. Recording music with 30 mics all over the place is a way to capture the sonic information technically as well as possible, to be mixed and stored on a medium such as CD, but that's sonic information in a technical 2-channel form. Despite us having two ears, our hearing expects to receive sonic information in another form, acoustic form, soundwaves arriving at our ears from all directions. A technical 2-channel recording can be in "psychoacoustic" form if it's a binaural recording, but that's rare. Most recordings are not in "psychoacoustic" form. They are in technical form. That's why we need technical-to-psychoacoustic modification. With speakers that happens acoustically. With headphones, crossfeed does it. So, all we need to know is whether the technical recording is in psychoacoustic form already or not, and how far from it it is. That tells how much crossfeed we need, or whether we need it at all. After getting into crossfeed it's pretty easy to hear how far from it a given recording is and how much crossfeed to use.


I agree in principle with everything except that cross-feed reproduces the "technical-to-psychoacoustic modification" of speakers.  Not even close!  To do that you'd have to introduce the correct HRTF and ambient acoustics of speakers in a room, similar to what the Smyth Realizer does.  That's a very, very long way from your cross-feed!  I like what the Realizer does, but it's impractical.  I don't like cross-feed on most material, but do on some. 


71 dB said:


> I have knowledge of it by listening to it. If I hear "bees" I turn crossfeed on and the "bees" disappear. It's that simple. If the sound is too bassy you cut bass with tone controls and you don't care not knowing how the bass was recorded. Same with excessive spatiality.


But, I have knowledge of it the same way, yet make different choices.  The main difference here is I don't say your choice is wrong, it's your choice. I've also explained why I don't agree with your choice, both from a preference standpoint and a technical one.   My choice is right for me, but you say I'm spatially ignorant, immature, unenlightened, spatially deaf, and a whole string of other insults.  I don't even know what your point or purpose is anymore.


----------



## gregorio (Jan 13, 2018)

71 dB said:


> [1] This is one of the moments when I think "How can this dude know/understand so little?"
> [2] I have already tried to teach you that reducing ILD at low frequencies can make the sound appear wider.
> [3] So, if you scale large ILD smaller with crossfeeder, you get sound that appears to be further away from head. This is not rocket science, but you don't seem to get it.
> [4] I can scale excessive spatial information so that the information makes sense.
> ...



1. And that's one of your problems! It's very clear to everyone that pinnahertz actually knows/understands quite a lot.
2. And the reason you've failed to teach us that is because it's not true, although it might appear to be true to your personal perception.
3. It's not rocket science, in fact it's not any sort of science, it's a logical fallacy, false cause and effect and that is why we don't get it!! What makes a sound appear further away is a combination of loss of high frequency, loss of volume, increased initial reflection time (ER predelay), increased number of reflections, diffusion and duration of the reverb. Crossfeeding obviously does not apply any of this, just the one parameter of reducing inter-ear level differences. This is all basic stuff, the stuff sound/mix engineers learn very early on but you're apparently ignorant both of these facts AND that this is what you're hearing!
4. The technology to do that does not exist and will not exist in the foreseeable future! So unless you are ascribing some magic properties to your crossfeed, all you are doing is changing the presentation according to your personal perception ability (inability!) and preferences.
5. Of course they don't, speakers can no more "scale the raw information" than can crossfeed. Speakers do not have magic properties any more than your crossfeed does! Actually, the speakers aren't doing anything at all; what's happening is your listening room is adding another whole layer of spatial information on top of the spatial information already on the recording: additional initial reflection time (ER predelay), increased number of early reflections, changes in freq response, etc. All those things that make sound appear more distant, because of course it is more distant, you don't sit with your ears right next to the speakers! Your crossfeed is not doing ANY of this, all it's doing is crossfeeding, duh! If you're saying you can't hear the difference between listening to your speakers and listening on HPs with crossfeed, then your "spatial ignorance" is far more severe than I even imagined!
6. Correct, we don't understand that difference because it's nonsense which you've just made up!
6a. That's clearly completely made up AND absolute nonsense! If we wanted to capture the "sonic information technically as well as possible" then we'd use a stereo pair, or some sort of soundfield mic. Recording with 30+ mics does not capture "the information" well, in fact the OPPOSITE, it's actually detrimental to "the information"!! We use 30+ mics for artistic and/or perceptual/psycho-acoustic reasons, nothing whatsoever to do with technical reasons. I've explained all this in some detail; you've just completely ignored it and made up a bunch of utter nonsense which is the exact opposite of the actual facts! And then you wonder why we don't get it ... you're joking, right? Just about the only accurate thing in your post was that your assertions are "not rocket science", although it would have been even more accurate had you left out the word "rocket"!
6b. Every single one of these sentences is nonsense you've just made-up, based on other nonsense you've also made-up and which is contrary to the facts!!



71 dB said:


> [1] I have knowledge of it by listening to it. If I hear "bees" I turn crossfeed on and the "bees" disappear. It's that simple.
> [2] If the sound is too bassy you cut bass with tone controls and you don't care not knowing how the bass was recorded. [2a] Same with excessive spatiality.



1. Bees, what nonsense are you going on about now? Either you've got what appears to be something like a severe driver overload or you need to go and see a doctor!
2. That depends, are you after fidelity or just satisfying your preference? Is it too bassy because your system is too bassy or because the artists wanted it to sound particularly bassy? If it's the latter then those of us who prefer fidelity and the intentions of the artists are not "bass ignorant", "idiots" or "know/understand so little". If you don't want to know or care about fidelity or artist intention and want to instead just go with your personal preference, that's fine, your choice but then you "not knowing" is the very definition of ignorance!!
2a. No it's not even vaguely the same! If you want to reduce bass, would you apply crossfeed to achieve that? Of course not, you'd use a control designed to cut bass, a tone control! There is no control to cut "spatial information/distortion", no equivalent to a tone control for spatial information! All you can do is crossfeed that exact same spatial information/distortion, present that exact same spatial information differently, which an individual may or may not prefer. It's got nothing to do with being ignorant, unless you're being ignorant of these obvious facts!

You've fallen into the old audiophile trap. You've made up some nonsense to explain a faulty observation and now you're just making up even more nonsense to explain/justify the original nonsense and so we descend, further and further, into the nonsensical! Now obviously you realise you're just making it up but somehow don't yet realise that it's complete nonsense. Are you just going to continue ignoring the facts until we've descended so far into the nonsensical that what you're making up even sounds like nonsense to you? When are we going to reach that point and what are you going to do when we do? Fascinating, it's like watching a train wreck!

G


----------



## 71 dB

pinnahertz said:


> Huh?  What "objective justification"?


Science of human spatial hearing.


pinnahertz said:


> Could you possibly avoid insulting people?  Seems like we've already established that I've been at this for 20 years longer than you, and have worked for 45 years in pro audio, broadcast and recording.  How is that not "mature"?


I try to be honest with you out of respect. Sometimes honesty insults. No doubt your experience and knowledge of the field is massive after all those decades, but your knowledge of human spatial hearing isn't that strong, something that seems to be common among sound engineers.



pinnahertz said:


> 1. You don't have the right 3D acoustic space or natural correlation on speakers either.  It's all an illusion, and a deliberate one.
> 2. Blending with cross-feed narrows separation, flattens whatever depth there is, and the resulting image lands mid-head.  That's NOT natural!
> 3. Yes_ (Spatial hearing is sensitive to interaural differences)_
> 4. I'm not arguing the values or structure of your algorithm, I'm arguing its efficacy.


1. Yes, it's an illusion. That's why its "making sense" is the relevant part, not its being exactly correct.
2. Again you expose the limits of your knowledge about human spatial hearing. That is the naive sound engineer view.
3. Glad to see you agree. Now think about what it means and how it's related to reducing ILD/ITD with crossfeed.
4. Not sure what you mean by "my algorithm" here. 



pinnahertz said:


> That's a very odd way to look at how summing works.  But no matter, the result is what I said in 2.


Well, maybe if you understood spatial hearing better it wouldn't look that odd to you. Our ears expect "summed" sounds, even if the source of the sound isn't a pair of speakers in 2-channel form. The sounds that arrive to our ears correlate with each other the way that has this "summed" signature. Our spatial hearing expects that so when you crossfeed stereo tracks you shape the sounds to have that "summed" signature and they work well with spatial hearing. Your point 2 is incorrect.


----------



## 71 dB

gregorio said:


> 3. It's not rocket science, in fact it's not any sort of science, it's a logical fallacy, false cause and effect and that is why we don't get it!! What makes a sound appear further away is a combination of loss of high frequency, loss of volume, increased initial reflection time (ER predelay), increased number of reflections, diffusion and duration of the reverb. Crossfeeding obviously does not apply any of this, just the one parameter of reducing inter-ear level differences. This is all basic stuff, the stuff sound/mix engineers learn very early on but you're apparently ignorant both of these facts AND that this is what you're hearing!
> 
> G


You are correct about what makes a sound appear further away, although reverberation time doesn't change, only the balance between direct sound and reverberation. Moving the sound source closer and further away in an acoustic space keeps the reverberation pretty much the same, but the direct sound changes a lot. Most of the time all these things except interaural differences are already right in a recording, so we don't need to do anything about them. However, excessive ILD is in contradiction with other spatial cues such as high frequency loss. Crossfeed reduces ILD, reducing the contradiction (+ making unnatural ILD natural ILD), and we are good.
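
The direct/reverberant balance point can be illustrated with the classic diffuse-field model: direct energy falls as 1/r² while the reverberant energy stays roughly constant, so the direct-to-reverberant ratio drops about 6 dB per doubling of distance. The room constant below is an arbitrary example value, not a measured room:

```python
import math

def direct_to_reverb_db(r_m, room_constant_m2=50.0, Q=1.0):
    """Direct-to-reverberant energy ratio (dB) in the diffuse-field model:
    direct energy Q/(4*pi*r^2) falls with distance r, while reverberant
    energy 4/R is constant (R = room constant, an assumed example value)."""
    direct = Q / (4.0 * math.pi * r_m ** 2)
    reverberant = 4.0 / room_constant_m2
    return 10.0 * math.log10(direct / reverberant)

# Each doubling of distance costs ~6 dB of direct-to-reverb ratio,
# while the reverberant level itself is unchanged.
```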

I have learned about spatial hearing in university as part of my M.Sc. degree (majoring in acoustics and signal processing). I don't know what is taught to sound/mix engineers, but it seems not so much about spatial hearing. Mics, audio channels and the history of recorded sound seem to distort how sound/mix engineers think about these things. This is painfully clear when pinnahertz keeps insisting that more ILD means wider sound. It's true with speakers, because we have acoustic crossfeed, but it's not true with headphones without crossfeed.


----------



## 71 dB

gregorio said:


> 5. Of course they don't, speakers can no more "scale the raw information" than can crossfeed. Speakers do not have magic properties any more than your crossfeed does! Actually, the speakers aren't doing anything at all; what's happening is your listening room is adding another whole layer of spatial information on top of the spatial information already on the recording: additional initial reflection time (ER predelay), increased number of early reflections, changes in freq response, etc. All those things that make sound appear more distant, because of course it is more distant, you don't sit with your ears right next to the speakers! Your crossfeed is not doing ANY of this, all it's doing is crossfeeding, duh! If you're saying you can't hear the difference between listening to your speakers and listening on HPs with crossfeed, then your "spatial ignorance" is far more severe than I even imagined!
> 
> G


I have never said (normal) crossfeed makes headphones sound like speakers. You need HRTF convolution (which is actually crossfeed, just in a more sophisticated form) to achieve something like that. I use crossfeed to get rid of excessive ILD. The process makes headphones sound a bit like speakers, but the difference is still clear. I like how with headphones I avoid the extra layer of spatiality and acoustics. I hear the original spatiality of the recording, but scaled correctly.

Crossfeed makes headphone sound appear 1-3 feet away from my head depending on the spatiality of the recording, while with speakers the sound appears at the distance of the speakers (7 feet in my case) or a little more, again depending on the spatiality of the recording.


----------



## 71 dB

gregorio said:


> You've fallen into the old audiophile trap. You've made up some nonsense to explain a faulty observation and now you're just making up even more nonsense to explain/justify the original nonsense and so we descend, further and further, into the nonsensical! Now obviously you realise you're just making it up but somehow don't yet realise that it's complete nonsense. Are you just going to continue ignoring the facts until we've descended so far into the nonsensical that what you're making up even sounds like nonsense to you? When are we going to reach that point and what are you going to do when we do? Fascinating, it's like watching a train wreck!
> 
> G


I was spatially ignorant for years and "observed" nothing*. Then one day I suddenly realized the problem of excessive spatial information in headphone listening based on my theoretical knowledge of spatial hearing. I discovered crossfeed and learned to "hear the bees" in the sound. I found out that headphone sound can be much better when you scale the spatial information with crossfeed. I made up nothing. I had knowledge learned in university and I applied it to headphone listening and the observations agreed with the theory. So, it's you making up nonsense about me making up nonsense to justify your rejection of my knowledge and claims. 

* To be honest, as a teenager I listened to FM radio and the reception was sometimes noisy, so I listened in mono (S/N increases 20 dB). I noticed that there is something pleasant about mono sound with headphones. However, I only remembered this after discovering crossfeed.


----------



## castleofargh

gents, you know the rules, no personal attacks. the back and forth has been going on for a while, and pages later it's clear that you guys don't even agree on the idea of what the proper spatial cues should be; in fact I feel that only @71 dB insists that there are proper spatial cues to be found somewhere, and that axiom is leading to most other disagreements. 


everybody in this argument knows the basics behind HRTF. when it goes to crap, it's because some subjective perception is presented as having a universal impact when very few do. that's the thing with the subjective approach: just because I feel it doesn't mean that other dude with another headphone is going to. 
same for how the stereo signal isn't going to be altered the same way through a headphone's path compared to having it coming out of speakers in a room. you all know that; even I know that and understand at least the principles. 

@71 dB can't sleep if we don't agree that Xfeed is beneficial for headphones. which is a pointless discussion given how all the variables need to be more or less custom set for a given listener, and how the subjective impression of benefit will still only be subjective. some people will not agree and that's that. 
the self-made definition of spatial distortion leads nowhere and it's only been making everything more difficult for pages. so far we haven't even been able to agree on what it means and what actually is a distortion-free spatial signal, or if that's even a thing on a released stereo album. in a given room with given speakers and a given listener, the signal reaching the eardrum will be statistically unique to those conditions. so what's correct? where is the reference?

with a headphone it's the same thing. the headphone does its thing, the FR being part of perceived spatial cues; we control it completely or not at all, but a partial compensation does not solve whatever spatial distortion is. and the HRTF aspects bypassed by the headphone need to be reintroduced somehow for that one listener to get a subjective impression of some specific spatial cues in the general direction of what he would get on some speakers. so we start with something unique (yet not "spatially" related to the sound in some place of the room when the artist was playing), and we play it back on the wrong medium (headphone) where we also need a custom correction.

with all that, I see something like the Smyth Realiser as a close attempt to simulate one specific situation while using the other specific situation. but no spatial distortion, no real sound, we just try to mimic a given guy using a given pair of speakers in a given room. is that spatially correct? is another guy in any other room on speakers also spatially correct despite how different they all are? even then some of the concepts advanced in the topic are hard to defend. so when looking at basic Xfeed and general settings claiming to solve spatial distortions (again, quite unclear on what that is), I'm not really surprised to see 2 sound engineers reject the principle and several others trickling from it.  


in any case, we are all used to talking to the wind and having both sides of an argument camp their positions forever no matter what. like a political way of dealing with everything: "our side is right. what were we talking about? you also don't know? well, in any case we're right, that's the important part." we can't always hope to reach the other side with our ideas and opinions; it's kind of sad, but we know it happens more often than not. don't get mad because of it, don't take any of it too seriously, the world is big enough to go find other people we can discuss with. 

and most of all in here on Head-Fi, respect the damn terms of service and stop suggesting that the other guys are idiots/ignorant/mad/worshipers of spatial distortions. I blame all of you on this one, no need to be hypocrites pretending to respect what the other one says, but the tone and insinuations pages after pages, that's really not cool. 
find a common reference for a dialog. if you believe you are right about some objective or psychoacoustic aspect, bring some paper or evidence of it instead of resorting to insults, and if you really can't work it out because there can only be one audio god, then I'll do what we do when humanity fails, and close the topic.  
what's even more ironic is how a few weeks back, Pinna and 71 dB both got fed up with the topic the very same day, both saying it wasn't worth it and wasn't making anybody happy. yet we're back to the same polarized arguments. when I see this happening between bickering idiots, I don't mind moderating like a barbarian, but you guys are all better than this. show me I'm right about you (it's one of those moments where a pub would work way better than a topic on the web).


----------



## bigshot

In the words of Rodney King... "We're all Bozos on this bus!"


----------



## pinnahertz

Points well taken @castleofargh! 

@71 dB, @gregorio...meet me at the pub and we'll hash this out or get drunk in the attempt.  Everyone else come and throw popcorn.


----------



## 71 dB

castleofargh said:


> If you believe to be right about some objective or psycho acoustic aspect, bring some paper or evidence of it instead of resorting to insults



It's difficult to trace back after many years where I have learned what. Some things I learned listening to lecturers at the university. Some things I learned from material prepared by the professor (such material was in Finnish in my case). The basics are in Thomas D. Rossing's "The Science of Sound", which we used in "Acoustics 101". A deeper understanding came from trying things out, making music, and writing plug-ins to see what works and why. The basic facts of human spatial hearing aren't secrets I feel I should "reveal" to the world. These things can be Googled in 5 seconds, but truly understanding them takes time and effort to try things out.

I am all for a friendly and respectful debate. That's what I came here for.  I want to be friendly, but it has been a challenge because the disagreements are so large. Sorry.


----------



## 71 dB

castleofargh said:


> the self made definition of *spatial distortion* leads nowhere and it's only been making everything more difficult for pages. so far we haven't even been able to agree on what it means and what actually is a distortion free spatial signal, or if that's even a thing on a released stereo album. in a given room with given speakers and a given listener, the signal reaching the eardrum will be statistically unique to those conditions. so what's correct? where is the reference?



For me spatial distortion has been a useful concept in crossfeed, part of the understanding of the issue. It's such a fundamental part of how I understand headphone spatiality that it's difficult to say anything without mentioning it here and there. I have used "excessive spatiality" instead on this forum, since spatial distortion seems so controversial. To me, spatial distortion in its simplest definition is the difference between the spatiality of a recording and that of an imaginary binaural recording of the same source.

Subjectively spatial distortion SD is calculated

SD = 100 * 10 ^ ( x / 10 ) %,
where x is proper (subjectively optimal) crossfeed level. So, if crossfeed level -5 dB gives optimal spatial result, spatial distortion SD = 100*10^(-5/10) % = 32 %.
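In code form, the same calculation (Python just to make the arithmetic concrete; the function name is mine):

```python
def spatial_distortion(x_db: float) -> float:
    """Subjective spatial distortion in percent, per the formula above,
    where x_db is the subjectively optimal crossfeed level in dB
    (a negative number, e.g. -5.0)."""
    return 100 * 10 ** (x_db / 10)

# an optimal crossfeed level of -5 dB gives about 32 %
print(round(spatial_distortion(-5.0)))  # prints 32
```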

Objectively spatial distortion is calculated using algorithms that simulate human spatial hearing. I haven't been able to develop such algorithm yet, but I am working on it, trying to come up with an elegant solution.

In my book, binaural recordings are the "widest" possible spatial-distortion-free recordings. So anything between mono and binaural in respect of ILD/ITD is spatial distortion free. Larger ILD/ITD means excessive spatiality, which in our brain generates spatial distortion because the spatiality doesn't make sense. So, spatial distortion happens in our head, but is triggered by excessive spatial information in a recording, just like pain happens in our brain but is triggered by SPL > 120 dB. Scaling SPL under the pain threshold removes pain, and scaling spatial information removes spatial distortion. 

Hopefully this explains what I mean by spatial distortion.


----------



## gregorio (Jan 13, 2018)

71 dB said:


> [1]* I made up nothing*. I had knowledge learned in university and I applied it to headphone listening and *the observations agreed with the theory*.
> [2] So, it's you making up nonsense about me making up nonsense to justify your rejection of my knowledge and claims.



1. Then who did? These two sentences contradict each other, unless someone other than you made up this "theory"! So if you didn't make up this "theory", who did and where's it published? Where's all the independent scientific studies of numerous observations of numerous subjects which support this theory? You're absolutely right, it's not rocket science, it's not science at all! Clearly it's only your personal observations and your personal idea/"theory" (certainly not a scientific theory!), an idea which you've made up!

2. I'm not making up any nonsense, I'm pointing out the facts of how recordings are made. I'm rejecting your knowledge and claims because they fly in the face of the facts and you've got nothing to back up your claims except your own perception, your own preferences and then other "facts" you've made up!



71 dB said:


> [1] You are correct about what makes a sound appear further away, although reverberation time doesn't change, only the balance between direct sound and reverberation.
> [2] Moving the sound source closer and further away in an acoustic space keeps the reverberation pretty much the same ....



1. Again, you are contradicting yourself. How can I be correct about what makes a sound appear further away (which I stated depended on differences in the reverberation, initial reflection times, ER predelay, etc.) but be incorrect about the reverberation changing?
2. Obviously moving a sound source around the room must change the reverb. As a sound source moves around a room (and closer and further away), it gets closer and further away from the various reflective surfaces (walls) and so the various reflections, reflection times and distribution of reflections MUST change too. This is basic physics, to the point of simple common sense. You obviously don't get just a greater or lesser amount of exactly the same reverberation. In addition, with most music (the non-acoustic genres) we're not talking about instruments moving around a room, but of instruments in different rooms (different acoustic spaces, typically synthetically generated) with different reverb characteristics!



71 dB said:


> [1] *I like* how with headphones I avoid the extra layer of spatiality and acoustics. *I hear* the original spatiality of the recording, [2] but scaled correctly.



1. Again, it's about what you like and what you hear, your preferences and your perception; that is, not science, not facts, not "beneficial to all people"!
2. And this is where you go off the rails: it is not scaled correctly, it cannot be scaled correctly. The best which could in theory be achieved would be emulating speakers, a perfect HRTF and a perfect convolution reverb of the room in which the speakers are placed, but all that would do is somewhat reproduce the illusion of speaker reproduction; it would not remove, cure or fix spatial distortion. You're not talking about HRTF or perfect room convolution though, just a basic crossfeed with a delay! Which also does not remove, cure or fix spatial distortion. For hopefully the last time:

It only takes a few minutes on Google to find the facts but to save you time here's a couple: http://www.musictech.net/2014/06/ten-minute-master-orchestral-recording/ and https://www.mixonline.com/recording/orchestral-recording-365592. It's clear orchestras are recorded with multiple mics, sometimes over 40. This is clearly not realistic and not designed to be, you do not have 40 ears and they are not placed all over, around and in an orchestra are they? Furthermore, when you listen to an orchestral recording on speakers, you've got all the spatial information of the concert hall or large recording stage, superimposed by all the spatial information of your sitting/listening room. You keep going on about spatial information making sense but how does this make sense? You cannot possibly fit a large concert hall into your small sitting room, so the spatial information actually on recordings cannot possibly exist in the real world and makes NO sense. And when we get into the non-acoustic genres the situation is even less real and worse!

If you're going to invent some idea/"theory" that's fine, but it either has to agree with the facts or have a substantial amount of very compelling evidence to back it up ... way, way more than just "I like" and "I hear" (or "you don't because you're an idiot, I'm enlightened"). That's clearly not even close to "science" and you leave us with no choice but to call it nonsense!

G


----------



## castleofargh

71 dB said:


> For me spatial distortion has been a useful concept in crossfeed, part of the understanding of the issue. It's such a fundamental part of how I understand headphone spatiality that it's difficult to say anything without mentioning it here and there. I have used "excessive spatiality" instead on this forum, since spatial distortion seems so controversial. To me, spatial distortion in its simplest definition is the difference between the spatiality of a recording and that of an imaginary binaural recording of the same source.
> 
> Subjectively spatial distortion SD is calculated
> 
> ...



yes, it makes what you mean pretty clear to me. now I'm not sure I agree, but at least I understand how you want to consider the recording as if actually heard by a human head. my personal experience with binaural recording has been pretty poor TBH, so I'm not a fan of using it as reference. but you pick the reference you like in your model. I have no love for the Chesky records for example, and for some reason I prefer/feel more "natural" with 2 simple mics spaced a little instead of the fancy dummy head (my head might just not be standard enough? IDK). I have some friends who enjoy Chesky albums a lot, and for them it's like they're discovering that real sound could exist on headphones or something. while I don't even get a horizontal axis for the instruments (the headphone can logically play a part in this with the FR, but so far I have failed to solve the issue with EQ alone for more than a mono signal).

also, most albums aren't using mics at positions similar to binaural recording, so even before the mixing of tracks, your reference is dead in the water. how do we reconcile that at playback when the principles weren't applied from the start? that doesn't make things easy when discussing with the evil duo who have been manufacturing the sense of space in albums from scratch for a living, using mainly basic panning?


----------



## 71 dB

gregorio said:


> 1. Then who did? These two sentences contradict each other, unless someone other than you made up this "theory"! So if you didn't make up this "theory", who did and where's it published? Where's all the independent scientific studies of numerous observations of numerous subjects which support this theory? You're absolutely right, it's not rocket science, it's not science at all! Clearly it's only your personal observations and your personal idea/"theory" (certainly not a scientific theory!), an idea which you've made up!



Who did? People who have researched spatial hearing: http://auditoryneuroscience.com/spatial_hearing


----------



## gregorio

castleofargh said:


> also, most albums aren't using mics at positions similar to binaural recording, so even before the mixing of tracks, your reference is dead in the water. how do we reconcile that at playback when the principles weren't applied from the start? that doesn't make things easy when discussing with the evil duo who have been manufacturing the sense of space in albums from scratch for a living, using mainly basic panning?



That's the problem, there are extremely few binaural recordings. Binaural recording can only be employed for the acoustic music genres and even then, it removes the possibility of mixing, of using the art to enhance the perception/psycho-acoustics (as would occur during an actual performance).

G


----------



## 71 dB

gregorio said:


> I'm not making up any nonsense, I'm pointing out the facts of how recordings are made. I'm rejecting your knowledge and claims because they fly in the face of the facts and you've got nothing to back up your claims except your own perception, your own preferences and then other "facts" you've made up!



How recordings are made doesn't necessarily reflect how spatial hearing works. In my opinion there is a conflict between these two things. To remove this conflict we must change something. We can't change spatial hearing, but we can change how recordings are made. So, how to change it? My preliminary proposal is to limit ILD to under 6 dB below 500 Hz and to under 12 dB at 1 kHz.
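As an illustrative sketch only: one could check a stereo mix against these preliminary limits by comparing per-band channel energies. The 6 dB / 12 dB limits and the 500 Hz edge come from the proposal above; the FFT-based measurement and the 500 Hz-2 kHz band used for the 1 kHz limit are arbitrary choices for the sketch, not part of the proposal:

```python
import numpy as np

def band_ild_db(left, right, fs, f_lo, f_hi):
    """Channel level difference in dB over the band [f_lo, f_hi),
    measured from FFT energy (a crude stand-in for perceptual ILD)."""
    freqs = np.fft.rfftfreq(len(left), 1 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    e_l = np.sum(np.abs(np.fft.rfft(left))[band] ** 2)
    e_r = np.sum(np.abs(np.fft.rfft(right))[band] ** 2)
    return abs(10 * np.log10(e_l / e_r))

def within_proposal(left, right, fs):
    """Preliminary limits: ILD under 6 dB below 500 Hz, under 12 dB
    around 1 kHz (here taken as 500 Hz - 2 kHz, an arbitrary band)."""
    return (band_ild_db(left, right, fs, 20, 500) < 6.0 and
            band_ild_db(left, right, fs, 500, 2000) < 12.0)
```

A dual-mono signal trivially passes (0 dB ILD in every band), while a hard-panned one fails.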


----------



## gregorio

71 dB said:


> Who did? People who have researched spatial hearing: http://auditoryneuroscience.com/spatial_hearing



There is no reference at that link to your theory of a simple crossfeed curing spatial distortion, in fact no mention of spatial distortion at all, no mention of technical vs psycho-acoustic recordings. No mention of reverb staying the same when the source moves, no mention of the spatial information actually on commercial music recordings and no mention of bees! You state that people who have researched spatial hearing invented the theory/explanation you're asserting, so please provide a link to THAT theory/explanation, not to some generic psycho-acoustics research!

G


----------



## gregorio

71 dB said:


> [1] How recordings are made doesn't necessarily reflect how spatial hearing works. In my opinion there is a conflict between these two things.
> [2] To remove this conflict we must change something. We can't change spatial hearing, but we can change how recordings are made. So, how to change it? [3] My preliminary proposal is to limit ILD to under 6 dB below 500 Hz and to under 12 dB at 1 kHz.



1. Isn't it clear by now, recordings are never made to reflect how spatial hearing works, I've provided links which demonstrate this fact. The exception is a few binaural recordings but obviously you wouldn't want to apply your crossfeed to those would you?

2. Commercial recordings are made by professional recording engineers. You're not going to change the way we make recordings with just a "preliminary proposal" and no credible evidence beyond your personal perception/preferences to back it up and you're certainly not going to get us to change what we've learned collectively over many decades by calling us all ignorant and idiots!

3. How does applying a simple crossfeed during the replay of a recording change how a recording has been made? You don't know how the recording was made and you don't know how to correct how it was made. How do you crossfeed the ILD without also affecting the timing information? But you seem now to be talking about something completely different: how recordings are made rather than how you replay them (with crossfeed) after they're made? I know this is pushing the bounds of castleofargh's guidelines, but does what you're saying really still make some sort of sense to you?

G


----------



## 71 dB

gregorio said:


> There is no reference on that link of your theory of a simple crossfeed curing spatial distortion, in fact no mention of spatial distortion at all, no mention of technical vs psycho-acoustic recordings. No mention of reverb staying the same when the source moves, no mention of the spatial information actually on commercial music recordings and no mention of bees! You state that people who have researched spatial hearing invented the theory/explanation you're asserting, please provide a link to THAT theory/explanation, not to some generic pyscho-acoustics research!
> 
> G



Crossfeed etc. are based on the theories of spatial hearing presented in the link. Theories are "just" theories, the foundation for applications. If you look at this picture in the link:

[image from the linked page not reproduced here]
You'll have a challenge finding something that contradicts what I have been saying about ILD and ITD.



gregorio said:


> 1. Isn't it clear by now, recordings are never made to reflect how spatial hearing works, I've provided links which demonstrate this fact. The exception is a few binaural recordings but obviously you wouldn't want to apply your crossfeed to those would you?
> 
> 2. Commercial recordings are made by professional recording engineers. You're not going to change the way we make recordings with just a "preliminary proposal" and no credible evidence beyond your personal perception/preferences to back it up and you're certainly not going to get us to change what we've learned collectively over many decades by calling us all ignorant and idiots!
> 
> ...


1. Ok. I'll just crossfeed the recordings suitable for my ears. I'm doing that every day.

2. I have called people (including myself) spatially ignorant, but never idiots. It's stupid to call people online idiots, because idiots are too stupid to be online. However, even geniuses can be ignorant about certain things, all people are ignorant about something. I was ignorant spatially prior to 2012 and I am ignorant about Marvel movies for example. Being ignorant isn't a negative thing as such. More important is what it is you are ignorant about. 

3. It's making the step of limiting ILD afterwards. Excessive spatial information is excessive spatial information no matter how many mics were used and how they were mixed. Crossfeed shapes the ITD information so that the cues tell the ears there are sound sources at 30° angles, exactly the same as what happens when you listen to speakers. Yes, all of this makes sense to me.
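For illustration, the basic kind of crossfeed under discussion (a delayed, attenuated, lowpass-filtered copy of the opposite channel mixed in, much like the DSP Manager description quoted at the start of the thread) can be sketched in a few lines. The delay, attenuation and cutoff values here are arbitrary placeholders, not any particular implementation's settings:

```python
import numpy as np

def crossfeed(left, right, fs, delay_ms=0.3, atten_db=-9.0, cutoff_hz=700.0):
    """Naive crossfeed sketch: each output channel gets a delayed,
    attenuated, one-pole-lowpassed copy of the opposite channel."""
    delay = int(fs * delay_ms / 1000)
    gain = 10 ** (atten_db / 20)
    a = np.exp(-2 * np.pi * cutoff_hz / fs)  # one-pole lowpass coefficient

    def feed(src):
        out = np.zeros_like(src)
        y = 0.0
        for i, s in enumerate(src):
            y = (1 - a) * s + a * y  # lowpass
            j = i + delay            # delay
            if j < len(out):
                out[j] = gain * y    # attenuate
        return out

    return left + feed(right), right + feed(left)
```

A dual-mono input stays identical on both channels (the cross terms are equal), while a hard-panned input leaks a softened copy into the opposite channel, which is what reduces the extreme ILD/ITD.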


----------



## pinnahertz (Jan 13, 2018)

71 dB said:


> Subjectively spatial distortion SD is calculated
> 
> SD = 100 * 10 ^ ( x / 10 ) %,
> where x is proper (subjectively optimal) crossfeed level. So, if crossfeed level -5 dB gives optimal spatial result, spatial distortion SD = 100*10^(-5/10) % = 32 %.


The problem with the above is: who determines the optimal value of x? Is there no room for subjectivity in that subjective term?


71 dB said:


> Objectively spatial distortion is calculated using algorithms that simulate human spatial hearing. I haven't been able to develop such algorithm yet, but I am working on it, trying to come up with an elegant solution.


How can you measure and evaluate something that is fundamentally the result of opinion?

How can you measure and evaluate something comprised of multiple dynamic functions?


----------



## gregorio

71 dB said:


> Crossfeed etc. are based on the theories of spatial hearing presented in the link.



If I asked for a link to the theory of General Relativity, would you post a link to Newton's theory of gravity because that's the theory upon which General Relativity was based? If I asked you who invented the theory of General Relativity, would you say Isaac Newton? Are you sure that makes sense to you? Wouldn't the correct answer be Albert Einstein? I didn't ask who invented the theories upon which you based your idea/"theory", I asked who invented your idea, if it wasn't you? If it was you, then obviously you invented it (made it up) and your statement that you "_made up nothing_" was in fact a deliberate untruth. 



71 dB said:


> 1. Ok. I'll just crossfeed the recordings suitable for my ears. I'm doing that every day.
> 2. I have called people (including myself) spatially ignorant, but never idiots. [2a] It's stupid to call people online idiots, because idiots are too stupid to be online. [2b] However, even geniuses can be ignorant about certain things, all people are ignorant about something.
> 3. It's making the step of limiting ILD afterwards. Excessive spatial information is excessive spatial information no matter how many mics were used and how they were mixed. Crossfeed shapes the ITD information so that the cues tell the ears there are sound sources at 30° angles, exactly the same as what happens when you listen to speakers.
> [3a] Yes, all of this makes sense to me.



1. Yes exactly, "suitable for YOUR EARS" not automatically for everyone's ears!
2. You did in fact call us idiots, although that post seems to have been removed by castleofargh. 2a. I entirely agree, it is stupid!
2b. I'm certainly ignorant about a large number of things. However, like all other experienced professional sound engineers, spatial information is NOT one of those things! Spatial information is something I've had to deal with virtually every working day for the last 25 years or so, it's a fundamental, integral and essential part of the job of a sound engineer. Saying we're ignorant of spatial information is therefore saying we're ignorant of our own job, which is both untrue and extremely insulting!  
3. There are two reasons why it should be plainly obvious that your crossfeed is NOT "exactly the same" as what happens when listening to speakers. Firstly, you've stated you can hear the difference between listening on HPs with your crossfeed and listening to speakers. If your crossfeed were "exactly the same as what happens when you listen to speakers" then there would be no difference to hear! Secondly, when you listen to speakers you are listening to them in a room (with room acoustics), with a plethora of reflections (spatial information) coming from numerous directions, and therefore you are NOT listening to two sound sources at 30° angles. You cannot simply ignore this fact, ignore the field of acoustics. "Excessive spatial information" therefore makes no sense; in fact, it's the exact opposite of "excessive" spatial information: you've actually got LESS spatial information with HPs because you don't have the additional spatial information of the room/listening environment! If your crossfeed reduces this "excessive" spatial information, how would that be a good thing when you've already got less spatial information than with speakers to start with?
3a. Apparently ... I just can't understand how though.

G


----------



## pinnahertz (Jan 14, 2018)

Let's see now....if I recall, my first studies in spatial hearing occurred in conjunction with my work on acoustic crosstalk cancellation (essentially the inverse of cross-feed).  The date on the circuit board that resulted was 1980.  The circuit worked about as well as cross-feed, in other words, recording and listener specific: while the effect was clearly audible on everything, it was not desirable for every recording, and different listeners preferred different amounts, all the way down to zero. Also unlike cross-feed, it did widen the image and increased ambience and depth, sometimes even placing sound behind the listener.  In order to get it to work at all I had to come to an understanding of the problem, and that surrounded the concept of spatial hearing.  It's been a lifelong area of study.

But somehow I get:


71 dB said:


> No doubt your experience and knowledge of the field is massive after all those decades, but your knowledge of human spatial hearing isn't that strong, something that seems to be common among sound engineers.


...and...


71 dB said:


> Well, maybe if you understood spatial hearing better it wouldn't look that odd to you.


Oh well.  Such are the comments from someone who was, I think, 4 at the time I started working on spatial hearing.

I've just spent the evening listening to the free demo of 112dB Redline Monitor plugin with a whole bunch of different recordings.  It did improve some that had extreme source panning, but at the expense of loss of ambience and depth. The image flattened, the sense of space around soloists collapsed quickly.  I found one recording that it improved without question, many that I kept changing settings until it sounded good, only to realize I'd effectively bypassed it.  At no time did it create a perspective outside the head or in front of the head, and it never made the image sound wider.

So I tried a bit of the Fong Audio "Out Of Your Head" plugin, another free demo.  Think of it as performing a Smyth-Realizer-like function (though not nearly as well, and without head tracking) for $149.    Now that did a few things!  Each setting was a virtual acoustic environment with real speakers in front.  That part worked really well.  Unfortunately, every "speaker" was severely colored to the point of being an obvious synthetic process.  It was fun, but nothing I'd want to live with very long or pay up for.  While I really liked having the stereo image in front of me, I couldn't find a "speaker" that really sounded good.  If that problem were fixed, that thing might just be a winner.

Now, I haven't looked into what the Redline Monitor is actually doing, and I assume the OOYH plug is doing some serious auralization, but one thing is obvious: both are doing way more than the 71dB circuit, taking into account far more details of spatial hearing.  And both partially fail in different ways.

I'm still at 'I like it for a few tracks, mostly not', now for those cross-feed systems too.

See, I've never stopped working in this area.  This particular old "idiot producer" is still willing to learn.  However, I'm also not willing to abandon the knowledge base I already have without proof and justification.  Certainly not for one guy's opinion.


----------



## 71 dB

pinnahertz said:


> The problem with the above is: who determines the optimal value of x? Is there no room for subjectivity in that subjective term?



It's subjective, so anyone. Just as we have individual HRTFs, we have individual optimal crossfeed levels, I suppose. Perhaps -5 dB for me and -6 dB for you… no different from having a bassy recording. You maybe want to cut the bass 4 dB and I want 5 dB. It doesn't mean we shouldn't fix the bassy sound just because we can't agree about the amount of correction.



pinnahertz said:


> How can you measure and evaluate something that is fundamentally the result of opinion?
> 
> How can you measure and evaluate something comprised of multiple dynamic functions?


Well, you can take the average of the opinions for example.


----------



## 71 dB

gregorio said:


> If I asked for a link to the theory of General Relativity, would you post a link to Newton's theory of gravity because that's the theory upon which General Relativity was based? If I asked you who invented the theory of General Relativity, would you say Isaac Newton? Are you sure that makes sense to you? Wouldn't the correct answer be Albert Einstein? I didn't ask who invented the theories upon which you based your idea/"theory", I asked who invented your idea, if it wasn't you? If it was you, then obviously you invented it (made it up) and your statement that you "_made up nothing_" was in fact a deliberate untruth.


Not the best analogy, because General Relativity supersedes Newton's gravity, but I get your point.
Getting from spatial hearing to crossfeed is pretty straightforward, and I don't claim to have invented that stuff. Crossfeed was invented long before I was born, and Linkwitz played with it when I was in kindergarten. Someone tested crossfeed in the 1950s, but I don't remember his name. Someone at Bell Laboratories, I believe.



gregorio said:


> 1. Yes exactly, "suitable for YOUR EARS" not automatically for everyone's ears!
> 2. You did in fact call us idiots, although that post seems to have been removed by castleofargh. 2a. I entirely agree, it is stupid!
> 2b. I'm certainly ignorant about a large number of things. However, like all other experienced professional sound engineers, spatial information is NOT one of those things! Spatial information is something I've had to deal with virtually every working day for the last 25 years or so, it's a fundamental, integral and essential part of the job of a sound engineer. Saying we're ignorant of spatial information is therefore saying we're ignorant of our own job, which is both untrue and extremely insulting!
> 3. There are two reasons why it should be plainly obvious that your crossfeed is NOT "exactly the same" as what happens when listening to speakers. Firstly, you've stated you can hear the difference between listening on HPs with your crossfeed and listening to speakers. If your crossfeed were "exactly the same as what happens when you listen to speakers" then there would be no difference to hear! Secondly, when you listen to speakers you are listening to them in a room (with room acoustics), with a plethora of reflections (spatial information) coming from numerous directions, and therefore you are NOT listening to two sound sources at 30° angles. You cannot simply ignore this fact, ignore the field of acoustics. "Excessive spatial information" therefore makes no sense; in fact, it's the exact opposite of "excessive" spatial information: you've actually got LESS spatial information with HPs because you don't have the additional spatial information of the room/listening environment! If your crossfeed reduces this "excessive" spatial information, how would that be a good thing when you've already got less spatial information than with speakers to start with?
> ...


1. Teenager ears need more bass? Maybe not.
2. If I did it was a mistake and I apologize.
2a. good.
2b. I think the concept of spatiality for sound engineers differs from that of acoustics engineers, which explains our disagreements. Of course you aren't ignorant about things related to your work.
3. Crossfeed is not the same as speakers, of course. Speakers limit excessive spatial information differently from crossfeed, but they both limit it and that's a feature they share. That's why crossfeed takes headphone sound _toward_ speaker sound. Linkwitz even called his crossfeeder an acoustic simulator.
Room acoustics increase "spatial information" * by adding reverb and reflections, but it all goes through the HRTF when coming to your ears, which limits ILD and ITD values to natural levels, below those of headphones. So, headphones have larger ILD/ITD even though they lack room acoustics. The listening room is a spatial regulator: no matter what you play, mono, ping pong, … you have about 3 dB of ILD at low frequencies. Please don't misunderstand my messages on purpose.
3a. I have thought about these things for years and I am somewhat smart/educated. That's how.

* so far, by spatial information I have meant channel difference, but of course we can say adding reverb adds spatial information.


----------



## 71 dB (Jan 14, 2018)

pinnahertz said:


> Let's see now....if I recall, my first studies in spatial hearing occurred in conjunction with my work on acoustic crosstalk cancellation (essentially the inverse of cross-feed).  The date on the circuit board that resulted was 1980.  The circuit worked about as well as cross-feed, in other words, recording and listener specific: while the effect was clearly audible on everything, it was not desirable for every recording, and different listeners preferred different amounts, all the way down to zero. Also unlike cross-feed, it did widen the image and increased ambience and depth, sometimes even placing sound behind the listener.  In order to get it to work at all I had to come to an understanding of the problem, and that surrounded the concept of spatial hearing.  It's been a lifelong area of study.



Crosstalk cancellation on speakers is much more difficult than crossfeed with headphones, but you know that yourself of course.
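Crosstalk cancellation, described above as essentially the inverse of crossfeed, can be sketched at a single frequency as a 2×2 matrix inversion. The path gains below are made up for illustration; a real canceller needs this per frequency bin with measured transfer functions:

```python
# Toy single-frequency illustration: the speaker-to-ear paths form a 2x2
# matrix H; a crosstalk canceller pre-filters the program with H's inverse
# so each ear receives only its intended channel.

direct = 1.0          # ipsilateral path gain (illustrative)
cross  = 0.4 + 0.2j   # contralateral path gain, complex = level + phase (illustrative)

det = direct * direct - cross * cross
H = [[direct, cross], [cross, direct]]
C = [[ direct / det, -cross / det],     # standard 2x2 inverse of H
     [-cross / det,  direct / det]]

def mat_vec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

# A hard-panned left-channel tone: without cancellation the right ear hears
# it through the crosstalk path; with the canceller that leakage is removed.
program = [1.0, 0.0]
ears_plain     = mat_vec(H, program)
ears_cancelled = mat_vec(H, mat_vec(C, program))
print(abs(ears_plain[1]), abs(ears_cancelled[1]))
```

The hard part in practice, as the post implies, is that H varies with the recording room, the listener's own HRTF, and head position, which is why the result is "recording and listener specific".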



pinnahertz said:


> But somehow I get:
> ...and...
> 
> Oh well.  Such are the comments from someone who was, I think, 4 at the time I started working on spatial hearing.



I am just as stunned as you are because I am aware of your background, but if I identify a lack of understanding I point it out. Mind you, I believe your understanding of other aspects of sound engineering is massive and superior to my knowledge, but this might be your weak point.



pinnahertz said:


> I've just spent the evening listening to the free demo of 112dB Redline Monitor plugin with a whole bunch of different recordings.  It did improve some that had extreme source panning, but at the expense of loss of ambience and depth. The image flattened, the sense of space around soloists collapsed quickly.  I found one recording that it improved without question, many that I kept changing settings until it sounded good, only to realize I'd effectively bypassed it.  At no time did it create a perspective outside the head or in front of the head, and it never made the image sound wider.



It could be that what you call "loss of ambience and depth", I call "ambience balanced to its natural level". What you call "flattened" I call "smoothed". It's interesting that you don't get "out of head" sound, because I get it all the time. I get a "sound cloud" of a few feet in diameter and my head is inside the cloud, so some of the sound is inside my head, but most of it is outside.



pinnahertz said:


> So I tried a bit of the Fong Audio "Out Of Your Head" plugin, another free demo.  Think of it as a Smyth Realizer for $149.    Now that did a few things!  Each setting was a virtual acoustic environment with real speakers in front.  That part worked really well.  Unfortunately, every "speaker" was severely colored to the point of being an obvious synthetic process.  It was fun, but nothing I'd want to live with very long or pay up for.  While I really liked having the stereo image in front of me, I couldn't find a "speaker" that really sounded good.  If that problem were fixed, that thing might just be a winner.



Spatiality and colourization are actually the same thing. It's the colourization in the sound that gives the spatial cues. The stronger the spatial effects you want, the more coloured it gets. In real life you don't notice it, because you can't turn the spatial effects off, but with headphones and spatial plugins you can.



pinnahertz said:


> Now, I haven't looked into what the Redline Monitor is actually doing, and I assume the OOYH plug is doing some serious auralization, but one thing is obvious: both are doing way more than the 71dB circuit, taking into account far more details of spatial hearing.  And both partially fail in different ways.


It could be that they "fail" because they try too hard. The beauty of normal crossfeed is that it does so little and therefore can succeed in doing it. If I want speaker sound I listen to speakers. I use crossfeed to avoid excessive stereo separation, and in the process I get a sound that is about 80 % headphone-like and 20 % speaker-like. Works damn well for me for the money.



pinnahertz said:


> I'm still at 'I like it for a few tracks, mostly not', now for those cross-feed systems too.
> 
> See, I've never stopped working in this area.  This particular old "idiot producer" is still willing to learn.  However, I'm also not willing to abandon the knowledge base I already have without proof and justification.  Certainly not for one guy's opinion.



Do what I do: Modify your prior knowledge to include new knowledge. Accepting new things doesn't mean your old wisdom is obsolete. It means you can finetune your knowledge even better.


----------



## pinnahertz

71 dB said:


> I am just as stunned as you are because I am aware of your background, but if I identify a lack of understanding I point it out. Mind you, I believe your understanding of other aspects of sound engineering is massive and superior to my knowledge, but this might be your weak point.


The point I was trying to make clearly flew right over your head.  How can a thorough study of spatial hearing for almost 40 years result in a weak point?  


71 dB said:


> It could be that what you call "loss of ambience and depth", I call "ambience balanced to its natural level". What you call "flattened" I call "smoothed". It's interesting that you don't get "out of head" sound, because I get it all the time. I get a "sound cloud" of a few feet in diameter and my head is inside the cloud, so some of the sound is inside my head, but most of it is outside.


Study the above carefully.  It holds a very important key to understanding our argument.


71 dB said:


> Spatiality and colourization are actually the same thing. It's the colourization in the sound that gives the spatial cues.


Absolutely not!  The colorization I was referring to is much more about FR in this case.  In fact, mostly.


71 dB said:


> The stronger the spatial effects you want, the more coloured it gets. In real life you don't notice it, because you can't turn the spatial effects off, but with headphones and spatial plugins you can.


I think your thinking is out of balance here. It's like the only aspect of audio that matters is spatial.  The above is wrong.


71 dB said:


> It could be that they "fail" because they try too hard. The beauty of normal crossfeed is that it does so little and therefore can succeed in doing it. If I want speaker sound I listen to speakers. Crossfeed I use to avoid excessive stereo separation and in the process I get a sound that is about 80 % headphone-like and 20 % speaker-like. Works damn well for me for the money.


My point with this is that your cross-feed fails too; in fact, yours fails worse.


71 dB said:


> Do what I do: Modify your prior knowledge to include new knowledge. Accepting new things doesn't mean your old wisdom is obsolete. It means you can finetune your knowledge even better.


You haven't, though. You have supplied nothing that I can accept as a worthwhile addition to my knowledge because it doesn't hold up under scrutiny.


----------



## jgazal (Jan 14, 2018)

pinnahertz said:


> So I tried a bit of the Fong Audio "Out Of Your Head" plugin, another free demo.  Think of it as a Smyth Realizer for $149.    Now that did a few things!  Each setting was a virtual acoustic environment with real speakers in front.  That part worked really well.  Unfortunately, every "speaker" was severely colored to the point of being an obvious synthetic process.  It was fun, but nothing I'd want to live with very long or pay up for.  While I really liked having the stereo image in front of me, I couldn't find a "speaker" that really sounded good.  If that problem were fixed, that thing might just be a winner.



pinnahertz, please do not get angry with me, but if I were you I would not say that.

The Realiser adds head tracking and BRIR personalization and those are, IMHO, critical features to avoid colorations and sound field collapse with all users.

The app performance cannot be assessed with only a few users.

We don't know how much your HRTF matches Darin's HRTF or the other people's heads he used to acquire the PRIR's provided in that app (AFAIK all measured with the Realiser A8).

We don't know if the number of PRIR's available warrants a good match for a relevant percentage of the population.

We also don't know how much other users are tolerant to head movements.

You are an authority here and people will trust what you write; if 100 people buy it based on your description, a relevant percentage may experience cognitive dissonance.


----------



## TYATYA

CROSSFEED IS A NEED for me.
I need it to cut out some background instruments in recordings. They mostly sit in the upper midrange with long decay, and they bother my ears when listening on headphones.
After wiping out those kinds of signals, the musical image is much cleaner and easier to take in (more ear-friendly).


----------



## gregorio

71 dB said:


> …no different from having a bassy recording. You maybe want to cut bass 4 dB and I want 5 dB. It doesn't mean we should not fix the bassy sound just because we can't agree about the amount of correction.



If it's a bassy recording then I want to hear that bassy recording. If you want to change the recording to better suit your preferences, that's up to you, but you cannot say that everyone must reduce the bass by 4dB, 5dB or whatever dB, and that if they don't then they are ignorant!



71 dB said:


> Crossfeed was invented long before I was born and Linkwitz played with it when I was playing in kindergarden. Someone tested crossfeed in the 1950's, but I don't remember his name.



Again, I'm not asking who invented or tested crossfeed. I'm asking who invented your ideas/"theory" of crossfeed.



71 dB said:


> 2b. Of course you aren't ignorant about things related to your work.
> 3. Crossfeed is not the same as speakers, of course.
> [4] Speakers limit excessive spatial information differently from crossfeed, but they both limit it and that's a feature they share. Room acoustics increase "spatial information" ...
> 3a. I have though about these things for years and I am somewhat smart/educated. That's how. ... * so far I have meant by spatial information channel difference, but of course we can say adding reverb adds spatial information.



2b. Spatial information is not RELATED to my work, it IS my work! It's a large part of every aspect of being a sound engineer.

3. Now we're getting somewhere. If "of course" crossfeed is NOT the same as speakers, then "of course" this statement is incorrect: "_Crossfeed shapes the ITD information so that the cues tell ears there's sound sources at 30° angles, exactly the same what happens when you listen to speakers. Yes, all of this makes sense to me_." - So, as this statement is "of course" incorrect: 1. Why did you make it? 2. How did it make sense to you? and 3. Does it still make sense to you?

4. You seem to agree that with speakers "room acoustics increase spatial information". So how can adding spatial information to "excessive spatial information", limit spatial information? If I have two apples and I add an apple, how does adding that apple limit the number of apples? Are you certain this still makes sense to you?

3a. Hang on a minute, are you saying that all the time you've been talking about "spatial information" you've actually meant "channel difference"? Although channel difference affects spatial information, they're two different things!! If this is in fact what you're saying, then it's shocking that you've been "thinking about these things for years" and not realised this difference. Although this ignorance/confusion of terms would appear to explain a number of your false assertions, it would clearly demonstrate a very poor education on the subject, the opposite of what you're claiming. Surely you're not saying this, are you?

G


----------



## pinnahertz

jgazal said:


> pinnahertz, please do not get angry with me, but if I were you I would not say that.


What part would you not say?  I'm just posting my subjective opinions on what I heard.


jgazal said:


> The Realiser adds head tracking and BRIR personalization and those are, IMHO, key features to avoid colorations and sound field collapse with all users.


Head tracking and BRIR personalization are important factors, but considering the app is not a Realiser, and that I evaluated it without moving my head, you can easily test how well it works.  It's not a virtual reality device, but it does present a very believable image of speakers in front of the listener.  It's just that the speakers sound...um...not that good.


jgazal said:


> The app performance cannot be assessed with only one user.


I'm not writing a review, or attempting qualitative performance evaluation, though.  Just my impressions. 


jgazal said:


> We don't know how much your HRTF matches Darin's HRTF or other people head he used to acquire the PRIR's provided in that app (AFAIK all measured with the Realiser A8).


Who cares?  But the HRTF part actually worked very well.  It was the speaker simulation/auralization that didn't for me. I know some of those speakers; the simulation is way off. The AIX studio setting is unusably dull, and that simply cannot be right either.


jgazal said:


> We also don't know how much other users are tolerant to head movements.


No head movement was involved, or necessary. 


jgazal said:


> You are an authority here and people will trust what you write and if 100 people buy it with your description a relevant percentage may experience cognitive dissonance.


Ouch.

Look, it is what it is.  It's less expensive than a Smyth, still not cheap. An app for $149 includes one "speaker"; each additional is $25.  I'd find the price commensurate with the function IF it actually sounded fantastic.  It didn't to me, so I'm not buying it.  I was impressed with what it did in one aspect, and not in another.  They are separate functions, though.  Modeling a speaker in a room is pretty tricky stuff; getting the image out in front using generalized HRTF, not as much.  He got the easier part right, attempted something additional that is really, really hard to do, and I didn't care for the results.  The two functions may well be all part of a single measurement set, and if so, he has a problem translating the speaker sound signature to headphones, likely a measurement quality issue.


----------



## jgazal

I would not say: "Think of it as a Smyth Realizer for $149".


----------



## pinnahertz

oops!  Posting error...sorry.


----------



## pinnahertz

jgazal said:


> I would not say: "Think of it as a Smyth Realizer for $149".


Wow.  Ok, I didn't realize I had to be that literal, and that nobody here could figure out what I meant.   

Fixed it.


----------



## jgazal (Jan 14, 2018)

My apologies if I have offended anyone reading this thread. Perhaps I was unnecessarily picky.


----------



## pinnahertz

I wouldn't worry about it much. This thread has gone way beyond "picky" status.


----------



## 71 dB

gregorio said:


> You seem to agree that with speakers "room acoustics increase spatial information". So how can adding spatial information to "excessive spatial information", limit spatial information? If I have two apples and I add an apple, how does adding that apple limit the number of apples? Are you certain this still makes sense to you?
> G



I'm afraid you are confusing things. The excessive spatial information is in the recording. When we play that recording on speakers, spatial information is added and then the sound arrives at your ears. HRTF happens and that limits the spatial information. With headphones we don't add spatial information, but the sound "bypasses" most of HRTF, especially the acoustic crossfeed part, and the information is not limited.

Because of HRTF you will never have large ILD at low frequencies no matter what you play, on what speakers and in whatever room/acoustics. HRTF will always limit ILD. You should know that, but I'm not totally sure you do. The only way you have large ILD at low frequencies is using headphones without crossfeed and that's the reason why large ILD at low frequencies is unnatural. Human evolution happened without headphones. 

*At this point I can only say that we represent totally different "cultures" on this matter. My approach is scientific (mathematical, analytic etc.) while yours is music production (you know best what that means). This is very surprising to me, but our ways of thinking are so different that we cannot agree on much of anything, it seems. From now on I will respond to a very limited number of posts here, selectively, while "ignoring" most of what is written here. I believe that gives me the best chances to be polite and respectful.*


----------



## 71 dB

gregorio said:


> 3a. Hang on a minute, are you saying all the time you've been talking about "spatial information" that you've actually meant "channel difference"? Although channel difference affects spatial information, they're two different things!! If this is in fact what you're saying then it's shocking that you've been "thinking about these things for years" and not realised this difference. Although this ignorance/confusion of terms would appear to explain a number of your false assertions, it would clearly demonstrate a very poor education on the subject, the opposite of what you're claiming. Surely you're not saying this are you?
> 
> G



I know the difference, thank you. Confusion of terms is sloppiness on my part.


----------



## pinnahertz

71 dB said:


> I'm afraid you are confusing things. The excessive spatial information is in the recording. When we play that recording on speakers, spatial information is added and then the sound arrives at your ears. HRTF happens and that limits the spatial information. With headphones we don't add spatial information, but the sound "bypasses" most of HRTF, especially the acoustic crossfeed part, and the information is not limited.
> 
> 1. Because of HRTF you will never have large ILD at low frequencies no matter what you play, on what speakers and in whatever room/acoustics. HRTF will always limit ILD. *You should know that, but I'm not totally sure you do.  *2a. The only way you have large ILD at low frequencies is using headphones without crossfeed and 2b. that's the reason why large ILD at low frequencies is unnatural. 3. Human evolution happened without headphones.


1. He does, I do, we all do, that's NOT the issue here!
2a. Again, nobody is disagreeing with this!
2b. "Unnatural" is subjective.  That's a big part of our problem.
3. Or maybe not at all.


71 dB said:


> At this point I can only say that we represent totally different "cultures" on this matter. 4. My approach is scientific (mathematical, analytic etc.) while yours is music production (you know best what that means). 5. This is very surprising to me, but our ways of thinking are so different that we cannot agree on much of anything, it seems. 6. From now on I will respond to a very limited number of posts here, selectively, while "ignoring" most of what is written here. I believe that gives me the best chances to be polite and respectful.


4. You'd like to think this, but it is incorrect!  Your "science" is far too limited to explain why your cross-feed is not universally accepted as "right"; your "math" has no variables to account for preference or artistic intent.  You've created a rigid analysis when a fuzzy one would have served everyone far better.
5. What's surprising to me is that you cling to your own beliefs when professionals with far more experience and exposure to this and many other issues disagree.  Your conclusion: I'm right, they're wrong!  Does that make any logical sense? 

Let me contrast something for you.  You've challenged my understanding of cross-feed many times.  I've stated my understanding, but to check my own sanity, I've gone back to actually listening to cross-feed in headphones, something I did and was satisfied with a long time ago.  Actually about half a dozen different varieties of cross-feed, possibly over 100 different recordings spanning the total time of stereo recording, inception until now.  My observations support my prior understanding and conclusions, but differ from yours.  I stated my beliefs, but actually tested yours to try to understand where you were coming from and why you are so rigid about it. I put mine aside, approached this with an open mind, and listened...listened...listened.  My conclusions are that there are some recordings that do benefit from cross-feed, that the specific flavor is less important than the degree of application, but that the benefit comes at a cost: the flattening of depth, narrowing of soundstage, and for me, a less involving and immersive experience.  There is a very small sub-set of hard-panned stereo where cross-feed for headphones is essential, but it's quite small relative to the entire lexicon of stereo music.

You see my point is, I questioned my position and test drove your opinion, first hand, and not just a little quick trip around the block.  I took cross-feed out on the open road and ran it on the audio Nürburgring.   This driver did not favor it the majority of the time, it did well on a handful of anomalous early ping-pong stereo recordings including some never even meant to be released as stereo. That's NOT a universal, hands-down win! 

Did you give my opinion that kind of test drive?  Nope.  Just blew it off as impossible. Then when greg and I counter your rigid, inflexible opinions (because they are shallow and are not supported by actual listening research, or by the application of the recording art in general), we too are blown off as ill-informed and naive, and yes, I did note your use of the word "idiot" in reference to producers and production values.    What you are practicing is not science, not analysis, not research.  It's the emphatic expression of opinion not backed by fact.  Now, read that carefully: I'm not talking about your analysis of the problem at all.  I'm talking about the efficacy of cross-feed as a solution.  It fails more than it succeeds, IMO.  But until you start your exploration with humility, "I wonder why those expert guys disagree", and find THAT answer ("we're idiots" doesn't count, neither does "we're spatially ignorant"), rather than put your own made-up term on it, you'll fail to fully understand your own "invention".

6. No, that's not the point.  That's "I'm right, you're wrong, so I'm taking my ball and going home".  None of us have done that!  Test drive our side, thoroughly.  And if you think you already have and have found the answer, then question that answer and start over.  See if there's understanding to be had, and don't be so confident that you are the only right one.


----------



## gregorio

71 dB said:


> [1] I'm afraid you are confusing things. [1b] The excessive spatial information is in the recording. When we play that recording on speakers, spatial information is added and then the sound arrives to your ears.
> [2] HRTF happens and that limits the spatial information. With headphones we don't add spatial information, but the sound "bypasses" most of HRTF, especially the acoustic crossfeed part and the information is not limited.
> [3] HRTF will always limit ILD.
> [3a] You should know that, but I'm not totally sure you do.
> ...



1. I'm the one confusing things? And then in the very next post you state you are confusing terms. You are not just confusing terms, you are confusing who is confused!
1b. Firstly, look at what you're saying here, at the simple logic. We engineers mix mainly on speakers, for playback mainly on speakers. Spatial information is added by our control rooms when we're mixing and will be added by the consumers' listening room acoustics. The mix itself therefore has LESS spatial information than we hear when we're mixing or than the consumer will hear when playing back with speakers. How can LESS spatial information than intended be "excessive" spatial information? Secondly, you are now saying the opposite of what you were saying previously, which was that "speakers limit spatial information". How can you make completely opposite and contradictory statements and they both make sense to you?
2. No! HRTF stands for Head Related Transfer Function, NOT "spatial information limiting function"! Again, there is no way to limit the spatial information already in the mix and why would you want to, when the recording already contains less spatial information than intended? And, what has any of this got to do with your crossfeed and your assertions about your crossfeed? Your crossfeed does not have HRTFs, it's just a simple delayed crossfeed!
3. Agreed, but ILD is one part of the equation of "spatial information"; in fact, it's just one part of one of the equations of "spatial information". Spatial information, as you've agreed, is also the reverb. Panning (ILD) defines the left/right position of the dry signal/instrument (in non-acoustic genres) and reverb also partially defines the left/right position (due to relative left/right predelay times of the early reflections), along with defining the depth (relative distance). In acoustic genres we do not have a dry signal/instrument, and the reverb and relative timing of signals arriving at different mics play a greater part in left/right positioning. The problem with crossfeeding is that while you are reducing ILD, you are also messing up the timing information of the reverbs/reflections. This can (and very often does) damage/destroy the crafted illusion of depth and can cause phase (frequency response) issues. Apparently you're ignorant of/insensitive to this undesirable spatial distortion, and therefore TO YOU crossfeeding sounds wonderful. I'm not insensitive to it and crossfeeding does not sound wonderful to me. Obviously though, being ignorant of/insensitive to this spatial distortion does not make you more "enlightened", "smart" and "educated" than me, quite the opposite!
3a. That's your problem, not mine. I'm obviously not responsible for what you're "totally sure" of!
4. What's culture got to do with it? Unless you're saying that it's your "culture" to: make up nonsense, contradict yourself, confuse terms, be "sloppy" and then state that you're smart/educated and we're ignorant?
4a. Clearly your approach is not scientific and sadly you don't appear to understand what "scientific" means. Just coming up with your own ideas which are based on some scientific research/data does NOT make your ideas scientific. To be scientific, your ideas have to undergo the scientific method. For example: scientific data proves that as ice cream sales increase, the rate of drowning deaths increases. If I come up with the "theory" that "eating ice cream causes drowning", is my "theory" scientific because it's based ENTIRELY on scientific proof? And, if you refuse to accept my theory, does that prove you are unscientific, ignorant, poorly educated or have a different culture? Obviously not! Actually, I would be the ignorant, unscientific, poorly educated one, because my "theory" is in fact a correlation-implies-causation fallacy and complete nonsense!
4b. Even more than "very surprising" to me!
5. This statement appears to answer my previous question: "_Are you just going to continue ignoring the facts until we've descended so far into the nonsensical that what you're making up even sounds like nonsense to you? When are we going to reach that point and what are you going to do when we do?_" - Seems like maybe we've finally reached that point and what you're doing now is changing/contradicting some of what you've stated previously and are going to "ignore most of what is written". That's disingenuous and a "cop out" but ultimately, probably wiser than just continuing as you were, at least it will hopefully stop you from making the hole you've dug for yourself even deeper!

Points #1b and #2 would make somewhat more sense if you actually mean "channel separation" instead of "spatial information" but it's not entirely clear that is what you mean.  



71 dB said:


> I know the difference, thank you. Confusion of terms is sloppiness on my part.



"Channel separation", "channel differences" and "spatial information" can all be related but are all quite different things. If you do actually know the difference, how can you KEEP confusing them? Being sloppy might account for a "slip of the tongue" once, maybe even twice but not repeatedly, post after post for many pages and then continuing to do it even after it's been pointed out to you! Using the wrong terminology (due to confusion, sloppiness or ANY other reason) is not going to get you through exams or make you "educated", in fact the opposite and, it certainly doesn't make you more educated than those who don't!

G


----------



## 71 dB

gregorio said:


> 1b. Firstly, look at what you're saying here, at the simple logic. We engineers mix mainly on speakers, for playback mainly on speakers. Spatial information is added by our control rooms when we're mixing and will be added by the consumers' listening room acoustics. The mix itself therefore has LESS spatial information than we hear when we're mixing or than the consumer will hear when playing back with speakers. How can LESS spatial information than intended be "excessive" spatial information? Secondly, you are now saying the opposite of what you were saying previously, which was that "speakers limit spatial information". How can you make completely opposite and contradictory statements and they both make sense to you?



Yes, but the mix often has more channel separation than what enters listeners' ears, because while listening rooms increase spatial information, ILD is limited by HRTF compared to headphones.

Channel separation is excessive, not spatial information.

Listening on speakers limits ILD because of HRTF.

Confusion with terminology doesn't mean confusion or contradictions in substance.


----------



## 71 dB

pinnahertz said:


> Did you give my opinion that kind of test drive?



How do you want me to test it? I test all new recordings first without crossfeed to see if they belong to the infamous 2 %, and if that's not the case, I find the proper crossfeed level. Having heard the non-crossfed version tells me how bad the problem is.





pinnahertz said:


> Nope.  Just blew it off as impossible. Then when I and greg counter your rigid, inflexible opinions (because they are shallow and are not supported by actual listening research, or the application of the recording art in general), we too are blown off as ill-informed, naive, and yes, I did find your use of the word "idiot" in reference to producers and production values.    What you are practicing is not science, not analytical, not research.  It's the emphatic expression of opinion not backed by fact.  Now, read that carefully, I'm not talking about your analysis of the problem at all.  I'm talking about the efficacy of cross-feed as a solution.  It fails more than it succeeds, IMO.  But until you start your exploration with humility, "I wonder why those expert guys disagree", and find THAT answer (we're idiots doesn't count, neither does we're spatially ignorant), rather than put your own made-up term on it, you'll fail to fully understand your own "invention".



I find crossfeed very successful. I don't know what fails you are talking about. Crossfeed doesn't make everything better (only ~98 % imo), but that's why there's an OFF switch, and even I use it 2 % of the time. However, without crossfeed, 98 % of material sounds worse to my ears. How is that a fail for crossfeed?

I'm sorry I can't turn this into real science because I don't have the resources. Those who have them can do the science. I'm confident it will back up my claims. If not, then my years in the university have been a complete waste of time.


----------



## 71 dB

I listened to Steve Roach's 'Kiva' last night on Spotify with headphones. I used a wide crossfeeder and the sound image was very immersive, almost binaural.


----------






## 71 dB

gregorio said:


> 2. No! HRTF stands for Head Related Transfer Function, NOT "spatial information limiting function"! Again, there is no way to limit the spatial information already in the mix and why would you want to, when the recording already contains less spatial information than intended? And, what has any of this got to do with your crossfeed and your assertions about your crossfeed? Your crossfeed does not have HRTFs, it's just a simple delayed crossfeed!
> 
> G



Yes. The amount of spatial information is not the issue. The issue is how large an ILD results. Depending on the system we use to listen to a given recording, the resulting ILD can be large or small. With headphones, ILD is easily excessive. A recording of one impulse of opposite polarity on the left and right channels has huge channel difference, but zero spatial information. A recording of a church can have lots of spatial information, but the resulting ILD can be limited if the spatial information is coded that way.

Now, HRTF makes sure ILD is not excessive. With headphones we "bypass" the HRTF, so we want something else to limit ILD. That something is, for example, crossfeed.
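For concreteness, the mechanism being discussed can be sketched in a few lines of NumPy. This is a minimal, illustrative crossfeed; the cutoff, attenuation, and delay values below are my own placeholder choices, not 71 dB's settings or any product's algorithm. Each output channel gets a delayed, low-pass-filtered, attenuated copy of the opposite channel, which caps the effective low-frequency ILD:

```python
import numpy as np

def crossfeed(left, right, sr=44100, cutoff_hz=700.0,
              attenuation_db=-8.0, delay_ms=0.3):
    """Minimal delayed/low-passed crossfeed sketch (illustrative values).

    Each output channel = its own input + a delayed, low-pass-filtered,
    attenuated copy of the opposite channel, bounding low-frequency ILD.
    """
    delay = int(sr * delay_ms / 1000.0)
    gain = 10.0 ** (attenuation_db / 20.0)

    # One-pole low-pass so only the low frequencies are crossfed.
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sr)

    def lp(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, v in enumerate(x):
            acc += alpha * (v - acc)
            y[i] = acc
        return y

    def delayed(x):
        # Shift the signal later in time by `delay` samples.
        return np.concatenate([np.zeros(delay), x[:len(x) - delay]])

    out_l = left + gain * delayed(lp(right))
    out_r = right + gain * delayed(lp(left))
    return out_l, out_r
```

Feeding it a hard-panned low-frequency tone shows the effect: the silent channel picks up an attenuated copy of the loud one, so the level difference between the ears becomes bounded instead of effectively infinite.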


----------



## pinnahertz

71 dB said:


> Yes, but the mix often has more channel separation than what enters the listener's ears, because while listening rooms increase spatial information, ILD is limited by HRTF compared to headphones.
> 
> Channel separation is excessive, not spatial information.
> 
> ...


Is this what we call "colorful language"?


----------



## 71 dB

pinnahertz said:


> Is this what we call "colorful language"?



It's chromatic distortion free communication.


----------



## pinnahertz

71 dB said:


> How do you want me to test it? I test all new recording first without crossfeed to see if they belong to the infamous 2 % and if that's not the case, I find out proper crossfeed level. Having heard the non-crossfed version tells me how bad the problem is.
> 
> I find crossfeed very successful. I don't know what fails you are talking about. Crossfeed doesn't make everything better (only ~98 % of recordings, in my opinion), but that's why there's an OFF switch, and even I leave it off about 2 % of the time. However, without crossfeed 98 % of material sounds worse to my ears. How is that a fail for crossfeed?
> 
> I'm sorry I can't make this as a real science because I don't have the resources. Those who have can make the science. I'm confident it will back up my claims.



*So, the only thing we have here is opinions!*

And you have just admitted *this isn't real science* and that you have *no science* to back up your opinions. If the principle of "if you want a different result, do something different" holds, then why not try this: rather than stating your opinion as scientific fact backed up by spatial hearing and math (because by your own admission, that's not true), state your opinions as what they are, _opinions, personal preferences, etc._ I have no problem with that at all! In fact, I'll defend your right to state them, even if I disagree! Just so long as they stay framed as your personal preference and opinion, and not a mandatory religious belief that everyone must adopt or be labeled a heretic.

When you say "This is the way it is, and this is the way it should be, has to be, _*you are all wrong!*_", it puts you in a position of exposure to counterarguments, because you condemn the opinions of others and thereby condemn their right to express them. All you had to do was state your preference and refrain from attacking anyone else by labelling them idiots, spatially deaf, spatially ignorant, or just plain ignorant, challenging the validity of what must certainly total up to nearly 100 years of combined professional experience in opposition to your opinion. Then you fight with us with your own made-up terminology, convoluted logic, and borderline name-calling. 44 pages, and this could have all been done in about 2.

YOU chose to start the war, and now have the choice of continuing to perpetuate it, or not. You won't even win cross-feed converts by behaving like a zealot, though.

And, while it may not mean much to you, look what you've accomplished.  You've gotten me, a cross-feed non-fan, to start using it occasionally!  Ok, only about 2% of the time, and only with serious tweaking, but that's a big deal. 

You've said you want to learn. How about you figure out why many, including the professionals, don't generally like cross-feed? Or why _anyone_ wouldn't like it? There MUST be a reason (besides them all being spatially-unenlightened idiots)! This will be a study of human sensory perception, not human spatial hearing, and may just turn on a light or two. Perception includes hearing and spatial hearing, but there's a whole lot more going on. Perception can be fooled. Perhaps there's some fooling going on with both polar-opposite opinions of cross-feed?


71 dB said:


> If not then my years in the university have been complete waste of time.


And that statement makes no sense other than as a purely emotional response. That's not science either. That's "I've tripped and fallen, my life is a complete waste." Chill, man. Get back on the horse.


----------



## 71 dB

pinnahertz said:


> *So, the only thing we have here is opinions!  *
> 
> And you have just admitted *this isn't real science* and that you have *no science* to back up your opinions. If the principle of "if you want a different result, do something different" holds, then why not try this: rather than stating your opinion as scientific fact backed up by spatial hearing and math (because by your own admission, that's not true), state your opinions as what they are, _opinions, personal preferences, etc._ I have no problem with that at all! In fact, I'll defend your right to state them, even if I disagree! Just so long as they stay framed as your personal preference and opinion, and not a mandatory religious belief that everyone must adopt or be labeled a heretic.


My claims can still be correct even if there's a lack of scientific methodology behind them.



pinnahertz said:


> When you say "This is the way it is, and this is the way it should be, has to be, _*you are all wrong!*_", it puts you in a position of exposure to counterarguments, because you condemn the opinions of others and thereby condemn their right to express them. All you had to do was state your preference and refrain from attacking anyone else by labelling them idiots, spatially deaf, spatially ignorant, or just plain ignorant, challenging the validity of what must certainly total up to nearly 100 years of combined professional experience in opposition to your opinion. Then you fight with us with your own made-up terminology, convoluted logic, and borderline name-calling. 44 pages, and this could have all been done in about 2.



I say ILD should be limited to a few decibels at low frequencies, because the ILDs of HRTF contralateral/ipsilateral pairs at low frequencies are within a few decibels unless the sound source is VERY near the head. You can't refute HRTFs, because those are scientifically measured, so you have to explain why my claim about "mimicking" HRTF in respect of ILD limiting is wrong. If you can explain that, I might learn something from you. Also, I'd be interested to learn why "larger than HRTF" ILD/ITD values aren't a problem.



pinnahertz said:


> YOU chose to start the war, and now have the choice of continue to perpetuate it, or not.  You won't even win cross-feed converts by behaving like a zealot, though.



Hopefully you notice I'm trying to be more respectful now. I was so surprised by the opposition to my claims I said nasty things, apparently calling people idiots without realizing it! That's not who I am, so all of this is confusing as hell.



pinnahertz said:


> And, while it may not mean much to you, look what you've accomplished.  You've gotten me, a cross-feed non-fan, to start using it occasionally!  Ok, only about 2% of the time, and only with serious tweaking, but that's a big deal.



That's better than nothing. Appreciate it! I'm not demanding that anyone use it 98 % of the time. That's just how it is for me.



pinnahertz said:


> You've said you want to learn. How about you figure out why many, including the professionals, don't generally like cross-feed?  Or why _anyone_ wouldn't like it?  There MUST be a reason (besides they're all spatially-unenlightend idiots)!  This will be a study of human sensory perception, not human spatial hearing, and may just turn on a light or two.  Perception includes hearing and spatial hearing, but there's a whole lot more going on.  Perception can be fooled.  Perhaps there's some fooling going on with both polar opposite opinions of cross-feed?



Even if "crossfeeders" are a minority, some people really "love" crossfeed. We have several members in this thread stating that. So, for some reason, some people really like crossfeed and find it more natural, less tiring, "bee free", etc. I believe a lot of people don't like crossfeed because of human nature: it's difficult to let go of the way you have done things. I was willing to condemn my listening years without crossfeed as "wrong" and accept crossfeed as the superior way of listening to headphones.

I think many perceive the reduction of effects caused by excessive ILD/ITD as a loss of spatial detail. I don't. To me, effects caused by excessive ILD/ITD are not real information, and those effects don't exist when listening to speakers, because HRTF limits ILD/ITD. To me, headphone listening without crossfeed requires superspatial hearing so that excessive ILD/ITD values make sense. Turning crossfeed ON reduces perceptual loudness, removes effects caused by excessive ILD/ITD (making the sound less "impressive"), and changes the nature of the spatiality. I can understand why some people may have a problem with that, especially when some of the benefits of crossfeed (less tiring) become apparent only after a longer listening session. It has been my belief that explaining to people what really happens when you crossfeed would help them accept crossfeed and hear the benefits better, but that belief has been challenged to extreme levels on this discussion board, to my surprise. Go figure.

In my opinion, sound engineers shouldn't be offended by crossfeed. I don't think it messes up the spatial information painstakingly worked into a mix at all. In fact, I believe proper crossfeed allows all that spatial information to really come across undistorted, in all its glory.


----------



## pinnahertz (Jan 15, 2018)

71 dB said:


> My claims can still be correct even if there's lack of scientific methodology behind them.


Could be, but you actually have no idea.  Evidence and opinion may not be in your favor.


71 dB said:


> I say ILD should be limited to a few decibels at low frequencies, because the ILD of HRTF contralateral/ipsilateral pairs at low frequencies are within a few decibels unless the sound source is VERY near to head.


Under what conditions?  With what material? With what mix?  Did you forget about all those variables?


71 dB said:


> You can't refute HRTF, because those are scientifically measured, so you have to explain why my claim about "mimicing" HRTF in respect of ILD limiting is wrong. If you can explain that, I might learn something from you. Also, I'd be interested to learn why "larger than HRTF" ILD/ITD - values aren't a problem.


OK, but I don't think you'll learn anything.  Ready?

Nobody here has "refuted HRTF"!

I'm not claiming that "mimicking HRTF" with ILD at low frequencies is, of itself, "wrong". I'm challenging the universal efficacy and application of cross-feed as a "correction" of a problem that may not be subjectively necessary to correct.

"Larger than HRTF" ILD and ITD may or may not be a problem, depending on what else is going on in the mix and what the artistic intent is. Because of that, blanket ILD correction stands more chance of being wrong than right, but the key is the subjective opinion of how it sounds. You cannot calculate opinion. Opinion is rooted in perception. You have not factored in perception.

Learn anything?  I didn't think so.


71 dB said:


> Hopefully you notice I'm trying to be more respectful now. I was so surprised by the opposition to my claims I said nasty things, apparently calling people idiots without realizing it! That's not who I am, so all of this is confusing as hell.


Any attempt at respect will be appreciated.


71 dB said:


> That's better than nothing. Appreciate it! I'm not demanding anyone to use it 98 % of the time. That's just how it's for me.


You say that now, but you did, and have made it quite clear that anyone with another view must be spatially deaf/ignorant/unenlightened,  blah..blah..blah.  And more than once.


71 dB said:


> Even if "crossfeeders" are a minority, some people really "love" crossfeed. We have several members in this thread stating that. So, for some reason some people really like crossfeed, find it more natural, less tiring, "bee free", etc.


Not discounting what you said, but please realize you have essentially zero data about cross-feed acceptance and desire.  You don't have a statistically valid sample.  You cannot draw any valid conclusions from the above.


71 dB said:


> I believe a lot of people don't like crossfeed because of human nature. It's difficult to "let go" the way you have done things. I was willing to condemn my listening years without crossfeed "wrong" and accept crossfeed as the superior way of listening headphones.


Your belief, stated above, is functioning as a strong bias against your own research. Even if you cannot perform all of the scientific research you'd like to, you could at least employ a basic scientific attitude. You have within your power, abilities, and budget the capability to do the necessary research; it's just not been your focus. Attributing rejection of cross-feed to the "difficulty of letting go" of a way of doing something is a possibility, but certainly not the only one. That, too, could be easily researched.


71 dB said:


> I think many perceive the reduction of effects caused by excessive ILD/ITD as a loss of spatial detail. I don't. To me effects caused by excessive ILD/ITD are not real information and those effects don't exist when listening to speakers because HRTF limits ILD/ITD.


The disparity above would motivate a scientist to find out why it exists.


71 dB said:


> To me headphone listening without crossfeed requires superspatial hearing so that excessive ILD/ITD values make sense. Turning crossfeed ON reduces perceptual loudness, removes effects caused by excessive ILD/ITD making the sound less "impressive" and changes the nature of spatiality.


You are re-stating your preference. You'll get no validation here. We get your preference, but your bias is so strong you can't even open up the possibility of researching the reason why it's not universally shared. It's the problem 3D movie producers have had for years, but have now come to understand. 3D should, theoretically, be seen universally as better, but it's not. In fact, its acceptance ranges from a few who seek out and prefer 3D all the way to those who abhor it. That's why all 3D movies released today are also released flat. But today the reason for this is well known. You have a similar problem, and the reason for the wide variance in acceptance of cross-feed is not known.


71 dB said:


> I can understand why some people may have a problem with that, especially when some of the benefits of crossfeed (less tiring) become apparent only after a longer listening session. It has been my belief that explaining to people what really happens when you crossfeed would help them accept crossfeed and hear the benefits better, but that belief has been challenged to extreme levels on this discussion board, to my surprise. Go figure.


No, what you've stated is a hypothesis.  It's also illogical.  You can explain 3D movies all day, it won't build audience acceptance.  That's been more than proven.  You need to stop trying to hypothesize and convince, and start researching why that's not working.


71 dB said:


> In my opinion sound engineers shouldn't be offended by crossfeed. I don't think it messes up at all the spatial information painstakingly worked into a mix. In fact, I believe proper crossfeed allows all that spatial information to really come across undistorted in all glory.


I accept, but disagree with, your opinion.  Here's what you need to research:

Why do some think cross-feed collapses the dimensional and spatial nature of some recordings?
Why, even though you find it less tiring, do others find cross-feed immediately annoying?
Why, when you find it applicable to 98% of all stereo music, do others range anywhere from partial agreement to the inverse statistic?

Re-stating opinions and hypotheses without finding proof and testing those hypotheses is just wasting bandwidth at this point. You've had several beliefs challenged here. Do you still hold them as absolute, or are you willing to dig deeper into human sensory perception?

And quit arguing?


----------



## bigshot

IBID above


----------



## gregorio (Jan 16, 2018)

It's good to see that you seem to be coming around and that you've now contradicted, and thereby apparently withdrawn, a number of your claims: being scientific, and some of your claims about spatial information, for example. There are still a few problems/misconceptions though:



71 dB said:


> [1] I test all new recording first without crossfeed to see if they belong to the infamous 2 % ...
> [2] I find crossfeed very successful. I don't know what fails you are talking about.



1. It's not an infamous or famous 2%; I'd never even heard of this figure before you came out with it. It's a figure you've invented which correlates with your personal perception/preference for crossfeed. Maybe it's a figure which is applicable to some other people; maybe there are some who think the figure should be 100% or 0%, or in fact anywhere in between. You do not know, there is little actual scientific evidence, and we have to be careful because it's not fixed: it varies with individual recordings. As I've already stated, I am not a fan of crossfeed, but there are some recordings I've tested which I feel do benefit from it, although my "infamous" figure would be almost the inverse of yours, probably only a few percent which benefit. However, I don't know exactly what music you listen to. Maybe if I had your collection of music recordings my figure would increase significantly, and maybe if you had my collection your figure would decrease significantly, although the huge disparity between us strongly suggests a significant difference of perception/preference and not solely a difference in our music collections. And of course, you've claimed 98% of ALL recordings benefit from crossfeed, not just 98% of the recordings you own/listen to.

2. That is clearly untrue! What's your "infamous 2%" then? If you don't know of any "fails", why don't you quote crossfeed as being beneficial 100% of the time, instead of only 98%? So you do know what fails pinnahertz is talking about! The differences here are: 1. You're talking about only 2% of recordings failing, while we're talking about a very significantly higher percentage; 2. You've stated your 2% as indisputable fact for ALL music and ALL people, which you can't; it's just what you've found with your perception/preference, it's your opinion and NOT fact. I'm stating that your 2% is more like 95%, but I'm not stating that 95% figure as an "infamous" fact applicable to all recordings and all people, just applicable to me and my perception/preferences; and 3. You have stated that anyone who disagrees with you is ignorant/an idiot, and there's obviously a massive difference on this point between us, but fortunately you're starting to back away from this claim!



71 dB said:


> [1] Yes. The amount of spatial information is not the issue.
> [2] The issue is how large ILD it results. Depending on the system we use to listen to a given recording the resulting ILD can be large or small. With headphones ILD is easily excessive.
> [3] A recording of one impulse of opposite polarity on left and right channel has huge channel difference, but zero spatial information.
> [3a] A recording if a church can have lots of spatial information, but the resulting ILD can be limited if the spatial information is coded that way.
> [4] Now HRTF makes sure ILD is not excessive. With headphones we "bypass" HRTF so we want something else to limit ILD. That something is for example crossfeed.



1. Now that statement is good ... or rather, it's not "good", it's just an improvement. It's an improvement because it withdraws many of your more ridiculous claims about spatial information and indicates that the issue of incorrect terminology is now not so much of an issue, which means we might be able to have a more rational discussion. It's "not good" though, because spatial information is in fact the issue or rather one of the issues!

2. ILD is AN issue with HP presentation, and crossfeed does indeed solve that issue. But ILD is not "the issue"; it's only one of the issues of the channel separation which occurs with HP presentation, and while crossfeeding solves the issue of ILD, it does not solve, and indeed often makes worse, some of those other issues, one of which is spatial information!

3. We may have another incorrect use of terminology here. One impulse of "opposite polarity on left and right channel" would have zero spatial information, zero level difference and the only difference between the channels would be 180deg of phase. Given a perfect stereo system and the listener perfectly positioned between the speakers, the result would be complete silence!  Do you mean an impulse panned hard left and THEN panned hard right (or vice versa), what is often referred to as "ping pong"? But then, as panning is one of the aspects of spatial information, you cannot have panning AND zero spatial information. So, do you mean "ping pong" and no other spatial information except for panning (no EQ or volume differences and no reflections/reverb for example)? If I assume this is what you mean, then how many commercial music recordings are there which comply with these conditions? Either none at all or almost none at all! Although admittedly, some early, very basic stereo mixes get somewhat close to these conditions.
3a. A recording made in a church will, virtually without exception, have a great deal of spatial information in terms of reflections and most probably other aspects of reverb. Reverb will reduce ILD but the amount of ILD in the completed mix will depend on how the mix was originally recorded, how the mics are panned during mixing and the exact nature and amount of the reverb.

4. Yes, agreed, but there are two issues here: 1. Your crossfeed is NOT an HRTF, it only accounts for part of an HRTF! and 2. When crossfeeding, you are not crossfeeding ONLY level differences, you are crossfeeding the entire signal (or in your case, if I've understood correctly, the entire signal below 1kHz). This entire signal contains not only level but obviously also frequency and all the other spatial information (reflections/reverb). Crossfeeding CAN therefore cause undesirable frequency interactions and WILL affect the timing upon which reflections/reverb depend. A problem when we take into account the fact that the reflections/reverb in the final product are a manufactured illusion rather than a pure/real/natural occurrence. Crossfeeding does therefore affect this illusion, although how noticeably depends on the reflections/reverb in the final mix and, of course, on how sensitive our individual perception is. My experience is similar to pinnahertz's: for me the illusion often breaks down significantly, I lose depth and ambience, and it all sounds rather two-dimensional/flat. Maybe others are not so sensitive and either simply don't notice this effect at all, or do notice it but not enough to be particularly bothered by it, and therefore to them crossfeeding is either all good or good most of the time.

G


----------



## 71 dB

gregorio said:


> That is clearly untrue! What's your "infamous 2%" then? If you don't know of any "fails" why don't you quote crossfeed as being beneficial 100% of the time, instead of only 98%? So you do know what fails pinnahertz is talking about! The differences here are: 1. You're talking about only 2% of recordings fail and we're talking about a very significantly higher percentage and 2. You've stated your 2% as indisputable fact for ALL music and ALL people, which you can't, it's just what you've found with your perception/preference, it's your opinion and NOT fact. I'm stating that your 2% is more like 95% but I'm not stating that 95% figure is an "infamous" fact, applicable to all recordings and all people, just applicable to me and my perception/preferences and 3. You have stated that anyone who disagrees with you is ignorant/an idiot and there's obviously a massive difference on this point between us but fortunately, you're starting to back away from this claim!



The "2 %" are recordings without excessive channel difference. They don't cause excessive ILD, so nothing needs to be done; no crossfeed = proper crossfeed. Crossfeed being successful means it is successful in situations where there is excessive channel difference to be fixed. Umbrellas being successful on rainy days doesn't mean people should use umbrellas all the time: umbrellas protect from the rain, so you need rain to have a need for umbrellas. To me, crossfeed is the "umbrella" for excessive channel difference, and it rains excessive channel difference 98 % of the time. Using crossfeed when you don't need it, or using too strong a crossfeed, limits ILD values below their "full potential" and makes the sound unnecessarily mono-like.

Proper crossfeed limits ILD so that the largest ILD values are the largest that human spatial hearing expects to experience naturally. At low frequencies that value for ILD is about 3 dB. So, if the max LF ILD of a recording is 3 dB or less, you perhaps don't need crossfeed. If max LF ILD = 5 dB, you crossfeed a little. If max LF ILD = 12 dB, you crossfeed a lot. The higher frequencies up to about 1.6 kHz have a say in this too, but that's the general idea. Bass ILD can be limited to near mono, because at those frequencies spatial hearing is based almost completely on ITD, or you can have mono bass and have all the spatiality at higher frequencies ( > 200 Hz ).

1. It's not a "fail", because OFF is one of the crossfeed levels, for situations where excessive channel difference doesn't exist, such as binaural recordings.
2. The 2 % is not an indisputable fact. It is what seems to agree with my listening experiences.
3. I take back the idiot part. I don't know why I ever wrote that! I still feel ignorance might play a part for some people. At least it played a part for me before 2012.
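The rule of thumb stated above (max low-frequency ILD of about 3 dB → no crossfeed, 5 dB → a little, 12 dB → a lot) could be estimated from a recording roughly as sketched below. The 200 Hz band edge, the frame size, and the mapping thresholds in `suggested_crossfeed` are illustrative choices for this sketch, not values 71 dB specifies:

```python
import numpy as np

def max_lf_ild_db(left, right, sr=44100, cutoff_hz=200.0, frame=4096):
    """Rough framewise estimate of the largest low-frequency
    inter-channel level difference (in dB) of a stereo signal."""
    def lowpass(x):
        # Crude FFT-mask low-pass, good enough for an estimate.
        X = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
        X[freqs > cutoff_hz] = 0.0
        return np.fft.irfft(X, len(x))

    l, r = lowpass(left), lowpass(right)
    worst = 0.0
    for i in range(0, len(l) - frame, frame):
        rl = np.sqrt(np.mean(l[i:i + frame] ** 2))
        rr = np.sqrt(np.mean(r[i:i + frame] ** 2))
        if rl > 1e-6 and rr > 1e-6:  # skip near-silent frames
            worst = max(worst, abs(20.0 * np.log10(rl / rr)))
    return worst

def suggested_crossfeed(ild_db):
    """Map max LF ILD to a qualitative setting (thresholds illustrative)."""
    if ild_db <= 3.0:
        return "off"
    elif ild_db <= 8.0:
        return "light"
    return "strong"
```

A hard-panned mix would score a large LF ILD and map to "strong", while a near-mono bass would score under 3 dB and map to "off", mirroring the umbrella analogy: apply correction only in proportion to the problem.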


----------



## 71 dB

gregorio said:


> 3. We may have another incorrect use of terminology here. One impulse of "opposite polarity on left and right channel" would have zero spatial information, zero level difference and the only difference between the channels would be 180deg of phase. Given a perfect stereo system and the listener perfectly positioned between the speakers, the result would be complete silence!  Do you mean an impulse panned hard left and THEN panned hard right (or vice versa), what is often referred to as "ping pong"? But then, as panning is one of the aspects of spatial information, you cannot have panning AND zero spatial information. So, do you mean "ping pong" and no other spatial information except for panning (no EQ or volume differences and no reflections/reverb for example)? If I assume this is what you mean, then how many commercial music recordings are there which comply with these conditions? Either none at all or almost none at all! Although admittedly, some early, very basic stereo mixes get somewhat close to these conditions.
> 3a. A recording made in a church will, virtually without exception, have a great deal of spatial information in terms of reflections and most probably other aspects of reverb. Reverb will reduce ILD but the amount of ILD in the completed mix will depend on how the mix was originally recorded, how the mics are panned during mixing and the exact nature and amount of the reverb.



3. Zero difference of _absolute_ values, but ears don't take _absolute_ values when figuring out ILD. The resulting sound with a perfect stereo system wouldn't be silence! In an anechoic chamber, the left ear would hear the positive impulse from the left speaker convolved with the HRIR associated with that angle of sound, plus the negative impulse convolved with _another_ HRIR associated with the angle of the right speaker. The result is HRIR(from left) - HRIR(from right), which is not zero, because HRIR(from left) is not the same as HRIR(from right). The resulting sound is an impulse-like positive spike followed by a less impulse-like, strongly low-pass-filtered negative spike about 0.2-0.3 ms later. The right ear would hear the same, but polarity-reversed. In a normal living room we'd also have early reflections and reverberation to add lots of spatial information. We would hear something like two versions of the impulse response of the room playing simultaneously. Far from silence.
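The arithmetic above is easy to check numerically with toy impulse responses. The HRIRs below are invented, schematic shapes (a direct ipsilateral path; a contralateral path delayed ~0.25 ms, attenuated, and smeared), not measured data, but they show why the two paths cannot cancel:

```python
import numpy as np

sr = 44100

# Toy head-related impulse responses (illustrative shapes, not measured):
hrir_ipsi = np.zeros(64)
hrir_ipsi[0] = 1.0                            # direct path, same side
hrir_contra = np.zeros(64)
hrir_contra[11:15] = [0.15, 0.3, 0.2, 0.1]    # ~0.25 ms later, head-shadowed

impulse = np.zeros(256)
impulse[0] = 1.0
left_speaker = impulse        # +impulse on the left channel
right_speaker = -impulse      # opposite polarity on the right channel

# What the left ear receives: left speaker via the ipsilateral HRIR
# plus right speaker via the contralateral HRIR.
ear_left = (np.convolve(left_speaker, hrir_ipsi)
            + np.convolve(right_speaker, hrir_contra))

# The two paths differ, so the signals do not cancel at the ear:
# a positive spike, then a smaller delayed negative lobe. Far from silence.
peak = np.max(np.abs(ear_left))
```

With the toy numbers, `ear_left` is a unit spike followed by a delayed negative lobe, matching the described "positive spike, then low-passed negative spike 0.2-0.3 ms later".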

3a. Exactly! 100 % agreed! I could have written that myself. Now, that is why we have recordings with different amounts of ILD, and why we need different crossfeed levels to address those differences. Also, what you wrote tells us that we could always choose mic setups and positions such that, in a given acoustic, the resulting ILD is optimal, or at least within the limits of spatial hearing.


----------



## 71 dB

gregorio said:


> Do you mean an impulse panned hard left and THEN panned hard right (or vice versa), what is often referred to as "ping pong"? But then, as panning is one of the aspects of spatial information, you cannot have panning AND zero spatial information. So, do you mean "ping pong" and no other spatial information except for panning (no EQ or volume differences and no reflections/reverb for example)? If I assume this is what you mean, then how many commercial music recordings are there which comply with these conditions? Either none at all or almost none at all! Although admittedly, some early, very basic stereo mixes get somewhat close to these conditions.
> 
> G


I mean you first have a mono impulse (the same impulse on both channels at the same amplitude and timing) and then you reverse the polarity of the right channel. For such a signal, L+R = 0 and L-R = 2L. That is not ping-pong, and you don't get such a signal with amplitude panning. Such a signal doesn't have spatial information in the stereo sense: the sound is not centered (it's not mono), and the sound is not on the right or left either, because we have the same absolute amplitude left and right, just in opposite polarity. So in the stereo sense there is no spatial information. In the "pro-logic" multichannel sense there is spatial information, because such a signal would be decoded to a rear channel, but we are talking about stereophonic sound here.
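The channel sums being described take two lines to verify (a sketch with a discrete impulse; nothing here beyond the signals defined above):

```python
import numpy as np

impulse = np.zeros(8)
impulse[0] = 1.0
L = impulse.copy()
R = -impulse          # the same impulse, polarity-reversed on the right

mid = L + R           # sum channel: cancels to zero, nothing "in the center"
side = L - R          # difference channel: 2L, all energy off-center
```

So the sum (mid) channel is identically zero while the difference (side) channel carries everything, which is exactly the L+R = 0, L-R = 2L case in the post.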


----------



## gregorio

71 dB said:


> 3.The resulting sound is an impulse-like positive spike followed by less impulse-like strongly low-pass filtered negative spike about 0.2-0.3 ms later. Right ear would hear the same, but polarity reversed. In normal living room we'd have also early reflections and reverberation to add lots of spatial information. We would hear kind of 2 versions of the impulse response of the room playing simultaneously. Far from silence.
> 3a. Exactly! 100 % agreed! I could have written that myself. Now, that is why we have recordings with different amount of ILD and why we need different crossfeed levels to address those differences. [3b] Also, what you wrote tells us that we always could choose such mic set ups and positions that in a given acoustics the resulting ILD is optimal or at least within the limits of spatial hearing.



3. Have you actually tried this in practice? It takes some messing about, because room acoustics cause the two speakers to never have exactly the same response, but with some messing you can indeed experience silence. I've done this with students, although in a studio; I've never tried it in a home environment.
3a. I explained that crossfeed does solve the different-levels issue, but using crossfeed to solve this issue causes other, undesirable issues.
3b. Yes we could; it's not even very difficult. However, in practice "optimal ILD" is not the only concern; it's not even the primary concern. There are other, more important concerns!



71 dB said:


> [1] I mean you have first mono impulse (same impulse at both channels at same amplitude and timing) and then [2] you reverse right channel. For such signal L+R = 0L and L-R = 2L. That is not ping pong and you don't get such signal with amplitude panoration. Such signal doesn't have spatial information (stereo-sense), because it's not mono (sound is not centered), and sound is not in the right or left either, because we have same absolute amplitude left and right, just in opposite polarity.



1. Such a signal DOES have spatial information: the sound will appear at the phantom centre position, which is effectively exactly what panning is! There is no actual centre position in a (2 channel) stereo system, only a left and a right. To pan a sound to the centre position results in exactly what you describe: the same signal in both the left and right channels at exactly the same time and amplitude. As one changes the relative level, the sound moves away from the centre position; this is called "panning". As one changes the relative timing of the two signals, the sound also moves away from the centre position; this is called "psychoacoustic panning".
2. When you phase invert one channel relative to the other you get phase cancellation, which is silence, as described previously. So, assuming the signal is identical in both channels, L+R = C (phantom) and L-R = 0 (silence). This is real basic stuff, I can't believe you don't know it; maybe there is some confusion of terms again?
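The L+R / L-R arithmetic being argued over here is easy to verify numerically. A minimal sketch in Python; the sample values are arbitrary, chosen only for illustration:

```python
# Verify the L+R / L-R cases discussed above with toy sample lists.

def mid(l, r):
    """Element-wise sum (L+R)."""
    return [a + b for a, b in zip(l, r)]

def side(l, r):
    """Element-wise difference (L-R)."""
    return [a - b for a, b in zip(l, r)]

left = [0.5, -0.25, 0.125, 0.0]  # arbitrary toy signal

# Centre-panned (identical) channels: L+R doubles, L-R cancels to silence.
print(mid(left, left))    # [1.0, -0.5, 0.25, 0.0]
print(side(left, left))   # [0.0, 0.0, 0.0, 0.0]

# Right channel polarity-inverted: L+R cancels, L-R doubles.
inverted = [-x for x in left]
print(mid(left, inverted))   # [0.0, 0.0, 0.0, 0.0]
print(side(left, inverted))  # [1.0, -0.5, 0.25, 0.0]
```

Both posters' formulas drop out of the same four lines: identical channels give L-R = 0, and a polarity-inverted channel gives L+R = 0.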

G


----------



## pinnahertz (Jan 16, 2018)

71 dB said:


> I mean you have first mono impulse (same impulse at both channels at same amplitude and timing) and then you reverse right channel. For such signal L+R = 0L and L-R = 2L. That is not ping pong and you don't get such signal with amplitude panoration. Such signal doesn't have spatial information (stereo-sense), because it's not mono (sound is not centered), and sound is not in the right or left either, because we have same absolute amplitude left and right, just in opposite polarity. So in stereo-sense there is no spatial information. In "pro-logic" multichannel-sense there is spatial information, because such signal would be decoded to rear channel, but we are talking about stereophonic sound here.


You have chosen a bad example.  Impulses cannot and do not exist in a practical situation because there's no way to produce them.  The theoretical impulse becomes band-limited and phase-distorted by every transducer on earth.

But even if you chose a mono sound signal, noise burst, pretty much anything, the perception of a mono but out-of-phase signal, presented equally from two speakers, actually is one of very high spatiality! It is perceived as a diffuse and directionless sound (there are test records and CDs that describe it this way) that appears to come not from dead centre but from all over.  Because acoustic reverb also has some of this quality, which you can easily confirm by looking at the content of an L-R sum, we associate an out-of-phase mono signal with a sense of space.  When experimenting with acoustic crosstalk cancellation I was surprised that one of the problems was that, when you cancel acoustic crosstalk with speakers, some recordings presented an unusually high amount of reverb, out of balance with the direct signal.
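The "content of an L-R sum" observation lends itself to a quick numerical check: the more out-of-phase/diffuse content a stereo pair carries, the more of its energy lands in L-R relative to L+R. A hedged toy illustration (real material would of course be sampled audio, not three-sample lists):

```python
# Rough side/mid energy ratio as a crude indicator of out-of-phase or
# diffuse content in a stereo pair. Sketch only, with toy signals.

def energy(x):
    """Sum of squared samples."""
    return sum(s * s for s in x)

def side_to_mid_ratio(l, r):
    """Energy of (L-R) relative to (L+R); higher = more decorrelated."""
    m = energy([a + b for a, b in zip(l, r)])
    s = energy([a - b for a, b in zip(l, r)])
    return s / m if m else float('inf')

mono = [0.4, -0.2, 0.1]
print(side_to_mid_ratio(mono, mono))                 # 0.0 (fully correlated)
print(side_to_mid_ratio(mono, [-x for x in mono]))   # inf (fully out of phase)
```

The two extremes bracket real recordings: dry centred material sits near 0, while heavily reverberant or phase-manipulated mixes push the ratio up.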

What you say about an out of phase signal is wrong, as far as human perception goes, and frankly that’s all that matters.


----------



## 71 dB

pinnahertz said:


> 1. Could be, but you actually have no idea.  Evidence and opinion may not be in your favor.
> 
> 2. Under what conditions?  With what material? With what mix?  Did you forget about all those variables?



1. I have some idea. If I hadn't, I wouldn't be making claims here. Education/knowledge of spatial hearing + listening experiences give me an idea, just not one verified scientifically. I have mentioned I have low self-esteem, which means I keep my mouth shut about things I know/feel I don't know much about. This is something I am very confident about, so I make claims online.

2. Under any conditions and with any material/mix.



pinnahertz said:


> OK, but I don't think you'll learn anything.  Ready?
> 
> Nobody here has "refuted HRTF"!
> 
> ...



Audio science makes little sense if we keep insisting on subjectivity. We should try to find objective truth. Having "larger than HRTF" ILD/ITD as your artistic intent is kind of a bad intent, because you can't have "larger than HRTF" values with speakers, only with headphones. So you are doing something that will sound "against your intent" on speakers. Such productions should have a sticker on them saying "For headphones only. Don't use crossfeed." The only recordings with such stickers are binaural recordings, and they are especially free of any "larger than HRTF" values for ILD and ITD. Another point is that large ILD is related to sounds very near the head/other ear and causes an annoying feeling, so that we know to use our hands to swipe the damn bee before it flies into the ear canal! Smaller ILD is more comfortable to listen to.


----------



## bigshot

Speakers just sound better than headphones, that's all. If you absolutely have to use headphones, do whatever you have to.


----------



## pinnahertz

71 dB said:


> 1. I have some idea. If I hadn't, I wouldn't be making claims here. Education/knowledge of spatial hearing + listening experiences give me an idea, just not one verified scientifically. I have mentioned I have low self-esteem, which means I keep my mouth shut about things I know/feel I don't know much about. This is something I am very confident about, so I make claims online.


"You have no idea" means you have no scientific and statistical data to support your claim.  You don't.


71 dB said:


> 2. Under any conditions and with any material/mix.


That is impossible. 


71 dB said:


> Audio science makes little sense if we keep insisting on subjectivity. We should try to find objective truth. Having "larger than HRTF" ILD/ITD as your artistic intent is kind of a bad intent, because you can't have "larger than HRTF" values with speakers, only with headphones. So you are doing something that will sound "against your intent" on speakers. Such productions should have a sticker on them saying "For headphones only. Don't use crossfeed." The only recordings with such stickers are binaural recordings, and they are especially free of any "larger than HRTF" values for ILD and ITD. Another point is that large ILD is related to sounds very near the head/other ear and causes an annoying feeling, so that we know to use our hands to swipe the damn bee before it flies into the ear canal! Smaller ILD is more comfortable to listen to.



The above, again, indicates your knowledge gap.  You are still focussed on a very firm and invariant definition based on very simple math.  For a full and complete understanding, you must go outside that and include human perception. Until you do, we will be knocking heads.


----------



## pinnahertz

71 dB said:


> 1. It's not "fail", because OFF is one of the crossfeed levels for situations when excessive channel difference doesn't exist such as binaural recordings.


So because your cross-feeder has an "OFF" setting, you can set it to "OFF" and still claim to use cross-feed?  


71 dB said:


> 2. The 2 % is not an indisputable fact. It is what seems to agree with my listening experiences.


Well, you're the only one claiming that figure; pretty much everyone else here claims something else.  So I guess it's been disputed, and adequately.


71 dB said:


> 3a. I take back the idiot part.  3b. I don't know why I ever wrote that! *I still feel ignorance might play a part for some people.*


3a.  Thank you. I hope it stays that way, but confidence is low...
3b. That is why you wrote it.  So long as you dismiss opposing opinion as ignorant and refuse to attempt to understand, the possibility exists for that kind of post to return.


----------



## pinnahertz (Jan 16, 2018)

71 dB said:


> I listened to Steve Roach's "Kiva" last night on Spotify with headphones. I used my wide crossfeeder and the sound image was very immersive, almost binaural.


You probably ruined Roach's intent completely using cross-feed.  I'm well acquainted with Roach, from "Structures From Silence" (1984) on.  Roach's work is "*Space Music*", which is based on a highly immersive, super-real experience (think of the vastness of space, only with actual sound).  I've listened to some of that same recording with and without cross-feed.  My opinion is that it's a perfect example of when not to use cross-feed.  In fact, on speakers, a 5.1 or greater upmix would be great, as would acoustic crosstalk cancellation, if you had a system that worked.

edit: This genre, the development of which hinged on the advent of non-artificial-sounding digital reverb devices (Lexicon 224, etc.) that made artificial hyper-reverberant sound fields practical, is rooted in hyper-dimensionality, hyper-reality, and spaciousness that exceeds the bounds of physical acoustics. Prior to these tools, you had a handball-court-sized physical reverb chamber, a huge EMT plate, or various less-applicable attempts at reverb using springs.  None had infinitely variable decay, pre-delay, or reflection response shaping.  That device, the digital reverb, made the Space Music genre bloom.  They'd have done 5.1 or Atmos if it was accessible then, but many of these guys didn't make a living with their music, so the arrival of even the most basic tools like reverbs gave them a dimensional palette that was essential to their art form.

Cross-feed collapses their hyper-dimensional sound field, and kills the art.


----------



## bigshot

Don't forget slap back! Elvis wouldn't be Elvis without slap back!


----------



## pinnahertz

Yeah, but his best slap echo stuff was mono.  I think that's 100% cross-feed by some people's definition...if they leave the switch "on".

Wouldn't mono also be 100% spatial distortion?  Hmmmmmmmmmmmm!  (You can get that last bit out with a 60 Hz notch.)


----------



## 71 dB (Jan 17, 2018)

pinnahertz said:


> You probably ruined Roach's intent completely using cross-feed.  I'm well acquainted with Roach, from "Structures From Silence" (1984) on.  Roach's work is "*Space Music*", which is based on a highly immersive, super-real experience (think of the vastness of space, only with actual sound).  I've listened to some of that same recording with and without cross-feed.  My opinion is that it's a perfect example of when not to use cross-feed.  In fact, on speakers, a 5.1 or greater upmix would be great, as would acoustic crosstalk cancellation, if you had a system that worked.
> 
> edit: This genre, the development of which hinged on the advent of non-artificial-sounding digital reverb devices (Lexicon 224, etc.) that made artificial hyper-reverberant sound fields practical, is rooted in hyper-dimensionality, hyper-reality, and spaciousness that exceeds the bounds of physical acoustics. Prior to these tools, you had a handball-court-sized physical reverb chamber, a huge EMT plate, or various less-applicable attempts at reverb using springs.  None had infinitely variable decay, pre-delay, or reflection response shaping.  That device, the digital reverb, made the Space Music genre bloom.  They'd have done 5.1 or Atmos if it was accessible then, but many of these guys didn't make a living with their music, so the arrival of even the most basic tools like reverbs gave them a dimensional palette that was essential to their art form.
> 
> Cross-feed collapses their hyper-dimensional sound field, and kills the art.



I just love to ruin artists' intents! 

Seriously, I disagree with what you say. Digital reverbs generate artificial reverberation, but it's not hyper-dimensional or hyper-reality! Nor do listeners have hyper-ears to listen to such sounds even if they existed. Roach's "Kiva" took me to a soundworld of exceptional realism WITH crossfeed and you call that ruining intent. Well, in that case ruining intents really is my thing!

What happens to Roach's intent with speakers? Room acoustics gets added, HRTF reduces ILD/ITD, and you are worried about my crossfeed? It is frustrating to see how, after hundreds of messages on here, it seems people still have misconceptions about crossfeed. I really don't know how to make people understand these things...


----------



## 71 dB

*Speakers:* Room acoustics gets added. HRTF. Reduced ILD/ITD.
*Headphones:* No room acoustics. No HRTF. ILD/ITD as they are.
*Headphones with crossfeed:* No room acoustics. No HRTF. Reduced ILD/ITD.

By what logic are speakers and headphones without crossfeed cool, but headphones + crossfeed "ruin" intent? The only logic is that speakers and headphones are the "default" and crossfeed is something extra. That's narrow-mindedness, and I want to make crossfeed "default."


----------



## pinnahertz

71 dB said:


> I just love to ruin artists' intents!
> 
> Seriously, I disagree with what you say. Digital reverbs generate artificial reverberation, but it's not hyper-dimensional or hyper-reality! Nor do listeners have hyper-ears to listen to such sounds even if they existed. Roach's "Kiva" took me to a soundworld of exceptional realism WITH crossfeed and you call that ruining intent. Well, in that case ruining intents really is my thing!


You've missed the point entirely.  The intent is hyper-dimensionality. 


71 dB said:


> What happens to Roach's intent with speakers? Room acoustics gets added, HRTF reduces ILD/ITD, and you are worried about my crossfeed?


What happens on speakers...happens on speakers.  His intent is realized to the extent it can be.  On headphones without cross-feed, even more of the intent is realized.  If you reduce that, you've missed the point of the music.


71 dB said:


> It is frustrating to see how after hundreds of messages on here it seems people still have misconceptions about crossfeed.


I agree with that.


71 dB said:


> I really don't know how to make people understand these things...


You can't because you don't!


----------



## pinnahertz (Jan 17, 2018)

71 dB said:


> *Speakers:* Room acoustics gets added. HRTF. Reduced ILD/ITD.
> *Headphones:* No room acoustics. No HRTF. ILD/ITD as they are.
> *Headphones with crossfeed:* No room acoustics. No HRTF. Reduced ILD/ITD.


Yes, but which does the listener like, and which conveys the creator's intent better?  You can't seem to grasp that.


71 dB said:


> By what logic are speakers and headphones without crossfeed cool, but headphones + crossfeed "ruin" intent? The only logic is that speakers and headphones are the "default" and crossfeed is something extra.


Actually missed it yet again! Let me give you an example: in my opinion, cross-feed can sometimes be used to reduce extreme separation, which is a subjective improvement; but it usually reduces separation in a way that flattens perspective and removes dimensionality, which is not a subjective improvement.


71 dB said:


> That's narrow-mindedness, and I want to make crossfeed "default."


I think the irony of the above is apparent to all.  No doubt you want to make cross-feed "default" for the entire world.  I have no issue with you listening to cross-feed and enjoying it.  Making it mandatory for the world?  Not going to fly.

You keep thinking you have something to "teach".  I, for one, have learned it, absorbed it, understand it AND disagree with the application of cross-feed.  "Teaching" isn't the same as "convincing".  Even teaching usually requires proof.  You have no proof, you have preference only, but think of it as proof.

No doubt, you've missed what I did in my example that makes it ok.

You know what else is OK?  Go ahead and make it YOUR default!  You've made your choice; stop deriding others for their choices.  You aren't "right", you've just chosen differently.  You can't justify your rightness because you have no statistics on the general preference for cross-feed.  The math may seem definitive, but it's far too basic to be. As a result, the effect is subjective.  You keep trying to justify your position by stating the math, the logic.  You ignore perception (which I suggested a few posts back), and that's your blind spot.  You've got some of the mechanics down, and have ignored the perception part completely.  And because of that, you can't understand the intent of music, or the way it's mixed.   Instead, you apply YOUR intent as a dominant, superior purpose, taking the view that even the creator is wrong. And you apply YOUR intent to everyone else in the universe.

Cross-feed: you love it, I don't.  Some others like it, some others don't. Some don't care.  You have not proven which is in the majority.   And that's just considering your basic form of cross-feed; there are many others.

What I don't get is: why the fight?  What is there for you to win?  You're not selling anything; you have nothing to gain or lose regardless of how many people love, hate, or don't care about cross-feed.   If Greg and I suddenly came over to your side, would the world be a better place?  We're two guys!  What about everyone else, including all the other professionals who agree with our position? Would you then need to win them over too? The way to win over professionals is by publishing a paper that details the discovery and validates it with testing, and subjecting that paper to peer review.  Not going to do that?  Then you have no hope of winning the argument!

What's to gain by winning?  The only thing I can see here is the need for validation.  If you must have validation by getting the agreement with everyone, you will be disappointed.  Always, not just with cross-feed.  It's just not possible to do.

Will the world at large end if we leave it that way?  Will yours?


----------



## 71 dB

pinnahertz said:


> You've missed the point entirely.  The intent is *hyper-dimensionality*.



What's that? I'm educated only on three-dimensional reality-based acoustics…
…maybe you are beginning to realize you aren't as strong in this* debate as you have believed. It's OK, I have thought about crossfeed a lot over the last 5-6 years.

* On many other things your knowledge can be superior to my knowledge, but crossfeed doesn't seem to be one of them.



pinnahertz said:


> 1. What happens on speakers...happens on speakers. His intent is realized to the extent it can be.
> 2. On headphones without cross-feed, even more intent realized.
> If you reduce that, you've missed the point of the music.


1. That's an honest answer. I give you that.
2. Are you sure? Does Steve Roach make music mainly for headphones? Hard to believe.


----------



## pinnahertz

71 dB said:


> What's that? I'm educated only on three-dimensional reality-based acoustics…
> …maybe you are beginning to realize you aren't as strong in this* debate as you have believed. It's OK, I have thought about crossfeed a lot over the last 5-6 years.


See what I mean?  I even told you what the point was, you still don't get it.


71 dB said:


> * On many other things your knowledge can be superior to my knowledge, but crossfeed doesn't seem to be one of them.


Uncalled for.  Everyone reading this thread knows your opinion of me.  Knock it off.


71 dB said:


> 1. That's an honest answer. I give you that.
> 2. Are you sure? Does Steve Roach make music mainly for headphones? Hard to believe.


Yes, I'm 1000% positive.  No, he doesn't, and that's NOT the point!

You still don't get it.  I've explained it, now it's up to you.


----------



## gregorio

71 dB said:


> [1] Audio science makes little sense if we keep insisting subjectivity. [2] We should try to find objective truth.[3] Having "larger than HRTF" ILD/ITD as your artistic intent is kind of a bad intent, because [3a] you can't have "larger than HRTF" values with speakers, only with headphones. [3b] So you are doing something that will sound "against your intent" on speakers.
> [4] Another point is that large ILD is related to sounds very near head/other ear and causes an annoying feeling so that we know to use our hands to swipe the damn bee before it flies into the ear canal! [4a] Smaller ILD is more comfortable to listen to.



1. The obvious fact you seem to be consistently missing is that much of what we're talking about has relatively little to do with audio science; we are talking about commercial recordings, and commercial music recordings are NOT made by audio science, they are made almost purely by subjectivity, the subjectivity of the artists/engineers! You appear to have a shocking lack of understanding of the recording and mixing process, of the tools available and how those tools are used in practice. What tools are used, and how and when they are used, varies massively from genre to genre, artist to artist and engineer to engineer, and often varies very significantly even with the same artists and engineers between different tracks or albums. The arbiter of *ALL* of this is the subjectivity of the artists/engineers!

2. What objective truth, the objective truth of an artistic endeavour which is entirely subjective? How does this possibly make sense to you? What you've done is to massively oversimplify the whole process, take some notion of reality which in practice never exists, ignore the art of it all and then try to find the "objective truth" of what's left.

3. So now you're telling us what artistic intent is allowed to be. Hitler and Stalin tried the same thing! I mastered a number of tracks a couple of years ago which had a lot of ILD and ITD. The artists (who were also the engineers) had mixed the tracks entirely on headphones, for playback on headphones (though not exclusively), and a large part of my mastering was done with headphones. Although there were a few things which needed fixing/improving, I quite liked what they had done; the significant ILD in many places worked well, partly due to how they had mixed/processed it. But according to you, it was "bad" (even though you've never heard it) and they should not have done that.
3a. That is not true, we can achieve this in effect: we can in fact make the stereo image appear much wider than the width of the speakers; one method, for example, is called "stereo shuffling". In fact you're missing all kinds of commonly employed spatial effects which rely on some combination of timing, phase and/or EQ, such as: Chorusing, Doubling, Phasing, Flanging, Ring Modulators, Harmonizers, DDs, Echoes and of course reverbs. The list goes on, there are innumerable variations of each of these, plus they are often used in combination! Additionally, how these effects are perceived depends almost entirely on the sound (and the different effects applied to that sound) of all the other tracks/instruments in the mix.
3b. That's a possibility, but it's also entirely possible to have an intent which works fine on both speakers and HPs (without crossfeed), even though the presentation is different, and indeed that is often the goal of mixing/mastering!
4. And what if we want a sound/instrument to "sound very near the head/other ear"? Are we not allowed to because YOU find it annoying and for some bizarre reason think it's a bee?
4a. You are demonstrating a complete lack of understanding of the fundamental basics of music as an art form! Some pieces of music, and in fact more than one entire genre of music, are based on NOT being comfortable to listen to! One could for example remix, say, "Ace of Spades" and make it comfortable to listen to, but that would defeat the entire intent of the piece, as it would for the whole heavy metal genre. It's supposed to be harsh/metallic, distorted, loud and uncomfortable. Make it comfortable, take away the "heavy" and the "metal", and whatever you're left with is obviously NOT heavy metal! Now maybe you don't like heavy metal and that's fine, but you can't say no one is allowed to make it and no one is allowed to listen to it. Yours is an opinion/perception/preference and thankfully this is a free world and you are not the dictator!



71 dB said:


> [1] Seriously, I disagree with what you say. [2] Digital reverbs generate artificial reverberation, but it's not hyper-dimensional or hyper-reality! [2a] Nor do listeners have hyper-ears to listen to such sounds even if they existed. [3] Roach's "Kiva" took me to a soundworld of exceptional reality WITH crossfeed and you call that ruining intent. [3a] Well, in that case ruining intents really is my thing!



1. You can't be both "serious" AND "disagree" with what pinnahertz stated, only one OR the other, because what pinnahertz stated was entirely accurate! The only part of his post you could disagree with is the part he stated was his opinion.
2. That's clearly nonsense which contradicts the facts! Most digital reverbs do NOT just generate artificial reverberation, they generate all sorts of reverb-type effects, such as plates and springs, which incidentally were in use well before digital reverbs. There is one type of reverb unit which is designed specifically to generate/emulate artificial reverberation, the convolution reverb, but even then, these can be post-processed and used/mixed in such a way as to create a hyper-real/hyper-dimensional effect, if desired. And typically it is desired in many modern genres!
2a. And this statement is your problem, the problem you are ignoring and which invalidates most of what you're stating as fact. Correct, no one has "hyper-ears" but then no one listens to their ears and no one creates commercial music recordings for listeners' ears in the first place!! What listeners listen to and what artists/engineers create for is NOT their ears but their brain, their perception! And, it's relatively easy to fool the brain into a hyper-real/hyper-dimensional perception, in fact virtually all modern films and music genres RELY on doing just this! 
3. Kiva is clearly ABSOLUTELY NOT a reality! You've got the spatial information of a cave, of a studio and other spatial information all occurring simultaneously. How is that any sort of reality, let alone an "exceptional reality"? And indeed it is NOT supposed to sound like any reality, it is supposed to sound surreal, that's the whole point! OBVIOUSLY, your crossfeed CANNOT turn a deliberately surreal mix into a reality, that's a technical impossibility. I can only assume you are perceiving it as "real" because you are ignorant of/very insensitive to spatial information. It would not sound real to me with crossfeed, it would sound flattened, unreal and would destroy most of the depths/distances which are so well crafted and a hallmark of this album!
3a. That does indeed seem to be the case. The fundamental intent is "surreal" but you've somehow turned it into (for you) "exceptionally real" and destroyed that fundamental intent!



71 dB said:


> What's that? I'm educated only on three-dimensional reality-based acoustics… [2] …maybe you are beginning to realize you aren't as strong in this* debate as you have believed. [3] It's OK, I have thought about crossfeed a lot over the last 5-6 years.



1. You just don't seem to get it; your education isn't the answer, it's the problem! Your education is largely inapplicable because commercial music recordings do not employ "three-dimensional reality-based acoustics"!! What you really need to be educated in, you appear to be almost completely ignorant of (the art of music creation, recording, mixing and mastering), and not only are you ignorant of these things but you apparently deliberately want to remain ignorant and ignore/exclude them, while at the same time mocking/insulting those who understand they are fundamental to every aspect of the sound waves contained in a commercial recording! It really is quite astounding.
2. Due to the above, it's the exact opposite. It really is a shame that you cannot realise how ignorant you are of what you are crossfeeding and why therefore it's not going to work most of the time!
3. But not unfortunately about the material you are trying to crossfeed. Oh dear!

G


----------



## 71 dB

pinnahertz said:


> Yes, but which does the listener like, and which conveys the creator's intent better?  You can't seem to grasp that.



I'm sure Mr. Roach would be impressed with the spatiality of "Kiva" with my wide crossfeed.



pinnahertz said:


> Actually missed it yet again! Let me give you an example:  In my opinion, cross-feed can, sometimes, be used to reduce extreme separation to a subjective improvement, it usually reduces separation that flattens perspective and removes dimensionality, which is not a subjective improvement.



I don't experience flattened perspective or dimensionality with crossfeed unless I crossfeed too hard. I think you only assume this is happening because you think smaller ILD must do such things. No, that's not what happens. Reducing excessive ILD to natural levels gives the spatiality of the recording the best chance to shine. That's when our spatial hearing makes the most sense of the spatial information, resulting in the most natural soundstaging of the spatial information in the recording. Removal of spatial effects caused by excessive ILD/ITD is not about flattening spatiality; it's getting rid of spatial distortion, "fake spatiality" to use Trumpian terminology. Recordings have less real spatial information than you have learned to think, but that's not a negative thing. Less is more. Natural spatiality makes it possible to hear small nuances better. It's about accepting that less is more, becoming more mature and not demanding "special spatial effects" everywhere. That's better for music and listeners.



pinnahertz said:


> I think the irony of the above is apparent to all.  No doubt you want to make cross-feed "default" for the entire world.  I have no issue with you listening to cross-feed and enjoying it.  Making it mandatory for the world?  Not going to fly.
> 
> You keep thinking you have something to "teach".  I, for one, have learned it, absorbed, it, understand it AND disagree with the application of cross-feed.  "Teaching" isn't the same as "convincing".  Even teaching usually requires proof.  You have no proof, you have preference only, but think of it as proof.



I don't mean forcing people to use crossfeed against their will. I mean educating people to actually notice the benefits of crossfeed so they WANT to use it. I also mean crossfeed becoming "the third default", alongside speakers and headphones without crossfeed.

Yes, I feel I have something to teach on this issue. If I tell people why crossfeed is beneficial and what it does to the sound, people can test it themselves and possibly agree. If not, they can keep listening without crossfeed. They have nothing to lose, only to gain.

Regardless of what people, masses, think about crossfeed, I am totally convinced that crossfeed is the way to go with _most_ recordings, be it 98 %, 80 % or just 50.01%.



pinnahertz said:


> You know what else is OK?  Go ahead and make it YOUR default!  You've made your choice, stop deriding others for their choices.  You aren't "right", you've just chosen differently.  You can't justify your rightness because you have no statistics for the general preference of cross-feed.  The math may seem definitive, but it's far too basic to be. As a result, the effect is subjective.  You keep trying to justify your position by stating the math, the logic.  You ignore perception (which I suggested a few posts back), and that's your blind spot.  You've got some of the mechanics down, and have ignored the perception part completely.  And because of that, you can't understand the intent of music, or the way its mixed.   Instead, you apply YOUR intent, as a dominant superior purpose, and in the view that even the creator is wrong. And you apply YOUR intent to everyone else in the universe.



I have chosen carefully and wisely, based on scientific knowledge of human hearing. The result of this choice is that I enjoy headphone listening much more than I used to.
Crossfeed is mathematically simple, but that is a good thing. We are free of the problems we'd have with full HRTF processing, which is so detailed that things easily go wrong. Crossfeed limits ILD using the simplest possible algorithm agreeing with the principles of human spatial hearing, and that's also why crossfeed doesn't "mess up" spatiality as some people claim. Crossfeed is too simple to mess up anything. All it does is fix the problem of excessive inter-aural differences.
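For readers wondering what "the simplest possible algorithm" looks like, here is a hedged sketch of a basic crossfeed along the lines described earlier in this thread: each output channel gets an attenuated, low-pass-filtered, slightly delayed copy of the opposite channel mixed in. The gain, filter coefficient, and delay below are illustrative placeholders, not the parameters of any particular crossfeeder or plugin:

```python
# Minimal crossfeed sketch. All parameters are illustrative only.

def lowpass(x, alpha=0.3):
    """One-pole low-pass: y[n] = y[n-1] + alpha*(x[n] - y[n-1])."""
    y, out = 0.0, []
    for s in x:
        y += alpha * (s - y)
        out.append(y)
    return out

def crossfeed(l, r, gain=0.25, delay=2):
    """Return (l', r') with the filtered opposite channel fed across.

    Assumes len(l) == len(r) > delay; gain=0.25 is roughly -12 dB.
    """
    fl = [0.0] * delay + lowpass(l)[:len(l) - delay]
    fr = [0.0] * delay + lowpass(r)[:len(r) - delay]
    l_out = [a + gain * b for a, b in zip(l, fr)]
    r_out = [a + gain * b for a, b in zip(r, fl)]
    return l_out, r_out

# A hard-right impulse gains a quieter, delayed, duller image in the
# left ear, reducing the interaural level difference (ILD).
left = [0.0] * 6
right = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
l2, r2 = crossfeed(left, right)
```

The point of the sketch is the claim under discussion: the louder channel passes through untouched, and the only change is a dim, delayed copy added to the opposite ear, which limits ILD without adding reverb or room simulation.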



pinnahertz said:


> Cross-feed: You love it, I don't.  Some others like it, some others don't. Some don't care.  You have not proven which is in the majority.   And that's just considering your basic form of cross-feed, there are many others.
> 
> What I don't get is, why the fight?  What is there for you to win?  You're not selling anything, you have nothing to gain or lose regardless of how many people love, hate, or don't care about cross-feed.   If Greg and I suddenly came over to your side would the world be a better place?  We're two guys!  What about everyone else, including all the other professionals that agree with our position? Would you then need to win them over too? The way to win over professionals is by publishing a paper that details the discovery, and validates it with testing, and subjecting that paper to peer review.  Not going to do that?  Then you have no hope of winning the argument!
> 
> ...



I did not come here to fight. I came here to tell about my experiences with crossfeed and possibly to educate people. Since then I have been defending my claims.


----------



## pinnahertz

71 dB said:


> I'm sure Mr. Roach would be impressed with the spatiality of "Kiva" with my wide crossfeed.


Of course, I have the opposite opinion, but would never presume to know how someone I've never met would feel about cross-feed on his own work.


71 dB said:


> I don't experience flattened perspective or dimensionality with crossfeed unless I crossfeed too hard.


Yes, we know.


71 dB said:


> I think you only assume this happening, because you think smaller ILD must do such things.


No, I actually tried your cross-feed on that recording and drew my conclusion.


71 dB said:


> No, that's not what happens.


Whatever.  That's what I hear.
Not quoting the rest.


71 dB said:


> I don't mean forcing people to use crossfeed against their will. I mean educating people to actually notice the benefits of crossfeed so they WANT to use it.


That's not teaching, that's persuading and convincing.  There's a big difference.  


71 dB said:


> I also mean crossfeed to be "the third default" among speakers and headphones without crossfeed.


Ridiculous.  Ignoring the elephant in the room. Literally. 


71 dB said:


> Yes, I feel I have something to teach on this issue. If I tell people why crossfeed is beneficial and what it does to the sound, people can test it themselves and possibly agree. If not, they can keep listening without crossfeed. They have nothing to lose, only to gain.


But what if they decide they don't like it?  How do you view them?


71 dB said:


> Regardless of what people (the masses) think about crossfeed, I am totally convinced that crossfeed is the way to go with _most_ recordings, be it 98%, 80%, or just 50.01%.


OMG!  POINT MADE ALREADY!


71 dB said:


> 
> I have chosen carefully and wisely, based on scientific knowledge of human hearing. *The result of this choice is that I enjoy headphone listening much more than I used to.*
> Crossfeed is mathematically simple, but that is a good thing. We are free of the problems we have with HRTFs being so detailed that things easily go wrong. Crossfeed limits ILD using the simplest possible algorithm that agrees with the principles of human spatial hearing, and that is also why crossfeed doesn't "mess up" spatiality as some people claim. Crossfeed is too simple to mess up anything. All it does is fix the problem of excessive interaural differences.


I highlighted the only meaningful statement.


71 dB said:


> 
> I did not come here to fight. I came here to tell about my experiences with crossfeed and possibly to educate people. Since then I have been defending my claims.


That would be "a fight".


----------



## 71 dB (Jan 17, 2018)

gregorio said:


> 1. You just don't seem to get it; your education isn't the answer, it's the problem! Your education is largely inapplicable because commercial music recordings do not employ "three-dimensional reality-based acoustics"!! What you really need to be educated in, you appear to be almost completely ignorant of (the art of music creation, recording, mixing and mastering), and not only are you ignorant of these things but you apparently deliberately want to remain ignorant and ignore/exclude them, while at the same time mocking/insulting those who understand they are fundamental to every aspect of the sound waves contained in a commercial recording! It really is quite astounding.



Try to turn my education against me all you want, but at least I don't believe in mumbo jumbo about hyper-dimensionality and whatnot. The bits on your CD are information, and the CD doesn't know what that information is. Music? Pictures? Stock market data? It's all just bits, data, information. The moment you play the CD, your speakers or headphones start to produce three-dimensional, physical, reality-based sound waves according to the data on the disc. It doesn't matter what digital "magical non-real" effects were used to produce the CD. It is all turned into physical sound waves, and that is what your ears hear, not the bits on the disc. So no matter how you produce music, it will obey the laws of physics and acoustics, the stuff I was educated in. Nice try!


----------



## bigshot (Jan 17, 2018)

Speakers produce a dimensional soundstage. Headphones don’t. You need space to have dimension. The room provides that, not the speakers.


----------



## 71 dB

More Steve Roach arrived today!


bigshot said:


> Speakers produce a dimensional soundstage. Headphones don’t. You need space to have dimension. The room provides that, not the speakers.


It's not that black and white. Recordings contain spatial information. Binaural recordings on headphones can be very "dimensional", for example.


----------



## bigshot (Jan 17, 2018)

Recordings contain secondary depth cues... reverb, reflection of sound off studio walls, ambience, etc. But that is baked into the mix and is no different on headphones or speakers. The thing that adds real dimensional space is real dimensional space (i.e. your living room). Blending two channels together doesn't even qualify as a secondary depth cue. It's just blending two channels together.

Binaural is a gimmick for headphones. It isn't the way music is recorded or played back. And it isn't particularly dimensional. Certainly not in the sense that speaker soundstage is.


----------



## 71 dB

bigshot said:


> Recordings contain secondary depth cues... reverb, reflection of sound off studio walls, ambience, etc. But that is baked into the mix and is no different on headphones or speakers. The thing that adds real dimensional space is real dimensional space (i.e. your living room). Blending two channels together doesn't even qualify as a secondary depth cue. It's just blending two channels together.
> 
> Binaural is a gimmick for headphones. It isn't the way music is recorded or played back. And it isn't particularly dimensional. Certainly not in the sense that speaker soundstage is.



In a recording of church music recorded in a church, say organ music by Buxtehude, the recording contains the real, proper spatial information (unless recorded badly), and your room creates additional acoustic "garbage", fortunately not that harmful if your room acoustics are good. If we can simulate the same sound waves at the eardrums with headphones as with speakers + room, the result will be the same. The trick is of course to simulate accurately enough. Not easy, but not impossible either. Even if headphones have only, say, 20% of the "dimensionality" of speakers, that is still dimensional sound.


----------



## bigshot (Jan 17, 2018)

Nope. A stereo recording can only contain secondary depth cues, not actual spatial information. For that you need space. Multichannel can do a better job of reproducing secondary depth cues in an immersive way, but it still isn’t the spatial information from the church recording venue. For actual spatial information people need to be able to move their head to locate objects in space. When you make a recording, all that information gets stripped off. Head movement on playback is dictated by the spatial placement of the speakers in the room. That is the space you are hearing, not the church’s space.

Secondary depth cues are great for adding a specific atmosphere once you have real depth. But they don’t convey actual space. The room is the space, not the recording. Speakers have space in a room. Headphones have no space because they’re clamped to your head. As we’ve said many times, recordings are not created to reproduce the space of the recording venue. With multichannel, the mixer is creating an artificial balance that will wrap around the room like wallpaper on the walls of the listening room itself.


----------



## gregorio

71 dB said:


> I'm sure Mr. Roach would be impressed with the spatiality of "Kiva" with my wide crossfeed.



If you were correct and crossfeeding did make Kiva "exceptionally real", that would destroy all the effort Roach put into making it surreal instead of real. If instead it flattens the mix, as pinnahertz stated, that would damage/destroy all the effort Roach put into making it particularly dimensional. Either way, how does your assertion make sense to you? There's only one way it can make sense: simply eliminate all artistic intent and personal perception, make the whole thing simple and objective, and then you can have surety. There's just one small problem with this approach; can't you spot it? I, on the other hand, can have a good guess, based on his stated artistic intent, but I can't be sure because I don't know his personal perception!



71 dB said:


> [1] I don't experience flattened perspective or dimensionality with crossfeed unless I crossfeed too hard. [2] I think you only assume this happening, because you think smaller ILD must do such things.



1. We know that already! The difference is, I've got a RATIONAL explanation for it: that you are insensitive to/ignorant of the flattening perspective! Do you think the recording knows when you've crossfed "too hard" and suddenly flattens itself, or do you think that as soon as you apply crossfeed you are flattening the stereo effect, and this flattening increases as you increase crossfeed, but you're just not aware of it? Does what you're saying still make sense to you?

2. It's a shame. At one point, after god knows how many pages, you started contradicting all the nonsense you'd made up, and I thought we might have a rational discussion, but it seems that as soon as rationality and the actual facts rear their ugly heads you simply retreat into your trusted defence of making up nonsense and making out that you're educated and those who disagree with you are the opposite. I'm actually more highly educated than you, and I've got two and a half decades of actually making recordings for a living, but let's not let facts get in the way of nonsense theories!

For the last time, as there's no point humiliating you any more with logic and the facts: Recordings are full of spatial information, much of it dependent on the time and direction of reflections/delays/echoes. You want to cure excessive ILD using a technique (crossfeed) which not only cures excessive ILD but changes the time and direction of those reflections/delays/echoes. That's great for you, because all you seem concerned about is excessive ILD, and you obviously can't hear/perceive the damaging effect on the time-based effects used on recordings. And you're the dictator who defines what is "excessive" ILD in the first place (with the aid of some bees, apparently), so all's good there too. The facts and answers to ALL the points you're now making are already in my previous posts, but you're deliberately avoiding them because you don't want to know the facts, just make them up as you go along in order to defend your "theories", so clearly I'm wasting my time and yours, because I presume it takes you time to make this stuff up!

G


----------



## bigshot

Space is dimensional. Secondary depth cues are just sounds. If you want dimensional sound, the sounds have to be wrapped around an actual space that the listener inhabits. Otherwise it’s just sound without dimension.


----------



## 71 dB

gregorio said:


> 1. We know that already! The difference is, I've got a RATIONAL explanation for it; that you are insensitive to/ignorant of the flattening perspective! You think maybe the recording knows when you've crossfed "too hard" and suddenly flattens itself or do you think as soon as you apply crossfeed you are flattening the stereo effect and this flattening increases as you increase crossfeed but you're just not aware of it? Does what you're saying still make sense to you?



Dude, I have been crossfeeding for years and making music for years, playing with spatial effects. I know when the crossfeed level is proper and when it's not. I wouldn't have written even one post about crossfeed here if I didn't. I don't know what is wrong with you and pinnahertz, having a need to fight me this much and being so stubborn. Is it so difficult to admit you haven't had a full understanding of crossfeed? It's not a shame. I learned about spatial hearing at university in the early 1990s and discovered crossfeed in 2012, nearly two decades later. It took me that long to "connect the dots." Better late than never.

It is not as simple as smaller ILD meaning flatter stereo effects. We have an _optimal_ range of ILD (as a function of frequency, of course). Going above that range means spatial distortion, and going below it means a narrowed, mono-like stereo image. I use crossfeed to get to that optimal point, and yes, my ears are trained to find it.
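The quantity being argued about here is at least measurable. Below is a broadband sketch of ILD in dB between two channels (a real analysis would be done per frequency band; the function name is mine):

```python
import numpy as np

def ild_db(left, right, eps=1e-12):
    """Broadband interaural level difference in dB (left relative to right).

    A crude single-number summary; actual ILD varies with frequency,
    so a per-band version would filter first and apply this per band.
    """
    rms_l = np.sqrt(np.mean(np.square(left)))
    rms_r = np.sqrt(np.mean(np.square(right)))
    # eps guards against log of zero on a silent channel
    return 20.0 * np.log10((rms_l + eps) / (rms_r + eps))
```

For identical channels this returns 0 dB; for a channel at twice the amplitude of the other it returns about 6 dB, and a hard-panned source drives it toward extreme values, which is the "excessive ILD" case under discussion.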



gregorio said:


> 2. It's a shame, at one point, after god knows how many pages, you started contradicting all the nonsense you'd made up and I thought we might have a rational discussion but it seems as soon as rationality and the actual facts rear their ugly heads you simply retreat into your trusted defence of making up nonsense and making out you're educated and those who disagree with you are the opposite. I'm actually more highly educated than you and I've got two and a half decades of actually making recordings for a living but let's not let facts get in the way of nonsense theories!



Good for you if you are more highly educated. Please start thinking like an educated person. I'm not contradicting myself. I have been consistent all along. It's not my fault sound engineers don't care much about excessive ILD/ITD in the music they produce. I'm just pointing it out. You try to defend your profession by calling it "artistic intent".



gregorio said:


> For the last time, as there's no point humiliating you any more with logic and the facts: Recordings are full of spatial information, much of it dependent on time and direction of reflections/delays/echoes. You want to cure excessive ILD using a technique (crossfeed) which not only cures excessive ILD but changes the time and direction of those reflections/delays/echoes. That's great for you because all you seem concerned about is excessive ILD and you obviously can't hear/perceive the damaging affect on the time based effects used on recordings. And, you're the dictator who defines what is "excessive" ILD in the first place (with the aid of some bees apparently), so all's good there too. The facts and answers to ALL the points you're now making are already in my previous posts but you're deliberately avoiding them because you don't want to know the facts, just make them up as you go along in order to defend your "theories", so clearly I'm wasting my time and your's because I presume it takes you time to make this stuff up!
> 
> G


Spatial information is spatial information only if it makes sense to spatial hearing. If it doesn't make sense, it's spatial disinformation, distortion. Excessive ILD means spatial distortion, and crossfeed is one way to scale spatial information so that it makes sense spatially. If we look at the crossfed signal, yes, crossfeed does make minor timing changes, but they are harmless: the timing changes alter the shape of the sound image a little, similar to moving a bit further away from the speakers, but most importantly, with crossfeed we actually achieve spatial information that makes sense. As I have said before, the timing of a crossfeeder happens in less than a millisecond, while reflections/delays/echoes work at timescales of several milliseconds. Since both audio channels are crossfed using the same principle, spatial hearing can handle how the sound is crossfed, especially because crossfeeding changes the sound so that it makes more sense to spatial hearing. Your skepticism of crossfeed is at a ridiculous level. When you understand what crossfeed does and what it means, you will see that.

If you are so worried about damage to time-based effects, you should be horrified by speakers and room acoustics! So many new reflections and so much reverberation working at the same timescale of several milliseconds as the spatial effects in the recording. If that isn't a mess I don't know what is, except it isn't that big a mess, because spatial hearing can handle it. It's all natural sound with spatial information that makes sense. So have some faith in crossfeed. For such a simple thing it does miracles.


----------



## 71 dB

bigshot said:


> Space is dimensional. Secondary depth cues are just sounds. If you want dimensional sound, the sounds have to be wrapped around an actual space that the listener inhabits. Otherwise it’s just sound without dimension.



Spatial hearing has to "decode" the direction of the sounds using spatial cues such as ILD, ITD and spectral changes. Spatial hearing knows the sound comes from behind instead of ahead, because among other things the ears "block" sound from behind more causing attenuation of high frequencies. If you feed all these spatial cues correctly using headphones, you can fool spatial hearing to hear any kind of spatiality.


----------



## Zapp_Fan (Jan 17, 2018)

71 dB said:


> If it doesn't make sense, it's spatial disinformation, distortion. Excessive ILD means spatial distortion and crossfeed is one way to scale spatial information so that it makes sense spatially.



I think there is a fundamental point of disagreement here.  You seem to believe that ILD / ITD and other spatial cues SHOULD be naturalistic, i.e. as if originating from a physical object in 3D space.  And eliminating unnatural, i.e. purely artificial levels of ILD and ITD is generally desirable.  The thing is, I think this is actually a very controversial opinion.  One need look no further than Steve Reich's Come Out to see artistic use of unnatural ITD in a very clearly intentional way.  

Not only does this song absolutely and unambiguously depend on unnatural use of stereo, it's considered a watershed in the artistic use of recording/mixing techniques for special effect.

You might not care for it, but as someone who listens to a great deal of purely artificial (i.e. electronic) music it's very odd to hear someone assert that unnatural ILD / ITD is best described as "distortion".

When it comes to acoustic recordings that are deliberately mixed to produce a natural soundstage, I tend to agree that reducing "illegal" ITD is sensible.  However, there is a great deal of very good music to which that does not apply.


----------



## pinnahertz

71 dB said:


> 1. Dude, I have been crossfeeding for years, making music for years playing with spatial effects. I know when the crossfeed level is proper and when it's not. I wouldn't have written even one post about crossfeed here if I didn't know. 2. I don't know what is wrong with you and pinnahertz having a need to fight me this much and being so stubborn. 3. Is it so difficult to admit you haven't had full understanding of crossfeed? It's not a shame. 4. I learned about spatial hearing at university in the early 1990s and discovered crossfeed in 2012, nearly two decades later. It took me that long to "connect the dots." Better late than never.


1. The self-assured attitude is getting in your way.  A true scientist would explore why the professionals don't agree with the amateur, eliminating supposed self-assurance first to find the real cause.
2. We've said the same thing about you, though.  You've got the concept to promote and convince the world, we just try to keep the thread grounded in reality.
3. I'll throw that same sentence back to you.  See what's happening?
4. You've learned about spatial hearing, but have not learned about perception. 


71 dB said:


> 5. It is not as simple as smaller ILD meaning flatter stereo effects. We have an _optimal_ range of ILD (as a function of frequency, of course). 6. Going above that level means spatial distortion and going below it means a narrowed, mono-like stereo image. 7. I use crossfeed to get to that optimal point and yes, my ears are trained to find that.


5. You cannot have such a thing as "optimal ILD" because you don't know what ILD is built into the recording, and clearly have no idea what the creator or listener desired.  There is, therefore, no such thing as "optimal ILD" of cross-feed.   There is, however, _your opinion_ of what optimal ILD is.  We do not agree. 
6. Don't you have that backwards?  Aren't you trying to talk about the level of the cross-fed signal?  More would be mono.  Spatial distortion is a made-up term.
7. What you have done is train your "ears" (actually, you've trained your perception) to match your hard-line concept of what's right leaving no room for variation.


71 dB said:


> Good for you if you are more highly educated. Please start thinking like an educated person.


Fair warning...you're moving very close to a personal attack here.


71 dB said:


> I'm not contradicting myself. I have been consistent all the time. It's not my fault sound engineers don't care much about excessive ILD/ITD on the music they produce. I'm just pointing it out. You try to defend your profession by calling it "artistic intent".


You're saying sound engineers are wrong....yet again. 


71 dB said:


> Spatial information is spatial information only if it makes sense to spatial hearing. If it doesn't make sense, it's spatial disinformation, distortion.


I disagree with the above.  Spatial information must be interpreted by human perception after filtering by spatial hearing.  It always makes sense, perhaps not the sense you'd like, but it made sense to the person creating the recording. 


71 dB said:


> 8. Excessive ILD means spatial distortion 9. and crossfeed is one way to scale spatial information so that it makes sense spatially. 10. If we look at the crossfed signal, yes, crossfeed does make minor timing changes, but they are harmless: the timing changes alter the shape of the sound image a little, similar to moving a bit further away from the speakers, but most importantly, using crossfeed we actually achieve spatial information that makes sense. As I have said before, the timing of a crossfeeder happens in less than a millisecond, while reflections/delays/echoes work at timescales of several milliseconds. 11. Since both audio channels are crossfed using the same principle, spatial hearing is able to hear how the sound is crossfed, especially because crossfeeding changes the sound so that it makes more sense to spatial hearing. 12. Your skepticism of crossfeed is at a ridiculous level. When you understand what crossfeed does and what it means you will see that.


8. No, it does not.  That definition is yours, and yours alone.
9. No, it does not.  That definition is yours, and yours alone.
10.  The timing in your cross-feed signal is completely inadequate to provide any sense of position outside the head.  It's a single delay as a result of the filter, not an HRTF delay.  It's pointless to discuss as the delay component is dominated by the ILD. 
11. The degree to which cross-feed makes sense can only be determined by the reaction of a listener.  You don't have consistent reaction on your side at all here. 
12. I've been working with cross-feed of various kinds, yours included, since the beginning of this discussion.  I fully understand what it does, but I also understand what it does not do.  I approached my evaluation from the standpoint of exploration with as little bias as possible.  I've found where it sort of works, where it really works, and where it fails.  Just last night I created my own variant of your cross-feed, only with actual HRTF delay and EQ, and applied it along with several other processes to a late 1960s recording, "McArthur Park", by Richard Harris.  Your cross-feed helped a little, it was better than no cross-feed, but I never found the sweet spot.  It just moves everything into the head.  Redline Monitor was only different, not better.  I always end up turning that one down until I like it, only to find I've turned it pretty much off.  Mine is a fairly complex processor stack that not only pulled in the hard L and R orchestra parts, but also spread them out to occupy a group of angles. So I'm not trying to move L and R into point-source speaker locations, I'm trying to spread the signal to that of a real orchestra, and to move the entire stage forward out of the head.  It sort of works, but wouldn't for any other recording.   My point in this anecdote is, I'm still trying to get cross-feed working, but the best I can come up with is a very customized solution for each track, and typically, general cross-feed does not help with the intent of the music as I perceive it.  And it only works for one recording.  I threw a few others at it, total fail. 
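On the sub-millisecond figure in point 10: the delay contributed by a first-order lowpass can be estimated directly from its phase response. A small numpy sketch (the 700 Hz cutoff is an assumed, bs2b-like value, and the function name is mine):

```python
import numpy as np

def lowpass_phase_delay(f_hz, cutoff_hz):
    """Phase delay in seconds of a first-order lowpass at frequency f_hz.

    H(j*f) = 1 / (1 + j*f/fc), so the phase is -arctan(f/fc) and the
    phase delay is arctan(f/fc) / (2*pi*f).
    """
    return np.arctan(f_hz / cutoff_hz) / (2.0 * np.pi * f_hz)

# At 200 Hz with a 700 Hz cutoff the delay comes out on the order of
# 0.2 ms, i.e. well under the several-millisecond scale of room
# reflections, consistent with calling it "a single delay from the filter".
```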


71 dB said:


> If you are so worried about damage to time-based effects you should be horrified by speakers and room acoustics! So many new reflections and so much reverberation working at the same timescale of several milliseconds as the spatial effects in the recording. If that isn't a mess I don't know what is, except it isn't that big a mess, because spatial hearing can handle it. It's all natural sound with spatial information that makes sense. So, have some faith in crossfeed. For such a simple thing it does miracles.


I feel like you're arguing for the sake of argument here.  We work in rooms with room acoustics, and we know it.  We know reflections because we've had to deal with bad ones and embrace good ones.  That "mess", as you call it, is the reality of stereo with two speakers in a room.  It's part of the medium we're working in.  Why are we still talking about this?

I don't feel cross-feed is a miracle at all.  It's a band-aid at best for certain extreme and specific recordings.  Why are we still talking about that too?


----------



## pinnahertz

Zapp_Fan said:


> I think there is a fundamental point of disagreement here.  You seem to believe that ILD / ITD and other spatial cues SHOULD be naturalistic, i.e. as if originating from a physical object in 3D space.  And eliminating unnatural, i.e. purely artificial levels of ILD and ITD is generally desirable.  The thing is, I think this is actually a very controversial opinion.  One need look no further than Steve Reich's Come Out to see artistic use of unnatural ITD in a very clearly intentional way.
> 
> Not only does this song absolutely and unambiguously depend on unnatural use of stereo, it's considered a watershed in the artistic use of recording/mixing techniques for special effect.
> 
> ...


"Come Out" is a perfect example, thank you!  

I gotta say, I thought I was the only one that even knew the recording! I got it when it was originally released, and it still has playable grooves on it, and the entire record is in my iTunes library.  The Oliveros side has some interesting use of unnatural space  and huge tape delay too.  I've always found the Maxfield piece technically bewildering.


----------



## Zapp_Fan (Jan 17, 2018)

pinnahertz said:


> "Come Out" is a perfect example, thank you!
> 
> I gotta say, I thought I was the only one that even knew the recording! I got it when it was originally released, and it still has playable grooves on it, and the entire record is in my iTunes library.  The Oliveros side has some interesting use of unnatural space  and huge tape delay too.  I've always found the Maxfield piece technically bewildering.



They played "Come Out" for us in art class in college, and it was pretty interesting sitting there and listening to the whole thing straight through in a group setting.  It definitely made an impression though, that was nearly 15 years ago and I still remember it... TBH I hadn't listened to the other cuts on that album until today.  The Maxfield recording honestly seems like someone screwing around with two Moogs at once and feeding each to a separate channel, which actually also lends a lot of credence to the idea that listening to it with crossfeed would, at minimum, not enhance the artist's intended effect.  It's not really possible to argue that there's some spatial information to retrieve when it's just two different tracks going on in parallel, lol.

I guess you can listen to them on speakers, so crossfeed is not a "wrong" way to listen to these tracks, but it's also hard to say it's a better way either, since a central purpose of the recording is to have wildly different (and obviously artificially arranged) content on each channel at all times.


----------



## bigshot (Jan 17, 2018)

71 dB said:


> Spatial hearing knows the sound comes from behind instead of ahead, because among other things the ears "block" sound from behind more causing attenuation of high frequencies.



One of the primary ways to determine sound location involves moving the head to locate the sound. (That's why animals cock their heads and move their ears when they hear predators.) You can't move your head relative to the location of the sound with headphones. You can do that with speakers however. Things like alteration of timing and frequency changes and level changes are all secondary cues, and without head movement, they can only give a generalized impression of depth, not actual location.

As I said before, sound location is limited to secondary cues with headphones. With speakers, more and more spatial information is available depending on how many speakers one uses. But when it comes to primary location cues, you're limited to the distances from the speakers themselves. You can't completely convey the spatial information in the recording, nor would you want to.

Monkeying around with secondary depth cues is fun. DSPs that introduce sophisticated delays can create all kinds of envelopes you could use to sweeten your sound. But it isn't real spatial information, and it isn't decoding anything embedded in the recording itself. It's just wrapping a layer of secondary depth cues around the outside of the sound. That is the best you can do. You can't record and play back an auditory spatial environment with any kind of accuracy. You can create lots of fun synthetic ones, though.



Zapp_Fan said:


> I guess you can listen to them on speakers, so crossfeed is not a "wrong" way to listen to these tracks.



Cross-feed with headphones isn't at all like listening to music on speakers.


----------



## 71 dB

Zapp_Fan said:


> I think there is a fundamental point of disagreement here.  You seem to believe that ILD / ITD and other spatial cues SHOULD be naturalistic, i.e. as if originating from a physical object in 3D space.  And eliminating unnatural, i.e. purely artificial levels of ILD and ITD is generally desirable.  The thing is, I think this is actually a very controversial opinion.  One need look no further than Steve Reich's Come Out to see artistic use of unnatural ITD in a very clearly intentional way.



Playing with excessive ILD might be cool, but it's abusing your medium. Did Steve Reich make Come Out for speakers or headphones? There is no excessive ILD with speakers, but when you put headphones on, suddenly there is! What is so good about excessive ILD? Why would you want to hear it?


----------



## bigshot

71 dB said:


> Did Steve Reich make Come Out for speakers or headphones? There is no excessive ILD with speakers. When you put headphones on, suddenly there is!



If it was designed to listen to on speakers, speakers are probably the best way to listen to it. If you can't and you absolutely have to use headphones, then you can slap a bandaid on it if you want, or modify it through signal processing to sound another way altogether. But that doesn't mean that you are "decoding spatial information that is recorded into the track". You're just adding salt and pepper to taste.


----------



## Zapp_Fan (Jan 17, 2018)

71 dB said:


> Playing with excessive ILD might be cool, but it's about abusing your medium. Did Steve Reich make Come Out for speakers or headphones? There is no excessive ILD with speakers. When you put headphones on, suddenly there is! What is so good about excessive ILD? Why would you want to hear it?




I think there is a greater point that I didn't express clearly in my comment about Come Out.  Basically in recordings like that, there isn't really anything that's intelligible as ILD or ITD because each channel is completely different content. The correlation is nearly zero and it's not even a recording of an instrument, it's synthetic.

So it's irrelevant whether it was mixed for headphones or speakers, because there is no stereo image or spatial content to begin with.  An extreme case, but one that proves 1) artistic intent often violates physical rules of spatialization, quite intentionally, and 2) trying to approximate natural spatial information is often the wrong approach to artistic intent, since the artist didn't intend for there to be spatial information at all.  As such, any way one chooses to listen is probably equally valid, unless the artist says otherwise.

As far as why someone might enjoy listening to that, well, science can never justify an opinion, so we'll have to move that to another forum.


----------



## 71 dB

pinnahertz said:


> You're saying sound engineers are wrong....yet again.


Depends on what they say and do. I'm sure they do a lot of things right, but the dilemma of creating recordings spatially suitable for both speakers and headphones is a bit unsolved among sound engineers, although admittedly progress has happened; older recordings are what they are. There are two ways to deal with the speaker/headphone thing:

(1) Mix for speakers and let listeners use crossfeed for headphones
(2) Mix cleverly for both


----------



## bigshot (Jan 17, 2018)

Zapp_Fan said:


> So it's irrelevant whether it was mixed for headphones or speakers, because there is no stereo image or spatial content to begin with.



The position of the speakers and the room itself lend the spatial content to the music. The music is designed to be played back with the speakers at a particular angle and distance from the listener, so instead of the spatial content being encoded into the music, the music is created to sound good in your living room, as long as your living room fits the general requirements of the space they designed it to be played in.

Spatial content and sound location require space. It's that simple.



71 dB said:


> (1) Mix for speakers and let listeners use crossfeed for headphones
> (2) Mix cleverly for both



The solution is to add the sound of your own favorite speaker and room or headphones to the music to create a sound that pleases you. Engineers are mixing to a general standard. No one is required to adhere to that standard. And engineers can't be expected to mix for every person's individual preferences. Some people like to stretch out 1.33:1 TV shows to fit 1.85 screens. Other people like to listen to music using a reverb to make their room sound like Carnegie Hall. Still others want to hear every S and T so they listen to "analytical headphones" that emphasize the treble. There's nothing wrong with any of this. There's nothing wrong with cross feed. It's just not standard and it doesn't really add any spatial information. It just reduces the channel separation. If you like that effect, go for it. Engineers aren't going to ever mix for that though. They don't even mix for headphones generally, and they only grudgingly check the sound on tiny speakers. Engineers like working to standards. That way they can focus on their mix, not exceptions to the rule.


----------



## 71 dB

Zapp_Fan said:


> I think there is a greater point that I didn't express clearly in my comment about Come Out.  Basically in recordings like that, there isn't really anything that's intelligible as ILD or ITD because each channel is completely different content. The correlation is nearly zero and it's not even a recording of an instrument, it's synthetic.
> 
> So it's irrelevant whether it was mixed for headphones or speakers, because there is no stereo image or spatial content to begin with.  An extreme case, but one that proves 1) artistic intent often violates physical rules of spatialization, quite intentionally, and 2) trying to approximate natural spatial information is often the wrong approach to artistic intent, since the artist didn't intend for there to be spatial information at all.  As such, any way one chooses to listen is probably equally valid, unless the artist says otherwise.
> 
> As far as why someone might enjoy listening to that, well, science can never justify an opinion, so we'll have to move that to another forum



For uncorrelated channels the ILD is simply a huge value depending on the content. Speakers + room create spatiality and natural ILD.

Artistic intent can be unfavored. All music in the world is artistic intent, but you hardly like it all, do you? Some artistic choices suck. What Steve Reich did 50 years ago in the early days of stereo sound might have been interesting and cool at the time, but to the ears of 2018 Come Out sounds stupid, at least to my ears. Crossfeed or not (I tried both), it was an extremely annoying borefest to me. I have heard much better things from him.

If an artist wants to avoid spatial information for some reason, I recommend mono sound.


----------



## Zapp_Fan

71 dB said:


> For uncorrelated channels the ILD is simply a huge value depending on the content. Speakers + room create spatiality and natural ILD.



OK fair, yes, spatial information is generated by loudspeaker playback. My point was more that within the recording itself, there is no sensible way to talk about a naturalistic stereo image.



71 dB said:


> Artistic intent can be unfavored. All music in the world is artistic intent, but you hardly like it all, do you? Some artistic choices suck.



OK, yes, but so what?  You can choose to play it back however you want. But if artistic intent is to be used as a general guideline for what type of playback is most correct, we must acknowledge that artistic intent often conflicts with naturalistic sound. 



71 dB said:


> What Steve Reich did 50 years ago in the early days of stereo sound might have been interesting and cool at the time, but to the ears of 2018 Come Out sounds stupid, at least to my ears. Crossfeed or not (I tried both) it was extremely annoying borefest to me. I have heard much better things from him.



I agree it's not his most listenable work, although I don't think that's the point. 



71 dB said:


> If an artist wants to avoid spatial information for some reason, I recommend mono sound.



Steve Reich wanted to make a point using ping-pong / delay effects; we may dislike it, but it is not anyone's place to say he was wrong to try it in the first place.


----------



## bigshot

71 dB said:


> If an artist wants to avoid spatial information for some reason, I recommend mono sound.



Mono sound has spatial information. It has the same sorts of secondary depth cues baked in that stereo has. If it's played back through a speaker, the placement of the speaker and the room lend their spatial information to it. Same as stereo, just one channel, so channel separation isn't an issue.


----------



## pinnahertz

71 dB said:


> 1. Playing with excessive ILD might be cool, but it's about abusing your medium. 2. Did Steve Reich make Come Out for speakers or headphones? 3. There is no excessive ILD with speakers. When you put headphones on suddenly there is! What is so good about excessive ILD? Why would you want to hear it?


1. This is YOUR  subjective opinion of preference ONLY!  It is no more correct than any other subjective opinion of preference. 
2. You need to hear this both ways.  I first heard it on speakers, but then I heard it on headphones.  That is a completely different and valid experience of his art.  Don't even bother commenting on this again until you've actually heard it.  One thing we know for sure, Reich never intended his work to be heard on headphones with cross-feed!
3. Because it is another, just as valid, version of an art form in many cases.  There are a few that are simply bad on headphones and a few that benefit.  The headphone perspective was known when most stereo recordings were made.  It's highly likely some were even purposely mixed for headphones with deliberately high ILD.  Now your next comment will be "Those engineers were idiots".  You are in no position to judge.

Want another example?  The Moody Blues, "The Best Way To Travel".  There are several effects that are intended to circle the listener then move away, then back again for a repeat. In headphones they can easily be perceived as doing just that.  With speakers, not so much.  In headphones with cross-feed, the effect is lost.  Was this piece mixed for headphones?  Speakers?  Both?  Was it mixed for cross-feed headphones?  The last question is the only one we can answer definitively: NO!  There was no cross-feed then.  And cross-feed destroys the effect.  

Sure, it's one example, and I'm not going to waste time compiling a list for someone who claims to fully understand 98% of all stereo music ever recorded, and has convinced himself every recording engineer is inept.


----------



## pinnahertz

bigshot said:


> Mono sound has spatial information. It has the same sorts of secondary depth cues baked in that stereo has. If it's played back through a speaker, the placement of the speaker and the room lend their spatial information to it. Same as stereo, just one channel, so channel separation isn't an issue.


Yep. And one channel out of one speaker can be panned vertically using HRTF simulation.  That would also be spatial information.


----------



## pinnahertz

71 dB said:


> For uncorrelated channels the ILD is simply a huge value depending on the content. Speakers + room create spatiality and natural ILD.
> 
> Artistic intent can be unfavored. All music in the world is artistic intent, but you hardly like it all, do you? Some artistic choices suck. What Steve Reich did 50 years ago in the early days of stereo sound might have been interesting and cool at the time, but to the ears of 2018 Come Out sounds stupid, at least to my ears. Crossfeed or not (I tried both) it was extremely annoying borefest to me. I have heard much better things from him.


Did the artistic intent get communicated to you?  I doubt it.  He had a point to make, a very strong one.  The piece makes it, and in an interesting, creative way using the tools of the time.  Of course, it does take an open mind with artistic appreciation at heart to realize any of that.  I don't think it sounds stupid at all.  It has impact, a message, and the highly repetitive nature of the message serves to emphasize it and drive it home.


----------



## bigshot

It's important to remember that when it comes to recorded music, the science is intended to serve the art, not the other way around.


----------



## 71 dB

bigshot said:


> It's important to remember that when it comes to recorded music, the science is intended to serve the art, not the other way around.


Science serves music by teaching how to achieve natural spatiality. Science doesn't tell you what instruments to play or what to play with those instruments. Science tells you how to create ILD values for those instruments that make sense.

Now I will take some distance from this thread. This is going too far already.


----------



## bigshot

71 dB said:


> Science serves music by teaching how to achieve natural spatiality.



Recording music isn't just about capturing sound realistically. If it was just that, it wouldn't be artistic at all. It would just be a technical exercise. But recording and mixing music involves a great deal of artistry. Organizing a soundscape for music to exist in is a highly creative process.


----------



## gregorio

71 dB said:


> Depends on what they say and do. I'm sure they do a lot of things right, but the dilemma of creating recordings spatially suitable for both speakers and headphones is a bit unsolved among sound engineers ...



I want a car that's suitable for both the average family and for racing in formula 1 but I can't have such a car and the reason, if I apply your logic, is that: I'm sure car designers "do a lot of things right" but they are effectively "wrong", "ignorant", "idiots" or not "clever" enough to solve this "dilemma". Any rational person who knows the basics of car design understands there are irreconcilable differences between an average family car and an F1 car and that any attempt at reconciling those differences must result in compromising one or both of them. Any rational person would not blame all car designers for this fact! Likewise with sound engineers, we mix for speakers, exceedingly rarely just for headphones and will sometimes make compromises to the speakers mix for HP presentation but this decision and how much compromise depends on our subjective opinion of how artistic intent is affected, this is the ONLY rational approach given the irreconcilable differences between speakers and HPs. 

It's shocking the lengths you're willing to go to in order to defend your "theory": presenting your "theory" as science, saying all engineers don't know how to do their job, claiming anyone who disagrees with you is wrong because they are ignorant and then, when demonstrated to be wrong, simply redefining or misrepresenting anything which contradicts you (including science, art/music and the relationship between them), repeating the same nonsense all over again and digging yourself into an even deeper hole by making up even more nonsense to defend the nonsense you've previously made up.



71 dB said:


> [1] Science serves music by teaching how to achieve natural spatiality.
> [1a] Science doesn't tell what instruments to play or what to play with those instrument.
> [1c] Science tells how to create ILD values for those instruments that make sense.
> [2] Now I will take some distance of this thread. This is going too far already.



1. Another classic example of making up nonsense. Science does NOT teach music how to achieve anything naturally, including "spatiality". If it did, the history of music would be entirely different and most music today would not exist because ...
1a. This is utter nonsense, to the point of being the exact opposite of the facts! There are so many examples of the opposite being true, it's impossible to even count them all but here's just 3:
A. Science clearly tells us (from the time of Pythagoras) what notes a metal tube can produce (the harmonic series), therefore composers could not use brass instruments whenever they wanted and when they did use them, they could only play the notes in the harmonic series. In the 1800's we got around this problem with "natural" trumpets and horns by inventing the valve, which turned them into "unnatural" tubes, effectively making them 4 tubes in one. Orchestral music composition was then able to evolve into chromaticism, impressionism and nearly all the classical genres of the C20th.
B. A drumkit, 2 guitars and a voice cannot operate as an ensemble by what science teaches is "natural". We have to electrify those guitars to produce an amplified and distorted sound which is entirely unnatural. If we stuck to what science teaches music is "natural", we could not have a rock band or any of the music genres derived from it!
C. Science tells us that a drummer has two hands, this limits what instruments a drummer can play and what he/she can play with those instruments. In the early 1990s several genres/sub-genres of music evolved which were reliant on ignoring this fact!
1c. There are two *obvious* problems with this statement: A. Science does NOT tell us how to create ILD values that make sense. Science tells us what ILD values would be present in a "natural" situation, but what "makes sense" is quite different and down to individual perception. For this reason, you do not get to dictate what "makes sense"; who do you think you are?
B. As demonstrated by 1a/1b above (plus countless other examples), most music exists because it ignores what science tells us is "natural" and a fair amount exists by also ignoring or even deliberately doing the opposite of what "makes sense"!

2. Way, way too far! And, you said you were going to take some distance from this thread quite a number of posts ago but what did you actually do? You took it even further and made up even more nonsense, this time about music and how it should be dictated by what you think "makes sense" and what science tells us is "natural"!

G


----------



## ironmine

Have you guys tried WAVES NX?

It's simply amazing.


----------



## tansand (May 4, 2018)

Currently awaiting parts for an unswitched -9 dB Meier crossfeed to go between my colorfly and portable amp. Anything that gets the angry little guys with nasty mouths so exercised is probably worth checking out.


----------



## skwoodwiva

tansand said:


> Currently awaiting parts for an unswitched -9 dB Meier crossfeed to go between my colorfly and portable amp. Anything that gets the angry little guys with nasty mouths so exercised is probably worth checking out.


More please? 

I do use the built in UAPP one. 
T do you know what these sliders do?


----------



## RRod

skwoodwiva said:


> More please?
> 
> I do use the built in UAPP one.
> T do you know what these sliders do?



Let IL = the ear associated with the channel, CL = the ear opposite the channel.
If there are two controls for crossfeed, they are typically:

- The -3 dB frequency of the low-pass filter to CL
- The level difference between IL and CL at 20 Hz

See more here.
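Those two controls can be sketched roughly like this. The values below (cutoff, feed level) are assumptions for illustration, not bs2b's actual defaults; note that the one-pole low-pass itself contributes the small inter-channel phase delay discussed elsewhere in this thread:

```python
import math

# Sketch of a bs2b-style crossfeed with the two typical controls:
# a low-pass cutoff for the opposite-ear (CL) feed, and the IL/CL
# level difference at low frequency. FS, FCUT and FEED_DB are
# assumed values, not any plugin's real defaults.

FS = 44100          # sample rate, Hz
FCUT = 700.0        # -3 dB point of the low-pass toward CL
FEED_DB = -6.0      # CL level relative to IL at low frequency

def crossfeed(left, right):
    """One-pole low-pass on each opposite channel, mixed in at FEED_DB."""
    g = 10.0 ** (FEED_DB / 20.0)
    a = math.exp(-2.0 * math.pi * FCUT / FS)   # one-pole coefficient
    lp_l = lp_r = 0.0
    out_l, out_r = [], []
    for l, r in zip(left, right):
        lp_l = (1.0 - a) * l + a * lp_l        # low-passed left
        lp_r = (1.0 - a) * r + a * lp_r        # low-passed right
        out_l.append(l + g * lp_r)             # right leaks into left ear
        out_r.append(r + g * lp_l)             # left leaks into right ear
    return out_l, out_r
```

Feed a steady signal into the left channel only and the right output settles at the feed level (about half amplitude for -6 dB), while high frequencies leak much less because of the low-pass.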


----------



## skwoodwiva

RRod said:


> Let IL = the ear associated with the channel, CL = the ear opposite the channel.
> If there are two controls for crossfeed, they are typically:
> - The -3 dB frequency of the low-pass filter to CL
> - The level difference between IL and CL at 20 Hz
> ...


So these are the slider equivalents.


----------



## RRod

skwoodwiva said:


> So these are the slider equivalents



Yep, if it's copying bs2b.


----------



## castleofargh

ironmine said:


> Have you guys tried WAVES NX?
> 
> 
> 
> ...


got it when they were making the Kickstarter campaign for their Bluetooth tracker. if you have a webcam, a lot of light in your room and a fairly decent CPU, I strongly suggest that you try it. I was just "meh" about the crossfeed itself after setting up my measurements (there is always some luck factor in whether the model they use is close enough to your own HRTF, or not), but head tracking is pretty cool under good conditions. so if you already have good results, it's worth a try. I say good conditions because in low light the FPS drops make the experience horrible. as for the Bluetooth tracker, it mostly sucks based on my experience (now I can't even get it to work: it can't connect without the update, but it can't update without connecting...)


----------



## ironmine

castleofargh said:


> got it when they were making the kickstarter campaign for their Bluetooth tracker. if you have a webcam, lot of light in your room and fairly decent CPU, I strongly suggest that you try. I was just  "meh" about the crossfeed itself after setting up my measurements(there is always some luck factor where whatever model they use will be close enough from our own HRTF, or not). but head tracking is pretty cool when under good conditions. so if you already have good results, it's worth a try. I say good conditions because in low light the FPS drops make the experience horrible, as for the Bluetooth tracker, it mostly sucks based on my experience(now I can't even get it to work, it can't connect without the update, but it can't update without connecting...)



No, I don't use the headtracking function. I turn off this feature and just use the crossfeed section (the room ambience can be turned on or off separately). To me, Waves NX crossfeed sounds very realistic and juicy. It's probably as good as 112dB Redline Monitor or even better.

They also have a NX surround version that can take a 2.0 stereo signal and convert it to 5.1 surround sound. I tried it in SoundForge (I cannot make it work with Foobar), it's great.


----------



## ironmine

No, I don't use the headtracking function.

(When I listen to music, my body is on the sofa and my head is motionless, just resting on the pillow. I am half-awake and half-asleep, in my favorite sensory deprivation mode.  So, tracking the position of my head in this situation does not require much sophistication; it's like tracking the position of an Egyptian pyramid.  It's just there, man. And this is where it's going to be until the end of an album.)

I turn off this feature and just use the crossfeed section (the room ambience can be turned on or off separately). To me, Waves NX crossfeed sounds very realistic and juicy. It's probably as good as 112dB Redline Monitor or even better.

They also have a NX surround version that can take a 2.0 stereo signal and convert it to 5.1 surround sound. I tried it in SoundForge (I cannot make it work with Foobar), it's great.


----------



## Erik Garci

Erik Garci said:


> Using the first PRIR, central sounds seem to be in front of you, and they move properly as you turn your head. However, far-left and far-right sounds stay about where they were. That is, they sound about the same as they did without a PRIR, and they don't move as you turn your head. In other words, far-left sounds stay stuck to your left ear, and far-right sounds stay stuck to your right ear. It's possible to shift the far-left and far-right sounds towards the front by using the Realiser's mix block, which can add a bit of the left signal to the front speaker for the right ear, and a bit of the right signal to the front speaker for the left ear.


Lately I have been trying crossfeed methods as a remedy for this issue. I still want to avoid comb-filtering artifacts, thus I prefer methods such as BS2B and Meier, and I think they sound much better than the Realiser's mix block.


----------



## jiiteepee

Prepared a Linkwitz Crossfeed for Headphone use as an EqualizerAPO preset - but, as I don't use DSP while listening to music, I can't say whether it sounds OK or whether it's implemented correctly.
Any thoughts?


----------



## 71 dB

jiiteepee said:


> Prepared a Linkwitz Crossfeed for Headphone use as EqualizerAPO preset - but, as I don't use DSP while listening music, I can't say if it sounds OK nor is implemented correctly.
> Any thoughts?



I'd check the filtering:

Filter: ON LS 6 dB Fc `fc` Hz Gain -2 dB​
Here you want +2 dB high-shelf filtering for the ipsilateral signal. You should maybe use a higher frequency (double or so) than fc (700 Hz).

Filter: ON IIR Order 2 Coefficients `b0` `b1` `b2` `a0` `a1` `a2`​
Here you want (1st-order Butterworth) lowpass filtering for the crossfed signal.

It's important to get the phase differences right, meaning the filter cut-off frequencies and filter orders must make sense. The Linkwitz crossfeeder creates about 250 µs worth of time difference between the channels. It's a good idea to test crossfeeders with test signals to see if they work as they should (does a 100 Hz sinewave on the left channel leak to the right channel at the crossfeed level with the ~250 µs delay (about 11 samples at 44.1 kHz), and so on?).
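The ~250 µs figure can be sanity-checked on paper: the low-frequency phase delay of a 1st-order low-pass is roughly 1/(2π·fc). A quick sketch (the fc here is an assumption chosen to land near 250 µs, not necessarily Linkwitz's exact value):

```python
import math

# Sanity check of the kind suggested above: the phase delay of a
# 1st-order low-pass at a test frequency, in microseconds.
# fc = 650 Hz is an assumed cutoff that lands near the ~250 us
# inter-channel delay a Linkwitz-style crossfeed aims for.

def phase_delay_us(f_hz, fc_hz):
    """Phase delay (microseconds) of a 1st-order low-pass at f_hz."""
    phi = math.atan(f_hz / fc_hz)            # phase lag in radians
    return phi / (2.0 * math.pi * f_hz) * 1e6

fc = 650.0
d = phase_delay_us(100.0, fc)    # 100 Hz test tone, as in the post
samples = d * 1e-6 * 44100       # delay expressed in samples at 44.1 kHz
```

With these numbers the delay comes out around 240-245 µs, i.e. roughly 11 samples at 44.1 kHz, consistent with the figures in the post.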


----------



## jiiteepee (May 21, 2018)

71 dB said:


> I'd check the filtering:
> 
> Filter: ON LS 6 dB Fc `fc` Hz Gain -2 dB​
> Here you want +2 dB highshelf -filtering for ipsilateral signal. You should maybe use higher (double or so) frequency than fc (700 Hz).
> ...



I was thinking of HS instead but somehow ended up with LS.
That 2nd-order Butterworth LP has a 1st-order response ... Might be wrong but IIRC, EqualizerAPO does not support anything but biquad and higher-order filters, so I believe if I use "Order 1", those a2 and b2 become 0 at some point anyway (have to check the EAPO code someday).

Responses and phases: [response/phase plots]

EDIT:
Script response: [plot]


----------



## ironmine (Jul 11, 2018)

ironmine said:


> Have you guys tried WAVES NX?
> It's simply amazing.



After several weeks of evaluation and comparing Waves NX vs. Redline Monitor, I came back to the 112dB Redline Monitor as my default crossfeed plugin.

I like how Waves NX sounds spatially, but I don't like how it bloats the bass too much. (It might be beneficial though for some bass-shy headphones or for the music recordings where the lows are lacking.)

By the way, I found this crossfeed: BS2BR VST Plugin
I like how it sounds from a spatial point of view, but it does boost the lows a bit; it's not completely flat.


----------



## dwinnert

Any of you guys hear of this one ....HPL2 Processor.....I like it, but curious how you folks would.

https://www.hpl-musicsource.com/software


----------



## ironmine (Jul 13, 2018)

dwinnert said:


> Any of you guys hear of this one ....HPL2 Processor.....I like it, but curious how you folks would.
> 
> https://www.hpl-musicsource.com/software



I will try it, thanks.

By the way, there is a piece of software called DDMF Plugin Doctor that allows you to check the behavior of a VST plugin and see which effects it produces on various parameters of the sound.

I have checked almost all the crossfeed plugins I have and so far I have found that the only crossfeed plugins that do not change the amplitude-frequency response are 112dB Redline Monitor and Meier.  All other crossfeed VSTs increase the lows by several dB.
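There is simple arithmetic behind that bass boost: for correlated (centered) content, the low-passed opposite-channel feed sums in phase with the direct signal at low frequencies. A back-of-the-envelope sketch (the -12 dB feed level is an assumption for illustration, not a measurement of any named plugin):

```python
import math

# Why naive crossfeed measures hot in the lows: at low frequency the
# low-passed feed from the other channel adds in phase with the direct
# signal when both channels carry the same (centered) content.

def bass_boost_db(feed_db):
    """In-phase sum of the direct signal plus a crossfeed at feed_db."""
    g = 10.0 ** (feed_db / 20.0)
    return 20.0 * math.log10(1.0 + g)

# Even a modest -12 dB feed lifts correlated bass by about 2 dB unless
# the plugin compensates, e.g. with a matching shelf on the direct path.
boost = bass_boost_db(-12.0)
```

This is consistent with a "several dB" rise showing up in Plugin Doctor for plugins that don't compensate, and a flat trace for ones that do.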


----------



## dwinnert (Jul 12, 2018)

ironmine said:


> I will try it, thanks.
> 
> By the way, there is a software called DDMF Plugin Doctor, allowing you to check the behavior of a VST plugin and see which effects does it produce on various parameters of sound.
> 
> I have checked almost all crossfeed plugins I have and so far I have found that the only crossfeed plugin that does not change the amplitude-frequency response is the 112dB Redline Monitor.  All others crossfeed VSTs increase the lows by several dB.



I am curious how HPL2 behaves....

I tried 112db Redline Monitor in JRiver 24 and I couldn't get it to function. It would load in the DSP manager, but I was unable to adjust anything.

Edit...I got Redline Monitor working.


----------



## ironmine (Jul 13, 2018)

dwinnert said:


> I am curious how HPL2 behaves....
> I tried 112db Redline Monitor in JRiver 24 and I couldn't get it to function. It would load in the DSP manager, but I was unable to adjust anything.
> Edit...I got Redline Monitor working.



I am sorry, Redline Monitor is not the only crossfeed that gives a flat frequency response. Meier VST crossfeed also shows a flat line.

But HPL2 shows weird fluctuations:


To my ears, HPL2 sounds muffled, the sound resolution is not on the same level as others...


----------



## dwinnert

ironmine said:


> I am sorry, Redline Monitor is not the only crossfeed that gives a flat frequency response. Meier VST crossfeed also shows a flat line.
> 
> But HPL2 shows weird fluctuations:
> 
> To my ears, HPL2 sounds muffled, the sound resolution is not on the same level as others...



For me HPL2 does not sound muffled, just the gain drops. If I raise the volume back up to the level it was before, I just notice the crossfeed. Redline, for me, gets a lot brighter and tinny sounding.


----------



## castleofargh

hpl2 seems to apply some HRTF setting at a fixed angle instead of the more typical low-pass stuff on crossfeeds.
personally I feel the center is at a decent distance. but on several tracks the sound just feels wrong to me, and rapidly gives me a headache (from what I imagine must be really conflicting audio cues my brain tries to make sense of in vain). and of course as soon as HRTF or even basic crossfeed is involved, I require some means of customization. ready-made universal solutions are usually no good for my head, and this one isn't an exception.

to be clear I'm not saying it's bad, I'm saying it's bad for me and my head. with a fixed setting, hit or miss is what will always happen.


@ironmine in real life the result should not be flat. I can get the practical appeal, but a really immersive 3D simulation on headphones wouldn't maintain the same response as the input.


----------



## dwinnert

OK, I am digging the Redline Monitor plugin. Weird issue is, I get pops between tracks where the sample rate changes...and to a lesser extent, same sample rate files. Running JRiver 24 64 bit. Does not happen with MusicBee, which is 32 bit.


----------



## ironmine

dwinnert said:


> OK, I am digging the Redline Monitor plugin. Weird issue is, I get pops between tracks where the sample rate changes...and to a lesser extent, same sample rate files. Running JRiver 24 64 bit. Does not happen with MusicBee, which is 32 bit.



I have similar issues.

As soon as I start upsampling the audio data (in Foobar, on-the-fly) before the Redline (e.g., from 44.1 to 88.2), it starts stuttering. But if the audio is initially in 88.2 format, or if I resample it first _before_ playback, the Redline can process it just fine.


----------



## ironmine

castleofargh said:


> hpl2 seems to apply some HRTF setting at a fixed angle instead of the more typical low pass stuff on crossfeeds.
> personally I feel the center is at a decent distance. but on several tracks the sound just feels wrong to me, and rapidly gives me a headache(from what I imagine must be really conflicting audio cues my brain tries to make sense of in vain). and of course as soon as HRTF or even basic crossfeed is involved, I require some means of customization. ready made universal solutions are usually no good for my head, and this one isn't the exception.
> 
> to be clear I'm not saying it's bad, I'm saying it's bad for me and my head. with a fixed setting, hit or miss is what will always happen.
> ...



For me, currently the most immersive crossfeed is the Redline. And it's flat.  So, I don't care about theoretical "should bes" and "shouldn't bes".

Why simulate the non-perfect speakers' frequency response in an acoustically crappy room, if you can simulate the perfect speakers in the perfect room? 

When it comes to crossfeed, I want my simulation to be _better _than reality.

For the same reason why people play computer shooter games. You don't come to gamers and say: "You know guys, I can get the practical appeal, but a really immersive 3D simulation in the shooter game wouldn't maintain the same level of body intactness as in real life. You are supposed to get dirty and take shots and bleed and feel horrible pain and some of you should even die to make it real". Do you?


----------



## dwinnert

ironmine said:


> For me, currently the most immersive crossfeed is the Redline. And it's flat.  So, I don't care about theoretical "should bes" and "shouldnt bes".
> 
> Why simulate the non-perfect speakers' frequency response in an acoustically ****ty room, if you can simulate the perfect speakers in the perfect room?
> 
> ...



I found the Meier 64 bit VST this afternoon. Going to give that a try. Did not see if it works in JRiver or....comp is shutdown for the night....so tomorrow.


----------



## castleofargh

ironmine said:


> For me, currently the most immersive crossfeed is the Redline. And it's flat. So, I don't care about theoretical "should-bes" and "shouldn't-bes".
> 
> Why simulate the non-perfect speakers' frequency response in an acoustically ****ty room, if you can simulate the perfect speakers in the perfect room?
> 
> ...


Say that you like the EQ you're already using (although that raises the question of why EQ is fine only outside crossfeed), or say that you don't think those corrections belong inside a crossfeed plugin, because crossfeed should stick to just sending the delayed signal to the other channel with a low-pass filter over it. I'll be fine with opinions like those. But calling that choice "better than reality" is fundamentally wrong. Even if we forget speaker distortion and room reflections, our brain still works by referencing sound against a lifetime of stuff bouncing off and being masked by the head. A headphone removes or alters some of those cues, including frequency response. Without them, we lack some of our means to locate sound sources properly, so I don't see how that would be better than reality.
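That basic recipe (low-pass, delay and attenuate each channel, then mix it into the opposite one) can be sketched in a few lines. This is only an illustrative, unoptimized sketch; the delay, cutoff and attenuation values below are typical ballpark figures, not those of any particular plugin:

```python
import numpy as np

def simple_crossfeed(left, right, fs=44100, delay_ms=0.3,
                     cutoff_hz=700.0, atten_db=-12.0):
    """Minimal crossfeed sketch: low-pass, delay and attenuate each
    channel, then add the result to the opposite channel."""
    delay = int(round(fs * delay_ms / 1000.0))
    gain = 10.0 ** (atten_db / 20.0)
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)  # one-pole low-pass coefficient

    def feed(x):
        y = np.empty_like(x)
        state = 0.0
        for i, sample in enumerate(x):
            state = (1.0 - a) * sample + a * state  # one-pole low-pass
            y[i] = state
        y = np.concatenate([np.zeros(delay), y[:len(y) - delay]])  # delay
        return gain * y

    return left + feed(right), right + feed(left)
```

Feeding a hard-left impulse through this produces a quieter, darker, slightly delayed copy in the right channel, which is the entire idea of "plain" crossfeed before room simulation gets added on top.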


----------



## dwinnert

Well... the Meier 64-bit VST does not work in JRiver 24 64-bit.


----------



## Erik Garci

Erik Garci said:


> Lately I have been trying crossfeed methods as a remedy for this issue. I still want to avoid comb-filtering artifacts, thus I prefer methods such as BS2B and Meier, and I think they sound much better than the Realiser's mix block.


Here is my latest setup:

Stereo source ==> miniDSP 2x4 (Linkwitz-like crossfeed) ==> Realiser A8 (crosstalk-free PRIR) ==> headphones

Basically I hear stereo audio coming from the front but with headphone crossfeed instead of speaker crosstalk.


----------



## bigshot

I've never heard anything coming from a distance in front of me using headphones, only with speakers.


----------



## Erik Garci (Jul 30, 2018)

bigshot said:


> I've never heard anything coming from a distance in front of me using headphones, only with speakers.


That's why I'm using the Realiser. It does a great job of making sound seem to come from a distance in front.


----------



## bigshot

Can it do 5.1 or Atmos convincingly?


----------



## Erik Garci

bigshot said:


> Can it do 5.1 or Atmos convincingly?


It can. It sounds just like the speaker system that was measured. The Realiser A16 will include Atmos.


----------



## jgazal (Jul 30, 2018)

bigshot said:


> Can it do 5.1 or Atmos convincingly?





pinnahertz said:


> You should audition the Smyth.  Not easy to do, but if you ever get a chance, take it.  It might change your mind on a few things.





> CanJam SoCal 2018 - Smyth Research Realiser A16 Headphone Surround System











Erik Garci said:


> Here is my latest setup:
> 
> Stereo source ==> miniDSP 2x4 (Linkwitz-like crossfeed) ==> Realiser A8 (crosstalk-free PRIR) ==> headphones
> 
> Basically I hear stereo audio coming from the front but with headphone crossfeed instead of speaker crosstalk.



Is it better than using your crosstalk-free PRIR with the A8 mix block to add crossfeed?

I hope Smyth adds a well-thought-out crossfeed to use with standard stereo recordings instead of relying just on the mix block. I have asked them this particular question but received no answer... I am feeling sad...


----------



## Erik Garci

jgazal said:


> Is it better than using your crosstalk-free PRIR with the A8 mix block to add crossfeed?


I think it sounds better. The A8 mix block merely changes ILD for all frequencies equally. Certain types of crossfeed make changes to both ILD and ITD that vary depending on frequency, which sounds more natural, while avoiding the comb-filtering artifacts that speaker crosstalk would typically introduce.
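As a rough illustration of that point (not the processing of the A8 or any specific device), even a single first-order low-pass crossfeed path already produces level and phase-delay contributions that vary with frequency:

```python
import numpy as np

def crossfeed_path(f, fc=700.0, atten_db=-12.0):
    """Level (dB) and phase delay (ms) of a first-order low-pass
    crossfeed path H(f) = g / (1 + j*f/fc); fc and atten_db are
    illustrative values."""
    g = 10.0 ** (atten_db / 20.0)
    h = g / (1.0 + 1j * f / fc)
    level_db = 20.0 * np.log10(np.abs(h))
    phase_delay_ms = -np.angle(h) / (2.0 * np.pi * f) * 1000.0
    return level_db, phase_delay_ms
```

At 100 Hz this path sits near the nominal attenuation with roughly 0.2 ms of phase delay; at 5 kHz it is far more attenuated and the delay has shrunk, i.e. both the ILD and the effective ITD contribution change with frequency, unlike a flat mix-block gain.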


----------



## bigshot

Erik Garci said:


> It can. It sounds just like the speaker system that was measured.



Can you adjust the level and EQ of each channel?


----------



## Erik Garci (Aug 14, 2018)

bigshot said:


> Can you adjust the level and EQ of each channel?


The A8 lets you adjust the bass, treble and volume levels between -12 dB and +12 dB per channel. I don't know if the A16 will offer more EQ controls.

You can also use other devices to adjust EQ. If you use them while the Realiser measures the speaker system, the EQ will be baked into the measurement, and they will no longer have to be used while you listen through the Realiser.


----------



## bigshot

Neat!


----------



## Davesrose

You know, the best headphone surround scheme I've ever heard is a Sennheiser Dolby surround processor I got with my HD580 20 years ago (specifically a Dolby Pro Logic unit). It has many settings for tonality, soundstage, and presence. If you spend the time to tailor it just right, it sounds a lot more convincing than any of the canned Dolby Atmos headphone settings I get with Windows 10. When it comes to stereo crossfeed, I don't have much music that glaringly pans to one channel (the main reason to crossfeed now, IMO).


----------



## jgazal

Erik Garci said:


> I think it sounds better. The A8 mix block merely changes ILD for all frequencies equally. Certain types of crossfeed make changes to both ILD and ITD that vary depending on frequency, which sounds more natural, while avoiding the comb-filtering artifacts that speaker crosstalk would typically introduce.



I’ve asked them one more time:



> I've just read from an A8 user that he uses his unit with a special PRIR (in which the left microphone only recorded the left channel sweep, because he muted the mic for the right ear when the A8 played the sweep tones for the left channel, and vice versa for the right microphone) and a miniDSP to have more control over the way the crossfeed is done.
> But I guess that somehow detracts from the personalization.
> You mentioned that you can add a playback mode in which the signals assigned to the left speaker are not played back at the right headphone driver and vice versa.
> But that amounts to total crosstalk cancellation.
> ...



But I do not know if it would really be in their interest to develop this functionality if the main purpose of the A16 is to emulate Dolby, DTS and Auro systems...


----------



## Erik Garci

jgazal said:


> But I do not know if it would really be in their interest to develop this functionality if the main purpose of the A16 is to emulate Dolby, DTS and Auro systems...


Another approach is to make a PRIR of the crossfeed itself by measuring headphones instead of speakers, and then load the crossfeed PRIR on user B which feeds the crosstalk-free PRIR on user A. That way you can eliminate the separate crossfeed device.

Stereo source ==> Realiser user B (crossfeed PRIR without head-tracking) ==> Realiser user A (crosstalk-free PRIR with head-tracking) ==> headphones


----------



## jgazal (Aug 5, 2018)

Erik Garci said:


> Another approach is to make a PRIR of the crossfeed itself by measuring headphones instead of speakers, and then load the crossfeed PRIR on user B which feeds the crosstalk-free PRIR on user A. That way you can eliminate the separate crossfeed device.
> 
> Stereo source ==> Realiser user B (crossfeed PRIR without head-tracking) ==> Realiser user A (crosstalk-free PRIR with head-tracking) ==> headphones



Oh, yes! You told me that before, but I simply forgot that alternative.


----------



## Sennheiser92

I just can't crossfeed. It ruins the sound signature for me. I've not found one that doesn't.


----------



## MadSounds (Aug 15, 2018)

I find there's a dip in micro-detail, or clarity, or whatever you want to call it, when using the Meier crossfeed in foobar: very hard to pick up at level 5 of a 100 scale, growing to what I consider a modest but obvious dip in detail around level 40-50. As others have stated, a lot of great classic jazz recordings have hard instrument splits between channels; that really just doesn't work with headphones.


----------



## 71 dB

Sennheiser92 said:


> I just can't crossfeed. It ruins the sound signature for me. I've not found one that doesn't.



What is it you want crossfeed to do for you? Change nothing? Do nothing?


----------



## ironmine

Has anybody tried to use Flux SPAT (Multiformat Room Acoustic Simulation and Localization Processor) as a crossfeed?

I'm playing with it now.







----------



## ironmine

*Sennheiser AMBEO ORBIT (link)*






The AMBEO Orbit is Sennheiser's free binaural panner plugin, designed to facilitate mixing immersive binaural content.
By pairing the Neumann KU100 (a reference for binaural capture) with the newly released AMBEO Orbit plugin, you gain full flexibility and control over your binaural recording. You can effectively position additional mono or stereo sources in the 3D sound field while avoiding unwanted coloration. In fact, the patented clarity control allows you to choose how much of the binaural coloration to apply. Additionally, the unique interface for creating binaural room reflections allows you to drastically improve spatial accuracy compared to a reverb plugin.

The AMBEO Orbit plugin is available in AAX, VST, VST3 and AU format for both Mac and Windows.

Free Windows Download 
Free Mac Download


----------



## WoodyLuvr (Dec 17, 2018)

MadSounds said:


> I find there's a dip in micro-detail, or clarity, or whatever you want to call it, when using the Meier crossfeed in foobar: very hard to pick up at level 5 of a 100 scale, growing to what I consider a modest but obvious dip in detail around level 40-50. As others have stated, a lot of great classic jazz recordings have hard instrument splits between channels; that really just doesn't work with headphones.





71 dB said:


> What is it you want crossfeed to do for you? Change nothing? Do nothing?


I have been slowly lowering the Meier crossfeed plugin in foobar2K and am now down to about 5, coming from 20-22 a year or so ago. I am starting to wonder if I was liking the coloring rather than the intended blending of channels... What artifacts or tell-tale signs should one be looking for that would indicate that crossfeed is distorting or coloring the sound? Something in the bass, perhaps? In fact, a recommended music track for setting crossfeed levels, along with some key leveling tips, would actually be quite helpful. Any guidance or advice would be sincerely appreciated, as I am thinking about dropping the plugin entirely.


----------



## castleofargh

You could take some track with only a left-channel signal, then only a right-channel signal, and fool around to see where you imagine those sounds to be and how wrong they feel. For the left and right channels you could, for example, try to get those sounds in the direction of typical speakers, around 30° to each side. But there is no telling that this will be your preferred sound even if well done.
Beyond that, you somehow might need to get lucky with crossfeed. As mentioned in some of the back-and-forth arguments in this topic, crossfeed tries to give you a simplified compensation, so even if it's the right one for your head, your brain could still get annoyed by something it expects to be different (not that this won't happen without crossfeed).

You could also go on a journey like https://www.head-fi.org/threads/recording-impulse-responses-for-speaker-virtualization.890719/ and try to record the impulses (actually sweeps, for practical reasons) with mics at your ear canals and make your very own crossfeed, with delays and frequency response matched to your own head, even keeping all the room reverb if you like. But once you get there, you might notice that you would really love head tracking, and impulses at more positions, and maybe multichannel, and... In a way, it's a blessing to be able to consider the typical headphone experience good.
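If no suitably hard-panned track is at hand, one can be generated. The sketch below (file name and all parameters are arbitrary choices, not from any post above) writes a 16-bit stereo WAV with a tone panned fully left, then fully right:

```python
import math
import struct
import wave

def write_pan_test(path="pan_test.wav", fs=44100, freq=440.0, seconds=2):
    """Write a 16-bit stereo WAV: `seconds` of a tone hard-panned left,
    followed by the same tone hard-panned right."""
    n = fs * seconds
    frames = bytearray()
    for i in range(2 * n):
        s = int(12000 * math.sin(2 * math.pi * freq * i / fs))
        l, r = (s, 0) if i < n else (0, s)   # first half left, second half right
        frames += struct.pack("<hh", l, r)   # little-endian 16-bit L/R pair
    with wave.open(path, "w") as w:
        w.setnchannels(2)
        w.setsampwidth(2)                    # 16-bit samples
        w.setframerate(fs)
        w.writeframes(bytes(frames))
```

Playing the resulting file through a crossfeed chain makes it easy to judge where the hard-panned source images for your own head.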


----------



## WoodyLuvr

@castleofargh Thank you for the response; understood. I will test as you have suggested. Do you use xfeed on foobar2K yourself?


----------



## castleofargh (Dec 21, 2018)

Yes, I use one form or another of crossfeed almost any time I use headphones or IEMs; even my portable amp has crossfeed options. In foobar I have messed around with some impulses I picked from http://recherche.ircam.fr/equipes/salles/listen/sounds.html after trying all the demos and deciding which one worked best at about 30° when the "bzzz bzzzz" is turning around my head. I don't really remember what I did to the impulses; it was a mix of lucky tries and asking for help. In the end, I use that with a convolver. @Joe Bloggs shared something similar 2 or 3 years ago, but it was more of a room-simulation attempt than a sterile HRTF capture at a given angle like I did. A bunch of people seemed to like his impulses, while absolutely everybody who tried my stuff said it sounds weird and in the wrong place. I guess I really just have a weird head. That's why I can't wait to get the Realiser A16; hopefully they'll deliver before 2030.


----------



## ironmine

*Auburn Sounds - Panagement *(available as a free version):







*Panagement - Powerful Binaural Management*


----------



## 71 dB

WoodyLuvr said:


> I have been slowly lowering the Meier Crossfeed Plugin in foobar2K and am now down to about 5 coming from 20-22 a year or so ago. I am starting to wonder if I was liking the coloring rather than the intended blending of channels... what artifacts or tell-tale signs should one be looking for that would indicate that crossfeeding is distorting/coloring? Something in the bass perhaps? In fact, if anyone has a recommended music track to use with setting crossfeed levels with some key adjustment leveling tips that might actually quite be helpful. Any guidance or advice would be sincerely well received and appreciated as I am thinking about dropping the entire plugin.


Colouring is part of the existence of sound. Physical sounds exist in physical reality, and colouring is part of that physical world. It's similar to objects and lighting: shadows happen. Things happen, and without them it doesn't look right; it's just badly rendered CGI. Not all colouring is the same. There's destructive colouring and expected colouring. An amp clipping a signal and causing distortion is destructive colouring. A reflection from the ceiling of the listening room is expected colouring, and it adds spatial cues that tell listeners what kind of acoustics they are in. Lack of colouring is a problem if colouring is expected. Pure sinusoids generated in a sound editor and listened to with headphones, so as to add as little colouration as possible, are dull as hell, because the sounds heard lack all the spatial cues that would indicate how they exist in physical reality. In a way such sounds "don't exist"; they are imaginary, in our mind, _almost_ outside physical reality. Such sounds are "half-existing." Expected colouring (reflections, ILD, ITD, ISD, reverberation, etc.) makes sounds "full-existing": they interact with the acoustic reality and get coloured. So, the rules of thumb go:

- Avoid destructive colouring unless it makes the sound "better" (in which case the colouring should be in the recording, not in the reproduction chain!)
- Expected colouring is _needed_, but bad expected colouring (such as bad acoustics) is bad. 

Now, finally, to the issue at hand: most recordings are mixed for loudspeakers, and loudspeaker listening causes acoustic crossfeed colouring among other colouring related to room acoustics (in studios these are very mild thanks to the studio acoustics, but strong acoustic crossfeed happens anyway). So, those recordings are mixed to sound best when there's normal acoustic crossfeed (the left ear hears the right speaker and vice versa) plus mild, controlled room-acoustic colouring. Headphone crossfeed simulates the acoustic crossfeed. The typical headphone crossfeed level (from -12 dB to -6 dB) is rather mild compared to acoustic crossfeed. So, we colour the sound _less_ than a speaker would, and in doing so reduce excessive ILD (which confuses spatial hearing and causes listening fatigue and a lack of sonic realism). When strong crossfeed (something like -1 dB) is used, the colouration is similar to acoustic crossfeed except for the room acoustics, but often we don't need to reduce excessive ILD that much. Speakers make the sound very narrow with the acoustic crossfeed, but room acoustics (reflections, reverberation) restores the width. Speakers + room transform all recordings to almost the same width. Even mono recordings get width thanks to the acoustics, although it's fake width, and very wide (ping-pong stereo) recordings become almost the same as mono recordings. Speakers + room are a kind of width regulator, keeping the width where it is expected. With headphones we get whatever is on the recording: mono is mono and ping-pong is ping-pong. Only a handful of recordings happen to have the expected width and work best without any tinkering in the form of crossfeed. Most recordings have excessive spatial information for headphones, and tinkering is needed to get to the expected value.
Pretty much every recording has its own proper crossfeed level, based on how wild the ILD it contains is, and it's pretty easy to learn to set the correct level; still, a fixed crossfeeder in the range -8…-5 dB works pretty well with most recordings (some remain under-crossfed and some are somewhat over-crossfed).
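One crude, purely illustrative way to put a number on how much width (and hence how much candidate ILD) a recording carries is a side/mid RMS ratio; this is my sketch of the idea, not a standard metric or one proposed above:

```python
import numpy as np

def stereo_width(left, right):
    """Rough width metric: side/mid RMS ratio.
    ~0 for mono material, ~1 for heavily panned ('ping-pong') material."""
    mid = 0.5 * (left + right)    # what the channels share
    side = 0.5 * (left - right)   # what differs between them
    mid_rms = np.sqrt(np.mean(mid ** 2)) + 1e-12  # avoid divide-by-zero
    side_rms = np.sqrt(np.mean(side ** 2))
    return side_rms / mid_rms
```

A recording scoring near 1 on such a measure would plausibly call for more crossfeed than one scoring near 0, which is the per-recording adjustment being argued for here.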

Worrying about crossfeed colouration is kind of ridiculous, because in most cases it's milder than with speakers, and nobody worries about that! Also, our spatial hearing EXPECTS to hear a certain correlation between the ears. So, the lack of crossfeed colouration is the real problem. The reason people don't complain about acoustic crossfeed with speakers is that they can't turn it off (unless they have crosstalk-cancelling processing).

Some people insist crossfeed loses detail, but that's "fake" detail caused by excessive spatiality. It's like sharpening a video picture: turning sharpness to max on your TV makes the picture look sharper, but it's fake sharpness; it doesn't make a DVD look as good as Blu-ray. No, DVDs look best at a natural sharpness level, where we get the detail the format actually allows. Properly crossfed sound lets the real detail come out from under the fake detail, so not only is no real detail lost, but the real detail isn't masked by fake information. People don't realize that spatial hearing can decode the crossfeed process by comparing the left- and right-ear signals, which is why the strong acoustic crossfeed with speakers isn't a real problem either.

Ultimately it comes down to this: do you like listening with or without crossfeed? Which sounds more natural? Which causes less fatigue?


----------



## bigshot

The way sound inhabits space is a very complex thing involving reflections, minute time delays and directionality. DSPs can simulate a lot of this, but not all of it. There's no substitute for a really good listening room. But if you are stuck with cans because of circumstances, DSPs can help. There is no reason to be dogmatic about "purity". The best thing to do is to try it and see if you like it. Listen for a while and fine tune the settings. If it doesn't work, don't use it.


----------



## 71 dB

bigshot said:


> The way sound inhabits space is a very complex thing involving reflections, minute time delays and directionality. DSPs can simulate a lot of this, but not all of it. There's no substitute for a really good listening room. But if you are stuck with cans because of circumstances, DSPs can help. There is no reason to be dogmatic about "purity". The best thing to do is to try it and see if you like it. Listen for a while and fine tune the settings. If it doesn't work, don't use it.



The harsh reality is that only a very small number of people have the possibility of "a really good listening room." You have one, and that makes you an exception in the world. I have listened to music in "a really good listening room": the listening room of the acoustics laboratory I used to work in years ago, so I know how good it is. Such constructions (a room inside a room to reduce room-mode bass leakage, acoustics fine-tuned with diffusers and absorbers, etc.) are "impossible" for most people. People may even have the money, but have more important things to spend it on. With proper crossfeed I get very good results when listening to well-recorded music, and I can blast the music day and night whenever I want without disturbing my neighbours (yes, a lot of people have neighbours behind a wall with sound insulation of 55 dB or even less, and I know my religious old neighbour does not want to hear the bumping bass of The Prodigy at 2 am!).

It's about scale. Concerts are for a large group of people. Home speaker audio is for a small group or just one person. Headphone audio is for one person only, excluding others (they can hear only what the cans leak). The smaller the transducers and the closer they get to your ears, the more it's "your" audio and the more you can control it. That's what I like about headphones: I call the shots, and I'll listen to The Prodigy's new album "No Tourists" at 2 am if I want to!


----------



## bigshot (Dec 31, 2018)

I bought my house based on the room. It's not so much about the construction of the room; that is just gilding the lily. The main thing is the space and the layout. Headphones, even with crossfeed, don't come anywhere close. They're a fine compromise if you don't have any other option, but if you do like I did and keep a proper listening room as your goal, eventually you'll have it. The other alternative is to find a friend with a great setup and visit often. I have a bunch of friends who come over for music and movies and dinner often.

If you do get to the situation where you're looking for a house, I'd recommend thinking outside the box. Most people think of rooms in terms of normal uses... kitchens, bedrooms, living rooms. But a space is a blank slate; you can use a room for a different purpose if you want. My house had a strange long hallway, with a closet running its whole length, that had been added on to connect the front of the house with the rear. It also had a bathroom that could only be accessed from the backyard. I opened up the closet to create a 25-foot run of record storage and knocked a door into the house, turning the bathroom into a laundry room. My kitchen is also my library, because the kitchen table is where I like to sit and read. I divided my house into two halves, the part I live in and the part I entertain in, like two separate wings. Hard to describe, but I'm sure I use the house differently than anyone who lived here before, and it suits me perfectly. You can use a dining room as an office or a living room as a bedroom if you want. There are no laws saying how rooms should be used. (I don't know about using the bathroom as a kitchen, though!)


----------



## 71 dB (Dec 31, 2018)

bigshot said:


> Headphones, even with crossfeed, don't come anywhere close.



Does it have to come close? Does it have to sound like speakers to be enjoyed? Speakers, headphones, whatever; my enjoyment correlates much more with the quality of the music. I did listen to some new Tangerine Dream (Sessions III) on speakers while waiting for the year to end. Yeah, it sounds _different_ from headphones, but not really more or less enjoyable; at least the difference isn't dramatic. We adapt to the sound while listening.

I think houses in general are much larger in the US than in Finland. Finland is a cold place and the houses can't be that large, because keeping them warm in the middle of the winter takes a lot of energy. So it literally takes a millionaire to buy a larger house. My rental flat is 300 square feet, with one 160-square-foot (about 16' x 10') room plus a small kitchen and toilet. In that 160 square feet I have "everything", even the bed I sleep in at night. The bed is good at absorbing bass frequencies, so that's a plus.


----------



## bigshot

Evict some reindeer and convert a barn into a good listening room!


----------



## 71 dB

bigshot said:


> Evict some reindeer and convert a barn into a good listening room!



I live ~500 miles too far south to evict reindeer...


----------



## bigshot

The reindeer are probably thankful for that!


----------



## isnotdynamic

My theoretical version of my ideal DSP crossfeed is in a post on the help and introduction forum 

crossfeed/binaural


----------



## castleofargh

isnotdynamic said:


> My theoretical version of my ideal DSP crossfeed is in a post on the help and introduction forum
> 
> crossfeed/binaural


Talk about click bait ^_^. The actual link: https://www.head-fi.org/threads/crossfeed-binaural-iser-question.899159/
What you suggest in the second half of your post is mostly how sounds reach the ears when using stereo speakers, and also how most crossfeed solutions work. You can use what is called "true stereo" as a convolution scheme: the stereo signal is turned into 4 channels, the impulses (which you still need to procure yourself) are applied, and everything is mixed back down to stereo. Ideally, the impulses you'd want to use would be those from your own measured HRTF at about 30° left and right (left and right ear for each). In practice, you could place a mic at your ear and capture the impulses (usually via a sine sweep) coming out of actual speakers, one impulse at a time.

About the first part and your 3D-software idea: as long as you're handling a stereo recording, you don't have anything resembling 3D data that could be manipulated in such a fashion. To get to a 3D model of the sound (which is possible), you have to record in a certain way, or you need an HRTF model to use as a basis for placing "instruments" in space. So we're back to first having to procure an HRTF (again, ideally your own, which is the limitation of standard crossfeeds that try to offer a sort of universal solution instead).
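The "true stereo" scheme described above (four convolution paths mixed back down to two channels) can be sketched directly; the four impulse responses are assumed to be supplied by you, e.g. from your own HRTF measurements:

```python
import numpy as np

def true_stereo(left, right, h_ll, h_lr, h_rl, h_rr):
    """'True stereo' convolution: each input channel is convolved with an
    impulse response per ear (h_lr = left speaker to right ear, etc.),
    then the four paths are mixed back down to two output channels."""
    out_l = np.convolve(left, h_ll) + np.convolve(right, h_rl)
    out_r = np.convolve(left, h_lr) + np.convolve(right, h_rr)
    return out_l, out_r
```

With unit same-side impulses and zero cross impulses this degenerates to a pass-through, which is a handy sanity check before loading real HRTF captures.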


----------



## Steve999 (Feb 11, 2019)

FWIW I use a Behringer Monitor2USB gizmo with my computer setup, computer USB out to the gizmo. It gives me two headphone outs with variable crossfeed, a second source input, and three monitor outs, one of which I use for nice speakers in the computer area.

https://www.amazon.com/Behringer-MONITOR2USB-BEHRINGER/dp/B01DV237NA/ref=sr_1_1

It's a very basic headphone crossfeed, but it does the trick for me. For me the issue is that I want variable crossfeed, not a preset amount someone else decided upon; the amount of crossfeed is very flexible with a nice knob, and there's a nice big volume knob for the line out too.

The only downside is that you have to get balanced-to-unbalanced cables if you are running normal stereo line-level home inputs or outputs. There is a button on the back where you choose between +4 dBu and -10 dBV, depending on whether you are running to a balanced input or not.

Here is a big long article about this confusing button:

https://www.bhphotovideo.com/explor...humored-attempt-demystifying-10-dbv-and-4-dbu

I don't understand it but if you read the article you might, or you might already know.

But here's the "practical takeaway":

If you’re feeding a +4 dBu signal into a -10 dBV input, you’re running hot levels into receivers not necessarily equipped to handle them. Turning the +4dBu level down is a good idea.
The reverse is also true; if you connect a -10 dBV signal into a +4 dBu input, you’ll want to raise the -10 dBV signal—however, beware: You will also be raising the noise-floor of the signal, which, depending on the consumer-level piece of gear, might degrade the sound.


----------



## castleofargh

+4dBu gives about 1.23V (RMS)
-10dBV gives about 0.32V (RMS)

You just have to consider this as a gain setting, but instead of "low gain" and "high gain", a value with a multiplier, or some more-or-less defined dB variation, they use famous standard values and nomenclature familiar to people who handle them all day long. Depending on what you plug into the device, you'll get better gain matching with one setting or the other. The end.
Forget the confusing article that brings stuff up just to say it's irrelevant, yet still follows up by explaining the unrelated stuff in great detail for no reason.
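For reference, the two voltages quoted above fall straight out of the reference levels (dBu is referenced to 0.7746 V RMS, i.e. 1 mW into 600 ohms; dBV is referenced to 1 V RMS):

```python
def dbu_to_vrms(dbu):
    # dBu is referenced to 0.7746 V RMS (1 mW into 600 ohms)
    return 0.7746 * 10.0 ** (dbu / 20.0)

def dbv_to_vrms(dbv):
    # dBV is referenced to 1 V RMS
    return 10.0 ** (dbv / 20.0)

print(round(dbu_to_vrms(4.0), 2))    # +4 dBu  -> 1.23 V RMS
print(round(dbv_to_vrms(-10.0), 2))  # -10 dBV -> 0.32 V RMS
```

So the button is roughly an 11.8 dB gain step between two standard operating levels, nothing more.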


----------



## viveksaikia22

I have very recently found an Audio Unit plugin (and the concept of crossfeed) for Audirvana plus on my macbook called CanOpener Studio by GoodHertz -
https://www.goodhertz.co/canopener-studio/

This has changed the way I listen to music for good. When engaged, the sound comes out much more natural, and I feel it adds very little coloration. While I am at it, I also swap L/R, which further improves the sound to my ears. But maybe it's just my ears.

The only downside is that this plugin costs $60, and currently I am on the 15-day trial. I am pretty certain I will buy it, unless I find something less expensive or, even better, free.
Crossfeed for life

It's a new revelation for me. Musical enjoyment is all I care about, and I can throw all the measurements out of the proverbial window as long as I keep enjoying my music. But again, it's just me.


----------



## 71 dB

Steve999 said:


> FWIW I use a Behringer Monitor2USB gizmo with my computer setup, computer USB out to gizmo, that gives me two headphones outs with variable crossfeed and a second source input and three monitor outs, one of which I use for nice speakers in the computer area.
> 
> https://www.amazon.com/Behringer-MONITOR2USB-BEHRINGER/dp/B01DV237NA/ref=sr_1_1
> 
> It's a very basic headphone crossfeed but it does the trick for me. *For me the issue is I want variable crossfeed*, not a preset amount someone else decided upon, and the amount of crossfeed is very flexible with a nice knob.



Variable crossfeed is important because recordings contain differing levels of excessive spatiality to be scaled down with crossfeed. My term for the correct level of crossfeed for each recording is _proper crossfeed_. For some recordings, no crossfeed is the proper crossfeed, and that's why you have the on/off switch.

The reason the level of spatiality varies so much from recording to recording is because it can. Most music is mixed primarily for speakers. Speakers + room acoustics not only act as a spatiality regulator (a mono recording creates almost as much spatiality as a ping-pong stereo recording, due to the room acoustics), but the result at our ears is also, by default, natural spatiality containing natural levels of ILD, ITD and ISD. It's not _correct, intended_ spatiality, because your speakers and room don't replicate with 100% accuracy the spatiality the sound engineer heard in the studio, but it is _natural_ nevertheless. Headphones do not "regulate" spatiality, so we need variable crossfeed to do that.


----------



## 71 dB

viveksaikia22 said:


> I have very recently found an Audio Unit plugin (and the concept of crossfeed) for Audirvana plus on my macbook called CanOpener Studio by GoodHertz -
> https://www.goodhertz.co/canopener-studio/
> 
> This has changed the way I listen to music for good. On engaging, the sound comes out much more natural and I feel it adds a very less coloration. While I am at it, I also switch LR. This further improves the sound to my ears. But maybe, it's just my ears.
> ...



Crossfeed revolutionized my headphone listening too, in 2012, when I discovered it and stopped being a spatial ignoramus. I'm glad you had the revelation too.

I'd say that plugin is worth the $60 ($65?) if you like it. The DIY headphone-adapter crossfeeder I am using cost about that much to build (plus countless hours of designing and building, but that's a hobby…). Headphone amps with adjustable crossfeed cost much more than $60.


----------



## gregorio

71 dB said:


> Crossfeed revolutionized my headphone listening too in 2012 when I discovered it and stopped being a spatial ignoramus.



I discovered crossfeed many years earlier than that and despite trying various different crossfeeds since then, I've always discarded them because I was not enough of a "spatial ignoramus"! You seem to be going round in circles and never moving from where you started well over a year ago. Crossfeed solves some problems and creates others. Some people are not that bothered (or, using your terminology, are "spatial ignoramuses") about the problems it causes, are happy with the problems it solves and therefore personally like crossfeed. Others of us are not "spatial ignoramuses", are bothered by the problems it causes as much as, if not more than, the problems it solves, and therefore do not personally like crossfeed and prefer to hear the recording in the highest fidelity as created by the musicians/engineers. The difference between us is that I don't go around calling people who aren't aware of (or aren't bothered by) the problems of crossfeeding "ignoramuses"; it's just a difference of perception and/or a personal value choice.

So, enough of the ignoramus BS, because if anyone is being an ignoramus, it's you! Not least because we've already gone over ALL of this more than a year and 30 pages ago!!!

G


----------



## Whazzzup

Just a wee bit o crossfeed please


----------



## 71 dB (Feb 13, 2019)

gregorio said:


> I discovered crossfeed many years earlier than that and despite trying various different crossfeeds since then, I've always discarded them because I was not enough of a "spatial ignoramus"! You seem to be going round in circles and never moving from where you started well over a year ago. Crossfeed solves some problems and creates others. Some people are not that bothered (or, using your terminology, are "spatial ignoramuses") about the problems it causes, are happy with the problems it solves and therefore personally like crossfeed. Others of us are not "spatial ignoramuses", are bothered by the problems it causes as much as, if not more than, the problems it solves, and therefore do not personally like crossfeed and prefer to hear the recording in the highest fidelity as created by the musicians/engineers. The difference between us is that I don't go around calling people who aren't aware of (or aren't bothered by) the problems of crossfeeding "ignoramuses"; it's just a difference of perception and/or a personal value choice.
> 
> So, enough of the ignoramus BS, because if anyone is being an ignoramus, it's you! Not least because we've already gone over ALL of this more than a year and 30 pages ago!!!
> 
> G



Too much crossfeed creates problems, making the sound unnecessarily narrow and monophonic, but proper crossfeed doesn't do that. I don't tell people to crossfeed their music to death. I am telling how the need for crossfeed varies from recording to recording: some recordings need zero crossfeed while some crazy ping-pong stereo recordings require heavy crossfeeding, in some extreme cases sounding best completely mono!

What exactly is "as created"? Is listening to the recording on speakers, with acoustic crossfeed, early reflections and room acoustics all of which change the spatiality a lot, "as created"? Or is listening to the recording in the same studio where it was created, with acoustic crossfeed but much less early reflection and reverberation due to heavy acoustic treatment, "as created"? Or is listening to the recording with headphones without crossfeed "as created"? Of these choices I'd say the middle one is "as created", because the sound engineer made mixing decisions in the studio based on what he/she heard in the studio. If not, then why do we have studios in the first place? If "as created" means headphones without crossfeed, then speaker sound is not "as created", because there's acoustic crossfeed + room acoustics to make it "not as created." People would need at least crosstalk canceling to switch off acoustic crossfeed.

If both speakers and headphones were "as created", then the recording should be created to avoid spatial information that relies on the differences between the two. That is certainly possible to a degree, and I have myself studied "omnistereophonic" spatiality that tries to avoid the problems related to the difference between speaker and headphone listening. It is about avoiding large ILD at low frequencies and using large ILD at high frequencies, where high ILD (10-20 dB) is natural. It is about using natural ITD (0-640 µs) to create spatiality, and Haas-effect ITDs larger than natural (1-30 ms) to create a sense of space and depth without excessive ILD. It is about using ISD to create natural spatial cues. It's about using auditory masking to "hide" unnatural aspects of individual tracks. It's doable, but most recordings are not created like this. Maybe newer pop music uses these kinds of tricks more and more (pop producers are generally well aware of the need for limited ILD at low frequencies and use sophisticated plugins to create natural spatial effects), but that is a tiny fraction of all stereophonic recordings in history.
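
As a toy illustration of panning with ITD instead of ILD at low frequencies (the helper name and numbers here are my own, purely for illustration), a sketch like this delays one channel by up to the natural ~640 µs while leaving levels untouched:

```python
import numpy as np

def itd_pan(mono, fs, itd_us):
    """Pan a mono source using inter-channel time difference alone:
    a positive itd_us delays the left channel, pulling the image to
    the right. Levels (ILD) are left untouched, as suggested for low
    frequencies; natural ITDs run roughly 0-640 microseconds."""
    d = int(round(abs(itd_us) * 1e-6 * fs))
    delayed = np.concatenate([np.zeros(d), mono[: len(mono) - d]])
    return (delayed, mono) if itd_us > 0 else (mono, delayed)
```

Both channels keep the same level, so the image moves purely on timing cues rather than on the large low-frequency ILDs the post argues against.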

Crossfeed skeptics say the process colours the sound. I respond to that claim as follows:

- The colouration is very small, microscopic compared to the colourations of speaker listening. Crossfeed or not, headphone sound is insanely uncoloured compared to speaker listening.
- Crossfeed happens effectively at low frequencies, where the typical time difference of about 300 µs is tiny compared to the wavelength, resulting in practically nonexistent comb-filter effects. At treble frequencies comb filtering would happen, but the crossfeed typically fades away above 800 Hz. Only the strongest crossfeed settings can theoretically cause significant comb filtering.
- Even if the colouration were significant, worrying about it while ignoring excessive spatiality is illogical to say the least. Excessive spatiality is in my opinion a much worse problem than colouration, which is a natural aspect of acoustic systems. All headphones have a coloured response to begin with, and even the flattest headphones have much more colouration than crossfeed can ever introduce. Excessive spatiality, on the other hand, is a significant problem. An ILD of over 10 dB at low frequencies really is a massive "spatial mistake" when about 3 dB is the natural level and 6 dB the absolute maximum. Just putting things in perspective.
- Crossfeeding does not remove detail! This is a myth that needs busting. After crossfeeding, both channels contain a mix of both channels with differing delays and amplitudes. Our spatial hearing is able to decode the channels using difference signals/cross-correlation, and it EXPECTS incoming signals to be of that nature! In fact, crossfeeding HELPS the auditory system decode the details of the sound better. Excessive spatiality creates confusion, spatial distortion that masks real spatial detail and causes listening fatigue. Crossfed music sounds less detailed, softer and rounder, and that's why people mistakenly think crossfeeding removes detail, but the softer, rounder sound is the real detail, the same you hear on speakers thanks to acoustic crossfeed. It's only because your ears have been exposed to excessive spatiality that the crossfed signal feels soft; it is the correct signal, while the non-crossfed signal is the wrong one, with all of its sharpness and fake detail due to spatial distortion. Crossfeed requires the listener to get used to natural levels of spatiality, and once your ears have adjusted you notice how the real detail is all there, unmasked by excessive spatiality. For me this adjustment takes about one minute. Switching crossfeed ON at first makes the soundstage narrower because the ears have been exposed to excessive spatiality, but after a minute or so they adjust to the natural spatiality and the sound is wide again, only free of spatial distortion, making it more natural, less fatiguing, and its real details easier to hear.
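
For what it's worth, the comb-filter point above can be checked with a little arithmetic: mixing a signal with a copy of itself delayed by τ puts cancellation notches at odd multiples of 1/(2τ). A sketch (pure illustration, not any particular plugin's numbers):

```python
def comb_notches(delay_s, f_max=20000.0):
    """Notch frequencies of the comb filter formed by mixing a signal
    with a copy delayed by delay_s seconds: odd multiples of 1/(2*delay)."""
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)
        if f > f_max:
            break
        notches.append(f)
        k += 1
    return notches

# With a ~300 microsecond inter-channel delay the first notch sits
# near 1667 Hz, above the ~800 Hz region where typical crossfeed
# filters have already faded out.
print(round(comb_notches(300e-6)[0]))  # → 1667
```

So under these assumptions the lowpass on the crossfeed path keeps the audible comb-filter region largely out of play.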

So your viewpoint that everything is fine without crossfeed and that introducing crossfeed generates serious problems is unwarranted. If you mix your music so that the spatiality relies on reduction due to acoustic crossfeed, spatially informed people will want to use crossfeed with headphones for a similar reduction, to get natural levels of ILD.

Being an ignoramus in regard to spatiality is no shame, because these things are not generally talked about. I studied human spatial hearing at university (as part of the acoustics "101" course) in the early 90's, but it took me two decades (!) before I realized how the principles of spatial hearing make spatially correct music challenging when people use speakers, headphones and what not. It suddenly occurred to me that there can be, and are, "spatially illegal" sounds (signals containing excessive spatiality fed directly to the ears with headphones). Of all the mathematically possible stereophonic signal pairs (the stereophonic signal space), only a subspace is "legal." Speaker listening forces all signal pairs to become legal, while headphones do pretty much nothing to the original signal pair, letting the illegalities reach our ears. Crossfeed fixes this. I still remember the moment when I realized it. It is a powerful moment when you realize something fundamental and the light bulb turns on in your head.


----------



## ironmine

Audiophiles, due to their ignorance, are among the most stubborn people when it comes to progress in audio.

They always defend the older inferior quality-degrading technologies until their last breath. 

It's funny to watch how they resist the advance of crossfeed nowadays in the same way they earlier resisted the appearance of standalone DACs (remember the debates about what sounds best, DAC vs CD player), digital room correction (DRC vs "bit-perfectness" debates), digital audio vs vinyl, etc.


----------



## bigshot

But standalone DACs are a holdover from the old days of separate amps and preamps. There's no reason why a player can't sound just as good as a standalone DAC, and an awful lot of them do. In fact, I don't know of any that don't.


----------



## ironmine

bigshot said:


> But standalone DACs are a holdover from the old days of separate amps and preamps. There's no reason why a player can't sound just as good as a standalone DAC, and an awful lot of them do. In fact, I don't know of any that don't.



Unlike CD players, DACs do not require CDs (an outdated, hard-to-copy and expensive source of sound) and they can play hi-rez music.
I get my music in any lossless and/or hi-rez format I want 100% free from torrent trackers for more than 10 years already and I listen to it through DACs.

When you play music through a DAC, you can process sound first in your computer (e.g. Digital Room Correction, thousands of professional studio quality VST plugins to your liking, crossfeeds, etc., etc.).


----------



## bfreedma

ironmine said:


> Unlike CD-players, DACs do not require CDs (outdated, hard to copy and expensive source of sound) and they can play hi-rez music.
> I get my music in any lossless and/or hi-rez format I want 100% free from torrent trackers for more than 10 years already and I listen to it through DACs.
> 
> When you play music through a DAC, you can process sound first in your computer (e.g. Digital Room Correction, thousands of professional studio quality VST plugins to your liking, crossfeeds, etc., etc.).




Are you publicly admitting that you steal all of your music or am I misreading your post?


----------



## 71 dB

ironmine said:


> and they can play hi-rez music.
> I get my music in any lossless and/or hi-rez format I want...



What does hi-res give that 16/44.1 doesn't give you?


----------



## ironmine

71 dB said:


> What does hi-res give that 16/44.1 doesn't give you?



Sometimes I indeed prefer the CD version of an album, if its hi-res remastered version has squashed dynamics. There are CDs that sound excellent and are well recorded.

But all other conditions being equal, I let the hi-res variant remain in my collection and delete its CD version from my hard drive. Without even going into the discussion of whether hi-res can sound better than its CD counterpart, I prefer to feed my VST plugins with 24-bit files rather than 16-bit files, as it increases the quality of processing.

When I listen through headphones, I use 112dB Redline Monitor for the crossfeed effect. Through speakers, I use MathAudio Room EQ for digital room correction. These are the two VST plugins that I always use (plus dithering + monitoring). In 5% of cases I may also use a VST equalizer (DMG Equilibrium is my favorite), mainly to boost the low frequencies by 1.0-2.0 dB.


----------



## gregorio (Feb 14, 2019)

71 dB said:


> [1] If both speakers and headphones were "as created" then the recording should be created to avoid spatial information that relies on the differences between the two. ... It is about avoiding large ILD at low frequencies and using large ILD at high frequencies where high ILD (10-20 dB) is natural.
> [2] Even if colourization was significant, worrying about it while ignoring excessive spatiality is illogical to say the least. Excessive spatiality is in my opinion much worse problem than colourization...
> [2a] If you mix your music so that the spatiality relies on reduction due to acoustic crossfeed, spatially informed people want to use crossfeed with headphones for similar reduction to have natural levels of ILD.



Thanks for proving my point: every single one of the points in your response is just a repeat of the exact same points you made over a year (and 30-odd pages) ago. Points which have already been addressed and refuted, but here you are just repeating the same points again and insulting everyone who doesn't share your ignorance of the facts and your preferences! I'm not going to refute every one of your points because it's already been done in this thread; I'll just address your main point of ignorance, upon which you base everything:

1. There is nothing "natural" about the creation or end result of commercial music recordings and there hasn't been for many decades. Music mixing/production is an ART, it has been for nearly 60 years, it is *ABSOLUTELY NOT *about avoiding what would not occur naturally, it's ALL about creating products that fulfil an artistic goal and that consumers will hopefully like/enjoy, *COMPLETELY REGARDLESS* of whether it's "natural" or not! In fact, in almost all cases "it is about" the exact opposite, creating spatial information which is not "natural"! You do NOT get to dictate how a "_recording should be created_" or dictate what should be avoided!!

2. EXACTLY, this is the heart of your ignorance! Firstly, it is *YOUR OPINION* and secondly, it's an opinion based on a falsehood! It's a falsehood because no one ignores "excessive spatiality". I've never seen or heard of a studio that didn't have headphones or of a mix being created without being checked on headphones (by the engineers, producer and artists) and this is especially true since the 1980's as more and more consumers listened to music using headphones. Your opinion about what constitutes "excessive" (spatiality) is just that, your opinion. Certainly HP listening presents a wider/more separated stereo image but it is YOUR OPINION whether that is "excessive" or not. There are many cases where that "excessive spatiality" is the desired intention of the producer and artists (and in fact tools were often used to widen the image on speakers and make it more like HP listening, "shuffling" for example). The result is not "natural", it is not intended to be natural and you applying crossfeed is both lower fidelity AND going against the artists' intentions! Of course, if you like/prefer crossfeed and want to change/ignore the artistic intentions that's entirely your choice but it's NOT your choice to dictate that artists must follow what is "natural" and your personal definition of "excessive" and it's certainly NOT your choice to call others ignoramuses, especially as you're the one apparently completely ignorant of the fundamental basics/goals of music production!
2a. If "people" really were "spatially informed" they would realise there is nothing spatially "natural" in the first place, that "spatiality" in music mixes does not only rely on acoustic replay crossfeed, that crossfeed therefore can do more harm than good and does not by itself emulate acoustic speaker crossfeed/reproduction anyway. You don't appear to know or understand any of this, so clearly you are NOT one of those "spatially informed people"!

Your ignorance of the facts and dogged belief that your opinion/preference should be shared by everyone, including the artists themselves, results in you defending your belief with a barrage of false statements and complete nonsense, for example:


71 dB said:


> [1] Being an ignoramus in regards of spatiality is not a shame, [1a] because these things are not talked about generally.
> [2] It suddenly occurred to me that there can be, and are, "spatially illegal" sounds ...
> [2a] Of all the mathematically possible stereophonic signal pairs (stereophonic signal space) only a subspace is "legal."
> [2b] Speaker listening forces all signal pairs to become legal
> ...



1. Clearly. In fact you actually seem proud of being an ignoramus!
1a. That's a completely false statement, they're always talked about, on every single music mix!

2. Music production is an ART, consequently there are NO "spatially illegal sounds"! Pretty much all music productions for many decades are not spatially "natural", therefore if that's your definition of "spatially illegal" then pretty much all commercial music recordings are "spatially illegal" all the time, regardless of whether they're reproduced on speakers or headphones!
2a. Again, virtually no commercial music productions only employ a single stereophonic signal pair and therefore virtually all music mixes are "illegal" to start with.
2b. Nope, you've just made that up and clearly it's complete nonsense! How do speakers know what spatial "illegalities" there are in any particular music mix, and even if they did, how would they correct them and make them "legal"?
2c. The "illegalities" "reach our ears" with headphones AND speakers, though they present those "illegalities" differently.
2d. No it does NOT! It just presents those "illegalities" differently again. So now we've got 3 different presentations of the "spatial illegality", none of which are "spatially legal"! Which one or ones a consumer prefers is up to them, but personally I prefer to go with the fidelity of what the artists actually put on the recording.

3. That seems to be the heart of your problem, you had a "powerful moment" when you realised something. What you realised is in fact actually false (crossfeed does NOT "fix this"), it's nothing more than a personal preference but unfortunately, because for you it was a "powerful moment", you've spent an inordinate amount of time trying to turn it into something more than just a personal preference. In your mind you've (falsely) turned it into an objective fact, which you then try to force on everyone else on the (false) basis that it is an objective fact and therefore anyone who disagrees with you must be ignorant of those facts/an ignoramus.

You're like some extremist born again Christian who can't separate faith from fact, gets very upset with anyone who refutes their "facts" and just keeps preaching their faith as fact regardless!



ironmine said:


> [1] Audiophiles due to their ignorance are among the most stubborn people when it comes to progress in audio.
> [1a] They always defend the older inferior quality-degrading technologies until their last breath. It's funny to watch how they resist the advance of crossfeed nowadays in the same way they resisted earlier the appearance of standalone DACs ...
> [2] I prefer to feed my VST plugins with 24-bit resolution files than 16-bit files as it increases the quality of processing.
> [2a] In 5% cases I may also use a VST equalizer (DMG Equilibrium is my favorite), mainly to boost LF by 1.0-2.0 dB.



1. They are indeed apparently ignorant, as your post demonstrates! Because:
1a. You have this backwards! Crossfeed is an older, inferior quality-degrading technology that's been around far longer than any consumer digital audio, let alone "standalone DACs" and despite the fact that crossfeed has been around for many decades, it's never really caught on.

2. No it doesn't, it makes no difference whatsoever! Your VST plugins process at 32- or 64-bit float regardless of whether you feed them 16-bit or 24-bit files.
2a. DMG make some excellent plugins, I use several myself on a daily basis, although I don't really see why a consumer would need such complex functionality for such a simple task, when a free EQ plugin would achieve the same thing. Dave Gamble is a highly knowledgeable and respected developer and unlike many, he's perfectly willing to engage with consumers, so you could ask him yourself!
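
To illustrate the point in 2: hosts typically normalize integer PCM into the same floating-point range before any plugin sees it, so 16-bit and 24-bit source files land in the identical processing domain (the helper name here is mine, a sketch of the general idea rather than any host's actual code):

```python
import numpy as np

def pcm_to_float(samples, bits):
    """Normalize integer PCM samples to float64 in [-1.0, 1.0),
    roughly what a host does before handing audio to a plugin."""
    return np.asarray(samples, dtype=np.float64) / float(2 ** (bits - 1))

# The same full-scale material stored at 16 and at 24 bit becomes
# (near-)identical floats; the difference is only quantization noise.
x16 = pcm_to_float([32767, -32768, 0], 16)
x24 = pcm_to_float([8388607, -8388608, 0], 24)
print(np.max(np.abs(x16 - x24)) < 1e-4)  # → True
```

Whatever the source bit depth, the plugin's arithmetic happens in the same float format, so the source depth doesn't change the "quality of processing".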

G


----------



## bigshot (Feb 14, 2019)

ironmine said:


> Unlike CD-players, DACs do not require CDs (outdated, hard to copy and expensive source of sound) and they can play hi-rez music.



I have an Oppo blu-ray player that can do all that. And I have an AVR with a built in DAC too. Both of those have DSPs. I suppose if you don't want to knock your high data rate files down to a normal file size a phone player won't work.



ironmine said:


> I prefer to feed my VST plugins with 24-bit resolution files than 16-bit files as it increases the quality of processing.



It processes at different bit rates depending on the file? That's interesting. I think all the DSPs I work with process at a fixed level (I'm guessing 16/44.1 PCM) and output at that level. How can you check to see that the file being played isn't being transcoded up or down when you use processing? Does it output a processed file you can look at? What kind of processing are you doing that requires a high rate? Are you messing with timing stuff?


----------



## castleofargh

won't a VST use whatever bit depth the player/DAW/... is using, so most likely 32 or 64bit?


----------



## 71 dB

gregorio said:


> 1. Thanks for proving my point, every single one of the points in your response are just repeats of the exact same points you made over a year (and 30 odd pages) ago.
> 
> 2. Points which have already been addressed and refuted but here you are just repeating the same points again and insulting everyone who doesn't share your ignorance of the facts and your preferences! I'm not going to refute every one of your points because it's already been done in this thread, I'll just address your main point of ignorance upon which you base everything:
> 
> ...



1. Yes, they are repeats because the facts are still the same. I repeat them because we may have new people on this board who don't go 30 pages into the past.
2. Refuted only in your mind. I'm not purposely insulting anyone here. If you get insulted, that's your problem. Why do you get so triggered by my posts? You certainly don't behave the way someone with your experience in the field of sound engineering should behave. I would take you much more seriously if you recognized at least some of my points as correct while offering solid arguments for your disagreements.
3. Natural in this context means that the spatial cues, however obtained (acoustic binaural recording, VST plugins in a DAW, or any other way), have somewhat natural levels of parameters such as ILD, ITD, ISD and reverberation, so that the unnatural character of the sound does not cause unnecessary listening fatigue or distortion of spatial information. Mixing being ART doesn't automatically make it great ART. Understanding spatiality helps create better ART, just as knowing music theory helps compose better music. Maybe music production should be about avoiding what would not occur "naturally" and exploring ARTistic possibilities within that framework? All artists need to ask themselves whether their artistic goals make sense, especially if they produce music for other people, the consumers. You have gotten away with nonsensical spatiality because most consumers are spatially ignorant.
4. My opinions regarding this issue are grounded in scientific facts (studied at university) and careful thinking about the implications since 2012, after realizing the existence of excessive spatiality in certain types of music reproduction scenarios. Excessive spatiality exists in most recordings, whether it's due to ignorance or not. My "opinion" about what constitutes excessive spatiality is based on two things: the science behind human spatial hearing (HRTF etc.) and my own listening experiences, which are well in line with the established science. I'm ready to fine-tune and refine my understanding if needed, but it seems I am quite close to the truth. However, you telling me I have to accept excessive spatiality because it's ART is not something that will change my mind.
5. If excessive spatiality is the _desired_ intention, it fails with speakers. Widening sound "outside" the speakers is about fooling spatial hearing into thinking the sound source is outside the line between the speakers, but does it create excessive spatiality at low frequencies? No. It is two speakers playing what is fed to them, both creating natural spatiality further "softened" by the room acoustics at the ears of the listener; the resulting sound simply has natural spatial cues that fool spatial hearing. In principle this is no different from fooling hearing with a monophonic phantom center channel or a sound panned somewhere between the speakers. We hear the sound coming from a direction where there is no sound source. Such sound is not fatiguing, because there's no unnaturalness to it. However, the same signal fed to headphones (without crossfeed) creates unnatural spatiality and the sound becomes fatiguing. Why is headphone crossfeed "lower quality", but acoustic crossfeed + room acoustics isn't? If speakers and headphones give totally different spatiality (the former natural and the latter unnatural), which one is the intent of the artist? I believe that using proper crossfeed I get closest to the intent of the artist. I do not believe the ART of King Crimson is about excessive spatiality at all! I believe their ART is about masterful guitar playing, inventive time signatures, musical energy, harmony, melodies, etc. All that stuff gets to my mind best when I use proper crossfeed. Very strange if the intent is often something that sounds bad to me and, vice versa, what sounds best to me is against the artistic intent! I don't dictate what is natural. Our spatial hearing dictates it. It's biology. The size and shape of our head make it so that you can have large ILD at low frequencies only by bringing the sound source VERY near one ear. For an ILD of ~10 dB this distance is about 5 inches, and for an ILD of ~20 dB just one inch! Is that the intent? Having the band play a few inches from your head? My intent would be a large soundstage, depth. For that you need very small ILD (1 dB or less!) at low frequencies (+ other spatial cues such as reflections and reverberation, of course). Instead of large ILD you use ITD at low frequencies (0-640 µs). I'm not claiming expertise in _music production_, although I know something about that too and am not totally ignorant. I'm interested in learning more about music production; YouTube has tons of videos about it and I am watching them. I do claim expertise in spatial hearing, and I am proposing what music production should be in that regard. You act as if music were all about (excessive) spatiality when it's only one aspect of it. Important, but still only one of many important aspects.
6. Believe it or not, I understand all of this. Proper crossfeed doesn't do more harm than good. If it did, it wouldn't be proper crossfeed, but "too much" crossfeed! Sometimes proper crossfeed = no crossfeed. That happens when the recording doesn't have excessive spatiality. Of course crossfeed doesn't emulate the whole acoustic transfer function between speakers and ears. It addresses the thing that is actually harmful, namely the lack of acoustic crossfeed. Compared to studio acoustics, speaker listening usually has too much acoustics, while headphone listening has none except what is in the recording itself. In both cases the "error" is natural, unless the recording totally lacks any acoustics. Lack of acoustic crossfeed is the only "unnatural" problem we need to fix. If you want the acoustics of your listening room incorporated into the sound, you fix that by putting the headphones away and listening to your speakers (aka the Bigshot hack). Audio reproduction is about making compromises. To make the best compromises one needs to know the importance of the contradicting properties. You allow many "insignificant" problems if that fixes a major problem such as excessive spatiality. Surely you know that? Right?
7. You are the one insisting we listeners should share your ARTistic vision about spatiality even when it contradicts the fundamental principles of human spatial hearing. My opinions are based on established science, something witnessed by the fact that crossfeed lovers exist, people open-minded enough to recognise the benefits. These people existed long before I discovered crossfeeding. People get used to excessive spatiality, thinking it's normal and correct. I was one of those people before 2012, spatially ignorant. Crossfeed is about doing headphone listening more correctly. Bass frequencies become more realistic/physical, the stereo image gets more precise and listening fatigue disappears. The sound just becomes more natural. All of this is strong proof of a working method to improve headphone listening. So, I have all this to back up my "opinion". I also advise people to use the correct (proper) amount of crossfeed, which sometimes is zero crossfeed, and warn about too much crossfeed. In this context your constant claims of me being totally ignorant are unwarranted to say the least, and I'm confident most readers of our posts will agree. What I do lack is the biases of the sound engineer bubble. That much is clear.
8. Not at all. I am not at all proud of discovering crossfeed two decades after learning the science behind it! Things like this happen, because we are human. Some other things I realized very quickly / young, and that balances things out...
9. Not talked about much in public among the people who consume music.
10. Compression is an art form too, but that doesn't mean the loudness war is a positive thing. It causes listening fatigue too! "Illegal" = illogical. Large ILD at low frequencies logically means a sound source very near one ear, which alone is a bit weird, but other spatial cues such as reverberation may suggest a sound far away => illogical/illegal spatiality. What is this fetish of bands playing on my shoulders and at my ears, annoyingly? Nasty ART! What is this fetish of fake bass? What is this fetish of a fuzzy stereo image where impulse-like sounds break into fragments all around? It doesn't matter how long music production has been spatially "unnatural". Maybe it's time to end the lunacy and start doing things correctly? In fact that's happening, and a lot of modern music has better spatiality than older productions. So much better than the early ping-pong recordings! Also, if there's excessive spatiality, crossfeed helps fix the problems, so all good. Speakers don't have illegality problems. It is a headphone thing.
11. If you mix 100 tracks which all have "legal" spatiality, the downmix is legal too, but probably too narrow. Individual tracks can have a certain amount of excessive spatiality, because tracks mask each other more or less. You can even use hard left/right panning on some tracks if you know what you're doing, since the illegalities are masked by other tracks.
12. Speakers + room force natural spatiality, the spatiality of two loudspeakers playing in a room. Even if you put one of your ears near one speaker, the acoustics leak sound to your other ear and reduce ILD.
13. No, only with headphones. Speaker sound can be very colored and all due to acoustics, but the spatiality is 100 % natural: No excessive ILD, ITD,... …both ears are in the same acoustic space experiencing the same acoustic waves. If your left ear experiences strong bass, your right ear will experience it too! Maybe 0.9 dB quieter and 218 µs later, but very similarly nevertheless. Put headphones on and all bets are off! Who knows what kind of ART-vision the sound engineer had!
14. Speakers + room = always legal (natural) spatiality. Headphones without crossfeed = often illegal (unnatural) spatiality. Headphones with proper or stronger crossfeed = legal (natural) spatiality.
15. It's the objective facts that made me have the realization in the first place. Sure, my understanding of the issue has deepened since the initial realization, but that's completely normal and the way our understanding develops. I am talking about scientific facts and how they relate to headphone listening, reflecting my understanding/knowledge of it. Readers can make their own conclusions. I am _encouraging_ use of a crossfeeder rather than forcing it.
16. My facts haven't been refuted. You can't swipe away decades of scientific research on human spatial hearing by just calling me ignorant.
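The crossfeed idea argued over in this thread, mixing a delayed, lowpass-filtered, attenuated copy of the opposite channel into each ear, can be sketched in a few lines. This is a minimal illustration, not any particular plugin's algorithm; the delay, cutoff and attenuation values are placeholder assumptions:

```python
import math

def crossfeed(left, right, rate=44100, delay_ms=0.3, atten_db=-12.0, cutoff_hz=700.0):
    """Mix a delayed, one-pole-lowpassed, attenuated copy of the
    opposite channel into each channel. Returns (left_out, right_out).
    All parameter values are illustrative, not a reference design."""
    delay = max(1, int(rate * delay_ms / 1000.0))
    gain = 10.0 ** (atten_db / 20.0)                # dB -> linear
    a = math.exp(-2.0 * math.pi * cutoff_hz / rate)  # one-pole lowpass coefficient

    def feed(src):
        out, state = [], 0.0
        for x in src:
            state = (1.0 - a) * x + a * state        # lowpass filter
            out.append(gain * state)                 # attenuate
        return [0.0] * delay + out[:len(src) - delay]  # delay

    fed_l, fed_r = feed(left), feed(right)
    left_out = [l + fr for l, fr in zip(left, fed_r)]
    right_out = [r + fl for r, fl in zip(right, fed_l)]
    return left_out, right_out
```

Fed a hard-panned left signal, the right output becomes a quieter, duller, slightly later copy, which is exactly the ILD/ITD reduction being debated.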


----------



## 71 dB

ironmine said:


> 1. Sometimes I indeed prefer the CD version of an album, if its hi-res remastered version has squashed dynamics. There are CDs that sound excellent and are recorded well.
> 
> 2. But under all other conditions being equal, I let a hi-res variant remain in my collection and I delete its CD version from my hard drive. Even not going into the discussion whether a hi-res can sound better than its CD counterpart, I prefer to feed my VST plugins with 24-bit resolution files than 16-bit files as it increases the quality of processing.
> 
> 3. When I listen through headphones, I use 112dB Redline Monitor for its crossfeed effect. Through speakers, I use MathAudio Room EQ for digital room correction. These are my 2 VST plugins that I always use (plus dithering + monitoring). In 5% of cases I may also use a VST equalizer (DMG Equilibrium is my favorite), mainly to boost LF by 1.0-2.0 dB.


1. That's a valid reason.    2. I don't think it does.    3. Ok.


----------



## Amberlamps

71 dB said:


> 1. Yes, they are repeats because the facts are still the same. I repeat them because we may have new people on this board who don't go 30 pages into the past.
> 2. Refuted only in your mind. I'm not purposely insulting anyone here. If you get insulted it's your problem. Why do you get so triggered by my posts? You certainly don't behave like someone with your experience in the field of sound engineering should behave. I would take you much more seriously if you recognized at least some of my points as correct while offering solid arguments for your disagreements.
> 3. Natural in this context means that the spatial cues, however obtained (acoustic binaural recording, VST plugins in DAW or any other way) have somewhat natural levels of parameters such as ILD, ITD, ISP and reverberation, so that the unnatural nature of the sound does not cause unnecessary listening fatigue nor distortion of spatial information. Mixing being ART doesn't automatically make it great ART. Understanding spatiality helps creating better ART, just as knowing music theory helps composing better music. Maybe music production should be about avoiding what would not occur "naturally" and exploring ARTistic possibilities within that framework? All artists need to ask themselves whether their artistic goals make sense, especially if you produce music for other people, the consumers. You have gotten away with nonsensical spatiality because most consumers are spatially ignorant.
> 4. My opinions regarding this issue are grounded in scientific facts (studied in the university) and careful thinking about the implications since 2012, after realizing the existence of excessive spatiality in certain types of music reproduction scenarios. Excessive spatiality exists in most recordings whether it's due to ignorance or not. My "opinion" about what is excessive spatiality is based on two things: the science behind human spatial hearing (HRTF etc.) and my own listening experiences, which are well in line with the established science. I'm ready to finetune and refine my understanding if needed, but it seems I am quite close to the truth. However, you telling me I have to accept excessive spatiality because it's ART is not something that will change my mind.
> ...



Subscribed.

I like fireworks


----------



## ironmine

71 dB said:


> 2. I don't think it does.



How come "it doesn't"?  

If you have two audio files, one of them is 16 bit and the other one is 24 bit (not just 16 bits padded with zeros, but a legitimate 24 bit file), then it's better to feed into your DAW the 24 bit file.



bigshot said:


> It processes at different bit rates depending on the file?



No, it does not. The DAW processes both 16 bit files and 24 bit files at the same bit rate (32 bits).


----------



## bigshot

I guess it doesn't matter what the bit rate of the file is if it processes all files at the same rate. That was what I suspected.


----------



## gregorio (Feb 15, 2019)

bigshot said:


> [1] I think all the DSPs I work with process at a fixed level (I'm guessing 16/44.1 PCM) and output at that level.
> [2] How can you check to see that the file being played isn't being transcoded up or down when you use processing?
> [2a] Does it output a processed file you can look at?
> [2c] What kind of processing are you doing that requires a high rate?



1. That's generally not the case, it varies depending on the plugin but none of them process at 16bit and as far as I'm aware no plugin has ever processed at 16bit, although it's possible some very early plugins (in the early 1990's) did, before DAWs were professionally popular. Today and for many years plugins operate at a fixed bit depth, which is either 32bit float or 64bit float, depending on which DAW you're using, which of those two bit depths it supports and which version of the plugin you have installed. On the sample rate side of things it also varies, some plugins will process at whatever sample rate your DAW is set to, which could be 44.1kHz, others have a fixed processing sample rate but typically that is not 44.1kHz, typically it would be 96kHz. Convolution reverb plugins are a good example of this, though there are other examples, modelled plugins such as some EQs, compressors, limiters, guitar amp emulators, etc. And still other plugins may oversample, not have a fixed sample rate but also not use the sample rate of the DAW/environment but some multiple of it.

2. There is no way of knowing unless: A. The documentation actually tells you, although typically the documentation only tells you when the plugin has the user definable feature to turn oversampling on or off or B. Ask the developer and hope they'll divulge the answer.
2a. The output file doesn't tell you anything about the sample rate the plugin processed at, the plugin will output the sample rate the DAW/Environment is set to. If we take a typical convolution reverb as an example, the data flow and processing would be like this: The 16 or 24bit audio file/s will be loaded into the DAW/environment and converted to 32 or 64bit float, the sample rate will remain the same or be converted if the DAW is set to a different sample rate (than the input audio files). Let's say your input files are 44.1kHz and your DAW is set to 44.1kHz, in which case no sample rate conversion is done by the DAW but the bit depth will be converted to 32bit (or 64bit). The DAW will then pass this 44.1kHz/32bit (or 64bit) file to the reverb plugin, which will upsample it to 96kHz, process it at 96kHz/32bit (or 64bit) and then downsample it back to 44.1kHz for output. If on the other hand your DAW/environment is set to 192kHz sample rate, the plugin will be fed 192kHz/32bit (or 64bit) which it will downsample to 96kHz, process at 96kHz/32bit (or 64bit) and then upsample its output to 192kHz/32bit (or 64bit) to match the DAW environment.
2c. There are various reasons why the processing sample rate could legitimately be different (higher or lower). In the case of a convolution reverb for example, it's far more practical (for both the developer and end user) to supply the impulse responses at one sample rate (96kHz for example) and convert the input from the DAW to that sample rate, rather than supplying each impulse response in every sample rate and not converting the DAW input sample rate. Another reason would be some/many modelled plugins, plugins which emulate some vintage bit of kit like a vintage EQ, compressor, limiter, guitar amp or analogue synth for example. Prized vintage kit is prized because of some non-linear behaviour, typically IMD and/or various other non-linear distortions. It is therefore often necessary to over- or upsample so that the ultrasonic frequencies causing the IMD can be generated, and then downsample again once the IMD products (in the audible frequency band) have been created. Another reason again would be in the case of a true peak (TP) compressor or limiter.
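The aliasing problem that motivates this oversampling is easy to quantify: any harmonic or IMD product a nonlinear process generates above the Nyquist frequency folds back into the audible band. A small illustrative helper (just the folding arithmetic, not taken from any plugin):

```python
def alias_freq(f_hz, fs_hz):
    """Frequency (Hz) where a component at f_hz lands after sampling
    at fs_hz: components fold (mirror) around the Nyquist frequency fs_hz/2."""
    f = f_hz % fs_hz
    return min(f, fs_hz - f)
```

For example, the 3rd harmonic of a 15 kHz tone sits at 45 kHz. Generated at a 44.1 kHz processing rate it folds down to an audible 900 Hz; at a 96 kHz processing rate it stays put above the audible band and can be filtered off before downsampling, which is the whole point of running the nonlinear stage oversampled.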



castleofargh said:


> won't a VST use whatever bit depth the player/DAW/... is using, so most likely 32 or 64bit?



Yes, a VST plugin will always operate at either 32 or 64bit, although it can be difficult to know which of those two bit depths is actually being used for processing. The VST marketplace is unregulated and there's nothing to stop an unscrupulous developer from taking say a 64bit input (from the DAW/Environment), truncating it to 32bit, processing at 32bit and then padding the output with zeros back to 64bit again. The opposite is also possible, taking a 32bit input, processing at 64bit and then truncating (or rounding or dithering) the output back to 32bit again. There's no way of knowing if this is occurring without hacking the plugin and analysing the code, or unless the developer actually states what the plugin is doing.



ironmine said:


> If you have two audio files, one of them is 16 bit and the other one is 24 bit (not just 16 bits padded with zeros, but a legitimate 24 bit file), then it's better to feed into your DAW the 24 bit file.
> The DAW processes both 16 bit files and 24 bit files at the same bit rate (32 bits).



These two statements are contradictory. As BOTH 16bit and 24bit files are converted to 32bit (or 64bit) float and processed at 32 or 64bit float, why is it better to feed your DAW 24bit files?

G


----------



## old tech

ironmine said:


> How come "it doesn't"?
> 
> If you have two audio files, one of them is 16 bit and the other one is 24 bit (*not just 16 bits padded with zeros, but a legitimate 24 bit file*), then it's better to feed into your DAW the 24 bit file.


What is a legitimate 24 bit file in this context?  

It may not be just 16 bits padded with zeros, but it certainly isn't 16 bits padded with extra music, unless of course you have music with a greater dynamic range than 96 dB.

What music do you listen to that exceeds 96 dB of dynamic range?


----------



## ironmine

gregorio said:


> As BOTH 16bit and 24bit files are converted to 32bit (or 64bit) float and processed at 32 or 64bit float, why is it better to feed your DAW 24bit files?
> G



Because of the extra precision these additional 8 bits provide. 

A 32bit file converted from a 16 bit file will have an extra 16 empty bits which are stuffed with zeroes. 16 bits will be legitimate audio and the other 16 added bits will be "zero-stuffing".

But a 32 bit file converted from a 24 bit file will have only 8 empty bits. 24 bits are legitimate and only 8 are "zero-stuffing".

Read:
*Why bother with 24-bit DAC*

Quote: "*Conclusion... *As you can imagine, the difference between 16-bit and 24-bits is about the extra precision those 8 bits can provide. Manipulation of the data like volume attenuation even to a significant degree (like -25dB) will not result in loss to low-level detail and subtle nuances will be passed on to a good hi-res DAC after DSP manipulation. Of course, audio engineers have been using 24 or even 32-bit audio in the professional setting for ages for the best audio quality. ... I personally am not of the camp that would forego readily accessible technological improvements like 24-bit resolution."


----------



## ironmine

bigshot said:


> I guess it doesn't matter what the bit rate of the file is if it processes all files at the same rate. That was what I suspected.



Are you saying that you can feed your 32bit DAW or your 32bit software audio player with 24bit or 16bit or 8 bit or even 2bit audio files and it will not matter because all of them will be processed anyway at the same rate? Ahaha. 

Why don't you download your movies in 320 x 240 resolution to watch them on your 4K UHD TV (3840 x 2160)? There won't be any difference between 320x240 video files and 1920x1080 video files, because all of them will be processed at 4K resolution anyway and shown on a 4K TV. Is that your logic?


----------



## 71 dB

ironmine said:


> How come "it doesn't"?
> 
> If you have two audio files, one of them is16 bit and the other one is 24 bits (not just 16 bits padded with zeros, but a legitimate 24 bit file), then it's better to feed into your DAW the 24 bit file.



Sorry about my short responses. I was exhausted after replying to gregorio…

16/44.1 is all you really need. You don't need more than 13 bits' worth (~80 dB) of dynamic range. So, even if 16 bit files didn't use the highest 3 bits they would be enough, just very quiet. 80 dB of dynamic range is enough. If you listen so loud that peaks go to 100 dB, the noise floor is at ~20 dB with flat dither and _perceptually_ at ~0-10 dB with shaped dither. You can hear sounds this quiet (so quiet that the blood traveling in your veins starts to mask them!) only within the most sensitive bandwidth of your hearing (500-5000 Hz), in extremely quiet places such as an anechoic room, after you have spent some time in there in silence. When you listen to music peaking at 100 dB you do not hear them; 60 dB of dynamic range in a recording is extreme. 24 bits does not even give better sound, because a lower noise floor is all it gives when dither is used, and 16 bits already gives a lower noise floor than you or anyone else consuming music needs.

In plugins the calculations are done at a higher bit depth anyway, and even at a higher temporary sample rate if necessary/beneficial (for example distortion plugins create harmonics which would cause aliasing otherwise).
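The dynamic-range figures in this post follow from the textbook rule that an ideal N-bit quantizer gives roughly 6.02·N + 1.76 dB of SNR for a full-scale sine wave. A quick sanity check of the ~80 dB and ~20 dB numbers (the formula is standard; mapping it onto SPL is just the subtraction shown):

```python
import math

def quantization_snr_db(bits):
    """Theoretical SNR of an ideal N-bit quantizer driven by a full-scale
    sine wave: 20*log10(2**N) + 1.76, i.e. ~6.02 dB per bit + 1.76 dB."""
    return 20 * math.log10(2 ** bits) + 1.76

def noise_floor_spl(peak_spl_db, bits):
    """Where the quantization/dither noise floor sits (in dB SPL)
    if the musical peaks reach peak_spl_db."""
    return peak_spl_db - quantization_snr_db(bits)
```

13 bits gives just over 80 dB, so with peaks at 100 dB SPL the floor lands near 20 dB SPL, matching the post; 16 bits gives ~98 dB and 24 bits ~146 dB, far below any domestic listening room's ambient noise.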


----------



## 71 dB

ironmine said:


> Why don't you download your movies in 320 x 240 resolution to watch them on your 4K UHD TV (3840 x 2160)? There won't be any difference between 320x240 video files and 1920x1080 video files, because all of them will be processed at 4K resolution anyway and shown on a 4K TV. Is that your logic?



CD already is "hi-res" to our ears, but for example DVD isn't "hi-res" to our eyes. When you are past the resolution of your hearing or vision, increasing the resolution doesn't change anything. 16 bit / 44.1 kHz audio is like 8K video. 320 x 240 resolution is like listening to 8 bit / 11.025 kHz "telephone" sound. Don't let video resolutions fool you. Sound is different from picture, and CD already reached the limits of human hearing, while video is only now reaching them, slowly.


----------



## 71 dB

ironmine said:


> Because of the extra precision these additional 8 bits provide.
> 
> A 32bit file converted from a 16 bit file will have extra 16 empty bits which are stuffed with zeroes. 16 bits will be legitimate audio and 16 other added bits will be "zero-stuffing".
> 
> ...



I read the article quickly. I think it's a bit of a mess. Yes, 24 bit is beneficial when attenuating 25 dB, but the benefits are there also with 16 bit signals! The best way to attenuate a signal is bit-shifting. You do nothing to the signal. Instead of using the highest 16 bits of 24, you use the middle 16 bits for example to have a 24 dB attenuated, otherwise identical version. 0.75 times the signal is -2.5 dB; you use powers of 2 and add them to build your scaling coefficient, like 0.75 = 0.5 + 0.25 = 2^(-1) + 2^(-2). Nothing is added or taken away. The signal is just scaled. Nobody is against 24 bit DACs! In studios you of course use 24 bit sound, because your levels aren't optimized! You need safety margin. Strange article.
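The shift-and-add scaling described above can be shown directly on integer samples. One caveat worth making explicit: a plain right shift drops the low bits, so the "nothing is taken away" claim strictly holds only when the shifted result lives in a wider word (e.g. 16-bit data moved down inside a 24- or 32-bit container). A minimal sketch:

```python
def scale_075(x):
    """0.75 * x using only shifts and adds: 0.75 = 2**-1 + 2**-2,
    i.e. (x >> 1) + (x >> 2). In level terms that is about -2.5 dB."""
    return (x >> 1) + (x >> 2)

def shift_attenuate(x, bits):
    """Each bit of right shift attenuates by ~6.02 dB; shifting by 4
    (taking the 'middle' 16 bits of a 24-bit word) is ~-24 dB."""
    return x >> bits
```

For exact powers of two the shift loses nothing at all; for other values the rounding error stays below one LSB of the destination word.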


----------



## castleofargh

ironmine said:


> Are you saying that you can feed your 32bit DAW or your 32bit software audio player with 24bit or 16bit or 8 bit or even 2bit audio files and it will not matter because all of them will be processed anyway at the same rate? Ahaha.
> 
> Why don't you download your movies in 320 x 240 resolution to watch them on your 4K UHD TV (3840 x 2160)? There won't be any difference between 320x240 video files and 1920x1080 video files, because all of them will be processed at 4K resolution anyway and shown on a 4K TV. Is that your logic?


I don't remember one time where someone brought up the visible changes of increasing video resolution in a digital audio argument that wasn't trying to support a logical fallacy. the basic notion that low resolution video is readily visible as being non-transparent, while 16bit is already beyond audibility under typical conditions, is more than enough to debunk your argument.



ironmine said:


> Because of the extra precision these additional 8 bits provide.
> 
> A 32bit file converted from a 16 bit file will have extra 16 empty bits which are stuffed with zeroes. 16 bits will be legitimate audio and 16 other added bits will be "zero-stuffing".
> 
> ...


zero padding will allow the processing to apply with high precision. it's the same reason why some VSTs work at a fixed sample rate and will resample the original signal because it simply works better that way. then they go back to the original sample rate once the processing is done. nobody is denying the benefits of having more bits to process something. be it for ease of use(gain changes without concern), or to maintain quality as long as possible when a great many processes are going to be applied(like making an album). that specific part can actually be compared to processing video or pictures.
but you mustn't mistake that rationale for some notion of audibility. the moment you decided to apply VSTs to your playback chain, you abandoned fidelity in favor of making something subjectively better(not objectively so!!!!!!). it's a choice, I happen to do that pretty much all the time. and yes I output my signal to 24bit because of all the gain attenuation I often end up applying on the signal(just replaygain can result in more than 10dB so that stuff can rapidly pile up if we're not careful). I do that so my 16bit track is "moved down" within the 24bit container instead of getting truncated at 16bit. so I'm not contesting the benefits of having higher bit depth at all. but I am contesting the significance of using a 24bit track vs using a 16bit track as an audible argument(with or without VST). my albums don't have 90dB of dynamic range, I don't notice the benefits of more accurate background noise being recorded, and I can't say that I have noticed a VST making an obvious audible difference because the original file was 24bit instead of 16. if you have examples of that, I'd be interested to see them. but if it's your gut talking about what is intuitive for you, then sorry that doesn't convince me.
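The point about outputting to 24 bit after gain attenuation can be shown numerically. This sketch (simple truncation, no dither, a hypothetical sample value) attenuates a 16-bit sample in floating point and re-quantizes it into containers of different widths:

```python
def attenuate_and_quantize(sample16, gain_db, out_bits):
    """Attenuate a 16-bit PCM sample in floating point, then
    re-quantize (truncate, no dither) into an out_bits-wide container.
    Returns the stored value as a float for easy comparison."""
    gain = 10.0 ** (gain_db / 20.0)
    exact = sample16 / 32768.0 * gain        # floating-point processing
    q = int(exact * 2 ** (out_bits - 1))     # truncate to the container grid
    return q / 2 ** (out_bits - 1)
```

After a 10 dB cut, a 16-bit output container can be off by up to one 16-bit LSB, while a 24-bit container's grid is 256 times finer, which is why moving a 16-bit track "down" inside a 24-bit output preserves its bottom bits instead of truncating them.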


----------



## 71 dB

Amberlamps said:


> Subscribed.
> 
> I like fireworks



Thanks, but don't expect fireworks every week or even month!


----------



## Amberlamps (Feb 15, 2019)

71 dB said:


> Thanks, but don't expect fireworks every week or even month!



It’s cool, a once a year rage’on is about as much as I can take from g.

Stick to the facts and science, something which is not practised in this sub forum no matter how much they like to tell you/folk that it is.

When rageon happens, please notify me by quoting my post, as I do not want to miss the upcoming angry man vein popping stroke that “is” coming your way


----------



## gregorio

71 dB said:


> 1. Yes, they are repeats because the facts are still the same. [1b] I repeat them because we may have new people on this board who don't go 30 pages into the past.
> 2. Refuted only in your mind.
> 2a. I'm not purposely insulting anyone here. If you get insulted it's your problem. Why do you get so triggered by my posts?
> 2b. You certainly don't behave like someone with your experience in the field of sound engineering should behave.
> ...



1. Yes the facts are still the same and unfortunately, so is your ignorance and misrepresentation of them!
1b. Of course, some new people you can try and convert to your religious zealotry by misrepresenting the facts and insulting those who are less ignorant of them than you!

2. Refuted by the obvious facts, of which you are stubbornly determined to remain ignorant because they conflict with your (erroneous) belief!
2a. What is wrong with you, don't you even know what you are posting? How many times have you called me and everyone else who disagrees with you "idiots", "ignoramuses" and "ignorant"? Anyone can go back 30 pages or just a post or two ago and see your posted insults for themselves!
2b. Yes I do. Although the response of other experienced professionals to being called an ignoramus by an amateur, who clearly has no idea what they're talking about, would vary. Some would just consider you such a complete nutter that it's not even worth responding to you, while others would actually call you a complete nutter and/or far worse! And others would do as I have, just throw your own insult back at you.
2c. No you wouldn't, that's ALREADY been tried over a year ago and you DID NOT take it seriously, you just ignored it and/or invented more nonsense to defend your personal preference as fact!

3. No, that's just utter nonsense that you've invented. It would be true ONLY in the case of an unprocessed, unmixed binaural recording. The WHOLE point of applying say a VST (or other plugin type) reverb/time based effect is to change/distort the "natural" spatial information recorded into something else!
3a. Yet again, your statement just demonstrates your utter ignorance of both my/our understanding of spatiality and of how that applies to art. If that's not bad enough, you've now added music theory to the list of things of which you're obviously ignorant, well done! Understanding the science of spatiality can help create better art but the fact of which you are ignorant (and appear to want to remain ignorant of) is that art is not constrained by those rules/science, that's what makes it art rather than science! For example, the whole history of western classical music is predicated on bending and breaking the rules of music theory and in the 20th century, of deliberately avoiding every single one of those rules!! Were Schoenberg, Webern, Stravinsky, Cage and countless other great composers all "ignoramuses" compared to you? Just as with music, there is a great deal of science regarding visual spatiality (perspective), however, in the late 19th Century many artists decided not to apply that science to their art, in fact to deliberately avoid it. Were Matisse, Picasso and numerous others all "ignoramuses" or is it you who's the ignoramus for not knowing/understanding the difference between art and science and apparently not having the listening skills to identify what is "natural" and what isn't. Are Picasso's paintings "perspectively illegal", can you tell that his paintings do not conform to the science of perspective, should he have stuck to the science of perspective? Who's the ignoramus here?
3b. And there we have it! Maybe music production should go back to the 1940's, explore within your "natural framework" what's already been extensively explored and evolved beyond. Maybe Picasso should have only explored within the "natural framework" of perspective, maybe all his (and countless other) paintings should be labelled "illegal" for their "excessive" perspective. Maybe Wagner, Debussy and countless others should be "illegal" for bending and breaking the rules of music theory/harmony. Maybe all these great professionals are ignoramuses except for you ... or maybe, you're the ignoramus!!
3c. Yep, just like Picasso should have asked himself if his artistic goals made sense and obviously he's gotten away with his nonsensical perspective because everyone is "Perspectively ignorant", except presumably the enlightened 71dB.


71 dB said:


> 4. My opinions regarding this issue are grounded in scientific facts (studied in the university) and careful thinking about the implications since 2012, after realizing the existence of excessive spatiality in certain types of music reproduction scenarios.
> [4a] Excessive spatiality exists in most recordings whether it's due to ignorance or not.
> [4b] My "opinion" about what is excessive spatiality is based on two things: The science behind human spatial hearing (HRTF etc.) and my own listening experiences which are well in line with the established science.
> 5. I do not believe the ART of King Crimson is about excessive spatiality at all! I believe their ART is about masterful guitar playing, inventive time signatures, musical energy, harmony, melodies, etc.
> ...



4. Are you really that ignorant and deluded? Your opinions are grounded ENTIRELY in your personal preference! In an attempt to justify that personal preference as objective fact, you then applied a few scientific facts you learned as a student and a few scientific facts you've learned since. However, it's all nonsense because those facts are only a small part of the picture, you've deliberately ignored other pertinent scientific facts (which contradict your justification), simply made-up nonsense/false facts where necessary, are apparently utterly ignorant of what art is/means and then use the circular logic of your preferences to define "excessive", "illegal", "illogical", "nasty", etc. It's all utter nonsense, you get to choose your own preferences, you don't get to present them as fact which everyone else must share or be an ignoramus and especially not here in this sub-forum!!!
4a. "Excessive spatiality" does exist due to ignorance, YOUR IGNORANCE! Sure, HP's do present a wider stereo image but whether that's "excessive" or not is YOUR personal opinion based on YOUR personal preference, which is fine for YOU but I prefer to retain the fidelity of the artists' personal preference/intention and wanting to retain that fidelity does NOT make me an ignoramus, it makes you the ignoramus for apparently deluding yourself that your preference is objective fact!
4b. And again, there we have it, in your own words!! Firstly, it's YOUR opinion! Secondly, that is patently false even according to the science. Crossfeed is based on JUST SOME of the facts of HRTF but it omits others, which is why HRTF supersedes crossfeed and even a theoretically perfect HRTF is still only part of the picture if we're talking about emulating speakers, we would also need an impulse response of the speakers in a room and convolution. Clearly then, "your own listening experiences" are NOT AT ALL "well in line with the established science", you obviously have rather poor listening skills and are ignorant (deliberately or otherwise) of the science! And to make matters even worse, even if a perfect HRTF and speaker convolution were possible it may still be invalid because the artists may prefer/intend/like the HP presentation as it is!

5. This is not the "What 71dB believes" forum! And, on top of that, it's laughably ironic. I've spent many days/hours with Bill Bruford, he's in the top handful of most knowledgeable and intelligent people in the music business. Crimson was one of the most progressive/experimental of bands, they constantly pushed/broke the rules of music theory and those of music production, they absolutely did NOT stick to "natural" spatiality, they sliced, diced and layered, processed and completely messed with spatiality (and the other aspects of music theory and production) all over the place, it was all entirely INTENTIONAL and they most certainly were NOT all ignoramuses!
5a. This is not the "What gets to 71dB's mind best" forum and it definitely is NOT the "What gets to 71dB's mind best is therefore fact" forum. There's no way that crossfeed can "fix" any of the spatial irregularities (or "illegalities" in your terminology) and make it conform to "natural" spatiality and even if it could, there's no way I'd want it to, any more than I'd want some processing filter to turn a Picasso painting into a "natural" perspective (and destroy the art). Again, you're free to play Crimson recordings however you choose but screwing-up their artistic intentions does not make you enlightened and me not wanting to screw it up does not make me an ignoramus!
5b. Why is it very strange? It's actually very common, many people at the time didn't like Picasso's cubism because it was unnatural, my mum hated Hendrix, couldn't understand why I liked him and couldn't understand why he didn't play the guitar properly/naturally. To some people the additional distortion of valve amps is preferable, others find analogue Vinyl preferable to digital, which is entirely up to them but it is NOT higher fidelity and is NOT an objective fact that it's better.
5c. No, you are not dictating what is "natural", what you're trying to dictate is that music production should/must be constrained by what is "natural", which is ridiculous nonsense!! Why are you even listening to King Crimson, doesn't all that "excessive", "illegal" spatiality drive you crazy? Is your perception really so deluded, your listening skills really so poor and your ignorance really so great that "in your mind" crossfeed magically fixes all the obvious, deliberate/intentional spatial "illegalities"??

I can't be bothered responding to your other points, they've ALL already been addressed earlier in this thread and they're all just variations of the above points: Complete falsehoods that you've just invented (like point #11 for example), deliberately ignore relevant scientific facts or ignore that art is not constrained by scientific facts anyway!

G


----------



## Phronesis

I haven't read the thread, I saw the title and am just jumping in without context of the discussion.

I've played around with cross-feed, and have had mixed results with it.  Sometimes it makes things better, sometimes worse, and sometimes just different.  It depends on the headphone and the track and my subjective preference.  I haven't found that cross-feed can simulate the sound of speakers in a room when using headphones.  I usually don't bother messing with the cross-feed settings of the Hugo 2, I just leave it off.


----------



## gregorio

ironmine said:


> [1] Because of the extra precision these additional 8 bits provide.
> [2] A 32bit file converted from a 16 bit file will have extra 16 empty bits which are stuffed with zeroes.
> [2a] But a 32 bit file converted from a 24 bit file will have only 8 empty bits. 24 bits are legitimate and 8 only are "zero-stuffing".



1. What extra precision? The precision is exactly the same. As soon as the audio enters the DAW environment it is converted to (say) a 32bit float, and at this stage its precision is completely unchanged. However, as soon as the first mathematical process is applied, the result is at 32bit float precision and it remains at 32bit float precision throughout all the following mathematical processes until the plugin has finished its processing, at which point the final 32bit float result is output. This is exactly the same process and precision regardless of whether your original audio files were 16 or 24 bit.

2. No it won't. There are no 32bit DAWs/environments, they're all 32bit (or 64bit) float. A 32bit float converted from a 16bit file will have 7 bits stuffed with zeros, which will be replaced with relevant values as soon as it undergoes the first mathematical process (computer instruction). The other 9 bits are a sign bit and 8 bits for the exponent.
2a. No it won't. There are no 32bit DAWs/environments, they're all 32bit (or 64bit) float. A 32bit float converted from a 24bit file will not stuff any of the 23 mantissa (fraction) bits with zeros; it will use the data stored in the 24bit file (the last 8 bits or so of which are most likely to be random zeros and ones). However, those mantissa bits will be replaced with other relevant values as soon as it undergoes the first mathematical process (instruction). The other 9 bits are a sign bit and 8 bits for the exponent.

Again, all processing is at 32bit (or 64bit) float, regardless of whether you feed the DAW/environment 16, 24 or even 8bit audio files.
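The conversion step described above can be sanity-checked in a few lines of Python: every 16-bit (and 24-bit) integer sample value fits exactly in the mantissa of an IEEE-754 32-bit float, so the conversion itself neither adds nor loses precision. A minimal stdlib-only sketch (the helper name is mine, purely for illustration):

```python
import struct

def roundtrip_f32(sample: int) -> int:
    """Pack an integer sample as an IEEE-754 32-bit float and read it back."""
    f32 = struct.unpack('<f', struct.pack('<f', float(sample)))[0]
    return int(f32)

# Every possible 16-bit sample value survives the float32 round trip exactly.
assert all(roundtrip_f32(s) == s for s in range(-32768, 32768))

# Spot-check the 24-bit extremes too (iterating all 2**24 values is slow).
for s in (-2**23, -1, 0, 1, 2**23 - 1):
    assert roundtrip_f32(s) == s
```

All 32-bit floats can represent every integer of magnitude up to 2^24 exactly, which covers both 16-bit and 24-bit sample ranges.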

G


----------



## 71 dB

gregorio said:


> 71 dB doesn't know anything ad nauseum…



I don't know why it is, but I seem to have difficulty making other people believe I know what I am talking about. It's really demoralizing. Why educate yourself about anything if you never get any recognition and respect for it? Is it because English isn't my first language? Maybe my language is too simple to give an impression of a smart person? My thinking happens on a higher level than you think.

Picasso's art (which I value highly) is not a good analogue of unnatural spatiality. You may not understand it, but I do. There is nothing unnatural about Picasso's art. It doesn't cause unnatural visual information. Perspective isn't real in 2D art. It's an abstraction. The 2D picture is interpreted to have some kind of perspective. Paintings are just paint on a canvas! Nothing unnatural about that. Eyes see the paint and our brain interprets the paint as some kind of crazy perspective. Unnatural spatiality causes the hearing system to create spatial distortion which is also delivered to a higher level and interpreted differently than what was intended. So, Picasso's art is not the same as excessive stereo.

If Picasso made VR art and created art where the left eye sees a different picture than the right eye, then maybe there would be unnatural aspects to it, like the right-eye picture being upside down or much darker. That WOULD cause unnatural visual information and fatigue, even headache. Visual crossfeed would balance the darkness and create natural visuality.

Sounds near one ear cause large ILD, but when and how? Low frequencies are created by large objects vibrating. If you go near a kick drum, the drum is bigger than your head and more sound leaks to the other ear. Also, the SPL is HUGE!! Ouch! So you play quieter, hit the drum VERY softly to compensate. Good, but the sound of the drum changes! The spectrum changes, because the drum has non-linearities. So if you mix bass frequencies with large ILD you end up with sounds that have the spectrum of loud playing but the perceived loudness of quiet playing. This is why bass with large ILD sounds fake. Do you really want art with fake bass? If so, then just know that speaker listening destroys this fakeness.

Spatiality has been abused (ping pong etc.) in music ever since stereophonic recordings were invented. There's two options:

(1) Have recording with excessive spatiality. Speakers are fine and headphones with proper crossfeed are good too.
(2) Create omnistereophonic recordings which work well as they are with speakers and headphones.

HRTF is crossfeed, just more detailed than "normal" crossfeed. So, when I talk about crossfeed it includes HRTF convolution. 

This is all I want to say at this point. No point in repeating what has been said already. If you don't believe me then you don't, and I just have to accept not being respected by you. I believe you are a competent audio engineer and you probably know much more than I do about many things related to your work, but spatiality is an area where you might learn something from me.

I have suffered from low self-esteem all my adult life for many reasons. I fight every day to increase my self-esteem. Being constantly rejected makes it really, really difficult. Life sucks.


----------



## bfreedma

71 dB said:


> I don't know why it is, but I seem to have difficulty making other people believe I know what I am talking about. It's really demoralizing. Why educate yourself about anything if you never get any recognition and respect for it? Is it because English isn't my first language? Maybe my language is too simple to give an impression of a smart person? My thinking happens on a higher level than you think.
> 
> Picasso's art (which I value highly) is not a good analogue of unnatural spatiality. You may not understand it, but I do. There is nothing unnatural about Picasso's art. It doesn't cause unnatural visual information. Perspective isn't real in 2D art. It's an abstraction. The 2D picture is interpreted to have some kind of perspective. Paintings are just paint on a canvas! Nothing unnatural about that. Eyes see the paint and our brain interprets the paint as some kind of crazy perspective. Unnatural spatiality causes the hearing system to create spatial distortion which is also delivered to a higher level and interpreted differently than what was intended. So, Picasso's art is not the same as excessive stereo.
> 
> ...




I certainly don't think your knowledge level or English skill are issues.  Your use of English is impressive for a second language and, sadly, based on what I encounter in the US, would be pretty good even if it were your first language.


A couple of (hopefully constructive) suggestions:

Don't denigrate yourself due to disagreements in a technical debate. I've found the discussion interesting and found value in both sides.  There is also a lot of personal preference involved beyond the pure technical aspects and what you prefer need not/should not hinge on those elements.

IMO, it would create a better debate environment if words like "ignoramus" weren't in play.  Whether intended or not, it comes across as an insult and is going to escalate the tone of the responses.


----------



## bigshot

I would strongly advise that basing your self esteem on how well you do in discussion forum arguments is a very bad idea. Better to just state your case clearly, consider arguments to the contrary honestly, and move on if the other party tries to escalate it.


----------



## 71 dB

bfreedma said:


> I certainly don't think your knowledge level or English skill are issues.  Your use of English is impressive for a second language, and sadly based on what I encounter in the US, would be pretty good if it was your first language.



Thanks! What I mean is that my use of the English language may look limited in terms of vocabulary and structure to a native speaker.



bfreedma said:


> A couple of (hopefully constructive) suggestions:
> 
> Don't denigrate yourself due to disagreements in a technical debate. I've found the discussion interesting and found value in both sides.  There is also a lot of personal preference involved beyond the pure technical aspects and what you prefer need not/should not hinge on those elements.



Yes, this is a good suggestion. I wouldn't mind so much if gregorio only targeted some details of my opinions. Fine details such as exact dB limits for excessive ILD, for example, are of course debatable, and the values I give are often rough estimates reflecting my current understanding of things. So if I say 3 dB is the limit, you can respond that actually 4 dB is the limit. What gregorio does is tell me that there's no limit at all, that anything goes because it's art, shooting down my whole premise! It's interesting how gregorio and I agree about a LOT of things, because we both understand digital audio for example, but when it comes to crossfeed, we couldn't disagree more.



bfreedma said:


> IMO, it would create a better debate environment if words like "ignoramus" weren't in play.  Whether intended or not, it comes across as an insult and is going to escalate the tone of the responses.



I agree. My first posts on this forum were polite and I believe most of my posts have been polite, but when my crossfeed posts got attacked so fiercely (I was really surprised and shocked) I totally lost my temper and even called people idiots. I shouldn't have done that, of course.


----------



## 71 dB

bigshot said:


> I would strongly advise that basing your self esteem on how well you do in discussion forum arguments is a very bad idea. Better to just state your case clearly, consider arguments to the contrary honestly, and move on if the other party tries to escalate it.



My self-esteem isn't based on how well I do in discussion forum arguments, but it takes hits if my opinions are rejected and I am told I know nothing. I wish it only mattered what I feel myself, but succeeding in life requires acceptance from other people. How can I even get a job (I am unemployed at the moment) if other people think I know nothing? What is it that I am good at? Spatiality? Not according to gregorio! What is my place in this world? What is it I give to the world? Knowledge about spatiality? Not according to gregorio! I'm so sick of all this rejection.


----------



## castleofargh

@gregorio stop attacking people! attack the ideas all you like, but there is no excuse to be this nasty toward @71 dB. surely you can explain things without being insulting. 




Amberlamps said:


> It’s cool, a once a year rage’on is about as much as I can take from g.
> 
> Stick to the facts and science, something which is not practised in this sub forum no matter how much they like to tell you/folk that it is.
> 
> When rageon happens, please notify me by quoting my post, as I do not want to miss the upcoming angry man vein popping stroke that “is” coming your way


the content in the sub forum is what forum members post in it. the only way to have more facts and scientific approach to topics is for people to bother posting that. if you want to share some, you're so very welcome. but if you only care to come troll and complain, you're part of the problem.


----------



## 71 dB

castleofargh said:


> @gregorio stop attacking people! attack the ideas all you like, but there is no excuse to be this nasty toward @71 dB. surely you can explain things without being insulting.



Thanks for the support! It helps me a lot to know there's nice people here supporting other people. It helps me to use respectful language myself, something I fail to do when I feel bad inside.


----------



## 71 dB

Maybe I should change my message to this: crossfeed is a good tool to remove/reduce excessive stereophony, if you want to do so?
Would that be more respectful and acceptable to Mr. gregorio?


----------



## Amberlamps

castleofargh said:


> @gregorio stop attacking people! attack the ideas all you like, but there is no excuse to be this nasty toward @71 dB. surely you can explain things without being insulting.
> 
> 
> 
> the content in the sub forum is what forum members post in it. the only way to have more facts and scientific approach to topics is for people to bother posting that. if you want to share some, you're so very welcome. but if you only care to come troll and complain, you're part of the problem.



I didn’t come to troll, no, as that would be unsporting to do such a thing ol boy, I just came in to see the fireworks.



Oh Suzy Q baby I love you, Suzy Q


----------



## bigshot (Feb 15, 2019)

I'm going to be honest. This isn't a good place to go fishing for ass pats. You have to understand that your point of view is your own, and be secure in that. A lot of people aren't going to see it the same. There's nothing wrong with that. We aren't here to care about each other's personal problems and act as psychic sponges to help everyone feel warm and fuzzy about themselves. We're here to talk about home audio. That's the only subject I'm interested in discussing. If I wanted a Dr Phil forum, I'd go to Dr Phil's site. But I will offer this advice... if it gets to be too much, just take a vacation from the forum for a couple of weeks and see how you feel after that.


----------



## castleofargh (Feb 16, 2019)

71 dB said:


> HRTF is crossfeed, just more detailed than "normal" crossfeed. So, when I talk about crossfeed it includes HRTF convolution.


I disagree with that. I get why you say it, but it's a little like saying that any frequency response is neutral, modulo some EQ. we can see it as correct if we want, but it conveys a strange point of view at the same time.
crossfeed is more like taking a pair of speakers in an anechoic chamber, and calculating the delay and FR to mix so that the headphone will give approximately the same signal. even in that fairly specific system, we have to deal with the headphone itself and how it's never going to provide a flat response on its own. then we have to look at the listener and calculate the right changes for his head: delays for the head size, and the right FR for a signal coming at about 30° on each side. something a typical crossfeed solution doesn't accurately offer. instead, typical crossfeed bothers with a very basic approximation of the low and high frequency change caused by a crude masking of the face on one side when the signal comes from the other. some offer to accurately set the delay before mixing the result together (but several crossfeed solutions don't even bother providing that setting). so even in the context of a specific reference that's not realistic speaker listening, where we disregard things like moving our heads or room reverb, only the most elaborate crossfeed, and some personal work on setting it correctly for ourselves, will achieve what you discuss as being crossfeed. and if you really consider convolution based on HRTF, then we're even further away from what people will get to use when they go look for a crossfeed solution.
maybe if we were all discussing the same actions on the signal, you wouldn't have some of us disagree with you so systematically?

beyond that, you take a pretty objective approach when it comes to ILD and ITD from a given angle. but the moment you agree to simulate only some variables, only approximate some and simply dismiss others when referring to an experience with stereo speakers, you have to consider that the subjective result might not always support the idea that a partial improvement is still an improvement. it's hard to predict exactly how conflicting cues will be interpreted by the brain. to take a well known issue, when you offer mono sound with the right FR for the listener at 0° altitude from the ears, some will feel like the sound comes from in front of them, and others will feel like the sound is in their head or on their nose, so long as they don't have visual confirmation of the sound source. we also know that some people respond better than others to head tracking (although I'm not sure if it's about the significance of visual cues, or simply that the HRTF used for the head tracking happens to be closer to the HRTF of some listeners?). if you try to decompose the human experience into small variables like that, you'll often get variations in how effective those are for the listeners. I won't claim that it is the same for crossfeed when properly calibrated for each listener, because honestly I don't know. but I also have no reason not to expect variations in the subjective impressions. so I believe you when you say that you get very convincing results, and I believe others when they say they don't and that crossfeed isn't close to enough for them.
I don't think gregorio ever tried to contest the reality of ILD and ITD as audio cues for positioning in space. I believe his issues are before, around, and after that. his focus is on psychoacoustics and on subjective impressions in general, as that's what he has had to consider all his life. making an album isn't objectively accurate, so why are albums panned instead of using more elaborate changes in delays and FR? because of habit? because speakers are a perversion of proper single sound sources? in the end things are done because they feel ok to most people, not because they're objectively the most accurate approach in some aspects. to me crossfeed is the minimalist version of 3D simulations (surround and whatever), and falls into the same issue where one will be amazed by the feeling of realism, while another will only experience a flawed solution that feels weird. accurate customization can most certainly improve on that, but something as basic and incomplete as crossfeed is going to be listener dependent. I can't imagine it being otherwise.
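For reference, the "very basic approximation" typical crossfeed performs can be sketched in a few lines: each output channel is the original channel plus a delayed, lowpass-filtered, attenuated copy of the opposite one. This is a toy stdlib-only sketch; the parameter values (roughly 0.3 ms delay, -10 dB attenuation, 700 Hz corner) are illustrative guesses, not a design calibrated to anyone's head:

```python
import math

def crossfeed(left, right, fs=44100, delay_ms=0.3, atten=0.316, cutoff=700.0):
    """Crude crossfeed sketch: mix a delayed, lowpass-filtered, attenuated
    copy of the opposite channel into each side. All parameters here are
    illustrative, not a calibrated design for any particular listener."""
    d = int(fs * delay_ms / 1000.0)             # cross-channel delay, samples
    a = math.exp(-2.0 * math.pi * cutoff / fs)  # one-pole lowpass coefficient

    def lowpass(x):                             # simple head-shadow stand-in
        y, out = 0.0, []
        for s in x:
            y = (1.0 - a) * s + a * y
            out.append(y)
        return out

    fl, fr = lowpass(left), lowpass(right)
    out_l = [s + atten * (fr[i - d] if i >= d else 0.0)
             for i, s in enumerate(left)]
    out_r = [s + atten * (fl[i - d] if i >= d else 0.0)
             for i, s in enumerate(right)]
    return out_l, out_r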


----------



## Phronesis

71 dB said:


> My self esteem isn't based on how well I do in discussion forum arguments, but it takes hits if my opinions are rejected and I am told I know nothing. I wish it only mattered what I feel myself, but succeeding in life requires acceptance among other people. How can I even get a job (I am unemployed at the moment) if other people think I know nothing? What is it that I am good at? Spatiality? Not according to gregorio! What is my place in this World? What is it I give to the World? Knowledge about spatiality? Not according to gregorio! So sick all this rejection.



@71 dB, I think forums like this may not be the best place for you.  With people being generally anonymous and interacting through computers, civility is diminished and people can get away with being rather rough with each other.  For the support of your psyche, I suggest connecting with nature, meditation, mindfulness practices, getting help of counselors and people who really care about you, etc.  It's quite common for people to struggle with issues of self-esteem, anxiety, depression, etc., and these issues can be dealt with effectively in a variety of ways.


----------



## Amberlamps (Feb 15, 2019)

bigshot said:


> I'm going to be honest. This isn't a good place to go fishing for ass pats. You have to understand that your point of view is your own, and be secure in that. A lot of people aren't going to see it the same. There's nothing wrong with that. We aren't here to care about each other's personal problems and act as psychic sponges to help everyone feel warm and fuzzy about themselves. We're here to talk about home audio. That's the only subject I'm interested in discussing. If I wanted a Dr Phil forum, I'd go Dr Phil's site. But I will offer this advice... if it gets to be too much, just take a vacation from the forum for a couple of weeks and see how you feel after that.



Don’t be too hard on him bigshot, no need for the above, a couple of lines could have said what you meant without putting the boot in.

I agree with you that headfi is not a therapy website, but you could have said it without trying to humiliate him.


----------



## Phronesis

Amberlamps said:


> Don’t be too hard on him bigshot, no need for the above, a couple of lines could have said what you meant without putting the boot in.
> 
> I agree with you that headfi is not a therapy website, but you could have said it without trying to humiliate him.



I'll add a suggestion: the forum has an ignore feature which enables hiding all messages for selected forum members.  I've used that feature, and it has made my experience of this forum much more pleasant.  Sometimes it's best to have no interaction with overly argumentative or mean-spirited people.


----------



## bigshot (Feb 15, 2019)

I wasn't trying to humiliate or punish. I was speaking the truth in my own straightforward, practical way. I don't hold any ill will toward anyone. Some people frustrate me and I just move past them, that's all. If everyone did that, we wouldn't have the endless back and forth battles we see here. We're all speaking here one on one, but the real conversation is with everyone... lurkers included. Forums are about community, not scoring individual points. Sometimes people forget that it isn't all about them. The internet makes it easy to live in a self reflecting bubble. All I'm saying is that trying to create that kind of experience for yourself is a lousy idea. It's better to open yourself up to the world for better and worse and not be so hung up on being an "expert" or fishing for ass pats. On a scale of 1 to 10, the kinds of experts and ass pats you find in internet forums don't even rate a 2.

This is just another internet forum.


----------



## 71 dB

Phronesis said:


> @71 dB, I think forums like this may not be the best place for you.



I think this planet/time isn't the best place for me. 
A place that is good for me is one where I fit in. 
I thought I fit in in a headphone forum with my opinions about crossfeed. 
Not the case apparently. Silly me.


----------



## 71 dB

bigshot said:


> This is just another internet forum.



Yeah, and I got just another meltdown...


----------



## Steve999

To crossfeed, or not to crossfeed, that is the question:
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles
And by opposing end them. To die—to sleep,
No more; and by a sleep to say we end
The heart-ache and the thousand natural shocks
That flesh is heir to: 'tis a consummation
Devoutly to be wish'd. To die, to sleep;
To sleep, perchance to dream—ay, there's the rub:
For in that sleep of death what dreams may come,
When we have shuffled off this mortal coil,
Must give us pause—there's the respect
That makes calamity of so long life.
For who would bear the whips and scorns of time,
Th'oppressor's wrong, the proud man's contumely,
The pangs of dispriz'd love, the law's delay,
The insolence of office, and the spurns
That patient merit of th'unworthy takes,
When he himself might his quietus make
With a bare bodkin? Who would fardels bear,
To grunt and sweat under a weary life,
But that the dread of something after death,
The undiscover'd country, from whose bourn
No traveller returns, puzzles the will,
And makes us rather bear those ills we have
Than fly to others that we know not of?
Thus conscience does make cowards of us all,
And thus the native hue of resolution
Is sicklied o'er with the pale cast of thought,
And enterprises of great pitch and moment
With this regard their currents turn awry
And lose the name of action.


----------



## Phronesis

71 dB said:


> I think this planet/time isn't the best place for me.
> A place that is good for me is one where I fit in.
> I thought I fit in in a headphone forum with my opinions about crossfeed.
> Not the case apparently. Silly me.



Everyone has a place in this world, but it can take time to discover it.  Hang in there and know that better days will come.

And don't be put off by a few people being rude.  They don't represent the attitudes of the majority.


----------



## Steve999

71 dB said:


> Yeah, and I got just another meltdown...



Hang in there. Seriously.


----------



## bfreedma

71 dB said:


> I think this planet/time isn't the best place for me.
> A place that is good for me is one where I fit in.
> I thought I fit in in a headphone forum with my opinions about crossfeed.
> Not the case apparently. Silly me.



You’ve got opinions, like a spirited debate, and can get a little bit cranky at times.
Seems to me like you fit in with the rest of us chickens just fine.


----------



## Amberlamps

71 dB said:


> I think this planet/time isn't the best place for me.
> A place that is good for me is one where I fit in.
> I thought I fit in in a headphone forum with my opinions about crossfeed.
> Not the case apparently. Silly me.



You’re not silly, you just presumed that you would be treated like an adult instead of being shouted at like you were a child.

Many other parts of headfi will welcome you and your knowledge/input.

Phronesis also added a good piece of info, stick gigilo on ignore and his posts will magically vanish from your screen.


----------



## Steve999 (Feb 15, 2019)

*“A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day. — 'Ah, so you shall be sure to be misunderstood.' — Is it so bad, then, to be misunderstood? Pythagoras was misunderstood, and Socrates, and Jesus, and Luther, and Copernicus, and Galileo, and Newton, and every pure and wise spirit that ever took flesh. To be great is to be misunderstood.” *

-Ralph Waldo Emerson


----------



## Steve999 (Feb 15, 2019)

I like crossfeed. I figured out how to get the crossfeed I wanted through this forum and then I ditched this place for 10 years or so (probably a lot more, I've never really figured it out) and I'll probably ditch it again some time. There's nothing like a real live person to make you laugh or feel alive.


----------



## 71 dB

People here try to prove normal crossfeed is a really crappy thing, but somehow it improves the sound A LOT for me. Sorry, but that's how it is. My ears are like that so I use crossfeed. To me crossfeed has pretty much zero negative effects on the sound. On some recordings with weird spatiality it can be challenging to find good settings, but that's rare. Most recordings are easy to "fix" with crossfeed. That's how I see it.

I have used A LOT of time to understand crossfeed/spatiality as well as I can. They say you get good at the things you practise a lot. Apparently that doesn't apply to me. At least I can't impress other people with my knowledge. On the contrary, some people seem to be impressed by my ignorance.

I make computer music. I use "free" software, GarageBand and Audacity. I write Nyquist plugins for Audacity for effects. I have written all kinds of plugins, including plugins which do spatial effects. My friend says I have good spatial effects. I'm also learning to mix better. I watch a lot of YouTube videos about that. I had problems with music theory, but recently I have made a breakthrough in it. I finally understand chord progressions! It was Jake Lizzio of the "Signals Music Studio" YouTube channel who made me understand. I also watch "Hack Music Theory", which is superb. I had practically zero music education in childhood, but I am finally learning!

Anyway, since I have to create spatiality in the tracks of my own music, mostly using my own plugins, I need to understand spatiality, and I think I do. That's why I am so stumped when called ignorant here.

So that's who I am. A guy trying to fit in somewhere...


----------



## bigshot

Find common ground. Don't fixate on the disagreements and blow them out of proportion. You have to know when to let go and not keep hammering away at little stuff. You're not alone in that by any means. This forum is full of people who think the topic of this forum is arguing, not sound science.


----------



## Steve999 (Feb 15, 2019)

71 dB said:


> People here try to prove normal crossfeed is a really ****ty thing, but somehow it improves the sound A LOT for me. Sorry, but that's how it is. My ears are like that so I use crossfeed. To me crossfeed has pretty much zero negative effects on the sound. On some recordings with weird spatiality it can be challenging to find good settings, but that's rare. Most recordings are easy to "fix" with crossfeed. That's how I see it.
> 
> I have used A LOT of time to understand crossfeed/spatiality as well as I can. They say you get good at the things you practise a lot. Apparently that doesn't apply to me. At least I can't impress other people with my knowledge. On the contrary, some people seem to be impressed by my ignorance.
> 
> ...



Put yourself among real people in real life who appreciate you and want to help you and do things you like to do, like develop your plugins and make music. Listen to your friends. That's why they're your friends. You think we all are social butterflies and that's why we hang around on an internet forum that is a tiny little corner of this huge world?  We're all a little loopy. @bfreedma seems a little too sane at times and of course that's kind of weird. But otherwise we're all, um, a little unusual.


----------



## 71 dB

It's amazing how controversial audio topics are. Vinyl vs. CD, Hi-res vs 16/44.1, crossfeed, cables,… No matter what you think, there will be people who disagree strongly.


----------



## bfreedma

Somehow, I find that one George Carlin quote sums up the human experience and how we perceive it fairly well:

“Have you ever noticed that anybody driving slower than you is an idiot, and anyone going faster than you is a maniac?”

I’ll leave quoting lesser philosophers to @Steve999


----------



## 71 dB

Steve999 said:


> Put yourself among real people in real life who appreciate you and want to help you and do things you like to do, like develop your plugins and make music. Listen to your friends. That's why they're your friends. You think we all are social butterflies and that's why we hang around on an internet forum that is a tiny little corner of this huge world?  We're all a little loopy. @bfreedma seems a little too sane at times and of course that's kind of weird. But otherwise we're all, um, a little unusual.



Yeah, that is right. It took me hours to write the responses to gregorio. I almost missed sauna! Finns don't want to miss sauna! When you get a massive response in the style of "Wrong, you don't know anything, you're an ignoramus, art debunks your facts …" on a topic you think you know a lot about, it kind of gets under your skin when you realize you wasted hours you could have spent doing many other things instead. In the future I'd better just ignore gregorio. I hope he doesn't mind, since he calls me an ignoramus anyway.


----------



## ironmine

gregorio said:


> 1. What extra precision? The precision is exactly the same. As soon as the audio enters the DAW environment it is converted to a (say) a 32bit float, at this stage it's precision is completely unchanged. However, as soon as the first mathematical process is applied, the result is at 32bit float precision and it remains at 32bit float precision throughout all the following mathematical processes until the plugin has finished it's processing at which point the final 32bit float result is output. This is exactly the same process and precision regardless of whether your original audio files were 16 or 24 bit.
> 
> 2. No it won't. There are no 32bit DAWs/environments, they're all 32bit (or 64bit) float. A 32bit float converted from a 16bit file will have 7 bits stuffed with zeros, which will be replaced with relevant values as soon as it undergoes the first mathematical process (computer instruction). The other 9bits are a sign bit and 8 bits for the exponent.
> 2a. No it won't. There are no 32bit DAWs/environments, they're all 32bit (or 64bit) float. A 32bit float converted from a 24bit file will not stuff any of the 23 mantissa (fraction) bits with zeros, it will use the data stored in the 24bit file (the last 8 bits or so of which are most likely to be random zeros and ones). However, those mantissa bits will be replaced with other relevant values as soon as it undergoes the first mathematical process (instruction). The other 9bits are a sign bit and 8 bits for the exponent.
> ...



I think one of us is going nuts.  Are you really trying to drive into my mind the crazy idea that it does not matter whether I feed my Foobar (with a 32bit float VST chainer) a 24bit audio file or a 16bit audio file?

So how low can we go following this line of "logic"? All the way down to 8bit, 6bit, 2bit files, before you quit saying "it does not matter as long as your processing happens in a 32bit float environment"?

Why don't you try yourself to do a simple test with the bit length reduction: truncate a 24 bit file to a 2 bit file and listen to them both? Hear the difference?  Please tell me you do.
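
The suggested experiment is easy to sketch numerically (not as a listening test). The following minimal illustration truncates a test tone to various bit depths and prints the peak truncation error, which is enormous at 2 bits and tiny at 16; `truncate_to_bits` is a hypothetical helper, not any particular tool's function.

```python
import numpy as np

def truncate_to_bits(x: np.ndarray, bits: int) -> np.ndarray:
    """Truncate samples in [-1, 1) to the given bit depth (no dither)."""
    step = 2.0 ** -(bits - 1)          # quantization step for signed samples
    return np.floor(x / step) * step

t = np.linspace(0, 1, 44100, endpoint=False)
sine = 0.8 * np.sin(2 * np.pi * 440 * t)   # a 440 Hz test tone

for bits in (24, 16, 8, 2):
    err = sine - truncate_to_bits(sine, bits)
    print(bits, "bit peak error:", err.max())
```

At 2 bits the error approaches half the signal range (gross, clearly audible distortion); at 16 bits it is around 3e-5 of full scale.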


----------



## ironmine

castleofargh said:


> zero padding will allow the processing to apply with high precision.



You are trying to prove a very strange point of view.

You yourself admit that extra bits, even in the form of zero padding, will allow the processing to be applied with high precision. Now imagine that these extra bits are not zero-padded but are filled with actual ("legitimate") audio data. How can that not further improve precision?

Imagine you are a seller with a calculator that is precise to 10 figures after the decimal point. You want to sell somebody 20 thousand fish cans at $2.24 each, so you expect to get $44,800 (20,000 × 2.24) for all of them.

But the buyer tells you: why don't we use fewer figures after the decimal point in your price? Why don't we truncate it from $2.24 to $2.2?

You object, because in that case you will get only $44,000 (20,000 × 2.2), which is $800 less than you expected, but the buyer says (like gregorio): "No, no, the result will still be as precise as before; this truncation will not hurt you financially because, you see, we'll still be using a calculator whose precision is 10 figures after the decimal point. It does not matter how precise the numbers we feed into the calculator are. The precision of the calculation is still the same."

As you stand there bewildered and perplexed, your buyer takes pity on you and says: "OK, I really like you, so let's compromise. Since you like to use 2 figures after the decimal point in your price, let's truncate it from $2.24 to $2.2 and then pad it with a zero, so now your price is not $2.2 but $2.20. Are you happy now, my friend?"


----------



## 71 dB

ironmine said:


> I think one of us is going nuts.  Are you really trying to drive into my mind the crazy idea that it does not matter if I feed my Foobar (with a 32bit float VST-chainer) a 24bit audio file or 16bit audio file?
> 
> So how low we can go following this line of "logic"? All the way down to 8bit, 6 bit, 2 bit files - before you quit saying "it does not matter as long as your processing happens in 32bit float environment"?
> 
> Why don't you try yourself to do a simple test with the bit length reduction: truncate a 24 bit file to a 2 bit file and listen to them both? Hear the difference?  Please tell me you do.



This happens to be an issue where I agree with gregorio. It really doesn't matter whether you feed 16 or 24 bit audio to your Foobar, because 16 bit already provides all the fidelity we need. Of the 8 "extra" bits in 24 bit audio, most are noise anyway.

How low can we go? I'd say to about 13 bits. Below that, the noise floor starts to be a _potential_ problem, so 16 bits is on the safe side.


----------



## Phronesis

bfreedma said:


> Somehow, I find that one George Carlin quote sums up the human experience and how we perceive it fairly well:
> 
> *“Have you ever noticed that anybody driving slower than you is an idiot, and anyone going faster than you is a maniac?”*
> 
> I’ll leave quoting lesser philosophers to @Steve999



SO true for me!


----------



## ironmine

71 dB said:


> This happens to be an issue where I agree with gregorio. It really doesn't matter whether you feed 16 or 24 bit audio to your Foobar. That's because 16 bit already provides all the fidelity we need. Of the 8 "extra" bits in 24 bit audio most are noise anyway.
> 
> How low can we go? I'd say to 13 bits. Less than that mean the noise floor starts to be a _potential_ problem. So, 16 bits is on the safe side.



16 bits is OK as the final output format for listening, without further manipulation.

But since I manipulate the sound with DSP (VST plugins), I need 24 bits, as they provide higher precision of calculation.


----------



## 71 dB

ironmine said:


> You are trying to prove a very strange point of view.
> 
> You yourself admit that extra bits even in the form of zero padding will allow the processing to apply with high precision. Now imagine that these extra bits are not "zero padded" but are filled with actual ("legitimate") audio data.   How can it not further improve precision?
> 
> ...



If you truncate like that you create distortion, but digital audio uses dither. In this example, 60 % of the cans (12,000) are sold at $2.2 and 40 % (8,000) at $2.3. You still get $44,800 using truncated prices, because you dithered the prices! All you lost was equal prices: your prices fluctuate, making them noisy. The same happens in audio. You use dither to reduce word length; you get the same fidelity, but you lose dynamic range. 4 bit dithered audio has the exact same fidelity as the 24 bit audio it was created from, but it's ruined by MASSIVE dither noise (−23 dBFS, I believe, if TPDF dither is used), making it totally unsuitable for music reproduction. Every bit drops the noise floor by 6 dB, and around 13 bits we have enough (~80 dB) dynamic range.
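
The 60/40 split in the can example above is quick to verify: the weighted average of the truncated prices recovers the exact $2.24, and hence the full $44,800.

```python
# Verify the dithered-price arithmetic: 60 % of 20,000 cans at $2.2 and
# 40 % at $2.3 recovers the $2.24 average price and the full $44,800.
cans = 20_000
total = 12_000 * 2.2 + 8_000 * 2.3
average = total / cans
print(round(total, 2), round(average, 2))   # 44800.0 2.24
```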


----------



## 71 dB

ironmine said:


> 16 bits are OK as the final output format for listening, without further manipulations.
> 
> But as I manipulate the sound with DSP (VST plugins), I need 24 bits as they provide higher precision of calculation.



The very first calculation is likely to turn your zero-padded 16 bit audio into non-zero-padded 32 bit floating point audio. Your 24 bit audio can have all 8 least significant bits zero, although the probability of that is only 0.5^8 ≈ 0.39 %. In a 4 minute stereo track at 24 bit/44.1 kHz you statistically have about 83,000 sample points with all 8 least significant bits zero.
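
The figures above can be reproduced in a couple of lines, assuming (as the post does) that the 8 least significant bits behave like independent coin flips:

```python
p = 0.5 ** 8                      # probability that all 8 LSBs are zero
samples = 4 * 60 * 44100 * 2      # samples in a 4-minute stereo track at 44.1 kHz
print(f"{p:.4%}")                 # 0.3906%
print(round(samples * p))         # 82688, i.e. roughly 83,000 sample points
```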


----------



## ironmine (Feb 15, 2019)

71 dB said:


> The very first calculation is likely to turn your zero padded 16 bit audio into a non-zero padded 32 bit floating point audio. Your 24 bit audio can have all 8 least significant bit zeros, althou the probability for that to happen is only 0.5^8 = 0.39 %. In a 4 minutes long stereo track at 24 bit/44.1 kHz you have statistically about 83000 sample points with 8 least significant bits all zeros.



Yes, indeed: even the simple act of changing the volume turns 16 bit audio into 32 bits. I checked it using the VST plugin called *Bitter*.







The second instance of Bitter shows that there is now (after the volume adjustment) audio data appearing below 16 bits.
(In the image above I used a simple volume adjustment as an example, but it could be any other VST plugin, such as 112dB Redline Monitor or Meier Crossfeed.)

Now imagine I have a second variant of the Buddy Whittington album where the audio is not 44.1/16 but 44.1/24, so the Bitter plugins will show this:






So, even though in both cases the second instance of Bitter shows that after DSP manipulation the audio became 32 bits, in the second image above (where the source audio is 24 bits) the range from 24 to 32 bits holds the result of multiplying real audio data (real data × real data), while in the first image above (where the source audio is only 16 bits) the same range holds the result of multiplying "fake" audio data (zero-padded bits × real data).

I just don't understand how there can be no difference between "000 × real data" and "real data × real data". Don't you want this range of your incoming audio file's bit length to be filled with real audio data instead of zeroes?


----------



## Amberlamps

ironmine said:


> Yes, indeed, even the simple act of changing the volume turns 16 bit audio to 32 bits, I checked it using the VST plugin called *Bitter*.
> 
> 
> 
> ...




You bad, bad boy, you forgot to hide uTorrent.

I shall give you the benefit of the doubt and presume that you are just downloading Linux distros.


----------



## 71 dB

ironmine said:


> Yes, indeed, even the simple act of changing the volume turns 16 bit audio to 32 bits, I checked it using the VST plugin called *Bitter*. The second instance of Bitter shows that there is now (after volume adjustment) audio data appearing below 16 bits.
> (In the image above I used simple volume adjustment as an example, but it can be any other VST plugin such as 112dB Redline Monitor or Meier Crossfeed).
> 
> Now, imagine I have the 2nd variant of Buddy Whittington album where the audio is not 44.1/16, but 44.1/24, so the Bitter plugins will show this:
> ...



Digital audio is not as intuitive as people think. A volume change of −0.4 dB means multiplying the signal by 10^(−0.4/20) = 10^(−0.02) = 0.954992586. Let's multiply 1.00000000 (one with 8 padded zero decimals) by that number: unsurprisingly, the result is 0.95499259. The last decimal had to be truncated (we only have 8 decimals in this example) and the result was rounded up. The only way to change volume while keeping the lowest bits zero is bit shifting: for example, multiplying a binary number by 2 shifts the bits toward the most significant bit, and the lowest bit becomes zero. The same happens with ordinary base-10 numbers when you multiply them by 10, 100, 1000 etc.: 456 × 100 = 45600.
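
The decimal example above has a direct binary analogue: a single gain multiplication fills the zero-padded low bits of a 16-bit sample carried in a 24-bit container. A sketch, with an arbitrary sample value and an illustrative integer rounding:

```python
gain = 10 ** (-0.4 / 20)                # ≈ 0.954992586 for a -0.4 dB change
sample_16 = 1000                        # an arbitrary 16-bit sample value
padded_24 = sample_16 << 8              # zero-padded into a 24-bit container
scaled = round(padded_24 * gain)        # a single volume multiplication
print(format(padded_24 & 0xFF, "08b"))  # 00000000: low 8 bits start as zeros
print(format(scaled & 0xFF, "08b"))     # no longer all zeros after one multiply
```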

If the 16 bit version comes from a 24 bit version using dither, it has the EXACT same fidelity as the original 24 bit version. The difference is that it possibly has a higher noise floor. It's possible the original 24 bit version already has a lot of noise and the 16 bit version has about as much. People need to understand that 24 bit has practically nothing to offer to music consumers. Bigger file sizes are what it gives, and sometimes better masters are used (in which case you can make your own dithered 16 bit versions and save disc space), but that's it. You need enough bits to give 80 dB of dynamic range; that's about 13 bits, which is why going beyond 16 bits is total overkill.

Dither is pretty amazing. You can have a 4 bit version of your 24 bit file with the EXACT same fidelity thanks to dither, but it costs you a LOT of dynamic range: your original 24 bit fidelity is ruined by LOUD dither noise, but under that noise the fidelity has survived 100 %. Sounds decay into the noise beautifully, without any distortion, and you can hear them until the dither noise masks them completely. That's why we don't use 4 bit audio: it has all the fidelity (if dithered from a higher bit version), but it doesn't give enough dynamic range. Not even close!
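
The claim that a signal survives below the dither noise can be demonstrated numerically. In this sketch, a tone whose amplitude is below half a 4-bit quantization step vanishes entirely when rounded without dither, but remains statistically present (correlated with the original) once TPDF dither is added before rounding. The step size and tone level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
step = 2.0 ** -3                                   # 4-bit step for samples in [-1, 1)
t = np.arange(n)
sine = 0.3 * step * np.sin(2 * np.pi * 440 * t / 44100)       # tone below 1 LSB

plain = np.round(sine / step) * step               # no dither: everything rounds to 0
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)  # triangular-PDF dither
dithered = np.round(sine / step + tpdf) * step     # dithered 4-bit quantization

print(np.abs(plain).max())                         # 0.0: the undithered tone is gone
print(np.corrcoef(dithered, sine)[0, 1])           # clearly positive: tone survives
```

The undithered quantizer outputs pure silence, while the dithered output carries the tone intact under a loud noise floor, which is the trade described above.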

Of the 24 bits, a lot are noise, because producing music with that much dynamic range is quite impossible: you can't record anything with 144 dB of dynamic range. Besides, 16 bit already gives enough dynamic range.


----------



## Sonic Defender

I'm sorry, but the argument as presented about bit truncation (e.g. from 24 to 16 to 8 to 4 …) is oversimplified. There are two points, and only two points, that matter: the upper and lower limits of our hearing's sensitivity. Beyond 16 bits has been demonstrated to be inaudible and is therefore of no value for audible listening goals. There would also be a point where the full range of frequencies our brain can meaningfully decode could not be represented properly. Both of those points represent the thresholds of what we can hear. I do not understand why people continue to insist they can hear what science has demonstrated they cannot. Why does it matter? I will absolutely guarantee that nobody can hear the difference between 24 bit and 16 bit files in multiple-trial, level-matched blind listening tests. If the two files were made from the same master and are exactly the same volume, you will never be able to hear a difference; it is not possible.


----------



## castleofargh

ironmine said:


> You are trying to prove a very strange point of view.
> 
> You yourself admit that extra bits even in the form of zero padding will allow the processing to apply with high precision. Now imagine that these extra bits are not "zero padded" but are filled with actual ("legitimate") audio data.   How can it not further improve precision?
> 
> ...


Here is my rationale: if truncation or whatever noise causes the 16th bit to switch from the value it should have had, then the change to that sample is of bigger amplitude than anything you could do with all 8 extra bits below it added together. And I don't notice such a change at 16 bit in typical music listening; I need to reach about 12 or 13 bits to notice something, depending on the music and listening conditions. So, based on that pretty basic observation, I conclude that I don't care about what's below 16 bit for music playback.
You assume that the data below 16 bit is relevant in some way, but in what way?
Audibility? I doubt it, and I'd like examples if it is so.
Fidelity? What actual level of dynamics do you expect from an album? Between ambient noise, self-noise from mics, recording devices, etc., do you think removing the bits below 16 is worse than the changes above 16 bit caused by the various noises and distortions? I don't think it is; in fact, I think the noise above 16 bit is much worse. I think my playback gear failing to provide 16 bits of resolution at my ears will affect my experience more than discarding data (whatever it really was originally) below 16 bit. You assume that the extra bits are valuable data, but are they?
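
The magnitude argument above is easy to verify arithmetically: a flip of the 16th bit (2^-16 of full scale) outweighs bits 17 through 24 even with all of them switched on together.

```python
# A 16th-bit flip changes a sample by 2^-16 of full scale, which is more
# than bits 17..24 can contribute combined (their sum is 2^-16 minus 2^-24).
bit16 = 2.0 ** -16
below = sum(2.0 ** -k for k in range(17, 25))   # bits 17..24 all set
print(bit16 > below)                            # True
```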

I said that I agreed with increased bit depth for processing under 2 conditions:
1/ That there will be significant gain variations; then, for peace of mind and ease of use, increasing the dynamic headroom with extra bits is nice. That doesn't require hi-res tracks; just being set to work at a higher bit depth does the trick.
2/ That there will be a lot of successive processing (like, a lot!!!), because in that context the tiny irrelevant changes might pile up into something relevant at some point, depending on the type of processes applied and how destructive some of them are. So it's just good practice to start with the biggest resolution and reduce only at the end (as I'm guessing all recording engineers tend to do). In that respect, what you do is fine; feel free to keep it up. But given what you actually apply to your signal on some already-mastered tracks, I don't believe it results in audible changes. And that's why I asked if you had some VSTs showing audible changes because the file was 24 bit instead of 16. If that happens, it would be interesting to check whether simply turning the signal to 32 or 64 bit before sending it to the VST would be enough to remove the audible difference, or whether there would still be an audible difference compared to using the 24 bit track from the start, as you suggest will happen. I'm not as confident as you are on that, and having evidence is important, because procuring the hi-res files of my albums costs money that I'd rather spend on new 16 bit albums (or Pringles ^_^).



About "Bitter" showing 32 bit: that only tells us that this volume control works at 32 bit inside this VST host. I don't get what you're trying to say with those screenshots. The magnitude of change on each sample is determined by the gain change, not by the file's resolution; volume change is the worst example you could have used. BTW, foobar handles volume as 32 bit float, but of course what it sends to the VST host is whatever bit depth was determined beforehand in foobar, or in Windows if you're using its mixer.


----------



## bigshot (Feb 16, 2019)

ironmine said:


> I think one of us is going nuts.  Are you really trying to drive into my mind the crazy idea that it does not matter if I feed my Foobar (with a 32bit float VST-chainer) a 24bit audio file or 16bit audio file?



It doesn’t appear to make any difference to your VST plugin, and since both 16 bit and 24 bit are audibly transparent for the purposes of playback of recorded music, it doesn’t make any difference there either. I’d say unless you are using a VST plugin to radically compress your music and pull up very low level signal near the noise floor, it doesn’t matter at all.

I think it’s amusing how convoluted responses to this question have become!


----------



## ironmine (Feb 17, 2019)

24-bit audio not only gives a wider dynamic range but also more quantization levels (16,777,216 levels for 24bit audio vs 65,536 for 16bit). You completely ignore this aspect; you only talk about the dynamic range and noise floor, etc.

The more bits are used to represent a sample, the better the approximation of the audio signal is. The better the signal is approximated before being fed into a VST chain, the more precise the result of the multiple computations it has to go through will be.

What I do in my computer with VST plugins (DRC or crossfeed) is a continuation of what sound studios do with the sound. I customize the sound to suit my needs. But studios do not work with 16-bit audio material; they use higher bit depths. And when they do DSP processing not in one go but in several stages, they do not store the intermediate results in 16bit format either. Yet this is exactly what you advise me to do. It's like advising a mastering engineer, who has worked on a track in the evening and intends to continue the next morning, to save the intermediate result of his work in 16bit format at the end of the first day.

Another example: if one sound engineer needs to partially process an audio file in one city (e.g., equalize it with VST plugin #1) and then e-mail it to another sound engineer for final processing (e.g., compression with VST plugin #2), would you advise the first engineer to e-mail the file in 16bit format? If not, then why do you tell me it does not matter whether my file is 16bit or 24bit? In this example, I am the second engineer!

Whenever I have two versions of the same album, e.g. 44/16 and 44/24, all other conditions being equal (same remaster, same dynamic range, etc.), the 44/16 will be deleted and the 44/24 kept in my collection. If I listened to the music straight, with no DRC or crossfeed processing needed (if my room were acoustically perfect, or if the recording were already equalized for my room resonances, or already processed with a crossfeed algorithm), I wouldn't care; I would be happy with 44/16. But since I do the last sound customization steps myself with DSP, I prefer 24bits.


----------



## 71 dB

ironmine said:


> 1. 24-bit audio not only gives a wider dynamic range, but also more levels of quantization (16 777 216 levels for 24bit audio vs 65 536 levels for 16bit). You completely ignore this aspect, you only talk about the dynamic range and noise floor, etc.
> 
> 2. The more bits are used to represent a sample, the better approximation of the audio signal is. The better the signal is approximated before being fed into a VST-chain, the more precise the result of multiple computations that it has to go through will be.



1. We don't ignore the number of quantization levels, but do you understand how it actually matters?

2. No. More bits means a lower noise floor if dither is used (and it is). As I have said a couple of times already, you can create a 4 bit version of your 24 bit file which has the exact same fidelity but is ruined by the HUGE dither noise needed to retain that fidelity. That's why we don't dither music to 4 bits; we use enough bits to have a quiet enough noise floor. The limit for this is about 13 bits, so 16 bits is enough and then some, while 24 bits is total overkill.

How can fewer bits be as precise as more bits? Because of sampling theory. What I just wrote is not intuitive and requires understanding digital audio, especially dithering. You say more bits means a better approximation of the audio signal. That is intuitive, and all people who don't understand digital audio (but think they do) fall for it. Dithering is more clever than you think: it randomizes the quantization error and makes it uncorrelated with the signal. That means that instead of having a less precise approximation of the audio alone, we have the combination of the signal at its full precision PLUS uncorrelated noise. So the dithered 4 bit version is EXACTLY the same thing as playing the original 24 bit version and mixing loud dither noise with it. Increasing the bits allows quieter dither noise, and around 13 bits you notice that you can't hear the noise under the music no matter what. Calling the 4 bit dithered version less precise is no different from saying your 24 bit version lost precision because your neighbour started vacuuming and you can hear it under the music. That is how digital audio works in regard to bit depth, explained without math.

How do you express a price of $1.23 without using pennies? You say: of 100 sold, 77 cost $1 each and the other 23 cost $2 each. Then you calculate the average price: ($1 × 77 + $2 × 23)/100 = $1.23, and what do you know, I have told you the exact price to the penny using whole dollars only. I dithered the price for you. How does dither know how to tell things precisely? It uses randomness and statistics. It's noise, so it is the best in the world at that. If I add a random amount between −$0.5 and $0.5 to the price $1.23 one hundred times before rounding to dollars, I get statistically ~77 times $1 and ~23 times $2: if I add anything between −$0.5 and $0.26, I get $0.73…$1.49, which all round to $1, but if I add $0.27…$0.50, I get $2. If the price were a bit higher, say $1.33, there would be fewer $1 roundings and more $2 roundings. So, no matter what the price actually is, this algorithm is a way to tell the price precisely using only whole dollars. Dither does the same in audio: precision isn't lost, but you have other nuisances, such as needing to calculate average prices or having loud noise.
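
The dollar-dithering recipe above can be simulated directly: with uniform dither in [−$0.5, $0.5] and enough trials, the average of the whole-dollar prices converges on $1.23 (the trial count and seed are arbitrary).

```python
import random

random.seed(1)
price = 1.23
trials = 100_000
# Add uniform dither before rounding to whole dollars, as described above.
rounded = [round(price + random.uniform(-0.5, 0.5)) for _ in range(trials)]
print(sum(rounded) / trials)   # hovers within a fraction of a cent of 1.23
```

Every individual "price" is a whole dollar ($1 or $2), yet the average recovers the cents, which is exactly the statistical trick dither plays in audio.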


----------



## 71 dB

ironmine said:


> What I do in my computer with VST plugins (DRC or crossfeed) is the continuation of what sound studios do with the sound. I customize the sound to suit my needs. But studios do not work with 16-bit audio material, they use higher bit rates. And when they do DSP processing not at one go, but at several stages, they do not store the intermediate results of DSP processing in 16bit format audio, either. But this is exactly what you try to advise me. It's like advising a mastering engineer, who has worked on a track in the evening and intends to continue working on it on the morning of the next day, to save the intermediate result of his work in 16bit format at the end of his first day.
> 
> Another example: If one sound engineer needs to partially process (e.g., equalize with VST plugin #1) an audio file in one city and then he needs to e-mail the file to another sound engineer for final processing (e.g., compression with VST plugin #2), would you advise the first engineer to e-mail the file in 16bit format? If not, they why do you tell me it does not matter if my file is 16bit or 24bit? In this example, I am this second engineer!
> 
> Whenever I have two version of the same album, e.g., 44/16 and 44/24, under all other conditions being equal (same remaster, same dynamic range, etc.), 44/16 will be deleted and 44/24 will be kept in my collection. If I listened to music straight, with no DRC or crossfeed processing needed (if my room were acoustically perfect or if the recording were already equalized for my room resonances or if the recording were already processed with crossfeed algorithm), I wouldn't care, I would be happy with 44/16. But since I do the last sound customization steps myself with DSP, I prefer 24bits.



Crossfeed certainly doesn't need extra bits beyond 16. And why do you do DRC? Car audio? DRC reduces dynamic range, so why would you want a lot of dynamic range if you're going to reduce it? It makes no sense. DRC is good for quiet listening or for listening in a noisy environment, and neither case needs 24 bit.


----------



## bigshot (Feb 17, 2019)

What kind of processing are you doing that would require this degree of overkill? Are you shifting time or something? I can see higher sampling rates perhaps helping with that, but I don't see how bit depth in a recording could affect anything except perhaps massive degrees of compression.



ironmine said:


> The more bits are used to represent a sample, the better approximation of the audio signal is. The better the signal is approximated before being fed into a VST-chain, the more precise the result of multiple computations that it has to go through will be.



Upping the bit depth just before processing does just that; the sound file itself doesn't have to be native to that bit depth. Let's make sure you're not just talking about inaudible sound purely in theory... Do a test: run a 16 bit file through your processing, then run an identical 24 bit file through. Do a controlled blind test and see if any of this translates into something audible. I'll bet you a box of Girl Scout cookies that it doesn't.


----------



## gregorio

71 dB said:


> [1] Gregorio said: "71 dB doesn't know anything ad nauseum…"
> [2] Why educate yourself about anything if you never get any recognition and respect for it?
> [2a] Is it because English isn't my first language? Maybe my language to too simple to give an impression of a smart person?
> [2b] My thinking happens on higher level than you think.
> ...



1. That's a statement YOU have just invented and then FALSELY attributed to me. If you really are formally educated, you'd know that's unacceptable; it's effectively a deliberate lie!

2. But that's the whole problem! You clearly have NOT educated yourself about something; you've educated yourself about one aspect of something and then deluded yourself into believing that one aspect is the only thing that matters, because you're ignorant of all the other aspects.
2a. No, it's got nothing to do with your language. It's because a smart person is NOT just someone who has some knowledge of a specific aspect of something; a smart person ALSO has a decent understanding of the limits of that knowledge, and of when and in what context it is applicable. A smart person would therefore either educate themselves about the context and practical application of their knowledge, or not make assertions of fact about that context and application. You, however, do neither: you have some knowledge about one aspect (spatiality) of music recordings but are apparently ignorant of the other aspects, even the very basics of the primary goals and practicalities, and yet you still make all sorts of assertions of fact about this wider context. How could that "give an impression" that you're a smart person?
2b. No it doesn't; how do you even know what I think? For the record, I think that your thinking about spatiality probably does "happen on a higher level" than most consumers', probably even than many student sound engineers' and maybe even a few professional engineers', but you are thinking at a much lower level about most/all of the other aspects, far lower than even a first year student!

3. Yes it is. 3a. Clearly you don't, because:
3b. You're joking? Are you really saying that your perception of the world is the same as a Picasso painting, that you can't therefore tell the difference between reality and a Picasso cubist painting? The only way a Picasso cubist painting would appear "natural" is if you had numerous eyes on, say, 20 m stalks, all looking at an object/s from different angles/axes at the same time. This is why it IS a good analogy: almost all commercial audio (music and sound) from the late 1950s onwards, in order to actually be "natural", would require numerous ears, all on stalks, all listening simultaneously to the musicians/ensemble from different angles/axes in space (and time). The only difference is that YOU PERSONALLY are (hopefully) aware a cubist painting is not "natural", but you don't seem aware that virtually ALL music recordings are not. Admittedly, it's usually less obvious in music recordings than in cubist paintings, but then it comes down to one's analytical listening skills, or lack of them!
3c. Maybe your eyes do; my eyes see the paint and my brain interprets it as a number of juxtaposed, different impressionistic perspectives.
3d. Maybe your ears/hearing system does; my ears/hearing system interprets it as a number of juxtaposed/superimposed, different impressionistic aural perspectives (which is what's actually occurring). So Picasso's cubist paintings ARE the same as this aspect of stereo. Furthermore, how do you know what was intended? What education and/or knowledge do you have about artists' intentions; have you asked them? Clearly the answer is none/no, so why are you making assertions of fact about "intention"? Doesn't that demonstrate that you are not a smart person?


71 dB said:


> [4] Sounds near one ear cause large ILD, but when and how? Low frequencies are created by large objects vibrating. ...
> [5] Spatiality has been abused (ping pong etc.) in music ever since stereophonic recordings were invented.
> [6] There's two options: (1) Have recording with excessive spatiality. Speakers are fine and headphones with proper crossfeed are good too. (2) Create omnistereophonic recordings which work well as they are with speakers and headphones.
> [7] HRTF is crossfeed, just more detailed than "normal" crossfeed.


4. I'm not disputing what ILD is or how/when it exists in nature/reality. I'm disputing your assertion that the art of music production should/must be dictated by the rules of nature/reality, and pointing out that virtually no commercial music production is, and hasn't been for about 60 years! Additionally, your repetitive condescending attitude is insulting and tiresome. Why do you keep repeating the same beginner information about how sound is produced, what ILD is and what occurs naturally, to someone who knew all of it more than two decades before you even heard of it? That's bad enough, but worse still, you already know this, because along with everything else in the last couple of pages, it was ALREADY explained to you over a year ago in this very thread (and incidentally not by me but by a different experienced professional you were similarly insulting). So the only rational conclusion is that you are being deliberately condescending and insulting!!

5. Good, that's how it should be, that's how it's always been with music, for centuries before recording was even invented! The rules of harmony were abused, the instruments themselves were abused, music theory was abused, the whole evolution of music is based on it and the whole history of music production from the late 1950's includes the "abuse" of spatiality! There are countless examples of such abuse but probably the most obvious one in modern times is the electric guitar, the sound of which is based on the serious abuse of guitar amps and cabs, deliberately driving them into an almost constant state of overload distortion, IMD, feedback and various other abuses, NONE of which are "natural"!

6.1. You don't get to define what is "excessive" or dictate what rules music creation must conform to. You wouldn't be the first to try, though: many of the Christian churches in the 16th century, Hitler, Stalin, the philistines in the 1950s (who tried to get rock banned), and others. Additionally, neither playback on speakers in consumer environments nor on headphones is ideal, and neither is crossfeed, which is just a third (different) flawed presentation.
6.2. I don't know what you mean; there's no such term as "omnistereophonic", you've just made it up.

7. As this statement is false, there are only two possible conclusions: either you know far less about "spatiality" than you're making out, or you're deliberately misrepresenting the facts to support your agenda.



castleofargh said:


> @gregorio stop attacking people! attack the ideas all you like, but there is no excuse to be this nasty toward @71 dB. surely you can explain things without being insulting.



Castle, all I've done is thrown 71dB's own original insults ("ignoramuses" and other accusations of ignorance), condescension and nastiness back at him. Is it OK to insult and attack people but not OK to respond in kind? If so, please go ahead and permanently ban me, you'll be doing me a favour. But if not, why are you warning only me? The solution is simple: warn/stop 71dB from attacking/insulting people in the first place!

G


----------



## 71 dB

Talking about Picasso's art on the sound science sub-forum is a bit strange. What I mean by Picasso's art being natural is that for our eyes it does not create anything unnatural. It's paint on canvas! That's just as natural as a painted wall. It's our brain that has these crazy ideas of interpreting the painted patterns as weird perspective, but we have already passed the parts of our visual senses that might create visual distortion from unnatural visuals, such as stereo images where the other eye sees an upside-down image. To my knowledge Picasso's art doesn't do that. Similarly, synth sounds in music are not heard in nature, but if they have natural spatiality, the critical parts of our hearing (where excessive spatiality would create problems) are passed nicely and the brain can interpret the synth sounds any way it wants.

That's all I will comment, as it is useless to discuss these things with gregorio. I'm kind of tired of reading how everything he knows is the most important knowledge in the World while what I know is mostly irrelevant. It is kind of like identity politics. I gladly discuss crossfeed and spatiality with anyone who wants to do it in a constructive and friendly manner. That was always my intention.


----------



## bigshot (Feb 18, 2019)

I have no idea what your point about Picasso means. He was a great artist who was improvisational and very stylized. That is more relatable to the creativity of musicians than technical engineering. High fidelity sound involves fidelity. It is either true to the sound, or it isn't. I don't want an amp that distorts sound in interesting ways. I want one that is true to the signal. I don't think I'm alone in that. I think everyone needs to reel in and focus on reality, not argumentativeness.


----------



## old tech

bigshot said:


> I have no idea what your point about Picasso means. He was a great artist who was improvisational and very stylized. That is more relatable to the creativity of musicians than technical engineering. High fidelity sound involves fidelity. It is either true to the sound, or it isn't. I don't want an amp that distorts sound in interesting ways. I want one that is true to the signal. I don't think I'm alone in that. I think everyone needs to reel in and focus on reality, not argumentativeness.


Yep, and taking the Picasso analogy further, if I wanted a reproduction of a Picasso I would want it in high fidelity (ie true to the picture in this case), not a reproduction with artefacts, distortion or added colour.


----------



## gregorio (Feb 18, 2019)

ironmine said:


> [1] I think one of us is going nuts.
> [2] So how low we can go following this line of "logic"?
> [3] Why don't you try yourself to do a simple test with the bit length reduction: truncate a 24 bit file to a 2 bit file and listen to them both?
> [4] You are trying to prove a very strange point of view.
> ...



1. Yes and fortunately (for me), that would be you! 

2. Until you either understand the logic or at least until you stop making false statements about it.

3. When you load your audio file into your (say) 32bit float DAW/Environment does it truncate your 24bit or 16bit file to 2bit? If not, why do that test, what's it got to do with anything?

4. No I'm not, I'm trying to explain a very simple fact of how all DAW/DSP environments work.

5. No I didn't.

6. OK, let's take that analogy: Let's say you have $100 and you need to split it into 3 equal parts. If your calculator is precise down to 10 decimal places, the answer will be $33.3333333333. What happens if instead of inputting "100" into your calculator, you input 100.00 or even 100.000000, how will the result differ? It won't differ, the answer on your ten decimal place calculator will always be 33.3333333333, REGARDLESS of how many decimal places your input figures have!
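For anyone who wants to check the calculator analogy for themselves, here's a quick illustration of fixed working precision using Python's decimal module (the 12 significant digits here stand in for the "ten decimal place" calculator):

```python
from decimal import Decimal, getcontext

# Fixed working precision, like a 10-decimal-place calculator:
# 2 integer digits + 10 decimals = 12 significant digits.
getcontext().prec = 12

a = Decimal("100") / Decimal("3")
b = Decimal("100.000000") / Decimal("3")  # same value, more input decimals

print(a)       # 33.3333333333
print(a == b)  # True: trailing zeros on the input change nothing
```

The division is always carried out at the context's precision, so padding the input with extra decimal places cannot change the result.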



ironmine said:


> [1] 24-bit audio not only gives a wider dynamic range, but also more levels of quantization (16 777 216 levels for 24bit audio vs 65 536 levels for 16bit). You completely ignore this aspect, you only talk about the dynamic range and noise floor, etc.
> [2] The more bits are used to represent a sample, the better approximation of the audio signal is.
> [3] The better the signal is approximated before being fed into a VST-chain, the more precise the result of multiple computations that it has to go through will be.
> [3a] Now imagine that these extra bits are not "zero padded" but are filled with actual ("legitimate") audio data. How can it not further improve precision?
> ...



1. I completely ignore that "aspect" with good reason, because it is FALSE! 24bit does NOT provide "a wider dynamic range but also more quantisation levels", it provides a wider dynamic range BECAUSE there are more levels of quantisation (and therefore less quantisation error/noise). More quantisation levels ONLY results in more dynamic range, there is no other benefit!

2. No, the audio signal is represented effectively perfectly, REGARDLESS of the bit depth, this is the basic tenet of the Nyquist-Shannon Sampling theorem which was proven mathematically. The difference between different bit depths is the amount of quantisation noise which accompanies our perfectly represented samples and this matters because we can move that quantisation noise (to where it is inaudible), revealing that perfect representation. You may not understand (or believe) this fact but your understanding or belief is not required, it's been proven (by Shannon) and the whole world of digital data/devices depends on it, which is why Shannon is often called the father of information technology and of the digital age. Is SACD, with just 2 quantisation levels (1bit), an 8 million times less accurate "approximation of the audio signal" than 24bit? Why don't you try that test? Clearly you have some misconceptions about how digital audio works, you might find this simple introduction helpful: "24bit vs 16bit, the myth exploded"
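To put some numbers on the "more levels only means more dynamic range" point, here's a small sketch using the standard relation between quantisation levels and dynamic range (roughly 6 dB per bit; the helper name is mine):

```python
import math

def dynamic_range_db(bits: int) -> float:
    # Ratio of full scale to one quantisation step: 2**bits levels.
    # In dB: 20*log10(2**bits), i.e. ~6.02 dB per bit.
    return 20 * math.log10(2 ** bits)

for bits in (1, 16, 24):
    print(f"{bits:>2} bit: {2 ** bits:>10} levels, "
          f"{dynamic_range_db(bits):.1f} dB dynamic range")
```

So 16 bit's 65,536 levels correspond to about 96 dB and 24 bit's 16,777,216 levels to about 144 dB: the extra levels buy nothing except a lower quantisation noise floor.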

3. No, you are confusing two different things, you are talking about the data present in the input files (which can be different) but making assertions about the "calculation precision", which is a different thing and is ALWAYS the same. Obviously, if we input different information/figures into a calculation, we are going to get a different result but that's because the input figures are different, not because the calculation precision has changed. Therefore:
3a. If we were to imagine that, then the calculation precision would NOT change, although the result obviously would if we are feeding the calculation with a different input. However, what practical difference does this actually make, or to put it differently, when does what you are suggesting we imagine (that the additional bits of 24bit contain "legitimate" audio data) actually occur in practice? For simplicity's sake, let's imagine we're talking about a processing environment that is 32bit integer (rather than 32bit float): If we input a 16bit file into that environment what would we get? As the dynamic range is rarely even as much as 60dB what we would get is effectively 10 bits of "legitimate" data (or fewer), plus 6 bits of random/noise data, plus 16 bits of digital silence (padded zeros). What would we get if we input a 24bit file? We would get effectively 10bits of "legitimate" data, plus 6 bits of random noise, plus another 8 bits of random/noise (below the level of the first 6 bits of noise), plus 8 bits of digital silence (padded zeros). The actual difference in practice is therefore an additional 8 bits of random noise which makes no difference at all because it's all way below the random noise occupying the 6 more significant (louder) bits!
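A hypothetical sketch of the zero-padding point above, assuming the same 32bit integer environment as in the example (the helper names and the 10-bit pattern are mine, purely for illustration):

```python
# Widening samples into a 32-bit integer environment by zero-padding:
# the arithmetic of the environment is unchanged, only the input differs.

def pad_16_to_32(sample16: int) -> int:
    return sample16 << 16   # 16 padded zero bits below the data

def pad_24_to_32(sample24: int) -> int:
    return sample24 << 8    # 8 padded zero bits below the data

# The same 10 "legitimate" bits of signal, stored as 16- and 24-bit words:
sig16 = 0b1011001110 << 6   # 10 data bits at the top of a 16-bit word
sig24 = 0b1011001110 << 14  # same 10 data bits at the top of a 24-bit word

# After padding, both occupy exactly the same bits of the 32-bit word:
print(pad_16_to_32(sig16) == pad_24_to_32(sig24))  # True
```

In other words, once inside the wider environment the "legitimate" data lands in the same place either way; the only difference the extra source bits could contribute is the low-level noise described above.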

4. Correct, we almost always use 24bit rather than 16bit, because of the significantly greater recording headroom 24bit provides but this is irrelevant to consumers because consumers are replaying/reproducing audio files rather than recording them and therefore do not need recording headroom!

5. Correct, we do not store the intermediate results of DSP in 16bit audio format but neither do we store the intermediate results as 24bit!!
5a. No, I would NEVER do that and I would NEVER advise another mastering engineer or student to do that, either at 16bit or 24bit! No one should/would ever "save the intermediate result of his work" as any audio format! At the end of the first day, the DAW session itself is saved, which includes all the plugin processors and all their settings, NOT the "intermediate result". The next day that DAW session is reloaded, the original, unchanged 24bit (or 16bit) audio files are reloaded into the DAW (32/64bit) environment, along with all the exact same plugins and settings, resulting in the exact same state and output as the mix at the end of the first day! Even when moving the material from a recording studio to a different recording (or mixing) studio the DAW session will be moved/transferred, NOT an "intermediate result".

There is though one particular scenario where we do typically save an "intermediate result"; when the completed mix is transferred from the recording/mixing studio to a mastering studio and then I would NOT recommend a 16bit transfer but preferably at least a 24bit or better still a 32bit float wav. However, this "intermediate result" really is an "intermediate result", it is NOT a completed result, it will have a greater dynamic range than the completed result and in addition, it has not been level optimised (to remove the headroom), of which there should be a minimum of at least 6dB. The combination of these two factors justifies using 24bit instead of 16bit, although that's mainly for safety as the vast majority of the time it still doesn't make any difference.

Your statement #4 is therefore incorrect, what you do with your computer is NOT a continuation of what the sound studios do, once a finished master is produced there is no continuation, the process is complete. What you are actually doing is therefore quite different, you are adding to an already completed result, not continuing with an intermediate result and the *potential* use of 24bit (greater dynamic range) no longer exists in a completed result.
5b. Huh, I haven't given you any advice on what you should do. If I were to give you advice, it would be pretty much the exact opposite of what you're suggesting I'd advise! If for example you are applying DRC and crossfeed, I would advise that you do NOT save an "intermediate result", either in 16bit or 24bit. I would advise you to place your DRC plugin and crossfeed plugin next to one another within your 32bit (or 64bit) environment and not leave that environment until all your DSP is completed.

G


----------



## 71 dB

bigshot said:


> I have no idea what your point about Picasso means. He was a great artist who was improvisational and very stylized. That is more relatable to the creativity of musicians than technical engineering. High fidelity sound involves fidelity. It is either true to the sound, or it isn't. I don't want an amp that distorts sound in interesting ways. I want one that is true to the signal. I don't think I'm alone in that. I think everyone needs to reel in and focus on reality, not argumentativeness.



Are you responding to my post? I believe it was Mr. G who brought Picasso's art here in order to invalidate my thinking, and I have simply answered why Picasso's art doesn't invalidate my thinking. I totally agree about Picasso being a great artist. My parents had a poster copy of Guernica, which is synonymous with "home" for me. When the poster got bad after 30 years or so, my parents got another one and my father still has it.

Good amps don't distort the signal and are true to the signal, but we encounter the problem of spatiality. The transfer function of a sound reproduction chain is very different with regard to spatiality when we compare speakers and headphones. While amps, speakers and headphones have become better and better, this difference in spatiality has in my opinion become the biggest problem in sound reproduction.


----------



## gregorio

71 dB said:


> [1] What I mean by Picasso's art being natural is that for our eyes it does not create anything unnatural.
> [1a] That's just as natural as a painted wall. It's our brain that has these crazy ideas of interpreting the painted patterns as weird perspective,
> [1b] but we have already passed the parts of our visual senses that might create visual distortion from unnatural visuality such as stereoimages with the other eye seeing up side down image. To my knowledge Picasso's art doesn't do that.
> [2] Similarly in music synth-sounds are not heard in nature, [2a] but if they have natural spatiality, [2b] critical parts of our hearing (where excessive spatiality would create problems) are passed nicely and the brain can interpret the synth sounds anyway it wants.
> ...



1. So is that an indirect answer to my question, that Picasso's cubist paintings seem natural to you, that the world appears to you as a cubist painting and you can't tell the difference?
1a. So you can't tell the difference between a painted wall and a cubist painting? Picasso didn't make "painted patterns" he made paintings which juxtaposed and superimposed different simultaneous perspectives, just as we juxtapose and superimpose different aural perspectives.
1b. What visual or aural distortion from unnatural visual or aural perspectives? I don't get any distortion, do you have something wrong with your eyes or ears?

2. True.
2a. But synth sounds do not have any spatiality and therefore by definition do not have "natural" spatiality.
2b. How can no spatiality be either "excessive" or "natural" spatiality? Again, it's just nonsensical circular arguments.

3. Yes apparently it is. Maybe you'd feel your discussion wasn't "useless" in a forum where no one knows anything about spatiality and will happily accept your personal preferences as objective fact without question?
3a. Then maybe you should try educating yourself, learn something about music production and gain some understanding of the context and relevance of what you currently know or, are you saying you are happy with your ignorance, are therefore tired of what I'm saying because you do not want to know and do not want anyone else to know?

4. You have proven that statement to be false! Even without knowing who they are, you have repeatedly called everyone who might not share your personal preferences "idiots", "ignoramuses" and "ignorant", out of the blue and not in response to any insult.
4a. If that really were your intention, then why call people ignoramuses?

G


----------



## 71 dB

gregorio said:


> 1. So is that an indirect answer to my question, that Picasso's cubist paintings seem natural to you, that the world appears to you as a cubist painting and you can't tell the difference?
> 1a. So you can't tell the difference between a painted wall and a cubist painting? Picasso didn't make "painted patterns" he made paintings which juxtaposed and superimposed different simultaneous perspectives, just as we juxtapose and superimpose different aural perspectives.
> 1b. What visual or aural distortion from unnatural visual or aural perspectives? I don't get any distortion, do you have something wrong with your eyes or ears?
> 
> ...


Damn, I shouldn't argue with you any more about these things, but I'll try once more.

1. My eyes don't _interpret_ Picasso's art. My eyes don't care what kind of perspectives Picasso used in his paintings. My eyes see 2D patterns on a canvas and find NOTHING unnatural about 2D patterns on a canvas. Then the visual information from my eyes goes to the part of my brain that handles information from the eyes, I believe that's the back part of my brain. My brain _interprets_ the patterns my eyes saw. My brain recognizes the style: hey, must be a Picasso! My brain knows Picasso used weird perspectives. My brain knows how to interpret Picasso. My brain also knows paintings are 2D paint patterns on canvas. My brain knows all of this. I hope your brain knows it too and isn't too shocked about Picasso. My brain interprets the visual information on different levels of abstraction. Picasso's perspective is a higher-level abstraction. Stupid low-IQ people may not be able to interpret things on such a high level and these people can't understand Picasso's art. Unnaturalness happens on a lower level of abstraction. As I say, take for example a stereo 3D picture where the image for one eye is upside down. If such visual information hits my brain, it goes "holy cow, the picture is upside down in the other eye!" That's unnatural.

1a. Yes, I'm that stupid. No, actually I can. Picasso's art is 2D, mono. He did not do 3D stereo-picture art to my knowledge (that would be interesting!)

1b. I "suffer" from excessive stereo separation. It makes the sound worse, is irritating and causes fatigue. You apparently don't.

2a. Of course synth sounds contain spatiality if they are processed to have it. At least I create spatiality in them with my plugins. Without spatial information they sound pretty crappy.
2b. Your idea of synth sounds is weird! It's not 1950, when the pioneers of electronic music generated mono sine waves.

3. maybe
3a. What is it I should learn exactly? I am learning things all the time. What knowledge do you want me to have? I can't learn EVERYTHING in the world, because there's too much for anyone to learn. I watch Rick Beato, You Suck at Producing (because it's funny), EDM tips, Hack Music Theory, Adam Neely, Signals Music Studio,… I have only 24 hours a day! I wonder what Rick Beato would say about your ideas. I think he would be on my side. Dr Luke and Max Martin know bass goes to the center. How many number 1 hits do you have? You don't know EVERYTHING either, you are not a god. I'm sure on many issues you are very ignorant, just as I am and everyone else is. That's being human.

4. I haven't recently.
4a. I called _myself_ ignorant before discovering crossfeed and becoming aware of spatial distortion.


----------



## bigshot

old tech said:


> Yep, and taking the Picasso analogy further, if I wanted a reproduction of a Picasso I would want it in high fidelity (ie true to the picture in this case), not a reproduction with artefacts, distortion or added colour.



The reason art doesn’t apply at all here is because even a faithful reproduction of a Picasso painting wouldn’t be worth a ten thousanth of an original. Copies have no value because it’s art. Not a good analogy.



71 dB said:


> Good amps don't distort signal and are true to the signal, but we encounter the problem of spatiality. The transfer function of a sound reproduction chain is very different in regards of spatiality when we compare speakers and headphones. While amps, speakers and headphones have become better and better, this difference in spatiality has in my opinion become the biggest problem in sound reproduction.



Do a carefully controlled double blind test on amps and get back to me on that.


----------



## 71 dB

bigshot said:


> Do a carefully controlled double blind test on amps and get back to me on that.



Why? How does that benefit me?


----------



## Glmoneydawg

71 dB said:


> Why? How does that benefit me?


I am a "soundstage and imaging" fan....probably more important to me than frequency response and absolute volume...but I have found these qualities to be transducer dependent and, in the case of speakers, room dependent. I have owned some very expensive tube and solid state amps over the years. None of them have the effect that appropriate speakers and room setup can have.


----------



## 71 dB

Glmoneydawg said:


> I am a "soundstage and imaging" fan....probably more important to me than frequency response and absolute volume...but I have found these qualities to be transducer dependent and, in the case of speakers, room dependent. I have owned some very expensive tube and solid state amps over the years. None of them have the effect that appropriate speakers and room setup can have.



Yep. Speaker/listener placement and acoustics have a huge impact, and with headphones it's spatiality. With speakers you deal with the issue by carefully placing your speakers and yourself as the listener and also by making acoustic improvements. With headphones proper crossfeed is the cure. Of course having good speakers and headphones is important too.


----------



## gregorio

71 dB said:


> Damn, I shoudn't argue with you more about these things, but I try once more.
> [1] My eyes don't_ interpret_ Picasso's art.
> [1.1] My eyes don't care what kind of perspectives Picasso used in his paintings.
> [1.2] My eyes see 2D patterns on a canvas and find NOTHING unnatural about 2D patterns on a canvas.
> ...



Agreed, you "_shouldn't argue with me_ [or anyone else] _about these things_" if you don't really understand them! So why try to do so "once more"?
1. And your ears don't interpret a music production.
1.1. And your ears don't care what kind of perspectives are used in music production.
1.2. And again, clearly a (2D) painting which provides the illusion of depth/distance (or in Picasso's case multiple different/conflicting perspectives), might not seem unnatural to your eyes but is to your brain.
1.3. Yes, all of the above occurs in the brain. Your brain interprets the aural patterns your ears heard.
1.4. And here is where we seem to run into a problem. My brain recognises the style of music production, my brain knows that virtually all music productions use weird perspectives, my brain knows how to interpret music productions. My brain also knows that 2 channel stereo is just two point sources with which we can (if we wish and to a certain extent) create the illusion of distance/depth/perspective, just as artists can use a 2D canvas to create a limited illusion of distance/depth/perspective. AND we can (and virtually always do) layer different, simultaneous, contradictory aural perspectives not dissimilarly to Picasso's visual perspectives, except that in many cases it isn't the primary goal to make these different/contradictory aural perspectives the most obvious artistic feature (and in some cases the goal is to hide/disguise the fact that it's actually multiple different/contradictory aural perspectives).
1.5. That's the problem, YOUR brain apparently knows all of this for your eyes with 2D paintings but apparently doesn't know any of this for your ears and 2D/2 channel music production!!
1.6. If your brain knows all this for paintings and your perception of vision, how come your brain doesn't know ANY of this for music production and your perception of hearing? That's why the analogy with cubist paintings works and presumably why you don't "get" the analogy!!
1.7. Then by YOUR OWN assessment, you are a stupid low IQ person!! Clearly you are NOT interpreting music production on a higher level, you are trying to interpret it at the lower level of "natural"/"real" and then labelling it as "excessive", "illegal" or just "wrong" when it doesn't meet that lower level interpretation. You seem able to appreciate that Picasso's paintings do NOT conform to the rules of "natural" perspective because it's art, but you can't seem to grasp the simple fact that the same is true of music production!


71 dB said:


> 1a. Picasso's art is 2D, mono. He did not do 3D stereopicture art to my knowledge (that would be interesting!)
> 1b. I "suffer" from excessive stereo separation. Makes the sound worse, irritating an causes fatique. You don't apparently.
> 2a. Of course synth sounds contain spatiality if they are processed to have it. At least I create spatiality to them with my plugins. Without spatial information they sound pretty crappy.
> 2b. Your idea of synth sounds is weird! [2c] It's not 1950 when pioneers of electronic music generated mono sinewaves.
> ...


1a. What's that got to do with it, we don't create 3D art either! 2 point source stereo is not 3D but as with some paintings we can create the illusion of perspective using the 2 dimensions of two speakers.
1b. Correct, I do not. I do not suffer from your lower level interpretation or personal preferences: a wide stereo image does not make the sound worse to me (sometimes it makes it better, other times it makes little intrinsic artistic difference), it is not irritating to me if music production does not follow the rules of "natural" spatiality (in fact usually the opposite), and I tend to get fatigue from listening to music too loudly, concentrating too hard for too long, or from an overabundance of mid/harsh frequency content, not from stereo spatiality effects. So who is the "ignoramus" here: me, for not suffering from your lower level interpretation and personal preferences, or you, for not realising it's about your personal interpretation and preferences and deluding yourself into believing it's objective fact?

2. Huh, that's nonsensical! How do you know that synth sounds "sound pretty crappy" without spatial information if synth sounds have spatial information? It's because synth sounds do NOT have spatial information that you know what they sound like without it! Of course, one can (and usually does) add spatial information to a synth sound, with say a reverb plugin, but then you obviously have a synth sound plus reverb, not just a synth sound! 
2b. No it's not, it's just the fundamental basics even a first year student would know.
2c. That's false! Have you actually heard the works of Stockhausen or say the film score to "Forbidden Planet" in the 1950's? Do you really only hear mono sine waves? Your attempt at being insulting/condescending has again backfired due to your own ignorance!

3a. If you are going to make assertions about music production then I would expect you to have at least a basic understanding of what music production is, what its goals are and how it's done. I do not expect you to learn everything in the world about it, no one knows everything in the world about it, just learn some fundamental basics (starting with the fact that it's an art, for example!). And one doesn't have to be a god to understand the basic principles of what music production is; in fact, one doesn't have to be a god to understand significantly more about music production than just the basic principles! How, if you're so well educated and smart, do you not know/realise this?

4. That's clearly untrue!! You started over a year ago being condescending and calling everyone who didn't share your preferences "idiots" and "ignorant". Despite going through all this back then and you eventually apologising and saying you wouldn't do it again, here we are over a year later and just a few days ago you start all over again with exactly the same ignoramus/ignorant BS!!
4a. But now you've learned about crossfeed you are no longer ignorant, you are god and can call everyone who doesn't share your preference "ignoramuses" and tell all us professionals what rules our art must conform to. Deluded or what?

G


----------



## 71 dB

Sorry about "ignoramus". I called myself one because of discovering crossfeed. So, so sorry. I regret what I did a year ago. I am weak, with low self-esteem. I exploded and did wrong. Do you judge me forever for it? This is difficult for me, so difficult. So easily triggered. Damn!

I have not seen Forbidden Planet. They probably used whatever was available, but not FM synthesis or time stretching, and it's mono.

Looking at art is different from headphones. 3D VR is the same.

So tired I can't even write.


----------



## 71 dB

Headphones create more difference between the ears than looking at art does. That's the problem. Looking at art is like listening to speakers. Headphones are like VR glasses, where you can create totally different pictures for each eye if you want.


----------



## 71 dB

Would like to ignore gregorio, but the guy knows stuff. What to do? Where do I belong? What is my talent that the World recognizes? I am so lost.


----------



## 71 dB

I don't understand excessive spatiality as an art. Speakers don't give it, so I understand recordings with large stereo separation, but to say everything is okay with headphones is weird. I really don't understand, no matter how hard gregorio says it. He doesn't even educate me. Instead of teaching me he just keeps saying how I don't know anything. Does he have Youtube videos where he explains these things? Or is what I say so hurtful he needs to fight back in this wrong way? What if he knows I know better and he is shocked? I don't know what it is. I'm not telling you not to use excessive spatiality. I am saying that if you do, spatially aware people may want to use crossfeed, and the best way to not have people tinkering with your production is to not use excessive spatiality. That's what I suggest, and the thanks I get is being humiliated. You are welcome.


----------



## 71 dB

Unfortunately I have heard very little Stockhausen. Not all sine waves, of course, no need to split hairs.


----------



## bigshot

71 dB said:


> Why? How does that benefit me?



You would actually know instead of stating a subjective impression as fact.



71 dB said:


> Unfortunately I have heard very little Stockhausen.



Unfortunately, I've heard too much.


----------



## 71 dB

bigshot said:


> You would actually know instead of stating a subjective impression as fact.



What subjective impressions are you talking about?


----------



## bigshot

If you haven't done a controlled test, then you're working off a subjective impression. I've done controlled tests of every amp I've ever bought and I haven't found one that sounds different yet. I've been actively searching for one, and no one has been able to help me locate one yet.


----------



## ironmine

71 dB said:


> but to say everything is okay with headphones is weird.



No, headphones are not ok.

When we listen through speakers, the left ear hears both the left channel *and* the right channel.

When we listen through headphones, the left ear hears *only* the left channel. And that's it. Transcendental style of writing and verbal pyrotechnics cannot change this simple fact of reality.
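For what it's worth, the kind of crossfeed being discussed in this thread can be sketched in a few lines. This is only a minimal illustration (not any particular plugin's algorithm): each ear gets its own channel plus a delayed, lowpass-filtered, attenuated copy of the opposite channel. The parameter values (0.5 ms delay, 12 dB attenuation, 700 Hz cutoff) are assumptions chosen for illustration:

```python
import math

def crossfeed(left, right, fs=44100, delay_ms=0.5, atten_db=12.0, cutoff=700.0):
    """Minimal crossfeed sketch: mix a delayed, lowpass-filtered,
    attenuated copy of the opposite channel into each ear."""
    d = int(fs * delay_ms / 1000)             # delay in samples (~22 @ 44.1 kHz)
    g = 10 ** (-atten_db / 20)                # -12 dB -> ~0.25 linear gain
    a = math.exp(-2 * math.pi * cutoff / fs)  # one-pole lowpass coefficient

    def feed(src):
        # Lowpass-filter the opposite channel, then delay it by d samples.
        out, y = [], 0.0
        for x in src:
            y = (1 - a) * x + a * y
            out.append(y)
        return [0.0] * d + out[:len(src) - d] if d else out

    bleed_l, bleed_r = feed(right), feed(left)
    out_l = [s + g * c for s, c in zip(left, bleed_l)]
    out_r = [s + g * c for s, c in zip(right, bleed_r)]
    return out_l, out_r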


----------



## 71 dB

bigshot said:


> If you haven't done a controlled test, then you're working off a subjective impression. I've done controlled tests of every amp I've ever bought and I haven't found one that sounds different yet. I've been actively searching for one, and no one has been able to help me locate one yet.



What's wrong with subjective impression if you know it's subjective impression? Instead of searching for amps, I use the one I have. I don't find anything wrong with the sound and I have plenty of power so no problems.


----------



## bigshot

71 dB said:


> What's wrong with subjective impression if you know it's subjective impression? Instead of searching for amps, I use the one I have. I don't find anything wrong with the sound and I have plenty of power so no problems.



You aren't arguing on point any more. You're just treading water.


----------



## 71 dB

bigshot said:


> You aren't arguing on point any more. You're just treading water.



Huh? Frankly, I have had my fair share of arguing with Mr. G. I would get your posts if I were someone claiming huge differences between amps.


----------



## bigshot (Feb 19, 2019)

I must be mistaken then. I thought you had claimed that the "spatiality" of headphones, speakers and amps was the biggest problem in sound fidelity. Headphones and speakers all sound different, and that can certainly affect sound fidelity, but amps don't vary audibly in my experience. And I still don't know how space has anything to do with transducers you jam right up against your ears. Space is in a room. The room is what creates the dimensionality of sound.


----------



## ironmine

bigshot said:


> I must be mistaken then. I thought you had claimed that headphones, speakers and amps sounded different because of "spatiality" or something. Headphones and speakers all sound different, but amps don't in my experience.



You don't hear any difference between various amps?  An integrated amp vs. a good preamp + a couple of mono power amps? A Class D amp vs. a Class A amp? Transistor vs. tube amp?  Nothing at all?


----------



## bfreedma

ironmine said:


> You don't hear any difference between various amps?  An integrated amp vs. a good preamp + a couple of mono power amps? A Class D amp vs. a Class A amp? Transistor vs. tube amp?  Nothing at all?



Tube amps in some cases.  The rest, given the usual caveats regarding proper operating conditions, no audible difference.  Can you show any measurements that would indicate otherwise?


----------



## 71 dB

bigshot said:


> I must be mistaken then. I thought you had claimed that the "spatiality" of headphones, speakers and amps was the biggest problem in sound fidelity. Headphones and speakers all sound different, and that can certainly affect sound fidelity, but amps don't vary audibly in my experience. And I still don't know how space has anything to do with transducers you jam right up against your ears. Space is in a room. The room is what creates dimensionality of sound.



_*Good amps don't distort signal and are true to the signal,* but we encounter the problem of spatiality. The transfer function of a sound reproduction chain is very different in regards of spatiality when we compare speakers and headphones. While amps, speakers and headphones have become better and better, this difference in spatiality has in my opinion become the biggest problem in sound reproduction._

The bolded text tells what I think about amps. "Good" here means a functioning well engineered unit. Spatiality is another thing not really related to amps. Amps definitely are not one of the biggest problems in sound fidelity. 

Recordings contain spatial information and you hear that with headphones. With speakers, the spatial information on the recording gets convolved with the room acoustics and the HRTF of the listener.


----------



## gregorio (Feb 20, 2019)

71 dB said:


> Sorry about ignoramus. I called myself one because of discovering crossfeed. So, so sorry. I regret what I did a year ago. I am weak, with low self-esteem. I exploded and did wrong. Do you judge me forever for it?


If you carry on doing it forever then I'll carry on judging you for it forever! If you are "so sorry" and "regret what you did a year ago", then why are you doing it all over again NOW? These are just a few (of numerous) examples from *THIS* month:
"_Crossfeed revolutionized my headphone listening too in 2012 when I discovered it and stopped being a spatial ignoramus._"
"_People get used to excessive spatiality thinking it's normal and correct. I was one of those people before 2012, spatially ignorant. Crossfeed is about doing headphone listening more correctly._"
"_spatially informed people want to use crossfeed with headphones_"

Crossfeed did NOT revolutionise my HP listening; artificially wide/separated sound can be "correct"; and I do NOT want to use crossfeed. Therefore you are calling me spatially ignorant, saying that I don't listen to HPs correctly and that I am a spatial ignoramus.


71 dB said:


> [1] I'm not telling to not use excessive spatiality.
> [2] I am saying if you do, spatially aware people may want to use crossfeed and the best way to not have people tinkering your production is to not use excessive spatiality.
> [3] That's what I suggest and the thanks I get is being humiliated.


1. That too is a lie, here are some more of YOUR statements from the last few days:
"_I claim expertise on spatiality hearing and I am proposing what music production should be in regards of that._"
"_It doesn't matter how long music production has been not spatially "natural". Maybe it's time to end the lunacy and start doing things correctly?"_
"_Maybe music production should be about avoiding what would not occur "naturally" and exploring ARTistical possiblities within that framework?_"
"_You have gotten away with nonsensical spatiality because most consumers are spatially ignorant._"
2. And I am saying that if people really are "spatially aware" (and interested in fidelity), then they would want to hear the spatiality intentionally created by the artists, rather than alter/destroy it!
3. Huh? You call us ignorant/"ignoramuses", tell us it's time to end our lunacy and how we should be producing music "correctly" (that isn't "illegal"), present your preferences as objective fact, ignore or misrepresent many of the actual pertinent facts, just make-up other lies/facts to support your agenda, and then you expect us to thank you for it?  Are you joking?


71 dB said:


> "Your idea of synth sounds is weird! It's not 1950 when pioneers of electronic music generated mono sinewaves."
> I have not seen Forbidden Planet; they probably used whatever, but not FM synthesis or time stretching, and it's mono.
> Unfortunately I have heard very little Stockhausen. Not all sinewaves of course, no need to split hairs.


The Barrons (who created the Forbidden Planet score) and Stockhausen are arguably the greatest pioneers of electronic music in the 1950s, and Forbidden Planet was a milestone, being the first electronic film score. They did NOT use just sine waves: they used square, triangle and sawtooth waves; they did use frequency modulation, ring modulation and other synthesis techniques; and they did employ time-stretching (although time-stretching is NOT a synthesis technique). Forbidden Planet is in mono (because films had to be at the time). However, in the 1950s Stockhausen was using all these synthesis techniques and most definitely NOT in mono!

So, not only were you attempting to be patronising/insulting (which is unacceptable) but, to support your agenda, you made up false facts, which is also unacceptable, and now, when your "facts" have been proven false, you try to wriggle out of it by saying I'm just "splitting hairs"?!

Ironically, for someone who claims to be an expert in spatiality, Stockhausen was THE world's leading expert/pioneer in the field of music spatialisation and yet clearly you've not heard any of it? Arguably his most influential work was the 1956 _Gesang der Junglinge (Kontate)_. You'll absolutely hate it though: the unnatural spatiality of the synthesised music is just about its main artistic intention, which of course you'll want to destroy because you'll conclude that Stockhausen must have been a "spatial ignoramus"!!


71 dB said:


> [1] I don't understand excessive spatiality as an art.
> [2] Speakers don't give it ...


1. That's EXACTLY my point! The problem isn't with the art, it's that YOU "_don't understand_" it (as an art). The issue then is YOUR lack of understanding (YOUR ignorance) and YOU are the "_stupid low IQ_" person who is "_not be able to interpret things on such a high level and can't understand art_."!

2. That is NOT true. Listen to _Gesang der Junglinge_ on speakers and then tell me that spatialisation is even slightly related to "natural".

G


----------



## 71 dB

Mr. G tells me to not call even myself ignoramus and then he calls me a "stupid low IQ" person. 

Listening to _Gesang der Junglinge._ Hard-panned sound left and right. Doesn't create excessive spatiality with speakers, nothing does. It's impossible.


----------



## bigshot (Feb 20, 2019)

ironmine said:


> You don't hear any difference between various amps?  An integrated amp vs. a good preamp + a couple of mono power amps? A Class D amp vs. a Class A amp? Transistor vs. tube amp?  Nothing at all?



I don't use tube amps, so I don't know about that. But every amp and player I've owned for the past 30 years has been audibly transparent. I do a listening test on every piece of equipment I buy and if it wasn't audibly transparent, I'd pack it back up and return it. I haven't had to do that yet.

My system is carefully equalized and level balanced. I don't want some parts of my system to be colored one way and other parts another way. If that was the case, I would need a different EQ curve for every source. That would be a royal pain in the ass. It's easier to just have everything clean and transparent. That way I can swap sources in and out or replace my amp, and all the settings remain the same.



71 dB said:


> Recordings contain spatial information and you hear that with headphones. With speakers, the spatial information on the recording gets convolved with the room acoustics and the HRTF of the listener.



Secondary depth cues can be heard even with a mono transistor radio. They are baked into the recording. Spatial information is an interaction between the recording, the speakers and the room itself.

Sound engineers don't mix with headphones. All of the balances are set to work with a properly laid out speaker system in a listening room. If you want to hear the spatial cues presented the way they were intended to be heard, you use speakers.

If you go to the opera, you don't sit where the soprano is singing into your left ear and the tenor in the right. You sit back a bit from them so the sonic stage is laid out in front of you in a natural perspective. The envelope of the room acoustics wraps around the singing and creates bloom. That is what speakers do when they are properly laid out in a good room.

If you have no space for a good speaker system, or you live near people who don't share your musical interests, headphones are an acceptable compromise. But headphones don't present music with soundstage, the envelope of the room, or with bloom. They just present it bald, shot straight into each ear.


----------



## 71 dB

bigshot said:


> 1. Secondary depth cues can be heard even with a mono transistor radio. They are baked into the recording. Spatial information is an interaction between the recording, the speakers and the room itself.
> 
> 2. Sound engineers don't mix with headphones. All of the balances are set to work with a properly laid out speaker system in a listening room. If you want to hear the spatial cues presented the way they were intended to be heard, you use speakers.
> 
> ...



1. Yes, mono sound can have depth cues.
2. To some extent they do, but they mix primarily using speakers. Studios have (or at least should have!) heavy acoustic treatment, making them different from typical home listening rooms. That's why you will have more interaction than intended.
3. Yep.
4. With proper crossfeed, well-recorded music can sound quite "large" in regards to soundstage. With headphones the room doesn't mess with the sound, so the spatial information of the recording is heard very well. With the best (in this regard) recordings I have, the soundstage is almost as large as with speakers! On bad recordings the sound collapses inside the head no matter how it's crossfed. Typically I get a miniature soundstage of about 3-5 feet in diameter.


----------



## Steve999

Zippy zing zow zoodly hey hey hey! 

I made that up, you can google it and you won’t find it anywhere. I’m copyrighting it right now so don’t even.

Anyway, the newer higher end Sony noise cancelling headphones have some spatial effects and let you move the image around a little or actually quite a bit in a few discrete places, one of which is, you know, like, in front of you. It’s discrete, not variable, but it involves some crossfeed like fur sure and is interesting if not compelling.

Over and out.


----------



## bigshot

Soundstage is a lot more than just blending the two channels. It is created by matching the geometry of the equilateral triangle of distance between speakers, and distance between speakers and listener. That places the sound location precisely, and it is scalable for larger rooms simply by increasing the width of the triangle's sides. Headphones can't do that.

By the way, a recording booth may be soundproofed to be acoustically dead because they don't want to pick up unwanted reflections while recording, but the mixing stage shouldn't be acoustically dead. The goal of a listening room in the home is to come as close as possible to the sound of a good mixing stage. The room is what gives the sound dimension and bloom. Headphones can't do any of that.

There are three kinds of "spatiality" in recorded music-- 1) secondary depth cues which are burned into the recording itself; 2) the difference between the left and right channels; and 3) the space around the sound that comes from the combination of distances between speakers and the listener and the acoustic properties of the room. Headphones can only do secondary depth cues and simple stereo. It takes speakers to do all three kinds of space, and multichannel speaker setups to take simple speaker soundstage into the truly dimensional realm of soundscape.

You need space around the sound to have spatiality. Even a mono system in a really good room can sound better than stereo headphones. Crossfeed might be a good compromise to take some of the curse off of stereo headphone listening, but it doesn't sound anything like true speaker soundstage, and it is flat as a pancake compared to a multichannel speaker system. The only reason I can think of why someone would say differently is if they have never heard a good speaker installation, or they are hell bent to justify their use of headphones reality be damned.


----------



## gregorio (Feb 21, 2019)

71 dB said:


> [1] Mr. G tells me to not call even myself ignoramus and [1a] then he calls me a "stupid low IQ" person.
> [2] Listening to _Gesang der Junglinge._ Hard-panned sound left and right. Doesn't create excessive spatiality with speakers ...


1. You can call yourself anything you like but you cannot call or imply other people are ignoramuses. [1a] What, you don't like being called a "_stupid low IQ_" person? You'll notice I placed it in quotes (and italics), because I was *quoting YOU*, the insult YOU posted!

2. Then there must be something very seriously wrong with either your speaker setup, your ears or your brain/perception! I can clearly hear that the spatiality is nowhere near "natural" (and is "excessive") even on my laptop speakers, and of course that's entirely intentional. If you can't hear/perceive it as unnatural/excessive then you're missing a very significant part of the "art" of the composition!


71 dB said:


> 2. To some extent they do, but they mix primarily using speakers. Studios have (or at least should have!) heavy acoustic treatment, making them different from typical home listening rooms. That's why you will have more interaction than intended.
> 3. Yep.
> 4. With proper crossfeed well recording music can sound quite "large" in regards of soundstage.
> [4a] With headphones the room doesn't mess up with the sound so the spatial information of the recording is heard very well.
> ...



2. Yes, studios are always well acoustically treated and yes, they are somewhat different from typical home listening rooms BUT NO, you will not have more interaction than intended! Of course, this depends on exactly what you mean by "interaction". If you mean "interaction" purely in terms of frequency response then yes, the typical consumer listening environment will have untreated room modes/standing waves which will result in resonances, peaks and troughs (summing and cancellation) at certain frequencies. However, as Bigshot states, acoustic treatment (for music studios) does NOT only mean sound absorption (deadening)! Acoustic treatment means sound isolation, some absorption (to reduce standing waves/summing and cancellation at specific frequencies) AND diffusion! The point of diffusion is specifically NOT to reduce room acoustics/reverb or interaction but to linearise the reflections/interactions. The desired end result is a mix room with the same amount and duration (RT) of room reflections/reverb as an average/slightly large consumer listening environment (a sitting room for example) but with a flatter frequency response. In the case of a mastering studio, the whole point of mastering in the first place is to alter the mix so that consumer playback is as close as possible to what was intended, and the acoustic treatment of mastering studios is obviously designed to facilitate that goal, not make it more difficult!

3. But of course we don't actually record or produce/mix it that way!

4. What is "quite large" and how does that relate to what the creators intended, and what other (undesirable) effects does crossfeed create?
4a. You are contradicting yourself!! If a master/mix is designed solely for speaker playback, then for crossfeed to "fix"/"cure" the problems of headphone playback it needs to "mess up the sound" as a room does. As you state, crossfeed does not do this and therefore is NOT a "fix" or "cure" as you keep asserting! Furthermore, if we want to hear the master/mix with just the spatial information on the recording (and no room acoustics interaction) then crossfeed changes that spatial information, which obviously makes it more difficult to hear.

5. That is YOUR perception, not objective fact. With crossfeed I never get a soundstage as large as with speakers. Sure, I can adjust crossfeed to provide a similar overall width but soundstage is not just width, it's also depth, the timing and position of reflections (within the mix) and various other factors. You claim to be an expert on spatiality, surely you must know this fundamental basic? For me, crossfeed does NOT improve any of these other factors, it degrades them. Typically, I perceive a somewhat more blurred and flatter (less depth) mix using crossfeed, in addition to the reduction of width. However, I realise that's my perception, that some other people may not perceive the same as me, they may not perceive the flattening or blurring effect or may not be bothered by it if they do. The difference between you and me is that I don't claim my perception and preferences are objective fact and therefore that anyone who doesn't perceive the same as me or doesn't have the same preferences must be ignorant!

6. Recordings are not made specifically to respond well to crossfeed. Virtually without exception, they are made for playback on speakers and HPs without crossfeed. Therefore, a good recording may respond particularly badly to crossfeed and a bad recording might respond well. Again, you are creating definitions and using circular arguments based on your personal preferences. You are defining recordings as "good" or "bad" according to how they respond to you applying crossfeed, but there are numerous factors which define whether a recording is "good" or "bad" and how they respond to crossfeed is the least of them or not a factor at all!
6a. Even if you did get exactly the same soundstage just in "miniature"/smaller (which you don't with crossfeed), how do you know that a recording is intended to have a soundstage "of about 3-5 feet in diameter"? Most are not!

G


----------



## 71 dB

gregorio said:


> Then there must be something very seriously wrong with either your speaker setup, your ears or your brain/perception! I can clearly hear that the spatiality is nowhere near "natural" (and is "excessive") even on my laptop speakers, and of course that's entirely intentional. If you can't hear/perceive it as unnatural/excessive then you're missing a very significant part of the "art" of the composition!



Nothing wrong with my setup. It's just that our definitions of natural spatiality are different. Any music with any spatiality listened to with speakers is just sounds radiated by speakers into the room. No matter how crazy the spatiality in the recording, that is _natural_ to me, because by the time the soundwaves hit my ears, the excessive spatiality has disappeared. There is no excessive ILD, for example. If the left speaker is 20 dB louder, it doesn't mean my left ear hears sounds 20 dB louder. It means maybe a few decibels louder, and that is natural; it's what my ears expect from sounds around me that aren't VERY close to me. Your definition is more like what it would sound like if the band played in front of you.

This equation (f is frequency in Hz) describes what I mean by natural spatiality:

ILD ≤ 1 + (f/1000)^0.8 dB

Exact values of course vary a bit from listener to listener due to different HRTFs, but that should give you the idea of what I mean by natural spatiality. What you seem to call natural spatiality, I call _authentic_ spatiality, and that's much more demanding than what I call natural spatiality.
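If it helps, here's a quick Python sketch that just tabulates the bound above at a few frequencies (the 1 dB offset and 0.8 exponent come straight from the formula; the printed values are illustrative, not measurements):

```python
def natural_ild_limit_db(f_hz: float) -> float:
    """Upper bound (in dB) on interaural level difference that still
    reads as 'natural' spatiality, per the formula above."""
    return 1.0 + (f_hz / 1000.0) ** 0.8

# Tabulate the bound across the audible range.
for f in (100, 1000, 4000, 10000):
    print(f"{f:>5} Hz: ILD <= {natural_ild_limit_db(f):.1f} dB")
```

The bound grows with frequency, matching the point that our heads barely shadow low frequencies, so a large low-frequency ILD reads as unnatural.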


----------



## 71 dB

gregorio said:


> Recordings are not made specifically to respond well to crossfeed. Virtually without exception, they are made for playback on speakers and HPs without crossfeed. Therefore, a good recording may respond particularly badly to crossfeed and a bad recording might respond well. Again, you are creating definitions and using circular arguments based on your personal preferences. You are defining recordings as "good" or "bad" according to how they respond to you applying crossfeed, but there are numerous factors which define whether a recording is "good" or "bad" and how they respond to crossfeed is the least of them or not a factor at all!
> 6a. Even if you did get exactly the same soundstage just in "miniature"/smaller (which you don't with crossfeed), how do you know that a recording is intended to have a soundstage "of about 3-5 feet in diameter"? Most are not!
> 
> G



Why don't recordings suffer from the _acoustic_ crossfeed inherent to speaker listening? If recordings aren't made to respond well to crossfeed, doesn't that mean we should use crosstalk cancellation when listening to speakers? Why is only headphone crossfeed bad? Crossfeed is either bad or good. If headphone crossfeed is bad, so is acoustic crossfeed with speakers, and vice versa. For me both are good, because music is mixed to take crossfeed (acoustic or headphone) into account.

Recordings are hardly intended to have a soundstage of about 3-5 feet in diameter, but that's what I get from my gear, and fortunately it sounds good. They hardly intended the soundstage I get from my speakers either (because every room + speaker combination creates its own spatiality), but that's what I get, and hopefully I enjoy it. Movies are intended to be watched on big screens in theatres instead of 32" TVs, but I do enjoy watching movies on my small TV. For authentic spatiality I can go to a live concert. At home I can achieve natural spatiality.


----------



## 71 dB

gregorio said:


> That is YOUR perception, not objective fact. With crossfeed I never get a soundstage as large as with speakers. Sure, I can adjust crossfeed to provide a similar overall width but soundstage is not just width, it's also depth, the timing and position of reflections (within the mix) and various other factors. You claim to be an expert on spatiality, surely you must know this fundamental basic? For me, crossfeed does NOT improve any of these other factors, it degrades them. Typically, I perceive a somewhat more blurred and flatter (less depth) mix using crossfeed, in addition to the reduction of width. However, I realise that's my perception, that some other people may not perceive the same as me, they may not perceive the flattening or blurring effect or may not be bothered by it if they do. The difference between you and me is that I don't claim my perception and preferences are objective fact and therefore that anyone who doesn't perceive the same as me or doesn't have the same preferences must be ignorant!
> 
> G



I know it's _my_ perception, but some other people have a similar perception. Ask for example Andolink, for whom I built a crossfeeder in 2014. He talks about his crossfeed experience in this thread he created long before I came here. I had spoken about crossfeed on a classical music forum where he is also a member, and over there he asked me to design and build him a balanced crossfeeder (the only one in the world?). Reluctantly I agreed. It was a lot of work to design and especially to build the unit, but all I've heard from him is that he likes it.

Crossfeed, by reducing excessive spatiality, allows spatial hearing to decode the spatial cues of the recording, and at least in the case of recorded acoustic music such as classical there's a lot of that: all the reflections of the acoustic space where the recording was done. Modern "not so acoustic" music uses sophisticated plugins to create artificial spatiality, often so well that the need for crossfeed is limited; typically a weak -8 dB crossfeed gives excellent results.
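To make the idea concrete, a weak crossfeed of this kind can be sketched in a few lines of Python: a delayed, lowpass-filtered copy of the opposite channel is mixed into each channel at -8 dB. This is only my minimal illustration, not any specific plugin; the ~0.5 ms delay and 700 Hz cutoff are assumed, typical head-shadow values:

```python
import math

RATE = 44100                   # sample rate, Hz
XFEED_DB = -8.0                # the weak crossfeed level mentioned above
DELAY_S = 0.0005               # ~0.5 ms interaural delay (assumed)
CUTOFF_HZ = 700.0              # lowpass cutoff (assumed head-shadow value)

gain = 10 ** (XFEED_DB / 20)
delay = round(DELAY_S * RATE)
# One-pole lowpass coefficient approximating head shadowing.
alpha = 1 - math.exp(-2 * math.pi * CUTOFF_HZ / RATE)

def crossfeed(left, right):
    """Return (left, right) with a delayed, lowpass-filtered copy of
    the opposite channel mixed in at XFEED_DB."""
    def shaped(src):
        y, out = 0.0, []
        for s in src:
            y += alpha * (s - y)            # one-pole lowpass
            out.append(gain * y)
        return [0.0] * delay + out[:len(src) - delay]
    return ([l + x for l, x in zip(left, shaped(right))],
            [r + x for r, x in zip(right, shaped(left))])
```

The lowpass-then-delay shaping is what keeps this a crossfeed rather than a plain channel blend: only the low, attenuated part of the opposite channel leaks across, as it does around a real head.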

Width isn't large ILD. Width is spatial cues that indicate a wide sound. It's a combination of small ILD, spectral and time-based cues. The Haas effect is a good example of that. The normal Haas effect has zero ILD, but the time delay, if well chosen, creates a strong wide spatial effect. Of course Haas as such is very simplistic, but it illustrates how spatial hearing can be fooled. Adding some ISD and ILD effects can make Haas even better. Large ILD means cues for a sound VERY close to one ear. Crossfeed helps make the sound wider by reducing excessive ILD, but your ears need to adjust to it (for me it takes a minute or so). If the recording itself has good spatial cues of depth, those will create a sense of depth when proper crossfeed removes excessive spatiality and our spatial hearing is able to decode the spatial cues better.

What I hear is pretty much the opposite of what you hear. To me crossfeed makes the stereo image sharper and more ordered. Different sounds find their place. Without crossfeed the stereo image is fractured, like pieces of broken glass all over the place. Crossfeed glues these pieces together into coherent sounds. Crossfeed is kind of like listening to music in a silent room, compared to no crossfeed, which is like listening in a noisy room. Low frequencies sound "fake" if there's a large ILD. The wisdom among music producers is to make bass mono or almost mono. That makes it "real", because low frequency sounds we hear in real life are very mono-like. That's how human spatial hearing works. We don't have large enough heads to shadow low frequencies. Elephants are different; for them, ILD at 100 Hz can be large, maybe even more than 10 dB? That doesn't mean we can't have spatial cues at low frequencies. Of course we can: ITD! Have the same 100 Hz sound in both channels, but delay the right channel by 640 µs and you have "hard-panned" it to the left. Dropping the level of the right channel by 1 dB makes it even better. Add reverberation and reflections to further improve the spatial cues. That is, if you insist on having such low frequencies so far left.
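The 100 Hz "hard-pan by delay" example is easy to sketch in Python: the same tone in both channels, with the right channel delayed by 640 µs and dropped 1 dB (the sample rate and one-second length here are arbitrary choices of mine):

```python
import math

RATE = 44100            # sample rate, Hz (arbitrary choice)
F = 100.0               # tone frequency from the example above
ITD_S = 640e-6          # interaural time difference: 640 microseconds
ILD_DB = 1.0            # small extra level drop on the lagging channel

delay_samples = round(ITD_S * RATE)     # ~28 samples at 44.1 kHz
gain = 10 ** (-ILD_DB / 20)             # 1 dB attenuation

n = RATE  # one second of audio
left = [math.sin(2 * math.pi * F * i / RATE) for i in range(n)]
# Right channel: the same tone, delayed by the ITD and 1 dB quieter,
# which pans the 100 Hz tone left via time cues rather than level cues.
right = [0.0] * delay_samples + [gain * s for s in left[:n - delay_samples]]
```

The point of the sketch is that the pan comes almost entirely from the time offset; the 1 dB level difference on its own would be nearly inaudible as a position cue at 100 Hz.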


----------



## castleofargh

all right, can I officially announce that we've gone full circle one time too many?
@71 dB, you want people to acknowledge that crossfeed is an improvement because it approximately alters a few specific variables by following a model with a basic single-driver stereo system and a fixed head model in an anechoic room, without any sense of tactile bass. @gregorio and a few others (me included) disagree, because that completely ignores the many other variables of actual playback. you are free to think that adding 2 more wheels to a bike improves the bike because now, in some specific aspects, it's closer to a car and you like riding in a car. but when someone looks at your 4-wheel bike and acts as if it's not an improvement, I don't get why you would act surprised and fight over that idea. 

I think that listening to albums mastered for speakers on headphones is broken stereo. in that respect my views are really close to your own. I often use some form of crossfeed or more, I appreciate that, I prefer that to nothing at all. so subjectively it is an improvement for me. but I have also been eagerly waiting for the Realiser A16 because I'm absolutely not satisfied with either standard headphone listening, nor crossfeed of any sort with any setting. there is no hope for you to ever convince me that crossfeed is giving the correct spatial cues. correct spatial cues would have reverb, would change when I move my head, and the whole thing would be modeled on my own HRTF, not on a one-band EQ to simulate my head masking the direct incoming sound on one ear, etc. even then we'd be talking about simulating one specific playback environment only, which would probably not be the one that the mastering engineer heard. so proper spatial cues... that's a concept limited to very very specific models and references. IMO you should stop insisting on such a notion when talking about album playback in general. you can explain the same stuff about ILD and ITD another 500 times to people like @gregorio or myself who understood you just fine the first time, it won't change our opinion. partial job does partial stuff, the end. you like your crossfeed, good. someone doesn't, fine. that's a subjective matter and there is no point trying to force everybody to admit that there is more to it.


----------



## 71 dB

gregorio said:


> Yes, studios are always well acoustically treated and yes, they are somewhat different from typical home listening rooms BUT NO, you will not have more interaction than intended! Of course, this depends on exactly what you mean by "interaction". If you mean "interaction" purely in terms of frequency response then yes, the typical consumer listening environment will have untreated room modes/standing waves which will result in resonances, peaks and troughs (summing and cancellation) at certain frequencies. However, as Bigshot states, acoustic treatment (for music studios) does NOT only mean sound absorption (deadening)! Acoustic treatment means sound isolation, some absorption (to reduce standing waves/summing and cancellation at specific frequencies) AND diffusion! The point of diffusion is specifically NOT to reduce room acoustics/reverb or interaction but to linearise the reflections/interactions. The desired end result is a mix room with the same amount and duration (RT) of room reflections/reverb as an average/slightly large consumer listening environment (a sitting room for example) but with a flatter frequency response. In the case of a mastering studio, the whole point of mastering in the first place is to alter the mix so that consumer playback is as close as possible to what was intended, and the acoustic treatment of mastering studios is obviously designed to facilitate that goal, not make it more difficult!
> 
> G



Especially at lower frequencies, studios are better treated than typical homes, and overall the acoustics are more controlled. In studios the listening is more near-field, which increases the direct sound compared to reflections and reverberation. There's interaction in the studio of course (they are not anechoic chambers), but not as much as in typical living rooms. Headphones don't give that interaction, crossfeed or not, but we can fix the acoustic crossfeed part with crossfeed. If I can fix only one thing out of 10, I will do it even if the other 9 remain, but that's me.


----------



## bigshot (Feb 21, 2019)

I’ve seen homes with listening rooms with acoustics as good as studios. It requires a dedicated room and no wife!

I think we can agree that crossfeed is a compromise to consider if you can’t keep a speaker system in your home. It’s just that even the best headphones never come close to matching the presentation of a halfway decent speaker system.


----------



## 71 dB

castleofargh said:


> 1. @71 dB. you want people to acknowledge that crossfeed is an improvement because it approximately alters a few specific variables by following a model with basic single driver stereo system and fixed head model in an anechoic room without any sense of tactile bass.
> 
> 2. @gregorio and a few others(me included), disagree because that completely ignores the many other variables of actual playback. you are free to think that adding 2 more wheels on a bike improves the bike because now on some specific aspects, it's closer to a car and you like riding in a car. but when someone looks at your 4 wheel bike and acts as if it's not an improvement, I don't get why you would act surprised and fight over that idea.
> 
> ...



1. Is it wrong to want things? I want World peace too. If you want "tactile bass" with headphones there are shakers. It makes the setup a little more complex, but it's doable.
2. I don't get this "if you can't fix everything, fix nothing" mentality, but that's me. I don't know about the other variables of actual playback, but to me fixing the problem of excessive spatiality improves things tremendously. There is no way back to non-crossfeed headphone listening for me.
3. Yeah, that's how it often is, but for me crossfeed fixes "broken stereo."
4. That's good.
5. To me Realiser A16 is sophisticated crossfeed. It does reduce excessive spatiality among other things. If you can afford this kind of system then by all means!
6. It doesn't give correct spatial cues, it gives spatial cues that make sense to our hearing system. What are correct spatial cues anyway? If I listen to speakers and move my head two inches the spatial cues will change a little. Correct spatial cues are actually a _collection_ of spatial cues that make sense. To use your analogy of 4 wheel bikes and cars: There are many brands and models of cars. All of them are genuine cars. We have to be realistic about how accurately we can reproduce spatiality and the truth is we can't reproduce spatiality to the smallest detail. All you have to do is move the headphones on your head a bit and the high frequency response changes a lot, and since ISD is part of spatiality, we have changed the spatiality! Similarly, moving around a room while listening to speakers changes the spatiality you hear. Audio engineers can only dream about people hearing the music exactly the way it was envisioned in the studio, but that won't happen. However, people can enjoy the music in the intended way.
7. What percentage of music is recorded in anechoic chambers? Recordings have reverberation in them. Listen to a recording of organ music recorded in a church. Do you really want to add the reverberation of your listening room to the reverberation of that recording? Headtracking: if you can/want to use it then use it! No objections from me. I have never said people shouldn't use headtracking.
8. I am 48 years old and I am still clueless about what my place in the World is. What exactly should I be writing in a "To crossfeed or not to crossfeed?" thread as someone who finds crossfeed very beneficial? Aren't you trying to turn this into a "To crossfeed or to Realiser A16?" thread?


----------



## gregorio

71 dB said:


> [1] It's just that our definition of natural spatiality is different.... This equation (f is frequency, [Hz]) describes what I mean by natural spatiality: ILD ⩽ 1 + (f/1000)^0.8 dB
> [1a] Exact values of course vary a bit from listener to listener due to different HRTFs, but that should give you the idea of what I mean by natural spatiality.
> [2] Any music with any spatiality listened with speakers is just sounds radiated by speakers into the room. No matter how crazy spatiality in the recording, that is _natural_ to me, because by the time the soundwaves hit my ears, excessive spatiality has disappeared.
> [2a] If the left speaker is 20 dB louder it doesn't mean my left ear hears sounds 20 dB louder. It means maybe a few decibels louder and that is natural, it's what my ears expect from sounds around me that aren't VERY close to me.
> [2b] Your definition is more like what it would sound like if the band played in front of you.



1. Clearly our definition of natural spatiality is different, your definition is incorrect! Because: That equation does NOT describe spatiality at all, natural or otherwise, it ONLY describes ILD, which is just ONE of dozens of parameters that together define "spatiality"! You claim to be an expert in spatiality but you don't even know what spatiality is.
1a. Yes, that does give me an idea of what you mean by "natural spatiality". What you actually mean is the ILD that would occur naturally at some significant distance from the sound source, NOT "natural spatiality". What I (and everyone else who knows what "spatiality" means) call "natural spatiality" is the spatiality that would occur naturally.

2. No, the excessive/unnatural spatiality has NOT disappeared, just the excessive/unnatural ILD. If, to you/your perception, the excessive/unnatural spatiality has disappeared then clearly there is an issue with your perception because virtually ALL music recordings have unnatural spatiality and in the case of _Gesang der Junglinge_ for example, unnatural/excessive spatiality is a primary artistic goal, which you cannot appreciate because for you it doesn't exist ("has disappeared").
2a. And what if the artists intend for a sound to be perceived as "VERY close to you"? In fact, this is a very common artistic intention!
2b. My definition of "natural spatiality" is the spatiality that would naturally occur/exist at any particular listening position. My definition of natural spatiality would therefore NOT include "what it would sound like if the band played in front of you" because virtually without exception all rock/popular music genre gigs are "mixed": they contain a significant amount of artificial spatiality (delay/reverb spatial effects) which would NOT occur naturally in that performance space/venue and, in addition, are a mixture of many very different "listening" positions, rather than a particular listening position.



71 dB said:


> [1] If recordings aren't made to respond well to crossfeed, doesn't that mean that we should use crosstalk canceling when listening to speakers?
> [1a] Why is only headphone crossfeed bad?
> [1b] Crossfeed is either bad or it's good.
> [1c] If headphone crossfeed is bad so is acoustic crossfeed with speakers and vice versa.
> ...



1. Yes, in theory you should, IF the recording is designed for headphone crossfeed (although I don't know of any such mixes) or in the case of, say, a binaural recording.
1a. Because it's ONLY headphone crossfeed.
1b. No, it's both. It has some disadvantages and can have some advantages. "Good" or "Bad" is therefore NOT an inherent property but purely a personal preference, depending on whether an individual is aware of the disadvantages or bothered by them.
1c. That would be true IF headphone crossfeed and acoustic crossfeed were the same thing but they're not, they're very significantly different. For example, there is no such thing as ONLY acoustic crossfeed.
1d. What you prefer is up to you, however it is NOT because music is mixed to take acoustic or headphone crossfeed into account. I know of NO recordings mixed to take headphone crossfeed into account and no recordings mixed to take only acoustic crossfeed into account (because there is no such thing as "only acoustic crossfeed").

2. So, you are talking about what sounds good to you, even though you know it contradicts the artists' intention. That's your preference and choice to do that but I prefer fidelity to the artists' intentions, I want to hear what the artists actually did/intended.
2a. Yes they did! Sure, all different room + speaker combinations produce somewhat different spatiality but the soundstage is adjusted to give a good approximation of the artists' intentions across a range of room + speaker combinations, that's what "mastering" is and why it exists.
2b. That's because you are listening to a different mix, a mix intended for playback on a TV, not for theatrical playback!
2c. There's no such thing as "authentic spatiality" you've just made that up. You can get natural spatiality by going to a live acoustic concert. You cannot achieve that natural spatiality at home, unless your home is a concert hall full of acoustic musicians!


71 dB said:


> [1] I know it's _my_ perception, but some other people have similar perception.
> [1a] Crossfeed in reducing excessive spatiality allows spatial hearing to decode the spatial cues of the recording ...
> [1b] Crossfeed helps making the sound wider by reducing excessive ILD, but your ears need to adjust to it (for me it's a minute or so).
> [1c] If the recording itself has good spatial cues of depth, those will create a sense of depth when proper crossfeed removes excessive spatiality and [1d] our spatial hearing is able to decode the spatial cues better.
> ...



1. Some people believe/perceive that vinyl is higher fidelity than CD or 24bit is higher resolution than 16 bit and "some other people have similar perception". Does that make those people's belief a fact or is it just a fallacy shared by more than one person?
1a. Crossfeed puts some of the signal from the left channel into the right channel and vice versa, thereby making the mix sound more mono (narrower) and blurring/confusing the left/right spatial information, making it more difficult to "decode the spatial cues". For some strange reason, you appear to be perceiving the opposite of what's actually occurring, a wider mix and clearer/easier to decode spatial information.
1b. Crossfeed reduces ILD by making the mix narrower and I do not want my ears "to adjust to it" (in any amount of time) and give me the false perception that the opposite is occurring, although of course it wouldn't be my ears adjusting, it would be my brain.
1c. No they won't.
1d. You are contradicting yourself. You state that you "know it's [your] perception" and that it is NOT my perception, so how can "OUR spatial hearing be able to decode the spatial cues better"? You mean YOUR spatial hearing, not OURS! You have a perception (which is contrary to what's actually occurring) and you are trying to falsely sell it as an objective fact.
1e. No, crossfeed is nothing like listening to music in any kind of room because crossfeed does not contain any kind of room (acoustic) information! Listening without crossfeed is also not like listening in any room (noisy or otherwise), it is listening to the raw spatial information on the recording put there by the artists and that's what it sounds like to me. If your brain is creating some noisy listening room that doesn't exist, then we're back to what I said previously, you've got a problem with your perception.

2. Not so much music producers but certainly mastering engineers.
2a. It's got nothing to do with making it "real", making it "real" is pretty much the last thing that music producers would want because they spend so much time changing the "real" low freqs! It's done because the low freqs are where the majority of the energy is and it makes sense to split the energy between two drivers rather than putting it all in one driver and risking overloading that driver, or having to lower the level of the entire mix, and in times past, high amplitude LF had to be mono to avoid catastrophic tracking issues with vinyl. Again, this has ALL been discussed and dealt with over a year ago and here you are misrepresenting it all over again, why?

3. No, that is false. Bigshot and I have already refuted that (and others previously) but you just repeat that falsehood anyway! Mastering studios typically do NOT use nearfields, they use midfields and even in commercial recording/mix studios where they do use nearfields, they also have midfields. The acoustic interaction (RT) is specifically designed to be the same as typical living rooms (though far more neutral/flatter). And again, you clearly know very little about commercial studio acoustics, music engineering or production and yet you seem quite happy to simply make up facts which contradict the actual facts (presumably because you don't know enough to realise you're contradicting the actual facts). Ask yourself if that's what a smart person would do? If your answer is "no", then you've answered your previous question about why you are not being recognised as a smart person!!



71 dB said:


> [1] Headphones don't give that interaction, crossfeed or not, but we can fix the acoustic crossfeed part with crossfeed. If I can fix only one thing out of 10, I will do it even if the rest 9 things remain, but that's me.



1. You CANNOT fix the "acoustic crossfeed part" with crossfeed because it is crossfeed, NOT acoustic crossfeed. What is acoustically cross-fed is a combination of direct sound, highly coloured direct sound and numerous coloured reflections, crossfeed cannot "fix" that part because crossfeed does not contain most of "that part"! So, I cannot fix even that one thing out of 10 with crossfeed and trying to do so will damage/change some of the other 9 things, which is why I don't use it, but that's just me!

G


----------



## bigshot

Have you ever noticed that when you go to a concert or movie, they don't have headphone jacks so you can plug in your cans? They have speaker systems for some reason. Odd isn't it?


----------



## 71 dB

gregorio said:


> You CANNOT fix the "acoustic crossfeed part" with crossfeed because it is crossfeed, NOT acoustic crossfeed. What is acoustically cross-fed is a combination of direct sound, highly coloured direct sound and numerous coloured reflections, crossfeed cannot "fix" that part because crossfeed does not contain most of "that part"! So, I cannot fix even that one thing out of 10 with crossfeed and trying to do so will damage/change some of the other 9 things, which is why I don't use it, but that's just me!
> 
> G



Speakers-room-listener interaction consists of:

1. Direct sound (aka acoustic crossfeed) => headphone crossfeed fixes this.
2. Early reflections => headphone crossfeed doesn't have this.
3. Reverberation => headphone crossfeed doesn't have this.
4. Listener HRTF =>  headphone crossfeed doesn't have this*.

Since headphones, crossfeed or not, do not have 2.-4., proper headphone crossfeed is weaker than acoustic crossfeed (1.). Without 2.-3. (i.e. in an anechoic chamber), speaker listening means narrower sound: the ILD at lower frequencies is less than 1 dB, but with 2.-3. the resulting ILD is a few decibels depending on the recording and how wide its spatiality is. So, we use proper crossfeed depending on the recording: recordings with huge ILD require strong crossfeed and recordings with near natural ILD require only weak crossfeed and so on. So, crossfeed fixes one of the 4 things.

* Over the ear headphones create pinna reflections and headphone manufacturers can use frequency responses that roughly mimic HRTF.

Anyway, this is not good enough for you so you don't use crossfeed. For me this is a great improvement and I use crossfeed.
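For readers wondering what "proper crossfeed" amounts to in DSP terms, it can be sketched in a few lines: a lowpass-filtered, slightly delayed, attenuated copy of each channel is added to the opposite channel. This is a generic illustration only; the cutoff, delay and attenuation values below are made up for the example, not taken from any particular plugin:

```python
import numpy as np

def crossfeed(left, right, fs=44100, cutoff_hz=700.0, delay_ms=0.3, atten_db=-8.0):
    """Add a delayed, lowpass-filtered, attenuated copy of each channel
    to the opposite channel. left/right are float arrays of equal length.
    All parameter values here are illustrative, not any product's defaults."""
    # One-pole lowpass coefficients for the chosen cutoff frequency.
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)
    b = 1.0 - a

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = b * s + a * acc
            y[i] = acc
        return y

    delay = int(round(delay_ms * 1e-3 * fs))
    gain = 10.0 ** (atten_db / 20.0)

    def feed(x):
        # Filter, attenuate, then delay by prepending silence.
        f = gain * lowpass(x)
        return np.concatenate([np.zeros(delay), f[:len(x) - delay]])

    return left + feed(right), right + feed(left)
```

Making the crossfeed "stronger" or "weaker", as described above for recordings with huge versus near-natural ILD, then amounts to raising or lowering `atten_db`.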


----------



## 71 dB

gregorio said:


> Some people believe/perceive that vinyl is higher fidelity than CD or 24bit is higher resolution than 16 bit and "some other people have similar perception". Does that make those people's belief a fact or is it just a fallacy shared by more than one person?



16 bit / 24 bit is placebo. Crossfeed is not, everyone hears the difference easily in blind tests. So you can't compare.


----------



## bigshot

You can clearly hear a difference between LPs and CDs, yet some people claim LPs sound better. That’s a very common fallacy right there.


----------



## gregorio (Feb 22, 2019)

71 dB said:


> Speakers-room-listener interaction consists of:
> 1. Direct sound (aka acoustic crossfeed) => headphone crossfeed fixes this.
> 2. Early reflections => headphone crossfeed doesn't have this.
> 3. Reverberation => headphone crossfeed doesn't have this.
> 4. Listener HRTF => headphone crossfeed doesn't have this*.



You just made that up, I thought you said you were an expert on spatiality? Speaker-room-listener interaction is complex and your #1 is a FALSE statement. "Direct sound" is NOT "also known as acoustic crossfeed", you just made that up!!

The actual interaction is: Direct sound from say the left speaker enters the left ear, some of that direct sound passes through the skull and enters the right ear and in so doing gets coloured (by the skull's absorption characteristics), headphone crossfeed DOES NOT have this and therefore DOES NOT fix this. Some of that direct sound reflects off the right wall and enters the right ear, this too is coloured (by the absorption characteristics of the wall) and is also significantly time delayed, some of that reflected sound into the right ear also gets into the left ear (through the skull, with the skull's colouration), crossfeed does not have ANY of this and therefore cannot fix this. Some of the direct sound reflects off several boundaries and becomes a randomised, chaotic system which is even more coloured and time delayed than the initial/early reflections and enters both ears, crossfeed does not have any of this either. The brain uses ALL of this timing AND colouration information to create a perception. Without the colouration (caused by the skull and room boundaries) the perception will necessarily be different, NOT "fixed"/"cured", different!

If this is not complex enough already, the signal being reproduced by the speakers is NOT itself only direct sound, the recording will also contain time based coloured effects, echoes, early reflections, reverb, etc. So, a direct sound in the left channel will have some reflections off both the left and right walls (of the recording or production) in both the left and right channels, although the timing of those reflections will be different, the right channel early reflections will be later than the left channel reflections. Unfortunately though, you are crossfeeding some of that right channel into the left channel (and vice versa) and therefore damaging/destroying that spatial information!! Maybe you prefer the result of crossfeed, that's up to you, just as it's up to someone if they prefer the sound of vinyl over CD, but that is just a preference, vinyl is NOT actually higher fidelity and crossfeed does NOT actually fix the soundstage issue (IF there is one!) with headphones, in fact the opposite is true, vinyl is lower fidelity and crossfeed screws up more spatial information than it fixes! Again, this has all been discussed over a year ago and CONTRARY to your claim, you have NOT educated yourself about it, in fact you've done the exact opposite and have completely ignored it all (presumably because it conflicts with your belief/agenda). Isn't that pretty much the very definition of ignorance?

G


----------



## 71 dB

The shadow effect of the human skull is pretty simple at lower frequencies. It gets complex at higher frequencies, but the crossfeeding isn't happening there.
Funny how crossfeed has to do everything at 100 % accuracy, but "decent" room acoustics is just fine. As if every room had 100 % correct reflections, RT etc.


----------



## gregorio

71 dB said:


> [1] The shadow effect of the human skull is pretty simple at lower frequencies. It gets complex at higher frequencies, but the crossfeeding isn't happening there.
> [2] Funny how crossfeed has to do everything at 100 % accuracy, but "decent" room acoustics is just fine. As if every room had 100 % correct reflections, RT etc.



1. At exactly what frequencies? Everyone's skull is different, everyone therefore has a somewhat different HRTF and crossfeeding doesn't account for a HRTF anyway, it's just a couple of parameters (no colouration for example) and even those few parameters are just generalised, not individualised. And, as already explained, it's not just about HRTF it's the combination of HRTF with the timing and colouration of the spatial information which is reaching our ears/skull.

2. What do you mean "funny", it's simple common sense. You claim to be an expert but don't even understand the common sense of the situation, let alone the complexities! The explanation is simple and should be obvious, it's about our reference. Every moment of our lives we hear the world with our own personal HRTF, that's not just a reference but our ONLY reference, we never hear the world without a HRTF and never with any other HRTF than our own. Even a tiny deviation from our own personal HRTF is therefore likely to raise a red flag, even to the point of it not being recognised as a HRTF but just as some unrelated timing/colouration effect. Of course, everyone's perception is different and exactly how close we have to get to our own specific HRTF in order to fool our hearing (into believing it is our HRTF) will vary from person to person. This situation is entirely different to listening environment/acoustics, in fact pretty much the exact opposite! Consumers never hear the recording reproduced in the recording or master studio, they have no reference at all, little/no idea what it's supposed to sound like and of course, unless you live your entire life in the same one room, the brain knows and actually expects there to be an almost infinite number of different acoustic environments, rather than one fixed/unchanging HRTF! So, with the experience/expectation that more than one single room acoustic naturally exists and not having any experience/reference to that particular room/acoustic environment (the mastering studio), we only have to get in the rough vicinity with room acoustics rather than match quite precisely an individual's HRTF. This isn't a difficult concept to grasp, it's pretty obvious and simple if you think about it logically for a couple of minutes AND, this isn't the first time it's been explained to you! So, how come you apparently don't know and are misrepresenting it (again)?

G


----------



## 71 dB

gregorio said:


> 1. At exactly what frequencies? Everyone's skull is different, everyone therefore has a somewhat different HRTF and crossfeeding doesn't account for a HRTF anyway, it's just a couple of parameters (no colouration for example) and even those few parameters are just generalised, not individualised. And, as already explained, it's not just about HRTF it's the combination of HRTF with the timing and colouration of the spatial information which is reaching our ears/skull.



Everyone's skull is different, but about the same size. Up to 800 Hz there is very little shadowing, 800-1600 Hz is a transition band, and above 1600 Hz there is strong shadowing. Crossfeed is much closer to HRTF than no crossfeed. If crossfeed makes a 2 dB error at say 800 Hz, no crossfeed makes a huge error by not leaking sound at all to the other ear. Crossfeed doesn't create 100 % perfect spatial information, but at least it tries, unlike no-crossfeed, which is often totally wrong in keeping the channel separation unnaturally large. It's almost comical how much you nitpick about the imperfections of crossfeed without realizing that not using crossfeed is much worse (has nothing to do with your or anyone's skull and HRTF).
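As a sanity check, my "natural spatiality" rule of thumb quoted earlier in the thread (ILD ⩽ 1 + (f/1000)^0.8 dB) can be evaluated at these band edges. A quick sketch; the formula is my own illustration, not a standard psychoacoustic model:

```python
def natural_ild_bound_db(f_hz: float) -> float:
    """Largest 'natural' interaural level difference at f_hz, in dB,
    per the rule of thumb ILD <= 1 + (f/1000)**0.8 dB (illustrative only)."""
    return 1.0 + (f_hz / 1000.0) ** 0.8

# The bound stays small in the low band and grows through the transition
# band: roughly 1.8 dB at 800 Hz and 2.5 dB at 1600 Hz.
for f in (200.0, 800.0, 1600.0, 8000.0):
    print(f"{f:6.0f} Hz -> {natural_ild_bound_db(f):4.1f} dB")
```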



gregorio said:


> 2. What do you mean "funny", it's simple common sense. You claim to be an expert but don't even understand the common sense of the situation, let alone the complexities! The explanation is simple and should be obvious, it's about our reference. Every moment of our lives we hear the world with our own personal HRTF, that's not just a reference but our ONLY reference, we never hear the world without a HRTF and never with any other HRTF than our own. Even a tiny deviation from our own personal HRTF is therefore likely to raise a red flag, even to the point of it not being recognised as a HRTF but just as some unrelated timing/colouration effect. Of course, everyone's perception is different and exactly how close we have to get to our own specific HRTF in order to fool our hearing (into believing it is our HRTF) will vary from person to person. This situation is entirely different to listening environment/acoustics, in fact pretty much the exact opposite! Consumers never hear the recording reproduced in the recording or master studio, they have no reference at all, little/no idea what it's supposed to sound like and of course, unless you live your entire life in the same one room, the brain knows and actually expects there to be an almost infinite number of different acoustic environments, rather than one fixed/unchanging HRTF! So, with the experience/expectation that more than one single room acoustic naturally exists and not having any experience/reference to that particular room/acoustic environment (the mastering studio), we only have to get in the rough vicinity with room acoustics rather than match quite precisely an individual's HRTF. This isn't a difficult concept to grasp, it's pretty obvious and simple if you think about it logically for a couple of minutes AND, this isn't the first time it's been explained to you! So, how come you apparently don't know and are misrepresenting it (again)?
> 
> G



I do understand and thanks to that understanding I know what are the best compromises. Listening to headphones without crossfeed takes our beloved HRTF from us. Crossfeed gives an approximation of it back so that things are not completely wrong, only a bit wrong. That's an improvement, in my opinion a significant one.


----------



## 71 dB

bigshot said:


> You can clearly hear a difference between LPs and CDs, yet some people claim LPs sound better. That’s a very common fallacy right there.



If music is produced so that it _expects_ vinyl distortions to be added later then it will probably sound better on vinyl. To me people who prefer headphone sound without crossfeed are not much different from people who prefer vinyl. The former like spatial distortion and the latter like vinyl distortions. I prefer a distortion-free reproduction chain, meaning I prefer CD and crossfeed with headphones.


----------



## gregorio (Feb 23, 2019)

71 dB said:


> [1] Everyone's skull is different, but about the same size. Up to 800 Hz very little shadowing, 800-1600 Hz is transition bandwidth and 1600 Hz up strong shadowing.
> [2] ... no crossfeed makes huge error by not leaking sound at all to the other ear.
> [3] Crossfeed doesn't create 100 % perfect spatial information, but at least it tries
> [4] unlike no-crossfeed, which is often totally wrong in keeping the channel separation unnaturally large.
> [5] It's almost comical how much you nitpick about the imperfections of crossfeed without realizing that not using crossfeed is much worse (has nothing to do with your or anyone's skull and HRTF).



1. You are contradicting yourself. If everyone's skull is different then you do not know what the error is at 800Hz, exactly where the transition band is or what values it has. And clearly, you are having trouble grasping the obvious/simple concept explained in my last post. The HRTF has to be quite close to our personal HRTF, because our personal HRTF is our only reference. Crossfeed is not any HRTF, so it is not close at all! However, as I also stated, we each have our own perception, so for those with particularly poor spatial awareness/sensitivity it might not have to be close, it's possible that crossfeed might be close enough to fool some people. However, this is the exact opposite of what you're claiming, you're apparently claiming that you are that easily fooled because you have "better" spatial awareness than those who aren't (who are "spatially ignorant"), which is completely backwards nonsense. You've repeatedly stated that you've educated your spatial awareness but seem completely oblivious to the fact that you've apparently educated it to be insensitive/unaware!

2. No, that statement is completely false/backwards! Not using crossfeed makes absolutely no error at all, it reproduces exactly what's in the recording. However, reproducing the recording with speakers (in a room) does "make huge error", the issue is therefore: What is the intention of the artists who created the music recording? Is it that it's reproduced: A. With that huge error, B. Without that huge error, C. With or without that error or D. With a very different sort of error? The answer is: Rarely B, even more rarely D, more commonly A and even more commonly C.

3. Again, you are contradicting yourself! You have already admitted that crossfeed does NOT create ANY spatial information but now you're saying that in fact it does try to create spatial information, although it's not 100% accurate. It should be obvious that crossfeed does not create any spatial information, it just feeds the spatial information that's already on the recording to the opposite channel/ear (and without the required colouration). As already explained, this is a very different sort of error, the different left/right timings of the early reflections in the recording are no longer different, because you are feeding the timing of the right channel reflections into the left ear (and without colouration). The error with speaker reproduction is very different, the different left/right timings of the early reflections in the recording are added to/exacerbated, rather than reduced/eliminated! Additionally, you have repeatedly stated that crossfeed does in fact "cure"/"fix" the issues of headphone presentation but now you seem to be saying that actually it doesn't "but at least it tries"?

4. You have no idea what the intention of the artists was, and therefore no idea if the channel separation is even slightly wrong, let alone "totally wrong". Again, the channel separation of, for example, _Gesang der Junglinge_ is intentionally supposed to be "unnaturally large", it is in fact one of, if not THE most important artistic intention of the entire composition/production, which you would try and reduce/remove! "Unnaturally large" is in fact a common artistic intention with many/most music productions, though not typically the most important one.

5. You are apparently insensitive to spatial information, falsely define and misrepresent it, completely ignore artistic intention, state that you know it's your perception but also argue it's objective fact (rather than your perception), contradict yourself in other ways all over the place and then insult those who don't share your insensitive perception and/or preferences as ignorant (and now "almost comical"). What you're doing isn't "almost" comical, it's completely comical!



71 dB said:


> [1] I do understand and thanks to that understanding I know what are the best compromises.
> [2] Listening to headphones without crossfeed takes our beloved HRTF from us. Crossfeed gives an approximation of it back so that things are not completely wrong, only a bit wrong.
> [3] That's an improvement, in my opinion a significant one.



1. Clearly that's false. If you did have an understanding then you'd realise that you cannot "know what are the best compromises". You do NOT know what others perceive, you do NOT know what others prefer and you do NOT know what the artists intended. In other words, you "know what are the best compromises" thanks to your apparent lack of sensitivity to spatial information and lack of understanding (ignorance)!

2. You've already (correctly) stated, just a few posts ago that "_Listener HRTF => headphone crossfeed doesn't have this_" but now you're trying to say that in fact headphone crossfeed does have this or at least an approximation of it.

3. No, that's not an improvement, it's a degradation, in my opinion sometimes/often a significant one! AGAIN though, the difference between us is that I recognise this is based on my perception, that others may have a different perception and/or preferences, therefore if they want to employ crossfeed it's entirely up to them and they are not necessarily an "ignoramus", a "stupid low IQ person", "ignorant", an "idiot" or "almost comical" for doing so!

G


----------



## ironmine

Gregorio, will you please PLEASE just do this:

Activate Redline Monitor 112dB with default settings, switch off the lights and lie down in total darkness with your headphones on your head. Relax and close your eyes, concentrate on your favorite music.

Soon, your mind will adjust and you will be carried away to another world.


----------



## gregorio

ironmine said:


> Gregorio, will you please PLEASE just do this:
> Activate Redline Monitor 112dB with default settings, switch off the lights and lie down in total darkness with your headphones on your head. Relax and close your eyes, concentrate on your favorite music.



Why would I want to do that again? I did close my eyes and concentrated on a number of pieces of music but I didn't turn the lights off and lay down, would that have made all the difference? I've tried various systems, hardware and software over the years, none of them really did it for me and to be honest, the Redline Monitor was one of the weakest, because it was only a crossfeed plugin, while some others employed more sophisticated technology, some form of HRTF + convolution.

Does the Redline Monitor actually sound "real" to you? Or is it just a different sort of presentation (from speakers and uncrossfed HPs) that you personally prefer?

G


----------



## 71 dB

gregorio said:


> 1. You are contradicting yourself. If everyone's skull is different then you do not know what the error is at 800Hz, exactly where the transition band is or what values it has. And clearly, you are having trouble grasping the obvious/simple concept explained in my last post. The HRTF has to be quite close to our personal HRTF, because our personal HRTF is our only reference. Crossfeed is not any HRTF, so it is not close at all! However, as I also stated, we each have our own perception, so for those with particularly poor spatial awareness/sensitivity it might not have to be close, it's possible that crossfeed might be close enough to fool some people. However, this is the exact opposite of what you're claiming, you're apparently claiming that you are that easily fooled because you have "better" spatial awareness than those who aren't (who are "spatially ignorant"), which is completely backwards nonsense. You've repeatedly stated that you've educated your spatial awareness but seem completely oblivious to the fact that you've apparently educated it to be insensitive/unaware!
> 
> G



Ok. You and I have different HRTFs. Your skull shadows 800 Hz by 5 dB and my smaller skull by only 4 dB. We both use a crossfeeder that crossfeeds the signal so that at 800 Hz the level is -6 dB. For you the error is 1 dB and for me 2 dB. Ok? Without crossfeed the error is enormous, maybe 30 dB, since the headphones leak some sound. Now I wait to see what you find wrong in this logic.
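For what it's worth, the arithmetic in the post above can be sketched in a few lines (a hypothetical illustration using the post's example numbers; `crossfeed_error_db` is not a real library function):

```python
def crossfeed_error_db(head_shadow_db, crossfeed_attenuation_db):
    """Absolute ILD error at one frequency: the gap between the attenuation
    the listener's own head shadow would produce and the fixed attenuation
    the crossfeeder applies to the opposite channel."""
    return abs(head_shadow_db - crossfeed_attenuation_db)

# Example figures from the post, for a fixed -6 dB crossfeed at 800 Hz:
print(crossfeed_error_db(5.0, 6.0))   # 5 dB head shadow -> 1.0 dB error
print(crossfeed_error_db(4.0, 6.0))   # 4 dB head shadow -> 2.0 dB error
# Without crossfeed the opposite channel isn't attenuated toward the far ear
# at all, so the error is the post's full ~30 dB estimate:
print(crossfeed_error_db(30.0, 0.0))  # -> 30.0 dB error
```

The point being argued either way in the thread is whether a 1-2 dB residual error at one frequency is "close enough" perceptually, which this arithmetic alone cannot settle.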


----------



## castleofargh

71 dB said:


> I don't get this "if you can't fix everything fix nothing" -mentality but that's me. I don't know about the other variables of actual playback, but to me fixing the problem of excessive spatiality improves things tremendously. There is no way back to non-crossfeed headphone listening for me.


I'm not saying to fix nothing. I'm saying that contrary to your clear belief, incremental modifications do not guarantee a subjective improvement. this is the core of our "dispute" and I've tried to push this idea across without much success so far. it's impossible to pick an appropriate analogy because nothing is going to be as complex and specific as this, so at least try to understand the principle of what we're talking about with this failure of an analogy: you have a black&white TV, and because you know how light works and how our vision works, you postulate that having a green filter would improve the accuracy of the TV and make it more realistic. the argument being that out of the various color channels, we are most sensitive to the green one, so having a picture dominated by green cues is objectively closer to what we see in real life. which is correct. but how many people will tell you that green TV is a clear improvement in realism over black&white?
again it's a crappy analogy, I'm sorry I can't think of something better. but that's how I see crossfeed. an incremental and simplified change toward a given reference that neglects many of the variables involved in psychoacoustics. it could feel better to some, and not to others, a slightly different change could be perceived as better by the latter group and not by the first, or it could be preferred by everybody, or nobody.
when you offer to add other incremental changes like a shaker, head tracking (meaning we'd have to involve some HRIR model for at least a few more angles), a more custom EQ, etc. each and every one of those increments may be what some need to do the trick and have their brain fooled. or at least give a subjective result they will enjoy more. but when exactly will it start working for a given listener? I don't know that. I know that most crossfeed settings can feel more relaxing to me when listening to some old music with a lot of content fully on a single channel. but with most albums in my library, most crossfeeds actually feel weird after a while. and when I get one with a bunch of customizable settings, I certainly can reach something that works better for me. I assume that happens because those settings have more in common with how I usually perceive sound. but that alone doesn't make things right. in fact my brain clearly gets enough cues that it's not right, because over time (after maybe 15 min), my brain puts the sounds back at 90°. meaning that out of all the cues, it decides to trust that the sound sources aren't in front of me. it could be that acoustically I'm convinced, but my eyes dominate my senses and I know I have no speakers in front of me. or maybe I know that in this room I should have a given reverb when I'm sitting at my desk? or maybe feeling the headphone pads on my head is what tells my brain to "see" sound like it is with headphones? probably a little bit of all of the above and then some more. one incremental change, even if it brings us objectively closer to a model for that variable, does not mean our global interpretation will become more realistic or preferred. this is what you don't seem to recognize by insisting that crossfeed is right. one thing alone is only one thing alone.
another personal example of changing one single variable and hoping for the best. when I put on my HD650 that I love for many audio and non audio reasons, mono signals are placed way up somewhere on my forehead. not the best experience. it's relatively easy to EQ my headphone so that mono sounds are close to eye level. if I use an actual speaker as reference it's really not too hard to achieve. so what about the result? well, if we dismiss everything but where I feel the sound is coming from with music in mono, it's a great improvement for me. one that agrees with an objective approach to the problem and also agrees with my subjective impressions. hooray!!!!
but wait, I'm not actually listening to only mono albums. so how does that new EQ work for my favorite albums? in short, horribly. I hate that signature and the experience. that EQ is probably very close to the impulse response captured at 0° right in front of me, but it's completely and obviously inappropriate for sound sources on the sides. and that too is supported both subjectively and objectively for me. but for someone else, the change in signature necessary for his mono to sit at 0° of altitude might be less annoying subjectively for typical stereo music. I don't know how another guy would react to his own mono EQ. my experience is only my experience. I can't even use my own EQ on other people, so properly testing this issue isn't easy. and if I started simulating other cues of a speaker experience, maybe the subjective results would change again (they probably would). maybe if I had more of the room reverb to tame the overall signature, or if I was to only apply that EQ to a simulated center channel, I would then like and "believe" the image I'd get in my head a lot more. our impressions are a pudding of cues, and the incremental change of one variable won't always make the whole pudding taste better. sometimes it will, sometimes it won't, sometimes one guy will prefer the stuff you hate, or like what you like but only after adding some more sugar.
this is what I mean to say in too many words. a complex system can be decomposed into basic elements and you can alter those elements in a way that improves them individually, but that does not always ensure a global improvement for all. and even when it does, you can't know if others feel as much of an improvement as you do. that's just what we have to deal with.




71 dB said:


> I am 48 years old and I am still clueless about what my place in the World is. What exactly should I be writing in a "To crossfeed or not to crossfeed?" -thread as someone who finds crossfeed very beneficial? Aren't you trying to turn this to a "To crossfeed or to Realiser A16?" -thread?


you can think whatever you like. you can say anything so long as it's within the forum's rules. facts will be facts and social interactions will be messy, that's life. and no, I'm not trying to kill crossfeed just to advertise the A16 (that's not even a product yet... and maybe never will be). I used that as an example to show how many more variables could/should be involved to give an approximation of "correct" spatial cues. because even then, gregorio wouldn't agree with any claim of correct spatial cues. you think we're conducting some anti-crossfeed or anti-@71dB offensive, but if you added 5 more variables to your best crossfeed settings that you'd objectively estimate to be closer in magnitude to a given speaker playback experience, we would still have mostly the same views. I'll say it once again, it has nothing to do with rejecting acoustics or even the psychoacoustics concerning ILD and ITD. and we really don't mind that you feel a solid improvement in your listening experience. I've been a solid advocate of pretty much anything that would turn headphone listening into something more speaker-like. I simply argue that crossfeed isn't the panacea you make it out to be. a more advanced system wouldn't just build over crossfeed by adding stuff, it would pretty much replace crossfeed almost entirely. the only thing that would remain untouched is the need to ultimately have 4 channels and mix them down to stereo with the delay imposed by head size and speaker angle, if stereo speaker simulation is the reference.

and I said to myself that I'd keep it short... bravo me!


----------



## 71 dB

ironmine said:


> Gregorio, will you please PLEASE just do this:
> 
> Activate 112dB Redline Monitor with default settings, switch off the lights and lie down in total darkness with your headphones on your head. Relax and close your eyes, concentrate on your favorite music.
> 
> Soon, your mind will adjust and you will be carried away to another world.



He is very stubborn.


----------



## 71 dB

gregorio said:


> You have no idea what the intention of the artists was, and therefore no idea if the channel separation is even slightly wrong, let alone "totally wrong". Again, the channel separation of, for example, _Gesang der Junglinge_ is intentionally supposed to be "unnaturally large", it is in fact one of, if not THE most important artistic intention of the entire composition/production, which you would try and reduce/remove! "Unnaturally large" is in fact a common artistic intention with many/most music productions, though not typically the most important one.
> 
> G



Why don't these artists give instructions on how to experience their intentions correctly? The lack of instructions makes me do the obvious: what sounds best is the way to go. I can imagine some artists like Stockhausen having intentions of weird spatiality, but most artists must want natural spatiality.


----------



## 71 dB

gregorio said:


> 1. Clearly that's false. If you did have an understanding then you'd realise that you cannot "know what are the best compromises". You do NOT know what others perceive, you do NOT know what others prefer and you do NOT know what the artists intended. In other words, you "know what are the best compromises" thanks to your apparent lack of sensitivity to spatial information and lack of understanding (ignorance)!
> 
> 2. You've already (correctly) stated, just a few posts ago that "_Listener HRTF => headphone crossfeed doesn't have this_" but now you're trying to say that in fact headphone crossfeed does have this or at least an approximation of it.
> 
> 3. No, that's not an improvement, it's a degradation, in my opinion sometimes/often a significant one! AGAIN though, the difference between us is that I recognise this is based on my perception, that others may have a different perception and/or preferences, therefore if they want to employ crossfeed it's entirely up to them and they are not necessarily an "ignoramus", a "stupid low IQ person", "ignorant", an "idiot" or "almost comical" for doing so!
> 
> G

1. Really? I don't care what you think about my understanding. You clearly don't even want to understand me, so does it even matter to you whether I understand or not? I have had some respect toward you because you know a lot of stuff, but the way you keep attacking me has caused me to stop respecting you. I have apologized for my own bad language, but that's not enough for you. What kind of person are you?

2. Listener HRTF here means the stuff that crossfeed doesn't have! Can't you even try to understand me? Crossfeed simulates roughly some aspects of HRTF.

3. I am done here. I give up.


----------



## 71 dB

It has been insanely frustrating to see that no matter how hard I try to explain why crossfeed makes sense, some people here don't get it, or don't want to get it. Crossfeed is not about an ultrarealistic "like listening to speakers" thing. It is about removing annoyance caused by excessive spatiality. Why do you even want your headphones to sound like speakers? As if speakers were perfection. Good maybe, but not perfect. I allow headphones to sound headphone-like, but without the annoyance/fatigue caused by excessive spatiality. I DON'T WANT large ILD at low frequencies in my ears! It sounds annoying to me! No matter what the artist intended! If an artist has something against my crossfeeding then maybe I stop listening to his/her annoying art! As a paying consumer I have the choice.

Everybody has an agenda. My agenda is to find purpose in my life.

I have been so frustrated and angry lately because of this forum. I don't want someone to hint that I haven't thought about spatiality for two minutes in my life when I have thought about these things for hours over the years! Of course I understand!

I feel I should never have registered here! I could have used all this time I have spent here on other things. Somehow I keep making these mistakes in life. Everything turns out so different from what you think, usually much worse, but not always. Some things are positive surprises, but coming here wasn't one.

That's how I feel. You may not see me here for a while...


----------



## bfreedma

71 dB said:


> It has been insanely frustrating to see that no matter how hard I try to explain why crossfeed makes sense, some people here don't get it, or don't want to get it. Crossfeed is not about an ultrarealistic "like listening to speakers" thing. It is about removing annoyance caused by excessive spatiality. Why do you even want your headphones to sound like speakers? As if speakers were perfection. Good maybe, but not perfect. I allow headphones to sound headphone-like, but without the annoyance/fatigue caused by excessive spatiality. I DON'T WANT large ILD at low frequencies in my ears! It sounds annoying to me! No matter what the artist intended! If an artist has something against my crossfeeding then maybe I stop listening to his/her annoying art! As a paying consumer I have the choice.
> 
> Everybody has an agenda. My agenda is to find purpose in my life.
> 
> ...




If you would simply couch crossfeed as your preference with headphones rather than a universal improvement for all headphone users and every recording, the discussion would be far less contentious and personally frustrating.


----------



## bigshot (Feb 23, 2019)

71 dB said:


> You may not see me here for a while...



I think that might be a good idea. It isn't good to keep hammering away on a single subject like this. It's like the guy with the turntable fetish, and the guy who wants to believe that every recording in the universe is hot mastered. It gets tiresome for the rest of us.

I don't see why everyone is arguing. Speakers are pretty clearly the best-sounding way to present music. They create a true soundstage and natural directionality and space around the music. It's the way real music in the real world sounds, and it's the way the people making the recording intended it to be heard.

But if you can't afford speakers or don't have the room, then you use headphones. Cool. And if you think crossfeed sounds better than headphones without, you use it. Nothing wrong with that. But that doesn't mean that crossfeed is magically reconstituting the space of a proper speaker system, and it doesn't mean blending two channels together is creating a true soundstage. All you're doing is minimizing the curse of having your speakers crammed up on top of your ears. You're minimizing channel separation, you aren't magically creating space where there is no space.

There's a whole lot of self validation in this hobby. I see people arguing all the time to justify their buying decisions or personal situations. If you ask someone to recommend a player or amp, 99 times out of 100, they will recommend the one they bought themselves... even if it is totally unsuitable for how the person asking for advice plans to use it. We're in a headphone forum, so there are thousands of people here posting every day to justify their own purchases... "My brand has no veil. Yours does." "You get blacker blacks with my headphones." Most of that is just self validating hooey. You would think all the cherries had been picked, but cherry picking goes on and on.

If you can't afford a speaker system, listen to headphones at home and listen to speakers at the house of a friend who can afford them. Just do what works for you. But don't stretch that into trying to convince yourself and others that your Yugo is the same as a Mercedes.

Oh God! Now I've made a car analogy! This silliness will never end now!


----------



## bigshot

71 dB said:


> If music is produced so that it _expects_ vinyl distortions to be added later then it will probably sound better on vinyl.



What? Are you saying that engineers intended for there to be noise and distortion? Should we add noise and distortion to legacy CDs because that’s the way they are supposed to sound? I think you’re grasping at straws. Go outside and soak up some sunshine.


----------



## Glmoneydawg

71 dB said:


> It has been insanely frustrating to see that no matter how hard I try to explain why crossfeed makes sense, some people here don't get it, or don't want to get it. Crossfeed is not about an ultrarealistic "like listening to speakers" thing. It is about removing annoyance caused by excessive spatiality. Why do you even want your headphones to sound like speakers? As if speakers were perfection. Good maybe, but not perfect. I allow headphones to sound headphone-like, but without the annoyance/fatigue caused by excessive spatiality. I DON'T WANT large ILD at low frequencies in my ears! It sounds annoying to me! No matter what the artist intended! If an artist has something against my crossfeeding then maybe I stop listening to his/her annoying art! As a paying consumer I have the choice.
> 
> Everybody has an agenda. My agenda is to find purpose in my life.
> 
> ...


I don't think anybody here wants to anger or frustrate you, my friend. But by the same token, you shouldn't expect everyone to agree with you in here...it's an internet forum lol....have a scotch and take a deep breath


----------



## ironmine (Feb 23, 2019)

gregorio said:


> Why would I want to do that again? I did close my eyes and concentrated on a number of pieces of music but I didn't turn the lights off and lay down, would that have made all the difference?



You need darkness and closed eyes, because it shuts down the biggest nerve going to our brain - the optic nerve from our eyes. In its absence, the brain begins to rely more on signals coming from other organs, i.e., the ears. When you don't see anything, your sense of hearing sharpens. Your brain will be forced to construct reality and orient itself in it by using sounds alone.

You need to lie down (on your back) because this posture signals the body to relax. Relaxing your neck muscles puts your body at ease and your mind will go into "the zone" where everything just flows smoothly, without a catch.

Ideally, you should drift in and out of sleep. This lucid condition on the edge of dreaming and waking, while listening to music via headphones and crossfeed, is very addictive. The stage, the space, the location of instruments, the feeling of being there, become so believable and convincing in this moment. Once tried and experienced, you may want to come back to it all the time. All other listening conditions just pale in comparison.




gregorio said:


> Does the Redline Monitor actually sound "real" to you? Or is it just a different sort of presentation (from speakers and uncrossfed HPs) that you personally prefer?
> G



Redline does not equalize the sound, and for this reason it retains most details. It does not pretend to be speakers. After trying so many crossfeeds, I came to the conclusion that complicating the crossfeed with HRTF, speaker or room emulation is just not necessary; it just results in the loss of detail, and the sound becomes muddy, less transparent, or the bass becomes boomy. So it's not worth it: it does not make the stage more believable and does not sound good compared to simpler crossfeed implementations. Probably because it is all so individual that it's useless to try to imitate it in all these HRTF specifics. The simple basic crossfeed functions are enough and work best.
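As a reference point, the kind of "simple basic crossfeed" being praised here (a delayed, lowpass-filtered, attenuated copy of each channel mixed into the other, as in the bs2b-style recipe quoted at the start of the thread) can be sketched roughly like this. This is a hypothetical illustration, not any particular plugin's algorithm; the delay, attenuation and cutoff values are merely in the ballpark of figures mentioned in the thread:

```python
import math

def simple_crossfeed(left, right, sample_rate=44100,
                     delay_ms=0.3, attenuation_db=12.0, cutoff_hz=700.0):
    """Mix a delayed, lowpass-filtered, attenuated copy of each channel
    into the opposite channel -- the basic crossfeed recipe."""
    delay = int(sample_rate * delay_ms / 1000.0)      # inter-aural delay in samples
    gain = 10.0 ** (-attenuation_db / 20.0)           # e.g. -12 dB -> ~0.251
    # one-pole lowpass coefficient, crudely mimicking head shadowing
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

    def bleed(src):
        out, state = [], 0.0
        for x in src:
            state += alpha * (x - state)              # lowpass the bleed signal
            out.append(state)
        return [0.0] * delay + out[:len(src) - delay]  # delay it, keep length

    lf, rf = bleed(left), bleed(right)
    out_l = [l + gain * x for l, x in zip(left, rf)]
    out_r = [r + gain * x for r, x in zip(right, lf)]
    return out_l, out_r

# A hard-left click leaks into the right channel, delayed and attenuated:
left = [1.0] + [0.0] * 99
right = [0.0] * 100
out_l, out_r = simple_crossfeed(left, right)
```

With these illustrative settings, a sound panned hard to one side still reaches the far ear about 0.3 ms later, darker and roughly 12 dB quieter, which is the whole trick; everything beyond that (room, distance, HRTF detail) is what the fancier plugins add.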

I should say that I also like Meier crossfeed very much. MathAudio Headphone EQ is also ok.

Yesterday I tried Waves Panorama - it just sucks. And so do Auburn Sound Panagement, Beyerdynamics VS, DearVR pro, Sennheiser Ambeo Orbit, CanOpener, Waves NX, TB Isone, HEAR, etc. etc.


----------



## bigshot

When you work in production sound, you aren't supposed to fall asleep at the board. You have to learn to analyze sound alert and with your eyes open. When I'm half asleep anything sounds fine.


----------



## ironmine

bigshot said:


> When you work in production sound, you aren't supposed to fall asleep at the board. You have to learn to analyze sound alert and with your eyes open. When I'm half asleep anything sounds fine.



I am an audiophile, I listen to music for joy. And my advice is not for sound producers.


----------



## castleofargh

ironmine said:


> I should say that I also like Meier crossfeed very much. MathAudio Headphone EQ is also ok.
> 
> Yesterday I tried Waves Panorama - it just sucks. And so do Auburn Sound Panagement, Beyerdynamics VS, DearVR pro, Sennheiser Ambeo Orbit, CanOpener, Waves NX, TB Isone, HEAR, etc. etc.


the important aspect here is that it's your impression of those apps. many people have been enjoying TB isone or CanOpener or other relatively old products for years. so either they never knew they could get even better results with the ones you find good, or simply those apps offer something more in line with their own preferences and subjective impression of what feels "natural". I won't bet on which is the correct answer, but given the variety of people, music and headphones, I wouldn't be surprised if there was a little bit of both, and possibly some who think Meier's crossfeed isn't right for them.


----------



## bigshot (Feb 24, 2019)

ironmine said:


> I am an audiophile, I listen to music for joy. And my advice is not for sound producers.



Musicians are the ones that create your joy. Sound producers are the ones who package your joy up on little shiny discs for you and make sure they sound the way they should. If you want to talk about music, listen to a musician. If you want to talk about sound fidelity, speak to an engineer. Musical joy and sound fidelity are two entirely different things. You shouldn't mix them up, because there are too many snake oil salesmen who will try to confuse the two to make you buy a product that makes absolutely no difference to sound quality. I see people online attributing flowery romantic notions to wires and solid state circuits every day, and suckers fall for it. Caveat emptor. Emotion is one of the first things a snake oil salesman will appeal to... fear is the emotion they prey on the most.

Anything that the general public can use to take the curse off the sound of headphones is a good thing, but none of this is comparable to the proper presentation of speakers. That's why professional sound engineers use speakers to mix and master, not headphones. I am interested to see what the Realiser brings to the table, but from my experience with virtual reality, I know that there is always a difference between virtual and real.

Besides, it's easier to lay on the couch and doze when you don't have headphones over your ears.


----------



## ironmine

112 dB Redline Monitor has an interesting feature: 

When you turn the Distance knob all the way to the left (0 m), the knob becomes red, the speaker distance is *0 m*. This is the most transparent mode, but the sound is more "in your head" and "around the listener".

From this position, when you turn the Distance knob just as little as possible to the right, the speaker distance still shows *0.0 m*, but the knob becomes grey and the sound presentation changes. The sound is no longer "around the listener"; the sources of the sounds move forward, and it feels like you stepped backwards off the stage from where you stood before, surrounded by musicians.


----------



## bigshot (Feb 24, 2019)

What does the documentation say? Does it actually call the dials what they do, or does it just refer to them vaguely as "distance" and "soundstage"? I'm not familiar with this plugin, but it sounds like it's messing around with time and creating phase and subtle reverb effects. When I was doing 78 record restoration they had stuff like this to create artificial stereo. It sounded good at first, but after a while I kept hearing the effect over the top of the music. I started dialing it back bit by bit as it bothered me, and eventually I was back to plain old mono. I have problems with artificial ambiences unless it involves multichannel surround, but if you like that effect it's good. Who knows, maybe someday I will get uncomfortable with multichannel ambiences too.

There is something interesting I've noticed with 5.1... when ambiences change drastically in a movie, it seems natural because the image on the screen reinforces it. But when I hear sudden ambience shifts in the middle of a song, it irritates me, because my mind is trying to interpret the cues to place the sound sources in space. When it shifts, it's like shaking the Etch A Sketch and starting over again. I bet when you use this software, it sounds better if you find a good overall level and leave it there.


----------



## gregorio

71 dB said:


> [1] Why don't these artists give instructions how to experience their intentions correctly?
> [2] The lack of instructions makes me do the obvious: What sounds best is the way to go.
> [3] I can imagine some artists like Stockhausen having intentions of weird spatiality, but most artists must want natural spatiality.



1. Because we have Mastering, which is the process by which the final mix is altered so that the artists' intentions can be experienced correctly on a wide range of consumer playback equipment/environments.

2. No, that's not the obvious thing. The obvious thing is just to load your CD, LP or audio file into your playback equipment and press play, and indeed that's exactly what the vast majority of listeners already do, therefore they don't need instructions! When you instead go for "what sounds best" you run into all sorts of issues, such as: What does "sounds best" mean, "sounds best" to whom, and what is the actual intention? Clearly "sounds best" is a personal perception and/or preference and is commonly contrary to the artists' intention. You personally find certain aspects of spatiality (width, separation and ILD for example) "excessive" and therefore what "sounds best" to YOU means reducing those; other audiophiles do the exact opposite and prize HPs which widen and separate the mix even more. Some are bass-heads and what "sounds best" to them is sometimes so extreme it just sounds like an annoying muddy mess to me. For others, adding distortion, surface noise and/or a host of other errors "sounds best" to them. And almost all audiophiles spend significant amounts of money on what "sounds best" even though there's no actual audible difference. You know all this, you yourself have argued against it on many occasions, and yet when it comes to this specific issue you do exactly what so many audiophiles do and assume that what "sounds best" to you is factually "best" and therefore anyone who disagrees is factually wrong/ignorant.

3. Why must most artists want natural spatiality? What do you know of music production and what artists want? If you knew just a little about both of these then you wouldn't have created an assumption that completely contradicts the actual facts! The actual fact is that virtually no artists want "natural spatiality"! Even in cases like a recording of an acoustic classical symphony orchestra, no one wants "natural spatiality", what everyone wants is completely unnatural spatiality that creates an illusion of being some idealised natural spatiality that never existed at ANY audience listening position. With rock and all other popular genres the situation is far more extreme, there's not even the concept of an idealised natural spatiality, it's a mixture of a bunch of completely disparate simultaneous spatialities, which could not possibly exist in the real/natural physical world and is therefore by definition not "natural spatiality". Even just within a drumkit there is completely unnatural spatiality; typically the snare drum will have the spatiality of a very reverberant small, medium or even large room, while the kick drum will typically have the spatiality of a dry room/very little reverberation. The lead vocal will typically sound quite close/present with a different spatial environment again and an electric guitar virtually always has some echo/delay effect typically producing the sort of spatiality encountered in say an arena. These (and various other) extremely different spatialities all exist at the same time in a music mix and in fact, it's hard to imagine "spatiality" that could be more weird/unnatural!! Although, it's generally not intended to be as obvious as Stockhausen's weird/unnatural spatiality. Nevertheless, it's still quite obvious and easy to identify/recognise with just some fairly rudimentary listening skills. So, why can't you recognise it? 
The only rational explanation is that you lack those "fairly rudimentary listening skills" but of course, you are claiming the exact opposite, that you have superior spatial awareness. Furthermore, this weird/unnatural spatiality made its first really significant appearance in popular music with Phil Spector's "wall of sound" in the late 1950s and became a fundamental part of many of the most successful pop music productions in the 1960s (The Beach Boys and various others); then in the mid 1960s, with Sgt. Pepper, the Beatles employed really weird/unnatural spatiality (which incidentally was directly inspired by Stockhausen!) and this became a fundamental part of ALL the rock/popular genres that followed. In other words, as this weird/unnatural spatiality became a FUNDAMENTAL part of rock and popular music genres from the 1960s onwards, without it you don't fundamentally have those genres! What you are essentially suggesting therefore is that "most artists must want" to create the music/music productions of the mid 1950s or earlier, which of course is ludicrous!


71 dB said:


> 1. You don't clearly even want to understand me, so does it even matter to you whether I understand or not?
> [1a] I have had some respect toward you because you know a lot of stuff, but the way you keep attacking me has caused me to stop respecting you.
> [1b] I have apologized for my own bad language, but that's not enough for you.
> [1c] What kind of person are you?
> ...


1. How do you know I don't understand you? You think that because I disagree with you, therefore I don't understand you? And no, it doesn't really matter whether you understand or not, except/unless you post misrepresentations of the facts in this sub-forum and use your erroneous understanding to explain/justify them.
1a. All I've done is thrown YOUR OWN attacks back at you, so why haven't they "caused you to stop respecting" yourself?
1b. Of course that's not enough for me, it's not enough for anyone. In addition to apologising, you ALSO have to stop doing what you're apologising for! Some of the insults I quoted are from over a year ago, some from the last couple of weeks and one from just two days ago. Your apologies count for absolutely nothing if you keep doing it!
1c. What kind of person are you? What kind of person apologises for doing something, carries on doing it and then complains that their apology is not being accepted?

2. The term "HRTF" already has a precise meaning, you using it with an apparently different meaning pretty much precludes me from understanding you!
2a. And why is that better? Clearly you think/perceive it's better but "simulating roughly some aspects" does not necessarily actually make it better, it's just as likely, if not more likely, to make it worse. If I have a bicycle but want a car, I could try and "simulate roughly some aspects of" a car. For example, I could take the bicycle's wheels off, weld on a couple of axles, fit 4 car wheels and change the handlebars for a steering wheel. It now "simulates roughly some aspects" of a car but of course it's still absolutely nothing like a car, I haven't "fixed" or "cured" anything AND, in the process, I've also ruined my bike!! How is that better? It's worse, not better.

3. That would be wise. Wiser still would be to stop believing that your perceptions and preferences are objective facts, because then you'd stop trying to desperately validate that belief with irrelevant, erroneously assumed or simply made-up false facts!! 


71 dB said:


> [1] It has been insanely frustrating to see that no matter how hard I try to explain why crossfeed makes sense some people here don't get it, or don't want to get it.
> [2] Crossfeed is not about an ultrarealistic "like listening to speakers" thing. It is removing annoyance caused by excessive spatiality.
> [2a] I DON'T WANT large ILD at low frequencies to my ears! It sounds annoying to me! No matter what the artist intended! If an artist has something against my crossfeeding then maybe I stop listening to his/her annoying art! As a paying consumer I have the choice.


1. No matter "how hard you try and explain why crossfeed makes sense", I will never get it because it doesn't make sense! I understand how/why it might sound subjectively better to some people but the only way it could actually "make sense" is if we ignore (or discount) a number of other pertinent facts. It's the same as someone who believes vinyl is better than CD: If we ignore (or discount) surface noise, tracking and other issues and prefer weaker HF response, then vinyl does sound better than CD and their assertion "makes sense".

2. What annoyance caused by what excessive spatiality? Firstly, "excessive spatiality" is a term you've just made up, which you've described as unnatural spatiality but then pretty much all music productions from the late 1960's onwards intentionally have very unnatural spatiality, on both speakers and headphones. So you are trying to remove something which is supposed to be there! Secondly, even in extreme cases where it can be a bit annoying (unpleasant or whatever you want to call it), how do you know it's not supposed to be annoying? The whole history of western music for the last 600 years or so is based on being annoying, on creating tension/discord/annoyance and then resolving it (or not resolving it as composition evolved in the 1900's). Ironically, some discords/annoyances were actually made illegal! You say you've been studying music theory and now understand it but you don't seem to know even the fundamental basics of it. Whether we perceive these annoyances as acceptable (and maybe even desirable) or "excessive"/unacceptable is down to personal perception/preferences. For this reason, not everyone likes Wagner and fewer still like Schoenberg but were they "ignorant" and "lunatics" because they created music that some people found "excessive"/unacceptable or would it be ignorant to "remove that annoyance" because we personally find it "excessive"?
2a. This isn't the "What 71dB DOESN'T WANT" sub-forum and no music creator in history would submit to what you think is right/proper (unless you are Hitler, Stalin or some other all powerful dictator). As a paying consumer you do have the choice to playback recordings however you want and to ignore the artists' intentions or stop listening to it. However, the obvious/simple fact you can't seem to grasp is that you are talking about you! Listen to what you yourself have written: "*I* don't want", "to *my* ears", (excessively) "annoying to *me*" and that *you* don't care "what the artist intended". This is ALL YOUR perception, YOUR preference and YOUR choice but, it is ABSOLUTELY NOT your choice to state that YOUR perceptions/preferences are objective facts which everyone else should/must also choose or be an "ignoramus"!! It's been well over a year, how long can you continue ignoring this "obvious/simple fact"? If, as you state, you want to be respected for your education and being a "smart person" the solution is simple, you have to start acting like an educated, smart person and stop ignoring this "simple fact"!!

G


----------



## 71 dB

No Gregorio, you don't understand me. This vent of yours is a testimony to that.

Speakers in a room give very different spatiality than headphones. This is a fact. Speaker listening functions as an ILD compressor: Mono recordings get some spatiality because of the room acoustics and ping pongy stereo recordings are tamed. ILD levels are almost the same for all recordings.

Which one respects the intent of the artist?

Proper crossfeed acts as an ILD compressor: Recordings with low enough ILD levels are listened to with crossfeed OFF, as they are. Yes, I do listen to _some_ recordings without crossfeed, because there's nothing to fix. Don't fix what isn't broken and all that. Ping pongy recordings are tamed by crossfeeding them hard. The result is that the ILD levels are similar for all recordings, just as they are in speaker listening.

I'd be surprised if a main artistic intent wasn't to give people listening enjoyment. I have a hard time understanding why compressing ILD with crossfeed to levels similar to speaker listening is against artistic intent, especially when doing so tends to increase my enjoyment dramatically.

Liking Wagner is not a result of physiology. Spatial hearing is tied to physiology. ILD means what it means because of our physiology, something we can't escape.

I'm not _prohibiting_ any artist from using excessive spatiality. I have no power to do that. I'm just saying that using excessive spatiality is kind of like designing not-so-ergonomic chairs. It's against our physiology.

Natural spatiality allows "completely disparate simultaneous spatialities" as long as the ILD levels are natural. The lack of excessive spatiality prevents the hearing system from creating spatial distortion while desperately trying to decode the spatiality. When spatial distortion doesn't exist, all those disparate simultaneous spatialities can be decoded. Our brain is very good at stuff like that, but it requires that things tied to physiology aren't violated. The birth of rock music (which is basically blues with distorted guitars) happened around the same time as stereophony arrived in commercial music. Weird spatiality has more to do with the novelty factor of stereo and naive use of technological possibilities than the aesthetics of rock music. People listened to those rock/pop songs on their mono transistor radios on the beach!
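The "ILD compressor" framing used in this post can be sketched numerically. Below is a minimal, hypothetical toy model (not any specific plugin): it adds a low-pass-filtered, attenuated copy of the opposite channel to each side, Meier-style, and measures how much the inter-channel level difference of a hard-panned tone shrinks. The 650 Hz corner and -9.5 dB attenuation are assumed example values, and the inter-channel delay that real crossfeeders add is ignored here.

```python
import numpy as np

def one_pole_lowpass(x, fc, fs):
    # First-order (6 dB/oct) low-pass, the filter shape typical
    # crossfeeders apply to the opposite-channel signal.
    a = np.exp(-2 * np.pi * fc / fs)
    y = np.empty_like(x)
    acc = 0.0
    for n, s in enumerate(x):
        acc = (1 - a) * s + a * acc
        y[n] = acc
    return y

def crossfeed(left, right, fs, fc=650.0, gain_db=-9.5):
    # Add a low-passed, attenuated copy of the opposite channel
    # to each channel (delay is ignored in this toy model).
    g = 10 ** (gain_db / 20)
    return (left + g * one_pole_lowpass(right, fc, fs),
            right + g * one_pole_lowpass(left, fc, fs))

def level_diff_db(left, right):
    # Inter-channel RMS level difference, a rough stand-in for ILD.
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms(left) / max(rms(right), 1e-12))

fs = 44100
t = np.arange(fs // 10) / fs
tone = np.sin(2 * np.pi * 200 * t)   # 200 Hz tone, hard-panned left
left, right = tone, 0.001 * tone     # ~60 dB inter-channel difference

out_l, out_r = crossfeed(left, right, fs)
print(f"before: {level_diff_db(left, right):.1f} dB")
print(f"after:  {level_diff_db(out_l, out_r):.1f} dB")
```

With these assumed settings the ~60 dB difference of a hard-panned low-frequency tone collapses to roughly 10 dB, which is the "compression" effect being described; whether that is desirable is exactly what this thread is arguing about.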


----------



## bigshot (Feb 26, 2019)

The spatiality of speakers comes primarily from the room. You can't have space around the sound without space, and the room in a speaker setup is every bit as important to the overall sound presentation as the equipment you put in it. Soundstage is 100% dependent on the room. You don't get soundstage without physical space. You can synthesize lowering the channel separation with crossfeed, and introduce timing shifts like reverb to synthesize some of the aspects of space, but you can't fake directionality of sound and that is a big part of the presentation of a speaker system. Sound doesn't just come from the speakers themselves, it comes from reflections off the floor, walls and roof. That directionality is what creates space and focuses the soundstage.

Headphones are speakers placed right over the ears with no physical space involved. Crossfeed can take the curse off of excessive channel separation. But it can't create "natural spatiality". That is pretty much self evident to anyone with two ears who has heard the difference between speakers in a room and cans strapped to your noggin.

Most of the time, the mixing of rock music makes no attempt to reproduce any kind of "natural spatiality". There's nothing wrong with that. Popular music is mixed to create an optimal presentation, not a realistic one. The sound is optimized by monitoring the music with speakers in a room, and the proportions of a proper speaker installation in a room are standardized to keep the presentation relatively consistent in different homes. That consistency is what creates soundstage.

Crossfeed isn't what is against the "artist's intent" (if you want to call it that). Listening to popular music _*on headphones*_ is the reason why you aren't hearing it the way it was created or intended to be heard. Some people have no choice but to listen to music in less than optimal ways. They have no space or budget for a speaker system, so they make compromises by listening with headphones. Nothing wrong with that. Play the cards you're dealt and make do. If you want to tweak it a bit with crossfeed to take some of the curse off your compromise, that is fine. But it isn't restoring "natural spatiality" that only exists in a physical listening room, and it isn't at all a fit substitute for a proper speaker system.


----------



## 71 dB

bigshot said:


> 1. The spatiality of speakers comes primarily from the room. You can't have space around the sound without space, and the room in a speaker setup is every bit as important to the overall sound presentation as the equipment you put in it. Soundstage is 100% dependent on the room. You don't get soundstage without physical space. You can synthesize lowering the channel separation with crossfeed, and introduce timing shifts like reverb to synthesize some of the aspects of space, but you can't fake directionality of sound and that is a big part of the presentation of a speaker system. Sound doesn't just come from the speakers themselves, it comes from reflections off the floor, walls and roof. That directionality is what creates space and focuses the soundstage.
> 
> Headphones are speakers placed right over the ears with no physical space involved. Crossfeed can take the curse off of excessive channel separation. But it can't create "natural spatiality". That is pretty much self evident to anyone with two ears who has heard the difference between speakers in a room and cans strapped to your noggin.
> 
> ...



The "space" is heard because the space creates spatial cues. If we manufacture those same cues without space the hearing is fooled. This is evident when listening to binaural recordings, for example a Jecklin disk recording. Of course most recordings are not binaural, but they content spatial information depending on the microphone setup and the venue where the recording was done.

I have speakers, even a 5-channel system, but most of my listening is done with headphones with crossfeed. I wonder why, if speakers are so superior…


----------



## sonitus mirus (Feb 26, 2019)

I listen with speakers almost exclusively.  For headphone listening, I prefer the Jan Meier emulation as this approximates what I hear with my speakers, especially with hard-panned early stereo recordings and mono.

My DAC has five settings available that adjust the filter frequency and the amount of crossfeed. The filter can either be 650Hz or 700Hz (not sure about the Q factor) with the dB amount ranging from a slight -13 dB up to -3 dB. (Jan Meier is 650 Hz, -9.5 dB) My headphone choice for listening is about making it sound like I hear music through speakers (2), as this is what I am used to and have come to enjoy. Without any crossfeed enabled, some music makes it feel like my sinuses are congested. Admittedly, this is quite rare and nothing to be concerned about for the most part. It was a feature I could live without, but I do enjoy having the option.

Edit:  If I was not clear, I do not use crossfeed when listening via speakers, only with headphones.


----------



## bigshot (Feb 26, 2019)

I have yet to hear anything with headphones that sounds like speakers. I've heard binaural recordings. They sound odd and "present" sort of, but nothing like real speakers in a real room. I'll keep an open mind about the Realiser, but to be honest, I'm not expecting much.

Here's a little story that relates to this... I was invited over to my brother's house for Thanksgiving. My niece is a vegetarian and she was all excited about the new kind of vegetarian turkey she had brought. She told me, "It tastes just like real turkey!" She handed me a slice and I took a bite. It was like slightly salty rubber. She asked me what I thought of it and I smiled and stammered. My brother rescued me by saying, "Don't worry, I made real turkey too." My niece told me there was plenty of her vegetarian turkey if I wanted it and walked away. I turned to my brother and said, "She thinks that tastes just like real turkey." My brother replied, "She hasn't eaten real turkey in over a decade."

I think that's the way it is with speakers and headphones too.


----------



## sonitus mirus

bigshot said:


> I have yet to hear anything with headphones that sounds like speakers. I've heard binaural recordings. They sound odd and "present" sort of, but nothing like real speakers in a real room. I'll keep an open mind about the Realiser, but to be honest, I'm not expecting much.
> 
> Here's a little story that relates to this... I was invited over to my brother's house for Thanksgiving. My niece is a vegetarian and she was all excited about the new kind of vegetarian turkey she had brought. She told me, "It tastes just like real turkey!" She handed me a slice and I took a bite. It was like slightly salty rubber. She asked me what I thought of it and I smiled and stammered. My brother rescued me by saying, "Don't worry, I made real turkey too." My niece told me there was plenty of her vegetarian turkey if I wanted it and walked away. I turned to my brother and said, "She thinks that tastes just like real turkey." My brother replied, "She hasn't eaten real turkey in over a decade."
> 
> I think that's the way it is with speakers and headphones too.



I can believe that, but I have had some turkey sausage that wasn't too bad.  I listen with speakers almost exclusively, and headphones are used only if I am testing something, which I don't find to be necessary anymore.


----------



## Davesrose (Feb 26, 2019)

71 dB said:


> I have speakers, even a 5-channel system, but most of my listening is done with headphones with crossfeed. I wonder why, if speakers are so superior…



I have sets of headphones I like using at work to listen to music. I can get nice details and dynamics, and the sound seems lively. I have tried some crossfeed and headphone surround schemes, but I've never heard a solution that sounds like a theater speaker system. I have one quadraphonic recording of Bach fugues in a cathedral with 4 organs. You do clearly hear instruments behind you. I also have quite a few blu-ray concerts that have nice lively ambiance in surround sound. You also can get a visceral sense of bass with a subwoofer (and I have quite a few live concerts that seem to actively use my subwoofer). I recently upgraded to 11.1 Atmos/DTS:X/Auro-3D surround. The Live Aid reconstruction in Bohemian Rhapsody does seem pretty life-like in Atmos.

So to me, there isn't one system that's always going to be inherently better for all things music. But, I also haven't heard a headphone scheme that reproduces all the "spatiality" of speakers (which oftentimes is not just trying to present instruments spread out in front of you).


----------



## 71 dB

sonitus mirus said:


> I listen with speakers almost exclusively.  For headphone listening, I prefer the Jan Meier emulation as this approximates what I hear with my speakers, especially with hard-panned early stereo recordings and mono.
> 
> My DAC has five settings available that adjust the filter frequency and the amount of crossfeed. The filter can either be 650Hz or 700Hz (*not sure about the Q factor*) with the dB amount ranging from a slight -13 dB up to -3 dB. (Jan Meier is 650 Hz, -9.5 dB) My headphone choice for listening is about making it sound like I hear music through speakers (2), as this is what I am used to and have come to enjoy. Without any crossfeed enabled, some music makes it feel like my sinuses are congested. Admittedly, this is quite rare and nothing to be concerned about for the most part. It was a feature I could live without, but I do enjoy having the option.
> 
> Edit:  If I was not clear,* I do not use crossfeed when listening via speakers*, only with headphones.



Crossfeeders use a first-order Butterworth low-pass filter (Q = 0.707 is the Butterworth alignment, though strictly speaking a first-order filter has no Q factor).
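For reference, the behaviour of that filter shape is easy to check. A small sketch, assuming the analog first-order low-pass prototype H(f) = 1/(1 + jf/fc) with the 650 Hz corner discussed here (a DAC's actual digital implementation may differ):

```python
import numpy as np

def first_order_lowpass_db(f, fc=650.0):
    # Magnitude (dB) of the analog first-order low-pass prototype
    # H(f) = 1 / (1 + j f/fc): -3 dB at fc, 6 dB/octave roll-off above.
    h = 1.0 / (1.0 + 1j * np.asarray(f) / fc)
    return 20 * np.log10(np.abs(h))

for f in (65.0, 650.0, 6500.0, 13000.0):
    print(f"{f:7.0f} Hz: {first_order_lowpass_db(f):7.2f} dB")
```

This evaluates to about -3.01 dB at the 650 Hz corner, and comparing 6500 Hz against 13000 Hz shows the gentle 6 dB/octave slope, which is why the crossfed signal still contains substantial (if attenuated) high-frequency content.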

Crossfeeding makes no sense with speakers. There's acoustic crossfeed with speakers, so also using electrical crossfeed means the signal is crossfed twice and the result is very bad: mono-like and dull. I didn't think you would use crossfeed with speakers. Just making clear for all readers here that crossfeed with speakers is not a thing. Crossfeeders are to be used with headphones, because acoustic crossfeed doesn't exist in that listening scenario.


----------



## Sonic Defender

Personally, I like the unique presentation of headphones and find the experience while quite different from speaker listening, nonetheless very engaging. I also love my speaker system and as somebody who developed their love for music through speaker rigs back in the very beginning of the 1980s that is where my truest music listening joy is found. Saying that, I have no desire to force headphones to sound like speakers, headphones for me offer their own rewards that I highly value. I suppose I wouldn't be against using well done (based on my personal preferences) software cross-feed, but it isn't a necessity for me either. Still, the mind is like a parachute, it only functions when it is open so I will try to keep an open mind.


----------



## sonitus mirus

Sonic Defender said:


> Personally, I like the unique presentation of headphones and find the experience while quite different from speaker listening, nonetheless very engaging. I also love my speaker system and as somebody who developed their love for music through speaker rigs back in the very beginning of the 1980s that is where my truest music listening joy is found. Saying that, I have no desire to force headphones to sound like speakers, headphones for me offer their own rewards that I highly value. I suppose I wouldn't be against using well done (based on my personal preferences) software cross-feed, but it isn't a necessity for me either. Still, the mind is like a parachute, it only functions when it is open so I will try to keep an open mind.


I'm not trying to force headphones to sound like speakers.  Speaker listening is my turkey sandwich that I am used to and enjoy.  Consequently, in my opinion, the headphones that I like the best have characteristics that sound closer to speakers.  I tend to prefer bass-heavy cans in a futile attempt to recreate what I experience from large floor speakers.

I’m going to find a place to use your parachute analogy today.  That’s a great line.


----------



## 71 dB

Sometimes I like to listen to speakers, but since I listen to headphones so much these days I tend to feel speaker sound is "distant" (10 feet away!) and want it to come closer (1-3 feet from me) to resemble the miniature soundstage of headphones. I am also annoyed by how speaker sound changes when I move my head, since headphone sound is 100 % steady no matter how I move. Also, low bass frequencies make doors and windows resonate, causing irritating noise. Speakers have their place. Playing loud makes me feel the soundwaves and of course you don't need to wear headphones, which is very nice. Also, excessive spatiality is not a thing with speakers - no hassle with proper crossfeed settings!


----------



## gregorio

71 dB said:


> [1] No Gregorio, you don't understand me. This vent of yours is a testimony to that.
> [2] Speakers in a room give very different spatiality than headphones. This is a fact.
> [3] Speaker listening functions as an ILD compressor: [3a] Mono recordings get some spatiality because of the room acoustics and ping pongy stereo recordings are tamed. ILD levels are almost the same for all recordings.
> [4] Which one respects the intent of the artist?
> ...



1. I do understand you, I understand that you are making nonsense assertions to support your erroneous belief. Your post is a testimony to that because it's almost entirely nonsense!

2. You think maybe I don't know that?

3. That's clearly nonsense, speaker listening does NOT "function as an ILD compressor"! Listening with speakers adds the colouration of the speakers, plus the acoustics ("spatial cues") of the listening environment, and those "spatial cues" comprise a bunch of early reflections, reverb, colouration of all those reflections/reverb and frequency response changes due to phase summing and cancellation. Again, you claim to be an expert in spatiality but you don't even understand (or just ignore) introductory level spatiality/acoustics.
3a. This too is utter nonsense. Mono recordings "get some spatiality" because they contain spatiality (a great deal of "spatial cues")! They contain early reflections, reverb, colouration of those reflections, etc. Mono recordings are effectively identical to stereo recordings as far as spatiality is concerned, with just one difference, mono recordings reproduce those spatial cues in mono.

4. I've already answered that question: "_Not using crossfeed makes absolutely no error at all, it reproduces exactly what's in the recording. However, reproducing the recording with speakers (in a room) does "make huge error", the issue is therefore: What is the intention of the artists who created the music recording? Is it that it's reproduced: A. With that huge error, B. Without that huge error, C. With or without that error or D. With a very different sort of error? The answer is: Rarely B, even more rarely D, more commonly A and even more commonly C._"

5. Again, this statement is false! Crossfeed acts as crossfeed, if it only acted as "an ILD compressor" then it would be called an ILD compressor and not crossfeed! Crossfeed crossfeeds everything, all the spatial cues, NOT only ILD!

6. Again, you are contradicting yourself. YOU define "ping pongy" as "broken" because YOU don't like it but "Ping pongy" recordings are NOT broken, a "ping pongy" effect is added deliberately/intentionally, it can't be added accidentally. So, you are in fact trying to "fix what isn't broken"!
6a. No, that is NOT the result! When listening to speakers in a room we have reflections from that listening room, what level are those reflections, are they always 0dB as it is with crossfeed?


71 dB said:


> [7] I'd be surprised if a main artistic intent wasn't to give people listening enjoyment.
> [8] I have a hard time understanding why compressing ILD with crossfeed to levels similar to speaker listening is against artistic intent especially when doing so tends to increase my enjoyment dramatically.
> [9] Liking Wagner is not a result of physiology. Spatial hearing is tied to physiology.
> [9a] ILD means what it means because of our physiology, something we can't escape.
> ...


7. You are NOT "people", you are one person. Are there any pieces of music or music genres you don't really enjoy? Were the creators of those pieces/genres idiots for creating music you don't enjoy? This is nonsense, no one creates commercial music recordings just for your personal listening enjoyment, we create them for the listening enjoyment of various target audiences, which may or may not include you! Furthermore, "listening enjoyment" is NOT the main artistic intention of a great deal of music. For example, the main artistic intent of much/most classical music before the mid part of the C18th was the glorification of god, not listener enjoyment. Many of the pieces we may (or may not) enjoy listening to today would have been quite shocking and uncomfortable in their day. Another example is some/many modern classical music sub-genres (starting around the time of the first world war), where the main artistic intent is intellectual stimulation, "listening enjoyment" may not be an artistic intent at all and in some sub-genres it is deliberately avoided! The actual historical facts completely contradict what YOU "would be surprised by"! 

8. Read what you have written, the issue is what YOU personally "have a hard time understanding" and YOUR personal "enjoyment". Do you have a "hard time understanding" quantum mechanics or the theory of relativity? If so, do they not exist or should they be discounted simply because you have a hard time understanding them? Clearly you don't understand artistic intent and ignore/discount it, that's your choice but it is NOT an objective fact and doesn't therefore apply to everyone!

9. That's ridiculous, of course liking Wagner is tied to physiology. If you didn't have ears and therefore couldn't hear it, how could you either like or dislike Wagner? And, if human ears were significantly different, Wagner would have composed his music differently. As with liking Wagner, spatial hearing is also tied to physiology but it doesn't actually occur in our ears, it occurs in our brain, it is a perception!
9a. No, that is patently false! ILD means "Inter-aural Level Difference", where do you think that "difference" is calculated/identified, in your physiology (ears) or in your brain?

10. No, it's absolutely nothing like that, we don't sit in music, we don't create music to be ergonomic, in fact quite the opposite, we create music virtually always with deliberately unergonomic features. Without dissonance and other unergonomic/uncomfortable features nearly all western music simply doesn't work/exist. Particularly obvious examples with popular genres would be thrash/heavy metal and electronic hardcore, which absolutely depend on being uncomfortable, if it were comfortable/ergonomic then it wouldn't be "heavy" metal or hardcore, it would be something else entirely; "softcore" for example! This is very obvious and very basic music theory, how can you be so ignorant of it?

11. That's just more complete nonsense, which also contradicts your previous complete nonsense! If spatiality is tied only to physiology, how can having say the spatiality of an arena, a small dry room and a medium sized recording room (all at the same time) be "natural"? Do you have (or do you know of anyone who has) 3 pairs of ears which you can simultaneously place in these three completely different locations? It's just ridiculous!


71 dB said:


> [12] The lack of excessive spatiality prevents the hearing system from creating spatial distortion while desperately trying to decode the spatiality.
> [12a] When spatial distortion doesn't exist, all those disparate simultaneous spatialities can be decoded. Our brain is very good at stuff like that, but it requires that things tied to physiology aren't violated.
> [13] The birth of rock music (which is basically blues with distorted guitars) happened around the same time as stereophony arrived in commercial music.
> [13a] Weird spatiality has more to do with the novelty factor of stereo and naive use of technological possibilities than the aesthetics of rock music.
> [13b] People listened to those rock/pop songs on their mono transistor radios on the beach!


12. There's no such thing as "excessive spatiality", that's a term YOU have just made up to describe ONE SINGLE parameter of spatiality that exceeds your personal preference. And NO, it does NOT create "spatial distortion", there is no spatial distortion! Apparently, your perception is creating some effect/distortion out of the spatial information but then that's your perception, NOT an objective fact which therefore applies to everyone else's "hearing system".
12a. Even though spatial distortion doesn't exist, except apparently in your perception, we still can't easily decode those spatialities. Our brain is NOT very good at stuff like that! Although it takes skill and knowledge, our brain is relatively easily fooled. When you listen to a symphony recording, does your brain decode each of the 20-50 different aural perspectives and spatialities? When you watch a film, does your brain decode/recognise that the dialogue was recorded in a different room to the one in the scene and that all the rest of the sound/s were recorded in various other different rooms again (all with different spatialities)? No, obviously it doesn't, even though "things tied to physiology" are pretty much ALWAYS violated, dramatically so!!

Your whole post yet again just contradicts your claims and demonstrates exactly what I have stated. You claim expertise in spatiality and yet define it ONLY in terms of ILD, which is just ONE of numerous parameters that define spatiality. Even someone with just a fairly basic understanding of spatiality would know that, let alone an expert. You are therefore demonstrating the opposite of your claim! Furthermore, you seem perfectly willing to simply make up other facts just to support your agenda. For example:

13. Yes but to start with, they were relatively unconnected. The technological breakthrough which allowed the evolution of rock music wasn't stereo, it was multi-track recording and equipment which allowed the individual processing of each of those tracks. It wasn't until much later that stereo became integral to rock and its sub-genres. And NO, rock music is NOT just blues with distorted guitars. Sure there was a transition/evolution period but there were/are many differences, not just "distorted guitars" but also: The electric bass, the drumkit is very different (the instruments in the kit, the style of playing and the processing of it), the singing style is often significantly different, the rhythms/beat/tempos are somewhat or very different, the actual "blue" notes (obviously a defining feature of "the blues") are commonly omitted and the (also defining) "12 bar blues" chord progression is also usually modified or dropped in rock music. Again, you clearly know very little, apparently lack the listening skills to identify/notice these differences between rock and blues and are simply making up nonsense!
13a. No, you can't just continue making up nonsense facts which contradict the actual historical facts! Sure, some artists did use stereo in the beginning naively and for the novelty factor, but only some. Some early stereo recordings of orchestras had weird spatiality, but by the mid/late 1950's stereo was being used very intelligently and the basic principles from that time are still employed almost ubiquitously today. And by the mid 1960's pop/rock music was employing more sophisticated stereo techniques and the naive/novelty factor type productions gradually died out (except on occasion when it wasn't naive but desired).
13b. Yes, because mono-compatibility was required when mastering and this requirement didn't completely die out in commercial audio until fairly recently (the last 15-20 years). Again, you just make up (or assume) a fallacious cause/effect correlation and present it as fact.

G


----------



## bigshot

71 dB said:


> Sometimes I like to listen to speakers, but since I listen to headphones so much these day I tend to feel speaker sound "distant" (10 feet away!) and wanting it come closer (1-3 feet from me) to me to remind the miniature soundstage of headphones. I am also annoyed how speaker sound changes when I move my head since headphone sound is 100 % steady no matter how I move. Also, low bass frequencies make doors and windows resonate causing irritating noise. Speakers have their place. Playing loud makes me feel the soundwaves and of course you don't need to wear headphones which is very nice. Also, excessive spatiality is not a thing with speakers - no hassle with proper crossfeed settings!



Sounds like you have room problems. The sound should fill the room all around you. It shouldn't sound distant at all. And if your stuff is rattling, you probably have a wolf tone around the resonant frequency of your room. EQ would fix that right up. As for turning your head, that is a big part of how we locate sound in space. Without that, you have no soundstage at all.

But I thought you lived in Greenland with reindeer or something and didn't have room for speakers. Are the speakers a new addition?


----------



## 71 dB

bigshot said:


> Sounds like you have room problems. The sound should fill the room all around you. It shouldn't sound distant at all. And if your stuff is rattling, you probably have a wolf tone around the resonant frequency of your room. EQ would fix that right up. As for turning your head, that is a big part of how we locate sound in space. Without that, you have no soundstage at all.
> 
> But I thought you lived in Greenland with reindeer or something and didn't have room for speakers. Are the speakers a new addition?



There will be room problems unless you can invest a lot of money in constructing a dedicated listening room. I don't mean turning the head, but moving it from side to side without turning it. That changes where you are in the radiation patterns of the speakers.

I live in Finland, far from Greenland. I am in Helsinki, in southern Finland, without reindeer; those are found in northern Finland, in Lapland.

Speakers are not new. I used to listen to them a lot before discovering crossfeed in 2012.


----------



## 71 dB

Correlated sound coming from both speakers causes cancellations at different frequencies depending on the point in space, and that's why moving your head changes the sound.
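The mechanism described here can be put in numbers: two correlated arrivals cancel wherever the path-length difference between the speakers and the ear equals an odd number of half wavelengths. A minimal sketch (Python; the distances are hypothetical, chosen only to illustrate the effect):

```python
# Sketch: comb-filter notch frequencies for correlated sound from two
# speakers, as a function of head position. Geometry is hypothetical.
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def notch_frequencies(d_left, d_right, f_max=20000.0):
    """Frequencies where two correlated arrivals cancel.

    Cancellation occurs when the path-length difference equals an
    odd number of half wavelengths: f = (2n+1) * c / (2 * diff).
    """
    diff = abs(d_left - d_right)
    if diff == 0:
        return []  # equidistant from both speakers: no comb filtering
    notches = []
    n = 0
    while True:
        f = (2 * n + 1) * SPEED_OF_SOUND / (2 * diff)
        if f > f_max:
            break
        notches.append(f)
        n += 1
    return notches

# Centred head (2.00 m to each speaker): no notches at all.
print(notch_frequencies(2.00, 2.00))
# Move the head a few cm to one side (7 cm path difference) and a
# whole series of notches appears, which is why the sound changes.
print([round(f) for f in notch_frequencies(2.00, 2.07)])
```

Even a small head movement shifts `diff`, so the entire notch pattern slides in frequency, matching the audible change 71 dB describes.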


----------



## 71 dB

At the highest frequencies the directivity of speakers can be very high and the "sweet spot" is correspondingly small.


----------



## 71 dB

gregorio said:


> 1. I do understand you, I understand that you are making nonsense assertions to support your erroneous belief. You post is a testimony to that because it's almost entirely nonsense!
> 
> Yada yada yada…
> 
> G



Don't expect me to respond. I'm done with your smearing. I'm not stupid enough to waste any more time on you. I should have realized that the first time you attacked my crossfeed posts over a year ago. I'm trying to ignore your posts and respond to the respectful people here, but if that doesn't work I may leave the place altogether.


----------



## bigshot

You need to calm down. Attacking cross feed isn't attacking you.


----------



## ironmine (Feb 27, 2019)

71 dB said:


> Also, low bass frequencies make doors and windows resonate causing irritating noise.



You recently asked me why I do Digital Room Correction (DRC). This is why. To remove irritating and sound-masking resonances.

I tried to get rid of them using _passive _acoustic treatment. I installed 11 acoustic panels made of rockwool of the right density, 10-15 cm thick - including a ceiling cloud, bass traps in corners, behind my back, in all the FRZ (first-reflection zones), and between the speakers and the wall behind them (to help with the SBIR - Speaker Boundary Interference Response). It helped (a lot!) with imaging, but not much with low-frequency resonances.

Then I installed the VST plugin in Foobar to do _active _DRC for me. It's called MathAudio Room EQ. The difference before and after was amazing. The combination of acoustic panels and DRC nailed it down.

I have an inexpensive but individually calibrated microphone (75 euro) that I purchased specially for room measurements.

I enjoy equally well listening to music via speakers and via headphones. When you listen to an album first through speakers, then next day through open cans Senn HD6XX, then next day through closed cans Denon D2000, you feel like you've got to the very heart of an album, you approached it from each and every direction and sucked out every tiny nuance and emotion from it.


----------



## bigshot (Feb 28, 2019)

I think people who prefer headphones over speakers have never heard a good speaker system. All of his criticisms are signs of room and layout problems. I’d offer suggestions for how to fix it, but I don’t think he’s interested in fixing it.


----------



## 71 dB

bigshot said:


> I think people who prefer headphones over speakers have never heard a good speaker system. All of his criticisms are signs of room and layout problems. I’d offer suggestions for how to fix it, but I don’t think he’s interested in fixing it



I bought my first pair of speakers back in 1993. Until 2012 I preferred speakers over headphones, because I had not heard good (properly crossfed) headphones. I love speakers, but crossfeed made me love headphones too. I have heard good speakers in my life. I have heard Duntech Princess speakers in the dedicated acoustically perfect listening room of the acoustics lab I used to work in. I have heard B&W Nautilus at a Hifi exhibition. I have heard a lot of Finnish quality speakers from Genelec to Amphion to Gradient and many others.

My speakers are optimally placed in my tiny room of the tiny apartment I can afford to rent. Don't forget I'm an acoustic engineer. I am a guy who tells people how they should place the speakers in their room to get the best possible sound. The overall sound of my speakers is good, but even a great room colors the sound more than headphones.


----------



## Sonic Defender

If I had to pick either speakers or headphones (for personal audio sessions) it would be speakers, but not by a landslide. There are some very unique and engaging things about headphone listening. I can completely understand how headphone listening could be preferred even when the choice to use speakers is available.


----------



## bigshot (Feb 28, 2019)

71 dB said:


> My speakers are optimally placed in my tiny room of the tiny apartment



That would explain why your sweet spot is so small and the sound changes when you move your head from side to side. Last night I was playing a Mozart DVD and I was thinking about our conversation, so I moved around a little bit to see how much latitude my sweet spot has. It turns out that there is a consistent sound the width of a couch. If my room didn't have the bar and bathroom in the back, I would probably be able to put in two couches lined up like theater seating. I have an unusual setup with dual mains to increase the size of the soundstage to match the size of my projection system. The broader soundstage probably makes the width of the sweet spot wider too.

When I compare the sound of my multichannel speaker system to my headphones, there is no contest. A room full of sound beats a band in the space between your ears every time. The difference might not be so great with stereo bookshelf speakers, and to me a near field system isn't that much different from headphones at all.... The secret is the space around the sound- the room- not necessarily the speakers themselves, although they need to be big enough to fill the space.


----------



## 71 dB (Feb 28, 2019)

bigshot said:


> 1. That would explain why your sweet spot is so small and the sound changes when you move your head from side to side.
> 
> 2. Last night I was playing a Mozart DVD and I was thinking about our conversation, so I moved around a little bit to see how much latitude my sweet spot has. It turns out that there is a consistent sound the width of a couch..
> 
> ...



1. Yes, that would explain it. I like my spatiality precise and small sweet spot gives just that.
2. I probably have more direct sound compared to reverberation than you.
3. Home speakers in general aren't supposed to be used like that. They have (or at least should have) controlled directivity, which is compromised when you use more than one pair like that. You have identical sound coming from different places, causing comb-filter effects. I don't know your setup and for all I know it might even work, but you are walking on thin ice, as we Finns say.
4. As I have told you several times, I get a miniature soundstage with headphones using crossfeed. It's because crossfeed destroys the spatial cues that tell us the sound sources are at our ears, and the remnants of such cues get masked by the spatial cues of the recording itself, which can be for example a church, a concert hall or artificial spatiality created using various plugins in a DAW. When your hearing system gets mostly spatial cues about a large acoustic space and only a few cues indicating "inside skull" sounds, the result is a miniature soundstage with a size determined by the spatial cues of the recording. Anyway, you are right about a multichannel speaker system giving the best/largest soundstage, as it most effectively masks the acoustics of the listening room with the spatial information of the recordings and creates zero cues for "inside skull" sounds.


----------



## bigshot (Feb 28, 2019)

Room reverberation isn't always a bad thing. It's only bad when it causes problems. When it doesn't cause problems, it wraps a natural dimensional envelope around the sound which headphones can't synthesize. It's the same ambience you would hear if you spoke in the room or played a guitar. The goal of a good listening room isn't to have no reverberation, it's to put the natural reverberation to good use.

I know in theory using multiple sets of mains isn't recommended, however it isn't uncommon in theaters. It takes more care with balancing, and I had to experiment with several placement arrangements before I got something that works. Theory is important, but there's a point in building a listening room where you have to experiment and try to solve problems through trial and error to find what works. Plans on paper don't always work the way you expect them to. It's a balancing act to get the space and sound to work together.

Headphones are incapable of soundstage because soundstage requires space between the listener and the sound sources. Headphones can only render secondary depth cues, not primary ones. If you set up speakers near field with a pinpoint sweet spot and no room ambience, you might as well wear headphones, because you aren't getting the full benefit of having speakers.


----------



## Glmoneydawg

The type of music you gravitate to can also be a factor here... I listen to a lot of blues and classic rock. I have a 1500 sq ft listening room, but still prefer a nearfield-ish position (roughly a 10 ft equilateral triangle). Having said that... for large scale classical, movies and maybe even some prog rock (lol) a larger soundfield is preferable and probably more accurate. I do believe a nearer-field setup is more precise, requires less volume and reduces room effects with music involving 6 musicians or less... but it does shrink my occasional classical ventures.


----------



## bigshot (Feb 28, 2019)

Yes, it isn't easy to fill a whole big room. If it isn't a dedicated listening room, you would need to limit the area that is devoted to that and focus on a smaller space. My room is all a big theater/listening room- 20 by 25 or so. It's still very accurate and the sound location in the soundstage is still precise, it's just that I can increase the size of the equilateral triangle by increasing the distance between speakers and the distance from the listening position to the speakers, and end up with a soundstage that is about 20 feet across and 8 feet tall. That matches the size and scale of what you would imagine a rock or jazz band would be like if they were playing right in front of you in a small club, or a classical concert with good seats a little bit back from the stage. It isn't like a miniature version of a band in front of you. It's full scale.

There are big screen TVs that give you a different sense of scale. Increasing the length of the sides of the equilateral triangle does that for sound too. When I match the big soundstage to a ten foot projection screen of a concert or opera blu-ray, it really looks, sounds and feels like you're there because it is all natural scale. The rear channel ambience opens up the roof and rear wall and makes it sound as big as a concert hall. Hard to explain in words. It's an experiential thing.


----------



## Glmoneydawg

bigshot said:


> Yes, it isn't easy to fill a whole big room. If it isn't a dedicated listening room, you would need to limit the area that is devoted to that and focus a smaller space. My room is all a big theater/listening room- 20 by 25 or so. It's still very accurate and the sound location in the soundstage is still precise, it's just that I can increase the size of the equilateral triangle by increasing the distance between speakers and the distance from the listening position to the speakers, and end up with a soundstage that is about 20 feet across and 8 feet tall. That matches the size and scale of what you would imagine a rock or jazz band would be like if they were playing right in front of you in a small club, or a classical concert with good seats a little bit back from the stage.
> 
> There are big screen TVs that give you a different sense of scale. Increasing the length of the sides of the equilateral triangle does that for sound too. When I match the big soundstage to a ten foot projection screen of a concert or opera blu-ray, it really looks, sounds and feels like you're there because it is all natural scale. Hard to explain in words. It's an experiential thing


Sounds cool bud....nothing in this forum is more important than having a system/setup that you are happy with....enjoy!


----------



## ironmine

71 dB said:


> As I have told several times I get a miniature soundstage with headphones using crossfeed.



When I start listening with headphones, at first the sound stage is miniature, as you said. But the more I listen and the more relaxed I become, somehow the sound stage grows bigger. I think the mind is very adaptable. It has to work with whatever spatial cues it has. By the end of an album, the stage is huge and perfect.

All you guys who just listen with headphones while doing something at your computer or while sitting in an upright position with the full lights on in your room, please understand that you are not making a very intimate and close connection to music as compared to putting your headphones on and lying on your sofa in complete darkness. Maybe I cannot express my thought clearly, but, at least in my case, something very unique and beautiful happens when you do NOTHING (with your eyes or body or mind) and you JUST listen to music via crossfeed in the dark. It puts the mind in that special mode.


----------



## 71 dB

bigshot said:


> Room reverberation isn't always a bad thing. It's only bad when it causes problems. When it doesn't cause problems, it wraps a natural dimensional envelope around the sound which headphones can't synthesize. It's the same ambience you would hear if you spoke in the room or played a guitar. The goal of a good listening room isn't to have no reverberation, it's to put the natural reverberation to good use.
> 
> I know in theory using multiple sets of mains isn't recommended, however it isn't uncommon in theaters.It takes more care with balancing, and I had to experiment with several placement arrangements before I got something that works. Theory is important, but there's a point in building a listening room where you have to experiment and try to solve problems through trial and error to find what works. Plans on paper don't always work the way you expect them to. It's a balancing act to get the space and sound to work together.
> 
> Headphones are incapable of soundstage because soundstage requires space between the listener and the sound sources. Headphones can only render secondary depth cues, not primary ones. If you set up speakers near field with a pinpoint sweet spot and no room ambience, you might as well wear headphones, because you aren't getting the full benefit of having speakers.



Very dry recordings can benefit from reverberation, but I don't think a church recording with RT60 = 5 seconds needs additional reverberation. Our hearing doesn't sense a sound field, only pressure changes at the eardrums. Depending on which direction the sound comes to the ear, the shape of our upper body, head and pinna creates spatial cues that reveal the direction. If we could feed the eardrums via headphones with the same spatial cues, our hearing couldn't tell the difference. The problem of course is to get those spatial cues exactly correct, something our current technology can't do properly, meaning headphone spatiality is somewhat compromised. It's up to listeners themselves how much they weight that compromise against the benefits of headphone listening.

In movie theatres the "sweet spot" has to be very large and having a perfect sweet spot is unrealistic to begin with. The target is different from home listening.

No, space between the listener and the sound source is not required. Our ears sense pressure changes, and pressure changes are just pressure changes no matter how far away their source is. Distance between the source and listener in an acoustic space causes all kinds of spatial cues to be generated, and these are what give away the distance. If someone moves your speakers closer to you in your listening room, the reverberation doesn't change. Early reflections do change, and so does the balance between direct sound and reflections + reverberation. The closer the speakers are to you, the louder the direct sound is compared to the reverberation. Loud reverberation compared to dry direct sound is a spatial cue of a distant sound source. A short time delay between the direct sound and the first early reflections can be a cue of distant sound, and so on… our spatial hearing decodes these spatial cues, and if we can trick the hearing using headphones (by having these spatial cues in the recording) we have a soundstage. I think you don't hear a headphone soundstage because you don't believe in it. You are convinced it's impossible, so you force yourself not to hear it.
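The direct-vs-reverberant balance described above is often quantified with the textbook critical-distance approximation. A minimal sketch (Python) follows; the room volume and RT60 are assumed illustration values, not measurements of anyone's actual room:

```python
# Sketch: direct-to-reverberant ratio vs. listener distance, using the
# Sabine-based critical-distance approximation. Values are assumed.
import math

def critical_distance(volume_m3, rt60_s, directivity_q=1.0):
    """Distance at which direct and reverberant levels are equal
    (textbook approximation: dc = 0.057 * sqrt(Q * V / RT60))."""
    return 0.057 * math.sqrt(directivity_q * volume_m3 / rt60_s)

def direct_to_reverb_db(distance_m, volume_m3, rt60_s, directivity_q=1.0):
    """Direct level relative to the reverberant level, in dB.

    The direct sound falls 6 dB per doubling of distance while the
    diffuse reverberant field stays roughly constant, so the ratio
    is 20 * log10(dc / r)."""
    dc = critical_distance(volume_m3, rt60_s, directivity_q)
    return 20.0 * math.log10(dc / distance_m)

# Assumed small living room: ~40 m3, RT60 = 0.4 s.
for r in (1.0, 2.0, 3.0):
    print(f"{r} m: {direct_to_reverb_db(r, 40.0, 0.4):+.1f} dB")
```

Halving the listening distance raises the ratio by 6 dB, which is exactly the "drier, nearer" distance cue the post describes.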


----------



## 71 dB

Glmoneydawg said:


> I have a 1500sq ft listening room



My whole apartment, which I can afford to rent, is 320 sqft, of which 170 sqft is my living/listening room and the remaining 150 sqft consists of entry, toilet, kitchen and closet. Homes in Finland are very small compared to those in warmer countries, because keeping a large apartment warm during the winter is insanely expensive, and Finnish homes are warm in the winter. We are not Brits who shiver with cold inside if it's not summer!  Typically Finnish family homes are 600-1000 sqft (200-300 sqft living room) and rich people/people in rural areas live in larger houses. I'm living alone so I have a single room apartment. The advantage of a single room apartment is that my bed in my listening room is a great bass attenuator...


----------



## bigshot (Mar 1, 2019)

71 dB said:


> Very dry recordings can benefit from reverberation, but I don't think a church recording with RT60 = 5 seconds needs additional reverberation.



Perhaps I used the wrong term. I meant the natural envelope of sound a normal room adds. I didn't mean big sloppy reflections. Room ambience is a subtle thing, but that's where the natural presence comes from, and it allows the sound to bloom and envelop you kinesthetically instead of just shooting directly into your ears. We aren't even consciously aware of it because it's just "there". But we perceive it and it's a lot of what sounds "real" to us. Some of the distance cues are in the recording, and some are part of the physics of the room itself. If you don't have space for the sound to inhabit, you don't get a lot of the distance cues.

The key to spatiality is real space. You can't get that effect with headphones. Headphones are a great compromise to listen to music in limiting circumstances though. No denying that. It's like watching a movie on your cell phone. It certainly is convenient and it's the only way you are going to be able to watch a movie while waiting in your doctor's office, but it isn't the same as watching a movie in a proper theater.

Church organ recordings are MUCH better in multichannel where the reverberation can actually have directionality and isn't wallpapered over the top of the music. Big cathedral ambiences don't sound quite right in stereo because they muddle up the sound. Reverberation with real dimensional space is the best. I am sure that the quality of the space created depends on the number of channels. A church organ recording in Atmos must be amazing. I'd like to hear a good Atmos installation sometime. The bigger the room, the more channels, the more immediate and enveloping the sound is. Small near field stereo setups with no envelope of room acoustics just don't give you anywhere near that kind of presentation.

Multichannel speaker systems don't just create spatiality, they make it possible to *enhance* the spatiality of the sound of your natural room and expand it far beyond your four walls. It's like an aural "holodeck". I just got a box set of the Berlin Philharmonic Europakonzert series. Each concert is filmed in a different venue all over Europe. I watched one that took place in a huge factory building, and another was in a beautiful intimate baroque theater. They're all in 5.1 and the ambiences of the locations sound unique and interesting. They're all different, but they all sound wonderful. Good listening rooms are like that. There isn't just one optimal way to do it. You want to experiment to find the setup that makes your particular room sing. Headphones are nice because you can just slap them on your head and they always sound the same wherever you are.

I lived for over 30 years in a one bedroom apartment. I had a speaker system, but it was mostly for the TV or for when company came over. There just wasn't enough room for it to sound really good, so I would use my good headphones a lot. I totally understand that. When I was a kid, I remember watching one of those "You Are There" interviews... I can't remember if it was Sammy Davis Jr or Frank Sinatra.... but he was sitting in his living room talking and he pushed a button and an opening in the wall opened up for a projection booth and a big screen dropped down in front of him. I remember as a kid saying to myself that I wanted a setup like that someday. And technology has now made that affordable to normal humans like me. I am so fortunate to be able to finally have a dedicated room for listening to music and watching movies. It's taken me over half a century to get here, but now that I have it, I would never go back to the little apartment and headphones. It's hard to get true spatiality that way though, because you can't carry your whole room around in your backpack on the bus.

I think the problem is that you've never really heard a good multichannel speaker system. If you ever hop in your dogsled and make the trek to California, I'd be happy to demonstrate mine for you.


----------



## 71 dB

bigshot said:


> I think the problem is that you've never really heard a good multichannel speaker system. If you ever hop in your dogsled and make the trek to California, I'd be happy to demonstrate mine for you.



Even if that were true I wouldn't call it a problem. Some of us have _real_ problems*. I'd much rather feel respected, valued and loved than have a good multichannel speaker system in a big house. I have so many things to fix in my life that the lack of your speaker system doesn't make it into my top 10.000 things to fix**. I hope you are not becoming as condescending toward me as Mr. G just because you made it in life after 30 years of headphone misery.

* And those problems are microscopic compared to problems in countries such as Yemen.
** One of the easiest things to fix is to stop calling people idiots and ignoramuses when they don't agree with me about crossfeed. I'm working on that. Not calling people idiots for that is rather easy, but not calling them ignorant has turned out to be challenging. I am trying, though.


----------



## ironmine

71 dB said:


> I have so many things to fix in my life that the lack of your speaker system doesn't make it into my top 10.000 things to fix**.



Do you have a job? If not, why can't you find one? What was your previous job?


----------



## 71 dB

ironmine said:


> Do you have a job? If not, why can't you find one? What was your previous job?



I'm out of a job at the moment. Getting a job has turned out to be pretty hard for someone like me, despite my education. My previous job was in a HEVAC engineering office. It was horrible and did a lot of damage to me as a person.


----------



## gregorio (Mar 2, 2019)

71 dB said:


> [1] Very dry recordings can benefit from reverberation, but I don't think a church recording with RT60 = 5 seconds needs additional reverberation.
> [2] Our hearing doesn't sense a sound field, but only pressure changes at eardums.
> [2a] Depending on which direction the sound came to the ear the shape of our upper body, head and pinna creates spatial cues that reveal the direction.
> [2b] If we can feed the eardums with headphones and somehow have the same spatial cues our hearing can't tell the difference. The problem of course is to have those exactly correct spatial cues, something our current technology can't do properly meaning headphone spatiality is somewhat compromised.
> [2c] It's up to listeners themselves how much they weight that compromise against the benefits of headphone listening.



1. Again, you are simply making up assertions which not only contradict the facts but even contradict yourself! You correctly state later in your post that as you move speakers closer to you, the balance between direct and reflected sound changes, more direct sound and less reflected sound. It should be obvious that the exact same is true of microphones, the closer to the source sound (instrument/s) the mic is placed, the more the direct sound of that instrument is captured and the less reflected sound. Whether a church recording needs additional reverberation is therefore determined by where the mics were positioned during the recording. Virtually always, additional reverb is therefore required, even on recordings in a church (or another acoustic with a relatively long RT60!). Virtually always, a combination of mic positions is employed, very close (spot mics), very distant (room mics) and in the case of an acoustic ensemble, typically some medium distance mics (main pair or tree), which are still significantly closer to the instruments than the audience. As implied by the name, the "room mics" are there to capture the room acoustics/reflections. The spot mics certainly and possibly the main mics will need "additional reverberation", which is why we have room mics in the first place, although commonly a combination of room mics and artificial (computer generated) reverb supply this "additional reverberation". However, all these mic positions are recorded to separate tracks which are then balanced/mixed together in a mix studio, so the determination of this balance (and processing) is done WITH the added acoustics of the mix room/studio. To reproduce this you therefore also need the "added reverberation" of the listening room. However in practice, many/most mixes are auditioned with headphones and the mix may be adjusted to account for (though not correct) the difference in presentation.

2. This statement doesn't make sense, because those "pressure changes at the eardrums" contain the soundfield.
2a. This isn't entirely correct either. The "spatial cues" already exist in the sound pressure waves and we can detect and measure this with a (stereo) pair of microphones, which obviously don't have an upper body, head or pinna. The upper body, head and pinna do not "create spatial cues", they modify (filter) the existing spatial cues. The brain then uses the existing (original) spatial cues AND the modified cues, plus other biases, to create a perception of spatiality. We still get a perception of spatiality with headphones but as we don't get the added spatial information of the room, nor most of the "modified cues", our perception of the spatiality is going to be different (somewhat or very significantly) from listening on speakers.
2b. The problem is to have both the same spatial cues (room acoustics) and modified cues (HRTF) but I agree with you that our current technology can't really do this properly yet, although it is starting to get close. However, you are now contradicting your main argument throughout this thread!! You have repeatedly stated that simple crossfeed does do this "properly" and fixes/cures the issues with headphone listening!
2c. Exactly and the exact same is true of using crossfeed with headphones.


71 dB said:


> [1] If someone moves your speakers closer to you in your listening room, the reverberation doesn't change.
> [1a] Early reflection do change and so does the balance between direct sound and reflections + reverberation.
> [1b] The closer the speakers are to you, the louder is the direct sound compared to reverberation.
> [1c] Loud reverberation compared to direct dry sound is a spatial cue of distant sound source.
> ...


1. Yes it does! Because:
1a. Reverberation is simply the combination of early reflections reflecting off subsequent reflective surfaces. If the early reflections change, then so too must the reverb!
1b. True!
1c. No, on its own it is not! We can have a fairly dry large room and a very reverberant small room, therefore loud reverberation relative to direct sound but we would NOT perceive the direct sound in the small reverberant room as being more distant than in the large (fairly dry) room. Consistently, you take a single factor/parameter of spatiality and then define spatiality by ONLY that one parameter but the simple fact/reality is that it's the COMBINATION of factors/parameters that creates a perception of spatiality. The reason we would not perceive the higher relative reverberation level in the small room to be more distant is because concurrently and IN ADDITION to relative reverb level, the brain is ALSO analysing timing and colouration spatial information!
1d. Again, not on its own. A short time delay between the direct sound and the first early reflection/s could occur in a large room if we're sitting relatively near a wall. If for example we were sitting in a concert hall 15 meters away from an instrument but only say a meter away from the left wall, then the first reflection will only have to travel a couple of meters further than the direct sound and will only be delayed by a few milli-secs, which is exactly the same as being much closer to the instrument and in the centre of a small room, so how is it we can perceive the difference? We can tell the difference because in the concert hall example the first reflection from the right wall would be delayed by several dozen milli-secs relative to the reflection from the left wall but in the centre of a small room there would be little/no difference in the arrival times of the initial reflections from the left and right walls. Again, this is why crossfeed is damaging to spatial information, because some of that left initial reflection is fed to the right channel and vice versa, messing-up/destroying that vital left/right difference. In case it's not obvious, the result is the same if we (the listener) sit in the centre of both rooms but the musician's position changes (is close/r to the left wall in the concert hall).

2. I don't believe Bigshot is saying that he doesn't experience any soundstage with headphones, just not a very good/pleasing soundstage. I'm sure he gets a fair bit of left/right (soundstage width) but obviously not so much soundstage depth, and as "soundstage" is both left/right AND near/far (depth), if he's not perceiving much/any depth, then he's not perceiving a soundstage. Broadly, this is my experience as well, except I do get some depth with headphones (with many mixes) but not as much as with speakers. For me, crossfeed reduces the soundstage width and commonly reduces the depth even further (though by how much and how annoyingly varies between mixes).
2a. That's certainly a possibility, although in my case I heard/perceived that before I had any idea whether it was possible/impossible. I didn't have a prejudice against crossfeed and then hear what I expected; I heard various crossfeeds and that experience eventually led to my prejudice. However, if it's a possibility for Bigshot, isn't it also a possibility for you? Maybe you are convinced crossfeed is better and therefore you're hearing it as better? However, the actual facts/reality are somewhat more on Bigshot's side, because crossfeed does damage/destroy some of the parameters of spatiality and it does not include HRTF or room acoustics!

G


----------



## 71 dB

I know/understand the various parameters of spatiality, but I concentrate on the relevant one to keep things simple. Of course one parameter alone doesn't make spatiality, but often one parameter is the dominant one. Call me lazy for simplifying things, but ignorant I am not about spatiality. Spatiality is an insanely complex thing, and a "full on" approach to all details all the time would make things unnecessarily complicated. I understand spatiality well enough to concentrate on the most significant parameter(s) in each situation.


----------



## 71 dB

gregorio said:


> This statement doesn't make sense, *because those "pressure changes at the eardrums" contain the soundfield*.
> 
> G



The parameters you leave out yourself here are the directions of the soundwaves of the more or less diffuse soundfield. We lose that information in the form of directions when sound enters our ears (because the ear canal is so narrow that it behaves as a 1-D waveguide at the frequency range of human hearing). The information about directions is encoded as pressure changes of different spectral colourations, so that at the eardrum pressure changes are all you have, but those pressure changes contain the information about the soundwave directions in a more or less lossy form (you can't re-create the original soundfield from the pressure changes at the eardrum with 100 % accuracy).

Microphones can only record pressure changes. Microphones have different directional patterns so they react to sounds differently depending on the direction, but the information about the direction is ultimately lost and all becomes pressure changes. In order to record soundfields you need a soundfield microphone.


----------



## bigshot (Mar 2, 2019)

71 dB said:


> I'd much rather felt respected, valued and loved than have a good multichannel speaker system in a big house.



This is an audio forum, not an "I'm OK, you're OK" forum. And I'm afraid I have no opinion on Yemen. We just talk about sound fidelity here, we aren't expected to provide you with love and understanding. You need to deal with that stuff yourself. I've said it several times before, and I'll say it again... You need to take a break and go out in the sunshine and be with other people. You aren't going to receive the validation you seem so desperate to get here.

Respect is built in the real world. This is just an anonymous internet forum populated by people who know things, people who want to learn things, and a few people with various personality disorders. No one really expects respect here, even the people who deserve it the most. I'd say someone who has built a career in the real world of sound engineering and who is kind enough to tirelessly share his knowledge with us here in this group deserves it. That's just my criteria for respect though. YMMV.


----------



## 71 dB

I have been sharing my knowledge with you. I'm not expecting to be given a Nobel prize, but it would be nice not to be constantly told that I don't understand/know anything and that I haven't even heard good speaker systems etc. I don't have a career in sound engineering (that is a very RARE profession!) and I don't have the best speakers in the world, but that doesn't mean I don't understand and know stuff. I have a university degree and I have thought about these things for years. I have tested things while making music. I have written Nyquist plugins. I have my own experiences of the issue and I don't know why my experiences would be total junk compared to the experiences of other people. I'm an "out of the box" thinker and maybe that style is too radical for some conservatives here, such as the idea of putting avoidance of excessive spatiality over "artistic intention." Limiting ILD at low frequencies to 6 dB may feel harsh for artistic freedom, but you still have a lot of choices for ILD: maybe 1 dB? Or perhaps 2 dB? How about 4 dB? Or are we wild and go full 6 dB? Is that really so limiting? I'm not a conservative who thinks we must do things in a certain way just because The Beatles recorded their music in a certain way in the 60's.


----------



## Sonic Defender

Well, while I get what you're saying, I still think a degree of compassion and civility is expected of us, even in what I agree can be a rather detached and dehumanizing space, as the Internet often is. I hope we can still find a way to bridge the gap between each other without forgetting that sometimes, even here, people need to feel like they have value.

We can't provide love and that type of deep feeling, but hopefully we can still be kind.


----------



## bigshot (Mar 3, 2019)

I'm here to talk about audio, not to coddle. I'm not Dr Phil. I don't expect sympathy or validation from anyone else, and I don't feel obligated to provide it to others. I just want to talk about audio with people who listen and respond on point. I get tired of all the self serving drama.


----------



## 71 dB

bigshot said:


> I'm here to talk about audio, not to coddle. I'm not Dr Phil. I don't expect sympathy or validation from anyone else, and I don't feel obligated to provide it to others. I just want to talk about audio with people who listen and respond on point. I get tired of all the self serving drama.



Well, I came here too to talk about headphone audio, but why does the conversation go off the trail? Why do you tell me to go out and meet people if you are not Dr Phil? Who I meet has nothing to do with headphone audio. I'm completely fine with keeping the conversation within stuff like ILD, ITD, etc., but every time I talk about this stuff I get attacked. I'm tired of hearing how I "make up" things. Of course I "make up" some things, because these are not 100 % established things! There is no established terminology or understanding, so of course I need to "make up" terminology and concepts for things I figure out myself. In time these things probably will become established, but for now it is what it is. So, instead of attacking me for "making up" things, how about concentrating on my claims instead of me? That would be "talking about audio."


----------



## gregorio

71 dB said:


> I know/understand the various parameters of spatiality, but I concentrate on the relevant one to keep things simple. Of course one parameter alone doesn't make spatiality, but often one parameter is the dominant one. Call me lazy for simplifying things, but ignorant I am not about spatiality. Spatiality is an insanely complex thing, and a "full on" approach to all details all the time would make things unnecessarily complicated. I understand spatiality well enough to concentrate on the most significant parameter(s) in each situation.



That's a contradiction! If you did know/understand "spatiality" then you would know that there isn't a "relevant one", that "relevancy" only comes with the combination of parameters and that "to concentrate on the most significant parameter" does NOT "keep things simple", it makes them wrong. We could for example concentrate on the brake horse power (BHP) of a vehicle's engine as the most significant or dominant parameter of performance. However, that is only one parameter and without the others it's actually meaningless, irrelevant and plainly false. Obviously, a bus with a 400BHP engine will NOT have superior performance to a sports motorbike with just 200BHP; in fact the exact opposite is the case. Ignoring the other parameters (such as power to weight ratio, torque, power train, aerodynamics, etc.) does not "keep things simple", it makes the assertions about performance completely false. You could know everything there is to know about each of these parameters individually, but unless you consider them IN COMBINATION then you know pretty much nothing about a vehicle's performance! In the examples I gave previously (1c and 1d), concentrating on the "relevant one" (parameter) tells us pretty much nothing; we would not be able to hear/tell the difference between a large room and a small room, or the left/right position of a performer relative to our listening position, which is clearly NOT the case. There is no "one dominant" or "one relevant" parameter, it's the COMBINATION of them that is the only relevant factor. Furthermore, how many posts have you made to this thread? Why is this the first time you are mentioning that you are "simplifying" (or more accurately, over-simplifying to the point of falsehood/nonsense)? You've repeatedly defined spatiality by one single parameter (and completely discounted many/most/all of the others), argued for page after page that this is a "fact" and never mentioned that actually it's not a fact but a "simplification".



71 dB said:


> [1] The parameters you leave out yourself here is the directions of the soundwaves of the more or less diffuse soundfield.
> [1b] We lose that information in the form of directions when sound enters our ears (because the ear canal is so narrow it behaves as an 1-D waveguide at the frequency range of human hearing).
> [1c] The information of directions is encoded as pressure changes of different spectral colourizations so that at the eardrum pressure changes are all you have, but those pressure changes contain the information of the soundwave directions in more or less lossy form (you can't re-create the original soundfield from the pressure changes at eardum with 100 % accuracy).
> [2] Microphones can only record pressure changes. Microphones have different directional patterns so they react to sounds differently depending on the direction, but the information about the direction is ultimately lost and all becomes pressure changes. In order to record soundfields you need a soundfield microphone.



1. No, I have not left out those parameters, quite the opposite. You are failing to understand the basic principle, the directional information is contained within the pressure changes of the soundwaves. 
1b. Firstly, we don't only have one ear canal, we have two. As they are a fixed distance apart, the brain can compare these two inputs (of the pressure changes of the soundwave) and calculate all the directional information/spatiality. And even if we did only have one ear canal, our brain would still be able to calculate a significant amount of spatial information, because the pressure changes of the soundwave include ALL the time-delayed reflections (and colourations of those reflections), which the brain can analyse.
1c. There's no "encoding" occurring. There's the direct sound and the reflections and colourations, which are ALL part of the pressure changes before the soundwave even arrives at our ears! Then there is the measuring of that soundwave at two different fixed positions (our two ears) and the comparison of those two measurements (in our brain). This is all pretty basic psychoacoustics!
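The two-ear comparison described above can be put into numbers. A common textbook model is the Woodworth spherical-head approximation for interaural time difference (the head radius below is a generic textbook average, not a measurement of anyone's head):

```python
import math

HEAD_RADIUS = 0.0875   # m, average adult head (textbook value)
SPEED_OF_SOUND = 343.0  # m/s

def itd_ms(azimuth_deg):
    """Woodworth approximation: interaural time difference in ms for a
    distant source at the given azimuth (0 deg = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS / SPEED_OF_SOUND * (theta + math.sin(theta)) * 1000.0

for az in (0, 30, 60, 90):
    print(f"{az:3d} deg: {itd_ms(az):.3f} ms")
```

A source straight ahead gives 0 ms (identical arrival at both ears), and the model tops out around 0.65 ms for a source fully to one side, which is the familiar maximum ITD figure.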

2. Again, no! The directional information is not lost and all becomes pressure changes; the pressure changes ARE that information! Therefore, even though mics only record pressure changes, even a single mic can record the soundfield, although only from one position and therefore only in mono. However, two mics (a stereo pair) capture the pressure changes (and therefore the soundfield!) from two different positions and provide a stereo soundfield. If the spatial/directional information were "lost", you would not perceive any spatial information in a mono, stereo or binaural recording. I assume you have heard purely mono, stereo and binaural recordings; are you really saying that you don't perceive any spatial information from any of them?

G


----------



## 71 dB

Gregorio, the point is that the directional information doesn't have to come from the physical soundfield around our head. It can be in the recording. Can't you just stop smearing me? You try so hard to make me look like I don't know this stuff. Yes, this is pretty basic psychoacoustics and I learned it at university. You are using semantics to smear me and it's most annoying.


----------



## 71 dB

If all parameters were absolutely needed, the Haas effect wouldn't work, because it only uses ITD. Stop patronizing me for not understanding we have 2 ears! I talk about stuff like ILD more than perhaps anyone else here, and that is based on differences between the two ears!


----------



## gregorio

71 dB said:


> [1] I'm tired of hearing how I "make up" things. Of course I "make up" some things, because these are not 100 % established things! There is no established terminology or understanding, so of course I need to "make up" terminology and concepts for things I figure out myself.
> [1a] In time these things probably will become established, but for now it is what it is.
> [1b] So, instead of attacking me for "making up" things, how about concentrating on my claims instead of me?
> [2] I don't have a career in sound engineering (that is a very RARE profession!).
> ...



1. And there at last we have it: this is NOT the sub-forum for making things up and then stating them as fact, regardless of whether they are 100% established or not! Even worse, though, is that you are often making things up that are in fact already extremely well established (often for many decades and in some cases even centuries)! Just because you personally are not aware they are well established doesn't mean that they are not. This results in you "making things up" which are in direct conflict with the actual established facts! This is the typical (fallacious) approach of so many audiophiles.
1a. No, in time they will NOT become established, especially if you've just made them up and they contradict what's already been established over the course of numerous decades by scientists and countless engineers!
1b. I'm attacking your claims because you've just made them up and they contradict the established facts!

2. It's not particularly rare, there's probably more than 50,000 professional sound engineers around the world and over the course of 70 odd years, probably a million or so.

3. But that's not "out of the box" thinking, it's the exact opposite, it's thinking in a 60+ year old box! It was thinking out of the already 20 year old box that led to "artistic intention" over the conservative "natural spatiality" (in the 1950's). So, you have it completely backwards, you're advocating 60+ year old conservatism that's so out of date it precludes the existence of almost all music created in the last 50 years or so!

4. You are (inadvertently) proving my point! The Haas Effect doesn't work. It provides no information about distance/depth or acoustic environment type or size; it does not define any aspect of the soundstage except either the width of the sound source or its left/right position (depending on the amount of delay). Furthermore, the perception of left/right position is not only due to the Haas/Precedence Effect, the brain also calculates using "summing localisation" (panning). Why don't you try mixing together some dry sound sources (say synth generated sounds) using only the Haas/Precedence Effect (and no other spatial information) and see for yourself what sort of soundstage you get?
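For anyone who wants to try the experiment suggested above, here is a minimal NumPy sketch of Haas-only panning (the function name and the 10 ms delay are just illustrative choices): both channels are at equal level and the only cue is the delay, so localisation is driven purely by the precedence effect.

```python
import numpy as np

def haas_pan(mono, sample_rate, delay_ms, lead='left'):
    """Pan a mono signal purely by delaying one channel (precedence /
    Haas effect): equal levels, the un-delayed channel 'wins'
    localisation. Returns an (N, 2) stereo array."""
    d = int(round(delay_ms / 1000.0 * sample_rate))
    delayed = np.concatenate([np.zeros(d), mono])  # late channel
    padded = np.concatenate([mono, np.zeros(d)])   # early channel
    if lead == 'left':
        return np.stack([padded, delayed], axis=1)
    return np.stack([delayed, padded], axis=1)

# A half-second 440 Hz tone, right channel delayed 10 ms,
# so it is perceived from the left.
sr = 44100
t = np.arange(sr // 2) / sr
out = haas_pan(np.sin(2 * np.pi * 440 * t), sr, delay_ms=10, lead='left')
```

Rendering a few dry sources this way and listening back is exactly the test proposed in point 4: you get left/right placement but nothing resembling depth or an acoustic space.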

G


----------



## 71 dB

Haas effect doesn't work? So why do they teach it in the Acoustics 101 course?


----------



## 71 dB

50,000 professional sound engineers around the World is not much. How many million policemen are there in the World? If 1/1000 of those sound engineers are in Finland, that's 50 in all. That is rare: not even 1 % of the number of policemen in Finland (7,000).


----------



## Sonic Defender

71 dB said:


> If all parameters were absolutely needed, the Haas effect wouldn't work, because it only uses ITD. Stop patronizing me for not understanding we have 2 ears! I talk about stuff like ILD more than perhaps anyone else here, and that is based on differences between the two ears!


At this point it sounds like you are subjecting yourself to emotional harm, and for little reason. It is obvious that the parties you are attempting to convince are not at all going to change their perspective, and as you are clearly struggling in life right now, you run the risk of harming yourself further for no reason. I say give up, get out of the thread and concentrate on being around those who can and are willing to support you. The Internet is often just a psychological meat grinder and you have to know when to call it a day. I hope that the things in life that you are struggling with get better and that you have a support network around you. Clearly this thread is not that place. Cheers.


----------



## 71 dB

gregorio said:


> But that's not "out of the box" thinking, it's the exact opposite, it's thinking in a 60+ year old box! It was thinking out of the already 20 year old box that led to "artistic intention" over the conservative "natural spatiality" (in the 1950's). So, you have it completely backwards, you're advocating 60+ year old conservatism that's so out of date it precludes the existence of almost all music created in the last 50 years or so!
> 
> G



Nice try. The unnatural "artistic intention" spatiality was a brain fart caused by reckless use of a new technological possibility, stereophonic sound. There is a reason why ping-pongy recordings aren't a thing anymore and why a lot of modern music production uses concepts like mono bass to use stereophonic sound more skillfully. Your thinking seems to be inside that box, judging by how you advocate artistic intent. Skillful use of natural spatiality opens new possibilities that were not possible in the old days because of technological limitations.
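As an aside, the "mono bass" concept mentioned above is simple to sketch. This naive frequency-domain version (the 120 Hz crossover is an illustrative choice, not a production standard; real implementations use crossover filters rather than a hard FFT-bin split) sums both channels below the crossover and leaves the rest of the spectrum in stereo:

```python
import numpy as np

def mono_bass(stereo, sample_rate, crossover_hz=120.0):
    """Sum the two channels to mono below a crossover frequency.
    Naive frequency-domain implementation for clarity only: a hard
    spectral split like this causes ringing on real material."""
    spec = np.fft.rfft(stereo, axis=0)                      # (bins, 2)
    freqs = np.fft.rfftfreq(len(stereo), d=1.0 / sample_rate)
    low = freqs < crossover_hz
    # Replace both channels' low bins with their average: (L + R) / 2.
    spec[low] = spec[low].mean(axis=1, keepdims=True)
    return np.fft.irfft(spec, n=len(stereo), axis=0)
```

Feeding it a signal with 50 Hz only in the left channel and 1 kHz only in the right leaves the 1 kHz tone untouched while the 50 Hz content comes out identical in both channels.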


----------



## Sonic Defender

Sigh.


----------



## bigshot (Mar 3, 2019)

71 dB said:


> Well, I came here too to talk about headphone audio, but why does the conversation go off the trail?



You asked the question... now you get my honest and straightforward advice...

The conversation goes off track because you keep making it all about you instead of keeping focused on the subject. I say that speakers have a quality of sound that headphones can't match, and I list some of those qualities... You don't discuss those unique qualities; instead you start telling me about your small apartment and act all put upon because "we can't all afford an expensive speaker system and have a house to live in". Gregorio spends hours typing up detailed technical information about why he makes the claims he does, and you blow right on by, ignoring every bit of it. Then you focus on a few words spoken in frustration and complain that he isn't treating you as nicely as you'd like. Why is he frustrated? Well for one thing, he just spent a half hour typing up solid information for you and you blew right past all of it! And why does Gregorio's frustration mount in post after post? Because in five previous posts, he nicely spent a half hour each time typing all the same information for you, just to have you blow right past it each and every time.

You know there are professionals in this group, so you list your resume as if that makes a difference, you discuss your current job status, and you tell us how your previous employer didn't appreciate you. You get angry at Gregorio and you say all sound engineers are bums. You tell us you feel like you aren't respected or understood by this world and try to passive-aggressively make us feel responsible for that. I'm sorry, I don't feel even the tiniest bit responsible, and there's no reason I should. I've nicely told you several times that if you need self validation and sympathy, your real world friends and family are the ones to go to for that. Here, we are all just random voices on the internet discussing home audio. We're only responsible for ourselves and we expect everyone else to be responsible for themselves.

That said... if you want to talk about audio subjects, this is the place. If you really do want to talk about audio, there are a few things you need to start doing: focus on the topic and actually READ everyone else's posts and TRY TO UNDERSTAND THEIR POINTS before replying. If you are going to reply to someone's post, reply to their points; don't use it as an excuse to launch into irrelevant self serving stuff. If they are right, admit that. If they are wrong, try to give information that will explain to them why they are wrong. Don't make stuff up or use emotional manipulation or try to trick your way past with logical fallacies. That stuff doesn't work. Most of all, approach the discussion as an opportunity to LEARN, not an exercise in self validation.

Go back and read your past few posts... "Why are you smearing me?" "How many policemen in Finland can't be wrong?" "Sound engineers and brain farts"... You're reacting emotionally, without logic, off point, and multi-posting in a row for no purpose. I could respond to your posts on the rare occasions when you actually make a point, but I've given up because your signal to noise has dropped so low... and all of those points have already been answered five times before by Gregorio.

It doesn't matter if someone knows more than you on the internet. You don't have to *WIN* every argument at all costs, and you shouldn't feel the need to have to impress us. If you get frustrated or feel bad, you can walk away and get a breath of fresh air in the real world and not worry at all about the discussion. If you carry any frustration at all away from this group, you don't belong in this group. You're just grabbing on too tight and losing sight of why we are here. (Not that you're the only one...) Just loosen up and you will do perfectly fine. If you can't do that, go out in the real world and turn the computer off.

I'm not being mean here. I'm speaking straightforwardly.


----------



## 71 dB

bigshot said:


> You asked the question... now you get ny honest and straightforward advice...
> 
> The conversation goes off track because you keep making it all about you instead of keeping focused on the subject. I say that speakers have a quality of sound that headphones can't match, and I list some of those qualities... You don't discuss those unique qualities, instead you start telling me about your small apartment and act all put upon because "we can't all afford an expensive speaker system and have a house to live in". Gregorio spends hours typing up detailed technical information about why he makes the claims he does, and you blow right on by, ignoring every bit of it. Then you focus on a few words spoken in frustration and complain that he isn't treating you as nicely as you'd like. Why is he frustrated? Well for one thing, he just spent a half hour typing up solid information for you and you blew right past all that! And why does Gregorio's frustration mount in post after post? Because in five previous posts, he nicely spent a half hour each time typing all the same information for you, just to have you blow right past it each and every time.
> 
> ...



Maybe I am more pragmatic than you. I seek audio solutions that are as cheap as possible so that as many people as possible have access to them. I may have a small apartment, but I believe I am still richer than 90 % of people in the World! That's because I live in a rich first World country. Your "great multichannel speakers in a large room" solution is good for less than 10 % of people in the World. My proper crossfeed solution is MUCH cheaper, meaning maybe more than half of all people in the World can have it. That's why I don't see much interest in talking about expensive systems/solutions. Of course it sounds great if you spend half a million on it! My question is how to get great sound for a few hundred bucks?
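For reference, the core of a cheap software crossfeed really is only a few lines. This generic sketch (the delay, attenuation and cutoff defaults are illustrative values, not 71 dB's or any particular plugin's published settings) mixes a delayed, low-pass-filtered, attenuated copy of each channel into the other:

```python
import numpy as np

def simple_crossfeed(stereo, sample_rate, delay_ms=0.3,
                     atten_db=-8.0, cutoff_hz=700.0):
    """Generic crossfeed sketch: add a delayed, low-passed, attenuated
    copy of each channel to the opposite one. All parameter values are
    illustrative defaults."""
    d = int(round(delay_ms / 1000.0 * sample_rate))
    gain = 10.0 ** (atten_db / 20.0)
    # One-pole low-pass coefficient for the given cutoff.
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    dry = stereo.astype(float)
    out = dry.copy()
    for src, dst in ((0, 1), (1, 0)):
        acc, lp = 0.0, np.empty(len(dry))
        for i, x in enumerate(dry[:, src]):   # one-pole low-pass
            acc += alpha * (x - acc)
            lp[i] = acc
        if d > 0:
            out[d:, dst] += gain * lp[:-d]    # delayed cross-mix
        else:
            out[:, dst] += gain * lp
    return out
```

A hard-panned left-only signal comes out with its original left channel intact plus a quieter, duller copy in the right channel, which is the whole trick: the brain gets the inter-channel correlation it expects from speakers.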

Gregorio bases his details on artistic intent, which makes them uninteresting to me, as I want to base things on science. I learned science at university, not artistic intent. I went to Helsinki University of Technology (since renamed Aalto University), not the Sibelius Academy, where they perhaps teach artistic intent? I talk about stuff I know, not stuff I don't know, such as artistic intent. This is the science sub-forum anyway, isn't it? Not the artistic intent sub-forum.

I had a nice Sunday visiting my sister, who had made tasty vegan food, thanks for your concern. If I talked about spatiality to my sister, who writes poems for a living, she wouldn't understand the first thing I say. I have to come to a forum like this to find people who understand spatiality. Well, actually my best friend understands, as acoustics was his minor at university, but in general 99 % of people couldn't care less about spatiality! We are those rare crazy people who do care about spatiality. WE!

You are often right in my opinion and so is Gregorio. I disagree with you about whether headphones have a soundstage and I disagree with Gregorio about excessive spatiality. That's the main disagreements. On things like 16 bit vs 24 bit I totally agree with you and Gregorio.


----------



## bfreedma

Just my opinion - some of the recent posts would be more appropriate as PMs.


----------



## Davesrose (Mar 3, 2019)

71 dB said:


> You are often right in my opinion and so is Gregorio. I disagree with you about whether headphones have a soundstage and I disagree with Gregorio about excessive spatiality. That's the main disagreements. On things like 16 bit vs 24 bit I totally agree with you and Gregorio.



I can appreciate how steadfast Gregorio can be: I've had interactions with him and have seen that he can be incapable of civility in distilling information for non-sound engineers. However, I would also agree with everyone that you are taking this too personally. In this thread (which has recently seen a lot of crapping), you have been fanning flames by going on about how you get attacked and are not respected... and then going on about how you [and apparently only you] know about spatiality... and that anyone else's setup (headphones or speaker system) isn't as pertinent as yours.

From what I can gather, my own speaker setup isn't as spread out as bigshot's. For me, I think it works well, as my 7.1.4 setup does have quite a bit of direct sound that doesn't need much acoustic treatment... but that's my setup. When it comes to headphones, the best surround scheme I've heard is a now out-of-production processor from Sennheiser: it takes stereo (and Dolby matrix surround), and there are a lot of parametric settings to adjust. If you spend the time, it does sound better than the current Dolby Atmos headphone surround. But none of them have a seamless blend of 360 degree sound like the recent Atmos/DTS:X/Auro-3D speaker setups. When it comes to soundstage for headphones... I've equated it to hearing instrumentation that seems wider, but it always seems pointed towards your head. That's not always a bad thing... and the overall qualities of my headphone gear are great for enjoying music.


----------



## gregorio

71 dB said:


> [1] Haas effect doesn't work? So why do they teach it in Acoustic 101 course?
> [2] 50,000 professional sound engineers around the World is not much.
> [3] My proper crossfeed -solution is MUCH cheaper meaning maybe more than half of all people in the World can have it.
> [4] The unnatural artistic intention spatiality was a brain fart caused by reckless use of technological possibilites, stereophonic sound.
> ...



1. Because it clearly and succinctly demonstrates ONE of the processes the brain uses to perceive ONE aspect of spatiality. Why don't they ONLY teach the Haas Effect in Acoustics 101?

2. I said "_probably more than 50,000_", it could be 100,000, and I also said there's probably been a million or more over the course of the last 60+ years. Most/all of whom apparently suffer from permanent "brain farts" and ignorance of spatiality.

3. You don't have a "proper crossfeed solution", all you have is a solution which you personally prefer.

4. Yes of course. About a million sound engineers, Stockhausen, Spector, Martin/The Beatles and all the great music production masters are all just brain farts, except for you of course.

5. Ping-pongy recordings are still a thing, you just apparently lack the listening skills to hear/perceive it.
5a. That's NOT a modern music production concept! You do realise that recording technology was only mono (obviously including the bass) for 70 years or so? Completely contrary to being "modern", there isn't in fact an older music production concept than mono bass! Furthermore, even after stereo became a relatively common consumer format, mono bass was a technical requirement/limitation of Vinyl. So, you've just made up some nonsense that's the EXACT OPPOSITE of the actual historical facts!


71 dB said:


> [6] Your thinking seems to be inside that box according to how you advocate artistic intent.
> [6a] Skillful use of natural spatiality opens new possibilities that were not possible in the old days because of technological limitations.
> [7] Gregorio bases his details in artistic intent which makes them uninteresting for me as I want to base things on science.
> [7a] I talk about stuff I know, not stuff I don't know such as artistic intent.
> ...


6. You admit to not being interested in artistic intent, not studying it at university and not knowing about it, yet here you are making (false) assertions about it and about how I think/understand/advocate it!
6a. What technological limitations? The original patent application for stereo sound (1931, Alan Blumlein) detailed the mic'ing technique to perfectly represent natural spatiality within the stereo soundfield (the "Blumlein Pair"). There were no "technological limitations" (!), this "natural spatiality" was the default stereo music recording strategy and the "possibilities" were extensively explored over a period of about 20 years, until there was nothing left to explore and music recording/production evolved beyond the limitations of natural spatiality in the 1950's. Again, you are making up assertions which are exactly contrary to the actual historical facts!

7. What is it you are listening to and perceiving? Is it recordings of science or recordings of music? Do you understand that music is an art and not a science? If so (and as music is therefore "artistic intent"), why do you listen to music if you are uninterested in it?
7a. Huh? You've just made assertions about how I am thinking in a box about artistic intent, that the great music production innovators and masters were all suffering from "brain farts" and that artistic intent should follow the rules of natural spatiality and now you're saying that you don't know anything about artistic intent and don't talk about it! How much more self-contradictory could you possibly be??
7b. If this is the science sub-forum, why are you "making things up" that contradict the actual facts? How is that science? Isn't that actually the exact opposite of science?
7c. Then why did you bring it up? YOU are the one talking about the "proper"/"correct" way of reproducing music/artistic intention!

G


----------



## 71 dB

gregorio said:


> 6a. What technological limitations? The original patent application for stereo sound (1931, Alan Blumlein) detailed the mic'ing technique to perfectly represent natural spatiality within the stereo soundfield (the "Blumlein Pair"). There were no "technological limitations" (!), this "natural spatiality" was the default stereo music recording strategy and the "possibilities" were extensively explored over a period of about 20 years, until there was nothing left to explore and music recording/production evolved beyond the limitations of natural spatiality in the 1950's. Again, you are making-up assertions which are exactly contrary to the actual historical facts!
> 
> G



How does a Blumlein pair give natural spatiality? Two figure-8 mics positioned 90° from each other at one point. How does such a set-up guarantee, for example, natural ITD when the ITD for all sounds is 0? Maybe this was "natural" in 1931 when they didn't know better. Another issue of course is that at that time there were only monophonic commercial sound formats. In the 50's the stereophonic commercial sound format finally arrived and stereo sound became a marketing gimmick. That kind of thing often happens with new technology: when digital sound formats arrived in movie theatres in the early 90's and allowed extremely strong low frequency effects, movies of that era used excessive bass to exploit the new technological possibilities until the low frequency effects returned to more rational levels.

A few years ago I crafted a table of microphone set-ups and how suitable they are for speaker and headphone listening:

[attached table image not shown]

I used a test CD which has the same musical performance recorded simultaneously using all these set-ups. Of course the result applies only to a recording scenario of this type, but it tells us about the differences between microphone set-ups, for example that the AB microphone set-up is very problematic for headphone listening without crossfeed.
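For anyone curious what crossfeed actually does to the signal, here is a minimal sketch in Python. This is my own illustration with made-up parameter values (`delay_ms`, `atten_db`, `cutoff_hz` are assumptions), not the DSP of any particular plugin mentioned in this thread: a delayed, low-pass filtered, attenuated copy of the opposite channel is mixed in, roughly mimicking how each ear also hears the far loudspeaker.

```python
import numpy as np

# Minimal crossfeed sketch. The parameter values (delay, attenuation,
# low-pass cutoff) are illustrative assumptions, not any plugin's defaults.
def crossfeed(left, right, fs=44100, delay_ms=0.3, atten_db=8.0, cutoff_hz=700.0):
    n = int(fs * delay_ms / 1000)            # interaural-style delay in samples
    g = 10 ** (-atten_db / 20)               # linear crossfeed gain
    a = np.exp(-2 * np.pi * cutoff_hz / fs)  # one-pole low-pass coefficient

    def lp_and_delay(x):
        y = np.empty_like(x)
        s = 0.0
        for i, v in enumerate(x):            # simple one-pole low-pass filter
            s = (1 - a) * v + a * s
            y[i] = s
        return np.concatenate([np.zeros(n), y[:len(x) - n]])  # then delay by n

    # Each output channel = its own signal + processed opposite channel
    return left + g * lp_and_delay(right), right + g * lp_and_delay(left)
```

Stronger settings quickly start to sound like the "room simulators" complained about earlier in the thread; keeping the crossfed signal quiet, dull and slightly late is what keeps it transparent.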


----------



## 71 dB

gregorio said:


> 7. What is it you are listening to and perceiving? Is it recordings of science or recordings of music? Do you understand that music is an art and not a science? If so (and as music is therefore "artistic intent"), why do you listen to music if you are uninterested in it?
> G



Melodies, harmonies, rhythm, timbre, counterpoint, musical contrast, spatiality etc. things that in my opinion are related to music. I do understand/enjoy Bach's counterpoint without studies in the Sibelius Academy, but I can't understand/enjoy excessive spatiality so I suppose I should have studied in the Sibelius Academy for that. For me it's as crazy as screening a movie up side down. I consider myself an artistic person and I make my own music. My artistic "intent" happens to be natural spatiality.


----------



## bigshot (Mar 4, 2019)

Well, I was being honest and clear and it was like talking to a brick wall. This rabbit hole is too deep. I guess I have no choice but to add another name to the ignore list. I'm really curious why this group attracts this sort of non-self-aware personality type. Going to have to think about that one... Is it the subject of audio in general, or is it the dynamics of this particular group? I'm suspecting it's the latter. I just hope Gregorio doesn't burn out from dealing with all of them himself.

bfreedma, I generally only do one (sometimes two) shot across the bow, and they have to be public or they don't work as a shot across the bow. In this case, the results would have been the same in PM... or even just blocking and not saying anything... but you never know if someone is going to listen or not until you say it. There won't be any more here.


----------



## 71 dB

Davesrose said:


> I can appreciate how steadfast Gregorio can be: I've had interactions with him and have seen that he can be incapable of civility in distilling information for non-sound engineers.  However, I would also agree with everyone that you are taking this too personally.  In this thread (which recently there has been a lot of crapping), you have been fanning flames by going on about how you get attacked and are not respected....and then go on about how you [and apparently only you] know about spatialty...and that anyone else's setup (headphones or speaker system) isn't as pertinent as yours.
> 
> From what I can gather, my own speaker setup isn't as spread out as bigshot's.  For me, I think it works well, as my 7.1.4 setup does have quite a bit of direct sound that doesn't need much acoustic treatment...but that's my setup.  When it comes to headphones, the best surround scheme I've heard is a now out of production processor from Sennheiser: it takes stereo (and Dolby matrix surround), and then there's a lot of settings for setting parametric settings.  If you spend the time, it does sound better then the current Dolby Atmos heaphone surround.  But none of them have a seemless blend of 360 degree sound as the recent Atmos/DTS:X/Auro-3D speaker setups.  When it comes to soundstage for headphones...I've equated it to hearing instrumentation that seems wider, but it always seems pointed towards your head.  That's not always a bad thing....and overall qualities of my headphone gear is great for enjoying music.



I'm sure we all take what we read online personally, but of course it's another thing how we respond. I try to hide my personal feelings more. I am a socially clumsy guy and talking to other people isn't one of my strengths. That clumsiness must show in my posts. Sorry, but I am who I am. I don't suggest that only I know about spatiality. Of course not. I have learned most of what I know from other people and maybe learned something simply by trying things out myself.

I have never tested the Sennheiser processor or Dolby Atmos headphone surround.


----------



## bigshot

Davesrose said:


> When it comes to headphones, the best surround scheme I've heard is a now out of production processor from Sennheiser: it takes stereo (and Dolby matrix surround), and then there's a lot of settings for setting parametric settings.  If you spend the time, it does sound better then the current Dolby Atmos heaphone surround.



I've got a blu-ray that has Atmos and it has a special "headphone surround" track, but it just sounds slightly different than regular two channel and doesn't seem to have any surround feel to it at all. Sometimes I wonder if a good bit of people talking about headphone surround and soundstage depth is just the emperors new clothes.


----------



## gregorio (Mar 5, 2019)

71 dB said:


> [1] How does a Blumlein pair give natural spatiality? Two figure-8 mics positioned 90° from each other at one point. How does such a set-up guarantee, for example, natural ITD when the ITD for all sounds is 0?
> [1a] Maybe this was "natural" in 1931 when they didn't know better.
> [2] Another issue of course is that at that time there were only monophonic commercial sound formats.
> [2a] In the 50's the stereophonic commercial sound format finally arrived and stereo sound became a marketing gimmick.
> ...



1. Again, you are demonstrating your ignorance of the actual facts, the actual history AND even contradicting yourself! You stated just a few days ago: "_In order to record soundfields you need a soundfield microphone_", which is technically correct if we are talking about the actual spherical soundfield (although this is irrelevant with 2 channel stereo, which has no vertical, up/down spatial information). Clearly, you don't actually know what a soundfield microphone actually is! A soundfield mic is a tetrahedral array of directional mic capsules whose output is decoded to what is called "B-Format". B-Format is effectively a "Blumlein triple": a "Blumlein Pair" (which is a pair of 90deg coincident figure of 8 mics in the horizontal plane) plus an additional coincident figure of 8 mic at 90deg in the vertical plane. A Blumlein Pair IS therefore a soundfield mic (just without the vertical plane) and you are contradicting yourself!
1a. A soundfield mic is still the most "natural" in 2019 and therefore Blumlein clearly did know better! Furthermore, that the Blumlein Pair produces the most natural/convincing soundstage is pretty much beyond rational dispute, it's the consensus of the recording industry and has been confirmed in numerous published hearing tests (this one for example).

2. Well obviously. Blumlein wouldn't have been able to patent something that had already been invented and was available commercially. However, your next assertion is utterly false!
2a. The first commercial stereo system was developed by Blumlein himself (from his 1931 patent application concepts) and the first stereo vinyl LP was cut at Abbey Road Studios in 1934. However, this was not the first application of stereo itself. Stereo was first demonstrated at an exhibition in Paris in 1881, the BBC made the first stereo radio broadcast in 1926, the first stereo recording was cut on wax disc (using two grooves) in the USA in 1932, the first binaural recording (dummy head + 2 mics) was demonstrated publicly in 1933 and the first 3 channel stereophonic system was also demonstrated in 1933. Throughout the 1930s stereophonic sound was explored and developed, even an 8 channel version of surround sound in 1938 (released on Walt Disney's "Fantasia"). By the 1940s there were various competing commercial systems: tape, optical film and disc based. It's not until 1957 that stereo vinyl LPs were released to the public (although an expensive tape based stereo system was available to the public several years earlier) but by that time stereo was very well tested, explored and understood! It's also completely false that it was a marketing gimmick, even in 1881 its actual advantages were demonstrated beyond doubt. By the mid/late 1940s there were countless (probably thousands of) commercial stereo recordings (though not available to the public in stereo) and to my knowledge ALL OF THEM were recordings of orchestras (symphonies and jazz orchestras) that were using stereo to record/reproduce natural spatiality, no gimmicks at all, that didn't come until nearly two decades later, when popular music genres started playing with stereo.
2b. Maybe, but what's that got to do with stereo? By the early 1960's stereo was NOT a new technology, it was an 80 year old technology that had been commercially available for about 25 years!
2c. Again, more utter nonsense that you've just made-up that contradicts the actual historical facts!! Why do you keep doing this? You say you want to be respected for being smart and knowledgeable but just making-up false facts demonstrates the exact opposite! The LFE channel was developed by Dolby in the mid 1970's (6 channel 70mm film format) and used quite extensively. Dolby Digital 5.1 in the early 1990's just continued that tradition with exactly the same level calibration (+10dB in channel gain), which was due to the narrow/limited headroom available on 70mm mag film.

3. Even if we were to assume your chart is correct (which it isn't!) the obvious question would be: How many commercial music recordings are of those types? The answer is virtually none!! Virtually all commercial stereo music recordings from the late 1950's used a combination of both stereo and (mono) spot mics and often more than one stereo mic type. Orchestral recordings (for example) by the late 1950's/early 1960's typically used an XY, ORTF or Decca Tree, plus a very wide AB Pair (outriggers), plus some mono spot mics. In rock/popular music the drum kit typically had an overhead coincident stereo pair, plus mono spot mics for most of the individual instruments in the kit and everything else was mono, except if there was a piano (an AB pair) and/or backing vocals which were often recorded in stereo. Therefore, "_Of course, the result only applies_" to your test CD but virtually no actual commercial music recordings!

4. What do you mean "related" to music, they are music.
4a. Enjoy certainly but understand, I very much doubt that!

G


----------



## bigshot

How is a recording with a sound field microphone supposed to be reproduced? Is there a matching multichannel speaker arrangement? I'm guessing it isn't used much except for technical applications.


----------



## 71 dB

bigshot said:


> How is a recording with a sound field microphone supposed to be reproduced? Is there a matching multichannel speaker arrangement? I'm guessing it isn't used much except for technical applications.



The raw signals from a soundfield microphone are called A-format and aren't intended to be used without further processing, done in real time during the recording or later. The processed signals are called B-format and consist of 4 signals: W (omnidirectional information), X (front-back information), Y (left-right information) and Z (up-down information). So, the B-format is the same as if we had one omnidirectional mic and 3 figure-8 mics orthogonal to each other covering all 3 spatial dimensions. That's kind of like a 3-dimensional version of an MS stereo setup.
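As a sketch of how those four B-format signals get used (my own illustration, using an idealised normalisation and ignoring the common practice of recording W 3 dB down), a "virtual microphone" pointing in any direction can be derived as a weighted sum of W, X, Y and Z:

```python
import numpy as np

# Hypothetical sketch: derive a virtual mic signal from B-format channels
# W (omni), X (front-back), Y (left-right), Z (up-down).
# p sets the polar pattern: 0 = omni, 0.5 = cardioid, 1 = figure-8.
def virtual_mic(W, X, Y, Z, azimuth, elevation, p=0.5):
    # Unit vector of the virtual mic's aim direction
    vx = np.cos(azimuth) * np.cos(elevation)
    vy = np.sin(azimuth) * np.cos(elevation)
    vz = np.sin(elevation)
    # Omni component plus the figure-8 component projected onto the aim
    return (1 - p) * W + p * (vx * X + vy * Y + vz * Z)

# e.g. a coincident cardioid pair at +/-45°, an XY-style stereo pair:
# left  = virtual_mic(W, X, Y, Z, np.radians(45), 0.0)
# right = virtual_mic(W, X, Y, Z, np.radians(-45), 0.0)
```

This is why B-format is such a flexible intermediate: the aim, pattern and channel count are all decided after the recording.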


----------



## 71 dB

Stereo sound is old, I know that, of course, but what I mean is in the late 50's it became possible to sell millions of stereophonic recordings to the masses. 70 mm copies were a bit rare. I have read Star Wars had some 70 mm copies with 6-channel audio, but most copies were 35 mm with 4-channel Dolby Surround sound.


----------



## Davesrose

bigshot said:


> I've got a blu-ray that has Atmos and it has a special "headphone surround" track, but it just sounds slightly different than regular two channel and doesn't seem to have any surround feel to it at all. Sometimes I wonder if a good bit of people talking about headphone surround and soundstage depth is just the emperors new clothes.



Have you tried it with an Atmos specific processor?  I've got quite a few UHD discs and digital copies that are Atmos.  I haven't tried plugging headphones into my Atmos receiver to see if there are specific 3D surround options for headphones...since I enjoy movies on my 7.1.4 setup   I'll try doing it soon to see if there is any difference with a regular stereo source.  My Windows 10 computers have Dolby Atmos (and sometime soon, there's going to be a DTS app as well).  I've only tried listening to music with Dolby Atmos Headphone on and off....I also found with regular stereo there's not much of a difference (didn't hear any difference in soundstage...main thing was some added bass).  I do have a UHD drive that I can try some movies out on the computer to see if there are surround effects with Atmos tracks and Atmos Headphone (for Windows).  But I suspect I also won't hear something impressive with either my receiver or Atmos for Windows.  The only time I have heard immersive effects from traditional open back headphones is that out of production Sennheiser processor.  It took some experimenting with settings to get it to sound like a good surround field (and it's different for everyone's pinnas), but I could hear sound going in back of me.  I suppose it never became popular because it does take effort in configuration.


----------



## Davesrose (Mar 5, 2019)

71 dB said:


> Stereo sound is old, I know that, of course, but what I mean is in the late 50's it became possible to sell millions of stereophonic recordings to the masses. 70 mm copies were a bit rare. I have read Star Wars had some 70 mm copies with 6-channel audio, but most copies were 35 mm with 4-channel Dolby Surround sound.



Keep in mind that the 6 track 70mm prints did not have surround sound as we know it (in fact, it was still called Dolby Stereo...because of the use of Dolby noise reduction).  Most tracks were centers for the screen, and included one rear and up to two bass channels (and only a few theaters kept a full 6 tracks).  The main selling point of 6 track stereo was that it was higher fidelity and required less expensive equipment compared to previous attempts with magnetic iron oxide tracks.


----------



## bigshot (Mar 5, 2019)

Davesrose said:


> Have you tried it with an Atmos specific processor?



It's some sort of 2 channel surround format that supposedly works with any system. I don't think it works. I can't make binaural sound like anything dimensional either. The best I can do is have the sound a few inches in front of my eyes... but then it snaps to a few inches behind my head and flickers back and forth. There's no way I can control it. It might be some sort of compatibility thing with my noggin. Headphone surround just doesn't seem to work.

Matrixed Dolby Stereo with the encoded center and rear is baked into a lot of movies and TV shows, but they almost never advertise that it is a 4 channel matrix on the package. It irritates me that I have to bounce through various decoding schemes to find the one that works the best. The manual says to just set it to auto surround and it will figure it out, but it doesn't. Frustrating.

If you want to hear a great example of Dolby Surround, check out Joseph Losey's Don Giovanni (1979). It was one of the first wide release surround movies made, but the original implementation of surround sucked because they hadn't worked out the bugs yet. They recently went back and restored the film and the sound is incredible.

Here in LA I think I always saw the full blown Dolby surround.  I think we were among the first to get it. I don't think I ever saw Star Wars movies when they weren't 70mm 6 track.


----------



## Davesrose

bigshot said:


> It's some sort of 2 channel surround format that supposedly works with any system. I don't think it works. I can't make binaural sound like anything dimensional either. The best I can do is have the sound a few inches in front of my eyes... but then it snaps to a few inches behind my head and flickers back and forth. There's no way I can control it. It might be some sort of compatibility thing with my noggin. Headphone surround just doesn't seem to work.



Yeah, I think that's what made the Sennheiser processor effective.  I found adjusting the parametric setting (that's titled "ears") really does let you fine tune front and back effects based on your own head.  Recent "surround" headphone schemes don't seem to have many options (that I've seen).  But maybe tomorrow I'll pop in an Atmos movie and plug in headphones to my receiver (which has surround parameters for each surround mode), and also check Atmos headphone in Windows with an Atmos surround track.  Will report if I do see options for headphones and if I do hear anything beyond regular stereo.


----------



## 71 dB

Davesrose said:


> Keep in mind that the 6 track 70mm prints did not have surround sound as we know it (in fact, it was still called Dolby Stereo..because of the use of Dolby noise reduction).  Most tracks were centers for the screen, and included one rear and up to two bass channels (and only a few theaters kept a full 6 tracks).  The main advertising for 6 track stereo was that it was higher fidelity and less expense in equipment compared to previous attempts with magnetic iron oxide tracks.



Maybe I am mistaken, but I had understood digital movie sound allowed more dynamic range and that's why movies of the early 90's had such strong low frequency content. I knew a guy whose brother worked in a movie theater and told these things.


----------



## Davesrose

71 dB said:


> Maybe I am mistaken, but I had understood digital movie sound allowed more dynamic range and that's why movies of the early 90's had such strong low frequency content. I knew a guy whose brother worked in a movie theater and told these things.



LFE was introduced before Dolby Digital.  Dolby had it in their 6 track stereo, and there is an argument that it was more important because of the limitations of dynamic range with magnetic tracks.


----------



## gregorio

bigshot said:


> How is a recording with a sound field microphone supposed to be reproduced? Is there a matching multichannel speaker arrangement? I'm guessing it isn't used much except for technical applications.



There isn't a matching multichannel speaker arrangement, it's not designed to be output to speakers and that's why it's so useful! It's used quite often in the TV and film world, as an intermediate format. Let me try to explain without getting into the actual maths of it all.

I assume you've heard of microphone polar (pick-up) patterns: Omni, Bi-directional (figure of 8), Cardioid, Hyper-cardioid, etc? However, of all the different available mic polar patterns only two actually physically exist, Omni-directional and Bi-directional. The Cardioid pattern (and all the variations of it) doesn't physically exist, it's effectively a mathematical construct: The rear "lobe" of a bi-directional (figure of 8) mic is precisely 180deg out of phase with the front "lobe". So, if you simultaneously capture a signal with both an omni pattern (which is entirely in phase) and a bi-directional pattern and then add the two together, the front lobe of the bi-directional pattern sums with the omni pattern while the rear lobe completely (phase) "cancels" with it and the result is the cardioid pattern. And changing the relative levels/balance of the omni and bi-directional signals before adding them together results in the different types of cardioid patterns.

A soundfield mic employs exactly the same mathematical principle, although in a rather more complex way. When decoded to "B-Format" we've effectively got 3 figure of 8 mics (representing front/back, left/right and up/down) and an omni-directional mic. By adding together all these signals in various proportions, all the different potential phase summations and cancellations result in an effectively limitless number of polar patterns. In other words, from "B-format" we can (mathematically perfectly) derive mono, stereo, 5.1, 7.1, Atmos or any other speaker format.
This "B-Format" is commonly employed in sound effects libraries; You buy a sound effect in B-Format and then, with the appropriate plugin/algorithms, turn it into (mathematically perfectly) whatever format your project is in (stereo, 5.1 or whatever). In case it's not obvious, the recording itself isn't perfect of course, just the derivation of the channel format. Additionally, it's only a single point/location recording, you can't for example derive a spaced (AB) stereo pair from it.
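The omni-plus-figure-8 summation described above is easy to verify numerically; a quick sketch (Python, my own illustration of the principle):

```python
import numpy as np

# Ideal polar responses at angle theta (radians, 0 = on-axis).
# The figure-8's rear lobe is negative, i.e. 180 deg out of phase.
def omni(theta):
    return np.ones_like(theta)

def figure8(theta):
    return np.cos(theta)

# An equal-parts sum gives a cardioid: 0.5*(1 + cos(theta)),
# i.e. full pickup on-axis and a complete null at the rear.
# Other mixes give the sub-/hyper-cardioid variants.
def cardioid(theta, mix=0.5):
    return (1 - mix) * omni(theta) + mix * figure8(theta)

theta = np.linspace(0.0, np.pi, 5)
print(cardioid(theta))  # smoothly falls from 1.0 (front) to 0.0 (rear)
```

Changing `mix` is exactly the "relative levels/balance" adjustment described above: the phase cancellation at the rear does all the work.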



71 dB said:


> So, the B-format is the same as if we had one omnidirectional mic and 3 figure-8 mics ortogonally to each other covering all 3 spacial dimensions. That's kind of like 3-dimension version of MS stereo setup.



Yes, in fact it's not a "kind of 3D MS stereo setup", it IS effectively a 3D MS setup. Including the filter/EQ compensation of microphone physics and psychoacoustics (of the "S", differential signal). And, who was it who invented the MS setup and the physics/psychoacoustics EQ compensation? I'll give you a clue, it was all fully described in his 1931 patent application! Michael Gerzon, the mathematical genius behind the invention of ambisonics and the soundfield mic, specifically stated on numerous occasions that his invention was based on the principles/math of Alan Blumlein. - Blumlein did not invent stereo, stereo was invented more than two decades before Blumlein was even born, what he invented was a system based on the psychoacoustic "duplex theory" of sound localisation (ILD, ITD and head shadow effects in different frequency ranges) published by Lord Rayleigh in 1907. In other words, a system designed on the principle of how a human would psychoacoustically perceive two spaced loudspeakers reproducing a stereo signal. This was in contrast to American researchers in stereo at the time, who were not considering many of these psychoacoustic effects and were therefore working with spaced stereo pairs. He invented not only the coincident Mid and Side (sum and difference) but how to cut MS into vinyl (all stereo vinyl even today is MS) and also how to psychoacoustically change sound localisation within the stereo field by manipulating the EQ of the sum and difference channel. All of this was detailed (both conceptually and mathematically) in his 1931 patent application. An MS pair produces extremely accurate stereo sound location although it has a slight technical weakness/insensitivity to sounds (acoustics/reflections) arriving from the rear.
This technical weakness is overcome by the crossed coincident figure of 8 pair (Blumlein Pair), which can be considered the most technically accurate/perfect of all stereo pair mic configurations with regard to the entire (horizontal plane) soundfield. However, this technical perfection is ironically its downfall; it's very rarely ever used in practice because almost no one prefers to hear what the ears perceive, they prefer to hear what the brain perceives, which is a relative reduction in the amount of acoustic reflections/reverb (relative to the direct signal)!
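The Mid/Side sum-and-difference maths mentioned here is simple enough to show directly (a sketch of the principle, not anyone's production code); note how scaling the Side channel manipulates image width, in the spirit of Blumlein's difference-channel manipulation:

```python
import numpy as np

def ms_encode(left, right):
    # Mid = sum, Side = difference (halved so the round trip is unity gain)
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_decode(mid, side, width=1.0):
    # width > 1 widens the stereo image, width < 1 narrows it toward mono;
    # width = 0 discards the Side channel entirely (dual mono)
    left = mid + width * side
    right = mid - width * side
    return left, right
```

Encoding then decoding with `width=1.0` reproduces the original L/R signals exactly, which is why MS can be used as a lossless intermediate (as on stereo vinyl).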

If you want to be seen as knowledgeable on the subject, rather than just making-up false facts and telling everyone how knowledgeable you are, why don't you learn some of the actual history and facts? 



71 dB said:


> [1] 70 mm copies where a bit rare. I have read Star Wars had some 70 mm copies with 6-channel audio, but most copies were 35 mm with 4-channel Dolby Surround sound.
> [2] Maybe I am mistaken, but I had understood digital movie sound allowed more dynamic range and that's why movies of early 90's had so strong low frequency content.
> [2a] I knew a guy who's brother worked in a movie theater and told these things.



1. It doesn't matter how many copies there were! Even if there were only one copy, then the dubbing stage still had to create a 6-channel mix for that one copy and as is still the case today, the "main" mix is the higher channel count version and the lower channel count version is derived from that (with or without the Director present), even though the higher channel count mix will be heard by fewer cinema goers.

2. Digital movie sound does in theory allow more dynamic range but that's because of a lower noise floor, not a higher peak level. The calibration level for Dolby Digital was the same as the previous (analogue) format.
2a. And I know a guy who doesn't work in a movie theatre but actually creates and mixes film sound in commercial theatrical dubbing theatres. I see that guy several times a day, in fact every time I look in a mirror! This is the science forum, not the "I know a guy who had a brother, who told me something" forum.

G


----------



## 71 dB

gregorio said:


> If you want to be seen as knowledgeable on the subject, rather than just making-up false facts and telling everyone how knowledgeable you are, why don't you learn some of the actual history and facts?
>
> G

I learn differently. I tend to forget names and years but I learn principles, because I am a system thinker, not a historian. My acoustics 101 course was ~25 years ago and keeping all the details (names, years) in my head is difficult, but I learned the principles behind microphone set-ups, the stuff that counts and makes me understand things. If I need names and years I have Google to help me out.

It seems remembering names and years makes you look knowledgeable in the eyes of other people, but I think deep understanding of the principles is what's important.


----------



## 71 dB

gregorio said:


> 2a. And I know a guy who doesn't work in a movie theatre but actually creates and mixes film sound in commercial theatrical dubbing theatres. I see that guy several times a day, in fact every time I look in a mirror! This is the science forum, not the "I know a guy who had a brother, who told me something" forum.
> 
> G



So the scientific fact is that movies in the 70's had as much bass as movies in the 90's? If so, then I have been wrong and thanks for correcting me.


----------



## bigshot

Thanks for the info Gregorio. I was thinking from the context that it was some sort of format that recorded "spatiality" for music. It's more of just a swiss army knife way of recording a single mix element so it can be used in a bunch of different applications. It doesn't apply much to what we're talking about, but it's interesting. Thanks.


----------



## 71 dB

Lord Rayleigh conducted binaural experiments in 1876, similar to the experiments by Giovanni Venturi nearly a century earlier, but he could only explain the results with the head shadow effect (ILD). In 1907, in his duplex theory, he included ITD to explain spatial hearing at low frequencies.


----------



## gregorio (Mar 8, 2019)

71 dB said:


> [1] I tend to forget names and years but I learn principles, because I am a system thinker, not a historian.
> [1a] My acoustics 101 course was ~25 years ago and keeping all the details (names years) in head is difficult, but I learned the principles behind microphone set ups, the stuff that counts and makes me understand things.
> If I need names and years I have Google to help me out.
> [2] It seems remembering names and years make you look knowledgeable in the eyes of other people, but I think deep understanding of the principles is what's important.



1. That raises two obvious problems: Firstly, if you learned the principles but not the history, why do you keep making all kinds of assertions about the history? Secondly, what use are the principles without the context of history? ...
1a. Maybe this explains your problems. Learning "the principles behind microphone set ups" is NOT "the stuff that counts" and does NOT "make you understand things", in fact quite the opposite, it makes you misunderstand things! Learning the principles is just basic knowledge, NOT "understanding", it's just the first step towards understanding. "Understanding" ONLY comes when you contextualise those principles, appreciate how they relate to each other and therefore when they are relevant and/or applicable (and when they are not). This is where you fail; you take a single principle, ignore the context (or make-up a false context), then assert that this is "understanding" and that others don't have your understanding (are ignorant). But, that isn't "understanding", it's "misunderstanding" (lack of understanding) and obviously, few others suffer from that same misunderstanding!

2. A "deep understanding of the principles" IS what's important, the problem is: You do NOT HAVE a deep understanding of the principles! You've managed to convince yourself that you do have a deep understanding because you have quite a lot of knowledge about SOME of the principles but what you really have is: Almost complete ignorance of other highly pertinent principles, apparently little idea how/why/when the principles relate to each other and therefore, pretty much the opposite of "a deep understanding"! Remembering names and dates does not make me look knowledgeable, as you say, any kid with Google can do that. What makes me (somewhat) knowledgeable is knowing and understanding the history, because development/innovation is driven by weaknesses discovered in practical application and history therefore provides the context of how, why and when the principles relate to each other and are applicable. This is why university students are not taught only the principles but the history, because the goal of education isn't just knowledge but understanding.



bigshot said:


> Thanks for the info Gregorio. I was thinking from the context that it was some sort of format that recorded "spatiality" for music.



It's that as well! It's not a format/system designed specifically for capturing the ambient/diffuse acoustic soundfield (such as the "Hamasaki Square" for example), the concept of the soundfield mic is that it captures the entire soundfield mathematically perfectly AND, due to the mathematical arrangement of the different mics' polar patterns, any other arrangement of coincident mics can be derived.

If we just consider the horizontal plane, all stereo mic'ing techniques have (different) strengths and weaknesses. The various coincident pairs do not capture the rear of the soundfield (the important early rear reflections for example) and as cardioid patterns must be used, they are subject to the proximity effect (inaccurate low freq response dependent on distance from the source); an omni spaced pair does capture an accurate freq response and the rear of the soundfield but has poor localisation, particularly in and around the most important centre position. The best solution theoretically is the Blumlein Pair: no freq response issues, it captures the rear, front and sides of the horizontal plane equally (without the hole/weakness in the centre of a spaced omni pair) but as mentioned, in a sense it's too accurate and unlike the other stereo pairs it can't really be manipulated (the other stereo pairs can be panned more narrowly for example and/or mixed with other mics, mono or stereo pairs). The soundfield mic solves all these weaknesses: it does capture the soundfield (in both horizontal and vertical planes) effectively perfectly and it can be manipulated almost infinitely, except for one weakness, it only plays well on its own, not with other mics (even if the other mic/s is an identical soundfield mic), unless of course you lose a lot of the information (by say deriving a coincident stereo pair from it).
This makes it completely unsuitable for virtually all rock/pop because we don't have a recording of a performance but several different recordings of different performances which are all layered/mixed together. And this is 71dB's big problem, he's ignoring some basic principles, such as: A. What do we want to end up with, B. The performance (strengths/weaknesses) of all the mic types (and stereo arrangements of them) depend almost entirely on what we are recording and the mic/s location relative to both the sound source (instrument/musician) and the room boundaries (reflective surfaces) and C. The other practical considerations of how most music is created/recorded, which is virtually ALWAYS: Multi-tracked and/or a combination of several/numerous different mic types and arrangements.
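As a sketch of the "derive any coincident arrangement" point: in horizontal B-format (the soundfield mic's native W/X/Y representation), a first-order virtual mic of any pattern and aim can be formed as a weighted sum. The function and parameter values below are illustrative only (and they ignore the common -3 dB scaling convention on W); nothing here comes from the posts themselves:

```python
import numpy as np

def virtual_mic(W, X, Y, azimuth_deg, p):
    """Derive a first-order coincident virtual mic from horizontal B-format.

    W, X, Y : numpy arrays of B-format samples (W = omni, X/Y = figure-8s).
    azimuth_deg : direction the virtual mic points (0 = front).
    p : pattern blend, 0.0 = omni, 0.5 = cardioid, 1.0 = figure-8.
    """
    az = np.deg2rad(azimuth_deg)
    # Classic first-order pattern: (1 - p)*omni + p*figure-8 along the aim axis.
    return (1.0 - p) * W + p * (np.cos(az) * X + np.sin(az) * Y)

# e.g. two figure-8s at +/-45 degrees give a Blumlein-style pair:
# left  = virtual_mic(W, X, Y, +45, 1.0)
# right = virtual_mic(W, X, Y, -45, 1.0)
```

Narrowing or widening the two azimuth angles after the fact is one example of the "almost infinite" manipulability described above, and of the information loss when collapsing the soundfield down to a single stereo pair.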


71 dB said:


> So is the scientific fact that movies in the 70's had as much bass as movies in the 90's? If so then I have been wrong and thanks for correcting me.



It's an historical fact that the LFE channel existed from the 1970's, that it had the same calibration level as Dolby Digital, that ALL the commercial theatrical dubbing theatres had it and that all the film re-recording mixers and directors could therefore use it as much/loudly throughout the 1980's as they did when DD arrived, if they so chose. Whether a movie goer would hear that on a particular film would depend on whether that particular cinema had 70mm projection facilities and the arrangement of speakers/subs to output the 6 channel mix (most cinemas worldwide did not) and what the artistic intent of the director was. Your assertion was FALSE, if there were some films with a great deal of bass in the 1990's, it was because that was the artistic intent, it was nothing to do with the LFE channel being a new technology or a novelty because it wasn't, it had been standard in dubbing theatres for over a decade. However, in a sense this is a good analogy because you've simply made-up a false narrative/history about the LFE channel to suit your agenda, exactly as you have with stereo!

G


----------



## 71 dB

gregorio said:


> It's an historical fact that the LFE channel existed from the 1970's, that it had the same calibration level as Dolby Digital, that ALL the commercial theatrical dubbing theatres had it and that all the film re-recording mixers and directors could therefore use it as much/loudly throughout the 1980's as they did when DD arrived, if they so chose. Whether a movie goer would hear that on a particular film would depend on whether that particular cinema had 70mm projection facilities and the arrangement of speakers/subs to output the 6 channel mix (most cinemas worldwide did not) and what the artistic intent of the director was. Your assertion was FALSE, if there were some films with a great deal of bass in the 1990's, it was because that was the artistic intent, it was nothing to do with the LFE channel being a new technology or a novelty because it wasn't, it had been standard in dubbing theatres for over a decade. However, in a sense this is a good analogy because you've simply made-up a false narrative/history about the LFE channel to suit your agenda, exactly as you have with stereo!
> 
> G



Ok, but having an LFE channel doesn't mean you have the artistic intent to use it to full effect. I don't know if I have ever seen a 70 mm film in theatres. Maybe Kubrick's 2001? As a child I knew nothing about film formats, and by the age I became aware of them, digital theatre audio was already a thing, meaning 35 mm films had LFE too. 

I have to live with the knowledge and understanding I have. Even this level of knowledge has required hard work, years in university and exploring things as a hobby. Maybe I just am not good for anything, but I tried. If I failed, then what am I supposed to do?


----------



## 71 dB

Even if I am wrong about everything, I REALLY like crossfeed and will NEVER stop using it. I love crossfeed. I really do!


----------



## 71 dB

Maybe I just should ENJOY life with crossfeed, listening to music instead of reading all day long how ignorant and clueless I am about everything. Clearly people don't want to hear what I have to say so...


----------



## 71 dB

I will use spatiality as I understand in the music I make. It's MY intent and nobody has to like it.


----------



## 71 dB

gregorio said:


> 1. That raises two obvious problems: Firstly, if you learned the principles but not the history, why do you keep making all kinds of assertions about the history? Secondly, what use are the principles without the context of history? ...
> 1a. Maybe this explains your problems. Learning "the principles behind microphone set ups" is NOT "the stuff that counts" and does NOT "make you understand things", in fact quite the opposite, it makes you misunderstand things! Learning the principles is just basic knowledge, NOT "understanding", it's just the first step towards understanding. "Understanding" ONLY comes when you contextualise those principles, appreciate how they relate to each other and therefore when they are relevant and/or applicable (and when they are not). This is where you fail; you take a single principle, ignore the context (or make-up a false context), then assert that this is "understanding" and that others don't have your understanding (are ignorant). But, that isn't "understanding", it's "misunderstanding" (lack of understanding) and obviously, few others suffer from that same misunderstanding!
> 
> 2. A "deep understanding of the principles" IS what's important, the problem is: You do NOT HAVE a deep understanding of the principles! You've managed to convince yourself that you do have a deep understanding because you have quite a lot of knowledge about SOME of the principles but what you really have is: Almost complete ignorance of other highly pertinent principles, apparently little idea how/why/when the principles relate to each other and therefore, pretty much the opposite of "a deep understanding"! Remembering names and dates does not make me look knowledgeable, as you say, any kid with Google can do that. What makes me (somewhat) knowledgeable is knowing and understanding the history, because development/innovation is driven by weaknesses discovered in practical application and history therefore provides the context of how, why and when the principles relate to each other and are applicable. This is why university students are not taught only the principles but the history, because the goal of education isn't just knowledge but understanding.]



Yea, I thought I had a deep understanding of the thing I have concentrated on, but… …maybe I don't, meaning I don't have a deep understanding of anything… …what am I good for, and how can one have a deep understanding at all? Where did you get this understanding you claim to be superior? I know some history of course, but not all of it.

How do I understand ILD wrong? I do know/understand that head shadow at low frequencies is smaller than at high frequencies. That's a scientific fact and as such what we are supposed to say here. In 2012 I realized that headphone listening messes up the shadowing; it doesn't allow the shadowing to happen as it should. Instead, headphones without crossfeed mean almost full shadowing, as if the head were inside a wall and the ears in different rooms with only a very little acoustic leakage. I admit I should have realized it 15 years earlier, but I did realize it eventually. I just was into speakers and didn't think about headphones that much. I can say that my experiences with spatiality have justified the sense of understanding things well.

What is the stuff that counts?


----------



## gregorio

71 dB said:


> [1] I know some history of course, but not all of it.
> [2] How do I understand ILD wrong? I do know/understand that head shadow at low frequencies is smaller than at high frequencies.
> [2a] That's a scientific fact and as such what we are supposed to say here.
> [3] Instead headphones without crossfeed means almost full shadowing ...
> [4] I can say that my experiences with spatiality have justified the sense of understanding things well.



1. No one knows ALL the history, I certainly don't but I do know many of the most important, relevant and defining parts. You on the other hand apparently don't, you either completely ignore them or just make-up false histories.

2. The answer to your question is (unwittingly) in your very next sentence. Head Shadow is NOT just a "Level Difference", it is NOT created just by changing the level of a signal, it's a highly specific level difference created by an EQ reduction contour, which is defined by the absorption characteristics of an individual's head. Furthermore, "head shadow" is just ONE of several factors involved in the perception of spatial information and localisation.
2a. No, it is NOT a scientific fact. It's a gross oversimplification of a scientific fact, EVEN if we were ONLY talking about ILD, which we are NOT! You/we are talking about the perception/localisation of spatial information (as a whole). 

3. No, headphones without crossfeed means no head shadowing at all, there is no (head absorption) EQ reduction contour applied, "full" or partial! Headphones with crossfeed does not apply the head absorption EQ reduction contour either!

4. Which is why we have science in the first place, to stop us simply making-up (or extrapolating) any old understanding and justifying it with personal experiences!!

G


----------



## 71 dB

gregorio said:


> 1. No one knows ALL the history, I certainly don't but I do know many of the most important, relevant and defining parts. You on the other hand apparently don't, you either completely ignore them or just make-up false histories.
> 
> 2. The answer to your question is (unwittingly) in your very next sentence. Head Shadow is NOT just a "Level Difference", it is NOT created just by changing the level of a signal, it's a highly specific level difference created by an EQ reduction contour, which is defined by the absorption characteristics of an individual's head. Furthermore, "head shadow" is just ONE of several factors involved in the perception of spatial information and localisation.
> 2a. No, it is NOT a scientific fact. It's a gross oversimplification of a scientific fact, EVEN if we were ONLY talking about ILD, which we are NOT! You/we are talking about the perception/localisation of spatial information (as a whole).
> ...



1. So you define the history you know as relevant and the parts you don't know as irrelevant? How convenient for you. I don't think I have made up false histories.

2. Of course head shadow isn't just ILD. You know I know that, but you keep smearing me by misunderstanding me on purpose. I know the factors involved in spatial perception because I studied acoustics at university. If I remember correctly, I got the highest grade, 5/5, on the course that included this stuff.
2a. Yeah, oversimplification if you will. I know. Whatever.
3. YES! No head shadowing at all! That is 100 % wrong!!!! Crossfeed is a coarse simulation of the exact head shadow, so it's less than 100 % wrong. The real head shadow effect is very complex as you say, but actually not that complex at the low frequencies where crossfeed operates. That's why crossfeed is a pretty good approximation despite being an oversimplification. If my ears expect the ILD to be 0-6 dB at 200 Hz and no crossfeed gives 30 dB (acoustic leak) while crossfeed gives say 3 dB, crossfeed is MUCH closer, even if the correct ILD would be 4.12 dB. If your salary is $3000 and your employer pays you $0, do you say that's better than if he/she pays you $2500? If I can't get the EXACT correct shadowing, I want something that is in the ballpark rather than nothing. Crossfeed gets me into the ballpark, and spatial hearing can cope with inaccuracies that aren't totally irrational.
5. Science has its empirical side, and some people agree with my personal experiences.


----------



## bigshot

It would be nice if you gathered all your responses into a single post rather than floods of little ones.


----------



## gregorio

71 dB said:


> 1. So you define the history you know as relevant and the parts you don't know as irrelevant? How convenient for you.
> [1a] I don't think I have made up false histories.
> 2. Of course head shadow isn't just ILD. You know I know that, but you keep smearing me by misunderstanding me on purpose.
> [2.1] I know the factors involved in spatial perception because I studied acoustics at university. If I remember correctly, I got the highest grade, 5/5, on the course that included this stuff.
> ...



1. You're joking? Subsequent history and current practice defines what (history) is important and relevant, not me, I've got nothing to do with it, I just learn what is already defined by history to be relevant/important. How do you not know this?
1a. So you are sticking with your "history" of the novelty factor of the LFE channel which didn't exist before Dolby Digital, despite the actual history which proves your "history" false? That stereo was likewise a new technology in the early 1960's, that Blumlein "didn't know any better" and various other false histories!

2. No, you are smearing yourself. You are the one who continually defines spatial perception by just ILD.
2.1 What's studying acoustics got to do with it? The "factors involved in the spatial perception" is the science of psychoacoustics, not acoustics!

3. Below about 500Hz to 800Hz there effectively is no head shadowing, as these freqs pass through (and/or around) the skull pretty much unimpeded. Above that range is where head shadowing (a complex individual EQ absorption curve) occurs. A "coarse simulation" of head shadowing would therefore be some approximated/simplified absorption EQ curve applied to the crossfed signal above 800Hz BUT *YOU* HAVE STATED that you don't crossfeed freqs above about 800Hz at all, let alone with any sort of EQ absorption curve! So, how is a complete absence of a crossfed signal with an EQ absorption curve "a coarse simulation of the exact head shadow"??

5. And at the time, the personal experiences of some people who actually used Stanley's Snake Oil was that it worked very well! 

G


----------



## 71 dB (Mar 11, 2019)

gregorio said:


> 1. You're joking? Subsequent history and current practice defines what (history) is important and relevant, not me, I've got nothing to do with it, I just learn what is already defined by history to be relevant/important. How do you not know this?
> 1a. So you are sticking with your "history" of the novelty factor of the LFE channel which didn't exist before Dolby Digital, despite the actual history which proves your "history" false? That stereo was likewise a new technology in the early 1960's, that Blumlein "didn't know any better" and various other false histories!
> 
> 2. No, you are smearing yourself. You are the one who continually defines spatial perception by just ILD.
> ...


1. The importance of historical events is subjective and often even controversial! Sure, the history of audio isn't that controversial, but for most people the history of audio is not important. My dad is a million times more interested in the history of stamps (e.g. the stamps of French colonies in Africa) than the history of audio. We are among a very small minority of people who are interested in the history of audio. Also, instead of just accepting everything you read as "facts" you should be critical of what you are told. From whose perspective is the history told? Does Russia tell the history of WW2 exactly the same way Germany does?

1a. I am not sticking with anything. I am ready to change my mind if necessary. I have never denied LFE before Dolby Digital, but having LFE in the 70's doesn't automatically mean the bass content was as strong as it was in the 90's. The style of mixing movie soundtracks surely changed over two decades. Maybe things like THX made it so that the audio reproduction gear in the theaters got better during the 80's and 90's? Perhaps where you live the levels in theaters are calibrated, but that's not the case where I live, in Finland. Quiet screenings can be 10 dB quieter than loud screenings. Maybe the style of mixing changed around 1990 and theatres had to learn to use appropriate levels so that the low frequency effects don't "open the doors", as people were joking. There's a theoretical and a practical side to things. Didn't many people really experience more bass in the 90's than in the 70's in movie theaters? 70 mm screenings were rare, and maybe the sound reproduction gear wasn't on the same level as in the 90's after THX etc. improvements? Maybe the levels increased? Maybe the style of movie soundtracks changed? Maybe what I said has some merit?

EDIT (forgot to address):
Of course I know stereo had been demonstrated and used as far back as the late 19th century, but in the late 50's it became commercially popular, meaning artists started to have commercial intentions. That's how I see it. Correct me if I am wrong.

2. I talk about ILD a lot because it's so relevant when talking about crossfeed, but if you read my posts you can see me mention ITD, ISD and other things too.
2.1 At my university psychoacoustics is included in the acoustics courses. That's not surprising, because a lot of acoustics has very little meaning without knowledge of psychoacoustics. These kinds of attacks from your part are totally pointless.

3. Correct, and that's why the ILD at low frequencies can't get larger than a few decibels unless the sound source is VERY near one ear. My passive crossfeeders use first order low pass filters, meaning the "leakage" of sound to the contralateral ear dies out along a -6 dB/octave slope above 800 Hz. So if the crossfeed level is say -5 dB, at 400 Hz the level is -6 dB (-1 dB point), at 800 Hz the level is -8 dB (-3 dB point), at 1600 Hz the level is -12 dB (-7 dB point) and at 12800 Hz the level has dropped to -29 dB (-24 dB point). Absorption curves at these low levels aren't very important, because the crossfed sounds are added to the ipsilateral sound, which is usually much stronger, so errors are for the most part masked out. It's the ipsilateral EQ errors that actually count, but having no crossfeed also suffers from them! Headphone frequency target curves should address this issue anyway.

5. Snake oil makes people hear differences when there aren't any. Crossfeed clearly makes a difference (even people without analytical listening skills can tell the difference pretty easily), so it's not snake oil.
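The first-order low-pass numbers in point 3 can be checked with a short sketch. The 800 Hz corner and -5 dB crossfeed gain are taken from the post; the Python below is just a hypothetical illustration of the arithmetic, not any actual crossfeeder's code:

```python
import math

def crossfeed_level_db(freq_hz, cutoff_hz=800.0, gain_db=-5.0):
    """Total level of the crossfed signal: a flat gain plus the magnitude
    response of a first-order (-6 dB/octave) low-pass above cutoff_hz."""
    rolloff_db = -10.0 * math.log10(1.0 + (freq_hz / cutoff_hz) ** 2)
    return gain_db + rolloff_db

for f in (400, 800, 1600, 12800):
    print(f"{f} Hz: {crossfeed_level_db(f):.1f} dB")
# prints: 400 Hz: -6.0 dB, 800 Hz: -8.0 dB, 1600 Hz: -12.0 dB, 12800 Hz: -29.1 dB
```

These match the quoted figures: the filter alone contributes about -1, -3, -7 and -24 dB at those frequencies, on top of the -5 dB gain.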


----------



## bigshot

Yawn.


----------



## castleofargh

71 dB said:


> 1. The importance of historical events is subjective and often even controversial! Sure, the history of audio isn't that controversial, but for most people the history of audio is not important. My dad is a million times more interested in the history of stamps (e.g. the stamps of French colonies in Africa) than the history of audio. We are among a very small minority of people who are interested in the history of audio. Also, instead of just accepting everything you read as "facts" you should be critical of what you are told. From whose perspective is the history told? Does Russia tell the history of WW2 exactly the same way Germany does?
> 
> 1a. I am not sticking with anything. I am ready to change my mind if necessary. I have never denied LFE before Dolby Digital, but having LFE in the 70's doesn't automatically mean the bass content was as strong as it was in the 90's. The style of mixing movie soundtracks surely changed over two decades. Maybe things like THX made it so that the audio reproduction gear in the theaters got better during the 80's and 90's? Perhaps where you live the levels in theaters are calibrated, but that's not the case where I live, in Finland. Quiet screenings can be 10 dB quieter than loud screenings. Maybe the style of mixing changed around 1990 and theatres had to learn to use appropriate levels so that the low frequency effects don't "open the doors", as people were joking. There's a theoretical and a practical side to things. Didn't many people really experience more bass in the 90's than in the 70's in movie theaters? 70 mm screenings were rare, and maybe the sound reproduction gear wasn't on the same level as in the 90's after THX etc. improvements? Maybe the levels increased? Maybe the style of movie soundtracks changed? Maybe what I said has some merit?
> 
> ...



in the graph below I'm using the settings you gave, applied to what I use at the moment to enjoy headphones more.
warning/explanation: the following FR graphs are only for one single sound source, one speaker 30° to the right and 0° altitude. the impulses are not my own but they're real measured HRIR of an actual human, and what subjectively works best for me so far when trying to keep the altitude and distance good enough to avoid complete nonsense in my brain:
-red is the direct response reaching the right ear.
-green is the FR from the same speaker when reaching the left ear and being masked by the head(as actually measured!).
-cyan is red with the -6dB/octave butterworth applied at 800Hz and the -5dB gain as you suggested in your post. so cyan is basically green minus the FR error you don't find very important to your otherwise so important ILD rationale for correcting spatiality (to me that's a double standard and part of why I keep disagreeing with you).



you're going to argue that 





> Absorption curves at these low levels aren't very important


 but my subjective impression emphatically disagrees with that opinion. out of all the crossfeed apps I've tried, even analog solutions (I still own one amp with a setting for 2 levels of crossfeed), my made up stuff with custom ILD, with some very small amount of reverb and an EQ change for the headphone itself to better fit the simulation (basically making sure that mono is more or less at the right altitude when applying the convolution), is noticeably above the best experience I had of crossfeed. and still miles away from actually thinking I'm hearing speakers. 
so are you wrong? are you missing out on something better because you refuse to give up after investing so much into basic crossfeed? are you simply lucky and you really happen to feel like crossfeed is fixing your subjective world? IDK. but I do know that there are many people with different experiences, different preferences and different ways to pretend that their views are objectively superior. among them, I know several who adore the amount of details they can get from headphones as they are. and most of those guys think that crossfeed degrades audio by making the bass feel weird, by making the singer sound veiled and EQed, and other arguments like that where they clearly consider the default headphone presentation as objectively superior. and just like you they can support their views by cherry picking the variables they consider important to win the argument. for example they will be able to pass a blind test and reach hearing thresholds well below what they will achieve on a pair of speakers with the same signal. therefore, they will have "evidence" that the experience has more fidelity. with crossfeed some of those thresholds will inevitably rise. QED crossfeed is objectively inferior. 
you're doing the exact same thing when you decide to disregard anything not improved or potentially degraded by crossfeed, while being stuck on discussing how crossfeed improves the variables you care about for "spatiality". 
same same, but different, but the same. ultimately you like something and should simply enjoy it instead of trying to convert people to your righteous way of using headphones. there are many righteous ways of listening to music. some might say, all of them are.


----------



## 71 dB

castleofargh said:


> in the graph below I'm using the settings you gave, applied to what I use at the moment to enjoy headphones more.
> warning/explanation: the following FR graphs are only for one single sound source, one speaker 30° to the right and 0° altitude. the impulses are not my own but they're real measured HRIR of an actual human, and what subjectively works best for me so far when trying to keep the altitude and distance good enough to avoid complete nonsense in my brain:
> -red is the direct response reaching the right ear.
> -green is the FR from the same speaker when reaching the left ear and being masked by the head(as actually measured!).
> -cyan is red with the -6dB/octave butterworth applied at 800Hz and the -5dB gain as you suggested in your post. so cyan is basically green minus the FR error you don't find very important to your otherwise so important ILD rational for correcting spatiality(to me that's a double standard and part of why I keep disagreeing with you).



As these curves show, measured ILD is very small at low frequencies. The cyan curve might be proper crossfeed, because recordings don't always have infinite ILD as this graph suggests; in other words, applying the cyan crossfeed curve to a stereo recording with limited ILD may give a result very close to the green curve. But there are also room acoustics involved. Room acoustics change spatiality compared to measurements in an anechoic chamber, so we actually want something that is just below the green curve at low frequencies. Instead of about 1-2 dB at low frequencies, 3 dB is perhaps closer to what we have in a room. This simulation lacks the treble boost for the ipsilateral sound, but that's only a couple of decibels, meaning the red curve should have a shelf-filtering of about +2 dB above 800 Hz. 

Mind you, without crossfeed the cyan curve drops down so much it would hardly show up in this graph. How far down depends on how much the headphones leak acoustically. Open headphones are better in this sense but still much worse than crossfeed.


----------



## 71 dB

castleofargh said:


> you're going to argue that  but my subjective impression emphatically disagrees with that opinion. out of all the crossfeed apps I've tried, even analog solution(I still own one amp with a setting for 2 levels of crossfeed), my made up stuff with custom ILD with some very small amount of reverb, and an EQ change for the headphone itself to better fit the simulation(basically making sure that mono is more or less at the right altitude when applying the convolution) is noticeably above the best experience I had of crossfeed. and still miles away from actually thinking I'm hearing speakers. so are you wrong? are you missing out on something better because you refuse to give up after investing so much into basic crossfeed? are you simply lucky and you really happen to feel like crossfeed is fixing your subjective world?  IDK. but I do know that there are many people with different experiences, different preferences and different ways to pretend that their views are objectively superior. among them, I know several who adore the amount of details they can get from headphones as they are. and most of those guys think that crossfeed degrades audio by making the bass feel weird, by making the singer sound veiled and EQed. and other arguments like that where they clearly consider the default headphone presentation as objectively superior. and just like you they can support their views by cherry picking the variables they consider important to win the argument. for example they will be able to pass a blind test and reach hearing thresholds well below what they will achieve on a pair of speakers with the same signal. therefore, they will have "evidence" that the experience has more fidelity. with crossfeed some of those threshold will inevitably rise. QED crossfeed is objectively inferior.
> you're doing the exact same thing when you decide to disregard anything not improved or potentially degraded by crossfeed, while being stuck on discussing how crossfeed improves the variables you care about for "spatiality".
> same same, but different, but the same. ultimately you like something and should simply enjoy it instead of trying to convert people to your righteous way of using headphones. there are many righteous ways of listening to music. some might say, all of them are.



This is "to crossfeed or not to crossfeed" tread, isn't it? The choices are crossfeed or not crossfeed. I consider HRIR convolutions crossfeed too, just more sophisticated. You convolve and add (crossfeed). You must know how it goes, don't you? That's just doing numerically what happens acoustically. So make up your mind about what you consider crossfeed! If you think you have a better way of crossfeeding than me then great for you! My point is almost any crossfeed method is better than no crossfeed because no crossfeed means most of the time excessive spatiality which CANNOT be ideal by definiton because it's excessive.

Am I missing out? Of course I am! I am not a billionaire who can do whatever I want. I have to use my limited resources and try to enjoy life the best I can. I think people who don't use any crossfeed are missing out. To me, crossfeed is a bang-for-the-buck solution to not miss out so much. All I know is crossfeed makes headphone listening much better for me. I totally disagree with people who think crossfeed makes bass weird. Bass with large ILD is weird; bass with small ILD is natural. But these people can do whatever they want, and they should allow me to do whatever I want.
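The "convolve and add" signal flow can be sketched minimally. The two impulse responses below are toy placeholders (a unity ipsilateral path and a delayed, attenuated contralateral path), NOT real HRIRs, so this only illustrates the structure, not usable filters:

```python
import numpy as np

def hrir_crossfeed(left, right, h_ipsi, h_contra):
    """Binaural 'convolve and add': each ear gets its own channel through
    the ipsilateral response, plus the opposite channel through the
    contralateral (head-shadowed) response."""
    n = max(len(h_ipsi), len(h_contra))
    h_i = np.pad(h_ipsi, (0, n - len(h_ipsi)))    # equal lengths so the
    h_c = np.pad(h_contra, (0, n - len(h_contra)))  # convolutions can be summed
    out_l = np.convolve(left, h_i) + np.convolve(right, h_c)
    out_r = np.convolve(right, h_i) + np.convolve(left, h_c)
    return out_l, out_r

# Toy responses: direct path passes unchanged; the contralateral path is
# delayed 13 samples (~0.3 ms at 44.1 kHz) and attenuated ~6 dB.
h_ipsi = np.array([1.0])
h_contra = np.zeros(14)
h_contra[13] = 0.5
```

Using the same filter pair for both directions assumes a symmetric head; a measured HRIR set would give per-angle, per-individual filters, which is the "more sophisticated" end of the same convolve-and-add idea.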


----------



## gregorio (Mar 12, 2019)

71 dB said:


> 1. We are among a very small minority of people who are interested of the history of audio.
> [1.1] Also, instead of just accepting everything you read as "facts" you should be critical to what is told. [1.1a] From whose perspective the history is told?
> 1a. I have never denied LFE before Dolby Digital, but having LFE in 70's doesn't automatically mean the bass content is as strong as it was in 90's. The style of mixing movie soundtrack surely changed over 2 decades.
> [1a1] Perhaps where you live the levels in theaters are calibrated, but that's not the case in where I live, in Finland.
> ...



1. If you are interested in the history of audio, why don't you actually learn some rather than just making up your own?
1.1. Maybe we have a very different definition of what being "critical" means? For example, blindly accepting (and repeating as fact) something that a friend's brother, who once worked in a cinema, told you is pretty much the opposite of "being critical"! Furthermore, I do not just accept everything I read as fact, you just made that up! Sure, I've read a lot of facts but in addition I've worked in some of those studios where the history was actually made and worked with some of the engineers (and/or their apprentices) who made it.
1.1a. Which is the more valid perspective: the witnessed, documented and verified history, or the perspective of someone just making up a history to suit their agenda, which contradicts the documented history and is supported by no evidence? Or how about: the perspective of someone actually working in commercial dubbing theatres when the change to Dolby Digital occurred (supported by both the technical documentation of Dolby themselves and the information given by Dolby engineers in person), or the perspective of someone who was told something by the brother of a friend who once worked in a cinema?

1a. Exactly, the style (artistic intent) certainly did evolve over the two decades but now you are contradicting yourself because you have previously discounted artistic intent! And, you stated it changed/evolved in response to the novelty of the invention of the LFE in DD, which was false.
1a1. Again, if you are actually interested in the history, then why don't you actually learn the history instead of just making up false history to defend your assertions/beliefs? What you are stating would have been impossible! A cinema couldn't just go to a store and buy the Dolby equipment required to read and decode the Dolby Digital signal on 35mm film, it was only available direct from Dolby themselves and only under a licence agreement. The only way of getting the equipment was to make an appointment for a Dolby engineer to bring the equipment to your cinema and install it (which included calibration) and to maintain the terms of the licence agreement a Dolby engineer had to return every 6 months to check/adjust the calibration.
1a2. Today, certainly. But that's nothing to do with Dolby Digital 5.1, in fact quite the opposite! The theatrical distribution format of films today, and for quite a few years, is not 35mm film but the DCP (Digital Cinema Package). DCP does not support Dolby Digital 5.1 and therefore cinemas did not require a Dolby licence or the equipment to be installed and calibrated by Dolby engineers! Again, why don't you learn the actual facts rather than just putting two and two together and coming up with five?
1a3. There is indeed, so why are you now quoting both sides but previously completely ignored the practical side? For example, the practical side of mic usage to record music, which precludes "natural spatiality"?
1a4. That's a lot of "maybes". Why don't you actually find out/ask rather than just inventing a bunch of "maybes"?



71 dB said:


> Of course I know stereo had been demonstrated and used in the late 19th century, but in the late 50's it became commercially popular meaning artists started to have commercial intentions. That's how I see it. Correct me if I am wrong.
> 2. I talk about ILD a lot because it's so relevant when talking about crossfeed ...
> 2.1 In my university psychoacoustics is included in the acoustics courses.
> 3. Correct and that's why the ILD at low frequencies can't get larger than a few decibels unless the sound source is VERY near one ear.
> ...



This isn't the "how 71dB sees it" forum, especially when "how 71dB sees it" contradicts the actual facts! Stereo music recordings only became available in 1957 and it became "commercially popular" over the course of well over a decade. The already quoted Stockhausen production demonstrates a huge amount of artistic intention a year before stereo was even released to the public, let alone was commercially popular! And, it's historical fact that EMI, Decca and others were experimenting with artistic intent with regards to natural spatiality long before 1957 when stereo became available to the public. You are free of course to "see it" however you want but this is the science forum and you can't simply state that "how you see it" is fact when it contradicts the actual facts!

2. But not so relevant when talking about spatiality when listening to music recordings (and therefore artistic intent).
2.1. So it was an (introductory) 101 acoustics course with some psychoacoustics.

3. And why can't we have a sound source "VERY near one ear"? Is this some artistic intent law you've just invented?
3a. So a simple one pole filter then, not a simulation of a complex EQ absorption curve.

5. There was clearly a difference, whether one put "Snake Oil" on the afflicted joint or not. The question was not whether or not there was a difference but whether that difference correlated with an actual effect on the symptoms or merely the perception of an improvement (the placebo effect).

G


----------



## bigshot

I just play through speakers and get real spatiality, not synthetic approximations. Last night I was playing a 5.1 album and there's absolutely no way it could even sound remotely similar using headphones.


----------



## 71 dB

I'm sorry my knowledge of history is lacking. It's possible my friend's brother was wrong, but before this NOBODY has questioned it, so I haven't had a reason to doubt it. People seem to agree that volume changes from screening to screening, so calibration is what it is. My friend told me that the volume really is changed and is not fixed. Maybe that's how it's done in Finland and the US operates differently? I don't know. I admit I made false statements about stereo. Maybe I have misunderstood something. That doesn't change the facts of spatiality much.



gregorio said:


> 3. And why can't we have a sound source "VERY near one ear"?
> 3a. So a simple one pole filter then, not a simulation of a complex EQ absorption curve.
> 
> G


3. We can of course, but do we want that? Aren't sounds very near one ear annoying? Kind of tickling? One problem is contradictory spatial cues. ILD may indicate a near sound while the direct-to-reflected sound ratio may indicate a distant sound => contradiction. Sounds very near the ear are loud because they are near. How does it sound when a kick drum goes off near your ear? Hearing damage? Pain? Not nice. That's why we listen to kick drums from a distance, not very near, and ILD at low frequencies is small. Listening to speakers, the sound isn't very near unless good crosstalk cancelling is used => the artistic intention of near sound is destroyed => so don't have such intentions, because speakers destroy it.
3a. I thought I presented the circuits when I came here, so I assumed you knew it's a simple one-pole filter. That's why it's so cheap and easy. Don't you even know what I have been talking about?


----------



## 71 dB

bigshot said:


> I just play through speakers and get real spatiality, not synthetic approximations. Last night I was playing a 5.1 album and there's absolutely no way it could even sound remotely similar using headphones.



Do you really think all people in the world can have it as good as you? Didn't it take you 30 years to get to that point? I'm happy for you, but for 90% of all people in the world what you have is utopia. Most people need MUCH more affordable solutions.


----------



## bigshot (Mar 12, 2019)

Boo hoo hoo. I already told you I wasn’t going to feel sorry for you. Yes, speakers have much more spatiality than headphones ever will. Yes, cross feed is a compromise some people who are forced to use headphones make to take a little of the curse off that. But cross feed doesn’t create spatiality that doesn’t exist. If you want spatiality, you need speakers. It’s pretty simple. Even a small scale inexpensive stereo speaker system has more spatiality than the best headphones with cross feed.


----------



## 71 dB

bigshot said:


> Boo hoo hoo. I already told you I wasn’t going to feel sorry for you. Yes, speakers have much more spatiality than headphones ever will. Yes, cross feed is a compromise some people who are forced to use headphones make to take a little of the curse off that. But cross feed doesn’t create spatiality that doesn’t exist. If you want spatiality, you need speakers. It’s pretty simple. Even a small scale inexpensive stereo speaker system has more spatiality than the best headphones with cross feed.



I think the spatiality I get from headphones crossfed is all I need. Recordings have spatiality in them, if they didn't what's the point of making stereophonic and multichannel recordings? Why not make mono recordings and let the speakers and room create true spatiality? When I watch my 32" TV I have miniature actors walking around, but still I can enjoy the movie. Similarly the scale of soundstage doesn't mean much for me. Miniature soundstage is enough as long as the spatiality appears natural and there is no excessive spatiality.


----------



## bigshot

If real spatiality isn't important to you, and headphones are fine for you, then that is great.


----------



## 71 dB

bigshot said:


> If real spatiality isn't important to you, and headphones are fine for you, then that is great.



It's about how we define "real spatiality". What are the criteria for real spatiality? Any speakers in any room? In my opinion, if speakers are positioned poorly in a room, the soundstage can be really bad, something I would never call real spatiality, horror spatiality maybe.


----------



## StandsOnFeet

71 dB said:


> I think the spatiality I get from headphones crossfed is all I need.



That's really good. I'm serious. 

Why don't you just leave it at that? Let Gregorio and the rest of us have our opinions and you keep your own. To continue the argument will just cause you pain--you're already demonstrating that with every third or fourth post. Please, for your own sake, desist.


----------



## bigshot (Mar 12, 2019)

The definition of "real spatiality" is that it inhabits real space. It isn't a band aid fix, artificial simulation or sacrifice for necessity. If you put sound in a room, you have spatiality. If you press the sound right up over your ears, you don't. ...well I guess it depends on how much open space there is between the headphone cups.

Badly implemented speakers sound bad. That is true. You said that you have speakers and you don't listen to them much because they don't sound good. That is probably because you have them set up wrong. If you'd like advice on how to set them up to get the most out of them, I'm sure there are a lot of us who would be happy to debug your setup and offer advice.


----------



## 71 dB

StandsOnFeet said:


> Why don't you just leave it at that?



Because I am a human being and I want to be part of a community, something. 
I want to be heard, feel that my opinions and thoughts matter. 
That's what it is to be human. Surely you agree?


----------



## 71 dB

bigshot said:


> The definition of "real spatiality" is that it inhabits real space. It isn't a band aid fix, artificial simulation or sacrifice for necessity. If you put sound in a room, you have spatiality. If you press the sound right up over your ears, you don't. ...well I guess it depends on how much open space there is between the headphone cups.
> 
> Badly implemented speakers sound bad. That is true. You said that you have speakers and you don't listen to them much because they don't sound good. That is probably because you have them set up wrong. If you'd like advice on how to set them up to get the most out of them, I'm sure there are a lot of us who would be happy to debug your setup and offer advice.



I used to listen to speakers a lot before I discovered crossfeed. Before crossfeed, speakers were clearly better than headphones (no excessive spatiality). I have high quality DIY speakers in optimal places (as an acoustic engineer I know this stuff). Finns are very good with speakers. You may have heard of Genelec for example, very common active monitors in many studios all over the world. I don't need advice on that, but thanks anyway. Since I am single and live alone, I can position the speakers optimally. The speakers take their places based on acoustics, and then the other furniture goes where it can.


----------



## bigshot

If you want to be part of a community, you're doing it wrong. Normally someone who wants to be part of a community pays attention to what others say and reads the social cues they put off. They try to speak for other people's benefit, not just for their own self-gratification. I told you clearly a few posts back what the problem is, and you blew by it and went right back to the same circular routine again. You aren't impressing me, and the way you act makes me want to exclude you from my little part of the community.

You get a second shot across the bow because I think you honestly can't help it. But if you aren't going to make any effort, I'm not going to either.


----------



## 71 dB

bigshot said:


> If you want to be part of a community, you're doing it wrong. Normally someone who wants to be part of a community pays attention to what others say and reads the social cues they put off. They try to speak for other people's benefit, not just for their own self-gratification. I told you clearly a few posts back what the problem is, and you blew by it and went right back to the same circular routine again. You aren't impressing me, and the way you act makes me want to exclude you from my little part of the community.
> 
> You get a second shot across the bow because I think you honestly can't help it. But if you aren't going to make any effort, I'm not going to either.



I know I am socially awkward and I have suffered from it all my life. Things that are easy for "normal" people can be very difficult for people like me. I think I have Asperger's. When other people are friendly to me I am relaxed and, I believe, nice too, but when other people disagree and attack me I lose my temper/control, as you must have seen. I was unable to read some of your recent posts carefully. I am sorry about that. I am pretty confused about everything.


----------



## bigshot

Being disagreed with isn't the same as being attacked. You aren't being attacked here. You are being disagreed with by people who have valid points. If you can keep that in mind, you'll have no problems.


----------



## gregorio (Mar 13, 2019)

71 dB said:


> [1] I'm sorry my knowledge of history is lacking. It's possible my friend's brother was wrong, but before this NOBODY has questioned it, so I haven't had a reason to doubt it.
> [2] My friend told me that the volume really is changed and is not fixed. Maybe that's how it's done in Finland and the US operates differently?
> [2a] I don't know. I admit I made false statements about stereo. Maybe I have misunderstood something. That doesn't change the facts of spatiality much.
> 3. We can of course, but do we want that?
> ...



1. YOU are the one who (correctly) stated "_you should be critical to what is told_ [to you]" and therefore you definitely "_had a reason to doubt it_", the need to be critical of what you are told! Furthermore, you stated it was easy to check historical facts with Google, so why didn't you? Apparently everyone else "should be critical" of what they are told except you.

2. I already explained this to you but you just completely ignore it and make up a "maybe".
2a. No, your false statements don't change the actual "facts of spatiality" but they do change the facts that you have presented!

3. Who are you to decide what we (artists) want?
3a. That's good, at least you're asking a question rather than making a (false) statement of fact! To you maybe it sounds "annoying"/"tickling" but it doesn't sound "tickling" to me and even if it is "annoying", why is that a bad thing? All western music is built upon being "annoying", the annoyance of discordance, the annoyance of not resolving a chord sequence, etc. The only difference appears to be what you personally find unacceptably annoying, which is fine but that's you personally and NOT applicable to everyone else!
3b. No, that's NOT a problem or if it is, it's a problem with virtually ALL commercial stereo recordings and has been since stereo was released to the public!
3c. And listening to a sound from a few centimetres away and simultaneously from 20m away is also a spatial contradiction (unless your ears are 20m apart?), as is listening to an electric guitar in a stadium/arena at the same time as a singer in a small studio. Pretty much all commercial music recordings are full of "contradictory spatial cues". However, you personally seem to have decided that all these contradictory spatial cues are somehow not contradictory, presumably because you personally are not able to hear/recognise them (or you have ears that can be in several totally different acoustic spaces at the same time) and that it's only ILD that can make spatial cues "contradictory". There are two obvious problems with that: firstly, it's you personally, not necessarily everyone else, and secondly, you state that of course you know that spatiality is not only ILD but then you define spatial cues as "contradictory" only in terms of ILD. You are contradicting yourself!
3d. And again, in order to justify your belief you take only ONE of the relevant variables, ignore the others and end up presenting utter nonsense as fact! A sound is NOT loud because it is near the ear, proximity to the ear is ONLY one of the variables. What sounds louder, a pin dropped on a carpet 1m from your ears or an atomic bomb exploding 100m from your ears? Therefore:
3e. A kick drum sounds perfectly fine close to the ear, no hearing damage, no pain and not even any discomfort, I've done it many times. It's only if your ear is close AND the kick drum is played loudly that it can cause pain/damage. In a music recording we have 3 variables, how loudly the kick drum is played, how close the mic is to the kick drum PLUS, what level we record that kick drum mic and mix it into the track.
3f. And here we have the result of your misrepresentation of the facts and ignoring the other relevant variables; an utter nonsense assertion which is the complete opposite of the actual facts! On music recordings we virtually NEVER "listen" to the kick drum from a distance, in fact the EXACT OPPOSITE, we couldn't physically get any closer because a kick drum is virtually always recorded with the mic very close to the kick drum and most commonly actually inside it!

4. I did know it was just a simple filter, what I don't know is how you can claim that such a simple filter is a "simulation" of a complex EQ absorption curve (head shadowing)!


71 dB said:


> [1] I think the spatiality I get from headphones crossfed is all I need. ...
> [2] Similarly the scale of soundstage doesn't mean much for me.
> [2a] Miniature soundstage is enough as long as the spatiality appears natural and [2b] there is no excessive spatiality.


1. Exactly, that's what YOU think is all that YOU need. That's absolutely fine, no one is saying you can't listen to music according to what you think is all that you need. What we've been arguing against for more than a year is your assertion that what you think is all that you need is some objective fact which therefore must apply to everyone else.

2. Again, just because it doesn't "mean much for YOU" doesn't mean that it doesn't mean much for everyone else. On the contrary, pretty much all the artists I've ever worked with are very conscious and exacting of the "scale of the soundstage".
2a. By definition a miniature soundstage is not natural, unless you are listening to a miniature orchestra in a miniature concert hall and you yourself are miniature! If that "appears" natural to you, then that's an issue of your personal perception, not an objective fact that necessarily applies to everyone else.
2b. That's a circular argument because you define spatiality as "excessive" according to your personal perception and preferences, and, you define it only in terms of ILD, which even you admit is only one of the variables of spatiality!

After over a year, you are still just going round and round in circles, making-up or repeating nonsense history and misrepresenting the facts in order to justify your belief that what YOU personally perceive and prefer is not a preference/perception but an objective fact. Unless you can get past that obvious fallacy and stop making-up and/or misrepresenting the facts then as this is the science forum, I'm going to keep challenging/refuting them!

G


----------



## 71 dB

gregorio said:


> 1. YOU are the one who (correctly) stated "_you should be critical to what is told_ [to you]" and therefore you definitely "_had a reason to doubt it_", the need to be critical of what you are told! Furthermore, you stated it was easy to check historical facts with Google, so why didn't you? Apparently everyone else "should be critical" of what they are told except you.
> 
> 
> 2. I already explained this to you but you just completely ignore it and make-up a "maybe".
> ...



1. I totally admit that. I was lazy. Fact checking everything you hear means you have no life other than fact checking. Jurassic Park in 1993 had strong bass so it all seemed to make sense. Have you never been wrong?

2. What? How do you even know how things are done in Finland? Have you even been to Finland?

2a. Yes dad.

3. I'm talking about consumers, not artists. As a consumer I want certain things and one of those things is natural spatiality.

3a. Annoyance in a chord progression doesn't change from speakers to headphones. It is the same chord progression. So, it's a good way of doing annoyance. Excessive spatiality does change from speakers to headphones. So it's problematic. To me chords and spatiality are on different metalevels. To you they are on the same level.

3b. Yeah, but the degree varies. I don't expect perfection, but to have some sanity, to be in the ballpark.

3c. It's not a problem if one sound has different cues than another, because we hear close and distant voices in real life at the same time. The problem is if the sounds themselves are contradictory. A distant guitar and a close singer is fine, if those cues are fine. Building impossible spatiality using weird combinations of elements of natural spatiality is actually a very interesting artistic intent and totally supported by me. I do it myself in my own music.

3d. A drum has certain resonances because of its construction. If the drum were smaller, the resonances would happen at higher frequencies, and vice versa. Generally, large instruments have sonic energy concentrated at lower frequencies than small instruments: double bass - cello - viola - violin, for example. So, spectrum is a cue of physical size. Very tiny instruments just don't generate low bass. Timbre is a cue of how loudly the instrument is played. Stronger harmonics are a cue of loud playing.

3e. Ok. I would have expected otherwise, but I believe you.

3f. I know that. The mic is VERY near. The level is low. However, when it's, say, a symphony, the percussion is listened to from a large distance.

4. Because a simulation can be coarse, exact or anything in between. The question is how precise the simulation needs to be. Not having crossfeed at all is itself a simulation of head shadowing, one that is as bad as possible, totally ignoring what happens in reality. So, a coarse approximation is much better. I have already explained that crossfeed targets frequencies below 800 Hz, where the absorption curve is not that complex and where no crossfeed is horribly wrong, so almost anything is better. Crossfeed doesn't try to do much at higher frequencies and there is no need for much, because at high frequencies a large ILD of up to 30 dB is totally natural, so there is no need to reduce ILD. I am surprised that at this point the whole philosophy of crossfeed seems to be unclear to you.

1. Sure, I may have pushed too much.

2. The artists must know that how big the soundstage is depends a lot on what gear is used to listen to the music, something they have no control over.

2a. Yeah, but what can you do? We can't have giant TVs to fit the Statue of Liberty at the correct scale, and you need Bigshot's awesome speaker system to have home opera at the correct scale. Compromises are needed. Ok, Harrison Ford is only a foot tall on my TV and operas happen in miniature halls, but still everything is enjoyable.

2b. Excessive spatiality is excessive in any parameter (ILD, ITD, ISD, reverb, reflections), but crossfeed addresses merely ILD, and since this is a crossfeed thread I concentrate on ILD. Crossfeed does not address reverb/reflections at all. It does address ITD and ISD a bit.
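For anyone curious, the kind of processing described above, mixing a one-pole-lowpassed copy of the opposite channel into each channel, can be sketched in a few lines. This is only an illustrative toy under stated assumptions (an 800 Hz corner and a -12 dB mix level, figures mentioned in this discussion), not the algorithm of any particular crossfeed product:

```python
import math

def one_pole_lowpass(x, fc, fs):
    """One-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * fc / fs)
    y, state = [], 0.0
    for sample in x:
        state += a * (sample - state)
        y.append(state)
    return y

def crossfeed(left, right, fs=44100, fc=800.0, gain_db=-12.0):
    """Add an attenuated, lowpass-filtered copy of the opposite
    channel to each channel. fc and gain_db are illustrative values."""
    g = 10.0 ** (gain_db / 20.0)
    lp_l = one_pole_lowpass(left, fc, fs)
    lp_r = one_pole_lowpass(right, fc, fs)
    out_l = [l + g * r for l, r in zip(left, lp_r)]
    out_r = [r + g * l for r, l in zip(right, lp_l)]
    return out_l, out_r
```

Feeding a hard-panned signal through this reduces an essentially unlimited low-frequency ILD to roughly 12 dB, while content well above the corner frequency is mixed across much more weakly.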


----------



## gregorio

71 dB said:


> 1. I totally admit that. I was lazy. Fact checking everything you hear means you have no life other than fact checking. Jurassic Park in 1993 had strong bass so it all seemed to make sense. Have you never been wrong?
> 2. What? How do you even know how things are done in Finland? Have you even been to Finland?
> 3. I'm talking about consumers, not artists. As a consumer I want certain things and one of those things is natural spatiality.
> 3a. Annoyance in a chord progression doesn't change from speakers to headphones. It is the same chord progression. So, it's a good way of doing annoyance. Excessive spatiality does change from speakers to headphones. So it's problematic. To me chords and spatiality are on different metalevels. To you they are on the same level.



1. A smart, educated person tries to avoid being wrong by not making absolute assertions of fact in the first place, unless/until they have fact checked. You've repeatedly stated you want to be respected as a smart, educated person but that can't be achieved by just keep telling everyone that you're smart and educated, you have to actually demonstrate that you're smart and educated by doing what smart, educated people do.

2. Unless cinema owners in Finland all broke into a Dolby HQ, stole all the necessary equipment and installed it themselves, then they did what everyone else in every other country had to do: Sign an agreement with Dolby and have the equipment installed and calibrated by Dolby engineers! If you're saying that did happen, please link to the reports (or any other reliable evidence) of a mass break-in and theft of cinema equipment from Dolby.

3. Well tough, artists do not make recordings only for you! Other consumers do not want natural spatiality and the consequences that would have on the music recordings they do want/buy. There has always been the odd music recording made with natural spatiality, throughout the last 60 years or so, but consumers buy them in minuscule, almost non-existent numbers compared to those without natural spatiality, not least because all the most popular genres of music recordings cannot be made with natural spatiality!

3a. No, quite the opposite. To me, the whole thing is on a "different metalevel", I do not accept your definitions or use of "excessive" or "annoyance". They are terms you've invented or used to describe your personal preferences, they are NOT objective facts!

The rest of your post is also just an exact repeat of what you've been stating for over a year, which is nonsense! It's nonsense because it's personal preference presented as objective facts, supported by made-up "facts" that contradict the actual facts and often even contradict themselves! A few examples:


71 dB said:


> 3b. Yeah, but the degree varies. I don't expect perfection, but to have some sanity, to be in the ballpark.
> 3c. It's not a problem if one sound has different cues than another, because we hear close and distant voices in real life at the same time.
> A distant guitar and a close singer is fine, if those cues are fine.
> [3c1] Building impossible spatiality using weird combinations of elements of natural spatiality is actually a very interesting artistic intent and totally supported by me. I do it myself in my own music.
> ...


3b. You are talking about YOUR personal "sanity", not an objective fact! What you find insane is different to what me and others find insane AND even if it isn't, sometimes I like a bit of insanity, it's art and insanity is sometimes good/acceptable.
3c. That's obviously nonsense, we NEVER hear the same voice (performance) both close and distant at the same time in real life. Likewise, we never hear a guitar in, say, an arena and a close voice in, say, a relatively small recording room at the same time in real life. To do so would obviously require one ear to be "close" and the other "distant", or one ear to be in an arena at the same time as the other ear is in a small recording room. Do you know anyone with ears like that?
3c1. By definition, "impossible" spatiality cannot be "natural" spatiality! You appear to think that combining two completely different natural spatialities results in a natural spatiality, which of course is nonsense, unless you do indeed have ears that can be in two completely different locations at the same time. An elephant exists in nature, so does an ant but a creature that is a "weird combination" of both does not! So, you are completely contradicting yourself because you "support" impossible/unnatural spatiality and even do it yourself in your own music but then you've been arguing against impossible/unnatural spatiality in virtually every post to this thread in over a year!
3f. No, that's an example of a false/nonsense assertion that you've just made-up! In a recording of a symphony, the percussion is "listened to" from at least two (and usually many) different distances all at the same time, both relatively close and distant.

4. I don't see how this statement could be more self-contradictory. If "a simulation can be coarse, exact or anything in between" and "not having crossfeed itself is a simulation", then not having crossfeed must be at least a coarse simulation and is therefore "much better"??
The only rational thing you stated is that the question is indeed how precise the simulation needs to be. Are you saying you can't tell the difference between an accurate HRTF and the extremely "coarse" simulation that is your simple crossfeed? Or just that it's your preference? ... Either way it doesn't really matter because either it's YOUR perception or YOUR preference, not an objective fact! If it were an objective fact, then why would HRTFs have been invented in the first place? For me (and others) crossfeed does NOT provide a precise enough simulation. I do NOT perceive it as head shadowing, I perceive it as just a distortion/colouration which by definition is a loss of fidelity, and I personally do NOT prefer this (or any other) avoidable loss of fidelity.
4a. And I'm somewhat surprised that despite all the self-contradictions, the admission that you just make things up and the admission that your historical "facts" were not checked and were false, you still doggedly stick to your clearly false "philosophy of crossfeed".

I say "somewhat surprised" rather than totally shocked because I've grown accustomed to how common it is in the audiophile world. Audiophiles commonly believe their personal perceptions/preferences are objective facts and then prioritise that belief above ANYTHING and EVERYTHING else, including: The actual facts/realities, the demonstrated science, logic, common sense or even any personal desire to avoid appearing ignorant or foolish. The solution to your dilemma of not being respected for your knowledge/education is therefore actually quite simple, prioritise your desire not to appear ignorant/foolish above your desire to defend your (erroneous) belief, which is causing you to make irrational and false assertions. You cannot blame me or this sub-forum for what is entirely YOUR choice of what YOU want to prioritise!

G


----------



## 71 dB

gregorio said:


> 1. A smart, educated person tries to avoid being wrong by not making absolute assertions of fact in the first place, unless/until they have fact checked. You've repeatedly stated you want to be respected as a smart, educated person but that can't be achieved by just keep telling everyone that you're smart and educated, you have to actually demonstrate that you're smart and educated by doing what smart, educated people do.



I have said many times my assertions are based on facts. Our disagreement is not about the facts but about what the facts mean. To you, ILD = 13 dB @ 100 Hz means a different thing than it means to me. To you it's artistic intention and maybe even a requirement of many genres of music. To me it's something that remains the same when listened to using headphones without crossfeed, but becomes something like 3 dB if listened to with speakers, because of how the sound behaves in a room and how HRTF works. So it's not about what the facts are, it is about what to do with the facts.

I don't expect to be praised as a genius or something. Respect in this context means not being called clueless. There are lots of people who are clueless, who know nothing about how spatial hearing works. Next time a pizza boy comes to your door, ask him if he has ever heard about HRTF, ITD etc. Even if you don't agree with everything I say, you can respect my opinions as an alternative way of thinking. Even audio gurus have disagreements.



gregorio said:


> 2. Unless cinema owners in Finland all broke into a Dolby HQ, stole all the necessary equipment and installed it themselves, then they did what everyone else in every other country had to do: Sign an agreement with Dolby and have the equipment installed and calibrated by Dolby engineers! If you're saying that did happen, please link to the reports (or any other reliable evidence) of a mass break-in and theft of cinema equipment from Dolby.



I don't believe anything was stolen. I was told the gear has volume adjustment. Maybe deluxe models with that have been installed only in Finland?

There is a news story, but it is in Finnish: https://yle.fi/uutiset/3-6777335

_*Elokuvien ääniraidat soitetaan Suomessa usein hiljaisemmalla tasolla kuin tuotantoyhtiöt suosittelevat. Tästä huolimatta elokuvateatterit saavat valituksia liian kovasta äänestä säännöllisesti. Joissakin elokuvissa kovimmat äänet lähentelevät jo sosiaali- ja terveysministeriön määrittelemää maksimia.*_

Translation: _"Movie soundtracks in Finland are often played at a quieter level than the production companies recommend. Despite this, movie theatres regularly receive complaints about overly loud sound. In some movies the loudest sounds already approach the maximum specified by the Ministry of Social Affairs and Health."_

Does this look to you like everything is always played using Dolby calibrations? I am not crazy. Maybe Dolby can't enforce their calibrated levels in Finland due to the law? Maybe they have to include a volume knob in their installations in Finland? Dolby is not above regional law. Maybe only in the US does the law allow them to use fixed calibrated levels?


----------



## gregorio

71 dB said:


> [1] I have said many times my assertions are based on facts. Our disagreement is not about the facts but what he facts mean.
> [2] Does this look to you everything is always played using Dolby calibrations? I am not crazy.
> [3] Respect in this context means not to be called clueless.



1. No, our disagreement is based on your incorrect/made-up facts and the correct facts which you quote in isolation from the other pertinent facts and then make false assertions because you've ignored the pertinent/required facts!

2. Yes, you are crazy or at least, you are NOT reading what has been written and then fighting on blindly, regardless of the actual facts! Did you not see the date on that article you posted? There is no Dolby Digital 5.1 any more in cinemas and there hasn't been for many years, Dolby Digital 5.1 is not even supported by ANY digital theatre (DCP) standard!

3. How is fighting on blindly, against the actual facts, anything other than clueless? As you are demonstrating that you are clueless, why do you expect respect for not being clueless?

Again, your desire/choice to defend your erroneous facts/belief is the very thing that precludes you from getting any respect. You completely ignore this fact, carry on defending your false facts/beliefs and carry on whining about not being respected. Do you want to be respected OR do you want to defend false facts? Your choice but you can't have both!

G


----------



## 71 dB

gregorio said:


> 3. Well tough, artists do not make recordings only for you! Other consumers do not want natural spatiality and the consequences that would have on the music recordings they do want/buy. There has always been the odd music recording made with natural spatiality, throughout the last 60 years or so, but consumers buy them in minuscule, almost non-existent numbers compared to those without natural spatiality, not least because all the most popular genres of music recordings cannot be made with natural spatiality!
> 
> 3a. No, quite the opposite. To me, the whole thing is on a "different metalevel", I do not accept your definitions or use of "excessive" or "annoyance". They are terms you've invented or used to describe your personal preferences, they are NOT objective facts!
> 
> ...



3. Most consumers don't care about spatiality.

3a. I don't know what definitions you accept. A lot of my studies of spatiality happened in Finnish, so my knowledge of the English terminology may be weak. My preferred term is spatial distortion, but people here opposed that, so I started to use excessive spatiality, but apparently that's not good either. You have a personal preference too, which is that excessive spatiality is OK, even needed as artistic intent.

3b. Our brain processes the sound. Meaningful objective facts are psychoacoustic in nature, and that's why the science of spatial hearing is what counts. Today I listened, on headphones, to an Ondine CD of *Christoph Graupner*'s orchestral suites played by the Finnish Baroque Orchestra conducted by Sirkka-Liisa Kaakkinen-Pilch. Without crossfeed the spatiality is VERY messy and excessive. Crossfeed improves the spatiality very much, the proper crossfeed level being about -2 dB. Are you saying I should consider this kind of personal experience irrelevant? Why should I consider your personal experiences more relevant? Aren't we supposed to figure out why our personal experiences differ? I am very curious about why some people don't mind excessive spatiality. How is it possible that what is messy spatiality for me is not messy for you and vice versa? The difference of our HRTFs can't explain this.

I think you really haven't properly realized what I realized in 2012 about headphone spatiality, and your profession as a sound engineer might even prevent you from realizing it, because you have learned and professionally adopted a certain philosophy about spatiality. Since you think your knowledge is superior to mine, you can't believe I have realized something you have not. Maybe you realize this conflict on a subconscious level and that triggers you to discredit me all you can? I admit you have made me question my belief (because I have low self-esteem and I think questioning yourself is a good thing), but you are not converting me. Every time I put headphones on I experience the benefits of crossfeed. Sure, it's only my personal experience, but that is what counts for me. If everyone hears things differently, then what can the objective psychoacoustic facts be? However, the science of spatial hearing shows that people do hear spatiality in a similar fashion. Fine detail (HRTF) differs, but the big picture (low frequencies are hardly shadowed, high frequencies are significantly shadowed) is the same for everyone. Crossfeed uses that big picture, and in doing so should work for everyone no matter what the exact HRTF is.
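A crossfeed built on that big picture can be sketched as a delayed, lowpass-filtered, attenuated copy of the opposite channel mixed into each side, similar in spirit to the bs2b-style processing quoted earlier in the thread. This is a minimal illustration, not anyone's published design; the cutoff, attenuation and delay defaults below are placeholder values:

```python
import math

def one_pole_lowpass(x, cutoff_hz, fs):
    """First-order IIR lowpass: a crude stand-in for head shadowing,
    which passes lows and attenuates highs at the far ear."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)
    y, state = [], 0.0
    for sample in x:
        state = (1.0 - a) * sample + a * state
        y.append(state)
    return y

def delayed(x, samples):
    """Delay a signal by prepending zeros, keeping its length."""
    return [0.0] * samples + x[:len(x) - samples]

def crossfeed(left, right, fs, cutoff_hz=700.0, atten_db=12.0, delay_ms=0.3):
    """Mix a delayed, lowpassed, attenuated copy of the opposite
    channel into each channel."""
    g = 10.0 ** (-atten_db / 20.0)
    d = int(fs * delay_ms / 1000.0)
    feed_l = delayed(one_pole_lowpass(left, cutoff_hz, fs), d)
    feed_r = delayed(one_pole_lowpass(right, cutoff_hz, fs), d)
    out_l = [s + g * f for s, f in zip(left, feed_r)]
    out_r = [s + g * f for s, f in zip(right, feed_l)]
    return out_l, out_r

# Hard-panned test signal: sound only in the left channel.
out_l, out_r = crossfeed([1.0] * 100, [0.0] * 100, fs=48000)
```

After processing, the right channel carries a quieter, slightly delayed, treble-rolled-off copy of the left channel: the interaural relationship that, per the argument above, plain headphone playback omits.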



gregorio said:


> 4. I don't see how this statement could be more self-contradictory. If "a simulation can be coarse, exact or anything in between" and "not having crossfeed itself is a simulation", then not having crossfeed must be at least a coarse simulation and is therefore "much better"??
> The only rational thing you stated is that the question is indeed how precise the simulation needs to be. Are you saying you can't tell the difference between an accurate HRTF and the extremely "coarse" simulation that is your simple crossfeed? Or, just that it's your preference? ... Either way it doesn't really matter because either it's YOUR perception or YOUR preference, not an objective fact! If it were an objective fact, then why would HRTFs have been invented in the first place? For me (and others) crossfeed does NOT provide a precise enough simulation. I do NOT perceive it as head shadowing, I perceive it as just a distortion/colouration which by definition is a loss of fidelity and I personally do NOT prefer this (or any other) avoidable loss of fidelity.
> 4a. And I'm somewhat surprised that despite all the self-contradictions, the admission that you just make things up and the admission that your historical "facts" were not checked and were false, that you still doggedly stick to your clearly false "philosophy of crossfeed".
> 
> ...



4. Again you are interpreting things semantically in the way that is maximally negative for me. You are more interested in smearing me than actually processing what I am saying. Sure, it must be easy for you to do so when a Finnish guy tries to express himself in English. No crossfeed is like a bald head: no hair, but there is still a head, so we can say "no hair". If there were no head, nobody would say "no hair", because why would there be? *No crossfeed is a MUCH worse coarse simulation than crossfeed*. That is actually what I realized in 2012. I believe the psychological problem for many is that usually in audio the less you do the better. That's why no crossfeed may seem better than crossfeed, but crossfeed is a rare example in audio where doing something improves things. That's because our hearing _expects_ a certain kind of simulation to happen. No crossfeed is far from that expected simulation, but crossfeed is somewhat near, at least much closer than no crossfeed.

Of course I can hear the difference between an accurate HRTF convolution and simple crossfeed. I can hear the difference between different topologies ("H" and "X") of simple crossfeed. I don't have the technological means to do HRTF convolution in real time, so that's not an option for me. So I use simple crossfeed.

4a. I never said stereo was _invented_ in the 1950's, because I know the first experiments were in the 1880's! Again you smeared me as much as possible. I did not make up that stereo was invented in the 1950's. I said that's when stereo became commercial.

_"…stereophonic sound recordings did not develop commercially until the mid-1950s."_ - The Science of Sound by Thomas D. Rossing, second edition 1990. Surely you aren't suggesting Rossing makes things up?


----------



## 71 dB

gregorio said:


> 1. No, our disagreement is based on your incorrect/made-up facts and the correct facts which you quote in isolation from the other pertinent facts and then make false assertions because you've ignored the pertinent/required facts!
> 
> 2. Yes, you are crazy or at least, you are NOT reading what has been written and then fighting on blindly, regardless of the actual facts! Did you not see the date on that article you posted? There is no Dolby Digital 5.1 any more in cinemas and there hasn't been for many years, Dolby Digital 5.1 is not even supported by ANY digital theatre (DCP) standard!
> 
> ...


1. I don't know why you don't see that what I say is based on facts, so I don't know what to say.

2. Dolby Digital 5.1 or not, you either have fixed calibrated levels or you don't. According to this article from 2013, there is no fixed calibrated level in Finland.

3. Maybe I just ignore your statements. I am NOT clueless! If I were, I wouldn't even be here. Why would I be?


----------



## bigshot

Everything is uncalibrated in Finland... that's the problem right there!


----------



## castleofargh




----------



## gregorio

71 dB said:


> 3. Most consumers don't care about spatiality.
> 3a. I don't know what definitions you accept. ...My preferred term is spatial distortion, but people here opposed that [3a1] so I started to use excessive spatiality, but apparently that's not good either. [3a2] You have a personal preference too which is that excessive spatiality is ok, even needed as artistical intent.
> 3b. Without crossfeed the spatiality is VERY messy and excessive. [3b1] Crossfeed improves the spatiality very much, the proper crossfeed level being about -2 dB.
> [3b2] Are you saying I should consider this kind of personal experiences irrelevant?
> [3b4] Why should I consider your personal experiences more relevant?


3. Do you have some reliable evidence to back that assertion up or did you just make it up? If we created a mix with no spatial information, say recorded it in an anechoic chamber and mixed it in mono with no added reverb or delay based effects, don't you think consumers would notice/care? But that's not the point, the actual point you made was the importance (to you) of NATURAL spatiality and consumers absolutely would care about that because the restriction of natural spatiality would preclude almost all rock and popular music recordings from around the mid/late 1960's onwards. Do you really think that if there were no rock or popular music genres most consumers wouldn't care? Isn't that the definition of "popular" music?
3a. As the spatial information in a recording is NOT distorted when replaying with headphones (without crossfeed), then why isn't it obvious that "spatial distortion" is an inappropriate term?
3a1. Of course that's not good either, the word "excessive" means "too much" and you have decided what is "too much" according to your perception/preferences, not according to objective fact but you are employing the term as an objective fact, which is false!
3a2. No, that's a FALSE assertion that you have just made-up! "Excessive" ("too much") spatiality is NEVER ok, where we differ is our opinion of what constitutes "too much". Artists NEVER create recordings with too much (excessive) spatiality, their recordings contain exactly the amount of spatiality they desired, which may or may not be "too much" according to your personal preferences!
3b. No it's not, it's actually not messy enough and there's not enough spatiality! Which is why I prefer it on speakers, which adds more spatiality and "mess" (the listening environment acoustics).
3b1. No it doesn't, it makes it worse! With crossfeed we still have the same amount of spatial information only now it's crossfed!
3b2. The answer is obvious and even in the question itself!!! Your personal experiences are relevant to YOU personally.
3b4. You shouldn't consider my personal experiences more relevant to YOU personally and I've NEVER stated that you should. HOWEVER, the fact that my personal experiences (perception and preferences) are different to yours, disproves your assertion that YOUR personal experiences are objective facts that apply to everyone, because they do not apply to me (and others). Your solution to this problem is effectively to state that our personal experiences are invalid (because we are idiots and ignorant) and therefore do not contradict your assertion. However, you've got no reliable evidence to support this statement AND you have no reliable evidence to support your assertion that your personal experiences/preferences are objective facts that apply to everyone. The OBVIOUS, RATIONAL conclusion is that your personal preferences are just that, YOUR personal preferences and not everyone else's! 


71 dB said:


> [1] *No crossfeed is MUCH worse coarse simulation than crossfeed*.
> [2] I never said stereo was _invented_ in 1950's, because I know the first experiments were in 1880's!


1. Clearly that statement is false, it might be better for you but it's not for me (and others), so your statement is not a general or objective fact. AND, not crossfeeding obviously has no HRTF simulation at all.

2. This is what you actually stated: "_In the 50's the stereophonic commercial sound format finally arrived and stereo sound because a marketing gimmick. That kind of thing often happens with new technology: When digital sound formats arrived to movie theatres in early 90's and allowed extremely strong low frequency effects, movies of that era used excessive bass to exploit the new technological possibilities until the low frequency effects returned to more rational levels." _- Again, you are contradicting yourself: if you know the first experiments were in the 1880's, how could it be a gimmicky new technology 70 years later?


71 dB said:


> 1. I don't know why you don't see what I say is based on facts so I don't know what to say.
> 2. Dolby Digital 5.1 or not, you whether have calibrated fixed levels or you don't have.
> [2a] According to this article from 2013 there is no fixed calibrated level in Finland.
> 3. I am NOT clueless!


1. You're joking? When challenged enough you admitted to just making-up some facts, you also eventually admitted another of your facts was wrong, because you failed to check something you were told by the brother of a friend but  despite that admission, you are STILL arguing about it? ...

2. What do you mean Dolby Digital 5.1 or not, what other digital audio format "arrived to movie theatres in the early 1990's"?
2a. So let me get this straight; You make-up (or according to you, repeat) a false fact, admit it was wrong, carry on defending that false assertion anyway and to support it you post an article about loud theatrical sound 20 years later, when that "new technology" (Dolby Digital 5.1) didn't even exist as a theatrical audio format any more! And you wonder "why I don't see what you say is based on fact"? I do see that it's based on facts, FALSE facts!

3. Then why are you so determined to prove that you are?

G


----------



## 71 dB

gregorio said:


> 3. Do you have some reliable evidence to back that assertion up or did you just make it up? If we created a mix with no spatial information, say recorded it in an anechoic chamber and mixed it in mono with no added reverb or delay based effects, don't you think consumers would notice/care? But that's not the point, the actual point you made was the importance (to you) of NATURAL spatiality and consumers absolutely would care about that because the restriction of natural spatiality would preclude almost all rock and popular music recordings from around the mid/late 1960's onwards. Do you really think that if there were no rock or popular music genres most consumers wouldn't care? Isn't that the definition of "popular" music?
> 
> G



Crossfed rock doesn't become non-rock, so the music wouldn't disappear by reducing excessive spatiality. You may have 20 dB ILD on a rock album, but that ILD is reduced to about 3 dB at low frequencies with speakers.


----------



## 71 dB

gregorio said:


> 3a. As the spatial information in a recording is NOT distorted when replaying with headphones (without crossfeed), then why isn't it obvious that "spatial distortion" is an inappropriate term?
> 3a1. Of course that's not good either, the word "excessive" means "too much" and you have decided what is "too much" according to your perception/preferences, not according to objective fact but you are employing the term as an objective fact, which is false!
> 3a2. No, that's a FALSE assertion that you have just made-up! "Excessive" ("too much") spatiality is NEVER ok, where we differ is our opinion of what constitutes "too much". Artists NEVER create recordings with too much (excessive) spatiality, their recordings contain exactly the amount of spatiality they desired, which may or may not be "too much" according to your personal preferences!
> 3b. No it's not, it's actually not messy enough and there's not enough spatiality! Which is why I prefer it on speakers, which adds more spatiality and "mess" (the listening environment acoustics).
> ...



3a. Spatial distortion happens in the brain and is caused by excessive spatial information which the brain can't decode properly. That's why it is a good name imo.
3a1. The science of spatial hearing tells us what is too much.
3a2. Luckily I have crossfeed to sort out the disagreements between the artist and me.
3b. Speakers + room add spatial _information_ (reflections, and reverb especially), but the ILD at low frequencies goes toward about 3 dB, which typically means a reduction in ILD.
3b1. We have a MUCH BETTER coarse simulation of the acoustic crossfeed of direct sound with speakers.
3b2. Yes, very much so.
3b4. You can't deny that crossfeed means a MUCH BETTER coarse simulation than no crossfeed. My claim can be OBJECTIVELY proven by comparing signals. Below 800 Hz, crossfeed gives results significantly closer to "perfect" HRTF convolution or to speaker listening in an anechoic chamber. You can't debunk this claim no matter how hard you try. All you can do is say HRTF convolution is better, and you are correct, it is, but this is crossfeed vs no crossfeed, and in that battle crossfeed wins no matter what you try.
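The "comparing signals" argument above can be illustrated with a toy model (an illustration, not a proof): below ~800 Hz, treat each ear signal as a phase-free weighted sum of the two channels. The ~3 dB far-ear shadowing gain and the -6 dB crossfeed gain below are assumed values chosen for the sketch:

```python
import math

g = 10 ** (-3 / 20)    # assumed far-speaker gain at the opposite ear

def ear_error(cross_gain):
    """RMS error between headphone ear signals (for a given crossfeed
    gain) and the anechoic-speaker target, over a few test ILDs."""
    cases = [(1.0, 0.1), (1.0, 0.5), (0.3, 1.0)]   # (L, R) amplitudes
    err = 0.0
    for L, R in cases:
        target_l, target_r = L + g * R, R + g * L   # speaker listening
        got_l, got_r = L + cross_gain * R, R + cross_gain * L
        err += (target_l - got_l) ** 2 + (target_r - got_r) ** 2
    return math.sqrt(err / (2 * len(cases)))

no_crossfeed = ear_error(0.0)
with_crossfeed = ear_error(10 ** (-6 / 20))   # -6 dB crossfeed
```

In this simplified model any crossfeed gain between zero and the true shadowing gain lands closer to the speaker target than no crossfeed at all, which is the sense in which the signal comparison favors crossfeed; it says nothing about whether a given listener prefers the result.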


----------



## bigshot

Distortion can definitely happen inside the brain.


----------



## 71 dB (Mar 16, 2019)

gregorio said:


> 1. Clearly that statement is false, it might be better for you but it's not for me (and others), so your statement is not a general or objective fact. AND, *not crossfeeding obviously has no HRTF simulation at all*.
> 
> 2. This is what you actually stated: "_In the 50's the stereophonic commercial sound format finally arrived and stereo sound because a marketing gimmick. That kind of thing often happens with new technology: When digital sound formats arrived to movie theatres in early 90's and allowed extremely strong low frequency effects, movies of that era used excessive bass to exploit the new technological possibilities until the low frequency effects returned to more rational levels." _- Again, you are contradicting yourself: if you know the first experiments were in the 1880's, how could it be a gimmicky new technology 70 years later?



1. What you seem to not realize even at this point is that there ought to be _something_, whether the real deal or a simulation. The fact that no crossfeed omits this _something_ is the damn problem and the cause of spatial distortion. That's why I use crossfeed: to have that _something_ and avoid spatial distortion in my brain. You could have this something implemented in the recording itself by producing omnistereophonic recordings so that the headphone user doesn't need crossfeed, but you have been very much against any regulation of artistic intent concerning spatiality, and even if 100 % of recordings from today onward were omnistereophonic, there are decades' worth of stereo recordings which need crossfeed.

2. You don't need marketing gimmicks if you don't have a mass product to sell. In the late 1950's stereophonic sound finally became a mass product, and marketing gimmicks started to have a purpose. Apparently I need to explain everything as if you were a child in order not to be smeared by you.



gregorio said:


> 1. You're joking? When challenged enough you admitted to just making-up some facts, you also eventually admitted another of your facts was wrong, because you failed to check something you were told by the brother of a friend but  despite that admission, you are STILL arguing about it? …
> 
> 2. What do you mean Dolby Digital 5.1 or not, what other digital audio format "arrived to movie theatres in the early 1990's"?
> 2a. So let me get this straight; You make-up (or according to you, repeat) a false fact, admit it was wrong, carry on defending that false assertion anyway and to support it you post an article about loud theatrical sound 20 years later, when that "new technology" (Dolby Digital 5.1) didn't even exist as a theatrical audio format any more! And you wonder "why I don't see what you say is based on fact"? I do see that it's based on facts, FALSE facts!
> ...



1. My facts being wrong doesn't automatically mean yours are correct.
2. To my knowledge DTS and SDDS.
2a. Because I am confused about what we are arguing about. What does the sound format matter? Either you have a volume knob or you don't (fixed calibrated levels). I think I have demonstrated that in Finland those volume knobs exist and they are used.
3. Because you keep challenging me.


----------



## bfreedma




----------



## jgazal

bigshot said:


> Distortion can definitely happen inside the brain.



I miss Oliver Sacks:


----------



## sonitus mirus

bfreedma said:


>




It should be played at Wimbledon because the participants could benefit from competing on "grass."


----------



## Steve999 (Mar 16, 2019)

. . .


----------



## Davesrose (Mar 16, 2019)

jgazal said:


> I miss Oliver Sacks:



At least he was prolific in his personalized stories of people with various conditions. He was generous and always thought first about how a particular person (with often unique afflictions) saw the world through their own eyes (and IMO, what made his work really great: he was far from clinical). I read quite a bit of his work while growing up, and read his account of Temple Grandin before seeing her own biography (such as Claire Danes's performance). I've bookmarked your video. Another story I remember reading from Sacks is the case of a surgeon with Tourette's: he needed to have music in the OR to avoid any incidents. I did see the intro of the video where he mentions music might be unique to humans, even though there are mating songs of different animals. Whether or not other species have a sense of music, I do think it's interesting how anyone identifies with music (I've noticed that even people who are tone deaf can be deeply moved by music). That we can savor particular stimuli (auditory, visual, taste, touch, smell)... I do think it's an indication that it's evolutionary.


----------



## gregorio (Mar 17, 2019)

71 dB said:


> 3a. Spatial distortion happens in the brain and is caused by excessive spatial information which the brain can't decode property.
> 3a1. Science of spatial hearing tells as what is too much.
> 3a2. Luckily I have crossfeed to sort out the disagreements between the artist and me.
> 3b. Speakers + room add spatial _information_ (reflections, reverb especially) ...
> 3b1. We have a MUCH BETTER coarse simulation of acoustic crossfeed of direct sound with speakers.



3a. No it doesn't, you just made that up! Maybe YOUR brain is weird "can't decode spatial information properly" and then creates some sort of imaginary "spatial distortion" but that's YOUR brain, my brain does NOT do that. Therefore, your statement is FALSE! Spatial distortion does NOT happen in "THE" brain, it happens in YOUR brain. This is your big problem throughout your postings, you assume/assert that what your brain is creating/imagining is the same as what everyone else's brain is doing, despite the fact that you've presented no reliable evidence to support that assertion AND despite the fact that it's disproven by my (and many others) perception.
Furthermore, by your own admission this weird imaginary/perceived "distortion" is a creation/product of your brain, the actual signal itself (without crossfeed) does NOT have any "spatial distortion". Your use of the term is therefore incorrect as the term "spatial distortion" means an actual distortion of spatial information and there isn't any. A more accurate term would therefore be "your personal imaginary distortion"!

3a1. No it does NOT! If you assert that it does, then you must present that science. However, after over a year you have still NOT done so and of course you can't, because there is no science that tells us what is "too much". All the science tells us is the limits of certain aspects of spatiality as they occur naturally/in nature but of course that's irrelevant because we're not dealing with what occurs naturally! Music recordings do not exist in nature, they are an entirely man-made invention/creation, they virtually NEVER comply with "what occurs naturally/in nature" and are deliberately designed not to, because they are designed as an art form, NOT an accurate documentary/record of "what occurs in nature". "Too much"/"excessive" is therefore your individual, personal preference, NOT an objective or scientific fact!

3a2. You mean: "Luckily you have crossfeed to lower the fidelity of the artistic intention". As music itself (as well as the recordings of music) is the actual embodiment of artistic intention, then I wouldn't say "luckily", I would say "ignorantly" but it's your recording to reproduce however you choose. So, playback your music recordings from a vinyl LP, with a valve amp and crossfeed if that's what YOU like/prefer but don't assert here or tell me that's actually "better" for me and everyone else!!!

3b. Why would anyone want to "add spatial information" to recordings that (according to you) already have too much spatial information? So everyone who prefers to listen to music recordings on speakers (which add even more spatial information) are what, idiots, ignorant?
3b1. Again, you must stop simply making-up FALSE assertions! I (and others) have repeatedly told you that I do NOT perceive/experience a better simulation of HRTF with crossfeed, in fact, I often don't perceive crossfeed as any sort of HRTF simulation, let alone a "better" one. So, your assertion is proven FALSE, "We" do NOT "have a much better simulation"! For your assertion to be true, you could ONLY state: "I personally perceive a much better simulation".


71 dB said:


> 1. What you seem to not realize even at this point is that there ought to be _something_, whether the real deal or a simulation.
> 2. You don't need marketing gimmicks if you don't have a mass product to sell.
> 2a. Apparently I need to explain everything as if you were a child to not be smeared by you.


1. Why on earth would I want to realise at any point (let alone "even at this point") that a false assertion is true? Presumably, you'd agree that there doesn't "ought to be something" with a binaural recording? What about with a stereo recording which is not binaural but has been designed for headphone listening?
2. Exactly my point! Both natural and unnatural spatiality were explored many years before stereo was a consumer product, let alone a mass product. Even extreme unnatural spatiality (Stockhausen for example) was created before stereo was a consumer product, so how could it be a marketing gimmick if you don't have a mass product to sell?
2a. Well of course you do, because if you explained it as an adult then it wouldn't make any sense! It's like trying to explain that Santa Claus is real, try doing that to anyone other than a child and see what happens!


71 dB said:


> 1. My facts being wrong doesn't automatically mean yours are correct.
> 2. To my knowledge DTS and SDDS.
> 2a. Because I am confused about what we are arguing about.
> [2b] I think I have demonstrated that in Finland those volume knobs exists and they are used.
> 3. Because you keep challenging me.


1. No, but it does automatically mean that your facts/assertions are wrong, duh! And of course, anyone is free to check if my facts are correct.

2. DTS and SDDS both came some years after Dolby Digital. However, it makes no real difference to your (false) assertion because they all had the same SPL calibration, which was the same as the previous analogue theatrical audio format!
2a. That proves my point then! You are just arguing, even though you admit you don't know what you're arguing about and apparently don't care about logic, if your facts are wrong or mine are correct. Do you think that's what a smart, educated person would do or do you think that it's pretty much the exact opposite? ... As it's pretty much the exact opposite, then why don't you expect us to give you the respect due to the opposite of a smart educated person?
2b. You have demonstrated that, however there was no need to, because firstly, I already know that and secondly, it's IRRELEVANT to your assertion anyway! What you have demonstrated pertains to more than 20 years AFTER dolby digital was a "new technology" and in fact so long after, that not only wasn't Dolby Digital a "new technology" but it was such an old technology that it wasn't even supported by theatrical systems any more! And incidentally, neither were DTS or SDDS. The digital technology (DCP) that replaced 35mm film in the 2000's ONLY allows wav audio format, none of the data compressed audio formats (DD, DTS, etc) are supported!!

3. Again, that's EXACTLY my point! You are choosing to place your desire to not be challenged above any desire to be factually accurate (or even logical) and therefore above any desire to be respected as anything other than "clueless". That's your choice of course but you cannot blame me or anyone else for your choice! Also, the whole point of this sub-forum is to be factually accurate, so your choice is not valid here.

G


----------



## mindbomb

@gregorio 

He's right about spatial distortion. This can be demonstrated by listening to a regular audio track on headphones and then listening to it again on speakers. People can note a big difference in the imaging, a result of the distortion of headphones.


----------



## 71 dB

gregorio said:


> 3a. No it doesn't, you just made that up! Maybe YOUR brain is weird "can't decode spatial information properly" and then creates some sort of imaginary "spatial distortion" but that's YOUR brain, my brain does NOT do that. Therefore, your statement is FALSE! Spatial distortion does NOT happen in "THE" brain, it happens in YOUR brain. This is your big problem throughout your postings, you assume/assert that what your brain is creating/imagining is the same as what everyone else's brain is doing, despite the fact that you've presented no reliable evidence to support that assertion AND despite the fact that it's disproven by my (and many others) perception.
> Furthermore, by your own admission this weird imaginary/perceived "distortion" is a creation/product of your brain, the actual signal itself (without crossfeed) does NOT have any "spatial distortion". Your use of the term is therefore incorrect as the term "spatial distortion" means an actual distortion of spatial information and there isn't any. A more accurate term would therefore be "your personal imaginary distortion"!
> 
> 3a1. No it does NOT! If you assert that it does, then you must present that science. However, after over a year you have still NOT done so and of course you can't, because there is no science that tells us what is "too much". All the science tells us is the limits of certain aspects of spatiality as they occur naturally/in nature but of course that's irrelevant because we're not dealing with what occurs naturally! Music recordings do not exist in nature, they are an entirely man-made invention/creation, they virtually NEVER comply with "what occurs naturally/in nature" and are deliberately designed not to, because they are designed as an art form, NOT an accurate documentary/record of "what occurs in nature". "Too much"/"excessive" is therefore your individual, personal preference, NOT an objective or scientific fact!
> ...


3a. I'm sure your brain decodes excessive spatiality into spatial distortion, but you don't realize it is spatial distortion. I didn't realize it either before 2012. I thought of it as just "headphone spatiality", which is what it is, but in 2012 I realized what it is and that you can reduce and even remove it. You can tell me 1000 times these things happen only in my mind, but how do you explain other people enjoying crossfeed and finding benefits in using it? Why was crossfeed a thing long before I discovered it? Why did Siegfried Linkwitz create and publish his "Improved Headphone Listening" article in 1971? Why did Benjamin Bauer create crossfeeders 10 years before that? Something wrong with their brains too?

The original signal doesn't have spatial distortion, because it's out-of-context spatial information. Stereo sound is technically two mono signals with a varying degree of correlation between the audio channels. The spatial information doesn't know HOW it is fed to our ears. Speakers? Headphones? The type of listening device and environment gives the spatial context, and only then do we know if spatial distortion exists in that particular context or not. In the context of speaker listening spatial distortion doesn't exist, but in the context of headphone listening spatial distortion is very common. Spatial distortion is not "imaginary". It's a result of how our spatial hearing works. I believe that all people hear spatial distortion, but only some people recognise it as spatial distortion, a result of something being wrong in the listening context.
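The "varying degree of correlation" point can be made concrete. As a minimal illustration (my addition, not anything from the thread), the Pearson correlation between the two channels of a stereo signal quantifies how "mono-like" the material is:

```python
import numpy as np

def channel_correlation(left, right):
    """Pearson correlation between the two channels of a stereo signal:
    +1.0 for dual mono, near 0 for unrelated channels, and -1.0 for
    identical but phase-inverted channels."""
    return float(np.corrcoef(left, right)[0, 1])
```

Hard-panned studio material drives this figure toward zero, while crossfeed (or the acoustic crosstalk of a speaker pair) pushes it back up, which is one way of quantifying the "listening context" being argued about here.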

3a1. Even the most obscure music creation, listened to with speakers, is just two loudspeakers radiating mono sound into a room, and there is no "too much" spatiality. The same recording with headphones (a completely different context) means too much spatiality, unless we "naturalize" the spatiality with crossfeed or some other method such as HRTF convolution.

3a2. Does applying RIAA playback EQ lower fidelity? No, because such equalizing is _expected_ to happen with vinyl. Similarly, crossfeed is _expected_ to happen, meaning it is not lower fidelity; it is HIGHER fidelity, because the spatial distortion created by the brain is avoided.

3b. I didn't say anything about people wanting to add spatial information, but that's what happens with speakers. Adding natural spatiality is very different from having excessive spatiality. Adding spatial information in the form of reverb and reflections doesn't increase ILD; on the contrary, it more or less decreases it.

3b1. Crossfeed is more of a spatial distortion reducer than HRTF simulator. To me the benefits of crossfeed are:

- Realistic "physical" bass instead of "fake" sounding bass.
- Reduced listening fatique
- Ordered solid soundstage instead of a fractured mess all over the place
- Miniature soundstage instead of head-sized microsoundstage.
- Lack of "sounds touching/tickling my ears" annoyance.
- More musical detail thanks to spatial distortion not masking stuff.

Crossfeed does not give speaker-like large soundstages or ultrarealistic binaural/HRTF convolution -type of sound, but the benefits above are so huge that crossfeed revolutionized my headphone listening.
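For readers wondering what crossfeed actually does to the signal, here is a minimal numpy sketch of the basic recipe discussed in this thread: each output channel is the input channel plus a delayed, lowpass-filtered, attenuated copy of the opposite channel. The cutoff, delay and attenuation values below are illustrative assumptions, not the parameters of any particular plugin:

```python
import numpy as np

def crossfeed(left, right, fs, cutoff=700.0, delay_ms=0.3, atten_db=-12.0):
    """Mix a delayed, one-pole-lowpassed, attenuated copy of the
    opposite channel into each channel (all parameters illustrative)."""
    a = np.exp(-2.0 * np.pi * cutoff / fs)  # one-pole lowpass coefficient
    gain = 10.0 ** (atten_db / 20.0)
    delay = int(round(delay_ms * 1e-3 * fs))

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = (1.0 - a) * s + a * acc
            y[i] = acc
        return y

    def delayed(x):
        # shift right by `delay` samples, zero-padding the start
        return np.concatenate([np.zeros(delay), x[:len(x) - delay]])

    return (left + gain * delayed(lowpass(right)),
            right + gain * delayed(lowpass(left)))
```

Note that real crossfeeders (bs2b, Meier-style circuits) obtain the interaural delay with shelving filters rather than a fixed sample delay; this sketch only shows the overall structure.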


----------



## bigshot (Mar 17, 2019)

Crossfeed, hot mastering, inaudible frequencies being audible, psychology making testing impossible, phono cartridges that reproduce more than CDs, worrying about inaudible sound that might be audible under extreme conditions, super golden ears that hear what only bats hear... We're gathering quite a rogues' gallery in this forum. Every day there's a different assortment. We really should make up some bingo cards so we can all play the Sound Science home game.


----------



## bigshot

mindbomb said:


> He's right about spatial distortion. This can be demonstrated by listening to a regular audio track on headphones and then listening to it again on speakers. People can note a big difference in the imaging, a result of the distortion of headphones.



I don't think anyone is denying that. We're just saying that crossfeed doesn't make headphones sound like speakers.


----------



## 71 dB

gregorio said:


> 1. Why on earth would I want to realise at any point (let alone "even at this point") that a false assertion is true? Presumably, you'd agree that there doesn't "ought to be something" with a binaural recording? What about with a stereo recording which is not binaural but has been designed for headphone listening?
> 2. Exactly my point! Both natural and unnatural spatiality, were explored many years before stereo was a consumer product, let alone a mass product. Even extreme unnatural spatiality (Stockhausen for example) was created before stereo was a consumer product, so how could it be a marketing gimmick if you don't have a mass product to sell?
> 3. Well of course you do, because if you explained it as an adult then it wouldn't make any sense! It's like trying to explain that Santa Claus is real, try doing that to anyone other than a child and see what happens!
> 4. No, but it does automatically mean that your facts/assertions are wrong, duh! And of course, anyone is free to check if my facts are correct.
> ...


1. You find an assertion false until you realize it is true. Binaural or other recordings designed for headphones don't contain excessive spatiality, so I listen to those kinds of recordings without crossfeed.
2. Stockhausen hardly thought about headphones. I'm sure he intended his music for speakers. In fact, so did later recordings.
3. Your responses are getting lacklustre. Now we have Santa Claus involved. What next? Rudolph the Red-Nosed Reindeer?
4. I admit when I am wrong. Do you?
5. Yes, they did come a few years later. So what? SPL calibrations are not used in Finland. Not today, not 10 years ago and not when the digital formats arrived 25-30 years ago.
6. I don't take enough time to post. This is time consuming and I also argue about things such as American politics on other forums. I do post stupid things every now and then. Education doesn't mean freedom from mistakes. Everybody makes them. I am smart and educated enough to admit mine.
7. Sound formats here are irrelevant. Compressed or not, the volume knob is used to give different SPLs at different screenings; I believe day screenings are typically quieter than late screenings.
8. Being factually inaccurate about movie theatre sound has very little to do with headphone spatiality. I claim some expertise in the latter only.


----------



## bigshot (Mar 17, 2019)

I have a simple question... why do you keep on like this? It makes no sense. I don’t think you’re cut out for the internet. You should go out into the real world more. You’d realize why this kind of approach just doesn’t work.


----------



## gregorio (Mar 18, 2019)

mindbomb said:


> He's right about spatial distortion. This can be demonstrated by listening to a regular audio track on headphones and then listening to it again on speakers. People can note a big difference in the imaging, a result of the distortion of headphones.



I'm not denying of course that the imaging using headphones is quite different from the imaging using speakers or that most/all people can hear that difference. However, that fact does NOT "demonstrate" that the difference is a "result of the distortion of headphones", it doesn't demonstrate anything at all about what's causing the difference, ONLY that there is one. So, what has led you to believe that the difference is "a result of the distortion of headphones"? Apart from the fact that 71dB repeatedly says so (and that it might resonate with your intuition), what actual evidence do you have to support this assertion? If you don't have any, that's an excellent demonstration of why it's so important to refute his assertion, that's exactly how pretty much all audiophile myths get started! Someone notices/perceives some difference and makes up an explanation for it that seems believable because others also notice/perceive that difference, but that's a classic correlation/causation fallacy.

I, on the other hand, assert that: There is NO distortion of the spatial information when playing back using headphones, beyond some EQ/frequency response inaccuracies (which can affect our perception of spatial information) but this is largely irrelevant as crossfeeding does NOT even attempt to correct headphone freq response inaccuracies anyway. So in fact, I'm asserting the exact opposite, that the difference is a result of the LACK/ABSENCE of distortion when using headphones! An absence of the additional spatial information that speakers/room acoustics would add. What evidence is there to support my completely opposite assertion? Well, we can take a measurement of the original signal (the music recording), take a measurement of the output of that signal from headphones, take another measurement of that signal being reproduced by speakers in a room and then compare all three measurements. The measurement of the speaker reproduction will clearly evidence the additional spatial (and freq response) information caused by the acoustics of the room, while the headphone reproduction will be extremely similar to the measurement of the original signal (bar some freq response inaccuracies) and evidence virtually no difference ("distortion") of the spatial information in the original signal! My (opposite) assertion is therefore supported with OBJECTIVE evidence that has been repeated and confirmed countless times over numerous decades.

71dB's further assertion, that crossfeed cures/fixes this "spatial distortion" is therefore nonsense, because you obviously cannot cure/fix a distortion that doesn't exist! What we can do, theoretically, is add the "distortion" that speakers/room acoustics would add but that's entirely different to crossfeed, it requires both a personalised HRTF (head related transfer function) and obviously, an emulation of room acoustics, NEITHER of which is provided by crossfeed! While he (eventually) admitted this to be true, 71dB asserts that: Because crossfeed can somewhat emulate ONE ASPECT of the spatial information added by speakers/room acoustics, that crossfeed is therefore close enough (for everyone) to cure/fix all the issues of not having speakers/room acoustics and that purely by virtue of being closer it MUST, by definition, be "better" (for everyone). This too is just another logical fallacy though. For example, there's a pretty obvious difference between say a symphony orchestra and thrash metal band. ONE ASPECT of that difference is that a symphony orchestra has a tuba while a thrash metal band doesn't. So, if we add a tuba to a thrash metal band does it become a symphony orchestra? To me, it's obvious that it does not. Does it become closer to a symphony orchestra? Technically yes it does and it's possible therefore that some people might perceive it to be an orchestra but it would be FALSE to assert that because it's closer to an orchestra that everyone would perceive it to be an orchestra. I personally would not perceive it to be an orchestra, I'd perceive it as a thrash metal band with a tuba, and that alone disproves the assertion that everyone would perceive it as an orchestra! However, 71dB gets around that "disproof" by stating that I'm an idiot who doesn't realise what he's listening to but that's just another false statement invented to defend the previous false statements. 
In fact, it's because I DO realise what I'm listening to that I DON'T perceive it as an orchestra, and it's 71dB who doesn't realise what he's listening to! In addition to those who might perceive it to be an orchestra, there are those who wouldn't but might prefer the sound of a thrash metal band with an added tuba. That's their choice/preference and they are entitled to it; personally I'd prefer just to hear the band as the band intended it (without a tuba), even though I'm not a particular fan of thrash metal.



71 dB said:


> I'm sure your brain decodes excessive spatiality into spatial distortion, but you don't realize it is spatial distortion.



Please provide some reliable evidence of what my brain is decoding! Of course you can't do that, you don't have any idea what my brain is decoding, let alone have any actual evidence for it! YET AGAIN, you've just completely made up a false assertion to defend your agenda! What you want me to "realise" is something that you are imagining/perceiving, that I don't imagine/perceive and that objective measurements demonstrate does not exist. Whose ability to "realise" is therefore better, mine for realising there isn't any spatial distortion or yours for realising there is spatial distortion when in fact there isn't any?

The rest of your post is just another repeat of the same old fallacious nonsense built upon your personal preferences/perception rather than objective facts! Here's one example, which is particularly impressive because every single assertion is false(!):


71 dB said:


> 3b1. Crossfeed is more of a spatial distortion reducer than HRTF simulator. To me the benefits of crossfeed are:
> - Realistic "physical" bass instead of "fake" sounding bass.
> - Reduced listening fatique
> - Ordered solid soundstage instead of a fractured mess all over the place
> ...



3b1. How can crossfeed reduce something that isn't there to start with? There is no spatial distortion with headphones!
- There is no "real physical bass" in pretty much any modern music, it's all fake (artificially manufactured and/or very heavily manipulated) bass. The very last thing I would therefore want is something that tries to make the (deliberately/intentionally fake) bass sound like a real, physical bass. In fact doing so would seriously damage or even completely destroy many popular music genres!
- Using crossfeed does not reduce listening fatigue, in fact for me it increases it!
- It does not order the soundstage, it does the exact opposite, it confuses/messes up the soundstage by crossfeeding it.
- I far prefer a head-sized though somewhat flat (2D) soundstage to a miniature though somewhat flat (2D) soundstage.
- I lack that annoyance without crossfeed, the sounds do not "tickle my ears".
- Spatial distortion cannot mask "stuff" because there is no spatial distortion! And, I actually perceive more masking of musical detail with crossfeed because obviously some of the detail is being overlaid by other detail from the opposite channel!

G


----------



## 71 dB

Spatial distortion is of course _one of the many_ differences between headphones and speakers. It's a major one, but still one of many. You can say that what I hear and what my brain does isn't proof of anything, but then you can't use what you hear either, of course. Some people are spatial distortion deniers. They deny the fact that human hearing expects correlation between the sound at the left and right ears, especially at low frequencies.

That's all I say for now, because I go out into the real world as suggested.


----------



## bigshot

ENJOY THE SUNSHINE!


----------



## mindbomb

@gregorio

It is the headphone form factor that is responsible for the spatial distortion. Spatial hearing is the result of comparing what one ear hears relative to the other, looking for specific patterns that indicate a direction. In headphones, the left and right sides are pretty much isolated from each other, so this system breaks down. And the "in your head" type sound is the result.

It is true that headphones, when measured, are closer to the original audio, but you aren't meant to listen to the original audio. You are meant to listen to it on speakers, where the position of the speakers, and how each one reaches both ears differently, gives the sound depth and an out-of-the-head quality. There are recordings where you are supposed to listen on headphones, and the effect of using speakers is unwanted: they are called binaural recordings. However, most recordings don't fall into this category.
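The interaural comparison point can be put in rough numbers. With speakers, a hard-left sound still reaches the right ear via the crosstalk path, attenuated only a few dB at low frequencies by head shadowing; with headphones, the far ear gets essentially nothing. A toy calculation, where the -3 dB shadowing figure is my own assumption for illustration:

```python
import math

def ild_db(near_ear, far_ear):
    """Interaural level difference in dB between the two ear signals
    (a tiny epsilon guards against a completely silent far ear)."""
    eps = 1e-12
    return 20.0 * math.log10((near_ear + eps) / (far_ear + eps))

# Source hard-panned left, amplitude 1.0 at the near ear.
speaker_far_ear = 10.0 ** (-3.0 / 20.0)    # assumed ~3 dB head shadowing
speaker_ild = ild_db(1.0, speaker_far_ear)  # modest, natural-sized ILD
headphone_ild = ild_db(1.0, 0.0)            # extreme ILD: far ear is silent
```

Limiting the headphone ILD back down to speaker-like values is exactly what the crossfeed plugins in this thread set out to do.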


----------



## jgazal (Mar 18, 2019)

I am trying not to fan the flames, but I just can't resist.
Stereo speakers introduce multiple kinds of distortion in the first place. Rooms also introduce distortion.
So what is the reference for distortion? Comparing speakers to headphones? That seems fragile to me.
As gregorio wisely put it, stereo reproduction is an illusion (please @gregorio, correct me if I say something wrong).
And headphones are, no more and no less, a different kind of illusion.
Cheers!


----------



## gregorio (Mar 19, 2019)

mindbomb said:


> [1] It is the headphone form factor that is responsible for the spatial distortion.  .... It is true that headphones, when measured, would be closer to the original audio,
> [2] but you aren't meant to listen to the original audio.
> [2a] There are recordings where you are supposed to listen to them on headphones, and the effect of using speakers is unwanted - they are called binaural recordings.
> [2b] However, most recordings don't fall into this category.



1. You can't have it both ways! EITHER headphones create spatial distortion, in which case we can measure it in the headphone output OR, there is no distortion in that measurement, in which case the headphones are not creating distortion.

2. What evidence do you have to support that assertion?
2a. That's not quite correct. Binaural recordings are a specific sub-category of stereo recordings; a binaural recording is defined by 2 audio channels incorporating a HRTF. However, "there are recordings where you are supposed to listen to them on headphones" that do NOT include a HRTF and therefore are NOT binaural recordings. This too is essentially a large part of 71dB's fallacious argument. While it's true that commercial music mixes/masters are created using speakers (monitors) and are designed for playback on speakers, they are at the very least checked using headphones and if anything unwanted is noticed (by the artists or engineers), then the mix/master will be changed/altered. In fact, the purpose of "mastering" (the reason it exists in the first place) is to change/adjust the final mix, thereby creating a pre-master (usually just called a "master"), that plays back as intended on consumer equipment, rather than only in the studio where it was created. In the 1970s and earlier, extremely few consumers used headphones and therefore masters were typically (though not always) not adjusted for HP playback (and sometimes not even checked with HPs). However, that changed rather dramatically in the 1980s, due to the introduction of Sony's Walkman, which was so popular that it's credited for the fact that cassette tapes outsold vinyl for the first time (in 1983). Therefore:

2b. No, I dispute that! Sure, extremely few recordings fall into the category of "binaural recordings" but many/most do fall into the category of "you are supposed to listen to them on headphones" (or speakers) and therefore you ARE "meant to listen to the original audio"! So, how do we know which we are supposed to listen to with headphones? 71dB states that any recording with "unnatural" spatiality is not supposed to be played back with headphones and therefore requires crossfeed. However, that's a fallacy for two reasons: Firstly, pretty much all stereo recordings (with the possible/arguable exception of binaural recordings) employ unnatural spatiality, since even before stereo was available as a consumer format, REGARDLESS of whether they are "supposed to be listened to with headphones" (without crossfeed) and Secondly, crossfeed does NOT introduce the spatial distortion ("the effect") of using speakers anyway!

@jgazal no corrections necessary!

G


----------



## 71 dB

Before running back to the sunshine, quick remarks:



gregorio said:


> 1. You can't have it both ways! EITHER headphones create spatial distortion, in which case we can measure it in the headphone output OR, there is no distortion in that measurement, in which case the headphones are not creating distortion.
> 
> 2. What evidence do you have to support that assertion?
> 2a. That's not quite correct. Binaural recordings are a specific sub-category of stereo recordings; a binaural recording is defined by 2 audio channels incorporating a HRTF. However, "there are recordings where you are supposed to listen to them on headphones" that do NOT include a HRTF and therefore are NOT binaural recordings. This too is essentially a large part of 71dB's fallacious argument. While it's true that commercial music mixes/masters are created using speakers (monitors) and are designed for playback on speakers, they are at the very least checked using headphones and if anything unwanted is noticed (by the artists or engineers), then the mix/master will be changed/altered. In fact, the purpose of "mastering" (the reason it exists in the first place) is to change/adjust the final mix, thereby creating a pre-master (usually just called a "master"), that plays back as intended on consumer equipment, rather than only in the studio where it was created. In the 1970's and earlier, extremely few consumers used headphones and therefore masters were typically (though not always) not adjusted for HP playback (and sometimes not even checked with HPs). However, that changed rather dramatically in the 1980's, due to the introduction of Sony's Walkman, which was so popular that it's credited for the fact that cassette tapes out sold vinyl for the first time (in 1983). Therefore:
> ...



1. Spatial distortion happens in the brain due to excessive spatiality that overdrives spatial hearing. That's the ONE way we have it.
2. Common sense?
2a. Binaural recordings certainly are a tiny sub-genre. Then there are also "semi-binaural" recordings incorporating approximations of HRTF, such as recordings done with a Jecklin Disk or Schneider Disk, or with spatiality that is tailored not to cause spatial distortion, for example by limiting ILD to a few decibels at low frequencies. I wonder what fallacy of mine you are talking about here. Yes, recordings are "checked" with headphones, but does that "checking" include spatial distortion? I don't believe it always does. I don't believe the concept of spatial distortion is very well known even among sound engineers and mixers, although I think it has gotten better over the years (thanks to headphone listening becoming more and more popular) and newer recordings suffer less from spatial distortion than the older ones. Things may have changed dramatically in the 80's, but in no way was the issue of spatial distortion completely dealt with.
2b. I believe the culture/conventions of music production don't fully recognise the issue of spatial distortion, and that's why I bring (not make!) these things up. The fact that most people (consumers) don't know about spatial distortion and don't have the listening skills to separate real (natural) spatiality from excessive spatiality lets music producers get away with it. People like me who whine about excessive spatiality are a minority, a minority that uses crossfeed or other techniques such as HRTF convolution to address/fix the problem anyway.

Combining different elements of natural spatiality is OK (and a great way to express artistic intent!). The result is a _fabricated_ spatiality, but it makes sense to our spatial hearing, because there is no excessive spatiality involved. Our brain is able to decode the parts the fabrication was made of. Our hearing is used to tons of sounds with different kinds of natural spatiality at the same time, for example the sound of a spoon in your coffee cup 2 feet from your ears, people talking 20 feet away and a distant thunderstorm rumbling 5 miles away.
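The "limiting ILD to a few decibels at low frequencies" idea mentioned above can be checked on actual material by measuring the inter-channel level difference within a low band, which is what the ears receive as ILD over headphones. A rough FFT-based sketch; the band edges are my assumptions, not figures from the thread:

```python
import numpy as np

def band_level_diff_db(left, right, fs, f_lo=20.0, f_hi=700.0):
    """Level difference in dB between the two channels, restricted to
    a low frequency band, estimated from FFT magnitudes."""
    freqs = np.fft.rfftfreq(len(left), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    e_l = float(np.sum(np.abs(np.fft.rfft(left))[band] ** 2)) + 1e-24
    e_r = float(np.sum(np.abs(np.fft.rfft(right))[band] ** 2)) + 1e-24
    return 10.0 * float(np.log10(e_l / e_r))
```

A recording whose low band measures more than a few dB of channel difference presents, on headphones, an ILD that never occurs with speakers at those frequencies; that is the quantity a crossfeed reduces.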


----------



## mindbomb

@gregorio 
I'm saying that albums were mastered on speakers, where the location of the speaker drivers and the room acoustics provide additional spatial information, and it is assumed the end user will have some form of this as well. When you use typical headphones, this assumption is broken, and the imaging you then have is incorrect.

Binaural recordings demonstrate the inverse situation. It is assumed the end user will use headphones, and only on headphones is the imaging correct. With speakers, the additional spatial information is unnecessary and harmful in this case.

Looking at normal recordings and then binaural recordings, you get the whole spectrum of when speakers provide the correct imaging, and when headphones do it.


----------



## bigshot

71 dB said:


> Before running back to the sunshine, quick remarks.



NO. GO. Take a sailing ship to Tahiti and find a place to live. Sit on the beach in the sunshine until you can handle an internet forum. That may take several years.


----------



## bigshot (Mar 19, 2019)

mindbomb, I tried to get someone here to recommend some dimensional sounding binaural recordings of music a while back. No one could do it. They could only point to hair clippers and clapping hands. The one musical album they recommended and I bought sounded like any other album. It was arrayed from left to right through the center of my skull. I think binaural recording is one of those theories that just doesn't work in practice. It's a mental exercise that ultimately doesn't amount to anything.

Headphones present music as a line going through the center of your head. No space. No soundstage.
Stereo speakers put the sound as a flat plane 8 or 10 feet in front of you. It uses the space in the room to create stereo soundstage.
5.1 presents a natural soundstage in front with a horizontal plane extending from front to back. The room is very important and room acoustics can be simulated.
Atmos presents a box of sound with up/down added to 5.1. The room acoustic is very important and more sophisticated room acoustics can be simulated.

It's a progression from one dimensional to three dimensional.


----------



## gregorio

mindbomb said:


> I'm saying that albums were mastered on speakers, where the location of the speaker driver and room acoustics is providing additional spatial information, and it is assumed the end user will have some form of this as well.



Which albums are mixed/mastered under the assumption that no end users will be using headphones? 



71 dB said:


> 1. Spatial distortion happen in brain due to excessive spatiality that overdrives spatial hearing. That's the ONE way we have it.
> 2. Common sense?



1. Spatial distortion may happen in YOUR brain due to what YOU perceive/deem to be "excessive" but it does not happen in my brain and I do not deem it to be "excessive". Therefore, that's NOT "the one way we have it" that's the one way YOU have it!

2. Whose common sense, yours or mine? Of course, that's why we have science in the first place and why it's evidence based, so we don't have to rely on what someone decides is common sense. So I ask again, where's the evidence?

The rest of your post is just the same repeat yet again, and as it's already been refuted more than once, I can't be bothered to refute it again. Why do you think that just repeating the same nonsense for over a year will eventually make it true, when it's already been demonstrated to be false?

G


----------



## jgazal

bigshot said:


> mindbomb, I tried to get someone here to recommend some dimensional sounding binaural recordings of music a while back. No one could do it. They could only point to hair clippers and clapping hands. The one musical album they recommended and I bought sounded like any other album. It was arrayed from left to right through the center of my skull. I think binaural recording is one of those theories that just doesn't work in practice. It's a mental exercise that ultimately doesn't amount to anything.
> 
> Headphones present music as a line going through the center of your head. No space. No soundstage.
> Stereo speakers put the sound as a flat plane 8 or 10 feet in front of you. It uses the space in the room to create stereo soundstage.
> ...



Have you tried with Dr. Choueiri crosstalk cancellation or with an acoustical barrier between the speakers?

Is 5.1 or Atmos really three dimensional?


----------



## ironmine

bigshot said:


> Headphones present music as a line going through the center of your head. No space. No soundstage.



How do you know what happens in my head? You can only speak about your head. When I listen to music with crossfeed, I have everything: space, soundstage, layering, etc. The brain is very adaptable. You just need to give it some time to get used to it.

Listening to music that is not processed with a crossfeed algorithm sounds unnatural and annoying, especially the bass. I am sure you can get used to it (as I said, the brain is very adaptable), but why reject this wonderful tool (crossfeed processing) when it's so easily available and helps us overcome one of the main negative attributes of headphones (namely, insufficient crossfeed)?

This thread is completely hi-jacked by crossfeed-haters.  Let's come back to basics:

http://www.meier-audio.homepage.t-online.de/crossfeed.htm


----------



## bigshot

jgazal said:


> Have you tried with Dr. Choueiri crosstalk cancellation or with an acoustical barrier between the speakers?
> 
> Is 5.1 or Atmos really three dimensional?



Why would you want to separate the two channels? Is there a reason for that? My goal has always been to balance speaker placement, room acoustics and levels to create as perfect a phantom center as possible between each of the five speakers and the other four. That meshing is what creates the sound field, and makes multichannel more than just sound coming from all directions.

My system is 5.1, but I'm told that Atmos allows you to place a sound object anywhere within left/right, front/back and top/bottom parameters. That would be true 3D. 5.1 is more like a 2D plane extending left/right and front/back. I have a jury-rigged placement that sort of improves on that by raising the level of the center speaker and rear speakers a little higher. That kind of creates a raised triangle overhead that works really well with movies, because dialogue is usually up front and ambiences in the rear. It fills my 10 foot projection screen because the center is behind the screen. It works quite well with multichannel music because the vocals are often in the center, and the rears are either discrete parts of the music, or pure ambience. It works well in my room at least.


----------



## bigshot

ironmine said:


> How do you know what happens in my head?



Are you setting me up for a joke?

I don't hate crosstalk. It can take the curse off having the sound being shoved right up against your ears. I'm sure it improves the sound of headphones. But it doesn't approximate speakers at all. The whole point to a speaker system is how the sound interacts with the space in the room. Soundstage is dependent on the sound being physically in front of you. You don't get that at all with headphones.


----------



## ironmine

bigshot said:


> Are you setting me up for a joke?
> 
> I don't hate crosstalk. It can take the curse off having the sound being shoved right up against your ears. I'm sure it improves the sound of headphones. But it doesn't approximate speakers at all. The whole point to a speaker system is how the sound interacts with the space in the room. Soundstage is dependent on the sound being physically in front of you. You don't get that at all with headphones.



Yes, it does approximate speakers to a certain degree. Many crossfeed plugins have these additional features: simulation of speakers and simulation of room acoustics. You can activate it on or off depending on your preferences. I usually prefer pure crossfeed, I don't like to simulate either speakers or room.

This is an example:

*(attachment: screenshot of the plugin's speaker and room options)*

As you see, you can choose speaker presets or build your own speakers, and choose room presets or design your own room. Read the manual.


----------



## ironmine

bigshot said:


> I don't hate crosstalk.



I also did not have any unfortunate or violent accidents in my childhood involving crossfeed that would negatively color my perception of crossfeed for the rest of my life. I guess it makes us both lucky, unlike other folks...


----------



## bigshot (Mar 19, 2019)

Ironmine, do you have a good speaker setup? If so, you're probably familiar with how it sounds. Can you set that plugin to make it sound just like your listening room? How does this deal with directionality? I have a 5.1 system, and a big part of sound location involves the direction my head is facing. I turn my head while I listen and that helps me locate a sound in space. I can clearly hear the difference in location between my center speaker and my rears. Even with 2 channel stereo when I listen to something with very controlled soundstage, like Culshaw's Ring, I find that turning my head makes a big difference to hearing characters moving around on the stage.  I don't know how you would accomplish that without a realizer... and I suspect that my eyes turning along with my ears might have some impact on how I imagine where those sounds are located in space as well. Not to mention the kinesthetic chest thump of loud fat bass- headphones can't do that. It just seems like a lot of the sound of speakers isn't there, even with the best processors.

I love my 5.1 speaker system. If I could find a way to reproduce that in headphones, it would be great. I've heard Atmos headphone mixes and binaural recordings. But none of them are able to reproduce the pinpoint sound location of the soundstage of my speaker system. It can sound kind of the same in the response, but not in directionality. The sound is always right there inside or just around my head. It's never ten or fifteen feet in front of me or behind me.

Maybe it's the shape of my ear canals. I don't know... but headphones always sound in my head and speakers always sound a distance away from me.


----------



## jgazal

Each of us advocates - okay, some more emphatically - our own preferences. 
@bigshot, I wouldn't say 5.1 is three dimensional. 
The way I see it, codecs with height channels or even Ambisonics (which relies on spherical harmonics) do not have channels enough to render high pitch objects in any azimuth or elevation. 
Perhaps we could call such codecs "surround (illusion)", similar to "stereo illusion", instead of 3D?
What I do agree with is that, without DSP, externalization is very poor with headphones (even with binaural recordings, most of us are very susceptible to sound field collapse with head movements). In that respect, among what is commercially available, codecs with height speakers come first, 5.1 second, stereo over loudspeakers third. 
But as you already said in the post above, with a little bit of DSP magic, headphones can reproduce a sound field equivalent to any of those three.
I still don't know if binaural with xtc can render 3D better than what you called "obscure" high order Ambisonics - HOA. Those are my candidates to be called one day, perhaps, three dimensional. 
Anyway, as @gregorio already explained, those may not be suitable for music.


----------



## Davesrose (Mar 19, 2019)

bigshot said:


> My system is 5.1, but I'm told that Atmos allows you to place a sound object anywhere within left right, front back and top bottom parameters. That would be true 3D. 5.1 is more like a 2D plane extending left right and front back. I have a jury rigged placement that sort of improves on that by raising the level of the center speaker and rear speakers a little higher. That kind of creates a raised triangle overhead that works really well with movies, because dialogue is usually up front and ambiences in the rear. It fills my 10 foot projection screen because the center is behind the screen. It works quite well with multichannel music because the vocals are often in the center, and the rears are either discrete parts of the music, or pure ambience. It works well in my room at least.



I've got a receiver that handles all the popular 3D surround formats: Atmos, DTS:X, Auro-3D.  I like Auro-3D for upmixing 5.1 Dolby Digital.  Some of my hi-res Blu-ray concerts default to their source DTS-MA channels (and I think it's OK for them to have a 2D surround plane).  I have some UHD discs that are DTS:X, but Atmos has become the most popular format for streaming and overall UHD content.  What is interesting is that the *ideal* speaker configurations are slightly different for every 3D format.  Atmos was initially designed for either sound reflected off the ceiling or ceiling-mounted speakers.  DTS:X and Auro-3D can use height speakers (my receiver will also mix Atmos for overhead effects pretty effectively).  Auro also recommends having one ceiling speaker directly above you ("voice of god") for what they show as a dome of sound (your 2D surrounds, then a layer of height speakers, then the top channel for the top of the dome).  I got height speakers because it would be very difficult to install ceiling speakers in my room (which has another floor on top of it).  But I find movies are pretty immersive and I can hear overhead effects with the various 3D modes. It was worth it to upgrade from 7.1 to 7.1.4 (especially since more content is streaming in Atmos).



bigshot said:


> I love my 5.1 speaker system. If I could find a way to reproduce that in headphones, it would be great. I've heard Atmos headphone mixes and binaural recordings. But none of them are able to reproduce the pinpoint sound location of the soundstage of my speaker system. It can sound kind of the same in the response, but not in directionality. The sound is always right there inside or just around my head. It's never ten or fifteen feet in front of me or behind me.



I've mentioned before that the most realistic headphone surround I've heard is an out of production Sennheiser Dolby Pro Logic processor.  I could hear movie effects sweep from front sides to all the way in back.  I didn't hear a clear center front dialogue (the way a speaker theater setup is), nor do I think that headphone processing can address the height effects new speaker systems have.


----------



## bigshot (Mar 20, 2019)

jgazal said:


> @bigshot, I wouldn't say 5.1 is three dimensional.



I agree. It's like 2 1/2 dimensions at best. It's a flat plane from front to back and left to right. Atmos is true 3D. Three dimensions requires up/down too. I suppose stereo is two dimensions... a flat plane 10 feet in front of the listener. Headphones are one dimensional- a line straight through the ears. DSPs can improve it, but they don't add true dimensionality. They just take the curse off the one dimensionality.

In any case, anyone who is familiar with multichannel mixes for music knows how important it can be, especially in the last two or three years. It not only reproduces the music, it places it within physical space and it can create ambiences that open up the normal ambience of the room. Like any other kind of recording, that provides opportunities to push beyond the limits of just capturing a performance and actually makes it possible to craft a dimensional performance that doesn't mimic any real acoustic. I'd be happy to suggest recordings that take full advantage of this if anyone is interested in investigating it.



Davesrose said:


> What is interesting is that the *ideal* speaker configurations are slightly different for every 3D format.



4.0 is an entirely different approach than modern multichannel. Quad aims for sound coming equally from the four corners of the room, 5.1 aims at creating a coherent front soundstage with separate rear channels. However the newest Atmos recordings when down mixed to 5.1 have been attempting to create a phantom center between front and back- pretty startling. It requires much more precision in speaker placement and calibration, but when it works, it's revelatory, because it creates a relationship between front and back that equals left and right.



Davesrose said:


> Atmos was initially designed for either sound reflected off the ceiling or ceiling-mounted speakers.  DTS:X and Auro-3D can use height speakers (my receiver will also mix Atmos for overhead effects pretty effectively).  Auro also recommends having one ceiling speaker directly above you ("voice of god") for what they show as a dome of sound (your 2D surrounds, then a layer of height speakers, then the top channel for the top of the dome).  I got height speakers because it would be very difficult to install ceiling speakers in my room (which has another floor on top of it).  But I find movies are pretty immersive and I can hear overhead effects with the various 3D modes. It was worth it to upgrade from 7.1 to 7.1.4 (especially since more content is streaming in Atmos).



Yeah, that's the problem. The vertical dimension requires a particular room setup that most living rooms can't handle. I have a peaked roof with beams, and putting in Atmos would be very difficult to manage. The more channels you add, the better it gets, but the harder it is to implement. You have to strike a compromise for what is possible until you have the opportunity to create a totally optimized space.



Davesrose said:


> I've mentioned before that the most realistic headphone surround I've heard is an out of production Sennheiser Dolby Pro Logic processor.  I could hear movie effects sweep from front sides to all the way in back.  I didn't hear a clear center front dialogue (the way a speaker theater setup is), nor do I think that headphone processing can address the height effects new speaker systems have.



I don't know for sure, but I think a lot of headphone surround depends on how your particular noggin shapes equate with the average. For some people, it works; but for others it never works. I have struggled to try to get binaural to work for me, but it never locks in. At best, it flickers from three inches in front of my head to three inches behind it. Other people may have no trouble. But everyone can hear the benefits of a well tuned multichannel speaker setup.


----------



## gregorio (Mar 20, 2019)

ironmine said:


> [1] How do you know what happens in my head? You can only speak about your head. ...
> [2] Listening to music that is not processed with a crossfeed algorithm sounds unnatural and annoying, especially bass.
> [3] Yes, it does approximate speakers to a certain degree. Many crossfeed plugins have these additional features: ... This is an example ...
> [4] This thread is completely hi-jacked by crossfeed-haters.



1. Agreed.

2. No it doesn't! Don't you really mean that to YOU it "sounds unnatural and annoying, especially bass"? If you do not explicitly specify that is how it sounds to YOU PERSONALLY, then you are making a blanket statement that covers everyone (including me). And the obvious response to that assertion is: "How do you know what happens in my head? You can only speak about your head.".  You are therefore effectively contradicting yourself! ... Just to be clear, despite the fact I've stated it a number of times: Listening to music on headphones without crossfeed does not sound natural to me but listening with crossfeed does not sound natural to me either. Furthermore, listening with speakers/monitors does not sound natural to me either and the reason none of the playback scenarios sounds natural to me is because none of it is natural! Stereo itself is an illusion (that isn't natural) and the music recordings themselves are not natural. Almost without exception, music recordings are made by layering multiple different aural locations and/or perspectives, which would only be natural if you had multiple different ears which were simultaneously in different locations.

3. That is NOT an example of a crossfeed plugin! Crossfeed is the act of taking the signal from one channel and "feeding" it (or part of it) to the other channel. The example you gave would only be a crossfeed plugin if you turned off all those "additional features", IE. Turn off the HRTF and "Room Designer" features. Your example is INCORRECT, it is NOT an example of a crossfeed plugin, it is an example of a "Binaural Room Simulator" plugin!
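In code, that minimal definition of crossfeed (and nothing more) is a sketch like the one below: each output channel is the input channel plus a delayed, lowpass-filtered, attenuated copy of the opposite channel. The 12 dB attenuation and sub-millisecond delay echo figures quoted earlier in the thread, but here they are illustrative assumptions, not taken from any specific plugin (bs2b-style implementations achieve the delay with a shelving filter rather than the plain sample delay used here):

```python
import numpy as np

def crossfeed(left, right, sr=44100, delay_ms=0.3, atten_db=-12.0, cutoff_hz=700.0):
    """Minimal crossfeed: feed a delayed, lowpass-filtered, attenuated
    copy of the opposite channel into each channel. Parameter values
    are illustrative assumptions, not from any particular plugin."""
    delay = int(sr * delay_ms / 1000.0)      # interaural delay in samples
    gain = 10.0 ** (atten_db / 20.0)         # dB -> linear attenuation

    # One-pole lowpass: the head mostly shadows high frequencies,
    # so the bled signal is darker than the direct one.
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = (1.0 - a) * s + a * acc
            y[i] = acc
        return y

    def bleed(x):
        delayed = np.concatenate([np.zeros(delay), x[:len(x) - delay]])
        return gain * lowpass(delayed)

    return left + bleed(right), right + bleed(left)

# A tone panned hard left bleeds, softened and delayed, into the right ear
sr = 44100
t = np.arange(sr // 10) / sr
left_in = np.sin(2 * np.pi * 440.0 * t)
right_in = np.zeros_like(left_in)
out_l, out_r = crossfeed(left_in, right_in, sr)
```

Anything beyond this (HRTF filtering, room reflections) is the "binaural room simulator" territory described above, not crossfeed proper.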

4. Firstly, I wouldn't say I'm a crossfeed hater. It doesn't work for me personally, I personally don't like it and I choose not to use it but I don't hate it or state that everyone else should/must hate it too. Secondly: So, instead of the thread being "hi-jacked by crossfeed haters", you want to hi-jack it as a Binaural Room Simulator fan boy? The thread title is NOT "to-binaural room simulate-or-not ..."!


bigshot said:


> My system is 5.1, but I'm told that Atmos allows you to place a sound object anywhere within left right, front back and top bottom parameters. That would be true 3D.



That would be true 3D but Atmos does NOT allow you to do that and therefore is NOT true 3D. Atmos allows you to place a sound anywhere in the horizontal plane (left/right, front/back) and can theoretically do that more precisely than 5.1 (although only in a cinema), as it is less reliant on the stereophonic illusion (phantom positioning). Atmos additionally provides ceiling speakers, allowing for height information BUT, with two limitations:

Firstly, the soundfield is effectively between the existing horizontal plane of (5.1) speakers and the higher vertical plane of the ceiling speakers. Using your terminology, Atmos allows you to place a sound object anywhere within left right, front back and top middle parameters! NOT within "top bottom parameters", a sound cannot be positioned below the horizontal plane of the (existing 5.1) speakers. In other words, Atmos theoretically provides a hemispherical soundfield, rather than the spherical soundfield that would be required for "true 3D". I know Atmos (and other similar formats) are marketed as 3D sound formats but that's just marketing, a bit like unlimited data plans that are limited!

Secondly, positioning a sound anywhere within that hemispherical soundfield is arguably only theoretically possible rather than possible in practice. This is because there are many positions within that hemisphere which would effectively rely on a double phantom position; a phantom position between the horizontal plane of speakers, PLUS a phantom position between that horizontal phantom position and a vertical phantom position. In practice, that would be a highly unstable position: even if all the speakers were perfectly phase aligned at a particular listening position, a stereophonic illusion on top of a stereophonic illusion could change (or be entirely destroyed) by a relatively small change in head position relative to any of the speakers. On the other side of the coin, would we actually perceive a change or destruction of this unstable illusion? Our localisation ability in the vertical plane is many times weaker than in the horizontal plane anyway. My educated/experienced guess would be that it would depend on several factors, probably the most important of which is how/if that aural illusion is supported by our vision (IE. Is there a visual cue in the film which indicates the position of the sound and if so, how obvious is it?). Another factor would be our position relative to ALL the speakers, and another would be an individual's perception. Due to the fact that this localisation may not be perceived as intended by significant portions of the audience, and the potential phase issues it could create, these positions within the hemispherical soundfield are typically avoided when creating a mix (that is, typically avoided for static positions; "passbys", sounds passing through those positions, are fine).
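The phantom position being described is nothing more than a matched level ratio between a speaker pair, which is why stacking two of them is so fragile: there is nothing physical at the phantom spot. A standard constant-power pan law makes this concrete (a textbook sine/cosine law, not any particular mixer's implementation):

```python
import math

def constant_power_pan(theta):
    """Constant-power (sine/cosine) pan law between one speaker pair.
    theta in [0, 1]: 0 = hard left, 1 = hard right.
    Equal gains at theta = 0.5 produce the 'phantom center'."""
    gain_l = math.cos(theta * math.pi / 2.0)
    gain_r = math.sin(theta * math.pi / 2.0)
    return gain_l, gain_r

gl, gr = constant_power_pan(0.5)   # phantom center: both speakers at ~0.707
```

A "double phantom" in the hemisphere would stack one of these ratios between a horizontal pair on top of another ratio toward the height speakers; each layer depends on the listener's head sitting where the level and timing cues line up, so the combined illusion collapses with small head movements.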

G


----------



## Davesrose (Mar 20, 2019)

bigshot said:


> 4.0 is an entirely different approach than modern multichannel. Quad aims for sound coming equally from the four corners of the room, 5.1 aims at creating a coherent front soundstage with separate rear channels. However the newest Atmos recordings when down mixed to 5.1 have been attempting to create a phantom center between front and back- pretty startling. It requires much more precision in speaker placement and calibration, but when it works, it's revelatory, because it creates a relationship between front and back that equals left and right.



I never referred to quad sound.  I referred to Blu-ray concerts, which often have an option for stereo or 96 kHz DTS-MA 5.1.  I'll keep it the default 5.1 (2D surround plane), as who needs phantom height in a concert setting?  I haven't tried an Atmos receiver with 5.1 or 7.1.  It is interesting that DTS and Auro have surround schemes that try to virtually add height speakers.  I've tried Auro's mode: it doesn't sound nearly as engaging as having actual height speakers.  Having physical speakers will always be ideal.  Those of us who don't have optimal home theater rooms just have to find workarounds.


----------



## bigshot (Mar 20, 2019)

Yeah, the whole thing about speakers is that it is real physical sound in real physical space. I don't think that can really be synthesized. I've jury rigged my system to add height by raising the center channel above the level of the sides, and raising the level of the rears. This helps pull the soundstage up to fill the screen, which is about 9 feet tall at the top. Without that, the sound seems like it is lower than the action on the screen. I would add height speakers, but they would look ugly in my particular room, and they would probably fire directly into a solid wood beam.

Based on my experience with 5.1, I'm guessing that the benefit of Atmos isn't just making airplanes fly over your head in Top Gun, but it can help create better sound location in the middle of the sound field. It's taken me a lot of work to get my 5.1 system to mesh front to back with a phantom center in the middle of the room. Sound can cross the room diagonally from front left to rear right without a dip in the center. But if I had a half dozen overhead channels, it might be possible for sound to do arcs through the middle of the room, or to locate something in the sound field other than along an x across it. (I'm not sure if I'm describing that clearly...) Another benefit of having overhead channels is creating synthetic room ambiences that are much bigger than your actual room. An organ concert in a cathedral for example, or open air in a field like the opening of the Sound of Music. I can see it allowing much more sophisticated and varied ambiences.

I keep thinking about how I could incorporate Atmos, hiding speakers behind beams... but everything about it would be a pain and I wouldn't have a lot of flexibility to experiment with speaker placement. I still may do it someday. I think it would be a definite notch up in quality of sound field. The reason I mentioned quad is because quad usually doesn't even attempt creating a sound field. It's just individual channels coming from the four corners of the room. A lot of 5.1 is like that too, but the newer immersive mixes are more exciting to me.



gregorio said:


> can theoretically do that more precisely than 5.1 (although only in a cinema), as it is less reliant on the stereophonic illusion (phantom positioning).



Yes, the more channels you have, the less dependent you are on phantom centers. It's like the center channel in 5.1... it replaces the need for a phantom center up front. If it is used as part of the front image, rather than for a separate sound like vocals, you can create a wider soundstage, because you can increase the width between the mains and create phantoms between the center and the left and right. You're right that Atmos's height is basically ear level up, not ear level down. And depending on how many Atmos speakers you are running, there would be dead spots. But it would be light years better than having no vertical dimension at all. Maybe it would be best to call it 2.8 D?

I have an old Cinemascope movie where the lead actor comes out at the beginning and walks in from stage right and crosses to exit stage left talking the whole time. They mixed his dialogue to hand off to the speakers behind the screen to give the voice a pinpoint location. It creates a very interesting effect. And I have recent Atmos music that is mixed for sound to fly diagonally across the room. Even in 5.1 it can work. The Kraftwerk Catalogue blu-ray is in both 3D picture and 3D Atmos sound. I don't do 3D but I've heard it is really astounding on a good system.


----------



## Davesrose

bigshot said:


> Yeah, the whole thing about speakers is that it is real physical sound in real physical space. I don't think that can really be synthesized. I've jury rigged my system to add height by raising the center channel above the level of the sides, and raising the level of the rears. This helps pull the soundstage up to fill the screen, which is about 9 feet tall at the top. Without that, the sound seems like it is lower than the action on the screen. I would add height speakers, but they would look ugly in my particular room, and they would probably fire directly into a solid wood beam.
> 
> Based on my experience with 5.1, I'm guessing that the benefit of Atmos isn't just making airplanes fly over your head in Top Gun, but it can help create better sound location in the middle of the sound field. It's taken me a lot of work to get my 5.1 system to mesh front to back with a phantom center in the middle of the room. Sound can cross the room diagonally from front left to rear right without a dip in the center. But if I had a half dozen overhead channels, it might be possible for sound to do arcs through the middle of the room, or to locate something in the sound field other than along an x across it. (I'm not sure if I'm describing that clearly...) Another benefit of having overhead channels is creating synthetic room ambiences that are much bigger than your actual room. An organ concert in a cathedral for example, or open air in a field like the opening of the Sound of Music. I can see it allowing much more sophisticated and varied ambiences.
> 
> ...



When I upgraded my surround system to HDMI lossless, the way my room was set up, 7.1 seemed like the better setup.  I placed the rear speakers a bit higher than ear level, and the surrounds are just to the sides (I thought it better because my viewing area is an open main floor with vaulted ceilings, and one of the surrounds is mounted in a bar area against the kitchen).  All the speakers were the same series (tower mains, center, and bipole surrounds).  For 5.1 or 7.1 tracks, surround effects were good and seamless, with aircraft and arrows having a sense of coming from the screen to the sides and up to the rear when they hit the back surrounds.

I started collecting UHD discs before I upgraded to UHD (knowing I should futureproof).  One aspect I liked about home movies appearing with Atmos was that more tracks were 7.1.  So I've directly compared Bladerunner 2049 and Last Jedi in 7.1 lossless (often downmixed from Atmos now) vs now 7.1.4.  Certainly scenes with helicopters and remote drones flying overhead are the most convincing aspect of 3D surround.  I think my back heights are not quite as effective as the front heights, since my back surrounds are above ear level.  However, my height speakers are high up and do point down towards the top of my head, so the combination of front and rear heights makes more of a sense of overhead effects.

Re: sound below you... it's true that "3D surround" is intended as a dome (that, depending on speaker config, can extend from slightly below ear level to a complete dome above).  But it's interesting that in the opening scene of Blade Runner (the Final Cut has been remixed to Atmos), I've heard a spinner seem to come from somewhere above center to bottom left.  It might be partly the sound mix sharply panning from height to the 2D level, and partly the perception of seeing the spinner go off-screen at the bottom.

When it comes to figuring out height speakers, I can give good marks to the height speakers I got.  They are SVS Elevations.  They're slanted and have the woofer driver in the larger portion of the loudspeaker (which seems like a better design to me: for crossover and focusing tweeter downwards).  The mounting bracket allows the speaker to be mounted in any orientation.  My 7.1 speakers are a different brand that's pretty much gone now...but they all seem to match well (even if I have room correction off on my receiver).


----------



## bigshot

Your room sounds quite similar to mine.


----------



## jgazal (Mar 20, 2019)

bigshot said:


> Yeah, the whole thing about speakers is that it is real physical sound in real physical space. I don't think that can really be synthesized. (...).



If the listener's HRTF is calculated from morphological data and the content has objects, then it can be synthesized for headphone playback, except for the tactile chest impact of bass.

HRTF can be calculated from morphological data and players can deal with object-based content.

Both are compute-intensive and currently economically unviable.

That can change with more efficient algorithms or cheaper DSP. Who knows what the future holds?
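A sketch of what "synthesized for headphone playback" means here: binaural rendering convolves a mono source with the left/right head-related impulse response (HRIR) pair for the desired direction. The toy HRIRs below are made-up placeholders standing in for measured or morphologically computed data:

```python
import numpy as np

def binaural_render(mono, hrir_l, hrir_r):
    """Binaural synthesis: convolve a mono source with the pair of
    head-related impulse responses (HRIRs) for one direction."""
    return np.convolve(mono, hrir_l), np.convolve(mono, hrir_r)

# Toy HRIRs for a source off to the right: the left ear hears it
# later (~0.3 ms at 44.1 kHz) and quieter than the right ear.
# Zero-padding keeps both filters, and thus both outputs, the same length.
hrir_r = np.concatenate([[1.0, 0.3], np.zeros(13)])
hrir_l = np.concatenate([np.zeros(13), [0.4, 0.2]])

rng = np.random.default_rng(0)
src = rng.standard_normal(4410)                 # 0.1 s of noise
ear_l, ear_r = binaural_render(src, hrir_l, hrir_r)
```

A real system (like the realizer mentioned earlier in the thread) also tracks head rotation and re-selects HRIRs on the fly, which is a large part of what keeps the image outside the head; this static sketch omits that entirely.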


----------



## ironmine (Mar 21, 2019)

bigshot said:


> Ironmine, do you have a good speaker setup? If so, you're probably familiar with how it sounds. Can you set that plugin to make it sound just like your listening room? How does this deal with directionality? I have a 5.1 system, and a big part of sound location involves the direction my head is facing. I turn my head while I listen and that helps me locate a sound in space. I can clearly hear the difference in location between my center speaker and my rears. Even with 2 channel stereo when I listen to something with very controlled soundstage, like Culshaw's Ring, I find that turning my head makes a big difference to hearing characters moving around on the stage.  I don't know how you would accomplish that without a realizer... and I suspect that my eyes turning along with my ears might have some impact on how I imagine where those sounds are located in space as well. Not to mention the kinesthetic chest thump of loud fat bass- headphones can't do that. It just seems like a lot of the sound of speakers isn't there, even with the best processors.
> 
> I love my 5.1 speaker system. If I could find a way to reproduce that in headphones, it would be great. I've heard Atmos headphone mixes and binaural recordings. But none of them are able to reproduce the pinpoint sound location of the soundstage of my speaker system. It can sound kind of the same in the response, but not in directionality. The sound is always right there inside or just around my head. It's never ten or fifteen feet in front of me or behind me.
> 
> Maybe it's the shape of my ear canals. I don't know... but headphones always sound in my head and speakers always sound a distance away from me.



I have a relatively good speaker setup, it's a stereo system with a subwoofer, see my signature. The room (3 m x 5.8 m x 2.5 m) is not very big and the speakers stand along a longer wall, but I have lots of self-made acoustic panels, bass traps, FRZ-panels, SBIR-panels, acoustic cloud on the ceiling and I employ DRC. So, I have a very good stage, a hauntingly realistic phantom center and lack of resonances in the room.



gregorio said:


> Furthermore, listening with speakers/monitors does not sound natural to me either and the reason none of the playback scenarios sounds natural to me is because none of it is natural!



All three of these "unnatural" variants (1 - headphones without crossfeed, 2 - headphones with crossfeed, 3 - speakers) can be ranked in terms of their "unnaturalness" (as defined by moving further and further away from the reference: the spatially perfect sound of live instruments in real life):

0) Live sounds in nature (REFERENCE).
1) Speakers
2) Headphones with crossfeed
3) Headphones without crossfeed

As we move from the reference to variants 1, 2 and then 3, more and more spatiality is lost and it becomes more and more annoying.

Now, if you are not a spatiality freak but instead a detail freak, your hierarchy of preferences may be different, and it may depend on the quality of your headphones, speakers and crossfeed processing.

But you cannot argue with the ranking above from the point of spatiality.


----------



## mindbomb (Mar 21, 2019)

For stereo music, a new headphone form factor could give the benefits of a crossfeed. Or older designs like the sennheiser surrounder, possibly the akg k1000, coming back in more accessible variants.


----------



## castleofargh

jgazal said:


> If the listener's HRTF is calculated from morphological data and the content has objects, then it can be synthesized for headphone playback, except for the tactile chest impact of bass.
> 
> HRTF can be calculated from morphological data and players can deal with object-based content.
> 
> ...


I have honestly no clue how audibly significant the non-linear behaviors can be, but they will be missing from current implementations. 

signed: nitpicker2000


----------



## bigshot

jgazal said:


> Who knows what the future holds?



Yes. I hope virtual reality and 3D improve and become more convenient too.


----------



## jgazal (Mar 21, 2019)

castleofargh said:


> I have honestly no clue how audibly significant the non-linear behaviors can be, but they will be missing from current implementations.
> 
> signed: nitpicker2000



You are right. I have no clue either.
I made that assumption based on binaural synthesis.
But I have never compared a real sound field to a simulated one made by, for instance, the Bacch-DSP.
Perhaps @bigshot is also right in his feeling, and there will always be differences no matter how precise the HRTF or how much computing power is available. And I also don't know the threshold of audibility of such differences...
I confess it is more a wish than a true assertion.


----------



## gregorio (Mar 21, 2019)

bigshot said:


> [1] Based on my experience with 5.1, I'm guessing that the benefit of Atmos isn't just making airplanes fly over your head in Top Gun, but it can help create better sound location in the middle of the sound field. It's taken me a lot of work to get my 5.1 system to mesh front to back with a phantom center in the middle of the room.
> [2] Another benefit of having overhead channels is creating synthetic room ambiences that are much bigger than your actual room. An organ concert in a cathedral for example, or open air in a field like the opening of the Sound of Music.
> [2a] Yes, the more channels you have, the less dependent you are on phantom centers.
> [3] It's like the center channel in 5.1... it replaces the need for a phantom center up front,. If it is used to be a part of the front image, rather than a separate sound like vocals, you can create a wider soundstage, because you can increase the width between the mains and [3a] create phantoms between the center and the left and right.



1. That depends on what you mean by "middle" of the soundfield? If you mean "middle" of the horizontal soundfield (rather than middle of the horizontal and vertical soundfield, within the hemisphere) then no, Atmos makes no difference to the localisation.

2. Again, not really. Of course, we've been creating synthetic room ambience (acoustics) much bigger than your actual room for many decades. In the case of say "an organ concert in a cathedral" for example then we can have some synthetic height information (room acoustics) but it's both a technological challenge and only a very vague approximation. In the case of say "open air in a field" then it would make no difference at all because an open field obviously does not have a ceiling, there is no boundary to reflect sound and therefore there is no acoustic information coming from above to emulate/synthesise.
2a. That's true but the flaw in your reasoning is that Atmos does not provide more channels! Until Atmos, we all thought in terms of audio channels and routing those channels to physical outputs (speakers), but that's not how Atmos works. In effect, Atmos is a 7.1 channel system, with the addition of up to 128 "audio objects" but these audio objects are NOT audio channels and there is NO direct relationship between audio objects and either audio channels or physical outputs. Indeed, despite having 128 audio objects, the maximum number of physical outputs (speakers) an Atmos system can address is 64.

Let's for a moment think of Atmos as if it were a 64 channel/speaker system and therefore we could use half of our available audio objects to represent each of those 64 channels/speakers. Straight away we run into an issue because technology cannot synthesise 64 channels of acoustic information, the maximum currently possible is (I believe) 12, which includes the LFE channel, and it's also not practical to accurately set up and position 64 separate room mics (at say an organ concert in a cathedral). Even if this were possible, all the channels representing all the (different) height information would effectively be mixed down to just the two height channels/speakers in a consumer Atmos setup. It would be far easier to actually show you how Atmos works in an Atmos equipped dubbing stage than trying to explain with written words in a forum post.

3. In addition to the fact that Atmos doesn't really work like the centre channel in 5.1, what you're stating about the centre channel of 5.1 is not really correct anyway! The centre channel of 5.1 does not "replace" the need for a phantom centre (between Front Left and Front Right), it only anchors the phantom centre to a physical location (of the centre speaker). In film sound, BOTH the physical centre channel AND the phantom centre are employed. The physical centre is employed for sound that has to be located in a physical position (such as the dialogue) but other sound, ambiences and incidental music for example, may be routed partly to the physical centre channel and partly to the phantom centre or only to the phantom centre. Not uncommonly, incidental music and ambiences in films are effectively 4.0 mixes. In practical use therefore, the centre channel of 5.1 doesn't replace the phantom centre, it's effectively an option that's in addition to the phantom centre. You might like/prefer the effect of increasing the width of the front soundstage by increasing the distance between the front left and right speakers but it's not actually correct/better because you are reducing/damaging the phantom centre. In a correctly set up 5.1 system the front left and front right speakers should be set up to operate as a traditional 2 channel stereo system, with no hole/lower level in the phantom centre (from spacing the speakers too widely).
3a. The phantom (stereophonic) position between say the centre and front left speaker is there anyway, you are not "creating" it, you are just making the entire stereo image wider and therefore more obvious/noticeable, which is not necessarily a good thing, just as having your speakers spaced too widely in a 2 channel stereo system isn't.
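The phantom image 3a refers to - a sound placed between two adjacent speakers purely by relative level - can be sketched with a toy constant-power panner. The speaker layout and gain law below are illustrative assumptions only, not Dolby's actual (proprietary) renderer or any real installation:

```python
import math

# Hypothetical horizontal 5.1-style layout: (name, azimuth in degrees).
SPEAKERS = [("L", 30), ("C", 0), ("R", -30), ("Ls", 110), ("Rs", -110)]

def pairwise_pan(azimuth_deg):
    """Toy constant-power amplitude panner: distribute a sound between the
    two speakers flanking its azimuth. This only sketches the general idea
    behind phantom images (and, generalised, object renderers)."""
    ring = sorted(SPEAKERS, key=lambda s: s[1])
    for (n1, a1), (n2, a2) in zip(ring, ring[1:]):
        if a1 <= azimuth_deg <= a2:
            frac = (azimuth_deg - a1) / (a2 - a1)
            # Constant-power crossfade: squared gains sum to 1.
            return {n1: math.cos(frac * math.pi / 2),
                    n2: math.sin(frac * math.pi / 2)}
    # Outside the ring's coverage: snap to the nearest speaker.
    nearest = min(SPEAKERS, key=lambda s: abs(s[1] - azimuth_deg))
    return {nearest[0]: 1.0}
```

For example, `pairwise_pan(15)` splits the signal equally between C and L (about 0.707 each), producing a phantom image halfway between them; because the squared gains sum to 1, perceived loudness stays roughly constant as the image moves.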



ironmine said:


> All these three "unnatural" variants (1 - headphones without crossfeed, 2 - headphones with crossfeed, 3 - speakers) can be ranged and ranked in terms of their "unnaturalness" (as defined by moving further and further away from the reference spatially perfect sound of live instruments in real life):
> 0) Live sounds in nature (REFERENCE).
> 1) Speakers
> 2) Headphones with crossfeed
> ...



Not only CAN I argue with YOUR ranking above but I can argue with the entire premise YOU are basing it upon! You are making exactly the same mistake as 71dB and exactly the same mistake as so many audiophiles in numerous different areas of audio! It may all appear like a simple, obvious and common sense explanation to YOU but actually it's just a made-up explanation that's FALSE. The demonstrated science proves that it is false. As we move between the variants more and more spatiality is NOT lost: Between 0 and 1 we do not lose spatial information but the opposite, we gain spatial information, the addition of the spatial information of the listening room. HPs with crossfeed do not lose spatial information, it's exactly the same spatial information that's in the recording, only crossfed. HPs without crossfeed also do not lose spatial information, it's the same as on the recording. This is all provable with objective measurements!

Furthermore, I strongly dispute your entire premise because it is based on BOTH a false definition of "unnaturalness" AND also therefore, a false reference! You stated "_unnaturalness as defined by moving further and further away from the reference spatially perfect sound of live instruments in real life_". What live instruments, what real life and what spatially perfect sound? With almost all rock and pop genres for the last 40+ years, "real life" is the drumkit being performed and recorded on its own one day, the electric guitars performed and recorded on their own another day and so on for all the other parts. If "real life" is your reference then pretty much every rock/pop song would last a week or more and you'd never actually hear it as a song but as a succession of individual parts. Additionally, there are almost no "live instruments in real life", the main difference between an electric guitar and an acoustic guitar is the addition of electrical interference and distortion, which can be added either at the time of performance/recording or afterwards (or both), the drumkit similarly, although the changes/distortion mostly occur after the performance/recording and by definition, a synthesiser is synthesised sound, not a live instrument. Lastly, there is no reference or real life spatiality, the spatiality is all entirely manufactured and the ONLY "reference" is the subjective decisions of the artists (during the mixing and mastering processes). In other words, what's on the recording you're reproducing is virtually all completely "unnatural" to start with, so you are ranking unnaturalness against a reference of complete unnaturalness, which of course is meaningless!

The situation is somewhat different if we're talking about recordings of purely acoustic music genres and instruments, say a symphony orchestra, but the underlying premise is still the SAME! What live instrument reference in real life? A live acoustic instrument produces a very significantly different sound when measured, recorded, or listened to from 3ft away than it does from 50ft away. What real life or perfect spatiality? In real life it is not possible to listen to a performance on an acoustic instrument from both 3ft away and 50ft away at the same time but this is effectively how virtually all recordings of symphony orchestras are made (and has been since the late 1950's). You could ask the question; "why don't we simply record an orchestra in stereo from effectively the best seat in the house?" - and the answer to that question is: We do not experience that sound in that seat, our brains process that sound and what we experience is a perception that is significantly different to that actual sound. Since around the early 1950's we have been experimenting with how to create recordings of what we would experience, rather than of the actual sound that would enter an audience member's ears at the perfect listening position. So again, even in this scenario your reference for unnaturalness is something which is unnatural (does not and cannot exist in nature). Furthermore, with knowledge, experience and training/practice one can hear/perceive/identify the ubiquitous "unnaturalness" and that it's all a manufactured illusion. Although the difficulty of doing so varies from recording to recording, according to the intent of the engineers/artists who created it and/or the skill with which that intent was realised.

This has all already been discussed at length earlier in this thread. Either you couldn't be bothered to read it or, like 71dB, you are simply in denial of the facts?

G


----------



## 71 dB

gregorio said:


> Not only CAN I argue with YOUR ranking above but I can argue with the entire premise YOU are basing it upon! You are making exactly the same mistake as 71dB and exactly the same mistake as so many audiophiles in numerous different areas of audio! ...



When going from live sound in nature (0) to speakers (1) we lose lots of original spatial information, but new spatial information is generated. The amount of spatial information isn't the issue, but the fact that ORIGINAL spatial information is lost. Headphones without crossfeed present the spatial information in the wrong context (a bit like an Englishman trying to read a manual in Dutch - all the information is there, but interpreting it is a challenge). Crossfeed doesn't really change the amount of spatial information (because it's so simple that our hearing can tell it apart from complex real spatial information), but converts it into a more correct (not 100 % correct, because crossfeed is just a coarse simulation of HRTF) form for our spatial hearing (the Dutch manual is translated into English).
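The kind of "coarse simulation of HRTF" being discussed - a delayed, lowpass-filtered, attenuated copy of the opposite channel added to each ear - can be sketched in a few lines. All parameter values below are illustrative guesses, not a calibrated model or any specific plugin:

```python
import numpy as np

def simple_crossfeed(left, right, fs, delay_ms=0.3, cutoff_hz=700.0, atten_db=-8.0):
    """Toy crossfeed: each output ear gets its own channel plus a delayed,
    lowpass-filtered, attenuated copy of the opposite channel. The delay,
    cutoff and attenuation are illustrative defaults only."""
    delay = int(round(fs * delay_ms / 1000.0))
    gain = 10.0 ** (atten_db / 20.0)
    # One-pole lowpass as a crude stand-in for head shadowing.
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = (1.0 - a) * s + a * acc
            y[i] = acc
        return y

    def delayed(x):
        return np.concatenate([np.zeros(delay), x[:len(x) - delay]])

    out_l = left + gain * delayed(lowpass(right))
    out_r = right + gain * delayed(lowpass(left))
    return out_l, out_r
```

Feed it a hard-panned signal (all left, silent right) and the right output is no longer silent: it carries a quieter, duller, slightly later copy of the left channel, which is exactly the head-shadow cue headphones otherwise omit.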



gregorio said:


> Furthermore, I strongly dispute your entire premise because it is based on BOTH a false definition of "unnaturalness" AND also therefore, a false reference! You stated "_unnaturalness as defined by moving further and further away from the reference spatially perfect sound of live instruments in real life_". What live instruments, what real life and what spatially perfect sound? ...



When there is no real life sound, what the recording sounds like on speakers kind of becomes the "real thing". Number (1) becomes number (0).



gregorio said:


> The situation is somewhat different if we're talking about recordings of purely acoustic music genres and instruments, say a symphony orchestra but the underlying premise is still the SAME! What live instrument reference in real life? ...
> 
> G



As I have mentioned, fabricated spatiality is ok if each individual sound object in the mix has proper spatial context.


----------



## bigshot (Mar 21, 2019)

It must not have been sunny today.

Gregorio, my point was that more channels mean more possibilities for sound location and more specific sound fields.


----------



## bfreedma

gregorio said:


> 1. That depends on what you mean by "middle" of the soundfield? If you mean "middle" of the horizontal soundfield (rather than middle of the horizontal and vertical soundfield, within the hemisphere) then no, Atmos makes no difference to the localisation.
> 
> 2. Again, not really. Of course, we've been creating synthetic room ambience (acoustics) much bigger than your actual room for many decades. In the case of say "an organ concert in a cathedral" for example then we can have some synthetic height information (room acoustics) but it's both a technological challenge and only a very vague approximation. In the case of say "open air in a field" then it would make no difference at all because an open field obviously does not have a ceiling, there is no boundary to reflect sound and therefore there is no acoustic information coming from above to emulate/synthesise.
> 2a. That's true but the flaw in your reasoning is that Atmos does not provide more channels! Until Atmos, we all thought in terms of audio channels and routing those channels to physical outputs (speakers), but that's not how Atmos works. In effect, Atmos is a 7.1 channel system, with the addition of up to 128 "audio objects" but these audio objects are NOT audio channels and there is NO direct relationship between audio objects and either audio channels or physical outputs. Indeed, despite having 128 audio objects, the maximum number of physical outputs (speakers) an Atmos system can address is 64. Let's for a moment think of Atmos as if it were a 64 channel/speaker system and therefore we could use half of our available audio objects to represent each of those 64 channels/speakers. Straight away we run into an issue because technology cannot synthesise 64 channels of acoustic information, the maximum currently possible is (I believe) 12, which includes the LFE channel, and it's also not practical to accurately setup and position 64 separate room mics (at say an organ concert in a cathedral). Even if this were possible, all the channels representing all the (different) height information would effectively be mixed down to just the two height channels/speakers in a consumer Atmos setup. It would be far easier to actually show you how Atmos works in an Atmos equipped dubbing stage than trying to explain with written words in a forum post.
> ...



Nice post.

Only addition/correction is that there are a number of 16 channel Atmos processors available (Storm Audio, Emotiva, Bryston, Acurus, Monoprice (soon)). Storm Audio also has 20 and 32 channel processors, though I'm not sure how the 16 channel Atmos processing works with the additional channels in post processing.


----------



## 71 dB

bigshot said:


> It must not have been sunny today.



For Gregorio or for me? My posts have been somewhat short.


----------



## Amberlamps

gregorio said:


> 1. That depends on what you mean by "middle" of the soundfield? If you mean "middle" of the horizontal soundfield (rather than middle of the horizontal and vertical soundfield, within the hemisphere) then no, Atmos makes no difference to the localisation.
> 
> 2. Again, not really. Of course, we've been creating synthetic room ambience (acoustics) much bigger than your actual room for many decades. In the case of say "an organ concert in a cathedral" for example then we can have some synthetic height information (room acoustics) but it's both a technological challenge and only a very vague approximation. In the case of say "open air in a field" then it would make no difference at all because an open field obviously does not have a ceiling, there is no boundary to reflect sound and therefore there is no acoustic information coming from above to emulate/synthesise.
> 2a. That's true but the flaw in your reasoning is that Atmos does not provide more channels! Until Atmos, we all thought in terms of audio channels and routing those channels to physical outputs (speakers), but that's not how Atmos works. In effect, Atmos is a 7.1 channel system, with the addition of up to 128 "audio objects" but these audio objects are NOT audio channels and there is NO direct relationship between audio objects and either audio channels or physical outputs. Indeed, despite having 128 audio objects, the maximum number of physical outputs (speakers) an Atmos system can address is 64. Let's for a moment think of Atmos as if it were a 64 channel/speaker system and therefore we could use half of our available audio objects to represent each of those 64 channels/speakers. Straight away we run into an issue because technology cannot synthesise 64 channels of acoustic information, the maximum currently possible is (I believe) 12, which includes the LFE channel, and it's also not practical to accurately setup and position 64 separate room mics (at say an organ concert in a cathedral). Even if this were possible, all the channels representing all the (different) height information would effectively be mixed down to just the two height channels/speakers in a consumer Atmos setup. It would be far easier to actually show you how Atmos works in an Atmos equipped dubbing stage than trying to explain with written words in a forum post.
> ...



1, Every now and then I pop in here and, every time I do, I see a huge wall of text from yourself. Are any of your posts short?

1a, Can I also ask, why do you argue or take an argumentative tone with everyone and always think what you post is right and everyone else is wrong?

1b, When people post stuff that you think is wrong, why do you get all funky and feel the need to write pages and pages in response to them, and in the process make it look like you are really annoyed and angry at these people?

If they annoy you, why write a book in response and pretend that you are sick of all these people who post “untruths”? Personally, I think you actually enjoy doing it instead of being “annoyed”.

I’m not arguing with ya, it’s just a mere observation and me wondering why, if people’s posts annoy you, you see fit to write a book because you’re annoyed. If it was me, I would let it go; trying to correct people on the internet is a thankless task.


----------



## sonitus mirus

I appreciate the exchanges as I often find them to be extraordinarily informative and entertaining.


----------



## ironmine (Mar 21, 2019)

gregorio said:


> What live instruments, what real life and what spatially perfect sound?



In real life we do not hear the guitar on the left with the left ear only and we do not hear the drummer on the right with the right ear only.
We hear all sources with both ears:






But headphones force us to listen to sound sources in an unnatural way: sources on the left go to the left ear only, sources on the right go to the right ear only. Headphones are basically a device that brings the speakers real close to your ears and builds a soundproof Berlin wall between them. It's not natural to listen to music this way.
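This point can be put in numbers with a textbook spherical-head approximation (Woodworth's formula) for the interaural time difference: in real life, a source at the side still reaches the far ear, just fractionally later (and quieter), whereas a hard-panned headphone channel delivers the far ear nothing at all. The head radius and speed of sound below are standard textbook values:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference (ITD) for a distant source at the given azimuth
    (0 = straight ahead, 90 = directly to one side)."""
    th = math.radians(azimuth_deg)
    return (head_radius_m / c) * (th + math.sin(th))
```

Even a fully lateral source (`itd_seconds(90)`) arrives at the far ear only about 0.66 ms late; with hard-panned headphone playback that second path simply does not exist, which is the "Berlin wall" being described.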



gregorio said:


> Furthermore, with knowledge, experience and training/practice one can hear/perceive/identify the ubiquitous "unnaturalness" and that it's all a manufactured illusion.



Even if what I call "natural sound" or "original spatiality" or "reference" is what you call "manufactured illusion" or "spatiality that is all entirely manufactured" and "subjective decisions of the artists", it does not dispute my point.

Whatever the reference sound / illusion / artist's decision is, it is best preserved when reproduced with speakers.

Headphones with crossfeed do not preserve it as well as speakers, but at least they do it partially, by tearing down (as Gorbachev and Reagan did) the Berlin wall between our ears (between the left and right channels).

Headphones without crossfeed are most detrimental to the initial spatiality (whatever it is) present in the recording.


----------



## Davesrose (Mar 21, 2019)

It seems to me that crossfeed is good for combating recordings that have hard pans going directly to left or right in a stereo recording. When it comes to HRTF and surround sound with headphones, the first stage is your source. The limitation with conventional headphones is they have one driver per cup that's aimed directly at your ear canal. Many live performances these days are in large arenas or auditoriums with a speaker setup. What you tend to hear are specific locations (be it instrument or speaker)...and you're able to perceive that location since the sound envelops your entire ear (the helix of your ear is shaped in a manner that contributes to direction cues). I've tried crossfeed and surround processors with conventional headphones, and I've had various levels of success hearing a soundscape that goes around me. I haven't heard an instance where sound is focused directly in front of me (IMO, because the driver of the headphone is sitting on top of my ear and not angled in directions for spatial cues). I do seem to remember that there have been a few "true" surround headphones for gaming in which there are several drivers in a headphone cup (angled in different directions for a 2D surround plane).


----------



## gregorio (Mar 22, 2019)

71 dB said:


> [1] When going from live sound in nature (0) to speakers (1) we lose lots of original spatial information, but new spatial information is generated. The amount of spatial information isn't the issue, but the fact that ORIGINAL spatial information is lost.
> [2] Headphones without crossfeed present the spatial information in wrong context ...
> [2a] Crossfeed doesn't really change the amount of spatial information but [2b] converts it into a more correct form for our spatial hearing.
> [3] When there is no real life sound, what the recording sounds like on speakers kind of becomes the "real thing".
> [4] As I have mentioned, fabricated spatiality is ok if each individual sound object in the mix has proper spatial context.



1. What live sound in nature and what ORIGINAL spatial information is lost? The original spatial information of say the snare drum on a typical recording will be the sound + spatial information from 1 inch away from the drum (in mono) AND the sound + spatial information from 6ft away (in stereo) AND the sound + spatial information from say 20ft away (in mono), this is all mixed together, EQ'ed, compressed and artificial reverb is added and then it's mixed with all the other instruments in the band, all of which have different spatial information from different positions in different acoustic spaces and is EQ'ed and compressed differently! Your whole concept of "natural" and "original" spatial information is a fallacy that OBJECTIVELY does NOT exist and pretty much the last thing we would want is natural or original spatial information!

2. Yes they do but so do speakers and headphones without crossfeed because the spatial information is all in the "wrong context" to start with (on the recording). In fact, it's in a bunch of significantly different and conflicting contexts!
2a. Correct.
2b. No it doesn't, it's not only the same amount of spatial information, but exactly the same improper, unnatural, conflicting spatial information in every other respect, just crossfed! There's no complex processing occurring, no "converting it into" a natural, proper or "correct form", the technology to do that does not exist and if it did, it would be pretty much the last thing we would want! This, AGAIN, is the objective fact! Now clearly, you perceive something that sounds "correct" to you (with crossfeed), which is fine but that is NOT the objective fact, it is YOUR perception and therefore NOT applicable to everyone else.

3. Are speakers magic, are they implementing some technology that doesn't exist? Speakers are obviously not the "real thing", they do not make a recording "kind of become the real thing" because there is no real thing and there isn't intended to be. What speakers do is just transduce and replay the recording and the listening environment adds room acoustics and the result is an illusion which is NOT the real thing but hopefully pleasing.

4. That's exactly my point, each individual sound object in the mix does NOT have "proper spatial context"! Again, the typical snare drum "in the mix" does not have a proper spatial context, it has about 4 very different and contradictory spatial contexts. In fact, it's hard to imagine a spatial context that's LESS "proper"! So, you are contradicting yourself (again)!



bigshot said:


> Gregorio, my point was that more channels mean more possibilities for sound location and more specific sound fields.



Yes, I realise that and while your statement is often/sometimes true, that's not always the case in practice. For example, if we have standard 2 channel stereo and then add a 3rd (centre) channel we do not have more possibilities for sound location, we have exactly the same possibilities. Additionally, the paradigm of more channels isn't really how Atmos works anyway. Maybe my explanation to bfreedma below will help.



bfreedma said:


> Nice post.
> Only addition/correction is that there are a number of 16 channel Atmos processors available (Storm Audio, Emotiva, Bryston Acurus, Monoprice (soon). Storm Audio also has 20 and 32 channel processors, though I'm not sure how the 16 channel Atmos processing works with the additional channels in post processing.



I assume they must be consumer Atmos processors? Theatrical Atmos processors are 64 channel, although in both cases they're not really "channels", they're outputs.

Before we consider Atmos, we need to get away from the consumer concept of a "channel" and a corresponding physical output/speaker for it. In a consumer stereo system we have a left channel and a right channel, and a sound hard panned to say the left channel will exist only in the left audio channel and will be output only to the left speaker. Same with a consumer 5.1 system: we have a left front channel, a corresponding left front speaker, and a sound exclusively panned to the front left channel will be output exclusively to the front left speaker. And the same is true with all the other channels/speakers.

However, that's not the case with a theatrical 5.1 system. We still have a front left channel and a front left speaker, and we also still have a surround (rear) left channel, but we don't have a corresponding left surround speaker; instead we have an array of (diffuser) speakers all around the walls. If we pan a sound exclusively to the front left channel of a theatrical 5.1 system, it isn't output to only the front left speaker, some of it is routed to the foremost (nearest the screen) diffuser speaker (and what we would actually hear is a phantom position somewhat further left than the actual left front speaker). A cinema 5.1 system has the same 5.1 audio channels but 30 or more individual outputs/speakers, each of which is a different signal made up of different combinations of the 5 main audio channels in fixed ratios. So for example, a diffuser speaker on the left wall exactly between the front left and surround left positions would receive a signal that is a 50/50 combination of the front left channel and the rear left channel, the next speaker along that wall might receive a signal that is a 60/40 split, and so on. This is all set up during installation and our same 5.1 audio channels can therefore feed a small cinema with say 20 speakers or a large cinema with 60.

Now we come to Atmos, and the first thing to bear in mind is that it's exactly the same: the same array of speakers and the same fixed ratio combinations feeding them as with a 5.1/7.1 system. It's important to understand this because the thing that makes Atmos different (audio objects) is in addition to this traditional 5.1/7.1 setup, not a replacement for it! When we're mixing we can simply mix a sound as we always would or, we can assign it as an "audio object", in which case it effectively bypasses the fixed ratio combination of channels that would feed a particular speaker and allows us to address all the speakers individually. In other words, our speaker in the middle of our left wall would output a fixed 50/50 split of the front left channel and surround left channel, plus a completely variable (0% - 100%) amount of an audio object. Atmos also provides us with two additional arrays of (ceiling) speakers, which are accessible as audio objects. In terms of physical audio channels, Atmos has 8 channels (configured as traditional 7.1) but it also has an additional bunch of data, which defines the audio objects. It's a more complex and "intelligent" system because in the case of a smaller cinema with far fewer than 64 speakers, the processor will effectively work out a phantom position for a sound using the available speakers, if a physical speaker doesn't exist in that (panned) location. The consumer version is effectively an extension of this, obviously using even fewer speakers.
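The fixed-ratio wall array can be sketched as a table of blend weights per speaker. A linear ramp along the wall is an assumption made purely for illustration; real installations use calibrated ratios set during alignment:

```python
def left_wall_feeds(n_speakers):
    """Sketch of fixed-ratio mixing for a theatrical 5.1 diffuser array:
    each speaker along the left wall receives a blend of the front-left
    (Lf) and surround-left (Ls) channels according to its position.
    Assumes at least two speakers; the linear ramp is illustrative only."""
    feeds = []
    for i in range(n_speakers):
        pos = i / (n_speakers - 1)  # 0.0 = screen end, 1.0 = rear of room
        feeds.append({"Lf": round(1.0 - pos, 3), "Ls": round(pos, 3)})
    return feeds
```

So the speaker midway along a 3-speaker wall receives the 50/50 Lf/Ls split described above, its neighbours proportionally more of one channel, and an Atmos audio object's gain is then added on top of (not instead of) these fixed feeds.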

I'm not sure how well this explanation helps? Hopefully you can see that the paradigm of an audio channel and its corresponding physical output isn't really applicable to Atmos?

G


----------



## bigshot (Mar 22, 2019)

I realize there are always little exceptions and rare situations, but for the everyday purposes of all of us listening to music in our homes, those details rarely matter. Feel free to mention all that because it's interesting, but I'll focus on the broad strokes that are most important and most effective at solving the real-world problems of achieving good sound in a home situation. I always try to keep information in context. Too often the unimportant details get more discussion around here than the things that really matter. Lack of context is what I believe gets people worrying about jitter ratings and high bitrates and all of that audiophoolery that sends people down pointless rabbit holes. I went to design school, where they taught the importance of KISS.


----------



## Davesrose (Mar 23, 2019)

bigshot said:


> I realize there are always little exceptions and rare situations, but for the everyday purposes of all of us listening to music in our homes, those details rarely matter. Feel free to mention all that because it's interesting, but I'll focus on the broad strokes that are most important and most effective at solving the real-world problems of achieving good sound in a home situation. I always try to keep information in context. Too often the unimportant details get more discussion around here than the things that really matter. Lack of context is what I believe gets people worrying about jitter ratings and high bitrates and all of that audiophoolery that sends people down pointless rabbit holes. I went to design school, where they taught the importance of KISS.



I'm now listening to a concert BD that's DTS MA 96kHz (Crossroads 2010), and I do keep thinking these detailed discussions are banal.  So my new Dolby Atmos/3D surround receiver is a Denon AVR-X6500H.  Before, I was a diehard Harman Kardon fan: my previous main receiver was an older HK TrueHD/DTS MA 7.1 receiver.  My dad gave me a more modest entry-tier Denon 7.1 surround receiver when he upgraded to an Atmos-enabled one.  I wired it and tried a few different blu-rays and found it wasn't as engaging (music sounded flatter as there was less mid-bass and DR seemed a bit more reserved).  I would like to have seen what Harman Kardon would have come up with for current Atmos/DTS:X/Auro-3D receivers, but unfortunately they're out of the market.  I have been impressed with my new Denon for how it processes 5.1 to 3D in Auro-3D mode.  Interestingly, I'm finding the EQ and levels can change quite drastically with the selection of surround mode (let alone settings for surround parameters and crossover).  When it comes to producing for Atmos, I have read that movie productions do tend to mix *at most* 7.1.4 core audio.  And to try to confirm what gregorio has written, 3D surround formats are based on tracks instead of "channels": Atmos allowing 128 "tracks" (combination of objects and "core channels").  IMO the main advantage of the new systems is greater flexibility for spatial reproduction, from a cinema with many channels down to a 5.1.4 home cinema.  But what matters most with what you're listening to is your own processor.  With the BD I'm listening to now, I'm hearing better dynamics and bass response by switching my surround mode to Dolby Surround (Dolby's new Dolby Atmos matrix surround).  When I'm critical about another track, who knows...I may prefer Auro-3D or DTS Neural:X.  
When it comes to native 3D surround sound: there are more instances of brands mandating an exclusive sound mode (I've found that's true with Atmos for DD streaming but not TrueHD BDs, and DTS:X BDs).


----------



## gregorio (Mar 23, 2019)

Davesrose said:


> [1] When it comes to producing for Atmos, I have read that movie productions do tend to mix *at most* 7.1.4 core audio.
> [2] And to try to confirm what gregorio has written, 3D surround formats are based on tracks instead of "channels": Atmos allowing 128 "tracks" (combination of objects and "core channels").
> [3] IMO the main advantage of the new systems is greater flexibility for spatial reproduction, from a cinema with many channels down to a 5.1.4 home cinema.
> [4] But what matters most with what you're listening to is your own processor.



1. To be honest, that's not really correct, there isn't a 7.1.4 theatrical movie format and Atmos doesn't really have any height channels as such, they are effectively audio objects. The mixing of movies, the formats and routing is very complex, it makes music recordings/mixes look childishly simple. A higher budget film will be mixed in several formats and numerous versions. Typically there will be an Atmos mix and a separate 7.1 mix, there might also be an Auro 3D mix which is based more on the traditional concept of channels and was 11.2 (although there were/are various versions, 13.2 and then AuroMax with 22.2), it was effectively two vertical layers of 5.1 and a single centre, even higher positioned, speaker (often referred to as "the voice of god" channel). There's also likely to be an IMAX version, which is 6 channels (but not in the 5.1 layout) and again requires a separate, dedicated mix. In total, a high budget feature film may have as many as 70 different sound mixes, spread across 5 or more different audio formats. Regardless of what other formats are used though, there's always a 5.1/7.1 mix. It's been this way with movies for over 30 years.

2. Mmmm, not really. I completely understand your confusion though, because the terms "tracks" and "channels" are often used interchangeably which confuses even quite advanced students and occasionally even pros. A "channel" is an audio path, while a "track" is a physical location where the audio is recorded, originally a horizontal region on a tape (or film), hence the term "multi-track recorder". Obviously, to record something on a "track" we need an audio path (channel) routed to that track and in this case the "track" and the "channel" are effectively the same thing. The physical act of recording a musician/sound is therefore often called (in the business) "tracking". Let's take a drumkit as an example to explain when/why they're different: We record a drumkit with multiple mics; a kick mic, a couple of snare mics, one or more mics for the toms, another for the hi-hats, a stereo "overhead pair" and a distant "room mic". Each of these mics requires its own audio path (channel), so that's about 9 or more audio channels. However, it would be usual to submix that drumkit (those 9 channels) to stereo (another two channels). If we record that submix then we need 2 tracks (which are the same as our submixed stereo audio channels) but the drumkit has actually used 11 audio channels. A blockbuster film will commonly require over 1,000 audio channels, which in the case of a 7.1 mix will be recorded down to 8 tracks, which then require 8 audio paths (channels) routed to the 8 speakers or, as explained previously, 30 or more channels (derived from those 8 tracks) routed to each of the speakers in a theatrical system. Atmos is effectively 8 tracks/channels (traditional 7.1), plus up to 128 audio objects which are not related to tracks at all and are not tied to any specific output channel/s.
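The drumkit example can be counted out in a toy sketch (the mic names are hypothetical stand-ins for the mic list above):

```python
# Toy tally of channels vs tracks for the drumkit example above:
# nine mic channels are submixed to a stereo pair, and only that
# pair is actually recorded as tracks.
drum_mics = ["kick", "snare_top", "snare_bottom", "tom_1", "tom_2",
             "hats", "overhead_L", "overhead_R", "room"]   # 9 mic channels
stereo_bus = ["drums_L", "drums_R"]                        # 2 submix channels

channels_used   = len(drum_mics) + len(stereo_bus)  # audio paths in use: 11
tracks_recorded = len(stereo_bus)                   # what hits "tape": 2
```

The gap between the two numbers is the whole point: the session consumed 11 audio channels but only 2 tracks survive on the recording.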

3. I'm not sure I would say "greater flexibility for spatial reproduction in a cinema"; we don't actually ever record anything even in 7.1, let alone in Atmos. Maybe a greater flexibility for sound positioning would be a better description. Also, it's not really "resampling": nothing is being resampled, it's being up/down mixed. You can theoretically up or down mix anything to pretty much anything else even without Atmos. 5.1.4 is effectively 7.1 down-mixed to 5.1 and then 4 ceiling audio channels approximated/calculated from the audio objects. This would typically give a far more accurate representation than upmixing a traditional 5.1 mix and effectively guessing what to put in the ".4" speakers.
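A 7.1-to-5.1 fold-down of the kind mentioned can be sketched as follows, treating each channel as a single level. The -3 dB coefficient is one common choice for fold-downs, not a value mandated by any particular format:

```python
# Sketch of a 7.1 -> 5.1 fold-down: the side and rear surround pairs
# are summed into a single surround pair. L, R, C and LFE pass through.
G = 0.707  # roughly -3 dB, an illustrative fold-down gain

def folddown_71_to_51(ch):
    out = {k: v for k, v in ch.items()
           if k in ("L", "R", "C", "LFE")}          # unchanged channels
    out["Ls"] = G * ch["Lss"] + G * ch["Lrs"]       # left surrounds summed
    out["Rs"] = G * ch["Rss"] + G * ch["Rrs"]       # right surrounds summed
    return out
```

The ".4" ceiling feeds have no counterpart here, which is the post's point: with objects they can be calculated from the object coordinates instead of guessed from folded-down channels.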

4. BTW, the processor is identical in whichever AVR you're using. If your AVR has Dolby Atmos, then the manufacturer of the AVR must have purchased Dolby's Atmos processor, the same Dolby Atmos processor that every other AVR manufacturer has to purchase/licence. Same goes for the other copyrighted formats (DTS and Auro for example). As mentioned, when upmixing (say stereo to 5.1, 5.1 to Atmos, or whatever) the processors are effectively guessing, although not entirely blindly, the processor will try to identify cues (such as phase and various other indicators) but each different processor (Dolby, DTS, Auro) will "guess" at least somewhat differently and which is "better" will vary depending on the individual film/mix, your speaker setup and your personal preference.

G


----------



## gregorio

bigshot said:


> I realize there are always little exceptions and rare situations, but for the everyday purposes of all of us listening to music in our homes, those details rarely matter.



True but this isn't one of those occasions (a little detail or rare exception). The example I gave, having two channel stereo and expanding it to 3 channels (with a centre channel) would appear to be very rare/exceptional and indeed there are no 3.0 mixes available to consumers as far as I'm aware. However, in practice it's not a little exception or detail that doesn't really matter, because that's exactly how the front three speakers of ALL 5.1, 7.1, Atmos and modern multi-channel formats work!

G


----------



## Davesrose (Mar 23, 2019)

gregorio said:


> 1. To be honest, that's not really correct, there isn't a 7.1.4 theatrical movie format and Atmos doesn't really have any height channels as such, they are effectively audio objects. The mixing of movies, the formats and routing is very complex, it makes music recordings/mixes look childishly simple. A higher budget film will be mixed in several formats and numerous versions. Typically there will be an Atmos mix and a separate 7.1 mix, there might also be an Auro 3D mix which is based more on the traditional concept of channels and was 11.2 (although there were/are various versions, 13.2 and then AuroMax with 22.2), it was effectively two vertical layers of 5.1 and a single centre, even higher positioned, speaker (often referred to as "the voice of god" channel). There's also likely to be an IMAX version, which is 6 channels (but not in the 5.1 layout) and again requires a separate, dedicated mix. In total, a high budget feature film may have as many as 70 different sound mixes, spread across 5 or more different audio formats. Regardless of what other formats are used though, there's always a 5.1/7.1 mix. It's been this way with movies for over 30 years.
> 
> 2. Mmmm, not really. I completely understand your confusion though, because the terms "tracks" and "channels" are often used interchangeably which confuses even quite advanced students and occasionally even pros. A "channel" is an audio path, while a "track" is a physical location where the audio is recorded, originally a horizontal region on a tape (or film), hence the term "multi-track recorder". Obviously, to record something on a "track" we need an audio path (channel) routed to that track and in this case the "track" and the "channel" are effectively the same thing. The physical act of recording a musician/sound is therefore often called (in the business) "tracking". Let's take a drumkit as an example to explain when/why they're different: We record a drumkit with multiple mics; a kick mic, a couple of snare mics, one or more mics for the toms, another for the hi-hats, a stereo "overhead pair" and a distant "room mic". Each of these mics requires its own audio path (channel), so that's about 9 or more audio channels. However, it would be usual to submix that drumkit (those 9 channels) to stereo (another two channels). If we record that submix then we need 2 tracks (which are the same as our submixed stereo audio channels) but the drumkit has actually used 11 audio channels. A blockbuster film will commonly require over 1,000 audio channels, which in the case of a 7.1 mix will be recorded down to 8 tracks, which then require 8 audio paths (channels) routed to the 8 speakers or, as explained previously, 30 or more channels (derived from those 8 tracks) routed to each of the speakers in a theatrical system. Atmos is effectively 8 tracks/channels (traditional 7.1), plus up to 128 audio objects which are not related to tracks at all and are not tied to any specific output channel/s.
> 
> ...



It seems you're mainly arguing about definitions and whether height channels are mixed with Dolby Atmos.  My language was based on Wikipedia.  Wikipedia states that Atmos is comprised of 128 "tracks" (and each track can be assigned a channel or object).  It also says that by default, there's a bed of "7.1.2" channels (leaving 118 tracks available for objects).  That doesn't seem confusing to me, so I'm not sure why you want to be argumentative.  It does seem the main point you've brought up that's against the literature is that the 128 "objects" are in addition to other mixed channels (that's fundamentally different from what is in the layman's literature, which specifically does say a channel is in one of the 128 tracks).  You're also saying height channels aren't included (so to me, that might be the main point that can be confusing).  I've seen an engineer say that Dolby recommends a 7.1.4 speaker configuration for home mixing and 9.1.4 for cinema mixing.  But perhaps the difference is that 7.1 channels are mixed and the 4 height speakers are for referencing objects? Dolby's literature on cinema Atmos does say that there's a bed of 9.1 channels (and they also state this leaves 118 available tracks).

As for delivered audio formats, I had assumed cinemas could be similar to home.  I've noticed that Atmos for home is delivered via a Dolby Digital core (in streaming) or TrueHD 7.1 core (in blu-rays).  I also bought one of the few IMAX Enhanced blu-rays to listen to IMAX format (which my AVR supports).  IMAX is trying to compete with Dolby, and their audio format is delivered on DTS:X, with their HDR format delivered on HDR10+.  It seems like trends are pointing to Dolby being the dominant format for home UHD standards (unlike DTS MA that dominated HD blu-rays).

RE: Atmos processor.  I do realize that the Atmos processor is effectively identical in any AVR.  My point was that I'm finding sound can be significantly different depending on which surround processor is selected, as well as on different sound settings (such as enabling room correction, bass management, dynamic compression, and other processing in the receiver's system).  I have also seen that each brand's processing of stereo/2D surround to 3D has a different name from their 3D format (signifying it being different processing).  With Dolby, it's "Surround".  With DTS, it's "Neural:X".

Edit: I've just found a good series on Dolby Atmos from a sound engineer, and he goes through an example of doing an Atmos mix.  I've just started watching the series, but seems it might help me understand traditional channels vs audio objects.  He has mentioned that previous surround (2D) is comprised of panning (and these "channels" comprise "the bed" in Atmos) and that is still included in the RMU (which handles all channels and objects).  So it seems height information is handled as objects, and when one hears such and such movie was mixed in 7.1.4, that might be referencing the speaker configuration that was used for mixing:



Interesting that I've found another video in the series where he shows doing a pan and adding elevation.  The referenced axes are different from those used in 3D animation: in audio engineering Z is the height axis, while in animation Y is the height axis.  Every discipline has its own terminology, and it's good that we can learn and get clarification of terminology.


----------



## bigshot (Mar 23, 2019)

Davesrose said:


> My dad gave me a more modest entry-tier Denon 7.1 surround receiver when he upgraded to an Atmos-enabled one.  I wired it and tried a few different blu-rays and found it wasn't as engaging (music sounded flatter as there was less mid-bass and DR seemed a bit more reserved).



It may be underpowered for the number of speakers you're pushing with it. When you add speakers, you need a more powerful amp, because the amp's available power gets divided up like a pie between all the speakers. What you describe is exactly how an underpowered amp sounds.



gregorio said:


> The example I gave, having two channel stereo and expanding it to 3 channels (with a centre channel) would appear to be very rare/exceptional and indeed there are no 3.0 mixes available to consumers as far as I'm aware.



There are a number of 3.0 SACDs in the RCA Living Stereo catalog. The added channel was recorded to allow them to adjust the soundstage in the mix. These SACDs are said to have a better defined soundstage because of the added channel. But that is the exception, not the rule. Most mixes are 4, 5 or 7 channel with or without LFE. I've heard of systems that have a center channel in the rear as well, but I don't know how that is handled. There are a lot of 4.0 recordings that don't have a center at all. I don't care for those as much as 5.1 because they tend to focus on sound coming from the corners of the room, rather than creating a soundstage or sound field.

I think the reason that these particular CDs have very little rear channel info and a tightly defined center is because of remastering. Something they did in the re-release must have messed with the 90 degree phase, and they either didn't know or didn't care. I just got a copy of a Tomita CD that is titled Tomita in Surround. I'm interested to hear how the Dolby Surround is supposed to sound.


----------



## Davesrose (Mar 23, 2019)

bigshot said:


> It may be underpowered for the number of speakers you're pushing with it. When you add speakers, you need a more powerful amp, because the amp's available power gets divided up like a pie between all the speakers.



Very true that amplification is a factor in sound characteristics.  I've also seen that the current quoted specs of AVRs draw rants on many A/V channels/forums (as they tend to rate just two channels driven, and not always at a specified impedance).  The hand-me-down Denon 7.1's specs quoted a power rating for 2 channels at 6 ohms.  My older HK (which was more expensive at the time) quoted specs at 8 ohms.  When I ran the calculation to convert to watts at 8 ohms, the Denon should have been about on par with my HK.  But forums say that one of the nice features of Harman Kardon was that they tended to be conservative in their power ratings (so actual RMS might be greater than other competitors').
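The impedance arithmetic behind that comparison can be sketched as follows. It assumes the amp is purely voltage-limited (P = V^2 / R), which is a simplification: real amps are also current- and thermally limited, and the 150 W figure is a hypothetical rating, not either receiver's actual spec:

```python
# Rescale a power rating quoted at one impedance to another impedance,
# assuming a voltage-limited amplifier (P = V^2 / R). Illustrative only.
def rescale_power(p_rated, r_rated, r_target):
    v_squared = p_rated * r_rated   # implied voltage swing squared
    return v_squared / r_target

# e.g. a hypothetical "150 W into 6 ohms" rating implies
# about 112 W into 8 ohms under this assumption
p_at_8_ohms = rescale_power(150.0, 6.0, 8.0)
```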



bigshot said:


> There are a number of 3.0 SACDs in the RCA Living Stereo catalog. The added channel was recorded to allow them to adjust the soundstage in the mix. These SACDs are said to have a better defined soundstage because of the added channel. But that is the exception, not the rule. Most mixes are 4, 5 or 7 channel with or without LFE. I've heard of systems that have a center channel in the rear as well, but I don't know how that is handled. There are a lot of 4.0 recordings that don't have a center at all. I don't care for those as much as 5.1 because they tend to focus on sound coming from the corners of the room, rather than creating a soundstage or sound field.



That's interesting.  I collected quite a few RCA Living Stereo SACDs.  I even got one of Julian Bream that I also got as a used vinyl (to compare with debates about best format/mastering).  I thought one of the best aspects of the RCA Living Stereo SACDs is that they should reproduce the studio master tapes.  The tapes also predate the quadraphonic formats of the 70s.  While looking up info about Dolby Atmos, I saw that cinema stereo could comprise 3 speakers (I wonder if there was some intention of them being in a cinema, or if it's just for mixing purposes of the soundstage).  Also, when it comes to quad sound being part of a soundstage: it does depend on the source.  One surround title I have that is the quintessential example of 4-channel sound is a re-issue of a quad recording of Bach toccatas in which there are four organs, one in each corner.  An exception for sure.  From what I'm gathering about Atmos with an Atmos processor: there is more emphasis on localization based on the decoded speaker system.  Also, it's funny you mention a CD that has some of the matrixed old "Dolby Surround" (4.0) information.  Not sure why the marketing folks at Dolby have decided to label the new 2D-to-3D surround format "Surround".


----------



## bigshot

Are you running more speakers with the Denon than you did with the HK?


----------



## Davesrose

bigshot said:


> Are you running more speakers with the Denon than you did with the HK?



At the time, no.  This was the same 7 speakers I've had for over 10 years (I've since upgraded my subwoofer with a custom built 12" one).  Now I have a higher end Denon that can power 11 speakers (to give me 7.1.4).


----------



## 71 dB

Spatial hearing didn't develop for snare drum recordings with multiple contexts and artistic intent. It developed so you could hear the lion roaring nearby on the safari and know which direction to run so as not to be eaten alive...


----------



## Davesrose

71 dB said:


> Spatial hearing didn't develop for snare drum recordings with multiple contexts and artistic intent. It developed so you could hear the lion roaring nearby on the safari and know which direction to run so as not to be eaten alive...



So we have to wait for this year's Lion King to hear true headphone 3D?


----------



## gregorio (Mar 24, 2019)

Davesrose said:


> [1] My language was based on Wikipedia. Wikipedia states that Atmos is comprised of 128 "tracks" (and each track can be assigned a channel or object). It also says that by default, there's a bed of "7.1.2" channels (leaving 118 tracks available for objects). That doesn't seem confusing to me, so I'm not sure why you want to be argumentative.
> [2] It does seem the main point you've brought up that's against the literature is that the 128 "objects" are in addition to other mixed channels ...
> [2a] I've seen an engineer say that Dolby recommends a 7.1.4 speaker configuration for home mixing and 9.1.4 for cinema mixing.
> [2b] As for delivered audio formats, I had assumed cinemas could be similar to home.
> [2c] Edit: I've just found a good series on Dolby Atmos from a sound engineer, and he goes through an example of doing an Atmos mix. I've just started watching the series, but seems it might help me understand traditional channels vs audio objects. He has mentioned that previous surround (2D) is comprised of panning (and these "channels" comprise "the bed" in Atmos) and that is still included in the RMU (which handles all channels and objects). So it seems height information is handled as objects, and when one hears such and such movie was mixed in 7.1.4, that might be referencing the speaker configuration that was used for mixing.



1. I agree, that doesn't seem confusing. However, an explanation isn't necessarily correct/accurate just because it "doesn't seem confusing", in fact the opposite is commonly the case. The reason I'm being argumentative (disputing the assertions) is because that explanation is NOT correct/accurate! In fact, Wikipedia actually contradicts itself. DCP (Digital Cinema Package) is the "package" of digital data delivered to cinemas which comprises the film. DCP is effectively a set of standardised specifications (defined by DCI/SMPTE) which ensures that a DCP will play in all cinemas. DCP specifies a SINGLE audio track, which must contain an uncompressed, 24bit, 48kHz or 96kHz 16 channel bwav file. Wikipedia states: "_The Picture Track File essence is compressed using JPEG 2000 and the Audio Track File carries a 24-bit linear PCM uncompressed multichannel WAV file._" - This then is a contradiction, if Dolby Atmos were 128 tracks then it couldn't actually be put into a DCP or distributed to cinemas. Even if we consider the term "tracks" to be interchangeable with the term "channels", that would still be impossible because DCP supports a maximum of 16 channels. BTW, in order to be completely technically accurate, DCP does optionally support multiple audio tracks but only one can be output at a time. This feature, when employed, is typically used for different language versions.
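As a sketch, the main-sound constraints described above could be checked like this. It's a simplified illustration of the stated limits (24-bit linear PCM, 48 or 96 kHz, up to 16 channels in a single track), not a full SMPTE/DCI validator:

```python
# Simplified check of the DCP main-sound constraints described above:
# 24-bit linear PCM, 48 or 96 kHz, at most 16 channels in one track.
def dcp_audio_ok(bit_depth, sample_rate, num_channels):
    return (bit_depth == 24
            and sample_rate in (48000, 96000)
            and 1 <= num_channels <= 16)

dcp_audio_ok(24, 48000, 16)   # a 16-channel Atmos-carrying track fits
dcp_audio_ok(24, 48000, 128)  # 128 discrete channels would not
```

This is why "Atmos is 128 tracks" can't be literally true of the distributed package: the objects travel as data inside the 16 permitted channels, not as 128 extra channels.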

2. No, the main point I was bringing up is that tracks and channels are different things, that audio objects are a different thing again and this is important because the traditional consumer concept of channels and speakers does not apply. Case in point:
2a. Yes but that's speaker configuration, not track or channel configuration! Additionally, 9.1.4 may be the theoretical minimum speaker configuration for cinema mixing; in practice it's much higher. The lowest speaker configuration I've ever seen was about 12.1.8 but the typical configuration in the dubbing stages used by Hollywood would be at least 24.1.16.
2b. As explained, not really. Cinema format is DCP with one audio track, comprising 16 audio channels. In the case of Dolby Atmos, 8 of those channels are used traditionally, i.e. they contain 24/48 uncompressed wav audio data in the traditional 7.1 layout. The other audio channels do not contain audio data; they contain data which define each of the audio objects and their (3D, or really 2.5D) coordinates. Because of this, a cinema processor will not recognise these other channels as audio data and will just play the 7.1 channels, unless the cinema processor is a Dolby Atmos unit (RMU), in which case it of course knows how to decode and use the data in those other channels.
2c. Good, there's hopefully little point in me continuing, as you seem to be grasping the concept.

I can almost hear bigshot saying something like: "All this info about theatrical systems might be somewhat interesting as an aside but it doesn't really concern me or other consumers because we don't have/use DCP or theatrical systems, we use (significantly different) BD/streaming media and home cinema systems". However, 5.1, 7.1, Atmos etc., were all originally designed as theatrical formats and then adapted for home consumer use and all films are mixed on theatrical systems (although often "re-versioned" - somewhat changed/adapted for home use). So the actual audio you are reproducing is largely (or even entirely) influenced by theatrical systems.



bigshot said:


> There are a number of 3.0 SACDs in the RCA Living Stereo catalog. The added channel was recorded to allow them to adjust the soundstage in the mix. These SACDs are said to have a better defined soundstage because of the added channel.



I didn't know there were some 3.0 SACDs. 3.0 can be a cinema format but is very rarely used, I've only ever seen it used for essentially "stereo" documentaries. However, 3.0 is incorporated within 5.1, 7.1 and other formats, it's effectively the front main speakers of all these formats and behaves in exactly the same way. You stated: "_my point was that more channels mean more possibilities for sound location and more specific sound fields_" - you are making and reiterating a point that is not correct (or that may be correct, but only in certain circumstances). In 5.1, 7.1, etc., the centre front speaker does not mean more possibilities for sound location, the possibilities for sound location are dictated by the front left and front right speakers and adding a centre speaker/channel does not increase or alter those possible locations. It does though, under certain conditions, make the phantom centre position more defined. This fact becomes even more important/relevant when we consider the Dolby Atmos format.
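The point about the phantom centre can be illustrated with a standard constant-power pan law (a generic textbook sketch, not any particular console's law):

```python
import math

# Constant-power pan: every position between the L and R speakers is a
# pair of gains. The phantom centre is just the equal-gain point; a
# discrete centre speaker anchors that same point but does not extend
# the left-right span, which is fixed by the L and R speaker positions.
def pan_gains(pos):
    """pos: -1.0 (hard left) .. +1.0 (hard right)."""
    theta = (pos + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

gl, gr = pan_gains(0.0)   # phantom centre: equal gains in L and R
```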



71 dB said:


> Spatial hearing hasn't developped for snare drum recordings with multiple contexts and artistical intent.



Exactly! So, as pretty much every song contains a snare drum recording with multiple contexts, pretty much every song should sound "unnatural" regardless of whether you reproduce it with speakers or HPs (with or without crossfeed), because neither speakers nor HPs (with or without crossfeed) can correct that "unnaturalness"!

G


----------



## Davesrose (Mar 24, 2019)

gregorio said:


> 1. I agree, that doesn't seem confusing. However, an explanation isn't necessarily correct/accurate just because it "doesn't seem confusing", in fact the opposite is commonly the case. The reason I'm being argumentative (disputing the assertions) is because that explanation is NOT correct/accurate! In fact, Wikipedia actually contradicts itself. DCP (Digital Cinema Package) is the "package" of digital data delivered to cinemas which comprises the film. DCP is effectively a set of standardised specifications (defined by DCI/SMPTE) which ensures that a DCP will play in all cinemas. DCP specifies a SINGLE audio track, which must contain an uncompressed, 24bit, 48kHz or 96kHz 16 channel bwav file. Wikipedia states: "_The Picture Track File essence is compressed using JPEG 2000 and the Audio Track File carries a 24-bit linear PCM uncompressed multichannel WAV file._" - This then is a contradiction, if Dolby Atmos were 128 tracks then it couldn't actually be put into a DCP or distributed to cinemas. Even if we consider the term "tracks" to be interchangeable with the term "channels", that would still be impossible because DCP supports a maximum of 16 channels. BTW, in order to be completely technically accurate, DCP does optionally support multiple audio tracks but only one can be output at a time. This feature, when employed, is typically used for different language versions.
> 
> 2. No, the main point I was bringing up is that tracks and channels are different things, that audio objects are a different thing again and this is important because the traditional consumer concept of channels and speakers does not apply. Case in point:
> 2a. Yes but that's speaker configuration, not track or channel configuration! Additionally, 9.1.4 may be the theoretical minimum speaker configuration for cinema mixing; in practice it's much higher. The lowest speaker configuration I've ever seen was about 12.1.8 but the typical configuration in the dubbing stages used by Hollywood would be at least 24.1.16.
> ...



I’m not sure why you want to continue to assert a track is not an audio channel or object.  Dolby’s own white paper on mixing with Atmos says that it comprises 128 tracks (5.1 or 7.1 or 9.1 channels taking up some of those tracks).  I understand that this is for mixing, and may not be the same as a “track” on a video file.  You say my use of the term “track” would make sense, but it’s incorrect.  However, Dolby themselves say that a track can be a channel or audio object (in the context of a session).

https://www.dolby.com/us/en/technol...t-generation-audio-for-cinema-white-paper.pdf


----------



## bigshot (Mar 24, 2019)

In the case of the Living Stereo 3.0 SACDs, RCA claims that playing them in 3.0 with a center channel more precisely defines the placement of the instruments in the center of the soundstage, because the recording was made in three channels with a symphony orchestra, with a mic on the left, right and center. You could mix it to stereo and just have the phantom center serve, but having a separate channel covering the center provides more interior definition of sound sources in the middle-- woodwinds would be spread better, rather than just in an undifferentiated middle. That's what RCA claims at least...


----------



## 71 dB

gregorio said:


> Exactly! So, as pretty much every song contains a snare drum recording with multiple contexts, pretty much every song should sound "unnatural" regardless of whether you reproduce it with speakers or HPs (with or without crossfeed), because neither speakers nor HPs (with or without crossfeed) can correct that "unnaturalness"!
> 
> G



The trick is to fool the hearing by having aspects of spatiality that remind it of natural spatiality. Speakers and crossfed headphones limit ILD at low frequencies in a way that resembles natural spatiality.
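For what it's worth, the low-frequency ILD limiting being described here can be sketched in a few lines. This is a toy illustration only, not the algorithm of any particular plugin; the 700 Hz cutoff and -12 dB attenuation are invented example values:

```python
import math

def simple_crossfeed(left, right, fs=44100, cutoff_hz=700.0, atten_db=-12.0):
    """Toy crossfeed: each output channel is the input channel plus a
    lowpass-filtered, attenuated copy of the opposite channel.
    cutoff_hz and atten_db are arbitrary example values."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)  # one-pole lowpass coefficient
    g = 10.0 ** (atten_db / 20.0)                  # -12 dB -> ~0.25 linear

    def lowpass(x):
        out, acc = [], 0.0
        for s in x:
            acc = (1.0 - a) * s + a * acc
            out.append(acc)
        return out

    out_l = [l + g * r for l, r in zip(left, lowpass(right))]
    out_r = [r + g * l for r, l in zip(right, lowpass(left))]
    return out_l, out_r

def rms(x):
    return math.sqrt(sum(s * s for s in x) / len(x))

# A 100 Hz tone hard-panned left: without crossfeed the right channel is
# silent; with it, the right channel carries an attenuated low-passed copy.
fs = 44100
n = fs // 10
left = [math.sin(2 * math.pi * 100.0 * i / fs) for i in range(n)]
right = [0.0] * n
out_l, out_r = simple_crossfeed(left, right, fs)
```

After processing, the hard-panned low-frequency tone reaches both channels, so the interaural level difference becomes finite instead of infinite, which is roughly what speakers in a room do.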


----------



## gregorio (Mar 26, 2019)

Davesrose said:


> [1] I’m not sure why you want to continue to assert a track is not an audio channel or object.
> [2] Dolby’s own white paper on mixing with Atmos says that it comprises 128 tracks (5.1 or 7.1 or 9.1 channels taking up some of those tracks).
> [2a] I understand that this is for mixing, and may not be the same as a “track” on a video file.
> [2b] You say my use of the term “track” would make sense, but it’s incorrect.
> [2c] However, Dolby themselves say that a track can be a channel or audio object (in the context of a session).



1. I've already defined what the term "track" means and that it's commonly used interchangeably with the term "channel". If you wish a fuller understanding, try this wiki page on "Multitrack recording". The reason I "want to continue to assert a track is not an audio channel or object" is because to understand what Atmos actually is, we have to get away from the traditional thinking in terms of tracks and channels. You yourself have quoted various different minimum channels for Atmos; 7.1.2, 7.1.4, 9.1.4 and 9.1. Using the traditional concept of tracks and channels, only one of these could be correct or possibly two if we make the (false) assumption that there are two versions of Atmos with different channel counts. However, in reality the consumer and theatrical versions of Atmos both have the same number of channels and the different channel/speaker configurations you have quoted are actually ALL correct because the traditional concept of tracks, channels and speaker configurations doesn't work with Atmos (or other formats which employ audio objects). I had thought you were starting to understand this new paradigm, due to your "edit" in your previous post, but this latest statement indicates that you're still having trouble.

2. Yes Dolby's paper does say that, BUT (!) you must understand the purpose of that paper is to explain what Atmos is, the basics of how it works and therefore introduce the concept of "audio objects". Dolby has used the term "tracks" because that's the closest reference with which readers of the paper (sound engineers, DCP authors, etc.) are already familiar, in order to progress to an understanding of what audio objects are.
2a. Yes, it is for mixing but you don't seem to understand the consequences of that and therefore what the term "track" actually means in that context. And BTW, there is no audio "track on a video file" with DCP. DCP is a hierarchical "package" of files, the audio files/tracks (in an MXF wrapper) are stored separately from the video file (JPEG2000).
2b. I can't actually remember that quote/reference but the use of the term "track" would make sense in a certain context, with a certain meaning. If you applied it outside of that context then it would be incorrect. Case in point:
2c. No, actually Dolby themselves did NOT say that! What Dolby actually stated was: "_Thinking about audio objects is a shift in mentality compared with how audio is currently prepared, but it aligns well with how audio workstations function. A track in a session can be an audio object, and standard panning data is analogous to positional metadata._" - Dolby were deliberately very specific about the context and meaning of "track" here, precisely to avoid confusion and incorrect assumptions. A "session" is the ProTools name for a project file (effectively a mix or sub-mix/stem) and ProTools uses the term "track" to mean "channel". So in this context a track cannot be "a channel or audio object", a track IS a channel and that channel can be assigned as an audio object or mixed to the "bed". Dolby is therefore stating that up to 128 of the 1,000 or so channels in a mix can be assigned as "audio objects", while the others would be assigned/mixed to the "bed".

I realise that much of this could easily be seen as just semantics and that I'm arguing about some irrelevant definition/s of a rather ambiguous term just for the sake of arguing. However, that's NOT the case! To understand what Atmos actually is and avoid incorrect/inapplicable assertions about channels and speaker configs, then one has no choice but to understand these "semantics" because they define the difference between Atmos (and other audio object formats) and what went before.
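The bed-plus-objects idea above can be pictured with a toy data model. To be clear, the class names, fields and channel labels below are invented for illustration; this is not Dolby's actual format or API. The point it shows: the "128 tracks" are simply bed channels plus audio objects, and nothing in the structure ties an object to a playback speaker.

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    name: str
    position: tuple  # (x, y, z) positional metadata; the playback renderer,
                     # not a fixed speaker channel, decides where it plays

@dataclass
class AtmosStyleMix:
    bed_channels: list  # channel-based bed, e.g. a 7.1.2-style layout
    objects: list       # the remaining "tracks" are audio objects

    def track_count(self):
        # The "128 tracks" = bed channels + audio objects combined
        return len(self.bed_channels) + len(self.objects)

# Hypothetical example: a 10-channel bed plus one object
mix = AtmosStyleMix(
    bed_channels=["L", "R", "C", "LFE", "Lss", "Rss", "Lrs", "Rrs", "Ltm", "Rtm"],
    objects=[AudioObject("helicopter", (0.2, 0.9, 1.0))],
)
```

Note that the same mix structure could be rendered to any speaker configuration, which is why quoting a single "minimum channel count" for the format is misleading.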



bigshot said:


> [1] In the case of the Living Stereo 3.0 SACDs, RCA claims that playing them in 3.0 with a center channel more precisely defines the placement of the instruments in the center of the soundstage because the recording was made in three channels with a symphony orchestra with a mike on left, right and center.
> [2] You could mix it to stereo and just have the phantom center serve, but having a separate channel covering the center provides more interior definition of sound sources in the middle--
> [2a] woodwinds would be spread better, rather than just in an undifferentiated middle.



1. Yep, that's marketing gumpf! No one has recorded a symphony orchestra with just 3 mics (Left, Right and Centre) in more than 60 years, nor would they. The results would be poor.

2. Having a dedicated centre channel can provide a more defined/anchored centre position, but that's all!
2a. No, woodwinds would not spread better, they would spread exactly the same. And the middle IS differentiated (in 2-channel stereo); it's differentiated by the fact that a signal in the centre position is the ONLY thing in a stereo mix which is identical in both channels. It may be perceived as less differentiated, not well defined or anchored to the central position IF the stereo system is not set up well. For example, if the left and right speakers are spaced too widely and/or if the listener is not positioned equidistant from the two speakers.
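The "identical in both channels" property is easy to verify with a textbook equal-power pan law. This is a generic sketch; the pan law and the values used are illustrative, not taken from any specific console or DAW:

```python
import math

def equal_power_pan(mono, pan):
    """Equal-power pan of a mono signal: pan=0.0 is hard left, 1.0 is hard
    right, 0.5 is dead centre (textbook sin/cos pan law, for illustration)."""
    gl = math.cos(pan * math.pi / 2)
    gr = math.sin(pan * math.pi / 2)
    return [s * gl for s in mono], [s * gr for s in mono]

mono = [math.sin(2 * math.pi * 440 * i / 44100) for i in range(512)]

# Dead centre: left and right come out identical, which is exactly what
# makes the phantom centre a differentiated position in a 2-channel mix.
l_c, r_c = equal_power_pan(mono, 0.5)

# Any off-centre position breaks that identity.
l_off, r_off = equal_power_pan(mono, 0.3)
```

Only the centre position produces matching left and right signals; every other pan position yields a level difference between the channels.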



71 dB said:


> [1] The trick is to fool the hearing by having aspects of spatiality that remind it of natural spatiality.
> [2] Speakers and crossfed headphones limit ILD at low frequencies in a way that resembles natural spatiality.



1. No it's not, you just made that up! With some/most acoustic genres, say a symphony recording, the "trick" is to create an illusion of a believable natural spatiality. But that is most definitely NOT the case with virtually all popular music genres. There is no attempt or (artistic) desire to create a "natural spatiality" or an illusion of one; in fact quite the opposite, the natural sound (including the "natural spatiality") is deliberately avoided! The "trick" with these genres is creating something that is deliberately not natural but is subjectively perceived as pleasing. Exactly the same as a modern or impressionist artist is not trying to create an illusion of natural perspective.

2. IF you think that speakers and headphones (with crossfeed) remind you of "natural spatiality", then you have an issue with your perception! That's rather sad because it means you're not perceiving what was intended and are missing a significant part of the art you have purchased. If I looked at a Picasso cubist painting and it appeared to me as an illusion of a "natural perspective", there'd be something very strange with my perception and it would be rather sad because I'd be missing pretty much the entire point of Picasso's art!

G


----------



## Davesrose (Mar 26, 2019)

gregorio said:


> 1. I've already defined what the term "track" means and that it's commonly used interchangeably with the term "channel". If you wish a fuller understanding, try this wiki page on "Multitrack recording". The reason I "want to continue to assert a track is not an audio channel or object" is because to understand what Atmos actually is, we have to get away from the traditional thinking in terms of tracks and channels. You yourself have quoted various different minimum channels for Atmos; 7.1.2, 7.1.4, 9.1.4 and 9.1. Using the traditional concept of tracks and channels, only one of these could be correct or possibly two if we make the (false) assumption that there are two versions of Atmos with different channel counts. However, in reality the consumer and theatrical versions of Atmos both have the same number of channels and the different channel/speaker configurations you have quoted are actually ALL correct because the traditional concept of tracks, channels and speaker configurations doesn't work with Atmos (or other formats which employ audio objects). I had thought you were starting to understand this new paradigm, due to your "edit" in your previous post, but this latest statement indicates that you're still having trouble.
> 
> 2. Yes Dolby's paper does say that, BUT (!) you must understand the purpose of that paper is to explain what Atmos is, the basics of how it works and therefore introduce the concept of "audio objects". Dolby has used the term "tracks" because that's the closest reference with which readers of the paper (sound engineers, DCP authors, etc.) are already familiar, in order to progress to an understanding of what audio objects are.
> 2a. Yes, it is for mixing but you don't seem to understand the consequences of that and therefore what the term "track" actually means in that context. And BTW, there is no audio "track on a video file" with DCP. DCP is a hierarchical "package" of files, the audio files/tracks (in an MXF wrapper) are stored separately from the video file (JPEG2000).
> ...



I agree that we're mainly at a point of trying to agree on terminology (i.e. this is more semantics).  But with your arguments, you're showing that you haven't read my referenced white paper from Dolby.  And it's only counterproductive to say that if I don't use the same terminology as you, I'm ignorant of the whole subject.  I will concede that my choice of words in my previous short response, about "tracks" in a video file, was not appropriate.  I am used to authoring in mp4 container files, where video, audio, and subtitle "layers" are often called "streams".  I understand now that the DCP is a container that has separate video and audio tracks (and that a main audio track in that reference can refer to 24-bit PCM).  However, this is where you contradict even Dolby's terminology for Atmos:

"So in this context a track cannot be "a channel or audio object", a track IS a channel and that channel can be assigned as an audio object or mixed to the "bed". Dolby is therefore stating that up to 128 of the 1,000 or so channels in a mix can be assigned as "audio objects", while the others would be assigned/mixed to the "bed"."







If we use Dolby's terminology of 128 possible tracks within this final stage of mastering (Dolby Atmos tools), then it makes sense that Dolby says Atmos can be used with a "bed" of 5.1, 7.1, or 9.1 "channels".  Dolby indicates that Atmos doesn't have to be mastered just in 9.1: "Beds can be created in different channel-based configurations, such as 5.1, 7.1, or even future formats such as 9.1 (including arrays of overhead loudspeakers)."   I also realize these channels are traditional channel arrays that use traditional panning, and the remaining 118 audio objects are for height and localization.

Now I also understand that a "track" in music authoring can be a layer in the session software.  In that case, there can be any number of tracks holding instances of an instrument at one time (which shows that your reference to Wikipedia's "multitrack recording" is a different topic from the above).


----------



## bigshot

gregorio said:


> No one has recorded a symphony orchestra with just 3 mics (Left, Right and Centre) in more than 60 years, nor would they. The results would be poor.



Coulda fooled me! Those 3.0 Living Stereo recordings sound phenomenal to me.


----------



## 71 dB (Mar 27, 2019)

gregorio said:


> 1. No it's not, you just made that up! With some/most acoustic genres, say a symphony recording, the "trick" is to create an illusion of a believable natural spatiality. But that is most definitely NOT the case with virtually all popular music genres. There is no attempt or (artistic) desire to create a "natural spatiality" or an illusion of one; in fact quite the opposite, the natural sound (including the "natural spatiality") is deliberately avoided! The "trick" with these genres is creating something that is deliberately not natural but is subjectively perceived as pleasing. Exactly the same as a modern or impressionist artist is not trying to create an illusion of natural perspective.
> 
> 2. IF you think that speakers and headphones (with crossfeed) remind you of "natural spatiality", then you have an issue with your perception! That's rather sad because it means you're not perceiving what was intended and are missing a significant part of the art you have purchased. If I looked at a Picasso cubist painting and it appeared to me as an illusion of a "natural perspective", there'd be something very strange with my perception and it would be rather sad because I'd be missing pretty much the entire point of Picasso's art!
> 
> G



If you have a lion roaring in place of your speakers, the sound from the lion will come to your ear somewhat similarly as if you played a recording of a lion in an anechoic chamber. The acoustics of your room are that natural spatiality.

Speakers and non-crossfed headphones give very different kinds of ILD levels at low frequencies. Which is the deliberate version? I have already explained Picasso. Frankly, I am so fed up with you, and that's why I write so little anymore. I tried for over a year. I give up.


----------



## gregorio

Davesrose said:


> [1] But with your arguments, you're showing that you haven't read my referenced white paper from Dolby.
> [2] And it's only counter productive to say that if I don't use the same terminology as you, that I'm ignorant of the whole subject.
> [3] However, this is where you contradict even Dolby's terminology of Atmos: "So in this context a track cannot be "a channel or audio object", a track IS a channel and that channel can be assigned as an audio object or mixed to the "bed". Dolby is therefore stating that up to 128 of the 1,000 or so channels in a mix can be assigned as "audio objects", while the others would be assigned/mixed to the "bed"."
> [4] If we use Dolby's terminology of 128 possible tracks within this final stage of mastering (Dolby Atmos tools), then it makes sense that Dolby says that Atmos can be used with a "bed" of 5.1, 7.1, or 9.1 "channels".
> ...



1. No, I'm not "showing" that! There are two possibilities of what I'm "showing": A. That I have fully read AND understood Dolby's paper but that you haven't! You therefore see my arguments as contradicting Dolby's paper and assume that I haven't read it. Or B. That you are correct and that I haven't read or understood Dolby's paper. So, which of these possibilities is more likely? Bearing in mind that I've made my living for the past 27 years working in TV/Film sound, completed my first Dolby Digital theatrical film mix in 1998 and have actually mixed a theatrical feature in Dolby Atmos, which do you think is more likely? Of course I've read the paper you referenced, plus many others, and in addition I've discussed Atmos in some detail with Dolby engineers in person.

2. I use the same terminology as all other film mixers/engineers and more or less the same terminology as Dolby. Dolby has called the mixture of beds and audio objects "tracks" because there is no dedicated term for it and so Dolby used the nearest relevant term that we would understand. The apparent difficulty here is because our (sound engineers) understanding of the term "tracks" is somewhat different to consumers' understanding of it. You have demonstrated your ignorance/misunderstanding of the subject by making assumptions/assertions based on the consumer understanding/definition and then defending those assertions by referencing a Dolby paper which is actually using the professionals' understanding/definition!

3. No, my quoted statement does NOT contradict Dolby's paper, it is ENTIRELY in line with it! That you seem to think my statement somehow contradicts Dolby's paper indicates some misunderstanding on your part.

4. How does "it makes sense"? It ONLY "makes sense" if there is no direct relationship between what Dolby calls "128 tracks" and the traditional concept of tracks/channels.

5. No, there's no difference between a "track" in music or film/TV sound. The "session" software is the same and there can be any number of tracks for an instance of a film sound just as there can be any number of tracks for an instrument. The wikipedia article I referenced is therefore exactly the same topic. The only practical difference between film and music in regard to "tracks" used to be the physical media; music used magnetic tape to store the tracks while the film world used 35mm film (with a magnetic coating) but even that difference doesn't exist any more because for many years it's all digital audio files and regions on hard disks.



bigshot said:


> Coulda fooled me! Those 3.0 Living Stereo recordings sound phenomenal to me.



Which indicates they were not recorded with three mics! (Or possibly that you have a somewhat poor stereo setup).



71 dB said:


> If you have a lion roaring in place of your speakers, the sound from the lion will come to your ear somewhat similarly as if you played a recording of a lion in an anechoic chamber.
> [2] The acoustics of your room are that natural spatiality.



1. No it won't! If you have a recording of a lion roaring in an acoustic space (say a 100m arena for example), then playing that recording in an anechoic chamber should sound entirely different to a real lion in your listening room. When you play that recording on your speakers you are getting the acoustics of a 100m arena within the acoustics of your listening environment; without the acoustics of your listening environment (say in an anechoic chamber) you are just getting the 100m arena acoustics on the recording, which should sound entirely different to a lion roaring in your listening environment ... although apparently not to you!

2. How can you build a 100m arena inside a 10m (or so) listening environment? That's obviously against the laws of physics and therefore by definition CANNOT be "natural spatiality"!

G


----------



## Davesrose (Mar 27, 2019)

gregorio said:


> 1. No, I'm not "showing" that! There are two possibilities of what I'm "showing": A. That I have fully read AND understood Dolby's paper but that you haven't! You therefore see my arguments as contradicting Dolby's paper and assume that I haven't read it. Or B. That you are correct and that I haven't read or understood Dolby's paper. So, which of these possibilities is more likely? Bearing in mind that I've made my living for the past 27 years working in TV/Film sound, completed my first Dolby Digital theatrical film mix in 1998 and have actually mixed a theatrical feature in Dolby Atmos, which do you think is more likely? Of course I've read the paper you referenced, plus many others, and in addition I've discussed Atmos in some detail with Dolby engineers in person.
> 
> 2. I use the same terminology as all other film mixers/engineers and more or less the same terminology as Dolby. Dolby has called the mixture of beds and audio objects "tracks" because there is no dedicated term for it and so Dolby used the nearest relevant term that we would understand. The apparent difficulty here is because our (sound engineers) understanding of the term "tracks" is somewhat different to consumers' understanding of it. You have demonstrated your ignorance/misunderstanding of the subject by making assumptions/assertions based on the consumer understanding/definition and then defending those assertions by referencing a Dolby paper which is actually using the professionals' understanding/definition!
> 
> ...




No, I have read things quite clearly, and I don't need long, drawn-out diatribes to prove my point.  I do fully understand that the mastering stage using Dolby Atmos tools involves what Dolby themselves call "128 tracks" (comprised of a bed of channels plus audio objects)... they themselves go over the different stages of audio production in a movie in that paper.  I can also do screen grabs of the Dolby Atmos mixing software I saw with the demo series I watched (which has a left-hand column showing available tracks... they are 128 slots).  The presenter of that series is experienced with Atmos.  It is only you who have claimed that an Atmos mix comprises 128 "channels".  You continue to obfuscate by trying to reference other stages of sound production with Atmos (i.e. mixing "1000" channels to "128").  You say you "more or less [use] the same terminology as Dolby".  No you haven't!  Otherwise you wouldn't try to belittle me and tell me I don't understand all the sources that say Atmos is comprised of 128 tracks.


----------



## bigshot (Mar 27, 2019)

gregorio said:


> Which indicates they were not recorded with three mics! (Or possibly that you have a somewhat poor stereo setup).



Nope. I'm curious what your purpose is for posting stuff like this? Are you just trying to be an edgelord? If so, short answers like this serve the purpose better than long-winded ones that people don't read all the way through... well, come to think of it, maybe other edgelords will take the time to savor every word of the edge. I'm too busy for that. Keep 'em short and sweet for me!


----------



## ironmine

This thread has been hijacked by crossfeed haters who discuss anything but crossfeed. They dither the discussion so much that nothing but white noise remains.

Instead of this thread "_To crossfeed or not to crossfeed? That is the question..._" we need to create the new one and call it "_To crossfeed. No longer the question... (without Gregorio)_".


----------



## bigshot

I don't hate crossfeed at all. I just don't think headphones ever sound like speakers. If it was approached like any other DSP where you use it if you like it and don't if you don't, there would be no problem with me. The only thing I object to is the statements about crossfeed giving headphones spatiality. To have spatiality, you need to have space for the sound to inhabit. Crossfeed can create a primitive simulation of a certain kind of effect that space has on sound, but it doesn't come close to true soundstage with speakers. Heck, even if it was called pseudo-spatiality or synthetic spatiality it would be a non-issue.


----------



## ironmine (Mar 27, 2019)

bigshot said:


> I don't hate crossfeed at all. I just don't think headphones ever sound like speakers.



Headphones _*with *_crossfeed sound more like speakers than headphones _*without *_crossfeed.



bigshot said:


> If it was approached like any other DSP where you use it if you like it and don't if you don't, there would be no problem with me. The only thing I object to is the statements about crossfeed giving headphones spatiality.



Crossfeed plugins make the sound more _*natural *_in terms of spatiality.



bigshot said:


> To have spatiality, you need to have space for the sound to inhabit.



No, you _*don't *_need to.

Your ears receive a signal and your brain takes it as an input.

Your brain does not care whether this incoming signal is a result of the sound "inhabiting a certain space" or whether the signal is a result of the processing done by some plugin.

If humans ever create a perfect virtual game *indistinguishable *from reality (where video, audio, smells, tactile feelings, etc. cannot be separated from reality), if the players wear headphones while they play this game, the sound in their headphones *will unavoidably have* to be crossfed. It will be the mandatory auditory part of the game. Perfect crossfeed is what it will take to make the sound indistinguishable from reality in a perfect virtual reality game. Uncrossfed sound has no place in natural reality and thus it will have no place in a perfect virtual world.


----------



## Davesrose

ironmine said:


> Your ears receive a signal and your brain takes it as an input.
> 
> Your brain does not care whether this incoming signal is a result of the sound "inhabiting a certain space" or whether the signal is a result of the processing done by some plugin.
> 
> If humans ever create a perfect virtual game *indistinguishable *from reality (where video, audio, smells, tactile feelings, etc. cannot be separated from reality), if the players wear headphones while they play this game, the sound in their headphones *will unavoidably have* to be crossfed. It will be the mandatory auditory part of the game. Perfect crossfeed is what it will take to make the sound indistinguishable from reality in a perfect virtual reality game. Uncrossfed sound has no place in natural reality and thus it will have no place in a perfect virtual world.



Your brain doesn't just take in "a signal" as "an input" for audio perception.  First, in the inner ear, there are many efferent and afferent nerve fibers that provide the sense of loudness and frequency.  Second, for localization, the pinna of the ear is a major contributor.  The way sound is funneled helps determine whether it is coming from the front or the sides (the rear is least direct since it's obstructed from the back).  The main problem with headphones is that you have one driver hovering over your ear (or in your ear canal), with no way to interact with the pinna in a natural way.  Basic crossfeeding will help blend the left and right channels so a sound isn't just one-sided.  I'm sure the "ultimate" headphone surround scheme would have to be complicated and would have to defy assumed physics.

I've heard that serious gamers now play on UHD TVs with surround speakers for the best localization.  I previously mentioned that I remember there have been some "surround" headphones with more than one driver in each cup.  I know Dolby just came out with their own headphones, but they don't use multiple drivers.  Then again, I'm sure the sound quality wouldn't be as good as a single driver either... so there are always trade-offs, and no one perfect system for everyone.


----------



## castleofargh

About the multichannel stuff: this topic is about stereo and headphone processes. I leave short off-topics alone, but if it's going to last, please move it somewhere else.



ironmine said:


> Headphones _*with *_crossfeed sound more like speakers than headphones _*without *_crossfeed.
> 
> 
> 
> ...


it's not about being anti-crossfeed, I've used one version of crossfeed or another for almost 10 years. I never stopped, only moved to variants that I happened to prefer when I found them, or simply something more customizable (which apparently is just what you've been doing for a while). I've said many times how grateful I am for crossfeed.
yet I clearly side with Gregorio and others before him in opposing @71 dB and his completely partial way of overselling the qualities of crossfeed. so to be clear, we never tried to be against crossfeed, because it doesn't make any sense in the first place to be against an end-user DSP. it's like being against someone using a compressor, EQ or any other DSP on his system. if we don't like it, we don't use it. and someone else will do whatever the F he wants, as he clearly should. the value of using such a tool is subjective, the use is subjective, even the settings are subjective. it's a simple matter of taste and subjective experience. but it's not simple enough for @71 dB, who has the urge to justify the use and benefits of crossfeed with BS arguments to make it look like more than it is. half-baked objectivity and terms so loose that we cannot help but protest their abuse. if you've bothered reading some of the exchanges, you must have seen how, somehow, anything crossfeed does must matter, and how he arbitrarily dismisses the importance of anything that's not in favor of his beloved crossfeed, be it acoustic or psychoacoustic. among other slightly infuriating moves, he's been playing a nonsensical game where you take 2 variables, one improved and one degraded, and he pretends it's an objective improvement because he likes the improved one better. logical fallacies and stubbornness are everywhere in his posts. he's been given the opportunity to simply say "I like it, some do not" many times and move on. but no, he has to keep talking about BS like spatial distortion, where his model of acoustic reference isn't actually a pair of speakers in a room, and his compensation isn't actually how acoustics would affect the signal, yet he still pretends to talk about objective notions such as distortion, ILD and ITD, always cherry-picking which parts of the signal are ILD- and ITD-worthy.
this is the SS subsection, so obviously not everybody is fine with those logical fallacies being continuously spammed. but it wouldn't be correct to assume that because someone defends crossfeed, and we reject his points, we're automatically against crossfeed. we're against false claims and logical fallacies, not against (or for) crossfeed! people use some if they like it; they don't need me to agree or disagree, and don't need the false righteousness of crossfeed forced onto us by @71 dB.

as for you, instead of being dragged into that crap, I suggest you pretend it doesn't exist and go back to doing a great job of showing the VSTs and settings you tried and sharing your experience of them. that way we may discover new VSTs and be interested in trying them ourselves, and come back to share our own subjective impressions. that's the right way to support crossfeed and show interest IMO.


----------



## bigshot

Good! If this is only about stereo headphones, I’m done. Hooray! No more spatiality!


----------



## gregorio (Mar 28, 2019)

Davesrose said:


> [1] I can also do screen grabs of the Dolby Atmos mixing software I saw with the demo series I watched (which has a left-hand column showing available tracks... they are 128 slots).
> [2] It is only you who have claimed that an Atmos mix comprises 128 "channels".
> [3] You continue to obfuscate by trying to reference other stages of sound production with Atmos (IE mixing "1000" channels to "128").
> [4] You say you: " more or less [use] the same terminology as Dolby".  No you haven't!
> [5] Otherwise you wouldn't try to belittle and tell me I don't understand all sources that say Atmos is comprised of 128 tracks.



1. Why call them "slots" if they're tracks?
2. Nope, Dolby has: "_A track in a session can be an audio object_" - A "track in a session" is a channel!
3. So you can do a screen grab of "the Dolby atmos mixing software you saw with the demo series" and I'm the one "obfuscating" the mastering and mixing stages of sound production? You state you "fully understand" the mastering stage but you clearly don't! The term "mastering" in film is shorthand for "print-mastering" and is entirely different from the "mastering" process for music (which again is shorthand but it's shorthand for "pre-mastering"). In film sound, ALL positional (and other audio) information is created during mixing, print-mastering is simply the act of "printing" that mix. In the case of Atmos, there are various "packaging options" for printing the mix precisely because an Atmos mix is independent of tracks/channels!
4. Yes I have! The reason I have is because in professional usage the terms "tracks", "channels", "audio objects" and "slots" are interchangeable but can have different specific meanings depending on context. For the consumer, the term "slots" is perhaps the most appropriate because it avoids the confusion with "channels", "tracks" and speakers. An Atmos mix can comprise up to 128 "slots" which are independent (unrelated) from channels/tracks/speakers.
5. I am refuting your assertions that are based on your misunderstanding of the use of the terms "tracks" and "channels"!



bigshot said:


> Nope. I'm curious what your purpose is for posting stuff like this? Are you just trying to be an edgelord?



If an "edgelord" is someone who refutes false assertions, then yes, I'm just trying to be an edgelord. Not sure why you would be curious about why I'm "posting stuff like this"? Isn't refuting false assertions one of the main purposes of this sub-forum?

G


----------



## Davesrose (Mar 28, 2019)

gregorio said:


> 1. Why call them "slots" if they're tracks?
> 2. Nope, Dolby has: "_A track in a session can be an audio object_" - A "track in a session" is a channel!
> 3. So you can do a screen grab of "the Dolby atmos mixing software you saw with the demo series" and I'm the one "obfuscating" the mastering and mixing stages of sound production? You state you "fully understand" the mastering stage but you clearly don't! The term "mastering" in film is shorthand for "print-mastering" and is entirely different from the "mastering" process for music (which again is shorthand but it's shorthand for "pre-mastering"). In film sound, ALL positional (and other audio) information is created during mixing, print-mastering is simply the act of "printing" that mix. In the case of Atmos, there are various "packaging options" for printing the mix precisely because an Atmos mix is independent of tracks/channels!
> 4. Yes I have! The reason I have is because in professional usage the terms "tracks", "channels", "audio objects" and "slots" are interchangeable but can have different specific meanings depending on context. For the consumer, the term "slots" is perhaps the most appropriate because it avoids the confusion with "channels", "tracks" and speakers. An Atmos mix can comprise up to 128 "slots" which are independent (unrelated) from channels/tracks/speakers.
> ...



I see that you're incapable of forming coherent paragraphs or of carrying on an open dialogue with another member.  This is my last response to you on this subject (also for the benefit of others).  I used the term "slot" for your benefit because YOU are so hung up on the term "track"....which, time and time again, I have quoted Dolby saying Atmos is comprised of: 128 TRACKS.  It's not 128 "channels", and a track and a channel are not the same thing in this context (despite your continued assertions).  Your response has been to claim credentials and say you once mixed in Atmos, which IMO does not make one an authority on Atmos.  I have experience with 3D animation and video production software (and I'm comfortable enough not to have to emphasize claimed experience).  With my experience in media, I can say that my authoring software does have source files that comprise "tracks" of video, audio, even a particular movement cycle of a 3D node object.  For all of these media, a track is a layer in the timeline, and an audio channel is what's mapped to a speaker channel.  We can also have different instances of a composition nested inside another.  They act independently (and if you have to, you can go back and edit a previous stage of project development that gets updated in the other compositions).  I'm sure film audio production is the same, in which the "1000 channels" you've obfuscated with are a different stage that's NOT part of the Atmos stage.  In the demo, I saw a possibility of 128 tracks within that project session.  From my own professional experience, Dolby's terminology of 128 tracks makes sense to me.  Sorry it offends you, but I'll continue to use it!


----------



## 71 dB

I don't have experience with (crossfeed) VSTs. How do they work? I use headphone adapters to do the crossfeed because that way every source is crossfed the same way, no matter if it's computer, CD, SACD, TV or FM radio. I don't know how a VST can be used to crossfeed TV sound. My TV sound goes from my TV to my CD player (Toslink), which operates as a DAC, then from the CD player to my amp (RCA), and then from my amp's B-speaker outputs to my headphone adapter. There is no way to use a VST. I can only use a VST (effect) with a DAW, so I don't understand how people manage to use VSTs with all sources… mind you, I am not that good with computers.

I have the Vox player, which has 3 crossfeed options, but I rarely use the computer to listen to music. I am an old-fashioned CD-player guy into physical formats. So, I have nothing against VST crossfeed. I just don't know anything about them...


----------



## bigshot (Mar 28, 2019)

gregorio said:


> yes, I'm just trying to be an edgelord.



Glad we have that solved!



ironmine said:


> Headphones _*with *_crossfeed sound more like speakers than headphones _*without *_crossfeed.



I'm conducting a poll on this subject... Do you own a good speaker system? If so, can you tell me a little bit about your setup? Is it multichannel? What aspects of crossfeed do you think contribute to sounding more like speakers? What aspects are different? Do you think that sound sources immediately in front of your ears sound the same as ones on the other side of the room?

Feel free to start a new thread for this if you'd like.

By the way, I have a VR headset and have heard virtual headphone surround. Unless I turn my head around I can't tell back from front, and by that time the back is in the front... well, not the front, because it's still a straight line through my head. The distance cues are very primitive, more than an order of magnitude less than the sound if I play my speaker system and get up and walk around the room. And the sound of my speaker system is more than an order of magnitude less than walking around a real nightclub with a live band playing. I imagine that someday there will be progress to make sound reproduction even more natural, but at this point, it's far off. The closest we can get to natural sound is multichannel speaker systems, but I wouldn't even describe that as "natural" sounding. It just has natural spatiality created by the natural space of the room. There are a lot of variables that go into something sounding natural, more than we can control.

I once spoke to a sound engineer who was visiting a local carnival with his kid. He was standing at the top of the "chute" with the carnival games in front of him, and at the end of the aisle was a merry-go-round. People were milling around and carnival barkers were calling out to them. He closed his eyes and listened... and tried to think of how he could reproduce that experience through recording and playback. He started to think about how many channels it would take and how the speakers would have to be laid out... But then he listened more carefully and heard how the sound subtly changed as breezes blew through, and how the sound of the carousel reflected off the structures of the games, and how the people's voices massed and shifted as they turned towards or away from him. He realized that it would be totally impossible to completely reproduce all that. He could only do an approximation.

Recording is an approximation of reality. There are inevitable compromises. Multichannel can offer one level of enhancement to that. Crossfeed another level, a full notch down. Stereo a notch below that. Mono below that... And that is just considering directionality and spatiality... There are also other factors like frequency response, distortion, dynamics, timing, amplitude, etc... Creating a "natural" sound just isn't possible. It isn't even the goal of recording. Speakers can do things with directionality and spatiality that headphones just can't. Headphones are inexpensive and convenient and private in a way that speakers can't match. If you want convenience and "in your head" sound, you use cans. If you want an element of spatiality and directionality, you use a multichannel speaker setup. That's just the way it is. You can't make a raven into a writing desk, nor would you want to.



71 dB said:


> I don't have experience with (crossfeed) VSTs. How do they work? I use headphone adapters to do the crossfeed because that way every source is crossfed the same way, no matter if it's computer, CD, SACD, TV or fm radio. I don't know how VST can be used to crossfeed TV sound.



Instead of running the signal through a dedicated sound processing device, you run the digital out through a computer running a sound app. There are various plugins to the app that can be put in line between the input and output. Usually VSTs are much more sophisticated and flexible than dedicated hardware processing. Generally, you put the VST as the last step in the chain and use the computer's DAC to output it to a simple amp. That way all the switching between sources is upstream.


----------



## Steve999 (Mar 28, 2019)

I had never heard of an edgelord before! I had to look it up! At first I was googling for "edgesword" because my memory had failed me. No one in the history of humanity has ever used the word "edgesword"! Oh sure, you've got double-edged swords, or two sworded edges, or candles burned at both ends, or both candles burned at the end, or whatever, but no edgeswords! I thought to myself, @bigshot has made up a new word!

But alas, no, it was "edgelord," which I had never heard of either. And I looked it up and so now I know what it means. I don't think I'd be a very good edgelord. Once in a while no matter how serious things get I have to make a joke out of it, and then I offend someone, etc. That's as edgelordy as I get.

As far as this proposed survey is concerned:

1) I have a sort of nice speaker set up. It is just expensive enough so I love it but I get no bragging rights.
2) It's two towers, two satellites, a center channel speaker, and two subwoofers (yes, I like bass).
3) Crossfeed in headphones makes headphones sound more like speakers because some of the left channel goes to the right channel and some of the right channel goes to the left channel, so your ears hear stuff from both sides, but in different proportions, sort of like in real life, maybe with some special FX or FR manipulation thrown in; but it's not as complicated as real life, and in the end it doesn't sound close to real life or to loudspeakers in a room. I thought that was all kind of obvious, but 76 pages down the line it seems we have not settled the question. Of course actual headphones sound nothing like actual speakers. I like both. I like crossfeed. All other things being equal, I prefer speakers. If someone has different preferences, finarino!
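In code terms, that mixing is simple enough to sketch. This is a toy example, not any particular plugin; the delay and gain numbers below are made up, and real implementations like the Meier crossfeed also low-pass the fed signal before mixing it in:

```python
# Toy crossfeed: mix a delayed, attenuated copy of each channel into the
# other one. The delay and gain values are illustrative only.
DELAY_SAMPLES = 13    # ~0.3 ms at 44.1 kHz, on the order of an interaural delay
CROSSFEED_GAIN = 0.3  # roughly -10 dB of the opposite channel

def crossfeed(left, right, delay=DELAY_SAMPLES, gain=CROSSFEED_GAIN):
    """Return (left, right) with each channel bled into the other."""
    out_l, out_r = [], []
    for i in range(len(left)):
        # Delayed sample from the opposite channel; zero until the delay elapses.
        bleed_r = right[i - delay] if i >= delay else 0.0
        bleed_l = left[i - delay] if i >= delay else 0.0
        out_l.append(left[i] + gain * bleed_r)
        out_r.append(right[i] + gain * bleed_l)
    return out_l, out_r

# A hard-panned source: signal in the left channel only.
left = [1.0, 0.5, -0.5] + [0.0] * 20
right = [0.0] * 23
l, r = crossfeed(left, right)
# The right ear now hears a quieter, slightly later copy of the left channel.
```

That's the whole trick: proportions and timing, nothing fancier.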

*Now notice, no matter how you answer this survey, you may possibly be attacked by an edgelord of one ilk or another.* If you like crossfeed then it means your speakers are bad. If you don't like crossfeed, it means you are a spatial ignoramus. I don't quite get the logic from points A to points B but there it is. You can't win! Edgelords to the left of us, edgelords to the right! And somewhere there are 128 somethings involved, and it's all very personal.  I guess I'm pretty shallow. (If anyone thinks I am trying to call them an edgelord, and especially if they are an actual edgelord, I am not calling you an edgelord, I am just trying to be funny.)

@71 dB, for something much simpler than a VST, if you have Windows and you have not done so already, you can install Foobar2000 and the Meier variable crossfeed plugin for Foobar. Maybe you already know this. But if not, it will get your feet wet. It might be two or three hours of puzzling over randomly documented freeware installation the first time around. I've tried VSTs and they generally are more trouble than they are worth for what I want. I also use a Behringer DEQ2496 for variable crossfeed by my bedside, and a Behringer Monitor2USB for continuously variable crossfeed from my computer. You can look those up on Amazon. Those are more hardware-based solutions. I do not have expectations of audio nirvana from crossfeed. Then again, I never have expectations of audio nirvana. I have episodes of music nirvana. It can happen on good gear, bad gear, through speakers, through headphones, with crossfeed, without crossfeed, live--imagine! All else being equal, though, for me, music nirvana on a good surround hi-fi setup or through a good set of headphones _with crossfeed_ is the bomb, and sometimes a live performance is even better.

If I want no-compromise perfect fidelity I will go press a key on my piano or pluck a string on my guitar. Whoopee.


----------



## castleofargh

71 dB said:


> I don't have experience with (crossfeed) VSTs. How do they work? I use headphone adapters to do the crossfeed because that way every source is crossfed the same way, no matter if it's computer, CD, SACD, TV or fm radio. I don't know how VST can be used to crossfeed TV sound. My TV sound goes from my TV to my CD player (toslink) which operates as a DAC and then from CD player to my amp (RCA) and then from my amp B-speakers to my headphone adapter. There is no way to use a VST. I can only use VST (effect) with a DAW so I don't understand how people manage to use VST with all sources… mind you I am not that good with computers.
> 
> I have Vox player that has 3 crossfeed options, but I use computer rarely to listen to music. I am an old fashioned CD player guy into physical format. So, I have nothing against VST crossfeed. I just don't know anything about them...


computers have become a source of music and, without a doubt, the ultimate way to manage it. many people rip their CDs the day they get them and rarely open the box again. and that's if they still purchase CDs.
for crossfeed specifically on the computer, you have VSTs or other forms of components that you can integrate into your audio player (it depends on the player!). @ironmine has posted many examples in this topic, from basic crossfeed to more advanced solutions that come closer to the spirit of fine-tuning based on some HRTF notion, or that even add room effects where you can set too many parameters to count. depending on how they're set, they can easily feel like some lame surround effect from 20 years ago with way too much unrealistic reverb, but as with anything, learning how to set it for ourselves is key to a good subjective result.
but using a VST in the audio player limits the effect to that audio player. I personally use a "virtual cable" that reroutes all the sounds of the computer (or just the ones I want) into a VST host, which is really a big funny sandbox where you can chain all the effects you want in any way you happen to enjoy. @ironmine has shown many screenshots of such hosts.
one last option for "crossfeed" is to have the 4 impulses of some HRTF reference at a specific angle (left and right ear for the speaker on the left, and left and right ear for the speaker on the right, so 4 impulses), and you can then have those used for convolution on the stereo signal and mixed with the so-called "true stereo" process. that stuff can be applied system-wide on the computer through virtual cables, or with something like Equalizer APO (free) that offers true stereo. the hurdle here being that you need to procure the impulses. you could use some generic stuff, or even make your own super basic stuff to get what you have with your analog crossfeed. but you could also invest in a pair of cheap binaural microphones (the type you insert in your ears), or try to DIY something with small cheap capsules (last time I tried soldering something that small, I destroyed everything, so that's not for me, at least not with my current soldering gear). anyway, recording sweeps and then creating the impulses for convolution, that's work, and not all the easy solutions for that are free (maybe you could ask here if somebody would be willing to do it for you once you have the sweeps). the benefit would be that you'd end up with the music changed the way your ears (well, the mics at your ears) get it in your room with your speakers. and you'll have to take my word for it on this: it is more realistic than generic crossfeed. but obviously it's quite a bother and it requires custom measurements, which in itself can be a good enough reason to use typical crossfeed instead and appreciate the convenience. different people have different priorities.
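to make that structure concrete, here's a toy sketch of the 4-impulse "true stereo" mix in python. the impulses below are trivial placeholders (a real set would come from binaural measurements of your own head); only the wiring matters: each input channel is convolved with an ipsilateral and a contralateral impulse, then the results are summed per ear.

```python
def convolve(signal, impulse):
    """plain direct-form convolution, output length len(signal)+len(impulse)-1."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def true_stereo(left, right, h_ll, h_lr, h_rl, h_rr):
    """4-impulse mix: h_xy is the path from source channel x to ear y."""
    ear_l = [a + b for a, b in zip(convolve(left, h_ll), convolve(right, h_rl))]
    ear_r = [a + b for a, b in zip(convolve(left, h_lr), convolve(right, h_rr))]
    return ear_l, ear_r

# placeholder impulses: the direct path passes straight through, the
# opposite-ear path is delayed one sample and halved (purely illustrative).
h_ll = h_rr = [1.0, 0.0]
h_lr = h_rl = [0.0, 0.5]

left = [1.0, 0.0, 0.0]   # a single click, hard-panned left
right = [0.0, 0.0, 0.0]
ear_l, ear_r = true_stereo(left, right, h_ll, h_lr, h_rl, h_rr)
# the right ear gets a delayed, quieter copy of the click, like crossfeed,
# except here the "EQ" is whatever the measured impulses encode.
```

swap in measured impulses and the same wiring gives you the personalized version.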
as for the issue of wanting to use other devices (TV, CD player, etc.), I'm confident that there must exist some solutions from Dirac, miniDSP or whoever else, it has to be. I remember reading about this one a while back: https://www.minidsp.com/products/dsp-headphone-amp/ha-dsp. no idea if it's any good, or what exactly the crossfeed aspect is and how well it can be set. but if this exists, other solutions probably exist too. I've used a computer as the source of everything for almost 20 years, so I'm really not the guy who can help with that specific matter, but it's a big forum; there are probably others who wondered about doing this and more, who found or made their own solution with a Raspberry Pi or whatever, just like you did with an analog circuit. obviously if you only plan to replicate what your circuit does, then there is no need and no point in looking for a digital alternative.


----------



## gregorio

Davesrose said:


> I’m sure that film audio production is the same in which the “1000 channels" you've obfuscated with is a different stage that's NOT part of the Atmos stage.



Of course you're sure. What really happens is that we mix all the ~1,000 tracks together to create a mix and THEN (apparently in the mastering stage?) we pan/position all the individual sounds within that mix using Dolby Atmos, and you're sure of this because of your experience with 3D animation and video software.



bigshot said:


> Glad we have that solved!



Yep, it's all solved:

I'm an edgelord.
Adding more speakers/channels gives more options for locating sounds within a soundfield.
Orchestras recorded with 3 mics in 3.0 are much better than orchestras recorded in stereo using many mics.
We apply Dolby Atmos after we've already completed a film mix in 9.1.4 (or whatever) using 128 tracks.
Crossfeed on headphones is always objectively better for everyone (except apparently idiots).
And, it's "solved" that this is the "Complete BS" forum! Hope you're happy now?

G


----------



## bigshot

DEE-LIGHTED!


----------



## 71 dB

I'm a Mac user. Sometimes I use Spotify to explore music. I also make my own music on my Mac, and for that I write my own Nyquist plugins for Audacity, where I process the "raw" tracks I export from GarageBand. If I listen to a single music track on my Mac, I just find the file, right-click and choose "Quick Look."


----------



## bigshot

I don't use Audacity. Is it able to monitor using real time plugins? That is how you do signal processing on the fly.


----------



## 71 dB

bigshot said:


> 1. I don't use Audacity.
> 2. Is it able to monitor using real time plugins?
> 3. That is how you do signal processing on the fly.


1. Well, what do you use?
2. No.
3. Ok, but monitoring with real-time plugins is not possible.


----------



## jason41987

i have some schematics for some pretty simple crossfeed circuits.. i may get a small perfboard and some bits to test drive some of the variations of the cross-feed circuit and see if i find something i like that can replace the surround sound audio i have now.. if i can, i'll be building the crossfeed (with a toggle switch) into an amp.. i was originally thinking objective2, but i am looking at some balanced options, and possibly tubes for such a project, and that'll probably get paired with an ODAC... if i cannot get the desired crossfeed sound that i want with circuitry, then i'll scrap the ODAC idea and buy another 7.1 surround sound adapter (which does the crossfeed thing) and use that as the DAC in a DAC/AMP combo


----------



## WoodyLuvr

castleofargh said:


> I personally use a "virtual cable", that will reroute all the sounds of the computer(or just the ones I want), into a VST host, which is really a big funny sandbox where you can chain all the effect you want in any way you happen to enjoy. @ironmine has showed many screenshots of such host


What specific "crossfeed" oriented VST plugin do you typically use in your chain these days?

I am still experimenting with my Meier Crossfeed Foobar Plugin (along with replay gain) and still remain baffled by it... battling personal bias; poor hearing; unrepeatable listening sessions where I believe I have figured it out; etc.


----------



## castleofargh

WoodyLuvr said:


> What specific "crossfeed" oriented VST plugin do you typically use in your chain these days?
> 
> I am still experimenting with my Meier Crossfeed Foobar Plugin (along with replay gain) and still remain baffled by it... battling personal bias; poor hearing; unrepeatable listening sessions where I believe I have figured it out; etc.


I don't use one exactly. I'm relying on impulse responses and convolution to apply EQ/delay from the impulses, and then mixing the signal for a result that's basically crossfeed with a very customized EQ. I would easily say that it works better for me than a more standard Xfeed, but the secret sauce (the impulses) being entirely dependent on your own HRTF, that makes finding something right quite challenging if you don't have binaural microphones and the patience to learn what to do with them. @Joe Bloggs has shared something he made; some adore it, some don't, as it should be. and @jaakkopasanen even made(?) and shared a little app (I still haven't tried it, but it's on my to-do list; one day, I swear, I will) to do deconvolution and all that.

to simplify, it's a step toward actual HRTF measurements, but usually limited to stereo (not that it has to be) and a fixed speaker position (no head tracking). if well done, it should give something very close to what an objectively ideal Xfeed would be for a given listener.


when I was using some Xfeed VSTs, I would sit in front of my speakers and try to set the VST so that the full-left or full-right instruments would feel like they were coming from the general direction of the speaker. if, set that way, I felt ok over time, then all was good; and if I felt weird, or on rare occasions felt it was even more tiring than using nothing, then that was my signal that something was really wrong in the VST or in my settings. but it's hard to be very confident about the settings, as the brain has a mind of its own and tends to try and move stuff back to full 180° panning (at least it does in my case, probably because of decades of conditioning that clearly associated massive panning with headphone use, or maybe there is some other reason I don't understand? IDK).


----------



## ForSerious

I tried to read everything, but it got too long and not always in directions I am interested in. There's some really good information in here. Thanks guys.
To the original question: I didn't care much about music until the moment I put on a pair of headphones from the dollar store. The range on them was as bad as it gets. What hooked me was how the sound seemed to be inside my head. It was like I entered a world made entirely of music.
In later years, I made my own 4-speaker surround sound system. The way I had it set up, it sounded more like headphones than anything else. Apparently I love hearing instruments and vocals all around me.
My final answer: Crossfeed reduces everything I enjoy about music, so I don't and won't use it.

I find things like clipping, dynamic compression, cars passing and central air systems to be way more distracting than my eardrums vibrating from sounds isolated to only one channel.


----------



## ironmine

ForSerious said:


> Apparently I love hearing instruments and vocals all around me.
> My final answer: Crossfeed reduces everything I enjoy about music, so I don't and won't use it.
> 
> way more distracting than my eardrums vibrating from sounds isolated to only one channel.



So, do you like "instruments and vocals all around you" OR do you like "sounds isolated to only one channel" ?

These are two completely contradictory statements.

When sounds are "all around you", you hear them with BOTH ears, but you hear them differently with each ear.

This is exactly what crossfeed tries to achieve.


----------



## ForSerious

ironmine said:


> So, do you like "instruments and vocals all around you" OR do you like "sounds isolated to only one channel" ?


Huh? Read it again. I never said I liked sound isolated to one ear. I just said I don't mind the vibration that comes from that. If a recording happens to have it, I enjoy it like that.
When I tried some of the recommended crossfeed examples mentioned in past posts, I did not like how it seemed to put all the sound more in front of me.


----------



## ironmine

You've got used to enjoying the wrong things, my friend.
This is what happens to the brains of people who have been subjected to weird things for prolonged periods of time.
Even North Koreans enjoy the brutal rule of their fat ugly rocket man. But that does not make it right.


----------



## Davesrose (Jul 29, 2019)

There's a small minority of my music that has hard pans to just one channel.  Most stereo recordings do have some bleed between the channels for each instrument/track.  My iCan headphone amp does have a crossfeed knob with several increments.  Occasionally I've found a track that sounds livelier with it... but most times I have it off, as I usually find the FR isn't as good (and there's less front imaging).


----------



## ForSerious

ironmine said:


> You've got used to enjoying wrong things, my friend.


You'd probably be more appalled at how my 4 speaker surround sound setup sounded. The best way I can describe it is: Invert the wave form of one channel, mix the two together canceling out the center channel. That yields a mono sound, but that's not quite how it worked. The back speaker system had a setting that would make that effect but still keeping much of the stereo separation. Anyway, no one else I shared it with seemed to care for it. I don't have the components for that system anymore, so headphones are the next best thing.


----------



## ironmine

Davesrose said:


> There's a small minority of my music that has hard pans just to one channel.  Most stereo recordings do have some bleed between both channels for each instrument/track.  My iCan headphone amp does have a crossfeed knob that has several increments.  Occasionally I've found a track that sounds livelier with it....but most times I have it off, as I find usually the FR isn't as good (and there's less front imaging).



There are many $hitty hardware and software implementations of the crossfeed effect. Use the best: Meier crossfeed, 112dB Redline Monitor, ToneBoosters Isone.

Audiophiles are very stubborn. All the actual progress in audio or signal processing comes against their fierce resistance. They miss all the technological revolutions and breakthroughs.


----------



## ironmine

ForSerious said:


> You'd probably be more appalled at how my 4 speaker surround sound setup sounded. The best way I can describe it is: Invert the wave form of one channel, mix the two together canceling out the center channel. That yields a mono sound, but that's not quite how it worked. The back speaker system had a setting that would make that effect but still keeping much of the stereo separation. Anyway, no one else I shared it with seemed to care for it. I don't have the components for that system anymore, so headphones are the next best thing.



You should get married.


----------



## Davesrose (Jul 29, 2019)

ironmine said:


> There are many $hitty hardware or software implementations of crossfeed effect. Use the best: Meier crossfeed, 112dB Redline Monitor, ToneBoosters Isone.
> 
> Audiophiles are very stubborn. All the actual progress in audio or signal processing comes against their fierce resistance. They miss all the technological revolutions and breakthroughs.



No one has said the 3-D filter on the iCan Pro is $hitty.  Good to know your viewpoint: anyone who has a difference of opinion is stubborn and doesn't know about technology.


----------



## ForSerious

ironmine said:


> You should get married.


I did. That's why I don't have that system any more.
The reason I shared my story in the first place is because my road to becoming an audiophile is not one I specifically intended to take. Over the years a subtle urge for more quality crept in, until I hit the wall of finding that even master-recording-quality songs still have clipping and overdone dynamic compression. Crossfeed (or actually the lack thereof), in comparison to those beasts, is something very small to me. Probably because 95% of the music I listen to was recorded after 1995.


ironmine said:


> Audiophiles are very stubborn. All the actual progress in audio or signal processing comes against their fierce resistance. They miss all the technological revolutions and breakthroughs.


I for one was so excited for 5.1 surround sound to take off. If done how I imagined it, there would be instruments coming from all the speakers! Yes there are a few recordings made like that, but mostly they're all just made to mimic concert settings or optimal acoustical venues. Turns out people pay for stereo more than surround sound. Soundbars and mono bluetooth speakers seem to be the next progression in audio 'advancement'. Who can afford to make hardware that most people will never buy?


----------



## ironmine

ForSerious said:


> until I hit the wall of finding that even master recording quality songs still have clipping and  overdone dynamic compression.



To declip and restore the dynamics, you can use VST plugins - *Accusonus ERA De-Clipper* and/or *Thimeo Stereotool* (the Declipping and Natural Dynamics options in it). I keep them in my VST chainer in deactivated form; they just pass the sound through unchanged. But if I come across clipped or dynamically compressed music, I switch them on and it greatly improves the sound.
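To give a rough idea of what the declipping part does under the hood (the plugins above are far more sophisticated; this is a toy sketch, not how ERA actually works), a naive declipper just finds runs of samples stuck at full scale and redraws them from the clean neighbors:

```python
def declip(samples, threshold=0.99):
    """Replace runs of samples at/above the clip threshold (absolute value)
    with a straight line between the neighboring clean samples.
    Real declippers extrapolate the original waveform; this only smooths."""
    out = list(samples)
    n = len(out)
    i = 0
    while i < n:
        if abs(out[i]) >= threshold:
            start = i
            # Walk to the end of the clipped run.
            while i < n and abs(out[i]) >= threshold:
                i += 1
            lo = out[start - 1] if start > 0 else (out[i] if i < n else 0.0)
            hi = out[i] if i < n else lo
            run = i - start
            for k in range(run):
                out[start + k] = lo + (hi - lo) * (k + 1) / (run + 1)
        else:
            i += 1
    return out

# A waveform whose peak was flattened at full scale:
clipped = [0.0, 0.5, 1.0, 1.0, 1.0, 0.5, 0.0]
fixed = declip(clipped)
# The clipped run is replaced with values interpolated from the clean
# neighbors; a real declipper would rebuild the lost peak above full
# scale instead of drawing a line through it.
```

The commercial tools model the missing peak instead of flattening it further, which is why they sound good and this wouldn't.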

You can also use Foobar to convert files from FLAC to FLAC while applying these effects.


----------



## ForSerious

ironmine said:


> To declip and restore the dynamic compression, you can use VST plugins - *Accusonus ERA De-Clipper* and/or *Thimeo Stereotool*.


I have an old version of Stereo Tool. Is Accusonus ERA just as good, in your opinion?


----------



## ironmine (Jul 31, 2019)

Yes, it's surprisingly (for me) good and easy to use. 

I also noticed that the type 2 algorithm does not necessarily sound better than the simpler type 1.


----------



## ztwindwalker

I have an SPL Phonitor 2, and it has a 3-parameter speaker simulation function (called SPL Matrix), which can set crossfeed, speaker angle, and center level. Since most music was mixed for playback on speakers, not headphones, I always keep the matrix on.
IMO that produces a more natural and "correct" sound, especially when playing classical music recorded with A-B mics, or other well-mixed content.


----------



## gregorio

ztwindwalker said:


> [1] IMO that produce a more natural and "correct" sound,
> [2] especially when playing an A-B MIC recorded classic music or a well mixed content.



1. That seems to depend largely on personal perception. To my perception it makes some aspects more "correct" and some less so and it usually doesn't sound more natural. On balance, I haven't tried a crossfeed/simulation solution that works well enough for me to have one in my playback chain.

2. I'm not quite sure what you mean here? AFAIK, there are no A-B mic recorded commercial classical releases.

G


----------



## 71 dB (Oct 3, 2019)

gregorio said:


> AFAIK, there are no A-B mic recorded commercial classical releases.
> 
> G



One of my biggest pet peeves is the lack of information about how a recording session of classical music (and why not other genres too) was made. Typically the information given is limited to the date of recording, the name of the sound engineer and the brand/type of microphones, but almost never is there information about the mic setup. I believe at least the BIS label has used an A-B mic setup, but I may be mistaken. If you have professional insight into these things, I am all ears to learn.


----------



## bigshot

The classical recordings from the mid 50s on RCA might have used something like an A-B setup, then they started using three mikes (L, C, R). I don't think it's at all common today. No reason to not take control with more complicated setups when we have so many tracks and such sophisticated mixing boards to work with.


----------



## ironmine

Another crossfeed in the form of a VST plugin worth checking out: *Airwindows Monitoring*

Just move the slider to the right; there are two settings: cans A and cans B.


----------



## 71 dB

ironmine said:


> Another crossfeed in the form of VST plugin, it's worth checking out: *Airwindows Monitoring*
> 
> Just move the slider to the right, there are two settings: cans A and cans B.



Chris from Airwindows is a very interesting "out-of-the-box" thinker of audio processing. His idea of using Benford's law in dithering totally blew me away.

His cans A and cans B are clearly designed for monitoring purposes rather than headphone listening. In my opinion, adjusting the crossfeed level for each individual recording is important in listening in order to optimize the spatiality to natural levels, and as we know every recording has its own spatiality, with different levels of excessive spatiality. For some of the music samples even the stronger "cans B" processing was imo too weak to remove all the excessive spatiality. So, I'd say this plugin of his is great for monitoring purposes (work), but for headphone listening (enjoyment) there are perhaps better options out there.



----------



## gregorio

71 dB said:


> [1] One of my biggest pet peeves is the lack of information about how a recording session of classical music (and why not other genres too) was made. Typically the information given is limited to date of recording, name of the sound engineer and the brand/type of microphones, but almost never is there information about the mic setup.
> [2] I believe at least BIS label has used A-B mic setup, but I can be mistaken. If you have professional insight into these things, I am all ears to learn.



1. This "pet peeve" unfortunately demonstrates a lack of understanding of the issues involved in creating recordings. The information you are after is far more likely to be misinterpreted/misunderstood than actually be useful, because mic setup is only one part of the equation and, in most music genres, one of the least important/relevant parts of the equation! Even in those genres where it is particularly important, most classical music for example, the placement and balancing of the mic inputs within the mix is more important. Furthermore, in such genres, the mic setup can/will vary considerably according to a number of variables: The music itself, the orchestra/ensemble, the recording venue, the delivery format of the product and the artistic desires/intentions of the artists and producer. It's a bit like asking for a car's engine size: On its own that information is largely worthless/misleading, it tells us little/nothing about the car's performance. It's only useful information when COMBINED with all the other relevant information (vehicle weight, aerodynamics, engine tuning/power output, power train delivery, etc.). In the case of recordings, all this other relevant information is impractical to publish in, say, a CD booklet; it's too much to fit, in the case of some recordings it's never written down (and can't be), it's often not desirable to publish from a marketing or commercial perspective and, unlike with cars, pretty much no consumers would appreciate/understand what this other "relevant information" actually means!

2. Bis have produced some very good recordings but never, as far as I'm aware, with an A-B mic setup. Sure, they *sometimes* employ an A-B pair, as do many/most other labels, but only as part of a much larger array of mics. I believe that Bis' early releases (pre-CD era) did only use a mic pair but I believe that was a single stereo mic or a coincident pair (such as a Blumlein pair), not an A-B pair. For at least 3 decades or so though, Bis have used a multi-mic setup, as do all the other labels (going back as far as the 1950's), which in the case of an orchestra recording would typically be 20+ (as many as about 50) mics. Here's the mic setup for a Bis recording of Mahler's 5th: http://kazuyanagae.com/20180815BIS/16BISRecordsMinnesotaOrchestraSetup.pdf, using an A-B pair for the left and right surround channels, a not uncommon sort of setup.



bigshot said:


> The classical recordings from the mid 50s on RCA might have used something like an A-B setup, then they started using three mikes (L, C, R).



I can't say for sure that RCA didn't use just an A-B setup in the 1950's but I think it's very unlikely, as in most circumstances it would give inferior results to other stereo mic setups (X-Y, M-S or some other near-coincident pair). By the mid/late 1950's most were using a Decca tree, some other more than two mic array or (like EMI) a near-coincident pair, plus outriggers. Some of the experimental stereo recordings made in the USA in the late 1920's and early 30's used just an A-B pair but I don't think those recordings were ever released.



71 dB said:


> In my opinion adjustment of crossfeed level for each individual recording is important in listening in order to optimize the spatiality on natural levels and as we know every recordings has it's own spatiality, different levels of excessive spatiality.



You can't just keep repeating the same false facts in this sub-forum! Pretty much no commercial music recordings have natural "spatiality", crossfeed CANNOT alter a recording and magically optimize/make it "natural" and there is no "excessive spatiality", only potentially the opposite, a lack of "spatiality" when listening to some recordings designed for speakers using headphones. How many times? jeez!

G


----------



## 71 dB

gregorio said:


> 1. This "pet peeve" unfortunately demonstrates a lack of understanding of the issues involved in creating recordings. The information you are after is far more likely to be misinterpreted/misunderstood than actually be useful information because mic setup is only one part of the equation and in most music genres, one of the least important/relevant parts of the equation! Even in those genres where it is particularly important, most classical music for example, the placement and balancing of the mic inputs within the mix is more important. Furthermore, in such genres, the mic setup can/will vary considerably according to a number of variables: The music itself, the orchestra/ensemble, the recording venue, the delivery format of the product and the artistic desires/intentions of the artists and producer. It's a bit like asking for the information of a car's engine size: On it's own that information is largely worthless/misleading, it tells us little/nothing about the car's performance. It's only useful information when COMBINED with all the other relevant information (vehicle weight, aerodynamics, engine tuning/power output, power train delivery, etc.). In the case of recordings, all this other relevant information is impractical to publish in say a CD booklet; it's too much to fit, in the case of some recordings is never written down (and can't be), is often not desirable to publish from a marketing or commercial perspective and unlike with cars, pretty much no consumers would appreciate/understand what this other "relevant information" actually means!
> 
> 2. Bis have produced some very good recordings but never, as far as I'm aware, with an A-B mic setup. Sure, they *sometimes* employ an A-B pair as do many/most other labels but only as part of a much larger array of mics. I believe that Bis' early releases (pre CD era) did only use a mic pair but I believe that was a single stereo mic or a coincident pair (such as a Blumlein pair) not an A-B pair. For at least 3 decades or so though, Bis have used a multi-mic setup, as do all the other labels (going back as far as the 1950's), which in the case of an orchestra recording would typically be 20+ (as many as about 50) mics. Here's the mic setup for a Bis recording of Mahler's 5th http://kazuyanagae.com/20180815BIS/16BISRecordsMinnesotaOrchestraSetup.pdf, using an A-B pair for the left and right surround channels, a not uncommon sort of setup.



Very interesting, thanks! The kazuyanagae link looks fascinating. My lack of understanding comes from the fact that I have never seen this stuff done in practice. I have only studied the theory of it, and the theory just states the different mic setups, not when, where and how these setups are used.




gregorio said:


> You can't just keep repeating the same false facts in this sub-forum! Pretty much no commercial music recordings have natural "spatiality", crossfeed CANNOT alter a recording and magically optimize/make it "natural" and there is no "excessive spatiality", only potentially the opposite, a lack of "spatiality" when listening to some recordings designed for speakers using headphones. How many times? jeez!
> 
> G



Facts or false facts, I stated _my opinion_. I have _my opinions_ no matter what you think the facts say. I change _my opinions_ when I see it's necessary, not because you have a different idea of what the facts are. I'm not fighting anymore and you can have _your opinions_.


----------



## gimmeheadroom (Oct 4, 2019)

jasonb said:


> I thought it might be a nice idea to see who likes or dislikes crossfeed.
> 
> Please vote and share your opinion either way. I wanna hear what people here have to say about it one way or another.



For me it depends on the individual track. Most often I don't like it. I would say I almost never prefer it; I consider it more of a tool, like a parametric EQ, to deal with screwed-up recordings.

The only amp I have that has this feature is a Meier Corda Jazz ff. I have no idea how other implementations sound or whether they help or not.


----------



## gregorio

71 dB said:


> [1] The kazuyanagae link seems very interesting! Thanks!
> [2] My lack of understanding comes from the fact I have never seen this stuff done in practice. I have only studied theory of it and the theory just states the different mic setups, not when, where and how these set ups are used.



1. It's maybe somewhat interesting but it doesn't really tell us much. For example it tells us that we have a 3 mic main array, that each of the orch sections is spot mic'ed (with one or more mics) and that they're using an A-B pair for the ambience/surrounds BUT, as this is pretty much how every commercial orchestral recording has been made for more than 30 years and not much different from the majority of commercial orchestral recordings going back nearly 70 years, what does this information really tell us? It's a bit like drawing a crude diagram of a car with 4 wheels... It doesn't really tell us anything because nearly all cars have 4 wheels! Maybe it would be interesting to someone who doesn't know that almost all cars have 4 wheels? There are some vaguely interesting details in there, such as the model of mics employed, and some of their polar pattern orientations seem a bit unusual, but this is of very limited use because we don't know the height of the mics and, more importantly, we don't know the balancing between them, and of course we can't know that because it's likely to change anywhere from a few times to almost continually.

2. And that's because "the theory" provides little more than clues/suggestions on when, where and how these setups are used. All the variables I mentioned previously are never identical and therefore the mic setups vary and the balancing/mixing of those mic inputs always varies. The theory to which you seem to be referring, as found in most books/articles, just lists and explains the basic different stereo pairs, main arrays and soundfield mics, not how they are employed in practice. With the exception of a relatively tiny number of commercial binaural music recordings, these stereo pairs and arrays are virtually never used in isolation, in any genre of music! Many/Most audiophiles seem to falsely think we just choose a basic mic/stereo pair which provides the most accurate capture of the sound waves at a particular listening point. And indeed, that is pretty much what recording engineers did, up until the 1950's when the engineers and the labels for whom they worked found a better approach which consumers voted for with their wallets. Ultimately, although it involves a great deal of science, recording is an art. All this is typically less true with rock/popular genres though, where it's mostly all close mic'ed with a single mono mic and nearly all the spatial information is created/added later, during mixing.



71 dB said:


> Facts or false facts, I stated _my opinion_. I have _my opinions_ no matter what you think the facts say. I change_ my opinions_ when I see it's necessory, not because you have different idea of what the facts are. I'm not fighting anymore and you can have _your opinions_.



1. You are perfectly entitled to your opinion. For example, you would be entitled to hold the opinion that 1 + 1 = 3; however, you should expect that if you post that opinion in a science forum, it will be refuted!  You may (and obviously do) prefer the effect of crossfeed but it does not "_optimize the spatiality on natural levels_" and, while "_we_" do know that "_every recordings has it's own spatiality_", "_we_" do NOT know that every recording has "_different levels of excessive spatiality_", because "excessive spatiality" is a term you've invented to describe your personal perception and NOT something that actually exists!! How many times?

G


----------



## 71 dB

The shadow effect of the human head at bass frequencies is about 0-3 dB. This should be a scientific fact. That's the "natural" range of ILD at bass frequencies. This range increases with frequency, because the shadow effect gets stronger, so that at high frequencies the "natural" range of ILD is about 0-30 dB. This is a scientific fact that can be verified by calculating how a ball of similar size to a human head affects sound waves, and it is verified by HRTF measurements. So, "optimizing" spatiality is about scaling, for example, an ILD of 0-7 dB at 100 Hz into the 0-3 dB range. The closer the original ILD is to natural levels, the less we need to "scale" it, and vice versa. Sure, making psychoacoustically exact calculations is very difficult, but that doesn't mean we can't do calculations at all using simple models. I set the crossfeed level based on what sounds best (most natural), so I always get the best result. Analysis of my choices of crossfeed level against the technical channel separation of the recordings shows a consistent relationship, so I have very little reason to think my reasoning doesn't work, because it works for me. Apparently it doesn't work for some other people. I can't help that.
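To make the scaling idea concrete, here is a toy, level-only model of what mixing in an attenuated opposite channel does to ILD. This is my own simplification (no delay, no lowpass filtering, no frequency dependence), not any particular crossfeed implementation:

```python
import math

def ild_after_crossfeed(ild_in_db, xfeed_db):
    """Level-only toy model: each ear gets its own channel at full level
    plus the opposite channel attenuated by xfeed_db (a negative number).
    Returns the resulting interaural level difference in dB."""
    g = 10 ** (xfeed_db / 20)      # crossfeed gain, e.g. -10 dB -> 0.316
    r = 10 ** (-ild_in_db / 20)    # level of the quieter channel (louder = 1)
    return 20 * math.log10((1 + g * r) / (r + g))
```

With roughly -10 dB of crossfeed this squeezes an input ILD of 7 dB down to about 3.5 dB, and caps even a hard-panned signal at about 10 dB - the kind of compression of the ILD range described above, minus all the frequency dependence a real crossfeed has.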


----------



## bigshot (Oct 4, 2019)

I looked up some info on the first stereo recording. It was produced by David Hall and used three microphones. Two of them fed a stereo recorder and the third one in the middle fed a separate mono recorder. That gave them a stereo and a mono master. Living Stereo recorded with three mikes too, but they were recorded to four track and the center channel was used in the mix to beef up the phantom center. Both RCA Living Stereo and Mercury Living Presence were recorded this way. The mix required a delicate balance to create optimal depth and space - too much and the sound went flat. The recent RCA SACDs used these masters to create a 3.0 version, but that wasn't what the engineers were intending in the first place.

For Cinerama, the mike placement was determined based on the layout of the speakers in the auditorium. They actually recorded sound three dimensionally and then aurally "projected" it into the theater. I've heard a few of these recordings adapted to 5.1 and they sound stunning.


----------



## aukhawk

71 dB said:


> One of my biggest pet peeves is the lack of information about how a recording session of classical music (and why not other genres too) was made. Typically the information given is limited to date of recording, name of the sound engineer and the brand/type of microphones, but almost never is there information about the mic setup. I believe at least BIS label has used A-B mic setup, but I can be mistaken. If you have professional insight into these things, I am all ears to learn.



I think this link is an interesting read:
https://www.soundonsound.com/techniques/bbc-proms
It describes, in fairly strategic terms, the mic setup for the annual Proms season in the Royal Albert Hall. This appears to have evolved steadily and incrementally ever since BBC stereo broadcasting started in the '60s.  The RAH is notoriously problematic (being a large circular domed building) and, for example, it's interesting to read that the engineers sometimes achieve such a dry sound, in this reverberant space, that they have to feed digital reverb back into the mix.

Regarding RCA and Mercury, AIUI in the late '50s they used three spaced omni mics, recording initially onto a 3-track tape machine or sometimes onto 3-track 35mm film magnetic stock (which runs a bit faster than a 15 ips tape so in theory has better HF response).  These latter can still be distinguished by a sprocketed image on the CD sleeve art (on Mercury recordings).  Whatever they did, the results are often quite stunning - lacking only the fullest extremes of frequency range, but the amount of inner orchestral detail they captured with this setup is amazing. 

Regarding BIS - didn't that label sometimes have a message on the sleeve about "no compression was used in this recording" or was that some other label, I forget.  Whoever it was, I regard this claim as disingenuous, and think it almost incredible that any commercial orchestral recording could actually be produced "without compression".  I would think any such recording is sub-optimal, for normal domestic listening. There are several stages in the process where compression can be applied, so probably by avoiding using compression at one major stage - the mixdown stage for example - they could then make that statement by conveniently ignoring the other less obvious stages.


----------



## bigshot

I have some BIS CDs that I can't listen to without reaching for the volume all the time. Not my favorite label, but they do have some really good Sibelius.


----------



## ironmine

71 dB said:


> Chris from Airwindows is a very interesting "out-of-the-box" thinker of audio processing. His idea of using Benford's law in dithering totally blew me away.



What do you think about his use of Benford's Law in dithering? Which one of Chris's dithering plugins is based on this principle? He's got so many dithering plugins on his website.


----------



## 71 dB

bigshot said:


> I have some BIS CDs that I can't listen to without reaching for the volume all the time. Not my favorite label, but they do have some really good Sibelius.



I have, among others, J. S. Bach's cantatas from BIS, 55 discs in all (the latter half of them SACDs), and they work well since baroque music is less dynamic than orchestral music of the romantic era. Yes, BIS has put a lot of effort into their Sibelius, but I am not into Sibelius (even if I am a Finn) so I don't have any of those.



ironmine said:


> What do you think about his use of Benford's Law in dithering? Which one of Chris's dithering plugins is based on this principle? He's got so many dithering plugins on his website.



Using Benford's Law in dithering is a brilliant idea because it's a mathematical rule of nature, so the result is forced to be natural and the dither kind of hides behind the sound. Not Just Another Dither is the one which uses the Benford's-law "realness" calculations.
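For reference, Benford's law itself is simple to state and check. How Chris maps it onto dither weighting inside Not Just Another Dither is his own design and not reproduced here; this just demonstrates the distribution his plugin leans on:

```python
import math
from collections import Counter

def benford_pmf():
    """Benford's law: P(leading digit = d) = log10(1 + 1/d), d = 1..9."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit_freqs(numbers):
    """Observed leading-digit frequencies of a list of positive integers."""
    counts = Counter(int(str(n)[0]) for n in numbers)
    total = sum(counts.values())
    return {d: counts.get(d, 0) / total for d in range(1, 10)}
```

Scale-free data such as the powers of two land very close to the law's prediction of about 30.1 % of values starting with the digit 1.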


----------



## 71 dB

aukhawk said:


> I think this link is an interesting read:
> https://www.soundonsound.com/techniques/bbc-proms
> it describes in fairly strategic terms, the mic setup for the annual Proms season in the Royal Albert Hall. This appears to have evolved steadily and incrementally ever since BBC stereo broadcasting started in the '60s.  The RAH is notoriously problematic (being a large circular domed building) and for example it's interesting to read that the engineers sometimes achieve such a dry sound, in this reverbrant space, that they have to feed digital reverb back in to the mix.
> 
> ...



Thanks for the link! I'll read it when my headache goes away…

…so a lot of mics when recording large orchestral works. How about piano sonatas? Chamber music? I'm pretty sure the amount of mics drops significantly.


----------



## bigshot

I think Gregorio outlined the way solo piano is miked a couple of years ago. It was interesting.


----------



## gregorio (Oct 6, 2019)

aukhawk said:


> [1] I think this link is an interesting read: https://www.soundonsound.com/techniques/bbc-proms
> it describes in fairly strategic terms, the mic setup for the annual Proms season in the Royal Albert Hall. This appears to have evolved steadily and incrementally ever since BBC stereo broadcasting started in the '60s.
> [2] The RAH is notoriously problematic (being a large circular domed building) and for example it's interesting to read that the engineers sometimes achieve such a dry sound, in this reverbrant space, that they have to feed digital reverb back in to the mix.
> [3] Regarding BIS - didn't that label sometimes have a message on the sleeve about "no compression was used in this recording" or was that some other label, I forget. Whoever it was, I regard this claim as disingenuous, and think it almost incredible that any commercial orchestral recording could actually be produced "without compression". I would think any such recording is sub-optimal, for normal domestic listening.



1. Thanks for that. I worked with the BBC OB unit at the Proms a couple of times in the 1990's but at that time it was just stereo. I didn't know they were now doing 4.0; the situation has changed quite a bit due to HDTV (with its Dolby Digital audio specs). However, it was still a complex setup (or rather, setups) even back then, with large arrays of mics to cover both radio and TV.

2. Acoustically the RAH is very strange/surprising, and not just for the engineers but the musicians as well. It's a very large, fairly reverberant space and you would expect it to behave (and sound) somewhat like a cathedral but with reduced reverb, and indeed, in some/many audience positions that is roughly how it sounds, but not on stage. The first time I played at the RAH (as a musician) in the early 1980's was rather shocking: it was effectively the same as playing outside (in a free field), you couldn't hear any reflections of your own sound or of any of the other musicians, which made the timing and volume (to balance with the other ensemble members) of what you were playing pure guesswork, which was very disconcerting!

3. Yep, that was BIS and their claim was disingenuous or at the very least misleading. It was a claim based on a very specific meaning of "compression". Fundamentally, compression is just the reduction of peak levels and therefore there are two basic ways to achieve it: We can simply reduce the peak levels manually, by lowering the level of the output fader/s whenever there's a peak, or we can do it automatically (with far greater precision) using an electronic unit, such as a compressor, a limiter, a multi-band compressor or, more recently, a dynamic EQ. There are two ways to achieve manual compression: either writing fader (or VCA) automation during mixing or doing it live during recording by ear, simply lowering the fader/s and raising them back again when peaks are heard, which colloquially is called "riding the gain". When BIS say "no compression", that's effectively a lie; I've seen articles and posts on pro audio forums where their engineers have stated that they "ride the gain". To be accurate/truthful, one should therefore read BIS' claim as: "Compression has been applied live/manually by the engineer, no electronic compressor units have been used in this recording".
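As a rough illustration of the "automatic" flavour of compression described above, here is a minimal static compressor curve. The parameter values are arbitrary, and real units add attack/release smoothing, a knee, make-up gain and so on; this only shows the core dB arithmetic:

```python
import numpy as np

def compress(x, threshold_db=-12.0, ratio=4.0):
    """Minimal static compressor curve, no attack/release smoothing:
    any level above the threshold is scaled so only 1/ratio of the
    overshoot (in dB) survives. Gain-riding by hand achieves the same
    peak reduction, just more slowly and guided by ear."""
    level_db = 20 * np.log10(np.abs(x) + 1e-12)      # instantaneous level
    over = np.maximum(level_db - threshold_db, 0.0)  # dB above threshold
    gain_db = -over * (1.0 - 1.0 / ratio)            # e.g. 4:1 keeps 1/4
    return x * 10 ** (gain_db / 20)
```

With these settings a 0 dBFS sample overshoots the threshold by 12 dB and gets pulled down by 9 dB, while anything below the threshold passes through untouched.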



71 dB said:


> …so a lot of mics when recording large orchestral works. How about piano sonatas? Chamber music? I'm pretty sure the amount of mics drops significantly.



As bigshot stated, we've covered this previously, several times! Firstly, there is no one right way of recording orchestras, piano sonatas or other acoustic ensembles, due to the basic variables I've already mentioned ("_The music itself, the orchestra/ensemble, the recording venue, the delivery format of the product and the artistic desires/intentions of the artists and producer_"), which are never identical. Not to mention numerous other, more subtle variables, for example the positioning of the instrument/s within the recording venue, the exact tonal quality of a particular instrument or musician, etc. Therefore, we can only talk about mic setups in terms of generalities and tendencies; any individual situation will likely vary at least somewhat and could vary significantly. A piano sonata would typically be mic'ed using two large-diaphragm condenser mics actually inside the piano (one covering the lower register/strings, one covering the higher) and a couple of omni mics placed at the back of the venue (or at least very distant from the piano), to record the room acoustics ("spatial information"). Furthermore, it would not be uncommon to pan the two close mics left and right, typically only by a small amount, not hard left/right, although wide panning of the piano register is sometimes done in other genres. However, whether it's 50 mics all over the place in the case of a symphony orch or 4 (or so) in the case of a piano sonata, it's completely "unnatural spatiality", unless you know of anyone with 4 ears who can position them inside the piano and (say) 20 meters from the piano at the same time? For the umpteenth time, it's all a carefully crafted illusion based on unnatural/conflicting "spatiality" and has been for many decades. How many times?

G


----------



## 71 dB (Oct 6, 2019)

I prefer unnatural spatiality that has natural ILD levels to unnatural spatiality that has unnatural ILD levels. How many times?

It's a carefully crafted illusion, sure, but mainly for speakers, which add acoustic crossfeed and room acoustics. How can this carefully crafted illusion work when you take those away? To me it doesn't, but I can make it work pretty well using crossfeed.


----------



## gregorio

71 dB said:


> [1] I prefer unnatural spatiality that has natural ILD levels to unnatural spatiality that has unnatural ILD levels. How many times?
> [2] It's a carefully crafted illusion sure, but mainly for speakers which include acoustic crossfeed and room acoustics. How can this carefully crafted illusion work when you take these away?
> [2a] To me it doesn't, but I can make it work pretty well using crossfeed.



1. As this is the FIRST TIME that you've admitted it's just YOUR preference and that crossfeed doesn't in fact fix and/or somehow create natural spatiality, your question doesn't make much sense. However, I'll answer by saying: Just once! You appear to have taken a massive step forward, let's hope...

2. It wouldn't work exactly as intended, unless of course the engineers/artists had tested the mix/master on headphones and were satisfied with the result.
2a. Exactly, you "_can make it work pretty well using crossfeed_" according to your perception/preferences. For me, the single benefit of a more natural ILD typically does not outweigh the damage that crossfeed causes to the spatial information (and therefore sometimes also the frequency information), so I generally cannot "make it work pretty well using crossfeed". I would much rather listen to the recording transparently, but that's just my preference, not an objective scientific fact, and therefore it may not apply to everyone! How many times?

G


----------



## bigshot (Oct 6, 2019)

Crossfeed doesn't come remotely close to synthesizing the effects of a room on sound. It can make things sound better in headphones, but it doesn't recreate the effects of space and walls on sound. It doesn't even really attempt to. It's apples and oranges.

I had a vegetarian friend tell me that tofurkey tastes "exactly like turkey". She gave me some. It didn't taste at all like turkey... not the same flavor, not the same texture, nothing about it was like turkey. I asked her how long it had been since she had real turkey to compare it to. She replied 20 years. She had just convinced herself that it tasted like real turkey because she didn't want to eat real turkey. The truth was that it probably tasted a bit better to her than plain tofu.


----------



## Davesrose

bigshot said:


> Crossfeed doesn't come remotely close to synthesizing the effects of a room on sound. It can make things sound better in headphones, but it doesn't recreate the effects of space and walls on sound. It doesn't even really attempt to. It's apples and oranges.
> 
> I had a vegetarian friend tell me that tofurkey tastes "exactly like turkey". She gave me some. It didn't taste at all like turkey... not the same flavor, not the same texture, nothing about it was like turkey. I asked her how long it had been since she had real turkey to compare it to. She replied 20 years. She had just convinced herself that it tasted like real turkey because she didn't want to eat real turkey. The truth was that it probably tasted a bit better to her than plain tofu.



I haven't tried an Impossible Burger: I'm surprised that many restaurants are starting to serve it, since it's higher calorie and more expensive than a basic hamburger.  I think the best vegetarian dishes are those that don't try to emulate a meat dish, but are made to taste good in their own right.

As to spatial sound... there have been a few instances where I've heard processors give a realistic feeling of sound enveloping me from the sides and going to the rear of my head.  I can't say I've heard one that gives the same depth cues of a center point as my surround loudspeaker system.


----------



## bigshot (Oct 6, 2019)

Davesrose said:


> As to spacial sound...there have been a few instances where I've heard processors give a realistic feeling of sound enveloping me from the sides and going to the rear of my head.  I can't say I've heard one that gives the same depth cues of a center point like my surround loudspeaker system.



Yeah, I've found the same... A DSP can give you the feeling of generalized space, but it's not good at focused directionality. Have you heard Dave Strohmeier's blu-rays of the Cinerama films? The orchestra on Best of Cinerama is amazingly focused and dimensional sounding. They tried to do with miking and mixing what the curved screen and multiple projectors did for the image. The recording techniques for multichannel music have settled into a formula. Classical music in particular doesn't vary much from the soundstage up front and reverberation in the rear. I know that is the way it sounds in a concert hall, but I'm curious what could be accomplished with pulling the band into the room a little more and getting a sound that wraps in front of you. The recordings I've heard that try this aren't very good... usually because they don't get the relative balances right. That Cinerama disc is to multichannel what RCA Living Stereo and Mercury Living Presence were to early stereo. Perhaps it takes a recording venue that is well known and the acoustics are established to figure out how to do it.


----------



## ironmine

bigshot said:


> Crossfeed doesn't come remotely close to synthesizing the effects of a room on sound. It can make things sound better in headphones, but it doesn't recreate the effects of space and walls on sound. It doesn't even really attempt to.



Recreating the effects of a space is what reverbs do, not crossfeeds. 

Crossfeed just makes the recording sound more natural in headphones in terms of spatiality and stereo imaging, because it mimics head-related transfer function cues such as ILD and ITD.

Some plugins are just pure crossfeeds (112dB Redline Monitor, Meier, etc.), while others offer not only crossfeed, but also, optionally, reverbs (TB Isone).
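For anyone curious what a "pure" crossfeed boils down to, here is a minimal sketch (illustrative only, not the algorithm of any named plugin): each output channel is the input channel plus an attenuated, slightly delayed, lowpass-filtered copy of the opposite channel. The cutoff, attenuation and delay values below are typical ballpark figures, not taken from any specific product.

```python
import numpy as np

def simple_crossfeed(left, right, fs=44100, cutoff=800.0,
                     atten_db=-12.0, delay_ms=0.3):
    """Naive crossfeed sketch: mix a delayed, first-order-lowpassed,
    attenuated copy of the opposite channel into each channel.
    All parameter values are illustrative ballpark figures."""
    gain = 10 ** (atten_db / 20.0)
    delay = int(round(fs * delay_ms / 1000.0))

    # One-pole lowpass coefficient (approximates head shadowing
    # of the crossfed path)
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / fs)

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc += alpha * (s - acc)
            y[i] = acc
        return y

    def delayed(x):
        # Shift the signal later in time by `delay` samples
        return np.concatenate([np.zeros(delay), x[:len(x) - delay]])

    out_l = left + gain * delayed(lowpass(right))
    out_r = right + gain * delayed(lowpass(left))
    return out_l, out_r
```

Feeding a hard-panned-left signal through this produces an attenuated, darker copy of it on the right channel, which is the whole point: content no longer arrives at only one ear.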


----------



## ironmine

71 dB said:


> Chris from Airwindows is a very interesting "out-of-the-box" thinker of audio processing. His idea of using Benford's law in dithering totally blew me away.
> 
> His cans A and cans B are clearly designed for monitoring purposes rather than headphone listening. In my opinion adjustment of crossfeed level for each individual recording is important in listening in order to optimize the spatiality on natural levels and as we know every recording has its own spatiality, different levels of excessive spatiality. For some of the music samples even the stronger "cans B" processing was imo too weak to remove all excessive spatiality. So, I'd say this plugin of his is great for monitoring purposes (work), but for headphone listening (enjoyment) there are perhaps better options out there.



I tried Airwindows Monitoring (Cans A & B) with earplugs and I agree with you, the remaining spatiality is too much for me. 

But I need to try it also with big open phones (Senn HD650), because, in my experience, open phones require less crossfeeding than closed models or earphones.  Is it because open-back phones leak more sound outside and some of it reaches the opposite ear?


----------



## Davesrose

bigshot said:


> Yeah, I've found the same... A DSP can give you the feeling of generalized space, but it's not good at focused directionality. Have you heard Dave Strohmeier's blu-rays of the Cinerama films? The orchestra on Best of Cinerama is amazingly focused and dimensional sounding. They tried to do with miking and mixing what the curved screen and multiple projectors did for the image. The recording techniques for multichannel music have settled into a formula. Classical music in particular doesn't vary much from the soundstage up front and reverberation in the rear. I know that is the way it sounds in a concert hall, but I'm curious what could be accomplished with pulling the band into the room a little more and getting a sound that wraps in front of you. The recordings I've heard that try this aren't very good... usually because they don't get the relative balances right. That Cinerama disc is to multichannel what RCA Living Stereo and Mercury Living Presence were to early stereo. Perhaps it takes a recording venue that is well known and the acoustics are established to figure out how to do it.



I haven't heard the Cinerama BDs: I'll be on the lookout.  When it comes to surround with blu-rays, I do have quite a few concerts that are either stereo or surround.  Usually the surround does add some more reverberation and audience noises around one's head.  The best surround experience with music I've heard is Eric Clapton's Crossroads 2010: the music is well mixed and utilizes the subwoofer effectively.  But I haven't met a BD concert recording I didn't like: even classical concerts which don't really utilize surround are nice.  And now Atmos makes the surround scape that much better.  I've noticed older movies that are re-mixed in Atmos can sound pretty different from previous 5.1 mixes.  Example: Apollo 13 -- the music is more encompassing and I hear more ambient noises (such as scenes in offices where you hear typewriters in the background).


----------



## bigshot (Oct 7, 2019)

I'm always happy when they include the original theatrical mix. Sometimes I don't care for the new multichannel mixes. Often the music and effects will drown out dialogue in remixes. I don't think the people doing the new mixes care as much as the original crew, and I'm sure directors aren't consulted. There was one movie... The Changeling. The multichannel mix was pretty bad. But it had the original stereo track and it decoded with Dolby digital into 3.0. It sounded great that way. It all depends on the quality of the mix.

Ironmine, crossfeed with reverb doesn't come much closer to the effect of a room on speakers. To get spatiality, you need physical space. At least until the technology for stuff like the Smyth Realizer gets better.


----------



## gregorio

ironmine said:


> [1] Recreating the effects of a space is what reverbs do, not crossfeeds.
> [2] Crossfeed just makes the recording sound more natural in headphones in terms of spatiality and stereo imaging, because it mimics head-related transfer function cues such as ILD and ITD.



1. True, although there's a couple of points to remember: Firstly, although reverbs create the effects of a space, they don't entirely recreate the position of a sound/instrument within that space. For example, most instruments on most commercial recordings have been close mic'ed. If we then add a reverb to create the "space" of say a concert hall, the results will be unpredictable depending on the exact FR of the instrument we've recorded and where we're trying to place it within our reverb created concert hall. Most likely, we're going to have to process our recording of the instrument quite heavily, before we send it to the reverb unit/plugin. Secondly, on virtually all commercial music recordings we have several different reverbs (on different instruments) at the same time, so what we actually have is a set of conflicting "spaces" occurring simultaneously. In theory, this should sound like a complete mess and make no sense at all to our perception of "spatiality" but the reason it doesn't, is because it's been applied (and manipulated) by an experienced professional engineer subjectively, IE. With reference to their human perception of "spatiality".

2. This statement is not entirely true. Crossfeed only crossfeeds the left/right signals and typically only below a certain freq threshold. The result can be somewhat more natural levels of ILD (below that freq threshold) depending on how natural/unnatural the ILD is on the recording BUT those crossfed signals do NOT mimic HRTFs because they do not account for ITD or the colouration of the crossfed signal that is required of a HRTF. Furthermore, we must remember the above point and that many of the theory/rules of acoustics don't apply to commercial music recordings because we're dealing with several different/conflicting sets of simultaneous acoustics which could never exist in the real world (a fact that 71dB consistently ignores). The end result is therefore unpredictable: it may be perceived by some as somewhat akin to a relatable HRTF, by others as just a further messing-up of what is intrinsically already messed-up "spatial information" and by others as just a somewhat different presentation (which may or may not be preferable).

G


----------



## 71 dB

bigshot said:


> Crossfeed doesn't come remotely close to synthesizing the effects of a room on sound. It can make things sound better in headphones, but it doesn't recreate the effects of space and walls on sound. It doesn't even really attempt to. It's apples and oranges.



Crossfeed doesn't even try doing that. It simulates the acoustic crossfeed that happens when you hear the direct sound from left speaker with your right ear and vice versa. So, it's exactly what you say: It makes things sound better in headphones, but for _large_ soundstage you need speakers. I think we are finally on the same page?


----------



## 71 dB

ironmine said:


> I tried Airwindows Monitoring (Cans A & B) with earplugs and I agree with you, the remaining spatiality is too much for me.
> 
> But I need to try it also with big open phones (Senn HD650), because, in my experience, open phones require less crossfeeding than closed models or earphones.  Is it because open-back phones leak more sound outside and some of it reaches the opposite ear?



In the newest video (Monoam plugin) he talks about requests for stronger versions of Cans A & B and he is working on that.

Open cans do leak sound, but it is a very weak acoustic crossfeed. The fact that I use crossfeed with almost everything I listen to with my open headphones is a sign this acoustic crossfeed is weak, maybe -20 dB or so?


----------



## 71 dB (Oct 7, 2019)

gregorio said:


> 1. True, although there's a couple of points to remember: Firstly, although reverbs create the effects of a space, they don't entirely recreate the position of a sound/instrument within that space. For example, most instruments on most commercial recordings have been close mic'ed. If we then add a reverb to create the "space" of say a concert hall, the results will be unpredictable depending on the exact FR of the instrument we've recorded and where we're trying to place it within our reverb created concert hall. Most likely, we're going to have to process our recording of the instrument quite heavily, before we send it to the reverb unit/plugin. Secondly, on virtually all commercial music recordings we have several different reverbs (on different instruments) at the same time, so what we actually have is a set of conflicting "spaces" occurring simultaneously. In theory, this should sound like a complete mess and make no sense at all to our perception of "spatiality" but the reason it doesn't, is because it's been applied (and manipulated) by an experienced professional engineer subjectively, IE. With reference to their human perception of "spatiality".
> 
> 2. This statement is not entirely true. Crossfeed only crossfeeds the left/right signals and typically only below a certain freq threshold. The result can be somewhat more natural levels of ILD (below that freq threshold) depending on how natural/unnatural the ILD is on the recording BUT those crossfed signals do NOT mimic HRTFs because they do not account for ITD or the colouration of the crossfed signal that is required of a HRTF. Furthermore, we must remember the above point and that many of the theory/rules of acoustics don't apply to commercial music recordings because we're dealing with several different/conflicting sets of simultaneous acoustics which could never exist in the real world (a fact that 71dB consistently ignores). The end result is therefore unpredictable, it maybe perceived by some as somewhat akin to a relatable HRTF, by others as just a further messing-up of what is intrinsically already messed-up "spatial information" and by others as just a somewhat different presentation (which may or may not be preferable).
> 
> G


1. Yes, reverberation is for the most part diffuse spatial information without direction. When I create the spatial information for the instruments of my own music, I mix the direct sound, which carries the position information (angle of sound), with reverberation, which doesn't really have left-right information but gives distance information: the heavier the reverberation is compared to the direct sound, the larger the implied distance of the sound (some spectral manipulation, such as a high-frequency cut, helps).
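As a toy illustration of that pan-plus-reverb approach (my choices of pan law and distance weighting below are just one simple possibility, not a recommended recipe): the direction lives entirely in the panned direct sound, while the same diffuse reverb tail is fed to both channels and its level relative to the direct sound carries the distance cue.

```python
import numpy as np

def place_source(dry, pan, distance, reverb_ir):
    """Toy spatialisation sketch (illustrative only): constant-power
    panning carries the left/right direction; the wet/dry balance
    carries distance (a farther source gets proportionally more
    diffuse reverb and less direct sound)."""
    # Constant-power pan: pan in [-1, 1] -> left/right gains
    theta = (pan + 1.0) * np.pi / 4.0
    gl, gr = np.cos(theta), np.sin(theta)

    # Diffuse tail: convolve with a mono reverb impulse response,
    # identical in both channels (no left/right information)
    wet = np.convolve(dry, reverb_ir)[:len(dry)]

    # Farther sources: weaker direct sound, more reverb (simple 1/d law)
    direct = 1.0 / max(distance, 1.0)
    out_l = gl * direct * dry + (1.0 - direct) * wet
    out_r = gr * direct * dry + (1.0 - direct) * wet
    return out_l, out_r
```

At distance 1 the output is pure panned direct sound; pushing the source farther away shrinks the direct component and lets the (directionless) reverb dominate, which is exactly the distance cue described above.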

I think our spatial hearing is actually quite good at dealing with multiple spatial scenarios simultaneously. Imagine listening to your friend speaking during a thunderstorm. The sounds of the thunderstorm have totally different spatiality than the speech of your friend, but there is nothing unnatural about the situation. Spatial hearing can easily deal with it, and with the fact that your friend's voice doesn't echo all over the neighborhood. As long as each spatiality itself makes sense, mixing multiple different ones isn't a problem in my opinion. In fact, the separate tracks of a recording don't need to stay within the limits of our hearing, because the tracks mask each other: even if a track by itself exceeds the limits of natural ILD, for example, after masking by the other tracks the limits aren't exceeded anymore. The trick is to take the spatiality of each individual track as far as possible, but not too far, to have as "wide" a spatiality as possible while still avoiding excessive spatiality. I believe this is the way to create "omnistereophonic" recordings: recordings that work for both speakers and headphones without crossfeed.

2. Crossfeed is kind of what you get when you approximate the HRTF for ~30° angled sounds with a first order lowpass filter. At low frequencies the HRTF is quite flat and has a simple ITD response due to the physical distance between the ears. Crossfeed happens to generate a very similar ITD response when a first order filter at about 800 Hz is used. I don't think this is even a lucky coincidence: a first order filter at 800 Hz is simply a mathematically similar situation to the shadow effect of the human head, the 800 Hz coming from the size of the head. At 800 Hz the wavelength is about 40 cm or 16 inches, which happens to be a good first order lowpass filter approximation scaling of the human head. Acoustics is a lot about scaling: scale things correctly and it works. Above 1 kHz crossfeed does very little because the amount of crossfed signal is so small; it's like having no crossfeed. That's okay, because the natural level of ILD at high frequencies can be quite high, so there's no need to do much of anything above 1-2 kHz. (800-1600 Hz is the transitional octave where spatial hearing moves from being ITD based (ILD/ISD assumed small) to being ILD/ISD based and is problematic anyway, crossfeed or not; spatial hearing works best below 800 Hz and above 1600 Hz.)
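A quick back-of-envelope check of that 800 Hz claim, using the textbook spherical-head (Woodworth) ITD formula with an assumed 8.75 cm head radius: the low-frequency group delay of a first order lowpass at 800 Hz lands in the same roughly 0.2-0.3 ms range as the ITD of a source about 30° off-center. The radius and angle here are my assumptions, not values from the post.

```python
import math

# Low-frequency group delay of a first-order lowpass: tau = 1 / (2*pi*fc)
fc = 800.0                                # cutoff scaled to head size, per the post
tau_ms = 1000.0 / (2.0 * math.pi * fc)    # roughly 0.2 ms

# Woodworth spherical-head ITD estimate: ITD = (r/c) * (theta + sin(theta))
r = 0.0875                                # assumed head radius, metres
c = 343.0                                 # speed of sound, m/s
theta = math.radians(30.0)                # assumed source angle off-center
itd_ms = 1000.0 * (r / c) * (theta + math.sin(theta))  # roughly 0.26 ms
```

The two numbers are not identical, but they are the same order of magnitude, which is the gist of the "scale the filter to the head" argument.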

I don't ignore the fact of simultaneous acoustics. I'm just confident human hearing is surprisingly good at dealing with them, because it happens in real life (thunderstorms etc.).


----------



## bigshot (Oct 7, 2019)

71 dB said:


> Crossfeed doesn't even try doing that. It simulates the acoustic crossfeed that happens when you hear the direct sound from left speaker with your right ear and vice versa. So, it's exactly what you say: It makes things sound better in headphones, but for _large_ soundstage you need speakers. I think we are finally on the same page?



Yes. Soundstage is a complex set of interactions between speaker dispersion, room reflections, physical distance and directionality. It is a plane of sound in front of you in space. Crossfeed corrects one small aspect of the equation: the overlap between channels. It doesn't even attempt to recreate all the other complex acoustic geometry involved in soundstage. But it can improve sound if you like the effect and you have it set right.


----------



## ironmine

gregorio said:


> crossfed signals do NOT mimic HRTFs because they do not account for ITD or the colouration of the crossfed signal that is required of a HRTF.



Of course, they do!  Crossfeeds mimic ITD and some of them offer an option of colorizing (equalizing) the crossfed signal. Spend some time not arguing against crossfeed plugins but actually learning how they are made and how they work.


----------



## ironmine

bigshot said:


> Ironmine, crossfeed with reverb doesn't come much closer to the effect of a room on speakers. To get speciality, you need physical space. At least until the technology for stuff like the Smyth Realizer gets better.



I don't like reverbs, I don't use them. I want to hear the original reverberation in the recording. Why intentionally mix it with the reverberation of a listening room? 

But I guess, as the computational power grows in future, any physical space and any processes happening in it can be eventually simulated with a great degree of precision and detail.


----------



## bigshot

There are times when reverb and other kinds of delay are useful. In my multichannel speaker system, I use a hall ambience DSP based on the Vienna Sofiensaal when I play orchestral music recorded in dry studio conditions. For instance the recordings of Toscanini and the NBC Symphony in Studio 8H. The records are dry as dust and in aggressive mono. But when I wrap the ambience around it, it almost sounds like stereo recorded in a good concert hall.

I think the sound around the sound is as important as the sound itself. But I can understand how people who are used to headphones might not believe that.


----------



## gregorio

71 dB said:


> 1. Yes, reverberation is for the most part diffuse spatial information without direction.
> [1a] When I create the spatial information for the instruments of my own music, I mix the direct sound which has the position information (angle of sound) with reverberation which doesn't really have left-right information, but gives distance information ...
> [1b] I think our spatial hearing is actually quite good at dealing with multiple spatial scenarios simultaneously. Imagine listening to your friend speaking during a thunderstorm. The sounds of the thunderstorm have totally different spatiality than the speech of your friend, but there is nothing unnatural about the situation.
> [1c] Spatial hearing can easily deal with the situation and the fact that your friend's voice doesn't echo all over the neighborhood.
> ...



1. No, it is NOT! When we have a sound in an acoustic environment we get a set of initial reflections, which are NOT diffuse and DO have considerable directional information. The properties of these initial reflections (often referred to as Early Reflections, ERs); their timing, freq response, direction and level relative to the direct sound are VITAL to our perception of "spatiality", in fact they largely define it! These ERs then hit other surfaces, are reflected again and the system becomes semi-random/chaotic and this portion of the reverb is diffuse with little direction.
1a. Then you're doing it wrong, because reverb DOES have left/right information! Why do you think stereo (and surround) reverb units/plugins exist in the first place? If there were no left/right positional information then all reverb units would be mono!

1b. This is nonsense! The sounds of the thunderstorm would NOT have "totally different spatiality"! Obviously you, your friend and the thunderstorm would be in the same acoustic environment and all of you would therefore have somewhat similar "spatiality" (as defined by that acoustic environment). However, the individual parameters of the spatial information would be somewhat different due to the different relative positions of you, your friend and the thunderstorm within that acoustic environment.
1c. Of course it can, because our hearing deals constantly with sounds in different relative positions within an acoustic environment.
1d. With the vast majority of non-acoustic music genres we're not talking about different sound sources in different positions within a single acoustic environment but about different, independent acoustic environments at the same time. And of course our hearing never has to deal with that because it's a physical impossibility!
1e. You admit to knowing pretty much nothing about sound/music engineering, have stated that you are "not saying much what engineers are and [should] do", yet here you are, yet again (!), doing exactly that?!

2. Crossfeed obviously changes the time, level and direction of the ERs which are so vital to spatial hearing.

3. But that's exactly what you've just done! Your analogy does exactly that, your analogy is in ONE acoustic environment, so ignores different simultaneous environments (which is the case with non-acoustic genres) and it also ignores different simultaneous acoustic perspectives within that single environment (which is the case with acoustic genres such as classical). So, that covers all the acoustic and non-acoustic music genres, please tell me what other genres exist besides these two (to which your analogy would be applicable)?!! 
3a. Which again is nonsense, unless you have a totally bizarre "real life". IE. Two sets of ears which you can simultaneously place in two different rooms (acoustic spaces) or, which you can simultaneously place both say 1m and 30m away from your friend speaking (in the same acoustic environment).

You state you studied at university and have "thought about this a lot" but what you actually write indicates little/no education and that you've thought about this for no more than a few seconds!

G


----------



## castleofargh

we have to consider several playback models, first to try and understand them properly, and then to see if what applies to one model can apply to the others:
we have the headphone's usual playback.
we have the sound field from speakers in a room as perceived by a dude in that room.
and for crossfeed, the model consists, in the best-case scenario, of 2 speakers, no room, no floor, no listener's body, a head with a shape that acts like a one band EQ, and when the head turns, the speakers follow. you know, a more natural spatiality. 

deciding that crossfeed is an improvement over default headphone playback is a subjective opinion. and one I share. even more so when there are indeed VSTs that bring more than just basic crossfeed to the table and can feel even better to some listeners with the right settings. but subjective it is. the entire notion of perceived sound location is fundamentally a subjective thing. it involves psychoacoustics, and varies with the listener's head, oh and it refers to an acoustic model that does not actually exist. if that's not subjective, what is?


----------



## 71 dB

gregorio said:


> 1. When we have a sound in an acoustic environment we get a set of initial reflections, which are NOT diffuse and DO have considerable directional information. The properties of these initial reflections (often referred to as Early Reflections, ERs); their timing, freq response, direction and level relative to the direct sound are VITAL to our perception of "spatiality", in fact they largely define it! These ERs then hit other surfaces, are reflected again and the system becomes semi-random/chaotic and this portion of the reverb is diffuse with little direction.
> 1a. Then you're doing it wrong, because reverb DOES have left/right information! Why do you think stereo (and surround) reverb units/plugins exist in the first place? If there were no left/right positional information then all reverb units would be mono!
> 
> 1b. This is nonsense! The sounds of the thunderstorm would NOT have "totally different spatiality"! Obviously you, your friend and the thunderstorm would be in the same acoustic environment and all of you would therefore have somewhat similar "spatiality" (as defined by that acoustic environment). However, the individual parameters of the spatial information would be somewhat different due to the different relative positions of you, your friend and the thunderstorm within that aoustic environment.
> ...



1. Correct. I agree. Early reflections are not diffuse and they contain crucial spatial information. I don't think I have said otherwise. I said the reverberation after the early reflections is diffuse. As I make computer music and have to create spatiality from scratch, I am very familiar with this concept, not to mention university studies where this stuff was taught almost on the first day! It continues to amaze me how you refuse to believe I really know this stuff.

1a. Reverberation has a left/right difference (so it's not mono), but the difference is very random, practically noise, and doesn't really contain spatial information about where the sound originated left-right-wise. That information is in the direct sound and the early reflections. Reverberation contains information about the acoustic space and how far away the sound source has been (when compared to the direct sound/ERs). Again, this is stuff we both know. Since you keep claiming I don't, I have no choice but to call you nefarious.

1b. The thunderstorm contains echoes with significant time delay. The speech of a friend doesn't. Sure, the speech does technically echo off a distant building and come back half a second later, but the sound is so quiet nobody can hear it. The speech is dominated by the acoustics near you; the thunderstorm is dominated by the massive echoes from the hills and buildings within a radius of a mile. The result is totally different spatiality. In my opinion you lack the ability to discern what things mean in practice, and we disagree a lot about what matters and what doesn't matter.

1d. Except when we are listening to such music our hearing needs to deal with it… …so what do you mean?

1e. I haven't done it for work, but I know something and I learn more. I have watched countless hours of Youtube videos about how to mix music. I also make my own music and have learned a thing or two while doing it. However, I acknowledge the limits of my knowledge. In general consumers have power. If consumers don't like product A, they may prefer product B even if they know nothing about how A and B are produced. I'm not saying this is always a good thing. I am saying it's what happens in capitalism.

2. What ERs? There is no room, so there are no ERs either. Maybe you mean the ERs of the recording itself? According to yourself that's something like 30 mics worth of multispatiality, which in your opinion doesn't suffer at all when you use speakers and have acoustic crossfeed (doing "time change" etc.), early reflections and diffuse reverberation. Also, if you use only headphones everything is fine and dandy according to you, BUT if you dare to simulate the acoustic crossfeed of speaker listening with crossfeed you suddenly have devastating problems!! As I said, you have difficulties discerning what matters and what doesn't matter.

3. This leads nowhere… …your attempts are pathetic at this point.
3a. Same as 3. 

I have the qualification certificate on my shelf and I am not intimidated anymore by your nasty words. You could start practicing putting things in proper perspective and becoming better at discerning what matters and what doesn't matter instead of calling others ignorant.


----------



## bigshot (Oct 11, 2019)

In order to have true spatiality, you need physical space. It may be possible someday to synthesize that effect, but it's going to require computer processing. It won't happen with a simple crossfeed.


----------



## gregorio (Oct 11, 2019)

71 dB said:


> 1. I said the reverberation after early reflections is diffuse. As I make computer music and have to create spatiality from scratch I am very familiar with this concept not to mention university studies where this stuff was taught almost the first day! It continues to amaze me how you refuse to believe I really know this stuff.
> 1a. Reverberation contains information about the acoustic space and how far away the sound source has been (when compared to the direct sound/ERs). Again, this is stuff we both know. Since you keep claiming I don't, I have no choice but to call you nefarious.
> 1b. The speech is dominated by the acoustics near you; the thunderstorm is dominated by the massive echoes from the hills and buildings within a radius of a mile. The result is totally different spatiality.
> [1b1] In my opinion you lack the ability to discern what things mean in practice ...
> ...



1. No you didn't, why don't you read what you wrote? You said "_I mix the direct sound which has the position information (angle of sound) with reverberation which doesn't really have left-right information, but gives distance information_", no mention of ERs at all, just the direct sound and the reverb!! If by "reverberation" you are including the ERs, which of course you should because reverberation is only the subsequent further reflections of the early reflections, then your statement is false and reverb does contain very significant directional information (plus timing and freq content). If you're not including ERs in "reverberation", then as I said, you're doing it wrong and ignoring some of the most vital spatial information (which is damaged by crossfeed)! How many times?

1a. The main spatial information is in the left/right information of the direct sound and the parameters of the ERs, including distance! The arrival time and direction of the ERs tells us how far the direct sound is from the various different reflecting boundaries. However, you want to destroy that by changing (crossfeeding) that direction and timing information! If you do know this stuff, why do you keep omitting it and/or ignoring it? How many times?

1b. And how exactly are you going to hear the "massive echoes from the hills and buildings" independently from the acoustics of the environment you and your friend are in? Do you have a set of ears in the hills, another set of ears in the buildings and another set near your friend? Of course you don't, you have one set of ears near your friend and everything you hear is through that single acoustic environment and therefore the result CANNOT be totally different spatiality, just somewhat different, as I've already stated. This is NOT analogous to most commercial music recordings, where we have different, simultaneous acoustic environments.
1b1. How ridiculous is that? You've admitted you've got no practical experience yourself but even so, you have the opinion that someone who has 25 years of practical professional experience lacks the ability to "discern what things mean in practice" and you know better because what, you've seen some youtube vids and know that your knowledge is limited? How does that make any sense to you unless you're delusional?
1d. Huh, that's the whole point! Our hearing has to deal with something that we can never experience and therefore each individual's perception interprets the spatial information presented with headphones according to their individual biases, experiences and preferences. How many times!

2. Of course I mean the ERs on the recordings, what is it you're listening to on your headphones?
2a. You're joking, right? 30 mics worth of different spatiality would be a complete mess when using speakers, which is why we need music engineers to adjust that spatial information, while using speakers, before it's distributed to consumers! You're just getting more and more ridiculous!
2b. And clearly you have severe difficulties discerning the actual facts from the lies that you yourself have made up! I never said only using headphones is "fine and dandy", in fact I said the opposite and I never said that crossfeed gives you "devastating problems", you just made those lies up!!

3. Absolutely. In terms of getting you to stop posting falsehoods, my attempts are clearly pathetic. However, as this is the sound science forum then your falsehoods will be refuted. The obvious solution to this "leading nowhere" is to stop posting your falsehood repeatedly. How many times?

4. What qualification certificate, you said your university course didn't even mention music production! And how can I START "practicing putting things in proper perspective" when I've been doing that professionally for over 25 years and started practicing that nearly 30 years ago? Just how ridiculous are you going to get?

Again, you're just following the path you always take, which inevitably leads you to more and more ridiculous assertions, which eventually even you realise are ridiculous, and then you get all depressed, go on about your self-esteem, drop the subject for a while and then start all over again, doing exactly the same thing and expecting a different result. Don't you know the famous cliche attributed to Einstein about that?

G


----------



## bigshot (Oct 11, 2019)

Nefarious!







Pathetic!


----------



## Davesrose

bigshot said:


> In order to have true spatiality, you need physical space. It may be possible someday to synthesize that effect, but it's going to require computer processing. It won't happen with a simple crossfeed.



I think HRTF is also much more complicated with headphones, as they sit close to our ears, which are all individualized by how our outer ears are shaped, the angles of our ear canals, the current state of the middle ear, and the state of the inner ear (tolerances that are more individualized/finite)...which leads to the bickering over which headphone brand is the most "natural" sounding.  Speakers are far enough away to have a sound field that mimics the source venue. I've also never thought of "crossfeed" as a term for virtual spatiality with headphones.  To date, the best spatiality I've heard with headphones has come from a few virtual surround settings.  The first time was when I got a Sennheiser Pro Dolby surround processor, which relies on a parametric setting you alter to your taste.  When I dialed it in to mine, I could hear effects around me.  True Dolby Atmos headphone sources also sound good...but I can't say I hear as much front depth cue as with my actual speaker system.


----------



## 71 dB

bigshot said:


> In order to have true spatiality, you need physical space. It may be possible someday to synthesize that effect, but it's going to require computer processing. It won't happen with a simple crossfeed.



You can't use a hammer as a screwdriver. Does that mean hammers are useless? I use crossfeed to make headphone sound have natural levels of ILD and in some ways (not all ways, unfortunately) sound similar to speakers. Someday I may have better ways to improve headphone sound, but until then this is what I have got, and I think it's much better than nothing, so I am definitely using it! Sometimes I even listen to my speakers and have your beloved _"true spatiality."_
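Since the thread keeps arguing about what crossfeed actually does to the signal, here is a minimal sketch of a bs2b-style crossfeed of the kind 71 dB describes: a delayed, lowpass-filtered, attenuated copy of the opposite channel is added to each channel. The 0.5 ms delay, 700 Hz cutoff and -12 dB attenuation are illustrative defaults (similar to figures quoted earlier in the thread), not any particular product's settings, and the one-pole filter is a deliberate simplification.

```python
import numpy as np

def simple_crossfeed(left, right, fs=44100, atten_db=-12.0,
                     delay_ms=0.5, cutoff_hz=700.0):
    """Add a delayed, lowpass-filtered, attenuated copy of the
    opposite channel to each channel (sketch of bs2b-style crossfeed)."""
    gain = 10.0 ** (atten_db / 20.0)            # -12 dB -> ~0.25 linear
    delay = int(round(fs * delay_ms / 1000.0))  # 0.5 ms -> ~22 samples

    # One-pole lowpass, y[n] = y[n-1] + a * (x[n] - y[n-1]):
    # crudely mimics the head shadowing the opposite ear.
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc += a * (s - acc)
            y[i] = acc
        return y

    def delayed(x):  # shift right by `delay` samples, zero-padded
        return np.concatenate([np.zeros(delay), x[:len(x) - delay]])

    out_left = left + gain * delayed(lowpass(right))
    out_right = right + gain * delayed(lowpass(left))
    return out_left, out_right
```

Feeding a hard-panned signal (all left, silent right) through this yields a right channel carrying roughly a quarter of the left channel's amplitude, which is exactly the ILD reduction the thread is arguing over.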


----------



## 71 dB

gregorio said:


> 1. No you didn't, why don't you read what you wrote? You said "_I mix the direct sound which has the position information (angle of sound) with reverberation which doesn't really have left-right information, but gives distance information_", no mention of ERs at all, just the direct sound and the reverb!! If by "reverberation" you are including the ERs, which of course you should because reverberation is only the subsequent further reflections of the early reflections, then your statement is false and reverb does contain very significant directional information (+ timing and freq content). If you're not including ERs in "reverberation", then as I said, you're doing it wrong and ignoring some of the most vital spatial information (which is damaged by crossfeed)! How many times?



Sorry about not mentioning ER every time. I don't mention dinosaurs in my posts either, but I still know dinosaurs existed… …you'd make a good lawyer…
If crossfeed damages ER, then acoustic crossfeed damages it too. Are you advocating crosstalk cancelling for speakers? No, because you sound engineers mix taking acoustic crossfeed into account, which means you are in trouble with headphones if you don't use similar processing of the sound.

What does damaged ER even mean? Crossfeed IMPROVES the sound for me, so I don't get what is damaged. I really don't get how room acoustics are nothing, but simple crossfeed ruins things. I have calculated what crossfeed does, how it alters phases and so on, and it's NOTHING compared to what a room does. I am totally fed up with your criticism. Nobody would use crossfeed if it didn't improve things. You are out of your mind if you think I consider the ER of some ping pong recordings intact and the crossfed version damaged! If I listen to speakers the player is 30° left. With headphones without crossfeed it's maybe 60°. Which is correct? I suppose speakers, so I crossfeed and the sound moves to maybe 40°, which is closer to speakers, and everything sounds better and more natural, yet you tell me ER is damaged? What the ****?


----------



## 71 dB

gregorio said:


> 2a. You're joking, right? 30 mics' worth of different spatiality would be a complete mess when using speakers, which is why we need music engineers to adjust that spatial information, while using speakers, before it's distributed to consumers! You're just getting more and more ridiculous!



Yes, USING SPEAKERS! Spatial information adjusted for speakers is not automatically optimal for headphones. Newer recordings are often somewhat good for headphones thanks to sophisticated tools (and many producers have learned to mix bass mono etc.), but older recordings are what they are, often ping pong or something else crazy. Even newer recordings often benefit from _weak_ crossfeed, but clearly headphone users have been thought about. You are the one who brought the concept of _multiple spatiality_ into this discussion.



gregorio said:


> 2b. And clearly you have severe difficulties discerning the actual facts from the lies that you yourself have made up! I never said only using headphones is "fine and dandy", in fact I said the opposite and I never said that crossfeed gives you "devastating problems", you just made those lies up!!



Ok, sorry but why do you then oppose crossfeed so fiercely?? Your posts indicate you are very worried about what crossfeed does to ER. 



gregorio said:


> What qualification certificate, you said your university course didn't even mention music production! And how can I START "practicing putting things in proper perspective" when I've been doing that professionally for over 25 years and started practicing that nearly 30 years ago? Just how ridiculous are you going to get?



Music production is not what the university I went to was about. It was about electrical engineering, and I chose acoustics and signal processing as my speciality. The whole faculty was very much into telecommunications. I believe that's why Finland is strong in mobile technology (Nokia): we have this faculty producing tons of telecommunication engineers. Acoustics was just a small fraction of the faculty, a small group of more or less eccentric people into sound, many musically talented. Until I heard it from you I didn't even know music production is taught in universities. I thought you learn the art by doing, working in the business and learning from the gurus while working with them.


----------



## gregorio (Oct 12, 2019)

71 dB said:


> I use crossfeed to make headphone sound have natural levels of ILD and in some ways (not all ways unfortunately) sound similar to speakers.



That's just a function of YOUR perception, to me crossfeed does NOT make it sound anything like speakers. It doesn't even sound similar to speakers for some/many of the people who use crossfeed regularly, just preferable to not using crossfeed! You are talking about your personal perception, NOT an objective fact that's applicable to everyone (except jerks, idiots, etc.)! How many times?



71 dB said:


> [1] Sorry about not mentioning ER every time. I don't mention dinosaurs in my posts either, but I still know dinosaurs existed… …
> [2] If crossfeed damages ER, then acoustic crossfeed damages it too.
> [2a] Are you advocating crosstalk cancelling for speakers? No, because you sound engineers mix taking acoustic crossfeed into account, which means you are in trouble with headphones if you don't use similar processing of the sound.
> [3] What does damaged ER even mean?
> ...



1. What do you mean sorry for not mentioning ERs every time? You omitted it completely from a long post all about spatial information! In nature/the real world, how do you have spatial information without ERs (probably the most important aspect of spatial information)? You think maybe ERs went extinct 98 million years ago?

2. Oh god, how many times? Except in an anechoic chamber, you never get acoustic crossfeed from speakers without the ERs and reverb (spatial information) of the listening environment, spatial information which is vital to our perception! Have you ever listened to music on stereo speakers in an anechoic chamber? It sounds terrible, except possibly to you?
2a. It's just ridiculous! You state you don't make assertions about sound engineers because you don't know anything/much about sound engineers/engineering but here you are, yet again making an assertion about what sound engineers "take into account". If that's not ridiculous enough, you're making the exact same false assertion about what sound engineers "take into account" that I, an actual sound engineer, refuted just a few posts ago! Round and round we go.

3. Asked and answered numerous times (but I've done it again briefly in 2a below).
3a. Fallacy, false correlation! Another common example: For some people, using a tube in the playback chain IMPROVES the sound for them and they too usually "don't get what is damaged"!
3b. Clearly you don't "get it", despite it being explained to you numerous times. The fault in your approach seems to be that you assume that because you don't "get it", then it must be false but what you should be doing is questioning your ability/willingness to "get it"! So, round and round we go.
4. Obviously it's not "NOTHING", that's nonsense, but certainly it's a great deal less compared to speakers in a normal room. What you seem utterly unwilling "to get" is that this is precisely what's wrong with crossfeed; it's why we've moved on to HRTFs, and even that's still not enough (on its own) for many!
4a. Then stop writing nonsense that needs to be criticised/refuted! How many times?
4b. Which is why "nobody" uses tubes, vinyl records or expensive audiophile cables, right?
4c. Pot, kettle, black!



71 dB said:


> [1] Yes, USING SPEAKERS!
> [2] Ok, sorry but why do you then oppose crossfeed so fiercely?? [2a] Your posts indicate you are very worried about what crossfeed does to ER.
> [3] Music production is not what the university I went to was about. It was about electrical engineering and I chose acoustics and signal processing as my speciality. The whole faculty was very much into telecommunications. ...



1. What do you mean "yes, USING SPEAKERS", I was responding to your point about using speakers: "_According to youself that's something like 30 mics worth of multispatiality, that in your opinion doesn't suffer at all when you use speakers ..._". Again, it's just ridiculous, don't you know what you've written? Why do I have to keep quoting what you've written back to you? Why don't you stop writing nonsense/falsehoods in the first place and then we don't have to keep going round in circles? How many times?

2. You're joking, right? I accuse you of lying, of making up an assertion and falsely attributing it to me, and how do you respond? You say you're sorry and then make up another assertion which you falsely attribute to me! It's just more and more ridiculous!
2a. I am somewhat worried about what crossfeed does to the signal/ERs, as obviously it crossfeeds the ERs and therefore alters their relative directional timing. Also obviously, that's not what is intended! Engineers and artists mix/master recordings for speakers in a consumer room/listening environment, for headphones, or most commonly for speakers but with some consideration of headphone playback, but we NEVER mix/master for speakers in an anechoic chamber!! However, as we're talking about crossfeed messing up spatial information (on the recording) which cannot exist in the real/natural world anyway, the end result, what a particular listener will perceive, varies according to that particular listener's perception and preferences. How many times? It's astonishing that after two or more years, you don't even seem to know what it is that I (and others) are "opposing so fiercely"!

3. Yes, we know that you don't know anything about music production, what we don't know is why you keep making false assertions on a subject you admit you don't know anything about and then defend them endlessly?!! How many times?

G


----------



## 71 dB

gregorio said:


> That's just a function of YOUR perception, to me crossfeed does NOT make it sound anything like speakers. It doesn't even sound similar to speakers for some/many of the people who use crossfeed regularly, just preferable to not using crossfeed! You are talking about your personal perception, NOT an objective fact that's applicable to everyone (except jerks, idiots, etc.)! How many times?



When I say similar to speakers I mean some aspects, not all aspects. Speakers don't give excessive ILD, same with crossfeed => SIMILAR.

I am done now. I don't care what you or other people think. Wasting my life here is pointless.


----------



## bigshot

Headphones sound nothing like speakers. Even with crossfeed they sound nothing like speakers.

I agree that you’re wasting your time... ours too.


----------



## castleofargh

71 dB said:


> When I say similar to speakers I mean some aspects, not all aspects. Speakers don't give excessive ILD, same with crossfeed => SIMILAR.
> 
> I am done now. I don't care what you or other people think. Wasting my life here is pointless.


that's rich coming from you on this topic. when you don't try to force your delusion of objectivity onto us, this thread is a friendly one where people who enjoy crossfeed share their personal experiences and discuss the VSTs or analog solutions they've tried. 

you want to discuss ILD and ITD for sound localization? it's clearly a tiny part of what we listen to in music. if you asked around why people don't stick to using crossfeed you'd know that most of them just don't like how it sounds. maybe it's the bass, maybe it's the singer (comb effect on mono or whatever), maybe they prefer the sense of clarity they get from default headphone sound, maybe they actually find that they're losing too much "width" to gain too little "depth", etc. all subjective reasons, and not everybody has the same. you specifically find it important to feel sound localization a certain way, this is your preference. and really nothing else. 
but let's pretend that ILD and ITD are the important stuff because it's the crossfeed topic and you want to discuss that (given how you usually mention ILD like you're trying to summon something). the real model for psychoacoustics would involve one physical sound source reaching both ears, not 2 speakers with the same signal a little louder on one side to make us feel like the instrument is on that side somewhere. speaker playback is obviously unnatural spatiality. panning, which has been used massively in almost all stereo albums, does not give a frack about ITD, and provides some fake ILD-ish cues (it doesn't bother at all with changing the FR, because with 2 sound sources that would create more of a mess than anything else. same with ITD). yet your entire argument is that making headphones' sound partially and very approximately like a pair of speakers, for direct sound only, is the way to experience more natural spatiality. that's your entire "objective" argument. do you start to sense the giant holes in that picture? 

ILD and ITD are the right stuff for real life everyday sound localization(one sound source at one place in space), that much has been supported by a great many experiments and can IMO be considered a fact. but your deduction of that mechanism is that we need to badly copy a part of speaker playback that didn't rely on proper ILD or ITD in the first place, while arguing that you do it for the ILD and ITD. that's strange. just like it's strange how easily you make localization the obvious priority for everybody when headphone playback never really cared much about that and has consistently grown to be the giant market it is today.
anytime we get you to admit that one part of your argument is wrong or cruelly missing something, the next time, or sometimes even in the same post, you're back to writing that crossfeed gives more "natural spatiality" and is a clear improvement over default headphone playback. which again, is fine as your personal impression, but bollocks as an objective statement. if you never plan to drop that delusion of objectivity, the best option is indeed to stop posting about it, because acoustics and psychoacoustics aren't going to change and make you right anytime soon.
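castleofargh's point about panning can be made concrete with a small sketch (a standard constant-power pan law; the function and values are illustrative, not taken from any specific mixing console or DAW): amplitude panning writes the same time-aligned signal to both channels at different levels, so it produces only a level cue, with no ITD and no spectral shaping for a crossfeed to "restore".

```python
import math

def constant_power_pan(sample, pan):
    """pan in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    Both channels get the identical, time-aligned sample, only at
    different levels -- a pure level (ILD-ish) cue, no ITD at all."""
    theta = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    return sample * math.cos(theta), sample * math.sin(theta)

# Center pan: both channels at ~0.707, total power preserved.
# Hard-left pan: (1.0, 0.0) -- on headphones, an effectively
# infinite level difference that no room ever softens.
```

A hard-left pan yields (1.0, 0.0): on speakers the room leaks the left channel to the right ear anyway, while on headphones it does not, which is the asymmetry crossfeed tries to patch.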



71 dB said:


> Nobody would use crossfeed if it didn't improve things.


you can turn this around and argue pretty much anything. like, if audiophile power cables weren't an improvement, nobody would use them. in practice, very few people use crossfeed on a regular basis. even among those who tried, I'm pretty sure the majority doesn't continue using it all the time. you have the same tunnel vision about the situation as a vinyl or tube lover can have sometimes, being so very sure that they are using the objectively superior stuff because it feels better to them, and no amount of fact will change their mind. you usually dislike those people who can't accept the facts, but when it comes to crossfeed, you become one of them. maybe it's one of those things where a psy or a cop mustn't get involved with a personal case because it's assumed that he can't be impartial. IDK.


----------



## bigshot (Oct 12, 2019)

If crossfeed was an improvement, everyone would use it. Headphones would come with it built in.


----------



## gregorio (Oct 13, 2019)

71 dB said:


> [1] When I say similar to speakers I mean some aspects, not all aspects. Speakers don't give excessive ILD, same with crossfeed => SIMILAR.
> [2] I am done now.
> [2a] I don't care what you or other people think.
> [2b] Wasting my life here is pointless.



1. Ah, in that case: A washing machine is similar to a Formula 1 race car. A washing machine has an electric motor, same with a Formula 1 race car => SIMILAR. By the same token, a sitting room is similar to a helicopter, an elephant is similar to a pencil, etc. If you take just one aspect and ignore all the others, then you end up with nonsense!

2. We can only hope!
2a. What I and other people here think is that it's vitally important to consider ALL the relevant science/facts and NOT just one of the facts in isolation, because that leads to all kinds of nonsense; false assertions, audiophile myths and snake oil marketing, which is pretty much the OPPOSITE of science and why this subforum exists in the first place. If you "don't care" about this, that's up to you but you're in the wrong subforum! However, rather strangely, you do seem to care about it with pretty much every other area covered by sound science, just not when it comes to headphone crossfeed?!
2b. Then don't waste your life here, the choice is entirely yours! Either waste it somewhere else where ALL the relevant science/facts aren't a requirement, simply stop posting about this subject here, or preferably (IMHO), do something useful and gain valid/applicable knowledge by learning and understanding ALL the relevant facts!

G

PS. Please read Castle's last post more than once and try to understand it!


----------



## 71 dB

castleofargh said:


> that's rich coming from you on this topic. when you don't try to force your delusion of objectivity onto us, this thread is a *friendly* one where people who enjoy crossfeed share their personal experiences and discuss the VSTs or analog solutions they've tried.



Well, my personal experience of my analog and digital solutions is that crossfeed improves the sound a lot for me. To _me_, the science of human spatial hearing explains well why this happens.

*Crossfeed does NOT make headphones sound like speakers. If I want speaker sound I listen to speakers. Very easy solution!*
*Depending on the recording, crossfeed often gives me an experience of a miniature soundstage that is a few feet in size.*
*Crossfeed in my opinion makes the stereo image cleaner; it's kind of like focusing an unfocused picture, so that instruments don't overlap each other or get scattered all over.*
*Crossfeed in my opinion makes bass sound "real" and physical, while without it (too much ILD) bass sounds fake and artificial to me.*
*Crossfeed in my opinion reduces listening fatigue.*
*Crossfeed in my opinion removes the annoying sensation of the sound "tickling" my ears and moves these sounds a little bit away from my ears.*
*Crossfeed in my opinion takes some "getting used to" to be fully appreciated.*
*Crossfeed in my opinion makes the sound less "sparkly/energetic" and more relaxed/natural, benefiting longer listening.*
*Crossfeed in my opinion gives a more balanced (speaker-like) take on the M/S balance of the recording, so that S doesn't override M.*

This is my friendly opinion. A lot of improvements from a $50 device. Best $50 I ever spent!



castleofargh said:


> you want to discuss ILD and ITD for sound localization? it's clearly a tiny part of what we listen to in music. if you asked around why people don't stick to using crossfeed you'd know that most of them just don't like how it sounds. maybe it's the bass, maybe it's the singer (comb effect on mono or whatever), maybe they prefer the sense of clarity they get from default headphone sound, maybe they actually find that they're losing too much "width" to gain too little "depth", etc. all subjective reasons, and not everybody has the same. you specifically find it important to feel sound localization a certain way, this is your preference. and really nothing else.
> but let's pretend that ILD and ITD are the important stuff because it's the crossfeed topic and you want to discuss that (given how you usually mention ILD like you're trying to summon something). the real model for psychoacoustics would involve one physical sound source reaching both ears, not 2 speakers with the same signal a little louder on one side to make us feel like the instrument is on that side somewhere. speaker playback is obviously unnatural spatiality. panning, which has been used massively in almost all stereo albums, does not give a frack about ITD, and provides some fake ILD-ish cues (it doesn't bother at all with changing the FR, because with 2 sound sources that would create more of a mess than anything else. same with ITD). yet your entire argument is that making headphones' sound partially and very approximately like a pair of speakers, for direct sound only, is the way to experience more natural spatiality. that's your entire "objective" argument. do you start to sense the giant holes in that picture?



People like or dislike things for various reasons. It's complex. As I said, I think jumping into the world of crossfeed takes some "getting used to". The sound is less sparkly and energetic and even more mono-like, so if you are into "special effects" you may find it disappointing at first, but for me the "sparkle" comes back in a minute or so, when my ears adjust to the lower levels of ILD, only better!

We have to live with the problems of stereo sound and how recordings are mixed. I use an adjustable crossfeed level to get the best out of each recording; in some cases no crossfeed at all gives the best result. It's clear that a recording produced for speakers has spatiality that is more or less unsuitable for headphones, because the playback scenario is SO different. No acoustic crossfeed. No early reflections from the room. No room reverberation. No change in sound when you turn your head or move. It is just SO different!

Does this difference matter? What parts matter? I think everything matters, but the difference in ILD levels (lack of acoustic crossfeed) is in my opinion the thing that matters the most. The lack of ER and reverberation is less of a problem, because recordings tend to have their own spatiality, with all kinds of reverberation and delay effects used. My CDs of Bach's organ music contain MASSIVE church acoustics and reverberation, so not adding the ER and reverberation of my own listening room on top of that doesn't sound like a big loss. Maybe the driest recordings could benefit from some room acoustics, but you can always use speakers if you find the sound too dry. The sound not changing when I move isn't an issue for me. I like how the sound stays the same when I move, and in fact it annoys me with speakers! That's what you get when you listen to headphones most of the time!

So, for me the lack of acoustic crossfeed is the BIG issue that really needs fixing, hence crossfeed. I believe you can improve sound by fixing only one thing. You may have both distortion and a silly bass boost in your sound, and fixing only the distortion OR only the silly bass boost would improve the sound even while the other remains. I don't see giant holes in my picture, because I am not talking about "the most natural" spatiality, but "more natural" spatiality.



castleofargh said:


> ILD and ITD are the right stuff for real life everyday sound localization(one sound source at one place in space), that much has been supported by a great many experiments and can IMO be considered a fact. but your deduction of that mechanism is that we need to badly copy a part of speaker playback that didn't rely on proper ILD or ITD in the first place, while arguing that you do it for the ILD and ITD. that's strange. just like it's strange how easily you make localization the obvious priority for everybody when headphone playback never really cared much about that and has consistently grown to be the giant market it is today.
> anytime we get you to admit that one part of your argument is wrong or cruelly missing something, the next time or sometimes even in the same post, you're back writing that crossfeed gives more "natural spatiality" and is a clear improvement over default headphone playback. which again, is fine as your personal impression, but bollocks as an objective statement. if you never plan to drop that delusion of objectivity, the best option is indeed to stop posting about it, because acoustic and psychoacoustic aren't going to change and make you right anytime soon.



The rationale here is that recordings have been mixed so that they work with speakers and take into account the setup of two sound sources. The recordings have spatial information that makes sense when acoustic crossfeed (+ studio acoustics) happens. You have to make ILD larger than natural levels would indicate, because acoustic crossfeed will reduce it massively. That's why things like LCR panning work with speakers. Acoustic crossfeed (and other room acoustics) shapes this otherwise nonsensical spatiality into something that makes sense. When you listen to such recordings with headphones, nothing is done to shape anything and the spatiality remains nonsensical, unless you use crossfeed to shape some sense into it.
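71 dB's ILD argument can be put in numbers with a back-of-envelope sketch (the 12 dB attenuation is just a typical crossfeed figure, not a universal constant, and the helper functions are hypothetical): a hard-panned source has effectively infinite ILD on headphones, while mixing in the opposite channel at -12 dB caps the ILD at exactly 12 dB.

```python
import math

def ild_db(level_left, level_right):
    """Interaural level difference in dB between two channel amplitudes."""
    if level_right == 0.0:
        return math.inf
    return 20.0 * math.log10(level_left / level_right)

def with_crossfeed(level_left, level_right, atten_db=12.0):
    """Each ear hears its own channel plus the other at -atten_db."""
    g = 10.0 ** (-atten_db / 20.0)   # 12 dB down -> ~0.25 linear
    return level_left + g * level_right, level_right + g * level_left

print(ild_db(1.0, 0.0))           # inf: hard-panned, headphones, no crossfeed
l, r = with_crossfeed(1.0, 0.0)
print(round(ild_db(l, r), 1))     # 12.0: ILD capped at the attenuation
```

This is the whole "caps the ILD" effect in miniature: the stronger the crossfeed (smaller `atten_db`), the lower the maximum possible ILD, which is why adjustable crossfeed level matters.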

Yes, headphones haven't cared much about these things, but that doesn't mean I can't improve things myself. I think this whole thread is about people arguing over what should be done about headphone listening after decades of neglect. I believe crossfeed does improve the spatiality of headphone sound, even if it really fixes only ONE thing. It's the one that matters. I believe headphone sound without any processing is fundamentally so wrong that even a half-assed "worst in the world" crossfeeder improves things, not to mention more careful crossfeeding or even more sophisticated things like HRTF convolutions. It's not that crossfeed does everything 100 % correctly (of course not). It's that not doing anything is so ****ed up that almost anything is better. The number 80 is not the same as 100, but it's much closer to 100 than 0 is. If my target is 100 and my options are 0 and 80, choosing 80 is wise, even when skeptics whine about the missing 20. I'd rather be missing 20 than 100! The psychological problem of crossfeed is that people think it's something _additional_ that messes things up, when in reality it's something that is _missing_ in normal headphone listening. Headphones _miss_ acoustic crossfeed and room acoustics. Adding crossfeed is not something additional, it's missing less. The assumption of some sort of crossfeed happening in playback is baked into recordings, otherwise recordings would have a lower level of ILD.



castleofargh said:


> you can turn this around and argue pretty much anything. like, if audiophile power cables weren't an improvement, nobody would use them. in practice, very few people do use crossfeed on a regular basis. even among those who tried, I'm pretty sure the majority doesn't continue using it all the time. you have the same tunnel vision about the situation as a vinyl or tube lover can have sometimes, being so very sure they they are using the objectively superior stuff because it feels better to them, and no amount of fact will change their mind. you usually dislike those people who can't accept the facts, but when it comes to crossfeed, you become one of them. maybe it's one of those things where a psy or a cop mustn't get involved with a personal case because it's assumed that he can't be partial. IDK.



Can people hear snake oil cables in blind tests? Can people hear crossfeed in a blind test? Clearly I am not "selling" snake oil, because the effects of crossfeed are easy for everyone to hear. The only debate is whether the effect is positive or negative. It's interesting how selling snake oil is easier than selling crossfeed. The crossfeed market is small and people tend to make their own crossfeeders, including me.

Digital sound was designed to be superior to vinyl. No wonder it is. Is headphone spatiality designed to be perfect? Are records mixed to give the best possible result with headphones? If that's the case, then there is reason to think crossfeed is no different from tubes and vinyl.


----------



## 71 dB

gregorio said:


> 1. Ah, in that case: A washing machine is similar to a Formula 1 race car. A washing machine has an electric motor, same with a Formula 1 race car => SIMILAR. By the same token, a sitting room is similar to a helicopter, an elephant is similar to a pencil, etc. If you take just one aspect and ignore all the others, then you end up with nonsense!
> 
> 2. We can only hope!
> 2a. What I and other people here think is that it's vitally important to consider ALL the relevant science/facts and NOT just one of the facts in isolation, because that leads to all kinds of nonsense; false assertions, audiophile myths and snake oil marketing, which is pretty much the OPPOSITE of science and why this subforum exists in the first place. If you "don't care" about this, that's up to you but you're in the wrong subforum! However, rather strangely, you do seem to care about it with pretty much every other area covered by sound science, just not when it comes to headphone crossfeed?!
> ...



1. How many aspects of speakers in a room do headphones take into account? Zero. No wonder the result is nonsensical spatiality. One aspect is better than none. A washing machine really is closer to an F1 car than handwashing in a sink is.

2. I have lost my hope.

2a. So, does this mean it's useless to reduce TIM distortion in an amplifier if you don't fix all the other forms of distortion at the same time? You must renovate your whole house, and renovating only the kitchen is not an improvement? I don't look at things in isolation. I talk about what matters, what is relevant as you say. You say I talk about ILD, but not ITD. That's because ILD matters and ITD doesn't matter in THIS context. Why? Because for crossfeed to mess up ITD, the ITD would have to be correct in the first place, but how could it be when recordings are mixed for speakers assuming acoustic crossfeed, ER and room acoustics? My claim is that ITD is very messed up with headphones (because there is no acoustic crossfeed, ER or reverberation) and crossfeed makes it less messed up, if anything. That's why I don't need to address it as a problem, because it is not one. If ITD were perfect for headphones, we would have problems with speakers, since acoustic crossfeed, ER and the reverberation of the listening room mess things up. My understanding is that, especially with older recordings, things like ILD and ITD are messed up no matter what you do, but spatial hearing can make some sense of nonsensical spatiality if it's given a chance, for example by using crossfeed. If the spatiality is stunning on headphones, then maybe that's one of those recordings I listen to _without_ crossfeed.

2b. It was a BIG mistake on my part to register here. It has been totally different from what I expected. Most things in life are. I'll take note of this and never register on a forum again. Better to stay away! Unfortunately it's very difficult to stay away. I have managed to leave one stupid forum. That's something.


----------



## bfreedma

71 dB said:


> 1. How many aspects of speakers in a room do headphones take into account? Zero. No wonder the result is nonsensical spatiality. One aspect is better than none. A washing machine really is closer to F1 than handwashing in a sink.
> 
> 2. I have lost my hope.
> 
> ...




If you would simply couch crossfeed as your personal preference, no one would debate it with you.  Your ongoing effort to turn your personal preferences into universal truth is problematic, as you continue to cherry-pick "facts" and then insist that, in isolation within a complex system, that cherry-picked element constitutes an improvement.

Enjoy crossfeed, but please stop insisting it’s an unquestionable improvement for everyone then insulting those who don’t agree.

Personally, I find crossfeed to be more detrimental than beneficial for headphone listening, but I’m not going to insult those that feel differently.


----------



## 71 dB (Oct 13, 2019)

bfreedma said:


> If you would simply couch crossfeed as your personal preference, no one would debate it with you.  Your ongoing effort to turn your personal preferences into universal truth is problematic, as you continue to cherry-pick "facts" and then insist that, in isolation within a complex system, that cherry-picked element constitutes an improvement.
> 
> Enjoy crossfeed, but please stop insisting it’s an unquestionable improvement for everyone then insulting those who don’t agree.
> 
> Personally, I find crossfeed to be more detrimental than beneficial for headphone listening, but I’m not going to insult those that feel differently.



Well, nobody has to like crossfeed. Everybody has their preferences. What I have tried to do is justify the use of crossfeed from a scientific point of view, cherry-picked or not. It's not like crossfeed was invented "accidentally" without scientific reason. It was reasoned from the fact that ILD is larger with headphones than with speakers. By attacking my arguments you kind of attack those pioneers who invented crossfeed too. I thought I had finally discovered the purpose of my life as a messiah of the universal truth of crossfeed, but it looks like that's not the case and my life still has no purpose and I still have nothing to offer the World except insults. Depressing.

_On the bright side, I was able to fix my older CD player today! It had problems with the disc loading mechanism. I took the mechanism apart, boiled the rubber belt that drives the disc loading mechanism for 10 minutes, put it all back together, and now it works like new! The belt had gotten loose and had also taken on a non-circular shape from sitting in the same position, as I haven't used the CD player for years. Boiling it fixed that. So, I am not a 100% hopeless human being. I can do something right._

Yes, I will enjoy crossfeed on those recordings that need it in my opinion, thank you.


----------




## bigshot (Oct 13, 2019)

71 dB said:


> 1. How many aspects of speakers in a room do headphones take into account? Zero. No wonder the result is nonsensical spatiality. One aspect is better than none. A washing machine really is closer to F1 than handwashing in a sink.



You realize tortured logic like this is why you get pounced upon, don't you? You're not a dumb person. If the purpose of a washing machine is washing clothes, how is a car better than washing the clothes in the sink?

Similarly, you're waffling all over the term "spatiality". One moment you're saying that crossfeed can make headphones sound more like speakers in a room, and then you admit that headphones with crossfeed sound nothing like speakers (which is the truth). I'm curious why you keep twisting and turning like this... do you enjoy the attention? Does it validate how you think you should be treated? Do you just like getting mad at other people? I honestly can't figure out why people keep throwing themselves back into the ring over and over when it's clear they aren't equipped. I see that in several people in this forum and it's like the Black Knight sketch in Monty Python and the Holy Grail... "only a flesh wound".

(that last reference was for Castle)

Bfreedma is right. All you need to do is say, "I like the sound of crossfeed." and leave it at that. You don't need to justify a personal preference and no one will nail you to the wall any more. Trying to scientifically validate a personal preference is like trying to prove religion. It's a waste of time. Just go with it without having to explain it. No one will argue with you over that.


----------



## 71 dB (Oct 13, 2019)

bigshot said:


> You realize tortured logic like this is why you get pounced upon, don't you? You're not a dumb person. If the purpose of a washing machine is washing clothes, how is a car better than washing the clothes in the sink?
> 
> Similarly, you're waffling all over the term "spatiality". One moment you're saying that crossfeed can make headphones sound more like speakers, and then you admit that headphones with crossfeed sound nothing like speakers (which is the truth). I'm curious why you keep twisting and turning like this... do you enjoy the attention? Does it validate how you think you should be treated? Do you just like getting mad at other people? I honestly can't figure out why people keep throwing themselves back into the ring over and over when it's clear they aren't equipped. I see that in several people in this forum and it's like the Black Knight sketch in Monty Python and the Holy Grail... "only a flesh wound".
> 
> (that last reference was for Castle)



I have been misunderstood. Crossfeed does not make speaker spatiality! It does something else, apparently too difficult for me to explain in English, but that something else improves headphone sound for me a lot!

That something else is related to ILD, which is related to spatiality, but that doesn't mean SPEAKER spatiality, just spatiality. Crossfeed shares reduced ILD levels with speakers, so in THAT sense it's speaker-like, but that DOESN'T mean speaker spatiality!!!! Speaker spatiality still has massive differences from crossfeed spatiality.


----------



## bigshot

That is a good start. We'll see if you can leave it at that once gregorio comes back to answer your recent flurry of comments.


----------



## Davesrose (Oct 13, 2019)

While recent exchanges in this thread don't seem to be productive, the topic of reverb and "effective spatiality" was mentioned.  I'm writing while watching Blade Runner in UHD (and its new Atmos mix vs the previous 5.1).  To me, it's a very effective movie when it comes to "spatiality"...more so than some new remixes of older movies into "3D audio".  I've noticed quite a few new remixes do at least try more pans and focus ambient sounds in the rear speakers (movies like Apollo 13 have raised levels of typewriter sounds during office scenes...and there also appears to be a more coherent soundstage for the music).  Blade Runner was a different beast, in which the music was heavily synthesized...and demanded its own reverb.  It seems that different sound effects are heavy in reverb as well (watch the opening scene with spinners flying toward the Tyrell corp, and how reverb vacillates between the VFX sound effects and actual instrument notes).  With that and the apparent attention to an optimal telecine process, this should be the final edition of the movie.


----------



## bigshot (Oct 14, 2019)

I don't have the Atmos, but I've always preferred the original theatrical version of Blade Runner. Has that been remixed too?

I think most people who have multichannel systems prefer when the channels are discrete over ones that mesh to create sound fields. I usually like it when the sound reflects an overall space, but I know I'm on my own with that.


----------



## 71 dB (Oct 14, 2019)

Fun Fact: I have never owned a copy of _Blade Runner_ on any format. No VHS, no DVD, no Blu-ray… …the movie simply does less for me than for most sci-fi fans, and I have seen it enough times in the movie theatre and on TV. Not saying it's a bad movie, on the contrary it is a very good movie, but it just doesn't resonate with my personal references. It's too gritty for my taste perhaps, and it also has more eye gore than I cared for.

No, ILD doesn't scientifically explain why I don't care about _Blade Runner_ that much.


----------



## gregorio (Oct 14, 2019)

71 dB said:


> *Crossfeed in my opinion makes the stereo image cleaner, it's kind of like focusing an unfocused picture so that instruments don't overlap each other or get scattered all over.....*  This is my friendly opinion.



Firstly, NO, that is NOT your "friendly" opinion because time and again you state it as objective fact and insult those who do NOT share your opinion. Secondly, your opinion is WRONG! What's actually happening is the exact opposite of what you describe; crossfeed is kind of like unfocusing a focused picture so that instruments DO overlap each other because obviously, you are taking the signal from one channel and overlapping it on the other. I don't doubt that you personally are somehow perceiving a "cleaner stereo image" with crossfeed but that's just your personal perception. Unfortunately, instead of realising that it's obviously not a cleaner stereo image but just a function of your personal perception, you cherry-pick scientific facts and misrepresent or simply make-up other facts in a fallacious attempt to turn your personal perception into an objective fact! How many times?


71 dB said:


> [1] As I said I think jumping to the world of crossfeed takes "getting used to".
> [2] The sound is less sparkly and energetic and even more mono-like, so if you are into "special effects" you may find it disappointing at first, but for me the "sparkle" comes back in a minute or so when my ears adjust to the lower levels of ILD, only better!


1. Which is just a more polite version of what you've falsely stated previously. Firstly, the implication of your statement (which you've expressed explicitly in the past) is that if someone disagrees or doesn't perceive what you're perceiving, it's not because they simply have different perception to you but because they haven't put in the effort to "get used to it [crossfeed]", which is just a politer variation of they're idiots, ignorant, etc. Secondly, your statement is self-contradictory and doesn't make any logical sense! If, as you claim, crossfeed somehow makes the spatiality "natural", why would anyone need to "get used to it"? Surely they've had their entire life to "get used to" natural spatiality? So, if someone needs to "get used to" crossfeed, it must be because crossfeed is NOT natural spatiality!

2. Firstly, NO, obviously the sound is NOT less "sparkly and energetic" because obviously it's exactly the same sound just crossfed!! Secondly, how does the "sparkle" come back? Does your DAC or crossfeeder know when your ears have "adjusted to the lower levels of ILD" and change the FR of its output, or is this just a function of your personal perception? The former would be an objective fact (but obviously isn't what happens) and the latter is just a function of your personal perception and is NOT objective fact! How many times?


71 dB said:


> [1] Does this difference matter? What parts do matter? I think everything matters, but the difference in ILD levels (lack of acoustic crossfeed) is in my opinion the thing that matters the most. [1a] The lack of ER and reverberation is less of a problem, because recordings tend to have their own spatiality, with all kinds of reverberation and delay effects used.


1. This is NOT the "71dB's opinion" forum, it's the science forum. What "matters the most" to you is a preference, your personal preference, NOT a scientific fact!
1a. NO, the "lack of ER and reverberation is less of a problem" because that's your personal preference, NOT because "recordings tend to have their own spatiality"!


71 dB said:


> [1] So, for me the lack of acoustic crossfeed is the BIG issue that really needs fixing, hence crossfeed. I believe you can improve sound by fixing only one thing.
> [2] You may have both distortion and silly bass boost in your sound and fixing only either the distortion OR the silly bass boost would improve sound even when the other remains. I don't see giant holes in my picture, because I am not talking about "the most natural" spatiality, but "more natural" spatiality.


1. Exactly, "for you" (your perception/preference) but not for everyone. For me, the BIG issue that needs fixing is a lack of depth, crossfeed doesn't fix that issue and typically makes other aspects worse, hence NO crossfeed! The difference is that I realise this is my personal perception/preference, so I don't falsely assert it's an objective fact and I don't insult you because you have a different perception/preference!!! How many times?

2. Firstly, obviously crossfeed doesn't fix distortion in the recording! Secondly, I personally prefer to hear the "bass boost" the artists/engineers have put in the recording, regardless of my personal opinion/tastes (even if I think it's "silly"). You of course are entitled to a different preference but what you are NOT entitled to do here, is state that your preference is an objective fact that "improves the sound"! How many times?


71 dB said:


> [1] Can people hear snake oil cables in blind tests? Can people hear crossfeed in a blind test? Clearly I am not "selling" snake oil, because the effects of crossfeed are easy for everyone to hear. The only debate is whether the effect is positive or negative.
> [2] It's interesting how selling snake oil is easier than selling crossfeed.
> [2a] Crossfeed market is small and people tend to make their own crossfeeders including me.



1. Despite it being CLEARLY explained to you TWICE, you STILL don't understand the analogy! So, let's use a different one, vinyl for example. Yes, the debate IS whether the effect is a positive or negative one. In a blind test it's reasonably trivial to differentiate vinyl from digital. Most likely there would be someone who preferred the vinyl to the digital; would this prove that vinyl is higher fidelity or "improved sound" relative to digital, or would it just be a function of their preferences/perception? If this person stated that in their opinion vinyl is higher fidelity (on the basis that it sounded better to them), would that opinion be an objective/scientific fact or would it be contrary to the actual facts/science?

2. Not here in this forum it's not! No one is saying that crossfeed is snake oil, what's snake oil is your false assertions that your preferences are objective facts. How many times?
2a. If it's an objective fact that crossfeed makes headphone presentation more natural, why is it such a small market? Why isn't crossfeed a standard default on all headphones? How many times?


71 dB said:


> 1. How many aspects of speakers in a room do headphones take into account? Zero. No wonder the result is nonsensical spatiality.
> [1a] One aspect is better than none. A washing machine really is closer to F1 than handwashing in a sink.
> 2a. You must renovate your whole house, and renovating only the kitchen is not an improvement?
> [2a1] I don't keep things in isolation. [2a2] I talk about what matters, what is relevant as you say.
> ...



1. And how many aspects of natural spatiality do "speakers in a room" take into account? Zero. No wonder the result is nonsensical spatiality! You say you've studied, understand and take the science of acoustics into account, but then your arguments deliberately ignore it, why is that? Even if recordings contained perfectly natural spatiality (which they don't), how would having the perfect spatial information of, say, a large church inside your small living room be "sensical" spatiality? We pretty much never have "sensical"/"natural" spatiality, regardless of speakers or headphones, and your failure to understand or take into account why the spatial information on commercial recordings might seem to be somewhat (or entirely) natural when played back on speakers is largely what invalidates many/most of your factual assertions! How many times?
1a. That's just unbelievably self-contradictory and ridiculous, only by ignoring the whole purpose of an F1 race car (and a washing machine) can you make such a statement, unless you're just insane? And, NO, one aspect is NOT better than none, because you consistently ignore the fact that by improving that one aspect (with crossfeed) you damage other aspects. Your analogy is a perfect example:

2a. If in the process of renovating your kitchen you wreck your living room, then as far as your whole house is concerned, NO, renovating only the kitchen is NOT an improvement, it's probably worse!
2a1. BUT you've just done exactly that! You've isolated the kitchen from the rest of the house and are ignoring what you've done to the sitting room!
2a2. Maybe you never use the sitting room and all that matters to you is the kitchen but that's just YOU, other people do use their sitting room, it does matter to them and as far as the "whole house" is concerned it's entirely relevant and it's falsehood for you to state otherwise!!
2a3. Exactly, WHY?? The answer appears to be: Because "this context" is someone who's fixated on ILD and ignores/dismisses everything else! And then to justify that position you simply make-up more nonsense:
2a4. And here's the nonsense! Obviously, you CAN have messed-up ITD (or anything else) and mess it up even more!
2a5. You've answered your own question BUT you continually refuse to accept the obvious fact! Recordings that are mixed for speakers assume a combination of crossfeed, ER and other room acoustics, they do NOT assume ONLY acoustic crossfeed!! How many times?
2a6. And how does just endlessly repeating that obviously false claim make it true? We have all sorts of messed-up time/delay (spatial) information on just about all commercial music recordings. Crossfeed cannot and does not deconstruct the recording and magically make that "messed-up" time/delay information less messed-up; all it does is crossfeed the signal. So what we have is, if anything, even more "messed-up", because we've "messed-up" the directional information the time/delay information also contains. Again, you personally seem to perceive this as somehow less messed-up and more natural, even though it's really the opposite, but that's a function of your perception, not what's actually occurring! How many times?
2a7. That's obviously nonsense! You're admitting you're basing this assertion on the assertion in the previous point, which is also demonstrably false! Furthermore, if it wasn't a problem then pretty much everyone would use crossfeed as standard and no one would have bothered to try to improve it (with HRTFs, etc.), another fact you refuse to acknowledge. How many times? The REAL reason YOU don't need to "address it as a problem" is because YOU personally are incapable of perceiving it and with simple crossfeed everything sounds fine and dandy to you. But then that's an issue of your personal perception, not of what's actually occurring! How many times?
2a8. Yes, we do have problems with speakers and the acoustics of the mix studio/listening room does mess things up, which is why we have music recording, mixing and mastering engineers in the first place (rather than just a computer doing it all automatically)! If you had a basic/reasonable understanding of this (and the implications of it), then a lot of your nonsense assertions would stop/disappear!


71 dB said:


> [1] What I have tried to do is justify the use of crossfeed from scientific point of view, cherry picked or not.
> [2] It's not like crossfeed was invented "accidentally" without scientific reason. It was reasoned from the facts that ILD is larger with headphones than speakers. Attacking my arguments you kind of attack also those pioneers who invented crossfeed.
> [3] I thought I had finally discovered the purpose of my life as a messiah of the universal truth of crossfeed ...
> [3a] I still have nothing to offer to the World except insults. [3b] Depressing.


1. You don't seem to understand that cherry-picking scientific facts is NOT science, it's pseudoscience or completely irrelevant nonsense. Again, skin effect is a scientific fact, but cherry-picking that one scientific fact, ignoring the others (such as that skin effect doesn't affect audible frequencies) and applying that one scientific fact to analogue audio cables is NOT a "scientific point of view", it's the opposite, a perversion of the science! How many times?

2. Huh? Using your logic, science itself is kind of attacking "those pioneers who invented crossfeed" by dropping crossfeed in favour of more sophisticated models, and of course the "proof of the pudding" is that crossfeed is, as you say, a small market, because in practice it doesn't work (is not preferable) for many people. I don't subscribe to your logic though!

3. That's a pretty serious error of judgement on a number of different levels! Firstly, in trying to be a messiah of anything. Secondly, in picking something that may be a truth for you but isn't a universal truth. And lastly, in trying to achieve that in a science forum!!!
3a. And insults are just as unacceptable here as falsely presenting a preference as a "universal truth".
3b. That's up to you. But just endlessly repeating the same false "universal truth" is not going to change anything here. Which brings us back again to the cliche attributed to Einstein, which you're ignoring!

G


----------



## gregorio

Hey @bigshot, can you do your usual and post an appropriate video clip please? I was thinking of the scene from "The Life of Brian" where his mother says; "He's not the messiah, he's a very naughty boy"! 

G


----------



## ironmine (Oct 14, 2019)

*AirWindows Monitoring Redux*

"Cans C" setting sounds good*. *
("Cans D" is worthless).


----------



## 71 dB

gregorio said:


> Firstly, NO, that is NOT your "friendly" opinion because time and again you state it as objective fact and insult those who do NOT share your opinion. Secondly, your opinion is WRONG! What's actually happening is the exact opposite of what you describe; crossfeed is kind of like unfocusing a focused picture so that instruments DO overlap each other because obviously, you are taking the signal from one channel and overlapping it on the other. I don't doubt that you personally are somehow perceiving a "cleaner stereo image" with crossfeed but that's just your personal perception. Unfortunately, instead of realising that it's obviously not a cleaner stereo image but just a function of your personal perception, you cherry-pick scientific facts and misrepresent or simply make-up other facts in a fallacious attempt to turn your personal perception into an objective fact! How many times?



Can I stop calling it an objective fact and stop insulting people, or am I "doomed" for the rest of my life? Questions I have for you:

- Why do I experience a more focused stereo image with crossfeed?
- Why doesn't this unfocusing happen with speakers due to acoustic crossfeed? The soundwaves from the speakers clearly overlap each other! Not only that, but you have early reflections and reverberation all overlapping! It should be an epic mess if overlapping were such a problem as you say it is with crossfeed. My understanding is that overlapping is only a problem if it happens in a certain way (some people simply call it "bad acoustics", for example) and luckily the way it normally happens in speaker listening or in crossfeed isn't that bad.
- Perception is what counts. Music is not produced for signal analysers but for human ears. Raw unprocessed tracks are technically "cleaner" versions of the recorded sound (that's what the mics "heard") compared to the final mix of a track, but to human ears the final mix hopefully sounds much cleaner (polished). I "cherry-pick" the facts that I think matter the most in the context. According to your logic we should use crosstalk cancelling with speakers, because justifying acoustic crossfeed is cherry-picking scientific facts about spatial hearing. Read that again and think about it carefully, because that's what you are effectively saying.


----------



## 71 dB

gregorio said:


> 1. Which is just a more polite version of what you've falsely stated previously. Firstly, the implication of your statement (which you've expressed explicitly in the past) is that if someone disagrees or doesn't perceive what you're perceiving, it's not because they simply have different perception to you but because they haven't put in the effort to "get used to it [crossfeed]", which is just a politer variation of they're idiots, ignorant, etc. Secondly, your statement is self-contradictory and doesn't make any logical sense! If, as you claim, crossfeed somehow makes the spatiality "natural", why would anyone need to "get used to it"? Surely they've had their entire life to "get used to" natural spatiality? So, if someone needs to "get used to" crossfeed, it must be because crossfeed is NOT natural spatiality!
> 
> 2. Firstly, NO, obviously the sound is NOT less "sparkly and energetic" because obviously it's exactly the same sound just crossfed!! Secondly, how does the "sparkle" come back? Does your DAC or crossfeeder know when your ears have "adjusted to the lower levels of ILD" and change the FR of it's output or, is this just a function of your personal perception? The former would be an objective fact (but obviously isn't what happens) and the latter is just a function of your personal perception and is NOT objective fact! How many times?



1. More polite? I am making progress then. I have been harsh in my words, and my own frustrations in life have manifested as insults. If you read my first post about crossfeed after I registered here, you can see I didn't insult anyone. It was after the first attacks on what I had written, which were a total surprise to me, that I lost my marbles and the insults started. It was a shock to read people opposing my beliefs so strongly, those beliefs I have put so much effort into. Crossfeed has been SO IMPORTANT for me for 7 years, so it hurt badly to read all those "WRONG" replies. This is no excuse for bad behavior, but maybe it explains why I insulted people.

Getting used to it is needed, because I believe normal headphone sound is unnatural spatiality and you need to unlearn it. If all headphones had always had crossfeed, everyone would regard that as normal, and headphone sound without crossfeed would sound very strange and unnatural. I believe this, but you can disagree.

2. It's less sparkly and energetic because the level of the S (side) channel has been reduced compared to the M (mid) channel. So, some of the sparkle and energy that is the difference between mono and stereo sound is gone. The sparkle comes back when spatial hearing adjusts to the new spatiality and the S channel gets more weight in the spatial decoding process. However, since the sound is crossfed, the spatiality is more natural, meaning the sparkle is "better" thanks to more natural spatiality. Of course my crossfeeder doesn't know what my spatial hearing does, nor does it need to. Why on earth would I want a change in FR? This is all subjective. This is what I experience. Interesting if I am the only one. All objectivity disappears in audio if every person perceives things in their own unique way. Also, what I learn about sound based on my own experiences has value only to myself and is useless nonsense for others. That is quite depressing. It's like developing a cancer drug that works for only one person in the World. I am really bad at doing things that have value to other people. I just don't know what matters to other people and how other people perceive things. That's my epic weakness.
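The mid/side claim here follows directly from the arithmetic. For a frequency-independent crossfeed with mix gain g (ignoring the lowpass and delay of real crossfeeders), mid scales by (1+g) and side by (1−g). A quick numeric check, with an assumed −12 dB level and arbitrary test signals:

```python
import numpy as np

# Frequency-independent crossfeed with mix gain g: L' = L + g*R, R' = R + g*L.
# In mid/side terms, M = (L+R)/2 and S = (L-R)/2, so M' = (1+g)*M and
# S' = (1-g)*S: the side channel is attenuated relative to mid by (1-g)/(1+g).
g = 10.0 ** (-12 / 20)                     # -12 dB crossfeed level (illustrative)
rng = np.random.default_rng(0)
L, R = rng.standard_normal((2, 48000))     # arbitrary test signals
Lp, Rp = L + g * R, R + g * L
M, S = (L + R) / 2, (L - R) / 2
Mp, Sp = (Lp + Rp) / 2, (Lp - Rp) / 2
assert np.allclose(Mp, (1 + g) * M) and np.allclose(Sp, (1 - g) * S)
ratio_db = 20 * np.log10((1 - g) / (1 + g))
print(round(ratio_db, 1))                  # side-to-mid drop: -4.5 (dB)
```

So at a −12 dB crossfeed level the side channel drops about 4.5 dB relative to mid, which is the "less sparkly, more mono-like" effect described, stated as a number.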


----------



## bfreedma

71 dB said:


> Well, nobody has to like crossfeed. Everybody has their preferences. What I have tried to do is justify the use of crossfeed from a scientific point of view, cherry-picked or not. It's not like crossfeed was invented "accidentally" without scientific reason. It was reasoned from the fact that ILD is larger with headphones than with speakers. By attacking my arguments you kind of attack those pioneers who invented crossfeed too. I thought I had finally discovered the purpose of my life as a messiah of the universal truth of crossfeed, but it looks like that's not the case and my life still has no purpose and I still have nothing to offer the World except insults. Depressing.
> 
> _On the bright side, I was able to fix my older CD player today! It had problems with the disc loading mechanism. I took the mechanism apart, boiled the rubber belt that drives the disc loading mechanism for 10 minutes, put it all back together, and now it works like new! The belt had gotten loose and had also taken on a non-circular shape from sitting in the same position, as I haven't used the CD player for years. Boiling it fixed that. So, I am not a 100% hopeless human being. I can do something right._
> 
> Yes, I will enjoy crossfeed on those recordings that need it in my opinion, thank you.




My last comment to you on this topic.

1.  I'm not attacking anyone - you are misconstruing the crossfeed pioneers' intent as you tilt at your windmill.
2.  Your passive-aggressive attempts to guilt everyone here over this ridiculous fixation on crossfeed are inappropriate.  If these types of forums are causing you that much stress, I would suggest avoiding them.


----------



## Davesrose

bigshot said:


> I don't have the Atmos, but I've always preferred the original theatrical version of Blade Runner. Has that been remixed too?
> 
> I think most people who have multichannel systems prefer when the channels are discrete over ones that mesh to create sound fields. I usually like it when the sound reflects an overall space, but I know I'm on my own with that.



They did have different versions of the movie for the Blu-ray release several years ago (so the theatrical cut is in HD with DTS-MA 5.1).  Ridley Scott really hates the theatrical version (the studio forced him to add a narration and make a happy ending).  The UHD is the "Final Cut", which besides his preferred editing also has some new digital effects (including inserting actors into certain scenes and using Harrison Ford's son to speak new lines).  The Final Cut was made for Blu-ray with film scanned at 4K and 8K, and it seems Scott isn't like Lucas (who continued to redo the digital effects in Star Wars for every home release).


----------



## bigshot (Oct 14, 2019)

Yeah, I've seen the "Final Cut" of Blade Runner. I don't care for it nearly as much. My Blu-ray set has a workprint version too. The original release is the best.



71 dB said:


> I am really bad at doing things that have value to other people. I just don't know what matters to other people and how other people perceive things. That's my epic weakness.



I bet you're heaps of fun at parties.



gregorio said:


> Hey @bigshot, can you do your usual and post an appropriate video clip please?



Couldn't find Monty Python. Will this one do?


----------



## 71 dB (Oct 14, 2019)

bfreedma said:


> My last comment to you on this topic.
> 
> 1.  I'm not attacking anyone - you are misconstruing the crossfeed pioneers' intent as you tilt at your windmill.
> 2.  Your passive aggressive attempts to guilt everyone here over this ridiculous fixation on crossfeed are inappropriate.  If these types of forums are causing you that much stress, I would suggest avoiding them.



1. I don't think the intent of the crossfeed pioneers was ambiguous at all. They weren't trying to feed the hungry people of Africa. They realized that stereophonic sound tends to have quite a large ILD even at low frequencies: acoustic crossfeed makes ILD smaller in speaker listening, and mimicking that process electronically with headphones can be beneficial.

2. I agree. It has taken years to realize where the problem is. As I have said, coming here was a mistake and I would probably be a happier and more balanced person if I hadn't come here, but I DID come here because I mistakenly thought my opinions of crossfeed would be respected and it's difficult to leave.


----------



## bigshot

71 dB said:


> 1) As I have said, coming here was a mistake and 2) I would probably be a happier and more balanced person if I hadn't come here.



1) I agree.
2) I'm not so sure about that.

*You can become completely inactive if you would prefer. Fine with me.*


----------



## 71 dB

gregorio said:


> 1. This is NOT the "71dB's opinion" forum, it's the science forum. What "matters the most" to you is a preference, your personal preference, NOT a scientific fact!
> 
> 2. NO, the "lack of ER and reverberation is less of a problem" because that's your personal preference, NOT because "recordings tend to have their own spatiality"!
> 
> ...



1. Ok, what does science objectively say? You talk about how everything must be scientific, but you don't give much scientific substance, do you? In other words, you don't give scientific alternatives to my opinions, you just keep telling me my opinions are no good here. For example, what is the limit _scientifically_ for crossfeed level to ruin ITD? -25 dB? -10 dB? -3 dB? In my opinion the useful range of crossfeed level is between -12 dB and -1 dB depending on the recording, and within this range ITD is not ruined. It is changed (-12 dB changes it VERY little and -1 dB changes it more, but the change is beneficial if anything, because it causes the stereo image to twist from the normal left-right shape into a shape that has a little bit of depth). It's a bit like a camera's zoom lens altering the lens angle.

This ITD business can be analysed by studying the phase shift when summing up sine waves of different phases. I did this years ago, when I discovered crossfeed and wanted to know what it does to the sound, and I didn't lose any sleep over the conclusions. Acoustic crossfeed changes ITD a little bit, and so does electronic crossfeed. What's the big deal? A small change in ITD translates into a small change in the apparent direction of the sound: an 8 µs change in ITD means about a 1° change in apparent angle. That's what science says.
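
That phase-shift analysis can be sketched numerically. The snippet below is a minimal, illustrative model only: it assumes a single sine tone summed with a flat-gain, fixed-delay copy of itself (using the 0.5 ms bs2b-style delay quoted earlier in the thread), whereas a real crossfeed filter lowpasses the cross signal, so the numbers are ballpark, not measurements:

```python
import numpy as np

def phase_delay_shift(freq_hz, xfeed_db, delay_ms=0.5):
    """Extra phase delay (in microseconds) a sine tone picks up when a
    delayed, attenuated copy of itself is summed in, crossfeed-style."""
    g = 10 ** (xfeed_db / 20.0)              # linear crossfeed gain
    w = 2 * np.pi * freq_hz                  # angular frequency (rad/s)
    tau = delay_ms / 1000.0                  # crossfeed delay (s)
    # sin(w*t) + g*sin(w*(t - tau)) = A*sin(w*t - phi)
    phi = np.arctan2(g * np.sin(w * tau), 1 + g * np.cos(w * tau))
    return phi / w * 1e6                     # equivalent delay in microseconds

# At 200 Hz: a -12 dB crossfeed shifts the effective delay by roughly
# 97 us, while a gentler -25 dB crossfeed shifts it by roughly 25 us.
print(phase_delay_shift(200, -12))
print(phase_delay_shift(200, -25))
```

The point the model makes is simply that the ITD shift scales with the crossfeed gain: stronger crossfeed changes the effective delay more, weaker crossfeed changes it less.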

2. Again, all you talk about is my personal preference. Does the science exist or doesn't it? Am I supposed to ignore my own preferences just because they are my personal preferences? How can we discuss crossfeed at all if EVERYTHING is just personal preferences and science is nowhere to be found? What does science say about the lack of ER and reverberation?

3. So what is your solution to address the issue of depth? Bigshot's solution is to put the headphones away and listen to speakers. I agree, that's a good solution for THAT problem. When I listen to headphones I understand it's unrealistic to expect much depth. I can get some depth (a miniature soundstage) if the recording allows it, but nowhere near what speakers give. However, I can get otherwise very enjoyable sound using crossfeed.

4. Bass boost in this context means, of course, boost caused by some technical problem or human error that the artist didn't mean. It's also funny how much you worship/respect the personal intent of artists and sound engineers (even above your own preferences!) while giving zero respect to, for example, my personal preferences. I guess artists and sound engineers are privileged first-class citizens while I am just an ignorant second-class citizen whose opinions don't count. How many times are you going to write "how many times?"


----------



## 71 dB

gregorio said:


> 1. Despite it being CLEARLY explained to you TWICE, you STILL don't understand the analogy! So, let's use a different one, vinyl for example. Yes the debate IS whether the effect is positive or negative one. In a blind test it's reasonably trivial to differentiate vinyl from digital. Most likely there would be someone who preferred the vinyl to the digital, would this prove that vinyl is higher fidelity or "improved sound" relative to digital or would it just be a function of their preferences/perception? If this person stated that in their opinion vinyl is higher fidelity (on the basis that it sounded better to them), would that opinion be an objective/scientific fact or would it be contrary to the actual facts/science?
> 
> 2. Not here in this forum it's not! No one is saying that crossfeed is snake oil, what's snake oil is your false assertions that your preferences are objective facts. How many times?
> 2a. If it's an objective fact that crossfeed makes headphone presentation more natural, why is it such a small market? Why isn't crossfeed a standard default on all headphones? How many times?



1. There's two questions:

a) Do you prefer vinyl or CD?
b) Which one is technically superior? Vinyl or CD?

To a) anyone can give their opinion based on their preferences. To b) there is only one correct objective answer: CD is technically superior. It's quite hard to try and explain how vinyl is better than CD using science. 

c) Do you like crossfeed?
d) Which is technically more correct, crossfeed or no crossfeed?

With crossfeed you of course have the aspect of c), and everyone is entitled to like or dislike crossfeed. We are debating d), and I have tried to give scientific explanations and justifications for my argument that crossfeed really is technically more correct: not 100 % correct, but more correct than no crossfeed. What is the correct value of ILD for a recording? What I get with speakers should be more or less what it should be, and crossfeed gives me ILD levels closer to speakers, so in that sense it is technically more correct. Comparing the other aspects of spatial hearing is very difficult and perhaps totally meaningless, because headphone sound differs from speaker sound in regard to room acoustics, but ILD is something easy and simple enough to fix, and so crossfeed was invented and is used by some people, including me.
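
To make the ILD argument concrete, here is a minimal sketch with hypothetical channel levels and a flat crossfeed gain (ignoring the lowpass filtering and delay a real crossfeed filter applies, so the dB figures are only indicative):

```python
import numpy as np

def ild_db(left, right):
    """Interaural level difference in dB (positive = left louder)."""
    return 20 * np.log10(left / right)

def crossfeed(left, right, xfeed_db):
    """Mix an attenuated copy of each channel into the other one."""
    g = 10 ** (xfeed_db / 20.0)
    return left + g * right, right + g * left

# A near hard-panned element: 20 dB ILD on headphones without crossfeed.
L, R = 1.0, 0.1
print(ild_db(L, R))                    # 20 dB
print(ild_db(*crossfeed(L, R, -12)))   # roughly 9.3 dB
print(ild_db(*crossfeed(L, R, -6)))    # roughly 4.8 dB
```

The stronger the crossfeed, the smaller the residual ILD, which is the sense in which crossfeed pushes headphone levels toward what acoustic crossfeed produces with speakers.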

2. I have my preferences, but stating scientific facts is only a preference for truth. You haven't debunked my scientific facts; all you do is make ridiculous claims that since I ignore some insignificant details I can't apply the science at all in my reasoning. We can talk about the significance of the details I "ignore", but I don't see much of that from you. All you do is put me down, and then you wonder why I lose my marbles and insult people.

2a. That's a great question, and an interesting one. Most people are not analytic listeners and don't pay attention to such things as ILD. If you haven't studied spatial hearing, concepts like ILD and excessive spatiality are totally alien. I studied spatial hearing at university and STILL it took me years to realize something is fundamentally wrong with headphone sound, and that the sound I experience with headphones isn't something I have to accept just because I use headphones. Spatiality can be scaled/modified to work better on headphones. As for more analytical listeners, I believe many prefer a more energetic/sparkly sound and believe crossfeed just means a duller, more mono sound. Maybe these people don't usually listen to music for that long at a time, so the fatigue aspect isn't an issue for them. Crossfeed is just difficult to sell, because what it offers is "unattractive" things like less fatigue, more natural bass, etc. People want XTRA-MEGA-BASS, COLORS!!! EXPLOSIONS!!! VIOLENCE!! BLOOD! instead. Crossfeed is not about that. It's more about inner peace, the idea that less (ILD) is more (naturalism), and that just doesn't sell well. Crossfeed is like classical music. It has survived the test of time, but it still sells much less than pop music that is forgotten in 10 years.


----------



## 71 dB

bigshot said:


> 1) I agree.
> 2) I'm not so sure about that.
> 
> *You can become completely inactive if you would prefer. Fine with me.*



Becoming inactive is difficult, like quitting smoking.


----------



## bfreedma




----------



## bigshot (Oct 14, 2019)

The internet sure is weird. One would think it would be a great form of communication, but some people would rather use it to pester people to get them to give them the attention they don't get from human interaction in real life. If it wasn't so annoying, I might feel sorry for them. But of course sympathy is also something they seem to crave. Ultimately, it's a waste of time.


----------



## 71 dB

bigshot said:


> The internet sure is weird. One would think it would be a great form of communication, but some people would rather use it to pester people to get them to give them the attention they don't get from human interaction in real life. If it wasn't so annoying, I might feel sorry for them. But of course sympathy is also something they seem to crave. Ultimately, it's a waste of time.



I am the first to admit my social problems, because that's who I am. I'm pretty sure I have mild Asperger's, which is not something that makes you a social genius. I also have a very unattractive physical appearance and serious issues with self-confidence, but a guy with 20,000+ posts accuses a guy with 1,000+ posts of attention seeking? That's quite ironic. You don't even care about headphones, and still you keep posting here as one of the most active members?


----------



## bigshot (Oct 14, 2019)

I focus on discussing topics with other people, not constantly directing attention to myself. I'm not telling you to shut up. I'm suggesting you focus more of your energy into understanding what other people say to you, and not pour so much energy into attracting attention to yourself. You might find that you can learn from other people by admitting you are wrong once in a while. You also won't have to desperately scramble to validate yourself all the time, which can give you more time to actually make points that are useful to the people around you. At this point your content level has dropped to zero and you're just flailing around trying to defend yourself. When I post jokes and funny images and start answering in blank one sentence replies, it's because I know that you're not listening to a word I say. Why should I bother to interact with you in a real conversation? Your strategy and goals probably deserve rethinking. You've seriously messed up with Gregorio. You could have learned a lot from him, but your attitude has completely pissed him off. For good reason, I might add.


----------



## castleofargh

a Finn telling people in other countries that he's an introvert, that's like me as a French announcing that I'll soon be on strike. what else is new? ^_^

now, can we try to post crossfeed related stuff please.


----------



## bigshot

or not...


----------



## 71 dB (Oct 15, 2019)

castleofargh said:


> a Finn telling people in other countries that he's an introvert, that's like me as a French announcing that I'll soon be on strike. what else is new? ^_^



Have you ever met _drunken_ Finns? Enough alcohol turns introvert Finns into extrovert party animals. This drastic change can be a very surprising experience for people from "extrovert cultures" who think Finns are _always_ introvert. I don't drink alcohol so I am never extrovert. For some unknown reason alcohol doesn't work for me in this sense. You give me alcohol and instead of becoming extrovert like others, I become passive and sleepy. Sleepy introvert is even less extrovert than just introvert.


----------



## sonitus mirus




----------



## gregorio (Oct 16, 2019)

71 dB said:


> [1] Can I stop calling it an objective fact, stop insulting people or am I "doomed" for the rest of my life?
> [2] Questions I have for you:
> [2a] Why do I experience a more focused stereo image with crossfeed?
> [2b] Why doesn't this unfocusing happen with speakers due to acoustic crossfeed?
> ...


1. Why are you asking me/us? It's entirely up to you!

2a. Because that's simply the result of your personal perception trying to interpret the mess of spatial information that a commercial music recording contains but different people's perception will interpret it differently. How many times?
2b. It DOES happen with speakers. How many times?
2b1. It IS an epic mess. How many times? ... Clearly, you are misunderstanding the situation and are asking the wrong questions! The question you should be asking is: Why doesn't this epic mess generally sound like an epic mess (with speakers), why does it seem to make some sort (or even a very accurate sort) of spatial sense to most people's perception? I'll break the answer to this question into two related parts (to fit your rather strange way of looking at the issue):

The first part you've unwittingly already answered yourself! The reason "overlapping" with acoustic crossfeed (when listening to speakers in a room) doesn't end up just making a bad situation worse is PRECISELY because it ISN'T just acoustic crossfeed! Acoustic crossfeed ONLY EVER OCCURS in nature ALONG WITH ERs/reverb (never just on its own) and the human perception of distance and direction ("spatiality") works by comparing the direct sound with all these factors and INTERPRETS the results based on past experience (EG. Expectation biases and preferences). What happens if we remove some of the vital/required/expected parts of that equation, EG. Just have acoustic crossfeed and not ERs/Reverb/other room acoustic effects? The results are largely unpredictable because as far as perception is concerned what is actually being heard is impossible (does not/cannot exist) and our perception will therefore alter its interpretation (to make it possible) BUT, as "interpretation" is based on past experience, biases and preferences, everyone's interpretation is liable to be different. IE. The vast majority of people's perception will in effect just invent/make-up the missing parts of the equation to fill in the blanks but each person's "made-up invention" is likely to be at least somewhat different. What your perception makes-up apparently results in you "hearing" (thinking you are hearing) a relatively normal soundstage, just smaller. My perception results in me still "hearing" (thinking I'm hearing) a flat (without depth) presentation but narrower (less width) and with some other artefacts I find undesirable, such as comb-filtering effects and loss of spatial detail/information. But, as we're effectively talking about a made-up invention of each individual's perception, there is NO objective fact or universal truth!
The situation with speakers is obviously quite different, our perception does not need to invent/make-up information to "fill in the blanks" because we do not have any missing (required/expected) parts of the equation, they're all there, acoustic crossfeed AND ERs/reverb/other room acoustic effects!

The second part is dealt with in point 2b2: Well yes, you could say that "_overlapping is only a problem if it happens in a certain way_" but that's a very imprecise/vague way of putting it. However, there are two bigger problems in your statement: Firstly, NO, crossfeed doesn't magically (or "luckily") make all the spatial information "overlap in a way" that isn't a problem, as explained above (although it's possible some individuals might perceive it that way). Secondly, it is NOT "luck" that it isn't a problem with speakers, luck has nothing to do with it, what do you think mix engineers/producers do? Let's take the example of say a rock band, typically there would be different spatial information added to each of the elements in the band, the singer would have one acoustic, the guitars another, the bass another and the drumkit another (this is an oversimplification as usually, for example, there will be different acoustics applied to different parts of the drumkit and to different guitars). If we just dialled in a reverb preset for each element, what we would end up with is an unacceptable mess, a very muddy mix with a serious lack of clarity. So we employ a range of techniques to avoid this situation: the different reverbs may be EQ'ed differently, other parameters of the reverbs will be changed, we may eliminate the reverb portion of some of the reverbs entirely (only use the early reflections portion), we may use a simple delay with limited feedback instead of a reverb or we may use some other tactic, but usually we use a combination of tactics. The result of course is absolutely nothing like any sort of natural acoustics (it's actually an unnatural combination of unnatural acoustic information) but we're not making a university project on real/natural acoustics, we don't care how "unnatural" it actually is, just whether it gives us the separation, clarity and depth we're after and sounds subjectively "good".
And of course we're doing all this using our perception/preferences and speakers in a room. It takes a long time and considerable experience to understand the various interactions at play and learn how to apply and manipulate reverb/delay based effects (acoustic/spatial information), there's little/no luck involved!
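
The "simple delay with limited feedback" mentioned above can be sketched as a plain feedback delay line; the parameter values here are illustrative defaults, not any real mixing preset:

```python
import numpy as np

def feedback_delay(x, fs, delay_ms=120.0, feedback=0.35, mix=0.25):
    """A delay line with limited feedback: each echo is the previous one
    attenuated by `feedback`, then blended with the dry signal."""
    d = int(fs * delay_ms / 1000.0)          # delay length in samples
    wet = np.zeros(len(x) + d)
    wet[:len(x)] = x
    for n in range(d, len(wet)):
        wet[n] += feedback * wet[n - d]      # regenerating echoes
    dry = np.pad(x, (0, d))                  # pad dry to the wet length
    return (1 - mix) * dry + mix * wet

# Feed in an impulse to see the decaying echo train.
fs = 1000
x = np.zeros(1000)
x[0] = 1.0
out = feedback_delay(x, fs, delay_ms=100.0)
```

Unlike a reverb, this adds only discrete, regularly spaced echoes, which is why it can add a sense of space without muddying a mix the way a dense reverb tail can.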

The above covers pretty much every assertion you have made about crossfeed and it's not the first time!



71 dB said:


> 1. Ok, what does science objectively say?
> [1a] You talk about how everything must be scientific, but [1b] you don't give much scientific substance, do you?
> [1c] In other words, you don't give scientific alternatives to my opinions, you just keep telling my opinions are no good here.
> [1d] For example, what is the limit_ scientifically_ for crossfeed level to ruin ITD? -25 dB? -10 dB? -3 dB?
> ...


1. You're joking right? Don't you think you should have asked (and answered) that question a few years ago, BEFORE you started making objective scientific assertions?
1a. That's a lie, a lie that's pretty much the opposite of what I've actually stated! I have stated numerous times that this is the Sound Science subforum but I've also stated numerous times that commercial music recordings are art and therefore not subject to all the rules of science! How many times?
1b. I give a considerable amount of scientific substance, for example, some of the actual facts of how commercial music recordings are created, as opposed to some (erroneous) made-up assumptions from someone who self-admittedly has little/no idea!
1c. And again, that's a LIE! I am NOT telling you your opinions "are no good here", what I AM telling you is that presenting your subjective opinions as objective fact/science/"universal truth" is "no good here"! Plus, I refute the actual objective facts you cite which are erroneous!
1d. That's actually an excellent example! Science can tell us the limits of ILD of a particular media type, it can tell us the limits of ILD as far as human ears are concerned, it can tell us very accurately what ILD we actually have and it can tell us what ILD occurs in nature but it CANNOT tell us what is the limit "to ruin" it, because "ruin" is a subjective human valuation/preference. You seem to define "ruin" as anything beyond what occurs in nature but that definition is false because commercial music recordings are art and do NOT adhere to what occurs in nature. If they did, the vast majority of music recordings (and music genres) simply wouldn't/couldn't exist!
1e. Exactly, it's your (subjective) opinion, it's NOT an objective fact!! How many times?

2. That's nonsense. I talk about objective facts, science, your personal preferences and how these are all related (or in your case, unrelated)!
2a. Of course, a great deal of science exists but firstly you have to quote the actual facts/science, not just make them up to suit your agenda and secondly, you have to account for ALL the relevant facts/science, not just cherry-pick the ones which suit your agenda! How many times?
2b. Again, you're joking right, don't you even know what science is? The whole point of what science is and why it was invented is to eliminate personal (subjective) opinions/preferences and thereby separate objective fact from fiction/myth/superstition.
2c. How can we discuss the actual facts of crossfeed if pretty much EVERYTHING is just your personal preferences presented as objective facts and false facts you've just made up?
2d. Science says that the perception of direction/distance ("spatiality" as you call it) is largely dependent on ERs/reverb, that the lack of it never exists in nature and that if we artificially manufacture circumstances where they don't exist, human perception will react unpredictably, most people's perception will attempt to compensate for this "lack" (in different ways)!

3. If we choose to use headphones, there is no universal audio solution/truth! The solution most likely to work for the greatest number of people is an individualised HRTF plus convolution reverb but as the perception of sound is not entirely based on audio properties (but also on past experience, knowledge, biases, preferences and our other senses), then even a theoretically perfect HRTF and convolution reverb will not work universally!
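
For what it's worth, the "HRTF plus convolution reverb" approach boils down to convolving the dry signal with an impulse response per ear. The impulse responses below are made up purely for illustration (a direct spike plus one "early reflection" each); a real system would use measured, individualised data:

```python
import numpy as np

fs = 48000
t = np.arange(fs // 10) / fs                   # 100 ms of signal
dry = np.sin(2 * np.pi * 440 * t)              # dry mono source

# Hypothetical per-ear impulse responses, 10 ms long each.
ir_left = np.zeros(480)
ir_left[0] = 1.0                               # direct sound
ir_left[240] = 0.3                             # one early reflection (5 ms)
ir_right = np.zeros(480)
ir_right[20] = 0.8                             # delayed + attenuated direct
ir_right[300] = 0.25                           # one early reflection

left = np.convolve(dry, ir_left)               # rendered left-ear signal
right = np.convolve(dry, ir_right)             # rendered right-ear signal
```

The interaural delay and level difference are baked into the impulse responses themselves, which is exactly why the quality of the HRTF data matters so much in this approach.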

4. And how do you know what is a "problem/error that the artist didn't mean"? How do you know that what you judge/perceive to be an "error" wasn't intentional?
4a. You've said some pretty ridiculous things in the past but this one arguably *takes the biscuit*! The WHOLE POINT of me buying an album by say Bjork is to hear the personal intent/preferences of Bjork (and her production team), certainly not my personal intent/preferences, not even the personal intent of say Amy Winehouse (and her production team) and DEFINITELY NOT your personal intent/preferences!
4b. I don't know, it entirely depends on YOU! On how many times you repeat the same false assertions and I/we have to repeat the same refutations. Of course I should only have to say it once to anyone with a rational mind, so the fact that I have to keep saying it indicates what?


71 dB said:


> 2. I have my preferences, but stating scientific facts is only preference of truth.
> 2a. You haven't debunked my scientific facts, all you do is make some ridiculous claims that since I ignore some insignificant details I can't apply the science at all in my reasoning.
> 2b. We can talk about the significance of the details I "ignore", but I don't see much of that from you.
> 2c. All you do is put me down and then you wonder why I lose my marbles and insult people.


2. Utter nonsense, that's pretty much the opposite of science!
2a. And no one has debunked the scientific facts of "skin effect" (that many snake oil cable salesmen make), only whether or not those scientific facts are applicable! How many times? Additionally, I certainly have debunked some of your made-up false assertions.
2b. Then you have a very serious problem with what you "don't see", which apparently sometimes even includes what you yourself write because you subsequently contradict yourself and I have to quote it back to you!
2c. All I do is refute those objective facts you assert which are not objective facts (because they're either just your subjective perception/preference or you've just made them up). And, I don't wonder why you lose your marbles and insult people. You've explained why and I've got a pretty good idea anyway, based on numerous past experiences with audiophiles and false assertions/beliefs! More importantly though, you can't keep blaming your response and behaviour on me/us doing what this subforum exists for (to discuss the facts/science and refute false assertions of fact/science). Repeating false assertions and your response/behaviour when those false assertions are refuted is ENTIRELY YOUR RESPONSIBILITY, no one else's. If you don't like it, either stop repeating your false assertions or go somewhere else, where false assertions are not refuted and you're treated as the messiah you (self-admittedly) crave! Again, how many times do I need to repeat what should be blatantly obvious to anyone with a rational mind?

Round and round we go,

G


----------



## 71 dB

gregorio said:


> 1. And how many aspects of natural spatiality do "speakers in a room" take into account? Zero. No wonder the result is nonsensical spatiality! You say you've studied, understand and take the science of acoustics into account but then your arguments deliberately ignore them, why is that? Even if recordings contained perfectly natural spatiality (which they don't), how would having the perfect spatial information of, say, a large church inside your small living room be "sensical" spatiality? We pretty much never have "sensical"/"natural" spatiality, regardless of speakers or headphones and your failure to understand or take into account why the spatial information on commercial recordings might seem to be somewhat (or entirely) natural when played back on speakers is largely what invalidates many/most of your factual assertions! How many times?
> 1a. That's just unbelievably self-contradictory and ridiculous, only by ignoring the whole purpose of a F1 race car (and a washing machine) can you make such a statement, unless you're just insane? And, NO, one aspect is NOT better than none, because you consistently ignore the fact that by improving that one aspect (with crossfeed) you damage other aspects. Your analogy is a perfect example:
> 
> G



1. Speakers in a room create acoustically crossfed direct sound, early reflections and room reverberation, so they take _everything_ into account. Speaker spatiality is very sensical, ask *Bigshot* if you don't believe me!  Speaker spatiality is sensical because the end result is the totally natural spatiality of what happens when you play music in a room with two speakers. Spatial hearing gets fooled (stereo sound makes sense because spatial hearing can be fooled, so that for example if you play a mono sound with the left and right speakers, you can have the perceived point of sound be anywhere between the speakers by using simple amplitude panning. If hearing wasn't fooled, such panning would not work; we would simply hear sound coming from the speakers at different amplitudes, and stereo sound wouldn't make much sense). Speakers in a room don't do anything that ruins the desired effect of the stereophonic illusion; spatial hearing gets fooled in a sensical way, and therefore the spatiality is sensical. I think all of this is due to the fact that totally coherent sound sources don't exist in nature (or are extremely rare at best), so our spatial hearing has not developed to deal with them and is fooled by stereophonic tricks. Headphones present these stereophonic tricks in a twisted, overblown form that makes it hard for spatial hearing to be fooled in the desired way, but crossfeed helps scale the spatiality so that spatial hearing gets fooled a little bit like with speakers, thanks to mimicking acoustic crossfeed.
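
The amplitude-panning trick described above can be made concrete with the classic stereophonic "tangent law". The constant-power pan law and the ±30° speaker placement below are the usual textbook assumptions, not anything specific to this thread:

```python
import numpy as np

def pan_gains(pan):
    """Constant-power pan law: pan in [-1 (left) .. +1 (right)]."""
    theta = (pan + 1) * np.pi / 4            # map pan to 0 .. pi/2
    return np.cos(theta), np.sin(theta)      # (left gain, right gain)

def perceived_angle(g_left, g_right, half_angle_deg=30.0):
    """Tangent-law estimate of the phantom-source angle for speakers
    placed at +/- half_angle_deg (positive = toward the right)."""
    t0 = np.tan(np.radians(half_angle_deg))
    t = (g_right - g_left) / (g_right + g_left) * t0
    return np.degrees(np.arctan(t))

print(perceived_angle(*pan_gains(0.0)))      # centered pan -> 0 degrees
print(perceived_angle(*pan_gains(1.0)))      # hard right -> 30 degrees
```

This is the "fooling" in a nutshell: nothing physically sits between the speakers, yet level differences alone steer the phantom source anywhere across the arc between them.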

1a. Well, it wasn't me who came up with this analogy. I don't need to talk about F1 cars and washing machines here. I believe the improvements crossfeed makes are significantly greater than any theoretical damage. If that wasn't the case, surely after 7 years I would have noticed the damage making crossfeed unusable? That's why I don't use crosstalk cancelling with speakers either: I believe it's better to have that acoustic crossfeed, as it's part of the tricks stereophonic sound uses to fool spatial hearing in a sensical way. As I said, I did analyse what crossfeed does to sound when I got into it in 2012 and noticed that the theoretical problems are insignificant compared to the benefits, and my ears and spatial hearing agree. Similar problems such as mono colourization also exist with speakers (due to acoustic crossfeed), but nobody seems to care, and that's fine, because that's part of the conceptual imperfections of stereophonic sound. People like you care about these "problems" because crossfeed appears to be ADDITIONAL processing of the sound, as the default sound with headphones is crosstalk-cancelled, which of course is not how it's supposed to be, but people have got used to the idea that headphones don't have crossfeed, and people like me have a hard time educating people to un-learn and then re-learn what headphone sound should be.


----------



## gregorio

71 dB said:


> 1. Speakers in a room create acoustically crossfed direct sound ...
> 1a. I believe the improvements crossfeed does are ...



And again, round and round you go, just repeating the same falsehoods and circular arguments/fallacy, namely: "My ears/spatial hearing agrees with the cherry-picked science and false facts I've invented to explain what I'm hearing, therefore it must be a universal truth/scientific fact and everyone who disagrees must be ignorant/wrong/idiots, who are just trying to make me feel bad" bla bla bla, round and round we go!

G


----------



## 71 dB

gregorio said:


> 2a3. Exactly, WHY?? The answer appears to be: Because "this context" is someone who's fixated on ILD and ignores/dismisses everything else! And then to justify that position you simply make-up more nonsense:
> 2a4. And here's the nonsense! Obviously, you CAN have messed-up ITD (or anything else) and mess it up even more!
> 
> G



2a3. I don't ignore these things in my mind. ILD is the BIG problem. I fix it and ITD gets marginally "worse" *. The end result is superior to the original sound. If ITD was a BIG problem I would be talking about it here as much as ILD, but it just isn't, and we are lucky it isn't, because otherwise crossfeeding wouldn't be a nice easy trick to improve headphone sound. Or at least crossfeed would be technically more demanding to make work.

* Frankly, it's debatable what worse ITD even means. Bad ILD is a much clearer concept. Acoustic crossfeed "changes" ITD too, meaning that with crosstalk cancelling we would have a different ITD, but nobody cares, because acoustic crossfeed just happens with speakers and is considered normal acoustic behavior of speakers in a room. Your talk about me ignoring ITD is a desperate attempt to find something against my claims. I don't ignore it. I just know it's an insignificant factor.

Let's say your artistic intent as the sound engineer is to have the listener experience an ITD of 200 µs. In your opinion, what amount of ITD do you need in the recording, and what are the resulting ITDs with speakers, with headphones without crossfeed, and with crossfeed? If you do this exercise you'll see your ITD criticism makes no sense. There is no recording ITD that gives 200 µs with both speakers and headphones, but there is an ITD that gives about 200 µs for both speakers and quite heavily crossfed headphones. Also, if the listener moves nearer to the speakers or moves the speakers further apart, the perceived ITD will increase. Do you want to dictate the speaker angle and listening distance for speaker listeners to make sure they experience exactly the ITD you intended? I hope you don't, and similarly I'd hope you'd give some slack to us crossfeed users on this practically insignificant issue.

2a4. I think it's a good deal if you can fix totally messed-up ILD and all it costs is slightly more messed-up ITD, but that's me, and this opinion is related to the fact that I have never noticed "messed-up ITD" to be a problem in enjoying headphone sound.


----------



## gregorio

71 dB said:


> 2a3. I don't ignore these things in my mind. ...
> 2a4. I think it's a good deal if you can fix totally messed up ILD and all it costs is a little bit more messed up ITD, but that's me ...





gregorio said:


> And again, round and round you go, just repeating the same falsehoods and circular arguments/fallacy, namely: "My ears/spatial hearing agrees with the cherry-picked science and false facts I've invented to explain what I'm hearing, therefore it must be a universal truth/scientific fact and everyone who disagrees must be ignorant/wrong/idiots, who are just trying to make me feel bad" bla bla bla, round and round we go!
> 
> G



!!!


----------



## 71 dB

gregorio said:


> And again, round and round you go, just repeating the same falsehoods and circular arguments/fallacy, namely: "My ears/spatial hearing agrees with the cherry-picked science and false facts I've invented to explain what I'm hearing, therefore it must be a universal truth/scientific fact and everyone who disagrees must be ignorant/wrong/idiots, who are just trying to make me feel bad" bla bla bla, round and round we go!
> 
> G


I have clearly beaten you in this debate. The points of yours I have been answering (tiresome work) are pretty weak and easy to rebut, and this post of yours hits a new low, so much so that I am surprised. All you have at this point is "you are repeating the same falsehoods and circular arguments". Sure, I am REPEATING stuff like crazy, but they are FACTS! *Speakers in a room create acoustically crossfed direct sound* … is a 100 % fact and you know it. Me repeating it doesn't change anything. Facts are facts, repeated or not.

I don't want to beat you. That's ugly. I want you to see the light so we can agree about crossfeed and this debate comes to an end.


----------



## gregorio

71 dB said:


> [1] I have clearly beaten you in this debate.
> [2] Sure, I am REPEATING stuff like crazy, but they are FACTS! *Speakers in a room create acoustically crossfed direct sound* … is a 100 % fact and you know it.
> [2a] Me repeating it doesn't change anything.
> [3] I want you to see the light so we can agree about crossfeed and this debate comes to an end.



1. It makes you feel better to believe that, does it?

2. No it is NOT fact and certainly NOT 100% fact! Speakers in a room NEVER create only acoustically crossfed direct sound, they create ERs/Reverb and other room acoustic effects (except in an anechoic chamber). This is 100% fact and YOU know it!!!! How many times?
2a. Correct, you repeating only part of the facts does NOT change anything here in this subforum, so why do you keep doing it?

3. I don't want to see your light, I want to see (all) the actual relevant facts/science, which is what this forum is for and that would be true even if you were a messiah, because this is the Sound Science forum and NOT a religious forum!

Thanks btw for proving my statement: "*And again, round and round you go, just repeating the same falsehoods and circular arguments/fallacy, namely: "My ears/spatial hearing agrees with the cherry-picked science and false facts I've invented to explain what I'm hearing, therefore it must be a universal truth/scientific fact and everyone who disagrees must be ignorant/wrong/idiots, who are just trying to make me feel bad" bla bla bla, round and round we go!*"

G


----------



## 71 dB

gregorio said:


> 2a5. You've answered your own question BUT you continually refuse to accept the obvious fact! Recordings that are mixed for speakers assume a combination of crossfeed, ER and other room acoustics, they do NOT assume ONLY acoustic crossfeed!! How many times?
> 2a6. And how does just endlessly repeating that obviously false claim make it true? We have all sorts of messed-up time/delay (spatial) information on just about all commercial music recordings. Crossfeed cannot and does not deconstruct the recording and magically make that "messed-up" time/delay information less messed-up; all it does is crossfeed the signal. So what we have is, if anything, even more "messed-up", because we've "messed-up" the directional information the time/delay information also contains. Again, you personally seem to perceive this as somehow less messed-up and more natural, even though it's really the opposite, but that's a function of your perception, not what's actually occurring! How many times?
> 2a7. That's obviously nonsense! You're admitting you're basing this assertion on the assertion in the previous point, which is also demonstrably false! Furthermore, if it wasn't a problem then pretty much everyone would use crossfeed as standard and no one would have bothered to try to improve it (with HRTFs, etc.), another fact you refuse to acknowledge. How many times? The REAL reason YOU don't need to "address it as a problem" is because YOU personally are incapable of perceiving it and with simple crossfeed everything sounds fine and dandy to you. But then that's an issue of your personal perception, not of what's actually occurring! How many times?
> 2a8. Yes, we do have problems with speakers and the acoustics of the mix studio/listening room does mess things up, which is why we have music recording, mixing and mastering engineers in the first place (rather than just a computer doing it all automatically)! If you had a basic/reasonable understanding of this (and the implications of it), then a lot of your nonsense assertions would stop/disappear!
> ...


2a5. Yes, we agree about the combination of crossfeed, ER and other room acoustics, but such recordings assume even less a situation where not even acoustic crossfeed happens. Optimal headphone crossfeed is often a bit weaker than acoustic crossfeed, so that the difference between acoustic crossfeed and headphone crossfeed compensates for the lack of ER and room acoustics with headphones. This also helps with having a miniature soundstage with headphones, and too strong a crossfeed would kill the stereophony inside the head (the headstage).

How well fixing only X of parameters X, Y and Z works depends on the correlations between these parameters. There is a strong correlation between ILD and ITD, but crossfeeders tend to modify ITD similarly to speakers (acoustic crossfeed), so fixing ILD changes ITD in a more or less correct manner, and that's why what happens to ITD in crossfeed is pretty insignificant. Recordings are mixed for speakers and therefore should assume the ITD changes caused by acoustic crossfeed. (ER and room acoustics create ITD information too, but on a totally different scale, ms rather than µs, and ER + reverberation are heard as separate sound elements from the direct sound, which mostly dictates the direction of the perceived sound.)
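For readers wondering what the processing being argued about actually amounts to, a minimal bs2b-style crossfeed can be sketched in a few lines of NumPy: each channel receives a delayed, lowpass-filtered, attenuated copy of the opposite channel. The 700 Hz corner, 0.5 ms delay and -12 dB level below are illustrative values only, not anyone's reference implementation:

```python
import math
import numpy as np

def one_pole_lowpass(x, fs, cutoff_hz):
    """Simple one-pole IIR lowpass; good enough for a crossfeed sketch."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)
    y = np.empty_like(x, dtype=float)
    acc = 0.0
    for i, v in enumerate(x):
        acc = (1.0 - a) * v + a * acc
        y[i] = acc
    return y

def simple_crossfeed(left, right, fs=44100, cutoff_hz=700.0,
                     delay_ms=0.5, level_db=-12.0):
    """Mix a delayed, lowpassed, attenuated copy of each channel into the
    other. All parameter values are illustrative, not a reference design."""
    gain = 10.0 ** (level_db / 20.0)
    delay = int(round(fs * delay_ms / 1000.0))

    def feed(src):
        lp = one_pole_lowpass(np.asarray(src, dtype=float), fs, cutoff_hz)
        return gain * np.concatenate([np.zeros(delay), lp])[: len(src)]

    return left + feed(right), right + feed(left)
```

With a hard-panned signal, the opposite output channel now receives a quiet, dull, slightly late copy, which is roughly what a listener's far ear gets from a speaker; that is the whole trick under discussion.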

2a6. I agree, the spatial information on commercial recordings is what it is. Speakers in a room help shape all that into something that makes sense to spatial hearing, "acoustify" the sound. Headphones do pretty much nothing, and we are at the mercy of however messy the spatiality of the recording is. Crossfeed helps by doing one thing that happens with speakers. This is just logic, so far from false claims.

2a7. I have already tried to explain WHY crossfeed is not very popular, and it has nothing to do with the problems you talk about. Crossfeed is not the end of the road in improving headphone sound. It is in my opinion a BIG improvement, but you can do better. Crossfeed is maybe an 80 % fix, while HRTF techniques are perhaps a 95 % fix. Crossfeed is simple, HRTF is not. So, it's everybody's own choice how to improve headphone sound, but in my opinion everybody should use at least crossfeed rather than nothing, because I believe headphone sound as it is is fundamentally wrong, given the philosophy that recordings are made for speakers. If recordings were made for headphones, they would be binaural in nature and would work brilliantly with headphones; crossfeed would then make the sound much worse, speaker sound would be bad, and I would be writing about the need for crosstalk canceling with speakers to make binaural recordings sound better on speakers.

I am not the golden ear of the year, but I have a somewhat trained analytical ear. If I can't detect a problem, I'm sure 90 % of the population can't. What about the 10 %? Frankly, if you can detect problems smaller than I can, the excessive ILD of headphone listening should render headphones as they are totally useless. Large ILD at low frequencies is a sign of a sound source very near one ear, but the recording may have other spatial cues indicating that the sound is not very near, such as the direct sound/reverberation balance, and the result is spatiality that is unnatural and doesn't make sense. This problem is so BIG that even untrained listeners can hear it if instructed what to listen for in the sound. That's why it's weird to me that you consider excessive ILD a non-problem WHILE being most worried about how crossfeed changes ITD a little bit. That's why I have said you seem to have problems knowing what matters and what doesn't matter, or matters very little.

2a8. Well, it's hard to learn from sound engineers when all they do is tell me my claims are false. It doesn't make me wiser, it makes me annoyed. What does it take, in your opinion, to have a basic/reasonable understanding? 5 years of working in the business? I have watched many YouTube videos of music producers teaching how to mix music, how to use side-chain compression, how to clean sound of ugly resonances, how to balance tracks, how to use glue compression, how to make tracks sound louder etc., and I have used these things in my own music making. I'd say I have at least a basic understanding of these things. I didn't study music production at university, but I studied acoustics and signal processing, things that in my opinion give a great foundation for understanding the principles of music production. Sure, there are perhaps things you know (deep professional knowledge) that I don't, just because you have done it for 25 years and I haven't, but to say I don't have a basic understanding is quite a reach as an attempt to attack my claims.

Things I don't know much about are things such as _what microphone models are best for recording singing_. That's the stuff you learn when you work in the business, but I don't need such knowledge to know crossfeed improves headphone sound. If I ever need such knowledge I Google it or ask someone who knows.


----------



## 71 dB

gregorio said:


> 1. It makes you feel better to believe that does it?
> 
> 2. No it is NOT fact and certainly NOT 100% fact! Speakers in a room NEVER create only acoustically crossfed direct sound, they create ERs/Reverb and other room acoustic effects (except in an anechoic chamber). This is 100% fact and YOU know it!!!! How many times?
> 2a. Correct, you repeating only part of the facts does NOT change anything here in this subforum, so why do you keep doing it?
> ...



1. Actually no. You are better than that, and it would be nicer to see you swallow your pride and admit that I have a point in defending crossfeed.

2. Why are you being so dishonest? Are you that desperate at this point? I mentioned ER and reverberation. You quoted me without those, and when I quoted you quoting me, you blame me for not mentioning ER and reverberation? Actually, I anticipated that a total asshole could try to use such a dishonest, moronic trick, but of course I expected better from you. I am sorry I have made you that desperate and made you dig yourself deeper. Yes, speakers in a room create acoustic crossfeed, ER and reverberation, so please next time quote me honestly, thank you!

3. I am pretty much one of the only ones here posting science content (maybe because I actually studied spatial hearing at university), while other people tell me I should just state my subjective opinion about crossfeed and stop trying to explain my opinions scientifically. Sometimes opinions can be very objective and grounded in science. For example, "The Earth has got one Moon" is my subjective opinion, but from a scientific point of view it is also an objective fact. When objective facts are the basis of your subjective opinion, your opinions tend to be quite objective too. My discovery of crossfeed came from thinking about headphone sound theoretically and realizing the fundamental problem (recordings are mixed for speakers, not headphones, causing excessive spatiality on headphones), and my subjective experience of crossfed sound agrees with the theory, which is of course a good thing.

Lots of repeating, you are right. I have tried to keep my claims grounded in science and facts. I perhaps cherry-pick, but sometimes it's the cherries that count. From my perspective you ignore the problems of excessive ILD in order to protect ITD and other things I consider pretty insignificant in comparison, and even your ITD talk is questionable, as ITD is in no way preserved in speaker listening.


----------



## bfreedma

71 dB said:


> 1a. Well, it wasn't me who came up with this analogy. I don't need to talk about F1 cars and washing machines here. I believe the improvements crossfeed makes are significantly greater than any theoretical damage. If that wasn't the case, I am sure that after 7 years I would have noticed that the damage makes crossfeed unusable? That's why I don't use crosstalk canceling with speakers either. I believe it's better to have that acoustic crossfeed; it's part of the tricks of stereophonic sound to fool spatial hearing in a sensical way. As I said, I did analyse what crossfeed does to sound when I got into it in 2012 and noticed that the theoretical problems are insignificant compared to the benefits, and my ears and spatial hearing agree. Similar problems such as mono colourization exist also with speakers (due to acoustic crossfeed), but nobody seems to care, and that's fine, because that's part of the conceptual imperfections of stereophonic sound. People like you care about these "problems" because crossfeed appears to be ADDITIONAL processing of sound, as the default sound with headphones is crosstalk-canceled, which of course is not how it's supposed to be, but people have got used to the idea that headphones don't have crossfeed and *people like me have a hard time educating people to un-learn and then re-learn what headphone sound should be*.




When did you assign yourself this role and why do you believe your view of this should be adopted by all?  The arrogance of that statement...

Stop already.


----------



## 71 dB

bfreedma said:


> When did you assign yourself this role and why do you believe your view of this should be adopted by all?  The arrogance of that statement...
> 
> Stop already.



I did assign myself this role in 2012. I believe that among those views of mine that should be adopted (many of them political and not suitable for this forum), crossfeed is one of the most important. Maybe I sound arrogant, but I'm just trying to leave my mark on the World, to create a purpose for my existence. I believe all people need that. So do I. How am I any more arrogant than someone who tells people they should stop eating meat? Humans are biologically carnivores...


----------



## bfreedma

71 dB said:


> I did assign myself this role in 2012. I believe that among those views of mine that should be adopted (many of them political and not suitable for this forum), crossfeed is one of the most important. Maybe I sound arrogant, but I'm just trying to leave my mark on the World, to create a purpose for my existence. I believe all people need that. So do I. How am I any more arrogant than someone who tells people they should stop eating meat? Humans are biologically carnivores...



Good for you. And good luck imposing your personal views, however scientifically incomplete they are, on others.

I really don't care what views of yours should be adopted and I think you'll find more happiness in the world if you stop trying to impose your will on everyone else, particularly where you are unable to validate them.

That's about as nice a response as I can make, because your post makes you sound like an irrational zealot.


----------



## gregorio (Oct 16, 2019)

71 dB said:


> 2. Why are you being so dishonest? Are you that desperate at this point? I mentioned ER and reverberation. You quoted me without those, and when I quoted you quoting me, you blame me for not mentioning ER and reverberation? Actually, I anticipated that a total asshole could try to use such a dishonest, moronic trick, but of course I expected better from you. I am sorry I have made you that desperate and made you dig yourself deeper. Yes, speakers in a room create acoustic crossfeed, ER and reverberation, so please next time quote me honestly, thank you!



Who is the desperate, dishonest, total asshole/moron here? Below is the exact, full, unedited quote of your post. Where do you mention ER and Reverb? I can't even be bothered to respond to just more repeats of the same old nonsense. You clearly need professional help!!



71 dB said:


> I have clearly beaten you in this debate. The points of yours I have been answering (tiresome work) are pretty weak and easy to rebut, and this post of yours hits a new low, so much so that I am surprised. All you have at this point is "you are repeating the same falsehoods and circular arguments". Sure, I am REPEATING stuff like crazy, but they are FACTS! *Speakers in a room create acoustically crossfed direct sound* … is a 100 % fact and you know it. Me repeating it doesn't change anything. Facts are facts, repeated or not.
> 
> I don't want to beat you. That's ugly. I want you to see the light so we can agree about crossfeed and this debate comes to an end.



Enough already!!

G


----------



## 71 dB (Oct 16, 2019)

gregorio said:


> Who is the desperate, dishonest, total asshole/moron here? Below is the exact, full, unedited quote of your post. Where do you mention ER and Reverb? I can't even be bothered to respond to just more repeats of the same old nonsense. You clearly need professional help!!
> 
> 
> 
> ...



It's those three dots (…) after the word _sound_. I have mentioned ER and reverberation many times here before this post, so you know I acknowledge them. Of course I do, since I have studied acoustics at university. Crossfeed doesn't emulate ER or reverberation, so why would I be talking about ER or reverberation when crossfeed doesn't do that? Crossfeed does crossfeed, and that's much more than nothing!


----------



## 71 dB (Oct 16, 2019)

pöh!


----------



## gregorio

71 dB said:


> [1] It's those three dots (…) after word _sound_.
> [2] I have mentioned ER and reverberation many times here before this post so you know I aknowledge them.



1. You accused me of being a desperate, dishonest moron, etc., for deliberately removing your "mention of ER and reverb" from my quote. Show me where you "mentioned ER and Reverb" in your post (that I supposedly removed), otherwise you are clearly demonstrating which of us is being the desperate, dishonest moron!!!

2. Sure you've mentioned ERs/reverb previously but, firstly, you did NOT in the post I quoted and secondly, it's a common habit of yours to acknowledge facts and then just repeat the same old nonsense which ignores them!! How many times? Round and round you go!

Just repeating the same nonsense over and over again, talking of leaving a mark on the world by being a messiah/dictator, and throwing about false accusations and insults does NOT educate us about crossfeed; it just educates us about your mental state (or lack of it), and the more you do it, the more we are convinced of that. Do yourself (and us) a favour by stopping this madness!

G


----------



## 71 dB

gregorio said:


> 1. You accused me of being a desperate, dishonest moron, etc., for deliberately removing your "mention of ER and reverb" from my quote. Show me where you "mentioned ER and Reverb" in your post (that I supposedly removed), otherwise you are clearly demonstrating which of us is being the desperate, dishonest moron!!!
> 
> 2. Sure you've mentioned ERs/reverb previously but, firstly, you did NOT in the post I quoted and secondly, it's a common habit of yours to acknowledge facts and then just repeat the same old nonsense which ignores them!! How many times? Round and round you go!
> 
> ...


1. Maybe I got confused (so many posts), but you keep telling me how I ignore ER and reverberation.

2. I have said that I don't think ER and reverberation matter much in this context, so why would I keep talking about them all the time? If you come up with ER/reverberation simulation on headphones, then we can discuss that. Crossfeed doesn't do ER and reverberation.

My mental state is not the subject of this thread, nor is it your business. If I stop "this madness", what is left? Some _subjective_ opinions about crossfeed? That's ok, except this is supposed to be the science section.


----------



## 71 dB

gregorio said:


> 1. You don't seem to understand that cherry-picked scientific facts is NOT science, it's pseudoscience or complete irrelevant nonsense. Again, skin effect is a scientific fact but cherry-picking that one scientific fact, ignoring the others (such as skin effect doesn't affect audible frequencies) and applying that one scientific fact to analogue audio cables is NOT a "scientific point of view", it's the opposite, a perversion of the science! How many times?
> 
> 2. Huh? Using your logic, science itself is kind of attacking "those pioneers who invented crossfeed", by dropping crossfeed in favour of more sophisticated models and of course the "proof of the pudding" is that crossfeed is as you say, a small market because in practice it doesn't work (is not preferable) for many people. I don't subscribe to your logic though!
> 
> ...


1. Just like knowing skin effect doesn't really matter at audio frequencies, I know the spatial facts I "ignore" don't matter much either.

2. Pioneers of crossfeed didn't know about more sophisticated models. If modern science/technology attacks them so be it…

3. Yes.

3a. Yes.

3b. This could have been "universal truth" if I was lucky, but I rarely am.


----------



## kinkling

I have never seen such a long thread here with so little helpful info about the actual subject


----------



## kukkurovaca

kinkling said:


> I have never seen such a long thread here with so little helpful info about the actual subject



shh, this thread is the most entertainment I've had all week


----------



## 71 dB

kinkling said:


> I have never seen such a long thread here with so little helpful info about the actual subject



Welcome to the Internet, my child! Being an active poster in threads like this is the norm for me. It all started for me in 2005 on a forum, in a thread about the Star Wars Prequel Trilogy. Just like there are crossfeed haters and lovers, there are Prequel Trilogy (and now even Disney Wars) haters and lovers. I have debated a lot over the years about religion and politics too, but those topics are "not allowed" here so I keep silent about them.


----------



## 71 dB

kukkurovaca said:


> shh, this thread is the most entertainment I've had all week



Here's your Pop Corn:


----------



## kukkurovaca

71 dB said:


> Here's your Pop Corn:



Thank you, it is delicious : )


----------



## bigshot

We need bingo cards to check off the symptoms.


----------



## castleofargh

kinkling said:


> I have never seen such a long thread here with so little helpful info about the actual subject


TBH, @ironmine coming here to share his new experiences with various VSTs is the only reason why I didn't close that topic. if I could move posts, I'd take the beginning of this topic, and all his posts about VSTs, and put it all somewhere else where people could easily browse through the apps and impressions and discuss them. then I'd take what's left here (most of it being @71dB defending a fake model as if it was general relativity) and lock it up so hard that even the crew from Ocean's Eleven couldn't break in to post again.


----------



## ironmine

I tested the "Cans C" option in *AirWindows Monitoring Redux* yesterday. I left the following comment about it on the GearSlutz forum; I'll just copy and paste it here to save time:

I tried the Cans C and Cans D settings. Cans D sounds weird; it's unusable. Cans A and B, which I tried before, have a very weak crossfeed effect; they leave too much excessive spatiality and hard stereo.

Cans C, which I had high hopes for, unfortunately still does not have a sufficient crossfeed effect to ensure pleasurable listening. On the plus side, highs and mids sound open, spacious and clear, but on the minus side, the way this crossfeed handles bass ruins everything.

Bass notes still sound too much in the manner of "in one ear only" (it's like a bee buzzing in one ear), too close to the listener, too hard, too narrow. When I listen through Cans C, I don't have the illusion (which I want to have) that the drum set is _in front of me_; I have an (unpleasant) feeling that I sit _inside_ the drum set. Drums thunder in my ears from the left and from the right. If you imagine the face of a clock, I (i.e., the listener) am where the figure 6 is. The highs and the mids are ok; they occupy the space from 9 to 3. But the drums (bass notes) are at 7 & 8 on the left and at 4 & 5 on the right, while they should be at 9-10 on the left and 2-3 on the right:

I wish you offered more sliders for adjustment to the user.
But I also know that you are unreasonably stubborn in this regard and would sooner donate a kidney to another person than give the user a parameter slider for tinkering.


----------



## 71 dB

castleofargh said:


> TBH, @ironmine coming here to share his new experiences with various VSTs is the only reason why I didn't close that topic. if I could move posts, I'd take the beginning of this topic, and all his posts about VSTs, and put it all somewhere else where people could easily browse through the apps and impressions and discuss them. then I'd take what's left here (most of it being @71dB defending a fake model as if it was general relativity) and lock it up so hard that even the crew from Ocean's Eleven couldn't break in to post again.



What is the point of having discussion boards online if moderators can lock threads when they don't like what has been said?
I didn't know you have to agree with the moderators around here! What a joke of a forum!
You can call my posts fake all you want, freedom of speech and all, but can you offer such freedom to others around here as well?
I think I have given STRONG scientific justifications for my claims.


----------



## taffy2207 (Oct 17, 2019)

@71 dB

Just some thoughts from an outsider looking in.

I think you need to take a break, seriously. Arguing all the time achieves nothing and just annoys you as well.

You said that you've made Crossfeed plugins. Why not put your time and effort into that and monetize it by seeking out venture capital, or do it altruistically for the love of doing it? Who knows where it will lead?

If you have a Passion for something and it's strong enough you can devote your life to it if you feel that strongly about it.

You can either wait for things to happen for you or you can make them happen.

Direct your energy to something more positive. Just a friendly suggestion, nothing more.


----------



## 71 dB

taffy2207 said:


> You said that you've made Crossfeed plugins.



;nyquist plug-in
;version 2
;type process
;name "Exact Crossfeed"
;action "Crossfeeding..."
;info "Exact Crossfeeder 2017."
;control pax "Bass Crossfeed level" int "dB" -3 -12 0
;control pay "Treble Crossfeed level" int "dB" -25 -35 -15
;control paw "Width" int "degrees" 90 10 90

;; DEFINING CONSTANTS ALPHA, BETA, XI AND FREQ

(setf beta (/ 1.0 (+ 1 (expt 10.0 (/ pay 20.0)))))
(setf alpha (/ (- beta (/ 1.0 (+ 1 (expt 10.0 (/ pax 20.0))))) (- (* 2 beta) 1)))
(setf xi (* alpha (+ (* 2 beta) -1)))
(setf freq (* (/ 22050 paw) (/ xi (* (- beta xi) (- xi beta -1)))))

;; TREBLE CROSSFEED

(setf gamma (aref s 0))
(setf (aref s 0) (sim (mult beta (aref s 0)) (mult (- 1 beta) (aref s 1))))
(setf (aref s 1) (sim (mult beta (aref s 1)) (mult (- 1 beta) gamma)))

;; DEFINING CONSTANT GAMMA

(setf gamma (mult alpha (lp (sim (aref s 0)  (mult -1 (aref s 1))) freq)))

;; BASS CROSSFEED

(setf (aref s 0) (sim (aref s 0) (mult gamma -1)))
(setf (aref s 1) (sim (aref s 1) gamma))

;; OUTPUT

(if (arrayp s)
    (vector (abs-env (at 0 (cue (aref s 0))))
            (abs-env (at 0 (cue (aref s 1))))))
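In case anyone wants to sanity-check the constant derivation above outside Audacity, here is a line-for-line Python translation of the "defining constants" section (pax/pay/paw mirror the plug-in's controls; the 22050 in the frequency formula assumes a 44.1 kHz sample rate):

```python
def crossfeed_constants(pax=-3, pay=-25, paw=90):
    """Translate the Nyquist plug-in's control values (bass/treble crossfeed
    levels in dB, width in degrees) into its mixing coefficients and the
    lowpass corner frequency in Hz, mirroring the Lisp code above."""
    beta = 1.0 / (1.0 + 10.0 ** (pay / 20.0))   # treble-band mix coefficient
    alpha = (beta - 1.0 / (1.0 + 10.0 ** (pax / 20.0))) / (2.0 * beta - 1.0)
    xi = alpha * (2.0 * beta - 1.0)             # effective bass-band mix
    freq = (22050.0 / paw) * (xi / ((beta - xi) * (xi - beta + 1.0)))
    return beta, alpha, xi, freq

beta, alpha, xi, freq = crossfeed_constants()
print(f"beta={beta:.3f}, alpha={alpha:.3f}, freq={freq:.0f} Hz")
```

At the default settings this puts the bass/treble split in the mid-hundreds of Hz, which is in the same ballpark as the corner frequencies typical analogue crossfeed circuits use.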


----------



## bigshot

> What is the point of having discussion boards online if moderators can lock threads when they don't like what has been said?



The problem isn't the thread. Locking it won't help. I personally think a thread ban is in order here.


----------



## 71 dB

bigshot said:


> The problem isn't the thread. Locking it won't help. I personally think a thread ban is in order here.



Banning members you don't like? Such a freedom-of-speech warrior you are! I would propose banning a member only for things like threats of violence, racism, etc.


----------



## castleofargh

71 dB said:


> I think I have given STRONG scientific justifications for my claims.


you've renounced a scientific approach to this problem at every opportunity. anytime we have brought up some concerns (or evidence) about the many disparities between reality and your makeshift model to justify crossfeed as an objective improvement, you've made up excuses as to why we should just pretend that those variables and concerns don't matter. I must have missed the part where this was a step of the scientific method.


----------



## bigshot

At first I thought you were mentally ill. I try to ignore the mentally ill guy in this forum because his fixations aren't his fault. He can't help it. But you're definitely trolling us. You know what you're saying is full of hot air, and it's been pointed out to you in every way possible, from very polite to full on brusque. You aren't listening and you aren't participating in the conversation any more. You're just performing to attract attention to yourself. We don't need that. As far as I'm concerned, you're done.


----------



## 71 dB

castleofargh said:


> you've renounced a scientific approach to this problem at every opportunity. anytime we have brought up some concerns (or evidence) about the many disparities between reality and your makeshift model to justify crossfeed as an objective improvement, you've made up excuses as to why we should just pretend that those variables and concerns don't matter. I must have missed the part where this was a step of the scientific method.



Yeah, this has been the problem and I don't know what to say. The concerns seem very minor, and my ears say they don't matter. The concerns apply also to acoustic crossfeed, but nobody cares, because the option of crosstalk canceling rarely exists, so people listen to speakers with acoustic crossfeed, ER and reverberation, all of which SHOULD raise concerns if one were concerned about what crossfeed does...


…crossfeed is not perfect, and its imperfection is grounded in science, but it is an improvement.


----------



## bfreedma

71 dB said:


> Banning members you don't like? Such a freedom of speech warrior you are! I would propose banning someone only if said member made actual threats of violence, racism etc.



https://www.head-fi.org/articles/terms-of-service.6725/


----------



## Glmoneydawg

71 dB said:


> Banning members you don't like? Such a freedom of speech warrior you are! I would propose banning someone only if said member made actual threats of violence, racism etc.


Do you believe there's anything to be gained here?....this has been dragged out for 85 pages....and still pretty much the same arguments as page 1


----------



## Dawnrazor

I have tried every crossfeed VST plugin I could find on different systems and none achieved what I wanted.  I bought a Hafler HA-75 amp that has a Focus Control option that lets you control the center image to make it sound more like speakers.  That, my friends, is exactly what the VST plugins never pulled off for me.  It's described on page 10:

https://www.hafler.com/pdf/HA75-R2-User-Guide.pdf


----------



## kukkurovaca

In terms of VST solutions, I like Goodhertz Can Opener. It’s pretty user-friendly, gives control over the angle being simulated, and doesn’t sound bad. Their mid/side VST is also handy sometimes. Not everything needs crossfeed, but it certainly is nice to have when listening to a recording that was truly never intended for headphones. Can Opener does NOT work with EqualizerAPO on my laptop though (it does work in JRiver and Audirvana).

I also make some use of the Radsone ES100’s crossfeed, which is not as sophisticated but works in a pinch. And doesn’t require configuring software.


----------



## 71 dB

Dawnrazor said:


> I have tried every crossfeed vst plugin I could find on different systems and none helped achieve what I wanted.  I bought a Hafler HA-75 amp that has a Focus Control  option that lets you control the center image to make it sound more like speakers.   That my friends is exactly what the vst plugins never pulled off for me.  It talks about it on page 10
> 
> https://www.hafler.com/pdf/HA75-R2-User-Guide.pdf



I wonder why this "focus control" is able to do for you what crossfeed VST plugins could not. The user guide is not very technical about how it's done, but based on the description it doesn't sound much more sophisticated than a typical crossfeed plugin.


----------



## 71 dB

Glmoneydawg said:


> Do you believe there's anything to be gained here?....this has been dragged out for 85 pages....and still pretty much the same arguments as page 1



Very little if anything can be gained. That's my problem. How to gain? This has been one of my efforts to "gain" and it failed royally. Experiences like this are demoralizing. Why even try when all you can achieve is this? I am really, really bad at selling my opinions to other people, as you have seen. I feel I am not able to express what I realized in 2012, and it has something to do with systems thinking and how people assume default positions. People assume the default position of headphone listening without crossfeed. Is that called for, knowing recordings are mixed for speakers? In 2012 I realized that the scientific default position of headphone listening is binaural sound, and since recordings are very rarely binaural, they have to be modified for headphones. Crossfeed doesn't turn speaker stereophony into binaural stereophony, of course not, but it "scales" the speaker stereophony into something with an ILD range similar to binaural stereophony. To my ears this means a significant improvement over not doing anything, and perhaps the best bang-for-the-buck ratio in headphone audio. I don't know how to get an improvement that big with $50 in any other way…

…I have been convinced that this improvement my ears hear is explained by the science of human spatial hearing. It was applying that science to headphone listening that gave me the 2012 realization in the first place (before 2012 I never thought about headphone sound, because I was into speakers, and headphones seemed so trouble-free as the transducers are at your ears, but ironically that turned out to be the problem!). Crossfeed cuts corners and is not perfect in any way. It's coarse. However, I believe it is still a significant improvement, as speaker stereophony on headphones is such a huge problem. If you can use something better than crossfeed, good for you, but all the HRTF convolutions are technically crossfeed, summing a modified version of the signal into the other channel, just in much more detail. So when I speak generally about crossfeed, I mean any way of reducing the ILD of speaker stereophony to get something with an ILD similar to binaural stereophony.

Since recordings are mixed for speakers using studio monitors, I think it's rational to assume the playback has acoustic crossfeed for the direct sound, with a 200-300 µs ITD corresponding to the speaker angle. Crossfeed usually mimics this using a similar ITD, so I don't see why this is a significant problem; not doing anything should be a much bigger problem. As for the combination of ILD and ITD values, I believe those pairs make much more sense after crossfeed, because ILD values tend to be so far "off" that even if ITD changes a little, getting ILD into the right ballpark is much more important. Crossfeed is kind of a "mapper" that takes the original spatial parameters and "maps" them into parameters more natural to the human ear. This is what my reason says and it's what my ears say. That's why it is so difficult for me to believe I am wrong. If I were wrong, the implication would be that I can't trust my reason and senses at all.

If I can learn something from the experience here it is this: I am a person of low self-esteem, so when someone questions my knowledge of a subject I believed I knew well, I get badly hurt and I attack and insult people. The solutions are not easy:

1) Get better self-esteem => easier said than done. I try, but it is so damn hard!!
2) Know better => again, I am a human with limits to my learning, and I don't know what new there is to learn about spatiality, especially when my own ears tell me that what I know seems to work. If I agree with Gregorio I disagree with my ears!
3) Stay offline => that works, but human beings need interaction. I want to be involved, part of something.

The solutions are there, but they are not easy. I am annoyed by these difficulties. I want an easy, happy life!


----------



## bigshot

I think you've gotten to the point where no one cares. You're typing into an echo chamber. No one is bothering to read your word salad any more.


----------



## 71 dB

bigshot said:


> I think you've gotten to the point where no one cares. You're typing into an echo chamber. No one is bothering to read your word salad any more.



Does my voice here make a dissonant chord in your echo chamber? Sorry. I am this way. My opinions about crossfeed shouldn't even concern you as you are not a headphone user. You are a speaker user or at least so you keep saying ten times a day. Crossfeed is irrelevant for speaker users, isn't it? The question in the topic of this thread _"To crossfeed or not to crossfeed?"_ should be meaningless to you. Why do you care about whether crossfeed improves headphone sound or not? You don't use headphones, do you?

Can you blame me for losing my patience and getting frustrated when so many of the responses to my posts try to downplay my credibility (attacks on me rather than on the claims I am making) instead of offering careful analysis of what might be wrong in my claims?  No one bothers to read my word salad any more? No one cares? Are you sure? Have you asked everyone? My posts being word salad is YOUR opinion (and you are of course entitled to it), but do other members here share it? It's possible, of course, but how sure are you?


----------



## bigshot

I think you are the one that should be asking whether people appreciate your posts. Take a break.


----------



## 71 dB

bigshot said:


> I think you are the one that should be asking whether people appreciate your posts. Take a break.



I post what I want and people appreciate it or they don't. 

I don't tell you to take a break (not my business). 
All I know is you are appreciated here. 
I have seen very good posts by you. 
Sometimes you are perhaps a little bit condescending, but that's your style. I have my style.
The subject we seem to agree the most about with each other is "16 bit / 44100 Hz is all you need".


----------



## Hifiearspeakers

Listen, you can all continue to bash @71 dB all you want, but he makes a good point in his post above. This thread is all about CROSSFEED and that really only applies to headphones/earphones. So when you all can’t stop chirping about speakers, then you all are really the ones off topic. So disagree with him all you want, but to call for him to be banned for pounding away on the actual topic (redundant as it may be) is just asinine.


----------



## kukkurovaca

what if they held a flamewar and nobody came


----------



## bigshot

I like your name kukkurovaca. It sounds like some sort of Australian marsupial.


----------



## kukkurovaca

bigshot said:


> I like your name kukkurovaca. It sounds like some sort of Australian marsupial.



Thanks, it's a joke about how bad I was at doing Sanskrit homework


----------



## castleofargh

Hifiearspeakers said:


> Listen, you can all continue to bash @71 dB all you want, but he makes a good point in his post above. This thread is all about CROSSFEED and that really only applies to headphones/earphones. So when you all can’t stop chirping about speakers, then you all are really the ones off topic. So disagree with him all you want, but to call for him to be banned for pounding away on the actual topic (redundant as it may be) is just asinine.


it's hard not to discuss speakers, be it to explain crossfeed or to debunk empty claims about it.
if we were to treat crossfeed as the subjective process that it is, then of course we could focus on headphones and subjective impressions. but somehow, @71 dB NEEDS crossfeed to be more than something you appreciate (or don't). so speaker talk it is. 



kukkurovaca said:


> Thanks, it's a joke about how bad I was at doing Sanskrit homework


once again wikipedia was right.


----------



## Dawnrazor

71 dB said:


> I wonder why this "focus control" is able to do for you what crossfeed vst plugins were not. The user guide is not very technical about how it's done, but based on the description it doesn't sound much more sophisticated than your typical crossfeed plugin.


No idea, man.  I just know that it works, and VST crossfeed plugins never seemed to make me happy.


----------



## ironmine

Just for the fun of it, I would like to try creating a crossfeed myself in the VST host chainer (ART Teknika Console), using individual VST plugins and connecting them to one another.

71 dB, can you advise me how to do it in the right manner? I mean the general principle and scheme (diagram).

As far as I understand, I need to mix a bit of the left channel with the right channel and vice versa.  Also, simple mixing is not enough: the left channel needs to be delayed a bit before it is mixed with the right channel. Does this delay need to be frequency-dependent? What other processing should I apply to the left channel before it's mixed with the right one?


----------



## ironmine

Here's an interesting post at GearSlutz from a guy who analyzed different crossfeed plugins and posted screenshots comparing how they process the sound.


----------



## 71 dB (Oct 19, 2019)

ironmine said:


> Just for the fun of it, I would like to try creating a crossfeed myself in the VST host chainer (ART Teknika Console), using individual VST plugins and connecting them to one another.
> 
> 71 dB, can you advise me how to do it in the right manner? I mean the general principle and scheme (diagram).
> 
> As far as I understand, I need to mix a bit of the left channel with the right channel and vice versa.  Also, simple mixing is not enough, the left channel needs to be delayed a bit before it is mixed with the right channel. Does this delaying need to be frequency-dependent? What other processing I should apply to the left channel before it's mixed with the right one?



I am not good at coding and I have never written VST plugins, but I certainly know the general principle and scheme.

Crossfeed is about making both ears hear the content of both channels (as happens with all environmental sounds), coarsely simulating the differences in the sound at the ears when it arrives from an angle (typically something like 30°, mimicking a normal speaker setup).

So, you mix a little of the left channel ( L ) into the right channel ( R ) and vice versa, as you said. The human head blocks high frequencies more than low frequencies, so more bass leaks to the other ear than higher frequencies. The size of the head dictates the frequency at which it starts to block sound "aggressively": about 800 Hz. So up to this frequency we should leak a lot of sound to the other ear, and above it less and less with increasing frequency. That's why we lowpass filter the leaked sound at 800 Hz (1st-order Butterworth is a common choice):

Lf = L lowpass filtered at 800 Hz
Rf = R lowpass filtered at 800 Hz

Even at bass frequencies there is a small ILD, so we don't "leak" these filtered versions without some scaling. To attenuate by x dB we scale them by a factor k of

k = 10^(-x/20)

which comes directly from the fact that -x = 20*log10 (k). Note that k < 1, meaning log10 (k) is negative, so the gain in decibels is negative. For a gain of -1 dB, k becomes:

k = 10^(-1/20) = 0.89125…  …(0.9 does fine! The error is less than 0.1 dB)

What about the delay? Lowpass filtering at 800 Hz with a 1st-order Butterworth creates a delay of about 200 µs at bass, dropping to about 140 µs at 800 Hz (where the leaked signal has attenuated a further 3 dB), after which the delay dies out with frequency. It's possible to add an extra delay to increase the apparent angle, but this works nicely as it is. ITD varies between 0 and about 700 µs and depends almost linearly on the angle between 0° and 90° for sound on the horizontal plane, so the delay in microseconds depends on the angle roughly as:

delay in microseconds = 7 * angle in degrees for _distant_ sounds
delay in microseconds = 8 * angle in degrees for _very near_ sounds

The difference has geometric reasons, and in my opinion the first approximation is the more reasonable one, because reducing ILD also implies a more distant sound. So 200 µs corresponds to about 29° and 140 µs to about 20°.

Mixing L with Rf (and R with Lf) boosts low frequencies relative to high frequencies. That's why we need to boost frequencies above 800 Hz to keep the sound balanced. If the low frequencies are totally mono to begin with, the increase (assuming zero phase difference, for k = 0.9) is 20*log10 (1+0.9) = 5.58 dB. If they are uncorrelated instead, the increase is smaller: 10*log10 (1*1+0.9*0.9) = 2.58 dB. Music is typically something in between, not totally mono but not uncorrelated either, so the typical increase lies between 2.58 dB and 5.58 dB; in practice it is smaller still because of the phase delay caused by the filtering. Typically crossfeeders apply a high-frequency boost of 2-3 dB.
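Those two limiting cases are easy to verify numerically; this throwaway check (not from any plugin, just the arithmetic above) reproduces the figures:

```python
import math

k = 0.9  # mixing factor for a -1 dB crossfeed level

# Fully correlated (mono) low frequencies: amplitudes add.
mono_boost_db = 20 * math.log10(1 + k)         # -> 5.58 dB

# Fully uncorrelated low frequencies: powers add.
uncorr_boost_db = 10 * math.log10(1 + k ** 2)  # -> 2.58 dB
```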

If someone at this point calls this "messy", let me remind you how "messy" speakers in a room are! That's messy! The acoustics combined with the radiation patterns of the speakers create a mess that makes these minor effects of a couple of dB look totally flat in comparison. If a crossfeeder sounds too bright or too bassy to your ears, it can be tweaked to give what you want. These calculations are made to give a balanced sound with a spectral balance similar to the original.

Boosting the high frequencies of L and R creates an additional delay of about 50 µs, so the final delay is about 250 µs (~35°):

L crossfed = L hf boosted + k * Rf
R crossfed = R hf boosted + k * Lf

However, this works well for "ping pong" recordings with huge stereo separation! If the recording has more sensible spatiality we want to crossfeed it much less. In practice the useful range for the crossfeed level is from -12 dB (almost headphone-ready spatiality) to -1 dB (ping pong or "movie surround sound"). 

So, the value of k has to be adjustable, ranging from 0.25 to 0.9. The need for treble boost is also smaller when k is smaller. Below is the treble-boost analysis of my crossfeeder (which nicely scales the treble boost with the crossfeed level):

[attached plot: treble boost vs. crossfeed level]
Hopefully this helps...
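To make the scheme concrete, here is a minimal sketch in plain Python (function names are mine, not from any plugin; it implements only the mixing core described above, without the treble-boost stage):

```python
import math

def lowpass_1st(x, fc=800.0, fs=44100.0):
    """First-order Butterworth lowpass via the bilinear transform."""
    c = math.tan(math.pi * fc / fs)
    b = c / (1.0 + c)             # b0 = b1
    a1 = (c - 1.0) / (1.0 + c)
    y, x_prev, y_prev = [], 0.0, 0.0
    for s in x:
        out = b * s + b * x_prev - a1 * y_prev
        y.append(out)
        x_prev, y_prev = s, out
    return y

def crossfeed(left, right, level_db=-8.0, fc=800.0, fs=44100.0):
    """Add a lowpass-filtered, attenuated copy of each channel to the other.

    level_db is the crossfeed level (e.g. -12 .. -1 dB); the filter's own
    phase lag supplies the ~200 us interaural delay discussed above.
    """
    k = 10.0 ** (level_db / 20.0)   # e.g. -1 dB -> ~0.891
    lf = lowpass_1st(left, fc, fs)
    rf = lowpass_1st(right, fc, fs)
    out_l = [l + k * r for l, r in zip(left, rf)]
    out_r = [r + k * l for r, l in zip(right, lf)]
    return out_l, out_r
```

A complete crossfeeder would follow this with the 2-3 dB high-shelf boost above 800 Hz described above, with `level_db` exposed as the adjustable crossfeed level.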


----------



## ironmine

71 dB said:


> That's why we low pass filter the leaked sound at 800 Hz (1st order butterworth is a common choice)



How about this, for a start? :

I take the left channel and split it into two: one copy goes directly to the output, the other goes to the equalizer before it gets mixed with the right channel.
Which EQ mode should I choose: linear phase or natural (minimum phase)?
How much should I attenuate the sound of the left channel coming from the EQ plugin before it is mixed with the right channel? -12 dB? -20 dB?


----------



## castleofargh

I already mentioned this thread https://www.head-fi.org/threads/recording-impulse-responses-for-speaker-virtualization.890719/ . you can ask there for some basic stereo impulses(or graphs from them), and see how well they align with some crossfeed options.
or you could get human measurements from the few HRTF databases available online and pick 30°(or whatever you fancy) to see what is really going on for some people.
obviously this would become really relevant with your own measurements. that could look maybe something like this:
[attached plot: measured left-ear vs. right-ear response]
recorded from about a 30° left speaker: red is picked up at the left ear, blue at the right ear. keep in mind that speaker, room, mic and mic calibration are all not strictly flat, so what's mainly relevant here, besides the general response caused by the ear (which the headphone should probably try to cautiously aim for), is the attenuation per frequency at the opposite ear, as that's in principle what crossfeed is aiming to (badly) replicate.
as for the delay, from the impulses I seem to be near 250µs on this one. 
someone else would have something else. not completely different, but noticeably different anyway.


----------



## ironmine

castleofargh said:


> I already mentioned this thread https://www.head-fi.org/threads/recording-impulse-responses-for-speaker-virtualization.890719/ . you can ask there for some basic stereo impulses(or graphs from them), and see how well they align with some crossfeed options.
> or you could get human measurements from the few HRTF databases available online and pick 30°(or whatever you fancy) to see what is really going on for some people.
> obviously this would become really relevant with your own measurements. that could look maybe something like this:
> 
> ...



I have the same (my own) measurements. The problem with these measurements is that they contain not only the effects of a head, but also, primarily, the effects of a room and speakers. I do not want to mimic the imperfections of my room and speakers. I want to transcend them.


----------



## gregorio

71 dB said:


> [1] Very little if anything can be gained. That's my problem. How to gain?
> [2] I am really really bad at selling my opinions to other people as you have seen.
> [3] Since recordings are mixed for speakers. [4] Crossfeed is kind of a "mapper" that takes the original spatial parameters and "maps" them into parameters that are more natural to human ear.
> [4a] This is what my reason says and it's what my ears say.
> ...



1. Asked and answered, numerous times!

2. This isn’t the “sell your opinions” sub forum and if your opinions contradict the facts/science then they’re ALWAYS going to be impossible to sell here!

3. That’s not strictly true, as mentioned numerous times. Mixes are virtually always at least checked and approved on headphones and commonly somewhat adjusted. This of course brings us to artistic intent and your bizarre belief that your intent/preference supersedes the artists’ and should be imposed on all other consumers/listeners.

4. No it doesn’t! The “original spatial parameters” (on commercial music recordings) are not natural to start with and crossfeed doesn’t magically make them natural. How many times?
4a. Your reasoning is flawed and one of the main reasons why it’s flawed is because your ears are not “saying” anything, what you’re listening to is your perception, not your ears. Which is EXACTLY the same mistake so many audiophiles make when asserting improvements!
4b. Which is again exactly why so many audiophiles also can’t believe their false assertions are wrong!
4c. Science clearly demonstrates that you can’t trust your (perception of) your senses and therefore, as your reasoning omits the fact that different people’s perception varies, your reasoning contradicts the science, is flawed and will continue to be impossible “to sell” here!

5. Again, it’s not your ears but your perception, you do not directly hear your ears. So if you are to agree with the science (rather than contradict it) and change the word “ears” in your statement for the word “ perception”, then NO, you can both agree with me AND your personal perception (but then you can’t apply your personal perception to everyone else)!

6. The solutions ARE EASY for a rational mind. For someone with an irrational mind, who believes they are a messiah and their preferences should be imposed on everyone else, then maybe the solutions aren’t easy but I wouldn’t know, that’s an issue to discuss with a personal therapist/psychiatrist, NOT an issue for this sub forum!

G


----------



## 71 dB

ironmine said:


> How about this, for a start? :
> 
> I take the left channel, split it into two, one goes directly to the output, the other one goes to the equalizer, before it gets mixed with the right channel.
> 
> ...



1. Minimum phase, as in the picture. Lowpass (in the pic you have a low-shelf filter), -6 dB/oct.

2. That's the crossfeed level, which as I said should in my opinion be variable between -12 dB and -1 dB (that's _gain_ = attenuation of 12 dB to 1 dB). The former is for recordings needing only a final gentle touch to be "headphone ready" and the latter for sound with very large channel separation (ping-pong stereo, movie sound downmixed from surround). So basically every recording has its own crossfeed level setting, and you are supposed to set it by ear: the lowest setting that takes away any excessive spatiality and makes the sound appear natural. So if, for example, -7 dB does it but -8 dB (one dB resolution is good enough imo) does not, the "proper" crossfeed level for that recording is -7 dB. When you learn to hear what crossfeed does to the sound, it's easy to set this level (takes 1-3 seconds for me). If you want one fixed level for crossfeed, you are accepting a compromise in how often that level agrees with the "proper" level. Typical values for fixed crossfeeders are -10 dB … -6 dB. My first DIY crossfeeder had only a -8 dB setting, but I soon recognized the need for an adjustable crossfeed level, so I first modified the crossfeeder to have options -10 dB, -8 dB and -1 dB, and soon after that, being wiser, I built my current 6-level crossfeeder.

In your diagram you are missing the treble-boost block between input and output in the lower branch. Just put another FabFilter plugin there (high-shelf filter, 800 Hz, Q = 1); the boost should vary with the crossfeed level according to the pic I attached in my previous post. If that's too difficult then a fixed +2 dB is a compromise.
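For reference, the adjustable range described above maps to linear mixing factors via k = 10^(dB/20) (plain arithmetic, nothing plugin-specific):

```python
def crossfeed_gain(level_db):
    """Linear mixing factor k for a crossfeed level given in dB."""
    return 10.0 ** (level_db / 20.0)

# The adjustable range discussed above, -12 dB (gentle) to -1 dB (ping-pong stereo):
# -12 dB -> 0.251, -10 dB -> 0.316, -8 dB -> 0.398, -6 dB -> 0.501, -1 dB -> 0.891
table = {db: round(crossfeed_gain(db), 3) for db in (-12, -10, -8, -6, -1)}
```

This matches the 0.25-0.9 range for k mentioned earlier in the thread.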


----------



## ironmine (Oct 20, 2019)

Just now I tried this idea:

[attached screenshot: VST chain in ART-Teknika Console]
I opened in Room EQ Wizard my room measurements which I made several months ago:

1) Left speaker sound measured near the left ear;
2) Left speaker sound measured near the right ear;
3) Right speaker sound measured near the right ear;
4) Right speaker sound measured near the left ear;

and exported them (no smoothing) as impulse files.

Then I built this monstrous VST chain in ART-Teknika. The first four instances of MConvolutionEZ convolver process the left and right channels using the impulses (1,2,3,4).
Another two instances of the convolver apply the room correction impulse* (which I normally use when I listen through speakers) and the headphone correction impulse for my Denon D2000.
The Voxengo Sound Delay plugin delays (2) and (4) by 250 µs. I can also use it to attenuate (2) and (4), but it sounds best when I don't reduce their volume.

(*I need to try removing the phase-correction component from this impulse, as it currently corrects both for the room response and for the speakers' phase non-linearity.)

The resulting sound is really close to what I hear in my room through speakers. Overall, the resemblance is indeed striking.  I am a bit shocked with the result.
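The structure of that chain (per ear: a same-side convolution plus a delayed opposite-side convolution) can be sketched as below. The single-tap impulse responses used for illustration are purely hypothetical stand-ins for real measured impulses, and a real implementation would use FFT convolution rather than this direct form:

```python
def convolve(x, h):
    """Direct-form convolution; fine for a sketch, slow for real impulse responses."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def virtualize(left, right, h_ipsi, h_contra, delay_samples):
    """Each ear hears: same-side channel * h_ipsi + delayed opposite channel * h_contra."""
    def ear(same, opposite):
        direct = convolve(same, h_ipsi)
        cross = [0.0] * delay_samples + convolve(opposite, h_contra)
        n = max(len(direct), len(cross))
        direct = direct + [0.0] * (n - len(direct))
        cross = cross + [0.0] * (n - len(cross))
        return [d + c for d, c in zip(direct, cross)]
    return ear(left, right), ear(right, left)
```

At 44.1 kHz the ~250 µs interaural delay mentioned earlier is about 11 samples; the room-correction and headphone-correction convolutions in the chain above would simply be extra stages after this.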


----------



## ironmine

71 dB said:


> 1. Minimum phase, as in the picture. Lowpass (in the pic you have a low-shelf filter), -6 dB/oct.
> 
> ...



Ok, thanks, 71dB, I will try it !


----------



## 71 dB

gregorio said:


> 2. This isn’t the “sell your opinions” sub forum and if your opinions contradict the facts/science then they’re ALWAYS going to be impossible to sell here!
> 
> 3. That’s not strictly true, as mentioned numerous times. Mixes are virtually always at least checked and approved on headphones and commonly somewhat adjusted. This of course brings us to artistic intent and your bizarre belief that your intent/preference supersedes the artists’ and should be imposed on all other consumers/listeners.
> 
> ...



2. I don't think my opinions contradict facts/science. Science doesn't say artistic intent should be respected; you do. That's YOUR opinion, and it's not very scientific, is it? Science doesn't say whom to respect. Science doesn't care about artistic intent. Science tells us how we hear, and I am trying to apply that to headphone listening, and for me that works, the way science often works because it's science. It is insulting to be called unscientific.

3. You are correct, but it's not black and white. Recordings require different levels of crossfeeding depending on how _well_ they are "checked on headphones." Modern pop music, for example, often respects headphone listeners a lot (mono bass etc.), and if I use crossfeed it's weak crossfeed to take the remnants of excessive spatiality away. Older recordings? In my opinion they very rarely sound "headphone ready." 

4. Well, not "technically" natural, but spatial hearing can be fooled (the whole concept of stereo sound is based on fooling spatial hearing) and to our hearing the spatiality appears natural IF the spatial parameters are close enough to natural spatiality. Since the natural range for ILD at low frequencies for example is 0-3 dB, limiting the ILD of a recording from range 0-14 dB to 0-3 dB makes it sound more natural to spatial hearing. 

4a. When I say my ears say something, I of course mean my perceptions. I don't know what happens in my ears (well, _scientifically_ I know, because I have studied human hearing). I only know what I perceive. That's what matters. So much for flawed reasoning.

4b. Usually those false assertions are related to placebo-effect etc. snake oil things.

4c. My reasoning contradicts science because it agrees with science? What? I know very well I can't trust my perception, but I can trust my reason. You think I am some kind of idiot who hasn't thought about crossfeed for one second? I have pondered these things for years! And you have the audacity to tell me my reasoning contradicts science? If it did, I'm sure crossfeed would NEVER have been invented or used. Why bother if it's so against science? The only thing it is against is your fetish of worshipping artistic intent in an idiotic way that causes excessive spatiality with headphones. It's your business if you prefer headphone sound "as it is". That's your perception and that's your choice, but don't call other people unscientific when they have a scientific background and education plus years of pondering these things! You have said NOTHING that debunks my reasoning. All you have are some illusory concerns about ITD and whatnot, but you haven't demonstrated convincingly how these debunk my reasoning. I admit crossfeed is not perfect and has these little insignificant flaws, and that kills your concerns. As I have said a million times: _Nobody cares about how acoustic crossfeed (and ER + room acoustics) messes up ITD with speakers, but if you dare simulate that with a crossfeeder, suddenly ITD is a gigantic problem._ Re-evaluate your own reasoning before attacking someone with the knowledge and understanding of these things that I have. Even if you knew more than I do, that doesn't mean I don't know anything. I have 7 years of crossfeed experience. How many years do I need for you to take me seriously? 20? 30?

5. Deal! So much of this has been semantic nonsense. 

6. I am not telling anyone they are forced to use crossfeed. My point is that science does support/justify crossfeed. That doesn't mean everyone's perception agrees with science. My perception seems to agree with science, perhaps because I am a science-oriented person rather than "artistic-intent-oriented". If the artistic intent sounds natural on speakers and unnatural on headphones, I call the speaker version what the artist intended and use crossfeed to get something closer to that on headphones. I also use my own head. If my perception of the spatiality with headphones (without crossfeed) is crap, I conclude it's because the artist didn't check for headphones, and I do it myself for her/him using crossfeed.


----------



## ironmine

71 dB, is this the correct way?

[attached screenshot: crossfeed VST chain]
It sounds very nice!

Any other way to improve the scheme? Maybe I should insert a Mid-Side plugin to control the volume of the mid channel separately from the sides?

The sound is a bit too close to me. How to remove it away from me so it would be in front of me but at a certain distance from me?


----------



## gregorio

71 dB said:


> 2. I don't think my opinions contradict facts/science.
> [2a] Science doesn't say who to respect.
> [2b] Science tells us how we hear and I am trying to apply that to headphone listening.
> 4a. I only know what I perceive. That's what matter. So much for flawed reasoning.
> ...



2. It’s clear you think that, it’s also clear (to everyone except you) why and how you think that. Namely, by ignoring/dismissing those parts of science/the facts that you’re contradicting, perfect circular reasoning!! How many times?
2a. Correct, science doesn’t tell us who to respect, common sense does! Again, if I buy an album by say Bjork, whose artistic intents/preferences do I want to hear, mine, Mozart’s, Bjork’s or yours? Duh, your argument is ridiculous!
2b. Science does tell us how we hear (broadly speaking) but YOU ARE NOT applying that to headphone listening! Even by YOUR OWN admission you are only applying certain bits of science, which ISN’T science, it’s pseudoscience. Furthermore, the science you don’t apply/account for, you dismiss on the basis that you don’t/can’t perceive the negative effects. This contradicts science because they do exist, we can measure them and science demonstrates they are audible. So STOP saying science supports your opinions because it does the opposite!

4a. So much flawed reasoning indeed! Your perception is what matters to you BUT it doesn’t matter to me or to science!! Excellent example of flawed reasoning, thanks for demonstrating!!

4c. Your reasoning contradicts science because although it agrees with certain parts of science, it dismisses and contradicts the demonstrated science that perception/preferences will vary from person to person! How many times???
4c1. Which is an OBVIOUS FALLACY because you are basing your reasoning on perception you admit you “can’t trust”, duh!
4c2. No, I think you’re “some kind of idiot” for basing your reasoning on only part of the facts, ignoring and contradicting others and then trying to ram your fallacies down everyone’s throat, in a science forum of all places!!!

The rest of your post is just more of the same ridiculous nonsense, self-contradictions and ego massaging. Round and round you go, bla bla bla!

G


----------



## 71 dB

ironmine said:


> 71dB, is it the correct way? :



Looks ok to me. 



ironmine said:


> 1. It sounds very nice!
> 
> 2. Any other way to improve the scheme? Maybe I should insert a Mid-Side plugin to control the volume of the mid channel separately from the sides?
> 
> 3. The sound is a bit too close to me. How to remove it away from me so it would be in front of me but at a certain distance from me?



1. That's good.

2. Crossfeed itself is _effectively_ a frequency-dependent mid-side plugin, but you may want to reduce the channel separation a little more at high frequencies.

3. This type of crossfeed processing doesn't actively move the sound away from the listener. It does so to some extent by reducing the excessive spatiality that makes hard-panned sounds appear at your ears. How close the sound is depends on the recording itself: dry recordings appear closer than recordings with heavy reverberation/spatial effects.


----------



## 71 dB

gregorio said:


> 2. It’s clear you think that, it’s also clear (to everyone except you) why and how you think that. Namely, by ignoring/dismissing those parts of science/the facts that you’re contradicting, perfect circular reasoning!! How many times?
> 2a. Correct, science doesn’t tell us who to respect, common sense does! Again, if I buy an album by say Bjork, whose artistic intents/preferences do I want to hear, mine, Mozart’s, Bjork’s or yours? Duh, your argument is ridiculous!
> 2b. Science does tell us how we hear (broadly speaking) but YOU ARE NOT applying that to headphone listening! Even by YOUR OWN admission you are only applying certain bits of science, which ISN’T science, it’s pseudoscience. Furthermore, the science you don’t apply/account for, you dismiss on the basis that you don’t/can’t perceive the negative effects. This contradicts science because they do exist, we can measure them and science demonstrates they are audible. So STOP saying science supports your opinions because it does the opposite!
> 
> ...



2. If I am using Newton's law of gravity to describe how an apple drops from a tree, do you say I am wrong because I ignore Einstein's general relativity? Hopefully not, because the difference is meaningless. Newton's law is totally fine if you use it where it is applicable. I am using the relevant parts of human spatial hearing where they are applicable. When you put speakers in a room you are instantly ignoring a million small things, the reflection from the coffee table for a start. People just don't care. Their ears get used to it. What crossfeed does is vanilla stuff compared to the "horrors" of speakers in a room. I know when/what to dismiss.

2a. I'm confident that if Björk heard my choice of the proper crossfeed level, she would approve of it! Björk sounds very different on speakers and on headphones. Which one is her intent? If it is headphones without crossfeed, then speakers are completely wrong (no excessive spatiality) and need crosstalk canceling!

2b. Applying science selectively doesn't make it pseudoscience. How often can we apply science completely? 10 % of the time? Does that mean 90 % is pseudoscience? I haven't said I can't hear negative effects. I have said that to me the negative effects are _insignificant_ compared to the positive. To me correct ILD is much, much more important than correct ITD, and the science of spatial hearing tells us why. This is again knowing how to apply science.

4a. Science doesn't care about my personal preferences or how I perceive sounds, but it seems to agree nevertheless, which is nice.

4c. You are scraping the bottom of the barrel. You sound desperate. Perception does vary from person to person, but there are common traits. You seem to think repeating this "you are dismissing things" argument is an effective tool against me. It isn't, because I can say I know what I dismiss and how insignificant it is.

4c1. No, I am basing my reasoning on the science that made me discover crossfeed in the first place. My perception, trustworthy or not, agrees with the science. If that weren't the case, I would have to trust science more than my perception.

4c2. Explain how I am _contradicting_ other facts. I admit to "ignoring" insignificant things, but not to contradicting them. Your posts are full of terms such as flawed, fallacy, contradiction etc., and a lot of talk about how science has to be used in its entirety, but I see very little science in action. Science means math. Where are your equations demonstrating how "flawed" my thinking is? We can analyse each ignored part of the science one by one if you want. I am not afraid. I made the calculations 7 years ago and found out that the things I ignore _can be_ ignored. Intuitively it was pretty clear that's the case, but I wanted to be sure.


----------



## 71 dB (Oct 20, 2019)

*
ALL HEADPHONES HAVE INTEGRATED CROSSTALK CANCELING!
*
Yes, all headphones have crosstalk canceling and by default it is *switched on*!

Headphones don't have electrical crosstalk-canceling circuits, so it can be hard to notice, but headphones manage crosstalk canceling by bringing the transducers very near the ears, so that the amplitude difference in the crosstalk is huge for distance-attenuation/head-shadowing reasons. So people listen to crosstalk-canceled sound on headphones, most of them totally unaware of it. Crosstalk canceling is beneficial only when the sound is binaural in nature. Music mixed for speakers doesn't need crosstalk canceling; there it can mess up the spatiality, creating totally unnatural levels of ILD at low frequencies, for example. To turn headphone crosstalk canceling *off* and hear the music as intended by the artist, there are two things one can do:

1) Listen to speakers instead of headphones.
2) Use crossfeed or other method like HRTF convolution to reduce ILD. This "neutralizes" the crosstalk canceling.
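As a back-of-the-envelope illustration of the distance-attenuation point (the distances below are made-up examples, and head shadowing is ignored entirely), pure point-source 1/r attenuation alone already gives a headphone driver a huge level difference between the two ears compared with a speaker:

```python
import math

# Level difference from point-source distance attenuation alone,
# ignoring head shadowing (hypothetical example distances).
def ild_db(d_near_m, d_far_m):
    return 20 * math.log10(d_far_m / d_near_m)

# Speakers ~2 m away: the "wrong" ear is only slightly further.
print(round(ild_db(2.0, 2.2), 1))    # ~0.8 dB

# Headphone driver ~1 cm from one ear, ~25 cm from the other.
print(round(ild_db(0.01, 0.25), 1))  # ~28.0 dB of built-in "canceling"
```

In reality head shadowing adds strongly frequency-dependent attenuation on top of this, but the sketch shows why the crosstalk path is negligible with headphones.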


----------



## gregorio

71 dB said:


> 4a. Science doesn't care about my personal preferences or how I perceive sounds, but it seems to nevertheless agree which is nice.
> 4c. You are scraping the bottom of the barrel. You sound desperate. Perception does vary from person to person, but there are common traits.



4a. And here, yet again (!), you give a great demonstration of how your “reasoning” is fallacious! Yes, science does agree with your perception, NOT because your explanation is correct but because the science states that perceptions will vary. Science also agrees with what I’m perceiving, which is significantly different from your perception, for the same reason. The difference is that you’re contradicting this demonstrated science .....

4b. You seem to agree in principle with the science that perception varies from person to person, but then dismiss/contradict what you've just admitted by stating there are common traits. You're effectively trying to assert that perception does not vary from person to person, that your perception is correct and different perceptions are wrong, because you've thought about it a lot, know nothing about music production and therefore falsely apply the science of what occurs in nature! As previously, all you've demonstrated is that your insults actually apply to you! When are you going to stop doing this to yourself (and us)?

The rest of your post is just the same made-up personal opinion and ego massaging as countless times before, yawn. You say you want things to change but then continue doing exactly the same thing; do you think we, or science itself, is going to change if you contradict it enough times?

G


----------



## bigshot

Crossfeed is not an alternative to speakers any more than a rock is an alternative to mashed potatoes. They aren’t the same thing.


----------



## castleofargh

ironmine said:


> I have the same (my own) measurements. The problem with these measurements is that they contain not only the effects of a head, but also, primarily, the effects of a room and speakers. I do not want to mimic the imperfections of my room and speakers. I want to transcend them.


that's a hard one. how to get accurate data for our body using tools that introduce changes we don't want? I don't know how far along the research is where videos or just pictures of the head/ears are fed to software that tries to build the correct acoustic model. the last AES stuff I read on this is a few years old already, and the simplified models seemed to consistently end up too bright for the listener. Super X-FI is the obvious actual product that comes to mind, as they use pictures of the face and ears. but I didn't try it and don't know much about it (knowing that the Realiser A16 would one day come to me, I've stopped purchasing all the little gizmos related to "3D" audio). I'm afraid it creates a multichannel "surround" sound no matter what we feed it; I hope I'm wrong and it at least gives a choice.


I guess if you could measure the signal really in your ear canal and then, at a very similar position, do just a speaker measurement (without you), the variation in FR could be fairly accurate and you could decide to use only that variation for your simulation. to remove most reverb, if your room isn't too small and you set it up not to have desk reflections right away or something, I guess you could just shorten the impulse and remove the trail of reverb (some reverb plugins let you play with that, but IDK if that's something you find in typical convolvers? I've stuck to free stuff, so again I'm not very knowledgeable about what exists). I kind of butchered my impulses "by hand" like a monster when I made my own sauce a few years ago.
or you can go the crossfeed way and just use the FR variations (or whatever you're comfortable with as FR alteration) and the interaural delay you measure between your impulses. or just the delay that makes you feel like a left-channel-only signal is at about 30° when you turn it ON and OFF. that way you have effectively removed almost all components of the room.
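The "shorten the impulse" idea can be sketched in a few lines (the keep-length and fade shape are arbitrary choices, and the random array is just a stand-in for a real measured IR):

```python
import numpy as np

fs = 44100                      # sample rate of the measurement
keep_ms = 2.5                   # keep direct sound + earliest reflections only
ir = np.random.randn(fs)        # stand-in for a measured impulse response

n = int(fs * keep_ms / 1000)    # samples to keep (110 at 44.1 kHz)
fade = np.hanning(2 * n)[n:]    # second half of a Hann window: smooth fade-out
ir_dry = ir[:n] * fade          # truncated IR without the reverb trail

print(len(ir_dry))              # 110
```

The fade-out matters: a hard cut at sample `n` would itself add audible ringing to the convolved result.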



ironmine said:


> Just now I tried this idea:
> 
> 
> 
> ...


250ms must be a typo. you'd typically get something in the ballpark of 250 *µs* (modulo head size and crap). if the app doesn't allow such fine tuning, you can consider just editing the .wav files by counting the number of samples or something like that. there is probably a way to export from REW without the impulse being centered on the peak, but I don't know how. I'm really a noob for everything beyond FR in that app.


----------



## bigshot

I bet a VR helmet with a good-sized listening environment would help. I imagine some of the directionality in speaker systems comes from visually seeing the sources of the sound. I find that classical music concerts filmed in a concert hall and projected on a big screen feel different than the same sound without the visuals.


----------



## Davesrose

castleofargh said:


> that's a hard one. how to get accurate data for our body using tools that introduce changes we don't want?  I don't know how far along the research is on those videos or just pictures of the head/ears and some software tries to build the correct acoustic model. last AES stuff I read on this are a few years old already, and the simplified models seemed to consistently get too bright for the listener.  Super X-FI is the obvious actual product that comes to mind, as they use pictures of the face and ears. but I didn't try and don't know much about it(knowing that the Realiser A16 would one day come to me, I've stopped purchasing all the little gizmos related to "3D" audio). I'm afraid it's creating a multichannel "surround" sound no matter what we feed it, I hope I'm wrong and it at least gives a choice.



There's a problem in that everyone's anatomy is a bit different. In the medical community there's never an absolute value, but a range of tolerance. Especially when it comes to perceptual sound: all of our outer ears, ear canals, and inner ears are different.


----------



## ironmine

rePhase software has a preset for manipulating the phase, which is called "Shuffler" (one preset for the left channel and the second one for the right channel):





Is it of any use when crossfeeding the audio signal for headphone listening? 

What is this Phase Shuffler effect good for?


----------



## ironmine (Oct 23, 2019)

castleofargh said:


> I guess if you could measure the signal really in your ear canal and then at a very similar position do just a speaker measurement(without you). the variation in FR could be fairly accurate and you could decide to only use that variation for your simulation. to remove most reverb, if your room isn't too small and you set it up not to have desk reflections right away or something, I guess you could just shorten the impulse and remove the trail of reverb(some reverb plugins let you play with that, but IDK if that's something you find in typical convolvers?



Yes, it's a good idea (variation in frequency). I need to try it.

Also, the Room EQ Wizard lets you adjust the width of the impulse. You can also choose to time-align it or not.



castleofargh said:


> 250ms should be typo. you typically would get something in the ballpark of 250*µs* (modulo head size and crap). if the app doesn't allow such fine tuning, you can consider just editing the .wav files by counting the number of samples or something like that. there is probably a way to export from REW without the impulse being centered on the peak, but I don't know how. I'm really a noob for everything beyond FR in that app.



Yes, you are right. I need to put the Voxengo in another mode (it's called "dimension"). It uses meters instead of ms:






"Audio Delay: This group of knobs specifies delay time in a selected dimensionality (milliseconds, meters or feet). Note that each knob affects a single decimal position of the whole delay time value. During calculation of delay time expressed in meters or feet, it is assumed that the speed of sound propagation equals 340.29 meters per second." (from the manual)
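So converting the plugin's meters back into a delay is just distance over the speed of sound; with the manual's 340.29 m/s figure, the ~250 µs interaural delay discussed earlier corresponds to roughly 8.5 cm of extra path length:

```python
c = 340.29                         # speed of sound used by the plugin (m/s)

def meters_to_us(d_m):
    return d_m / c * 1e6           # delay in microseconds

print(round(meters_to_us(0.085)))  # ~250 µs for 8.5 cm of extra path
```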

It's funny that using simple, sometimes even free, VST plugins one can assemble a crossfeed. Why then do people pay money for crossfeed VST plugins?
Goodhertz wants to charge $65 for its CanOpener.


----------



## 71 dB

ironmine said:


> It's funny that using simple and sometimes even free VST plugins one can assemble a crossfeed. Why then do people pay money and buy crossfeed VST plugins? Goodhertz Can Opener wants to charge $65 for its CanOpener.



It requires knowledge/insight to assemble a crossfeed. Paying $65 for CanOpener means you don't need to think it out yourself. Someone has thought it out for you.


----------



## 71 dB

ironmine said:


> rePhase software has a preset for manipulating the phase, which is called "Shuffler" (one preset for the left channel and the second one for the right channel):
> 
> 1. Is it of any use when crossfeeding the audio signal for headphone listening?
> 
> 2. What is this Phase Shuffler effect good for?



1. This kind of "shuffling" isn't part of the default crossfeed philosophy, but it can of course shape the headphone sound in a direction you prefer.

2. For creating spatial effects, such as pseudostereo from mono sound, or "wide" spatiality that is mono-compatible.


----------



## ironmine

Chris from AirWindows updated his Monitoring plugin.  The option "Cans C" in it has changed. 
I don't like how it sounds. Discordant! Even the previous "Cans C" was better.
The new version sounds as if there are two layers of music and they live their separate lives creating a cacophony.


----------



## ironmine

ironmine said:


> Chris from AirWindows updated his Monitoring plugin.  The option "Cans C" in it has changed.
> I don't like how it sounds. Discordant! Even the previous "Cans C" was better.
> The new version sounds as if there are two layers of music and they live their separate lives creating a cacophony.



I was wrong! Yesterday I used a poorly mastered album for testing; it was the fault of the recording. The plugin now sounds excellent! I retested it today on higher-quality audio material and with big headphones. It's great.


----------



## 71 dB

ironmine said:


> I was wrong! I used yesterday a poorly mastered album for testing, it's the fault of the recording I used, the plugin now sounds excellent! I retested it again today on a higher quality audio material and using big headphones. It's great.



Yeah, it's good to try a crossfeed plugin with different kinds of recordings.


----------



## bigshot

I rarely use the hall ambience DSPs on my AVR. They tend to mush up the sound. But with Toscanini recordings made in Studio 8H, it is a vast improvement. That recording venue was too small for a full orchestra, so the recordings sound boxy and dry. Adding an envelope of hall reverberation improves them tremendously. Not all recordings are perfect. It's nice to have a toolbox to fix the crummy ones.


----------



## ironmine

71 dB said:


> Minimum phase as in the picture.



71 dB,

Why do you advise using the minimum phase? What's the logic behind for not using the linear phase?



71 dB said:


> you may want to reduce the channel separation little more at high frequencies



What is the best way to do it? Should I boost high frequencies in the Mid channel and reduce them in the Side channel? Or how?


----------



## 71 dB

ironmine said:


> 71 dB,
> 
> 1. Why do you advise using the minimum phase? What's the logic behind for not using the linear phase?
> 
> 2. What is the best way to do it? Should I boost high frequencies in the Mid channel and reduce them in the Side channel? Or how?



1. The delay is _known_ when we know the minimum-phase filter type and cut-off frequency. In this case a 1st-order Butterworth filter creates a low-frequency delay of 1/(2*pi*800) seconds = 0.000199 s = 199 µs, and the delay drops to about 140 µs at 800 Hz. Together with the treble-boost phase shift this creates an ITD of about 250 µs, which is very suitable for mimicking a typical stereo speaker angle. Minimum phase means the minimum possible delay. If the minimum is optimal, then everything else is too much. Linear phase needs FIR-type filters with a certain number of filter taps, and the delay is constant at all frequencies (the phase is linear). That is a very good thing in itself, but the cost is increased phase shift/delay (the delay is half the number of taps, in samples) and also increased computational needs. Latency is a problem with linear-phase filters. At a sample rate of 44100 Hz the needed delay (200 µs) is about 9 samples, so a linear-phase filter of 19 taps (delay (19 − 1)/2 = 9 samples) would give the proper delay.
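The low-frequency delay figure can be sanity-checked numerically from the ideal analog prototype of that filter. Note that the exact value right at the cut-off depends on whether phase delay or group delay is meant; the ~140 µs quoted above sits between the two:

```python
import numpy as np

fc = 800.0              # crossfeed lowpass cut-off from the post (Hz)
wc = 2 * np.pi * fc

# 1st-order lowpass H(jw) = 1 / (1 + j*w/wc), the analog prototype
# of the minimum-phase Butterworth filter discussed above.
def phase_delay_us(f):
    w = 2 * np.pi * f
    return np.arctan(w / wc) / w * 1e6       # -phase/w, in microseconds

def group_delay_us(f):
    w = 2 * np.pi * f
    return (1 / wc) / (1 + (w / wc) ** 2) * 1e6

print(round(phase_delay_us(20.0)))      # ≈ 199 µs at low frequencies
print(round(phase_delay_us(800.0)))     # ≈ 156 µs phase delay at cut-off
print(round(group_delay_us(800.0)))     # ≈ 99 µs group delay at cut-off
print(round(phase_delay_us(20000.0)))   # ≈ 12 µs in the treble
```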

2. The boost should be in the "direct sound". So the original left-channel signal goes treble-boosted to the left ear and lowpass-filtered/attenuated to the right ear (and vice versa). That's roughly what happens in reality: a sound on the left gets a treble boost at the left ear, because the head reflects high frequencies, boosting them (the acoustic energy that doesn't leak to the right ear remains on the left, raising the sound pressure level a few decibels). So, after crossfeed:

- Mid channel: bass energy has increased, treble energy has decreased
- Side channel: bass energy has decreased, treble energy has increased.

In other words, crossfeed kind of redistributes the energies of the Mid and Side channels.
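A minimal sketch of this signal flow, with the direct path getting a crude first-order treble lift and the cross path lowpassed and attenuated. The -8 dB level and +2 dB boost are values picked from this discussion, not anyone's exact plugin settings, and the "high shelf" here is approximated as the input plus a scaled highpass:

```python
import numpy as np
from scipy import signal

fs = 44100
fc = 800.0                      # crossover frequency
g_cross = 10 ** (-8 / 20)       # -8 dB crossfeed level
g_boost = 10 ** (2 / 20) - 1    # ~ +2 dB high-shelf-style treble lift

# 1st-order Butterworth sections: minimum phase, so the ~200 µs
# low-frequency delay of the cross path comes from the lowpass itself.
b_lp, a_lp = signal.butter(1, fc, btype="low", fs=fs)
b_hp, a_hp = signal.butter(1, fc, btype="high", fs=fs)

def crossfeed(left, right):
    def direct(x):   # x plus boosted treble ~= a first-order high shelf
        return x + g_boost * signal.lfilter(b_hp, a_hp, x)
    def cross(x):    # opposite channel: lowpassed and attenuated
        return g_cross * signal.lfilter(b_lp, a_lp, x)
    return direct(left) + cross(right), direct(right) + cross(left)

# A hard-left impulse now leaks (quietly, bass-heavy) into the right ear.
l = np.zeros(256); l[0] = 1.0
out_l, out_r = crossfeed(l, np.zeros(256))
print(np.abs(out_r).max() < np.abs(out_l).max())   # True
```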


----------



## ironmine

71 dB said:


> The boost should be in "direct sound". So, the original left channel signal goes treble boosted to the left ear and low pass filtered/attenuated to the right ear (and vice versa).



I already implemented this treble boost in the direct sound after you wrote: "In your diagram you are missing the treble boost block between input and output in the lower branch. Just put another FabFilter plugin there (highshelf filter, 800 Hz, Q=1)."

But then I asked you how to improve it even more, and you said "but you may want to reduce the channel separation little more at high frequencies". Reducing the channel separation at HF means equalizing HF down in the side channel and/or equalizing them up in the mid channel, right? Or am I misunderstanding you?


----------



## 71 dB

ironmine said:


> I already implemented this treble boost in the direct sound after you wrote: "In your diagram you are missing the treble boost block between input and output in the lower branch. Just put another fabfilter plugin there (highself filter, 800 Hz, Q=1)."
> 
> But then I asked you how to improve it even more, and you said "but you may want to reduce the channel separation little more at high frequencies". Reducing the channel separation at HF means equalizing HF down in the side channel and/or equalizing them up in the mid channel, right? Or am I misunderstanding you?



I'm sorry!

If you want to reduce channel separation for treble, do this: start everything by adding an attenuated and channel-swapped version of the original signal to the signal. This makes the original signal just a little bit more mono, and it doesn't do much harm at low frequencies, as the crossfeed reduces separation there much more anyway. Try attenuation levels of -20 to -15 dB (gain 0.10–0.18).
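In array form that first step is just a tiny mixing operation (using -18 dB here as an example gain inside the suggested range):

```python
import numpy as np

g = 10 ** (-18 / 20)   # -18 dB bleed (gain ≈ 0.126), within -20…-15 dB

def narrow(left, right):
    # add an attenuated, channel-swapped copy of the signal to itself
    return left + g * right, right + g * left

# a hard-panned sample is no longer completely one-sided afterwards
l, r = narrow(np.array([1.0]), np.array([0.0]))
print(round(float(l[0]), 3), round(float(r[0]), 3))   # 1.0 0.126
```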


----------



## ironmine

71 dB said:


> If you want to reduce channel separation for treble do this: Start everything by adding an attenuated and channel swapped version of the original signal to the signal. This makes the original signal just a little bit more mono and it doesn't do much harm at low frequencies as the crossfeed reduces separation much more. Try attenuation levels -20 to -15 dB (gain 0.10 - 0.18).



Ok. Is it the right way? :







1. BitShiftGain: lossless volume reduction by 3 bits (-18 dB). 

2. Swapping channels. 

These steps 1 and 2 implement your advice: "Start everything by adding an attenuated and channel swapped version of the original signal to the signal.  Try attenuation levels -20 to -15 dB (gain 0.10 - 0.18)."

3. BitShiftGain: lossless volume reduction by 1 bit (-6 dB). To compensate for further gain which happens down the road when the signal is processed.

4. DMG Audio Equilibrium: Lowpass filter, 1st Butterworth filter.

5. DMG Audio Equilibrium: Treble boost, highshelf filter, 800 Hz, Q=1.

6. Voxengo Sound Delay: Output gain is set to -8 dB. Delay is set to 0, as you said that steps 4 and 5 (EQ with min.phase) provide the optimum delay already and no further delay is needed.

7. Adding lowpass-filtered, attenuated and channel-swapped signal to the main signal.

8. TB Morphit: Headphone response correction

9. PurestGain: Final adjustment of signal level (to prevent digital clipping)

10. AirWindows Ditherbox: Dithering to 24 bits

11. TB EBU Loudness: Final peak level monitoring.
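(As a quick check of the BitShiftGain maths in steps 1 and 3: a 1-bit shift exactly halves or doubles the sample values, so the dB figures are multiples of 20·log10(2).)

```python
import math

def bitshift_db(bits):
    # each bit of shift scales amplitude by 2, i.e. 20*log10(2) ≈ 6.02 dB
    return 20 * math.log10(2) * bits

print(round(bitshift_db(3), 2))   # 18.06 (step 1's "-18 dB")
print(round(bitshift_db(1), 2))   # 6.02  (step 3's "-6 dB")
```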

Is everything correct now?


----------



## 71 dB

ironmine said:


> Ok. Is it the right way? :
> 
> 
> 
> ...



Looks good to me. 

Having the Voxengo Sound Delay (6.) gives you the opportunity to try out "larger angles" by adding some additional delay to the sound (100 µs, for example).

As this is a "fixed" -8 dB crossfeed, it works best for recordings that require some crossfeeding. It can be a bit of an overkill for recordings with spatiality of a binaural nature, and not enough for ping-pongy stuff, but it's a starting point for trying things out. Of course nothing stops you from tinkering with the parameters for every recording, but that can be confusing and clumsy. Well, the Voxengo Sound Delay (6.) output level actually…

…have fun testing this creation out.


----------



## ironmine

71 dB said:


> Looks good to me.
> 
> Having Voxengo Sound Delay (6.) gives the opportunity to try out "larger angles" for the sound adding some additional delay to the sound (100 µs for example).
> 
> ...



One thing bothers me: you wrote that the EQ (lowpass filter, treble boost) in min-phase mode gives a certain delay (250 microseconds). You say it as if it is fixed. But doesn't it depend on the specific EQ used, its graphic interface, its quality settings, etc.? I replaced FabFilter (which is good) with DMG Equilibrium (which is simply the best, to my ears). DMG is much slower, and I use it with the best quality settings (FIR processing mode, max. impulse length, max. impulse padding). Don't all these settings affect the delay?


----------



## 71 dB

ironmine said:


> One thing bothers me: you wrote that the EQ (lowpass filter, treble boost) in the min.phase mode gives a certain delay (250 microseconds) . You say it as if it is fixed. But does it not depend on the specific EQ used, its graphic interface, its quality settings, etc.? I replaced FabFilter (which is good) with DMG Equilibrium (which is simply the best, to my ears). DMG is much slower, and I use it with the best quality settings (FIR processing mode, max. impulse length, max impulse padding). Don't all these settings affect the delay?



Yes, the delay depends on the filter type, filter order and cut-off frequency. I am not an expert on plugins, so I'm afraid I don't have good answers for you. There should be a way to have the minimum possible delay, meaning the same delay an equivalent analog filter circuit would create, and that can be calculated.


----------



## ironmine

71 dB said:


> Minimum phase means minimum possible delay. If minimum is optimal, then everything else is too much. Linear phase needs FIR-type filters with a certain amount of filter taps and the delay is constant on all frequencies (phase is linear). That's is a very good thing itself, but the cost is increased phase shift/delay (delay is half of the taps) and also increased computational needs. Latency is a problem with linear phase filters.



Can I make this conclusion from what you said above? :

1) If a min.phase is used, additional delay is optional (it's not really needed).
2) If linear phase is used, delay MUST be introduced additionally.

3) If min.phase is used, the delay is frequency-dependent (some frequencies will be delayed more than others).
4) If linear phase is used and delay is introduced additionally (by using a simple delay plugin), this delay is not frequency-dependent (all frequencies will be delayed for an equal amount of time).


----------



## 71 dB

ironmine said:


> Can I make this conclusion from what you said above? :
> 
> 1) If a min.phase is used, additional delay is optional (it's not really needed).
> 2) If linear phase is used, delay MUST be introduced additionally.
> ...



Here's what min.-phase (on the left) and linear-phase (on the right) filter impulse responses look like:



 

Minimum phase means the filter delays the frequencies as little as mathematically possible in order to have the desired frequency response. High frequencies need much less delay than low frequencies in lowpass filtering, so the filter creates longer delays for low frequencies, and the delay is not constant across frequencies. A linear-phase filter takes a different approach: instead of having the minimum possible delay for each frequency, every frequency is delayed by an equal amount, the amount needed for the low frequencies. So higher frequencies are delayed MORE than necessary.

100 Hz means 100 cycles per second. That's one cycle (360° or 2π rad) every 0.01 seconds. So delaying a 100 Hz frequency by, for example, one tenth of a cycle (36° or π/5 rad) means a delay of 0.001 seconds. At 1000 Hz one cycle is only 0.001 seconds long, so in order to have the same delay in seconds we need to delay 1000 Hz by one full cycle (360° or 2π rad). So the phase shift needs to grow tenfold from 100 Hz to 1000 Hz, i.e. increase linearly with frequency, and that's why a constant delay at all frequencies means a linear phase shift.
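The arithmetic above, in two lines (using the same 1 ms example delay):

```python
delay_s = 0.001                       # a fixed 1 ms delay
for f_hz in (100.0, 1000.0):
    cycles = f_hz * delay_s           # phase shift expressed in cycles
    print(f_hz, cycles * 360)         # 100 Hz -> 36°, 1000 Hz -> 360°
```

The phase shift in degrees grows in direct proportion to frequency, which is exactly what "linear phase" names.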

So, to answer your four questions:

1) Yes, because the delay as it is works nicely.
2) It depends on the linear-phase filter (on its constant delay). We want about 200 µs, which at 44.1 kHz equals about 9 samples (= 204 µs). If that's the delay of the linear-phase filter, then we are good, and we have the opportunity to add delay if we want to simulate a wider speaker angle. If the delay is much bigger than 200 µs, then we have a problem, but it can be fixed by adding a compensating delay on the "direct sound" path. However, having a lot of delay this way means big latency for the crossfeeder. Crossfeeders based on minimum-phase filters are as "fast" as possible, and the latency at high frequencies is practically zero.
3) Yes. The delay at the lowest frequencies is about 200 µs, dropping slowly at first and then more rapidly around the cut-off frequency of 800 Hz, where the delay has dropped to 71 % (140 µs); in the treble the delay slowly dies away (at 20 kHz it is only 12 µs). The dropping of the delay with frequency isn't a serious issue, because the crossfeed level drops with frequency too.
4) Yes, as I explained above.
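For point 2, the constant delay of a symmetric (linear-phase) FIR is the standard (taps − 1)/2 samples, so the tap count needed for a given delay follows directly (the 19-tap figure is that arithmetic, not a specific plugin's setting):

```python
fs = 44100
target_us = 204                       # ~200 µs, i.e. 9 samples at 44.1 kHz

delay_samples = round(target_us * fs / 1e6)   # -> 9 samples
taps = 2 * delay_samples + 1                  # symmetric odd-length FIR
print(delay_samples, taps)                    # 9 19
print(round((taps - 1) / 2 / fs * 1e6))       # 204 µs back again
```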


----------



## ironmine

71 dB said:


> If you want to reduce channel separation for treble do this: Start everything by adding an attenuated and channel swapped version of the original signal to the signal. This makes the original signal just a little bit more mono and it doesn't do much harm at low frequencies as the crossfeed reduces separation much more.



Well, yesterday I tried the additional feature you recommended earlier (mono-izing the sound before further processing). Frankly speaking, I did not like it. I did not hear any benefit, but I think I heard a slight blurring of the sonic images.



71 dB said:


> So, to answers your four questions:



Thanks for clarifications!

In short, theoretically minimum phase is the way to go. In practice, I believe it sounds better too (I compared them).

Do you know that equalizers can be dynamic? (See the FabFilter manual, section "Dynamic EQ".) Can this dynamic feature be of any use in designing a crossfeed? E.g. for that treble boost (+2 dB) you mentioned?


----------



## gregorio

ironmine said:


> I used yesterday a poorly mastered album for testing, it's the fault of the recording I used, the plugin now sounds excellent!



In what way was the album poorly mastered? There are many metrics for judging mastering quality, some objective, some subjective but how well a master works with crossfeed is not one of them. So I object when a particular member repeatedly defines mastering quality by that metric! 



ironmine said:


> Is everything correct now?



Difficult to say for sure but probably not. Whenever you insert a plugin/process, a delay is introduced due to the processing time required by the plugin. That's not a problem if you're applying the same plugin/process to all the signals but is a problem when you're only applying it to one or some of the signals and then mixing them together again. For example, it's likely that the input to item #3 is a mix of the original signal and the channel swapped, -18dB signal, that's somewhat out of phase, due to the processing delay introduced by "bitshift" item #2.

Most commercial DAWs *somewhat* alleviate this problem using a fairly complex scheme called "Automatic Delay Compensation" (ADC), where the plugins/processors report the processing delay they're incurring and the DAW delays other channels by an appropriate amount to compensate. From your flow diagram, it seems unlikely this is occurring and even if it is, it's often not entirely accurate/successful.

To check this, you would need to run a null test between the two stereo inputs to item #3, although that might be tricky with your setup because you obviously don't want to add the same plugin to the original signal. Maybe add a second bitshift (after item #2) to swap the channels back again (with +3 bits). The end result should be a perfect null; anything else indicates a phase shift. A similar situation exists between the signal passing through items #4/#6 and item #5. You have an extra unit (#6, sound delay), plus a probable difference in phase between the two different EQ settings. A null test will again reveal if this is the case.
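A toy version of the null test described above, in Python with NumPy (hypothetical signals, not any particular DAW): a simulated 3-sample plugin latency on one path leaves a large residual, and compensating the delay restores a perfect null.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)

# Path A is untouched; path B simulates a plugin with 3 samples of latency
a = x
b = np.concatenate([np.zeros(3), x])[:1000]

# Null test: subtract the two paths and inspect the residual
print(np.max(np.abs(a - b)) > 0.1)             # True: residual reveals the delay

# Delay-compensate (as a DAW's ADC would) and the null becomes perfect
print(float(np.max(np.abs(a[:997] - b[3:]))))  # 0.0
```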

Lastly, your item #10 looks to be superfluous. Dithering to 24bit (as opposed to just truncating) makes no audible difference. Extensive testing 15+ years ago revealed that roughly 100 such truncation operations are required for artefacts to *sometimes* be audible.



ironmine said:


> [1] One thing bothers me: you wrote that the EQ (lowpass filter, treble boost) in the min.phase mode gives a certain delay (250 microseconds) . You say it as if it is fixed. But does it not depend on the specific EQ used, its graphic interface, its quality settings, etc.?
> [2] I replaced FabFilter (which is good) with DMG Equilibrium (which is simply the best, to my ears).
> [3] DMG is much slower, and I use it with the best quality settings (FIR processing mode, max. impulse length, max impulse padding). Don't all these settings affect the delay?



1. Yes, it does. Not so much on the graphic interface but certainly on the specific EQ used and its processing settings (digital, min phase, etc.).

2. There's no difference. A decade or so ago, a bunch of engineers (through GearSlutz) null tested a wide range of plugin PEQs. The free built-in ones with ProTools, Logic, Cubase and a variety of third-party EQ plugins (some of them expensive, mastering EQ plugins), including DMG and FabFilter. The results nulled perfectly, there was no difference between any of them! This is only true of course when using basic PEQ unit features, obviously the results do not null using say the resonant filter or other options in DMG and other third party EQs which don't exist in built-in EQs (or when using EQs modelled on vintage analogue EQs). However, you're just doing a very basic hi-shelf and LPF so the results should be identical.

3. Yes, they would very significantly affect the delay! Obviously you'd want to make sure both EQs (on each of the signal paths) have the same basic setting. Personally I'd use the basic "Digital" setting in the DMG EQs rather than the other settings. Incidentally, it's inaccurate to call the other settings "best quality settings", they're different settings (rather than intrinsically better) that exchange some artefacts for other artefacts (for example, phase, pre and post ringing, etc.) but which is better depends on the material (music mix) you're inputting and typically the difference is only potentially audible if you're using numerous EQ instances (which is common in music/sound mixing but not in your case).

G


----------



## 71 dB

ironmine said:


> 1. Well, yesterday I tried this additional feature that you recommended earlier (mono-izing the sound before further processing). Frankly speaking, I did not like it. I did not hear any benefit, but I think I heard a slight blurring of sonic images.
> 
> 2. Thanks for clarifications! In short, theoretically the minimum phase is the way to go. In practice, I believe it sounds better, too (I compared them).
> 
> Do you know that equalizers can be dynamic? (see the manual for FabFilter, section "Dynamic EQ"). Can this dynamic feature be of any use in designing a crossfeed? E.g., for that treble boost (+2 dB) you mentioned?



1. Okay. I wouldn't use the "additional feature" on all recordings either. I have come to accept the fact that spatial hearing works differently for different people, or maybe it's not how the hearing works but what we prefer. So everything I say might be worthless to others. Since I'm not paid to write here, writing here is probably a total waste of my time. You can try adding a long delay (say 20 ms) to the additional feature to simulate a room. Maybe that gives the sound you want?

2. Of course EQ can be dynamic. I have written dynamic Nyquist plugins myself. I haven't tried a dynamic filter in crossfeed myself, but I have thought about the possibilities of using dynamic EQ to compress channel separation to a steadier level by applying it to mid-side signals.


----------



## 71 dB

gregorio said:


> There are many metrics for judging mastering quality, some objective, some subjective but how well a master works with crossfeed is not one of them.
> 
> G



Crossfeed not being a metric isn't written in stone. Of course it can be a metric if we decide so. I myself can decide it is part of my personal metrics for mastering quality.


----------



## bigshot

There's certainly bad engineering out there. However you want to try and slap a band aid on it is fine if it subjectively sounds better to you.


----------



## betula

Crossfeed or not to crossfeed... It depends on the quality of the crossfeed implementation, I guess. TBH I don't trust any software-based crossfeed. That sounds a bit dodgy. Hugo2 or TT2 crossfeed, however, is another matter. I love crossfeed level 2 (out of 0, 1, 2, 3) on my TT2. It just gives a more natural space and sound. I guess how beneficial crossfeed is also depends on the speakers/headphones. Some of the old The Doors recordings, for example, are way too stereo. Those definitely benefit from a bit of good crossfeed: it makes the space more realistic.


----------



## ironmine

gregorio said:


> In what way was the album poorly mastered? There are many metrics for judging mastering quality, some objective, some subjective but how well a master works with crossfeed is not one of them. So I object when a particular member repeatedly defines mastering quality by that metric!



It was a typical poorly produced heavy-metal album (2009) with a dynamic range of 5-6 dB. The drum sounds are awful.

Other albums sound better.

It does not matter much, because, overall, I still prefer 112dB Redline Monitor to AirWindows Monitoring (Cans C).



gregorio said:


> Difficult to say for sure but probably not. Whenever you insert a plugin/process, a delay is introduced due to the processing time required by the plugin. That's not a problem if you're applying the same plugin/process to all the signals but is a problem when you're only applying it to one or some of the signals and then mixing them together again. For example, it's likely that the input to item #3 is a mix of the original signal and the channel swapped, -18dB signal, that's somewhat out of phase, due to the processing delay introduced by "bitshift" item #2. Most commercial DAWs *somewhat* alleviate this problem using a fairly complex scheme called "Automatic Delay Compensation" (ADC), where the plugins/processors report the processing delay they're incurring and the DAW delays other channels by an appropriate amount to compensate. From your flow diagram, it seems unlikely this is occurring and even if it is, it's often not entirely accurate/successful. To check this, you would need to run a null test between the two stereo inputs to item #3, although that might be tricky with your setup because you obviously don't want to add the same plugin to the original signal. Maybe add a second bitshift (after item #2) to swap the channels back again (with +3 bits). The end result should be a perfect null, anything else indicates a phase shift. A similar situation exists between the signal passing through items #4/#6 and item #5. You have an extra unit (#6, sound delay), plus a probable difference in phase between the two different EQ settings. A null test will again reveal if this is the case.



Thanks for your advice.

What if I place the same impulse (a perfect impulse containing all frequencies) in the left and right channels of an otherwise silent audio track, and process it? The crossfeed processing must copy some part of the impulse from the left channel into the right one, and vice versa. So the right channel will then show two impulses. By measuring the distance between them, will I be able to measure the delay exactly?
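That measurement idea can be sketched like this (the 9-sample delay and -12 dB gain are placeholder values for illustration, not Redline Monitor's actual parameters):

```python
import numpy as np

fs = 44100
left = np.zeros(1024)
right = np.zeros(1024)
left[100] = 1.0                    # unit impulse in the left channel only

# Toy crossfeed: a delayed, attenuated copy of the opposite channel
delay, gain = 9, 10 ** (-12 / 20)  # placeholder: 9 samples, -12 dB
out_right = right.copy()
out_right[100 + delay] += gain * left[100]

# The crossfed impulse's position in the right channel reveals the delay
peak = int(np.argmax(np.abs(out_right)))
print(peak - 100)                  # 9 samples
print(round((peak - 100) / fs * 1e6))  # ≈ 204 µs at 44.1 kHz
```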



gregorio said:


> Lastly, your item #10 looks to be superfluous. Dithering to 24bit (as opposed to just truncating) makes no audible difference. Extensive testing 15+ years ago revealed that roughly 100 such truncation operations are required for artefacts to *sometimes* be audible.



You are right (probably - I did not compare it myself), it's most likely inaudible, but I still want to do it as it is scientifically the right way to convert from 32 to 24 bits. And it does not cost anything, computation-wise.

Chris from AirWindows makes a big deal about different dithering algorithms (he's got a bunch of them). Frankly, I think he's wasting his time.



gregorio said:


> 2. There's no difference. A decade or so ago, a bunch of engineers (through GearSlutz) null tested a wide range of plugin PEQs. The free built-in ones with ProTools, Logic, Cubase and a variety of third-party EQ plugins (some of them expensive, mastering EQ plugins), including DMG and FabFilter. The results nulled perfectly, there was no difference between any of them! This is only true of course when using basic PEQ unit features, obviously the results do not null using say the resonant filter or other options in DMG and other third party EQs which don't exist in built-in EQs (or when using EQs modelled on vintage analogue EQs). However, you're just doing a very basic hi-shelf and LPF so the results should be identical.



To my ears, even when boosting the bass with a very basic high shelf, DMG gives a perfectly black background. FabFilter makes the background somewhat greyer. DMG does exactly what I want, without altering other aspects of the music. It's the most transparent and least detrimental EQ among the many, many I have tried. I wish FabFilter sounded the same as DMG, as I like FabFilter's simpler interface and responsiveness much better.



gregorio said:


> Personally I'd use the basic "Digital" setting in the DMG EQs rather than the other settings. G



This is the mode I use, too.


----------



## ironmine (Oct 29, 2019)

71 dB said:


> 2. Of course EQ can be dynamic. I have written dynamic Nyquist plugins myself. I haven't tried dynamic filter in crossfeed myself, but I have thought about the possiblities of using dynamic EQ to compress channel separation to a more steady level applying it to Mid-Side signals.



Can you please elaborate your idea of applying dynamic EQ to mid-side signals? I would like to try it.

There should be some sort of self-adjusting mechanism in crossfeeds which changes the crossfeed level depending on how hard-panned the stereo in the audio signal is. I believe 112dB Redline Monitor has such a mechanism inside, as it sounds great with all recordings; I never hear any "excessive spatiality" from it, as I do with the other plugins I test (including my own self-made one above).

If only there were a way to reverse-engineer it. I am very keen to see and understand how it processes the sound.


----------



## gregorio

71 dB said:


> [1] Crossfeed being not a metric isn't written in the stone.
> [2] Of course it can be a metric if we decide so.
> [3] I myself can decide it is part of my personal metrics for mastering quality.



1. As crossfeed is never a metric for mastering, then effectively it IS written in stone.
2. Exactly! The flaw in your argument is that "we" do NOT decide so!
3. You can decide anything you want for your "personal metrics" but then of course they're YOUR personal metrics, NOT everyone else's! For example: If I put a Ferrari in a muddy field, it will perform very poorly, probably more poorly than even an average family saloon. But of course, Ferraris are not designed to perform in muddy fields, they're (very well) designed for race tracks and decent quality public roads. If a muddy field is my "personal metric" though, can I then state that Ferraris are badly designed? Sure ... but only if I want to state a falsehood and be taken for an ignorant idiot!!

It's the same problem with you all over again. You have some personal preference/metric that you FALSELY state as a general fact and apply to everyone. How many times?



ironmine said:


> [1] It was a typical poorly produced heavy-metal album (2009) with dynamic range 5-6 dB.
> [2] The sounds of drums are awful. ... [2a] Other albums sound better.



1. What you describe could easily be the fault of the production/mixing. It could be a well mastered album (or as well mastered as possible given a poor mix), although of course it's entirely possible that it has been poorly mastered.

2. It might not be "awful" in the first place; maybe "awful" was intentional? Much of what makes heavy metal genres "heavy metal" is the use of the intentionally "awful": heavily distorted guitars, singing that's often more closely related to shouting/screaming than "singing", and heavily processed, crushed and/or distorted drums, etc.
2a. Under what conditions? Audiophiles often/typically judge what sounds better on audiophile/high quality systems but most masters are not designed for audiophile systems.

I'm not outright disputing that it's a poorly mastered album, it certainly may be but firstly, as explained above, it might not be and secondly, even if it is poorly mastered as you describe, in what way would that affect crossfeed?



ironmine said:


> What if I place the same impulses (perfect impulse containing all frequencies) in the left and right channels into the audio track which is otherwise silent, and process it ? The crossfeed processing must result in copying some part of the impulse from the left channel into the right one, and vice versa. So, now the right channel will show two impulses. By measuring the distance between them, will I be able to measure the delay exactly?



That could work but I'd run that test several times, with the same and different frequency-content impulses, to be sure the delay is consistently the same. If it is, also bear in mind that the delay is likely to be different with different sample rate material.



ironmine said:


> You are right (probably - I did not compare it myself), it's most likely inaudible, but I still want to do it as it is scientifically the right way to convert from 32 to 24 bits. And it does not cost anything, computationally wise.



Firstly, at normal/reasonable listening levels, the difference between dither and truncation is inaudible when dithering/truncating to 16bit; when dithering or truncating to 24bit the artefacts are another 48dB lower (than 16bit), so definitely inaudible!

Secondly, we're not strictly talking about truncation here anyway. You are not converting from 32bits to 24bits, you're converting from 32bit float to 24bit fixed. A 32bit float isn't an integer value (unlike 24bit fixed): the 32 bits are divided, with 8 of them representing the exponent (plus a sign bit), which effectively leaves 24 bits of precision to store an integer (significand) value. 32bit float and 24bit fixed therefore have exactly the same 24 bits of precision, and the "scientifically right way" to convert between them is simply to round the 32bit float (depending on the exponent value); dithering just adds 6dB of unnecessary white noise (albeit way below audibility).

Lastly, it obviously does cost something computationally: a triangular probability density function has to be calculated for each sample of each channel and then applied. Admittedly though, this computational cost is pretty trivial for any modern processor.
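For illustration, here is what plain rounding vs TPDF dithering to a 24-bit grid looks like numerically (a minimal sketch, not any particular plugin's dither; float64 stands in for the 32-bit float samples):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 8)   # stand-in for 32-bit float samples

scale = 2 ** 23             # 24-bit fixed point: ±2^23 quantisation steps

rounded = np.round(x * scale) / scale  # plain rounding to the 24-bit grid

# TPDF dither: two uniform ±0.5 LSB noises summed, added before rounding
tpdf = rng.uniform(-0.5, 0.5, 8) + rng.uniform(-0.5, 0.5, 8)
dithered = np.round(x * scale + tpdf) / scale

# Either way the error stays within ~1 LSB at the 24-bit level,
# i.e. around -144 dBFS, far below audibility
print(bool(np.max(np.abs(rounded - x)) <= 0.51 / scale))   # True
print(bool(np.max(np.abs(dithered - x)) <= 1.51 / scale))  # True
```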



ironmine said:


> To my ears, even when boosting the bass with a very basic high shelf, DMG gives a perfect black background. FabFilter makes the background somewhat greyer. DMG does exactly what I want, without altering other aspects of music. It's the most transparent and non-detrimental EQ among many, many I have tried. I wish FabFilter would sound same as DMG, as I like the simpler interface and responsiveness of FabFilter much better.



Again, this is quite trivially easy to test objectively. Take a channel of audio/music, apply your "very basic high shelf" with DMG and bounce it down, do exactly the same with FabFilter, and null test the two. Unless you've got some other setting applied in one of the EQs (or you've screwed the test up), the result should be a perfect null and therefore your ears are fooling you.

G


----------



## 71 dB

ironmine said:


> Can you please elaborate your idea of applying dynamic EQ to mid-side signals? I would like to try it.
> 
> There should be some sort of a self-adjusting mechanism in crossfeeds, which changes the crossfeed level depending on the hardness of stereo in the audio signal.  I believe 112dB Redline Monitor has got such mechanism inside, as it sounds great with all recordings, I never hear any "excessive spatiality" from it, as I do with other plugins I test (including my own self-made one above).
> 
> If only there were a way how to reverse engineer it. I am very keen to see and understand how it processes the sound.



You first do LR to MS conversion: M = a * (L + R), S = a * (L - R).
Then you apply dynamic EQ (lowshelf filter) to S and get S'.
Finally you do MS' to LR conversion: L = b * (M + S'), R = b * (M - S').
Here a * b = 0.5. For example, a and b are both the square root of 0.5 = 0.707106781…, or a = 0.5 and b = 1.0, or a = 1.0 and b = 0.5.

The result is a signal where the ILD at low frequencies has been reduced dynamically, large ILDs more than small ILDs. It's low-frequency ILD compression.
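The steps above can be sketched in Python (the fixed 0.8 gain is just a placeholder for whatever attenuation the dynamic low-shelf would currently apply to S):

```python
import numpy as np

a = b = np.sqrt(0.5)      # a * b = 0.5, here split symmetrically


def lr_to_ms(L, R):
    return a * (L + R), a * (L - R)


def ms_to_lr(M, S):
    return b * (M + S), b * (M - S)


rng = np.random.default_rng(0)
L, R = rng.standard_normal(4), rng.standard_normal(4)

M, S = lr_to_ms(L, R)
S_prime = 0.8 * S         # placeholder for the dynamic low-shelf acting on S
L2, R2 = ms_to_lr(M, S_prime)  # output with reduced side level (smaller ILD)

# Sanity check: with S untouched, the round trip is exact
L3, R3 = ms_to_lr(M, S)
print(bool(np.allclose(L3, L)), bool(np.allclose(R3, R)))  # True True
```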


----------



## 71 dB

gregorio said:


> It's the same problem with you all over again. You have some personal preference/metric that you FALSELY state as a general fact and apply to everyone. How many times?
> 
> G



Except this time I DID state them as my personal preferences and said that what works for me doesn't work for everyone. Can't you give me credit for anything?


----------



## 71 dB

gregorio said:


> 1. As crossfeed is never a metric for mastering, then effectively it IS written in stone.
> G



Would you say metrics for mastering are 100 % perfect and free of any kind of burden of history? Maybe in a parallel universe where speakers don't exist and all listening is done with headphones, crossfeed IS a metric for mastering? It's weird to say these things are written in stone. Where is that stone? I have never seen a picture of such a thing.


----------



## gregorio

71 dB said:


> I myself can decide it is part of my personal metrics for mastering quality.


Yes you can, I already stated that. However, you cannot then assert publicly that the mastering quality is poor based on your personal metrics that are different from everyone else's, any more than I can state Ferraris are poorly designed because they don't meet my personal metric of muddy fields!


71 dB said:


> Except this time I DID state them as my personal preferencies and said what works for me doesn't work for everyone. Can't you give me credit for anything?


Numerous times (unfortunately) you have stated that the mastering is poor because you define mastering quality according to how much crossfeed you feel/prefer/believe it requires. It's all very well to admit that it's based on "your personal metric" but useless if you just carry on asserting poor mastering regardless! The MOST you can say is that according to your personal perception/preferences a particular mix/master does or doesn't benefit from crossfeed, NOT that the mastering is good or bad because of that.


71 dB said:


> [1] Would you say metrics for mastering are 100 % perfect and free of any kind of burden of history?
> [2] Maybe in a parallel universe where speakers don't exist and all listening is done with headphones crossfeed IS a metric for mastering?
> [3] It's weird to say these things are written in stone. Where is that stone? I have never seen a picture of such thing.


1. Of course not but what's that got to do with you making up your own metrics that have never been part of history?
2. How would I know, I've never been to a parallel universe. Can't you answer that for yourself?
3. Not as weird as you not being able to read the word "effectively"!!

G


----------



## 71 dB

gregorio said:


> 0. Numerous times (unfortunately) you have stated that the mastering is poor because you define mastering quality according to how much crossfeed you feel/prefer/believe it requires. It's all very well to admit that it's based on "your personal metric" but useless if you just carry on asserting poor mastering regardless! The MOST you can say is that according to your personal perception/preferences a particular mix/master does or doesn't benefit from crossfeed, NOT that the mastering is good or bad because of that.
> 
> 1. Of course not but what's that got to do with you making up your own metrics that have never been part of history?
> 
> G



0. Maybe I have not expressed myself clearly enough. If you master a recording well for speakers (usually the primary target), that doesn't automatically mean the same recording is well mastered (and produced) for headphones. For ME, crossfeed is a nice solution for recordings that are well mastered for speakers but which my ears don't like with headphones. If you want me to stop using crossfeed, what you can do as an audio engineer is master/produce "better" for headphones, so that my ears like the result as it is and I don't have to fix anything using crossfeed.

1. Isn't everything made up by someone? Without making things up there wouldn't be history. I am not dead yet. I live and operate in society. Maybe if I am lucky and successful my ideas are accepted and they become part of history. That would be nice, but you seem to deny me that. Why? Are only some people allowed to be part of history, leave their mark?


----------



## ironmine (Oct 30, 2019)

71 dB said:


> You first do LR to MS conversion: M = a * (L + R), S = a * (L - R).
> Then you apply dynamic EQ (lowshelf filter) to S and get S'
> Finally you do MS' to LR conversion: L = b * (M + S'), R = b * (M - S').​
> Here a * b = 0.5. For example a and b both are square root of 0.5 = 0.707106781… or a = 0.5 and b = 1.0 or a = 1.0 and b = 0.5.
> ...



1) Do these steps above (LR to MS conversion, Dynamic EQ and MS' to LR conversion) replace the "Lowpass filter, 1st Butterworth filter" or should be added to it?

2)  "a" and "b" are gains, right?  Why a x b should equal to 0.5? Where does this rule come from? And if these a and b represent gains, how I can convert, say, 0.707106781 to dB?

We speak in different tongues.  You describe processes in words and formulas, and I have a hard time understanding how to convert your words into concrete VST plugins and how to connect them to each other in a VST host, in which sequence, etc.

PS: Why not simply convert LR to MS, then apply dynamic low shelf EQ to S, and, simultaneously, apply dynamic high shelf EQ to M (same amount of dB)?  Then convert MS back to LR.  It will redistribute the bass energy from S to M dynamically.


----------



## ironmine

gregorio said:


> A 32bit float isn't an integer value (unlike 24bit fixed), the 32 bits are divided, the first 8 of which represent the exponent, which effectively leaves 24bits to store an integer value.



I don't understand it. If 24 bits (in 32bit float) are used to store an integer value, then why isn't this 32bit float an integer value?

Also, what difference does it make whether the 32 bits are float or integer? Doesn't dithering have to be applied anyway, whenever you convert from a higher bit depth format to a lower bit depth format?


----------



## ironmine

gregorio said:


> What you describe could easily be the fault of the production/mixing. It could be a well mastered album (or as well mastered as possible given a poor mix), although of course it's entirely possible that it has been poorly mastered.



I don't care for such details. Mixing or mastering - it's all the same to me, as a listener. What matters is the final result.



gregorio said:


> It might not be "awful" in the first place; maybe "awful" was intentional? Much of what makes heavy metal genres "heavy metal" is the use of the intentionally "awful": heavily distorted guitars, singing that's often more closely related to shouting/screaming than "singing", and heavily processed, crushed and/or distorted drums, etc.



As a long-term fan of heavy metal genres (from hard rock to grind metal), I can assure you that this awfulness is not intentional; it's a result of the loudness wars. Previously, even death metal was not recorded as poorly as it is now. Dynamic range went from 10 dB to 8 dB, then to the current obnoxious 4-6 dB.


----------



## Davesrose

ironmine said:


> I don't understand it. If 24 bits (in 32bit float) are used to store an integer value, then why this 32bit float isn't an integer value?
> 
> Also, what difference does it make whether 32 bits are float or integer? Doesn't dithering have to be applied anyway, whenever you convert from a higher bitrate format to a lower bitrate format?



In computing, floating point is used to make efficient use of memory and to represent very large (or very small) numbers; it can be a more efficient way of processing large numbers or having a sliding scale. With applications like 3D animation, I find rendering times can be considerably faster with 32bit float image files vs 16bit integer. The notation for a floating point number is an integer (significand) and then an exponent. A 32bit float variable has a range going from about 1.2 x 10^-38 to about 3.4 x 10^38. That's much shorter notation than a literal integer value.
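To make the layout concrete, here's a small sketch unpacking the bits of one 32-bit float (IEEE 754 single precision: 1 sign bit, 8 exponent bits, 23 stored mantissa bits plus an implicit leading 1, hence 24 bits of precision):

```python
import struct

# Raw bits of the 32-bit float -6.25
bits = struct.unpack('>I', struct.pack('>f', -6.25))[0]
sign     = bits >> 31           # 1 bit
exponent = (bits >> 23) & 0xFF  # 8 bits, biased by 127
mantissa = bits & 0x7FFFFF      # 23 stored bits (24-bit precision w/ implicit 1)

# Reassemble: value = (-1)^sign * 1.mantissa * 2^(exponent - 127)
value = (-1) ** sign * (1 + mantissa / 2 ** 23) * 2 ** (exponent - 127)
print(value)                    # -6.25
```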


----------



## 71 dB (Oct 31, 2019)

ironmine said:


> 1) Do these steps above (LR to MS conversion, Dynamic EQ and MS' to LR conversion) replace the "Lowpass filter, 1st Butterworth filter" or should be added to it?
> 
> 2)  "a" and "b" are gains, right?  Why a x b should equal to 0.5? Where does this rule come from? And if these a and b represent gains, how I can convert, say, 0.707106781 to dB?
> 
> ...



1) These steps don't "replace" the default crossfeed steps as such; rather, this is another method (one that doesn't try to simulate 2 speakers) of dealing with excessive spatiality. Combining this idea with crossfeed is more challenging (because a dynamic EQ also has a dynamic delay which is hard to control properly) and something I haven't done myself, so I can't give you simple answers.

2) Yep, "a" and "b" here are gains and the values come from rotation matrix,






because LR to MS and MS to LR conversions are 45° pseudorotations:





Pseudorotation means we can use the same matrix to go either direction, in other words doing it twice brings us back to beginning rather than doing 90° rotation. This is handy, because I need only one plugin for both conversions. cos 45° = sin 45° = 0.7071… = 20*log10 (0.7071…) dB = -3.0102… dB. There is not gain change because you add two channels to have one. Without this -3 dB drop each conversion would increase your signal by 3 dB.
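In matrix form, the 45° pseudorotation squares to the identity, which is why one plugin serves both directions:

```python
import numpy as np

c = np.sqrt(0.5)          # cos 45° = sin 45° ≈ 0.7071
P = np.array([[c,  c],
              [c, -c]])   # LR→MS and MS→LR pseudorotation matrix

# Applying the same matrix twice returns to the start: P @ P = identity
print(bool(np.allclose(P @ P, np.eye(2))))  # True

# The 0.7071 gain expressed in dB
print(round(20 * np.log10(c), 2))           # -3.01
```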


----------



## gregorio

71 dB said:


> 0. Maybe I have not expressed myself clearly enough. If you master a recording well for speakers (the primary target usually) that doesn't automatically mean the same recording is well mastered (and produced) for headphones.
> [0.1] If you want me to stop using crossfeed what you can do as an audio engineer is master/produce "better" for headphones so that my ears like the result as it is and I don't have to fix anything using crossfeed.
> 1. Isn't everything made up by someone?
> [1.1] Without making things up there wouldn't be history.
> ...



0. And maybe I have not expressed myself clearly enough! Using the analogy again, where mastering is car design, a recording is a Ferrari and a muddy field is my personal metric, then your statement becomes: "If you design a Ferrari for a race track/good roads (the primary target) that doesn't automatically mean the same Ferrari is well designed for a muddy field." - Does this statement seem reasonable to you or is it just ignorant/stupid? What if we put your assertion the other way around: let's say I have a very well mastered binaural recording and decide to play that recording on speakers. It obviously doesn't work well, so would it be accurate/correct for me to assert that this binaural recording in fact isn't well mastered, it's poorly mastered? Or would I just be ignorant/stupid?
0.1. I don't want you to stop using crossfeed, I don't care whether you use crossfeed or not and I'm not mastering a track/album just for you personally! Should Ferrari redesign their cars so they work "better" in muddy fields, just because that's my personal preference/metric? Or would I be stupid/ignorant for suggesting such a thing?

1. No.
1.1. Of course there would.
1.2. But apparently you don't want to live and operate in society, you want to dictate to society.
1.3. Firstly, crossfeed is NOT your idea and secondly, it's already part of history, not particularly successful and scientifically superseded. So you'd need a lot more than just luck, you'd need a worldwide marketing campaign based on fallacies/pseudoscience, celebrity endorsements, etc. And even then, you'd still be "denied that" here in this subforum!
1.4. Of course, do you really not know that?



ironmine said:


> [1] I don't understand it. If 24 bits (in 32bit float) are used to store an integer value, then why this 32bit float isn't an integer value?
> [2] Also, what difference does it make whether 32 bits are float or integer?
> [2a] Doesn't dithering have to be applied anyway, whenever you convert from a higher bit depth format to a lower bit depth format?



1. Because 8 of the other bits represent the exponent (and one is a sign bit), which effectively dictates where the "decimal point" should be located within the 24bit integer value. Together, the exponent and mantissa (significand) represent the "scientific notation" of the value.

2. 32bit fixed would give you 32bit precision, providing a dynamic range of ~192dB. 32bit float has an exponent value and therefore provides a much larger range of potential values, as Davesrose explained, resulting in a dynamic range of up to ~1528dB (but with 24bit precision). Of course in practice, both these dynamic ranges are nonsense; we're constrained by the laws of physics pertaining to analogue and acoustic signals, not to mention human hearing limitations.

There are however two practical benefits of 32bit float: Firstly, computer chips/CPUs are optimised for 32bit float and, commonly today, double precision float (64bit), so the 32bit float values are far more efficiently processed. Secondly, because it's a "float" you can theoretically exceed 0dBFS (in fact, up to +770dBFS). Of course, you can't actually output values much higher than 0dBFS, the DAC itself would clip, but with 32bit float you can have greater than 0dBFS values (and scale them down for output) without clipping. You can't do this with a fixed point format: once 0dBFS is reached, clipping effectively exists in the file/data, and scaling it down afterwards just gives you the same signal at a lower level, with the clipping distortion.
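A toy illustration of that headroom difference (plain NumPy, with full scale normalised to 1.0):

```python
import numpy as np

x = np.array([0.5, 1.4, -1.2])  # a mix whose peaks exceed 0 dBFS (>1.0)

# Fixed point clips at full scale; attenuating afterwards keeps the clipped shape
fixed = np.clip(x, -1.0, 1.0) * 0.5

# Float keeps the overs, so attenuating before output avoids clipping entirely
floating = x * 0.5

print(bool(np.allclose(floating, [0.25, 0.7, -0.6])))  # True: waveform preserved
print(bool(np.allclose(fixed, [0.25, 0.5, -0.5])))     # True: distortion baked in
```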
2a. Firstly, not necessarily. The point of dithering is effectively to convert (correlated) truncation error/distortion into uncorrelated white noise, because A. Truncation distortion is more objectionable than white noise (to our hearing perception) and B. As truncation distortion is correlated, it sums at +6dB rather than the +3dB of dither (uncorrelated white noise). This raises two obvious questions: A. What difference does it make if the truncation distortion or the white noise are both below audibility? And B. How many summing operations of signals with truncation error are required to make the distortion audible and therefore when should we dither instead? The answer to "A" is none: whether truncated or dithered to 16bit, both are inaudible. However, we're quite near the threshold of hearing and under certain circumstances (poor gain staging and very loud listening levels, for example) it might become audible, hence the standard practice of applying dither when reducing to 16bit, which is effectively a "just in case" safety precaution. Reducing down to 24bit though, the white noise or truncation distortion is another 48dB lower and way, way below audibility; it's not even reproducible because it's several times lower than the distortion/noise floor of even the lowest noise-floor DACs (let alone the noise floor of transducers). The answer to "B" is hundreds or thousands, bearing in mind that the truncation errors with 32bit float are theoretically occurring somewhere around -750dBFS.

Secondly, assuming that the 32bit float value is scaled for output, then you're not really converting from a higher bit depth to lower bit depth, the precision is effectively the same in both cases, 24bit. 
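As a toy illustration of the truncation-vs-dither point above (pure Python, parameters illustrative only): a tone below 1 LSB simply vanishes when plainly requantised, but survives as tone-plus-noise when TPDF dither is added before quantising.

```python
import math, random

def quantize(x, step):
    """Round x to the nearest multiple of step (plain requantisation, no dither)."""
    return round(x / step) * step

def quantize_tpdf(x, step):
    """Add triangular-PDF (TPDF) dither of +/-1 LSB peak, then quantise."""
    d = (random.random() - random.random()) * step
    return round((x + d) / step) * step

random.seed(0)
step = 1 / 2**15   # one 16bit LSB (full scale = +/-1.0)
sig = [0.4 * step * math.sin(2 * math.pi * k / 64) for k in range(4096)]  # sub-LSB tone

plain = [quantize(s, step) for s in sig]
dith = [quantize_tpdf(s, step) for s in sig]
print(max(abs(v) for v in plain))     # 0.0 -- the tone is entirely lost
print(any(v != 0.0 for v in dith))    # True -- the tone survives inside the noise
```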



ironmine said:


> [1] I don't care for such details. Mixing or mastering - it's all the same to me, as a listener. What matters is the final result.
> [2] As a long-term fan of heavy metal genres (from hard rock to grind metal), I can assure you that this awfulness is not intentional, it's a result of the loudness wars.
> [3] Previously, even Death Metal was not recorded as poorly as it is recorded now.
> [3a] Dynamics went from 10 dB to 8 dB, then to the current obnoxious 4-6 dB.



1. Sure, such details may not matter to you personally but recording, mixing and mastering are all separate processes and at least the last two (and sometimes all three) are typically performed in different studios by different personnel, with different equipment and different priorities (of both the equipment and the personnel). So it does matter in practice, although for the consumer only the final result matters. There's only a problem on websites like Head-fi, where we have all kinds of false assertions/beliefs about recording, mixing and mastering and of course marketers who propagate their own or take advantage of the existing false beliefs/assertions. Isn't the main reason this sub-forum exists to discuss the factual/scientific "details"?

2. Yes but obviously the mix or mastering engineer has intentionally applied that "awfulness" and others have intentionally approved it. So it is intentional, although in some/many cases the awfulness of undesirable amounts of over-compression is traded against the (perceived) awfulness of not being loud enough relative to other masters. This isn't always the case though, it's not uncommon that the musicians actually prefer the amounts of over-compression I personally consider undesirable and would have it that way regardless of the loudness relative to other masters.

3. In general, I would think Death Metal is recorded more or less as well now as it was previously. The mics used today are the same or better and the recording technology is better today, but it's more common nowadays to record in home/project studios rather than the big commercial studios (with more knowledgeable/experienced/skilled recording engineers), which is obviously more likely with less successful/known bands.
3a. That's due to the mixing, mastering or both, not the recording. And, "obnoxious" is a point of view, at least partially a generational thing, somewhat like the extreme guitar distortion people like Hendrix employed, which my mother thought was just "obnoxious noise" and I thought was rather hip/contemporary (at the time).

G


----------



## 71 dB

gregorio said:


> 1. And maybe I have not expressed myself clearly enough! Using the analogy again, where mastering is car design, a recording is a Ferrari and a muddy field is my personal metric, then your statement becomes:  "If you design a Ferrari for a race track/good roads (the primary target) that doesn't automatically mean the same Ferrari is well designed for a muddy field." - Does this statement seem reasonable to you or is it just ignorant/stupid? What if we put your assertion the other way around: Let's say I have a very well mastered binaural recording and decide to play that recording on speakers, it obviously doesn't work well, so would it be accurate/correct for me to assert that this binaural recording in fact isn't well mastered, it's poorly mastered? Or, would I just be ignorant/stupid?
> 2. I don't want you to stop using crossfeed, I don't care whether you use crossfeed or not and I'm not mastering a track/album just for you personally! Should Ferrari redesign their cars so they work "better" in muddy fields, just because that's my personal preference/metric? Or would I be stupid/ignorant for suggesting such a thing?
> 
> 3. No.
> ...


1. Now you are talking about what is stupid instead of seeking solutions. We have tractors for muddy fields and Ferraris for race tracks. That's the solution.
2. I call sound that works well for both speakers and headphones omnistereophonic. We should be talking if such stereophony is possible, not if it's stupid.
3. Well obviously, but I mean man-made stuff, of course. All of it is... well, man-made!
4. Man-made history… what else? History of dinosaurs? How relevant is that in audio? Dinosaur rock perhaps?
5. How is what I do dictating? Do they put you in prison if you don't obey me? No, you can totally ignore my words meaning I don't dictate anything. I suggest.
6. I have never said crossfeed is my invention. I have said many times I was very late in discovering the whole concept in 2012, when I started actually thinking about spatiality in headphone listening and realized there's a massive systemic problem with it. What I do is raise the question of music production practices and whether more emphasis could be given to avoiding excessive spatiality with headphones, and whether it was possible to develop production, mixing and mastering techniques that created omnistereophonic sound using the parameters of human spatial hearing cleverly. For example, instead of using amplitude panning (ILD) at low frequencies, ITD together with very little (a few decibels) ILD could be used to have a "wide" sound both on speakers and headphones without excessive spatiality. At high frequencies amplitude panning is ok. It's about creating spatiality cleverly, knowing people use speakers AND headphones. In fact I think modern pop music often is like this thanks to the tools we have today, but for some reason you are against this. Commercial success is commercial success and nothing else. Are the most watched movies always the best movies raved about by the critics? I don't think so. The lack of commercial success tells nothing about how much science supports crossfeed. In my opinion crossfeed does miracles to tackle the problem of excessive spatiality considering how simple it is. The things that have superseded it aren't as simple. If I need all that luck and all, then I can give up and live like a bum. Why bother trying anything? How do other people make it in life and why is this so difficult for me? My sister writes poems for a living. Without celebrity endorsements! It is so frustrating to try and have all this crap negativity as feedback.
7. How do I know if I am allowed or not? I thought everyone is allowed. In Finland at least people are equal by law.


----------



## bigshot

The solution for two channel spatiality is speakers in a room. The solution for convenience and low price is headphones. ...ne'er the twain shall meet.


----------



## gregorio (Nov 1, 2019)

71 dB said:


> 1. Now you are talking about what is stupid instead of seeking solutions. [1a] We have tractors for muddy fields and Ferraris for race tracks. That's the solution.
> 2. I call sound that works well for both speakers and headphones omnistereophonic.
> [2a] We should be talking if such stereophony is possible, not if it's stupid.
> 5. How is what I do dictating? ...[5a.] No, you can totally ignore my words meaning I don't dictate anything. I suggest.
> 7. How do I know if I am allowed or not? [7a] I thought everyone is allowed. In Finland at least people are equal by law.



1. You're the one talking stupid, I'm just pointing out how stupid.
1a. Exactly!! I would say you've got it now but somehow I doubt it. We do have tractors for muddy fields and Ferraris for race tracks (just like we've got binaural for headphones and stereo for home listening) AND yes, that IS the solution, so why are you advocating a different solution? Your solution is to put big chunky tyres on the Ferrari and falsely state that fixes all the problems but of course, that's "talking stupid"! It might give you better traction in some muddy fields but unless you ALSO sort out a range of other relevant factors, like ground clearance, gear ratios, etc., then all you've really got is something different (that performs differently to both a tractor and a Ferrari), that still doesn't work most of the time in muddy fields but maybe you personally prefer it. That's fine, but of course you don't get to dictate that preference to everyone else or state that your solution fixes all the problems! Furthermore, the actual point being made is that you don't get to call a Ferrari "poorly designed" because it doesn't work well in a situation that it's not designed for (such as a muddy field), as you say, that's why we have tractors! This is the same as you saying a recording mastered for speakers is "poorly mastered" if it doesn't work well on headphones, which is "stupid talk"!!

2. But that's effectively a nonsense term that you've just invented! What "_works well for both speakers and headphones_" is YOUR PERSONAL judgement, based on YOUR PERSONAL preferences/perception. "Omnistereophonic" is therefore a nonsense term (to everyone but you) because what it describes is your personal preference/perception, NOT an objective fact. Good god, how many times?
2a. Obviously therefore, this statement of yours is also nonsense. This is the Sound Science forum, not the "Let's talk about if it's possible to satisfy 71dB's personal preferences/perception" forum!

5. Asked and answered countless times! You are trying to dictate that your personal preferences/perceptions/judgements are objective facts, making up terms to describe those (false) facts and based on this fallacy, dictate/define what is good or bad mastering, how artists/engineers should be creating/producing recordings and that "_we should be talking if_" it's possible to satisfy your personal preferences/perceptions.
5a. No I can't! This is the sound science forum, we deal with the actual facts/science and refute false assertions of objective facts. How do you not know this, seeing as you often do this yourself (except on this subject)?

7. Common sense!
7a. Of course they're not, not even in Finland! Everyone is obviously not equal with say Einstein, who was a particular genius, invented an original idea/concept, backed it up with objective (mathematical) evidence and subsequent science/reliable evidence supported its predictions. What have you got? An idea that's 70 odd years old and isn't yours, that's based on your own personal preferences/perception (and a few facts cherry picked to agree with them) and that subsequent science has long dismissed in favour of far more sophisticated models! You're clearly not Einstein, nor a Messiah, and just wishing really hard that you were doesn't make it so! How many times?

G


----------



## sonitus mirus

The possibilities...


----------



## 71 dB

gregorio said:


> 1. You're the one talking stupid, I'm just pointing out how stupid.
> 1a. Exactly!! I would say you've got it now but somehow I doubt it. We do have tractors for muddy fields and Ferraris for race tracks (just like we've got binaural for headphones and stereo for home listening) AND yes, that IS the solution, so why are you advocating a different solution? Your solution is to put big chunky tyres on the Ferrari and falsely state that fixes all the problems but of course, that's "talking stupid"! It might give you better traction in some muddy fields but unless you ALSO sort out a range of other relevant factors, like ground clearance, gear ratios, etc., then all you've really got is something different (that performs differently to both a tractor and a Ferrari), that still doesn't work most of the time in muddy fields but maybe you personally prefer it. That's fine, but of course you don't get to dictate that preference to everyone else or state that your solution fixes all the problems! Furthermore, the actual point being made is that you don't get to call a Ferrari "poorly designed" because it doesn't work well in a situation that it's not designed for (such as a muddy field), as you say, that's why we have tractors! This is the same as you saying a recording mastered for speakers is "poorly mastered" if it doesn't work well on headphones, which is "stupid talk"!!
> 
> 2. But that's effectively a nonsense term that you've just invented! What "_works well for both speakers and headphones_" is YOUR PERSONAL judgement, based on YOUR PERSONAL preferences/perception. "Omnistereophonic" is therefore a nonsense term (to everyone but you) because what it describes is your personal preference/perception, NOT an objective fact. Good god, how many times?
> ...


1. You are of course entitled to think what I say is stupid, but somehow it's difficult for you to demonstrate it here, even using your status of someone who knows a lot. I admit one stupidity in me. When I came here I really assumed spatial hearing works the same way (apart from small details) for everyone with a functioning hearing system. My justification for this assumption is:

a) why wouldn't it be the same for everyone? what causes differences?
b) science has been able to establish the general principles of spatial hearing indicating consistency among people.

However, after 2 years of reading responses to my posts here I am starting to believe my assumption has been faulty. This is most surprising to me and really difficult to accept, but I have to take into account what people say. I may never understand why spatial hearing works differently for me than for other people like you. Anyway, I have now dropped that assumption and now I believe crossfeed is a working solution for SOME people, including me. Not ALL people. Hopefully you find that less stupid...

1a. This WOULD be the solution if music in general was published in two forms for speakers and headphones, but this isn't the case. Music is generally published in one form and that form usually favors speakers. The problem is not lack of solutions. The problem is the solutions are not used.

2. We might hear things differently, but we all hear in a subjective manner. Your perception isn't any more true than mine. So, where is the objective truth? What is the objective truth? I don't think you have been successful in addressing that. Most of the time you keep telling me I am wrong and how the terms I invent make no sense. Well, I can't help it if they don't make sense to you, but they make sense to me, that's why I invent them in the first place! I think the science of human spatial hearing is a good contender for some sort of objective truth and this works for me. Do you think the word "pretty" is nonsense? What is pretty for me may not be pretty for you, so doesn't that make the whole word nonsense? There is no objective truth about prettiness. It's all subjective. No, "pretty" makes a lot of sense. Language is full of subjective rather than objective expressions. When some dude says the girl next door is pretty you know what the dude thinks about her looks, even if you yourself disagreed and found her ugly. Similarly, when I say a recording is omnistereophonic, you know what I mean: It works well on speakers and headphones for me. Language is communication, exchange of ideas such as how omnistereophonic Genesis albums are. Sometimes it's used to state objective facts.

2a. If you want to be constructive, why don't you take what I say and analyse it? I am not the only person in the world using crossfeed, so clearly some other people have similar issues with "not-so-omnistereophonic/binaural" recordings. You act as if every recording ever was made to perfection as far as spatiality goes and any tinkering with it is detrimental. Many recordings in my music collection don't work that well even with speakers! Some recordings are better than others. They are not all perfect! Ping-pong stereophony is light years from perfection and I don't care if it's artistic intention or not! I have a few recordings that work very well both on speakers and headphones without crossfeed, so that's why I believe omnistereophony is possible. I also use the principles of omnistereophony while making my own music, so I know it seems to be possible. However, it requires that sound engineers learn away from the ways they have been creating spatiality and learn to do it smarter. Unfortunately some sound engineers are too full of themselves to even consider such a step, as demonstrated by you over and over again here.

5. Well, _whose_ personal preferences should we be satisfying here? The artist's, I guess, but if the artist has very different preferences than I have, maybe said artist isn't for me?

5a. I see very little science from you. You talk about Ferraris on muddy fields, how I dictate things to other people, artistic intent, how my terms make no sense etc., stuff that has little if anything to do with the science of human hearing. Our most "scientific exchange" here has perhaps gone like this:

Me: HRTFs give a good indication of what kind of ILD we should have.
You: Yeah, but you dismiss other parameters such as ITD.
Me: I dismiss them, because the other parameters are quite insignificant in this context.
You: That's your opinion, not fact.
Me: I have calculated the ITD change years ago. It is small and changes the apparent angle (which is all wrong with headphones anyway) of sound a little bit. Big deal.
You: ….. [I am kind of still waiting for your answer, but what I get is Ferraris on muddy fields]

Where are your calculations to show crossfeed doesn't work? All you have is your personal perception that it doesn't work. Where are your target values for ILD? How do you justify the huge difference in the spatial parameter space between speakers and headphones? Speakers in a room give about 0-3 dB at low frequencies, headphones give 0-30 dB. No big deal to you? Do both condone the artistic intention? Science deals with more or less exact things.
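For reference, the sort of back-of-envelope ITD calculation being argued about here can be sketched with the standard Woodworth spherical-head approximation (the 8.75 cm head radius and 343 m/s speed of sound are typical assumed values, not measurements of anyone's head):

```python
import math

def itd_woodworth(angle_deg, head_radius=0.0875, c=343.0):
    """Woodworth's spherical-head approximation: ITD = (r/c) * (theta + sin(theta))."""
    theta = math.radians(angle_deg)
    return (head_radius / c) * (theta + math.sin(theta))

# A source at the standard +/-30 degree stereo speaker angle:
print(round(itd_woodworth(30) * 1e6))   # 261 (microseconds)
```

i.e. a few hundred microseconds at most, which is the scale of "small" being debated.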

7a. Well, I don't ask for the same kind of recognition Einstein has, but it would be nice if everything I say wasn't called nonsense, and if people would understand why people do things like this even if they are not geniuses. I'm still a human being with needs. Einstein's theories are over 100 years old. Still valid to my knowledge. 70 year old ideas of crossfeed can be valid too, and as you know I admit more sophisticated methods have been developed since. I believe crossfeed is much better than nothing, because headphone spatiality is fundamentally wrong. As I wrote here recently, *it's about switching OFF the unintentional crosstalk canceling headphones have by default*.


----------



## bigshot (Nov 1, 2019)

71 dB said:


> I'm still a human being with needs.



You're looking for validation in the wrong place. This is just an internet forum. None of us have any responsibility for your well-being. You should spend less time typing up long-winded, self-aggrandizing and overly defensive posts and more time in the real world where things count. It always comes back to this because you can't help yourself. You step in the same pile over and over and over and you're surprised by it every time. That ain't Einstein.


----------



## kukkurovaca

thank goodness, I was worried the thread would become constructive


----------



## bigshot

It was for a few minutes there I suppose.


----------



## gregorio (Nov 2, 2019)

71 dB said:


> 1. You are of course entitled to think what I say is stupid, but somehow it's difficult for you to demonstrate it here, even using your status of someone who knows a lot. I admit one stupidity in me. When I came here I really assumed spatial hearing works the same way (apart from small details) for everyone with a functioning hearing system. My justification for this assumption is:
> a) why wouldn't it be the same for everyone? what causes differences?
> b) science has been able to establish the general principles of spatial hearing indicating consistency among people.
> [c] However, after 2 years of reading responses to my posts here I am starting to believe my assumption has been faulty. This is most surprising to me and really difficult to accept, but I have to take into account what people say. ... Anyway, I have now dropped that assumption and now I believe crossfeed is a working solution for SOME people, including me. Not ALL people. Hopefully you find that less stupid...
> ...



And again, round and round, using circular arguments based on misrepresentations, lies and self-contradictions, due to your own ignorance and delusions. For example:

1. If your stupidity hasn't been demonstrated, why in the very next sentence do you admit your stupidity? Although you've only admitted one stupidity, much of what you assert is dependent on that one stupidity and is therefore also stupid!! It's just all so ridiculous, it's funny!
a. Individual perception! Science has extensively studied perception and tells us a considerable amount about some of the processes involved but by no means all and science currently (and for at least the foreseeable future) cannot predict individual perception, except to state that it varies. Your question here (and many of your assertions) feign complete ignorance of this area of science and the large body of reliable evidence!
b. That's a lie! Science has established *SOME* (not all) of the general principles of spatial hearing under certain conditions. HOWEVER, science has ALSO established that when there is missing or conflicting information, human perception will alter or entirely replace what is being heard with its own "individual interpretation". Listening to commercial stereo recordings on headphones (with or without crossfeed) does NOT comply with those "certain conditions", there is BOTH missing AND conflicting spatial information and therefore, what is "heard" is very significantly reliant on individual perception/interpretation and is NOT "consistent among people". Your statement is a lie, rather than just ignorance, because this has been explained to you numerous times and you deliberately choose to ignore (or dismiss) this established science and instead cherry pick other established science which is inapplicable because its conditions are NOT met!
c. I would find it less stupid if you didn't then ENTIRELY CONTRADICT what you say you "now believe" by presenting your personal perception as objective fact. So, you leave me/us with no alternative but to "find that" even MORE stupid!

2. Again, that's ridiculous self-contradiction! You (correctly) state "_we all hear in a subjective manner_", then ask "_what/where is the objective truth?_" and then go on to (falsely) define the objective truth. So which is it, do you think we all hear in a subjective manner OR do you think there is an objective truth? It can't be both! Of course, there is a wealth of reliable evidence/science that answers this question but you choose to ignore it and instead quote established science which is inapplicable! How many times?
2a. I have and it's clearly nonsense because you are falsely conflating your personal perception with objective fact and supporting your opinion with inapplicable science!
2a1. That's a blatantly obvious lie because I've stated numerous times the exact opposite! Completely contrary to being "perfect spatiality", I've stated that virtually all commercial audio recordings of the last 70 odd years or more have messed-up/contradictory spatial information.
2a2. Perfect example! You stated "_when I say a recording is omnistereophonic, you know what I mean: It works well on speakers and headphones *for me*._" - Exactly, "FOR YOU", your personal perception/judgement. Therefore ...
2a3. You are stating that "engineers are required to learn away from the ways they have been creating spatiality and learn ways that satisfy your personal perception/judgement or else be too full of themselves and dumb"! You had the nerve to ask previously how you were being dictatorial AND, you're making it abundantly clear who is "too full of themselves" and dumb. You're clearly delusional!
5. Why are you asking that question, when you've already answered it? You've just stated that "we" (both engineers/artists and therefore all other consumers) should un-learn our dumb ways and be satisfying YOUR personal perception/judgement!

It's really beyond stupid now, it's embarrassing and rather pitiful. Why do you insist on continuing to do this to yourself?

G


----------



## 71 dB

bigshot said:


> You're looking for validation in the wrong place. This is just an internet forum. None of us have any responsibility for your well-being. You should spend less time typing up long-winded, self-aggrandizing and overly defensive posts and more time in the real world where things count. It always comes back to this because you can't help yourself. You step in the same pile over and over and over and you're surprised by it every time. That ain't Einstein.



You only know what I do here on this board. You don't know what I do in the "real world" because you are not part of my "real world." All I can say is it's hard for me to get validation in the "real world". In fact I am shocked how hard it is. We live in a superficial world where looks count. I am repulsive, so women are not interested, and other men don't take me seriously because I don't look like a man, but a boy. Online I have the benefit of not showing how I look. The only thing I have is a good functioning brain and high intelligence. However, people in my "real world" have been more accepting of crossfeed than people here. I have been surprised how anti-crossfeed people here are. People here seem to just want to spend thousands of dollars on the most expensive headphone models on the market and listen to them without crossfeed. I think that's a waste of money. I'd rather listen to $100 cans with crossfeed than $1000 cans without, but that's me and my weird ears...


----------



## 71 dB

gregorio said:


> That's a lie! Science has established *SOME* (not all) of the general principles of spatial hearing under certain conditions. HOWEVER, science has ALSO established that when there is missing or conflicting information, human perception will alter or entirely replace what is being heard with its own "individual interpretation". Listening to commercial stereo recordings on headphones (with or without crossfeed) does NOT comply with those "certain conditions", there is BOTH missing AND conflicting spatial information and therefore, what is "heard" is very significantly reliant on individual perception/interpretation and is NOT "consistent among people". Your statement is a lie, rather than just ignorance, because this has been explained to you numerous times and you deliberately choose to ignore (or dismiss) this established science and instead cherry pick other established science which is inapplicable because its conditions are NOT met!
> 
> G



If the perception is NOT "consistent among people", for _whose_ perception are recordings mixed? Doesn't that mean the whole concept of "good" spatiality is nonsensical, because any recording can be good for only some people, never for everyone? The more consistent spatial hearing is among people, the better the chances are to produce recordings that work well for everyone. When I make my own music, I create the spatiality (omnistereophony) with my own ears, so there is no guarantee it works for other people, but from what I have heard from my friend, he thinks I have good spatiality in my tracks.

Of course human perception will alter what is being heard with its own "individual interpretation" when there is missing or conflicting information! I think I have talked a lot about this, how crossfeed helps spatial hearing make sense of the not-so-perfect spatial information. Crossfeed adds one spatial cue of sound sources around us (both ears hear the sound) and reduces conflicting information by limiting ILD. My mistake has been to assume the way perception alters things is the same for every person.
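For the record, a minimal sketch of what a simple bs2b-style crossfeed does (my own illustration; the 0.3 ms delay, 700 Hz cutoff and 12 dB attenuation are illustrative values, not any particular implementation's):

```python
import math

def crossfeed(left, right, fs=44100, delay_ms=0.3, cutoff_hz=700, atten_db=12.0):
    """Mix a delayed, lowpass-filtered copy of each channel into the other,
    limiting ILD the way a simple bs2b-style crossfeed does (toy sketch)."""
    d = int(fs * delay_ms / 1000)                 # delay in samples
    g = 10 ** (-atten_db / 20)                    # -12dB -> ~0.25 linear gain
    a = math.exp(-2 * math.pi * cutoff_hz / fs)   # one-pole lowpass coefficient

    def lp(x):
        y, out = 0.0, []
        for v in x:
            y = (1 - a) * v + a * y
            out.append(y)
        return out

    fl, fr = lp(left), lp(right)
    out_l = [l + g * (fr[i - d] if i >= d else 0.0) for i, l in enumerate(left)]
    out_r = [r + g * (fl[i - d] if i >= d else 0.0) for i, r in enumerate(right)]
    return out_l, out_r

# A hard-panned 200 Hz tone (all left) gains a quiet, delayed image in the right ear:
left = [math.sin(2 * math.pi * 200 * t / 44100) for t in range(2000)]
right = [0.0] * 2000
out_l, out_r = crossfeed(left, right)
print(max(abs(v) for v in out_r) > 0)   # True -- the ILD is no longer infinite
```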


----------



## 71 dB

gregorio said:


> 2. Again, that's ridiculous self-contradiction! You (correctly) state "_we all hear in a subjective manner_", then ask "_what/where is the objective truth?_" and then go on to (falsely) define the objective truth. So which is it, do you think we all hear in a subjective manner OR do you think there is an objective truth? It can't be both! Of course, there is a wealth of reliable evidence/science that answers this question but you choose to ignore it and instead quote established science which is inapplicable! How many times?
> 
> G



I believe subjectivity is a construction living in objective truth, so we can't handle them similarly. They are different levels of reality. I believe the physical reality sets some objectivity, such as natural levels of ILD and ITD. These are more or less the same for everybody, because people's heads and bodies are pretty much the same size (John might have a 10 % larger head than Helen, but not 100 % or 500 % bigger), meaning the natural levels of ILD and ITD are quite similar. However, since some people like crossfeed and some people don't, some significant subjectivity plays a role here. Sometimes it seems as if people have different expectations of what crossfeed should be doing. Maybe I "like" crossfeed because I have realistic expectations and crossfeed can deliver those, while some other people have unrealistic expectations (headphones sounding identical to speakers, for example) and are disappointed when crossfeed doesn't give that.


----------



## 71 dB

gregorio said:


> 1. If your stupidity hasn't been demonstrated, why in the very next sentence do you admit your stupidity? Although you've only admitted one stupidity, much of what you assert is dependent on that one stupidity and is therefore also stupid!! It's just all so ridiculous, it's funny!
> 
> G



At least it's not unclear what you are doing here. You tell other people to not call other people idiots while calling other people stupid yourself. I stopped taking your words seriously so I don't even get angry anymore. I am calm. I have learned not to lose my temper over some internet bullies. At least I have the guts to admit my stupidity. That is the first step to learn and become wiser. When was the last time you did that? Or are you an all-knowing God? Your condescending style of communication suggests you think you are. My new take on crossfeed based on what I have learned from this board is:

_People hear differently and have different preferences for what they want from headphone sound. For some people crossfeed improves headphone sound by reducing the excessive spatiality some people find annoying/detrimental. I am one of those people, so I use crossfeed a lot._

Now, let's see what stupidity you find in that statement...


----------



## sander99

71 dB said:


> If the perception is NOT "consistent among people", for _whose_ perception are recordings mixed? Doesn't that mean the whole concept of "good" spatiality is nonsensical, because any recording can be good for only some people, never for everyone?


The problem of inconsistent perception is much bigger with headphones than with speakers because of the differences in HRTFs. Before the sound of speakers enters somebody's ears, it has been filtered according to his or her own personal HRTF, just like all other sounds occurring naturally in your environment. Music recordings produced for speakers and played back over speakers will therefore be perceived much more consistently by different people.
When music is played back over headphones, it skips most of your natural HRTF filtering. This differs from how natural sounds from your environment are perceived. Because this difference varies between individuals, it cannot be compensated for with one universal compensation, one universal crossfeed, or one universal anything.
However, binaural simulation over headphones based on your personal HRTF is possible and would be suited to listening to music produced for loudspeakers.
If a Smyth Realiser is too expensive you can use jaakkopasanen's free Impulcifer. You would only have to invest in in-ear mics and an audio interface. It doesn't do head tracking (yet), but it would still be better than using some universal crossfeed. Even if your crossfeed works for you, it won't work for most people because of different HRTFs. But everyone can do a personal measurement and get a good binaural simulation of loudspeakers over headphones, and that would be suitable for listening to the vast majority of all recorded music from history and today. Even if it is not 100% identical to listening to real speakers, it at least deals with the problem of different HRTFs, which account for most of the differences in perception.
You could consider binaural simulation of loudspeakers a very sophisticated crossfeed. And the fact that binaural simulation based on someone else's HRTF often does not sound convincing at all is just one example of crossfeed being perceived differently by different individuals.
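At its core, the binaural simulation described above is convolution: each speaker feed is convolved with the measured impulse response from that speaker to each ear, then summed per ear. A toy sketch of that structure (the two-tap "HRIRs" here are made up for illustration; real measured ones run to hundreds of taps per speaker/ear pair):

```python
def convolve(signal, ir):
    """Naive FIR convolution: what a binaural renderer does with each impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def render_one_ear(left_feed, right_feed, ir_from_left_spk, ir_from_right_spk):
    """One ear's signal: each speaker feed convolved with the impulse response
    from that speaker to this ear, then summed sample by sample."""
    a = convolve(left_feed, ir_from_left_spk)
    b = convolve(right_feed, ir_from_right_spk)
    return [x + y for x, y in zip(a, b)]

# Made-up two-tap "HRIRs" for the left ear: strong direct path from the left
# speaker, weaker and one sample later from the right (contralateral) speaker.
left_ear = render_one_ear([1.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                          ir_from_left_spk=[1.0, 0.0],
                          ir_from_right_spk=[0.0, 0.3])
print(left_ear)  # direct click, then the delayed, attenuated cross path
```

A plain crossfeed is the degenerate case of this: one fixed attenuated/delayed tap per contralateral path instead of a personally measured response.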


----------



## bigshot (Nov 2, 2019)

71 dB said:


> You only know what I do here on this board. You don't know what I do in "real world" because you are not part of my "real world." All I can say is it's hard for me to get validation in "real world".



I don't know about your real world, but I can tell you that the reason you aren't getting validation here is because you're doing it all wrong. Any kind of change would be better than digging your hole deeper for yourself here. I think you like this kind of attention.


----------



## 71 dB

bigshot said:


> I don't know about your real world, but I can tell you that the reason you aren't getting validation here is because you're doing it all wrong. Any kind of change would be better than digging your hole deeper for yourself here. I think you like this kind of attention.



Had I known two years ago what I know today, my start on this board would probably have been much smoother and people wouldn't say I'm doing it all wrong. Unfortunately I can't go back in time to undo the mistakes.


----------



## 71 dB

sander99 said:


> The problem of inconsistent perception is much bigger with headphones than with speakers because of the differences in HRTFs. Before the sound of speakers enters somebody's ears, it has been filtered according to his or her own personal HRTF, just like all other sounds occurring naturally in your environment. Music recordings produced for speakers and played back over speakers will therefore be perceived much more consistently by different people.
> When music is played back over headphones, it skips most of your natural HRTF filtering. This differs from how natural sounds from your environment are perceived. Because this difference varies between individuals, it cannot be compensated for with one universal compensation, one universal crossfeed, or one universal anything.
> However, binaural simulation over headphones based on your personal HRTF is possible and would be suited to listening to music produced for loudspeakers.
> If a Smyth Realiser is too expensive you can use jaakkopasanen's free Impulcifer. You would only have to invest in in-ear mics and an audio interface. It doesn't do head tracking (yet), but it would still be better than using some universal crossfeed. Even if your crossfeed works for you, it won't work for most people because of different HRTFs. But everyone can do a personal measurement and get a good binaural simulation of loudspeakers over headphones, and that would be suitable for listening to the vast majority of all recorded music from history and today. Even if it is not 100% identical to listening to real speakers, it at least deals with the problem of different HRTFs, which account for most of the differences in perception.
> You could consider binaural simulation of loudspeakers a very sophisticated crossfeed. And the fact that binaural simulation based on someone else's HRTF often does not sound convincing at all is just one example of crossfeed being perceived differently by different individuals.



I had this thought that since crossfeed is a very simple approximation of HRTF it works equally for everybody because it doesn't do the fine detail. Maybe I was wrong, I don't care anymore. Works for me. Why do I even care if it works for others?


----------



## ironmine

71 dB said:


> I had this thought that since crossfeed is a very simple approximation of HRTF it works equally for everybody because it doesn't do the fine detail. Maybe I was wrong, I don't care anymore. Works for me. Why do I even care if it works for others?



Just add Gregorio to Ignore list, and you won't be able to see his posts.
I've already done so.


----------



## Davesrose (Nov 2, 2019)

71 dB said:


> I had this thought that since crossfeed is a very simple approximation of HRTF it works equally for everybody because it doesn't do the fine detail. Maybe I was wrong, I don't care anymore. Works for me. Why do I even care if it works for others?



As others have indicated, HRTF varies much more between people with headphones than with speakers. Speakers are far enough away that they create a sound field all around us. Headphones, though, have drivers that are very close to our ears. The variation in human anatomy really comes into play for HRTF with headphones, which is why studies with target curves deal with averages of people. IMO, it also leads to some people preferring one brand over another (as our perception of frequencies is influenced by interactions with our own specific outer ears, the current tension of the middle ear, and the current physical state of the inner ear). It's another reason why one person might say one brand is more neutral than another, and another person would disagree. If I can be candid, it seems you're just beating yourself up and airing it on this thread... and I'm not sure why there have been pages and pages of you and Gregorio. I've previously stated that I don't find the crossfeed options I've tried to be approximations of "spatiality" in the sense of giving music a forward presentation. I've found surround DSPs don't really do that either, but they can produce convincing pans going from, say, 45 degrees to fully behind my head. Even when listening from my computer, I'm still not really convinced by filters, and I more enjoy the sound quality of direct USB to a Benchmark DAC, amp, and a high-quality headphone. My amp does have a crossfeed option for headphones and speakers, which sounds more fun on some tracks, but I leave it disabled most of the time. Dynamics for many of my preferred genres sound better without it... and I'd prefer sound quality even if I'm suspending some disbelief about the singer being further in front of me.

You keep coming back to this conclusion you had already reached, but then why should you even care about convincing other people? Why keep going round and round trying to convince this crowd of your ideas? You can try to make up validations that may or may not be popular... but much of what I've skimmed through is more about perceptions and theory than scientific consensus.


----------



## bigshot (Nov 2, 2019)

ironmine said:


> Just add Gregorio to Ignore list, and you won't be able to see his posts.
> I've already done so.



Mistake.



71 dB said:


> Had I known two years ago what I know today my start on this board would probably been much smoother and people wouldn't say I'm doing it all wrong. Unfortunately I can't go back in time to undo the mistakes.



If you start doing it right, you won't have these problems. You know the old saying about the definition of insanity... doing the same thing over and over and expecting different results.


----------



## Hifiearspeakers

bigshot said:


> Mistake.
> 
> 
> 
> If you start doing it right, you won't have these problems. You know the old saying about the definition of insanity... doing the same thing over and over and expecting different results.



The same could be said about your boy, Gregorio. Why does he keep responding and repeating himself over and over? In fact, he's worse, because all he does now is stoop to repeated ad hominem attacks, which really should be moderated. Gander, meet Goose.


----------



## gregorio

71 dB said:


> [1] If the perception is NOT "consistent among people", for _whose_ perception are recordings mixed? Doesn't that mean the whole concept of "good" spatiality is nonsensical, because any recording can be good for only some people, never for everyone?
> [2] The more consistent spatial hearing is among people, the better chances there are to produce recordings that work well for everyone.



1. You're joking right? Music recordings are mixed/mastered for a target audience and a target audience is never everyone, that's why different music genres exist!

2. That WOULD be true IF everyone preferred/judged music recordings on the same basis as you but of course they don't. Exceedingly few people judge/prefer music recordings purely on the basis of the spatial information and probably fewer still on one aspect of spatial information!



71 dB said:


> I believe subjectivity is a construction living in objective truth so we can't handle then similarly. They are different levels of reality. I believe the physical reality sets some objectivity ......



You say that you now believe people perceive things differently/subjectively, but then you completely contradict yourself by stating you believe that subjectivity is really objective because "physical reality sets some objectivity". You support your assertions with the established science of *some* basics of how we perceive physical reality, which is inapplicable because we're not dealing with physical reality: listening to commercial music recordings on headphones (with or without crossfeed) is about as far from physical reality as it's possible to get. So round and round you go endlessly, stuck in this loop of self-contradiction and inapplicable science, purely on the basis of your personal preference/perception and the fallacious explanation you've invented to explain it!

G


----------



## 71 dB (Nov 3, 2019)

gregorio said:


> 1. You're joking right? Music recordings are mixed/mastered for a target audience and a target audience is never everyone, that's why different music genres exist!
> 
> 2. That WOULD be true IF everyone preferred/judged music recordings on the same basis as you but of course they don't. Exceedingly few people judge/prefer music recordings purely on the basis of the spatial information and probably fewer still on one aspect of spatial information!
> 
> ...


1. Did I ask a relevant question you can't answer? It looks like you had to dodge it by first questioning the rationality of my question and then shifting from spatial perception to musical genres and target audiences. It's not as if all crossfeed lovers liked the same music genres. In my experience crossfeed works for all music genres I have tried it on. I'd keep musical genres and spatial perception apart.

2. Spatial information is only one of many things I judge recordings on, but it's the one relevant to this thread. I know very few people pay much attention to spatiality, but that doesn't mean I am crazy for doing so.

3. Objectivity is, roughly, what is the same for all people. To my knowledge, all people perceive sound as coming from the left if you play it only on the left channel. How far left, and how natural it sounds, is the subjective part. I had thought the objective part was so massive (over 90 %) that it overshadows the subjective part almost completely, apart from some rare cases such as HRTF where the subjective part is pronounced. Reading your and others' posts here makes me feel the objective part is smaller than 90 %, perhaps as low as 50 %, meaning the subjective part is quite relevant.

It seems it doesn't matter what I write here: you keep saying I "completely contradict myself." I almost expect you to say so by now; it's part of your personality, and it has very little effect on me anymore. It's your trademark to me. You say _"listening to commercial music recordings on headphones (with or without crossfeed) is about as far from physical reality as it's possible to get."_ I agree, but unlike you, apparently, I am thinking about how it can be brought closer to physical reality, if not precisely then at least to something which fools our spatial hearing into thinking it is. Crossfeed fools my spatial hearing so that "as far from physical reality as possible" becomes a kind of sonic virtual reality for me. It's not reality, but it shares some properties with reality, such as ILD levels. However, that's just me, and by now I have learned I am a weirdo so…


----------



## 71 dB (Nov 3, 2019)

bigshot said:


> If you start doing it right, you won't have these problems. You know the old saying about the definition of insanity... doing the same thing over and over and expecting different results.



Read my recent posts. Are you saying I am not "evolving" in the right direction? I think my attitude and tone have improved a lot, but I still feel I am attacked as much as in the past. It's frustrating. I avoid calling people names (idiot, ignorant, etc.). I accept crossfeed is not for everybody. What more do you want? Am I only allowed to write "Gregorio is a God and I agree with everything he says" to be told I am doing it right? What kind of discussion board is this? Is this a "worship your cult leader Gregorio" board?


----------



## 71 dB

ironmine said:


> Just add Gregorio to Ignore list, and you won't be able to see his posts.
> I've already done so.



I have thought about that, but the man clearly knows a lot and I would lose all that in the process. Unfortunately he is a bully who never admits other people can know/understand anything. His problem is that he lives in his professional bubble and protects that bubble at all costs. I believe that explains why he has become an internet bully. I have been trying to make him realize the situation, but as usual I have failed. My problem is I never have the skills and talent I need in life, so I fail.


----------



## 71 dB (Nov 3, 2019)

Davesrose said:


> 1. As others have indicated, HRTF varies much more between people with headphones than with speakers. Speakers are far enough away that they create a sound field all around us. Headphones, though, have drivers that are very close to our ears. The variation in human anatomy really comes into play for HRTF with headphones, which is why studies with target curves deal with averages of people.
> 
> 
> 
> ...



1. I think it's worse to have "wrong" spatial cues than to miss some spatial cues. A lack of spatial cues indicates a situation where the environment just doesn't generate spatial cues (an anechoic chamber) or the sound sources are very near the eardrums (in-ear headphones). Those situations may sound a bit unnatural (if you haven't got used to being inside anechoic chambers or using in-ear headphones), but they are nevertheless possible. Having the HRTF spatial information of another person, however, creates an "impossible" situation. Even in an anechoic chamber you can't hear with your own head what the HRTF of another person makes you hear. It's a kind of "I would hear this if my head/body shape were different, but it isn't" situation. It is no wonder people find this contradiction annoying. Normal headphone spatiality is just completely off, so wrong and fake that people (except us crossfeed users) don't care about the contradictions, but when you listen through another person's HRTF it sounds realistic and wrong at the same time. It's like a world where the sky is green and the grass is blue. So realistic, but also wrong!

2. Well, I have been a Sennheiser user and I don't really know how other brands sound. I'm big on brand loyalty even when it's irrational. I just like some brands for whatever reason, from performance to what the logo looks like. I know it's irrational, but life would be quite robotic and boring without some human irrationality such as brand loyalty. Other people are loyal to sports teams; I am not into sports, so I am loyal to brands.

3. I have realized that people expect different things from crossfeed. I NEVER expected crossfeed to sound like loudspeakers. I only expected crossfeed to deal with excessive spatiality and to scale the ILD levels to be similar to what I hear with speakers. However, I found out crossfeed doesn't just achieve that: for me it allows my spatial hearing to be fooled so that I get a miniature soundstage of better or worse quality, depending on the quality of the spatial information in the recording. So crossfeed gave me more than I expected, because my expectations were realistic. For me, large ILD creates an annoying, almost ear-tickling sensation of my head "vibrating", and getting rid of that is like opening an umbrella in the rain. When the ILD level is scaled down, the sound calms down and steps back from my ears. The music seems to play on top of silence rather than in a noisy place, and the stereo image, which without crossfeed is fractured all over the place for me, gets "re-assembled". The bass frequencies sound natural to me when crossfed, while without it they sound "fake".
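The ILD scaling can be put in numbers. A minimal sketch, assuming a crossfeed that does nothing but add the opposite channel attenuated by 12 dB (the delay and lowpass filtering of real crossfeed implementations are omitted here): even a fully hard-panned signal then comes out with a finite ILD equal to the feed attenuation.

```python
import math

def ild_after_crossfeed_db(left: float, right: float, feed_db: float = -12.0) -> float:
    """ILD in dB between the two output channels after mixing in the opposite
    channel attenuated by feed_db (delay and lowpass of real crossfeed omitted)."""
    g = 10 ** (feed_db / 20)      # linear gain of the fed-across signal
    left_out = left + g * right
    right_out = right + g * left
    return 20 * math.log10(left_out / right_out)

# A hard-panned signal has infinite ILD going in; the feed level caps it coming out.
print(f"{ild_after_crossfeed_db(1.0, 0.0):.1f} dB")
```

So with a -12 dB feed, no source can present more than 12 dB of ILD at the ears, which is the "scaling down" effect described above; less extreme pans are compressed proportionally less.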

I have never experienced a reduction in dynamics with crossfeed. Crossfeed does make music more calm/relaxed for me, but it makes everything in the music calmer in the same way, so that the dynamic variations stay the same. The music seems to have the same emotional impact, but in a calmer format, so that listening fatigue is reduced. I believe the longer you listen to something, the more the benefits of crossfeed are experienced. Any reduction in dynamics due to crossfeed should also happen with speakers due to acoustic crossfeed (and even more reduction due to early reflections and reverberation), but since most people don't have a crosstalk-cancelling option with speakers, they can't find out what the acoustic crossfeed actually does to the sound.

I don't really try to convince other people anymore. I accept that people do what they prefer, and that's it. Now I am merely describing what crossfeed does for me so other people can compare their experiences to mine.


----------



## Davesrose (Nov 3, 2019)

71 dB said:


> 1. I think it's worse to have "wrong" spatial cues than to miss some spatial cues. A lack of spatial cues indicates a situation where the environment just doesn't generate spatial cues (an anechoic chamber) or the sound sources are very near the eardrums (in-ear headphones). Those situations may sound a bit unnatural (if you haven't got used to being inside anechoic chambers or using in-ear headphones), but they are nevertheless possible. Having the HRTF spatial information of another person, however, creates an "impossible" situation. Even in an anechoic chamber you can't hear with your own head what the HRTF of another person makes you hear. It's a kind of "I would hear this if my head/body shape were different, but it isn't" situation. It is no wonder people find this contradiction annoying. Normal headphone spatiality is just completely off, so wrong and fake that people (except us crossfeed users) don't care about the contradictions, but when you listen through another person's HRTF it sounds realistic and wrong at the same time. It's like a world where the sky is green and the grass is blue. So realistic, but also wrong!



I've never viewed the soundstage from headphones as being so artificial that it's like the grass being blue. The main feeling I've gotten from "narrow soundstage" headphones is the sound seeming to be inside my head. The first crossfeed filter I tried was in the first headphone amp I bought: a HeadRoom amp. I swore I couldn't hear any difference with it on or off. Recently, the most drastic DSP I've heard that affects FR is Dolby Atmos on my computer: it seems to increase bass through headphones. With the widest-soundstage headphones I've heard, I might hear imaging that puts the center sound at the top of my forehead; with no filter or headphone have I ever heard a soundstage where the center sound is far forward. But that's OK, as "audiophile" headphones can create lively dynamics and tonality.

From what I've gathered from your posts, you have certain set formulas for filters. You're also acknowledging now that HRTF can differ between individuals. It seems to me these are the main issues that are incompatible. Differences in human anatomy don't just affect spatiality, but tonal perception as well. Again, these are the reasons why brands come up with sound signatures derived from target curves based on averages of many people. When it comes to individuality, the best surround processor I've heard for headphones is a Sennheiser Dolby Pro Logic module. It also has a parametric setting that you dial in for your own ears. When set properly, it gave a surround effect where I could hear a soundscape going fully to the back of my head.


----------



## 71 dB

Davesrose said:


> I've never viewed the soundstage from headphones as being so artificial that it's like the grass being blue. The main feeling I've gotten from "narrow soundstage" headphones is the sound seeming to be inside my head. The first crossfeed filter I tried was in the first headphone amp I bought: a HeadRoom amp. I swore I couldn't hear any difference with it on or off. Recently, the most drastic DSP I've heard that affects FR is Dolby Atmos on my computer: it seems to increase bass through headphones. With the widest-soundstage headphones I've heard, I might hear imaging that puts the center sound at the top of my forehead; with no filter or headphone have I ever heard a soundstage where the center sound is far forward. But that's OK, as "audiophile" headphones can create lively dynamics and tonality.
> 
> From what I've gathered from your posts, you have certain set formulas for filters. You're also acknowledging now that HRTF can differ between individuals. It seems to me these are the main issues that are incompatible. Differences in human anatomy don't just affect spatiality, but tonal perception as well. Again, these are the reasons why brands come up with sound signatures derived from target curves based on averages of many people. When it comes to individuality, the best surround processor I've heard for headphones is a Sennheiser Dolby Pro Logic module. It also has a parametric setting that you dial in for your own ears. When set properly, it gave a surround effect where I could hear a soundscape going fully to the back of my head.



I have been thinking about this "center sound seems to come from up/above" claim that is so common here. For sounds coming from the front, the shape of the pinna creates a pretty deep and narrow notch in the frequency response around 7-9 kHz, because at these frequencies the sound reflected from the pinna is out of phase with the direct sound and they cancel each other out almost completely, creating the notch. Because the pinna is asymmetric, the position of the notch is used by spatial hearing as one cue for the elevation of the sound: a notch around 7 kHz indicates sound coming from below, 8 kHz from ahead on the horizontal plane, and 9 kHz from above. As over-ear headphones sit around the pinna, reflections happen from all parts of the pinna, and spatial hearing seems to favor the 9 kHz notch, concluding that the sound comes from above. Maybe installing the drivers lower inside the ear cups would remove this effect? What about the possibility of fine-tuning the placement and angle of the drivers for your ears? Kind of like what people do with speaker placement to optimize it in a room.
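The geometry behind those notch frequencies can be checked with a simplified single-reflection model (an assumption for illustration, not a full pinna simulation): the first cancellation occurs when the reflected path is half a wavelength longer than the direct path, i.e. Δd = c/(2f). Millimetre-scale changes in that path difference are enough to move the notch across the whole 7-9 kHz elevation range:

```python
# Simplified single-reflection model of the pinna notch: direct sound plus a
# reflection delta_d metres longer cancel when delta_d is half a wavelength,
# so the first notch sits at f = c / (2 * delta_d).
C = 343.0  # speed of sound in air, m/s

def notch_path_difference_mm(notch_hz: float) -> float:
    """Extra path length (mm) of the reflection that puts the first notch at notch_hz."""
    return C / (2 * notch_hz) * 1000

for f in (7000, 8000, 9000):
    print(f"{f // 1000} kHz notch -> path difference {notch_path_difference_mm(f):.1f} mm")
```

The 7, 8, and 9 kHz cues correspond to path differences of roughly 24.5, 21.4, and 19.1 mm, so a few millimetres of driver offset relative to the pinna is indeed enough to shift the perceived elevation cue.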


----------



## Davesrose

71 dB said:


> I have been thinking about this "center sound seems to come from up/above" claim that is so common here. For sounds coming from the front, the shape of the pinna creates a pretty deep and narrow notch in the frequency response around 7-9 kHz, because at these frequencies the sound reflected from the pinna is out of phase with the direct sound and they cancel each other out almost completely, creating the notch. Because the pinna is asymmetric, the position of the notch is used by spatial hearing as one cue for the elevation of the sound: a notch around 7 kHz indicates sound coming from below, 8 kHz from ahead on the horizontal plane, and 9 kHz from above. As over-ear headphones sit around the pinna, reflections happen from all parts of the pinna, and spatial hearing seems to favor the 9 kHz notch, concluding that the sound comes from above. Maybe installing the drivers lower inside the ear cups would remove this effect? What about the possibility of fine-tuning the placement and angle of the drivers for your ears? Kind of like what people do with speaker placement to optimize it in a room.



I'm not sure how practical it would be to have a lever that moves the driver up and down: brands have particular housings and padding for set drivers, as well as a particular angle to the pinna. I would assume brands keep the driver centered toward the ear canal to maximize FR. Also, pinnae can vary quite widely in size and shape, so I doubt you could make assumptions about absolute 1 kHz increments at 7, 8, or 9 kHz.


----------



## ironmine

71 dB said:


> Any reduction in dynamics due to crossfeed



Crossfeed does not reduce dynamics. Actually, it does the opposite: it increases the dynamic range of music. I measured audio tracks with a Dynamic Range Meter before and after crossfeed processing, and I saw that after such processing the dynamic range increased by 1-2 dB.
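For anyone wanting to reproduce this kind of before/after check, here is a rough sketch. It uses peak-to-RMS crest factor as a crude stand-in for the DR meter (not the same algorithm) and synthetic noise instead of music, so the numbers it prints say nothing about real tracks; the point is only the measurement structure.

```python
import math
import random

def crest_factor_db(samples) -> float:
    """Peak-to-RMS ratio in dB: a crude stand-in for a dynamic range meter."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

random.seed(0)
# Toy stereo "track": uncorrelated noise; a real test would load actual music.
left = [random.uniform(-1, 1) for _ in range(10_000)]
right = [random.uniform(-1, 1) for _ in range(10_000)]

g = 10 ** (-12 / 20)  # -12 dB feed; delay and lowpass omitted for brevity
left_crossfed = [l + g * r for l, r in zip(left, right)]

print(f"before: {crest_factor_db(left):.2f} dB  after: {crest_factor_db(left_crossfed):.2f} dB")
```

Whether the metric moves up or down depends on how correlated the two channels are, which is exactly why measurements on real material (as reported above) are the interesting part.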


----------



## ironmine

71 dB said:


> I have been thinking about this "center sound seems to come from up/above" claim that is so common here. For sounds coming from the front, the shape of the pinna creates a pretty deep and narrow notch in the frequency response around 7-9 kHz, because at these frequencies the sound reflected from the pinna is out of phase with the direct sound and they cancel each other out almost completely, creating the notch. Because the pinna is asymmetric, the position of the notch is used by spatial hearing as one cue for the elevation of the sound: a notch around 7 kHz indicates sound coming from below, 8 kHz from ahead on the horizontal plane, and 9 kHz from above. As over-ear headphones sit around the pinna, reflections happen from all parts of the pinna, and spatial hearing seems to favor the 9 kHz notch, concluding that the sound comes from above. Maybe installing the drivers lower inside the ear cups would remove this effect? What about the possibility of fine-tuning the placement and angle of the drivers for your ears? Kind of like what people do with speaker placement to optimize it in a room.



I also experience this feeling of the sound coming from up above when I try ToneBoosters Isone. Otherwise, I like this plugin: even though it's not as transparent as 112dB Redline Monitor, it sounds quite realistic, and the feeling of being there is very strong. But this feature (sound coming from above) is annoying and ruins the pleasure.


----------



## Davesrose

ironmine said:


> Crossfeed does not reduce dynamics. Actually, it does the opposite thing: it increases the dynamic range of music. I measured audio tracks with Dynamic Range Meter before crossfeed processing and after, and I saw that after such processing the dynamic range increased by 1-2 dB.



So you believe every single implementation of crossfeed yields the same results as your measurement?


----------



## 71 dB

ironmine said:


> Crossfeed does not reduce dynamics. Actually, it does the opposite thing: it increases the dynamic range of music. I measured audio tracks with Dynamic Range Meter before crossfeed processing and after, and I saw that after such processing the dynamic range increased by 1-2 dB.



Maybe this is because the quieter parts of music are often decaying reverberation, which has statistically different spatiality than the louder parts, which are dominated by direct sound? Whatever the reason behind this result, it is not dramatic and, according to your measurements, not in the direction of reduced dynamics…


----------



## castleofargh

Hifiearspeakers said:


> The same could be said about your boy, Gregorio. Why does he keep responding and repeating himself over and over? In fact, he’s worse, because all he does now is stoop to repeated ad hominem attacks, which really should be moderated. Gander meet Goose.


me, me! I have that one! it's like a house being on fire and people around it saying "some firefighters should really take care of this", but nobody calls them.
I don't know how many people still read everything posted (so brave), but I'm not one of them. right now I happened to read your post because it was a short one and you're not one of the regular actors of this boring play (true story). most other mods don't even want to touch this subsection unless they're called to it by a reported post. so if you believe something requires moderation, report the post. if we don't see a report, it's the tree falling in the middle of the forest.
reading this topic is like being caught in Groundhog Day. I kept trying for so long because I thought it was a matter of miscommunication. but by now it's clear that @71 dB is ready to play all the cards (including self-pity, and "if you're against me, you're anti-crossfeed"). he has zero interest in the facts. those he acknowledges, he dismisses the next second because they don't matter much to him subjectively (the famous objective-approach trick...).

gregorio repeats himself because he's the type of dude who believes that truth is the most important thing there is (he really is that type of unicorn). that doesn't mean he's always right; he's still human. but it does mean he gets very, very mad when he sees somebody piss all over facts and make up pseudoscience. it's been the same logical fallacies and disregard for facts for maybe 2 years now, so I do find extenuating circumstances when gregorio forgets the rules of the forum from time to time. this crap is reaching flat-earther cult nonsense by now, and if stupid is probably never fine on this forum, ludicrous certainly comes to mind.
at the same time, @gregorio : the rules are still the same for you and everybody else. respect them, or don't post until you're calm enough to respect them. it does work, I do it all the time: I hit reply, start unleashing hell onto some dude who IMO fully deserves it, realize it's not suited to this particular carebear forum, and stop myself from posting. if I can do it, so can you.


----------



## Hifiearspeakers

castleofargh said:


> me, me! I have that one! it's like if a house is on fire and people around it say "some firefighters should really take care of this", but nobody calls them.
> I don't know how many people still read everything posted(so brave), but I'm not one of them. right now I happened to read your post because it was a short one and you're not one of the regular actors of this boring play(true story). most other modos don't even want to touch this subsection if they're not called for it by a reported post. so if you believe something requires moderation, report the post. if we don't see a report, it's the tree falling in the middle of the forest.
> reading this topic is like being caught in Groundhog Day. I kept trying for so long as I thought it was a matter of miscommunication. but by now it's clear that @71 dB is ready to play all the cards(including self pity, and "if you're against me, you're anti-crossfeed"). he has zero interest in the facts. those he acknowledges, he dismisses the next second because they don't matter much to him subjectively(famous objective approach trick...).
> 
> ...



Great reply! And I concur with all of it. But I wasn’t actually calling for him to get flagged by a moderator, because I really don’t care. They’ve been at this for years so it’s just trite now. All I really wanted was to point out the hypocrisy of it all.


----------



## castleofargh

71 dB said:


> I have been thinking about this "center sound seems to come from up/above" claim that is so common here. For sounds coming from the front, pinna shape creates a pretty deep and narrow notch in the frequency response around 7-9 kHz, because at these frequencies the sound reflected from the pinna is out of phase with the direct sound and they cancel each other out almost completely, creating the notch. Because the pinna is asymmetric, the place of the notch is used by spatial hearing as one spatial cue for the altitude of the sound. A notch around 7 kHz indicates sound coming from below, 8 kHz from ahead on the horizontal plane and 9 kHz from above. As over ear headphones are around the pinna, reflection happens from all parts of the pinna and spatial hearing seems to favor the 9 kHz notch and concludes the sound is coming from above. Maybe installing the drivers lower inside the ear cups would remove this effect? What about the possibility to fine-tune the placement and angle of the drivers for your ears? Kind of the same as people do with speaker placement to optimize them in a room.




you need to extrapolate a little from what he says because he's talking in the context of binaural recording, but at least in my case, his point about frontal localization being set by EQ worked very well(he has another video on how he's doing it step by step, but it's basically finding the EQ by doing an equal loudness curve of one speaker right in front of him and then the equal loudness contour of his headphone/IEM).
I specify that it works great for me, because contrary to what he seems to suggest, it's not a sure thing. there is a non negligible portion of the population that will never get a proper(or even subjectively realistic) frontal localization for mono sounds on headphones. I'm not too sure about the cause, but it's been demonstrated in a few studies(the only reason I even know about it). one of my guesses would be that those people's brains rely on sight even more than the average guy(who already uses sight as the dominant reference), so maybe if a sound source isn't visible, the brain for those guys will simply refuse the possibility that the sound source is somewhere in front.

but for the rest of the population that can imagine a virtual sound source in front, then a mono signal has no localization cues except for elevation from torso, floor, and outer ear reflections. outer ear boost at a different frequency depending on the incoming vertical angle is logically the most important cue as it's the only one that follows head movements. that's a well known mechanism(even if I had no clue until maybe 2 years ago) and there are some funny videos with people who put plasticine or whatever to fill the outer ear and then have to guess where a sound is coming from with their eyes closed. it clearly isn't the only reason why people don't place mono right in front all the time, but it's probably the most common.
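For what it's worth, the pinna-notch idea from the quoted post is easy to play with in code. A minimal sketch (assuming numpy/scipy; the notch frequencies and the Q value are illustrative choices, not measured pinna data):

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

FS = 44100  # sample rate, Hz

def apply_elevation_notch(x, notch_hz, q=8.0, fs=FS):
    """Carve a narrow notch into the signal. Per the quoted post,
    a notch near 7 kHz reads as 'below', 8 kHz as 'ahead', 9 kHz as
    'above' (a rough single-cue model, ignoring all other cues)."""
    b, a = iirnotch(notch_hz, q, fs=fs)
    return lfilter(b, a, x)

# one second of white noise as a test signal
rng = np.random.default_rng(0)
noise = rng.standard_normal(FS)
ahead = apply_elevation_notch(noise, 8000.0)  # 'frontal' cue
above = apply_elevation_notch(noise, 9000.0)  # 'elevated' cue
```

Listening to `ahead` vs `above` over headphones gives a feel for how small a spectral change this cue really is, and why headphone cups smearing the pinna reflections could plausibly shift it.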


----------



## 71 dB

castleofargh said:


> By now it's clear that @71 dB is ready to play all the cards(including self pity, and "if you're against me, you're anti-crossfeed"). *he has zero interest in the facts*. those he acknowledges, he dismisses the next second because they don't matter much to him subjectively(famous objective approach trick...).



I don't get this claim at all. Zero interest in the facts? What? My disagreements with other people are not about ignoring facts. It's about what is objective and what is subjective. It's about terminology, semantics. If I have zero interest in facts when I talk about only ILD with crossfeed (while admitting other spatial parameters exist, but are in my opinion insignificant in this context) then audio engineers using amplitude panning are AS DAMN IGNORANT as I am, because amplitude panning is ILD (and not even frequency dependent) and nothing more! Just as audio engineers admit amplitude panning doesn't equal binaural panning, I admit crossfeed doesn't make binaural recordings. I have learned here that spatial hearing is more subjective than I had previously thought, and such learning is a sign of interest in the facts! So, where does this zero interest in the facts come from? Name one fact (relevant to the topic) I am not interested in.
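The "amplitude panning is ILD and nothing more" point can be illustrated with a quick calculation. A sketch assuming the standard constant-power sin/cos pan law (the specific law is my assumption, not something stated in the thread):

```python
import math

def constant_power_gains(pan):
    """pan in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    Constant-power (sin/cos) pan law: gl^2 + gr^2 == 1."""
    theta = (pan + 1) * math.pi / 4  # maps [-1, 1] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)  # (gain_left, gain_right)

def ild_db(pan):
    """Level difference the pan pot creates between the channels.
    It is identical at every frequency, i.e. not frequency dependent.
    Hard pans (+/-1) are excluded: one gain is 0, the ratio degenerates."""
    gl, gr = constant_power_gains(pan)
    return 20 * math.log10(gl / gr)

print(round(ild_db(0.5), 2))  # half-right pan: left channel ~7.7 dB quieter
```

A real head's ILD, by contrast, varies strongly with frequency, which is exactly the gap the rest of this argument is about.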


----------



## ironmine

Does anybody know when I can buy these microphones?


----------



## castleofargh

71 dB said:


> I don't get this claim at all. Zero interest in the facts? What? My disagreements with other people are not about ignoring facts. It's about what is objective and what is subjective. It's about terminology, semantics. If I have zero interest in facts when I talk about only ILD with crossfeed (while admitting other spatial parameters exist, but are in my opinion insignificant in this context) then audio engineers using amplitude panning are AS DAMN IGNORANT as I am, because amplitude panning is ILD (and not even frequency dependent) and nothing more! Just as audio engineers admit amplitude panning doesn't equal binaural panning, I admit crossfeed doesn't make binaural recordings. I have learned here that spatial hearing is more subjective than I had previously thought, and such learning is a sign of interest in the facts! So, where does this zero interest in the facts come from? Name one fact (relevant to the topic) I am not interested in.



the perception of a position(real or virtual) is directly dependent on the listener = subjective
when you use crossfeed, you should set it up for your own head = subjective
when different people get to try some crossfeed system, they get different impressions, some find that convincing or preferable, some do not. in fact AFAIK a majority of people who try, do not = subjective
most people prefer speaker playback, but some actually prefer headphone playback = subjective
you decide that a certain impression of placement is an improvement based entirely on your own preferences = subjective
you cherry pick the variables you consider most relevant in a speaker simulation, and dismiss variables that are definitely involved in speaker playback, all based on your personal opinion, convenience, and impressions = subjective
you declare that some approximation of the EQ to apply as "ILD" is good enough, not based on a study, not based on trials, but based on how you feel about those approximations = subjective

all that and more have been explained at length, many times, yet the next day you're back with some BS concept of improved spatiality from using crossfeed based on objective ideas, even though the very concept of a perceived space is a subjective interpretation of sound and other senses.
yes, semantics is a problem, objective stuff is objective, a listener's ILD is something specific, not a one band filter. calling that ILD is a mistake, and of course panning is not ILD either, that's part of what gregorio has been saying all along. as is the fact that you rely on references for albums that didn't care for them from the start. there was no correct spatiality, only subjective trickery that hopes to feel nice.
all along you've been mistaking an objective approach with pseudo science IMO. when I look at a bird flapping its wings to take off, and I try to move my arms like it does, should that count as an objective approach? is it normal for me to declare that I've made an objective improvement toward flying? I'm of the opinion that an objective approach would also consider the other relevant variables related to a bird flying and not just stick 3 feathers on each arm and go "yet another clear objective improvement!". for each variable that you dismiss as less important, or maybe important but we're still making progress without it, or that should be a certain way but you can't measure so you stick to the approximation you consider good enough because you've never tried better, etc, you have no research to tell you the psychological impact of those decisions about missing or inaccurate variables. all you do is guess or rely on your personal impressions(which for the last time is not how objective approaches work). to answer those questions we'd need studies that will probably never exist, because there is a lot to research and crossfeed is clearly outdated compared to what is explored today.
so from there you have 2 options: stick to having 3 feathers on your arms and call it an objective improvement toward flight. in some disturbed way I can accept that point of view, even if it won't give us a take off anytime soon and everybody knows it. or actually care about having conclusive evidence before making claims of objective anything or improved anything for everybody. and simply admit that you, and I, and at his own level, gregorio, don't have definitive answers for any of this, beyond saying that it is not reality, it is not default headphone playback, it is not binaural, and it is not virtual speakers on headphones. so whatever conclusive facts we have on those models that we know very well(virtual speakers not so well yet) may or may not apply strictly to crossfeed playback. once we're there, how one feels is... you've guessed it, subjective!
then you can go around and if a majority of people start saying that crossfeed improved their experience subjectively, then we'll be able to conclude that crossfeed is a subjective improvement for headphone playback in general. but even that isn't in line with the facts as in practice only a minority of people stick with crossfeed and even among those, many only want it for specific tracks/albums. so objective or subjective, your claims of improvements should have been strictly limited to being your personal impression and opinion, instead of you getting mad when we didn't accept the claims of improvement as a fact. 

I never found that crossfeed was doing anything for me "spatially". if I switch it ON and OFF, sure, but nobody does that all the time. in my case(maybe others'? IDK) after a few minutes of using it, the instruments are back to where they were without it in my mind(or very close, I couldn't tell the difference). to the point that I often don't know if crossfeed is ON or not after a while. such a great and obvious improvement that I need to check if it's ON... not a good sign. I loved crossfeed because over a long listening session(like a long travel), I felt that it was less tiring for me/my ears. I know some people feel the same, and some don't. and we're back to subjectivity. did we ever really leave? I don't think so.


as to your request, I agree that saying you have zero interest in the facts is wrong. would you find it more accurate if I said that you're interested in the facts and then find excuses to ignore them anyway if they don't serve your crossfeed overlord? 
because once more, I've never seen you act that way for any other topics.


----------



## gregorio

castleofargh said:


> [1] you need to extrapolate a little from what he says because he's talking in the context of binaural recording, ...
> [2] one of my guesses would be that those people's brains rely on sight even more than the average guy(who already uses sight as the dominant reference), so maybe if a sound source isn't visible, the brain for those guys will simply refuse the possibility that the sound source is somewhere in front.



1. I agree, but we also have to be careful with any "extrapolation" because "the context of binaural recording" is different to the context of non-binaural commercial stereo recordings in TWO regards. Firstly (and obviously), binaural recordings record the sound which enters the ears (although in his case, the sound which actually impacts the ear drums), thereby including a HRTF, which obviously a standard stereo recording doesn't. Secondly, the sound being recorded is a single aural perspective of a single actual/real acoustic environment (a concert hall). This also is not the case with commercial standard stereo recordings, which either contain multiple different aural perspectives of a single real acoustic environment (in the case of classical and other acoustic genres) or multiple different aural perspectives of multiple different acoustic environments (in the case of popular/non-acoustic genres), neither of which can exist in reality. So, applying any sort of HRTF, even a theoretical, fully characterised HRTF, to commercial standard stereo music recordings is still always going to rely on an individual's perceptual interpretation.

2. Certainly, it is well known (and science has clearly demonstrated) that sight is a very significant/dominant sensory input used by the brain in the construction of the perception of sound/hearing. Obviously, with headphones or speakers we have a conflict of sensory input. Even ignoring the conflicting acoustic information in commercial music recordings, we would be hearing say a concert hall but seeing our sitting/listening room. For this reason, some people find it beneficial to close their eyes and eliminate this conflicting visual information. However, this is only beneficial (a perceived improvement) for some, for others, their brain still knows that they're in their sitting room even though they can't see it and this "knowledge" ingredient in the construction of perception is still enough (for some) to dominate or at least affect their perception. In other words, even if standard stereo recordings were a single aural perspective of a single/real acoustic environment and we applied a theoretically perfect set of HRTFs (for each individual), still it wouldn't work for everyone, although most likely it would work for more (nearly everyone).



71 dB said:


> Name one fact (relevant to the topic) I am not interested in.



Ah, that's where we're going wrong, I've been naming several/many relevant facts you're not interested in, rather than just one!

G


----------



## ironmine (Nov 4, 2019)

I try to use this website http://recherche.ircam.fr/equipes/salles/listen/index.html for making an individual crossfeed for myself.

I listened to demo sounds and found that #1012 model gives me a very realistic sound: when the sound is supposed to be passing from left to right in front of me, I really hear that it passes in front of me. With other model heads, the sound at this moment tends to go up and then down.

The description here is very confusing:
"azimuth in degrees (3 digits, from 000 to 180 for source on your LEFT, and from 180 to 359 for source on your right)"

Should it not be the other way around? I think there is a mistake in the description and it should read "From 000 to 180 is for a source on the RIGHT, and from 180 to 359 for a source on the LEFT."

When I google "KEMAR + Head = Azimuth", I see pictures of this kind:


----------



## 71 dB

castleofargh said:


> the perception of a position(real or virtual) is directly dependent on the listener = subjective
> when you use crossfeed, you should set it up for your own head = subjective
> when different people get to try some crossfeed system, they get different impressions, some find that convincing or preferable, some do not. in fact AFAIK a majority of people who try, do not = subjective
> most people prefer speaker playback, but some actually prefer headphone playback = subjective
> ...



Subjective or not, I simply think headphone sound as it is, is completely wrong and doesn't make sense, because it's spatiality for speakers, and speakers reproduce spatiality TOTALLY differently to headphones. From my perspective this FACT is ignored by others here. Subjective or not, I have a hard time believing large ILD values at low frequencies can be natural to anyone. How is that possible? Our brain learns spatial cues based on what we hear in everyday life, and large ILD values at low frequencies aren't something we hear a lot. Anyone can make binaural recordings with mics in their ears, record sounds in their life and analyse the ITD. This should not be something I have to fight over. I totally get that people are different, but how different can people be? It makes no sense that someone has elephant or cat hearing, because we are humans. We should have somewhat similar hearing. Our spatial hearing is based on learning the connection between the spatial cues and the visual information about the sound source. How can such a process develop totally different spatial hearing for people? Makes no sense! This must be about personal preferences rather than the science of spatial hearing: I still believe crossfeed is a step toward spatial information that makes more sense scientifically (because headphone sound as it is often makes very little sense spatially), but people have their preferences and expectations which are not met for everybody using crossfeed.

I have said many times that after switching crossfeed ON, the sound image seems to narrow a bit (because spatial hearing reacts to the sudden change of ILD scaling), but after a minute it goes back. That's spatial hearing adapting. In fact I believe that's when spatial hearing is adapted to spatial cues that make sense, while normal headphone listening means adaptation to spatial cues that don't make sense. To me the difference is not in the width, but in how natural the sound image sounds. Anyway, this is what crossfeed does for me.

Good enough is one thing, improvement is another thing. Nothing is good enough. People want perfection and can never have it. I'm not after perfection. I am a realist. I'm happy about improvement, small or big. That's why I can enjoy the improvements crossfeed gives to my ears.

In other topics I don't have the problem I have here. People who have studied digital audio, for example, share facts with me and there's a clear division between people who understand digital audio and those who don't. In this topic it seems different. Somehow crossfeed seems difficult to understand even for those who know a lot about spatial hearing. I look at crossfeed from the angle of what it does and achieves, while other people look at it from the angle of what it doesn't do or achieve. I believe this is because my opinion is that headphone spatiality is completely wrong and a mess, so that almost anything is better than nothing. From my point of view people don't take the problem of headphone spatiality seriously enough, and even I was simply used to it as it is before having my "eureka" moment in 2012. Speakers in a room can't produce nonsensical spatial cues to listeners, but headphones can! Do we want nonsensical spatiality? If we want it and it is artistic intention, then clearly speakers (without crosstalk canceling) are no good. If we don't want nonsensical spatial cues, then headphones are no good unless we use something that turns nonsensical spatial cues into something that makes sense. I think this reasoning is called for, even if a lot of subjectivity is part of the equation.

I am totally cool with crossfeed not meeting someone's personal preferences, but the way my reasoning and factual background have been questioned is unfair. Maybe there have been excuses on both sides? I have excuses to "ignore" some facts that don't support crossfeed, but other people have excuses to ignore the facts that do support crossfeed.


----------



## 71 dB

castleofargh said:


> all along you've been mistaking an objective approach with pseudo science IMO. when I look at the bird flapping its wings to take off, and I try to move my arms like it does, should that count as an objective approach? .



I see it like this. Speaker spatiality = bird. Headphone spatiality (as it is) = injured bird that can't fly. Headphone spatiality (with crossfeed) = injured bird that has been taken care of and has a "fixed" wing so that it can fly somehow, not as well as it could before the injury, but can fly nevertheless.


----------



## 71 dB

gregorio said:


> Ah, that's where we're going wrong, I've been naming several/many relevant facts you're not interested about, rather just one!
> 
> G



I'm not interested? I remember asking for your calculations about what crossfeed does to ITD, but I have seen nothing. If not calculations, how about some sort of explanation of why ITD renders crossfeed useless? All I hear from you is that other parameters exist (yeah, I know, I have studied spatial hearing in the university), but no analysis of how these parameters affect crossfeed. I have given my take on the role of ITD in crossfeed:

If speakers give a 30° angle, headphones a 40° angle without crossfeed and a 25° angle with crossfeed because ITD values are scaled down a notch, ITD is hardly a problem. Acoustic crossfeed, ER and reverberation all affect ITD with speakers, all of which is missing with headphones. That should be a less than ideal situation. However, I believe our spatial hearing is more tolerant to ITD "errors", because all kinds of reflections shape ITD, so our spatial hearing is used to ITD values that are a bit off, while the laws of physics dictate what kind of ILD values are possible, so that errors in ILD are in my opinion more serious. Crossfeed is ILD scaling and in my opinion that's justified for this reason. That's why concentrating on ILD is justified. It is not ignoring facts. It is concentrating on the relevant things. Not ignoring the other facts makes it possible for me to know they are somewhat irrelevant in this context.
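The ILD scaling being debated here is essentially what a basic crossfeed filter does. A minimal sketch (assuming numpy/scipy; the 700 Hz cutoff, 12 dB attenuation and 0.3 ms delay are illustrative values, not taken from any particular plugin):

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 44100  # sample rate, Hz

def crossfeed(left, right, fs=FS, cutoff=700.0, atten_db=12.0, delay_ms=0.3):
    """Minimal crossfeed: each output channel gets a delayed, lowpass-filtered,
    attenuated copy of the opposite channel. The net effect is to shrink ILD
    at low frequencies while leaving the highs mostly untouched."""
    b, a = butter(1, cutoff, btype="low", fs=fs)
    gain = 10 ** (-atten_db / 20)
    d = int(round(delay_ms * 1e-3 * fs))

    def feed(x):
        # lowpass + attenuate, then delay by d samples (zero-padded)
        y = gain * lfilter(b, a, x)
        return np.concatenate([np.zeros(d), y[: len(x) - d]])

    return left + feed(right), right + feed(left)
```

Feeding it a hard-panned low-frequency tone shows the point directly: the silent channel picks up an attenuated copy, so the extreme low-frequency ILD of the panned mix is reduced.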

I am totally fine discussing the facts, but most of what I get is claims that I am not interested in facts. Crazy.


----------



## 71 dB (Nov 4, 2019)

ironmine said:


> I
> 
> The description here is very confusing:
> "azimuth in degrees (3 digits, from 000 to 180 for source on your LEFT, and from 180 to 359 for source on your right)"
> ...



KEMAR is clockwise, IRCAM is counterclockwise.
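In code, converting between the two conventions is a single self-inverse mapping (assuming both conventions put 0° straight ahead, which the KEMAR and IRCAM diagrams suggest):

```python
def cw_to_ccw(azimuth_deg):
    """Map a clockwise azimuth (KEMAR-style) to a counterclockwise one
    (IRCAM LISTEN-style). The mapping is its own inverse, so the same
    function also converts counterclockwise back to clockwise."""
    return (360 - azimuth_deg) % 360
```

So a speaker at 30° clockwise corresponds to 330° in a counterclockwise naming scheme, which may explain the left/right swap described above.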


----------



## jaakkopasanen

ironmine said:


> Does anybody know when I can buy these microphones?


I have listed some in Impulcifer wiki: https://github.com/jaakkopasanen/Impulcifer/wiki/Measurements#microphones
I'm using the soundprofessional ones for HRIR measurements but I'm starting to suspect that the ear canal needs to be properly blocked for reproducible results. That should be possible with an earplug when using the soundprofessional mics, which are not ear canal blocking themselves.


----------



## sander99

Just to let the interested readers here know, I reacted to the above post in another thread (about Impulcifer):


sander99 said:


> (I decided to post this reply here as I think that is more appropriate than in the "To crossfeed or not to crossfeed?" thread.)
> 
> 
> ironmine said:
> ...


----------



## gregorio

71 dB said:


> I am totally fine discussing the facts, but most of what I get is claims that I am not interested in facts. Crazy.


No, most of what you get IS the facts but as you keep ignoring/dismissing them, that's OBVIOUSLY WHY you then keep being told you're ignoring the facts! This is obvious to everyone except you, so who's "crazy", you or everyone else? For example:


71 dB said:


> [1] I'm not interested? I remember asking for your calculations about what crossfeed does to ITD, but I have seen nothing. If not calculations, how about some sort of explanation of why ITD renders crossfeed useless? All I hear from you is that other parameters exist (yeah, I know, I have studied spatial hearing in the university), but no analysis of how these parameters affect crossfeed.
> [1a.] I have given my take on the role of ITD in crossfeed: ...



1. You've had countless explanations of how other parameters affect perception, that your calculations and explanations are inapplicable because we're not dealing with real/natural spatiality in the first place and most recently a video posted by castleofargh but you ignore them all or ... 1a. You have "a take" that dismisses them! How many times?


71 dB said:


> I see it like this. Speaker spatiality = bird. Headphone spatiality (as it is) = injured bird that can't fly. Headphone spatiality (with crossfeed) = injured bird that has been taken care of and has a "fixed" wing so that it can fly somehow, not as well as it could before the injury, but can fly nevertheless.


Nope, the bird still can't fly, you're imagining it! You're free to believe/imagine whatever facts you want, even if your beliefs are dictated by your agenda, and no one is saying you should stop using crossfeed if that's your preference but in this sub-forum you cannot make false statements of fact (no matter how strongly you believe them) without being refuted, and then strongly refuted if you keep repeating them! How many times?

G


----------



## 71 dB (Nov 4, 2019)

gregorio said:


> 1. You've had countless explanations of how other parameters affect perception, that your calculations and explanations are inapplicable because we're not dealing with real/natural spatiality in the first place and most recently a video posted by castleofargh but you ignore them all or ... 1a. You have "a take" that dismisses them! How many times?
> 
> 2. Nope, the bird still can't fly, you're imagining it! You're free to believe/imagine whatever facts you want, even if your beliefs are dictated by your agenda, and no one is saying you should stop using crossfeed if that's your preference but in this sub-forum you cannot make false statements of fact (no matter how strongly you believe them) without being refuted, and then strongly refuted if you keep repeating them! How many times?
> 
> G



1. We are not dealing with real/natural spatiality? Kind of agree. So there is no natural spatiality to be messed up. If crossfeed makes sound appear more natural, what's the problem? I watched the video. It is quite a Finnish study (Tapio Lokki and Ville Pulkki are mentioned). So I am not ignoring it. The video doesn't say crossfeed can't improve headphone audio. It deals with different things than what crossfeed does.

2. I wasn't the one who brought birds into this! It's pointless to argue whether imaginary birds can fly. This is lunacy! People talk about birds to prove me wrong and when I try to defend myself this happens! **** with the BIRDS!!


----------



## 71 dB (Nov 4, 2019)

This has not been about crossfeed for a long time. This is feuding.
I go now to watch The X-Files Season 11. That's more pleasant than being here…
Keep your birds, Ferraris and muddy fields. Not interested.


----------



## bigshot

71 dB said:


> I see it like this. Speaker spatiality = bird. Headphone spatiality (as it is) = injured bird that can't fly. Headphone spatiality (with crossfeed) = injured bird that has been taken care of and has a "fixed" wing so that it can fly somehow, not as well as it could before the injury, but can fly nevertheless.


----------



## bigshot

Hifiearspeakers said:


> The same could be said about your boy, Gregorio. Why does he keep responding and repeating himself over and over? In fact, he’s worse, because all he does now is stoop to repeated ad hominem attacks, which really should be moderated. Gander meet Goose.



He argues on point with facts. He defines his terms very carefully. He has experience in the field. He understands what he is talking about. He repeats himself because certain people refuse to acknowledge any fact that doesn't support their own argument. They repeat their error over and over as if they don't even hear or understand what he is saying and try to bluff their way through. Gregorio is human and gets frustrated. He reacts with harsh words. When I get frustrated, I give a couple of shots across the bow, then I just react with jokes because I've written the poster off completely. We all react differently. But the best way to get along is to listen to what people are saying to you and interact with them honestly. Gregorio has a wealth of information if you listen. If you don't listen you get what you get. I don't feel sorry for people who go down that road.


----------



## ironmine

71 dB said:


> Kemar is clockwise, ircam is counterclockwise.



If IRCAM is counterclockwise, then this file contains impulses that represent how the left and right ears hear the left speaker: IRC_1012_C_R0195_T030_P000.wav

and this file contains impulses that represent how the ears hear the right speaker:
IRC_1012_C_R0195_T330_P000.wav

because T030 and T330 mean 30 and 330 degree angles.

(P000 means elevation is 0).

However, when I process the sound with these impulses, the result is that the virtual speaker is perfectly located at 90 degrees to the left!!
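The routing being attempted here (each input channel convolved with one stereo HRIR pair, then summed per ear) can be sketched like this (assuming numpy/scipy; the toy single-tap "HRIRs" in the test below are made up for sanity checking, and which angle maps to which side depends on the azimuth convention issue discussed above):

```python
import numpy as np
from scipy.signal import fftconvolve

def virtual_speakers(left_in, right_in, hrir_left_spk, hrir_right_spk):
    """'True stereo' convolution: the left input drives one virtual
    speaker, the right input the other; each ear sums the contributions
    of both speakers. The hrir_* arguments are stereo HRIR pairs of
    shape (n_taps, 2), columns = (left ear, right ear) -- e.g. loaded
    from the IRCAM LISTEN T030/T330 wav files named in the post."""
    out_l = fftconvolve(left_in, hrir_left_spk[:, 0]) + fftconvolve(right_in, hrir_right_spk[:, 0])
    out_r = fftconvolve(left_in, hrir_left_spk[:, 1]) + fftconvolve(right_in, hrir_right_spk[:, 1])
    return out_l, out_r
```

With made-up single-tap "HRIRs" like `np.array([[1.0, 0.5]])` you can verify the routing is wired correctly before loading the real impulses; a collapse to one side then points at the files, not the wiring.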


----------



## ironmine

Yesterday I also tried this trick in VST-chainer: I inserted 112dB Redline Reverb and ran it parallel to the signal path.
(see the block circled in blue):







It takes a direct signal, reverberates it (you can control its amount either with the output knob in the Reverb plugin itself or, as I prefer to do it, with the BitShiftGain), and mixes it with the crossfed signal.

Immediately, I sensed an improvement in the resulting sound in the form of sound sources moving away from me (this is what I wanted to achieve!). Now the sound really resembles 112dB Redline Monitor, I am so excited. What if I can finally improve upon it? At least the initial comparisons sound quite promising.

Now, I am thinking that maybe I should have placed this Reverb block into the Treble Boost block pathway above it, to keep the schematics simpler... But doing so will brighten all the reverberations in addition to the main signal...

Experiments do continue, stay tuned.
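The parallel routing described above boils down to summing the crossfed path with an attenuated, reverberated copy of the direct signal. A sketch under stated assumptions (the `toy_echo` stand-in is a made-up single-tap feedback echo, nothing like Redline Reverb; the function names and the -18 dB default wet gain are illustrative):

```python
import numpy as np

def toy_echo(x, delay=2205, fb=0.4):
    """Made-up stand-in for a reverb: a single feedback echo.
    Purely illustrative -- not a model of 112dB Redline Reverb."""
    y = x.astype(float)  # work on a float copy
    for n in range(delay, len(y)):
        y[n] += fb * y[n - delay]
    return y

def parallel_reverb_mix(crossfed, direct, reverb_fn, wet_gain_db=-18.0):
    """Parallel routing: reverberate the direct signal, attenuate it
    (the role the output knob / BitShiftGain plays in the post), and
    sum it with the already-crossfed signal."""
    wet = 10 ** (wet_gain_db / 20) * reverb_fn(direct)
    return crossfed + wet
```

Because the wet path is fed from the direct signal rather than the treble-boosted one, the reverberation keeps the original tonal balance, which matches the trade-off weighed at the end of the post.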


----------



## castleofargh

ironmine said:


> I try to use this website http://recherche.ircam.fr/equipes/salles/listen/index.html for making an individual crossfeed for myself.
> 
> I listened to demo sounds and found that #1012 model gives me a very realistic sound: when the sound is supposed to be passing from left to right in front of me, I really hear that it passes in front of me. With other model heads, the sound at this moment tends to go up and then down.
> 
> ...


I honestly don't remember as I rapidly started messing around with those impulses(including renaming them). but there were a few that had an obvious(audible and measurable) imbalance, so it was relatively easy to refer to the circular demo to check that some perceived collapse on a side was on the same side in your own fancy crossfeed convolution with music.
ultimately, as you've guessed, you want to focus on the frontal impression and spend some time confirming it with the extracted 30 and 330°, or whatever you like best in that area, used in a self made "circuit", or as a start, just applied in some so called "true stereo" convolver. the stuff that accepts 4-channel impulses(the 2 stereo impulses) instead of just 2.


----------



## ironmine

castleofargh said:


> I honestly don't remember, as I rapidly started messing around with those impulses (including renaming them), but there were a few that had an obvious (audible and measurable) imbalance, so it was relatively easy to refer to the circular demo to check that some perceived collapse on a side was on the same side in your own fancy crossfeed convolution with music.
> ultimately, as you've guessed, you want to focus on the frontal impression and spend some time confirming it with the extracted 30° and 330° impulses (or whatever you like best in that area) used in a self-made "circuit", or, as a start, just applied in some so-called "true stereo" convolver: the kind that accepts 4-channel impulses (the 2 stereo impulses) instead of just 2.



I don't really need (I guess) "true stereo convolvers", since I can instead simply use two simple stereo convolvers and route their inputs and outputs the way I need in a VST chainer.

I tried that and the result was weird. I got a _perfect_ illusion that the sound was coming from the left only. 
I even tried to split the two downloaded stereo impulses into four mono impulses (I had to use four convolvers), but the result was the same.

So, there is something wrong with the orientation of that azimuth dial, or the description of the impulses offered at the website.

I wrote to the contact person at that website; now I am waiting for his response...
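For reference, the routing that the two stereo convolvers (or four mono convolvers) have to implement is the sum below; when the "left only" illusion appears, one of the cross legs is usually missing or swapped. A sketch with made-up impulses, not the actual IRCAM files:

```python
import numpy as np

# Hypothetical HRIR pair for a virtual speaker at 30 degrees: one impulse
# for the ipsilateral (near) ear, one for the contralateral (far) ear.
# A symmetric +30/-30 degree speaker pair reuses them with ears swapped.
h_ipsi = np.array([1.0, 0.0, 0.0])
h_contra = np.array([0.0, 0.0, 0.3])    # quieter and two samples later

def speakers_over_headphones(left, right):
    """Left virtual speaker feeds the left ear via h_ipsi and the right
    ear via h_contra; the right speaker is the mirror image. Four mono
    convolutions, summed per ear - what the convolver routing must do."""
    ear_l = np.convolve(left, h_ipsi) + np.convolve(right, h_contra)
    ear_r = np.convolve(right, h_ipsi) + np.convolve(left, h_contra)
    return ear_l, ear_r

L = np.array([1.0, 0.0])                # left-only impulse
R = np.zeros(2)
ear_l, ear_r = speakers_over_headphones(L, R)
# The left-only source must reach BOTH ears; if it stays in one ear,
# the cross-path convolutions are not being summed into the outputs.
```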


----------



## castleofargh

71 dB said:


> Subjective or not, I simply think headphone sound as it is is completely wrong and doesn't make sense, because it's spatiality for speakers, and speakers reproduce spatiality TOTALLY differently to headphones. From my perspective this FACT is ignored by others here. Subjective or not, I have a hard time believing large ILD values at low frequencies can be natural to anyone. How is that possible? Our brain learns spatial cues based on what we hear in everyday life, and large ILD values at low frequencies aren't something we hear a lot. Anyone can make binaural recordings with mics in their ears, record sounds in their life and analyse the ITD. This should not be something I have to fight. I totally get that people are different, but how different can people be? It doesn't make sense that someone has elephant or cat hearing, because we are humans. We should have somewhat similar hearing. Our spatial hearing is based on learning the connection between the spatial cues and the visual information about the sound source. How can such a process develop totally different spatial hearing for people? Makes no sense! This must be about personal preferences rather than the science of spatial hearing: I still believe crossfeed is a step toward spatial information that makes more sense scientifically (because headphone sound as it is often makes very little sense spatially), but people have their preferences and expectations which are not, for everybody, met using crossfeed.
> 
> I have been saying many times that after switching crossfeed ON, the sound image seems to narrow a bit (because spatial hearing reacts to the sudden change of ILD scaling), but after a minute it goes back. That's spatial hearing adapting. In fact I believe that's when spatial hearing is adapted to spatial cues that make sense, while normal headphone listening means adaptation to spatial cues that don't make sense. To me the difference is not in the width, but in how natural the sound image sounds. However, this is what crossfeed does for me.
> 
> ...


that's clearly your perception of the system and also of the situation. can't say it is mine. I'm fresh out of analogies, masturbation was probably the more fitting one as it did involve some fair share of mental image and impressions. your initial view for speaker vs headphone is one I happen to share. because that's how I feel and because with most albums ever released done on and for speakers, speakers seem the logical reference of desirable playback. even then I wouldn't go as far as calling it correct because many rooms, many speakers and we rarely know the actual reference used. but I happen to agree on the decision to pick speaker playback as reference for whatever we want to get in the end. 
and that's pretty much where I stop agreeing with you because the model you keep explaining as your so-called objective demonstration is not speaker playback. don't know how many times we have to say it, you just don't care about that "detail". a human head is going to move, a human is going to know he has a headphone on his head, those habits/expectations are not magically going away for your convenience. you assume that if you mix the channels maybe kind of like a listener would get on speakers with his head stuck in an anechoic chamber, then magically he'd feel a more natural experience. but *you do not know that!!!!!!!!!!* you only assume it because that's your subjective experience. for starters, let's talk about the odds that your crossfeed settings will actually come close enough to what a listener would experience. have you seen the effective variations from listener to listener? can you claim to know that your changes are going to trigger the desired type of impression anyway, and not something else? let's assume that step turns out ok, then what? the guy will still be missing reverb and any tiny head movement will still reveal to his brain that it's all BS. so now instead of the possibly comfy experience of headphone playback (not because it's natural but because the listener may have been using them for decades and just got used to that different experience), the listener ends up with directly conflicting localization cues. one cue telling him it's over there in front, another cue telling him the source is clearly stuck on his head and the only position that agrees with head movement is on top or inside the head (depending on how we move). you see that as an improvement, but how do you know that for someone else it doesn't end up feeling even more artificial and unnatural than default headphone playback that doesn't bother at all with localization beyond "this is more on the left"? 
at every turn you make your own assumptions that the entire world will feel like you do, enjoy what you enjoy, and prefer what you prefer. but take any song, any food, and you'll always find people who do not agree with you and do not think as you do when experiencing them. and that's the problem made obvious on the subjective side, but it should have been just as obvious on the objective side the moment you took a complete multivariable systems working under a clear set of conditions, and started to cut a piece of it to make your own "objective" model where you removed head movement, room reverb, headphone signature, specific HRTF, etc. what remained was not speaker playback, what remained was your explanation of what crossfeed does and why. from a scientific approach you can't just take a system and cut out the pieces and variables until it's simple enough, then declare that made up model to have the qualities and behaviors of the original real system. no scientist would accept that unless you demonstrate to them that most results and conclusions do indeed apply to the made up model. something you have never done and as I said probably cannot do. instead what you did is try for yourself, feel that it was correct and decided that it was apparently conclusive for the rest of humanity. 
be it objective or subjective, your views are the views of someone who looks to validate his idea, not someone who looks to test if they're correct. you look for what agrees with you when science would systematically look to disprove an idea and see how sturdy that idea really is. as a fully subjective tool, you can only validate crossfeed as something that happens to be nice to you, and nothing else. if you want to declare any more than that you have to run trials or at least ask for opinions, but without controlled trials, you'll never know that they set their crossfeed correctly, so... not very meaningful unless even then they all declare that it's really good and a subjective improvement (which as we know doesn't happen very often). 
and all this time, while I juggle from objective to subjective, I try to keep them somewhat apart, but of course in practice it's a giant pudding and our subjective interpretation is going to be the sum of X variables, objective and subjective, and yet again, unless you test those on many people under those specific conditions instead of declaring that you can just apply speaker playback knowledge, you'll really know nothing. 

if you want validation for remembering how to handle a matrix, how to calculate a delay based on distance and speed of sound, how to get some notions of acoustic about how an obstacle will have a frequency dependent impact, then here I am to give you that. I struggled more on matrix than on anything else. mostly due to the teacher I had, as once he changed I got over it in a week somehow, but still all in all I struggled for almost 6 months, applying rules for reasons I did not understand(which for a guy who loved math was a torture). it then took me a little less than 10 years to completely forget all about them and most math I ever learned. nowadays I have to think for a sec about how to do a division by myself...  so I'm sincerely jealous of anybody who still has enough math in him to go look up all the cool stuff I struggle with, as in audio there is an endless amount of things that interest me deeply but require more math than I remember. but if you're looking for someone to tell you how right you are about the clear benefits of crossfeed based on your "demonstration", then I'm not the guy, and I doubt anybody informed on the subject will ever do because you take enough liberties to declare what you say false. be it the method or your conclusions, both take way too many shortcuts. 

and here is the thing, I think I've explained all that many times already. so while I'm still typing, I'm fairly sure that once more you won't care and that once more you'll soon be back explaining how your little toy model of acoustic works and calling a one band EQ ILD. you clearly have all the cards to understand the situation and see your own errors, but clearly we can't just see them for you.  when you change some variables in a complex system, you can usually predict some of the consequences, but not necessarily all of them. when you go and just create a model for the workings of crossfeed, given the many differences compared to actual speaker playback, pretending that you can predict results based on speaker playback is so wrong that we shouldn't have the need to explain it in 25 different ways for over 2 years. I don't know what else to tell you. 
some people enjoy vinyl playback despite how it objectively does everything wrong. some people enjoy super colored kind of grainy tube amps, some people enjoy crossfeed. all those people are happy with what they've got and that's really great. if they're happy, all is good. and among those guys you always have a handful who wants their preferences to be justified as factual superiority. and most of them pass as loonies because they keep on trying to defend something that can't be with reasons that seem to make sense only in their own heads. I'm sure every single one of those thinks he's fighting the good fight for his beloved technology, but the effective result is the opposite. it slowly becomes weird to be associated with them, even just through a personal preference. if you really care about crossfeed and wish to promote it, stop this nonsense of trying to make it be something it is not, and stop behaving like you're crossfeed itself and it's a cult thing. that's my sincere advice to you.  unlike gregorio who fights for facts until he cannot, I'm still coming back and posting that crap because somewhere I still want to have a rational conversation with you, and I still hope it's possible. you've proved to me that it was on many other topics, and you've strongly worked on demonstrating to me that it wasn't on the crossfeed topic. only you can figure out why you're so amazingly different and utterly biased about this. remember, I'm actually a guy who likes crossfeed. that should put things in perspective a little when even I cannot get behind what you say.


----------



## 71 dB

castleofargh said:


> 1. and that's pretty much where I stop agreeing with you because the model you keep explaining as your so-called objective demonstration is not speaker playback.
> 
> 2. don't know how many times we have to say it, you just don't care about that "detail".
> 
> ...


1. Yep. It's obviously not speaker playback. It is crossfed headphone playback.
2. Wrong. I do "care" about that detail. That doesn't mean I'm gonna suffer excessive ILD if I can fix it.
3. Yep. I know I am wearing headphones, crossfeed or not.
4. If headphone sound is a lightyear from speakers, crossfeed takes me to maybe 0.8 lightyears' distance. Still far away, but those 0.2 lightyears were for me the crucial part, the excessive unnatural spatiality that I find annoying. If I want to go all the way then I simply listen to my speakers! However, I like the 0.8 lightyears' distance. Since I have used crossfeed now for about 7.5 years, I am pretty sure I know how my spatial hearing reacts to the changes crossfeed makes to the sound. If your ears react differently then that's not my problem, is it? 
5. Why are there some random limits for "coming close enough"? Who says what is close enough? Having speaker-like sound on headphones is a huge technical challenge, but we can have better headphone sound, sound that is clearly headphone sound far from speaker sound, but better in some way, for example free of excessive ILD. We reduce the harm that comes from the differences of headphone and speaker sound. That difference creates, for example, excessive ILD, but we can fix that with crossfeed, so that's what I have been doing ever since I realized the existence of the problem. Of course if YOUR ears think excessive ILD is not a problem then don't use crossfeed!
6. Yeah, and without crossfeed he/she is missing those things as well + dealing potentially with the problem of excessive ILD. How is that any better? Headphone sound is (spatially) BS. That's why I crossfeed it into the kind of BS I actually enjoy.
7. Somehow my mind was flexible enough to make sense of this and the result is miniature soundstage (given the spatial information of the recording allows it).
8. Apparently I don't know. That's why I have backpedaled and now I only say what I hear, because that is what I know. However, I will continue to use science to justify what I hear, because the science made me discover crossfeed (some sort of logical connection exists), and for me crossfeed does what the science says it would do, scale ILD to natural levels. Never did I expect to get anywhere near speaker sound, but somehow you think I am that dumb. It is ridiculous to think crossfeed would turn headphones into speakers and you laid out the reasons. Crossfeed scales ILD, and in my case that allows my spatial hearing to make more sense of the spatiality and enjoy it more, not suffering from excessive ILD. Is this a misunderstanding because I have said crossfeed makes the sound speaker-like? Of course I didn't mean identical! A notch toward speakers because ILD is more similar (still quite different, but without crossfeed 10 times more different at low frequencies!). Of course crossfeed can't do all the other stuff of speaker sound, room acoustics and all, but it can mimic acoustic crossfeed.
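The ILD-scaling claim is easy to put in numbers. Assuming a plain level-only crossfeed (no delay or filtering, an idealization) that adds the opposite channel at -12 dB, a hard-panned source's ILD, which is effectively unbounded on headphones, gets capped at 12 dB:

```python
import math

def ild_after_crossfeed(left_amp, right_amp, cross_db=-12.0):
    """ILD (in dB) of a source after a level-only crossfeed that adds
    the opposite channel attenuated by `cross_db`.
    (Idealized: real crossfeeds also delay and lowpass the cross path.)"""
    g = 10.0 ** (cross_db / 20.0)           # -12 dB -> ~0.251 linear
    l = left_amp + g * right_amp
    r = right_amp + g * left_amp
    return 20.0 * math.log10(l / r)

# Hard-panned-left source: infinite ILD without crossfeed,
# capped at exactly 12 dB with a -12 dB crossfeed.
ild = ild_after_crossfeed(1.0, 0.0)
```

Whether a ~12 dB cap is "natural" at low frequencies is the perceptual question being argued here; the arithmetic only shows what the scaling does.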


----------



## gregorio

71 dB said:


> [1] We are not dealing with real/natural spatiality? Kind agree. So there is no natural spatiality to be messed up with. If crossfeed makes sound appear more natural what's the problem?
> [1a] I watched the video It is quite Finnish study (Tapio Lokki and Ville Pulkki mentioned). So I am not ignoring. The video doesn't say crossfeed can't improve headphone audio. It deals with different things that what crossfeed does.
> [2] I wasn't the one who brought birds into this! It's pointless to argue whether imaginary birds can fly. This is lunacy! People talk about birds to prove me wrong and when I try to defend myself this happens! **** with the BIRDS!! ... {from the next post] Keep you birds, Ferraris and muddy fields. Not interested.
> [3] This has not been about crossfeed for a long time. This is feuding.



1. The problem is that crossfeed is NOT some magical process that actually turns the "unnatural spatiality" into "natural spatiality", it obviously just crossfeeds the "unnatural spatiality". Unnatural spatiality + crossfeed = crossfed unnatural spatiality, it does NOT equal "natural spatiality". However, to your personal perception, crossfeed seems to make this crossfed unnatural spatiality "appear" more natural, which is fine but of course we're now talking about the "appearance" of spatiality to your personal perception, NOT objective fact! You seem to agree that crossfeed isn't some magical process and that we all perceive sound/spatiality somewhat differently but then effectively ignore/dismiss this and go on about natural/objective ILD, which by itself does not define spatiality anyway, so then you also have to ignore/dismiss all the other parameters that actually define spatiality. If all that's not bad enough, you then (falsely) state you're not ignoring/dismissing anything, this is indeed "lunacy"!
1a. Again, another of your classic self-contradictions! The video does NOT "deal with different things than what crossfeed does", in large part it deals with what objectively occurs (the FR) at the ear drums, so unless crossfeed is not crossfeed but is instead some magical process that bypasses the ear drums, then the video (at least in part) deals with the same things! So, you've watched the video and on the basis of your false conclusion that it has nothing to do with crossfeed, you ignore/dismiss it and then you falsely state that you are not ignoring it? This is indeed "lunacy"!

2. As you clearly fail to understand the RELEVANT facts/evidence and on that basis keep ignoring/dismissing them (despite them being explained to you numerous times in different ways), then using analogies is a logical way of simplifying and illustrating the facts but you state you're "Not interested", which of course is up to you but then of course you can't ask the question: "What facts am I not interested in?", because that is "lunacy"!

3. In a sense, it IS feuding. You making false assertions of objective fact and me/us refuting them. So, that leaves only 3 options going forward:
A. Me/Us also ignoring/dismissing the relevant facts/evidence/science, on the basis of your personal perception and desire to be a messiah.
B. You ceasing to make false assertions of objective fact, or
C. You continuing to make false assertions and me/us continuing to refute them.

"A" is never going to happen here, or else it ceases to be the Sound Science sub-forum.
"B" is apparently never going to happen because you don't believe you're making false assertions and won't stop posting them because you think you're an enlightened messiah and must enlighten the rest of us.
Which leaves only "C", the endless "feuding" of false assertions vs refutations. But why then are you complaining about this "feuding", when it's you who's causing it and you who can end it? So round and round we go and indeed "This is lunacy"!

G


----------



## 71 dB

ironmine said:


> Yesterday I also tried this trick in VST-chainer: I inserted 112dB Redline Reverb and ran it parallel to the signal path.
> (see the block circled it blue):
> 
> 
> ...



Reverberation indeed is a strong spatial cue of distance. This starts to be "beyond crossfeed", but it's good you like it!


----------



## 71 dB

gregorio said:


> 1. The problem is that crossfeed is NOT some magical process that actually turns the "unnatural spatiality" into "natural spatiality", it obviously just crossfeeds the "unnatural spatiality". Unnatural spatiality + crossfeed = crossfed unnatural spatiality, it does NOT equal "natural spatiality". However, to your personal perception, crossfeed seems to make this crossfed unnatural spatiality "appear" more natural, which is fine but of course we're now talking about the "appearance" of spatiality to your personal perception, NOT objective fact! You seem to agree that crossfeed isn't some magical process and that we all perceive sound/spatiality somewhat differently but then effectively ignore/dismiss this and go on about natural/objective ILD, which by itself does not define spatiality anyway, so then you also have to ignore/dismiss all the other parameters that actually define spatiality. If all that's not bad enough, you then (falsely) state you're not ignoring/dismissing anything, this is indeed "lunacy"!
> G



1. If the facts mattered, I should hate headphone sound, crossfeed or not. It's FACTUALLY unnatural. It is a scientific fact that we don't hear like mics. We have perception and in the end that matters. So, if my perception says crossfed unnatural spatiality appears natural that's how it is for me. Even with speakers stereo sound is based on perception and fooling spatial hearing rather than physical facts. Perception has to be taken into account.

ILD itself doesn't define (at least well) spatiality, but it doesn't have to. The other parameters don't disappear anywhere in crossfeed. I believe and my spatial hearing agrees that the combinations of spatial parameters seem more natural after crossfeed.


----------



## 71 dB

gregorio said:


> 1a. Again, another of your classic self-contradictions! The video does NOT "deal with different things than what crossfeed does", in large part it deals with what objectively occurs (the FR) at the ear drums, so unless crossfeed is not crossfeed but is instead some magical process that bypasses the ear drums, then the video (at least in part) deals with the same things! So, you've watched the video and on the basis of your false conclusion that it has nothing to do with crossfeed, you ignore/dismiss it and then you falsely state that you are not ignoring it? This is indeed "lunacy"!
> 
> G



So, nobody thinks about eardrums with headphones, but if you use crossfeed, suddenly ear drums are interesting? What? Ear canal resonances and ear drums are the same problem, whether you use crossfeed or not. I don't understand your reasoning of problems emerging only when you use crossfeed. They are there, crossfeed or not! That video has hardly anything to do with crossfeed. It is about creating "sonic ultrarealism", not just scaling ILD. The stuff in the video is like 1000 times more sophisticated than default crossfeed. It's like watching a video of Ferrari F1 cars and trying to use that to debunk someone's claims about pedal cars.


----------



## taffy2207




----------



## castleofargh

taffy2207 said:


>



I kind of dislike her for no reason, but what she mentions does exist, and it's even worse because plenty of other effects piggyback on those general behaviors (I've read the books by the dudes who made the studies, dreaming that it would change my own behavior (of course it didn't), so I'm 12% expert myself now). 
but here is the catch, sometimes we're involved with facts, and they're either correct or they're not. having 2 sides arguing them doesn't change the facts themselves(that would really suck). like when crossfeed is such an obvious improvement according to someone, but in practice, only a minority of people stick to crossfeed after trying it. you know the way most people behave when something is an obvious improvement for them. ^_^

but you're right, I had already given up(twice I believe...), and should have stuck to that.


----------



## gregorio

71 dB said:


> 4. If headphone sound is a lightyear from speakers, crossfeed take me to maybe 0.8 lightyears distance. Still far away, but those 0.2 lightyears were for me the crucial part ...
> 5. Why is there some random limits for "coming close enough"? Who says what is close enough?
> 6. Yeah, and without crossfeed he/she is missing those things as well + dealing potentially with the problem of excessive ILD. How is that any better?
> [6a] Headphone sound is (spatially) BS. That's why I crossfeed it into the kind of BS I actually enjoy.
> 7. Somehow my mind was flexible enough to make sense of this and the result is miniature soundstage (given the spatial information of the recording allows it).



4. Exactly. In fact you're not 0.2 lightyears closer, you're the same distance but just in a different position, which apparently creates the illusion/delusion for you that you're 0.2 lightyears closer. So the "crucial part" that you're basing everything on, is actually an illusion/delusion! So what now, are you going to complain again about an analogy that you yourself have continued?

5. No one, EXCEPT YOU! The rest of us say it depends on personal preference/perception.

6. Because with crossfeed, he/she is missing those things as well + dealing with the other consequences of crossfeed (that ALL the spatial information is crossfed, not only the ILD). How is that any better? And the answer is: ...
6a. You ignore, dismiss or simply don't hear/perceive all the other crossfed information, concentrate only on ILD (relative to what it would be in nature) and YOU enjoy it. But as YOU state, it's YOUR "enjoyment" BUT it's not my enjoyment and it's NOT objective fact!

7. I could argue that your mind is less flexible, that either you just don't have the listening skills to discern the negative effects of crossfeed or your mind is too inflexible (in regards to the cognitive biases you've invented from incomplete/cherry-picked facts) to allow you to discern them. However, that's just my opinion/conclusion based on the evidence of your posts, not an objective fact. So I'll just argue that you have a different perception/preference, rather than imply your mind is inferior (as you are attempting to do with us!).


71 dB said:


> Ear canal resonances and ear drums are the same problem, whether you use crossfeed or not.


Exactly, they "_are the *same problem*_", thanks for agreeing with me. So why did you effectively dismiss the video (and the evidence it presented) on the basis that "_It deals with *different things* than what crossfeed does._"? Self-contradiction, AGAIN!

Round and round we go! 

G


----------



## 71 dB

gregorio said:


> 4. Exactly. In fact you're not 0.2 lightyears closer, you're the same distance but just in a different position, which apparently creates the illusion/delusion for you that you're 0.2 lightyears closer. So the "crucial part" that you're basing everything on, is actually an illusion/delusion! So what now, are you going to complain again about an analogy that you yourself have continued?
> 
> G



4. I beg to differ and this is my rationale: Let's assume spatiality consists of orthogonal parameters. The distance function between two sets of spatial parameters can be defined as the square root of the sum of the squares of the differences of each parameter. If one of these distance parameters (such as ILD distance) goes near zero, the whole distance function is reduced. Again, my spatial hearing agrees with this. I consider stereophony an illusion anyway, fooled spatial hearing. So, it doesn't matter to me if this is an illusion as well.
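The rationale above is just a Euclidean norm over the parameter differences; under the (assumed) orthogonality, shrinking one large component necessarily shrinks the total distance. A toy sketch with invented numbers, not measured values:

```python
import math

def spatial_distance(params_a, params_b):
    """Euclidean distance between two sets of (assumed orthogonal)
    spatial parameters, e.g. (ILD error, ITD error, reverb error)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(params_a, params_b)))

# Made-up parameter errors relative to a "natural" reference:
natural = (0.0, 0.0, 0.0)
headphones = (20.0, 5.0, 5.0)   # large ILD term dominates the distance
crossfed = (2.0, 5.0, 5.0)      # ILD term driven near zero by crossfeed

d_plain = spatial_distance(headphones, natural)
d_xfeed = spatial_distance(crossfed, natural)
# Driving one large component toward zero shrinks the whole distance,
# even though the other components are untouched.
```

The contested premise, of course, is whether these parameters really are orthogonal and whether ILD error alone dominates; the math only holds if they are.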


----------



## 71 dB (Nov 5, 2019)

gregorio said:


> Exactly, they "_are the *same problem*_", thanks for agreeing with me. So why did you effectively dismiss the video (and the evidence it presented) on the basis that "_It deals with *different things* than what crossfeed does._"? Self-contradiction, AGAIN!
> 
> Round and round we go!
> 
> G



You interpret everything I say on purpose so that you see contradictions? Do you think anyone could self-contradict this much? What do you think I am doing?
If you demand ear canal problems to go away with crossfeed you are simply demanding too much. It is an ILD scaler, not an ear canal fixer!

I have been trying to be nice and give leeway, but nothing changes on the other side. I would have to agree 100% with you, and that is not happening, because I hear differently.


----------



## 71 dB

71 dB said:


> You interpret everything I say on purpose so that you see contradictions? Do you think anyone could self-contradict this much? What do you think I am doing?
> If you demand ear canal problems to go away with crossfeed you are simply demanding too much. It is an ILD scaler, not an ear canal fixer!
> The video is not "just an ear canal problem", it is ultrarealistic audio on headphones. That IS different from what crossfeed does.
> 
> I have been trying to be nice and give leeway, but nothing changes on the other side. I would have to agree 100% with you, and that is not happening, because I hear differently.


----------



## 71 dB (Nov 5, 2019)

gregorio said:


> 6. Because with crossfeed, he/she is missing those things as well + dealing with the other consequences of crossfeed (that ALL the spatial information is crossfed, not only the ILD). How is that any better? And the answer is: ...
> 6a. You ignore, dismiss or simply don't hear/perceive all the other crossfed information, concentrate only on ILD (relative to what it would be in nature) and YOU enjoy it. But as YOU state, it's YOUR "enjoyment" BUT it's not my enjoyment and it's NOT objective fact
> 7. I could argue that your mind is less flexible, that either you just don't have the listening skills to discern the negative affects of crossfeed or your mind is too inflexible (in regards to the cognitive biases you've invented from incomplete/cherry picked facts) to allow you to discern them. However, that's just my opinion/conclusion based on the evidence of your posts, not an objective fact. So I'll just argue that you have a different perception/preference, rather than imply your mind is inferior (as you are attempting to do with us!).
> 
> ...


Crossfeed is pretty gentle. It does something that acoustic crossfeed does. It should not be weird or wrong. For me the negative aspects are insignificant compared to the good things. Excessive ILD is the BIG problem for me; crossfeed fixes that mimicking speakers. No problem for me!


----------



## gregorio

71 dB said:


> So, it doesn't matter to me if this is an illusion as well. ... For me the negative aspects are insignificant compared to the good things. Excessive ILD is the BIG problem for me; crossfeed fixes that mimicking speakers. No problem for me! ... etc. etc.



It doesn't matter TO YOU. Negative aspects are insignificant FOR YOU. Excessive ILD is the big problem FOR YOU. No problem FOR YOU. 

No problem for us either, you can be subject to whatever illusions/beliefs you prefer or your personal perception/biases dictate. However, that's just your personal perception/preferences ... crossfeed does NOT "fix that mimicking speakers", that is factually FALSE. How many times? 


71 dB said:


> I have been trying to be nice and give leeway but nothing changes on the other side. I would have to agree 100 % with you and that is not happening, because I hear differently.


Yes, this is what I said: "_A. Me/Us also ignoring/dismissing the relevant facts/evidence/science, on the basis of your personal perception and desire to be a messiah._" ... _""A" is never going to happen here, or else it ceases to be the Sound Science sub-forum." - _So obviously nothing can change on this side. How, after all this time are you only now seeing the obvious? And, it's got absolutely NOTHING to do with you "hearing differently", it's because you try to impose false objective facts on this forum, which are just your personal subjective perception and NOT objective facts! How many times?

Round and round we go!

G


----------



## ztwindwalker

I found a good resource that may be a good addition to the topic.
That is this album:
https://www.hdtracks.com/3-d-the-catalogue?___store=default&refSrc=453161&nosto=nosto-page-product1
Title: 3-D The Catalogue 
Artist: Kraftwerk 
Genre: Electronic, World, Electronica, Rock and Roll Hall of Fame nominees 
Label: Parlophone UK 
Release Date: 2017 

This album focuses on 3D sound imaging. Several songs appear in both a normal speaker mix and a dedicated headphone mix, so we can compare the "normal" version + crossfeed against the headphone version. Have some fun with it!


----------



## bigshot

I've compared the headphone and multichannel mix on that album. The headphone version doesn't sound the same at all. It sounds like just a slightly phase altered stereo mix.


----------



## 71 dB

gregorio said:


> It doesn't matter TO YOU. Negative aspects are insignificant FOR YOU. Excessive ILD is the big problem FOR YOU. No problem FOR YOU.
> 
> No problem for us either, you can be subject to whatever illusions/beliefs you prefer or your personal perception/biases dictate. However, that's just your personal perception/preferences ... crossfeed does NOT "fix that mimicking speakers", that is factually FALSE. How many times?
> 
> G



Yes, I admit this, but I still say crossfeed mimics acoustic crossfeed. In fact, crossfeed has been called an "acoustic simulator"; the idea behind it is to simulate acoustic crossfeed. At low frequencies, acoustic crossfeed causes you to hear the direct sound from the left speaker at your right ear just a few decibels lower and delayed 200-300 microseconds. Crossfeed does something similar, and is therefore mimicking it. This is well documented in HRTF measurements and is therefore an objective fact. HRTFs are more similar across people at low frequencies than at high frequencies. All of this is well-documented fact and my reasoning is based on it. As far as I know, the inventors of crossfeed followed the same reasoning.
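A toy sketch of that mechanism (this is not the bs2b or any specific plugin's algorithm; the cutoff, attenuation and delay values below are illustrative, merely in the ballpark described above):

```python
import numpy as np
from scipy.signal import butter, lfilter

def simple_crossfeed(left, right, fs, cutoff=700.0, atten_db=-6.0, delay_us=250):
    """Mix a lowpass-filtered, attenuated, slightly delayed copy of the
    opposite channel into each ear, roughly imitating acoustic crossfeed."""
    b, a = butter(1, cutoff / (fs / 2))      # gentle 1st-order lowpass
    gain = 10 ** (atten_db / 20)             # dB -> linear
    delay = int(round(fs * delay_us / 1e6))  # interaural delay in samples

    def feed(src):
        lp = lfilter(b, a, src) * gain
        return np.concatenate([np.zeros(delay), lp[:len(src) - delay]])

    return left + feed(right), right + feed(left)
```

With a hard-panned input, the silent channel picks up a quieter, darker, slightly late copy of the other side, which is the "dim delayed signal of the opposite channel" effect discussed in this thread.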


----------



## castleofargh

71 dB said:


> Yes, I admit this, but I still say crossfeed mimics acoustic crossfeed. In fact, crossfeed has been called an "acoustic simulator"; the idea behind it is to simulate acoustic crossfeed. At low frequencies, acoustic crossfeed causes you to hear the direct sound from the left speaker at your right ear just a few decibels lower and delayed 200-300 microseconds. Crossfeed does something similar, and is therefore mimicking it. This is well documented in HRTF measurements and is therefore an objective fact. HRTFs are more similar across people at low frequencies than at high frequencies. All of this is well-documented fact and my reasoning is based on it. As far as I know, the inventors of crossfeed followed the same reasoning.


 I'm sure this was it all along. gregorio clearly had no idea that the sound from the left speaker also reached the right ear. what a discovery this must be for the guy who's been making fake room sounds(among other stuff) for movies professionally. I can imagine him in front of his computer right now, thinking "so you mean both ears get the sound? wow!". 
now that this is clear, I see no reason for him not to fully agree with you on everything. well nothing beside all the stuff you dismiss, all the stuff that you inaccurately apply then call ILD anyway, all the stuff you're wrong about, and all the stuff you consider objective only when it pleases you and other similar logical fallacies. but at least we know that we have 2 ears, which is nice.


----------



## Davesrose

castleofargh said:


> I'm sure this was it all along. gregorio clearly had no idea that the sound from the left speaker also reached the right ear. what a discovery this must be for the guy who's been making fake room sounds(among other stuff) for movies professionally. I can imagine him in front of his computer right now, thinking "so you mean both ears get the sound? wow!".
> now that this is clear, I see no reason for him not to fully agree with you on everything. well nothing beside all the stuff you dismiss, all the stuff that you inaccurately apply then call ILD anyway, all the stuff you're wrong about, and all the stuff you consider objective only when it pleases you and other similar logical fallacies. but at least we know that we have 2 ears, which is nice.




I think most people signed out of this thread pages ago... thinking along the lines of this:


----------



## 71 dB (Nov 6, 2019)

castleofargh said:


> I'm sure this was it all along. gregorio clearly had no idea that the sound from the left speaker also reached the right ear. what a discovery this must be for the guy who's been making fake room sounds(among other stuff) for movies professionally. I can imagine him in front of his computer right now, thinking "so you mean both ears get the sound? wow!".
> now that this is clear, I see no reason for him not to fully agree with you on everything. well nothing beside all the stuff you dismiss, all the stuff that you inaccurately apply then call ILD anyway, all the stuff you're wrong about, and all the stuff you consider objective only when it pleases you and other similar logical fallacies. but at least we know that we have 2 ears, which is nice.



First of all, I don't know Gregorio. To me he is a guy online who says he has been in the audio engineering business for a long time, and he appears to know a lot about it, but I don't know his resume. I don't know what movies or records he has worked on. To you Gregorio might be a well-known person, but I don't know him other than through what I have read here, and I have never seen him mention any movie or recording he has worked on.

Secondly, I was not teaching acoustics 101 to anyone; I was justifying crossfeed. I am fine if Gregorio _personally_ doesn't like what crossfeed does to the sound, but as I have said many times, there are scientific justifications for crossfeed. When we forget about our preferences and what we like or dislike and just look at the science of human spatial hearing, it indicates that large ILD at low frequencies is a problem. I have explained why this is many times. That's why crossfeed was "invented" in the first place: to address this problem. You can't explain away this scientific justification by noting crossfeed hasn't been commercially successful (I suppose the default Apple earbuds are the best headphones in the world, since so many people use them!). This is why I don't particularly "like" it when people say I don't know the science or what I am talking about. You can always say that there are better ways to address the problems of headphone sound (there certainly are!), but it is _scientifically_ true that crossfeed is a step in the right direction. It is a subjective matter whether this scientific improvement is perceived as a negative or positive change.

Thirdly, even if Gregorio is the best audio engineer ever, it doesn't mean there is no room for improvement in his work or thinking. Maybe I am wrong because I am not an audio engineer, but I think headphone sound has historically been a blind spot in audio production, and only in the last 20 years or so has there been more focus on how recordings sound on headphones, because the explosion of portable players/smartphones made headphone listening much more popular.

Finally, I don't demand that Gregorio or anyone else subjectively like crossfeed. We all agree we have our own preferences and even our own _perception_, as I have learned here.* If you use more sophisticated methods such as HRTF convolution, crossfeed can appear quite "old-fashioned" and mirthless in comparison. I just wish people would admit I am not ignoring scientific facts and that crossfeed really does have scientific justification behind it.

* My professor at university must have forgotten to mention this, or I missed the lecture where it was mentioned, because I didn't know the perception of human hearing could vary a lot between individuals. I had taken it as self-evident that our brain learns to match spatial perception with visual information about sound sources, so that spatial perception becomes pretty much the same for everybody.


----------



## gregorio (Nov 6, 2019)

71 dB said:


> [1] I am fine if Gregorio_ personally_ doesn't like what crossfeed does to the sound, but as I have said many times, there are scientific justifications for crossfeed.
> [2] When we forget about our preferences and what we like or dislike and just look at the science of human spatial hearing, it indicates that large ILD at low frequencies is a problem.
> [3] I have been explaining why this is many times.
> [4] That's why crossfeed was "invented" in the first place to address this problem and you can't explain away this scientific justification by noting crossfeed hasn't been commercially successful ...



1. Yes, there are scientific justifications for crossfeed. However, there are also scientific justifications against crossfeed, which is why it was superseded years ago.
2. No it doesn't. It might indicate large ILD at low freqs is a problem in the real/natural world but we're not dealing with the natural world. It's obviously a problem for your personal perception/preferences but then of course you're contradicting yourself AGAIN, because you obviously CANNOT "_forget [y]our preferences_"!

3. Why? Didn't you read castle's last post? What's the point of explaining the same cherry picked half-truths over and over again when it's been explained to you that tactic cannot work here in the sound science forum and you stated you understood that, but the very next day, you contradict your own statement and off you go repeating exactly the same thing all over again. Now that's "lunacy"! How many times?

4. I don't! I _"explain away [your] scientific justification_" with the individual facts that I've enumerated, plus the obvious fact that science itself dumped crossfeed decades ago in favour of a model with far fewer problems than the simple crossfeed model. So when you say/imply that science supports your position, that's effectively a LIE, science hasn't supported your position since about the 1970's! Noting that crossfeed hasn't been commercially successful just supports the science, that crossfeed also introduces problems and demonstrates that many/most people do not prefer it!

Round and round you go. WHY? How do you not understand that repeating the same cherry-picked half truths is NEVER going to get this forum to accept your position?

G


----------



## bigshot

71 dB said:


> First of all, I don't know Gregorio



Over the past few hundred posts, we've come to know a great deal about you. You've successfully made yourself a singular person in the group, and not in a good way. You're the living embodiment of the movie "God's Little Acre". It's as if you are bound and determined to dig a hole to China.


----------



## ztwindwalker

bigshot said:


> I've compared the headphone and multichannel mix on that album. The headphone version doesn't sound the same at all. It sounds like just a slightly phase altered stereo mix.


Yep, I guess the headphone version inserted virtual acoustic environment information. It's obviously more wet and has a different soundstage compared to crossfeed + the normal version.


----------



## 71 dB

gregorio said:


> 1. Yes, there are scientific justifications for crossfeed. However, there are also scientific justifications against crossfeed, which is why it was superseded years ago.
> 2. No it doesn't. It might indicate large ILD at low freqs is a problem in the real/natural world but we're not dealing with the natural world. It's obviously a problem for your personal perception/preferences but then of course you're contradicting yourself AGAIN, because you obviously CANNOT "_forget [y]our preferences_"!
> 
> 3. Why? Didn't you read castle's last post? What's the point of explaining the same cherry picked half-truths over and over again when it's been explained to you that tactic cannot work here in the sound science forum and you stated you understood that, but the very next day, you contradict your own statement and off you go repeating exactly the same thing all over again. Now that's "lunacy"! How many times?
> ...


1. The things that have superseded crossfeed are rare. Even fewer people use them than crossfeed, and yet you hold crossfeed's lack of commercial success against it. That's because they are expensive and technically demanding. There are also scientific justifications against headphone sound as it is, which is the original reason I discovered crossfeed.

2. My spatial hearing expects some resemblance of the natural world in order to be fooled in a way that makes sense. If the spatial parameters don't make sense, the whole concept of stereophony collapses. I believe the assumption of the natural world is inherently baked into stereo sound: stereo works because of how our spatial hearing processes sounds from the natural world. The same principle applies to 3D movies; the pictures for the left and right eyes need to follow the principles of stereo vision or the concept collapses. With speakers we are dealing with the natural world, because we are hearing two speakers playing in a room. That's natural to our ears even if the sounds are "unnatural test signals", because the room acoustics render the spatial cues the same as if a person were speaking or an animal were making sounds where the speaker is. That's why speaker stereophony works: it appears natural no matter how crazy the spatiality on the recording is, and our spatial hearing is easily fooled into hearing what the sound engineers/artists intended. Headphones remove this naturalization process and things go south, at least for me, but with crossfeed I get a coarse simulation of one aspect of the natural process and my spatial hearing is fooled enough. I think I am lucky that my perception allows something as simple as crossfeed to deal with the problem of headphone spatiality. That said, we are again walking in a minefield of semantic interpretations: what is the natural world and what isn't? Where is the line?

3. I don't think the problem of too-large ILD at low frequencies is cherry-picking or a half-truth. A 9/10-truth, maybe. I don't understand why the intention of sound engineers and artists would be large ILD at low frequencies. My intention as a sound engineer would be to produce recordings that sound as good as possible, and to my knowledge that is achieved by doing completely different things than creating large ILD at low frequencies: filtering tracks carefully, possibly filtering out unwanted resonances, compressing tracks, adding warmth with tape/tube saturation plugins, creating a sense of depth by using reverberation differently per track, using side-chain compression, using a glue compressor to glue tracks together into something coherent, and so on, whatever the track under work requires. That's the stuff I encounter when I watch YouTube videos about mixing. Never has any sound engineer in these videos said that a large ILD at low frequencies is a positive thing. No, it is often recommended to mix bass in MONO! I think that's unnecessarily simplistic; personally, my target ILD at low frequencies is 0-3 dB, the range we see in HRTF measurements. To me that makes scientific sense and is a rational intention in music production.

Amplitude panning at low frequencies is a stereophonic brain fart. Spatial hearing is primarily based on ITD at low frequencies, not so much on ILD. You want a bass sound panned left or right in your mix? ILD = 3 dB, ITD = 600-700 microseconds. That's how it's done; that's how you do omnistereophony that works for both speakers and headphones. On headphones I perceive sounds like this in one ear only, even if the sound in the other ear is just 3 dB quieter! The ear where the sound arrives first or louder "masks" the other ear, as if the other ear hears nothing, but it's all natural.

If the ILD is larger, say 10 dB, some bass energy is "missing" at the contralateral ear, the masking process overcompensates, and I think I actually hear the difference as an out-of-phase version of the sound. So, if the ILD at low frequencies is too large, I hear "virtual" out-of-phase sound sources that are modulated in and out of existence by the ILD level! Now you perhaps understand why I experience large ILD at low frequencies as distracting and annoying as hell. When the ILD at low frequencies is small enough (a few decibels only), my perception thinks there's nothing missing at the contralateral ear and no virtual out-of-phase sound pops out. Everything just sounds natural and pleasing. It's important to understand how the importance and function of spatial parameters vary with frequency:

20-800 Hz: Spatial information is strongly encoded into ITD and weakly encoded into ILD (assumed small) and ISD (assumed small).
800-1600 Hz: Spatial information is encoded into ITD, ILD and ISD, but in a messy way, so spatial hearing doesn't rely much on this transition octave.
1600-20000 Hz: Spatial information is strongly encoded into ILD and ISD and weakly encoded into ITD.

Classic methods of creating spatial information in recordings, such as amplitude panning, take all this into account poorly, so that with speakers it works thanks to the naturalization process of acoustic crossfeed, early reflections and reverberation, but on headphones it just falls apart, at least for me. Of course these days things are much more sophisticated, but that doesn't help with the older stuff. It is what it is, and even the new stuff isn't always perfect, just better…
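The low-frequency panning recipe above (near ear untouched; far ear a few dB quieter and 600-700 microseconds later) can be sketched like this (a toy illustration of the idea, not a panner from any DAW):

```python
import numpy as np

def pan_low_freq(mono, fs, side="left", ild_db=3.0, itd_us=650):
    """Pan a low-frequency mono source with a small ILD plus a full ITD:
    the far ear gets the same signal a few dB quieter and slightly later."""
    far_gain = 10 ** (-ild_db / 20)              # e.g. 3 dB -> ~0.708
    delay = int(round(fs * itd_us / 1e6))        # ITD in samples
    far = np.concatenate([np.zeros(delay), mono[:len(mono) - delay]]) * far_gain
    return (mono, far) if side == "left" else (far, mono)
```

At 48 kHz a 650 µs ITD is about 31 samples; the far channel is a delayed, slightly attenuated copy rather than a heavily attenuated one, which is the point of the recipe.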

4. What is this model you are talking about? HRTF convolution? Everything science said about crossfeed in the '70s is still valid. It never said there couldn't be anything better in the future. It said crossfeed is an improvement for headphone listening, and that's still true today, at least for me. The fact that even BIGGER improvements now exist doesn't nullify anything.


----------



## bigshot

SO! How about them Dodgers, huh?


----------



## 71 dB

bigshot said:


> Over the past few hundred posts, we've come to know a great deal about you. You've successfully made yourself a singular person in the group, and not in a good way. You're the living embodiment of the movie "God's Little Acre". It's as if you are bound and determined to dig a hole to China.



I don't know how well you can know me based on my posts here, but perhaps by now you have some sort of idea of what kind of person I am. Being a singular person in a group is nothing new for me. I am different. I am weird. I am eccentric. That makes anyone a singular person in a group. I have not seen the movie you are talking about.


----------



## gregorio

71 dB said:


> 3. I don't think the problems of too large ILD at low frequencies is cherry picking or half-truth. 9/10-truth maybe ....



This isn't the "What 71dB thinks" forum or the "What percentage of the truth 71dB judges" forum, it's the Sound Science forum! Ironically, the point #3 to which you're responding, was actually asking WHY you keep repeating the same explanation over and over again when it CANNOT work here in the sound science forum but you didn't answer the question, you just repeated the same explanation YET AGAIN, as indeed is the rest of your post. It's ridiculous, it's like a Monty Python sketch!!

G


----------



## 71 dB

gregorio said:


> This isn't the "What 71dB thinks" forum or the "What percentage of the truth 71dB judges" forum, it's the Sound Science forum! Ironically, the point #3 to which you're responding, was actually asking WHY you keep repeating the same explanation over and over again when it CANNOT work here in the sound science forum but you didn't answer the question, you just repeated the same explanation YET AGAIN, as indeed is the rest of your post. It's ridiculous, it's like a Monty Python sketch!!
> 
> G



What do you mean by my tactics not working here? Working in what way? Changing the opinions of other people? YOUR judgement is that I am cherry picking. This is not "Gregorio thinks 71 dB is cherry picking" forum.


----------



## gregorio

71 dB said:


> YOUR judgement is that I am cherry picking. This is not "Gregorio thinks 71 dB is cherry picking" forum.



Huh? Even by your own judgement you admitted it was "9/10-truth". If you ignore/dismiss the demonstrated facts/science (even if it were only 10% of the facts/science, which it isn't!), it is not MY judgement that you are cherry-picking, that's pretty much the exact definition of cherry-picking! Round and round you go, digging your hole ever deeper, why?

G


----------



## bfreedma

71 dB said:


> What do you mean by my tactics not working here? Working in what way? Changing the opinions of other people? YOUR judgement is that I am cherry picking. This is not "Gregorio thinks 71 dB is cherry picking" forum.




Can't speak for anyone but myself, but Gregorio is not the only one here seeing you cherry picking facts.


----------



## 71 dB

Did the inventors of crossfeed cherry-pick? Did the headphone amp manufacturers who incorporated crossfeed into their amps cherry-pick?

No, because the limits of crossfeed are understood. I don't cherry pick, because I know the limits.


----------



## bfreedma

71 dB said:


> Did the inventors of crossfeed cherry-pick? Did the headphone amp manufacturers who incorporated crossfeed into their amps cherry-pick?
> 
> No, because the limits of crossfeed are understood. I don't cherry pick, because I know the limits.




I'm not commenting on the inventors of crossfeed or the manufacturers who implement it, I'm commenting on your posts.  The ones where you continually (as has been repeatedly pointed out) cherry pick facts to suit your personal opinions.

Nice try at deflection though...


----------



## 71 dB

bfreedma said:


> I'm not commenting on the inventors of crossfeed or the manufacturers who implement it, I'm commenting on your posts.  The ones where you continually (as has been repeatedly pointed out) cherry pick facts to suit your personal opinions.
> 
> Nice try at deflection though...



I didn't have any personal opinions before I discovered crossfeed and tried it myself. I discovered crossfeed because I thought about headphone sound and how large ILD is a problem. I used the science I had learned in university, my understanding of human spatial hearing. You say this was cherry-picking, but to me it was using science. It turned out that my ears agreed with what I had figured out in my head. I thought about crossfeed and what it does to the sound, the good and the bad, and concluded that the good overrides the bad, so that, all in all, crossfeed is an improvement, at least to my ears. I assume the inventors of crossfeed had the same thought process. How else would they have invented crossfeed? The only thing that leads you to crossfeed is the realization of too-large ILD. So, if I am cherry-picking, so were they! They considered ONLY ILD, just like I am. Without "cherry picking" I wouldn't have tried crossfeed at all!


----------



## bfreedma

71 dB said:


> I didn't have any personal opinions before I discovered crossfeed and tried it myself. I discovered crossfeed, because I thought about headphone sound and how large ILD is a problem. I used the science I have learned in university, my understanding of human spatial hearing. You say this was cherry picking, but to me it was using science. Turned out that my ears agree with what I had figured out in my head. I thought about crossfeed and what it does to sound, the good and the bad and concluded that the good overrides the bad, so that in all crossfeed is an improvement, at least to my ears. I assume the inventors of crossfeed had the same thought process. How else would they have invented crossfeed? The only thing that leads you to crossfeed is the realization of too large ILD. So, if I am cherry picking so were they! They considered ONLY ILD just like I am. Without "cherry picking" I wouldn't have tried crossfeed at all!




The difference between what "you think" and the larger set of objective information has been addressed numerous times in this thread.  Sorry, not joining you on your


----------



## bigshot (Nov 7, 2019)

71 dB said:


> I didn't have any personal opinions before I discovered crossfeed and tried it myself.



A blank slate! An unfettered mind, open to any and all possibilities! I salute you for your great discovery... You, Columbus and Neil Armstrong! George Washington and the Cherry Tree, ripe for picking! Let loose with the fireworks!


----------



## 71 dB

bfreedma said:


> The difference between what "you think" and the larger set of objective information has been addressed numerous time in this thread.  Sorry, not joining you on your


I don't know what the problem here is. Are you saying, given "the larger set of objective information" and two options:

1) no crossfeed
2) crossfeed

the larger set of objective information says option 1) is better? How is that not cherry picking and ignoring the fact that headphones give larger ILD than speakers to which recordings are mixed?


----------



## bigshot

I choose 3) speakers in a good room


----------



## bfreedma

71 dB said:


> I don't know what the problem here is. Are you saying given " the larger set of objective information " and two options:
> 
> 1) no crossfeed
> 2) crossfeed
> ...




You prefer crossfeed.  I do not.  Picking one of two solutions, neither of which addresses a multitude of inaccuracies, doesn't make either "better".


----------



## 71 dB

bigshot said:


> A blank slate! A unfettered mind, open to any and all possibilities! I salute you for your great discovery... You, Columbus and Neil Armstrong! George Washington and the Cherry Tree, ripe for picking! Let loose with the fireworks!



Yes, I had to _discover_ crossfeed because it isn't advertised anywhere. I had thought headphones were totally trouble-free until one day I realized the problem of excessive spatiality and started to study headphone sound. Crossfeed wasn't the only thing I discovered. I also learned that headphones often have such small mechanical resistance that, in order to have adequate damping, the headphone amp needs a low output impedance. To my horror I realized headphones are not even close to trouble-free. In fact, headphone sound is a mess! Too high an output impedance here, not enough power there, and recordings mixed for speakers… it's a mess, and I built myself a headphone adapter to deal with it. It helped a lot, so much so that a speaker guy became a headphone guy. I had listened to subpar headphone sound thinking that's just how headphones sound, not realizing my output impedance was ridiculously high (no adequate damping + huge FR errors) and that I suffered from excessive spatiality. When I got rid of those problems I found out that headphone sound can be awesome!
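For what it's worth, the FR error from a high output impedance follows from a simple resistive voltage divider between the amp's output impedance and the headphone's frequency-dependent impedance; the figures below are made up purely for illustration:

```python
import math

def level_error_db(z_load_ohm, z_out_ohm):
    """Level at the headphone terminals relative to an ideal 0-ohm source,
    from the resistive voltage-divider model V_load = V_src * Z_l/(Z_l+Z_o)."""
    return 20 * math.log10(z_load_ohm / (z_load_ohm + z_out_ohm))

# Hypothetical 32-ohm headphone whose impedance rises to 60 ohms near its
# bass resonance: the FR ripple across that peak depends on the amp's Z_out.
for z_out in (0.5, 120.0):
    ripple = level_error_db(60, z_out) - level_error_db(32, z_out)
    print(f"Z_out = {z_out} ohm -> ripple across the impedance peak: {ripple:.2f} dB")
```

With a near-zero output impedance the ripple is a few hundredths of a dB; with 120 ohms it is around 4 dB, on top of the reduced electrical damping.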


----------



## 71 dB

bfreedma said:


> You prefer crossfeed.  I do not.  Picking one of two solutions, neither of which addresses a multitude of inaccuracies, doesn't make either "better".



Those are the options I have, and they are also the options implied in the name of this topic. Both my understanding of spatial hearing and my ears say crossfeed is often the better of the two, given there is excessive spatiality in the recording (there almost always is, because recordings are mixed for speakers). Some recordings do not have excessive spatiality, and I listen to those without crossfeed. The level of crossfeed needed varies from recording to recording, and for some recordings it happens to be "minus infinity dB", meaning no crossfeed.


----------



## 71 dB

bigshot said:


> I choose 3) speakers in a good room



Yes you do, but this topic is "to crossfeed or not to crossfeed?" Your option 3) is outside this topic and so are HRTF convolutions etc. I feel I am the only one here honouring the name of this thread and talking about choosing between the two options of this topic.


----------



## bigshot

OK. I'll get into the spirit of your pointless ranting...

Crossfeed sucks. It's for old ladies and wimps.


----------



## castleofargh

71 dB said:


> Being a singular person in a group is nothing new for me. I am different. I am weird. I am excentric. That makes anyone a singular person in the group.


we're all singular people. you think gregorio is an example of typical dude? or bigshot? or me and my quasi-modo costume? almost everybody here is noticeably singular in some ways. that's a very bad excuse and it really has nothing to do with your outstanding tunnel vision on this topic.



71 dB said:


> I don't know what the problem here is. Are you saying given " the larger set of objective information " and two options:
> 
> 1) no crossfeed
> 2) crossfeed
> ...


back to my previous analogy with masturbation as a sex simulation:

1) my hand
2) my hand while holding a bottle of water in the other
the larger set of objective information says option 1) is better? How is that not cherry picking and ignoring the fact that people are mostly made of water?



I believe those arguments are fairly equivalent. almost anybody can see the flaws in the reasoning and will pick 1) as the preferred solution under most circumstances(or even better, real sex and actual speakers). but we can always stubbornly hang on to our pseudo-scientific rationale and pretend that when something is beneficial in complete isolation, then it's obviously going to be beneficial in the actual system. like how a larger tank in a car lets us drive a longer distance, so large tanks are always an improvement. why don't we have cars with much much larger tanks? 

on this you clearly confuse science with cherry picking the models and the facts. the guys who keep claiming that 24 or even 32bit files are so much better than 16, are following the same reasoning you do. yes 32bit allows for higher resolution, that one specific and very objective part is totally true. does that make 32bit files an obvious improvement?   I'd sooner agree that 32bit is an improvement, than agree to conclude that crossfeed is an obvious improvement over default headphone playback. because while irrelevant, at least with 32bit there are no unpredictable consequences on the listener's subjective experience. we can actually draw conclusions because we know how that model works. for crossfeed, we know jack crap about the consequences of having a partial, inaccurate (by how much? we don't know, it's listener dependent) change applied to the sound. well what we know is that only a minority of people happen to enjoy using that on a regular basis. so we do know at least that it's not the improvement you make it to be. you deny reality in favor of your toy model of acoustics and your own subjective impressions. but want to play pretend that you're interested in facts and objective reality. that's denial.

anyway you're going to find a way to misinterpret that and stick to your ILD crap until the day you die, so I don't know why I keep replying to you on this topic.


----------



## ironmine (Nov 8, 2019)

71 dB said:


> Yes you do, but this topic is "to crossfeed or not to crossfeed?" Your option 3) is outside this topic and so are HRTF convolutions etc. I feel I am the only one here honouring the name of this thread and talking about choosing between the two options of this topic.



Hi 71 dB,

Why do you feel that HRTF convolutions are outside the topic of crossfeed?  HRTF is an advanced form of crossfeed.

https://en.wikipedia.org/wiki/Crossfeed#Digital_(DSP) :

_A digital, or DSP-type, crossfeed is typically more sophisticated, mixing an amount of signal from one channel to the other, delaying the signal to mimic interaural time differences and *applying other characteristics of head-related transfer functions (HRTFs)* to mimic the changes between the left and right ears. Some digital crossfeeds include controls for varying the realism of the crossfeed implementation and which *HRTF characteristics are used*._

To this end, I have a practical question. I downloaded impulses from http://recherche.ircam.fr/equipes/salles/listen/download.html

I also installed the true stereo convolver: https://www.liquidsonics.com/software/reverberate-2/
I switched this convolver to the true stereo mode, loaded 2 (raw) impulses, and finally everything sounds the way it should! (the sound comes from 30 degrees and 330 degrees, if you assume that 0 is North).

My question is: in order to use these HRTF true stereo convolutions as a crossfeed, do I need to process the sound in any additional way? Does the use of these impulses take care of the ITD and ILD differences, so that no further equalizing in the form of low-pass filtering, treble boosting or adding a delay is required? Please advise.


----------



## castleofargh

ironmine said:


> Hi 71 dB,
> 
> Why do you feel that HRTF convolutions are outside the topic of crossfeed?  HRTF is an advanced form of crossfeed.
> 
> ...


the impulses are already delayed as they should be relative to one another (otherwise you'd have a bunch of useless data regarding delays, and not the HRTF).
the compensated version accounts for some stuff they know about the rig, so it's probably a better bet. but clearly the best option is just to try and pick what works for you, as you're just looking for what subjectively comes close, and not actually using your very own measurements. so objectivity can only go so far in that approach.
you're still left with the headphone's FR. but as you've used it to determine which HRTF was closest to your hearing for frontal image, maybe we can consider that you're already good that way. it's kind of up to you to maybe EQ toward a more theoretical neutral, then retry all the impulses to find the one that works best under those conditions? might improve something, or might just make you waste a lot of effort to end up with the same impulses. hard to tell.
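for what it's worth, "true stereo" convolution is conceptually just four convolutions, each virtual speaker reaching both ears. a rough sketch (the ir_* naming is mine, not LiquidSonics'; the delays and level differences live inside the impulses themselves):

```python
import numpy as np

def true_stereo_convolve(src_l, src_r, ir_ll, ir_lr, ir_rl, ir_rr):
    # ir_ll: left speaker -> left ear,  ir_lr: left speaker -> right ear,
    # ir_rl: right speaker -> left ear, ir_rr: right speaker -> right ear.
    # all four impulses must share the same length so the sums align.
    out_l = np.convolve(src_l, ir_ll) + np.convolve(src_r, ir_rl)
    out_r = np.convolve(src_l, ir_lr) + np.convolve(src_r, ir_rr)
    return out_l, out_r
```

since the ITD/ILD are baked into each impulse pair, no extra delay or lowpass stage should be needed on top.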

so did you find out what was your problem the other day with the convolution schemes you had tried?


----------



## gregorio

71 dB said:


> [1] Did the inventors of crossfeed cherry pick? Did headphone amp manufacturers who incorporated crossfeed into their amps cherry pick?
> [2] No, because the limits of crossfeed are understood. [2a] I don't cherry pick, because I know the limits.



1. I have no idea, possibly, but if they did, it was certainly nowhere near the extent to which you cherry-pick! In the 1950s, when crossfeed was invented, stereo as a consumer product had only been released a few years earlier, its adoption was slow and there was relatively little reliable evidence/science regarding the perception of stereophony. Much/Most of the science/evidence was developed more than a decade later and continues to this day, so how could the inventors of crossfeed deliberately omit (and thereby cherry-pick) science/evidence which didn't exist at the time? If, as you state, you have studied crossfeed, how is it possible you don't know even the basic timeline of its history? Either you know far less than you think you do or your question is a ridiculous attempt to support your position, which is it?
2. Your statement is false! How any particular listener perceives crossfeed with commercial music recordings is NOT well/fully understood. Therefore ... 
2a. This statement is completely backwards! You "know the limits" ONLY because you cherry-pick! Conversely, if you didn't cherry-pick then you could NOT "know the limits".



71 dB said:


> [1] I discovered crossfeed, because I thought about headphone sound and how large ILD is a problem.
> [2] I used the science I have learned in university, my understanding of human spatial hearing.
> [2a] You say this was cherry picking, but to me it was using science.



1. Clearly you didn't discover crossfeed, you simply learned that it had already been discovered about 60 years earlier.
2. You've ALREADY admitted that "the science you learned in university" didn't even mention music production and clearly demonstrated that you have little understanding of either the spatial information contained in commercial music recordings or how it is perceived! And OF COURSE, science is NOT defined by your personal understanding of it.
2a. You ARE using the science of spatial hearing, no one is disputing that. However, you're ONLY using the science of the spatial hearing of real/natural acoustics and even then, ONLY some parts of that science. This is pretty much the textbook definition of cherry-picking!


71 dB said:


> 1) no crossfeed
> 2) crossfeed
> the larger set of objective information says option 1) is better? How is that not cherry picking and ignoring the fact that headphones give larger ILD than speakers to which recordings are mixed?


Why are you asking this question, how can you not already know, if as you state you have studied this? The "larger set of objective information" demonstrates that neither is intrinsically better, that it depends on individual perception, although it indicates that crossfeed is not better/preferred by most. Also, "the larger set of objective information" does NOT ignore the fact that headphones give larger ILD than speakers, in fact quite the opposite! If it wasn't for the reduced ILD with crossfeed then "the larger set of objective information" would (probably) demonstrate that not using crossfeed IS intrinsically better and indicate that even fewer people prefer it! So CLEARLY your question and statement are nonsense!


71 dB said:


> [1] Both my understanding of spatial hearing and my ears say crossfeed is often better out of these two options,
> [2] given there is excessive spatiality in the recording (almost always is because recordings are mixed for speakers).


1. Science is NOT defined by YOUR understanding of it or by what your "ears say" and this isn't the "71dB's understanding of science and what his ears say" forum!

2. It is NOT "given" that there is excessive spatiality in the recording (when listening with headphones), you've simply made that up and it's contrary to the actual facts! Completely opposite to your FALSE assertion, recordings do NOT have too much spatiality, they have too little because if they are designed for speaker reproduction there will be the additional spatial information created by the listening environment. This is the actual fact and is simple common sense but you ignore both and repeatedly state the exact opposite of the actual facts, despite it being refuted almost as many times as you repeat it. Your assertion is not just false but clearly ludicrous, some of those who try to compensate for headphone listening do so by adding MORE spatial information (reverb) not by trying to remove/reduce what's on the recording!

Round and round you go. Hey @bfreedma, 71dB has discovered something, a perpetual motion carousel. The power of "agenda" is limitless, even Yoda would be overwhelmed! 

G


----------



## ironmine

castleofargh said:


> the impulses are already delayed as they should relatively to one another(otherwise you'd have a bunch of useless data regarding delays, and not the HRTF).
> the compensated version accounts for some stuff they know about the rig, so it's probably a better bet.



As for the compensated versions, the description says: "the propagation delay is removed". Is it a different kind of delay (from the speaker to the listener, perhaps?) that was removed?



castleofargh said:


> you're still left with the headphone's FR. but as you've used it to determine which HRTF was closest to your hearing for frontal image, maybe we can consider that you're already good that way. kind of up to you to maybe EQ for a more theoretical neutral, then retry all the impulses to find the one that work best under those condition?



Yes, you are right, I never thought about that. I need to first apply to these demo sounds the convolution that adjusts the frequency response of my earphones that will be used for music playback. That's how I gotta choose the head that suits me most.



castleofargh said:


> so did you find out what was your problem the other day with the convolution schemes you had tried?



No, I didn't!  I got a reply from the guy who runs http://recherche.ircam.fr website saying that everything is correct on his website, North is 0 and the angle begins counting as we move from North to the left (90), back (180) and right (270). So the error was on my side. Maybe the VST host does not like running four instances of the same plugin at the same time. Maybe it caches them in some weird way, I don't know. But LiquidSonics works fine, I will use it for now.


----------



## 71 dB (Nov 8, 2019)

ironmine said:


> Hi 71 dB,
> 
> Why do you feel that HRTF convolutions are outside the topic of crossfeed?  HRTF is an advanced form of crossfeed.
> 
> ...



Yes, HRTF is an advanced form of crossfeed. At this point I need a break from this thread, sorry. So tired of this. I just don't care anymore. Maybe I'll leave, enjoy my life with crossfeed and restore sanity to my life. Big mistake coming here...


----------



## 71 dB

Speakers - a lot of spatial information because of acoustic crossfeed, ER and reverberation, BUT small ILD at low freq. => natural spatiality
Headphones - less spatial information, BUT large ILD at low freq. => excessive spatiality.

That's HOW I define this and I DON'T CARE HOW YOU DEFINE IT!!!


----------



## 71 dB

How can you say anything "beyond" your understanding? Gregorio can't understand EVERYTHING, he is not God, is he? How can you do anything without cherry picking? How can we improve anything? We can never…


----------



## 71 dB

What is improving headphone sound without cherry picking? WHAT IS IT???
I improve headphone sound for myself: it's crossfeed. Cherry picking or not, it is a huge improvement for me!!!!!!


----------



## 71 dB

I like crossfeed.
I stopped caring what you think because I don't like you!
Keep this board to yourself!


----------



## 71 dB

ironmine said:


> As for the compensated versions, the description says:"the propagation delay is removed". Is it a different kind of delay (from the speaker to the listener perhaps?) that was removed?



The delay from source to ears is removed, but the SAME amount is removed left and right, so ITD remains.


----------



## bigshot (Nov 8, 2019)

Could you PLEASE group your replies into a single post. Six contextless two-sentence dismissals in a row are too much. Better yet, if you have nothing to say, don't reply at all. If you do that, I promise we will. When you start posting blather like "I don't care what you think because I don't like you," you project a very poor impression of yourself. It's pretty clear to everyone but you that we aren't the problem, you are.


----------



## castleofargh

ironmine said:


> As for the compensated versions, the description says:"the propagation delay is removed". Is it a different kind of delay (from the speaker to the listener perhaps?) that was removed?


that's how I understand it: removing the unnecessary silence at the start of the wave files, so you don't create as massive a lag when applying the convolution (probably not an issue for music-only playback).
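in other words, something like this sketch: find the earliest onset across the two ears and cut that same number of samples from both, so the latency drops but the interaural delay survives (the onset threshold here is a made-up value):

```python
import numpy as np

def trim_common_delay(ir_left, ir_right, eps=1e-4):
    """Remove the shared propagation delay from a binaural IR pair.
    The SAME number of leading samples is cut from both ears, so the
    ITD between them is preserved while convolution latency drops."""
    def onset(x):
        idx = np.flatnonzero(np.abs(x) > eps)
        return idx[0] if idx.size else 0
    cut = min(onset(ir_left), onset(ir_right))  # earliest ear's onset
    return ir_left[cut:], ir_right[cut:]
```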


----------



## ironmine

I spent the whole day today trying to apply the selected HRTF impulses in True Stereo Convolution mode (LiquidSonics Reverberate) to see whether I can make a good crossfeed out of this idea.

The "head" was chosen using the Demo Sounds that were processed first with the HR correction impulse for my earphones. (I did not correct the HR 100%, the dry/wet was set to 20% which is my preferred value. Anything above 20% makes the earphones sound too thin).

Different angles were tried.  30 & 330. 45 & 315.

There is a certain crossfeed effect that can be clearly heard, but it's not enough. Some excessive spatiality still remains and, to my ears, even a little bit of it is too much and distracting.  

I am coming to the conclusion that this kind of method still requires additional processing. Maybe, it still needs Low Pass Filtering to a certain degree.

But, I guess, those folks who like light crossfeed effects may actually find the result I got quite pleasing. It preserves all the details in music and remains very transparent. But, in my opinion, there's still too much stereo remaining and the sound sources are too close to the listener.

During testing, I compared the audio files processed with the method above against my standard 112dB Redline Monitor. I preferred the latter.


----------



## castleofargh (Nov 9, 2019)

ironmine said:


> I spent the whole day today trying to apply the selected HRTF impulses in True Stereo Convolution mode (LiquidSonics Reverberate) to see whether I can make a good crossfeed out of this idea.
> 
> The "head" was chosen using the Demo Sounds that were processed first with the HR correction impulse for my earphones. (I did not correct the HR 100%, the dry/wet was set to 20% which is my preferred value. Anything above 20% makes the earphones sound too thin).
> 
> ...


that approach still misses or approximates several variables, so it's hard to know what subjective result someone is going to get. a few possible ideas:
- those impulses are still not your own; it's pretty much a certainty that you could get better results with in-ear mics.
- they're impulses from an anechoic chamber. as good as that is to "compose" our own sound field (or whatever that's called), you're missing a room, even a very clean one like in a recording studio.
- is it possible that in reverberate (I only know the free one, so no true stereo option, but the rest seems similar), you somehow haven't set things to get the full compensated signal (no dry/wet stuff at all)? for example on the 30/330 impulses, if you play some track with only left sound, do you subjectively get 30°? if not, we have 2 options: the impulses aren't that good for your own head, or the signal is not processed the way it should be for some reason. personally I never felt 2 m distance for anything, and as I've explained, the moment I open my eyes or move my head a little, anything on headphones collapses back near my skull. but I do get the angles right when I switch the processing ON and OFF (then after a while my brain starts to compensate for what it thinks it should be and I'm back to more or less headphone panning).


----------



## gregorio

71 dB said:


> [1] Speakers - lot of spatial information because of acoustic crossfeed, ER and reverberation, BUT small ILD at low freq => natural spatiality
> [1a] Headphones - less spatial information, BUT large ILD at low freq. => excessive spatiality.
> [2] That's HOW I define this and I DON'T CARE HOW YOU DEFINE!!!



1. Again, NONSENSE! When listening to speakers, there is indeed a lot of "natural" spatial information added by the listening room acoustics but what are you adding that spatial information to? OBVIOUSLY, you're adding it to the spatial information already contained in the recording, which is far from natural. The natural spatial information of the listening environment does NOT magically turn the unnatural spatial information on the recording into natural spatiality and of course, stereo is an illusion anyway, that does NOT exist in nature! And if all this isn't more than enough by itself, you've *ALREADY ADMITTED* that it does NOT equal "natural spatiality"!! So why are you just repeating your original assertion yet again, contradicting pretty much everything, INCLUDING EVEN YOURSELF! 
1a. Yes, there is LESS spatial information with headphones because you're missing all the intended spatial information added by the listening environment, all manner of time delayed, freq dependent and directional reflections. Out of ALL this additional spatial information, the ONLY thing we've got more of with headphones is ILD, everything else is not just less but missing entirely. How does LESS spatial information = more/excessive spatiality?

The ONLY way your FALSE statements can be made true, is if you ONLY consider ILD and ignore/discount all the other spatial information (on both the recording itself and what would be produced by the listening environment with speakers). However, this presents several OBVIOUS problems and self-contradictions, for example: A. You falsely state you are not ignoring anything. B. You state you are correct because you studied an acoustics course at university. What was this course? Did they teach you all about acoustics and then tell you to ignore it all (except ILD)? C. You couldn't find a more obvious case of cherry picking! and D. Ignoring/discounting large swathes of science in order to validate your position is pretty much the opposite of science!
So, which is it? EITHER you're cherry-picking and ignoring pretty much all spatial information except ILD in order to make your assertions true (in which case they're still false, of course!), OR you're not ignoring anything and your assertions are just false to start with?

2. Exactly, that's been your problem all along! You "DON'T CARE" about how anyone else (including science itself) defines it, you ONLY care about how YOU define it (according to your personal perception/preferences) BUT, guess what forum this is? Is this the "How 71dB defines it" forum or is it the sound SCIENCE forum?



71 dB said:


> What is improving headphone sound without cherry pick? WHAT IS IT??? I improve headphone sound for myself: It's crossfeed. Cherry picking or not, It is a huge improvement for me!!!!!!


NO ONE is saying you can't cherry-pick to make an "improvement for you!!!!!" HOW MANY TIMES?????
What you CAN'T do here in the sound science forum is try to turn what is an "improvement to you" into assertions of objective fact based on cherry-picked bits of science because A. That's NOT science and B. Your assertions of objective fact are FALSE! How many times?

Round and round you go, each time digging the hole deeper, making yourself look more and more foolish/unhinged. WHY? Do you think that making yourself appear more and more foolish/unhinged will help you to become a messiah?

G


----------



## ironmine

castleofargh said:


> that approach still misses or approximates several variables, so it's hard to know what subjective result someone is going to get. a few possible ideas:
> - those impulses are still not your own, it's pretty much a certainty that you could get better result with in ear mics.



Getting in-ear mics is costly. I would invest this money if a successful result were guaranteed. 



castleofargh said:


> - they're impulses from anechoic chamber. as good as that is to "compose" our own sound field(or whatever that's called), you're missing a room, even a very clean one like in a recording studio.
> -  is it possible that in reverberate(I only know the free one so no true stereo option, but the rest seems similar), you somehow don't have set things to get the full compensated signal(no dry/wet stuff at all)?  for example on the 30/330 impulses, if you play some track with only left sound, do you get 30° subjectively? if not, we have 2 options, the impulses aren't that good for your own head. or the signal is not processed the way it should for some reason. personally I never felt 2m distance for anything, and as I've explained, the moment I open my eyes or move my head a little, anything on headphones collapse back near my skull. but I do get the angles right when I switch the processing ON and OFF(then after a while my brain starts to compensate for what it thinks it should be and I'm back to more or less headphone panning).



Ok, I will still play with LiquidSonics for a while. Maybe I have indeed missed something.

Do you know how to use Fog Convolver? I switched it to True Stereo mode, but for the life of me I don't know how to load two stereo impulses into it. There is no "load" button. Drag and drop does not work either. What an idiot designed its interface...


----------



## 71 dB (Nov 9, 2019)

gregorio said:


> 1. Again, NONSENSE! When listening to speakers, there is indeed a lot of "natural" spatial information added by the listening room acoustics but what are you adding that spatial information to? OBVIOUSLY, you're adding it to the spatial information already contained in the recording, which is far from natural. The natural spatial information of the listening environment does NOT magically turn the unnatural spatial information on the recording into natural spatiality and of course, stereo is an illusion anyway, that does NOT exist in nature! And if all this isn't more than enough by itself, you've *ALREADY ADMITTED* that it does NOT equal "natural spatiality"!! So why are you just repeating your original assertion yet again, contradicting pretty much everything, INCLUDING EVEN YOURSELF!
> 1a. Yes, there is LESS spatial information with headphones because you're missing all the intended spatial information added by the listening environment, all manner of time delayed, freq dependent and directional reflections. Out of ALL this additional spatial information, the ONLY thing we've got more of with headphones is ILD, everything else is not just less but missing entirely. How does LESS spatial information = more/excessive spatiality?
> 
> The ONLY way your FALSE statements can be made true, is if you ONLY consider ILD and ignore/discount all the other spatial information (on both the recording itself and what would be produced by the listening environment with speakers). However, this presents several OBVIOUS problems and self-contradictions, for example: A. You falsely state you are not ignoring anything. B. You state you are correct because you studied an acoustics course at university. What was this course? Did they teach you all about acoustics and then tell you to ignore it all (except ILD)? C. You couldn't find a more obvious case of cherry picking! and D. Ignoring/discounting large swathes of science in order to validate your position is pretty much the opposite of science!
> ...



1. The music played on speakers excites the room the same way your fingers excite the strings of a guitar. Stereo sound is an illusion. You have two mono sound sources simultaneously playing signals with a strong correlation, and hopefully these correlations are something that fools our spatial hearing in the ways intended by the sound engineers. The room adds spatial information that is natural room acoustics to our hearing. It can help in fooling our hearing into thinking the spatial information of the recording is natural too, but it also means we have both natural listening room acoustics and whatever spatiality the recording has, at the same time. Our spatial hearing must make sense of this. Especially, the room regulates the spatial parameters to levels that are "expected by our hearing". If you play a recording with sound only on the left channel, the ILD of the recording is "infinite decibels", but your ears hear a MUCH smaller ILD, because it has been regulated by the room. At lower frequencies your right ear hears almost as loud a sound as your left ear. That's natural to your ears, because that's what happens in life with sounds around you. You are constantly blaming me for not formulating things precisely, but these are difficult things to formulate and I am doing my best here. You don't help me at all with your attitude of turning everything against me. Concepts like "natural spatiality" are not defined precisely to my knowledge, and if they are, please GIVE me the definitions so I can use them precisely. This kind of terminology is a bit fuzzy and requires the other person to have a will to understand what one says. If you actually tried to understand what I mean, you'd see I don't contradict myself, at least not much. Why would I?

Does natural sound mean that the spatial information is natural, or does it mean the spatial information only _seems_ natural? How fake can natural spatiality be? I use the term "natural spatiality" flexibly depending on the context. In some contexts fakeness is allowed, and the context tells how much fakeness is allowed. If you demand that I use YOUR definitions of the term, then give me your definitions!

Using crossfeed means having some sort of regulation of the spatial parameters so that we don't feed infinite/too-large ILD into our ears. I totally agree crossfeed isn't perfect, but it is better than nothing, to my ears much better than nothing, and it is simple, so you can implement it cheaply everywhere, unlike those HRTF methods which are too complex and expensive to implement almost anywhere!

1a. It's about how the spatial information is encoded. Our (my) spatial hearing can take tons of spatial information if the parameters are within a "natural" range. Rich spatiality (speakers in a room) is different from excessive spatiality (headphones without crossfeed). A lack of spatial information such as ER and reverberation can't be substituted for by making the existing spatial information bigger, because they are different parameters. ILD is not reverberation. You don't have reverberation no matter what, but you can crossfeed the ILD to be similar to what you have in the rich spatiality.

Headphones ignore everything. I ignore one thing less, because I don't ignore ILD. I don't understand why this ignoring thing is so problematic to you. Sorry, but my spatial hearing just doesn't demand a total 100% fix of every aspect to hear an improvement. My spatial hearing says excessive ILD is the largest problem of headphone sound by far, and fixing only that takes things far. What I mean by not ignoring anything is that I am aware of everything, but I know that in this context I can ignore all other things except ILD and STILL have an improvement. Of course I know headphones don't have ER, reverb etc. That's just how it is, but I CAN get rid of excessive ILD!

I did 11 courses of acoustics + master's thesis.


----------






## gregorio

71 dB said:


> [1] I did 11 courses of acoustics + master's thesis.
> [2] You don't have reverberation no matter what, but you can crossfeed the ILD similar to what you have in the rich spatiality.



1. And which one of them taught you to ignore all the other aspects of acoustics and consider only ILD?
2. No you can't because you don't have the rest of the "rich spatiality"!

So what's your plan now, simply invent a new made-up term ("rich-spatiality") and then repeat everything you've already repeated 100 times but using this new term? And this will make you a messiah will it or will it just make you appear even more unhinged/foolish, what do you think?

G


----------



## SoundAndMotion

@71 dB , I will bet you 100€ that your life (and every other person's reading this thread - not part of the bet) will be better if you put gregorio and bigshot on your ignore list, and never, ever respond to them. You are correct that gregorio knows a lot about some things (but not all), but he is not helping you at all. Keep on going exchanging posts with @ironmine and others, and posting in other threads. castle's posts may help you later, but he's lost his patience, so wait until his posts are constructive again.

I'm serious! 100€ If you don't respond to or even ever read anything from them until my birthday (Nov. 28), but you do continue with @ironmine and others, and then you tell me that you are not happier, I'll send you the money. If you are happier (as I predict), I'll send you the link to a charity for you to donate.


----------



## bfreedma

SoundAndMotion said:


> @71 dB , I will bet you 100€ that your life (and every other person's reading this thread - not part of the bet) will be better if you put gregorio and bigshot on your ignore list, and never, ever respond to them. You are correct that gregorio knows a lot about some things (but not all), but he is not helping you at all. Keep on going exchanging posts with @ironmine and others, and posting in other threads. castle's posts may help you later, but he's lost his patience, so wait until his posts are constructive again.
> 
> I'm serious! 100€ If you don't respond to or even ever read anything from them until my birthday (Nov. 28), but you do continue with @ironmine and others, and then you tell me that you are not happier, I'll send you the money. If you are happier (as I predict), I'll send you the link to a charity for you to donate.




Happier - Likely
Less informed - Also likely


----------



## SoundAndMotion

bfreedma said:


> Happier - Likely
> Less informed - Also likely


Happier - we agree.
Less informed - I couldn't disagree more, and I'd invite you to give a recent example of that. @71 dB and gregorio are simply going in circles. How would @71 dB staying in the circle help inform him or anyone of anything? If he breaks the circle, he can interact with others (e.g. @ironmine ) for mutual benefit. Not possible for 71 with gregorio in this thread.


----------



## bfreedma

SoundAndMotion said:


> Happier - we agree.
> Less informed - I couldn't disagree more, and I'd invite you to give a recent example of that. @71 dB and gregorio are simply going in circles. How would @71 dB staying in the circle help inform him or anyone of anything? If he breaks the circle, he can interact with others (e.g. @ironmine ) for mutual benefit. Not possible for 71 with gregorio in this thread.




Breaking the circle is entirely up to the poster in question.  Ignoring factual data and substituting personal preference can't lead to being _more informed_.

Are you suggesting that only taking information from those that agree with one's perspective and categorically ignoring facts that don't align with personal preference leads to being better informed?  Can't agree with you there.


----------



## Hifiearspeakers

bfreedma said:


> Breaking the circle is entirely up to the poster in question.  Ignoring factual data and substituting personal preference can't lead to being _more informed_.
> 
> Are you suggesting that only taking information from those that agree with one's perspective and categorically ignoring facts that don't align with personal preference leads to being better informed?  Can't agree with you there.



The onus isn’t only on the poster. It’s on anyone who is an adult. Any adult is capable of breaking the cycle. 
That said, the only reason this thread is even alive is because of 71db. He’s the only one who is actually talking about crossfeed, even if he is just repeating himself.


----------



## 71 dB

gregorio said:


> 1. And which one of them taught you to ignore all the other aspects of acoustics and consider only ILD?
> 2. No you can't because you don't have the rest of the "rich spatiality"!
> 
> So what's your plan now, simply invent a new made-up term ("rich-spatiality") and then repeat everything you've already repeated 100 times but using this new term? And this will make you a messiah will it or will it just make you appear even more unhinged/foolish, what do you think?
> ...


1. None. Crossfeed is an ILD scaler. It can't, for example, add reverb, no matter how much I want not to ignore it. Why is ignoring all the other aspects of acoustics ONLY an issue when I don't ignore ILD? Headphones ignore EVEN that; I don't, so stop saying I ignore. It's silly.
2. How important having the rest of the "rich spatiality" is depends on our perception. I don't know about you, but MY perception allows fixing ILD alone to count as an improvement. I suppose the whole idea of stereophony collapses if you need everything, at least with headphones.

I needed to coin the term rich-spatiality in order to discuss with you, but even that is difficult with you.


----------



## 71 dB

SoundAndMotion said:


> @71 dB , I will bet you 100€ that your life (and every other person's reading this thread - not part of the bet) will be better if you put gregorio and bigshot on your ignore list, and never, ever respond to them. You are correct that gregorio knows a lot about some things (but not all), but he is not helping you at all. Keep on going exchanging posts with @ironmine and others, and posting in other threads. castle's posts may help you later, but he's lost his patience, so wait until his posts are constructive again.
> 
> I'm serious! 100€ If you don't respond to or even ever read anything from them until my birthday (Nov. 28), but you do continue with @ironmine and others, and then you tell me that you are not happier, I'll send you the money. If you are happier (as I predict), I'll send you the link to a charity for you to donate.



Now that you encourage it, I have considered it and will do that, but I won't bet 100 €, sorry. @ironmine has been very nice, as you say, much nicer than I have been, because I am full of anger and frustration over all this feuding. I lose nothing putting bigshot on ignore, because all he says 10 times a day is "speakers are the way to go" - on a headphone forum!


----------



## SoundAndMotion

bfreedma said:


> Are you suggesting that only taking information from those that agree with one's perspective and categorically ignoring facts that don't align with personal preference leads to being better informed?  Can't agree with you there.


Since that is not at all what I am suggesting, you can't really agree or disagree with me, only with your own words that you attempted to put in my mouth.

71 dB has some useful knowledge, some opinions and some difficulty expressing what he wants to communicate. The exact same can be said of gregorio, although his communication problems are entirely different from 71's.


----------



## 71 dB

SoundAndMotion said:


> 71 dB has some useful knowledge, some opinions and some difficulty expressing what he wants to communicate. The exact same can be said of gregorio, although his communication problems are entirely different from 71's.



Interesting thoughts, and indeed there have been difficulties in communication. 
I have now added gregorio and bigshot to ignore. Let's see if that makes me a happier, nicer and more balanced person.


----------



## SoundAndMotion

71 dB said:


> Now that you encourage it, I have considered it and will do that, but I won't bet 100 €, sorry. @ironmine has been very nice, as you say, much nicer than I have been, because I am full of anger and frustration over all this feuding. I lose nothing putting bigshot on ignore, because all he says 10 times a day is "speakers are the way to go" - on a headphone forum!


No problem. Then we won't bet. But both @bfreedma and I believe it's likely you'll be happier if you ignore both. We disagree about whether you can gain any knowledge from gregorio at this point, now that your communication with him is poisoned. Yes, he has useful knowledge, but none of it benefits you right now. His communication method is not compatible with you learning from him, and everything he can offer you can be found elsewhere... with less damage to you.


----------



## 71 dB

Hifiearspeakers said:


> That said, the only reason this thread is even alive is because of 71db. He’s the only one who is actually talking about crossfeed, even if he is just repeating himself.



Thanks for giving me this sort of credit. It helps me feel better!


----------



## 71 dB

SoundAndMotion said:


> No problem. Then we won't bet. But both @bfreedma and I believe it's likely you'll be happier if you ignore both. We disagree about whether you can gain any knowledge from gregorio at this point, now that your communication with him is poisoned. Yes, he has useful knowledge, but none of it benefits you right now. His communication method is not compatible with you learning from him, and everything he can offer you can be found elsewhere... with less damage to you.



Absolutely agree, thanks for encouraging me to take this step.
I feel totally the same, I can learn _elsewhere_ in much nicer way so why have a teacher who makes you only angry?


----------



## bigshot (Nov 9, 2019)

It was a good try, Sound and Motion. He lasted 25 minutes before he replied to Gregorio. Maybe he isn't listening to you either. I think he only hears what he wants to hear.



SoundAndMotion said:


> @71 dB , I will bet you 100€ that your life (and every other person's reading this thread - not part of the bet) will be better if you put gregorio and bigshot on your ignore list, and never, ever respond to them.



I agree 100%. I also would take it one step further... ignore this topic entirely for a while and come back only if you are willing to engage in conversation instead of grandstanding and buffaloing.



bfreedma said:


> Ignoring factual data and substituting personal preference can't lead to being _more informed_.



If he simply admitted that it was a preference, we wouldn't be having this problem. Anyone will allow him to have a personal preference. The problem is when he justifies his preference by calling it a scientific fact. Sound processing is 100% subjective. People interpret the sound their ears hear differently. I'm not interested in what someone else's interpretation sounds like. I put the salt and pepper on my food to my own taste. I only care that the baseline is correct to start with. After that, he's free to make whatever modifications his little heart desires. But those changes don't apply to me and what I hear.

All he can really say is, "Try it and see if you like it." It isn't making for more realistic spatiality because it doesn't involve real space. I've heard people argue that binaural is more "real". It's only more real if you happen to like reflections bouncing back at you like in the men's washroom at the train station. There is no front and back and no real distance, only reflections slathered all over like mayonnaise. And mixing the channels together doesn't create front and back nor distance either. There are DSPs I like. But they aren't making the sound more accurate, just making it sound better *to me*. I don't tell people that they are getting less true sound if they don't use them. I just say "Try it and see if you like it."



Hifiearspeakers said:


> That said, the only reason this thread is even alive is because of 71db. He’s the only one who is actually talking about crossfeed, even if he is just repeating himself.



You call this thread alive? He isn't just repeating himself, he's talking to himself. Normally threads are considered "alive" when there is interaction going on and information being shared.

When I went to college, my best professors were the ones that kicked my ass the most. I learned more from them than any other because they challenged me. I wanted to be challenged and I wanted to be forced to think things through. I was paying the school a tuition to do that. Luckily, I went to a good school where they did that. It taught me how to find people worth learning from, and how to break down complex problems so I could think them through.


----------



## Hifiearspeakers

bigshot said:


> It was a good try Sound and Motion. He did last 25 minutes before he replied to Gregorio. Maybe he isn't listening to you either. I think he only hears what he wants to hear.
> 
> 
> 
> ...



Alive in the sense that anyone is posting anything here at all. So you don't think this thread is going to be completely radio silent if 71db were to leave altogether? I believe, if that happens, the only thing left of this thread will be cobwebs.


----------



## 71 dB

Hifiearspeakers said:


> Alive in the sense that anyone is posting anything here at all.  So you don’t think this thread is going to be completely radio silent if 71db’s were to leave altogether? I believe, if that happens, then the only thing that will be left of this thread is cobwebs.



I'm not planning to leave, but I did put people on ignore, meaning all that feuding is history, I hope. So, less activity...


----------



## ironmine

71 dB said:


>



I've already done that. I added the folder in my computer, where impulses are stored, to Fog Convolver. I can see all the impulses in the folder, but when I click on the impulse name, it is added only as IR L (left), while the IR R (right) remain empty.


----------



## ironmine

Ok, I've figured it out. To be used in True Stereo mode in Fog Convolver, impulse files need to be renamed in a certain way: their filenames must coincide except for the last letter. The last letter in the name of the impulse file that will be used as IR L must be "L". The last letter in the name of the impulse file that will be used as IR R must be "R":

Test L.wav
Test R.wav

In this case Fog Convolver shows this pair of files as one line only ("Test") and when you click on it, both impulse files are loaded... And it's not mentioned in the manual...

So, if I want to use impulse files
IRC_1018_C_R0195_T030_P000.wav
IRC_1018_C_R0195_T330_P000.wav

I need to rename them into, e.g.:
abracadabra L.wav
abracadabra R.wav
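In case anyone wants to batch-prepare pairs like this, here's a small Python sketch (the function name and the copy-instead-of-rename choice are my own; the filenames are just the ones from the post):

```python
import shutil
from pathlib import Path

def make_true_stereo_pair(left_ir, right_ir, base_name, out_dir="."):
    """Copy two impulse-response WAVs to '<base> L.wav' / '<base> R.wav'
    so Fog Convolver picks them up as a single True Stereo entry."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    left_dest = out / f"{base_name} L.wav"
    right_dest = out / f"{base_name} R.wav"
    # Copy rather than rename, so the original files stay untouched.
    shutil.copy(left_ir, left_dest)
    shutil.copy(right_ir, right_dest)
    return left_dest, right_dest
```

For example: `make_true_stereo_pair("IRC_1018_C_R0195_T030_P000.wav", "IRC_1018_C_R0195_T330_P000.wav", "abracadabra")` produces "abracadabra L.wav" and "abracadabra R.wav".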


----------



## 71 dB

ironmine said:


> Ok, I've figured it out. To be used in True Stereo mode in Fog Convolver, impulse files need to be re-named in a certain way: their filenames must coincide except the last letter. The last letter in the name of the impulse file that will be used as IR L must be "L". The last letter in the name of the impulse file that will be used as IR R must be "R":
> 
> Test L.wav
> Test R.wav
> ...



Ok, good, it's settled now. Yeah, the manual could mention this, as it takes some effort to figure out...


----------



## gregorio

SoundAndMotion said:


> You are correct that gregorio knows a lot about some things (but not all)



In what sense are you applying the Dunning-Kruger effect, that I'm overestimating my competence because I'm incompetent or that I'm underestimating my competence because I am competent? And, 


SoundAndMotion said:


> 71 dB has some useful knowledge, some opinions and some difficulty expressing what he wants to communicate.



Yes but what is that useful knowledge? How do you differentiate "useful knowledge" from what might appear to be knowledge that is useful but is actually incorrect factually?


SoundAndMotion said:


> If he breaks the circle, he can interact with others (e.g. @ironmine ) for mutual benefit.



For whose mutual benefit? Just himself and ironmine or everyone else's benefit? And, why should mutual benefit have anything to do with it anyway? This is the sound science forum not the "mutual benefit" forum, isn't the point of science to get to the actual facts, regardless of who they benefit? For example: 
71dB and ironmine had an exchange about dynamic EQ, was that useful (or potentially useful) knowledge or was it factually incorrect? 
A. What they were talking about was effectively just compression (of the band-limited crossfeed signal), not really dynamic EQ in the first place. 
B. Compression significantly (relative to a few hundred micro-secs of ITD) changes the perceived duration, time position, freq response and amplitude of transient peaks (and whatever else is above the compression threshold) and typically also its perceived stereo depth; more compression on one element of a mix, relative to another, usually results in the perception of it being more present/closer. Of course, the end result may or may not be preferable to any individual listener, but it's not factually better, it's factually just distortion of the crossfed signal and intrinsically worse (but may be preferred).
C. What 71dB was talking about was therefore effectively music mixing/mastering, which raises two problems: Firstly, the music has already been mixed and mastered by the artists/engineers and probably already with a considerable amount of compression, including bandlimited "multi-band" compression. Secondly, by his own admission, 71dB knows next to nothing about music mixing/production, only what he's picked up from watching a few YouTube vids and playing around a bit with a few productions of his own (which his friends apparently complimented).

If we add A, B and C together, what have we got? Isn't it the typical sort of audiophile nonsense we see in the other forums here; misused/misapplied terminology, supposition or assumption based on ignorance and personal perception presented as objective fact? If we're just going to allow exactly the same here, then what is the point of this sub-forum?



SoundAndMotion said:


> We disagree about whether you can gain any knowledge from gregorio at this point, now that your communication with him is poisoned. Yes, he has useful knowledge, but none of it benefits you right now. His communication method is not compatible with you learning from him, and everything he can offer you can be found elsewhere... with less damage to you.



But he's already found it elsewhere, he's apparently already completed 11 university courses on acoustics and unless one or more of those courses were a "quack" course, then he was NOT taught to ignore swathes of acoustic science or to make up alternative, misleading terms instead of the accepted terms that already exist. If, for example, I were to slap a hall reverb on the crossfed signal, would I have just slapped a hall reverb over the entire crossfed signal (including the reverb it already contains) or would I have invented an "ILD spatializer"? Again, the audiophile world is rife with this sort of nonsense, should it be perfectly OK here in this sub-forum too?



Hifiearspeakers said:


> So you don’t think this thread is going to be completely radio silent if 71db’s were to leave altogether? I believe, if that happens, then the only thing that will be left of this thread is cobwebs.



Even if that does happen, what would be left of this thread is "cobwebs" full of inaccurate/false assertions, is that what we want this sub-forum to contain? Is this forum for the benefit of 71dB, for the mutual benefit of 71dB and ironmine, for the benefit of everyone who's sick and tired of the "feuding" regardless of the actual facts OR, for the benefit of anyone who might come here looking for the actual facts/science? If it's for the latter, then how does 71dB and ironmine "ignoring" me (and anyone else who doesn't share their perceptions/opinions) help, rather than hinder? For example, ironmine has had me on "ignore" for a while now and is still referring to "excessive spatiality" (and trying to solve it by adding more spatiality)!

G


----------



## bigshot (Nov 10, 2019)

Irresistible force, please meet immovable object...







With headphones, the spatiality that matters is the space between your ears.


----------



## Degru (Nov 11, 2019)

I took crossfeed into my own hands and made a simple EQ APO config:



Very barebones, totally customizable, and works very well. It sounds great on my Etymotic ER4B, turning them into very enjoyable, comfy-sounding IEMs when combined with a careful +3 dB bass shelf, without messing with the resolution and clarity much. You can also expand this config to use binaural IR files, though that's a bit trickier to configure properly.
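For anyone curious what such a config can look like: a truly minimal crossfeed in Equalizer APO can be done with a single weighted `Copy:` line. This is just a bare-bones sketch of the idea, not Degru's actual config (which presumably also delays and lowpass-filters the crossfed portion), and the 0.85/0.15 mix is an arbitrary example level:

```
Copy: L=0.85*L+0.15*R R=0.85*R+0.15*L
```

Each output channel becomes mostly itself plus an attenuated copy of the opposite channel; tweak the factors to taste.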


----------



## SoundAndMotion (Nov 11, 2019)

@gregorio

I think this response to you is better placed in @castleofargh 's thread here, but I didn't want to bring down the level there. Should this continue, perhaps a new thread is better, since it also doesn't belong here.

I have neither the time nor desire to engage in one of your standard bullying, long-winded, pedantic, point-by-point, repetitive battles. Perhaps later, when I have more time, although my previous exchanges with you have proven unsatisfying. You never, or at best rarely, acknowledge your mistakes or give credit to valid counterpoints to your points.

I respect your expertise in the methods and standard practices of music recording and production. I can and have referred to your knowledge of that space. I also support your, or anyone's, calls for evidence to back up claims. I'm an evidence-based person and do evidence-based work. There are limits on when and how to call for evidence, but it is a valid and useful endeavor. But using your own words as evidence ("The facts and science back me up.... so disagreeing with me is anti-fact/anti-science") is such a blatant appeal to self-authority that DK problems are an obvious area to explore. You place people in two groups: those who show you proper obeisance, and the audiophiles (using the negative definition, not one from a dictionary).

Although you may know more than many lay people in the areas of digital and analog audio, electronics/physics, and perception, you are by no means expert in these. But you act as though you are and appeal to your own authority: classic DK. I can dig up your posts and PMs for evidence, but off the top of my head, among other things, you: misapply Shannon-Nyquist to sampling theory, interchange voltage and current incorrectly, confuse frequency and time, confuse measurement and recording, and don't understand the limits of perception. There are valid relationships among these things you use to justify your mistakes, but they do show your lack of understanding, let alone expertise.

71dB has useful knowledge about auditory spatial perception and acoustics. Better than a lay person, but not an expert. He also likes crossfeed and wants to point out that there is a perceptual basis for some people liking it. Sometimes he leaves out the "some" from "some people", but he has gotten better. You want to dismiss the preference as a "vanilla ice cream tastes better than chocolate"-type preference. He repeats his justification for there being a true perceptual reason that the illusion is pleasing to some. You engage in pedantic, bullying, condescending harassment, which just provokes continued, flustered responses. In your exchanges, there is not a correct vs. incorrect side. You both make valid points, often talking past each other, but he struggles with communication (for language and social-interaction reasons) and that is your favorite exploit. The only winners in your exchanges are the popcorn-munching, rubber-necking voyeurs who used to shout "fight, fight, fight" on the playground, and you, because this achieves your goal. You provoke 71dB's worst behavior, and the only way to stop the train-wreck is to just stop.

Most posters on audio forums are not experts in the related fields. So any exchange here must be taken with a grain of salt. The reason most exchanges are posted, rather than PM-ed is that the exchange may benefit others. 71dB's and ironmine's exchange, if they mutually benefit from it, should continue. Not because they write flawless posts, but because it helps both, they have fun, and it could help others. You were not elected hall monitor, and your classic "this is not the *** forum, this is sound science" really shows you seek validation here. We all know it's socially unacceptable to admit that... all except 71dB, he just admits it. And I'm willing to cut him some slack for his language and social awkwardness. You, as a bully, haven't earned that slack.


----------



## ironmine

I almost gave up on using HRTF impulses from the *Listen HRTF Database website*.
They just don't provide a sufficient crossfeed effect. And there's too much bass (as if the music were playing in a basement). 

So, I will continue with my other "project": trying to build a better alternative to my currently favorite 112dB Redline Monitor using individual VST plugins.

71dB, you mentioned earlier that the low-pass-filtered channels, before they are crossfed back into the main signal, must be attenuated by -14 dB to -2 dB - but you advised me, if I have to use one value, to apply the average of -8 dB. I just wonder: is there any objective method (in the form of a VST plugin) to measure an audio track and see how much "hard stereo" there is, especially in the low frequencies? Can such a phenomenon as "hard-panning" be objectively measured?


----------



## 71 dB

ironmine said:


> 71dB, you mentioned earlier that the low-pass-filtered channels, before they are crossfed back into the main signal, must be attenuated by -14 dB to -2 dB - but you advised me, if I have to use one value, to apply the average of -8 dB. I just wonder: is there any objective method (in the form of a VST plugin) to measure an audio track and see how much "hard stereo" there is, especially in the low frequencies? Can such a phenomenon as "hard-panning" be objectively measured?



Actually I think the range of needed crossfeed is something like -12 dB (almost "binaural" recordings) to -1 dB ("ping pong" and often multichannel tracks downmixed to stereo).
I think if you would need crossfeed at a level lower than -12 dB, it's better to do nothing and just bypass crossfeed. That way you at least avoid all the negative things Gregorio accuses me of ignoring. 

I use my ears to determine the proper level of crossfeed, and I think once you understand what crossfeed should be doing it's quite easy. However, it is of course possible to analyse signals more objectively. I use the S/(S+M) method:

M = abs(L + R) _(the mid signal, absolute value)_
S = abs(L - R) _(the side signal, absolute value)_

We don't need to scale these in any way to match matrix rotations, because it won't affect the value of

D = S / (S + M).

D tells how much of the total signal is in S; in other words, it's a simple measure of how "wide" the stereo signal is.

Mono sound => S = 0 => D = 0 / (0 + M) = 0
Sound only on one channel (L) => S = L, M = L => D = L / (L + L) = 0.5
Same sound on L and R, but out of phase (anti-mono, R = -L) => S = 2L, M = 0 => D = 2L / (2L + 0) = 1

So the "wideness" index D varies between 0 and 1. We also have a target "wideness" Dt, which is calculated as:

Dt = ( 1 - 10^(-target ILD / 20) ) / 2

For example, with a target ILD of 3 dB at low frequencies, Dt = (1 - 10^(-3/20))/2 ≈ 0.15. If a recording has D = 0.4, for example, the crossfeed level needed to reach the target is

beta = (D - Dt) / (D + Dt - 2*D*Dt) = (0.4 - 0.15) / (0.4 + 0.15 - 2*0.4*0.15) = 0.5814
crossfeed level = 20*log10(beta) = 20*log10(0.5814) = -4.7 dB.

This is a very simple analysis, and Gregorio will again say I "ignore" a million things, as I do. However, to me this analysis correlates pretty well with my ears, and you can always fine-tune the target Dt to your perception. These calculations should of course be done on lowpass-filtered signals (for example, below 800 Hz). One can also do the analysis per octave band, as I do to see what's happening in the low end ILD-wise when mixing my own music (to create omnistereophonic sound). In practice I calculate D this way:

D = S / (S + M + epsilon),

where epsilon is a very small number, like 0.00001, to avoid a 0/0 calculation whenever S + M = 0. D itself is a signal telling the instantaneous "wideness" sample by sample, so it should be averaged to get a general D.
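For anyone who wants to try this, here is a quick Python sketch of the S/(S+M) analysis described above (plain Python, no libraries; the lowpass filtering and octave-band splitting are left out, and the function names are mine):

```python
import math

def wideness(left, right, eps=1e-5):
    """Average 'wideness' D = S / (S + M + eps) over sample pairs,
    where M = |L + R| (mid) and S = |L - R| (side)."""
    d_values = []
    for l, r in zip(left, right):
        m = abs(l + r)
        s = abs(l - r)
        d_values.append(s / (s + m + eps))  # eps avoids 0/0 on silence
    return sum(d_values) / len(d_values)

def target_wideness(target_ild_db):
    """Dt = (1 - 10^(-ILD/20)) / 2, e.g. about 0.15 for a 3 dB target ILD."""
    return (1.0 - 10.0 ** (-target_ild_db / 20.0)) / 2.0

def crossfeed_level_db(d, dt):
    """Crossfeed level (in dB) needed to bring measured D down to Dt."""
    beta = (d - dt) / (d + dt - 2.0 * d * dt)
    return 20.0 * math.log10(beta)
```

With the numbers above, `crossfeed_level_db(0.4, 0.15)` comes out to about -4.7 dB, matching the hand calculation. In practice you would run `wideness()` on a lowpass-filtered (e.g. below 800 Hz) version of the track.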


----------



## gregorio

SoundAndMotion said:


> [1] He also likes crossfeed and wants to point out that there is a perceptual basis for some people liking it.
> [2] Sometimes he leaves out the "some" from "some people", but he has gotten better.
> [3] You want to dismiss the preference as a "vanilla ice cream tastes better than chocolate"-type preference.
> [4] He repeats his justification for there being a true perceptual reason that the illusion is pleasing to some.
> ...



1. I have never disputed that there is a perceptual basis for some people liking crossfeed, in fact quite the opposite. However, there is also a perceptual basis for some/many people not liking it.

2. But that is not the issue. 71dB effectively asserts/implies that those who don't like crossfeed don't like it because they are uneducated and/or lack the listening skills/experience. But I agree, he has "gotten better", these insulting assertions/implications are relatively "better" than previously, when he just said we were ignorant, idiots and various other overt insults.

3. No I don't, it's more complex than that, which I've explained!

4. And what's the point of repeating it over and over, when I and everyone else have already acknowledged this, and not just once but going way back nearer the beginning of this thread roughly two years ago?

5. You have that backwards! This accusation (and the points above) indicate you haven't even read this thread before casting your aspersions! I won't be bullied and will push back strongly against anyone who tries, can't you tell the difference? If you can, why are you casting your aspersions against me, instead of 71dB?

6. In this sub-forum there is a "correct side" vs "incorrect side"! The "incorrect side" being the promotion of personal preferences/perception as objective facts. Have you read any of this thread? Besides effectively stating you believe I'm incompetent, you did NOT answer a single question I asked, why is that?
6a. That's not the only "but" or even a particularly relevant "but". The relevant "but" is that 71dB refuses to acknowledge ANY "valid point" (made by anyone) that contradicts his belief that his personal perception/preferences define the objective facts/science. This is my "favourite exploit", NOT his language skills as you FALSELY state!

7. This too is a lie! I've made "my goal" perfectly clear; that 71dB stop posting his false assertions of objective fact. I would FAR rather he did that WITHOUT a "fight in the playground" but if he continues to make those false assertions, I will continue to refute them. Have you read ANY of this thread? And, your response appears to be an attempt to start a new "fight in the playground" yourself, is that "your goal"? 

G


----------



## 71 dB

SoundAndMotion said:


> @gregorio
> 
> I think this response to you is better placed in @castleofargh 's thread here, but I didn't want to bring down the level there. Should this continue, perhaps a new thread is better, since it also doesn't belong here.
> 
> ...



That is one fine post, full of insight into what has happened here! Totally fair to me, in my opinion.


----------



## castleofargh

SoundAndMotion said:


> @gregorio
> 
> I think this response to you is better placed in @castleofargh 's thread here, but I didn't want to bring down the level there. Should this continue, perhaps a new thread is better, since it also doesn't belong here.
> 
> ...


can't argue that gregorio has a hard time posting without attacking others. he would be so much cooler if he could stop that. something I've observed in this hobby, if you give it enough time, I believe that a guy with relevant knowledge(in a given domain) will at some point start keeping it to himself instead of trying to explain why some posts are wrong, or he will get banned for blowing a fuse. either one. people who bother addressing mistakes on a regular basis and remain zen anyway, they're like mythical creatures in this hobby.

can't argue with you when it comes to making claims, whoever makes them. I also would like more supporting evidence/studies/proper demonstrations for the sake of people who read the topics, and simply to maintain, as you say, an evidence-based conversation. but in this particular case the facts about acoustics or psychoacoustics aren't that significant (strangely enough), because what we contest is mostly @71 dB 's flawed rationale and empty claims about factual improvement. it's mostly about the basics of experimenting and reaching a conclusion at the end if we happen to get conclusive circumstances, instead of coming up with a conclusion we like and then making up the argument that agrees with it by cherry picking everything.
 let's be clear, @71 dB is the only one who could settle this matter, he's the root of this particular problem as he's the one who stubbornly kept posting the same false stuff and empty claims almost word for word, despite our best and many efforts to point them out and make him understand. we gave him many opportunities, he didn't even have to admit to being wrong (as some people would rather die), we suggested just agreeing that crossfeed is a subjective tool (which it is) having subjective and different results on different listeners (which it does). to no avail, he must claim improvement at more than his subjective level.
so let's not make @gregorio the big bad wolf oppressing well-informed good guy @71 dB. even if greg is no angel, all that mess isn't his fault. if @71 dB had stopped acting like he's crossfeed's mum, always finding excuses for its flaws, always idealizing its good side, fighting whoever attacked the kid like only a mother would, we could have avoided about 2 years of erroneous spam and the many corresponding and often pissed-off replies. let's not mistake a consequence for the actual cause. 

just a few of @71 dB's best hits:
-  insisting on some idea that there is a proper or accurate spatiality when discussing playback of stereo albums. unless we first pick a bunch of more or less arbitrary references, that concept doesn't exist. nothing in the recording, mixing, mastering and playback process focuses on preserving accurate localization cues. plus he kept writing it in the context of crossfeed, which is obviously not accurate or proper anything.
- the fallacy of considering a badly oversimplified EQ as being the listener's ILD. our brain usually knows better, so pretending otherwise only leads to using a faulty rationale where we assume that this EQ has the psychoacoustic properties and role of the listener's own ILD. which obviously would need to be confirmed for many listeners before going further, and would most likely be proved not to be the case for all but the luckiest of listeners.
- describing crossfeed as an acoustic model (not existing in reality!) to justify calling it an objective approach, and then making the claim of objective improvement. that made-up model lets us guess stuff or make hypotheses. but drawing conclusions from that fake acoustic model and calling them objective...



from a more empirical approach, the fact that different people get different impressions from using a given crossfeed setup, and that not many people like it or wish to keep using it, is IMO, evidence that crossfeed isn't the obvious and very factual improvement 'mother' claims it to be.


----------



## 71 dB

castleofargh said:


> just a few of @71 dB's best hits:
> 1 -  insisting on some idea that there is a proper or accurate spatiality when discussing playback of stereo albums. unless we first pick a bunch of more or less arbitrary references, that concept doesn't exist. nothing in the recording, mixing, mastering and playback process focuses on preserving accurate localization cues. plus he kept writing it in the context of crossfeed, which is obviously not accurate or proper anything.
> 2 - the fallacy of considering a badly oversimplified EQ as being the listener's ILD. our brain usually knows better, so pretending otherwise only leads to using a faulty rationale where we assume that this EQ has the psychoacoustic properties and role of the listener's own ILD. which obviously would need to be confirmed for many listeners before going further, and would most likely be proved not to be the case for all but the luckiest of listeners.
> 3 - describing crossfeed as an acoustic model (not existing in reality!) to justify calling it an objective approach, and then making the claim of objective improvement. that made-up model lets us guess stuff or make hypotheses. but drawing conclusions from that fake acoustic model and calling them objective...
> .



Maybe I should not, I am kind of forced to defend myself here clarifying things:

1 - Proper spatiality in this context doesn't mean "accurate localization cues". It means spatiality with parameters that seem accurate/rational/natural. Whatever the accurate localization cues are, my ears _expect_ ILD of about 3 dB at low frequencies and that is what my ears get when I listen to speakers. Headphones without crossfeed may give 10 dB, 30 dB… completely off, not even close to 3 dB. I think I have explained this 100 times…

2 - Yes, crossfeed EQ is not the listener's HRTF, but the way I think of the system, headphones without crossfeed = an EQ that blocks all frequencies. Crossfeed EQ is MUCH closer to HRTF than a filter that blocks everything. That's why crossfeed EQ, oversimplified as it is, is superior to no crossfeed. All of this is because people can't see that no crossfeed is a process too, a process much worse than crossfeed because it is even further from HRTF. I think I have explained this 100 times…

3 - To measure objective improvement we need models that simulate hearing, and I'm sure those models find crossfeed beneficial. The reduction of ILD to its "natural" levels is so huge it must mean an improvement objectively, using any reasonable model of human hearing. Any objective model saying huge ILD at low frequencies is no problem is completely useless.
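For anyone who hasn't experimented with crossfeed, the operation being debated (mixing an attenuated, lowpass-filtered copy of each channel into the opposite one) can be sketched in a few lines. This is a deliberately crude illustration, not the algorithm of any particular plugin; the -12 dB level, the 700 Hz cutoff and the one-pole filter are arbitrary choices of mine.

```python
import numpy as np

def crossfeed(left, right, sr, level_db=-12.0, cutoff_hz=700.0):
    """Mix a lowpass-filtered copy of the opposite channel into each channel.

    A one-pole lowpass stands in for the head-shadow filter; its group
    delay also supplies a small interaural delay, loosely bs2b-style.
    """
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)  # one-pole filter coefficient
    g = 10.0 ** (level_db / 20.0)              # crossfeed gain (~0.25 at -12 dB)

    def lowpass(x):
        y = np.empty_like(x)
        state = 0.0
        for i, sample in enumerate(x):
            state = (1.0 - a) * sample + a * state
            y[i] = state
        return y

    return left + g * lowpass(right), right + g * lowpass(left)
```

With a hard-panned signal (all left, silent right), the right output settles at roughly -12 dB of the left input, which is the whole point of the exercise: the "other" ear is no longer completely silent.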


----------



## bigshot

I got a set of AirPod Pros today and I've been playing with them. I've never had headphones that poke in my ear before. These are comfortable and I can almost forget I'm wearing them. The noise cancellation is amazing, but when it's turned on, I am stone deaf to anything happening outside of my own head. They seal my ears snugly, so it isn't much better with noise cancellation turned off. But it has an interesting passthrough feature, which uses the microphone in the AirPods to channel ambient sound in. It's a very odd effect because even though it is stereo and the mike is roughly in the position of my ears, it strips off all of the distance cues. The ambient sounds all are contained within my head. The spatiality is all boxed up. Even turning my head is different. It is very hard to determine direction and distance with the AirPods. Spatiality is a lot more than just cross feed between channels.


----------



## ironmine

71 dB said:


> Actually I think the range of needed crossfeed is something like -12 dB (almost "binaural" recordings) to -1 dB ("ping pong" and often multichannel tracks downsampled to stereo).
> I think if you need to crossfeed at level lower than -12 dB, it's better to do nothing and just bypass crossfeed. That way you at least avoid all the negative things Gregorio accuses me of ignoring.
> 
> I use my ears to determine the proper level of crossfeed and I think once you understand what crossfeed should be doing it's quite easy. However, it is of course possible to analyse signals more objectively. I use the S/(S+M) method:
> ...



Is there any VST plugin that can do this analysis automatically? Can one of the Mastering The Mix plugins do it?

Ideally, I want to end up with this scheme:
1. I download an album that I want to listen via headphones.
2. I run the album quickly through the monitoring plugin which gives me a value, or at least some sort of understanding, how much crossfeed effect to apply to the album.
3. I convert the album from flac to flac while applying crossfeed with the recommended value.
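There doesn't seem to be a ready-made plugin for step 2, but the ratio behind the S/(S+M) notation quoted above can be approximated offline (I'm assuming here that S and M are the RMS levels of the side and mid signals; the quote is truncated, so treat that reading as a guess). The number only characterizes channel difference; mapping it to a crossfeed level would still be a subjective choice, not an objective measurement.

```python
import numpy as np

def stereo_width(left, right):
    """Return S/(S+M): the share of side (L-R) RMS relative to mid (L+R).

    0.0 for dual mono (no crossfeed needed); values near 1.0 for extreme
    ping-pong panning, where a stronger crossfeed might be preferred.
    """
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    s = np.sqrt(np.mean(side ** 2))  # RMS of the side signal
    m = np.sqrt(np.mean(mid ** 2))   # RMS of the mid signal
    return 0.0 if s + m == 0.0 else s / (s + m)
```

A mono track scores 0.0, a hard-panned track 0.5, and an out-of-phase track 1.0, so the scale at least orders material the way step 2 would need.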


----------



## bigshot (Nov 12, 2019)

You might want to scan this thread, Ironmine. There's info on a lot of DSPs and how to get them working. https://www.head-fi.org/threads/the-dsp-rolling-how-to-thread.867258/ There's no reason you can't get a real time DSP to do it on a comp. And anything that can be done real time, can be bounced down if you have a sound editing app. There may be a pro plugin you can use. Gregorio would know about that.


----------



## castleofargh

71 dB said:


> Maybe I shouldn't, but I feel forced to defend myself here by clarifying things:
> 
> 1 - Proper spatiality in this context doesn't mean "accurate localization cues". It means spatiality with parameters that seem accurate/rational/natural. Whatever the accurate localization cues are, my ears _expect_ ILD of about 3 dB at low frequencies and that is what my ears get when I listen to speakers. Headphones without crossfeed may give 10 dB, 30 dB… completely off, not even close to 3 dB. I think I have explained this 100 times…
> 
> ...


yeah yeah. your ears, your preferences of what feels natural to you, your expectations, your cherry-picked model, your half-truth conclusions, therefore it's an objective improvement... is it just another semantic issue where when you write objective, you mean subjective? that would make so much sense TBH.
 I've given you the same fallacious argument you give but instead of cherry picking ILD, I picked some room reverb. which is also part of giving "spatial whatever impressions", except reverb is even more natural as it also occurs outside of speaker playback, we always have some in our lives. what could possibly be more natural? but then pretty much nobody goes on to add some fixed reverb and only reverb while listening to headphones. because of course it feels weird. and that's also how crossfeed feels to many listeners. the brain works with all the cues and all the expectations and all the preconceptions. giving it one variable from a completely different playback experience is more likely to feel weird than it is to feel more natural. it's pretty obvious and what people would tell you if you ever cared to ask instead of trying to push your world view onto them.



71 dB said:


> Actually I think the range of needed crossfeed is something like -12 dB (almost "binaural" recordings) to -1 dB ("ping pong" and often multichannel tracks downsampled to stereo).
> I think if you need to crossfeed at level lower than -12 dB, it's better to do nothing and just bypass crossfeed. That way you at least avoid all the negative things Gregorio accuses me of ignoring.
> 
> I use my ears to determine the proper level of crossfeed and I think once you understand what crossfeed should be doing it's quite easy. However, it is of course possible to analyse signals more objectively. I use the S/(S+M) method:
> ...


 suddenly the recording does matter again and you're not just pretending that crossfeed is the objective compensation to simulate ILD and ITD from virtual speakers at about 30° on each side. funny how you just shot yourself in the foot.
silly me had the idea that once you've placed your speakers in a room, you play all albums on them the same way. guess I've been doing it wrong all this time.


----------



## castleofargh

bigshot said:


> You might want to scan this thread, Ironmine. There's info on a lot of DSPs and how to get them working. https://www.head-fi.org/threads/the-dsp-rolling-how-to-thread.867258/ There's no reason you can't get a real time DSP to do it on a comp. And anything that can be done real time, can be bounced down if you have a sound editing app. There may be a pro plugin you can use. Gregorio would know about that.


nah, nothing there would be what he's looking for. not convinced it exists at all, but then again there are so many guys out there coding some pretty crazy stuff, so who knows?


----------



## 71 dB

ironmine said:


> Is there any VST plugin that can do this analysis automatically? Can one of the Mastering The Mix plugins do it?
> 
> Ideally, I want to end up with this scheme:
> 1. I download an album that I want to listen via headphones.
> ...



Sorry, I'm not aware of such a VST.
I recommend using your ears to decide the proper crossfeed level. 
That way you get what's best for your ears and I think this is easy when you get the hang of it.


----------



## 71 dB

castleofargh said:


> 1. yeah yeah. your ears, your preferences of what feels natural to you, your expectations, your cherry-picked model, your half-truth conclusions, therefore it's an objective improvement... is it just another semantic issue where when you write objective, you mean subjective? that would make so much sense TBH.
> 2. I've given you the same fallacious argument you give but instead of cherry picking ILD, I picked some room reverb. which is also part of giving "spatial whatever impressions", except reverb is even more natural as it also occurs outside of speaker playback, we always have some in our lives. what could possibly be more natural? but then pretty much nobody goes on to add some fixed reverb and only reverb while listening to headphones. because of course it feels weird. and that's also how crossfeed feels to many listeners. the brain works with all the cues and all the expectations and all the preconceptions. giving it one variable from a completely different playback experience is more likely to feel weird than it is to feel more natural. it's pretty obvious and what people would tell you if you ever cared to ask instead of trying to push your world view onto them.
> 
> 3. suddenly the recording does matter again and you're not just pretending that crossfeed is the objective compensation to simulate ILD and ITD from virtual speakers at about 30° on each side. funny how you just shot yourself in the foot.
> silly me had the idea that once you've placed your speakers in a room, you play all albums on them the same way. guess I've been doing it wrong all this time.



1. To me this is clear. I believe there is objective justification for crossfeed. My ears just happen to agree. Speakers give small ILD at low frequencies, headphones give large ILD, so how surprising is it that lowering ILD is an improvement? My claim is not that crossfeed does a stellar job (it doesn't). My claim is that headphone spatiality is so totally off (ILD at low frequencies not even close to what it should be) that doing just "something" can be an improvement. Crossfeed isn't just "something", it's a coarse simulation of acoustic crossfeed, so to me it's not surprising it can improve headphone sound. It's about the rule of diminishing returns: at first it's easy to improve headphone spatiality, but it becomes harder and harder, so improving headphone spatiality beyond crossfeed is hard.

2. So, you say we can ONLY fix ILD if we also fix reverberation? Or we should fix nothing, because we are used to the sound as it is? This has nothing to do with objectivity. It's about your subjective preference not to learn the alternative, ILD-fixed headphone sound. To me this is easy. Recordings are mixed for speakers, so it's not surprising the spatiality is off with headphones. Headphones lack early reflections and reverberation, but recordings tend to have some reverberation. When I listen to a CD of church organ music, the T60 time of the church is 10 times that of my listening room, so listening to it with speakers increases the reverberation by only 10 %, and the amount of reverberation with headphones is almost the same. Dry material sounds dry on headphones, but I kind of like that "precision"; that's me. You may disagree.

3. What are you talking about? Suddenly what? How does explaining objective methods make this LESS objective? First of all, recordings are mixed differently, so there are different levels of ILD from record to record. Some recordings have so little ILD they don't need crossfeed at all. Since crossfeed isn't perfect, one should use the lowest possible crossfeed level that gets rid of excessive ILD. A listening room is an ILD regulator: no matter what recording you play (mono, ping pong…), you get about 0-3 dB of ILD at low frequencies. Selecting the crossfeed level is the same kind of regulation. You could change the acoustics of your room to optimize it for each recording, but you don't, because it's too much work; it's totally impractical. So you listen to everything using the same speaker setup and acoustics. Crossfeeders can have a simple selection/adjustment of crossfeed level, so you have the opportunity to optimize it easily. So much for me shooting my foot!


----------



## 71 dB

castleofargh said:


> nah, nothing there would be what he's looking for. not convinced it exists at all, but then again there are so many guys out there coding some pretty crazy stuff, so who knows?



Unfortunately I'm bad at coding. I don't know how to code VST plugins. I have my Nyquist plugins, that's it…


----------



## gregorio (Nov 12, 2019)

71 dB said:


> [1] Whatever the accurate localization cues are, my ears _expect_ ILD of about 3 dB at low frequencies and that is what my ears get when I listen to speakers. Headphones without crossfeed may give 10 dB, 30 dB… completely off, not even close to 3 dB. I think I have explained this 100 times…
> [2] The reduction of ILD to its "natural" levels is so huge it must mean an improvement objectively, using any reasonable model of human hearing.
> [2a] Any objective model saying huge ILD at low frequencies is no problem is completely useless.



1. Why? Why repeat anything 100 times here, let alone 100 repetitions of utter nonsense?? For those who apparently haven't read this thread, it's utter nonsense for two reasons:
*Firstly*, both music itself and commercial recordings of music are an abstract art form and therefore by definition, do NOT have to comply with reality or "natural levels".
*Secondly*, even if music recordings were bound by the rules of reality, it's still nonsense because 71dB's factual assertion of what constitutes "natural levels" of ILD is *false* anyway! In any room (other than an anechoic chamber) we have a complex interaction between the direct sound and the various sound reflections, which causes an interference pattern, and it's this interference pattern that enters our ears. This interference pattern is highly localised (particularly in the lower frequency range) because it entirely depends on the relative phase of the direct and reflected sound, and as we change position within a room, relative to the reflective surfaces and the sound source (say a speaker), this phase relationship changes. The magnitude of these changes is typically around 30dB in an untreated room, and even in a very well treated room (such as a commercial recording studio control room) it's still typically around 6dB - 10dB. If you need evidence of this, simply type something like "room frequency response" into Google and click on "Images". The evidence for this being highly localised can be had from anyone who's ever spent more than a few minutes in a room with a measurement mic and analysis software; simply take a measurement, move the mic say 6" or so (the average head width), measure again and the two measurements will be significantly different. Naturally occurring ILD in an average domestic room at various frequencies could be as much as 60dB or so, for example in the case of a complete null occurring where one ear is positioned but not at the other. It would be rare to experience such a large ILD in practice, although 10dB or more would be quite common.

Where then does 71dB get his figure of 3dB ILD being the "natural level" from? One would expect around 3dB for the ILD in an anechoic chamber (where we don't have any reflections which interfere with the direct sound), so how can 71dB's ears objectively "_expect ILD of about 3dB_" unless he's spent his whole life living in an anechoic chamber and never been in an average domestic room? Additionally, the second reason above is literally "Room Acoustics 101" and 71dB claims to have taken 11 university courses. How is it possible to take even one course in the subject without doing room acoustics 101, let alone eleven? The only logical conclusion is that either he hasn't done any university courses or he's ignoring/discounting what he's been taught. Either way, his objective/factual assertion of "natural levels" of ILD contradicts the science and empirical evidence (that anyone with a cheap measurement mic and free software can verify for themselves) and he's presented no reliable evidence to support his assertion!!

2. As "natural levels" of ILD in an untreated domestic room can be anywhere from 0dB to about 60dB, you might at best have an objective argument for using crossfeed to reduce ILD to no more than 60dB, provided of course that you ignore the fact that music recordings don't have to comply with "natural levels" anyway!
2a. As we can experience "_huge ILD at low frequencies_" in the natural/real world (in a domestic room when listening to any sound source), 71dB is therefore effectively stating that the real world is "completely useless"!
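The interference effect described above is easy to check numerically. The sketch below is a 2-D toy model with a single perfectly reflecting wall (modeled as an image source), with source, wall and "ear" positions chosen purely for illustration; even this minimal setup produces a level difference of more than 20 dB between two points 17 cm apart.

```python
import numpy as np

def level_db(freq_hz, src, ear, wall_x, c=343.0):
    """Level (dB re. unit source) at 'ear': direct path + one wall reflection.

    The reflection off the wall at x = wall_x is replaced by an image
    source; each path contributes exp(-j*k*r)/r (phase plus 1/r spreading).
    """
    img = (2.0 * wall_x - src[0], src[1])  # mirror the source in the wall
    k = 2.0 * np.pi * freq_hz / c          # wavenumber
    amp = 0.0 + 0.0j
    for s in (src, img):
        r = np.hypot(ear[0] - s[0], ear[1] - s[1])
        amp += np.exp(-1j * k * r) / r
    return 20.0 * np.log10(abs(amp))

# Source at the origin, wall at x = 3 m, two 'ears' 17 cm apart.
# At ~429 Hz the path difference for the first ear is exactly half a
# wavelength, so direct and reflected sound nearly cancel there.
ear_in_dip = level_db(428.75, (0.0, 0.0), (2.80, 0.0), 3.0)
ear_nearby = level_db(428.75, (0.0, 0.0), (2.63, 0.0), 3.0)
```

Single-number claims about "natural" ILD gloss over exactly this kind of position dependence.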

Round and round we go!

G


----------



## bigshot (Nov 12, 2019)

castleofargh said:


> nah, nothing there would be what he looks for. not convinced it exists at all, but then again there are so many guys out there coding some pretty crazy stuff, so who knows?



If he's interested in exploring signal processing, he could at least see what's out there. Maybe he'll find something he likes better than what he's looking for.

The silly thing about this whole argument is that the sound of speakers in a room sounds more natural because it *is* natural. The concept is that you take the canned content and wrap an envelope of real space around it. That makes the secondary depth cues more effective and adds a level of realism to the sound. The sound envelope of the room that engineers expect as a rough baseline for what they intend to create is extremely complex, and it depends on multiple acoustic principles. The sound of the room is just as important to the quality of the presentation as the sound of the speakers themselves. Remove the room from the equation and you're getting half a loaf. That may be good enough for your music listening purposes, and slathering on some signal processing might make it a little better. But it just isn't the same as speakers in a room.

Headphones can't even come close to the sound of speakers in a room without some heavy duty processing and outboard tracking equipment. I'm not convinced that it can even be completely achieved. One thing I've learned from multichannel is that there is a fractal curve to realism. The more channels you are able to balance, the closer you'll get. But there isn't any number of channels that can completely recreate acoustic reality, and smaller differences sound bigger the further down the rabbit hole you go. In CG animation, there's a term called "uncanny valley". This refers to the difficulty of animating realistic human characters. The closer you get to "real", the more obvious the error is. The best way to deal with it is to not hammer away at realism, but instead find an alternate reality that feels real without being real. This has nothing to do with science. It's entirely psychology and it's hard wired into us as humans. We will prefer distortions that appeal to us subjectively, even when a more accurate, but slightly flawed option exists.

There is nothing wrong with subjectivity. It's a huge part of how we listen to music. We should embrace it and cater to it. But trying to convince other people that they should share the same subjective preferences because of "science" is a complete waste of time. Subjective preference is a solipsistic exercise. It's fine to suggest that other people try processing and see if they like it too, but telling people science says they should like things this particular way is wrongheaded in more ways than one.


----------



## gregorio

I didn't respond to this previously because he's got me on "ignore" but my response *might* be of some use to others:


ironmine said:


> Is there any VST plugin that can do this analysis automatically? Can one of the Mastering The Mix plugins do it?
> Ideally, I want to end up with this scheme:
> 1. I download an album that I want to listen via headphones.
> 2. I run the album quickly through the monitoring plugin which gives me a value, or at least some sort of understanding, how much crossfeed effect to apply to the album.
> 3. I convert the album from flac to flac while applying crossfeed with the recommended value.



The answer to your question is "no", there isn't such a plugin, either VST or any other type. The "stumbling block" is item #2. For such a plugin to exist (that wasn't just marketing BS), there would have to be 2 requirements:

1. Some sort of objective measurement of the spatiality on the album (actually each track of the album, as particularly with non-acoustic genres, each track is almost certain to have at least somewhat different spatial information) AND,
2. There would have to be some objective "value" of crossfeed, applied in a variable amount according to the results of requirement #1 (the objective measurement of spatiality on the album/track).

NEITHER of these requirements is possible: there can be no objective measurement of the spatiality of a track/recording, because the spatiality of tracks/recordings is applied subjectively, and there is no objective crossfeed value, because it's a subjective preference.

G


----------



## SoundAndMotion

@gregorio

I'm in a good mood now and really don't want to fight with you, but I'm also not afraid to. But, can we avoid it, please?

Some context for you: no, I have not read all 1500 posts from this thread, nor have I read all of the 200+ from the last month. I check it out off and on, including a few dozen in the last few weeks. You don't have to read my posts, but if you did you'd see I'm not a fan/supporter of everything that 71dB posts (e.g. role of cerebral dominance in crossfeed's appeal for some).

I don't have to read every post to see the back-and-forth between you and 71. IMHO, it is not helpful to anyone and it should stop, but how? One could ban you or 71 or both, or close the thread, but that would be overkill and I'm not a mod (everyone cheers), again IMHO. Someone can try to communicate "you're wrong, so shut up", as some have tried (got a mirror?), but we see that is hopeless, and I can't say that to either of you, because I don't believe it. If I wanted to get a bunch of readers to ROFL, I could simply ask you guys to just stop. I figured if he put you on ignore, it would, ever so slightly(!!!) raise the SNR, and help 71dB. He really looks up to you and respects you, which I can see, but also seeks your approval, which IMO is a self-destructive mistake. That motivates his continued efforts to answer you.

When I said this:





SoundAndMotion said:


> [6] In your exchanges, there is not a correct vs. incorrect side. You both make valid points, often talking past each other, but he struggles with communication (for language and social-interaction reasons) ...


 you responded:


gregorio said:


> 6. In this sub-forum there is a "correct side" vs "incorrect side"! The "incorrect side" being the promotion of personal preferences/perception as objective facts.
> 6A. Have you read any of this thread?
> 6B. Besides effectively stating you believe I'm incompetent,
> 6C. you did NOT answer a single question I asked, why is that?


6- You know I meant that between the 71dB side and the gregorio side, one is not "correct" and the other "incorrect". I was not referring to "subjective" vs. "objective" sides, nor "true" vs. "untrue" sides. I meant: you both make good points; you both make mistakes and say untrue things (me too); you both cherry-pick (me too); there is no winning side between you. BTW, you are not the spokesman for this sub-forum.
6A- Yes (see above).
6B- I know that every human on the planet is incompetent in more areas than they are competent. We all hope to be competent in certain areas. I am competent in flying a small Cessna (according to the FAA), but I am incompetent flying a jet... and admit it. When there is an area where we are incompetent but believe and act otherwise, that's DK (Dunning-Kruger), and that's what I said.
6C- After writing a few posts to 71dB and bfreedma (and mentioning you), I was not asking for a response from you, but I also wasn't surprised. You peppered me with 13 questions (one of your rhetorical devices). If I counted correctly, I answered 7, told you why I would ignore 5, and yeah, I guess I left one of them kind of hanging there... I cherry-picked. Should I claim you lied (another rhetorical method of yours) because I did answer some? No, I don't believe you did. I didn't use your numeration method, and I think you missed the answers within my text.

I'd guess you've heard of the idea that if a cop wants to pull you over, they merely need to follow you for several minutes and, voila, they'll find something. If it is their goal, they might follow you for a while to collect a list of infractions so they can claim you are a dangerous driver, even if it is not true.
If you want to know how 71dB must feel when you shred his posts, I'd be happy to take your last post to him and shred it for you. Give it the gregorio treatment. I could cherry-pick and ignore your valid points and relevant questions, number each individual mistake, and go on and on about each one. I would then use the sheer volume to call into question whether you really know anything...
I don't claim your posts are all bad and 71's all good... I see you both having a mixture and not communicating effectively with each other.
But shredding would be unproductive (unless it helps you to see). A more collegial way would be to cite and expand on your contribution, echo your call for evidence for the 3dB ILD (I would cite Feddersen et al., 1957, but I don't know where he got that), and give a gentle, constructive reminder about frequency dependence. I would ignore a couple of silly little things, because I know what you wanted to say (you're a native speaker though, right?).

Well, I think my goal of being non-confrontational was a miserable failure, but it was my goal... maybe you can help and we can elevate the level of discourse together.


----------



## castleofargh

71 dB said:


> 1. To me this is clear. I believe there is objective justification for crossfeed. My ears just happen to agree. Speakers give small ILD at low frequencies, headphones give large ILD, so how surprising is it that lowering ILD is an improvement? My claim is not that crossfeed does a stellar job (it doesn't). My claim is that headphone spatiality is so totally off (ILD at low frequencies not even close to what it should be) that doing just "something" can be an improvement. Crossfeed isn't just "something", it's a coarse simulation of acoustic crossfeed, so to me it's not surprising it can improve headphone sound. It's about the rule of diminishing returns: at first it's easy to improve headphone spatiality, but it becomes harder and harder, so improving headphone spatiality beyond crossfeed is hard.


let's go back to the basics, since explaining doesn't work. your claim, your burden: where is the evidence? how many tests have you run on people to be able to claim objective improvement? I'm guessing a great many, as your claim isn't limited to potential improvement (something I actually would agree with objectively, as the potential at least is there); you even argue that crossfeed is always an improvement as long as it goes in "the right direction". to be able to claim that, you must have some huge study, right? you wouldn't make objective claims about stuff you have never confirmed objectively, right? not in Sound Science.



71 dB said:


> 2. So, you say we can ONLY fix ILD if we also fix reverberation? Or we should fix nothing, because we are used to the sound as it is? This has nothing to do with objectivity. It's about your subjective preference not to learn the alternative, ILD-fixed headphone sound. To me this is easy. Recordings are mixed for speakers, so it's not surprising the spatiality is off with headphones. Headphones lack early reflections and reverberation, but recordings tend to have some reverberation. When I listen to a CD of church organ music, the T60 time of the church is 10 times that of my listening room, so listening to it with speakers increases the reverberation by only 10 %, and the amount of reverberation with headphones is almost the same. Dry material sounds dry on headphones, but I kind of like that "precision"; that's me. You may disagree.


there is no "we should". that's my point. if someone is used to headphones, maybe fixing nothing will feel more natural to him than any crossfeed. and maybe some settings won't feel right for someone while others will, which is what most people experience; @ironmine showed that many times with his feedback on various plugins.
I just try to demonstrate that the alleged objective argument, cherry-picking one variable from speaker playback and shoving a vague version of it into headphones out of context, does not necessarily lead to a subjective improvement, even if it does bring that variable closer to our reference model of speakers. just adding some amount of reverb will rapidly feel weird to most people with most music, and IMO that disproves the part of your reasoning supporting crossfeed as more than something some people will like.
but really the non-objectivity of it all is already in your post. you find the lack of the left channel reaching the right ear and vice versa very objectionable, and claim that anything dealing with that in some way is an improvement. but then you just said that you like the missing room reverb on headphones. so crossfeed really just happens to serve your subjective preferences, or maybe you got so used to it that it now defines your taste?

we have endless examples of speaker simulations and room simulations (all the surround stuff for headphones over the years, and the many crossfeed-and-more VSTs). they all pick some variables and try to make them more like speaker playback. so by your logic for crossfeed's justification, all of those could be declared objective improvements. but they're all subjective tools, aimed at fooling the brain. sometimes, for some people, it will work pretty well, and for most people most of the time, it will just feel weird, because those simulations only take care of some variables, and do it in a way that might end up close to someone's HRTF (lucky!), or pretty significantly far from it or from whatever room the guy is used to. all the so-called objective approaches to those processes result in inconsistent and, in some respects, unpredictable impressions for listeners. you don't seem to get how dynamic the interaction between stimuli really is when the brain creates an interpretation from them. if something is missing, or if something is different enough from whatever learned "normal" to be felt, you can be pretty sure that it will affect a lot more than just the basic impact of that one variable in total isolation. and we have countless demonstrations of that.
the most obvious example of an objective approach that simply doesn't agree with the brain despite being objectively very solid (something I can't say about your crossfeed argument) is, IMO, diffuse-field compensation for headphone frequency-response graphs. they decided on what the reference sound should be, and went to see what that would look like as FR on a headphone for the average human head. that, in principle, should lead a majority of people to find that a flat line under diffuse-field compensation sounds neutral to them (averaged over all directions, using calibrated speakers, all should be good, objectively, when strictly considering that diffuse-field model!!!). and yet, in practice, almost nobody finds diffuse field either neutral or enjoyable on headphones. it's been annoying the headphone community for ages. so what gives? IMO the answer is obvious: even something like our impression of neutral involves all the stimuli perceived by all our senses, our experiences, our expectations. the whole package. and not just the FR at the eardrum based on one model of sound that nobody really gets from their speakers in the first place. a model that doesn't account for everything cannot be guaranteed correct just because we follow the steps after the first few erroneous assumptions. that's true objectively, and it's maybe even more true subjectively. and it should remind you of a certain crossfeed argument we're having.





71 dB said:


> 3. What are you talking about? Suddenly what? How does explaining objective methods make this LESS objective? First of all, recordings are mixed differently, so there are different levels of ILD from record to record. Some recordings have so little ILD they don't need crossfeed at all. Since crossfeed isn't perfect, one should use the lowest possible crossfeed level that gets rid of excessive ILD. A listening room is an ILD regulator: no matter what recording you play (mono, ping pong…), you get about 0-3 dB of ILD at low frequencies. Selecting the crossfeed level is the same kind of regulation. You could change the acoustics of your room to optimize it for each recording, but you don't, because it's too much work; it's totally impractical. So you listen to everything using the same speaker setup and acoustics. Crossfeeders can have a simple selection/adjustment of crossfeed level, so you have the opportunity to optimize it easily. So much for me shooting my foot!


that just makes no sense to me. we agree that an album will most likely have been mastered with and for stereo speakers. so if you consider applying a different compensation on some albums for alleged objective reasons, you're treating the sound engineer's artistic intent as an objective mistake. which is nonsense, end of story.
I get that you could prefer a different crossfeed setting on some specific tracks, just like one might want more bass on some tracks, because it feels better to you that way. but the objectivity behind it... get out of here.


----------



## 71 dB

SoundAndMotion said:


> echo your call for evidence for the 3dB ILD.



I ignore gregorio nowadays, so I wouldn't know what he calls for. The 3 dB ILD at low frequencies is a ballpark value based on my insight into the subject. If you like 4 dB better, fine, but that's more or less where ILD at low frequencies is. I believe I have previously (2 years ago) mentioned it's the mean value of soundwaves from all directions. I was criticized over room modes, but it was ridiculous. I studied control of room modes for Genelec in a 4-year project and I have my name on patents regarding the issue, so I have "some" insight into how low frequencies behave in a room. I'm pretty sure records are mixed in studios with acoustic treatment that keeps modes under some sort of control, so arguing headphones _should_ imitate rooms with lousy acoustics at low frequencies and massive modes is totally ridiculous.

Theoretically it's possible that if you have one ear EXACTLY at the bottom of a room mode notch and the other ear is about 17 cm from that point, you can have significant ILD, but this is a ridiculous point because such a listening position/situation means the sound at that frequency is CRAP! You are kind of supposed to do something about it: move your head to find a better sweetspot and/or modify the acoustics of the room so that the modes are less severe. Also, when these larger ILD situations happen, the sound pressure level is very low at that frequency. One ear hears next to nothing, and since the other ear is also near the bottom of the notch, it experiences a lowered sound pressure level too. The frequency in question is likely to be masked, and since the equal loudness curves at low frequencies are closer to each other and the threshold of hearing is high, it's likely both ears are UNDER the threshold of hearing/masked so that ILD doesn't matter at all! It is not even heard! Even if it is barely heard, its contribution to "excessive" spatiality remains more or less insignificant. Room modes are typically an issue under 200 Hz. Above that the modes become so densely packed they are considered reverberation rather than individual modes. Also, the room absorbs more sound, so the modes become less severe. At 800 Hz crossfeeders typically have a -3 dB point, meaning 3 dB more of ILD, and so on...
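The level arithmetic behind crossfeed's ILD reduction can be sketched with a toy calculation. This assumes a plain cross-mix at -12 dB with the delay and filtering ignored; the function name and figures are illustrative, not taken from any particular crossfeed product:

```python
import math

def ild_after_crossfeed(ild_db, mix_db):
    """ILD remaining after adding the opposite channel at mix_db
    attenuation to each channel (delay and filtering ignored)."""
    g = 10 ** (mix_db / 20)       # linear crossfeed gain
    r = 10 ** (-ild_db / 20)      # quieter channel relative to louder
    # louder channel becomes 1 + g*r, quieter becomes r + g
    return 20 * math.log10((1 + g * r) / (r + g))

# a hard-panned 20 dB ILD shrinks to roughly 9.3 dB with a -12 dB mix
print(round(ild_after_crossfeed(20.0, -12.0), 1))
```

Under these assumptions a large ILD is roughly halved while small ILDs are barely touched, which is the sense in which crossfeed "scales" rather than removes channel differences.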


----------



## 71 dB (Nov 13, 2019)

castleofargh said:


> that just makes no sense to me. we agree that an album will most likely have been mastered with and for stereo speakers. so if you consider applying a different compensation on some albums for alleged objective reasons, you're treating the sound engineer's artistic intent as an objective mistake. which is nonsense, end of story.
> I get that you could prefer a different crossfeed setting on some specific tracks, just like one might want more bass on some tracks, because it feels better to you that way. but the objectivity behind it... get out of here.



People hardly ever hear the music the way the sound engineer artistically intended it. What the engineer heard in the studio is what he/she intends. People have different speakers and rooms, so even the speaker sound is different from what is heard in the studio. Of course the rational intent is that it sounds good, so engineers learn to mix so that it sounds good on different kinds of speakers and in less than optimal acoustics. Headphones have always been secondary in this. It's not "making mistakes". It's traditions. Recordings have different levels of ILD. They don't look the same on a goniometer. I hear it and I have analysed recordings, so I know this as a fact. On speakers this doesn't matter much, because speakers in a room are effectively ILD regulators, but headphones are not! Regulating is needed, and I do it using a crossfeed with adjustable level.


----------



## 71 dB

castleofargh said:


> let's go back to the basics, as explaining doesn't work. your claim, your burden. where is the evidence? how many recordings have you done on people to claim objective improvement? I'm guessing you did a great many, as your claim isn't limited to potential improvement (something I actually would agree with objectively, as the potential at least is there), you even argue that crossfeed is always an improvement as long as it goes in "the right direction". to be able to claim that, you must have some huge study, right? you wouldn't make objective claims about stuff you have never confirmed objectively, right? not in Sound Science..



I simply do not have evidence that is good enough for you, only evidence that is good enough for me. I am tired of fighting over my credibility. If you think I know nothing, then think that! I don't care. Should I put you on ignore too?


----------



## 71 dB

castleofargh said:


> If someone is used to headphones, maybe fixing nothing will feel more natural to him than any crossfeed.



Fine, but such a subjective opinion doesn't refute my objective claims: since headphones often give significantly larger ILD at low frequencies than speakers, and recordings are mixed for speakers, the conclusion is that scaling ILD with crossfeed is objectively justified.


----------



## 71 dB

IF headphone spatiality were half-decent, improving it with simple methods would be challenging, but since it is complete crap (with recordings mixed for speakers), cherry picking is fine and an improvement. I am totally tired of this cherry picking thing! I have said several times that I KNOW I am doing it and I know what I am ignoring and what it means! So stop it!


----------



## 71 dB

Crossfeed is VERY close to HRTF compared to no crossfeed! No crossfeed is as far from HRTF as you can get! Compared to that, crossfeed is pretty close.


----------



## 71 dB (Nov 13, 2019)

You people are constantly refuting my objective claims using subjective opinions, while telling me my own subjective opinions are worthless! Are we talking about subjective opinions or objective facts? You can't refute my objective cherry-picked claims using (also cherry-picked) subjective opinions (some people don't like crossfeed, etc.).

Seems I need to ignore 80 % of people to not lose it here! So tired.


----------



## bfreedma

71 dB said:


> You people are constantly refuting my objective claims using subjective opinions, while telling me my own subjective opinions are worthless! Are we talking about subjective opinions or objective facts? You can't refute my objective cherry-picked claims using (also cherry-picked) subjective opinions (some people don't like crossfeed, etc.).
> 
> Seems I need to ignore 80 % of people to not lose it here! So tired.




The problem is, you don't have objective data that addresses the full issue and simply cherry pick for convenience to support your subjective views.

Ignoring everyone is an option, just not one well aligned with Sound Science discussions.  If you're going that route, I think a better option is to move the conversation to another sub forum.


----------



## 71 dB

bfreedma said:


> The problem is, you don't have objective data that addresses the full issue and simply cherry pick for convenience to support your subjective views.



How are we supposed to discuss this issue objectively if the objective data doesn't even exist? What is the "full issue"? How do you want to define it? I have said a million times my "cherry picking" is based on the assumption that ILD is the major problem in headphone spatiality, and that ignoring other aspects doesn't nullify the benefits of fixing ILD.


----------



## castleofargh

71 dB said:


> You people are constantly refuting my objective claims using subjective opinions, while telling me my own subjective opinions are worthless! Are we talking about subjective opinions or objective facts? You can't refute my objective cherry-picked claims using (also cherry-picked) subjective opinions (some people don't like crossfeed, etc.).
> 
> Seems I need to ignore 80 % of people to not lose it here! So tired.


you do it on your own, stop that victim crap. we're merely the reactive compound here. when you feed Sound Science with empty or false claims, we react. and if you post them over and over again, the more unstable elements of the forum will explode. it's a well known chemical reaction. anybody spending time in this section is fully aware of it. yet here you are, spamming the same claims you have no business claiming.
all it takes to settle the matter is for you to either provide objective evidence for your objective claims, or to, wait for it, just stop posting the same empty claims in this section like a spam bot. for 2 years now, you've provided supporting evidence that you can't do either.

PS: usually when we get posting sprees, and we see the legendary "you people" pop up, that poster is on his way out. often of his own accord, sometimes not. <= that's the mod in me talking.



71 dB said:


> How are we supposed to discuss this issue objectively if the objective data doesn't even exist? What is the "full issue"? How do you want to define it? I have said a million times my "cherry picking" is based on the assumption that ILD is the major problem in headphone spatiality, and that ignoring other aspects doesn't nullify the benefits of fixing ILD.


congrats, you just explained why you should not make those objective claims! \o/
to discuss objective stuff without enough data, we have ideas and hypotheses; you can indulge in those as much as you want. we can make assumptions and wonder where they would lead us. and maybe when we experiment, we can check how that holds up. it might not be proof, but it could suggest we're on the right track (or not). what we do not have is the right to make objective claims without the objective facts to demonstrate those claims to be true. it's that simple. I say we don't have the right, but of course I mean rationally. in practice Headfi didn't give this section the rules needed to enforce the very methods of science. so people post whatever, and in this section, we react.


----------



## bigshot (Nov 13, 2019)

71 dB said:


> People hardly ever hear the music the way the sound engineer artistically intended it. What the engineer heard in the studio is what he/she intends. People have different speakers and rooms, so even the speaker sound is different from what is heard in the studio. Of course the rational intent is that it sounds good, so engineers learn to mix so that it sounds good on different kinds of speakers and in less than optimal acoustics. Headphones have always been secondary in this.



Having supervised a few mixes in my day, I can tell you that the intent of the mix is definitely not just what is heard in the studio. Mixes are built to suit a range of home equipment and circumstances, some more specific than others, depending on the purpose of the mix. That said, I have never seen a mix monitored on headphones, and I've never seen a set of decent cans on a mixing stage. Not to say that isn't done, but I don't think it's common. Headphones tend to suck up peaks more than speakers in a room. Generally, it's the wolf tones that one is looking for when you use a different monitoring system. If you get a mix working on speakers, it will be fine on headphones. That isn't necessarily the case with TV speakers, where overdriving certain frequencies can cause problems. I can't picture a reason why an engineer would reduce channel separation.

I've been listening to a variety of music on my new AirPod Pros lately. Some of it has wide separation between the channels and some is massed in the middle. It's still all "inside the head"- no spatiality, no distance, no soundstage- but even the widest spread doesn't sound any less natural to me. If anything, mono vocals anchored in the center seem more inside my head than stuff on the far right and left. I can't see how reducing the channel separation could possibly increase spatiality. To do that, you need to mess around with the time factor, because space is directly related to time.

I have no objection to subjective preferences being discussed and explaining why one might prefer one thing over something else. Whatever floats your boat- fine with me. It's interesting to hear the range of preferences people have. The only time I object to subjectivity is when it's dressed up in the guise of objective science. Science is the meat and potatoes baseline. Subjectivity is the seasoning added to that to tweak it to where you like it. There's a place for both, but they aren't equal, and objectivity should not be used to justify subjectivity.

I really don't care for the pitiful victim routine. It makes you look desperate, and that isn't a good thing at all. When it comes down to it, I think the victimization is fueling your enthusiasm for this thread more than crossfeed is. I won't participate in that, either as a victimizer or as a white knight. You can beat yourself up and cry about it in private; I don't have to put up with that. We're adults here, not babies.



castleofargh said:


> usually when we get posting sprees, and we see the legendary "you people" pop up, that poster is on his way out. often of his own accord, sometimes not.



Exactly. We are a group. We're all different, but we identify as a group. Go ahead and call us a gang or a group of thugs, but we aren't the ones keeping you out. It's you not agreeing to be a part of the whole that ends up with your isolation.


----------



## gregorio (Nov 14, 2019)

SoundAndMotion said:


> [1] I'm in a good mood now and really don't want to fight with you, but I'm also not afraid to. But, can we avoid it, please?
> 6- You know I meant in the 71dB side vs. gregorio side, one is not "correct" and the other not "incorrect". I was not referring to "subjective" vs. "objective" sides, also not "true" vs. "untrue" sides. I meant: you both make good points; you both make mistakes and say untrue things(me too); you both cherry-pick (me too); there is no winning side between you.
> 6C- After writing a few posts to 71dB and bfreedma (and mentioning you), I was not asking for a response from you, but I also wasn't surprised. You peppered me with 13 questions (one of your rhetorical devices). If I counted correctly, I answered 7, told you why I would ignore 5, and yeah, I guess I left one of them kind of hanging there... I cherry-picked. Should I claim you lied (another rhetorical method of yours) because I did answer some? No, I don't believe you did. I didn't use your numeration method, and I think you missed the answers within my text.
> 7. If you want to know how 71dB must feel when you shred his posts, I'd be happy to take your last post to him and shred it for you. Give it the gregorio treatment. I could cherry-pick and ignore your valid points and relevant questions, number each individual mistake, and go on and on about each one. I would then use the sheer volume to call into question whether you really know anything...
> ...



1. Hopefully! And anyway, I believe castleofargh has very adequately dealt with many of your points already. But there are a few I'll respond to:

6 - I'm not interested in personal sides. I don't come here for personal validation, I get/judge that from my professional roles and the feedback from professional clients, critics, etc. Here I'm just interested in the science/facts, as the subforum name indicates. Also, of course I "cherry-pick" when refuting cherry-picked assertions, there's no logical alternative! If someone is making assertions based on cherry picked science/facts (IE. Ignoring other relevant facts), how else can such assertions be refuted except by concentrating on (cherry-picking) those facts which have been ignored? I therefore don't see the equivalency of "cherry-picking" that you're asserting/implying.

6C - I wasn't lying, I did miss the answers within your text because they are rather oblique and even re-reading it, I'm still not entirely sure you've answered them and if you did, what those answers were. However, if subsequently you did enumerate and further explained your answers, then I would indeed be lying if I stated you hadn't answered them! What about if you re-phrased your answers in several different ways, other posters did too and even used various analogies all in order to make it as obvious as possible you were in fact answering the questions? If I still stated you weren't answering the questions, what conclusion other than that I'm lying (or have some serious cognitive impairment) could there be?

7. Sure, no problem. In fact to a certain extent, I could shred it myself. It was an over-simplification and therefore open to some "shredding", although if you try, you're going to run into a lot of problems with the "Firstly" (which overrides the "Secondly" anyway) and the basic point of the "secondly" is correct, which you could verify for yourself quite cheaply.

8. OK, if you don't have the basic cheap tools to verify it yourself, you can't be sure of all the room FR plots you can find with Google, and you want some citations:

"*low-frequency sources near the head (especially at distances <1 m) can generate large ILDs (>10 dB) due to distance-dependent sound dispersion rather than frequency-dependent head-shadowing (Brungart et al. 1999; Kim et al. 2010). Large low-frequency ILDs can also occur in multisource, reverberant environments (Gourevitch and Brette 2012; Młynarski and Jost 2014).*" - "Sound frequency-invariant neural coding of a frequency-dependent cue to sound source location". Heath G. Jones, et al. Journal of Neurophysiology. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4509402/].

And as reverberant environments is what we're talking about (with speakers or any sound source in any room, other than an anechoic chamber), here is a quote from a paper just cited:

"*Without reflections, ILDs are always smaller than about 10 dB. With a reflection at a wall, they can reach more than 30 dB.*" - "The impact of early reflections on binaural cues". Boris Gourévitch and Romain Brette. The Journal of the Acoustical Society of America. [https://www.researchgate.net/profil...aural_cues/links/0f31753ac23469cad9000000.pdf]

8a. You could cite Feddersen et al. and indeed the 3dB ILD figure is quite commonly cited in a number of texts/other papers but where "he/they got that" is of vital importance because without exception (as far as I'm aware) "they got that" from FREE FIELD measurements, calculations and studies (IE. Anechoic chambers) and is therefore NOT applicable to speakers in a domestic room (or even a studio) and is obviously NOT what one would always experience "naturally" (in the real/natural world), unless you live your entire life in an anechoic chamber! Furthermore, 71dB claims that he's studied Acoustics but of course anechoic chambers don't have any acoustics, so what was it that he studied?



71 dB said:


> [1] The 3 dB ILD at low frequencies is a ballpark value based on my insight into the subject.
> [2] If you like 4 dB better, fine, but that's more or less where ILD at low frequencies is.
> [3] I was criticized over room modes, but it was ridiculous. I studied control of room modes for Genelec in a 4-year project and I have my name on patents regarding the issue, so I have "some" insight into how low frequencies behave in a room.
> [3a] I'm pretty sure records are mixed in studios with acoustic treatment that keeps modes under some sort of control, so arguing headphones _should_ imitate rooms with lousy acoustics at low frequencies and massive modes is totally ridiculous.
> ...



1. Fine, but as this isn't the "71dB's insight" forum, what is your insight into the subject based on? It's clearly NOT based on the science of acoustics; maybe it's based on the science of no acoustics? Maybe you have a personal bias from reading some text like the Feddersen paper mentioned above, or maybe it's just a personal preference? Either way, you're contradicting the science of acoustics and have presented no reliable evidence, only repeated 100+ times your personal perception/preference!

2. No it's not!! Provide applicable evidence (IE. Not free field/anechoic chamber evidence).

3. You keep asserting how qualified you are with acoustics but then completely contradict your assertion by making basic errors. And you're cherry-picking again (just for a change!): you were not just criticized over room modes but over room acoustics in general.
3a. Some sort of control, yes; 3dB, "no"! 10dB is pretty good for a studio.
3b. Not just theoretically but in practice. Are you now admitting that it IS possible to have naturally occurring ILD of up to 30dB or more and NOT 3dB?
3c. This is clearly untrue! Have you never actually measured or seen room acoustic measurements? How is that possible if you've studied acoustics? Just in case there's anyone here who hasn't seen such a measurement, here's a typical example (of thousands):

[image: in-room frequency response measurement]

Notice that we have a variation of around 30dB, that the variation *ABOVE* 200Hz (300Hz-1kHz) is over 20dB, and that we don't only have dips/notches but peaks as well. So 71dB's following assertions are false: the sound pressure level is NOT necessarily low. We could just as easily have one ear near a peak that's, say, 12dB ABOVE the average level, and it's not likely but UNLIKELY that both ears are under the threshold of audibility! And of course, we're not only talking about room modes but all the acoustic phenomena, such as reflections from a desk or whatever else consumers commonly put their speakers on, floor/ceiling reflections, etc. Again, does 71dB really not know anything about acoustics except room modes, or is he just cherry-picking again and contradicting the science and objective empirical evidence?

BTW, the above image was ironically taken from: http://www.acousticfrontiers.com/room-acoustic-measurements-101/



71 dB said:


> [1] People hardly ever hear the music the way the sound engineer artistically intended it. What the engineer heard in the studio is what he/she intends. People have different speakers and rooms, so even the speaker sound is different from what is heard in the studio. Of course the rational intent is that it sounds good, so engineers learn to mix so that it sounds good on different kinds of speakers and in less than optimal acoustics.
> [2] Headphones have always been secondary in this. It's not "making mistakes". It's traditions.
> [3] Recordings have different levels of ILD. On speakers this doesn't matter much, because speakers in a room are effectively ILD regulators, but headphones are not!
> [4] Regulating is needed, and I do it using a crossfeed with adjustable level.



1. That's a contradiction. If engineers "_learn to mix so that it sounds good on different kinds of speakers and less than optimal acoustics_" then surely the engineers' artistic intent is designed for different kinds of speakers and less than optimal acoustics and people can often hear the music as the engineers intended it?

2. True, it's not a mistake. But it's not true that "it's traditions"! It's actually market forces, what consumers want to buy. If someone made an album/track primarily for headphones, with little consideration for speaker reproduction, and it sold in significant enough numbers, many artists/engineers would quickly change what you call "traditions". This has in fact been tried various times in the past, and such releases have never sold in significant numbers.

3. Speakers in rooms are not "ILD regulators"; they can also be ILD exacerbators, as empirical room measurements demonstrate, unless of course you're talking about just speakers and ignoring the effects of the room, as you have consistently!

4. According to what, is "regulating needed"? Not according to the science and not according to any rule of music/music recording, so what does that leave? Personal subjective preferences NOT objective facts!

I know that 71dB is ignoring my posts, but others can judge for themselves whether this is "useful" objective facts and science, or just the same ignorant assertions about engineers' intent and cherry-picked, inapplicable science, all based on the subjective preferences of someone with a crossfeed agenda.

G


----------



## 71 dB

I can't help but peek at what has been talked about behind my back, and it's not pretty! What can I do but comment on this! 



gregorio said:


> 8. OK, if you don't have the basic cheap tools to verify it yourself, you can't be sure of all the room FR plots you can find with Google, and you want some citations:
> 
> "*low-frequency sources near the head (especially at distances <1 m) can generate large ILDs (>10 dB) due to distance-dependent sound dispersion rather than frequency-dependent head-shadowing (Brungart et al. 1999; Kim et al. 2010). Large low-frequency ILDs can also occur in multisource, reverberant environments (Gourevitch and Brette 2012; Młynarski and Jost 2014).*" - "Sound frequency-invariant neural coding of a frequency-dependent cue to sound source location". Heath. G. Jones, et al. Journal of Neurophysiology. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4509402/].
> 
> ...



You are cherry picking yourself to discredit me. Yes, even at low frequencies ILD can get large (20 dB!) when the sound source is very near one ear, but that's not what happens with speakers in a room. Your speakers are 10 feet away or so, not 10 inches or less! I have used this to justify lowering ILD so that the sounds appear further away from the head, so that we get a miniature soundstage instead of sound very near the head.

ILD can be 30 dB, totally agree, but at what frequency range? Look at the pictures in your source! They start at 500 Hz or so! The ILD gets large in those pics (Fig. 8A & 8C) above 1250 Hz! Crossfeed typically operates at frequencies _below_ 800 Hz, and the 3 dB rule applies strictly to frequencies below 500 Hz (between 500 and 800 Hz a little bit more can be tolerated). Those pictures in your source support my claims! Thanks!

So, you don't even understand what your linked source says, and you keep claiming I don't know anything about acoustics! Look at Fig. 9. How much ILD modification do you see below 1 kHz? Look at Fig. 12. Again, 5 dB of ILD is the max you have below 1 kHz.
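The "operates below 800 Hz" behaviour can be sketched by putting a first-order lowpass in the opposite-channel path, which is one common way such filters are described; the -12 dB mix level and 800 Hz corner here are illustrative assumptions, not measurements of any particular crossfeeder:

```python
import math

def cross_path_gain_db(f_hz, fc_hz=800.0, mix_db=-12.0):
    """Gain of the opposite-channel path: mix attenuation plus a
    first-order lowpass with corner fc_hz (assumed typical values)."""
    lp = 1.0 / math.sqrt(1.0 + (f_hz / fc_hz) ** 2)
    return mix_db + 20 * math.log10(lp)

# cross-path level at a few frequencies: near -12 dB in the bass,
# -3 dB down at the corner, falling off above it
for f in (100, 800, 4000):
    print(f, round(cross_path_gain_db(f), 1))
```

Under these assumptions the crossfed signal sits near -12 dB through the bass and keeps falling above the corner, so the stereo image in the treble is left mostly alone.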


----------



## bigshot

71 dB said:


> I can't help but peek at what has been talked about behind my back, and it's not pretty! What can I do but comment on this!



Happy birthday, SoundandMotion!

71dB, your inability to control and moderate your own posting is the core of the problem here.


----------



## 71 dB (Nov 14, 2019)

gregorio said:


> Have you never actually measured or seen room acoustic measurements? How is that possible if you've studied acoustics? Just in case there's anyone here who hasn't seen such a measurement, here's a typical example (of thousands):
> 
> 
> 
> ...



I have made impulse response measurements of rooms. I have analysed the decay times of room modes, made waterfall plots and all that. The example plot is typical below 300 Hz and above 4000 Hz, but has quite big problems between 300 and 4000 Hz. I'd say it's some strong reflections from the table the speaker is on or something like that.

Yes, the variation between 300Hz-1kHz is over 20 dB (the 1/3rd octave smoothed curve is within 15 dB), but this is quite a bad response, as I said above. Killing the table reflection or whatever it is could reduce the variation to 10 dB, making the response much better, but even this 20 dB+ doesn't debunk my claims.

Peaks are "easier" ILD-wise than notches. Peaks are broader in space, meaning the sound pressure level changes less from the left ear to the right ear than with notches, which are narrower in space. That's just how in-phase and out-of-phase signals sum together. So, the sound pressure level is high, but ILD is low. When ILD is higher (at notches), the sound pressure level is low.
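The in-phase/out-of-phase summation claim can be illustrated with a single-reflection comb filter; the 5 ms delay and 0.9 reflection amplitude below are made-up values for the sketch. In-phase frequencies gain at most 20·log10(1+r) while out-of-phase frequencies drop to 20·log10(1-r), so with r = 0.9 the peaks top out around +5.6 dB while the notches fall to -20 dB:

```python
import cmath, math

def sum_mag_db(f_hz, delay_s=0.005, r=0.9):
    """Level of direct sound plus one reflection of relative
    amplitude r arriving delay_s later (single-reflection sketch)."""
    h = 1 + r * cmath.exp(-2j * math.pi * f_hz * delay_s)
    return 20 * math.log10(abs(h))

# with a 5 ms delay: peaks at 0, 200, 400 Hz; notches at 100, 300, 500 Hz
print(round(sum_mag_db(200.0), 1))   # peak
print(round(sum_mag_db(100.0), 1))   # notch
```

Evaluating a few frequencies around 100 Hz also shows the asymmetry in width: a few Hz off the notch the dip has already shrunk by several dB, while the same offset from a peak costs almost nothing, which is the "broad peaks, narrow notches" point made above (here in frequency; the spatial argument is analogous).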


----------



## gregorio

71 dB said:


> [1] You are cherry picking yourself to discredit me.
> [2] Yes, even at low frequencies ILD can get large (20 dB!) when the sound source is very near one ear, but that's not what happens with speakers in a room. Your speakers are 10 feet away or so, not 10 inches or less! I have used this to justify lowering ILD so that the sounds appear further away from the head, so that we get a miniature soundstage instead of sound very near the head.
> [3] ILD can be 30 dB, totally agree, but at what frequency range?
> [3a] Look at the pictures in your source! They start at 500 Hz or so!
> [3b] Those pictures in your source support my claims! Thanks!



1. I AM cherry-picking, I'm cherry-picking the facts that you're ignoring, because you're ignoring them! Duh.

2. But you have claimed, not incorrectly, that ERs/Reverb is the main indicator of distance, not ILD. And, WE do NOT "get miniature soundstage instead of sound very near head", YOU might get that due to your preferences/perception but objective facts are NOT defined by your preferences! How many times?

3. Also in response to point #2: "_Large *low-frequency* ILDs can also occur in multisource, reverberant environments (Gourevitch and Brette 2012; Młynarski and Jost 2014)"_
3a. Did you look at the picture I actually posted? There is potential for relatively large ILD in several places in the low freq range. A good example would be the roughly 18dB differential between about 280Hz - 350Hz, although of course we'd have to take a measurement say 6" away and then measure the difference between the two. Have you never done this? 
3b. They do if you're a Gerbil, THANKS! Didn't you even read the paper?
"_The main difference with small animals is that humans stand, and therefore the ground is further from the ears. This implies that delays between direct and ground reflected sounds are typically longer for sources in the horizontal plane, even at relatively far distances. Longer delays mean lower interference frequencies. For example, with a 10 ms delay, the *first interference frequency is 50 Hz and the next ones are at 150 and 250 Hz; with a 6.5 ms delay, i.e., a distance of 1.5 m, they are 77, 231, and 385 Hz.* These are in the hearing range of humans. In addition, as we have seen in Sec. III B, natural surfaces are generally very reflecting at low frequencies. Moreover, humans typically stand on artificial textures such as concrete or asphalt that are strongly reflective. Therefore, we expect strong modifications of low frequency binaural cues for sources at the same height as the listener._"
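The interference frequencies quoted in that passage follow from the delay alone: for a reflection arriving a time τ after the direct sound (assuming no phase inversion at the surface), destructive interference lands at odd multiples of 1/(2τ). A quick check of the paper's two examples:

```python
def interference_freqs(delay_s, n=3):
    """First n destructive-interference (notch) frequencies, in Hz,
    for a reflection delayed by delay_s: odd multiples of 1/(2*delay)."""
    f0 = 1.0 / (2.0 * delay_s)
    return [round((2 * k - 1) * f0) for k in range(1, n + 1)]

print(interference_freqs(0.010))    # 10 ms delay  -> [50, 150, 250]
print(interference_freqs(0.0065))   # 6.5 ms delay -> [77, 231, 385]
```

Both lists reproduce the figures the paper gives, so the quoted numbers are internally consistent.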

Round and round, well done!

G


----------



## gregorio

Ah, you replied in the meantime, good!



71 dB said:


> [1] Yes, the variation between 300Hz-1kHz is over 20 dB (the 1/3rd octave smoothed curve is within 15 dB), but this is quite a bad response, as I said above.
> [2] Killing the table reflection or whatever it is could reduce the variation to 10 dB, making the response much better,
> [2a] but even this 20 dB+ doesn't debunk my claims.



1. It is a moderately bad response but not at all uncommon.

2. In the real world, how many listeners can and do always acoustically treat the top of their desks or tables? But regardless of 10dB or 20dB ...
2a. You claimed that "naturally" occurring amounts of ILD do not exceed 3dB with speakers in the real world (and based many of your "objective facts" on this). So provided 20dB (or 10dB) is less than 3dB, your claim has not been debunked!!! 

G


----------



## 71 dB

gregorio said:


> 1. It is a moderately bad response but not at all uncommon.
> 
> 2. In the real world, how many listeners can and do always acoustically treat the top of their desks or tables? But regardless of 10dB or 20dB ...
> 3. You claimed that "naturally" occurring amounts of ILD do not exceed 3dB with speakers in the real world (and based many of your "objective facts" on this). So provided 20dB (or 10dB) is less than 3dB, your claim has not been debunked!!!
> ...


1. Unfortunately not uncommon at all, but bad nevertheless, and suggesting that headphone sound should be modeled after such a lousy response is lunacy. If the response is that bad because of a table, the response will be somewhat similar at both ears, meaning little ILD modification, so ILD is the least of the problems!

2. See 1. The frequency response is crap, but that's it.

3. A frequency response measured at one point of a room having 30 dB variation doesn't mean a person experiences 30 dB of ILD in the room! Don't mix frequencies and points in space together! We need another response 17 cm apart, and a head between, to have a real comparison of what the two ears hear and to calculate ILD. Also, ILD can theoretically be larger than 3 dB at low frequencies (e.g. a sound source near one ear), but is it desirable? How much ILD is needed? Spatial hearing is based mostly on ITD at low frequencies, so what is the point of "testing the limits" of spatial hearing with excessive ILD? Why have 5 dB or 7 dB if 3 is enough?
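The "two responses 17 cm apart" point can be sketched numerically: ILD at a given frequency is the level difference between the two ear-position responses, so a deep dip that appears identically in both responses contributes nothing to ILD (toy example; all values are hypothetical, not measurements):

```python
import numpy as np

def ild_db(mag_left, mag_right):
    """Per-frequency Interaural Level Difference in dB, from two magnitude
    responses measured ~17 cm apart (the two ear positions). A 30 dB dip
    in a single-point room response only implies a large ILD if the other
    ear's response does NOT share that dip."""
    return 20.0 * np.log10(np.asarray(mag_left, float) / np.asarray(mag_right, float))

# Both ear positions see the same deep room-mode notch -> ILD stays 0 dB.
left  = [1.0, 0.03, 1.0]   # 0 dB, ~-30 dB notch, 0 dB
right = [1.0, 0.03, 1.0]
print(ild_db(left, right))   # [0. 0. 0.]

# The notch appears at only one ear -> large ILD at that frequency alone.
print(ild_db(left, [1.0, 1.0, 1.0]))   # [0., ~-30.5, 0.]
```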


----------



## 71 dB

bigshot said:


> Happy birthday, SoundandMotion!
> 
> 71dB, your inability to control and moderate your own posting is the core of the problem here.



So I should allow people to write that I don't know anything? What would you do? Can you stop posting yourself?


----------



## 71 dB

gregorio said:


> 2. But you have claimed, not incorrectly, that ERs/Reverb is the main indicator of distance, not ILD. And, WE do NOT "get miniature soundstage instead of sound very near head", YOU might get that due to your preferences/perception but objective facts are NOT defined by your preferences! How many times?
> 
> 3. Also in response to point #2: "_Large *low-frequency* ILDs can also occur in multisource, reverberant environments (Gourevitch and Brette 2012; Młynarski and Jost 2014)"_
> 3a. Did you look at the picture I actually posted? There is potential for relatively large ILD several places in the low freq range. A good example would be the roughly 18dB differential between about 280Hz - 350Hz, although of course we'd have to take a measurement say 6" away and then measure the difference between the two. Have you never done this?
> ...



2. ER, reverberation and ILD are ALL cues of distance! For ER and reverberation, headphones need to rely on the spatiality of the recording itself, since there is no "listening room". As for ILD, crossfeed can help. For me this creates a miniature soundstage if the spatiality of the recording allows it (is good enough to fool my spatial hearing).
3. I see large low freq. ILD only for near sounds… …also, really weird acoustics are not a good "model" for headphones - good acoustics are.
3a. I looked at many pictures - those that had something to do with ILD. Yes, exactly, you need another measurement about 6" away. If you know how soundwaves behave in a room, you know large ILD doesn't happen just like that. My ears are sensitive to large ILD at low frequencies and I have never experienced it anywhere. As I move my head in a room, the loudness level changes at mode frequencies, but my ears hear almost the same amplitude. You need higher frequencies to create sudden spatial changes => ILD is large at high freqs!
3b. I don't have time to read everything. I looked for 5 minutes so I could answer. I may read later when there's time… slow reader, you know…
The bolded text doesn't mean big ILD!!! It means notches/peaks in the frequency response! A listening room is not just a ground plane. It's a room with walls and a ceiling, meaning much more complex reflections, meaning the interference from the ground is just a drop in the bucket.


----------



## gregorio

71 dB said:


> 1. Unfortunately not uncommon at all, but bad nevertheless, and suggesting that headphone sound should be modeled after such a lousy response is lunacy.
> [1a] If the response is that bad because of a table, the response will be somewhat similar at both ears, meaning little ILD modification, so ILD is the least of the problems!
> 
> 2. See 1. The frequency response is crap, but that's it.
> ...



1. Convolution reverbs are taken from impulse responses of real rooms, many of which are pretty bad (although that's what gives them their character). So those who apply convolution reverbs to their HRTFs, or on their own, or to the recordings themselves are all lunatics, are they, while you're the sane one?
1a. Not necessarily. Again, why don't you take some measurements and actually find out for yourself?

2. Yes exactly that IS it, that is what most consumers experience!

3. So I say "_A good example would be the roughly 18dB differential between about 280Hz - 350Hz, *although of course we'd have to take a measurement say 6" away and then measure the difference between the two. Have you never done this?*_" and you say .... Oh dear!
3a. Hallelujah brother!
3b. Desirable or needed for whom? You, the artists whose art it is, everyone else?
3c. What excessive ILD? You've just said ILD can be much larger than 3dB, therefore, much larger than 3dB of ILD is not excessive, it's "natural"!
3d. Because "more" is always better than "enough"! Why not test the limits of spatial hearing or even exceed them? An artist can do whatever the medium allows, limited only by their imagination, NOT by your personal preferences!

G


----------



## bigshot (Nov 14, 2019)

Soundstage is created by the room. You don't have soundstage without a room.

Spatiality is created by physical space.

The effect of a listening room on the recording isn't error. It completes the intended sound.

You don't have to listen to the complete intended sound, you can listen to just the recording inside your head using headphones.

You can like the way headphones present music better than the complete intended sound. You can add crossfeed if you like. All that is OK.

But crossfeed doesn't add the element of the room to complete the sound. On the contrary, it adds a totally different effect.


----------



## gregorio

71 dB said:


> 2. ER, reverberation and ILD are ALL cues of distance!
> [2a] For ER and reverberation, headphones need to rely on the spatiality of the recording itself, since there is no "listening room". As for ILD, crossfeed can help.
> [2b] For me this creates a miniature soundstage if the spatiality of the recording allows it (is good enough to fool my spatial hearing).
> 3. I see large low freq. ILD only for near sounds… …
> ...



2. Yes they are, although ER/reverb is a far greater cue of distance than ILD.
2a. It can help some people, for others it does the opposite because obviously you're crossfeeding the ER/Reverb cues that are on the recording, not just the ILD and many/most people are sensitive to those ER/Reverb cues.
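That point can be made concrete: a basic crossfeed feeds a delayed, lowpass-filtered, attenuated copy of each channel into the other, and it necessarily processes the whole channel signal, ER/reverb content included. A minimal sketch, loosely in the style of bs2b-type crossfeeds (all delay/gain/filter values here are hypothetical, not any particular product's):

```python
import numpy as np

def simple_crossfeed(left, right, fs=44100, delay_ms=0.3, atten_db=-8.0, cutoff_hz=700.0):
    """Minimal crossfeed sketch: each output channel gets a delayed,
    lowpass-filtered, attenuated copy of the OPPOSITE channel. Note it is
    applied to the entire channel signal, so any early-reflection or reverb
    content in the recording is crossfed too, not just the direct sound."""
    d = int(round(fs * delay_ms / 1000.0))      # interaural-style delay in samples
    g = 10.0 ** (atten_db / 20.0)               # crossfeed gain
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)   # one-pole "head shadow" lowpass coefficient

    def lowpass(x):
        y = np.zeros_like(x)
        for n in range(len(x)):
            y[n] = (1 - a) * x[n] + a * (y[n - 1] if n else 0.0)
        return y

    def delayed(x):
        return np.concatenate([np.zeros(d), x[:len(x) - d]])

    out_l = left + g * delayed(lowpass(right))
    out_r = right + g * delayed(lowpass(left))
    return out_l, out_r
```

Feeding an impulse into only the left channel shows the attenuated, delayed copy appearing in the right output; whatever reverb tail the left channel carried would leak across in exactly the same way.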
2b. Yes, we know that for you it creates a miniature soundstage and that reducing ILD fools YOUR spatial hearing, because you've told us 100+ times. And I personally don't doubt that you are telling the truth but, reducing the ILD is NOT good enough to fool my (and many others) spatial hearing. It's NOT an objective fact that reducing ILD will fool spatial hearing or that it won't!

3. But you do see large low freq ILD, Hallelujah brother!
3.1. It's not "really weird", it's very common. And what is "good acoustic"? "Good" as in subjectively pleasing or "good" as in flat? You only find a flat response in an anechoic chamber and it sounds subjectively "crap"! Some of the best sounding rooms/studios are nowhere near flat. You say you've measured lots of times, if so, you know the picture I posted is common enough to fall into the "typical" category. If I'm mixing a film or a piece of music and I want something to sound like it's in a fairly typical domestic room, would it be lunacy to apply a convolution reverb with a response like the one I posted, or lunacy not to?
3.2. But you don't know how soundwaves behave in a room. Maybe if you had a supercomputer and extremely accurate reflection/absorption coefficients of everything in the room but anyone experienced with acoustics knows the theoretical behaviour of soundwaves in a room never matches the objective measurements. Sometimes it's quite close, often it's significantly different.
3.3. You said you see large low freq ILD with near sounds, so if you've "never experienced it anywhere" does that mean you've never heard any near sounds? If you've never experienced it at greater distances, due to room effects, I'd be extremely surprised, though not surprised that you did experience it but didn't realise.
3.4. I've personally found that generally you do need higher freqs, typically higher than about 200-250Hz but I have experienced it as low as 80Hz. However, that's a personal observation/perception, I certainly wouldn't claim it's an objective fact!!!!!

3b. Correct, it doesn't. I bolded that text to indicate that the freq range was in the low freqs. However, the quote contained some other text: "_Therefore, we expect strong modifications of low frequency binaural cues for sources at the same height as the listener._" - What do you think that means?

Is there anything in this or my previous few posts that I or someone else hasn't already covered with you previously? Round and round we go!

G


----------



## bigshot

71 dB said:


> So I should allow people to write that I don't know anything? What would you do? Can you stop posting yourself?



Who cares what people in an internet forum say? I sure don't. I've had crazy people say all kinds of things about me and I don't care. The only reason I'm commenting here is because I'm interested in directionality and distance in music playback... and I'll admit that I'm also interested in the weird psychology of internet forums. Audiophile groups are the best for that because people usually start out with incorrect beliefs and feel compelled to defend their errors to the death. Audiophiles are like people who buy fancy cars. They wrap their ego around their fetish object. If you point out that their treasured prize isn't perfect, they take that as a personal attack on themselves. Weird.


----------



## Hifiearspeakers

bigshot said:


> Soundstage is created by the room. You don't have soundstage without a room.
> 
> Spatiality is created by physical space.
> 
> ...



You all in here keep getting this wrong. What type of headphones are you trying to typecast when you keep saying they offer no soundstage? Over ear, or circumaural, headphones do have space and a room. The earpads are the room! Now IEMs do channel the sound directly into the ear canal, so your criticism makes sense with those models. But over ear headphones still interact with the outer ear, concha, pinna, etc., and there are many models that present a soundstage to varying degrees. It is NOT all stuck in your head!


----------



## bigshot (Nov 14, 2019)

I have Sennheiser HD-590s and Oppo PM-1s. Neither of them sound anything like the soundstage from the speakers in my living room. I know headphone people talk about soundstage, but the majority of them have never experienced what it is. Not all recordings have clearly defined soundstage, especially rock music with complex mixes. But the ones that do have defined soundstage sound completely different on speakers than they do on headphones. Speakers set up a plane of sound 10 to 15 feet in front of the listener. Secondary depth cues create synthetic depth beyond that. The music is arranged left to right in front of you just like you were sitting in the audience in a concert hall with the band arranged left to right at a distance from the listener. If you close your eyes and listen, you can place the instruments in the orchestra in front of you in space and point to them in the distance. The sound fills the room and reflects off the walls creating a bloom of sound all around you that is incredibly natural sounding.

Headphones image through the center of the skull in a straight line from ear to ear. Secondary depth cues make the sound extend a little bit to the left and right, but mono content, like vocals, is centered square in the middle of your noggin. Nothing in front, nothing behind. Straight down the middle. I would like to find headphones that present a plane of sound from left to right 10 to 15 feet in front of me with enveloping bloom. I've never found any cans nor recordings (even binaural) that put the sound more than a few inches from my face. And mono vocals are always fixed square in the middle of my head.

The more space, the more spatiality. A larger room with a wider spread between speakers and a more distant listening distance will give you a larger soundstage. As you reduce the distances, the size of the soundstage shrinks. Nearfield speaker setups have very small soundstages. By the time you're putting cups right over your ears, the soundstage has shrunk to nothing and there is absolutely no distance in front or behind you.


----------



## gregorio (Nov 15, 2019)

Hifiearspeakers said:


> [1] You all in here keep getting this wrong. What type of headphones are you trying to typecast when you keep saying they offer no soundstage? Over ear or circumaural headphones do have space and a room. The earpads are the room!
> [2] Now IEM’s do channel the sound directly into the ear canal, so your criticism makes sense with those models. But over ear headphones still interact with outer ear, concha, pinna, etc., and there are many models that present a soundstage to varying degrees. It is NOT all stuck in your head!



1. And how many people can you have over to eat a meal in that room, or watch some football and have a beer?

2. You have some objective evidence for it NOT being "all stuck in" bigshot's head, do you? What about for other people? Contrary to your assertion, all the reliable evidence suggests that for most (almost all) people it IS "all stuck in their heads". This is so widely documented and accepted by the scientific community that it has its own scientific term, "Lateralization", and a great deal (if not virtually all) of the research into binaural sound, HRTFs, etc., is based on countering "Lateralization" (achieving "Externalization"). In this subforum, you cannot contradict established science based only on your personal perception with no reliable supporting evidence!



bigshot said:


> Headphones image through the center of the skull in a straight line from ear to ear. Secondary depth cues make the sound extend a little bit to the left and right, but mono content, like vocals are centered square in the middle of your noggin. Nothing in front, nothing behind. Straight down the middle. I would like to find headphones that present a plane of sound from left to right 10 to 15 feet in front of me with enveloping bloom. I've never found any cans nor recordings (even binaural) that put the sound more than a few inches from my face. And mono vocals are always fixed square in the middle of my head.



Careful you don't fall into the same trap as 71dB! I, for example, perceive head-stage very similarly to you most of the time but not always. I don't always perceive mono content square in the middle of my noggin, with some recordings I perceive it at the top of my skull, with some recordings I perceive sounds at the back of my skull. I can justify my perception as being correct using cherry-picked bits of science, for example certain sounds within a recording having a similar spectral/timbral distribution that the ear/brain uses to determine elevation. Likewise, your perception could also be justified using cherry-picked bits of science, for example using the basic meaning of the scientific term "Lateralization" itself. So who is right?

Taking the body of evidence (without cherry-picking), the science indicates 71dB, you and me are ALL correct/right! IE. That it's subjective and varies from person to person. Where 71dB is wrong, is not his perception of what he's hearing but in his assertion that it's objective (which contradicts the body of scientific evidence), that those who perceive differently to him are therefore deficient or defective in some way (taking the lesser of his insults, are not as educated or as aurally sensitive) and the fallacious "facts" he manufactures, cherry-picks or takes out of context to justify his view.

This is maybe a useful citation:

"_Recent experience has shown that when the features of individual HRTFs are accurately simulated with headphones, listeners report an externalized image (Wightman and Kistler, 1989). If the filter parameters are computed from a model, or measured on a dummy head, or taken from an average listener, then the image maybe externalized, but it is usually diffuse or localized incorrectly either in direction or distance (Laws, 1972; Wenzel et al, 1993). It is very common for the synthesis of sources that are in the front hemisphere to produce images that are in the back (Wightman et al, 1992). Frequently the synthesis of a distant source leads to an image that is on the surface of the skull. By contrast, the images of real-world sources are externalized, compact and correctly localized.
 A major problem with research on the externalization of sound images is that the externalization percept is subjective and not precisely defined. Listeners can be fooled. Even a crudely synthesised headphone source can be made to sound somewhat externalized by adding enough artificial reverberation (Sakamoto et al, 1976)._" - William M. Hartmann and Andrew Wittenberg. "On the externalization of Sound Images". Acoustical Society of America, 1996.

In summary of this (and other scientific evidence): It's subjective, there is no universally objective fact here. Some people's perception can be fooled by relatively simple applications of reverb or crossfeed, some by a computed or averaged HRTF, others need an individualized HRTF, while others would need an individualized HRFT with head-tracking and convolution reverb.

My belief (not objective fact) is that even should a theoretically perfect individualised HRTF, head-tracking and perfect reverb system be implemented, this still won't be enough to fool some people's perception. We still have the basic sensory conflict of seeing (for example) our sitting room but hearing a concert hall (and a concert hall made up of conflicting acoustic cues anyway). So the result will still be subjective, although very likely enough to fool the perception of most (or even the vast majority of) people.

71dB effectively asserts that his perception is more easily fooled than many others because he is more educated and a more sensitive/discerning listener. A more logical assertion would be pretty much the opposite, that he is a less sensitive/discerning listener than many others. In his case, possibly/probably caused by biases from misinterpreting and misapplying what he's learnt. Although given the choice, I wouldn't personally characterize individual subjective perception purely in terms of listening discernment and education.

G


----------



## castleofargh

Hifiearspeakers said:


> You all in here keep getting this wrong. What type of headphones are you trying to typecast when you keep saying they offer no soundstage? Over ear or circumaural headphones do have space and a room. The earpads are the room! Now IEM’s do channel the sound directly into the ear canal, so your criticism makes sense with those models. But over ear headphones still interact with outer ear, concha, pinna, etc., and there are many models that present a soundstage to varying degrees. It is NOT all stuck in your head!


while I get what you're trying to describe, because I feel it myself while swapping between headphones or IEMs if I try to, I disagree with your explanation of why. in theory, the "room" is so small that anything you're getting will have incredibly small delays and will mostly manifest as resonance/boost at some frequency. like keeping the bass at a reasonable level unless you lose the seal from the pads. 
in practice don't people feel a bigger head-stage or whatever they call it, more often on open designs? doesn't that contradict the idea of the small acoustic chamber(headphone+ears) creating the feeling of stage? 
also here we're pretending that people will experience the "head on a spike and a brain that strictly limits its interpretation to audio cues" model. which is not reality, this section pretty much exists because people feel audio stuff from non audio variables and draw false conclusions about sound all the time. I've mentioned it a few times in this topic, for me, head movements and sight are very detrimental to my mentally created stage. if I close my eyes and force myself to be a statue, I soon can get a much more impressive (sometimes pretty much 3D, usually nonsensical, but 3D) mental image, instead of the usual stuff on the axis between my ears, plus more often than not, the center sounds on my forehead for some reason.
I think people getting different experiences from headphones is the more likely result, bigshot is simply near one end of a pretty wide spectrum of impressions. I guess at the other end we have those guys who talk about being second row in the theater and describing a typical stereo album on their headphone as if it was a binaural recording done with their own head from that second row(blessed be the power of imagination!!!).


----------



## sander99

gregorio said:


> Careful you don't fall into the same trap as 71dB! I, for example, perceive head-stage very similarly to you most of the time but not always. I don't always perceive mono content square in the middle of my noggin, with some recordings I perceive it at the top of my skull, with some recordings I perceive sounds at the back of my skull. I can justify my perception as being correct using cherry-picked bits of science, for example certain sounds within a recording having a similar spectral/timbral distribution that the ear/brain uses to determine elevation. Likewise, your perception could also be justified using cherry-picked bits of science, for example using the basic meaning of the scientific term "Lateralization" itself. So who is right?
> 
> Taking the body of evidence (without cherry-picking), the science indicates 71dB, you and me are ALL correct/right! IE. That it's subjective and varies from person to person. Where 71dB is wrong, is not his perception of what he's hearing but in his assertion that it's objective (which contradicts the body of scientific evidence), that those who perceive differently to him are therefore deficient or defective in some way (taking the lesser of his insults, are not as educated or as aurally sensitive) and the fallacious "facts" he manufactures, cherry-picks or takes out of context to justify his view.
> 
> ...


This post is like you took the words right out of my mouth. This is exactly what I have been thinking, only I didn't know how to say it so well, and I didn't know the research you mentioned. For me this post really summarizes the core of what can be said about the relationship between headphones (with and without processing the audio) and spatiality.


----------



## Hifiearspeakers

castleofargh said:


> while I get what you're trying to describe, because I feel it myself while swapping between headphones or IEMs if I try to, I disagree with your explanation of why. in theory, the "room" is so small that anything you're getting will have incredibly small delays and will mostly manifest as resonance/boost at some frequency. like keeping the bass at a reasonable level unless you lose the seal from the pads.
> in practice don't people feel a bigger head-stage or whatever they call it, more often on open designs? doesn't that contradict the idea of the small acoustic chamber(headphone+ears) creating the feeling of stage?
> also here we're pretending that people will experience the "head on a spike and a brain that strictly limits his interpretation to audio cues" model. which is not reality, this section pretty much exists because people feel audio stuff from non audio variables and draw false conclusions about sound all the time. I've mentioned it a few times in this topic, for me, head movements and sight are very detrimental to my mentally created stage. if I close my eyes and force myself to be a statue, I soon can get a much more impressive(sometime pretty much 3D, usually nonsensical, but 3D) mental image, instead of the usual stuff on the axis between my ears, plus more often than not, the center sounds on my forehead for some reason.
> I think people getting different experiences from headphones is the more likely result, bigshot is simply near one end of a pretty wide spectrum of impressions. I guess at the other end we have those guys who talk about being second row in the theater and describing a typical stereo album on their headphone as if it was a binaural recording done with their own head from that second row(blessed be the power of imagination!!!).



Well first of all, I don’t disagree that speakers will almost always cast a larger soundstage. BUT that doesn’t mean headphones don’t create any soundstage. Some can and do, but of course, on a smaller scale. It also depends on what type of music is being heard through them. Live, acoustical, studio, home video, etc. 

Here’s the crux. Are headphones capable of conveying distance at all? I say absolutely. If I record someone yelling 20 feet in front of me with my iPhone and play it back through headphones, will it sound like they’re yelling right inside my head or will it sound like they’re in front of me but further away?

So if 71dB is playing acoustic guitar and I record him with my iPhone from 20 feet away, how close or far away do you think it will sound if played through circumaural headphones?

P.S. Anyone who has used many different headphones also knows that certain models do a better job with spatiality/distance than others. 

So I have no problem with anyone saying that THEY can’t hear any soundstage with headphones. But I don’t think it’s fair, especially in Sound Science, to make a blanket statement that it doesn’t exist at all with headphones.


----------



## castleofargh

Hifiearspeakers said:


> Well first of all, I don’t disagree that speakers will almost always cast a larger soundstage. BUT that doesn’t mean headphones don’t create any soundstage. Some can and do, but of course, on a smaller scale. It also depends on what type of music is being heard through them. Live, acoustical, studio, home video, etc.
> 
> Here’s the crux. Are headphones capable of conveying distance at all? I say absolutely. If I record someone yelling 20 feet in front of me with my iPhone and play it back through headphones, will it sound like they’re yelling right inside my head or will it sound like they’re in front of me but further away?
> 
> ...


sure. in my case I'm not saying that what you discuss doesn't/cannot be felt, I only contested the proposed causes. as for bigshot, he doesn't seem to experience much of that, and to make things worse, he also doesn't subscribe to the meaning of soundstage commonly (mis)used on the forum. so double opportunity for disagreement over not much.^_^


----------



## Hifiearspeakers

castleofargh said:


> sure. in my case I'm not saying that what you discuss doesn't/cannot be felt, I only contested the proposed causes. as for bigshot, he doesn't seem to experience much of that, and to make things worse, he also doesn't subscribe to the meaning of soundstage commonly (mis)used on the forum. so double opportunity for disagreement over not much.^_^



Objectively, the earpads do represent the room, though, regardless of how small that room might be. And because of that, ear geometry is in play. And just because they’re open, doesn’t mean they’re completely devoid of resonances or reflections. 

Also, not all open back headphones have a larger soundstage than closed back. For example, the Oppo PM-1 is known to have a pretty tiny soundstage even though they’re open back. It just depends on what the manufacturer was going for.


----------



## bigshot (Nov 15, 2019)

gregorio said:


> I, for example, perceive head-stage very similarly to you most of the time but not always. I don't always perceive mono content square in the middle of my noggin, with some recordings I perceive it at the top of my skull, with some recordings I perceive sounds at the back of my skull.



Interesting. I've never had that effect at all. But I imagine it is a subtle difference, involving just inches one way or the other.

Let me see if I can pin down my wording a little tighter. First, a description...

I have speakers. I listen to recordings designed to present a clear focused soundstage, and when I listen to them, I can hear a clear image at a distance in front of me. The reflection of the walls of the room gives the sound bloom surrounding me and adds a natural envelope around the recording. The soundstage image has the size and proportions of what a performance would sound like if I was sitting in the audience at a concert hall. The room provides a sense of space, which when combined with the reverberation baked into the recording creates a realistic presence to the room.

I have headphones. When I listen to them, there is no stage in front of me. There is no bloom from reflections around me. There is a line of sound that goes through the middle of my head. Recordings with phase differences between the channels may sound more diffuse around the right and left side of my head, but I never clearly hear distance in front of me like with speakers. Basically, it sounds like the recording is inside of my head, not external to me.

I've heard binaural recordings, and although for me they present with a few inches of distance outside my head, the further away the sound is, the less localized it is. It's like the aural equivalent of looking at the world through heavy fog... not at all the same as true speaker soundstage. And the front/back cues flicker back and forth, making the sound jump behind me and then in front of me. I can't control that and it is VERY irritating when I'm listening carefully.

That is as accurate a description of what I hear as I can come up with. The only science I am calling on to explain it is my guess that the difference is caused by the room. The physical space, the walls, and the remoteness of the sound source are responsible for allowing the recording to be presented in this lifelike, vivid, dimensional way. I don't think it's possible to completely reproduce speaker soundstage using headphones. If it is, I'd like to hear it for myself. Maybe the Smyth Realizer. I've never had any experience with that. But I know for sure that crossfeed is not going to do the job.

Now I'm going to speak subjectively... I have nothing against the sound of headphones. However, speakers sound much more realistic and natural to me. I'll accept headphones as a compromise on the go. Obviously, I can't carry a room around in my pocket like I can my iPhone and AirPods. There is a place for headphones. But for serious listening, I prefer speakers in a room with clearly defined soundstage. That is just my preference. All things being equal and all situations the same, I doubt anyone would feel all that different than I do about that.



gregorio said:


> 71dB effectively asserts that his perception is more easily fooled than many others because he is more educated and a more sensitive/discerning listener. A more logical assertion would be pretty much the opposite, that he is a less sensitive/discerning listener than many others. In his case, possibly/probably caused by biases from misinterpreting and misapplying what he's learnt. Although given the choice, I wouldn't personally characterize individual subjective perception purely in terms of listening discernment and education.



That's similar to a conundrum we've been presented with in the past... If bias and placebo can make you think sound is of a better quality than it actually is, then isn't that just as good as it actually being of a higher quality? That question gets into issues of solipsism, and I'm not too keen to discuss that subject, I guess.

Whenever someone claims that their ears are "educated" that usually tells me that they have tied their ego up in their hearing. I tend not to believe much they say, because ego is a very strong bias.

One other quick comment... When I was setting up my speaker system, I used Solti's Gotterdammerung to determine speaker placement. That recording has pinpoint soundstage by design... It's one of the best examples in all of recorded history. I found that setting up my speakers in the way that I expected would be best didn't really present the best soundstage. Moving them around a little radically changed the presentation from precise soundstage to no soundstage at all. Soundstage is important to me, because my sound system is tied to a projection video system. The sound has to follow the dimensions of the screen. Multichannel adds a whole other set of complications and settings to adjust.

Maybe I'm different than most people, but to me, soundstage is like a light switch. It's either on or off. There isn't a lot of middle ground. Music can sound "nice" with sub-optimal soundstage, but aural imaging is like focusing a camera... either it is sharply defined, or it's completely diffuse. Again, that may just be me.



Hifiearspeakers said:


> Here’s the crux. Are headphones capable of conveying distance at all? I say absolutely. If I record someone yelling 20 feet in front of me with my iPhone and play it back through headphones, will it sound like they’re yelling right between my head or will it sound like they’re in front of me but further away?



I can tell you that for me, it will sound like it's inside my head, just with secondary distance cues baked into the recording. No directionality to indicate if the sound is 20 feet in front of me or behind me. My brain reads that lack of a specific direction as "a reverby recording playing back in the middle of my head". Soundstage is a combination of real physical distance and left/right placement. A mono recording with the speaker a distance in front of me comes no closer to soundstage, to me, than a stereo recording with no physical distance. It takes both.



Hifiearspeakers said:


> So I have no problem with anyone saying that THEY can’t hear any soundstage with headphones. But I don’t think it’s fair, especially in Sound Science, to make a blanket statement that it doesn’t exist at all with headphones.



But your example has nothing to do with the headphones. It has to do with the placement of the microphone. We can certainly say that recordings with baked in distance cues and reverberation sound more dimensional, but that doesn't mean that the headphones are causing it. Any headphone capable of reproducing sound with high fidelity would reproduce those distance cues the same.



Hifiearspeakers said:


> Objectively, the earpads do represent the room, though, regardless of how small that room might be.



There is an optimal size for soundstage. You want it to relate to the scale of a real performance. If I sit too close or too far from my speakers, the soundstage falls apart. It's hard to describe to someone who doesn't have speakers, but sitting too close is sort of like stretching a regular TV show into widescreen. The stretched-out distortion ruins it. Sitting too far away shrinks the size of the stage so much that the left/right placement gets compromised.

I do think that the definition of soundstage among headphone users is completely different than the way speaker people use the term. The term in speakers is very specific. In headphone use, it is much more general and non-specific.


----------



## gregorio

Hifiearspeakers said:


> [1] Well first of all, I don’t disagree that speakers will almost always cast a larger soundstage.
> [1a] BUT that doesn’t mean headphones don’t create any soundstage.
> [2] Some can and do, but of course, on a smaller scale. It also depends on what type of music is being heard through them. Live, acoustical, studio, home video, etc.
> [2a] Here’s the crux. Are headphones capable of conveying distance at all? I say absolutely.
> ...



1. Well I do disagree with that! Speakers do not "cast a larger soundstage", they don't cast any soundstage at all, they just reproduce the frequencies in the recording. It's the listening room acoustics, excited by the frequencies output by the speakers, that causes the perception of soundstage. 
1a. Agreed, that's not the reason. The reason that headphones don't create any soundstage is because they essentially do exactly the same as speakers, they just reproduce the frequencies in the recording. The difference of course is that we don't have the excitation of the listening room acoustics with headphones/IEMs. Headphones and speakers are effectively just a pair of transducers and transducers simply convert an electrical (analogue) signal into mechanical soundwaves. They are entirely dumb, they have no idea what they're transducing, they don't even know that they're transducing music (as opposed to any other sort of sound or noise), let alone what spatial information it contains.

2. You seem to be confusing your perception of sound waves with the relatively simple process of reproducing sound waves. Your approach is flawed, which is entirely understandable and much/most audiophile marketing is specifically aimed at creating this confusion. Such an approach can ultimately only result in severe conflicts with basic facts/science and claims which effectively amount to audio reproduction equipment having magical properties. For example, transducers somehow knowing what they're reproducing and then somehow altering it according to some human judgement of "good", "better", "musicality" or whatever.
2a. No, they're only capable of transducing the electrical signal into a mechanical sound wave, they have no other capabilities.

3. That's up to your personal perception of what the headphones are transducing.
3a. That's impossible to say. Many/Most people will hear him yelling inside their head, but some people will hear it further away. If you record video as well, then more people will hear it as being further away, as perception will generally try to match what it's seeing with what it's hearing.

4. No, that's not possible, unless they have some sort of magic. However, it's entirely possible for headphones to have a frequency response that may or may not be perceived as doing a better job with spatiality/distance than others (with a different frequency response). I've experienced that many times myself with different headphones but that's just my perception/preference and not uncommonly, what I've perceived varies significantly from what reviewers have stated they perceived. Referring to my previous post, I'm one of those people for whom a generalised/averaged HRTF (and the majority of binaural recordings) don't work.

5. It's entirely fair, ESPECIALLY in the sound science forum! Unless you want to contradict the science and attribute some magical behaviours to headphones?
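As an aside on point 4: the two crudest binaural cues that a generalised/averaged HRTF approximates are an interaural time difference (ITD) and an interaural level difference (ILD). The sketch below is purely illustrative and is not anything from this thread or any real HRTF; the head radius, the 6 dB level ceiling, and the Woodworth-style delay formula are my own assumptions. It pans a mono signal using only those two broadband cues, which is exactly the kind of averaged approximation that works for some listeners and not others.

```python
import math

FS = 48000  # sample rate in Hz (illustrative)

def spatialize(mono, azimuth_deg, fs=FS):
    """Pan a mono signal using two crude binaural cues:
    ITD (interaural time difference) as a whole-sample delay on the far ear,
    and ILD (interaural level difference) as a broadband gain on the far ear.
    Real HRTFs are frequency-dependent and individual; this is a toy model."""
    az = math.radians(azimuth_deg)
    # Woodworth-style ITD for a ~8.75 cm head radius and 343 m/s sound speed
    itd_s = 0.0875 / 343.0 * (abs(az) + math.sin(abs(az)))
    delay = int(round(itd_s * fs))
    # Simple broadband ILD: attenuate the far ear by up to ~6 dB
    far_gain = 10 ** (-6.0 * abs(math.sin(az)) / 20.0)
    near = list(mono)
    far = [0.0] * delay + [s * far_gain for s in mono]
    far = far[:len(mono)]
    # Positive azimuth = source to the right, so the right ear is "near"
    return (far, near) if azimuth_deg >= 0 else (near, far)
```

For a source 45 degrees to the right, the left channel ends up delayed by roughly 0.4 ms and attenuated by about 4 dB relative to the right; real pinna and head filtering would add the frequency-dependent detail this sketch omits.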

G


----------



## Hifiearspeakers (Nov 15, 2019)

Unreal how unscientific you were here in many of your responses. You think all headphones sound the same regardless of the technology used, shape of the earcups, whether the drivers are angled or flat, open or closed, etc??? You cite magic about this???
Ear anatomy matters. Headphone tech matters. Earpad size and material matters. That’s science, not magic. 

Spoken like someone who really doesn’t know much about headphones...

I could unpack all of your responses like you did mine, but there’s no point. Agree to disagree AND you definitely don’t have all of the science on your side.

Also, you talking about room acoustics and reflections from speakers as creating soundstage is just ridiculous. Those reflections from each individual room with speakers will all be different (based on everyone having a different setup, how far away or apart you place your speakers, what material the floor is made out of, walls, etc.) and wouldn’t be a part of the original recording/mixing/mastering. So those cues are false, too, if they weren’t in the original mix.


----------



## bigshot (Nov 15, 2019)

If you aren't interested in backing up your statements, and you think there is no point in replying, why are you replying?

FYI: Recording engineers design their mixes for a range of speaker installations that all follow a basic plan of triangulation between the speakers and listener. Recording studios have monitoring systems set up and calibrated to that plan. There may be differences from room to room when it comes to size and reflections, but if the speakers are properly placed according to the plan, the soundstage will work in all of them. The soundstage isn't created by the speakers, it's created by the PLACEMENT of the speakers. The only difference from room to room is the envelope of reflected sound wrapped around the direct sound, and that is automatically going to sound normal, because it is the same envelope wrapped around your voice if you speak in the room.

Speaker systems depend on three things... the recording, the transducers and the room. All three of those are equally important. The recording provides the signal embedded with secondary depth cues, the transducers convert the signal to audible sound, and the room provides the dimension and space around the recorded sound. Headphones have signal and sound, but not dimension and space. Dimension and space are integral to soundstage. These are things you would know if you had ever put together a good speaker system. I suspect that we know more about headphones than you know about speakers.
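The "envelope of reflected sound" described above can be caricatured in a few lines: a room adds delayed, attenuated copies of the direct sound. This toy sketch is not a room model; the delay times and gains are my illustrative assumptions. It just mixes a couple of stand-in "early reflections" into a dry signal.

```python
FS = 48000  # sample rate in Hz (illustrative)

def add_reflections(dry, reflections, fs=FS):
    """Mix delayed, attenuated copies of the dry signal into the output.
    reflections: list of (delay_seconds, linear_gain) pairs, each standing
    in for one wall/floor reflection path. Not a physical room simulation."""
    out = list(dry)
    for delay_s, gain in reflections:
        d = int(round(delay_s * fs))
        for i, s in enumerate(dry):
            if i + d < len(out):
                out[i + d] += s * gain
    return out

# Hypothetical example: a 5 ms reflection at half level and a 12 ms
# reflection at 30% level added to an impulse.
wet = add_reflections([1.0] + [0.0] * 999, [(0.005, 0.5), (0.012, 0.3)])
```

The point of the sketch is only that these reflection cues live in the air between the transducer and the listener; headphones deliver the dry signal (plus whatever reverb was baked into the recording) with no such envelope.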

Don't bother going down the "you aren't scientific enough" route. That is usually what people resort to when they aren't keeping up in the discussion and don't want to admit it. If you are hearing things you don't understand, ask questions and listen. Don't try to buffalo us. It won't work.


----------



## bfreedma

Hifiearspeakers said:


> Unreal how unscientific you were here in many of your responses. *You think all headphones sound the same regardless of the technology used, shape of the earcups, whether the drivers are angled or flat, open or closed, etc*??? You cite magic about this???
> Ear anatomy matters. Headphone tech matters. Earpad size and material matters. That’s science, not magic.
> 
> Spoken like someone who really doesn’t know much about headphones...
> ...




Where was the bolded statement made?  Can you point it out or is it an incorrect generalization?


----------



## Hifiearspeakers (Nov 15, 2019)

bfreedma said:


> Where was the bolded statement made?  Can you point it out or is it an incorrect generalization?



He made the statement that no headphones can cast different types of soundstage because everything always sounds like it's in the center of your head, and that the only way the different models of headphones could alter the soundstage would be through magic, because headphones don't have soundstage. He says that in order to have soundstage you have to have distance from the ear, like floorstanding speakers. I asserted that there is space from the ear, and that the ear pads themselves represent the room for headphones. And I asserted that there are some headphones that do a better job of this spatiality/soundstage than others.


----------



## bfreedma

Hifiearspeakers said:


> He made the statement that no headphones can cast different types of soundstage because everything always sounds like it's in the center of your head, and that the only way the different models of headphones could alter the soundstage would be through magic, because headphones don't have soundstage. He says that in order to have soundstage you have to have distance from the ear, like floorstanding speakers. I asserted that there is space from the ear, and that the ear pads themselves represent the room for headphones. And I asserted that there are some headphones that do a better job of this spatiality/soundstage than others.



That's an entirely different statement than what you posted- "*You think all headphones sound the same regardless of the technology used, shape of the earcups, whether the drivers are angled or flat, open or closed, etc*???"

How did we get from a discussion about soundstage to "All headphones sound the same regardless of..."?


----------



## Hifiearspeakers

bfreedma said:


> That's an entirely different statement than what you posted- "*You think all headphones sound the same regardless of the technology used, shape of the earcups, whether the drivers are angled or flat, open or closed, etc*???"
> 
> How did we get from a discussion about soundstage to "All headphones sound the same regardless of..."?



Sound the same in regards to SOUNDSTAGE.


----------



## bigshot (Nov 15, 2019)

Hifiearspeakers said:


> He made the statement that no headphones can cast different types of soundstage because everything always sounds like it's in the center of your head, and that the only way the different models of headphones could alter the soundstage would be through magic, because headphones don't have soundstage. He says that in order to have soundstage you have to have distance from the ear, like floorstanding speakers. I asserted that there is space from the ear, and that the ear pads themselves represent the room for headphones. And I asserted that there are some headphones that do a better job of this spatiality/soundstage than others.



You stated that vaguely then. You said "All headphones sound the same..." All headphones don't sound the same because they have different response curves and seals around the ears. However I suppose you could say that all headphones produce the same amount of soundstage- none. As Gregorio explained, they are transducers. They just produce sound, just like speakers. Soundstage is created by the distance, space, reflections off the walls of the room, the shapes of our individual heads and ear canals, and the way our brain interprets sound. None of that has anything to do with the design of headphones.

Here is a non-scientific guess... Could the fact that some people are easily fooled by secondary depth cues in recordings be because they are listening casually and not thinking about what they hear? I know in the field of art you could show a drawing of a dog to a casual viewer and they would recognize it as a dog. But if you show it to an artist who draws, he could probably speak for a half hour analyzing each element of the drawing and how it works in construction and perspective. He would instantly see errors that the average person would never notice in a million years. Perhaps soundstage is like placebo, and ignorance is bliss.


----------



## Hifiearspeakers (Nov 15, 2019)

bigshot said:


> You stated that vaguely then. You said "All headphones sound the same..." All headphones don't sound the same because they have different response curves and seals around the ears. However I suppose you could say that all headphones produce the same amount of soundstage- none. As Gregorio explained, they are transducers. They just produce sound, just like speakers. Soundstage is created by the distance, space, reflections off the walls of the room, the shapes of our individual heads and ear canals, and the way our brain interprets sound. None of that has anything to do with the design of headphones.
> 
> Here is a non-scientific guess... Could the fact that some people are easily fooled by secondary depth cues be because they are listening casually and not thinking about what they hear? I know in the field of art you could show a drawing of a dog to a casual viewer and they would recognize it as a dog. But if you show it to an artist who draws, he could probably speak for a half hour about each element of the drawing and how it works in construction and perspective. He would instantly see errors that the average person would never notice in a million years. Perhaps this is like placebo- ignorance is bliss.




It wasn’t vague. We have been discussing one subject - soundstage with over ear headphones. You all are the ones who keep making blanket statements, like you just did. “all headphones produce the same amount of soundstage - none.”

“None of that has anything to do with the design of headphones.” What an absolutely ridiculous and unscientific response. This sub forum is a joke if that’s an honest statement by you and you are representative of Sound Science.


Is that good enough for you @bfreedma ???

So to listen to these so-called experts, the Susvara, hd800S, and Abyss Phi TC will cast the exact same soundstage, which is none, that the Oppo PM-1 will, Ath M50, Beats by Dre, R70x, hd600, Utopia, LCD anything, and on and on...

Scientifically speaking of course.  It’s not possible that a headphone could actually be designed to recreate a better soundstage because transducers are dumb, so unless magic is invoked...ad nauseum.


----------



## bfreedma

Hifiearspeakers said:


> It wasn’t vague. We have been discussing one subject - soundstage with over ear headphones. You all are the ones who keep making blanket statements, like you just did. “all headphones produce the same amount of soundstage - none.”
> 
> “none of that has anything to do with the design of headphones.” What an absolutely ridiculous and unscientific response. This sub forum is a joke if that’s an honest statement by you and you are representative of Sound Science.
> 
> Is that good enough for you @bfreedma ???




If headphone soundstage is created by pads alone, can you model out how/why the HD800 is reputed to have a huge soundstage and the Focal Utopia a small one (picking two random examples)?  What in the pad construction/shape/material accounts for the oft cited significant difference?  They are both over ear open headphones and the distance between the pads and ear is, within reason, similar.  Or at least similar enough that distance alone wouldn't seem to be sufficient to explain the reported differential.


----------



## Hifiearspeakers

bfreedma said:


> If headphone soundstage is created by pads alone, can you model out how/why the HD800 is reputed to have a huge soundstage and the Focal Utopia a small one (picking two random examples)?  What in the pad construction/shape/material accounts for the oft cited significant difference?  They are both over ear open headphones and the distance between the pads and ear is, within reason, similar.  Or at least similar enough that distance alone wouldn't seem to be sufficient to explain the reported differential.



When did I say pads alone? The geniuses in here say a room is needed for soundstage so headphones can’t have it because there is no room. I said headphones do have a room. The earpads and driver-housing are the room. Earpad size, shape, material, etc. all contribute to soundstage. And so does the technology used (dynamic, planar, estat), the housing material, the angle of the drivers, the amount of dampening, etc. 

Let me ask you this: Do you contend that the hd800 and Utopia create the exact same amount of “sonic space” for the same track? Or can you hear a difference in the way they present said space?


----------



## bigshot (Nov 15, 2019)

If the pads of the ear cups constitute a "room" how does the degree of absorption/reflection and delay created by distance in an ear cup relate to the degrees of those things created by an actual living room? I'll answer that for you... They don't relate at all. Saying the space around your ears in headphones is the same as a room is like saying a teacup is the same as the Taj Mahal.

"Openness" and "Closedness" related to the seal around your ears is not at all the same as soundstage. Soundstage involves the precise localization of sound at a distance in front of you. Headphones don't have any kind of precise distance localization. They only have precise left/right localization through your head.


----------



## bfreedma

Hifiearspeakers said:


> When did I say pads alone? The geniuses in here say a room is needed for soundstage so headphones can’t have it because there is no room. I said headphones do have a room. The earpads and driver-housing are the room. Earpad size, shape, material, etc. all contribute to soundstage. And so does the technology used (dynamic, planar, estat), the housing material, the angle of the drivers, the amount of dampening, etc.
> 
> Let me ask you this: Do you contend that the hd800 and Utopia create the exact same amount of “sonic space” for the same track? Or can you hear a difference in the way they present said space?



When I listened, they do have a different presentation.  I suspect that the FR and other tuning has more to do with this than physical construction or driver type.  To be clear, that's speculation on my part.

I don't consider the difference to be soundstage though, as neither has the ability to project a 3 dimensional image.


----------



## Hifiearspeakers

bigshot said:


> If the pads of the ear cups constitute a "room" how does the degree of absorption/reflection and delay created by distance in an ear cup relate to the degrees of those things created by an actual living room? I'll answer that for you... They don't relate at all. Saying the space around your ears in headphones is the same as a room is like saying a teacup is the same as the Taj Mahal.



Such nonsense. It’s called SCALE. Ear anatomy is in play because the earpads cover the ear and because there IS space. Circumaural does not equal IEM.


----------



## bigshot

Would walking around with ear cups the size of buckets create better soundstage than normal sized headphones?


----------



## Hifiearspeakers (Nov 15, 2019)

bfreedma said:


> When I listened, they do have a different presentation.  I suspect that the FR and other tuning has more to do with this than physical construction or driver type.  To be clear, that's speculation on my part.
> 
> I don't consider the difference to be soundstage though, as neither has the ability to project a 3 dimensional image.



So why do 99% of audiophiles always cite the 800/800S as the king of soundstage? Or the Susvara and Abyss Phi? You don’t think it’s interesting that no one ever says the Oppo PM 1 or 2 seem to create the largest soundstage they’ve ever heard? Or the ATH M50’s? Or the R70X? Or any Focal model?


----------



## bigshot

Hifiearspeakers said:


> So why do 99% of audiophiles always cite the 800/800S as the king of soundstage?



This is an interesting statistic! It means that 99% of audiophiles don't know what they're talking about... not that I'm saying that statistic is inaccurate, mind you. I just never sat down and counted.


----------



## bfreedma (Nov 15, 2019)

Hifiearspeakers said:


> So why do 99% of audiophiles always cite the 800/800S as the king of soundstage? Or the Susvara and Abyss Phi? You don’t think it’s interesting that no one ever says the Oppo PM 1 or 2 seem to create the largest soundstage they’ve ever heard?



Why do 99% of audiophiles state that copper and silver based cables sound different?  I don't think quoting audiophile dogma is valid evidence.

I'm suggesting that people are confusing FR and other tuning differences with soundstage.  If evidence (objective) is presented that I'm incorrect, I'll be happy to admit it and learn something new.


----------



## bigshot (Nov 15, 2019)

Could it be because an awful lot of people own Sennheiser 800s, and they're just self-validating?

I'm going to go out on a limb and take a wild guess here... You don't list your equipment in your sig. (I commend you for that!) But I bet a buck that you happen to own Senn 800s. Double or nothing bet... You don't own Oppo PM-1s. Am I right?

Edit: Haha! I just peeked at your profile page to see if you list equipment... You owe me three bucks!


----------



## Hifiearspeakers (Nov 15, 2019)

bigshot said:


> Could it be because an awful lot of people own Sennheiser 800s, and they're just self-validating?
> 
> I'm going to go out on a limb and take a wild guess here... You don't list your equipment in your sig. (I commend you for that!) But I bet a buck that you happen to own Senn 800s. Double or nothing bet... You don't own Oppo PM-1s. Am I right?
> 
> Edit: Haha! I just peeked at your profile page to see if you list equipment... You owe me three bucks!



My profile picture is the 800S, genius. It’s not some big secret. If you knew anything about headphones, you wouldn’t have needed any detective work. I’ve owned it for years. I’ve owned many top of the line headphones and very few of them can create soundstage like the 800S. I don’t need validation from any of you clowns in here. It’s obvious that you all know a great deal about speakers and jack sh@$ about headphones. My owning the 800S only proves that I’ve actually heard it and know what I’m talking about. I’ve also heard the Oppo PM-1, 2, and 3. All three are good sounding headphones but have an anemic soundstage. I doubt you even knew there were three models. Did you also know the company is no longer in the headphone business? Do you even know what the difference between the 1 and 2 is? Doubtful. You’d probably have to go berserk with Google to get up to speed.

It’s also obvious that you all are guilty of making the exact same subjective claims as the people you ridicule in here and then try and present them as objective, scientific facts, when they’re not. You make ad hominem attacks and present juvenile straw man propositions and then chuckle to yourself as if you actually said something clever.

You make blanket statements and insult an entire community, because you can’t curb your elitist, arrogant attitudes. And that is why your forum resides in the doldrums.

But at the end of the day, there is a reason this place is like a leper colony. There’s a reason it has been relegated to a dark, quarantined corner of Head-Fi. And deservedly so. Enjoy your sad sub forum. Your banishment is warranted.

+1 for Jude


----------



## castleofargh

what I got from this:

@Hifiearspeakers is talking about an *impression* of space, distance, position, how well defined the perceived position of a given instrument is, etc. those impressions can of course change from headphone to headphone, and usually do for listeners. as a simple frequency response variation can create some of those perceived changes, I don't believe anybody's denying the possibility of different headphones feeling different in some mind-space way (space/headstage/soundstage/watevahyoucallit).

now the others are talking about soundstage as in the sound altered by a room: *the objective action* of that room on the sound, giving auditory cues about the space which is that room. it's something rather specific. and while those cues will participate in forming a subjective impression of space, distance... that relation doesn't necessarily work both ways. feeling space, distance, a certain orientation, an instrument feeling more like a point in our head or more like an area, etc., does not necessarily imply that the sound got altered by a room. our brain will try to create a mental representation no matter what sound is fed to it (whether it makes something good, big, or realistic of it... that's another story). the idea here is that feeling space doesn't prove soundstage, at least not soundstage in the sense of the objective action of the room.


and that's why they argue against what @Hifiearspeakers wrote: because he doesn't distinguish between the 2, and in some posts seems to justify the impression of space with the notion of the mini room that the headphone creates. which in itself would probably deserve a topic and a lot of studies, because intuitively, if we correctly perceived that tiny room, wouldn't that make it even harder to get an impression of bigger space? so IMO we're back to cues feeling a certain way to certain people, but not to the room actually being properly perceived by us. 



that's what I read and think I understand. if I misunderstood, please let me know, as I'm trying to clarify things, not add more trouble by putting words into other people's mouths ^_^.







anyway:
yes the hd800 feels at least "wider" to almost anybody who has tried one (including me, but if I just change the FR, a lot of that impression goes away). I'll leave the contentious use of "soundstage" out of it.
no, nobody claimed that all speakers or all headphones sounded the same. I didn't get that message from anything posted by @gregorio. maybe it can be read that way, but it's clearly not what he says.


----------



## bigshot (Nov 16, 2019)

castleofargh said:


> now the others are talking about soundstage as in the sound altered by a room: *the objective action* of that room on the sound, giving auditory cues about the space which is that room. it's something rather specific. and while those cues will participate in forming a subjective impression of space, distance... that relation doesn't necessarily work both ways. feeling space, distance, a certain orientation, an instrument feeling more like a point in our head or more like an area, etc., does not necessarily imply that the sound got altered by a room. our brain will try to create a mental representation no matter what sound is fed to it (whether it makes something good, big, or realistic of it... that's another story). the idea here is that feeling space doesn't prove soundstage, at least not soundstage in the sense of the objective action of the room.



I'm afraid I got a little lost in this paragraph, but I think what you are saying is that if you put a microphone 20 feet from a singer on a stage and record it, then listen to it with headphones, it isn't going to sound the same as a close miked recording of the singer playing through a speaker on the stage 20 feet away from the listener. You can swap out every pair of headphones in existence, but it isn't going to sound as natural and the directionality and distance cues are not going to sound as defined as the actual physical distance.


----------



## Davesrose

bigshot said:


> And if you want soundstage, chuck the cans and build yourself a decent listening room. Until then, sit down and shut up. You don't know what you're talking about, and I'm not the one being arrogant. It's been patiently explained to you by Gregorio and Castle and Bfreedma and me, and you won't hear it. That isn't my problem.



I think you're ignoring the reason for needing to listen through headphones.  From what I've read, my preferences also align with yours: i.e., speakers are much better at localization and what most would term "soundstage".  However, I still listen to headphones in the situations where I can't pump out the sound with my kick ass surround speaker system (especially when working, I'm only listening through headphones).  I would say the main thing about a speaker system is that you do get a clear stereo image: I can hear distinct sounds that are clearly in front of me.  However, I think it's also a mistake to totally dismiss a sense of "soundstage" with headphones when it comes to lateralization.  Gregorio hit on it in that in a perceived headphone sound-field, you might hear sounds at the front of your forehead (that's the best "front" image I've heard as well).  I believe some headphones might sound more expansive (in terms of how lateral they are) depending on their tuning.  And tuning with headphones is fairly complicated: how the housing is designed, the driver, the dampening, or say how open the back of the driver is.  Headphone brands have studied this a lot more than any of us.

I don't get caught up in what audiophiles say the best headphones for soundstage are... in fact, reading this forum, I think my headphones are rated as small-soundstage... whatever.  Just as crossfeed may be an absolute indicator for some, mine is just how good the tonality is with my setup.


----------



## 71 dB

*Sounds "inside" our head*

To me the notion of headphone sound being inside your head isn't as simple as people think. We don't hear real sounds inside our heads, so how would our spatial hearing learn the spatial cues of sounds inside our heads? As far as our spatial hearing is concerned, the "space" inside our head doesn't exist. All we hear in our heads are things like our own thoughts and tinnitus. Hardly real physical sound sources!

*So, what exactly is "headstage"? *

I think it is "imagined" based on what our spatial hearing can do with the spatial cues, or the lack thereof. People just describe this imagined tiny soundstage using concepts that are more familiar to them, whether or not it makes sense. When you say you hear sounds in the middle of your head, you mean a small sound source, maybe one inch in diameter? A tiny point in space. I don't know how other people perceive sounds, but I can use logic to determine how they "should" hear them and how they shouldn't be able to hear them. Ask yourself: how does your hearing know what a sound source in the middle of your head sounds like? Is it soundwaves propagating in gray matter, or in air, Homer Simpson head? No, you don't know and I don't know either, so when we hear, say, mono sound on headphones, the spatiality we hear is _imagined_.

*So, how does this work?*

I have a thought about what is happening with sounds inside our head. The point in space where we think the sound source is inside our head is actually not the sound source, but the "sonic" center of mass. If the Earth were hollow, the center of mass of the Earth (felt on the surface or in the space around the Earth) would still be in the center, even though there would be only empty space there. So the sound source in the "middle" of your head is actually imagined as a somewhat spherical "hollow" sound source the size of your head (a vibrating balloon, perhaps?). If you think this is a point source, you are mistakenly interpreting the sonic center of mass as the sound source. I think this is why the miniature soundstage can happen and isn't even that difficult to achieve: since the sounds inside our head are imagined sonic centers of mass, it's not a big leap to imagine sonic centers of mass around our head at close distance. The only difficulty is that our spatial hearing does know what kind of spatial cues sound sources near our head create, but almost any reasonable spatial cue is likely to fool our spatial hearing into imagining sounds this way. People who say they don't get a miniature soundstage with headphones simply interpret the perceived spatiality differently from people who say they do. I think if you believe in the miniature soundstage you can interpret the spatiality as a miniature soundstage, but if you don't believe in it, your brain "forces" the interpretation to take the sonic center of mass as the sound source.

----

My 2 cents. Comment on it how you want, but please take it as my thoughts on the issue, not facts. I like to throw ideas in the air and see what happens...


----------



## jgazal

I have been imagining cities full of electric vehicles with no combustion engines. Less ambient noise in listening rooms. Then DSP units, crosstalk cancellation and convolution filters widespread in all kinds of entertainment devices. The only combustion left to hear would be rockets to the moon. No more limited headstage, or soundstage confined to the stereo speakers. 
So many things would become obsolete. 
I have just bought Sony noise-cancelling headphones with an app that photographs your own ear. Probably developed by the team of Dr. Choueiri. 
IMHO, that would be a wonderful future.


----------



## castleofargh

71 dB said:


> *Sounds "inside" our head*
> 
> To me the notion of headphone sound being inside your head isn't as simple as people think. We don't hear real sounds inside our heads, so how does our spatial hearing learn the spatial cues of sounds inside our heads? As far as our spatial hearing is concerned, the "space" inside our head doesn't exist. All we hear in our heads are things like own thoughts and tinnitus. Hardly real physical sound sources!
> 
> ...


my hypothesis has to do with head movement and sight. a mono sound could come from anywhere on a vertical plane perpendicular to the axis passing through the ears. the frequency response, if we're able to estimate it, may give us the cues for a given vertical angle (based on how we're used to hearing something bouncing off the pinna at that angle). so if we're lucky, we now have a pretty good idea of a mental direction for the source of the sound; the only thing missing is distance. and there come sight and head movement, IMO. some see nothing in front of them, and sight dominates so much in their brain that they reject the possibility of that sound being anywhere in the field of vision. as for head movement, it kills the option of sound at a distance, as the sound remains the same when we turn our head. the logical conclusion would be that the sound source is at least stuck on us, or inside us between the ears. my made-up explanation from long ago about the singer on my forehead or the back of my skull (as that also happens to me sometimes) was my brain placing the sound there because inside the head felt impossible to it, so those places were among the few possibilities left (stuck on the head for head movement to make sense, not inside the head or in my neck, not in the field of vision: not that many options left). 
of course I think of that explanation because I happen to notice how sensitive I am to head movement and sight. different people are likely to have different priorities when processing all the cues. and as always, some will have an easier time influencing the result based on their habits and expectations. I wouldn't be too surprised if, for some people, it just makes sense to them that the singer will be somewhere in front because that's where the band is when they perform, and that's enough for such people to feel like he is, no matter what information their senses are picking up.


----------



## sander99

71 dB said:


> We don't hear real sounds inside our heads


At the very moment I was reading this sentence I was chewing on some crispy vegetables and I swear I heard noises inside my head!


----------



## taffy2207

The voices inside my head tell me to "Kill all sprouts". I think that's because Christmas is approaching though


----------



## castleofargh

bigshot said:


> I'm afraid I got a little lost in this paragraph, but I think what you are saying is that if you put a microphone 20 feet from a singer on a stage and record it, then listen to it with headphones, it isn't going to sound the same as a close miked recording of the singer playing through a speaker on the stage 20 feet away from the listener. You can swap out every pair of headphones in existence, but it isn't going to sound as natural and the directionality and distance cues are not going to sound as defined as the actual physical distance.


there is that, yes. there is also an increased consistency in the perceived impression when using speakers in a room, because we've had a lifetime to calibrate ourselves to audio cues correlated with rooms we could see and feel. on headphones it's kind of a free-for-all. whatever placement or feeling of space our brain comes up with can't really be calibrated against anything. if we always sit in the same chair and watch concerts with the guys live on the TV, we probably will calibrate to that and feel like the sound kind of comes from the TV, if we use those headphones only for that. but otherwise, whatever image we form will probably become the new normal and stay that way: what audiophiles (and me, a few years ago) describe as soundstage specific to a given headphone. back then I remember making reviews where I would use the same test track with a female singer and 2 acoustic guitars, and describe gear and IEMs by where I felt that singer and the guitars were. that's how significantly different those impressions of placement were to me with various IEMs/headphones. and a headphone just had to have a certain bass tuning for me to talk about being enveloped in sound. so while I clearly never felt anything remotely realistic in terms of placement or "room", and the biggest "headstage" I ever imagined on headphones without DSP was maybe 30cm from my head (usually on the side or above), I also very much get what @Hifiearspeakers was trying to talk about. and even why he called that soundstage, as I've been doing the very same thing before. 
you just talk about something different, while he's thinking about how one headphone will make you feel like a sound source is almost a single point when it has clean output and fairly boosted treble (easier to locate with the shorter wavelengths), while a warmer headphone will make us feel like those same instruments are maybe at a different place, and most likely that the sound source itself is larger or harder to precisely locate. that's pretty intuitive stuff, but when you add up the differences between 2 headphones, you can end up with a significantly different perception of that mental image. but at least for me it was never room-sized and never really felt like a room. but I certainly got impressions of distance, detailed position (or not), and depending on the low end, a certain feeling that there was a real volume to the sound. now I'm pretty sure that at least some of those impressions are correlated with my HRTF and other stuff in my brain, so different people are likely to get differences in how they perceive a given headphone. but the treble-boosted sense of detail and more accurate location is, for example, something we'll all feel. we might just experience a different location for those sounds. it's a mess with some stable rules. 

all that rambling to say that he's not crazy and what he says isn't special. beyond having a different definition of "soundstage", the only real reason to disagree is the part about the headphone being its own little room. I want to disagree with him like I already did, because of the size of the headphone compared to the wavelength of most audible frequencies and the accuracy of our audio triangulation (which is good, but not that good). but at the same time, if we consider that a lot of that mental image is going to come from frequency response and probably distortion (if really high), then as the inside of the headphone participates in the tuning and in keeping a good or bad airflow, we can in a sense say that the acoustic chamber of the headphone impacts the "headstage" or whatever we call that very artificial mental image we try to form. 
it's really a matter of clearly defining what we talk about, and this "soundstage" term is cursed in this section, like "dynamic", because we don't agree to let it mean anything and everything like the rest of the amateur audio hobby does. that makes me want to not use it at all, because words are supposed to help us communicate, not add another layer of confusion.


----------



## Hifiearspeakers

castleofargh said:


> there is that yes. there is also an increased consistency in the perceived impression when using speakers in a room, because we've had a lifetime to calibrate ourselves to have audio cues correlated with rooms we could see and feel.
> 
> ...



Well said on all fronts.


----------



## castleofargh

modo ON:  to quote @bfreedma, "Could you two take this to PM? Or Reddit..." it's the second time I've had to delete some posts. discuss the topic, or at least a topic. if you just want to have a fight, go do that somewhere else.
modo OFF.




bigshot said:


> I used to think that, but I've since gotten VR headsets and I've played games with sound fields that react to head position. They also have visuals to anchor sounds to. But I still keep losing the effect at least half the time, because the directionality is so crude. It sounds more like sound coming from the left and right just in different balances as I turn. I suspect that our sense of directionality and distance is based on multiple things all at once. It all combines into a stew that says "real" to our ears. It's very complex. I'm interested to hear if the Smyth Realizer solves it, but I suspect it will be like 3D video... it looks 3D, but it doesn't look real. The closest to real I have heard is multichannel speakers. The more channels, the more real it sounds. But more channels are a lot more work to integrate into a room properly.


I also get that feeling with the few solutions I've tried so far, but I suspect that it's because the head tracking works from some default dummy-head HRTF instead of our own, and that can rapidly be a problem depending on how non-average our head is.
and maybe it's also lagging too far behind to feel real? with one solution I tried, the tracking worked with a webcam pointed at my face, and unless I had spotlights in my face, I couldn't get enough FPS for a smooth result.
but yeah, it's a complex mess.


----------



## bigshot

It seems to me that audio head tracking would be easier to accomplish than visual head tracking; my Quest does visual tracking perfectly. I think it's because just one kind of sound localization cue isn't enough... we probably use multiple methods of sound localization simultaneously without even realizing it. It's funny how simple things that we take for granted can be so complex... and how people are able to filter out complete distortions to the point they don't even notice them. Like people who have their TV settings adjusted to stretch out old TV shows. It drives me nuts, but some people like it that way. Maybe that is like crossfeed.


----------



## 71 dB

sander99 said:


> At the very moment I was reading this sentence I was chewing on some crispy vegetables and I swear I heard noises inside my head!



Bone conducts the sound to the ears.


----------



## gregorio (Nov 17, 2019)

Hifiearspeakers said:


> [1] Unreal how unscientific you were here in many of your responses.
> [2] You think all headphones sound the same regardless of the technology used, shape of the earcups, whether the drivers are angled or flat, open or closed, etc??? You cite magic about this???
> [3] Ear anatomy matters. Headphone tech matters. Earpad size and material matters. That’s science, not magic.
> [4] Spoken like someone who really doesn’t know much about headphones...
> ...



1. Pot, kettle, black!!! Your responses are "I think ...", "I hear [perceive]", misquotes, misrepresentations, personal attacks and actual direct CONTRADICTIONS of the science, and I'm the one being unscientific. That's rich!!

2. Huh? How can you so misquote me, when I stated the opposite? I actually said: "_... it's entirely possible for headphones to have a frequency response that may or may not be perceived as doing a better job with spatiality/distance than others (with a different frequency response). * I've experienced that many times myself with different headphones* but that's just my perception/preference ..._"

3. Of course they all matter, I never stated otherwise. Headphone tech and earpad size/materials obviously "matters" in relation to comfort, visual appeal and freq response (including time/transient response). HOWEVER, you have claimed that earcups create/represent a room (room acoustics). That is NOT science, it's CONTRARY to the science and you've presented NO reliable evidence to support your claim. That's about as unscientific as it's possible to be, and then you call me unscientific. It's so hypocritical it's laughable!!! See point #3 in my response to the next quote below.

4. And you're speaking like someone who doesn't know the difference between acoustics and psychoacoustics, who doesn't know what science is, and "someone who doesn't know" how to even read!!

5. Go right ahead but this time use some actual science rather than the typical audiophile nonsense of your personal perception + (false) assumptions, because: A. This being the sound science forum, it's expected/required and B. To avoid being a massive hypocrite!

6. And what is the "science on your side", you haven't presented any!!!

7. Explain why it's ridiculous (using science/facts obviously) otherwise it's all too clear who is being ridiculous!

8. Yes, they are all different but they are also somewhat similar. For example, every sitting room has significantly different acoustic properties, on the other hand, they also have certain acoustic properties that are quite similar, which is why we can aurally differentiate between say a sitting room and a cathedral.
8a. That assertion is simply false, what do you think mastering is and why does it exist? In effect, what's "in the original mix" of a recording mastered for speakers is: The artistic intent plus the inverse of (a range of) speakers and acoustic environments.


Hifiearspeakers said:


> [1] he made the statement that no headphones can cast different types of soundstage because everything always sounds like it’s in the center of your head, and that the only way the different models of headphones could alter the soundstage would be through magic, Because headphones don’t have soundstage.
> [2] He says that in order to have soundstage you have to have distance from the ear like floorstanding speakers.
> [3] I asserted that there is space from the ear, and that the ear pads themselves represent the room for headphones.
> And I asserted that there are some headphones that do a better job of this spaciality/soundstage than others.


1. That's a lie! I said headphones don't "cast different types of soundstage" because headphones don't know anything about soundstage in the first place, just how to transduce from one signal type to another, NOT because "everything always sounds like it's in the centre of the head". And, in a previous post I specifically stated that I do NOT always perceive sounds in the centre of my head anyway!! Jeez.

2. No I didn't. I said that you have to have speakers some distance away in a room with acoustics.

3. Exactly. The problem is that your assertion CONTRADICTS the science and you've presented absolutely zero science/reliable evidence to support your assertion, just personal perceptions, assumptions and then a bunch of the most hypocritical insults! For those interested in the actual science (rather than hypocritical nonsense) and who don't already know:

The Precedence Effect (Wallach et al, 1949) is one of the oldest and most established, corroborated and studied effects (directly and indirectly) in the whole of psychoacoustic science. In essence, the Precedence Effect demonstrates that a sound reflection which occurs after the original sound with a delay greater than about 50ms with speech (Helmut Haas, 1951) and up to 100ms with music is perceived as two separate sonic events (EG. the original sound plus a distinct echo). Below this threshold we do not perceive two separate sonic events, we perceive a single sonic event. In the case of the delay being between about 1-2ms and 50ms (with speech, or 1-2ms and 100ms with music), we perceive a single "fused" sonic event with spatial information. For example, we would perceive that a single sound is occurring to our left if the original sound is placed in the left channel and the delayed reflection is in the right channel, even if the reflection is the same level or somewhat higher than the original signal. Below about 1-2ms the precedence effect also breaks down: we perceive just a single frequency-modulated event, WITHOUT spatial information! The speed of sound at sea level (and 20C) is 343m/s, which is about 34cm per millisecond. Therefore, for earcups to act like a room, to create any perceivable acoustic information, they must have interior (reflective) surfaces greater than about 34cm - 68cm apart (to create reflections greater than 1-2ms apart). Anyone know of any headphones with earcups that are about 2 feet (60cm) or bigger?
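The path-length arithmetic in that paragraph is easy to verify. Here's a minimal sketch; the speed of sound is a standard physical constant, and the function names are mine, purely for illustration:

```python
# Delay-vs-path arithmetic for the precedence effect thresholds discussed above.
# Speed of sound at sea level, 20 C (standard textbook value).
SPEED_OF_SOUND_M_S = 343.0

def delay_ms_for_path(path_m: float) -> float:
    """Delay (ms) a reflection accumulates over an extra path length in metres."""
    return path_m / SPEED_OF_SOUND_M_S * 1000.0

def min_path_for_delay(delay_ms: float) -> float:
    """Extra path length (metres) needed to produce a given delay."""
    return SPEED_OF_SOUND_M_S * delay_ms / 1000.0

# The 1-2 ms lower bound of the precedence effect implies reflective
# surfaces roughly 34-69 cm apart:
print(min_path_for_delay(1.0))  # 0.343 m
print(min_path_for_delay(2.0))  # 0.686 m

# A typical earcup interior (~5 cm, an assumed figure) yields a delay
# far below that threshold:
print(delay_ms_for_path(0.05))  # ~0.146 ms
```

The third print is the crux of the argument: any reflection inside a normal-sized earcup arrives an order of magnitude too early to register as spatial information.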

G


----------



## sonitus mirus

Not sure how they fit.


----------



## bigshot

71 dB said:


> Bone conducts the sound to ears.


----------



## 71 dB

sonitus mirus said:


> Not sure how they fit.



Finally cans that have acoustic crossfeed!


----------



## FlavioWolff

I have read almost half of this thread and used the search as well, but I haven't found any opinions on Roon's DSP crossfeed. Weird considering its popularity. It has a "default" setting based on something about Bauer (yeah, I'm that ignorant), cmoy and Meier presets, and also a custom mode. Works wonderfully for me. 

what do you most knowledgeable folks think about it?


----------



## bigshot

I would think that to judge a good crossfeed from a bad one, you would look at the flexibility of the settings. There is no one-size-fits-all setting, so it would need to be as flexible as possible to be useful.


----------



## FlavioWolff

bigshot said:


> I would think that to judge a good crossfeed from a bad one, you would be looking at the flexibility of the settings. There is no one-size-fits-all setting. You would need to be able to have it be as flexible as possible to be useful.



This is the implementation on roon: http://bs2b.sourceforge.net/
The custom setting lets you select the cut frequency from 300 Hz to 2 kHz and the "feed level" from 1 to 15 dB.
Seems very flexible for a "mainstream" product!
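For anyone curious what a filter in the bs2b family does in principle, here is a toy sketch: a lowpassed, attenuated copy of the opposite channel summed into each side. This is NOT bs2b's actual filter design (the real thing also applies a small interchannel delay and level compensation); the default cut frequency and feed level below are illustrative values only.

```python
import math

def simple_crossfeed(left, right, fs=44100.0, fcut=700.0, feed_db=-4.5):
    """Toy crossfeed: one-pole lowpass of the opposite channel, attenuated
    and summed into each side. A sketch of the general idea, not bs2b."""
    gain = 10.0 ** (feed_db / 20.0)           # feed level, dB -> linear
    a = math.exp(-2.0 * math.pi * fcut / fs)  # one-pole lowpass coefficient
    out_l, out_r = [], []
    lp_l = lp_r = 0.0
    for l, r in zip(left, right):
        lp_l = (1.0 - a) * l + a * lp_l       # lowpassed left channel
        lp_r = (1.0 - a) * r + a * lp_r       # lowpassed right channel
        out_l.append(l + gain * lp_r)         # each ear gets a dim, dull
        out_r.append(r + gain * lp_l)         # copy of the opposite channel
    return out_l, out_r
```

Feeding a hard-panned signal through this makes the silent channel non-silent, which is exactly the point: the brain stops hearing two unrelated signals and starts hearing one source off to the side.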


----------



## castleofargh

FlavioWolff said:


> This is the implementation on roon: http://bs2b.sourceforge.net/
> The custom settings lets you select Cut Frequency from 300 Hz to 2 KHz and "feed level" from 1 to 15 dB.
> Seems very flexible for a "mainstream" product!


I was going to point to this. It's quite popular, and has been available as a free VST for years (so it's used well outside of Roon). In foobar I've used it a lot, along with Xnor's Xfeed VST. As far as standalone crossfeed is concerned, they do the job. 
The more recent, often non-free offerings tend to go beyond crossfeed, adding extra variables like room reverb or more specific HRTF models. Ultimately you experiment and settle on whatever happens to feel nice to you.


----------



## 71 dB (Jan 12, 2020)

FlavioWolff said:


> The custom settings lets you select Cut Frequency from 300 Hz to 2 KHz and "feed level" from 1 to 15 dB.



At 300 Hz the delay for the crossfed signal is about the same as for sounds that come from the side, at a 90° angle. This is "wide crossfeed" and in my opinion some material may work well with it, but the miniature soundstage I hear is shaped very left-to-right and there is no feeling of depth. A crossfeed level of -3 dB works nicely (the crossfeed level is always negative because the crossfed signal is always quieter than the direct sound), because the human head shadows low frequencies by about that much when the sound comes from the side. The sound doesn't feel mono-like to me, because the delay of the crossfed signal is large (about 700 µs) and crossfeeding stops at a pretty low frequency; in fact the lack of crossfeeding around 500-1000 Hz can be a small problem for me.

At 800 Hz we have the "standard crossfeed." The delay (about 250 µs) of the crossfed signal simulates the delay with speakers in a room, at a 30° angle. At least for me this gives a miniature soundstage that is most similar to the soundstage with speakers, and it feels quite natural in my opinion. There are depth cues for my spatial hearing. The crossfeed level is determined by how much stereo separation the material has: in my opinion some material doesn't need crossfeeding at all, some needs just a little "fine-tuning" (say a crossfeed level of -12 dB), and some recordings with very poor spatiality require a crossfeed level of -1 dB.

At 2 kHz the simulated angle of the sound has dropped to about 10°, and use of a higher crossfeed level can make the sound mono-like in my opinion. The shadow effect of the human head at 1 kHz is hardly 10 dB, so I'd recommend using only the lower values of crossfeed level, -15 … -8 dB. This means that the ILD at low frequencies of the recording should be limited.
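The delay/angle pairings above (roughly 700 µs for a 90° source, 250 µs for 30°) line up with the classic Woodworth approximation of interaural time difference, ITD = r(sin θ + θ)/c. A quick sanity check; the head radius and the formula itself are standard textbook values, not from this post:

```python
import math

# Woodworth ITD approximation: ITD = r * (sin(theta) + theta) / c,
# with theta the source azimuth in radians. Textbook constants below.
HEAD_RADIUS_M = 0.0875   # average adult head radius (assumed value)
SPEED_OF_SOUND = 343.0   # m/s at 20 C

def itd_us(azimuth_deg: float) -> float:
    """Interaural time difference in microseconds for a given azimuth."""
    th = math.radians(azimuth_deg)
    return HEAD_RADIUS_M * (math.sin(th) + th) / SPEED_OF_SOUND * 1e6

print(round(itd_us(90)))  # ~656 us, near the ~700 us "wide" crossfeed delay
print(round(itd_us(30)))  # ~261 us, near the ~250 us "standard" delay
```

So the 300 Hz and 800 Hz cut frequencies correspond, at least roughly, to simulating sources at the side versus sources at the usual ±30° stereo-speaker positions.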


----------



## gregorio

71 dB said:


> ... This gives a miniature soundstage that is most similar to the soundstage with speakers and it feels quite natural. There are depth ques. ...



Oh no, not AGAIN!!!!

You really mean that "it feels quite natural" TO YOU and that YOU perceive depth cues, not that it is actually "quite natural" and there are actually "depth ques" (or better depth cues)! Are you really going to go round and round this same circle yet again?

G


----------



## 71 dB

gregorio said:


> Oh no, not AGAIN!!!!
> 
> You really mean that "it feels quite natural" TO YOU and that YOU perceive depth cues, not that it is actually "quite natural" and there are actually "depth ques" (or better depth cues)! Are you really going to go round and round this same circle yet again?
> 
> G


I don't care about your remarks anymore, but I did "subjectify" my post a little bit so maybe you are happier now.


----------



## FlavioWolff

Could someone please point me to a track that supposedly doesn't "need" crossfeed, so I can check whether, for my tastes, crossfeed is good or bad on it?
According to Meier's old article, crossfeed should be desirable even when there isn't much stereo panning, because of fatigue.


----------



## gimmeheadroom

Most tracks don't need it. It's a correction for bad recordings.


----------



## FlavioWolff (Jan 12, 2020)

71 dB said:


> At 300 Hz the delay for the crossfed signal is about the same as for sounds that come from sides, 90° angle. This is "wide-crossfeed" and in my opinion some material may work well with it, but the miniature soundstage I hear is shaped very left to right and there is no feel of depth.
> 
> ...



Thank you for the explanation.
Indeed, the "1-15 dB feed value" on Roon seems to be negative. When I set it to "15" dB, much less crossfeed is applied than at "1" dB.
Black sabbath's first albums are so much better with xfeed! (i don't need to state that this is subjective evaluation, do I?)


----------



## 71 dB

FlavioWolff said:


> Could someone please indicate me a track that supposedly doesn’t “need” crossfeed, so I can see if, for my tastes, the crossfeed good or bad on it?
> According to Meier’s old article, crossfeed should be desirable even when there isn’t much stereo panning, because of fatigue.



Electronic music example:


Baroque music example:


I don't use crossfeed with these. No need. Crossfeed can only make these worse in my opinion. The Graupner is in my opinion an awesome example of how to get headphone spatiality right. The ILD levels are so correct. The recording works very well on loudspeakers also, so it's what I call an _omnistereophonic_ recording. 

I use crossfeed on almost everything. In my opinion about 98 % of all stereophonic recordings benefit from some crossfeeding, but the remaining 2 % should be left alone, as crossfeed can only make them worse. So, I disagree with Meier 2 % of the time.


----------



## 71 dB

FlavioWolff said:


> Thank you for the explanation.
> Indeed, the "1-15 dB feed value" on Roon seems to be negative. When I set it to "15 dB", much less crossfeed is applied than at "1 dB".
> Black Sabbath's first albums are so much better with xfeed! (I don't need to state that this is a subjective evaluation, do I?)



No problem!  

It's about how you define it, but the logical and widely used way is to compare the crossfed signal to the direct signal, and since it's always quieter, the dB value for crossfeed is negative. So yes, "15" means less crossfeeding than "1". I'm not into Black Sabbath myself, but I listened to "Black Sabbath" from 1970 and yes, it benefits a lot from strong crossfeeding at level -1 dB in my opinion. This is very typical for this kind of music from that era. The channel separation/spatiality is quite uncontrolled and harsh for headphone listening.
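
To put numbers on this convention (nothing here beyond the standard decibel-to-linear-gain formula), the levels mentioned in the thread work out as follows:

```python
# crossfeed level = gain of the crossfed copy relative to the direct signal
for level_db in (-1, -12, -15):
    gain = 10 ** (level_db / 20)
    print(f"{level_db:+d} dB -> crossfed copy at {gain:.2f}x the direct signal")
```

So "15" (-15 dB, a 0.18x copy) really is much milder crossfeed than "1" (-1 dB, a 0.89x copy).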


----------



## 71 dB

gimmeheadroom said:


> Most tracks don't need it. It's a correction for bad recordings.



Well, something is by definition bad if correction is needed. It's about our experiences of whether the recording is bad or good, and if bad, how bad? My spatial hearing works so that most recordings benefit from crossfeed, and there is no mystery in it in my opinion, since recordings are mixed primarily for speakers so that low-frequency ILD is easily more than a few decibels.


----------



## gregorio

71 dB said:


> [1] The Graupner is in my opinion an awesome example of how to get headphone spatiality right. The ILD levels are so correct. The recording works very well on loudspeakers also, so it's what I call an _omnistereophonic_ recording.
> [2] I use crossfeed on almost everything. In my opinion about 98 % of all stereophonic recordings benefit from some crossfeeding, but the rest 2 % should be left alone as crossfeed can only make them worse.



1. On the other hand, that is a particular recording situation which is effectively mono (just the reverb/acoustics are stereo). However, this isn't the case with a lot of other baroque music, antiphonal renaissance/baroque being a particularly obvious example. Making such an antiphonal piece "omnistereophonic" as you call it, would be a serious error of judgement/musicality! Also, the vast majority of original club mixes are mono (or very close to mono) and with vinyl, which is the case in your example, the bass freqs have to be mono.

2. If we're talking pure perception/opinion, then my opinion is that about 2% of all stereophonic recordings benefit from crossfeed and the other 98% should be left alone as crossfeed can only make them worse!



71 dB said:


> [1] Well, something is by definition bad if correction is needed. It's about our experiences of whether the recording is bad or good and if bad how bad?
> [2] My spatial hearing works so that most recordings benefit from crossfeed and there is no mystery in it in my opinion since recording are mixed primarily for speakers so that low frequency ILD is easily more than a few decibels.



1. Obviously that's not true! For example, there are more than a few bass-heads out there, for whom just about all recordings need correction (additional bass). Are just about all recordings therefore "bad" or is it just a case of their particular perception/preference? Probably no recordings exist that someone, somewhere doesn't think needs correction and therefore, according to your logic, all recordings must be "bad".

2. OK, if we're again going with personal perception/preference rather than the facts/science: My spatial hearing works so that most recordings do not benefit from crossfeed and there is no mystery in it, since crossfeed does not emulate my experience of listening to speakers and doesn't even claim to.

This is largely why we have a Sound Science subforum in the first place, so that we're not just arguing between different individuals' impressions/preferences/perceptions.

G


----------



## 71 dB

gregorio said:


> 1. On the other hand, that is a particular recording situation which is effectively mono (just the reverb/acoustics are stereo). However, this isn't the case with a lot of other baroque music, antiphonal renaissance/baroque being a particularly obvious example. Making such an antiphonal piece "omnistereophonic" as you call it, would be a serious error of judgement/musicality! Also, the vast majority of original club mixes are mono (or very close to mono) and with vinyl, which is the case in your example, the bass freqs have to be mono.
> 
> 2. If we're talking pure perception/opinion, then my opinion is that about 2% of all stereophonic recordings benefit from crossfeed and the other 98% should be left alone as crossfeed can only make them worse!
> 
> ...



1. Yes, the harpsichord is a "mono-like" sound source unless the mic(s) is near the instrument compared to the dimensions of the instrument, in which case the mic(s) doesn't see a point sound source. Also, unless the instrument is exactly in the middle, there will be some ILD, ISD and ITD generated, but yes, the result from the instrument alone is pretty mono-like. The acoustics of course are an essential part of the sound: we don't want a recording done inside an anechoic chamber. We want to record the instrument in good acoustics. Of course omnistereophonic recordings become more challenging when the number of instruments/singers increases. It doesn't mean it's an error of judgement/musicality. That's just something we are not able to do with our knowledge and recording technology (using a Jecklin/Schneider disk could maybe work), but the real reason in my opinion is that hardly anybody cares. As long as the spatiality is good on speakers, only people like me worry about spatiality with headphones. It seems these rare omnistereophonic recordings are just "happy accidents." 

The bass indeed has to be mono, or at least near mono, on vinyl, but the track has pretty much the same spatiality on CD. It doesn't matter _why_ the sound is mono-like at lower frequencies. Crossfeed is not needed when it is. In my experience this kind of club music has become less mono-like, and newer stuff often needs crossfeed; if not because of the bass (below 200 Hz), then because of the 200-800 Hz range having too much ILD. 

2. I know your opinion. It's pointless for me to argue about it. It's your opinion. You are the one who "controls/owns" it.

3. In my opinion the (allowed/natural) parameters of spatiality come from things like HRTF and are set. You have to modify your body/head to change HRTF. The level of bass is not like that. There is no "set" value for how much bass is correct. It's an artistic choice. Music is mixed for speakers. Headphone spatiality is by default wrong. Sometimes wrong in a lucky way so it doesn't matter. Most of the time it's just wrong, and I use crossfeed to address that. A room REGULATES spatiality so it's ALWAYS within reasonable natural values no matter how the recording is done, but headphones don't regulate anything, so it's wild: ILD can be anything. Crossfeed is a regulator limiting what ILD can be. If you crossfeed at level -6 dB, that's the largest ILD you can have no matter what recording you play.
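
The "regulator" claim in point 3 can be checked with quick arithmetic (a hypothetical worst case, valid within the crossfeed passband, i.e. ignoring the lowpass rolloff at higher frequencies): with a source hard-panned fully to one channel, the far ear receives exactly the crossfed copy, so the ILD cannot exceed the magnitude of the crossfeed level.

```python
import math

crossfeed_level_db = -6.0                  # gain of crossfed copy vs. direct signal
near_ear = 1.0                             # hard-panned source: all of it in one channel
far_ear = 10 ** (crossfeed_level_db / 20)  # the far ear hears only the crossfed copy

ild_db = 20 * math.log10(near_ear / far_ear)
print(round(ild_db, 6))  # 6.0 -> the ILD ceiling equals |crossfeed level|
```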

4. You have told me this many times. Crossfeed is by nature about our individual perceptions and that's why opinions about it differ so much. There clearly isn't one truth. That's what I have realized after all this time on this board. I have my opinion about crossfeed and I don't care anymore if yours is different. *FlavioWolff* is asking our opinions and I am telling him what I think. He can compare his experiences to our opinions. I know I will NEVER change your mind so I don't even try. Please don't try to change my mind. After using crossfeed for almost 8 years I know by now crossfeed is my thing and I have to live with how MY spatial hearing works. It would certainly make my life easier if 98 % of all stereo recordings didn't need crossfeed, but it is what it is.


----------



## gregorio (Jan 17, 2020)

71 dB said:


> [1] Of course omnistereophonic recordings become more challenging when the number of instruments/singers increases. It doesn't mean it's an error of judgement/musicality. That's just something we are not able to do with our knowledge and recording technology [1a] (using a Jecklin/Schneider disk could maybe work),
> [2] but the real reason in my opinion is that hardly anybody cares. [2a] As long as the spatiality is good on speakers, only people like me worry about spatiality with headphones.
> [2b] It seems these rare omnistereophonic recordings are just "happy accidents."



1. It's not really anything to do with "_our knowledge and recording technology_", it's more about the laws of physics and human perception/preferences. To comply with BOTH of these, "omnistereophonic" not only becomes more challenging under most conditions but impossible and therefore IS an error of judgement/musicality! Obviously this is not true in your particular case, as you personally place ILD above just about all other possible preferences.
1a. Using a simple stereo mic'ing technique is a fairly common cry amongst many audiophiles, audiophiles who don't understand how recordings are made or why. In practice, there are very few occasions where a simple stereo mic'ing technique gives preferable results and antiphonal baroque is certainly not one of them. Professional/Commercial music engineers virtually always use more elaborate mic'ing techniques, not because they are ignorant (as you have previously stated) but for EXACTLY THE OPPOSITE reason!

2. Firstly, how can anyone care about a term that doesn't exist, that you have just invented?
2a. You've repeated this falsehood a number of times! It would be rare that a mix or master is not checked on headphones and it is NOT true that engineers do not worry about spatiality with headphones. However, they/we typically have priorities significantly different to your personal preferences.
2b. Again, what else could it be? You've invented the term "omnistereophonic" and no one else even knows it exists, let alone what you mean by it!



71 dB said:


> 2. I know your opinion. It's pointless for me to argue about it. It's your opinion. You are the one who "controls/owns" it.
> 3. In my opinion the (allowed/natural) parameters of spatiality come from things like HRTF and are set. You have to modify your body/head to change HRTF. The level of bass is not like that. There is no "set" value for how much bass is correct.
> [3a] It's an artistic choice.
> [4] Music is mixed for speakers. [4a] Headphone spatiality is by default wrong.
> [4b] Sometimes wrong in a lucky way so it doesn't matter. Most of the time just wrong and I use crossfeed to address that.



2. There can be a point to arguing against an opinion. For example, in the case of an opinion that contradicts the facts. For instance:

3. The level of bass IS like that and there IS a "set value" for how much bass is correct. In any particular situation, an acoustic instrument produces a "set" amount of bass. However, we typically/routinely override that "set amount" in the name of "human perception/preferences", exactly as we do for spatiality!
3a. Yes it is, in BOTH cases!

4. Generally, music is mixed primarily for speakers but commonly, not exclusively.
4a. Both speaker spatiality and headphone spatiality is by default wrong.
4b. But crossfeed does NOT address that! Even with crossfeed it is still "just wrong" but "just wrong" in a different way, which may or may not be preferable to a particular individual.

You've made ALL the above points more than once previously and they've been rebutted more than once previously, so when I asked "_Are you really going to go round and round this same circle yet again?_" - the answer is apparently "Yes". Oh dear!!

G


----------



## 71 dB

gregorio said:


> The level of bass IS like that and there IS a "set value" for how much bass is correct. In any particular situation, an acoustic instrument produces a "set" amount of bass. However, we typically/routinely override that "set amount" in the name of "human perception/preferences", exactly as we do for spatiality!
> 
> G



Acoustic instruments can be said to have a "set" bass level, but the "bassheads" are into electronic music...


----------



## gregorio

71 dB said:


> Acoustic instruments can be said to have a "set" bass level, but the "bassheads" are into electronic music...



But I'm not talking about "bassheads". I'm simply making the point that we typically override the "set"/"natural" bass level in just about every recording of every genre, the same is true of the "spatiality" and for exactly the same reason (human perception/preferences).

G


----------



## pinnahertz

71 dB said:


> Acoustic instruments can be said to have a "set" bass level, but the "bassheads" are into electronic music...


How about those real sub-bass-heads that lust for a solid 32' organ pipe note?  IIRC, that's roughly 16 Hz, and can be LOUD.  Seems like the contemporary bass I hear coming from the sub in a car driving down the street is about 40 Hz, and confined to one note, which gives me a bass-headache.


----------



## 71 dB

gregorio said:


> But I'm not talking about "bassheads". I'm simply making the point that we typically override the "set"/"natural" bass level in just about every recording of every genre, the same is true of the "spatiality" and for exactly the same reason (human perception/preferences).
> 
> G



I mean acoustic instruments have a physics-based bass level they radiate. The bass level at the ears can be very different depending on the acoustics. Spatiality is also dependent on the acoustics, but has perhaps fewer degrees of freedom, and that's why for example large ILD is an issue to some people. Frankly I have given up trying to justify crossfeed based on science. I thought I could do it quite easily, but I have convinced hardly anyone so far. So much energy and time wasted. I find crossfeed beneficial to most recordings, and to me science justifies why. Some other people agree with me, but that's it.



pinnahertz said:


> How about those real sub-bass-heads that lust for a solid 32' organ pipe note?  IIRC, that's roughly 16 Hz, and can be LOUD.  Seems like the contemporary bass I hear coming from the sub in a car driving down the street is about 40 Hz, and confined to one note, which gives me a bass-headache.



Car subwoofers are often band-pass boxes creating a high bass output level in a narrow frequency band, and in traffic that's all you hear coming from the car, as the other low frequencies get masked by the traffic noise. So it sounds like one-note bass…

The 16 Hz stuff is "SPL people" having ~20 kW amps driving a dozen 18" woofers to create "sonic hair dryers" to impress girls and vibrate car doors to impress dudes…


----------



## gregorio

71 dB said:


> [1] I mean acoustic instruments have a physics-based bass level they radiate. The bass level at the ears can be very different depending on the acoustics. Spatiality is also dependent on the acoustics, but has perhaps fewer degrees of freedom, and [1a] that's why for example large ILD is an issue to some people.
> [2] Frankly I have given up trying to justify crossfeed based on science.
> [3] I find crossfeed beneficial to most recordings, and to me science justifies why.



1. Bass level at the ears can be very different depending on the acoustics but the point I'm making, again, is that we routinely change the bass to levels outside/beyond what does (or even could) exist according to acoustics. The exact same is true of "spatiality" and we have the same amount of freedom or arguably even more, defined/limited purely by artistic intention, which in turn is based on the human perception and preferences of the artists and their target audience.
1a. No, what I've just stated is the reason why "large ILD is an issue to some people", IE. It's a preference and therefore, BY DEFINITION, there will always be at least "some people" who do not prefer it. Just as there will always be at least some people who do not prefer pretty much any/every aspect of music recording creation; bass level, dissonance, treble level, intonation, distortion, etc, etc, etc.

2. But clearly you haven't given up trying. Isn't this debate you started, about there not being an equivalence between bass level and spatiality, exactly that, an attempt to justify crossfeed based on science?

3. Yes, "science justifies why" to YOU, because you cherry-pick, misrepresent and/or mis-apply the science in order to "justify why"! Again, this debate about no equivalence with bass level is another typical example: You stated "_In my opinion the (allowed/natural) parameters of spatiality come from things like HRTF and are set._" - You are defining what is "allowed" according to what occurs "naturally" in the real world. You then state "_There is no "set" value for how much bass is correct. It's an artistic choice._" - But you now admit that there IS a "set value" for how much bass is "correct"/"allowed" if we apply the SAME CONDITIONS, ie. What occurs "naturally" in the real world. So, you mis-applied the science, you cherry-picked the science which is defined by the real/natural world as far as spatiality/ILD is concerned but omitted/misrepresented the science which is defined by the real/natural world as far as bass levels are concerned! Music recording creation is an art-form and therefore the science defined by the real/natural world is NOT applicable, period ... It's not applicable to bass levels or anything else, including spatiality!

G


----------



## FlavioWolff

I wonder how large your head should be so that your ears have the adequate distance from each other to correctly hear the spatialization the “artist intended”


----------



## pinnahertz

FlavioWolff said:


> I wonder how large your head should be so that your ears have the adequate distance from each other to correctly hear the spatialization the “artist intended”


As long as your head is roughly the same size as the artist, you're done.


----------



## FlavioWolff

pinnahertz said:


> As long as your head is roughly the same size as the artist, you're done.



Good to know. Gotta choose my artists more carefully. And make sure their mixing engineers have similar heads.


----------



## castleofargh

FlavioWolff said:


> I wonder how large your head should be so that your ears have the adequate distance from each other to correctly hear the spatialization the “artist intended”


Deep nihilistic yet accurate answer: it's irrelevant. The artist's intent is an audiophile utopia that forgets how our own perception of the world is deeply subjective (as in, wrong and unique). We'll never know what the artist really intended, and he'll never know exactly how we're feeling when listening to his music.

More practical: it's irrelevant because your brain learned everything with your head, if you don't look left when someone on your right calls you, chances are that your brain is calibrated just fine for sound localization.
With crossfeed, you should have some setting related to your head size and the "virtual speaker" angle you want. So for this specific effect, it's irrelevant because all head sizes are right with the right setting.


----------



## FlavioWolff

castleofargh said:


> Deep nihilistic yet accurate answer: it's irrelevant. The artist's intent is an audiophile utopia that forgets how our own perception of the world is deeply subjective (as in, wrong and unique). We'll never know what the artist really intended, and he'll never know exactly how we're feeling when listening to his music.
> 
> More practical: it's irrelevant because your brain learned everything with your head, if you don't look left when someone on your right calls you, chances are that your brain is calibrated just fine for sound localization.
> With crossfeed, you should have some setting related to your head size and the "virtual speaker" angle you want. So for this specific effect, it's irrelevant because all head sizes are right with the right setting.



I was joking


----------



## bigshot

Some artists and engineers have swelled heads. You might try wearing a hat.


----------



## castleofargh

FlavioWolff said:


> I was joking


sorry  
this hobby can make it hard to see the line between humor and a deadly serious argument.


----------



## 71 dB

gregorio said:


> But clearly you haven't given up trying. Isn't this debate you started, about there not being an equivalence between bass level and spatiality, exactly that, an attempt to justify crossfeed based on science?
> 
> G



It's a process and I am not 100 % done giving up. People are told they should try in life and keep trying even when it seems hard. 

Bass level can be whatever depending on the acoustics and other things. A room doesn't "regulate" bass level; on the contrary, room acoustics make the bass level even more random! ILD at low frequencies can be large only when the sound source is near one ear. If the intent of the artist is to have the sound appear near one ear, large ILD is good for that (with headphones), but with speakers it doesn't work at all! You'd need AT LEAST crosstalk canceling, maybe an anechoic chamber too. Since music is mixed primarily for speakers, artists should be aware that intending to have the sound near one ear is problematic to say the least: you are mixing for a reproduction system that doesn't really support what you are doing! That's like trying to make colorful movies using black and white film.


----------



## 71 dB

gregorio said:


> Yes, "science justifies why" to YOU, because you cherry-pick, misrepresent and/or mis-apply the science in order to "justify why"! Again, this debate about no equivalence with bass level is another typical example: You stated "_In my opinion the (allowed/natural) parameters of spatiality come from things like HRTF and are set._" - You are defining what is "allowed" according to what occurs "naturally" in the real world. You then state "_There is no "set" value for how much bass is correct. It's an artistic choice._" - But you now admit that there IS a "set value" for how much bass is "correct"/"allowed" if we apply the SAME CONDITIONS, ie. What occurs "naturally" in the real world. So, you mis-applied the science, you cherry-picked the science which is defined by the real/natural world as far as spatiality/ILD is concerned but omitted/misrepresented the science which is defined by the real/natural world as far as bass levels are concerned! Music recording creation is an art-form and therefore the science defined by the real/natural world is NOT applicable, period ... It's not applicable to bass levels or anything else, including spatiality!
> 
> G



We can't use science 100 % accurately, can we? So, we are doomed to cherry-pick and mis-apply the science. What level of scientific accuracy is needed to have gains? That's where we disagree. To me a simple crossfeeder improves things despite ignoring a lot of what science says. To you crossfeed doesn't do things well enough and ignores too much. Crossfeed doesn't make headphones sound like speakers (in other words, what the artists intended), but to me it takes the headphone sound a step or two toward speaker sound while removing aspects of the sound that annoy me (because I don't like sounds that are very near my ears and I don't like how my spatial hearing interprets too-large ILD. I don't have these problems when listening to speakers, so why should I suffer from them with headphones if crossfeed can fix it for me?).


----------



## gregorio

71 dB said:


> [1] It's a process and I am not 100 % done giving up.
> [1a] People are told they should try in life and keep trying even when it seems hard.
> [2] Bass level can be whatever depending on the acoustics and other things. A room doesn't "regulate" bass level; on the contrary, room acoustics make the bass level even more random!
> [2a] ILD at low frequencies can be large only when the sound source is near one ear.



1. But you stated "_Frankly I have given up trying to justify crossfeed based on science._"
1a. Obviously that only applies in certain circumstances. Should flat-earthers, climate deniers, creationists, anti-vaxxers, etc., keep trying even though it seems hard?

2. No, bass level CANNOT be "whatever"! Sure, there are variables that define what the bass level is, for example; how the instrument is designed, how it's played, resonances, room acoustics, distance from source, etc., but of course there are limits with any given set of acoustic variables and we routinely exceed those limits with processing, often massively so, according to artistic intention. In fact, without an exception I can think of, all rock and every other popular music genre for the last 60 odd years absolutely relies on this fact!
2a. And again, you are applying the science of what occurs in the real/natural world to an art-form that is neither defined nor constrained by that science and if it were, the vast majority of music recordings for the last 60 years or so could not exist. 


71 dB said:


> [1] We can't use science 100 % accurately, can we?
> [1a] So, we are doomed to cherry-pick and mis-apply the science.
> [1b] What level of scientific accuracy is needed to have gains? That's where we disagree.
> [2] To me a simple crossfeeder improves things despite ignoring a lot of what science says. To you crossfeed doesn't do things well enough and ignores too much.
> ...



1. Why not?
1a. No, that's nonsense! Certainly we have to cherry-pick, as it's virtually always impractical to cite ALL the relevant scientific evidence, but the whole point of this subforum is to cherry-pick the science WITHOUT mis-applying it. If we didn't, this forum would be no different to any other forum here, and even no different to the majority of audiophile marketing material!
1b. That is indeed where we disagree, because you (inadvertently or not) seem to effectively be arguing that because we cannot be 100% accurate, it's OK to be 100% inaccurate! Again, you CANNOT apply the science of the real/natural world to an art-form, that's largely what differentiates an art-form from science in the first place!!

2. No, that is NOT what I've stated, you are again misrepresenting what's being stated to justify your agenda! It is NOT a case of crossfeed not "doing things well enough", it's a case of crossfeed actually making the situation worse.
2a. Again, NO, you're just making up "facts" to justify your agenda, without any science at all! What evidence do you have that artists never listen to the mix or master with headphones and never intend it to sound how it does on HPs? So the "level of scientific accuracy needed to have gains" is 0%, is it?
2b. Which is your personal perception and your personal preference, not an objective fact! To me, crossfeed does NOT "take a step towards speaker sound", it takes a step sideways that is NOT closer to speaker sound and is also obviously not a step towards un-crossfed HP sound, the two sound presentations the artists/engineers are likely to have tested! The difference between us is that I'm stating this as purely my personal perception/preference. I'm not stating it as an objective fact because that would be a lie/perversion of the science, because the science indicates that it does indeed vary according to personal perception! 

The answer to my question at the start of this latest discourse ("_Are you really going to go round and round this same circle yet again?_"), is unfortunately "Yes"!!

G


----------



## pinnahertz

The reality of "crossfeed" is that it's not a universally preferred or accepted remedy, because the problem it tries to fix is not universally perceived as a problem in the first place.  Listeners have varying conditioned preferences.  Each recording varies; many have their own baked-in crossfeed, intentional or otherwise. Then there's the problem of degree: each individual recording requires a customized, optimum degree of crossfeed, an amount which cannot be calculated but can only be arrived at through listener preference.  The degree runs from none to a lot.  Then there's the question of the type or style of the crossfed signal, which adds another vector to "degree".

Unlike corrective equalization, which has a specific measurable result as its target, the application of crossfeed is entirely subjective and highly variable.  While the differences between headphone and speaker-in-room reproduction are well known (and neither usually represents an artist's original intent fully or accurately), to suggest it is a scientific correction of a specific problem that can or should be universally applied is more a form of fanaticism than applied science, and is fully unsupported by research.   

And if there's an echo in here, it's because this circle has gone around again, and includes several threads.  I typed out a response similar to the above back in 2018, and this thread goes back to 2010.  Pummeling a demised equine never causes it to get up and trot.


----------



## 71 dB

gregorio said:


> 1. But you stated "_Frankly I have given up trying to justify crossfeed based on science._"
> 
> G



The fact that I still post here doesn't mean I'm still trying to justify crossfeed based on science. This is me dealing with my total failure. This is devastating for me, a very painful thing mentally. Science doesn't justify crossfeed? Some people like crossfeed just because they don't respect artistic intent? That's devastating for me.


----------



## pinnahertz

71 dB said:


> The fact that I still post here doesn't mean I'm still trying to justify crossfeed based on science. This is me dealing with my total failure. This is devastating for me, a very painful thing mentally. Science doesn't justify crossfeed? Some people like crossfeed just because they don't respect artistic intent? That's devastating for me.


The fact that others do or do not accept one's personal preferences is not an indication of failure...or success, unless the goal was to garner a following of disciples.  If the goal is to enjoy music on headphones more fully, it would seem you have experienced 100% success in your world.


----------



## pinnahertz

You should have studied marketing.  No product ever marketed has achieved 100% acceptance; most products achieve a very low acceptance across the total market, even the successful ones. Accurate marketing presents the product accurately and produces reasonable expectations.  False marketing presents the product at a level beyond reality and produces elevated expectations which won't often be met.  Most products are marketed between the two.  Market analysis requires the dispassionate consideration of all data, and applying it to some form of marketing course correction.


----------



## 71 dB

pinnahertz said:


> The fact that others do or do not accept one's personal preferences is not an indication of failure...or success, unless the goal was to garner a following of disciples.  If the goal is to enjoy music on headphones more fully, it would seem you have experienced 100% success in your world.



Yes, crossfeed has been a very nice success in _my world_ of enjoying music, and that's why the failure here is not only bitter but also unexpected. Garnering a following of disciples, as you put it, was the motivation to join this discussion board, but retrospectively it was a doomed effort. I think I have given everything I have here and it just didn't work. Whether science simply can't be used to justify crossfeed or my intellectual capacity is not up to the task, I don't know. Had I not joined this board I would perhaps be a happier person, living with my delusion that the reason I find crossfeed beneficial is backed up by the science of human spatial hearing. Ironically, it was thinking about the science of spatial hearing together with headphone audio that made me discover crossfeed, but strange things happen...


----------



## FlavioWolff (Jan 22, 2020)

pinnahertz said:


> The reality of "crossfeed" is that it is not a universally preferred or accepted remedy because the problem it tries to fix is not universally perceived as a problem in the first place.



I was thinking just that.

In my case, I stumbled upon the term crossfeed about a month ago, reading an article about plugins for headphone mixers (specifically the CanOpener crossfeed). The topic grabbed my attention and made me abandon my faithful Spotify and start using Roon, just so I could experience its crossfeed DSP. I thought it was nice, but there was not a "problem" for me before that. I have listened to headphones since my pre-teens (I'm 28 now), so maybe I'm just used to sounds near my ears. I even enjoy the hard panning from old classic rock recordings on headphones. Zero fatigue or dizziness or anything weird about moving my head and the sounds not changing.

Headphone listening was never a problem to me, until I read that it is not what the "artist intended", because most mixes, especially old ones, were made only for speakers (even if it's not true, it makes intuitive sense and I can't stop thinking about it). Now my obsessive me is telling me that I am listening the wrong way. My enjoyment of music is suffering because of this "knowledge". Ignorance sure is bliss!

What comforts me is reading this thread and acknowledging that no crossfeed implementation is 100% right for every recording. That makes the obsessive me question whether the current crossfeed implementation is right for what I'm listening to, which makes me turn it off more often than not. One "wrong" is smaller than the other.

I should actually rewire my brain and "forget" I ever read about this thing.

Or go back to therapy.


----------



## bigshot

The only time I've heard people in my circle use it is to take the curse off of the early Beatles records where the stereo mix was pretty thoughtless.


----------



## pinnahertz

71 dB said:


> Yes, crossfeed has been a very nice success in _my world_ enjoying music and that's why the failure here is not only bitter, but also unexpected. Garnering a following of disciples, as you put it, was the motivation to join this discussion board, but in retrospect it was a doomed effort. I think I have given everything I have here and it just didn't work. Either science simply can't be used to justify crossfeed, or my intellectual capacity is not up to the task. Had I not joined this board I would perhaps be a happier person, living with my delusion that the reason I find crossfeed beneficial is backed up by the science of human spatial hearing. Ironically, it was thinking about the science of spatial hearing together with headphone audio that made me discover crossfeed, but strange things happen...


If you think you have not converted a few, you’d be mistaken. If you think your efforts didn’t get even more to try crossfeed, you’d be even more mistaken. But if you wanted to convince the vast majority that your method is the only or best way, you’d need to manage expectations better.


----------



## pinnahertz

FlavioWolff said:


> I was thinking just that.
> 
> In my case, I stumbled upon the term crossfeed about a month ago, reading an article about plugins for headphone mixers (specifically the CanOpener crossfeed). The topic grabbed my attention and made me abandon my faithful Spotify and start using Roon, just so I could experience its crossfeed DSP. I thought it was nice, but there was not a "problem" for me before that. I have listened to headphones since my pre-teens (I'm 28 now), so maybe I'm just used to sounds near my ears. I even enjoy the hard panning from old classic rock recordings on headphones. Zero fatigue or dizziness or anything weird about moving my head and the sounds not changing.
> 
> ...


One comment...those hard-panned classic rock mixes were known by another name in their youth: great headphone music.  There were even “headphone hours” on the early prog rock radio stations that featured those mixes.  Stereo headphones were new, heck, stereo itself was still new.  And I wouldn’t be so hasty to decide what the artist actually intended either. Just listen to some of those mixes with tracks whip-panning back and forth between channels, and don’t assume that headphone listening was never a thought.


----------



## 71 dB

pinnahertz said:


> But if you wanted to convince the vast majority that your method is the only or best way, you’d need to manage expectations better.



That's not what I was trying to convince anyone of, but I still need to manage expectations better...


----------



## gregorio

FlavioWolff said:


> Headphone listening was never a problem to me, until I read that it is not what the "artist intended", because most mixes, especially old ones, were made only for speakers (even if it's not true, it makes intuitive sense and I can't stop thinking about it).



This is a big and common problem with some/many audiophiles. They make some assumption/conclusion which appears logical (or actually is logical) and repeat it as an assertion of fact. The whole audiophile world is built on these assumptions but especially when it comes to the creation of music recordings: The composition, arrangement, recording, editing, mixing and mastering, the vast majority of audiophiles appear particularly ignorant, not even understanding on a basic level what each of these processes actually are, let alone what the goals and artistic intents may be. So we see a bunch of assertions, nearly all of which are incorrect! Either these assertions are just plain wrong or they're incorrect on the basis that they're only partially or sometimes true rather than always (or nearly always) true. What you've read "_that it is not what the artist intended_" is INCORRECT. Yes, virtually all music mixes were and are made on speakers, and primarily for speakers but extrapolating "artist intent" from this fact is simply a FALLACY, a correlation (cause and effect) fallacy based on ignorance of what "artistic intent" actually means.

I'm not going to get too far into "artist intent" because it's a large, complex subject area that in western music has been built-up over the course of 600 years (and some aspects, over 2,000 years) but contrary to audiophile belief, there is rarely (if ever) AN artistic intent. In any given track there are likely to be hundreds of artistic intents; some overt, some subliminal, some are very specific, some are not (they cover a "range") and some artistic intentions aren't even intentional, as contradictory as that might appear (in fact there's a sub-genre of "modern" classical music that's entirely reliant on this fact!). Let me give you an example pertinent to this specific discussion: I've been directly responsible or involved in the creation of numerous commercial music tracks (well over 1,000) over the course of nearly 30 years and with a percentage of them, it was appropriate to check the mix and master with headphones. Pretty much without exception, certain aspects of the headphone presentation were preferable to the speaker presentation, while other aspects were not. Sometimes the mix/master would be adjusted to bring the speaker presentation more in line with the headphone presentation (and/or vice versa) and sometimes the mix would not be adjusted, because although the HP and speaker presentations were quite different, BOTH fell within the "range" of artistic intention. Many artists and engineers do the same but some don't because they feel the different presentations aren't materially important to their artistic intention. From a consumer's point of view there's simply no way to know what the situation is (unless it's explicitly stated on the album cover), whether applying crossfeed conflicts with the artistic intentions or makes no material difference. This is further complicated by false assumptions of what constitutes "good" and "better", plus the false assumption that what they personally perceive as "better" is by definition automatically "better".

Not sure if I've helped with all this or just caused more confusion. As a general rule of thumb though, if you read on an audiophile forum; "_that it is not [or is] what the artist intended_", take it with a pinch of salt unless they've got an actual relevant quote from the artist/s!

G


----------



## 71 dB

Personally I don't think crossfeed is attacking artistic intent; on the contrary, I think it is _protecting_ the artistic intent. When I compare the original sound to sound crossfed at the proper level, it's hard for me to understand why the original sound represents the artistic intent more closely than the crossfed version. I could understand if there were a couple of artists in the world WANTING to make high-ILD music, but I don't believe pretty much ALL of them are after that. Much more plausible to me is the assumption of a music production culture of mixing on speakers and creating speaker spatiality, which allows wild ILD because acoustic crossfeed regulates it down to the levels expected by spatial hearing. Headphones don't give such freedom without problems of sound that some listeners find annoying and tiring.

I discovered crossfeed in 2012 at age 41. Before that I just thought headphones sound how they sound because they are sound sources near the ears. I preferred speakers and considered headphone sound a bit annoying. I didn't even think about headphone sound, because what can you do to improve headphone sound (apart from buying better ones)? You can improve speaker sound in so many ways: acoustics of the room, placement, etc. In 2012 I finally thought about headphone sound and what the science of spatial hearing says about it, and suddenly realised NOTHING prevents high ILD entering the ears with headphones! Then I remembered how I listened to portable radio as a teenager with headphones, using mono mode to reduce noise and finding the mono headphone sound somehow comfortable.

It took me two decades after studying the science of spatial hearing to realize how it reveals a potential problem in headphone sound so it's not surprising for me if the whole thing has always been more or less overlooked in music production. That's why I feel like I am _protecting_ the artist using crossfeed: _"Hey you artist X, you overlooked large ILD when producing your music, but don't worry! I use crossfeed to fix it to enjoy your music fully."_
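For readers curious what this "regulating ILD down" looks like in samples, here is a deliberately simple, illustrative crossfeed sketch in Python. The function name, the one-pole low-pass, and all parameter values are assumptions chosen for demonstration only, not the algorithm of any particular plugin or product:

```python
import numpy as np

def crossfeed(left, right, sr=44100, delay_ms=0.3, cutoff_hz=700.0, atten_db=-12.0):
    """Toy Bauer-style crossfeed: add a delayed, low-pass-filtered,
    attenuated copy of the opposite channel to each ear.
    All parameter values here are illustrative assumptions."""
    delay = int(sr * delay_ms / 1000.0)          # inter-aural delay in samples
    gain = 10.0 ** (atten_db / 20.0)             # dB attenuation -> linear gain

    # One-pole low-pass, crudely simulating head shadowing of highs.
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)
    def lp(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, v in enumerate(x):
            acc = (1.0 - a) * v + a * acc
            y[i] = acc
        return y

    def delayed(x):
        # Shift right by `delay` samples, padding with silence.
        return np.concatenate([np.zeros(delay), x[:len(x) - delay]])

    out_l = left + gain * delayed(lp(right))
    out_r = right + gain * delayed(lp(left))
    return out_l, out_r
```

Feeding a hard-panned signal through this leaves the original channel essentially untouched and leaks an attenuated, darker, slightly delayed copy into the other ear, which is the ILD reduction being debated in this thread.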


----------



## pinnahertz

Early cross-feed was called FM Stereo radio.


71 dB said:


> Personally I don't think crossfeed is attacking artistic intent; on the contrary, I think it is _protecting_ the artistic intent.


You missed the point.  How do you know what the artist intent is?  Everything else is subjective and opinion:


71 dB said:


> When I compare the original sound to sound crossfed at the proper level, it's hard for me to understand why the original sound represents the artistic intent more closely than the crossfed version. I could understand if there were a couple of artists in the world WANTING to make high-ILD music, but I don't believe pretty much ALL of them are after that. Much more plausible to me is the assumption of a music production culture of mixing on speakers and creating speaker spatiality, which allows wild ILD because acoustic crossfeed regulates it down to the levels expected by spatial hearing. Headphones don't give such freedom without problems of sound that some listeners find annoying and tiring.
> 
> I discovered crossfeed in 2012 at age 41. Before that I just thought headphones sound how they sound because they are sound sources near the ears. I preferred speakers and considered headphone sound a bit annoying. I didn't even think about headphone sound, because what can you do to improve headphone sound (apart from buying better ones)? You can improve speaker sound in so many ways: acoustics of the room, placement, etc. In 2012 I finally thought about headphone sound and what the science of spatial hearing says about it, and suddenly realised NOTHING prevents high ILD entering the ears with headphones! Then I remembered how I listened to portable radio as a teenager with headphones, using mono mode to reduce noise and finding the mono headphone sound somehow comfortable.
> 
> It took me two decades after studying the science of spatial hearing to realize how it reveals a potential problem in headphone sound so it's not surprising for me if the whole thing has always been more or less overlooked in music production. That's why I feel like I am _protecting_ the artist using crossfeed: _"Hey you artist X, you overlooked large ILD when producing your music, but don't worry! I use crossfeed to fix it to enjoy your music fully."_


Artists don't communicate their "intent" well, if at all.  An example would be a track released in 1991 by Suzanne Ciani, "Rain" on her "Hotel Luna" CD.  The booklet says something to the effect of, "thanks to the Roland RSS-10, the raindrops are where they are supposed to be" (not an exact quote, but close).  However, back in 1991 it was darn hard for the average listener to find out what the RSS-10 was supposed to do.  If you did find out, you learned it was an inter-aural crosstalk cancellation system meant to expand a soundstage far beyond the confines of two speakers, and used a DSP to essentially do the inverse of cross-feed.  That only worked properly on speakers, and not at all on headphones.  And it didn't work well at all on the average home speaker setup; it had to have a well controlled and symmetric speaker and room layout with few random early reflections.  Did Suzanne communicate all of that? Not a bit.  Therefore, even though she did imply a special psychoacoustic process was in place on her "raindrops", she might as well have not said a word about it, because it didn't help anyone understand how to play the track "as the artist intended" without doing some personal research into a product that, today, has long been discontinued.  So what now do you assume the artist intent was?  And that example made at least an attempt at giving the listener a tiny peek into the artist's production intent.  Still failed, perhaps made things worse.

So what would we be doing by expressing a firm conviction about what we think the artist intent is?


----------



## 71 dB (Jan 23, 2020)

pinnahertz said:


> Early cross-feed was called FM Stereo radio.
> 1 --- You missed the point.  How do you know what the artist intent is?
> 
> 2 --- An example would be a track released in 1991 by Suzanne Ciani, "Rain" on her "Hotel Luna" CD.  The booklet says something to the effect of, "thanks to the Roland RSS-10, the raindrops are where they are supposed to be" (not an exact quote, but close).  However, back in 1991 it was darn hard for the average listener to find out what the RSS-10 was supposed to do.  If you did find out, you learned it was an inter-aural crosstalk cancellation system meant to expand a soundstage far beyond the confines of two speakers, and used a DSP to essentially do the inverse of cross-feed.  That only worked properly on speakers, and not at all on headphones.  And it didn't work well at all on the average home speaker setup; it had to have a well controlled and symmetric speaker and room layout with few random early reflections.  Did Suzanne communicate all of that? Not a bit.  Therefore, even though she did imply a special psychoacoustic process was in place on her "raindrops", she might as well have not said a word about it, because it didn't help anyone understand how to play the track "as the artist intended" without doing some personal research into a product that, today, has long been discontinued.  So what now do you assume the artist intent was?  And that example made at least an attempt at giving the listener a tiny peek into the artist's production intent.  Still failed, perhaps made things worse.
> ...



1 --- I can use my own head to figure out the intentions of the artist. Good art encapsulates intent and meaning.

2 --- I don't think I am familiar with this artist, but I listened to the track "Rain" on Spotify, which doesn't list any album named "Hotel Luna"; the "Rain" track is listed on the album "Pianissimo" from 1990. A track named "Hotel Luna" is found on the 1992 album "The Private Music of Suzanne Ciani". Anyway, I don't hear any raindrops on the track "Rain". Maybe they are too quiet for my hearing? All I hear is piano and some high-pitched synthetic sounds. I don't like how the track sounds on headphones without crossfeed, but crossfeed level -3 dB seems to work nicely for me. The track "Hotel Luna" is decent new age.


----------



## bigshot (Jan 23, 2020)

gregorio said:


> contrary to audiophile belief, there is rarely (if ever) AN artistic intent. In any given track there are likely to be hundreds of artistic intents; some overt, some subliminal, some are very specific, some are not (they cover a "range") and some artistic intentions aren't even intentional, as contradictory as that might appear



In the era prior to recording, the goal was to NOT have a specific artistic intent. The performer or conductor would INTERPRET the music. On one night the music might sound one way and on another it might sound quite different, because that's how the musicians felt it. Recording brought in the desire among collectors to own the "one and true" version of a piece of music. Chasing after that is a fool's errand. There are "good" versions and "bad" versions, but if there is only one "true" version, that means the music is as dead as a doornail. I look for energy and expression in a performance and fidelity and balance in a recording. Personally, I think it would be better if rock artists did multiple live versions of their albums. I got the Allman Bros Fillmore East box with every concert of their run, and the box set of Zappa's famous Halloween show in New York, and I don't want to pick which concert is the best. I listen to all of them and they are all good for different reasons.



71 dB said:


> I can use my own head to figure out the intentions of the artist.



You could always drop them an email and ask them if they want you to listen to their music on headphones with crossfeed! But I bet you wouldn't like the answers and would stick with your own solipsistic idea of "artistic intent".


----------



## pinnahertz

71 dB said:


> 1 --- I can use my own head to figure out the intentions of the artist. Good art encapsulates intent and meaning.


Why, because you're psychic?


71 dB said:


> 2 --- I don't think I am familiar with this artist, but I listened to the track "Rain" on Spotify which doesn't list any album named "Hotel Luna", but the "Rain" track is listed on album "Pianissimo" from 1990. Track named "Hotel Luna" is found on 1992 album "The Private Music of Suzanne Ciani". Anyway I don't hear any raindrops on the track "Rain". Maybe they are too quiet for my hearing? All I hear is piano and some high pitched synthetic sounds. I don't like how the track sounds on headphones without crossfeed, but crossfeed level -3 dB seems to work nicely for me. The track Hotel Luna is decent new age.


Assuming you found the original all-electronic version as on the Hotel Luna CD, the raindrops she refers to are musical representations of raindrops, not actual drops of water.  
Ah HA!  So...you CAN'T use your head to figure out the artist's intentions!  You just disproved #1 without any help from me.


----------



## gregorio

71 dB said:


> [1] Personally I don't think crossfeed is attacking artistic intent; on the contrary, I think it is _protecting_ the artistic intent. ...
> [1a] That's why I feel like I am _protecting_ the artist using crossfeed: _"Hey you artist X, you overlooked large ILD when producing your music, but don't worry! I use crossfeed to fix it to enjoy your music fully."_
> [1b] It took me two decades after studying the science of spatial hearing to realize how it reveals a potential problem in headphone sound so it's not surprising for me if the whole thing has always been more or less overlooked in music production.
> [2] When I compare the original sound to sound crossfed at proper level it's hard for me to understand why the original sound represents the artistic intent more closely than the crossfed version.
> [2a] Much more plausible for me is the assumption of music production culture ...



1. This statement and the one quoted in #1a are an excellent demonstration of the point I was trying to make in post #1617. In fact it's hard to think of a better demonstration! You've invented a conclusion that seems logical to you, but clearly you are ignorant of the process and your conclusion is actually nonsense. For example:
1a. You seem to have the bizarre notion that the process of "producing music" is some engineer in a studio who doesn't understand what he's doing, throwing together a mix on speakers in a few hours and overlooking "large" things. While workflows can vary considerably, your "notion" is partially correct, the initial phase of mixing IS typically the creation of a (rough) mix in an hour or two but what you are ignoring (or ignorant of) is the rest of the mixing process, which is the vast majority! After this initial phase the producer is brought in, for a number of subsequent phases that typically take anywhere from a few days to a month. By the end of the mixing process there will have been a number of (rough) mixes created, each more refined than the last, listened to and analysed in minute detail by a number of people (the engineer, producer and typically one or more of the musicians/artists) on a variety of playback equipment. By the time the final mix is achieved, the engineer and producer will have heard the track hundreds of times and EVERY element and process in the mix will have been tweaked to an accuracy of half a dB or so. Then of course it's off to another studio and another engineer (the mastering studio and engineer) for further analysis and tweaking at an even more minute level. In all of this lengthy process, performed by a number of highly trained/experienced professionals, much of which is near, at and even BEYOND the limits of human audibility, the notion that they've all "overlooked" a "problem" that is ABSOLUTELY MASSIVE compared to every other aspect of the mix/master is just pure nonsense! I'm not saying it's impossible that a large ILD has been overlooked but certainly as a general rule, if a mix has a "large ILD" the vast majority of the time it is NOT because it's been "overlooked", it's there because it's intended to be there.
1b. Two decades, you're joking? When I started studying sound engineering, it took me about 2 minutes, as it does pretty much every new music engineering student! You have previously admitted you have no formal training, no professional experience and know next to nothing about commercial music production, but here you are making grand sweeping (FALSE) assertions about those of us who do it professionally for a living. What an excellent example of TYPICAL audiophile nonsense: Make up an assertion to justify a belief/agenda and then defend it to the death, regardless of the fact that it contradicts the actual facts and requires that those who do it for a living, with years of formal training/experience (and all of the education systems which provided that training), ALL be more ignorant than an audiophile who admits near total ignorance. If that's not delusional, I don't know what is!

2. And there we have it! You are effectively stating that because "_it's hard for you to understand_", it can't be true, while your made-up, unsupported alternative (#2a), which contradicts professional practice (and the facts/evidence), must be true. A rational mind would obviously question its understanding but clearly, a strongly held agenda/belief precludes a rational mind! So round and round we go, AGAIN!
2a. Exactly! You make up an assumption of music production culture which must be true because it's more plausible to you, DESPITE self-admittedly being ignorant of commercial music production, never having even witnessed commercial music production culture, let alone been a member of it, and contradicting those who have been a member of that culture for their entire working lives. Your response is pretty much a PERFECT example of what I stated in the first paragraph of my post #1617, so thanks!


71 dB said:


> I can use my own head to figure out the intentions of the artist. Good art encapsulates intent and meaning.



Yes, good art does encapsulate intent and meaning, but artists' ability to communicate ALL their intent and meaning is limited. Not even history's greatest artists were able to communicate all their intents, and not even the most expert analysts can uncover them all. You have no training or professional experience of creating artistic intent (in a commercial music product) but of course, you're an audiophile with a belief/agenda and therefore you can achieve what no other human can! How many times have we seen such nonsense assertions in this subforum: audiophiles who can easily hear sound well into the ultrasonic range, noise that's 1,000 times below the noise floor or a thousandth of a dB difference between cables? And, how many times have you yourself argued against such nonsense assertions? Sure, we can all figure out some of the intentions of the artist; if we couldn't, we wouldn't be able to differentiate music from semi-random noise. But you're deluding yourself if you believe you can figure out all the intents of the artists, as @pinnahertz has demonstrated to you!



bigshot said:


> [1] In the era prior to recording, the goal was to NOT have a specific artistic intent. The performer or conductor would INTERPRET the music.
> [2] On one night the music might sound one way and on another it might sound quite different, because that's how the musicians felt it.
> [3] Recording brought in the desire among collectors to own the "one and true" version of a piece of music. Chasing after that is a fool's errand. There are "good" versions and "bad" versions, but if there is only one "true" version, that means the music is as dead as a doornail. I look for energy and expression in a performance and fidelity and balance in a recording.
> [4] Personally, I think it would be better if rock artists did multiple live versions of their albums.



1. We have relatively little evidence of performance styles prior to the recording era. What we do know/deduce is that interpretation was generally more free/variable than it is today, but certainly there has been a goal of specific artistic intent going back centuries, though almost never only specific artistic intent; it was virtually always accompanied by other artistic intents that were less specific.

2. That's not necessarily true, in fact in some circumstances it would be impossible. With an orchestra for example we've got 60+ musicians, all potentially "feeling it" somewhat differently, this is why orchestras require a conductor in the first place. The music might sound slightly different on another night but can't change by much (without further detailed rehearsal) otherwise the end result is likely to be a complete mess. This isn't the case with genres such as jazz, which is structured and organised differently but then jazz didn't exist before the era of recording.

3. Certainly recording has restricted the variability of interpretation. A radical interpretation is going to seem particularly shocking to an audience accustomed to recordings of more traditional interpretations. There's still space for somewhat different interpretations though; if there weren't, then classical music would be dead as a doornail and conductors could all be replaced with robotic arms.

4. Oh dear, I don't. In fact most rock artists simply can't do live versions of their albums; it wasn't/isn't humanly possible as they're reliant on studio techniques that couldn't/can't be applied in real time. Therefore a live performance might be better in some respects but worse in others. My preference is generally for both live and studio versions, although it depends on the exact nature (construction/arrangement/etc.) of the rock song/track.

G


----------



## 71 dB

pinnahertz said:


> Assuming you found the original all-electronic version as on the Hotel Luna CD, the raindrops she refers to are musical representations of raindrops, not actual drops of water.
> Ah HA!  So...you CAN'T use your head to figure out the artist's intentions!  You just disproved #1 without any help from me.



I haven't even tried to figure out her intentions. I listened to the track + the other track to know what you are talking about. I commented that for me level -3 dB crossfeed works well, whatever her intentions are. I thought she had mixed recordings of real rain into the track, but that wasn't the case. I did not listen to the track on speakers so I don't know how her synthetic raindrops sound on speakers.


----------



## 71 dB

gregorio said:


> You seem to have the bizarre notion that the process of "producing music" is some engineer in a studio who doesn't understand what he's doing, throwing together a mix on speakers in a few hours and overlooking "large" things. While workflows can vary considerably, your "notion" is partially correct, the initial phase of mixing IS typically the creation of a (rough) mix in an hour or two but what you are ignoring (or ignorant of) is the rest of the mixing process, which is the vast majority! After this initial phase the producer is brought in, for a number of subsequent phases that typically take anywhere from a few days to a month. By the end of the mixing process there will have been a number of (rough) mixes created, each more refined than the last, listened to and analysed in minute detail by a number of people (the engineer, producer and typically one or more of the musicians/artists) on a variety of playback equipment. By the time the final mix is achieved, the engineer and producer will have heard the track hundreds of times and EVERY element and process in the mix will have been tweaked to an accuracy of half a dB or so. Then of course it's off to another studio and another engineer (the mastering studio and engineer) for further analysis and tweaking at an even more minute level. In all of this lengthy process, performed by a number of highly trained/experienced professionals, much of which is near, at and even BEYOND the limits of human audibility, the notion that they've all "overlooked" a "problem" that is ABSOLUTELY MASSIVE compared to every other aspect of the mix/master is just pure nonsense! I'm not saying it's impossible that a large ILD has been overlooked but certainly as a general rule, if a mix has a "large ILD" the vast majority of the time it is NOT because it's been "overlooked", it's there because it's intended to be there.
> 
> G



I have never said music is mixed within a few hours. For me it takes COUNTLESS hours to mix my own music. Just _one track_ can take hours, and if you have say 30-40 tracks in your song, it easily takes 100 hours to mix the whole thing! Most of the time I mix using 1 dB accuracy, but there are certain sounds that require 0.5 dB accuracy. I think this accuracy is good enough, because I am not a professional mixer; I do this as a hobby.

Also, how music is produced in 2020 is VERY different from, say, 1980! Nowadays attention is actually paid to ILD. Not so much in 1980. The tools are so different. Mixing Dua Lipa more or less correctly ILD-wise* today doesn't fix Genesis or Kansas tracks from the 70's. Less known/commercially successful artists don't have the luxury of having all the steps you lay out here. Classical music is imo rarely headphone-ready as it is, no matter how new the production.

* It's a balance between "as wide and spacious" sound as possible and headphone suitability, so that the ILD levels are a notch above the optimal levels. It means listening to these modern pop tracks without crossfeed is "ok", but using weak crossfeed can in my opinion improve headphone spatiality a little bit.
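As an aside, the ILD "levels" discussed here can be crudely quantified. The helper below is an illustrative single-number summary of my own devising (real ILD is frequency- and source-dependent, and this is not any standard metric):

```python
import numpy as np

def ild_db(left, right, eps=1e-12):
    """Broadband interaural level difference of a stereo signal,
    expressed as the RMS level of left over right in dB.
    A crude single-number summary for illustration only."""
    rms = lambda x: np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))
    # eps guards against log(0) for a silent channel
    return 20.0 * np.log10((rms(left) + eps) / (rms(right) + eps))
```

On a hard-panned pair (one silent channel) this blows up to a huge value, a mono signal gives 0 dB, and applying crossfeed moves the number toward zero.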


----------



## 71 dB (Jan 24, 2020)

gregorio said:


> Two decades, you're joking? When I started studying sound engineering, it took me about 2 minutes, as it does pretty much every new music engineering student! You have previously admitted you have no formal training, no professional experience and know next to nothing about commercial music production but here you are making grand sweeping (FALSE) assertions about those of us who do it professionally for a living. What an excellent example of TYPICAL audiophile nonsense: Make-up an assertion to justify a belief/agenda and then defend it to the death, regardless of the fact that it contradicts the actual facts and that it requires that those who do it for a living, with years of formal training/experience (and all of the education systems which provided that training), are ALL more ignorant than an audiophile who admits near total ignorance. If that's not delusional, I don't know what is!
> 
> G



When I learned about spatial hearing in the university, the context was how our hearing decodes the directions of sound sources. The context assumed sounds to have correct spatial cues (real physical sound sources) and covered how our hearing decodes those correct cues. I had zero reason to think about what all of this means in regard to headphone listening. I was into speakers. The context was not spatial cues in music production. At the time I had TONS of other stuff to figure out in my university studies, such as Einstein's relativity, quantum physics and analysis of electronic circuits. Things YOU perhaps were not occupied by when you understood whatever you understood in 2 minutes! Also, your studies probably presented these things in a slightly different context, making it easier for you to figure some things out. I have been thinking and figuring out a lot of things over the years, but headphone spatiality never came up until 2012, when "the next thing on my list of things to think about" happened to be headphone spatiality, and when that happened it didn't take me 2 minutes to realize the problem of excessive ILD. It took me maybe 3 seconds! So, I was 2 decades "late" just because headphone spatiality happened to be so damn far down on my list of things to think about.

I may not have the exact same training you have, but I have a university degree. So please don't try to imply I am an idiot who knows nothing about anything!


----------



## bigshot (Jan 24, 2020)

gregorio said:


> We have relatively little evidence of performance styles prior to the recording era.



Sure we do. In the 19th century a lot was written on the styles and performance habits of conductors. It was the era of the romantic superstar conductor (even more so than Karajan!). Newspapers would run reviews, complete with timings for movements and notes on whether repeats were taken or not. I find pre-recording-era performance styles to be very interesting. The same conductor could take a radically different approach to the same work at two different times. The closest thing we have to that in the modern era is Stokowski.

There was one 19th century conductor who led the orchestra from an overstuffed chair surrounded by silk handkerchiefs and bottles of perfume. He would sometimes swoon at the ends of performances. I wish I could remember his name. He was a real character. The way conductors were perceived, and their purpose, was much different in the late 19th century than either before or after that time. It was the golden age of the cult of the maestro. Instrumentalists too. Franz Liszt was like a rock star, with women throwing themselves at him. The more traditional Kapellmeisters existed then too, but they weren't the stars. Today the Kapellmeisters are the stars.


----------



## pinnahertz

71 dB said:


> *I haven't even tried to figure out her intentions.* I listened to the track + the other track to know what you are talking about. I commented that *for me level -3 dB crossfeed works well, whatever *(regardless of what)* her intentions are.*


Uh-huh.  So the artist intentions are not important to you.  Got it.


----------



## FlavioWolff

(subjective insight): put early Black Sabbath material on the "better with crossfeed" list.


----------



## pinnahertz

71 dB said:


> I have never said music is mixed within a few hours. For me it takes COUNTLESS hours to mix my own music. Just _one track_ can take hours, and if you have say 30-40 tracks in your song, it easily takes 100 hours to mix the whole thing! Most of the time I mix to 1 dB accuracy, but there are certain sounds that require 0.5 dB accuracy. I think this accuracy is good enough, because _*I am not a professional mixer; I do this as a hobby.*_
> 
> Also, how music is produced in 2020 is VERY different from, say, 1980! Nowadays attention is actually paid to ILD. Not so much in 1980.


Um...how exactly would you know this?  (answer: you wouldn't). 


71 dB said:


> The tools are so different. Mixing Dua Lipa more or less correctly ILD-wise* today doesn't fix Genesis or Kansas tracks from the 70's.


"Fixing" implies it's "broken", which they aren't.  Would you want to "fix" a Monet because today's paints encompass a different color gamut?  Just because you don't like something doesn't mean it's broken and should be fixed for everyone.


71 dB said:


> Less known/commercially successful artists don't have the luxury of all the steps you lay out here. Classical music is _*imo*_ rarely headphone-ready as it is, no matter how new the productions we are talking about are.


Thank you for clarifying that.  IMO, having recorded classical music, and listened on headphones, it's mostly just fine as is. 


71 dB said:


> * It's a balance between sound that is "as wide and spacious" as possible and headphone suitability, so the ILD levels end up a notch above the optimal levels. It means listening to these modern pop tracks without crossfeed is "ok", but using weak crossfeed can *in my opinion* improve headphone spatiality a little bit.


There.  Clarified the confusion between the absolute and the subjective opinion.  "...opinions are like ********. Everyone's got one..." (Harry Callahan) (google the rest of it).

I've got major déjà vu here.  I could swear this dirt road has been traveled before, likely in this or a similar thread.


----------



## pinnahertz

71 dB said:


> <snip>
> I may not have the exact same training you have, but I have an university degree. So please don't try to imply I am an idiot not knowing anything about anything!


Not going to bother addressing the rest of your post.  

Having a university degree means you have a university degree.  It doesn't prove knowledge or intelligence; it proves you met certain requirements.  It also doesn't directly compare with others who have decades in the business, the art of audio, or research.  Apples vs. oranges.

Nobody is accusing anybody of being an idiot.  The one issue here is the strong and committed statement of opinion implying it is fact over preference, and universally applicable.


----------



## bigshot

pinnahertz said:


> Not going to bother addressing the rest of your post.



Welcome to the club! Have a beer!


----------



## pinnahertz

bigshot said:


> Welcome to the club! Have a beer!


Thanks!  If you don't mind, I'll do something with a bit more kick and fewer carbs...while I listen to some widely separated 70's rock on my headphones.


----------



## 71 dB

I'm certain artists are much more interested in things like spectral balance and balance of tracks than ILD. Those things are much more consistent between speakers and headphones.


----------



## 71 dB

Decades in "business" gives you knowledge, but you can learn some knowledge by yourself. That's good, because 100 % of people can't work in "business". Only a handful of people can. Other people are needed elsewhere. Driving buses etc. I have watched quite a lot of Youtube videos about mixing and the overall "message" in those videos it to keep ILD at low frequences small. Dr. Luke says "make bass mono" etc. If I am wrong so are THEY! People with serious BIG hits! If that's being wrong who wants to be right?


----------



## 71 dB

The spatial presentation of music depends on the system that is used. Different room. Different speakers. Different placement. Even a different reconstruction filter affects the spatiality. The notion of artistic intent is lunacy. Whose speaker system really respects the intent? Can any headphone system respect the intent if the artist constructed his/her intent for speakers? Can any artist expect people to hear their music JUST the correct way? George Lucas created THX to ensure the sound people hear in movie theatres is what he wanted people to hear, but that's movie theatres. People's homes are so different, and people use both speakers and headphones.


----------



## 71 dB (Jan 24, 2020)

71 dB said:


> The spatial presentation of music depends on the system that is used. Different room. Different speakers. Different placement. Even a different reconstruction filter affects the spatiality. The notion of artistic intent is lunacy. Whose speaker system really respects the intent? Can any headphone system respect the intent if the artist constructed his/her intent for speakers? Can any artist expect people to hear their music JUST the correct way? George Lucas created THX to ensure the sound people hear in movie theatres is what he wanted people to hear, but that's movie theatres. People's homes are so different, and people use both speakers and headphones.



I need to calm down again. I know stuff!!! I do!!!! People saying otherwise are mean!!! They don't know how I feel, what I understand. They don't know I can know!! All these years, studies, hobbies etc. CAN'T mean zero knowledge! That is impossible!! My knowledge is nonzero!!! These people here are trying to make themselves look wiser than they are while trying to take me down!! THEY FEAR ME!!! That's it!!! HAHAA!!

My target has been not to get triggered online. So I really need to get my crap together. What happened here? Why did things get out of hand? What mistake did I make here? Was it my mistake or someone else's? Why are people here so much against what I say? Why are people so triggered by what I say?

I don't have good arguments to not use crossfeed. So I don't say that. I have good arguments to use crossfeed, so I say it. That's what I always do, no matter the subject. People in the "business" have not convinced me to change my opinions. From my perspective they hide behind their "business" to gain authority. That's their weakness, because they can be wrong without realizing it. Since I am not in the "business" I have to be careful about my understanding and knowledge and be critical. I am in constant self-doubt because of that, but so far I haven't seen reasons to drop crossfeed or change my views about it. Some people even agree with me!! That's something to keep in mind.


----------



## bigshot (Jan 24, 2020)

71dB, I think it's time for you to take a break from the keyboard and go outside and get some sunshine again. This isn't being done to you. You are doing it to yourself.


----------



## FlavioWolff

71db and Pinnahertz have been arguing since September 2017. That's almost two and a half years. Consider that the average life expectancy in wealthy countries is about 90 years. That's a considerable fraction of your lives.


----------



## BobG55 (Jan 24, 2020)

I can picture them a few decades from now still arguing from their respective Nursing Homes.


----------



## Hifiearspeakers

71 dB said:


> I need to calm down again. I know stuff!!! I do!!!! People saying otherwise are mean!!! They don't know how I feel, what I understand. They don't know I can know!! All these years, studies, hobbies etc. CAN'T mean zero knowledge! That is impossible!! My knowledge is nonzero!!! These people here are trying to make themselves look wiser than they are while trying to take me down!! THEY FEAR ME!!! That's it!!! HAHAA!!
> 
> My target has been not to get triggered online. So I really need to get my crap together. What happened here? Why did things get out of hand? What mistake did I make here? Was it my mistake or someone else's? Why are people here so much against what I say? Why are people so triggered by what I say?
> 
> I don't have good arguments to not use crossfeed. So I don't say that. I have good arguments to use crossfeed, so I say it. That's what I always do, no matter the subject. People in the "business" have not convinced me to change my opinions. From my perspective they hide behind their "business" to gain authority. That's their weakness, because they can be wrong without realizing it. Since I am not in the "business" I have to be careful about my understanding and knowledge and be critical. I am in constant self-doubt because of that, but so far I haven't seen reasons to drop crossfeed or change my views about it. Some people even agree with me!! That's something to keep in mind.



Dude, you just replied to yourself. As crazy as this sounds, I agree with Bigshot. You need to step away and find something else in life that brings you joy. That thing in life that we call balance, is far from you right now. This forum isn’t worth you having a mental breakdown.


----------



## 71 dB

bigshot said:


> 71dB, I think it's time for you to take a break from the keyboard and go outside and get some sunshine again. This isn't being done to you. You are doing it to yourself.



There's not much sunshine available in Finland in January, but I agree. I need a break from here to do something completely different…


----------



## pinnahertz

FlavioWolff said:


> 71db and Pinnahertz are arguing since September 2017. It’s almost two and a half years. Consider that the average life expectancy in wealthy countries is about 90 years. That’s a considerable fraction of your lives


Better check that math again.  I was away from Head-fi for about a year and a half, largely because of nonsense like this.  And we never argued continuously, 24 hours a day. In fact, I can't speak for 71dB, but I suspect we both have large and full lives outside of this tiny world. I know I do, and I'm very well focused on that reality.

Thanks for your concern, but the real fraction of our lives this has taken up is actually very, very tiny indeed.


----------



## pinnahertz

BobG55 said:


> I can picture them a few decades from now still arguing from their respective Nursing Homes.


I honestly think that none of this would have happened face to face.  If we share the same nursing home we'll probably be great friends.


----------



## BobG55

pinnahertz said:


> I honestly think that none of this would have happened face to face.  If we share the same nursing home we'll probably be great friends.



It was only a bit of humour on my part.  I also wrote “from their respective nursing home” meaning 71dB from Finland & you from the U.S. (I assume) so you wouldn’t be sharing the same nursing home.  But I agree, you’d most likely become friends.  Life is short.


----------



## 71 dB

Is the correct answer to the question posed in the title of this thread something like this?

_*Use crossfeed if it improves sound for you, but the ONLY justification to use crossfeed is it improves sound for you, because:*_

_*1) As crossfeed addresses merely ILD and pretty much ignores all other aspects of spatial hearing it can't be justified scientifically.*_

_*2) Crossfeed may work against artistic intent so it can't be justified from artistic point of view. *_​
If this is the case, then all I can say is that crossfeed works INSANELY well for me for something with such nonexistent justification!  So well, that for years I was fooled into thinking solid scientific justification exists!

This raises the question of the role of scientific justification in audio. When does scientific justification matter and when does it not? Up to this point science seems to have agreed with my ears, but can I trust this to be true from now on? I fear this means a personal crisis if my fundamental belief system is questioned like this.


----------



## gregorio

71 dB said:


> [1] When I learned about spatial hearing in the university the context was how our hearing decodes the directions of sound sources. The context assumed sounds to have correct spatial cues (real physical sound sources) and how our hearing decodes those correct cues. ... *The context was not spatial cues in music production* ...
> [2] I may not have the exact same training you have, but I have an university degree. [2a] So please don't try to imply I am an idiot not knowing anything about anything!



1. Exactly, that's been a large part of the problem from the very beginning and still continues to be, despite it being clearly explained to you numerous times! You are applying your knowledge of the spatial cues of "real physical sound sources" to a completely different context (music production) that has relatively little to do with "real physical sound sources". In other words, you are MIS-APPLYING your knowledge and therefore, many of your assumptions/conclusions are incorrect/invalid!!

2. You're joking right? Let's say you need brain surgery, do you go to someone who: A. Has a degree in theoretical physics, or B. Has a degree in medicine and is a GP (general practitioner) or C. Is a practising neurosurgeon with several/many years experience? Which of these do you think knows more about brain surgery? The theoretical physicist probably knows a great deal, is highly intelligent and certainly not an idiot but OBVIOUSLY, a degree in theoretical physics doesn't confer or imply any knowledge whatsoever of neurosurgery. The GP certainly knows more about brain surgery than the average person and more than the physicist but without the specialist knowledge, training and practical experience, is still likely to kill you, which is why GPs aren't allowed to perform brain surgery! How do you not know this? Therefore:
2a. I am certainly NOT implying that you do "not know anything about anything". What I AM stating is that you know very little specifically about commercial music production and that's because: A. It's blatantly obvious to anyone who does know about commercial music production and B. You've effectively admitted it yourself, with various statements like the bolded one quoted in the previous point. So, I don't need to imply you are an idiot, you're doing that all by yourself!



71 dB said:


> [1] I'm certain artists are much more interested in things like spectral balance and balance of tracks than ILD.
> [2] Decades in "business" gives you knowledge, but you can learn some knowledge by yourself.
> [2a] That's good, because 100 % of people can't work in "business". Only a handful of people can.
> [3] I have watched quite a lot of Youtube videos about mixing and
> ...



1. Why are you "certain"? Do you have evidence, have you actually asked the artists or have you just made-up that assumption/assertion?

2. Of course "you can learn some knowledge by yourself". I could, for example, "learn some knowledge" about brain surgery. In fact, with today's internet, I could probably learn quite a lot of knowledge but without training and practical experience I'd be extremely likely to misinterpret and/or mis-apply that knowledge, plus, I'd have to be a delusional idiot to repeatedly argue with an actual practising neurosurgeon. And, even more of an idiot to state the neurosurgeon is wrong because they're "hiding behind their business to gain authority"!!
2a. And what evidence are you basing that assertion on? There are currently many tens of thousands (maybe 100,000 or so) of professional music/sound engineers worldwide and, over the course of the last 50+ years, many hundreds of thousands, but they/we are all wrong and you, a hobbyist, are right.

3. And again, would you expect someone with no training or practical experience in brain surgery to know more than an actual neurosurgeon on the basis that they'd watched "quite a lot of Youtube videos" on the subject?
3a. Great example of self-contradiction! You stated "_it's not surprising for me if the whole thing has always been more or less overlooked in music production_" but now you're saying the "overall message" of the videos you've seen by a professional music producer was precisely the opposite!

4. Dr Luke is not a "THEY", he's a "he". He's perfectly entitled to his opinion and to produce music how he wants and other professional producers are entitled to theirs. As a general rule, I would agree with his advice to "make the bass mono" but the point you consistently ignore is that music production is an art and therefore there are some/many situations where a "general rule" is (or can be) inapplicable. Furthermore, your use (and therefore implied understanding) of "right" and "wrong" is clearly incorrect in the context of art/music production!

5. Which is why the creation of film sound does not have a mastering phase, while the creation of a music recording for home/consumer reproduction does. Don't you even know the basic purpose of mastering?



71 dB said:


> [1] I don't have good arguments to not use crossfeed.
> [1a] So I don't say that. I have good arguments to use crossfeed so I say it.
> [2] People in the "business" have not convinced me to change my opinions.
> [2a] From my perspective they hide behind their "business" to gain authority.
> [3] All these years, studies, hobbies etc. CAN'T mean zero knowledge! That is impossible!! My knowledge is nonzero!!!



1. How is that possible, considering several good arguments have been presented in this thread? The obvious answer would appear to be that you "don't have good arguments to not use crossfeed" because you are deliberately ignoring them!
1a. So, why don't you say that? Why do you "say" only the "arguments" (facts/science) you've decided not to ignore (clearly on the basis that they agree with, rather than contradict, your belief/agenda)? Is this the Sound Science forum or the "Bits of sound science that 71dB has decided not to ignore" forum?

2. People in the business (or at least this one) are not trying to convince you to change your opinions. It's near impossible to change the opinion of someone with an agenda. All I'm doing is refuting your false assertions, as I've told you numerous times previously.
2a. Of course they do, because anyone who disagrees with your agenda has to be wrong and hiding behind something to gain authority. What are you hiding behind to gain authority? Being a hobbyist, watching "quite a lot of Youtube videos" and self admittedly never having formally studied music production or even personally witnessed it! 

3. In EFFECT it can indeed "mean zero knowledge"! What's the practical difference between someone with zero knowledge and someone with quite a lot of knowledge who applies it out of context/incorrectly??

And again, round and round the same old circle with exactly the same result! What's the famous cliche attributed to Einstein about repeating exactly the same actions and expecting a different result?

G


----------



## Darksoul

OK...I want to crossfeed. What's the best thing I can use on Foobar?


----------



## 71 dB

gregorio said:


> 1. Exactly, that's been a large part of the problem from the very beginning and still continues to be, despite it being clearly explained to you numerous times! You are applying your knowledge of the spatial cues of "real physical sound sources" to a completely different context (music production) that has relatively little to do with "real physical sound sources". In other words, you are MIS-APPLYING your knowledge and therefore, many of your assumptions/conclusions are incorrect/invalid!!
> 
> G



Music is perhaps a different context, but there are limits to the difference. Music must be sound waves in the air, just like other sounds. In fact, a criterion for "high quality music reproduction" is that it sounds like the "real thing". I believe that because of these limits in difference, my knowledge is at least partially applicable to music production. Knowledge overlaps. What sound engineers and acoustic engineers know partially overlaps. It's not like sound engineers are taught completely different things than, say, acoustic engineers. Where have the "rules" of music production come from? At least partially from acoustic engineers, and that's why the knowledge overlaps. Music production must have learned a lot from acoustics about what makes sense and what doesn't. If my knowledge applies only partially, it means I am partially correct, not completely incorrect. It's also possible even you have made claims outside the area of your expertise. Is headphone listening part of music production knowledge, for example? At what point in music consumption does YOUR knowledge stop being valid? If my knowledge has its limits, so does yours.


----------



## bfreedma

Darksoul said:


> OK...I want to crossfeed. What's the best thing I can use on Foobar?



Both of the plugins at this link work well, with different features - try both and see which you prefer

https://www.foobar2000.org/components/tag/crossfeed


----------



## gregorio

bigshot said:


> Sure we do. In the 19th century a lot was written on the styles and performance habits of conductors. It was the era of the romantic superstar conductor (even more so than Karajan!). Newspapers would run reviews, complete with timings for movements and notes on whether repeats were taken or not. I find pre-recording-era performance styles to be very interesting.



No bigshot, we don't! There's two points here:

1. There was a fair amount written in the C19th, particularly in the form of newspaper reviews. However, think about that for a minute ... A newspaper review is typically half a page, rarely more than 1 page and is not intended to be a detailed analysis but the relatively brief personal impression of a critic. What we actually have is therefore a lot of vague, broad descriptions, that often contradict each other. So yes, we might have some information about the timing for movements for example but relatively little about how that timing was achieved, the timing and other performance characteristics of the sections and individual phrases within those movements. We can make some deductions about some of the precise performance details/styles but they're little more than guesses. Also of course, the "pre-recording era" isn't just defined by the era immediately prior to the recording era (the late Romantic Period), the baroque and classical periods are also "pre-recording era" and we have considerably less knowledge of the performance styles of these periods.

2. There has been a gradual reduction of interpretation freedom throughout the history of classical music performance, regardless of the introduction of recording. In the baroque era for example, it was common practice to completely change the orchestration (which instrument played which parts) and the actual notes themselves. In fact, not all the notes were even notated/written down to start with, notation was in the form of "figured bass": Bass notes with numerals/symbols underneath which implied the other notes of the chords but exactly what notes were played, where, when, how and on what instrument, was largely a matter of interpretation. If you go and buy a score for a piece by Bach, you're actually buying a fully notated interpretation by some scholar who probably never even met Bach. By the classical period, this was not the case. All the notes were explicitly notated for specific instruments and the musicians were expected NOT to change them. The exception was "Cadenzas", which were section/s within a concerto that were not notated, where the soloist performed unaccompanied and could play pretty much whatever they wanted. These cadenzas could last anywhere from about 20 secs to about 10 mins and we have reliable anecdotal evidence that soloists sometimes went off on a complete tangent, performing cadenzas that appeared to be completely unrelated to the rest of the concerto. By the romantic era, this freedom was largely a thing of the past, cadenzas were fully notated, although the soloists still had quite a lot of freedom in how they interpreted the cadenzas. Throughout the C17th, C18th, C19th and well into the C20th, notation and musical "markings" gradually became more precise/exacting and the performers' freedom of interpretation thereby reduced. However even in the baroque era, certain musical intents were extremely specific!

From the above two points: We have relatively little evidence of performance styles. Although there are some exceptions, even going back to the baroque era, mostly we just have rather vague generalisations based on educated guesswork, rather than actual specific details. And, although there's no doubt that recording has reduced interpretation freedom, it's not entirely clear how much of the reduced freedom is due to recording and how much would have occurred anyway, as part of the ongoing evolution of music performance (if recording had not been invented).



bigshot said:


> The way conductors were perceived and their purpose was much different in the late 19th century than either before or after that time. It was the golden age of the cult of the maestro. Instrumentalists too. Franz Liszt was like a rock star with women throwing themselves at him.



I'm not sure I can agree with that. It's hard to argue that any late C19th conductor had anywhere near the power, influence or income of say Karajan.
Same with instrumentalists/soloists. It's hard to argue against Farinelli, a C18th star so massive that the other top stars (and even ruling monarchs) begged for audiences. Handel tried for years, Mozart, Casanova and others travelled for days just to spend half an hour with him. He died fabulously wealthy, loaded with honours from different countries, still a household name throughout Europe decades after his retirement and still cited as the greatest operatic singer of all time even a century after his death. Arguably a better example than Liszt, would be Paganini (early C19th), who really defined and invented the blueprint of the "rock star" for all those who followed, including Liszt, who stated that he wanted to be as great a virtuoso on the piano as Paganini was on the violin. Reportedly (though almost certainly one of the numerous myths that sealed his reputation) upon his death a trunk was found amongst his possessions containing the knickers (underwear) of women who had thrown them at him during performances, 3,000 pairs! An outrageously extravagant lifestyle, an alcoholic by the time he was 16, a self-destructive gambler, a serial womanizer, a badly behaved convicted felon and seriously believed by many to have sold his soul to the devil (because his virtuosity was thought to be humanly impossible), Paganini rivalled or exceeded pretty much every actual "rock star" who followed! Not that Liszt didn't contribute to the "rock star" blueprint, he was wildly popular ("Lisztomania" for example) but far more of the "clean cut" rock star than the Faustian Paganini.

G


----------



## Darksoul (Jan 25, 2020)

bfreedma said:


> Both of the plugins in this work well with different features - try both and see which you prefer
> 
> https://www.foobar2000.org/components/tag/crossfeed



Hmmmmmm! I'm liking that Meier Crossfeed; it's the first time I actually hear a difference. I've tried several crossfeeds before but failed to pick up any difference. Now it's quite audible: it sends stuff farther away, so things don't sound so much "in your face." Bass tends to seem a bit lower...but it's not actually lower...as if it were farther away from the rest of everything? It sounds less congested. I take it there are more crossfeed filters, correct?

Found another one, Bauer stereophonic-to-binaural DSP:

http://bs2b.sourceforge.net/

It has some interesting theory in there. Allegedly, this is the type of crossfeed filter that is implemented in the RME ADI-2 DAC.


----------



## bigshot (Jan 25, 2020)

Obviously we don't have anything as specific as a recording from the pre-recording era. And baroque and classical eras had a million exceptions to the rule because music making was regional. But the performance practices of the romantic and late romantic era were pretty well documented... at least as well as you can document music with words.

I found that fun conductor I was talking about... https://en.wikipedia.org/wiki/Philippe_Musard

Some quotes...

His concerts were described in 1837 as "a musical paradise" in "a spacious hall furnished with mirrors, couches, ottomans, statues, fountains, and floral decorations, and at one end a café attended by a troupe of ‘perfumed waiters'".

Audiences attended his concerts not only for the music, but to see the man himself in the act of leading the orchestra, regardless of the music being performed. At climactic moments, Musard would dispose of his baton, throwing it into the audience, and rise to a standing position (standard practice at the time placed the conductor in a sitting position) to display his dominance over the happenings. Musard employed wild gestures including the hands, feet, arms, knees, and not the least grotesque facial expressions when leading. As a result, rumors circulated that he made a deal with the Devil, preceding Paganini's reputation.

He was not considered attractive physically, having acquired significant scarring from smallpox, a yellow complexion, and had an unkempt appearance, always dressing in a black suit that was not measured properly. A small man, dancers and concert audiences would lift him up and carry him on their shoulders around the concert hall at the conclusion. He became one of the top celebrities in Paris, to the point that effigies made of chocolate, marzipan, and gingerbread were made of his "grotesque" figure and sold and consumed in great quantities throughout Paris.

Musard's reputation was nothing short of international. Concerts in London were advertised as "_a la Musard_", as were those in the United States.

Someone should do an HIP concert a la Musard! It would be a lot of fun!

Here is a contemporary account...

_Sometimes he rolls his eyes like two inflamed balls; sometimes he gazes calmly from right to left and from left to right. His indefatigable bow marks every note, from the whole note to the sixteenth note, and seems to lead the sounds to the ears of listeners.

With his gaze, Musard attracts all that surrounds him; with his bow, he brings back the lost, contains the audacious, warns the distracted, rallies the laggards, and restrains the impetuous. In the adagio, in the andante, his face is uncut, his mouth is smiling, his attitude is full of dignity and contemplation well formed.

In the allegro, his eye throws lightning, his nerves are agitated, and his body realizes the chimera of perpetual motion. Then he no longer beats the measure, he strikes it with redoubled blows, feet, hands, elbows and knees. His foot causes the dust to fly in the air and throws powder in his eyes.

Sometimes he gets up, looks at the ceiling, measures the audience from the height of his majesty, scratches his head or holds his ribs; sometimes he sits down and passes his hand over his forehead, seat of so much genius, receptacle of so much harmony, warehouse of so much responsibility.

In certain moments, the tip of his bow hovers over the note until its agony, and helps it to die; in others, the bow seems to pick up the note on the floor and to return to the desk. It is a curious spectacle, I assure you, that that of Musard conducting his orchestra. We never tire of admiring it._


----------



## castleofargh

71 dB said:


> Is the correct answer to the question posed in the title of this thread something like this?
> 
> _*Use crossfeed if it improves sound for you, but the ONLY justification to use crossfeed is it improves sound for you, because:*_
> 
> ...


Science helps demonstrate facts. When those facts matter to a given listener is for him to decide. Because at the end of the day, listening to music is a personal activity and a subjective matter.

To be very clear, having a rational explanation for why you want to use crossfeed does not make that explanation into a scientific fact about why everybody should want crossfeed, or why it's an objective improvement. It's easy for perfectly rational and objective reasoning to lead to false conclusions. Almost everybody who is wrong about something (so all of us, several times per day) followed some reasoning that made sense to them based on what they knew (including their biases) and what conclusions they were hoping for. Start with a false axiom, ignore relevant variables, or jump to conclusions (you've done all three for crossfeed), and chances are that you'll end up with something erroneous no matter how seriously you do everything else.
In this topic you have avoided a scientific approach as often as necessary to keep your claims alive. Always looking for what agrees with you, always ignoring or finding reasons to downplay the influence of the missing variables of crossfeed (based on a speaker model). Always finding excuses not to mind the errors between crossfeed EQ and a listener's own Head Related EQ (again, speaker model). Often copy-pasting knowledge and conclusions from speakers, rooms and listeners in them, right onto crossfeed as if crossfeed were objectively the same system. And that despite how you completely agree that crossfeed is its own thing and not the same as speaker playback. And of course all the times when you're making arbitrary decisions about what matters objectively, based on your own subjective impressions. If all that is science, I'm the singer from Iron Maiden.


If I had to bet on why all this is happening here and nowhere else with you, I'd go with the sunk cost fallacy. You've invested yourself so much into crossfeed that you've lost all objectivity about it and instead you just hold on to that completely unnecessary need to justify it. Why does it have to be more than something you enjoy using?


----------



## gregorio

bigshot said:


> [1] Obviously we don't have anything as specific as a recording from the pre-recording era.
> [2] And baroque and classical eras had a million exceptions to the rule because music making was regional.
> [3] But the performance practices of the romantic and late romantic era were pretty well documented... [3a] at least as well as you can document music with words.
> [4] I found that fun conductor I was talking about... https://en.wikipedia.org/wiki/Philippe_Musard



1. Yes, obviously but:

2. That's not really true, bigshot. Again, a century before the late romantic period, Farinelli had toured Europe and his fame was such that some would travel from countries he didn't visit in order to hear him; Handel, for example, first heard Farinelli in Venice. In this way, performance style, technical ability, interpretation, etc., were disseminated throughout Europe, though not as quickly as the recording era allowed of course. And Farinelli certainly wasn't the only artist in the baroque and classical periods to do this; it was pretty standard practice, and also standard practice for young talented composers to travel internationally and study with established/famous composers. For example, by the time he was 18, Mozart had already spent more than a decade on non-stop tour throughout Europe as a child prodigy and studied composition in London with JC Bach, and again, this was all before either Paganini or Musard were even born! In fact, even during the renaissance period, 400 years before the romantic period, it was standard practice for ruling monarchs to take their court composers and musicians with them on international campaigns/tours/royal visits, where they heard and interacted with the local composers and musicians. This of course is why the "language" of musical markings was largely standardised in Italian, rather than regional languages. Ironically (to your argument), it wasn't until the romantic period and the rise of musical nationalism that some composers started to exclusively use their native language for musical markings, Wagner being an obvious example. There were of course regional differences in performance styles, interpretation, etc. during the baroque and classical periods but then, there are still a few regional differences even today, long after the beginning of the recording era.

3. Again bigshot, that's not really true. Your quote is an excellent example because it's entirely typical. It uses flowery language to describe that Musard dramatised the role of conductor but it gives us almost no specific detail about how that affected the musical interpretation. For example: "_Then he no longer beats the measure, he strikes it with redoubled blows, feet, hands, elbows and knees. His foot causes the dust to fly in the air and throws powder in his eyes._" - Yes, interesting but how exactly did that affect the musical interpretation? Exactly what differences in note production, phrasing, etc., did this cause? There is the occasional vague clue, for example: "_In certain moments, the tip of his bow hovers over the note until its agony, and helps it to die ..._" - This statement implies that Musard is willing to extend the duration of certain notes for dramatic impact, more so than other conductors of the time but again, it's still rather vague; which notes is he willing to extend this way, extend them by how much and "helps it to die" from what to what?
3a. This statement is obviously NOT true and is what invalidates point #3. For example, I'm sure you must be aware that it's entirely possible to have a detailed written analysis as opposed to just a brief flowery review, not least because I mentioned it in a previous post! In fact, we have an entire field within music that's effectively dedicated to exactly such detailed analysis: "Musicology". While musicology didn't emerge as a specifically named independent scholarly field until the late romantic period, it was a standard part of music education long before. Arguably, the single most important/influential piece of Musicology was written by Johann Fux, an entire book (published in 1725) of extremely detailed analysis of late renaissance counterpoint (specifically as defined by Palestrina). This certainly wasn't the only book of such detailed analysis at that time; there were many. In fact Bach had a personal library of them! But Fux's was the most influential, reaching the status almost of a "bible" for composers who followed: Haydn reportedly taught himself counterpoint by reading it, Mozart's copy is full of his personal annotations, and Beethoven and pretty much every classical music composer since has studied it in whole or in part. However, all these analyses deal primarily or exclusively with compositional tools/styles/structures; again, we have relatively little analysis of how performers interpreted and performed those compositions. It's only relatively recently (the late C20th) that previous performance styles/interpretations have been studied in earnest, with the formation of ensembles like the Taverner Consort and several others who try to recreate them, but the term for this type of performance itself highlights the problem: Originally called "Period/Authentic Performance" - This is really a marketing term with little scientific/scholarly justification; we simply don't know what performances of the pre-recording era sounded like.
For this reason, most now prefer the term "Historically Informed Performance" (HIP), although quite a few dispute even this term because it implies they're actually "informed" rather than largely based on educated opinion/guesswork!

4. Yes, I'm aware of Musard and there's no doubt that he revolutionised the role of the conductor, raising it from near anonymity to near the status of the "rock star" soloists. He popularised the conductor, turning the role into a sort of "Front Man". We can reasonably infer that he was a talented conductor, rather than ONLY a talented "Show Man", but even that is not entirely clear! It's really as the "Show Man" that he gained popularity and influence. He invented the "Promenade Concert" for the masses, but pretty much all of his performances were actually mostly dance music, with the occasional classical music excerpt, and accounts/reviews vary as to how good he was as an actual classical music conductor. A comparison with, say, Karajan is specious IMHO. While both were extremely influential, Karajan was an amazing character who wielded unprecedented power within the classical music world, and I'm not necessarily using the word "amazing" as a compliment. There's a lot about Karajan that wasn't widely known/publicised outside those who worked with him closely. In a sense though, you've kind of disproved your own assertion: you had to find "the fun conductor you were talking about"! I had to rack my brain despite years of formal classical music education and I'd guess that probably no one in this sub-forum has ever heard of Musard, not so with the names of Paganini or Liszt though ...

G


----------



## bigshot

Yes, most people have never heard about 19th century conductors. But there's lots of fascinating information on the internet.


----------



## 71 dB

castleofargh said:


> In this topic you have avoided a scientific approach as often as necessary to keep your claims alive. Always looking for what agrees with you, always ignoring or finding reasons to downplay the influence of the missing variables of crossfeed (based on a speaker model). Always finding excuses not to mind the errors between crossfeed EQ and a listener's own Head Related EQ (again, speaker model). Often copy-pasting knowledge and conclusions from speakers, rooms and listeners in them, right onto crossfeed as if crossfeed were objectively the same system. And that despite how you completely agree that crossfeed is its own thing and not the same as speaker playback. And of course all the times when you're making arbitrary decisions about what matters objectively, based on your own subjective impressions. If all that is science, I'm the singer from Iron Maiden.



Maybe in the beginning I let people think I believe crossfeed makes headphones sound EXACTLY like speakers, but I didn't mean for anyone to think that. No, simple crossfeed can't do that. It can however make ILD more similar to speakers, but nothing else is similar! Nothing!!! Just this one thing, ILD. I'm just somebody who thinks fixing this one thing is much better than doing nothing. That's because my spatial hearing is easily fooled, so that fixing this one thing fools it into thinking other things are perhaps fixed too, or that those other things don't matter to me. Somehow crossfeed just works for me. Doesn't matter if science explains it or not. It works.


----------



## bigshot

I think a lot of headphone users don't know what a good speaker system in a good room sounds like.


----------



## 71 dB

bigshot said:


> I think a lot of headphone users don't know what a good speaker system in a good room sounds like.



I think a lot of drivers don't know what driving a Ferrari feels like. The good news is they still get from A to B.


----------



## bigshot

But I don't think a lot of Kia drivers would insist that their car handles just like a Ferrari. Audiophiles are special.


----------



## 71 dB

bigshot said:


> But I don't think a lot of Kia drivers would insist that their car handles just like a Ferrari. Audiophiles are special.



I believe Kia owners are annoyed by filthy rich people bragging about their Ferraris… …the point here is that headphones don't give the soundstage you get with speakers in a room, but that doesn't mean headphone listening can't be enjoyable. It can be, just like a Kia can take you from A to B.


----------



## BobG55 (Jan 28, 2020)

Post deleted


----------



## bigshot

71 dB said:


> I believe Kia owners are annoyed by filthy rich people bragging about their Ferraris.



I drive a cheap car because I don't care. If someone else cares, it doesn't bother me a bit. I'm not naturally a jealous or vindictive person. We all work hard and spend our money on whatever we want. No one has the right to try to passive aggressively shame us for choosing cars, stereo equipment or limited edition plates with clowns on them. But you can't eat dinner off a Ferrari or drive your stereo to work.


----------



## 71 dB

bigshot said:


> I drive a cheap car because I don't care. If someone else cares, it doesn't bother me a bit. I'm not naturally a jealous or vindictive person. We all work hard and spend our money on whatever we want. No one has the right to try to passive aggressively shame us for choosing cars, stereo equipment or limited edition plates with clowns on them. But you can't eat dinner off a Ferrari or drive your stereo to work.



I don't own a car at all. Helsinki has great public transportation. I mostly use headphones with crossfeed and it fools my spatial hearing so that I experience a "miniature" soundstage. Occasionally I listen to speakers; for example, today I listened to *Badly Drawn Boy*'s Mercury Music Prize winning album "_The Hour of Bewilderbeast_" and *Logh*'s album "_North_" on speakers. I have never gotten into Badly Drawn Boy, but I like Logh (a Swedish soft rock band) quite a lot. Powerful and beautiful music no matter how well the soundstage is rendered.


----------



## bigshot

Take a bus! Go to the movies to listen to sound! Listen to better music!


----------



## 71 dB

bigshot said:


> Take a bus! Go to the movies to listen to sound! Listen to better music!



Joon-ho Bong's *Parasite* (*Gisaengchung*) is coming to theatres in Finland tomorrow and I am planning to see it at some point. For that, I need to take the Metro, not a bus. By the way, the Helsinki Metro is the northernmost metro in the world.

Those CDs I mentioned aren't the best music I listen to* and it's not _your_ business what is better music for me. You have your taste. I have my taste. Sometimes I want to listen to silly bubblegum pop. Sometimes I want to listen to Weinberg's String Quartets. Sometimes I want to listen to Clifford Brown/Max Roach. Sometimes I want to listen to some New Age music. To me it's meaningless to ask if the music is "good" or "bad" to other people. All that matters is whether the music works in the moment for me. If it works, it's in that sense good music, no matter what some besserwisser music scholars say, and I'm not telling other people what to listen to. That's their business.

* The Logh did work in my state of mind though...


----------



## bigshot

Just to let you know in case you weren't aware of it... you've pretty much drifted to the point where you're just talking to yourself now.


----------



## 71 dB

bigshot said:


> Just to let you know in case you weren't aware of it... you've pretty much drifted to the point where you're just talking to yourself now.



Funny, I have a feeling you have drifted to the point where you're just talking to me.  Thanks anyway…

I wonder what people here (including you) want me to say? Whether I try to justify crossfeed using science or talk about public transportation in Helsinki, people don't seem to like it. I don't even understand why I need the approval of other people...


----------



## mindbomb

castleofargh said:


> Science helps demonstrate facts. When those facts matter to a given listener is for him to decide. Because at the end of the day, listening to music is a personal activity and a subjective matter.
> 
> To be very clear, having a rational explanation for why you want to use crossfeed does not make that explanation into a scientific fact about why everybody should want crossfeed, or why it's an objective improvement. It's easy for perfectly rational and objective reasoning to lead to false conclusions. Almost everybody who is wrong about something (so all of us, several times per day) followed some reasoning that made sense to them based on what they knew (including their biases) and what conclusions they were hoping for. Start with a false axiom, ignore relevant variables, or jump to conclusions (you've done all three for crossfeed), and chances are that you'll end up with something erroneous no matter how seriously you do everything else.
> In this topic you have avoided a scientific approach as often as necessary to keep your claims alive. Always looking for what agrees with you, always ignoring or finding reasons to downplay the influence of the missing variables of crossfeed (based on a speaker model). Always finding excuses not to mind the errors between crossfeed EQ and a listener's own Head Related EQ (again, speaker model). Often copy-pasting knowledge and conclusions from speakers, rooms and listeners in them, right onto crossfeed as if crossfeed were objectively the same system. And that despite how you completely agree that crossfeed is its own thing and not the same as speaker playback. And of course all the times when you're making arbitrary decisions about what matters objectively, based on your own subjective impressions. If all that is science, I'm the singer from Iron Maiden.
> ...



This criticism itself seems to be steeped in the nirvana fallacy. A solution doesn't have to be perfect for it to be advocated, just better than the status quo.


----------



## castleofargh

mindbomb said:


> This criticism itself seems to be steeped in the nirvana fallacy. A solution doesn't have to be perfect for it to be advocated, just better than the status quo.


I don't think it is a nirvana fallacy, unless you consider that false is almost right. I'm not asking for perfection but saying that false assumptions, cherry picking and jumping to conclusions are not part of the scientific method.

You do bring up the true matter of this thread though. Has it been determined anywhere that crossfeed is better than the status quo? If you ask @71 dB, it sure has been. But what about the rest of the planet? If you go ask a hundred audiophiles who tried some, do you have confidence that a majority would consider crossfeed to be an improvement? AFAIK only a minority of people who try crossfeed will keep using it in the long run. And out of that minority, several only use Xfeed on a few select tracks/albums. If someone has data showing differently, I welcome the correction. Otherwise I'll stick to my assumption that crossfeed is a niche subjective tool for the few people who happen to enjoy it.


----------



## MacedonianHero

I use the minimal crossfeed on my Chord DAVE and Hugo 2 and can't go back to turning it off.


----------



## 71 dB

castleofargh said:


> Has it been determined anywhere that crossfeed is better than the status quo? If you ask @71 dB, it sure has been. But what about the rest of the planet? If you go ask a hundred audiophiles who tried some, do you have confidence that a majority would consider crossfeed to be an improvement? AFAIK only a minority of people who try crossfeed will keep using it in the long run. And out of that minority, several only use Xfeed on a few select tracks/albums. If someone has data showing differently, I welcome the correction. Otherwise I'll stick to my assumption that crossfeed is a niche subjective tool for the few people who happen to enjoy it.



I don't see this as a popularity contest. People learn to listen to headphones without crossfeed and think that's the way to do it. It takes effort to adjust your thinking and a lot of people aren't willing to do that. I'm sure if all people started listening to headphones with crossfeed, even fewer would "learn" away from it if given the chance. That's because headphone spatiality doesn't really make sense unless it's made for headphones (binaural etc.)

To me it doesn't matter if 0 % or 100 % of people like crossfeed. To me it is an improvement because it addresses the issue of excessive ILD and takes ILD closer to what I hear when I listen to speakers, which should be somewhat close to what the artistic intention* was. Humans are complex psychological creatures. Just like people have very different tastes in music or movies, opinions on crossfeed differ.

* On headphones, bass sounds "fake" to me without crossfeed. It doesn't have physicality. Crossfeed gives bass realness, making it sound much better. It's difficult for me to believe the artistic intention was to have the bass sound crappy. Maybe ONE band on this planet wants, as a joke, to have their bass sound crappy, but King Crimson? I don't think so! Of course they want physical, great sound! Speakers give it! Crossfeed helps achieve something like that with headphones. That's just one reason why I consider crossfeed an improvement and use it.
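The excessive-ILD point is easy to put in numbers. Below is a minimal, level-only sketch (Python; the 12 dB attenuation and the `simple_crossfeed` helper are illustrative assumptions, not the settings of any particular plugin) showing that a hard-panned source has unbounded ILD on headphones, while mixing in the opposite channel at -12 dB caps the ILD at about 12 dB:

```python
import math

def ild_db(level_l, level_r):
    """Interaural level difference in dB between two linear channel levels."""
    if min(level_l, level_r) == 0.0:
        return math.inf  # one ear hears nothing: unbounded ILD
    return abs(20.0 * math.log10(level_l / level_r))

def simple_crossfeed(level_l, level_r, atten_db=12.0):
    """Add an attenuated copy of the opposite channel to each channel.

    Ignores filtering and delay; looks at signal levels only.
    """
    g = 10.0 ** (-atten_db / 20.0)
    return level_l + g * level_r, level_r + g * level_l

# Hard-panned source: all of the signal in the left channel.
before = ild_db(1.0, 0.0)                    # infinite ILD on headphones
after = ild_db(*simple_crossfeed(1.0, 0.0))  # capped at roughly 12 dB
```

In this simplified picture the attenuation setting directly sets the largest ILD the headphones can present, which is also why a stronger crossfeed sounds progressively more mono-like.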


----------



## 71 dB

MacedonianHero said:


> I use the minimal crossfeed on my Chord DAVE and Hugo 2 and can't go back to turning it off.



When I turn off the crossfeed with a recording that really needs it, the result sounds shockingly bad to me. The sound just "explodes" into pieces and the natural feel is gone. Bass becomes "fake" and it all just feels annoying and wrong. That's why it is very difficult for me to understand how, for some other people, crossfeed is not an improvement, but it seems I simply have to accept how different people are. Thanks to people like you I feel I am not completely alone with my preferences.


----------



## MacedonianHero

71 dB said:


> When I turn off the crossfeed with a recording that really needs it, the result sounds shockingly bad to me. The sound just "explodes" into pieces and the natural feel is gone. Bass becomes "fake" and it all just feels annoying and wrong. That's why it is very difficult for me to understand how, for some other people, crossfeed is not an improvement, but it seems I simply have to accept how different people are. Thanks to people like you I feel I am not completely alone with my preferences.



We all have different tastes, but I do prefer some crossfeed regardless when listening through headphones. Just a more natural experience IMO.


----------



## 71 dB

MacedonianHero said:


> We all have different tastes, but I do prefer some crossfeed regardless when listening through headphones. Just a more natural experience IMO.



It's no wonder to me if you find some crossfeed more natural, because the sounds we hear around us are more mono than people think. Well, not that mono at high frequencies, but the low frequencies (below 1 kHz or so) we hear around us are not far from mono. A lot of music is produced so that bass (below 200 Hz or so) is actually mono, and to my knowledge nobody complains about it. It just works, even with speakers! The brain expects spatial correlation between the sound heard by the left and right ears, and when that kind of correlation exists the sound appears more natural, at least to me. A crossfeeder forces this kind of correlation onto the signal, and even if a crossfeeder is just a coarse approximation of the real spatial process, at least my spatial hearing gets fooled. For physical reasons, the HRTFs describing these correlations are quite smooth below 1 kHz, the typical "operating" frequency limit of crossfeeders. Above 1 kHz the HRTF becomes very chaotic and difficult to model using simple filters, but at those frequencies crossfeeders do hardly anything anyway. It is what it is. Scaling ILD to natural levels and having those natural correlations below 1 kHz is what counts for my ears.

To my ears recordings have different amounts of excessive spatiality, and that's why I use a variable crossfeed level. Some recordings don't need crossfeed at all: they have proper ILD levels as they are. Some recordings need just a little "fine-tuning". Modern pop tends to be like that: bass is perhaps mono, but between 200 Hz and 1000 Hz the ILD can be just a little too much. Some recordings, such as early stereophonic records and downmixed multichannel recordings, may need pretty strong crossfeed to be "tamed" natural to my ears. Selecting the proper crossfeed level is easy for me: it's the lowest level of crossfeed that gives sound free of excessive spatiality. Using too much crossfeed just makes the sound unnecessarily mono-like and dull for me. The sweet spot is pretty easy to find unless the recording has very strange spatial properties.
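As a concrete sketch of the kind of processing described above, here is a minimal bs2b-style crossfeed in Python: each output channel is the input plus a lowpass-filtered, attenuated copy of the opposite channel, so only the band below roughly the cutoff is fed across. The 700 Hz cutoff and 12 dB attenuation are illustrative placeholder values, not the parameters of any specific crossfeeder.

```python
import numpy as np

def crossfeed(left, right, fs=44100, cutoff=700.0, atten_db=12.0):
    """Mix a lowpass-filtered, attenuated copy of each channel into the other."""
    # One-pole lowpass coefficient (simple RC-style smoother).
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff / fs)
    gain = 10.0 ** (-atten_db / 20.0)

    def lowpass(x):
        y = np.empty_like(x)
        state = 0.0
        for i, sample in enumerate(x):
            state += a * (sample - state)
            y[i] = state
        return y

    # Each ear gets its own channel plus a dim, dull copy of the other.
    out_l = left + gain * lowpass(right)
    out_r = right + gain * lowpass(left)
    return out_l, out_r
```

A real implementation would also add the roughly 0.5 ms interaural delay mentioned earlier in the thread (bs2b approximates it with a shelving filter); this sketch shows only the level-and-lowpass part.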


----------



## jgazal (Feb 5, 2020)

mindbomb said:


> This criticism itself seems to be steeped in the nirvana fallacy. A solution doesn't have to be perfect for it to be advocated, just better than the status quo.



There are much more comprehensive solutions available right now. I can think of four: Sony 360 Reality Audio, Bacch for headphones, Realiser A16 and Impulsifier.


----------



## 71 dB

jgazal said:


> There are much more comprehensive solutions available right now. I can think of four: Sony 360 Reality Audio, Bacch for headphones, Realiser A16 and Impulsifier.



I haven't tried any of those, but in theory they can be superior to "default" crossfeed. Actually these are crossfeeders too (a portion of the left channel is sent to the right ear and vice versa), just more sophisticated than "default" crossfeed.


----------



## jgazal (Feb 6, 2020)

71 dB said:


> I haven't tried any of those, but in theory they can be superior to "default" crossfeed. Actually these are crossfeeders too (a portion of the left channel is sent to the right ear and vice versa), just more sophisticated than "default" crossfeed.



I chose the word “comprehensive” to mean that they incorporate a greater number of filters related to the listening room and the listener's head and torso. It is controversial to evaluate them using the words “standard” or “superior”, since music reproduction does not pretend to replicate reality.

Following the lessons from @gregorio and @pinnahertz, I’ve decided to read some books about music from a producer's perspective. So I’ve chosen David Byrne’s “How Music Works”, and from the very beginning he states that the acoustical characteristics of the place where musicians perform shape, or at least frame, how the music culture evolves. For instance, the ceiling height of the concert hall in the city where I live is adjustable to change its volume and ensure that any composition has its acoustic concept respected.

David Byrne also writes that, for instance, before the advent of the gramophone, vibrato was rarely used in live performances of string instruments. But it became frequently used in this type of recording to circumvent some of its limitations. Soon after, people got so used to it from listening to such recordings that they started to expect its frequent use in live music as well!

Even those more comprehensive solutions don't replicate reality, so their use is more a matter of preference than superiority.

It took me a while to understand that rationale reading the threads here, and, as Byrne’s book shows, professional producers confirm it.

But I didn’t develop any of those solutions. If I had developed one, maybe I would still be arguing for its superiority now.

Having said that, I believe some of those solutions may be perceived as really useful for those who want to produce music reliably outside of professional studios.

Cheers!


----------



## Davesrose

jgazal said:


> David Byrne also writes that, for instance, before the advent of the gramophone, vibrato was rarely used in live performances of string instruments. But it became frequently used in this type of recording to circumvent some of its limitations. Soon after, people got so used to it from listening to such recordings that they started to expect its frequent use in live music as well!



That seems too simplistic to me.  Vibrato was a tonal style with stringed instruments long before the gramophone.  Mozart said he thought many string players during his era used too much vibrato: that it should be reserved for long sustains and the ends of phrases.  During the baroque period, it was thought that subtle use of vibrato should be more ornamental.

https://scholarship.claremont.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1105&context=ppr

I suspect sensibilities about vibrato also depend on the music venue: e.g., a 4-piece string quartet (where you can hear each individual's intonation) vs. a full symphony.  Recording techniques might have influenced the use of vibrato, but I don't think they were the only factor in its preponderance by the 20th century.

I do agree that it's interesting to see how engineers might change the acoustics of a concert hall for recording vs a live event.  Here in Atlanta, the Atlanta Symphony produces recordings with Telarc.  Perhaps partly because the venue is empty during recording, they'll lay boards on top of all the seating.


----------



## jgazal

Davesrose said:


> That seems too simplistic to me.



Yes, it seems simplistic now that I've read your comment and the article you linked. I don't know enough about the subject to say what really happened when recordings appeared.

I've been obsessively thinking in recent years about what effect new advances in immersive audio would have on music if they became mainstream.


----------



## Davesrose

jgazal said:


> Yes, it seems simplistic now that I've read your comment and the article you linked. I don't know enough about the subject to say what really happened when recordings appeared.
> 
> I've been obsessively thinking in recent years about what effect new advances in immersive audio would have on music if they became mainstream.



Well, to be fair, it is funny to read about music theory and how everyone has their own opinion (it's like food critics).  The main point I've taken away is that we'll never really know how a piece would have been interpreted at a given time.  For example, I don't know how authentic Neville Marriner's interpretation of Mozart is, but I still subjectively prefer him.  So much so that I have some recent SACD recordings of Mozart that are "technically" better from a spec standpoint... but I still enjoy Marriner more for the actual music.


----------



## Ilomaenkimi

Yes, I do use crossfeed, though I'd never been a fan of it until I bought the SPL Phonitor xe and tested their Matrix. It's been in use ever since.
It is very subtle and does not colour the sound, and it makes listening much more relaxed and enjoyable. Works for me.


----------



## SupperTime

I enjoy crossfeed on the RME ADI-2 DAC; it makes it easier to listen to sibilant tracks, as I have some poor recordings.
On good recordings I feel crossfeed sort of takes the detail away... or is it just me?


----------



## gregorio

SupperTime said:


> I enjoy crossfeed on the RME ADI-2 DAC; it makes it easier to listen to sibilant tracks, as I have some poor recordings.
> On good recordings I feel crossfeed sort of takes the detail away... or is it just me?



I don't know exactly what the ADI-2 is doing with its crossfeed but generally, crossfeed does not crossfeed (or change in any way) the higher frequency range where sibilance lives. However, as crossfeed adds some of the lower frequency signal (typically below about 800Hz or so) to the opposite channel, you will obviously get more lower/bass frequencies and therefore, if you (or the ADI-2 itself) reduce the overall volume to compensate, you end up with relatively less high frequency content (above ~800Hz). While we're only talking about a very few dB, that can be enough to noticeably reduce sibilance and in some cases effectively remove it entirely. Again though, it depends on exactly how the ADI-2 implements its crossfeed.
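The generic behaviour described above — a delayed, lowpassed copy of the opposite channel mixed in a few dB down — can be sketched in a few lines of Python. All filter, gain and delay values here are illustrative assumptions, not the ADI-2's actual DSP:

```python
import math

def crossfeed(left, right, fs=44100, fc=700.0, feed_db=-9.5, delay_ms=0.3):
    """Toy Cmoy-style crossfeed: mix a delayed, first-order-lowpassed copy
    of each channel into the opposite channel. Parameter values are
    illustrative, not any product's actual settings."""
    a = math.exp(-2.0 * math.pi * fc / fs)    # one-pole lowpass coefficient
    g = 10.0 ** (feed_db / 20.0)              # feed level, dB -> linear
    d = max(1, round(delay_ms * 1e-3 * fs))   # interaural-scale delay in samples
    n = len(left)
    fl, fr = [0.0] * n, [0.0] * n             # lowpassed copies of L and R
    lp_l = lp_r = 0.0
    for i in range(n):
        lp_l = (1.0 - a) * left[i] + a * lp_l
        lp_r = (1.0 - a) * right[i] + a * lp_r
        fl[i], fr[i] = lp_l, lp_r
    # Direct signal stays untouched; only the lowpassed feed crosses over.
    out_l = [left[i] + g * (fr[i - d] if i >= d else 0.0) for i in range(n)]
    out_r = [right[i] + g * (fl[i - d] if i >= d else 0.0) for i in range(n)]
    return out_l, out_r
```

Feeding a hard-panned test signal through it and toggling `feed_db` is a quick way to hear what the effect does to the low end while leaving the highs of the direct signal alone.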

Crossfeed does in effect "take the detail away", because some of the left channel signal is superimposed on the right channel (and vice versa), which can blur or obscure "the detail". However, it seems that some people are either not able to perceive this loss of detail in the first place or, do perceive it but don't care/mind, because the other benefits they perceive from using crossfeed outweigh this loss of detail. So, it's not just you but neither is it applicable to everyone else.

And lastly, how are you defining good (and poor) recordings? On the basis of more detail, less sibilance, whether it doesn't sound as good with crossfeed, something else? There are numerous metrics for a good or poor recording, most of which are subjective and quite a few may have more to do with your reproduction equipment and listening environment than whether the recording itself is good or poor.

G


----------



## SupperTime

gregorio said:


> I don't know exactly what the ADI-2 is doing with its crossfeed but generally, crossfeed does not crossfeed (or change in any way) the higher frequency range where sibilance lives. However, as crossfeed adds some of the lower frequency signal (typically below about 800Hz or so) to the opposite channel, you will obviously get more lower/bass frequencies and therefore, if you (or the ADI-2 itself) reduce the overall volume to compensate, you end up with relatively less high frequency content (above ~800Hz). While we're only talking about a very few dB, that can be enough to noticeably reduce sibilance and in some cases effectively remove it entirely. Again though, it depends on exactly how the ADI-2 implements its crossfeed.
> 
> Crossfeed does in effect "take the detail away", because some of the left channel signal is superimposed on the right channel (and vice versa), which can blur or obscure "the detail". However, it seems that some people are either not able to perceive this loss of detail in the first place or, do perceive it but don't care/mind, because the other benefits they perceive from using crossfeed outweigh this loss of detail. So, it's not just you but neither is it applicable to everyone else.
> 
> ...


As in shouty, sibilant, less detail


----------



## FlavioWolff

Just found out that HeSuVi has a fully configurable crossfeed implementation.

There's no mention of HeSuVi in this thread, though. Has anyone tried it? Since it works with Equalizer APO, it's system-wide.


----------



## lasker98 (May 8, 2020)

This is my first post on Head-Fi after over 10 years as a member. The only reason I'm posting now is to thank 71 dB for all his contributions to this thread.

I felt compelled to make this post since I've just finished reading the entire thread from the beginning over a period of days. Based on this, I'd say 71 dB deserves a lot of credit for the way he handled himself over the life of this thread. Maybe not perfectly, but he was a saint compared to the rest of the vocal majority. I can honestly say he's the only one I learned anything useful from, despite the barrage of posts from most of the other "noise" contributors. I don't know why he put up with the abuse he did, but for my part, I'm grateful that he did.

I got the distinct impression, just my own opinion here, that the majority of the noise posts were made by people much more interested in defending some perceived position of authority they felt they had or deserved, instead of actually addressing the OP's original premise. This was my first experience in the Sound Science side of Head-Fi and it wasn't a pretty picture, to say the least: personal attacks, pedantic posts and responses, ridiculously long replies from people with nothing to contribute beyond their original premise and an obvious need to be right, people posting page after page to say little more than that they don't even use headphones! And the mod(s) saw fit to attack 71 dB for his posts? It definitely makes me wonder what these people really do for a living when they're able to spend so much time over a period of years making useless posts. I'd say a lot of the people who participated in this thread should, but I'm sure won't, feel embarrassed if they took an honest look back at their "contributions", from the mod(s) on down. As I stated, this was my first experience in the Sound Science side of this forum and this is only my opinion. Maybe this is just normal over here.
It's sure different from any other threads I've read over the years here on Head-Fi, and not in a good way.

I had previously tried crossfeed (Roon with Sennheiser HD600) and had always gone back to no crossfeed. Thanks to 71 dB and some others, I realized what crossfeed actually does and what to listen for. I now also understand what the different settings do and why. Since this thread started I've been using crossfeed and am very happy with it. It has definitely added to my headphone listening experience in a major way. I'm currently using a Roon custom setting of 700 Hz Cut Frequency and 5 dB Feed Level.

In response to the OP's original question, "To crossfeed or not to crossfeed? That is the question...", I'm now a big YES for crossfeed. This is completely based on the incredible input and contributions from 71 dB. Thanks, 71 dB.


----------



## bigshot

Welcome back 71dB


----------



## castleofargh

lasker98 said:


> This is my first post on Head-Fi after over 10 years as a member. The only reason I'm posting now is to thank 71 dB for all his contributions to this thread.
> 
> ...


The personal attacks went too far too many times; even I got mad, and that's obviously not right. Gregorio is in trouble right now because of such tendencies to act like a bully.
Now, your takeaway from all this is your own. I didn't quite come to the same conclusion after seeing several audio professionals, sound engineers, and people passionate about sound-field and speaker simulation try to explain to 71 dB what was wrong in his reasoning and some of his statements, just to end up with him repeating the same oversimplified idea again and again and...
Also, being the most active mod in this section, I remember some of the many posts I had to delete (and yes, the horrible stuff that remains is the less offensive portion, if you can believe it), so that gives me a different perspective on the "saint" thing. He doesn't blow up too often, but when he does it's no joke.

Anyway, you have learned a little about crossfeed and you like it with better settings, great. I loved using crossfeed for almost a decade, and only stopped because I found better tools giving me a more "correct spatiality", if you will (sarcasm is very bad, but I'm also not a saint).
So if you wish to get away from headphone stereo with albums clearly made for speakers, I strongly recommend https://github.com/jaakkopasanen/Impulcifer . You'll need to procure some binaural microphones or make some yourself, and the rest is free. You can ask questions about it here: https://www.head-fi.org/threads/recording-impulse-responses-for-speaker-virtualization.890719/
And if you have a lot of money and a nice speaker setup you can record at home or at a friend's, you might want to check out the Realiser A16, which is pretty much the same thing with head tracking, and it costs a fortune.


----------



## housekrl

I owned a Meier Concerto with crossfeed for several years. I think it is possible that the crossfeed switch was not even connected to anything; neither I nor others could tell a difference, even on old jazz recordings. I should have taken it apart to inspect it while I had it. Oh well.


----------



## bigshot

By "old jazz recordings" do you mean mono?


----------



## housekrl

bigshot said:


> By "old jazz recordings" do you mean mono?


Not really sure. Recordings where the instruments are obviously on the left or right channel: John Coltrane and stuff like that.
You would think I could hear some change with the crossfeed engaged, but I never could.


----------



## bigshot

Maybe it’s one of those magic buttons that don’t do anything. I had a sound purity button on an old SACD player that never seemed to make a lick of difference. But it lit up a beautiful purple LED, so I left it on. I called it my purple placebo button.


----------



## housekrl

bigshot said:


> Maybe it’s one of those magic buttons that don’t do anything. I had a sound purity button on an old SACD player that never seemed to make a lick of difference. But it lit up a beautiful purple LED, so I left it on. I called it my purple placebo button.


Lol. Actually I was very underwhelmed by the Meier Concerto. I paid $750 and thought it shouldn't cost more than $300.


----------



## castleofargh

housekrl said:


> I owned a Meier Concerto with crossfeed for several years. I think it is possible that the crossfeed switch was not even connected to anything; neither I nor others could tell a difference, even on old jazz recordings. I should have taken it apart to inspect it while I had it. Oh well.


If it's the usual Meier's crossfeed implementation, it's not so subtle that it could be missed when switching it ON and OFF. So maybe you did get a lemon.


----------



## housekrl

castleofargh said:


> If it's the usual Meier's crossfeed implementation, it's not so subtle that it could be missed when switching it ON and OFF. So maybe you did get a lemon.


That's what I'm thinking.


----------



## sonitus mirus

Here are the specifications for the crossfeed options on my DAC.

https://www.rme-audio.de/downloads/adi2dac_e.pdf

_*8.6 Crossfeed* 

While headphones open the sound stage and make everything easier to hear and to locate by spreading the narrow sound field of stereo speakers to the left/right extreme, some people would like to have a listening situation that is more comparable to a standard speaker setup. The ADI-2 DAC includes Crossfeed to address this wish. Crossfeed reduces the artificial surround ambience that some productions have to make them sound better on speakers, but which sounds unnatural on a headphone.

The Bauer Binaural method is used, with five selectable strengths of narrowing the upper frequencies. This advanced method, which also includes a small delay and correction of the frequency response, works quite well, and is another useful addition as well as a unique feature on a device like the ADI-2 DAC.

Details on internal settings: The Crossfeed effect is mainly defined by the filter frequency and the amount of crossfeed, here given as damping factor: 

1: 650 Hz, -13 dB (just a touch) 
2: 650 Hz, -9.5 dB (Jan Meier emulation) 
3: 700 Hz, -6 dB (Chu Moy emulation) 
4: 700 Hz, -4.5 dB (30° 3 meter emulation) 
5: 700 Hz, -3 dB (example how even stronger would sound)_
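For a quick sense of scale, the damping figures quoted above convert to linear mix gains as follows. This is plain dB arithmetic in Python, not anything from RME's firmware:

```python
# Linear feed gains implied by the RME damping table: setting -> (Hz, dB).
settings = {1: (650, -13.0), 2: (650, -9.5), 3: (700, -6.0),
            4: (700, -4.5), 5: (700, -3.0)}

for num, (fc_hz, damp_db) in sorted(settings.items()):
    gain = 10.0 ** (damp_db / 20.0)   # dB damping -> linear crossfeed amount
    print(f"setting {num}: {fc_hz} Hz, feed gain ~ {gain:.3f}")
```

So "just a touch" at -13 dB mixes in roughly a fifth of the opposite channel, while the strongest setting at -3 dB mixes in about 70%.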


----------



## 71 dB

lasker98 said:


> This is my first post on Head-Fi after over 10 years as a member. The only reason I'm posting now is to thank 71 dB for all his contributions to this thread.
> 
> ...



Wow! I haven't been active on this forum lately and I just dropped by to see what's going on, and I see this! I am speechless! Someone who registered a decade ago makes their first post and it's this! I am humbled! All the work on this forum wasn't a total waste of time after all; somebody got something from my posts. Wow.

I am glad my posts helped you discover the benefits of crossfeed and that your headphone listening has improved because of it. You are welcome. Thank you for the honour of being the reason for your first post on this forum! It really makes my day (night, actually, but I had to respond immediately!). Thanks also for your kind words about me, perhaps too kind, as I wasn't always so nice to others.

While I am at it, I want to apologize to everyone here for saying nasty things occasionally. I was STUNNED by the opposition to my posts and I didn't know how to handle it correctly. Also, I wasn't always 100% clear about what I meant, but I tried to correct those mistakes toward the end. For example, when I said that crossfeed makes headphones sound like speakers, I didn't expect people to interpret that to mean you can't tell headphones from speakers. So I have hopefully learned to express myself better to avoid misunderstandings. In particular, I learned that spatial hearing is very subjective: I can't generalize my own experiences to other people. I have learned to respect other people's preferences regarding spatiality. That's why I kind of lost my interest in being here, but all the hours I spent here seem to have achieved something...


----------



## bigshot

Welcome back again 71dB


----------



## 71 dB

bigshot said:


> Welcome back again 71dB


Thanks pal!


----------



## 71 dB

housekrl said:


> I owned a Meier Concerto with crossfeed for several years. I think it is possible that the crossfeed switch was not even connected to anything; neither I nor others could tell a difference, even on old jazz recordings. I should have taken it apart to inspect it while I had it. Oh well.


I believe those Meier headphone amps had jumper settings inside that could be changed for a milder or stronger crossfeed effect. I believe the default setting was something like -11.5 dB, which is pretty weak crossfeed (some recordings, in my opinion, benefit from a weak crossfeed like this), and the nature (circuit topology) of the Meier crossfeed makes it even more difficult to hear the difference, but it's clear when you know what to listen for. At a low level such as -11 dB, crossfeed merely makes the sound calmer and reduces listening fatigue for some listeners such as myself. The Meier design tries to be an "invisible" crossfeed in the sense that it keeps the sound wide, unlike the "Cmoy"-type topology, which makes the sound narrower and is perhaps easier to hear for this reason. The best way to hear what crossfeed does to the signal is to feed only one channel to it, left or right, and then switch it on and off.


----------



## 71 dB (Jun 3, 2020)

gregorio said:


> Crossfeed does in effect "take the detail away", because some of the left channel signal is superimposed on the right channel (and vice versa), which can blur or obscure "the detail". However, it seems that some people are either not able to perceive this loss of detail in the first place or, do perceive it but don't care/mind, because the other benefits they perceive from using crossfeed outweigh this loss of detail. So, it's not just you but neither is it applicable to everyone else.
> 
> G



The way I see the "detail" thing is: as long as the recording is primarily produced for speakers, the detail I get with headphones without crossfeed is a little bit of _false detail_, and crossfeed brings the detail closer to what it is "supposed" to be. That's because in speaker listening the sound is heavily blurred and obscured by room acoustics. Tons of reflections are superimposed on the direct sound, and even the direct sound gets acoustically crossfed. In this context headphone crossfeed is quite mild obscuring compared to what a room does to the sound.

Why doesn't room acoustics completely destroy the sound? I believe it's because our ears EXPECT it to happen. In real life we don't really hear "pure" sound. It's almost always blurred, obscured and what not, and all of that is part of the spatial information that tells us what kind of acoustic environment we are listening to the music in. If the music is produced for headphones (say a binaural recording), the blurring has already happened in the recording and there is no reason to blur things further. I even believe the "blurring" happens only at the signal level and that my spatial hearing is able to decode the "unblurred" information (that's what spatial hearing does), because the superimposed signals contain time delays, so spatial hearing can figure out how the superimposing happened. This also explains why room acoustics doesn't blur the sound completely for the hearing system: spatial hearing is able to work out what happened to the sound.

So, I feel that sound without any "blurring" and "obscuring" is kind of unnatural, sound that happened nowhere, and I never feel like I'm losing detail due to crossfeed. Whatever is lost was never supposed to be there: false detail due to incorrect reproduction of loudspeaker sound on headphones. So, for me, crossfeed reducing this false detail is a plus, but that's me. Certainly this is not the only way one can think about detail, but this is how I think about it.


----------



## bigshot

Here we go again. I'll leave it to the folks who enjoy this stuff.


----------



## housekrl

71 dB said:


> I believe those Meier headphone amps had jumper settings inside that could be changed for a milder or stronger crossfeed effect. I believe the default setting was something like -11.5 dB, which is pretty weak crossfeed (some recordings, in my opinion, benefit from a weak crossfeed like this), and the nature (circuit topology) of the Meier crossfeed makes it even more difficult to hear the difference, but it's clear when you know what to listen for. At a low level such as -11 dB, crossfeed merely makes the sound calmer and reduces listening fatigue for some listeners such as myself. The Meier design tries to be an "invisible" crossfeed in the sense that it keeps the sound wide, unlike the "Cmoy"-type topology, which makes the sound narrower and is perhaps easier to hear for this reason. The best way to hear what crossfeed does to the signal is to feed only one channel to it, left or right, and then switch it on and off.


Interesting. I bought my Meier Concerto when it first hit the market, I think in 2010 or so, directly from their website. It said nothing about jumpers on the site, or on the Head-Fi thread that I remember, anyway. Who knows, I just could never hear any change whatsoever. Oh well, I don't own it anymore so I guess it doesn't matter. The Concerto was a very short-lived product, not sure why.


----------



## bigshot

Maybe it was short lived because they didn't provide enough documentation!


----------



## housekrl

bigshot said:


> Maybe it was short lived because they didn't provide enough documentation!


Also, it seemed kind of like a cash grab. Way overpriced.


----------



## 71 dB

housekrl said:


> Interesting. I bought my Meier Concerto when it first hit the market, I think in 2010 or so, directly from their website. It said nothing about jumpers on the site, or on the Head-Fi thread that I remember, anyway. Who knows, I just could never hear any change whatsoever. Oh well, I don't own it anymore so I guess it doesn't matter. The Concerto was a very short-lived product, not sure why.


I believe the end user was never supposed to tinker with those jumper settings and they were there as a "vestige" of the product development phase.


----------



## Blackwoof

Meier crossfeed at 15 sounds more normal/natural to me.


----------



## gumisb

I made a crossfeed for use with Equalizer APO.
If someone is interested and wants to try it, the delay is in samples for 44.1 kHz.
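Since the delay here is specified in samples at 44.1 kHz, converting from milliseconds is a one-liner. A small sketch with a hypothetical helper name (the actual config values aren't shown in the post):

```python
def ms_to_samples(ms, fs=44100):
    """Convert a delay in milliseconds to the nearest whole sample count
    at sample rate fs (hypothetical helper, for illustration only)."""
    return round(ms * 1e-3 * fs)

# Interaural-scale delays commonly mentioned for crossfeed:
print(ms_to_samples(0.3))  # 0.3 ms at 44.1 kHz
print(ms_to_samples(0.5))  # 0.5 ms at 44.1 kHz
```

Note that a samples-based delay tuned for 44.1 kHz will be slightly shorter in real time if the same config is run at 48 kHz or above.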


----------



## Rob the Comic (Oct 1, 2020)

pinnahertz said:


> Early cross-feed was called FM Stereo radio.
> You missed the point.  How do you know what the artist intent is?  Everything else is subjective and opinion:
> 
> Artists don't communicate their "intent" well, if at all.  An example would be a track released in 1991 by Suzanne Ciani, "Rain" on her "Hotel Luna" CD.  The booklet says something to the effect of "thanks to the Roland RSS-10, the raindrops are where they are supposed to be" (not an exact quote, but close).  However, back in 1991 it was darn hard for the average listener to find out what the RSS-10 was supposed to do.  If you did find out, you learned it was an inter-aural crosstalk cancellation system meant to expand a soundstage far beyond the confines of two speakers, and it used a DSP to do essentially the inverse of cross-feed.  That only worked properly on speakers, and not at all on headphones.  And it didn't work well at all on the average home speaker setup; it had to have a well-controlled and symmetric speaker and room layout with few random early reflections.  Did Suzanne communicate all of that? Not a bit.  Therefore, even though she did imply a special psychoacoustic process was in place on her "raindrops", she might as well have not said a word about it, because it didn't help anyone understand how to play the track "as the artist intended" without doing some personal research into a product that, today, has long been discontinued.  So what now do you assume the artist intent was?  And that example made at least an attempt at giving the listener a tiny peek into the artist's production intent.  Still failed; perhaps made things worse.
> ...


Well said, and you make very compelling points. New member here awaiting the arrival of an SPL Phonitor XE; I came across this thread and am intrigued to hear what people's thoughts are on crossfeed / Matrix. I agree about 'artists' intentions': when I was a boy here in Sydney, raising a kid brother with no parents, I heard Bowie's track 'Station to Station' only once the way it was apparently recorded and supposed to be heard, at a record store on a new-fangled quadraphonic sound system. Sure, it was a novelty to hear the train apparently travel from speaker to speaker, but do you think for a minute that not having the 'correct' set-up stopped me, or a million other poor kids, from saving up to buy it and playing it on a crappy all-in-one affair? I have been a professional comedian for over 40 years and worked with many, many artists, and many, many of them couldn't spell the word intention.


----------



## Rob the Comic

Ilomaenkimi said:


> Yes, I do use crossfeed, though I'd never been a fan of it until I bought the SPL Phonitor xe and tested their Matrix. It's been in use ever since.
> It is very subtle and does not colour the sound, and it makes listening much more relaxed and enjoyable. Works for me.


Happy to read this as I have the XE arriving next week. I will jump through all the hoops and see what is what - thanks.


----------



## 71 dB

I ruined my self-confidence here with my crossfeed posts, but on Wednesday I start a course on mixing using Pro Tools. I hope that after the course I am no longer considered a clueless fool! At least I WON'T care anymore, because I will have done the damn course! At least people can't say I haven't done the course!


----------



## bigshot

ProTools is like all programs. The basics are pretty easy, but there are levels upon levels of added features that you can learn a little at a time.


----------



## 71 dB

bigshot said:


> ProTools is like all programs. The basics are pretty easy, but there are levels upon levels of added features that you can learn a little at a time.



Pro Tools doesn't seem seriously difficult. Ease of use is a good marketing point. My problem is Asperger's, which makes it difficult for me to learn "series of small steps" as automated actions. Learning logical connections between things (systems thinking) is easy for me, but only a small part of learning new software is about that. For example, learning shortcuts is hard for me, because they are often a bit "random", so there is not much logic behind them. Why does "Command" + "=" toggle between the Edit and Mix windows? Somebody just chose that shortcut for whatever reason, and it's like memorizing the decimals of pi.


----------



## ironmine

Has anyone here mentioned https://www.dsoniq.com before? Sounds really great.


----------



## FlavioWolff

gumisb said:


> I made a crossfeed for use with Equalizer APO.
> If someone is interested and wants to try it, the delay is in samples for 44.1 kHz.



Thanks, I'm gonna try this out. Does it come closer to the CMOY or the Meier method?
Is it in any way different from HeSuVi's crossfeed (which also uses EqAPO)?
Thanks!


----------



## 71 dB

*Double crossfeed? Huh?*

Generally it seems to be a bad idea to do crossfeed _twice_. Using typical crossfeeders the sound becomes too centered and dead. However, there seem to be ways to do double crossfeed successfully. I have been using an arrangement where I have my "wide crossfeeder" followed by a very simple crossfeeder:

The wide crossfeeder is a variation of the CMOY circuit where the crossfeed cut-off frequency is lowered from 800 Hz to 300 Hz, causing much bigger phase differences at low frequencies. It emulates a situation where the speakers are on both sides of the head rather than at 30° angles in front. The sound appears wide and not "forward" the way normal CMOY does.

The simple crossfeeder is laughably simple, but gives surprisingly nice results similar to Meier crossfeed: it is just a coil and resistor in series connecting the left and right channels so that low frequencies "leak" between them. I use a "UPOC" variation of this idea, which adds a few more resistors to make the circuit work well with any output or headphone impedance.

The resulting sound using these two crossfeeders is not too narrow, because the first crossfeeder makes the sound wide, and the use of two different crossfeed topologies (CMOY is "X" topology and Meier/simple coil crossfeeders are "H" topology) gives the sound the flavor of both: "X" topology tries to emulate two speakers at fixed angles, while "H" topology spreads the soundstage continuously from left to right. The miniature soundstage I hear with this double crossfeed arrangement is very "solid" and "even", but also small. The sound appears near my head. It feels like wearing a helmet protecting against excessive spatiality. The crossfeeders I use for this have fixed crossfeed values, but by making both crossfeeders a bit "milder" the miniature soundstage could probably be enlarged at the expense of the "solidness" and "evenness" of the sound. 
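A rough software sketch of this cascade (both stages are modelled here simply as lowpass-filtered inter-channel mixing; the one-pole filters and the -6 dB / -12 dB gains are illustrative assumptions, not the actual circuit values, and the electrical difference between the "X" and "H" topologies is not captured):

```python
import numpy as np
from scipy.signal import lfilter

def onepole_lowpass(x, fs, fc):
    """First-order lowpass: a crude stand-in for the RC / coil filtering."""
    a = np.exp(-2 * np.pi * fc / fs)
    return lfilter([1 - a], [1, -a], x)

def crossfeed(x, fs, fc, cross_db):
    """Add a lowpass-filtered, attenuated copy of each channel to the
    opposite channel. x has shape (2, n_samples)."""
    g = 10 ** (cross_db / 20)
    out = np.empty_like(x)
    out[0] = x[0] + g * onepole_lowpass(x[1], fs, fc)
    out[1] = x[1] + g * onepole_lowpass(x[0], fs, fc)
    return out

fs = 44100
t = np.arange(fs) / fs
hard_left = np.vstack([np.sin(2 * np.pi * 100 * t), np.zeros_like(t)])

# stage 1: "wide" CMOY-style stage with the cutoff lowered to 300 Hz
wide = crossfeed(hard_left, fs, fc=300.0, cross_db=-6.0)
# stage 2: very mild coil/resistor-style low-frequency leak
out = crossfeed(wide, fs, fc=700.0, cross_db=-12.0)
```

Feeding a hard-left 100 Hz tone through the cascade shows a substantial low-frequency leak into the right channel; repeat with an 8 kHz tone and very little crosses over.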

*Multiple crossfeed? Huh?*

If double crossfeed can work, how about multiple crossfeed? What if we crossfeed the signal 10 or 20 times using extremely mild crossfeeders? It would approach the final result slowly and might be a way to have a fixed crossfeed that works well with all kinds of material, whatever the degree of excessive spatiality. The biggest problem with this idea is the massive complexity and the number of variables to optimize! What are the optimal crossfeed types, and in what order?


----------



## bigshot

I want a helmet to protect me from excess space!


----------



## FlavioWolff

71 dB said:


> *Double crossfeed? Huh?*
> 
> Generally it seems to be a bad idea to do crossfeed _twice_. Using typical crossfeeders the sound becomes too centered and dead. However, there seems to be ways to do double crossfeed successfully. I have been using an arrangement where I have my "wide crossfeeder" followed by a very simple crossfeeder:
> 
> ...



Very intriguing. Have you ever thought about researching this in the university?


----------



## 71 dB

FlavioWolff said:


> Very intriguing. Have you ever thought about researching this in the university?


Well, I haven't worked at the university for 15 years...


----------



## starepiernikowe (Nov 26, 2021)

Hello

I have been using crossfeed for something like 14 years now.
In the beginning I really had problems with it, because all the "types" of crossfeed offered by different plugins sounded strange to me.
I really didn't like the stereo image that was presented as two speakers "in the headphones".
I mostly used BS2B in foobar2000 because it didn't make my head hurt.
But I wanted something that would feel natural, and after a few years of looking and testing I finally found it.
I have used this little plugin for foobar2000 since the latest version came out for f2k 1.1.

I used to have it at the default settings with the mixer level changed to 15% - that widens the soundstage.
Now, with my new buy, the Shure SRH1840, I use it with all settings at default ("mixer level 25%").
If you use f2k I highly recommend this plugin.

To me, what it does is take the music from inside my head to "on my face and sides".
Without my noticing the delay or the speaker placement, the sound is on me, on my face.
What this plugin does is unique among crossfeed plugins.
Try it out and write what you think.
http://www.naivesoftware.com/


Test sample:
https://drive.google.com/drive/folders/1ZT3dEF69iCGcvJh1PfooEEG2yk3Ch8jw?usp=sharing


Paweł


----------



## sonitus mirus

It is probably the Jan Meier natural crossfeed.  I like that with headphones and I have had a few headphone dac/amps that included this DSP option.  Even my current RME ADI-2 DAC FS includes 3 crossfeed options, with Meier being one of them.  I greatly prefer listening to music with speakers, and Meier gets my headphones closer to stereo speaker sound for me, though still not particularly similar.


----------



## FlavioWolff

Hello,
I'm currently experimenting with HeSuVi's crossfeed implementation via EqualizerAPO. 
Which settings should I enter to get a Jan Meier-like approach?

These are the available settings:

*Frequency Cutoff*: The frequency that is central to the gain changes set below.
*Direct Shelf Boost*: The gain increase of the higher frequencies for the adjacent ear.
*Cross Attenuation*: The frequency-independent gain decrease of the audio signal to the opposite ear.
*Cross Delay*: The frequency-independent delay of the audio signal to the opposite ear.
*Cross Shelf Decay*: The gain decrease of the higher frequencies for the opposite ear.
*Constant 6 dB/Oct Decay*: The value above will be ignored and a constant 6 dB/Oct decay is used instead for the higher frequencies at the opposite ear.

They default as:

*Frequency Cutoff*: 700 Hz
*Direct Shelf Boost*: 30 dB/10
*Cross Attenuation*: 90 dB/10
*Cross Delay*: 18 Samples
*Cross Shelf Decay*: 100 dB/10
*Constant 6 dB/Oct Decay*: unchecked by default.

Thank you!
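For anyone curious how these parameters combine, here is a rough sketch of the signal flow they describe (assumptions on my part: the "dB/10" fields are read as tenths of a decibel, and the shelves are approximated with simple one-pole filters rather than HeSuVi's actual filter design):

```python
import numpy as np
from scipy.signal import lfilter

def onepole_lowpass(x, fs, fc):
    a = np.exp(-2 * np.pi * fc / fs)
    return lfilter([1 - a], [1, -a], x)

def crossfeed_defaults(x, fs=44100, fc=700.0,
                       direct_boost_db=3.0,   # "30 dB/10" read as 3.0 dB
                       cross_atten_db=9.0,    # "90 dB/10" read as 9.0 dB
                       cross_delay=18):       # samples @ 44.1 kHz
    """Illustrative reading of the default settings; not HeSuVi's code."""
    out = np.empty_like(x)
    for dst, src in ((0, 1), (1, 0)):
        # direct path: crude high-shelf boost above the cutoff frequency
        lp = onepole_lowpass(x[dst], fs, fc)
        direct = lp + 10 ** (direct_boost_db / 20) * (x[dst] - lp)
        # cross path: attenuate, shelve down the highs, then delay
        cross = 10 ** (-cross_atten_db / 20) * onepole_lowpass(x[src], fs, fc)
        n = cross.size
        cross = np.pad(cross, (cross_delay, 0))[:n]
        out[dst] = direct + cross
    return out
```

An impulse in the left channel shows the behaviour: the direct path passes immediately with its shelf boost, while the opposite channel receives nothing until sample 18.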


----------



## starepiernikowe

sonitus mirus said:


> It is probably the Jan Meier natural crossfeed.  I like that with headphones and I have had a few headphone dac/amps that included this DSP option.  Even my current RME ADI-2 DAC FS includes 3 crossfeed options, with Meier being one of them.  I greatly prefer listening to music with speakers, and Meier gets my headphones closer to stereo speaker sound for me, though still not particularly similar.


From what I know, Naive is not based on Meier; to my ears Meier is too much in front - at least the f2k software version is.
Naive started as a Winamp crossfeed in 2002.
https://hydrogenaud.io/index.php?topic=4140.msg42151#msg42151


----------



## 71 dB

FlavioWolff said:


> Hello,
> I'm currently experimenting HeSuVi's Crossfeed implementation via EqualizerAPO.
> Which settings should I input to have a Jan Meier-like approach?
> 
> ...


HeSuVi's crossfeed seems to use an "X-topology" approach, while Jan Meier uses an "H-topology" approach. This means the resulting soundstages are different in nature: the "H-topology" approach gives an edgier, more "surround"-like soundstage, while the "X-topology" approach is more like listening to stereo speakers.

People seem to want "their personal settings" for crossfeed, but actually each recording calls for its own setting depending on the spatial nature of the recording. Some recordings don't need crossfeed at all, while others need massive crossfeeding. Here is what I would recommend for three basic settings - weak, moderate and strong:

WEAK:
*Frequency Cutoff*: 800 Hz
*Direct Shelf Boost*: 17 dB/10
*Cross Attenuation*: 100 dB/10
*Cross Delay*: 11 Samples (@ 44.1 kHz)
*Cross Shelf Decay*: 200 dB/10
*Constant 6 dB/Oct Decay*: unchecked

MODERATE:
*Frequency Cutoff*: 800 Hz
*Direct Shelf Boost*: 26 dB/10
*Cross Attenuation*: 60 dB/10
*Cross Delay*: 11 Samples (@ 44.1 kHz)
*Cross Shelf Decay*: 200 dB/10
*Constant 6 dB/Oct Decay*: unchecked

STRONG:
*Frequency Cutoff*: 800 Hz
*Direct Shelf Boost*: 38 dB/10
*Cross Attenuation*: 20 dB/10
*Cross Delay*: 11 Samples (@ 44.1 kHz)
*Cross Shelf Decay*: 200 dB/10
*Constant 6 dB/Oct Decay*: unchecked


----------



## jamesjames (May 7, 2022)

I listen to classical music through headphones. I've been listening with crossfeed for years now - I think it's essential. I've only just stumbled across this thread - it's very interesting. I agree with an idea that turns up in a number of the posts: that crossfeed really can create a performance space out of the head, to the front of the listener, greatly adding to the sense of realism. (Otherwise, to my ear, headphone listening produces an unnatural - if fascinating - effect.) In my view, when done well, crossfeed improves every aspect of playback.

I wanted to add that I've long found analogue implementations more satisfactory than digital. I currently use two amps with switchable crossfeed circuits: the Moon 430HA and the SPL Phonitor xe. The Phonitor circuit is adjustable in a number of respects; the Moon in none. I think both of these amps are tremendously good, but with slightly different emphases. The Phonitor is perhaps slightly more 'linear' in its output, which can leave the Moon seeming a little dark - but the Moon seems almost magical in the way it reproduces accurate acoustic timbre. I currently use MySphere 3.2 headphones (previously, I preferred HD800S Senns). Interestingly, I find myself using one of my amps for some months, before switching to the other, enjoying the slightly different takes on 'natural'.

I might also mention for those familiar with the MySphere that of late I've been using them 'wide open' - that is, with the ear capsules angled forward to the maximum extent possible, rather than 'flat' to the ear. I've read reviewers whom I respect commenting that this is likely to reduce bass response to an unacceptable extent for most. My initial reaction was probably something like that. But I've learned (?) to prefer this effect (however it might be described) and, in combination with crossfeed, find it to produce an uncannily 'real' performance space.


----------



## Rob the Comic

jamesjames said:


> I listen to classical music through headphones.  I've been listening with crossfeed for years now - I think it's essential.  I've only just stumbled across this thread - it's very interesting.  I agree with an idea that turns up in a number of the posts: that crossfeed really can create a performance space out of the head, to the front of the listener, greatly adding to the sense of realism.  (Otherwise, to my ear, headphone listening produces an unnatural - if fascinating - effect.)  In my view, when done well, crossfeed improves every aspect of playback.  I wanted to add that I've long found analogue implementations more satisfactory than digital.  I currently use two amps with switchable crossfeed circuits: the Moon 430HA; and the SPL Phonitor xe.  The Phonitor circuit is adjustable in a number of respects; the Moon in none.  I think both of these amps are tremendously good, but with slightly different emphases.  The Phonitor is perhaps slightly more 'linear' in its output, which can leave the Moon seeming a little dark - but the Moon seems almost magical in the way it reproduces accurate acoustic timbre.  I currently use MySphere 3.2 headphones (previously, I preferred HD800S Senns).  Interestingly, I find myself using one of my amps for some months, before switching to the other, enjoying the slightly different takes on 'natural'.  I might also mention for those familiar with the MySphere that of late I've been using them 'wide open' - that is, with the ear capsules angled forward to the maximum extent possible, rather than 'flat' to the ear.  I've read reviewers whom I respect commenting that this is likely to reduce bass response to an unacceptable extent for most.  My initial reaction was probably something like that.  But I've learned (?) to prefer this effect (however it might be described) and, in combination with crossfeed, find it to produce an uncannily 'real' performance space.


I have three amps - a Cavalli Liquid Gold, a Heed Canalot with PSU, and the Phonitor XE (no DAC version, as I run it through a Schiit Yggdrasil). I sort of use the different amps for different genres and generally use the Phonitor for classical, like yourself. I agree that the crossfeed for classical is a lot more 'lifelike' and, I think, less fatiguing. I like your description of the fascinating effect sans crossfeed; sometimes I cannot resist that 'super stereo' effect with the old Mercury Living Stereo boxes. 😀


----------



## 71 dB

I have a strong allergy to super stereo. When I watch an older movie with a mono soundtrack using headphones, I am positively surprised by how the stability and simplicity of the soundtrack allow me to concentrate on the movie itself. Many YouTube videos also sound "better" in mono, because people record themselves talking in reverberant rooms. Since the direct sound at the microphone is very monophonic while the reverberation is diffuse, folding to mono attenuates the reverberation a few decibels, improving intelligibility. Many YouTubers don't seem to know they should use mono sound when speaking. In the worst case it is the left or right channel only! Fortunately I have a mono switch in my DIY crossfeeder. Of course binaural stereo is best on headphones, but mono has its place and works well with headphones when stability and simplicity are wanted. Headphone super stereo can be a "sonic effect", but it is quite unnatural and doesn't make much sense. The history of stereo sound has been so loudspeaker-centric that the spatial properties of headphone sound have been almost ignored until, perhaps, recently...
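The few-decibel figure can be checked with a toy model: treat the direct sound as perfectly correlated between channels and the reverberation as perfectly decorrelated (an idealisation I'm assuming here, not a measurement). Folding to mono then improves the direct-to-reverb ratio by about 3 dB:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
direct = rng.standard_normal(n)        # stand-in for speech: identical in L and R
rev_l = rng.standard_normal(n)         # stand-in for diffuse reverb: decorrelated
rev_r = rng.standard_normal(n)

def ratio_db(sig, noise):
    return 10 * np.log10(np.mean(sig ** 2) / np.mean(noise ** 2))

stereo_drr = ratio_db(direct, rev_l)   # direct-to-reverb ratio in one channel
mono_rev = (rev_l + rev_r) / 2         # uncorrelated sum: power halves
mono_drr = ratio_db(direct, mono_rev)  # direct folds coherently: (d + d) / 2 = d
improvement = mono_drr - stereo_drr
print(f"mono improves direct-to-reverb ratio by {improvement:.1f} dB")
```

The coherent signal is unchanged by the fold-down while the power of the decorrelated reverb halves, which is where the roughly 3 dB of extra intelligibility comes from.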


----------



## jamesjames

Rob the Comic said:


> I have three amps - Cavalli Liquid Gold, Heed Canalot PSU - and the Phonitor XE ( no DAC version as I run it through a Schitt Yggdrasil. I sort of use the different amps for different genres and generally use the Phonitor for classical like yourself. I agree that the crossfeed for classical is a lot more ‘lifelike’ and I think, less fatiguing. I like your description of the fascinating effect, sans crossfeed; sometimes I cannot resist that ‘super stereo’ effect with the old Mercury Living Stereo boxes. 😀


Thanks - I should have mentioned I also have the Phonitor and Moon _sans_ DACs.  I have for some years now used a Marantz SA10 digital player - balanced analogue connections to the amp.  I like to keep things very simple.  I play CDs, SACDs and USB drives (downloads) directly from the Marantz (no network or other server).  I find sound quality is significantly better that way.  I listen to a lot of hi-res material - not in an effort to find more frequency extension, but because I find it significantly enhances tone density.  To my mind, the value of hi-res is just this - improvement of the holographic effect.  I find subtle improvements are also possible by decompressing FLAC to WAV, and DST to DSD.  I find these decompression differences to be inaudible via loudspeakers, streaming, and lower resolution headphone systems more generally, but quite worthwhile with my current headphone system.  I'm inclined to think all these factors play a part in my preference for crossfeed - which really does seem to allow more scope to appreciate subtle enhancements.


----------



## 71 dB

jamesjames said:


> I listen to a lot of hi-res material - not in an effort to find more frequency extension, but because I find it significantly enhances *tone density*.  To my mind, the value of hi-res is just this - improvement of the *holographic effect*.


What is "tone density"? What is "holographic effect" ?


----------



## gimmeheadroom

71 dB said:


> What is "tone density"? What is "holographic effect" ?


----------



## 71 dB

gimmeheadroom said:


>


I figured it must be something along these lines - in other words, placebo-based audiophool nonsense that doesn't belong in the Sound Science subforum.


----------



## gimmeheadroom

71 dB said:


> I figured it must be something along these lines, in other words placebo-based audiophool nonsense, that doesn't belong to the Sound Science sub forum.


My post was meant to be humorous rather than as you understood it. I agree the terms are vague, though, especially tonal density. I think "holographic" is acceptable for a setup that has precise/correct instrument placement and accurate soundstage, which reproduces, to a degree, the sense of the venue where the recording was made. For analog this seems easier to produce; in a digital system, an accurate master clock helps a lot.


----------



## 71 dB

gimmeheadroom said:


> My post was meant to be humorous rather than as you understood it. I agree the terms are vague, though, especially tonal density. I think "holographic" is acceptable for a setup that has precise/correct instrument placement and accurate soundstage, which reproduces, to a degree, the sense of the venue where the recording was made. For analog this seems easier to produce; in a digital system, an accurate master clock helps a lot.


Well, I did find your post humorous for sure...

As far as "precise instrument placement and soundstage" goes, 44.1 kHz/16 bit digital audio has about 1000 times more temporal precision than human hearing can detect. There is no way to audibly improve it with a higher sampling rate and/or a bigger bit depth. Furthermore, "precise instrument placement and soundstage" is definitely not easier to achieve with analog sound.


----------



## jamesjames (May 8, 2022)

I guess I'd say my terms were intended to describe impressions, and were therefore impressionistic!  'Holographic' was meant to convey the sense of depth and placement within an apparent performance space.  (This is, after all, just a psychoacoustic effect.)  I find hi-res recordings can make quite a difference here.  'Density' was meant to convey the sense that more information is being conveyed.  I find hi-res recordings can make quite a difference in conveying the nuances of timbre and dynamic shading that make acoustic music seem 'real'.  I have many 16/44 recordings that I consider to be superb.  But I think the best and most successful recordings I have are higher-res.  (Should we consider 16/44 perfect?)  The decompression point is different.  It's simply that more digital processing is required to do that job on the fly, and this can produce noise, even in the best systems.


----------



## bigshot (May 8, 2022)

Just because you have hi-res files that sound good, that doesn't mean they sound good because they're hi-res. It's more likely due to better mastering. There is no reason why hi-res files would be audibly different from the same track bumped down to 16/44.1, and there have been plenty of listening tests to prove it.

See the article "CD is all you need" in my sig file.

Expectation bias and placebo effect often are expressed using vague impressionistic terminology, because the effect is vague and totally dependent on subjective impressions.


----------



## jamesjames

Oh dear ... I can see where this is headed.  I'm not interested in pursuing this - other than to suggest to anyone (still) reading this to download some hi-res files if sufficiently interested, and compare the various rates for yourself.  It's very easy to do.  Come to your own conclusions.  The condescending tone here reminds me of decades past - I'm old enough to remember being told (in no uncertain terms!) that it was foolish to suggest that digital recording might be worth considering; that digital players might be worth considering; that single-bit processing might be worth considering ...  Beware the self-appointed expert!


----------



## bigshot

Welcome to Sound Science! Sorry about your preconceptions.


----------



## jamesjames

Returning to crossfeed, it would be great to hear from anyone out there who has tried the new Burson 3X GT.  I gather it has a variable crossfeed circuit, and provision for a subwoofer!  I've never tried a subwoofer with phones.  Is there anyone out there who has?  I gather there's no low-pass filter with the Burson - simply summed left and right out on RCA.


----------



## 71 dB

jamesjames said:


> I guess I'd say my terms were intended to describe impressions, and were therefore impressionistic!  'Holographic' was meant to convey the sense of depth and placement within an apparent performance space.  (This is, after all, just a psychoacoustic affect.)  I find hi-res recordings can make quite a difference here.  'Density' was meant to convey the sense that more information is being conveyed.  I find hi-res recordings can make quite a difference in conveying the nuances of timbre and dynamic shading that makes acoustic music seem 'real'.  I have many 16/44 recordings that I consider to be superb.  But I think the best and most successful recordings I have are higher-res.  (Should we consider 16/44 perfect?)  The decompression point is different.  It's simply that more digital processing is required to do that job on the fly, and this can produce noise, even in the best systems.


Thank you for explaining what you mean by the terms you are using. However, your thoughts about hi-res are not scientific, but impressionistic. For hi-res to be better in depth and placement it would need audibly better temporal resolution, and one might think it has that because of the higher sampling rate, but the temporal resolution of digital audio at 44.1 kHz is already about 1000 times better than needed. Technically hi-res has more "information", but how much of it is audible? Even children can't hear beyond 20 kHz, and the possibility of a lower noise floor isn't something to be heard either. 44.1 kHz/16 bits is actually quite perfect - not by much, but when you are over the line, you are over the line. This is why you can have "many 16/44 recordings that I consider to be superb", as you say. How good something sounds comes from how it was produced, mixed and mastered. 



jamesjames said:


> Oh dear ... I can see where this is headed.  I'm not interested in pursuing this - other than to suggest to anyone (still) reading this to download some hi-res files if sufficiently interested, and compare the various rates for yourself.  It's very easy to do.  Come to your own conclusions.  The condescending tone here reminds me of decades past - I'm old enough to remember being told (in no uncertain terms!) that it was foolish to suggest that digital recording might be worth considering; that digital players might be worth considering; that single-bit processing might be worth considering ...  Beware the self-appointed expert!


Sorry that you are not interested in pursuing this, because you are in exactly the right place for it. If the 44.1/16 file is a different mastering than the hi-res file, you are not comparing formats, but masterings. How do you compare formats?

1) Select a hi-res file. Make a separate 44.1/16 version of it. Ask a friend to play the two versions in random order so that you don't know which one is playing, and try to guess the version. If you guess wrong about 50% of the time, the formats sound the same. If the hi-res version really gave more audible information and better depth and placement, it would be easy to tell them apart and guess correctly over 90% of the time.

2) Open the two versions above in a sample editor. Invert the 44.1/16 version. Add the two versions together to get their difference. Listen to the difference: it should be too quiet to hear at reasonable listening levels. This should make it easier to believe that the hi-res format doesn't offer anything, unless it is a completely different mastering.
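For anyone who wants to try step 2 numerically rather than in a sample editor, here is a rough sketch using a synthetic 1 kHz tone as a stand-in for a hi-res file (the choice of resampler and the edge trimming are implementation details I've assumed, not part of the procedure above):

```python
import numpy as np
from scipy.signal import resample_poly

fs_hi = 88200
t = np.arange(fs_hi) / fs_hi
hires = 0.5 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz tone as a "hi-res" stand-in

# step 1: bounce down to CD format (44.1 kHz / 16 bit)
cd = resample_poly(hires, 1, 2)              # 88.2 kHz -> 44.1 kHz
cd = np.round(cd * 32767) / 32767            # 16-bit quantisation

# step 2: bring it back to 88.2 kHz so the two versions line up sample-for-sample
cd_back = resample_poly(cd, 2, 1)

# step 3: invert one version and sum -> the residual is all that differs
diff = (hires - cd_back)[2000:-2000]         # trim resampling-filter edge transients
residual_db = 20 * np.log10(np.sqrt(np.mean(diff ** 2)) /
                            np.sqrt(np.mean(hires ** 2)))
print(f"null-test residual: {residual_db:.1f} dB below the signal")
```

With real music the same invert-and-sum comparison applies, with the caveat from the post above: both versions must come from the same mastering, or the residual measures the masterings rather than the formats.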


----------



## jamesjames (May 19, 2022)

To answer my own question, I've just come across some impressions of crossfeed on the Burson on the dedicated Head-Fi 3X GT thread.


----------



## Mink

jamesjames said:


> I listen to classical music through headphones.  I've been listening with crossfeed for years now - I think it's essential.  I've only just stumbled across this thread - it's very interesting.  I agree with an idea that turns up in a number of the posts: that crossfeed really can create a performance space out of the head, to the front of the listener, greatly adding to the sense of realism.  (Otherwise, to my ear, headphone listening produces an unnatural - if fascinating - effect.)  In my view, when done well, crossfeed improves every aspect of playback.



My experience is the complete opposite. I listen to classical music a lot as well, and I find the majority of recordings to be just fine when it comes to giving the suggestion/impression of a soundstage and of being there. And when it comes to older classical recordings, I like the obvious stereo sound; it is not as severely panned between the left and right channels as Beatles recordings, for instance. Whenever I try crossfeed I always get the sensation that the sound is heavily compressed and every sense of stereo depth and width collapses toward mono - most of the sound ends up in front, right in the middle.


----------



## bigshot

I've never had problems with classical music and headphones either. The orchestra is positioned right through the middle of my skull, but there's a good gradation of sound from left to right.


----------



## jamesjames (May 20, 2022)

Mink said:


> My experience is the complete opposite. I listen to classical music a lot as well and I find the majority of recordings to be just fine when it comes to giving the suggestion/impression of a soundstage and being there. And when it comes to older classical music recordings I like the obvious stereo sound, the sound it is not as severely panned between the left and right channel as Beatles recordings for instance. Whenever I try cross feed I always get the sensation the sound gets heavily compressed and every sense of stereo depth and width gets mono, most of the sound just gets in front, right the middle.


Yes, I've heard others say similar things.  I wouldn't suggest there's any right approach to this.  To my ears, a concert or recital is probably always more 'mono' than most recordings suggest through headphones.  That's what I was getting at above - the headphone effect for me is quite fascinating, just not what I hear in the hall.  To my ears, the live performance seems to come from a spot in front of me (there are of course reflections which give some spatial information about the performance space).  For me, crossfeed (when done well) projects the performance space well forward, into the distance.  It does indeed seem to diminish immediate width information, which actually increases the sense of realism for me.  And I would agree this often reduces the apparent immediacy or drama of the recording - the experience becomes more distant, recessed, less immediate.  Low frequencies in particular are apparently less directional.  But I find this sense of distance generally more satisfying, and much less fatiguing - more like how I experience a concert or recital.  Of course, a lot depends on components that can supply sufficient levels of detail to remain convincing.


----------



## bigshot

I've never heard any distance further than a couple of inches in front of my head with headphones, other than from secondary depth cues (i.e. reverb mixed into the recording).

Do you have speakers that you listen to classical music on too? If so, have you ever experienced that kind of soundstage using headphones?


----------



## jamesjames

bigshot said:


> I've never heard any distance further than a couple of inches in front of my head with headphones, other than secondary depth cues (i.e. reverb mixed into the recording)
> 
> Do you have speakers that you listen to classical music on too? If so, have you ever experienced that kind of soundstage using headphones?


I've used (and enjoyed) a wide range of loudspeakers over the years - although over the last few years it's been headphones.  I guess my preferred loudspeakers have been bi- or di-poles - operating at some distance from the front boundary, as these generally seemed to me to recreate the best sense of performance-space depth.  I've also had some experience in recording studios, and a range of mic techniques and recording monitors (including phones).  (I've had some experience playing, and so also have some sense of the different perspectives of stage/audience.)  On the whole, I find well-executed crossfeed on good phones more convincing than loudspeakers - which is one reason I now use phones exclusively (together with the better sense of detail retrieval from hi-res sources).  I really do find now on the best recordings the level of playback realism to be way in advance of that I would have imagined possible twenty years ago.


----------



## bigshot (May 20, 2022)

Wow! Dipole *and* Bipole? Those are two completely different things. What kind of speaker layout and seating arrangement do you use with those? Are the speakers mounted on the rear wall, or the side walls? How do you avoid the big null spot? Are you talking about a multichannel speaker system or a stereo one? I am VERY interested to hear your theories on speakers, because I've never encountered someone who used both bipoles and dipoles in a home stereo setting. Is there any sense of left and right, or is it all "sound in a blender"? Do you walk around the room in and out of various null spots?  That must be one crazy sounding speaker system!


----------



## jamesjames

bigshot said:


> Dipole and Bipole? Those are two completely different things. What kind of speaker layout and seating arrangement do you use with those? Are the speakers mounted on the rear wall, or the side walls? How do you avoid the big null spot? Are you talking about a multichannel speaker system or a stereo one? I am VERY interested to hear your theories on speakers, because I've never encountered someone who used both bipoles and dipoles in a home stereo setting. That must be one crazy sounding speaker system! Is there any sense of left and right, or is it all "sound in a blender"?


You seem to have missed the point.  An example of a di-pole speaker I have used is the Quad.  An example of a bi-pole speaker is the Mirage (no longer available).
Of course, I didn't use them at the same time...
Studio monitors are typically more conventional.
I hope that helps.


----------



## bigshot (May 20, 2022)

Can you please doodle up a diagram of the room you use those speakers in- where you have them placed and where the listening position is in relation to the speakers? This was your living room?

Is the goal to create some sort of phasey equivalent of mono that sounds different depending on where you sit?


----------



## jamesjames

I've listened to all manner of speakers in all manner of rooms - you might find it more helpful to read one of the many books on speaker placement if you're interested.  My only observation would be that rooms are always a problem - which can be avoided with headphones!


----------



## bigshot (May 21, 2022)

I'm interested to see how you implement dipole and bipole speakers yourself. Your description sounds nothing like any installation I've ever heard about or read about in books. You said you had a speaker system several years ago and several before that going back for years. Could you diagram a few of those systems for me please? Particularly the ones with dipole and bipole speakers, since you said those worked the best for you. I'd like to see how you used these specialty designs in your living room. I didn't think it was possible, so I'd like to see what you did that worked so well for you. Thanks!


----------



## jamesjames

bigshot said:


> I'm interested to see how you implement dipole and bipole speakers yourself. Your description sounds nothing like any installation I've ever heard about or read about in books. You said you had a speaker system several years ago and several before that going back for years. Could you diagram a few of those systems for me please? Particularly the ones with dipole and bipole speakers, since you said those worked the best for you. I'd like to see how you used these specialty designs in your living room. I didn't think it was possible, so I'd like to see what you did that worked so well for you. Thanks!


Sorry if I wasn't clear enough above: I've had both di-pole and bi-pole (and omni-pole and conventional) speakers, but I've only ever used one system at a time.  My use of speakers has been quite standard.  In the case of the Quad electrostatics (di-pole), they were at least 1 m from the front boundary and at least 1 m from the side boundaries.  The listening position was around the usual position.  In the case of the Mirage speakers (bi-pole, i.e. front- and rear-firing in phase), the positioning was generally the same.  The thing I like about these kinds of loudspeakers is that they seem to push the performance space well away from the listener, well beyond the line of the speakers.  I found this psychoacoustic effect quite striking.  But I find I can get an even more convincing distance effect now using headphones with crossfeed.


----------



## bigshot (May 21, 2022)

I'm interested in hearing what Gregorio has to say about that. I've never heard of dipole or bipole speakers used as mains in a normal triangle with the speakers and listening position. They usually are side or rear speakers. I've never used these myself, but from my understanding, if you used dipoles like you're describing, you would have a nice big phase cancellation null right on the listening position. If you used bipoles like that, you would essentially have mono. Someone else probably knows more about this than I do, I'm interested to hear what they have to say if they do.

Did you really have a speaker system like this, or are you just saying you did? Because I'm actually curious what it would sound like. I can't imagine it sounding good. I've never heard of anyone breaking all the rules like this in a speaker system. Directionality is important. You don't want to mess with that. If you do, just push the mono button and you won't have all that phase cancellation.


----------



## jamesjames

bigshot said:


> I'm interested in hearing what Gregorio has to say about that. I've never heard of dipole or bipole speakers used as mains in a normal triangle with the speakers and listening position. They usually are side or rear speakers. I've never used these myself, but from my understanding, if you used dipoles like you're describing, you would have a nice big phase cancellation null right on the listening position. If you used bipoles like that, you would essentially have mono. Someone else probably knows more about this than I do, I'm interested to hear what they have to say if they do.
> 
> Did you really have a speaker system like this, or are you just saying you did? Because I'm actually curious what it would sound like. I can't imagine it sounding good. I've never heard of anyone breaking all the rules like this in a speaker system. Directionality is important. You don't want to mess with that. If you do, just push the mono button and you won't have all that phase cancellation.


I can't quite believe your last post!  Anyway, my last comment is simply that I have indeed owned Quad electrostatic speakers and Mirage bipolar speakers (like thousands of other people) and listened to them in the bog standard fashion.  I liked them for the reasons I've outlined.  End of story.


----------



## sander99 (May 21, 2022)

bigshot said:


> I've never heard of dipole or bipole speakers used as mains in a normal triangle with the speakers and listening position. They usually are side or rear speakers.


You seem to be thinking of typical home theater applications, with typical home theater dipole and bipole speakers, for example with 2 tweeters (often placed such that the "null" is indeed aimed at the listening position, to get less direct and more indirect sound from those speakers).
With regard to dipoles, @jamesjames is talking about electrostatic speakers like Quad and MartinLogan, or magnetostatic speakers like Magnepan. Big open planars without an enclosure, hence dipoles by nature (because the back side of the driver fires backwards out of phase). They are not placed such that the "null" aims at the listening position. Because of the large surface of the drivers and the "nulls" to the sides, they mainly fire forwards and backwards, creating fewer reflections from the side walls, floor and ceiling, but more from the wall behind the speakers.
These kinds of speakers are a bit rare of course, but they were used in stereo setups long before home theater really was a thing.
Bipoles just create more reflections from the wall behind them compared to normal front-firing-only speakers.


----------



## bigshot (May 21, 2022)

Ah! So they aren't typical dipoles or bipoles. What is the effect of more reflections off the front wall? Does that create coupling, or does the out-of-phase info cancel that? I would imagine speakers like that do best in the middle of the floor of an open room where the back side can't reflect off the front wall. I had a friend with those Magnepans once. He had to divorce his wife to clear out the living room so his speakers worked properly! They sounded OK but the bass was very thin. Bass is the one thing that sounds full in headphones and isn't anchored between the ears.

Which are the ones that fire in phase front and back? I remember back in the 70s, there was a fad to put rear-facing tweeters in studio monitors. I have a set that has that. I turned them off because it messed with the high frequencies. I tend to prefer higher frequencies directional, and lower ones not as directional. I like non-directionality and wide dispersion in the rear, where there is no center channel to bridge the gap, and horn-loaded in the center channel up front to anchor the vocals naturally. That sounds more like the singer is in the room with me. It seems to me that if you are going to fill in, it's better to do that on the sides and rear and not mess with the precise imaging of the soundstage. Spread-out sound would ruin a Culshaw opera.


----------



## Mink

I have always been interested in those Mirage speakers. The convenience is that they don't have/need a sweet spot to create a believable soundstage, but rather sound the same no matter what listening position you're in. The downside would be less precise imaging.
How do they compare to the Bose 901s, which are kinda similar in their approach, although technically very different? Maybe the difference is that the Bose have a large sweet spot and the Mirages none?

http://www.audiopolitan.com/blog/mirage-speakers-the-omnipolar-sound-approach/


----------



## sander99

Before @bigshot gets confused:
In the past Mirage used to make speakers with drivers on front and back like this:






And nowadays they make speakers with the "reflecting cones" to achieve 360-degree dispersion:



The goal is the same: all around dispersion.


----------



## bigshot (May 21, 2022)

Precise imaging is what is needed to accurately reproduce soundstage, no?

I have KEF speakers in the rear with a radial design that may be similar to reflecting cones. They’re good for rear channels, but I wouldn’t use them for mains myself. I like a precise layout from left to right.

I guess some people like widely dispersed sound. Maybe they don’t like a precise soundstage.

Do you have to set the Mirage speakers out freestanding in the middle of a large room too? They sure would dominate the living room!


----------



## hakunamakaka

For an accurate soundstage, notes have to blend with each other. Listen to certain multi-driver IEMs with crossovers: even with razor-sharp precision they are unable to reproduce accurate staging, and music can sound artificial and detached


----------



## jamesjames

Mink said:


> I have always been interested in those Mirage speakers. The convenience is that they don't have/need a sweet spot to create a believable soundstage, but rather sound the same no matter what listening position you're in. The downside would be less precise imaging.
> How do they compare to the Bose 901s, which are kinda similar in their approach, although technically very different? Maybe the difference is that the Bose have a large sweet spot and the Mirages none?
> 
> http://www.audiopolitan.com/blog/mirage-speakers-the-omnipolar-sound-approach/


The Mirage speakers were my personal favourites.  I owned several versions, but found the passive 7si to work best in a standard room.  They are/were _*very *_inefficient - requiring lots of power - I bi-amped two Marantz PM11 integrated amps, which I found to be enough (just).  As you say, they can deliver a believable soundstage from a range of listening positions.  They can also deliver prodigious bottom end power - I guess in the case of the 7si down to around 35Hz.  Unlike some other models, they weren't symmetrical (more speakers firing forward).  But I liked all the Mirage speakers, including their active hybrids and omnis.  The Quads are striking - like Stax headphones - but not really convincing for me - and they certainly don't shake the room with bass!  I'm afraid I find all planars a little synthetic in character - something about dynamic shading I think.  I prefer dynamic speakers (loudspeakers and headphones).  The Bose (which I also owned many years ago) were less sophisticated in all respects - a bit crude in comparison.  But they certainly delivered something of the realism of 'reflective' speakers - I actually had the integrated amp made for the 901s, with 'wide' and 'narrow' settings.  I've listened at length to the current Duevel omni speakers, and like them too.  I should say I've never liked front firing box speakers.  I think they are never convincing - always artificial and often glaring.  And, generally, I find the compromises necessary for headphones preferable to those of loudspeakers - and I prefer them as recording monitors.


----------



## bigshot (May 21, 2022)

The sound system in my listening room doubles with a projection video screen. It's 5.1 with about a 14-15 foot spread from left to right. The front soundstage is very precise and defined, and it is perfectly scaled to the screen, so when a character crosses the screen, the sound follows the image perfectly. Multichannel music takes all kinds of approaches to soundstage, from a fixed front soundstage with reverb behind, to an immersive sound field where you are sitting in the middle of the soundstage. You can't really make any generalization about that. For stereo, I often use a DSP called Yamaha Stereo to 7.1. It spreads the sound of stereo music all around the room. That is probably the equivalent of crossfeed for speakers, although I wouldn't say it increases depth at all... just fills the room with sound without messing up the front soundstage too much.

I haven't found any way of listening to headphones that comes close to the spatial depth of multichannel speakers. Not even close. And even stereo with speakers is capable of better depth than headphones. But that depends mostly on the mix.


----------



## jamesjames

I remember finding the advent of multi-channel arrangements and multi-channel SACDs very impressive.  Unlike the older reflected sound approach (_a la_ omni speakers), which almost ignores the recording, and relies on harnessing in-room delay, multi-channel relies on the recording, and tries to reproduce accurately a recorded facsimile of the sound event (complete with reflected sound from the recording venue).  And I can't really explain why I find di/bi/omni polar speakers more satisfying than multi-channel.  I find it even harder to explain why I prefer headphones, given that reflected sound is arguably less relevant here.  All I can say is that the relatively low-tech strategy of seeming to push the performance space away from the listener seems to be disproportionately important in allowing some blending of sound that I find captures the feeling of a live performance.  And I find crossfeed on the right phones the most convincing effort yet to achieve this - or perhaps I should say the least problematic ...


----------



## bigshot

Distance is directly related to scale. Nearfield speakers can present a miniature soundstage, but that isn’t natural. In my listening room I spent a lot of time experimenting to get the soundstage to a human scale. Bands sound like they’re standing in front of my fireplace arrayed out from one side of the room to the other, like a small club. Orchestras can sound like hearing them from a good seat in the 15th row. That depends a lot on secondary depth cues though. Headphones give me no sense of distance or scale. It’s all inside or around my head. That can sound nice, but it isn’t realistic.


----------



## jamesjames

bigshot said:


> Distance is directly related to scale. Nearfield speakers can present a miniature soundstage, but that isn’t natural. In my listening room I spent a lot of time experimenting to get the soundstage to a human scale. Bands sound like they’re standing in front of my fireplace arrayed out from one side of the room to the other, like a small club. Orchestras can sound like hearing them from a good seat in the 15th row. That depends a lot on secondary depth cues though. Headphones give me no sense of distance or scale. It’s all inside or around my head. That can sound nice, but it isn’t realistic.


... perhaps you should try better crossfeed ...


----------



## bigshot (May 21, 2022)

The two things that determine scale are physical distance and height. You don’t get either of those things with headphones unless you’re operating some complex processing to simulate height or distance. Just fuzzing everything up into a diffuse field isn’t the same.

I’ve tried Apple’s Spatial Audio and Dolby’s virtual surround too, and they just make everything sound like it’s indistinctly distant. Not pinpoint.


----------



## jamesjames

bigshot said:


> The two things that determine scale are physical distance and height. You don’t get either of those things with headphones unless you’re operating some complex processing to simulate height or distance. Just fuzzing everything up into a diffuse field isn’t the same.
> 
> I’ve tried Apple’s Spatial Audio and Dolby’s virtual surround too, and they just make everything sound like it’s indistinctly distant. Not pinpoint.


Seriously, I'm not trying to convince anyone of anything.  I'm just trying to explain my own experience of crossfeed, in a way that might be interesting to others if they are interested in this topic.  This is a thread about crossfeed, after all.  It's abundantly clear that you're not a big fan.  I get that.  What I don't get is why you find it necessary to respond to every post as if this were some sort of debate.  You don't need to read my posts, or respond to them.


----------



## bigshot

The two poles are crossfeed blending on one side, and channel separation. When you lean towards the crossfeed end, the soundstage suffers. When you lean towards the channel separation side, it can sound like a dog's breakfast because most commercial music isn't mixed to be listened to in headphones. It's monitored on speakers in a typical triangulation.

One of the reasons you really liked those two types of speakers is their height. Height and distance create scale. Bookshelf or stereo box speakers don't have that. That's why I brought up scale. Of course neither distance nor scale apply to headphone listening because that is all in your head.


----------



## jamesjames (May 22, 2022)

bigshot said:


> The two poles are crossfeed blending on one side, and channel separation. When you lean towards the crossfeed end, the soundstage suffers. When you lean towards the channel separation side, it can sound like a dog's breakfast because most commercial music isn't mixed to be listened to in headphones. It's monitored on speakers in a typical triangulation.
> 
> One of the reasons you really liked those two types of speakers is their height. Height and distance create scale. Bookshelf or stereo box speakers don't have that. That's why I brought up scale. Of course neither distance nor scale apply to headphone listening because that is all in your head.


Understood.  Many thanks.


----------



## 71 dB

bigshot said:


> The two things that determine scale are physical distance and height. You don’t get either of those things with headphones unless you’re operating some complex processing to simulate height or distance. Just fuzzing everything up into a diffuse field isn’t the same.
> 
> I’ve tried Apple’s Spatial Audio and Dolby’s virtual surround too, and they just make everything sound like it’s indistinctly distant. Not pinpoint.


How do the ears know how far the sound came from and how large the sound source was? Spatial cues! If you can generate such spatial cues well enough, spatial hearing can be fooled. Normal crossfeed is a very simplistic way of mimicking those spatial cues, but it is _something_, and for me it shifts the spatiality a bit toward speaker spatiality. We don't really experience large ILDs at low frequencies* in our everyday lives, and that's why at least some people, including me, find the large ILDs headphones often generate unnatural and annoying. Crossfeed fixes this problem for me.

* Headphones are pretty much the ONLY method to generate large ILDs at low frequencies. Normally low frequencies arrive at both ears with an ILD of no more than a few decibels. Spatial hearing expects the ILD to be very small at low frequencies. Binaural recordings can be analysed to see how much ILD, ITD and ISD there is in normal listening to environmental sounds. At low frequencies the sound is quite monophonic. Only at higher frequencies do we have significant ILD values. There is a sweet spot for ILD that is a function of frequency. Going below this sweet spot makes the sound mono-like and going above it makes the sound super-stereo-like. I want to do my listening at the sweet spot. With speakers it happens "automatically", because room acoustics regulate the spatiality. With headphones I need crossfeed to limit ILD, because there is no room.
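For anyone curious what "limiting ILD" looks like in practice, the classic cross-feeder can be sketched in a few lines: each output channel is the direct signal plus a delayed, lowpass-filtered, attenuated copy of the opposite channel. The parameter values below (0.5 ms delay, 700 Hz cutoff, 12 dB attenuation) are just illustrative ballpark figures in the range common crossfeed plugins use, not any particular product's settings:

```python
import math

def crossfeed(left, right, fs=44100, delay_ms=0.5, cutoff_hz=700.0, atten_db=-12.0):
    """Add a delayed, lowpass-filtered, attenuated copy of the opposite
    channel to each channel, limiting low-frequency ILD."""
    delay = max(1, int(fs * delay_ms / 1000))         # interaural delay in samples
    gain = 10 ** (atten_db / 20)                      # attenuation as a linear gain
    a = math.exp(-2 * math.pi * cutoff_hz / fs)       # one-pole lowpass coefficient

    def branch(ch):
        out, state = [], 0.0
        for x in ch:
            state = (1 - a) * x + a * state           # first-order lowpass
            out.append(gain * state)
        return [0.0] * delay + out[:len(ch) - delay]  # delay, keep original length

    xl, xr = branch(left), branch(right)
    out_l = [d + c for d, c in zip(left, xr)]         # direct + processed opposite
    out_r = [d + c for d, c in zip(right, xl)]
    return out_l, out_r
```

Feed it a hard-panned signal and the "silent" ear now gets a quiet, dull, slightly late copy of the other channel, so the low-frequency ILD shrinks toward what spatial hearing expects.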


----------



## castleofargh

jamesjames said:


> Seriously, I'm not trying to convince anyone of anything.  I'm just trying to explain my own experience of crossfeed, in a way that might be interesting to others if they are interested in this topic.  This is a thread about crossfeed, after all.  It's abundantly clear that you're not a big fan.  I get that.  What I don't get is why you find it necessary to respond to every post as if this were some sort of debate.  You don't need to read my posts, or respond to them.


We went through this at length with @71 dB and if there is one thing to take away, it’s how different someone’s experience can be compared to someone else’s.
Back then I was already sold on that fact but only because of objective reasons, like how the right delay should probably match the width of your own skull,  and how the shape of your ears would surely impact the interaural deviation in frequency response.
But all that was merely suggesting a need for a more custom crossfeed(or if possible some actual HRIR captures from speakers at our ears for around 30° directions on both sides). And I now know that was still an oversimplified approach that ignored the individuality of human brains too much.
Some people(small percentage) never feel like the center image is outside their head when using headphones, no matter how good the crossfeed or other virtualization dsp is. 
Also, people are more or less influenced by what they see(the room they’re in, speakers in sight...). It’s something obvious and only a purebred ”I know what I heard” audiophile ignoramus would proudly reject the reality of visual biases.
I’ve known that for a long time, but I sure underestimated how large the range of interpretations for each listener could be. And that’s me saying this, the guy who can’t make a post without bringing human bias up. Anyway, here is at least one compelling argument on why it’s completely normal to expect different experiences based on sight and the room not matching what you hear: https://www.nature.com/articles/srep37342

My own case goes one step further, I need head tracking(and a custom one at that), otherwise anytime I move my head, imaging completely collapses. I’ve talked to a few people who don’t experience this at all(to the point where some have head tracking for their headphones and don’t use it).

I would almost always favor crossfeed over default headphone ”stereo”, so at least in that respect, I’m not like @bigshot. Even though the impact for me is mostly reduced fatigue instead of distance or proper panning. In fact over time my brain brings back full left sounds at 90° and I need to turn the crossfeed on and off to get back a temporary 30-something degree impression. I also know it’s not the case for everyone.
We’re truly different people living in different subjective realities and experiencing the same stuff in widely different ways. 
To get back to the link, I found that I can improve my crossfeed experience(and any other speaker simulation or surround dsp) with a pair of speakers(turned off, duh!  ) on my desk in front of me. My brain does whatever it wants with imaging, but tends to want to anchor just about all sounds to those speakers it sees. Removing the speakers significantly degrades all my anti lateralization efforts. While I expect yet again to see variations between people, I’m quite optimistic that this can still help most listeners to some degree.


----------



## jamesjames

castleofargh said:


> We went through this at length with @71 dB and if there is one thing to take away, it’s how different someone’s experience can be compared to someone else’s.
> Back then I was already sold on that fact but only because of objective reasons, like how the right delay should probably match the width of your own skull,  and how the shape of your ears would surely impact the interaural deviation in frequency response.
> But all that was merely suggesting a need for a more custom crossfeed(or if possible some actual HRIR captures from speakers at our ears for around 30° directions on both sides). And I now know that was still an oversimplified approach that ignored the individuality of human brains too much.
> Some people(small percentage) never feel like the center image is outside their head when using headphones, no matter how good the crossfeed or other virtualization dsp is.
> ...


Well, I must confess, I've never encountered head tracking in practice.  How does it work?


----------



## bigshot (May 22, 2022)

You can discern size of a sound source by moving your head.

One thing Castle… I don’t prefer straight headphone output to crossfeed. I prefer speakers to headphones. I find any long-term listening with headphones to be fatiguing… and for more reasons than crossfeed can fix.


----------



## 71 dB

For me the miniature soundstage of headphones is not part of the reality around me, but kind of a "sonic hologram" that moves and turns with my head. I listen to headphones nowadays so much, that for me the changes in sound when I move my head while listening to speakers feels strange. If I listen to bass-heavy music (say techno) on speakers and walk around in the room, I even get "shocked/surprised" when my head gets inside the maximum of a room mode. With headphones this just doesn't happen. The sound stays the same no matter how I move my head. I like that stability. It is like being in euphoric dream-like state.


----------



## castleofargh

jamesjames said:


> Well, I must confess, I've never encountered head tracking in practice.  How does it work?


Basic solutions like the Audeze Mobius, the little Waves Nx sensor you mount on the headphone, or Apple’s solution rely on a model HRTF (probably some dummy head recording impulses in various directions, or some rather limited database of HRTFs). So when you move to the right, the angle is constantly calculated and the audio is convolved with the impulse associated with the look angle at that moment (the lag is only felt when it’s really big, maybe near 100 ms, and you move your head rapidly to make things more obvious; in general, lag isn’t felt).
Just like with the speakers in sight, the purpose is to add yet another cue to trick the brain. With sound changes following a turn of the head, we’re supposed to get an extra tool to triangulate the position of the sound source. And we also break free from the sound turning with the head, so the brain is less likely to collapse the imaging after thinking the audio is attached to the head (and the headphone).
Like with crossfeed, there is a lot of genetic luck involved when using a preset solution. They try to fine-tune a little by getting the size of the head (or distance between the ears), but it’s far from actually having the audio recorded at your own ears like the stupidly expensive Realiser A16 does.
In the end, some people will have a head close to the model simulated and will be in audio heaven, while the rest will probably feel weird enough to give up on using the tracking/processing.
That stuff really belongs to the ”just listen” category. Nobody can predict with certainty how you’ll feel. Just like how, out of 10 people, it’s extremely rare to have them all favor the same crossfeed setting.
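The core loop of such a head tracker - recompute the look angle, grab the impulse response measured nearest to it, and convolve - can be sketched like this. The three-entry "database" and its toy impulse responses are purely illustrative assumptions; real systems use dense measured HRIR sets and interpolate between them:

```python
def convolve(signal, ir):
    """Plain FIR convolution, truncated to the signal length."""
    out = [0.0] * len(signal)
    for i, x in enumerate(signal):
        for j, h in enumerate(ir):
            if i + j < len(out):
                out[i + j] += x * h
    return out

def render(signal, source_az, head_yaw, hrir_db):
    """Filter each ear with the HRIR pair measured nearest to the
    source direction *relative to the head* (head tracking)."""
    relative = (source_az - head_yaw) % 360   # turning the head shifts the angle
    nearest = min(hrir_db, key=lambda a: min(abs(a - relative),
                                             360 - abs(a - relative)))
    ir_left, ir_right = hrir_db[nearest]
    return convolve(signal, ir_left), convolve(signal, ir_right)

# Toy HRIR "database": 0 dead ahead, 30 to the right, 330 to the left.
hrir_db = {
    0:   ([1.0], [1.0]),          # straight ahead: both ears identical
    30:  ([0.0, 0.6], [1.0]),     # right side: left ear later and quieter
    330: ([1.0], [0.0, 0.6]),     # left side: right ear later and quieter
}
```

With the head at 0° a source at 30° reaches the right ear first and louder; turn the head 30° toward it and both ears get the same signal again - the kind of cue that stops the image from being glued to the head.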


----------



## jamesjames

castleofargh said:


> Basic solutions like the Audeze Mobius, the little Waves Nx sensor you mount on the headphone, or Apple’s solution rely on a model HRTF (probably some dummy head recording impulses in various directions, or some rather limited database of HRTFs). So when you move to the right, the angle is constantly calculated and the audio is convolved with the impulse associated with the look angle at that moment (the lag is only felt when it’s really big, maybe near 100 ms, and you move your head rapidly to make things more obvious; in general, lag isn’t felt).
> Just like with the speakers in sight, the purpose is to add yet another cue to trick the brain. With sound changes following a turn of the head, we’re supposed to get an extra tool to triangulate the position of the sound source. And we also break free from the sound turning with the head, so the brain is less likely to collapse the imaging after thinking the audio is attached to the head (and the headphone).
> Like with crossfeed, there is a lot of genetic luck involved when using a preset solution. They try to fine-tune a little by getting the size of the head (or distance between the ears), but it’s far from actually having the audio recorded at your own ears like the stupidly expensive Realiser A16 does.
> In the end, some people will have a head close to the model simulated and will be in audio heaven, while the rest will probably feel weird enough to give up on using the tracking/processing.
> That stuff really belongs to the ”just listen” category. Nobody can predict with certainty how you’ll feel. Just like how, out of 10 people, it’s extremely rare to have them all favor the same crossfeed setting.


I see, thanks.  Do you know, is this something that's popular with gamers?


----------



## bigshot

I've used Apple's Spatial Audio. The interesting thing is that the head tracking is more focused on the center channel than the mains. This tends to make the vocals and drums spatial, while the rest stays left and right. If you get up and walk around with head tracking on, it will peg to the left or right and recenter every couple of minutes. It isn't very sophisticated. I don't think it's even in the same universe as a Smyth Realiser.


----------



## castleofargh

jamesjames said:


> I see, thanks.  Do you know, is this something that's popular with gamers?


There have been tracking solutions for gamers for a while, even before 3D goggles, but afaik they were focused on video only. Stuff using a webcam with, say, a flight simulator so you could look around in your cockpit. With 3D goggles they have no choice but to somehow track audio, or it feels silly. For ”real gamers”, idk where we are. I would assume that any delay over 40 ms for processing would have many refusing the tech altogether. But the Audeze Mobius I mentioned is an option for head tracking (not so personalized though).


----------



## Rob the Comic

castleofargh said:


> We went through this at length with @71 dB and if there is one thing to take away, it’s how different someone’s experience can be compared to someone else’s.
> Back then I was already sold on that fact but only because of objective reasons, like how the right delay should probably match the width of your own skull,  and how the shape of your ears would surely impact the interaural deviation in frequency response.
> But all that was merely suggesting a need for a more custom crossfeed(or if possible some actual HRIR captures from speakers at our ears for around 30° directions on both sides). And I now know that was still an oversimplified approach that ignored the individuality of human brains too much.
> Some people(small percentage) never feel like the center image is outside their head when using headphones, no matter how good the crossfeed or other virtualization dsp is.
> ...


That is what I find too, and the most favourable result - less fatigue. I run a Schiit Yggdrasil into the Phonitor XE and a variety of headphones. Even with HD800s and full crossfeed angle, it’s not an ‘oh wow!’ thing for me, but it is less fatiguing.


----------



## 71 dB

Rob the Comic said:


> That is what I find too, and the most favourable result - less fatigue. I run a Schiit Yggdrasil into the Phonitor XE and a variety of headphones. *Even with HD800s and full crossfeed angle, it’s not an ‘oh wow!’* *thing for me*, but it is less fatiguing.


People are so demanding and hard to impress these days. It just shows how, above a certain level, _more_ doesn't mean increased happiness or contentment. I realized this pretty early on in my life and that is why I don't use "expensive" audio gear. I wouldn't be happier, just poorer. I wouldn't be any happier listening to the HD 800 compared to the HD 598 (maybe for two weeks I would, but then the _novelty_ would wear off), so why would I want to spend many times more money? The HD 598 has never failed me. I feel I hear everything well enough. What fails me is the spatiality of recordings (mixed for speakers) on headphones, and that is something I really want to fix (hence crossfeed). It is the fatiguing effect. It is the unnaturalness of super-stereo and how it harms the sound for me.

If you learn to be satisfied with less, more things in life will give you that 'oh wow' feel. Greed makes this difficult, and it is the ultimate reason why you don't have so many 'oh wow' moments in life.


----------



## bigshot

The best way to fix the spatiality of music mixed for speakers is to play it on speakers. That’ll get you your wow.


----------



## lsantista

what crossfeed solutions have you successfully used with e-stats?


----------



## 71 dB

"Mixed for speakers" is not defined very precisely or scientifically. Mostly this is because speakers + room help shaping a reasonable soundstage. Ping-Pong stereo can be "mixed for speakers" as well as recordings made by whatever microphone set-up (AB, OSS, ORTF, Blumlein, XY, MS,...) All "mixed for speakers" really mean is the mixing was mostly or completely done using studio monitor speakers. More often than not this way of mixing leads to spatiality that can be considered excessive with headphone given the science of human spatial hearing. How this excessive nature of the spatiality is experienced seems to vary among listeners to the effect that some headphone listeners want to "scale" the spatiality down to something more like the science of human hearing suggests. If the exact method of creating the spatial information in recordings "mixed for speakers" varies among mixers/producers and even with time period, a common feature of recordings "mixed for speakers" is that they are super-stereo for headphones, althou there are some recordings "mixed for speakers" that happen to be spatially "compatible" as they are also for headphones. Jecklin Disk is an example of a method to create this kind of recordings "mixed for speakers AND headphones." So, mixing and producing a recording as if it was recorded with Jecklin Disk should result in spatiality suitable for both speakers and headphones. This doesn't mean to simulate a Jecklin Disk 100 %, but to use similar philosophy about how for example ILD and ITD behave.


----------



## jamesjames

Rob the Comic said:


> That is what I find too, and the most favourable result - less fatigue. I run a Schiit Yggdrasil into the Phonitor XE and a variety of headphones. Even with HD800s and full crossfeed angle, it’s not an ‘oh wow!’ thing for me, but it is less fatiguing.


Yes, that's generally how I see it too - although I must say that with small-scale chamber music it sometimes does produce a remarkable sense of a performance space at some distance from me.


----------



## jamesjames

lsantista said:


> what crossfeed solutions have you successfully used with e-stats?


With my SR-009S I got good results combining the iFi Pro iESL energiser with either my SPL Phonitor xe or Moon 430HA (both of which have switchable analogue crossfeed circuits).  I found the iESL with the iFi Pro iCAN was also good, but not quite as good as the Phonitor or Moon.  In all cases, I used the amp balanced headphone out to the iESL balanced input (Mogami cable).  Even w/o crossfeed, I found the iESL with a good conventional amp was better than any of the dedicated stat drivers I've owned or used (and I've tried many).


----------



## sonitus mirus

bigshot said:


> The best way to fix the spatiality of music mixed for speakers is to play it on speakers. That’ll get you your wow.


I agree about the "wow" factor with speakers. Headphones simply cannot provide the experience I enjoy the most.


----------



## lostrockets

Can anyone recommend a resource for making a simple DIY crossfeed adapter, or, even better, one that's already built? Budget: under $200.


----------



## 71 dB

lostrockets said:


> Can anyone recommend a resource to make a simple diy crossfeed adapter


The set-up you are using dictates what kind of cross-feeders can be added.


----------



## bigshot

sonitus mirus said:


> I agree about the "wow" factor with speakers. Headphones simply cannot provide the experience I enjoy the most.


And it isn't just channel separation. Physical bass and dynamics in sound inhabiting real space are exciting. It's a sound you can feel, not just hear.


----------



## gregorio

71 dB said:


> Jecklin Disk is an example of a method to create this kind of recordings "mixed for speakers AND headphones." So, mixing and producing a recording as if it was recorded with Jecklin Disk should result in spatiality suitable for both speakers and headphones.


And yet pretty much no one ever uses only a Jecklin Disk to make commercial recordings. In fact any use of a Jecklin Disk is very rare. You think that’s because no music/sound engineers, past or present, know what they’re doing? Or maybe you’ve just got it wrong in an attempt to explain your personal perception?

G


----------



## bigshot

I think I'd rather have it sound really really good one way than have it sound kinda good both ways. To be honest, I don't find that I really need crossfeed, because classical music and jazz generally are recorded with a fairly normal acoustic (even though it might be completely fabricated in the mix); and goofy Pink Floyd / ping pong artificial acoustics are supposed to sound artificial. I get listening fatigue from listening to headphones too long, but it has nothing to do with channel separation. Having little speakers strapped to my ears is tiring no matter what. I can sleep with speakers on, but not headphones.


----------



## 71 dB (Jun 5, 2022)

gregorio said:


> And yet pretty much no one ever uses only a Jecklin Disk to make commercial recordings.


True. I have never claimed they do.



gregorio said:


> In fact any use of a Jecklin Disk is very rare.


True. I did not suggest using only a Jecklin Disk. I suggested using the spatial philosophy of the Jecklin Disk, meaning shaping the spatiality in a DAW to mimic a Jecklin Disk.



gregorio said:


> You think that’s because no music/sound engineers, past or present, know what they’re doing?


I think spatial compatibility with headphones has not been at the top of the list. For example, when recording an orchestra, a sound engineer may feel that things such as the balance of instruments, reverberation, etc. override the headphone compatibility of the spatiality.



gregorio said:


> Or maybe, just you’ve got it wrong in an attempt to explain your personal perception?
> 
> G


That is of course possible, but then again I am not the only person in the world using cross-feed. Maybe all of us cross-feeders are just wrong about everything? Maybe it is impossible for sound engineers like you to be wrong?


----------



## bigshot (Jun 5, 2022)

You can use any kind of signal processing you want. It's fine with me. Some people like reverb. Others like v shaped EQ curves. Others like phase tricks. It's all fine. It's your ears, you can feed them any way you want.

But signal processing isn't fidelity. It doesn't return some attribute to sound and it doesn't restore it to being natural. It just adds a filter on top of it. If you like the sound of that filter, great. Some people put ketchup on everything and that is fine.


----------



## 71 dB

bigshot said:


> And it isn't just channel separation. Physical bass and dynamics in sound inhabiting real space are exciting. It's a sound you can feel, not just hear.


Occasionally I hear bass thumping from the neighbouring flat. I think it is hip hop. I find it annoying. If my neighbour used headphones, it would not be annoying. I wonder how much it annoys my neighbours when I play music from speakers...


----------



## 71 dB

bigshot said:


> I think I'd rather have it sound really really good one way than have it sound kinda good both ways.


Well, not all recordings have good spatiality even on speakers. A ping-pongy recording isn't that great even from speakers. Also, I think it is possible to do better than "kinda good" both ways using a Jecklin Disk-type spatial philosophy. Speaker spatiality is more "flexible" because the room does so much. This explains why there are so many types of microphone set-ups for speaker spatiality. There are so many ways to do it well for speakers, so why not choose those that work well with headphones too?



bigshot said:


> To be honest, I don't find that I really need crossfeed, because classical music and jazz generally are recorded with a fairly normal acoustic (even though it might be completely fabricated in the mix);


I almost always use cross-feed with classical music, but often just a little bit is enough. With solo piano music the cross-feed level is very critical. Jazz (new enough to be stereophonic) REALLY needs cross-feed, because the spatiality is often really brutal for headphones, so much so that in the most severe cases making the sound mono is the only way to make it suitable for my ears.



bigshot said:


> I get listening fatigue from listening to headphones too long, but it has nothing to do with channel separation. Having little speakers strapped to my ears is tiring no matter what. I can sleep with speakers on, but not headphones.


I feel comfortable enough to wear my HD 598 for hours, and I sleep in silence.


----------



## 71 dB (Jun 5, 2022)

bigshot said:


> You can use any kind of signal processing you want. It's fine with me. Some people like reverb. Others like v shaped EQ curves. Others like phase tricks. It's all fine. It's your ears, you can feed them any way you want.
> 
> But signal processing isn't fidelity. It doesn't return some attribute to sound and it doesn't restore it to being natural. It just adds a filter on top of it. If you like the sound of that filter, great. Some people put ketchup on everything and that is fine.


To my ears, raw headphone sound is _missing_ the inter-aural correlation my spatial hearing expects to be there. On speakers this correlation IS there, because both of my ears hear both speakers. With headphones I need a cross-feeder to create it, because the acoustic leakage between the left and right ear happens at a level that is far too low. My preferred solution would be to produce spatiality that incorporates this inter-aural correlation into the recording itself, in ways that do not compromise speaker spatiality (by doubling the inter-aural correlation). Since almost none of my recordings employ that kind of solution, cross-feed IS the solution.

Stereophony is based on an illusion of spatiality. Therefore the spatiality is not fidelity; we can only experience an illusion of spatial fidelity.
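The kind of cross-feed discussed in this thread (a delayed, lowpass-filtered, attenuated copy of the opposite channel added to each channel) can be sketched minimally as below. This is an illustration only, assuming NumPy; the delay, cutoff, and attenuation values are illustrative guesses, not taken from any particular plugin:

```python
import numpy as np

def crossfeed(left, right, fs=44100, delay_ms=0.3, atten_db=-12.0, cutoff_hz=700.0):
    """Naive crossfeed sketch: add a delayed, lowpass-filtered, attenuated
    copy of the opposite channel to each channel."""
    d = int(round(delay_ms * 1e-3 * fs))   # inter-aural delay in samples
    g = 10.0 ** (atten_db / 20.0)          # attenuation as linear gain
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)  # one-pole lowpass coefficient

    def lp(x):
        # simple one-pole lowpass, mimicking head shadowing of highs
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = (1.0 - a) * s + a * acc
            y[i] = acc
        return y

    def delayed(x):
        return np.concatenate([np.zeros(d), x[:len(x) - d]])

    out_l = left + g * delayed(lp(right))
    out_r = right + g * delayed(lp(left))
    return out_l, out_r
```

Feeding a hard-panned (left-only) signal through this yields a quieter, duller, slightly later copy in the right channel, which is roughly what a speaker reaching the far ear would provide.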


----------



## gregorio (Jun 5, 2022)

71 dB said:


> I did not suggest using only Jecklin Disk. I suggested using the spatial philosophy of Jecklin Disk meaning shaping the spatiality in a DAW to mimick Jecklin Disk.


Which again, pretty much no one ever does and why do you think that is? Which brings us back to the question of whether you think no engineer past or present knows what they’re doing?


71 dB said:


> For example recording an orchestra, sound engineer may think things such as the balance of instruments and reverberation etc. override headphone compatibility of spatiality.


Obviously balance of instruments is a priority. You might not care if a back desk violin is louder than the entire brass section as long as you perceive pleasing spatiality, but virtually everyone else does. Engineers do typically consider what the mix will sound like on cans but obviously we’re not going to prioritise your personal perception of headphone spatiality at the expense of everything else. For one thing, we don’t even know what your personal perception is.


71 dB said:


> That is of course possible, but then again I am not the only person in the World using cross-feed. Maybe all of us cross-feeders are just wrong about everything?


Come on, I don’t expect such fallacious arguments from you. If you believed the earth was flat, you wouldn’t be the only person in the world, does that mean the Earth is flat? And, using a non-sequitur about being wrong about one thing and therefore wrong about all things is beneath you!


71 dB said:


> A ping-pongy recording isn't that great even from speakers.


Yes it is. According to who isn’t it “that great”? Most ping-pongy recordings are quite early stereo recordings, where the typical consumer stereo speakers were inside a single cabinet or otherwise relatively close together and were therefore not as objectionable as they could be today, where consumer stereo speakers are typically much further apart. And if we’re talking about more modern recordings, how do you know it wasn’t intended by the musicians/engineers to sound objectionable?


71 dB said:


> Speaker-spatiality is more "flexible" because the room does so much. This explains why there are so many types of microphone set-ups for speaker spatiality.


No it doesn’t, what actually explains why there are so many types of mic setups is because there are so many different types and groupings of instruments and so many different types of sound the musicians and engineers wish to achieve.


71 dB said:


> So many ways to do it well for speakers, so why not choose those that work well with headphones too?


There’s not “_so many ways to do it well for speakers_”, there’s relatively few and fewer still if you’re after a particular sound, which virtually all musicians/engineers are. But again, musicians/engineers do typically consider headphone use, although not your personal perception of headphone use of course.

All the above has been explained to you before, more than once in some cases, so why are you still repeating the same falsehoods?

G


----------



## bigshot (Jun 5, 2022)

I have absolutely no idea what you mean by "spatiality". You appear to use it as a catch all word to justify whatever point you want to make.

There is more to speakers than just blending between channels. There's kinesthetic energy you can feel. There are incredibly complex primary distance cues that combine with secondary distance cues in the mix to create an illusion of *specific* space- not just generalized space. There are subtle timing effects and reflections of sound, all of which add to the naturalness. There's head tracking. You can get up out of your chair and move around the room and hear how the sound is different in different places. All of those things work together to define the space that the sound inhabits. The sound affects the space. The space affects the sound. Isn't that "spatiality"?

Blending the channels to reduce ping pong when you listen with headphones is fine. It might take a bit of the curse off of one downside to headphone listening. But it doesn't make headphones sound spatial like speakers. _It just blends channels._ That's a nice jury-rigged patch on one problem. If you like it, swell.

By the way, I happen to have this album and it sounds FANTASTIC on speakers! It's a lot of fun on headphones too.


----------



## 71 dB

gregorio said:


> Which again, pretty much no one ever does and why do you think that is? Which brings us back to the question of whether you think no engineer past or present knows what they’re doing?


If sound engineers in general are as stubborn as you are, it is no wonder fresh ideas are left alone. However, there are people able to "think outside the box," such as Jürg Jecklin. It is not about knowing what you are doing, but what you are doing. Headphone spatiality has, for whatever reason, been a low priority. It is ridiculous to say there is NOTHING to be done to improve headphone spatiality.



gregorio said:


> Obviously balance of instruments is a priority. You might not care if a back desk violin is louder than the entire brass section as long as you perceive pleasing spatially but virtually everyone else does. Engineers do typically consider what the mix will sound like on cans but obviously we’re not going to prioritise your personal perception of headphone spatiality at the expense of everything else. For one thing, we don’t even know what your personal perception is.


Well, I don't think good balance and good spatiality are mutually exclusive. I believe you can have both at the same time, for both speakers and headphones. I don't expect _my_ perception to be served, but rather a general perception based on the science of human hearing. Perception may vary from person to person, but we can do "objective" things, such as using an average HRTF, to figure out what kind of spatiality could work for everyone, on speakers and headphones, better than what we have now.



gregorio said:


> Come on, I don’t expect such fallacious arguments from you. If you believed the earth was flat, you wouldn’t be the only person in the world, does that mean the Earth is flat? And, using a non-sequitur about being wrong about one thing and therefore wrong about all things is beneath you!


Well, I just don't know how to answer.



gregorio said:


> Yes it is. According to who isn’t it “that great”? Most ping-pongy recordings are quite early stereo recordings, where the typical consumer stereo speakers were inside a single cabinet or otherwise relatively close together and were therefore not as objectionable as they could be today, where consumer stereo speakers are typically much further apart. And if we’re talking about more modern recordings, how do you know it wasn’t intended by the musicians/engineers to sound objectionable?


OK, my bad, I'm not using ancient cabinets. Jesus!

As for modern stuff: headphones and speakers give different spatiality, so which one is the intention? Also, as I hate excessive spatiality, I hate material with intended ping-pong, but that's just me. You have your favourites.



gregorio said:


> No it doesn’t, what actually explains why there are so many types of mic setups is because there are so many different types and groupings of instruments and so many different types of sound the musicians and engineers wish to achieve.


I have a test CD with the same material recorded with various mic setups, but of course I do understand you. That said, speaker spatiality IS more flexible BECAUSE the room contributes so much of the result.



gregorio said:


> There’s not “_so many ways to do it well for speakers_”, there’s relatively few and fewer still if you’re after a particular sound, which virtually all musicians/engineers are. But again, musicians/engineers do typically consider headphone use, although not your personal perception of headphone use of course.


How come some recordings work so well for both when others don't? I mean similar music in similar acoustics. Why do symphony recordings have such differing spatiality if there are only a few ways to do it? You assume I know nothing, but of course I know something, because I have listened to recordings. I know how different they sound. To me this says things can be done in many ways. Also, because I make music myself, I have tried spatial things myself. That's the reason I write about this stuff. I have knowledge, ideas, thoughts. Again, what the **** am I, if my thoughts are wrong? Try to understand me and what I do. I know what can be done! I have tested things out. Many producers use similar ideas. You are shooting ideas down. That is not good. If ideas don't work, then you know they don't work, but how do you know if you don't try? What have you done to improve spatiality? Nothing?


----------



## bigshot (Jun 5, 2022)

71 dB said:


> If sound engineers in general are as stubborn as you are, it is no wonder fresh ideas are left alone.


As I've explained before, when someone starts out being out of line in their first sentence, I don't bother to read any further. Maybe you have something to say in the rest of your comments, but I'm not interested. Attacking the person instead of the point they are making is rude. If you need to resort to personal insults to defend your ego, you should speak less and not risk it. You're flailing around here.


----------



## 71 dB

bigshot said:


> I have absolutely no idea what you mean by "spatiality". You appear to use it as a catch all word to justify whatever point you want to make.
> 
> There is more to speakers than just blending between channels. There's kinesthetic energy you can feel. There are incredibly complex primary distance cues that combine with secondary distance cues in the mix to create an illusion of *specific* space- not just generalized space. There are subtle timing effects and reflections of sound, all of which add to the naturalness. There's head tracking. You can get up out of your chair and move around the room and hear how the sound is different in different places. All of those things work together to define the space that the sound inhabits. The sound affects the space. The space affects the sound. Isn't that "spatiality"?
> 
> Blending the channels to reduce ping pong when you listen with headphones is fine. It might take a bit of the curse off of one downside to headphone listening. But it doesn't make headphones sound spatial like speakers. _It just blends channels._ That's a nice jury-rigged patch on one problem. If you like it, swell.


I do use speakers. I don't really disagree with what you say. People also use headphones, for example, to disturb other people less. This forum exists because people use headphones. You don't use headphones much, do you? Yet for some reason you post here a lot. I have seen you active on many other forums too. I don't know how you have so much time to post, but good for you, I guess...



bigshot said:


> By the way, I happen to have this album and it sounds FANTASTIC on speakers! It's a lot of fun on headphones too.



Interesting take on fantastic... ...to me this recording uses very aggressive and simple methods to create spatial effects. The hard left/right-panned instruments are localized to the speakers while the other sounds spread all over. This may have been interesting in the past, but in the 21st century it is not sophisticated.


----------



## The Jester

Maybe more like this ?


----------



## bigshot

The Jester said:


> Maybe more like this ?



I have this album and a compilation that has tracks from it. The binaural does nothing for my ears, but the multichannel mix on my speaker system is fantastic. It creates a full 360 sound field with jungle sounds and flies buzzing around. Neat album. It reminds me of Wendy Carlos’s Sonic Seasonings album.


----------



## The Jester

Bought his “Aero” album on CD when it was released over 10 years ago, it came with stereo and 5.1 versions on the discs,
Great album if you can still find it …


----------



## bigshot

Yeah, that is the album I’m thinking of. Love it!


----------



## gregorio

71 dB said:


> If sound engineers in general are as stubborn as you are, it is no wonder fresh ideas are left alone.


What fresh ideas are left alone? You think I’ve never tried a Jecklin Disk? And, it’s not a fresh idea anyway, it’s a very old idea. It’s just an adaptation of Blumlein’s baffled stereo mic technique from about 1930!


71 dB said:


> However, there are people able to "think outside the box" such as Jürg Jecklin.


He wasn’t thinking outside the box, he adapted a “box” that’s been around since stereo was first patented. I and many engineers often think outside the box but not if what I’m after is best achieved with a box already developed!


71 dB said:


> Headphone spatiality has been for whatever reason low priority.


No it hasn’t, we just don’t give priority to your personal perception of headphone spatiality.


71 dB said:


> It is ridiculous to say there is NOTHING to be done to improve headphone spatiality.


It is ridiculous to say that, so why are you? I certainly have not said that and neither has anyone else to my recollection, so why are you saying it?


71 dB said:


> Why do symphony recordings have so differing spatiality if there is only a few ways to do it?


There are only a few ways to do it, although an infinite number of small variations within some of those ways. The reason symphony recordings have different spatiality is because they’re recorded in different locations, mixed differently, performed differently and the musicians, engineers and producers have different intentions.


71 dB said:


> You assume I know nothing, but of course I know something because I have listened to recordings. I know how different they sound. To me this tells there is way to do things in many ways.


I assume you know little/nothing because that’s the impression you give, even after it’s been explained to you.

There is an almost infinite number of ways to do things but that number is massively reduced according to practicalities, the intentions of the musicians, engineers/producer and preferences of the target demographics.


71 dB said:


> Try to understand me and what I do. I know what can be done! I have tested things out.


Really, you’ve tried recording a symphony orchestra with just a Jecklin Disk setup have you?


71 dB said:


> Many producers use similar ideas. You are shooting ideas down. That is not good.


Yes, it is very good, in fact it’s why professional engineers are professional and not unpaid amateurs. Shooting down crazy ideas people come up with, based on decades of experience and experimentation of what does and doesn’t work, is why we’re employed as professionals in the first place!

Like many other engineers, if I’m not sure if something will work, I’ll give it a go (budget and time allowing) but that’s not the case here because I have already given it a go and already know the advantages and disadvantages, that’s what I’m paid for!


71 dB said:


> If ideas don't work then you know they don't work but how do you know if you don't try? What have you done to improve spatiality? Nothing?


Now that’s just nonsense! I have tried Jecklin disk and pretty much all other mic techniques and there isn’t a single project I’ve been involved with in nearly 30 years where I haven’t tried to improve spatiality because that’s a fundamental part of music/sound mixing and always has been. You think maybe I’ve never used pan-pots, delays or reverbs? You think I’ve never checked a mix on cans or made any adjustment I felt appropriate?

G


----------



## 71 dB

gregorio said:


> What fresh ideas are left alone? You think I’ve never tried a Jecklin Disk? And, it’s not a fresh idea anyway, it’s a very old idea. It’s just an adaptation of Blumlein’s baffled stereo mic technique from about 1930!


I am NOT talking about the Jecklin Disk, but about Jecklin Disk-type spatial philosophy! Use whatever mics you want; the spatiality can be shaped in a DAW to "mimic" a Jecklin Disk recording. Each separate track doesn't even need to be 100 % headphone-compatible on its own, because only the whole mix matters, and tracks can mask each other's spatial problems to a certain extent.



gregorio said:


> He wasn’t thinking outside the box, he adapted a “box” that’s been around since stereo was first patented. I and many engineers often think outside the box but not if what I’m after is best achieved with a box already developed!


Well, at least he did something.



gregorio said:


> No it hasn’t, we just don’t give priority to your personal perception of headphone spatiality.


I can clearly hear that from the recordings I own... ...I give priority to my personal perception of headphone spatiality, of course!



gregorio said:


> It is ridiculous to say that, so why are you? I certainly have not said that and neither has anyone else to my recollection, so why are you saying it?


Because you are against what I say. 



gregorio said:


> There are only a few ways to do it, although an infinite number of small variations within some of those ways. The reason symphony recordings have different spatiality is because they’re recorded in different locations, mixed differently, performed differently and the musicians, engineers and producers have different intentions.


Exactly! So why not study which locations give better results with regard to speaker/headphone compatibility? Why not study which style of mixing is best, and so on... ...there are many things that can be tinkered with at least a little bit, and the net sum of all those aspects can be huge; hence we have very different-sounding recordings!



gregorio said:


> I assume you know little/nothing because that’s the impression you give, even after it’s been explained to you.


We have different backgrounds. That's the main reason we don't understand each other. I get what you "explain" to me, but I am not a simpleton who believes it nullifies/debunks everything I say.



gregorio said:


> There is an almost infinite number of ways to do things but that number is massively reduced according to practicalities, the intentions of the musicians, engineers/producer and preferences of the target demographics.


Of course, but "massively reduced" can still be significantly more than one. I am after JUST ONE new way to do things, the Jecklin Disk-philosophy way.



gregorio said:


> Really, you’ve tried recording a symphony orchestra with just a Jecklin Disk setup have you?


No, I DON'T suggest anyone should try to do THAT! Use whatever mics you think are appropriate to record the orchestra and then mix the tracks together shaping each track to have spatiality that is a little bit too "wide" for headphones, but not too wide, because the final mix of all individual tracks will have a bit narrower spatiality because of masking. Balance of instruments should not be an issue, because track levels and the "panning" of the tracks can be adjusted separately to perfection. These principles seem to work when I mix my own music and they also seemed to work when I mixed music by a band on a mixing course.

The idea of Jecklin Disk-philosophy is to have the spatial signature of a Jecklin Disk without the practical problems/limitations of using a Jecklin Disk.
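The per-track "width" shaping described above is commonly done with mid/side processing. A minimal sketch, assuming NumPy (the function name and the idea of applying it per track are my illustration, not a quote from any DAW):

```python
import numpy as np

def set_width(left, right, width):
    """Mid/side stereo width control: decode L/R into mid (sum) and
    side (difference) signals, scale the side signal, re-encode.
    width=0 -> mono, width=1 -> unchanged, width>1 -> wider image."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid + width * side, mid - width * side
```

In a mix, each track could get its own width value, so hard-panned elements can be narrowed before they reach headphones while the overall speaker image is preserved.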



gregorio said:


> Yes, it is very good, in fact it’s why professional engineers are professional and not unpaid amateurs. Shooting down crazy ideas people come up with, based on decades of experience and experimentation of what does and doesn’t work, is why we’re employed as professionals in the first place!


I have experimented with these things for years. How else would I even have these ideas and opinions about how things should be done? Do you really think I have zero knowledge? If so, then people who don't know what ILD means have NEGATIVE knowledge. 



gregorio said:


> Like many other engineers, if I’m not sure if something will work, I’ll give it a go (budget and time allowing) but that’s not the case here because I have already given it a go and already know the advantages and disadvantages, that’s what I’m paid for!


I don't know what you have tried.



gregorio said:


> Now that’s just nonsense! I have tried Jecklin disk and pretty much all other mic techniques and there isn’t a single project I’ve been involved with in nearly 30 years where I haven’t tried to improve spatiality because that’s a fundamental part of music/sound mixing and always has been. You think maybe I’ve never used pan-pots, delays or reverbs? You think I’ve never checked a mix on cans or made any adjustment I felt appropriate?


I don't even know what you have produced and mixed! Why do you think I am talking about YOUR skills? Most of the recordings I own could, in my opinion, have been better with regard to headphone spatiality, so theoretically there are audio engineers out there who could have done a better job. I don't know whether you are one of them.


----------



## gregorio

71 dB said:


> So why not study which locations give better results with regard to speaker/headphone compatibility? Why not study which style of mixing is best and so on...


What do you think recording engineers have been doing for the last century? You think professional engineers never study “_what style of mixing is the best_” or the nuances of recording in different locations?

It’s statements and suggestions like this which indicate you don’t know the first thing about engineering/production. 


71 dB said:


> Of course, but "massively reduced" can still be significantly more than one. I am after JUST ONE new way to do things, the Jecklin Disk-philosophy way.


Sometimes there is just one fundamental way of doing things for best results, sometimes 2 or 3 and occasionally many but the Jecklin Disk is extremely rarely the best way. 


71 dB said:


> Use whatever mics you think are appropriate to record the orchestra and then mix the tracks together shaping each track to have spatiality that is a little bit too "wide" for headphones, but not too wide, because the final mix of all individual tracks will have a bit narrower spatiality because of masking. Balance of instruments should not be an issue, because track levels and the "panning" of the tracks can be adjusted separately to perfection.


Clearly you’ve never mixed an orchestral recording. How on earth can track levels and panning be adjusted separately to spatial information when that spatial information is on those recorded tracks? You think maybe we can magically separate all the reverb and spatial information from the instruments themselves when recording a live orchestra?


71 dB said:


> I have experimented with these things for years.


Obviously you haven’t, it’s clear you have no idea how orchestras are recorded and haven’t even done it once, let alone for years! If you had done it, you would not have made the suggestion above which is impossible. 


71 dB said:


> How else would I even have these ideas and opinions about how things should be done?


God knows. Maybe a fanatic who’s played around with some samples and free plugins in a DAW and thinks they know how to record and produce an orchestra?


71 dB said:


> Do you really think I have zero knowledge?


Why else would you be suggesting nonsense, things that even a beginner student would quickly discover?


71 dB said:


> I don't even know what you have produced & mixed! Why do you think I am talking about YOUR skills?


Let me get this straight, you have no idea what I’ve mixed, produced, studied or done and that’s why you’re telling me how I should do my job? You’re joking?


71 dB said:


> Because you are against what I say.


I disagree with your suggested methodology of recording/mixing, so that’s a valid reason for you to make up ridiculous assertions, seriously?

G


----------



## bigshot

duffer


----------



## 71 dB (Jun 6, 2022)

gregorio said:


> What do you think recording engineers have been doing for the last century? You think professional engineers never study “_what style of mixing is the best_” or the nuances of recording in different locations?


Recording engineers in general have been much, much more interested in speaker spatiality than headphone spatiality. Perhaps things have changed these days, now that headphone listening has become much more common, but for decades speaker spatiality dominated and created conventions that are not so headphone-friendly, such as ping-pong stereophony. Had sound engineers solved these issues decades ago, we wouldn't need crossfeeders much, would we?



gregorio said:


> It’s statements and suggestions like this which indicate you don’t know the first thing about engineering/production.


You are for some reason against my ideas and views and try to use your authority of decades in the industry to strike me down, but I am not going away. It is ridiculous to say I don't know anything. As if I had never been to school a day in my life! I have a university degree! What is it you think I know? My own name?

I don't claim to know everything. Nobody does. I am educated enough to know there is always more to learn. Your knowledge is limited too, and people disagree. The best engineers in the world have their own sound, their own philosophy of doing things.



gregorio said:


> Sometimes there is just one fundamental way of doing things for best results, sometimes 2 or 3 and occasionally many but the Jecklin Disk is extremely rarely the best way.


The Jecklin Disk philosophy is about having the strengths of the Jecklin Disk (spatial compatibility with speakers and headphones) without its weaknesses and problems. You don't need Jecklin Disks to use the Jecklin Disk philosophy in mixing, because the spatiality is shaped in a DAW. It is about using spatiality cleverly, so that the result has great spatiality on both speakers and headphones.



gregorio said:


> Clearly you’ve never mixed an orchestral recording.


No, I have not. As I say, I have mixed my own music and one track of a band on a mixing course.



gregorio said:


> How on earth can track levels and panning be adjusted separately to spatial information when that spatial information is on those recorded tracks?


Huh? Because level information and spatial information are practically orthogonal mathematically.
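The orthogonality claim can be made concrete with a mid/side decomposition: overall level is a gain applied to both components equally, while the inter-channel ("spatial") difference lives in the side signal alone, so each can be adjusted without touching the other. A minimal sketch (the function name is mine, not from the thread):

```python
import numpy as np

def ms_adjust(left, right, gain=1.0, width=1.0):
    """Adjust overall level and stereo width independently via
    mid/side: gain scales both M and S equally, width scales only
    the side (inter-channel difference) signal."""
    mid = 0.5 * (left + right)    # "level" / centre content
    side = 0.5 * (left - right)   # "spatial" content
    side = width * side
    return gain * (mid + side), gain * (mid - side)
```

With `gain=2.0, width=1.0` both channels are simply doubled; with `width=0.0` the output collapses to mono while the level is untouched, which is the sense in which the two dimensions are independent.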



gregorio said:


> You think maybe we can magically separate all the reverb and spatial information from the instruments themselves when recording a live orchestra?


Well, don't you use close-up mics and room mics to make this possible? This was the case when I mixed the band's track on the mixing course, and I didn't have too much trouble controlling the room reverb level. These were somewhat amateurish recordings of a random no-name band.



gregorio said:


> Obviously you haven’t, it’s clear you have no idea how orchestras are recorded and haven’t even done it once, let alone for years! If you had done it, you would not have made the suggestion above which is impossible.


Why do you keep talking about orchestras as if nothing else was ever recorded? If orchestras can't be recorded and mixed with my ideas, then FINE. Maybe something else can. For example, I use the Jecklin Disk philosophy to mix my own music all the time!



gregorio said:


> God knows. Maybe a fanatic who’s played around with some samples and free plugins in a DAW and think they know how to record and produce an orchestra?


I have never claimed to know how to record and produce an orchestra! Very few people need to know that. Somehow you are one of those rare people. You are probably the only one on this board.



gregorio said:


> Why else would you be suggesting nonsense, things that even a beginner student would quickly discover?


Nonsense to you, not to me. I'm sorry I am not a super-genius who was born with the knowledge of how to record orchestras. I am just a dummy who needs years if not decades to learn and understand things.



gregorio said:


> Let me get this straight, you have no idea what I’ve mixed, produced, studied or done and that’s why you’re telling me how I should do my job? You’re joking?


I'm not telling YOU what to do. I don't even know how you mix and produce! I am suggesting to everybody working in the field how the problems of headphone spatiality could be fixed. Maybe you have already fixed this problem in your own way, but not everybody has.



gregorio said:


> I disagree with your suggested methodology of recording/mixing, so that’s a valid reason for you to make up ridiculous assertions, seriously?
> 
> G


What should I be doing in your opinion? What assertions should I be making? Do I know anything about anything in your opinion?


----------



## bigshot

Most people go away and come back less agitated.


----------



## 71 dB

Thinking about this, maybe I should just tell how I mix my music, what kind of spatiality I create. Maybe that makes professionals here feel less targeted?


----------



## bigshot (Jun 6, 2022)

My advice is to stop being emotional and stop taking factual disagreement as a personal attack.


----------



## 71 dB (Jun 6, 2022)

bigshot said:


> My advice is to stop being emotional and stop taking factual disagreement as a personal attack.


Not bad advice at all *bigshot*, but for a person struggling with low self-esteem, comments like _"Clearly you know nothing about this, because you have not recorded/mixed/produced orchestras..."_ are difficult not to take personally and emotionally. I am tired of hearing from other people that the things I HAVE DONE mean nothing and are worthless.

I am ready to discuss the problems of the Jecklin Disk, for example, on a technical level, but gregorio has a tendency to attack my knowledge and experience in the field of recording to discredit me as a debater. It is also annoying to me, an INTJ/P with Asperger's, how he implies that audio engineers with a lot of experience (especially with orchestras) are above constructive criticism. Architects are constantly criticized for their "ugly" architecture (no need to "respect" the artistic intent), but it is somehow blasphemy to point out that stereophonic recordings have spatial issues with headphones...


----------



## bigshot (Jun 6, 2022)

You aren’t arguing about the subject any more. You’re arguing your emotions and you can’t let go. Gregorio does this stuff for a living. If you listened to what he says and incorporated it, you could grasp what he’s saying and stop putting yourself into the position of punching bag. He isn’t doing it to you. He’s just responding to the same factual error repeated over and over and over. You’re doing it to yourself by being more invested in arguing than the topic being discussed.

This isn’t about spatiality. It never was about that. It’s about your bruised ego. You don’t have to win at all costs. There’s way too much of that around here already. No one listens to each other, and everyone speaks entirely for their own benefit. It’s all a big dumb contest for “king of the forum”. That’s lame. Better to listen and try to understand. It isn’t about winning an argument. It’s about learning from each other.

You have a professional sound engineer talking with you. Take advantage of that. Don’t try to be more of an expert than an expert.


----------



## gregorio (Jun 6, 2022)

71 dB said:


> but for decades speaker spatiality dominated and created conventions that are not so headphone-friendly


Of course, because for decades hardly any consumers used headphones, and when headphone use did start becoming widespread, it was in non-critical listening conditions (EG. while jogging or travelling).


71 dB said:


> such as ping-pong stereophony.


Ping-pong stereophony is not now, nor has it ever been, a “convention”! Its use is rare, although it was somewhat less rare when stereo was fairly new to the market, as already explained more than once.


71 dB said:


> Had sound engineers solved these issues decades ago, we wouldn't be needing crossfeeders much, would we?


But we don’t need crossfeeders much! Only a tiny percentage of headphone users apply crossfeeders, and those who have a serious issue with headphone presentation typically prefer something more sophisticated than a crossfeed: HRTF processing, head-tracking and speaker convolution, for example. Plus, sound engineers cannot solve “these issues” because solving them requires user-specific solutions.


71 dB said:


> You are for some reason against my ideas and views and try to use your authority of decades in the industry to strike me down, but I am not going away.


I’m not using my authority to “strike you down”, I am using basic music recording principles to strike you down. Basic principles even a novice, with almost no authority, should know.


71 dB said:


> It is reidiculous to say I don't know anything.


You’ve never studied orchestral mixing, by your own admission you’ve never done any yourself, you’ve probably never even seen it done by professionals, and you are making nonsense suggestions that even a novice would know are impossible/impractical. What rational conclusion can we take from all this other than “you don’t know anything”?


71 dB said:


> Jecklin Disk-philosophy is about having the strengths of Jecklin Disk (spatial compatibility with speakers and headphones) without the weaknesses and problems.


The Jecklin Disk technique is the use of a pair of matched omnidirectional mics, 36 cm apart, with a baffle (a sound-absorbent disk) between them. How do you apply that “philosophy” to say 20 or more mics that are not matched, are not 36 cm apart and have no baffle? There is of course no way to have all the strengths of the Jecklin Disk and none of its weaknesses, and if there were, that’s what we’d all have been using for decades.


71 dB said:


> Well, don't you use close up mics and room mics to make this possible?


Of course not. In a symphony orchestra you’ve got around 90 or more musicians, playing together in close proximity. Plus, most of the instruments are largely reliant on room acoustics to produce their recognisable sound. Close mic’ing significantly reduces the room acoustics captured but would not eliminate “spill”. So, you would end up with 90+ mics/channels, all out of phase with each other and most of them not capturing the desired sound anyway. A great deal of extra time and cost for a hugely inferior result, not smart. What we actually do is have a main array (say a Decca Tree for example), outriggers covering areas of the orchestra furthest away from the main array, spot mics covering sections of the orchestra, some room mics and occasionally, depending on the piece of music, one or a few close mics.


71 dB said:


> This was the case when I mixed the track of the band on the mixing course and I didn't have too much trouble of controlling the room reverb level. These were somewhat amateurish recordings of a random no name band.


Hang on, you were the one who brought up orchestra recordings, then argued that you’ve experimented for years and know a lot. Now you’re saying your actual experience is mixing a track that was an amateurish recording of a no name band as part of a mixing course? Oh dear.

A rock/pop band is an entirely different thing. Pretty much none of the instruments are reliant on room acoustics to produce their recognisable sound and there aren’t dozens of musicians all playing at the same time in close proximity. In fact typically, there is no proximity between the musicians because they are not playing together, they’ve probably been multitracked at different times/days. So apart from the drumkit, there is no spill or phase issues and artificial reverb can be applied instead of a natural acoustic. Again, absolute basics, I’d ask for your money back for that mixing course!


71 dB said:


> Why do you keep talking about orchestras as if nothing else was ever recorded?


Err, because YOU introduced orchestras as the example and continued to do so right up until the moment you started to realise it’s maybe nonsense and now it’s “_why do you keep talking about orchestras_”?

Incidentally, what I’ve stated doesn’t only apply to symphony orchestras, it also broadly applies to any large acoustic ensemble.


71 dB said:


> I'm sorry I am not a super-genius who was born with the knowledge of how to record orchestras. I am just a dummy who needs years if not decades to learn and understand things.


No one is born with the knowledge of how to record orchestras and everyone is a dummy who needs years to learn and understand how to. The difference is; when I was a dummy, before the years of learning/understanding, I didn’t argue with those who were not dummies, the professionals who already had years of learning/understanding.


71 dB said:


> I'm not telling YOU what to do. … I am suggesting everybody working in the field how the problems of headphone spatiality could be fixed.


You are telling me what to do! You are telling me and repeatedly arguing that I should mix according to the “Jecklin Disk philosophy”.

Enough now, others must be getting bored and most of this has already been explained previously anyway. If you don’t know, then ask but please don’t keep arguing nonsense suggestions based on knowing next to nothing about the recording and production process.

G


----------



## 71 dB (Jun 6, 2022)

bigshot said:


> You aren’t arguing about the subject any more. You’re arguing your emotions and you can’t let go. Gregorio does this stuff for a living. If you listened to what he says and incorporated it, you could grasp what he’s saying and stop putting yourself into the position of punching bag. He isn’t doing it to you. He’s just responding to the same factual error repeated over and over and over. You’re doing it to yourself by being more invested in arguing than the topic being discussed.
> 
> This isn’t about spatiality. It never was about that. It’s about your bruised ego. You don’t have to win at all costs. There’s way too much of that around here already. No one listens to each other, and everyone speaks entirely for their own benefit. It’s all a big dumb contest for “king of the forum”. That’s lame. Better to listen and try to understand. It isn’t about winning an argument. It’s about learning from each other.
> 
> You have a professional sound engineer talking with you. Take advantage of that. Don’t try to be more of an expert than an expert.


So, if you are a 1-99% expert you should keep your mouth shut. Only when you are a 100% expert do you suddenly know it all...

We are all experts in something, because we all do something for a living. My job just has never been producing orchestral recordings. My job has been various things, from measuring loudspeakers to writing minutes of meetings to calculating the heat losses of buildings to taking water samples from faucets, and many other things someone has been willing to pay me to do or has needed done.

Of course I take advantage of all the knowledge available here. Sorry if I come across as ungrateful, but my social skills are really bad (one of the major reasons I struggle in life so much). It is difficult for me to think about the feelings of other people when I communicate. I am an aspie, practically autistic, just not severely, so I can function in society somehow, but it is difficult. Other people just don't understand me.


----------



## 71 dB

gregorio said:


> Of course, because for decades hardly any consumers used headphones, and when headphone use did start becoming widespread, it was in non-critical listening conditions (EG. while jogging or travelling).
> 
> G


Interesting point here. Looks like my problem is I don't do my headphone listening in non-critical listening conditions!


----------



## gregorio

71 dB said:


> So, if you are 1-99 % expert you should keep your mouth shut.


Not sure what that’s got to do with it, as that’s not the issue here. The issue is more like: Is it wise to argue with say a 90% expert if you’re only a 0%-1% expert (IE. Not even a novice)? Of course it’s up to you whether you “_keep your mouth shut_” but in your position I’d only open my mouth to ask questions unless I was pretty certain, and I couldn’t be pretty certain if I’d never tried mixing an orchestra, studied it or even seen others do it. 


71 dB said:


> Only when you are a 100 % expert you suddenly know it all...


No one is a 100% expert and those who get close certainly don’t get there suddenly!


71 dB said:


> We all are experts in something, because we all do something for living.


True but then of course I wouldn’t argue with you about calculating heat loss from buildings because I’ve never done it myself or studied it and my expertise in recording/mixing obviously doesn’t mean I’m also an expert in building heat loss!


71 dB said:


> Interesting point here. Looks like my problem is I don't do my headphone listening in non-critical listening conditions!


Or probably using a 1980’s Walkman cassette player and bundled HPs? Maybe that’s your solution?

G


----------



## 71 dB

gregorio said:


> Ping-pong stereophony is not now, nor has it ever been, a “convention”! Its use is rare, although it was somewhat less rare when stereo was fairly new to the market, as already explained more than once.


True. It wasn't really used after the early years of commercial stereo recordings, but those recordings are still "ping-pongy" and the fact that it wasn't a convention doesn't help.



gregorio said:


> But we don’t need crossfeeders much! Only a tiny percentage of headphone users apply crossfeeders and those who have a serious issue with headphone presentation typically prefer something more sophisticated than a crossfeed, HRTF, head-tracking and speaker convolution for example. Plus, sound engineers cannot solve “these issues” because solving them requires user specific solutions.


My suggestion was to solve the issues at the crossfeed level of sophistication, which is not so user-specific.
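For readers unfamiliar with what "crossfeed-level sophistication" means in practice, here is a minimal sketch in the spirit of the processing described earlier in the thread: a delayed, lowpass-filtered copy of the opposite channel is mixed into each ear roughly 12 dB down. The parameter values are illustrative assumptions, not taken from any particular plugin:

```python
import numpy as np

def simple_crossfeed(left, right, fs=44100, delay_ms=0.5,
                     cutoff_hz=700, atten_db=12.0):
    """Minimal crossfeed: add a delayed, one-pole-lowpassed,
    attenuated copy of the opposite channel to each ear."""
    delay = int(fs * delay_ms / 1000.0)
    gain = 10.0 ** (-atten_db / 20.0)
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)  # one-pole lowpass coefficient

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = (1.0 - a) * s + a * acc
            y[i] = acc
        return y

    def delayed(x):
        return np.concatenate([np.zeros(delay), x[:len(x) - delay]])

    feed_l = gain * delayed(lowpass(right))  # right channel leaking into left ear
    feed_r = gain * delayed(lowpass(left))
    return left + feed_l, right + feed_r
```

Fed a hard-panned signal, this produces a quiet, darker, slightly later copy in the opposite channel, which is exactly the "dim delayed signal of the opposite channel" idea, and why it is far less user-specific than an individualised HRTF.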



gregorio said:


> I’m not using my authority to “strike you down”, I am using basic music recording principles to strike you down. Basic principles even a novice, with almost no authority, should know.


I said "use the mics you want." I am not telling anyone to use different principles. I am talking about shaping the spatiality in a DAW while mixing the tracks together. Nowadays we even have mics that record using different patterns, and by combining these afterwards in software one can adjust the pattern freely.



gregorio said:


> You’ve never studied orchestral mixing, by you own admission you’ve never done any yourself, you’ve probably never even seen it done by professionals and are making nonsense suggestions that even a novice would know are impossible/impractical. What rational conclusion can we take from all this other than “you don’t know anything”?


Too bad if those suggestions are impossible/impractical. The university I studied at did not teach orchestral mixing. The 101 course in acoustics, for example, used the textbook _"The Science of Sound"_ by Thomas D. Rossing. The book contains (spread over about 600 pages) these parts:

1 - Motion, Energy, Waves and Other Physical Principles
2 - Perception and Measurement of Sound
3 - Acoustics of Musical Instruments
4 - The Human Voice
5 - The Electrical Production of Sound
6 - The Acoustics of Rooms
7 - Electronic Music
8 - Environmental Noise

I thought I learned something from that course/book. Most of my studies were not acoustics. They included things like Math(s), Physics, Electronics, Programming, Telecommunication, Optical instruments, Economics, etc. 



gregorio said:


> The Jecklin Disk technique is the use of a pair of matched omni-direction mics, 36cms apart with a baffle (sound absorbent disk) between them. How do you apply that “philosophy” to say 20 or more mics, that are not matched, are not 36cms apart and have no baffle? There is of course no way to have all the strengths of Jecklin Disk and none of weaknesses and if there were, that’s what we’d all have been using for decades.


I have constructed my own Jecklin Disk, so I know what it is and what kind of spatiality recordings made with it have. I haven't recorded music with it, just environmental sounds (cars driving by, doors opening and closing, etc.). The results have been good in my opinion.

What the Jecklin Disk does is generate simplified ILD, ITD and ISD information in the recorded sound: it maps the complex 3-dimensional acoustic sound field into 2 channels of audio in a way that tries to be compatible with both speaker- and headphone-based reproduction. That's what the philosophy is. It is about the goal, not the physical dimensions of a Jecklin Disk. Listeners do not care what the mics looked like. They care about how it sounds.
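The ITD component at least is easy to quantify: for a far-field source, a spaced pair produces a time difference of roughly d·sin(θ)/c. With the 36 cm spacing quoted above, the maximum is about 1.05 ms, noticeably larger than the roughly 0.6-0.7 ms a human head produces, which is arguably part of why spaced-pair recordings can sound "wide" on headphones. A small sketch (far-field approximation; the function name is mine):

```python
import math

def mic_pair_itd(angle_deg, spacing_m=0.36, c=343.0):
    """Far-field inter-mic time difference (seconds) for a spaced
    omni pair; 0.36 m is the Jecklin Disk spacing quoted in the
    thread, c is the speed of sound in air."""
    return spacing_m * math.sin(math.radians(angle_deg)) / c
```

A source dead ahead (0°) gives zero time difference; a source fully to one side (90°) gives the maximum, 0.36/343 ≈ 1.05 ms.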



gregorio said:


> Of course not. In a symphony orchestra you’ve got around 90 or more musicians, playing together in close proximity. Plus, most of the instruments are largely reliant on room acoustics to produce their recognisable sound. Close mic’ing significantly reduces the room acoustics captured but would not eliminate “spill”. So, you would end up with 90+ mics/channels, all out of phase with each other and most of them not capturing the desired sound anyway. A great deal of extra time and cost for a hugely inferior result, not smart. What we actually do is have a main array (say a Decca Tree for example), outriggers covering areas of the orchestra furthest away from the main array, spot mics covering sections of the orchestra, some room mics and occasionally, depending on the piece of music, one or a few close mics.


Well, one would think having one mic for every instrument group would be enough. The word "close" is relative. I am trying to learn from what you write here.



gregorio said:


> Hang on, you were the one who brought up orchestra recordings, then argued that you’ve experimented for years and know a lot. Now you’re saying your actual experience is mixing a track that was an amateurish recording of a no name band as part of a mixing course? Oh dear.


Did I? I have forgotten. I must have mentioned an orchestra as an example; I could have talked about anything else. What I have experimented with for years is the electronic music I make myself. I think it is a good area for learning, because the spatiality has to be generated from scratch. Nothing is given. Sorry about the confusion. Hopefully I have made myself clear now.



gregorio said:


> A rock/pop band is an entirely different thing. Pretty much none of the instruments are reliant on room acoustics to produce their recognisable sound and there aren’t dozens of musicians all playing at the same time in close proximity. In fact typically, there is no proximity between the musicians because they are not playing together, they’ve probably been multitracked at different times/days.


In my case, the band (Turkuaz) was recorded playing together, meaning there was acoustic leakage on the tracks (drums could be heard on the guitar tracks, etc.). What I learned is that this isn't necessarily a big problem if handled with care, and it can even make the resulting sound better (richer).



gregorio said:


> So apart from the drumkit, there is no spill or phase issues and artificial reverb can be applied instead of a natural acoustic. Again, absolute basics, I’d ask for your money back for that mixing course!


Yes, I think the drums were the only tracks with phase issues, and artificial reverb was added, both short and long. The course was free and it wasn't about recording orchestras, but about the basics of using Pro Tools and working in a studio. It delivered what it promised. I have never seen "how to record/mix orchestral music" courses offered. Maybe such education is not given in Finland?



gregorio said:


> Err, because YOU introduced orchestras as the example and continued to do so right up until the moment you started to realise it’s maybe nonsense and now it’s “_why do you keep talking about orchestras_”?


Well, we can go from orchestras to, say, EDM if you want, although I think EDM producers already know a lot about how to make headphone spatiality good... they tend to mix the low end mono-like, for example, and use the DAW plugins cleverly.
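The "mono-like low end" practice mentioned here can be sketched as a crossover that sums everything below some frequency to mono and leaves the rest of the stereo image untouched. Below is a crude brick-wall FFT version purely for illustration; real productions would use elliptical EQ or proper crossover filters, and the 120 Hz default is an assumption, not a standard:

```python
import numpy as np

def mono_bass(left, right, fs=44100, xover_hz=120):
    """Sum the band below xover_hz to mono in both channels and
    leave the rest untouched (brick-wall FFT split, illustrative
    only -- not a production-grade crossover)."""
    n = len(left)
    L = np.fft.rfft(left)
    R = np.fft.rfft(right)
    low = np.fft.rfftfreq(n, d=1.0 / fs) < xover_hz  # bins below crossover
    M = 0.5 * (L + R)                                # mono (mid) spectrum
    L[low] = M[low]
    R[low] = M[low]
    return np.fft.irfft(L, n), np.fft.irfft(R, n)
```

After this processing a hard-panned 50 Hz line appears equally in both channels (at half amplitude per channel), while content above the crossover keeps its original panning, which is roughly what a club system with a mono sub enforces anyway.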



gregorio said:


> Incidentally, what I’ve stated doesn’t only apply to symphony orchestras, it also broadly applies to any large acoustic ensemble.


Makes sense.



gregorio said:


> No one is born with the knowledge of how to record orchestras and everyone is a dummy who needs years to learn and understand how to. The difference is; when I was a dummy, before the years of learning/understanding, I didn’t argue with those who were not dummies, the professionals who already had years of learning/understanding.


Okay, I stop arguing with you about this then, because I will never learn/understand these things. I am already 51. It is too late for me.



gregorio said:


> You are telling me what to do! You are telling me and repeatedly arguing that I should mix according to the “Jecklin Disk philosophy”.


Well, I didn't mean to do so. Sorry! My lack of social skills reared its ugly head again...


----------



## 71 dB

gregorio said:


> True but then of course I wouldn’t argue with you about calculating heat loss from buildings because I’ve never done it myself or studied it and my expertise in recording/mixing obviously doesn’t mean I’m also an expert in building heat loss!


I have not "studied" calculating the heat losses of buildings. I was shown at work in 10 minutes how it's done, and I started doing it, because it is not rocket science. It is pretty simple (for an engineer with a university degree). It is just a lot of work, because every single room/hall/etc. has to be calculated separately, though if identical rooms exist (say, in hotels), one calculation can be copied.
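The per-room calculation described here is, at its simplest, summing U·A·ΔT over each room's envelope surfaces. A minimal sketch assuming steady-state transmission losses only (ventilation and infiltration ignored; the function name is mine):

```python
def room_heat_loss(surfaces, t_in, t_out):
    """Steady-state transmission heat loss (W) of one room.

    surfaces: iterable of (u_value, area) pairs, with U in
    W/(m^2*K) and area in m^2. t_in/t_out in degrees C; only
    their difference matters. Ventilation losses are ignored.
    """
    dt = t_in - t_out
    return sum(u * a * dt for u, a in surfaces)
```

Each room is a short list of surfaces, which is why the work scales with the number of rooms, and why identical rooms (as in a hotel) can reuse a single result.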


----------



## bigshot (Jun 6, 2022)

If you have limitations, you should make an effort to overcome them. If you make no effort, then it’s all on you.

As I said before, no one is doing this to you. You’re doing it to yourself.


----------



## 71 dB

bigshot said:


> If you have limitations, you should make an effort to overcome them. If you make no effort, then it’s all on you.
> 
> As I said before, no one is doing this to you. You’re doing it to yourself.


To my knowledge there is no cure for autism. As for limitations of knowledge/skills, getting into orchestral music recording isn't easy, at least in my country. In the area of music theory I have advanced a lot in 3 years of studying it on my own. There just isn't much discussion about chord progressions and species counterpoint on this board. Somehow we are often talking about stuff that, according to the experts here, I know nearly nothing about.


----------



## gregorio

71 dB said:


> It wasn't really used after the early years of commercial stereo recordings, but those recordings are still "ping-pongy" and the fact that it wasn't a convention doesn't help.


No they weren’t still “ping-pongy”. An extremely small percentage of consumers used headphones at that time and most stereo reproduction systems were integrated units or relatively small speakers close together. Put your speakers 2ft apart, sit 10ft away from them and then tell me it’s ping-pongy!


71 dB said:


> My suggestion was to solve the issues on crossfeed-level of sophistication which is not so user specific.


Obviously that doesn’t “solve the issues”. If crossfeed did solve the issues, why then develop individualised HRTFs, head tracking and speaker/room convolution to solve an issue already solved by crossfeed? Clearly crossfeed is preferable to nothing for some people but also clearly, it does NOT solve the issues for the vast majority. 


71 dB said:


> Well, one would think having one mic for every instrument group would be enough.


One would only think that, provided one didn’t know much about orchestras or mics, or using mics to record orchestras! Take just the 1st violins for example, a modern symphony orchestra would typically have 16-20 1st violins arranged two to a “desk” and covering an area of roughly 8ft x 30ft. So …


71 dB said:


> The word "close" is relative.


When we say “close” mics we typically mean a few inches or less. You can’t close mic an area of 8ft by 30ft containing 16+ musicians with one mic. The mic has to be more distant to cover that area and will therefore pick up far more acoustic reflections and spill, especially as the brass is much louder than the 1st violins. And if we take all the violins, that’s double the number and area to cover.


71 dB said:


> In my case, the band (Turkuaz) was recorded playing together meaning there was acoustic leakage on the tracks (drums could be heard on guitar tracks etc.) What I learned is that this isn't necessarily a big problem if handled with care and can make the resulting sound even better (richer).


That’s because the freqs of the drumkit are fairly different from the freqs of a guitar, you haven’t got many guitar tracks to “handle with care”, and so it’s often a problem that can be dealt with “well enough”, depending on what processing is required. Not so with an orchestra, where several different instruments all inhabit the same frequency range (say basses, tuba and timpani, or trombones, bassoons, cellos and low horns) and you’d have 90+ nightmare channels to deal with rather than just a couple of guitar tracks.


71 dB said:


> The course was free and it wasn't about recording orchestras, but about the basics of using Pro Tools and working in a studio. It delivered what it promised.


As a double certified Pro Tools instructor (Music and Post), I’ve got a pretty good idea what your course covered, which is barely more than an introduction to studio engineering. 


71 dB said:


> I have never seen "how to record/mix orchestral music" courses offered. Maybe such education is not given in Finland?


There probably are, although I’m not certain. In the UK there are many specialist audio engineering courses but only some include orchestral mixing. 


71 dB said:


> What Jecklin Disk does is it generates simplified ILD, ITD and ISD


Yes, I know what Jecklin Disk is, what it does and the philosophy behind it. 


71 dB said:


> Listeners do not care what the mics looked like. They care about how it sounds.


It requires a “matched pair”; that doesn’t mean the mics look the same, it means they perform the same: same polar patterns, same frequency response, same sensitivity. Not close but “matched”, virtually identical. We don’t use just a pair of mics with orchestras, we typically use 30+ and they’re not matched, because we require mics with different performance depending on the instruments we’re recording. For example, wider patterns for the main array, narrower for spot or close mics, widest for room mics. In the case of a band, we’d typically use significantly different mics for say the kick drum and vocals, because a kick drum could easily break the delicate large-diaphragm condenser mics that typically give the best results for vocals. Commonly we’ll use 5 or more different mic types when recording a band, and they are pretty much the opposite of “matched”.


71 dB said:


> Well, we can go from orchestras to, say, EDM if you want, although I think EDM producers already know a lot about how to make headphone spatiality good.


EDM is another kettle of fish again. It’s not that EDM producers intrinsically know a lot about headphone spatiality; it’s that EDM is commonly produced on headphones in the first place, is usually “performed” with headphones in clubs, where you generally have to use stereo in a very limited manner, and all the bass is routed to a mono sub (or subs).

What you suggest just isn’t practical in the vast majority of cases with music, because of the number and variety of mics required. Although some EDM could be an obvious exception.

G


----------



## bigshot

If you make no effort to understand what people are saying to you and you respond with unregulated emotions like this, you’re basically an argument bot. And you’ve found another argument bot to interact with. That is a supreme waste of time and energy… and it isn’t just your time and energy that’s being wasted. I’m done with this for now.


----------



## 71 dB (Jun 7, 2022)

gregorio said:


> Obviously that doesn’t “solve the issues”. If crossfeed did solve the issues, why then develop individualised HRTFs, head tracking and speaker/room convolution to solve an issue already solved by crossfeed? Clearly crossfeed is preferable to nothing for some people but also clearly, it does NOT solve the issues for the vast majority.


Well then I simply don't have solutions to problems in the World. Sorry!



gregorio said:


> One would only think that, provided one didn’t know much about orchestras or mics, or using mics to record orchestras! Take just the 1st violins for example, a modern symphony orchestra would typically have 16-20 1st violins arranged two to a “desk” and covering an area of roughly 8ft x 30ft. So …


I do know this.



gregorio said:


> When we say “close” mics we typically mean a few inches or less. You can’t close mic an area of 8ft by 30ft  containing 16+ musicians with one mic. The mic has to be more distant to cover that area and will therefore pick up far more acoustic reflections and spill, especially as the brass is much louder than the 1st violins. And if we take all the violins, that’s double the number and area to cover.


Well, when I say close, I mean the distance is significantly smaller than the size of the sound source. That's when we are in the near field, and it has certain effects on the sound. Something like 10 feet is "close" in this case.



gregorio said:


> As a double certified Pro Tools instructor (Music and Post), I’ve got a pretty good idea what your course covered, which is barely more than an introduction to studio engineering.


You are correct in that assessment.



gregorio said:


> It requires a “matched pair”, that doesn’t mean the mics look the same, it means the mics perform the same; same polar sensitivities, same freq response and sensitivity, not close but “matched”, virtually identical. We don’t use just a pair of mics with orchestras, we use typically 30+ and they’re not matched because we require mics with different performance depending on the instruments we’re recording. For example wider patterns for the main array, narrower for spot or close mics, widest for room mics. In the case of a band, we’d typically use significantly different mics for say the kick drum and vocals because a kick drum could easily break the delicate large diameter condenser mics that typically give the best results for vocals. Commonly we’ll use 5 or more different mic types when recording a band that are pretty much the opposite of “matched”.


I didn't talk about matched pairs. I talked about what the mic setup looks like. Listeners don't care if the Jecklin Disk has a disc, or how far apart the mics are. Anyway, I won't argue anymore.


----------



## 71 dB (Jun 7, 2022)

gregorio said:


> What we actually do is have a main array (say a Decca Tree for example),


What are the important properties of possible main arrays? Could Jecklin Disk be the main array? How much weight in the mix does the main array signal have typically? How hot is it mixed compared to the rest of the mix? Am I right thinking the main _purpose_ of a main array is to "glue" the orchestral sounds together?



gregorio said:


> outriggers covering areas of the orchestra furthest away from the main array,


Well, it makes sense to have such mics. "Outrigger" is an interesting name for those.



gregorio said:


> spot mics covering sections of the orchestra,


These spot mics probably require lots of skills and experience to set up well.



gregorio said:


> some room mics


Would a Jecklin Disk placed where someone's head in the audience would be work as a room mic? Adjusting the row could be used to adjust how "roomy" the sound is.



gregorio said:


> and occasionally, depending on the piece of music, one or a few close mics.


I think in the 60's it was common to "overdo" very close mics on certain instruments such as cymbals. When a cymbal crashes, I go almost deaf, and on headphones the cymbals are "at the ear" because the sound is very dry. More modern orchestral recordings don't really have these kinds of issues. That's my experience.


----------



## bigshot

When I was in elementary school I learned to play a song on the violin. After two months, my parents returned my violin to the music store.

That's my experience.


----------



## gregorio (Jun 7, 2022)

71 dB said:


> Listeners don't care if Jecklin Disk has a disc, or how much apart the mics are.


If a Jecklin Disk setup doesn’t have a disk and the mics aren’t 36 cm apart, then it isn’t a Jecklin Disk setup, it’s an ORTF or another of the more common stereo mic setups.


71 dB said:


> Well, when I say close, I mean the distance is significantly smaller than the size of the sound source.


In which case the term “close” would be meaningless because nearly all mics would be “close”, with the exception of some room mics. Of course the term is relative, so in certain cases (with a very large sound source) “close” could mean up to a metre or so but typically it means a few inches or less.


71 dB said:


> What are the important properties of possible main arrays?


As even a coverage of the whole orchestra as possible, with a wide stereo image and a close perspective. A main orchestra array is therefore virtually always flown a couple of metres above the conductor or above and slightly in front of the conductor.


71 dB said:


> Could Jecklin Disk be the main array?


Any stereo pair could be used as the main array but typically don’t produce as good a result as 3 mic arrangements, such as the Decca Tree, INA-3 or OCT for example. Additionally, the disk/baffle would be largely ineffective in the preferred main array position, thereby defeating the purpose/philosophy of the Jecklin Disk technique. Jecklin Disk needs to be used in front of the source.


71 dB said:


> How much weight in the mix does the main array signal have typically? How hot is it mixed compared to the rest of the mix?


There’s no absolute answer to this question, it depends. Orchestral recordings for film soundtracks sometimes have very little or virtually no weight on the main array, the main “weight” coming from the mixed section/spot mics. At the other extreme, the mix might be almost entirely the main array, with the other mics used minimally, just for emphasis. Typically it’s somewhere between these extremes.


71 dB said:


> Am I right thinking the main _purpose_ of a main array is to "glue" the orchestral sounds together?


It can be used for that but more typically it’s more like the backbone. All the above depends on the circumstances, the exact layout of the orchestra (which varies, even with the same orchestra), the musical requirements of the piece, the artistic desires of the producer and the output formats required (which typically includes surround).


71 dB said:


> Well, it makes sense to have such mics. "Outrigger" is an interesting name for those.


Not sure if that’s a ubiquitous term but certainly they’re often called that.


71 dB said:


> These spot mics probably require lots of skills and experience to set up well.


You certainly need to know what you’re doing. Although there’s always going to be a lot of spill. In film score orchestral recording it’s common for the orchestra sections to be separated with screens, to reduce spill but you still get a fair amount.


71 dB said:


> Would a Jecklin Disk put to were someones head in the audience would be work as a room mic?


At least that would be a position where the Jecklin Disk setup would actually operate as a Jecklin Disk but it wouldn’t generally be very good as a room mic. In addition to the main array, we typically use an “ambience array”, such as a Hamasaki Square, Double ORTF or IRT Cross and then possibly some additional room mics. Exactly what’s used, mainly depends on the acoustics of the specific recording venue. Often the ambience array is flown quite high though, to maximise reflections/acoustics over direct sound.


71 dB said:


> I think in the 60's it was common to "overdo" very close mics on certain instruments such as cymbals. When the cymbal crashes, I go almost deaf and on headphones the cymbals are "at the ear" because the sound is very dry.


It’s still standard practice to close mic the cymbals in a drum kit, depending on the genre. For example the ride cymbal is particularly important in most jazz music, the hi-hats generally more so in rock/pop, and they therefore typically require close mic’ing. Splash cymbals are often just covered by the overhead pair but depending on the piece/style will also sometimes be close mic’ed. In the 60’s it was pretty much required to “overdo” the cymbals because they rely so heavily on the high freqs, which is the first thing you lose with analogue tape recording, bouncing and reproduction. The situation improved slightly in the ‘70s with better tape formulations, noise reduction that didn’t kill the high freqs and the good old Aphex Aural Exciter. Of course, the same basic issues existed with classical/orchestral music but digital effectively solved the problem and cymbals typically no longer need enhancement.

Incidentally, adding spatial information/reverb to cymbals can be interesting, particularly splash cymbals. Much of the sound is perceptually almost identical to high frequency white noise, so adding reverb commonly doesn’t make it sound spatially any different, it just makes it sound louder. Sometimes there are no solutions to this issue, so overly dry or present sounding cymbals may be unavoidable in certain circumstances.

G


----------



## 71 dB

gregorio said:


> If a Jecklin Disk setup doesn’t have a disk and the mics aren’t 36cms apart, then it isn’t a Jecklin Disk setup, it’s an ORTF or another of the more common stereo mic setups.


I believe Jürg Jecklin initially used a 165 mm distance to imitate the spacing of human ears, but later increased the distance as a method of "acoustic zoom": sounds coming from the sides create a twice-as-large ITD, making them appear to come from a wider angle. ORTF has two cardioids at a 110° angle without a baffle in between. ORTF is btw also very headphone compatible in my opinion. I wouldn't hang on to one particular mic distance, as a Jecklin Disk with adjustable mic distance gives flexibility in various acoustic/musical situations.
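The "acoustic zoom" effect can be sketched numerically. Under a simple plane-wave assumption (distant source, free field), the inter-mic time difference of a spaced omni pair is roughly d·sin(θ)/c, so doubling the spacing doubles the ITD for a given source angle. A minimal Python sketch; the function name and the 30° test angle are my own illustration, not from the thread:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 °C

def itd_seconds(spacing_m, angle_deg):
    """Approximate inter-mic time difference for a spaced pair and a
    distant source at angle_deg off-centre (plane-wave assumption)."""
    return spacing_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

# Ear-like 165 mm spacing vs. a doubled "acoustic zoom" spacing,
# for a source 30 degrees off-centre:
for d in (0.165, 0.33):
    print(f"{d * 100:.1f} cm -> {itd_seconds(d, 30) * 1e6:.0f} µs")
```

With the ear-like spacing this lands in the low-hundreds-of-microseconds range typical of human ITDs, and the doubled spacing doubles it, which is the exaggerated-angle effect described above.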



gregorio said:


> In which case the term “close” would be meaningless because nearly all mics would be “close”, with the exception of some room mics. Of course the term is relative, so in certain cases (with a very large sound source) “close” could mean up to a metre or so but typically it means a few inches or less.


Well, if I just say a mic is 10 feet away, we don't need to argue about what "close" means in various contexts...

In orchestras the players tend to move their instruments back and forth as part of artistic expression (especially with smaller instruments). "Close" mics a few inches away make the sound unstable: the distance to the instrument can easily double, meaning the level of the direct sound will vary by several decibels, while a listener dozens of feet away will hear almost no change in the sound. So, a very near mic should be attached to the instrument at a fixed point relative to it to overcome this problem. I'm thinking out loud here based on my knowledge and education in acoustics, so correct me if needed.
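The instability described above follows directly from the inverse-square law: the direct-sound level changes by 20·log10(d₂/d₁) dB when the distance changes. A small sketch of the point being made (the specific distances are my own illustration):

```python
import math

def level_change_db(d1, d2):
    """Change in direct-sound level (dB) when the mic-to-source distance
    goes from d1 to d2, assuming a point source in free field
    (inverse-square law). Negative means quieter."""
    return -20 * math.log10(d2 / d1)

# A close mic 4 inches away; the instrument sways 4 inches further back:
print(round(level_change_db(4, 8), 1))      # roughly -6 dB

# The same 4-inch sway heard by a listener 40 feet (480 inches) away:
print(round(level_change_db(480, 484), 2))  # well under 0.1 dB
```

The same physical movement produces a drastic level swing at the close mic but an inaudible one at a distant listening position, which is exactly why clip-on mics (fixed relative to the instrument) are sometimes used.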


----------



## Corti

bigshot said:


> you’re basically an argument bot. And you’ve found another argument bot to interact with.





sorry i just couldnt resist


----------



## Blanchot

gregorio said:


> Of course not. In a symphony orchestra you’ve got around 90 or more musicians, playing together in close proximity.


Unless there is a pandemic which forces musicians to keep a distance to comply with authority regulations. There has been written quite a lot about how conductors and performers tackled the situation but it would be interesting to know how the technicians dealt with it. How did the distancing requirements affect the recording process? I'm veering a bit off topic here but it would be interesting to know nonetheless.


----------



## gregorio

71 dB said:


> ORTF has two cardioids at 110° angle without baffle in between. ORTF is btw also very headphone compatible in my opinion.


Yes, I know, I’ve used ORTF numerous times.


71 dB said:


> I wouldn't hang on to one particular mic distance, as a Jecklin Disk with adjustable mic distance gives flexibility in various acoustic/musical situations.


Mic distances are incredibly important and have been experimented with extensively. Get mic distances wrong and you can cause yourself all kinds of nightmares in a multi-mic setup. Additionally, mic distances are often what differentiates different techniques. For example, increase the ORTF mic distance from 17 cm to 20 cm (and change the angle of incidence to 90deg) and it’s no longer an ORTF pair, it’s a DIN pair; increase the distance of a DIN pair to 30 cm and it’s a NOS pair.
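The point that spacing and angle alone distinguish these near-coincident techniques can be captured as a small lookup table. A sketch using the nominal values quoted above; the function and dictionary names are my own:

```python
# Near-coincident cardioid pair techniques, keyed by their nominal
# (capsule spacing in cm, angle of incidence in degrees) as quoted
# in the discussion above.
STEREO_PAIRS = {
    "ORTF": (17, 110),
    "DIN":  (20, 90),
    "NOS":  (30, 90),
}

def identify(spacing_cm, angle_deg):
    """Return the technique name matching a spacing/angle pair, if any."""
    for name, params in STEREO_PAIRS.items():
        if params == (spacing_cm, angle_deg):
            return name
    return None

print(identify(17, 110))  # ORTF
print(identify(30, 90))   # NOS
```

The table makes the argument concrete: change either number and you are, by definition, using a different named technique.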

An obvious disadvantage of all the stereo pair setups as a main array arises with a 5.1 or greater mix, which has been standard practice for many years now, whereas something like the Decca Tree is great for both multi-channel mixes and stereo. There are other advantages and disadvantages between nearly all the setups but that’s an obvious one between the stereo pairs and the 3 mic arrays.


71 dB said:


> "Close" mics at a few inches away make the sound unstable. The distance to the instrument can easily double meaning that the level of the direct sound will vary several decibels while a listener dozens of feet away wont hear almost any change in the sound. So, a very near mic should be attached to the instruments at fixed point in relation to the instrument to overcome this problem.


We rarely use close mics with orchestral recordings but occasionally it’s necessary. Sometimes we’ll use a “clip on”, a mic attached to the instrument, but that can also have issues (vibrations, mechanical noise), so sometimes we’ll use a fixed mic (on a stand) and try to get the musician not to move too much. This is typical with pop/rock singers and they’re used to it. Likewise, we’ll sometimes use clip ons for instruments in a drum kit, but clip ons are generally preferred for live performance more than for studio recordings.


Blanchot said:


> How did the distancing requirements affect the recording process?


Sorry, I can’t answer that. I didn’t do any orchestral or large ensemble recording during the pandemic. 

G


----------



## 71 dB

gregorio said:


> Mic distances are incredibly important and have been experimented with extensively. Get mic distances wrong and you can cause yourself all kinds of nightmares in a multi-mic setup. Additionally, mic distances are often what differentiates different techniques. For example, increase the ORTF mic distance from 17cm to 20cm (and change the angle of incidence to 90deg) and it’s no longer an ORTF pair, it’s a DIN pair, increase the distance of a DIN pair to 30cms and it’s a NOS pair.


Yes, but what separates ORTF from OSS (Jecklin Disk) at the level of principle is that ORTF uses cardioid mics at a 110° angle to create directivity that increases with frequency, while OSS uses a disc between omnidirectional mics to create a similar effect. Jürg Jecklin developed the Jecklin Disk because he felt omnidirectional mics are better than directional mics. If you use omnidirectional mics instead of directional mics, you can't achieve the desired ILD and ISD information without something between the mics to block the sound increasingly with frequency, hence the disc. ORTF, DIN and NOS are just variations of the _same_ idea, while OSS is a different idea. There is also a variation of the OSS idea, the Schneider Disk (aka the pregnant wife of the Jecklin Disk ), which aims to give a more binaural sound.



gregorio said:


> An obvious disadvantage of all the stereo pair setups as a main array is with a 5.1 or greater mix,


Sure! I am thinking of 2 channel stereo recordings here, because all of this is about having recordings that are spatially compatible with both speakers and headphones. The latter are two-channel, so multichannel sound has to be downmixed for them anyway (another can of worms on its own).



gregorio said:


> which has been standard practice for many years now, while something like the Decca Tree is great for multi-channel mixes and stereo. There are other advantages and disadvantages between nearly all the setups but that’s an obvious one between the stereo and 3 mic arrays.



I have been thinking about a 5-channel variation of OSS. It would have 5 half-discs creating five 72° sections for the 5 mics.


gregorio said:


> We rarely use close mics with orchestral recordings but occasionally it’s necessary. Sometimes we’ll use a “clip on”, a mic attached to the instrument  but that can also have issues (vibrations, mechanical noise) so sometimes we’ll use a fixed mic (on a stand) and try to get the musician not to move too much.


Yes, a mic on a stand + musicians trying not to move too much is better, because mics attached to the instrument carry the risk of mechanical noise.


----------



## gregorio (Jun 8, 2022)

71 dB said:


> … OSS uses a disc in between the omnidirectional mics to create similar effect.


That depends on what you mean by “similar”. Cutting off (absorbing) the higher freqs from a side/quadrant of an omni-directional pattern (as per a Jecklin Disk) is not particularly similar to a cardioid pattern.


71 dB said:


> Jürg Jecklin developed Jecklin Disk, because he felt omnidirectional mics are better than directional mics.


In some respects (both freq response and directionality) they are better, in others they’re worse. An obvious example: an omni picks up sound much better from behind than a cardioid, but of course that’s only better if you actually want to pick up sound from behind the mic, rather than what you’re actually pointing the mic at. A requirement of a main orchestral array is very good width/separation (and depth), and a Jecklin Disk setup is not ideal for that. Not to say it couldn’t be done, just that it could usually be done better.


71 dB said:


> There is an variation of the OSS idea also, Schneider Disk (aka the pregnant wife of Jecklin Disk ), which aims to give more binaural sound.


Yes but more at the expense of speaker reproduction. Schneider disk is very rarely used because in almost all circumstances where you might use a Schneider disk, a dummy head setup would be superior.


71 dB said:


> Sure! I am thinking stereophonic recordings here, because all of this is about having recordings that are spatially compatible with speakers and headphones.


As 5.1 is stereophonic, I assume you mean 2 channel stereo? But “_all of this_” is NOT “_about having recordings that are spatially compatible with speakers and headphones_”, it’s all about making products that meet or exceed consumer demand. As all film for the last 40 years or so, virtually all HD TV and many orchestral recordings are expected and presented in surround, the vast majority (if not virtually all) orchestral recordings are done in surround and have been for many years. Because it’s relatively trivial to create a great stereo mix from a surround recording but not so going from a stereo recording to a surround mix. This is one of the reasons the Decca Tree array is so popular, it was invented as an improvement for 2 channel stereo mixes but as a 3 mic array is particularly well suited to the 3 main speaker arrangement of surround systems, the best of both worlds. It’s not as “spatially compatible” with headphones as the Jecklin Disk or better still a binaural (dummy head) setup, but it works acceptably well for the majority of headphone users, typically somewhat better than Jecklin Disk (and of course very much better than binaural recordings) with a two channel stereo speaker system and of course way better than Jecklin Disk for surround speakers.


71 dB said:


> I have been thinking about a 5-channel variation of OSS.


I can’t see what that would bring to the table? The whole point of the Jecklin Disk is to make a 2 channel stereo recording that works on speakers but also roughly approximates some HRTF characteristics with headphones. A 5 channel variation of the Jecklin Disk would be difficult to construct, wouldn’t work ideally with a 5.1 speaker setup and would only make sense for HPs with 5 channel output, and there aren’t many of those around.

G


----------



## 71 dB (Jun 8, 2022)

gregorio said:


> That depends on what you mean by “similar”. Cutting off (absorbing) the higher freqs from a side/quadrant of an omni-directional pattern (as per a Jecklin Disk) is not particularly similar to a cardioid pattern.


But the result is similar: less high-frequency content on the shadowed (contralateral) side => ILD rises with frequency and angle of incidence. Do we really argue about the meaning of "similar"? Semantic evaluation is part of communication. That's why words can be used flexibly within reason.



gregorio said:


> In some respects (both freq response and directionality) they are better, in others they’re worse. An obvious example, an omni picks up sound much better from behind than a cardioid but of course that’s only better if you actually want to pick up sound from behind the mic, rather than what you’re actually pointing the mic at. A requirement of a main orchestral array is very good width/separation (and depth) a Jecklin Disk setup is not ideal for that. Not to say it couldn’t be done, just that it could usually be done better.


Of course directive mics have their place and purpose. Why else would we have them? The Jecklin Disk obviously has very small width/separation at low frequencies, but larger at high frequencies. However, since the Jecklin Disk creates ITD similarly at all frequencies, the sound is far from "double mono" at low frequencies. The width/separation is just encoded as phase difference, following the basic principles of spatial hearing.

In my (uneducated) view, if we think about reproduction by speakers, there are plenty of options for a good main array, but many of these are not optimal for headphones. That's when solutions such as the Jecklin Disk become viable. A Jecklin Disk as the backbone of the recording would provide a strong foundation for headphone spatiality, but the question is what this means for the speaker spatiality. If some harm is done compared to other main arrays such as the Decca Tree, could this harm be undone with the other mics? And if we "undo" the harm, do we introduce new issues for headphone listening?



gregorio said:


> Yes but more at the expense of speaker reproduction. Schneider disk is very rarely used because in almost all circumstances where you might use a Schneider disk, a dummy head setup would be superior.


Yes, because a dummy head is even more binaural than the Schneider Disk. The problem with dummy heads is that the spatiality isn't suitable for speakers anymore. If something is produced for headphones ONLY, then dummy heads are the superior way to do it. If speakers are included, then obviously we need something less binaural. There are some circumstances (small groups) where the Schneider Disk can work, while the Jecklin Disk works better when recording bigger groups.



gregorio said:


> As 5.1 is stereophonic, I assume you mean 2 channel stereo?


Yes. It seems this definition is correct, and stereophonic indeed also means "more channels of transmission and reproduction so that the reproduced sound seems to surround the listener and to come from more than one source." Interesting! I have always called anything with more than 2 channels multichannel sound, not stereo. I had even believed the word "stereo" is Latin and means two. Turns out "duo" is the Latin for two.



gregorio said:


> But “_all of this_” is NOT “_about having recordings that are spatially compatible with speakers and headphones_”, it’s all about making products that meet or exceed consumer demand. As all film for the last 40 years or so, virtually all HD TV and many orchestral recordings are expected and presented in surround, the vast majority (if not virtually all) orchestral recordings are done in surround and have been for many years. Because it’s relatively trivial to create a great stereo mix from a surround recording but not so going from a stereo recording to a surround mix. This is one of the reasons the Decca Tree array is so popular, it was invented as an improvement for 2 channel stereo mixes but as a 3 mic array is particularly well suited to the 3 main speaker arrangement of surround systems, the best of both worlds. It’s not as “spatially compatible” with headphones as the Jecklin Disk or better still a binaural (dummy head) setup, but it works acceptably well for the majority of headphone users, typically somewhat better than Jecklin Disk (and of course very much better than binaural recordings) with a two channel stereo speaker system and of course way better than Jecklin Disk for surround speakers.


What you say here makes a lot of sense, I must admit. The conclusion is that productions that have gone multichannel are difficult to make headphone compatible. So, headphone compatibility is possible mainly in productions that are still done in the 2 channel format. Since we also have tons of older 2 channel productions from the early days of stereo that weren't so headphone compatible (because at that time headphones weren't so popular), people like me with demands for headphone compatibility are stuck with cross-feeders or even more sophisticated methods to address the problem.



gregorio said:


> I can’t see what that would bring to the table? The whole point of Jecklin Disk is to make a 2 channel stereo recording that works on speakers but also roughly approximates some HRTF characteristics with headphones. A 5 channel variation of the Jecklin Disk would be difficult to construct, not work ideally with a 5.1 speaker setup and would only make sense for HPs with 5 channel output and there’s not many of those around.
> 
> G


I don't know if it would bring anything to the table, but my brain likes to ponder with concepts/possibilities like this. It is my way to be happy.


----------



## gregorio (Jun 8, 2022)

71 dB said:


> But the result is similar: Less high frequencies on ipsilateral side => ILD rises with frequency and angle of insident.


No, careful here, you’re falling into your old habit of ONLY considering one thing or group of things, rather than the entire picture. In respect of what I’ve quoted, it is similar but in other respects it isn’t. For example, I mentioned that because the Jecklin setup uses omnis, it picks up sound from behind the mic, i.e. the auditorium. So firstly, it’s not great for recording live performances because you’ll have a lot of audience noise and secondly, even without an audience, you’ll pick up a lot of room acoustics. A solution to the latter would be to put the Jecklin setup closer to the source, to pick up a higher ratio of direct sound, but now it won’t function well as a main array because you won’t get as much coverage. Another example ….


71 dB said:


> Jecklin Disk has obliviously very small width/separation at low frequencies, but larger at high frequencies.


Firstly, that’s not great for an orchestra, where most of the mid bass and almost all of the low bass is on the right hand side or far right side (cellos, tuba and basses). Secondly, the problem is worse because omni mics are more sensitive to low freqs at mid and far distances than cardioids, and the disk is effectively transparent to low freqs. Sure, we’ve got ITD but even with the ideal listening position between the speakers, we’ve still got relatively poor separation/width, plus room acoustics issues and/or coverage issues. We could of course increase the angle of incidence and move the mics further apart but that causes two issues:
1. The Jecklin disk would need to be huge, although we could overcome this by creating a mic that’s omnidirectional in the low freqs, increasingly more directional in the higher freqs, and therefore do away with the disk altogether.
2. We would have a serious weakness in the phantom centre position, although we can solve this by putting another of our special omni mics there; plus, if we move this mic forward a bit we can help the sense of depth as well.

A great deal of experimentation was done and the best results were achieved with the stereo pair angle of incidence at a full 180deg and 2m apart, with the centre mic 1m in front and the whole thing placed above and just in front of the conductor. Good coverage, wide/separated image, good depth and it provides reasonable headphone compatibility too.

What I’ve described in issue one has already been done: the mic is called the Neumann M50. And what I’ve described in issue two has also been done: those are the exact specifications of the Decca Tree, which is designed specifically for use with the Neumann M50!
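The geometry described above (a spaced pair 2 m apart with a centre mic 1 m in front) makes it easy to sketch the arrival-time relationships a Decca Tree produces. A toy Python example; the flat 2-D layout and the chosen source position are my own simplifying assumptions:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

# Nominal Decca Tree layout from the text, in metres (x across, y forward):
# left/right mics 2 m apart, centre mic 1 m in front of their midpoint.
MICS = {"L": (-1.0, 0.0), "C": (0.0, 1.0), "R": (1.0, 0.0)}

def arrival_ms(source_xy):
    """Time of flight in milliseconds from a source to each tree mic."""
    sx, sy = source_xy
    return {name: math.hypot(sx - mx, sy - my) / SPEED_OF_SOUND * 1000.0
            for name, (mx, my) in MICS.items()}

# A source 5 m in front of the tree and 3 m to the left of centre:
times = arrival_ms((-3.0, 5.0))
print({k: round(v, 2) for k, v in times.items()})
print("left channel leads right by", round(times["R"] - times["L"], 2), "ms")
```

For an off-centre source the near channel leads by a few milliseconds, which (together with level differences) is what encodes the source position across the three channels.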


71 dB said:


> That's when solutions such as Jecklin Disk become viable. Jecklin Disk as the backbone of the recording would provide strong foundation for headphone spatiality, but the question is what does this mean for the speaker spatiality?


Explained above.


71 dB said:


> If some harm is done compared to other main arrays such as Decca Tree, could this harm be undone with the other mics?


Not as far as the additional room acoustics and audience noise are concerned. We could use the outriggers to help with the coverage and width but we’d need to rely on them quite heavily, so we’d lose pretty much all the HP advantages of the Jecklin Disk setup.


71 dB said:


> And if we "undo" the harm, do we introduce new issues for headphone listening?


Yes, we’ve now got a very wide spaced (AB) pair mixed fairly equally with the Jecklin Disk. We’re pretty much back to square one of getting an arbitrary stereo mix for speakers to work on HPs.


71 dB said:


> I have always called anything more than 2 channels to be multichannel sound, not stereo.


“Stereo” is commonly used to describe 2 channel stereo but all surround formats are stereophonic and as you used the term “stereophonic” and we were also talking about surround, I wanted to be sure. In these circumstances it’s often wise to specify “2 channel stereo” to avoid confusion. Especially as the first widespread surround format was called “Dolby Stereo” which was a 4 channel (LCRS) surround.

G


----------



## 71 dB

Instead of writing here, *I watch Pink Panther movies and classic Doctor Who*. I have nothing valuable to say and thinking about things and having ideas is clearly useless.  Thanks for educating me gregorio.


----------



## gregorio

71 dB said:


> I have nothing valuable to say and thinking about things and having ideas is clearly useless. Thanks for educating me gregorio.


There’s a popular audiophile myth that engineers are cloth-eared idiots who know nothing except how to make the loudness war worse. Sure, there are some duffers out there, and many of them these days, because anyone with a laptop, a mic and free audio software can call themselves a recording engineer. Even at the high levels there are some who are not as competent as they should be but I’ve been lucky enough to work at some of the world’s best studios and the chief engineers there are extremely knowledgeable, very bright, very well educated, very creative, massively experienced and truly world class. Everything I said in my previous message was discovered by these specialists nearly 70 years ago! Compared to them, I’m definitely 2nd rate but I console myself that they’re specialists, and I have other audio duties that they don’t know so much about.

So don’t give up with the ideas but ask rather than argue because there’s a very good chance it’s already been explored. If you’re interested in the subject, look up “BBC binaural proms” and follow the technical links. Some really innovative work being done in this field by the BBC R&D dept, creating live 3d audio mixes and creating complex, customised binaural mixes from the same mic feeds at the same time, in real time.

G


----------



## bigshot

Blake Edwards’ The Party is even funnier than the Pink Panther films. “Birdy num num”


----------



## 71 dB

bigshot said:


> Blake Edwards’ The Party is even funnier than the Pink Panther films. “Birdy num num”


I like that movie a lot!


----------



## bigshot

Jacques Tati’s Mon Oncle and Pierre Etaix’s YoYo are great too.


----------



## 71 dB (Jun 9, 2022)

bigshot said:


> Jacques Tati’s Mon Oncle and Pierre Etaix’s YoYo are great too.


I am a Tati fan (_Playtime_ is in my opinion one of the best films ever), but I don't know Pierre Etaix.

The problem with movies is that seeing them can be challenging. I watch movies mainly on Blu-ray, which to my eyes gives a good enough presentation if the source material is good. I also watch movies on TV occasionally. I don't use streaming services. French movies are mainly released on Blu-ray in France (and perhaps in Belgium). Since French people think French is the only language that counts and other languages shouldn't even exist, those releases don't have any subtitles. If there is a German release, it has perhaps only German subtitles (and dubbing). In the UK French movies are not popular. Native English speakers are not enthusiastic about watching movies with subtitles; they'd rather just watch movies with English dialog. The rest of the European countries are too small to release "niche" movies. So, most of the time there won't be a European Blu-ray release of French movies with English subtitles, not to mention Finnish, which would be ideal for me because I am a Finn. As for US releases, region coding becomes one problem. I do have a region-free player, but for how long? Will my next player be? So, I avoid discs locked to region A or C, because I live in region B. Another problem is that shipping costs from the US to Finland/Europe are nowadays ridiculously high and there are issues with importing stuff and all that hassle*. So, I order NOTHING from the US (even ordering from the UK became problematic thanks to Brexit!). Pierre Etaix has been released only in the US + Canada, so that's that. It is possible his films are shown on TV here.

Luckily _some_ French movies have had Nordic releases with Finnish, Swedish, Norwegian and Danish subtitles. Tati's movies for example! I have also managed to collect some French movies released in UK and Germany with English subtitles.

* Years ago orders under 45 euros were duty free => no hassle. Then they dropped the limit to 22.5 euros. Now nothing is duty free.


----------



## 71 dB

gregorio said:


> There’s a popular audiophile myth that engineers are cloth eared idiots who know nothing except how to make the loudness war worse. Sure, there are some duffers out there and many of them these days because anyone with a laptop, mic and free audio software can call themselves a recording engineer. Even at the high levels there are some who are not as competent as they should be but I’ve been lucky enough to work at some of the world’s best studios and the chief engineers there are extremely knowledgeable, very bright, very well educated, very creative, massively experienced and truly world class. Everything I said in my previous message was discovered by these specialists nearly 70 years ago!  Compared to them, I’m definitely 2nd rate but I console myself that they’re specialists, I have other audio duties that they don’t know so much about.
> 
> So don’t give up with the ideas but ask rather than argue because there’s a very good chance it’s already been explored. If you’re interested in the subject, look up “BBC binaural proms” and follow the technical links. Some really innovative work being done in this field by the BBC R&D dept, creating live 3d audio mixes and creating complex, customised binaural mixes from the same mic feeds at the same time, in real time.
> 
> G


I'd say you have been very very lucky with your career as an audio engineer. I have been very unlucky with the work I have done. That's why I am a tired bitter man who struggles with self-esteem every day. Coming to this discussion board was one of my efforts to change that, but somehow the effect has been the opposite. People like you have shown me my place, humbled me and the struggle continues. I don't understand how people can become so good at something, that other people look up to them. To me it seems to require super-human capabilities.

Technology has changed the ways music is produced and made it possible to produce music in new ways. A dude with a laptop may not know what you know, but he might know something else relevant to what he is doing. I make music with GarageBand. I export the "raw" tracks and mix them together in Audacity. I have written Nyquist plugins for Audacity to have effects I want, for example to control spatiality. Working under such limitations keeps creativity alive, and since my working career is a total mess, I need to think about the future and save money. Music-production-related software and hardware is ridiculously expensive if you make music only as a hobby, never profiting a penny from it. Often the software doesn't even work properly, and something like updating the operating system can introduce problems. A typical simple home studio with all the gear and software can easily cost $10,000. Where the hell do normal people get that kind of money? Aren't most people broke, struggling to pay the next rent to avoid eviction?

I'll check out “BBC binaural proms”, thanks!


----------



## gregorio

71 dB said:


> I'd say you have been very very lucky with your career as an audio engineer.


Both very lucky and at other times quite unlucky.


71 dB said:


> People like you have shown me my place, humbled me and the struggle continues.


It’s NOT my intention to show you your place or humble you, just to correct inaccuracies or falsehoods where I’m able.


71 dB said:


> I don't understand how people can become so good at something, that other people look up to them.


There’s not just one way. Sometimes it’s massive natural talent, sometimes it’s purely huge amounts of hard work to the point of an obsessive disorder but mostly it’s a bit of both. In my case, I was dropped into the deep end in a big way a few times and I worked obsessively hard to avoid looking like an idiot. After quite some time of doing that, some people started to “look up to me”, which was a big shock because I felt I was still just trying to avoid being the idiot amongst my peers/competitors.


71 dB said:


> I export the "raw" tracks and mix them together in Audacity. … Music-production-related software and hardware is ridiculously expensive if you make music only as a hobby, never profiting a penny from it. Often the software doesn't even work properly and something like updating the operating system can introduce problems. A typical simple home studio with all the gear and software can easily cost $10,000.


You’re lucky, when I started about 30 years ago, computer DAWs were just toys for enthusiasts, incapable of pro standards. Entry price to pro standards at that time was about $200k minimum and that was for a cut down/compromised system!

If by hardware you mean mics and recording facilities, then yes, it gets expensive fast, if you want pro quality recordings. But if you’re OK with just synths, samples and virtual instruments, it’s ridiculously cheap. GarageBand is really a toy, Audacity is good for audio editing and some specific audio tasks but is NOT a decent DAW for music production.

As you probably know, Pro Tools is pretty much the industry standard, virtually all the top studios use it. Many composers/songwriters prefer Logic Audio or Cubase but for the seriously budget conscious, Reaper (https://www.reaper.fm) is the popular choice. It’s a proper DAW, mature, stable and very well featured, including for multi-channel. It comes with various supplied plugins and supports VST/VST3, AU and other plug-in formats, so if you’re willing to search around there are many good free plugins. The lifetime license is $60 but after the 60 day trial you can continue using it without paying the license, if you don’t mind the “nag screen”. I would say it’s a no brainer for you.

G


----------



## 71 dB

gregorio said:


> It’s NOT my intention to show you your place or humble you, just to correct inaccuracies or falsehoods where I’m able.


I know, but it feels like being shown my place and humbled when someone is able to point out those inaccuracies and falsehoods. Some people are able to block all criticism and live in their own delusions of being right. I am not. For me it means minor existential crises and re-organisation of the internal structures of my mind. This is of course not your problem. It is my problem.



gregorio said:


> There’s not just one way. Sometimes it’s massive natural talent, sometimes it’s purely huge amounts of hard work to the point of an obsessive disorder but mostly it’s a bit of both. In my case, I was dropped into the deep end in a big way a few times and I worked obsessively hard to avoid looking like an idiot. After quite some time of doing that, some people started to “look up to me”, which was a big shock because I felt I was still just trying to avoid being the idiot amongst my peers/competitors.


To me the term "hard work" is weird. I have one level of working. I can't work less or more hard than that. Why? Because working less hard feels inefficient and stupid, and working harder would exhaust me fast so that I would be unable to work further for a while. So, there is only one level of working for me, the level I can sustain and that feels efficient. So, to me the thought that a person can choose their level of working is strange. Deadlines don't suit me at all for this reason, unless the deadline happens to be the same as my natural working speed, but in that case I would have completed the work by the deadline anyway.



gregorio said:


> You’re lucky, when I started about 30 years ago, computer DAWs were just toys for enthusiasts, incapable of pro standards. Entry price to pro standards at that time was about $200k minimum and that was for a cut down/compromised system!


I have made music with computers for 30 years too. In the 90's it was just insanely crappy. I had an Amiga 500 + Protracker. Four channels, 2 on the left, 2 on the right. In the late 90's I used a PC. It had a very noisy analog sound card. It has taken me decades to learn everything I know today, but it has also been an interesting hobby. It has been mindblowing to realize how much there is to learn. Something as simple as dynamic compression contains so much stuff to learn, and that is just a fraction of what goes into making music.



gregorio said:


> If by hardware you mean mics and recording facilities, then yes, it gets expensive fast, if you want pro quality recordings. But if you’re OK with just synths, samples and virtual instruments, it’s ridiculously cheap. GarageBand is really a toy, Audacity is good for audio editing and some specific audio tasks but is NOT a decent DAW for music production.


Well, a typical home studio has a couple of mics and so on, but there are also monitor speakers (a pair of small Genelec speakers isn't cheap), room treatment, analog synths for "warm" sound etc.

GarageBand is a toy, but you can do a lot with it already. I'm able to mix in Audacity using my own method of mixing.



gregorio said:


> As you probably know, Pro Tools is pretty much the industry standard, virtually all the top studios use it. Many composers/songwriters prefer Logic Audio or Cubase but for the seriously budget conscious, Reaper (https://www.reaper.fm) is the popular choice. It’s a proper DAW, mature, stable and very well featured, including for multi-channel. It comes with various supplied plugins and supports VST/VST3, AU and other plug-in formats, so if you’re willing to search around there are many good free plugins. The lifetime license is $60 but after the 60 day trial you can continue using it without paying the license, if you don’t mind the “nag screen”. I would say it’s a no brainer for you.
> 
> G


Yes, of course I know that. At some point I looked at these "affordable" DAWs and the prices were around 200-300 dollars. At the moment my main interest is in composing, but I'll keep Reaper in mind.

"Purchase a license directly by sending a check or money order in USD, drawn on a US bank." What the hell? No Paypal or Credit Card?


----------



## 71 dB

I registered on this forum 5 years ago because I was a fool who thought I knew _something_. Of course I don't! I am a loser who can never be an expert in anything. Because of this I have zero value as a person. I have almost nothing to give to the World. I try, of course, but other people do everything 1000 times better than me so of course my offerings have zero value in comparison. All I can do is live this poor life of an unemployed bum on social security and hope for as left-leaning politics as possible.

Yoda says DO OR DO NOT, THERE IS NO TRY. My life has been trying and failing time after time. My midi-chlorian count must be very low... ...I didn't get to choose my parents so I got crappy DNA. The World doesn't seem to need people like me anywhere. All the skills and talent I think I have seem to be useless in current capitalism, while the skills and talent I should have I do not have at all. The World wants tall, sporty, beautiful and handsome people. I am short, ugly and not sporty at all. The World wants video game coders. I have never been into video games and I am a crappy coder for someone with a university degree in engineering. In the 80's when I was a teenager I did not understand that the gaming industry would be HUGE in the 21st century. I thought I am good if I can do some math, but I have learned that employers do nothing with math heads without a million other skills (which I of course don't have!). Capitalism needs productive people. I am not productive. Social networks are important. I am not into that at all, because I am extremely introverted. I like to be alone most of the time.

Sorry World, I don't "fit in." Someday you will get rid of me when I die. Before that happens, I listen to music with cross feed and I don't give a crap about anything else.


----------



## bigshot

I’m gonna eat some worms…


----------



## FlavioWolff

This thread is a true patrimony of Head-Fi.


----------



## redrol

I've been using Cool Edit, which is now Adobe Audition, for maaany years. I really enjoy its low-level wave editing stuff. Sometimes I'll make a track just by editing a single waveform endlessly. Great for just banging out mic-recorded tracks as well.


----------



## BobG55 (Sep 14, 2022)

bigshot said:


> I’m gonna eat some worms…


Personally, I would start with the gummy ones. Then work my way up gradually (or down since that’s where they live). 🐛 🪱


----------



## Vamp898

I just discovered this thread and I don't really get what crossfeed is supposed to do.

I was so curious that I actually tried it with a few of my songs but somehow I still don't really get it.

What exactly is it supposed to do?

For example I have one song where the drum is on the middle right and the sound spreads from this position across the whole room and reverbs/reflects from the wall from different positions.

With crossfeed this effect is lower. So the drum is now a bit more in the middle and this reflection and "travel" effect of the drum is almost completely gone. So it's like the room around the drum disappeared and now it's only the drum without any room. But... why? Is this the effect?

I read it sounds best with good acoustic recordings so I tried again with some of my best live acoustic recordings and it's just that everything got closer. Instead of a concert hall, I'm suddenly in a room, but everything still sounds like in a hall. Just weird. The large stage the instruments were on just shrunk and every instrument is more tightly together, but they still have the specific hall sound. It sounds a bit like they played the concert in a smaller room and used some kind of hall effect.

Also the crowd that was "around" more is now a bit more left/right and less "around", and it sounds more like I'm standing inside people instead of people standing around me.

With faster music it tends to distort a little and just starts to sound mushy.

I don't get what it is supposed to do. I checked some sample recordings on the internet that should show exactly how it works, with some before/after examples.

But the before recordings sound very strange and unnatural and have a totally messy, all-over-the-place soundstage, layering and so on. So is the idea to fix such bad recordings, where the mixer went all out and placed the instruments just randomly all over the place?

At least with the acoustic recordings I have, everything just starts to sound off and wrong, so maybe I don't have the material to profit from this.


----------



## 71 dB

Vamp898 said:


> I just discovered this thread and i don't really get what crossfeed is supposed to do.


It is supposed to simulate the acoustic cross-feed you get when you listen to speakers (both ears hear sound from both speakers, which reduces channel separation. With headphones this doesn't happen, and for some people, me included, the resulting channel separation feels unnaturally large, but using cross-feed this problem can be eased or even fixed).



Vamp898 said:


> I was so curious that i actually tried it with a few of my songs but somehow i still don't really get it.


The best way to understand what cross-feed does is to use test signals (for example pink noise) and play them on the left or right channel only. Then compare the sound with cross-feed on and off. It should be clear how cross-feed reduces the unpleasantness of having sound in one ear only, especially at low frequencies. The next step is to use cross-feed with recordings with very ping-pongy stereophony. Some recordings don't even "need" or benefit from cross-feed, because they have been mixed to not contain large channel separation. Cross-feed doesn't work with binaural recordings. The more ping-pongy, the more benefits there are.
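The basic recipe (also quoted from the DSP Manager page at the top of the thread) is: mix a delayed, low-pass-filtered, attenuated copy of the opposite channel into each channel. A toy sketch of that idea is below. It is not any particular plugin's algorithm, and the delay, cutoff and level defaults are made up for illustration; real implementations such as bs2b tune and shape these quite differently.

```python
import math

def crossfeed(left, right, fs=44100, level_db=-10.0, delay_ms=0.3, cutoff_hz=700.0):
    """Toy crossfeed: add a delayed, low-pass-filtered, attenuated copy of
    the opposite channel to each channel. Parameter values are illustrative
    assumptions, not taken from any real plugin."""
    gain = 10 ** (level_db / 20.0)                  # -10 dB -> ~0.316 linear gain
    delay = int(fs * delay_ms / 1000.0)             # interaural-style delay in samples
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)   # one-pole low-pass coefficient

    def bleed(x):
        # Delay the channel, then a crude "head shadow" low-pass, then attenuate.
        delayed = [0.0] * delay + list(x[:len(x) - delay])
        out, acc = [], 0.0
        for s in delayed:
            acc = (1.0 - a) * s + a * acc
            out.append(gain * acc)
        return out

    new_left = [l + b for l, b in zip(left, bleed(right))]
    new_right = [r + b for r, b in zip(right, bleed(left))]
    return new_left, new_right
```

Feeding it the one-channel-only test signal suggested above makes the effect obvious: the silent ear now hears a quieter, duller, slightly late copy of the loud channel instead of nothing.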



Vamp898 said:


> What exactly is it supposed to do?


Read above.



Vamp898 said:


> For example i have one song where the drum is on the middle right and the sound spreads from this position, across the whole room and reverbs/reflects from the wall from different positions.
> 
> With crossfeed this effect is lower. So the drum is now a bit more in the middle and this refelection and "travel" effect of the drum is almost completely gone. So its like the room around the drum disappeard and now its only the drum without any room. But... why? Is this the effect?


Yes, this is what happens more or less. Without cross-feed the drum is heard too far from the middle, and cross-feed moves it closer to where it "should" be and where it is when you listen to the track on speakers. Your ears are used to unnatural spatial cues, and now when cross-feed makes them more natural, they feel "anemic" in comparison, but if you listen longer your ears adjust to the more natural spatiality and you start to notice the benefits: reduced listening fatigue, more natural bass, more ordered spatiality. That's what my ears do at least. I can't know how you experience things.

The level of cross-feed is important because recordings differ from each other. Ping-pongy recordings need extreme cross-feed, while some almost binaural recordings may benefit from only a very subtle cross-feed, if at all. So, if cross-feed kills too much of the "room", then the level is too high. The trick is to have the correct balance, and without cross-feed there is too much going on.



Vamp898 said:


> I read it sounds best with good acoustic recordings so i tried again with some of my best live acoustic recordings and its just that everything got closer. Instead of a concert hall, im suddenly in a room, but everything still sounds like in a hall. Just weird. The large stage the instruments were on just shrunk and every instrument is more tightly together, but they still have the specific hall sound. It sounds a bit like they played the concert in a smaller room and used some kind of hall effect


Again, the level of cross-feed is important. Also, when you put the cross-feed on, your hearing compares the sound to the non-cross-fed sound and it feels "dead" for a minute or so, but if you keep listening your ears adapt and the sound "widens" up again, only in a less fatiguing and more natural way. This is similar to speed blindness. Listening to music without cross-feed is like driving very fast, and when you reduce your speed for a while it feels like your speed was almost zero.



Vamp898 said:


> Also the crowd that was "around" more, is now a bit more left/right and less "around" and it sounds more like im standing inside people instead of people stand around me.


Again, how does the crowd sound after a while when your ears have adapted?



Vamp898 said:


> With faster music it tends to distord a little and just starts to sound mushy.


I have never experienced this and I don't know why it would happen. To my ears proper cross-feed cleans up the spatiality and makes it easier to hear separate sounds in the music, so I'd say cross-feed makes music less mushy for me.



Vamp898 said:


> I don't get it, what it is suposed to do? I checked some sample recordings on the internet that should show exactly how it works with some before/after examples.
> 
> But the before recordings sound very strange and unnatural and have totally messed and over the place soundstage, layering and so on. So is the idea to fix such bad recordings where the mixer went all out and placed the instruments just randomly all over the place?


As I said above, the more ping-pongy the recording, the more benefit can be had from cross-feed. Personally I get benefits with maybe 98 % of recordings. The remaining 2 % are spatially so binaural that using even the mildest amount of cross-feed is stupid.



Vamp898 said:


> At least with the acoustic recordings i have, everything just starts to sound off and wrong so maybe i don't have the material to profit from this.


I don't know what material you listen to, but from everything you say it looks like you are cross-feeding too aggressively. If possible, try milder settings. Also, you probably need to just listen to cross-fed music more to get used to "more natural" spatial cues with headphones. Pay attention to how cross-feed affects listening fatigue and how "physical" bass sounds are.


----------



## Vamp898

71 dB said:


> It is supposed to simulate the acoustic cross-feed you get when you listen to speakers (both ears hear sound from both speakers, which reduces channel separation. With headphones this doesn't happen, and for some people, me included, the resulting channel separation feels unnaturally large, but using cross-feed this problem can be eased or even fixed).
> 
> 
> The best way to understand what cross-feed does is to use test signals (for example pink noise) and play them on the left or right channel only. Then compare the sound with cross-feed on and off. It should be clear how cross-feed reduces the unpleasantness of having sound in one ear only, especially at low frequencies. The next step is to use cross-feed with recordings with very ping-pongy stereophony. Some recordings don't even "need" or benefit from cross-feed, because they have been mixed to not contain large channel separation. Cross-feed doesn't work with binaural recordings. The more ping-pongy, the more benefits there are.
> ...


Ah, that makes sense. As most music I have in my collection is mostly newer (and Japanese) music, it's mixed with/for headphones, which explains why, with my music collection, I don't experience an improvement.


----------



## bigshot

Personally, I don’t care for crossfeed and for me it messes up channel separation, but some people like it. It doesn’t add spatial aspects to the sound. It just mushes stuff up.


----------



## castleofargh

Vamp898 said:


> Ah, that makes sense. As most music I have in my collection is mostly newer (and Japanese) music, it's mixed with/for headphones, which explains why, with my music collection, I don't experience an improvement.


There is also the fact that all the changes are somehow user dependent. So it's very common to try many crossfeed solutions and find some better than others for ourselves. Or perhaps none of the solutions will work because you'd need something more customized/advanced. 
It's a little like binaural music in a sense. It comes from a good intention, but in the end most people would need something different for their own heads, leading to many who would rather not have it at all. Also the center can sound weird depending on the implementation.
I said it often, but to me the main benefit was that I found music to be a little less tiring over long listening sessions with crossfeed turned ON.
But yeah, the objective is to turn the 180° panning back to the 60° or so of speakers.
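A much cruder way to get part of that narrowing, without the interaural delay and filtering of real crossfeed, is plain mid/side width scaling. The sketch below is only meant to illustrate the "shrink the panning" half of the idea; the `width` default is an arbitrary assumption, not a recommended setting.

```python
def narrow_width(left, right, width=0.33):
    """Crude stereo narrowing via mid/side scaling: width=1.0 keeps the
    full spread, 0.0 collapses to mono. Unlike real crossfeed there is no
    delay or low-pass filtering, so it only shrinks level differences."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2.0           # common (center) component
        side = (l - r) / 2.0 * width  # scaled difference component
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```

Because it discards the timing and spectral cues, this tends to sound flatter than a proper crossfeed, which is arguably why dedicated crossfeed filters exist at all.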


----------



## 71 dB

Vamp898 said:


> Ah, that makes sense. As most music I have in my collection is mostly newer (and Japanese) music, it's mixed with/for headphones, which explains why, with my music collection, I don't experience an improvement.


Yes, modern pop music is often mixed in ways that is more suitable for headphones, but even they might benefit from subtle cross-feed in my experience. At least you tried cross-feed out.

*A friendly warning*: Your music diet might be too unilateral. You might want to expand your taste and if you do so you probably encounter recordings that benefit from cross-feed more than your modern Japanese headphone music.


----------



## Vamp898

71 dB said:


> Yes, modern pop music is often mixed in ways that is more suitable for headphones, but even they might benefit from subtle cross-feed in my experience. At least you tried cross-feed out.
> 
> *A friendly warning*: Your music diet might be too unilateral. You might want to expand your taste and if you do so you probably encounter recordings that benefit from cross-feed more than your modern Japanese headphone music.


There might be a misunderstanding.

With modern music I meant music that was recorded/mixed/mastered in recent years, not pop music. Even though I am often surprised by how old some music actually is that I was sure came out only a few weeks ago and then suddenly is 10+ years old.

I listen to pretty much all kinds of genres except for Rap/HipHop.

Even though there is Pop music like this in my collection like these



I also listen to almost all varieties of metal and rock



To several type orchestral recordings


And even Jazz



So for me its not a matter of genre but a matter of age of when the music was recorded/mixed/mastered.

Some songs from サカナクション sound like they are much older than they are though, like "Slow Motion".


I also have several Artists from Germany, France, Spain and the US in my collection, but they are not the majority


----------



## bigshot

I think it depends on your personal tastes whether cross feed is necessary or even a benefit. For me, the only thing I might consider it for is early stereo like the first Beatles records in stereo where the instruments were isolated in a single channel. But even with that, I can listen to it without cross feed and not be particularly irritated.

It's likely that cross feed isn't necessary for you.


----------



## 71 dB (Sep 22, 2022)

Vamp898 said:


> There might be a misunderstanding.
> 
> With modern music I meant music that was recorded/mixed/mastered in recent years, not pop music. Even though I am often surprised by how old some music actually is that I was sure came out only a few weeks ago and then suddenly is 10+ years old.
> 
> ...



I tested what cross-feed levels works best for me and got these results:

VIDEO 1: -10 dB
VIDEO 2: OFF (excellent channel separation for headphones!)
VIDEO 3: -8 dB
VIDEO 4: -6 dB
VIDEO 5: -8 dB
VIDEO 6: -10 dB
VIDEO 7: -10 dB
VIDEO 8: OFF (perfect channel separation for headphones! Even better than video 2!)

Overall these music samples have good channel separation on headphones and I don't blame you for not hearing massive benefits using cross-feed on these (for me there are some benefits except for videos 2 & 8 which should be listened without cross-feed). Video 4 has the most excessive spatiality for headphones and the -6 dB crossfeed level is moderate. Typically the music I listen to has much worse channel separation for headphones and I rarely encounter something as good as in videos 2 and especially 8.
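Choosing a level per recording, as in the list above, is a listening judgment. For anyone wanting a rough automated proxy for how ping-pongy a mix is before reaching for the crossfeed knob, interchannel correlation is one candidate. This is purely a suggested heuristic of mine, not how the levels above were chosen:

```python
import math

def channel_correlation(left, right):
    """Pearson correlation between channels: near 1.0 for almost-mono
    mixes (little crossfeed needed), near 0 or negative for hard
    ping-pong stereo (where crossfeed arguably helps most)."""
    n = len(left)
    ml = sum(left) / n
    mr = sum(right) / n
    cov = sum((l - ml) * (r - mr) for l, r in zip(left, right))
    var_l = sum((l - ml) ** 2 for l in left)
    var_r = sum((r - mr) ** 2 for r in right)
    return cov / math.sqrt(var_l * var_r)
```

A single number can't capture a mix's spatiality, of course, but a track scoring near zero here is a reasonable candidate for a stronger crossfeed setting.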


----------



## xxearvinxx (Sep 23, 2022)

I’m torn on using crossfeed.
I am deaf in my left ear and have been since a very young age. That’s part of the reason I got into this hobby. I wanted to give my one good  ear the best listening experience possible.

Crossfeed is great to mix channels, especially for songs that really utilize the left and right channels in the recordings.
I remember first noticing this about a decade ago when I was still in high school listening to cheap Apple earbuds.
While on the bus to school, a song I had heard a hundred times came on my iPod. During a part of the song where there was normally a guitar solo, I only heard drums. I was shocked and confused. I’d heard this song so many times and that part was completely different this time.
Apparently I had accidentally switched the right and left earbuds without noticing when I put them in. That section of the song was recorded with drums in the left channel (my deaf ear) and guitar in the right channel. After switching them back I realized for the first time that my single sided deafness meant that I was not able to fully appreciate some songs with headphones.
Since then I switched my iPod, and now these days my iPhone to play in mono. Problem solved, sorta.

I use an RME ADI-2 DAC and its crossfeed does a good job at fixing the channel separation for me, but not every song utilizes both channels independently.
From what I’ve noticed there seems to be a fuller sounding presentation from songs being played in mono that is more enjoyable.
Occasionally I’ll switch to using crossfeed to get a more stereo sound and better idea of what sounds were meant for which channel. Although I always end up switching back to mono because it just sounds better.

This preference is entirely due to my single sided deafness, but figured I’d add my experience to the conversation.


----------



## bigshot

If you’re deaf in one ear, mono would be best.


----------



## xxearvinxx

bigshot said:


> If you’re deaf in one ear, mono would be best.


I agree.
Still nice to mess with crossfeed on occasion and see what I’m missing though.


----------



## bigshot

You really aren’t missing a lot. A really good mono mix can sound great.


----------



## xxearvinxx

@bigshot Thanks for including the Amps Sound the Same article in your signature. 
It was an interesting read.


----------



## bigshot

No problem. Glad it helped.


----------



## jamesjames

xxearvinxx said:


> I’m torn on using crossfeed.
> I am deaf in my left ear and have been since a very young age. That’s part of the reason I got into this hobby. I wanted to give my one good  ear the best listening experience possible.
> 
> Crossfeed is great to mix channels, especially for songs that really utilize left and and right channels in the recordings.
> ...


I've never understood why crossfeed isn't more popular.  I can no longer listen without it.  I think it unquestionably improves the result, by which I mean it tends to make acoustic music sound more natural - the performance space is apparently more three dimensional.  It's also apparently at some greater distance from me - as at a recital or concert.


----------



## bigshot (Sep 23, 2022)

Do you have a speaker system to compare the soundstage of crossfeed to? If so, does crossfeed approach that?

Apple recently released ear calibration for their spatial audio, and I read people talking about the spatiality and how things sound like they are all around them. But I have the multichannel Blu-ray audio of the same recordings they're talking about, and spatial audio is absolutely nothing like the presentation of a speaker system. It just pots the sound left and right as you turn your head and adds a little bit of a phase trick. I'm wondering if people know what multichannel music actually sounds like. Maybe it's the same with speaker soundstage.


----------



## 71 dB (Sep 23, 2022)

bigshot said:


> Do you have a speaker system to compare the soundstage of crossfeed to? If so, does crossfeed approach that?


Headphones + crossfeed doesn't give the type of soundstage you get with speakers, but at least for me it improves spatiality on headphones. A very large ILD (interaural level difference) makes a sound appear very near one ear, and since crossfeed reduces ILD, those sounds move further away from the ears, at least for me. This alone makes listening much more pleasant, because a sound very near one ear is very annoying and causes a kind of "tickling" sensation for me. Crossfeed also "organizes" the spatiality for me. To me, headphone spatiality without crossfeed is "broken", like shattered all over the place, because of the unnatural levels of spatial cues. My spatial hearing has a hard time making sense of it all and the result is a broken mess.
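To put rough numbers on that ILD reduction (the -12 dB feed level below is just an illustrative setting, not the value of any particular crossfeed implementation):

```python
import math

def ild_after_crossfeed(near=1.0, far=0.0, atten_db=12.0):
    """ILD in dB after mixing the opposite channel in at -atten_db.
    near/far are the linear signal levels of the two input channels."""
    g = 10 ** (-atten_db / 20)           # -12 dB -> ~0.251 linear gain
    ear_near = near + g * far            # near ear gets a little of the far channel
    ear_far = far + g * near             # far ear gets a little of the near channel
    return 20 * math.log10(ear_near / ear_far)

print(ild_after_crossfeed())             # hard-panned source: ILD capped at ~12 dB
print(ild_after_crossfeed(far=0.5))      # partly panned: ~3.5 dB, down from ~6 dB dry
```

So a sound that would otherwise sit "at the ear" (infinite ILD for a hard-panned source) can never exceed the feed attenuation, which is one way to express the "moving away from the ear" effect described above.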

Crossfeed cleans up this mess for me, and while the "virtual soundstage" is much smaller in size (about 3-10 times smaller than a speaker soundstage, depending on the recording), at least it has the same kind of "intactness" as speaker spatiality. Bass feels "real" and not "fake". Sharp transient sounds are placed accurately at one point in space instead of being scattered around my head. Listening to headphones without crossfeed is like watching TV with color, contrast, sharpness etc. set to maximum*. It may look interesting and exciting at first, but the more you watch, the more annoying and tiresome things look. All the detail you think you got by setting the sharpness adjustment to max isn't real. It is fake detail. Then you adjust the TV picture to have a natural picture. It may look less interesting at first, but it is comfortable to look at for longer times. To me crossfeed does something similar for headphone sound.

* If you watch your TV through an almost transparent membrane that reduces sharpness, color, etc. in the picture, these extreme settings may give you the best picture. The membrane here corresponds to the room acoustics in audio.



bigshot said:


> Apple recently released ear calibration for their spatial audio, and I read people talking about the spatiality and how things sound like they are all around them. But I have the multichannel Blu-ray audio of the same recordings they're talking about, and spatial audio is absolutely nothing like the presentation of a speaker system. It just pots the sound left and right as you turn your head and adds a little bit of a phase trick. I'm wondering if people know what multichannel music actually sounds like. Maybe it's the same with speaker soundstage.


I know what multichannel music actually sounds like (I have had a 5-channel speaker system for 21 years), but I haven't tested Apple's spatial audio.


----------



## 71 dB

jamesjames said:


> I've never understood why crossfeed isn't more popular.  I can no longer listen without it.  I think it unquestionably improves the result, by which I mean it tends to make acoustic music sound more natural - the performance space is apparently more three dimensional.  It's also apparently at some greater distance from me - as at a recital or concert.


When I came to this board years ago, I thought ALL people would prefer to use crossfeed if only informed about it, but I hit a brick wall. My assumption seems to have been wrong. A lot of people who have tried crossfeed don't hear any benefits, or even find the result worse. This made me really confused, because I thought I had figured this out and had the science of spatial hearing to back me.

Lately I have started to think that maybe we who prefer crossfeed are different from other people. The way we experience spatiality might differ. Autistic people are typically very sensitive to certain sensory stimuli. For a "normal" person, walking barefoot on wet grass isn't a big thing, but for someone with autism the wet grass slapping against bare feet and legs may cause sensory overload and make the experience extreme. Maybe people who like crossfeed have similar issues with excessive spatial cues?

I am pretty sure I have Asperger's. I am sensitive to certain things such as cold water on my skin. What if this is the reason I love crossfeed? Do you have Asperger's or autism?


----------



## bigshot

People describe crossfeed (and spatial audio) using terms that apply to speaker soundstage… ie: “the sound is a distance in front of me”. I’m curious if some people experience more spatial sound than others, or if it’s placebo effect (or just incorrect use of terms).


----------



## 71 dB (Sep 23, 2022)

bigshot said:


> People describe crossfeed (and spatial audio) using terms that apply to speaker soundstage… ie: “the sound is a distance in front of me”. I’m curious if some people experience more spatial sound than others, or if it’s placebo effect (or just incorrect use of terms).


I experience distance even without crossfeed, but it is just messy compared to crossfeed. Also, as I wrote, crossfeed "removes" sounds at my ear and moves them onto my shoulders. For me, not everything in the headphone virtual soundstage is at a larger distance. A lot of the sounds remain near the head, but the "space" where the sounds are has substantial size. I suppose my spatial hearing detects things like reverberation in the sound and concludes the sounds must happen in a space of vast size, much bigger than a helmet. Without crossfeed the same reverberation contains excessive spatiality and it is difficult to know what it means spatially (fake reverberation? What?).

Terminology is what it is. We have to live with the fact that nobody has bothered to create good terminology for headphone spatiality. I have suggested some terms, but other people tend to reject them so here we are, everyone using their own favorite terms...

Stereophonic sound is based on tricking our spatial hearing. How else can you create soundstages using just two speakers? How can you make sounds appear between the speakers when there is nothing to radiate sound? By tricking spatial hearing. What we hear and experience is not the same as physical reality. So, spatial hearing can be tricked and to me this works also with headphones, just a little differently because the placement of the transducers is so different and there isn't a room. Whether we should call it a placebo effect is a philosophic question. If it is then I can say crossfeed helps me experience the placebo effect that I like.


----------



## jamesjames

In answer to bigshot, I've used speaker systems for many years, have recorded music, and have been recorded.  But that's irrelevant.  I put it badly above perhaps.  Perhaps it would have been better to say I've never understood why there isn't more interest in crossfeed on this website (and particularly the 'Crossfeed' topic!).  Indeed, my impression is that certain contributors here seem to suggest it's a waste of time.  I would simply observe that many of the most serious designers - dCS, Simaudio, Weiss, iFi, etc, etc - continue to invest heavily in refining their crossfeed products.  I don't think they're doing that because no one's interested.  My experience is that many agree that the artificial or 'super' stereo effect inherent in headphone listening can be ameliorated to some extent by good crossfeed implementation.  Speaking for myself, good analogue crossfeed transforms headphone listening from one that's fascinating but quite unnatural, to the most natural playback experience possible.  All forms of playback involve some practical compromises.  Crossfeed obviously makes a big difference to me.  And I know it makes a big difference to others.  So, as I've said, I don't understand the rather negative treatment it seems to receive here.


----------



## bigshot

That’s fine. Everyone is free to add any coloration they want. I use DSPs on occasion myself. My only objection is when these filters are described as restoring spatial dimension to the sound. I’ve experienced that with speakers, but never with headphones. I was hoping Apple’s spatial audio would do that. I went out and bought their $500 cans with great expectations, but it was a bust. I’m still looking for a way to create real soundstage with headphones. There’s the Smyth Realiser, but it has compromises too and it’s very expensive. I’ll find what I’m looking for someday. It shouldn’t be impossible.


----------



## jamesjames

bigshot said:


> That’s fine. Everyone is free to add any coloration they want. I use DSPs on occasion myself. My only objection is when these filters are described as restoring spatial dimension to the sound. I’ve experienced that with speakers, but never with headphones. I was hoping Apple’s spatial audio would do that. I went out and bought their $500 cans with great expectations, but it was a bust. I’m still looking for a way to create real soundstage with headphones. There’s the Smyth Realiser, but it has compromises too and it’s very expensive. I’ll find what I’m looking for someday. It shouldn’t be impossible.


So why do you continue to visit this topic?  You clearly don't find crossfeed worthwhile.  I guess your contributions don't help create a welcoming environment for those who do.


----------



## bigshot

I keep getting attracted by people talking about DSPs that can simulate realistic soundstage with distance and maybe surround information. I think that this will be the next big leap forward for sound quality... spatiality. Dolby and Apple are working on it but it isn't quite there yet. I want to see if someone can recommend something that comes closer. It will be happening soon.


----------



## jamesjames

bigshot said:


> I keep getting attracted by people talking about DSPs that can simulate realistic soundstage with distance and maybe surround information. I think that this will be the next big leap forward for sound quality... spatiality. Dolby and Apple are working on it but it isn't quite there yet. I want to see if someone can recommend something that comes closer. It will be happening soon.


I see.  Fair enough.  But perhaps there's no need to continue to beat the drum on the evils (!) of crossfeed for those who actually turn up to the crossfeed topic to talk about crossfeed ...


----------



## bigshot

I don't say cross feed is evil. I say that everyone can use whatever coloration they want. Just don't tell other people the filter is doing things it isn't really doing. My only objection is to describing cross feed as creating a more dimensional soundstage or placing the sound in front of you like a speaker system. Those are the two things headphones can't do (yet). Sound with headphones is a line between the ears straight through the head. Secondary depth cues can fool you into thinking you hear distance, but it's never more than a couple of inches in front of your face.

Crossfeed is good for flattening out excessive channel separation on headphones, since with speakers the room does that... in addition to adding natural dimensionality.


----------



## jamesjames

bigshot said:


> I don't say cross feed is evil. I say that everyone can use whatever coloration they want. Just don't tell other people the filter is doing things it isn't really doing. My only objection is to describing cross feed as creating a more dimensional soundstage or placing the sound in front of you like a speaker system. Those are the two things headphones can't do (yet). Sound with headphones is a line between the ears straight through the head. Secondary depth cues can fool you into thinking you hear distance, but it's never more than a couple of inches in front of your face.
> 
> Soundstage is good for flattening out excessive channel separation for headphones, since with speakers the room does that... in addition to adding natural dimensionality.


Yes, we've heard it all before ...


----------



## jamesjames

jamesjames said:


> Yes, we've heard it all before ...


... and it really is quite tedious ...


----------



## bigshot

At least I'm making points and not telling other people to shut up.


----------



## xxearvinxx (Sep 24, 2022)

bigshot said:


> I keep getting attracted by people talking about DSPs that can simulate realistic soundstage with distance and maybe surround information. I think that this will be the next big leap forward for sound quality... spatiality. Dolby and Apple are working on it but it isn't quite there yet. I want to see if someone can recommend something that comes closer. It will be happening soon.


I accidentally quoted without posting my reply.


----------



## Steve999 (Sep 24, 2022)

I like variable crossfeed; I find it works to different degrees with different recordings. I have had two of these for years, $159 on Amazon. I love them. I'd guess they measure like dreck and you get reverse-snob-appeal at best, but they do exactly what I want without any degradation in sound that I care about, with a nice strong quiet headphone output with the headphones I use, and the ergonomics are off the charts cool for my purposes. You press the crossfeed button and turn the crossfeed knob; it's continuous from almost no crossfeed to very strong crossfeed, with a mono button if you just don't want any stereo at all. You can use a USB driver via computer, or there are XLR inputs and outputs, and you can get adapters to make it work with RCA jacks. Multiple inputs & outputs. Nice big stepped volume knob. Two headphone outs. My own personal nirvana.


----------



## xxearvinxx (Sep 24, 2022)

bigshot said:


> I keep getting attracted by people talking about DSPs that can simulate realistic soundstage with distance and maybe surround information. I think that this will be the next big leap forward for sound quality... spatiality. Dolby and Apple are working on it but it isn't quite there yet. I want to see if someone can recommend something that comes closer. It will be happening soon.


I agree that DSP is going to be a large part of the future of audio and headphones. I'm not sure how the purists will take this as it becomes even more mainstream. Analog is great, but just like everything else in our lives, technology is continuing to move forward whether people like it or not.
Like the headphone jack being removed from phones. In 2016 Apple was one of the first and was criticized heavily for it. Now, 6 years later, it's all but accepted that a dongle or Bluetooth is just the way to go.

I'm excited to see what improvements come with DSP, crossfeed and spatial audio in the future. Anything to improve the listening experience or change it in a meaningful way.
What I do worry about, though, is that these added features will require processing, and with that, batteries.
What will happen when the chips are no longer supported or can't keep up with the new standards? What about when the battery gives out? Is your expensive new headphone just destined to end up in a landfill?

The benefit of most current audiophile headphones is that they are analog and should last for many years, if not forever, if taken care of.
Maybe it'll be the DACs and amps that will continue to add new features to tweak sound and utilize DSP. With more and more average consumers switching to battery-powered Bluetooth headphones, I see it as just a matter of time before enough improvements are made with the tech that the brands we enjoy start to take this route as well.


----------



## xxearvinxx

Steve999 said:


> I like variable crossfeed, I find it works to different degrees with different recordings. I have had two of these for years, $159 on Amazon. I love them. I’d guess they measure like dreck and you get reverse-snob-appeal at best but they do exactly what I want without any degradation in sound that I care about and a nice strong quiet headphone output with the headphones I use and ergonomics are off the charts cool for my purposes. So you press the crossfeed button, and turn the crossfeed knob, it’s continuous from no crossfeed to very strong crossfeed, with a mono button if you just don’t want any stereo at all. You can use a USB driver via computer or there are xlr inputs and outputs, you can get adapters to get it to work with rca jacks. Multiple inputs & outputs. Nice big stepped volume knob. Two headphone outs. My own personal nirvana. FWIW.


This looks interesting. I’ll have to look more into it. 
Can it work as an XLR switch as well? Like, plug your DAC into the input and then switch between two amps or powered speakers with the output?


----------



## Steve999

xxearvinxx said:


> This looks interesting. I’ll have to look more into it.
> Can it work as an XLR switch as well. Like plug your DAC into the input and then switch between two amps or powered speakers with the output?


I think so! Though I think it has a perfectly good if pedestrian DAC built in.

Anyway, here’s the Behringer product page https://www.behringer.com/product.html?modelCode=P0BK8 and here’s a thumbnail picture of the back:


----------



## jamesjames

bigshot said:


> At least I'm making points and not telling other people to shut up.


Hmmm - perhaps making the same (dismissive) point over and over again - not quite the same thing.


----------



## bigshot

You're dismissed.


----------



## 71 dB

bigshot said:


> My only objection is when these filters are described as restoring spatial dimension to the sound. I’ve experienced that with speakers, but never with headphones. I was hoping Apple’s spatial audio would do that. I went out and bought their $500 cans with great expectations, but it was a bust. I’m still looking for a way to create real soundstage with headphones. There’s the Smyth Realiser, but it has compromises too and it’s very expensive. I’ll find what I’m looking for someday. It shouldn’t be impossible.


I can't speak for others, but I admit I spoke too favorably about crossfeed spatiality when I came here. I said crossfeed gives headphones a soundstage similar to speakers. I didn't expect people to think I meant speaker-like, but of course people misunderstand if possible. Similar in this context of course means that the properties of headphone spatiality move a little bit toward speakers. It doesn't take you to the Moon, but to the ISS, to give an analogy. You are not on the Moon, but AT LEAST you are in "space." Crossfeed uses a simple low-pass filter to simulate the acoustic crossfeed that happens in speaker listening. The frequency response of this "channel leakage" is a smooth approximation of what it is in "real life" due to HRTF: strong at low frequencies and weak at high frequencies. Crossfeed also simulates the time delay of this "channel leakage", which is about 200-250 µs with a typical speaker setup. Because spatial hearing can be fooled, the combination of these two aspects makes headphone sound more natural, less fatiguing, more enjoyable and a bit closer to speaker spatiality for some people such as myself.
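For anyone who wants to see the mechanism concretely, here is a minimal sketch in Python. The 700 Hz cutoff, 250 µs delay and -12 dB feed level are illustrative assumptions, not the values of any particular product:

```python
import math

def crossfeed(left, right, fs=44100, delay_us=250, atten_db=12.0, cutoff_hz=700):
    """Mix a delayed, low-pass-filtered copy of each channel into the
    opposite channel (illustrative parameters, not a specific product)."""
    delay = max(1, round(fs * delay_us / 1e6))       # delay in samples (~11 at 44.1 kHz)
    gain = 10 ** (-atten_db / 20)                    # -12 dB -> ~0.251 linear
    a = math.exp(-2 * math.pi * cutoff_hz / fs)      # one-pole low-pass coefficient

    def filtered_delayed(ch):
        out, state = [], 0.0
        for x in ch:
            state = (1 - a) * x + a * state          # one-pole low-pass, unity DC gain
            out.append(state)
        return [0.0] * delay + out[:len(ch) - delay] # shift the result by the delay

    fl, fr = filtered_delayed(left), filtered_delayed(right)
    out_l = [l + gain * r for l, r in zip(left, fr)]
    out_r = [r + gain * l for r, l in zip(right, fl)]
    return out_l, out_r
```

Feeding this a hard-panned left-only signal leaves the left channel untouched while the right channel picks up a delayed, darker copy at about -12 dB, so the ILD between the output channels is capped at roughly the attenuation setting.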

*Crossfeed DOES NOT make headphones sound like speakers! It makes headphones sound more natural, less fatiguing and more enjoyable to some people.*

To make headphones sound exactly like speakers you need methods so sophisticated they barely even exist! Your expectations have been _unrealistic_. Lower your expectations by 90 % and you may find something that meets the criteria. My expectations have always been realistic, in fact so low that when I discovered crossfeed I was BLOWN AWAY by how much it increased my enjoyment of headphone sound. A simple electric circuit can do this? It felt like magic, but gradually I understood why it is possible. To me headphone sound without crossfeed is just wrong and perverse (unless the recording is mixed for headphones and has binaural-like spatiality).


----------



## 71 dB

bigshot said:


> I don't say cross feed is evil. I say that everyone can use whatever coloration they want. Just don't tell other people the filter is doing things it isn't really doing. My only objection is to describing cross feed as creating a more dimensional soundstage or placing the sound in front of you like a speaker system. Those are the two things headphones can't do (yet). Sound with headphones is a line between the ears straight through the head. Secondary depth cues can fool you into thinking you hear distance, but it's never more than a couple of inches in front of your face.
> 
> Soundstage is good for flattening out excessive channel separation for headphones, since with speakers the room does that... in addition to adding natural dimensionality.


Yes, crossfeed typically puts the sound a couple of inches in front of your face, depending on the recording. The point is that the sound is more _enjoyable_ (for me at least) than without crossfeed (when it sits at the ears causing a tickling sensation + bass feels fake + spatiality is broken + listening fatigue). I don't mind so much that the sound isn't where it would be with speakers, because it is _enjoyable._

Since even speaker spatiality is largely based on fooling spatial hearing, there is nothing wrong with being fooled by virtual spatial cues created by crossfeed. It is not placebo, because crossfeed really does alter the sound clearly audibly.


----------



## jamesjames

Steve999 said:


> I like variable crossfeed, I find it works to different degrees with different recordings. I have had two of these for years, $159 on Amazon. I love them. I’d guess they measure like dreck and you get reverse-snob-appeal at best but they do exactly what I want without any degradation in sound that I care about and a nice strong quiet headphone output with the headphones I use and ergonomics are off the charts cool for my purposes. So you press the crossfeed button, and turn the crossfeed knob, it’s continuous from almost no crossfeed to very strong crossfeed, with a mono button if you just don’t want any stereo at all. You can use a USB driver via computer or there are xlr inputs and outputs, you can get adapters to get it to work with rca jacks. Multiple inputs & outputs. Nice big stepped volume knob. Two headphone outs. My own personal nirvana.


I've also had good results with variable crossfeed.  I've used the iFi iCAN and the SPL Phonitor.  The Phonitor is particularly interesting and allows the user to separately adjust what SPL describes as the 'interaural time' and 'interaural level' settings.  The first covers the timing delay between a sound's arrival at one side of the head and the other; the second covers level differences between more and less directional frequencies.   A chart sets out what SPL thinks are some optimal combinations, but by experimenting you can get a good sense of how the adjustments incrementally change the psychoacoustic effect.  It's fascinating.  Anyone who is interested can look at the Phonitor xe manual online, which I found sets out an excellent explanation of the thinking behind analogue crossfeed.


----------



## 71 dB

Steve999 said:


> I like variable crossfeed, I find it works to different degrees with different recordings. I have had two of these for years, $159 on Amazon. I love them. I’d guess they measure like dreck and you get reverse-snob-appeal at best but they do exactly what I want without any degradation in sound that I care about and a nice strong quiet headphone output with the headphones I use and ergonomics are off the charts cool for my purposes. So you press the crossfeed button, and turn the crossfeed knob, it’s continuous from almost no crossfeed to very strong crossfeed, with a mono button if you just don’t want any stereo at all. You can use a USB driver via computer or there are xlr inputs and outputs, you can get adapters to get it to work with rca jacks. Multiple inputs & outputs. Nice big stepped volume knob. Two headphone outs. My own personal nirvana.


A very reasonably priced product. I tried to look for details about the crossfeed (the min-to-max range in dB, plus the crossfeed curves) but didn't find any.


----------



## jamesjames (Sep 24, 2022)

71 dB said:


> I experience distance even without crossfeed, but it is just messy compared to crossfeed. Also, as I wrote, crossfeed "removes" sounds at my ear and moves them onto my shoulders. For me, not everything in the headphone virtual soundstage is at a larger distance. A lot of the sounds remain near the head, but the "space" where the sounds are has substantial size. I suppose my spatial hearing detects things like reverberation in the sound and concludes the sounds must happen in a space of vast size, much bigger than a helmet. Without crossfeed the same reverberation contains excessive spatiality and it is difficult to know what it means spatially (fake reverberation? What?).
> 
> Terminology is what it is. We have to live with the fact that nobody has bothered to create good terminology for headphone spatiality. I have suggested some terms, but other people tend to reject them so here we are, everyone using their own favorite terms...
> 
> Stereophonic sound is based on tricking our spatial hearing. How else can you create soundstages using just two speakers? How can you make sounds appear between the speakers when there is nothing to radiate sound? By tricking spatial hearing. What we hear and experience is not the same as physical reality. So, spatial hearing can be tricked and to me this works also with headphones, just a little differently because the placement of the transducers is so different and there isn't a room. Whether we should call it a placebo effect is a philosophic question. If it is then I can say crossfeed helps me experience the placebo effect that I like.


Yes, I think you're right to say it's difficult to find good language to talk about these psychoacoustic effects.  The thing we're talking about is inherently impressionistic, so we have to reach for descriptions of our impressions of the sounds that reach our ears.  Your point about this being an almost philosophic question is a good one, I think - I see where you're coming from.  And I also like the effect, whatever language is used to describe it.  I like it because it brings me closer to my sense of the sound of the concert hall.


----------



## gregorio

jamesjames said:


> Yes, I think you're right to say it's difficult to find good language to talk about these psychoacoustic effects.


I’m not so sure it’s difficult to find good language to talk about these psychoacoustic effects. Although it is almost always true that it’s difficult (and somewhat ambiguous) to find good language to talk about how these psychoacoustic effects make you personally feel, perceive, believe or prefer. Especially in this particular case, where there’s such a complexity and wide variation of individual responses. 


jamesjames said:


> The thing we're talking about is inherently impressionistic - so we have to reach for descriptions of our impressions of the sounds that reach our ears.


The thing you’re talking about is not inherently impressionistic, it’s inherently objective. Your personal impression of this “thing” is obviously “impressionistic” though.


jamesjames said:


> Your point about this being an almost philosophic question is good one I think - I see where you're coming from.


Again, I’m not sure it is a philosophical question or if it is, then it’s already been answered. “Placebo”, when used in sound/audio is virtually always an analogous term, so strictly speaking, it is not a placebo effect and even if just using it analogously, we still shouldn’t call it a placebo effect because it’s not analogous. 


jamesjames said:


> And I also like the effect whatever language is used to describe it. I like it because it brings me closer to my sense of the sound of the concert hall.


And I don’t like it whatever language is used to describe it. I don’t like it because it does not bring me closer to my sense of the sound of the concert hall and on top of that, it confuses/muddies what the engineers/artists actually put on the recording.

This particular topic has proven to be problematic in this subforum, for all the above reasons and others. The danger is of presenting or implying individuals’ impressions as facts or universal truths for all or many, omitting facts and/or contrary impressions and thereby being inadvertently anti-scientific.

G


----------



## jamesjames

gregorio said:


> I’m not so sure it’s difficult to find good language to talk about these psychoacoustic effects. Although it is almost always true that it’s difficult (and somewhat ambiguous) to find good language to talk about how these psychoacoustic effects make you personally feel, perceive, believe or prefer. Especially in this particular case, where there’s such a complexity and wide variation of individual responses.
> 
> The thing you’re talking about is not inherently impressionistic, it’s inherently objective. Your personal impression of this “thing” is obviously “impressionistic” though.
> 
> ...


This is certainly becoming quite deep!  Suffice to say, I guess we'll have to agree to disagree.

J


----------



## gregorio

jamesjames said:


> This is certainly becoming quite deep!


I don’t think it’s particularly philosophically deep but it is complex and scientifically problematic because we’re dealing with the complex perceptual processes (which aren’t yet entirely understood/explained) of a highly atypical sound presentation and therefore a wide variety of individual responses which are only partially predictable.


jamesjames said:


> Suffice to say, I guess we'll have to agree to disagree.


As far as the penultimate paragraph of my previous post is concerned, then “sure” but I’m not sure there’s much to rationally disagree with in the rest of it.

G


----------



## jamesjames

Really?  'Didactic' is the word that comes to mind.


----------



## bigshot

Just a reminder... You're in the Sound Science forum. This is a place to learn from people with information you might not be aware of and to share facts you know, hopefully in a way that's useful to other readers.


----------



## castleofargh

I'm really confused by words such as didactic or pedagogy in English, because I never know if the writer means the "true" meaning (from Greek roots) or if it's a pejorative use. Not a good place for ambiguity.  
That second option simply doesn't exist in the French definitions and I do not understand why it does in English. It's like at some point in history, some English guys pushed the sarcasm so far for so long when using those words that even the definitions had to change to reflect it. But then why don't I see the same thing for other words used mostly in a sarcastic way, like, say, "genius"? Is it some movement against fancy Greek-sounding words, or some anti-learning and anti-education movement, that led to this?


----------



## 71 dB

bigshot said:


> Just a reminder... You're in the Sound Science forum. This is a place to learn from people with information you might not be aware of and to share facts you know, hopefully in a way that's useful to other readers.


What can we say about cross-feed that is scientifically accurate?


----------



## bigshot

It blends channels together?


----------



## 71 dB

bigshot said:


> It blends channels together?


Well, that's more like a multichannel/stereo-to-mono converter. The word "together" suggests that the result is a single (mono) channel.


----------



## bigshot

...gradually.


----------



## jamesjames

bigshot said:


> Just a reminder... You're in the Sound Science forum. This is a place to learn from people with information you might not be aware of and to share facts you know about, hopefully in a way that is useful to other readers.


I'll certainly be careful to avoid it in future ...


----------



## gregorio (Oct 5, 2022)

71 dB said:


> What can we say about cross-feed that is scientifically accurate?


What it is, what parameters are changed, the equipment/software that implements it, what crossfeed is designed to achieve, how it does achieve it, why it may work for some and not others, the history of crossfeed and probably several other things too, although we’ve covered most of it already.


jamesjames said:


> I'll certainly be careful to avoid it in future ...


Yep, if you don’t have any facts to share or are not interested in reading or discussing any of the facts or science, this is a subforum to avoid.

G


----------



## bigshot

jamesjames said:


> I'll certainly be careful to avoid it in future ...


That's fine if you want. I didn't mean my comment as a threat. You just didn't seem to understand why you got the reaction you did. I was explaining. In this part of Head-Fi, subjectivity and psychological perception aren't the subject, and we don't react well to those kinds of posts. Here we talk about things that can be proven with tests, and about ways to avoid the influences of bias and the placebo effect. Subjectivity and psychological perception are the topics that dominate the rest of Head-Fi. You'll get a better reaction to posts like that there. Here we talk about accuracy and fidelity.


----------



## sander99

bigshot said:


> subjectivity and psychological perception aren't the subject


Well, to be more precise: actually they are important subjects here, but we try to be aware of them and separate them from actual sound differences.


jamesjames said:


> I guess we'll have to agree to disagree.





gregorio said:


> but I’m not sure there’s much to rationally disagree with in the rest of it.





jamesjames said:


> Really?  'Didactic' is the word that comes to mind.


@jamesjames: I think you misunderstood gregorio. Different people's brains perceive crossfeed differently. You can perceive something different than someone else, like for example gregorio. Your perception is real (as a perception), and gregorio's perception is equally real (as a perception). There is no disagreement in this, except when you claim/think that everybody has the same perception.


----------



## 71 dB

gregorio said:


> What it is,


To me it is a method of "leaking" stereophonic channel information into the opposite channel based (very) roughly on the direct sound of typical speaker listening and the properties of human spatial hearing.



gregorio said:


> what parameters are changed,


Mainly channel difference and channel cross-correlation below about 800 Hz. As a side-effect, the frequency response/bass-treble balance, for example, may change a tiny bit depending on the cross-feed implementation, but these effects are mild compared to the acoustics of speaker listening.



gregorio said:


> the equipment/software that implements it,


Various implementations exist. I have myself constructed the following DIY crossfeeders:

- Default Linkwitz/Cmoy crossfeeder (with 6 crossfeed levels, mono, "almost mono" and treble crossfeed)
- Jan Meier Crossfeed (with 2 crossfeed levels)
- "Wide" cross-feed (Linkwitz/Cmoy at -3 dB, but with a lower cut-off frequency to create about 640 µs ITD at low frequencies)
- Crossfeed for 1/4" headphone outputs (designed to work with various output impedance levels)
- Tiny 2 level crossfeeder for portable devices.
- Simple cross-feed for my late mother to listen to TV sound on headphones.
- Balanced 6 level Linkwitz/Cmoy as a commissioned work.

I have also written Nyquist plugins to be used in Audacity as cross-feed effects.
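For readers curious what such a filter looks like in software, here is a minimal, illustrative Python sketch of a Linkwitz/Cmoy-style crossfeed. It is not a clone of any of the devices listed above; the cut-off frequency and attenuation are assumed typical values. Each output channel receives a first-order low-passed, attenuated copy of the opposite channel.

```python
import math

def crossfeed(left, right, fs=44100, fc=700.0, atten_db=-9.0):
    """Minimal Linkwitz/Cmoy-style crossfeed sketch (illustrative only;
    fc and atten_db are assumed typical values, not taken from any
    specific device). Each output channel receives a first-order
    low-passed, attenuated copy of the opposite channel."""
    g = 10 ** (atten_db / 20.0)              # linear crossfeed gain
    a = math.exp(-2.0 * math.pi * fc / fs)   # one-pole lowpass coefficient
    out_l, out_r = [], []
    lp_l = lp_r = 0.0
    for l, r in zip(left, right):
        lp_l = (1.0 - a) * l + a * lp_l      # lowpass the left channel
        lp_r = (1.0 - a) * r + a * lp_r      # lowpass the right channel
        out_l.append(l + g * lp_r)           # feed filtered right into left
        out_r.append(r + g * lp_l)           # feed filtered left into right
    return out_l, out_r
```

A real design would also compensate for the slight level and tonal shift the summed signal causes, which is where the various implementations differ.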



gregorio said:


> what crossfeed is designed to achieve,


It is designed to mitigate the unnatural and fatiguing spatiality that some people experience with headphones.



gregorio said:


> how it does achieve it,


By removing from the sound the spatial aspects that cause the feeling of unnatural and fatiguing spatiality for some people. The main trick used to achieve this is limiting the ILD at low frequencies to no more than about 10 dB.



gregorio said:


> why it may work for some and not others,


This is something we don't know much about, but I think it may have something to do with the fact that some people are more sensitive to certain stimuli such as sound.



gregorio said:


> the history of crossfeed and probably several other things too, although we’ve covered most of it already.
> 
> G


Probably...


----------



## bigshot (Oct 5, 2022)

It has nothing to do with spatial hearing. Reducing channel separation doesn’t add distance.

Are you trolling now? This has already been beaten to death here. It’s pointless to keep bringing it up.


----------



## 71 dB

bigshot said:


> It has nothing to do with spatial hearing. Reducing channel separation doesn’t add distance.


I believe spatial hearing uses spatial cues to decode distance, because ears do not have measuring sticks. Reducing channel separation _may_ result in spatial cues that spatial hearing decodes as greater distance. Cross-feed is not a "distance generator" as such, but for some people, such as myself, it helps place the sounds further away from the ears, because extreme channel separation indicates a sound very near one ear (it is not rocket science to work out why, and I don't understand how this is so difficult for you to get).



bigshot said:


> Are you trolling now? This has already been beaten to death here. It’s pointless to keep bringing it up.


You have beaten to death your pro-speaker propaganda, but at least I don't call you a troll!


----------



## bigshot

Blending channels doesn’t affect primary or secondary depth cues.


----------



## 71 dB (Oct 5, 2022)

bigshot said:


> Blending channels doesn’t affect primary or secondary depth cues.


Yes it does! Whether the changes are positive or negative is another story, and people may hear the result differently. You seem to assume cross-feed is like adding milk to coffee, but it is more delicate than that, because

(1) The blending is frequency dependent (ILD is shaped into something that very roughly imitates natural HRTF-based ILD)
(2) The blending happens with a delay corresponding to a typical speaker angle (ITD shaping)

To me, headphone listening is about experiencing contradictory spatial cues. Some cues (typically in the recording, in the form of e.g. reverberation) indicate a large distance, while the excessive spatiality indicates a very close sound. Even if crossfeed doesn't "add" new cues, it can make some contradictory cues weaker, allowing the relevant cues to dominate the experienced spatiality.
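As a rough sanity check on point (2), the ITD corresponding to a given source angle can be estimated with Woodworth's classic spherical-head formula; the head radius and speed of sound below are assumed typical values, not measurements:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural
    time difference: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

# A source at the standard +/-30 degree stereo speaker angle:
itd_us = woodworth_itd(30.0) * 1e6   # roughly 260 microseconds
```

That puts the delay on the order of a few hundred microseconds, which is the region crossfeed designs aim for.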


----------



## bigshot

No it doesn't, because it doesn't alter timing, which is the core aspect involved in distance cues.


----------



## 71 dB

bigshot said:


> No it doesn't, because it doesn't alter timing which is the core aspect involved in distance cues.


Explaining things to you feels like explaining them to the monkey in your avatar!

Crossfeed doesn't do everything, but that doesn't mean it achieves nothing. For me it achieves a lot!


----------



## bigshot (Oct 5, 2022)

Crossfeed reduces channel separation. That’s what it does. It doesn’t turn your headphones into the Royal Albert Hall, it just moves the sound more to the middle of your head instead of it coming from the sides of your head.

If you subjectively like the effect, great. But it is what it is.


----------



## 71 dB

bigshot said:


> Crossfeed reduces channel separation.


Yes. Speakers in a room + HRTF change channel separation too (it can be increased or reduced).



bigshot said:


> That’s what it does.


Yes. My point is that what it does isn't much "on paper", but for some people like me it means a surprisingly big improvement. Why would ANYONE use it otherwise? Why would anyone even have invented it in the first place if it didn't do something positive? Crossfeed doesn't work for you, but it does for many! ACCEPT it already and move on!



bigshot said:


> It doesn’t turn your headphones into the Royal Albert Hall


Nobody says it does. That's crazy talk, but it can make recordings made in the Royal Albert Hall sound BETTER for some of us, because it removes contradictions and allows the recorded spatial cues from the Royal Albert Hall to create BETTER headphone spatiality. Yes, all those delays and whatnot you mention are the real cues! Those are in the recording because the acoustics of the Royal Albert Hall created them. They just don't work for me if I am force-fed excessive, unnatural ILD!



bigshot said:


> , it just moves the sound more to the middle of your head instead of it coming from the sides of your head.


Sounds are bent forward a little bit, and sounds at the ears jump to the shoulders and are less annoying.



bigshot said:


> If you subjectively like the effect, great. But it is what it is.


It is what it is, and it revolutionised my headphone listening! Five years ago I thought it did this for everyone, but now I know I am part of a minority.


----------



## bigshot

Channel separation with headphones just makes the sound go from the center of your head to your ears. Altering separation doesn't add distance or directionality, both of which are what make speakers "spatial" sounding.

We’ve been through all this before.


----------



## 71 dB

bigshot said:


> Channel separation with headphones just makes the sound go from the center of your head to your ears. Altering separation doesn't add distance or directionality, both of which are what make speakers "spatial" sounding.
> 
> We’ve been through all this before.


I wonder, have you ever heard binaural recordings?

Your "ear to ear" sound on headphones really only applies when the recording has zero spatial cues. If it has so much as reverberation, MY spatial hearing starts to place the damn sound OUTSIDE my head.


----------



## bigshot (Oct 5, 2022)

Binaural recordings are recordings of the spatial cues of the recording venue. Those spatial cues are baked into the recording and are time based. They have nothing to do with channel separation. Headphones with crossfeed don't create binaural recordings. Apples and oranges.

Your spatial hearing is INSIDE your head.


----------



## 71 dB

bigshot said:


> Binaural recordings are recordings of the spatial cues of the recording venue. Those spatial cues are baked into the recording and are time based. They have nothing to do with channel separation. Headphones with crossfeed don't create binaural recordings. Apples and oranges.
> 
> Your spatial hearing is INSIDE your head.


To me, binaural recordings on headphones create a rather strong feeling of acoustic space around me. I assume it is the same for you and other people (no?). So, if that assumption is correct, binaural recordings PROVE that you can have significant distance with headphones, GIVEN that the signal has headphone-compatible spatiality, and indeed binaural recordings try to optimize for exactly that!

Recordings done with other types of microphone setups do have time-based information. It just isn't optimized for headphones. My spatial hearing tries to use such not-so-optimal spatial information, and that's why I experience out-of-head spatiality with headphones depending on the recording (some microphone setups, such as ORTF, have binaural-like properties; some others, such as a Decca tree, not so much).

So, the time-based spatial cues, among other spatial cues, might allow reasonable spatiality on headphones IF contradictory spatial information, such as excessive channel separation, is dealt with.

Every recording is a separate fruit. Some of those are closer to apples, some others (Tangerine Dream?) are closer to oranges.


----------



## bigshot (Oct 6, 2022)

You don’t listen at all. You make stuff up and defend it to the death. You quote my comment and reply, but your reply doesn’t address my comment at all. You just ignore it and make more stuff up.


----------



## 71 dB

bigshot said:


> You don’t listen at all. You make stuff up and defend it to the death. You quote my comment and reply, but your reply doesn’t address my comment at all. You just ignore it and make more stuff up.


Don't tell me I don't "listen to you at all". I answered the claim you made about my abilities. I am not making stuff up. I am telling the facts as I see them.

Fact one: Binaural recording can have "out of head" spatiality with headphones.
Fact two: Monophonic recordings have an in-the-middle-of-the-head spatiality with headphones.

So, when we move from mono to binaural in terms of spatiality, we have two options:

(a) at some threshold the spatiality suddenly "jumps" from inside the head to outside the head.
(b) there is a gradual shift from the middle of the head to outside the head.

MY spatial hearing experiences (b).


----------



## bigshot (Oct 6, 2022)

Placebo

It’s like whack-a-mole. I answer something, you move on to something else, I answer that, and the first thing that I’ve already answered pops up again. We’ve been through this whole cycle at least twice now. I don’t know why. I answer with one-sentence answers and you use that as a springboard for six paragraphs of non-replies. It’s like a hydra: cut off one head and three appear. You’re a perpetual motion argument machine on this. I tire of argument without discussion.


----------



## sonitus mirus

71 dB said:


> Don't tell me I don't "listen to you at all". I answered the claim you made about my abilities. I am not making stuff up. I am telling the facts as I see them.
> 
> Fact one: Binaural recording can have "out of head" spatiality with headphones.
> Fact two: Monophonic recordings have an in-the-middle-of-the-head spatiality with headphones.
> ...


Not everyone has the same experience, even with healthy hearing.  I just listened to the Virtual Barber Shop 8D on YouTube with my Denon D5000 headphones connected to my RME ADI-2 DAC.  I am able to instantly switch between mono and stereo, and I can also set the crossfeed to 5 different levels.  

While there is a clear difference to me when I switch from mono to stereo, I never truly experience positioning of sound outside my head. It all sounds like it is coming from two speakers pointed at my ears, with varying volume levels and timing differences. Front and back are interchangeable positions that I am able to swap around, in a similar manner to an optical illusion where something looks like either an old woman or a beautiful young girl.





https://i.ytimg.com/vi/P9iv173VtGM/maxresdefault.jpg

I do perceive left and right positioning with binaural recordings accurately, but it still seems to be generated in my head, and not outside.

My Apple AirPods Max spatial audio setting, when used to watch a movie, is quite convincing to me, but multi-speaker audio/music is also clearly different to mono or stereo speakers.


----------



## 71 dB (Oct 6, 2022)

sonitus mirus said:


> Not everyone has the same experience, even with healthy hearing.  I just listened to the Virtual Barber Shop 8D on YouTube with my Denon D5000 headphones connected to my RME ADI-2 DAC.  I am able to instantly switch between mono and stereo, and I can also set the crossfeed to 5 different levels.
> 
> While there is a clear difference to me when I switch from mono to stereo, I never truly experience positioning of sound outside my head. It all sounds like it is coming from two speakers pointed at my ears, with varying volume levels and timing differences. Front and back are interchangeable positions that I am able to swap around, in a similar manner to an optical illusion where something looks like either an old woman or a beautiful young girl.
> 
> ...


That virtual barber shop works really well for me, the way binaural recordings are supposed to work, I believe. _Everything_ is outside my head, ranging from several metres to a few centimetres. The barber and the guitarist are behind me, except when the barber moves to my "front". It is surprising for me to hear that some people hear these binaural recordings INSIDE their heads! That's obviously not how it should be!


----------



## 71 dB (Oct 6, 2022)

bigshot said:


> Placebo


Not placebo, but the kind of illusion that spatial audio is based on. Not much different from 3D movies, where two different 2D pictures, one for each eye, create an illusion of depth. Are you saying that experiencing 3D movies as 3D is placebo and we should just see two overlapping 2D pictures?



bigshot said:


> It’s like whack a mole. I answer something, you move on to something else, I answer that, the first thing that I’ve already answered pops up again. We’ve been all through this cycle at least twice now. I don’t know why. I answer with one sentence answers and you use that as a springboard for six paragraphs of non replies. It’s like a hydra- cut off one head and three appear. You’re a perpetual motion argument machine on this. I tire of argument without discussion.


You assume your answers are definitive! The fact that this continues is a sign they are not.


----------



## bigshot (Oct 6, 2022)

Binaural recordings have real room reflections recorded right into them. Those timing differences are what give the illusion of space. It is nothing like crossfeed.

Crossfeed has no depth cues involving timing. All it does is blend the two stereo channels. This effectively reduces separation and moves the sound from the sides of the head towards the center of the head. If there is any space involved, it's between the ears.

Crossfeed can't reduce excessive spatiality or create spatial depth. All it can do is take the curse off of ping-pong stereo on headphones. If you want to create synthetic spatial cues, you have to modify timing, like with a reverb.

The fact that most foolish things continue in Sound Science is a testament to obliviousness. There's no reason to spend all our time arguing over things like hearing differences between cables or high bit rates. That stuff is self evident. The only reason we continue arguing about it is because one person grabs on to an incorrect idea and won't let go, no matter how many facts are presented to disprove it. You've admitted in the past that your description of the effect of crossfeed is purely subjective, yet here you are trotting it out as "fact" again. The circular part of this argument is you.


----------



## castleofargh

I agree with @bigshot that crossfeed doesn't handle distance cues in any meaningful way. 

But I'm also with @71 dB, at least in a potential way. Here we're not discussing normal listening conditions. Instead we're dealing with a fight between us, headphones, and stereo music. Several variables contradict the original (desired) spatial cues and often make the brain go "F it, let's collapse the whole thing on itself". If some action from crossfeed gets the brain to relax about some of that disrupting info, perhaps it won't be so keen on making the whole thing collapse? Of course we wouldn't get the "proper" dimensions just like that, but perhaps they can get noticeably bigger for some listeners, compared to the almost nothing imagined otherwise?
I don't have any knowledge that allows me to completely dismiss that possibility.

Just to contradict myself a third time, I'm not feeling that from crossfeed. If anything, I feel like I lose more on the sides than I gain at the front, so the space ends up smaller for me. I'm defending a feeling I don't have; that's how open-minded I'm willing to be today.


----------



## bigshot

There are a million different things that can conceivably create an environment conducive to encouraging a subjective impression of depth... a glass of wine, a particular color, a full belly... it's possible that crossfeed or a rolled off high end or a bass boost or some other sort of unrelated modification of sound qualities might trigger that too. But that would be entirely unique to an individual. I don't see any reason why crossfeed or EQ would objectively create spatial depth cues. In fact, as you say, it draws the sound of headphones closer into the center of the head, so it actually reduces whatever infinitesimal depth headphones have.

If 71 just said that he personally perceives this illusion, and that it isn't something anyone else would necessarily perceive, I wouldn't have anything to argue against. In fact, at least two times he's broken down and admitted just that, and it all stopped for a while. But like a bad penny, he keeps going back to describing crossfeed as objectively spatial. It's incredibly tiresome and counterproductive to getting any kind of meaningful discussion going around here. I don't mean to imply he's the only one guilty of that, though.


----------



## 71 dB (Oct 6, 2022)

bigshot said:


> If 71 just said that he personally perceives this illusion, and that it isn't something anyone else would necessarily perceive, I wouldn't have anything to argue against. In fact, at least two times he's broken down and admitted just that, and it all stopped for a while. But like a bad penny, he keeps going back to describing crossfeed as objectively spatial. It's incredibly tiresome and counterproductive to getting any kind of meaningful discussion going around here. I don't mean to imply he's the only one guilty of that, though.


There seems to be some degree of miscommunication here. Maybe I am not clear enough, but nowadays I hold the belief that people hear things differently. I am on team cross-feed myself.

We need to keep objective facts, such as the spatial cues in recordings, separate from subjectivism (how we experience those cues). When I talk about the objective things, my tone is more objective. When I talk about the subjective things, my tone is subjective.


----------



## bigshot (Oct 6, 2022)

You are not as clear in your tone as you think you are, and you're mixing things together that aren't at all the same. Crossfeed and binaural recording are two completely different things.


----------



## 71 dB

bigshot said:


> You are not as clear in your tone as you think you are, and you're mixing things together that aren't at all the same. Crossfeed and binaural recording are two completely different things.


Yes, cross-feed is completely different from binaural, BUT crossfeed can shape the spatiality in a direction closer to binaural (for example, binaural recordings have very small ILD at low frequencies unless the sound originates very near the head). A red Toyota is not a red Ferrari, but it is "closer" than a blue Toyota. At least the colour is the same.


----------



## bigshot

71 dB said:


> Yes, cross-feed is completely different from binaural, BUT crossfeed can shape the spatiality in a direction closer to binaural (for example, binaural recordings have very small ILD at low frequencies unless the sound originates very near the head).


Bingo. Subjective impression stated as objective fact.

Crossfeed cannot shape spatiality like binaural because it lacks any of the depth cues related to sound reflection and timing. Those depth cues are what gives binaural its spatiality. Without them, it's just a recording that is very close to mono. Objectively, crossfeed can only draw channel separation closer to mono, which is perceived with headphones as being in the center of the head. So crossfeed does the opposite of binaural recording.


----------



## 71 dB

bigshot said:


> Bingo. Subjective impression stated as objective fact.


ILD is not a "subjective impression". It is an objective technical property. If I say it is 80°F out because the thermometer says so, that is an objective fact. If I say it is warm out, I am stating my subjective opinion about the temperature. When I talk about crossfeed shaping spatiality toward binaural, I am talking about the objective things (I even gave you an example!), so please don't tell me I confuse subjectivity and objectivity when I don't.

I can say TO ME crossfeed makes sound more binaural, but that is MY subjective opinion, not a general objective fact.



bigshot said:


> Crossfeed cannot shape spatiality like binaural because it lacks any of the depth cues related to sound reflection and timing. Those depth cues are what gives binaural its spatiality. Without them, it's just a recording that is very close to mono. Objectively, crossfeed can only draw channel separation closer to mono, which is perceived with headphones as being in the center of the head. So crossfeed does the opposite of binaural recording.


I have admitted crossfeed is very simple. Somehow* it manages to improve headphone sound a lot for me and some other people.

* to me the scientific explanations make sense, but whenever I talk about them I am accused of making this TOO objective, so I don't know how to talk about it.


----------



## bigshot

ILD is reduced with crossfeed. Less, not more.


----------



## 71 dB

bigshot said:


> ILD is reduced with crossfeed. Less, not more.


Don't you understand anything!?

Stereo sound => Larger ILD at low frequencies
Binaural sound => Small ILD at low frequencies (except for sound near ear)
Crossfeed => large ILD made smaller

CONCLUSION: crossfeed makes stereo sound (mixed for speakers) closer to binaural.
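The arithmetic behind that third line is easy to check. In this hypothetical example (the amplitudes and the flat -12 dB cross-mix gain are illustrative simplifications of a real low-pass-filtered crossfeed), a hard-panned tone with 20 dB of ILD ends up with roughly 9 dB:

```python
import math

def ild_db(l_amp, r_amp):
    """Interaural level difference in dB between two channel amplitudes."""
    return 20.0 * math.log10(l_amp / r_amp)

# Hypothetical hard-panned low-frequency tone: left 1.0, right 0.1 => 20 dB ILD.
left, right = 1.0, 0.1
before = ild_db(left, right)                        # 20.0 dB

# Mix each channel into the other at -12 dB (linear gain ~0.251).
g = 10 ** (-12.0 / 20.0)
after = ild_db(left + g * right, right + g * left)  # ~9.3 dB
```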


----------



## bigshot (Oct 6, 2022)

Binaural's spatial qualities aren't because of ILD. You keep jumping from spatial to irrelevant things. ILD is related to reducing the channel separation, which no one is arguing. Clearly crossfeed reduces channel separation and moves the image closer to the center of the head like mono.

Crossfeed doesn't create spatial cues.


----------



## 71 dB (Oct 6, 2022)

bigshot said:


> Binaural's spatial qualities aren't because of ILD. You keep jumping from spatial to irrelevant things.


ILD is ONE aspect of spatial hearing. There are many others, but crossfeed mainly manipulates ILD, as you know. Binaural doesn't work properly if the ILD is wrong. In fact, it needs to be very accurate!


----------



## bigshot (Oct 6, 2022)

If ILD is messed up, stereo doesn't work and neither do headphones. ILD is not what creates the effect of depth in binaural recordings. Recorded timing variations from reflected sound do, and that is completely separate from ILD and crossfeed.

I think you aren't thinking clearly today. You are incapable of arguing on point and you aren't hearing what I'm saying. As soon as we get to the next comment, you forget everything I said before.


----------



## 71 dB

bigshot said:


> I think you aren't thinking clearly today. You are incapable of arguing on point and you aren't hearing what I'm saying. As soon as we get to the next comment, you forget everything I said before.


I have been thinking about these things for 10 years. My ability to think today means nothing. I addressed your last reply, so of course I "dropped" your earlier replies. I already answered them, didn't I?


----------



## bigshot

You haven’t answered a single thing I’ve said. You’ve just changed the subject to things that have nothing to do with what makes sound spatial. I think you’re talking for your own benefit and I’m just a prompt for you to say more random things. That encourages me to just revert to curt, single sentence dismissals.


----------



## 71 dB

bigshot said:


> You haven’t answered a single thing I’ve said. You’ve just changed the subject to things that have nothing to do with what makes sound spatial. I think you’re talking for your own benefit and I’m just a prompt for you to say more random things. That encourages me to just revert to curt, single sentence dismissals.


I haven't answered the way you want me to answer. I have answered the way I want to answer.


----------



## bigshot

You don’t need me to carry on a monologue.


----------



## gregorio

71 dB said:


> BUT crossfeed can shape the spatiality in a direction closer to binaural (for example, binaural recordings have very small ILD at low frequencies unless the sound originates very near the head).


That’s untrue, it cannot shape the spatiality closer to binaural. You might perceive it that way, for whatever reason, but that’s very unusual.


71 dB said:


> Stereo sound => Larger ILD at low frequencies
> Binaural sound => Small ILD at low frequencies (except for sound near ear)
> Crossfeed => large ILD made smaller
> 
> CONCLUSION: crossfeed makes stereo sound (mixed for speakers) closer to binaural.


That’s been your problem all along. That conclusion would only be valid if ILD were the only significant factor but it isn’t! And although you acknowledge this fact, you omit it from your conclusion! For example:


71 dB said:


> ILD is ONE aspect of spatial hearing.


Yes, exactly! We also have ITD, timing and direction differences between each of the direct sounds and its reflections, frequency differences between the direct sounds and between their reflections, etc. The act of using crossfeed does make the ILD smaller, but it also messes with the ITD (because it obviously crossfeeds that too), and likewise the direction differences between direct sounds and between their reflections. In other words, crossfeed brings us closer to binaural in just one way but further from binaural in other ways. Your conclusion above is false because it omits everything except the smaller ILD!


71 dB said:


> I have been thinking about these things for 10 years.


And unfortunately you latched on to a smaller ILD as bringing stereo (designed for speakers) closer to binaural, which it doesn’t! Clearly, in your particular case, crossfeed and its resultant smaller ILD is by itself enough to cause your personal perception to create an illusion closer to binaural. It does not for me or for the vast majority of others, and even amongst those who choose to use crossfeed, many seem to do so on the basis of simple preference rather than because it gets them closer to binaural.

G


----------



## 71 dB (Oct 7, 2022)

gregorio said:


> That’s untrue, it cannot shape the spatiality closer to binaural. You might perceive it that way, for whatever reason but that’s very unusual.


It depends on how we define "closer".

If we have a recording that has, say, 19 dB of ILD at low frequencies, I get 1-2 dB of ILD at low frequencies if I listen to it with speakers. That's acoustics and HRTF. If that same recording had been made binaural instead of being mixed mainly for speakers, it would have 1-2 dB of ILD at low frequencies, unless some of the recorded instruments played very near the binaural microphone. Headphones as such do not change the ILD much (there is a very weak acoustic leak, but it doesn't do much). If I can "shape" the ILD from 10 dB to, say, 6 dB or even 2 dB for headphones, I'd say we get "closer" to binaural. Of course the distance still remains large overall, since crossfeed doesn't do much other than scale the ILD, but then again the ILD is the main problem for me in headphone listening. It is THE thing to fix!

Binaural recordings in general have a very different spatiality, to me, than stereo recordings with crossfeed, but they share one aspect: the sound lacks the annoying feel of unnaturally large ILD. So nobody should think I am claiming crossfeed makes stereo recordings binaural.

Sometimes I feel that if I say there is a red car parked on my street, people here think I am claiming ALL cars in the world are red. I make it very clear that I am talking about ILD here, and I also make it clear that I know spatial hearing is much more than that, so obviously I do NOT claim crossfeed fixes ALL aspects of spatiality. It fixes ILD, and that's the ONE thing I want fixed in order to enjoy headphone sound. That's why crossfeed works well for ME.

If something else was more important than ILD to fix, then no doubt crossfeeders would have been invented differently, but they were invented to reduce ILD and for me the reason behind it is clear.



gregorio said:


> That’s been your problem all along. That conclusion would only be valid if ILD were the only significant factor but it isn’t! And although you acknowledge this fact, you omit it from your conclusion! For example:


ILD is not the only significant factor of spatial hearing, but for me it is the only significant factor to _enjoy_ headphone sound.



gregorio said:


> Yes, exactly! We also have ITD, timing and direction differences between each of the direct sounds and its reflections, frequency differences between the direct sounds and between their reflections, etc. The act of using crossfeed does make the ILD smaller but it also messes with the ITD (because it obviously crossfeeds it), likewise the direction differences between direct sounds and between their reflections. In other words, crossfeed brings us closer to binaural in just one way but further from binaural in other ways. Your conclusion above is false because it omits everything except smaller ILD!


Does ITD get messed up when acoustic crossfeed happens with speakers? Your left ear hears direct sound from both the left and right speakers, and since the right speaker is a little bit further away, its sound arrives about 200-250 µs "late" compared to the left speaker's sound. Shouldn't that mean messed up ITD? But wait a minute! Those recordings are mixed with speakers and for speakers, so the "messing up" happens in the studio while mixing!! So, obviously, the mixing is done assuming such acoustic crossfeed happens! Why shouldn't it also happen with headphones (or why is it happening with headphones a bad thing when it is _assumed_ to happen with speakers)?
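
That 200-250 µs figure can be sanity-checked with Woodworth's classic spherical-head approximation (a standard textbook formula, nothing specific to crossfeed; the head radius and speed of sound below are typical assumed values):

```python
import math

def woodworth_itd(angle_deg: float, head_radius_m: float = 0.0875,
                  c: float = 343.0) -> float:
    """Approximate ITD (seconds) for a distant source at angle_deg from front.

    Woodworth's spherical-head formula: ITD = (a/c) * (theta + sin(theta)),
    with a = head radius and c = speed of sound (typical textbook values).
    """
    theta = math.radians(angle_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A speaker at the usual +/-30 degrees gives an ITD of roughly 260 us,
# right in the 200-250 us ballpark mentioned above.
print(round(woodworth_itd(30.0) * 1e6))  # microseconds
```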

This is what I don't understand. Speakers in a room mess up the sound so much more than a simple crossfeed ever can. Nobody cares how the sound is reflected from the bookshelf, but crossfeed creating a consistent, predictable and even desired effect on the sound is bad? Sometimes I feel you people understand nothing about the philosophy of crossfeed. It is fine if you don't like crossfeed personally, but this cherry-picking of things to "prove" crossfeed satanic is ridiculous. My coffee maker can't cook me a meal. It still makes damn good coffee! I don't expect it to cook for me! Similarly, I don't expect my crossfeeder to turn my stereo recordings into binaural. I just want to listen to headphones without unnaturally annoying levels of ILD. Yeah, the ITD gets messed up and the cellist moves from a 30° angle to a 20° angle, but you know what? I don't care. With speakers the cellist is at a 23° angle, and who is to say where she should be? Maybe they heard her at a 22° angle in the studio when they mixed it? I don't know, and you don't know either, unless you were the one who mixed it...



gregorio said:


> And unfortunately latched on to a smaller ILD as bringing stereo (designed for speakers) closer to binaural, which it doesn’t! Clearly, in your particular case, crossfeed and its resultant smaller ILD is by itself enough to cause your personal perception to create an illusion closer to binaural. It does not for me or the vast majority of others, and even amongst those who choose to use crossfeed, many seem to do so on the basis of simple preference rather than because it gets them closer to binaural.
> 
> G


You misunderstand my "closer to binaural" talk. Crossfeed does not make binaural recordings!! You know it. I know it. I say crossfeed makes stereo closer to binaural because they share ILD characteristics. "Closer" can mean from 100 miles to 99 miles, or from 100 miles to 1 mile. You assume that by "closer" I mean almost identical. I don't. Many things between crossfeed and binaural are very different, keeping the distance between them large, but one mile toward the Moon is still closer...

If binaural is colour movies, then crossfeed is black-and-white movies with similar contrast to the colour movies. I don't claim adjusting the contrast in black-and-white movies creates colours. I claim it can make the picture more pleasant for my eyes.


----------



## bigshot (Oct 7, 2022)

Spatial sound isn’t about left to right.

Cross feed brings sound closer to mono.


----------



## 71 dB

bigshot said:


> Spatial sound isn’t about left to right.
> 
> Cross feed brings sound closer to mono.


Yes, but natural sound is quite monophonic at low frequencies. That's why directional hearing is based on ITD at low frequencies. ILD is just a few decibels at low frequencies, unless the sound source is very near one ear. When a lion roars 2 inches from your left ear, you definitely have a large ILD, but you'd better be annoyed by that sound so you start running as fast as you can to survive.

I don't live among lions, so my ears almost never experience large ILD at low frequencies. Small objects are very bad at generating low frequencies; that's why subwoofers have large 18" woofers, unlike 1" tweeters. Large objects are good at generating low frequencies, but they also generate a lot of sound energy, meaning the sound pressure level is high. Having such a sound source near the ear is bad for hearing and can be painful, so I don't put my ears very near large objects, such as car engines, generating low frequencies at high sound pressure levels. I hear these low frequencies from a distance. A car passes me 20 feet away, for example, but that means a very small ILD between my ears.

So, almost all the low-frequency sound I hear in my life is almost mono! That doesn't matter, because my spatial hearing uses ITD to determine the direction of the sound. If I listen to headphones without crossfeed, I am suddenly force-fed low frequencies with high ILD. No wonder it sounds very unnatural and annoying.
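
A back-of-envelope sketch of the distance argument, using only the inverse-distance law. This deliberately ignores head shadowing (which is weak at low frequencies anyway), and the 0.175 m ear spacing is an assumed round number:

```python
import math

EAR_SPACING_M = 0.175  # assumed distance between the two ears

def low_freq_ild_db(source_dist_m: float) -> float:
    """ILD from the inverse-distance law alone, source directly to one side.

    At low frequencies head shadowing is weak, so the distance difference
    dominates; shadowing is ignored entirely here (a simplifying assumption).
    """
    near = source_dist_m
    far = source_dist_m + EAR_SPACING_M
    return 20 * math.log10(far / near)

print(round(low_freq_ild_db(6.0), 2))   # car ~20 feet away: well under 1 dB
print(round(low_freq_ild_db(0.05), 1))  # "lion" 2 inches from one ear: large
```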


----------



## bigshot (Oct 7, 2022)

You’re making stuff up to justify your incorrect theory. Crossfeed doesn’t enhance spatial cues. All it does is shift stereo towards mono.


----------



## 71 dB (Oct 7, 2022)

bigshot said:


> You’re making stuff up to justify your incorrect theory. Crossfeed doesn’t enhance spatial cues. All it does is shift stereo towards mono.


Do you even read my posts or think about what I am saying?? I FU*KING AGREE WITH YOU: crossfeed makes the sound more MONO!!!

Now, could you please use your brain and read my posts to see why this might be BENEFICIAL!!!

Since crossfeed doesn't enhance spatial cues in your opinion, acoustic crossfeed with speakers hardly does either! So, should we build a wall down the middle of the room, with our head inside the wall and a speaker on each side, so that the left ear doesn't hear the right speaker and vice versa? Of course not! The music is mixed assuming acoustic crossfeed happens. With headphones we do not have acoustic crossfeed, so I compensate for that using crossfeed!


----------



## gregorio

71 dB said:


> If I can "shape" the ILD from 10 dB to, say, 6 dB or even 2 dB for headphones, I'd say we get "closer" to binaural.


I know you would and that’s why you’re wrong! 


71 dB said:


> So, obviously the mixing is done assuming such acoustic crossfeed happens!


No it’s not! The mixing is done assuming all the room reflections are happening, not just acoustic crossfeed!


71 dB said:


> This is what I don't understand. Speakers in a room mess up the sound so much more than a simple crossfeed ever can.


Yes, it’s clear you don’t understand, despite it being explained numerous times. Speakers/room acoustics “messes up the sound” in a way that our hearing perception expects because we experience sounds messed up by room acoustics all the time from the day we are born. Simple crossfeed does not mess up sound in that way.


71 dB said:


> Sometimes I feel you people understand nothing about the philosophy of crossfeed.


I understand what crossfeed does but I don’t understand your philosophy of crossfeed because it omits so many factors and doesn’t make sense. 


71 dB said:


> I say crossfeed makes stereo closer to binaural because they share ILD characteristics.


And I say it’s further from binaural because crossfeed makes many characteristics (apart from ILD) further from binaural.

I’m not going to go through yet again the same points we’ve already gone through in detail more than once already. Your “philosophy of crossfeed” is basically; say ILD is only one of many factors in the perception of distance, position and space but then only consider ILD and ignore all the other factors!

G


----------



## 71 dB (Oct 7, 2022)

gregorio said:


> I know you would and that’s why you’re wrong!


You are too black and white on this. I am at least partially right. If the ILD becomes smaller, then in a sense we get closer! It is math. If you are at the origin and I am at (1,1) and move to (0,1), I get closer to you.



gregorio said:


> No it’s not! The mixing is done assuming all the room reflections are happening, not just acoustic crossfeed!


Don't nitpick! God damn you! Of course the room reflections are happening! Acoustic crossfeed happens anyway, even when room reflections do not (in an anechoic chamber).



gregorio said:


> Yes, it’s clear you don’t understand, despite it being explained numerous times. Speakers/room acoustics “messes up the sound” in a way that our hearing perception expects because we experience sounds messed up by room acoustics all the time from the day we are born. *Simple crossfeed does not mess up sound in that way.*


*Neither does not using crossfeed!!* In fact it does so even less, because then there is not EVEN compensation for the lack of acoustic crossfeed! It is surprising to me how people don't seem to realize how WRONG headphone sound is UNLESS the music is mixed for headphones instead of speakers, or is binaural. Crossfeed for me is a way to make it LESS wrong, particularly addressing the main thing (ILD) that makes it wrong in an _annoying_ way. The other ways it is wrong (not having a speaker soundstage etc.) are not that annoying for me.



gregorio said:


> I understand what crossfeed does but I don’t understand your philosophy of crossfeed because it omits so many factors and doesn’t make sense.


I don't "omit" factors. I recognise what things are relevant and what are less relevant in the context of listening to music. I have an understanding of how much different factors matter. Since crossfeed is able to make the sound enjoyable for me (the end goal of music listening), I don't see why I should "care" about the rest of the "factors". They are irrelevant for me to enjoy music.



gregorio said:


> And I say it’s further from binaural because crossfeed makes many characteristics (apart from ILD) further from binaural.


Possibly, although it is difficult to say.



gregorio said:


> I’m not going to go through yet again the same points we’ve already gone through in detail more than once already. Your “philosophy of crossfeed” is basically; say ILD is only one of many factors in the perception of distance, position and space but then only consider ILD and ignore all the other factors!
> 
> G


I can "ignore" the other factors, because they do not ruin the enjoyment for me.

If crossfeed moves the cellist from 30° to 20°, that does not cause me listening fatigue either way. But if the ILD from the cello is 12 dB when it "should" be much smaller for sound from that angle, and the assumed distance to the instrument is much larger than a few inches, while the other "factors" such as the direct-sound-to-reverberation ratio indicate the cello is not playing at my ear, then I get an annoying tickling feeling + listening fatigue + the lowest frequencies sounding "fake".


----------



## bigshot

You are deep into Dunning Kruger territory here.


----------



## 71 dB

bigshot said:


> You are deep into Dunning Kruger territory here.


My answer to this was deleted for "getting personal", but I think your post got personal too.


----------



## The Jester

71 dB said:


> My answer to this was deleted for "getting personal", but I think your post got personal too.


Just a question for you,
Do you find the need for cross feed is the same with analogue (assuming you use it) as digital or does analogue (Vinyl LP) with the different mastering requirements and less channel separation add its own “cross feed” ?


----------



## 71 dB (Oct 8, 2022)

The Jester said:


> Just a question for you,
> Do you find the need for cross feed is the same with analogue (assuming you use it) as digital or does analogue (Vinyl LP) with the different mastering requirements and less channel separation add its own “cross feed” ?


Thanks for this good question! In general, vinyl sound needs much less crossfeed, if any, than "digital" sources. The channel separation at low frequencies has to be limited on vinyl to keep the needle in the groove, or it jumps out. The elliptic filtering used to do that is effectively crossfeed, just not extending into frequencies as high as crossfeed typically does. This reduction of low-frequency ILD on vinyl, done for technical reasons, might be one reason why many people love vinyl's sound.
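
A rough sketch of what such elliptic filtering does, assuming it amounts to high-passing the side (L-R) signal so that the bass collapses to mono. The 150 Hz corner is a guessed illustrative value, not a mastering standard:

```python
import numpy as np

def elliptical_eq(left, right, fs=44100, fc=150.0):
    """Mono-ize a stereo signal below ~fc Hz: a vinyl 'elliptic filter' sketch.

    Converts to mid/side, high-passes the side channel with a one-pole
    filter (so low-frequency content collapses to mono), converts back.
    fc=150 Hz is an assumed illustrative corner, not a standard value.
    """
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    # one-pole highpass: y[n] = a * (y[n-1] + x[n] - x[n-1])
    a = 1.0 / (1.0 + 2 * np.pi * fc / fs)
    hp = np.empty_like(side)
    prev_x = prev_y = 0.0
    for n, x in enumerate(side):
        prev_y = a * (prev_y + x - prev_x)
        prev_x = x
        hp[n] = prev_y
    return mid + hp, mid - hp

# A 40 Hz tone panned hard left comes out nearly centred (almost mono),
# while a 2 kHz tone panned hard left stays essentially hard left.
```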

Ironically, I don't like listening to vinyl on headphones, because all the distortions/pops introduced by the format become VERY noticeable/annoying with headphones. So, I listen to vinyl (or digitized vinyl, because I don't even own a TT) on speakers, but most of the music I listen to is from digital sources, which most of the time require crossfeed to be to my liking.


----------



## 71 dB (Oct 8, 2022)

Here is the virtual barber shop sound (one-minute clip). It is shown not as left and right channels, but as Mid/Side tracks low-pass filtered at 200 Hz. Mono sound in stereo form has identical left and right channels, so that all the information is in the Mid (Left + Right) channel and the Side (Left - Right) channel is zero. Here we see that for the most part the Mid channel is much louder than the Side channel. So, the sound is almost mono below 200 Hz. However, highlighted in yellow, there are parts where the Side channel is almost as loud as the Mid channel. That's the part where the barber puts the bag over the customer's head and takes it away. The sound is literally at the ears! Most of the time the barber is a few feet away from the customer's ears, and this is enough to make the sound almost mono at low frequencies.
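
The measurement described above can be approximated like this (a sketch using a simple one-pole lowpass rather than whatever filter was actually used on the clip):

```python
import numpy as np

def midside_lowfreq_rms(left, right, fs=44100, fc=200.0):
    """RMS of the mid and side channels below ~fc, via a one-pole lowpass.

    Sketch of the measurement described above: if side-RMS is much smaller
    than mid-RMS below 200 Hz, the low end of the recording is close to mono.
    """
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)

    def lowpass(x):
        # one-pole lowpass: y[n] = y[n-1] + b * (x[n] - y[n-1])
        b = (2 * np.pi * fc / fs) / (1 + 2 * np.pi * fc / fs)
        y, out = 0.0, np.empty_like(x)
        for n, v in enumerate(x):
            y += b * (v - y)
            out[n] = y
        return out

    rms = lambda x: float(np.sqrt(np.mean(x ** 2)))
    return rms(lowpass(mid)), rms(lowpass(side))
```

For example, a signal whose bass is identical in both channels but whose treble is panned apart will show a much louder Mid than Side track below 200 Hz.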


----------



## gregorio (Oct 8, 2022)

71 dB said:


> I can "ignore" the other factors, because they do not ruin the enjoyment for me.


Exactly, you admit it! That’s not a problem until you start asserting theories of what’s occurring that are nonsense, because it ignores other factors which affect perception of “spatiality”, simply on the basis that you don’t seem consciously aware of them and they don’t ruin the enjoyment for your personal perception.

I’ll try that: Einstein’s E=mc² is wrong. My own theory is simply E=m; it ignores c² because my enjoyment isn’t ruined by whether or not the speed of light is a constant!


71 dB said:


> If crossfeed moves the cellist from 30° to 20°, that does not cause me listening fatigue either way, but if the ILD from the cello is 12 dB …


No, crossfeed doesn’t move anything, it just crossfeeds some of the signal to the other channel/ear, this includes all the factors that affect perception, not only the ILD.

G


----------



## 71 dB (Oct 8, 2022)

gregorio said:


> Exactly, you admit it! That’s not a problem until you start asserting theories of what’s occurring that are nonsense, because it ignores other factors which affect perception of “spatiality”, simply on the basis that you don’t seem consciously aware of them and they don’t ruin the enjoyment for your personal perception.


I admit it of course, but you could admit that simplifying things, for example by omitting factors, may STILL explain things to a certain extent. Otherwise the invention of crossfeed would be a total mystery. How could anyone invent a device that improves headphone spatiality by tinkering with almost only ILD, if ALL factors must be considered?

It is also silly to say crossfeed "ignores" other factors. How exactly should it pay attention to them? As if the "other" factors were perfect for headphones and crossfeed somehow ruins them. No, they are NOT perfect, because most recordings are mixed for speakers! All the acoustic stuff is assumed. How can the "other" factors be perfect without it? There is no acoustic crossfeed, no early reflections, no reverberation! How are they perfect, or even good, without those? How do you know they are less perfect after crossfeed? To me they are better, because AT LEAST we have simulated acoustic crossfeed.

So, if you want to take the other factors into account, start talking about them! How should they be controlled correctly for headphones?



gregorio said:


> I’ll try that: Einstein’s E=mc² is wrong. My own theory is simply E=m; it ignores c² because my enjoyment isn’t ruined by whether or not the speed of light is a constant!


Okay... good luck with that.



gregorio said:


> No, crossfeed doesn’t move anything, it just crossfeeds some of the signal to the other channel/ear, this includes all the factors that affect perception, not only the ILD.
> 
> G


The same happens with speakers acoustically, but nobody sees problems. Since crossfeed simulates this, the other factors are affected in a healthy manner. That's why we can pretend they don't exist. I assume the other factors to be "linked" to ILD, so there is no reason to talk about them generally. Only when we talk about temporal stuff, for example, is it good to talk about ITD specifically.


----------



## bigshot (Oct 8, 2022)

Crossfeed doesn't improve spatiality. It reduces channel separation, moving a stereo image more towards mono. Crossfeed doesn't ignore factors. You do. Gregorio has explained it clearly. You really don't understand what is being said to you. The conversation as you see it is taking place entirely in your own head.


----------



## 71 dB (Oct 8, 2022)

bigshot said:


> Crossfeed doesn't improve spatiality.


This is subjective. It depends on the person how they experience crossfeed.



bigshot said:


> It reduces channel separation


Yes. The question is, what is the channel separation in the recording and how does it relate to the _desired_ channel separation? To my ears, recordings mixed for speakers have too much channel separation for headphones, so I reduce it with crossfeed. Problem fixed. Binaural recordings, which are made for headphones, have lower channel separation at low frequencies except for sounds very near the ears (not common in music!), as I have demonstrated in my previous post. That's why binaural recordings do not require a reduction in channel separation and are listened to without crossfeed, of course.

Speakers in a room form an acoustic channel-separation regulator. Mono recordings get some acoustic separation between the ears, while ping-pong stereo recordings have their separation reduced drastically from what is on the recording. The result is a natural level of channel separation for the listener. With headphones the channel separation is all over the place depending on the recording, but I can regulate it with my adjustable-level crossfeed so that the result feels natural (and enjoyable for that reason).



bigshot said:


> , moving a stereo image more towards mono.


Speakers in a room do the same if the channel separation on the recording is larger than the natural HRTF-based channel separation. Crossfeed can of course make the sound TOO mono (or not mono enough). That's why the correct crossfeed level is important.



bigshot said:


> Crossfeed doesn't ignore factors. You do. Gregorio has explained it clearly. You really don't understand what is being said to you. The conversation as you see it is taking place entirely in your own head.


Well, maybe I'll stop ignoring. It makes life much harder and more complex, but you folks demand it...

Crossfeed reduces ILD. It also lowers ITD a little bit (it adds the ~250 µs delayed components). ISD improves, closer to the HRTF. The reverberation-to-direct ratio gets lower, because reverberation generally has bigger channel separation than direct sound. Without crossfeed, reverberation can be amplified for this reason. Did I ignore something?
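
For concreteness, here is a minimal sketch of the kind of crossfeed being discussed: each output channel is its own signal plus a delayed, lowpass-filtered, attenuated copy of the opposite one. The delay, gain and corner-frequency values below are illustrative guesses, not any particular product's settings:

```python
import numpy as np

def simple_crossfeed(left, right, fs=44100, delay_us=300, gain_db=-12.0, fc=700.0):
    """Minimal bs2b-flavoured crossfeed sketch (parameter values are guesses).

    Each output channel = its own signal + a delayed, lowpass-filtered,
    attenuated copy of the opposite channel.
    """
    d = max(1, int(round(fs * delay_us / 1e6)))  # delay in samples
    g = 10 ** (gain_db / 20)                     # crossfeed gain
    b = (2 * np.pi * fc / fs) / (1 + 2 * np.pi * fc / fs)

    def lp(x):
        # one-pole lowpass: y[n] = y[n-1] + b * (x[n] - y[n-1])
        y, out = 0.0, np.empty_like(x)
        for n, v in enumerate(x):
            y += b * (v - y)
            out[n] = y
        return out

    def feed(x):  # delayed, filtered, attenuated opposite-channel signal
        return g * np.concatenate([np.zeros(d), lp(x)[:-d]])

    return left + feed(right), right + feed(left)
```

A hard-panned low-frequency tone fed through this ends up a few dB quieter in the opposite channel instead of completely absent, which is exactly the ILD reduction described above.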


----------



## bigshot (Oct 8, 2022)

Objectively, crossfeed doesn't increase spatiality. It just reduces channel separation, moving a stereo image more towards mono. To increase spatiality, you either need actual space for sound to inhabit to create natural distance cues, or some sort of timing alteration to simulate the reflections and delays that create spatiality in sound.

Subjectively, crossfeed can be the best thing since sliced bread, or it can be the reason why we can't have nice things. That depends on your personal tastes and mental processes. It doesn't have anything to do with spatiality.


----------



## gregorio

71 dB said:


> I admit it of course, but you could admit that simplifying things, for example by omitting factors, may STILL explain things to a certain extent.


I admit that in some cases it can; in other cases it doesn’t, and actually misleads.


71 dB said:


> Otherwise the invention of crossfeed would be total mystery. How could anyone invent a device that improves headphone spatiality tinkering almost only ILD if ALL factor must be considered?


Why, has no one ever invented anything they thought would work but didn’t? Clearly it works for you and probably for a few others. For the vast majority it doesn’t work, but some prefer it anyway and others do not.


71 dB said:


> It is also silly to say crossfeed "ignores" other factors. How exactly should it pay attention to them?


With for example HRTFs, head tracking and convolution reverb. What’s really silly is that you already know this and yet still ask the same silly question. 


71 dB said:


> Okay... good luck with that.


And for exactly the same reason, good luck with yours too!


71 dB said:


> The same happens with speakers acoustically, but nobody sees problems.


You keep falsely asserting this, why? The same absolutely does NOT happen. Speakers acoustically do NOT just crossfeed the signals and you again already know this!


71 dB said:


> Since crossfeed simulates this, the other factors are affected in a healthy manner.


Crossfeed simulates hardly ANY of what “happens with speakers acoustically”, and the other factors are NOT affected in a “healthy manner”; they are affected in a way that does not happen with speakers or any natural sound!

Again, it’s just nonsense and you already know all the reasons why but just ignore it over your theory extrapolated from your personal perception/preference.

Please do not repeat this same nonsense yet another time, it’s been dealt with already!

G


----------



## 71 dB

bigshot said:


> Objectively, crossfeed doesn't increase spatiality.


Well, objectivity doesn't matter here. Objectively, my favorite music is not good or bad; it is just music I like. How do we even define objective spatiality when spatiality happens in our brains? If there were objective spatiality, stereo would certainly be technically too simple for it. Spatiality is an illusion. Without the illusion we would probably need dozens of audio channels and speakers to have any reasonable spatiality.



bigshot said:


> It just reduces channel separation, moving a stereo image more towards mono. To increase spatiality, you either need actual space for sound to inhabit to create natural distance cues, or some sort of timing alteration to simulate the reflections and delays that create spatiality in sound.


You talk about increased spatiality, probably meaning that the soundstage feels bigger? I talk about more _natural_ spatiality. A different thing.


----------



## bigshot (Oct 8, 2022)

"Objectivity doesn't matter. I like what I like. How can we even define something objectively when our brain is involved?"

Another great quote that doesn't belong in Sound Science in a single day!


----------



## 71 dB

gregorio said:


> With for example HRTFs, head tracking and convolution reverb. What’s really silly is that you already know this and yet still ask the same silly question.


Sure, but not using crossfeed ignores those things also!! How is that a lesser crime? People who don't use crossfeed ignore EVERYTHING. I ignore everything except the thing that ruins the enjoyment for me. The lack of HRTFs, head tracking and convolution reverb does not prevent me from enjoying music, so I ignore them as unimportant. That doesn't mean I don't understand what those things are.



gregorio said:


> And for exactly the same reason, good luck with yours too!


Thanks, I am doing fine! 



gregorio said:


> You keep falsely asserting this, why? The same absolutely does NOT happen. Speakers acoustically do NOT just crossfeed the signals and you again already know this!


The human brain can separate out the other reflections, so acoustic crossfeed is kind of its own thing. Channel separation gets drastically smaller ANYWAY! That's the point. To me it is better to do AT LEAST crossfeed than to do NOTHING, especially if that fixes the damn problem of annoying sound!



gregorio said:


> Crossfeed does NOT simulate hardly any of what “happens with speakers acoustically” and the other factors are NOT affected in a “healthy manner”, they are affected in a way that does not happen with speakers or any natural sound!


Crossfeed was invented to do that, but whatever. It works for me. It reduces channel separation. Headphone sound without crossfeed is completely wrong because recordings are mixed for speakers, not headphones. Crossfeed is a simple way to make the sound a little bit less wrong (by killing INSANE channel separation that ONLY makes any sense with speakers).



gregorio said:


> Again, it’s just nonsense and you already know all the reasons why but just ignore it over your theory extrapolated from your personal perception/preference.


Personal preference is part of this, but my preferences are based on what I learned at university. I discovered crossfeed when I happened to apply the theory of human spatial hearing to headphones in 2012 and realized how wrong headphone listening is as a concept, because there is no regulation of the spatial cues to make them natural. The stuff just flows into the ears...



gregorio said:


> Please do not repeat this same nonsense yet another time, it’s been dealt with already!
> 
> G


I can't stop you calling my posts nonsense. To me it is not nonsense. I can't understand your attitude.


----------



## 71 dB (Oct 8, 2022)

bigshot said:


> "Objectivity doesn't matter. I like what I like. How can we even define something objectively when our brain is involved?"
> 
> Another great quote that doesn't belong in Sound Science in a single day!


By measuring of course! I can measure ILD for example. We can also measure HRTF from people and see what are natural levels of ILD. To me there is a match between these things and my personal preference. 0-3 dB of ILD is natural at lowest frequencies* and my brain agrees. I use crossfeed to turn crazy 10 dB ILD to 3 dB ILD and make it natural...

* unless the sound is very near head.


----------



## bigshot (Oct 8, 2022)

Now you're arguing with yourself.

Here is what you are ignoring...

Without a time alteration, there is no spatiality. Stereo imaging using headphones is a one-dimensional line through the middle of your head from left to right. There is no spatiality because headphones don't lend any distance cues to the sound. Crossfeed doesn't lend any distance cues either. All it does is blend channels along the same one-dimensional line.

Speakers blend the two channels too, but that isn't what creates the feeling of space. Spatiality is created when a sound source is at a physical distance from the listener. The speakers interact with the room, creating room reflections which _ALTER TIMING_ and are perceived as distance cues. If you were arguing that reverb increases spatiality in sound, I would agree with you. That is the way to synthesize spatiality. Crossfeed doesn't increase spatiality because it doesn't alter timing.

Spatiality is created by _SPACE._ It isn't caused by making stereo closer to mono. Yes, speakers do one thing similar to crossfeed. But that isn't the thing that causes spatiality. Correlation is not necessarily causation.
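
The "timing alteration" point can be illustrated with the simplest possible synthetic reflection: one delayed, attenuated copy of a mono signal mixed back in (the delay and gain values here are arbitrary illustrative choices):

```python
import numpy as np

def add_reflection(signal, fs=44100, delay_ms=18.0, gain_db=-8.0):
    """Mix one delayed, attenuated copy of a mono signal into itself.

    The simplest possible synthetic 'wall reflection': a single timing
    alteration of the kind described above. Delay and gain values are
    arbitrary illustrative choices, not derived from any real room.
    """
    d = int(round(fs * delay_ms / 1000))  # delay in samples
    g = 10 ** (gain_db / 20)              # reflection level
    out = signal.copy()
    out[d:] += g * signal[:-d]            # add the delayed echo
    return out
```

A real room contributes thousands of such reflections with different delays and directions; this single echo is just the minimal demonstration that distance cues come from timing, not from channel blending.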


----------



## gregorio

71 dB said:


> Sure, but not using crossfeed ignores those things also!!


No it does NOT, it doesn’t just crossfeed ILD, it crossfeeds everything within the crossfeed bandwidth. You know this, so why are you falsely stating otherwise? It’s only you who are ignoring those things in your nonsense reasoning for why it works for your perception!


71 dB said:


> I can't stop you calling my posts nonsense. To me it is not nonsense.


Great, to me, my theory on Einstein’s equation isn’t nonsense, for exactly the same reason. Although of course it is nonsense!

G


----------



## bigshot (Oct 8, 2022)

You ignored my post.

I sense we're reaching the end of your cycle. This marks the third time.


----------



## 71 dB (Oct 8, 2022)

bigshot said:


> You ignored my post.
> 
> I sense we're reaching the end of your cycle. This marks the third time.


You apparently expanded your post. I only saw one line, and it was not worth commenting on. It is 1:50 am here and I should sleep. I am in a very bad mood because I am sick. I feel really bad: dizzy as hell, my ears are ringing, so annoying, so tired. I have not slept well for 2 weeks.



Speaker spatiality is a line from one speaker to the other speaker. We don't hear it that way, thanks to the illusion. Same with headphones: illusion creates spatiality. Headphones lack room reflections, but there are (should be) these reflections and time delays in the recording; headphone spatiality relies on those. The space is the concert hall the music was recorded in. The mic records reflections too, not only the direct sound from the instruments. If it is electronic music (no room), maybe digital effects were used to generate them. Without those cues headphone sound is insanely dead. Luckily, almost all music has those cues in the recording. In fact, if those cues are really well recorded, the lack of a listening room with headphones can be a plus, because the room acoustics does not ruin the spatiality.

I use time delays all the time in my music to create spatiality. So do pretty much all music makers.

So, when I use crossfeed, the ILD level gets natural and my spatial hearing can "hear" freely the spatial information in the recording and is fooled by it.


----------



## bigshot (Oct 8, 2022)

71 dB said:


> Speaker spatiality is a line from one speaker to the other speaker.


Ignored from the post you are replying to...


> Without a time alteration, there is no spatiality. (snip) Spatiality is created when a sound source is at a physical distance from the listener. The speakers interact with the room, creating room reflections which _ALTER TIMING_ and are perceived as distance cues.


A mono speaker can have spatiality if it is a distance from the listener and the sound inhabits and reflects off the walls of a room. The separation between speakers is CHANNEL SEPARATION, not spatiality.


71 dB said:


> Headphones lack room reflections, but there are (should be) these reflections, time delays in the recording.


Secondary depth cues are exactly the same, whether you are listening to headphones without crossfeed, headphones with crossfeed or speakers. Secondary depth cues are spatial, but they aren't enhanced by crossfeed. They are irrelevant to this discussion.


71 dB said:


> In fact if those cues are really well recorded, the lack of listening room with headphones can be a plus, because the room acoustics does not ruin the spatiality.


False. Commercial music is mixed and mastered using speakers in a room, not using headphones. The engineers and artists judge the amount of secondary depth cues (baked in echo and reverberation) to be in proportion to the amount of primary depth cues (real reflections in a real room) as they mix a recording. Removing primary depth cues is not a "plus". It's a "minus". You're listening to music in a way that wasn't intended when it was created. Headphones are an incomplete way of reproducing music. Headphones' lack of primary depth cues is exactly the same sort of problem as excessive stereo separation. The problem is created by listening to music in a way that it wasn't originally intended to be listened to. If you want to correct excessive separation, you use cross feed. If you want to synthesize primary depth cues, you do that with some sort of digital delay scheme.


71 dB said:


> I use time delays all the time in my music to create spatiality. So does pretty much all music makers.


Feel free. But that is completely irrelevant to crossfeed.


71 dB said:


> So, when I use crossfeed, the ILD level gets natural and my spatial hearing can "hear" freely the spatial information in the recording and is fooled by it.


When you use crossfeed, you reduce the stereo separation and move it closer to mono. It has absolutely no effect on spatiality because secondary cues are present and audible no matter how you listen to music.
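The numbers behind this can be sketched in a toy, level-only model (the -12 dB gain is an assumed value; real crossfeed also delays and lowpass-filters the crossfed signal):

```python
# Toy level-only crossfeed: mix an attenuated copy of the opposite
# channel into each side. The -12 dB gain here is an assumption.
g = 10 ** (-12 / 20)  # -12 dB, about 0.251

def crossfeed(left, right):
    return left + g * right, right + g * left

# A hard-panned source (all signal in the left channel):
l, r = crossfeed(1.0, 0.0)
# Channel separation drops from infinite to 12 dB (l = 1.0, r ≈ 0.251),
# i.e. the image moves toward mono without ever reaching it.
```

Repeating the operation, or raising the gain, pushes the channels further toward identical (mono) content.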


----------



## 71 dB (Oct 9, 2022)

bigshot said:


> Ignored from the post you are replying to...
> 
> A mono speaker can have spatiality if it is a distance from the listener and the sound inhabits and reflects off the walls of a room. The separation between speakers is CHANNEL SEPARATION, not spatiality.


Yes, I agree.


bigshot said:


> Secondary depth cues are exactly the same, whether you are listening to headphones without crossfeed, headphones with crossfeed or speakers. Secondary depth cues are spatial, but they aren't enhanced by crossfeed. They are irrelevant to this discussion.


All cues are crossfed. Secondary cues can also have "excessive" values on headphones, and crossfeed can "tame" them to a natural level. Also, as I mentioned in my previous post, room reverberation is a very diffuse sound field while direct sound is not, meaning reverberation tends to have larger ILD, ISD and ITD values than direct sound. Early reflections are in the middle. By reducing especially ILD, crossfeed reduces the reverberation in the recording relative to the direct sound (this, I believe, is the main reason why some people think crossfeed removes detail). Some recordings have problematic spatiality for headphones: the direct sound requires strong crossfeed while the reverberation requires weak crossfeed (an imbalance of spatial wideness), but in my experience these recordings are not common. Crossfeed obviously can't save bad recordings, but it at least gives choices to optimize the sound to be least bad.



bigshot said:


> False. Commercial music is mixed and mastered using speakers in a room, not using headphones.


I was not talking about mixing, but listening (consumer end). Mixing of course is done mostly on speakers and a little bit on headphones. This is the main source of the headphone compatibility issues crossfeed tries to solve.



bigshot said:


> The engineers and artists judge the amount of secondary depth cues (baked in echo and reverberation) to be in proportion to the amount of primary depth cues (real reflections in a real room) as they mix a recording.


Consumers of course have completely different rooms/speakers where they are listening, so this delicate proportion isn't generally what the engineers heard in studio. We can talk about headphone related problems till the cows come home, but speakers have their fair share of problems too.



bigshot said:


> Removing primary depth cues is not a "plus". It's a "minus".


Depends. If the listening room has bad acoustics, the primary spatial cues are bad. Removing them by using headphones leaves the secondary spatial cues, which are the same (baked into the recording). Also, if the music was played in an acoustic space very different from a "living room", such as a large church, the lack of primary cues can help retain the feel of a large space. If secondary cues say we are in a cathedral and primary cues say we are in a living room, where are we? A small church? For music recorded in studio rooms of similar size to living rooms this is not an issue in my opinion.



bigshot said:


> You're listening to music in a way that wasn't intended when it was created.


Am I listening to it the way it is intended when I use speakers? I have said many times I use both speakers and headphones.



bigshot said:


> Headphones are an incomplete way of reproducing music.


Indeed, because most music is not binaural! I make it less incomplete by using crossfeed.



bigshot said:


> Headphones' lack of primary depth cues is exactly the same sort of problem as excessive stereo separation.


Yes, it is a technically similar problem, but luckily for me not as annoying. In fact I kind of like the intimate, private feel the lack of primary cues creates. However, I am annoyed by the lack of acoustic crossfeed. I compensate for that using crossfeed.



bigshot said:


> The problem is created by listening to music in a way that it wasn't originally intended to be listened to.


Yeah, exactly, but this thread assumes we are doing so. The question we should be concentrating on here is: do we do it with or without crossfeed?



bigshot said:


> If you want to correct excessive separation, you use cross feed.


Yeah, that is what I figured out a decade ago. I realized excessive separation is a thing in headphones and that headphone sound can be much more natural and enjoyable if this problem of excessive separation is fixed using crossfeed.



bigshot said:


> If you want to synthesize primary depth cues, you do that with some sort of digital delay scheme.


Obviously.



bigshot said:


> Feel free. But that is completely irrelevant to crossfeed.


Yes, irrelevant perhaps, but my knowledge and understanding of spatiality is constantly questioned. So I try to demonstrate that I have the understanding and knowledge I say I have.



bigshot said:


> When you use crossfeed, you reduce the stereo separation and move it closer to mono.


You have said that quite a few times and I have been agreeing with you.



bigshot said:


> It has absolutely no effect on spatiality because secondary cues are present and audible no matter how you listen to music.


For me, excessive separation ruins the music, including the secondary cues. They are "audible", but in a wrong way. With spatial cues, quality matters more than quantity. Good cues are the result of a good balance between ILD, ITD, ISD, etc., so that the combination of them makes sense to spatial hearing. In speaker listening, acoustic crossfeed is very strong at low frequencies and without it the balance is disturbed badly.


----------



## bigshot

You have a hare-brained theory about spatiality, and I’ve patiently explained why you’re wrong more times than I can remember. I’m not the only one who has done that. You ignore all of us and keep on babbling what Gregorio calls nonsense. At this point, there’s no reason to engage in conversation with you. You’re here to do soliloquies with the sole purpose of entertaining yourself and patting yourself on the back. I know what happens when that phase comes to an end and the pendulum swings the other direction again. We’ve seen that several times over the past couple of years. When that happens, I can assure you it isn’t my fault. I have absolutely no connection with you. This is your solo turn and I have no part in it.


----------



## 71 dB

bigshot said:


> You have a hare brained theory about spatiality,


I wouldn't say I have a _*theory*_ of my own, hare-brained or otherwise. I'm just applying the theories I learned in university in practice. The last 10 years have increased my understanding and knowledge of spatiality, especially regarding headphones, something I didn't have before, because universities (at least the one I studied in) don't teach _headphone spatiality_ but _human spatial hearing in general_, and one has to figure out for oneself what it means when applied to headphones/audio/music etc.



bigshot said:


> and I’ve patiently explained why you’re wrong more times than I can remember.


Wrong? We agree about some things and disagree about others. What makes you think it is always you who is right? Are you some kind of all-knowing God who can't be wrong? The impression I have got from you is that you have done a lot of audio work in your life, mainly with speakers, but you are not that savvy in math, for example. Whenever I take things in a more technical/mathematical direction you tend to shy away. For example, you haven't commented at all on my analysis of the virtual barber 8D sound. Is it because you don't properly understand what my analysis is about? Commenting on it would be a good chance for you to demonstrate how I am wrong, but you pass up the opportunity. Could it be because I am not as wrong as you assume I am?



bigshot said:


> I’m not the only one who has done that. You ignore all of us and keep on babbling what Gregorio calls nonsense.


You have been the most active recently. Maybe Gregorio has seen how I have learned some things here (that people hear spatiality differently) and that I have admitted when I was wrong (that is how we learn). I have changed my rhetoric about crossfeed from 5 years ago. Maybe that makes Gregorio feel less need to correct me? You, on the other hand, require ALL people on this planet to agree 100% with you regardless of whether you are right or not.



bigshot said:


> At this point, there’s no reason to engage in conversation with you.


So don't! Frankly, your posts do not contribute much here. My posts at least contain ideas that even made you think I have my own theories! I try my best to apply the theories of human spatial hearing in the headphone context and all I get is that I am wrong. It is as if you just want people to stop using headphones.



bigshot said:


> You’re here to do soliloquies with the sole purpose of entertaining yourself and patting yourself on the back


Yes, I am a human being. I need the feeling of relevance, that I have something to offer to the world. I hope that what I say here may help/inspire someone. Why are you here? To make other people feel bad about themselves?



bigshot said:


> I know what happens when that phase comes to an end and the pendulum swings the other direction again. We’ve seen that several times over the past couple of years. When that happens, I can assure you it isn’t my fault. I have absolutely no connection with you. This is your solo turn and I have no part in it.


For someone having absolutely no connection with me you have commented on my posts A LOT! I wonder why...


----------



## bigshot

No, I’m done.


----------



## gregorio (Oct 9, 2022)

71 dB said:


> Makes life much harder and more complex, but you folks demand it...


It’s not just us folks, it’s science that demands it. You are oversimplifying to (and beyond) the point of contradicting the facts/science. The actual fact is that the reality is much harder and more complex.


71 dB said:


> Crossfeed reduces ILD. It also lowers ITD a little bit (increases 250 µs stuff). ISD improves closer to HRTF. The reverberation-to-direct ratio gets lower because reverberation generally has bigger channel separation than direct sound. Without crossfeed reverberation can be amplified for this reason. Did I ignore something?


Yes, you ignored a great deal and some of what you stated was not even correct to start with! Crossfeed does not lower ITD a little bit: it just crossfeeds the signal below the crossfeed frequency threshold, the ITD above that threshold is unaffected, and by crossfeeding the signal you now have the timing differences between the two channels superimposed on the opposite channel. So now you’ve got an arbitrary timing difference for the same sound source between frequency bands and potential phase issues. There are related issues with ISD and the spectral differences in the recording. The reverberation-to-direct ratio is unaffected because crossfeed does not separate reverb from direct sound, it crossfeeds the entire signal (below a threshold), and reverb typically has less separation than the direct sound because it is diffuse. There are various other things you’re ignoring and/or misrepresenting, but the above is enough to be going on with!
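The frequency-selective behaviour described here can be sketched with a one-pole lowpass on the crossfed path (the sample rate, cutoff and gain values below are assumptions for illustration, not any particular plugin's design):

```python
import math

fs = 48000.0   # sample rate (assumed)
fc = 700.0     # crossfeed cutoff frequency (assumed)
g = 0.25       # crossfeed gain, about -12 dB (assumed)
a = math.exp(-2 * math.pi * fc / fs)  # one-pole lowpass coefficient

def crossfeed_block(left, right):
    """Crossfeed only the lowpassed part of the opposite channel,
    so content above the cutoff is left largely untouched."""
    out_l, out_r = [], []
    lp_l = lp_r = 0.0
    for l, r in zip(left, right):
        lp_l = (1 - a) * l + a * lp_l
        lp_r = (1 - a) * r + a * lp_r
        out_l.append(l + g * lp_r)
        out_r.append(r + g * lp_l)
    return out_l, out_r

# DC (the lowest possible frequency) in the right channel bleeds into
# the left at the full gain g; a tone well above fc would barely bleed.
out_l, _ = crossfeed_block([0.0] * 2000, [1.0] * 2000)
```

The lowpass is what creates the "threshold" being discussed: below it the opposite channel is mixed in, above it the original interchannel differences pass through nearly unchanged.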


71 dB said:


> How do we even define objective spatiality when spatiality happens in our brains?


Nonsense, spatiality does NOT happen in our brains! Early reflections, reverb, and the spectral, level and timing characteristics of both the direct and reflected sound occur in reality. These factors are what define spatiality; they are all objective and objectively measurable. All these factors then get modified again by HRTFs, which again occurs in reality and is objective and objectively measurable. What happens in the brain is the determination of how and what we perceive in response to all this actual/objective spatial information and our personal preferences about it.


71 dB said:


> If there was objective spatiality, stereo would certainly be technically too simple for it.


But if there wasn’t objective spatiality stereo would not exist.


71 dB said:


> Spatiality is an illusion.


No, spatiality is a reality. The perception of spatiality when listening to stereo is partly an illusion.


71 dB said:


> Without the illusion we would probably need dozens of audio channels and speakers to have any reasonable spatiality.


No, you can have reasonable spatiality with just one speaker without illusion; you can have distance/depth using the previously mentioned objective factors. Of course, this is just the spatial dimension of distance (and potentially height) and doesn’t include width, so we could argue about its “reasonableness”.


71 dB said:


> Speaker spatiality is a line from one speaker to the other speaker. We don't hear it that way thanks to the illusion.


No, (stereo) speaker spatiality is just two point sources (speakers). We perceive a line (under certain conditions) thanks to the illusion. Typically we perceive more than just a line (also depth) due to objective factors and how our brains interpret them.


71 dB said:


> Also, as I mentioned in my previous post, room reverberation is very diffuse sound field while direct sound is not meaning reverberation tends to have larger ILD, ISD and ITD values than direct sound.


As reverb is very diffuse it will have lower ILD, ISD and ITD values than direct sound, not larger. Except in the case of a direct sound in the phantom centre, in which case it will have roughly the same or marginally more.


71 dB said:


> Early reflections are in the middle.


ERs are not diffuse; the ILD and ITD values will depend on the position of the sound source relative to the initial reflection points and can therefore be greater or less than the ILD and ITD of the direct sound. The spectrum of the ERs will depend on the distance and reflective properties of the boundaries.


71 dB said:


> In reducing especially ILD, crossfeed reduces reverberation in the recording in relation to direct sound.


Crossfeed does not reduce “especially ILD”, it reduces/changes everything equally below the freq threshold. It also does not separate direct sound from the reverb/reflections and therefore does not reduce reverb relative to direct sound.


71 dB said:


> If secondary cues say we are in a cathedral and primary cues say we are in a living room, where are we? Small church?


We’re in a space that cannot possibly exist; you obviously can’t fit a large cathedral in a small living room. However, the human brain typically cannot accept such a scenario and will therefore change its perception to create a scenario it can accept. In most cases we would perceive that we’re in a cathedral, or at least a cathedral-size/type space, due to the engineers manipulating the reverb according to their perception within their monitoring environment and also due to the brain’s plasticity in learning/adapting to listening environments. There is wide variation in localisation and related perceptual processes though, so it’s likely some would perceive a “small church” or various other spatial locations.


71 dB said:


> For music recorded in studio rooms of similar size to living rooms this is not an issues in my opinion.


It is probably the exact same issue or sometimes worse, because studio recordings are often close mic’ed and therefore have relatively little acoustic room information and often the rooms used in studio recording are not of a similar size and/or have significantly different acoustic properties.


71 dB said:


> I have changed my rethoric about crossfeed from 5 years ago. Maybe that makes Gregorio feel less need to correct me?


You’ve changed your rhetoric in regards to effectively stating that what you perceive is definitely right and anyone who perceives something different is an ignorant idiot. Obviously that was both factually false and exceptionally offensive but thankfully you don’t do that anymore, you seem to accept that perception varies and isn’t correlated to ignorance or idiocy. However, your rhetoric regarding spatiality, what crossfeed does to it and why it works so well for you, has not changed. I don’t correct you as much because your posts are generally not so offensively false, not as incorrect (except when you go off on your hobby horse of ILD at the expense of everything else) and sometimes I just can’t be bothered because it’s all been stated before and you either just ignore it or acknowledge and then dismiss it on the basis of circular arguments about your perception/preferences of ILD!

G


----------



## 71 dB

gregorio said:


> It’s not just us folks, it’s science that demands it. You are oversimplifying to (and beyond) the point of contradicting the facts/science. The actual fact is that the reality is much harder and more complex.


I didn't know science demands anything. It is just that if you oversimplify too much, science stops working the way it is supposed to work. If I want to describe how an apple falls from a tree, I can oversimplify gravity. I don't need to worry about how the mass of Earth bends 4-dimensional space-time. So, Newton's simplified theory of gravity works just fine, but if I want to describe the movement of Mercury around the Sun, I need Einstein's theory of gravity, because the Sun is so massive that the way it bends space-time is _significant_ at the distance of Mercury.

Similarly, what I am doing with spatiality determines how precise a model I need to use. Crossfeed is a very coarse manipulation of spatiality, and therefore looking at ILD only works fine. When you use an HRTF to create a very realistic soundstage you obviously need much more detailed models, and ILD alone is not enough!



gregorio said:


> Yes, you ignored a great deal and some of what you stated was not even correct to start with! Crossfeed does not lower ITD a little bit, it just crossfeeds the signal below the crossfeed freq threshold, the ITD above that threshold is unaffected and,


Of course. Crossfeed level goes down with frequency. A similar thing happens acoustically with speakers, because as frequency rises the listener's head shadows the sound more and more, blocking crossfeed. This is part of the principles of spatial hearing and leads to natural spatiality.



gregorio said:


> by crossfeeding the signal you now have the timing differences between the two channels superimposed on the opposite channel.


The same happens with speakers. Spatial hearing is "used to" this, even expecting such cross-correlation between the ears, and it means natural spatiality. Headphone sound can be devoid of such cross-correlation, leading to unnatural spatiality.



gregorio said:


> So now you’ve got an arbitrary timing difference between the same sound source between freq bands and potential phase issues.


A similar thing happens with speakers. It is called mono colourization. Typically crossfeed happens up to 800 Hz (wavelength 43 cm) and the crossfeed timing difference is typically 250 µs (8-9 cm). This means that the original and the crossfed signals add up close to in phase. At 800 Hz the effective delay is actually only 71 % of the 250 µs (180 µs to be generous, which is about a 52° phase difference) and even the crossfeed level has dropped by 3 dB, making "phase issues" milder. The phase issues would be serious if the phase difference were about 180° (canceling), but it isn't even close to that.

Also, crossfeeding happens from left to right and from right to left, meaning things are mirrored on both "sides" and spatial hearing is able to figure out what is going on.
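The arithmetic behind those figures can be checked directly (assuming the stated 250 µs delay, 800 Hz band edge, and a 180 µs effective delay):

```python
# Phase shift of a fixed delay at frequency f: phase_deg = 360 * f * delay
f = 800.0                                  # upper edge of the crossfed band, Hz
phase_full = 360 * f * 250e-6              # full 250 us delay -> 72 degrees
phase_effective = 360 * f * 180e-6         # 180 us effective delay -> ~52 degrees
# Both are far from the 180 degrees needed for serious cancellation,
# and below 800 Hz the phase shift only gets smaller.
```

The same formula shows why the "mono colourization" comb-filter notch (at 180° phase difference) would only appear far above the crossfed band.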



gregorio said:


> There are related issues with ISD and the spectral differences in the recording.  Reverberation ratio to direct sound is unaffected because crossfeed does not separate reverb from direct sound, it crossfeed the entire signal (below a threshold) and reverb typically has less separation than the direct sound because it is diffuse.


Make a test signal of in-phase (mono) noise and out-of-phase noise summed. Crossfeed it. You'll hear how the mono noise is hardly affected at all, but the out-of-phase noise gets attenuated. So crossfeed indeed CAN separate signal components of differing channel difference. Since reverberation (diffuse sound field characteristics) tends to have much higher channel separation than direct sound (free field characteristics), crossfeed can to some extent separate them and modify them differently.
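The claimed behaviour follows from simple algebra: with crossfeed gain g, an in-phase (mono) component is scaled by (1 + g) while an out-of-phase component is scaled by (1 - g). A quick check with an assumed g of 0.25 (about -12 dB):

```python
import math

g = 0.25  # assumed crossfeed gain, about -12 dB

def crossfeed(left, right):
    # Level-only crossfeed of one left/right sample pair.
    return left + g * right, right + g * left

# In-phase (mono) component, L = R: each side is scaled by (1 + g).
mono_l, _ = crossfeed(1.0, 1.0)
gain_mono_db = 20 * math.log10(mono_l)       # about +1.9 dB

# Out-of-phase component, L = -R: each side is scaled by (1 - g).
anti_l, _ = crossfeed(1.0, -1.0)
gain_anti_db = 20 * math.log10(abs(anti_l))  # about -2.5 dB
```

So correlated content is boosted slightly while anticorrelated (diffuse, reverb-like) content is attenuated, a relative shift of roughly 4.4 dB between the two.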



gregorio said:


> There are various other things you’re ignoring and/or misrepresenting but the above is enough to be going on with!


You are just scraping the bottom of the barrel to find reasons to discredit me. Nobody is talking about "various other things" in the context of crossfeed, and for a good reason: they are irrelevant! If you think otherwise then please give me a mathematical calculation of why these things shouldn't be ignored. I am a math guy. Give me the math and I'll believe you!



gregorio said:


> Nonsense, spatiality does NOT happen in our brains! Early reflections, reverb and spectral, level and timing of both the direct and reflected sound occurs in reality.


Reality generates spatial _cues_ our brain interprets in a way that more or less aligns with the reality, but not always! If you play a mono sound on both stereo speakers, the reality is that sound is radiated from the speakers, but your brain interprets the situation as sound coming from the midpoint between the speakers. Simple proof of spatiality happening in the brain. Spatial _cues_ normally originate in reality but can be generated in other ways too, such as HRIR convolution.



gregorio said:


> These factors are what define spatiality, they are all objective and objectively measurable. All these factors then get modified again by HRTFs which again occurs in reality and is objective and objectively measurable. What happens in the brain is the determination of how and what we perceive in response to all this actual/objective spatial information and our personal preferences of it.


They are objectively measurable properties of reality, but how we hear them is a separate thing, although very highly correlated.



gregorio said:


> But if there wasn’t objective spatiality stereo would not exist.


Human spatial hearing developed in reality, and that's why what we hear and what the reality is are VERY close to each other. Our spatiality is subjective, but almost identical to objective reality. That's why it seems our spatiality is objective, but there are differences, and the illusion of stereo sound is based on them.



gregorio said:


> No, spatiality is a reality. The perception of spatiality when listening to stereo is partly an illusion.


Yes, stereo is an illusion.



gregorio said:


> No, you can have reasonable spatiality with just one speaker without illusion, you can have distance/depth using the previously mentioned objective factors. Of course, this is just the spatial dimension of distance (and potentially height) that doesn’t include width, so we could argue it’s “reasonableness”.


Yes, but I have limited this to stereo, because the topic of crossfeed "requires" stereo to make sense.



gregorio said:


> No, (stereo) speaker spatiality is just two point sources (speakers). We perceive a line (under certain conditions) thanks to the illusion. Typically we perceive more than just a line (also depth) due to objective factors and how our brains interpret them.


Yes, no disagreements here.



gregorio said:


> As reverb is very diffuse it will have lower ILD, ISD and ITD values than direct sound, not larger. Except in the case of a direct sound in the phantom centre, in which case it will have roughly the same or marginally more.


You are right if we talk about reality, but we don't crossfeed "reality". We crossfeed stereo recordings made in reality. Those recordings can have wild values depending on how they were produced. Also, when I said direct sound, I meant direct sound from the centre or near the centre. Sorry about that. Direct sound from the side has larger values than reverb, as you say, but how often are instruments recorded that way? You know this better than I do.



gregorio said:


> ERs are not diffuse, the ILD and ITD values will depend on the relative position of the sound source relative to the initial reflection points and therefore can be greater or less than the ILD and ITD of the direct sound. The spectrum of the ERs will depend on the distance and reflective properties of the boundaries.


Of course not "100 % diffuse", but the combination of all ERs is more diffuse than direct sound. Otherwise I completely agree.



gregorio said:


> Crossfeed does not reduce “especially ILD”, it reduces/changes everything equally below the freq threshold. It also does not separate direct sound from the reverb/reflections and therefore does not reduce reverb relative to direct sound.


Yeah, it reduces my blood pressure equally... Depending on how the recording is mixed, crossfeed can have a differing effect on direct sound and reverb, as I have explained above. If mono remains almost the same mono but out-of-phase sound attenuates, there is separation. Now, it is up to the recording how mono or out-of-phase things are...



gregorio said:


> It is probably the exact same issue or sometimes worse, because studio recordings are often close mic’ed and therefore have relatively little acoustic room information and often the rooms used in studio recording are not of a similar size and/or have significantly different acoustic properties.


Yes, but don't those productions use room mics to add room acoustics?



gregorio said:


> You’ve changed your rhetoric in regards to effectively stating that what you perceive is definitely right and anyone who perceives something different is an ignorant idiot. Obviously that was both factually false and exceptionally offensive but thankfully you don’t do that anymore, you seem to accept that perception varies and isn’t correlated to ignorance or idiocy. However, your rhetoric regarding spatiality, what crossfeed does to it and why it works so well for you, has not changed. I don’t correct you as much because your posts are generally not so offensively false, not as incorrect (except when you go off on your hobby horse of ILD at the expense of everything else) and sometimes I just can’t be bothered because it’s all been stated before and you either just ignore it or acknowledge and then dismiss it on the basis of circular arguments about your perception/preferences of ILD!
> 
> G


I was VERY offensive back then, because the way my first posts here were received stunned me. Since then I have learned that this planet is VERY hostile toward crossfeeders and that I had better accept being a second-class citizen. So I am more humble, but also more depressed, and I don't believe in myself at all. I have always failed, so I am a loser. At least 2012-17 gave me the illusion of knowing something relevant.

If I am wrong about why crossfeed works for me then can you tell me why it works for me? I have not seen your alternative theory. That's why I have no need to abandon my own theories. For me my theories make perfect sense. I don't understand why they don't make sense to you. You haven't been able to explain that. I need math. Your posts don't have much math. Sorry.


----------



## bigshot (Oct 9, 2022)

When you tear posts into contextless tiny bits to reply to them, it's easier to convince yourself that you are countering them without actually addressing what they say.

I'll answer one of your questions...



> If I am wrong about why crossfeed works for me then can you tell me why it works for me?


Subjective preference, the same way I like chocolate ice cream without any scientific evidence that chocolate ice cream is good for me.


----------



## WoodyLuvr (Oct 9, 2022)

71 dB said:


> I was VERY offensive back then, because the way my first posts here were received stunned me. Since then I have learned that this planet is VERY hostile toward crossfeeders and that I had better accept being a second-class citizen. So I am more humble, but also more depressed, and I don't believe in myself at all. I have always failed, so I am a loser. At least 2012-17 gave me the illusion of knowing something relevant.
> 
> If I am wrong about why crossfeed works for me then can you tell me why it works for me? I have not seen your alternative theory. That's why I have no need to abandon my own theories. For me my theories make perfect sense. I don't understand why they don't make sense to you. You haven't been able to explain that. I need math. Your posts don't have much math. Sorry.


My good man, with the utmost respect and the kindest of intentions, would you not like to discuss something else? You have been extremely hyper-focused on this particular subject and its associated never-ending argument across a number of threads for way too long. It's not healthy, mate. Your passion is commendable, but it can have a real bad side to it as well if you let it take control (I was quite guilty of this too in my younger years, so I am speaking from experience). If I may, have you considered participating in another thread... or better yet starting a new thread about a completely different topic that interests you? Let others interact with you in a completely different situation on a completely different subject and you may be surprised what happens! It might cheer you up!

BTW: I don't think you are a loser. Losers tend not to have the passion, intelligence, or the fortitude to argue a losing point of view for as long as you have! I am not saying you are wrong or right, but rather trying to say that you have been in a long-term battle and greatly outnumbered. We are always our own worst critics and way too hard on ourselves. Don't take yourself or life too seriously... it is one big practical joke, a comedic tragedy, and then we die! LOL! Cheers.


----------



## 71 dB

I have posted elsewhere too, but usually it is like talking to a wall:

https://www.head-fi.org/threads/autechre-appreciation-thread.238228/page-2


----------



## castleofargh

WoodyLuvr said:


> My good man, with the utmost respect and the kindest of intentions, would you not like to discuss something else? You have been extremely hyper-focused on this particular subject and its associated never-ending argument across a number of threads for way too long. It's not healthy, mate. Your passion is commendable, but it can have a real bad side to it as well if you let it take control (I was quite guilty of this too in my younger years, so I am speaking from experience). If I may, have you considered participating in another thread... or better yet starting a new thread about a completely different topic that interests you? Let others interact with you in a completely different situation on a completely different subject and you may be surprised what happens! It might cheer you up!
> 
> BTW: I don't think you are a loser. Losers tend not to have the passion, intelligence, or the fortitude to argue a losing point of view for as long as you have! I am not saying you are wrong or right, but rather trying to say that you have been in a long-term battle and greatly outnumbered. We are always our own worst critics and way too hard on ourselves. Don't take yourself or life too seriously... it is one big practical joke, a comedic tragedy, and then we die! LOL! Cheers.


I noticed early on that @71 dB was massively invested in crossfeed, but I couldn't understand why he gave any F about what greg, biggie and others believed. And I think I got it now. I think he likes those guys and that's why it hurts so much.

The discussion format also doesn't help. This whole affair would surely be a nothingburger if we were all face to face in a pub having a good time.


----------



## bigshot

I agree. I don't think this is about crossfeed. This is about being accepted as a peer. That should be a given among the regulars at sound science. It isn't something we have to prove to each other. He's taking disagreement as rejection. No one should argue for status. This is just an Internet forum for goodness sakes, not a University. The idea is to visit and learn from each other, not impress each other with our knowledge and expertise. I know I don't care if people think ill of me. (That should be blatantly obvious!)


----------



## Steve999

A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day. — 'Ah, so you shall be sure to be misunderstood.' — Is it so bad, then, to be misunderstood? Pythagoras was misunderstood, and Socrates, and Jesus, and Luther, and Copernicus, and Galileo, and Newton, and every pure and wise spirit that ever took flesh. To be great is to be misunderstood.
— Ralph Waldo Emerson, "Self-Reliance", _Essays: First Series_, 1841


----------



## Steve999 (Oct 9, 2022)

@71 dB , at my own great peril, I do say, unequivocally, that I enjoy your crossfeed posts, and I find them to be thoughtful and intelligent. 🙂


----------



## 71 dB (Oct 10, 2022)

castleofargh said:


> I noticed early on that @71 dB was massively invested in crossfeed, but I couldn't understand why he gave any F about what greg, biggie and others believed. And I think I got it now. *I think he likes those guys and that's why it hurts so much.*


You are not wrong. I believe Gregorio and Bigshot both have massive knowledge on audio issues and in many things I look up to them, but I also think even they aren't all-knowing Gods and someone like me may bring out areas where their opinions aren't as correct as they think.



bigshot said:


> He's taking disagreement as rejection.


Yes, because you don't express disagreement by saying you disagree. You express it by saying I am wrong. You can't consider the possibility that it is actually you who is wrong. According to you, I can be wrong because the way I hear things misleads me, but you don't realize that would apply to you too! Maybe the way you hear has misled you? I am constantly "fact-checking" myself and that makes me unsure of myself (am I right or wrong?), but I can't adopt a new dogma, your dogma, just because you say I am wrong. I need to be convinced! Telling me I am wrong is not convincing. I need something much more intellectual: logic, math, and yes, it would help if I heard things the same way other people do.



Steve999 said:


> @71 dB , at my own great peril, I do say, unequivocally, that I enjoy your crossfeed posts, and I find them to be thoughtful and intelligent. 🙂


Thanks! Nice to hear that! You made my day Steve! It is easy to forget that people who agree with me don't reply to my posts as much as those who disagree.


----------



## bigshot

I'm not wrong on this one. I know the areas I don't know and I avoid talking about them. Just about everyone in the forum has pointed out you're not correct on this one. It isn't just me you're ignoring.


----------



## jamesjames

71 dB said:


> You are not wrong. I believe Gregorio and Bigshot both have massive knowledge on audio issues and in many things I look up to them, but I also think even they aren't all-knowing Gods and someone like me may bring out areas where their opinions aren't as correct as they think.
> 
> 
> Yes, because you don't express disagreement by saying you disagree. You express it by saying I am wrong. You can't think the possibility that it is actually you who is wrong. According to you I can be wrong because the way I hear things misleads me, but you don't realize that would apply to you too! Maybe the way you hear has misled you? I am constantly "fact-checking" myself and that makes me unsure of myself (am I right or wrong?), but I can't adopt a new dogma, your dogma just because you say I am wrong. I need to be convinced! Telling me I am wrong is not convincing. I need something much more intellectual. Logic, math and yes, it would help if I heard things the same way other people do.
> ...


I agree with Steve999.


----------



## 71 dB (Oct 10, 2022)

bigshot said:


> I'm not wrong on this one. I know the areas I don't know and I avoid talking about them. Just about everyone in the forum has pointed out you're not correct on this one. It isn't just me you're ignoring.


Not _everyone_ has "pointed out I am wrong". Only some of you who think you know better than I do. I am not someone who tested crossfeed for 5 minutes yesterday for the first time and now claims expertise in it. I have used crossfeed for 10 years*. I have an academic background. I have thought about these issues for years. I have written Nyquist plugins to test things. Why would I write this much about crossfeed if it wasn't "my thing"?

* It means I have listened to crossfed material on headphones for thousands of hours. How about you? How can you be an expert on crossfeed when you hardly even use headphones? You are an expert on speaker sound!


----------



## bigshot (Oct 10, 2022)

You really are incapable of letting this go. I think you need time away from Sound Science. Your participation here is welcomed by me, but grabbing on this hard can't be good for you.


----------



## jamesjames

bigshot said:


> You really are incapable of letting this go. I think you need time away from Sound Science. Your participation here is welcomed by me, but grabbing on this hard can't be good for you.


I enjoy crossfeed.  I think it makes music sound more natural.  Is my participation here welcomed by you?


----------



## bigshot

Enjoying crossfeed is fine. I have absolutely no problem with people liking crossfeed, bass boosts, high end roll off or euphonic distortion. If you like it, great. Enjoy it in good health.

My only objection is saying that crossfeed increases spatiality, because that isn’t what crossfeed does. Crossfeed reduces channel separation. Doing that can make headphone listeners find music more pleasant. But it doesn’t create an “outside the head experience” and it doesn’t contribute distance cues to the sound, making the music seem to be at a distance from the listener. It is nothing like binaural recording. And it isn’t spatial like speakers in a room.

Blending channels is cool. I’d like it myself with early stereo Beatles records. But blending channels doesn’t create spatiality. There are other ways to do that, which I’ve already outlined, so I won’t repeat myself.
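To make the "reduces channel separation" point concrete, here is a minimal Python sketch of a plain channel blend. The gain value is purely illustrative, not taken from any particular crossfeed product:

```python
def blend(left, right, g=0.3):
    """Feed a fraction g of each channel into the opposite channel.
    No delay, no filtering: this only reduces channel separation."""
    out_l = [l + g * r for l, r in zip(left, right)]
    out_r = [r + g * l for l, r in zip(left, right)]
    return out_l, out_r

# Mid/side view of what blending does:
#   mid  (L+R) is scaled by (1+g)
#   side (L-R) is scaled by (1-g)
# so the side-to-mid ratio -- the channel separation -- shrinks
# by (1-g)/(1+g). Nothing here adds a distance or depth cue.
```

A pure "side" signal (L = 1, R = -1) comes out at 0.7/-0.7 with g = 0.3, while a mono signal is simply boosted; the stereo difference is what gets squeezed.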


----------



## jamesjames

bigshot said:


> Enjoying crossfeed is fine. I have absolutely no problem with people liking crossfeed, bass boosts, high end roll off or euphonic distortion. If you like it, great. Enjoy it in good health.
> 
> My only objection is saying that crossfeed increases spatiality, because that isn’t what crossfeed does. Crossfeed reduces channel separation. Doing that can make headphone listeners find music more pleasant. But it doesn’t create an “outside the head experience” and it doesn’t contribute distance cues to the sound, making the music seem to be at a distance from the listener. It is nothing like binaural recording. And it isn’t spatial like speakers in a room.
> 
> Blending channels is cool. I’d like it myself with early stereo Beatles records. But blending channels doesn’t create spatiality. There are other ways to do that, which I’ve already outlined, so I won’t repeat myself.


It does, in fact, create an 'outside the head experience' for me.  So I guess we fundamentally disagree.  However, I accept you are entitled to your opinion.


----------



## 71 dB

bigshot said:


> You really are incapable of letting this go. I think you need time away from Sound Science. Your participation here is welcomed by me, but grabbing on this hard can't be good for you.


A few posts ago you wrote that you were done here. Accusing me of not letting go is ironic in that context. If you wish me to post less here, then don't attack me the way you do, forcing me to reply.


----------



## 71 dB (Oct 10, 2022)

bigshot said:


> *My only objection is saying that crossfeed increases spatiality*, because that isn’t what crossfeed does. Crossfeed reduces channel separation. Doing that can make headphone listeners find music more pleasant. But it doesn’t create an “outside the head experience” and it doesn’t contribute distance cues to the sound, making the music seem to be at a distance from the listener. It is nothing like binaural recording. And it isn’t spatial like speakers in a room.


The problem here is you don't even try to figure out what I _mean_ when I say those things. You interpret it in a way that allows you to attack me the most.

Increased spatiality in this context means the secondary spatial cues in the recording (there are no primary cues with headphones) are scaled into a more natural form so that spatial hearing can make better sense of them. For some people this makes the music more enjoyable, decreases listening fatigue and allows an “outside the head experience”, depending on the quality of the secondary spatial cues in the recording. It doesn't mean binaural sound or speakers in a room. Crossfeed doesn't create new distance cues. It allows better interpretation of the distance cues in the recording.

Without crossfeed we "bypass" the important process of scaling channel separation to natural levels, and this makes the secondary spatial cues in the recording hard to interpret. Scaling channel separation to natural levels is important for those of us who get annoyed by unnatural levels, because the practice of mixing music for speakers creates unnatural levels of channel separation (acoustic crossfeed scales it back to natural). With speakers there are no problems: the music is mixed and listened to with acoustic crossfeed. But with headphones we suddenly drop acoustic crossfeed, and our ears are bombarded with unnatural levels of channel separation. Some people don't mind it, some do. I am one of those who do mind.
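The generic recipe behind most crossfeed processors (including the DSP Manager description quoted at the start of this thread) can be sketched in a few lines of Python. All parameter values here are illustrative placeholders, not the settings of any real plugin:

```python
import math

def one_pole_lowpass(x, cutoff_hz, fs):
    """Crude one-pole IIR lowpass, enough to show the idea."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)
    y, out = 0.0, []
    for s in x:
        y = (1.0 - a) * s + a * y
        out.append(y)
    return out

def delay(x, n):
    """Delay a signal by n whole samples, keeping the original length."""
    return [0.0] * n + x[:len(x) - n] if n > 0 else list(x)

def crossfeed(left, right, fs=44100, cutoff_hz=700, delay_ms=0.3, atten_db=-9.0):
    """Add a delayed, lowpass-filtered, attenuated copy of the opposite
    channel to each channel. Hard-panned content bleeds across at reduced
    level, shrinking channel separation toward what speakers produce."""
    g = 10.0 ** (atten_db / 20.0)            # attenuation as a linear gain
    n = int(round(delay_ms * fs / 1000.0))   # interaural delay in samples
    bleed_r = delay(one_pole_lowpass(right, cutoff_hz, fs), n)
    bleed_l = delay(one_pole_lowpass(left, cutoff_hz, fs), n)
    out_l = [l + g * b for l, b in zip(left, bleed_r)]
    out_r = [r + g * b for r, b in zip(right, bleed_l)]
    return out_l, out_r
```

With a hard-left signal, the left output is untouched while the right output receives a quiet, delayed, darkened copy, which is exactly the "scaling channel separation to natural levels" described above.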


----------



## bigshot (Oct 10, 2022)

jamesjames said:


> It does, in fact, create an 'outside the head experience' for me.


How much distance do you perceive? Five feet? Ten feet? Does it pop in and out as you dial the crossfeed up and down? Or is the effect consistent regardless of how much crossfeed you dial in? Could the crossfeed DSP you're using have some sort of time delay built into it?


----------



## 71 dB

bigshot said:


> How much distance do you perceive? Five feet? Ten feet? Does it pop in and out as you dial the crossfeed up and down? Or is the effect consistent regardless of how much crossfeed you dial in?


I can't speak for jamesjames, but for me the distance is typically quite small, 1-2 feet, though on some recordings with very well recorded spatiality of large acoustic spaces (church music for example) the distance can be as much as 5 feet. These distances exist also without crossfeed, but the miniature soundstage is fractured and it fluctuates with the music. The hit of a piano note, for example, can land right at the ear but then move further away as it decays. Crossfeed puts the miniature soundstage in order for me and the instruments stay where they are. The instruments are also more point-like sound sources, not fractured all over. For me this stability is one of the great benefits of crossfeed.

Immediately after increasing the crossfeed level the miniature soundstage feels smaller, because spatial hearing reacts to the change, but it adjusts back in a few seconds. The opposite happens when decreasing the crossfeed level. The effect of crossfeed should always be evaluated after spatial hearing has adjusted to it. Closing your eyes can help in "recognizing" the miniature soundstage, its size and shape.


----------



## castleofargh

jamesjames said:


> It does, in fact, create an 'outside the head experience' for me.  So I guess we fundamentally disagree.  However, I accept you are entitled to your opinion.


But that's the thing: if different people get significantly different experiences from crossfeed, saying that it improves spatiality compared to unprocessed headphone listening could become a game of sampling the listeners to cheat on statistical results. Where has it been established that it is consistently an improvement for listeners (spatially or not)? 
A lot of this apparently technical discussion really comes down to personal impressions and personal priorities. At the end of the chain, the human can't be ignored. 
From years of being a solid crossfeed user myself and an all-time seeker of headphone-saving tools, I know that most people who tried it didn't keep using it.

@71 dB argued that like EQ, most people might be rejecting crossfeed simply because they never tried it with good settings for them. I doubt it would solve everything (no, I'm sure it wouldn't), but I agree that it's also part of the difficulty in assessing the actual success rate of crossfeed.
You like it, that other guy doesn't. All it shows is that people will have different experiences. Sometimes wildly different.

We understand psychoacoustics pretty well when it comes to spatial cues. But our understanding is specific to a real sound source in front of us. The moment some variables go off track, don't exist, or shouldn't exist, it becomes a gamble. 
Predicting how we'll perceive a sound in front of us is made stable by how we have a lifetime of experiencing sounds that way. But something like crossfeed, who knows how a given brain will receive that half-baked approximation of ... IDK, invisible speakers in an anechoic chamber but with room acoustics for all outside noises, and an interaural EQ that's not really what our head expects for that angle? That might still not be an accurate representation of the effect, but my point is, it's clearly not something we have a lifetime of experience dealing with and calibrating with the help of our eyes.

Speakers in some aspects are also acoustic monsters compared to a given sound source in front of us. But the key difference is that they do provide a consistent experience to all listeners in the same chair with still acceptable hearing. We can more easily make predictions about speakers and draw conclusions about what effect will have what impact.
Headphones don't have that consistency, and neither does crossfeed. 
All that to say, those who manage to get more 3D anything from crossfeed are lucky, I never got that so I hate all of you.


----------



## castleofargh

71 dB said:


> I can't speak for jamesjames, but for me typically the distance is quite small, 1-2 feet, but on some recordings with very well recorded spatiality of large acoustic spaces (church music for example) the distance can be as much as 5 feets. These distances exist also without crossfeed, but the miniature soundstage is fractured and it fluctuates with the music. Hit of a piano note for example can hit on the ear, but then move further as it decays away. Crossfeed puts the minuature soundstage in order for me and the instrument stay were they are. The instruments are also more point-like sound sources and not fractured all over. For me this stability is one of the great benefits of crossfeed.
> 
> Immediately after increasing crossfeed level the miniature soundstage feels smaller, because spatial hearing reacts to the change, but adjusts back in a few seconds. The opposite happens when decreasing crossfeed level. The effect of crossfeed should always be valuated after spatial hearing has adjusted to it. Closing eyes can help "recognizing" the miniature soundstage, its size and shape.


That's interesting. I get the maximum effect when I switch it ON. After a while, my brain somehow manages to push the instruments back to nearly 180° panning, and besides a difference in long-term fatigue, I get to a point where I can't tell if crossfeed is ON or not until I switch it ON/OFF again.


----------



## 71 dB

castleofargh said:


> That interesting. I get the maximum effect when I switch it ON. After a while, my brain somehow manages to push the instrument back to nearly 180° panning, and beside a difference in long term fatigue, I get to a point where I can't tell if crossfeed in ON or not until I switch ON/OFF again


For me the panning or placement of the instruments doesn't change, so your experience of 180° panning with crossfeed after a while is interesting. I tried to "visualize" the miniature soundstage without crossfeed (upper) and with proper crossfeed (lower):


----------



## bigshot

I would like to hear JamesJames’ description of the effect. I have some questions that might help us figure out where this effect is coming from.


----------



## jamesjames

bigshot said:


> How much distance do you perceive? Five feet? Ten feet? Does it pop in and out as you dial the crossfeed up and down? Or is the effect consistent regardless of how much crossfeed you dial in? Could the crossfeed DSP you're using have some sort of time delay built into it?


I'm happy to answer.  It's an interesting question.  I've always experienced headphones as quite 'flat' in their presentation - two-dimensional.  A fascinating sound, but not a particularly persuasive recreation of the sound I hear at concerts and recitals.  I find with crossfeed that the presentation can be more three-dimensional - there seems to be a 'z' axis where previously there was only an 'x' and a 'y' axis to the acoustic image.  It seems to create a depth perspective which was previously absent.  I find the perceived effect varies significantly depending on implementation.  I find crossfeed also removes a fatiguing aspect of standard headphone presentation.

To date, I've not encountered digital implementations that work very well for me - although I haven't heard what I suspect might be the best (eg, dCS, Weiss).  But I've found analogue implementations can be excellent to my ear - I'm thinking in particular of the Moon 430HA and SPL Phonitor amps.  The effect also varies with the type of music.  Smaller scale music seems to benefit most - but it's hard to predict.

Of course, we're talking here about impressionistic, psychoacoustic effects, so it's hard to be precise.  But with crossfeed the performance space can apparently be moved some distance from me - with a string quartet in a resonant acoustic it might be some feet.  Orchestras can seem quite distant.  I guess I would say that, when implemented well, crossfeed creates the impression that the listener is appropriately distant from the performance - much as you would hope to achieve with well-positioned loudspeakers.  That said, it seems it's not for everyone!  All I would suggest is that anyone interested in the idea should try it and make up his or her own mind.


----------



## gregorio (Oct 10, 2022)

71 dB said:


> Crossfeed is a very coarse manipulation of spatiality and therefore looking at ILD only works fine.
> When you use HRTF to create very realistic soundstage you obviously need much more detailed models and ILD alone is not enough!


You seem to be arguing against yourself here! Crossfeed IS a very coarse manipulation but it’s not only manipulating ILD, it crossfeeds everything, including all those factors HRTFs address. You can’t just ignore/dismiss those factors because crossfeed fails to handle them correctly! And, if we could simply dismiss those factors then why did anyone bother to invent HRTFs in the first place?


71 dB said:


> Same happens with speakers.


No it doesn’t, you can’t just keep repeating that.


71 dB said:


> If you play mono sound on both stereo speakers, the reality is sound is radiated from the speakers, but your brain interprets the situation as sound coming from the middle point between the speakes. Simple proof of spatiality happening in brain.


No, it’s simple proof spatiality is happening in the speakers/room and your brain is interpreting that spatiality to create its own perception.


71 dB said:


> Human spatial hearing has developed in reality so that's why what we hear and what the reality is are VERY close to each other.


Not particularly and even less so in the case of stereophonic sound.


71 dB said:


> I can't adopt a new dogma, your dogma just because you say I am wrong. I need to be convinced!


It’s not a new dogma and it’s not my dogma. HRTFs demonstrate the deficiencies of simple crossfeed, HRTFs are not new or my idea/hypothesis/dogma. So, I’ve no idea why you’re not convinced.


71 dB said:


> Increased spatiality in this context means the secondary spatial cues in the recording (there is no primary cues with headphones) are scaled into more natural form so that spatial hearing can make better sense of them.


But spatial cues are NOT scaled into a more natural form; the ILD spatial cue might be, but the other spatial cues are NOT, which is why spatial hearing cannot make better sense of them, although a minority of people do seem to perceive that effect.


jamesjames said:


> I enjoy crossfeed. I think it makes music sound more natural. Is my participation here welcomed by you?


There’s no problem at all if you enjoy crossfeed or if you have the perception that the music seems more natural, because that’s in line with the science and you’re not contradicting it without reliable evidence.


jamesjames said:


> It does, in fact, create an 'outside the head experience' for me.


Here though we do start to potentially have a problem. Crossfeed doesn’t create an “outside the head experience” for me and many/most others, so what’s going on? Does the crossfeed know when you’re using it and create something different when I’m listening to it? Obviously not. The difference when you and I listen with crossfeed is not the crossfeed, it’s us, we have different HRTFs and different perception. So, it is not “in fact” crossfeed that creates the “outside the head experience”, it’s your perception!

G


----------



## bigshot (Oct 10, 2022)

jamesjames said:


> I'm happy to answer.  It's an interesting question.


Do you mind answering specific questions? I understand your general impression. I'm trying to figure out what is creating that impression.

1) Here is a song I'd like you to listen to without crossfeed and then again with it...



How much of a difference in distance do you perceive between without and with? Five feet? Ten feet? Is it like a speaker system with a soundstage at the other side of the room from you? Or is it just a couple of inches from your face?

If each element is different, let me know which ones sound like they're inside your head and which ones sound like they're on the other side of the room (or whatever distance you perceive)... the vocals, the bongos, the piano, the woo woo's, the guitar solo. Are they all at the same distance from you, or are some things closer and some things further?

2) Do you get any perception of distance at all without crossfeed? If so, what elements sound further away?

3) Does the perception of distance pop in and out as you dial the crossfeed up and down? Is there a narrow window for the perception of distance to be apparent? Or is the effect pretty consistent regardless of how much or how little crossfeed you dial in?

4) Could the crossfeed DSP you're using have some sort of time delay built into it? I know Apple's spatial audio does. What brand and model are you using?

You don't need to write long explanations. If you could just give me direct concise answers to each question, that would be great. That will help me track it down. When you finish this, I have one other track for you to listen to and check that one out the same way... with crossfeed and without. Thanks for your helpfulness!


----------



## jamesjames

gregorio said:


> You seem to be arguing against yourself here! Crossfeed IS a very course manipulation but it’s not only manipulating ILD, it crossfeeds everything, including all those factors HRTFs addresses. You can’t just ignore/dismiss those factors because crossfeed fails to handle them correctly! And, if we could simply dismiss those factors then why did anyone bother to invent HRTFs in the first place?
> 
> No it doesn’t, you can’t just keep repeating that.
> 
> ...


Hello Gregorio

I don't propose to answer the substance of this because it's completely arid.

J


----------



## jamesjames

bigshot said:


> Do you mind answering specific questions? I understand your general impression. I'm trying to figure out what is creating that impression.
> 
> 1) Here is a song I'd like you to listen to without crossfeed and then again with it and tell me how much of a distance in front of you the various elements in the mix sound when you turn on the crossfeed... the vocals, the bongos, the piano, the woo woo's, the guitar solo. Are they all at the same distance from you, or are some things closer and some things further? Do you get any perception of distance without crossfeed?
> 
> ...



Hello Bigshot

Re 1, I haven't listened to this, but I can say that various elements of the mix can apparently 'move' - some back, some forward, some sideways.  Some bad recordings sound even worse.  But I've never heard the various elements fall apart so that some are 'inside' and some 'outside' the head as you say.

Re 2, I can't add to my earlier answers.

Re 3, the parameters I've described in my earlier posts work as discussed to affect different aspects of the adjustable crossfeed 'matrix' (as described by SPL).  I suggest you take a quick look at the SPL Phonitor xe manual online to find a detailed discussion of the design and intended operation of these.  The iFi analogue implementation in the iCan is, I think, different again.  Meier digital approaches as found in foobar2000 and Meier amps seem to vary in levels of adjustability - depending on implementation.

Re 4, the SPL parameters just mentioned are designed to adjust for interaural time and level differences w/o DSP.

The Moon crossfeed can be switched in and out but can't be adjusted. I've tried to copy a link to an article by Tyll Hertsens on the Moon below.  Among other things, it spends some time on the history of crossfeed and includes one or two of Hertsens' thoughts on the subject.

https://wilbert.nl/images/stories/v.../moon review/MOON-Neo-430HA-innerfidelity.pdf


----------



## bigshot (Oct 10, 2022)

JamesJames... would you PLEASE listen to the track both with and without crossfeed and answer the questions. Your answers don't mean anything without being based on a specific recording. This isn't a trick. I'm trying to eliminate variables to focus on what we are trying to figure out. (It's a good song too.) You don't have to explain why you hear what you hear. I'm just trying to understand specifically *what* you are hearing. So just answer the questions directly.

I checked the manufacturer's specs on your headphone amp. The crossfeed is just a simple blending of channels. Pretty basic. No fancy effects. You say it isn't adjustable? Are most cross feeds hard wired like that? Weird. When you answer the questions, you can skip number 3 and 4 since yours isn't adjustable and doesn't involve any time delay.

The song I linked to contains several different kinds of sounds that are clearly differentiated, so it will be useful to hear how all the different elements in the song sound to you.


----------



## jamesjames

bigshot said:


> JamesJames... would you PLEASE listen to the track both with and without crossfeed and answer the questions. Your answers don't mean anything without being based on a specific recording. This isn't a trick. I'm trying to eliminate variables to focus on what we are trying to figure out. (It's a good song too.) You don't have to explain why you hear what you hear. I'm just trying to understand specifically *what* you are hearing. So just answer the questions directly.
> 
> I checked the manufacturer's specs on your headphone amp. The crossfeed is just a simple blending of channels. Pretty basic. No fancy effects. You say it isn't adjustable? Are most cross feeds hard wired like that? Weird. When you answer the questions, you can skip number 3 and 4 since yours isn't adjustable and doesn't involve any time delay.
> 
> The song I linked to contains several different kinds of sounds that are clearly differentiated, so it will be useful to hear how all the different elements in the song sound to you.


Ok!  But, having listened, I'm not sure you'll find it very helpful ... you might be better reading the Hertsens article and the comments of others out there who use and like crossfeed.

So, without crossfeed, I'm afraid this sounds like a highly compressed pop song, with heavy, centered emphasis on vocals and lead guitar.  The 'wall of noise' effect tends to swallow up the percussion, etc, as the levels rise.  The soundstage is essentially flat.  I have no sense of any performance space surrounding the performers. Actually, it sounds to me like a collection of instruments and sounds taken from a mixing desk in a studio.

With crossfeed it's essentially the same - with the wall of noise perhaps shifted back slightly (or perhaps slightly more reverberant - hard to say).  I don't hear any added depth of performance space or movement between instruments.

But I do like the song - and I guess basic 'fatigue' levels would probably be lower with crossfeed over time.

I've said earlier that I listen only to classical music, and that my experience may not be so relevant for other kinds of music.  I guess that the prevalence of 'field' recording of classical music is pretty important here.  Even studio recordings of solo piano recitals, for example, are mic'd in a similar way.  This means the recording - usually hi-res these days - is likely to present a performance space as part of the acoustic image.  It's also important that the recording doesn't involve compression.


----------



## jamesjames (Oct 10, 2022)

I meant to add that I don't know whether most crossfeed implementations allow adjustment.  The Moon doesn't; the iFi, SPL and Meier amps do, to my knowledge.  I can't recall whether foobar does.  I believe dCS and Weiss amp/DACs are adjustable.  I seem to recall the Burson allows a couple of adjustments (unlike the SPL, for example, which is highly adjustable).


----------



## bigshot (Oct 10, 2022)

That actually is helpful. I'll get you a different kind of track tomorrow to check out with crossfeed.


----------



## jamesjames

bigshot said:


> That actually is helpful. I'll get you different kind of track tomorrow to check out with crossfeed.


Well, I'm happy to have been of service.  But I'm afraid the jamesjames testing laboratory is closing its doors for good - due to competing priorities ... I don't think I have anything useful to add for the time being.  And I think testing crossfeed (or other playback features) by reference to YouTube feeds probably won't be terribly helpful in allowing people to take their thinking further.  As I said earlier, I think there's more than enough information out there already, prepared by people with much greater technical expertise than me.  I really would suggest downloading foobar2000, or finding a store with a demo crossfeed amp, if anyone's interested in exploring.  Speaking for myself, I was initially very sceptical about the idea - to the point of not trying the Moon circuit for some years.  Hearing it with music and phones that I knew and liked was the important thing once I had an interest.


----------



## bigshot (Oct 11, 2022)

That is disappointing. You helped me figure out one big thing about your particular preference for crossfeed though. I appreciate it. I think I've figured out what is going on and why some people report more effect from crossfeed than others.


----------



## 71 dB

gregorio said:


> You seem to be arguing against yourself here! Crossfeed IS a very coarse manipulation but it’s not only manipulating ILD, it crossfeeds everything, including all those factors HRTFs address. You can’t just ignore/dismiss those factors because crossfeed fails to handle them correctly! And, if we could simply dismiss those factors then why did anyone bother to invent HRTFs in the first place?


What I say is that we can LOOK at only ILD, even if crossfeed does something else too (e.g. ITD changes).

What exactly do you mean by saying "it crossfeeds everything"? A 100 Hz low pass filter filters everything, but 20 Hz frequencies remain almost untouched while 20 kHz is filtered away massively. Similarly, crossfeed doesn't change a mono signal much (Jan Meier-type H-topology crossfeeders are even mono-NEUTRAL!), but a stereo signal with large channel separation gets modified a lot (made more mono!).
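That level-difference claim can be checked with a toy calculation. The sketch below uses hypothetical helper names and a frequency-independent bleed gain (a real crossfeeder low-pass filters and delays the bleed); it only illustrates that blending a fraction of each channel into the other shrinks a large inter-channel level difference while leaving identical (mono) content's balance untouched:

```python
import math

def rms(x):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def ild_db(left, right):
    """Level difference between the two channels, in dB."""
    return 20 * math.log10(rms(left) / rms(right))

def bleed(left, right, g=0.3):
    """Frequency-independent crossfeed: mix a fraction g of each channel
    into the other. The flat gain is purely for illustration."""
    out_l = [l + g * r for l, r in zip(left, right)]
    out_r = [r + g * l for l, r in zip(left, right)]
    return out_l, out_r
```

With a tone panned far left (left at 10x the level of right, i.e. a 20 dB level difference), a bleed of g = 0.3 brings the difference down to roughly 8 dB, while a dual-mono tone keeps exactly 0 dB — the shared content only changes in overall level.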

This is what this has been. I write things how I understand and know them using my skills in English (my English is pretty good, but it is not my native language). The not-so-standardized terminology regarding audio/sound doesn't help. Then you read my post and interpret it (purposely or not) in ways that make it look wrong and contradictory. Then I try to explain what I really mean and again you read that explanation not trying to understand what I mean, but trying to twist it so that you can again say I am wrong!! So frustrating! The fact that there are objective and subjective elements to this makes things even worse.

If I say I measured the width of a book using a measuring stick, you would probably say I can't use a measuring stick because books are made of atoms and the resolution of measuring sticks is 10^8 times too coarse to measure atoms. That's how this feels to me.



gregorio said:


> No it doesn’t, you can’t just keep repeating that.


Yes I can because I am right. With speakers the left channel "leaks" acoustically to the right ear and vice versa. Crossfeed does something similar electronically. The result in both cases is similar: reduced ILD at low frequencies and increased cross-correlation between the ears favoring an ITD of about 250 µs. I have never said those things are identical, of course. Crossfeed is a coarse approximation of acoustic crossfeed that ignores the fine detail of HRTF and instead simulates the overall shape with a low pass filter. Would you PLEASE try to understand what I mean instead of twisting everything?
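For reference, the kind of processing being discussed — a delayed, attenuated, low-pass-filtered copy of the opposite channel mixed in, as in the bs2b-style description quoted at the start of the thread — can be sketched in a few lines. The parameter values (cutoff, attenuation, delay) and the one-pole filter are illustrative assumptions, not any specific product's design:

```python
import math

def crossfeed(left, right, fs=44100, cutoff=700.0, atten_db=-8.0, delay_us=250):
    """Mix a delayed, low-pass-filtered, attenuated copy of each channel
    into the opposite one. All parameter values are illustrative."""
    a = math.exp(-2.0 * math.pi * cutoff / fs)   # one-pole low-pass coefficient
    g = 10 ** (atten_db / 20.0)                  # bleed gain as a linear factor
    d = max(1, round(fs * delay_us / 1e6))       # bleed delay in whole samples
    n = len(left)
    lp_l = lp_r = 0.0
    bleed_l, bleed_r = [0.0] * n, [0.0] * n      # low-passed copies of each channel
    for i in range(n):
        lp_l = (1.0 - a) * left[i] + a * lp_l
        lp_r = (1.0 - a) * right[i] + a * lp_r
        bleed_l[i], bleed_r[i] = lp_l, lp_r
    out_l = [left[i] + g * (bleed_r[i - d] if i >= d else 0.0) for i in range(n)]
    out_r = [right[i] + g * (bleed_l[i - d] if i >= d else 0.0) for i in range(n)]
    return out_l, out_r
```

Feeding in a hard-panned signal (all left, silent right) leaves the left channel untouched but puts a delayed, filtered, attenuated copy of it into the right channel — the reduced low-frequency ILD and the short inter-channel delay described above.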



gregorio said:


> No, it’s simple proof spatiality is happening in the speakers/room and your brain is interpreting that spatiality to create its own perception.


If that were true, binaural spatiality would be impossible. No room, no spatiality. Spatial cues are generated in a room (or otherwise) and our spatial hearing creates spatiality from them, but I don't think we are even disagreeing here. It is more like arguing about how to express things with words.



gregorio said:


> Not particularly and even less so in the case of stereophonic sound.


Yes, but stereophonic sound is a small fraction of what we hear in our lives. It is a special case of sound that is made to fool our spatial hearing. I think spatial hearing is easy to fool, but there just isn't much fooling going on in the world. Again, you interpreted what I said in a funny way. TRY to understand what I say. Read between the lines. Give me the benefit of the doubt instead of always interpreting everything the worst way. Ask me to clarify things before jumping to your hostile conclusions declaring I am WRONG. Do you really think I am an idiot who knows nothing? That's how you treat me! It is INSANELY insulting. That's why I lose my temper from time to time!



gregorio said:


> It’s not a new dogma and it’s not my dogma. HRTFs demonstrate the deficiencies of simple crossfeed, HRTFs are not new or my idea/hypothesis/dogma. So, I’ve no idea why you’re not convinced.


*BUT I don't deny deficiencies of simple crossfeed!!!* 
I don't think crossfeed is the be-all and end-all solution! I am saying it is surprisingly effective for its simplicity and it is enough for me to make headphone sound enjoyable. What I do deny is your claim that simple crossfeed can improve headphone sound only in specific cases such as hard-panned ping-pong recordings.

I am convinced HRTF solutions are better than simple crossfeed. Of course I am!
I am convinced even simple crossfeed can improve headphone spatiality + enjoyment a lot.

Default headphone sound => worst
Simple crossfeed => better
State of the art HRTF convolution solution => best

Hopefully this clarifies some things for you.



gregorio said:


> But spatial cues are NOT scaled into more natural form, the ILD spatial cue might be but the other spatial cues are NOT, which is why spatial hearing cannot make better sense of them, although a minority of people do seem to perceive that effect.


Of course fixing only one problem can improve the situation, especially if the fixed problem was the most harmful one. Maybe it is you who has a mental block preventing you from getting the most out of crossfeed? That's how this sounds. It is as if you try to come up with excuses for why crossfeed can't improve things. I listen to the sound with an open mind: Does it improve the sound for me or not? Do I enjoy the sound more?

How much are the other spatial cues wrong to begin with? If there were something nearly as bad as ILD then perhaps crossfeed designers would have tried to fix it too? I am not aware of any spatial cue besides ILD that is a big problem with headphones. If you know one then please tell me about it and explain why it is a problem and how it should be fixed.

I believe simple crossfeed is surprisingly successful, because what it does simulates what happens with the direct sound from speakers: the acoustic crossfeed.


----------



## gregorio

71 dB said:


> Crossfeed puts the miniature soundstage in order for me and the instruments stay where they are. The instruments are also more point-like sound sources and not fractured all over. For me this stability is one of the great benefits of crossfeed.





71 dB said:


> I tried to "visualize" the miniature soundstage without crossfeed (upper) and with proper crossfeed (lower):


That’s interesting because in many respects, what I experience is almost the exact opposite of what you describe.

Using the example of, say, an orchestral recording, what I experience with speakers vs cans is analogous to listening to the orchestra from an ideal listening position, say 15-20m away from the orchestra, while putting on the cans is like suddenly jumping forward towards the orchestra, to a position roughly equivalent to the conductor but even further forward. In the real scenario, the orchestra appears much wider and has a lower ratio of reverb to direct sound but of course we aren’t stretching the width of everything, just the soundstage. The sound sources (instruments) aren’t stretched, they’re just more separated within a wider soundstage and appear even more distinct due to the lower reverb ratio, which also reduces depth. This is very similar to the effect of wearing cans and is why some/many engineers use cans when recording, because it’s easier to notice details/faults that may be masked or partially concealed by reverb and it’s easier to identify where (which instrument or mic) the detail/fault is happening. This perception seems to be almost the exact opposite of your perception. Instead of more separation and more distinct positioning, you seem to experience a “blurring” effect.

This analogy of cans and sitting in the orchestra is not ideal though, all the sound appears to be occurring inside my head with cans. There is some perception of depth but it’s more squashed and not as coherent as in the real life scenario. My perception of bass and bass balance is not the same either but it is quite a linear relationship and therefore usually fairly predictable. I do occasionally get anomalies with popular genres, say the lead vocal on a different horizontal plane, at the top of my head. Everything is always inside my head with cans though, the only exception is some binaural sound recordings accompanied by video (providing visual cues). 

With crossfeed I perceive a narrower soundstage, so more like the width of an orchestra from the ideal listening position but without the distance or the higher ratio of reverb. The bass also appears different but not more similar to a real life scenario and not as linearly/predictably as without crossfeed. Sometimes I get an EQ notch type effect in the bass, sometimes the bass sounds artificially louder, sometimes I get the bass component of a sound/instrument within the mix in a slightly different location to the higher freq components, which I find particularly annoying and doesn’t appear correlated with the crossfeed freq. I also get unpredictable effects with the location and FR of ERs/reverb. In general it’s more blurred, less spatially coherent, less stable and more unpredictable. It’s also still always all inside my head.

Maybe it’s because I spent a lot of time actually sitting inside orchestras that I don’t mind that extreme width/separation and don’t find it unnatural. Without crossfeed is far from ideal, it would be good to get it outside my head, get more depth, have more representative bass and not to have those occasional anomalies but even with all these failings, it’s still acceptable for me. With crossfeed, the narrower width without the greater distance is a conflict, as is the same sound in different locations, in addition to the less coherent reflections and other issues, it appears far less natural to me and is typically unacceptable. I can’t just sit and enjoy it, because I’m constantly trying to figure out what’s going on. I should mention there are exceptions and it’s often not as obviously “black and white” depending on the mix (which can vary wildly). I have encountered recordings that I did prefer with crossfeed but such exceptions are so rare, it’s not worth the effort.

Even amongst those like me who are not fans of crossfeed, I don’t assume they are going to experience the same as me. Some of what I described may be identical or similar for other non-fans but they might not perceive or be consciously aware of the other things I’ve described, or even if they are aware, they might not be troubled by them, and it’s very likely some non-fans experience yet other effects that I don’t.

G


----------



## 71 dB

bigshot said:


> Do you mind answering specific questions? I understand your general impression. I'm trying to figure out what is creating that impression.
> 
> 1) Here is a song I'd like you to listen to without crossfeed and then again with it...
> 
> ...



This recording is devoid of almost any secondary spatial cues. It is very dry tracks hard panned in LCR style. VERY unsuitable for headphones as it is.

Without crossfeed distance is 1-10 inches. A lot of the sound is annoyingly close to my ears. Nasty spatiality!
With crossfeed (-2 dB level seems optimal): 8-12 inches. Thanks to crossfeed the sounds stay at least 8 inches from my ears, but the lack of secondary spatial cues makes it impossible to have a bigger headphone soundstage than about a foot. That's okay, because such a miniature presentation can be cozy and intimate.



bigshot said:


> If each element is different, let me know which ones sound like they're inside your head and which ones sound like they're on the other side of the room (or whatever distance you perceive)... the vocals, the bongos, the piano, the woo woo's, the guitar solo. Are they all at the same distance from you, or are some things closer and some things further?


Without crossfeed the stuff mixed center is inside my head. With crossfeed it moves about 4 inches forward and is on my upper face. The stuff hard panned left and right is outside my head, but closer without crossfeed as described above.



bigshot said:


> 2) Do you get any perception of distance at all without crossfeed? If so, what elements sound further away?


Yes, 1-10 inches. It is difficult to say what is further away, because the soundstage is so fractured and all over the place. The sounds are not point-like but long objects that extend from near to far and move/change shape in time. The center-mixed stuff (singing) is a point-like, steady sound inside my head.



bigshot said:


> 3) Does the perception of distance pop in and out as you dial the crossfeed up and down? Is there a narrow window for the perception of distance to be apparent? Or is the effect pretty consistent regardless of how much or how little crossfeed you dial in?


It is a gradual change in this regard, but there is an optimal level for the crossfeed giving the largest distances overall. Above and below it the distances get smaller, but in very different ways. Too much crossfeed makes the sound mono-like and moves it toward the center of my head. Too little crossfeed moves the sound toward my left and right ears. As making sound mono kills the out-of-phase content in the sound, it is the in-phase content that moves toward the center of my head, while too-weak crossfeed keeps the out-of-phase content amplified and that moves toward my ears. So, it is pretty complex, but the proper crossfeed level manages to balance these things nicely. That's how I recognize it. Things just feel balanced and natural.


----------



## 71 dB

jamesjames said:


> I've said earlier that I listen only to classical music, and that my experience may not be so relevant for other kinds of music.  I guess that the prevalence of 'field' recording of classical music is pretty important here.  Even studio recordings of solo piano recitals, for example, are mic'd in a similar way.  This means the recording - usually hi-res these days - is likely to present a performance space as part of the acoustic image.  It's also important that the recording doesn't involve compression.


Well-recorded classical music is excellent for crossfeed. Tons of natural spatial cues in the recording, providing the basis for a nice headphone soundstage. Organ music recorded in a cathedral can sound really impressive on headphones when properly crossfed, almost binaural!


----------



## gregorio (Oct 11, 2022)

71 dB said:


> What exactly do you mean by saying "it crossfeeds everything"? A 100 Hz low pass filter filters everything, but 20 Hz frequencies remain almost untouched while 20 kHz is filtered away massively.


A 100Hz LPF filters everything above 100Hz, not just ILD or some other factor. Likewise, crossfeed is crossfeeding everything (by a determined amount) below a set threshold, not just ILD but all the sound which includes all the timing, spectral and other information used by our perception.
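The point that more than ILD crosses over can be shown with a toy example: two channels at identical levels but with a pure timing offset. Even a simplified frequency-independent blend (a hypothetical bare-bones stand-in for crossfeed, with no filtering or delay) moves that timing information into the opposite channel:

```python
def mix_opposite(left, right, g=0.3):
    """Blend a fraction g of each channel into the other (a bare-bones
    stand-in for crossfeed, used only to show what crosses over)."""
    out_l = [l + g * r for l, r in zip(left, right)]
    out_r = [r + g * l for l, r in zip(left, right)]
    return out_l, out_r

# Equal-level impulses offset by 5 samples: zero level difference
# between the channels, purely a timing (ITD-like) difference.
left = [1.0 if n == 0 else 0.0 for n in range(16)]
right = [1.0 if n == 5 else 0.0 for n in range(16)]
out_l, out_r = mix_opposite(left, right)
```

After the blend, each channel contains both arrival times (the left channel now has a second, quieter impulse at sample 5), so the inter-channel timing relationship — and with it phase and comb-filtering behaviour — is altered along with the level difference.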


71 dB said:


> Similarly crossfeed doesn't change mono signal much …


What mono signal? We’re dealing with stereo, so a mono signal is a signal which only occurs in either the left or right channel and obviously crossfeed does change that. If you’re talking about the perception of a sound in the phantom centre, then that’s a dual mono signal and summing them together does change it to an extent (it increases at least the level). Furthermore, in most cases even a sound in the phantom centre is likely to have stereo reverb (artificial or acoustic), variations between the left and right channels and therefore the potential for spectral and timing/phase issues.


71 dB said:


> The fact that there are objective and subjective element to this makes things even worse.


There are always objective and subjective elements, the trick is understanding which is which and not making false assertions about the former based on the latter. This case is tricky because we’re talking about subjective responses which don’t have precise definitions/descriptions and which vary considerably between different individuals.


71 dB said:


> Yes I can because I am right. With speakers the left channel "leaks" acoustically to the right ear and vice versa.


No you can’t because you are wrong. With speakers the left channel does not “leak” acoustically to the right ear! What actually happens is that the signal from the left speaker reflects off the left and right walls of the listening environment, so now we have a mixture of direct and reflected sound with different timing and spectral content. Some of the direct sound and sound reflected off the left wall reaches your right ear but is further affected by your skull and pinna (attenuated and spectrally altered); the reflections from the right wall do not have to pass through your skull to reach your right ear but are affected by your right pinna. What we actually get is very significantly different from just crossfeed, so you can’t just keep repeating “_Same happens with speakers._”!


71 dB said:


> The result in both cases is similar: Reduced ILD at low frequencies and increased cross-correlation between the ears favoring ITD of about 250 µs.


No, the result is not similar it’s very different as explained above and as you already know but are ignoring!


71 dB said:


> Of course fixing only one problem can improve the situation,


But you’re not “_fixing only one problem_” because you are not only crossfeeding ILD, you’re crossfeeding all the signal below the threshold and by fixing one problem you’re making other factors/considerations worse. 4+4+4=8 if you ignore/dismiss that last “+4”, which I don’t really notice and doesn’t affect my enjoyment anyway!


71 dB said:


> especially if the fixed problem was the most harmful one.


But what if it’s not the most harmful one? What if all the other factors combined, which you’re damaging by fixing that one problem, are more harmful? What if you don’t find that problem you fixed to be that harmful a problem to start with?

You have a particular perception and you’ve invented an idea/theory that explains it by effectively dismissing/ignoring everything that your perception isn’t consciously aware of and you don’t believe is harmful. If your perception were the same as everyone else’s then maybe you’d be on to something but clearly it isn’t. If your theory of solving the most “harmful” problem and being closer to ideal were correct, then why, after being around for 50 years or more, don’t we see it as standard or at least as an option on every headphone device, especially as it has the potential to be a money earner? It’s never taken off and science knows why but you dismiss this too and instead  falsely assert it’s due to training (or previously ignorance or idiocy).

It appears you’ve fallen into the same logical trap so many audiophiles do with other aspects of audio. They have a perception, find or invent explanations that support it and ignore or dismiss anything to the contrary. Typically you do not fall into that trap, unless it includes the letters “ILD”!!

G


----------



## bigshot (Oct 12, 2022)

JamesJames was very helpful. I think I’ve figured out what’s going on here now. Crossfeed isn’t creating spatiality. In fact, the perception of spatiality depends more on how the person hears spatial cues than it does the crossfeed itself. Crossfeed was designed for a specific purpose, and it does that well. But it has a beneficial unintended side effect for people who hear recorded music in a certain way. It reduces a kind of separation in music that some people hear as a contrast, and others hear as a distraction. By evening out the contrast, it allows people who are easily distracted to hear elements in the music that other people are able to easily parse even through the contrasts.

An analogy would be like this… Two people are standing side by side alternately calling out numbers. One of them is counting down from a hundred, a number at a time. The other one is calling out random numbers. One kind of listener can parse out the numbers that have a pattern from those that don’t. They can focus on the descending numbers and set aside the random ones. The other kind of listener hears the random number and his perception hits reset. He can’t hear the pattern clearly because he can’t focus his attention beyond the random numbers.

That is what’s happening here. Gregorio and I are used to hearing competing sounds in a mix and making sense of them so we can organize and balance them. JamesJames was unable to comment on the effect of crossfeed on different elements in the song because the contrast was so wide in some of them. Some elements, like the vocals and guitar solo, were straight mono: equal loudness from both speakers. And some elements, like the bongos and the woo woos, were hard panned left and right. Crossfeed would have absolutely no effect on the former, but a large effect on the latter. But JamesJames didn’t hear it like that. He heard it all as one thing and couldn't sort out any differences between them at all.

I suspect that some people are extraordinarily sensitive to being distracted by sound that is hard panned left or right. When sound comes at them from both sides like that, they’re unable to parse the sound in the middle. It becomes muddled and they are unable to hear it as a separate thing. This is irritating to them, and they describe it as listening fatigue. When the competing contrasts are evened out with crossfeed, their minds can suddenly perceive the stuff in the middle, and it’s as if that suddenly turned on like a light bulb.

What is generally in the middle of the two channels, but not hard panned to left and right? Secondary depth cues, like reverb and room ambience.

Crossfeed doesn’t enhance spatiality, it reveals the spatial cues to people who can be distracted by extreme contrasts. Crossfeed doesn't create the spatiality. The spatial cues are in the track all along, they just can’t be heard by people who are not good at parsing big contrasts. It’s purely a subjective thing, dependent on how an individual hears and organizes sound in his head.


----------



## bigshot (Oct 11, 2022)

One other analogy I just thought of...

I once attended an organizational meeting of a science fiction group. They put on cons related to sci-fi tv shows and books. It was a very unusual group of people. They had more in common than just science fiction, but that wasn't apparent at first glance. There was a guy in the group who was quite odd, but brilliant. I heard him talking and asked someone about him. They told me that his IQ was off the charts. I chatted with him a little and realized that he always directed the conversation- you reacted to what he said, not the other way around. So I decided to try throwing a 90 degree turn into the conversation and see what he did. He was talking about how some invention was created, and I grabbed a little side detail of the story he was telling and asked him a question about it. He stopped dead and his eyes went wide open like his brain had reset. He quickly brushed aside the sidetrack and went back to his story again. I threw in another sidetrack. He stopped dead and stared into space again. He was incapable of flowing with a conversation. He had a pathway in his mind that was the direction he was used to going, and if you took him off that train of thought, he went blank and couldn't say anything at all.

Some people are photosensitive. If there is a blinking light, they zone out like a zombie. Other people are hyper-sensitive to certain kinds of textures or colors. It makes sense that excessive channel separation might be a similar kind of blind spot for some listeners. Reduce the sounds coming from competing directions, and they are able to hear again. It isn't science in the sense of the physical sound; it's more a matter of psychoacoustics. All crossfeed really does is reduce channel separation. But for some people that can change the way they perceive other unrelated aspects of the sound dramatically.


----------



## 71 dB

gregorio said:


> A 100Hz LPF filters everything above 100Hz, not just ILD or some other factor. Likewise, crossfeed is crossfeeding everything (by a determined amount) below a set threshold, not just ILD but all the sound which includes all the timing, spectral and other information used by our perception.


Filters filter one channel without knowledge of the other channel. Since ILD is a property of how the channels differ from each other, it doesn't "exist" for the filter. Nor do any other spatial cues that are based on channel differences, such as ITD or ISD.



gregorio said:


> What mono signal?


Stereo sound in L/R form can be transformed into M/S form (Mid/Side), where Mid is "centered mono" (left and right channels are the same) and Side is the difference of the left and right channels. Since Mid/Side processing is common in music production/mixing, I thought you'd be familiar with these concepts. No wonder my analysis of the binaural recording seemed to go over your head. If you are unfamiliar with this you can study it for example here:

https://www.izotope.com/en/learn/what-is-midside-processing.html
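The L/R-to-M/S transform described here is just a sum and a difference. A minimal sketch (one common scaling convention; function names are illustrative):

```python
def lr_to_ms(left, right):
    """Encode L/R into Mid/Side: Mid is what the channels share,
    Side is half their difference."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_to_lr(mid, side):
    """Decode Mid/Side back to L/R; the round trip is exact."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right
```

A dual-mono signal (L = R) encodes to all Mid and zero Side, while a mono track hard panned left encodes to equal parts Mid and Side — which is why "mono" in the mixing sense and "mono" in the M/S sense are different things.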



gregorio said:


> We’re dealing with stereo, so a mono signal is a signal which only occurs in either the left or right channel and obviously crossfeed does change that. If you’re talking about the perception of a sound in the phantom centre, then that’s a dual mono signal and summing them together does change it to an extent (it increases at least the level).


You are talking about mono in a "mixing" context. Mixers work like that. There are pan laws and whatnot, but elsewhere mono is a simpler concept. In this (stereo consumer audio) context it is just all the stuff that is the same for the left and right channels, the "M" channel. In a mixing context we can have a mono track panned hard (100 %) left, but in a crossfeed context this is not at all mono sound, because you can't use a mono playback system to indicate that the left channel has sound while the right channel is silent. You need a stereo playback system for that. It is stereo sound that was created by hard panning mono sound. The whole point of panning mono tracks in mixing is to create stereo from mono!



gregorio said:


> Furthermore, in most cases even a sound in the phantom centre is likely to have stereo reverb (artificial or acoustic), variations between the left and right channels and therefore the potential for spectral and timing/phase issues.


Yes, you are right. 



gregorio said:


> There are always objective and subjective elements, the trick is understanding which is which and not making false assertions about the former based on the latter. This case is tricky because we’re talking about subjective responses which don’t have precise definitions/descriptions and which vary considerably between different individuals.


Yes, but we can't ignore the fundamental problem of hearing spatiality created for speakers in headphone listening. Back in the day, TV series were produced for 4:3 screens. Then came 16:9 TVs and people watched 4:3 shows stretched wide on their 16:9 screens. To keep the original aspect ratio and picture shape, you need to add "black bars" on the left and right. Luckily TVs had that option, even if many didn't "like" it because of the black bars. Crossfeed is a similar thing, just for sound. I want to experience the spatiality at the "scale" it was created at for speakers, not at a scale of excessive spatiality.



gregorio said:


> No you can’t because you are wrong. With speakers the left channel does not “leak” acoustically to the right ear! What actually happens is that the signal from the left speaker reflects off the left and right walls of the listening environment, so now we have a mixture of direct and reflected sound with different timing and spectral content.



The sound of course reflects off many other surfaces too. Off the floor. Off the ceiling. Off the back wall, etc. These reflections can ruin speaker spatiality if the room is acoustically bad.

Anyway, NONE of this happens with headphones, BUT if you use crossfeed, you simulate part of it: the direct sound from the speakers. It is like having the speakers in an anechoic chamber. No reflections! Only direct sound! I am not wrong because I don't make the claim you say is wrong! I am making a claim that is right.



gregorio said:


> Some of the direct sound and sound reflected off the left wall reaches your right ear but is further affected by your skull and pinna (attenuated and spectrally altered); the reflections from the right wall do not have to pass through your skull to reach your right ear but are affected by your right pinna. What we actually get is very significantly different from just crossfeed, so you can’t just keep repeating “_Same happens with speakers._”!


That's complete semantic nitpicking!! I have said crossfeed simulates ONLY the acoustic crossfeed of the direct sound, and indeed that DOES happen. Of course the lack of the reflections AFFECTS the sound, but the same happens without crossfeed! Lack of room acoustics is not a crossfeed problem. It is a headphone problem! Crossfeed solves the problem of the missing direct-sound acoustic crossfeed.



gregorio said:


> No, the result is not similar it’s very different as explained above and as you already know but are ignoring!


Still less different from speakers than not using crossfeed at all. You think you can only do things if you can do them 100 % perfectly. I think a 1 % improvement is a 1 % improvement. That is our fundamental philosophical difference.



gregorio said:


> But you’re not “_fixing only one problem_” because you are not only crossfeeding ILD, you’re crossfeeding all the signal below the threshold and by fixing one problem you’re making other factors/considerations worse. 4+4+4=8 if you ignore/dismiss that last “+4”, which I don’t really notice and doesn’t affect my enjoyment anyway!


The pros easily outweigh the cons for me. If it were the other way around, obviously I wouldn't like crossfeed. To me headphone sound as it is is so wrong that sound which is still wrong, but less wrong, is a huge improvement. Maybe you can explain to me what exactly these bad things are (that aren't bad without crossfeed) that I do not notice. Again, I have listened to crossfeed for thousands of hours. If there is something to notice, it must be something really hard to notice!



gregorio said:


> But what if it’s not the most harmful one? What if all the other factors combined, which you’re damaging by fixing that one problem, are more harmful? What if you don’t find that problem you fixed to be that harmful a problem to start with?


Such questions could be asked about everything in life. I have thought a lot about what crossfeed does to the music and I just don't believe there are harmful things. If there is something, it must be very minor. Frankly I think you fear "damage" too much. You don't trust crossfeed. To me, listening to music mixed for speakers on headphones is the damage, and using crossfeed makes the damage less harmful for enjoyment.



gregorio said:


> You have a particular perception and you’ve invented an idea/theory that explains it by effectively dismissing/ignoring everything that your perception isn’t consciously aware of and you don’t believe is harmful.


So people should not invent ideas/theories? Again, I am NOT ignoring anything. I have just concluded those things are insignificant. You keep touting these ignored things, but you have zero theories about how they ruin things with crossfeed. It is like saying mankind can't go to Mars without taking into account the mating habits of unicorns, but not explaining how the mating habits of unicorns affect space travel to Mars.



gregorio said:


> If your perception were the same as everyone else’s then maybe you’d be on to something but clearly it isn’t. If your theory of solving the most “harmful” problem and being closer to ideal were correct, then why, after being around for 50 years or more, don’t we see it as standard or at least as an option on every headphone device, especially as it has the potential to be a money earner? It’s never taken off and science knows why but you dismiss this too and instead falsely assert it’s due to training (or previously ignorance or idiocy).


Crossfeed is not a "standard" in every device, but it hasn't gone away either. In my experience crossfeed is insanely difficult to sell to people, because it requires an understanding of human spatial hearing which most people don't have, and for a novice the benefits of crossfeed can be difficult to figure out. As it kills superstereo, many people think it makes the sound duller, more mono, and removes detail. I don't blame those people, because it takes time to learn to appreciate crossfeed. I can't "sell" crossfeed even to you, so how could I sell it to someone who understands nothing about spatiality and audio?



gregorio said:


> It appears you’ve fallen into the same logical trap so many audiophiles do with other aspects of audio. They have a perception, find or invent explanations that support it and ignore or dismiss anything to the contrary. Typically you do not fall into that trap, unless it includes the letters “ILD”!!
> 
> G


I am happy in this trap...


----------



## 71 dB

gregorio said:


> That’s interesting because in many respects, what I experience is almost the exact opposite of what you describe.
> 
> Using the example of say an orchestral recording, what I experience with speakers vs cans is analogous to listening to the orchestra from an ideal listening position, say 15-20m away from the orchestra, while putting on the cans is like suddenly jumping forward towards the orchestra, to a position roughly equivalent to the conductor but even further forward.


It is the same for me.



gregorio said:


> In the real scenario, the orchestra appears much wider and has a lower ratio of reverb to direct sound but of course we aren’t stretching the width of everything just the soundstage. The sound sources (instruments) aren’t stretched, they’re just more separated within a wider soundstage and appear even more distinct due to the lower reverb ratio, which also reduces depth. This is very similar to the effect of wearing cans and is why some/many engineers use cans when recording, because it’s easier to notice details/faults that maybe masked or partially concealed by reverb and it’s easier to identify where (which instrument or mic) the detail/fault is happening. This perception seems to be almost the exact opposite of your perception. Instead of more separation and more distinct positioning, you seem to experience a “blurring” effect.


I was unable to make the picture the way I wanted. Blurring is not the right word. Fractured, pointy is correct. Sharpness.


----------



## gregorio (Oct 11, 2022)

71 dB said:


> filters filter one channel without the knowledge of other channels.


Even a stereo filter?


71 dB said:


> Stereo sound in L/R form can be transformed into M/S from (Mid/side) where Mid is "centered mono" (left and right channels are the same) and Side is the difference of left and right channel. Since Mid/Side processing is common in music production/mixing, I thought you'd be familiar with these concepts.


Does a consumer stereo setup have 2 speakers, a left and a right, or does it have 3, a mid and 2 sides out of phase with each other? What about headphones? I thought you’d be familiar with the concepts of a consumer stereo setup.


71 dB said:


> In this (stereo consumer audio) context it is just all the stuff that is the same for left and right channels, the "M" channel.


There is no “M” channel in stereo consumer audio, just a left and a right.


71 dB said:


> Yes, but we can't ignore the fundamental problem of spatiality created for speaker in headphone listening.


Yet hundreds of millions of consumers have for decades.


71 dB said:


> Luckily TV had that property even if many didn't "like" if for the black bars, but crossfeed is a similar thing, just for sound.


Of course it’s not a similar thing. A TV does not crossfeed the left side of the image to the right side and vice versa. The TV analogy would be simply reducing the panning width.


71 dB said:


> BUT if you use crossfeed, you simulate part of it, the part of direct sound from speakers. It is like having the speakers in an anechoic chamber.


No it is not. A HRTF is like having the speakers in an anechoic chamber, crossfeed isn’t.


71 dB said:


> I am not wrong because I don't make the claim you say is wrong!


But you just have!!


71 dB said:


> I am making a claim that is right.


No, crossfeed is not a HRTF!


71 dB said:


> That's complete semantic nitpicking!!


So HRTFs are just “_complete semantic nitpicking_” and the development and ongoing research is a waste of time. And you wonder why I think you are wrong?


71 dB said:


> I have said crossfeed simulates ONLY acoustic crossfeed of direct sound and indeed that DOES happen.


Which is false because crossfeed does not “simulate ONLY acoustic crossfeed”, it ALSO crossfeeds all the other factors, which you are dismissing!


71 dB said:


> Crossfeed solves the lack of direct sound acoustic crossfeed problem.


At the expense of causing other problems!


71 dB said:


> I think 1 % improvement is an 1 % improvement. That is our fundamental philosophical difference.


Exactly! I think a 1% improvement is only a 1% improvement if there isn’t at the same time a 1% or greater degradation. Simple math, 1-1=0 or 1-2=-1. For you, 1-1=1 because the “-1” part is nitpicking!


71 dB said:


> Again, I am NOT ignoring anything. I have just concluded those things insignificant.


That’s a contradiction! You have (falsely) concluded those things are insignificant and therefore you ignore them! But “those things” are all the things that crossfeed doesn’t account for and HRTFs (+ reverb) do. Those things define the difference between crossfeed and HRTFs!


71 dB said:


> You keep touting these ignored things, but you have zero theories how they ruin things in crossfeed.


I’ve iterated them countless times but you ignore it! Just closing your eyes and sticking your fingers in your ears does not mean something ceases to exist, at least not in science!


71 dB said:


> To my experience crossfeed is insanely difficult to sell to people, because it requires understanding of human spatial hearing which most people don't have and for a novice the benefits of crossfeed can be difficult to figure out.


Thanks for proving my point! You’ve effectively just claimed that those who don’t experience crossfeed as you do, are ignorant. We’re back where we started and you’re just as wrong now as you were then!


71 dB said:


> I am happy in this trap...


As are most audiophiles, which is why they get upset if you try to explain there is no audible difference between Ethernet cables, why they come out with nonsense explanations which ignore/dismiss/omit facts, why they accuse others of ignorance and why they ban mention of science in those forums!!!

G


----------



## 71 dB (Oct 11, 2022)

gregorio said:


> Even a stereo filter?


Yes, as long as the filters operate independently.


gregorio said:


> Does a consumer stereo setup have 2 speakers, a left and a right, or does it have 3, a mid and 2 sides out of phase with each other? What about headphones? I thought you’d be familiar with the concepts of a consumer stereo setup.
> 
> There is no “M” channel in stereo consumer audio, just a left and a right.


This is so ridiculous! Your knowledge of this basic signal processing method is amazingly lacking! M/S is the L/R information in another form! Consumer audio is in L/R form, but you use a simple matrix to turn it into M/S:

M = k * (L + R)
S = k * (L - R)

where k = 1/SQRT(2) = 0.707106781... You go back to the L/R form using a similar matrix:

L = k * (M + S)
R = k * (M - S)

So, there are "M" and "S" channels encoded in consumer audio. On vinyl, "M" corresponds to horizontal movement of the needle and "S" to vertical movement. Didn't you even check out the link I gave you? You are too busy telling me I am wrong, but at this point you are embarrassing yourself badly.
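As a quick sanity check of the matrix above, here is a minimal Python sketch (hypothetical helper names) showing that applying the same k = 1/SQRT(2) matrix twice takes you from L/R to M/S and back exactly:

```python
import math

K = 1 / math.sqrt(2)   # k = 0.707106781...

def lr_to_ms(l, r):
    """Encode an L/R sample pair as Mid/Side: M = k*(L+R), S = k*(L-R)."""
    return K * (l + r), K * (l - r)

def ms_to_lr(m, s):
    """Decode Mid/Side back to L/R with the same matrix: L = k*(M+S), R = k*(M-S)."""
    return K * (m + s), K * (m - s)

# Round trip: the matrix is its own inverse (up to float rounding)
l, r = 0.8, -0.3
m, s = lr_to_ms(l, r)
l2, r2 = ms_to_lr(m, s)
print(round(l2, 12), round(r2, 12))   # → 0.8 -0.3

# Pure mono (L == R) lands entirely in M; the S channel is exactly zero
print(lr_to_ms(1.0, 1.0))             # → (1.4142135623730951, 0.0)
```

The second print also illustrates the vinyl point: mono content produces only "M" (horizontal needle movement), with nothing in "S".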



gregorio said:


> Yet hundreds of millions of consumers have for decades.


Yes. Most consumers have crappy audio systems anyway.



gregorio said:


> Of course it’s not a similar thing. A TV does not crossfeed the left side of the image to the right side and vice versa. The TV analogy would be simply reducing the panning width.


My analogy was about the "presentation format". Listening to music mixed for speakers with headphones is like watching shows made in 4:3 format with 16:9 TVs.

You are working hard to find excuses to say I am wrong. It is pathetic at this point, even sad. What a pathetic man you are for not being able to admit I might be right about something. I thought I was weak-minded, but now I am really starting to see who you are. Your knowledge has its limits, and in some areas I can easily surpass them with my background.



gregorio said:


> No it is not. A HRTF is like having the speakers in an anechoic chamber, crossfeed isn’t.


Crossfeed simulates it coarsely. HRTF simulates it accurately. Former is easy to implement. Latter is hard to implement. That is the difference.



gregorio said:


> But you just have!!


Apparently you can interpret anything I say in a way that makes it wrong.



gregorio said:


> No, crossfeed is not a HRTF!


When have I said (simple) crossfeed is HRTF? That would be a very silly claim to make! You just invent things I have said! What is wrong with you? I am losing all respect I have had toward you. Maybe you should visit a doctor for possible brain damage.



gregorio said:


> So HRTFs are just “_complete semantic nitpicking_” and the development and ongoing research is a waste of time. And you wonder why I think you are wrong?


Huh? When have I made such claims? Of course research into HRTFs is not a waste of time! All I am saying is that I don't personally need methods as advanced as HRTF, because simple crossfeed is good enough. So, if it is good enough, why would I make my life harder and go HRTF?



gregorio said:


> Which is false because crossfeed does not “simulate ONLY acoustic crossfeed”, it ALSO crossfeeds all the other factors, which you are dismissing!


All the other factors which you never list or explain. You find a silly excuse for everything I say. The difference between acoustic crossfeed and crossfeed is in the details of how the crossfeeding happens. With speakers there are aspects such as the radiation pattern of the speakers and the listener's HRTF. With crossfeed it is a straightforward, simple circuit that does the crossfeeding, coarsely simulating the HRTF with a low pass filter on the crossfed signal plus a treble boost on the ipsilateral side.
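As a rough illustration of that idea (my reading of the description, not any specific product's circuit), here is a minimal crossfeed sketch in Python: each output channel is its own input plus a lowpass-filtered, delayed, attenuated copy of the opposite channel. The 700 Hz cutoff, 0.5 ms delay and -12 dB level are illustrative assumptions, and all names are hypothetical:

```python
import math

def crossfeed(left, right, fs=44100, delay_ms=0.5, atten_db=-12.0, fc=700.0):
    """Minimal crossfeed sketch: own channel + a lowpass-filtered,
    delayed, attenuated copy of the opposite channel."""
    d = max(1, round(fs * delay_ms / 1000))        # delay in whole samples
    g = 10 ** (atten_db / 20)                      # feed gain (-12 dB -> 0.251)
    alpha = 1 - math.exp(-2 * math.pi * fc / fs)   # one-pole lowpass coefficient

    def feed(src):
        out, state = [], 0.0
        for x in src:
            state += alpha * (x - state)           # lowpass the opposite channel
            out.append(g * state)
        return [0.0] * d + out[:len(src) - d]      # then delay it by d samples

    return ([x + f for x, f in zip(left, feed(right))],
            [x + f for x, f in zip(right, feed(left))])

# A hard-left click leaks into the right channel: softer, delayed, duller
L = [1.0] + [0.0] * 31
R = [0.0] * 32
outL, outR = crossfeed(L, R, fs=8000)
print(outR[:8])   # first 4 samples are zero (the delay), then the dim click
```

Real implementations shape the feed (and the ipsilateral treble) more carefully; the point here is only the topology: delay, lowpass, attenuate, sum.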



gregorio said:


> At the expense of causing other problems!


Problems you never explain or list!



gregorio said:


> Exactly! I think a 1% improvement is only a 1% improvement if there isn’t at the same time a 1% or greater degradation. Simple math, 1-1=0 or 1-2=-1. For you, 1-1=1 because the “-1” part is nitpicking!


I have said that for me the pros outweigh the cons. Since you are finally doing some simple math, you could at least come up with examples that reflect what I say, to mathematically illustrate it!



gregorio said:


> That’s a contradiction! You have (falsely) concluded those things are insignificant and therefore you ignore them! But “those things” are all the things that crossfeed doesn’t account for and HRTFs (+ reverb) do. Those things define the difference between crossfeed and HRTFs!


Yes, there are clear differences between crossfeed and HRTF. It doesn't take much to know that!



gregorio said:


> I’ve iterated them countless times but you ignore it! Just closing your eyes and sticking your fingers in your ears does not mean something ceases to exist, at least not in science!


Have you? Then I must have missed them, or I have seen them and concluded they are insignificant.

My choices are no crossfeed and crossfeed. I don't have the choice of HRTF. Crossfeed is hands down the better of my options. So, even if it had 1000 problems, I would have to live with them! Not using crossfeed is JUST SO MUCH WORSE!!



gregorio said:


> Thanks for proving my point! You’ve effectively just claimed that those who don’t experience crossfeed as you do, are ignorant. We’re back where we started and you’re just as wrong now as you were then!


No, I am saying crossfeed is not easy to sell. I am not calling anyone ignorant. It is a fact that most people don't know about spatial hearing, because it is specialized knowledge important only in very specific kinds of jobs, so why would construction workers know about spatial hearing? They don't need such knowledge!



gregorio said:


> As are most audiophiles, which is why they get upset if you try to explain there is no audible difference between Ethernet cables, why they come out with nonsense explanations which ignore/dismiss/omit facts, why they accuse others of ignorance and why they ban mention of science in those forums!!!
> 
> G


Except that differences in Ethernet cables aren't real, only placebo, while crossfeed changes the sound audibly. There is even a _reason_ to do something to headphone sound, because most music is mixed for speakers.


----------



## IanB52

I think for a lot of amps these days crossfeed is irrelevant because the crosstalk is so high. Probably 3/4 of the amps I have owned had significantly less channel separation than a professional SS amp. Some of the tube ones I joke are pseudo mono, seeming about as wide as a vinyl record. It's great if you want your whole mix squished together between 10:00 and 2:00 and don't mind the masking or phase canceling.

The industry seems to have this flavor covered to the degree that it is hard to find true stereo output.


----------



## 71 dB

gregorio said:


> This analogy of cans and sitting in the orchestra is not ideal though, all the sound appears to be occurring inside my head with cans. There is some perception of depth but it’s more squashed and not as coherent as in the real life scenario. My perception of bass and bass balance is not the same either but it is quite a linear relationship and therefore usually fairly predictable. I do occasionally get anomalies with popular genres, say the lead vocal on a different horizontal plane, at the top of my head. Everything is always inside my head with cans though, the only exception is some binaural sound recordings accompanied by video (providing visual cues).


I didn't address this post of yours completely. I need time to avoid burnout from this. The faster I comment on what you say, the lousier are my answers. So, I take this in small steps.

For me binaural recordings are very effective even without visual help. In some ways it is almost scary. To have the sound completely inside my head, the sound must be almost mono, or it "leaks" outside my head (mostly to the sides). To make it simplified: comparing the Mid (M) and Side (S) channels.

M >> S ===> inside my head
M > S ===> outside my head, natural feel
M <= S ===> outside my head or at my ears, unnatural annoying feel
M << S ===> sound all over the place (diffuse as hell), unnatural and very annoying feel.

This is for the whole bandwidth, which is dominated by low frequencies because that's where most of the energy is in music. If we concentrate on treble, for example, things change. Above 1600 Hz, M and S being equal in strength means natural diffuse spatiality, because the phase differences get "randomized" due to the dimensions of the head. At higher frequencies it is more convenient to use the L and R channels instead:

L >> R ===> Sound source 90° left and close
L > R ===> Sound source between left and center
L = R ===> inside my head (mono)
L < R ===> Sound source between right and center
L << R ===> Sound source 90° right and close

The difficulty of having forward depth with headphones comes from the fact that L = R for any sound that is in the center, regardless of the distance. Floor and ceiling reflections are helpful, because the time difference between the direct sound and the floor/ceiling reflection gets bigger the closer the sound is. Loud reverberation compared to direct sound is another trick to indicate more distance.


----------



## bigshot

It's a subjective preference based on the way you as an individual interpret the sound you hear. It may be that way for you, but other people subjectively interpret sound differently.


----------



## castleofargh

You're talking to a guy who's been manipulating sounds to match video scenes for a living. He doesn't need simplifications or to have you explain M/S (seriously, how hard did you misread his posts?). Any simplification you made has been exactly what got you in trouble from the get-go. Because this has been a discussion about very specific variables and techniques between people who have, at the very least, learned the basics about them (counting myself in), things got elevated compared to the usual "let's talk an ignorant guy out of something false by simplifying reality until we hopefully can get down to his level" kind of discussions. Because of that, more accuracy is expected, not less.
This new road you're taking about M/S looks like self-harm to me. I cannot imagine this having any potential for a happy ending.

No matter what, crossfeed is not an HRTF or speaker simulator; some variables are even more wrong/added/false/missing than with a simulation attempt. It's a fact, and everybody here accepts it. When we discuss facts in this forum, we can never tell people that they don't feel or prefer what they feel and prefer. We argue that they misunderstood the cause of it. This applies here too.






71 dB said:


> For me the panning or placement of the instrument don't change, so your experience of 180° panning on crossfeed after a while is interesting.


I do get 60° initially. I imagine that my brain is already convinced it's a headphone and that a given song must have that instrument all the way to the side. Over time it progressively compensates until I get there. It's the only explanation I can think of, but maybe it's something else?

I got some interesting results using head tracking, be it the A16 or the Waves 3D crap with the tracker I got on Kickstarter years ago (using some generic HRTF based on the size of the head, so most HRTF cues were still chaos for me, but not as much as fixed dummy-head chaos). I kept a stable panning over time because I moved enough to re-calibrate into the effect (like turning crossfeed on and off often will immediately make me feel 60° when it's ON). But if I rest my head on something and pay attention not to move, I still end up progressively spreading instruments to the sides and killing all distance for the center. Even with my custom impulses on the A16. Change is what keeps me in the dream.

The other extreme I talked about was spending months using a laptop placed on the side, with another keyboard and a bigger screen in front of me. I would often listen to youtube videos directly through the integrated tweeters of the computer on the side. After months of doing that occasionally, I started feeling like the sound was coming from the screen. It never unbalanced anything else: not headphones, not daily life, and not my actual speakers in the other room. Another sign that my eyes do most of the listening and that my brain knows what it's doing when it's messing with my senses.

I don't claim to be the common average guy, I usually am not and generic 3D solutions have always been very bad for me(that includes binaural recordings made with a dummy head). 
From the Realiser thread I learned that I clearly am even more prioritizing vision to make sense of sounds than the average guy usually does. Probably why I noticed early on how much I could fool myself with eye candy and why I got interested in controlled tests.


----------



## bigshot (Oct 13, 2022)

I think his perception is unable to compensate. If it's a little bit off, his subjective impression is focused on the error and he can't ignore it. It isn't something he can just tune out. This is why he's so insistent on this issue. Unfortunately, I suspect that hearing this way leads to a lot of parallel parking. While a normal person can get close to the sweet spot and it's fine because he naturally compensates the rest of the way, 71dB's target is very narrow, he has no way to compensate, and it may even wander, forcing him to constantly adjust to try to compensate. If crossfeed takes a little bit of a curse off this hyper focus, that is good for him though.

But this doesn't appear to be a matter of spatial cues or enhanced "realism". It's more a matter of psychoacoustics being used to help a perceptual problem. His problem is basing all of his theories on a test subject group of one.


----------



## 71 dB

castleofargh said:


> No matter what, crossfeed not being a HRTF or speaker simulator, Some variables are even more wrong/added/false/missing than with a simulation attempt.


I am not comparing crossfeed to HRTF based simulations when talking about benefits or improvement. I assume the alternative to crossfeed to be _nothing_ (just headphone sound as it is). If it is crossfeed vs HRTF based simulations then of course the latter wins.


----------



## 71 dB

gregorio said:


> With crossfeed I perceive a narrower soundstage, so more like the width of an orchestra from the ideal listening position but without the distance or the higher ratio of reverb. The bass also appears different but not more similar to a real life scenario and not as linearly/predictably as without crossfeed. Sometimes I get an EQ notch type effect in the bass, sometimes the bass sounds artificially louder, sometimes I get the bass component of a sound/instrument within the mix in a slightly different location to the higher freq components, which I find particularly annoying and doesn’t appear correlated with the crossfeed freq. I also get unpredictable effects with the location and FR of ERs/reverb. In general it’s more blurred, less spatially coherent, less stable and more unpredictable. It’s also still always all inside my head.


I don't get significant artifacts with bass. The phase difference of crossfeed is not large enough to create notch-type canceling at bass, and at the higher frequencies where the phase difference is large enough for that, the crossfeed level starts to drop, so the out-of-phase canceling remains weak. To me, bass with a large ILD sounds utterly unnatural and annoying. Any theoretical artifact introduced by crossfeed is insignificant in comparison, and similar artifacts also happen with acoustic crossfeed with speakers (and nobody cares). For me crossfeed bass isn't 100 % "real", but it is definitely much closer to real than without crossfeed. My "wide" crossfeed gives the most real-feeling bass, but the price is that the sound lacks depth compared to the default crossfeed.
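The weak-canceling claim can be checked numerically. For mono (L = R) content, a crossfeed stage with feed gain a, delay tau and feed filter H(f) changes the level by |1 + a*H(f)*e^(-j*2*pi*f*tau)|. A sketch with assumed values (0.5 ms delay, -12 dB feed, first-order 700 Hz lowpass; illustrative figures, not any specific implementation):

```python
import cmath, math

def mono_level_db(f, tau=0.5e-3, atten_db=-12.0, fc=None):
    """Level change a crossfeed stage imposes on mono (L == R) content:
    own channel plus a delayed, attenuated, optionally lowpass-filtered
    copy of the (identical) opposite channel."""
    a = 10 ** (atten_db / 20)                       # -12 dB -> 0.251
    feed = a * cmath.exp(-2j * math.pi * f * tau)   # delay as a phase shift
    if fc is not None:
        feed /= 1 + 1j * f / fc                     # first-order lowpass on the feed
    return 20 * math.log10(abs(1 + feed))

# The worst comb dip sits where the delay is half a period: 1/(2*tau) = 1 kHz
print(round(mono_level_db(50), 2))           # ~ +1.93 dB: gentle bass boost
print(round(mono_level_db(1000), 2))         # ~ -2.51 dB dip without the lowpass
print(round(mono_level_db(1000, fc=700), 2)) # ~ -0.68 dB once the feed is lowpassed
```

With these assumed values, bass sees only a gentle broadband lift and the deepest dip shrinks well under 1 dB once the feed is lowpassed, consistent with the point that the canceling stays weak where the phase difference is large.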



gregorio said:


> Maybe it’s because I spent a lot of time actually sitting inside orchestras that I don’t mind that extreme width/separation and don’t find it unnatural.


Sounds being located left and right is not the problem for me. At bass, the ILD for sound coming from a 90° angle is about 3 dB and the ITD is about 640 µs.



gregorio said:


> Without crossfeed is far from ideal, it would be good to get it outside my head, get more depth, have more representative bass and not to have those occasional anomalies but even with all these failings, it’s still acceptable for me.


I have a hard time accepting it, because speakers and headphones are so fundamentally different. Only if the music has been somehow mixed for both, or is made only for headphones (binaural), does it work without crossfeed (in fact, in these cases crossfeed only makes things worse).

I try to mix my own music for both speakers and headphones because it is interesting to create such "versatile" spatiality. So, ironically, I listen to my _own music_ without crossfeed, but since so much of the music in the world is so badly headphone-compatible, I have to use crossfeed.



gregorio said:


> With crossfeed, the narrower width without the greater distance is a conflict, as is the same sound in different locations, in addition to the less coherent reflections and other issues, it appears far less natural to me and is typically unacceptable. I can’t just sit and enjoy it, because I’m constantly trying to figure out what’s going on. I should mention there are exceptions and it’s often not as obviously “black and white” depending on the mix (which can vary wildly). I have encountered recordings that I did prefer with crossfeed but such exceptions are so rare, it’s not worth the effort
> Even amongst those like me who are not fans of crossfeed, I don’t assume they are going to experience the same as me. Some of what I described maybe identical or similar for other non-fans but they might not perceive or be consciously aware of the other things I’ve described or even if they are aware, they might not be troubled by them and it’s very likely some non-fans experience yet other effects that I don’t.
> 
> G



Interesting how differently we hear things. I wish I had known this before coming to this board, but I always learn things too late. I need to make mistakes in life in order to encounter the information that would have helped me prevent the mistake in the first place. Well, now I know that all the theories about spatiality I have in my head work only for me. As we learn from our mistakes, I may not make this kind of mistake again (I hope), but unfortunately there are many kinds of mistakes to make...


----------



## 71 dB

IanB52 said:


> I think for a lot of amps these days crossfeed is irrelevant because the crosstalk is so high. Probably 3/4 of the amps I have owned had significantly less channel separation than a professional SS amp. Some of the tube ones I joke are pseudo mono, seeming about as wide as a vinyl record. Its great if you want your whole mix squished together between 10:00 and 2:00 and don't mind the masking or phase canceling.
> 
> The industry seems to have this flavor covered to the degree that it is hard to find true stereo output.


I have not used these tube amps so I wouldn't know, but who knows how much crosstalk there is. The gear I use to drive my headphones has good channel separation, so if I want to reduce channel separation I need crossfeed for that.


----------



## bigshot

Now you're responding to posts by Gregorio that you've already replied to?


----------



## 71 dB

bigshot said:


> Now you're responding to posts by Gregorio that you've already replied to?


I am responding to them in parts, because it is a lot of work.


----------



## 71 dB

bigshot said:


> I think his perception is unable to compensate.


Whose perception is able to compensate? Do you use the crappiest audio gear yourself and just let your perception compensate for it to perceive awesome sound? To some extent my hearing can get used to things, but it is quite limited.



bigshot said:


> If it's a little bit off, his subjective impression is focused on the error and he can't ignore it. It isn't something he can just tune out.


I can't ignore it now that I know how much better headphone sound can be. I would rather listen to stereo recordings with "superstereo" mixed to mono than without crossfeed.



bigshot said:


> This is why he's so insistent on this issue.


Crossfeed has been dear to me for a decade and I have invested a lot of time in it. It has been devastating for me to realize that all I have is my personal increased enjoyment of headphone sound. That's all. I can't apply my understanding and knowledge to anyone other than myself! If crossfeed is not a topic I know something relevant about, then what relevant things do I know and understand? So, this has been my identity crisis for the last 5 years...



bigshot said:


> Unfortunately, I suspect that hearing this way leads to a lot of parallel parking. While a normal person can get close to the sweet spot and it's fine because he naturally compensates the rest of the way, 71dB's target is very narrow, he has no way to compensate, and it may even wander, forcing him to constantly adjust to try to compensate.


My perception doesn't wander. If the proper crossfeed level for a recording was -6 dB years ago, it is the same today. 



bigshot said:


> If crossfeed takes a little bit of a curse off this hyper focus, that is good for him though.


For me it is not much different from changing from uncomfortable headphones to comfortable ones.



bigshot said:


> But this doesn't appear to be a matter of spatial cues or enhanced "realism". It's more a matter of psychoacoustics being used to help a perceptual problem.


As I see it, it is a matter of transforming the available spatial cues in the recording into a more digestible and natural form.



bigshot said:


> His problem is basing all of his theories on a test subject group of one.


I have underestimated the subjective side of spatial hearing. My university studies of the topic made it seem very objective, and my professor never warned me that people hear spatiality differently. Maybe he didn't know? It took me years of my crossfeed hobby to hear from other reliable, knowledgeable people that they hear spatiality differently than I do. The only difference in spatial hearing I "knew" of is the HRTF (everyone has their own), but such differences alone are not a problem for crossfeed, because crossfeed imitates the HRTF so roughly anyway; it is a very rough estimate of the HRTF for everybody. It turns out there is a lot more to it than just the differences in HRTF between individuals...


----------



## bigshot (Oct 14, 2022)

You are obsessed. Your posts are like kudzu. This isn’t normal, and it can’t end well.


----------



## 71 dB

castleofargh said:


> You're talking to a guy who's been manipulating sounds to match video scenes for a living, He doesn't need simplifications or to have you explain M/S (seriously, how hard did you misread his posts?).


Sorry if I have misread something. Gregorio responded to my post in a way that made me (to my surprise) feel he isn't familiar with the concept of M/S audio. Interestingly, he hasn't confirmed being familiar with the concept.


----------



## 71 dB

bigshot said:


> You are obsessed. This isn’t normal, and it can’t end well.


Crossfeed is dear to me. You can call it an obsession if you want. Aren't we all obsessed with something?


----------



## gregorio (Oct 14, 2022)

castleofargh said:


> I would often listen to youtube videos directly through the integrated tweeters of the computer on the side. after months of doing that occasionally, I started feeling like the sound was coming from the screen. It never off balanced anything else, not headphones, not daily life, and not my actual speakers in the other room.


I presume you've probably seen a fair bit of the research on sound localisation and HRTFs. For the benefit of others if you have, there has been some interesting research over the last 10 years or so, interesting from the point of view that some of it is conflicting.

It is typical when running scientific DBTs for thresholds of particular effects, say jitter or other distortions for example, to provide a period of training for the subject. Starting with say clearly audible amounts of jitter allows the subject to familiarise themselves with the specific sound (distortions) it creates, which makes them more sensitive to the effect and lowers their threshold, often significantly, providing a threshold determination effectively encompassing a range of subjects wider than the limited sample size. It's common that training is also used in localisation and HRTF testing but depending on the exact training this can really screw with the results.

This has been researched in the last decade and the evidence suggests we have a very high plasticity for learning new locational perception. For example, someone else's HRTF that is different from our own and doesn't work at all can become almost perfect after a couple of days of heavy training, and visual feedback cues definitely help. Interestingly, subjects did not appear to lose (replace) their own HRTF after this training, so they effectively had two HRTFs that worked for them and also seemed to retain this new/"wrong" HRTF even after a couple of months of not using it. This was with relatively simple localisation experiments, not complex multi-positional content such as music mixes, but various studies indicate this plasticity.

Of course we all have different listening experiences, have spent more or less time critically listening to speakers or headphones, or headphones with crossfeed or HRTFs. So, not only do we all have different HRTFs and therefore are all likely to have a somewhat different perception of the same presentation, but this difference is likely to be exacerbated in the case of presentations we like, which we therefore listen to more and become more "trained" (acclimatised) to.


71 dB said:


> Well now I know that all the theories about spatiality I have in my head work only for me.


Maybe at last you’re starting to get it? That’s been the problem all along, you’ve developed theories to explain your personal perception, theories that have holes/omissions. You’ve then defended your theories with the circular argument that you’re justified in omitting these factors because you don’t perceive them and/or they don’t negatively impact your perception. However, this is the sound science forum and in science you can’t argue for the validity of a theory by simply omitting/dismissing the factors which invalidate it, regardless of your personal perception. Nor can you use what is effectively an “appeal to authority” fallacy, by stating/assuming everyone who disagrees is just ignorant.

Incidentally, I obviously do not need a lesson in M/S basics. M/S provides some useful options when mastering but is also a dangerous tool. The Mid channel does not just contain the information in the middle of the L/R mix, so there are potential phase and spectral issues when processing either the M or S channels.
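The M/S identity behind this point is simple enough to sketch. A minimal illustration in plain Python (my addition, not from the post; purely to show why the Mid channel is not just the “centre” of the mix):

```python
# Minimal M/S (mid/side) encode/decode sketch.
# Mid = (L + R) / 2, Side = (L - R) / 2 -- so Mid sums *everything*,
# including hard-panned material, not just the phantom centre.

def ms_encode(left, right):
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    # Decoding is the exact inverse: L = M + S, R = M - S.
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

# A signal hard-panned fully left still shows up in the Mid channel:
L, R = [1.0, 0.5, -0.5], [0.0, 0.0, 0.0]
M, S = ms_encode(L, R)   # M = [0.5, 0.25, -0.25], not silence
```

Because any processing applied to M therefore touches hard-panned sources too, and M and S recombine sample-by-sample on decode, processing either channel differently is one route to the phase/spectral issues mentioned above.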

G


----------



## castleofargh

gregorio said:


> I presume you’ve probably seen a fair bit of the research on sound localisation and HRTFs. For the benefit of others if you have, there has been some interesting research over the last 10 years or so, interesting from the point of view that some of it is conflicting.
> 
> It is typical when running scientific DBTs for thresholds of particular effects, say jitter or other distortions for example, to provide a period of training for the subject. Starting with say clearly audible amounts of jitter allows the subject to familiarise themselves with the specific sound (distortions) it creates which makes them more sensitive to the effect and lowers their threshold, often significantly, providing a threshold determination effectively encompassing a range of subjects wider than the limited sample size. It’s common that training is also used in localisation and HRTF testing but depending on the exact training this can really screw with the results. This has been researched in the last decade and the evidence suggests we have a very high plasticity to learning new locational perception. For example, someone else’s HRTF that is different from our own and doesn’t work at all, can become almost perfect after a couple of days of heavy training and visual feedback cues definitely help. Interestingly subjects did not appear to lose (replace) their own HRTF after this training, so they effectively had two HRTFs that worked for them and also seemed to retain this new/“wrong” HRTF even after a couple of months of not using it. This was with relatively simple localisation experiments, not complex multi-positional content such as music mixes but various studies indicate this plasticity. Of course we all have different listening experiences, have spent more or less time critically listening to speakers or headphones, or headphones with crossfeed or HRTFs. So, not only do we all have different HRTFs and therefore are all likely to have somewhat different perception of the same presentation but this difference is likely to be exacerbated in the case of presentations we like, which we therefore listen to more and become more “trained” (acclimatised) to.


Yes, I already had similar experiences with online games where, at least when I was playing a lot, both audio and video were bad and we had to learn how to interpret cues almost from scratch. For sound I remember some panning and an abuse of the Doppler effect for anything that moved, and not much else. But after some time it became second nature to locate a baddy walking near me by ear, or to identify a certain tone on that one pixel as a guy crouching far away. In that case, though, it was training for something that had nothing really correct/natural about it.
With my laptop-on-the-side anecdote, from the start I had the right cues to locate it where it was. It's clearly a case of my brain deciding to follow my eyes over anything else, to the point of moving a perfectly fine sound source toward the screen in the long run. That's what amazed me: a quick look to the side should have let my brain confirm that the sound source was where it sounded, but somehow that's not how it went. People were moving on the screen, the sounds connected with their actions, voila!


----------



## 71 dB (Oct 14, 2022)

gregorio said:


> Maybe at last you’re starting to get it?


Well, it has been a slow process that started when I came here. I have tried to convince myself that I know this stuff in order to uphold my self-confidence, but it seems I have to AGAIN admit defeat and failure in life. So, my self-esteem is AGAIN shattered. I don't know what to do. I am destined to live with low self-esteem.



gregorio said:


> That’s been the problem all along, you’ve developed theories to explain your personal perception, theories that have holes/omissions. You’ve then defended your theories with the circular argument that you’re justified in omitting these factors because you don’t perceive them and/or they don’t negatively impact your perception. However, this is the sound science forum and in science you can’t argue for the validity of a theory by simply omitting/dismissing the factors which invalidate it, regardless of your personal perception. Nor can you use what is effectively an “appeal to authority” fallacy, by stating/assuming everyone who disagrees is just ignorant.


My theories _could_ have been correct. At least I had the initiative to make them. Most people don't think anything and never come up with any theories. I did nothing wrong. I was just unlucky to develop a faulty theory. I believed in myself. Everyone says "BELIEVE IN YOURSELF!". Well I *DID*!!! That is what it gave me! I don't understand life! How am I supposed to keep believing in myself when I am always told I am wrong?



gregorio said:


> Incidentally, I obviously do not need a lesson in M/S basics. M/S provides some useful options when mastering but is also a dangerous tool. The Mid channel does not just contain the information in the middle of the L/R mix, so there are potential phase and spectral issues when processing either the M or S channels.
> 
> G


Of course M/S processing contains dangers and one has to know what he/she is doing when applying it. I talked about the M and S channels as analytic tools. The purpose wasn't to change the sound, but to analyse the signal beyond the L and R channels.


----------



## sander99

gregorio said:


> For example, someone else’s HRTF that is different from our own and doesn’t work at all, can become almost perfect after a couple of days of heavy training and visual feedback cues definitely help. Interestingly subjects did not appear to loose (replace) their own HRTF after this training, so they effectively had two HRTFs that worked for them and also seemed to retain this new/“wrong” HRTF even after a couple of months of not using it.


Ah, interesting. This seems to relate to something I experienced with the Smyth Realiser using my own personal measurements. In the beginning there occasionally seemed to be certain problem sounds or frequencies that sounded inside my head, while the rest was at the proper location(s). Moving my head always fixed the problem. After a few days, however, the problem was gone: all sound was, and stayed, at the proper location even when holding my head still. I think I trained my brain to somehow correct for the imperfections of the PRIR. In this case it seems the head tracking, correctly adapting ITD (and other things) to my head movements, gives my brain very strong cues that overrule errors in the HRTF.


----------



## 71 dB (Oct 14, 2022)

Speaker spatiality may have problems because, for example, ITD is omitted. Nothing guarantees that the listener perceives ITD the same way the mixer perceived it in the studio while mixing. This might create phasing issues. For me this doesn't seem to limit enjoyment, just as it doesn't with crossfeed, but some people may hear spatiality differently and be annoyed by this.


----------



## gregorio (Oct 14, 2022)

71 dB said:


> I have tried to convince myself that I know this stuff in order to uphold my self-confidence, but it seems I have to AGAIN admit defeat and failure in life.


IMHO, that’s a very shortsighted, absolute and self destructive way of looking at it. No one knows everything so is everyone a failure in life? No, of course not. The other side to this coin is that you clearly know a lot about the issues but have mis-interpreted and dismissed some of what you learnt because you thought it irrelevant. You’re starting to realise that view was incorrect and so you now know more than you did a few days ago. That’s an achievement, not a failure!


71 dB said:


> Speaker spatiality may have problems because for example ITD is omitted.


I’m not sure what you mean, obviously with speakers you have ITD of the direct sound and of the directional room reflections, so ITD is not omitted.


71 dB said:


> Nothing guarantees that the listener perceives ITD the same way the mixer perceived it in the studio while mixing.


Absolutely and in fact there’s no guarantee this will be perceived the same way by the SAME mixer in a different studio, even between two top class studios and it’s not just ITD but almost every factor! I was shocked at the difference when I took a mix done in one world class studio to another world class studio. That’s the reason we have the mastering process, mastering engineers check the phase information of the mix, adjust according to their experience and test the master on different equipment. Even recording studios use both near and far field systems and the purpose of mastering isn’t to get the mix to sound great in the mastering studio but to sound as good as possible on a wide range of playback scenarios. This virtually always means some level of compromise.


71 dB said:


> This might create phasing issues.


Yes it might but it seems the brain is quite good at compensating for such issues in an acoustic environment. For example, it was thought that comb-filter effects were particularly problematic in listening environments but research by Floyd Toole indicated they’re not as bad as previously thought because listeners and engineers can learn/train to compensate, even with quite noticeable comb-filtering.


sander99 said:


> I think I trained my brain to somehow correct for the imperfections of the PRIR.


That sounds very plausible. There’s limited research on this training/plasticity issue with HRTFs and I should imagine there are difficulties with experiment design. For example, how do you know whether you’ve actually fixed/improved something, or whether the subjects have just subconsciously learned to compensate?

G


----------



## bigshot

I don't mean to sound mean, but I'm here to casually chat about science. I don't want to be held responsible for somebody's self esteem.


----------



## 71 dB

gregorio said:


> IMHO, that’s a very shortsighted, absolute and self destructive way of looking at it. No one knows everything so is everyone a failure in life? No, of course not. The other side to this coin is that you clearly know a lot about the issues but have mis-interpreted and dismissed some of what you learnt because you thought it irrelevant. You’re starting to realise that view was incorrect and so you now know more than you did a few days ago. That’s an achievement, not a failure!


I don't mind not knowing much about things I have not invested time and work in, but crossfeed is something that has taken time and effort. How about all the other things I have invested time and effort in? How many false beliefs and theories do I have with those? Do I really understand anything about US politics, for example? Or music theory? Steven Spielberg's directing style?



gregorio said:


> I’m not sure what you mean, obviously with speakers you have ITD of the direct sound and of the directional room reflections, so ITD is not omitted.


I mean the ITD in your listening room is not the same as it is in the mixing room unless the rooms are identical. For example, if your room is bigger and less acoustically treated, the reflections will arrive at your ears later and with a different level/spectrum/angle than in the studio. If that doesn't affect spatiality, I don't know what does.



gregorio said:


> Absolutely and in fact there’s no guarantee this will be perceived the same way by the SAME mixer in a different studio, even between two top class studios and it’s not just ITD but almost every factor! I was shocked at the difference when I took a mix done in one world class studio to another world class studio. That’s the reason we have the mastering process, mastering engineers check the phase information of the mix, adjust according to their experience and test the master on different equipment. Even recording studios use both near and far field systems and the purpose of mastering isn’t to get the mix to sound great in the mastering studio but to sound as good as possible on a wide range of playback scenarios. This virtually always means some level of compromise.


This is how I have understood it. Different playback scenarios give interpretations of the recordings with strengths and weaknesses: one room gives nice bass, another room has good diffuse, airy treble, etc. This is the origin of my attitude of "omitting" factors. Not omitting them can lead to extremely difficult and costly measures to control them, for relatively minor benefits. Because of the way my spatial hearing works, crossfeed is to me a "miracle" solution: simple, inexpensive, and it gives me major improvements.

The studio where I had the mixing course had near and far field speakers and the difference between them was quite dramatic!



gregorio said:


> Yes it might but it seems the brain is quite good at compensating for such issues in an acoustic environment. For example, it was thought that comb-filter effects were particularly problematic in listening environments but research by Floyd Toole indicated they’re not as bad as previously thought because listeners and engineers can learn/train to compensate, even with quite noticeable comb-filtering.


I believe so too, hence the mentality of omitting these factors. I believe it is not much different with headphones, without the acoustic environment. I did think about these things 10 years ago when crossfeed was new to me and I wanted a good understanding of what crossfeed does to the signal. The phase difference is too small to create comb-filter effects. The critical octave is about 500-1000 Hz, but even there comb-filtering doesn't happen. In general, first-order filters are pretty safe, because the phase response isn't so dramatic, and simple default crossfeed uses first-order filters. Higher-order filters can easily create massive issues if not used carefully.
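The kind of first-order crossfeed under discussion can be sketched in a few lines. This is a toy illustration only, not any particular plugin: the 700 Hz cutoff and 12 dB attenuation are assumed example values, and the one-pole filter's low-frequency phase lag stands in for the small interaural delay:

```python
import math

def crossfeed(left, right, fs=44100, cutoff=700.0, atten_db=12.0):
    """Toy first-order crossfeed: mix a lowpass-filtered, attenuated copy
    of the opposite channel into each ear. A one-pole (first-order)
    lowpass keeps the phase response gentle, as argued above."""
    a = math.exp(-2.0 * math.pi * cutoff / fs)  # one-pole coefficient
    g = 10.0 ** (-atten_db / 20.0)              # -12 dB ~= 0.251 linear gain
    out_l, out_r, lp_l, lp_r = [], [], 0.0, 0.0
    for l, r in zip(left, right):
        lp_l = (1.0 - a) * l + a * lp_l         # lowpassed left channel
        lp_r = (1.0 - a) * r + a * lp_r         # lowpassed right channel
        out_l.append(l + g * lp_r)              # left ear hears a dim right
        out_r.append(r + g * lp_l)              # right ear hears a dim left
    return out_l, out_r
```

For a hard-panned source this narrows the stereo image by roughly the attenuation amount at low frequencies, while centred content passes through with only a slight level boost.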


----------



## bfreedma

bigshot said:


> I don't mean to sound mean, but I'm here to casually chat about science. I don't want to be held responsible for somebody's self esteem.



Perhaps it would be best to stop commenting on it then.


----------



## bigshot

Do you think that would make it stop?


----------



## 71 dB

bigshot said:


> I don't mean to sound mean, but I'm here to casually chat about science. I don't want to be held responsible for somebody's self esteem.


Maybe I was too dramatic about it, but the way I am is a result of how experiences in life have shaped my personality. Good self-esteem doesn't just exist. It needs "stuff" to stay alive, positive experiences in life. Acceptance, praise, success, etc. Some people have those more than other people. The most positive experiences in my life recently have nothing to do with this discussion board/crossfeed/audio. It is something so private that I won't share it here.


----------



## bfreedma (Oct 14, 2022)

bigshot said:


> Now you're responding to posts by Gregorio that you've already replied to?





bigshot said:


> You are obsessed. Your posts are like kudzu. This isn’t normal, and it can’t end well.





bigshot said:


> I don't mean to sound mean, but I'm here to casually chat about science. I don't want to be held responsible for somebody's self esteem.





bigshot said:


> Do you think that would make it stop?



I'm willing to hear your explanation of how these posts are anything more than internet bullying.

Whether it stops or not is not the issue - your responses are.

Edit - apologies to @71 dB for quoting those posts.  Hopefully won't need to do it again.


----------



## 71 dB

bigshot said:


> Do you think that would make it stop?


You can only stop what you are doing, not what other people are doing.


----------



## 71 dB

bfreedma said:


> Edit - apologies to @71 dB for quoting those posts.  Hopefully won't need to do it again.


No problem! I like your post!


----------



## bfreedma

71 dB said:


> No problem! I like your post!



I realize that my post you responded to flies in the face of this, but we would all be better off leaving the personal stuff at the door and focusing the discussion on audio and facts/opinions on that.


----------



## bigshot (Oct 14, 2022)

thank you

bfreedma, respect is earned. Ignoring what people say over and over and replying over and over without ever addressing their points doesn't earn respect. It earns a curt dismissal. It doesn't have to be intentional trolling to crap up a thread. It's important to be self aware and consider your audience. We aren't here to talk for our own benefit. We're here to discuss things with others. When it crosses the line over into arguing for arguing's sake, it doesn't do anyone any good. I am a straightforward person and I don't pet people on the head and let them go on and on. I call it as I see it to help them see what they are accomplishing or not accomplishing. If you find the repeated inaccuracies interesting and useful to you, go ahead and engage in discussion and that's fine. I won't interrupt. But I don't see anyone besides 71dB with any interest in discussing this further. And I see the whole forum being sucked into arguing over something that isn't even factually correct.

I'm not being personal. In fact, I'm trying to prevent it going there. I'm just pointing out unproductive arguing and suggesting it be tabled, not propagated.


----------



## bfreedma

bigshot said:


> thank you
> 
> bfreedma, respect is earned. Ignoring what people say over and over and replying over and over without ever addressing their points doesn't earn respect. It earns a curt dismissal.



I tried, but as usual, you appear to refuse to consider your participation and responsibility for your own posts.

Sorry Bigshot, but you’ve gone way past curt dismissal and into bullying.  Please stop.  It has no place here amongst adults.

Curt dismissal - “I disagree and it isn’t worth continuing the conversation”
Bullying - “You are obsessed. Your posts are like kudzu. This isn’t normal, and it can’t end well.”


----------



## bigshot

I’ll take your advice and use your phrase when I see unproductive conversations going on in the future.


----------



## bfreedma

bigshot said:


> I’ll take your advice and use your phrase when I see unproductive conversations going on in the future.



Thank you.

We can now return to the 137 page debate about crossfeed, which I admit is making me a little crosseyed.  Thankfully, we seem to have cleared the crossroads  

I was going to make a Kris Kross reference, but that would be going too far...


----------



## 71 dB

bigshot said:


> Ignoring what people say over and over and replying over and over without ever addressing their points doesn't earn respect.


I have tried to address those things from my perspective. I'm sorry my answers are not what you expect to see.


----------



## gregorio

71 dB said:


> I mean the ITD is not the same in your listening room as it is in the mixing room unless the rooms are identical.


The Interaural Time Difference is the same; the distance between your ears doesn’t change when you listen in a different room, but the sound entering your ears is obviously not the same.
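For reference (my addition, not from the post): the ITD of a direct sound is commonly approximated with Woodworth’s spherical-head formula, which depends only on head geometry and source azimuth, never on the room. The 8.75 cm head radius below is an assumed textbook value:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of interaural time
    difference: ITD = (r / c) * (theta + sin(theta)), with the source
    azimuth theta measured from straight ahead (valid for 0-90 degrees)."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

# Straight ahead gives 0; fully to one side (90 degrees) gives roughly 0.66 ms.
```

A different room changes the reflections reaching the ears, but for a source at a given angle the formula, and hence the ITD of that direct sound, is fixed by the head alone.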


71 dB said:


> For example if your room is bigger and less treated acoustically, the reflections will come to your ears later and with different level/spectrum/angle than in the studio.


Yes of course, you’re going to get a significantly different set of reflections, more early reflections, more time delay between the ERs, more spectral interaction between the ERs and between the direct sounds, longer RT60, etc. In addition, there’s going to be differences in the direct sound reproduced to start with, the different speakers in the two listening environments are going to have a somewhat different spectral/freq response and a somewhat different time domain response (group delay, etc.).


71 dB said:


> If that doesn't affect spatiality, I don't know what does.


Yes, it does of course affect spatiality but this is the spatiality of the sound that is entering the ears and then of course that sound interacts with the ears; the pinnae, the skull, body, etc.


71 dB said:


> One room gives nice bass, another room has good diffuse airy treble etc. This is the origin of my attitude of "omitting" factors.


If we could not hear the difference between those two rooms/speakers/presentations, that would be a justification for omitting those factors. That is why your reasoning for omitting factors doesn’t make any sense to me: obviously we can hear the difference between the two rooms/presentations.


71 dB said:


> The studio where I had the mixing course had near and far field speakers and the difference between them was quite dramatic!


Exactly but this effectively contradicts your theory that “_Not omitting them _[those factors] _can lead to extremely difficult and costly measures to control them for relatively minor benefits_.” - You said above the difference “_was quite dramatic!_” but here you’re saying “_relatively minor benefits_”. Obviously we have a lot of factors involved here, not just the factors you omitted, but your rationale for omitting factors doesn’t correlate with your own observation and that was in exactly the same room just with a different speaker presentation. 


71 dB said:


> I believe so too, hence the mentality of omitting these factors.


You are taking one example/property of human perception and applying it to a different context (headphone use). There’s two problems with this, either of which on their own can invalidate your “_mentality of omitting these factors_”: Firstly, you are ignoring other examples/properties of human perception when listening to speakers and secondly, you don’t have any reliable evidence that the different context (headphone use) doesn’t affect any of these factors.

Floyd Toole (and others) didn’t only demonstrate that the brain can adapt over time/training to certain weaknesses in frequency response, he demonstrated a great deal more, such as the importance of time domain performance, off-axis and other speaker/room performance issues. A good practical example of this is the old Yamaha NS10 phenomenon (see this article and accompanying research). In this case we’ve effectively got the same room and even the same presentation (near field monitors), the only difference is the actual speakers. Why, when there were dozens of different near field monitors available, did virtually all commercial studios in the world have NS10s and, more interestingly in the context of this thread, why does it elicit more polarised opinion than pretty much any other “industry standard”? The freq response was poor compared to other near fields but what set it apart was its time domain response, its group delay/impulse response.


71 dB said:


> I believe it is not much different with headphones without acoustic environment.


What basis, apart from your personal perception, do you have for that belief? In fact your belief is contrary to a considerable amount of reliable evidence. In an acoustic environment we have speakers in a room and a considerable amount of resultant spatial/acoustic information, however, all this information correlates to our sense of sight. We see the speakers and the room and the spatiality/acoustic information we hear obviously correlates with that. Furthermore, when we listen in such an environment, we don’t have our head clamped in a vice, it’s moving around at least slightly and the resultant slight (or significant) changes in ITD and other factors reinforces the location/s of what we’re seeing (and hearing). And lastly, reliable evidence demonstrates that sight significantly influences positional hearing/perception. We don’t have any of this with headphones (unless they have HRTFs, head tracking and a reverb applied). And continuing:


71 dB said:


> The phase difference is too small to create comb-filter effects.


How do you know? Firstly, phase differences as low as 100 microseconds can cause audible comb-filter effects, although probably not in the lower freq band affected by crossfeed. Secondly, even if the phase difference is too small to create audible comb-filter effects in the lower band with coherent test signals, how do you know that’s the case with a stereo music mix? A mix is almost never phase coherent to start with; it contains all sorts of direct mono and stereo sound sources, reflections and all sorts of audio effects, some/many of which are already partly out of phase. Overlaying that and adding a further delay with crossfeed can indeed cause comb-filter effects or similar/related effects (Doppler, flanging, phasing, etc.) and, although not extreme, I’ve certainly perceived such effects when using crossfeed.
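The arithmetic behind these delay figures is simple: summing a signal with an equal-level copy of itself delayed by τ seconds puts comb-filter notches at odd multiples of 1/(2τ). A quick sketch (my illustration, not from the post):

```python
def comb_notch_freqs(delay_s, max_freq=20000.0):
    """Notch frequencies produced by summing a signal with an equal-level
    copy delayed by delay_s seconds: cancellations fall at odd multiples
    of 1 / (2 * delay_s), up to max_freq."""
    notches = []
    k = 0
    while (f := (2 * k + 1) / (2.0 * delay_s)) <= max_freq:
        notches.append(f)
        k += 1
    return notches

# A 0.5 ms crossfeed-style delay notches first at 1 kHz;
# a 100 microsecond difference notches first at 5 kHz.
```

In practice the crossfed copy is attenuated and lowpassed rather than equal-level, so these would be shallow dips rather than full nulls, which fits both positions here: the effect exists mathematically but its audibility depends on the material.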

In addition, we’re not just talking about the spectral side effects of crossfeed delay but also the positional perception factors, and reliable evidence demonstrates that differences of as little as 5 microseconds can affect location perception. So, for example, a sound in the mix with say a 600Hz fundamental will have that fundamental crossfed and delayed, but its 2nd, 3rd and subsequent harmonics will not be, so you could perceive different spectral parts of the same sound to be in somewhat different locations.

With headphones, we’ve got no visual reference to influence/correct our location perception and we’ve got the added complexity that the signals we’re listening to (stereo music mixes) are not spatially consistent/coherent internally and also do not correlate acoustically to our listening environment. The evidence indicates that under such conditions of limited sensory references (where those we do have conflict), plus the complex, confusing and contradictory aural cues within the music mixes themselves, our perception effectively has little to rely on and simply makes up whatever seems to make the most sense. This is why there is such a wide variety of individual responses to headphone listening, from “sounds just like being there” to “a bit strange but I still like it” to “this is nonsense and very annoying”. Research is relatively limited in this area; so far it has mainly been limited to understanding basic processes, such as location perception of single simple test signals, not multiple complex sounds with different locations and different acoustic information all occurring simultaneously. But we can’t rationally discount/ignore much of what we have discovered simply on the basis of one person’s perception, especially as it’s not representative of the majority.

G


----------



## 71 dB

gregorio said:


> The Interaural Time Difference is the same, the distance between your ears doesn’t change when you listen in a different room but the sound entering your ears is obviously not the same.


Interesting how "different factors" are a serious thing with crossfeed (ITD is ruined just like that) but with room acoustics everything is okay. I brought up this ITD thing with room acoustics because you have been "teaching" me not to omit "factors".

I am afraid I can't agree with you about ITD staying the same. The room dimensions (+ speaker/listener positions) determine the angles at which the reflected sound arrives at the listener's ears. If the room is wider, the reflections from the side walls not only arrive later compared to the direct sound, but also at a larger angle, generating a bigger ITD. Sure, the differences aren't massive, but they are there.



gregorio said:


> Yes of course, you’re going to get a significantly different set of reflections, more early reflections, more time delay between the ERs, more spectral interaction between the ERs and between the direct sounds, longer RT60, etc. In addition, there’s going to be differences in the direct sound reproduced to start with, the different speakers in the two listening environments are going to have a somewhat different spectral/freq response and a somewhat different time domain response (group delay, etc.).


Yep. The differences are many. It is very complex. Despite this, people "bite the bullet." Crossfeed makes consistent, simple, predictable modifications to the sound. I wonder which one is less problematic?



gregorio said:


> Yes, it does of course affect spatiality but this is the spatiality of the sound that is entering the ears and then of course that sound interacts with the ears; the pinnae, the skull, body, etc.


Using over-the-ear headphones, the sound interacts with the pinnae (although not the way it does with speakers, because the sound is not arriving from different directions but from one fixed direction, the headphone driver). Crossfeed approximates very roughly the interaction with the skull (whereas using no crossfeed or anything else simulates a situation where your head is inside a wall, so that your ears/speakers are in different, acoustically isolated rooms). The body is not simulated.



gregorio said:


> If we could not hear the difference between those two different rooms/speakers/presentations that would be a justification for omitting those factors. That is why your reason for omitting factors doesn’t make any sense to me because obviously we can hear the difference between the two different rooms/presentations.


How is hearing differences between no crossfeed and crossfeed any different? How am I omitting factors with crossfeed if I am not omitting factors when listening to speakers in my room?



gregorio said:


> Exactly but this effectively contradicts your theory that “_Not omitting them _[those factors] _can lead to extremely difficult and costly measures to control them for relatively minor benefits_.” - You said above the difference “_was quite dramatic!_” but here you’re saying “_relatively minor benefits_”. Obviously we have a lot of factors involved here, not just the factors you omitted, but your rationale for omitting factors doesn’t correlate with your own observation and that was in exactly the same room just with a different speaker presentation.


Benefits can still be minor while differences are dramatic. Both sets of speakers gave a good representation of the music in different ways. It is a differing set of pros and cons.



gregorio said:


> You are taking one example/property of human perception and applying it to a different context (headphone use). There’s two problems with this, either of which on their own can invalidate your “_mentality of omitting these factors_”: Firstly, you are ignoring other examples/properties of human perception when listening to speakers and secondly, you don’t have any reliable evidence that the different context (headphone use) doesn’t affect any of these factors.


This is how my mind works, I guess... ....As an _INTJ_ intuition is big part of how I "process" information. My senses have less to do with it.

This is all for now... I'll comment on the rest of your post later...


----------



## gregorio

71 dB said:


> Interesting how "different factors" are a serious thing with crossfeed: ITD is ruined just like that, but with room acoustics everything is okay.


The different factors are always a serious thing. Remove, ignore or even change one a bit and it can change/alter the perception. Everything isn’t OK with room acoustics but we have a great deal more spatial information, reflections/harmonics, more opportunity for masking and of course the mix and master are made by engineers for this sound presentation. 


71 dB said:


> I am afraid I can't agree with you about ITD staying the same.


How can you not agree? The only way ITD can change is if the distance between the ears change. You think maybe when you listen in a different room your skull expands or contracts by several centimetres?


71 dB said:


> It is very complex. Despite of this people "bite the bullet." Crossfeed makes consistent simple predictable modifications to the sound.


And that’s the problem! It is very complex but it’s what our hearing and perception have evolved to deal with, and what we hear all day, every day, for all our lives is also very complex. Crossfeed on the other hand is not very complex; HRTFs (plus head tracking and reverb) are far more complex and far better/closer to what our perception requires/expects. Crossfeed “makes consistent simple predictable modifications to the sound” mathematically, but it’s obviously NOT predictable to most people’s perception, because what you end up with when you apply crossfeed to complex musical mixes is various phase and other artefacts which are not similar to what happens with real sound in a real acoustic.


71 dB said:


> I wonder which one is less problematic?


That really should be obvious by now! It’s the simpler one which produces the spatial/spectral effects not expected/required by human perception. 

You seem to still be missing the basic facts: We have a pair of complex signals which contains complex and somewhat contradictory spatial information, made more complex by speaker/room reproduction, and we have hearing perception which uses a whole bunch of timing and spectral factors to make sense of what we’re hearing. Using headphones, which don’t have that speaker/room interaction or other factors (such as head tracking), obviously lacks some of those factors, which leaves perception with a lot more guesswork, and therefore the results are unpredictable. Some people perceive a completely natural/realistic result. Others perceive serious problems and not at all a natural/realistic result. Most though perceive something in between these two extremes; for example some (like me) perceive a reasonably natural result but from a much closer perspective and with a few artefacts/problems which aren’t a big deal.

With crossfeed, we are still missing those factors and so our perception is still reliant on guesswork; however, the factors we do have are altered, some improved and some degraded, and this affects the result of our perception’s guesswork. For some people it completely ruins what was a very good/realistic result, for some (like you apparently) it completely cures what was an extremely poor result, but again, most people fall between these two extremes. Some of this majority feel on balance that crossfeed has problems but is generally preferable and use it; most others feel on balance it’s not worth it, for example (like me) that it’s maybe a bit better with a few pieces but worse in most cases.

G


----------



## 71 dB

gregorio said:


> The different factors are always a serious thing. Remove, ignore or even change one a bit and it can change/alter the perception. Everything isn’t OK with room acoustics but we have a great deal more spatial information, reflections/harmonics, more opportunity for masking and of course the mix and master are made by engineers for this sound presentation.


Why do you have to be so fast? I am not even finished with your prior post! I am so tired. I can try, but you always have new excuses (masking this time!)



gregorio said:


> How can you not agree? The only way ITD can change is if the distance between the ears change. You think maybe when you listen in a different room your skull expands or contracts by several centimetres?


ITD depends on the angle of the sound. That's the point of ITD! A different sized room gives different angles for the reflections => different ITD.

That's all for now.


----------



## bigshot (Oct 15, 2022)

Even a bad room adds spatiality. Just the wrong kind. And it’s primarily a matter of timing reflections and delays.


----------



## 71 dB

gregorio said:


> And that’s the problem! It is very complex but it’s what our hearing and perception have evolved to deal with and it’s what we hear all day, every day for all our lives is also very complex. Crossfeed on the other hand is not very complex, HRTFs (head tracking and reverb) are far more complex and far better/closer to what our perception requires/expects. Crossfeed “makes consistent simple predictable modifications to the sound” mathematically but it’s obviously NOT predictable to most people’s perception because what you end up with when you apply crossfeed to complex musical mixes is various phase and other artefacts which are not similar to what happens with real sound in a real acoustic.


Complex but natural! That's the key thing. You don't get crazy ILD at low frequencies with speakers in a room. The room "regulates" spatial cues so that they are VERY complex, but they live in the natural range, so that spatial hearing is able to understand them. With headphones the spatial cues are simpler, but they can be very unnatural, which is the problem. Sure, crossfeed doesn't give the same things as a room, but it is more natural, and that is the key for me.


----------



## 71 dB (Oct 15, 2022)

The low+high thing: Above about 1600 Hz ITD loses meaning, so it doesn't matter much if the delay differs between low and high frequencies. At least I don't have issues; on the contrary, without crossfeed localization is a mess! Much better with crossfeed!


----------



## bigshot

No time change, no spatiality. Reverb adds spatiality.


----------



## gregorio

71 dB said:


> ITD depends on the angle of the sound. That's the point of ITD! Different size room gives different angle for reflections => different ITD.


Oh dear! Yes, ITD depends on the angle of the sound. So with a sound directly in front of us the ITD is zero, as the sound hits both our ears at the same time.  Room size makes NO difference, whether the sound is 2m away or 20m, it still hits both ears at the same time and there’s zero ITD. Room size also makes no difference to angle, what makes a difference to angle is the relative position of the sound source (speaker for example) relative to the head position of the listener. Room size obviously makes a difference to the initial delay of the first early reflection and the delay of subsequent reflections but OF COURSE, that’s the initial delay/s of the early reflections and not ITD!!
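The point that ITD depends only on source angle and head size, never on source distance or room size, can be illustrated with Woodworth's classic spherical-head approximation (a standard textbook model; the head radius and speed of sound below are typical values, not measurements):

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of interaural time
    difference: ITD = (r / c) * (theta + sin(theta)).
    Note that neither source distance nor room size appears anywhere."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))
```

A source straight ahead (0 degrees) gives exactly zero ITD whether it is 2 m or 20 m away; a source at 90 degrees gives roughly 0.65 ms.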


71 dB said:


> Complex but natural! That's the key thing. You don't get crazy ILD at low frequencies with speakers in a room.


That is correct but also very incorrect. Yes, complex but natural is the key thing, but there is NOTHING natural about only reducing ILD without also having room reflections, varying ITD and varying spectral content of both the direct sound and the reflections from the room and from the pinnae, skull, etc.


71 dB said:


> Sure, crossfeed doesn't give the same things as room, but the it is more natural and that is the key for me.


You’re contradicting yourself again. Crossfeed is not “more natural”; where do you get that from? It’s nonsense but still you keep repeating it! What is natural about a fixed ITD below a certain frequency, or about a spectral effect/absorption that does not vary according to angle? All of this is completely unnatural. The only factor out of many that is natural with crossfeed is ILD; all the other factors are unnatural, and you can’t simply say “I’m right because I ignore all those other factors” because that is contrary to the science and indeed to common sense, because if you were right there would be no need or point in developing HRTFs!

G


----------



## 71 dB

gregorio said:


> Oh dear! Yes, ITD depends on the angle of the sound. So with a sound directly in front of us the ITD is zero, as the sound hits both our ears at the same time.  Room size makes NO difference, whether the sound is 2m away or 20m, it still hits both ears at the same time and there’s zero ITD. Room size also makes no difference to angle, what makes a difference to angle is the relative position of the sound source (speaker for example) relative to the head position of the listener. Room size obviously makes a difference to the initial delay of the first early reflection and the delay of subsequent reflections but OF COURSE, that’s the initial delay/s of the early reflections and not ITD!!


When I talked about direct sound from speakers, you told me not to forget the other stuff, reflections and reverberation. Now you are talking about direct sound! I meant, of course, early reflections from the side walls, and yes, room dimensions do affect the sound angle. It is basic geometry! Since you want to talk about direct sound: there is acoustic crossfeed happening with it. I have tried to make that point, but you constantly move the goalposts!


----------



## bigshot

With stereo speakers, the position of the speakers is a triangle with the listening position. As the room size increases, the triangle scales up so the sound is always coming from the same angles.


----------



## 71 dB (Oct 16, 2022)

bigshot said:


> With stereo speakers, the position of the speakers is a triangle with the listening position. As the room size increases, the triangle scales up so the sound is always coming from the same angles.


The shape of the room matters, and so does the distance to the side walls. Most of the time the angles* are quite similar, but if we are worried about a 5 µs difference, such differences are certainly possible.

* If the side wall were a mirror, the angles are the ones at which you see your speakers in the mirror. If you make the listening triangle smaller, the angle increases, and vice versa. Theoretically it can be between 30° and 90°, but in practice it is about 50°-70°.
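The mirror analogy can be sketched with a simple image-source calculation: the reflection appears to come from the speaker's mirror image behind the wall, so moving the wall moves the apparent angle. The coordinates below are made up purely for illustration:

```python
import math

def image_speaker_angle(speaker_x, speaker_y, wall_x):
    """Azimuth (degrees, 0 = straight ahead) of the mirror image of a
    speaker reflected in a side wall at x = wall_x, as seen by a listener
    at the origin facing +y (image-source model; hypothetical layout)."""
    image_x = 2.0 * wall_x - speaker_x   # mirror the speaker across the wall
    image_y = speaker_y
    return math.degrees(math.atan2(image_x, image_y))

# Same speaker (1 m to the right, 2 m ahead); moving the side wall from
# 2 m to 3 m away changes the reflection's apparent angle:
near_wall = image_speaker_angle(1.0, 2.0, wall_x=2.0)   # image at (3, 2)
far_wall = image_speaker_angle(1.0, 2.0, wall_x=3.0)    # image at (5, 2)
```

The direct sound's angle is untouched by the wall position; only the reflection's angle shifts, which is the distinction both sides of this exchange are circling around.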


----------



## bigshot

The angle of the sound from the speakers doesn’t change as the room size gets larger.


----------



## 71 dB

gregorio said:


> You seem to still be missing the basic facts: We have a pair of complex signals which contains complex and somewhat *contradictory* *spatial information*,


With speakers this contradictory spatial information isn't a huge issue, because the room acoustics shape it into something less contradictory. With headphones that doesn't happen.



gregorio said:


> made more complex by speaker/room reproduction and we have hearing perception which uses a whole bunch of timing and spectral factors to make sense of what we’re hearing.


Naturally spatial sound contains so much information (all the reflections etc.) that I believe the analysis of it all in our brain happens largely statistically. It means that individual reflections don't matter so much (apart from the direct sound + early reflections, which are sparser and can be analysed more carefully); what matters is the combination of all of them.



gregorio said:


> Using headphones which don’t have that speaker/room interaction or other factors (such as head tracking) obviously lacks some of those factors which leaves perception with a lot more guesswork and therefore the results are unpredictable.


Yes, and the result of this is that I get broken, messy spatiality with headphones as they are (without crossfeed), unless it is binaural or binaural-like sound.



gregorio said:


> Some people perceive a completely natural/realist result.


I don't and I don't understand how some people do. Of course this is a nice talent to have. Makes headphone listening simpler.



gregorio said:


> Others perceive serious problems and not at all a natural/realistic result.


That's me.



gregorio said:


> Most though perceive something in between these two extremes, for example some (like me) perceive a reasonably natural result but from a much closer perspective and with a few artefacts/problems which aren’t a big deal.


How do you hear large ILD at low frequencies? How does sound in only one ear feel to you? To me these things sound very unnatural and annoying. Using a phone is annoying too, but thanks to its lack of low frequencies it is not very bad.



gregorio said:


> With crossfeed, we are still missing those factors and so our perception is still reliant on guesswork, however, the factors we do have are altered, some are improved and some degraded and this affects the result of our perception’s guesswork.


Things are altered in a way that roughly simulates how they are altered in regards of direct sound in "real life". So, even if some things degrade, they degrade similarly to "real life" that is familiar to my spatial hearing.

What is important to realize is that in stereo sound we can have channel differences, that don't really make sense to spatial hearing. It is like video signal with color information in IR and UV range that messed up the RGB color information. For example stereo signal where left and right channels are negative versions of each other doesn't make sense for human spatial hearing which expects certain types of cross-correlation between the channels. Mono sound (left and right channels identical) don't make sense in the context of spatial hearing either. When there is some differences, but not huge differencies is the "sweet spot". Room acoustics transform even the craziests ping pong recordings and even mono recordings to this sweet spot. Crossfeed does similar thing, but unfortunately it can't improve mono sound. For that I have developped "*diffuse mono*", where an artificial "S" channel is created from the original mono sound and delayed randomly before "added" to the original sound. That makes the mono sound diffuse. Mono recordings with headphones sound very "dead" and totally centered inside head. Making the mono diffuse makes it feel more outside head and also more lively, like a stereo recording where every instrument happened to be in the center. This is kind of the opposite of what crossfeed does (channel separation is increased instead of reduced). Unfortunately I can only do this processing beforehand, not "on the go".



gregorio said:


> For some people it completely ruins what was a very good/realistic result, for some (like you apparently) it completely cures what was an extremely poor result but again, most people fall between these two extremes.


It is interesting these differences exist among people, something I would have never believed before you educating me! 



gregorio said:


> Some of this majority feel on balance that crossfeed has problems but is generally preferable and use it, most others feel on balance it’s not worth it, for example (like me) that it’s maybe a bit better with a few pieces but worse in most cases.


Maybe we simply have different threshold points for crossfeed being beneficial? For me crossfeed is beneficial for about 98 % of stereo recordings, while for the remaining 2 % it is not (for headphone-compatible recordings crossfeed can be quite harmful, but those recordings are rare). For you it is the opposite?

Related to this, my newer theory is that intuitive people favor crossfeed more than sensing types. Intuitive types (MBTI = xNxx) often suffer from overwhelming sensory information, while sensing types (MBTI = xSxx) control sensory information better. Also, intuitive types process incoming information in order to find contradictions, while sensing types take information more as it is.

This is just an idea I have and can be totally wrong...


----------



## 71 dB

bigshot said:


> The angle of the sound from the speakers doesn’t change as the room size gets larger.


You are right, but the angles of the reflections from room surfaces do.


----------



## gregorio

71 dB said:


> When I talked about direct sound from speakers, you told me to not forget the other stuff, reflections and reverberation. Now you are talking about direct sound!


You’re joking? Maybe I gave you too much credit for what you know? Yes, we cannot forget about reflections/reverb BUT OBVIOUSLY that does NOT mean we only consider reflections/reverb and forget about the direct sound. So yes, sometimes I talk specifically about direct sound and sometimes I talk specifically about reflections but we must always consider both! You seem to have a habit of considering just one thing (typically ILD) and ignoring/dismissing everything else. You seem incapable of understanding that a lot of factors are occurring simultaneously and interact with each other or that if I’m talking about one thing, it is ALWAYS in the context of everything else. 


71 dB said:


> I meant of course early reflections from side walls, and yes, room dimensions do affect the sound angle. It is basic geometry!


But of course we cannot consider only the reflections but also the direct sound and the angle of the direct sound does not change which is also basic geometry! For example an equilateral listening triangle would have the speakers relative to the listener at an angle of 60deg. Make the room bigger and leave the listening triangle the same or bigger and the angle is still 60deg. When it comes to the angle of side reflections, room size does have an effect but again it is only one of several factors. For example, we would get the same angle difference as a bigger room by having exactly the same room but a smaller listening triangle. A weird trapezoid or pentagon shaped room will seriously affect reflection angles but of course not many consumers have such listening rooms we’re typically dealing with rectangular shaped rooms. Also, we cannot only consider the angle of reflections, we also have to consider the relative timing/delay of those reflections, the relative level and the spectral content. In addition, in the case of side wall reflections, this is also determined by the off-axis response of the speakers. And lastly, the most prominent initial reflection is typically the two reflection points on the rear wall directly in line with the speakers, behind the listener. These result in relatively small ITDs and ILDs compared to side wall reflections (same with ceiling and floor reflections) but they have specific spectral differences by the time they reach the ear drum which human perception relies on a factor you consistently ignore. 


71 dB said:


> Since you want to talk about direct sound, there is acoustic crossfeed happening with it. I have tried to make that point, but you constantly move the goalposts!


I’m not moving the goalposts, the goalposts have always been the same but you don’t seem to realise the goalposts include all the factors, not just one at a time!


71 dB said:


> I don't and I don't understand how some people do.


And that’s the problem! I don’t perceive a completely natural result with headphones either, but I DO understand how some people do: it’s the result of perception changing what we’re actually hearing in order to make sense of it, and for a few people the result seems to be almost perfectly natural/realistic, for others less so, and for a few (like you) it’s an uncomfortable mess. You don’t understand the 1st or 2nd group because you’ve come up with some “theory” which dictates these two groups cannot exist. Clearly they do exist, so obviously your theory must be wrong, but you don’t seem able to let go of it in the face of this obvious evidence. This perception process which allows some people to perceive an almost perfect result without crossfeed is the same process you subconsciously use, although with the different result of you perceiving a near perfect result using crossfeed. In both cases the actual result is far from perfect/natural but your (and their) perception leads you to believe otherwise.


71 dB said:


> How do you hear large ILD at low frequencies?


I don’t understand the question. A large ILD is natural and not even especially uncommon, we experience a large ILD when a sound is close to one side of our head. 


71 dB said:


> To me these things sound very unnatural and annoying.


But they’re not unnatural, a large ILD happens naturally IRL. Maybe you just haven’t experienced it much and therefore it sounds unnatural to you. In my case, I spent many years as an orchestral musician, so was habituated to having other instruments very close on either side of me (in various different ensembles), and in front and behind me. Maybe that’s why I’m not so annoyed by it and you are? Either way though, it’s not unnatural just because you perceive it to be. 


71 dB said:


> Things are altered in a way that roughly simulates how they are altered in regards of direct sound in "real life".


You keep falsely stating that, presumably because you personally perceive crossfeed that way and your “theory” depends on it but these two things don’t provide any real evidence and actually contradict reliable evidence. The only thing they do provide is a self-reinforcing circular argument, which is why we’ve been going in nonsense circles for so long!!

Your statement is false because IRL we never hear only the direct sound, and even considering only the direct sound, there are several important aspects that crossfeed does not simulate at all, roughly or otherwise. Your answer to this is that these “important aspects” are not only unimportant, they’re irrelevant and should be dismissed. That’s nonsense because it contradicts established science. As one single example: if we have a direct sound centrally in front of us and then the exact same sound centrally behind us, the ITD and ILD are zero in both cases. We can tell the difference due to spectral differences caused by the different absorption of the front of the pinnae compared to the back of the pinnae, and other absorption characteristics of the front of the body/skull versus the back. Crossfeed does not simulate any of these differences in any way at all, not even roughly, and you cannot claim they are irrelevant because your perception relies on them, as does everyone else’s. Unless you’re claiming that in this experiment you wouldn’t be able to perceive the different location of the sound in front or behind?


71 dB said:


> Room acoustics transform even the craziest ping pong recordings and even mono recordings to this sweet spot.


Not necessarily, again you’re just omitting various factors to maintain the myth of your false theory! Yes, those extreme, hard ping pong recordings you sometimes find in early stereo recordings do sound bizarre on HPs but they also sound fairly bizarre in a room on quite widely spaced near-field speakers/monitors. Not as bizarre but certainly not in this “sweet spot”. The “sweet spot” is achieved by having the speakers quite close together and the listening position further away, by decreasing both the ILD and ITD, increasing the ratio of reflections/reverb to direct sound and changing the spectral content of both the direct and reflected sound. 


71 dB said:


> Crossfeed does similar thing,


No it doesn’t, apart from the ILD it does nothing similar to the above at all!


71 dB said:


> Related to this my newer theory is that intuitive people favor crossfeed more than sensitive types. Intuitive types (MBTI = xNxx) suffer often from overwhelming sensory information while sensitive types (MBTI = xSxx) control sensory information better.


Looks like you’re going to fall into the same trap again! A “theory” requires reliable supporting evidence, even a hypothesis requires some basis in or reference to reliable evidence. If you just make-up some idea based on your personal perception of observations that may or may not correlate and even if they do, may not imply causation; the chances are that some existing scientific evidence will falsify it and you’ll spend another decade believing a falsehood and then making up circular arguments to defend it! Wouldn’t it be better (and save many years) to first read as much scientific research/studies as you can on location perception and related issues?

G


----------



## bigshot (Oct 16, 2022)

I don't disagree that the size of the room affects the reflection of sound off the walls, but no one was talking about reflections off the walls of the room. You shifted to that when your comment about room size affecting the angle of the direct sound was pointed out to be incorrect. Room reflections are irrelevant because crossfeed doesn't produce room reflections, and that is precisely why it doesn't create spatiality. Normally, one would acknowledge their error, correct it, and then move on to another point. But you slid from an error to an irrelevant fact, hoping to bury your error under a subtle change of subject. That's not a particularly admirable argumentative technique. Discussions aren't about buffaloing one's point across come hell or high water. It's a conversational give and take. Acknowledging when the other person is correct is an important part of that, and intellectual honesty is what earns respect.

Sliding on and off points doesn't score points. It can only derail the point. It's always better to focus on arguing on point and not let your emotions force you to use argumentative tricks to "win" at any cost.


----------



## 71 dB

gregorio said:


> I don’t understand the question. A large ILD is natural and not even especially uncommon, we experience a large ILD when a sound is close to one side of our head.


When is it in your opinion natural to have sound close to one side of our head?


----------



## bigshot (Oct 16, 2022)

Reflections are important because that's what crossfeed is missing. That's why crossfeed doesn't affect spatiality.

The angle of direct sound is constant with speakers. With headphones it's constant at the wrong angle. With crossfeed it's the same; the signals are just mixed. There is no angle at all. It's a line through your head. Without room reflections, there is no direction and there is no space.

The problem is, your arguments aren't focused on point. They don't answer the arguments you are being given. They just deflect or change the subject entirely. You cut up the argument you're given into little bits and answer the bits, not the entire context. Line by line replies tend to encourage that kind of stuff.


----------



## 71 dB

bigshot said:


> Reflections are important because that's what crossfeed is missing. That's why crossfeed doesn't affect spatiality.


Crossfeed affects the sound, makes it different. You can call the change whatever you want. I don't care what you call it. I call it *improved spatiality* and I don't need your approval for that! I have been a fool for wanting your approval. Reflections are missing in headphone sound, crossfeed or not. The lack of reflections is not a crossfeed thing. It is a headphone thing.



bigshot said:


> The angle of direct sound is constant with speakers.


It is not constant when the listener moves their head sideways, but I get your point.



bigshot said:


> With headphones it's constant at the wrong angle. With crossfeed, it's the same, just the signals are just mixed. There is no angle at all. It's a line through your head. Without room reflections, there is no direction and there is no space.


My spatial hearing is fooled by crossfeed into thinking the angle has changed. Also, the spatial cues in the recording fool my spatial hearing and I sense a small space.



bigshot said:


> The problem is, your arguments aren't focused on point. They don't answer the arguments you are being given. They just deflect or change the subject entirely. You cut the argument you're given up into little bits and answer the bits, not the entire context. Line by line replies tend to encourage that kind of stuff.


I am so sorry I am so bad at everything. I try but this is the result always. I don't know how to do well.


----------



## bigshot (Oct 16, 2022)

You can call it that, but crossfeed has nothing to do with spatiality. With headphones, stereo is a single dimension- a line down the middle of the head between the ears. Crossfeed simply moves sound closer to the center of the head.


----------



## 71 dB

bigshot said:


> You can call it that, but crossfeed has nothing to do with spatiality. With headphones, stereo is a single dimension- a line down the middle of the head between the ears. Crossfeed simply moves sound closer to the center of the head.


It doesn't make sense to me to say a line down the middle of the head between the ears, because:

1. Nothing goes physically through our head when we use headphones 
2. I don't hear it that way unless the recording lacks all secondary spatial cues (e.g. test tones)


----------



## old tech

@71 dB Just as an aside, the Myers-Briggs personality test doesn't have much (if any) scientific validity, certainly the evidence for it is lacking.  Read for example:
https://skeptoid.com/episodes/4221


----------



## 71 dB

Since everyone hears crossfeed differently, this is no different from arguing whether strawberry ice cream tastes better than banana ice cream...



old tech said:


> @71 dB Just as an aside, the Myers-Briggs personality test doesn't have much (if any) scientific validity, certainly the evidence for it is lacking.  Read for example:
> https://skeptoid.com/episodes/4221


I know, but getting into it last year has helped me to understand myself better so I guess it is better than nothing. It is weird that something with not much scientific validity is used so much.


----------



## bigshot

1) I hear spatiality with a recording with secondary depth cues.
2) I do not hear spatiality with a recording without secondary depth cues.

A) I hear spatiality with crossfeed and a recording with secondary depth cues.
B) I do not hear spatiality with crossfeed and a recording without secondary depth cues.

Maybe the spatiality has nothing to do with the crossfeed and everything to do with the secondary depth cues.

Secondary depth cues consist of recorded room reflections and reverberation, both of which involve timing changes.

Crossfeed does not alter timing, therefore….





…


----------



## old tech

71 dB said:


> I know, but getting into it last year has helped me to understand myself better so I guess it is better than nothing. It is weird that something with not much scientific validity is used so much.


It is not used at all in the psychology profession but it is analogous to Astrology being used a lot in the daily newspapers.


----------



## 71 dB

bigshot said:


> Crossfeed does not alter timing, therefore….


I guess crossfeed does nasty things and was invented to ruin headphone sound. I am satanic for trying to give scientific justification to crossfeed. Fortunately blessed people like you work hard to defeat the satanic forces threatening mankind! Crossfeed is known to encourage some people to use headphones more. It tricks some people into even having out-of-head spatiality on headphones! How perverse! Headphone sound belongs inside the head! Only speaker sound is allowed the freedom of outside-head existence.



old tech said:


> It is not used at all in the psychology profession but it is analogous to Astrology being used a lot in the daily newspapers.


In my case nothing was ever used. That's why I lived 50 years wondering why other people are so weird and different.


----------



## 71 dB (Oct 17, 2022)

gregorio said:


> No it doesn’t, apart from the ILD it does nothing similar to the above at all!


Acoustic crossfeed of direct sound creates an ITD of about 250 µs. Crossfeed mimics this 250 µs, so there is that similar aspect regarding ITD. Crossfeed also mimics, very roughly, the frequency dependence of acoustic crossfeed: low frequencies "leak" to the contralateral ear more than higher frequencies. So there is that similarity too.
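The mechanism described above (a delayed, lowpass-filtered, attenuated copy of the opposite channel added into each channel, as in the bs2b-style description quoted earlier in the thread) can be sketched in a few lines. This is a hypothetical minimal illustration with made-up parameter values, not the code of any actual plugin:

```python
import numpy as np

def crossfeed(left, right, fs=44100, delay_us=250, cutoff_hz=700, atten_db=12.0):
    """Minimal crossfeed sketch: add a delayed, one-pole lowpassed,
    attenuated copy of the opposite channel into each channel."""
    delay = max(1, int(round(fs * delay_us * 1e-6)))  # ITD as whole samples (~11 @ 44.1 kHz)
    gain = 10 ** (-atten_db / 20)                     # -12 dB -> ~0.25 linear
    a = np.exp(-2 * np.pi * cutoff_hz / fs)           # one-pole lowpass coefficient

    def shaped(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = (1 - a) * s + a * acc               # crude "head shadow" lowpass
            y[i] = acc
        # delay by whole samples, then attenuate
        return np.concatenate([np.zeros(delay), y[:-delay]]) * gain

    return left + shaped(right), right + shaped(left)
```

Feeding a hard-left impulse through this shows the idea: the right channel picks up a quieter, duller copy of the click about 250 µs later, which is roughly what the contralateral ear would receive from a real left speaker.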

I am tired of you twisting things and terms in ways that are always as unfavorable to crossfeed as possible. I think I am much more honest: I admit what crossfeed can't do. I admit its limitation, but I also give credit to crossfeed when it is due. I see the positive and the negative. You want to see only the negative.


----------



## 71 dB

gregorio said:


> Not necessarily, again you’re just omitting various factors to maintain the myth of your false theory! Yes, those extreme, hard ping pong recordings you sometimes find in early stereo recordings do sound bizarre on HPs but they also sound fairly bizarre in a room on quite widely spaced near-field speakers/monitors. Not as bizarre but certainly not in this “sweet spot”. The “sweet spot” is achieved by having the speakers quite close together and the listening position further away, by decreasing both the ILD and ITD, increasing the ratio of reflections/reverb to direct sound and changing the spectral content of both the direct and reflected sound.


I don't mention all things all the time. That doesn't mean I omit them; they have affected my thought process at some point. I totally agree with you about how ping-pong recordings can be made better with speakers.



gregorio said:


> Looks like you’re going to fall into the same trap again! A “theory” requires reliable supporting evidence, even a hypothesis requires some basis in or reference to reliable evidence. If you just make-up some idea based on your personal perception of observations that may or may not correlate and even if they do, may not imply causation; the chances are that some existing scientific evidence will falsify it and you’ll spend another decade believing a falsehood and then making up circular arguments to defend it! Wouldn’t it be better (and save many years) to first read as much scientific research/studies as you can on location perception and related issues?
> 
> G


I don't mean a scientific theory. I use the term "theory" loosely and I wish you would read for the intended meaning. I have thoughts about things. Thoughts don't need to be scientific; they can be anything.


----------



## bigshot (Oct 17, 2022)

Correlation isn’t necessarily causation.

Spatiality is created by the effect of space on sound. That’s reflections and delays.


----------



## gregorio

71 dB said:


> When is it in your opinion natural to have sound close to one side of our head?


When someone whispers in your ear, when driving, looking ahead and the passenger talks to you, when an insect flies close to one ear. When you’re a musician sitting right next to another musician, when you play an instrument that’s to one side of your head (such as a flute, violin or tuba for example), in fact many long term professional musicians of such instruments have serious noise induced hearing loss/damage in just that one ear.

I’m sure there are a number of other scenarios which occur IRL and are therefore natural. I’m sure most people experience such a scenario at least once and quite a large number experience it several times or even fairly commonly. Unlike for example the bizarre acoustic experience of being in an anechoic chamber but even then, individual responses vary dramatically.
I’m very surprised you couldn’t come up with any IRL scenarios to answer your own question. This further indicates you have a “blind spot” for anything which may falsify your theory or contradict your personal perception!


71 dB said:


> I guess crossfeed does nasty things and was invented to ruin headphone sound.


Crossfeed was invented to improve HP sound but also “does nasty things”, which is why it works better than no crossfeed for some people and worse for others. This is why HRTFs (and then added reverb and head tracking) were invented, to remove/avoid those “nasty things”. But then you already know this!


71 dB said:


> I am satanic for trying to give scientific justification to crossfeed.


Of course not, because there is a scientific justification to crossfeed. However, there is also scientific justification for why it does not work (for many/most people) and it is “satanic” to simply ignore/dismiss this science for no reason other than to defend a nonsense theory. We see this sort of thing commonly in audiophile marketing and you rightly challenge it but not in this case when it’s your own theory. Take for example perceived sonic differences in cables and the common scientific justification of skin effect, all of which is true/real, providing we ignore/dismiss the science that dictates skin effect doesn’t affect audible freqs. Or, I’ve seen articles and even white papers (by audiophile manufacturers) that were several/many pages long explaining everything to do with jitter, the problems/distortion it creates and where every single stated measurement and fact was correct but nevertheless, the whole thing is invalidated for the intended reader by omitting just one single fact, that it’s well below audibility. You know all this but somehow can’t apply it to your own theory.

G


----------



## 71 dB (Oct 17, 2022)

gregorio said:


> Your statement is false because IRL we never hear only the direct sound and even considering only the direct sound, there are several important aspects that crossfeed does not simulate at all, roughly or otherwise. Your answer to this is that these “important aspects” not only are not important, they’re irrelevant and should be dismissed. That’s nonsense because it contradicts established science. As one single example, if we have a direct sound centrally in front of us and then the exact same sound centrally behind us the ILD and ITD are zero in both cases. We can tell the difference due to spectral differences caused by the different absorption of the front of the pinnae compared to the back of the pinnae and other absorption characteristics of the front of the body/skull and the back. Cross feed does not simulate any of these differences in any way at all, not even roughly and you cannot claim they are irrelevant because your perception relies on them as does everyone else’s. Unless you’re claiming that in this experiment you wouldn’t be able to perceive the different location of the sound in front or behind?


YES, crossfeed doesn't simulate many things!! I admit that! So don't say I don't. Since I admit it, what I say is not false!! Crossfeed can still improve the sound a lot for some people despite not simulating everything and only doing some very simple things! I know because I am one of those people! I have also learned and admitted that what I hear is not what everyone hears. I ADMIT IT!!!! You can't admit that! Instead you base your claims on how you yourself hear crossfeed.

You should admit that crossfeed is BASED on the science of spatial hearing. Very roughly and simply perhaps, but that is the origin. Crossfeed wasn't invented by trying something totally random pulled out of a hat! It was invented by simulating acoustic crossfeed. This is the FACT and you are wrong if you claim otherwise. I also discovered crossfeed by thinking about headphone spatiality the way I was taught in university, realising how problematic excessive ILD can be with headphones. So, it is useless to take science away from crossfeed. This science isn't taken away by the fact that _nowadays_ we can do things MUCH BETTER with HRTF/head tracking etc. I have always admitted those methods are even better.

Headphones don't have front/behind separation. Crossfeed neither "takes it away" nor "gives it". Also, I know this stuff of course; it was taught to me at university.


----------



## bigshot

Crossfeed doesn’t simulate the things that are responsible for giving sound a feeling of space. Reverbs and digital delays do that. Crossfeed reduces channel separation, which may be desirable if you don’t like ping pong stereo.
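One way to make "reduces channel separation" concrete: if crossfeed mixes the opposite channel in at, say, -12 dB (the attenuation quoted for DSP Manager earlier in the thread), a hard-panned element can never end up more than 12 dB louder in one ear than the other. A small illustrative sketch (the function name is made up for this example):

```python
import math

def max_separation_db(atten_db):
    """If crossfeed adds the opposite channel at -atten_db, a hard-panned
    signal ends up at level 1.0 in one ear and 10**(-atten_db/20) in the
    other, so channel separation cannot exceed atten_db."""
    g = 10 ** (-atten_db / 20)      # linear gain of the crossfed copy
    return 20 * math.log10(1 / g)   # separation between the two ears, in dB
```

So a -12 dB crossfeed caps the separation of a hard-panned element at 12 dB, regardless of what is in the recording; stronger crossfeed blends the channels further.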


----------



## 71 dB

gregorio said:


> When someone whispers in your ear, when driving, looking ahead and the passenger talks to you, when an insect flies close to one ear. When you’re a musician sitting right next to another musician, when you play an instrument that’s to one side of your head (such as a flute, violin or tuba for example), in fact many long term professional musicians of such instruments have serious noise induced hearing loss/damage in just that one ear.


How common are those in music listening? When I go to a music concert, I don't have people whispering in my ears, at least as part of the music. I don't want insects either when enjoying music. So, in music reproduction, sounds near one ear aren't very relevant or even desired! Also, with speakers it is practically impossible to generate this effect (inside an anechoic chamber it can be done using crosstalk cancellation), and since music is mostly mixed for speakers...



gregorio said:


> I’m sure there are number of other scenarios which occur IRL and are therefore natural. I’m sure most people experience such a scenario at least once and quite a large number experience it several times or even fairly commonly. Unlike for example the bizarre acoustic experience of being in an anechoic chamber but even then, individual responses vary dramatically.
> I’m very surprised you couldn’t come up with any IRL scenarios to answer your own question. This further indicates you have a “blind spot” for anything which may falsify your theory or contradict your personal perception!


Sure, millions perhaps, but hardly any of them are related to music listening. I can't come up with any related to music listening, but of course I can come up with many non-music-related ones (e.g. when I touch my ear I get the rubbing noise at one ear ==> huge ILD).



bigshot said:


> Crossfeed doesn’t simulate the things that are responsible for giving sound a feeling of space. Reverbs and digital delays do that. Crossfeed reduces channel separation, which may be desirable if you don’t like ping pong stereo.


Crossfeed helps me to interpret the cues in the recording that give feeling of space. I have explained the process many many times, but nobody wants to understand.

Crossfeed tells me the sound was intended for speakers and makes sense in that context. Without crossfeed, the presentation only makes sense if the intent was binaural sound.


----------



## bigshot (Oct 17, 2022)

I linked an example of a song that had elements hard panned left and right that was clearly mixed for speakers. You said that crossfeed didn’t improve it or add spatiality, I believe.


----------



## 71 dB

bigshot said:


> I linked an example of a song that had elements hard panned left and right that was clearly mixed for speakers. You said that crossfeed didn’t improve it or add spatiality, I believe.


I don't think I said crossfeed didn't improve anything. That song contained very few spatial cues in the mix, so obviously crossfeed can't do much about it. Crossfeed helps me to interpret spatial cues in the recording, but if there aren't any/many, there isn't much to interpret. However, even if crossfeed didn't help much with the spatiality, it did make the sound less annoying and fatiguing for me, so there were still benefits.


----------



## bigshot (Oct 17, 2022)

Crossfeed doesn’t add spatial cues. It simply blends channels. All the spatiality is in the recording itself.


----------



## gregorio

71 dB said:


> Acoustic crossfeed of direct sound creates ITD of about 250 µs.


No it doesn’t. Acoustic crossfeed of a direct sound can result in an ITD of anything from about 0µs to around 800µs. It depends on the horizontal position of the direct sound AND the morphology of the individual’s skull and body. Furthermore, ITD is not a single number, it varies non-linearly with frequency due to skull diffraction by up to about 150µs.


71 dB said:


> Crossfeed mimics this 250 µs. So there is that similar aspect about ITD.


Exactly, crossfeed mimics 250µs, which is not at all similar to actual ITD. It’s like saying a stopped clock mimics a functioning clock because it’s right twice a day!


71 dB said:


> I am tired of you twisting things and terms in ways that are always as unfavorable to crossfeed as possible.


No, I am presenting the facts which falsify your theory and explanation of why crossfeed supposedly works.


71 dB said:


> I think I am much more honest: I admit what crossfeed can't do. I admit its limitation,


Yes, you do admit what crossfeed can’t do and it’s limitations BUT, you then spend innumerable pages trying to explain why those limitations are either just irrelevant to start with or how crossfeed overcomes them using false/made-up assertions that it mimics or simulates what happens IRL (or with speakers). That is NOT “much more honest”, it is far less honest!!


71 dB said:


> I see the positive and the negative. You want to see only the negative.


That’s not true. I’ve stated that crossfeed works very well for a few people, acceptably well for a group of people and even that I prefer crossfeed in a very limited number of cases.


71 dB said:


> Crossfeed helps me to interpret the cues in the recording that give feeling of space.


Generally crossfeed makes it far more difficult for me to interpret the cues in the recording, gives me a more mono and therefore a lesser feeling of space.


71 dB said:


> I have explained the process many many times, but nobody wants to understand.


That’s because my perception, the perception of many/most others and loads of scientific evidence (such as HRTFs for example) falsifies your explanation of the process, regardless of how many times you repeat it!!


71 dB said:


> How common are those in music listening?


Now who’s changing the goal posts? We do quite commonly experience large ILD in real life. And, I’ve already given you examples where we do even with music; anyone who’s ever played a flute, violin, tuba, some other instruments or in a closely spaced ensemble. You could add, children being sung to by their mother with one side of their head near her mouth, anyone who’s ever listened to a radio on their shoulder or a mobile with the speaker close to one ear and there’s probably some other scenarios. There are various potential real life scenarios (that are not incredible rarities) which falsify your assertion that high ILDs are “unnatural”. Again, you’re just making up false assertions to justify your “theory”/explanation.

G


----------



## bigshot

The problem isn’t you not acknowledging the limitations of crossfeed, it’s you attributing things to it that are completely unrelated. Spatial cues with headphones are 100% recorded in the mix. Crossfeed doesn’t change that. And speakers create a spatial soundstage that is completely different than narrowing the stereo spread with crossfeed. Yet you keep talking about crossfeed “enhancing” or creating spatiality, and comparing aspects of headphone listening with crossfeed to speakers.


----------



## castleofargh

bigshot said:


> Crossfeed doesn’t add spatial cues. It simply blends channels. All the spatiality is in the recording itself.


That's not correct. The *perception* of space is a complex system and our HRTF (or whatever parts of it) is always involved in our interpretation. Mixing filtered channels with a delay is very likely to alter that interpretation, be it in a positive or a negative way. I get what you're trying to say about space and the room defining it, but even that is hard to keep alive when you consider mono mics all over said room getting mixed together into an album.

About the angle thing before, I'm also not convinced you're right. With walls further away, the angle for the reverb would stay unchanged only if both the speaker and the listener were at the same distance from the wall, and that's not the case. It's not significant IMO, as just about all the other variables of the signal bouncing off the wall will change in ways more significant to the brain (we're talking about only a few degrees for the direct sound, so it's unlikely to be a big deal for secondary cues), but I thought I shouldn't let 71 dB get called out only when wrong and never when correct.


----------



## 71 dB (Oct 17, 2022)

gregorio said:


> No it doesn’t. Acoustic crossfeed of a direct sound can result in an ITD of anything from about 0µs to around 800µs. It’s depends on the horizontal position of the direct sound AND the morphology of the individual’s skull and body. Furthermore, ITD is not a single number, it varies non-linearly with frequency due to skull refraction by up to about 150µs.


You know I mean the situation where the speakers are at a ±30° angle and the listener doesn't turn his/her head, but your style is to create a scenario where what I said doesn't apply. If the speakers move around, or the listener turns his/her head, then yes, but I wasn't talking about such situations. Crossfeed generates the ITD at frequencies up to about 800 Hz. Below that frequency ITD is quite constant. Above 800 Hz the importance of ITD fades with frequency and ILD becomes more important; 800-1600 Hz is the transition band.



gregorio said:


> Exactly, crossfeed mimics 250µs, which is not at all similar to actual ITD. It’s like saying a stopped clock mimics a functioning clock because it’s right twice a day!


At all?? Don't be so difficult. You know 250 µs is a proper approximation of the ITD. My wide-crossfeed uses 640 µs, but that's another story. Not using crossfeed doesn't give ANY ITD, because nothing is crossfed in any way! Crossfeed gives the 250 µs, which is close to acoustic crossfeed.



gregorio said:


> No, I am presenting the facts which falsify your theory and explanation of why crossfeed supposedly works.


You are nitpicking. Your explanations would make sense if I claimed crossfeed does everything perfectly, but I don't claim that. To some of us crossfeed is able to improve headphone sound a lot DESPITE being VERY imperfect, AND there is actually a scientific explanation (the one that led to the original idea of crossfeed: simulating the acoustic crossfeed of speakers) as to why this is the case. I have tried to explain this for 5 years now and I am fed up with you and the other crossfeed-skeptics here.



gregorio said:


> Yes, you do admit what crossfeed can’t do and it’s limitations BUT, you then spend innumerable pages trying to explain why those limitations are either just irrelevant to start with or how crossfeed overcomes them using false/made-up assertions that it mimics or simulates what happens IRL (or with speakers). That is NOT “much more honest”, it is far less honest!!


They are irrelevant for me to enjoy music! Did it ever occur to you that people might enjoy headphone sound without using state-of-the-art HRTF processing? To me crossfeed is good enough, but headphone sound as it is is NOT good enough.

Also, crossfeed does simulate certain things. Simulation can be very coarse. There is no threshold of how accurate simulation has to be to be called simulation. So, you are using semantics to discredit me and I don't like that AT ALL!! I am very honest here.



gregorio said:


> That’s not true


You say that to everything I say. Maybe I am a machine that generates untrue claims? So funny. Who takes you seriously at this point? Some fools maybe...



gregorio said:


> . I’ve stated that crossfeed works very well for a few people, acceptably well for a group of people and even that I prefer crossfeed in a very limited number of cases.


Yes you have, but the next thing you say is that my scientific explanations are false, but they are not.



gregorio said:


> Generally crossfeed makes it far more difficult for me to interpret the cues in the recording, gives me a more mono and therefore a lesser feeling of space.


That is interesting. Thanks to you I now know that people like you exist and that my way of hearing crossfeed is not common to all people.



gregorio said:


> That’s because my perception, the perception of many/most others and loads of scientific evidence (such as HRTFs for example) falsifies your explanation of the process, regardless of how many times you repeat it!!


No. Scientific evidence tells us we can do things even better than crossfeed. We both agree HRTF is better than crossfeed, but my claim is crossfeed is better than nothing (and good enough for me to enjoy headphone sound).



gregorio said:


> Now who’s changing the goal posts? We do quite commonly experience large ILD in real life. And, I’ve already given you examples where we do even with music; anyone who’s ever played a flute, violin, tuba, some other instruments or in a closely spaced ensemble. You could add, children being sung to by their mother with one side of their head near her mouth, anyone who’s ever listened to a radio on their shoulder or a mobile with the speaker close to one ear and there’s probably some other scenarios. There are various potential real life scenarios (that are not incredible rarities) which falsifies your assertion that high ILDs are “unnatural”. Again, you’re just making up false assertions to justify your “theory”/explanation.
> 
> G


How on earth should I hear a flute or tuba at my ear when I listen to a recording of Elgar's 2nd Symphony? I am not supposed to play in the orchestra! I am supposed to sit in the audience 15 meters from the orchestra! Large ILD is not unnatural in every context, but it is unnatural in the context of music listening. There is also the matter of spectrum. When we hear large ILD in real life, it tends to involve mid/high frequencies (insects flying by, mother singing a lullaby). Low frequencies generally require large vibrating surfaces, and if such an object is near the head, it is in the near field*, meaning the ILD isn't that large. In fact, (closed) headphones are the best way to generate large ILD at low frequencies, and that's also the danger and the motivation to use crossfeed.

* The vibrating surface is so big that even the nearer ear isn't that near on average; only a small part of the surface is very near. That limits the ILD.


----------



## gregorio

71 dB said:


> Crossfeed generates the ITD at frequencies up to about 800 Hz. Below that frequency ITD is quite constant.


Is this what you call “quite constant”?




Taken from “_On the variation of interaural time differences with frequency_” - Victor Benichoux, Marc Rebillat, Romain Brette, JASA, 2016. 


71 dB said:


> Maybe I am a machine that generates untrue claims? So funny. Who takes you seriously at this point? Some fools maybe...


Clearly, from the data in the peer reviewed paper above, ITD does vary by frequency below 800Hz, by as much as 200μs. Your claim of it being constant is untrue, a fixed 250μs delay does NOT simulate what actually occurs so you are apparently an untrue claim generating machine and your insult applies to yourself!!

I can’t be bothered with the rest of it, it’s just more of the same. 

G


----------



## 71 dB (Oct 17, 2022)

gregorio said:


> Is this what you call “quite constant”?
> 
> Taken from “_On the variation of interaural time differences with frequency_” - Victor Benichoux, Marc Rebillat, Romain Brette, JASA, 2016.
> 
> ...


It is less constant than what I have understood it to be. I need to study this paper. Thanks for the link!


----------



## bigshot (Oct 17, 2022)

castleofargh said:


> That's not correct.
> 
> About the angle thing before, I'm also not convinced you're right. With walls further away, the angle wouldn't change for the reverb


If you notice, I said "Crossfeed does not add SPATIAL CUES." It adds nothing that will create spatiality. If it triggers something in someone's brain and makes them hear something they were tuning out before, that is part of their perception, not the actual sound being produced, nor is it necessarily a universal reaction to crossfeed.

You are correct though that delays and reverbs do add synthetic spatial cues. But simply reducing channel separation doesn't introduce a delay. Creating a coherent artificial ambient space is very complex. I bet Gregorio could speak for many pages talking about all the elements that go into creating a realistic sense of space in a mix with delays and reverbs.

As for angles, we weren't talking about reflections because headphones don't have reflections. The comment I was replying to said that as a room size got larger, the angle of the direct sound from the speakers changes, but crossfeed keeps that angle consistent. That isn't true because the triangle of the speakers and listening position scales up to maintain the same angles. And crossfeed does not change the angle. It's still 90 degrees off the sides of the head.

The problem here was that I answered one comment and then it was replied to with alterations to the context of the original post I was replying to. (A "yes but" was added after I answered...) When the conversation wiggles and morphs like that, it's difficult to follow. It would be easier if there was an acknowledgment of my point, and then introduce the next point as a new argument, but that isn't how things work around here sometimes. No acknowledgments... one point morphs into another slightly different one when it's proven wrong. Set context on "blend".


----------



## 71 dB (Oct 17, 2022)

Looks like I have been wrong about ITD. What I have said about it has been based on Woodworth's formula: ITD = a/c * ( 𝞱+sin𝞱 ), but it never occurred to me that it only applies to high frequencies (above 1 kHz or so). At lower frequencies the ITD is larger: instead of 250 µs, about 400 µs corresponds to a 30° angle of sound. I apologize to everyone for writing falsehoods about ITD. I really thought I had it right, but apparently that wasn't the case! Oh dear! Do I know/understand anything?
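The two numbers mentioned here can be checked against Woodworth's high-frequency formula and its low-frequency counterpart from the acoustics literature. A quick sketch, assuming the usual textbook values of head radius a ≈ 8.75 cm and c = 343 m/s:

```python
import math

A = 0.0875  # head radius in metres (a common textbook value)
C = 343.0   # speed of sound in m/s

def itd_woodworth(theta_deg):
    """High-frequency Woodworth ITD: (a/c) * (theta + sin(theta))."""
    t = math.radians(theta_deg)
    return A / C * (t + math.sin(t))

def itd_low_freq(theta_deg):
    """Low-frequency limit: (3a/c) * sin(theta)."""
    t = math.radians(theta_deg)
    return 3 * A / C * math.sin(t)

hf_us = itd_woodworth(30) * 1e6  # ~261 µs, close to the "250 µs" figure
lf_us = itd_low_freq(30) * 1e6   # ~383 µs, close to the "400 µs" figure
```

So for a 30° source the high-frequency formula lands near 250 µs while the low-frequency limit lands near 400 µs, which is the discrepancy being discussed.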


----------



## gregorio (Oct 17, 2022)

71 dB said:


> Instead of 250 µs, about 400 µs corresponds 30° angle of sound.


Yes, providing the speakers are at 30° and the music only contains freqs at about 300Hz. If it contains 700Hz freqs, that corresponds to about 150µs ITD or 150Hz freqs correspond to about 500µs, it’s variable by freq! Plus, there’s still other factors associated with this single aspect; Those graphs represent an average of 130 people’s HRTFs, so it doesn’t show that each ear (of the same person) has a different HRTF/HRIR, with a different ITD vs freq curve and before you argue this doesn’t affect your enjoyment of music: Reliable evidence suggests this difference between the ITD of each ear plays a significant role in sound elevation discrimination. Crossfeed ignores all of this and a significant number of other factors and again, we’re just discussing the direct sound, reflections add another whole bunch of factors!

G


----------



## 71 dB

gregorio said:


> Yes, providing the speakers are at 30° and the music only contains freqs at about 300Hz. If it contains 700Hz freqs, that corresponds to about 150µs ITD or 150Hz freqs correspond to about 500µs, it’s variable by freq! Plus, there’s still other factors associated with this single aspect; Those graphs represent an average of 130 people’s HRTFs, so it doesn’t show that each ear (of the same person) has a different HRTF/HRIR, with a different ITD vs freq curve and before you argue this doesn’t affect your enjoyment of music: Reliable evidence suggests this difference between the ITD of each ear plays a significant role in sound elevation discrimination. Crossfeed ignores all of this and a significant number of other factors and again, we’re just discussing the direct sound, reflections add another whole bunch of factors!
> 
> G


Yeah, now I know, but despite all this, I genuinely do prefer using crossfeed most of the time. It manages to do something important for me.


----------



## gregorio

71 dB said:


> Yeah, now I know, but despite all this, I genuinely do prefer using crossfeed most of the time.


I’ve never once questioned your perception or preference, just your explanation/theory of why crossfeed works. What crossfeed does is not natural or related to real life, with the single exception of ILD, it pretty much messes up everything. However, HPs without crossfeed is also pretty messed up because the mixes are primarily designed for speaker playback. So what we’re left with is two differently messed up HP presentations, how our personal perception responds to them and personal preference. There’s no scientific justification for preferring crossfeed beyond it just being how your personal perception responds to it. There is a scientific justification for a personalised set of HRTFs + reverb + head tracking though.

G


----------



## 71 dB

gregorio said:


> I’ve never once questioned your perception or preference, just your explanation/theory of why crossfeed works. What crossfeed does is not natural or related to real life, with the single exception of ILD, it pretty much messes up everything. However, HPs without crossfeed is also pretty messed up because the mixes are primarily designed for speaker playback. So what we’re left with is two differently messed up HP presentations, how our personal perception responds to them and personal preference. There’s no scientific justification for preferring crossfeed beyond it just being how your personal perception responds to it. There is a scientific justification for a personalised set of HRTFs + reverb + head tracking though.
> 
> G


Unfortunately HRTFs + reverb + head tracking is not available for me, but crossfeed is.


----------



## bigshot

71 dB said:


> Yeah, now I know, but despite all this, I genuinely do prefer using crossfeed most of the time. It manages to do something important for me.


Perfectly fine. No argument there. Signal processing is a personal preference.


----------



## sander99

71 dB said:


> My HRTF was measured once





71 dB said:


> Unfortunately HRTFs + reverb + head tracking is not available for me, but crossfeed is.


What happened to your HRTF measurement results? They got lost? Or you didn't get access to them?


----------



## 71 dB

bigshot said:


> Perfectly fine. No argument there. Signal processing is a personal preference.


Cool, but now there is the question of what am I doing on this discussion board? I can tell my preferences, but so what?

I have a lot of work ahead of me getting more familiar with the more precise aspects of ITD and how to implement them in my Nyquist plugins.


----------



## 71 dB

sander99 said:


> What happened to your HRTF measurement results? They got lost? Or you didn't get access to them?


No access. My HRTFs are intellectual property of the Nokia Corporation.


----------



## gregorio (Oct 18, 2022)

71 dB said:


> I have a lot of work ahead of me getting more familiar with the more precise aspects of ITD and how to implement them in my Nyquist plugins.


I think you’re flogging a dying horse, for a couple of reasons:

The “precise aspects of ITD” are far more complex than a simple static delay time, as discussed it varies by freq but also, it has to be integrated with other factors, such as ILD obviously but also with the quite complex and variable freq response created by skull and pinnae diffraction and absorption. If you don’t do this, FR and ITD effectively fight with each other and can cancel each other out. Experiments show that a sound which should appear to be panned say hard left according to its ITD, will be perceived by some/many to be centrally panned if the skull/pinnae FR for a hard right panned sound is also applied. So if you ignore this or implement it with some simple static value, the results will be unpredictable from listener to listener and you’ll have done a lot of work but be back where you started. And, if you implement it correctly, then it’s no longer a crossfeed plug-in, it’s a HRTF plug-in!

However, HRTFs have a serious problem. While the processing power in mobile devices is now sufficient to cope with implementing them, the problem remains that generic HRTFs don’t work well for a lot of people and getting usable personalised HRTFs is very far from practical for consumers. However, there seems to be considerable research going on in this area. For example, there’s a published paper which uses simple skull measurements with a tape measure, correlates them with a database of HRTFs to come up with a function to modify a generic HRTF and thereby personalise it to a significant degree. However, I suspect most of the ongoing research isn’t being published because it’s being done in/by the corporate world. Apple filed a patent for using ear scans created by the iPhone’s “true depth” camera to personalise generic HRTFs. Dolby recently allowed personalised HRTFs to be used in their Atmos RMU (for content creators), implying something in the pipeline for consumers down the road. I would think that if Apple (and probably Dolby) are working on this, Google and some other big guns probably are too. So, I think we’ll see a lot more products incorporating HRTF/“spatial audio” functionality and incremental improvements to this generic HRTF problem, probably starting in the fairly near future.

G


----------



## 71 dB (Oct 18, 2022)

gregorio said:


> I think you’re flogging a dying horse, for a couple of reasons:
> 
> The “precise aspects of ITD” are far more complex than a simple static delay time. As discussed, it varies with frequency, but it also has to be integrated with other factors, such as ILD obviously, but also with the quite complex and variable freq response created by skull and pinnae diffraction and absorption. If you don’t do this, FR and ITD effectively fight with each other and can cancel each other out. Experiments show that a sound which should appear to be panned, say, hard left according to its ITD will be perceived by some/many to be centrally panned if the skull/pinnae FR for a hard right panned sound is also applied. So if you ignore this or implement it with some simple static value, the results will be unpredictable from listener to listener and you’ll have done a lot of work but be back where you started. And, if you implement it correctly, then it’s no longer a crossfeed plug-in, it’s an HRTF plug-in!
> 
> ...


Well, my plugins are for music making. I have used my simple ITD and ILD models for that, but now I can, with some effort, make those models more accurate. I don't deal with dead horses. I have my own young pony. It is far from dead.

I don't amplitude pan; I use the combination of ILD and ITD to come up with panning that works better for headphones.
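A combined ILD+ITD pan law can be sketched like this; the maximum delay and level difference below are illustrative placeholders, not the values used in 71 dB's plugins:

```python
import math

def pan_itd_ild(azimuth_deg, max_itd_s=0.00066, max_ild_db=10.0):
    """Toy ILD+ITD panner: returns (delay_L, delay_R, gain_L, gain_R).

    Positive azimuth means the source is to the right, so the left
    (far) ear's signal is both delayed and attenuated. The maxima are
    assumed values for illustration only.
    """
    theta = math.radians(azimuth_deg)
    itd = max_itd_s * math.sin(theta)      # signed interaural delay
    ild_db = max_ild_db * math.sin(theta)  # signed level difference
    delay_l = max(itd, 0.0)
    delay_r = max(-itd, 0.0)
    gain_l = 10 ** (-max(ild_db, 0.0) / 20)
    gain_r = 10 ** (-max(-ild_db, 0.0) / 20)
    return delay_l, delay_r, gain_l, gain_r
```

At 0° both channels are identical; panning hard right delays the left channel by the full ITD and drops it by the full ILD, unlike amplitude panning, which changes level only.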


----------

