# Recording Impulse Responses for Speaker Virtualization



## jaakkopasanen

Speakers can be virtualized (simulated) very convincingly on headphones with impulse responses and convolution software. This, however, requires the impulse responses to be measured for the individual listener and headphones. I'm trying to achieve this.

I made impulse response recordings by playing a sine sweep on the left and right speakers separately and measuring it with two ear-canal-blocking microphones. I turned these sweep recordings into impulse responses with the Voxengo Deconvolver software. I also measured my headphones the same way and compensated their frequency response with an EQ by inverting the frequency response as heard by the same microphones. The impulse responses are quite good, and certainly better than any other out-of-the-box impulse response I have ever heard. However, they suffer a little from coarseness: the sound signature is a bit bright and sound localization is a tad fuzzy.

When I listen on headphones to a music recording that was captured with the mics in my ears while the music played on speakers, the result is practically indistinguishable from actually listening to the speakers. My impulse responses and convolution come close but still leave me wanting better. I think the main problem might be the noise introduced by my motherboard's mic input.

I thought about using a digital voice recorder like the Zoom H1n for the job. This model can do overdub recordings with zero delay between playback and recording, making it possible to record each speaker separately. I'm also assuming that the mic input on this thing is quite a bit better than my PC's motherboard.

Does it seem like a sensible idea to use a voice recorder, and are there better options? Can you think of sources of error other than the noise from the mic input? Should I do some digital noise filtering on the sine sweep recording before running the deconvolution? Any other ideas for improving the impulse responses?


----------



## 71 dB

What about phase? Phase/time delay between the ears? How long a sweep do you use? Doubling the sweep duration theoretically increases the signal-to-noise ratio by 3 dB. Is the system linear enough? Increasing the level of the measured signal increases the signal-to-noise ratio too, but easily introduces more distortion (loudspeakers!). Broadband noise can be filtered out of the response using a sweeping band-pass filter that follows the frequency of the sweep. If this filtering is done in "filtfilt" style (first normally and then again on the time-reversed response), no additional phase shifts are introduced. The filter should be asymmetrical, steep towards higher frequencies, because you don't expect frequencies above f when measuring frequency f, but you do expect lower frequencies, because they are still decaying away.
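The zero-phase "filtfilt" trick is easy to demonstrate in Python; here a toy one-pole low-pass stands in for the sweeping band-pass filter (all parameters are made up for illustration):

```python
import numpy as np

def onepole_lowpass(x, a):
    # a minimal one-pole IIR low-pass: y[n] = (1 - a) * x[n] + a * y[n - 1]
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc = (1.0 - a) * v + a * acc
        y[i] = acc
    return y

def filtfilt_onepole(x, a):
    # "filtfilt" style: filter forward, then run the same filter over the
    # time-reversed result and reverse again; the phase shifts of the two
    # passes cancel, leaving only the squared magnitude response
    return onepole_lowpass(onepole_lowpass(x, a)[::-1], a)[::-1]

fs = 48000
t = np.arange(fs) / fs
# a narrow Gaussian pulse standing in for (a slice of) a sweep response
x = np.exp(-0.5 * ((t - 0.5) / 0.001) ** 2)

forward_only = onepole_lowpass(x, 0.9)   # single pass: the pulse peak lags behind
y = filtfilt_onepole(x, 0.9)             # two passes: the peak stays put (zero phase)
```

`scipy.signal.filtfilt` does the same forward-backward pass for arbitrary IIR filters; the sweeping, asymmetric band-pass itself would still need a time-varying filter on top of this idea.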



jaakkopasanen said:


> When listening on headphones a music recording which was recorded with the mics in ears while playing the music on speakers the result is practically indistinguishable from actually listening to speakers.



Tell that to bigshot.


----------



## gregorio

There are a number of issues and potential issues with what you're trying to achieve, most of which are related to the fact that the impulse response you're recording largely isn't an impulse response of your speakers. It's an impulse response of your room (acoustics) and of your recording chain at least as much as it is of your speakers, and additionally, it's a response to your "impulse". In no order of preference/importance:

1. Your room acoustics: Human perception works on the basic principle of actually discarding/ignoring most of the sensory input we receive and combining what's left, along with prior experience/knowledge, to create a "perception". There's simply too much sensory data for our brain to process in any reasonable amount of time and evolution has come up with this method as the most practical way of making sense of the world around us. In effect, our perception is an educated guess/judgement of what's really occurring, rather than an accurate representation of it and music (and pretty much all art) absolutely depends on this difference between reality and perception. In other words, certain aspects of what our senses are telling us can be (and usually are) changed somewhat by our brain, in order to make sense of it all. Hence why optical and aural illusions exist and why two different witnesses to an event can truthfully describe that event significantly differently. In the particular scenario you're talking about we have a sensory conflict, the brain will typically change (the perception of) some of that sensory input to remove the conflict and make sense of it. Let's take the example of a recording of a symphony; when you play that recording back on your speakers, what you're hearing is the acoustic space of a large concert hall but your knowledge and eyes are telling you that you're in (for example) your sitting room, we have a sensory conflict. The music producer and engineers compensate for this as much as possible (as they too are creating the recording/mix/master in acoustic spaces which are significantly different to a concert hall) but nevertheless, there's still somewhat of a conflict which the brain will try to resolve. 
So even with a theoretically perfect recording, perfect speakers and perfect home acoustics, the reproduced recording is never going to sound the same as the original performance in the concert hall, although it might be close enough to fool some/many/most people. In addition, what you're attempting to achieve is a faithful reproduction of your speakers/room in a different room/environment, an additional sensory conflict. In other words, even if it were possible to achieve a perfect impulse response and convolution, when listening to your symphony recording on your headphones, you're effectively hearing a concert hall in your sitting room while your knowledge and eyes are telling your brain that you're actually in (say) a bus! How convincing is that going to be? Maybe almost completely convincing to you personally, but who knows. It might be interesting to see if it's more convincing listening on your convolved headphones when you're actually in your sitting room (or whatever room your speakers are in) than when listening in a significantly different environment. I assume it would be more convincing but whether that makes enough of a difference to you personally I obviously can't say.

2. Your recording chain: Microphones, being transducers, are relatively inaccurate. Measurement mics are the most accurate as far as frequency response is concerned, but unless you buy very expensive ones, even measurement mics are still relatively inaccurate. A more favoured solution these days is to buy cheaper measurement mics, have a "correction file" created by a calibration lab for each mic, and use software which allows you to apply them. However, this is not a perfect solution, and additionally, measurement mics typically gain their frequency accuracy at the expense of a lot more self-noise, which is why measurement mics are never used in studios for recording music. Music mics have far less self-noise but are typically far more inaccurate; each brand/model of mic has its own "colouration", which is desirable for commercial music/audio recording but not when what you're specifically trying to record is the "colouration" itself, of a different transducer (your speakers)! There's also the issue of "off-axis" mic response. Then there's the rest of the chain: the mic pre-amps and noise introduced by, say, your computer/motherboard. The Zoom H1n should have little/no motherboard noise but it does have rather poor mic pre-amps. It's effectively cheap consumer-grade electronics, which is OK for a quick, dirty record of an event but a long way from higher-end pro units. Of course all of this is relative; if your recordings suffer from a great deal of motherboard noise then the H1n could be a considerable improvement.

3. Your impulse: Does a sine sweep fully characterise your speakers? How do your speakers respond to sharp, loud transients rather than a continuous sine wave?

The things I've mentioned above can each be fairly insignificant on their own or quite noticeable, depending on what equipment you've got and your personal perception. Additionally, even if they are relatively insignificant on their own, the cumulation of them might not be.

G


----------



## Speedskater

To expand on the above great reply:
a sine-sweep test will only tell you about the sustained response of the system, which is mostly room response. This type of test tells you little about the transient or impulse response of the speakers (their direct response).


----------



## jaakkopasanen (Oct 13, 2018)

71 dB said:


> What about phase? Phase/time delay between the ears? How long a sweep do you use? Doubling the sweep duration theoretically increases the signal-to-noise ratio by 3 dB. Is the system linear enough? Increasing the level of the measured signal increases the signal-to-noise ratio too, but easily introduces more distortion (loudspeakers!). Broadband noise can be filtered out of the response using a sweeping band-pass filter that follows the frequency of the sweep. If this filtering is done in "filtfilt" style (first normally and then again on the time-reversed response), no additional phase shifts are introduced. The filter should be asymmetrical, steep towards higher frequencies, because you don't expect frequencies above f when measuring frequency f, but you do expect lower frequencies, because they are still decaying away.
> 
> 
> 
> Tell that to bigshot.



Not sure what you mean by those phase and time delay between ears questions. They should be captured correctly by having the mics in the ears, no? A sweeping bandpass filter is probably just the thing I was looking for. I think the Smyth Realizer A16 does this, because it sounds like the sweeps from different channels overlap somewhat. Controlling the bandpass filter's steepness (the bass side of it) would allow control of the reverberation time, if I haven't misunderstood. It might be possible to have better room acoustics in the impulse response than in the real room. Thanks for the hint!



gregorio said:


> There are a number of issues and potential issues with what you're trying to achieve, most of which are related to the fact that the impulse response you're recording largely isn't an impulse response of your speakers. It's an impulse response of your room (acoustics) and of your recording chain at least as much as it is of your speakers, and additionally, it's a response to your "impulse". In no order of preference/importance:
> 
> 1. Your room acoustics: Human perception works on the basic principle of actually discarding/ignoring most of the sensory input we receive and combining what's left, along with prior experience/knowledge, to create a "perception". There's simply too much sensory data for our brain to process in any reasonable amount of time and evolution has come up with this method as the most practical way of making sense of the world around us. In effect, our perception is an educated guess/judgement of what's really occurring, rather than an accurate representation of it and music (and pretty much all art) absolutely depends on this difference between reality and perception. In other words, certain aspects of what our senses are telling us can be (and usually are) changed somewhat by our brain, in order to make sense of it all. Hence why optical and aural illusions exist and why two different witnesses to an event can truthfully describe that event significantly differently. In the particular scenario you're talking about we have a sensory conflict, the brain will typically change (the perception of) some of that sensory input to remove the conflict and make sense of it. Let's take the example of a recording of a symphony; when you play that recording back on your speakers, what you're hearing is the acoustic space of a large concert hall but your knowledge and eyes are telling you that you're in (for example) your sitting room, we have a sensory conflict. The music producer and engineers compensate for this as much as possible (as they too are creating the recording/mix/master in acoustic spaces which are significantly different to a concert hall) but nevertheless, there's still somewhat of a conflict which the brain will try to resolve. 
So even with a theoretically perfect recording, perfect speakers and perfect home acoustics, the reproduced recording is never going to sound the same as the original performance in the concert hall, although it might be close enough to fool some/many/most people. In addition, what you're attempting to achieve is a faithful reproduction of your speakers/room in a different room/environment, an additional sensory conflict. In other words, even if it were possible to achieve a perfect impulse response and convolution, when listening to your symphony recording on your headphones, you're effectively hearing a concert hall in your sitting room while your knowledge and eyes are telling your brain that you're actually in (say) a bus! How convincing is that going to be? Maybe almost completely convincing to you personally, but who knows. It might be interesting to see if it's more convincing listening on your convolved headphones when you're actually in your sitting room (or whatever room your speakers are in) than when listening in a significantly different environment. I assume it would be more convincing but whether that makes enough of a difference to you personally I obviously can't say.
> 
> ...



1. I've noticed this myself. Listening to a PRIR with the speakers far away is quite a weird experience when sitting close to a computer monitor. The brain doesn't really know how to reconcile the auditory cue for distant sounds with the visual cue for near ones. It works a lot better when both match. This could work the other way around too, making the impulse response sound better than it actually is, if it's recorded in the exact same spot where the listener sits when using headphones. I recorded the impulse response from my own speakers sitting in my regular spot, so it's quite easy for my brain to believe that what it's hearing is the real deal, because it has been conditioned for some time now to this environment having that sound.

2. If I'm not wrong, the frequency response of the microphones doesn't really matter. I'm recording the frequency response of my headphones with the same mics in my ears, and whatever that result is, it also contains the frequency response of the mics. So when I compensate for the headphone frequency response with EQ, I'm actually compensating for the mics' response too. The mic pre-amps on my motherboard are probably beyond abhorrent. The Zoom H1n isn't known for its mic pre-amps but should be significantly better than the motherboard; anything is, really. If the Zoom H1n pre-amps are not sufficient, I will try separate pre-amps and feed the signal into the recorder via line input.

3. Maybe it's good to make clear that I'm not actually trying to imitate speakers perfectly. The goal is to have realistic audio reproduction with headphones.
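The argument in point 2 can be sanity-checked numerically: the mic response appears in both measurements and divides out. A sketch with made-up random magnitude responses multiplied on a frequency grid:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 512
# made-up magnitude responses on a frequency grid (purely illustrative)
speaker = 1.0 + 0.5 * rng.random(n_bins)   # speakers + room, S(f)
mic     = 1.0 + 0.5 * rng.random(n_bins)   # in-ear mics, M(f)
hp      = 1.0 + 0.5 * rng.random(n_bins)   # headphones, H(f)

speaker_meas = speaker * mic   # sweep of the speakers as seen by the mics
hp_meas      = hp * mic        # sweep of the headphones with the same mics
eq           = 1.0 / hp_meas   # EQ that inverts the headphone measurement

# playback chain: convolved recording (S*M) -> EQ -> headphones (H)
heard = speaker_meas * eq * hp  # = S*M / (H*M) * H = S; the mics cancel out
```

The cancellation only holds if the mics sit in the same position for both measurements, which is the caveat raised later in the thread.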



Speedskater said:


> To expand on the above great reply:
> a sine-sweep test will only tell you about the sustained response of the system, which is mostly room response. This type of test tells you little about the transient or impulse response of the speakers (their direct response).



If I understand correctly, this would actually be a good thing. I don't really want the speakers' transient response in there messing with my music experience. I think one could get significantly better transient response from headphones than from speakers (at least for the price), so this speaker/room virtualization could sound better than the recorded speakers.

I'm also thinking that it might be possible to do better room correction for the virtual room being simulated than for the actual physical room. Room correction can only go so far, because not all acoustic phenomena are easy to handle with DSP alone, but since headphones don't have those problems (standing waves etc.), it could well be that the impulse response can be edited to have better room acoustics than would normally be possible.


----------



## 71 dB

jaakkopasanen said:


> 1. Not sure what you mean by those phase and time delay between ears questions. They should be mapped correctly by having the mics in ears, no?
> 2. Sweeping bandpass filter is probably just the thing I was looking for. I think Smyth Realizer A16 does this because it sounds like the sweeps from different channels are overlapping some.
> 3. Controlling the bandpass filter steepness (bass part of it) would allow control of reverberation time if I haven't understood things wrong. It might be possible to have better room acoustics in the impulse response than in the real room. Thanks for the hint!


1. Yes. I'm not sure what I meant when asking it… 
2. Hopefully it helps…
3. Yes. A logarithmic sweep creates a response with a "naturally" shortened reverberation time, whereas a linear sweep creates an "unnaturally" decaying, shortened reverberation (faster initial decay and then slower decay).


----------



## bigshot

Can you simulate 5.1, 7.1 or Atmos?


----------



## RRod

I use my Roland R-05 all the time for this task; works just fine. As 71dB noted, you can both extend the sweep and up your speaker volume to get better SNR. After deconvolution you will see miniature IRs before the main IR that correspond to the orders of harmonic distortion; these can be windowed-off to get to the linear part of the decomposition. As far as errors, one of the big ones can be binaural mic placement, so it's good to do several sweeps and pick one that seems reasonable. You might check out the Aurora plugins, made by a researcher who is big into this kind of thing. After it's all said and done you won't get something that sounds perfect. For me, just having something clamping on my head seems to prevent a real sense of sitting in front of speakers, and you won't have head movement accounted for. But satisfactory results don't take a huge amount of effort.
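RRod's windowing trick can be sketched end to end with a Farina-style log sweep and a simulated second-order nonlinearity (sweep length, distortion amount and window sizes here are all made-up illustration values):

```python
import numpy as np

fs = 48000
T = 2.0                      # sweep length in seconds (made up)
f1, f2 = 20.0, 20000.0
R = np.log(f2 / f1)
t = np.arange(int(fs * T)) / fs

# exponential (log) sine sweep
sweep = np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1))
# Farina inverse filter: time-reversed sweep with a -6 dB/octave envelope
inv = sweep[::-1] * np.exp(-t * R / T)

# pretend the speaker adds a little 2nd-order distortion
recorded = sweep + 0.1 * sweep ** 2

# deconvolve by convolving with the inverse filter (via FFT)
n = 2 * len(t)
ir = np.fft.irfft(np.fft.rfft(recorded, n) * np.fft.rfft(inv, n))

main = int(np.argmax(np.abs(ir)))   # peak of the linear impulse response
# the harmonic-distortion IRs land *before* the main peak; window them off
linear_ir = ir[main - 32 : main + 4096]
```

The k-th harmonic's miniature IR lands roughly T·ln(k)/ln(f2/f1) seconds before the main peak, which is exactly what makes it easy to cut away.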


----------



## castleofargh

on the SNR idea, it can be interesting to have some notions of which loudness levels the mic can handle without distorting much. I've ruined a bunch of recordings/measurements myself trying to get the very best SNR possible without thinking of checking for distortions.


----------



## jaakkopasanen

bigshot said:


> Can you simulate 5.1, 7.1 or Atmos?



This is my main use case. I'm using HeSuVi on Windows, and that can do 7.1. I should be able to turn my stereo setup into 7.1 when I receive the Zoom H1n, because overdubbing allows me to record the channels separately. Atmos, however, is out of reach because Windows doesn't have a decoder for it, and I'm not super optimistic that there will be one in the near future.



RRod said:


> I use my Roland R-05 all the time for this task; works just fine. As 71dB noted, you can both extend the sweep and up your speaker volume to get better SNR. After deconvolution you will see miniature IRs before the main IR that correspond to the orders of harmonic distortion; these can be windowed-off to get to the linear part of the decomposition. As far as errors, one of the big ones can be binaural mic placement, so it's good to do several sweeps and pick one that seems reasonable. You might check out the Aurora plugins, made by a researcher who is big into this kind of thing. After it's all said and done you won't get something that sounds perfect. For me, just having something clamping on my head seems to prevent a real sense of sitting in front of speakers, and you won't have head movement accounted for. But satisfactory results don't take a huge amount of effort.



Very cool! From what I've read, the Roland R-05 has quite good mic pre-amps but doesn't have overdubbing. That windowing thing seems like an idea worth trying. What mics are you using? I have the Sound Professionals SP-TFB-2. Are you compensating your headphones? I would imagine having the mics in different positions when measuring the room impulse response and when measuring the headphones would be problematic. But doing multiple sweeps is a good tip. And thanks a lot for the Aurora plugins link; maybe I don't have to write all the code myself after all.



castleofargh said:


> on the SNR idea, it can be interesting to have some notions of which loudness levels the mic can handle without distorting much. I've ruined a bunch of recordings/measurements myself trying to get the very best SNR possible without thinking of checking for distortions.



Indeed. However, I don't know how easy it would be to detect loudspeaker distortion from the sine sweep measurement.


----------



## RRod

jaakkopasanen said:


> Very cool! From what I've read, the Roland R-05 has quite good mic pre-amps but doesn't have overdubbing. That windowing thing seems like an idea worth trying. What mics are you using? I have the Sound Professionals SP-TFB-2. Are you compensating your headphones? I would imagine having the mics in different positions when measuring the room impulse response and when measuring the headphones would be problematic. But doing multiple sweeps is a good tip. And thanks a lot for the Aurora plugins link; maybe I don't have to write all the code myself after all.



Yeah I have binaurals from sound professionals as well, I think the 'masters' series of that same set. I am not currently doing a full inversion to match my speakers to headphones, simply because the software I use and like doesn't have a convolver. I use the speaker measurements to adjust my headphone EQ and to set a crossfeed, though. I DO plan to use the Kirkeby filter in Aurora to help make a 1024 tap filter to add to my speaker chain on my miniDSP.


----------



## castleofargh

jaakkopasanen said:


> Indeed. However, I don't know how easy it would be to detect loudspeaker distortion from the sine sweep measurement.


manufacturer information, people with the same mic and a good experience of it, or yourself after you go and measure a bunch of sound sources at various levels and find out some pattern in the distortions that correlate well with levels at certain frequencies.
but I don't want to worry you for nothing, using speakers at a distance in fairly realistic conditions, you probably don't get all that loud and most mics should handle that just fine as they were made for such conditions. I was just bringing it up to make clear that while SNR is an obvious concern for us all, the ideal measurement conditions are probably not going to be when we manage to push the speakers to 1337dB ^_^. be it for the mics or for the speakers.


----------



## Glmoneydawg

I feel like SNR is likely to be the least of my concerns at 1337 dB... noise complaints from other cities would be up there, along with the local electricity supplier melting down.


----------



## bigshot

I'd rather listen to noise than signal at that level! At least it would be quieter!


----------



## 71 dB

RRod said:


> After deconvolution you will see miniature IRs before the main IR that correspond to the orders of harmonic distortion; these can be windowed-off to get to the linear part of the decomposition.


For this to work the sweep must be logarithmic. Linear sweeps have the distortion products scattered all over the impulse response. Also, the miniature distortion IRs are folded in time to the end of the whole IR (but they are easy to window away anyway).


----------



## jaakkopasanen

castleofargh said:


> manufacturer information, people with the same mic and a good experience of it, or yourself after you go and measure a bunch of sound sources at various levels and find out some pattern in the distortions that correlate well with levels at certain frequencies.
> but I don't want to worry you for nothing, using speakers at a distance in fairly realistic conditions, you probably don't get all that loud and most mics should handle that just fine as they were made for such conditions. I was just bringing it up to make clear that while SNR is an obvious concern for us all, the ideal measurement conditions are probably not going to be when we manage to push the speakers to 1337dB ^_^. be it for the mics or for the speakers.



My idea here is not only to get this working for myself but also to create a process, a guide and tools so that pretty much anybody can do it themselves. In that sense it's not realistic to ask a random headphone enthusiast to know or learn how to detect distortion in the impulse responses, or even to read through manufacturer documents. What I would really like to have is an almost one-click solution which processes the sine sweeps, analyzes them, warns the user if things didn't go as expected and of course produces the output impulse responses, which can be used directly with different convolution software. Something like what I did with my AutoEQ project. Of course the user still needs to own the mics at minimum, so it's not quite possible to have that low a level of commitment from the user.


----------



## jaakkopasanen (Nov 11, 2018)

I finally found a bit of time to do measurements using the Zoom H1n instead of the motherboard's mic input. The improvement is significant. I have been testing with music from Spotify and with videos on YouTube for speech. Previously, speech was a good way to notice differences between headphones and speakers, but now, with the new measurements, even speech is very deceptive. I haven't yet done any of the proposed filtering tricks, so it's just pure convolution.

I'm not sure what the biggest sources of error are right now. My headphones (HiFiMAN HE400S) are certainly not in the same class as my speakers (Dynaudio Focus 110). I'm also not sure if my headphone compensation is very good; at the very least it doesn't have separate compensation for the left and right channels. Maybe just by upgrading the headphones and doing better compensation I would achieve flawless results.

Measurement is just stereo for now, but I'll try to get multi-channel working later. A disappointing discovery was that the Zoom H1n creates overdub tracks with the original track mixed in, so it's not so simple to use. Maybe I can subtract the original sweep track from the mix. The other option is to adjust the latencies between recordings by hand, but that might prove quite difficult, because a delay of a few samples can mess it up.
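The subtraction idea can be sketched like this, with white noise standing in for the sweep and a made-up latency; cross-correlation finds the lag before subtracting:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
orig = rng.standard_normal(n)          # the original track on the recorder
new = 0.3 * rng.standard_normal(n)     # the newly recorded sweep material
delay = 137                            # unknown playback latency (made up)

# an overdub mix: a delayed copy of the original plus the new take
mix = np.concatenate([np.zeros(delay), orig])[:n] + new

# find the original's lag inside the mix by cross-correlation
corr = np.correlate(mix, orig, mode="full")
lag = int(np.argmax(corr)) - (n - 1)

# subtract the re-aligned original; what's left is the new recording
aligned = np.concatenate([np.zeros(lag), orig])[:n]
residual = mix - aligned
```

As suspected, with real recordings a sub-sample offset or a gain mismatch would leave some of the original behind, so the subtraction is unlikely to be this clean in practice.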

This thing certainly has huge potential. I usually don't care too much about the sound stage with music, but this is already so good that I would not go back to listening to music on headphones without it if I have the choice.


----------



## Joe Bloggs

Very interesting!  Let me know if you get any ideas for automating this process.  I have a programmer friend here working on related stuff that might help.


----------



## jaakkopasanen

I'm making progress. Today I recorded my first 7.0 HRIR. The 7 channels are recorded two at a time because I only have a stereo speaker setup. The first round was for front left and right in the normal listening position. The 2nd was recorded while facing 120 degrees to the right, so that the left speaker becomes back left and the right speaker becomes side left. The 3rd round was the opposite, i.e. facing 120 degrees to the left, making the left and right speakers side right and back right respectively. The last round was the center channel, recorded while facing the left speaker directly.

PCs don't have consistent recording latency, meaning the channel delays come out arbitrary, which is not usable for binaural speaker virtualization. However, I came up with a method to correct the channel delays in post-processing. This correction syncs the channels by the ear which receives the signal first: the left ear for left-side speakers and the right ear for right-side speakers. The delay between the left speaker and the left ear should be the same as between the right speaker and the right ear. I then looked up head breadth for adults on Wikipedia and made some calculations about how much earlier the signal from each speaker should arrive at the near ear compared to the center of the head. The center of the head is the reference point where all speakers should have equal delay. Variance in head breadths is so small that it has next to no effect on the actual channel delays. I'm using the following delays at a sampling rate of 48 kHz: 5 samples for front and back, 10 samples for center and 0 samples for the side channels. This seems to be working wonders. I still need to do some testing to confirm that the break points will have good imaging (between front left and side left, front right and side right, and back left and back right).
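For what it's worth, the 5/10/0-sample figures can be reproduced with a simple near-ear geometry sketch (the head breadth, speaker azimuths and speed of sound below are assumptions, not measurements):

```python
import math

def channel_delay_samples(azimuth_deg, fs=48000, head_breadth=0.15, c=343.0):
    # Woodworth-style sketch: a speaker at azimuth theta reaches the near
    # ear about r*sin(|theta|) sooner than the centre of the head, so the
    # side speakers (90 deg) arrive earliest and every other channel is
    # delayed to match them.
    r = head_breadth / 2.0
    advance = r * math.sin(math.radians(abs(azimuth_deg))) / c
    max_advance = r / c   # a side speaker at 90 degrees
    return round((max_advance - advance) * fs)

# assumed 7.0 azimuths: centre 0, front +/-30, side +/-90, back +/-150
delays = {az: channel_delay_samples(az) for az in (0, 30, 90, 150)}
```

With a 15 cm head breadth this gives 10 samples for the centre, 5 for front and back, and 0 for the sides at 48 kHz, matching the values above.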

So now I have a very good HRIR for movies and games. Music is so sensitive to sound signature that it still needs more work. I need to work more on headphone compensation because I think I made mistakes with it last time. Now I have better tools for it, but it still remains a mystery. I was under the impression that I should equalize the headphones flat as heard by the in-ear mics, but this doesn't seem to be the case. I actually got good results just by using a normal Harman-like target curve for the Sennheiser HD 518. Fortunately I have some things I can try to improve this.

The harshness problem I mentioned earlier might be caused by incorrect headphone compensation or an insufficient sampling rate. I read somewhere that the sampling rate needs to be at least 6 times the highest frequency in the sine sweep recording for deconvolution to work well. I have so far done my recordings at 48 kHz, so this could explain something bad happening to the highs.



Joe Bloggs said:


> Very interesting!  Let me know if you get any ideas for automating this process.  I have a programmer friend here working on related stuff that might help.



I created a GitHub repository for this project: https://github.com/jaakkopasanen/Impulcifer. Maybe your friend would like to take a look. Is he working on, or does he have knowledge about, deconvolution algorithms? I added a simple FFT deconvolution to the repository, but it's not as good as Voxengo Deconvolver. I would like to learn the best way to go about deconvolution. Eventually I want to have a full pipeline from measurements to HRIR output and headphone compensation.
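One common refinement over plain FFT division is a small regularization term, so that frequencies where the sweep has little energy don't blow up. A minimal sketch (an assumption about what a better deconvolver might do, not how Voxengo Deconvolver actually works):

```python
import numpy as np

def deconvolve(recording, sweep, eps=1e-8):
    # regularized spectral division: H = Y * conj(X) / (|X|^2 + eps)
    # eps keeps bins with near-zero sweep energy from exploding
    n = len(recording) + len(sweep) - 1
    Y = np.fft.rfft(recording, n)
    X = np.fft.rfft(sweep, n)
    H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)
    return np.fft.irfft(H, n)

# sanity check: deconvolving a known convolution recovers the IR
rng = np.random.default_rng(0)
sweep = rng.standard_normal(2048)            # noise standing in for a sweep
ir_true = np.zeros(256)
ir_true[10] = 1.0                            # direct sound (made-up IR)
ir_true[60] = -0.4                           # one reflection
rec = np.convolve(sweep, ir_true)
ir_est = deconvolve(rec, sweep)[:256]
```

Larger `eps` trades recovery accuracy for noise robustness; with real, noisy sweeps a frequency-dependent regularization (strong outside the swept band) is the usual next step.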


----------



## jaakkopasanen

I messed up the headphone compensation I mentioned in the previous reply: I forgot to deconvolve the logarithmic sine sweep recording into an impulse response. A logarithmic sine sweep has an unbalanced frequency response, so it cannot be used as is. Now I did the compensation again and got a very moderate compensation curve. This sounds a lot better, although the frequency response for the left ear has a massive dip in the 8 to 10 kHz range, asking for a lot of positive gain, which does not do wonders for the sound. I'm using the right ear's EQ for both. Not ideal, but it serves me fine for the time being.

I also tested this new compensation by listening (without speaker virtualization) to music I recorded earlier with my in-ear mics, and that sounds just wonderful. This too indicates that the problems with harshness are not in the headphone compensation but in the impulse response estimation. Hopefully increasing the sampling rate produces better results.


----------



## Joe Bloggs (Dec 9, 2018)

I'm not able to understand all of what goes above but some comments:
1. Interaural time delay for one speaker seems much more important to get right than the time delay between different speakers to the close ear, which in real life would vary wildly between different setups unless you have the speakers in a perfect circle around you.
2. For interaural time delay I'm simply using a stereo binaural mic and recording the sweep response of both ears at the same time from one speaker sweep.  This gives me accurate interaural time delays.
3. Also, pretty sure that the recording latency isn't a variable "whatever" but some fixed "whatever" which means that if you place your sweep at a fixed time on your DAW and route it to a different speaker each time, then if you hit "record" on that and crop the same timed stereo chunk of output from each recording for deconvolution, you'd also have a set of recordings that are not only timed correctly between the two ears but also timed correctly relative to each speaker's distance from your ears, if that tickles your fancy.
4. I hesitate to declare the same if you record long chunks of audio including multiple sweeps from different speakers then try to crop the same timed recordings relative to each sweep, because of clock drift.
5. Assuming you're not compensating the response of your HRTF recordings from each speaker to your binaural mics in any way: yes, you want to finalize by putting headphones over your ears, recording the transfer function from your headphones to your binaural mics, and then neutralizing that, to make your headphones simulate canalphones stuffed where the binaural mics were.  But in practice I found the effect less than ideal :/

If you add me at joe0bloggs on Skype, facebook or Telegram we could discuss and further share our findings more quickly and thoroughly


----------



## jaakkopasanen

Joe Bloggs said:


> I'm not able to understand all of what goes above but some comments:
> 1. Interaural time delay for one speaker seems much more important to get right than the time delay between different speakers to the close ear, which in real life would vary wildly between different setups unless you have the speakers in a perfect circle around you.
> 2. For interaural time delay I'm simply using a stereo binaural mic and recording the sweep response of both ears at the same time from one speaker sweep.  This gives me accurate interaural time delays.
> 3. Also, pretty sure that the recording latency isn't a variable "whatever" but some fixed "whatever" which means that if you place your sweep at a fixed time on your DAW and route it to a different speaker each time, then if you hit "record" on that and crop the same timed stereo chunk of output from each recording for deconvolution, you'd also have a set of recordings that are not only timed correctly between the two ears but also timed correctly relative to each speaker's distance from your ears, if that tickles your fancy.
> ...



1. True
2. I'm doing the same
3. Latency varies between recordings, unfortunately. This is a consequence of CPU scheduling by the operating system, which allocates CPU cycles to different processes in a non-deterministic fashion.
4. As far as I understand, measuring all speakers in one go (assuming you have access to a 7-speaker setup) should keep all channel delays in sync. I don't know if clock drift would affect this in practice. Problems come when measuring speakers individually: if the delay to the front left speaker differs from the delay to the front right speaker, the stereo image will be unbalanced. Measuring two speakers at a time can mitigate this, but there will still be problems between different measurement runs. I suspect what I'm doing is actually better than anything else, because this way the channel delays stay in sync even if the speakers are not set up in a perfect circle.
5. This is my approach. In theory it should work perfectly, because whatever the ear canal transfer function is, it is there both when listening to speakers and when listening to headphones. For my most recent experiment it worked well in practice too. One proposed alternative is to estimate the frequency response by comparing 1/3-octave band noises against a single fixed band, say around 500 Hz, and adjusting the levels to equal loudness. This would be repeated with speakers and then with headphones. I'm going to try this at some point and report back. It could make it possible to compensate IEMs, which is not possible otherwise. Ideally everything would be measured at the ear drum, but that's not feasible I'm afraid.
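To add to point 3: variable latency can also be compensated after the fact by cross-correlating each capture against a reference channel. A toy numpy sketch of the idea (the function is my own illustration, not my actual measurement code):

```python
import numpy as np

def align_to_reference(reference, recording):
    """Estimate the delay of `recording` relative to `reference`
    via cross-correlation and return the delay-compensated signal."""
    corr = np.correlate(recording, reference, mode='full')
    delay = int(np.argmax(corr)) - (len(reference) - 1)
    if delay >= 0:
        return recording[delay:]          # recording started late: trim
    return np.concatenate([np.zeros(-delay), recording])  # started early: pad

# toy demo: the same click captured with two different start latencies
fs = 48000
reference = np.zeros(fs)
reference[100] = 1.0
late = np.zeros(fs)
late[357] = 1.0                           # same click, 257 samples extra latency
aligned = align_to_reference(reference, late)
```

With sweeps instead of clicks the correlation peak is just as sharp, so the same trick works on raw sweep recordings.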


----------



## johnn29

Really excited to see this come to fruition. I've been toying with all the concepts recently so much of what you've made makes sense. Currently using SXFI/Waves and would love to get real measurements from my own ears into the mix. There are a few companies out there that will offer that service eventually - JVC had a product called *EXOFIELD* at CES 2018 but it's not available commercially yet.


----------



## jaakkopasanen

johnn29 said:


> Really excited to see this come to fruition. I've been toying with all the concepts recently so much of what you've made makes sense. Currently using SXFI/ Waves/SXFI and would love to get real measurements from my own ears into the mix. There's a few companies out there that will offer that service eventually - JVC had a product called *EXOFIELD *at CES2018 but it's not available commercially yet.



I would love to see this service become commercially available for consumers. That Exofield seems interesting, I'll have to look into it more later. There's also a Finnish startup called Hefio which seems to be doing this as well. The founder Marko Hiipakka has published studies about ear canal acoustics calibration and about measuring HRTFs with particle velocity microphones at the ear canal entrance, in a way that makes highly accurate estimation of the frequency response at the ear drum possible. Normally headphone calibration with the same ear-canal-blocking microphones is not exact, because headphones change ear canal resonances and ideally those need to be accounted for.

I bought Sennheiser HD 800 headphones and the listening fatigue I mentioned seems to have disappeared. So I guess it was mainly caused by my older HiFiMAN HE400S headphones, although I need to experiment more to confirm this. I haven't yet tried higher sampling rates, but that should in theory improve the impulse response estimation at high frequencies.


----------



## jaakkopasanen

It's been a while since my last update but the project is progressing well. I have the logarithmic sine sweep measurement process and code done, with phase-controlled ESS and inverse filter deconvolution. I also have headphone compensation implemented, and both the speaker measurement and the headphone measurement can be done in a couple of minutes. This is a very good baseline and the results are already very good, but I also have a lot of ideas for improving it. For example, I haven't implemented the tracking filter for noise reduction yet, and room correction is still missing, though that should be done soon since I have all the algorithms for it.

I just finished writing the guide for doing the measurements with Audacity and processing them with Impulcifer. Eventually I would like to have a website which guides the user, but for now this is a process that can be done by anyone who has the patience to read through the guide. Find the project and the guide here: https://github.com/jaakkopasanen/Impulcifer. All the ideas about improving the results can be found in the issues: https://github.com/jaakkopasanen/Impulcifer/issues

I would be thrilled to hear experiences from others if someone here has the required recording gear available and could take a couple of minutes to try this out. I'm super hyped about the results so far and about the potential which all the improvements hold.


----------



## johnn29

I posted on the OOYH thread but just so people here know - it's the real deal. Once it's matured a bit I'll post a review compared to Super X-Fi which has really improved.

The prospect of doing room correction in a virtual room sounds amazing - and that's the one thing missing for me. The simulated rooms from SXFI and OOYH have been set up better than my real room. If I could just get a flat frequency response and virtually reduce reverb it's gonna be a winner.

Just a big shame there's no open source DTS:X or Dolby Atmos decoder on the horizon. We're never going to get a decent virtual experience of those. Dolby Atmos for Headphones does the height channels but it sounds like crap.


----------



## johnn29 (Jul 16, 2019)

Just thought I'd give a quick update, after a lot of experimenting I finally got it working with a full 7.1 HRIR. Used the stereo sweep and finally figured out the center channel with the X for ignore.

The measurement process has a ton of issues due to its nature. I was accidentally pressing the inst/line button on the Behringer, which caused errors. I spent a total of, I think, 4 hours over a couple of days to get this working. Once the guide is fully written it won't take that long - but I was in a rush to get it working before a month off!

So far I've been going back and forth between SXFI and Impulcifer. Impulcifer is slightly better for me. There's going to be no contest when I can fix the frequency response of my room though - that's the only thing OOYH and SXFI have going for them.

I'm assuming that without headphone compensation I can specify a flat EQ for any set of IEMs and it'd have decent results. I'll report back on that. Obviously the binaural mics I've got have no calibration file, but it could be the only way to use Impulcifer with IEMs.

Seriously great work man! Working alone, you put out something better than Creative Labs and OOYH, and it's not vaporware like the Realiser.

Edit: Without HP compensation, a flat EQ computed via AutoEq works amazingly well with my Sony WI-1000X Bluetooth in-ear noise cancellers. I can EQ these to flat because my pair were measured on a GRAS coupler, so I think I'm getting excellent results.

Just remember to resample to 44.1 kHz for Bluetooth, because Impulcifer outputs at 48 kHz.
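A generic way to do that resample, sketched with scipy's polyphase resampler (just an illustration; any decent resampler will do):

```python
import numpy as np
from math import gcd
from scipy.signal import resample_poly

fs_in, fs_out = 48000, 44100
g = gcd(fs_in, fs_out)
up, down = fs_out // g, fs_in // g      # 147 / 160 is exact for 48k -> 44.1k

hrir_48k = np.random.randn(fs_in)       # stand-in for a 1-second HRIR channel
hrir_44k = resample_poly(hrir_48k, up, down)
```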

Virtually being able to take my main theater room with me on a plane is something I just can't get over!


----------



## jaakkopasanen

I'm glad to hear that you got it working and are enjoying the results. I've been quite busy lately so I had to rush the guide, and I haven't had time to write a proper guide for surround measurement.

One option for IEMs is to use AutoEq to equalize your around ear headphones to sound like the IEMs and use that equalization profile when measuring headphone compensation with the around ear headphones. I tried this quickly by equalizing my HD 800 to the natural frequency response of CustomArt FIBAE 3. Results were obviously not as good as with HD 800 but still better than any HRIR not personalized for me. This still needs a bit more investigation to get the best out of it.


----------



## johnn29

I'll give that a shot. The flat EQ is sounding really good to me. I'll try playing with the 3khz band that you mentioned on Github.

Oratory measured my XM3s and Grado gw100 so I can compare the results from the Impulcifer headphone compensation Vs AutoEQ to flat with a Gras as the source on some overears.


----------



## johnn29

So I tried it out - flat EQ vs headphone compensation. The headphone compensation renders a more realistic sound than EQ to flat, although it's very close.

I also have some extra info on the IEM for HRTF from oratory that I'll post on a GitHub thread.

What I'm finding so strange is how your brain works. I did a bunch of testing on my laptop in a cafe between the Dolby Headphone cinema room with a flat EQ and Impulcifer. My tests there made me conclude that I actually preferred the sound signature of Dolby Headphone, and it had a crazy realistic center channel for an artificial HRIR. Impulcifer did have more accuracy in the rear channels, but it just didn't sound right. That was on Friday.

Today, sitting in my actual theater room, where I took the measurements, it's the complete opposite. I don't like the sound of the Dolby Headphone room and the center channel just isn't right. But Impulcifer sounds amazing and like my actual speakers. I'm even enjoying music on it which did not sound right in the cafe.

Psychoacoustics is weird. I suspect my brain knows what my theater room sounds like and is correcting for it. But when I take that same sound into an unfamiliar environment, it knows that the center channel should be coming from a distance away too - whereas in the cafe I was on a laptop, so Dolby's HRIR was sufficient for that.

It makes it really hard to do traditional A vs B testing and figure out which you prefer. I just didn't appreciate how important the room was, and even the visual cues from the speakers in the room, etc.


----------



## castleofargh

johnn29 said:


> So I tried it out - flat EQ vs headphone compensation. The Headphone Compensation renders a more realistic sound than EQ to flat. Although it's very close.
> 
> I also have some extra info on the IEM for HRTF from oratory that I'll post on a GitHub thread.
> 
> ...


vision is our dominant sense, it's using a bigger area of our brain. and then maybe there's also habit: how your room sounds, how you're used to that and now consider it to be how things should sound. so it's very likely that you have enough reference of that room to simply want something as close as possible instead of other fancy simulations. I've told that anecdote a few times, where I used a laptop on the side feeding a bigger screen, and I was of course sitting in front of the screen using an external keyboard for a few months. I would often just use the tweeters of the laptop when watching a youtube video or some web radio as background while browsing or ruining my pictures with unskilled post processing. at some point, my brain started to place sound at the screen where the guy in the video was appearing. because, I assume, my brain still had more confidence in what I was seeing than in what I was hearing. and there have been so many such examples of brain plasticity, including the guys learning to live with glasses that inverted the image. it's nothing new. what's IMO very interesting is that if I decided to put on my headphone, I never even for a second felt that the sound was off center. my brain somehow had a clear understanding that the laptop sound was a one-off weirdness, and that using the headphone was another system with other audio rules. 
all that to say that something as basic as being in a given room, or having real speakers in your field of view, that can and probably does play a part in what you hear. then there may also be something about the reverb in the room, at a cafe I'm guessing you could perceive enough outside noises to capture a sense of the room's acoustic. it's both amazing and super frustrating to me TBH ^_^.


----------



## johnn29 (Jul 22, 2019)

It's also very loud in a cafe - I was using ANC with good tips, but maybe all the room information in the HRIR is very difficult for the brain to compute in loud environments. The Dolby synthesised HRIR sounds much more like a headphone in a quiet environment.

I've read that neural plasticity is huge in visual and aural acuity. There are those experiments where the shape of the pinnae was artificially changed, which made median-plane localisation very difficult, but after time the brain adapts. But I've never had it demonstrated to my own senses so quickly. I was convinced after hours of testing that I'd found the optimal HRIR. But nope!

I did something kind of similar with my 120" front projector and my 65" OLED TV. Because my TV is in a dedicated black-velvet-curtain-laden theater room and an electric screen rolls out over it, I can sit 1 m away from it and replicate the viewing angle of a real IMAX screen on my TV. After some time, because my brain has no visual cues that I'm watching a TV (it's surrounded by black velvet in a pitch-black room), the TV appears as large as the projection screen. The math works out: a 70-degree viewing angle is at 1 m on a 65" TV and at 2 m on the projector. But in an ordinary living room you see all sorts of cues that you're really just watching a small screen up close. Take those away, and the brain is fooled.

Another similar trick is closing one eye while watching a 2D image in a dark room. Eventually it looks like a 3D movie.

But it's funny - I was very much in the camp of objective measurements, so I always tried to A vs B and, where I could, ABX. But this has really made me realise it's not so simple.

Ultimately I think I'm going to have to go for some HRIRs based on seating distance from the screen for movies. Like one where the speaker is monitor/laptop distance away, and so on.

And with the A16 realiser finally shipping - I think many owners are going to experience a similar effect.

Vision's always been a big thing in audio, but I thought objective measurements saved me. When I got my first nice set of speakers (B&W 803s) I'd listen to them with the grilles off and just admire the look. They sounded better because they looked nice! No matter the measurements.


----------



## castleofargh

johnn29 said:


> It's also very loud in a cafe - I was using ANC in good tips but maybe all the room information in the HRIR is very difficult for the brain to compute in loud environments. The Dolby synthesised HRIR sounds much more like a headphone in a quiet environment.
> 
> I've read that neural plasticity is huge in visual and aural acuity. There's those experiments where the shape of the pinnae were artificially changed which made median localisation very difficult, but after time the brain adapts. But I've never had it demonstrated to my own senses so quickly. I was convinced after hours of testing that I'd found the optimal HRIR. But nope!
> 
> ...


yup, removing some cues sometimes ruins an effect, but sometimes it makes the remaining cues the only thing that matters and boosts their impact. what's a little annoying (or let's call it impracticable) is how some people apparently have a different ranking in their mind for various cues and how important they are for them. like how some apparently can never feel a mono sound at a reasonable distance if they don't have a visual cue of the sound source. 

about the A16, I expect that most, if not all of them are on summer holidays right now. but one day... when they finally deliver a product, indeed people might be surprised by how much some apparently trivial non audio stuff can affect the experience.

about the objective approach to sound, I don't see a problem. on one hand, if we're trying to figure out what is happening to the sound, that's something 100% related to objective reality. a sound wave isn't going to bend the other way without a proper and predictable physical reason.
on the other hand, we humans don't experience anything objectively, so of course a completely objective approach will usually fail to translate into what we feel. plus, we suck at trying to separate our senses in our head. if anything, experiences like those we have witnessed only reinforce my opinion that if we want to know about sound, or what a given device does, we need to properly control everything we can the scientific way. and after that, if I'm in my room and the color of my cable makes me enjoy music more (for whatever reason), I'll happily use that color for my cables and enjoy the subjective benefits. I don't think there is a conflict. I will just not be coming on the forum telling others to get the same color, claiming that it changes the sound a certain way because I feel like it does. and to be able to refrain from doing that, or simply just know better, I do need my controlled experiments. different things, different purposes. 
I'm honestly fine with the concept of objective and subjective reality, I only wish I had a better understanding of myself and humans in general. but that's curiosity, or because I realize that if I understood something better, I might be able to get even more of a kick out of my favorite music ^_^.


----------



## johnn29 (Jul 26, 2019)

Your points are all true - I think I'm going to research more about human preferences.

I've been tinkering some more with the tool and running my own room corrections. Like any Home Theater setup I have a crossover problem and I think the below will solve it with the HRIR I created.

The problem with my real room is that I experience crossover suck-out. I've got a dual-sub and KEF R300 setup that I cross over at 60 Hz. The problem is that around the crossover, due to cancellation, there's a deep dip in the frequency response. It's been known for a while that there's a distance trick you can run with Audyssey room correction to fill this gap in. It involves delaying the subwoofer output marginally so it arrives later than the suck-out. That has never sounded right to me, so it's something I live with in my real room.

The frequency response graphs from Impulcifer identified it - look at the huge dip at around 50 Hz.





Because it's in the bass range it's actually present on most of the speakers, and it's identical between ears. So I used WebPlotDigitizer to convert the image of the graph into a frequency response CSV that I could import into REW. REW's EQ calculator then generated a flat EQ to solve the suck-out. Export to text and import into Peace.



Now I think I have much cleaner bass.

The power of virtual room correction is going to be huge - we should be able to get properly flat frequency response across the entire range. I'm not smart enough to figure it out for the treble range where it varies with which ear you're looking at but in the bass range, where it's clear - this is a good way to fix it.

I'm very excited at the prospect of having this all automagically done by Impulcifer.


----------



## Joe Bloggs

johnn29 said:


> Your points are all true - I think I'm going to research more about human preferences.
> 
> I've been tinkering some more with the tool and running my own room corrections. Like any Home Theater setup I have a crossover problem and I think the below will solve it with the HRIR I created.
> 
> ...


There's no reason you need to delay the sub output unless it's actually hitting earlier than your mains. Rather it sounds like you need to invert the phase.  You can see the actual timings of various frequencies in REW by going to spectrogram view and activating the "Plot the peak energy curve" option.


----------



## johnn29

It's not a phase issue - I've got a great sub-only response with dual subs. It's when you run the fronts in conjunction with the sub that the issue appears. I've tried playing around with various phase angles to no avail. The vid describes it in more detail, and there are a ton of threads on AVS Forum, like this one, that go into detail. I did eventually hit a config that prevented the suck-out, but I've since re-arranged my room and haven't had time to tinker with it.

I think it's going to be a problem in any loudspeaker measurement in a room unless the system is truly full range and has been meticulously set up. The virtual room correction can easily take care of it - and the sub-80 Hz bass could even be artificially generated or pasted in from a perfect measurement, since it has no impact on HRTF or localisation.


----------



## jaakkopasanen

When I get room correction implemented there should be no need for full-range speakers or subwoofers. The bass frequencies can simply be boosted back in, and this is what I'm doing myself with a GraphicEQ filter in EqualizerAPO. Even small bookshelf speakers reproduce subbass frequencies; they are just heavily rolled off. But since the frequencies exist in the measurement, they can be equalized back to the correct level. A 40 dB bass boost might sound like a bad idea, but keep in mind that the impulse response, if not corrected, is what reduces the headphone's bass reproduction to the level of the bookshelf speakers, so it is quite safe to negate that effect; the final result contains only the bass boost the headphones themselves require. Of course this affects the signal-to-noise ratio in the subbass range, but I'm not too worried about that. Another option is to generate the bass frequencies in the sweep recording, but that might create a disconnect in the reverb.
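To illustrate, here is a crude Python sketch that writes such a boost as an EqualizerAPO GraphicEQ line. The corner frequency and gain are made-up illustration values; a real correction would of course follow the measured response:

```python
def bass_shelf_graphic_eq(freqs, gain_db, corner_hz=120.0):
    """Build an EqualizerAPO GraphicEQ line that boosts everything at or
    below corner_hz by gain_db and leaves 0 dB above it."""
    pairs = '; '.join(
        f'{f:g} {(gain_db if f <= corner_hz else 0.0):.1f}' for f in freqs
    )
    return 'GraphicEQ: ' + pairs

freqs = [20, 40, 60, 120, 250, 500, 1000]
line = bass_shelf_graphic_eq(freqs, 10.0)
# line == 'GraphicEQ: 20 10.0; 40 10.0; 60 10.0; 120 10.0; 250 0.0; 500 0.0; 1000 0.0'
```

The resulting line can be pasted straight into an EqualizerAPO config file.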

By the way @johnn29 you can actually use room correction DSP while measuring the HRIR, just remember to turn it off while measuring the headphones.


----------



## jaakkopasanen

Another idea I've been toying around with is virtual room correction without room measurements. It should be possible to correct the HRIR frequency response at least up to 1000 Hz or so, because below that point the HRTF has very minimal impact on the frequency response, or at least it should be quite consistent across individuals. Maybe I could extend this frequency limit by inspecting Ircam or other HRTF measurements and trying to find a good enough average HRTF frequency response. I know this won't take it past 5000 Hz in any case, because the individual variance is so large at high frequencies, but it might be very good for bass and decent for mids.


----------



## Joe Bloggs

johnn29 said:


> It's not a phase issue - I've got a great sub only response with dual sub. It's when you run the fronts in conjunction with the sub the issue appears. I've tried playing around with various phase angles to no avail. If you watch the vid it describes it in more detail or there's a ton of threads on AVSForums - like this one that go into detail. I did eventually hit a config that prevented the suck out but I've since re-arranged my room and haven't had time to tinker with it.
> 
> I think it's going to be a problem in any loud speaker measurement in a room unless it's truly full range and has been meticulously setup. The virtual room correction can easily take care of it - and the sub 80hz bass could even be artificially generated or pasted in from a perfect measurement since it has no impact on HRTF or localisation.


It's a phase issue between your subs and your mains.  If they are cancelling each other out at the said frequencies, inverting either the subs or the mains will make them sum up instead.  Use a phase switch instead of any phase knob.  A phase knob set to 180 does something quite different (and much more unpredictable) than a phase switch set to reverse.  If only a phase knob is available, try to use software to invert the sub.
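A toy numpy illustration of the point: when the sub arrives 180 degrees out of phase with the mains at some frequency, flipping its polarity turns cancellation into summation (an idealized single-tone example; real rooms are messier):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
f = 60.0                                  # a tone near the crossover region
mains = np.sin(2 * np.pi * f * t)
sub = np.sin(2 * np.pi * f * t + np.pi)   # sub 180 degrees out of phase

cancelled = mains + sub                   # destructive: near-silent
summed = mains + (-sub)                   # polarity flip: full summation
```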


----------



## jaakkopasanen

I did another test of IEM usage, but this time using the Harman over-ear 2018 target for the IEM instead of the Harman in-ear 2017-1 target. The results are exceptionally good despite the fact that Custom Art FIBAE 3 has that big gap between 5 and 7 kHz which is not compensated for. I honestly did not expect results this realistic. If I had to use only my FIBAE 3 for speaker virtualization from now on, I wouldn't even be disappointed. I definitely need to add this as an option when running Impulcifer. I suspect the problem previously was the steep drop after 8 kHz in the Harman in-ear target.


----------



## johnn29

Could you talk me through why you'd EQ to a Harman-type target? I just don't get why - I'd have thought all the HRTF-related frequency changes would already be there due to the HRIR, so flat would make the most sense. I'm getting great results using a flat EQ, but I'd just like to understand it a bit more.

I also find the headphone compensation has a hard time with treble-sibilant headphones like the DT990. I found the sound to be better using no headphone compensation but relying on oratory's measurements and a flat EQ. With the headphone compensation the DT990s still have that nasty treble that just isn't there in the room with the loudspeaker. Of course, that also comes down to headphone choice, but it's good to know it can be fixed.

Something of relevance from oratory over on reddit



> Me: Btw - Jaakko mentioned this "I'm fairly sure equalizing an ear simulator measured frequency response flat is not the way to go for HRTF. Ear simulator measurements include the 3 kHz peak which is caused by ear canal resonance and should always be there"
> 
> Oratory:
> That's true, but the 3 kHz peak is already there in the HRTF depending on how it was measured - some institutions measure HRTF at the EEP instead of at/near the DRP, meaning a transfer function accounting for the ear canal will have to be added. I've recently had my HRTF measured at the Austrian Academy of Sciences, and they did exactly that: place microphones at the EEP and then add the transfer function for the ear canal.
> ...


----------



## jaakkopasanen

johnn29 said:


> Could you talk me through why you'd EQ to a harman type target? I just don't get why - I'd have thought all the HRTF related frequency changes would already be there due to the HRIR. So flat would make most sense. I'm getting great results using a Flat EQ, but I'd just like to understand it a bit more.
> 
> I also find the headphone compensation has a hard time with treble sibilant headphones like the DT990. I found the sound to be better using no headphone compensation but relying on oratory's measurements and a flat EQ. With the headphone compensation the DT990's still have that nasty treble that just isn't there in the room with the loud speaker. Of course - that also comes down to headphone choice but it's good to know it can be fixed.
> 
> Something of relevance from oratory over on reddit



Perhaps I left some details out when I wrote about equalizing to the Harman target. I'm not actually equalizing the IEMs to the Harman target; I'm using the Harman target as a mutually shared reference point when generating equalization settings to make the IEMs sound like the over-ear headphones. This is how the frequency response "morphing" or "transfer" between two headphones works in AutoEq. I use the error curve of HD 800 as the error target for FIBAE 3. The error of HD 800 is the difference between its raw frequency response and the Harman target. Similarly, the error of FIBAE 3 is the difference between its raw frequency response and the Harman target. Previously I used the Harman in-ear target as the reference for FIBAE 3, but this caused problems that were fixed by using the Harman over-ear target. AutoEq has a parameter called --sound_signature for this; it should point to an existing equalization result CSV file which has the error data included. See here for more details: https://github.com/jaakkopasanen/AutoEq#using-sound-signatures
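Written out as arithmetic, the morphing works like this (all the dB numbers below are made up for illustration; AutoEq additionally smooths and limits the curves):

```python
import numpy as np

# toy dB responses on a shared frequency grid (all numbers illustrative)
harman       = np.array([ 6.0,  3.0,  0.0, -1.0,  2.0])   # target
hd800_raw    = np.array([ 2.0,  2.5,  0.0,  1.0,  5.0])
fibae3_raw   = np.array([ 7.0,  4.0,  0.5, -3.0,  1.0])

hd800_error  = hd800_raw  - harman   # HD 800's deviation from the target
fibae3_error = fibae3_raw - harman   # FIBAE 3's deviation from the target

# EQ (in dB) that moves the FIBAE 3's error curve onto the HD 800's:
eq = hd800_error - fibae3_error
```

Note that the target cancels out of the final difference algebraically; it still serves as the shared reference point both measurements are expressed against.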

Could you share the Impulcifer headphone compensation graphs for your DT990? I'd like to take a look at what is going on there. Maybe there's something I can do to improve the algorithm.

When the HRTF is measured at the EEP (ear canal entrance) it doesn't include the ear canal transfer function. There are two options: the first is to add it to the HRTF and equalize the headphones flat at the DRP (ear drum); the second is to leave it out of the HRTF and equalize the headphones flat at the EEP (this is how Impulcifer works). The second option is not ideal because it assumes (falsely) that over-ear headphones don't affect the ear canal transfer function. The first option is not feasible for most people because it requires specialized gear to measure the ear canal transfer function. Impulcifer is meant as an easy tool which can be used by normal people in their homes, so building support for ear canal transfer function measurements would not fit very well with that goal.


----------



## johnn29

Ah ok - I get it. It's the suggestion you made to me to morph the IEMs to sound like my already-corrected over-ears. I'll give that a shot tomorrow too.

I don't have the DT990 graphs now. I've started saving every pass I run with notes, so I know which headphones and setup each came from. Before, I was just blindly overwriting everything. I'm going to do some more measurements tomorrow, along with raising the rear speakers, and will submit GitHub issues after confirming them.


----------



## johnn29

Some more thoughts after using Headphones as my main sound source for the last couple of weeks as I'm in an airbnb.

- Taking two measurements was a good idea. I took one from my normal listening/watching position, about 1.8-2 m away from my speakers. I took another 1 m away (quasi-anechoic?). The one further away is so much better for music - I guess I love those side wall reflections. It opens up the sound stage dramatically. That's the kind of thing you just don't process in real life, because there's no way to do an A vs B that quickly. The close ones have little reverb - locations are pinpoint. That's good for Atmos movies that are properly mixed in 7 channels (you gain 7.1 with Atmos even though Windows can't do height). But for regular TV shows that don't have great surround mixes, the reverb is welcome.

- Speaker virtualisation is a game changer for personal VR theaters. I have a Goovis Cinego - it simulates a large screen at 20 m distance. The synthesised HRTFs just feel so wrong when the screen is that far away. With my 2 m measurement I could swear the sound was coming from an acoustically transparent screen. It's really made me enjoy watching movies on that now.

- The real headphone compensation is superior to a flat EQ from GRAS measurements. It just gets the treble right.


----------



## Joe Bloggs

You know this sh!t is deep when you've been working on the same stuff for like 3 years and still can't follow @jaakkopasanen 's version of the idea


----------



## Hooknej

I've been messing with and enjoying Impulcifer for the last few weeks and wanted to share some thoughts, but the BLUF is that this is the real deal and if you take some time to do things right you will be rewarded with the best speaker recreation through headphones this side of the Smyth Realiser systems.

- I have the Realiser A16 pre-ordered and while waiting have gone down the rabbit hole trying to get something close. I have tried or purchased OOYH, the Audeze Mobius, Windows Sonic, Dolby Atmos, HeSuVI... basically everything except the Creative SXFI. Impulcifer is far and away the best recreation of listening to my sound system in my living room. I frequently have to remove my headphones to make sure the speakers aren't playing. It's amazing.

- I used the Sound Professionals SP-TFB-2 and Roland CS-10EM binaural microphones, recording each through both my PC's motherboard audio and a Zoom H1n. Using the Zoom vs onboard made a large difference in imaging, in putting the speakers where they 'belong' in the room through headphones. Using the Sound Professionals mics pushed things over the edge and brought the headphone sound closer to the actual speakers. Going back and forth in the HeSuVI presets shows a huge difference. Buy the Sound Professionals mics and use the Zoom as the bare minimum starting point for the recordings; the quality of recording equipment makes a big difference in the final results.

- I am tech literate, but don't spend a lot of time with the Windows CMD interface, so the installation and use instructions for Impulcifer were a little bit intimidating. Overall though, jaakkopasanen's instructions are great and if you have any problems he is very responsive and seems willing to help. Overall, it wasn't hard but a GUI one day would be nice!

- Lastly, I am currently using my Sennheiser HD598s while I wait for my Stax to arrive - I can't wait to see how much further I can push things with the upgraded headphones.

Anyways, I just wanted to say thanks to jaakkopasanen for an awesome program and for making me question whether I actually needed to spend all of that money on the realiser.....


----------



## jaakkopasanen

Hooknej said:


> I've been messing with and enjoying Impulcifer for the last few weeks and wanted to share some thoughts, but the BLUF is that this is the real deal and if you take some time to do things right you will be rewarded with the best speaker recreation through headphones this side of the Smyth Realiser systems.
> 
> - I have the Realiser A16 pre-ordered and while waiting have gone down the rabbit hole trying to get something close. I have tried or purchased OOYH, the Audeze Mobius, WIndows Sonic, Dolby Atmos, HeSuVI... basically everything except the creative SXFi. Impulcifer is far and away the best recreation of listening to my sound system in my living room. I frequently have to remove my headphones to make sure the speakers aren't playing. It's amazing.
> 
> ...



Thanks a lot for the feedback. There is a bit of a barrier to getting started with Impulcifer, so I greatly appreciate anyone trying it out and telling me about the experience.

It's very good to know that the recording gear had such a big impact on the impulse response quality. I haven't gotten around to experimenting with that aspect much.

There will be a graphical user interface in the form of a website, but first I'm going to get the algorithm right, at least implement room correction.


----------



## Hooknej

jaakkopasanen said:


> Thanks a lot for the feedback. There is a bit of a starting barrier with Impulcifer so I greatly appreciate anyone trying it out and telling about the experience.
> 
> It's very good to know that the recording gear had such a big impact on the impulse response quality. I haven't got around experimenting with that aspect too much.
> 
> There will be a graphical user interface in form of a website but first I'm going to get the algorithm right, at least implement room correction.


No problem! I was also surprised by how much of a difference the recording gear made, but it's very obvious when switching between the HeSuVI presets that were generated by each iteration.

As good as things sound now, I can't wait to see what room correction does - it really is incredible to me how well this works, especially when you compare it to the $4K retail price of the Realiser. Thanks again for what you're doing and I'll keep testing new releases as they get pushed to GitHub and let you know how things go.


----------



## Dixter

Tried again to install this on Windows 10 and can't get it done properly... for some reason I'm not catching on how to get past this part:


> Go to Impulcifer folder
> cd C:\path\to\Impulcifer-master
> 
> Create virtual environment
> virtualenv venv
> 
> Activate virtualenv
> venv\Scripts\activate
> 
> Install required packages
> pip install -r requirements.txt
> 
> Verify installation
> python impulcifer.py -H
> 
> When coming back at a later time you'll only need to activate virtual environment again
> 
> cd C:\path\to\Impulcifer-master


----------



## sander99 (Sep 3, 2019)

I get the following error, so what and where is 'git' and why isn't it in my PATH?

(venv) C:\Sander\Impulcifer\Impulcifer-master>pip install -r requirements.txt
Collecting git+https://github.com/jaakkopasanen/autoeq-pkg (from -r requirements.txt (line 5))
  Cloning https://github.com/jaakkopasanen/autoeq-pkg to c:\users\laptop\appdata\local\temp\pip-req-build-86uzywa_
  Running command git clone -q https://github.com/jaakkopasanen/autoeq-pkg 'C:\Users\Laptop\AppData\Local\Temp\pip-req-build-86uzywa_'
ERROR: Error [WinError 2] The system cannot find the file specified while executing command git clone -q https://github.com/jaakkopasanen/autoeq-pkg 'C:\Users\Laptop\AppData\Local\Temp\pip-req-build-86uzywa_'
ERROR: Cannot find command 'git' - do you have 'git' installed and in your PATH?

Edit:
Ok, I made a mistake, I didn't download AutoEQ.zip until now, but the reason is this:
(@jaakkopasanen : a little mistake on your behalf I humbly think, but of course I should not criticise your generous free gift to us)
The link 'AutoEQ zip' in


> Download AutoEQ zip and exctract to a convenient location. Or just git clone if you know what that means.


(on https://github.com/jaakkopasanen/Impulcifer) points to 'Impulcifer-master.zip', which confused me into thinking AutoEQ.zip simply contained the same stuff I already downloaded (Impulcifer-master) and that I didn't need more.

I now found the correct link to 'AutoEQ.zip' in the sig of jaakkopasanen (so at the bottom of all his posts):
'AutoEQ - Equalization settings for 2500+ headphones'.

My PC is still busy unzipping AutoEQ.zip; afterwards I'll see if this fixed the 'git' problem above, but I am not sure that it's related.

Edit 2:
I guess I should just install AutoEQ first, and I now see on 'https://github.com/jaakkopasanen/AutoEq' that I should have installed Python 3.6 and not 3.7...

Edit 3:
…or does that not apply to using AutoEQ with Impulcifer (needing Python version 3.6, I mean)?

I did everything again with Python 3.6 and get different errors now.

(venv) C:\Sander\Impulcifer\Impulcifer-master>pip install -r requirements.txt
Fatal Python error: Py_Initialize: can't initialize sys standard streams
Traceback (most recent call last):
  File "c:\sander\impulcifer\impulcifer-master\venv\lib\abc.py", line 105, in <module>
ModuleNotFoundError: No module named '_abc'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "c:\sander\impulcifer\impulcifer-master\venv\lib\io.py", line 52, in <module>
  File "c:\sander\impulcifer\impulcifer-master\venv\lib\abc.py", line 109, in <module>
ModuleNotFoundError: No module named '_py_abc'

Edit 4:
I also tried Python 3.0.0 (I didn't notice at first that it was available) without result. Because I also didn't see a plain "Python 3" listed, I initially assumed I could just pick the latest 3.x.y.
Exactly which version should I use?


----------



## jaakkopasanen

Dixter said:


> Tried again to install this on Windows 10 and can't get it done properly...    for some reason I'm not catching on how to get past this part..  "
> 
> Go to Impulcifer folder
> cd C:\path\to\Impulcifer-master
> ...



The instructions weren't really clear that command prompt needs to be used. I made some changes and maybe it's a bit clearer now. Basically you search for *cmd* in Windows Start menu and an intimidating looking terminal opens up. Worry not because you should be able to simply copy the commands from the installation instructions into the command prompt and hit enter after each one. When the installation goes smoothly, you can try to run the demo.

Start again with the new instructions and tell me if you still have problems: https://github.com/jaakkopasanen/Impulcifer#installing



sander99 said:


> I get the following error, so what and where is 'git' and why isn't it in my PATH?
> 
> (venv) C:\Sander\Impulcifer\Impulcifer-master>pip install -r requirements.txt
> Collecting git+https://github.com/jaakkopasanen/autoeq-pkg (from -r requirements.txt (line 5))
> ...



Ah, yes. My mistake was to leave the AutoEq mention there. It was supposed to say download Impulcifer-master.zip, not AutoEq-master.zip. I have however redone the installation instructions and now Git needs to be installed first. Downloading the zip is no longer a recommended option; the instructions ask you to Git clone the repository instead. Having Git installed will solve the problem with the AutoEq installation too. Impulcifer uses AutoEq under the hood, so when installing the dependencies from *requirements.txt* it will install AutoEq with Git. I had just forgotten that downloading the zip instead of installing Git wouldn't work because of the AutoEq installation. Python 3.6 is not a requirement anymore and 3.7 should work just as well.

Could you try and start over but this time following the new installation instructions? Please report back here if you have more problems.


----------



## sander99

jaakkopasanen said:


> Could you try and start over but this time following the new installation instructions?


Thank you very much. It seems to work now.
I did get an error message at the end of the help information in the verify installation step though:
impulcifer.py: error: unrecognized arguments: -H
Ah, but using -h instead of -H in 'python impulcifer.py -H' gives no error.

And no error with running the demo, and it did produce the hesuvi.wav file (plus other files).
I did not test the hesuvi.wav file yet, and I don't have mics etc. yet to do my own measurements.


----------



## jaakkopasanen

New recording guide is up at Impulcifer wiki: https://github.com/jaakkopasanen/Impulcifer/wiki/Measurements

The stereo recording process and gear discussion are largely the same but now there is a guide for doing surround recordings with seven, two or just one speaker. I added some illustrations for explaining how the stereo speaker surround recording works. Hopefully it's clear(er) now.

I would appreciate any feedback on the guide. Is there still something that's unclear or confusing? Something that could be improved? Anything is welcome!


----------



## johnn29

I like how you put the following in bold



> When using audio interface that can provide phantom power make sure you never, ever have it turned on if you are about to plug in anything else than microphones which require phantom power! 48 volts is enough to fry any electronics device!



After frying my first SXFI trying to record an impulse response from it - it's a good warning! 

Diagrams make things much clearer now.

One tip I have is that, just like the guide says, you need to measure the headphones in the same session as the speakers. I (wrongly) assumed that you could make separate headphone recordings and compensate with a single speaker recording, but every re-seat causes a different response. Some headphones with higher clamp force, like the Bose 700 I've been trying to measure, prove very difficult. I've got the best results from the Creative Aurvana SE: very low clamp force and a decent sound without any EQ.


----------



## jaakkopasanen

johnn29 said:


> I like how you put the following in bold
> 
> 
> 
> ...



Thanks. I added a mention about the need to redo headphone measurements on every session to the guide. Good point.


----------



## jaakkopasanen

A major update to the recording process: I implemented a recording utility in Impulcifer, so there should no longer be a need to mess with Audacity. recorder.py is a playback and recording utility designed specifically for recording sine sweeps for HRIRs. I updated the measurement guide to use the recorder instead of Audacity: https://github.com/jaakkopasanen/Impulcifer/wiki/Measurements#recording-with-recorderpy. The project front page has a quick reference guide for running the measurements, and once one has familiarized oneself with the recording process, the quick reference serves as a shortcut for simply copy-pasting the commands: https://github.com/jaakkopasanen/Impulcifer#measurement.
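For reference, the sweep-and-deconvolution idea the recorder is built around can be sketched in a few lines. This is a generic Farina-style sketch with assumed parameters, not Impulcifer's actual code:

```python
import numpy as np

fs = 48000
duration = 5.0
f1, f2 = 20.0, 20000.0

# Exponential (log) sine sweep, the kind commonly used for IR measurements.
t = np.arange(int(duration * fs)) / fs
rate = np.log(f2 / f1)
sweep = np.sin(2 * np.pi * f1 * duration / rate * (np.exp(t * rate / duration) - 1))

# Farina-style inverse filter: time-reversed sweep with an amplitude envelope
# that compensates the sweep's pink (-3 dB/oct) spectrum.
inv = sweep[::-1] * np.exp(-t * rate / duration)

# Convolving a recording of the sweep with the inverse filter yields the
# impulse response. Here the "recording" is the dry sweep itself, so the
# result is a band-limited near-delta delayed by one sweep length.
nfft = 2 * len(sweep)
ir = np.fft.irfft(np.fft.rfft(sweep, nfft) * np.fft.rfft(inv, nfft), nfft)
peak = int(np.argmax(np.abs(ir)))
print(peak)  # close to len(sweep) - 1
```

In a real measurement the microphone recording, which contains the speaker, room and HRTF effects, takes the place of the dry sweep; the peak and its tail are then the HRIR.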

I'm personally quite happy with the process. It doesn't really take a lot of time to get a surround sound setup measured with just one or two speakers, and with a 7.1 system it's even faster. It's no graphical user interface, but it serves well for now.

The new recording process comes with changes to the usage of Impulcifer. recording.wav and the --speakers parameter are no longer the way to tell Impulcifer where to find the recording and which speakers it contains. The new way is to create one or more recording files which have the speaker names in the file name, for example FR,FL.wav and SR,BR.wav. This makes everything clearer and simpler because the files themselves document the speaker order and there is no longer a need to have all the tracks in the same file. With this scheme it's impossible to lose the speaker order, which makes it easier to share recordings and come back to them after a long time.
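For illustration only (this is not Impulcifer's actual parser), the naming scheme is simple enough to decode in a couple of lines:

```python
from pathlib import Path

def speakers_in(filename):
    # Speaker names are the comma-separated file name without its extension,
    # in track order, e.g. 'FR,FL.wav' means track 1 is FR and track 2 is FL.
    return Path(filename).stem.split(',')

print(speakers_in('FR,FL.wav'))  # ['FR', 'FL']
print(speakers_in('SR,BR.wav'))  # ['SR', 'BR']
```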

Another, smaller update cuts the time it takes to process the recordings. Most of the time was spent producing the graphs, but the graphs are no longer produced by default. Those still interested in them can tell Impulcifer to generate them with the --plot parameter. Plotting the graphs is faster too, so the total processing time with plotting should be down by half. Processing without plotting only takes a few seconds.


----------



## johnn29

Just saw that you've built in a recording function too - 3 simple commands now to generate a HRIR. Pretty amazing!

My theater refurb should be complete this week so very much looking forward to getting some new measurements done. Seems like it'll be a breeze compared to the initial ones I did.


----------



## jaakkopasanen

I added EQ baking, so now it's possible to create HRIRs for IEMs. This requires a FIR filter which equalizes the IEM frequency response to the around-ear headphone frequency response. Headphone compensation cannot be done with IEMs, so around-ear headphones have to be used for that step, but once the measurements are done the IEMs are good to go. The FIR filter which turns the IEM into the around-ear headphone can be created with AutoEQ.
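In dB terms the baking idea can be pictured like this. The numbers are toys and this is a simplification of what actually happens to the HRIR, but it shows why combining the two EQs works:

```python
import numpy as np

# Toy dB responses on a shared frequency grid (illustrative numbers only).
freqs = np.array([100, 1000, 4000, 10000])    # Hz
overear_db = np.array([0.0, 0.0, 2.0, -1.0])  # measured over-ear headphone response
iem_db = np.array([1.0, 0.0, -3.0, 4.0])      # IEM response, hypothetically via the same mics

# "Morphing" EQ: what to apply to the IEM so it matches the over-ear headphone.
morph_db = overear_db - iem_db

# Baking: the morphing EQ is combined with the headphone-compensation EQ so
# a single filter does both jobs (in dB, gains simply add).
headphone_comp_db = -overear_db               # flattens the over-ear response
baked_db = headphone_comp_db + morph_db

# Net result at the ear: the IEM response plus the baked EQ is flat (0 dB).
print(iem_db + baked_db)
```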

The second addition is an output sampling rate option. If the desired sampling rate doesn't match the sampling rate of the recorded files, Impulcifer can resample on the fly. This is convenient because one no longer needs to do the measurements twice if both 44.1 kHz and 48 kHz HRIRs are wanted. I intend to test at some point whether there is any difference between recording at 48 kHz and recording at 96 kHz and sampling down. Perhaps @johnn29 you would like to test this? IIRC you have an audio interface which can do more than 48 kHz.
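A 48 kHz to 44.1 kHz conversion of the kind described boils down to a rational-ratio resample. A minimal sketch with SciPy (not Impulcifer's actual code, just the standard polyphase approach):

```python
from fractions import Fraction

import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 48000, 44100
ratio = Fraction(fs_out, fs_in)      # reduces to 147/160

t = np.arange(fs_in) / fs_in         # one second of audio at 48 kHz
x = np.sin(2 * np.pi * 1000 * t)     # 1 kHz test tone

# Polyphase resampling: upsample by 147, filter, downsample by 160.
y = resample_poly(x, ratio.numerator, ratio.denominator)
print(len(y))  # 44100 samples: one second at 44.1 kHz
```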

Lastly for today's work, I made Impulcifer write a readme with the date, sampling rates and some stats about the measured impulse responses, like peak-to-noise ratio.


----------



## arksergo

Hi guys!

Yesterday I finally recorded my HRIR using Impulcifer and I've got to tell you that the result is mind blowing! It is the best headphone virtualization experience that I've ever had. I literally don't hear the difference between using headphones and speakers.
Thank you jaakkopasanen for sharing your amazing work with us!

A couple of words regarding measurement process:
I wasn't able to buy binaural mics because of weird legal restrictions, so I made them myself. I used a couple of Primo 258N microphones and assembled a simple mic amplifier with XLR phantom power. For recording I used a Behringer UMC202HD.
The recording process was pretty simple, but unfortunately I failed to use recorder.py because of an error (something about the sounddevice module not being found). So I ended up using good old Audacity.
It is worth mentioning that you have to use the 64-bit version of Python, as one of the required packages doesn't exist for 32-bit.


----------



## jaakkopasanen

arksergo said:


> Hi guys!
> 
> Yesterday i finally recorded my HRIR using Impulcifer and I got to tell you that the result is mind blowing! It is the best headphone virtualization experience that I ever had. I literally don't hear the difference between using headphones and speakers.
> Thank you jaakkopasanen for sharing your amazing work with us!
> ...



Awesome!

Great work making the mics yourself! Good to hear that this can be a valid option. I've thought about it myself but haven't gotten there yet. The Primo EM172 looks even better spec-wise and is less than £12. Perhaps I'll make my own binaural mics instead of upgrading to the Sound Professionals master series mics.

The sounddevice package wasn't listed in the requirements.txt because I forgot to add it there. It's there now and recorder.py should work. I also added a mention in installing instructions about 64-bit Python. Big thanks for reporting these.


----------



## jaakkopasanen

Oh, and another update I did on Sunday: the decay plot and sweep spectrogram plot have been improved. The decay plot represents the levels more correctly and should be easier to read. The peak in the plot is scaled to 0 dB, but the calculation can now produce correct absolute levels, so it would be possible to compare levels across tracks. Perhaps I should add that somehow. The spectrogram is on a logarithmic frequency scale, so low frequencies are no longer lost in the image. Here is an example:
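For those curious, a decay curve of this kind is typically computed by Schroeder backward integration of the squared impulse response. A rough sketch with a synthetic IR (this is the textbook method, not necessarily Impulcifer's exact implementation):

```python
import numpy as np

def decay_curve_db(ir):
    """Schroeder backward integration of an impulse response, in dB."""
    # Remaining energy from each sample to the end of the IR.
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    energy = energy / energy[0]  # scale so the curve starts at 0 dB
    return 10 * np.log10(np.maximum(energy, 1e-12))

# Toy exponentially decaying "IR" with noise-like fine structure.
rng = np.random.default_rng(0)
n = 48000
ir = rng.standard_normal(n) * np.exp(-np.arange(n) / 4800)

curve = decay_curve_db(ir)
print(curve[0])                      # 0.0 dB at the start
print(bool(np.all(np.diff(curve) <= 0)))  # monotonically non-increasing: True
```

The backward integration is what smooths the raw squared IR into a readable, monotonically decaying level curve.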


----------



## arksergo

jaakkopasanen said:


> Primo EM172 looks even better specs wise and is less than £12


I've also looked at the EM172, but its diameter (10 mm vs 6 mm for the EM256) confused me. I will share a photo of my setup when I get home. Also, I recommend buying mics with pre-soldered wires; I actually burned one myself while soldering it.



jaakkopasanen said:


> The sounddevice package wasn't listed in the requirements.txt because I forgot to add it there. It's there now and recorder.py should work. I also added a mention in installing instructions about 64-bit Python. Big thanks for reporting these.


Cool! I will try to use it next weekend. Thanks for quick response!


----------



## Ripley

It's too bad there isn't some way to rent/borrow the hardware needed to make a recording. I know that the mics and audio interface aren't _super_ expensive, but I still hesitate to buy something that I will likely only use once, for a few minutes, and then never use again. I suppose I could always resell them...Hm...Decisions, decisions!


----------



## Joe Bloggs

Ripley said:


> It's too bad there isn't some way to rent/borrow the hardware needed to make a recording. I know that the mics and audio interface aren't _super_ expensive, but I still hesitate to buy something that I will likely only use once, for a few minutes, and then never use again. I suppose I could always resell them...Hm...Decisions, decisions!


1. You probably won't get it right on the first try
2. Once you get it right, your earphones will sound better than any earphones you can buy, for ANY money...


----------



## SilverEars (Oct 8, 2019)

Ripley said:


> It's too bad there isn't some way to rent/borrow the hardware needed to make a recording. I know that the mics and audio interface aren't _super_ expensive, but I still hesitate to buy something that I will likely only use once, for a few minutes, and then never use again. I suppose I could always resell them...Hm...Decisions, decisions!


It's something one can get into with a bit of effort, I would think. I think that's true of any passion out there: you have to really be obsessed with it to find victims to record, experiment with various equipment, and try to get others to record their performances.

Like photography, it's for those types of people that are really interested in recording what can be sensed as accurately as possible.

From what I've read around, there are expensive mics out there. Like a better quality mic with better signal-to-noise if one wants to do accurate THD+N measurements of headphones.

With this said, I do admire quality recordings and I wonder if I can do the same with the right equipment. A recording I really admire is 'Jazz at the Pawnshop' and I've always wondered how the recording was done. Same for high quality cinematography. If anybody has seen reviewer Joshua Valour's videos, you'd recognize that he has a knack for shooting videos. He has to be into video equipment to be able to shoot in such quality.

What can I do with a DSLR?


----------



## Ripley

Joe Bloggs said:


> 1. You probably won't get it right on the first try
> 2. Once you get it right, your earphones will sound better than any earphones you can buy, for ANY money...



Wowza! When you put it that way, it's hard not to buy the gear and give Impulcifer a shot. Maybe I'll make it a holiday treat to myself...


----------



## Zenvota

Ripley said:


> Wowza! When you put it that way, it's hard not buy the gear and give Impulcifer a shot. Maybe I'll make it a holiday treat to myself...


Kinda like buying a meter for video calibration: beforehand your display looks great, but once you see correct grayscale, gamma, and low color errors, it makes the meter purchase worthwhile.

Not to mention you can apply the AutoEq function to other renderers and measure multiple headphones and multiple speakers; it should be more than a one-time-use thing.


----------



## Ripley

Zenvota said:


> kinda like buying a meter for video calibration, beforehand your display looks great, but once you see correct grayscale, gamma, and low color errors it makes the meter purchase worthwhile.
> 
> Not to mention you can apply the autoeq function to other renderers, measure multiple headphones, multiple speakers, should be more than a one time use thing.



Ironically, I actually _do_ own a colorimeter, namely the X-Rite i1Display Pro. And display calibration really _does_ make a big difference, at least on some of my displays. I guess I might really have to jump onto this bandwagon!


----------



## Zenvota

Ripley said:


> Ironically, I actually _do_ own a colorimeter, namely the X-Rite i1Display Pro. And display calibration really _does_ make a big difference, at least on some of my displays. I guess I might really have to jump onto this bandwagon!


The measured impulse responses of speakers applied to binaural rendering are really fantastic. I probably use Out of Your Head for 99% of my headphone listening, so I'm very anxious to have some free time to sit down and try Impulcifer. Pre-thanks to @jaakkopasanen for all the work he's done; I'm really looking forward to trying it.


----------



## Ripley

Zenvota said:


> The measured impulse responses of speakers applied to binaural rendering is really fantastic.  I probably use Out of Your Head for 99% of my headphone listening so very anxious to have some free time to sit down and try Impulcifer.  Pre thanks to @jaakkopasanen for all the work hes done I'm really looking forward to it trying it.



I've tried basically everything out there, short of a Smyth Realiser. I've tried Dolby Headphone, Dolby Atmos Headphone, DTS Headphone:X, Razer Surround, Out of Your Head, HeSuVi, Super X-Fi Amp, Spatial Sound Card... maybe some others I'm forgetting. For quite a while, Out of Your Head was my go-to, with the Genelec preset. Once I found Spatial Sound Card, though, that's worked the best for me. It must be that whatever HRTF they're using happens to match pretty closely to my own. There's definitely a huge difference between the effects, and for me, the only two that really gave me a terrific "out of the head" experience were Out of Your Head and Spatial Sound Card (the "Dubai" location). If Impulcifer can take it up another notch, it will definitely knock my socks off.


----------



## Zenvota

Ripley said:


> For quite awhile, Out of Your Head was my go-to, with the Genelec preset. Once I found Spatial Sound Card, though, that's worked the best for me.


The measured solutions desperately need headphone correction above 2 kHz at the least. And with OOYH, if you listen closely on certain material it's possible to fine-tune the HRTF at least a little bit. The headphone correction of AutoEq should make this much more accurate, so I'm looking forward to that as well. I could never get SSC to work for some reason.

I'm also of the opinion that equipment makes a huge difference with this type of audio stream, though you'd have to PM me if you're interested in that as they won't hear it in this section of the site ;]


----------



## arksergo (Oct 10, 2019)

Hi! As I promised, posting pictures of my diy microphones and amp:


----------



## arksergo

By the way, jaakkopasanen, do you think it is possible to implement the algorithm on a hardware DSP, like one of the SigmaStudio DSPs from AD? I would like to use it not only on my PC but on a PlayStation too, so I'm now looking for options for making the solution portable.


----------



## jaakkopasanen

arksergo said:


> By the way, jaakkopasanen, do you think it is possible to implement algorithm on hardware DSP, like one of the SigmaStudio from AD? I would like to use it not only on my pc, but PlayStation too, so now looking for options of making the solution portable.



There could be something, although I'm not aware of any. Usually the problem is that dedicated DSPs don't have the computing power to run convolution with 14 different long FIR filters. Creative Super X-Fi does, but it's not possible to import your own HRIR into it. The Smyth Realiser A16 can do this and it might be able to import your own HRIR, but then the Realiser has the measurement stuff built in so there's no need. And it's $4000.
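To put rough numbers on that (the filter length and sample rate here are assumptions for a back-of-envelope estimate, not measured figures):

```python
import math

# Back-of-envelope cost of real-time convolution for a 7.1 HRIR.
n_filters = 14    # 7 speakers x 2 ears
taps = 48000      # a hypothetical one-second impulse response at 48 kHz
fs = 48000

# Direct time-domain convolution: one multiply-accumulate per tap per sample.
direct_macs = n_filters * taps * fs
print(f"direct: {direct_macs / 1e9:.1f} GMAC/s")

# Partitioned FFT convolution needs only on the order of log2(N) multiplies
# per sample per filter, which is why a PC handles this easily while many
# small dedicated DSPs cannot run the direct form.
fft_macs = n_filters * fs * 4 * math.log2(2 * taps)
print(f"fft-based (rough): {fft_macs / 1e6:.0f} MMAC/s")
```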

If only there were a way to input audio from HDMI (ARC) into a Raspberry Pi...


----------



## johnn29

jaakkopasanen said:


> I intend to test at some point if there is any difference between recording with 48 kHz and recording with 96 kHz and sampling down. Perhaps @johnn29 you would like to test this? IIRC you have an audio interface which can do more than 48 kHz.



Yep I can test this next week. My theatre refurb is finally complete so I'll get a few recordings done.

I've also gotta test headphone compensation with the DT990 already EQ'd by oratory to get rid of that nasty treble.


----------



## gregorio

jaakkopasanen said:


> I intend to test at some point if there is any difference between recording with 48 kHz and recording with 96 kHz and sampling down.



Short answer: "No". Long answer: "Virtually always no". There is potentially a circumstance where it could make a difference; a particularly old or dodgy resampling process, which could then affect a non-linear downstream process (such as a modelled vintage compressor) or of course an extremely large pitch shift. I don't recall ever having experienced such a set of circumstances personally but it is theoretically possible. I have recorded IRs at both 48 and 96kHz incidentally.



SilverEars said:


> I think that's with any passion out there, you have to really be obsessed with it to to find victims to record and experiment with various equipment and try to get others to record their performances.
> Like photography, it's for those types of people that are really interested recording what can be sensed as accurately as possible.
> From what I've read around, there are expensive mics out there.  Like better quality mic with better signal to noise if one wants to do accurate THD+N measurements of headphones.
> With this said, I do admire quality recordings and I wonder if I can do the same with the right equipment.  A recording I really admire is 'Jazz at the Pawnshop' and I've always wondered how the recording was done.



It's not really that simple because it's not so much about the tools but about how one uses them. For example, you could get a set of the very best carpentry tools that money can buy but you're not going to start churning out Chippendales any time soon.

While I don't know much about photography, in the case of music it hasn't really got much to do with recording "as accurately as possible". The aim is to end up with a recording that is as pleasing/subjectively "good" as possible, and that commonly means not recording as accurately as possible. There are some very expensive mics out there, but the most expensive ones are not very accurate and don't have particularly low self noise; in fact they're usually far less accurate and noisier than mics which are 10 or more times cheaper. The reason they're more expensive is that they have certain colourations (and other properties) which are desirable because under certain conditions they produce a subjectively better result than other mics.

The question then becomes: which mics should we use in which circumstances/situations, and how should we use and position them (relative to the sound source and each other)? As the situation always varies (different instruments, different pieces, different recording venues, different musicians, different instrument positions within the venue, or different artistic intentions, since different musicians have different ideas on what is subjectively better), the choice and/or use of mics always varies too. So how do we know/learn which mics to use and how?

Traditionally one got a job as the tea-boy in a top class studio, studied the literature, watched/learned what was going on, eventually becoming an assistant engineer being overseen and instructed by the chief engineer and then several years later becoming a chief engineer yourself. This way, decades of cumulative knowledge is passed along. In other words, even if one could "find victims to record" and had various equipment to experiment with, you'd need a few lifetimes to discover for yourself the cumulative knowledge that a typical chief recording engineer of a top class studio would have. So, could you "do the same", record an ensemble as well as a top class studio/recording team? It's possible but very unlikely. To start with, what's the "right equipment"?

G


----------



## jaakkopasanen

gregorio said:


> Short answer: "No". Long answer: "Virtually always no". There is potentially a circumstance where it could make a difference; a particularly old or dodgy resampling process, which could then affect a non-linear downstream process (such as a modelled vintage compressor) or of course an extremely large pitch shift. I don't recall ever having experienced such a set of circumstances personally but it is theoretically possible. I have recorded IRs at both 48 and 96kHz incidentally.



Thanks, good to know! I remember reading somewhere that the deconvolution process would be somehow sensitive to sampling frequency, something like deconvolution cannot do a good job above one sixth of the sampling frequency. This would put the limit at 8 kHz with a 48 kHz sampling rate and at 16 kHz with 96 kHz. But I've never managed to find the reference again, so it could be that I dreamed the whole thing.


----------



## gregorio

jaakkopasanen said:


> I remember reading somewhere that the deconvolution process is somehow sensitive to sampling frequency, something like deconvolution cannot do a good job above one sixth of the sampling frequency.



Ah, in that case I don't know. That's not something I've heard myself but then I'm not in that side of the business. I just use plugin reverbs (as a professional sound engineer), I have limited knowledge of what goes on under the hood. It could be that there is some non-linear process occurring that benefits from 96kHz and some of my reverbs do upsample internally (from 48kHz to 96kHz). All I can say is that I've had occasion to work with IRs that were recorded natively at both 96kHz and 48kHz and the end result (output of the convolution reverb at 48kHz) was audibly identical but I don't know what was going on under the hood inside the reverb plugin.

G


----------



## jaakkopasanen

Waveform and waterfall plots. Aren't they pretty.


----------



## johnn29

Those are gonna be great for the virtual room correction: before and after!


----------



## Joe Bloggs

gregorio said:


> Ah, in that case I don't know. That's not something I've heard myself but then I'm not in that side of the business. I just use plugin reverbs (as a professional sound engineer), I have limited knowledge of what goes on under the hood. It could be that there is some non-linear process occurring that benefits from 96kHz and some of my reverbs do upsample internally (from 48kHz to 96kHz). All I can say is that I've had occasion to work with IRs that were recorded natively at both 96kHz and 48kHz and the end result (output of the convolution reverb at 48kHz) was audibly identical but I don't know what was going on under the hood inside the reverb plugin.
> 
> G


All I can say is that I have the ear of a dev who's an EE graduate with heavy focus on signal processing, and he told me your concern about sample rate applies to some filter design tasks but definitely not to deconvolution.


----------



## jaakkopasanen

Joe Bloggs said:


> All I can say is that I have the ear of a dev who's an EE graduate with heavy focus on signal processing, and he told me your concern about sample rate applies to some filter design tasks but definitely not to deconvolution.


Excellent. This saves me some trouble then. Thanks a lot!


----------



## gregorio

Joe Bloggs said:


> All I can say is that I have the ear of a dev who's an EE graduate with heavy focus on signal processing, and he told me your concern about sample rate applies to some filter design tasks but definitely not to deconvolution.



To be fair, I don't have any concerns about 48kHz vs 96kHz for deconvolution (or in fact most other digital audio processes), because I've used digital reverbs for almost 30 years and convolution reverbs for around 20 years, and have never detected any audible differences between these two sample rates. Furthermore, I can't think of anything within the convolution/deconvolution process which would produce any audible difference; I was just being thorough/honest by covering the possibility that there's something I haven't thought of (and have never experienced). Some commercial reverb plugins do upsample 48kHz input internally to 96kHz, but of course there are various reasons why this may be preferable that have nothing to do with fidelity or audible differences.

G


----------



## johnn29 (Oct 17, 2019)

Jaakko - I just measured a new HRIR with the updated, improved method. It's remarkably easy now - so user friendly. The ability to measure my real 7 channel system without the spin-o-rama has also meant I get a much more accurate recording - even with my Bose 700s! The damn mics don't keep moving around cause my ass is firmly still. That combined with hooking the binaural mics over my ear has made the whole process so much easier.

Only suggestion I have from the process is that when an error is thrown on running the 7 channel recording command, it should prompt the user to generate the correct wav file. i.e. "no 7.1 sweep found, would you like to generate one? y/n"

I've been A vs B'ing my HRIR against the speakers I just measured. I use MPC-BE to output to both headphones and my speakers simultaneously and mute the speakers as needed. They sound remarkably similar. I suspect the majority of the difference is due to me using closed-back Bluetooth headphones. I'm going to measure up my open backs shortly too.

Seriously amazing job. I know you don't take donations but you saved me £4k on a Smyth AND I can use it anywhere in the world, on the metro, planes etc. So if you re-consider that I'm in!


----------



## jaakkopasanen

johnn29 said:


> Jaakko - I just measured a new HRIR with the updated, improved method. It's remarkably easy now - so user friendly. The ability to measure my real 7 channel system without the spin-o-rama has also meant I get a much more accurate recording - even with my Bose 700s! The damn mics don't keep moving around cause my ass is firmly still. That combined with hooking the binaural mics over my ear has made the whole process so much easier.
> 
> Only suggestions I have from the process is when an error is thrown on running the 7 channel recording command it should prompt the user to generate the correct wav file. i.e. "no 7.1 sweep found, would you like to generate one? y/n"?
> 
> ...



Glad to hear it, and always grateful to receive feedback and improvement ideas.


----------



## sander99

arksergo said:


> Hi! As I promised, posting pictures of my diy microphones and amp:


I'd like to see your microphones very much but I only see your amp...


----------



## arksergo

sander99 said:


> I'd like to see your microphones very much but I only see your amp...


One of them is in the first photo. I made it from replacement tips of my old Sennheiser in-ear headphones. The shield wire is not needed, so it can be cut off.
I will take better pictures this weekend.


----------



## sander99

arksergo said:


> One of them is on the first photo.


I don't know why but I can only see the second photo in your post, and the text "


----------



## arnaud

jaakkopasanen said:


> Waveform and waterfall plots. Aren't they pretty.



I like the way you got these charts set up as it makes for easy interpretation! We see the peak in amplitude before 3 sec, check that there's a hot spot at 100 Hz in the spectrogram and verify that in the frequency response function...

Is the processing the same on the spectrogram and waterfall, or is the waterfall a gated FFT from the reconstructed impulse response?

Also, out of curiosity, how do you compute the spectrogram? I've seen such graphs based on wavelet analysis, but I wonder if for a sine sweep you can just take single-block FFTs (like 100 ms in your case it seems) as the sweep goes through?

Finally, again out of curiosity, are these typical distortion figures do you think (this is how I interpret the oblique traces at H2 and H3, and I'd expect it to be even more obvious on an 80 dB scale or so)? I've never done such spectrogram measurements on headphones (at best I've done waterfall processing of an impulse response).

cheers,
arnaud


----------



## jaakkopasanen

arnaud said:


> I like the way you got these charts setup as it makes for easy intepretation! We see the peak in amplitude before 3sec, check that there's a hot spot at 100Hz in the spectrogram and verify that in the frequency response function...
> 
> Is the processing the same on the spectrogram and waterfall or the waterfall a gated FFT from the reconstructed impulse response?
> 
> ...


The spectrogram is calculated from the recorded exponential sine sweep and the waterfall from the reconstructed impulse response. Both use overlapping Hanning windows for the FFT. The spectrogram sets the FFT window length for 10 Hz frequency resolution, and the waterfall uses 300 ms or one tenth of the impulse response length, whichever is shorter. The spectrogram uses 200 segments on the time axis and sets the window overlap accordingly; the waterfall uses 90% overlap. None of these numbers are based on anything more scientific than that the graphs look pretty good with the selected values.
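Those window choices can be reproduced with scipy's spectrogram, for example (a sketch with an assumed 48 kHz rate and a noise stand-in for the recorded sweep):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 48000                        # assumed sample rate of the sweep recording
sweep = np.random.randn(fs * 6)   # noise stand-in for the recorded sine sweep

# 10 Hz frequency resolution -> window length of fs / 10 samples
nperseg = int(fs / 10)
# 200 segments on the time axis -> derive the hop size, then the overlap
hop = max(1, (len(sweep) - nperseg) // 200)
noverlap = nperseg - hop

f, t, Sxx = spectrogram(sweep, fs=fs, window='hann',
                        nperseg=nperseg, noverlap=noverlap)
print(Sxx.shape)  # (nperseg // 2 + 1 frequency bins, ~200 time segments)
```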

I don't know what typical distortion numbers are, but in the decay graph you can see that the 2nd harmonic component is at -55 dB, so I guess that's ok.

ps. Room compensation has been implemented and it sounds really good. It's not documented yet and I've only got it working for a stereo setup so far; the problem could be in the surround setup recordings. Will look into it and try to get 7.1 demo files ready with room correction.


----------



## jaakkopasanen

I actually got the surround setup room correction working. I had just named two room measurement files wrong. Now the demo contains recordings for a 7.1 room response corrected HRIR.

I'll still need to write the measurement guide, but for now the most eager ones can read through the updated processing documentation and play around with the webcam mic placement helper. That thing leaves a ghost behind when taking a picture, which can be used to place the room measurement mic in the same location where the binaural mic was. The seven rectangles on the right side are slots for different measurements. Select a slot by clicking it and the picture will be saved to that slot. Clicking another slot will bring its ghost to the big picture. Like so: https://i.imgur.com/vKwrEGI.png. Here the microphone stand is in front of me and you can see both my face and the measurement microphone. It's not the best of all solutions for the problem but serves for now. It would be better to have the webcam above the listening position looking down.
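The ghost overlay itself is conceptually just alpha blending of a saved frame over the live one; a minimal numpy sketch (not the helper's actual code, and the frames here are tiny stand-ins):

```python
import numpy as np

def ghost_overlay(live_frame, saved_frame, alpha=0.5):
    """Blend a previously captured frame over the live webcam image so the
    saved mic/head position shows through as a translucent ghost."""
    blend = (alpha * live_frame.astype(float)
             + (1.0 - alpha) * saved_frame.astype(float))
    return blend.astype(np.uint8)

# Tiny stand-in frames: black live image, bright saved image
live = np.zeros((4, 4, 3), dtype=np.uint8)
saved = np.full((4, 4, 3), 200, dtype=np.uint8)
ghost = ghost_overlay(live, saved)   # mid-level gray where the ghost sits
```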


----------



## johnn29

Excellent! I read the documentation yesterday - I must confess I'm a little confused with how it'd work so I'll wait for the full documentation.

Initially I just expected room correction to flatten the frequency response. In the bass range I thought it'd just be a simple flat line until around 80-100 Hz, and after that it'd be some sort of average response for each speaker/ear so that the peaks and nulls are removed. That way you'd maintain the HRTF, but obviously it's way more complicated than that.

I have a UMIK-1 so I'll try it out when the documentation is live.


----------



## jaakkopasanen

johnn29 said:


> Excellent! I read the documentation yesterday - I must confess I'm a little confused with how it'd work so I'll wait for the full documentation.
> 
> Initially I just expected room correction to flatten the frequency response. In the bass range I thought it'd just be a simple flat line until around 80-100hz and after that it'd be some sort of average response for each speaker/ear so that the peaks and nulls are removed. That way you'd maintain the HRTF but obviously it's way more complicated than that.
> 
> I have a UMIK-1 so I'll try it out when the documentation is live.


I'm afraid it's not possible to know how much of the measured frequency response comes from the HRTF and how much from room acoustics. This requires a reference measurement with a calibrated microphone in precisely the same location as the in-ear microphone. You place the measurement microphone in the same spot where the left ear in-ear mic was and record the sweeps. This produces room-FL,FC,FR,SR,BR,BL,SL-left.wav on a 7.1 speaker system. The same thing has to be repeated for the right ear, and obviously the measurement microphone has to be moved to the position of the right ear.


----------



## johnn29

Oh ok, that's not as complicated as I thought. I'll give it a go soon. Very excited about this - it's a real way your solution is better than the Realiser.


----------



## arksergo

sander99 said:


> I don't know why but I can only see the second photo in your post, and the text "


----------



## jaakkopasanen

I've been using the HRIR with room correction for several hours now and I have to say I'm very pleased. So much in fact that I've started to do critical listening again. I was of the opinion that I might even prefer my Custom Art FIBAE 3 CIEMs for music listening over the HD 800 or my speaker setup, because they provide a very laid back and pleasant listening experience when EQ'd neutral. Now however the FIBAE 3s start to sound boring compared to what the HD 800 can do with the room corrected HRIR. Harman target equalized headphones are very close to my neutral, but they're lacking a bit of the excitement the room corrected HRIR offers.

Compared to my speakers, with which the HRIR was recorded, virtualization offers a lot better detail retrieval. It's on par with the HD 800 without virtualization and in some ways better. Imaging is also significantly better than with my speakers. This was a complete surprise for me because I didn't expect virtualized speakers to be able to provide better imaging than the physical speakers and room they try to simulate. Soundstage is similar and maybe even a bit wider. Vertical placement of the instruments is definitely more accurate. On the physical speakers instruments and vocals seem to shift upwards, but with virtualized speakers they are centered around the vertical level of the physical speakers with clear variation. I'm not sure if the vertical positions of the sounds are correct per se, but it's nice to have a more two-dimensional sound image.

Good imaging with an actual speaker soundstage, combined with the natural detail level of the headphones, makes it even easier to detect all the detail in the recording, because now that the sound "blobs" are further apart it's easier to focus on individual instruments. I think it would be exceedingly difficult to achieve this level of detail retrieval with speakers alone. The same seems to apply to dynamics. The HD 800 are very dynamic headphones and the speaker virtualization doesn't compress the dynamics too much. Maybe there is a slight reduction, but I actually prefer it this way because on some songs the HD 800 can be a bit too much. All in all the technical ability of the headphones seems to translate very well to speaker virtualization.

To me, speaker virtualization with headphones is the way to go for ultimate hifi. It combines the best of both worlds, retaining the technicalities of headphones while providing the soundstage of speakers and even exceeding them in imaging. Tonality with room correction is the best I've heard in headphones or speakers. Having the frequency response tailored to my ears makes for an extremely natural and effortless listening experience.

I think I should be able to improve on the bass speed and punch with decay management. Currently the room corrected HRIR has somewhat slow bass because the bass decay is so long. I'm also hoping that the improved signal-to-noise ratio from the tracking filter will improve imaging even further. At least the early reflection management should do this if I manage to pull it off. I got an idea of detecting the reflected sound direction the same way human brains localize sounds, and then cancelling the reflected sounds that are unbalanced or come from undesired directions. We'll see how this works out.


----------



## johnn29

Sounds exciting - I'm still waiting to try out the room correction when I get some time.

I did notice in some of the OOYH presets that work particularly well for me that the stereo center image sounds like a real center. I assume it's because the room that was measured has bang-on frequency response for the left and right channels, so I'm very much looking forward to trying that out on my own.

Had the perfect use case for Impulcifer just now - there's construction work going on in the cellar beneath my room. I wanted to watch a show in peace in my office, so I donned the Bose 700s, set ANC to max and loaded my 7.1 channel measurement from my theater in HeSuVi. Can't get over how you can just do that nowadays. After not using it for some time I always have to check the speakers aren't on too - testimony to the re-creation of a loudspeaker.


----------



## Joe Bloggs

jaakkopasanen said:


> I've been using the HRIR with room correction



What does your room correction consist of?


----------



## jaakkopasanen

Joe Bloggs said:


> What does your room correction consist of?


Currently it's just a minimum phase EQ to the Harman room target, but I intend to implement a mixed phase filter. Tips for that are welcome if anyone here has any insights.
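For reference, a minimum phase filter can be derived from any target EQ magnitude with the standard real-cepstrum (Hilbert transform) method - a generic numpy sketch of the technique, not Impulcifer's actual code:

```python
import numpy as np

def minimum_phase_fir(magnitude):
    """Turn a desired magnitude response (positive frequencies, length
    N // 2 + 1) into a minimum phase FIR via the real cepstrum method."""
    n_fft = 2 * (len(magnitude) - 1)
    # Build the full symmetric spectrum and floor it to avoid log(0)
    mag = np.concatenate([magnitude, magnitude[-2:0:-1]])
    mag = np.maximum(mag, 1e-8)
    # Real cepstrum of the log magnitude
    cepstrum = np.fft.ifft(np.log(mag)).real
    # Fold negative quefrencies onto positive ones (Hilbert transform trick)
    window = np.zeros(n_fft)
    window[0] = 1.0
    window[1:n_fft // 2] = 2.0
    window[n_fft // 2] = 1.0
    min_phase_spectrum = np.exp(np.fft.fft(cepstrum * window))
    return np.fft.ifft(min_phase_spectrum).real

# Sanity check: a flat 0 dB target collapses to a unit impulse
fir = minimum_phase_fir(np.ones(513))
```

The result has the same magnitude as the target but the shortest possible time-domain spread, which is why minimum phase EQ is preferred over linear phase for correcting impulse responses.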


----------



## jaakkopasanen

I did some quick testing with different mic insertion depths and headphone placements. The results are wild. As in, there is a huge variation in frequency response depending on both of these factors, and because of this I'm not getting correct channel balance consistently. Inserting the mics deeper into the ear canal will boost treble by some 10 to 20 dB, most likely due to ear canal resonance. I suspect that the ear canal resonance plays a role even when the mics are at the ear canal opening and the ear canal hasn't been blocked, as is the case with The Sound Professionals' mics. Basically all the literature says that the best way is to use ear canal blocking mics, but I kind of assumed that this only meant that the mic needs to be at the ear canal opening and the actual blockage isn't that important. Perhaps I was wrong to assume. I'm going to test this hypothesis by blocking the ear canal with a regular foam earplug.


----------



## sander99 (Nov 4, 2019)

(I decided to post this reply here as I think that is more appropriate than in the "To crossfeed or not to crossfeed?" thread.)


ironmine said:


> Does anybody know when I can buy these microphones?


(@ironmine: I assume you meant where you can buy these, not when?)


jaakkopasanen said:


> I have listed some in Impulcifer wiki: https://github.com/jaakkopasanen/Impulcifer/wiki/Measurements#microphones
> I'm using the soundprofessional ones for HRIR measurements but I'm starting to suspect that ear canal needs to be properly blocked for reproducible results. That should be possible with an earplug when using the soundprofessional mics which are not ear canal blocking themselves.


I am trying to decide which mics I should buy. I actually think the ones in ironmine's picture look the most suitable I have seen - shape and size wise, I mean - because there is minimal material in the earflaps and the mic is really in the opening of the ear canal, not a little further to the outside like many others. Also, these resemble the Smyth Research mics more than any of the others, and Smyth sure know something about HRIR/PRIR measurements. The plastic clips inside the earflaps of the Sound Professionals mics - even more so with the cable running around - worry me, because how could they not change the reflections/influence of the earflaps? I would sooner suspect those plastic clips cause problems. But it is just an intuitive feeling.


----------



## jaakkopasanen (Nov 4, 2019)

sander99 said:


> (I decided to post this reply here as I think that is more appropiate than in the "To crossfeed or not to crossfeed?" thread.)
> 
> (@ironmine: I assume you meant where you can buy these, not when?)
> 
> I am trying to decide which mics I may buy. I actually think the ones in ironmine's picture look the most suitable I have seen - shape and size like I mean - because there is minimal material in the earflaps and the mic is really in the opening of the ear canal, and not a little bit further to the outside like many others. Also these resemble the Smyth Research mics more than any of the others and Smyth sure know something about HRIR/PRIR measurements. The plastic clips - even more with the cable running around - inside the earflaps of the soundprofessional worry me because how could they not change the reflections/influence of the earflaps? I would sooner suspect those plastic clips cause problems. But it is just an intuitive feeling.


My next mics will definitely be Primo EM258 capsules, either in an IEM shell or simply glued to an ear plug. This will allow me to make them so that the mic sits exactly at the ear canal opening and the ear canal is blocked. Plenty of cheap IEM options on AliExpress for less than $2. And as a matter of fact, I just ordered 3 pairs of IEMs for $3.46.


----------



## jaakkopasanen

Hmm. Now I also found these:
https://micbooster.com/primo-microphone-capsules/58-primo-em172-z1-module-with-35mm-plug-15m.html
https://micbooster.com/primo-microp...primo-em172-module-35-mm-plug-thin-cable.html
https://store.lom.audio/collections...cts/usi-phantom-adapter?variant=4542168629280

Add IEM housings or ear plugs and that's a complete set of binaural mics for around $100 with very little work. Those are Primo EM172, which are 10 mm capsules, so I don't know how well they would fit at the ear canal opening. Need to test with some dummy capsules first.


----------



## johnn29

I had that channel balance issue with many of my recordings - I figured I'd messed up something in my DAW. I got lucky and got a bang-on recording on one of my attempts; I actually A vs B'd it against the real speakers and it sounded very similar, so I'm happy with that one. But I need to do more recordings for the room correction and the other rooms I have.

If blocking the ear with a regular foam plug works I can continue to use the sound professional mics I have - otherwise I'll need a new solution.


----------



## johnn29 (Nov 5, 2019)

I've run 4 recordings today and each of them had channel balance issues with the Sound Professionals mics. It seems I've only got one recording from another day that's bang on for channel balance. It's very hard to get right - not sure if it's the headphone compensation or the speaker measurement. My setup makes it harder because I'm trying to use the Bose 700s, which have a high clamp force.

On the positive side, you can fix the issue with HeSuVi's attenuation. If you play the white noise Atmos test tones for the center channel, you can adjust attenuation until both L and R are equal in Voicemeeter. But obviously this has an impact on the fidelity of the HRIR.


----------



## jaakkopasanen

@arksergo have you had a channel balance issues with your ear plug style mics?


----------



## Dasm

What about these mics ? https://ru.aliexpress.com/item/3290...&terminal_id=6da114871b834f97a495a3ab513237b9
Only 4x1.5 mm, and it can sit completely inside the ear canal near the eardrum.


----------



## arksergo (Nov 5, 2019)

jaakkopasanen said:


> @arksergo have you had a channel balance issues with your ear plug style mics?


No, I haven't. I got a very good result on the first try.


----------



## jaakkopasanen

Dasm said:


> What about these mics ? https://ru.aliexpress.com/item/32904312556.html?spm=a2g0o.productlist.0.0.749368cby4x4t5&algo_pvid=5afb3969-fdcb-411e-883d-34335637ddee&algo_expid=5afb3969-fdcb-411e-883d-34335637ddee-19&btsid=ac892c58-bb17-4192-b9ac-1769fda6effd&ws_ab_test=searchweb0_0%2Csearchweb201602_6%2Csearchweb201603_55&dp=5c5295331d1312d8703126c31a2b4dff&dp=5c5295331d1312d8703126c31a2b4dff&af=843361&af=843361&cv=47843&cv=47843&afref=&afref=&mall_affr=pr3&mall_affr=pr3&aff_platform=aaf&cpt=1572976982385&sk=VnYZvQVf&aff_trace_key=0bfd625ee04d465d8629b786236c081d-1572976982385-02840-VnYZvQVf&terminal_id=6da114871b834f97a495a3ab513237b9
> Only 4x1.5 mm and can complete sits inside earchannel near drum


There is no info about frequency response, sensitivity or noise.



arksergo said:


> No, I haven't. I got a very good result on the first try.


Have you made more than one measurement? I'm wondering how reproducible your results are. I have one good stereo measurement with close to perfect stereo balance, but it's really hard to reproduce this success with my mics.

I might cut open the hooks of my mics to take the capsule out and place it in an ear plug. That would be the fastest way for me to test my hypothesis.


----------



## Dasm

jaakkopasanen said:


> There are no info about frequency response, sensitivity or noise.
> 
> 
> Have you made more than one measurement? I'm wondering how reproducible your results are. I have one good stereo measurement with close to perfect stereo balance, but it's really hard to reproduce this success with my mics.
> ...


Frequency response is easy to compensate. I bought 5 pcs, will show you measurements later.


----------



## arksergo

jaakkopasanen said:


> Have you made more than one measurement


No, I measured only once. I am going to do some additional measurements this weekend; I want to try to get a big virtual room for movies and a separate layout for 5.1 movies.


----------



## 71 dB (Nov 5, 2019)

I tried electret mics with a Jecklin disc and as ear mics, but it's not working. The signal-to-noise ratio is complete rubbish. Weird noises like ghost hunter gear! Useless! It was my first try with mics, so I probably made all the mistakes. Depressing.






Using this circuit and feeding an Olympus LP-5 recorder (I have tried both the line in and mic connections). I don't know what is wrong. Would this help:


----------



## johnn29

jaakkopasanen said:


> There are no info about frequency response, sensitivity or noise.
> 
> 
> Have you made more than one measurement. I'm wondering how reproducible your results are. I have one good stereo measurement with close to perfect stereo balance but it's really hard to reproduce this success with my mics.
> ...



Is there a way we can ask Impulcifer to assume channel balance with the headphones and then attenuate L or R to get balance? When I correct for it via HeSuVi's attenuation, I'm hard pressed to find anything "wrong" with the sound. It sounds really good and is bang on center then. Again, I A vs B'd against the real speakers and they're remarkably similar. It's not identical, but even my naturally perfect measurement wasn't identical.

The only thing I find frustrating about playing with the balance is that I can't quickly A vs B against my perfect recording. Although if I can't pick a winner with a 5 second pause, perhaps that means there's no real better version!


----------



## arksergo

71 dB said:


> I tried electret mics with a Jecklin disc and as ear mics, but it's not working. The signal-to-noise ratio is complete rubbish. Weird noises like ghost hunter gear! Useless! It was my first try with mics, so I probably made all the mistakes. Depressing.
> 
> 
> 
> Using this circuit and feeding Olympus LP-5 recorder ( I have tried both line in and mic connection). I don't know what is wrong. Would this help:


I had a hard time trying to get a good signal from my mics too. In fact it was the most challenging part for me, because I am not any good at analog electronics.
Check out this link: http://www.epanorama.net/circuits/microphone_powering.html
Also, you can find a good schematic for a measurement mic here (Russian only): http://cxo.lv/index.php/solder/micamp/111-micamp01, where the author suggests using this scheme:





I used this scheme, modified for balanced output and phantom power with an LM358 op amp. It is quite good, but you should keep the wires from the mics to the amp as short as possible. Mine were about 20 cm and I ended up using it like this:


----------



## jaakkopasanen

johnn29 said:


> Is there a way we can ask Impulcifer to assume channel balance with the headphones and then attenuate L or R to get balance? When I correct for it via HeSuVi's attenuation I'm hard pressed to find anything "wrong" with the sound. It sounds really good and is bang on center then. Again I A vs B against the real speakers and they're remarkably similar. It's not identical though, but even my naturally perfect measurement wasn't identical.
> 
> Only thing I find frustrating playing with the balance is then I can't A vs B with my perfect recording quickly. Although if I can't pick a winner with a 5 second pause, perhaps that means there's no real better version!


That shouldn't be too much work. Would you care to create an issue about this?


----------



## jaakkopasanen

Small update: headphone plots are now produced before HRIR processing. This makes it possible to run Impulcifer right after the headphone compensation recording to check the left and right ear frequency responses. This will make it faster to iterate on mic placement, because the headphone recording - processing - mic adjustment cycle can now be done before even the first HRIR recording has been made. So at this stage the folder only needs to contain headphones.wav, and the plots folder will have Headphones.png.

This allowed me to do a new measurement with good channel balance.


----------



## 71 dB

arksergo said:


> I had hard times trying to get good signal from my mics too. In fact it was the most challenging part for me, because I am not any good in analog schematics.
> Check out this link: http://www.epanorama.net/circuits/microphone_powering.html
> Also you can find good scheme for measurement mic here (russian only): http://cxo.lv/index.php/solder/micamp/111-micamp01, author suggests to use this scheme:
> 
> ...



Thanks! Your gear looks badass!


----------



## Joe Bloggs (Nov 9, 2019)

You can always assume that the left and right ear should receive HRIRs with the same frequency response and hence, for any measurement where there's a left and right ear version,
1. Calculate average of responses from the two ears
2. Equalize the impulses for the left and right ears to the average response, using minimum phase filters.  The EQs should be octave smoothed to a certain degree so that the filters don't "try too hard".

This will yield a very balanced soundstage in most cases, yet sound much superior to simply making the left and right impulses exactly the same. I know - I've been doing it this way for years.

Incredible idea, isn't it?
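The two steps above can be sketched in numpy like this (the smoothing fraction and the flat example responses are my own assumptions; the resulting dB curves would then be realized as minimum phase filters):

```python
import numpy as np

def octave_smooth(db, freqs, fraction=3):
    """Crude 1/fraction-octave smoothing: average each point over its
    log-frequency neighborhood."""
    smoothed = np.empty_like(db)
    for i, f in enumerate(freqs):
        lo, hi = f * 2 ** (-0.5 / fraction), f * 2 ** (0.5 / fraction)
        mask = (freqs >= lo) & (freqs <= hi)
        smoothed[i] = db[mask].mean()
    return smoothed

def balance_corrections(left_db, right_db, freqs):
    """Step 1: average the two ears. Step 2: smoothed EQ (in dB) taking each
    ear to the average; apply these as minimum phase filters downstream."""
    avg = (left_db + right_db) / 2
    return (octave_smooth(avg - left_db, freqs),
            octave_smooth(avg - right_db, freqs))

freqs = np.logspace(np.log10(20), np.log10(20000), 200)
left = np.zeros(200)             # left ear response in dB
right = np.full(200, 2.0)        # right ear measures 2 dB hot everywhere
eq_l, eq_r = balance_corrections(left, right, freqs)
# eq_l is +1 dB (boost left), eq_r is -1 dB (cut right)
```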


----------



## jaakkopasanen

Joe Bloggs said:


> You can always assume that the left and right ear should receive HRIRs with the same frequency response and hence, for any measurement where there's a left and right ear version,
> 1. Calculate average of responses from the two ears
> 2. Equalize the impulses for the left and right ears to the average response, using minimum phase filters.  The EQs should be octave smoothed to a certain degree so that the filters don't "try too hard".
> 
> ...


I've been thinking about this but wasn't sure if that would be ideal since not everyone's ears are symmetrical. Maybe this is not ideal but good enough. At least I could add a parameter for doing this so users could try it out easily and hear if it improves the result in their case.


----------



## jaakkopasanen

Busy day. Got plenty of tools for solving channel balance issues.

Firstly, the headphone and result plots now show both sides in the same graph for easier comparison. This will make it easier to play around with mic placement and to figure out what is wrong with the channel balance.

The bigger change is the channel balancer tool, which adjusts left and right side levels or frequency responses to match each other. This usually nails the channel balance, but terribly bad recordings cannot be made perfect. Here is the documentation for the new feature: https://github.com/jaakkopasanen/Impulcifer#channel-balance-correction

See here for squiggly lines: https://imgur.com/a/cWfO8Zt
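Level-only balancing (as opposed to matching the full frequency responses) could look something like this minimal numpy sketch; `balance_levels` is a hypothetical helper, not Impulcifer's actual code:

```python
import numpy as np

def balance_levels(left_irs, right_irs):
    """Scale each side's impulse responses so both sit at their common
    mean RMS level (hypothetical level-only channel balancer)."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    left_rms = np.mean([rms(ir) for ir in left_irs])
    right_rms = np.mean([rms(ir) for ir in right_irs])
    target = (left_rms + right_rms) / 2
    return ([ir * target / left_rms for ir in left_irs],
            [ir * target / right_rms for ir in right_irs])

# Example: the right side was recorded 3x as loud as the left
left_out, right_out = balance_levels([np.ones(8)], [3.0 * np.ones(8)])
```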

I also made mock microphones by gluing an 11 mm disc to a foam ear plug to see if a 10 mm or so microphone would fit into the ear canal opening. It doesn't, and I have fairly large ears. I found out that FEL Communications also sell pre-made EM258 mono modules and RODE VXLR+ adapters. Two of both would be needed, which would make about 100€.
https://micbooster.com/primo-microphone-capsules/97-primo-em258-mono-module-with-35mm-plug-10m.html
https://micbooster.com/leads-and-ad...r-converter.html?search_query=vxlr+&results=1
I sent them an email asking if they could provide binaural mics with matched EM258 modules and an ear plug style support, and possibly adapters to make them work with both recorders and USB audio interfaces.


----------



## johnn29

Will give this a try as soon as I can. Does the channel balance logic have any consequences for the room correction? Ideally, with my next recordings I wanted to work on getting that incorporated also.


----------



## jaakkopasanen

johnn29 said:


> Will give this a try as soon as I can. Does the channel balance logic have any consequences for the room correction? Ideally, with my next recordings I wanted to work on getting that incorporated as well.


Channel balancing is done after the room correction using the final results. Room correction is not needed for channel balance correction. New recordings aren't needed either; channel balance can be corrected on older existing recordings as well.


----------



## johnn29

I'm trying to make my own notes for the room correction measurement process and hopefully help improve the documentation. Can you tell me if I've got it right?

0) Run a regular recording with the binaural mics, but do not process it.
1) Replace the csv/text file with the mic calibration data from MiniDSP's website
2) Select the UMIK-1 in Windows as the default device and run (I've renamed the default to prefix it with room and use --channels=1)

python recorder.py --channels=1 --play="data/sweep-seg-FL,FC,FR,SR,BR,BL,SL-7.1-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/*room*-FL,FC,FR,SR,BR,BL,SL.wav"

3) Run

python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir"

Where I'm confused is how to tell Impulcifer that you've run the room correction recording. Part 3 is just the regular process; if I don't specify --no_room_correction, will it process the room correction by default? How will it know which file is the room correction recording?


----------



## jaakkopasanen

johnn29 said:


> I'm trying to make my own notes for the room correction measurement process and hopefully help improve the documentation. Can you tell me if I've got it right?
> 
> 0) Run a regular recording with the binaural mics, but do not process it.
> 1) Replace the csv/text file with the mic calibration data from MiniDSP's website
> ...


You got it pretty much right with minor changes.

Impulcifer uses the same principle for finding the room recording files as for the HRIR recording files: it looks at the file name patterns. For room recordings the pattern is room-<CH1>,<CH2>,...,<CHN>-left|right.wav. If such files exist, Impulcifer will do room correction unless --no_room_correction is present.
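As a sketch, the file name pattern described above could be matched like this (an illustrative regex and helper, not Impulcifer's actual implementation; channel names like FL, FC, SR are assumed to be two-letter codes):

```python
import re

# Illustrative pattern for room recording file names: room-<CH1>,...,<CHN>-left|right.wav
ROOM_FILE = re.compile(r'^room-([A-Z]{2}(?:,[A-Z]{2})*)-(left|right)\.wav$')

def parse_room_file(name):
    """Return (channel list, side) if the name matches the room pattern, else None."""
    m = ROOM_FILE.match(name)
    if m is None:
        return None
    return m.group(1).split(','), m.group(2)
```

A name like `room-FL,FC,FR,SR,BR,BL,SL-left.wav` would then yield the seven channel names and the side `left`, while non-matching files are simply ignored.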

Actually you have to run the room recording twice: first with the measurement mic in the same location as the left ear mic, and then again at the location of the right ear mic.
python recorder.py --channels=1 --play="data/sweep-seg-FL,FC,FR,SR,BR,BL,SL-7.1-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/room-FL,FC,FR,SR,BR,BL,SL*-left*.wav"
python recorder.py --channels=1 --play="data/sweep-seg-FL,FC,FR,SR,BR,BL,SL-7.1-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/room-FL,FC,FR,SR,BR,BL,SL*-right*.wav"

Or you could run it once with the measurement microphone at the center of the head and copy that file as -left and -right, but then it cannot be guaranteed that the results will be stellar. Room correction EQ extends all the way up to 20 kHz, and the frequency response can change quite a lot between two locations 8 cm apart, depending on the room. It could also be just fine. You can use webcam.html to help place the measurement mic in the correct location.

I'm going to implement general *room.wav* support which doesn't EQ up to such high frequencies, so the mic location doesn't have to be as precise.


----------



## johnn29 (Nov 13, 2019)

Excellent - I'll give it a shot now.

This morning was productive: I nailed a good recording with my LS50 setup. Balance was off, but my process of using the center channel of the 7.1.4 Atmos test tones worked well to figure out the balance. Impulcifer then adjusted it with the new commands. Compared even to my old recording, I like this one much better.

I did try playing around with the other balancing methods (mids and both avgs). They did funky things with the localisation for me, which felt very unnatural. The numerical balance is the best method.

Edit: I tried to place the UMIK properly, but because of my setup it's extremely difficult to get it in exactly the same position. I'm taking measurements on a sofa. I have an idea of using a laser level, which I'll try another time.

The next thing I'll do is wait for the AirPods Pro to get added to AutoEQ so I can transform my over-ear measurements to them!


----------



## jaakkopasanen

I added another channel balancing strategy: "trend". This takes the frequency response difference between the left and right sides and smooths it heavily. This smoothed curve is then used as the equalization target for the right side. Because the smoothing is so heavy, this doesn't create the uncanny feeling that "avg" or "min" can in some cases, while still managing to balance bass, mids and treble. I tested it with two measurements, one with quite good natural channel balance and one with poor balance. I prefer trend over all other strategies for both. Here's a graph illustrating the trending.
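A minimal sketch of the trend strategy as described (an illustrative reconstruction, not the actual Impulcifer code; a very wide moving average stands in for the heavy smoothing):

```python
import numpy as np

def trend_balance(left_db, right_db, window=None):
    """Apply the heavily smoothed left-minus-right difference as an
    equalization target for the right side (illustrative sketch)."""
    diff = left_db - right_db
    if window is None:
        window = max(len(diff) // 4, 1)   # very wide window = heavy smoothing
    kernel = np.ones(window) / window
    # Edge-padded moving average so the trend doesn't droop at the band edges
    padded = np.pad(diff, (window // 2, window - window // 2 - 1), mode='edge')
    trend = np.convolve(padded, kernel, mode='valid')
    # Narrow notches and peaks survive; only the broad trend gets corrected
    return right_db + trend
```

With a broad level offset between the sides, the smoothed trend recovers that offset and brings the right side's bass, mids and treble in line with the left, without chasing narrow features.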


----------



## jaakkopasanen

I took the leap and cut the hooks off from my mics and glued them to ear plugs. Works fantastically and now they are so much more stable. I got very good results on the first try with good channel balance although using balance correction improves it. I'll do a surround setup measurement tomorrow.

https://imgur.com/a/0ELAqti


----------



## johnn29 (Nov 17, 2019)

Trend looks interesting - I assumed that's what room correction was going to be. You just flatten out the natural response. Did you compare trend to the (manual) numerical mode?

I've got to get into the habit of saving my damn recordings, I keep having to re-measure to try out new stuff!

The idea of gluing to a foam plug sounds ideal - they'll be much more stable that way. You could also compensate multiple headphones with one recording because the mics stay put better. I ended up ordering the XLR version because I assumed a higher SNR would be better, so I'm a bit reluctant to cut into those more expensive ones.

Edit: I did have a recording saved. I just tried trend - it works very well - the same as manual (numerical) correction really. Really nice feature to help make this a turnkey solution without much faffing around.

Edit: How do I get the chart for the channel balance output like you've done above? --plot doesn't output it. I'm pretty surprised because my stereo imaging sounds better over the headphones than on my real speakers. I want to see what's going on in the charts! Amazing


----------



## jaakkopasanen

I calibrated my binaural mics against my MiniDSP UMIK-1. Quite surprisingly, the frequency response is within ±0.5 dB between 90 Hz and 9 kHz, with roll-off on both ends. I expected bigger variation. There is a level difference between left and right, but I already suspected that.

Now I'm starting to wonder what would happen if I didn't do headphone compensation at all but instead used the calibration files produced for the binaural mics and baked in Harman target equalization (without the bass boost) for my headphones. Headphone compensation seems to be quite a tricky business because the headphone and mic placements affect the results more than I would like. It could be that mic placement affects the HRIR measurements similarly and the headphone compensation is really needed even when the mics have been calibrated. I need to test this hypothesis.

There's a tool for the mic calibration now in research/mic-calibration: https://github.com/jaakkopasanen/Impulcifer/tree/master/research/mic-calibration

Here are the same images on Imgur for posterity, in case something happens to the readme: https://imgur.com/a/9YzJtwx



johnn29 said:


> Trend looks interesting - I assumed that's what room correction was going to be. You just flatten out the natural response. Did you compare trend to the (manual) numerical mode?
> 
> I've got to get into the habit of saving my damn recordings, I keep having to re-measure to try out new stuff!
> 
> ...



I did compare trend with the manual numerical method and I prefer trend because it balances out bass, mids and treble. With the numerical method it's possible that male voices are in the center but female voices are off-center, or vice versa.

The trend chart is not included in Impulcifer; that was a one-time trick I did. You could add a new line, trend.plot_graph(), on line 271 in hrir.py if you wanted to see it.


----------



## johnn29

I saw that calibration rig you made - impressive DIYing.

I've been listening to the results of the latest HRIR I made with the trend - it's so much like I thought room correction would be. It seems to have improved everything. I'm noticing surround effects more, the harsh treble I sometimes thought I noticed has gone, and everything seems to have the smooth natural response of the LS50s. The thing I really noticed most was that the virtual center channel was so much more precise than my real speakers. Previously I'd been using ffdshow to output Dolby Pro Logic over headphones, like I do on my real system, to get a perfect center channel. Now I don't need to - the virtual center sounds like a real center, just like on one of the OOYH presets or Dolby Headphone.

And it's not just stereo music - in the Dolby Atmos test clips I play, I notice the side and rear channel placement much better when panning, e.g. jets flying from behind, to the side and then to the front. The 7-channel test tones are easy to localise, but it's the panning effects that rely on good imaging between speakers to get right.

The room correction routine you've built - does that basically just target a Harman Room Loudspeaker Target? Or does it do anything with T20 etc?

It's such a shame that Atmos/DTS:X can't be decoded in windows.


----------



## jaakkopasanen

johnn29 said:


> I saw that calibration rig you made - impressive DIYing.
> 
> I've been listening to the results of the latest HRIR I made with the trend - it's so much like I thought room correction would be. It seems to have improved everything. I'm noticing surround effects more, the harsh treble I sometimes thought I noticed has gone, everything seems to have the smooth natural response of the LS50s. The thing I really did notice most was that the virtual center channel was so much more precise than my real speakers. Previously I'd been using a ffdshow to output in Dolby Prologic over headphones like I do on my real system to get a perfect center channel. Now I don't need to - virtual center sounds like a real center just like on one of the OOYH presets or Dolby Headphone
> 
> ...


I'm very glad it works so well for you. I might make trend the default if more people here could try it out and report their experiences.

Currently room correction is only a minimum phase EQ, but I will try to implement decay management to ensure that all frequencies decay at the same speed. Also, suppressing reflections that happen before 30 milliseconds could help fool the brain into thinking it's hearing the recording venue's acoustics instead of the room where the listener is actually sitting.


----------



## musicreo

I have tested Impulcifer and it works great! I have measured only stereo speakers but want to add an LFE dummy head recording.
I just don't understand how I can add a "real" LFE channel in HeSuVi?


----------



## Hooknej

I just reprocessed my recordings with the trend channel balance and when swapping back and forth between the old hrir without balancing and the new one with, surround effects are better placed in the 'room' and the center channel sounds better localized. I watched a few scenes from favorite movies that I test often and definitely prefer the balanced hrir - I think it sounds bigger and more spacious in the room. You're doing awesome work!


----------



## johnn29

musicreo said:


> I have tested Impulcifer and it works great! I have measured only stereo speakers but want to add an LFE dummy head recording.
> I just don't understand how I can add a "real" LFE channel in HeSuVi?



You don't need to add an LFE track - if you run through the Atmos/DTS:X test tones, you still get playback when they come to the LFE channel.

If the speakers you measured don't produce sub-bass, you can use Peace to boost those frequencies so that your playback will work. The speakers do produce them, just at a lower volume.


----------



## musicreo

johnn29 said:


> You don't need to add an LFE track - if you run through atmos/dts x test tones when they come to the LFE channel you still get playback.
> 
> If the speakers you measured don't produce subbass you can use Peace to boost those frequencies so that your playback will work. The speakers do produce them but just at a lower volume.



I already added a simple text file in HeSuVi that boosts the LFE channel before processing, but I don't think it sounds as good as a "real" LFE channel.


----------



## jaakkopasanen

I noticed some problems with the HRIR measured after cutting the hooks off my mics. The overall tonal balance is a lot brighter now, which I don't like, and when I'm using room correction there is a weird low frequency ringing that is very noticeable. I need to investigate this more. In the meantime I wouldn't recommend that anyone cut the hooks off their The Sound Professionals mics.


----------



## johnn29 (Nov 19, 2019)

Perhaps the glued ear plugs are adding physical damping to the microphone capsule that it's not designed for? Sucks.

I did some new recordings today and got the most holographic and realistic recording yet. I'm not sure if it's placebo, but I recorded at lower levels today and left more headroom. Perhaps a lower playback volume excites fewer resonances, which leads to a more accurate recording?

It's been a while since I used Impulcifer with proper open backs - using really open/light ones like the Grado GW100 makes the experience ridiculously real in the room you made the recording in. My main use case is mobile and when there's noise in the house, but it's so nice to have the Grados for late night listening sessions.

Edit: I also used the single speaker method to take a near field (arm's length/3 foot) measurement of one of my LS50s. The recording process was a breeze this time around with the new recorder.py compared to Audacity. The nearfield one is really good for use with a monitor or a tablet on the go. And there's no way I could fit my LS50s on my desk - but this way I get to use them virtually in the position they'd be in on my desk.


----------



## jaakkopasanen

johnn29 said:


> Perhaps the glued ear plugs are adding physical dampening to the microphone capsule which it's not designed for? Sucks.
> 
> I did some new recordings today. Getting the most holographic and realistic recording yet. I'm not sure if it's placebo but I recorded at lower levels today and left more headroom. Perhaps a lower playback volume excites less resonances which leads to a more accurate recording?
> 
> ...



I don't think it's the plugs, because there is already a silicone wrapper on the capsules. What I suspect is that the same problem has always been there. Depending on the recording, sometimes it's both ears, sometimes only one, and sometimes I've gotten both mics in a good placement. It could be that with the capsules closer to the ear canal opening, and without the hooks messing with the pinna, the pinna response is captured better, but the headphone compensation doesn't capture the pinna response when wearing headphones. Headphone compensation is supposed to cancel out the pinna effect when wearing headphones so that there is only one pinna response in playback: the one captured in the HRIR. According to the literature I've read, headphone compensation isn't quite as solid a solution as I'd like.

I'm going to run some more experiments to investigate what the headphone compensation is doing to the frequency response and how the current mics compare to the earlier ones made with the hooks still in place. At least now there should be superior reproducibility; I just have to crack the compensation. Maybe I'll implement manual EQ tools to see how that would work out.

I've been wondering about the recording levels too. I get some pretty nasty resonances in my apartment if I crank up the volume too much. A definite plus of the ear-plug-mounted mics is that I can now use high volumes without risking my hearing (or could, if I didn't have the resonance issue). This measurement process definitely has a lot of practical problems one needs to overcome to get it perfect. It would be nice to have algorithmic solutions for all of these, but that might be naive. Channel balance definitely has a very big impact on the final results. This might be the main reason why my virtualized speakers with the HD 800 sound better than the physical speakers.


----------



## jaakkopasanen

Actually I'm not sure if the new measurements are really worse than the best one I made with the hooks in place. The channel balance maybe wasn't as good, but that could be because I now have a surround recording and the trend balancer could be confused by the side, center and rear channels. I made a quick adjustment to only use the FL and FR channels for channel balancing and it sounds very good. The other thing was level matching: the original with hooks is a lot louder, but when I boost the new one with Audacity to roughly the same loudness it's even better in a way. It's very hard to make conclusive observations because the brain adapts so fast to these changes.

I still have the low frequency ringing, which could be caused by the room acoustics or resonances, because I made some temporary adjustments to the room layout to try and capture better acoustics. The room correction FR plot would indicate this as well: there is a very steep rise at 55 to 65 Hz and two sharp spikes around 200 Hz: https://imgur.com/a/O2N7wco. These kinds of features in the EQ filter can easily cause ringing. I need to do something about the algorithm to avoid this...


----------



## johnn29

The trend channel balance algorithm you implemented has had the biggest impact on fidelity for me. Prior to that, the sound was definitely out of my head and way better than anything else I'd experienced. But it didn't really sound like the system I measured - the characteristics of the speaker/room, I mean. Since the channel balance fix I can actually tell that it's my LS50s in the office, or my R300s in my theater. It actually sounds like my speakers.

That method has also made measurement much simpler - today I did just one speaker recording and compensated 3 headphones back-to-back. They all sounded identical to my ears - one was a cabled Creative Aurvana SE, another a Grado GW100 and finally a Bose 700. Before that, getting multiple headphones working the same was very difficult; I used to have to check each HRIR, which was quite painful.

I didn't realise a surround measurement would mess with the trend setting. All my recordings have been surround. Now that I've become meticulous about saving recordings, I'm excited to re-process with only the L and R once you push it to master.

We've briefly talked about it before, but I find running a flat target via oratory's measurements worked really well for my IEMs. I know there's the 4 kHz resonance peak from the simulator that shouldn't be flattened. Perhaps you can develop a real flat target in AutoEQ that takes that into account?

The only issue I had with low volume recordings was the one I raised today on Github - one of the algorithms got confused by the low level. I know from calibrating my subs over the years that when you go for max SPL, the waterfall plots fall apart. With the Behringer DAW I have and the XLR Sound Professionals, I should be able to go for really quiet recordings.

I'm selling some B&W 803s soon that I no longer use. Kinda cool that I can "copy" my speakers before selling them. Speaker piracy! 

As well as getting a proper transform measurement completed for my AirPods Pro tomorrow, I plan on measuring the B&Ws too now that I've practiced the single speaker method. The single speaker method also makes the room EQ measurement much easier because I don't need to sit on my sofa.


----------



## musicreo

Maybe a stupid question, but how and at which step can I use the trend channel balance in Impulcifer?


----------



## jaakkopasanen

johnn29 said:


> The trend channel balance algorithm you implemented has been the biggest impact on fidelity for me. Prior to that - the sound was definitely out of my head and way better than anything else I'd experienced. But it didn't really sound like the system I measured. The characteristics of the speaker/room I mean. Since the channel balance I can actually tell that it's my LS50's in the office, or my R300's in my theater. It actually sounds like my speakers.
> 
> That method has also made measurement much simpler - today I did just one speaker recording and compensated 3 headphones back-to-back. They all sounded identical to my ears - one was a cabled Creative Aurvana SE, another a Grado GW100 and finally a Bose 700. Before that, getting multiple headphones working the same was very difficult; I used to have to check each HRIR, which was quite painful.
> 
> ...


What exactly do you mean by flat? And what do you mean by a real flat target?



musicreo said:


> Maybe a stupid question, but how and at which step can I use the trend channel balance in Impulcifer?


No stupid questions here. In the final step, when you run impulcifer.py, you simply add the parameter --channel_balance=trend. That's literally the only thing you have to do.


----------



## johnn29

I made the naive assumption that the Out of Your Head, and now Impulcifer, recordings have all the HRTF information in them. Because most of the ear/headphones we use have some sort of target built in that tries to emulate a loudspeaker (diffuse field, Harman etc.), you need to flatten your headphone response, and then the HRIR sounds much more natural and not overly harsh. But I know the way I compute the flat EQ from AutoEQ is wrong because it also flattens some of the dummy head resonances that shouldn't be flattened.

Now a "real flat" target for the purposes of Impulcifer would know that, say, oratory's measurements use a certain GRAS coupler that has certain challenges when it's a deep-insert IEM, and it wouldn't try to flatten resonances from the raw measurement results, or mess with any EQ beyond 10 kHz, because ear canal length will be the primary factor on the response that high. But it would still flatten the headphone's built-in curve, because the HRIR contains all the room we need.

Does that make sense? Or am I just on the wrong track?


----------



## musicreo

jaakkopasanen said:


> No stupid questions here. In the final step, when you run impulcifer.py, you simply add the parameter --channel_balance=trend. That's literally the only thing you have to do.


So something like this:
python impulcifer.py  --channel_balance=trend --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir" ?

But this does not change anything in the final HeSuVi file?


----------



## jaakkopasanen

musicreo said:


> So something like this:
> python impulcifer.py  --channel_balance=trend --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir" ?
> 
> But this does not change anything in the final HeSuVi file?


Like that, yes. Of course it's possible that you already have good channel balance which doesn't require correction. You can check this by inspecting the Results.png graph in the plots folder when not using --channel_balance. If the purple difference curve looks flat, then you have naturally good channel balance. The trend balancing method doesn't affect the narrow notches and peaks in the difference curve but works on a bigger scale, balancing bass, mids and treble.


----------



## musicreo (Nov 20, 2019)

The original curves look like this. It's not really flat, but still I don't hear any difference after balancing.


----------



## johnn29 (Nov 20, 2019)

I tried the transform headphone feature for my IEMs using my Bose 700 - it seems to work - the image is extremely realistic, but it's severely lacking in bass compared to no headphone compensation and EQing the AirPods Pro to flat.

This was the command I ran


```
python frequency_response.py --input_dir="rtings/data/onear/Bose Noise Cancelling Headphones 700" --output_dir="my_results/Bose 700 (AirPods Pro)" --compensation="rtings\resources\rtings_compensation_avg.csv" --sound_signature="results/rtings/avg/Apple AirPods Pro/Apple AirPods Pro.csv" --equalize --parametric_eq --max_filters=5+5 --ten_band_eq --bass_boost=4
```

I tried adding a boost of 10 but to no avail.

I need to troubleshoot more or pick up a pair of decent open backs that can act as a simulator headphone. The DT990s that I have will have too much treble sibilance for that purpose. Is there a way I can bake in the EQ compensation from oratory on my DT990s to remove the nasty treble and use that as the simulator headphone? That'd be the ideal headphone for critical Impulcifer use.

Edit: I also tried the trend balancer with a stereo-only recording vs 7.1. The virtual center was much more accurate with the 7.1 recording. The stereo recording felt like it expanded the soundstage around me.

I also compared my headphone compensated Bose 700 recording with no headphone compensation and a flat EQ with oratory as a source. They're very, very similar, but the treble is more on point with the actual headphone compensation.


----------



## jaakkopasanen

johnn29 said:


> I made the naive assumption that the Out of Your Head, and now Impulcifer, recordings have all the HRTF information in them. Because most of the ear/headphones we use have some sort of target built in that tries to emulate a loudspeaker (diffuse field, Harman etc.), you need to flatten your headphone response, and then the HRIR sounds much more natural and not overly harsh. But I know the way I compute the flat EQ from AutoEQ is wrong because it also flattens some of the dummy head resonances that shouldn't be flattened.
> 
> Now a "real flat" target for the purposes of Impulcifer would know that, say, oratory's measurements use a certain GRAS coupler that has certain challenges when it's a deep-insert IEM, and it wouldn't try to flatten resonances from the raw measurement results, or mess with any EQ beyond 10 kHz, because ear canal length will be the primary factor on the response that high. But it would still flatten the headphone's built-in curve, because the HRIR contains all the room we need.
> 
> Does that make sense? Or am I just on the wrong track?



Something like this could be relevant for OOYH if it doesn't have headphone compensation. I remember reading that it doesn't, but I could just as easily be imagining this. If there is no headphone compensation, then some aspects of the frequency response would need to be flattened, but definitely not all. Basically it's very hard to say what would have to be done if the headphone compensation is missing entirely. Maybe compensate out the pinna response, but the headphone FR measurements include all the other aspects of the ear as well, like ear canal resonances, and these should not be flattened even with OOYH.

The HRIR doesn't contain anything beyond the ear canal opening, so those parts should not be touched - unless the measurement has been made inside the ear canal using silicone tube mics, but that is only ever done in academic settings and requires a physician to insert the tubes.

In conclusion: a flat target is never desired. Compensation for the headphone's pinna activation is, and that's done with headphone compensation in Impulcifer.



musicreo said:


> The original curves look like this. It's not really flat, but still I don't hear any difference after balancing.



Ooh, that's beautiful. No wonder the trend doesn't do anything - you already have near perfect channel balance. I've never managed to make a measurement like that. Enjoy!



johnn29 said:


> I tried the transform headphone feature for my IEM's using my Bose 700 - it seems to work - the image is extremely realistic but it's severely lacking in bass compared to no headphone compensation and EQing the Airpod Pros to flat.
> 
> This was the command I ran
> 
> ...



Are you using this transform EQ during headphone compensation? The better way is to record headphone compensation without any EQ and then apply the transform EQ during processing. This would mean you have to point the input dir to AirPods Pro and the sound signature to Bose 700 results. Apply a 4 dB bass boost if you use pre-computed results. Then you take the minimum phase impulse response WAV file and copy it to the Impulcifer folder as eq.wav. This way the transform EQ, which turns AirPods Pro into Bose 700, will be incorporated into the produced HRIR.
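Conceptually, the eq.wav step just bakes the transform EQ's impulse response into each measured impulse response by convolution; a minimal sketch (the function name is made up for illustration, not Impulcifer's API):

```python
import numpy as np

def apply_transform_eq(hrir_ir, eq_ir):
    """Incorporate a (minimum phase) EQ impulse response into an HRIR
    by convolving the two impulse responses."""
    return np.convolve(hrir_ir, eq_ir)
```

Convolving with a unit impulse leaves the HRIR unchanged, while any other EQ impulse response shapes its frequency response accordingly.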

Your experience with stereo vs 7.1 balancing is probably due to how well the recording process went rather than the channel balancing algorithm.


----------



## johnn29 (Nov 20, 2019)

Ah, understood about the flat target. I still don't get why it sounds better and more natural to me - I just have to reduce the 4 kHz attenuation and it's virtually there for me.

Yep - I'm copying the eq.wav and doing the headphone compensation then. I guess I got it the wrong way round - I'll try again tomorrow.

Edit: the balancer issue was user error - I compensated the wrong headphones. The center image is slightly tighter with just the stereo recording. Would it be an idea to run the balancer per speaker?


----------



## musicreo

jaakkopasanen said:


> Ooh, that's beautiful. No wonder the trend doesn't do anything because you already have near perfect channel balance. I've never managed to do a measurement like that. Enjoy!



Ok, that explains why I don't hear any difference. Actually this already sounds very close to the real speakers in the room. The funny thing is that I only did one test measurement before, this was my first real measurement with Impulcifer, and I had trouble with the microphone (PUI 5024HD capsule with XLR plug working at approx. 4 V) placement in my ears. I hope I find time over the weekend to do more measurements. I have also put together a second microphone working at 6 V and a microphone with a 3.5 mm plug (but that one looks a bit noisy). So far I have only measured with my HD 555, and I also want to test my AKG 701.


----------



## jaakkopasanen

johnn29 said:


> Ah, understood about the flat target. I still don't get why it sounds better and more natural to me - I just have to reduce the 4 kHz attenuation and it's virtually there for me.
> 
> Yep - I'm copying the eq.wav and doing the headphone compensation then. I guess I got it the wrong way round - will try again tomorrow.
> 
> Edit: the balancer issue was user error - I compensated the wrong headphones. The center image is slightly tighter with just the stereo recording. Would it be an idea to run the balancer per speaker?



I'm thinking of running balancer per speaker pair, so FL+FR, FC, SL+SR and BL+BR.

How much do you attenuate at 4 kHz and how wide is the filter? Are you using this on top of Impulcifer's headphone compensation?



musicreo said:


> Ok, that explains why I don't hear any difference. Actually this already sounds very close to the real speakers in the room. The funny thing is that I only did one test measurement before, this was my first real measurement with Impulcifer, and I had trouble with the microphone (PUI 5024HD capsule with XLR plug working at approx. 4 V) placement in my ears. I hope I find time over the weekend to do more measurements. I have also put together a second microphone working at 6 V and a microphone with a 3.5 mm plug (but that one looks a bit noisy). So far I have only measured with my HD 555, and I also want to test my AKG 701.



Very cool. Could you show us your mics? The capsule looks very good spec-wise: -24 dB sensitivity and 14 dB noise. That's essentially better than the Primo EM172, and it's more readily available and cheaper. Too bad it's a 10 mm capsule, although I'm not sure that's such a bad thing, because my current mics are 9 mm with the silicone casings, and the cable is going to be the biggest problem anyway when trying to insert the mics exactly at the ear canal opening.

I asked if FEL Communications (micbooster.com) could provide stereo EM 258 modules with matched capsules. They agreed to do it on request and to use the thin Mogami microphone cable. So this, but stereo: https://micbooster.com/primo-microphone-capsules/97-primo-em258-mono-module-with-35mm-plug-10m.html. They were also kind enough to tell me which adapters I would need to connect such a stereo module to a USB audio interface with two RODE VXLR+ phantom power adapters: https://www.amazon.co.uk/Plated-Copper-Female-Stereo-Adapter-0-9-Meter/dp/B0785VKZW4/. This is my current best guess for high-performance binaural mics without too much DIYing. Of course one needs to glue these capsules to ear plugs, but that can hardly go wrong. They also have molding putty for this specific purpose: https://micbooster.com/microphone-holders/112-lugguards-custom-moulded-earplugs.html


----------



## musicreo

jaakkopasanen said:


> Very cool. Could you show us your mics? The capsule looks very good spec-wise: -24 dB sensitivity and 14 dB noise. That's essentially better than the Primo EM172, and it's more readily available and cheaper. Too bad it's a 10 mm capsule, although I'm not sure that's such a bad thing, because my current mics are 9 mm with the silicone casings, and the cable is going to be the biggest problem anyway when trying to insert the mics exactly at the ear canal opening.



This is one of the mics. For the first measurement I just put a small rubber band around the capsule and a cover for earbuds. It is adapted from this thread (http://www.hifi-forum.de/index.php?action=browseT&forum_id=71&thread=13437&back=&sort=&z=1) on the German hifi forum. There you can find images of the very simple XLR interior.


----------



## johnn29 (Nov 21, 2019)

jaakkopasanen said:


> I'm thinking of running balancer per speaker pair, so FL+FR, FC, SL+SR and BL+BR.
> 
> How much do you attenuate at 4 kHz and how wide is the filter? Are you using this on top of Impulcifer's headphone compensation?



I only use this for the AirPods/IEMs. I let AutoEQ's parametric EQ get imported into Peace, then just push down the attenuation at around 4 kHz. For the AirPods the automatic calculation outputs 2845 Hz, -6.6 dB, Q 0.9, so I'm just reducing that by 3 dB. That's without any headphone compensation in the HRIR.

I think it was user error - I likely copied the wrong headphone.wav into the directory. The transform works perfectly! Thanks for helping me out so much.

Edit: the other good thing about the transform process is that if you've captured a particularly good recording that you're happy with and get a new set of headphones - you can use transform for the compensation. For example - with a room/speaker combo that no longer exists.


----------



## sander99

@jaakkopasanen: Just now I tried to re-install Impulcifer. (The last time I did anything with it was September 6.)
I skipped the following, assuming this part doesn't need to be done again (could that have been my mistake?):


> Download and install Git: https://git-scm.com/downloads. When installing Git on Windows, use Windows SSL verification instead of Open SSL or you might run into problems when installing project dependencies.
> Download and install 64-bit Python 3. Python 3.8 doesn't work yet. Make sure to check _Add Python 3 to PATH_.


I followed the rest of your list of actions starting from:


> Install virtualenv.
> pip install virtualenv



I renamed the old Impulcifer folder, because first I got a complaint that the folder already existed.
One little remark: twice you wrote 'cd Implucifer' instead of 'cd Impulcifer'. Not a big deal, except that copy-pasting all the commands goes wrong there, of course.

More serious (unless I made a silly mistake I don't know about):
python impulcifer.py --help
now gives the following output:


> Traceback (most recent call last):
> File "impulcifer.py", line 9, in <module>
> from scipy import signal
> File "C:\Users\Laptop\Impulcifer\venv\lib\site-packages\scipy\signal\__init__.py", line 289, in <module>
> ...


----------



## jaakkopasanen

I reworked channel balancing to work in speaker pairs FL+FR, SL+SR, BL+BR and FC. I also figured out the low-frequency ringing problem I had; it was caused by boost below 20 Hz. Now the sub-20 Hz region is not boosted anymore and the included Harman room target has -60 dB at 10 Hz. The demo recordings included in Impulcifer are measured with my mics without hooks, glued to ear plugs. Life is good!



sander99 said:


> @jaakkopasanen: Just now I tried to re-install Impulcifer. (Last time I did something with it was 6 september).
> I skipped the following, assuming this part doesn't need to be done again (could that have been my mistake?):
> 
> I followed the rest of your list of actions starting from:
> ...


Sounds a lot like this issue. Maybe take a look and report back if that solves your case. And thanks for pointing out the typos, I fixed them.


----------



## johnn29 (Nov 24, 2019)

Excited to try out the new channel balance. I can A/B it against the old-method ones I have. Will report back!

Edit: Re-ran my favorite HRIRs with the new balance - it has definitely improved the localisation. It's very hard to tell unless you A/B, but I'd imagine it means more recordings (even bad ones) result in a good HRIR.

Btw - over on the Realiser thread they're posting about differences in microphone placement having a big impact on the HRIR it creates.


----------



## jaakkopasanen (Nov 24, 2019)

@johnn29 I started thinking about that 3 or 4 kHz attenuation you apply. The Harman over-ear target has some modifications compared to speaker listening because headphones are a somewhat different experience. So I created a new room target which I call the virtual room target. This is in essence the Harman room target modified with the difference between Harman's flat loudspeaker in-room response and the Harman over-ear target. The code, results and plots can be found here: https://github.com/jaakkopasanen/Impulcifer/tree/master/research/virtual-room-target

Maybe give it a try and tell me how it sounds. I think I prefer the unmodified Harman target.

*EDIT* I added a light version of the virtual room target which doesn't include boost below 1 kHz. I think I might prefer this to the others. It doesn't make the whole tonality dark but takes the edge off just a bit to make it more laid back and effortless.
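As a sanity check on the arithmetic behind the virtual room target, here is a minimal numpy sketch of the combination described above. The curves are toy values on a three-point grid, not the real Harman data, and `combine_targets` is a hypothetical helper name.

```python
import numpy as np

# Toy sketch of the virtual room target idea:
# virtual = Harman room target + (Harman over-ear target - flat in-room response),
# everything in dB on a shared frequency grid. All values below are made up.

def combine_targets(room_target_db, over_ear_db, in_room_db):
    """Shift the room target by the amount the over-ear target deviates
    from a flat in-room loudspeaker response (pointwise, in dB)."""
    return room_target_db + (over_ear_db - in_room_db)

freqs = np.array([100.0, 1000.0, 10000.0])   # Hz, illustrative grid
room = np.array([4.0, 0.0, -2.0])            # room target, dB
over_ear = np.array([5.0, 0.0, -3.0])        # over-ear target, dB
in_room = np.array([4.0, 0.0, -4.0])         # flat speaker in-room response, dB
virtual = combine_targets(room, over_ear, in_room)
```

Wherever the over-ear target sits above the in-room response, the virtual target rises by the same amount, and vice versa; the "light" variant mentioned above would simply skip this shift below 1 kHz.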


----------



## johnn29 (Nov 25, 2019)

I've still not managed to do the room correction yet, but I'm going to attempt it tomorrow, so I hope to try it out. Remember I was only reducing the 4 kHz band when I didn't use headphone compensation for IEMs. Since you helped me figure out transform I'm using headphone compensation with my AirPods and loving it. It sounds more realistic than my flat EQ method.

I've spent a while trying to perfect some HRIRs. Now I feel the next logical thing is the room correction, so I'll figure out how to nail the mic placement.

Can I just confirm that I need to run the following commands - first for the left ear, then the right:


```
python recorder.py --channels=1 --play="data/sweep-seg-FL,FC,FR,SR,BR,BL,SL-7.1-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/room-FL"
python recorder.py --channels=1 --play="data/sweep-seg-FL,FC,FR,SR,BR,BL,SL-7.1-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/room-FR"
```

Edit: got it working. Will post thoughts on the process shortly.


----------



## Joe Bloggs (Nov 25, 2019)

jaakkopasanen said:


> I added another channel balancing strategy: "trend". This takes the frequency response difference between left and right sides and smooths it heavily. This smoothed curve is then used as the equalization target for right side. Now because the smoothing is so heavy this doesn't create the uncanny feeling that "avg" or "min" can do in some cases while managing to balance bass, mids and treble. I tested it with two measurements, one with quite good natural channel balance and one with poor balance. I prefer trend over all other strategies for both. Here's a graph illustrating the trending.


There should be no uncanny feeling if you did as I originally proposed and equalized both the left and right channels to the average response, instead of one to the other. I am, after all, just telling you what already works for me.


----------



## jaakkopasanen

Joe Bloggs said:


> There should be no uncanny feeling if you did as I originally proposed and equalized both the left and right channels to the average response, instead of one to the other. I am, after all, just telling you what already works for me.


This is what the avg strategy does and I'm experiencing the uncanny feeling on some measurements. I guess it also depends on the individual.


----------



## johnn29 (Nov 25, 2019)

jaakkopasanen said:


> @johnn29 I started thinking about that 3 or 4 kHz attenuation you apply. The Harman over-ear target has some modifications compared to speaker listening because headphones are somewhat different experience. So I created a new room target which I call virtual room target. This is in essence Harman room target modified with the difference between Harman's flat loudspeaker in room response and Harman over-ear target. The code, results and plots can be found here: https://github.com/jaakkopasanen/Impulcifer/tree/master/research/virtual-room-target
> 
> Maybe give it a try and tell me how it sounds. I think I prefer the unmodified Harman target.
> 
> *EDIT* I added a light version of the virtual room target which doesn't include boost below 1 kHz. I think I might prefer this to the others. It doesn't make the whole tonality dark but takes the edge off just a bit to make it more laid back and effortless.



Finally got the placement working for the UMIK - turns out sitting on the floor worked best and I could use the webcam app.

Also got rid of my plot error by installing on my desktop. The laptop is still throwing errors despite re-install.

The 3 folders that plot outputs - can we get more detail on what they mean? Is "post" after room correction? Does the room folder contain the frequency response of the HRIR post-correction?

I tried the standard room correction target, the virtual and virtual light. It's very hard to compare because I get different levels for virtual and virtual light compared to standard. Standard's levels are the same as no room correction. It's very hard to pick a winner on what sounds better for me - they do sound different for sure though. That could be down to me being an uncritical listener. If I had to pick I'd pick either of the virtual room targets. Could be just a level match thing though. But that does sound better than either the uncorrected or regular target.

Edit: after watching a surround TV show I much prefer the room-corrected version. I'm using my AirPods Pro via the AutoEQ transform function, and the room correction seems to be much more obvious with these. I can't say which room-corrected version I prefer yet, but they are much superior to the non-room-corrected one.


----------



## johnn29

After an hour listening to music with the room correction I can safely say I've surpassed my real speakers by a long way. Just wow - cliché audiophile phrase, but it's making me listen to all my favorite tracks again. I don't know if it's psychoacoustics, but there's something about IEMs for me in delivering bass. It feels much more real to me. The AirPods Pro are just giving me bass the Bose 700s couldn't, even though, weirdly, the Bose measure better. The AirPods with Impulcifer are analogous to a VR headset as comfortable as your regular glasses: they make it so comfortable that it just immerses you in the reality. The complete silence I get from them means I get a huge dynamic range compared to speakers or regular headphones.

I guess the next step will be when the RT time adjustment feature is live - that's likely going to be great for movies. I always thought head tracking was a big feature but after having it on the Mobius I didn't value it that much.


----------



## Joe Bloggs

jaakkopasanen said:


> This is what the avg strategy does and I'm experiencing the uncanny feeling on some measurements. I guess it also depends on the individual.


What is the smoothing applied to the corrections there?


----------



## sander99 (Nov 25, 2019)

jaakkopasanen said:


> Sounds a lot like this issue. Maybe take a look and report back if that solves your case. And thanks for pointing out the typos, I fixed them.


I interpreted this as meaning that I should install the software mentioned here:


> I got a scipy problem I resolved with https://www.microsoft.com/en-in/download/details.aspx?id=48145 Visual C++ Redistributable for Visual Studio 2015.


This I did, and I then removed the Impulcifer folder that was created during my previous try and again started from the step "pip install virtualenv".
And now I got another error with the step "python impulcifer.py --help":


> Traceback (most recent call last):
> File "C:\Users\Laptop\Impulcifer\venv\lib\site-packages\soundfile.py", line 142, in <module>
> raise OSError('sndfile library not found')
> OSError: sndfile library not found
> ...



I noticed that in the old renamed Impulcifer folder from my last successful install in September, the file
"C:\Users\Laptop\Impulcifer\venv\lib\site-packages\_soundfile_data\libsndfile64bit.dll" does exist, but it does not in the new Impulcifer folder.


Instead of trying to fix this, could I maybe better try to completely undo everything I did/installed related to Impulcifer, Python and Git and start from scratch? I am not sure how to do that, though.

Edit: oh, or should I have just used "git pull" instead of what I did... But what does that do exactly? After "git pull" should I repeat some of the other steps like "pip install -r requirements.txt"?
But I assume it has no use to try that now as I somehow made a mess of things.

One thought, however: could it be that in your newer versions you forgot something (like that missing _soundfile_data\libsndfile64bit.dll) that doesn't cause problems for people who use "git pull" to update, because "git pull" maybe only adds things, so for those people the missing files were simply still there from an older version? For someone starting from scratch (without the old Impulcifer folder, I mean) they would be missing. This is just a vague hunch; I hope it helps.

I hope I don't sound like a complete fool, but if I do: apparently the Impulcifer installation process is not "foolproof" yet!


----------



## jaakkopasanen

I'm going to investigate these errors soon. There's something going on for sure. 


johnn29 said:


> Finally got the placement working for the UMIK - turns out sitting on the floor worked best and I could use the webcam app.
> 
> Also got rid of my plot error by installing on my desktop. The laptop is still throwing errors despite re-install.
> 
> ...


Pre is the binaural recording without any processing, room is the room recording as-is, and post is the binaural measurement with all the processing (room correction, channel balancing, headphone compensation, ...). I'll add these to the instructions.



johnn29 said:


> After an hour listening to music with the room correction I can safely say I've surpassed my real speakers by a long way. Just wow - cliché audiophile phrase, but it's making me listen to all my favorite tracks again. I don't know if it's psychoacoustics, but there's something about IEMs for me in delivering bass. It feels much more real to me. The AirPods Pro are just giving me bass the Bose 700s couldn't, even though, weirdly, the Bose measure better. The AirPods with Impulcifer are analogous to a VR headset as comfortable as your regular glasses: they make it so comfortable that it just immerses you in the reality. The complete silence I get from them means I get a huge dynamic range compared to speakers or regular headphones.
> 
> I guess the next step will be when the RT time adjustment feature is live - that's likely going to be great for movies. I always thought head tracking was a big feature but after having it on the Mobius I didn't value it that much.


I think the biggest value of head tracking would come from added comfort, because you can safely move your head around without fearing that the imaging gets ruined. That being said, I have to confess that I've never tried head tracking...



Joe Bloggs said:


> What is the smoothing applied to the corrections there?


Third octave, if I remember correctly. Or maybe it was my heuristic algorithm that flies over narrow notches without being too heavy. EQ cannot be applied without smoothing because the responses contain extremely sharp notches and those cannot be raised.
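To illustrate the kind of heavy smoothing the trend strategy relies on, here is a minimal sketch of fractional-octave smoothing applied to a left/right difference curve. This is an illustration under my own assumptions (log-spaced grid, simple moving average), not Impulcifer's actual implementation; the toy curve is made up.

```python
import numpy as np

def smooth_octave(freqs, curve_db, octaves=1.0):
    """Smooth a dB curve with a moving average whose width is a fixed
    fraction of an octave at every frequency (log-spaced grid assumed)."""
    log_f = np.log2(freqs)
    out = np.empty_like(curve_db)
    for i, lf in enumerate(log_f):
        mask = np.abs(log_f - lf) <= octaves / 2   # window spans `octaves` total
        out[i] = curve_db[mask].mean()
    return out

# Toy example: a jagged L-R difference curve on a log-spaced 20 Hz - 20 kHz grid
freqs = np.logspace(np.log2(20), np.log2(20000), 200, base=2)
diff = 2.0 + np.sin(np.linspace(0, 40, 200))   # ±1 dB ripple around a +2 dB trend
trend = smooth_octave(freqs, diff, octaves=2.0)
```

With a window this wide the ripple averages out and only the broad +2 dB tilt survives, which is the point: the balancing EQ follows the trend without chasing (or trying to raise) narrow notches.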



sander99 said:


> I interpreted this as that I should install the software mentioned here:
> 
> This I did and I then removed the Impulcifer folder that was created during my previous try and again started from the step "pip install virtualenv".
> And now I got another error with the step "python impulcifer.py --help":
> ...


I haven't touched the dependencies in a while, but they do contain some leeway in terms of which version gets installed. Git pull updates the files in the folder to the newest version but doesn't install dependencies, so if they have changed then that would have to be done separately. I'll add that to the instructions. This whole thing is quite unintuitive because these are essentially tools for developers. I definitely need to make the whole thing smoother and more user-friendly. For the time being I'm very grateful for everyone who reports these problems instead of sulking in silence.


----------



## musicreo

I have done some new measurements and now also used my AKG 701. This time the channel balancing was necessary and helped a lot to obtain a very nice result again. The AKG 701 and the Sennheiser HD 555 really sound very similar, and both now deliver a realistic room image.

The most positive effect for me in this new measurement was putting one of the speakers in the center position. This center sounds much better than the ones recorded from the left or right speaker position (probably due to different reflections off the walls, as the center speaker now has 1.38 m more space on both sides).


----------



## johnn29

jaakkopasanen said:


> I'm going to investigate these errors soon. There's something going on for sure.
> 
> Pre is the binaural recording without any processing, room is the room recording as is and post is the binaural measurement with all the processing (room correction, channel balancing, headphone compensation,... ). I'll add these to the instructions.
> 
> ...



Can you change the axis of the room correction report so the Y axis scales to show the full frequency range better? Currently, because of the huge negative gain (0 to -20 dB), you can't really see the plots in much detail and they all look quite flat. Would love to see those in proper detail.

What I find puzzling, though, is the decay time. The pre-corrected measurement has a decay time on the order of seconds. See the attached images for pre and post. How is Impulcifer correcting that?

Having been an AVR user with all the different room correction solutions, and having ultimately come to the conclusion that room correction above the transition frequency is pointless due to Toole's work - it's kind of amazing to get room correction at an ideal point and then have that point move around with your head. No averaging, no multi-point measurements.


----------



## jaakkopasanen

johnn29 said:


> Can you change the axis of the room correction report so the Y axis scales to show the full frequency range better? Currently, because of the huge negative gain (0 to -20 dB), you can't really see the plots in much detail and they all look quite flat. Would love to see those in proper detail.
> 
> What I find puzzling, though, is the decay time. The pre-corrected measurement has a decay time on the order of seconds. See the attached images for pre and post. How is Impulcifer correcting that?
> 
> Having been an AVR user with all the different room correction solutions, and having ultimately come to the conclusion that room correction above the transition frequency is pointless due to Toole's work - it's kind of amazing to get room correction at an ideal point and then have that point move around with your head. No averaging, no multi-point measurements.


I'll see what I can do with the FR plot Y-axis. The limits are set automatically right now.

Decay in the decay graph starts at around 3500 ms and reaches the noise floor at around 4000 ms, so it's only half a second. Everything before the largest spike in the decay graph is the harmonic components. Impulcifer crops out everything before the left-most peak and also everything after the noise floor has been reached. This is why there is the drop in the post waterfall graph and why you can't see the impulse responses of the harmonic components there.
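The cropping described can be sketched in a few lines of numpy: drop everything before the main peak (the harmonic distortion components land there after deconvolution) and everything after the envelope falls below the noise floor. This is a simplified illustration under my own assumptions (single dominant peak, fixed dB threshold), not Impulcifer's exact code.

```python
import numpy as np

def crop_ir(ir, noise_floor_db=-60.0):
    """Crop an impulse response to start at its largest peak and end where
    the envelope last exceeds the noise floor (simplified sketch)."""
    peak = int(np.argmax(np.abs(ir)))
    # Envelope in dB relative to the peak sample
    env_db = 20 * np.log10(np.maximum(np.abs(ir), 1e-12) / np.abs(ir[peak]))
    tail = np.where(env_db[peak:] > noise_floor_db)[0]
    end = peak + (tail[-1] + 1 if tail.size else 1)
    return ir[peak:end]

# Toy IR: a small pre-peak "harmonic" artifact, the main impulse, a -6 dB/sample tail
ir = np.concatenate(([0.1], [1.0], 0.5 ** np.arange(1, 30)))
cropped = crop_ir(ir)
```

The pre-peak artifact and the sub-noise-floor tail are both removed, which is also why the post waterfall plot shows the abrupt drop at the end.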


----------



## jaakkopasanen

The channel balance of the demo recording has been bugging me even with channel balancing using the trend method. Giving more gain to the left ear doesn't really do the trick - it creates this uncanny uneven-pressure feeling - so that's not the way to go about it. Now I tried adjusting the channel balance of the speakers before HeSuVi instead, and that worked a lot better. I added this to config.txt:

```
Copy: L=1.078*L R=0.928*R
Include: HeSuVi\hesuvi.txt
```
That's a 1.3 dB difference in favor of FL. HeSuVi's speaker placement adjustment doesn't do this, and I don't really know what it does.

It's possible that the level matching in room correction is not working correctly. I will have to investigate what the speaker levels are in the room recordings, in the level-matched room recordings, and in the final HRIR output. It would be really weird if there were no level difference when manually adjusting for one fixes the channel balance issue.
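The coefficient pair in that `Copy:` line is just a 1.3 dB difference split symmetrically around unity gain. Here is a small sketch of how such a pair can be computed; the helper name is mine, not part of HeSuVi or Equalizer APO.

```python
import math

def balance_gains(db_toward_left):
    """Split a desired left/right level difference symmetrically into the
    linear gain pair used in an Equalizer APO 'Copy:' line."""
    half = db_toward_left / 2
    return 10 ** (half / 20), 10 ** (-half / 20)

left, right = balance_gains(1.3)
print(f"Copy: L={left:.3f}*L R={right:.3f}*R")  # matches the 1.078 / 0.928 pair above
```

Because the split is symmetric, 20·log10(left/right) is exactly 1.3 dB while the overall loudness stays roughly unchanged.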


----------



## johnn29

Trying to nail a near field measurement this morning. It's a bit of a pain to move everything and extend the wires but hoping it's worth it! The external webcam really helped with the UMIK placement.


----------



## Joe Bloggs

jaakkopasanen said:


> Channel balance of the demo recording has been bugging me even with channel balancing using trend method. Giving more gain to left ear doesn't really do the trick and it creates this uncanny uneven pressure feeling so it's not the way to go about it. Now I tried adjusting the channel balance of the speakers before HeSuVi and that worked a lot better. I added this to config.txt:
> 
> ```
> Copy: L=1.078*L R=0.928*R
> ...


Are you doing anything about time alignment at all?


----------



## jaakkopasanen

Joe Bloggs said:


> Are you doing anything about time alignment at all?


Yes, I am. Each speaker's delay is adjusted so that the distances from the speakers to the center of the head are equal. The interaural time difference is not touched; it comes from the measurement itself. The speaker delay adjustment is done on the basis of the primary ear (left ear for left-side speakers, right ear for right-side speakers).

I did take a quick look at the demo recording FR levels and there doesn't seem to be much of a difference. So I'm puzzled.
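The alignment described above can be sketched like this: shift both ears of one speaker's IR pair by the primary ear's peak position, so the cross-ear timing (the ITD) is preserved while the speaker-to-speaker delays are equalized. A simplified illustration with made-up names, not Impulcifer's actual code.

```python
import numpy as np

def align_speaker(ir_left, ir_right, primary="left", reference=0):
    """Delay-align one speaker's IR pair by the primary ear's peak,
    shifting BOTH ears by the same amount so the ITD is untouched."""
    primary_ir = ir_left if primary == "left" else ir_right
    shift = int(np.argmax(np.abs(primary_ir))) - reference
    def shifted(ir):
        if shift > 0:
            return ir[shift:]                          # remove leading silence
        return np.concatenate((np.zeros(-shift), ir))  # pad if peak is too early
    return shifted(ir_left), shifted(ir_right)

# Toy FL pair: left-ear peak at sample 5, right-ear peak 2 samples later (the ITD)
fl_left = np.zeros(20)
fl_left[5] = 1.0
fl_right = np.zeros(20)
fl_right[7] = 0.6
l, r = align_speaker(fl_left, fl_right, primary="left")
```

After alignment the primary-ear peak sits at the reference sample and the far ear still lags by the same 2 samples, so localization cues from the measurement survive intact.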


----------



## Joe Bloggs

jaakkopasanen said:


> Yes, I am. Each speaker's delay is adjusted so that the distances from the speakers to the center of the head are equal. The interaural time difference is not touched; it comes from the measurement itself. The speaker delay adjustment is done on the basis of the primary ear (left ear for left-side speakers, right ear for right-side speakers).
> 
> I did take a quick look at the demo recording FR levels and there doesn't seem to be much of a difference. So I'm puzzled.


Do you want to send me the IR files before and after processing to have a look?


----------



## jaakkopasanen

Joe Bloggs said:


> Do you want to send me the IR files before and after processing to have a look?


Do you have Impulcifer installed? The recordings can be found in the data/demo folder. When running Impulcifer pointed to that folder it will produce responses.wav which contains the unprocessed IRs and hrir.wav which contains the processed IRs.


----------



## johnn29

Room correction on the nearfield HRIR I produced yesterday has a much bigger impact. I suspect it's because I'm really only picking up on bass improvements: my measurement at my regular seating position used my sub, while the nearfield one was without a sub or any AVR bass management/correction. The virtual room correction takes care of that quite nicely.

I did get an unusual situation when moving about 1.5 m away from the speakers, though - the HRIR produced odd effects where the virtual channels bled into each other. For example, on a channel ID check I could hear a very quiet echo from the rear left/rear right when the front channels were being ID'd. I suspected the room correction was boosting bass frequencies to a directional level, and that was correct. I'll raise an issue on GitHub. Not really an issue for me - but worth fixing.


----------



## jaakkopasanen

So today I don't really need the -1.5 dB on the FL speaker channel to get good channel balance with the demo recording. Maybe I'm going crazy. Or maybe it's about how I happen to put the headphones on my head.


----------



## johnn29

In my experience it's so hard to do perceptual testing with HRIRs. One day you feel like you hear one thing, and another day it's not there. My perception even gets confused depending on where I'm sitting.

With headphone placement, I suspect the best headphones are those with active EQ that compensates for placement on your head so they always deliver a consistent frequency response. But those tend to have loudness curves baked in, which aren't ideal.


----------



## jaakkopasanen

johnn29 said:


> In my experience it's so hard to do perceptual testing with HRIRs. One day you feel like you hear one thing, and another day it's not there. My perception even gets confused depending on where I'm sitting.
> 
> With headphone placement, I suspect the best headphones are those with active EQ that compensates for placement on your head so they always deliver a consistent frequency response. But those tend to have loudness curves baked in, which aren't ideal.


I've noticed this myself too. Also of interest here: I've never had reliably perfect channel balance with any of my speaker systems; they always tend to lean towards the right side (unless I compensate for it), just like these demo recordings. So maybe it has something to do with asymmetrical head and ear shape, which isn't a problem with sounds that originate from an actual physical source but isn't ideal with a phantom center.

@Joe Bloggs You can also find the impulse responses now in the speaker balance graphs experiment folder without running Impulcifer here: https://github.com/jaakkopasanen/Impulcifer/tree/master/research/speaker-balance-graphs


----------



## johnn29

This could be of relevance? https://www.audioholics.com/room-ac...ons-human-adaptation/what-do-listeners-prefer



> A possible factor in the acceptance of some reflected energy may be related to the interaural crosstalk that audibly distorts the phantom center image in stereo.  When in the stereo seat, in the absence of reflections there is a huge 2 kHz dip - it is not hard to hear when one is in a dominantly direct sound field. As shown in Figure 9.7 (below) it is somewhat alleviated by reflected sounds - even improving speech intelligibility.  Remember that normally this is the sound of the featured artist. There can be no better argument for a center channel.



I've never found phantom centers to be convincing. Headphones are the closest I've gotten to it being bang on, with synthetic HRTFs, some of the OOYH presets, and now Impulcifer. When I listen to my real system I always use Pro Logic to mix it to surround. I don't really like the rear channels much, but it does a great job of extracting the center image, and that's better for multiple positions/seats. I've tried ffdshow audio and AC3Filter to do the same thing with stereo sources for HRTF. It works to an extent - but the Dolby Surround processor isn't as good as the one in my AVR.


----------



## jaakkopasanen

I had an idea that perhaps the headphone compensation isn't working as intended because it only tries to equalize the headphone frequency response flat at the microphone and is therefore only a feed-forward compensation. I tested this hypothesis by implementing a feedback compensation which does the headphone measurement using a sine sweep sequence convolved with the HRIR measurement. This way the frequency response of the headphone measurement should be the same as the frequency response of the HRIR measurement. The cool thing about this idea is that one doesn't need to know the feed-forward target of the headphones; it can be whatever, as long as the virtualized speakers have the same frequency response as the physical speakers.

The hypothesis was proven incorrect. The headphone compensation for the feed-forward and feedback measurements is almost exactly the same, with the only significant difference in the sub-30 Hz region. This also confirms that headphones really do need to be flat at the microphone, as was previously assumed. Check out the experiment and results here: https://github.com/jaakkopasanen/Impulcifer/tree/master/research/headphone-feedback-compensation
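The equivalence the experiment found can also be seen algebraically: the headphone transfer function multiplies whatever signal is played, so equalizing the HRIR-convolved sweep back toward the HRIR response cancels the HRIR term and leaves the same inverse-headphone correction as the feed-forward case. A toy frequency-domain sketch (all magnitude values made up, linear convolution reduced to pointwise multiplication for illustration):

```python
import numpy as np

# Toy magnitude responses on a shared frequency grid (values are made up)
hp = np.array([1.2, 1.0, 0.7, 0.9])    # headphone response at the blocked-canal mic
hrir = np.array([0.8, 1.1, 1.0, 0.6])  # speaker/HRIR response at the same mic

# Feed-forward: equalize the headphone flat at the microphone
comp_ff = 1.0 / hp

# Feedback: play the sweep convolved with the HRIR, then equalize the
# measurement (headphone x HRIR) toward the HRIR response itself
measured_fb = hp * hrir
comp_fb = hrir / measured_fb

# Both reduce to the same inverse headphone response, 1 / hp
```

In other words, under this linear model the feedback scheme cannot give a different answer, which matches the experimental result apart from the sub-30 Hz region.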

My mic calibration shows the binaural mics to have a level difference of about 1 dB, but now both headphone compensation methods indicate a 3 dB level difference. Channel balancing in Impulcifer reduces the right-side volume, so I would suspect that the level difference according to the mic calibration is correct. I find it unlikely that the level difference would be caused by the headphones either. Very strange...


----------



## Joe Bloggs

jaakkopasanen said:


> I've noticed this myself too. Also of interest here: I've never had reliably perfect channel balance with any of my speaker systems; they always tend to lean towards the right side (unless I compensate for it), just like these demo recordings. So maybe it has something to do with asymmetrical head and ear shape, which isn't a problem with sounds that originate from an actual physical source but isn't ideal with a phantom center.
> 
> @Joe Bloggs You can also find the impulse responses now in the speaker balance graphs experiment folder without running Impulcifer here: https://github.com/jaakkopasanen/Impulcifer/tree/master/research/speaker-balance-graphs


Which would be the final IRs applied to each channel after all the measurements and compensations (incl.headphone compensations)?

Going by your previous description perhaps you need to align the cross-ear impulses in time for channel pairs as well as aligning the direct-ear impulses.

OTOH, I've also read of a weird case where, for one person, delaying one channel rather than any volume balancing centred the image (we guessed one of his eardrums must be further inside his head than the other, or something).

Might want to ask others if they experience any of your weirdness when using your demo files.


----------



## jaakkopasanen

Joe Bloggs said:


> Which would be the final IRs applied to each channel after all the measurements and compensations (incl.headphone compensations)?
> 
> Going by your previous description perhaps you need to align the cross-ear impulses in time for channel pairs as well as aligning the direct-ear impulses.
> 
> ...


responses.wav is the unprocessed file and hrir.wav is the one with all the processing, including headphone compensation, applied. I've updated the responses.wav in that folder; the old one still had the harmonic components in it and the new one has them cropped out.

I don't think there should be any need to adjust interaural time differences: the delays between the left and right channels are preserved in the measurement, so the ITD should already be exactly correct. Also, channel balancing (mostly) corrects the issue, so I don't think this is a timing problem.

Other interesting notes: adjusting volume with Volume2 seems to change the channel balance. It's subtle, but I definitely hear it doing something; I don't know if there is an actual effect or if I'm just imagining it, so I'll do some measurements to (dis)prove the hypothesis. Second, I did HRIR measurements for a friend on Sunday and his headphone level difference was between 1 and 2 dB, and his left and right side FRs were a lot more similar than mine. So maybe it's simply my weird ears doing this.


----------



## jaakkopasanen

johnn29 said:


> This could be of relevance? https://www.audioholics.com/room-ac...ons-human-adaptation/what-do-listeners-prefer
> 
> 
> 
> I've never found phanton centers to be convincing. Headphones is the closest I've gotten to it being bang on with synthetic HRTFs and some of the ooyh presets and now Impulcifer. When I listen to my real system I always use Pro Logic to mix it to surround. I don't really like the rear channels much, but it does a great job of extracting the center image and that's better for multiple positions/seats. I've tried ffdshow audio and AC3 filter to do the same thing with stereo sources for HRTF. It works to an extent - but the Dolby Surround processor isn't as good as on my AVR.


Very interesting article! Something that caught my eye is this:


> For optimum stereo listening if your music tastes are as eclectic as mine, one really needs adjustable acoustics and, possibly, variable-directivity loudspeakers, but we know that won’t happen.


Now imagine if there was a way to change room acoustics and speakers on the fly somehow. Like by simulating the whole thing and then simply switching the simulation model when needed...


----------



## johnn29

Exactly! I don't get why anyone into loud speakers isn't jumping all over this. That's why I'm really excited at the potential of the reverb time adjustment that you might implement in future. I could deaden the virtual room for movies - and open it up for music. Currently I kind of do that with the near field/mid field/far field measurements I have but because it's very obvious where the sound is coming from it has been hard to watch a movie with the nearfield ones. Although in pitch black, without visual cues it works well and it works outstandingly well in VR movie headsets.

I had an hour long listening session with my nearfield HRIR with various YouTube acoustic/unplugged tracks. There's no real practical way I can sit so close to my real speakers in that position. I find the acoustic ones contain all the room information I need. Then I listened to some regular studio stuff off Deezer - where I switched to my 1.5m mid field recording because I like the reflections.

Finally dragged my beastly B&W 803s into my office - going to take a measurement with those tomorrow. 40kg each - to be simulated on earphone that weigh 5.4 grams. Crazy!


----------



## phoenixdogfan

Is the final result stored as a single convoltion file playable by JRiver or Foobar 2000?


----------



## jaakkopasanen

phoenixdogfan said:


> Is the final result stored as a single convoltion file playable by JRiver or Foobar 2000?


The final result is single WAV file with 14 tracks, one per each speaker ear pair. I don't know what JRiver or Foobar expect but I could add support for those if the current one doesn't work.


----------



## Joe Bloggs (Dec 8, 2019)

jaakkopasanen said:


> The final result is single WAV file with 14 tracks, one per each speaker ear pair. I don't know what JRiver or Foobar expect but I could add support for those if the current one doesn't work.


On foobar, you can use matrix mixer (foo_dsp_mm) to generate 14 input channels for the gapless convolver (foo_dsp_convolver_0.4.7) then mix it back into stereo.

For a wav file ordered in LL LR RL RR CL CR BLL BLR BRL BRR SLL SLR SRL SRR (where the last letter denotes the output channel and the preceding letters denote the surround channel) I've worked out the following separation and reconstitution matrices:

Separation: (matrix screenshot not preserved)

Reconstitution: (matrix screenshot not preserved)
Of course, you'd want to put a resampler plugin in front of all this to make sure the convolver sees a constant sample rate.
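Since the matrix screenshots are not preserved here, the general idea can be sketched as follows. This is a hypothetical routing, not Joe's exact matrices: the separation matrix fans the stereo source out to the 14 convolver inputs (here assuming plain stereo goes only to the L and R virtual speakers), and the reconstitution matrix sums every *-L convolver output into the left headphone channel and every *-R output into the right, per the channel naming above:

```python
import numpy as np

# 14 convolver inputs in the order LL LR RL RR CL CR BLL BLR BRL BRR SLL SLR SRL SRR.
# Hypothetical separation matrix: route the source L channel to the two
# left-speaker convolvers and R to the two right-speaker convolvers only.
SEP = np.zeros((14, 2))
SEP[0, 0] = SEP[1, 0] = 1.0   # LL, LR  <- source L
SEP[2, 1] = SEP[3, 1] = 1.0   # RL, RR  <- source R

# Reconstitution: even rows (…-L outputs) sum into the left headphone
# channel, odd rows (…-R outputs) into the right.
REC = np.zeros((2, 14))
REC[0, 0::2] = 1.0
REC[1, 1::2] = 1.0

def virtualize(stereo, irs):
    """stereo: (2, N) samples; irs: list of 14 impulse responses."""
    feeds = SEP @ stereo                                  # (14, N) convolver inputs
    outs = np.stack([np.convolve(feeds[i], irs[i]) for i in range(14)])
    return REC @ outs                                     # (2, ...) headphone signal
```

With identity (single-sample) impulse responses, the left ear simply receives L + R, as it would sitting between two real speakers.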


----------



## musicreo

Or you can use the Convolver VST with a resampler in front (http://convolver.sourceforge.net/vst.html). It also works with DirectShow players.
There you have to create a txt file like this one:


```
48000 8 2 0
0 0 0 0 0 0 0 0
0 0
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
0
0
0
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
1
0
1
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
2
1
0
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
3
1
1
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
4
2
0
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
5
2
1
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
6
3
0
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
7
3
1
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
8
4
0
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
9
4
1
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
10
5
0
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
11
5
1
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
12
6
0
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
13
6
1
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
14
7
0
C:\Program Files\BinauralAudio\wav impulse responsefiles\48 Khz\7_1\Impulcifer\hrir.wav
15
7
1
```


----------



## johnn29

Word about Impulcifer is spreading! https://archimago.blogspot.com/2019/12/redscape-and-creative-super-x-fi-amp.html


----------



## phoenixdogfan

Is there a way to combine the custom HRTF of Impulcifer with the head tracking of Redscape?


----------



## musicreo

Is it possible that the channel order in HeSuVi shown in this image is mixed up for all the right speakers? Right ear first (R-R) and then left ear (R-L) seems to work for me, not R-L, R-R. https://cdn-images-1.medium.com/max/800/1*whrx-d3gpnLBZ_Q1IuWIEg.png


----------



## Joe Bloggs

musicreo said:


> Is it possible that the channel order in  Hesuvi which is shown in this image is mixed up for all right speakers (first right ear (R-R) than left ear (R-L) seems to work for me and not R-L,R-R)?  https://cdn-images-1.medium.com/max/800/1*whrx-d3gpnLBZ_Q1IuWIEg.png


That's some seriously messed up channel order, where did you get it from?


----------



## musicreo

Joe Bloggs said:


> That's some seriously messed up channel order, where did you get it from?



The image is from the HeSuVi wiki, "How-To Record Impulse Responses Digitally": https://sourceforge.net/p/hesuvi/wiki/How-To Record Impulse Responses Digitally/

Is there a reason why HeSuVi does not use the normal channel order (L/R/C/LFE/LS/RS/LB/RB)? It took me some time to figure out that the channel order is 1 2 9 10 13 14 5 4 3 12 11 16 15 6, but the image says it is 1 2 9 10 13 14 5 3 4 11 12 15 16 6.
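Once the correct permutation is known, reordering the tracks is trivial. A small sketch, using musicreo's reported working order (1-based track numbers into the 16-track, 8-speakers-times-2-ears recording he describes):

```python
import numpy as np

# musicreo's working HeSuVi order, as 1-based track numbers
# into the 16-track source recording.
WORKING_ORDER = [1, 2, 9, 10, 13, 14, 5, 4, 3, 12, 11, 16, 15, 6]

def reorder_tracks(tracks, order_1based):
    """tracks: (n_tracks, n_samples) array; pick rows in the given 1-based order."""
    idx = [i - 1 for i in order_1based]
    return tracks[idx]
```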


----------



## jaakkopasanen (Dec 9, 2019)

musicreo said:


> Is it possible that the channel order in  Hesuvi which is shown in this image is mixed up for all right speakers (first right ear (R-R) than left ear (R-L) seems to work for me and not R-L,R-R)?  https://cdn-images-1.medium.com/max/800/1*whrx-d3gpnLBZ_Q1IuWIEg.png


HeSuVi indeed has an odd track order and I don't know why. Impulcifer produces hesuvi.wav, which has the impulse responses in the expected order. Here's what Impulcifer does in the code:


```
# Write multi-channel WAV file with HeSuVi track order
hrir.write_wav(
    os.path.join(dir_path, 'hesuvi.wav'),
    track_order=['FL-left', 'FL-right', 'SL-left', 'SL-right', 'BL-left', 'BL-right',
                 'FC-left', 'FR-right', 'FR-left', 'SR-right', 'SR-left', 'BR-right',
                 'BR-left', 'FC-right']
)
```

Based on that image the guide might have the order wrong. Impulcifer definitely has it right.


----------



## royster

I haven't found the answer, but can you use Impulcifer without having a fancy room and speakers? What if I just buy the mics and record the sweeps, and then have those work in a virtual perfect room? I guess not, because you're measuring the response of the speakers/room relative to your ears? I guess my best bet would be to find a friend or use a treated room at a studio or even a music store?


----------



## johnn29

You don't need a fancy room and speakers - you just need a room and a single speaker. Even if you have a very echoey, reverb-filled room, you can use nearfield sweeps to get around that: the closer you are to the speaker, the less room you hear. One of the best-sounding HRIRs I've produced came from just sitting very close to my LS50s without even worrying about room placement.

It's also relative. The HRIRs I've generated from Impulcifer sound way better than anything else I've tried, because they're custom to my ears. So you might not have an ideal room or speakers, but it'll still be the best thing you've tried.

If you don't currently have a speaker: KEF's engineering philosophy is that the ideal loudspeaker generates sound from a single point, which seems even more relevant for nearfield measurement. An ideal and cheap speaker for this would be the KEF R100; I'm sure you can pick one up used for cheap. You can then follow the single-speaker measurement guide in the wiki.

The virtual room correction does work wonders, especially for bass response. Currently it has limits: if your speakers can't generate sufficient bass, the virtual correction will boost bass too much at higher frequencies, which results in channel bleed, meaning sound meant for one channel will be heard in another. jaakkopasanen is aware and a bug is on GitHub. I found that with my LS50s at around 1 meter the channel bleed is barely noticeable and only perceivable in test tones. With something like an R100 you should be fine, or if you employ a little boundary gain and place your speaker close to a wall.

There's also a learning curve to all this - from small things like placing the mics in your ears and how you put the headphones on, to remembering to have everything you need for a measurement close by, remembering the channel order, etc. Only once I'd figured all that out would I try it at another location. Otherwise you might end up frustrated that you can't nail a good recording.

My one big tip: always save your recordings to another folder to back them up. Because Impulcifer keeps improving, you can always re-process your existing recordings without having to re-measure.


----------



## jaakkopasanen

royster said:


> I haven't found the answer but can you use impulcifer without having a fancy room and speakers? If I just buy the mics and record the sweeps? And then have those work in a virtual perfect room? I guess not because youre measuring the response of the speakers/room relative to your ears? I guess my best bet would find a friend or use a prepped room from a studio or even music store?


There shouldn't be a need for anything too fancy. The measurement method is quite robust against speaker distortion, and the room response can be corrected by doing a room measurement with a calibrated measurement microphone. I don't have a guide for this yet, and there is one small issue I need to fix, but the functionality for room correction is there. And there's more to come...


----------



## johnn29

Excited to hear there's more coming! Any hints?


----------



## jaakkopasanen

I've got some ideas for reverberation and decay management and early reflection cancellation. But first I'll fix all the bugs.


----------



## lowdown

Very interested in your project, and all respect to you for your effort and expertise.  I'm a novice so a couple of questions.

Have you come to a conclusion about the best in ear mics to use and how to place them? I'd seriously consider the Roland CS-10EM since they're dual purpose, but not if they aren't an ideal option for use with Impulcifer.

I have a Zoom H2N recorder that can be used with external mics.  Is that good enough, or should I really get the Behringer?

Also, I stumbled across this, so in case you happen not to have seen it, perhaps something useful in there.  http://3d-tune-in.eu/toolkit-developers

Thanks again.


----------



## jaakkopasanen

lowdown said:


> Very interested in your project, and all respect to you for your effort and expertise.  I'm a novice so a couple of questions.
> 
> Have you come to a conclusion about the best in ear mics to use and how to place them? I'd seriously consider the Roland CS-10EM since they're dual purpose, but not if they aren't an ideal option for use with Impulcifer.
> 
> ...


Hello and welcome to the group!

I'm still using the Sound Professionals mics, but yesterday I ordered two Primo EM258 mono modules from FEL Communications, along with a Y-splitter, two RODE VXLR+ phantom power adapters and a Behringer U-Phoria UMC202HD USB audio interface.
https://micbooster.com/primo-microphone-capsules/97-primo-em258-mono-module-with-35mm-plug-10m.html
https://micbooster.com/leads-and-adaptors/90-5mm-st-plug-to-2-x-mono-sockets-015m.html
The VXLR+ adapters and the UMC202HD I ordered from Thomann.

When these arrive I'll have two audio interfaces and two pairs of microphones, so I can test whether these upgrades help and compare the impact of upgrading the audio interface versus the mics.

That being said, I'm having very good results with the Sound Professionals mics and the Zoom H1n, so you should be fairly safe with the H2n, at least for now. I would not recommend the Roland CS-10EM: I remember reading on another forum that somebody tried them and did not get results as good as with the Sound Professionals mics.

I hadn't come across the 3D Tune-In toolkit, but it looks like there could be some very useful stuff there. Thanks a lot for the link.


----------



## lowdown

jaakkopasanen said:


> Hello and welcome to the group!
> 
> I'm still using the Sound Professionals mics but I ordered two Primo EM258 mono modules from FEL Communications yesterday along with y-splitter, two RODE VXLR+ phantom power adapters and a Behringer U-Phoria UMC202HD USB audio interface.
> https://micbooster.com/primo-microphone-capsules/97-primo-em258-mono-module-with-35mm-plug-10m.html
> ...



Thanks, very helpful.  I look forward to your updates, and will report back when I have something to share.


----------



## jaakkopasanen

Some updates:

- Level adjustment implemented.
- Equalization for headphone transfer now uses AutoEq style CSV files instead of FIR filters in WAV files.
- All equalizations combined. This might / will change because I realized it doesn't make so much sense. @johnn29 can you check if this affects the bass bleed in your case at all?
- Fixed level normalization with resampling.
- Moved all research experiment data files to the Impulcifer-Research repository. The research folder size was over 1 GB.
- Bug fixes.
- Requirements need to be upgraded when updating Impulcifer.

In addition to these changes I ran an experiment to investigate whether volume control affects channel balance. I was hearing this happen when adjusting volume with Volume2 but couldn't quite believe myself, because it definitely should not happen with a digital volume control. Turns out I'm not insane, because it really does happen. Normally Volume2 keeps the channel balance in order, but in my case, for whatever reason, the Volume2 options had the channel balance at 48/52 instead of 50/50, and Volume2 cannot retain the same channel balance when adjusting volume in that case. I have done all of my measurements with 48/52 channel balance in Volume2. You can check the results here: https://github.com/jaakkopasanen/Impulcifer/tree/master/research/level-vs-balance.


----------



## johnn29

I'll get some time later in the week and give it a try


----------



## johnn29

Re-processed the recordings with the latest version - there's barely any channel bleed now, but it's still very, very slightly there. I A/B'd against the old processing and it's much better, though a tiny bit remains. I'd only be able to notice it on channel ID test tones with vocals, though.


----------



## jaakkopasanen

johnn29 said:


> Re-processed the recordings with the latest - there's barely any channel bleed now - but it's still very very slightly there. I A vs B'd the old processing and it's so much better, but still a tiny bit remains. I'd only be able to notice this on channel ID test tones with vocals though.


Good to hear. The post-processing plots show some kind of distortion at low frequencies in the spectrogram, and I believe it has something to do with the bass bleed. The tracking filter should fix that for good, because it removes all frequencies higher than the currently playing one. Another idea I have is to use parametric EQ filters.

In any case this is quite tricky because I'm not exactly sure what causes the bleeding. I have it even without room correction, so it's not just the room correction and the other equalizations.


----------



## Joe Bloggs

Bass bleeding?


----------



## johnn29

It's where the channels bleed into each other. For example a centre channel tone will also be heard from the back left channel, perhaps with a slight delay and echo


----------



## phoenixdogfan

johnn29 said:


> It's where the channels bleed into each other. For example a centre channel tone will also be heard from the back left channel, perhaps with a slight delay and echo


Crosstalk.


----------



## bigshot (Jan 1, 2020)

Bass isn't directional; it flows all through the room. The upper frequencies, like the pluck sound, are what give it direction.


----------



## Joe Bloggs

johnn29 said:


> It's where the channels bleed into each other. For example a centre channel tone will also be heard from the back left channel, perhaps with a slight delay and echo


Has this been ascertained by replacing the back left channel HRTF with silence?

In other words, do we have proof that the signal is getting fed to that channel somehow or does it only *sound* like it's coming from that direction?


----------



## musicreo

Joe Bloggs said:


> In other words, do we have proof that the signal is getting fed to that channel somehow or does it only *sound* like it's coming from that direction?



The channel routing is done by Equalizer APO / HeSuVi, so it should not be possible for channels to be fed the wrong way.


----------



## johnn29

It only sounds like that; it's not actually channel crosstalk. I said it was bass related because it only occurred in one measurement I took without a sub, where I used the virtual room correction to boost bass massively. But I believe it's not exactly down to that.


----------



## jaakkopasanen

Here is a demonstration of what I'm experiencing when I talk about this problem: https://drive.google.com/open?id=1d91VZI1cyfbXNBBjYIwwWPdc-4S-2_Di

You need to use headphones when listening to 8_Channel_ID_binaural.wav. The low frequency ringing can be heard best with the center channel on the right side. I added plots for the center channel as well, but I'm not seeing a big difference between the left and right sides that would explain what is going on. The plots are of the output impulse responses with all the processing already done.


----------



## Joe Bloggs

jaakkopasanen said:


> Here is a demonstration of what I'm experiencing when I talk about this problem: https://drive.google.com/open?id=1d91VZI1cyfbXNBBjYIwwWPdc-4S-2_Di
> 
> You need to use headphones when listening to the 8_Channel_ID_binaural.wav. The low frequency ringing can be heard best with center channel on the right side. I added plots for the center channel there as well but I'm not seeing big difference between left and right sides that would explain what is going on. The plots are of the output impulse responses with all the processing already done.


I guess you guys are boosting low frequencies too much with speakers that aren't reproducing them in the sweep.  The bass would then be of the right magnitude but random phase that's all over the place.  You'd do better to ignore the recorded signal for frequencies below a certain cutoff magnitude and synthesize the frequencies in the output impulses to be whatever magnitude you want and phase that matches the rest of the impulse.
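A rough sketch of that idea (my own interpretation, not a tested recipe): below a chosen cutoff, discard the measured spectrum and substitute a flat magnitude, anchored at the cutoff bin, with a linear phase matched to the impulse peak:

```python
import numpy as np

def patch_bass(ir, fs, cutoff=80.0):
    """Below `cutoff` Hz, replace the measured spectrum with a flat-magnitude,
    linear-phase extension anchored at the impulse peak (a crude stand-in for
    'synthesize bass at the wanted magnitude with matching phase')."""
    n = len(ir)
    spec = np.fft.rfft(ir)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    k = np.searchsorted(freqs, cutoff)   # first bin at or above the cutoff
    peak = np.argmax(np.abs(ir))         # align the synthetic phase to the peak
    mag = np.abs(spec[k])                # carry the cutoff magnitude downwards
    phase = -2 * np.pi * freqs[:k] * peak / fs
    spec[:k] = mag * np.exp(1j * phase)
    return np.fft.irfft(spec, n)
```

The cutoff and the flat extension are both placeholders; in practice one would shape the synthetic bass magnitude to the target curve rather than keep it flat.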


----------



## jaakkopasanen

Joe Bloggs said:


> I guess you guys are boosting low frequencies too much with speakers that aren't reproducing them in the sweep.  The bass would then be of the right magnitude but random phase that's all over the place.  You'd do better to ignore the recorded signal for frequencies below a certain cutoff magnitude and synthesize the frequencies in the output impulses to be whatever magnitude you want and phase that matches the rest of the impulse.


Thanks for the insights! It sounds like the alternative would be to create a mixed-phase FIR filter for the room correction which corrects both the magnitude and the phase. Not sure which is better or easier.


----------



## Joe Bloggs

jaakkopasanen said:


> Thanks for the insights! Sounds like the alternative way to do this would be to create a mixed phase FIR filter for the room correction which corrects both the magnitude and the phase. Not sure which is better or easier.


I would think that the phase of frequencies that are measured correctly (with actual signal from the loudspeaker) contributes to the room feel of the impulse while phase of rubbish frequencies (that the speakers can't put out and where the mic is just picking up background noise) would best be ignored.  So unless you can build a mixed phase FIR filter that takes this into account, I would say the other approach should be better / easier.

It may also be best to have an option for the user to manually enter the range of frequencies to be considered, as bass background noise can sometimes be strong enough that simply considering the volume picked up by the mic wouldn't be appropriate for checking what frequencies were actually played.


----------



## jaakkopasanen

It's not just the room correction EQ boosting bass that's causing this, because the ringing / bleeding / whatever is also present in HRIRs that don't have any EQ built in. See here for newer results: https://drive.google.com/open?id=1BCcp7_hJ9WFn5UNKEsAM9UFymvmZtxeu. There are versions of the same file with and without room correction, headphone compensation and channel balance correction. All of the files have some degree of low frequency ringing.


----------



## Joe Bloggs

jaakkopasanen said:


> It's not just the room correction eq boosting bass that's causing this because the ringing / bleeding / whatever is present also in HRIRs that don't have any EQ built in. See here for newer results: https://drive.google.com/open?id=1BCcp7_hJ9WFn5UNKEsAM9UFymvmZtxeu. There are versions of the same file with and without room correction, headphone compensation and channel balance correction. All of the files have some degree of low frequency ringing.


The low frequencies that aren't reproduced by the test speakers will be problematic regardless of these, if the mic picked up background noise at these frequencies.  What frequencies does it start to ring and what frequencies do you reckon your speaker can go down to?


----------



## jaakkopasanen

Joe Bloggs said:


> The low frequencies that aren't reproduced by the test speakers will be problematic regardless of these, if the mic picked up background noise at these frequencies.  What frequencies does it start to ring and what frequencies do you reckon your speaker can go down to?


I haven't found a good synthetic test signal for this, so I don't know the frequencies where it rings. However, the channel ID uses a male voice, which should not go below 80 Hz, and this particular one is probably around 100 Hz or higher. My speakers (Dynaudio Focus 110) start to roll off at 70 Hz, which is lower than the lowest male voices, so that shouldn't be the issue here.


----------



## johnn29

The virtual crosstalk/ringing doesn't happen when I record my system with a sub; mine's flat to 6 Hz. It only happens when I don't use a sub, so it has to be something about producing those frequencies with the speakers, right? Could we get a "direct bass" feature like the Smyth has, where bass is just passed through?


----------



## jaakkopasanen

The last bits of room correction have now been implemented. I added a generic room measurement which is used for speaker-ear pairs that don't have specific named measurements. room.wav is the file where these measurements should go, and recorder.py can now append tracks to an existing WAV file with the --append parameter, so it's possible to run recorder.py against the same room.wav multiple times, moving the measurement microphone a bit each time.

I added two methods for combining the frequency responses of multiple room measurements. The first is simple averaging. The second, conservative method uses the error with the smallest absolute value, and only when all the measurements are on the same side of the 0 dB level. For example, if the errors of 5 measurements at 1 kHz are -0.4 dB, 0.3 dB, 1.9 dB, 1.1 dB and 3.0 dB, the conservative correction at 1 kHz is 0 dB, even though the average is clearly above zero. This makes it a lot safer, since it ensures the correction won't make any of the measurements worse. See example graphs here: https://imgur.com/a/jcHpaBv and usage details here: https://github.com/jaakkopasanen/Impulcifer#room-correction. I still need to write the room measurement guide, but now that the implementation is done I can actually start on it.
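The conservative combination rule can be sketched like this (a re-implementation of the description above, not Impulcifer's actual code):

```python
import numpy as np

def conservative_correction(errors):
    """errors: (n_measurements, n_freqs) array of dB error curves.
    Per frequency bin: if every measurement errs in the same direction,
    correct by the smallest-magnitude error; otherwise apply 0 dB."""
    errors = np.asarray(errors)
    all_pos = np.all(errors > 0, axis=0)
    all_neg = np.all(errors < 0, axis=0)
    cols = np.arange(errors.shape[1])
    smallest = errors[np.argmin(np.abs(errors), axis=0), cols]
    return np.where(all_pos | all_neg, smallest, 0.0)
```

With the post's example column (-0.4, 0.3, 1.9, 1.1, 3.0) the signs disagree, so the correction is 0 dB; a column where all errors are positive gets corrected by the smallest one.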

Another smaller update: hrir.wav and all the responses WAV files now include LFE tracks. The LFE tracks are silent, but I added them for compatibility with other systems (like FFmpeg) which expect them to be there. The speaker order is now: FL, FR, FC, LFE, BL, BR, SL, SR. Every odd numbered track (1st, 3rd, ...) is for the left ear and every even numbered track (2nd, 4th, ...) for the right ear.



johnn29 said:


> The virtual crosstalk/ringing doesn't happen to me when I'm recording my system with a sub, mine's flat to 6hz. It only happens when I don't use a sub, so it has to be something about producing those frequencies with your speakers right? Can we do a "direct bass" feature like the Smyth has where bass is just passed through?


Does the crosstalk / ringing sound the same for you? Could you perhaps share a pre-processed file where you hear this or share the measurements without sub? I would like to make sure we are experiencing the same thing.

What is this "direct bass" feature? What does bass pass through mean?


----------



## sander99

jaakkopasanen said:


> What is this "direct bass" feature? What does bass pass through mean?


With direct bass, the bass below some crossover frequency is not binauralised but mixed directly into the headphone signal. Bass from each input channel goes to both headphone channels, properly delayed by the latency of the binauralisation process, I assume. I vaguely remember Stephen Smyth saying in a podcast that in the A16 they refined this function by adding left-right or right-left relative delays.
The result is very dry, precise bass - actually just as dry and precise as your headphones can make it.
(For people who find it too dry, or unnaturally contrasting with the behaviour of the other frequencies, maybe an option to add some slight artificial reverb to the bass would be a nice feature? The Smyth doesn't have that as far as I know.)

In your case, since you only generate the impulses and leave the rendering to third party software like HeSuVi, you would have to manipulate the impulse responses (or the sweep responses) somehow, but I don't have to tell you that, of course.
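A minimal sketch of how such a direct-bass mix could work (the crossover frequency, the one-pole filter and the function names here are all hypothetical; the Smyth implementation is surely more sophisticated):

```python
import numpy as np

def lowpass(x, fs, fc):
    """One-pole IIR low-pass (6 dB/oct); crude but dependency-free for a sketch."""
    a = np.exp(-2 * np.pi * fc / fs)
    y = np.empty_like(x, dtype=float)
    acc = 0.0
    for i, v in enumerate(x):
        acc = (1.0 - a) * v + a * acc
        y[i] = acc
    return y

def direct_bass(channels, binaural_lr, fs, crossover=120.0, latency=0):
    """channels: (n_speakers, N) speaker feeds; binaural_lr: (2, M) already-
    binauralized headphone signal. Sum the feeds, keep only the bass, delay it
    by the binauralization latency and mix it equally into both ears."""
    bass = lowpass(channels.sum(axis=0), fs, crossover)
    bass = np.concatenate([np.zeros(latency), bass])[:binaural_lr.shape[1]]
    return binaural_lr + bass[np.newaxis, :]
```

A real implementation would also want a matching high-pass on the binauralized path so the bass isn't doubled around the crossover.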


----------



## johnn29

jaakkopasanen said:


> Does the crosstalk / ringing sound the same for you? Could you perhaps share a pre-processed file where you hear this or share the measurements without sub? I would like to make sure we are experiencing the same thing.
> 
> What is this "direct bass" feature? What does bass pass through mean?



Sander summarised the direct bass feature well. It just might be useful to try out.

I'll get the files over to you tomorrow - I'm abroad right now, but I have my trusty Galaxy Book with me and keep my recordings on it, so I can update my HRIRs wherever I am! I'll also listen to yours to see if we're talking about the same thing.


----------



## Joe Bloggs

jaakkopasanen said:


> I haven't found a good synthetic test signal for this so I don't know the frequencies where it rings. However the channel ID has male voice which should not be less than 80 Hz and this particular one is probably around 100 Hz or more. My speakers (Dynaudio Focus 110) start to roll off at 70 Hz which is lower than the lowest male voices so this shouldn't be the issue here.


https://www.dropbox.com/s/4zrci9cd7a8mz9s/SineGen.exe?dl=0

Try this

It could conceivably be a strong resonance mode at that frequency in your room


----------



## johnn29

Jaakko - I actually already uploaded my recordings on the bug thread: https://github.com/jaakkopasanen/Impulcifer/issues/42


----------



## lowdown

Well this is a fine cup of tea.  You've ruined my perfectly good, expensive audiophile speakers.  Ok, not exactly, the speakers are just fine.  But now I don't want to listen to them anymore because the sound I'm getting through my headphones is so much better! 

I got my Sound Professionals mics last week and the first attempt with Impulcifer even with less than ideal mic in ear positioning and no room correction was stunning.  I've listened to binaural recordings for years and tried all the HRIR options in HeSuVi but have never gotten the full illusion of speakers 8 ft in front of me with the imaging and sound stage I heard on that initial measurement round.  Was hard to wipe the smile off my face.  Been doing measurement sessions daily since, trying to refine all the sonic aspects, with very mixed results.  But last night I hit a great new plateau and it's almost unbelievable.  The tonal balance, detail, imaging, intimacy to the sound as if the performers are in my living room.  Now I've got to listen to everything I own again!  I've spent hours sinking into the music, and I know this isn't even as good as it can get.  There's no way I could thank you enough, and all those who've contributed to this virtual miracle.

One tip: I had significant problems getting the Sound Pro mics to sit in my ears properly and stay in position. As you documented, I ended up gluing foam ear plugs onto the backs, and that made a huge difference in positioning, stabilizing, and getting much better recordings.

I may be back with some novice level questions, and look forward to your future efforts, but wanted to offer an update and my extreme appreciation.


----------



## Joe Bloggs

jaakkopasanen said:


> I haven't found a good synthetic test signal for this so I don't know the frequencies where it rings. However the channel ID has male voice which should not be less than 80 Hz and this particular one is probably around 100 Hz or more. My speakers (Dynaudio Focus 110) start to roll off at 70 Hz which is lower than the lowest male voices so this shouldn't be the issue here.


I downloaded and deconvolved FC from your samples





Top is the raw result. It does seem that there is a very strong resonance at 65 Hz, lasting for over half a second. The magnitude at that frequency is also high, but the main issue seems to be how long it drags on.

Bottom is the impulse after manual editing (I split the impulse into three parts and EQed out the resonance tails in the latter parts with a parametric EQ. I'm actually surprised this worked).
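Joe's split-and-EQ edit can be approximated in code. A sketch, not his actual procedure: leave the direct sound untouched and notch the resonant frequency out of the tail only (the 65 Hz center, split point, Q, and crossfade length are all assumptions):

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def tame_resonance_tail(ir, fs, freq_hz=65.0, split_ms=100.0, q=4.0):
    """Notch freq_hz out of the IR tail, leaving the direct sound intact."""
    split = int(fs * split_ms / 1000.0)
    b, a = iirnotch(freq_hz, q, fs)
    notched = filtfilt(b, a, ir)            # zero-phase notch of the whole IR
    out = ir.astype(float)
    fade = min(split, int(0.010 * fs))      # 10 ms crossfade at the split
    ramp = np.linspace(0.0, 1.0, fade)
    out[split:split + fade] = (1 - ramp) * ir[split:split + fade] \
                              + ramp * notched[split:split + fade]
    out[split + fade:] = notched[split + fade:]
    return out
```

A gentler variant would split into more than two segments with progressively deeper cuts, which is closer to what Joe describes.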


----------



## jaakkopasanen

lowdown said:


> Well this is a fine cup of tea.  You've ruined my perfectly good, expensive audiophile speakers.  Ok, not exactly, the speakers are just fine.  But now I don't want to listen to them anymore because the sound I'm getting through my headphones is so much better!
> 
> I got my Sound Professionals mics last week and the first attempt with Impulcifer even with less than ideal mic in ear positioning and no room correction was stunning.  I've listened to binaural recordings for years and tried all the HRIR options in HeSuVi but have never gotten the full illusion of speakers 8 ft in front of me with the imaging and sound stage I heard on that initial measurement round.  Was hard to wipe the smile off my face.  Been doing measurement sessions daily since, trying to refine all the sonic aspects, with very mixed results.  But last night I hit a great new plateau and it's almost unbelievable.  The tonal balance, detail, imaging, intimacy to the sound as if the performers are in my living room.  Now I've got to listen to everything I own again!  I've spent hours sinking into the music, and I know this isn't even as good as it can get.  There's no way I could thank you enough, and all those who've contributed to this virtual miracle.
> 
> ...


Awesome! I'm so glad to hear this. These kinds of experiences are what keep me working on this more than I should.



Joe Bloggs said:


> I downloaded and deconvolved FC from your samples
> 
> 
> Top is the raw results from it.  It does seem that there is a very strong resonance at 65Hz, lasting for over half a second.  The magnitude at that frequency is also high, but the main issue seems to be how long it drags on.
> ...


Interesting. This should be visible in my plots but it's not, so they are probably not quite correct. I need to check them.

I fed my room dimensions into a room mode calculator and indeed there is a lot going on just below 65 Hz: https://amcoustics.com/tools/amroc?l=830&w=360&h=268&r60=0.6

A tracking filter and reverberation time management should then fix my issue.
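The calculator linked above uses the standard rectangular-room mode formula, f = (c/2)·sqrt((nx/L)² + (ny/W)² + (nz/H)²). A quick sketch of it (speed of sound assumed 343 m/s); for the 8.30 × 3.60 × 2.68 m room from the link, the first height axial mode lands at about 64 Hz, right where Joe Bloggs saw the ringing:

```python
import itertools
import math

def room_modes(l, w, h, c=343.0, max_order=2, fmax=120.0):
    """Rectangular room mode frequencies up to fmax (dimensions in meters)."""
    modes = []
    for nx, ny, nz in itertools.product(range(max_order + 1), repeat=3):
        if nx == ny == nz == 0:
            continue
        f = (c / 2.0) * math.sqrt((nx / l) ** 2 + (ny / w) ** 2 + (nz / h) ** 2)
        if f <= fmax:
            modes.append((round(f, 1), (nx, ny, nz)))
    return sorted(modes)

# Room from the amroc link: 8.30 m x 3.60 m x 2.68 m.
# room_modes(8.30, 3.60, 2.68) includes (64.0, (0, 0, 1)), the height axial mode.
```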



johnn29 said:


> Jaakko - I actually already uploaded my recordings on the bug thread: https://github.com/jaakkopasanen/Impulcifer/issues/42


Ah, silly me. Thanks. I'll take a look (listen).


----------



## jaakkopasanen

@johnn29 I had a listen to your measurements. The problem sounds like the same one I'm having. I noticed it most with the side right and back right channels, where the bass ringing is heard in the right headphone. I actually had the same ringing there even without room correction, although at a reduced level.


----------



## johnn29

Ok, thanks for looking. I hope your tracking filter corrects it. I guess the room EQ on my AVR corrects for that room mode on the sub, so I don't get the issue there.


----------



## lowdown (Jan 14, 2020)

As I've posted, the results I'm getting with Impulcifer are truly amazing.  But I have encountered a couple of anomalies.  These may be very obvious to you, but at the risk of embarrassing myself I'll offer them in case it could help someone else or provide some useful data.

I have Sound Professionals mics, am using a Zoom H2N plugged into the PC, mics into the H2N, and Senn HD600 headphones into the PC headphone jack. The issue is I'm getting two very different headphone plots, neither of which looks good. The first seems sort of normal except for that severe trough:

https://i.imgur.com/yEV8t0V.png

The second is virtually flat, except that it's a mess of tight peaks and troughs, like an extended earthquake plot:

https://i.imgur.com/osoQ4YL.png

I assume I'm doing something different and wrong on both, but don't know what it is. But the key is when I use the 2nd flat but messy one to generate the hesuvi.wav file the results are spectacular. The first one yields really bad sounding results, which probably makes sense given how uneven it is. But whatever you're doing to smooth things out is working to perfection on the 2nd one. Again, I don't know if that's interesting to you, but there it is.

Second, on several of my results I heard a slight but distinct low frequency thumping sound in my right ear on some music tracks. Not terrible, but noticeable and distracting. Everything else kept the illusion of coming from well in front of me, but those little thumps were right in my ear. Then on one run I included the room_target option using your Harman file, and that thumping was completely gone. I didn't prefer the extra bass boost in the Harman target, so I edited the file and equalized everything from 20 Hz to 9 kHz flat, but left the Harman settings below and above those two frequencies. The thumping was still gone, so I stumbled onto a solution.
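That target edit can also be scripted instead of done by hand. A sketch, assuming an AutoEq-style target CSV with `frequency` and `raw` (dB) columns (check your file's actual header first; the function and file names are hypothetical):

```python
import csv

def flatten_target(in_path, out_path, lo_hz=20.0, hi_hz=9000.0):
    """Zero the target gain between lo_hz and hi_hz, keeping the ends as-is."""
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        if lo_hz <= float(row["frequency"]) <= hi_hz:
            row["raw"] = "0.0"
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["frequency", "raw"])
        writer.writeheader()
        writer.writerows(rows)

# flatten_target("harman.csv", "harman_flat_mids.csv")
```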

I'm so happy with the final result there's really not a problem to solve, I just wanted to thank you, and give some feedback that might be of interest.


----------



## musicreo

The headphone plots look very strange. How much headroom did you have for the measurements? How was the mic placement? The valley from 200 to 700 Hz is the reason why it sounds bad. Impulcifer will compensate for it, but this is not the real headphone response. The valley at 8 kHz is probably due to resonance between headphone and mic. This is what my headphone measurements look like: 1) AKG701, 2) Sennheiser HD555 (with 595 mod).
  

For me the best measurement so far was done with only one speaker in the center position. I guess this is an effect of the room acoustics: I don't have a calibrated listening position, and for the center position the distance to reflecting walls is the largest of all.


----------



## lowdown (Jan 14, 2020)

musicreo said:


> The headphone plots look very strange. How high was your headroom for the measurements? How was the Mic placement?  The valley at 200 to 700 Hz  is the reason why it sounds bad. Impulcifer will compensate it but this is not the real headphone response. The valley at 8 Khz is probably due to resonance between headphone and mic. This is how my headphone measurements look like. 1) AKG701 2) Sennheiser Hd555(with 595mod).
> 
> 
> For me the best measurement so far was done with only one speaker in the center position. I guess that this a effect of the room accustics as I don't have a calibrated hearing position and for the center position the distance to reflecting walls is the highest of all.



Yes, I agree mine look quite strange. On the 1st example the headroom would vary from -20 to -6 depending on the gain on the mic and the output level on the PC volume control. The other odd thing I didn't mention is that on the 2nd example the headroom was always -0 no matter what I did with the gain. I suspect there are some very basic things that I don't have right in both cases. As far as mic fit in my ears, I had trouble getting a stable fit until I glued foam earplugs onto the back of the mics, but actually that had little to do with the measurements; I got nearly identical graphs with and without the earplug mod. What might be at play is a mismatch between the bit rate in Windows and on the H2N. I tried tweaking that on the last measurements and I'm pretty sure the graph switched from the 2nd version back to the first. But since I was getting such good final results with that 2nd graph I pretty much quit experimenting for the time being.

As far as room measurements I have a calibrated UMIK-1 mic, and took a couple of different "specific" measurements, but again the results are good enough that I've paused making more.  I'm only using stereo since my interest is in music vs movie surround.  I may try using 1 speaker to see what effect that has but would kind of like to get that headphone measurement thing figured out first.  Your graphs look great.


----------



## lowdown (Jan 14, 2020)

Ok, I figured something out, and as I suspected it was very basic.  Previously I had my headphones plugged in to the headphone jack on the laptop.  I had originally tried using the headphone jack on the H2N but was getting feedback, and didn't realize "monitor" needed to be off, so I ended up using the laptop headphone connection.  I believe that's what was causing most of the problems.  I just now did another run with the headphones plugged into the H2N, with monitor turned off, and here's the graph:

https://i.imgur.com/S2hNEKm.png

Much more normal looking.  One other change is I cut the wings off of the Sound Professionals mics, so it's likely a better recording of my actual ears.  No idea how much that was a factor, but it's certainly better now.

Edit:  So far the results from my original post, that flat/earthquake headphone graph, are better than this latest run.  I'll keep playing and adjusting, but curious to me that it's so much better sounding. If I never get better results than that I'll still be ecstatic.


----------



## musicreo

That makes sense. If monitor was on, the sweep was mixed with the signal from the mics.


----------



## lowdown

musicreo said:


> That makes sense. If monitor was on, the sweep was mixed with the signal from the mics.



Agree, a novice mistake. But it also seems to have been combined with having the headphones plugged into the PC jack rather than into the H2N; some of those runs were with monitor turned off as well. I'm also still puzzled why the sound using that flat messy scan is better than the newest ones. I'm sorting through various options to narrow it down.


----------



## musicreo

lowdown said:


> I'm also still puzzled why the sound using that flat messy scan is better than the newest ones.



How does it sound without headphone equalisation?


----------



## lowdown

musicreo said:


> How does it sound without headphone equalisation?



I haven't applied any specific headphone equalization.  The flat fuzzy plot version sounds much better, much more tonally balanced bottom to top, and in HeSuVi it sounds best with no headphone equalization applied.  The latest one, where the plot is more normal, is similar sounding in the low and mid frequencies but has a very bright treble section, which is reflected in the plot. The L/R balance isn't as good either. So far the best results I've had by far have come from what's obviously a mistake in the setup.  It sounds so good I really don't know if it can be improved.  The tonal balance, clarity, imaging and detail are stunningly good.  A happy accident for me.


----------



## sander99

musicreo said:


> How does it sound without headphone equalisation?





lowdown said:


> I haven't applied any specific headphone equalization.  The flat fuzzy plot version sounds much better, much more tonally balanced bottom to top, and in HeSuVi it sounds best with no headphone equalization applied.  The latest one, where the plot is more normal, is similar sounding in the low and mid frequencies but has a very bright treble section, which is reflected in the plot. The L/R balance isn't as good either. So far the best results I've had by far have come from what's obviously a mistake in the setup.  It sounds so good I really don't know if it can be improved.  The tonal balance, clarity, imaging and detail are stunningly good.  A happy accident for me.


@lowdown: I have a feeling you don't understand what musicreo is saying. Normally Impulcifer uses your headphone measurement to insert the headphone equalisation (or compensation) into the resulting hesuvi.wav file. So when using it with the HeSuVi software you don't use additional headphone compensation (because then you would compensate twice).
When you run python impulcifer.py you can use the option --no_headphone_compensation; then no headphone compensation is inserted in the resulting hesuvi.wav, and your headphone measurements are not used at all.


----------



## lowdown (Jan 15, 2020)

sander99 said:


> @lowdown: I have a feeling you don't understand what musicreo is saying. Normally impulcifer uses your headphone measurement to insert the headphone equalisation (or compensation) in the resulting hesuvi.wav file. So when using it with the HeSuVi software you don't use additional headphone compensation (because then you would compensate double).
> When you run python impulcifer.py you can use the option --no_headphone_compensation and then no headphone compensation is inserted in the resulting hesuvi.wav, then your headphone measurements are not used at all.



You're right, I don't understand all the options or how the pieces of this sonic puzzle fit together.  It's taken me a while to even figure out how to connect the HW correctly.   I very much appreciate you taking the time to clarify.  I haven't tried the no_headphone_compensation option.  Would that be useful in figuring out why I get good results from such a flawed measurement?  (edit) It's not clear to me what a "no headphone measurement" hesuvi.wav file would reflect, and how it would be used.  Sorry for the noob questions.


----------



## sander99 (Jan 15, 2020)

lowdown said:


> It's not clear to me what a "no headphone measurement" hesuvi.wav file would reflect, and how it would be used.


Ha ha, actually I don't know exactly why @musicreo asked. You could use it in HeSuVi either without any headphone compensation or with one of the standard headphone compensation files (if there is one for your headphones). (I don't remember now if you have to download those separately somewhere...)
If you use it without any headphone compensation then, as a net result, the frequency response of your headphones (on your head) is "added" to (applied as extra filtering on) the frequency response of the virtual speakers plus room.


----------



## musicreo

I asked because I'm not sure the strange headphone measurement really has an impact on the result. Checking whether the raw measurement with channel balance correction already sounds very similar would answer the question.


----------



## lowdown

musicreo said:


> I asked because I'm not sure if the strange headphone measurement really have an impact on the result.  A check if the raw measurement with channel balance correction already sounds very similar would answer the question.



Interesting.  I don't understand your rationale, but I just tried re-running my command line with --no_headphone_compensation and it does sound just like my previous best version.  I didn't use a channel balance option before so left it out this time as well.  I did try channel balance previously but got better results without it.  So, what does that tell us?  That strange flat fuzzy headphone response was doing nothing, and I really prefer the sound with no headphone/ear compensation?  Am I really only thinking it's so great because it's much better than the results from that other measurement with the big dip in it?  Am I so used to the sound of my headphones, that as long as I'm getting that and the illusion of speaker localization it sounds "right"?  Seems like now that I've figured out how to get a proper headphone measurement I've got some experimenting to do.  Thanks for any clarification and advice.


----------



## musicreo

lowdown said:


> I did try channel balance previously but got better results without it.


Can you show us the "Results" plot? For me the channel balance option has so far never made things worse.



lowdown said:


> So, what does that tell us?  That strange flat fuzzy headphone response was doing nothing, and I really prefer the sound with no headphone/ear compensation?


Yes! I already thought yesterday that the measurement has so many ups and downs that, in sum, it only makes minor changes.


----------



## lowdown (Jan 15, 2020)

musicreo said:


> Can you show us the "Results" plot. For me the channel balance option so far never made things worse.
> 
> 
> Yes! I already thought yesterday that the measurement have so many ups and downs that in sum it will do only minor changes.



Sure, here's the Results plot:

https://i.imgur.com/FXZKmDG.png

I did have the thought yesterday that if a basically flat headphone plot gave such good results, why not skip that measurement and stick in a flat line. I didn't take it to the next step like you did. For me, using the channel balance option messed up the imaging, so I dropped it. I thought I had read a post by Jaakko saying the results could be mixed and it wasn't always best, which fit my result.

So, worst case I have something I really like, and best case I can now start doing real measurements and perhaps improve on it.  Thanks for helping me sort through it.


----------



## musicreo

For me the plot looks good, but your right speaker is still louder in the range up to 800 Hz. On headphones you should normally hear this difference. I would guess that the soundstage is not perfectly centered but shifted to the right side.
Can you show us also the same plot but using the channel balance trend option?


----------



## lowdown (Jan 16, 2020)

musicreo said:


> For me the plot looks good but still your right speaker is louder in the range up to 800Hz. On headphones you should normally hear this difference. I would guess that the soundstage is not perfectly centered and shifted to the right side.
> Can you show us also the same plot but using the channel balance trend option?



Sorry, I wish I could, but I've inadvertently overwritten some files and now can't recreate that specific plot.  I had been using the trend option but as I mentioned I noticed some anomalies in imaging.  That flat headphone graph looked like the L/R balance was overall so close that I tried a run without channel_balance and liked it better on the test tracks I used.  Now that I've figured out how to set up the recording correctly and am getting normal plots I'm focusing on tweaking the options for those runs.  There is a noticeable balance issue in the uncorrected recording.  Previously I had only tried the trend option, or no balance option, but with these new recordings I've found that using "right" is working best.

The headphone plot:

https://i.imgur.com/S2hNEKm.png

Results with channel_balance=right, which so far is giving very good imaging results.

https://i.imgur.com/qLpa8Os.png

I'm working on tweaking the room_target to get some eq adjustments, but I'm very encouraged that the final results will be even better than what I was so happy with before.

I really appreciate everyone's help, and so kindly putting up with my stumbling learning curve.


----------



## johnn29

Jaakko - can you please confirm I'm doing the headphone transform correctly? I decided to use my Sony WH-1000XM3s (over-ears) for Impulcifer use - but I don't want to re-record my nearfield measurement because of the hassle. So just as Transform works with IEMs, it should work with over-ears. Obviously not as ideal as a real headphone comp, but good for saving time.

Is this the correct command? I'm trying to use oratory's measurements because the XM3s he measured are actually mine. I'm just always confused about which compensation curve and sound signature to use - guidance on this would be good in the wiki, i.e. which curve for rtings, oratory, crinacle etc.


```
python frequency_response.py --input_dir="oratory1990\data\onear\Sony WH-1000XM3" --output_dir="my_results/Bose 700 (WH-1000XM3)" --compensation="compensation/harman_over-ear_2018_wo_bass.csv" --sound_signature="results\oratory1990\harman_over-ear_2018\Bose Headphone 700\Bose Headphone 700.csv" --equalize --parametric_eq --max_filters=5+5 --ten_band_eq --bass_boost=4
```


----------



## jaakkopasanen

I implemented an experimental script for managing reverberation time. The script can be found in the research/reverberation-management folder. It crops the reverberant tails of the impulse responses with a Hanning window (in dB scale). I would appreciate everyone trying it out and giving me feedback. If it works without problems, I'll add it to Impulcifer proper. I will try to do this with the tracking filter as well, because that should make it possible to adjust reverberation times separately for each frequency.

Reducing the reverberation of the FC channel in my demo recording to 300 ms removes the bass ringing / bleed / crosstalk problem. So I guess the problem really is low frequency ringing artifacts.
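A rough sketch of the idea (not the actual script): fade the IR tail so that the gain in dB follows half a Hanning window down to a floor. The peak detection, floor level, and exact window placement are my assumptions:

```python
import numpy as np

def crop_reverb(ir, fs, reverb_ms=300.0, floor_db=-100.0):
    """Fade an IR tail to silence over reverb_ms, Hanning-shaped in dB."""
    peak = int(np.argmax(np.abs(ir)))
    end = peak + int(fs * reverb_ms / 1000.0)
    out = ir.astype(float)
    if end >= len(out):
        return out                          # nothing to crop
    n = end - peak
    # Second half of a Hann window: 1 at the peak, ~0 at the end of the fade
    half = 0.5 * (1.0 + np.cos(np.pi * np.arange(n) / n))
    gain_db = floor_db * (1.0 - half)       # 0 dB at the peak -> floor_db at end
    out[peak:end] *= 10.0 ** (gain_db / 20.0)
    out[end:] = 0.0
    return out
```

Shaping the fade in dB rather than linearly keeps the early reflections nearly untouched while the late tail drops off smoothly.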



johnn29 said:


> Jaakko - can you please confirm I'm doing the headphone transform correctly? I decided to use my Sony WH-XM3's (over ears) for Impulcifer use - but I don't want to re-record my nearfield measurement because of the hassle. So just like Transform works with IEM's - it should work with over ears. Obviously not as ideal as areal headphone comp but good to save time.
> 
> Is this the correct command? I'm trying to use oratory's measurements because the XM3's are actually mine he measured. I'm just always confused at which compensation curve and sound signature to use - guidance here would be good in the wiki. I.e. which curve for rtings, oratory, cinnacle etc.
> 
> ...


That command would make XM3 sound like Bose 700. This is correct if you made the headphone compensation recording with Bose 700 and want to listen to virtual speakers with XM3. I added more instructions about this headphone transfer: https://github.com/jaakkopasanen/Impulcifer#headphone-compensation


----------



## johnn29 (Jan 19, 2020)

Just tried it - it works! No more channel bleed at all - even with the worst offender I had. It also has the benefit of improving clarity without messing around with nearfield recordings.

I've only done a quick test but previously my nearfield recording sounded very good but it was very obvious the sound was coming out close to me. Now I can trim the reverb time on my 1.5-2m recordings to improve clarity but without having the speakers be so close to me. I've literally spent about 5 mins with this though - I'll do some critical listening and watch a few TV shows tomorrow and report back more.

Thanks for the clarification in the wiki - all makes sense now.

Edit: More testing with stereo recorded speech off YouTube - the virtual center with the cropping is so much more accurate - much more like a real surround recording with a center channel there. Without the crop the center channel is less pin point. All this is pretty amazing for movies.

For music I can see where a selectable frequency reverb control would be beneficial. You could then get rid of the bass bleed and leave the pleasing reverb alone.

I'll do some more outputs with lower reverb times and see what I like.


----------



## lowdown

I'm only using stereo for music at this point so doubt my observation is very helpful.  I only tried 300 to see what effect it would have and it moves vocals more inside my head, but didn't cause any other "harm".  Nice feature to have it configurable.  Can't comment on the bass bleed.

One apparent typo, the example shows a --times option, which throws an error, instead of --reverb.


----------



## musicreo

I don't hear any effect and it seems the hesuvi.wav is not changed at all?


----------



## lowdown

musicreo said:


> I don't hear any effect and it seems the hesuvi.wav is not changed at all?



In this test mode it creates a file called cropped.wav in the reverberation-management folder that's the modified copy of hesuvi.wav.


----------



## musicreo (Jan 20, 2020)

Ok, I found it. I will test it in the evening.

Edit: It works. Tested from 100 to 800 ms. Settings that are too small start to sound more in-the-head, so I would not go under 300 ms; at the moment I prefer 500-800 ms, where the changes are very small.


----------



## johnn29

Watched a few shows/movies with 300ms on a couple of my HRIR's - no ill effects.

Think it's ready for the main branch!


----------



## lowdown

A couple of questions.  Sorry if they are too basic, or I'm so confused even the questions don't make sense. 

What's the difference between the hesuvi.wav and hrir.wav files?  I've been using hesuvi.wav.  What is hrir.wav for?

The doc says "Impulcifer will compensate for the headphone frequency response using headphone sine sweep recording".  Is there a default eq curve that you compensate the headphone/ear measurement to match?  I realize there are ways to bake in other eq adjustments as documented, but I'm wondering what the target is without the extra eq.csv file.  

Does the "results" plot show the L/R FR curves that I'm hearing at my ear canal, so it should match my ideal FR curve, or is that the FR curve that is sent to the headphone that then gets modified by my specific headphone response curve and ear pinna?  I'm trying to understand from the results plot how far off I am from my ideal curve.  Maybe the "pre" and "post" plots are closer to what I'm after but I can't tell from the documentation.

Thanks.


----------



## musicreo (Jan 24, 2020)

lowdown said:


> A couple of questions.  Sorry if they are too basic, or I'm so confused even the questions don't make sense.
> 
> What's the difference between the hesuvi.wav and hrir.wav files?  I've been using hesuvi.wav.  What is hrir.wav for?



hrir.wav has a different channel order than hesuvi.wav. The reason is the very strange channel configuration in HeSuVi.




lowdown said:


> The doc says "Impulcifer will compensate for the headphone frequency response using headphone sine sweep recording".  Is there a default eq curve that you compensate the headphone/ear measurement to match?  I realize there are ways to bake in other eq adjustments as documented, but I'm wondering what the target is without the extra eq.csv file.


At the moment the plot shows a flat target, so I guess that's it.


----------



## jaakkopasanen

The left curve in the results plot is the average frequency response of all left-ear impulse responses. The results plot contains all the equalizations, corrections and what not. It's the total EQ curve that gets sent to the headphones.

I'm not sure how the results curve relates to a personal ideal FR target without speaker virtualization. Maybe it would be good with HRIR measurements that only have FL and FR channels. It could also be that the ideal target for virtual speakers is a bit different than for headphones without virtualization.
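The averaging described above can be reproduced from the impulse responses directly. A sketch (the FFT length and dB floor are my choices, not necessarily what Impulcifer's plotting code does):

```python
import numpy as np

def average_magnitude_db(irs, n_fft=65536):
    """Average magnitude response (dB) over several impulse responses."""
    curves = []
    for ir in irs:
        mag = np.abs(np.fft.rfft(ir, n_fft))
        curves.append(20.0 * np.log10(np.maximum(mag, 1e-12)))  # dB, floored
    return np.mean(curves, axis=0)

# e.g. average_magnitude_db([fl_left_ir, fc_left_ir, fr_left_ir])
# would give a "Left" curve over all left-ear responses.
```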


----------



## lowdown

musicreo said:


> Hrir.wav has a different  channel order compared to Hesuvi.wav. The reason is the very strange channel configuration in hesuvi.



Ok, got it.  Thanks.



jaakkopasanen said:


> Left curve in results plot is the average frequency response of all left ear impulse responses. The results plot contains all the equalizations, corrections and what not. It's the total EQ curve that gets sent to the headphones.
> 
> I'm not sure how the results curve relates to an the personal ideal FR target without speaker virtualization. Maybe it would be good with HRIR measurements that only have FL and FR channels. Also could be that ideal target for virtual speakers is a bit different than for headphones without virtualization.



That helps, but I'm still unclear, which I know is my lack of knowledge. Here's my basic problem, and maybe I should have presented it this way to start. I'm only dealing with stereo, FL/FR, for now. If I just listen to music with my speakers and on some tracks I hear something out of tonal balance, is it the recording, or the FR of my speakers/amp/room? In that case I can use my calibrated UMIK-1 and REW and see a plot of the FR of my equipment at my listening point. I can then make adjustments to get the curve closer to what I want without guessing where a specific peak or dip may be. Is that too much thump I hear in the bass on that track in the recording or my system, and if it's my system, which frequency needs adjusting?

I look at the results plot for my hesuvi.wav and I see large 10 dB swings up and down along the frequency range. I can't figure out how to interpret that, whether it's normal and good, or really abnormal. Using Impulcifer with my headphones is as if all of my recordings have been remastered. As I've said before it's amazingly good, but the frequency balance has changed, and I can't tell if it's that much more true to the recordings or if it's introduced an imbalance that I could fix with EQ.

I know some people are experts at this, but for me using individual music tracks to take stabs at where and how much to tweak the EQ on specific frequencies is a pretty hit and miss approach. Is there a procedure or a way of interpreting the plots that would help me narrow this down?


----------



## johnn29

lowdown - are you using the virtual room correction? That'll handle everything automagically and EQ to what's been shown to be a listening panel's preference - the Harman room curve. If you are using that, that is why it sounds different.

A good frequency response in-room would be +/- 5 dB - the virtual room correction can get you close to that.

I know humans find it harder to pick up on frequency nulls than on booms.

For what it's worth, in my entire "career" of being an audio nerd I've never put much stock in a flat frequency response. Your brain copes with much of it automatically. Now that I have Impulcifer I can generate a flatter frequency response vs the natural room and A/B them. I couldn't really notice much difference outside of the bass boost. But it satisfies the audio OCD in me to know that I have a much flatter frequency response with the virtual room than in my real room.


----------



## lowdown (Jan 28, 2020)

johnn29 said:


> lowdown - are you using the virtual room correction? That'll handle everything automagically and EQ to what's been shown to be a listening panel's preference - the Harman Room curve. If you are using that - that is why it sounds different.
> 
> A good frequency response in room would be +/- 5db - the virtual room correction can you you close to that.
> 
> ...



Yes, I'm using room correction. I've tried it both with the Harman curve and with versions of it that I've tweaked. The first time I used it stock, the bass was boosted too much, so I've created a couple of edited versions. I understand that flat is not most people's preference. I'm not stuck trying to achieve flat, but what's challenging for me is determining where and how much to use EQ, or some other option, to get closer to my ideal. I'm getting a lot of variation in sound quality in my final hesuvi.wav files after several different measuring sessions, and I'm having trouble isolating which step or combination is causing so much variation. Then, when I get a result that's very good but not quite ideal, how do I make the right adjustment? Part of it is inconsistency in my measurements, both with headphones on and room measurements, and part is having so many options to tweak in the last computation command line. Then lastly I end up trying to make EQ adjustments by ear using various music tracks with no objective measurement feedback.

I totally agree that Impulcifer is better than my speakers.  I've basically stopped listening to music through my speakers because the headphone experience is so much more revealing and captivating, and the overall sound quality is so good.  But there are still tonal and spatial anomalies that I'd like to improve, and I'm working my way through how to do that.


----------



## johnn29

Ah, I see what you mean. You can generate the different virtual room targets that Jaakko tried (https://github.com/jaakkopasanen/Impulcifer/tree/master/research/virtual-room-target). See if you like any of those better.

In my experience with trying to figure out what HRIR sounds good I was even thrown off by being in different rooms. I posted about it when I first generated a good HRIR that I was very happy with when listening in my home theater. I then took it into a cafe and found it sounded terrible and that Dolby Headphone sounded far better. It's because my brain wasn't expecting the theater sound in a loud and novel environment. 

I'd say the easiest thing to do is shoot for the Harman target and then use simple bass/mid/treble tone controls on some of your favorite tracks to see what you like. When Harman studied room correction systems, they found that listeners didn't actually prefer them and that simple tone controls were sufficient anyway.

In some ways, having an HRIR means we can do A vs B much quicker and easier than on a real loudspeaker setup. But in other ways, because you can switch so quickly between HRIRs, you don't give your brain enough time to compensate. You then end up fiddling with many settings. For my own listening I've created 3 go-to HRIRs. One is nearfield, which I use when close to a screen - like a laptop or desktop monitor; the second is midfield, for when I'm sitting 1-2 m away from a TV; and the final one is far-field, which is for projection use. The only one I can consistently use in all situations is nearfield - if I use the far-field one in a different environment it just sounds wrong.


----------



## lowdown (Jan 30, 2020)

johnn29 said:


> Ah I see what you mean. You can generate different virtual room targets that Jaako tried (https://github.com/jaakkopasanen/Impulcifer/tree/master/research/virtual-room-target). See if you like any of those better.
> 
> In my experience with trying to figure out what HRIR sounds good I was even thrown off by being in different rooms. I posted about it when I first generated a good HRIR that I was very happy with when listening in my home theater. I then took it into a cafe and found it sounded terrible and that Dolby Headphone sounded far better. It's because my brain wasn't expecting the theater sound in a loud and novel environment.
> 
> I'd say the easiest thing to do is shoot for the Harman target and then use simple bass/mid/treble tone controls on some of your favorite tracks to see what you like. When Harman studied room correction systems, they found that listeners didn't actually prefer them and that simple tone controls were sufficient anyway.



Ok, I've looked at that but don't understand how to use the virtual room targets.  My approach has been to edit the harman-room-target.csv file.  A bit arduous, but doable.  But I've had mixed results trying to use it to adjust the final frequency response curve.  Basically, where I'm at now is leaving the Harman adjustments below 20 and above 12k and setting it flat in between. I can more easily adjust EQ at the end using Peace and hear the results in real time.  It's not nearly as granular as the Harman file, but as you say, bass/mid/treble adjustment may be sufficient.
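For anyone wanting to script that edit instead of hand-editing the CSV: assuming the target follows the AutoEq-style two-column `frequency,raw` layout, flattening a band is a few lines of Python. The corner frequencies and values below are illustrative only:

```python
# Sketch: zero the gain of a room-target curve inside a band, keeping the
# original values outside it. Rows are (frequency_hz, gain_db) pairs, as in
# an AutoEq-style "frequency,raw" CSV. Values are illustrative, not a target.

def flatten_target(rows, f_lo, f_hi):
    """Return a copy of rows with gain set to 0 dB for f_lo <= f <= f_hi."""
    return [(f, 0.0 if f_lo <= f <= f_hi else g) for f, g in rows]

rows = [(10.0, 6.0), (100.0, 2.0), (1000.0, 0.5), (15000.0, -2.0)]
flat = flatten_target(rows, 20.0, 12000.0)  # keep the ends, flatten the middle
```

Writing the result back out with `csv.writer` then gives a file the room correction step can consume.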



> In some ways, having an HRIR means we can do A vs B much quicker and easier than on a real loudspeaker setup. But in other ways, because you can switch so quickly between HRIRs, you don't give your brain enough time to compensate. You then end up fiddling with many settings. For my own listening I've created 3 go-to HRIRs. One is nearfield, which I use when close to a screen - like a laptop or desktop monitor; the second is midfield, for when I'm sitting 1-2 m away from a TV; and the final one is far-field, which is for projection use. The only one I can consistently use in all situations is nearfield - if I use the far-field one in a different environment it just sounds wrong.



Switching between HRIRs quickly is definitely a bit disconcerting.  Some sound very good at first with certain tracks, but serious flaws can show up on other tracks.  And there is a period of hearing adaptation to a different tonal balance and imaging presentation.

I'm currently getting good repeatability with my headphone measurements.  What I believe is the biggest cause of the variability is the room measurements.  Some of my HRIRs are very tight, with imaging that is so precise, and others are much more spread out, with a broader, more spacious sound stage but vague, blurred imaging.  Voices vary greatly, from really excellent to echoey, as if in a cave.  I'd like to be able to pick and choose attributes or fix specific elements, but figuring out the cause and how to make surgical adjustments has been very elusive for me.  In my fantasies there are sliders or knobs to change key parameters while listening, like EQ now, but that could be beyond what's feasible.  As you say, it's some of my audio OCD.  But the results can be so good, it's such a breakthrough, that it's hard not to want to perfect it when something odd shows up on a track.  I could very well end up with several HRIRs as you have, switching for the type of music.  Pretty nice problem to have, where you can make that level of sound-system change with the click of a mouse.


----------



## jaakkopasanen

I redid the Harman room target because the old one had too much bass. I don't know where I got the previous one, but the new one has about 6.8 dB of bass boost. This new target is called harman-in-room-loudspeaker-target. I also added another target, which is Harman's headphone target for an in-room setup. It's essentially the same but with less treble. This is called harman-in-room-headphone-target.

Another update is the bass_boost and tilt parameters, which work the same way as in AutoEQ. Now it's possible to use the Harman room targets without bass and add the desired level of bass boost with the parameter. Tilt makes the whole frequency response darker or brighter.
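To illustrate what those two knobs do: a tilt is a gain that changes linearly in dB per octave, and a bass boost is a low shelf. A minimal sketch; the shelf corner frequency and the crude first-order shelf shape here are assumptions for illustration, not Impulcifer's actual filter:

```python
import math

def target_gain(freq, bass_boost=0.0, tilt=0.0, shelf_fc=105.0, ref=1000.0):
    """Gain in dB at freq (Hz): a low shelf of bass_boost dB below shelf_fc,
    plus tilt dB per octave relative to the ref frequency."""
    shelf = bass_boost / (1.0 + (freq / shelf_fc) ** 2)  # crude 1st-order shelf
    slope = tilt * math.log2(freq / ref)
    return shelf + slope
```

For example, `target_gain(f, bass_boost=6.8)` sketches the roughly 6.8 dB bass boost mentioned above, while a negative `tilt` darkens the whole response.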


----------



## lowdown

Just want to say Impulcifer is so amazingly good.  I thank you every day.

That is all.


----------



## jaakkopasanen

lowdown said:


> Just want to say Impulcifer is so amazingly good.  I thank you every day.
> 
> That is all.


Aww. Thanks!


----------



## musicreo

jaakkopasanen said:


> I also added another target, which is Harman's headphone target for an in-room setup. It's essentially the same but with less treble. This is called harman-in-room-headphone-target.



If I have a headphone measurement how do I actually apply the targets? What is the command for that?

Is the headphone compensation individual for the left and right channels by default, or is an average of both channels used? If I only want to use the left or the right measured channel for the headphone compensation, how do I do this without editing headphones.wav?

Attached is one of my latest headphone measurements (HD555 measured with Primo EM258). The differences in the frequency response begin already at 2 kHz.  I guess individual compensation in this relatively low frequency range is problematic, as the mic placement may have a different effect on the speaker measurement compared to the headphone measurement?
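Pending a built-in switch for this, one stopgap is to write an edited copy of the measurement in which one channel is duplicated onto the other, so both ears share the same compensation curve. A minimal stdlib-only sketch, assuming a 16-bit PCM stereo file; the file names are illustrative:

```python
import struct
import wave

def copy_left_to_right(src_path, dst_path):
    """Write a copy of a stereo 16-bit PCM wav with the left channel
    duplicated onto the right, so both ears share one compensation curve."""
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        assert params.nchannels == 2 and params.sampwidth == 2
        frames = src.readframes(params.nframes)
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    out = []
    for i in range(0, len(samples), 2):
        out += [samples[i], samples[i]]  # left sample into both slots
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(struct.pack("<%dh" % len(out), *out))
```

Running this on a copy of headphones.wav and pointing the compensation step at the edited file leaves the original measurement untouched.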


----------



## johnn29

Been loving my HRIR for some time now. I'm spending another month away from home in an Airbnb. Still can't get over how, with a 15.6" 4K OLED and Impulcifer, I can be transported back to my theater room - I had a great experience watching Midway on the laptop. The surround track on that is excellent!

Just to chime in on my thesis that doing A vs B very quickly on HRIRs isn't a great way to test, and that you need multiple HRIRs for multiple physical rooms/situations so you get used to the sound of the room. Sean Olive (researcher at Harman) cites some research that may be of interest and seems to support this: https://www.stereophile.com/content/slow-listening


----------



## venkatesh63

@jaakkopasanen @johnn29 @musicreo Can you share your HRIR recordings? I don't want the demo recordings from the Impulcifer page. I want the copies you created with different speakers, along with the equalization (eq.csv) headphone compensation files. I want copies of the hesuvi.wav or hrir.wav files you guys created with your gear, for movies and music.


----------



## Zenvota

johnn29 said:


> Just to chime in on my thesis that doing A vs B very quickly on HRIRs isn't a great way to test, and that you need multiple HRIRs for multiple physical rooms/situations so you get used to the sound of the room. Sean Olive (researcher at Harman) cites some research that may be of interest and seems to support this: https://www.stereophile.com/content/slow-listening


I definitely find there to be a longer acclimation phase using hrir/prir.  When I switch between rooms I always pause playback and then try to ignore initial observations and let my brain adjust.  Then after a short while I feel I can better analyze the audio.


----------



## castleofargh

johnn29 said:


> Been loving my HRIR for sometime now. I'm spending another month away from home in an airbnb. Still can't get over how with a 15.6" 4k OLED and Impulcifer I can be transported back to my theater room - had a great experience watching Midway on the laptop. Surround track on that is excellent!
> 
> Just to chime in on my thesis that doing A vs B very quickly on HRIRs isn't a great way to test, and that you need multiple HRIRs for multiple physical rooms/situations so you get used to the sound of the room. Sean Olive (researcher at Harman) cites some research that may be of interest and seems to support this: https://www.stereophile.com/content/slow-listening


The actual research is good, the article...
And of course room convolution is something we need to settle into, slowly. Just considering reverb, rapid switching is like suddenly stepping into the shower while singing and realizing how much reverb there is. But after a while the brain will identify the reverb components and remove, or at least attenuate, them, while still making use of them for some secondary spatial cues.
That need of getting used to sound so our brain can do what it does is very common and an old story. Simply playing around with an EQ makes it painfully obvious. Listen to a song, boost the bass a lot, turn that boost OFF, and for a while the perfectly fine and usual amount of bass will feel weak. That's because instead of comparing a sound to our general habits, we're now comparing it to that other new reference. When we consider the deep power of priming in psychology, it's not hard to imagine the influence that rapid A/B switching can have on our impressions.
 Rapid switching is strictly for the purpose of detecting differences between 2 samples. That's what it is for and the only qualitative judgement we should ever get from rapid switching is "do they sound different?". The silly guy who wrote the stereophile article clearly doesn't know that, otherwise he would be ashamed of sentences like this one: "Scientific testing methodologies such as ABX, which require quick and conscious evaluation of a change in the sound, have long struck many of us as insufficient, seeming to miss much that affects our enjoyment of music." ABX to evaluate enjoyment? Did he read anything about ABX before writing that? Thinking himself smart, he's really telling the world that clever him and his buddies have long noticed how a hammer makes for a poor TV remote. He had no business bringing ABX in that article and anytime he does, he makes a fool of himself.

ps: Not sure what Olive has to do with this? Did I miss something?


----------



## venkatesh63

venkatesh63 said:


> @jaakkopasanen @johnn29 @musicreo  can you share your recordings of HRIR.i don't want demo recordings from Impulcifer page. i want your created copies of different speaker's and Equalization's(eq.csv) of headphone's compensation files. i want for Movies and Music to listen copies of hesuvi.wav or hrir.wav you guys created with your gears.


Please, anybody, share the files you created with your gear. I want them for movies and music.


----------



## lowdown

venkatesh63 said:


> Please anybody give me the files from your created gears i want for Movies and Music.


Impulcifer is used to create a personalized HRTF specific to an individual's ears and headphones.  Your request is kind of like asking for other people's eyeglass prescriptions.  Not likely to help you.


----------



## venkatesh63 (Feb 26, 2020)

@lowdown I just asked for the files, lol! I'm not asking for your assets. This group is for sharing and caring. And I asked the guys mentioned above, not you: @jaakkopasanen @johnn29 @musicreo. Why can't I try others' HRTFs? If I make a mistake, I will learn how they did it; I will analyze and rectify my own HRIR.


----------



## johnn29

castleofargh said:


> ps: Not sure what Olive has to do with this? Did I miss something?



He posted it on his twitter, I assumed he knew the researchers.

More fun with Impulcifer this week - I've been using the AutoEQ transform function to use my existing HRIR with new headphones without having to re-measure, to great effect. Also, DTS Headphone:X v2.0 was updated to support many more headphones - for me it's the best synthetic HRTF I've tried, and it has the benefit of virtual height channels. But compared to my LS50 Impulcifer HRIR it pales.

Only thing I'd like is the ability to trim the reverb to only the bass frequencies. I'm hoping that gets rid of the channel bleed but preserves the pleasing reverb.


----------



## musicreo (Feb 28, 2020)

One thing I want to share is my current favourite way of fixing the microphones in place.  I have three pairs of PUI 5024HD capsules and two pairs of Primo EM258 microphones. After testing different options, I found that OHROPAX® Silicon is for me the best way of holding the capsules in place during measurements.




Another thing I can say after a lot of testing is that my Sennheiser HD555 (with the 595 mod) always sounds better than my AKG 701 headphones for binaural playback. I have the feeling that the AKG 701 is much more prone to small channel-balance errors compared to the HD555.


----------



## dwk

Hi, 
 New user coming up to speed.

 First, major props (and thanks) for implementing this. I've had this idea in my head ever since reading about the Realiser some years ago, and have had my Sound Professionals mics sitting around for a few years, but never got off my butt to do anything. This is quite impressive.

 I'm interested in 'conventional' use of Impulcifer, but I also have a couple of somewhat unconventional ideas that I'm hoping to explore.

 The first comes from a long-standing feeling that the headphone guys and the ambiophonic guys need to talk to each other. Ambiophonic guys are spending a LOT of time and money to set up a speaker system that eliminates crosstalk, whereas the headphone guys seem to be constantly looking for new ways to introduce crosstalk. This led me to a theory that what the headphone guys really want is HRTF, not crosstalk.

 So, my first application is to try to create 'ambiophonics over headphones' with Impulcifer - i.e. recreating the front stereo dipole. This should actually be really easy - it's just FC-L and FC-R filters applied to a conventional L/R stereo signal. I tried this last night, and the jury is still out on how well it works. It definitely sounds good and opens things up significantly (on my HD6XX), but doesn't quite recreate the mind-blowing sense of space you can get out of an ambio dipole. I suspect this is at least partially due to my measuring the single speaker in the nearfield, so I'm planning on retrying the measurement farther out in the room (I need to figure out cabling to be able to do that). Plus, my bass vanished despite my KEF R3s having reasonable response, so I may need to add room correction or otherwise investigate, since that made things a bit hard to completely evaluate. (I've looked at the code and it looks pretty simple to update the output code to add an ambio filter with just FC-L and FC-R, so I may generate a pull request if you're interested.)
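For readers trying to picture the processing: the FC-only rendering reduces to two independent convolutions, one per ear, with no cross terms. A toy sketch (the impulse responses here are placeholders, not real measurements):

```python
# Sketch of 'ambiophonics over headphones' from one front-center measurement:
# the measurement yields two IRs (center speaker to left ear, center speaker
# to right ear); each input channel is rendered only through its own ear's IR,
# so no crosstalk is introduced. IRs below would come from real measurements.
import numpy as np

def ambio_render(left_in, right_in, fc_left_ear_ir, fc_right_ear_ir):
    left_out = np.convolve(left_in, fc_left_ear_ir)    # L input -> left ear only
    right_out = np.convolve(right_in, fc_right_ear_ir)  # R input -> right ear only
    return left_out, right_out
```

With identity IRs this passes the signal through unchanged; with real measured IRs it applies the center speaker's HRTF per ear.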

 The other non-conventional application I'm thinking of is DIY speaker crossover auralization. The idea is that you measure the individual raw drivers in the cabinet in-room, and then individually filter the input channels with appropriate IIR crossover filters ahead of the HRTF processing. As long as you can capture and preserve the relative timing between drivers (this might need a separate step), this should allow accurate 'listening' to the candidate crossover without having to build it. There are a couple of packages that do anechoic simulation of on-axis speaker crossover response, but nothing that does in-room as far as I know. With the growing appreciation of how off-axis response influences perception, I'm rather intrigued by this capability. This one will take a bit longer, and I'll definitely have to figure out what to do to protect the tweeters from full-scale signals.
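A sketch of that auralization chain, under stated assumptions: a 4th-order Linkwitz-Riley split (two cascaded 2nd-order Butterworth sections, via scipy) feeding per-driver in-room impulse responses. The crossover frequency, filter order, and IRs are illustrative, and driver time alignment is ignored here:

```python
# Sketch of crossover auralization: split the input with a candidate IIR
# crossover, then convolve each band with that driver's measured in-room IR.
# LR4 = two cascaded 2nd-order Butterworth sections per band; the 2 kHz
# crossover and the identity IRs used in testing are illustrative only.
import numpy as np
from scipy import signal

def auralize(x, fs, fc, woofer_ir, tweeter_ir):
    b_lo, a_lo = signal.butter(2, fc, btype="low", fs=fs)
    b_hi, a_hi = signal.butter(2, fc, btype="high", fs=fs)
    low = signal.lfilter(b_lo, a_lo, signal.lfilter(b_lo, a_lo, x))
    high = signal.lfilter(b_hi, a_hi, signal.lfilter(b_hi, a_hi, x))
    return np.convolve(low, woofer_ir) + np.convolve(high, tweeter_ir)
```

A nice property of the LR4 split is that the two bands sum to an allpass, so with identity "driver" IRs the chain is tonally transparent; any coloration heard then comes from the candidate crossover interacting with the real driver IRs.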

 BTW - has anyone looked into convolution playback on iOS? If my ambio-over-headphones idea works, even a simple straight L/R convolution might greatly enhance listening on-the-go.

Once again though - great job.
Doug.


----------



## lowdown

dwk said:


> So, my first application is to try to create 'ambiophonics over headphones' with Impulcifier - i.e. recreating the front stereo dipole. This should actually be really easy - it's just FC-L and FC-R filters applied to a conventional L/R stereo signal. I tried this last night, and the jury is still out on how well it works. It definitely sounds good and opens things up significantly (on my HD6xx), but doesn't quite recreate the mind-blowing sense of space you can get out of an ambio dipole. I suspect this is at least partially due to my measuring the single speaker in the nearfield, so I'm planning on retrying the measurement farther out in-room (need to figure out cabling to be able to do that). Plus, my bass vanished despite my Kef R3s having reasonable response, so I may need to add room correction or otherwise investigate since it made things a bit hard to completely evaluate.. (I've looked at the code and it looks pretty simple to update the output code to add an ambio filter with just FC-L and FC-R, so I may generate a pull request if you're interested)


I'm only using Impulcifer in stereo mode, with FL/FR, and am getting extraordinary results.  The imaging, detail, soundstage, and tonal balance are much better than any of the audiophile speakers I've heard or owned.  The illusion of listening to exquisite speakers several feet in front of me is stunningly real, without any of the artificial-sounding artifacts that are obvious in the other virtual surround products I've tried. For me the sound field and space are not inflated beyond what's in the recording, so no simulated surround imaging in stereo mode, but often the sense of sitting in the recording session is uncanny. My suggestion would be: don't come to a conclusion too soon about how good Impulcifer can be. I went through numerous recording sessions, tried many different combinations of command line options, and tweaked the room correction files before I had the version that I'm listening to now. Also, as others have posted, it can take some time for your ears to adjust to a different HRIR. It may take some effort, but the end result can be way more than worth it.


----------



## jaakkopasanen

dwk said:


> So, my first application is to try to create 'ambiophonics over headphones' with Impulcifier - i.e. recreating the front stereo dipole. This should actually be really easy - it's just FC-L and FC-R filters applied to a conventional L/R stereo signal. I tried this last night, and the jury is still out on how well it works. It definitely sounds good and opens things up significantly (on my HD6xx), but doesn't quite recreate the mind-blowing sense of space you can get out of an ambio dipole. I suspect this is at least partially due to my measuring the single speaker in the nearfield, so I'm planning on retrying the measurement farther out in-room (need to figure out cabling to be able to do that). Plus, my bass vanished despite my Kef R3s having reasonable response, so I may need to add room correction or otherwise investigate since it made things a bit hard to completely evaluate.. (I've looked at the code and it looks pretty simple to update the output code to add an ambio filter with just FC-L and FC-R, so I may generate a pull request if you're interested)


This is very interesting. I'm not very familiar with ambiophonics, but from what I quickly read it's basically a stereo pair separated by 20 degrees plus cross-talk cancellation. If this is all there is to it, then it's super easy to do in Impulcifer: a normal stereo measurement and then just muting the FL-right and FR-left tracks in the output HRIR file. Pull requests are more than welcome!
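The muting step described here can be sketched as follows; the `(speaker, ear)` dictionary layout is a simplified stand-in for the actual multi-track hrir.wav ordering:

```python
# Sketch: take a normal stereo HRIR and mute the crosstalk tracks
# (FL to right ear, FR to left ear), leaving the direct tracks intact.
# The dict layout is illustrative, not the real hrir.wav channel order.
import numpy as np

def mute_crosstalk(hrir):
    """hrir: dict mapping (speaker, ear) -> impulse response array."""
    out = dict(hrir)
    for key in (("FL", "right"), ("FR", "left")):
        out[key] = np.zeros_like(hrir[key])
    return out
```

Convolving a stereo signal through the result gives each ear only its own channel, which is the crosstalk-free rendering being discussed.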


----------



## castleofargh

dwk said:


> The first comes from a long-standing feeling that the headphone guys and ambiophonic guys need to talk to each other. Ambiophonic guys are spending a LOT of time and money to set up a speaker system that eliminates crosstalk whereas the headphone guys seem to be constantly looking for new ways to introduce crosstalk. This led me to a theory that what the headphone guys really want is HRTF and not crosstalk.


A stereo album is, for the time being, practically always made for stereo speakers. Trying to correct the inherent flaw of stereo speakers (having 2 sound sources pretending to be only one at another position) can only become the right way if albums start being made with that correction in mind. Otherwise it's just a fancy DSP that some people happen to like. It has no objective legitimacy if the way albums are made doesn't change.

There is no accurate 3D audio for headphones without HRTF, or at least without an HRIR like here to provide some virtual speakers at a given position. So you're clearly right about that.



About dealing with mono sounds, apparently not everybody reacts the same way. The general concept is that with perfect mono, we mostly have a little idea of the elevation of something somewhere in front, thanks to the outer ear and the general signature of the music (with a very poor sense of distance besides close vs far). And that's about it. It's easy for the brain to find conflicting information between that small cue and sight, our dominant sense. So for a few people, mono will forever be crappy, unrealistic placement with headphones (in the head, on the forehead, ...). Sadly, Impulcifer might not solve the problem, as the cause is us.

If you have a clear issue with the elevation of mono sounds, then perhaps instead of trying to force a center channel, you might want to try what's in this video from Mr Griesinger. 

 It's irrelevant for the headphone's own frequency response when using the tools in this thread, but maybe the mics and how they're placed in the ear cause some frequency variations? Maybe there remains a difference because we're not getting the ear canal resonance? Whatever the reason, it's possible that the frequency response variation is off just enough to mess with your perception of elevation (this is my guess, I don't actually know that for a fact ^_^).

If you're really prioritizing the center, following the tutorial in the video is fine. But if you're just trying to make sure that your head-related impulse responses give you a response close enough to your speakers, then I would suggest running the same experiment as in the video but with the usual speaker angles (the way they were when you recorded your impulses). Then, if done right, chances are that the center would still improve just from being created by more convincing virtual speakers with the right frequency response at your eardrum.

It's not simple, and it's a long shot just to try and confirm whether you're one of the people who really need that extra accuracy to get an immersive experience, so I don't know if I can strongly advise doing it. But I mention it just in case.
The next step, if even that doesn't push the center forward, would be to add head tracking somehow. Because then any small head movement will turn mono and too few cues into sound at an angle, with the full list of psychoacoustic helpers, including the legendary co-stars ILD and ITD. It's a lot of effort just to improve the center image, but some people need all that, while others seem to feel like they're watching a band live while listening to the nonsensical stereo of default headphone playback. So finding where you as a listener sit on that crazy wild spectrum of humanity can be its own adventure.

The last attempt is to fight fire with fire: we fail to accept that a sound source is at a given place in front of us because we see that it's not in front of us. So just place a possible sound source in front of you and find out if you can fool yourself with that. If I see a speaker right in front of me, or watch TV and see the guy talking, my brain will sooner or later end up placing the sound on it. Sometimes it's fast, sometimes it's not, when things are just too unrealistic, but if I spend enough time in that configuration, progressively my brain will move the sound over there. Sight always wins in the end, in my case.

A few years ago, when I first started to use a bad DIY version of what Impulcifer allows you to do, it never felt good (it was just slightly better than crossfeed) until I happened to get a pair of monitors (speakers) to put on my desk. Then, in a matter of days, I was happy with the perceived position of the convolved sound on my headphones, as it started to better match the position of the speakers. Again, my anecdote; I cannot tell if you would get a similar experience.


----------



## jaakkopasanen

castleofargh said:


> A stereo album is, for the time being, practically always made for stereo speakers. Trying to correct the inherent flaw of stereo speakers (having 2 sound sources pretending to be only one at another position) can only become the right way if albums start being made with that correction in mind. Otherwise it's just a fancy DSP that some people happen to like. It has no objective legitimacy if the way albums are made doesn't change.
> 
> There is no accurate 3D audio for headphones without HRTF, or at least without HRIR like here to provide some virtual speakers at a given position. so you're clearly right about that.
> 
> ...



This is what I thought too. Albums are made for stereo speakers, so "fixing" the limitations of stereo speakers with anything doesn't sound realistic. However, ambiophonics promises more than just a better center image, namely a 120 to 150 degree sound stage compared to the normal 60 degree one. I've no idea how this works, but it sounds interesting. At least I'm going to try it out myself at some point.


----------



## dwk

jaakkopasanen said:


> This is very interesting. I'm not very familiar with ambiophonics, but from what I quickly read it's basically a stereo pair separated by 20 degrees plus cross-talk cancellation. If this is all there is to it, then it's super easy to do in Impulcifer: a normal stereo measurement and then just muting the FL-right and FR-left tracks in the output HRIR file. Pull requests are more than welcome!


Yeah, 20 degrees is generally the recommendation. The math for XTC gets ill-conditioned as the separation gets smaller, and the wider you get, the more the 'tonal inaccuracies' of 2-channel creep in. My inference is that the ideal is to be centered, but that doesn't work mathematically.  I started with a single FC since it offered the potential for a simple single-measurement solution, but also seemed to be in line with the theoretical ideal.

Having made a few passes, my conclusion is that I need a few more. I took a single mono center run in the 'big system' today rather than the desktop system, and in certain respects it's wildly successful. Center-anchored vocals are superb - to the point that I immediately understand the comments saying "I can sell my speakers". Sounds that are spread out laterally are less successful, and hard-panned sounds collapse back into the ear - you get a bit of a horseshoe effect. This is interesting to me, since hard-panned sounds in my ambiophonic experiments have been problematic for the opposite reason - they end up "way over there" and actually take explicit focus to integrate into the soundstage.

I did take a conventional L/R measurement in the 'big system' as well, but haven't listened to it yet since I'm using JRiver and need to set up a convolution config for it. (Is there a canonical config for the common cases? Not a big deal to set up, but if one exists it saves a step.)

Once again though - great work. Pretty straightforward to use, and my $350 headphone rig (Khadas Tone Board, JDS Atom, NAD Viso HP50) sounds spectacular.


----------



## dwk (Mar 14, 2020)

castleofargh said:


> A stereo album is for the time being practically always made for stereo speakers. Trying to correct the inherent default of stereo speakers(having 2 sound sources pretending to be only one at another position), can only become the right way if albums start being made with that correction in mind. Otherwise it's just a fancy DSP that some people happen to like. It has no objective legitimacy if the way albums are made doesn't change.


Yes, this is the debate that seems to rage any time ambiophonics gets discussed. It's tough to argue with - 'authorial intent' and all that. OTOH, on the right recordings ambiophonics can be absolutely mind-blowing - it was definitely the most jaw-dropping audio experience I've had since the first time I heard Spica TC-50s and realized that spatial presentation of audio was possible at all.  It works best on minimally mic'd / processed acoustic recordings where it's more believable that the natural soundfield has passed through. Heavily processed studio recordings frequently don't work well at all (Rodrigo Y Gabriella is the worst I've run into - the hard-panned technique makes things sound pretty comical). You also sacrifice a lot of headroom to the XTC process, making it tricky to preserve full dynamics. (my most successful trial was using Yorkville U15 Unity PA cabinets, so headroom wasn't an issue)

I could never fully commit to an ambio system as a primary system, but that's the beauty of Impulcifer - use it for the times/albums where it works, and use a different config in other cases.  Following that train of thought, I'm seriously thinking of building some LXmini speakers strictly to set up once and measure - I suspect that if augmented with some open-baffle woofers it might make the perfect system for chamber and solo piano music.


----------



## jaakkopasanen

dwk said:


> Yeah, 20 degrees is generally the recommendation. The math for XTC gets ill-conditioned as the separation gets smaller, and the wider you get, the more the 'tonal inaccuracies' of 2-channel creep in. My inference is that the ideal is to be centered, but that doesn't work mathematically.  I started with a single FC since it offered the potential for a simple single-measurement solution, but also seemed to be in line with the theoretical ideal.
> 
> Having made a few passes, my conclusion is that I need a few more. I took a single mono center run in the 'big system' today rather than the desktop system, and in certain respects it's wildly successful. Center-anchored vocals are superb - to the point that I immediately understand the comments saying "I can sell my speakers". Sounds that are spread out laterally are less successful, and hard-panned sounds collapse back into the ear - you get a bit of a horseshoe effect. This is interesting to me, since hard-panned sounds in my ambiophonic experiments have been problematic for the opposite reason - they end up "way over there" and actually take explicit focus to integrate into the soundstage.
> 
> ...


Could you elaborate on what exactly you did? What I gathered from this is that you measured a mono system and somehow that creates a sound stage. I'm sure this is not the case. How many speakers did you measure? At which positions? Did you do something extra to the output HRIR file that Impulcifer doesn't do? Do you need some kind of special DSP processing to make this work with an Impulcifer-created HRIR? Did you have some kind of special DSP processing for the speakers during the measurement? I'm baffled.


----------



## sander99

jaakkopasanen said:


> Could you elaborate on what exactly you did? What I gathered from this is that you measured a mono system and somehow that creates a sound stage. I'm sure this is not the case. How many speakers did you measure? At which positions? Did you do something extra to the output HRIR file that Impulcifer doesn't do? Do you need some kind of special DSP processing to make this work with an Impulcifer-created HRIR? Did you have some kind of special DSP processing for the speakers during the measurement? I'm baffled.


If I may take a guess: he probably did what boils down to the same as what @Erik Garci did with his A8: Erik measured a 2 channel PRIR using one center speaker. He sent the left and right channel sweeps to that one speaker. When the A8 measured the left channel he muted the right in-ear microphone. When the A8 measured the right channel he muted the left in-ear microphone.
So what he gets at playback of a stereo source is this: the left ear "hears" the left channel of the input source being played over the center speaker, and the right ear "hears" the right channel of the input source being played over the center speaker. It figures that for the mono component in the signal it sounds very natural, because for that mono component the total works the same as if it had been sent to a "normal" virtual center speaker. Non-mono parts of course will have ILD and/or ITD cues. The further out of center sounds are placed, the bigger the mismatch between the actual HRTF filtering (based on the position of the center) and the HRTF filtering that would correspond with the placement in the "soundstage".
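In signal-flow terms, the playback half of this scheme boils down to two independent convolutions with no cross-feed terms. A minimal sketch (function and variable names are hypothetical, not from Impulcifer):

```python
import numpy as np

def ambio_playback(x_left, x_right, h_fc_left, h_fc_right):
    """Convolve each stereo input channel with the center-speaker
    impulse response measured at the corresponding ear only.
    There are no cross-feed terms: the left ear never hears the
    right channel and vice versa."""
    y_left = np.convolve(x_left, h_fc_left)
    y_right = np.convolve(x_right, h_fc_right)
    return y_left, y_right

# A mono (identical L/R) input reduces to an ordinary virtual
# center speaker, which is why centered vocals sound natural,
# while hard-panned content gets a mismatched HRTF.
```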

Note: I never tried any of this myself, and I do realise it is highly subjective and not at all a priori logical to apply this to recordings that are made with a different usage in mind.


----------



## dwk

Yes, @sander99 is basically correct - just use Impulcifer to perform a normal binaural measurement of the FC channel. Then run the analysis as normal, and Impulcifer will generate a hrir.wav with only the FC-left and FC-right elements being non-zero. Pull these 2 channels out into a normal stereo .wav, and load it into the JRiver convolver so that FC-left is applied to the left channel, and FC-right is applied to the right channel. (I actually tweaked the Impulcifer code to spit out the 2 filters into ambio.wav directly, but you can use Audacity.) JRiver makes this easy since it supports a stereo wav file as a convolution filter directly - you don't have to generate a config file for it.
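The channel-extraction step can be sketched in a few lines, assuming the HRIR loads as a (samples × channels) array. The FC-left/FC-right column indices below are placeholders - the actual channel order depends on how the hrir.wav was generated, so verify against your own file:

```python
import numpy as np

def extract_fc_stereo(hrir, fc_left_col=0, fc_right_col=1):
    """Pull the FC-left and FC-right channels out of a multichannel
    HRIR array (shape: samples x channels) into a stereo array.
    The default column indices are assumptions, not the documented
    Impulcifer channel order."""
    return hrir[:, [fc_left_col, fc_right_col]]
```

With scipy.io.wavfile (or a library like soundfile) the resulting array can be written back out as a stereo file for the JRiver convolver.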

And yes - what this does is trade the inaccurate HRIR that mono/center sounds get in normal stereo for HRIR inaccuracies in L/R-separated content. This seems to be a reasonable trade with center-dominated content, but maybe not for more general use.

So, in terms of an experiment this 'works' to a degree, but doesn't actually reproduce the subjective impressions of a true ambio system since the L/R sounds collapse in a way that they don't over speakers. Not entirely sure why - I may continue to do some experiments. It's still the best headphone sound I've had, at least for the voice+guitar stuff that makes up the bread and butter of my listening. I'm going to compare to the conventional stereo measurement I also made - it's entirely possible that the stereo version will be so good that I'll abandon the 'ambio simulation' route, but we'll see. Either way, the fact that $350 in headphone gear can sound THIS good is remarkable.

The other reason I was interested in trying this is that it's a basic 2-channel process, and could be used with something like the minidsp HA-DSP which doesn't really have enough horsepower to do a proper 2x2 convolution, or maybe even on an iPhone so I could have a system for use at work. Not entirely sure whether this will come to pass, though.

One other interesting thing I'll have to look into - when I measured my 'big system' I did not experience the same lack of bass as I did with the R3s measured on my desktop. Subjectively I don't find the R3s lacking bass when played normally (although the main system is definitely better), but I had to boost the bass quite a bit to get them sounding OK. No such problem with my main speakers - great bass captured right in the measurement.


----------



## musicreo

dwk said:


> Yes, @sander99 is basically correct - just use Impulcifer to perform a normal binaural measurement of the FC channel. Then run the analysis as normal, and Impulcifer will generate a hrir.wav with only the FC-left and FC-right elements being non-zero. Pull these 2 channels out into a normal stereo .wav, and load it into the JRiver convolver so that FC-left is applied to the left channel, and FC-right is applied to the right channel.



I don't understand this concept. I will hear the left channel only on the left headphone speaker and the right channel only on the right headphone speaker? Is this not the opposite of what I want to get from Impulcifer?


----------



## sander99

musicreo said:


> I don't understand this concept. I will hear the left channel only on the left headphone speaker and the right channel only on the right headphone speaker? Is this not the opposite of what I want to get from Impulcifer?


If your goal is to realistically binaurally simulate loudspeakers in a room, then you are right. In this concept Impulcifer (or a Realiser) is used for something completely different: an alternative way of listening to stereo recordings (with HRTF filtering and room influence but without crosstalk), different from normal (or realistically binaurally simulated) loudspeakers (HRTF filtering and room influence plus crosstalk), and different from normal (unprocessed) headphone listening (no HRTF filtering or room influence, no crosstalk). Whether or not this is useful or "an improvement" is debatable and subjective; let's just say that some people like it with some recordings.


----------



## dwk

Once again, agree with @sander99. What I'm doing is an unconventional/alternate use of the Impulcifer capabilities to experiment with some different ideas. Read up on Ambiophonics and the discussion around it (perhaps 'cult' isn't too far from the truth) to better understand why I was interested in trying it.

Having had a day or so to compare the 'ambio' single-center-channel measurement to the 'normal' stereo measurement, I think I have to conclude that the conventional stereo measurement is generally better. The 'ambio' is preferable on certain minimalist vocal recordings, but as soon as you add much instrumentation the stereo version becomes much more credible.  I'm still not entirely sure why the ambio idea wasn't more successful, and may continue some experiments - maybe getting a bit of separation and trying a 20-degree 2-channel dipole for example. I think the conventional stereo version will be my normal setup for now though.


----------



## Dogway

Found this interesting to share: Road to PS5


----------



## jaakkopasanen

Dogway said:


> Found this interesting to share: Road to PS5


Cool! They certainly seem serious about 3D audio. Of course it will be years before they have decent HRTF synthesis working, if they get it working well at all. But in any case I'm happy to see big companies starting to tackle this problem.


----------



## Dogway

Yes, things like this going mainstream are very good for raising awareness of audio tech. People just don't care or know enough. Very surprised to see him talk about HRTF, but I'm not sure how personalized HRTFs could work with a few photos (like Super X-Fi).
I have been tempted for some time to sculpt my ear in 3D and see if there's something that could be done programmatically with that; I found the E.A.R addon for Blender. It's something to research further.


----------



## johnn29

Thanks for posting the Sony link - fascinating. It's nice that it's gone from military simulation environments to more mainstream gaming now. Sony's approach is novel - I like that they'll have some sort of test to see which of the 5 matches you well, and that they are eventually looking to develop a custom version.

They've recently done the 360 audio with ear mapping - I hope this makes it to that. It's crazy that we're on the verge of simulating the perfect loudspeaker in the perfect room that'll blow away any real setup. Great time to be an audiohead.


----------



## johnn29

jaakko I'm planning on cutting my ear hooks and gluing ear plugs to my mics like you did. Is this still something you advise or do? Now that we're all at home and social distancing I might as well try and nail some more HRIRs.

I'm hoping this method leads to much easier recordings that are repeatable - which would imply they are more realistic. I'd like to stop relying on transform for my over-ears and rely on headphone compensation. But historically it's actually been hard to nail a good recording due to channel balance issues and the damn mics that keep falling out or changing position.


----------



## jaakkopasanen

johnn29 said:


> jaakko I'm planning on cutting my ear hooks and gluing ear plugs to my mics like you did. Is this still something you advise or do? Now that we're all at home and social distancing I might as well try and nail some more HRIRs.
> 
> I'm hoping this method leads to much easier recordings that are repeatable - which would imply they are more realistic. I'd like to stop relying on transform for my over-ears and rely on headphone compensation. But historically it's actually been hard to nail a good recording due to channel balance issues and the damn mics that keep falling out or changing position.


Using ear plugs instead of the hooks makes life so much easier. The mics stay in place without any worry of moving or falling out. I also find it easier to place the mics in the same location time after time. I think I would trust it right now to make measurements and then measure another pair of headphones later, even though the mics were taken off in between. So yes, I would recommend cutting the hooks.


----------



## lowdown

jaakkopasanen said:


> Using ear plugs instead of the hooks makes life so much easier. The mics stay in place without any worry of moving or falling out. I also find it easier to place the mics in the same location time after time. I think I would trust it right now to make measurements and then measure another pair of headphones later, even though the mics were taken off in between. So yes, I would recommend cutting the hooks.



Had the same experience. Trying to use the mics with hooks was very frustrating with the precarious, unstable positioning. I was also a bit concerned the hooks could alter my pinnae's reflections, though I have no evidence, just a thought. I cut about a third off some foam earplugs to shorten them, glued them to the backs of the mics, and it made all the difference. Final results with Impulcifer are incredible.


----------



## johnn29

Thanks guys, going to do that on Monday.

The idea of doing only a single speaker recording and then multiple headphone compensations later on is going to be awesome.


----------



## johnn29 (Mar 24, 2020)

So I cut the hooks and glued onto foam plugs. SO MUCH EASIER! And native channel balance is perfect, no processing needed.

Edit: Tried measuring headphones after the initial recording - results are great!


----------



## pfzar (Apr 3, 2020)

dwk said:


> The first comes from a long-standing feeling that the headphone guys and ambiophonic guys need to talk to each other. Ambiophonic guys are spending a LOT of time and money to set up a speaker system that eliminates crosstalk whereas the headphone guys seem to be constantly looking for new ways to introduce crosstalk. This led me to a theory that what the headphone guys really want is HRTF and not crosstalk.



This is an incorrect assumption of what ambiophonics is. By having the speakers closer together you are shortening the contralateral and ipsilateral paths. You then run the signal through a feedback delay network. In this manner, you are creating a crosstalk cancellation network.
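A recursive crosstalk canceller of the kind described (in the spirit of RACE, Recursive Ambiophonic Crosstalk Elimination) can be sketched as each output channel subtracting a delayed, attenuated copy of the opposite output. The delay and attenuation values below are illustrative assumptions, not tuned for any real speaker geometry:

```python
import numpy as np

def race_xtc(left, right, delay=3, atten=0.985):
    """Minimal RACE-style crosstalk cancellation sketch.
    Each output recursively subtracts an attenuated, delayed copy
    of the opposite output. `delay` (in samples) should match the
    interaural path difference for the speaker spacing; both
    parameters here are placeholder values."""
    n = len(left)
    out_l = np.zeros(n)
    out_r = np.zeros(n)
    for i in range(n):
        fb_r = out_r[i - delay] if i >= delay else 0.0
        fb_l = out_l[i - delay] if i >= delay else 0.0
        out_l[i] = left[i] - atten * fb_r
        out_r[i] = right[i] - atten * fb_l
    return out_l, out_r
```

The recursion produces the train of alternating cancellation impulses that eats into headroom, which is the effect dwk mentioned earlier in the thread.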


Happy to see so much excitement about HRTF and spatial audio for headphones. Psychoacoustics, and HRTF's are my primary research responsibilities.  Fun times indeed with all the new things happening here.


----------



## Joe Bloggs

pfzar said:


> This is an incorrect assumption of what ambiophonics is.  By having the speakers closer together you are shortening the path of the contralateral and the ipsilateral.  You then run the signal through a feedback delay network.  In this manner, you are creating a cross-talk cancelation network.
> 
> 
> Happy to see so much excitement about HRTF and spatial audio for headphones. Psychoacoustics, and HRTF's are my primary research responsibilities.  Fun times indeed with all the new things happening here.



Great to see you here, but I don't see the contradiction with "set up a speaker system that eliminates crosstalk" - unless you object to the "spending a LOT of time and money" part, if you consider crosstalk cancellation a done deal that can be realized on everything down to two-speaker smartphones and such.

I also personally prefer upmixing stereo content to an actual conventional surround system to enhancing it using ambiophonics :]


----------



## pfzar

Joe Bloggs said:


> Great to see you here, but I don't see the contradiction with "set up a speaker system that eliminates crosstalk" - unless you object to the "spending a LOT of time and money" part, if you consider crosstalk cancellation a done deal that can be realized on everything down to two-speaker smartphones and such.
> 
> I also personally prefer upmixing stereo content to an actual conventional surround system to enhancing it using ambiophonics :]



The speaker system is not what eliminates the crosstalk.  The understanding of the geometry applied to the algorithms is what provides the cues for the brain.


----------



## jaakkopasanen (Apr 25, 2020)

I came across a very interesting presentation about binaural hearing and headphones. Strongly recommended read. For example, it argues that around-ear headphones are not good for binaural rendering (speaker virtualization, for example) because practically all around-ear headphones have a 90 degree destructive interference which cannot be equalized away. The author also shows how blocked ear canal measurements can have correct localization but not timbre. The frequency response measured at the blocked ear canal opening is not the same as at the ear drum when wearing headphones. For best results the measurements should be done with probe microphones that measure at the ear drum.

I picked this up from article A Deep Dive Into Harman Curves Part 1, which I also recommend for reading.


----------



## johnn29

Thanks for the links!

Half way through the presentation and my thoughts:

- He is correct on that one - only a mad man would put those mics so close to their ear drum!

- Is what we're interested in - speaker virtualisation - significantly different to dummy head recordings? We only have 7 point sources. We also hear all the room reflections - so those 7 point sources become quite vast, and we know from the trim tail script you've got in the research folder that the tail is very important for localisation. What the presentation is interested in is concert hall recordings that have a ton of sources varying in distance, elevation and instrument type (a string instrument radiates differently from a drum, for example).


- Because of the above - the around-ear headphone conclusion of the presentation doesn't seem valid? In my own experience I've used on-ears (GW100, B&W P5) and a load of over-ears (Bose 700, WH-1000XM3, DT990 and more). I agree that the ANC algorithm gives the most repeatable result - especially with the Sony XM3's because they have a calibration button. So you can seat them on your head, calibrate, and know that playback will be identical to when you took your recording, as long as you calibrated before recording. The Sonys and the Bose do mess with the bass/treble levels depending on volume though - but that's a simple boost/reduction. I do get the most realistic presentation with the GW100's (on-ears) but I assumed that was because they were the most open per the rtings isolation measurements.

I've never had an issue with sound localisation with any of the recordings. Timbral differences yes - especially when trying to EQ in-ear earphones via Transform. With my AirPods Pro, for example, I get the localisation but the sound signature sounds really off. The only way I've gotten them to sound right is by using the Bose 700 and AirPods Pro from the rtings source; anything else sounds off. This is further complicated by firmware updates changing the sound signature - so Transform might not work as well.

While we're all in lockdown I've not had to use Impulcifer as much and am enjoying my actual loudspeakers. It's funny how I now appreciate head movement with a real set of speakers. But on weekends sometimes my wife can be reading a book next to me while I'm watching my OLED with a movie at reference volume on my headphones. Pretty sweet!


----------



## jaakkopasanen

I wouldn't necessarily say the claim about around-ears (vs on-ears) is invalid in a room. Maybe the more important question is how much the 90 degree interference matters in the end. It's more or less a single notch in the frequency response, and that doesn't mean the timbre wouldn't be good enough. It's interesting that your observation about the GW100 seems to confirm the argument. Then of course there's the question of how important all this is versus the comfort of around-ear headphones and all the technical abilities the higher end ones have.


----------



## johnn29

I'll do some more tests between the DT990 and the GW100.

The article also states



> Regardless, due to the relatively unnatural “illumination” of the entire pinna by the radiated wavefront (and hence low fidelity to HRTFs that would generate a natural frontal soundscape), headphone playback (especially of stereo recordings but even for binaural) is intrinsically compromised.



I wonder if this has something to do with the fact that when you move rooms or locations, sometimes the HRIR I generate sounds "off". The visual cues override the auditory ones when you are in the same room - but away from those visual cues your brain is only left with the sound.


----------



## jaakkopasanen

When you're listening to one room (virtualized) in another room (physical), the visual and auditory cues contradict each other to some degree. How much depends on how similar the two rooms' acoustics are. Visual cues are always dominant over auditory cues. This is actually how brains learn to localize sounds based on auditory cues in the first place.


----------



## johnn29 (May 8, 2020)

This will be of interest to people in the thread. JVC looks to be finally releasing its Exofield tech: https://eu.jvc.com/audio/home-theater/XP-EXT1/

It was supposed to come out in April but I guess Covid set things back. It's a $1k price tag. But it uses mics inside the headphones for the customization and some sort of algorithm database. Includes DTS:X and Atmos. I'd be willing to give it a try purely for the convenience of an external box, rather than having to be tied into a PC. 

Still not as flexible as Impulcifer due to being restricted to one set of headphones. I really like ANC for my use case.


----------



## musicreo

johnn29 said:


> It was supposed to come out in April but I guess Covid set things back. It's a $1k price tag. But it uses mics inside the headphones for the customization and some sort of algorithm database.



I thought they wanted to use in-ear mics as shown on their webpage under "EXOFIELD Measurement". Using mics in a headphone and using a database is very disappointing.


----------



## johnn29 (May 8, 2020)

The binaural mics are a difficult issue for mass market products. It could be that they offer a proper binaural measurement but for the sake of market adoption it's not the default. I think many audio companies are aware of the deconvolution process, but in talking to DTS about their solution the biggest barrier is UX. That's why a lot of these products have remained very niche like the Smyth, and the ones that are popularized rely on things like photos. I know Sony is looking at potentially playing a game to narrow down your personal HRTF. We are just very lucky that Jaakko has given us such an elegant solution at zero cost.

I just wish there was a box we could buy (like a Raspberry Pi) that could take audio in over HDMI and output our HRIR. I don't mind using a PC in my office setup, but for my main theater I'd really like to be non-PC. I still have a HTPC in there for MadVR's tone mapping but it's a bit of a pain to switch between watching on my Shield and resuming via headphone on my HTPC.


----------



## castleofargh

The AES people are submitting papers more and more often about all types of sound field and HRTF/psychoacoustics related stuff. It's going to come; the only questions are when, and how far will they dare to go with the much needed customization?

On the other hand, I was already thinking "it's going to come soon" about 20 years ago. But the industry does seem motivated this time. I won't count on audiophiles - most start pulling out a cross and loading silver bullets when they read about equalizers, so DSPs... But I expect even them to slowly get into it after they get to try something that is properly customized for them, even if at first it goes against their deepest false beliefs.


----------



## johnn29

The gaming industry is where it's really going. What Sony is doing with the PS5 is a real big step forward. Once that's bedded in it'll filter down to other products I'm sure. Sony have a strong market in Japan where this sort of stuff is in demand due to the tiny apartments.

Agree on the audiophiles, it's a cult!


----------



## mindbomb

On the talk of headphone choice, rtings actually has been measuring pinna interactions as part of their passive soundstage measurements. (although they treat it as valuable rather than problematic).
https://www.rtings.com/headphones/tests/sound-quality/passive-soundstage

The Sennheiser HD650 and Grado SR125e seem like they do really well. Likely because the designs keep the driver relatively close to the head compared to other headphones.


----------



## jaakkopasanen

I did a quick comparison of audio interfaces and binaural microphones in terms of signal to noise ratio. The Behringer U-Phoria UMC202HD has clearly quieter mic pre-amps than the Zoom H1n: 9 dB of difference by just using a better audio interface. The Sound Professionals SP-TFB-2 mics beat the Primo EM 258 capsules with a hefty margin of 11 dB when using the Zoom H1n with both. This was unexpected because spec-wise the EM 258 is a better capsule. Unfortunately I don't have adapter cables to connect The Sound Professionals mics to the Behringer audio interface, so I can't test that at the moment.

Now it's easy for me to recommend The Sound Professionals mics. They perform better and are also easier to work with because they have the silicone sleeve. These aren't even the master series mics! One user has also reported that his Primo capsules developed significant high frequency roll-off in just a few months.

Behringer U-Phoria UMC202HD with Primo EM 258 mics





Zoom H1n with Primo EM 258 mics




Zoom H1n with The Sound Professionals SP-TFB-2 mics


----------



## musicreo

How high was the voltage you used for the mics on the Behringer? There is probably a sweet spot for the Primo capsules between 3-10 V regarding signal to noise ratio. The other question is how consistent the build quality of the Primo capsules and the Sound Professionals SP-TFB-2 is.


----------



## jaakkopasanen

musicreo said:


> How high was the voltage  you used for the mics at the Behringer? There is probably a sweet spot for the primo capsules between 3-10V regarding  signal to noise ratio.  The other question is how constant is the build quality of the primo capsules or the Sound Professionals SP-TFB-2 ?


I'm using RODE VXLR+ adapters which provide 5 V. Can't say anything about the mic consistency and quality control. But at least the Primos have a flatter bass response than the SP-TFB-2. That doesn't matter much for this application though, since the headphone compensation corrects the mic frequency response as well.


----------



## musicreo

jaakkopasanen said:


> This was unexpected because specs wise the EM 258 is a better capsule.



Actually the specs are very similar.

Primo EM 258:

- Signal to noise ratio: 74 dB
- Sensitivity: -32 dB
- Max input sound level: 115 dB SPL

Sound Professionals SP-TFB-2:

- Signal to noise ratio: 75 dB
- Sensitivity: -32 dB
- Max input sound level: 115 dB SPL


----------



## jaakkopasanen

musicreo said:


> Actually the specs are very similar.
> Primo EM 258:
> 
> Signal to noise ratio: 74 dB
> ...


Those are the specs for the more expensive master series model MS-TFB-2. I have the cheaper SP-TFB-2, which have 17 dB more noise and 10 dB lower sensitivity.
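As a quick sanity check on what those dB figures mean in linear terms (the -42 dB value below just applies the quoted "10 dB lower sensitivity" to the -32 dB master-series spec, so it is an inference from the post, not a datasheet number):

```python
def db_to_ratio(db):
    """Convert a dB value (amplitude convention, 20*log10) to a linear ratio."""
    return 10 ** (db / 20)

# Sensitivity in dB re 1 V/Pa:
ms_tfb2 = db_to_ratio(-32)   # roughly 25 mV/Pa (quoted master-series spec)
sp_tfb2 = db_to_ratio(-42)   # roughly 8 mV/Pa (assuming 10 dB lower, per the post)
```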


----------



## Speedskater

Seems like the wrong test, to measure signal-to-noise.
I would use mid-band pink noise to set the reference level.


----------



## jaakkopasanen

Speedskater said:


> Seems like the wrong test, to measure signal-to-noise.
> I would use mid-band pink noise to set the reference level.


Could you elaborate on why this is the wrong test for measuring the signal to noise ratio of an impulse response? Or, to be more exact, the peak to noise ratio.


----------



## Speedskater

With pink noise or a 1 kHz tone, you can set a full volume reference level. This is the long-standing protocol for measuring signal-to-noise.


----------



## jaakkopasanen (May 16, 2020)

Speedskater said:


> With pink noise or a 1 kHz tone, you can set a full volume reference level. This is the long standard protocol for measuring signal-to-noise.


The signal level played on speakers was the same for all measurements. The recorded signal level differences are caused by microphone sensitivity and mic pre-amp gain. The noise level of Zoom H1n changes depending on the gain and the best noise performance can be achieved at position 5.5 or 6 of the gain wheel. This is what I used for the measurements done with Zoom H1n. Additionally I don't think the recording level affects signal to noise ratio because changing the mic gain amplifies the transient signal and the noise similarly. This is in effect the same as normalizing the reconstructed impulse responses digitally.


> PNR is obtained as the noise power of a normalized RIR sample, where n[k] is the noise that is observed in a 0 dB normalized RIR.


This is equation 3 from "A note on the definition of signal-to-noise ratio of room impulse responses", a 2012 paper by Csaba Huszty and Shinichi Sakamoto. This is precisely what I did, although I have to admit that the noise part calculation in this case was done visually, so give or take a dB because of that.

EDIT: I realized that my decay plots are amplitudes and the PNR equation in that paper uses powers. So the PNR numbers from the graphs should be halved.
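Following that definition, the computation itself is only a few lines. A sketch, assuming the tail of the response beyond `noise_start` contains only noise - in practice that index has to be judged from the decay plot, as described above:

```python
import numpy as np

def peak_to_noise_ratio(rir, noise_start):
    """Peak-to-noise ratio of a room impulse response per the
    normalized-RIR definition: normalize the peak to 0 dB, then take
    the mean *power* of the tail from `noise_start` onwards (the
    power convention matters, as the EDIT above points out)."""
    h = rir / np.max(np.abs(rir))          # 0 dB peak normalization
    noise_power = np.mean(h[noise_start:] ** 2)
    return -10 * np.log10(noise_power)     # dB above the noise power
```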


----------



## jaakkopasanen

I did new measurements today. This time I had the speakers a bit closer together and this improved the phantom center image. Room acoustics management was used during all speaker measurements, and while this was only two parametric filters at around the room mode frequencies, it had a very positive impact on the results. Now there is barely any need for reverberation management to remove the bass ringing. There is still a bit left but it's very minor. I strongly suggest using room EQ during measurement until the frequency dependent reverberation management is done.

This time I made sure I didn't have channel balance issues in Volume2, and now the channel balancing in Impulcifer is not needed. The trend balance method does basically nothing audible; the balance is that good out of the box. I did headphone compensation measurements 4 times and each time the frequency response changed a lot above 6 kHz. Headphone compensation remains a problem. The more I read about it and study it, the more difficult it seems to do right.




 

 

 



The webcam was above me this time and that really made it a lot easier to place the room measurement microphone in the same location where the binaural mics were. This resulted in a lot better tonality for the surround channels. I wore a hat (tube scarf actually) because otherwise my hair hides the pinna, making it impossible to locate the ear canal entrance from the webcam ghost images. Still, the pictures don't quite show the location of the ear canal entrance but I think I estimated it well enough. You'll see in the picture that the room measurement mic is placed a bit towards the center of the head from the left ear pinna. There are two pictures in the side panel for center measurements (pics 2 and 3) because I wanted to test FC measured with FL vs FR. Pictures 4 and 5 are for surround channels.





I think all in all this is my favorite so far and therefore I updated the demo recordings to the new one.


----------



## musicreo

Great to hear that you can still optimise your results.
When I look at the image I see that you not only rotate your head but also change the head position for the surround measurements (I drew a line in the image to show what I mean). In some of my measurements I'm pretty sure it got worse when my head moved out of the sweet spot.


----------



## castleofargh

jaakkopasanen said:


> I did new measurements today. This time I had the speakers a bit closer together and this improved the phantom center image. Room acoustics management was used during all speaker measurements, and while this was only two parametric filters at around the room mode frequencies, it had a very positive impact on the results. Now there is barely any need for reverberation management to remove the bass ringing. There is still a bit left but it's very minor. I strongly suggest using room EQ during measurement until the frequency dependent reverberation management is done.
> 
> This time I made sure I don't have channel balance issues in Volume2 and now the channel balancing in Impulcifer is not needed. The trend balance method does basically nothing audible, the balance is so good out of the box. I did headphone compensation measurements 4 times and each time the frequency response changes a lot above 6 kHz. Headphone compensation remains a problem. The more I read about it and study it, the more difficult it seems to do right.
> 
> ...


For that kind of test, I'd suggest using a much simpler chair for the measurement - anything with a very small acoustic footprint.
And I'd also advise turning your body completely, making sure that your shoulders are always aligned with your head, so that the impact of the torso is the right one.


----------



## jaakkopasanen

musicreo said:


> Great to hear that you still can optimise your results.
> When i look at the image  I see that you not only rotate your  head  but also change the head position for the surround measurements (I draw the line in the image to show what I mean). In some of my measurements I'm pretty sure that it got worse when my head moved out of the sweetspot.


It's very hard to keep the center of the head in the same position between the measurements. I'm looking directly at the TV when doing the FL, FR measurement, so when I'm doing the surround channels, I don't see where my head is relative to the first measurement. I did separate room measurements for all of the positions, left and right ear separately. This ensures that the frequency response ends up being the same even if I move my head out of the sweet spot. Or at least in theory it does...



castleofargh said:


> For that kind of test, I'd suggest to use a much simpler chair for the measurement. Anything with a very small acoustic footprint.
> And I'd also advise to completely turn your body, making sure that your shoulders are always aligned with your head, so the impact of the torso is the right one.


I'm not sure I quite understand what you mean by "that kind of test" or "simpler chain for the measurement". Could you elaborate a bit?

You are right with your second point. Unfortunately that's not really possible on sofa and this time I didn't want to reorganize my room. I intend to have a measurement session where I optimize the speaker placement and all the furniture for best possible acoustics but I haven't got there yet. That would of course also include full rotation of the body.


----------



## Joe Bloggs

jaakkopasanen said:


> It's very hard to keep the center of the head in the same position between the measurements. I'm looking directly to TV when doing the FL, FR measurement so when I'm doing surround channels, I don't see where my head is relative to the first measurement. I did separate room measurements for all of the positions, left and right ear separately. This ensures that the frequency response ends up being the same even if I move the head out of the sweet spot. Or at least in theory that is...
> 
> 
> I'm not sure I quite understand what you mean by "that kind of test" or "simpler chain for the measurement". Could you elaborate a bit?
> ...


Simpler "chair"


----------



## jaakkopasanen

Joe Bloggs said:


> Simpler "chair"


I know, but the sofa just happens to be there in the listening position.


----------



## castleofargh

jaakkopasanen said:


> It's very hard to keep the center of the head in the same position between the measurements. I'm looking directly to TV when doing the FL, FR measurement so when I'm doing surround channels, I don't see where my head is relative to the first measurement. I did separate room measurements for all of the positions, left and right ear separately. This ensures that the frequency response ends up being the same even if I move the head out of the sweet spot. Or at least in theory that is...
> 
> 
> I'm not sure I quite understand what you mean by "that kind of test" or "simpler chain for the measurement". Could you elaborate a bit?
> ...


I only mentioned this in case it had slipped your mind. If you don't bother with something by choice, then all is well.


----------



## johnn29

Re-arranging your room to get Impulcifer recordings done is something I've got a lot of experience in! I can barely get the UMIK placement right with only two positions since I use my surround system. It is a right pain though. I never thought about buying a properly articulating mic stand - I only use a monopod and sit on the floor. It would be much more comfortable to sit in a chair and use an articulating mic stand.

Re: responses above 6 kHz - will that also apply to something like the Sony XM3 that has a "calibrate" button? Once you put the headphones on, it's supposed to measure the wearing condition (glasses, hair, placement etc.) to get an identical FR each time you reseat. I'm not sure how far it goes though, or if it only cares about the bass regions.

Re: the virtual center - that's where Impulcifer gets closest to real life, provided you sit bang in the sweet spot. But as in real life, I prefer applying Dolby Pro Logic to stereo signals via ffdshow audio with Impulcifer.


----------



## musicreo

I have bought a simple measurement mic and want to start with room measurements. I want to compensate some measurements using only a single center channel for the complete measurement. The first try is just to put the mic at the center position of my head.

After reading the instructions I'm asking if there is a special command and sweep for the room measurement in Impulcifer? Should I use "sweep-seg-FL-mono-6.15s-48000Hz-32bit-2.93Hz-24000Hz" and rename the output to "room.wav"? In the readme there is an average option, but how do I do repeated measurements with Impulcifer? Is there a command option that repeats the sweep several times?

If I don't choose any target, are the speakers then equalized to a flat response?


----------



## johnn29 (Jun 12, 2020)

I don't believe multiple room measurements are available - I guess if you're not going to be able to accurately place the mic where your L/R ears were, spatial averaging makes sense. Jaakko will be able to tell whether that's a good idea in a headphone context.

I believe Impulcifer keeps the native room response if no target is supplied.

My own notes on how to run measurements:

Yep - in your case you'd rename the file to room.wav.

For anyone doing it on the left and right ears with proper surround setups I've saved these commands for my own setup.



> Default device to headphones
> python recorder.py --play="data/sweep-seg-FL,FR-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/headphones-AurvanaSE.wav"
> 
> Default device to AVR - binaural recording
> ...


----------



## musicreo

I can place the mic at the L/R ear positions, but I want to compare the results with the central placement. As I use only one speaker, I have to place the mic 14 times for the measurements, and I have to estimate the position, which results in a misplacement of 1-2 cm. We are talking about a 10-12 cm difference to the central position, and I hope this does not have such a huge impact when using only one speaker for the measurement.


----------



## jaakkopasanen

musicreo said:


> I have bought a simple measurement mic and want to start with room measurements. I want to compensate some measurements using only a single center channel for the complete measurement. The first try is just to put the mic at the center position of my head.
> 
> After reading the instructions I'm asking if there is a special command and sweep for the room measurement in Impulcifer? Should I use "sweep-seg-FL-mono-6.15s-48000Hz-32bit-2.93Hz-24000Hz" and rename the output to "room.wav"? In the readme there is an average option, but how do I do repeated measurements with Impulcifer? Is there a command option that repeats the sweep several times?
> 
> If I don't choose any target, are the speakers then equalized to a flat response?


You can do multiple room measurements and save them all to the same file, called room.wav:

```
python recorder.py --play="data/sweep-seg-FL,FR-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/room-FL,FR-left.wav" --input_device="Umik-1" --channels=1 --append
```
The --append parameter adds a new track to the existing file, so you can repeat the same command as many times as you like.

This assumes you want to use a speaker connected to the front left terminal of the amplifier. If you intend to use a speaker connected to the center channel terminal, you need to generate a new sweep sequence with the impulse response estimator:

```
python impulse_response_estimator.py --dir_path="data" --fs=48000 --speakers="FC" --tracks="7.1"
```

See `python recorder.py --help` and `python impulse_response_estimator.py --help` for more details.


----------



## musicreo

Ok, I have 14 single files in the folder in the following format (room-BL-left, room-BL-right, room-FC-left, ...). Are all the files processed for the room correction when I use the following code?

```
python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl"  --fr_combination_method=conservative --room_target="data/harman-in-room-loudspeaker-target.csv" --specific_limit=9000 --generic_limit=2000 --channel_balance=trend --dir_path="data/my_hrir"
```

I hear only a very small difference compared to when I use a single "room.wav" file. I'm wondering if I'm doing something wrong, even though it sounds ok.


----------



## jaakkopasanen

musicreo said:


> Ok, I have 14 single files in the following format in the folder (room-BL-left,room-BL-right,room-FC-left,.....). Are all files processed for the room correction when I use
> following code?
> 
> ```
> ...


With those files you get a separate room correction for each speaker-ear pair and room.wav does nothing. When you use room.wav instead and have no other room measurement files available, all the frequency responses from room.wav will be combined.

There are two combination methods: conservative and average. Conservative only changes the frequencies where all of the frequency responses are on the same side of the zero plane, and for those it selects the smallest value. This way the conservative correction can never introduce new problems, but it often results in only minor corrections. Correction with room.wav is only effective up to the specified generic limit, which in your case was 2000 Hz. You could try changing the combination method to average and see if that makes bigger changes.

And of course everything depends on the actual room response and how much it changes with listening position. It could be that your room response is not very sensitive to small position changes, so the difference between specific and generic room corrections is small. Or perhaps you're not very sensitive to small frequency response differences.
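To illustrate the difference between the two combination methods, here is a rough sketch of how they could work (hypothetical code, not Impulcifer's actual implementation; `errors` is assumed to hold the deviations from target in dB, one row per measurement):

```python
import numpy as np

def combine_conservative(errors):
    """Correct only frequencies where every measurement deviates in the
    same direction, and then only by the smallest deviation."""
    errors = np.asarray(errors, dtype=float)
    all_pos = np.all(errors > 0, axis=0)
    all_neg = np.all(errors < 0, axis=0)
    correction = np.zeros(errors.shape[1])
    correction[all_pos] = errors[:, all_pos].min(axis=0)  # smallest positive value
    correction[all_neg] = errors[:, all_neg].max(axis=0)  # smallest-magnitude negative value
    return correction

def combine_average(errors):
    """Correct by the mean deviation across all measurements."""
    return np.asarray(errors, dtype=float).mean(axis=0)
```

With two measurements deviating by [+2, +1, -1] and [+3, -2, -3] dB, the conservative method would correct [+2, 0, -1] (the middle frequency is skipped because the curves disagree on its sign) while the average method corrects [+2.5, -0.5, -2].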


----------



## musicreo

jaakkopasanen said:


> And of course all depends on the actual room response and how much that changes with listening position. It could be that your room response is not very sensitive to small position changes and so the difference between specific and generic room corrections is small. Or perhaps you're not very sensitive to small frequency response differences.



I increased the generic limit to 9000 Hz but the difference is still very small. When I look at the room plots they all look very similar.
I guess the reason is that I use only one loudspeaker in the center position and that I moved this speaker away from the wall into the room for the measurement.
However, the room correction improved the already great results. Your program is absolutely fantastic!


----------



## johnn29 (Jun 22, 2020)

This week I messed around with the Audyssey XT32 room correction on my AVR. I figured out how to basically shoot for a Harman in-room target/slope and achieved decent results both by measurement and by ear.

So I ran another Impulcifer recording to take advantage of that - this time with no virtual room correction. I just didn't want to faff around with the UMIK-1 mic placement, and I figured I'd already done it on my real system. I used the DT990s. After it was generated I A/B'd it against the real speakers, and this is the most convincing, most realistic one I've done to date. Not messing with the room correction means the frequency response is as identical to the room as it can be. But it was crazy how realistic it is.

So for all the headaches of figuring out headphone compensation and how it's really difficult at high frequencies - for me the method just works.

Still amazed by this tool - wish Dolby/DTS would open up their decoding to windows so it's channel based so I could incorporate those.

Edit: I should say that gluing the ear plugs to the mics has resulted in perfect channel balance, with no need for any of the correction methods.


----------



## Joe Bloggs

johnn29 said:


> This week I messed around with the Audyssey XT32 room correction on my AVR. I figured out how to basically shoot for a Harman in-room target/slope and achieved decent results both by measurement and by ear.
> 
> So I ran another Impulcifer recording to take advantage of that - this time with no virtual room correction. I just didn't want to faff around with the UMIK-1 mic placement, and I figured I'd already done it on my real system. I used the DT990s. After it was generated I A/B'd it against the real speakers, and this is the most convincing, most realistic one I've done to date. Not messing with the room correction means the frequency response is as identical to the room as it can be. But it was crazy how realistic it is.
> 
> ...


I've been wondering how Dolby / DTS work / don't work on Windows myself.  Can you share your experience?


----------



## johnn29

All Dolby/DTS codecs work fine in Windows via LAV Decoder, or even natively with Windows 10 now. LAV can passthrough Atmos/DTS:X to an AVR no problem.

Any regular Dolby/DTS track also decodes to channels in Windows - it'll decode to 5.1 or 7.1 if you have an analogue output soundcard or a virtual device, like those of us using HeSuVi with Impulcifer. However, it doesn't decode the height channels of DTS:X or Atmos; Windows only supports a maximum of 7.1 channels. It does support DTS:X and Atmos over headphones though, with both Dolby and DTS Headphone:X. However, the generic HRTF pales in comparison to what Impulcifer can generate.


----------



## musicreo

But even with the Atmos headphone app, will a file played with MPC via the LAV decoder be limited to 7.1, or is there a passthrough to the Atmos app?


----------



## johnn29

Yep - anything played with regular filters and direct show/LAV/MPC won't pass through Atmos.

Has to be using the Windows 10 Films and TV app.

THX recently brought out their headphone surround - it's limited to 7.1 too.


----------



## johnn29

There's a very good video with an acoustic tech here on room correction: 

Headphones take away a lot of the problems with room correction because we are always in the same space. But what I found interesting was that the calibrations for the mics we're using aren't accurate in the high treble at all. At 90 degrees the mic attenuates the treble heavily, so full range correction, even in Impulcifer, is probably not what we should shoot for. The alternative is to point the mic at each speaker. Well worth a watch if you're into any of this.


----------



## musicreo (Jun 25, 2020)

johnn29 said:


> But what I found interesting was that the calibrations for the mics we're using aren't accurate in the high treble at all. At 90 degrees the mic attenuates the treble heavily, so full range correction, even in Impulcifer, is probably not what we should shoot for.



Here is a German article which shows some measurements for three measurement mics under different angles.

In the graphs you can see the changes due to the angle. Cheap mics like the Behringer ECM8000 don't provide any calibration file for 0° or 90°. The mic specs say they are free field equalized, but the same guy from the article showed that they are closer to diffuse field equalization and give the best results at approximately 60 degrees! So using those mics at 0° will give wrong results at high frequencies. But once you know these drawbacks, the cheap mics are very close to professional measurement mics.


----------



## johnn29

Thanks - I didn't catch that in the video they were talking about the cheap mics that come with the receiver. 

Now I know what the 90 deg calibration file is for!


----------



## musicreo

I just skimmed through the video, but I think all the other aspects they discuss are only important for real speakers. As we are transferring the equalisation to headphones, we can even correct room modes and make cheap speakers sound more like expensive speakers. However, we have all the uncertainties of the headphone equalisation.


----------



## johnn29 (Jun 27, 2020)

I've been doing lots of experimenting with virtual room correction and I like my results now combined with the experimental reverb management feature.

My real room has a ton of ringing in the low bass. I have rigid walls and floors; the benefit is crazy room gain (115 dB down to 6 Hz on a small sealed sub - an SB2000), but I get ringing.




(RT60 plot)



See the plot above - that was only at 75 dB; the louder it gets, the worse the situation is. The bass response after real EQ is fairly flat, so from my understanding multi-sub in the real room won't do anything to get rid of the ringing. See below (no smoothing):



All it can do is destructively interfere with the peak and kill it to flatten the response. It still rings, just with less of a peak. Besides, that's another ~£800, plus a ton of setup and pain etc.

To really get rid of it it'd take bass traps - which is a big project and will eat up room and I cannot be bothered.


Enter Impulcifer. I nailed a nice virtual room correction on top of what Audyssey did up to 250 Hz. It killed some of the peaks I had in the midbass, which you can hear on an A vs B. But the ringing still remained. I get this weird sensation in my ears when I hear it too; not sure if it's some weird phasing thing, but it definitely sounds jarring in my real room. A testament to the convolution method is that I get a similar feeling with the headphones.

So I used the reverb management from the experiments folder to set everything to 300 ms. No more ringing! I guess this is the kind of time domain management we can do in virtual rooms that is impossible in the real world. What's interesting about the reverb control is that when you take it to artificially low levels, like sub-200 ms, the bass perception is much lower. I guess the SPL is identical but tighter/drier, so it's perceived as less?
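For intuition, "setting everything to 300 ms" can be thought of as forcing the tail of the impulse response to decay at a fixed rate. A minimal sketch of one way to do that with an exponential fade (a hypothetical illustration, not the actual experimental script; `knee_ms` is an assumed parameter protecting the direct sound and early reflections):

```python
import numpy as np

def clip_decay(ir, fs, target_rt60, knee_ms=50.0):
    """Apply an exponential fade so the tail of the impulse response
    decays by 60 dB over target_rt60 seconds. Samples before knee_ms
    are left untouched."""
    t = np.arange(len(ir)) / fs
    knee = knee_ms / 1000.0
    env = np.ones(len(ir))
    tail = t > knee
    # amplitude falls 60 dB (a factor of 1000) over target_rt60 seconds
    env[tail] = 10.0 ** (-3.0 * (t[tail] - knee) / target_rt60)
    return ir * env
```

Note that this kind of envelope only ever attenuates: it can shorten ringing, but it can never lengthen a room's decay.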

Jaakko - is it worth trying to roll this into the room targets for virtual room correction? i.e. an ideal decay time?

Thought it was worth sharing - an example of how you can improve on your real rooms with this stuff. Still blows my mind. And it's better than the Smyth - you cannot do any of this with the A16.


----------



## jaakkopasanen

Very interesting videos. I didn't know that a normal room measurement doesn't match what we hear. I believe this is 100% relevant for virtual room correction too.


----------



## jaakkopasanen

johnn29 said:


> Jaakko - is it worth trying to roll this into the room targets for virtual room correction? i.e. an ideal decay time?


Roll reverb management into room targets. What does that mean? Adjusting the room target based on reverb time? Or having a decay time as parameter for Impulcifer?


----------



## johnn29

Only in the bass regions does what we measure with a simple steady state measurement match our hearing. Through the transition frequency, which isn't a fixed point, it can help, but above it you are likely at best to keep things the same and at worst to make things worse. I figured that since we place the mics exactly where our ears are, it won't be a big issue for us though.

I figured the decay time should be a feature for Impulcifer proper (not experimental) and also a target decay time for virtual room correction. I've been watching a ton of videos - the consensus is between 250-350 ms. We're not simply deadening the room, but it'll kill off any of the ringing. See the Floyd Toole quote below for more detail. You're killing ringing, not reverb, at low frequencies.



			
Floyd Toole said:

> Often reverberation times measured at low frequencies are the decays of a few under-damped room modes. This is not reverberation; this is ringing!


----------



## jaakkopasanen

I agree that reverb control needs to make its way from the experimental script to Impulcifer, and it will. 

As I understood the problem with EQing the stochastic range, having the mics in the exact same location doesn't fix everything, because the measurements will be affected by reverb while our perception of frequency response is not. Please correct me if I am wrong.


----------



## Joe Bloggs (Jun 27, 2020)

johnn29 said:


> Floyd Toole said:
> 
> 
> 
> > Often reverberation times measured at low frequencies are the decays of a few under-damped room modes. This is not reverberation; this is ringing!


You got me wondering when Floyd Toole joined head-fi... 😍😮😋


----------



## johnn29

jaakkopasanen said:


> I agree that reverb control needs to make its way from the experimental script to Impulcifer, and it will.
> 
> As I understood the problem with EQing the stochastic range, having the mics in the exact same location doesn't fix everything, because the measurements will be affected by reverb while our perception of frequency response is not. Please correct me if I am wrong.



Actually that is true, from the videos I watched. But when using Impulcifer to set a room correction limit, I cannot even tell the difference between a 5 kHz limit and full range - A/B'ing a track or a movie, there's no difference. Honestly I also struggle to notice the difference with a flattened frequency response; I just notice the tilt: higher bass, lower treble. The only solution I can see is a limit on correction - unless you have something up your sleeve?


----------



## jaakkopasanen

johnn29 said:


> Actually that is true, from the videos I watched. But when using Impulcifer to set a room correction limit, I cannot even tell the difference between a 5 kHz limit and full range - A/B'ing a track or a movie, there's no difference. Honestly I also struggle to notice the difference with a flattened frequency response; I just notice the tilt: higher bass, lower treble. The only solution I can see is a limit on correction - unless you have something up your sleeve?


I have some ideas but they might not work in practice. If I understood correctly that reverb doesn't affect the perceived FR, then doing measurements which exclude the reverb would be a more accurate basis for room correction.

Convolving a very short exponential sine sweep with the impulse response, applying a steep band pass tracking filter to it and then deconvolving it into a new, quasi-anechoic impulse response might do the job. SNR is not an issue here because the convolved short sine sweep doesn't contain any noise or distortion - it's not an actual measurement - so a short sweep should be fine. A short sweep is important because that way each octave is shorter on the time axis, and therefore a 100 dB/oct band pass filter cuts the reverb shorter.
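The deconvolution at the core of that round trip can be sketched as a regularized frequency-domain division (a simplified illustration only; the steep tracking band pass filter is omitted, and `reg` is a hypothetical regularization constant):

```python
import numpy as np

def deconvolve(recording, sweep, reg=1e-12):
    """Estimate an impulse response by regularized frequency-domain
    division of the recorded sweep by the excitation sweep."""
    # FFT length >= len(recording) so the linear convolution relation
    # R(f) = S(f) * H(f) holds exactly at every bin
    nfft = 1 << (len(recording) - 1).bit_length()
    R = np.fft.rfft(recording, nfft)
    S = np.fft.rfft(sweep, nfft)
    # Tikhonov-style regularization avoids blowing up where S is tiny
    eps = reg * np.max(np.abs(S)) ** 2
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps)
    return np.fft.irfft(H, nfft)
```

Feeding it the convolution of a sweep with a known impulse response hands back that impulse response wherever the sweep has energy, which is why a synthetic (noise-free) short sweep can be convolved, filtered, and deconvolved without SNR penalty.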

The second thing I could do is implement mixed phase filtering for room correction. This would mitigate some of the phase related problems in the stochastic range discussed in the video. Room measurement mic placement could help with this too.

Of course I have no idea if it's actually possible to pull these off...


----------



## musicreo

jaakkopasanen said:


> . Room measurement mic placement could help with this too.



If I understood correctly, the angle of the mic changes the received FR at higher frequencies when you measure in the near field. When you measure in the far field, the angle of the mic changes the received FR only minimally, as you can no longer distinguish between reverb and direct sound.
From my amateur perspective I would say that all the measurements should be performed not too far away from the source.


----------



## castleofargh (Jun 28, 2020)

musicreo said:


> From my amateur perspective I would say that all the measurements should be performed not too far away from the source.


Then you measure the source instead of the sound at the listener's position. We tend to care about the latter more.


----------



## musicreo

castleofargh said:


> Then you measure the source instead of the sound at the listener's position. We tend to care about the latter more.



I don't mean you should measure 50 cm in front of the speaker with your measurement mic, but do your complete measurement as close as possible to the speakers. Don't do measurements at a 3 m distance, where you have more reverb, when you can do useful measurements at 1.5 m, where you have more direct sound - unless you have an optimised room where it does not matter whether you're measuring at 1 m or 3 m.


----------



## johnn29 (Jul 4, 2020)

I tried that - an ultra near field measurement. It works great for desktop/laptop/phone use because it feels more natural. But for two channel listening the reflections actually help the sound for me, so I prefer listening to music with my 1.5-2 m HRIRs. Some speakers also don't respond well to near field listening - they expect the room reflections to fill holes in the frequency response.

There's a very good discussion on it here: 

Also, thanks to Impulcifer I now can't help hearing the bass ringing and SBIR interference in my real room. I've got no real choice about where I place my front channels due to the room layout. But at least I have a virtual fix, and it doesn't matter so much for movies.


----------



## johnn29

I've been playing with applying EQ in Peace with the HRIR in place to good effect. In my real system I found after playing with Denon/Marantz "Cinema EQ" feature I liked it a lot - it's a high treble roll off. It tames overly bright tracks and especially helps with movies for things like breaking glass etc.

I found a graph of the THX-Re-eq filter which is similar so copied that.

I think I've realized that once you have the virtual room correction done and have reduced the bass ringing by clipping the reverb, you can still play with regular EQ to tweak to taste.


----------



## johnn29 (Jul 16, 2020)

Over on the Equalizer APO discussion forums, Mutt has created some great upmixing scripts based on Dolby Pro Logic in all its variants.

https://github.com/Dogway/emulation-random/tree/master/EqualizerAPO/Surround

So now we basically have a virtual AVR for Impulcifer - it's really useful when watching 2 channel videos on YouTube and the like, or for improving imaging with a center speaker. I can confirm they work as they're supposed to with various Dolby Pro Logic test videos.


----------



## yosoro

I'm curious whether Impulcifer can bring headphones close to the sound of speakers.
I have used software such as Waves and OOYH, which doesn't require ear measurements, and the results were not good.


----------



## castleofargh

yosoro said:


> I'm curious whether Impulcifer can bring headphones close to the sound of speakers.
> I have used software such as Waves and OOYH, which doesn't require ear measurements, and the results were not good.


It is a massive step up in getting more accurate audio cues. But how convincing it will be, might depend on you. Can you trick your brain into thinking you're listening to speakers, when there are no speakers to see in front of you? Can you remain tricked if you move your head and the sound turns with you? I feel like different people can be impacted differently.

But in terms of acoustics, with the impulses you're only missing non-linear variations (which can be seen as both bad and good). This time the rest is measured at your ears instead of on some dummy head. That may or may not make a big difference (how close to the human average are you?).
I happen to care about head movement A LOT. But even then, I felt that custom impulses without head tracking were more important than head tracking with the wrong acoustic model (Waves NX was wrong for me; I can't tell about anybody else). OOYH has the benefit of having some really cool rooms/speakers, but nothing is made for your own head, so once again it's all down to genetic luck. I'm sure some people are getting a very convincing result from it. I didn't. And among the stuff I've tested, it's the one I stopped using the fastest.

All that to say: it depends, but yes, it's probably a significant step up if you measure the signal of the speakers and of your headphones from inside your ears (so you need in-ear mics, DIY or binaural products).


----------



## musicreo (Jul 17, 2020)

yosoro said:


> I'm curious if Impulcifer can bring the headphones close to the sound of the speaker.



For me it sounds very close to real speakers.

Except for head tracking, Impulcifer has functions (room correction) that even the Realiser doesn't provide.

To be honest, I have done about 50 measurements with 5 self-built mics (two different mic types) and there are some measurements that did not sound good. I tested different speakers, different distances, different positions of the speakers and different ways of fixing the mics in the ear. There is a learning curve, but now all of my last measurements were very good.


----------



## johnn29

Along with room correction, Impulcifer also lets you use IEM's which Smyth doesn't.

It's exactly like my real speakers when you sit in the same spot and do an A vs B. Combine it with a Bass Transducer and play some Atmos demo clips and you'll be shocked!

The biggest impact on the quality of my HRIRs actually came from cutting the hooks off my Sound Professionals mics. Since gluing the earplugs to them, channel balance is perfect every time and it just sounds great every time.

Once you take whatever HRIR you make to other places, your brain does play tricks and some sound odd. But if you take it back to the seat where you measured, it's going to sound the same. The way I've got around this is taking an ultra nearfield measurement that I use with my mobile devices or outside my own house. That sounds good anywhere.

I think head tracking is valuable for identifying where sounds are located, especially behind you or above you. But if you're just listening to regular movies/music I don't think it's that valuable. But I guess this varies with each user. It works well with my Mobius because the HRIR is generic - so the tracking helps the center channel pop out of your head.


----------



## sander99 (Jul 18, 2020)

johnn29 said:


> Impulcifer also lets you use IEM's which Smyth doesn't.


You can use IEMs with a Smyth Realiser. You can create a HPEQ (headphone compensation) with either the so called manloud or manspeaker method.
[Edit: see my next post]


----------



## johnn29

Ah - wasn't aware of that. Thanks


----------



## sander99

sander99 said:


> You can use IEMs with a Smyth Realiser. You can create a HPEQ (headphone compensation) with either the so called manloud or manspeaker method.


Actually the manspeaker method would not be so suitable - my mistake. In this method you compare the headphones with the real speakers by ear using test signals; with IEMs that would mean taking them out and inserting them all the time.
With manloud you compare and try to level match different frequencies with each other by ear, so you can leave the IEMs in your ears during the entire process.


----------



## lowdown

My experience with Impulcifer is a bit different. It doesn't match the sound of my speakers - it's much better. The illusion of listening to actual speakers in front of me is stunningly convincing. After months I still sometimes have to take off my headphones to make sure the speakers aren't playing late at night and disturbing my neighbors, even though given my setup I know they can't be. But on several key points the Impulcifer sound is also much better than my speakers: the tonal balance, clarity, resolution of detail, elimination of any "in the speaker" sound, and zero crossover or room anomalies all make Impulcifer a major upgrade. It's a very obvious improvement - so much so that I no longer listen to my speakers for music.


----------



## jaakkopasanen

I made some performance improvements to Impulcifer. On my machine the demo measurements now get processed in 9 seconds instead of 35. The plots are still very slow though, and I doubt there is a lot that can be done about those - there are 256 graphs plotted for a 7.1 system after all.

Reverberation management is coming to Impulcifer proper soon.


----------



## johnn29 (Jul 22, 2020)

Very excited about reverb management!

Edit: Liking the progress dialogues in the latest update:


----------



## kalstein

I have a question about the 'room correction' part.
Is that additional information for making 'hesuvi.wav', or a standalone room correction solution like REW?


----------



## johnn29

Room correction just amends the processed hesuvi.wav with different parameters: a specified curve/tilt, a flatter frequency response, and a bass boost if your actual room can't reach as low as the target.
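As a sketch of what such a target can look like, here is a toy curve combining a downward tilt with a low-shelf bass boost. The function, parameter names and defaults are illustrative only, not Impulcifer's actual options:

```python
import numpy as np

def room_target(frequencies, tilt_db_per_octave=-0.5, bass_boost_db=6.0,
                bass_corner_hz=105.0):
    """Illustrative room target: a gentle downward tilt referenced to 1 kHz
    plus a low shelf that approaches bass_boost_db below the corner frequency.
    Not Impulcifer's implementation; just the shape of the idea."""
    f = np.asarray(frequencies, dtype=float)
    tilt = tilt_db_per_octave * np.log2(f / 1000.0)  # 0 dB at 1 kHz
    shelf = bass_boost_db / (1.0 + (f / bass_corner_hz) ** 2)
    return tilt + shelf

# More boost at 20 Hz than at 10 kHz, crossing roughly 0 dB in the mids
print(room_target([20.0, 1000.0, 10000.0]))
```

Swapping in a flatter tilt or a different corner frequency is then just a parameter change.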

Jaakko - some more videos that might be of interest. I saw there's an open issue around mixed-phase correction, so the Dirac one might be useful for you, although you probably know it already! The Audyssey one is also from the software engineer.

 - Audyssey room correction.
 - Dirac creator


----------



## johnn29 (Jul 26, 2020)

*Loudness Compensation / Dynamic EQ with HeSuVi/Impulcifer use*:

I found that Equalizer APO has a built in loudness compensation filter that works ideally! Just load up Editor.exe in the EAPO folder and then click the plus button.





Bass perception falls off as volume falls, so when we listen at lower volumes the bass seems to be missing and the sound is unbalanced. The target curves we shoot for with our real AVRs or Impulcifer (like the 6 dB Harman slope) apply at reference volume. The good thing with the loudness compensation algorithm is that you can actually get a reference volume on headphones, à la 0 on your AVR, and it'll boost the bass/treble based on psychoacoustic models (Fletcher-Munson, though more modern models).

The filter lets you set a reference volume; then you just leave your device volume alone and use Windows to control it. When you set your volume to 100 (or whatever it finds as the 75 dB mark), that's reference volume. Now, it's easier to get a reference volume on an AVR because you can use a UMIK-1 with the calibration file loaded and shoot for 75 dB with your internal test tone. Over headphones it's harder. What I did was get the tone playing at 72 dB on my real AVR, then flip to my open-back DT990 and perceptually match.
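As a rough mental model only (not Equalizer APO's actual algorithm, which follows equal-loudness contour data), the compensation behaves like a bass boost that grows as the listening level drops below the reference:

```python
def loudness_bass_boost_db(listening_level_db, reference_level_db=75.0,
                           db_of_boost_per_db_below_ref=0.3, max_boost_db=10.0):
    """Toy model of loudness compensation: no boost at or above the
    reference level, then a growing (capped) bass boost as the listening
    level falls below it. All numbers here are made up for illustration."""
    deficit = max(0.0, reference_level_db - listening_level_db)
    return min(max_boost_db, db_of_boost_per_db_below_ref * deficit)

print(loudness_bass_boost_db(75.0))  # at reference: no boost
print(loudness_bass_boost_db(55.0))  # 20 dB below reference: some boost
```

The real filter applies frequency-dependent gains derived from equal-loudness contours, but the monotonic "quieter means more bass boost" behaviour is the same.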

Works really well.

Some headphones have this built in, like the Bose 700. But unlike with Bose, this one actually knows what the reference volume should be, so it works ideally, not approximately, with any content that has a reference volume, like movies.

The offset setting is there for content that's mixed louder than movies, like music or TV. Basically, if you google Audyssey's Dynamic EQ you'll get a better understanding.


----------



## jaakkopasanen

Recommended reading for those who are interested in the reverberations and early reflections. https://www.audioholics.com/room-ac...ions-in-home-theaters-a-different-perspective

The article speculates that the ideal reverberation time and first reflection delay depend on the source material: speech requires different acoustics than, say, classical music. This is of course not possible in a physical room, but with Impulcifer one can set different reverb times for the center channel, front channels and surround channels; in movie soundtracks the center channel mainly carries dialogue and the surround channels carry ambiance. Additionally, different room acoustics could be ideal for different genres of music, and with Impulcifer one can have different BRIRs for different genres.


----------



## johnn29

Jaakko - another related one recently on speech intelligibility/reflections from Matt Poes at Audioholics:


----------



## kalstein

I want to apply 'room correction', but I get an error.

```
Traceback (most recent call last):
  File "impulcifer.py", line 513, in <module>
    main(**create_cli())
  File "impulcifer.py", line 59, in main
    plot=plot
  File "C:\Users\JHJ\Impulcifer\room_correction.py", line 83, in room_correction
    fr = ir.frequency_response()
  File "C:\Users\JHJ\Impulcifer\impulse_response.py", line 190, in frequency_response
    fr = FrequencyResponse(name='Frequency response', frequency=f[1::step], raw=m[1::step])
ValueError: slice step cannot be zero
```

In the frequency_response() function, the 'step' value is 0,
because len(f) = 4800 and n = 6000 (since self.fs = 48000),
and len(self.data) is 9600.

Which part is the problem?
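For reference, the failure can be reproduced in isolation: with a 100 ms impulse response there are fewer FFT bins than interpolation points, so the integer division yields a zero slice step. This is a guess at the logic based on the numbers above, not the exact Impulcifer code:

```python
import numpy as np

f = np.linspace(0, 24000, 4800)   # only 4800 frequency bins from a very short IR
n = 6000                          # desired number of output points
step = int(len(f) / n)            # 4800 / 6000 truncates to 0
try:
    f[1::step]                    # a slice step of zero is illegal in Python
except ValueError as e:
    print(e)                      # "slice step cannot be zero"
```

So any measurement short enough that the IR has fewer bins than the interpolation grid will hit this exact ValueError.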


----------



## jaakkopasanen

Lo and behold, the decay time management is here!

Impulcifer has a new parameter called *--decay* which gives you the power to adjust the decay time of the binaural room impulse responses. This new method works on the basis of target levels instead of adjustment magnitude, so you should be able to achieve consistent decay times for all of your BRIRs without knowing the natural decay times in advance. Proper decay time detection algorithms needed to be implemented for this, and it was quite a bit more work than I anticipated. But the added benefit is that the RT20, RT30 or RT60 time in the README is now the correct one, as is the peak-to-noise ratio. The largest decay time available is used.

Use the new tool with e.g. *--decay=300* and the resulting BRIR will have an RT60 time of 300 ms. Or actually it might not be the real RT60 time if the impulse response doesn't have a good enough signal-to-noise ratio, but in those cases the 60 dB decay time is calculated from RT20 or RT30. Separate values for individual speakers are also possible, like previously: *--decay=FL:400,FC:200,FR:400,SL:300,BL:300,BR:300,SR:300*. Decay time cannot be adjusted upwards with this tool, since I thought that wouldn't make any sense.
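For the curious, 60 dB decay times are commonly estimated with Schroeder backward integration plus a linear fit over a smaller window (e.g. -5 to -25 dB for RT20) extrapolated to 60 dB. Here is a simplified sketch of that idea for a mono impulse response as a NumPy array; this is my own sketch of the standard technique, not Impulcifer's implementation:

```python
import numpy as np

def rt60_from_ir(ir, fs):
    """Estimate the 60 dB decay time of an impulse response: Schroeder
    backward integration, a linear fit over the -5..-25 dB region (RT20),
    extrapolated to 60 dB of decay. A simplified sketch only."""
    energy = np.asarray(ir, dtype=float) ** 2
    schroeder = np.cumsum(energy[::-1])[::-1]        # energy remaining at each sample
    schroeder_db = 10 * np.log10(schroeder / schroeder[0])
    idx = np.where((schroeder_db <= -5) & (schroeder_db >= -25))[0]
    times = idx / fs
    slope, _ = np.polyfit(times, schroeder_db[idx], 1)  # decay rate in dB/s
    return -60.0 / slope                                # seconds to fall 60 dB

# Synthetic IR: white noise under an exponential envelope with a known ~300 ms RT60
fs = 48000
t = np.arange(fs) / fs
ir = np.random.default_rng(0).standard_normal(fs) * 10 ** (-60 * t / (20 * 0.3))
print(round(rt60_from_ir(ir, fs), 3))  # close to 0.3 s
```

Because the synthetic decay rate is known, the estimate landing near 0.3 s is a quick sanity check that the fit-and-extrapolate logic is right.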

The new feature is not in master branch yet so if you'd like to try it out, switch to *reverb-management* branch:

```
git checkout reverb-management
```

Any feedback is most welcome!

Here's a debug graph from the decay tail. It's not possible to generate these yourself.


----------



## jaakkopasanen

kalstein said:


> I want to apply 'room correction'. but it happens error.
> 
> ```
> Traceback (most recent call last):
> ...


That sounds like your room impulse response is only 4800 samples long, i.e. 100 ms. How did you manage that? Are you recording in an anechoic chamber, perhaps? Either that or your room measurement might be corrupted somehow. I would need your measurement files to say more. Could you share them with me?


----------



## kalstein

jaakkopasanen said:


> That would sound like your room impulse response is only 4800 samples long ie. 100 ms. How did you manage this? Are you recording in an anechoic chamber perhaps? That or your room measurement might be corrupted somehow. I would need to have your measurement files to say more. Could you share them with me?



https://drive.google.com/file/d/1AHp6Kh9nwnN7nGvBzDRNCnqYrSDnbIeX/view?usp=sharing
(The link will stop working after 3 days...)

I made the room-FL,FR-left,right.wav file from Audacity using single-channel export. Should I use another method?


----------



## johnn29 (Aug 2, 2020)

Amazing! Going to test/play now.

You need to run


```
git fetch
```

To get the reverb branch before you can switch.

Just tried it and went back and forth with my BRIRs. It corrects the bass ringing I have in my real room, which is baked into my BRIRs, and unlike the flat --reverb flag I used before, the BRIR still feels natural and not more in-your-head.

I suspect it'll completely fix those situations I used to get when recording only one speaker, where you could actually hear (virtual) channel bleed on channel ID checks.

As usual man, amazing work.


----------



## jaakkopasanen

kalstein said:


> https://drive.google.com/file/d/1AHp6Kh9nwnN7nGvBzDRNCnqYrSDnbIeX/view?usp=sharing
> (It will not share after 3 days...)
> 
> I made room-FL,FR-left,right.wave file from Audacity using 1 channel export. Should I use another method?


Thanks. I debugged this a bit but at first actually got different errors than what you reported. My problem was caused by a bug which I found and fixed. I'm using the *reverb-management* branch, which is going to be merged into the master branch soonish, so I'm not eager to spend a lot of time debugging and fixing the master branch at this point.

Your measurements are looking quite odd. There is a massive amount of harmonic distortion in all of your measurements. Impulcifer's algorithms aren't necessarily robust with this level of THD. What kind of speakers are you using for the purpose? How about microphones and other components of the recording chain?

Here are the plots: https://imgur.com/a/smhBuTv

For example, the FL-left measurement graphs look like this:





While my own measurements look like this:





For some reason your decay graph shows a 10 dB bump after the initial (fundamental) impulse has already decayed. Also, the third harmonic impulse is just under -10 dB, while my own measurements have the highest harmonic at under -50 dB. The spectrogram graph for your measurements also looks like nothing I've seen before. Your room measurement graphs actually looked a lot better, although they do have the same crazy spectrogram behaviour.

Another problem is that in this case the decay time detection algorithm fails with one of the impulse responses, resulting in a very long BRIR file. My demo hesuvi.wav is less than 2 MB while the hesuvi.wav generated from your measurements is over 13 MB. This doesn't mean it won't work, but it will introduce unnecessary CPU load, which might or might not be noticeable on your machine. The noise tail can always be cropped out in Audacity if that seems necessary.
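If cropping in Audacity gets tedious, the same idea can be scripted. A rough sketch that cuts everything after the last short window whose RMS is above a threshold relative to the peak; the threshold and window length here are arbitrary choices of mine, and this is not how Impulcifer itself crops:

```python
import numpy as np

def crop_tail(ir, fs, threshold_db=-80.0, window_ms=10.0):
    """Crop the trailing noise floor of an impulse response: find the last
    window whose RMS is above threshold_db relative to the absolute peak,
    and cut everything after it. Illustrative sketch only."""
    ir = np.asarray(ir, dtype=float)
    win = max(1, int(fs * window_ms / 1000))
    peak = np.max(np.abs(ir))
    keep = len(ir)
    for i in range(len(ir) // win - 1, -1, -1):
        rms = np.sqrt(np.mean(ir[i * win:(i + 1) * win] ** 2))
        if 20 * np.log10(rms / peak + 1e-12) > threshold_db:
            keep = (i + 1) * win
            break
    return ir[:keep]
```

Running this per channel before writing hesuvi.wav would trim the noise tail and the extra convolution cost along with it.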

None of this necessarily means you won't get results that sound good or at least create the illusion of speakers. The harmonic distortion in particular is (in theory) negated entirely by Impulcifer, as long as it doesn't mess too badly with the different heuristic algorithms I'm using there.

Try out the *reverb-management* branch:

```
git fetch
git checkout reverb-management
git pull
```


----------



## kalstein (Aug 2, 2020)

jaakkopasanen said:


> Thanks. I debugged this a bit but actually got some other errros than what you reported, at first. My problems was caused by a bug which I found and fixed. I'm using branch *reverb-management* which is going to be merged to the master branch soonish so I'm not eager to spend a lot of time debugging and fixing the master branch at this point.
> 
> Your measurements are looking quite odd. There is a massive amount of harmonic distortion in all of your measurements. Impulcifer's algorithms aren't necessarily robust with this level of THD. What kind of speakers are you using for the purpose? How about microphones and other components of the recording chain?
> 
> ...



Thanks for your reply.
How did you make the plots? I cannot generate them.
I already reported https://github.com/jaakkopasanen/Impulcifer/issues/51.

I used parameters like this:
"python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir" --no_room_correction --channel_balance=mids --plot"
Without the '--plot' option, impulcifer.py works fine.

Here is my sound environment:
  Speakers: Creative SXFI (2-speaker mode) - Bose C20
  Headphones: Topping D50s - JDS Atom amp - Audeze LCD-GX
  Mics: SP-TFB-2 with XLR, UMIK-1

and I measured my speakers with REW (using the UMIK-1):






There is some booming at 200 Hz in the SPL plot, but it doesn't bother me much, so I didn't EQ it.
Is there something odd?

And I have a question about the measurement for room correction.
When I measure with the UMIK-1, should I be sitting in the chair, or should only the mic be there?
I am a little confused.


----------



## musicreo (Aug 2, 2020)

OK, I also have the same error with --plot,
although it was working some weeks ago.


----------



## jaakkopasanen (Aug 2, 2020)

kalstein said:


> Thanks for your replay.
> How did you make plot? I cannot check the plot.
> I already reported https://github.com/jaakkopasanen/Impulcifer/issues/51.
> 
> ...


Is it the Creative SXFI carrier sound bar that you're using?

When you do room measurements, there should only be the microphone in the listening position, and you should be somewhere else yourself. *--plot* is the parameter which makes Impulcifer generate the graphs, but since that's not working for you at the moment, you obviously cannot use it.

Try out the reverb-management branch if you didn't already and tell me if the problem persists.


----------



## kalstein

jaakkopasanen said:


> Is it the Creative SXFI carrier sound bar that you're using?
> 
> When you do room measurements, there should only be the microphone in the listening position and you should be somewhere else yourself. *--plot* is the parameter which makes Impulcifer generate the graphs but since that's not working for you at the moment, you cannot obviously use it.
> 
> Try out the reverb-management branch if you didn't already and tell me if the problem persists.



I am using the SXFI Amp. I use only the Bose speakers.


----------



## kalstein

jaakkopasanen said:


> Is it the Creative SXFI carrier sound bar that you're using?
> 
> When you do room measurements, there should only be the microphone in the listening position and you should be somewhere else yourself. *--plot* is the parameter which makes Impulcifer generate the graphs but since that's not working for you at the moment, you cannot obviously use it.
> 
> Try out the reverb-management branch if you didn't already and tell me if the problem persists.



I changed the branch, but got the same result.


```
C:\Users\JHJ\Impulcifer>venv\Scripts\activate

(venv) C:\Users\JHJ\Impulcifer>git checkout reverb-management
Switched to a new branch 'reverb-management'
Branch 'reverb-management' set up to track remote branch 'reverb-management' from 'origin'.

(venv) C:\Users\JHJ\Impulcifer>git pull
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 2), reused 3 (delta 2), pack-reused 0
Unpacking objects: 100% (3/3), 472 bytes | 42.00 KiB/s, done.
From https://github.com/jaakkopasanen/Impulcifer
   8e83bf3..e94f364  reverb-management -> origin/reverb-management
Updating 8e83bf3..e94f364
Fast-forward
 hrir.py | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
 
 (venv) C:\Users\JHJ\Impulcifer>python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir" --no_room_correction --channel_balance=mids --plot
Creating impulse response estimator...
Running headphone compensation...
Creating headphone equalization...
Creating frequency response target...
Opening binaural measurements...
Plotting BRIR graphs before processing...
Cropping impulse responses...
Equalizing...
 ** On entry to DLASCLS parameter number  4 had an illegal value
 ** On entry to DLASCLS parameter number  4 had an illegal value
Traceback (most recent call last):
  File "impulcifer.py", line 556, in <module>
    main(**create_cli())
  File "impulcifer.py", line 131, in main
    fr.smoothen_heavy_light()
  File "C:\Users\JHJ\Impulcifer\venv\lib\site-packages\autoeq\frequency_response.py", line 1240, in smoothen_heavy_light
    treble_iterations=1
  File "C:\Users\JHJ\Impulcifer\venv\lib\site-packages\autoeq\frequency_response.py", line 1197, in smoothen_fractional_octave
    treble_f_upper=treble_f_upper
  File "C:\Users\JHJ\Impulcifer\venv\lib\site-packages\autoeq\frequency_response.py", line 1155, in _smoothen_fractional_octave
    y_normal = savgol_filter(y_normal, self._window_size(window_size), 2)
  File "C:\Users\JHJ\Impulcifer\venv\lib\site-packages\scipy\signal\_savitzky_golay.py", line 337, in savgol_filter
    coeffs = savgol_coeffs(window_length, polyorder, deriv=deriv, delta=delta)
  File "C:\Users\JHJ\Impulcifer\venv\lib\site-packages\scipy\signal\_savitzky_golay.py", line 139, in savgol_coeffs
    coeffs, _, _, _ = lstsq(A, y)
  File "C:\Users\JHJ\Impulcifer\venv\lib\site-packages\scipy\linalg\basic.py", line 1218, in lstsq
    raise LinAlgError("SVD did not converge in Linear Least Squares")
numpy.linalg.LinAlgError: SVD did not converge in Linear Least Squares
```


----------



## johnn29

Might be a silly thing but have you ensured that SXFI processing is disabled? The button LED should be orange, not green.


----------



## kalstein

johnn29 said:


> Might be a silly thing but have you ensured that SXFI processing is disabled? The button LED should be orange, not green.



Don't worry about that.


----------



## johnn29

If you're trying to record the SXFI's BRIR via sweeps over headphones, it'd be better to just record them via line in. https://sourceforge.net/p/hesuvi/wiki/How-To Record Impulse Responses Digitally/

That would explain all the funky noise in your recordings.

I did the same with Out of Your Head.


----------



## kalstein

johnn29 said:


> If you're trying to record the SXFI's BRIR via sweeps over headphones it'd be better to just record them via line in. https://sourceforge.net/p/hesuvi/wiki/How-To Record Impulse Responses Digitally/
> 
> That would explain all the funky noise in your recordings?
> 
> I did the same with Out of your Head



I don't like the SXFI 3D sound much,
because it has too much reverb.


----------



## yosoro

I found the built-in speaker frequency responses of the Realiser A8 in a discussion about the Realiser A16.
Is it possible to run them in Impulcifer?


----------



## yosoro

This is the file sent by an A16 user:
https://mega.nz/file/GZsxxa6a#Su6w6ipONLn-jOsy67knY6H4nePNLja8V-ijPPerDCE


----------



## johnn29 (Aug 4, 2020)

So I tried the --decay parameter on the speaker recordings I did without a sub, which then rely on virtual room correction to boost the bass. Initially these were the ones that had that virtual channel bleed on channel ID call-outs. Happy to say it has completely fixed them, with none of the issues of the reverb crop. With a simple crop it just sounded way more in-your-head.

Also, I'm constantly amazed at our perceptions. I listened to a BRIR on headphones while sitting in front of my speakers, in the same place it was measured. It sounded amazing and subjectively much better than any of the previous BRIRs I made. I go to sleep, go to my desk where there are no speakers, and another BRIR sounds better. We absolutely do listen with our eyes. It sounds insane, but I think this has really helped me understand why there's so much quackery in audiophilia.

Also, anyone getting errors plotting charts: make sure you update your dependencies.


```
pip install -U -r requirements.txt
```

It happened to me on one of my recordings (not the other), but as soon as I updated it went through fine.


----------



## musicreo

I tested the decay option. It improves the sound of my measurement a little. I like it.



johnn29 said:


> Also anyone getting errors plotting charts make sure you update your dependencies
> 
> ```
> pip install -U -r requirements.txt
> ...



No, this didn't fix the error.


----------



## johnn29

@jaakkopasanen - is eq.csv still supported? I'm trying to transform a recording for my AirPods Pro, but the output doesn't sound right and I can't see it doing anything in the terminal. It's been this way for a while; I was assuming I just couldn't get a decent headphone transform done anymore. But I remember that early on, when it was eq.wav, it used to work fine.


```
(venv) C:\Users\dushy\Impulcifer>python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir " --decay=300  --fs=44100
Creating impulse response estimator...
Running room correction...
Running headphone compensation...
Creating headphone equalization...
Creating frequency response target...
Opening binaural measurements...
Cropping impulse responses...
Equalizing...
Adjusting decay time...
Normalizing gain...
Plotting results...
Resampling BRIR to 44100 Hz
Writing BRIRs...
```


----------



## kalstein

jaakkopasanen said:


> Thanks. I debugged this a bit but actually got some other errros than what you reported, at first. My problems was caused by a bug which I found and fixed. I'm using branch *reverb-management* which is going to be merged to the master branch soonish so I'm not eager to spend a lot of time debugging and fixing the master branch at this point.
> 
> Your measurements are looking quite odd. There is a massive amount of harmonic distortion in all of your measurements. Impulcifer's algorithms aren't necessarily robust with this level of THD. What kind of speakers are you using for the purpose? How about microphones and other components of the recording chain?
> 
> ...



I think the Creative SXFI isn't suitable for measuring audio signals.
The plot below was measured with the Topping D50s today.





Thanks a lot.


----------



## johnn29

I use my SXFI Amp and it works perfectly.


----------



## jaakkopasanen

kalstein said:


> I think Creative Sxfi dosen`t fit to measure audio signal.
> Below plot is measured by Topping D50s today.
> 
> 
> ...


Looking very good! How does it sound?


----------



## jaakkopasanen

johnn29 said:


> @jaakkopasanen. Is eq.csv still supported? Trying to transform a recording for my Air Pods Pro - but the output doesn't sound right and I can't see it doing anything on the terminal. It's been like this way for a while - I was assuming I just couldn't get a decent headphone transform done anymore. But I remember early on, when it was eq.wav it used to work fine.
> 
> 
> ```
> ...


It's supposed to work. Is the result any different if you use eq.csv vs not using it?


----------



## kalstein

johnn29 said:


> I use my SXFI amp and it works perfect.


I don't know why... that's just the result I get.


----------



## kalstein

jaakkopasanen said:


> Looking very good! How does it sound?



The normal case sounds the same, I think.
But the 'Harman Room Target' case is completely different.
Previously, only unpleasant reverb was heard in the bass range.
This time, the correct bass comes through.


----------



## johnn29 (Aug 5, 2020)

jaakkopasanen said:


> It's supposed to work. Is the result any different if you use eq.csv vs not using it?



Yep, they are different, so I guess it is processing it.

I don't get it though: early on, when you first came out with the transform method, I used the Bose 700 measurements from rtings with their AirPods Pro measurements and it sounded bang-on right. Now I just can't get them to sound right. Whenever I use my AirPods Pro I still use one of the first BRIRs I made. I wish I had kept the damn AutoEQ output!

This time I'm trying to use a recording I made with the Creative Aurvana Live and running this in AutoEQ


```
python autoeq.py --input_dir="C:\Users\dushy\AutoEq\oratory1990\data\earbud\Apple Airpods Pro" --output_dir="my_results/APP (Aurvana)" --compensation="compensation/harman_over-ear_2018_wo_bass.csv" --sound_signature="C:\Users\dushy\AutoEq\results\oratory1990\harman_over-ear_2018\Creative Aurvana Live!\Creative Aurvana Live!.csv" --equalize --parametric_eq --max_filters=5+5 --ten_band_eq --bass_boost=4
```

But it sounds nothing like it. Same when I try to use rtings as a source.

Odd. Will experiment more and with more IEMs.

Edit: Can confirm it's all working. Ran a bunch of them just now. Still getting the best results from the rtings source. I don't think oratory's AirPods Pro measurements were solid; I know he didn't publish an EQ because the FR varied wildly on the dummy head depending on insertion. Perhaps rtings is just better on this occasion.


----------



## kalstein

I think everyone's room has a different left and right frequency response.
So, how about putting the same speaker data on the left and the right?
Could the room acoustics then still be applied relatively well?


----------



## johnn29

You should try the `--channel_balance` modes to see if any work well.

I found that while they worked well for restoring channel balance, they did have an impact on the HRTF. I guess because your ears aren't actually identical, it messed with the localization and you got an unusual feeling from it.


----------



## kalstein

johnn29 said:


> You should try the `--balance` modes to see if any work well.
> 
> I found that while they worked well for restoring channel balance - they did have an impact on HRTF. I guess because your ears aren't actually identical so it messed with the localization and you got an unusual feeling from it.


I think channel balance affects the headphone recording, doesn't it? I was talking about the speaker recordings.


----------



## musicreo

kalstein said:


> so, how about putting the same speaker data on the left and the right?



Can you explain what you mean by "same speaker data"?


----------



## kalstein

musicreo said:


> Can you explain what you mean by "same speaker data"?


left and right speaker. 
Of course, I know that the data is bound to be different due to room acoustics.


----------



## sander99

kalstein said:


> left and right speaker.
> Of course, I know that the data is bound to be different due to room acoustics.


Speaker and room information and personal HRTF information are entangled and not separable (at least not easily) in the HRIR measurements. And as someone mentioned, your ears won't be identical (and your head won't be 100% symmetrical), so your HRTF won't be left-right symmetrical. So you cannot simply use the same speaker data for left and right without messing up something regarding the HRTF.
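Concretely, "using the same speaker data on both sides" amounts to mirroring one measurement pair by swapping ears. A hypothetical helper (not an Impulcifer feature) makes the compromise explicit:

```python
import numpy as np

def mirror_fl_to_fr(fl_left, fl_right):
    """Fabricate an FR (front right) measurement from an FL (front left)
    one by swapping ears: the right speaker's right-ear response is taken
    from the left speaker's left-ear response, and vice versa. This forces
    left-right symmetry, discarding the asymmetries of the head, ears and
    room described above. Illustrative helper only."""
    fr_left = np.asarray(fl_right, dtype=float).copy()   # contralateral ear
    fr_right = np.asarray(fl_left, dtype=float).copy()   # ipsilateral ear
    return fr_left, fr_right
```

Everything your two ears hear differently, by construction, disappears from the mirrored side, which is why the virtual image tends to collapse inwards.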


----------



## musicreo

kalstein said:


> left and right speaker.
> Of course, I know that the data is bound to be different due to room acoustics.


You mean using the inverted left speaker measurement for the right ear?
I have tested it but the virtual center sounds much more "in your head".


----------



## kalstein

musicreo said:


> You mean using the inverted left speaker measurement for the right ear?
> I have tested it but the virtual center sounds much more "in your head".


Yes, I meant that. Thanks for sharing your experience.


----------



## johnn29

The speaker characteristics are separated out with the room measurement though, so I believe a form of that already happens with room correction, which then preserves the HRTF.


----------



## kalstein

I changed my speakers from the Bose C20 to the PSB XB and recorded again.
The normal case is OK, but the room correction case gives an error.
(It's a message I have seen before.)


```
(venv) C:\Users\JHJ\Impulcifer>python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir" --specific_limit=5000 --generic_limit=2000 --room_mic_calibration="data/7063899_90deg.txt" --room_target="data/harman-in-room-loudspeaker-target.csv" --decay=300
Creating impulse response estimator...
Running room correction...
Traceback (most recent call last):
  File "impulcifer.py", line 556, in <module>
    main(**create_cli())
  File "impulcifer.py", line 60, in main
    plot=plot
  File "C:\Users\JHJ\Impulcifer\room_correction.py", line 83, in room_correction
    fr = ir.frequency_response()
  File "C:\Users\JHJ\Impulcifer\impulse_response.py", line 352, in frequency_response
    fr = FrequencyResponse(name='Frequency response', frequency=f[1::step], raw=m[1::step])
ValueError: slice step cannot be zero
```

and here is my record files.
https://drive.google.com/file/d/19nTS_r7eIBKulgWZ91GkHYPB_0i6AYqs/view?usp=sharing

@jaakkopasanen check it again please.


----------



## jaakkopasanen

My idea of being able to correct rooms up to 20 kHz really does seem silly now...


----------



## johnn29

Saw that video a while back; it's why I gave up on room correction above 500 Hz with my AVR! If you've not read it, his book is excellent too.

It's made my life a lot easier with making measurements: just a single room measurement is good, and you cut it off at the same range.

I've been mostly on headphones during a heatwave here; my Marantz AVR pumps out way too much heat. Going back to real speakers though, the one thing that strikes me is that your brain can really localize sound because of head movement. It's probably a real pain to do, but is head tracking on your radar? Waves has a Bluetooth head tracker.


----------



## jaakkopasanen

johnn29 said:


> Saw that video a while back - it's why I gave up on room correction above 500hz with my AVR! If you've not read it - his book is excellent too.
> 
> It's made my life a lot easier with making measurements - just a single room measurement is good and you cut if off at the same range.
> 
> Been mostly on Headphones during a heatwave here - my Marantz AVR pumps out way too much heat. In going back to real speakers though, the one thing that strikes me is your brain can really localize sound because of head movement. It's probably a real pain to do - but is head tracking on your radar? Waves has a blue tooth headtracker.


I would love to implement head tracking, but that needs to be in the real-time processor, not in Impulcifer. Impulcifer of course would have to support measurements at different angles. I'm not certain, but I'm guessing that head tracking with Equalizer APO would require me to fork it and implement a version of the convolver which supports real-time updates. And that sounds like a lot of work.


----------



## johnn29

Ah yes, that would be a huge project. I remember you mentioned that a while back. In real-life viewing you never really move your head that much, but when you get the freedom back it makes a difference. I still don't think it's essential.


----------



## sander99

I think head tracking is very important because the brain can initiate involuntary head movements to solve ambiguities in sound localisation.
Having said that, some people don't seem to need it; others do.
For me personally, horizontal head tracking helps a lot. I haven't experienced vertical head tracking yet, but I think I don't need it so much: with my borrowed A16 I could look up and down and didn't get a sense that the virtual speakers were moving up and down with my head movements (except in extreme cases like looking straight upwards).
A funny thing is that many people who had the A16 demo thought that the horizontal and vertical head tracking were both fully functional, while in fact only horizontal was (and only horizontal look angles were measured).
With vertical head movements you don't change the strong ITD and ILD cues, so I suppose just seeing the loudspeakers overrides the absence of change in the more subtle HRTF filtering cues in that case.


----------



## johnn29

Interesting experience from A16 users.

I've read the research, and I've been more mindful of how I localise sound. I guess it depends on how you listen, but for movies and music I'm not trying to isolate locations. Most movies want you glued to the screen, so you're not looking around, and there's way too much going on to localize individual sounds without getting pulled out of the movie. Even with object-based audio, I find it's all too loud and confusing for the brain, so you get the bubble instead and might notice a few louder sounds. It's cool to be able to do it on the Atmos/DTS:X demo clips, but I never do it in movies. The only exception is when I've tweaked my system; then I pay more attention to make sure everything is balanced and right.

Now when I'm in the real world, like in nature, I notice I localize sounds easily with head movement and tilting. But that's very different for me.

The one area where I do find head tracking to work wonders is with my Mobius/Waves headphones. Because it's a generic HRTF, with coarse customization options for your head circumference, the center channel is much more in your head. The head movement there fools you into thinking the in-head center is slightly in front of you, maybe 3 feet away. Which makes sense for a PC gaming headset. I'm excited to try out Apple's implementation of it.


----------



## Àedhàn Cassiel (Aug 23, 2020)

Hey guys... I'm already at my personal endgame with headphones, and I have the EQ settings I want to stick with forever (or until aging drastically changes my hearing). Meanwhile, I don't know the first thing about speakers or the speaker market. I don't own any speakers except for a passable bluetooth speaker to connect to my phone. I don't have any treated rooms or anything like that. Could anyone give me an idea what it would cost and how much trouble it would be for me to try to do the measurements for Impulcifer on my own? Would there be enough people like me to justify some kind of speaker loaner tour? Are people finding any noticeable difference between the different Sound Professionals microphones; is there good enough reason to go above the $20 model? Or would anyone happen to be near USA - SC? I'd gladly pay a few bucks to use someone else's setup. 

I'm also wondering whether this can be used to make sense of how and why peoples' preferences in headphone FR differ from the Harman Curve. I've dialed my midrange preference in very clearly: it begins rising gradually in the lower mids, it doesn't begin the steep rise seen on the Harman Curve until past 2kHz, and it looks like Harman with a few dB less peak from 3kHz onwards. Might measurements like these help clarify whether I have different preferences from people that like the Harman Curve, or need a different headphone FR to have the same FR delivered to my ear drum that fans of the standard Harman Curve are getting delivered to theirs? Might binaural measurements together with Harman Curve FR calibrations be used to quickly dial in any given person's most preferred signature?


----------



## sander99

Àedhàn Cassiel said:


> to have the same FR delivered to my ear drum that fans of the standard Harman Curve are getting delivered to theirs?


Subtle difference: Most natural would not be to get the same FR on your ear drums as on other people's ear drums, but the FR on your ear drums that your brain expects based on how sound coming from a distance outside your head will be filtered before it reaches your ear drums.


----------



## jaakkopasanen

Àedhàn Cassiel said:


> Hey guys... I'm already at my personal endgame with headphones, and I have the EQ settings I want to stick with forever (or until aging drastically changes my hearing). Meanwhile, I don't know the first thing about speakers or the speaker market. I don't own any speakers except for a passable bluetooth speaker to connect to my phone. I don't have any treated rooms or anything like that. Could anyone give me an idea what it would cost and how much trouble it would be for me to try to do the measurements for Impulcifer on my own? Would there be enough people like me to justify some kind of speaker loaner tour? Are people finding any noticeable difference between the different Sound Professionals microphones; is there good enough reason to go above the $20 model? Or would anyone happen to be near USA - SC? I'd gladly pay a few bucks to use someone else's setup.
> 
> I'm also wondering whether this can be used to make sense of how and why peoples' preferences in headphone FR differ from the Harman Curve. I've dialed my midrange preference in very clearly: it begins rising gradually in the lower mids, it doesn't begin the steep rise seen on the Harman Curve until past 2kHz, and it looks like Harman with a few dB less peak from 3kHz onwards. Might measurements like these help clarify whether I have different preferences from people that like the Harman Curve, or need a different headphone FR to have the same FR delivered to my ear drum that fans of the standard Harman Curve are getting delivered to theirs? Might binaural measurements together with Harman Curve FR calibrations be used to quickly dial in any given person's most preferred signature?


You can get convincing externalization with any half-decent speaker but if you want timbral accuracy, nothing can really replace good speakers in a good room. An affordable option would be to get a single JBL 305P MkII speaker which costs about 120€ in Europe.

The Sound Professionals SP-TFB-2 mics work well. A cheaper option is two Primo EM258 mono modules from FEL Communications, with the added benefit that these can be connected directly to two RODE VXLR+ adapters. Just make sure to buy mics that fit at the entrance of your ear canal; there are models on the market which are too large for this.

Room treatment isn't necessarily that important, since the speakers dominate the sound above ~300 Hz and Impulcifer can get the low frequencies under control with room correction and reverb management. You will need a measurement mic for this, but it's a much cheaper option than room treatment. Of course, many people listen to speakers without any room treatment or EQ and enjoy the music just fine.


----------



## johnn29

The used market is also a great place to pick up good speakers. If you check audiosciencereview.com you'll see some great speakers and their preference scores based on Harman research.

I never thought about the fact that some people have grown up with no stereo loudspeakers as a reference. It's going to be really interesting to get your thoughts on what that sounds like vs headphones.

Of interest to the thread: https://www.aes.org/tmpFiles/elib/20200825/20868.pdf



> Summary of Publication:
> 
> 
> > When spatial audio content is presented over headphones, the audio signal is typically filtered with binaural room impulse responses (BRIRs). An accurate virtual auditory space presentation can be achieved by flattening the headphones’ frequency response. However, when presenting stereo music over headphones, previous studies have shown that listeners prefer headphones with a frequency response that simulates loudspeakers in a listening room. It is as yet unclear if headphones that are calibrated in such a way will be preferred by listeners in the context of spatial audio content as well. This study investigates how listeners’ preferences for headphone frequency response may differ between stereo audio content and spatial audio content, which was rendered by convolving the same stereo content with in-situ-measured BRIRs of loudspeakers in a room.



Summary (from reddit)



> It was shown that an individually calibrated flat response was the preferred choice for spatial audio (binaural) content, but not for stereo content, for which the Harman target was rated significantly higher. This explicitly confirms that content-dependent HpEQ would be beneficial for devices designed to reproduce both spatial and non-spatial audio, such as head-tracked headphones, VR headsets and AR glasses.



Interestingly enough, I preferred using AutoEQ to get to flat for any headphones I used (especially IEMs) for Out Of Your Head use.


----------



## phoenixdogfan

I have a Smyth A16 Realiser and would like to use the binaural mics for another project: measuring my HRTF on my existing speakers and sending it to a company so they can generate room and crosstalk cancellation filters similar to those created by the $5000 BACCH system, only this software costs just $300.

The question is how I can do this. Do the Smyth mics need a phantom power source, and how would I create a mic calibration curve?

This is the software if anyone is interested.

https://www.homeaudiofidelity.com/english/home/


----------



## sander99

phoenixdogfan said:


> I have a Smyth A16 Realiser and would like to use the binaural mics for another project which involves measuring my hrtf on my existing speakers and sending it to a company so they can generate room and crosstalk cancellation filters similar to those created by the $5000 Baach system, only this software costs only $300.
> 
> The question is how I can do this. Do the Smyth mics need a phantom power source, and how would I create a mic calibration curve?
> 
> ...


About using the A16 mics for other measurements with in-ear mics, for example with Impulcifer:
The A16 has a mode in which the signal of the mic inputs is played back as-is over the user B headphone outputs (analog RCA and digital outputs also).
This is how you activate it:
Main menu -> Apps -> Listen to microphones on HPB.

I have a cheap Behringer USB audio interface with which I could get the signal from the analog RCA outputs (of the borrowed A16 I had a while back) back into my PC. (Unfortunately it didn't have S/PDIF in, so I had to take the analog detour, but I don't think that has any practical consequences for these kinds of measurements.)


About the $300 software: interesting, but one piece of advice before you invest in it:
With the A16 you can simulate reduced or eliminated crosstalk with a trick. (And with Impulcifer it may be even easier to do this, only then you won't have head tracking.)

And it could very well be that you don't like the effect at all! It is not at all obvious that it is an objective improvement; some people will like it, others won't.
If you first simulate it with the A16 or Impulcifer, you'll know more before spending the $300.

Eliminating crosstalk altogether can be done with a manipulated PRIR measurement where you unplug the right mic while measuring the left speaker, and unplug the left mic while measuring the right speaker. (I haven't tried this myself yet, but people did it successfully with the A8.)
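To see the idea concretely: unplugging the opposite mic just produces silent contralateral impulse responses, so the same effect can be sketched in post-processing. This is a toy Python sketch with a made-up channel layout, not Impulcifer's or the A16's actual file format:

```python
import numpy as np

# Hypothetical interleaved BRIR layout: one column per (speaker, ear)
# pair -- FL->left, FL->right, FR->left, FR->right. Real file layouts
# differ; adjust the indices accordingly.
rng = np.random.default_rng(0)
brir = rng.standard_normal((4800, 4)).astype(np.float32)

# "Unplugging" the opposite mic during measurement is equivalent to
# zeroing the contralateral impulse responses afterwards:
no_crosstalk = brir.copy()
no_crosstalk[:, 1] = 0.0  # FL as heard by the right ear
no_crosstalk[:, 2] = 0.0  # FR as heard by the left ear

print(float(np.abs(no_crosstalk[:, [1, 2]]).max()))  # → 0.0
```

The ipsilateral columns are untouched, so each speaker still reaches its own ear; only the cross-feed disappears.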

Just reducing crosstalk with the A16 is a bit more complicated. I can try to work out the details later; start reading here to get the idea of how to do it with the A16 (there is a small problem because the individual channels cannot be attenuated, let alone per user, which the "dual user mode" method I propose requires, but maybe I can think of a way around that):
https://www.head-fi.org/threads/correcting-for-soundstage.865212/page-2#post-13906823


----------



## musicreo

Is there a solution for the plotting issue?
Only the pre folder is plotted. I still get the following error since the last Impulcifer update.

Running room correction...
Running headphone compensation...
Creating headphone equalization...
Creating frequency response target...
Opening binaural measurements...
Plotting BRIR graphs before processing...
Cropping impulse responses...
Equalizing...
 ** On entry to DLASCLS parameter number  4 had an illegal value
 ** On entry to DLASCLS parameter number  4 had an illegal value
Traceback (most recent call last):
  File "impulcifer.py", line 513, in <module>
    main(**create_cli())
  File "impulcifer.py", line 130, in main
    fr.smoothen_heavy_light()
  File "\Impulcifer\venv\lib\site-packages\autoeq\frequency_response.py", line 1240, in smoothen_heavy_light
    treble_iterations=1
  File "\Impulcifer\venv\lib\site-packages\autoeq\frequency_response.py", line 1197, in smoothen_fractional_octave
    treble_f_upper=treble_f_upper
  File "\Impulcifer\venv\lib\site-packages\autoeq\frequency_response.py", line 1155, in _smoothen_fractional_octave
    y_normal = savgol_filter(y_normal, self._window_size(window_size), 2)
  File "\Impulcifer\venv\lib\site-packages\scipy\signal\_savitzky_golay.py", line 337, in savgol_filter
    coeffs = savgol_coeffs(window_length, polyorder, deriv=deriv, delta=delta)
  File "\Impulcifer\venv\lib\site-packages\scipy\signal\_savitzky_golay.py", line 139, in savgol_coeffs
    coeffs, _, _, _ = lstsq(A, y)
  File "\Impulcifer\venv\lib\site-packages\scipy\linalg\basic.py", line 1218, in lstsq
    raise LinAlgError("SVD did not converge in Linear Least Squares")
numpy.linalg.LinAlgError: SVD did not converge in Linear Least Squares
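One diagnostic worth noting (a sketch, not a fix): the `lstsq` call inside `savgol_coeffs` fits polynomial basis vectors, so it does not depend on the audio data at all. If even a tiny, well-conditioned problem fails the same way, the numpy/scipy (BLAS/LAPACK) installation is the suspect rather than the measurement:

```python
import numpy as np
from scipy.linalg import lstsq
from scipy.signal import savgol_coeffs

# A trivially well-conditioned least-squares problem. If this raises
# "SVD did not converge", the BLAS/LAPACK build is broken.
A = np.vander(np.linspace(-1.0, 1.0, 9), 3)
b = np.ones(9)
x, residues, rank, sv = lstsq(A, b)
print(rank)  # full column rank expected: 3

# savgol_coeffs builds its own design matrix internally; polyorder 2
# matches the traceback, window size 9 is an arbitrary choice here.
coeffs = savgol_coeffs(9, 2)
print(coeffs.shape)  # (9,)
```

If this snippet runs cleanly, the next thing to inspect would be the window size actually being passed to `savgol_filter` in that code path.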


----------



## jaakkopasanen

musicreo said:


> Is there a solution for the plotting issue?
> Only the pre folder is plotted.  I still get following error since the last impulcifer update.
> 
> Running room correction...
> ...


I'm afraid I haven't managed to fix it because I can't reproduce the problem on my computer. I have one idea that would probably fix it but I'm not too keen on taking that route.


----------



## kalstein

jaakkopasanen said:


> I'm afraid I haven't managed to fix it because I can't reproduce the problem on my computer. I have one idea that would probably fix it but I'm not too keen on taking that route.


I have the same error. Maybe you could test it on a fresh OS, for example in VirtualBox.


----------



## johnn29

Been using my AfterShokz Aeropex for a lot of spoken word stuff on my phone. Decided to throw one of my BRIRs from Impulcifer on for watching talking-head YouTube videos at my desk. Pretty amazing how realistic the BRIR is when your ear is completely unobstructed and you forget you're wearing headphones. Shame bone conduction isn't Hi-Fi yet.


----------



## Crema

I just finished working in the worst environment. Here's my data.

https://gall.dcinside.com/mgallery/board/view?id=speakers&no=121168

The microphone I used had been cut and repaired, but the balance of the left microphone was off, resulting in a roll-off in the high frequency range.

So I corrected the response of the left microphone with a convolver and gave the right microphone a matching delay for the measurement.

However, the measurement results showed ringing, and the overall quality was slightly reduced.

I was frustrated. The measurement was a real struggle, and I wasn't actually expecting much.

But the sound is good. It's worth listening to. Excellent calibration technology! Thank you for making this great Impulcifer!

I'm now listening happily to the HRIR made with the HD800S and B2031A. The sound is amazing!

Dummy head Comparison

Here's my dummy head response. There are three versions: original, speakers, and Impulcifer headphones. The sound quality dropped a little from removing the hum noise, but I think a comparison is still possible. In fact, there's not much difference between the headphones and the speakers. Haha!

There are various songs, from pop to classical to game music.

Originally I was going to make a YouTube video, but copyright became a problem, so I'm sharing the originals. If you're interested, download them and compare.

Once again, thank you for making this wonderful work. Cheers!


----------



## lowdown

Crema said:


> Once again, thank you for making this wonderful work. Cheers!



Yes, from me as well, once again and every day, thank you Jaakko!

My only "regret" about Impulcifer is no one else can hear what I'm hearing as they could with actual speakers.  Yes, each can do their own measurements, but are they hearing what I am?  It is so good I think if $10,000 speakers sounded like this they'd be considered an amazing bargain.


----------



## jaakkopasanen

I'm super glad to hear more people are finding Impulcifer and liking it!


----------



## amanieux (Oct 8, 2020)

This project seems really interesting, thanks for sharing your work. Is this what Android apps like Waves Nx are doing? Do they use your open source code?


----------



## halcyon

Somebody with access to a 7.1 full-range speaker setup and an anechoic chamber would do the whole field and community a huge service by recording an HRIR using their own head (or even a B&K dummy head with stereo mics). This would make a great generic starting point for a 7.1 virtualization HRTF setup for HeSuVi, without all the added room reflections and artificial echo. The acoustically driest starting point, so to speak.




When I was studying (back in the day), my engineer friends had routine access to a proper anechoic chamber along with multi-channel speaker setups. Unfortunately, no more...


----------



## musicreo

halcyon said:


> This would make a great generic starting point for a 7.1 Virtualization HRTF setup for HeSuVi, without all the added room reflections and artificial echo. The acoustically driest starting point, so to speak.



There are databases that provide such measurements. HeSuVi already uses some of them.
But the whole idea of Impulcifer is to use your own measurements, as measurements done by someone else will not give you proper speaker virtualization.


----------



## halcyon (Oct 15, 2020)

musicreo said:


> There are databases that provide such measurements. HeSuVi already uses some of them.
> But the whole idea of Impulcifer is to use your own measurements, as measurements done by someone else will not give you proper speaker virtualization.



Could you point me to a 7.1 loudspeaker Impulcifer measurement file done with stereo ear canal microphones in an anechoic chamber? Real person or dummy head, doesn't matter.
I have checked the latest HeSuVi 2.0.0.1 installer and I can only find pre-baked 3D VSS impulse HRIRs and some room-based measurements, nothing anechoic. I don't have access to an anechoic chamber myself, and would like to compare against dead-room HRIRs.
Thanks!


----------



## musicreo (Oct 15, 2020)

These are not done with Impulcifer, but in the end the principle is always the same.

The CIPIC files are included in HESUVI  https://www.ece.ucdavis.edu/cipic/spatial-sound/hrtf-data/
I think this is also included: http://recherche.ircam.fr/equipes/salles/listen/download.html
Here are some dummy head recordings, that are not included: https://audiogroup.web.th-koeln.de/
Another database: https://www.york.ac.uk/sadie-project/database.html


----------



## johnn29

Absent any reflections, the center channel would actually be pretty hard to locate.

Been happily using Impulcifer for months now; it's risen to 50% of my viewing/listening. The house next door is a construction site and the cellar below me needs an industrial extraction fan to cope with the humidity from construction there too. A crap show for watching high dynamic range movies, noisy as hell.

Bose 700 ANC on, Impulcifer processing - silent bliss. Can't get over it.


----------



## halcyon (Oct 16, 2020)

johnn29 said:


> Absent of any reflections the center channel would actually be pretty hard to locate.


Interesting. In real life, with natural sound sources (without visual cues) this is fairly easy.

Then again, IRL the directly-behind direction can be somewhat confusing (without head movement). But it's still far more accurate than anything the virtual sound processing algos are able to achieve.

IMHO, almost all of the Virtual Surround Processing algos fail abysmally with behind the listener sound localization.

Then again, in my own experience even inside a completely dead full-size anechoic chamber, the cone of confusion is comparatively smaller IRL (compared to virtual surround).

Why are the virtualization algos not able to achieve the same level of sound source localization for behind the listener sources? Is it all down to (lack of) head movement?


----------



## musicreo

How is the mic calibration (left-mic-calibration.csv/right-mic-calibration.csv) processed into the final result? So far I don't hear a difference.


----------



## jaakkopasanen

musicreo said:


> How is the mic calibration (left-mic-calibration.csv/right-mic-calibration.csv) processed into the final result? So far I don't hear  difference?


Mic calibration is not supported. There's only calibration for a single room measurement mic. That's baked into the impulse responses by eq when doing room correction.


----------



## jaakkopasanen

halcyon said:


> Interesting. In real life, with natural sound sources (without visual cues) this is fairly easy.
> 
> Then again, IRL again, the right behind sound direction can be somewhat confusing (without head movement). But tons more accurate than any of the Virtual Sound Processing algos are able to achieve.
> 
> ...


Localization perception is very much an individual thing. I fail to get frontal localization with all virtual surround systems except when the BRIR is my own. I think the majority leans towards front localization being harder than rear localization, but I have no data to back this up. Virtual surround with headphones has other perceptual challenges as well: when you wear headphones you are aware of it, and even a perfectly matching BRIR can be confusing when you consciously know you're listening to headphones. Another problem is the potential mismatch between the room acoustics of the BRIR and the room you're listening in.


----------



## pfzar (Oct 21, 2020)

Localization: externalization, front/back and up/down localization.


----------



## outerspace (Oct 22, 2020)

halcyon said:


> IMHO, almost all of the Virtual Surround Processing algos fail abysmally with behind the listener sound localization.


Sit on a chair, close your eyes and ask someone to snap their fingers around you at 0.1-1 m distance. Try to point out where the sound comes from. You will realize you are not as precise as you think when your eyes are closed: you can confuse sounds from the front with sounds from the back and vice versa. You can see such an experiment here.


----------



## johnn29

pfzar thanks for posting those papers.

The brain uses multiple senses to get an idea of where a sound is coming from. In nature I've noticed that head tilting is important, but in small rooms it isn't. If you look at a dog trying to locate a sound, that's an extreme example of what we do.


----------



## yosoro

Traceback (most recent call last):
  File "recorder.py", line 260, in <module>
    play_and_record(**create_cli())
  File "recorder.py", line 206, in play_and_record
    min_channels=n_channels
  File "recorder.py", line 135, in get_devices
    devices = sd.query_devices()
  File "C:\Users\17810\Impulcifer\venv\lib\site-packages\sounddevice.py", line 565, in query_devices
    for i in range(_check(_lib.Pa_GetDeviceCount())))
  File "C:\Users\17810\Impulcifer\venv\lib\site-packages\sounddevice.py", line 565, in <genexpr>
    for i in range(_check(_lib.Pa_GetDeviceCount())))
  File "C:\Users\17810\Impulcifer\venv\lib\site-packages\sounddevice.py", line 578, in query_devices
    name = name_bytes.decode('utf-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc2 in position 6: invalid continuation byte


Occurs when running recorder


----------



## jaakkopasanen

yosoro said:


> Traceback (most recent call last):
> File "recorder.py", line 260, in <module>
> play_and_record(**create_cli())
> File "recorder.py", line 206, in play_and_record
> ...


Could you share the full command you used?


----------



## yosoro

python recorder.py --play="data/sweep-seg-FL,FR-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/headphones.wav"


----------



## jaakkopasanen

Odd. Could you please create an issue here so that we don't spam this thread with the trouble shooting: https://github.com/jaakkopasanen/Impulcifer/issues


----------



## reter

Guys, please, I need help!

I have an Asus Xonar U3 and I like its Dolby DH-2: it's spatial but without too much noticeable/annoying reverb. So I started trying to record its impulse responses to port it to HeSuVi and use it with another sound card/amp, but now I'm stuck on something weird:

I read the tutorial on the HeSuVi page. In chapter *III. Recording the Impulse Response*, *point 6*, it says to "make sure that you have a single stereo track in the end", but chapter II also says to "Select highest number of channels available in the recording channel configuration dropdown and delete to left only two tracks (stereo)". I did that, but when I try to record I get a *single* impulse response on every channel (in the image)






My question is: how should I record all the impulse responses into only two tracks to make them stereo and import them into HeSuVi? I tried to use 2 channels in the channel configuration of Audacity but it gives me an error, "error code -9997 invalid sample rate", so I can't record.

What am I doing wrong? Please, someone: I really want to import this surround profile, which is really different from the DH2 in the HeSuVi HRIR list.


----------



## Joe Bloggs

reter said:


> Guys pls i need help!
> 
> I've an Asus Xonar U3 and i like its Dolby DH-2, it's spacial but without too much noticeable/annoying reverb, so i started trying to record it's impulse responses to port it to Hesuvi and using another sound card/amp with it but now i'm stuck with something weird:
> 
> ...


It doesn't look like you are doing a loopback recording properly.  You should have a cable going from the headphone out on your soundcard to the line in on your soundcard and recording from the line in as you feed the impulses through the different channels.  You should not have any option other than stereo for your line in, really.


----------



## jaakkopasanen

The room measurement guide is here! It was a long time in the making but I guess better late than never. Please do give feedback if you have any ideas for improvement or if something is confusing.

https://github.com/jaakkopasanen/Impulcifer/wiki/Measurements#room-correction


----------



## jaakkopasanen

I finally, finally found the missing link in the measurement chain! The 3.5 mm female stereo to 2x male mono Y-splitter cable: https://www.amazon.com/gp/product/B0785VKZW4. It's required to connect binaural mics with a 3.5 mm stereo male connector to the RODE VXLR+ adapters, but for some reason this type of cable is extremely rare. I bought it from Amazon just to test it out for you and can confirm that it works as intended. This particular cable even has shielding, so there won't be any concerns about the low mic signal picking up electrical interference. It's also available in other Amazon stores, not just the North American .com one.


----------



## outerspace (Nov 21, 2020)

How does your algorithm calculate the perceived frequency response? Does it just measure the sum of the direct sound and all reflections? Research shows that calculating the perceived FR can be tricky.


----------



## jaakkopasanen

outerspace said:


> How does your algorithm calculate the perceived frequency response? Is it just a sum of the direct sound and all reflections? Research shows that calculating the perceived FR can be tricky.


There is no perceived frequency response in Impulcifer, only what the binaural microphones capture. That is of course affected by the listener's own ears, so the end result is very neutral. Research shows people prefer neutral.

Room correction could benefit from a perceived frequency response, since that might make it possible to fix the speakers and whatnot, but at the moment I have no tools for estimating such a thing.


----------



## Benik3

Hello guys.

Pretty interesting project. I just tried to play with it.
I have new Corsair HS70 Pro Wireless headphones and I built my own in-ear microphones with WM-61A capsules.
Can I use the resulting hrir.wav simply in EQ APO as convolution with an impulse response? I tried it and I always get a result which is very left-sided (and each side even sounds different). I tried --no_room_correction --no_equalization --no_headphone_compensation but the result is almost the same.

Thanks!


----------



## jaakkopasanen

Benik3 said:


> Hello guys.
> 
> Pretty interesting project. I just tried to play with it.
> I have new Corsair HS70 Pro Wireless headphones and I build my own In-ear microphones with WM-61A.
> ...


Welcome aboard!

You'd better install HeSuVi and use hesuvi.wav. There is a way to use the hrir.wav with Equalizer APO without HeSuVi but I have never tried that, so I cannot tell you which additional steps need to be taken.


----------



## musicreo (Nov 30, 2020)

For Equalizer APO I use the following config for BRIRs that use the common channel configuration. For the HeSuVi files you have to change it, as HeSuVi uses: L-l L-r LS-l LS-r LB-l Lb-r C-l R-r R-l RS-r RS-l RB-r RB-l C-r.

#Common preamp    
Preamp: -16 dB    
# L=Left; R=Right; C=Center; SUB=LFE; SL=Left Surround; SR=Right Surround; RL=Rear Left; RR=Rear Right
#Create virtual speaker channels       
Copy: L0=L R1=L  L1=R  R0=R C1=C C2=C SUB0=SUB SUB1=SUB SL0=SL SR1=SL SL1=SR SR0=SR RL0=RL RR1=RL RL1=RR RR0=RR     
#Mute Input Channels       
Copy: L=0 R=0 C=0 SUB=0 RL=0 RR=0 SL=0 SR=0    
#virtual channels that are filtered         
Channel: L0 R1 L1 R0   SL0 SR1 RL0 RR1 C1 C2 SUB0 SUB1 SL0 SR1  SL1 SR0 RL0 RR1 RL1 RR0    
#Folder of Convolution files complete folder name  or if files in config "\...."    
Convolution: convolution\BRIRs\CR7_KU_ROTM_(L_R_C_LFE_LS_RS_LB_RB).wav        
# Copy the virtual speaker channels to Left & Right Headphone Channel    
Copy: L=L0+L1+C1+SL0+SL1+RL0+RL1+SUB0   R=R1+R0+C2+SR1+SR0+RR1+RR0+SUB1        
Channel: L R    
Convolution:  convolution\BRIRs\KH_Filter\KU100_CDFC_minphase.wav    
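The routing above can be sketched as a toy model in Python (stereo input only, made-up dirac-style BRIRs; the real Equalizer APO engine and channel names differ): each input channel is convolved with one BRIR per ear and the results are summed into the two headphone channels.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48000
n = fs // 10  # 0.1 s of input signal
rng = np.random.default_rng(1)
inputs = {"L": rng.standard_normal(n), "R": rng.standard_normal(n)}

def dirac(delay, gain=1.0, length=256):
    # A single-impulse stand-in for a measured BRIR
    ir = np.zeros(length)
    ir[delay] = gain
    return ir

# (left-ear IR, right-ear IR) per speaker: ipsilateral ear direct,
# contralateral ear attenuated and delayed by 8 samples
brirs = {"L": (dirac(0), dirac(8, 0.5)),
         "R": (dirac(8, 0.5), dirac(0))}

# Convolve every speaker feed with its per-ear BRIR, then sum per ear
left = sum(fftconvolve(x, brirs[ch][0]) for ch, x in inputs.items())
right = sum(fftconvolve(x, brirs[ch][1]) for ch, x in inputs.items())
print(left.shape)  # (5055,) = n + 256 - 1
```

The Copy/Channel/Convolution lines in the config do the same three steps: fan out to virtual channels, convolve each, and mix down to L and R.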


@Benik3 If you build your own mics I would suggest the Primo EM258, which is the better capsule.


----------



## Benik3 (Nov 30, 2020)

Thanks for reply guys!
I will test it 
I compared the DIY microphones with Omnitronic MM-2USB and it looks pretty OK, but thanks for the tip on the EM258!





EDIT: Both options work, but the sound in HeSuVi is different from just APO (suggested by musicreo). In HeSuVi it's not so harsh.
Anyway, I found that I had swapped the back speakers.


----------



## reter

jaakkopasanen said:


> Room measurement guide is here! It was a long time in the making but I guess better later than never. Please do give feedback if you have any ideas about improvement or something is confusing.
> 
> https://github.com/jaakkopasanen/Impulcifer/wiki/Measurements#room-correction



Is Impulcifer good at removing reverb? I tried many HRIRs and overall they have a lot of reverb.

I'm trying to achieve a good spatial HRIR like the Dolby DH2 in HeSuVi but with almost no reverb. I'm new to this kind of stuff, so I need a hand to understand.


----------



## jaakkopasanen

reter said:


> Is impulcifer good to remove the reverb? I tried many hrir and overall they are with much reverb
> 
> I'm trying to achieve good spacial hrir like the dolby home teather dh2 in hesuvi but with almost no reverb, but i'm new to these kind of stuff so i need a hand to understand


Impulcifer has tools to shorten the reverberant tail, but not for existing BRIRs. You should do your own measurements with binaural mics; the results will be so much better.


----------



## Benik3

I tried to play with it. The sound is really wide, but my problem is that it's too wide.
When I tried to play e.g. Mass Effect Andromeda with HeSuVi enabled, I almost wasn't able to identify where the sound came from. It was like being in some big space with the sound coming from a long distance...

Is the room measurement there to remove the effect of the room where you measure, or on the other hand to make the sound similar to the room where you are?
Also, what does the equalization do (the switch --no_equalization)?

Thanks!


----------



## halcyon

Benik3 said:


> I tried to play with it. The sound is really so wide, but I have problem, that it's too wide
> When I tried to play e.g. Mass Effect Andromeda with enabled hesuvi, I almost wasn't able to identify, from where the sound come. It was like when I'm on *some big place and sound came from long distance*...
> 
> Does the room measurement is there to remove effect of the room where you measure or on the other hand to make the sound similar to room where you are?
> Thanks!



This is exactly my issue with 99% of algos like this. I don't need an artificial recreation of a closed space (of any size), because most of my source material is NOT supposed to sound like I'm in an enclosed space. For open-space spatial-direction accuracy most algos fail (esp. without head-tracking, which enables you to move your head and pinpoint the direction more accurately).

If somebody solves this - and no, so far it's not Dolby (Atmos) or Creative (Super X-fi) or DTS or any of the other players - they will be successful and hopefully rich :-D


----------



## Benik3

I now have the new Corsair HS70 Pro and I still don't know if I should keep them because of this (without the effect the stereo is too "separated"). Otherwise the sound is pretty good (especially for gaming headphones).
Before I had the Sound Blaster Tactic3D Sigma, which simply has a slider in its settings for the "3D sound effect". It was perfect; I had it at 15%, so it's not clean stereo but it's also not like this.

I also tried, of course, Dolby Atmos and DTS Headphone:X and they also sound weird. Mainly the DTS was really bad.


----------



## musicreo

halcyon said:


> This is exactly my issue with 99% of algos like this.



Impulcifer is not an algorithm that simulates speakers. Impulcifer is a tool to measure and deconvolve your own BRIR (Smyth calls this a personalized room impulse response, PRIR).



Benik3 said:


> I tried to play with it. The sound is really so wide, but I have problem, that it's too wide


 
From my experience with Impulcifer I can tell that for me the two most critical parts are the mic placement and the SNR of the measurement. The mics should be placed as close as possible to the ear canal; this improves speaker localization and headphone compensation for me. Measurements with low volume and a lot of noise sound more distant to me and have more reverb. After trying different mountings I now use the construction shown in the image below for three of my Primo pairs. It is working very well and I can even place the speakers in the room without taking the mics out.









Benik3 said:


> I tried --no_room_correction --no_equalization --no_headphone_compensation



Headphone compensation is very important for binaural audio.

Strange, I don't see "--no_equalization" in the readme?


----------



## jaakkopasanen

Benik3 said:


> I tried to play with it. The sound is really so wide, but I have problem, that it's too wide
> When I tried to play e.g. Mass Effect Andromeda with enabled hesuvi, I almost wasn't able to identify, from where the sound come. It was like when I'm on some big place and sound came from long distance...
> 
> Is the room measurement there to remove the effect of the room where you measure, or on the other hand to make the sound similar to the room where you are?
> ...


What kind of speaker setup did you use for the measurements? Regular stereo speakers perhaps? If so, do the speakers themselves sound too wide to you? You could try narrowing the stereo speaker triangle if you're not happy with the current results. When you run the speaker test in HeSuVi, can you accurately locate the virtual speakers, and are they in their expected locations?

The sense of distance is affected by a couple of things. You could take another measurement while sitting closer to the speakers. You could also try playing with the decay time parameter; shorter decay times will usually bring the sound closer to you.

Room correction is there to fix problems in the room's acoustics. No room is perfect, and the sound of the room is captured by the measurement. You can measure the room response with a calibrated room measurement microphone, and Impulcifer will then create a correction filter which it bakes into the BRIR output.

--no_equalization disables equalization. The equalization option is not the headphone compensation but an additional EQ curve which you can bake into the BRIR. You need to have eq.csv in the folder to use this. --no_equalization is just a shortcut for disabling it in case you want to compare with and without. The main use case for the additional equalization curve is transferring the results to headphones other than the ones used for the measurements, for example IEMs.


----------



## Benik3

@musicreo I have the microphone pretty deep in my ear canal, maybe too deep?








@jaakkopasanen I have now tried a mono speaker setup. The sound seems OK to me in stereo. Yes, I was able to locate the sound pretty well in the HeSuVi test. I will try to do the room correction and another measurement, and eventually play with the decay time.

Can eq.csv also be used as a microphone correction curve?


----------



## jaakkopasanen

Benik3 said:


> @musicreo  I have the microphone pretty deep in my ear canal, maybe too much?
> 
> 
> 
> ...


The binaural microphones get calibrated when you do headphone compensation. Using eq.csv would double the correction. Don't do it.


----------



## Benik3

I mean calibration of the frequency response, because when you look at my previous screenshot where I compare my DIY mic with the Omnitronic (which should be calibrated), there is a difference in the high tones.


----------



## castleofargh

Benik3 said:


> I mean calibration of the frequency response, because when you look on my previous screen where I compare my DIY mic with Omnitronic (which should be calibrated) there is difference in high tones.


my intuitive guess:
You'll use the binaural mics to capture the FR of both the headphone and the room (ideally while keeping the mics in the same place in your ears). The app will work with the delta between the two measurements to decide what must be sent to the headphone. So the actual FR of the mic itself is pretty much irrelevant (up to a point, I imagine). If the mic deviated somewhere by +3 dB, those +3 dB would come into both measurements (room and headphones) and would ultimately be cancelled out when using the delta between those measurements.


----------



## Benik3

True, but it also has a function to do FR compensation of the headphones. On the other hand, I can always do this with REW and just add the correction to EQ APO.


----------



## jaakkopasanen

castleofargh said:


> my intuitive guess:
> You'll use the binaural mics to capture the FR of both the headphone and then the room(ideally while keeping the mics in the same place in your ears). The app will work with the delta between the 2 measurements to decide what must be sent to the headphone. So the actual FR of the mic itself is pretty much irrelevant(up to a point I imagine). If you compensated somewhere by +3dB, those +3dB would come in both measurements(room and headphones) and would ultimately also be cancelled out when using the delta between those measures.





Benik3 said:


> True, but it has also function to make FR compensation of the headphones. But on the other hand I can do this always with REW and just add the correction to APO EQ.


@castleofargh explained it well. However, it's important to understand that the measurement for room correction shouldn't be done with the binaural mics but with a calibrated measurement mic like the MiniDSP UMIK-1. room-mic-calibration.txt (or .csv) can be used to feed in the room measurement mic's calibration data. If you use the binaural mics for the room correction measurement, the inverse of the mics' frequency response will be added to the BRIR output. Combined with headphone compensation, this would be a second binaural mic FR compensation, so it must be avoided.

So there are three different measurements:
1. Headphones, using binaural mics in ears and playing test signal with headphones
2. Binaural room impulse response, using binaural mics in ears and playing test signal with speakers
3. Room acoustics, using calibrated measurement microphone and playing test signal with speakers

When castleofargh said "room" he was talking about number 2, not 3.
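castleofargh's cancellation argument for measurements 1 and 2 can be sketched numerically. The response curves below are invented toy data, not real measurements; the point is only that a mic coloration common to both measurements drops out of the delta:

```python
import numpy as np

# Toy magnitude responses in dB over a frequency grid (invented for illustration).
freqs = np.linspace(20, 20000, 512)
true_room = 3.0 * np.sin(freqs / 2000.0)        # what an ideal mic would capture
true_headphone = 1.5 * np.cos(freqs / 3000.0)   # ditto for the headphone

# The binaural mic adds its own (unknown) coloration to BOTH measurements.
mic = -6.0 * np.exp(-freqs / 5000.0)            # e.g. a bass-heavy tilt
measured_room = true_room + mic
measured_headphone = true_headphone + mic

# Headphone compensation effectively works on the delta, so the mic term
# cancels exactly; its absolute FR never enters the result.
delta = measured_room - measured_headphone
assert np.allclose(delta, true_room - true_headphone)
```

This is also why measurement 3 is different: a separate room correction mic contributes its response only once, so it has to be calibrated (or flat) instead of cancelling out.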


----------



## musicreo

Benik3 said:


> @musicreo  I have the microphone pretty deep in my ear canal, maybe too much?



That looks very good. What are you using to mount the capsule?
For me the Primo capsule is already too big, so it only fits tilted into the beginning of my ear canal. But the WM-61A is even bigger (6 mm x 3.4 mm compared to the Primo's 5.8 mm x 2 mm). I always had the problem of finding a proper position for the mics that doesn't change during measurements, but with the wire construction it works very well for me.

The WM-61A has an SNR of 62 dB while the Primo has 74 dB. I also read that the WM-61A has audible noise in recordings. I don't know if this is a problem for Impulcifer?


----------



## musicreo

jaakkopasanen said:


> If you use binaural mics for room correction measurement, then the inverse of the mics' frequency response will be added to the BRIR output. With headphone compensation, this would be the second binaural mic FR compensation so it must be avoided.



Does it really make a big difference? I limit the room correction to 750 Hz, and the binaural mics have more or less the same frequency response as the measurement mic (the red curve is the Behringer EMC999) up to a few kHz.


----------



## jaakkopasanen

musicreo said:


> Does it really make a big difference?   I limit the room correction to 750Hz and the binaural mics have more or less the same frequency response (the red curve is the Behringer EMC999) as the measurement mic up to some kHz.


It doesn't if the mics have a flat frequency response. The Sound Professionals SP-TFB-2, for example, have bass roll-off: https://imgur.com/a/9YzJtwx


----------



## jaakkopasanen

And of course it's possible to use the binaural mics for room correction measurements if you have the calibration data available for them.


----------



## Benik3

musicreo said:


> That looks very good.What are you using for mounting the capsule?
> For me the Primo capsule is already to big so it only fits tilted  into the beginning of my ear canal. But the WM-61A is even bigger (5.8mm,2mm compared to 6mm, 3.4mm).  I always had the problem to find a proper position of the mics  that does not change during measurements but with the wire construction it works very well for me.
> 
> The  WM-61A have a SNR of 62db while the Primo have 74db.  I also read that the WM-61A have audible noise in records.  I don't know if this is a problem for Impulcifer?



I use rubber "seals" from in-ear headphones.


----------



## Dr Strangelove (Dec 5, 2020)

Hey there,

can you use Impulcifer to match two different headphones, as well?
Can it match both frequency and (linear) phase response from A to B? And will it include crossfeed between the ears (LL+LR / RR+RL)?

Would welcome your response. 

Regards
Strangelove


----------



## jaakkopasanen

Dr Strangelove said:


> Hey there,
> 
> can you use Impulcifer to match two different headphones, as well?
> Can it match both frequency and (linear) phase response from A to B? And will it include crossfeed between the ears (LL+LR / RR+RL)?
> ...


Impulcifer cannot do this. The best option would be to use AutoEq for the task. The phase responses with AutoEq are minimum phase, not linear phase, because headphones are minimum phase devices and therefore adjustments to the frequency response should be made with minimum phase filters. Using minimum phase filters simultaneously corrects the headphone's phase response, because in minimum phase systems the frequency response and the phase response are coupled. The same happens when an acoustic engineer changes a headphone's tuning by acoustical or mechanical means.
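That coupling can be demonstrated with the standard homomorphic (real-cepstrum) construction: given only a magnitude response, the minimum-phase spectrum is fully determined. This is a generic numpy sketch of the textbook method, not AutoEq's actual implementation:

```python
import numpy as np

def minimum_phase_from_magnitude(mag):
    """Reconstruct the minimum-phase spectrum for a given magnitude response
    using the real cepstrum (homomorphic method). `mag` is a full, symmetric,
    strictly positive FFT magnitude of even length."""
    n = len(mag)
    cep = np.fft.ifft(np.log(mag)).real       # real cepstrum
    folded = np.zeros(n)
    folded[0] = cep[0]
    folded[1:n // 2] = 2 * cep[1:n // 2]      # fold anticausal part forward
    folded[n // 2] = cep[n // 2]
    return np.exp(np.fft.fft(folded))         # minimum-phase spectrum

# Any target magnitude (an EQ curve, say) determines its phase uniquely.
n = 1024
mag = 1.0 + 0.5 * np.cos(2 * np.pi * np.arange(n) / n)
h_min = minimum_phase_from_magnitude(mag)

# The magnitude is reproduced exactly; the phase is whatever minimum
# phase dictates -- the two cannot be chosen independently.
assert np.allclose(np.abs(h_min), mag)
```

In other words, a minimum phase EQ that fixes the magnitude deviation of a headphone automatically fixes the corresponding phase deviation as well.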


----------



## Dr Strangelove

Thanks for the feedback! 
I will try to feed my data to AutoEQ and see what it spits out.

Regards,
Strangelove


----------



## yosoro

I encountered a problem: the position of the sound is not accurate.

Because my hearing differs between my left and right ears, I have to adjust the delay when listening to speakers or headphones to enjoy stereo.

But after using Impulcifer, the low frequencies shift to the left and the mid to high frequencies shift to the right, and even if the left and right delay is adjusted, the stereo cannot be made normal.


----------



## musicreo

I've never heard of such a strange hearing problem, but you could measure the speakers normally and add the delay to the virtual speaker later in EQ APO.


----------



## yosoro

musicreo said:


> Never heard about such a strange hearing problem but you could measure the speakers normaly and  add the delay later  in eq-APO to the virtual speaker.


I tried a normal measurement and added the delay; the result is that the low frequencies shift to the left.

I also tried putting the speaker's left channel at -10 dB, but the result is also that the low frequencies shift to the left.


----------



## castleofargh

yosoro said:


> I encountered a problem
> The position of the sound is not accurate
> 
> Because my ears are inconsistent left and right, I have to adjust the delay to listen to the speakers or headphones to enjoy the stereo.
> ...


You need to explain things a little more (what are you trying to achieve?). I can't imagine that applying the same delay on speakers and headphones would give a similar experience. On speakers, the one with an added delay will still reach both ears, so the brain still gets the same interaural delay it gets from any other sound source; it's just that one speaker comes in "late".
On headphones, it would clearly delay all sounds in one ear. Those are two pretty different situations with different consequences for something like Impulcifer.
We must know precisely what you need to even be able to consider whether it can be properly done here.


----------



## yosoro

castleofargh said:


> You need to explain things a little more(what you're trying to achieve?). I can't imagine that applying the same delay on speakers and headphones would give a similar experience. On speakers, the one with an added delay will still reach both ears. So the brain will still get the same interaural delay it's getting with any other sound source. It's just that one speaker comes in "late".
> While on headphones, it would clearly delay all sounds of one ear. those are 2 pretty different situations with different consequences for something like Impulcifier.
> We must know precisely what you need, to even be able to consider if that can be properly done here.


The stereo sound my ears hear is shifted to the left (you can imagine it as if there were only left mono), so I add a delay to the left channel to bring the stereo image to the middle.

This method is very effective when Impulcifer is not used. Adding the delay after Impulcifer results in only the mids and highs being in the middle, with the bass staying on the left.


----------



## sander99 (Dec 8, 2020)

yosoro said:


> Adding a delay after impulcifer


Do you mean a delay of one channel of the already binauralised signal? That you should never do as it would totally destroy the binauralisation.
If you want to emulate the situation in which one real speaker is delayed then you should delay that channel in the input signal to the binauralisation process [Edit: with the binauralisation process I mean HeSuVi or whatever you use for that]: then one of the virtual speakers will play a delayed signal.
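The difference between the two orderings can be sketched with toy signals (the random "BRIRs" below are invented for illustration). A delay applied to the input channel commutes with the convolution, so it delays that virtual speaker's contribution at both ears equally, while delaying one ear of the finished mix shifts everything in that ear:

```python
import numpy as np

def delay(x, n):
    """Delay a signal by n samples (truncated to the original length)."""
    return np.concatenate([np.zeros(n), x])[:len(x)]

rng = np.random.default_rng(0)
n = 256
x_l, x_r = rng.standard_normal(n), rng.standard_normal(n)      # stereo input
# Hypothetical BRIRs: (speaker, ear) -> impulse response
h_ll, h_lr = rng.standard_normal(32), rng.standard_normal(32)  # left speaker
h_rl, h_rr = rng.standard_normal(32), rng.standard_normal(32)  # right speaker

def binauralize(xl, xr):
    left = np.convolve(xl, h_ll)[:n] + np.convolve(xr, h_rl)[:n]
    right = np.convolve(xl, h_lr)[:n] + np.convolve(xr, h_rr)[:n]
    return left, right

# Correct: delay the LEFT virtual speaker's input before binauralization.
good_l, good_r = binauralize(delay(x_l, 10), x_r)
# This delays the left speaker's contribution at BOTH ears equally,
# because a pure delay commutes with convolution...
expected_l = delay(np.convolve(x_l, h_ll)[:n], 10) + np.convolve(x_r, h_rl)[:n]
assert np.allclose(good_l, expected_l)

# ...whereas delaying one EAR of the finished binaural mix also shifts the
# right speaker's sound in that ear, destroying the interaural cues.
ref_l, _ = binauralize(x_l, x_r)
assert not np.allclose(delay(ref_l, 10), good_l)
```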


----------



## yosoro (Dec 8, 2020)

sander99 said:


> Do you mean a delay of one channel of the already binauralised signal? That you should never do as it would totally destroy the binauralisation.
> If you want to emulate the situation in which one real speaker is delayed then you should delay that channel in the input signal to the binauralisation process [Edit: with the binauralisation process I mean HeSuVi or whatever you use for that]: then one of the virtual speakers will play a delayed signal.


I hope you can use pictures to illustrate the correct usage.
Google Translate is not very accurate.


----------



## sander99

yosoro said:


> I hope you can use pictures to illustrate your correct usage
> Google Translate is not very accurate.


1. This is what I mean:




2. Unfortunately I don't know _how_ to do it. Sorry.


----------



## musicreo

sander99 said:


> 2. Unfortunately I don't know _how_ to do it. Sorry.


That is easy in EQ APO:
Channel: L
Delay: 50.5 ms

If you use HeSuVi, this must be loaded beforehand (before the convolution).


----------



## Benik3

So I played with Impulcifer again. I even did the room calibration, but the sound is still bad.
I totally lost the bass; the sound is like from the cheapest headphones :/ Also, it's VERY quiet. Even after adding a 20 dB preamp it's still much quieter than without HeSuVi.
Here is the frequency response in REW before and after applying HeSuVi:


----------



## johnn29

If you want to lift the bass you can add a manual EQ to boost the bass frequencies.

The quiet HRIRs are an issue I experience too. Equalizer APO seems to run out of headroom for all the EQing Impulcifer needs; I can't simply add a preamp boost in EAPO because you get clipping. My only solution is to use Voicemeeter and have it apply a 12 dB boost. Gain chaining, I guess.

It would be really nice to get a loud HRIR like the out-of-the-box Dolby Headphone ones that HeSuVi ships with.
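One way to reason about how much preamp is safe is to bound the worst-case gain of the convolution from the filter itself: at each frequency the channel magnitudes can add in phase, so the peak of the summed magnitude responses is the most the filter can ever amplify. This is a generic sketch (the function name and FFT size are my own, not anything Impulcifer or Equalizer APO provides):

```python
import numpy as np

def needed_headroom_db(hrirs, n_fft=65536):
    """Worst-case gain when several convolved channels are summed into one
    ear: the peak over frequency of the summed magnitude responses."""
    total = np.zeros(n_fft // 2 + 1)
    for h in hrirs:
        total += np.abs(np.fft.rfft(h, n_fft))
    return 20 * np.log10(total.max())

# Toy example: two unit impulses that sum coherently -> ~6 dB of gain.
hrirs = [np.array([1.0]), np.array([1.0])]
print(round(needed_headroom_db(hrirs), 1))  # 6.0
```

A preamp of minus this many dB can never clip; real program material rarely hits the worst case, so in practice a few dB less attenuation is often fine.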


----------



## musicreo (Dec 15, 2020)

@Benik3 Can you show a screenshot of the frequency response shown in the analysis panel of EQ APO? Did you use the Harman target for room correction? I use room correction with the Harman target and I get more bass than my headphones would provide without any pre-processing.

Is it possible to upload your measurement somewhere?
I'm wondering whether it would generally be a good idea to upload some measurements, to get a feeling for how different measurements are (more in terms of noise, balance problems and different measurement equipment) and maybe to find problems that occur during measurements.

For example here is one my latest measurements with one of my primo pairs :

Impulcifer one drive link

It has some balance problems when you analyse the wav files (always a higher amplitude on the right channel), but Impulcifer's correction compensates for that very well for me.


----------



## castleofargh

Benik3 said:


> So I played again with Impulcifier. I made even room calibration, but still the sound is bad.
> I totally lost bass, the sound is like from the cheapest headphones :/ Also it's VERY quiet. Even after adding 20dB preamp it's still much quieter the without hesuvi.
> Here is frequenccy response in REW before and after apply of Hesuvi


*If* these 50 Hz spikes are some kind of bleed-through from the electrical grid, I would guess that the SPL values shown in REW are not the right ones and that at least one of your measurements needs to be done at a higher SPL.
No idea if you have other issues.


----------



## Benik3

johnn29 said:


> If you want to lift the bass you can add a manal EQ to boost bass frequencies.


The sound is so bad that I wasn't even able to fix it with EQ. The whole soundstage is weird.



musicreo said:


> @Benik3 Can you show a screenshot of the frequency response shown in the analyse panel of EQ-APO? Did you use the Harman target for room correction?  I use room correction with the Harman target and  I have more bass then my headphones would provide without any pre processing.
> 
> Is it possible to upload your measurement somewhere?
> I'm wondering if this would be generally a good idea to upload some measurements to get a feeling how different measurements are (more in terms of noise balance problems and different measurement equickment) and maybe to find problems that occur during measurements?





Where do you mean to apply the Harman correction? I did the room calibration as described in the Impulcifer wiki with a stereo setup and ran the processing command from the first page on GitHub.
Here is my measurement: https://drive.google.com/file/d/1gg2zaKjE62Q8QMnc23-cnE1xblznHYJt/view?usp=sharing



castleofargh said:


> *if* this 50Hz rockets are a bleeding of sort from the electrical grid, I would guess that the SPL values showed in REW are not the right ones and that at least one of your measurements needs to be done at higher SPL.
> No idea if you have other issues.


The 50 Hz peak is really weird, but I don't hear anything like it, and when I measure my stereo it's not there:



BTW, when I do the processing with the --plot option I get this error:


```
(venv) C:\Users\benik\Desktop\Impulcifer>python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir" --plot
Creating impulse response estimator...
Running room correction...
 ** On entry to DLASCLS parameter number  4 had an illegal value
 ** On entry to DLASCLS parameter number  4 had an illegal value
Traceback (most recent call last):
  File "impulcifer.py", line 556, in <module>
    main(**create_cli())
  File "impulcifer.py", line 60, in main
    plot=plot
  File "C:\Users\benik\Desktop\Impulcifer\room_correction.py", line 118, in room_correction
    fr.smoothen_fractional_octave(window_size=1/3, treble_window_size=1/3)
  File "C:\Users\benik\Desktop\Impulcifer\venv\lib\site-packages\autoeq\frequency_response.py", line 1197, in smoothen_fractional_octave
    treble_f_upper=treble_f_upper
  File "C:\Users\benik\Desktop\Impulcifer\venv\lib\site-packages\autoeq\frequency_response.py", line 1155, in _smoothen_fractional_octave
    y_normal = savgol_filter(y_normal, self._window_size(window_size), 2)
  File "C:\Users\benik\Desktop\Impulcifer\venv\lib\site-packages\scipy\signal\_savitzky_golay.py", line 337, in savgol_filter
    coeffs = savgol_coeffs(window_length, polyorder, deriv=deriv, delta=delta)
  File "C:\Users\benik\Desktop\Impulcifer\venv\lib\site-packages\scipy\signal\_savitzky_golay.py", line 139, in savgol_coeffs
    coeffs, _, _, _ = lstsq(A, y)
  File "C:\Users\benik\Desktop\Impulcifer\venv\lib\site-packages\scipy\linalg\basic.py", line 1218, in lstsq
    raise LinAlgError("SVD did not converge in Linear Least Squares")
numpy.linalg.LinAlgError: SVD did not converge in Linear Least Squares
```


----------



## musicreo

The error during plotting is a bug. I have it too and can only plot the pre-processed files.

A first look into your upload shows me two things:
1) The measured response of the headphones shows some extreme bass at 50-150 Hz. What headphones are you using?
2) The wav files show a relatively low amplitude, and in some of the speaker measurements you have a noise problem on the left channel. Is it possible that there was some low pressure on the left mic? Did you perform the sweeps as loud as is still comfortable for your ears?


----------



## Benik3

I use the Corsair HS70 Pro. Maybe the problem is that Impulcifer always measures at max volume? (if I understood the description right)
The mics were fitted pretty well in my ears and I tried to make sure they didn't move during the measurement.


----------



## musicreo (Dec 15, 2020)

In the pre plots (python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --no_room_correction --dir_path="data/my_hrir" --plot) you can see that the left channel shows more noise. Audacity shows me that it is -45 dB on the left channel and -51.5 dB on the right channel. Only the SR,BR.wav does not show this noise on the left channel. Look at the waveform where I compare your FC-left measurement (the lower one) with my measurement (-56 dB). Actually, the noise on the left channel sounds like noise I had with some PUI capsules when touching them.

At what distance did you perform the measurement?


----------



## Benik3 (Dec 15, 2020)

Interesting. I'm about 1.5 m from the speakers. They are computer speakers.
Now I'm wondering: could the problem be that it's a 2.1 system, not full-range 2.0?
To the PC it's connected as stereo 2.0, though.


----------



## jaakkopasanen

A little investigation into the plot error has led me here: https://stackoverflow.com/a/64971496. That answer says the problem is caused by a BLAS subprogram which was somehow broken in a recent Windows update. Hopefully Windows will fix this soon, but until then you could try using conda instead of pip for managing the dependencies.

@musicreo @Benik3 are you on Windows, 32-bit or AMD CPU perhaps? I tested with files supplied by @Benik3 but could not reproduce. I'm running 64-bit Windows 10 on Intel CPU.


----------



## musicreo

jaakkopasanen said:


> @musicreo @Benik3 are you on Windows, 32-bit or AMD CPU perhaps? I tested with files supplied by @Benik3 but could not reproduce. I'm running 64-bit Windows 10 on Intel CPU.



Yes, I work on my laptop with Win 10 64-bit and an AMD Ryzen 5 3500U CPU.


----------



## Benik3

I use Win 10 64-bit on an Intel 6700K.


----------



## yosoro

A researcher who resigned from Harman used Nvidia's RTX graphics cards to create a gyroscope-enabled HRTF solution. He used the RT cores of the RTX graphics card to simulate sound reflections.
Below is his video:
https://www.bilibili.com/video/av843399591


----------



## outerspace

There are head-tracking solutions that use a webcam. A gyroscope may not be required.


----------



## dts350z

I'd like to see virtual surround with headphones extended to 7.1.4 (12 channels) or other immersive formats. Obviously this works well on the Realiser A16 (I have a 2U balanced unit) but the cost is beyond the audio budget of most.

So, how would one use Impulcifer to make the measurements to produce BRIR files for the height channels, and is there any available software to do the equivalent of HeSuVi/Equalizer APO for immersive channel configurations?

Given the BRIRs, I know I can do it using per-channel, per-ear IRs and guitar-cab convolver VSTs, but I'd prefer something a little more user-friendly.

I guess non-realtime conversion of 7.1.4 audio files to binaural would be OK, but realtime would be best.

FYI, my interest is because I've been experimenting with up/re-mixing stereo or 5.1 to 7.1.4 with music source separation tools, along with my own upmixer, and would like to see an immersive ecosystem for playback.


----------



## musicreo

dts350z said:


> So, how would one use impulsifer to make the measurements to produce BRIR files for the height channels



You could measure the height channels like every other channel, but you have to do the deconvolution in two steps. First do your 7.0 setup, then rename your height channels to FL, FR, SL, SR and run the deconvolution again. After that you can merge the two output files together.
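The merge step could be scripted along these lines. This is a minimal sketch assuming 16-bit PCM wav files of equal length and sample rate (Impulcifer's actual outputs are higher resolution, and the channel order HeSuVi expects still has to be arranged by hand):

```python
import os
import tempfile
import wave

import numpy as np

def read_wav(path):
    """Read a 16-bit PCM wav into an (n_frames, n_channels) int16 array."""
    with wave.open(path, "rb") as w:
        data = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
        return data.reshape(-1, w.getnchannels()), w.getframerate()

def merge_channels(path_a, path_b, out_path):
    """Append the channels of wav B after the channels of wav A.
    Both files must share sample rate and frame count."""
    a, rate_a = read_wav(path_a)
    b, rate_b = read_wav(path_b)
    assert rate_a == rate_b and len(a) == len(b)
    merged = np.hstack([a, b])             # frames stay aligned, channels widen
    with wave.open(out_path, "wb") as w:
        w.setnchannels(merged.shape[1])
        w.setsampwidth(2)                  # 16-bit
        w.setframerate(rate_a)
        w.writeframes(merged.tobytes())

# Demo: two synthetic 2-channel files become one 4-channel file.
d = tempfile.mkdtemp()
paths = [os.path.join(d, name) for name in ("main.wav", "height.wav", "merged.wav")]
for p in paths[:2]:
    with wave.open(p, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(48000)
        w.writeframes(np.zeros(200, dtype=np.int16).tobytes())  # 100 frames
merge_channels(paths[0], paths[1], paths[2])
print(read_wav(paths[2])[0].shape)  # (100, 4)
```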



dts350z said:


> and is there any available software to do the equivalent of HeSuVi/Equalizer APO for immersive channel configurations?
> 
> FYI my interest is because I've been experimenting with up/re-mixing of stereo or 5.1 to 7.1.4 with music source separation tools, along with my own upmixer, and would like to see an immersive eco system for playback.



I think there is one realtime solution: do the upmix in EQ APO directly before the convolution.


----------



## dts350z

I guess it's a little off-topic, but I had some success with convolution in ffmpeg. At least for now, I'm doing it in 3 steps.

Convolve an input 7.1 file with the 7.1 impulse responses for the left ear and output as mono (the first file is the input, the second file is the IRs):

ffmpeg.exe -y -i "ChannelIDs714-sides first-mapped.wav" -i "A16_7.1_ sides_first_left_ear.wav" -filter_complex "[0] [1] afir=dry=10:wet=10" -ac 1 Ch_id_left.wav

Now the right ear (same input file):

ffmpeg.exe -y -i "ChannelIDs714-sides first-mapped.wav" -i "A16_7.1_ sides_first_right_ear.wav" -filter_complex "[0] [1] afir=dry=10:wet=10" -ac 1 Ch_id_right.wav

Then join the left and right ear mono files into a stereo (binaural) file:

ffmpeg.exe -y -i Ch_id_left.wav -i Ch_id_right.wav -filter_complex "[0:a][1:a]join=inputs=2:channel_layout=stereo[a]" -map "[a]" -acodec pcm_s24le Ch_id_binaural.wav

Besides the details of the ffmpeg syntax and how the impulses should be normalized, I don't know what to do with the LFE channel. At the moment I just have an all-zeros IR, which zeroes out the LFE. Should I just copy the center channel IR in there?


----------



## musicreo (Jan 5, 2021)

What is the advantage of using ffmpeg compared to EQ-APO or ConvolverVST?


----------



## Joe Bloggs

dts350z said:


> FYI my interest is because I've been experimenting with up/re-mixing of stereo or 5.1 to 7.1.4 with music source separation tools, along with my own upmixer, and would like to see an immersive eco system for playback.



Off topic, but: what are you doing for the height channels in your upmixing, and since you mentioned realtime playback, is the upmixing realtime at all?


----------



## dts350z

Joe Bloggs said:


> Off topic, but:  what are you doing for the height channels in your upmixing and since you mentioned realtime playback, is the upmixing real time at all?



PM'd so we can stay on topic here.


----------



## dts350z

musicreo said:


> What is the advantage of  using ffmpeg compared to EQ-APO or ConvolverVST?



My goals are to 1) go beyond 7.1 and 2) have a tool chain that is more user-friendly (to install AND use) and cross-platform, e.g. drag and drop a multichannel surround file onto a converter that gives you binaural.

But realtime tools would also be good, especially as a plugin for popular players.

I have not looked at ConvolverVST, but VSTs are not super friendly for average users who don't have/use DAWs etc., especially if you need multiple instances and routing to do surround virtualization for headphones.


----------



## musicreo

dts350z said:


> My goals are to 1) go beyond 7.1 [...]
> But real time tools would also be good. Especially if it was a plugin for popular players.



You can work with virtual channels in EQ-APO, and this way you can work with more than 7.1.



dts350z said:


> I have not looked at convolverVST, but VSTs are not super friendly for average users that don't have/use DAWs etc. Especially if you need multiple instances and routing to do surround virtualization for headphones.



I mentioned ConvolverVST because I used it before EQ-APO for movie playback in MPC (there is a VST and a DirectShow filter), and I still use it in foobar2000 for converting up to 7.1 to binaural audio. I think it can work with more than 7.1 but I have never tested it. Compared to other convolution VSTs, it can handle multiple channels with just one instance.


----------



## reter (Jan 21, 2021)

Guys, I'm thinking of buying the Soncoz LA-QXD1. I saw that it's a good cheap balanced DAC, but I also saw on the internet that it's useless to buy a good DAC when you use a surround virtualization algorithm. Is that true?

I'm planning to use HeSuVi with that DAC, so...


----------



## jaakkopasanen

reter said:


> guys, i'm thinking to buy the Soncoz LA-QXD1 i saw that's a good cheap balanced dac but also i saw on internet that it's useless to buy good dac when you use the surround virtualization algorythm, is that true?
> 
> I'm willing to use hesuvi with that dac so...


It doesn't really matter, since practically all modern DACs are audibly transparent, except the ones which are engineered purposefully to color the sound. Even your motherboard probably has a really good DAC.


----------



## reter

jaakkopasanen said:


> Doesn't really matter since practically all modern DACs are audibly transparent, except the ones whuch are engineered purposefully to color the sound. Even your motherboard probably has really good DAC.



So you are telling me that I don't need a good DAC? Maybe I need an amp? I tried using the motherboard, but the audio is too quiet and I doubt there's quality audio hardware in that mobo (it's cheap). I've used the Xonar U3 USB sound card for years, but I want to change to something much more solid. If you're telling me there's zero difference in surround sound whether I use one DAC or another, though, I trust you.


----------



## jaakkopasanen

reter said:


> so you are telling me that i don't need a good dac? maybe i need an amp? i tried to use the motherboard but the audio is too low and i doubt that there's some audio quality hardware in that mobo (is cheap), i use the xonar u3 usb sound card from years but i want to change to something much more solid but if you're telling me that there's zero differences in surround sound if i use one dac or another, i trust you


You need an amp with more power if you can't push your headphones to a high enough volume.


----------



## musicreo

musicreo said:


> After trying different mountings I now use the construction like shown in the image below for 3 of my Primo pairs.  It is working very well and I can even place the speakers without taking out the mics in the room.



I wanted to share that in the meantime I have changed the mic placement. I removed the yellow foam and now place the capsule right in the ear canal opening, as shown in the image. Overall it improved the quality of the measurement. The second image shows how I did my last measurements. I'm really happy with the results.


----------



## johnn29

I came across a Reddit post that seems like a great idea in principle for translating one headphone's sound to another: you put your over-ear headphone on one ear and your IEM in the other ear and perform equal-loudness adjustments in EAPO.

Mr. Pasanen, would there be any way to implement that to translate over-ear headphone compensations to IEMs? I'm still struggling to get a good-sounding profile for my AirPods Pro, and this seems like it'd work because it's manual, perceptual EQ rather than EQ done on a dummy head!


----------



## castleofargh

johnn29 said:


> I came across this reddit post that seems a great idea in principle to translate headphones from one to another. You just put your over ear headphone into one ear and your IEM into the another ear and perform equal loudness adjustments in EAPO.
> 
> Mr Pasanen, would there be any way we can implement that to translate over ear headphone compensations to IEMs? I'm still struggling to get a good sounding profile for my Air Pods Pro and this seems like it'd work because it's manual EQ/perceptual rather than on a dummy head!


Usually what is done is that you set your own loudness contour as a baseline and EQ both the IEM and the headphones to it (using tones and working your way with the EQ until a sweep feels like it's all at the same loudness). Then the difference between those two EQs gives you the translation. It's long, annoying, and takes time to get good at (at least it took time for me back then).
Something like this
I did ask him for his really basic .exe back then and it made the process easier in some ways, as long as you gave yourself some headroom before starting (to avoid clipping).
One phone in each ear felt easier, but somehow also didn't work as well for me.
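As a rough sketch of that last step, the translation is just the per-band difference between the two equal-loudness EQs (the bands and gain values below are invented for illustration, not anyone's real measurements):

```python
import numpy as np

# Hypothetical per-band gains (dB) found by equal-loudness matching each
# device against your own loudness contour, on a shared frequency grid.
freqs = np.array([125, 250, 500, 1000, 2000, 4000, 8000])
eq_headphone = np.array([2.0, 1.0, 0.0, 0.0, -1.5, 3.0, 4.5])  # over-ear
eq_iem = np.array([0.5, 0.0, 0.0, 0.0, -3.0, 1.0, 6.0])        # IEM

# The difference between the two EQs is the correction that carries a
# profile made for the over-ear headphone onto the IEM.
translation = eq_headphone - eq_iem
print(dict(zip(freqs.tolist(), translation.tolist())))
```

In practice the hard part is the loudness matching itself; the subtraction at the end is the easy bit.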


----------



## musicreo

johnn29 said:


> I came across this reddit post that seems a great idea in principle to translate headphones from one to another. You just put your over ear headphone into one ear and your IEM into the another ear and perform equal loudness adjustments in EAPO.



I think most over-ear headphones are not flexible enough to be worn like this without changing the seal and sound signature.

I tried the method suggested by Griesinger but I found it very hard to get a feeling for equal loudness.


----------



## castleofargh

musicreo said:


> I think most over-ear headphones are not flexible enough to be worn like this without changing the seal and sound signature.
> 
> I tried the method suggested by Griesinger but I found it very hard to get a feeling for equal loudness.


We sadly struggle when going back and forth using single tones, and need a lot of training to get something useful. If only we could be like all the elite audiophiles who can listen to gear with different music at different volume levels, years apart, and still tell you in great detail, with absolute confidence, all about the differences they heard.
I use sarcasm to hide how jealous I am.


----------



## Joe Bloggs

johnn29 said:


> I came across this reddit post that seems a great idea in principle to translate headphones from one to another. You just put your over ear headphone into one ear and your IEM into the another ear and perform equal loudness adjustments in EAPO.
> 
> Mr Pasanen, would there be any way we can implement that to translate over ear headphone compensations to IEMs? I'm still struggling to get a good sounding profile for my Air Pods Pro and this seems like it'd work because it's manual EQ/perceptual rather than on a dummy head!


Also the phase response difference between the two headphones would mean you would perceive the sound as off to one side even when they are at equal loudness...


----------



## johnn29

castleofargh said:


> We sadly are struggling when going back and forth using single tones, and need a lot of training to get something useful. If only we could be like all the elite audiophiles who can listen to gears with different music at different volume levels and years apart, and still tell you in great detail, with absolute confidence, all about the differences they heard.
> I use sarcasm to hide how jealous I am.



Yep! If there's one thing Impulcifer has taught me, it's that your other senses "hear" too. Hearing is a multi-sensory experience - I've cited it before, but the HRIR I generate in my theater sounds IDENTICAL to the real thing in the same room. I took that same HRIR into a cafe (pre-covid!) and it sounded like ass. Dolby Virtual Headphone blew it away.

I tried David C's loudness matching and just couldn't really do it. I only gave it one shot though; might try it again.

I guess I'll stick to the synthetic HRIRs for my AirPods Pro and rely on my Bose 700s for my theater when I need ANC.


----------



## castleofargh

johnn29 said:


> Yep! If it's one thing Impulcifer has taught me it's that your other senses "hear". Hearing is a multi sensory experience - I've cited it before but my HRIR I generate in my theater sounds IDENTICAL to the real thing in the same room. I took that same HRIR into a cafe (pre covid!) and it sounded like ass. Dolby Virtual Headphone blew it away.
> 
> I tried David C's loudness matching and just couldn't really do it, I only gave it one shot though, might try it again.
> 
> I guess I'll stick to the synthetic HRIR's for my Air Pods Pros and rely on my Bose 700s for my theater when I need ANC


It also seems like we're not equal when it comes to the impact of certain cues. I'm a total slave to visual cues. I sort of knew it, but for a long time I assumed it was just standard human behavior prioritizing sight (after all, it gets the bigger area allocation in the brain). But over the last few years of discussing and reading everything I can find on psychoacoustics, it has become apparent that my eyes do even more of the "hearing" than for the average guy. I guess it started when people made fun of me at an audio meeting for my habit of closing my eyes to test gear.
The best thing I did for my headphone setup at my desk was to put some monitor speakers on the desk, elevated to ear level. They're like sound magnets: anything I feel in the general direction of the speakers gets stuck on them (in my mind ^_^). If I move my head everything collapses, but it comes back pretty fast. Even some very average crossfeed DSP works fairly well if I don't move much.

Head tracking on the A16 with the right impulses for all directions works great without needing the speakers as visual anchors, but they still help in my case and I still get influenced to some degree. I played with moving them around, even almost 45° to the side; after a while of not moving my head much, I feel like they're emitting the sound. It blows my conscious mind.
But then I've seen people who don't even care to use a head tracker, and people who only notice minimal improvement between a vague crossfeed effect and an HRIR. Some people even say that default headphone playback without any DSP makes them feel like they're front row at a concert. I always thought that was BS, but apparently not. Our imagination is the limit, and in this case I might be the one lacking, because the best I can get from typical stereo albums with an unprocessed headphone signal is lateralization, with the singer and drums usually sitting on my forehead unless they're panned in some way.

Oh and of course, like you I can get pretty different impressions depending on the room size and where I'm seated. It's annoying because it puts pretty hard limits on what I can hope to simulate. Some of the best headphone experiences of my life were in a completely dark room where I couldn't see anything. The "image" I get can be pretty badass after a while. Maybe some people experience that with their eyes open? I don't know.


----------



## lowdown

Not much activity in this thread, which makes me a bit sad.  I really wish many more people could hear what I'm hearing every day with Impulcifer.  Like a dream come true after decades of seeking great sound.  Can't thank Jaakko enough.


----------



## jaakkopasanen

The plotting bug is probably now fixed in the plot-fix branch. Switch from master with `git checkout plot-fix` and install dependencies with `python -m pip install -U -r requirements.txt`. This raises the Python requirement to 3.8 (up from 3.7).

Would be superb if someone who has encountered the bug could test if this fixes the problem.


----------



## johnn29

One of my old laptops had the issue - I'll try it on that tomorrow when I get some time. Saw you active on the Git - nice to see a few updates. It's mostly perfect as is.


----------



## musicreo

@jaakkopasanen

For me it fixes the bug.


----------



## jaakkopasanen

musicreo said:


> @jaakkopasanen
> 
> For me it fixes the bug.


Thanks for confirming. The fix is now in the master.


----------



## jaakkopasanen

And now there's also a fix for channel time alignment. I did a measurement in February with only one speaker and ended up with channel balance problems no matter how I played with the channel balance parameter. It turned out the FL and FR channels were time-aligned wrong. I now allow the algorithm to look for smaller peaks in the impulse responses, which avoids the situation where higher peaks sitting further back in the impulse response cause misalignment.

Hopefully this helps some of you avoid some channel balance problems.
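The idea can be sketched roughly like this (a simplified illustration, not Impulcifer's actual code; the 50% threshold is an arbitrary example value):

```python
import numpy as np

def impulse_onset(ir, threshold_ratio=0.5):
    """Index of the first sample whose magnitude reaches a fraction of
    the global peak, instead of the global peak itself. This avoids
    misalignment when a channel's highest peak sits further back in the
    response than its first arrival."""
    envelope = np.abs(ir)
    return int(np.argmax(envelope >= threshold_ratio * envelope.max()))

# Left channel: first arrival at sample 10, but a bigger reflection at 40
left = np.zeros(100)
left[10], left[40] = 0.6, 1.0
# Right channel: single clear peak at sample 12
right = np.zeros(100)
right[12] = 1.0

# Aligning on the absolute maxima would report a 28-sample offset;
# onset-based alignment recovers the true 2-sample difference.
delay = impulse_onset(right) - impulse_onset(left)
```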


----------



## musicreo

I think the fix for channel time alignment is already in the master? That would explain why the channel balance noticeably improved for me.


----------



## jaakkopasanen

musicreo said:


> I think the fix for channel time alignment is already in the master? That would explain why the channel balance noticeably improved for me.


It is. Glad to hear it works.


----------



## trigeger (Jun 13, 2021)

Hi. Requirements installation gives me an error:

```
(venv) d:\Programming\Python\Impulcifer\Impulcifer>pip install -U -r requirements.txt
Collecting git+https://github.com/jaakkopasanen/autoeq-pkg@1.2.5#autoeq (from -r requirements.txt (line 8))
  Cloning https://github.com/jaakkopasanen/autoeq-pkg (to revision 1.2.5) to c:\users\trig-ger\appdata\local\temp\pip-req-build-5wapxejp
  Running command git clone -q https://github.com/jaakkopasanen/autoeq-pkg 'C:\Users\trig-ger\AppData\Local\Temp\pip-req-build-5wapxejp'
  Running command git checkout -q 75920c0cfcb7abf52d90f661c27e5e955a5475ef
Collecting matplotlib~=3.3.3
  Using cached matplotlib-3.3.4-cp39-cp39-win_amd64.whl (8.5 MB)
Requirement already satisfied: numpy~=1.19.5 in d:\programming\python\impulcifer\impulcifer\venv\lib\site-packages (from -r requirements.txt (line 2)) (1.19.5)
Collecting scipy~=1.5.4
  Using cached scipy-1.5.4-cp39-cp39-win_amd64.whl (31.4 MB)
Collecting soundfile~=0.10.2
  Using cached SoundFile-0.10.3.post1-py2.py3.cp26.cp27.cp32.cp33.cp34.cp35.cp36.pp27.pp32.pp33-none-win_amd64.whl (689 kB)
Collecting sounddevice~=0.3.14
  Using cached sounddevice-0.3.15-py2.py3.cp26.cp27.cp32.cp33.cp34.cp35.cp36.cp37.cp38.cp39.pp27.pp32.pp33.pp34.pp35.pp36.pp37-none-win_amd64.whl (167 kB)
Collecting nnresample~=0.2.4
  Using cached nnresample-0.2.4.1-py3-none-any.whl (6.1 kB)
Requirement already satisfied: tabulate~=0.8.5 in d:\programming\python\impulcifer\impulcifer\venv\lib\site-packages (from -r requirements.txt (line 7)) (0.8.9)
Collecting Pillow~=7.2.0
  Using cached Pillow-7.2.0.tar.gz (39.1 MB)
Collecting pandas~=1.2.0
  Using cached pandas-1.2.4-cp39-cp39-win_amd64.whl (9.3 MB)
ERROR: Could not find a version that satisfies the requirement tensorflow~=2.4.0 (from autoeq) (from versions: 2.5.0rc0, 2.5.0rc1, 2.5.0rc2, 2.5.0rc3, 2.5.0)
ERROR: No matching distribution found for tensorflow~=2.4.0
```

How to fix this?  (I'm on Windows 10)


----------



## jaakkopasanen

trigeger said:


> Hi. Requirements installation gives me an error:
> 
> (venv) d:\Programming\Python\Impulcifer\Impulcifer>pip install -U -r requirements.txt
> Collecting git+https://github.com/jaakkopasanen/autoeq-pkg@1.2.5#autoeq (from -r requirements.txt (line 8))
> ...


I'm guessing you are using Python 3.9 or something like that (your log shows cp39 wheels being collected). Only Python 3.8 is supported.


----------



## trigeger

Thanks, that fixed the issue. I didn't realise it had to be exactly version 3.8 and nothing above.


----------



## johnn29

Just FYI for anyone into all this - Apple's new tvOS 15 will enable spatial audio with head tracking for the AirPods Pro. Quite excited about that!


----------



## bigshot

Don’t you mean regular AirPods? I think it already works with the Pro.


----------



## johnn29

It's been released for iOS, yes, but this is tvOS, aka Apple TV. So think of theater watching. Plex/Infuse and the usual apps will need to add support, but hopefully it'll be forthcoming or workarounds will be in place.

Obviously it won't compete against what Impulcifer can do for pure realism - but there are use cases for it when you need max ANC, want to lounge around, need to pair multiple headphones for multiple people, or just for pure convenience. I'm hoping that, like Dolby Headphone, the algorithm will be tuned for a different setting - home theater watching rather than close-up phones.


----------



## morgin

Hi folks, I’m new here and I’ve tried the demo with Impulcifer. It actually sounds unreal having headphones simulate a speaker in front of me.

I’ve ordered:
- 2x Primo EM258 mono module, with 3.5mm plug, 1.0m
- Behringer UMC202HD
- Rode VXLR+ minijack to XLR adaptor with power convertor

Is this everything I need? Also, for the speaker, can I use my LG CX TV or does it need to be a good standalone speaker?

Also, is a room measurement a must? Because then I will also need to buy a mic.

I’ve read someone mentioning height channels and I really would like to add those too once I’ve got things working. Can someone link me a guide or instruct me how to set this up? Many thanks.


----------



## sander99

morgin said:


> I’ve read someone mentioning height channels and I really would like to add those too once I’ve got things working.


Measuring and simulating height channels is no problem, but what are you going to play over them?
Dolby Atmos, DTS:X, and Auro-3D can use height channels, but I don't know if you can decode those on your computer.


----------



## dts350z

sander99 said:


> Measuring and simulating height channels is no problem, but what are you going to play over them?
> Dolby Atmos, DTS:X, and Auro 3D can use height channels but I don't know if you can decode those on your computer?


Some of us are experimenting with up/remixing music for 12 or more channels.


----------






## morgin

sander99 said:


> Measuring and simulating height channels is no problem, but what are you going to play over them?
> Dolby Atmos, DTS:X, and Auro 3D can use height channels but I don't know if you can decode those on your computer?


I want to play it over my PC through HeSuVi, using movies with Dolby Atmos or DTS:X.


----------



## morgin

dts350z said:


> Some of us are experimenting with up/remixing music for 12 or more channels.


Is that possible on Windows, and does it work with movies in Dolby Atmos or DTS:X?


----------



## phoenixdogfan

morgin said:


> I want to play it over my pc through hesuvi using movies with Dolby atmos or dtsX.


There's no software that will decode Atmos, DTS:X, or Auro-3D on a PC. Only dedicated processors whose manufacturers have purchased the license are able to decode those codecs.


----------



## dts350z

morgin said:


> Is that possible on windows and does it work with movies in Dolby atmos or dtsX?


Some people have a way to play multi-channel files directly, e.g. an audio interface with more than 8 channels, the Smyth Realiser, or do-it-yourself virtualization. There is also a way in Windows 10 to play a 12-channel file and have Windows encode (7.1.4) Dolby Atmos on the fly, then connect to a Dolby Atmos decoder (an AVR or whatever) via HDMI. This is the Windows 10 spatial audio capability combined with the Dolby Access app. The intent was for games to use that API, but there is a Microsoft example file player with source (hard-coded audio file names, UWP-style program - ugh!) and one free file player that can also do it (a bit clunky, but it works). Sadly, Microsoft doesn't expose it as a simple sound output interface that other programs could use.


----------



## musicreo

morgin said:


> Rode VXLR+ Minijack to XLR Adaptor with Power Convertor


You need two of them.


morgin said:


> Is this everything I need? Also for the speaker can I use my tv lg cx or does it need to be a good standalone speaker.


You don't need expensive speakers but a TV speaker is a bad choice.


----------



## morgin

Hi, I need some help. I’ve done all of my measurements and obtained the sine sweeps. I’m running the command to turn them into a BRIR. I then get a folder (plots) and also a headphone-responses.wav file, but there is no hesuvi.wav file to be found to use in HeSuVi.

Will the hesuvi.wav be in impulcifer/data/my_hrir or somewhere else?


----------



## musicreo (Aug 11, 2021)

Usually it is saved in the folder "my_hrir". There should be files named hesuvi.wav, hrir.wav, response.wav, headphone-responses.wav, and README.md. If you use room correction you will also find room-responses.wav. The plots folder contains results.png and headphones.png. When you use the plot option you will also have folders named post, pre, and room.

Does the command line show "Writing BRIRs" during processing of your measurement?


----------



## morgin

Ignore my message having the mics on works lol


----------



## morgin

OK, another problem now. I’m getting all the files now, but when I select my hesuvi.wav in HeSuVi I only get sound from the back right. All the included HRIRs work in proper surround.


----------



## musicreo

Does the result plot look OK? You could open your measurements in Audacity to see if everything is fine. The hesuvi.wav file should have 14 channels.
You could also upload your measurement somewhere and we can take a look at the problem.
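A quick way to sanity-check the channel count without opening Audacity, using only Python's standard library (the dummy file below just keeps the example self-contained - point the second `wave.open` at your real hesuvi.wav):

```python
import struct
import wave

# Write a tiny dummy 14-channel, 16-bit WAV so the example runs on its own;
# in practice you would open your actual hesuvi.wav instead.
with wave.open('dummy-hesuvi.wav', 'wb') as w:
    w.setnchannels(14)
    w.setsampwidth(2)
    w.setframerate(48000)
    w.writeframes(struct.pack('<14h', *([0] * 14)) * 10)  # 10 silent frames

# Read the channel count back; a valid HeSuVi file should report 14.
with wave.open('dummy-hesuvi.wav', 'rb') as w:
    channels = w.getnchannels()
print(channels)
```

One caveat: the `wave` module only reads integer PCM, so if your hesuvi.wav is 32-bit float it will refuse to open it; in that case use the `soundfile` package (already in Impulcifer's requirements) and check `soundfile.info('hesuvi.wav').channels` instead.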


----------



## morgin

musicreo said:


> Does the result plot look ok?    You could open your measurements in Audacity to see if everything is ok.  The hesuvi.wav file should have 14 channels.
> You could also abload somewhere your measurement and we can take a look at the problem.


I don’t know if the plot looks OK - it’s my first time doing this. Where can I upload the files so you guys can have a look?


----------



## morgin

```
C:\WINDOWS\system32>cd Impulcifer

C:\Windows\System32\Impulcifer>venv\Scripts\activate

(venv) C:\Windows\System32\Impulcifer>python recorder.py --play="data/sweep-seg-FL,FR-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/headphones.wav"
Input device:  "IN 1-2 (BEHRINGER UMC 202HD 192k) Windows DirectSound"
Output device: "LG TV SSCR (High Definition Audio Device) Windows DirectSound"
Headroom: 29.8 dB

(venv) C:\Windows\System32\Impulcifer>python recorder.py --play="data/sweep-seg-FL-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/FL.wav"
Input device:  "IN 1-2 (BEHRINGER UMC 202HD 192k) Windows DirectSound"
Output device: "LG TV SSCR (High Definition Audio Device) Windows DirectSound"
Headroom: 29.3 dB

(venv) C:\Windows\System32\Impulcifer>python recorder.py --play="data/sweep-seg-FL-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/FC.wav"
Input device:  "IN 1-2 (BEHRINGER UMC 202HD 192k) Windows DirectSound"
Output device: "LG TV SSCR (High Definition Audio Device) Windows DirectSound"
Headroom: 30.3 dB

(venv) C:\Windows\System32\Impulcifer>python recorder.py --play="data/sweep-seg-FL-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/FR.wav"
Input device:  "IN 1-2 (BEHRINGER UMC 202HD 192k) Windows DirectSound"
Output device: "LG TV SSCR (High Definition Audio Device) Windows DirectSound"
Headroom: 28.9 dB

(venv) C:\Windows\System32\Impulcifer>python recorder.py --play="data/sweep-seg-FL-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/SR.wav"
Input device:  "IN 1-2 (BEHRINGER UMC 202HD 192k) Windows DirectSound"
Output device: "LG TV SSCR (High Definition Audio Device) Windows DirectSound"
Headroom: 26.6 dB

(venv) C:\Windows\System32\Impulcifer>python recorder.py --play="data/sweep-seg-FL-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/BR.wav"
Input device:  "IN 1-2 (BEHRINGER UMC 202HD 192k) Windows DirectSound"
Output device: "LG TV SSCR (High Definition Audio Device) Windows DirectSound"
Headroom: 26.0 dB

(venv) C:\Windows\System32\Impulcifer>python recorder.py --play="data/sweep-seg-FL-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/BL.wav"
Input device:  "IN 1-2 (BEHRINGER UMC 202HD 192k) Windows DirectSound"
Output device: "LG TV SSCR (High Definition Audio Device) Windows DirectSound"
Headroom: 27.0 dB

(venv) C:\Windows\System32\Impulcifer>python recorder.py --play="data/sweep-seg-FL-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/BL.wav"
Input device:  "IN 1-2 (BEHRINGER UMC 202HD 192k) Windows DirectSound"
Output device: "LG TV SSCR (High Definition Audio Device) Windows DirectSound"
Headroom: 26.4 dB

(venv) C:\Windows\System32\Impulcifer>python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir" --plot
Creating impulse response estimator...
Running room correction...
Running headphone compensation...
Creating headphone equalization...
Creating frequency response target...
Opening binaural measurements...
Plotting BRIR graphs before processing...
Cropping impulse responses...
Equalizing...
Normalizing gain...
Plotting BRIR graphs after processing...
Plotting results...
Writing BRIRs...

(venv) C:\Windows\System32\Impulcifer>
```


----------



## morgin

*(screenshots of the plots attached)*


----------



## musicreo (Aug 11, 2021)

1) You can use any upload site you like. Google will show you many free sites where you can upload files without registration.
2) You can directly insert the PNG images here and don't need screenshots.

I can see one big problem in your measurement: the amplitude is very low. The headroom is between 26 and 30 dB in your measurements. You should increase the volume on your speakers and the microphone amplification on the Behringer. (I use some Primo EM258 mics and the gain on my Behringer is between the 11 and 3 o'clock positions.)

The frequency response plot of the results shows that you have a very strong mismatch between the left and right channels over the complete frequency range. The headphone plot also shows this mismatch and an unusually high amount of noise. Attached you see one of my measurements.

You should also use the `--channel_balance=trend` option. This will help correct the channel mismatch.

```
python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --fr_combination_method=conservative --room_target="harman-in-room-loudspeaker-target.csv" --specific_limit=750 --generic_limit=750 --channel_balance=trend --target_level=-12.5 --decay=500 --dir_path="data/my_hrir"
```
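For context, the headroom figure that recorder.py prints is just the gap in dB between the recording's peak and digital full scale; a small sketch of the arithmetic (illustrative, not Impulcifer's code):

```python
import numpy as np

def headroom_db(recording):
    """dB between digital full scale (1.0) and the recording's peak."""
    peak = np.max(np.abs(recording))
    return -20.0 * np.log10(peak)

weak = 0.03 * np.ones(1000)  # peak around -30 dBFS, like the problematic runs
good = 0.5 * np.ones(1000)   # peak around -6 dBFS, a much healthier level
print(round(headroom_db(weak), 1), round(headroom_db(good), 1))
```

Roughly every 6 dB of excess headroom halves the signal amplitude relative to the noise floor, which is why 26-30 dB of headroom makes the mic preamp noise so audible.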


----------



## morgin

Thanks, I will try this. I am using the Primo EM258 and the Behringer UMC202HD. Can you tell me which buttons need to be pressed (line in/inst, PAD, direct monitor)? Also, do I set the 48V switch on the back on or off?


----------



## musicreo

If you use the VXLR+ adapter you have to activate the phantom power. The adapter will convert the 48V down to approx. 5V. The Primo EM258 works well at 4-10V; without the voltage the Primo capsules will record nothing. None of the buttons you mentioned should be pressed for the measuring.


----------



## morgin (Aug 12, 2021)

Thanks everyone for all your help. I figured out the problem, in case anyone comes across this issue again. The error I was getting was sound only coming out of the rear right. I was using my TV as the speaker for measurements and had set it so only the left channel was outputting sound. I changed that so both speakers were outputting, and now I'm getting full surround.


----------



## morgin

I’ve read through the whole thread and realise you guys know this stuff so well. I feel like a child and apologise because I’m learning and my questions are very basic.

- Does height matter when measuring (same height as the speaker)?
- When measuring, should the mic be facing in or out of the ear?
- Do both mics have to be in the exact same position in both ears? (They keep slipping around.)
- Movies sound tinny with little bass.
- I can’t hear as much detail as with Atmos (I hear 50-60% of the sound information).
- Is it better to bake in the headphone EQ or use it in HeSuVi?
- Will external noise affect the HRIR a lot (outside traffic)?
- Do I need to learn Python, or is there an easier method to make changes? If not, which command options should I concentrate on first?
- Is there a method similar to this where a sphere of measurements is recorded?


----------



## jaakkopasanen

Your results have no bass because your TV speakers have no bass. Impulcifer captures the sound of the speakers in the room. This can be mitigated a bit with a room measurement, but I doubt anything will save you from the fact that you are using a TV as your measurement speaker.

Mics in ears should be facing outwards. You can glue an earplug to the back of the mics; that helps keep them in place. You might want to put some tape on the back of the mic first so that you can remove the plugs if needed.


----------



## jaakkopasanen

Headphone compensation should be baked in, IMO. Noise will affect the results, but I'm not certain to what degree.


----------



## musicreo

morgin said:


> Does height matter when measuring (same height as the speaker)



It depends on the type of speaker, but most speakers should be positioned at ear height.


morgin said:


> When measuring should Mic be facing in or out of ear


These are omnidirectional microphones, so the placement has some degree of freedom. I think the only important thing is not to block the front side of the capsules with your ear.


morgin said:


> Do both mic have to be in the exact same position in both ears (keep slipping around)


They should be positioned similarly, but exactly matching positions are not necessary. Positioning is still one of the most important aspects of the measurement, and the mics should not move during your measurements!


morgin said:


> Movies sound tinny with little bass


Your plots show that your speakers have a roll-off at 100 Hz. You can add bass with the EQ or try room correction, but a TV speaker is not really suited for these measurements.


morgin said:


> Can’t hear as much detail as Atmos (hear 50-60% of sound information)


This depends on the quality of your measurement.


morgin said:


> Do I need to learn python or is there an easier method to make changes if not what command options I should concentrate on first.


What changes do you have in mind?


morgin said:


> Is there a method similar to this where a sphere of measurement is recorded



Windows limits the input to 7.1, so there is no benefit in measuring more than 7.1.


----------



## morgin

Thank you so much for your help. The more I learn from you guys, the better it's getting. I've used silicone for the mics and an elastic band around my face to hold the wires in place.
That made a huge difference, as did timing the recordings for when there isn't traffic flowing outside.
I've channel balanced and used a different speaker, and now the HRIR is loud.
Another problem I'm coming across now: when there's dialogue in the movie, there's a noisy hiss that goes away when they stop talking.


----------



## morgin

musicreo said:


> What changes do you have in mind?


I want to bake in my oratory1990 Sennheiser HD 560S EQ but can’t work out the line of code and when I need to put it into cmd.


----------



## jaakkopasanen

morgin said:


> I want to bake in my oratory Sennheiser hd560s EQ but can’t work out the line of  code and when I need to put it into cmd


There's no option because you don't need to do that. oratory1990's EQ profiles make headphones match the Harman target, which is an approximation, using headphones, of the sound of good speakers in a good room. Impulcifer does exactly this, but it's not so much an approximation anymore as a simulation tailored to you. Just do the normal headphone compensation with Impulcifer and you'll get the neutral sound. Of course, if your speakers and room aren't neutral then you won't get neutral output. In that case you need to do room correction and potentially get better speakers and maybe even some room treatment.


----------



## morgin (Aug 12, 2021)

So I should take off the headphone graphic equaliser in HeSuVi too?

I will get a speaker or two if I can manage. Any suggestions on speakers? I have a couple, but the Behringer isn’t able to power them enough to get a suitable volume.


----------



## sander99

morgin said:


> I have a couple but the behringer isn’t able to power them enough to get a suitable volume.


Don't you have a normal speaker amplifier? I think on the used equipment market you should be able to get a stereo amplifier for 10 British pounds or so.


----------



## morgin (Aug 12, 2021)

I managed to get a speaker to work, which was very good considering how old it was.



And it was loud.

The HRIR is almost perfect for me. Atmos was really good but was too close to my head. This makes movies seem more immersive.

Just the volume is still low, even after the balance. I need to somehow increase that, but when I try with the measurement I already have, it gives an error.

Does putting the speaker further away when measuring enlarge the surround?

Also, it's got bass but is still lacking that extra punch you get when watching movies or Dolby Atmos demos.


----------



## sander99

morgin said:


> does putting the speaker further away and measuring enlarge the surround?


I would expect so, because in theory the virtual speakers should sound like they are at the same distance as the real speakers were during the measurement.


----------



## jaakkopasanen

morgin said:


> So I should take off the headphone graphic equaliser in hesuvi too?
> 
> I will get a speaker or two if I can manage. Any suggestions on speakers. I have a couple but the behringer isn’t able to power them enough to get a suitable volume.


Yes, don't use a separate headphone EQ with Impulcifer - unless you're trying to do something advanced, which you probably shouldn't be worrying about right now.


----------



## morgin (Aug 13, 2021)

Thanks. Any way of doing height channels too?


----------



## jaakkopasanen

morgin said:


> thanks, anyway of doing height channels too?


No, because there's nothing that can use them. There is no Atmos decoder on Windows except the Atmos for Headphones app, but that's locked into a single BRIR.


----------



## morgin (Aug 13, 2021)

Jaakko, let me take this time to say thank you very, very much for this. For anyone on the fence: try this, it works wonders.

It sounds freaking unreal. I thought Dolby Atmos was the best I could get, but now, with the speaker measured further away, I have a surround system larger than my room. I've also managed to get the volume up by changing the target level.

Only two things:

1. The one missing thing now is that I can't increase the bass. I've tried Peace, but that messes the whole thing up and I have to reinstall HeSuVi and all the drivers. Is there a setting in cmd I can use to increase the bass?

2. I'm getting a hissing sound whenever sound plays.


----------



## jaakkopasanen

morgin said:


> Jaakko, let me take this time to say thank you very, very much for this. For anyone on the fence: try this, it works wonders.
> 
> It sounds freaking unreal. I thought Dolby Atmos was the best I could get, but now, with the speaker measured further away, I have a surround system larger than my room. I've also managed to get the volume up by changing the target level.
> 
> ...


You can add filters with EqualizerAPO's Configuration Editor; no need to use Peace. Just add a new parametric filter of type low shelf with the normal 12 dB/oct slope and a center frequency of 105 Hz. Adjust the gain as needed.
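If you prefer editing the config file directly instead of using the GUI, the same filter can be written as a single line (this is my reading of Equalizer APO's filter syntax, with `LS` being its 12 dB/oct low-shelf type; the 4 dB gain is only an example starting point):

```
Filter 1: ON LS Fc 105 Hz Gain 4 dB
```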

I don't know what causes the hissing sound. Maybe share the hesuvi.wav with us here.


----------



## morgin

You've helped out so much, man. The bass is perfect now too.

I just checked: it's not HeSuVi that's causing the hiss. It happens whenever there's any sound, even Windows prompts, and also if I disable the HeSuVi program or use Dolby or any other HRIR. It must be Windows or the Behringer, which I'm now using as a DAC to amplify my headphones.


----------



## morgin (Aug 13, 2021)

I’m seeing these Dolby guidelines:




Does the distance not matter with Impulcifer? The image shows the rear speakers a lot closer to the listener, but when measuring with Impulcifer we are doing a circle.

Would it also be a good idea when measuring to take equilateral triangles into account?

Also, I was wondering: isn't there a way to take our individual measurements and use them for different scenarios, like a cinema or a sound stage? Or even use them with Dolby Atmos, so we get their software working for our ears and head shape?

Edit: just reading through the guide again in case I missed something. When I measure with headphones, do they need to be plugged in? I've just been putting them on and recording, because plugging them in with the mics in my ears causes interference and I have to take off the headphones. If they do need to be plugged in, am I supposed to hear the sweep through them?


----------



## musicreo

morgin said:


> Does the distance not matter with Impulcifer? The image shows the rear speakers a lot closer to the listener, but when measuring with Impulcifer we are doing a circle.


The best is to always have the same distance to the speakers, but in a real room with real speakers it is often not possible to realize such a positioning.


morgin said:


> Edit: just reading through the guide again in case I missed something. When I measure with headphones, do they need to be plugged in? I've just been putting them on and recording, because plugging them in with the mics in my ears causes interference and I have to take off the headphones. If they do need to be plugged in, am I supposed to hear the sweep through them?



For the headphone measurements the headphones must be plugged in and play the sweep. For this measurement, be careful with the volume to prevent hearing damage.


----------



## morgin

musicreo said:


> The best is to always have the same distance to the speakers but in a real room with real speakers  it is often not possible to realize such a positioning of the speakers.
> 
> 
> For the headphone measurements the headphones must be plugged in and play the sweep.  For this measurement be careful with the volume to prevent hearing damage.


I’ve been up early getting things done properly. I posted earlier that it sounded better than Dolby; this was before I did the headphone measurements plugged in. Now that I’ve done it correctly, I am hearing EVERYTHING, in full surround, like there are objects in my room. I’ve watched 2 movies so far and want to watch everything. It’s like going from standard aerial TV to 4K HDR OLED.
I don’t need to correct for bass anymore, because when it's done properly everything works and you will definitely hear it.

I hope me being an amateur, going through this process of working things out in a basic manner and getting all the answers, helps someone else reading this forum, because the investment I have made so far is minute compared to what I am getting.

In pursuit of getting things right this time… my headroom is 37.8 dB; how do I lower this to get close to zero?


----------



## frans callebaut

Hello,
I must confess that I have not read all 577 posts, but my question is this: I have a Yamaha CXA-5100 surround processor that all my sources are connected to (HDMI output one goes to the television). If I connect one of the JVC EXOFIELD XP-EXT1 HDMI inputs to the second HDMI output of the 5100, will that work (there are not enough inputs on the JVC for connecting all my sources)? Will I still have picture and sound, and is it necessary to connect the HDMI output of the JVC to my TV?


----------



## sander99 (Aug 14, 2021)

frans callebaut said:


> Hello,
> I must confess that I have not read all 577 posts, but my question is this: I have a Yamaha CXA-5100 surround processor that all my sources are connected to (HDMI output one goes to the television). If I connect one of the JVC EXOFIELD XP-EXT1 HDMI inputs to the second HDMI output of the 5100, will that work (there are not enough inputs on the JVC for connecting all my sources)? Will I still have picture and sound, and is it necessary to connect the HDMI output of the JVC to my TV?


It is good that you didn't read all 577 posts if your goal was to find the answer to your question because it is not in there (pretty sure, not 100%)!

The answer depends only on what your Yamaha CXA-5100 can output over its second HDMI output (or maybe only HDMI output 1). (And maybe this is a little bit off topic here.)
If you are out of luck, it will only output PCM stereo. If you are lucky, it will adapt to the audio capabilities of the JVC (negotiated over the HDMI connection between the two devices). I vaguely remember once observing in a little test that my Yamaha RX-V711 receiver passes on Dolby and DTS formats (sorry, not Atmos), but only in standby (with standby HDMI passthrough switched on, a setting somewhere in the Yamaha menu), but I am not sure anymore.
Can you just try it, or don't you have your JVC yet?


----------



## morgin

I’m having problems with the headroom being too high on some measurements. Is it OK to adjust the volume for every measurement to get similar headroom numbers, or should the volume be exactly the same for each recording?


----------



## frans callebaut

sander99 said:


> It is good that you didn't read all 577 posts if your goal was to find the answer to your question because it is not in there (pretty sure, not 100%)!
> 
> Your question only depends on what your yamaha cxa-5100 can output over its 2nd hdmi output (or maybe only HDMI output 1). (And maybe is a little bit off topic here.)
> If you are out of luck, it will only output pcm stereo. If you are lucky it will adapt to the audio capabilities of the JVC (negotiated over the HDMI connection between the two devices). I vaguely remember to have once observed in a little test that my Yamaha RX-V711 receiver passes on Dolby and Atmos(sorry) DTS formats but only in stand by (with standby HDMI passthrough switched on, a setting somewhere in the Yamaha menu), but I am not sure anymore.
> Can you just try or don't you have your JVC yet?


Thank you for your answer. I don't have the JVC yet, because I'm afraid it will not work the way I want it to, and that would be a lot of money thrown away.
Best regards,
Frans Callebaut


----------



## frans callebaut

P.S. I saw too late that you live in the Netherlands and I in Belgium. That is practically the same language, I think.


----------



## morgin

Hi, I’m now getting proper surround sound, but I haven't got a mic for room measurements. Is room measurement an important step, and apart from making it seem like I have actual speakers in the room, what would it bring to the experience?


----------



## jaakkopasanen

morgin said:


> Hi, I’m now getting proper surround sound, but I haven't got a mic for room measurements. Is room measurement an important step, and apart from making it seem like I have actual speakers in the room, what would it bring to the experience?


It is just as important as it is with speaker listening. Many people don't do it, but it definitely can help, especially in the lower frequencies where room modes run rampant.


----------



## morgin

HeSuVi stops producing surround every time there's a Windows update or a GeForce update. I end up having to delete all drivers and software and install everything from scratch (VB-Cable, Equalizer APO, HeSuVi, etc.). How do you guys get the settings to stick for good?


----------



## lowdown

morgin said:


> HeSuVi stops producing surround every time there's a Windows update or a GeForce update. I end up having to delete all drivers and software and install everything from scratch (VB-Cable, Equalizer APO, HeSuVi, etc.). How do you guys get the settings to stick for good?


I've had no problems with Windows 10 updates, and I've been using HeSuVi with an Impulcifer HRIR for nearly 2 years. I don't have GeForce so can't comment on that. Sorry I can't be more helpful.


----------



## morgin (Sep 8, 2021)

Is it possible, after the measurements are taken, to increase the volume in, say, the rear channels? My measurements sound better than Atmos and much clearer, but Atmos's rear channels are better, whereas my measurements sound better in everything else. Watching the first flight, where there's a fly in the rocket with them: using Atmos it sounds like it's behind me buzzing around, whereas in my measurement it's not so distinct. Also, when people are talking at the funeral, the sound is more around me rather than mostly in front.

I hope someone can look at these measurements and check if I've done everything right, or if there is something I have overlooked.


----------



## musicreo

morgin said:


> Is it Possible after the measurements are taken to increase the volume in say the rear channels?



You could increase the volume in HESUVI for side and rear channels.


----------



## morgin

musicreo said:


> You could increase the volume in HESUVI for side and rear channels.


I’ve tried that, but it’s not as good with pinpointed rear sounds. Like it’s there, but not accurate; just messy, over there somewhere.

This is my layout and how I take measurements, almost to scale 😂
Wardrobe behind me; the circle is me, the square is the speaker.


Does the echo from the cupboard behind me make a big difference to my measurements when recording? Should I hang a thick duvet to dampen the sound? I think the echo, or the bounce from being so close, is messing up my rear recordings.


----------



## morgin (Sep 10, 2021)

Still getting more rear detail from the other HeSuVi HRIRs, while with my measurements I'm getting more detail from the front speakers.

Can anyone explain what I am doing wrong? Is the headroom difference causing this?

Also, is it OK while taking each measurement to keep changing the volume and gain to get the lowest headroom?

Is having Equalizer APO, VB-Cable and HeSuVi already installed causing problems when taking measurements, or are they not in play when the sweeps are played? Should I remove these before taking measurements?


----------



## jaakkopasanen

morgin said:


> Still getting more rear detail from the other HeSuVi HRIRs, while with my measurements I'm getting more detail from the front speakers.
> 
> Can anyone explain what I am doing wrong? Is the headroom difference causing this?
> 
> Also, is it OK while taking each measurement to keep changing the volume and gain to get the lowest headroom?


You shouldn't change the volume between different channel measurements or the channels will end up with different volumes and you definitely don't want that. 

Are you measuring with one, two or more physical speakers?


----------



## morgin (Sep 10, 2021)

jaakkopasanen said:


> You shouldn't change the volume between different channel measurements or the channels will end up with different volumes and you definitely don't want that.
> 
> Are you measuring with one, two or more physical speakers?


One speaker; I can use two if that makes it better.

My best measurements so far were done without changing the volume, but keeping it so each headroom is at its lowest; they vary between 5 and 1 dB of headroom.

Could the placement of the speaker cause issues? Because at the moment I'm turning my head and kind of guessing the positions to look at.


----------



## jaakkopasanen

It's really hard to say without knowing more. Maybe it's simply an issue of a bad speaker / room / placement, and the reason the front channels sound better than the HeSuVi HRIRs is that the front doesn't work well without personalization, so even a sub-optimal setup is better with personalization than an off-the-shelf HRIR.


----------



## musicreo

morgin said:


> One speaker, I can use two if that makes it better
> 
> my best measurements I have done so far is without changing the volume but keeping it so each headroom is the lowest. But they vary between 5 and 1 headroom



I think the headroom is just the peak volume (please correct me if I'm wrong). So even if just a single frequency has more power due to reflections off the walls, it can cause a different headroom between measurements, even though you move your ears only a few centimeters.
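If headroom really is derived from the recording's peak sample (an assumption worth verifying against Impulcifer's source), the arithmetic is simple; a minimal stdlib sketch:

```python
import math

def headroom_db(samples):
    """Headroom in dB between the recording's peak and digital full scale (1.0)."""
    peak = max(abs(s) for s in samples)
    return -20 * math.log10(peak)

def gain_to_target(headroom, target=3.0):
    """Extra gain (dB) that would bring the reported headroom down to `target`."""
    return headroom - target

# a recording peaking at half of full scale has ~6 dB of headroom
print(round(headroom_db([0.0, 0.5, -0.25]), 1))
```

On this model a single strong room-mode peak can dominate the maximum, which is why moving your head a few centimeters changes the reported headroom.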



morgin said:


> is having equaliser apo, vb cable and hesuvi already installed causing problems when taking measurements. Or are they not in play when the sweeps are played. Should I remove these before taking measurements?



Impulcifer uses the default sound device. You should be sure not to use HeSuVi/EQ-APO for the sweeps, as this would also process the sweeps. If you have installed EQ-APO only for VB-Cable, you can just switch the Windows default device to your soundcard/interface for the measurements.
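To make that concrete: Impulcifer's README drives everything through a recorder script that records and plays on the default devices; a hedged sketch (the script name and sweep file name are taken from the README and may differ in your version):

```
python recorder.py --play="data/sweep-seg-FL,FR-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/FL,FR.wav"
```

Because this plays on whatever Windows considers the default output, setting the default device to the raw soundcard (not the VB-Cable/HeSuVi device) before recording is the important step.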


----------



## morgin

musicreo said:


> I think the headroom is just the peak volume (please correct me if I'm wrong). So even if just a single frequency has more power due to reflections off the walls, it can cause a different headroom between measurements, even though you move your ears only a few centimeters.
> 
> 
> 
> Impulcifer uses the default sound device. You should be sure not to use HeSuVi/EQ-APO for the sweeps, as this would also process the sweeps. If you have installed EQ-APO only for VB-Cable, you can just switch the Windows default device to your soundcard/interface for the measurements.


I will try uninstalling Equalizer APO, VB-Cable and HeSuVi to be safe, and take measurements.

Can you recommend a good speaker that will work with the Behringer for measurements?


----------



## jaakkopasanen

morgin said:


> I will try uninstalling Equalizer APO, VB-Cable and HeSuVi to be safe, and take measurements.
> 
> Can you recommend a good speaker that will work with the Behringer for measurements?


You don't need to uninstall; just disable HeSuVi (and any other filters you have in your EqualizerAPO configs). Any good speaker is good for recording binaural room impulse responses; for example, the Genelec 8351B is a very good speaker.


----------



## sander99 (Sep 11, 2021)

morgin said:


> Any suggestions on speakers. I have a couple but the behringer isn’t able to power them enough to get a suitable volume.





morgin said:


> can you recommend a good speaker that will work with behringer for measurements.





jaakkopasanen said:


> Any good speaker is good for recording binaural room impulse responses.


Maybe @morgin is still thinking about using a passive speaker on the Behringer UMC202HD headphone output, but that seems a bad idea to me.
An active speaker like the Genelec is good of course (on the line output of the UMC202HD).
Or buy an amp for the speakers that you have, like I suggested before.
[Edit: hi hi, I looked at the price of the Genelec; maybe that is a bit more than what @morgin had in mind spending. Why not just buy a used stereo amp for 10 or 20 British pounds?]


----------



## morgin

sander99 said:


> Maybe @morgin is still thinking about using a passive speaker on the Behringer UMC202HD headphone output, but that seems a bad idea to me.
> An active speaker like the Genelec is good of course (on the line output of the UMC202HD).
> Or buy an amp for the speakers that you have, like I suggested before.
> [Edit: hi hi, I looked at the price of the Genelec; maybe that is a bit more than what @morgin had in mind spending. Why not just buy a used stereo amp for 10 or 20 British pounds?]


Yes, the price is wayyyyyy above what I can spend. I'm just trying to get those small rear details that I'm getting from all the other HeSuVi HRIRs.

Right now I'm plugging the Behringer into the USB port and my speakers into the line out on the back of my PC, switching to the Behringer in the Windows sound options when doing the headphone measurement, and then switching to the speaker line out for the speaker measurements.

The single-speaker measurements and two-speaker measurements sound very similar, so I don't think the problem is there.

The other HeSuVi HRIRs (Atmos, DTS:X) give me distinct voices and conversations in a scene with a crowded room; I can hear each voice, what is being said, and where they are behind me.
With my measurements I'm hearing them slightly, but they are all kind of merged. Maybe it's just me trying to get it too perfect.


----------



## sander99

morgin said:


> And my speakers into the line out on the back of my pc.


Now you have me confused. Are you using active (PC?) speakers, or do you mean you plugged them into the PC's headphones/speakers output rather than the line output?


----------



## morgin

sander99 said:


> Maybe @morgin is still thinking about using a passive speaker on the Behringer


When you say passive and active, do you mean the speaker is either plugged into the back (output) of the Behringer or plugged into the PC motherboard? Which way is better, and why?


----------



## morgin

sander99 said:


> Now you have me confused. Are you using active (pc?) speakers or do you mean you plugged them in the pc's headphones/speakers output rather than line output?


I’m plugging my speakers into the PC motherboard's rear output. My speakers have two leads, so basically:

a hi-fi system. Two wires for each speaker into the hi-fi, and from the PC to the hi-fi input.

Here are some pictures to demonstrate, because I don’t make sense sometimes.


----------



## musicreo (Sep 11, 2021)

morgin said:


> When you say passive and active. Do you mean speaker either plugged into behringer in the back (output) and speaker plugged into the Pc motherboard? Which way is better and why?



Active speakers have the amplifier built into the speaker. Passive speakers need to be connected to an external amplifier. In your case you have passive speakers from a compact hi-fi system where the main unit of the system is also the amplifier. An active speaker is plugged directly into a power outlet.



morgin said:


> can you recommend a good speaker that will work with behringer for measurements.


The JBL 305P MKII are a budget tip for active speakers and can be used with the Behringer UMC202HD. Compared to the speakers you showed in the photo, the JBL 305P should be much better.


----------



## sander99 (Sep 11, 2021)

As @musicreo says above.
So I was mistaken; I assumed you were trying to use passive speakers _directly_ connected to the headphones output of the Behringer without an amplifier. So forget my remarks about buying an amp.


----------



## morgin

musicreo said:


> Active speakers have the amplifier built into the speaker. Passive speakers need to be connected to an external amplifier. In your case you have passive speakers from a compact hi-fi system where the main unit of the system is also the amplifier. An active speaker is plugged directly into a power outlet.
> 
> 
> The JBL 305P MKII are a budget tip for active speakers and can be used with the Behringer UMC202HD. Compared to the speakers you showed in the photo, the JBL 305P should be much better.


Thank you, I will look at these because they are more price friendly. Will the speakers make a big difference to my setup?
These are what I am using at the moment:


----------



## musicreo

For your Kenwood speakers I don't see any information on frequency response, so it is difficult to tell how they perform. But a 4-way speaker system with a 6" subwoofer, a 6" woofer, a 2" tweeter and a 1" super tweeter is really unusual. I guess it is built to be loud and have a lot of bass, giving you a boomy sound that suppresses finer details.


----------



## morgin

Yeah, they’re loud with a lot of bass. I’ve ordered these from Amazon; I think they’re a model above what you suggested: JBL Professional 306P MKII 6" 2-way. They should be here tomorrow.

Thanks for your suggestions. I know Impulcifer is the way to go: I can watch movies and game at good volume without disturbing anyone, while getting full surround and not paying for an expensive setup. But I know I’m missing something to get it just right.

For everyone else: when toggling between your measurements and Atmos or the other HRIRs, how does the surround sound compare, especially with rear audio?


----------



## musicreo

morgin said:


> For everyone else: when toggling between your measurements and Atmos or the other HRIRs, how does the surround sound compare, especially with rear audio?



For me, Atmos from HeSuVi sounds a bit more detailed on all channels, but the front channels already sound like they are side channels, and all channels sound like they are very close to my head. Even looking at a speaker, it sounds to me like listening on headphones.


----------



## morgin

musicreo said:


> For me Atmos from HESUVI sounds a bit more detailed on all channels. But already the front channels sound like they are side channels and all channels sound like they are very close to my head.  Even looking at a speaker it sounds for me like listening over a headphone.


Thanks for your reply. Yes, Atmos sounds too close, whereas Impulcifer helps widen the sound.

With Impulcifer the front channels are very detailed, with so many distinct individual sounds, but they lack the same detail in the rear speakers that Atmos gives. That's the only thing I need to fix for a perfect experience. I hope the new speaker somehow gives that clarity.


----------



## morgin

Quick question, but I’m having problems getting sound to play on my new speaker using the Behringer. It’s really low volume unless I connect to the front, and then I’m getting an error message in cmd telling me the sound is wrong: it's coming to the right first when it should be left.

Currently I’m using an RCA to 1/4" (6.35mm) stereo jack.

I purchased the JBL Professional 306P MKII 6" 2-way. Does this speaker plug into the back of the Behringer with a mono-to-mono 1/4" (6.35mm) jack?


----------



## musicreo

morgin said:


> I purchased JBL Professional 306P MKII 6" 2 Way. Does this speaker plug into the back of behringer with a mono to mono 1/4 (6.35mm) jack?


Yes, you can use a mono-to-mono 1/4" (6.35mm) cable or a 6.35mm-to-XLR cable to connect the JBL speaker to the Behringer.


----------



## morgin (Sep 16, 2021)

I apologise if I’m asking too many questions or if I am going off topic. Please let me know and I will stop.

My question now is that with all the different measurements I’m hearing different sounds. For example, in A Quiet Place 2, at the start in the baseball game, I hear all the background people talking. Changing to another measurement HRIR, I’m hearing a similar amount of sound, but in the same scene I hear a different set of people talking in the background (both exactly as clear as the last). Like there are rings of sound, and each HRIR is in on one ring.

That is just one example, but it happens in all movies with crowds of people talking.

I don’t know if that makes sense. But is it a limitation of headphones that you can only get a certain number of sounds at once? Or are the in-ear mics picking up different frequencies? Or could it be some balancing issue?

Also, when I do the processing and want to change some settings with --, do I do this after the measurements, or after I’ve first processed, and then redo it?


----------



## musicreo

morgin said:


> For example a quiet place 2 at the start in the baseball game I hear all the background people talking. Changing to another measurement hrir I’m hearing similar amount of sound but the same scene I hear another lot of people talking in the background. (Both levels exactly clear as the last)  Like there are rings of sound and each hrir is in on one ring.


That is very strange. I have also noticed differences between my measurements, but nothing that causes such extreme differences.


morgin said:


> also when I do the processing and want to change some settings with - -, do I do this after the measurements or after I’ve first processed and then redo it?


You do this when you process your measurement to get your HRIR.
Have you already done measurements with your new speaker?


----------



## morgin

Yes, I’ve measured the new speakers. I did some close and far measurements, but the ones I did with my old speaker turned out better, and they also sounded like they were further away from me.

The new speaker sounded more treble-heavy and quite piercing to my ears, even after balancing, whereas the old speakers had more bass and sounded more like a movie theatre.

I believe you stated that the experience you’re having listening to music is something beautiful; like you had to take off your headphones many times to make sure it was the headphones and not external speakers.

So I wanted to ask what equipment you are using, because I want what you're experiencing. Or what settings made the most difference in your findings? This rabbit hole has got me addicted to getting the best.


musicreo said:


> You do this when you process your measurement to get your hrir.
> Have you already done measurements your new speaker ?


----------



## morgin

Another silly question, but if you don’t ask you never know.

Can you merge two HRIRs somehow to get the best of both? For example, some of my measurements have really good front and side audio, and some have good rear audio.


----------



## musicreo (Sep 17, 2021)

morgin said:


> I believe you stated that the experience you’re having listening to music is something beautiful. Like you had to take off your headphones many times to make sure it was the headphones and not external speakers.
> 
> so I wanted to ask what equipment are you using because I want what your experiencing. Or what settings made the most difference in your findings.


The photo shows how I did my best measurement. I used two JBL 305 MKII speakers on Gravity SP 3202 stands. The speakers are placed in front of the smaller wall (the room is 6m x 4m and the center speaker is 10-15cm in front of the wall) at a distance of 1.5m, and the tweeter is at ear height. The Behringer UMC202HD is connected to my laptop and used to record and play the sweeps. During the measurement I put the Behringer on my lap. I use a 5m USB cable for the connection to the laptop and two 5m long 6.3mm-to-XLR cables for connecting the speakers.
For the microphones I use the Primo EM258 capsules with the Rode VXLR+ adapter. As shown in the second image, I placed the microphones really deep in my ears. On the floor I marked the positions for the 7.1 setup at 0°, 30°, 110° and 135°.

I did the measurement by myself, so I have to start Impulcifer and then rotate on the chair to the correct position at 0°. For this I use pauses of a few seconds in the command line. First I check the microphones with Audacity to see if I have some unwanted noise from the deep in-ear placement. Then I start the measurement with one speaker placed at 0°, using three sweeps for adjusting the headroom. After that I record three sweeps for the center speaker. Then I put the center speaker at 30°, but without removing the mics from my ears! Again I use three sweeps for recording L and R before I move the two speakers to the LS, RS positions. I repeat this procedure for the LB and RB positions. I marked the 0° position on the wall so I know where to look during the sweeps.
After the speakers are measured, I directly put on my headphones (AKG K701 and Sennheiser HD 555) and again do three sweeps.
Notice that I did not remove the mics during the complete procedure. After that I place my measurement mic (Superlux ECM999) at the ear positions and record all sweeps for the room correction (here I use just one sweep). It may sound complicated, but the complete procedure is finished in 10-15 minutes.
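The sequence above maps onto Impulcifer's command line roughly like this; a hedged sketch (script names, flags and sweep file names are taken from the Impulcifer README and may vary between versions):

```
:: one speaker pair per position, mics staying in the ears throughout
python recorder.py --play="data/sweep-seg-FL,FR-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/FL,FR.wav"
:: ...repeat for the SL,SR and BL,BR positions, then the headphones last
python recorder.py --play="data/sweep-seg-FL,FR-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/headphones.wav"
:: finally process everything into hesuvi.wav
python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir" --plot
```

(The `::` comments assume a Windows cmd prompt.)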


----------



## morgin

musicreo said:


> The foto shows how I did my best measurement. I used two JBL 305MKII speakers on Gravity SP 3202 stands. The speakers are placed in front of the smaller wall (the room is 6m*4m and the center speaker is 10-15cm in front of the wall) at a distance of 1.5m and the tweeter is at ear height.  The Behringer UMC202HD is connected  to my laptop  and used to record and play the sweeps.  During the measurement I put the Behringer on my lap. I use a 5m USB cable for the connection to the laptop and two 5m  long 6.3mm to XLRs cables for connecting the speakers.
> For the microphones I use the Primo Em258 capsules with the Rode VXLR+ adapter. Like shown in the second image I placed the microphone really deep in my ear. On the floor I marked the positions for the 7.1 setup at 0°, 30°, 110° and 135°.
> 
> I did the measurement  by myself so I have to start  impulcifer and then rotate on the chair to the correct position at 0°. For this I use pauses of some seconds in the command line. First I check the microphones with Audacity to see if I have some unwanted noise from the deep in ear  placement. Then I start the measurement with one speaker  placed at 0° using three sweeps for adjusting the headroom.  After that I record three sweeps for the center speaker. Then I  put the center speaker to 30° but without removing the mics from my ear! Again I use three sweeps for recording the L and R  before I move the two speakers to the  LS,RS positon. I repeat this procedure for the LB and RB positions. I marked the 0° position at the wall so I know where to look at during the sweeps.
> ...


You are so much help. I’m going to do exactly how you did your measurements, except for the room measurements, as I don’t have the money to spend on the mic (shame it can’t be done with the in-ear microphones). I’m also going to put the mics in deep like you did; maybe having them in the ear canal makes a big difference.
Just need to make sure that when they’re in the ear canal I haven't got them facing into the sides of the ear. I’ll have to learn how to check with Audacity.


----------



## morgin (Sep 17, 2021)

Actually, I found this; if they’re at this price then I will buy. Do I need anything else with this mic, like cables?

Also, can I add this to the best measurement I already have, or do I need to do all the measurements again?


----------



## musicreo

morgin said:


> Actually found this if they’re at this price then I will buy. Do I need anything else with this mic? Like cables


I bought an XLR male to XLR female 6-meter cable (the sssnake SM6BK, €5) and a simple microphone stand (Millenium MS 2002, €16).


morgin said:


> also can I add this to the already best measurement I already have or do I need to all measurements again?


Yes, if you haven't changed your room you can do the room measurement and add it to your old measurement.


----------



## morgin

Just to be sure: the mic with the extension XLR cable, I only need one? And that one mic and cable is connected to the front of the Behringer?

Also, left or right port on the Behringer, or does it not matter? I’ve ordered the mic and the cable.


----------



## musicreo

morgin said:


> Just to be sure, the mic with the Extention xlr cable… I only need 1? And that one mic and cable is connected to the front of behringer
> 
> Also left or right port on behringer or it doesn’t matter? I’ve ordered the mic and the cable.



Yes you need only one and you can use the left or right input on the Behringer.


----------



## morgin

I’m doing the measurements like always, but now it tells me that BR and BL are mixed up: sound is coming from one of them first when it should be the other way around.
My mics are placed correctly. I’m using just the left channel.


----------



## musicreo

Probably just some noise that is recorded before the sweep.


----------



## morgin

I'm once again stuck. I have the room recording mic on the way and I am trying to replicate my best measurement so when it arrives I can get the same result and record with the mic in the same place. But no matter what I try I cannot get the same results. I have tried removing all drivers and reinstalling. Tried one and two speakers. Tried different cables. Tried different distances. And much more (spent 7 hours till I gave up) I'm hoping if I cant post the plots from my best measurement and my latest someone can see what I may be doing different. I cannot work out the graphs and plots even though I tried

I will give *£5 transferred to their PayPal* to whoever can help me get the same results.


----------



## morgin

Those were the good measurements.


These are the latest, not-so-good ones.


----------



## morgin

It looks like the big change is in the headphone measurement; the rest looks similar. When I'm measuring the headphones, do I keep the two speakers connected to the rear of the Behringer, so the sound is played through both my headphones and speakers at the same time? (I'm thinking I might have left them plugged in and didn't hear the speakers active whilst I was listening to my headphones.)


----------



## musicreo

morgin said:


> It looks like the big change is in the headphone measurement the rest looks similar.


The second headphone measurement looks better.

In the first measurement I see that the amplitude of FR, SL and SR decreases very fast above 3.5 kHz. In the second measurement it decreases for all channels at 3.5 kHz. You should check the tweeter. From the frequency plot I would guess the second measurement sounds very dark and muffled.
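
If you want to quantify a rolloff like that instead of eyeballing the plot, you can compare a channel's spectral energy above and below 3.5 kHz. A rough numpy sketch with synthetic stand-ins (white noise for a healthy channel, a crude low-pass for a damped tweeter; the signals and the 3.5 kHz split are invented for illustration):

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(0)

# Synthetic stand-ins: white noise as a "healthy" channel, and the same
# noise through a crude low-pass as a channel with a damped tweeter.
healthy = rng.standard_normal(fs)
rolled = np.convolve(healthy, np.ones(16) / 16, mode="same")

def hf_lf_ratio_db(x, fs, split_hz=3500):
    """Energy above split_hz relative to energy from 100 Hz to split_hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1 / fs)
    hf = spec[f >= split_hz].sum()
    lf = spec[(f >= 100) & (f < split_hz)].sum()
    return 10 * np.log10(hf / lf)

# A channel with a dying tweeter shows a much lower ratio.
print(round(hf_lf_ratio_db(healthy, fs), 1))
print(round(hf_lf_ratio_db(rolled, fs), 1))
```

Run on the real channel recordings, a healthy and a rolled-off channel should differ clearly on this ratio.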



morgin said:


> When I'm measuring the headphones first do I keep the two speaker's connected in the rear of Behringer so the sound is played in both my headphones and speaker's at the same time?


No the speakers should be silent when measuring the headphones. Also be careful that you don't activate the direct monitor function of your Behringer!


----------



## morgin (Sep 19, 2021)

Yes, exactly right: the second is dark and muffled, whereas in the first I can hear all sounds distinctly. I am using the same equipment as before but can't figure out where the issue is. The tweeter sounds correct when playing videos and music.

None of the buttons on the Behringer are pressed.

I'm worried that I have purchased the mic for room correction but won't be able to replicate the best measurement.


----------



## musicreo

The room affects the frequency range from 20 Hz to 1000 Hz. In this range even small changes can shift the measured resonances. But at higher frequencies the room is not so important anymore and the speaker dominates the sound. This is why I thought the tweeter may have a problem.
I also don't see any relation between FR, SL and SR in the first measurement. You mentioned that you miss some details from the back, and indeed both virtual surround speakers have a problem in your first measurement. So you have the same problem in both measurements, but in the first measurement only 3 of the 7 channels show it. So if the tweeter is OK, then the mic is not recording the higher frequencies. Maybe it is something simple, like some earwax on the capsules damping the higher frequencies?


----------



## morgin (Sep 19, 2021)

Can earwax also cause issues such as it telling me my left ear is receiving sound before my right, even though the speaker is on the right in that measurement? I’m getting that error a lot now.


When you mention the FR, SL and SR relation, can you please elaborate a little so I know what to look out for? Do you mean the graphs don’t look similar?


----------



## musicreo

morgin said:


> Can ear wax also give issues such as telling me my left ear is receiving sound before my right even though the speaker is on the right on that measurement? I’m getting that error a lot now


No this must be something else.


morgin said:


> When you mention FR,Sl and SR relation can you please elaborate a little so I know what to look out for. Do you mean the graphs don’t look similar


Sorry, I mentioned the wrong channels. It is FR, SL and *BR* that have the decreasing slope starting at 3.5 kHz. Compare them to the other channels and you'll notice that the higher frequencies are much lower. Now look at your second measurement and you will see that all channels show this decreasing slope starting at 3.5 kHz.


----------



## morgin

you mean this






should all the measurements be flat like this one





So if I concentrate on getting all of these level and keep the headroom close to 0.5, I should get the sound of all the speakers?

Also my headphones should look like this





not this





Can you PM me your PayPal for the time you’ve taken to help so far? I really am grateful for your help.


----------



## musicreo

morgin said:


> you mean this


Exactly.


morgin said:


> should all the measurements be flat like this one


I can't tell you if they should look like that, because your ear and the mic position are unknown factors. But certainly, for similar measurement conditions, the frequency response should not change so much.


morgin said:


> Also my headphones should look like this


In terms of noise it should look like that; in terms of the frequency response, your ear and the mic position are unknown factors.


morgin said:


> can you pm me your PayPal for the time you’ve taken to help so far. I really am greatfull for your help


I just try to share my experience from my measurements. If jaakkopasanen ever introduces a donation button for Impulcifer, you can donate to him for his great work.


----------



## morgin

I plan to donate to him if he does. When I first started I didn’t think I would want to spend more and more money but he’s done something special here. And even though I’m not fully set up correctly it already sounds amazing.


----------



## morgin

“Alternatively it's possible to do only one measurement by placing the microphone in the location of the center of the head during binaural measurements.”

When doing the above, which way do you face the mic: towards the speaker or straight up?

The same goes if I take measurements for each ear in each position: do I face it toward the speaker, or point it where the ear opening was pointing at the time of measuring?


----------



## musicreo

Microphones are diffuse-field equalized or free-field equalized. If I remember correctly, the microphone should point at an elevation of 0° towards the speaker for free field and at 90° for diffuse field.
As the mics are omnidirectional, the azimuth is not so important. The Superlux ECM999 is most likely the same as the Behringer ECM8000. For this mic there is a test that found it performs best at 60°. But this only matters when you want accurate measurements even in the high frequencies, which are not used in the room correction.


----------



## jaakkopasanen

The MiniDSP UMIK-1 comes with separate calibration files for the 0° and 90° positions.


----------



## morgin

Jaakko, you really should sell this somehow; it’s awesome, and I’m probably only getting 60% of the quality that is possible.

On PC it is brilliant. Is there a way to have this work on, say, a PlayStation 5 or any other 7.1 device? Make the device think it’s passing through to an AV device: a little portable box that runs HeSuVi, where we can install our HRIRs and then just plug our headphones in.

The PS5’s 3D audio is great, but like the other HRIRs it sits very close to the head and is not as clear; I’d rather have Impulcifer.


----------



## morgin (Sep 20, 2021)

So once I’ve got the mic, is it a good idea to calibrate both the mic and the speakers before doing any further measurements?

If so, what would be the best way to do this?

And with the JBL 305P, should I turn the volume on the rear of the speaker to max, or keep it lower to avoid static?


----------



## musicreo

morgin said:


> So once I’ve got the mic. Is it a good idea to calibrate both mic and speakers first before doing any further measurements?


For your measurement it is not necessary. But if you want to do some measurements, I suggest taking a look at the free software REW.


morgin said:


> and with the jbl 305p should I turn the volume on the rear of the speaker to max or keep it lower to stop static?


For me it was already very loud well before the maximum. The noise floor and distortion level are better when you don't use the maximum amplification on the JBL.


----------



## morgin

Any idea why my headphone measurements look so different? I’m using the same technique for mic placement and headphone placement, but the results now always come back more jagged.






I want to get the flatter graph. It’s a shame I can’t see it after the headphone measurement and need to wait till the whole thing is done and then processed.
Could it be the volume or the amp?


----------



## musicreo

What headphones and amp do you use? Did you change the volume between the two measurements?
From the plot I would say you should already hear the difference during the sweep.


----------



## morgin

I’m using the Sennheiser 560S, and when I said amp I meant just the Behringer. I can’t tell the difference, maybe because only one measurement turned out good and the rest are the bad ones. Should I use the highest volume and gain without clipping to get the best results?


----------



## lowdown

morgin said:


> Any idea why my headphone measurements look so different. I’m using the same technique of mic placement and headphone placement but the results now always come back as more jagged.
> 
> 
> I want to get the flatter graph. It’s a shame I can’t see it after the headphone measurement and need to wait till the whole thing is done and then processed.
> Could it be the volume or the amp?


I saw plots similar to that top one with some of my first measurements and it turned out to be caused by a "monitor" setting for the mics.  Check out posts #232 and #235 in this thread.  I'm a novice, but perhaps that can help.

https://www.head-fi.org/threads/rec...r-speaker-virtualization.890719/post-15416276

https://www.head-fi.org/threads/rec...r-speaker-virtualization.890719/post-15416276


----------



## musicreo

morgin said:


> I’m using the Sennheiser 560s and when I said amp I meant just the behringer. I can’t tell the difference maybe because only one measurement turned out good and the rest are the bad ones. Should I use the highest volume and gain without clipping to get the best results?



For me the headroom on the measurements was about 1-5 dB for the headphones, and I was below 12 o'clock on the Behringer headphone output while the mic amplification was at 2 o'clock.


----------



## morgin

What about the gain on the Behringer? I have to set the two channels to different levels: one doesn’t clip at the 10 o’clock position, while the other needs to be at 9 o’clock for both to stay just below clipping.


----------



## morgin

lowdown said:


> I saw plots similar to that top one with some of my first measurements and it turned out to be caused by a "monitor" setting for the mics.  Check out posts #232 and #235 in this thread.  I'm a novice, but perhaps that can help.
> 
> https://www.head-fi.org/threads/rec...r-speaker-virtualization.890719/post-15416276
> 
> https://www.head-fi.org/threads/rec...r-speaker-virtualization.890719/post-15416276


Awesome, I’ll try what you did in the post where you used the headphone jack. I think that with the 50+ measurements I have done, and with all sorts of combinations, I must have used a different 3.5 mm input somewhere and got the decent-looking result.

When you guys say monitor, do you mean the main speaker?


----------



## lowdown

morgin said:


> Awesome I’ll try what you did in the post where you used the headphone jack. I think with the 50+ Measurements I have done and with all sorts of combinations I must have used a different 3.5mm input somewhere and got the decent looking result.
> 
> when you guys say monitor you mean the main speaker?


No, by "monitor" I meant a setting that allows listening to what's being recorded.  It's been well over a year since I've done any recordings so don't remember exactly where the setting was, but there is a Windows mic setting for listening to the mic input that I believe is what I unchecked.  A side note, I understand what it's like to get all this set up, make the recordings, and try different command line options.  When I hit the right combination it was beyond my expectations, which were high.  Keep at it.  The pot of gold at the end of this rainbow still totally amazes me all the time.


----------



## morgin

lowdown said:


> No, by "monitor" I meant a setting that allows listening to what's being recorded.  It's been well over a year since I've done any recordings so don't remember exactly where the setting was, but there is a Windows mic setting for listening to the mic input that I believe is what I unchecked.  A side note, I understand what it's like to get all this set up, make the recordings, and try different command line options.  When I hit the right combination it was beyond my expectations, which were high.  Keep at it.  The pot of gold at the end of this rainbow still totally amazes me all the time.


That’s why I’m spending more money and investing so much time. Also asking so many questions because I know the potential. 

Can I also ask whether you did everything, like the room recording, and whether you spent on high-end stuff or just the cheapest good-quality mics, speakers, etc.?

Your posts and the help I am getting are way more than I expected, and I want to thank everyone again.


----------



## lowdown

morgin said:


> That’s why I’m spending more money and investing so much time. Also asking so many questions because I know the potential.
> 
> Can I ask you also if you did everything like room recording and if you spent on the high end stuff or just the cheapest good quality for mics, speaker etc
> 
> your post and the help I am getting is way more than I expected and want to again thank everyone


I used Sound Professionals MS-TFB-2 mics, which I modified by cutting off the wings and gluing half of a foam earplug to the back of each one.  That allowed inserting them into my ear canal so they would stay in place easily.  I also used a not expensive Zoom H2N mic as the interface between the ear mics and my PC.  I did do room recordings as I already had a UMIK-1 mic for calibrating my stereo system.  My speakers were Anthony Gallo Stradas with Gallo subs.  I don't know how much the quality of the speakers influences the quality of the Impulcifer end result.  Based on what I've read here it's possible to get very good results with relatively modest speakers.  I'm actually in the process of selling my Gallos right now as I never listen to them for music any more.  Impulcifer is so much better it's totally spoiled me for serious music listening.  I don't know what it would cost to find speakers and a room that sounds as good to me as my Senn HD600's and Impulcifer.  I can only say my decades long search for better sound ended happily when I found Jaakko's great gift.


----------



## musicreo

morgin said:


> What about gain on the behringer? I have to set them both to different levels as one doesn’t clip on the 10 o’clock position and one needs to be on the 9 o’clock position for both not to clip and be at the highest



I have 12 Primo capsules and the preamplification to match them is also different. Hence I wouldn't worry too much about different amplifications.

I have a small collection of my mountings of the Primo mics which I tried for measurements:


----------



## morgin

musicreo said:


> I have 12 primo capsules and the preamplification to match them is also different.  Hence I wouldn't worry about different amplifications to much.
> 
> I have a small collection of my mountings of the Primo mics which I tried for measurements:


Brilliant, so having different levels is OK as long as I get the lowest dB without clipping and make sure the mics don’t move. If I can fix my headphone measurement, then with the room correction when the mic arrives I’m sure I will get a good result.

I’m excited


----------



## MayaTlab

lowdown said:


> I used Sound Professionals MS-TFB-2 mics



Another reference from Sound Professionals that might be better suited out of the box, with no soldering skills required, for this thread's application:
https://www.soundprofessionals.com/cgi-bin/gold/item/SP-EAR-MIC-2
which is the stereo version of this one:
https://www.soundprofessionals.com/cgi-bin/gold/item/SP-EAR-MIC

The capsule is really, really small (3mm in total including the housing + the wire) and it's easy to DIY it with other types of earplugs / tips so that it sits at least flush with the ear canal's entrance and doesn't protrude, which I think is quite important for this thread's subject.


----------



## lowdown

MayaTlab said:


> Another reference from Sound Professionals that might be better suited out of the box, with no soldering skills required, for this thread's application, is the following reference :
> https://www.soundprofessionals.com/cgi-bin/gold/item/SP-EAR-MIC-2
> Which is the stereo version of that :
> https://www.soundprofessionals.com/cgi-bin/gold/item/SP-EAR-MIC
> ...


I like that design.

Here's my MS-TFB-2 mics with the wings cut off and foam earplugs glued to the back.  Worked well for positioning and keeping them in place during recordings.


----------



## musicreo

The small mics look interesting, but I don't see any specifications?


----------



## MayaTlab

musicreo said:


> The  small mics look interesting but I don't see any specifications?



They're really bad at mentioning important info on their own website, but you can send them a mail, so far they've been quite reactive. They can also provide a pairing service apparently (they try to match the L and R mics a bit better than by default).
Sensitivity is a bit lower than their larger MS mics, but I don't have any problem with it for measuring headphones so far.


----------



## morgin (Sep 27, 2021)

Hi, if I'm using the Superlux ECM999 with the Behringer, do I keep the power switch on the back at 48 V and plug it in directly with an extension cable? I don't want to fry it.

When you did your room measurements did you do per ear measurements or just the one measurement?

And is there something that I should look out for or do before I start to measure the room? I think I'm disturbing my neighbour with the loud sweeps, so I want to keep the number of measurement attempts to a minimum.


----------



## lowdown

Can't answer the Behringer question. I did take measurements with my UMIK-1 at both ear positions, or as close as I could get.


----------



## musicreo

morgin said:


> Hi, if I'm using the Superlux ECM999 with the Behringer, do I keep the power switch on the back at 48 V and plug it in directly with an extension cable? I don't want to fry it.


Yes, the microphone works with phantom power (+48 V).


----------



## morgin

I've just done my room measurement. Does this look correct, or is it bad?


----------



## musicreo

Which settings in Impulcifer do you use for processing the files?
Is this the JBL 306 or your other speaker?


----------



## morgin

I'm using this:

python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --fr_combination_method=conservative --room_target="harman-in-room-loudspeaker-target.csv" --specific_limit=750 --generic_limit=750 --channel_balance=trend --target_level=10 --decay=500 --dir_path="data/my_hrir " --plot


and I've exchanged my speaker and am now using the JBL 305.


----------



## morgin

Could someone with good measurements please post their plots so I can compare? I know they won't look identical, but it might help me figure out where I'm going wrong.

Also, if possible, can someone post their HeSuVi file?

Thanks.


----------



## musicreo

I put my room measurement from the JBL 305P MK II over your plot. Depending on the size of the room, the frequency range up to 500-1000 Hz is strongly influenced by the room.









morgin said:


> I know they wont look exact but it might help me figure out where im going wrong.


What is the problem with your sound now? The only thing that still looks strange is your headphone plot. But not all problems are visible in the plots: for me, timing problems were a huge issue with some measurements, and you will not see such problems in the plot.



morgin said:


> also if possible can someone post their hesuvi file


 You  can download my measurement (for the HD555/95) here: file HESUVI
But keep in mind that a measurement from a different person can sound completely wrong for you!


----------



## lowdown (Sep 28, 2021)

morgin said:


> could someone with good measurements please post their plots so i can compare. I know they wont look exact but it might help me figure out where im going wrong.
> 
> also if possible can someone post their hesuvi file
> 
> thanks


I've lost track of the plots for the HeSuVi file I'm using, but here's the wav file for my HD600's.  Only problem, of course, is it's like handing someone your eyeglasses to try.  Not likely they'll see what you see.  This may sound terrible to you, but for me it's the best sound I've ever heard.  Cheers.


----------



## morgin

Thank you to both. I just wanted to hear the levels of your files compared to something like Dolby or the other HeSuVi-included ones, and then compare them to mine. The reason I'm a little concerned is that the room measurement hasn't made any difference to the best measurement I have. It matches very closely, and I was thinking room correction would make a noticeable difference.

Here is my HeSuVi file:

https://drive.google.com/file/d/1RRhTDvsCCgjMmAOVpxZC-Y-XTIxP6e-m/view?usp=sharing


----------



## musicreo (Sep 28, 2021)

morgin said:


> Thank you to both I wanted to just hear the levels of your files compared to something like dolby or the other hesuvi included ones then compare to mine.


The levels? That depends on your settings (I used --target_level=-12.5, you used --target_level=10). I think with your settings you will run into clipping if you don't reduce the input signal. Keep in mind that you need much more headroom for 7.1 than for stereo.
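
The 7.1-vs-stereo headroom point is simple arithmetic: the convolved channels sum into two ears, and in the worst case their peaks add coherently, so the summed peak grows with the channel count. A quick sketch (worst case only; real material rarely sums fully coherently):

```python
import numpy as np

# Worst case: N channels of equal peak level summing coherently.
# Amplitudes add linearly, so the summed peak can grow by 20*log10(N) dB,
# which is the extra headroom needed to guarantee no clipping.
def worst_case_headroom_db(n_channels: int) -> float:
    return 20 * np.log10(n_channels)

print(round(worst_case_headroom_db(2), 1))  # stereo: 6.0 dB
print(round(worst_case_headroom_db(8), 1))  # 7.1 (8 channels): 18.1 dB
```

So an 8-channel mix can in principle need about 12 dB more headroom than stereo, which is why a target level that is safe for stereo can clip in 7.1.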



morgin said:


> The reason I'm a Little concerned is because with the room measurements it hasn't made any difference to the best measurement I have. It very closely matches and was thinking room correction will make a noticeable difference.


I could easily hear the difference when I used my room measurement. When I look at the analysis panel of EQ-APO, I see some strong dips between 50-400 Hz which can only be explained by your strange headphone measurement.


----------



## morgin

I've tried plugging the headphones into the Behringer and then into the PC, but both give me the same results. I've even removed the Behringer PC driver and used whatever Windows installs, tried removing the silicone from the ear mics and just inserting them into my ear canals, and also lowered the volume, gain and amp, but it's all the same. So I'm stuck trying to get a smooth headphone measurement.

Is the dip at the start of the room measurement nothing to be concerned about? Shouldn't it be close to the target line?


----------



## lowdown

morgin said:


> I've tried plugging the headphones into behringer and then the pc but both give me the same results. I've even removed the behringer pc driver and used whatever windows gets. tried removing the silicon from the ear mics and just inserting them into my ear canals but its the same. also lowered the volume, gain and amp.  So I'm stuck with trying to get a smooth headphone measurement.
> 
> The dip at the start of the room measurement is nothing to be concerned about, shouldn't it be close to the target line?


I mentioned before that your jagged headphone graph looks a lot like one I was getting at first. It's definitely not right. I'm pretty sure I resolved it by unchecking a box under the mic settings in Windows. Here's a pic of the screen:




Under Sound Control Panel > Recording > (your mic input) - look at the Listen tab and uncheck "Listen to this device".  I know virtually nothing about recording but I think in my case at least it was feedback causing the messed up headphone recording.  At least this is simple enough to check and eliminate if it's not the cause in your setup.


----------



## morgin

I appreciate you taking the time to look for the setting you mentioned earlier and screenshotting it. Unfortunately I had already checked that first, because you mentioned it before.


----------



## lowdown

morgin said:


> I appreciate you taking time to look for the setting you mentioned earlier and screenshotting it. Unfortunately I already Checked that first because you mentioned it before.



Ok.  Sorry I can't be more help.


----------



## musicreo

You could try some measurements of your headphones with REW at different volumes and see if it always gives you the same result.


----------



## morgin

lowdown said:


> Ok.  Sorry I can't be more help.


You've helped loads so far, and so did musicreo. I think I have the best I can manage, and it really sounds sublime. I think I'm just being greedy and wanting to squeeze out every last drop.


----------



## lowdown

morgin said:


> You helped loads so far. So did Musicreo. I think I have the best I can manage and it really sounds sublime I think I'm being greedy and wanting to squeeze every last drop.


Understand.  I kept at it until I hit on a result that I couldn't imagine being better.  Far more luck, stubborn persistence, and trying lots of command line options, than knowing what I was doing.  That headphone recording would bother me until I figured out what's causing the apparent feedback, then work on tweaks after that.  But that's just a guess as to how to improve what you're hearing.


----------



## morgin

Are sine sweeps supposed to be smooth and sound like this?



When I play it over my headphones and speakers I get a vibration sound almost all the way through, which could be the cause of the headphone measurement being so spiky.


----------



## musicreo

morgin said:


> Are sine sweeps supposed to be smooth and sound like this?


Yes.


morgin said:


> When I play it over my headphones and speakers I get a vibration sound almost all the way through which could be the cause of the headphone measurement being so spiky


Can you upload your recorded headphone sweep? That way we could hear what is going on.
I get a small vibration with my HD555 and the JBL 305, but only in the lower frequencies when playing the sweep at a painfully high volume.


----------



## morgin

Here is a link to the folder:

https://drive.google.com/drive/folders/1-XPVfQjRjwEsoCECyiAlbWh48p4Z-4na?usp=sharing


----------



## musicreo

Your link is not working for me (no access).


----------



## morgin

https://drive.google.com/file/d/1j20umOahTHdDx8ymkGSn92QvYQ0cLp0X/view?usp=sharing

Hope this link works.

Did you change any of the switches on the back of the speaker (input sensitivity, boundary EQ)?


----------



## musicreo

I still have no permission for your link.
I think I put the volume to 6 or 7 for the measurements. I'm not sure, but I think I also set the boundary EQ to -1.5 dB or even -3 dB when the speakers were placed close to the wall.


----------



## morgin

I apologise; I’ve now enabled anyone to view.
I asked about this on Reddit, and oratory replied that it could be a difference in the in-ear mics. Did you guys calibrate them? I don’t think that is the issue, because one of the readings looks like it should.


----------



## lowdown

morgin said:


> I apologise. I’ve now enabled anyone to view.
> I asked about this on Reddit oratory replied that it could be the difference with in ear mics. Did you guys calibrate them? I don’t think that is the issue because one of the readings looks like it should


I didn't calibrate my ear mics.  Don't know how to do that.  

One other note regarding that "monitor" issue I brought up.  I looked back at my postings, and my equipment, and there's a Monitor setting on my Zoom H2N device that I used to interface the ear mics to the PC.  I had to turn that off on the H2N, in addition to that Windows mic "Listen" setting, to get the headphone plot to look normal.  There may be something comparable on the Behringer.  I won't harp on this any more, but wanted to clarify.


----------



## musicreo (Sep 29, 2021)

morgin said:


> I apologise. I’ve now enabled anyone to view.


Thank you. Your recorded Impulcifer headphone sweep does not sound like my Impulcifer sweeps! Here you can listen to my two headphone sweeps: my measurement. I hope you can hear the difference?
So there is something wrong!


morgin said:


> I asked about this on Reddit oratory replied that it could be the difference with in ear mics. Did you guys calibrate them?


I also didn't calibrate my ear mics. You could place the capsules 30-50 cm in front of the speaker and do a sweep at a moderate volume with REW to compare the frequency responses of your capsules.
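
That comparison can also be scripted outside REW. A rough numpy sketch of the idea, with a synthetic sweep standing in for the two recorded capsule WAVs and an invented 3 dB sensitivity offset:

```python
import numpy as np

fs = 48000
t = np.arange(2 * fs) / fs

# Synthetic exponential sweep standing in for the recorded test signal;
# in practice you would load the two capsule recordings from WAV files.
k = np.log(20000 / 20) / 2.0
sweep = np.sin(2 * np.pi * 20 * (np.exp(k * t) - 1) / k)

cap_a = sweep                    # capsule A
cap_b = sweep * 10 ** (-3 / 20)  # capsule B, pretend it is 3 dB less sensitive

# Compare the magnitude spectra; the median difference estimates the
# broadband sensitivity mismatch between the two capsules.
spec_a = np.abs(np.fft.rfft(cap_a)) + 1e-20
spec_b = np.abs(np.fft.rfft(cap_b)) + 1e-20
diff_db = 20 * np.log10(spec_b / spec_a)

print(round(float(np.median(diff_db)), 1))  # -3.0: the simulated mismatch
```

With real recordings, `diff_db` plotted over frequency would also reveal frequency-dependent mismatches, not just a broadband offset.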


> There may be something comparable on the Behringer.  I won't harp on this any more, but wanted to clarify.


The Behringer has a monitor button. If it is pressed the recorded signal is played back over the output. But I mentioned this before and I think it was not the problem.


----------



## morgin (Sep 29, 2021)

lowdown said:


> I didn't calibrate my ear mics.  Don't know how to do that.
> 
> One other note regarding that "monitor" issue I brought up.  I looked back at my postings, and my equipment, and there's a Monitor setting on my Zoom H2N device that I used to interface the ear mics to the PC.  I had to turn that off on the H2N, in addition to that Windows mic "Listen" setting, to get the headphone plot to look normal.  There may be something comparable on the Behringer.  I won't harp on this any more, but wanted to clarify.


Yeah, thank you. I made sure none of the buttons were pressed on the Behringer, as musicreo had advised in an earlier post.

Looks like the culprit could be the sweeps, and not just on the headphones: I'm hearing it on the speaker too. I'm glad I checked YouTube to see what they should sound like. I'll try playing your sweep on my PC to see whether it's my PC or Impulcifer giving me the juddery sweep.

Can I use sweeps from another source if they play smoother than Impulcifer's?


----------



## musicreo

morgin said:


> can I use sweeps from another source if they play smoother than impulcifer



No. Impulcifer also uses the sweep for the deconvolution, so you can't use a different sweep.
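
For intuition, here is roughly what a sweep deconvolution does (a toy numpy sketch, not Impulcifer's actual code; the "room" is an invented direct sound plus one echo). The recording's spectrum is divided by the spectrum of the exact sweep that was played, which is why substituting a different sweep would yield a wrong impulse response:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
k = np.log(20000 / 20)
sweep = np.sin(2 * np.pi * 20 * (np.exp(k * t) - 1) / k)  # 1 s, 20 Hz-20 kHz

# Toy "room": direct sound plus one echo 100 samples later at half level.
h_true = np.zeros(256)
h_true[0], h_true[100] = 1.0, 0.5
recording = np.convolve(sweep, h_true)

# Deconvolution: divide by the spectrum of the SAME sweep that was played.
# The tiny regularization term guards against near-zero sweep bins.
n = len(recording)
S = np.fft.rfft(sweep, n)
ir = np.fft.irfft(np.fft.rfft(recording, n) * np.conj(S) / (np.abs(S) ** 2 + 1e-9), n)

print(int(np.argmax(np.abs(ir))))        # 0: the direct sound
print(round(float(ir[100] / ir[0]), 2))  # ~0.5: the echo is recovered
```

If the division used any other sweep, the ratio of the two spectra would no longer cancel and the result would be the impulse response smeared by the mismatch between the sweeps.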


----------



## morgin (Sep 29, 2021)

They do sound different: mine sounds bassier and the start vibrates more, whereas yours is gradual and cleaner sounding.

Any idea why that is? I'm listening to both on my PC through the Behringer, connected to my headphones, the same as when I would be taking measurements.

I've tried connecting the headphones to the PC and playing the sweep, and connecting them to the Behringer and playing the sweep. I'm also getting the same sound through the speaker, so I'm guessing those measurements are not going to be correct either.


----------



## morgin

OK, so I figured out that the volume can make a big difference to the graph. Low volume gives me a better reading on my headphones, but leaves a headroom of 15 to 20 dB.

My next question, then: what is the maximum headroom I can aim for, for both headphones and speakers? Is 15-20 dB acceptable for good results?


----------



## musicreo

I looked again at your headphone measurement. It looks to me like the sweep is played a second time with a small time delay.
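
One way to test that hypothesis is to cross-correlate the recorded sweep against the clean test sweep: each arrival of the sweep compresses into a sharp correlation peak, so a doubled playback shows up as a second peak at the delay between the two copies. A numpy sketch with a simulated fault (the 240-sample delay and 0.6 level are invented for the demo):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
k = np.log(20000 / 20)
sweep = np.sin(2 * np.pi * 20 * (np.exp(k * t) - 1) / k)

# Simulate the suspected fault: the sweep arrives a second time,
# 240 samples (5 ms) later, at 60% level.
delay = 240
recording = sweep.copy()
recording[delay:] += 0.6 * sweep[:-delay]

# Matched filter: cross-correlate the recording with the clean sweep.
# Each arrival of the sweep compresses into a sharp correlation peak.
n = 2 * fs
xc = np.fft.irfft(np.fft.rfft(recording, n) * np.conj(np.fft.rfft(sweep, n)), n)
xc = np.abs(xc[: fs // 2])

print(int(np.argmax(xc)))            # 0: the intended playback
print(int(np.argmax(xc[50:]) + 50))  # ~240: the delayed duplicate
```

Running the same correlation against the real recorded sweep would show whether a second peak exists and how far apart the two copies are.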


----------



## morgin

Thanks for looking, but I don’t know what that means or how it would affect my measurements. Is there a command to fix it?

I’m also noticing a faint noise when people are talking in movies, especially in quiet moments when there’s only speech.


----------



## musicreo

If your speakers are disconnected and the monitor function on the Behringer is not active, I have no idea what could cause this problem. You should check the speaker measurements to see whether they also show the strange double peaks in the spectrum.


----------



## morgin

What program did you use for that graph? So I can check it with new measurements.

Maybe I'm playing the sweeps too loud and it's either echoing too much or bleeding into the other mic. I'm actually running the speaker at full volume, with the PC, the Behringer, and the amp all at full, to get the headroom down between 1 and 5 dB.


----------



## musicreo

morgin said:


> What program did you use for that graph? So I can check it with new measurements.


Audacity



morgin said:


> Maybe I'm playing the sweeps too loud and it's either echoing too much or bleeding into the other mic. I'm actually running the speaker at full volume, with the PC, the Behringer, and the amp all at full, to get the headroom down between 1 and 5 dB.


You use the full pre-amplification for the mics and everything at maximum volume?


----------



## morgin (Sep 30, 2021)

musicreo said:


> You use the full pre-amplification for the mics and everything at maximum volume?


Yeh, to get headroom between 1-5 dB, otherwise it's around 9-15 dB. Does headroom matter that much? I believe you said it's only for volume.

The mic gain is around the 9 o'clock position, otherwise it'll clip. But everything else is at full on the Behringer, PC, and speaker.


----------



## morgin

I have managed to resolve the issue. If anyone in the future is stuck with this problem, here is what I was doing wrong. The gain is what I needed to adjust to get a low headroom reading. I was using full volume on everything, which was too loud. Putting the volume a little above mid on the Behringer and the speaker, full on the PC, with the gain on both mics set to the 2 o'clock position and the amp to the 4 o'clock position, has given me really good plots. I am blown away by the details I am now hearing and the illusion of real speakers. I know I can do a little better with tweaks and a quieter room, but right now I am gobsmacked.


----------



## kalstein

How can I use webcam.html?
I don't have a webcam, so I used the DroidCam app, which makes my phone act as a webcam.

But when I open webcam.html (with MS Edge), there is no reaction.
Does it only work with a real webcam?


----------



## morgin (Oct 22, 2021)

Hi, I'm having a weird issue where 7.1 movies and games sound phenomenal but 5.1 movies don't; they only sound a little better than stereo. Have I overlooked something?

And can I ask what player you use for video playback that works best with 7.1, and whether there's a settings guide I should follow for that player?


----------



## Brandon7s (Oct 25, 2021)

This whole project is absolutely amazing. I just started trying Impulcifer this past weekend and I've already got a personalized BRIR that absolutely blows away anything else I've ever used before, and this is just in stereo without any room correction, though I just got the mic and calibration file yesterday, so that's on the to-do list for this week. This is the _very_ first time I've been able to use headphones and not feel like it's a massive compromise. I'm using HiFiMan Anandas right now and I can't wait to go back and transform my favorite HRIRs for use with my other headphones, as well as recording new BRIRs with them.

Toggling back and forth between the BRIR listening experience and completely dry is jaw-dropping. It really highlights just how unnatural and _bad_ headphones sound without virtualization. It's like all of the details and imaging are a jumbled and harsh mess to me, even with AutoEQ correction applied, and then BOOM, as soon as I activate HeSuVi with one of my favorite BRIRs, the sound unfolds into something that has so much more depth and dimension, not to mention it seems much more _clear_, like using the headphones without the virtualization is cramming too much sound through a narrow straw.

@jaakkopasanen - thank you so much for all of the work you've done here, it's almost impossible for me to overstate just how much of an impact this has had in my enjoyment of music. I can't use my speakers 95% of the time due to apartment life and now, for once, I don't even _want_ to listen to my speaker setup - there's no need since there's no compromise now!

 My goal this week is to create some 7.1 surround BRIRs and try it out in some games. If the localization is anywhere as near as good as it has been in stereo for me then I am prepared to have my mind blown.


----------



## jaakkopasanen

Brandon7s said:


> This whole project is absolutely amazing. I just started trying Impulcifer this past weekend and I've already got a personalized BRIR that absolutely blows away anything else I've ever used before, and this is just in stereo without using any room correction though I just got the mic and calibration file yesterday so that's on the to-do list for this week. This is the _very_ first time I've been able to use headphones and not feel like it's a massive compromise. I'm using HiFiMan Anandas right now and I can't wait to go back and transform my favorite HRIRs to be used with my other headphones, as well as recording new BRIRs with them.
> 
> Toggling back and forth between the BRIR listening experience and completely dry is jaw-dropping. It really highlights just how unnatural and _bad_ headphones sound without virtualization. It's like all of the details and imaging are a jumbled and harsh mess to me, even with AutoEQ correction applied, and then BOOM, as soon as I activate HeSuVi with one of my favorite BRIRs, the sound unfolds into something that has so much more depth and dimension, not to mention it seems much more _clear_, like using the headphones without the virtualization is cramming too much sound through a narrow straw.
> 
> ...


🤗


----------



## lowdown

Brandon7s said:


> This whole project is absolutely amazing. I just started trying Impulcifer this past weekend and I've already got a personalized BRIR that absolutely blows away anything else I've ever used before, and this is just in stereo without using any room correction though I just got the mic and calibration file yesterday so that's on the to-do list for this week. This is the _very_ first time I've been able to use headphones and not feel like it's a massive compromise. I'm using HiFiMan Anandas right now and I can't wait to go back and transform my favorite HRIRs to be used with my other headphones, as well as recording new BRIRs with them.
> 
> Toggling back and forth between the BRIR listening experience and completely dry is jaw-dropping. It really highlights just how unnatural and _bad_ headphones sound without virtualization. It's like all of the details and imaging are a jumbled and harsh mess to me, even with AutoEQ correction applied, and then BOOM, as soon as I activate HeSuVi with one of my favorite BRIRs, the sound unfolds into something that has so much more depth and dimension, not to mention it seems much more _clear_, like using the headphones without the virtualization is cramming too much sound through a narrow straw.
> 
> ...



This is a very good summary of my experience as well.  The sound stage, imaging, tonal balance, overall sound quality, detail and clarity using Impulcifer is by far the best sound I've ever heard in decades of listening.  Stunning is an understatement.  It's also as if the performers are live in front of me, as there's no perception of speakers at all, just the sound.  Truly astonishing.  I also don't listen to my speakers anymore, and in fact just sold them!  I live in a townhouse, and was actually seriously looking at buying a house so that I could play music any time at any volume without disturbing my neighbors.  That necessity totally disappeared when I found Impulcifer.  It literally, and dramatically changed my life, and has given me a huge improvement in my appreciation of music.  Jaakko deserves all the praise and recognition for the gift he's so generously given us.


----------



## morgin

Brandon7s said:


> This whole project is absolutely amazing. I just started trying Impulcifer this past weekend and I've already got a personalized BRIR that absolutely blows away anything else I've ever used before, and this is just in stereo without using any room correction though I just got the mic and calibration file yesterday so that's on the to-do list for this week. This is the _very_ first time I've been able to use headphones and not feel like it's a massive compromise. I'm using HiFiMan Anandas right now and I can't wait to go back and transform my favorite HRIRs to be used with my other headphones, as well as recording new BRIRs with them.
> 
> Toggling back and forth between the BRIR listening experience and completely dry is jaw-dropping. It really highlights just how unnatural and _bad_ headphones sound without virtualization. It's like all of the details and imaging are a jumbled and harsh mess to me, even with AutoEQ correction applied, and then BOOM, as soon as I activate HeSuVi with one of my favorite BRIRs, the sound unfolds into something that has so much more depth and dimension, not to mention it seems much more _clear_, like using the headphones without the virtualization is cramming too much sound through a narrow straw.
> 
> ...


It really is amazing, and the help from this forum is fantastic, especially for a noob like me.

Definitely do the 7.1 measurements, because they will change the immersion in games and movies and bring it to a whole new level. I've said it before, but the upgrade in sound is like going from VCD-quality video to 4K HDR OLED. It makes a big impact. I'm actually looking for a place that is quiet so I can measure as well as possible. I might even go as far as getting a hotel room if I have to.


----------



## jaakkopasanen

morgin said:


> It really is amazing and also the help from this forum is fantastic especially for a noob like me.
> 
> Definitely do the 7.1 measurements because they will change the immersion in games and movies and bring it to a whole new level. I’ve said it before but the upgrade in sound is like going from a vcd quality video to 4K hdr oled. Makes a big impact. I’m actually looking for a place that is quiet so I can measure as best as possible. Might even go as far to get a hotel room if I have to.


I've been thinking about renting a studio mixing booth for an hour to do measurements in properly treated room. That's probably better option than a hotel room if you have any studios within a reasonable distance.


----------



## Brandon7s (Oct 25, 2021)

lowdown said:


> This is a very good summary of my experience as well.  The sound stage, imaging, tonal balance, overall sound quality, detail and clarity using Impulcifer is by far the best sound I've ever heard in decades of listening.  Stunning is an understatement.  It's also as if the performers are live in front of me, as there's no perception of speakers at all, just the sound.  Truly astonishing.  I also don't listen to my speakers anymore, and in fact just sold them!  I live in a townhouse, and was actually seriously looking at buying a house so that I could play music any time at any volume without disturbing my neighbors.  That necessity totally disappeared when I found Impulcifer.  It literally, and dramatically changed my life, and has given me a huge improvement in my appreciation of music.  Jaakko deserves all the praise and recognition for the gift he's so generously given us.



"Like there's no speakers at all" is a great way to put it. Doing this whole Impulcifer/BRIR thing really has me wondering about the headphone experience in general. For instance, I've _never_ been able to get something like a distinct phantom center with _any_ headphone. The closest I've ever gotten to that kind of soundstage is with my Ananda, and its soundstage on its own still sounds very narrow, shallow, and congested compared to my LSR305s; with the 305s, I immediately get a very strong phantom center that sets a gloriously wide soundstage, and I don't even have to be in a picky spot in relation to the speakers. It's very obvious from my Impulcifer plots that my left and right ears have radically different signatures, so much so that _all_ of the --channel_balance options always sound unnatural and 'wrong', even though the channel balance is technically better.

 Great idea about booking a studio session in order to create some BRIR profiles, I'd be curious as to how well those would work when used outside of the studio they are recorded in.

Oddly enough, I got into this whole BRIR/HRTF rabbit hole because I'm a guitarist and have been on a journey to make playing guitar with headphones a pleasant experience. I'm going to try making a BRIR profile using my amp and guitar cabinet to see how close I can get to the feeling of playing the amp without headphones. I suspect it'll work great, though if it does, I have to figure out a decent way to use that amp BRIR with low latency (under 10 ms roundtrip is acceptable). I created a stereo IR of the OOYH HeSuVi presets a couple weeks ago and those worked decently well, so I don't think it'll be any more difficult to do the same with my own personal BRIR. If this works well for playing and monitoring guitar it's going to be a whole 'nother level of life-changing for me. Guitar is my greatest passion and being able to replicate an amp-in-the-room experience would be my holy grail. If this works as well as I hope then I'm going to bring it up in my usual guitar forum and see if I can get others to try it. I know people who would pay thousands upon thousands of dollars for a reasonable facsimile of the amp-in-the-room experience on headphones, though I know the hard part would be convincing folks to try it. It IS a bit intimidating to try out, but I also think its return on investment in both time and money is out of this world.
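For the latency budget, the dominant terms are usually the audio buffers themselves. A back-of-the-envelope helper (my own sketch; real interfaces add converter and driver overhead on top of this):

```python
def roundtrip_latency_ms(buffer_frames, fs, n_buffers=2):
    """Rough round-trip latency of a software convolution monitor chain.

    Assumes one input plus one output buffer of the same size; a real
    interface adds converter and driver latency on top, and a partitioned
    convolver can keep its own added latency down to roughly one block.
    """
    return 1000.0 * n_buffers * buffer_frames / fs

# 128-frame buffers at 48 kHz come to about 5.3 ms before hardware
# overhead, leaving some room inside a 10 ms round-trip budget.
```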


----------



## davidtriune

MayaTlab said:


> Another reference from Sound Professionals that might be better suited out of the box, with no soldering skills required, for this thread's application, is the following reference :
> https://www.soundprofessionals.com/cgi-bin/gold/item/SP-EAR-MIC-2
> Which is the stereo version of that :
> https://www.soundprofessionals.com/cgi-bin/gold/item/SP-EAR-MIC
> ...




I'm buying a pair of these tiny things and modding them to fit as far into the ear opening as possible. I had CS-10EMs and they didn't sound very good, probably because they stick way out of the ear, which obstructs the sound. I think the mic needs to sit in a way that lets the pinna "funnel" the sound into it, because that's the way our ears work.


----------



## davidtriune (Oct 25, 2021)

I have recorded several HRIRs using this awesome tool and also this method. I'm blown away by how a tiny impulse response waveform can produce these effects. And it only takes about 5 minutes to record, using either method.






This is recorded with the latter method, but it showcases what you can do in 5 min


----------



## lowdown

davidtriune said:


> I'm buying a pair of these tiny things and modding them to fit as far into the ear opening as possible. I had CS-10EMs and they didn't sound very good, probably because they stick way out of the ear, which obstructs the sound. I think the mic needs to sit in a way that lets the pinna "funnel" the sound into it, because that's the way our ears work.


Not sure about that.  I glued foam earplugs onto the back of my mics and the surface of the mic was not down in my ear canal.  My results were very good in terms of the spatial illusion of imaging and the sound in front of me.  But ears vary, so YMMV.


----------



## Brandon7s (Oct 25, 2021)

lowdown said:


> Not sure about that.  I glued foam earplugs onto the back of my mics and the surface of the mic was not down in my ear canal.  My results were very good in terms of the spatial illusion of imaging and the sound in front of me.  But ears vary, so YMMV.


My best results so far are also with the mics glued to foam earplugs and shallow placement, but I've ordered a set of Sound Professionals MS-TFB-2-11849 Master Series mics, since I'm curious about their performance and whether there are any noticeable improvements with the upgrade.

 I already cut the wings off of the lower-end Sound Pro mics that I bought for this project, so I'm going to see about removing the plastic shell housing the mic so I can make it as small as reasonable and do some measurements with a semi-deep insertion (not trying to go to the eardrum, just near the entrance of the ear canal). I feel like I've gotten the best possible mic measurements that I can near the surface of the ear, and while it's a huge upgrade over not using the BRIRs, I also see room for improvements over what I've gotten so far.

A measurement that takes in more of the ear structure could help get those few extra percent needed to be nearly 100% in comparison to my speakers. The speakers' advantage right now is still sharper imaging with a more stable center image.

My measurements right now have good frontal localization, but the image isn't consistent across the frequency spectrum. Some parts of songs seem smeared or stretched in the soundstage, making them hard to localize and separate from other elements in the mix. I've tried all of the channel balance options, and only the mids option doesn't screw up the sound beyond use for me, and I'm not sure what other options there are for improving imaging with Impulcifer. Maybe the reverb management feature? I only tried messing with it briefly, to no success, but I'm going to look more into it and figure out exactly how to use it.


----------



## morgin

Brandon7s said:


> My best results so far are also with the mics glued to foam earplugs and shallow placement, but I've ordered a set of Sound Professionals MS-TFB-2-11849 Master Series mics, since I'm curious about their performance and whether there are any noticeable improvements with the upgrade.
> 
> I already cut the wings off of the lower-end Sound Pro mics that I bought for this project, so I'm going to see about removing the plastic shell housing the mic so I can make it as small as reasonable and do some measurements with a semi-deep insertion (not trying to go to the eardrum, just near the entrance of the ear canal). I feel like I've gotten the best possible mic measurements that I can near the surface of the ear, and while it's a huge upgrade over not using the BRIRs, I also see room for improvements over what I've gotten so far.
> 
> ...


I tried with the bare mics all the way into my ear canal, without any foam or silicone, trying not to damage anything inside my ear. But the better results always came with the silicone, where my mics were sitting just at the entrance of the canal.

Reverb made a huge difference, and to cancel the echo effect I had to set the decay to 150 ms.


----------



## musicreo

davidtriune said:


> I have recorded several HRIRs using the awesome tool and also this method . I'm blown away by how a tiny impulse response waveform can produce these effects. And it only takes about 5 minutes to record it, using either method.



Recording your own impulse response with Impulcifer and determining the transfer function of an "audio system" with a Dirac delta sequence are two different things, not two different methods for the same thing.


----------



## musicreo (Oct 26, 2021)

morgin said:


> Reverb made a huge difference and to cancel the echo effect I had to set the decay to 150


A low SNR in this kind of measurement shows up as stronger apparent reverb. If you have to set the decay as short as 150 ms to avoid "echo effects", I would take that to mean your measurement has a poor signal-to-noise ratio.
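A quick way to put a number on this (my own sketch, not something Impulcifer reports): compare the impulse response's direct-sound peak to the RMS of its late tail, where any real room reverb should already have decayed away. If that tail floor is high, part of the "reverb" you hear is actually noise.

```python
import numpy as np

def ir_snr_db(ir, fs, noise_from_s=1.0):
    """Rough SNR of an impulse response: direct-sound peak vs. late tail.

    A real room decay falls smoothly; if the decay curve instead flattens
    into a constant floor, that floor is measurement noise, and on
    playback it is easy to mistake for long room reverb.
    """
    peak = np.max(np.abs(ir))
    tail = ir[int(noise_from_s * fs):]  # assume reverb has died out by here
    noise_rms = np.sqrt(np.mean(tail ** 2))
    return 20.0 * np.log10(peak / noise_rms)

# Synthetic IR: unit direct sound over a -60 dB noise floor.
fs = 48000
rng = np.random.default_rng(0)
ir = 1e-3 * rng.standard_normal(2 * fs)
ir[0] = 1.0
```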


----------



## morgin (Oct 26, 2021)

musicreo said:


> A low SNR in this kind of measurement shows up as stronger apparent reverb. If you have to set the decay as short as 150 ms to avoid "echo effects", I would take that to mean your measurement has a poor signal-to-noise ratio.


Sorry, I was under the impression the echo was from the room being mostly bare walls, and that I was just cancelling out some of the reverb.

How would a bad signal-to-noise ratio happen, and how would I prevent it? I thought I had a very good measurement; with your advice I'm now thinking it's not, and that I can get it better.


----------



## musicreo

morgin said:


> Sorry I was under the impression the echo was from the room being mostly bare walls. And I was just cancelling out some of the reverb


You are right. A bad room can also have this effect. Maybe it is just your room and everything is fine. But I also had a measurement with a lot of noise, which resulted in a strong reverb that was not caused by the room.
Can you show your unprocessed plots of the FC (left and right)? That will help to see if there is a noise problem.


----------



## morgin

Thanks, here they are. I'll put up all my plots in case you see something that I would surely miss.

These are pre-processing:


----------



## morgin

These are post-processing:


----------



## musicreo

The FC  from the pre plot would have been enough. I don't see a noise problem.


----------



## Brandon7s (Oct 27, 2021)

I did a bit of testing with various mic insertion depths and I've found that shallow insertions work best. When my new Sound Professionals mics come in I want to try some more sessions with the stock clip and see how that affects imaging and tonal balance. I was actually getting fantastic results with the clips before I cut them off for convenience, so maybe there's something about them that works well for me in getting quality captures.

What's your experience been like with room correction? I have a measurement mic but I got very poor results the one time I tried to make a BRIR with room correction. I only tried the single center-of-head correction, though, and not measuring one for each ear. If that still sounds bad to me then I'll try applying corrective EQ to my speakers and then record the usual Impulcifer captures with the EQ turned on for the speaker sweep. I know there are some mighty nasty spikes around 140 Hz and 90 Hz in my room, so my hope is that one of these methods will eliminate them automagically, or at least manually but with natural sounding results.

Also, I finally did a 7.1 surround BRIR session and got a halfway decent surround capture, though the side and rear angles are off a little, and so I got to try out gaming in glorious surround sound. I gotta say, it's dang amazing, even with a BRIR of questionable quality. And I just played Overwatch today, not even a game with decent audio. I tried to get into Battlefield V, since I think it has incredible audio and sound design, but I couldn't get the game to start; it just kept crashing to desktop. Oh well, I'll try again tomorrow.

  I definitely need to do a session with my speakers set up in far more optimal places in my room. I did that a couple days ago to great results. I want to try an ultra-nearfield stereo capture to see what kind of advantages would come with using Impulcifer but getting very little room response. That's the crazy thing about Impulcifer, it's not _just_ downloading great speakers, it's also like downloading a room and the speaker _placement_, too. I can move my speakers to stand near the center of my room and get a snapshot of the speakers in the best possible position, and then I can enjoy that anywhere. It's mind-blowing.

Does anyone have any idea how to use these stereo (or surround) BRIRs on an Android phone, by any chance? Or on a Qudelix 5K? I'm not familiar with any mobile software that can do convolution reverb or load impulse response filters, so I'm very interested in hearing any mobile success stories. This is just too amazing _not_ to use everywhere possible.

 One of the next things I need to do is to try this out with my Moondrop Blessing 2 IEMs, I have a feeling that will be an entirely new level of immersion. I've not had time to dive into exactly how to do it, but it sounds like a simple matter of generating a filter from AutoEQ that uses some math on my source headphone EQ curve and then copying the resulting measurement file into the usual impulcifer folder.

  I've always loved the bass response and depth of good IEMs but I've also found them too fatiguing to listen to as my daily drivers; Impulcifer might change that dramatically!

It's going to be a ton of fun in general to go through and rediscover my entire headphone collection, in the way they were _meant_ to be heard. I spent so much time and money searching for the next headphone that would take my audio experience to the next level, and I can only imagine how much more of both I would have spent if not for trying this stuff out for myself.

 Headphones now sound GREAT, and instead of being something that I _have_ to use because the alternative is to upset the girlfriend and neighbors, I listen to them because they are the best possible way for me to experience music in the comforts of my own home.


----------



## musicreo

Brandon7s said:


> Does anyone have any idea how to use these stereo (or surround) BRIRs on an Android phone, by any chance? Or on a Qudelix 5K? I'm not familiar with any mobile software that can do convolution reverb or load impulse response filters, so I'm very interested in hearing any mobile success stories. This is just too amazing _not_ to use everywhere possible.



The only easy way is to convert the files on a PC and copy them to your phone.
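As a sketch of what that offline conversion amounts to (my own illustration; the function name and the normalization choice are mine): convolve each input channel with the BRIR for each ear, sum per ear, then write the result out as an ordinary stereo file for the phone.

```python
import numpy as np

def binauralize_stereo(track, brir_ll, brir_lr, brir_rl, brir_rr):
    """Bake a stereo BRIR into a track for devices that cannot do
    real-time convolution (render on the PC, copy the file to the phone).

    track: (n, 2) float array; brir_xy is the IR from speaker x to ear y.
    Each ear hears both virtual speakers, so four convolutions in total.
    """
    left_in, right_in = track[:, 0], track[:, 1]
    left_ear = np.convolve(left_in, brir_ll) + np.convolve(right_in, brir_rl)
    right_ear = np.convolve(left_in, brir_lr) + np.convolve(right_in, brir_rr)
    out = np.stack([left_ear, right_ear], axis=1)
    return out / np.max(np.abs(out))  # normalize so the render can't clip
```

The obvious downside of pre-rendering is that it bakes one BRIR into every file, so switching presets means re-converting the library.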


----------



## morgin

Isn't there a way to run something like a Raspberry Pi as an external device with HeSuVi and our measurements on it? It could then be connected like a DAC to any device that outputs stereo, 5.1, or 7.1, such as a PS5, Xbox, or TV.


----------



## MayaTlab (Oct 27, 2021)

Brandon7s said:


> I did a bit of testing with various mic insertion depths and I've found that shallow insertions work best. When I get my new Sound Professionals mics in I want to try some more sessions with the stock clip and see how that affects imaging and tonal balance. I was actually getting fantastic results with the clips before I cut them due to convenience, so maybe there's something about them that work well for me in getting quality captures.



There is quite a lot of literature on the subject of optimal microphone placement for HRTF measurements which could probably be carried over to BRIR measurements.
In nearly all the articles I've read on the subject, it seems that measurements should preferably be made at least flush with the ear canal entrance to be accurate.

For headphones it's a little more complicated. Blocked ear canal mics may provide inaccurate results in the ear canal gain area compared to open ear canal solutions. Personally, I've also struggled to get consistent _comparative_ results between headphones above 7 kHz with various mics flush with the ear canal entrance or deeper (three different electret mics and two different DIY probe tubes), which makes me doubt that any one of them accurately measured my headphones in the first place. That said, listening tests with headphones that have sharp, high-magnitude peaks above 7 kHz make me think the probe tubes inserted somewhere near the DRP are _closer_ to the real absolute values, and therefore may provide more accurate comparative results.

I'm skeptical that accurate results can be obtained above 10-12kHz.


----------



## lowdown

Brandon7s said:


> What's your experience been like with using room correction? I have a measurement mic but I got very poor results the one time I tried to make a BRIR with room correction. I only tried the single center-of-head correction though and not measuring one for each ear. If that still sounds bad to me then I'll try implementing corrective EQ on my speakers and then record the usual Impulcifer captures with the EQ turned on for the speaker sweep. I know there's some mighty nasty spikes around 140 Hz and 90 Hz in my room so my hope is that one of these methods will eliminate that automagically, or at least manually but with natural sounding results.


I did measurements using my UMIK-1 at the two positions corresponding to my ear locations as close as I could guess.  I think I just got lucky.  The results are exceptionally good.  Trial and error is the only way I know to do this.  I did numerous in ear measurements and lots of variations of command line options until I hit the combination that I've been using since.  Most were off or way off for various reasons, some were pretty good, a couple are really good, and one is as close to perfect as I can imagine.  I also modified the Harman Curve file, and use the Equalizer APO in HeSuVi to tweak a few frequencies.  My speaker measurements were done using Audyssey correction as I figured that gave me the best chance of a decent balance, but of course each room and speaker system is different.  The pot of gold at the end of this rainbow was way more than worth the effort it took me to find it.


----------



## davidtriune

@musicreo Fine, an impulse response of virtual speakers. Same concept.


----------



## davidtriune

Brandon7s said:


> It's going to be a ton of fun in general to go through and rediscover my entire headphone collection over again, in the way that were _meant_ to be heard. I spent so much time and money searching for the next headphone that would take my audio experience to the next level and I can only imagine how much more of both I would have spent  if not for trying this stuff out for myself.
> 
> Headphones now sound GREAT, and instead of being something that I _have_ to use because the alternative is to upset the girlfriend and neighbors, I listen to them because they are the best possible way for me to experience music in the comforts of my own home.


I think binaural is the way music is meant to be heard. I really don't get why we still listen to stereo music after discovering binaural. Soon headphones will simply be made neutral, just for playing binaurally recorded music. The only thing is that headphones add pinna gain, so IEMs are technically supposed to be better at playing back binaural, since they can be made to simulate only the ear canal.

I mean, seriously, they should be mixing music like they do over at binaulab on YouTube.


----------



## musicreo

davidtriune said:


> @musicreo fine, impulse response of virtual speakers. same concept.


There is ViPER4Android, which can do convolution, but it is not a simple app that can just be installed.


----------



## Brandon7s

musicreo said:


> There is ViPER4Android which can do convolution but this is not a simple app that can be installed.


I actually just read a bit about that last night. There's also JamesDSP, which looks like it can be installed via Magisk, which might be a bit easier and doable without rooting. I'm going to look into this further and will report back if I have any success!


----------



## morgin (Oct 27, 2021)

davidtriune said:


> I think binaural is the way music is meant to be heard. I really don't get why we still listen to stereo music after discovering binaural. Soon headphones will simply be made neutral, just for playing binaurally recorded music. The only thing is that headphones add pinna gain, so IEMs are technically supposed to be better at playing back binaural, since they can be made to simulate only the ear canal.
> 
> I mean seriously, they should be mixing music like they do over at  binaulab on youtube


How come that video sounds better and more 3D with everything in HeSuVi deactivated? Activating HeSuVi makes it sound 2D.

Also, how do you guys know where your measurements are lacking? Is it from the plots and graphs, or some other way? As you all know, I'm an audiophile noob and don't know where to start my tweaks from.


----------



## davidtriune

morgin said:


> How come that sounds better and more 3d when deactivating everything on hesuvi? Activating hesuvi makes it sound 2d.


That's the point: those are fully binaural. You're adding a binaural effect on top of a binaural effect, so it won't work.


----------



## Brandon7s (Oct 27, 2021)

morgin said:


> How come that sounds better and more 3D when deactivating everything in HeSuVi? Activating HeSuVi makes it sound 2D.


Because it was mixed specifically for binaural audio, which is essentially what Impulcifer does: convert all audio to binaural audio as heard from the listener's position in relation to the speakers. So those mixes sound flat and ugly when using a personalized BRIR, because they're already mixed to produce a very similar result.
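Mechanically, what the convolver is doing at playback time is simple: each speaker feed is convolved with the measured impulse response from that speaker to each ear, and the per-ear results are summed. A minimal sketch of the idea (the `virtualize` function and the BRIR dict layout are illustrative, not Impulcifer's or HeSuVi's actual code):

```python
import numpy as np

def virtualize(stereo, brir):
    """Render a stereo track binaurally through a BRIR.

    stereo: float array of shape (n, 2); columns are the FL and FR speaker feeds.
    brir:   dict mapping (speaker, ear) to a 1-D impulse response, e.g.
            brir[("FL", "left")] is the measured path from the front-left
            speaker to the left-ear microphone.
    """
    # Each ear hears the sum of both speakers, each filtered by its own path.
    left = (np.convolve(stereo[:, 0], brir[("FL", "left")]) +
            np.convolve(stereo[:, 1], brir[("FR", "left")]))
    right = (np.convolve(stereo[:, 0], brir[("FL", "right")]) +
             np.convolve(stereo[:, 1], brir[("FR", "right")]))
    return np.stack([left, right], axis=1)
```

As a sanity check, feeding it unit impulses for the same-side paths and silence for the cross paths passes the input through unchanged.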


----------



## morgin (Oct 27, 2021)

So with your measurements, does 2D music sound the same as listening to this YouTube video with HeSuVi deactivated? Playing my music with my BRIR sounds the same, but from all around, and much clearer than without my BRIR.

I'm asking so I have a target to aim for and know when I've got it close to the best I'll achieve. Maybe I've got a decent BRIR but I'm only 50% of the way to the quality I should be aiming for.


----------



## davidtriune

It won't sound as good, because you're simulating a stereo or 7.1 speaker setup with HeSuVi. In a fully binaural recording the sound source can be any point in space.

Stereo music through HeSuVi is still better than nothing, IMO. My point is that the stereo way of listening is outdated, and we should move towards binaural recording methods when we can.


----------



## morgin (Oct 27, 2021)

That's put me at ease with my current measurement, because others mentioned music sounds like it is live and performing right in front of you, similar to the YouTube video effect. And yes, music does sound way better with my BRIR, meaning that what I have currently is a good measurement. The only step up from here would probably be a full binaural recording, which would probably only be useful in gaming, where sound can be engineered from any direction, not just 7.1.


----------



## sander99

davidtriune said:


> This is recorded with the latter method, but it showcases what you can do in 5 min





davidtriune said:


> I think binaural is the way music is meant to be heard. I really dont get why we listen to stereo music after discovering binaural. Soon headphones will be simply made neutral just for playing binaurally recorded music.


I was already wondering why you posted that fragment. You don't seem to realise that binaural only works really well when it matches your personal HRTF. That is the whole reason why you personally have to measure the impulse responses with in-ear mics. For example, I listened to that fragment you posted and it does almost nothing for me with respect to out-of-head localisation; apparently my HRTF does not match yours at all. And that is why binaural recorded music never became a success. Only a small percentage of people will get a good result: those whose HRTF matches the dummy head or real head that was used to record. So if you got a good experience with that, you are one of a lucky few. How it sounds to you and how it sounds to someone else can be completely different.
An alternative could be a sound format with 3D sound objects and a renderer that creates a binaural rendering of that based on your personal HRTF. Of course I hope this is going to happen.
But until that happens, using normal recordings over virtual speakers (or real speakers) is the best we can do that works for everyone, and not only for the few people with a coincidentally matching HRTF. Plus it has the advantage that it can be used to play the enormous amount of music already recorded in stereo (or multichannel), mostly intended for playback over speakers.


----------



## davidtriune

sander99 said:


> I was already wondering why you posted that fragment. You don't seem to realise that binaural only works really well when it matches your personal HRTF. That is the whole reason why you personally have to measure the impulse responses with in-ear mics. For example, I listened to that fragment you posted and it does almost nothing for me with respect to out-of-head localisation; apparently my HRTF does not match yours at all. And that is why binaural recorded music never became a success. Only a small percentage of people will get a good result: those whose HRTF matches the dummy head or real head that was used to record. So if you got a good experience with that, you are one of a lucky few. How it sounds to you and how it sounds to someone else can be completely different.
> An alternative could be a sound format with 3D sound objects and a renderer that creates a binaural rendering of that based on your personal HRTF. Of course I hope this is going to happen.
> But until that happens, using normal recordings over virtual speakers (or real speakers) is the best we can do that works for everyone, and not only for the few people with a coincidentally matching HRTF. Plus it has the advantage that it can be used to play the enormous amount of music already recorded in stereo (or multichannel), mostly intended for playback over speakers.


Yeah... it's a shame it's not one-size-fits-all.

Does binaulab do anything for you at all? I was hoping a binaural renderer that uses an average of all ears might work for most people, because that's what they used.


----------



## Brandon7s (Oct 27, 2021)

davidtriune said:


> Yeah... it's a shame it's not one-size-fits-all.
> 
> Does binaulab do anything for you at all? I was hoping a binaural renderer that uses an average of all ears might work for most people, because that's what they used.


It sounds decent to me, far more immersive than non-binaural recordings for me but it's not as pleasant as using my own BRIR with ordinary recordings.


----------



## sander99

davidtriune said:


> Does binaulab do anything for you at all? I was hoping a binaural renderer that uses an average of all ears might work for most people, because that's what they used.


For example the guitar sounds about 3 inches out of my head to the left and slightly behind my ear, and Bowie's voice in the beginning sounds about 3 inches out of my head on the right. Some of the reverb seems to come from a little further away.
If I use a Smyth Realiser A16 with my own PRIR ("="HRIR) and HPEQ ("="headphone compensation) I hear virtual speakers at 2 meters distance (the distance I measured them at).


----------



## morgin

sander99 said:


> For example the guitar sounds about 3 inches out of my head to the left and slightly behind my ear, and Bowie's voice in the beginning sounds about 3 inches out of my head on the right. Some of the reverb seems to come from a little further away.
> If I use a Smyth Realiser A16 with my own PRIR ("="HRIR) and HPEQ ("="headphone compensation) I hear virtual speakers at 2 meters distance (the distance I measured them at).


I hear the same as this, which means the head they used for recordings is similar to my head. Can't people who have these head shapes use their (binaulab's) measurements for a true binaural experience in HeSuVi, until there's an easier way for people to get HRIRs?


----------



## musicreo

For me the YouTube video sounds like everything is coming from the side and behind, but very close to my ears. I also don't understand why the singing voice rotates. It is as if I had activated the rotation on the virtual speaker shifter on my Asus soundcard. This is not how music should be recorded.


----------



## morgin (Oct 28, 2021)

Try the others, not the music ones. They have a true sense of people walking around and talking from different spaces. The music ones do have the vocals moved around, which is off-putting.

Some people are put off by the investment and the need to know how to use the command line. That's why I was asking if there was a way people could use binaulab's, or others', average-head-shape measurements to try with HeSuVi.

Or use their binaural recordings, mix them with our measurements, and then fill in all the blanks.


----------



## musicreo (Oct 28, 2021)

morgin said:


> Try the others, not the music ones. They have a true sense of people walking around and talking from different spaces.


I will test some other clips.



morgin said:


> Some people are put off by the investment and the need to know how to use the command line. That's why I was asking if there was a way people could use binaulab's, or others', average-head-shape measurements to try with HeSuVi.


I'm not sure I understand your question. You can use any impulse response you want in HeSuVi/EQ APO, so you would need the impulse response from binaulab to use it.


----------



## Brandon7s (Oct 28, 2021)

I messed around more with room correction today and had much better results than the previous time I attempted it. This time I didn't use the room EQ correction within Impulcifer, as using that in the past has given me very wonky-sounding results and destroyed all localization. Instead, I did correction via REW and then used Equalizer APO/Peace to apply the filters that were calculated. The improvement in sub-500 Hz frequencies is pretty dramatic. I then created a stereo BRIR with Impulcifer as usual, but with APO active only during the speaker measurement part. This translated _very_ well to the resulting BRIR!

While I was taking the room response measurement with REW, I also took some 'mock' measurements that were generally where I'd expect my left and right ears to be. This made it very clear that even moving the mic a few inches left/right gives a different measurement, so I'm going to attempt doing the room compensation entirely within Impulcifer again, but this time utilizing specific measurements for each ear. I have a theory about how to make this easier than using a webcam for precise positioning: using two mic stands. I'm going to set the stands up as physical guides so there's a part of them touching where I'd like to place the measurement mic for each ear. I should then be able to place the mic in each stand one at a time and use the physically marked location for reliable mic placement.

I also just received my Rode VXLR Plus XLR adapter, so I can use my audio interface's phantom power and ditch the battery pack that I've been using so far. I'm hoping that helps improve SNR at least a little bit. My Sound Professionals MS-TFB-2 mic pair should arrive either tomorrow or early next week, and I _know_ that will improve the signal-to-noise ratio quite a bit; it'll be interesting to see if there's a noticeable improvement over my current measurements with it. I'll report back once I've been able to give it a whirl.


----------



## jaakkopasanen

Brandon7s said:


> I messed around more with room correction today and had much better results than the previous time I attempted it. This time I didn't use the room EQ correction within Impulcifer, as using that in the past has given me very wonky-sounding results and destroyed all localization. Instead, I did correction via REW and then used Equalizer APO/Peace to apply the filters that were calculated. The improvement in sub-500hz frequencies is pretty dramatic. I then created a stereo BRIR with Impulcifer as usual but with APO only active during the speaker measurement part. This translated _very_ well to the resulting BRIR!
> 
> While I was taking the room response measurement with REW, I also took some 'mock' measurements that were generally where I'd expect my left and right ears to be. This made it very clear that even moving the mic a few inches left/right gives a different measurement, so I'm going to attempt doing the room compensation entirely within Impulcifer again, but this time utilizing specific measurements for each ear. I have a theory about how to make this easier than using a webcam for precise positioning: using two mic stands. I'm going to set the stands up as physical guides so there's a part of them touching where I'd like to place the measurement mic for each ear. I should then be able to place the mic in each stand one at a time and use the physically marked location for reliable mic placement.
> 
> I also just received my Rode VXLR Plus XLR adapter, so I can use my audio interface's phantom power and ditch the battery pack that I've been using so far. I'm hoping that helps improve SNR at least a little bit. My Sound Professionals MS-TFB-2 mic pair should arrive either tomorrow or early next week, and I _know_ that will improve the signal-to-noise ratio quite a bit; it'll be interesting to see if there's a noticeable improvement over my current measurements with it. I'll report back once I've been able to give it a whirl.


The webcam.html is exactly for this. Also Impulcifer has options for controlling the frequency limit of room correction. It sounds like your problems might be due to attempting to correct too high frequencies in the room response.


----------



## musicreo

davidtriune said:


> I'm buying a pair of these tiny things and modding them to fit as much into the earhole as possible. I had CS-10EMs and they didn't sound very good, probably because they stick way out of the earhole. This obstructs the sound. I think it needs to sit in a way that the pinna can "funnel" the sound into it, because that's the way our ears work.



Did you get any specifications for these microphone capsules? The very small size is probably very good for positioning, but I wonder how it affects the SNR or sensitivity.


----------



## davidtriune

musicreo said:


> Did you get any specifications for these microphone capsules? The very small size is probably very good for positioning, but I wonder how it affects the SNR or sensitivity.


Nope, but I'm looking into one of the Primo $10 capsules instead; at least those have specifications on the order page. (I could probably just email Sound Professionals for the specs, though.)
Also, saving $30 on the cable is nice.


----------



## Brandon7s

I'm trying to troubleshoot why my headphone plots have been so jagged, and I'm hoping you fine folks might be able to help. The resulting BRIRs _sound_ decent-to-great, but one thing I'm starting to notice is random frequencies popping up far too strongly on either the left or the right side for no apparent reason. For instance, in a female vocal line, part of a word will suddenly pop up far left of the soundstage while most of the rest of the line is stable and centered. This seems to me to indicate that the very spikey headphone measurements are a likely culprit.

 I've read through this thread multiple times to see how others have solved this, and the main two takeaways I've seen are to make sure monitoring is disabled (it is, I've triple-checked) and then to keep the volume below very high levels. I just got done experimenting with a variety of playback and recording levels when taking the measurements and this seems to have zero effect on the headphone measurements being jagged and spikey. The headphones I'm using right now are the Hifiman Anandas. I've also tried this with DT770s, ATH-M50s, and DT1990s and all have the same jagged appearance, though the Ananda seems to exhibit it the worst.

I've tried all of the channel balancing options when processing the BRIRs, and all except for Mids sound more unbalanced than leaving the channel balance alone and keeping it stock. Here are some screenshots of the plots so you can see what I mean. These are all measurements from my Ananda with the MS-TFB binaural mics glued to foam earplugs with the plastic wings cut off - I've got 2 Rode VXLR+ phantom power adapters. My audio interface is a MOTU Ultralite MK3, which is well above decent. My speakers are a pair of mark 1 JBL LSR305s. Placement in the room isn't great, but I figured I'd worry about that after troubleshooting this headphone measurement problem.

Headphones.png: this is what I mean by "jagged"; there's not a single segment that is even close to flat.





Results.png:




Post FL-left (I can show the rest if it helps):





Pre FL-left: this one looks particularly noisy; definitely not the best Pre measurement that I've taken so far.



I'd love to hear if anyone has some insight into this. Thank you!


----------



## morgin (Nov 1, 2021)

Brandon7s said:


> I'm trying to troubleshoot why my headphone plots have been so jagged, and I'm hoping you fine folks might be able to help. The resulting BRIRs _sound_ decent-to-great, but one thing I'm starting to notice is random frequencies popping up far too strongly on either the left or the right side for no apparent reason. For instance, in a female vocal line, part of a word will suddenly pop up far left of the soundstage while most of the rest of the line is stable and centered. This seems to me to indicate that the very spikey headphone measurements are a likely culprit.
> 
> I've read through this thread multiple times to see how others have solved this, and the main two takeaways I've seen are to make sure monitoring is disabled (it is, I've triple-checked) and then to keep the volume below very high levels. I just got done experimenting with a variety of playback and recording levels when taking the measurements and this seems to have zero effect on the headphone measurements being jagged and spikey. The headphones I'm using right now are the Hifiman Anandas. I've also tried this with DT770s, ATH-M50s, and DT1990s and all have the same jagged appearance, though the Ananda seems to exhibit it the worst.
> 
> ...



Mine were doing the same thing; I'm sure you have read my post on how I fixed it by not using full volume. The other thing I changed was the gain on both mics: as high as I can get it without the clipping lights coming on.

So: gain as high as possible, so the slightest sound triggers the signal lights without clipping, and volume around medium.


----------



## Brandon7s (Nov 1, 2021)

morgin said:


> Mine were doing the same thing; I'm sure you have read my post on how I fixed it by not using full volume. The other thing I changed was the gain on both mics: as high as I can get it without the clipping lights coming on.
> 
> So: gain as high as possible, so the slightest sound triggers the signal lights without clipping, and volume around medium.


I thought that might be the problem in my case as well, but running the headphone and speaker volumes fairly low while using preamp gain to get a headroom level of around 6 didn't change the results compared to cranking the speaker and headphone volume and lowering the preamp gain. I've not tried _maxing_ the preamp gain, since doing so would make my headphones and speakers nearly inaudibly low in volume due to the very high gain my MOTU interface's preamps supply.

UPDATE: @morgin, your solution definitely had an effect this time; I must not have been using low enough SPLs in my prior headphone measurements to see the improvement that adjusting the volume makes.

I just tried making a BRIR with VERY low SPL output from both speakers and headphones. Much lower than I'd usually listen, and well below conversational level. This indeed DID reduce the jaggedness of the headphone measurement, and the results are pretty good, definitely one of the better BRIRs I've gotten from Impulcifer so far. I'm also noticing far fewer occurrences of frequencies randomly popping out of place in the soundstage. It _is_ a little unfocused compared to the prior ones I've made, something I'd assume is an artifact of taking measurements with a whole lot of background noise; the signal-to-noise ratio while outputting very low SPLs is much lower than I'm comfortable with.

This still points me in the right direction, though. I wonder if the jaggedness is caused by one ear's mic picking up the signal from the _other_ headphone driver, either through open air and/or through the head itself, but that would only happen if the measurements are recording both left and right ears for both left and right sweeps, which I doubt is the case. I'm going to try a different output device just in case there's something wrong with the way my audio interface is outputting the sweep signal. That's a longshot, though. Besides that, I'm going to fiddle around with measuring the headphones at a variety of SPLs to see if I can find the sweet spot between smooth frequency response and low noise.
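One direct way to test the bleed theory would be to play a sweep through only one driver while recording both in-ear mics, then compare levels between the driven-side and opposite-side captures. A rough sketch (`separation_db` is a hypothetical helper, assuming float recordings; not an Impulcifer feature):

```python
import numpy as np

def separation_db(driven_ear, opposite_ear):
    """Level of the opposite-ear capture relative to the driven-ear capture,
    recorded while ONLY the driven ear's driver is playing.
    More negative means less bleed between channels."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(opposite_ear) / rms(driven_ear))
```

If the opposite-ear figure comes back only 20-30 dB down rather than buried in the noise floor, crosstalk really is reaching the other mic.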

This session's headphone measurements:




Results.png:


----------



## Brandon7s (Nov 2, 2021)

Another update. I've ruled out headphone output volume (SPL) as being the cause of these extra-jagged headphone measurements.

I just did a test where everything is the same except for the gain staging; I made 7 measurements, increasing the preamp gain by +5 dB and reducing headphone output by the same amount, thereby keeping the total headroom between 5.5 dB and 6.3 dB for each measurement. I checked the plots for each one, and the amount of 'jaggedness' in the headphone measurement plot was very close between all 7 measurements. It was minimal, much lower than I've gotten before, regardless of headphone SPL. The BRIRs are nearly indistinguishable from each other, even when comparing measurements from opposite ends of the SPL range.
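For reference, the headroom figure here is just how far the recording's peak sits below digital full scale; holding it constant while trading preamp gain against output level is what isolates SPL as the variable. A minimal way to compute it from a float recording (a sketch, not Impulcifer's code):

```python
import numpy as np

def headroom_db(recording):
    """Headroom in dB: distance from the recording's peak sample
    to digital full scale (1.0 for float audio)."""
    peak = np.max(np.abs(recording))
    return -20 * np.log10(peak)
```

For example, a sweep recording peaking at half of full scale reports about 6 dB of headroom.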

I also used my Qudelix 5K DAC/amp to power my headphones for playing back Impulcifer's sweeps. I'm 99.9% sure that changing the output device is what fixed it. I have to run a couple more tests to be certain, but I think there's some weird audio routing going on within my MOTU interface that is causing a feedback loop while recording at the same time as playing back. I wish that Impulcifer had support for ASIO, since that'd give me a lot more tools to narrow down exactly what is going on, but I'm not _too_ worried about the root of the problem - not as long as I have a fix for it.

The BRIRs that I made using these latest measurements are a big step up in audio quality. Balance is now dead-on, whereas it was never quite right before. I'm now getting a VERY strong center image, and instruments outside of the center are panned consistently and reasonably, instead of sounds popping out of nowhere nearly hard-panned to one side. I didn't even use a separate output for my speaker measurements, so doing that could be responsible for another improvement in quality. I'll give that a shot tomorrow.

  Hopefully this helps out anyone else who might be running into extra-jagged and weird headphone measurements: try separating your playback device from your recording device. Especially with MOTU interfaces.

New headphone measurement with preamp gain at +15dB, with output set to fairly high SPL (well above my usual listening volume). By the way, I'm pretty sure I know what's causing the channel imbalance in the low bass region: a wall reflection from my left speaker which is only a matter of inches away from a corner. The right speaker is in front of a wall by about the same distance, but NOT in a corner.




Results from the same measurement:


----------



## davidtriune (Nov 6, 2021)

hi all,

Thanks to jaakkopasanen for the amazing program.
I made a GUI to save myself from typing commands, but I think others would benefit from it too.



I attached "gui.py", to run it just drop it in your impulcifer folder and type "python gui.py" in the command line. The console messages will still show up as you run it.  Let me know if you run into any bugs.


----------



## morgin

davidtriune said:


> hi all,
> 
> Thanks to jaakkopasanen for the amazing program.
> i made a GUI to save myself from typing commands, but i think others would benefit from it too.
> ...


This looks great, it should help people out loads. So for mic calibration you just browse for the file on your computer?

Also, how does target level work? If I set it to -6 dB, will it keep it at that level?


----------



## davidtriune

morgin said:


> This looks great, it should help people out loads. So for mic calibration you just browse for the file on your computer?
> 
> Also, how does target level work? If I set it to -6 dB, will it keep it at that level?


Yes, but if you don't set it, it will automatically grab room-mic-calibration.txt (or .csv) if you have one.

Target level is like the average volume; keep it below 0 dB to prevent clipping.
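Assuming target level normalizes the recording's average (RMS) level, the arithmetic looks like this (a sketch; `apply_target_level` is a hypothetical helper for illustration, not Impulcifer's implementation):

```python
import numpy as np

def rms_dbfs(x):
    """Average (RMS) level of a float signal in dB relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

def apply_target_level(x, target_db):
    """Scale x so its RMS level lands at target_db (keep it below 0 dB)."""
    gain_db = target_db - rms_dbfs(x)
    return x * 10 ** (gain_db / 20)
```

So a target of -6 dB means the processed recording's average level ends up 6 dB below full scale, whatever level it was captured at.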


----------



## Brandon7s

davidtriune said:


> hi all,
> 
> Thanks to jaakkopasanen for the amazing program.
> i made a GUI to save myself from typing commands, but i think others would benefit from it too.
> ...


Great work! This will be a lot easier than copying and pasting commands from a .txt file I have on my desktop, haha. The only thing is that I have to use the --input_device parameter in order to record from my audio interface, and it doesn't look like that option exists in your GUI. Still, it will be nice and handy for those that don't need to specify their input and output devices.


----------



## davidtriune

Brandon7s said:


> Great work! This will be a lot easier than copying and pasting commands from a .txt file I have on my desktop, haha. The only thing is that I have to use the --input_device parameter in order to record from my audio interface and it doesn't look like that command exists in your GUI. Still, it will be nice and handy for those that don't need to specify their input and output devices.


Your audio interface doesn't show up in the recording devices? Have you tried changing the Host API first?


----------



## Brandon7s (Nov 2, 2021)

davidtriune said:


> Your audio interface doesn't show up in the recording devices? Have you tried changing the Host API first?


The particular interface I have, the MOTU Ultralite MK3, does show up there but isn't recognized as having enough channels. I have to force it to select a different input that cannot be selected as a default recording option in Windows for some reason. It's an interface designed entirely with ASIO in mind, so the usual Windows audio support is minimal.


----------



## jaakkopasanen

davidtriune said:


> hi all,
> 
> Thanks to jaakkopasanen for the amazing program.
> i made a GUI to save myself from typing commands, but i think others would benefit from it too.
> ...


Wowzers! This needs to go to the repo. Would you mind creating a pull request for this in Github?


----------



## musicreo

I put gui.py into my Impulcifer folder. On the command line I moved to the Impulcifer folder, but when I try to start the GUI no window appears.

The GUI will probably help some people, but for me it makes no real difference whether I start the GUI with one command or start the measurement with one command.


----------



## davidtriune

jaakkopasanen said:


> Wowzers! This needs to go to the repo. Would you mind creating a pull request for this in Github?


Sure! Just made a pull request. I'm glad you like it.


----------



## davidtriune

musicreo said:


> I put gui.py into my Impulcifer folder. On the command line I moved to the Impulcifer folder, but when I try to start the GUI no window appears.
> 
> The GUI will probably help some people, but for me it makes no real difference whether I start the GUI with one command or start the measurement with one command.


I'm trying to figure out how to make an exe file (PyInstaller just packages the entire program into one massive file instead of making it standalone).
Will let you guys know.


----------



## Brandon7s (Nov 4, 2021)

Okay, got another problem I've run into and I have a feeling it's something simple that I'm forgetting to do.

I'm trying to use a single room-FL,FR.wav recording from my measurement microphone for room correction with Impulcifer, and I keep getting a "division by zero" error with no idea why.

*Here's the command I'm using to record the room measurement:*
python recorder.py --play="data/sweep-seg-FL,FR-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/room-FL,FR.wav" --input_device="In 1-24 (MOTU Pro Audio)" --channels=1

*Processing with this command:*
python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --room_target="data/harman-in-room-loudspeaker-target.csv" --generic_limit=2000 --dir_path="data/my_hrir"

*And then here's the error:*
(venv) C:\Users\Brandon\Impulcifer>python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --room_target="data/harman-in-room-loudspeaker-target.csv" --dir_path="data/my_hrir" --plot
Creating impulse response estimator...
Running room correction...
Traceback (most recent call last):
  File "impulcifer.py", line 556, in <module>
    main(**create_cli())
  File "impulcifer.py", line 53, in main
    _, room_frs = room_correction(
  File "C:\Users\Brandon\Impulcifer\room_correction.py", line 44, in room_correction
    rir = open_room_measurements(estimator, dir_path)
  File "C:\Users\Brandon\Impulcifer\room_correction.py", line 175, in open_room_measurements
    rir.open_recording(file_path, speakers, side=side)
  File "C:\Users\Brandon\Impulcifer\hrir.py", line 59, in open_recording
    n_columns = round(len(speakers) / (recording.shape[0] // tracks_k))
ZeroDivisionError: division by zero


I've double-checked that the harman-in-room-loudspeaker-target.csv file is in the correct location, and it is. I also double-checked the room-FL,FR.wav to make sure it's not all silence, and it's not; it clearly shows 2 waveforms in sequence on a single track. I'm able to process this BRIR without any problem if I use --no_room_correction.

  Anyone have any ideas on what might be causing this and how I can fix it? Thank you!

*UPDATE*: Solved, probably. I changed the filename from "room-FL,FR.wav" to simply "room.wav" and that worked. I got the original command from the documentation on github, from the spot directly below how to record 7.1 surround using a stereo speaker configuration. The command there uses "room-FL,FR.wav", but it must not work with just a standard stereo speaker BRIR (rather than a 7.1 one). The section below that on creating a 7.1 BRIR with a single mono speaker says to use "room.wav", which is the reason I tried it.
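For anyone curious why this surfaces as a ZeroDivisionError rather than a clearer message: the line in the traceback floor-divides the recorded track count by the number of tracks expected per speaker group, and floor division truncates to zero when the recording carries fewer tracks than the FL,FR file name implies. A minimal reproduction of the arithmetic with hypothetical numbers (not Impulcifer's real inputs):

```python
# Sketch of the failing line from the traceback above:
#   n_columns = round(len(speakers) / (recording.shape[0] // tracks_k))
speakers = ["FL", "FR"]  # speakers named in "room-FL,FR.wav"
n_tracks = 1             # a mono room-measurement recording
tracks_k = 2             # tracks the estimator expects per speaker group

groups = n_tracks // tracks_k  # 1 // 2 == 0: floor division truncates to zero
try:
    n_columns = round(len(speakers) / groups)
except ZeroDivisionError:
    n_columns = None  # this is the "division by zero" the command reports
```

Renaming the file to room.wav changes how the expected tracks are counted, so the floor division no longer hits zero.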

 It sounds great, by the way! Really makes the low end come alive in a way that I was not able to duplicate before.


----------



## davidtriune (Nov 7, 2021)

^I seem to get that error randomly, and it goes away if I either (a) record again or (b) switch the test signal to the other one, e.g. the .pkl if you're using the .wav, and the .wav if you're using the .pkl.


I have a question: is the "microphone boost" option in Windows the same as plug-in/phantom power? It seems like it just boosts the incoming signal, because I get a ton of noise from it.





If not, then does anyone know of a cheap USB sound card with plug-in power? I'd go with the 48V->5V converter like everyone else is doing, but that doesn't seem as efficient as just buying a sound card that powers mics with 5V.

P.S. I found a bug in my program and updated it.


----------



## Brandon7s (Nov 7, 2021)

davidtriune said:


> ^I seem to get that error randomly, and it goes away if I either (a) record again or (b) switch the test signal to the other one, e.g. the .pkl if you're using the .wav, and the .wav if you're using the .pkl.
> 
> 
> I have a question: is the "microphone boost" option in Windows the same as plug-in/phantom power? It seems like it just boosts the incoming signal, because I get a ton of noise from it.
> ...



The microphone boost does not work like phantom power or plug-in power. All it does is digitally increase gain (along with noise). Plug-in power and phantom power both provide voltage to the microphone, which is required to get anything usable out of the mics.

If you can find a soundcard that provides plug-in power, go that route. Another option is a battery pack that supplies plug-in power. I bought the one from Sound Professionals, which works very well, though I later switched to the Rode VXLR+ adapters since I prefer using phantom power.


----------



## davidtriune (Nov 7, 2021)

Thanks a lot; also, this page is very enlightening for those wondering the same things as me.
However, it seems to be saying that 12V is better for "small mics" than 5V. Is this true, or is that meant for the non-binaural condenser mics? Maybe they're TOO small and 12V would fry them.


----------



## morgin

Hey all, just to confirm: with VB-Cable, are these the right settings? I'm sure before I was getting all 8 input levels working; now it's only two. (Sorry about the quality)


----------



## musicreo

@morgin 
I also only see two input levels in the VB panel but still all channels work for me.


----------



## musicreo

davidtriune said:


> Thanks a lot, and also this page is very enlightening for those wondering the same things as me.
> However, it seems to be saying that 12v is better for "small mics" than 5v. Is this true, or is that meant for the non-binaural condenser mics? Maybe they're TOO small and 12v would fry them.



Higher plug-in power often allows higher recording levels, but the optimal voltage is different for every capsule. I read that the usual range is 3 V to 10 V.
My two Rode VXLR+ adapters give me 5.12 V and 5.07 V of plug-in power on the Behringer, which is already completely adequate for most capsules.



davidtriune said:


> I have a question. Is the "microphone boost" option in my windows the same as plug-in/phantom power? Seems like it just boosts the incoming signal because I get a ton of noise from it.


It only increases the input level digitally, so your SNR doesn't improve. Audio interfaces have microphone pre-amplification that raises the input level without raising the noise by the same amount.



davidtriune said:


> If not, then does anyone know of a cheap USB sound card with plug-in power? I'd go with the 48v->5v converter like everyone else is doing, but it doesn't seem as efficient as just buying a sound card that powers mics with 5v.


The problem is that most soundcard manufacturers don't seem to care much about plug-in power.


----------



## Brandon7s (Nov 10, 2021)

Oh man, I'm nearly at a loss for words. I spent a few hours playing electric guitar through Impulcifer-created BRIRs last night and it was just UNREAL!

I mean, it felt and sounded REAL, so real that I still can't believe it. It's the first time I've actually felt like I had a cranked guitar amp in the room with me while using headphones, except it was even better than MY room. At first I was using Impulcifer on its own, which was glorious enough, but then I put an instance of Valhalla's Room reverb plug-in right after the convolution plug-in running my BRIR and basically dialed in an amazing room. Holy cow. It's like playing any guitar rig and amp I want, since I'm using a Fractal modeler, and in any ROOM I want at the same time. I just had to crank the overall volume on my headphone amp so that the speaker virtualization was prominent enough, and then adjust all other levels in the chain prior to the BRIR to control how loud I actually want things to be.

It was 100% convincing and it was the most fun I've had on guitar in AGES. If the folks on the guitar forums that I frequent were able to experience this, they would sell their firstborn to get this thing set up and working like I have. I'm trying to figure out a way to demo this but I don't think it's going to be anywhere near as easy as just posting a couple clips and having people understand what this thing does and how incredibly accurate it is. I'm going to start working on a post to put out in the community though, one explaining the process and the rewards, and see if anyone bites. It's frustrating having something so mind-blowingly awesome that is, no joke, changing my life, but having no simple way to convince others. I don't think anyone else will believe me when I say that I found the LITERALLY PERFECT amp-in-the-room sound from headphones. That's the holy grail that pretty much every modern guitarist has spent some time chasing and eventually been disappointed by.

 If you're wondering where all this is coming from all of a sudden, here's why.

I made a few more measurements yesterday and made a personal breakthrough with placing the mics at just the right depth: as far into the entrance of the ear canal as possible without actually going past it, using the glued-to-foam earplugs method that we've all been doing. Something about that placement suddenly made localization crystal clear. I then had another revelation: this stuff requires VOLUME. For me at least, the amount of volume needed to replicate the experience of truly convincing speaker virtualization is quite a bit higher than my typical non-HRIR listening requires. Previously, I was operating under the mentality that I was listening to the MUSIC, and so when I set levels I was setting them as I would without a BRIR, which is not high enough to get the full feeling of replicating a room. Once I realized that I'm NOT listening to music, and that I'm actually listening to my ROOM and the speakers that are placed in it, I was able to feel immersed and to set levels that replicate the experience exactly as I would if I wasn't wearing headphones at all and was just listening to my speakers as normal.

 Just that change in mentality and perception made a huge difference, but together with the latest measurements I've gotten, I just don't know how it can get any better. It is incredible and surreal!


----------



## lowdown

Excellent description of what I've been experiencing on all aspects. The incredible exhilaration of hearing sound so stunningly real, literally a life-changing experience, knowing what a huge impact it could have for so many, and at the same time facing the daunting hurdle of how to share it. It's why I hang out here piping in with my overly emotive ramblings, hoping to encourage others to make the effort to hear this. It's really great to see someone else who's opened the box to this treasure.


----------



## sander99

Brandon7s said:


> I'm trying to figure out a way to demo this but I don't think it's going to be anywhere near as easy as just posting a couple clips and having people understand what this thing does and how incredibly accurate it is.


The only real way to demo this is in person, invite someone, stick the microphones in his or her ears, etc.


----------



## morgin (Nov 10, 2021)

That’s beautiful man. I'm in awe with my recordings but I have a feeling I'm still not at the level you guys are experiencing. I wish I had the knowledge to be able to determine what’s wrong and tweak it to perfection.

Any chance of a picture of your mics glued to the foam and maybe how they look placed in your ear?


----------



## lowdown

morgin said:


> That’s beautiful man. I'm in awe with my recordings but I have a feeling I'm still not at the level you guys are experiencing. I wish I had the knowledge to be able to determine what’s wrong and tweak it to perfection
> 
> Any chance of a picture of your mics glued to the foam and maybe how they look placed in your ear?


Here's mine:


----------



## Brandon7s (Nov 10, 2021)

morgin said:


> That’s beautiful man. I'm in awe with my recordings but I have a feeling I'm still not at the level you guys are experiencing. I wish I had the knowledge to be able to determine what’s wrong and tweak it to perfection
> 
> Any chance of a picture of your mics glued to the foam and maybe how they look placed in your ear?


Sure! Here's a couple pictures of how I'm wearing the mics, along with a standalone shot of the mics so you can see how much foam I'm using. I've experimented with cutting the foam down to different lengths and as long as they are short enough for me to get an insertion that puts the mic capsules right up next to the entrance of the ear canal, I get good results.




Slightly different angle:





And here's the naked shot. These are the Master Series binaural mics from The Sound Professionals. I also have the non-Master Series version, and I completely removed the silicone sleeve that encompasses the mic capsules; this lets me get an even deeper fit, but I don't think they actually produce better results. I guess at a certain point there's no advantage to measuring with the mic further in, and in my experience the tonality gets progressively brighter the deeper the insertion, so the placement in the photo is the best that I've found so far. Also, trimming the foam TOO short makes it difficult to keep the mics facing the right direction due to lack of support from the foam.




The other thing I recommend you use, if you're not already doing so, is room correction with the Harman-in-room-loudspeaker-target.csv curve. It might SEEM like the bass is overwhelming, but it made a BIG difference in getting tonality that was realistic. It was off-putting at first, but after changing my mentality from approaching Impulcifer as a SPEAKER emulation to approaching it as a ROOM emulation, something "clicked" and now it makes perfect sense. Such a bassy curve more accurately represents the sound of speakers that are LOUD in my room, as I would typically get when I'm playing a guitar amp in my room, listening to a movie in a theater, or even seeing a band live. That kind of bass response is not something I get to enjoy on a regular basis, and it took me some time to get acclimated to getting THAT kind of sound from headphones.
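For anyone wondering what applying a room target actually does numerically, here's a toy sketch. The numbers below are made-up stand-ins, not the real Harman curve or a real measurement (the real curve ships as data/harman-in-room-loudspeaker-target.csv and its contents aren't reproduced here); the point is just that a room-correction EQ boils down to target minus measured response, band by band.

```python
import numpy as np

# Toy stand-ins: a measured in-room response and a bass-lifted,
# gently tilted target, both in dB at a handful of frequencies.
freqs = np.array([20, 50, 100, 500, 1000, 5000, 10000, 20000], dtype=float)
measured = np.array([2.0, 1.0, 0.5, 0.0, 0.0, -1.0, -2.0, -4.0])           # dB
target = np.where(freqs < 100, 4.0, 0.0) - 0.5 * np.log10(freqs / 1000.0)  # dB

# The correction EQ is simply the target minus the measured response.
correction = target - measured
print(np.round(correction, 2))
```

A bassy target like Harman's means large positive correction values at the low end, which is exactly the "overwhelming" bass boost being described.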

Overall tonality and frequency response of the BRIR you take make a HUUUGE difference in how convincing it will be. When I go back and listen to my older BRIRs, _the_ immersion-breaking factor is more often than not the thin sound with little to no low-end support. Localization is still good on most of them, but without that low-end support they don't truly convince my brain that I'm listening to decently loud speakers. Basically, the bass isn't meant to be heard so much as it is meant to be _felt_, and that made a huge difference to how realistic the speaker virtualization appeared to me. If you can't "feel" the bass from your headphones then you are probably not getting the most out of Impulcifer.

Also, what headphones are you using? I think that can have a dramatic effect on how convincing your results are. My Anandas are FAR, FAAAR more convincing than my DT1990s, ATH-R70X, HD6XX, or DT770s. Now that I've got a pretty good handle on getting great results out of my Anandas, I'm going to try working on getting the same results out of my other headphones. So far the BRIRs I've made with them have been quite lackluster, so I have a lot more experimenting to do.

Reverb management is also a really good thing to try out if you haven't yet. _I_ like it because it lets me set a short reverb time and use my own software plugins like Valhalla's Room and Liquidsonic's Reverberate 3, among others, to create the space that I want to experience. It's wild just how well this works. I can set up a room of any size, and while I'm changing parameters I can hear the changes in real time and can easily match the reverb settings to create a room of similar size to the one I'm actually in, just waaay better sounding - like the best-treated room of all time.

I'm still playing around a lot with the channel balance options but I've not settled on any particular setting as being optimal. Sometimes Trend works _great_ and a lot of times it just messes up my bass perception horribly. I still get great results without using any channel balance options, but they just aren't _perfect_, so more experimenting to be done there. I think manually adjusting left/right balance is probably a better option for me.

EDIT: one more thing that helped me appreciate Impulcifer's accuracy: realize that a truly good BRIR won't make it sound like your speakers are the sound source. They'll make it sound like the PHANTOM CENTER is the sound source. It's a subtle difference, but it's important. Sit down with your speakers and listen to them as normal. Get a feel for the phantom center and really focus on it. Pay special attention to the low end and how it seems to just appear out of nowhere and everywhere all at once. It won't SOUND like the bass is coming from your speakers even though it is.

Next, put on your headphones and your favorite BRIR. Instead of trying to determine if your brain is fooled into thinking that the sound is coming from the speakers, focus on that phantom center again. Use that to gauge how successful your Impulcifer results are rather than a general feeling of whether it sounds like your speakers are the source of audio.


----------



## morgin

@lowdown These look perfect I’m going to try using the foam plugs like you have


----------



## morgin

Brandon7s said:


> Sure! Here's a couple pictures of how I'm wearing the mics, along with a standalone shot of the mics so you can see how much foam I'm using. I've experimented with cutting the foam down to different lengths and as long as they are short enough for me to get an insertion that puts the mic capsules right up next to the entrance of the ear canal, I get good results.
> 
> 
> Slightly different angle:
> ...



Thanks for the detailed response it’s a big help and a lot of info that I would have not even thought about. I will definitely try your suggestions. Appreciate it so much


----------



## lowdown

morgin said:


> @lowdown These look perfect I’m going to try using the foam plugs like you have


The foam on mine could be a bit shorter; it's a matter of finding the balance between long enough to stay in place but short enough to let the mic sit close to the canal opening. I'm used to wearing foam earplugs, so inserting these far enough wasn't a problem.


----------



## morgin

Hi all, just wanted to update anyone having issues with surround sound movies. I've been using MPC-HC for my media playback and had it set up for 7.1 sound output, and it performed well; I've also set up and used VLC media player to compare. I was under the impression that all players would output 7.1 sound the same as long as they are set up correctly. Last night I decided to try potplayer and configured it to play 7.1 movies, and I have to say that potplayer has made a huge difference in the sounds I'm hearing. So if you use Impulcifer for media playback, please definitely give potplayer a try.


----------



## musicreo

morgin said:


> Last night I decided to try potplayer and configured it to play 7.1 movies, and I have to say that potplayer has made a huge difference in the sounds I'm hearing. So if you use Impulcifer for media playback, please definitely give potplayer a try



You should check the settings of potplayer. I guess it's doing some normalisation.


----------



## castleofargh

morgin said:


> Hi all, just wanted to update anyone having issues with surround sound movies. I've been using MPC-HC for my media playback and had it set up for 7.1 sound output and it performed well, I've also setup and used VLC media player too to compare. I was under the impression that all the players would output 7.1 sound the same as long as they are setup correctly. Last night I decided to try potplayer and configured it to play 7.1 movies, and I have to post that potplayer has made a huge difference in the sounds I'm hearing. So if you use impulcifer for media playback please definitley give potplayer a try


potplayer is still my favorite player, and aside from some very rare artifacts (for a fraction of a second, then all is well for a while, like a buffer issue of sorts could cause) on maybe 2 DVDs and one surround demo I found online, this player has been good to me. But IMO if you do get a clear sound difference, it's probably because some setting is causing it, and given all you can fool around with in potplayer, I would bet on it being the odd one.


----------



## morgin

I was also thinking someone who’s had good results should maybe make a YouTube video tutorial. Some of the steps are daunting for newbies like myself and a lot more people would probably try impulcifer if there was a visual guide to follow. Coz this software is too good for just a few to have.


----------



## Brandon7s (Nov 11, 2021)

Okay, progress update for those who might be interested. I just finished my 60th measurement session and this new BRIR is absolutely BONKERS! I just tried A/Bing it with my Ananda vs. my speakers and I can't tell them apart anymore. This is the first time I've gotten close enough to the real-deal with Impulcifer that I simply cannot tell if audio is coming from my speakers or from my headphones, once volume matched. My last couple of measurements were VERY good, or at least I THOUGHT they were, but this is on a whole different level. You'll probably see why once you see the charts.

I've been having a heck of a tough time getting good channel balance. It's always off somewhere, either the low end or the high end, and nailing it perfectly has eluded me until this session. The reason why is that it appears my right and left ears are RADICALLY different, acoustically. How do I know? Take a look at this Headphones plot:




For anyone with relatively balanced left/right hearing, this would sound horrifically out of balance, since there's nothing even _close_ to balanced here! The only range in which my ears hear the same frequencies at roughly equal volume is that small section from 700 to 1,000 Hz, and that's all. But the sound is _perfect!_

I tried using the average, mids, and trend --channel_balance options and all of them made the BRIR practically unusable. But with no channel balance at all? It's incredible. This explains so much about my experience with headphones in general. I don't get phantom centers with _any_ headphones except for IEMs, and even with those it's weak and ill-defined. Most of my headphones sound roughly the same to me when I'm not using an Impulcifer BRIR; however, _with_ the BRIR it's completely different. I can hear all of each headphone's characteristics, just as if I were listening to different speakers.
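As a point of reference for anyone comparing the options: a `mids`-style balance correction presumably amounts to something like the sketch below, averaging the left/right level difference over a midrange band and applying it as a flat gain. These are made-up numbers and only an illustration of the idea, not Impulcifer's actual code, so you can see why it backfires when your ears are genuinely asymmetric: it "corrects" a difference that your hearing needs to be there.

```python
import numpy as np

# Made-up left/right magnitude responses in dB at a few test frequencies.
freqs = np.array([50, 100, 300, 1000, 3000, 8000, 16000], dtype=float)
left = np.array([-1.0, 0.0, 0.5, 1.0, 0.5, -2.0, -5.0])
right = np.array([-2.0, -0.5, 0.0, 0.0, -0.5, -4.0, -6.0])

# Average the channel difference over the mids only, then apply it as flat gain.
mids = (freqs >= 100) & (freqs <= 3000)
offset = np.mean(left[mids] - right[mids])  # how much louder left is, on average
right_balanced = right + offset
print(round(offset, 2))  # flat gain (dB) applied to the right channel
```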

Now take a look at this corresponding Results plot:


It's nearly dead-on in balance at every frequency when compared to past results. This is the first time I've gotten anything even _close_ to this tight of a balance match.

Here's an example from the prior 2 measurements I've made; both are my previous favorites that I created earlier this week. Here you can see the wacky channel balance plot, which isn't TOO far off of my most recent measurement in the low end, but everything past 1,000 Hz is completely different and much higher in amplitude compared to my new, perfect measurement:




Results from the same session: pretty tight low-end balance, and then it gets crazy at just about 100 Hz. This is why I could never get the high and mid frequencies to be properly aligned. No --channel_balance setting can fix this wide comb-like pattern:





Here's the second set of measurements from earlier in the week. This sounds a bit brighter but still pretty good sounding. Once again though, it has balance issues between the lows and highs and no --channel_balance options could fix that:



And the results from that session: by the way, the balance issues in the bass regions are HIGHLY apparent, causing the low-end to move left to right depending on the frequency being played. It's super annoying to try to monitor bass guitar with this happening, as you can imagine. 




 So, what did I do to make this massive improvement from today's measurements? I placed the mics further into my ears than normal, that's all. It looks like there's a critical point at which the highs start to get attenuated to the levels that my eardrum receives, and if I don't push the mics in far enough, I don't get the effect of that attenuation.

I think that trying to judge channel balance by the headphones.png plots is NOT useful in determining the quality of the final results, and I had spent quite a lot of time trying to figure out how to "fix" that problem... but it was never _actually_ a problem! The problem was channel balance versus the speakers, as shown in the results.png plots. THAT is what you need to try to perfect in order to get a truly amazing quality measurement, I believe.


----------



## morgin (Nov 11, 2021)

Keep the details incoming. They are a big help. I’m planning on another recording session and want to try all your suggestions because although localisation is very good it still doesn’t feel like I’m listening to speakers in my room so I know I’m nowhere near what I can get.

How far into your ears did you set your mics?

And are you suggesting no balance trend?

Also, with room correction, are you taking readings for both ears in every position with your mic? And if so, what angle are you positioning the mic at (toward the speaker / facing toward the ceiling)?

I’m thinking for room correction, if I can strap the mic to my head as close to the ear as possible, whilst sitting in the exact same position after the headphone and ear measurements without getting off my swivel chair, I’ll get a good room recording. But the mic will be facing the ceiling. Then I can do the same for the other ear.

Something like this.


----------



## sander99

morgin said:


> I’m thinking for room correction, if I can strap the mic to my head and as close to the ear whilst sitting in the exact same position after headphone and ear measurements without getting off my swivel chair I’ll get a good room recording. But the mic will be facing the ceiling. Then I can do the same for the other ear.


I don't think that is a good idea, because this way the sound you measure is influenced by the presence of your head; in a way, the room sound has gone through part of your HRTF. The sound from the right surround speaker at your left microphone, for example, will be strongly influenced by your head being fully in the path of the sound: the sound has to bend around your head and will be severely filtered. If you compensate for this filtering, you will be compensating for part of your HRTF and thus undoing part of the HRTF filtering that the speaker virtualisation is doing!


----------



## Brandon7s (Nov 12, 2021)

morgin said:


> Keep the details incoming. They are a big help. I’m planning on another recording session and want to try all your suggestions because although localisation is very good it still doesn’t feel like I’m listening to speakers in my room so I know I’m nowhere near what I can get.
> 
> How far into your ears did you set your mics?
> 
> ...



Glad to be able to help, hopefully you can get similar results, and if there's any way I can help you get there, I'm happy to do it! It sounds like hyperbole, but the results I'm getting now are literally life changing. And I mean LITERALLY, not figuratively. As a practically lifelong guitarist who's been relegated to apartment life for all but a few years, this is HUGE. I can FINALLY enjoy electric guitar at decent volumes without disturbing my girlfriend or my neighbors. Not only that, it's not a compromise! It's practically indistinguishable from the real deal while I'm in this room.

And outside of the guitar part of this, the fact that I can now hear songs as they are meant to be heard is incredible. I love my speakers, but I only get the chance to use them at very low levels, far lower than most people use for mixing recordings. So I've always used headphones, and I've spent a lot of money on them just trying to get something that sounds even slightly decent. It just turns out that the closest thing I've gotten is so far from "decent" that it's nearly impossible to describe. Then Impulcifer came along and now I want to listen to my entire music collection from start to finish again. I'm hearing things that I couldn't even imagine ever being able to hear, like a pen or guitar pick being dropped on the floor in recordings that I've listened to hundreds of times. It's really hard to overstate _just_ how much this has affected my enjoyment of music, both listening to it and playing it. THANK YOU @jaakkopasanen! If there's any way I can donate to you, just let me know and I'll be happy to do it.

Okay, rave over, let me see if I can answer your questions. 

Here's a shot of about how deep I think I had it for this last measurement. It's pretty far in, though not further than what I could reach with my pinky finger. That's the front of the mic facing directly at the camera, but as you can see it's obscured by a portion of my ear's geometry. I think how deep you have to get is really just a factor of your personal physiology and ear-shape.





About --channel_balance: I'm not saying that you shouldn't use the channel balance options; I'm saying that when you get a good, proper fit you won't _have_ to, and that once your measurement is good enough, using the channel balance options just makes things worse, because there's nothing to fix. Could you post your headphones.png and results.png from your favorite BRIR that you've made so far? I'd be curious to see if and how they resemble my older, less-improved measurements.

For room correction: I'm just using a general room mic placement. I did 5 different measurements into room.wav, with the mic vaguely near where my head would normally be, and then used this command to process the BRIR with room correction:
`python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --room_target="data/harman-in-room-loudspeaker-target.csv" --generic_limit=2000  --dir_path="data/my_hrir"`

I'm not using specific room measurements. I did try that once, but the results weren't any better than what I was getting with general measurements. I'd focus on getting a very accurate measurement in left/right balance first, and worry about the room EQ afterwards. It can actually make the illusion of listening to your own speakers even weaker, because we're somewhat familiar with how our rooms sound, and you can recognize yours more easily if you don't EQ out the imperfections.

 If there's anything else you can think of, feel free to ask!


----------



## morgin

sander99 said:


> I don't think that is a good idea, because this way the sound you measure is influenced by the presence of your head; in a way, the room sound has gone through part of your HRTF. The sound from the right surround speaker at your left microphone, for example, will be strongly influenced by your head being fully in the path of the sound: the sound has to bend around your head and will be severely filtered. If you compensate for this filtering, you will be compensating for part of your HRTF and thus undoing part of the HRTF filtering that the speaker virtualisation is doing!


This actually makes a lot of sense. So when room recording, do you think just being present in the room will affect the measurement? Because originally I'm sat down, but when doing the room recording I'm in a different place in the room.


----------



## morgin (Nov 12, 2021)

Brandon7s said:


> Glad to be able to help, hopefully you can get similar results, and if there's any way I can help you get there, I'm happy to do it! It sounds like hyperbole, but the results I'm getting now are literally life changing. And I mean LITERALLY, not figuratively. As a practically lifelong guitarist who's been relegated to apartment life for all but a few years, this is HUGE. I can FINALLY enjoy electric guitar at decent volumes without disturbing my girlfriend or my neighbors. Not only that, it's not a compromise! It's practically indistinguishable from the real deal while I'm in this room.
> 
> And outside of the guitar part of this, the fact that I can now hear songs as they are meant to be heard is incredible. I love my speakers but I only get the chance to use them at very low levels, far lower than most people mix recordings at, for sure. So I've always used headphones and I've spent a lot of money on them just trying to get something that sounds even slightly decent; it turns out that the closest thing I've gotten is so far from "decent" that it's impossible to describe. Impulcifer came along and now I want to listen to my entire music collection from start to finish again. I'm hearing things that I couldn't even imagine ever being able to hear, like a pen or guitar pick being dropped on the floor from recordings that I've listened to hundreds of times. It's really hard to overstate _just_ how much this has affected my enjoyment of music, both listening to it and playing it. THANK YOU @jaakkopasanen! If there's any way I can donate to you, just let me know and I'll be happy to do it.
> 
> ...


The actual help and willingness I'm getting from you guys is just beautiful.

Here are my PNGs:







I'm curious as to the volume you did your recordings at. It's difficult to say, I know, but how loud were your speakers when doing the recordings? Just at a comfortable listening volume, or something loud enough to disturb the people in the house?

Did you keep the same amount of volume for both headphones and speakers so they sounded the same to your ears, or did you concentrate more on the headroom being the same?


----------



## musicreo

@Brandon7s Are you doing stereo measurements or is it a 7.1 measurement?
I find it really strange that you have such a flat frequency response while your headphone plot shows such large variations, especially as you say that you don't use any channel balance option.


----------



## Brandon7s (Nov 12, 2021)

morgin said:


> The actual help and willingness I'm getting from you guys is just beautiful.
> 
> here are my png
> 
> ...



Ah, your Results graph looks very similar to my less-improved ones, though your low end matches a lot better than most of mine ever did. But you see where you're starting to get significant swings in channel accuracy from about 500 Hz upwards? I think that's indicating the problem. I'd take a guess and say that this sounds decent and sometimes it convinces you that you're listening to your speaker rig, but that it's not doing it consistently, and that you probably have issues with instruments changing position in the stereo field when they play different notes. I think you're not too far off, but you still need a better fit to fix that mismatch in the mids and highs.

By the way, you might need to remove the silicone sleeve that encases your mics in order to get them small enough to fit far enough into your ear to capture enough of its geometry for a properly balanced measurement. That's what I did with one of my pairs of mics (the cheaper one), and it definitely makes getting the fit less uncomfortable. The casing on the regular mics can be irritating and a bit sharp, so I spent some time sanding down the sleeve's sharper edges with a nail file to make it less sharp where I had cut the wings off. That helped, but if you have ears any smaller than mine then I think removing the silicone casing will practically be necessary.

Here's a photo of what I mean. Getting it positioned so the mics aren't covered up by the foam can be a bit tricky thanks to the wire getting in the way, but I think it's an option you can consider if you're not able to get the mics with the casing far enough in:






Good question about amplitude and listening volume when taking my measurements. When I run the headphone measurement sweeps I have the volume VERY loud - louder than I'd ever listen, and I'd never even try that kind of volume _without_ the earplugs in; it'd be damaging. I do that with the headphone measurements because I want to maximize headroom and minimize the noise floor. The best way to do that is to turn the preamp gain on the mics down and crank the volume of your headphones. You don't want distortion, though, so if you hear anything odd in the sweep playback - particularly any kind of weird "bouncing" sound in the first half second - then either your headphones are distorting or your output itself is. Both cases are fixable by reducing the volume in Windows (using the media volume Up and Down keys).
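If you'd rather check headroom and noise floor with numbers than by ear, a rough sketch like this works on any capture. The arrays below are synthetic stand-ins for the recorded sweep WAV (a second of preroll "silence" followed by the sweep), not real data:

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(1)

# Synthetic capture: 1 s of preroll (mic/preamp noise only),
# then a 2 s sweep-like tone peaking at half scale.
preroll = 0.0005 * rng.standard_normal(fs)
t = np.linspace(0, 2, 2 * fs, endpoint=False)
freq = np.logspace(np.log10(20), np.log10(20000), t.size)
sweep = 0.5 * np.sin(2 * np.pi * freq * t)
capture = np.concatenate([preroll, sweep])

headroom_db = -20 * np.log10(np.max(np.abs(capture)))            # distance to 0 dBFS
floor_dbfs = 20 * np.log10(np.sqrt(np.mean(capture[:fs] ** 2)))  # RMS of the preroll
print(round(headroom_db, 1), "dB headroom,", round(floor_dbfs, 1), "dBFS noise floor")
```

A few dB of headroom with the noise floor way down below is the picture you want; if the headroom number approaches 0 dB, the sweep clipped.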

I take the speaker measurements at sensible volume levels. I actually just got an SPL meter yesterday and checked what levels I typically use, and it ranges from about 72 to 80 dB. 72 dB is loud enough that I need to talk a little louder than normal to be heard over the music, so it's a little bit above conversational levels.

I actually did a test in a prior measurement session: I recorded 6 takes of both the speaker and headphone measurements. I started at VERY loud volumes from both speakers and headphones, pushing both nearly to their limits, and then between takes I reduced the volume by 5 dB while increasing the preamp gain by 5 dB, making each take a total of 10 dB quieter yet still retaining the same headroom (I aimed for 6 dB of headroom in this test). I then processed the BRIRs and listened to music with them while switching randomly between the 6 different measurements. *I couldn't tell the difference between any of them.* That's right, the volume I used when taking the measurements didn't seem to make any audible difference to the quality of the results. And this includes going from the nearly whisper-quiet measurement, where I struggled to hear any sound at all, straight over to the BRIR where the volume was probably loud enough to damage my hearing if I hadn't been wearing earplugs with the mics.

 I don't think that the volume used when taking the measurement for either headphones or speakers is a big factor in the quality of the end results.



musicreo said:


> @Brandon7s Are you doing stereo measurements or is it a 7.1 measurement?
> I find it really strange that you have such a flat frequency response while your headphone plot shows such large variations, especially as you say that you don't use any channel balance option.


Right now I'm just sticking to stereo measurements until I can consistently replicate these latest great results. I've tried all of the channel options with this latest measurement and all of them make the sound uncanny and warped, so it's definitely correctly balanced for my own ears.

I've taken two different 7.1 measurements, neither of which is all that good but both are VASTLY better than using generic Dolby Headphones or other off-the-shelf speaker virtualization software. I need to record another 7.1 session with the same good-fit that I've gotten this last time. I bet the results from that will be truly amazing for games and movies.

I think the headphones plot can be a bit misleading, at least until you have a result that is accurate and doesn't improve with the --channel_balance options. I think my hearing is just naturally lopsided, and that its wildly uneven nature is actually my standard for flat. The problem with using the headphones chart to judge accuracy _before_ getting a near-perfect measurement is that you don't actually know whether your hearing SHOULD be unbalanced in order to be balanced in your perception!

I believe the Results graph is where our focus should be. I'm not entirely sure what it's measuring, but I _think_ it's the variance between what you heard out of your left/right speakers vs. the EQ and timing changes that Impulcifer applies to your left/right headphone channels. So it's telling you how far off your _headphones_ are from your speakers after processing. And since matching the speakers is the ultimate goal here, it makes sense that this Results chart would be _the_ chart to look at if you're trying to see how accurately the BRIR is modifying your headphones' response to match the speakers. I think the balance in Headphones.png should really only be used as a troubleshooting tool once you have a good idea of what your ideal left and right channel responses should look like. And for some, like me, a wildly out-of-balance graph is the ideal.


----------



## sander99

morgin said:


> This actually makes a lot of sense so when room recording do you think just being present in the room will effect the measurement? Because originally I’m sat down but when doing the room recording I’m in a different place in the room


Depending on the exact situation there may be some effect, but if you keep away from the mic and the speakers, don't sit in the direct path between a speaker and the mic, and maybe even sit low on the floor, then it shouldn't be very dramatic, I think.


----------



## morgin

musicreo said:


> @Brandon7s Are you doing stereo measurements or is it a 7.1 measurement?
> I find it really strange that you have such a flat frequency response while your headphone plot shows such large variations, especially as you say that you don't use any channel balance option.


I'm doing 7.1 measurements and I did use the channel balance trend option for this measurement:

python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --fr_combination_method=conservative --room_target="harman-in-room-loudspeaker-target.csv" --specific_limit=750 --generic_limit=750 --channel_balance=trend --target_level=10 --decay=500 --dir_path="data/my_hrir" --plot



Brandon7s said:


> Ah, your Results graph looks very similar to my lesser improved ones, though your low end matches a lot better than most of mine ever did. But you see where you're starting to get significant swings in channel accuracy from about 500hz upwards? I think that's indicating the problem. I'd take a guess and say that this sounds decent and sometimes it convinces you that you're listening to your speaker rig, but that it's not doing that consistently, and that you probably have issues with instruments changing position in the stereo field when they play different notes. I think you're not too far off but still need a better fit to fix that mismatching in the mids and highs.


Yes, exactly how you describe it.



Brandon7s said:


> By the way, you might need to remove the silicone sleeve that encases your mics in order to get them to be small enough to fit far enough into your ear to capture the sound of enough geometry to get a properly balanced measurement.


My ear canals are too small for the foam plus mics; the wire keeps pushing the mic out when it's curved to come back out of the ear. The only way to get them further in is to use bare mics, but then I have the problem of them not touching the sides of my ear canals, and I just can't get both of them to face outwards along the canal.

I just tried with hot glue and foam and ended up breaking the wire off the solder joint. Just re-soldered them and thank god they're working again. Very fiddly with them being so tiny and closely soldered.



Brandon7s said:


> I don't think that the volume used when taking the measurement for either headphones or speakers is a big factor in the quality of the end results.


Interesting, so don't worry about the volume; just concentrating on positioning the mics in the ear is the best way forward.

Do I just aim to get this, and is this result from the headphone measurement or the speaker one?


----------



## Brandon7s (Nov 12, 2021)

morgin said:


> I'm doing 7.1 measurements and I did use channel balance trend for this measurement
> 
> python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --fr_combination_method=conservative --room_target="harman-in-room-loudspeaker-target.csv" --specific_limit=750 --generic_limit=750 --channel_balance=trend --target_level=10 --decay=500 --dir_path="data/my_hrir " --plot
> 
> ...



If I were you I wouldn't worry about doing 7.1 measurements just yet unless you have a full 7.1 hi-fi system where you can take a measurement without having to turn around or move the speakers - just to keep things simple, with fewer potential balance issues being exacerbated. I'd work on getting a solid stereo BRIR first, one where you can't tell your speakers apart from your headphones when A/Bing the two rigs. Once you have _that_, then I think it'd be worth trying for a full 7.1 measurement.

I'm with you on the difficulty of getting the mics to face the right direction with the foam. It's a pain! I'm going to experiment with rotating the mic piece so the wire sits in a spot that helps straighten out the position and keeps the mic from being covered up; with some experimentation I think I can find a spot that works well consistently. I've tried it without the foam at all, just the bare mic capsules outside of their silicone sleeve, and the results were far too bright to be usable. I think the foam is needed, or at least the ear canal needs to be blocked, and the easiest way to do that is with the foam+mic setup.

Keep experimenting with placement and I think you'll get there. I mean, my only truly great measurement came after 60 attempts, so obviously this isn't an easy thing to achieve great results with.



> Interesting so dont worry about the volume just concentrate on positioning the mics in ear is the best way forward.


Correct, that's what I think the best route is. I wouldn't even worry about room EQ yet, either. I've tried the measurement with and without room correction, and both are incredibly convincing with this latest measurement; both 100% fool my brain into thinking I'm listening to my speakers while I'm in that spot. It does sound better with the room correction turned on, but it doesn't sound any more _realistic_. It just sounds like my room went from sounding like garbage to sounding pretty good, acoustically.



> Do I just aim to get this and is this result from headphone measurement or speaker?


Those are the results from the combination of both; the purple line in the Results.png graph shows how far off your headphones measure compared to your speakers for that particular measurement session. The bigger the variance of the purple line from 0, the more different your headphones sound vs. your speakers. That's what I _believe_ it's measuring, though I don't know exactly how it's done. This means it's incredibly important that the position of the mic within your ear not change when you take off your headphones to record the speakers, so make sure you get as little mic movement as possible when switching to the speaker measurement after having taken the headphone one.


----------



## morgin

I think I fried my mic and will need to buy a new one. The solder joint doesn't work anymore


----------



## Brandon7s

morgin said:


> I think I fried my mic will need to buy a new one. The soldering doesn’t work anymore


Aw man, that's a bummer! 

I'm working on getting a consistently good fit with good results, and so far I've not been able to get another measurement anywhere near as accurate as the one I most recently posted about. I'm going to try going back to using a full length of foam; I think it's easier to be consistent in the placement of the mic with more than just the few millimeters I had going on. It easily gets twisted around, and one of them will face the wrong direction just enough to throw everything off. If I can get _consistently_ good measurements then I'll know I've hit on the right length and method, and I'll let you know when I get to that point.


----------



## morgin

Any suggestions on the next pair of mics? I have been using the Primo EM258s - or do they not make that much of a difference?


----------



## Brandon7s (Nov 12, 2021)

morgin said:


> Any suggestions on the next pair of mics I have been using the primo em258's or do they not make that much of a difference?


I've not noticed any practical difference between my Master Series mics and the cheaper ones, so I'm inclined to think that they don't make a significant difference. At least not until good measurements are achieved, then maybe the difference has more of an impact.


----------



## musicreo

Brandon7s said:


> the purple line in the Results.png graph is measuring how far off your headphones measure compared to your speakers for that particular measurement session. The bigger the variance in the purple line from 0, the more different your headphones sound vs. your speakers.


In the code the comment says: "Plot left and right side results with all impulse responses stacked."
If I understand correctly, the purple line shows the difference in amplitude between the sums of your left and right channels in the final result. That would not be the difference to the headphone measurement.
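
If the purple line is indeed a left-vs-right level comparison, the underlying computation can be pictured as a per-frequency level difference between the two sides' responses. A toy sketch with a naive DFT (whatever Impulcifer actually does, a real implementation would use numpy's FFT plus smoothing - this is just for intuition):

```python
import cmath
import math

def dft_mag(x):
    """Naive DFT magnitude spectrum (fine for short example signals)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def lr_diff_db(left_ir, right_ir):
    """Per-bin level difference in dB between two equal-length responses."""
    lm, rm = dft_mag(left_ir), dft_mag(right_ir)
    return [20.0 * math.log10(l / r) for l, r in zip(lm, rm)]

# A right response at half the left's amplitude sits ~6 dB below it in every bin:
left = [1.0, 0.5, 0.25, 0.125]
right = [s / 2 for s in left]
print([round(d, 1) for d in lr_diff_db(left, right)])  # [6.0, 6.0, 6.0, 6.0]
```

A flat line near 0 dB would then mean the two sides carry the same energy at every frequency; the narrow peaks and dips people see come from left/right features landing at slightly different frequencies.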


----------



## jaakkopasanen

Left-right balance without those narrow peaks and dips is not necessarily the goal, as those are caused by the peaks and dips in left and right occurring at ever so slightly different frequencies. That's caused by differing ear geometry, i.e. asymmetry, and is quite natural.


----------



## Brandon7s (Nov 12, 2021)

musicreo said:


> In the code the comment says. "Plot left and right side results with all impulse responses stacked."
> If I understand correct the purple line shows  the difference in the amplitude of your final result of the sum of your left and right channels. This would not be the difference to the headphone measurement.


Aha, I see now. Not the same thing as the differences between headphone measurement and speaker measurement then, good to know.

I've been attempting to get a measurement anywhere close to as even as my best BRIR to date - the one I posted earlier that's very straight and has only minor channel balance differences in the Results graph. I think microphone position and fit could be a big part of it, but replicating those measurements is proving very difficult even when trying to match the same fit I used that time. Maybe I just got lucky and had a fit that was _juuust_ right. I'll keep at it and see if I can nail down whatever factor made it work so well.

I've gone back and listened to some of my prior favorite BRIRs, and the main difference I can hear between those and this new perfectly-balanced BRIR is the low end. All of my less-than-perfect ones have significant spikes and dips of imbalance below 250 Hz. It's VERY obvious when one of those spikes/dips is having an impact while listening to music, since bass will suddenly drop out dramatically from one ear and not the other on specific notes. I feel like if I can solve that problem then any high-end channel-balance issues might fall into place with the same fix, so that's what I'm going to be working on today. I figure a good place to start is to find a sitting position where the left and right ear microphone balance is as close as possible and then see how that compares.


----------



## davidtriune

morgin said:


> I think I fried my mic will need to buy a new one. The soldering doesn’t work anymore


I think I fried my EM258s too after hot-gluing them. This graph used to be flat, but now it doesn't pick up any bass anymore. These things are delicate.


----------



## morgin

Sorry for your loss, buddy. I might get ones that are just the mic, no wires, since they're cheaper. Anyone know if it's just two wires that need soldering, or is there a third ground wire? I connected two on my previous one but the sound being picked up was too low.


----------



## davidtriune (Nov 12, 2021)

Brandon7s said:


> I've been having a heck of a tough time getting good channel balance. It's always off somewhere, either the low end or the high end, and getting it nailed perfectly has alluded me until this session. The reason why is that it appears my right and left ears are RADICALLY different, acoustically. How do I know? Take a look at this Headphones plot:


I suggest also checking whether your mics are channel-matched. I measured my SP-TFB-2s in REW with one mic on the edge of the table and one headphone channel pressed against it, then switched mics. They don't look well matched at all.





My EM258s measure way better than my SP-TFB-2s.

Your mics might be fine, but just saying that you can't always trust the QA on these.
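
As a cruder companion to the REW comparison: if you record the exact same stimulus with each mic in the same spot, the broadband RMS offset between the two captures gives a quick first check of sensitivity matching (it says nothing about frequency response shape, which the REW sweep covers). A minimal sketch, assuming each capture is a list of float samples:

```python
import math

def rms_db_offset(mic_a, mic_b):
    """Broadband level offset in dB between two recordings of the same
    stimulus; a well-matched pair should read close to 0 dB."""
    def rms(x):
        return math.sqrt(sum(s * s for s in x) / len(x))
    return 20.0 * math.log10(rms(mic_a) / rms(mic_b))

# If mic B is half as sensitive as mic A, the pair shows a ~6 dB offset:
a = [0.5, -0.25, 0.1, -0.4]
b = [s / 2 for s in a]
print(round(rms_db_offset(a, b), 1))  # 6.0
```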


----------



## Brandon7s

davidtriune said:


> I suggest also checking whether your mics are channel matched. I measured my SP-TFB-2 in REW with 1 mic on the edge of the table and one headphone channel pressed against it, then switched mics.  It doesnt look well matched at all.
> 
> 
> Yours might be fine, but just saying that you cant always trust the QA on these.


Oh, that's a good idea, I'll give that a try and see how they compare. The channel balance is pretty dang good when I use a very shallow insertion, or in the measurements from before I cut the silicone wings (which were even shallower).


----------



## Brandon7s (Nov 12, 2021)

Quick update on that graph of mine with very little channel balance variance: that is NOT the plot for the BRIR that I like so much. It's actually the same BRIR but with --channel_balance set to average. No wonder it looks so nice and clean compared to the rest. I must have missed changing some file names and it got mixed in with my other plots by accident.

THIS is the real Results.png plot from that BRIR.





Significantly more variation, though less than a lot of the ones I've gotten up until this point, especially in the low end. As you can see on this plot, the low end is fairly similar other than the overall level; there are no dramatic dips or spikes to make any notes pop out of place like I was getting before.

I tried the trend, mids, average, and left channel balance options on this just to see if there would be an improvement, and only the mids option didn't make it significantly worse (more "in my head"). The trend option "fixes" the channel balance "issues" that begin at about 100 Hz and go all the way to the bottom, but when I try that BRIR the bass is completely out of balance. My ears evidently need that low-end channel imbalance for the bass to sound truly balanced to my perception. The mids option didn't make any audible change at all, so I guess the mids are already as aligned as they can get, which makes sense. I honestly can't imagine getting a tighter stereo image than I'm getting with this measurement without channel balance options anyway. I think it's as good as it can get!

I made a couple of new measurements today with the same deep-insertion mic position I used for this one; the first result was horrible, but the second was very nearly as good as this one.

So, I take it back: getting a tightly balanced Results plot is not any kind of "secret to success" here. I believe it can give you some ways of troubleshooting - especially on multi-channel BRIRs - but I don't think trying to achieve any particular shape of channel balance variation is worthwhile unless you've already got one that is spot-on. I'm pretty sure the deeper insertion was the key to getting highly improved results compared to what I was getting previously.

Also, I really need to clean up my Impulcifer folder; it's getting ridiculous keeping all 60 sessions and every single variation that I try with them. I think with the measurement pictured above, which is my favorite so far, and the new one I just took that is just about as good, I can discard the rest into an archive folder and keep them around only in case I want to look back at their plots for informational purposes or experimentation.


----------



## morgin

This may be stupid, but I'm using multiple channel balance options together and getting mixed results.


----------



## davidtriune

morgin said:


> Sorry for your loss buddy. I might get ones that are just the mic no wires coz they're cheaper. Anyone know if its just two wires that need soldering or is there a third ground wire? I connected two on my previous one but the sound that was being picked up was too low



Some models have a ground, like the EM172; some don't, like the EM258. I assume the ground also helps with lowering noise. But not many models have as good a frequency response as the EM258. To me, frequency response is more important than SNR or sensitivity.

Some have fried their EM172 in the comments:


----------



## MayaTlab

davidtriune said:


> I suggest also checking whether your mics are channel matched. I measured my SP-TFB-2 in REW with 1 mic on the edge of the table and one headphone channel pressed against it, then switched mics.  It doesnt look well matched at all.
> 
> 
> 
> ...



If I understand correctly, you measured channel balance with the mics on the cable and the headphone's cup lying flat against the table, covering the mic?

May I suggest completing this test by measuring both mics against another, calibrated one (say, a UMIK-1) with speakers in the near field? The problem is that headphones' FR will vary more or less depending on the exact amount of pad compression going on, something you may not be able to fully control with the methodology used.

You can also request a matched pair from Sound Professionals, where they make more effort to match the L and R channels.


----------



## musicreo

davidtriune said:


> Some models have ground, like the EM172, some don't, like EM258. I assume ground also helps with lowering noise. but not many models have that good of a frequency response besides the EM258. To me, frequency response is more important than SNR or sensitivity.
> 
> Some have fried their EM172 in the comments:




So far I haven't fried any of my EM258s. Resoldering has always helped when the frequency response was strange.


----------



## musicreo

Brandon7s said:


> I'm pretty sure that using the deeper insertion was the key to getting highly improved results compared to what I was getting previously.


I also think that deeper insertion worked better for me.


Brandon7s said:


> Also, I really need to clean up my Impulcifer folder, it's getting ridiculous keeping all 60 sessions and every single variation that I try with them.



I also have a folder that includes measurements from October 2019 until December 2020, but I made the mistake of not always writing down how the measurements were done.


----------



## lowdown (Nov 13, 2021)

musicreo said:


> I have also a folder that includes measurements from October 2019 until December 2020 but I made the mistake not to always write down how the measurement were done.


Along those same lines, is there a way for Impulcifer to save the command line used along with the HRIR and plot files?  I tried so many options and lost track of which ones I used to create each HRIR.


----------



## Brandon7s

lowdown said:


> Along those same lines, is there a way for Impulcifer to save the command line used along with the HRIR and plot files?  I tried so many options and lost track of which ones I used to create each HRIR.


That'd be nice. I've been coding the parameters into the file names, along with the session numbers.


----------



## morgin

I'm at around 45 different measurements since I started a few months ago. I name all my folders with details like volume level, gain level, and mic type (foam or silicone) to keep track.

I have a measurement that is really good right now; every time it improves I think that's it, until I mess around with mic placement and channel balance options again. I got this measurement with --channel_balance=right:






Is there any other way to make this better without any more mic recordings, since right now I've got a broken mic? Any commands I've overlooked or not tried correctly?


----------



## Brandon7s

morgin said:


> I'm on around 45 different measurements since I started a few months ago. I name all my folder with details like volume level, gain level, mic (foam or silicone) etc to keep track.
> 
> I have a measurement that is so good right now, every time it improves I think that's it until I mess around with mic placement and channel balance options. I've got this measurement with --channel_balance=right
> 
> ...


The overall frequency response of that plot seems rather dark; if you want to brighten it up a little then I recommend trying the --tilt parameter - it works pretty well. Otherwise, I can only recommend trying every channel balance option and sticking with what sounds best to you.


----------



## morgin

--tilt option? I’ve not seen that in the GitHub guide; I’ve just double-checked. Can you link me to it so I can read up on it and try the different settings?


----------



## Brandon7s

morgin said:


> - - tilt option? I’ve not seen that on the GitHub guide I’ve just double checked. Can you link me to it so I can read up on it and try the different settings


I don't think it's actually in the Impulcifer guide; I think I saw it mentioned earlier in this thread. It works the same way as it does in AutoEq. I believe --bass_boost is also an option when processing BRIRs with Impulcifer.

It's on this page - scroll down to the Command Line Arguments section to see it.


----------



## morgin

I’m still learning the audiophile stuff, but I think I get what you mean by dark: the low frequencies are high and the response slopes downwards towards the highs. I believe I can balance it in Equalizer APO.

Am I just trying to level this graph out?


----------



## Brandon7s (Nov 13, 2021)

morgin said:


> I’m still learning the audiophile stuff but I think I get what you mean by dark as in the low frequencies are high and it slopes downwards towards the highs.  I believe I can balance it on eqalizerAPO.
> 
> Am I just trying to level this graph out ?


Yes, though if that sounds good to you then I wouldn't worry about it. A frequency response like that would sound muffled to me, so tilt could be used to raise the highs and lower the lows in order to bring back some clarity and 'sparkle'. That's going to depend on the individual, though.

And yes, you can do that in Equalizer APO. That's how I make most of my adjustments, but if you want to bake the change into the BRIR itself in order to avoid having to use EQ, that's an option.

What I'm trying to figure out is a way to use the EQ response of the BRIRs that Impulcifer creates to derive a good personalized EQ curve. I'm thinking of doing something like that equal-loudness test, but using the BRIR instead of a speaker as the target match. That way I can get some of the advantages of using the BRIR but without the crossfeed.


----------



## morgin

When I try to use the --tilt option during processing I get this:

(venv) C:\Windows\System32\Impulcifer>python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --room_target="data/harman-in-room-loudspeaker-target.csv" --generic_limit=2000 --channel_balance=right --tilt TILT --target_level=10 --dir_path="data/my_hrir" --plot
usage: impulcifer.py [-h] --dir_path DIR_PATH [--test_signal TEST_SIGNAL]
                     [--room_target ROOM_TARGET]
                     [--room_mic_calibration ROOM_MIC_CALIBRATION]
                     [--no_room_correction] [--no_headphone_compensation]
                     [--no_equalization] [--fs FS] [--plot]
                     [--channel_balance CHANNEL_BALANCE] [--decay DECAY]
                     [--target_level TARGET_LEVEL]
                     [--fr_combination_method FR_COMBINATION_METHOD]
                     [--specific_limit SPECIFIC_LIMIT]
                     [--generic_limit GENERIC_LIMIT] [--bass_boost BASS_BOOST]
                     [--tilt TILT]
impulcifer.py: error: argument --tilt: invalid float value: 'TILT'


----------



## Brandon7s

morgin said:


> When I try to use the -- tilt during processing I get this
> 
> (venv) C:\Windows\System32\Impulcifer>python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --room_target="data/harman-in-room-loudspeaker-target.csv" --generic_limit=2000 --channel_balance=right --tilt TILT --target_level=10 --dir_path="data/my_hrir" --plot
> usage: impulcifer.py [-h] --dir_path DIR_PATH [--test_signal TEST_SIGNAL]
> ...


Tilt needs a value. Try something like --tilt=2 as a starting point to make sure it's working.  Positive values tilt towards a brighter response while negative values tilt it to darker.
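
For reference, if Impulcifer's --tilt behaves like AutoEq's (a straight dB-per-octave slope), the gain it applies at each frequency can be sketched like this (the 1 kHz pivot is an assumption for illustration, not necessarily what the tool uses internally):

```python
import math

def tilt_gain_db(freq_hz, db_per_octave, pivot_hz=1000.0):
    """Gain of a straight spectral tilt: 0 dB at the pivot frequency,
    changing by db_per_octave for every doubling or halving."""
    return db_per_octave * math.log2(freq_hz / pivot_hz)

# --tilt=2 would lift the top end and pull down the bass:
for f in (250, 1000, 4000):
    print(f, round(tilt_gain_db(f, 2.0), 1))  # -4.0, 0.0, 4.0
```

So a positive value brightens (more gain the higher you go) and a negative value darkens, matching what Brandon describes.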


----------



## morgin

Tilt works to level out the graph and lessen the bass, but it also takes away some of the rear and side surround. I’ll have to try a few combinations, but it sounds better and more immersive when it’s dark.


----------



## morgin

How are these in-ear mics compared to the Primo EM258s? I’m going to add slim wires so they can wrap around the foam more easily and fit inside my ears.


----------



## musicreo (Nov 14, 2021)

On Micbooster you'll find: "The Primo EM258 is identical to the Primo EM158, which is no longer available."

The EM265 has much worse specifications.


----------



## morgin

Thank you, I’ve ordered some EM158Ns which were available, but they’re 5.8mm in diameter; hopefully they’re OK.

I’m reading through the forum once again, which I wouldn’t normally do for anything, but this stuff excites me to the bones because it’s so good. So I came across this by Jaakko:

“Or you could only run it once with the measurement microphone at the center of the head and copy that file as -left and -right but then it cannot be guaranteed that the results will be stellar”

I have a measurement with one room mic recording, but I’ve not copied that file as -left and -right. Is that something that needs doing?


----------



## jaakkopasanen

morgin said:


> Thank you I’ve ordered some em158n which were available but they’re 5.8mm diameter hopefully they’re ok.
> 
> I’m reading through the forum once again which I wouldn’t normally do for anything but this stuff excites me to the bones because it’s so good. So I came across this by Jakko
> 
> ...


The comment you quoted is probably very old. Currently Impulcifer works just fine with a single room.wav file.


----------



## morgin

Whilst you're online, jaakkopasanen, can I just say I love you, man! The feels I get just by listening to music, watching a movie, or playing games are unreal; I tear up a little. The 7.1 I have currently beats anything I've heard previously, with such clarity and positioning. I have no clue what most of the graphs mean, but even so I'm blown away. Once again, thank you so much, man.


----------



## jaakkopasanen

morgin said:


> Whilst you're online
> jaakkopasanen, can I just say I love you man! The feels I get just by listening to music, watching a movie or playing games is unreal, I tear up a little. The 7.1 I have currently beats anything I've heard previously with such clarity and positioning. I have no clue what most of the graphs mean but even with that I'm blown away. Once again thankyou so much man


Thanks, really means a lot!


----------



## morgin (Nov 14, 2021)

My room mic was not calibrated when I did my best measurement. Is it possible to calibrate it now and apply that to the measurement? And will calibrating the mic make a big difference?

Also, I've been reading about volume settings causing a 48/50 channel imbalance. Is that still a problem, given my recordings were done using Volume2, and if there is an imbalance can I fix it?


----------



## Brandon7s (Nov 14, 2021)

morgin said:


> My room mic was not calibrated when I did my best measurement. Is it possible to calibrate it now and apply it to that measurement. And also will calibration to the mic make a big difference?
> 
> Also reading on volume to having imbalance 48/50. Is that still a problem as my recording were done using volume2 and if there is an imbalance can I fix it?



You should be able to apply mic calibration just the same as normal. Simply download your mic calibration file in either TXT or CSV format, then put it in your my_hrir directory with the file name "room-mic-calibration.txt" or "room-mic-calibration.csv".

The difference the calibration file makes in my mic's case is pretty minor; I doubt I'd be able to hear it if I wasn't actively looking at just the right frequencies. That'll depend on how accurate your mic is without the calibration, though, so your mileage may vary.

I can't speak to the Volume2 issue since I don't use it.


----------



## morgin (Nov 14, 2021)

Found a file; it didn't make any difference. But I will try with more room measurements for each ear.


----------



## musicreo

morgin said:


> Found a file didnt make any difference. But I will try with more room measurements for each ear



What do you mean you found a file?


----------



## Brandon7s

musicreo said:


> What do you mean you found a file?


I think he means the mic calibration file that comes with their measurement mic. 

I'm not surprised it didn't make any difference. The adjustments made by the mic calibration files are typically minor (less than a couple dB in either direction for mine).


----------



## morgin

Brandon7s said:


> The other thing I recommend you use if you're not already doing so is room correction with the Harman-in-room-loudspeaker-target.csv curve.


Is this separate from just running the process command with the room file in the hrir folder? I thought the room correction was done automatically.


----------



## Brandon7s (Nov 15, 2021)

morgin said:


> Is this separate from just running the process command with the room file in the hrir folder? I thought the room correction was done automatically.


It's a couple of options added to the standard processing command. It won't be done automatically because the room correction needs a target curve to aim for. When I process my BRIRs, here's the base command that I use:

`python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --room_target="data/harman-in-room-loudspeaker-target.csv" --generic_limit=2000  --dir_path="data/my_hrir"`

I'll add --channel_balance=mids to this when I'm processing 7.1 BRIRs, just to be extra sure that it's all reasonably aligned. I'll sometimes add --target_level as well, with the value ranging from -8 to -14 if the original is too quiet. I've tried changing the --generic_limit value, but I honestly don't notice much difference above 500 Hz; setting it higher doesn't hurt anything for me, so I cap it at 2000 Hz as my default.

I run this with the room.wav in the my_hrir folder after having recorded 3 or 4 measurements in slightly different positions using the --append option when recording the room.wav file. Impulcifer will then automatically use that to get my room's EQ curve, and it uses the --room_target="data/harman-in-room-loudspeaker-target.csv" part to tell it how to adjust my room's curve to match that target. You can create your own custom curve too, if you'd like, but after trying that myself I ended up preferring Harman's unedited curve over any other, so that's what I use when processing all of my BRIRs now.


----------



## morgin

I found that the best measurement for me so far have been with speakers around 4 feet away from me, further away and they don't seem as clear or detailed.



Brandon7s said:


> I'll add --channel_target=mids


you mean --channel_balance=mids?


----------



## Brandon7s (Nov 15, 2021)

morgin said:


> you mean --channel_balance=mids?


Doh! Yes, channel_*balance*, not target. That's what I get for trying to type both at once!

By the way, has anyone had any luck translating their BRIRs for use with IEMs? I've tried to do this a couple of times with my Moondrop Blessing 2 pair, and my results have been lackluster due to the EQ being far too lean in the bass and too spiky and painful in the highs. I'm going to try using tilt and increasing the bass value past 4, but I don't think that'll be surgical enough to be on the same level of quality that I get with my Anandas. I don't have specific measurements for _this_ pair of IEMs, though, and I've been using the graphs that Crinacle has posted as my target curve for these. I'd love to get my actual pair measured, but I don't think that's going to happen any time soon.



morgin said:


> I found that the best measurement for me so far have been with speakers around 4 feet away from me, further away and they don't seem as clear or detailed.


It's interesting how our perception of the quality of the sound depends significantly on where we're listening. The ones I've created from a good distance, about 3 and a half feet apart, sound really good but _only_ while I'm sitting in that one specific location. When I'm _not_ sitting there, the BRIR takes on an overly mid-heavy signature and is a bit harsh, but when I'm at that position it's about as good as it gets! I've found that I like using my very near-field BRIR, where I sat about 25 inches from the speakers, at pretty much all locations; that's what I use when I'm not sitting at my usual desk spot. Considering this, I think it's a good idea to have a variety of different measurements to choose from depending on your own location and listening material.


----------



## lowdown

Brandon7s said:


> It's interesting how our perception of the quality of the sound depends significantly on where we're listening. The ones I've created from a good distance, about 3 and a half feet apart, sound really good but _only_ while I'm sitting in that one specific location. When I'm _not_ sitting there, the BRIR takes on an overly mid-heavy signature and is a bit harsh, but when I'm at that position it's about as good as it gets! I've found that I like using my very near-field BRIR, where I sat about 25 inches from the speakers, at pretty much all locations; that's what I use when I'm not sitting at my usual desk spot. Considering this, I think it's a good idea to have a variety of different measurements to choose from depending on your own location and listening material.


It is interesting.  As others have mentioned vision also plays a big role in hearing perception.  My speaker measurement was from about 8 ft, which is my stereo music listening spot.  The imaging and soundstage then in Impulcifer is as if I'm sitting about that far from the musicians, at the edge of the stage or in the recording session.  For me it's ideal for a realistic, holographic illusion of them playing in my living room, and getting every bit of the detail and resolution.  I've also tried sitting at my computer desk using that HRIR but the visual aspect changes the illusion a lot.  But then if I'm at my normal spot and close my eyes the spatial illusion is maintained.  Curious and astonishing.


----------



## morgin (Nov 15, 2021)

Brandon7s said:


> It's interesting how our perception of the quality of the sound depends significantly on where we're listening. The ones I've created from a good distance, about 3 and a half feet apart, sound really good but _only_ while I'm sitting in that one specific location.



That's what I'm going to do from now on. Because I only use one speaker, I'll measure from where my TV is and sit where I'd normally sit to visually trick my brain.


----------



## morgin

I've just received my two in-ear mics today that I need to solder. But if I can, I want to solder only one, as the other is working fine. What software do you guys use to measure the in-ear mics and compare how similar or different they are?


----------



## musicreo

To compare the mics I put them both in front of the speaker at 50cm distance and make a sweep with REW.


----------



## morgin

Finally worked out REW... I think. I had to solder both new mics, as the old one lost a fair bit of volume; I could tell just by listening to both of them.

I had to measure the mics separately (blue and green) and then got the average (orange).





Do they look OK, or do I need to calibrate them to sound the same? And how do I do the calibration, if it needs to be done?


----------



## Brandon7s

morgin said:


> Finally worked out REW... I think. I had to solder both new mics, as the old one lost a fair bit of volume; I could tell just by listening to both of them.
> 
> I had to measure the mics separately (blue and green) and then got the average (orange).
> 
> ...


I wasn't aware that you could provide calibration data for the binaural mics in Impulcifer. @jaakkopasanen, is there currently any way to do this?


----------



## musicreo

morgin said:


> Do they look OK, or do I need to calibrate them to sound the same? And how do I do the calibration, if it needs to be done?



They both show the same frequency response, so I would say everything is OK. The very high frequencies (>10 kHz) are usually affected by the measurement angle of the capsule and differ between 0° and 90°.
You could also measure against your calibration mic; it should be very close at the lower frequencies, up to 2-3 kHz.


----------



## davidtriune (Nov 16, 2021)

Looks like just a simple volume difference. You could use channel_balance=(number of dB difference) to match it.
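For anyone wondering how to get that number, one simple way is to compare the RMS levels of the two mics' recordings of the same signal. This is just an illustrative sketch with toy data, not anything from Impulcifer itself:

```python
import math

def rms_db_difference(left, right):
    """Level difference (dB) between two recordings of the same signal."""
    rms = lambda x: math.sqrt(sum(s * s for s in x) / len(x))
    return 20 * math.log10(rms(left) / rms(right))

# Toy example: the right channel is the same signal at half amplitude,
# i.e. about 6 dB quieter than the left.
left = [0.4, -0.4, 0.2, -0.2]
right = [s / 2 for s in left]
print(round(rms_db_difference(left, right), 2))  # prints 6.02
```

The resulting figure (here about 6 dB) would be the kind of value to pass to a channel-balance option.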


----------



## Brandon7s (Nov 17, 2021)

davidtriune said:


> Looks like just a simple volume difference. You could use channel_balance=(number of dB difference) to match it.


I measured my pair of binaural mics today, and there's a gentle downward slope to my right-side microphone that can't be fixed by adjusting the level of that entire channel, unfortunately. If the high end is aligned, then the low end is misaligned, and vice versa. It's not a _huge_ difference (it's probably off by about -3 dB at around 10 kHz), but that's also not a small amount and it would certainly be audible.

Any ideas for applying a corrective EQ curve to the microphone itself? My current idea is to route the input signal from my mics into a DAW, use corrective EQ and overall level balancing to get them to match as closely as possible, and then route the output from my DAW straight into a loopback input from which to feed Impulcifer the corrected signal. That _should_ work for my setup, since I have more than enough I/O on my interface, though it'd be nice if there were a way to apply calibration curves to the binaural mics within Impulcifer itself, so that folks with less full-featured audio interfaces could also make those corrections.
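The corrective curve itself is just the dB difference between the two mics' measured responses, negated for whichever channel you want to pull into line. A sketch of the idea with made-up numbers (in practice you would smooth the difference, e.g. per-octave, before building EQ filters from it):

```python
import numpy as np

# Hypothetical measured magnitudes (dB) of each mic on a shared frequency grid.
freqs = np.array([100.0, 1000.0, 3000.0, 10000.0])
left_db = np.array([0.0, 0.5, 0.3, 0.0])
right_db = np.array([0.0, -0.5, -0.7, -3.0])  # droops toward the top

# Gain the right mic needs at each frequency to match the left mic.
correction_db = left_db - right_db
# Smoothing this curve before turning it into filters makes the correction
# follow the trend rather than chase measurement noise.
print(correction_db)  # e.g. +3 dB needed at 10 kHz in this toy data
```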

I suspect that the reason the mics are as imbalanced as they are is the differences in both the foam and the glue I used to attach them to it. I wasn't all that consistent with either, and I would guess that pushed the mics even further off balance compared to where they were originally; I _highly_ doubt they were a tight match to start with.


----------



## lowdown

Brandon7s said:


> I measured my pair of binaural mics today, and there's a gentle downward slope to my right-side microphone that can't be fixed by adjusting the level of that entire channel, unfortunately. If the high end is aligned, then the low end is misaligned, and vice versa. It's not a _huge_ difference (it's probably off by about -3 dB at around 10 kHz), but that's also not a small amount and it would certainly be audible.
> 
> Any ideas for applying a corrective EQ curve to the microphone itself? My current idea is to route the input signal from my mics into a DAW, use corrective EQ and overall level balancing to get them to match as closely as possible, and then route the output from my DAW straight into a loopback input from which to feed Impulcifer the corrected signal. That _should_ work for my setup, since I have more than enough I/O on my interface, though it'd be nice if there were a way to apply calibration curves to the binaural mics within Impulcifer itself, so that folks with less full-featured audio interfaces could also make those corrections.
> 
> I suspect that the reason the mics are as imbalanced as they are is the differences in both the foam and the glue I used to attach them to it. I wasn't all that consistent with either, and I would guess that pushed the mics even further off balance compared to where they were originally; I _highly_ doubt they were a tight match to start with.


This may just make me look dumb, but can you swap the mics between ears for two recordings and use the L/R files from each so they both are recorded with the same mic?


----------



## Brandon7s

lowdown said:


> This may just make me look dumb, but can you swap the mics between ears for two recordings and use the L/R files from each so they both are recorded with the same mic?


Eh, I think that any minor changes in placement and position would make getting truly sharp and balanced recordings difficult, but it's an idea to try out if the DAW method doesn't work.


----------



## lowdown

Brandon7s said:


> Eh, I think that any minor changes in placement and position would make getting truly sharp and balanced recordings difficult, but it's an idea to try out if the DAW method doesn't work.


Positioning differences could well swamp the advantage of having a perfectly matched mic response curve. But that could be true when using calibration files for each mic as well, and with the foam/glue differences you mentioned. And, as you say, many may not have the equipment you do. Is there a way to average multiple measurement files? Perhaps that would mitigate some of the positioning variance between runs. I'm just an amateur babbling out loud...


----------



## Brandon7s

lowdown said:


> Positioning differences could well swamp the advantage of having a perfectly matched mic response curve. But that could be true when using calibration files for each mic as well, and with the foam/glue differences you mentioned. And, as you say, many may not have the equipment you do. Is there a way to average multiple measurement files? Perhaps that would mitigate some of the positioning variance between runs. I'm just an amateur babbling out loud...


I was wondering the same thing about averaging multiple measurements. I think the fact that there's timing information along with the frequency response information would make that nearly impossible, though. It can be done with the room correction measurements, since timing isn't a factor in that specific case.

 I'm going to try getting as close a match as possible today and see if it makes any difference to the final results. I doubt it'll make a significant difference but there's only one way to find out for sure!
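Averaging the magnitude-only room measurements is straightforward precisely because phase and timing are discarded; a common approach is to average in the power domain. A rough sketch with fabricated numbers:

```python
import numpy as np

def average_magnitude_db(responses_db):
    """Power-average several magnitude responses given in dB."""
    power = 10 ** (np.asarray(responses_db) / 10)  # dB -> linear power
    return 10 * np.log10(power.mean(axis=0))       # mean power -> dB

# Three room measurements of the same frequency bins from slightly
# different positions (dB values made up for illustration).
runs = [
    [0.0, -2.0, 1.0],
    [0.0, -4.0, 3.0],
    [0.0, -3.0, 2.0],
]
avg = average_magnitude_db(runs)
print(avg)
```

Averaging in power rather than raw dB weights louder runs slightly more, which tends to suit room-mode smoothing; averaging the dB values directly is another reasonable choice.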


----------



## lowdown

Brandon7s said:


> I was wondering the same thing about averaging multiple measurements. I think that the fact that there's timing information along with frequency response information would make that nearly impossible though. It can be done with the room correction measurements since timing isn't a factor in that specific case.
> 
> I'm going to try getting as close a match as possible today and see if it makes any difference to the final results. I doubt it'll make a significant difference but there's only one way to find out for sure!


Everyone appreciates how much you've shared with your process, observations, tips and results.  Thanks very much.


----------



## morgin (Nov 17, 2021)

My latest measurement has gotten me good results in the highs. I'm hearing things a lot more clearly than before, which is surprising, as I thought it was already super clear.

I did all the room measurements and hot-glued the mics onto the foam, which helped keep the mics in place, just at the entrance of the ear canals.

The only problem now is getting rid of the noise when my speaker is on. I've had it connected directly to the PC via a 3.5 mm jack, and also connected to the Behringer, which itself is connected via USB, but I still get the noise. I've tried a mono 6.35 mm-to-6.35 mm jack connected from the speaker to the left back port of the Behringer, and a stereo 3.5 mm jack from the PC into a stereo 6.35 mm adapter into the left port of the Behringer.

@musicreo how did you connect your speaker? Because we're using the same equipment. Or is there a cable that will reduce noise/interference?


----------



## Brandon7s (Nov 17, 2021)

Okay, I set up my system to let me apply EQ to the mic inputs, so I just did a measurement after applying corrective EQ to my right-side binaural microphone. It's tough to say definitively whether the results are better or worse from only a single measurement, but my initial impression is that the tonal balance between the left and right channels is more even and that the phantom center is more consistently centered across the spectrum.

It's interesting just how large the mismatch between my mics is. It was a lot further off than I originally thought!

Here's without my EQ treatment on the right side (1/12th smoothing). The right side is about 10 dB lower than the left between 1 kHz and 3 kHz. This seemed consistent regardless of how I measured the microphones, whether in front of a speaker or with the mics on the table and a headphone cup placed over them:





And here's after I applied EQ: MUCH more closely matched. I think the largest variance is now more like 2 dB instead of 10, which I think is close enough for our purposes:



I will need to measure twice in order to _really_ tell what effect this calibration has on the results: one measurement with the corrective EQ turned off and then another with it on, within the same session.

Unrelated, but I also need to figure out why a lot of my measurements result in particularly harsh upper mids and treble between 3000 and 4500 Hz. I don't notice that painful treble spot when I'm listening to my speakers, but it's very apparent in my final Impulcifer HRIRs. It's not too difficult to smooth over and fix with EQ, but I'm wondering why the difference persists in the first place. Time to experiment with different placement depths and see whether they have an effect on that region.


----------



## Brandon7s (Nov 17, 2021)

morgin said:


> My latest measurement has got me good results on the highs. I’m hearing things a lot clearer than before which is surprising as I thought it was already super clear.
> 
> I did all the room measurements and hot glued the mics onto the foam which helped keep the mics in place and just at the entrance of the ear canals.
> 
> ...


Glad you're seeing improvements!

I might be able to help with the speaker noise problem. A couple questions first, if you're up for it. What model of speakers are you using and what does the noise sound like? Is it a constant, somewhat high pitched hissing that sounds a bit like white noise? Also, does the noise get louder as you increase volume using the speaker's volume knob?

My initial thought is that you have a ground loop problem, in which case I'd recommend making sure that your speakers and everything connected to them are plugged into the same power outlet. That includes your computer, your audio interface's power supply, both speakers, and any other peripherals connected to your computer that are supplied power from a wall outlet. That guarantees there aren't multiple paths to ground. A ground loop will sound like a constant low-level humming, and it will usually get louder in a linear fashion when turning up the speakers, just like regular audio output.

If what you're hearing is more like an annoying white-noise hiss, then that could be the speakers themselves. Most budget-friendly studio monitors like my JBL LSR305s and 308s produce a low level of hiss. That's just a function of how they are designed and I don't believe there's any way to get rid of it. It will increase when the speakers are turned up but it'll be less dramatic than increasing the volume while listening to music/audio.


----------



## morgin

Brandon7s said:


> What model of speakers are you using and what does the noise sound like? Is it a constant, somewhat high pitched hissing that sounds a bit like white noise? Also, does the noise get louder as you increase volume using the speaker's volume knob?


Sorry for the late reply.

It's a JBL 305P MkII speaker. The sound is as you described, and the volume does make it louder.

Everything is connected to one multi-socket that's connected to one power outlet.


----------



## musicreo

morgin said:


> @musicreo how did you connect your speaker? Because we're using the same equipment. Or is there a cable that will reduce noise/interference?


I have both connected to the Behringer with 6.3 mm-to-XLR cables.
The JBLs have a noise that is audible when it's silent and you are close to the speaker, but the sweep is so much louder that it should not have any effect on your measurement.


----------



## musicreo

Brandon7s said:


> Any ideas for applying a corrective EQ curve to the microphone itself?



In theory the headphone measurement should already equalize the mismatch: the speaker sweeps and the headphone sweep pass through the same mics, so each mic's own response cancels out when the headphone compensation is applied.


----------



## Brandon7s (Nov 18, 2021)

morgin said:


> Sorry for the late reply.
> 
> It's a JBL 305P MkII speaker. The sound is as you described, and the volume does make it louder.
> 
> Everything is connected to one multi-socket that's connected to one power outlet.


That sounds like the ordinary noise that those speakers produce. The volume of the noise should be very low-level compared to any audio you play through them. My speakers make the same kind of hissing noise and I don't even notice it anymore. It shouldn't have any effect on your measurements.



musicreo said:


> In theory the headphone measurement should already equalize the mismatch.


That explains the very slight difference between this last measurement with the corrective EQ on and my prior ones without it. It still makes me nervous knowing that my mics are as far off from each other as they are, though, so I'm probably going to keep using the correction just for the peace of mind, haha.

I did some A/B-ing between my speakers and a few of my most recent measurements while tweaking the overall EQ on my headphone output, to see if some manual EQ on top of the BRIR could get a closer match to my speakers. Interestingly, all of the BRIRs sounded a lot more accurate in the upper mids and highs when I applied a high-shelf filter at about 2650 Hz, with the gain ranging from -1.5 to -3.5 dB depending on the specific BRIR. That made an immediate and obvious difference in how convincing the results are, so I highly recommend folks try it out if they haven't already. This largely solves the spikiness I was frequently getting between 3500 and 4000 Hz.
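For anyone who wants to apply that kind of shelf outside of a GUI EQ, here is a minimal high-shelf biquad based on the well-known RBJ Audio EQ Cookbook formulas. Only the 2650 Hz corner and the cut range come from the post above; the rest is a generic sketch (shelf slope S = 1, 48 kHz sample rate assumed):

```python
import math

def high_shelf(fs, fc, gain_db):
    """RBJ cookbook high-shelf biquad; returns (b, a) normalized so a[0] == 1."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * fc / fs
    cosw = math.cos(w0)
    alpha = math.sin(w0) / 2 * math.sqrt(2)  # shelf slope S = 1
    b = [A * ((A + 1) + (A - 1) * cosw + 2 * math.sqrt(A) * alpha),
         -2 * A * ((A - 1) + (A + 1) * cosw),
         A * ((A + 1) + (A - 1) * cosw - 2 * math.sqrt(A) * alpha)]
    a = [(A + 1) - (A - 1) * cosw + 2 * math.sqrt(A) * alpha,
         2 * ((A - 1) - (A + 1) * cosw),
         (A + 1) - (A - 1) * cosw - 2 * math.sqrt(A) * alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

b, a = high_shelf(fs=48000, fc=2650, gain_db=-2.5)
# Sanity check: unity gain at DC, full shelf gain at Nyquist.
dc = sum(b) / sum(a)
nyq = (b[0] - b[1] + b[2]) / (a[0] - a[1] + a[2])
print(dc, 20 * math.log10(nyq))  # ≈ 1.0 and ≈ -2.5 dB
```

The coefficients can then be fed to any biquad implementation (e.g. `scipy.signal.lfilter(b, a, x)`).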


----------



## morgin (Nov 18, 2021)

Wow, what a difference. I would highly recommend doing the full room measurements; they don't actually take that long (15 minutes at most). I printed out a large protractor for the angles and used a laser level to accurately place each speaker position. I marked the center position of where I sat, and I even found that when swiveling on an office chair with no back, my head still didn't rotate centrally, so you actually have to sit so that the center of your head is exactly above the pivot point (use a mirror while swiveling and you will see what I mean).

The ear mics were firm this time, just deeper than the ear canal entrance. I used a stand for the room mic: the stand was placed so my nose would line up with the middle of the stand, and I placed the mic exactly as far out as my ear. Then I taped a straight stick at 90 degrees from the mic so it would point to the speaker markings, rotating it for the left- or right-ear measurements.

I can hear virtual speakers in my room, all with perfect clarity. I'm at a stage where it sounds better without any channel balance or other tweaks besides the target level.

Edit: I have to mention that without everyone's help I would definitely be lost, and I doubt I'd have been able to get to this stage. So once again, thank you! You're all awesome people!


----------



## Brandon7s

morgin said:


> Wow, what a difference. I would highly recommend doing the full room measurements; they don't actually take that long (15 minutes at most). I printed out a large protractor for the angles and used a laser level to accurately place each speaker position. I marked the center position of where I sat, and I even found that when swiveling on an office chair with no back, my head still didn't rotate centrally, so you actually have to sit so that the center of your head is exactly above the pivot point (use a mirror while swiveling and you will see what I mean). The ear mics were firm this time, just deeper than the ear canal entrance. I used a stand for the room mic: the stand was placed so my nose would line up with the middle of the stand, and I placed the mic exactly as far out as my ear. Then I taped a straight stick at 90 degrees from the mic so it would point to the speaker markings, rotating it for the left- or right-ear measurements. I can hear the speakers in my room, all with perfect clarity. I'm at a stage where it sounds better without any channel balance or other tweaks besides the target level.



 Awesome!

 I've been far more sloppy in my surround measurements since I've only been roughly estimating the angles. Time to buy a protractor and a laser level!

I've been trying to figure out a good way to keep my head position stationary when rotating, and I think I'm going to try setting up a mic stand at the central point around which my head should revolve, and then use that as a place marker. What I was doing before was pure guesswork, and I think that's causing my 7.1 measurements to be less than optimal.


----------



## morgin

Brandon7s said:


> Awesome!
> 
> I've been far more sloppy in my surround measurements since I've only been roughly estimating the angles. Time to buy a protractor and a laser level!
> 
> I've been trying to figure out a good way to keep my head position stationary when rotating, and I think I'm going to try setting up a mic stand at the central point around which my head should revolve, and then use that as a place marker. What I was doing before was pure guesswork, and I think that's causing my 7.1 measurements to be less than optimal.


It's a game changer, and way better than any cinema. In certain movies you feel like you're inside the action; it's so realistic that I didn't know movies contained so much sound information. I can't really explain how good it sounds. Also, I see what you meant about not hearing the bass but feeling it. Oh yeah, I forgot to mention my speaker was around 6-7 feet away from me.


----------



## Brandon7s (Nov 19, 2021)

I did an experiment today with room correction. I have been having issues with certain notes on my guitars being FAR louder than most, even though I have been using room correction via generic, center-of-head measurements. I tried EQing the problems out manually without any luck; it always caused more problems than it solved. So today I made a measurement utilizing ear-specific room measurements. That made a HUGE difference in flattening out the frequency response, and now no notes are misbehaving at all. It's a night-and-day difference compared to using just the generic room measurements. I'm never going to go without them now that I've seen how much of an improvement this makes.

My desk and speaker configuration is quite asymmetrical, since my desk is in a corner of the room instead of at the middle of a wall, so if you've got a fairly symmetrical setup you might not see as much of a difference as I do, but I still highly recommend trying out specific left/right-ear room measurements just the same. I think that lets Impulcifer perform much more accurate room correction than the generic measurements do, especially for layouts like mine.


----------



## morgin

I know Windows can only do 7.1 surround, but what about another operating system like Linux? Can we get height channels?


----------



## jaakkopasanen

morgin said:


> I know Windows can only do 7.1 surround, but what about another operating system like Linux? Can we get height channels?


The issue is that content is released with proprietary codecs such as Dolby Atmos, and the vendors don't disclose how to decode the audio stream.


----------



## sander99

morgin said:


> I know Windows can only do 7.1 surround, but what about another operating system like Linux? Can we get height channels?





jaakkopasanen said:


> The issue is that content is released with proprietary codecs such as Dolby Atmos, and the vendors don't disclose how to decode the audio stream.


If you really want it, here is one way to do it, but it will be costly:
Buy a surround receiver/amp/processor that supports Atmos etc. and has analog outputs for all channels, buy a USB audio interface with 12 or more analog inputs to get the decoded channels into the PC, and find software that can do the convolution for the number of channels you need (I think such software exists).


----------



## musicreo

I added a Sennheiser HD600 to my Sennheiser HD555 and AKG K701. Unfortunately, the mics I used for my favourite measurement no longer work properly, so I used some others of my 12 capsules for two different headphone measurements of the HD600. One of the mics measures much higher amplitudes at higher frequencies, resulting in a darker result; the other measures a drop in the high frequencies, resulting in a brighter result. I prefer the dark result, as the bright one somehow produces a slight hiss.
However, the good news is that even using a different pair of mics for the headphone compensation gives an excellent result.

The bad news is that I have to check my Primo capsules, as I found the high-frequency drop on 3 capsules, and 2 capsules produced very noisy results.


----------






## Brandon7s (Nov 22, 2021)

morgin said:


> I’ve got a few measurements just curious from the graph which one stands out to you guys as the best. Ignore the text it’s just how I’ve named the files to keep track


Not that it matters much what these measurements LOOK like, since all that really matters is how the results sound, but the first two graphs look the best. That's judging them by my own preferences in frequency response and by seeing that the channel balance is fairly even.

I did some testing last night on manually EQing balance issues in the low end of many of the BRIRs I've been using regularly. For some reason the bass on the left side was exaggerated and throwing off localization. It turns out that I can fix the low end with a -4 to -8 dB low-shelf filter on ONLY the left channel. This sharpened the center imaging a bit, too, since the filter affects so much of the mids; it turns out it's not just the bass that is unbalanced.

Then I found something weird: the EQ fix ONLY works when the EQ is applied after HeSuVi or my convolution VST. This implies to me that it must not be a balance issue that can be fixed by Impulcifer, because it's something about my hearing itself. I also checked some BRIRs with my other headphones; almost all of them showed improved balance when that specific EQ fix was applied to the left channel, after all other processing.

So now I'm trying this with IEMs, and I'm getting the same thing. The cause of the balance issue seems to be something in my ears beyond the point our binaural mics can capture, so it's by no means a fault of the software. I'm very happy to have found this out; it's made a big difference in low-end consistency and balance in general for all of my headphone listening, including without Impulcifer.
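For reference, this kind of post-convolution, single-channel shelf can be expressed directly in Equalizer APO's config syntax (which HeSuVi sits on top of). Only the left-channel low-shelf idea and the -4 to -8 dB range come from the post; the 120 Hz corner frequency below is a made-up placeholder you would tune by ear:

```
# Apply only to the left channel, after the HeSuVi/convolution stage.
Channel: L
Filter 1: ON LS Fc 120 Hz Gain -6 dB
Channel: all
```

Placed after the HeSuVi include in configuration, the shelf affects only the left channel; the final `Channel: all` resets the channel selection for any commands that follow.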


----------



## morgin

Quick question on Equalizer APO: the graphs (in red) are very different from each other for the L and R channels. Should they look the same, or is that normal?

The image is just from a Google search, not mine.


----------



## Brandon7s (Nov 22, 2021)

morgin said:


> Quick question on Equalizer APO: the graphs (in red) are very different from each other for the L and R channels. Should they look the same, or is that normal?
> 
> The image is just from a Google search, not mine.



The left- and right-channel graphs _should_ look different, since Impulcifer picks up the differences between our physical ears as well as our rooms and speakers, so yes, that's normal.

I don't know who took that picture, but they need to reduce their preamp gain by at least 16.6 dB or they're gonna be getting some clipping!


----------



## morgin

Lmao. Mine's in the red too. I tried to use some settings to lower it, but the volume was way too low. Not sure where I should lower it from. Is it high because of the target level chosen in Impulcifer, or can it be done in Equalizer APO? Also, what's the maximum it should be? I'm guessing 0 dB.


----------



## Brandon7s (Nov 22, 2021)

morgin said:


> Lmao. Mines on red too. I tried to use some settings to lower it but the volume was way too low. Not sure where I should lower it from. Is it high because of the target level chosen in impulcifer. Or can it be done on equalizerapo. Also what’s the maximum it should be? I’m guessing 0db


I've been fiddling around with different levels myself. For the past few weeks I've been boosting the level with the preamp gain by about 12 dB, which puts a large portion of the graph in the red by over 5 dB. I was doing that for the same reasons you were: the volume was too low and I didn't really want to crank my headphone amp well beyond what I was used to. However, today I decided to try staying out of the red completely, keeping the highest peak just below 0.0 dB via APO preamp adjustments (and yes, 0 is the point at which clipping occurs) and just turning my amp up as needed. It's not quite a night-and-day difference, but I definitely feel like I'm getting a more accurate representation of my room, and I've also noticed less harshness in the highs. It just seems more convincing overall, so I think all that headroom really is important.

I'd try different target levels and see what gets you closest to the 0 dB mark without going into clipping. For me, that seems to be a target level of -10 dB for stereo recordings. I'm guessing that full 7.1 takes up more headroom and might need a target level a bit lower than that, but I haven't run any tests on that, so don't take my word for it.
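Since the goal here is just to land the highest peak in the analysis panel a touch under 0 dBFS, the arithmetic can be sketched in a few lines of Python. The function name and the 0.5 dB safety margin below are my own choices for illustration, not anything from APO or Impulcifer:

```python
import numpy as np

def preamp_for_headroom(response_db, ceiling_db=0.0, margin_db=0.5):
    """Preamp (in dB) that places the highest peak of a measured
    response just below the clipping ceiling. `response_db` is any
    array of magnitude values in dB, e.g. values read off the APO
    analysis panel."""
    return ceiling_db - margin_db - float(np.max(response_db))

# A response peaking at +16.6 dB needs roughly -17.1 dB of preamp:
print(preamp_for_headroom(np.array([-3.0, 5.2, 16.6, 10.0])))
```

The same idea applies whether the boost comes from Impulcifer's target level or APO's `Preamp` line; only the final summed peak matters.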

My second recommendation is getting a decently powerful headphone amp. I'm using a JDS Labs Atom, which I love, and an old Matrix M-Stage amp which I don't love as much (it's noisier than the Atom). I think that getting the most out of Impulcifer really does require an amp that can give you enough volume to truly cover the headroom requirements of the BRIRs it produces.


----------



## morgin

I’m currently just using my behringer to amplify my audio because just using the pc is way too low. Would an amp make a bigger difference than using the behringer?


----------



## Brandon7s (Nov 22, 2021)

morgin said:


> I’m currently just using my behringer to amplify my audio because just using the pc is way too low. Would an amp make a bigger difference than using the behringer?


Oh yeah, it's not even close. None of the audio interfaces I've ever used come anywhere near supplying the amount of power to headphones that my Atom or M-Stage can dish out. I've had a Behringer UMC404HD before, and its headphone output is lower than the headphone out on both my Focusrite Scarlett 18i8 and my MOTU Ultralite MK3 interfaces, which are both higher quality and pricier. And the Atom/M-Stage put out far, faaar more power than either of _those_ can; a decent headphone amp would 100% solve your low output problem.


----------



## morgin

Brandon7s said:


> Oh yeah, it's not even close. None of the audio interfaces I've ever used come anywhere near to supplying the amount of power to headphones that my Atom or M-Stage can dish out. I've had a Behringer UMC404HD before and its headphone output is lower than both my Focusrite Scarlett 18i8 and my MOTU Ultralite MK3 interfaces. And the Atom/M-Stage put out far, faaar more power than either of those can; a decent headphone amp would 100% solve your low output problem.


I have no idea what is a good amp or what to look for. Can you recommend one that will be good but not expensive


----------



## Brandon7s

morgin said:


> I have no idea what is a good amp or what to look for. Can you recommend one that will be good but not expensive


It depends on your budget, but I love my JDS Labs Atom. It's $99 USD new: https://jdslabs.com/product/atom-amp/


----------



## lowdown

Brandon7s said:


> It depends on your budget, but I love my JDS Labs Atom. It's $99 USD new: https://jdslabs.com/product/atom-amp/


I've had two JDS Labs amps, currently have an Element, and really like them.


----------



## musicreo

morgin said:


> I’m currently just using my behringer to amplify my audio because just using the pc is way too low. Would an amp make a bigger difference than using the behringer?


What headphones do you use? I also use the Behringer and it can easily drive my AKG K701 and HD600.


morgin said:


> Quick question on equaliser apo the graphs (in red) are very different to each other for both channels L and R. Should they look the same or is that normal.


The graphs show the sum of all channels. So if you want to see a single channel, you have to mute all the other channels in Hesuvi or EQ-APO.


----------



## morgin (Nov 23, 2021)

Thanx lads I’ll check these out.




jaakkopasanen said:


> The issue is that content is released with proprietary codecs such as Dolby Atmos, and they don't tell you how to decode the audio stream



Can I ask about gaming that doesn't use Dolby for its sound? Would it be a good idea to record more channels with Impulcifer, or is it best just to use the in-game 3D audio?


----------



## morgin

musicreo said:


> What headphones do you use? I also use the Behringer and it can easily drive my AKG701 and HD600.


I use the Sennheiser HD 560S. The Behringer is loud enough to power them, but not when I have the level low enough to not show clipping in Equalizer APO.


----------



## musicreo

morgin said:


> Can I ask what about gaming that doesn’t use Dolby for its sound. Would it be a good idea to record more channels with impulcifer. Or is it best just to use the in game 3D audio


I think no game will give you more than 7.1 discrete audio channels without using Atmos, and Atmos can only be used with an AVR supporting that format.
So if you want to record more channels with Impulcifer, the only way to use them is 7.1 plus an upmix in EQ-APO (this works, I have tested it).
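For reference, EQ-APO's `Copy` command can express a simple upmix of this kind. The channel weights and layout below are purely illustrative (not musicreo's actual configuration), and the valid channel labels depend on your device's configured speaker layout:

```text
# Hypothetical Equalizer APO upmix sketch: feed the surround and back
# channels from attenuated copies of L and R so a stereo source
# exercises all of a 7.1 BRIR's virtual speakers.
Copy: SL=0.5*L SR=0.5*R RL=0.5*L RR=0.5*R C=0.5*L+0.5*R
```

Placing a line like this before the convolution stage means the derived channels pass through their own BRIR positions rather than collapsing into the front pair.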


----------



## musicreo

morgin said:


> I use the Sennheiser hd560s the behringer is loud enough to power them but not when I have them low enough to not show clipping in EQApo


The HD 560S should be easier to drive than my HD600. Even my old Asus Xonar U1 can drive my HD600 with my measurement.
Can you show us a screenshot of your EQ-APO analysis panel?


----------



## morgin (Nov 23, 2021)

Here it is... When I lower the reference so it's not clipping, as shown in the bottom image, I have to use full volume on the Behringer, in Windows, and in the media player.


----------



## musicreo (Nov 23, 2021)

From the last plot I would suggest adding preamp gain that shifts the peak at 50 Hz to 0 dB. That already gains you 6-7 dB without any real danger of clipping (I think you will find 20-30 Hz content with an amplitude close to 0 dB only in very rare cases). If the plot shows the surround measurement, you can add even more preamp gain when listening to stereo signals.


----------



## jaakkopasanen

If you have 7 channel BRIR measured and are looking at the graphs in EqualizerApo, the gain over 0 dB doesn't necessarily mean clipping. That's because this is sum of 7 channels and there hardly is any material where all channels are blasting at full volume at the same time. Mine is almost 10 dB in the red for bass frequencies and I have never had issues with clipping. Also the headroom makes no difference if the signal is not clipping. You won't get higher sound quality with more headroom unless the signal clips with less headroom.
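The point about the summed graph can be illustrated numerically. The sketch below uses seven random toy "filters" as stand-ins for BRIR channels (an assumption for the demo, nothing like a real measurement): the summed magnitude response, which is roughly what the APO graph shows, peaks well above 0 dB, yet convolving an actual signal through all seven channels and mixing them stays comfortably below full scale:

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(0)

# Seven toy stand-ins for BRIR channel filters (random, illustration only).
irs = [rng.standard_normal(256) * 0.02 for _ in range(7)]

# What the APO graph roughly shows: the sum of all channels' magnitudes,
# i.e. the worst case where every channel peaks in phase simultaneously.
summed = np.sum([np.abs(np.fft.rfft(ir, 4096)) for ir in irs], axis=0)
summed_db = 20 * np.log10(summed + 1e-12)

# Actual playback: a noise signal convolved through every channel, then mixed.
sig = rng.uniform(-1, 1, fs) * 0.3
mix = np.sum([np.convolve(sig, ir)[:fs] for ir in irs], axis=0)

print("summed-graph peak (dB):", summed_db.max())
print("true mixed peak:", np.abs(mix).max())
```

The summed graph is an upper bound (triangle inequality over channels and phases); real program material almost never reaches it, which is why several dB "in the red" can play back clean.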


----------



## Brandon7s (Nov 25, 2021)

jaakkopasanen said:


> If you have 7 channel BRIR measured and are looking at the graphs in EqualizerApo, the gain over 0 dB doesn't necessarily mean clipping. That's because this is sum of 7 channels and there hardly is any material where all channels are blasting at full volume at the same time. Mine is almost 10 dB in the red for bass frequencies and I have never had issues with clipping. Also the headroom makes no difference if the signal is not clipping. You won't get higher sound quality with more headroom unless the signal clips with less headroom.



Thanks for the info on this, that's very useful! I was noticing what sounded like compression pumping in the low end when I was setting the preamp gain very high on my stereo BRIRs. I was probably going _too_ high, obviously. Glad to know I can boost the preamp gain on 7.1 BRIRs to reasonable levels without worrying as much about getting into the red, because those levels can be VERY low compared to stereo BRIRs. Knowing that those are just summed channels in the APO configuration editor helps a lot.

By the way, I'm seriously considering an upgrade from my Hifiman Anandas to the HD800S. I _believe_ you have the HD800S as well, or perhaps the non-S version. I'm curious how much of an improvement you gained in terms of how convincing the Impulcifer virtualization is when you moved to the HD800(S) from your prior headphones, if you did indeed use Impulcifer with any prior headphones. Do you feel it was worth it and made a noticeable difference?


----------



## sander99

Brandon7s said:


> By the way, I'm seriously considering an upgrade my from my Hifiman Anandas to the HD800S.


You know the Drop + Sennheiser HD 8XX is now US $949?
https://drop.com/buy/drop-sennheiser-hd-8xx-headphones


----------



## Brandon7s

sander99 said:


> You know the Drop + Sennheiser HD 8XX is now US $949?
> https://drop.com/buy/drop-sennheiser-hd-8xx-headphones


Oooh.. thanks for that, that's tempting. I'll do some research to see how similar it is to the regular 800 or 800S version. If it's very similar, that's gonna be hard to pass up!


----------



## jaakkopasanen

Brandon7s said:


> Thanks for the info on this, that's very useful! I was noticing what sounds like compression pumping in the low end when I was setting the preamp gain very high on my stereo BRIRs. I was was probably going _too_ high, obviously. Glad to know I can boost the preamp gain on 7.1 BRIRs to reasonable levels without worrying about getting into the red as much with surround BRIRs, cause those levels can be VERY low compared to stereo BRIRs. Knowing that those are just summed channels in the APO configuration editor helps a lot.
> 
> By the way, I'm seriously considering an upgrade my from my Hifiman Anandas to the HD800S. I _believe_ you have the HD800S as well, or perhaps the non-S version. I'm curious as to how much an improvement you gained in terms of how convincing the Impulcifer virtualization is when you moved to the HD800(s) from your prior headphones, if you did indeed use Impulcifer with any prior headphones. Do you feel it was worth it and made a noticeable difference?


I used HE400S before HD 800 but that was with older version of Impulcifer and different BRIR measurements obviously. Cannot say anymore what's the difference due to headphones. I bought HD 800 because I wanted a reference tool for development, one that has tight manufacturing tolerances and whose faults are well known so that I have better idea if what I'm hearing is due to my algorithms or the headphones. For casual use, the biggest thing for me is comfort. HD 800 are the most comfortable headphones for me due to their very low clamping force, huge ear cups and relatively low weight. If Anandas are comfortable to you, I doubt you will find any benefits from switching to HD 800.


----------



## kalstein

@jaakkopasanen Are you not interested in HRTF simulation? You can get a head model using the iPhone's LiDAR sensor, and there are projects like mesh2hrtf. Direct measurement may be more error-prone, and accessibility is much better this way.


----------



## morgin

What is the benefit of using different headphone or speaker sound signatures? Can I transform my headphones into better-sounding headphones, or do the same with speakers?


----------



## jaakkopasanen

kalstein said:


> @jaakkopasanen Are you not interested in hrtf simulation?  You can get a head model using the iPhone's lidar sensor, and there are projects like mesh2hrtf. Direct measurement may be more error-prone.  And accessibility is much better.


I am, in theory but it's not for me to pursue. Too complicated for the time I have available.


morgin said:


> What is the benefit of using different headphone or speaker with sound signatures? Can I transform my headphones into better sounding headphones or same with speakers?


From my perspective, none. There are probably some intangible features of sound quality that we don't yet know how to interpret from frequency response measurements and therefore cannot EQ in. And of course measurements come with their own limitations, such as the fact that your own head is not exactly the same as a dummy head. Speakers are more complicated because they have issues with distortion at high levels and also have directivity patterns which cannot be changed with DSP.


----------



## morgin

jaakkopasanen said:


> I am, in theory but it's not for me to pursue. Too complicated for the time I have available.


Well, what you have achieved, and the time, energy, and knowledge spent on this project, is very much appreciated by me, and I'm sure by everyone who's managed to get a good measurement. It must have taken a good number of hours to work on this, let alone make a guide for us.


----------



## Brandon7s (Nov 27, 2021)

jaakkopasanen said:


> I used HE400S before HD 800 but that was with older version of Impulcifer and different BRIR measurements obviously. Cannot say anymore what's the difference due to headphones. I bought HD 800 because I wanted a reference tool for development, one that has tight manufacturing tolerances and whose faults are well known so that I have better idea if what I'm hearing is due to my algorithms or the headphones. For casual use, the biggest thing for me is comfort. HD 800 are the most comfortable headphones for me due to their very low clamping force, huge ear cups and relatively low weight. If Anandas are comfortable to you, I doubt you will find any benefits from switching to HD 800.



Thanks, you just saved me a thousand dollars or a return! One of the reasons I've been contemplating getting an HD800 is the perceived imbalance that I've been getting with my Anandas, even without using any BRIRs. Upper-mid bass (150 to 200 Hz) tends to lean right while sub-bass seems to lean left. I don't experience that with any of my other headphones: DT1990, 6XX, 58X, R70X, DT770, and a bunch of other mid-tier gear. I just ran some more tests in REW to get an idea of how the left and right sides differ on my Anandas, and the results show that the variations are very minor. I did some corrective EQ to get the left/right cups to match, and toggling that off and on results in an imperceptible difference, so I doubt the channel balance of the headphones themselves is the issue now.

Overall, I _love_ my Anandas. I find them incredibly comfortable and a huge upgrade from all of my other headphones in clarity, separation, detail, and especially soundstage. All of the rest sound like listening to music through a small window by comparison.

I think the reason for the variance between my Anandas and my other headphones is that the Anandas encompass the entire ear and don't press upon it, while the rest do. My understanding is that this would change the frequency response at the eardrum simply due to the differences in pinna activation and the greater variation in cup placement. If my hypothesis is true, then I would experience the same balance variations with the HD800, which would make it a fruitless upgrade for this particular issue.

I went through my Impulcifer archive and found a few that have dead-on balance, and I'm going to stick with those as my daily drivers. I wish I had taken better notes on those measurement sessions so I could try to replicate that balance in future measurements, but other than a vague "sit a little further away from the speakers than usual", I didn't write much down that would be of use. I'm going to measure those dead-on BRIRs in REW today, using them as a baseline for what "centered" looks like on paper, and then I'll see if I can create a corrective EQ curve from that information so that I can get a correctly centered stereo image even when _not_ using Impulcifer's magic.


----------



## jaakkopasanen

Brandon7s said:


> Thanks, you just saved me a thousand dollars or a return! One of the reasons I've been contemplating getting an HD800 is due to the perceived imbalance that I've been getting with my Anandas, even without using any BRIRs. Upper-mid bass (150 to 200hz) tends to lean right while sub-bass leans seems to lean left. I don't experience that with any of my other headphones: DT1990, 6XX, 58X, R70X, DT770, and a bunch of other mid tier gear. I just ran some more tests in REW to get an idea of how the left and right sides differ on my Anandas and the results are that the variations are very minor. I did some corrective EQ to get the left/right cups to match and toggling that off and on results in an imperceptible difference, so I doubt it's channel balance of the headphones themselves that is the issue now.
> 
> Overall, I _love_ my Anandas. I find them incredibly comfortable and a huge upgrade from all of my other headphones in clarity, separation, detail, and especially soundstage. All of the rest sound like listening to music through a small window by comparison.
> 
> ...


Interesting. Can't say I have heard of this phenomenon you're describing before. Pinna deformation is unlikely to be the cause because your issues are in the bass region where the wavelengths are so long that pinna activation is not much of a thing, as far as I understand it.


----------



## morgin

Isn't there a way for people who don't want to buy equipment, or are too afraid to use command lines, to be able to use Impulcifer?

What I'm thinking is that, instead of measuring with mics in the ears, a set of sounds is played in the headphones, and where the sounds are heard can be adjusted by the user. So a sound plays, and a description says you should hear this sound at 5 feet and 35 degrees, and the user adjusts until they hear it in that area. Adjustments can be made closer or further, right to left, and high or low. Then, after several of these measurements, Impulcifer could fill in the blanks and build a profile for that individual.


----------



## Brandon7s (Nov 27, 2021)

jaakkopasanen said:


> Interesting. Can't say I have heard of this phenomenon you're describing before. Pinna deformation is unlikely to be the cause because your issues are in the bass region where the wavelengths are so long that pinna activation is not much of a thing, as far as I understand it.


That's a good point. I wish the Anandas' cups were symmetrical in shape; then I could simply put them on backwards and see if the bass balance flip-flops, but they won't even seal if I try that. Also, I took a look at the measurements from my R70X, DT770, and 1990, and they all seem to show a very slight balance shift in the low end on the graph, but it's a LOT less pronounced than it appears in the same session's Ananda measurements while being worn.

I'm going to mess with high-passing at different points to see if I can nail down more precisely where the flip in balance occurs between sub-bass and mid/upper bass. If the frequency center where I achieve balance is at the same point where the Anandas' minor channel imbalance appears to be, then that would indicate that the low-end imbalance on the Anandas isn't quite as minor as it appears on graphs.



morgin said:


> isnt there a way for people who don’t want to buy equipment or are too afraid to use command lines be able to use impulcifer.
> 
> What I’m thinking is instead of measuring with mics in ears. There are a set of sounds that are played in the headphones, and the distance the sound are heard can be adjusted by the user. So a sound plays and description says this sound you should hear 5 feets at 35degress and the user adjusts until they hear it over that area. Adjustments can be made closer or further, right to left and High or low. Then after several of these measurements impulcifer can fill in the blanks and have a profile for that individual.



That sounds a bit like the idea that some folks were throwing around about how Sony's PS5 could possibly "game-ify" creating HRTF targets. I'd be curious to see that tried and what the results would be.


----------



## Iohfcasa (Nov 27, 2021)

Hello,
Has anyone tried the equal-loudness method to determine the correct EQ curve needed for incorporating the individual ear canal resonance?
Otherwise it should be quite impossible to obtain the complete HRIR without probe microphones.
You can use a program like "earful" for that, selecting the "two devices loudness" mode and setting your speakers as the reference signal.
The loudness of the (HRIR-convolved) headphone signal is matched to the (of course unconvolved) speaker loudness using several pink/white noise frequency bands.
I think it can only be done for mono/stereo, because it's crucial to sit centred in front of the speakers.
It feels a bit awkward to put the headphones on and take them off after a second for loudness matching, but it gives your brain/ears a reliable reference to adjust the headphone loudness to.
This procedure resembles the Realiser's "manspkr" function.
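For anyone wanting to try this kind of matching without earful, generating the band-limited pink-noise bursts is straightforward. This is only a sketch: the band edges and the FFT brick-wall filtering are my own simplifications, not what earful actually does:

```python
import numpy as np

fs = 48000
dur = 1.0
rng = np.random.default_rng(1)

def pink_noise(n, rng):
    """Approximate pink noise by shaping white noise with a 1/sqrt(f) spectrum."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, 1 / fs)
    f[0] = f[1]                      # avoid divide-by-zero at DC
    spec /= np.sqrt(f)
    x = np.fft.irfft(spec, n)
    return x / np.abs(x).max()

def band_limited(x, lo, hi):
    """Keep only the lo..hi Hz band (FFT brick-wall; fine for test bursts)."""
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    spec[(f < lo) | (f > hi)] = 0
    y = np.fft.irfft(spec, len(x))
    return y / np.abs(y).max()

# Bands roughly covering the regions discussed in this thread.
bands = [(150, 300), (700, 1400), (1500, 2200)]
bursts = [band_limited(pink_noise(int(fs * dur), rng), lo, hi) for lo, hi in bands]
```

Each burst can then be played alternately through the speakers and the convolved headphone chain while adjusting headphone level until the two sound equally loud.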

Griesinger says the HRTF of the blocked ear canal measurement does not differ much from the unblocked one, BUT this only applies to the room/speaker HRTF and NOT the headphone HRIR.
With headphones ON, the personal ear canal resonances come into effect, because the acoustical conditions are altered.

https://dokumen.tips/documents/bina...one-equalization-david-griesinger-harman.html
Page 44 ff.


----------



## Brandon7s (Dec 4, 2021)

Brandon7s said:


> I'm going to mess with high-passing at different points to see if I can nail down more precisely where the flip in balance occurs between sub/bass and mid/upper bass. If the frequency center where I achieve balance is at the same point where the Ananda's minor channel balance appears to be, then that would indicate that the low-end imbalance on the Anandas isn't quite as minor as it appears on graphs.



Quick follow-up on this: after testing with some pure sine waves, the sub-bass isn't the problem. The problem appears as soon as harmonics are added to the sine wave at around 200 to 220 Hz; then the perceived balance immediately shifts to the right, while the region below that stays pretty close to centered. This isn't an Impulcifer thing, though, and in fact using my favorite BRIRs greatly reduces this shift in balance. I don't think it's the headphones either, since there's almost zero channel-balance variation near those frequencies in any of the measurements I've taken. I think it's just the way I hear things.

I tried the sine wave test from the low end all the way up to the highest frequency I can hear, and the channel imbalance is actually FAR worse in many places in the highs and mids. For example, I played a note around 880 Hz (an A) and it was well centered. I then went up a perfect fifth (D, 1175 Hz) and the note immediately panned about 50% to the right in my perception, even though it was mono and perfectly centered in my DAW and on the output meters of my interface. I then tried the same thing after turning on my favorite BRIR filter and BOOM, both notes were perfectly centered, even though the meters read that the right channel was at least 10 dB higher than the left.
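Anyone wanting to reproduce this kind of check can generate the mono tones directly. The sketch below is my own reconstruction of the test (the exact frequencies and the 1/k harmonic rolloff are assumptions, not what Brandon7s used):

```python
import numpy as np

fs = 48000

def tone(freq, dur=2.0, harmonics=1):
    """A mono (perfectly centred) test tone. With harmonics > 1, decaying
    overtones are stacked on the fundamental -- the kind of content that
    reportedly triggered the perceived balance shift."""
    t = np.arange(int(fs * dur)) / fs
    x = np.zeros_like(t)
    for k in range(1, harmonics + 1):
        x += np.sin(2 * np.pi * freq * k * t) / k
    return 0.5 * x / np.abs(x).max()      # normalise to -6 dBFS peak

a5 = tone(880.0)                    # pure A5: reportedly centred
d6 = tone(1174.66)                  # pure D6: reportedly pulled right
rich = tone(220.0, harmonics=8)     # low note whose harmonics reach ~1.8 kHz
```

Writing these to a stereo WAV with identical L/R channels (or playing them through the convolver) lets you compare perceived position against the meters, which by construction read identical for both channels.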

This would explain why I simply don't experience soundstage when using headphones AT ALL, _except_ when I'm using one of my personal BRIRs created with Impulcifer. It's incredible how much better that makes the headphone experience for me. I recently tried listening to some music raw, with nothing from Impulcifer or even any external crossfeed, and it is truly a _horrible_ time. There's no direction to the sounds. Everything seems to be coming from all directions without any sense of a stage whatsoever, and sounds constantly shift location. Even single instruments will pan from hard left to hard right depending on the register they are playing in. I have no idea how on earth I put up with that for so many years! Maybe that also explains why I'm such a fan of ambient music like the band Hammock, Lowercase Noises, and those types of artists. Their music is a wash of sound more than it is individual instruments and notes, so I don't notice much difference between listening to most of their stuff with Impulcifer enabled vs. disabled. It's definitely better with a personal BRIR, but it's not too bad without it either.



Iohfcasa said:


> Griesinger says, the hrtf of the blocked ear canal measurement  does not differ much from the unblocked one, BUT this only applies for the room/speakers hrtf and NOT the headphone hrir.
> With headphones ON the personal ear canal resonances come to effect, because the acoustical conditions are altered.
> 
> https://dokumen.tips/documents/bina...one-equalization-david-griesinger-harman.html
> Site 44 ff.


I find equal-loudness comparisons very difficult, just due to the tediousness and the fact that my aural memory for that kind of thing is very poor. However, this particular bit about personal ear canal resonances changing when wearing headphones gave me an idea, and I think it turned out to work well for me, though it's tough to say why, or even whether this additional measurement had anything to do with it at all and I just got lucky in EQing the area where the difference was most extreme.

I've been struggling to EQ out harshness in my BRIRs that is present in the high-mids, just slightly above where most female vocals sit. I've been experimenting with different corrective EQ curves but had not yet managed to find a good solution. It seemed that every time I tamed the section I was aiming for, I also lost a lot of non-problematic frequencies along with it. It's the other main reason I was strongly considering upgrading from my Anandas, besides not being sure whether their channel balance variance was low enough.

My idea was to go through a normal Impulcifer measurement session: put mics in ears, measure headphones, then take off the headphones and measure the speakers. However, in addition to performing the normal speaker measurement, I did an identical measurement while wearing my headphones. Yes, it changed the sound a bit: it dulled the highs some and changed some of the lower-mid range character. But I figured that doing this could help point out areas where the headphones are adding extra resonances or changing the acoustic characteristics in ways I might want to correct via EQ.

Here are the results for the normal speaker measurement with the headphones removed. I didn't use room correction, just to keep the variables to a minimum, FYI:





And here's the speaker measurement that I made while I was wearing my Anandas:



The primary difference between these two is the exaggerated hump between 1500 and 2200 Hz that you can see in the top graph. Compare the highest peak in that section to the valley at about 1000 Hz. The difference between those two spots in the _bottom_ graph looks to be about 3 to 4 dB, and it tapers off to another valley at just about 2 kHz. The difference over that same section in the _top_ graph (the normal measurement), between the valley at 1000 Hz and the next peak, looks to be about 7 to 8 dB, and the overall span of that hump is now more like a plateau than the spike it is in the bottom graph. It's higher in both amplitude and frequency extension.

I set up a filter in Equalizer APO to bring that section between 1500 and 2200 Hz down about 6 dB, and the difference was a major improvement. With that correction, this BRIR now sounds far more like my actual speakers, and I don't feel like I'm losing anything in the process. The vocals still retain their clarity and position in the mix, but the harshness I was getting from distorted guitars and other harmonically complex content vanished. After listening to it for a bit, turning off the EQ correction sounds like turning on a white-noise machine in the recording. It's a dramatic difference, especially when listening to metal.
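In Equalizer APO terms, a cut like that can be written as a single peaking filter placed after the convolution. The centre frequency and Q below are my guesses at values spanning the 1500-2200 Hz region; they are illustrative, not the exact filter used:

```text
# Hypothetical peaking (PK) filter: -6 dB centred near 1850 Hz,
# with Q ~ 2.6 so the cut spans roughly 1500-2200 Hz.
Filter 1: ON PK Fc 1850 Hz Gain -6 dB Q 2.6
```

The Q value follows from centre frequency divided by bandwidth (about 1850 / 700), so widening or narrowing the target band just means adjusting Q accordingly.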

I'm curious if this same plateau is present in the BRIRs of my other headphones. I haven't had the time yet to try it, but I'm definitely going to give it a whirl. I'm also going to do the same kind of measurements with the other headphones to see if the peaks for those match this peak in my Anandas' BRIRs or if each one has a different resonance. I'm betting it'll be a little bit of both.

The obvious downside to this kind of measurement is that it'll almost certainly be useless with closed-back headphones. However, if the frequencies where the resonances occur don't change much between open-back and closed-back headphones, then this will still give me a decent starting point for reducing that resonance on my DT770 and ATH-M50S. The BRIRs I've been using with them seem to exhibit the same kind of harshness that I solved via this measurement method for my Anandas, so maybe it will still be useful. I'll mess with this more over the weekend and see what I can figure out.


EDIT: an update from the next day. I still feel good about the corrective EQ that I've applied in the 2 kHz area, though I'm not sure whether it's pure luck that the graphs highlighted the exact spot where I've been experiencing harshness, or whether the two measurements objectively captured something that wasn't accounted for in the ordinary Impulcifer speaker measurements. Theoretically, Impulcifer should already account for any difference above the point of mic insertion. In any case, I still have a use for the additional headphone-worn speaker measurement, since I can use it in conjunction with an equal-loudness test as I describe in my next post.


----------



## Iohfcasa (Dec 4, 2021)

Brandon7s said:


> I find equal loudness comparisons very difficult just due to the tediousness and the fact that my aural memory for that kind of thing is very poor


Me too, but I think you misunderstood my post a bit, because with earful's "two devices" method you neither need a good aural memory nor have to match different frequency bands as Griesinger suggests.
With earful's "two devices" method you hear the two signals as nearly a single one due to the short transition, and that makes detecting loudness deviations so much easier.
A badge indicates the reference signal, and as soon as the speaker plays you take off your headphones and adjust the headphone loudness until it matches the speaker's.
Once that's done, you won't hear any difference between the two signals, so they are heard as a single one.

Your method described in the post can't include any ear canal resonances, because it remains a blocked ear canal measurement.
However, you could try to use something like that to make the two-devices method easier, without the need to take the headphones off.


----------



## Brandon7s (Dec 4, 2021)

Iohfcasa said:


> Me too, but I think you missunderstood my post a bit, because with earful's "two devices" method you neither need to have a good aural memory nor to match different frequeny bands like Griesinger suggests.
> With earful's "two devices" method you can hear the two signals as nearly a single one due to the short transition, and that makes detecting loudness deviations so much easier.
> A badge indicates the reference signal and as soon the speaker plays you put off your headphones, adjust the headphone's loudness untill it matches the speaker's one.
> Done this, you won't hear any difference between the both signals and so it's heard as a single one.


I think I understood correctly, as it seems to me to be nearly identical to the method shown in the video in a post from earlier in this thread. A tone is played out of a speaker and then again through headphones, and the user has to match the perceived amplitude of the two. Even the short transition from listening to the speaker to putting on and listening to headphones gives me trouble. I find that what I think "matches" has enough variance between attempts to make me doubt my own accuracy.

This is made even more difficult when it comes to trying to match each ear independently, since muting one speaker doesn't prevent the other ear from hearing the opposite-side speaker. It did just occur to me, though, that I could run that test per-ear more accurately by using an earplug in the ear that I'm _not_ measuring. Doing that, combined with your suggestion of keeping my open-back headphones worn to shorten transition times between the tones, could very well be a major improvement for me, so I'm going to give that a shot today.



Iohfcasa said:


> The method you described in your post can't include any ear canal resonances, because it remains a blocked ear canal measurement.
> However, you could try to use something like that for easier use of the two devices method without needing to take the headphones off.


I should have clarified more on this point. I wasn't trying to capture the differences between blocked vs. unblocked ear canals, since as you mentioned they would be blocked in both measurements. I was trying to capture any _other_ acoustic differences to try to improve the corrective EQ that I have been using to bring the higher frequency spectrum more in line with what I hear out of my speakers. I probably shouldn't have used the word "resonance" when referring to that difference in my original posts, since I don't actually know if the differences are caused by resonance, per se.

This particular bit that I've put in bold is what made me want to perform this particular test:


Iohfcasa said:


> Griesinger says the HRTF of the blocked ear canal measurement does not differ much from the unblocked one, BUT this only applies to the room/speakers HRTF and NOT the headphone HRIR.
> With headphones ON the personal ear canal resonances come into effect, *because the acoustical conditions are altered.*


  I figured that if there's an acoustic difference outside of the ear canal when wearing vs. not wearing headphones, then maybe my two-measurement method could capture some of that difference. Now, if the _only_ relevant acoustic difference would already be captured by the binaural mics located outside of the ear canal then this experiment would be fruitless. It wouldn't provide any ability to compensate for the effects of the ear canal directly but I thought it could lead to an incremental improvement in correcting other properties that are altered when wearing headphones.

It seemed like a long-shot but I think I did get something useful out of it, at least with my Anandas. Hopefully the two-device method as mentioned above will provide reliable data that includes ear canal resonance itself, which would be fantastic.

Also, even if the on-off measurements that I took yesterday with Impulcifer end up having provided no objectively useful information, and the EQ that I applied as a result of taming the 2 kHz hump was by pure dumb luck actually just correcting ear canal resonance, I can still use the headphone-on speaker measurement to calibrate the two-device equal loudness measurement. Doing that test with the headphones worn will obviously affect the outcome, but I can use the Impulcifer speaker measurements to help "undo" the effect that wearing the headphones has while listening to speakers.

Even if Impulcifer 100% perfectly captured the auditory signal chain and included perfect replication of both the physiological and psychoacoustic properties of my hearing, I'd still like to have a well-tailored personal left-and-right EQ curve that I could use with devices like my phone that can't utilize convolution technology. I've been using AutoEQ's Harman target curves for years, and while they improve upon the stock EQ of everything I've listened to, I think my ideal personal curve is still miles away from what those measurements can provide - which isn't a surprise considering the complex nature of hearing.


----------



## Iohfcasa (Dec 4, 2021)

Brandon7s said:


> I think I understood correctly, as it seems to me to be nearly identical to the method shown in the video in this post from earlier in this thread. A tone is played out of a speaker and then again with headphones, and the user has to match the perceived amplitude of the two.


Griesinger's approach aims at matching *dissimilar* frequency bands to each other, whereas the two devices method always lets you match the *same* frequency band, and that's a lot easier to do.
Especially with the inherent test of proving the results by ear.
If the loudness of hps and speakers is equal, it's heard as a *single tone/noise band* without any interruptions.
Trust me, I tried Griesinger's method and it's far too abstract, tedious and complicated for my brain to decide whether the loudness matching of two different frequency bands is right or not. It's more of a confusing guessing game.



Brandon7s said:


> This is made even more difficult when it comes to trying to match each ear independently, since muting one speaker doesn't prevent the other ear from hearing the opposite-side speaker


Exactly, this problem remains and is not that easy to get around.
Even with perfect front speaker / phantom speaker (stereo setup) to hp loudness matching, you need to adjust the left/right balance afterwards.
The loudness is always perceived as an L+R sum heard by both ears, and if their hearing ability differs, the results will be a bit odd.
Earful also provides a left/right channel EQ; you can choose the frequency bands and noise signal for it.


Brandon7s said:


> I'd still like to have a well-tailored personal left-and-right EQ curve that I could use with devices like my phone that can't utilize convolution technology


Did you try something like the "hearing test" app (by e-audiologia / Marcin Masalski)?
For dual-channel EQing on Android 9 and above you can use SpotEQ31.



> I can still use the headphone-on speaker measurement to calibrate the two-device equal loudness measurement. Doing that test with the headphones worn will obviously affect the outcome, but I can use the Impulcifer speaker measurements to help "undo" the effect that wearing the headphones has while listening to speakers.



Like I said, you can try, but I don't recommend it due to the uncertainties of omitting the ear canal resonances.
They vary from ear to ear, and this is one reason why the same hps sound so different to different people.
Look up "standing waves in (half) closed and open tubes"; the length of the ear canal is crucial, and it's even more complex due to its curved form, similar to a horn.


----------



## Brandon7s (Dec 4, 2021)

Iohfcasa said:


> Griesinger's approach aims at matching *dissimilar* frequency bands to each other, whereas the two devices method always lets you match the *same* frequency band, and that's a lot easier to do.
> Especially with the inherent test of proving the results by ear.
> If the loudness of hps and speakers is equal, it's heard as a *single tone/noise band* without any interruptions.
> Trust me, I tried Griesinger's method and it's far too abstract, tedious and complicated for my brain to decide whether the loudness matching of two different frequency bands is right or not. It's more of a confusing guessing game.


Oh man, you got that right. I just spent some time trying Earful's equal loudness test with the 1 kHz tone matched to different frequencies, just because I hadn't used this particular software before and wanted to get the hang of how it works... but geez, the results of that particular kind of test are worse than useless. It's impossible to make a decision on what sounds similar in amplitude when the frequencies are so far apart.

I'm going to give the two-device equal loudness test a try tomorrow, though; I have confidence that it will be a huge improvement. Matching the same frequency will be a dream after going through the mess I just did, haha.



Iohfcasa said:


> Did you try something like the "hearing test" app (by e-audiologia / Marcin Masalski)?
> For dual-channel EQing on Android 9 and above you can use SpotEQ31.


I've tried the hearing test built into Peace, which looks to be very similar in design to this one by e-audiologia. I'm going to give that one a shot too, out of curiosity. And thanks for the SpotEQ31 link! I've been using Wavelet for EQing, but it doesn't support dual-channel EQ, so I'm definitely going to install this app and give it a try. If I can get decent results from the two-device Earful tests then I should be able to nail a good dual-channel EQ curve for portable use, which would be lovely. And even if I can't get good results out of the Earful two-device test, I could use the data provided by my Impulcifer measurements to help create separate left/right channel correction curves. They would omit ear canal resonance, of course, but I still think it'd be a whole lot better than nothing.


----------



## Iohfcasa

Brandon7s said:


> I'm going to give the two-device equal loudness test a try tomorrow, though; I have confidence that it will be a huge improvement


Ok, keep in mind to:
Sit in front of the speakers at the same position the measurements were done.
Use the same loudness as for listening to music.
Route the signals the right way: headphones with convolution applied, speakers without any convolution, of course. So with EQ APO applied for convolving the hp signal, the speakers' signal has to circumvent the Windows driver; you can use ASIO for that.
Choose a noise band and set the Q factor to the desired value; I would keep it between 1 and 5. I don't know why Griesinger's noise bands sound less harsh and smoother even with the same Q value.
Try to decrease instead of increase the EQ volume.
After the "two device" loudness method is done, you can run the channel balance test by adjusting the left/right dB value of the EQ; the noise should be located at the center in front of you. That's the point where it's getting more complicated; perhaps we need some other, more appropriate test signals for this purpose (voices/beeps, etc.).
After the channel balance adjustment is completed, you have to repeat the two devices method, because the loudness of the frequency bands has been altered.
Just maintain the left-right dB difference and adjust both channels at once.


----------



## davidtriune (Dec 5, 2021)

I wonder how much better recordings would sound with probe microphones, since headphones are better equalized and directional changes are more accurate. (Not sure how dangerous it is to stick a PVC tube up your ear canal though.)


----------



## Brandon7s (Dec 5, 2021)

Iohfcasa said:


> Ok, keep in mind to:
> Sit in front of the speakers at the same position the measurements were done.
> Use the same loudness as for listening to music.
> Route the signals the right way: headphones with convolution applied, speakers without any convolution, of course. So with EQ APO applied for convolving the hp signal, the speakers' signal has to circumvent the Windows driver; you can use ASIO for that.
> ...



Thank you for these steps, they were very helpful in my first attempt at completing this process. I wasn't able to get much useful info out of today's session, but that's because the BRIRs I've created with Impulcifer to try this out have too many channel balance issues. This room makes it VERY difficult to get symmetrical measurements below about 300 Hz, with massive channel balance spikes like -17 dB in the left ear vs. the right ear at 100 Hz. Ear-specific room correction helps, but I'm finding it nearly impossible to get a mic placement that is good enough to resolve the issue to my satisfaction. Using the --channel_balance options min, left, or right all improve the bass balance to a large degree, but mess up everything above the bass frequencies, so sadly I can't use those.

What I'd LOVE is the option to utilize two different channel balance options with a threshold control, so that I could use avg/left/right on the problematic low-end frequency range and then use something like mids, trend, or nothing at all in the higher frequencies so those don't get messed up. Either that, or have a filter that makes the bass mono, which is something that a lot of mastering EQ plugins can do and works wonderfully. I've been trying to think of how to process the binaural mics' input signal through my DAW so that I can slap one of those bass-mono-izer plugins on it and use that when taking Impulcifer measurements. That'd remove quite a bit of headache. I'm going to work on that here in a minute and see how it goes. I know it's doable, just gotta experiment with routing on my interface to figure out the details. I do have a few BRIRs that have VERY good channel balance, but my room arrangement has changed and I can't replicate that same listening environment well enough to trust that it would be a sufficient match for the Earful two-device test. The acoustics of the room have changed quite a bit since those recordings.

Once I have a nicely balanced BRIR with my current room layout, I'll give the Earful two-device test another shot. Just doing the left and right sides' tests did provide some insights, such as that particular hump at 2 kHz showing up as clear as day. My understanding of ear canal resonance, though, is that it has a much more noticeable effect at higher frequencies, above 8 kHz or so, but I think I'm missing some information there and need to read up on that subject more. The thing is, if it ISN'T ear canal resonance, then I'm curious why Impulcifer isn't picking it up and compensating for it. Maybe there's still room for me to improve the binaural mic placements, though I'm not sure how to go about that. There's not much further I could insert those mics and still get a decent signal without blocking the mic port.


----------



## Iohfcasa

Brandon7s said:


> My understanding of ear canal resonance though is that it has a much more noticeable effect at higher frequencies


That's calculable: just think of the ear canal as an approximately half-closed tube (and more of a tube closed at both ends with headphones on) and you'll see the affected range of frequencies.


> "Acoustically, the outer ear works as a tube resonator, with the strongest first resonance around 3 kHz, where a quarter wavelength of sound in air (10 cm / 4 = 2.5 cm) fits the length of the ear canal"


https://www.bksv.com/en/knowledge/blog/sound/anatomy-of-the-ear

Keep in mind that the acoustic conditions are altered with blocked ears, e.g. with headphones.
Just google "outer ear" + "resonances".
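The quoted quarter-wavelength figure can be sanity-checked with a few lines of Python. This is only a rough sketch: 343 m/s for the speed of sound and a 2.5 cm canal are the usual textbook round numbers, and the closed-at-both-ends case is a crude idealization of the headphones-on condition, not a model of any real headphone.

```python
# Idealized tube-resonator model of the ear canal.
# Assumptions (textbook round numbers): speed of sound 343 m/s, canal 2.5 cm.
def quarter_wave_resonances(length_m, speed=343.0, n=3):
    """Tube closed at one end (open ear): odd multiples of c / (4 * L)."""
    f0 = speed / (4.0 * length_m)
    return [f0 * k for k in (1, 3, 5)[:n]]

def half_wave_resonances(length_m, speed=343.0, n=3):
    """Tube closed at both ends (crude headphones-on case): multiples of c / (2 * L)."""
    f0 = speed / (2.0 * length_m)
    return [f0 * k for k in range(1, n + 1)]

print(quarter_wave_resonances(0.025))  # first resonance ~3.4 kHz, matching the quote
print(half_wave_resonances(0.025))     # blocking both ends shifts the resonances
```

In this idealized model the first open-ear resonance lands right around the 3 kHz figure from the B&K article, and changing the boundary condition moves the whole resonance series, which is one way to see why blocking the canal alters the affected frequency range.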


----------



## musicreo

davidtriune said:


> i wonder how much better recordings would sound with probe microphones since headphones are better equalized and directional changes are more accurate. (not sure how dangerous it is to stick a pvc tube up your ear canal tho)


They use soft silicone tubes at the end. The video from Griesinger where he shows how he builds this kind of mic on a low budget is very helpful. But the mics he uses aren't available anymore, and I also fear that their technical specs are bad.
I think that new MEMS microphones could be a game changer. For example, look at the Infineon IM73A135V01. The technical specifications are comparable to or even better than those of the very good Primo capsules we use here, but they come in a housing that is much smaller. Such mics can be placed much more easily into the ear canal. The only problem is that the soldering can't be done by the average guy anymore, and we would have to wait for companies to provide us with this kind of in-ear mic.


----------



## Brandon7s (Dec 7, 2021)

musicreo said:


> They use soft silicone tubes at the end. The video from Griesinger where he shows how he builds this kind of mic on a low budget is very helpful. But the mics he uses aren't available anymore, and I also fear that their technical specs are bad.
> I think that new MEMS microphones could be a game changer. For example, look at the Infineon IM73A135V01. The technical specifications are comparable to or even better than those of the very good Primo capsules we use here, but they come in a housing that is much smaller. Such mics can be placed much more easily into the ear canal. The only problem is that the soldering can't be done by the average guy anymore, and we would have to wait for companies to provide us with this kind of in-ear mic.


That looks very promising. This is the first time I've heard about MEMS microphones, and it's amazing to me that microphones that small can have specs like that. Very cool tech.

I've still not had the time to work more on this Earful two-device test, but I have had a lot of fun experimenting with multi-band FX processing in my digital audio workstations. I don't know why it didn't occur to me to try this before, but the flexibility it offers is huge. Using a multi-band splitter plugin like TBProAudio's ISOL8 (which is free) or any similar effect has let me mix and match different BRIRs as much as I want, as well as mix BRIRs with the dry signal at whatever frequency cutoff points I want. For instance, I can use BRIRs with different --channel_balance options for the low end and high end, or even low end, low mids, mids, and highs, though I've not found a need for that yet. I had quite a lot of success using a BRIR with the Average channel option in the low end with the crossover point at about 220 Hz and then using one from the same session with the Trend option for everything above that.
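The band-splitting idea above can be sketched in a few lines of Python. The moving-average lowpass here is only a hedged stand-in for a real crossover filter (an actual setup would use something like Linkwitz-Riley crossovers at the 220 Hz point), but it shows the complementary property that makes mix-and-match safe: the low and high bands of the same signal always sum back to the original.

```python
# Minimal two-band mix-and-match sketch: low band from one signal,
# high band from another, using a complementary split (high = x - low).
# The moving-average lowpass is a crude stand-in for a proper crossover.
def lowpass(x, width=5):
    """Crude moving-average lowpass over a list of samples."""
    out = []
    for i in range(len(x)):
        window = x[max(0, i - width + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

def mix_bands(low_source, high_source, width=5):
    """Combine the low band of one signal with the high band of another."""
    low = lowpass(low_source, width)
    high = [h - l for h, l in zip(high_source, lowpass(high_source, width))]
    return [l + h for l, h in zip(low, high)]
```

Because the high band is defined as the residual after the lowpass, feeding the same signal into both inputs reconstructs it exactly; only when the two sources differ does the crossover choice matter.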

I've also played around a bit with using a completely dry signal for the low end, and that has been enlightening. It's interesting to see just how much of an effect BRIRs have on our perception of the bass region. Generally, the completely dry bass signal is far more fatiguing and much harder to extract detail from compared to the same frequency region going through one of Impulcifer's IRs. However, it's not at all difficult to mimic the effect of the BRIR in that region by adding reverb to the dry headphone signal with a plugin. There's no localization happening in that area, so it doesn't sound 'uncanny' like it would in the higher ranges, and using the dry headphone signal + reverb gives the advantage of not having to worry about making a bunch of EQ corrections to get the sound reasonably flat if your room is horrible like mine is.

 The plugin that I use for loading the BRIRs into my DAW is Reverberate 3 by LiquidSonics and they just came out with an update that I've found VERY useful. It allows for adjustment of reverb length in an impressively transparent way (no downsides at all, from what I can tell) and it allows the reverb length parameter to be adjusted independently for both highs and lows with an adjustable crossover point. So now I can instantly fix any low-end ringing to _any_ BRIR without requiring me to fiddle with reverb-management. It can also increase the reverb time, though I've not yet found a good use for that. I could have done this using multi-band processing but it would have required me to create a new BRIR using Impulcifer's reverb-management for every possible length that I would want to try, which is ridiculously time consuming compared to adjusting it in real time.

My DAW of choice is Bitwig, but it doesn't support multi-channel audio, so I've also been playing around with this stuff a bit in Reaper, which does. This allows each channel of the 7.1 configuration to be independently processed; I can set up a multi-channel session and tweak the decay time, reverb time, levels and EQ of each 'speaker' however I'd like, all in real time while watching a movie. That makes it much easier to fine-tune and to fix problems as you notice them vs. going back and forth between processing options using just Impulcifer. It's not really even that processor intensive, either: I was running 7 instances of Reverberate 3 along with a slew of EQ and an instance of Cinematic Rooms Pro, which is LiquidSonics' multi-channel spatial-audio reverb plugin, and I was barely hitting 8% CPU usage. My processor isn't anything crazy, either; it's an Intel i7-7700.

The part that I've not tried is creating new IRs from the processing that I've been doing so that I can use the final results with Hesuvi. It shouldn't be difficult to figure out, though. I've created TrueStereo IRs before using Wave Arts' MlsTool to copy the ooyh HRIRs that come with Hesuvi, but the end result did not sound exactly the same as the source HRIR. I think that could be because MlsTool uses noise as its source signal instead of sweeps, so it should simply be a matter of finding a better method of creating the IRs post-processing.

If anyone else wants to try this kind of processing, feel free to let me know and I can go into more detail about how to set up the Reaper session for 7.1, as well as how to extract the individual IRs from the 14-channel hesuvi-order .wav files produced by Impulcifer so that they can be used in convolution reverb plugins like Reverberate 3 or Convology XT (which is free but not anywhere near as powerful). None of the convolution plugins I know accept IRs with more than 4 channels, and they have to be in a specific order.
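The de-interleaving part of that extraction is simple enough to sketch in plain Python: split interleaved 14-channel frames into seven 2-channel pairs that a stereo-only convolution plugin could load. Note that pairing adjacent channels (0,1), (2,3), ... is purely an assumption for illustration; check the actual hesuvi channel order before trusting which pair corresponds to which virtual speaker.

```python
# Hypothetical sketch: de-interleave 14-channel frames into seven stereo pairs.
# 'frames' is a flat list of interleaved samples, frame by frame.
# The adjacent-channel pairing is an assumption, NOT the documented hesuvi order.
def split_into_stereo_pairs(frames, num_channels=14):
    """Return a list of num_channels // 2 lists, each interleaved [L, R, L, R, ...]."""
    assert len(frames) % num_channels == 0, "partial frame in input"
    pairs = [[] for _ in range(num_channels // 2)]
    for i in range(0, len(frames), num_channels):
        frame = frames[i:i + num_channels]
        for p in range(num_channels // 2):
            pairs[p].extend(frame[2 * p:2 * p + 2])
    return pairs
```

In practice you'd read the samples with a wav library and write each pair back out as a stereo file; the slicing logic stays the same regardless of which library handles the file I/O.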


----------



## morgin

I came across this video. Is this something Impulcifer does automatically? There's a noticeable change in the depth of the sound, and it sounds like what Impulcifer does to my audio.


----------



## Brandon7s (Dec 8, 2021)

morgin said:


> I came across this video. Is this something impulcifer does automatically?


Dan Worrall is awesome, one of my favorite people in the world of audio production and the internet in general. He's incredible at explaining complex topics in a very clear and concise manner. His video on oversampling and sample rates is a great example.

And yes, Impulcifer introduces crossfeed, thereby changing binaural mixes to stereo mixes just by the nature of how it works. You can test this for yourself if you'd like: take any song and hard-pan the output PRIOR to Hesuvi in the signal chain, and you'll still hear audio in the opposite side's channel. Turn off Hesuvi and suddenly that side of the headphones will go dead silent. I consider headphones without crossfeed to be dual-mono playback systems and not stereo for this very reason.
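The hard-pan test can be illustrated with a toy crossfeed matrix. This is only a sketch of the principle: real BRIR convolution also delays and filters the cross path rather than just scaling it, and the 0.3 gain is an arbitrary illustration value.

```python
# Toy crossfeed: a fraction of each channel is fed to the opposite ear.
# Real BRIR convolution also applies delay and filtering to the cross path;
# this sketch only scales it. gain=0.3 is an arbitrary illustration value.
def crossfeed(left, right, gain=0.3):
    out_l = [l + gain * r for l, r in zip(left, right)]
    out_r = [r + gain * l for l, r in zip(left, right)]
    return out_l, out_r

# Hard-panned input: signal only in the left channel.
left = [1.0, 0.5, -0.5]
right = [0.0, 0.0, 0.0]
out_l, out_r = crossfeed(left, right)
# out_r is no longer silent - the "dead" channel comes alive, as with Hesuvi on.
```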

The delay he introduces near the end of the video essentially emulates the way we hear audio in the real world by introducing phase differences between the channels. We don't hear sound waves at the same phase in both ears - not unless the wavelength lines up perfectly - so most of the time there is a very slight difference between what our left and right ears receive, caused by each ear experiencing the same wave at a different point in its phase.
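For a rough sense of the numbers involved, the classic Woodworth approximation treats the head as a rigid sphere and estimates the interaural time difference for a distant source. The 8.75 cm head radius below is a common textbook assumption, not a measurement of anyone's head.

```python
import math

# Woodworth approximation for interaural time difference (ITD):
# ITD = (r / c) * (theta + sin(theta)) for a distant source at azimuth theta.
# Assumptions: rigid spherical head, r = 8.75 cm (textbook value), c = 343 m/s.
def itd_woodworth(azimuth_deg, head_radius=0.0875, speed=343.0):
    theta = math.radians(azimuth_deg)
    return (head_radius / speed) * (theta + math.sin(theta))

# A source dead ahead (0 degrees) reaches both ears in phase (ITD = 0),
# while a source directly to one side (90 degrees) arrives roughly 0.65 ms
# later at the far ear - the kind of delay the video emulates.
```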


----------



## Brandon7s

Iohfcasa said:


> Ok, keep in mind to:
> Sit in front of the speakers at the same position the measurements were done
> Use the same loudness as for listening to music
> Route the signals the right way- headphones with convolution applied, Speakers without any convolution of course. So with eq-apo applied for convolving the hp signal, the speakers signal has to circumvent the windows driver, you can use asio for that.
> ...



I finally got around to doing this with a decently balanced BRIR, and just as I got up to the 8 kHz slider on the left channel, my left speaker died (I'm using first-gen LSR305s). Completely unresponsive. That's a good excuse for me to go buy some Kali LP-6's that I've been wanting to try out, though, so I'm not complaining.

I finished up the Earful two-device test for the right side since I had already gotten nearly done with the left one. This means that I couldn't finish balancing the two sides and then do another pass of the test with Left+Right, unfortunately. I went ahead and set up an APO profile using the two EQ correction curves and then tried to balance them by ear, but the results are pretty far out of whack. Still, I can see a lot of potential with this, and there are definite improvements to the timbre quality of the BRIR in the areas where I was having trouble before.

  Once I get my new speakers I'll have to take a stab at this again so I can properly adjust channel balance and then fine tune things as needed.


----------



## castleofargh

morgin said:


> I came across this video. Is this something Impulcifer does automatically? There's a noticeable change in the depth of the sound, and it sounds like what Impulcifer does to my audio.



I tend to call stereo albums played unprocessed on headphones, "wrong stereo". That's one of my 99 secrets to make many friends on Headfi. 

I played the video mostly because @Brandon7s said the guy is awesome. 15s into it my brain went: "hey, that's the voice from the Fabfilter videos! hermagerd I lurv this guy!".
Apparently I can learn a great deal from somebody and be shameless enough not to remember his name. But at least I can +1 on the awesomeness and pedagogy.


----------



## Iohfcasa (Dec 8, 2021)

Brandon7s said:


> I finally got around to doing this with a decently balanced BRIR, and just as I got up to the 8 kHz slider on the left channel, my left speaker died (I'm using first-gen LSR305s). Completely unresponsive. That's a good excuse for me to go buy some Kali LP-6's


You can do the loudness matching with just one speaker in front of you; the perceived left-right mismatch should remain the same.
With a two-speaker stereo setup you'll perceive the phantom center in front of you and adjust its loudness. I guess it sounds quite similar to a mono center signal -
at least if the mono speaker's loudness is amplified to the dB volume of the stereo phantom center.
In both cases, stereo and mono center, you can only correct the timbre for front localisation and the left-right imbalance.
The front-left and front-right timbre correction is way more complicated, if not impossible, to achieve; fortunately these are less crucial to the ear.


----------



## Brandon7s

Iohfcasa said:


> You can do the loudness matching with just one speaker in front of you; the perceived left-right mismatch should remain the same.
> With a two-speaker stereo setup you'll perceive the phantom center in front of you and adjust its loudness. I guess it sounds quite similar to a mono center signal -
> at least if the mono speaker's loudness is amplified to the dB volume of the stereo phantom center.
> In both cases, stereo and mono center, you can only correct the timbre for front localisation and the left-right imbalance.
> The front-left and front-right timbre correction is way more complicated, if not impossible, to achieve; fortunately these are less crucial to the ear.


I was actually just wondering about this. I think I'll give it a shot with one speaker and see how it goes.


----------



## Iohfcasa (Dec 8, 2021)

Perhaps the pink noise will sound a bit different from the HRIR-convolved one, because with a mono center speaker you'll get only one signal per ear.

I don't know how well Earful's EQ is transferable to others like APO, because EQs don't all work exactly the same way.
Is there a way to use APO for equalising while running Earful for speaker/hp + noise band switching?


----------



## Brandon7s (Dec 8, 2021)

Iohfcasa said:


> Perhaps the pink noise will sound a bit different from the HRIR-convolved one, because with a mono center speaker you'll get only one signal per ear.
> 
> I don't know how well Earful's EQ is transferable to others like APO, because EQs don't all work exactly the same way.
> Is there a way to use APO for equalising while running Earful for speaker/hp + noise band switching?



The pink noise already sounds different to me even with a regular stereo setup in the same position in which the BRIR was recorded. That's been making it a bit difficult to match amplitude perfectly, and is probably why the channel balance that I'm getting is so wonky. Mono might actually make getting a better resemblance of ear canal resonance easier, since there are fewer variables involved. Of course, the effect of that ear resonance is going to change with the angle of the speaker in relation to the ear, but it'd still be better than nothing, I would imagine.

 The test results from Earful are saved as REW-readable measurement files, so making a matching EQ curve is no more troublesome than creating any other kind of corrective EQ curve.

Using APO for equalizing while running Earful's tests is a piece of cake as long as you are using more than one device with the two-device test. In fact, I plan on experimenting with using APO to tweak channel balance while using Earful to go back and listen to the same noise bands again. That should even be doable without using a reference playback at all, though I have no idea if it'll provide decent results. It's worth a shot though - the best way to find out is to try it, haha.

I suppose I could even do that channel balance tweaking without even having applied the corrective EQ, essentially using Earful just as a tool to help me systematically fix channel balance issues between a large amount of frequencies in the raw BRIR. Actually, I should probably do that _first_, and then go back and run the full Earful two-device test from scratch using the BRIR with improved channel balance.


----------



## Iohfcasa

Hmm, I've only tried the two devices method for a short time with a few bands and without any active HRIR, just to prove the usability.
Putting the hps on and off for speaker signal comparison, I could match the pink noise loudness so well that it appeared as one single noise to me.
I thought an active HRIR would even improve the comparability; on the other hand, pink noise doesn't sound really spatial by itself.
Sometimes I mistook the speaker signal for the headphones one, lol.

It's not forbidden to try loudness matching with the sine signal, but I think it's more prone to errors caused by the too-narrow bands, and it's even more tedious.



Brandon7s said:


> I suppose I could even do that channel balance tweaking without even having applied the corrective EQ, essentially using Earful just as a tool to help me systematically fix channel balance issues between a large amount of frequencies in the raw BRIR


Hmm, if the channel balance problems are based on hearing ability, a pure speaker channel balance EQ may be advised, because it prevents any channel balance errors based on the HRIR measurement.
I personally find it quite hard to determine the correct channel balance with pink noise; it sounds so wide and somehow unspatial by itself.
Hopefully my hearing ability is quite similar in both ears.


----------



## Brandon7s (Dec 9, 2021)

Iohfcasa said:


> Hmm, I've only tried the two devices method for a short time with a few bands and without any active HRIR, just to prove the usability.
> Putting the hps on and off for speaker signal comparison, I could match the pink noise loudness so well that it appeared as one single noise to me.


Did the bass region give you any trouble? I found it very difficult to match amplitude below 200 Hz due to the uneven nature of the noise. I eventually switched over to a pure tone to make that easier, though I think using a pure sine wave in the low end is a bad idea simply because the distance you are from the speaker can have a dramatic effect on the amplitude you experience - the perceived loudness of those frequencies is highly phase dependent. Just moving your head a few inches forward or backward can change the loudness significantly.
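The phase dependence above can be sketched with a toy interference model: treat the direct sound and a single wall reflection as two equal sinusoids whose relative phase depends on the path-length difference. The numbers are arbitrary illustration values, not a model of any actual room.

```python
import math

# Two-path interference sketch: direct sound plus one equal-strength
# reflection, offset by the travel-time difference between the paths.
# Assumptions: equal amplitudes, c = 343 m/s, single reflection only.
def summed_amplitude(path_diff_m, freq_hz, speed=343.0):
    """Amplitude of two equal unit sinusoids offset by the path delay."""
    phase = 2 * math.pi * freq_hz * path_diff_m / speed
    # |1 + exp(j * phase)| = 2 * |cos(phase / 2)|
    return 2 * abs(math.cos(phase / 2))

# At 100 Hz the wavelength is ~3.43 m. With zero path difference the two
# paths add constructively; a half-wavelength difference produces a null,
# so perceived level at a pure low tone swings strongly with position.
```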



Iohfcasa said:


> I thought an active HRIR would even improve the comparability; on the other hand, pink noise doesn't sound really spatial by itself.
> Sometimes I mistook the speaker signal for the headphones one, lol.


I ran into this same problem. I wish there was a way to change the audio that is used by Earful; then I could load up the sounds of instruments or noises that I can more easily recognize spatially. I'll probably have to design a sound myself and then filter it with a bandpass to match the narrow frequency range that I use in Earful, and see if I can get some sounds that are a decent replacement and easier to determine balance with. I can do this in a DAW and then set up a series of pitches that mirrors the frequency bands used in the Earful test in order to keep things as apples-to-apples as possible.



Iohfcasa said:


> Hmm, if the channel balance problems are based on hearing ability, a pure speaker channel balance eq may be advisable, because it prevents any channel balance errors based on the hrir measurement.


You know, it never even occurred to me that I should try checking my channel balance perception with my speakers themselves. I've only been testing it with my headphones, with and without BRIRs applied. I have a feeling that I will have no issues with channel balance on speakers, but I should definitely verify that first. If I DO have problems with that, then I need to work on correcting that balance before I produce any more BRIRs.


----------



## Iohfcasa (Dec 9, 2021)

Brandon7s said:


> Did the bass region give you any trouble? I found it very difficult to match amplitude below 200hz due to the uneven nature of the noise


I think it's not advised to run the loudness eq below 200hz; thankfully the bass section is not so crucial for localisation, at least timbre-wise.
Although in general bass is more room-filling and less perceived as a pinpoint spot, the reverb of bass in a specific room has a decisive effect on the perception of distance and room acoustics.
I hear that clearly with kickbass, probably due to the short and well-defined impulse followed by the reverb without much overlay.

I cannot help much with the bass section; I think it's very room dependent and may ultimately need a manually adjusted reverb.

Do you still use the Ananda?
I guess you're not the first one having problems with its left/right balance, at least with convolution.

_"Had measured in one run, speakers, Hifiman Ananda and HD800. HD800 super, Ananda - something went wrong. Left side sounds much fuller than the right."_
Quotation of "richterdi" from hifi-forum.de in the Realiser A16 thread


----------



## Brandon7s (Dec 9, 2021)

Iohfcasa said:


> I think it's not advised to run the loudness eq below 200hz; thankfully the bass section is not so crucial for localisation, at least timbre-wise.
> Although in general bass is more room-filling and less perceived as a pinpoint spot, the reverb of bass in a specific room has a decisive effect on the perception of distance and room acoustics.
> I hear that clearly with kickbass, probably due to the short and well-defined impulse followed by the reverb without much overlay.
> 
> I cannot help much with the bass section; I think it's very room dependent and may ultimately need a manually adjusted reverb.


I agree, the bass is largely at the mercy of room acoustics. For my second Earful run I ended up using 1kHz as the starting point, though I think that's a little too high; I'll probably go with 500hz next time, since from what I can tell my spatial perception doesn't really start kicking into gear until then.



Iohfcasa said:


> Do you still use the Ananda?
> I guess you're not the first one having problems with its left/right balance, at least with convolution.


I'm still using the Ananda, but I've also created BRIRs for my DT1990 Pros and my DT770s, and the left/right imbalance I get is still present in those in the higher frequencies (500hz and up), though those headphones do seem to have fewer issues with imbalance in the low end. I feel like I've gotten a pretty good grasp on correcting the wonky low-end balance with EQ on my Anandas now, so I'm not too worried about that at this point. And they are a truly huge upgrade in overall sound quality compared to even the DT1990s.

 I did have a thought concerning improving the channel balance issues that are exaggerated when I use EQ correction from Earful's two-device tests: I wonder if I'm actually inserting the in-ear mics too far when I'm creating the BRIR, and therefore not limiting Impulcifer's compensation to just the effects of the pinna alone. If that's true, and I'm capturing some resonance from the ear canal in the BRIR, then I believe that would throw off the results from using Earful's EQ curves by applying double compensation. I'm going to try some more shallow insertions to limit the effect of anything beyond the pinna and see if that helps. That might explain why deeper insertions tend to be brighter than the BRIRs I have with shallower insertions, too.

 I'm going to Guitar Center today to buy a pair of Kali LP-6s, so I'll have time to try some of these ideas over the weekend. I'm interested in seeing how the bass port being in the front affects the low end in my room; I don't expect it to be much of a difference overall, but at least it will give me more flexibility in positioning, which would be most welcome.


----------



## Iohfcasa (Dec 9, 2021)

Brandon7s said:


> wonder if I'm actually inserting the in-ear mics too far when I'm creating the BRIR, and therefore not limiting Impulcifer's compensation to just the effects of the pinna alone. If that's true, and I'm capturing some resonance from the ear canal in the BRIR, then I believe that would throw off the results from Earful's EQ curves by applying an amount of double compensation


I don't know if Impulcifer does a simple hp eq or even measures some of the hp's phase/reverb and tries to correct that.
I assume the first, so with just eqing frequency bands you shouldn't be able to inflict any "wrong/double compensation".
With the speaker/hp loudness matching you capture the sound reaching your *eardrum* and decide any further adjustments by that. So you're assessing the target values directly.
You could also omit the hp eq compensation and use the two-device method with the raw hrir, although this would probably lead to more eqing work.
That's the only way provided for in-ears, so it's quite essential in some cases.

I personally would prefer a mic seated shallow at the ear entrance, just because it's a bit easier to judge the (un)evenness.

Keep in mind that there are two main cues for detecting the left/right position of sound sources:
Interaural time difference (ITD): more crucial for low frequencies, because the longer wavelengths reach the ears at different phases
Interaural level difference (ILD): more crucial for high frequencies, because the amplitude of smaller wavelengths is attenuated much more by obstruction (head shadow)

If the hrir contains ITD errors due to uneven placement, these can't easily be corrected by loudness adjustments.

The question is where the ITD-crucial frequency range ends, and of course these ranges are not strictly separated.


Edit: https://www.frontiersin.org/articles/10.3389/fnins.2014.00034/full (section 3, "Human ITD sensitivity")


----------



## Brandon7s (Dec 9, 2021)

Iohfcasa said:


> I don't know, if impulcifer does a simple hpeq or even measures some hp's  phase/ reverb and tries to correct that.
> I assume the first, so with just equing frequency bands  you shouldn't be able to inflict any "wrong/double compensation".
> With the speaker/hp loudness matching you capture the sound reaching your *eardrum * and decide any further adjustments by that. So you're assessing the target values directly.


Good point, that would depend on exactly what Impulcifer is doing when it performs its headphone compensation. That said, one of the issues I'm having is a not-insignificant difference in how I hear the noise bands in Earful between the headphones+BRIR and the speakers, even though my sitting position and speaker position are practically identical to when the BRIR was originally recorded. The sound of the noise bands with headphones always seems to be at a higher pitch and with differences in stereo spread vs. what I hear out of the speakers. I can get the two tones to sound like one identical long tone in only a couple of locations along the 32 bands that I'm testing, and I think that's making the Earful test more unreliable than it would be otherwise. So it looks like I have to find a way to improve the BRIR process a bit in order to take full advantage of that kind of equal-loudness test.



Iohfcasa said:


> You could also omit the hp eq compensation and use the two-device method with the raw hrir, although this would probably lead to more eqing work.
> That's the only way provided for in-ears, so it's quite essential in some cases.


Huh, that's an interesting idea. I might be able to use that to help troubleshoot the differences that I'm getting between the BRIRs and real-world speakers.



Iohfcasa said:


> I personally would prefer a mic seated shallow at the ear entrance, just because it's a bit easier to judge the (un)evenness.
> 
> Keep in mind that there are two main cues for detecting the left/right position of sound sources:
> Interaural time difference (ITD): more crucial for low frequencies, because the longer wavelengths reach the ears at different phases
> ...


 This is particularly interesting... when you say "easier to judge the unevenness", are you referring to the physical placement of the mic being in the same spot in each respective ear in relation to the ear canal and pinna, or are you referring to the audible unevenness in frequency response and stereo imaging? 

  I've been making no attempt whatsoever at placing the binaural microphones in the same position in each ear relative to the ear structure with all but my earliest measurements. I found the results far more thin and bright sounding than I would have liked when I was using shallow insertions, so I started just putting them into each ear at the furthest depth that is reasonably comfortable instead. That definitely improved the timbre of the BRIRs, but it could also explain why I've had so many issues with low-end content and why I have to do extra processing to get the bass and sub-bass regions feeling natural and centered. It might also be part of the reason why I've had fewer low-end balance issues on my DT770/1990s vs. the Anandas: both of those headphones press down on the outer ear and change the position of the headphone drivers relative to the mics. The Anandas don't touch the ear at all, and maybe the increased distance between drivers and mics is having an exaggerating effect on the phase differences in that low-end content.

 The differences between my ears are significant, at least as far in as IEM insertion depths. I have to use different tips between ears with IEMs, and the angles they sit at while giving a good seal are visibly different. Avoiding that inconsistency via shallower insertions is probably the better way to go. Then finishing up the EQ process with the equal-loudness test might make up for much of the "thinness" I was getting before with shallower insertions. I'm not sure whether localization will suffer as a result, but it's definitely worth a try.

 I appreciate your insights here, you've given me some good things with which to experiment and troubleshoot!


----------



## musicreo

Iohfcasa said:


> If the hrir contains ITD errors due to uneven placement, these can't easily be corrected by loudness adjustments.


Sound travels in air at 343 m/s, so even a large placement difference of, say, 5 mm would add a time difference of only about 15 µs. I doubt that this is a real problem.
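The arithmetic checks out:

```python
c = 343.0          # speed of sound in air, m/s
d = 0.005          # 5 mm placement mismatch between the two mics
dt = d / c         # extra travel time to the more deeply seated mic
print(f"{dt * 1e6:.1f} us")   # ≈ 14.6 µs
```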


----------



## Brandon7s (Dec 9, 2021)

musicreo said:


> Sound travels in air at 343 m/s, so even a large placement difference of, say, 5 mm would add a time difference of only about 15 µs. I doubt that this is a real problem.


 I always thought it was a combination of both sound propagation speed and the phase differences in the waveform that permitted ITD to be used for localization of lower frequencies (sub-1500hz is what I've read), rather than just the speed alone. Duplex theory, if I recall correctly.


----------



## musicreo

Brandon7s said:


> I always thought it was a combination of both sound propagation speed and the phase differences in the waveform that permitted ITD to be used for localization of lower frequencies (sub-1500hz is what I've read), rather than just the speed alone. Duplex theory, if I recall correctly.


Perhaps I'm wrong, but I think the phase differences should also be very small for a difference of 5 mm. The filter effect of the ear probably has a much stronger effect for a 5 mm difference.
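To put numbers on that (my own arithmetic, using the same 5 mm figure): a fixed timing error translates into a phase error that grows with frequency, and it stays in the single digits of degrees across the ITD-relevant range.

```python
c, d = 343.0, 0.005
dt = d / c                        # ~14.6 µs timing error from a 5 mm mismatch
for f in (200, 800, 1500):
    phase_deg = 360 * f * dt      # phase error that timing offset causes at f
    print(f"{f} Hz: {phase_deg:.1f} degrees")
```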


----------



## Brandon7s (Dec 11, 2021)

Man, I don't know if it's the fact that I replaced my LSR 305 1st gen speakers with a pair of Kali LP-6s, or if it's the binaural mic placement itself, but I'm getting amazingly balanced results now! I've had three measurement sessions with these new speakers, and every one of them has turned out to be the best I've ever taken. These Kalis have improved low-end balance tremendously, which I can only assume is because of the front-ported bass and improved woofer/tweeter crossover. In any case, I'm loving the results.

Check out that Difference line up through 200hz. I've _never_ been able to get results like this without using one of the more drastic channel-balance options (which then messed up the higher regions to be unusable). The low end on this is about as good as I could ever want it and it sounds perfectly natural to my ears. Also, this is without any channel balance options or room correction from Impulcifer; I double and triple checked because it looked way too flat compared to everything else I've been getting, but it really is legit:




Not only is the bass region centered perfectly for me, but the mids and highs are also centered dead-on. I even ran a simple sine sweep slowly through the entire frequency range and not _once_ did it shift from center. None of my other measurements came anywhere _near_ that level of channel balance evenness. I've not tested balance with multi-channel audio yet besides the 7.1 surround test built into Hesuvi, but that seemed very well balanced as well.

Things that I've changed to get these great results:

- Upgraded speakers from the first-generation LSR 305 to the Kali LP-6. Not sure if the Kalis are first or second generation, but either way they have contributed to a massive improvement in channel balance and they've made the low end much easier to control.
- Applied some corrective EQ to the low end prior to making the speaker measurements. I've done this before with my old LSR 305s and the results weren't any better than only using Impulcifer's built-in room correction, so I'm not sure how much of an effect this had.
- Used shallower binaural mic insertion and paid close attention to their placement relative to each ear's pinna structure, trying to keep them as similar as possible. I think this has had a significant effect; however, I won't know for sure until I try making a measurement with the same kind of haphazard, deep insertion I was previously using. I'm in no rush to try that, though, since these results are great.
- Measured the speakers at a lower SPL than I was using previously and cranked the preamp gain to compensate. I think this, combined with using room correction EQ through APO when taking the measurement, has made a huge difference in helping to control the severe standing waves I was getting before. You can still see the effect of what are likely standing waves at about 103hz and 181hz. Previously those were both huge trouble spots and the main reason the low end was so difficult to balance well. They are a non-issue now, and I could easily use additional corrective EQ on the BRIR to flatten them out even further if I wanted to. They were unfixable before.

I've not tried the Earful two-device loudness test yet, but I did do a little A/Bing between my speakers and the BRIR with regular music, and the results are phenomenal. I don't think I could tell the difference at all if it weren't for what I assume to be ear canal resonance making the high end of the BRIR a bit more pokey than the speakers. I'm going to run that test right now, see how it goes, and report back once I've gone through it.


----------



## Iohfcasa

Quite interesting, because the JBL LSR305 are well known for their wider sweet spot, but that refers more to the directionally pointed high frequencies than to the bass section, I guess.
The channel imbalance affected the lower range more, right?



Brandon7s said:


> This is particularly interesting... when you say "easier to judge the unevenness", are you referring to the physical placement of the mic being in the same spot in each respective ear in relation to the ear canal and pinna


This, similar mic placement.

What pair of mics do you use and is it a matched one?

I tested Earful's two-device method a bit with an unconvolved hp signal, and while loudness matching wasn't that easy due to the different timbre, I think it's manageable with some training.
The right timing of putting the hps on/off is mandatory, and sometimes raising/lowering the hp-band volume by a larger dB range (e.g. 3) can help to approximate the appropriate value.
I also experimented with different Q factors (1.4, 3) and white noise/pink noise; sine suits the lower end better.

The problem is, we have no alternatives without probe mics, and even then the placement remains crucial.
The probes shouldn't contact the eardrum, and with hps on that's hard to control.


----------



## Brandon7s (Dec 12, 2021)

I just performed the Earful two-device test and the results are MUUUUCH better than I was getting with my prior measurements. The timbre and pitch of this latest BRIR are nearly a perfect match for my speakers, which made the test a lot more reliable and quicker to complete. I created two different corrective EQ curves from those tests: one that is an average of both ears and is applied to both equally, and another that is a unique EQ curve for each ear. Both sound great, though I think I have a little more work to do to get the independent ear EQ curves just right. It certainly takes some practice to get the timing down when removing the headphones to hear the reference tone and then placing them back. I think if I did it again I would have even better results, just because I'm better at it now than when I started, but the results I've gotten are good enough that I'm in no rush to retake the test. I'm quite happy with what I've gotten so far.



Iohfcasa said:


> Quite interesting, because the JBL LSR305 are well known for their wider sweet spot, but that refers more to the directionally pointed high frequencies than to the bass section, I guess.
> The channel imbalance affected the lower range more, right?


The bass section was a consistent problem with them, but I think that might be attributable to the rear-port design. The speaker placement in my room is less than ideal; it's asymmetrical and closer to the walls than I'd like, with the left speaker very close to a corner while the right speaker is more towards the middle of the wall. I think that corner could have been exaggerating that side's bass frequencies more than I thought. The odd thing is the high-frequency imbalance. I'm beginning to suspect that one or both of my 305s were failing over an extended period of time. I just don't know how else the highs could have been such a problem to balance, and considering the left speaker straight up died on me while I was using Earful's test at the 8kHz frequency, maybe that was just the final straw for it. One-off speaker failure shouldn't come as a big surprise for something made so inexpensively; I think I might have just gotten unlucky.

 Overall I've been very happy with my 305s, and in fact I've owned two pairs - one for my work desk and one for my personal desk - and this is the only time I've experienced any issues with them whatsoever.



Iohfcasa said:


> This, similiar mic placement
> 
> What pair of mics do you use and is it a matched one?


I'm using The Sound Professionals' Master Series MS-TFB-2, though I also have their non-Master Series version as well, the SP-TFB-2. I've done a couple of tests to see how well matched they are, and they aren't terribly close, though after doing some manual calibration via EQ on the right-side mic I found that it made no real difference. I believe Impulcifer automatically corrects for mic imbalance fairly well, or I think I would have noticed a significant difference between the calibrated and non-calibrated measurement results.



Iohfcasa said:


> I tested Earful's two-device method a bit with an unconvolved hp signal, and while loudness matching wasn't that easy due to the different timbre, I think it's manageable with some training.
> The right timing of putting the hps on/off is mandatory, and sometimes raising/lowering the hp-band volume by a larger dB range (e.g. 3) can help to approximate the appropriate value.
> I also experimented with different Q factors (1.4, 3) and white noise/pink noise; sine suits the lower end better.


I was using a Q factor of 4 with my tests today and I think it worked well, but that's comparing convolved headphones vs. speakers, which I'm sure is a heck of a lot easier than unconvolved headphone output. I'm going to do that at some point just so I can get a decent custom EQ curve for use with my portable devices.

 I'd love to try a probe mic just to see how much closer to perfect the results can be made, and to cut down on EQ correction for ear canal resonance. It's 100% worth spending the time to do the Earful test though, even if it does take a couple of hours for me to perform the tests, generate the correction curves from REW, and then load those into Peace/APO and my VST EQ plugins of choice for use with my music production software. It does a great job of cutting down on the harsh highs that I simply couldn't EQ out manually before.


----------



## Iohfcasa (Dec 12, 2021)

Brandon7s said:


> created two different corrective EQ curves from those tests, one which is an average of both ears and is applied to both equally and another which is a unique EQ curve for each ear


You can't equalise for each ear separately, because the perceived loudness will always be composed by the brain from the left AND right ear together.
For perfect eqing we would need the eq correction for each of the four hrir tracks (left speaker to left/right ear, right speaker to left/right ear), and this could only be done with a perfect seal of one ear.
Sadly, such a seal doesn't exist, and even if it did, bone conduction comes into effect at lower frequencies more than many think.
That's why audiologists work with "masking signals", but that's a whole new story.

So I recommend a stereo or mono-center setup for the two-device loudness method and equalizing for both ears at once.
This presumes identical hearing capabilities in both ears, which won't apply 100% accurately.
That's the reason we can't get around the final left/right channel balance matching: determining the (phantom) center localisation and adjusting the left/right volume accordingly.

I'm quite unsure which method suits our goal better, stereo or centered mono.
Because in the end the eq refers only to the center, and two speakers will introduce possibly unneeded crosstalk, while one speaker keeps it simpler.
On the other hand, a stereo hrir can't match the timbre of the mono setup, which bedevils the eqing itself.

Edit:
Oh, consider how Earful's channel muting works together with EQ APO, because EQ APO presumably always takes a single-channel signal as a single *speaker* signal to convolve and therefore allocates it to the left *and* right hp channels.
So for left/right ear channel eqing, the non-referred ear/hp channel has to be muted, and that probably must be done in EQ APO itself.


----------



## Brandon7s (Dec 12, 2021)

Iohfcasa said:


> You can't equalise for each ear separately, because the perceived loudness will always be composed by the brain from the left AND right ear together.
> 
> For perfect eqing we would need the eq correction for each of the four hrir tracks (left speaker to left/right ear, right speaker to left/right ear), and this could only be done with a perfect seal of one ear.
> Sadly, such a seal doesn't exist, and even if it did, bone conduction comes into effect at lower frequencies more than many think.
> That's why audiologists work with "masking signals", but that's a whole new story.



I see what you mean but I'm not necessarily shooting for perfection, though of course that'd be great. I'd like perfect but I'm also happy with "good enough".

 I'm more concerned with a few treble problem spots than I am with anything in the low end with these latest BRIRs. This last one I've gotten out of Impulcifer is so good that outside of those few trouble spots, which are all above 2kHz, I doubt the value of additional correction. 

 I expect the EQ I'm applying to have some compromises, since I'm not EQing all 4 tracks as you mention, but I'm now in the land of diminishing returns. How much more convincing can the convolved signal be made, and how much time and effort would be needed to gain the extra few percent to reach full potential? I think I'm very close to the point where that legwork is not worth the return on investment.



Iohfcasa said:


> So I recommend a stereo or mono-center setup for the two-device loudness method and equalizing for both ears at once.
> This presumes identical hearing capabilities in both ears, which won't apply 100% accurately.


 I do think that this is worth trying and I'll give it a try today. It'll take only a short amount of time to do the test for both ears at once and I'm curious to see how close the resulting EQ curve is to the left/right average curve that I created yesterday from the normal stereo setup. It's still going to be a compromise but if it's close enough then it'll be worth it, since I could perform this test to generate corrections for my other headphones in much less time than in stereo. 



Iohfcasa said:


> I'm quite unsure which method suits our goal better, stereo or centered mono.
> Because in the end the eq refers only to the center, and two speakers will introduce possibly unneeded crosstalk, while one speaker keeps it simpler.
> On the other hand, a stereo hrir can't match the timbre of the mono setup, which bedevils the eqing itself.


 I have a feeling that the EQ generated via stereo tests will be an overall better fit for most music consumption, but the best way to find out is to try it, I suppose. 😁 

  I do have a particular use for center mono so I really should do that test regardless. I frequently like to use just the front-center channel for monitoring guitar since that more closely resembles the experience of playing guitar through a single guitar cab placed directly in front of me. I especially like doing that when playing to backing tracks for fun, since that keeps the guitar in a more distinct space and makes it easier to hear both the backing track and the guitar simultaneously. 

Impulcifer has been awesome for this sort of thing. One of the disadvantages of monitoring guitar through stereo monitors or headphones is the feeling of the audio being congested and too busy to mentally separate the two sources. That's 100% solved by using a surround-sound BRIR and routing the guitar signal through a different virtual speaker than the other audio sources. It's a significant quality-of-life improvement for me and makes apartment life and guitar a MUCH better combination.



Iohfcasa said:


> Edit:
> Oh, consider how Earful's channel muting works together with EQ APO, because EQ APO presumably always takes a single-channel signal as a single *speaker* signal to convolve and therefore allocates it to the left *and* right hp channels.
> So for left/right ear channel eqing, the non-referred ear/hp channel has to be muted, and that probably must be done in EQ APO itself.


 I'm not sure how to mute the final output of only a single channel in APO, but I've also not looked into it. I've just been using earplugs in the ear that I'm not EQing, and that seems to work very well. I could also unplug the headphone connector to whichever cup should be muted, though that wouldn't work for my Beyerdynamic headphones, which use a single connection for both cups. Depending on the audio interface, one should also be able to mute the final output of either channel via the interface's mixing software, but that option might not be available on some of the more inexpensive interfaces.
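For what it's worth, Equalizer APO's config syntax does have a channel-selection command that should allow this. Something like the lines below in config.txt is my untested sketch of the idea; verify the exact commands against Equalizer APO's own configuration reference:

```
# Address only the right output channel
Channel: R
# Pull it down far enough to be effectively muted
Preamp: -60 dB
# Return to addressing all channels for anything below this point
Channel: all
```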


----------



## Iohfcasa (Dec 12, 2021)

The compromises should only concern the virtual left/right channels, because they lack accurate eqing.
Fortunately, according to Giesinger, the right timbre is more relevant to front localisation, which is what the two-device method is addressing here.

The Realiser A8/A16 provides a very similar method called "manspkr", and it's also limited to these eq possibilities.

On my Windows PC I can mute each channel in the preferences window of the output device (e.g. Speakers, Realtek High Definition Audio) by clicking the "Balance" button in the "Volume" tab.
But that's not always available with external devices/DACs.


----------



## Brandon7s (Dec 16, 2021)

I've gotten really annoyed with the mismatch in my Sound Professionals binaural mic pair, which has been requiring +8dB of preamp gain on the right mic to obtain the same levels as the left mic, so a couple of days ago I decided to try to fix it. I removed the silicone housing that the mic capsules were encased in, and that immediately fixed the problem. The mics are now far, faaar more evenly matched and the headphone measurements look much better. Also, removing the housing on the capsules has reduced noise significantly. I can now obtain optimal-headroom speaker measurements with Impulcifer without having to add +40dB of preamp gain while also turning my speakers up past my usual listening levels. Now I only need about +20dB of preamp gain to reach similar headroom levels.

Here's a headphone measurement of my Anandas prior to removing the silicone housing:




And here's one after removing the housing, same headphones at roughly the same SPL but far less preamp gain:



I'm uncertain of the sonic results but my impression is that the overall fidelity of the BRIRs has been improved, particularly in the highs.


----------



## musicreo

Brandon7s said:


> I've gotten really annoyed with the mismatch in my Sound Professionals binaural mic pair, which has been requiring +8dB of preamp gain on the right mic to obtain the same levels as the left mic, so a couple of days ago I decided to try to fix it. I removed the silicone housing that the mic capsules were encased in, and that immediately fixed the problem. The mics are now far, faaar more evenly matched and the headphone measurements look much better.



Was the measurement you posted 5 days ago already done with the "new" mics? Otherwise I don't understand how you could get such good alignment without any channel balance correction.


----------



## Brandon7s (Dec 16, 2021)

musicreo said:


> Was the measurement you posted 5 days ago already done with the "new" mics? Otherwise I don't understand how you could get such good alignment without any channel balance correction.


No, that was with the silicone sleeve still on the mics. I agree that the channel balance is very good. Since I've gotten these new speakers, it's been _consistently_ good without using any channel balancing options in Impulcifer.

Here's one that I took yesterday, very balanced until the high end, though it _sounds_ balanced so those variance spikes haven't caused any issues. This is after removing the silicone housing:




And here's one from the day before yesterday. This one is noticeably darker in the highs because I was using room correction EQ on the speakers through APO and had some poorly set filters in the high end. I apply an inversion of those filters to the BRIR in order to "undo" the bad EQ job I did to the speakers. The channel balance is great though. This one was also after the housing was removed:


----------



## musicreo

Brandon7s said:


> No, that was with the silicone sleeve still on the mics. I agree that the channel balance is very good. Since I've gotten these new speakers, it's been _consistently_ good without using any channel balancing options in Impulcifer.



But if your headphone measurement was not balanced with the "old" mics, how can your result, which includes the headphones, be balanced without any correction applied?


----------



## Brandon7s (Dec 17, 2021)

musicreo said:


> But if your headphone measurement was not balanced with the "old" mics, how can your result, which includes the headphones, be balanced without any correction applied?


I don't think the old mics had anything to do with the balance being off. I think the improvement in the measurements I've been getting recently is due to the new speakers most of all. Paying more attention to the placement of the mics in the ears (keeping them the same depth relative to each ear's structure) might have also had an impact on balance, but I think it's mostly the speakers and how they interact with the room in the low end.

The way Impulcifer works, it automatically compensates for binaural mic imbalance, which is why it worked just fine with mine even when they varied so wildly. However, fixing the balance by removing the housing on the mics has let me reduce the noise floor significantly, since I only need half the preamp gain to obtain the same levels. And it's just nice knowing the mics are no longer highly unbalanced; it bugs me when they're so mismatched, even though Impulcifer had no problem making them work.


----------



## Iohfcasa

Will you also give your second pair of jbl 305 a try with the more even mic position?


----------



## Brandon7s

Iohfcasa said:


> Will you also give your second pair of jbl 305 a try with the more even mic position?


I don't own the second pair of 305s anymore; however, I do own a pair of first-generation LSR 308s, which are basically the same thing but larger. I use those as my living room TV speakers since they take up quite a bit of space. I'll give those a try soon and let you know how it goes.


----------



## morgin

I miss the updates from you guys. Anyone tried anything new to get better results?


----------



## Brandon7s (Jan 7, 2022)

morgin said:


> I miss the updates from you guys. Anyone tried anything new to get better results?


I haven't made any new measurements in a few weeks, not since the last measurement I posted about here. I still intend to make one with my pair of LSR 308s to see how they fare against my Kali LP-6s that I recently bought, but I've been so happy with my last set of measurements that I've not felt any need to make some more.

That said, this morning I just glued my master series Sound Professional binaural mics (the ones I removed the silicone housing from) to some foam earplugs, and I'm going to take another measurement with these soon. My last few measurements were without foam earplugs; I was simply putting the mics a little way into my ears without any covering, but I feel that made for an unnaturally wide stereo image in the BRIR. As a result I've been using the _avg_ channel_balance option, and that's worked very well for me. It _does_ change the EQ of the BRIR a bit, making it darker and more muffled, but I've been using some post-BRIR EQ on the highs to fix that. I think putting the mics directly in the ear makes the measurements inconsistent since their orientation isn't fixed in place, so putting them on foam earplugs again will likely fix that.

I've been hesitant to glue them directly to foam since this isn't a reversible modification. I thought about putting some tape on them and then gluing the tape, but there's so little surface area without the housing that it just doesn't stick enough to work well. I think this will be fine though, it won't make the mics any more mismatched than they were prior to removing the casing, I'm pretty confident of that.

 I'll take some more measurements this weekend, including with the JBL LSR 308s, and will report back on the results.


----------



## lowdown

Brandon7s said:


> I've been so happy with my last set of measurements that I've not felt any need to make some more.





Brandon7s said:


> I've been hesitant to glue them directly to foam since this isn't a reversible modification.


I'd like to have a $ for every time I've thought while listening "I can't imagine how this could sound better."  I do think about taking more measurements sometimes, but then I listen again and my motivation evaporates.

A suggestion on the non-reversible glue issue, I used silicone caulk for that very reason.  Once it sets it's secure, but also flexible enough to be removed if desired without damaging the mics.


----------



## morgin

Brandon7s said:


> I've been hesitant to glue them directly to foam since this isn't a reversible modification


My best measurements were with foam. I’ve tried them naked in my ear and with silicone housing. But the best have been with foam. 

I cut out a little ditch in the middle where the mic would be glued but made sure the mic isn’t flush with the foam. And used a hot glue gun because I find that easily peels off if I need to remove it. 

Really wanna try renting a sound-treated room to see if that makes a big impact. But I need to get reliable, consistent recordings first.


----------



## Brandon7s (Jan 7, 2022)

lowdown said:


> I'd like to have a $ for every time I've thought while listening "I can't imagine how this could sound better."  I do think about taking more measurements sometimes, but then I listen again and my motivation evaporates.


Very true! I actually just straight-up prefer using my Anandas with my latest BRIR over using my speakers now. And not just because I don't have to worry about annoying the neighbors by being too loud. The acoustic problems with my room stick out like a sore thumb to me since I'm so used to listening with a BRIR that has all of those issues corrected with EQ. And my speakers can't produce sub-bass like my headphones can, so the speakers seem thin and weak by comparison.



lowdown said:


> A suggestion on the non-reversible glue issue, I used silicone caulk for that very reason.  Once it sets it's secure, but also flexible enough to be removed if desired without damaging the mics.


 That's brilliant! I'm going to pick some up this weekend. I never would have thought of that in a million years. 



morgin said:


> I cut out a little ditch in the middle where the mic would be glued but made sure the mic isn’t flush with the foam. And used a hot glue gun because I find that easily peels off if I need to remove it.
> 
> Really wanna try getting a sound treated room to rent to see if that makes a big impact. But need to find reliable consistent recordings first


The hot glue also sounds like a good option. I'd love to try making a measurement in an acoustically treated room as well, though I find myself wondering how much of a difference that would really make vs. correcting BRIRs with EQ like I've been doing. It probably would be an improvement but just how much of an improvement would be very interesting to discover.


----------



## lowdown (Jan 7, 2022)

Brandon7s said:


> Very true! I actually just straight-up prefer using my Anandas with my latest BRIR over using my speakers now. And not just because I don't have to worry about annoying the neighbors by being too loud. The acoustic problems with my room stick out like a sore thumb to me since I'm so used to listening with a BRIR that has all of those issues corrected with EQ. And my speakers can't produce sub-bass like my headphones can, so the speakers seem thin and weak by comparison.


I actually recently sold the fancy speakers I'd had for years on eBay.  My headphones with the Impulcifer enhancement are so much better than listening to the speakers that they became just expensive decorations.  All the issues with my room and speaker anomalies I spent so much time trying to tweak have been totally eliminated.  And with the speakers I would always be worried about whether my neighbors were home, the time of day, and whether the volume was perhaps a bit too loud.  The complete freedom to listen at any time at the ideal volume, and to have as close to perfect sound as I can imagine after seeking it for so long, is more than amazing.


----------



## Iohfcasa

I have used Pritt sticky tack, because it was lying around here; it sticks well to the capsules and is even mouldable like plasticine.


----------



## musicreo

I have tested the option to equalize for a different headphone. I tried to equalize my old HD555 measurement to an HD600 using the instructions in the readme. The result: the equalizer setting I made myself sounds much better to me.


----------



## Brandon7s (Jan 10, 2022)

Iohfcasa said:


> Will you also give your second pair of jbl 305 a try with the more even mic position?



Sorry for being so late getting a new measurement with my LSR 308s; I finally did it and the results turned out fantastic. I think this is my favorite BRIR yet, though that's probably just as much a result of a change to the mic position as it is of the speakers. There's absolutely nothing wrong with these 308s, in any case! They have a little more definition in the low end compared to my Kali LP-6s, but they also seem to exaggerate a particularly bad standing wave in my room at about 105 Hz. That issue is lessened with the LP-6s, though not fixed by any stretch; the difference between the two isn't significant.
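As an aside, a standing wave at about 105 Hz is consistent with an axial room mode, f = n·c/(2L). A quick sketch of that arithmetic (purely illustrative; the ~1.63 m figure below is just what the formula implies, not a measured room dimension):

```python
# Axial room-mode frequencies: f = n * c / (2 * L).
# A quick way to see which room dimension a ~105 Hz standing wave
# could correspond to. Purely illustrative; no actual room was measured.

C = 343.0  # speed of sound in air at ~20 °C, m/s

def axial_modes(length_m, n_modes=3):
    """First few axial-mode frequencies (Hz) for one room dimension."""
    return [n * C / (2 * length_m) for n in range(1, n_modes + 1)]

def dimension_for_mode(freq_hz, n=1):
    """Room dimension (m) whose n-th axial mode lands at freq_hz."""
    return n * C / (2 * freq_hz)

print(round(dimension_for_mode(105.0), 2))    # → 1.63
print([round(f) for f in axial_modes(1.63)])  # → [105, 210, 316]
```

So a mode near 105 Hz points at a roughly 1.6 m spacing (e.g., a boundary-to-boundary distance or speaker-to-wall path) rather than at the speakers themselves.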

So yeah, the LSR 308 first generation speakers get a big thumbs up from me, and I bet that a lot of the channel balance problems I had while using my 305s were from a fault in one of the speakers themselves. I don't think there's any inherent issue in the design of the speakers that was making good measurements difficult. This latest measurement has the most accurate imaging and channel balance I've gotten so far.

One problem I've been having is that the mic capsules get turned and twisted from their neutral position on the foam earplugs because the cable isn't flexible enough and forces the capsule to tilt. Here's what I did to fix that:





I cut a slit to allow the cable to flow down more freely from the capsule so it wouldn't tilt it, and then I just glued the cable to the foam in a small loop leading back to the top side. It works great! The capsules lie completely flat against the foam, keeping them both much more centered and evenly placed relative to each ear's structure while worn. I've not taken a measurement with my Kali monitors with this yet, but the 308 measurement I made with it has absolutely no problems with the stereo image being off-centered or unnaturally wide, which was something that was so inconsistent before. Once I make a measurement with these on my Kali LP-6s I'll be able to more accurately compare how I like them vs. my LSR 308s. I'll get that done sometime this week.

Also, I noticed something a bit odd today and I wonder if anyone else has experienced it. The last few measurements I've taken have been with generic room measurements to correct the EQ through Impulcifer, but all of those have a weird low-end ringing, though I'm not sure that's even the right word... it's like the very low end reverberates longer than the rest of the frequencies, but mostly only in one ear. It almost sounds like a slapback delay limited to the bass and sub-bass region.

When I process the same measurements using --no_room_correction it sounds perfectly fine and the low-end ringing/delay is nowhere to be found. At first I thought that meant there were some low-end issues in my room being brought to the forefront, becoming noticeable simply because those low frequencies were too low in amplitude to be heard unless boosted via EQ; however, I just did some manual EQing to closely match the Impulcifer EQ results and the problem is completely absent. I've not yet tried processing the BRIR with the Average or Conservative room-correction processing modes, but I'll try both and see if the problem goes away with one versus the other. I've been taking 3 generic room measurements using a calibrated microphone: one at the center of where the head would be, one slightly to the right, and one slightly to the left.

This isn't a big issue, though: I can generate a measurement of my manual EQ corrections using REW and then simply toss the resulting .csv file into the my_hrir folder to bake the EQ directly into the BRIR itself, saving additional processing further down the line. I've not actually tried that yet, though... hopefully it doesn't bring back the ringing/delay problem. I'll report back once I've gotten time to try it.


----------



## musicreo

@Brandon7s Have you tried whether the decay option helps with your problem?


----------



## Brandon7s

musicreo said:


> @Brandon7s Have you tried whether the decay option helps with your problem?


I actually haven't tried the decay option for this yet, but I HAVE tried using reverberation management, and that helps significantly. The ringing/delay goes away at around 600 to 700 ms for most of the measurements with this issue when using room correction through Impulcifer.
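The general idea behind that kind of reverberation management can be sketched as fading the impulse response tail to zero after a chosen time, so late ringing can't persist. A minimal sketch (this is a generic tail fade, not Impulcifer's actual implementation; the 600 ms / 100 ms values are just the figures discussed above):

```python
import numpy as np

def fade_tail(ir, fs, start_ms=600.0, fade_ms=100.0):
    """Fade an impulse response to zero after start_ms, over fade_ms.

    A generic tail-truncation sketch in the spirit of reverberation
    management: everything past the fade is silenced, so late low-end
    ringing cannot persist. Not Impulcifer's actual implementation.
    """
    out = ir.astype(np.float64).copy()
    start = int(fs * start_ms / 1000.0)
    n_fade = int(fs * fade_ms / 1000.0)
    if start >= len(out):
        return out  # IR shorter than the fade start: nothing to do
    end = min(start + n_fade, len(out))
    # Raised-cosine ramp from 1 down to 0 across the fade region.
    ramp = 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, end - start)))
    out[start:end] *= ramp
    out[end:] = 0.0
    return out

fs = 48000
ir = np.random.randn(fs)  # 1 s of dummy "impulse response"
trimmed = fade_tail(ir, fs, start_ms=600.0, fade_ms=100.0)
```

The trade-off is that a fade that starts too early also removes genuine room reverb, which is part of what makes the BRIR sound like the room.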


----------



## conql (Jan 18, 2022)

I have been digging into this idea for months without knowing about Impulcifer, and was about to write a program of my own to implement it. But then I found Impulcifer, which is so complete and has such elegant algorithms! It's amazing work! I can't say how much I love it!

Has anyone made the silicone mold microphones proposed by Fabian Brinkmann? According to his paper, this yields the least inter-measurement variance.








I made a pair of them following his idea but with different types of silicone. This is what it looks like. Though I don't have the equipment to measure its variance, it does seem quite stable during multiple measurements.





I bought the mics from a Chinese company called "Yinglear" for less than one dollar each. They claim 72+ dB SNR and a flat frequency response. I don't know whether that's exactly true, but in my experience the SNR is greater than 65 dB.
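As a rough way to sanity-check an SNR spec like that, you can compare the RMS of a recorded test tone against the RMS of recorded silence at the same gain. A sketch with synthetic data (the -65 dB noise floor below is made up for illustration):

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a sample array."""
    return np.sqrt(np.mean(np.square(x, dtype=np.float64)))

def snr_db(signal_rec, noise_rec):
    """SNR in dB from one recording of a test tone and one of silence,
    both captured at identical gain settings."""
    return 20.0 * np.log10(rms(signal_rec) / rms(noise_rec))

# Synthetic check: a full-scale 1 kHz sine over a ~-65 dBFS noise floor.
rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
noise = rng.standard_normal(fs) * 10 ** (-65 / 20)
print(round(snr_db(tone + noise, noise)))  # → 62 (sine RMS sits ~3 dB below peak)
```

With real recordings you'd substitute a captured tone and a captured-silence file; the measured figure then includes the preamp and interface noise, not just the capsule's.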

This thread is very long and I haven't read through it all yet. Has anyone experienced a loss of precision in high-frequency (10 kHz+) measurements? The simulation also sounds very bright above 10 kHz to me compared to real loudspeakers.









https://github.com/jaakkopasanen/Impulcifer/issues/61

Sooooo glad to see this thread.


----------



## morgin (Jan 18, 2022)

Hello and welcome, it’s always nice to see more people discovering this treasure and more input should help everyone get good results.



conql said:


> Has anyone made the silicone mold microphones proposed by Fabian Brinkmann? According to his paper, this yields the least inter-measurement variance.


This is interesting and something new to try out. Do you have a link to the paper?

Edit: found the paper, if anyone else wants to check it out:

https://www2.ak.tu-berlin.de/~akgroup/ak_pub/abschlussarbeiten/2011/Brinkmann_MagA.pdf


Also wanted to ask: is it better in any way to take multiple measurements and have Impulcifer average them, to get the best result from all recordings?


----------



## musicreo

conql said:


> The mics are bought from a Chinese company called "Yinglear", costing less than one dollar each. They claim that it has 72+dB SNR and flat frequency response. I don't know whether it's exactly true, but it does have an SNR greater than 65dB from my experience.


Less than one dollar is great.  The Primo Em258 is ten times that price. 


conql said:


> This thread is too long and I haven't read through it yet. I wonder has anyone experienced the loss of precision for high frequency (10k+) measurement? Plus, I do feel very bright in 10k+ when comparing the simulation to real loudspeakers.


The accuracy of headphone measurements is a problem at high frequencies. The best way would be to measure at the eardrum with a probe mic, but that is not possible with our kind of microphone capsules.
For me, blocked ear canal measurements sound brighter, while measurements with only a partially blocked ear canal sound better.


----------



## conql (Jan 18, 2022)

morgin said:


> Also wanted to ask if it was better in anyway to have multiple measurements and then have impulcifer average them out to get the best results from all recordings?


Theoretically, doubling the number of averages increases the SNR of the measurement result by 3 dB, and lengthening the sweep signal does the same. In my experience, lengthening the sweep signal does improve high-frequency accuracy for measuring headphones with Impulcifer**, but I haven't tested that when measuring loudspeakers.
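That 3 dB-per-doubling figure is easy to verify numerically: averaging N recordings of the same signal with independent noise reduces the noise power by a factor of N. A small sketch (a 440 Hz tone stands in for the repeatable sweep):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t)  # stand-in for a repeatable sweep

def residual_noise_rms(n_avg):
    """Average n_avg 'recordings' of the same signal, each with fresh
    independent noise, then measure the noise left after averaging."""
    avg = np.mean([signal + rng.normal(0.0, 0.1, fs) for _ in range(n_avg)],
                  axis=0)
    return np.sqrt(np.mean((avg - signal) ** 2))

gain_db = 20 * np.log10(residual_noise_rms(1) / residual_noise_rms(2))
print(round(gain_db, 1))  # ~3 dB improvement per doubling of averages
```

This only holds for noise that is independent between takes; anything correlated across takes (hum, head movement, nonlinearity) does not average away.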



musicreo said:


> For me, blocked ear canal measurements sound brighter, while measurements with only a partially blocked ear canal sound better.


Thanks, I will give that a try.

Edit:
**: This is not true. A longer sweep does not significantly increase high-frequency accuracy. The default 6.15s sweep is sufficient already.


----------



## Brandon7s

conql said:


> I have been digging into this idea for months without knowing Impulcifer and was about to write a program on my own to implement it. But then I found the Impulcifer, which has such a good completion and elegant algorithms! It is such an amazing job! Can't say how much I love it!
> 
> Has anyone made the silicone mold microphones proposed by Fabian Brinkmann? According to his paper, this yields the least inter-measurement variance.
> 
> ...



I'd be very interested in getting my hands on some of those microphones and trying this out. Do you have a link to where we could order them? I did some quick googling, but I didn't have enough specifics to narrow the search down much. The specs you quoted are pretty good!



conql said:


> This thread is very long and I haven't read through it all yet. Has anyone experienced a loss of precision in high-frequency (10 kHz+) measurements? The simulation also sounds very bright above 10 kHz to me compared to real loudspeakers.


I've definitely found that the precision in the high end, above 10 kHz, seems to be all over the place and is wildly inconsistent throughout my measurements. I thought it might be a limitation of the microphones, but maybe not.


----------



## conql

Brandon7s said:


> Do you have a link to where we might be able to order these microphones?


I ordered them from 1688.com, a China-specific version of alibaba.com. Link is here: https://detail.1688.com/offer/622086499867.html

Not sure whether they ship outside of China or not.


----------



## musicreo (Jan 18, 2022)

conql said:


> Theoretically, doubling the number of averages increases the SNR of the measurement result by 3 dB, and lengthening the sweep signal can do that too.


The problem is that averaging and increasing the length also increase the time during which your head must avoid any movement.



conql said:


> In my experience, lengthening the sweep signal does improve high-frequency accuracy for measuring headphones with Impulcifer, but I haven't tested that in measuring loudspeakers.


I haven't tested this with Impulcifer, but with REW I don't remember seeing a significant change between sweep times for headphone measurements.
One big problem with the measurement is getting the correct timbre. This talk from David Griesinger explains the problem. But I have some measurements where the timbre was OK even though the measurement was not taken at the eardrum. Still, I believe deeper mic insertion improves the final result.


----------



## conql (Jan 18, 2022)

musicreo said:


> I haven't tested this with Impulcifer, but with REW I don't remember seeing a significant change between sweep times for headphone measurements.


Yeah, that's why I posted an issue on GitHub. Normally a 1-second sweep signal yields a pretty accurate result, but with Impulcifer it took 10 seconds to get the same accuracy.


Edit:

After posting this, I checked my code again and found that it was actually my mistake.
I thought leaving 100 ms of blank in the recording was enough to compensate for system latency, but it's not. I was accidentally cutting off a small part at the end of the recording, making the frequency response above 10 kHz abnormal.
Impulcifer measures headphones well even with a 1-second sweep, so the default 6.15 s sweep is already quite sufficient. A longer sweep is probably not very helpful.
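That truncation bug makes sense given how an exponential sweep distributes frequency over time: the instantaneous frequency is f(t) = f1·(f2/f1)^(t/T), so the entire top octave is packed into the tail of the sweep. A sketch (the 20 Hz-20 kHz band is an assumption for illustration, not necessarily the exact sweep used):

```python
import math

def sweep_time_at(freq, f1=20.0, f2=20000.0, duration=6.15):
    """Time (s) at which an exponential sweep from f1 to f2 reaches freq.
    Instantaneous frequency: f(t) = f1 * (f2 / f1) ** (t / duration)."""
    return duration * math.log(freq / f1) / math.log(f2 / f1)

# In a 6.15 s, 20 Hz-20 kHz sweep, everything above 10 kHz sits in the
# final ~0.6 s, so cutting even 100 ms off the tail removes the very
# top of the band.
print(round(sweep_time_at(10000.0), 2))  # → 5.53
print(round(sweep_time_at(17900.0), 2))  # → 6.05
```

So losing the last 100 ms of such a recording discards roughly the 18-20 kHz region outright, and the deconvolved response above 10 kHz degrades well before that.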


----------



## morgin (Jan 18, 2022)

@conql

I’m wondering: with your moulds with the mics, are you doing stereo recordings, or have you tried 7.1 surround too?

So far, with everyone’s help, I’ve managed to get stunning 7.1 results and I can’t see it getting better; but that’s what I thought about some previous measurements too, and they were improved upon. Just wondering if you did something special to get your good measurements.

Also, the ear moulds are interesting: if we can make moulds of our own ears, we can glue the mics to them exactly in place so there is no movement, and use those for measurements. But maybe the sound travelling through our heads has an impact.


----------



## conql (Jan 18, 2022)

morgin said:


> @conql
> 
> I’m wondering with your moulds with mics are you doing stereo recording or have you tried the 7.1 surround too?
> 
> So far with everyone’s help I’ve managed to get stunning 7.1 results and I cannot see it getting better but that’s what I thought about some previous measurements and they were improved upon. Just wondering if you did something that got you your good measurements


I tried both stereo recording and surround sound. But I don't think I have got very good measurements yet.

Stereo recordings are very convincing if the sound source is not near. However, when the source is within half a meter, the localization becomes less authentic. I am not sure whether it's because of the headphones or measurement errors.

I use stereo speakers to create 7.1 surround sound. It's stunning for someone who hasn't tried this experience before, but after a short time, I find it only plausible, not authentic. It sounds too bright in high frequencies and I need to adjust it manually for a comfortable listening experience.
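Rendering 7.1 over headphones from a set of BRIRs boils down to per-channel convolution: each virtual speaker feed is convolved with that speaker's left-ear and right-ear impulse responses, and everything is summed into two output channels. A minimal sketch (toy 3-tap "BRIRs", made up for illustration; this is not Impulcifer's or HeSuVi's actual pipeline):

```python
import numpy as np

def render_binaural(channels, brirs):
    """Mix virtual speaker channels down to binaural stereo.

    channels: dict name -> mono samples (1-D arrays)
    brirs:    dict name -> (left_ear_ir, right_ear_ir) for that speaker,
              both ears assumed equal length here for simplicity
    Each speaker feed is convolved with its two BRIRs and summed.
    """
    length = max(len(x) + len(brirs[n][0]) - 1 for n, x in channels.items())
    out = np.zeros((2, length))
    for name, x in channels.items():
        ir_l, ir_r = brirs[name]
        yl = np.convolve(x, ir_l)  # this speaker as heard by the left ear
        yr = np.convolve(x, ir_r)  # ...and by the right ear
        out[0, : len(yl)] += yl
        out[1, : len(yr)] += yr
    return out

# Toy example: two front channels with mirrored 3-tap "BRIRs".
channels = {"FL": np.array([1.0, 0.0]), "FR": np.array([0.0, 1.0])}
brirs = {"FL": (np.array([1.0, 0.5, 0.2]), np.array([0.3, 0.2, 0.1])),
         "FR": (np.array([0.3, 0.2, 0.1]), np.array([1.0, 0.5, 0.2]))}
stereo = render_binaural(channels, brirs)
```

A real renderer does the same thing with partitioned FFT convolution for speed, but the signal flow is identical: one convolution per (speaker, ear) pair.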

Currently, I'm trying to compensate headphones without Impulcifer so that I can measure the headphones multiple times and choose the best curve, to reduce inter-wear variance. But I don't know whether this will actually help... I'm also planning to use the gyroscope in my phone to get a more accurate orientation when measuring.

I'd be very interested to hear about how your measurements were improved. I'm new to this topic and can't think of a better way than increasing speaker volume and upgrading microphones.

By the way, what headphones do you guys use for reproduction? I read from some papers that headphones meeting Free-air Equivalent Coupling can produce better authenticity. But I don't quite understand this concept and have no idea what headphones meet this criterion.


----------



## Brandon7s (Jan 18, 2022)

conql said:


> I tried both stereo recording and surround sound. But I don't think I have got very good measurements yet.
> 
> Stereo recordings are very convincing if the sound source is not near. However, when the source is within half a meter, the localization becomes less authentic. I am not sure whether it's because of the headphones or measurement errors.


There's definitely an increase in difficulty with getting good near-field measurements compared to getting good measurements from a distance of greater than 5 or 6 feet or so. 

My own personal hypothesis as to why quality nearfield measurements are more difficult to obtain is that closer measurements are less forgiving of variances in the binaural mic placement/orientation and of any small movements made by the wearer during the measurement process. I'm honestly doubtful that small movements by the wearer have any significant effect on results, though I am going to test this by using a monopod or something else to rest my head on while getting the capture, but I think that _mic placement and orientation_ is the primary factor at play here.

In my experience, the closer the match in how the two mics are placed in relation to each ear's structure, the better the results and the clearer the stereo image and localization become. It's a bit difficult to really call it "localization" at such near-field distances, since we don't actually hear sound as coming from the speakers at that range - not while listening to both speakers at the same time. It's more about getting an authentic phantom center than getting a feel for the speakers' location in 3D space, I think.

I think longer-distance measurements are less susceptible to speaker directionality, since the room has much more impact on the measurement when one is listening to and measuring a source beyond near-field range. It's easier to localize objects when there are more reflections to work with, so I think more distant measurements will always have an advantage in localization over their near-field counterparts.

The biggest improvement I've made to my capture process with Impulcifer has been taking the silicone housing off my Sound Professional microphones and gluing them to a regular pair of foam earplugs, then gluing the cable to the earplugs in a way that prevents it from tilting the mic capsules when worn. This keeps the face of each microphone consistently facing straight out of the ear. Then I adjust the depth of each insertion so that the mic capsule is roughly the same distance from an arbitrarily chosen part of each ear, judging distance by feel alone. Not only have my results since making this change been the best I've ever gotten, they are also a lot more consistent from measurement to measurement. They sound quite similar when A/B'd, and the main difference is in where the center of the stereo image appears.

Here's a picture of my mic setup:




conql said:


> I use stereo speakers to create 7.1 surround sound. It's stunning for someone who hasn't tried this experience before, but after a short time, I find it only plausible, not authentic. It sounds too bright in high frequencies and I need to adjust it manually for a comfortable listening experience.


The brightness is a tough one to work out. I've had very good results using EQ matching via an equalizer plugin from FabFilter called Pro-Q 3 in my DAW. This method involves taking two instances of the EQ plugin and placing one _before_ the BRIR processing, which is done by a convolution reverb plugin, and the second instance of Pro-Q _after_ the BRIR. I then load up the same sweep audio file that Impulcifer uses and place it in a track. I set the post-BRIR Pro-Q instance to match the EQ of the pre-BRIR instance. When I play the sweep, Pro-Q generates an EQ curve that it thinks will get your BRIR-processed audio closer to the pre-BRIR audio.

It's a bit confusing trying to explain it like that, I suppose. Basically, I use EQ matching to get Pro-Q 3 to tell me how it would apply corrective EQ to the BRIR in order to get something that resembles the raw sweep response. I initially did this just to see what would happen, but I've actually had some pretty good results using the EQ curves Pro-Q has given me. A picture might explain this better.
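For what it's worth, the core of what an EQ-match feature does can be sketched generically: take the ratio of the magnitude spectra of the reference and the processed signal, and smooth it. This is not FabFilter's actual algorithm, just the underlying idea:

```python
import numpy as np

def match_eq_db(reference, processed, fs, n_fft=8192, smooth_bins=9):
    """Correction curve (dB) that would push `processed` toward
    `reference`: the smoothed ratio of their magnitude spectra.
    A generic spectrum-matching sketch, not FabFilter's algorithm."""
    ref_mag = np.abs(np.fft.rfft(reference, n_fft))
    proc_mag = np.abs(np.fft.rfft(processed, n_fft))
    eps = 1e-12  # avoid division by zero in empty bins
    gain_db = 20.0 * np.log10((ref_mag + eps) / (proc_mag + eps))
    kernel = np.ones(smooth_bins) / smooth_bins  # crude linear smoothing
    smoothed = np.convolve(gain_db, kernel, mode="same")
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    return freqs, smoothed

# Toy check: "processed" is the reference attenuated by 6 dB overall,
# so the match curve should sit near +6 dB across the band.
rng = np.random.default_rng(1)
ref = rng.standard_normal(48000)
freqs, curve = match_eq_db(ref, ref * 0.5, fs=48000)
```

A real matcher would smooth on a log-frequency (fractional-octave) scale and then fit a handful of parametric bands to the curve, rather than working with raw FFT bins.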

This is the EQ curve that Pro-Q gives me when I compare my latest BRIR (#99!) to the raw sweep audio. This is for my Anandas:




The really interesting part is that big chunk taken out around the 2500 Hz mark, stretching from 1000 Hz to 7000 Hz. That's a -4.77 dB scoop, plus another, more minor scoop of -2 dB centered at 4040 Hz. That entire trough takes out a LOT of the harshness I experience with the uncorrected BRIR. I've reduced the depth of that trough from -4.77 dB to about -3 to -2.5 dB, but it's a very noticeable improvement even at only -2 dB.

Here's what I ended up using as my post-BRIR EQ settings. All of the filters at the beginning are manual room correction filters, since I've not been using Impulcifer's room correction recently. You can see I'm still using the same EQ bands that Pro-Q generated for me at the 2500 Hz mark:




That particular 2500 Hz trough is always there whenever I run these EQ match comparisons with my BRIRs, too. It's not just this one particular measurement that benefits from that cut. I'd be curious to know whether this kind of EQ matching could point out some trouble spots in your own measurements and help reduce your brightness problem. If you want to give it a try, I'll be happy to run the comparison for you; all I'd need is a copy of whatever HeSuVi-compatible BRIR you'd like me to try it with.

You can download the Pro-Q 3 demo to try the EQ curve out and experiment with disabling/modifying the bands. It works just fine when inserted as a VST plugin into Equalizer APO's configuration, but there it doesn't support the side-chaining needed to EQ-match against the pre-BRIR part of the signal chain; you have to use it within a DAW for that feature.






conql said:


> By the way, what headphones do you guys use for reproduction? I read from some papers that headphones meeting Free-air Equivalent Coupling can produce better authenticity. But I don't quite understand this concept and have no idea what headphones meet this criterion.


My primary headphone is a Hifiman Ananda, which I love. I've also used Impulcifer with my Beyerdynamic DT1990, DT770 (250 ohms), Sennheiser HD6XX, HD58X, Audio-Technica R70x, and ATH-M50x. The Anandas are head and shoulders above the rest in realism, even when using BRIRs from the same measurement session, so headphone quality certainly _does_ make a difference.

I'm not sure what free-air equivalent coupling would look like on a pair of headphones, but I'm guessing open-back designs are about as close as we currently have, just based on the keywords in the name.


----------



## morgin

I’m not as knowledgeable as everyone else, but my best measurements came from using a decent speaker (my headphones are HD 560S), using foam with the mic glued on the end and cut so it fits just slightly into the ear canal, making sure the mics face outward and are both in very similar positions, and making sure there is zero movement (that part, and positioning both the same, are a pain).

I also used a laser level and a printout of a protractor to measure the angles precisely for surround sound, then marked where I should be looking with some tape. I found that having the speakers around 6 ft away gave me the best results. Head movement made a big difference too, so I used a swivel chair and made sure my head was exactly above the base, so that when turning, my ears rotated exactly on point instead of making a big circle.

Another big factor with my measurements was not using full volume; I found that just above normal listening level was right.

With all that I don’t use any balance options because I feel they make a negative impact on what I have atm.


----------



## conql (Jan 19, 2022)

Brandon7s said:


> There's definitely an increase in difficulty with getting good near-field measurements compared to getting good measurements from a distance of greater than 5 or 6 feet or so.
> 
> My own personal hypothesis as to why quality nearfield measurements are more difficult to obtain is that closer measurements are less forgiving of variances in the binaural mic placement/orientation and of any small movements made by the wearer during the measurement process. I'm honestly doubtful that small movements by the wearer have any significant effect on results, though I am going to test this by using a monopod or something else to rest my head on while getting the capture, but I think that _mic placement and orientation_ is the primary factor at play here.
> 
> ...


That's what I think too. So I chose to do binaural recordings of my smartphone playing music while turning my head around. By checking localization errors, I can easily tell whether my headphone compensation is accurate enough.


Brandon7s said:


> The brightness is a tough one to work out. I've had very good results in using EQ matching via an equalizer plugin from FabFilter called Pro-Q 3 in my DAW. This method involves taking two instances of the EQ plugin and placing one _before_ the BRIR processing, which is done by a convolution reverb plugin, and then the second instance of Pro-Q is placed _after _the BRIR. I then load up the same sweep audio file that Impulcifer uses and places that in a track. I set the post-BRIR Pro-Q instance to match the EQ of the pre-BRIR EQ instance. I then play the audio of the sweep and Pro-Q generates an EQ curve that it thinks will get your BRIR-processed audio closer to the pre-BRIR audio.
> 
> Its a bit confusing trying to explain it like that, I suppose. Basically, I use EQ matching to get Pro-Q 3 to tell me how it would apply corrective EQ to the BRIR in order to get something that resembles the raw sweep response. I initially did this just to see what would happen, but I've actually had some pretty good results by using the EQ curves Pro-Q has given me. A picture might explain this better.
> 
> ...


It's a bit complicated and I don't quite understand. Do you mean using the plugin to see the frequency domain of the BRIR recordings and adjusting the filters according to that? What's the difference between this and the graph APO provides? Anyway, thanks for the information.



morgin said:


> I also used a laser level and a print out of a protractor to measure precisely the angles for surround sound then marked where I should be looking with some tape. The speakers I found having them around 6ft away gave me the best results. Head movement made a big difference too so I used a swivel chair and made sure my head was exactly above the base so when turning my ears were rotating exactly on point and not making a big circle.


It's indeed a good idea to mark the place to look at. A gyroscope's error is too large; it's not accurate enough for measurement.


musicreo said:


> For me I would say that blocked ear canal measurements sound more bright while measurements with an only partially blocked ear canal sound better.


I made a pair of microphones without all the silicone stuff and plugged them directly into my ear canals, mounted with tape. They provide much better results for binaural recording reproduction. I'm speechless now, because I really have spent a lot of time making silicone moulds and so on... It turns out they were useless.

The problem is, I have two MDR-V6 headphones: one with the original earpads, which has a bad seal and a lot of leakage, and the other with new earpads that provide a good seal. The blocked ear canal microphones work fine on the old headphone, though not better than the new microphones. But they are significantly worse when measuring the well-sealed headphone and reproducing binaural recordings on it. I suspect changing the earpads affects the headphone's transfer characteristics, making it less "free-air equivalent", so the blocked ear canal method introduces significantly more error. I think I've got to do some research on that...

I also notice that open ear canal measurement results in more high-frequency variance between taking the headphones off and putting them back on. How do you deal with this inter-wear variance? For me it's a clearly audible difference.

@musicreo 
I'm very interested in your mic setup. Could you share how you make your partial blocked ear canal mics?


----------



## Brandon7s (Jan 19, 2022)

conql said:


> It's a bit complicated and I don't quite understand Do you mean using the plugin to see the frequency domain of the BRIR recordings and adjust the filters according to that? What's the difference between this and the APO provided graph? Anyway, thanks for your information.


What I'm doing with the Pro-Q equalizer software is using it to compare the sound of a BRIR from Impulcifer to the sound of the unprocessed sweep. Pro-Q then looks at the comparison and automatically places a corrective EQ curve to make the post-BRIR audio more closely resemble the EQ curve of the unprocessed sweep. This makes it easy to see where the EQ of your BRIR dramatically differs from a perfectly "flat" frequency response (to the computer's ears). It's a roundabout way of manually EQing out problematic areas: you see whether the EQ software highlights anything that really pops out as different from the theoretical flat response.

In my own case, the EQ software showed that a big dip between 1 kHz and 7 kHz would bring the final EQ curve closer to flat, and that ended up being the case to my ears when I tried it out. The screenshots I shared show the portion of the EQ that the software pointed out as needing significant changes to my BRIR.

You could do this by making changes to the EQ manually and judging the visual flatness of the APO graph, though attempting that myself yielded poor results. The nice thing about Pro-Q is that it generates EQ corrections that are easy to switch on and off and to adjust manually.
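For anyone who wants to experiment without a DAW: the matching idea can be sketched in a few lines of Python. This is not Pro-Q's actual algorithm (that is proprietary), just the basic principle: compare the magnitude spectrum of the raw sweep with that of the BRIR-processed sweep and take a smoothed ratio as the correction curve. All names and numbers below are illustrative.

```python
import numpy as np

def match_eq(reference, processed, fs, n_fft=4096, smooth_bins=8):
    """Magnitude-only EQ match: gain (dB) per frequency bin that would
    bring `processed` back toward `reference` (a rough sketch of what
    EQ-match features in plugins do)."""
    ref_mag = np.abs(np.fft.rfft(reference, n_fft))
    proc_mag = np.abs(np.fft.rfft(processed, n_fft))
    eps = 1e-12
    gain_db = 20 * np.log10((ref_mag + eps) / (proc_mag + eps))
    # Crude moving-average smoothing so the correction follows broad
    # trends rather than narrow notches
    kernel = np.ones(smooth_bins) / smooth_bins
    gain_db = np.convolve(gain_db, kernel, mode="same")
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    return freqs, gain_db

# Toy example: a fake "BRIR" that just attenuates everything by 6 dB
fs = 48000
sweep = np.random.randn(fs)     # stand-in for the sweep file
processed = 0.5 * sweep         # fake BRIR processing: flat -6 dB
freqs, gain = match_eq(sweep, processed, fs)
print(round(float(np.median(gain)), 1))  # ~6.0 dB of make-up gain
```

In practice you would load the sweep and the convolved sweep from WAV files instead of generating toy signals, and apply the resulting curve as a GraphicEQ in Equalizer APO.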


----------



## conql

Brandon7s said:


> What I'm doing with the Pro-Q equalizer software is using it to compare the sound of a BRIR from Impulcifer to the sound of the unprocessed sound sweep. Pro-Q then looks at the comparison and automatically places a corrective EQ curve to make the audio post-BRIR more closely resemble the EQ curve of the unprocessed sweep. This makes it obvious to see where the EQ of your BRIR dramatically differs from a perfectly "flat" frequency response (to the computer's ears). It's a bit of a long-way around trying to manually EQ out problematic areas by seeing if the EQ software shows anything that really pops out as being different to the theoretical flat response.
> 
> In my own case, the EQ software showed that a big dip between 1kHz and 7kHz would bring the final EQ curve closer to flat, and that ended up being the case to my ears when I tried it out. The screenshots I shared show that portion of the EQ that the software pointed out as needing significant changes to the EQ of my BRIR. You could do this by making changes to the EQ manually and judging the flatness via the APO graph, though attempting that myself yielded poor results. The nice thing about Pro-Q is that it generates EQ corrections that are easy to switch on and off and to adjust manually.


I see it now. I'll definitely give it a try when I finish building a new mic setup and sorting out this measurement issue.


----------



## musicreo

conql said:


> And I notice that open ear canal measurement results in more high frequency variance from taking off headphones and taking on again. I wonder how do you deal with this inter-wear variance? For me it's a clearly audible difference.





This plot shows three repetitions of removing the mics and headphone and putting them back on. I don't see big differences as long as I use the same mic pairs.



conql said:


> @musicreo
> I'm very interested in your mic setup. Could you share how you make your partial blocked ear canal mics?


According to the paper _Sound transmission to and within the human ear canal_, the mic position is not that important for the measurements. Still, they show extreme differences between measurement positions.

Post #515 shows the setup I used for my measurements.
I tried a lot of mountings. The last image in the first row gave me my best measurement. The mountings in the last two images use a silicone tube with a small wire so that the capsule can be bent and held in position. This was an idea I wanted to test recently, but I ran into a strange SNR problem with many Primo EM258 capsules. That is the reason why I used a big capsule (last image) for the test.


----------



## musicreo

Brandon7s said:


> What I'm doing with the Pro-Q equalizer software is using it to compare the sound of a BRIR from Impulcifer to the sound of the unprocessed sound sweep. Pro-Q then looks at the comparison and automatically places a corrective EQ curve to make the audio post-BRIR more closely resemble the EQ curve of the unprocessed sweep. This makes it obvious to see where the EQ of your BRIR dramatically differs from a perfectly "flat" frequency response (to the computer's ears). It's a bit of a long-way around trying to manually EQ out problematic areas by seeing if the EQ software shows anything that really pops out as being different to the theoretical flat response.


I don't understand the reason for equalizing a BRIR to a flat frequency response.



Brandon7s said:


> You could do this by making changes to the EQ manually and judging the visual flatness of the APO graph, though attempting that myself yielded poor results.


When you use HeSuVi, you must consider that you always see the sum of the channels in the EQ-APO graph.

At the moment I have started to equalize any bigger channel differences between all left and right channels that are still present even after using Impulcifer's channel balance correction. First impressions are very good.


----------



## Brandon7s (Jan 19, 2022)

musicreo said:


> I don't understand the reason to equalize a BRIR to a flat frequency response?


To help counter any errors in the measurement process or deficiencies in the microphones, mostly. Knowing exactly which frequencies differ from flat, you can get some ideas about which parts of the measurement might need additional correction. For instance, the measurements I make with my current mic setup all sound quite a bit less 'sparkly', hi-fi, and clear when compared to listening to the physical speakers. I'm not sure exactly why that is yet, but my best guess is that my mics exhibit more high-end rolloff than I thought.

In any case, running my BRIRs through Pro-Q set to match a flat EQ will quickly provide an EQ correction curve to help offset that high-end rolloff. I then tweak it to taste, which is a lot easier than trying to come up with a manual correction curve by ear from scratch for each BRIR I use.


----------



## musicreo (Jan 20, 2022)

I made a graph that explains my thoughts about channel balance problems. I think the parts I marked as audible are problematic. They don't actually do any harm to channel localisation, but they result in an exhausting listening experience over time. With my very open AKG701 and HD555 I did not notice this problem much and ignored it, but with my HD600 it became annoying. Equalizing these parts really improved the listening experience with the HD600 a lot. The problem is that there is no way to know whether the left channels should match the right channels or the right channels should match the left, or whether this also depends on the frequency range. So I think it is just a matter of testing and finding the best way. What is certain is that in my case the channel balance correction of Impulcifer did a good job but still left some serious issues.

If I compensated for a flat response in those plots, I guess I would remove the filter function of my ear.
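For what it's worth, spotting such left/right differences can be automated. Here is a rough Python sketch of the idea; the 3 dB threshold and the toy impulse responses are arbitrary choices for illustration, not anything Impulcifer actually does.

```python
import numpy as np

def channel_imbalance_db(left_ir, right_ir, n_fft=8192):
    """dB difference (left minus right) between the magnitude
    responses of two impulse responses, per FFT bin."""
    l = np.abs(np.fft.rfft(left_ir, n_fft)) + 1e-12
    r = np.abs(np.fft.rfft(right_ir, n_fft)) + 1e-12
    return 20 * np.log10(l / r)

def flag_bands(diff_db, fs, n_fft=8192, threshold_db=3.0):
    """Frequencies where the imbalance exceeds the threshold
    and might be worth equalizing."""
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    return freqs[np.abs(diff_db) > threshold_db]

# Toy IRs: the right channel is a copy of the left at half
# amplitude, i.e. a flat 6 dB imbalance
fs = 48000
left = np.zeros(512)
left[0] = 1.0                    # unit impulse
right = 0.5 * left
diff = channel_imbalance_db(left, right)
print(flag_bands(diff, fs).size > 0)  # True: every bin exceeds 3 dB
```

With real BRIRs you would load the left- and right-ear impulse responses from the measurement WAV files and smooth the difference before deciding what to equalize.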


----------



## conql

Brandon7s said:


> To help counter any errors in the measurement process or deficiencies in the microphones, mostly. Knowing exactly which frequencies differ from flat, you can get some ideas on what spots of the measurement might need additional correction. For instance, the measurements I make with my current mic setup all sound quite a bit less 'sparkly', hi-fi, and clear when compared to listening to the physical speakers. I'm not sure exactly why that is yet but my best guess is that my mics exhibit more high-end rolloff than I thought.
> 
> In any case, running my BRIRs through Pro-Q set to match to a flat EQ will quickly provide an EQ correction curve to help offset that high-end rolloff. I then tweak it to taste, which is a lot easier than trying to quickly come up with a manual correction curve by ear from scratch for each BRIR I use.


If my understanding is correct, the ideal curve of your BRIRs should look more like the Harman target instead of flat. The flat target is ideal only when you're measuring the speakers in a professional anechoic room, which is hardly accessible to ordinary people. Another curve that we can measure is the steady-state room curve, which is the sound pressure measured at the middle of the head position with the listener absent in a normal room. The ideal shape of that looks like this.



F. Toole, "The Measurement and Calibration of Sound Reproducing Systems," _J. Audio Eng. Soc._

However, when you plug the microphones into your ears, the pinnae and ear canal have a great influence on the sound, so the ideal curve becomes something like the Harman target for headphones, which has boosts at both low and high frequencies.




Another thing is that the frequency response of the mics isn't very important for binaural synthesis. Say your mics have a big high-end roll-off: although your measured BRIRs will be significantly lower at high frequencies than the Harman target, the roll-off affects the measurement of your headphones too. So when Impulcifer compensates the headphones to flat, the mic errors in the BRIRs should also be canceled out. That being said, a flat curve for the microphones is still preferred because it yields a better SNR.
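To make the cancellation argument concrete: treating magnitude responses as multiplicative transfer functions (a simplification that ignores phase and noise), the mic term appears in both the BRIR measurement and the headphone measurement and drops out when the headphone compensation is applied. A toy sketch with made-up numbers:

```python
import numpy as np

# Toy magnitude responses over a few frequency bins (made-up numbers)
brir_true = np.array([1.0, 0.8, 1.2, 0.9])  # speaker + room + ear
hp_true   = np.array([1.0, 1.1, 0.9, 1.0])  # headphone at the same point
mic       = np.array([1.0, 0.9, 0.7, 0.4])  # mic with high-end roll-off

measured_brir = brir_true * mic   # what the ear mics capture
measured_hp   = hp_true * mic     # same mics measure the headphone

# Headphone compensation divides by the measured headphone response,
# so the mic term cancels out of the final virtualization chain:
compensated = measured_brir / measured_hp
ideal       = brir_true / hp_true
print(np.allclose(compensated, ideal))  # True: mic roll-off drops out
```

The SNR caveat still stands: a 12 dB mic roll-off cancels mathematically, but the boosted high frequencies in the compensation also boost the measurement noise there.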


----------



## conql (Jan 20, 2022)

conql said:


> The problem is, I have two MDR-V6 headphones, one is with the original earmuffs but has bad seal and a lot of sound leakages, the other is with new earmuffs that provide good seal. The blocked ear canal microphones work fine on the old headphone, though not better than the new microphones. But they are significantly worse when measuring the good seal headphone and reproducing binaural recordings on it. I suspect changing the earmuffs affects headphones transfer characteristics, making them less "Free-air Equivalent", thus the blocked ear canal method will bring significantly more error. I think I've got to do some research on that...



Here is the information about the FEC criterion. Basically, when you measure BRIRs with the ear canal blocked, you're assuming that your headphones are perfectly free-air equivalent coupling, i.e. they present the same acoustic impedance as free air. Notice that this is different from the HRTF or HPTF curve and can only be compensated with probe microphones measuring at the eardrums. The deviation from ideal is called the Pressure Division Ratio (PDR). Since measuring it is complicated, I cannot find much data about it. But judging from this 90s paper, it's not negligible, if not significant, for some headphones.




Møller, Henrik, et al. "Transfer characteristics of headphones measured on human ears." Journal of the Audio Engineering Society 43.4 (1995): 203-217.

This criterion does not need to be considered when measuring with an open ear canal. However, putting mics inside the ear canal can interfere with the sound field, and without proper mounting it may have poor consistency. For me, though, a mic with a 4 mm diameter doesn't audibly influence the sound quality and reduces the PDR error significantly, so I guess I'll stick to open ear canal measuring. thx @musicreo


----------



## castleofargh

musicreo said:


> This plot shows three times removing the mics and headphone. I don't have big differences as long as I use the same mic pairs.


Impressive consistency! All your time and effort paid off; you are now officially a machine integrated into your system! 




conql said:


> If my understanding is correct, the ideal curve of your BRIRs should look more like the harman target instead of flat. The flat target is ideal only when you're measuring the speakers in a professional anechoic room, which is almost not accessible for ordinary people. Another curve that can be measured by us is the steady-state room curve, which is the sound pressure measured in the middle of the head with the listener absent in a normal room. And the ideal shape of that is like this.
> F. Toole, "The Measurement and Calibration of Sound Reproducing Systems," _J. Audio Eng. Soc._
> 
> However, when you plug the microphones in your ear, the pinnae and ear canal can have a great influence on the sound, thus the ideal curve becomes something like the harman target for headphones which has boosts at both low frequencies and high frequencies.
> ...


I don't think we can infer anything about the Harman target here, because that research is focused on "normal" headphone use. Once you replace the silly audio with speaker simulation, you add the room, the speaker positions, and as much of your HRTF as you managed to capture. From channel mixing to added reverb and direction-dependent FR, you're dealing with a different animal.

IMO, once we go the way of customized simulation, a generic target will probably turn out to be a false good idea.


----------



## conql (Jan 21, 2022)

castleofargh said:


> I don't think we can infer anything about the Harman target in here, because the research is focused on "normal" headphone use. Once you replace the silly audio with speaker simulation, you add the room, speaker position and as much of your HRTF as you managed to capture. From channel mixing to added reverb and direction dependent FR, you're dealing with a different animal.



Of course, the BRIRs should not be the same as the Harman target, but it's reasonable to say they should have a similar shape. According to "Listener Preference For Different Headphone Target Response Curves" by Sean Olive, where the Harman target was first proposed, the Harman target curves (RR_G & RR1_G) were measured in the Harman Reference Listening Room with microphones mounted in an artificial head, very similar to what we're measuring here, except that Olive did a lot of smoothing, which removes the reverberation etc. I think the overall trend of BRIRs measured by us should not deviate too much from that curve. And I agree that self-recorded BRIRs carry much more information about your ears, so accurately equalizing them to the Harman target would be a really bad idea.


----------



## jaakkopasanen

conql said:


> Of course, the BRIRs should not be the same as the harman target, but it's reasonable to say they should have a similar shape. According to "Listener Preference For Different Headphone Target Response Curves" by Sean Olive where harman target was first proposed, the harman target curves (RR_G & RR1_G) were measured in the Harman Reference Listening Room with microphones mounted in an artificial head, very similar to what we're measuring here, except Olive did a lot of smoothing which removes the reverberation etc. I think the overall trend of the BRIRs measured by us should not deviate too much from that curve. And I agree that self-recorded BRIRs carry much more information about your ears so accurately equalizing them to harman target will be a really bad idea.


The Harman target is for couplers that emulate the response at the eardrum. When measuring with mics at the ear canal entrance, the expected frequency response shape is going to be different. The 3 kHz range is the ear gain, and that is not going to be similar in these two cases.


----------



## musicreo

Is there any paper that shows the frequency response with headphone compensation for different measurement points? We know that the frequency response is different at the eardrum, inside the ear canal, and at the ear canal opening. We also know that the frequency response is different for blocked and partially blocked/open ear canal measurements. But how much does the headphone measurement really compensate, depending on the measurement point?


----------



## conql

Is there any graphical interface for Impulcifer now?


----------



## castleofargh

musicreo said:


> Is there any paper that shows the frequency response with headphone compensation for different measurement points? We know that the frequency response is different for the ear drum, inside the ear canal and at the ear canal opening. We also know that the frequency response is different for blocked and partially blocked/open ear canal measurements.  But how much does the headphone measurement really compensate in dependence of the measurement point?


Best I have is this:
https://hal.archives-ouvertes.fr/hal-03234204/document
What's pretty obvious is how much the mic itself is going to impact the results, which is not what you wanted to read because it ruins everything. But at the same time you already knew.
Another thing that's pretty obvious, is how some French dude lied on his resume when he wrote that he was fluent in English.


----------



## morgin

castleofargh said:


> What's pretty obvious is how much the mic itself is going to impact the results, which is not what you wanted to read because it ruins everything. But at the same time you already knew.


What are the best mics to purchase on the market right now?

Also, is there a way to get my GPU to handle the audio in my PC rather than using the motherboard? I think my RTX 3080 would do a better job.


----------



## Joe Bloggs

jaakkopasanen said:


> Harman target is for couplers which emulate response at the ear drum. When measuring with mics at the ear canal entrance, the expected frequency response shape is going to be different. The 3 kHz range is the ear gain and that is not going to be similar in these two cases.


Furthermore, unless you have earphones that measure flat in the literal sense (which would sound terrible) and want them corrected from the ground up, the Harman target would not be appropriate.

If for example you have a pair of earphones well tuned to the Harman target already, the ideal curve to follow in that sense *is* flat.

There are of course various optimizations you can apply to shift the soundstage forward, backward, upwards, etc., but that all depends on what the measured result sounds like at first. Also, the optimizations may not be very effective in the long term if you're shifting both channels of a stereo BRIR using EQ, as the brain may simply adapt to the new sound.

If you have measured BRIRs in at least the classic 5.1 directions, you can try the foobar2000 upmixer plugin that James Fung and I developed (I listen using a developer version of it):
https://www.dropbox.com/s/4wnesjgzjrktnc9/foo_dsp_trident.dll?dl=0

It will upmix any stereo music to 5.1, which you can then render using the BRIRs you measured here.

I found out the hard way that in this case EQing individual BRIR channels is VERY valuable for getting the proper spatialization directions.

A few pithy explanations of the upmixer options:
1. Subwoofer cut-off: adjusts the frequency below which the upmixed 5 channels cross over to the subwoofer.
2. LR<->Sub: since headphone BRIRs may not really have a separate subwoofer channel, you can use this to redirect all the low frequencies from (1) to the front left and right speakers instead.
3. spread: please simply leave on default.
4. ambience: ditto.
5. ratio: playing with this lets you change the size of the soundstage, from mono-centre at 0% to wrapped all around from SL to SR at 25%. If that's not enough, you can turn it up and up to 100%! (not usually recommended)

We're developing a paid professional version of this that will be a VST plugin, output to any number of channels, and let you adjust the individual speaker angles, the level of per-speaker separation, etc.


----------



## musicreo

Joe Bloggs said:


> if you have measured out BRIRs in at least the classic 5.1 directions,
> 
> I listen using a developer version of this foobar2000 upmixer plugin me and James Fung developed


Very nice that you provide us with a developer version! 
For 7.1 in foobar2000 the surround and rear speakers are switched, which can be corrected with the Matrix Mixer.


----------



## morgin (Jan 30, 2022)

Joe Bloggs said:


> I listen using a developer version of this foobar2000 upmixer plugin me and James Fung developed,
> https://www.dropbox.com/s/4wnesjgzjrktnc9/foo_dsp_trident.dll?dl=0



I want to try this, but when I install it in Equalizer APO I get the message "library has the wrong architecture. Only 64-bit libraries are supported".


----------



## musicreo

@morgin 
This is a foobar2000 DSP, not a VST, so it cannot be used in EQ-APO.


----------



## reter (Feb 8, 2022)

Can I use Impulcifer to calibrate my headphones and use it with an HRIR already made in HeSuVi? I mean, the Smyth A16 lets you calibrate your headset with stereo microphones and use it directly with already-made HRIRs, if I'm not wrong.


----------



## conql (Feb 8, 2022)

reter said:


> Guys, can i use impulcifier to calibrate my headphones and use it with the hrir already made in hesuvi? i mean, the Smyth A16 lets you calibrate your headset with stereo microphone and use it directly with already made hrir if i'm not wrong


If you don't record your own BRIR and only want to compensate headphones, the results probably won't be better than using HeSuVi's built-in headphone EQs, because most common HRIRs in HeSuVi are measured with artificial ears, and in that situation compensating headphones with artificial ears provides better authenticity. If you want to use the "More HRIRs" recorded in human ears: firstly, you can't get the same mics as they used, and secondly, these HRIRs don't include reverberation, so they sound less pleasant.




Joe Bloggs said:


> Furthermore, unless you have earphones that measure flat in the literal sense (which would sound terrible) and want them corrected from the ground up, the


I wasn't talking about the final compensation curve but the BRIR measurement curve, which equals the final curve plus the HPTF curve. So if your mics have the same frequency response as those used in the Harman experiment and are inserted close to the eardrums, a Harman target curve is preferred. If the headphones are tuned to the Harman target already, then the BRIR curve should be close to the HPTF curve, and therefore a flat final compensation curve is preferred. Nevertheless, what I really meant was that the Harman target here is just a rough estimate for tuning the balance of treble and bass, and no one should equalize their measurement exactly to the Harman target.


----------



## sander99

reter said:


> the Smyth A16 lets you calibrate your headset with stereo microphones and use it directly with already made hrir if i'm not wrong


You are not wrong, but this is far from optimal unless you are very lucky and have an HRTF that somewhat matches the person or dummy head used for the speaker measurement. Really, the way to go is to do a personal measurement of speakers; otherwise you will probably never know what a Smyth Realiser or Impulcifer/HeSuVi is really capable of...
The Smyth also has a manual procedure called manLOUD to improve a (measured) HPEQ (= headphone compensation), which can be used to try to compensate somewhat for a different HRTF, but even that isn't guaranteed to give a good result (which is not possible anyway, because this compensation should in fact be done independently for all the individual impulse responses, not with one stereo EQ on the stereo headphone signal). (Another aim of manLOUD is to compensate for issues related to the ear canal resonance and the closed ear canal measurement method.)


----------



## Iohfcasa

It's also not easy to determine the equal loudness of different frequency bands, and they use a standard average curve for the headphone-to-speaker loudness correction.


----------



## reter (Feb 8, 2022)

conql said:


> If you don't record your BRIR and only want to compensate headphones, the results probably won't be better than using HeSuVi built-in headphone eqs. Because most common hrirs in HeSuVi are measured in artificial ears, and in that situation, compensating headphones with artificial ears provides better authenticity. If you want to use the "More HRIRs" recorded in human ears, firstly you can't get the same mics as they used and secondly these HRIRs don't include reverberation, thus they sound less pleasant.
> 
> 
> 
> I wasn't talking about the final compensation curve but the BRIR measurement curve, which equals the final curve plus hptf curve. So if your mics have the same frequency response as what was used in harman experiment and are inserted close to ear drums, a harman target curve is preferred. If the headphones are tuned to harman target already, then the BRIR curve should be close to the HPTF curve, therefore a flat final compensation curve is preferred. Nevertheless, what I really meant was that the harman target here is just a rough estimation for tuning the balance of high and bass, and no one should equalize their measurement to exact harman target


What about if I want to simulate something like a cinema with my own speakers? Can I reproduce it, or will I be limited only to my room hrtf?

I don't think my room fits what I want to achieve, and I don't think my cheap speakers would be enough to get good results.


----------



## sander99

reter said:


> what about if i want to simulate something like a cinema with my own speakers?


I am not sure what you are asking:

1. Can you make your stereo pair of speakers (instead of headphones) sound like a multichannel cinema with the help of Impulcifer/HeSuVi?
The simple answer is: No, that is not what it is intended for. HeSuVi (using files produced by Impulcifer) creates a binaural signal intended for headphones. With headphones it is easy to control what sound goes into the left ear fully independently from what sound goes into the right ear. Creating a similar effect using speakers is a more complicated matter.
However: There is a Smyth Realiser A16 Speaker Edition that seems to be able to do this.
(https://www.head-fi.org/threads/smyth-research-realiser-a16-speaker-edition.960912/)
(They say the A16 needs additional hardware to do this, but I "secretly" do wonder what would happen if you ran the Impulcifer sweeps through such a Realiser Speaker Edition (personalized to you) and created a HRIR from that, and then use HeSuVi with speakers instead of headphones...)

2. Can I use my stereo pair of speakers to measure a complete virtual (over headphones) multichannel loudspeaker system using Impulcifer/HeSuVi?
Yes. You can measure all channels one or two at a time by either repositioning the speakers or repositioning yourself relative to the speakers.

3. Something else?



reter said:


> can i reproduce it or i will be limited only to my room hrtf?


Again I am not fully sure what you are asking. By "room hrtf" do you mean the combination of your room sound with your personal HRTF? That is in fact what you get if you measure with your own head in your own room with Impulcifer. But Impulcifer has some options to improve the results, like room correction/EQ and reverberation management, I think. (I don't know the details but they are described somewhere.)
Or do you think a room has an HRTF? That is not the case. HRTF is short for head-related transfer function, and it describes how sound from specific directions is altered by bending around your head and into your ears.


reter said:


> i don't think my room does fit for what i want to achieve, also i don't think my cheap speaker would be enough to achieve good results


Not all properties of the speaker are captured by the measurement, which means that in some ways the virtual speakers can sound better than the real speakers. And again: room correction and reverberation management can further improve the result.
And if your speakers are really bad: maybe borrow or rent a better speaker just for the measurements?


----------



## reter (Feb 10, 2022)

sander99 said:


> I am not sure what you are asking:
> 
> 1. Can you make your stereo pair of speakers (instead of headphones) sound like a multichannel cinema with the help of Impulcifer/HeSuVi?
> The simple answer is: No, that is not what it is intended for. HeSuVi (using files produced by Impulcifer) creates a binaural signal intended for headphones. By using headphones it is easy to control what sound goes in to the left ear fully independently from what sound goes into the right ear. To create a similar effect using speakers is a more complicated matter.
> ...


thx for the patience! I'll try to explain my situation better

I want to record with impulcifer the hrtf that, like you explained, i'll be using with my headphones... my room is a little sized one, so i was trying to ask if i could indeed manage the reverberation so i can "simulate" something like a cinema instead of just my room with impulcifer

just like i said a post ago, i like the idea that the smyth let you use all the presets (bbc and stuff) so i want to try to do something similar with impulcifer in my room, in 7.1 of course


----------



## morgin (Feb 10, 2022)

reter said:


> thx for the patience! I'll try to explain better my situation
> 
> I want to record with impulcifer the hrtf that like you explained i'll be using with my headphones... my room is a little sized one, so i was trying asking if i could indeed manage the reverberation so i can "simulate" something like a cinema instead of just my room with impulcifer
> 
> just like i said a post ago, i like the idea that the smyth let you use all the presets (bbc and stuff) so i want to try to do something similar with impulcifer in my room, in 7.1 of course


My experience is I have a small room too. I'd suggest you get hold of a good speaker and do your measurements in a bigger room. Or measure with the speaker further away from you; it will have the effect of a big-room speaker setup.

I used only one speaker and tested with measurements close and far, and they gave me different results. But the best was having the speaker around 6ft away from me. It still gives a 6ft surround setup effect.

Get a good speaker, it's very important, and take a lot of measurements with different ear placement, speaker placement, volume etc


----------



## Iohfcasa

The realiser's manspkr method doesn't guarantee a matching result, but it's an approach to personalize alien hrirs.
You can try to replicate the manspkr method with a program like earful, but it's a difficult procedure with many imponderables.


----------



## castleofargh

reter said:


> thx for the patience! I'll try to explain better my situation
> 
> I want to record with impulcifer the hrtf that like you explained i'll be using with my headphones... my room is a little sized one, so i was trying asking if i could indeed manage the reverberation so i can "simulate" something like a cinema instead of just my room with impulcifer
> 
> just like i said a post ago, i like the idea that the smyth let you use all the presets (bbc and stuff) so i want to try to do something similar with impulcifer in my room, in 7.1 of course


A16 or Impulcifer, you'll get a convincing result when the measurements of the room are done at your own ears. A room measured with a dummy head or at somebody else's ears will have components of those HRTFs instead of your own. Unless you're lucky and find measurements from someone with about the same head and ears as you, such measurements will not be anywhere near as natural as something measured at your ears.
The A16 has some rooms by default, and apparently many users never go through with making their own measurements. All I can say is that it's their loss.
I much prefer my measurements from an acoustically really horrible bedroom over some fancy opera house in OOYH that just wasn't working for me. And while we're talking personal preferences, I prefer small amounts of reverb, and while I can't claim that you will, I did notice a trend about that for several other users.
I tried a bunch of measurements and ultimately I use one where my speakers were about 1.3 meters away from me when I measured them. Again, I don't have more than my own anecdotes, so they might not apply to anybody else.
I think Impulcifer has some OOYH capture somewhere, so you can just try.


----------



## reter (Feb 10, 2022)

castleofargh said:


> A16 or impulcifier, you’ll get a convincing result when the measurements of the room are done at your own ears. A room measured with a dummy head or at somebody else’s ears, will have components of those HRTF instead of your own. Unless you're lucky and find measurements from someone with about the same head and ears as you, such measurements will not be anywhere as natural as something measured at your ears.
> The A16 has some rooms by default, and apparently many users never go through with making their own measurements. All I can say is that it's their loss.
> I much prefer my measurements from an acoustically really horrible bedroom, over some fancy opera house in OOYH that just wasn’t working for me. And while we’re talking personal preferences, I prefer little amounts of reverb, and while I can’t claim that you will, I did notice a trend about that for several other users.
> I tried a bunch of measurements and ultimately I use one where my speakers were about 1.3meter away from me when I measured them. Again, I don’t have more than my own anecdotes, so they might not apply to anybody else.
> I think impulcifier has some OOYH capture somewhere, so you can just try.


i need to ask a question to feed my curiosity: why do all the videos that show the Smyth come with the sennheiser HD800? is this some sort of optimization thing? still i didn't find a video with a different pair of headphones... weird

anyway i'm starting to understand more what you guys are trying to tell me, there's no other way than to try and do some stuff, but first i need a good speaker, any suggestions? impulcifer only needs one speaker to record the hrir, so what would be ideal for this kind of measurements?


----------



## morgin

I was advised to get the JBL 3 series mk2 by @musicreo and for the price and quality I cannot fault it. My headphones are sennheiser 560s and both sound amazing for my hrir.


----------



## castleofargh

reter said:


> i need to ask a question to feed my curiosity: why all the videos that shown the Smyth comes with the sennheiser HD800? is this some sort of optimization thing? still i didn't find a video with a different pair of headphones...weird
> 
> anyway i'm starting to understand more what you guys are trying to tell me, there's no other way than to try and do some stuff, but first i need a good speaker, any suggestion? impulcifer does need only one speaker to record the hrir so what should be ideal for this kind of measurements?


The HD800 has good fidelity and is light/comfy for most people; it also doesn't need hundreds of volts like a Stax and can be powered by the A16 itself. So it's a rather good candidate. Remember that they're trying to simulate and recreate a specific sound. An objectively good transducer can't hurt when trying to convince people that the simulation is accurate.

Now, because the localization cues are mainly about the right frequency response for a given virtual direction, and the right delay between channels, you only need to avoid headphones with a frequency response in the shape of Mordor and high distortion figures. So, audible stuff that basic EQ won't handle.

At some point you have to make your own choice. Are you happy with something that's already incomparably more natural, relaxing, and in some ways more accurate than typical headphone listening at any price? Or is your inner maniac audiophile screaming at you for not using a great transducer?
I use a HD650. For the price range it has pretty good specs, except at low freqs, which aren't all that important for spatial cues. And after bending the metal plates in the headband to loosen the clamp of death, I find it very comfy (and light), so I use that (not that audiophile of a reason, but that's all I have). I did feel noticeable changes with other headphones, but then again, it's extremely hard to get consistent measurements when any little change placing the mics or the headphone can lead to several dB differences here and there. I don't know for sure if the subjective changes I got came mostly from the headphones being different, from my personal biases about those headphones, or from large measurement tolerances.
@musicreo is able to get stupidly good repeatability, to the point where I wouldn't be surprised if around 1984 he was looking for Sarah Connor. ^_^ The point is: his take on the impact of different headphones is probably most interesting if he's able to stay that consistent with all his measurements.
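Those two localization cues can actually be sanity-checked on your own measurements. A minimal sketch (not Impulcifer's code; the 48 kHz rate and the synthetic test signal are just for illustration) that estimates the interchannel delay of an impulse response pair from the cross-correlation peak:

```python
import numpy as np
from scipy.signal import correlate

FS = 48000  # assumed measurement sample rate

def interaural_delay(left_ir, right_ir, fs=FS):
    """Delay of left relative to right in milliseconds, estimated from
    the cross-correlation peak (negative: left arrives first)."""
    corr = correlate(left_ir, right_ir, mode="full")
    lag = int(np.argmax(np.abs(corr))) - (len(right_ir) - 1)
    return 1000.0 * lag / fs

# Synthetic check: the same burst, 30 samples later in the right channel
rng = np.random.default_rng(0)
burst = rng.standard_normal(256)
left = np.concatenate([burst, np.zeros(30)])
right = np.concatenate([np.zeros(30), burst])
print(interaural_delay(left, right))  # -0.625 (left leads by 30 samples)
```

On a real HRIR pair you would run this on windowed copies around the direct sound, since strong reflections can pull the correlation peak away from the direct-path delay.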


----------



## musicreo

castleofargh said:


> @musicreo is able to get stupidly good repeatability, to the point where I wouldn't be surprised if around 1984 he was looking for Sarah Connor. ^_^


Some of my measurements have the following filename ending: *T800*: channel balance *T*rend, decay time *800* ms  ^_^



castleofargh said:


> the point is; his take on the impact from different headphones is probably most interesting if he's able to stay as consistent with all his measurements.


I read some years ago in a publication that headphones with a small soundstage should be better as they need less compensation (so a HD800 would not be the best choice if this is true). When I compare my HD600 to the AKG701 this could be true, but my HD600 showed some channel imbalance in my measurement that wasn't really bothering me with my AKG701 and HD555/595, but made listening over the HD600 really exhausting. I could fix that with some additional equalizing, which also improved the sound for the two other headphones. Overall all three headphones sound fantastic with the measurement.

If I remember correctly, jaakkopasanen uses a HD800, so he can probably say more about that headphone.


----------



## jaakkopasanen

The HD 800 is the most comfortable headphone ever. That's enough of a reason to pick it. For me specifically it's also good because of the very tight manufacturing tolerances, and its flaws are very well known, so it's a good reference tool for developing these algorithms.


----------



## reter

musicreo said:


> Some of my measurement have the following filename ending: *T800*: channel balance *T*rend, decay time *800*ms  ^_^
> 
> 
> I read some years ago in a publication that headphones with small soundstage should be better as they need less compensation (so a HD800 would not be the best choice if this is true ).  When I compare my HD600 to the AKG701 this could be true but my HD600 showed me some channel imbalance in my measurement that wasn't really bothering me with my AKG701 and HD555/595 but made listening over the HD600 really exhausting.  I could fix that with some additional equalizing which improved also the sound for the two other headphones. Overall all three headphones sound fantastic with the measurement.
> ...


mhm, what about imaging then? i have the HD660s, which some people state is "smaller in soundstage but with good imaging"... i personally don't know much, i never tried other expensive brands in my life yet, also i think the HD660s has the same driver the HD800 has if i'm not mistaken


----------



## musicreo

The HD660s should work well. If it is comfy for you, you don't have to look for another headphone.


----------



## conql

Have you guys tried comparing headphones and loudspeakers by quick switching? I was amazed by my measurements before, but now I feel quite the opposite. The simulation sounds too bright even though the localization is quite accurate. I haven't done careful tweaking yet, but I think it's about 5 dB more with a Q value of 0.5 in the highs. I use @musicreo 's setup where ear canals are partially blocked, and I tried to insert the mics as deep as I can so they're less than 1cm from the eardrum. This problem doesn't come from impulcifer, because when I play binaural recordings of speakers, it still sounds just as bright. I tried the manloud method proposed by David Griesinger, but I cannot get consistent results like he does and it doesn't sound good.


musicreo said:


> One big problem of the measurement is to get the correct timbre. This talk from David Griesinger does explain the problem. But I have some measurements were the timbre was ok although the measurement was not taken at the eardrum. Still I believe that deeper mic insertions improve the final result.






I measured the HpTF after equalizing the headphones, which is very flat across multiple wearings except for some small dips at high frequencies. So it can only be a problem with the measuring. But if my understanding is correct, measuring from an open ear canal should not cause a large systematic error like this. So in which step did I possibly go wrong?
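(For reference, the kind of treble trim I'm estimating above, roughly -5 dB with Q 0.5, is just a broad high shelf. A sketch using the standard RBJ audio-EQ-cookbook high-shelf formulas; the 6 kHz corner and 48 kHz rate here are made-up illustration values, not something from my measurements:)

```python
import numpy as np
from scipy.signal import freqz

def high_shelf(fc, gain_db, q, fs):
    """High-shelf biquad coefficients (b, a) per the RBJ audio EQ cookbook."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / (2 * q)
    cosw, sqA = np.cos(w0), np.sqrt(A)
    b = np.array([A * ((A + 1) + (A - 1) * cosw + 2 * sqA * alpha),
                  -2 * A * ((A - 1) + (A + 1) * cosw),
                  A * ((A + 1) + (A - 1) * cosw - 2 * sqA * alpha)])
    a = np.array([(A + 1) - (A - 1) * cosw + 2 * sqA * alpha,
                  2 * ((A - 1) - (A + 1) * cosw),
                  (A + 1) - (A - 1) * cosw - 2 * sqA * alpha])
    return b / a[0], a / a[0]

# -5 dB shelf: lows untouched, treble pulled down toward the shelf gain
b, a = high_shelf(fc=6000, gain_db=-5.0, q=0.5, fs=48000)
w, h = freqz(b, a, worN=[0, 24000], fs=48000)
db = 20 * np.log10(np.abs(h))  # ~0 dB at DC, ~-5 dB at Nyquist
```

The resulting (b, a) can go straight into EQ APO or any convolution/EQ chain that accepts biquads.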


----------



## jaakkopasanen

conql said:


> Have you guys tried comparing headphones and loudspeakers by quick switching? I was amazed by my measurements before but now I feel quite the opposite. The simulation sounds too bright even though the localization is quite accurate. I haven't done careful tweaking now but I think it's about 5db more with a q value of 0.5 in the highs. I use @musicreo 's setup where ear canals are partially blocked, and I tried to insert the mics as deep as I can so it's less than 1cm from ear drum. This problem doesn't come from impulcifer, because when I play binaural recordings of speakers, it still sounds the same bright. I tried the manloud method proposed by David Griesinger, but I cannot get consistent results as he does and it doesn't sound good.
> 
> 
> I measured the hptf after equalizing headphones, which is very flat during multiple wearing except for some small dips at high frequencies. So it can only be the problem of measuring. But if my understanding is correct, measuring from open ear canal should not cause large systematic error like this.  So which step did I possibly go wrong?


Did you do headphone compensation? Could you share your plots with us?


----------



## conql (Feb 24, 2022)

jaakkopasanen said:


> Did you do headphone compensation? Could you share your plots with us?


(plots attached)
The next two plots are what I get after applying headphone equalization and putting everything on again.

(plots attached)
The SHP9500 sounds significantly brighter than the speakers, while the MDR-V6 sounds only a bit brighter, probably just around 1 dB more in the highs. But the 9500 are open back headphones, why do they sound worse?

Apart from timbre, localization is good for both headphones.


----------



## jaakkopasanen

conql said:


> The next two plots are what I get after applying headphone equalization and put on everything again.
> 
> 
> 
> ...


What are these plots? They don't quite look like what Impulcifer produces?


----------



## conql

jaakkopasanen said:


> What are these plots? They don't quite look like what Impulcifer produces?


No they aren't. I have many headphones to measure, so I don't want impulcifer to embed the headphone compensation. So I wrote some code to call AutoEq to do it. It's not smoothed because I found it makes no difference.


----------



## castleofargh

conql said:


> No they aren't. I have many headphones to measure so I don't want impulcifer to embed the headphone compensation. So I write some code to call autoeq to do it. It's not smoothed because I find it makes no difference.


Are you measuring the FR of the headphones at your ears, or are you just relying on the database?


----------



## conql

castleofargh said:


> Are you measuring the FR of the headphones at your ears, or are you just relying on the database?


I am measuring the FR of the headphones with mics that are inserted really deep into the ear canal. And I also measured the FR after headphone equalization, which is flat enough as expected.


----------



## jaakkopasanen

conql said:


> I am measuring FR of headphones by mics that are inserted really deep into ear canal. And I also measured the FR after headphone equalization which is flat enough as expected.


And you measured with the microphones in the exact same position as you did the speaker measurements, i.e. not taking the microphones off in the meanwhile?


----------



## Joe Bloggs

I mean... I tried this in the past,
1. measure BRIR of loudspeakers using in ear mics
2. measure BRIR of over-ear headphones using same in ear mics
3. take final BRIR to be 1-2

and yes, it sounded nowhere near what would be expected in terms of tonality. Much brighter and sharper on the headphones after those steps than the loudspeakers.

I could only put it down to the mics not being my eardrums, and the *FR difference between the speakers and the headphones* being much different at the mics than at my eardrums.
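To be concrete about what step 3 means numerically: "1-2" is a spectral division, the speaker BRIR divided by the headphone response taken with the same mics. A rough regularized sketch of that idea (illustrative only, not any tool's actual implementation; the eps value is an arbitrary choice to keep deep notches from exploding):

```python
import numpy as np

def compensate(brir, hp_ir, n_fft=65536, eps=1e-4):
    """Divide the speaker BRIR spectrum by the headphone response
    measured with the same in-ear mics. eps regularizes the division
    so deep notches in the headphone response don't blow up."""
    B = np.fft.rfft(brir, n_fft)
    H = np.fft.rfft(hp_ir, n_fft)
    C = B * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(C, n_fft)[:len(brir)]

# Sanity check: compensating an IR by itself should give ~a unit impulse
x = np.zeros(512)
x[0], x[1] = 1.0, -0.5
y = compensate(x, x)
print(int(np.argmax(np.abs(y))))  # 0
```

The brightness mismatch we're all hearing would then come from the division being done at the mic position rather than at the eardrum, exactly as described above.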


----------



## conql (Feb 26, 2022)

jaakkopasanen said:


> And you measured with microphones in the exact same position as you did the speaker measurements ie. not taking microphones off in the meanwhile?






I did take the microphones off in previous measurements, but I think my mic setup is quite consistent and I do get steady measurements, similar to musicreo. However, to verify this I measured loudspeakers and headphones again without taking the mics off, trying to keep them in the same place. The results are sadly the same: some headphones are just much brighter than the others. And every headphone sounds less smooth than speakers. I've given up now and decided to tweak the highs manually. Apart from tonality, I'm very satisfied with my current best measurement. I get pretty good imaging and localization and the sound is better than any headphones I've heard. Thank you for your great work! @jaakkopasanen

Recently I wrote a small program as a graphical interface for impulcifer to help speed up measurement and make evaluating different profiles much easier. I really want to make it public, but it's very buggy now because I don't have much programming experience and it was aimed at fast development. If someone finds it helpful, I would love to keep developing it and upload it to GitHub when I have time.


----------



## reter

Joe Bloggs said:


> I mean... I tried this in the past,
> 1. measure BRIR of loudspeakers using in ear mics
> 2. measure BRIR of over-ear headphones using same in ear mics
> 3. take final BRIR to be 1-2
> ...


can i ask what headphones you tried? open back?


----------



## jaakkopasanen

conql said:


> I did take off microphones in previous measurements, but I think my mic setup is quite consistent and I do get steady measurements similar to musicreo. However, to verify this I measure loudspeakers and headphones again without taking the mics off and try to keep them in the same place. The results are sadly the same, some headphones are just much brighter than the others. And every headphone sounds less smooth than speakers. I give up now and decide to tweak the highs manually. Apart from tonality, I'm very satisfied with my current best measurement. I get pretty good imaging and localization and the sound is better than any headphones I've heard. Thank you for your great work! @jaakkopasanen
> 
> Recently, I write a small program as a graphical interface for impulcifer to help speed up measurement and make evaluating different profiles much easier. I really want to make it public but it's very buggy now because I don't have much programming experience and was aimed at fast development. If someone finds it helpful, I would love to keep developing it and upload it to GitHub when I have time.


Have you tried Impulcifer's headphone compensation? I think it's doing the same thing, but just to eliminate potential error sources.

That being said, I do tweak the treble myself. I measure with blocked ear canal mics though.


----------



## conql

jaakkopasanen said:


> Have you tried Impulcifer's headphone compensation? I think it's doing the same but just for eliminating potential error sources.


I use the same parameters as impulcifer except smoothing. But none of those parameters affects tonality in my experiments. Anyway, I will still give it a try when I have some time.


----------



## Joe Bloggs

reter said:


> can i ask what headphone your tried? open back?


Closed back ones


----------



## morgin (Mar 4, 2022)

Ignore me if this is a dumb idea, but would there be a way to use ray traced audio to simulate a room with speakers and have each channel play through those virtual speakers? I'm thinking something like the vr cinema apps you can get to watch movies... have a full screen video playing in a virtual environment, but also have virtual speakers with ray traced audio for each channel.

Another idea: instead of physical measurements with in-ear mics, use ray traced audio in a virtual environment with a virtual head (with our personal head/ear distance measurements). We could play a sound from a virtual 7.1 speaker, listen to the audio cues, and adjust the virtual in-ear mics in real time for the best and most accurate results.

Idea 3: use lidar, same as what you get on new iPhones, to scan your ears, place the scan on a virtual head the same size as your own, then use the virtual speakers and ray traced audio to take measurements.

With a program like this




With those we might be able to get height channels if I’m not pulling ideas from my backside


----------



## Iohfcasa

Genelec offers something like that with their new "Aural ID" service, but nevertheless further frequency adjustments are needed.
Someone could try to replicate the service by creating a 3D scan of their head/ears to load into Mesh2HRTF, but I can't find any instructions for the program.


----------



## morgin

Iohfcasa said:


> Genelec offers something like that with their new "aural id" service, but nevertheless further frequency adjustments are needed.
> Someone could try to replicate the service by creating  a 3d scan of his head/ ears to load it into mesh2hrtf, but I can't find any instructions for the program.



I found this YouTube video on Mesh2HRTF; he says he will follow up with a guide, but I only see one video.


----------



## lowdown

morgin said:


> I found this YouTube video on mesh2hrtf he says he will follow up with a guide but I only see one video.



Looks interesting, but reading the documentation it's clear the quality of the 3D scans is key, and the options without an iPhone aren't good. So happy I have Impulcifer.


----------



## morgin

lowdown said:


> Looks interesting, but reading the documentation clearly the quality of the 3D scans is key, and the options without an iPhone aren't good.  So happy I have Impulcifer.


I'm grateful for impulcifer and cannot see myself watching movies or gaming without it. Just want the height channels so bad

Plus the true binaural audio is a game changer with the immersion it brings.


----------



## lowdown

morgin said:


> I’m great full for impulcifer and cannot see myself watching movies or gaming without it. Just want the height channels so bad
> 
> Plus the true binaural audio is a game changer with the immersion it brings.


Not a gamer but totally understand the interest and motivation.  If I had an iPhone I'd definitely be trying this out, even though for my music listening what I get with Impulcifer is better than I could wish for.  The more tools and interest in personal HRTFs the better.  Look forward to how this one develops. Thanks for linking it.


----------



## morgin

lowdown said:


> Not a gamer but totally understand the interest and motivation.  If I had an iPhone I'd definitely be trying this out, even though for my music listening what I get with Impulcifer is better than I could wish for.  The more tools and interest in personal HRTFs the better.  Look forward to how this one develops. Thanks for linking it.


He replied on YouTube saying a proper guide will be coming soon, so I will be testing it. I have an iPhone 13 Pro but he's saying the iPhone 10 is better for scanning. Just not sure if I need another operating system to use with Mesh2HRTF, otherwise I'd be trying it out now.


----------



## musicreo

Mesh2HRTF is a very interesting project. But it seems to be very difficult to make good 3D scans  with consumer hardware. You also don't have headphone compensation.


----------



## morgin (Mar 8, 2022)

That's true. Impulcifer is so good and well made. The only problem is getting consistent results and not knowing if what you have is the best. That's why I want to try Mesh2HRTF and then compare the results. Maybe it will put me at ease with my current measurements, or maybe it will indicate there's something missing in my setup.

Maybe they both work and can be used together in the future for perfect results. 

Either way I still love impulcifer and thank jaakko for the brilliant work he has done, achieving what not many people have been able to. It still boggles me how he's done something that would normally take several minds with a lot of money and equipment.


----------



## castleofargh

@jaakkopasanen . I've thought about this a few times: you probably should edit your first post to present things a little, or at least add the link for Impulcifer. I know it's in your signature, but I keep linking the first page of this thread when that first page has zero relevant information (besides your signature, but I'm not sure many people get it).


----------



## morgin

musicreo said:


> Mesh2HRTF is a very interesting project. But it seems to be very difficult to make good 3D scans  with consumer hardware. You also don't have headphone compensation.


But you also won't need to worry about sound proofing your room. Worth trying to see what the results are like. The only issue atm is the instructions are all over the place, unlike jaakko's guide, which was complete and well written.


----------



## morgin (Mar 14, 2022)

This guy talks about Waves Nx being able to play Dolby Atmos for height channels. Isn't there a way to use what Waves Nx uses with our own hrirs?


----------



## castleofargh

morgin said:


> This guy talks about waves nx able to play Dolby atmos for height channels. Isn’t there a way to use what waves nx uses for our own hrir’s



Unless they have changed that in recent years, no. The basic version that came with the kickstarter's headphone tracker was super limited (crap), and the more advanced version just had more controls, like the ability to move the speakers around, but all based on the preset HRTF. The only customization tool for the impulses was to give the size of your head (circumference). And while it did improve things for me subjectively, it was nowhere near an option to add your own impulse responses. I know because it's the reason I finally gave it up at the time and started convolving my own stuff with the Reverberate VST (not the free one, as it didn't have "true stereo").


----------



## reter (Mar 14, 2022)

guys do you know where i can buy a good binaural microphone in europe? i'm trying to find a european shop but i only see american shops with very high shipping fees

Also, there's a high and low sensitivity choice for the sp-tfb-2, which should i eventually buy?


----------



## morgin

reter said:


> guys do you know where i can buy a good binaural microphone in europe? i'm trying to find an european shop but i only see american shops with so much shipping fee
> 
> Also, there's a high and low sensitivity choice for the sp-tfb-2, what should i eventually buy?


Don't know if it's the in ear mics you were asking about. But if it was, these are the ones I bought from eBay; they're the ones that got me the best results

https://www.ebay.co.uk/itm/Primo-EM...p2349624.m46890.l6249&mkrid=710-127635-2958-0


----------



## morgin

I know this thread is only for Impulcifer, but I'm almost done with my test run of Mesh2HRTF and I will end up with .sofa files. Can someone help me with using .sofa with HeSuVi, or do I need to convert it somehow to a .wav file?


----------



## Xam198

Hi Guys. I'm passionate about this project, just bought 2 matched mono Primo EM258s and two Rode VXLR+. The impulcifer wiki talks about a Y splitter, but since each mono EM258 has its own jack plug, the Y splitter isn't necessary, no? Thanks


----------



## musicreo

EQ-APO can't use sofa files, but you can extract the channels at the correct angles and convert them to wav. There are APIs for Matlab, Python, and ffmpeg that can do this.
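(Since SOFA/AES69 files are HDF5 containers underneath, plain h5py can also read them without a dedicated SOFA library. A rough sketch that pulls the IR pair nearest a requested direction; the dataset names follow the AES69 SimpleFreeFieldHRIR convention, and the file names in the usage comment are made up:)

```python
import numpy as np
import h5py                        # SOFA (AES69) files are HDF5 underneath
from scipy.io import wavfile

def extract_hrir(path, azimuth, elevation=0.0):
    """Return (fs, N x 2 array) for the measurement closest to the
    requested direction in a SimpleFreeFieldHRIR SOFA file."""
    with h5py.File(path, "r") as f:
        pos = f["SourcePosition"][:]       # M x 3: azimuth, elevation, distance
        ir = f["Data.IR"][:]               # M x R x N (R = 2 ears)
        fs = int(np.ravel(f["Data.SamplingRate"][:])[0])
    az_err = (pos[:, 0] - azimuth + 180) % 360 - 180   # wrap-aware azimuth error
    m = int(np.argmin(np.hypot(az_err, pos[:, 1] - elevation)))
    return fs, ir[m].T                     # transpose to samples x channels

# e.g. dump the 30 degree (front-left) pair to a stereo wav for further use:
# fs, stereo = extract_hrir("my_head.sofa", azimuth=30)
# wavfile.write("FL.wav", fs, stereo.astype(np.float32))
```

You would still need to assemble the per-channel wavs into whatever layout your convolver expects, but this covers the "extract the channels at the correct angles" step.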


----------



## musicreo

Xam198 said:


> Hi Guys. I'm passionate by this project, just bought 2 matched mono Primo EM258 and two rodes VXLR+. The impulcifer wiki talk about an y splitter but since each mono EM258 has his own jack plug, the y splitter isn't necessary no ? Thanks


Correct, the splitter is not necessary.


----------



## morgin

Xam198 said:


> Hi Guys. I'm passionate by this project, just bought 2 matched mono Primo EM258 and two rodes VXLR+. The impulcifer wiki talk about an y splitter but since each mono EM258 has his own jack plug, the y splitter isn't necessary no ? Thanks


Definitely try this, and then keep going till you get results that make you smile every time. People on this forum are a good source of help


----------



## Iohfcasa

What equipment is needed for mesh2hrtf and how to proceed?


----------



## morgin

Iohfcasa said:


> What equipment is needed for mesh2hrtf and how to proceed?


An iPhone with a TrueDepth camera, Blender, and a decent computer with at least 16GB RAM. The iPhone 10 is supposed to be better than the newer ones.

https://sourceforge.net/p/mesh2hrtf/wiki/Basic_HRTF_post_processing/


----------



## reter

morgin said:


> iPhone with true depth camera, blender, and decent computer with at least 16gb ram. iPhone 10 is supposed to be better than the newer ones.
> 
> https://sourceforge.net/p/mesh2hrtf/wiki/Basic_HRTF_post_processing/


now i'm wondering why it's better on an older model... do you know which iPhone he tried?

also, i'm very interested in someone that uses impulcifer comparing mesh2hrtf and hopefully dolby phrtf, to let us know what's best in what situation... surely an iphone X is much cheaper for me than buying stereo microphones+speaker+audio stuff

also, how can windows recognize all the channels dolby does?


----------



## morgin

reter said:


> now i'm wondering why is better in an older model... do you know what iPhone did he try?
> 
> also, i'm very interested to someone that uses impulcifer to compare mesh2hrtf and hopefully dolby phrtf to let us know what's best in what situation... surely an iphone X is much cheaper for me than buying stereo microphones+speaker+audio stuff
> 
> also, how can windows recognize all the channels dolby does?


I'm very close to getting my sofa files, which I should be able to convert and use with HeSuVi, so I will keep you updated on the differences I notice. Just stuck on a problem with a .py not working as intended and can't seem to solve it.

The iPhone 10 has a more detailed TrueDepth camera. I'm lucky to have both the iPhone 10 and 13 Pro, and I can tell the iPhone 10 is better, more detailed.

Still think impulcifer will be better because it's not simulated but actual sound entering your ear.

Regarding Dolby, I don't think we can get height channels, I've asked many times lol


----------



## morgin

I'll link to the problem I have if anyone can help me 

https://www.head-fi.org/threads/mesh2hrtf.962708/


----------



## reter (Mar 25, 2022)

morgin said:


> I'll link to the problem I have if anyone can help me
> 
> https://www.head-fi.org/threads/mesh2hrtf.962708/


you should ask directly in the youtube videos of mesh2hrtf, surely he can help you

for the in-ear microphones from ebay you linked me, i don't know how to make proper binaural equipment, do you know a guide i can follow to make the gear properly?


----------



## morgin

reter said:


> you should ask directly in the youtube videos of mesh2hrtf, surely he can help you
> 
> for the in ear microphones from ebay you linked me, i don't know how to make a proper binaural equipment, do you know a guide i can follow to make the gear properly?


I just soldered the mics to a wire with a 3.5mm jack. You might be better off buying the ready soldered mics with a 3.5mm jack


----------



## musicreo

morgin said:


> I just soldered the mics to a wire with a 3.5mm jack. You might be better off buying the ready soldered mics with a 3.5mm jack


They are sold presoldered at Micbooster.


----------



## morgin

musicreo said:


> They are sold presoldered at Micbooster.


Yes, that’s the website I was looking for to forward to @reter, I couldn’t find it in my emails. 

That’s the website I got my first pair from


----------



## reter

morgin said:


> Yes that’s the website I was looking for to forward for @reter I couldn’t find it in my emails.
> 
> That’s the website I got my first pair from


thanks

what's the meaning of "2 matched"? do i have to buy the 2 matched version or 2 of the single? also what audio interface have you used with these?

for proper measurements, how did you plug them in the ears?


----------



## morgin

Get 2 matched they will be frequency matched. 

I used the Behringer U-PHORIA UMC202HD 

Glued the mics to some foam. There are pictures in this thread and many examples.


----------



## Iohfcasa (Mar 26, 2022)

I don't think an iPhone 10 or any usual smartphone will capture the ear canal properly; we need something like otoscan.


https://gardenahearing.com/otoscan/


----------



## reter (Mar 26, 2022)

morgin said:


> Get 2 matched they will be frequency matched.
> 
> I used the Behringer U-PHORIA UMC202HD
> 
> Glued the mics to some foam. There are pictures in this thread and many examples.


thanks a lot

do you have other suggestions to get the best results? i read many people do different measurements with different settings in impulcifer, what do you suggest?


----------



## morgin

The others helped me, I’m still very new to all this. But the advice given to me helped loads. 

First just concentrate on getting good measurements; the tweaking with settings can come later, because you can always save the files and try different settings. 

Also keep folder names or txt documents recording how you set your speakers, what you used to insert the mics into your ears/how deep, what volume you used on each component, etc. When you get good results you will know what you did well. 

It’s a try and keep trying method. Once you have good results, try and top them with a couple more until you are satisfied. 

My best results came from around 6ft distance from 1 speaker. 
I marked out the positions of a 7.1 speaker layout to look at for each measurement. 
Tried to record with the least background noise. 
Stuck the mics onto some foam plugs with a hot glue gun and inserted them just 2mm past the entrance, making sure they faced out and sat the same distance in each ear. 
I used a swivel chair and positioned myself so that when I rotated, my head was pivoting exactly in the centre. 
With my results I didn’t have to apply any balancing; when I did, it made it worse. 

Hope that helps, I’m probably missing something but the previous posts should help.


----------



## lowdown

reter said:


> thanks a lot
> 
> do you have other suggestions to get the best results? i read many people do different measurements with different settings in impulcifer, what do you suggest?


I'll just add a bit of encouragement to what morgin posted.  I would suggest taking multiple measurements and experimenting freely with the Impulcifer command options.  You may be experienced and lucky and get it all perfect the first time, but don't expect that.  Be patient and persistent.  I can only speak for my experience but I did numerous measurement sessions and many different command option combinations, and just a couple of the final results were extremely good.  One in particular is to my ears virtually perfect.  The potential is so stunning and amazing that even after 2 years I can still hardly believe it.


----------



## reter (Mar 28, 2022)

@morgin @lowdown considering that i will use the behringer UMC202HD, should i plug everything into that audio interface? by everything i mean headphones, speakers and the binaural mics? or should i stick to my audio amp for the headphones and the audio interface for all the rest?
do you think these adapters for the mics should be good enough to record a good signal or should i go for the xlr adapters instead? https://www.amazon.it/dp/B07SKTLXCS/ref=cm_sw_r_apan_i_NM3Q3C9QKFXWQQKY4G18?_encoding=UTF8&psc=1

also, how do i know that the volume of the speaker is right (so not too low or too high) for correct measurements?

sorry for all these questions but i'm trying to figure out everything before the mics arrive


----------



## morgin

reter said:


> @morgin @lowdown considering that i will use the behringer UMC202HD, should i plug everything in that audio interface? by everything i mean headphones, speakers and the binaural mics? or i should stick to my audio amp fornthe headphones and the audio interface for all the rest?
> 
> also, how do i know that the volume of the speaker is right (so not too low or too high) for right measurements?
> 
> sorry for all these questions but i'm trying to figure out everything before the mics will arrive


Don’t be sorry about the questions you wanna see what kind of questions I was asking lmao. 

I plugged everything into the behringer for my measurements. As for the volume I believe it was just louder than normal listening. One of the issues I was having was crap speakers and I was playing everything full blast. Then I changed to something like normal music playing, loud but comfortable, making sure my behringer wasn’t showing clipping. 

Try different volumes in one sitting whilst everything is set up. Also the main thing is to not let the ear mics move; they sometimes like to slip out a few mm just from heavy breathing or moving. 

I’m still trying this mesh2hrtf and it’s a long process, still not getting my measurement. Makes you appreciate how easy impulcifer was in comparison.


----------



## castleofargh

Ideally, normal listening level would be desired to capture the sound like it is when you use your speakers. But there may be a lot of ambient noise in the room and the mics might not be sensitive enough depending on models. So we tend to go for louder test signals to try and improve the SNR. 
How loud is good? Who knows? The speakers and headphone might start having a lot more distortions at higher SPL, and the mic, depending on the type, might not respond well to high intensity. 
Trial and error is the safest and usually the only answer. You're going to look for something convincing, not for some objective target you don't even know. It's one of those fairly rare occasions where I don't advise turning into a measurement nerd. Just try stuff: various levels, various mic placements, various "ear plugs" to hold the mics, and if you like how some impulses impact music, use them and have fun.
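If you do want to put rough numbers on the trial and error, something like the sketch below can help. This is my own illustration (the `check_levels` name and the assumption that the first second of the recording is just room noise are mine, not from this thread): it reports per-channel peak level and an approximate signal-above-noise-floor figure for a sweep recording loaded as a float array, e.g. with soundfile.

```python
import numpy as np

def check_levels(data, fs, noise_seconds=1.0):
    """Per-channel (peak_dbfs, snr_db) for a recorded sweep.

    `data` is a float array in the -1..1 range, shape (samples,) or
    (samples, channels), e.g. from soundfile.read. Assumes the first
    `noise_seconds` contain only room noise, so the noise floor can be
    estimated from that lead-in.
    """
    data = np.atleast_2d(np.asarray(data, dtype=float).T).T  # -> (samples, channels)
    peak = np.max(np.abs(data), axis=0) + 1e-12
    noise_rms = np.sqrt(np.mean(data[: int(noise_seconds * fs)] ** 2, axis=0)) + 1e-12
    signal_rms = np.sqrt(np.mean(data ** 2, axis=0))
    # peak near 0 dBFS means you are close to clipping; a large snr_db
    # means the sweep stands well above the room noise
    return [(20 * np.log10(p), 20 * np.log10(s / n))
            for p, s, n in zip(peak, signal_rms, noise_rms)]
```

A peak close to 0 dBFS warns you the mics or interface gain are about to clip, while a low second number says the sweep is buried in room noise: exactly the two failure modes discussed above.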


About what gear to use while measuring, I don't see why you couldn't just use your usual setup for playback.


----------



## reter

castleofargh said:


> Ideally, normal listening level would be desired to capture the sound like it is when you use your speakers. But there may be a lot of ambient noise in the room and the mics might not be sensitive enough depending on models. So we tend to go for louder test signals to try and improve the SNR.
> How loud is good? Who knows? The speakers and headphone might start having a lot more distortions at higher SPL, and the mic, depending on the type, might not respond well to high intensity.
> Trial and error is the safer and usually the only answer. You're going to look for something convincing, not for some objective target you don't even know. It's one of those fairly rare occasions where I don't advise to turn into a measurement nerd. Just try stuff, various levels, various mic placement, various "ear plugs" to hold the mic, and if you like how some impulses impact music, use them and have fun.
> 
> ...



fact is i bought the jds atom amp and i instantly noticed the audio somehow spatialized in some frequencies and more overall clarity, so considering i will use that amp with hesuvi i don't know if it's a good idea not to use that amp to do the headphone calibration (correct me if i'm wrong).


----------



## lowdown

reter said:


> @morgin @lowdown considering that i will use the behringer UMC202HD, should i plug everything in that audio interface? by everything i mean headphones, speakers and the binaural mics? or i should stick to my audio amp fornthe headphones and the audio interface for all the rest?
> do you think these adapters for the mics should be good enough to do the work to record a good signal or i should go for the xlr adapters instead? https://www.amazon.it/dp/B07SKTLXCS/ref=cm_sw_r_apan_i_NM3Q3C9QKFXWQQKY4G18?_encoding=UTF8&psc=1
> 
> also, how do i know that the volume of the speaker is right (so not too low or too high) for right measurements?
> ...


I'm not familiar with the Behringer so can't offer any specific advice about it.  I used my standard Denon AVR and speakers for recording the room measurements, and a Zoom H2N connected to my binaural mics and laptop for the headphone measurements.  You mentioned the Atom headphone amp in another post, I do my normal headphone listening with a JDS Labs Element headphone amp but didn't use it for any of the Impulcifer measurements.  My impression is their amps are considered very neutral, so I'd be surprised if your Atom is adding any spatial effects, more likely its clarity is allowing you to hear spatial cues in the recordings, which Impulcifer does in spades.  I didn't try any other setups for comparison, but I don't think it's necessary to have everything connected to the Behringer to get good results, nor do you need to use the Atom.  Of course experimenting if you are motivated to hear any differences can never hurt. Also, I used all standard connectors like you linked, and the results are spectacular, but others with XLR experience may want to weigh in on whether there's an audible advantage.


----------



## musicreo

reter said:


> @morgin
> do you think these adapters for the mics should be good enough to do the work to record a good signal or i should go for the xlr adapters instead? https://www.amazon.it/dp/B07SKTLXCS/ref=cm_sw_r_apan_i_NM3Q3C9QKFXWQQKY4G18?_encoding=UTF8&psc=1



You need the more expensive XLR adapter (like the Rode VXLR+) that converts the 48 V phantom power to 3-5 V plug-in power. Otherwise your mic will be silent on the Behringer, as the 6.3mm inputs don't supply plug-in power!


----------



## lowdown

musicreo said:


> You need the more expensive XLR adapter (like the Rode VXLR+) that converts the 48 V phantom power to 3-5 V plug-in power. Otherwise your mic will be silent on the Behringer, as the 6.3mm inputs don't supply plug-in power!


Hmm, yeah I'd call that an audible advantage.


----------



## reter

musicreo said:


> You need the more expensive XLR adapter (like the Rode VXLR+) that converts the 48 V phantom power to 3-5 V plug-in power. Otherwise your mic will be silent on the Behringer, as the 6.3mm inputs don't supply plug-in power!


oh that's something i wasn't considering, thanks for the tip, so another 60 bucks for 2 vxlr adapters ahahah

do you have some other suggestions? i want everything to be set and ready with no issues or oversights


----------



## Xam198

Hi guys, sorry for the newbie question, but at first i will concentrate on simulating my stereo speakers on headphones (will see about 7.1 later). If i understand well, even in this case i must use hesuvi, but do i have to set my playback device to stereo in Windows, or 7.1? It's not very clear to me how hesuvi will work regarding stereo or 7.1 input. Thanks


----------



## morgin

Xam198 said:


> Hi guys, sorry for the newbie question, but at first i will concentrate on simulating my stereo speakers on the headphone (will see for 7.1 later). If i understand well, even in this case i must use hesuvi, but do i have to set my playback device in stereo in Windows, or 7.1 ? Not very clear for on how hesuvi will work regarding stereo or 7.1 incoming. Thanks


First get hesuvi to work and give you good 7.1 sound from the presets included (Dolby atmos, gsx, etc.), then you will know you are ready to start measuring and importing your new hrir. 

These are my hesuvi settings, sorry for the quality, I’m just in the middle of something, hope it helps


----------



## sander99

Xam198 said:


> Hi guys, sorry for the newbie question, but at first i will concentrate on simulating my stereo speakers on the headphone (will see for 7.1 later). If i understand well, even in this case i must use hesuvi, but do i have to set my playback device in stereo in Windows, or 7.1 ? Not very clear for on how hesuvi will work regarding stereo or 7.1 incoming. Thanks


When I first tried HeSuVi I could only get it to work in stereo. And later, when I got it to work in multichannel, it still also worked in stereo. So I expect stereo will never be problematic. The only thing I could imagine being a problem (but maybe it is no problem at all) is not having all channels available in the preset (even if not actually used), but that can easily be solved: when processing your 2 self-measured main channels with Impulcifer, just add the example measurements that came with Impulcifer for the other channels.

At playback time make sure that the upmixer option ("Upmix Content") in HeSuVi is switched off.
Then when you play stereo content it will be simply played over the 2 virtual main speakers, the other virtual speakers - if available - will simply not be used.


----------



## Xam198

Ouch, sorry, it is not very clear. My idea was that for measurements with impulcifer, for just 2 speakers in stereo: HeSuVi off and Windows playback in stereo. At playback time: HeSuVi on, using hesuvi.wav as the hrir, and the Windows playback device in..? 2.0 or 7.1..? Am i wrong?


----------



## lowdown

Xam198 said:


> Ouch, sorry, it is not very clear. My idea was that for measurements with impulcifer, for just 2 speakers in stereo: HeSuVi off and Windows playback in stereo. At playback time: HeSuVi on, using hesuvi.wav as the hrir, and the Windows playback device in..? 2.0 or 7.1..? Am i wrong?


I only use Impulcifer for 2 channel stereo music.  I did the measurement setup as you described.  For playback I have HeSuVi set for "Stereo".  I don't use any Windows multi-channel settings.  It works just as it should.  You can also freely experiment to see what sounds best for your sources.  If you move on to more channels in Impulcifer I'll defer to others.


----------



## musicreo (Mar 30, 2022)

Xam198 said:


> . If i understand well, even in this case i must use hesuvi, but do i have to set my playback device in stereo in Windows, or 7.1 ? Not very clear for on how hesuvi will work regarding stereo or 7.1 incoming. Thanks



Just to clarify, HESUVI is only a GUI for EQ-APO. You can also use EQ-APO without HESUVI. For example you can save the configuration as a txt file and load it in EQ-APO. It can look like this:

#Common preamp
Preamp: 0 dB
#  L=Left; R=Right; C=Center; SUB=LFE; SL=Left Surround; SR=Right Surround; RL=Rear Left; RR=Rear Right
#Create virtual speaker channels
Copy: L0=L R1=L L1=R R0=R C0=C C1=C SUB0=SUB SUB1=SUB SL0=SL SR1=SL SL1=SR SR0=SR RL0=RL RR1=RL RL1=RR RR0=RR
#Mute input channels
Copy: L=0 R=0 C=0 SUB=0 RL=0 RR=0 SL=0 SR=0
#Virtual channels that are filtered
Channel: L0 R1 L1 R0 C0 C1 SUB0 SUB1 SL0 SR1 SL1 SR0 RL0 RR1 RL1 RR0
#i.e.   (LL LR RL RR CL CR LFEL LFER SLL SLR SRL SRR BLL BLR BRL BRR)
#Convolution file: complete path, or "\...." if the file is in the config folder
Convolution: C:\Program Files\EqualizerAPO\config\conVT\210110-A7-M21-FC3-T500(L-R-C-LFE-LS-RS-LB-RB).wav
#Corrections
Channel: C1
#Preamp: 0 dB
Filter 1: ON PK Fc 126.5 Hz Gain -1.30 dB Q 3.523
Channel: R0
Preamp: -1 dB
#RR->LL
Filter 1: ON PK Fc 71.00 Hz Gain -9.00 dB Q 11.519

Copy: L=L0+L1+C0+2*SUB0+SL0+SL1+RL0+RL1 R=R1+R0+C1+2*SUB1+SR1+SR0+RR1+RR0

This way you can even apply filter settings on every (virtual) channel, which is not possible with HESUVI. That way it would also be possible to use more than 7.1 channels internally.


----------



## morgin (Mar 30, 2022)

musicreo said:


> Just to clarify, HESUVI is only a gui for EQ-APO.  You can use EQ-APO also without HESUVi. For example you can save code as txt file and load it in EQ-APO. This can look like this:
> 
> #Common preamp
> Preamp: 0 dB
> ...


So is it better to use the measured hrir this way rather than using hesuvi? Also, where do I find the file with this text to copy over?


----------



## reter

Xam198 said:


> Hi guys, sorry for the newbie question, but at first i will concentrate on simulating my stereo speakers on the headphone (will see for 7.1 later). If i understand well, even in this case i must use hesuvi, but do i have to set my playback device in stereo in Windows, or 7.1 ? Not very clear for on how hesuvi will work regarding stereo or 7.1 incoming. Thanks


follow this guide here https://sourceforge.net/p/hesuvi/discussion/general/thread/ce7c354dd7/

it's the easiest way to get 7.1 working with hesuvi, you can also follow the pictures below


----------



## sander99

@musicreo: By rewriting/extending that script (if I can call it that) would it be possible to facilitate multiple users each with their own independent virtualization? I mean, use for example output channels L+R for user 1, output channels LB+RB for user 2, and output channels LS+RS for user 3. And connect three headphone amps to those different output pairs.


----------



## musicreo

sander99 said:


> @musicreo: By rewriting/extending that script (if I can call it that) would it be possible to facilitate multiple users each with their own independent virtualization? I mean, use for example output channels L+R for user 1, output channels LB+RB for user 2, and output channels LS+RS for user 3. And connect three headphone amps to those different output pairs.



For EQ-APO it should be possible. But I think you would need a device that splits the 5.1 or 7.1 output from EQ-APO to three or four headphone amplifiers.
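As a very rough, untested sketch of that idea in EQ-APO's config syntax, following the same virtual-channel pattern as the earlier example in this thread (the file name and path are placeholders, and the 8-channel IR layout is my own assumption: user 1's four stereo-pair HRIRs followed by user 2's):

```
#Two independent binaural renders on one 7.1 output device
#A0/A1 = user 1 left/right ear, B0/B1 = user 2 left/right ear
#A0,A2 = left/right speaker into left ear; A1,A3 = into right ear (same for B)
Copy: A0=L A1=L A2=R A3=R B0=L B1=L B2=R B3=R
Channel: A0 A1 A2 A3 B0 B1 B2 B3
#8-channel IR file: user1 Ll Lr Rl Rr, then user2 Ll Lr Rl Rr
Convolution: C:\Program Files\EqualizerAPO\config\two_users(Ll-Lr-Rl-Rr-Ll-Lr-Rl-Rr).wav
#Mix each user's ear signals to a different physical output pair
Copy: L=A0+A2 R=A1+A3 RL=B0+B2 RR=B1+B3 C=0 SUB=0 SL=0 SR=0
```

User 1's headphone amp would then hang off the front L/R output pair and user 2's off the rear RL/RR pair, with the physical splitting done in hardware as described above.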


----------



## musicreo

morgin said:


> So does is it better to use the measured hrir this way rather than using hesuvi?


If you want to apply filters on the virtual channels like me it is better. 



morgin said:


> Also where do I find my file with this text to copy over?


I don't understand this question?


----------



## morgin

musicreo said:


> I don't understand this question?


I didn't know how to use that text, but I figured it out. When I applied it and turned off hesuvi the sound wasn't right, it was somewhat quieter. I could hear the surround but the front channel was muted. 

Also I have no idea how the filters work; is there any way I can import yours to try out?


----------



## musicreo

morgin said:


> I didn't know how to use that text, I figured it out. When I applied it and turned off hesuvi the sound wasn't right was somewhat quieter. Could hear the surround but the front channel was muted.


Oh, now I understand. The above example script is not using "hesuvi.wav" files, as I don't like the strange channel order. Best is to convert the "hesuvi.wav" to the "normal" channel configuration. A very simple python script that does this is:

import soundfile as sf

folder = 'C:/Users/pythondata/'
file = 'hesuvitestfile.wav'
path = folder + file
audiodata, samplerate = sf.read(path)

# hesuvi to normal; LS/LB sometimes have to be swapped depending on the software
newaudiodata = audiodata[:, [0, 1, 8, 7, 6, 13, 6, 13, 2, 3, 10, 9, 4, 5, 12, 11]]

# normal to hesuvi
# newaudiodata = audiodata[:, [0, 1, 8, 9, 12, 13, 4, 3, 2, 11, 10, 15, 14, 5]]

sf.write('new_file(L-R-C-LFE-LS-RS-LB-RB).wav', newaudiodata, samplerate, 'PCM_32')

# hesuvi order (indices 0-13):
# [L-l L-r LS-l LS-r LB-l LB-r C-l R-r R-l RS-r RS-l RB-r RB-l C-r]
# normal order (indices 0-15):
# [L-l L-r R-l R-r C-l C-r S-l S-r LS-l LS-r RS-l RS-r LB-l LB-r RB-l RB-r]
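As a sanity check on those two index lists (my own check, not something from this thread): converting a 16-channel "normal" array to hesuvi order and back reproduces every channel except the LFE pair, which comes back as a copy of the centre channels, since the 14-channel hesuvi layout carries no separate LFE IRs.

```python
import numpy as np

# the two index lists from the script above
HESUVI_TO_NORMAL = [0, 1, 8, 7, 6, 13, 6, 13, 2, 3, 10, 9, 4, 5, 12, 11]
NORMAL_TO_HESUVI = [0, 1, 8, 9, 12, 13, 4, 3, 2, 11, 10, 15, 14, 5]

normal = np.arange(16)[None, :]        # one frame; channel i holds the value i
hesuvi = normal[:, NORMAL_TO_HESUVI]   # 14 hesuvi-ordered channels
back = hesuvi[:, HESUVI_TO_NORMAL]     # back to 16 "normal" channels

# channels 6 and 7 (S-l/S-r, the LFE pair) come back as C-l/C-r (values 4 and 5)
expected = [0, 1, 2, 3, 4, 5, 4, 5, 8, 9, 10, 11, 12, 13, 14, 15]
assert back.tolist() == [expected]
```

So the two mappings are mutually consistent, and the forward map's duplicated indices (6 and 13) are what feed the centre IRs into the LFE slots.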




morgin said:


> Also I have no idea how the filters work anyway I can import yours to try out?


 You should look in the EQ-APO wiki.

You can find my file here:
https://1drv.ms/u/s!AnbO8YLUAW6JjgqUDP9SoWcyP3Qn?e=ofqS19


----------



## morgin

musicreo said:


> Oh now I understand.  The above example script is not using "hesuvi.wav" files as I don't like the strange channel order. Best is to convert the "hesuvi.wav" to the "normal" channel configuration.  A very simple python script that does this is:
> 
> import soundfile as sf
> 
> ...


Thanks buddy I’ll give this a try tonight.


----------



## reter

musicreo said:


> If you want to apply filters on the virtual channels like me it is better.
> 
> 
> I don't understand this question?


can i ask a question? why should you apply filters to the channels? i'm curious on what kind of filters you're using and why


----------



## musicreo

reter said:


> can i ask a question? why should you apply filters to the channels? i'm curious on what kind of filters you're using and why


I do some equalizing for channel balance correction. The reason for that you can find in post #960. I could merge the equalizing into the wav file now, but for testing it was really helpful to have the option to change filters on the fly.


----------



## morgin

musicreo said:


> Oh now I understand.  The above example script is not using "hesuvi.wav" files as I don't like the strange channel order. Best is to convert the "hesuvi.wav" to the "normal" channel configuration.  A very simple python script that does this is:
> 
> import soundfile as sf
> 
> ...


I've tried running this script but I get an error

Python 3.10.4 (tags/v3.10.4:9d38120, Mar 23 2022, 23:13:41) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import soundfile as sf
>>>
>>> folder='C:/Users/pythondata/'
>>> file='hesuvitestfile.wav'
>>> path=folder+file
>>> audiodata,samplerate = sf.read(path)
>>>
>>> #hesuvi to normal LS/LB sometimes have to be changed depending on software
>>> newaudiodata=audiodata[:,[0, 1, 8, 7, 6, 13, 6, 13, 2, 3, 10, 9, 4, 5, 12, 11 ]]
>>>
>>> #normal to hesuvi
>>> #newaudiodata=audiodata[:,[0, 1, 8, 9, 12, 13, 4, 3, 2, 11, 10, 15, 14, 5 ]]
>>>
>>> sf.write('new_file(L-R-C-LFE-LS-RS-LB-RB).wav', newaudiodata, samplerate,'PCM_32')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\morgi\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\soundfile.py", line 314, in write
    with SoundFile(file, 'w', samplerate, channels,
  File "C:\Users\morgi\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\soundfile.py", line 629, in __init__
    self._file = self._open(file, mode_int, closefd)
  File "C:\Users\morgi\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\soundfile.py", line 1183, in _open
    _error_check(_snd.sf_error(file_ptr),
  File "C:\Users\morgi\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\soundfile.py", line 1357, in _error_check
    raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
RuntimeError: Error opening 'new_file(L-R-C-LFE-LS-RS-LB-RB).wav': System error.
>>>
>>> #[L-l L-r LS-l LS-r LB-l Lb-r C-l R-r R-l RS-r RS-l RB-r RB-l C-r ]%Hesuvi
>>> #[0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15]
>>> #[L-l L-r R-l R-r C-l C-r S-l S-r LS-l LS-r Rs-l Rs-r LB-l Lb-r RB-l RB-r ]%normal
>>>


----------



## musicreo

You have to change the folder='' and file='' to your folder and your hesuvi.wav file you want to convert.


----------



## morgin

I actually made the same folder as yours in the location of the text file and copied the hesuvi.wav over, changing it to the same name as yours. Same error. 

I also tried changing it to another location but it didn’t work. Maybe I’m missing something in python


----------



## reter

musicreo said:


> I do  some equalizing for channel balance correction. The reason for that you find in post #960.  I could merge  the equalizing into the wav file now but for testing it was really helpful to have the option  to change filters on the fly.


very interesting! I didn't know that after the impulcifer measurements you can still have channel imbalance, i will try it too after doing my personal hrir

ordered all the stuff, i should be able to start in a week or so hopefully...


----------



## musicreo (Apr 1, 2022)

morgin said:


> I actually made the same folder as yours in the location of the text file and copied the hesuvi.wav changing it to the same name as what you have. Same error.
> 
> I also tried changing it to another location but didn’t work. Maybe I’m missing something in python


The only package used is SoundFile. You can run "pip install SoundFile" to check if you have installed it.


----------



## morgin

reter said:


> very interesting! I didn't know that after the impulcifer measurements you can still have channel unbalance, i will try too after done my personal hrir
> 
> ordered all the stuff, i should start in a week or so hopefully...


Good luck, hope you get good results. Keep track of what you tried and what worked, and let us know


----------



## reter

morgin said:


> Good luck, hope you get good results. Keep a track of what you tried and what worked for and let us know


will i have to repeat the recording of the hrir every time, or are some of the impulcifer functions applied post recording? i'm trying to think about how i should prepare the working area; hopefully i will not have to do the recording and turn around many times

do you use the stereo and 5.1 upmix in hesuvi? i mean that function which plays all the 7 channels together even if the format you're playing is stereo


----------



## morgin (Apr 1, 2022)

reter said:


> will i have to repeat the recording of the hrir most of the time or some of the impulcifer functions are post recording? i'm trying to think about how i should prepare the working area, hopefully i will not have to do the recording and turn around many times
> 
> do you use the stereo and 5.1 upmix in hesuvi? i mean that function which plays all the 7 channels together even if the format you're playing is in stereo


Whilst recording you will need to turn your head. I’d recommend a few in one go. Each time you complete the recording save the folder somewhere else, delete the recording, adjust a little (change volume, push the mics a little further, change speaker position a little closer/further away) and do another recording.

The balance and other tweaks can be done another time. I was like you and wanted it done in the first sitting. I was wrong and it took me a few days

I upmix 5.1 to 7.1 in hesuvi

Also whilst waiting, you might as well set up and try the demo so your computer is all ready for the full measurements


----------



## reter (Apr 1, 2022)

morgin said:


> Whilst recording you will need to turn your head. I’d recommend a few in one go. Each time you complete the recording save the folder somewhere else, delete the recording, adjust a little (change volume, push the mics a little further, change speaker position a little closer/further away) and do another recording.
> 
> The balance and other tweaks can be done another time. I was like you and wanted it done in the first sitting. I was wrong and it took me a few days
> 
> ...


i already installed impulcifer a while ago to try to cut some reverb in some hrir but i didn't like how it sounded. in fact my only concern about my future hrir recording is "how much reverb will i have?", i was even thinking of buying some acoustic foam to help reduce the eventual reverb lol but too much money so whatever


----------



## morgin

reter said:


> i already installed impulcifer a while ago to try to cut some reverber in some hrir but i didn't like how it sounded, in fact my only concern about my future hrir recording is "how much reverb i will have?", i was even thinking to buy some acoustic foam to help reduce the eventual reverb lol but too much money so whatever


You can reduce the reverb with one of the options; it worked really well for one of my measurements in the past. 

Decay Time Management: The room decay time (reverb time) captured in the binaural room impulse responses can be shortened with the --decay parameter. The value is the time it should take for the sound to decay by 60 dB, in milliseconds. When the natural decay time is longer than the given target, the impulse response tails will be shortened with a slope to achieve the desired decay velocity. Decay times are not increased if the target is longer than the natural one. Decay time management can be a powerful tool for controlling ringing in the room without having to do any physical room treatments.
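To illustrate the idea in that quoted paragraph, here is a minimal sketch of decay shortening. This is my own simplification, not Impulcifer's implementation: the `knee_ms` split point is an assumption, and unlike Impulcifer it imposes the full -60 dB/RT60 slope on the tail instead of only the difference from the natural decay, so it shortens at least to the target.

```python
import numpy as np

def shorten_decay(ir, fs, target_rt60_ms, knee_ms=50.0):
    """Shorten an impulse response tail with an exponential slope.

    After `knee_ms`, the tail is multiplied by an envelope that falls
    60 dB over `target_rt60_ms`, forcing the combined decay to reach
    -60 dB at least that fast. Simplified illustration only: it does not
    check whether the natural decay is already shorter than the target.
    """
    ir = np.asarray(ir, dtype=float)
    knee = int(knee_ms * fs / 1000)
    n_tail = len(ir) - knee
    if n_tail <= 0:
        return ir
    # -60 dB spread over target_rt60_ms, expressed per sample
    db_per_sample = -60.0 / (target_rt60_ms * fs / 1000)
    env = np.ones(len(ir))
    env[knee:] = 10 ** (db_per_sample * np.arange(n_tail) / 20)
    return ir * env
```

With a 200 ms target at 48 kHz, the envelope reaches 0.001 (-60 dB) at sample 9600, which is what "decay by 60 dB in 200 ms" means in the description above.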


----------



## Xam198 (Apr 1, 2022)

Hi, first tests, first measurement - stereo - and i've got a warning: FT measurement has lower delay to the left ear than to the right, 1.8125... But everything seems correctly set up: i'm using a Zoom H5 for input, and the test signal is heard on the left then on the right, on headphones or speakers. Any idea?
I've tested with Dire Straits' Sultans of Swing: when i activate it in Hesuvi, i lose all the reverb on the guitars.


----------



## morgin (Apr 1, 2022)

Xam198 said:


> Hi, firsts tests, first measurement - stereo -, i've got a warning FT measurement has lower delay to left ear than to right, 1,8125... But everything seems correctly set up : i'm using a zomm H5 for input, the test signal is heared on left then on right, on headphones or speakers. Any idea ?
> I've tested with Dire Straits Sultans of Swing, when i activate in Hesuvi, i loose all reverbs on guitars.


I was getting similar graphs and my volume was way too loud. I set mine to just above normal loud listening and set the gain on both mics to just about below where they were clipping. Both mics had a slightly different gain setting.

Try lowering your volume or the gain on your in ear mics


----------



## reter (Apr 2, 2022)

morgin said:


> I was getting similar graphs and my volume was way too loud. I set mine to just above normal loud listening and set the gain on both mics to just about below where they were clipping. Both mics had a slightly different gain setting.
> 
> Try lowering your volume or the gain on your in ear mics


if both mics have different gain, does this mean you have to lower the gain on one of the two? and how can i work out the exact gain to balance both mics?


----------



## conql (Apr 2, 2022)

Xam198 said:


> Hi, firsts tests, first measurement - stereo -, i've got a warning FT measurement has lower delay to left ear than to right, 1,8125... But everything seems correctly set up : i'm using a zomm H5 for input, the test signal is heared on left then on right, on headphones or speakers. Any idea ?


It's likely that you reversed the left and right mics. When the right speaker is playing, sound should arrive earlier in the right ear than the left. That's what the warning means.
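For the curious, that arrival-time check can be sketched with a cross-correlation between the two ears' impulse responses. This is my own illustration of the principle, not Impulcifer's actual code, and the function name is hypothetical:

```python
import numpy as np

def interaural_delay_ms(ir_left, ir_right, fs):
    """Delay of the right-ear response relative to the left, in ms.

    Assumes both impulse responses have the same length. Positive means
    sound reached the left ear first, as it should for a left-side
    speaker; a negative value for a left-speaker measurement suggests
    the mics (or channels) are swapped.
    """
    n = len(ir_left)
    # full cross-correlation; the peak lag is the sample offset between ears
    xcorr = np.correlate(ir_right, ir_left, mode="full")
    lag = np.argmax(np.abs(xcorr)) - (n - 1)
    return 1000.0 * lag / fs
```

For a real head the magnitude should be well under about a millisecond (roughly the maximum interaural time difference), so a value like the 1.8125 reported above is a strong hint that something in the chain is reversed.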


----------



## Xam198

conql said:


> It's likely that you reversed the left and right mics. When the right speaker is playing, sound should arrive earlier in the right ear than the left. That's what the warning means.


Humm yes, i didn't check that; perhaps on the Zoom H5 input 1 doesn't correspond to the L channel, i will test that. 
For the levels, as Morgin said, i've increased them so that i was very close to 0 dB headroom, as suggested in the wiki, so that the S/N is high.


----------



## Xam198 (Apr 2, 2022)

Xam198 said:


> Hmm, yes, I didn't check that. Perhaps on the Zoom H5 input 1 doesn't correspond to the L channel; I will test that.
> For the levels, as Morgin said, I increased them so that I was very close to 0 dB headroom, as suggested in the wiki, so that the S/N is high.


OK, I switched the mics. I still have the warning, but the delay is lower: 0.29 ms. The plots are very similar to the first ones, and I decreased the levels. On first listening, the lower delay has an impact: there is something here. It's far from perfect, but for the very first time I no longer have the feeling that the sound from the headphones is inside my head, or around it; it almost seems to come from my speakers, which are muted. No virtualization (Dolby Headphone, OOYH and others) ever gave me this feeling. I understand now the benefit of a personal HRIR. There's still a lot of work to make it better, but it's a good beginning.
Any other advice to correct this delay and my bad graphs?

Edit: grrr, I've got a hum from one of the VXLR+ adapters. It's not very loud, but I can hear it when I increase the input level of the mic. So more tests are useless; I have to send the bad VXLR+ back.


----------



## reter

Xam198 said:


> OK, I switched the mics. I still have the warning, but the delay is lower: 0.29 ms. The plots are very similar to the first ones, and I decreased the levels. On first listening, the lower delay has an impact: there is something here. It's far from perfect, but for the very first time I no longer have the feeling that the sound from the headphones is inside my head, or around it; it almost seems to come from my speakers, which are muted. No virtualization (Dolby Headphone, OOYH and others) ever gave me this feeling. I understand now the benefit of a personal HRIR. There's still a lot of work to make it better, but it's a good beginning.
> Any other advice to correct this delay and my bad graphs?
> 
> Edit: grrr, I've got a hum from one of the VXLR+ adapters. It's not very loud, but I can hear it when I increase the input level of the mic. So more tests are useless; I have to send the bad VXLR+ back.



A hum? Explain, please. Now I'm doubting my VXLR+ too, because one of the two arrived with its package opened. My mics are still in shipment, so I can't test yet.


----------



## morgin

I've converted my HeSuVi file to the "normal" channel config and, for convenience, renamed it to the same name as yours: "210110-S5-M21-FC3-T500(L-R-C-LFE-LS-RS-LB-RB).wav". I then put both the wav and your filters .txt file into C:\Program Files\EqualizerAPO\config\conVT. When I deactivate everything in HeSuVi I only get the surround channels, and changing the setting in EQ APO doesn't make any difference. Is there something I'm missing?


----------



## Xam198

@reter
It's typically the sound of a component malfunctioning. Not a buzz, nor a ground loop, but a sort of "vvvvvum", lower in frequency than a buzz. I saw it in the level indicator of the Zoom H5: without any sound, the level of the right channel was moving very slightly. I recorded it with Audacity and got confirmation. I tried both mics on the VXLR and switched the VXLRs on the H5; the hum followed the VXLR+. I increased the input level of the H5 to max: perfectly clean sound on one input, hum on the other. Back to the sender for a new one.


----------



## musicreo



morgin said:


> I've converted my HeSuVi file to the "normal" channel config and, for convenience, renamed it to the same name as yours: "210110-S5-M21-FC3-T500(L-R-C-LFE-LS-RS-LB-RB).wav". I then put both the wav and your filters .txt file into C:\Program Files\EqualizerAPO\config\conVT. When I deactivate everything in HeSuVi I only get the surround channels, and changing the setting in EQ APO doesn't make any difference. Is there something I'm missing?


Where you put your files doesn't matter, as long as the path and name of the wav file used for the convolution are correct in the txt file.
You just need to include the txt file in the EQ APO Configuration Editor, as in my screenshot. If you want, you can upload your HeSuVi wav file and your converted one, and I'll take a look to see if everything is OK.


----------



## morgin

Thanks I really appreciate it. Here's the link to my converted and original Hrir

https://mega.nz/folder/AEtEwCQS#pyaRjbTBDNDGXEiWUvWS5g


----------



## musicreo

morgin said:


> Thanks I really appreciate it. Here's the link to my converted and original Hrir
> 
> https://mega.nz/folder/AEtEwCQS#pyaRjbTBDNDGXEiWUvWS5g


The files are OK and work. Is there some filter active earlier in the EQ APO Configuration Editor that might cause problems? Have you deactivated HeSuVi in the Configuration Editor?

One other thing I notice is that your files go up to 0 dB, so you risk clipping.


----------



## morgin

musicreo said:


> The files are ok  and work.    Is there some filter active before  in the EQ APO configuration editor that may make problems? You have deactivated HESUVI in the configuration editor?
> 
> One other thing I notice is that your files go up to 0db  and you risk clipping.


Yeah, I deactivated everything in HeSuVi from the menu. There are no filters. The volume was louder because I think Jaakko mentioned surround sound needs the volume.


----------



## musicreo

morgin said:


> Yeh I deactivated everything in hesuvi from the menu.


Does that mean you switched it off in the EQ-APO editor or is it still active there and included before the txt file?


----------



## morgin (Apr 3, 2022)

musicreo said:


> Does that mean you switched it off in the EQ-APO editor or is it still active there and included before the txt file?


I deleted the original text config file to check if that made a difference and used the new txt file only with EQ-APO, but it didn't work.

I may need to delete everything and reinstall. I have to do that sometimes when the audio isn't quite right, but this time I'll just use the new .txt file and the changed .wav, without installing HeSuVi.

Just to say: you are a big help to me and have helped me loads, and I really appreciate you. Thanks.


----------



## reter (Apr 3, 2022)

musicreo said:


> Where you put your files doesn't matter, as long as the path and name of the wav file used for the convolution are correct in the txt file.
> You just need to include the txt file in the EQ APO Configuration Editor, as in my screenshot. If you want, you can upload your HeSuVi wav file and your converted one, and I'll take a look to see if everything is OK.


My curve is totally red. Does this affect the overall audio?






To fix it, should I record my BRIR while trying to keep the gain under 0 dB?


----------



## morgin

reter said:


> My curve is totally red, does this affect the overall audio?
> 
> 
> 
> ...


I'm pretty sure it means the audio is clipping, and it looks like it's clipping at all frequencies on your end. Ideally there shouldn't be any red.


----------



## Xam198

I still have the warning that the FL measurement has lower delay to the left ear than to the right (0.29 ms) after switching the mics. I don't understand; any ideas? Could it be my sound card, an EMU 1820? It's old (ten years, I think) but sounds good.
I found the EM258 mics a bit noisy. Is it the same for those of you who have them?


----------



## reter (Apr 3, 2022)

morgin said:


> I’m sure it means the audio is clipping and it looks like it is clipping on all the frequencies on your end. Ideally there shouldn’t be any red


Yeah, but I've used HeSuVi on two systems and both have this problem. A lot of users have the same red graph, so when I saw @musicreo's graph, which is almost perfect, I thought he'd be the one to ask how to fix this damn problem I've had for almost two years.

@Xam198 have you tried switching both the mics and the VXLRs? I'm no expert, but I would test with another audio interface just to be sure.


----------



## musicreo

Xam198 said:


> I still have the warning that the FL measurement has lower delay to the left ear than to the right (0.29 ms) after switching the mics. I don't understand; any ideas? Could it be my sound card, an EMU 1820? It's old (ten years, I think) but sounds good.


I guess there is some noise that is detected as signal before the sweep starts and that causes the warning.



Xam198 said:


> I found the EM258 mics a bit noisy. Is it the same for those of you who have them?


I had a few capsules with a lot of noise and some that were very good, but it's possible that was down to my soldering.


----------



## musicreo

reter said:


> Yeah, but I've used HeSuVi on two systems and both have this problem. A lot of users have the same red graph, so when I saw @musicreo's graph, which is almost perfect, I thought he'd be the one to ask how to fix this damn problem I've had for almost two years.



The graph is the sum of all the channels in the convolution. So if all channels were close to 0 dB (which is very rare), you would have up to 18.4 dB of clipping. If you just play stereo it is much less. Usually you want some headroom to avoid any clipping. One simple way would be to set a negative preamplification value before the convolution, but then your sound card must have enough power to drive your headphones even with a larger negative preamplification.

Another problem would be if your impulse response filters are already clipped.
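For intuition, the worst case for N full-scale channels summing fully coherently into one ear signal is 20·log10(N). For the 8 channels of a 7.1 preset that is about 18.1 dB, in the same ballpark as the 18.4 dB figure above (which presumably reflects the exact channel weighting used):

```python
import math

def worst_case_sum_gain_db(n_channels):
    """Level increase if n full-scale channels sum fully
    coherently into one ear signal: 20 * log10(n)."""
    return 20 * math.log10(n_channels)

# A 7.1 preset mixes 8 virtual speakers into each ear:
print(round(worst_case_sum_gain_db(8), 1))  # 18.1
```

In practice real program material is far from fully coherent across channels, which is why such extreme clipping is rare.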


----------



## reter

musicreo said:


> The graph is the sum of all the channels in the convolution. So if all channels were close to 0 dB (which is very rare), you would have up to 18.4 dB of clipping. If you just play stereo it is much less. Usually you want some headroom to avoid any clipping. One simple way would be to set a negative preamplification value before the convolution, but then your sound card must have enough power to drive your headphones even with a larger negative preamplification.
> 
> Another problem would be if your impulse response filters are already clipped.



I'm trying the negative preamp as you suggested; luckily my JDS can drive my 660S easily. But I'm not satisfied with just doing the preamplification in EQ APO; I would prefer to record a good graph. I noticed that you don't actually have a negative preamp, only an 8.5 dB boost in the low frequencies, which is great. How did you achieve that?


----------



## musicreo

reter said:


> I would prefer to record a good graph. I noticed that you don't actually have a negative preamp, only an 8.5 dB boost in the low frequencies, which is great. How did you achieve that?



Impulcifer will usually give you a recording that has no clipping, but there is also a gain setting in the post-processing.


----------



## Xam198 (Apr 4, 2022)

musicreo said:


> I guess there is some noise that is detected as signal before the sweep starts and that causes the warning.
> 
> 
> I had a few capsules with lot of noise and some that were very good but it is possible that this was to my soldering.


Hmm, that could be it, because one mic/VXLR is humming, and when I swap the mic plugs (without changing the mics in my ears), the delay decreases.
I wonder if I would get better results/less noise with Sound Professionals mics...
The hum is typically a bad ground: when I put my hand on the VXLR+, the hum decreases.


----------



## reter

Xam198 said:


> I wonder if I would get better results/less noise with Sound Professionals mics...


That was my initial goal, but sadly I didn't find any European website that sells those Sound Professionals mics, not one, so I preferred risking some import tax from England with pretty cheap mics.

jaakkopasanen has the noise of every binaural mic in his guide; it seems the Primos are almost literally equal to the Sound Professionals. I don't know how reliable that is, though.






It would be great if someone actually tried both.


----------



## morgin (Apr 5, 2022)

@jaakkopasanen I'm currently trying Mesh2HRTF and finding it a lot more difficult than Impulcifer. But I had a thought.

I have my scans of my basic head and detailed ears. Couldn't I place a virtual speaker (or several) in Blender and use Impulcifer for the measurements? That way we get the best of both worlds: a room with no background noise, plus things like headphone compensation. I believe Mesh2HRTF works in reverse: instead of the sound coming into the ears, it has the sound come out of the ears and bounce around until it reaches the speakers.


----------



## jaakkopasanen

morgin said:


> @jaakkopasanen I'm currently trying Mesh2HRTF and finding it a lot more difficult than Impulcifer. But I had a thought.
> 
> I have my scans of my basic head and detailed ears. Couldn't I place a virtual speaker (or several) in Blender and use Impulcifer for the measurements? That way we get the best of both worlds: a room with no background noise, plus things like headphone compensation. I believe Mesh2HRTF works in reverse: instead of the sound coming into the ears, it has the sound come out of the ears and bounce around until it reaches the speakers.


I have no experience with Mesh2HRTF. You'd need some kind of acoustic space simulation software for your idea, but I have no experience with those either. I doubt Blender will do anything in this case.


----------



## reter (Apr 5, 2022)

musicreo said:


> I wanted to share that in the meantime I have changed the mic placement. I removed the yellow foam and now place the capsule right into the ear canal opening, as shown in the image. Overall it improved the quality of the measurement. The second image shows how I did my last measurements. I'm really happy with the results.


@musicreo sorry for replying to an old post, but is this still the best result you've achieved? I mean, plugging the mics this deep into the ear canal.


----------



## musicreo

reter said:


> @musicreo sorry for replying to an old post, but is this still the best result you've achieved? I mean, plugging the mics this deep into the ear canal.



Yes, I still use the measurements from that time. I put the mics in my ears as deep as still feels comfortable, mark the positions on the cable, and fix it to the wire construction. That way the deep mic positioning is repeatable. However, I prefer my partially open ear canal measurements over my blocked ear canal measurements, but that doesn't mean other methods can't get similarly good results.


----------



## reter

@morgin do you have a picture of your mics?
I'm trying to work out how I can glue these * to a pair of foam plugs, but the cable is so rigid that I don't want to risk breaking its integrity. How did you do it?





Also, does the mic's actual position in the ear affect the measurements? Can I just glue it on its side, or do I have to glue it so the mic faces outwards from the ear?


----------



## morgin

Sorry about the quality of the pic, but you should be able to see that I stripped the housing. When inserting them into the ears, I looped the stripped part so it went into the ear with the foam, keeping the mics facing outwards. Others may have a better solution.

Yeah, keep the mics unblocked and facing more or less outwards.


----------



## reter

OK, this is my first experiment, and I don't actually like it... I don't even know how I can fit this mess inside my ears.
Maybe I should strip a lot more, so the cable can do a U-turn inside the foam and come back out.


----------



## morgin

Just be careful where the wires are soldered on; too much wiggling around breaks the connection, and re-soldering later can cause the mics to burn out.


----------



## reter

morgin said:


> Just be careful where the wires are soldered on, too much wiggling around breaks the connection and soldering later can cause the mics to burn out.


I noticed copper around the white cable... how did you manage to strip that much of the black cable without cutting the copper? It seems very risky.


----------



## morgin

With a sharp blade and patience. If it went wrong I could always re-solder. Check the other posts though; their method might work better for you.


----------



## Xam198

Hmm, very strange. I tested with another computer (not mine) and small speakers, but the same mics, same Zoom H5, same headphones: no warning about the left/right ear delay. So I tested again with my computer and the integrated sound card rather than the (very good) EMU 1820: the warning again! Either it's the computer (!) or the amplifier (a mix of digital and analog). I'm lost.


----------



## reter

morgin said:


> With a sharp blade and patience. If it went wrong I could always resolder. Check the other posts tho their method might work better for you


I think I will go for something like this: https://micbooster.com/microphone-holders/89-primo-6-mm-rubber-holder-158a-fc034.html

I ordered a pair, but I should be able to make them from anti-vibration rubber... I won't even try to make measurements with this mess I made; I'm sure it would be a waste of time.


----------



## morgin

You still need a way of getting the wire to come out of the ear, because it will always be on the back of the mics. I don't think that housing will help much, unless you're using it to glue onto the foam plugs so you don't have to glue straight onto the mics.


----------



## reter

morgin said:


> You still need a way of getting the wire to come out of the ear because it will always be on the back of the mics. I don’t think that housing will help much unless you’re using that housing to glue onto the foam plugs so you don’t have to glue straight onto the mics.


OK, this is my second attempt, and I'm happy with it. They fit deep enough in the ears and are somewhat stable (hot glue works miracles); tomorrow I will begin the measurements.







This foam in particular is awesome; it expands very slowly, so you can fit them deeper. The orange ones were very hard, so even the choice of foam matters.

I cut 2-3 strands of copper, but the mic is still working; hopefully this won't impact the overall performance. I'm still thinking about insulating that critical part with a little insulating tape, but I'm not sure.


----------



## morgin

reter said:


> OK, this is my second attempt, and I'm happy with it. They fit deep enough in the ears and are somewhat stable (hot glue works miracles); tomorrow I will begin the measurements.
> 
> 
> 
> ...


That looks very close to mine. I cut more of the housing, all the way to the mic; otherwise it still irritates the ear. Plus, with the bare thin wires you can loop them behind the foam and back out of the ear.


----------



## reter (Apr 9, 2022)

OK, my first measurements are done. Just wow, the first impression is absolutely awesome; it's really like I'm listening to the speakers in my room. But I don't know if I did it right.

The first graph shows my headphones measurement, and I see the error line is well above the target line. Is this normal? I tried several times and couldn't get under 10.5 dB of headroom (balancing the mic and headphone volumes), and 7-10 dB of headroom for the speaker measurements.





What is that "difference" line showing? Is it a bad thing?





This is my folder showing all the graphs: https://drive.google.com/file/d/1ZGqYEXMDoHcHFRSNDMG8VChHIl-MW9wL/view?usp=sharing

This is how equalizer apo shows my hrir,





Another thing is that the volume is too low, and I have to turn my amp literally way, way up. As shown in the measurements, I have a peak gain of 1.3 dB; that's too low, I think...?

Also, there's a lot of reverb. I was expecting some reverb from my room, but holy, this is way too much!


----------



## morgin

reter said:


> OK, my first measurements are done. Just wow, the first impression is absolutely awesome; it's really like I'm listening to the speakers in my room. But I don't know if I did it right.
> 
> The first graph shows my headphones measurement, and I see the error line is well above the target line. Is this normal? I tried several times and couldn't get under 10.5 dB of headroom (balancing the mic and headphone volumes), and 7-10 dB of headroom for the speaker measurements.
> 
> ...


That actually looks really good for your first attempt. I got to where you are after a fair few trials and changes.


----------



## morgin

reter said:


> What is that "difference" line showing? Is it a bad thing?


No that looks normal to me. Mine was very similar


reter said:


> Another thing is that the volume is too low, and I have to turn my amp literally way, way up. As shown in the measurements, I have a peak gain of 1.3 dB; that's too low, I think...?


you will need to change the target level. This was mine.

python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --target_level=10 --dir_path="data/my_hrir" --plot



reter said:


> Also, there's a lot of reverb. I was expecting some reverb from my room, but holy, this is way too much!


Decay Time Management

The room decay time (reverb time) captured in the binaural room impulse responses can be shortened with the --decay parameter. The value is the time it should take for the sound to decay by 60 dB, in milliseconds. When the natural decay time is longer than the given target, the impulse response tails will be shortened with a slope to achieve the desired decay rate. Decay times are not increased if the target is longer than the natural one. Decay time management can be a powerful tool for controlling ringing in the room without having to do any physical room treatments.
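The tail-shortening described above amounts to multiplying the impulse response by an exponential window that adds just enough extra decay to hit the target RT60. A sketch of the principle only, not Impulcifer's actual implementation:

```python
import math

def shorten_decay(ir, fs, natural_rt60_s, target_rt60_s):
    """Multiply the impulse response by an exponential window so its
    tail decays 60 dB in target_rt60_s seconds instead of
    natural_rt60_s. Sketch only; names are made up for the example."""
    if target_rt60_s >= natural_rt60_s:
        return list(ir)  # decay is only ever shortened, never lengthened
    # Extra decay the window must add, in dB per second.
    extra_db_per_s = 60.0 / target_rt60_s - 60.0 / natural_rt60_s
    return [x * 10 ** (-extra_db_per_s * (n / fs) / 20.0)
            for n, x in enumerate(ir)]

# Example: a tail that naturally takes 1.0 s to fall 60 dB,
# shortened to a 0.2 s target.
fs = 1000
ir = [10 ** (-60.0 * (n / fs) / 20.0) for n in range(fs)]
out = shorten_decay(ir, fs, 1.0, 0.2)
print(round(20 * math.log10(out[200]), 1))  # -60.0 (dB at 0.2 s)
```

A real tool would estimate the natural RT60 from the measurement first and apply the window only past the direct sound, but the windowing idea is the same.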


----------



## musicreo

morgin said:


> No that looks normal to me. Mine was very similar
> 
> you will need to change the target level. This was mine.
> 
> ...


--target_level=10 means 10db above 0db! This produces a clipped impulse response.


----------



## morgin

musicreo said:


> --target_level=10 means 10db above 0db! This produces a clipped impulse response.


Yikes, I'd best not give advice; I'm still a learner. But it's the only way I get decent volume at 80% in Windows and 70% in media players.


----------



## musicreo

morgin said:


> Yikes, I'd best not give advice; I'm still a learner. But it's the only way I get decent volume at 80% in Windows and 70% in media players.



For "--target_level" it is better to use negative values, to keep some headroom. If the result is too quiet, it is better to use the amplification in EQ-APO to amplify the input.


----------



## reter

OK, test 2 done.

Even better headroom for the speakers, and 9 dB for the headphones. Is this enough, or should I go lower?














I went to a target level of -10; it's pretty good for me at the moment.





MY HRIR TEST 2: https://drive.google.com/file/d/1Bqxy_yXTXnaBE385DJem1gQcQjJRJuX7/view?usp=sharing

Now I'll try the decay; hopefully I'll achieve good results.


----------



## morgin

musicreo said:


> For "--target_level" it is better to use negative values, to keep some headroom. If the result is too quiet, it is better to use the amplification in EQ-APO to amplify the input.


Thanks, man. I keep saying this, but I'm glad you're part of this, with so much solid advice.


----------



## castleofargh

musicreo said:


> For "--target_level" it is better to use negative values, to keep some headroom. If the result is too quiet, it is better to use the amplification in EQ-APO to amplify the input.


Digital boost, wherever it is, will give the same clipping for an over-0 dB signal before it goes to the DAC. Even floating point can't save us.

If it looks like the mic or headphone might be failing at either end of the frequency range, change them (move the mic if it might be obstructed somehow, or get a headphone with better FR extension in the area where the impulse tries to apply so much boost). Or add another EQ on top of the impulse to attenuate the area that's abnormally high. Anything above, let's say, 13 or 15 kHz is probably not accurate anyway and not very important for music (less so if you're an adult and have lost a lot of sensitivity in those ranges), so if there is a big boost only up there in the final impulse, maybe EQ that area down. Whatever amplitude you save with that, you can apply as global digital gain without clipping.
If the peak is an abnormal boost only in the low end, a high-pass filter (or a more subtle EQ) might help do the same, and hopefully it will still sound nice. In some cases it could even improve the sound, if the headphone distorts too much when trying to produce loud sub-bass it was never able to make in the first place.

If you clip in the low frequencies through some digital settings error, it might not always sound wrong (up to a certain amount of clipping!) because, specifically for sub-bass, there's a lot we don't actually hear on headphones, and there's also a psychoacoustic phenomenon where our brain fills in the missing top of the slow sine by itself. We feel like we're hearing the entire sine without the clip. I don't remember at what percentage it just starts to sound horrible, but it's something abused in the loudness war and absurdly overused in too many rap albums.

For those struggling too much with loudness, you should consider a more sensitive headphone (if it has a smoother FR and better extension, that probably wouldn't hurt), or get an amp with higher gain (doubling the voltage gain gives +6 dB).
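The "EQ the abnormal band down, then reclaim the saved amplitude as global gain" idea can be sketched like this. A crude one-pole low-pass stands in for a real shelving EQ, and the function and parameter names are made up for the example:

```python
def tame_highs_and_reclaim_gain(ir, alpha=0.5, hf_keep=0.25):
    """Shelve down the high band of an impulse response (alpha sets the
    crude one-pole cutoff), keeping only hf_keep of the removed HF
    content, then re-apply the freed-up peak headroom as global gain.
    A sketch of the idea above, not any particular tool's EQ."""
    lowpassed, state = [], 0.0
    for x in ir:
        state += alpha * (x - state)      # one-pole low-pass
        lowpassed.append(state)
    # original = low band + high band; attenuate only the high band
    shelved = [lo + hf_keep * (x - lo) for x, lo in zip(ir, lowpassed)]
    peak_before = max(abs(x) for x in ir)
    peak_after = max(abs(x) for x in shelved)
    # Whatever peak amplitude the shelf saved becomes makeup gain.
    return [x * peak_before / peak_after for x in shelved]

# The HF-heavy spike drops, the makeup gain restores the old peak,
# so everything below the shelf ends up louder for free.
ir = [0.1, 1.0, -0.9, 0.4, 0.1, 0.0]
out = tame_highs_and_reclaim_gain(ir)
print(round(max(abs(x) for x in out), 3))  # 1.0
```

A proper implementation would use a parametric shelf at a chosen corner frequency, but the bookkeeping (attenuate the band, measure the new peak, make up the difference globally) is the same.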


----------



## reter

Actually, the only thing I'm struggling with at the moment is the reverb. I tried the decay value, which I gather shouldn't go under 200. I was 1.80 m from the speaker when I recorded, which put me almost against one wall with the speaker almost against the opposite wall. I think I should try recording with the speaker closer, I don't know.


----------



## morgin

I had a lot of reverb with my speaker in one corner of the room and me in the opposite corner, trying to make the distance as large as possible. I found a significant decrease when I moved the speaker and sat in the middle of the room at 5-6 ft distance.


----------



## musicreo

castleofargh said:


> Digital boost wherever it is, will give the same clipping for over 0dB signal before going to the DAC. Even floating points can't save us.


I think there is a difference. When I boost the impulse response over 0 dB, I cut off some of its information. That can't be undone. But if I boost in EQ-APO, I can always change the amplification depending on the input to prevent clipping.
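A toy example of that difference (illustrative numbers only): a boost baked into a stored fixed-point file destroys waveform information, while the same gain applied at playback stays reversible:

```python
# One sample peaks 6 dB over full scale.
ir = [0.5, 2.0, -1.5, 0.25]

# Storing as a fixed-point WAV hard-limits the samples...
baked = [max(-1.0, min(1.0, x)) for x in ir]
# ...and attenuating afterwards cannot restore the lost waveform.
restored = [x * 0.5 for x in baked]     # [0.25, 0.5, -0.5, 0.125]

# Applying the same gain at playback time keeps the signal intact:
runtime = [x * 0.5 for x in ir]         # [0.25, 1.0, -0.75, 0.125]
print(restored == runtime)              # False: information was lost
```

This is why a runtime preamp in EQ-APO is safer than baking a positive target level into the saved impulse responses.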


----------



## reter (Apr 9, 2022)

morgin said:


> I had a lot of reverb with my speaker in one corner of the room and me in the opposite corner, trying to make the distance as large as possible. I found a significant decrease when I moved the speaker and sat in the middle of the room at 5-6 ft distance.


Sadly, I don't have 6 ft of distance from the center. I should try reducing the speaker's actual distance. Also, what should the headphone graph look like? The two tests have very different graphs and I don't know which case is better. @musicreo can you tell me how to read the graph? To me, test 1 seems better than test 2; am I right?


----------



## musicreo

reter said:


> Sadly, I don't have 6 ft of distance from the center. I should try reducing the speaker's actual distance. Also, what should the headphone graph look like? The two tests have very different graphs and I don't know which case is better. @musicreo can you tell me how to read the graph? To me, test 1 seems better than test 2; am I right?


To me both graphs look similar. There is no obvious problem with them; everything looks OK. The difference between the channels is also not bad. If you're wondering why EQ-APO shows you a different graph, that's because its graph is the sum of the left or right channels. Which speakers do you use? They seem to have a lot of energy in the low frequencies, as the 35-60 Hz range is boosted a lot.


----------



## reter

Also, I don't quite like what I'm hearing.


musicreo said:


> To me both graphs look similar. There is no obvious problem with them; everything looks OK. The difference between the channels is also not bad. If you're wondering why EQ-APO shows you a different graph, that's because its graph is the sum of the left or right channels. Which speakers do you use? They seem to have a lot of energy in the low frequencies, as the 35-60 Hz range is boosted a lot.



Yeah, it is indeed boosted a lot and I don't know why; I have the JBL 308P. Also, there are a lot of spikes going up and down. I looked at your graph and it looks pretty awesome, without any noticeable spikes... is that the reverb? I should try something else to correct this problem; the decay alone can't help much.

My graph is the upper one, compared to yours.


----------



## reter

@morgin I read that a while ago you asked how to make the stereo audio "larger". I too found that 7.1, when playing stereo content, is too centered, and even the HeSuVi front slider doesn't help much... this is why I often use 5.1 instead of 7.1, with the FL and FR settings made more "large".
This is the only way I found to enjoy stereo content without that tunnel-like sound coming only from the front speakers.

If there were a way to set the volume of each channel when upmixing stereo content, that would be great; I'd like the sound to come from all the channels, but mostly from the front...


----------



## lowdown

reter said:


> @morgin I read that a while ago you asked how to make the stereo audio "larger". I too found that 7.1, when playing stereo content, is too centered, and even the HeSuVi front slider doesn't help much... this is why I often use 5.1 instead of 7.1, with the FL and FR settings made more "large".
> 
> This is the only way I found to enjoy stereo content without that tunnel-like sound coming only from the front speakers.
> 
> If there were a way to set the volume of each channel when upmixing stereo content, that would be great; I'd like the sound to come from all the channels, but mostly from the front...


I'd like to help, but I don't have the acoustic science knowledge or the technical grasp of Impulcifer or HeSuVi/EQ-APO to suggest specific fixes, so I can only describe what I did and what I hear.

I only did stereo measurements of my speakers with the binaural mics in my ears, and room measurements with a UMIK-1 positioned at the location of each ear. My speakers were about 7 ft in front of me and about 7 ft apart, the traditional triangle, and I only use the "Stereo" settings in HeSuVi. But the imaging and soundstage I hear with Impulcifer is what's in the recordings. Some literally fill half the room in front of me, nearly 180 degrees wide; others are just the individual instruments as they would be realistically positioned. Width, height and depth vary greatly, depending on the recording.

I understand that stereo recordings can be played back using a multi-channel simulation, which may be what you're after to expand the soundstage wider than what's in the recording. But if you want the imaging to be more true to what's in the recordings, I think the answer is in how the measurements are done and the positioning of the speakers. I'm sure others can offer more precise technical options, but I just wanted to offer my experience.


----------



## morgin (Apr 9, 2022)

reter said:


> @morgin I read that a while ago you asked how to make the stereo audio "larger". I too found that 7.1, when playing stereo content, is too centered, and even the HeSuVi front slider doesn't help much... this is why I often use 5.1 instead of 7.1, with the FL and FR settings made more "large".
> 
> 
> 
> ...


To be honest, I tried those sliders and to my ears I couldn't hear the difference. But the recording I have now makes it feel like surround all around the walls of my room. Even the rear speakers feel some distance away. As @lowdown said, it's all down to the material you're listening to. Some things sound far and wide, some closer, especially in movies... 1917 and Avatar sound amazing, with distinct sounds in many directions; other movies sound closer, like the Dolby preset in HeSuVi.


----------



## jaakkopasanen

The --target_level option was specifically added to be able to have over 0 dB gain. That of course risks clipping, but it's unlikely with 7-channel BRIRs because the level shown is the sum of all channels for the left or right ear, unless of course you go way above 0 dB. I personally have the target level at 10 dB and have never heard clipping. 10 dB should not make the impulse responses clip in 7-channel BRIRs, because you have 4+ channels summed, which is already >12 dB of gain from the summing itself.
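To sketch the arithmetic behind that ">12 dB from summing" figure: if n equally loud channels happened to sum perfectly in phase, the worst-case peak gain would be 20·log10(n). This is a rough illustration of the bound, not how Impulcifer actually computes levels:

```python
import math

def coherent_sum_gain_db(n_channels):
    # Worst-case peak gain when n equally loud signals add exactly in phase
    return 20.0 * math.log10(n_channels)

# 4 channels summed in phase: about 12 dB; 7 channels: about 16.9 dB
four = coherent_sum_gain_db(4)
seven = coherent_sum_gain_db(7)
```

In practice BRIR channels are far from fully correlated, so the real summed peak sits somewhere below this bound.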

A more flexible method is of course to do this in Equalizer APO with a preamp filter, but that means you can't rely only on HeSuVi.


----------



## jaakkopasanen

reter said:


> Actually, the only thing I'm struggling with at the moment is the reverb. I tried the decay value; I think it won't go under 200. I was 1.80 m from the speaker when I recorded, which puts me almost against one wall and the speaker almost against the other. I think I should try to record with the speaker closer, I don't know


Definitely try sitting closer to the speaker. The further away you are, the higher the reverb level will be relative to the direct sound. This is different from decay time: it cannot be controlled in Impulcifer, but it is easy to control with your distance to the speaker.
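The distance effect can be sketched with the direct-to-reverberant ratio (DRR): the direct sound falls about 6 dB per doubling of distance while the diffuse reverberant field stays roughly constant, so DRR ≈ 20·log10(d_c/d), where d_c is the room's critical distance. The 1 m critical distance below is a made-up example value:

```python
import math

def drr_db(distance_m, critical_distance_m):
    # Direct-to-reverberant ratio: 0 dB at the critical distance,
    # dropping 6 dB for every doubling of listener distance
    return 20.0 * math.log10(critical_distance_m / distance_m)

# Halving the distance from 1.8 m to 0.9 m buys about 6 dB more direct sound
improvement = drr_db(0.9, 1.0) - drr_db(1.8, 1.0)
```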


----------



## musicreo

reter said:


> Yeah, it is indeed boosted a lot and I don't know why; I have the JBL 308P... also there are a lot of spikes that go up and down. I looked at your graph and it looks pretty awesome, without any noticeable spikes... is that the reverb?


My graph is with room correction up to 750 Hz, so you can't compare it with your graph below that frequency. The spikes in this frequency range are room modes and will vary a lot depending on your position in the room.
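As a rough illustration of where room modes sit, the rectangular-room (Rayleigh) formula estimates mode frequencies from the room dimensions; the dimensions used here are made-up examples:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def mode_freq_hz(lx, ly, lz, nx, ny, nz):
    # Rectangular-room mode frequency for mode indices (nx, ny, nz)
    # and room dimensions (lx, ly, lz) in meters
    return (SPEED_OF_SOUND / 2.0) * math.sqrt(
        (nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2
    )

# First axial mode along a 4 m dimension: about 43 Hz
f_axial = mode_freq_hz(4.0, 3.0, 2.5, 1, 0, 0)
```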


reter said:


> i should try something else to correct this problem, the decay alone can't help much


As jaakkopasanen suggested, decrease the distance to your speaker. When you sit close to a wall, you get reflections from that wall that are hard to separate from the direct sound wave.


----------



## morgin

I need help converting .sofa files into .wav to use with HeSuVi. I have tried with another program I found, but it's not working.
Or perhaps someone can tell me how to use .sofa files directly in HeSuVi.

I have my first results from the mesh2hrtf without any errors

https://mega.nz/folder/UQky2Yja#PMqQDyVAOGskS4H1zFe5jw


----------



## jaakkopasanen

jaakkopasanen said:


> The --target_level option was specifically added to be able to have over 0 dB gain. That of course risks clipping, but it's unlikely with 7-channel BRIRs because the level shown is the sum of all channels for the left or right ear, unless of course you go way above 0 dB. I personally have the target level at 10 dB and have never heard clipping. 10 dB should not make the impulse responses clip in 7-channel BRIRs, because you have 4+ channels summed, which is already >12 dB of gain from the summing itself.
> 
> A more flexible method is of course to do this in Equalizer APO with a preamp filter, but that means you can't rely only on HeSuVi.


I remembered wrong. The target level doesn't set the peak but the average level, and therefore negative values should be used. The purpose of this parameter is to ensure different measurements are equally loud for a fairer comparison.
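To illustrate what "average level" means here: normalizing to a target average level amounts to measuring the RMS of the signal in dB and applying the difference as gain. A minimal sketch (not Impulcifer's actual code; the test tone and target are made up):

```python
import math

def rms_db(samples):
    # Average (RMS) level of a signal in dB relative to full scale
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms)

def gain_for_target(samples, target_db):
    # Gain in dB that brings the RMS level to target_db
    return target_db - rms_db(samples)

# A full-scale sine has an RMS level of about -3 dB,
# so hitting a -10 dB target needs roughly -7 dB of gain
tone = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
gain = gain_for_target(tone, -10.0)
```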


----------






## reter

Test 3 done. I moved my speaker close to the corner so I could have more space and sit in the center. The results are good compared to the previous test with me close to the right wall: now I don't hear the sound coming mostly from the left like in the test 2 HRIR. But it's less convincing in terms of real-room listening. Is this due to the speaker being in the corner?


----------



## lowdown

reter said:


> Test 3 done. I moved my speaker close to the corner so I could have more space and sit in the center. The results are good compared to the previous test with me close to the right wall: now I don't hear the sound coming mostly from the left like in the test 2 HRIR. But it's less convincing in terms of real-room listening. Is this due to the speaker being in the corner?


Can only suggest freely experimenting.  Did you try using some of the balance command line options to help with that "sound being mostly from the left"?  I've seen some posters say they got the measurements so close between L and R that they didn't need balance options, but I tried many combinations and got my balance perfected with command options rather than more measurement or speaker adjustments.  There are obviously many variables and I think it just takes trial and error to get great results.


----------



## Xam198

morgin said:


> To be honest, I tried those sliders and to my ears I couldn't hear the difference. But the recording I have now makes the sound feel like it surrounds the walls of my room. Even the rear speakers feel a distance away. As @lowdown said, it's all down to the stuff that you're listening to. Some things sound far and wide, some closer, especially in movies... 1917 and Avatar sound amazing with distinct sounds in many directions; other movies sound closer, like the Dolby preset in HeSuVi.


Morgin, how do you play the films: VLC, MPC-HC...? Are they MKV files in 7.1? What is the audio configuration of the player (is it outputting 7.1 channels?), of the Windows playback device, and of HeSuVi when you play a film?
Thanks!


----------



## morgin

Xam198 said:


> Morgin, how do you play the films: VLC, MPC-HC...? Are they MKV files in 7.1? What is the audio configuration of the player (is it outputting 7.1 channels?), of the Windows playback device, and of HeSuVi when you play a film?
> Thanks!


I use MPC-HC; the movies are 7.1/5.1.





Sorry about the quality, these are old pics, but use this for MPC-HC. Then, in the mixing tab, select 7.1 and tick the boxes as shown here.



I know the pics are getting worse, but you can see what's selected.
This is Windows:



Make sure the listen tab is set correctly:



HeSuVi:


----------



## morgin

Btw, just to let everyone new know: even though I'm trying to help, I know far less than everyone else on here, so I might be giving out wrong info. But my settings are working great for me so far.


----------



## Xam198

Many, many thanks, that's exactly what I need. It's very precise and detailed, and since it's working for you...


----------



## reter (Apr 10, 2022)

Stupid question: my speaker is facing the window. What if I do the measurements with the window open? Would that help with the reverberation? Heck, I would even go out on the terrace to do the measurements.


----------



## sander99

reter said:


> Stupid question: my speaker is facing the window. What if I do the measurements with the window open? Would that help with the reverberation? Heck, I would even go out on the terrace to do the measurements.


Only if there is no noise outside: no traffic, no birds, no wind, no airplanes.
What about some improvised room treatment? Hang up blankets, put a mattress somewhere, ...


----------



## Xam198

I must be doing something wrong, because the sound stays around my head.


----------



## morgin

sander99 said:


> Only if there is no noise outside: no traffic, no birds, no wind, no airplanes.
> What about some improvised room treatment? Hang up blankets, put a mattress somewhere, ...


You can string a rope around your room, then hang duvets, pillows, towels, even clothes from the wardrobe on it to dampen the reverb.


----------



## morgin

Is there any way I can check and use .sofa files? Because otherwise I have spent hours upon hours and have a .sofa file I cannot use. 😤😡


----------



## morgin

Xam198 said:


> I must be doing something wrong, because the sound stays around my head.


Does the sound change when you select different hrir’s in hesuvi?


----------



## reter (Apr 10, 2022)

sander99 said:


> Only if there is no noise outside: no traffic, no birds, no wind, no airplanes.
> What about some improvised room treatment? Hang up blankets, put a mattress somewhere, ...



Wow, I didn't think about putting clothes around the room, but I wonder if I have enough of them. Do acoustic panels work too? I heard they absorb certain frequencies but let the bass frequencies through...



morgin said:


> You can string a rope around your room, then hang duvets, pillows, towels, even clothes from the wardrobe on it to dampen the reverb.


How can I fix a rope to the wall without making holes?


----------



## Xam198

Yes, but when I choose mine, the level is very low and the spatialization is very light; the Atmos HRIR gives the best results.


----------



## morgin

Xam198 said:


> Yes, but when I choose mine, the level is very low and the spatialization is very light; the Atmos HRIR gives the best results.


Do your graphs look similar to what @reter recently posted?


----------



## Xam198

I will post them tomorrow; they are less regular, but they seem OK to my eyes.


----------



## morgin

@musicreo the -2 target level and increasing the preamp in EQ-APO have significantly benefited my HRIR. The rear sounds are much clearer, and the overall clarity and positioning of sounds are much better.


----------



## Xam198

Here are my graphs


----------



## jaakkopasanen

Xam198 said:


> Here are my graphs


Definitely something wrong if you get that zigzag curve. IIRC someone had something similar earlier in the thread, but I don't remember the solution off the top of my head.


----------



## Xam198 (Apr 11, 2022)

When you talk about zigzag, can you specify which ones? The very small ones oscillating around the curves?


----------



## reter (Apr 11, 2022)

jaakkopasanen said:


> Definitely something wrong if you get that zigzag curve. IIRC someone had something similar earlier in the thread, but I don't remember the solution off the top of my head.


@jaakkopasanen what about mine? I got 2 dB of headroom. Why does the graph drop so low and zigzag in the high frequencies?






@Xam198 download Audacity and look at your headphones.wav. Also, if you can, try lowering the headphone volume and raising the mics, or vice versa; try multiple times until you get a good sinusoid.

The sinusoid should be pretty much perfect. This is my test 3:


----------



## castleofargh

Xam198 said:


> When you talk about zigzag, can you specify which ones? The very small ones oscillating around the curves?


The spring-like shape from 500 Hz to 5000 Hz. Chaos is fine, but this isn't chaos, and I don't know what would cause it.


----------



## Xam198

castleofargh said:


> The spring-like shape from 500 Hz to 5000 Hz. Chaos is fine, but this isn't chaos, and I don't know what would cause it.


Perhaps it's the hum, like a ground loop, that I can hear: it's very slight, but if I increase the volume I can hear it.


----------



## Xam198

I'm getting lost. I now have a strange sound at the end of the sinusoid while recording, sort of a "qssssqsqs". But the file sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz isn't corrupted.


----------



## morgin

@Xam198 I think I had what looks like the same problem. Here are my comments and fix; I hope it works.

Do you have the volume and gain set loud when doing the measurements?


----------



## jaakkopasanen

reter said:


> @jaakkopasanen what about mine? I got 2 dB of headroom. Why does the graph drop so low and zigzag in the high frequencies?
> 
> 
> 
> ...


That stuff above 10 kHz is more or less expected.


----------



## morgin

This is a long shot but worth asking. I understand if I don't get a reply, and I'm not really expecting one as this isn't a medical forum.

But my mum has sudden hearing loss. She doesn't listen to loud TV or headphones and doesn't work near loud equipment. There is tinnitus too, which I believe is a result of the hearing loss. Do you guys know of anything to slow, prevent or recover the hearing?


----------



## Xam198

@morgin, I use high volume on both headphones and speakers so I can use a lower input level on the Zoom H5 for a better S/N ratio.
I've tried with another computer, and I no longer have (right now) the "qsssqsssqsss" at the end of the sinusoid recording.
Here is the recorded sinusoid for the headphones:


----------



## morgin

Xam198 said:


> @morgin, I use high volume on both headphones and speakers so I can use a lower input level on the Zoom H5 for a better S/N ratio.
> I've tried with another computer, and I no longer have (right now) the "qsssqsssqsss" at the end of the sinusoid recording.
> Here is the recorded sinusoid for the headphones:


The volume made a big difference, not only to the loudness but to the quality. I don't know if what you're using is similar to the Behringer I use, but just try a recording based on how I fixed my issue. The volume can be fixed in Equalizer APO afterwards.

I had the volume loud enough for the start of the sine sweep to be picked up by the mics in my ears, but not so sensitive that they picked up other sounds from outside (the Behringer shows lights when sound is being picked up), using the gain to change the sensitivity of each mic. And make sure there's no clipping too.

For everything (audio interface, speaker and Windows) my volume was set at around 60%.

It seemed like the louder I listened, the more the mics would pick up and drown out the background noise. But that was wrong: too loud causes the spiky graph and an unnatural sound.
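Since clipping during the sweep recording ruins the measurement, it can also help to scan the recorded samples programmatically rather than trusting the interface's lights alone. A small sketch, assuming samples normalized to ±1.0 (the threshold value is an arbitrary choice):

```python
def clipped_fraction(samples, threshold=0.999):
    # Fraction of samples at or beyond the clipping threshold,
    # assuming the recording is normalized so full scale = 1.0
    hits = sum(1 for s in samples if abs(s) >= threshold)
    return hits / len(samples)

# Two of these four samples sit at full scale, i.e. likely clipped
frac = clipped_fraction([0.5, 1.0, -1.0, 0.2])
```

Anything above a fraction of zero on a sweep recording is worth re-recording at a lower gain.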


----------



## lowdown

morgin said:


> This is a long shot but worth asking. I understand if I don’t get a reply for this and not really expecting one as it’s not a medical forum
> 
> But my mum has sudden hearing loss. She doesn’t listen to loud tv or headphones and doesn’t work near loud equipment. There’s is tinnitus too I believe that’s the result of the hearing loss. Do you guys know of anything to slow/prevent or recover the hearing?


Google "causes of sudden hearing loss".  There's a wide range of possible causes, some reversible, some not.  She should definitely see a doctor.  Good luck.


----------



## Xam198 (Apr 11, 2022)

I will try with lower volume and a higher recording level, keeping good headroom. I'm using a Zoom H5; I hope my problems don't come from that. I will perhaps buy a UMC202HD.

But one thing that worries me is that hum, the typical sound of a ground loop: if a mic touches something (my finger, the earplug) I hear this hum; I monitor it directly on the Zoom H5. Then if I touch the VXLR, the hum disappears. One mic does this, the other less so. Has anybody experienced this? The hum is not huge, but I can distinguish it from noise, so if I raise the H5 input level to compensate for a lower volume, this hum will become more prominent. My mics are Primo EM258s. I've tested plugging the mics into a little mixing desk; same thing.


----------



## reter

I bought some plaid blankets; hopefully they can help attenuate the reverb.

They seem pretty thick.


----------



## morgin

Xam198 said:


> I will try with lower volume and a higher recording level, keeping good headroom. I'm using a Zoom H5; I hope my problems don't come from that. I will perhaps buy a UMC202HD.
> 
> But one thing that worries me is that hum, the typical sound of a ground loop: if a mic touches something (my finger, the earplug) I hear this hum; I monitor it directly on the Zoom H5. Then if I touch the VXLR, the hum disappears. One mic does this, the other less so. Has anybody experienced this? The hum is not huge, but I can distinguish it from noise, so if I raise the H5 input level to compensate for a lower volume, this hum will become more prominent. My mics are Primo EM258s. I've tested plugging the mics into a little mixing desk; same thing.


Isn’t that something to go with grounding?


----------



## musicreo (Apr 12, 2022)

morgin said:


> Is there any way I can check and use .sofa files? Because otherwise I have spent hours upon hours and have a .sofa file I cannot use. 😤😡



You'll find the files in my edited post in the Mesh2hrtf thread. It was just a stupid error on my side yesterday.


----------



## Xam198

morgin said:


> Isn’t that something to go with grounding?


Let's say it's the typical sound of an electrical grounding problem, but it could be anything: the computer, the Zoom H5... I suspect the mics, though.


----------



## reter

Xam198 said:


> Let's say it's the typical sound of an electrical grounding problem, but it could be anything: the computer, the Zoom H5... I suspect the mics, though.


The mics have a jack connection; you can easily try them with another device.


----------



## conql

Xam198 said:


> But one thing that worries me is that hum, the typical sound of a ground loop: if a mic touches something (my finger, the earplug) I hear this hum; I monitor it directly on the Zoom H5. Then if I touch the VXLR, the hum disappears. One mic does this, the other less so. Has anybody experienced this? The hum is not huge, but I can distinguish it from noise, so if I raise the H5 input level to compensate for a lower volume, this hum will become more prominent. My mics are Primo EM258s. I've tested plugging the mics into a little mixing desk; same thing.


I encountered the problem too. I solved it by buying new audio jacks and soldering the mics again. Not sure what the exact reason was...


----------



## reter

@musicreo I'm stuck with the headphone measurements: if I aim for low headroom, at 3 dB of headroom I get a weird, irregular sinusoid.

This is my latest headphone measurement compared to my old measurement with 10 dB of headroom.




Should I still aim for low headroom, or should I stick with the best sinusoid I can get?


----------



## Xam198 (Apr 12, 2022)

@reter & @conql
I think this problem sometimes occurs when you use a stereo jack with a mono device. I'm not saying this is the problem here, but it could be. Perhaps I will try soldering a mono jack, or not soldering one of the stereo contacts on the jack.


----------



## morgin

reter said:


> @musicreo I'm stuck with the headphone measurements: if I aim for low headroom, at 3 dB of headroom I get a weird, irregular sinusoid.
> 
> This is my latest headphone measurement compared to my old measurement with 10 dB of headroom.
> 
> ...


I was getting something similar, and like @Xam198 said, slightly different: I was using a stereo device with a mono cable.


----------



## Xam198

That's definitely a grounding problem: when I unplug the Zoom H5, the hum decreases, and if I touch the VXLR with my hand, no more hum. I will try changing the jack.


----------



## reter (Apr 12, 2022)

morgin said:


> I was getting something similar, and like @Xam198 said, slightly different: I was using a stereo device with a mono cable.



I don't really understand if you were answering me or him. What did you do to fix it? Also, what's your headroom?


----------



## morgin

reter said:


> I don't really understand if you were answering me or him. What did you do to fix it? Also, what's your headroom?


I was saying his suggestion, that using a mono cable with a stereo device (or a stereo cable with a mono device) could be the cause, might be right, because I was getting the same jittery peaks and I used a mono cable with a stereo device.


----------



## reter (Apr 12, 2022)

morgin said:


> I was saying his suggestion, that using a mono cable with a stereo device (or a stereo cable with a mono device) could be the cause, might be right, because I was getting the same jittery peaks and I used a mono cable with a stereo device.


I checked: I plugged all the devices into the Behringer, so I think it's some distortion created by the volume being too high? But lowering the volume raises the headroom, and raising the mics does nothing... so I'm in a loop. Should I keep my headroom high to maintain a good sinusoid, while still trying to get the lowest headroom possible?


----------



## jaakkopasanen

@reter Don't worry about the headroom


----------



## reter

jaakkopasanen said:


> @reter Don't worry about the headroom


So I should go for the best sinusoid I can get, right? OK, thanks @jaakkopasanen.


----------



## morgin

@jaakkopasanen can we extract the headphone compensation info from Impulcifer, like the text files we use in HeSuVi?

I'm torn between Impulcifer and mesh2hrtf; both have their advantages. Impulcifer gives me the spatial sound, as if the speakers were around me 6 ft away, and mesh2hrtf gives me crystal clear sound, as if I'd done the measurement in an anechoic chamber. I want to try and merge the two so I get a perfect HRIR.


----------



## reter

morgin said:


> @jaakkopasanen can we extract the headphone compensation info from Impulcifer, like the text files we use in HeSuVi?
> 
> I'm torn between Impulcifer and mesh2hrtf; both have their advantages. Impulcifer gives me the spatial sound, as if the speakers were around me 6 ft away, and mesh2hrtf gives me crystal clear sound, as if I'd done the measurement in an anechoic chamber. I want to try and merge the two so I get a perfect HRIR.



Now that you've tried both, which do you prefer, and how much spatiality does mesh2hrtf have in comparison? I prefer the spatiality, and I think Impulcifer already gives a clear sound. The disadvantage of course is that it's tied to the room you do the measurements in, so you will always have more or less reverb... I think the best way to get the most out of Impulcifer is measuring outside, without walls and without noise.


----------



## lowdown

reter said:


> Now that you've tried both, which do you prefer, and how much spatiality does mesh2hrtf have in comparison? I prefer the spatiality, and I think Impulcifer already gives a clear sound. The disadvantage of course is that it's tied to the room you do the measurements in, so you will always have more or less reverb... I think the best way to get the most out of Impulcifer is measuring outside, without walls and without noise.


Did you do the room correction measurements?


----------



## conql (Apr 13, 2022)

morgin said:


> @jaakkopasanen can we extract the headphone compensation info from Impulcifer, like the text files we use in HeSuVi?
> 
> I'm torn between Impulcifer and mesh2hrtf; both have their advantages. Impulcifer gives me the spatial sound, as if the speakers were around me 6 ft away, and mesh2hrtf gives me crystal clear sound, as if I'd done the measurement in an anechoic chamber. I want to try and merge the two so I get a perfect HRIR.


This is the script I use to generate standalone headphone compensation. Modified from impulcifer, so it should sound the same.

https://1drv.ms/u/s!AqwTOUFQXDBFlHNqm_iBjs1V5NJ_?e=H2flUl

Usage:
1. Copy the file to the root directory of impulcifer where impulcifer.py exists.
2. Edit the dir_path to the folder that stores headphones.wav





3. Run python hpeq.py

It will generate an Equalizer APO-supported config called geq.txt in dir_path, which contains equalization for every channel of the recording.

Edit:

Make sure you're in the virtual environment beforehand, just like when running impulcifer.py:

```shell
# On Windows
venv\Scripts\activate
# On Mac and Linux
. venv/bin/activate
```
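For reference, a geq.txt for Equalizer APO normally uses the GraphicEQ syntax: a single line of frequency/gain pairs separated by semicolons. A minimal sketch of emitting such a line (this is not the actual hpeq.py code, and the frequencies and gains below are made-up values):

```python
def graphic_eq_line(points):
    # points: iterable of (frequency_hz, gain_db) pairs,
    # rendered in Equalizer APO's GraphicEQ filter syntax
    body = "; ".join(f"{f} {g:.1f}" for f, g in points)
    return "GraphicEQ: " + body

# Written to geq.txt and included from an Equalizer APO config
line = graphic_eq_line([(20, -2.0), (1000, 0.0), (20000, -4.5)])
```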


----------



## morgin (Apr 13, 2022)

I use my HRIR just for movies and gaming.

The reverb from Impulcifer is not an issue for me; I kinda prefer it over the clean mesh2hrtf. Impulcifer gives the better immersion and makes movies feel grand, better than a cinema. The clarity of my best measurements is around 80% now that I've sampled mesh2hrtf.

Mesh2hrtf just has those extra details that you can hear clearly. And you can choose the mic placement, and there is no room for error in each measurement, because it's virtual and gives consistent results.

I'm torn between the two because each has its own benefits. After using mesh2hrtf for a couple of days I still come back to Impulcifer and want to use it.

Spatially, mesh2hrtf is just like the Dolby Atmos preset in HeSuVi. If I knew how to set up virtual speakers in mesh2hrtf I think it would work really well.


----------



## sander99

reter said:


> I think the best way to get the most out of Impulcifer is measuring outside, without walls and without noise


I don't think so. That way you get an effect similar to measuring in an anechoic room. Some reverb from the room is needed: most music is produced with playback over speakers in a room in mind, and that's how they monitor in the studio. The only problem is that not all rooms are equally suited. To get the best out of Impulcifer you should record your HRIRs in a good room, like a studio monitoring room.


morgin said:


> I want to try and merge the two so I’m getting a perfect hrir.


The headphone compensation is needed to take out the headphone's own sound signature; I don't expect it will (dramatically) change the feeling of distance.
Ideally, an HRIR based on a pure HRTF should be enhanced by somehow adding a virtual room with some reverb.


----------



## morgin

sander99 said:


> I don't think so. That way you get a similar effect as measuring in an anechoic room. Some reverb from the room is needed. Most music is produced with playback over speakers in a room in mind. And they monitor that way in the studio. The only problem is that not all rooms are equally suited. To get the best with Impulcifer you should record your HRIRs in a good room, like a studio monitor room.
> 
> The headphone compensation is needed to take out the headphone's own sound signature. I don't expect it will (dramatically) change the feeling of distance.
> Ideally a HRIR based on pure hrtf should be enhanced by somehow adding a virtual room with some reverb.


I used Ambient reverb and it kinda does work. I guess I want the sound coming from around me as a whole, but with the details coming from the TV and rear speakers as distinct sources at the same time, if that makes sense. Only Impulcifer gives me this.


----------



## morgin (Apr 13, 2022)

@jaakkopasanen and @musicreo (for making Impulcifer and for the help given): if you can get an iPhone 10 with the app Heges (£7) and make a scan of your head (it doesn't have to be too detailed) and detailed scans of your ears, saving them as .stl in the app, I'll be willing to do all the Blender work for you so you can try it. It's the least I can do.

My first try was with semi-good scans and OK mic placement, but I just want you to try it out. It generates a .sofa file, and @musicreo has a script to convert it to hesuvi.wav.

I'm not the best with Meshmixer and Blender, but I've gotten the hang of it and can do a decent merge so you can sample it.


----------



## morgin

conql said:


> This is the script I use to generate standalone headphone compensation. Modified from impulcifer, so it should sound the same.
> 
> https://1drv.ms/u/s!AqwTOUFQXDBFlHNqm_iBjs1V5NJ_?e=H2flUl
> 
> ...


Thank you, buddy.


----------



## morgin (Apr 13, 2022)

conql said:


> This is the script I use to generate standalone headphone compensation. Modified from impulcifer, so it should sound the same.
> 
> https://1drv.ms/u/s!AqwTOUFQXDBFlHNqm_iBjs1V5NJ_?e=H2flUl
> 
> ...


I'm still not 100% with Python; when I double-click it, it always flashes and does nothing. I've tried cmd in the address bar to get to the location and typing:
"hpeq.py"
"python hpeq.py"
"python3 hpeq.py"

and

"C:\Windows\System32\Impulcifer>pyhton hpeq.py"


----------



## Iohfcasa

morgin said:


> Spatially mesh2hrtf is just like the Dolby atmos on hesuvi. If I knew how to set virtual speakers in mesh2hrtf I think it would work really well.


There are some "binauraliser" DAW plugins for that purpose, like the SPARTA suite shown in his video.
Have you already checked the free Anaglyph?
Speaker arrangement simulation is one thing, but room simulation is another; we have to find a suitable program for adding reverb.


----------



## conql

morgin said:


> I'm still not 100% with python when I double click it always flashes and does nothing. I've tried cmd in the address bar to get to the location and typing the
> "hpeq.py"
> "python hpeq.py"
> "python3 hpeq.py"
> ...


sorry I didn't make it clear. Perhaps you didn't activate the virtual environment. 



You should activate the virtual environment beforehand, just like running the impulcifer.py. Make sure the dir_path is correct. And it should generate fig.png and geq.txt in that folder.


----------



## morgin

Iohfcasa said:


> There are some "binauraliser" DAW plugins for that purpose, like the SPARTA suite shown in his video.
> Have you already checked the free Anaglyph?
> Speaker arrangement simulation is one thing, but room simulation is another; we have to find a suitable program for adding reverb.



I have tried the SPARTA plugin with EQ-APO, and Anaglyph, but I really don't know what I'm doing and they seem to crash when I load my .sofa file. What DAW would you recommend, and do I use those alongside HeSuVi and EQ-APO?


----------



## morgin

conql said:


> sorry I didn't make it clear. Perhaps you didn't activate the virtual environment.
> 
> You should activate the virtual environment beforehand, just like running the impulcifer.py. Make sure the dir_path is correct. And it should generate fig.png and geq.txt in that folder.


Got it now, thanks again. I appreciate the help.


----------



## reter (Apr 13, 2022)

lowdown said:


> Did you do the room correction measurements?


Not yet, I should buy the mic... does that help much?



OK guys, now it's weird. Remember that headroom problem I had while measuring the headphones? NOW I know why I was getting that damn strange noise in the sinusoid! IT'S WHEN I DON'T TOUCH THE SHELL!

When I touch the Behringer's shell with my hand, the sinusoid is PERFECT even with 1 dB of headroom; if I stay away from it, I get that weird disturbance. I tried multiple times.






I'm happy this means the mics are not broken, but I'm also wondering if this is related to something happening inside the audio interface.


----------



## Iohfcasa

morgin said:


> I have tried the SPARTA plugin with EQ-APO, and Anaglyph, but I really don't know what I'm doing and they seem to crash when I load my .sofa file. What DAW would you recommend, and do I use those alongside HeSuVi and EQ-APO?


I haven't used any DAW with Anaglyph for an individual sofa file, because I need to create my personal HRTF first.
Though SPARTA should work, because he used it for the sofa file in his video; perhaps he can give us further information.

A DAW is needed for a binauralizer plugin to load and apply the personal sofa file, which is the format for HRTFs containing a huge number of measurement points.
HeSuVi/EQ-APO, on the other hand, is needed to load your HRIR (plus headphone EQ separately), an impulse response generated in a specific room with a defined arrangement of speakers.

I guess many binauralizers allow you to choose a headphone EQ, because it's essential for the right timbre and authentic sound, so you can go without EQ-APO for sofa convolution.

The benefit of an HRTF/sofa file is its flexibility, allowing you to simulate different locations by adding reverb and changing the speaker arrangement.


----------



## lowdown

reter said:


> Not yet, I should buy the mic... does that help much?
> 
> 
> 
> ...


I mention the room measurements because I don't have an ideal room either.  I did those measurements, then used the room options in the command line, and the sound I get in Impulcifer is like being in an ideal recording studio, or in the venue where the recordings were done.  Obviously each of our rooms and speaker setups are different, and the room options can't make up for everything, but you may be able to get it much better.

If touching the Behringer is having that much effect it could well be a grounding issue as another poster has reported.


----------



## morgin

Iohfcasa said:


> I haven't used any daw with anaglyph for an individual sofa file, because I need to create my personal hrtf first.
> Thought, sparta should work, because he used it  for the sofa file in his video,  perhaps he can give us further information.
> 
> A daw is needed for a binauralizer plugin to load and apply the personal sofa file, which is the format for hrtfs containing a huge amount of measuring points.
> ...


SPARTA doesn't want to give me sound when applied in EQ APO. I guess I'll have to look for a good DAW that works with this plugin.


----------



## musicreo

Iohfcasa said:


> There are some "binauraliser" DAW plugins for that purpose, like the SPARTA shown in his video.
> Have you already checked the free "anaglyph"?
> Speaker arrangement simulation is one thing, but room simulation is another; we have to find a suitable program for adding reverb.




Anaglyph needs a special conversion for the SOFA files. They provide a MATLAB script, but that does not work properly with the mesh2hrtf SOFA files.
The SPARTA plugin shown in the video also crashed very often when I tested it, and the sound was very distorted with the mesh2hrtf SOFA file.


----------



## Xam198

reter said:


> Not yet, I should buy the mic... does that help much?
> 
> 
> 
> ...


What mics are you using? Primo EM258 ?


----------



## reter (Apr 13, 2022)

lowdown said:


> I mention the room measurements because I don't have an ideal room either.  I did those measurements, then used the room options in the command line, and the sound I get in Impulcifer is like being in an ideal recording studio, or in the venue where the recordings were done.  Obviously each of our rooms and speaker setups are different, and the room options can't make up for everything, but you may be able to get it much better.
> 
> If touching the Behringer is having that much effect it could well be a grounding issue as another poster has reported.


Yeah, the fact is that with that microphone I have to do another 14 measurements each session (with a mono speaker), and honestly I don't know where I should start: how to place the webcam, where to place the mic relative to the ear canal... a lot of stuff I need to think about.

The speaker placement matters a lot in my measurements though; placing it near a corner of my room or near a wall gives very different results.


@Xam198 yeah, I'm using the Primo. I'm still wondering if I made the right choice gluing the mics to the earplugs; maybe I should try some of the stuff @musicreo did so I can place the mics even deeper.


----------



## Xam198

I glued and unglued the mics: I still have the problem. I suspect that a mono mic plugged into a stereo jack could do that, so I will try to solder a mono jack.


----------



## lowdown

reter said:


> Yeah, the fact is that with that microphone I have to do another 14 measurements each session (with a mono speaker), and honestly I don't know where I should start: how to place the webcam, where to place the mic relative to the ear canal... a lot of stuff I need to think about.
> 
> The speaker placement matters a lot in my measurements though; placing it near a corner of my room or near a wall gives very different results.
> 
> ...


Yeah, that's a LOT.  I just did stereo measurements and command options until I got something really good, and ran out of steam to tackle even 5.1.   I think I totally got lucky on my UMIK-1 ear location measurements.  I didn't use a webcam, just a tripod to hold the mic, and got as close as I could to the L/R ear placements.  As far as in-ear mics go, I also glued foam ear plugs to the backs of the mics, which made positioning them securely and consistently so much easier.  I didn't try deeper in the ear canal, so I can't comment on how much difference that would make, but from my experience I don't think it's necessary.


----------



## reter (Apr 13, 2022)

lowdown said:


> Yeah, that's a LOT.  I just did stereo measurements and command options until I got something really good, and ran out of steam to tackle even 5.1.   I think I totally got lucky on my UMIK-1 ear location measurements.  I didn't use a webcam, just a tripod to hold the mic and got as close as I could to the L/R ear placements.  As far as in ear mics I also glued foam ear plugs to the backs of the mics, which made positioning them securely and consistently so much easier.  Didn't try deeper in the ear canal so can't comment on how much difference that would make, but from my experience I don't think it's necessary.


You convinced me.

Do I need to plug the UMIK into the Behringer, or can I just plug it into the computer via USB? When I do the measurements, where exactly should I put the mic? Close to the ear I'm measuring, or on the head? I still don't understand.

EDIT: I read that I can also do one single measurement, good if I want to quickly try what it does!


----------



## lowdown

reter said:


> You convinced me.
> 
> Do I need to plug the UMIK into the Behringer, or can I just plug it into the computer via USB? When I do the measurements, where exactly should I put the mic? Close to the ear I'm measuring, or on the head? I still don't understand.
> 
> EDIT: I read that I can also do one single measurement, good if I want to quickly try what it does!


The UMIK-1 is a USB mic so I'm pretty sure I plugged it directly into my laptop.  Been over 2 years ago so my memory is a bit fuzzy.  I'd always used it with the REW program plugged into the laptop so don't think it needs an interface.  I don't know about using the Behringer.

As far as positioning for the UMIK room measurement I used a tripod and placed it where my ears were located when I did the speaker measurements with the in ear mics. I wasn't sitting near the UMIK for those measurements.  Not sure if that's clear.  One set of measurements was with the in ear mics sitting listening to the speakers.  The second set was with the UMIK positioned at the L and then R position of where my ears were when I did the in ear speaker measurement.  At least that's my best recollection and it corresponds to the Wiki documentation.  Happy for someone to correct me if I mixed that up.


----------



## reter (Apr 13, 2022)

lowdown said:


> The UMIK-1 is a USB mic so I'm pretty sure I plugged it directly into my laptop.  Been over 2 years ago so my memory is a bit fuzzy.  I'd always used it with the REW program plugged into the laptop so don't think it needs an interface.  I don't know about using the Behringer.
> 
> As far as positioning for the UMIK room measurement I used a tripod and placed it where my ears were located when I did the speaker measurements with the in ear mics. I wasn't sitting near the UMIK for those measurements.  Not sure if that's clear.  One set of measurements was with the in ear mics sitting listening to the speakers.  The second set was with the UMIK positioned at the L and then R position of where my ears were when I did the in ear speaker measurement.  At least that's my best recollection and it corresponds to the Wiki documentation.  Happy for someone to correct me if I mixed that up.



Thanks again! So considering this image, I should point the mic upside down with the tip pointing to the ground, right?


----------



## lowdown (Apr 13, 2022)

reter said:


> Thanks again! So considering this image, I should point the mic upside down with the tip pointing to the ground, right?


I had the UMIK pointing straight up at the ceiling with the tip at ear height and as close as I could estimate to where my ears had been located during the in ear speaker measurement.  Using a webcam might allow more accuracy, but I just sat in the same spot as before, positioned the mic on the tripod at each ear and stepped away for the measurements.


----------



## reter

lowdown said:


> I had the UMIK pointing straight up at the ceiling.


Oh yes, now I get it. Now I see the UMIK pointing up in the image; he removed the filter. OK, now I see that's very easy to do.


----------



## conql

reter said:


> @Xam198 yeah, I'm using the Primo. I'm still wondering if I made the right choice gluing the mics to the earplugs; maybe I should try some of the stuff @musicreo did so I can place the mics even deeper.


I tried custom silicone earplugs and musicreo's open ear canal mounting.
For me, the second one certainly improves overall tonality and measurement stability, and it provides the possibility to measure in-ear monitors. Theoretically, blocked ear canal measurements bring more systematic errors that can't be eliminated without an eardrum measurement. However, whether that error can be perceived depends on the headphones you use.




conql said:


> I made a pair of them following his idea but with different types of silicone. This is what it looks like. Though I don't have the equipment to measure its variance, it does seem quite stable during multiple measurements.





conql said:


> Here is the information about the FEC criterion. Basically, when you measure BRIRs with the ear canal blocked, you're assuming that your headphones have perfect Free-air Equivalent Coupling, i.e. the same acoustic impedance as free air. Notice that this is different from the HRTF or HPTF curve and can only be compensated with probe microphones measuring at the eardrums. The deviation from ideal is called the Pressure Division Ratio (PDR). Since measuring it is complicated, I cannot find much data about it. But judging from this 90s paper, it's not negligible, if not significant, for some headphones.
> 
> 
> 
> ...


Errors for some headphones


----------



## morgin

conql said:


> I tried custom silicone earplugs and musicreo's open ear canal mounting.
> For me, the second one certainly improves overall tonality and measurement stability, and it provides the possibility to measure in-ear monitors. Theoretically, blocked ear canal measurements bring more systematic errors that can't be eliminated without an eardrum measurement. However, whether that error can be perceived depends on the headphones you use.
> 
> 
> ...


With the second option how were you able to keep the mics in without them falling out and also have them facing outward?


----------



## reter (Apr 14, 2022)

Guys, how can I move the left raw and right raw curves closer? I've tried multiple times, changing the mic gain and volume and trying to equalize, but I don't know why they don't align in the graph on the right... Also, no matter how much gain I set, the red lines in the two graphs on the left are still almost on top of the Target line.




In this one I got very lucky, and I don't even know how I got the two raw curves so close together.


----------



## morgin

Does everyone use VBCABLE with EQ APO and the "listen to this device" option in the Windows audio settings? Is that the best way to get 7.1 surround?


----------



## reter

morgin said:


> Does everyone use VBCABLE with EQ APO and the "listen to this device" option in the Windows audio settings? Is that the best way to get 7.1 surround?


I use Hi-Fi Cable, which uses ASIO; I followed this guide: https://sourceforge.net/p/hesuvi/discussion/general/thread/ce7c354dd7/

To be honest, I think it's the best and most stable way.


----------



## Xam198

Why do you use VBCABLE? I'm able to get (bad) 7.1 to headphones without it, just with EQ APO and HeSuVi.
By the way, I found where my hum comes from: if I touch the back of the mics, the two wires, I get the hum; even the earplug touching the back of the mics makes a hum! Surprisingly, if I touch the VXLR with my hand, the hum goes away. So I keep thinking there's a bad ground on the mics. Anyway, I will do my measurements keeping the VXLR in my hand to cancel the hum.


----------



## morgin

When I first used HeSuVi just for the included HRIRs, only the VBCABLE method worked for me. It gives me the 7.1 sound, but I was wondering if the other methods gave better results somehow.


----------



## musicreo (Apr 15, 2022)

Xam198 said:


> Why do you use VBCABLE?



Not all soundcards support 7.1 and work with EQ-APO.


----------



## Xam198

musicreo said:


> Not all soundcards support 7.1 and work with EQ-APO.


OK, I've been using laptops with 7.1 soundcards, integrated into the motherboard or not, for a long time (for HTPC), so I forgot that point.


----------



## reter (Apr 15, 2022)

morgin said:


> When I first used HeSuVi just for the included HRIRs, only the VBCABLE method worked for me. It gives me the 7.1 sound, but I was wondering if the other methods gave better results somehow.


I used VBCABLE a while ago, but I don't really remember how it was. Hi-Fi Cable is made by the same company as VBCABLE, but it also has ASIO integrated, so I think you get less latency; I never got popping.

You can install both with no issues; you can set which one you prefer.


----------



## Xam198

I can deal with the hum, but I've tested, as morgin said, with lower levels on headphones and speakers, and I still have my zigzag. I'm realizing that I don't really understand what the Headphones graph exactly shows me: is this the frequency response of my headphones right after the headphone measurement command with the sine sweep? If yes, is there a command to generate it without having to do the whole process with speakers? I have to eliminate all the possible causes; I would first want to separate headphones and speakers.
And as you can see, this is a portion of headphones.wav, at frequencies where I have zigzags on the headphones graph, and the sinusoid is clean! I don't understand...


----------



## musicreo

Xam198 said:


> I can deal with the hum, but I've tested, as morgin said, with lower levels on headphones and speakers, and I still have my zigzag. I'm realizing that I don't really understand what the Headphones graph exactly shows me: is this the frequency response of my headphones right after the headphone measurement command with the sine sweep?


I think it is the frequency domain of the deconvolved headphone measurement, so yes, it is the frequency response.


Xam198 said:


> If yes, is there a command to generate it without having to do the whole process with speakers?


I'm not sure, but I think there is no direct option for that in Impulcifer. But the whole process just takes a few seconds...


----------



## reter (Apr 15, 2022)

Xam198 said:


> I can deal with the hum, but I've tested, as morgin said, with lower levels on headphones and speakers, and I still have my zigzag. I'm realizing that I don't really understand what the Headphones graph exactly shows me: is this the frequency response of my headphones right after the headphone measurement command with the sine sweep? If yes, is there a command to generate it without having to do the whole process with speakers? I have to eliminate all the possible causes; I would first want to separate headphones and speakers.
> And as you can see, this is a portion of headphones.wav, at frequencies where I have zigzags on the headphones graph, and the sinusoid is clean! I don't understand...



The headphone graph is generated after you use --plot; otherwise it shows the previous plot from your previous headphone measurement.

So if you want to test the headphone curve, you can just do the headphone measurement alone and run the plot command. It generates the headphone graph pretty fast, so you don't have to wait for the whole process to be done; once you see it has generated the new headphone graph, just close the command prompt, and if it doesn't satisfy you, try again.

I'm doing this over and over trying to perfect the curve, but damn, my curve shows the right and left raw so distant from each other every time, while @musicreo's and @jaakkopasanen's graphs are such perfection!


----------



## lowdown

reter said:


> The headphone graph is generated after you use --plot; otherwise it shows the previous plot from your previous headphone measurement.
> 
> So if you want to test the headphone curve, you can just do the headphone measurement alone and run the plot command. It generates the headphone graph pretty fast, so you don't have to wait for the whole process to be done; once you see it has generated the new headphone graph, just close the command prompt, and if it doesn't satisfy you, try again.
> 
> I'm doing this over and over trying to perfect the curve, but damn, my curve shows the right and left raw so distant from each other every time, while @musicreo's and @jaakkopasanen's graphs are such perfection!


I'm one of the technically weaker posters here, but when I look at the graphs you posted earlier they look quite good to me.  The difference line between L and R is maybe 1 dB off.  And the plots for each channel also look good.  There are several options in the Impulcifer commands to adjust balance.  You might be better off trying those than chasing absolute perfect channel balance in the measurement step.  Using the balance command options made a huge difference for me.  Others who know much more than I do may have a different perspective, which I'd be happy to hear.


----------



## Xam198

@musicreo: for me it's more like 2-3 minutes with plots.
@reter: I will try, thanks.
By the way, I think I found why I sometimes had a warning on the delay between FR and FL: simply because you have to be exactly centered between the two speakers; it seems quite sensitive.


----------



## castleofargh

reter said:


> The headphone graph is generated after you use --plot; otherwise it shows the previous plot from your previous headphone measurement.
> 
> So if you want to test the headphone curve, you can just do the headphone measurement alone and run the plot command. It generates the headphone graph pretty fast, so you don't have to wait for the whole process to be done; once you see it has generated the new headphone graph, just close the command prompt, and if it doesn't satisfy you, try again.
> 
> I'm doing this over and over trying to perfect the curve, but damn, my curve shows the right and left raw so distant from each other every time, while @musicreo's and @jaakkopasanen's graphs are such perfection!


Don't obsess over silly things that might not be important, or even a real issue at all.
Unless you place the mics exactly the same way into perfectly symmetrical ears (we don't have those), you won't, and really shouldn't, get identical measurements.
Then there are the headphones and speakers to consider. It's very common to have a few dB here and there on a headphone; as for speakers, the room and placement can have a big impact (and of course the speakers won't have perfectly identical responses either).


What you can look for is whether you have a global imbalance, because it's very unlikely that your own head or mic insertion would cause something with an even impact at all frequencies. So you should probably compensate for that. But even then, maybe your recording setup has one volume level per channel and they're not well matched? Or maybe your amp has some imbalance caused by the volume pot. So it's still relevant to listen and check whether the center image (any mono material) is well centered for you subjectively. If it is, that's a really good sign that some things are done well.
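To put a number on that "global imbalance", you can compare the broadband RMS levels of the two raw recordings; a small sketch (function names are mine, and this obviously ignores frequency-dependent differences):

```python
import numpy as np

def rms_db(x):
    """Root-mean-square level in dB relative to full scale (1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

def global_imbalance_db(left, right):
    """Positive result: the left channel is louder overall."""
    return rms_db(left) - rms_db(right)

# Toy example: the right channel is the left scaled down by exactly 6 dB.
rng = np.random.default_rng(0)
left = rng.standard_normal(48000) * 0.1
right = left * 10 ** (-6 / 20)
print(round(global_imbalance_db(left, right), 1))  # -> 6.0
```

If this number is consistently non-zero across sessions, a flat gain correction (or the mic/interface trim) is the right fix; if it varies, mic seating is the more likely culprit.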




Xam198 said:


> I deal with the hummm but i've tested as morgin said, with lower levels on headphones and speakers, still have my zigzag. I'm seing that i don't very udertsand what the graph Headphones exactly shows me : is this the frequency answer of my headphones just after headphones measurements command with sine sweep ? If yes is there a command to generate it without having to do the hole process - with speakers" ? I have to eliminate all the possible causes, i would want first separate headphines ad speakers.
> And as you can see,  this is a portion of headphones.wav, in frequencies where i have zigzags on the headphones graph, and the sinusoid is clean ! I don't uderstand...


On that small sample of a capture without a scale it's hard to make a guess, but it looks very clean and stable, so it's possibly the 50 Hz from your house outlet bleeding into the ADC somehow. It doesn't look like noise picked up by the mic (too clean).
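One way to confirm (or rule out) the mains-hum theory is to look at the spectrum of the raw recording and check whether the strongest component sits right at 50/60 Hz. A self-contained sketch using a synthetic "recording" in place of the real file:

```python
import numpy as np

fs = 8000                        # sample rate in Hz
t = np.arange(fs) / fs           # one second of signal

# Synthetic stand-in for the capture: low-level broadband noise plus a
# clean 50 Hz tone, which is what mains bleed into the ADC looks like.
rng = np.random.default_rng(1)
rec = 0.01 * rng.standard_normal(fs) + 0.2 * np.sin(2 * np.pi * 50 * t)

# With a 1-second window the FFT bins are exactly 1 Hz apart, so the
# dominant bin frequency can be read off directly.
spectrum = np.abs(np.fft.rfft(rec))
freqs = np.fft.rfftfreq(len(rec), 1 / fs)
peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)  # -> 50.0
```

Run the same analysis on the real sweep recording (loaded from the .wav); a sharp spike at exactly 50 Hz (or 60 Hz, depending on your mains) that survives at any mic gain points at electrical bleed rather than acoustic noise.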


----------



## Xam198

I'm going to try with the laptop on battery, doing only headphone measurements, to eliminate (or not) the 50 Hz from my house outlet.


----------



## musicreo

Xam198 said:


> @musicreo: for me it's more like 2-3 minutes with plots.


You can remove the --plot option; the results and the headphone graph are plotted by default, and you don't have to plot the single speakers pre and post processing.


----------



## Xam198

@reter: sorry, newbie question, but what is the command for the headphone plot after their measurements? Thanks!


----------



## reter (Apr 15, 2022)

castleofargh said:


> Don't obsess over silly things that might not be important, or even a real issue at all.
> Unless you place the mics exactly the same way into perfectly symmetrical ears (we don't have those), you won't, and really shouldn't, get identical measurements.
> Then there are the headphones and speakers to consider. It's very common to have a few dB here and there on a headphone; as for speakers, the room and placement can have a big impact (and of course the speakers won't have perfectly identical responses either).
> 
> ...



Thanks for the answer! This kind of stuff was driving me crazy!

I noticed that one of the two mics is less sensitive (the one I plug into the right ear), so I raised the right-channel gain on the Behringer a little bit. Am I doing it right? I'm focusing on adjusting the red lines on the left to be more or less equal for both mics; is that the right thing to do?

This is my last measurement






@lowdown
This is my first time doing the room measurement; I did only one single room measurement (center of the head), just to try it out...




But I need to boost the gain to +23 dB in the Windows options to get the 6 dB headroom, which means I get a lot of background noise due to the bad amplification the motherboard has; you can hear the noise here


@Xam198 I do the standard general plot with only the headphone files inside the my_hrir folder


```
python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir" --plot
```


----------



## Xam198

Hmm, strange: if I don't have the FR-FL file, I get an error from Impulcifer.


----------



## reter

Xam198 said:


> Hmm, strange: if I don't have the FR-FL file, I get an error from Impulcifer.


Maybe I misunderstood; why don't you try doing the whole plot? You can keep the graph open while doing the plot; it refreshes automatically in your Windows photo viewer when it's done.


----------



## Xam198

OK, I think I found the culprit: it seems to be the Zoom H5. I was able to test without it, using the integrated soundcard, and there's no more zigzag. I think I will order a UMC202!


----------



## Xam198

By the way, I've noticed on my first trials, which were not totally bad, that if I increase the volume so that the bass begins to make the headphones vibrate, my brain is no longer fooled by the virtualization and the sound comes back around my head. How do you deal with that?


----------



## sander99

Xam198 said:


> By the way, I've noticed on my first trials, which were not totally bad, that if I increase the volume so that the bass begins to make the headphones vibrate, my brain is no longer fooled by the virtualization and the sound comes back around my head. How do you deal with that?


- Try filtering out ultra-low subsonic frequencies.
- Maybe attenuate "problem" frequencies that are boosted unreasonably strongly as part of the overall speaker simulation (if there are such "problem" frequencies).
- If none of the above works, maybe use other headphones that can handle stronger bass without distortion.
- (If you use a very weak amp for your headphones, try another one.)

This actually resembles the only downside I experience when using Sennheiser HD58X and HD600 with the Smyth Realiser A16. But not so much as in your case. In my case it isn't a total collapse of the out-of-head experience, just that a part of the sound starts to sound close to the head. And the feeling that I am not using headphones at all disappears. But it only happens when I play something with a lot of bass at very high levels, louder than I normally do.
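The first suggestion above, filtering out the subsonics, can be sketched with a standard high-pass filter; the 20 Hz cutoff and 4th-order Butterworth here are my own example choices, not something any of the tools in this thread prescribe:

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfreqz

fs = 48000
# 4th-order Butterworth high-pass at 20 Hz, in second-order sections
# (SOS form is numerically safer than a single transfer function).
sos = butter(4, 20, btype="highpass", fs=fs, output="sos")

def strip_subsonics(samples):
    """Remove subsonic content before it reaches the headphone drivers."""
    return sosfilt(sos, samples)

# Sanity check: the filter crushes 5 Hz but barely touches 1 kHz.
w, h = sosfreqz(sos, worN=[5, 1000], fs=fs)
print(round(abs(h[0]), 3), round(abs(h[1]), 3))  # -> 0.004 1.0
```

You could apply this either to the playback signal or directly to the BRIR .wav once, so the convolution itself never carries the subsonic energy.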


----------



## Iohfcasa

Excluding frequencies that are too low from the convolution should prevent bass distortion, right?


----------



## sander99

Iohfcasa said:


> Excluding frequencies that are too low from the convolution should prevent bass distortion, right?


Most "heavy lifting" for the headphone drivers is in the bass. So less deep bass, a lot less work for the drivers, less chance for distortion. At least that is what I am thinking.


----------



## reter (Apr 16, 2022)

I did my test 9, and WOW: I measured 1 m 13 cm from the speaker, and I think I got the best clarity/distance trade-off that's pleasurable for me.

For the first time I measured all 14 steps with the UMIK-1 with a single speaker; oh my, what a nightmare! Also, I'm happy with the conservative method of room correction; I think it's the least intrusive...

My last HRIR is slightly to the right, very very slightly. Compared to my previous one it's a huge step forward; sadly there is still so much reverb, less but still there, and honestly I don't want to sacrifice clarity to "fix" the reverb.


----------



## morgin

I’m glad you’re getting good results. It’s awesome when you hear all the little details and positions. Did you do the 7.1? What are you testing with movies/games?


----------



## reter

morgin said:


> I’m glad you’re getting good results. It’s awesome when you hear all the little details and positions. Did you do the 7.1? What are you testing with movies/games?



I'm testing mostly videogames and music, both stereo and 7.1. Yesterday I tried Squad, which is free to try on Steam this weekend; it has a high dynamic range and WOW, this is great!


----------



## reter

Is the room correction already applied when I use fr_combination_method alone when plotting, or do I have to use the specific_limit option too?


----------



## Brandon7s (Apr 18, 2022)

Oh man, I haven't visited this thread in months and since I've been so happy with my setup, I hadn't checked back to see how things were going! I have a lot of catching up to do. I'm not sure if anyone has mentioned the cause of this yet, but I ran into this exact same problem quite a few months ago.



jaakkopasanen said:


> Definitely something wrong if you get that zigzag curve. Iirc someone had something similar earlier in the thread but don't remember the solution from the top of my head





Xam198 said:


> When you talk about zigzag, can you specify which ones? The very small ones oscillating around the curves?



In my particular case, the problem causing this zigzag measurement was that the playback of the sine sweep itself was distorting, due to being too hot a signal for my system to handle. I resolved the issue by editing the sweep .wav files directly in Audacity and reducing their amplitude by 2 dB. I could also get the distortion to go away by reducing the output volume of Windows from 100 to about 70, though I'm not sure about that exact number since it's been so long since I dealt with this issue.

I'd recommend trying out a sine-wave tone generator set to 50 Hz and then turning the browser and Windows volume controls all the way up (turn your speakers down, as that can be LOUD). On my own system, doing this makes the distortion obvious; you'll hear harmonics and overtones that definitely aren't part of the sine wave. Try lowering your Windows/OS volume until you don't hear any distortion, just a pure sine tone, and then try running an Impulcifer measurement with your playback volume no higher than that new level. This zigzag problem went away 100% when I did this. I ended up lowering the output of the sweep .wav files just so I could keep my OS volume close to 100% without distortion, but lowering the OS volume does the same thing, from what I can tell.
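For reference, the 2 dB amplitude reduction done in Audacity is just a linear scale factor on the samples; a minimal sketch of the same operation (the sample values are made up for illustration):

```python
import numpy as np

def attenuate_db(samples, db):
    """Scale samples down by `db` decibels.

    A dB change maps to a linear gain of 10**(-db/20), which is what an
    "amplify by -2 dB" step in an audio editor applies under the hood.
    """
    return samples * 10 ** (-db / 20)

# Toy sweep fragment peaking at full scale.
sweep = np.array([0.5, -1.0, 0.25], dtype=np.float32)
quieter = attenuate_db(sweep, 2.0)
print(round(float(np.max(np.abs(quieter))), 3))  # -> 0.794
```

Dropping the peak from 1.0 to about 0.794 gives the DAC/amp chain ~2 dB of headroom, which is the whole point of the fix described above.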


----------



## Brandon7s (Apr 18, 2022)

morgin said:


> Does everyone use VBCABLE with EQ APO and the "listen to this device" option in the Windows audio settings? Is that the best way to get 7.1 surround?



 Nah, too much latency for that to be acceptable, in my opinion. I actually ended up buying a cheap 7.1 soundcard that can output via SPDIF and I use that whenever I want 7.1 surround. I've not yet tried the Hifi cable method with ASIO support but this is my first time hearing about it and it sounds like the best way to go. I'll give it a shot sometime this week.



reter said:


> Guys, how can I move the left raw and right raw curves closer? I've tried multiple times, changing the mic gain and volume and trying to equalize, but I don't know why they don't align in the graph on the right... Also, no matter how much gain I set, the red lines in the two graphs on the left are still almost on top of the Target line.



Honestly, your own measurements look GREAT; I don't think you need to get the left and right any tighter in tolerance. That has a relatively minor effect on the results you hear, because Impulcifer automatically compensates for that variance, so you don't hear it as being "off" balance. I've had plots that measure +/-7 dB on one side and it still sounded perfectly centered.




reter said:


> For the first time I measured all 14 steps with the UMIK-1 with a single speaker; oh my, what a nightmare! Also, I'm happy with the conservative method of room correction; I think it's the least intrusive...



I don't even bother using Impulcifer's room correction, not because it isn't good, but because it's so easy to fix any EQ problems after the measurement is made. I load an EQ VST into EQ-APO (I use Pro-Q 3, but any will work), then I make the necessary corrections while listening to music through the BRIR I'm correcting (and by looking at APO's charts), and when I'm happy with the EQ I burn it to a new .wav file by opening the original in Audacity and applying that same EQ correction. It's MUCH easier to do this than to go through the whole 14-stage measurement mic process, and the results are just as good, in my experience.



reter said:


> My last HRIR is slightly to the right, very very slightly. Compared to my previous one it's a huge step forward; sadly there is still so much reverb, less but still there, and honestly I don't want to sacrifice clarity to "fix" the reverb.



You can easily adjust the balance issue you mention (slightly to the right) by processing the BRIR with --channel_balance=-0.5, or whatever number you want. That parameter adjusts the right channel, so if the right is too loud you can use a negative number to bring it back down so that the left sounds equal, thereby centering the stereo image.

Have you tried using reverb management? If not, then I HIGHLY recommend it; it works amazingly well and will definitely reduce the amount of reverb you hear.

Here's an example of what it looks like:
`python research\reverberation-management\reverberation_management.py --file="data/my_hrir/69th-Ananda-10K.wav" --track_order=hesuvi --reverb=800`

The number at the end is the target length of the resulting reverb tail in milliseconds. 800ms is a good starting point for small rooms, in my opinion.
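To illustrate the idea behind that target length (this is not Impulcifer's actual reverberation_management implementation, just a toy sketch): shortening the reverb tail amounts to cropping the impulse response and fading the cut region so it doesn't produce a click:

```python
import numpy as np

def shorten_tail(ir, fs, target_ms):
    """Crop an impulse response and fade its tail to zero.

    Keeps the first `target_ms` milliseconds and applies a half-cosine
    fade over the last quarter of that window, so the truncation is
    smooth instead of an audible hard cut.
    """
    n = int(fs * target_ms / 1000)
    out = ir[:n].copy()
    fade_len = max(1, n // 4)
    fade = 0.5 * (1 + np.cos(np.linspace(0, np.pi, fade_len)))
    out[-fade_len:] *= fade
    return out

# Toy 2-second exponentially decaying IR at 48 kHz, shortened to 800 ms.
fs = 48000
ir = np.exp(-np.arange(fs * 2) / (0.3 * fs))
short = shorten_tail(ir, fs, 800)
print(len(short), round(float(short[-1]), 6))  # -> 38400 0.0
```

The real tool is more sophisticated (it works per channel and shapes the decay rather than just windowing it), but the crop-and-fade picture is why the 800 ms number maps to the perceived reverb length.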


----------



## Xam198

Brandon7s said:


> Oh man, I haven't visited this thread in months and since I've been so happy with my setup, I hadn't checked back to see how things were going! I have a lot of catching up to do. I'm not sure if anyone has mentioned the cause of this yet, but I ran into this exact same problem quite a few months ago.
> 
> 
> 
> ...


Thanks for the advice, but I found (I hope) that my Zoom H5 was the cause, because with the integrated soundcard, using the mic in, there's no more zigzag. But for better quality I ordered a UMC202HD. We will see.


----------



## reter (Apr 19, 2022)

Brandon7s said:


> Nah, too much latency for that to be acceptable, in my opinion. I actually ended up buying a cheap 7.1 soundcard that can output via SPDIF and I use that whenever I want 7.1 surround. I've not yet tried the Hifi cable method with ASIO support but this is my first time hearing about it and it sounds like the best way to go. I'll give it a shot sometime this week.
> 
> 
> 
> ...


Thanks, I will try the channel balance you mentioned.

Honestly, I would try to fix the reverb at its root before cutting it in processing, because my room is pretty empty, so it causes a lot of reverb (I've never had this much reverb in a room in my life; it sounds like a damn underground parking garage). Because of that, I think it's almost impossible to cut it out with post-processing without losing detail. I will try putting up some duvets and then eventually cut the remaining reverb.


----------



## morgin

When you guys say VBCABLE gives too much latency, do you mean the audio is out of sync with the video and needs to be matched? Or can this affect the surround effect too in a negative way?


----------



## reter (Apr 19, 2022)

morgin said:


> When you guys say VBCABLE gives too much latency do you mean the audio is out of sync with the video and needs to be matched. Or can this effect the surround effect too in a negative way?


The audio is not in sync with the video due to the time the PC needs to process it with vbcable. This affects everything if you're using vbcable (or whatever alternative software) to process the audio; it's like a stereo mix... There are two ways to "fix" it, though:

The 1st way is like @Brandon7s said: use an audio card that already has 7.1, meaning the stereo-to-7.1 processing is done directly by the chip inside the audio card (HARDWARE SIDE). That's the fastest way, but it restricts you to certain audio cards.

The 2nd way is what I did: use Hi-Fi Cable, which uses ASIO to do the stereo-to-7.1 (SOFTWARE SIDE). This is NOT perfect, there's still some delay, but it's faster, and the plus is that you can use whatever audio card you want. Beware that with this method you should have a DAC that lets you control the gain manually, because ASIO will completely disable the Windows master volume control.

Sometimes it could just be psychological, because some people won't notice the delay even with vbcable...


----------



## morgin

reter said:


> Audio not in sync with the video due to the time the pc has to process it with vbcable, affects everything if you're using vbcable (or whatever alternative software) to process it, it's like a stereo mix... there are two ways to "fix" it tho:
> 
> the 1st way is like @Brandon7s said, you have to use an audio card that already have 7.1, meaning that the processing to do the stereo-to-7.1 is directly processed by the chip inside the audio card (HARDWARE SIDE) that's the fastest way, but in this situation you're restricted to some audio cards
> 
> ...


I have the delay with VBCABLE but compensate by setting the audio delay in my MPC-HC player. For music and gaming I don't notice any delay. VBCABLE is the only solution that works for me unless I buy a sound card. So if it's just delay then I should be OK sticking with what's working.


----------



## reter

morgin said:


> I have the delay with VBCABLE but compensate by setting the audio delay in my mpc hc player. Music and gaming I don’t notice any delay. VBCABLE is the only solution that works for me unless I buy a sound card. So if it’s just delay then I should be ok sticking with what’s working.


If you're not hearing sound popping you're okay... why don't you try Hi-Fi Cable? You can swap back to vbcable at any time, it's not invasive and you can have both installed.


----------



## simplefi

Been dabbling with Impulcifer for the past couple of weeks, and even with my first few tries I am getting results far beyond all the artificial "3D" surround virtualizations I've ever tried.  Awesome project!

Does anyone know how to load HRIRs into EQ-APO directly without Hesuvi?  I am only interested in stereo, so I don't really use any features of Hesuvi.  EQ-APO can load impulse response files using the "convolution with impulse" advanced filter, but it seems to only convolve the left speaker.  A couple of free convolution VSTs I've tried do the same thing.  I suspect I may be able to get around the issue by creating separate L/R HRIRs, but Impulcifer gives an error in the processing command if you do not have at least FL.wav and FR.wav files, it seems.
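For reference, the reason a single "convolution with impulse" filter isn't enough: a binaural impulse response is "true stereo", so each ear needs the sum of two convolutions (left speaker to that ear plus right speaker to that ear). A minimal pure-Python sketch, with made-up function names:

```python
# Why one "convolution with impulse" filter is not enough for an HRIR:
# each ear hears both speakers, so each output channel is the SUM of two
# convolutions. Function names here are made up for illustration.

def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def hrir_stereo(in_l, in_r, h_ll, h_lr, h_rl, h_rr):
    """h_ll = left speaker to left ear, h_rl = right speaker to left ear, etc.
    All four IRs are assumed to be the same length."""
    out_l = [a + b for a, b in zip(convolve(in_l, h_ll), convolve(in_r, h_rl))]
    out_r = [a + b for a, b in zip(convolve(in_l, h_lr), convolve(in_r, h_rr))]
    return out_l, out_r
```

With identity IRs on the direct paths and zeros on the cross paths the input passes through unchanged; a real BRIR has all four paths populated, which is what the Hesuvi-style EQ-APO configs wire up.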


----------



## Iohfcasa

EQ-APO without Hesuvi seems to work fine for me; Hesuvi is just a sort of control panel/frontend for EQ-APO, it doesn't do any convolution by itself.


----------



## morgin

simplefi said:


> Been dabbling with Impulcifer for a past couple weeks and even testing with my first few tries I am getting results far beyond all the artificial "3D" surround virtualizations I've ever tried.  Awesome project!
> 
> Does anyone know how to load HRIRs into EQ-APO directly without Hesuvi?  I am only interested in stereo so dont really use any features of Hesuvi.  EQ-APO can load impulse response files using the "convolution with impulse" advanced filter but it seems to only convolve the left speaker.  A couple free convolution VSTs I've tried do the same thing.  I suspect I may be able to get around the issue by creating separate L/R HRIRs but Impulcifer gives an error in the processing command if you do not have at least FL AND FR.wav files it seems.


There's a post by @musicreo a few pages back with a txt file for EQ-APO that might help you. Unfortunately when I try it I get no sound.


----------



## Brandon7s

simplefi said:


> Does anyone know how to load HRIRs into EQ-APO directly without Hesuvi?  I am only interested in stereo so dont really use any features of Hesuvi.  EQ-APO can load impulse response files using the "convolution with impulse" advanced filter but it seems to only convolve the left speaker.  A couple free convolution VSTs I've tried do the same thing.  I suspect I may be able to get around the issue by creating separate L/R HRIRs but Impulcifer gives an error in the processing command if you do not have at least FL AND FR.wav files it seems.


If you're using the default post-count-per-page settings here then you can follow along with the conversation from @musicreo on page 71. I'm going to try this out myself when I have the time. Here's the main post explaining the process:


musicreo said:


> Just to clarify, HESUVI is only a gui for EQ-APO.  You can use EQ-APO also without HESUVi. For example you can save code as txt file and load it in EQ-APO. This can look like this:
> 
> #Common preamp
> Preamp: 0 dB
> ...



 By the way, I gave Hi-fi ASIO Bridge Virtual Cable a try and it works very well on my system. The latency is FAAAR better than using the "listen to this device" feature built into Windows. I'll probably continue to use the separate soundcard I bought just for 7.1 access, but if I didn't have that then I'd use the Hifi VC method in a heartbeat.


----------



## simplefi

Brandon7s said:


> If you're using the default post-count-per-page settings here then you can follow along with the conversation from @musicreo on page 71. I'm going to try this out myself when I have the time. Here's the main post explaining the process:


Yep, thanks for pointing out that code from @musicreo, that did the trick.  I just deleted the channels I wasn't using.

Another thing I have been experimenting with is tactile transducers.  Does anyone know if it is possible to have two simultaneous hardware outputs, one wet (HRIR-convolved) and one dry to send to the transducer?  I used Voicemeeter Banana to achieve this while testing on plain unprocessed audio, and the results are VERY convincing if you want to make your listening more speaker-like.  I am using the Clark Synthesis transducer, and IMO these are ideal for audio because, unlike other "buttkicker" type transducers, they are intended to be run full-range.  So what you end up feeling through your body are the visceral hits from bass notes but also all the subtleties from every other frequency, just as you would from a speaker moving air.  In essence, you are "listening" as much with your body as with your ears.  I think this is an important aspect of what makes a speaker sound like a speaker, and it often gets overlooked.  Unfortunately I can't figure out a way for Voicemeeter to work with EQ-APO and have an unprocessed output together with an APO-processed output.


----------



## Brandon7s (Apr 20, 2022)

simplefi said:


> Yep, thanks for pointing out that code from @musicreo, that did the trick.  I just deleted the channels I wasnt using.
> 
> Another thing I have been experimenting with are tactile transducers.  Does anyone know if it is possible to have two simultaneous hardware outputs, one wet (HRIR convolved) and one dry to send to the transducer?  I used voicemeeter banana to achieve this testing on plain unprocessed audio and the results are VERY convincing if you want to make your listening more speaker-like.  I am using the Clark Synthesis transducer, and IMO these are ideal for audio because unlike other "buttkicker" type transducers, these are intended to be run full-range.  So what you end up feeling through body are the visceral hits from bass notes but also all the subtleties from every other frequency just as you would from a speaker with moving air.  In essence, you are "listening" as much with your body as your ears.  I think this is an important aspect of what makes a speaker sound like a speaker and often gets overlooked.  Unfortunately I cant figure out a way for Voicemeeter to work with EQ-APO and have an unprocessed output together with a APO processed output.


That is a fascinating idea and I'm interested in trying it myself. I'm not experienced enough in using EQ-APO to setup the signal chain as you describe, but setting up a parallel signal chain that does not run through convolution should certainly be possible within APO itself, then it'd just be a matter of routing the dry signal to a separate output from the HRIR-processed signal.


----------



## reter

Brandon7s said:


> If you're using the default post-count-per-page settings here then you can follow along with the conversation from @musicreo on page 71. I'm going to try this out myself when I have the time. Here's the main post explaining the process:
> 
> 
> By the way, I gave Hi-fi ASIO Bridge Virtual Cable a try and it works very well on my system. The latency is FAAAR better than using the "listen to this device" feature built into Windows. I'll probably continue to use the separate soundcard I bought just for 7.1 access, but if I didn't have that then I'd use the Hifi VC method in a heartbeat.


I use the Topping D10s with its ASIO drivers, which let me set the buffer as low as 64 samples (Hi-Fi Cable's ASIO minimum is 512), so I get even less latency.
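For a rough sense of scale: one buffer of N samples at sample rate fs takes N/fs seconds to fill. Real round-trip latency also includes output buffers and driver overhead, so treat this as a lower bound.

```python
# Back-of-envelope per-buffer latency; real ASIO round-trip latency is
# higher (input + output buffers plus driver overhead).

def buffer_latency_ms(buffer_samples, sample_rate):
    return 1000 * buffer_samples / sample_rate

print(round(buffer_latency_ms(512, 48000), 1))  # 10.7 ms (Hi-Fi Cable minimum)
print(round(buffer_latency_ms(64, 48000), 1))   # 1.3 ms (64-sample buffer)
```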


----------



## reter

simplefi said:


> Yep, thanks for pointing out that code from @musicreo, that did the trick.  I just deleted the channels I wasnt using.
> 
> Another thing I have been experimenting with are tactile transducers.  Does anyone know if it is possible to have two simultaneous hardware outputs, one wet (HRIR convolved) and one dry to send to the transducer?  I used voicemeeter banana to achieve this testing on plain unprocessed audio and the results are VERY convincing if you want to make your listening more speaker-like.  I am using the Clark Synthesis transducer, and IMO these are ideal for audio because unlike other "buttkicker" type transducers, these are intended to be run full-range.  So what you end up feeling through body are the visceral hits from bass notes but also all the subtleties from every other frequency just as you would from a speaker with moving air.  In essence, you are "listening" as much with your body as your ears.  I think this is an important aspect of what makes a speaker sound like a speaker and often gets overlooked.  Unfortunately I cant figure out a way for Voicemeeter to work with EQ-APO and have an unprocessed output together with a APO processed output.


oh WOW i want to do that!!!

I was planning to buy a subwoofer to feel the bass vibration when using headphones, but your approach intrigues me even more... can you explain in more detail how you set up this experiment?


----------



## simplefi

reter said:


> oh WOW i want to do that!!!
> 
> I was planning to buy a subwoofer to feel the bass vibration when using headphones but your way is intriguing me even more... can you explain better how you set up this experiment?


The tactile transducer is essentially a speaker without the cone that you bolt onto the frame of your listening seat, powered by a speaker amp.  Your seat then becomes the cone and vibrates as you would expect a cone to.  The one I use (Clark Synthesis) is designed to run full-range, but you could easily set up a filter if you only wanted the lower end of the spectrum.  Personally I think running it full-range increases the immersion even more.

It is a great alternative to a subwoofer since the vibrations are felt directly as opposed to traveling through the air.  You also don't have to worry about room modes or disturbing your neighbors when cranking it up.  Granted, it is not silent, since your chair will act as a speaker of sorts, but fortunately it's not terribly loud, since a chair is heavy and dense, which is the opposite of what makes a good speaker cone!

To hook it up, you would need a parallel output to send to the transducer.  Ideally this would not be processed by the HRIR or anything else (other than possibly a low-pass filter and EQ), hence my original question.  This works great with regular headphone listening, but I have not been able to figure out how to use HRIR convolution on one output while also having a parallel dry output for the transducer.
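If you do want to band-limit the transducer feed in software, a crude one-pole low-pass is the simplest possible sketch. A real crossover would use a steeper filter; the function here is just an illustration of the idea.

```python
import math

# Crude one-pole low-pass for a transducer feed. Only a sketch: a real
# crossover would use a steeper (e.g. Butterworth) filter.

def one_pole_lowpass(x, cutoff_hz, sample_rate):
    dt = 1.0 / sample_rate
    rc = 1.0 / (2 * math.pi * cutoff_hz)   # RC time constant of the cutoff
    alpha = dt / (rc + dt)                 # per-sample smoothing coefficient
    y, out = 0.0, []
    for s in x:
        y += alpha * (s - y)               # exponential moving average
        out.append(y)
    return out

# A constant (0 Hz) signal passes almost unchanged once the filter settles.
settled = one_pole_lowpass([1.0] * 2000, 80, 48000)[-1]
```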


----------



## reter

simplefi said:


> The tactile transducer is essentially a speaker without the speaker cone that you bolt onto the frame of your listening seat and is powered by a speaker amp.  Your seat then becomes the cone and vibrates as you would expect a cone to.  The one I use (clark synthesis) is designed to run full-range but you could easily set up a filter if you only wanted the lower end of the spectrum.  Personally I think running it full-range increases the immersion even more.
> 
> It is a great alternative to a subwoofer since the vibrations are felt directly as opposed to traveling through the air.  You also don't have to worry about room modes or disturbing your neighbors when cranking it up.  Granted, it is not silent since your chair will act as a speaker of sorts but fortunately it's not terribly loud since a chair is heavy and dense, which is the opposite of what makes a good speaker cone!
> 
> To hook it up, you would need a parallel output to send to the transducer.  Ideally this would not be processed by HRIR or anything else (other than possibly a low pass filter and EQ) hence my original question.  This works great with regular headphone listening but I have not been able to figure out how to use HRIR convolution on one output while also having a parallel dry output for the transducer.


Does it need only one line? I have the S/PDIF out on my card, so I should be able to try it and see if it gets the signal before the HRIR processing takes place.

What if I set up two of these transducers on one output, maybe with a splitter?


----------



## simplefi

The PC does the HRIR processing and then sends that to the USB input on your DAC.  So the digital outputs would not help, because they would presumably be outputting the processed digital signal, and you would also need another DAC to convert that to analog for listening.  If you set up a splitter on the line output, you would have to choose between splitting either an HRIR-processed signal or an unprocessed signal into two identical outputs.  Ideally you want an unprocessed output for the transducer, because the headphone compensation would surely skew the frequency response.  Alternatively you could output the unprocessed signal, but then you won't get HRIR headphone output.


----------



## reter

simplefi said:


> The PC does the HRIR process, and then sends that to the USB input on your DAC.  So the digital outputs would not help because they would presumably be outputting the processed digital signals, and you would also need another DAC to convert that to analog for listening.  If you set up a splitter to the line output, you would have to choose between splitting either a HRIR processed signal or unprocessed signal into two identical split outputs.  Ideally you want an unprocessed output for the transducer because the headphone compensation would surely skew the frequency response.  Alternatively you could output unprocessed signal but you wont get HRIR headphone output.


You said Voicemeeter Banana works for outputting the unprocessed audio; what about the delay?

Since transducers are very small compared to a sub, I would like to try them ASAP; I also have open-back headphones, so this would be the best choice.


----------



## simplefi

reter said:


> You said voicemeeter banana works in outputting the unprocessed audio and what about the delay?
> 
> Due to trasducers being very small compared to a sub, i would like to try them asap, also i have open back headphones so would be the greatest choice


Voicemeeter banana allows you to specify two different audio devices on your PC to output to.  I have not noticed any delay, or if there is a delay then both outputs must be delayed equally because it still sounds synchronized.


----------



## reter (Apr 21, 2022)

simplefi said:


> Voicemeeter banana allows you to specify two different audio devices on your PC to output to.  I have not noticed any delay, or if there is a delay then both outputs must be delayed equally because it still sounds synchronized.


Mhm, I would like to keep using Hi-Fi Cable to process my HRIR while outputting to the transducers with Banana.

Maybe Hi-Fi Cable or vbcable have some hidden settings for that, or KS.


----------



## Xam198 (Apr 21, 2022)

Very interesting indeed. If I'm not mistaken, I think the Smyth Realiser A16 comes with this option, and the recommended device is the Earthquake MBQ-1. Since the beginning, my aim has been to do what the A16 does (but only in 7.1). I have an EMU 1820 with several outputs, and with ASIO and PatchMix I'm pretty sure you can do that. But first I have to make Impulcifer work well and give good virtualization with a regular sound card.


----------



## reter (Apr 21, 2022)

Xam198 said:


> Very interesting indeed, if i 'm not mistaken i think smyht realiser A16 comes with this option and the recommanded device is earthquake mbq-1. Since the beginning, my aim is to do like A16 (but only in 7.1) , i have an emu1820m with several output, and with asio i'm pretty sure you can do that. But first i have to make impulcifer it work well and give good virtualization with a regular sound card.


The Earthquake? It seems very tiny... would that be enough? 300 W of power is a lot, though, despite its size.

Anyway, let us know if you achieve a proper method.


EDIT: people say the Earthquake's vibration is weak; I think the best endgame is pairing both a transducer and a subwoofer to cover all the frequencies and boost the lows.


----------



## Iohfcasa

Brandon7s said:


> By the way, I gave Hi-fi ASIO Bridge Virtual Cable a try and it works very well on my system. The latency is FAAAR better than using the "listen to this device" feature built into Windows


Wait, is it possible to use a Windows PC/notebook as a convolver hooked up to an external audio source?
The latency with EQ-APO + "listen to this device" was tremendous, but AFAIK ASIO should have better latency in general.


----------



## reter

I found the way: this video at 2:50 shows how to do it, so Hi-Fi Cable should be able to do this. I will try it today.


----------



## reter (Apr 21, 2022)

Sadly ASIO4ALL routes all the devices as if they were the Hi-Fi Cable input, so we are back at the starting point; this means I get both headphones and speakers already convolved in EqualizerAPO...

ALSO, there's a lot of distortion. I don't know if it's the headphone compensation or the buffer size; anyway, that's not the main problem here.

If there were a way to assign the ASIO channels in EqualizerAPO we would be done: we could assign only the two L/R channels of the headphones to the Hesuvi convolution and let the others bypass it.


----------



## simplefi

Turns out you can use Voicemeeter for independent outputs and it is relatively straightforward; I don't know why I didn't try this before.  You just run the EQ-APO Configurator and, if you have Voicemeeter installed, it will show the 3 Voicemeeter outputs as available sound devices.  You then apply EQ-APO to only the output you want to use with convolution/EQ.  I tried using HDMI out for the HRIR output and my integrated sound for normal output, and it worked just as expected.


----------



## Brandon7s

simplefi said:


> Turns out you can use voicemeeter for independent outputs and it is relatively straightforward, I dont know why I didnt try this before.  You just run the EQ-APO configurator and if you have voicemeeter installed, it will show the 3 voicemeeter outputs as available sound devices.  You just need to apply EQ-APO to only the output you want to use with convolution/EQ.  I tried using HDMI out for the HRIR output and my integrated sound for normal output and it worked just as expected.


Great to hear! I'm going to look into buying a transducer this weekend, I think. I'm not wanting to spend as much as the Clark cost, not without first trying it out and seeing what the noise levels are like and if it annoys the girlfriend, but it sounds like a great way to get an extra layer of immersion that isn't there without it. It would also be a piece of cake to setup the routing for when I'm monitoring guitar through a DAW and might be a big benefit to that experience in particular.


----------



## simplefi

Brandon7s said:


> Great to hear! I'm going to look into buying a transducer this weekend, I think. I'm not wanting to spend as much as the Clark cost, not without first trying it out and seeing what the noise levels are like and if it annoys the girlfriend, but it sounds like a great way to get an extra layer of immersion that isn't there without it. It would also be a piece of cake to setup the routing for when I'm monitoring guitar through a DAW and might be a big benefit to that experience in particular.


Even a low-cost transducer can add to the experience, so keep us posted on your impressions.  Great idea using it with a guitar, I hadn't thought of that.  Just be aware that many transducers are intended for an LFE channel and expect a low-pass filter.  Low E on a guitar is around 80 Hz, which is around the cutoff of a typical low-pass, so you probably wouldn't get the entire range of the guitar.  As far as noise, I'd say it sounds like a muffled speaker stuffed under the cushions (depending, of course, on where you set the gain).
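To be precise about that low-E figure: equal-temperament pitch for MIDI note n is 440 * 2^((n - 69) / 12), and low E on a guitar is E2 (MIDI note 40), about 82.4 Hz, i.e. just above a typical LFE low-pass cutoff.

```python
# Equal-temperament frequency for a MIDI note number (A4 = 69 = 440 Hz).
def midi_to_hz(n):
    return 440 * 2 ** ((n - 69) / 12)

low_e = midi_to_hz(40)  # E2, guitar low E: ~82.4 Hz
```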


----------



## Brandon7s

simplefi said:


> Even a low cost transducer can add to the experience so keep us posted on your impressions.  Great idea with using it with a guitar, I hadn't thought of that.  Just be aware that many transducers are intended for a LFE channel and expect a low pass filter.  Low E on a guitar is around 80 Hz which is around the cutoff of a low pass so you probably wouldnt get the entire range of the guitar.  As far as noise, I'd say it sounds like a muffled speaker stuffed under the cushions (of course depending on what you set the gain).


80 Hz is a bit low, good point. I see that there's this Clark transducer that is low-priced enough not to hurt the wallet much if I end up not liking it, while also reaching a much higher frequency of 800 Hz, which seems plenty high for what I want. I do have to do some more research on amplifiers to pair with it, as I don't have any that will provide the power it needs. Any recommendations?


----------



## simplefi

Brandon7s said:


> 80hz is a bit low, good point. I see that there's this Clark transducer that is low priced enough to not hurt the wallet much if I end up not liking it while also reaching a much higher frequency of 800hz, which seems like it'd be plenty high enough for what I want. I do have to do some more research on amplifiers to pair with it, as I don't have any that will provide the power that it needs. Any recommendations?


That's the one I'm using. Compared to other low-cost transducers, I found that the Clark is lacking in low bass output: it only starts generating output above 30 Hz, but the tradeoff is that it goes up to 800 Hz, which I found to be a decent range for audio purposes.  As for amps, any appropriate speaker amp will work and you only need one channel (for a single transducer).  Alternatively you can use an old receiver, try a cheap Class D amp, or even get a subwoofer plate amp.


----------



## reter

Can I plug the transducer into the Behringer audio interface?


----------



## simplefi

reter said:


> can i plug the trasducer to the behrinher audio interface?



No, you must hook it up to an amp.


----------



## reter (Apr 22, 2022)

simplefi said:


> No, you must hook it up to an amp.


Can I still use Hi-Fi Cable + ASIO4ALL while using Voicemeeter? I hope there won't be much delay.


----------



## Brandon7s

simplefi said:


> That's the one I'm using-compared to other low cost transducers, I found that the Clark is lacking in low bass output.  It only starts generating output above 30Hz but the tradeoff is that it goes up to 800Hz which I found to be a decent range for audio purposes.  As for amps, any appropriate speaker amp will work and you only need one channel (for a single transducer).  Alternatively you can use an old receiver, try cheap a class D amp or even get a subwoofer plate amp.


Ah, for some reason I thought you were using the Clark model that is a step up from this one, which I believe is more than double the price. I'll definitely go with this particular model then since it works so well for you. Now just to look into how to mount to my chair, since it's a fairly ordinary office chair and not exactly made for home theater or anything!


----------



## reter

Brandon7s said:


> Ah, for some reason I thought you were using the Clark model that is a step up from this one, which I believe is more than double the price. I'll definitely go with this particular model then since it works so well for you. Now just to look into how to mount to my chair, since it's a fairly ordinary office chair and not exactly made for home theater or anything!


Let me know if the 7.1 virtualization works. Also, can Voicemeeter set a low-pass filter? I want to somehow control the frequency output of the transducer.


----------



## Brandon7s

reter said:


> let me know if the 7.1 virtualization works, also can voicemeeter set a low pass filter? i want somehow control the frequency output of the trasducer


Well, I ended up at a kind of dead-end on this project, I'm afraid. I don't have a suitable chair for mounting a transducer. I'm considering buying a new one since my current chair isn't all that ergonomic and this would be a good excuse to upgrade, but I'm going to have to put this on hold until I get that sorted out first.


----------



## reter

Brandon7s said:


> Well, I ended up at a kind of dead-end on this project, I'm afraid. I don't have a suitable chair for mounting a transducer. I'm considering buying a new one since my current chair isn't all that ergonomic and this would be a good excuse to upgrade, but I'm going to have to put this on hold until I get that sorted out first.



Actually, I'm planning to mount a ButtKicker LFE Mini with some screws under the chair, plus some anti-vibration rubber between the seat and the piston for more efficiency, but before that I want to optimize my HRIR.


----------



## simplefi (Apr 26, 2022)

Brandon7s said:


> Well, I ended up at a kind of dead-end on this project, I'm afraid. I don't have a suitable chair for mounting a transducer. I'm considering buying a new one since my current chair isn't all that ergonomic and this would be a good excuse to upgrade, but I'm going to have to put this on hold until I get that sorted out first.


I know I've seen other transducers (ButtKicker?) that are meant to mount to office chairs and include a clamp for the center post.  Since you can set custom filters in Voicemeeter, I've been wanting to experiment with whether a low-pass or full-range transducer is better with HRIRs.  Running the transducer full-range means that the above-bass audible sound it generates could possibly interfere with open-back headphone imaging.  This doesn't seem to be the case with unprocessed headphone listening, but for HRIR I need to nail the measurement process before I can test this scenario.

Ironically my best measurement was one of the very first few I ever did.  I have since removed the wings from my Sound Professionals mics and have been experimenting with different mic placements in the ear.  I have not been able to reproduce or even get close to that early measurement and can't figure out why.  Despite trying different mic placements, the results all sound same-ish and there isn't as much variability as I expected.  These latest results all sound like a really good crossfeed filter that gives you a large, open headstage.  My initial measurement, on the other hand, sounds like I am listening to a pair of not-so-great speakers.  But the fact that it can fool my brain into thinking that mediocre sound is coming from -over there- is kind of mind-blowing.  I just haven't been able to get anything close to this result, so I am wondering if the mics being held in place by the wings was a factor.


----------



## lowdown

simplefi said:


> I know I've seen other transducers (buttkicker?) that are meant to mount to office chairs and include a clamp for the center post.  Since you can set custom filters in voicemeeter I've been wanting to experiment with whether a low pass or full range transducer is better with HRIRs.  Running the transducer full range means that the above-bass audible sound it generates could possibly interfere with open-back headphone imaging.  This doesnt seem to be the case with unprocessed headphone listening but for HRIR I need to nail the measurement process before I can test this scenario.
> 
> Ironically my best measurement was one of the very first few I ever did.  I have since removed the wings from my Sound Professionals mics and have been experimenting with different mic placement in the ear.  I have not been able to reproduce or even get close to that early measurement and can't figure out why. Despite trying different mic placements, the results all sound same-ish and there isnt as much variability as I expected.  These latest results all sound like a really good crossfeed filter that give you a large open headstage.  My initial measurement on the other hand sounds like I am listening to a pair of not-so-great speakers.  But the fact that it can fool my brain into thinking that mediocre sound is coming from -over there- is kind of mind blowing.  I just haven't been able to get anything close to this result so I am wondering if the mics being held in place with the wings was a factor.


I have Sound Professional mics and also cut off the wings because they were making maintaining the placement during recording very tricky.  I glued foam ear plugs onto the backs of the mics and that totally solved the problem, so the mics stayed in position.  I ended up with lots of BRIRs from multiple recording sessions and combinations of Impulcifer command line options.  Most are just ok, with various sonic issues.  Just a few are truly astonishing, and to my ears virtually perfect.  Of course ears vary, but my experience is the wings on those mics are not needed to get ideal results.


----------



## reter

Doesn't the wing help you wear the mics in the same position across multiple sessions? I noticed that with foam you can't wear them in the same position, so the measurements change slightly every time.


----------



## lowdown (Apr 26, 2022)

reter said:


> Doesn't the wing help to wear the mics in the same position multiple times? i noticed that with foam you can't wear them the same position so the measurements get slight changes evey time


Not my experience.  I had much more variation and movement of the mic position during and between measurements with the wings than with them cut off and foam ear plugs glued to the backs.  But I imagine the wings could work fine in some ears.  Also, I can't see a reason why you couldn't glue ear plugs on the back and leave the wings on as well.  If you use something like silicone caulk, which stays flexible, to glue the ear plugs, they could be removed without causing any damage to the mics.  So it could be tried both ways.


----------



## Brandon7s (Apr 26, 2022)

simplefi said:


> I know I've seen other transducers (ButtKicker?) that are meant to mount to office chairs and include a clamp for the center post.  Since you can set custom filters in Voicemeeter, I've been wanting to experiment with whether a low-pass or full-range transducer is better with HRIRs.  Running the transducer full range means that the above-bass audible sound it generates could possibly interfere with open-back headphone imaging.  This doesn't seem to be the case with unprocessed headphone listening, but for HRIR I need to nail the measurement process before I can test this scenario.
> 
> Ironically my best measurement was one of the very first few I ever did.  I have since removed the wings from my Sound Professionals mics and have been experimenting with different mic placements in the ear.  I have not been able to reproduce or even get close to that early measurement and can't figure out why. Despite trying different mic placements, the results all sound same-ish and there isn't as much variability as I expected.  These latest results all sound like a really good crossfeed filter that gives you a large, open headstage.  My initial measurement, on the other hand, sounds like I am listening to a pair of not-so-great speakers.  But the fact that it can fool my brain into thinking that mediocre sound is coming from -over there- is kind of mind blowing.  I just haven't been able to get anything close to this result, so I am wondering if the mics being held in place with the wings was a factor.


A center-post mount would be great; I'll look into transducers with such an option.

It can take a LOT of tries to narrow down what works best for you. I have 108 separate measurements that I ended up doing over a period of 3 months or so. My 66th measurement was good, but my 85th, 98th, and 103rd measurements are what I use on a regular basis. After the 85th, most of them sounded fairly similar; I spent a lot of time trying different mic mountings and depths, as well as experimenting with distance and speaker location in the room. The thing that made the most difference was to remove the silicone housing of the Sound Professionals mics I use, glue them to foam earplugs that I trimmed in half, and then glue the cables to the foam so that the capsules didn't tilt when the cable was pulled taut. Removing the housing from the capsules let me insert the mics deeper into the ear, and gluing the cable to prevent capsule tilt made the measurement variation between the left and right much less extreme. Without that, the right and left mics were getting radically different high-frequency results, since one would tilt into the wall of the ear entrance and thus get blocked. Inserting the mics into each ear at roughly the same depth also helped keep the two sides more similarly matched in frequency response, and I think that makes a big difference in the quality of the final HRIR.
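If anyone wants to sanity-check how well their two in-ear mic channels match before processing, here's a rough sketch that compares per-octave-band levels of the left and right sweep recordings. It's just an illustration (numpy only; the band edges and the idea of comparing raw sweep recordings are my own choices, not anything Impulcifer does):

```python
import numpy as np

def band_levels_db(x, fs, bands):
    """RMS level in dB for each (f_lo, f_hi) band, computed from FFT magnitudes."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    levels = []
    for lo, hi in bands:
        sel = (freqs >= lo) & (freqs < hi)
        levels.append(20 * np.log10(np.sqrt(np.mean(spec[sel] ** 2)) + 1e-12))
    return np.array(levels)

def lr_mismatch_db(left, right, fs):
    """Per-octave-band level difference (dB) between the two mic channels.

    Large, frequency-dependent differences (especially in the treble) can
    point at one capsule tilting into the ear wall or sitting at a
    different depth, as described above.
    """
    edges = [(125 * 2 ** k, 125 * 2 ** (k + 1)) for k in range(7)]  # 125 Hz .. 16 kHz
    return band_levels_db(left, fs, edges) - band_levels_db(right, fs, edges)
```

A flat offset across all bands is just a gain difference (which processing can compensate); a band-dependent difference is what suggests a placement problem.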

 I also tried inserting the mics deep into the ear without being attached to earplugs but I found those measurements to be highly inconsistent between left/right and they were also unusually bright, so I ended up going back to foam earplugs.

This is what gave me the best results, though I did trim down the length of the foam a bit in the final iteration to allow for deeper insertion: 




Personally, using the wings was a pain in the butt. They made for inconsistent results, the localisation was poor, and the EQ profile was nowhere near as good as when the mics were inserted into the ear.


----------



## morgin

I’m trying to solder the side of the ear mics to something like a paper clip, bending the paper clip so it fits around the ear like eyeglasses and keeps the mic facing out but able to go farther in. Maybe having the ear canal exposed and not blocked with foam might make a difference.


----------



## simplefi

Good to know that the wings aren't necessary to get good measurements.  I was thinking that the extra silicone material was either holding the mic at a more favorable position just outside the ear canal or possibly damping the mic's FR.  With the wings off, the round silicone casing is large enough to snugly sit flush inside my ear canal but no deeper.  I've also made a foam backing that cups the mic.  The foam sits in the ear canal and holds the mic protruding out.  
I wonder if deeper insertion would help the measurements?  I am a bit nervous to remove the mic from the silicone casing but this would be the only way for me to get a deeper fitting.  Though I think I have plenty of mounting options to try as of now so I will keep trying to perfect it.  



Brandon7s said:


> It can take a LOT of tries to narrow down what works best for you. I have 108 separate measurements that I ended up doing over a period of 3 months or so. My 66th measurement was good, but my 85th, 98th, and 103rd measurements are what I use on a regular basis.



Just curious, for those with so many attempts: were you keeping everything consistent between each take, or were you intentionally altering variables?  My initial goal is to nail a simple stereo BRIR: no room compensation, EQ, balance, or any of that, just to keep the number of variables to the bare minimum.  Once I get that down, I can tweak the other stuff.  My seating and speaker positions are constant, so the only things I'm changing are channel levels and mic placement.  But how many takes do you do for a given mic placement before deciding it's no good and trying a different placement?

I've also experimented with different mic gains and channel levels, but the results were inconclusive.  My thinking was that a high volume would definitely change things.  For open-back headphones, a high volume would mean that one ear picks up the sound from the other ear cup, but this isn't accounted for in the headphone compensation.  For speakers, a high volume would excite the room more and produce stronger reflections and longer reverb trails.  Initial measurements seemed not to vary much as long as the SNR was acceptable.
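On judging whether the SNR of a take is acceptable: a crude estimate can be pulled from the sweep recording itself, provided the recording starts with a silent lead-in before the sweep plays. A sketch (the one-second lead-in length is an assumption about your recording, not anything standardized):

```python
import numpy as np

def sweep_snr_db(rec, fs, quiet_seconds=1.0):
    """Crude SNR estimate for a sweep recording.

    Compares the RMS of an assumed-silent lead-in at the start of the take
    against the RMS of the remainder (the sweep itself). Only meaningful if
    the first `quiet_seconds` really contain no sweep signal.
    """
    n = int(quiet_seconds * fs)
    noise_rms = np.sqrt(np.mean(rec[:n] ** 2)) + 1e-12
    signal_rms = np.sqrt(np.mean(rec[n:] ** 2)) + 1e-12
    return 20 * np.log10(signal_rms / noise_rms)
```

Comparing this number between takes at different mic gains would at least make the "SNR was acceptable" judgment quantitative instead of by ear.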


----------



## Brandon7s (Apr 26, 2022)

simplefi said:


> Just curious for those with so many attempts, were you keeping everything consistent between each take, or were you intentionally altering variables?


I made so many measurements because I wanted to try out every idea I could think of: with and without the silicone housing and wings, with and without being attached to foam earplugs, different depths, different distances from the speakers, different speaker placements in the room, different preamp gains, different speaker volumes, different EQ correction on the speaker output, etc. If I could think of a variation, I tried it. Mostly just because I was curious about the effect it would have on the results, but sometimes the variations turned out great and so I'd stick with them.

I found that loudness didn't matter all that much except for exacerbating room resonance issues, so I ended up settling on a volume that was only a little louder than my typical listening levels, and I EQ'd my speakers to make the troublesome room modes less of an issue. I also tried two different pairs of Sound Professionals mics - the non-Master series pair and later on the Master series version.

One of the factors that made the most difference was distance to the speakers. Too far and there was too much reverberation and less detail; too close and it sounded harsh, the highs were exaggerated, and it was very hard to get a balanced HRIR in the left and right channels. A distance of about 5 feet from the monitors is what I found to be a good compromise. The other big factor was keeping the depth in each ear as similar as possible, while still being deep enough to capture a good EQ profile for localization. Removing the silicone housing made increased depth possible; everything before that was decent but not great.

Now that I've nailed down what works well after trying a ton of variations, I can get fairly consistent results with new measurements.


----------



## lowdown (Apr 26, 2022)

simplefi said:


> Good to know that the wings aren't necessary to get good measurements.  I was thinking that the extra silicone material was either holding the mic at a more favorable position just outside the ear canal or possibly damping the mic's FR.  With the wings off, the round silicone casing is large enough to snugly sit flush inside my ear canal but no deeper.  I've also made a foam backing that cups the mic.  The foam sits in the ear canal and holds the mic protruding out.
> I wonder if deeper insertion would help the measurements?  I am a bit nervous to remove the mic from the silicone casing but this would be the only way for me to get a deeper fitting.  Though I think I have plenty of mounting options to try as of now so I will keep trying to perfect it.
> 
> 
> ...


Here are the foam backs and the placement I used for my measurements.  I didn't intentionally try different placements.



 



My foam earplugs are a bit longer than needed, so I pushed them in as far as I could to get the positioning in the photo.  It's definitely not deep into the ear canal.  I wish I knew what variables accounted for the differences I hear between BRIR measurements, but I really have no idea.  I only did a few speaker measurements with the speakers about 6-7 ft in front of me, and I only did one room measurement with the UMIK-1 positioned where my ears were for the speaker measurements. Some BRIRs just sound much better than others.  With the best ones, the spatial and sound aspects of imaging, soundstage, balance and clarity are beyond belief.  I feel like I am at last hearing every bit of the precise spatial presence in the recordings arrayed in front of me.  I did use some command line balance options, as well as a tweaked Harman curve, and I have a few EQ tweaks in HeSuVi.  The result is so good that the only limitation on recreating ideal musical virtual reality is the quality of the recordings.  The only other factor that might help with the illusion in my setup is that I listen sitting in my normal speaker listening position, so the visual aspect of hearing is being reinforced.


----------



## simplefi

Brandon7s said:


> The other big factor was keeping the depth in each ear as similar as possible, while still being deep enough to capture a good EQ profile for localization. Removing the silicone housing made increased depth possible; everything before that was decent but not great.


Right now the deepest I can get the mics is just about flush with the ear canal entrance, or maybe 1mm in.  The casing makes it too tight to go beyond that.  Do you think that a deeper insertion would benefit in this case?  Also, what was your impression of the Masters vs non-masters series mics?  I am using the Masters series Sound Professionals mic but I may try the non masters one in the future to remove the casing if it also works well.

Finally, the last piece of the equation I've been considering is the headphone itself.  I have a Grado-style on-ear and the Sennheiser HD595 around-ear.  At first I figured that minimizing pinna activation from the headphone itself would make the headphone compensation more accurate, because it would be one less thing to compensate for. In this regard an IEM would be ideal, but since you can't wear them together with the mics for FR compensation, an on-ear would be the next best thing.  But after experimenting with my two headphones, it seems that the HD595s are much more accurate for reproducing HRIRs than the Grado.  I am tempted to try the Hifiman Ananda or HD800, as those have been tested by Rtings to have the best virtual soundstage.


----------



## simplefi

lowdown said:


> Here are the foam backs and the placement I used for my measurements. I didn't intentionally try different placements.


Here are my mics.  With the foam cups, I get about the same placement as lowdown.  Without the foam, I can get at least a flush mount.  I'll be trying out both.  Looks like I have a bit of experimentation to do.


----------



## reter

simplefi said:


> Here are my mics.  With the foam cups, I get about the same placement as lowdown.  Without the foam, I can get at least a flush mount.  I'll be trying out both.  Looks like I have a bit of experimentation to do.


I'm waiting for the Master series to arrive; let me know if you get better results with or without the foam, or if there's no difference. If I had the choice, I would avoid gluing the mic to the foam; I'd prefer to only cut the wing and do the measurements directly.


----------



## Brandon7s

simplefi said:


> Right now the deepest I can get the mics is just about flush with the ear canal entrance, or maybe 1mm in.  The casing makes it too tight to go beyond that.  Do you think that a deeper insertion would benefit in this case?  Also, what was your impression of the Masters vs non-masters series mics?  I am using the Masters series Sound Professionals mic but I may try the non masters one in the future to remove the casing if it also works well.
> 
> Finally, the last piece of the equation I've been considering is the headphone itself.  I have a Grado-style on-ear and the Sennheiser HD595 around-ear.  At first I figured that minimizing pinna activation from the headphone itself would make the headphone compensation more accurate, because it would be one less thing to compensate for. In this regard an IEM would be ideal, but since you can't wear them together with the mics for FR compensation, an on-ear would be the next best thing.  But after experimenting with my two headphones, it seems that the HD595s are much more accurate for reproducing HRIRs than the Grado.  I am tempted to try the Hifiman Ananda or HD800, as those have been tested by Rtings to have the best virtual soundstage.



Removing the housing and getting a deeper insertion made for THE most significant improvements in my results. 

Master vs. Non-Master mics: honestly, I didn't really notice much of a difference and I've had great results with both of them.

I use Anandas and love them. The results that I've gotten with Impulcifer are MUCH better with my Anandas compared to the rest of my collection, which includes the DT1990, HD6XX, ATH-R70X, and DT770 (250 ohms). The HRIRs with my Anandas are more convincing and clearer, with better localization too. I'm not sure if that's due to the greater detail that the Anandas can provide in general or due to the fact that the Anandas don't press down on the outer ear at all while all of the rest do. Either way, I highly recommend the Anandas - they are an amazing value.


----------



## simplefi

Glad to know the Anandas work well.  The results I am getting on the HD595 are adequate, but I am sure they could also be better.  Rtings did some comprehensive testing on how well headphones produce a soundstage by comparing the pinna response with the headphone on to the pinna response of a loudspeaker as a reference. The HD800's response was most similar to that of the loudspeaker, with the Ananda not far behind.


----------



## castleofargh

simplefi said:


> Glad to know the Anandas work well.  The results I am getting on the HD595 are adequate, but I am sure they could also be better.  Rtings did some comprehensive testing on how well headphones produce a soundstage by comparing the pinna response with the headphone on to the pinna response of a loudspeaker as a reference. The HD800's response was most similar to that of the loudspeaker, with the Ananda not far behind.


Rtings, for that parameter, sort of looks for a headphone that will add some of what's missing in typical stereo played on headphones, compares how close the result is to some statistical human HRTF, and adds their own subjective impressions. It's objectivity turned into cuisine. I don't blame them; at least they try to get some references and get the ball rolling. I actually like them and their various approaches a lot, but I also wouldn't go as far as calling what they do for this particular criterion scientific.

Anyway, here you're looking for a headphone that can disappear once its signature has been "cancelled" by the filter. I would strongly suggest not thinking of those as describing the same headphone attributes, because they're not. For example, in this thread, we couldn't care less if the frequency response of the headphone is close to some HRTF model or if the left and right ears are well matched.
Low distortion, a well-extended and smooth FR, a big driver: those are likely to describe the headphones giving you the best results for speaker simulation. There will be some more subjective stuff at play for some or most people, like weight and how much they isolate from the outside (in our case, isolation is bad somehow). But ultimately, chances are that an objectively good headphone is good for speaker simulation. As simple as that.


----------



## reter (Apr 28, 2022)

Guys, I bought the Master series but I didn't know it had a single stereo jack, so I can't use it with the Behringer.




I bought this https://www.amazon.it/dp/B07CZXT9GR?smid=A2SBSMJWADKVNX&ref_=chk_typ_imgToDp&th=1 like @jaakkopasanen suggested. My only concern is that the splitter is TRS female to TRS male; hopefully this doesn't cause any problems. Even the Primo mics have TRS jacks, so I think the Behringer should work as well.


----------



## jaakkopasanen

reter said:


> Guys, I bought the Master series but I didn't know it had a single stereo jack, so I can't use it with the Behringer.
> 
> 
> I bought this https://www.amazon.it/dp/B07CZXT9GR?smid=A2SBSMJWADKVNX&ref_=chk_typ_imgToDp&th=1 like @jaakkopasanen suggested. My only concern is that the splitter is TRS female to TRS male; hopefully this doesn't cause any problems. Even the Primo mics have TRS jacks, so I think the Behringer should work as well.


Looks like you bought a different adapter cable. This is the one linked in the wiki and the one I am using myself: https://www.amazon.com/gp/product/B0785VKZW4


----------



## reter

jaakkopasanen said:


> Looks like you bought a different adapter cable. This is the one linked in the wiki and the one I am using myself: https://www.amazon.com/gp/product/B0785VKZW4



I saw comments under your link saying they got TRS males, so I figured you were using those too. Can you confirm that your jacks are mono? I will see tomorrow if they work.


----------



## jaakkopasanen

reter said:


> I saw comments under your link saying they got TRS males, so I figured you were using those too. Can you confirm that your jacks are mono? I will see tomorrow if they work.


My male jacks are mono, TS


----------



## reter (Apr 28, 2022)

jaakkopasanen said:


> My male jacks are mono, TS


Sadly there's no way I can wait until June for the splitter, so I ordered this https://www.amazon.it/gp/product/B0785VM44F/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1 considering I already have a female-to-female jack adapter. Not an elegant fix, but it should work.


----------



## Brandon7s

reter said:


> Sadly there's no way I can wait until June for the splitter, so I ordered this https://www.amazon.it/gp/product/B0785VM44F/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1 considering I already have a female-to-female jack adapter. Not an elegant fix, but it should work.


You'll have plenty of cable for long distance measurements, that's for sure.


----------



## reter

Brandon7s said:


> You'll have plenty of cable for long distance measurements, that's for sure.


I really hate using adapters; I would prefer a reliable solution like the one jaakko posted, but the timeline is very important for me and 2 months for a cable is too much. I'm checking some music stores, but this is indeed a very rare splitter: I found many female TRS 3.5 mm to male 2x TS 6.3 mm, but sadly no 3.5 to 3.5, only male to male.


----------



## Brandon7s

reter said:


> I really hate using adapters; I would prefer a reliable solution like the one jaakko posted, but the timeline is very important for me and 2 months for a cable is too much. I'm checking some music stores, but this is indeed a very rare splitter: I found many female TRS 3.5 mm to male 2x TS 6.3 mm, but sadly no 3.5 to 3.5, only male to male.


Oh, I don't blame you; I'd not bother waiting until June to get the single adapter cable either! I don't think it'll hurt your results though; it likely won't be THAT long. Looking forward to hearing updates on your progress!


----------



## morgin

Quick question: I'm trying new ways of getting a better result from my best measurement. Channel balance mid is helping, but I wanted more clarification on room measurement. I have room-BL-left, room-BL-right, room-BR-left, etc. Do I need to pass an option when processing to have these room .wav files included, or does Impulcifer process everything together? I'm not sure if I should be using options such as --fr_combination_method=average and --fr_combination_method=conservative. My room measurements were done for each ear for each speaker position.


----------



## Brandon7s (Apr 30, 2022)

morgin said:


> Quick question: I'm trying new ways of getting a better result from my best measurement. Channel balance mid is helping, but I wanted more clarification on room measurement. I have room-BL-left, room-BL-right, room-BR-left, etc. Do I need to pass an option when processing to have these room .wav files included, or does Impulcifer process everything together? I'm not sure if I should be using options such as --fr_combination_method=average and --fr_combination_method=conservative. My room measurements were done for each ear for each speaker position.


It's been a while since I've used room correction or read the documentation, but I believe that as long as your file names are correct, the room correction will work just as you want it to. The fr_combination_method options are just different ways of processing the room correction vs. the default. I think of them as similar to the channel_balance options (trend, mids, etc.): they are alternatives to the default method, but the processing still uses all available recorded speaker measurements.
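Since the room correction keys off file names, a tiny check script might catch typos before a processing run. This is only a sketch: the `room-<SPEAKER>-<side>.wav` pattern and the speaker-label set below are inferred from the file names mentioned in this thread, so verify them against Impulcifer's actual documentation before relying on it.

```python
import re

# Speaker labels assumed from the channel names mentioned in this thread
SPEAKERS = {'FL', 'FR', 'FC', 'BL', 'BR', 'SL', 'SR'}
# Assumed naming scheme: room-<SPEAKER>-<left|right>.wav
PATTERN = re.compile(r'^room-([A-Z]{2})-(left|right)\.wav$')

def check_room_files(names):
    """Split file names into (recognized, unrecognized) room measurement names."""
    ok, bad = [], []
    for name in names:
        m = PATTERN.match(name)
        if m and m.group(1) in SPEAKERS:
            ok.append(name)
        else:
            bad.append(name)
    return ok, bad
```

Anything landing in the `bad` list (wrong case, wrong separator, unknown speaker label) would be silently ignored by name-based discovery, which is exactly the failure mode that's hard to notice.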


----------



## reter

My first measurement with the Master series... meh, I don't know. I get a lot of zigzag in my graph; my last graph is with headroom 22!!! Isn't that too much???








Plus the left and right mic gains are very unbalanced, even worse than the Primo mics; I had to lower the right mic a lot.





Has anyone experienced the same issue? I had to lower both the mic and headphone volume, otherwise I get a lot of zigzag even at 10 dB headroom.


----------



## Brandon7s (Apr 30, 2022)

reter said:


> My first measurement with the Master series... meh, I don't know. I get a lot of zigzag in my graph; my last graph is with headroom 22!!! Isn't that too much???
> 
> 
> 
> ...


How does it sound though? And what does the Results.png graph look like? The headphone graph isn't really all that useful for determining how it will sound but the Results.png can at least provide some insight as to the overall frequency response. 

If you're getting a lot of sharp zigzagging in the headphones.png measurement, then try lowering the playback volume in Windows/OS by about 30 notches or so and see what happens. Raise the gain on your speakers and your preamp to compensate. This solved the issue when I was getting the same thing, since it was caused by digital distortion when the sweep.wav was being played while taking measurements.

By the way, I used to try to match the gain on the microphones but stopped doing that once I realized that Impulcifer is pretty good at compensating for the difference - manually matching the gain didn't yield better results in any of my tests.
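On the digital-distortion point: a quick way to see how hot a recording (or the sweep file itself) is, plus a scaler if you'd rather attenuate the sweep digitally than move the OS volume slider. Sketch only; the 0.999 near-clip threshold is an arbitrary choice of mine:

```python
import numpy as np

def peak_dbfs(x):
    """Peak level relative to digital full scale, in dB."""
    return 20 * np.log10(np.max(np.abs(x)) + 1e-12)

def clipped_fraction(x, threshold=0.999):
    """Fraction of samples at or above the (near-)clipping threshold."""
    return float(np.mean(np.abs(x) >= threshold))

def attenuate_db(x, db):
    """Scale a signal down by `db` decibels."""
    return x * 10 ** (-db / 20)
```

You could read sweep.wav with soundfile (the same library used in the script shared in this thread), scale it with `attenuate_db`, and write it back with a float subtype so nothing re-clips on export; whether that behaves identically to lowering the OS volume depends on where the distortion actually occurs in your chain.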


----------



## reter

Brandon7s said:


> How does it sound though? And what does the Results.png graph look like? The headphone graph isn't really all that useful for determining how it will sound but the Results.png can at least provide some insight as to the overall frequency response.
> 
> If you're getting a lot of sharp zigzagging in the headphones.png measurement, then try lowering the playback volume in Windows/OS by about 30 notches or so and see what happens. Raise the gain on your speakers and your preamp to compensate. This solved the issue when I was getting the same thing, since it was caused by digital distortion when the sweep.wav was being played while taking measurements.
> 
> By the way, I used to try to match the gain on the microphones but stopped doing that once I realized that Impulcifer is pretty good at compensating for the difference - manually matching the gain didn't yield better results in any of my tests.



Thank you. I'm using the HRIR right now and it seems like the front left is a little bit inaccurate. I tried some games and I can localize the front right very well, but the front left is not well localized; the side left is well localized though.

this is my pre folder https://imgur.com/a/qqj6Ggw
and this is my post folder https://imgur.com/a/dMqtQdR

Do you see any discrepancies in the measurements? What should I pay attention to in the pre and post graphs?

I placed some pillows here and there to tame the excessive reverb I have, and I think I exaggerated. Tomorrow I'll try lowering the playback volume in Windows like you suggest and remove some of the pillows. Thank you!


----------



## Brandon7s (Apr 30, 2022)

reter said:


> Do you see any discrepancies in the measurements? What should I pay attention to in the pre and post graphs?



Whoah, your left and right measurements have WILDLY different noise levels! -- NEVERMIND, I just noticed that I had viewed one from your Post and one from your Pre folder by accident, oops! That's why the variance was so huge. Looking to see if I notice anything sticking out that might point to why the front left doesn't sound quite right...

Looking at your FC-left and FC-right measurements, it seems that you're getting fairly consistent measurements from the microphones, which is good, though there does seem to be some low-end ringing fairly late in the spectrogram below 10 Hz. That shouldn't affect localization too much. It might cause an annoying kind of low-end echo that MIGHT be noticeable, but that's so low that I doubt it'd be a problem. Overall, it looks good to me from what I can see in these plots.
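If that kind of subsonic ringing ever did become audible, one option would be to high-pass the response below the audible range before use. A minimal one-pole high-pass sketch (numpy only, mono signal; the 20 Hz cutoff is my assumption, and Impulcifer's own processing may already handle this):

```python
import numpy as np

def highpass(x, fs, fc=20.0):
    """First-order (one-pole) high-pass to tame subsonic content.

    y[i] = alpha * (y[i-1] + x[i] - x[i-1]), the standard discrete RC
    high-pass; gentle 6 dB/octave slope below fc.
    """
    rc = 1.0 / (2 * np.pi * fc)
    alpha = rc / (rc + 1.0 / fs)
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * (y[i - 1] + x[i] - x[i - 1])
    return y
```

A gentle first-order slope keeps phase disturbance in the audible band small; a steeper filter would remove the rumble faster but is more intrusive near the cutoff.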

My guess is that the poor localization you're getting on the front left might be a combination of speaker placement and possibly head movement. Are you getting a nice, strong phantom center in the listening position where you recorded these? One thing I've found with Impulcifer is that little changes in position make a BIG difference, so getting your speaker and sitting positions as optimal as possible matters even more than it does for ordinary speaker listening.

Also, is this using a measurement mic and Impulcifer's room correction, and if so, are you using generic measurements or ear-specific ones? It might be worth processing the same BRIR without the room correction just to see if it's over-correcting anything due to incorrect mic placement. That can really throw a wrench in the quality of the results if not done correctly. I've found that to be less of an issue with generic room correction processing though, so if you're using ear-specific measurements it might be worth trying a single center-of-head measurement and using that for troubleshooting purposes, if nothing else.


----------



## morgin

This is a bit weird. I have been experimenting a little and found that, with Impulcifer already including the headphone equalization, I get good localization in my 5.1/7.1 games and movies. I then extracted my headphone EQ from Impulcifer using the script below (as given earlier) and also included that in the HeSuVi virtualization. I know having double EQ should be bad, but the localization and surround effect are so much more pronounced that I'm having a hard time not using it. Can someone else try it and give their views?

```python
import os

import numpy as np
import soundfile as sf
from matplotlib import pyplot as plt

from autoeq.frequency_response import FrequencyResponse
from impulse_response import ImpulseResponse
from impulse_response_estimator import ImpulseResponseEstimator


def equalize(dir_path,
             sweep_duration=5,
             target=('flat',),
             record='headphones.wav',
             fig='fig.png',
             geq='geq.txt',
             channel_balance=True,
             max_gain=40, treble_f_lower=10000, treble_f_upper=20000):
    # Read the headphone sweep recording and estimate one IR per channel
    record_path = os.path.join(dir_path, record)
    record_data, sr = sf.read(record_path)
    ire = ImpulseResponseEstimator(min_duration=sweep_duration, fs=sr)

    frs = []
    for ch in range(record_data.shape[1]):
        ir = ImpulseResponse(ire.estimate(record_data[:, ch]), sr)
        fr = ir.frequency_response()
        fr.interpolate()

        # Fall back to the first target if fewer targets than channels were given
        if ch >= len(target):
            target_index = 0
        else:
            target_index = ch

        if target[target_index] == 'flat':
            target_fr = FrequencyResponse(name='flat', frequency=fr.frequency, raw=np.zeros(len(fr.frequency)))
        else:
            target_fr = FrequencyResponse.read_from_csv(target[target_index])

        target_fr.interpolate()
        fr.compensate(target_fr)
        fr.equalize(max_gain=max_gain, treble_f_lower=treble_f_lower, treble_f_upper=treble_f_upper)
        frs.append(fr)

    # Match channel levels by their mean equalized level between 100 Hz and 3 kHz
    if channel_balance:
        ref = np.mean(frs[0].equalized_raw[np.logical_and(frs[0].frequency >= 100, frs[0].frequency <= 3000)])
        gains = []
        for fr in frs:
            gain = ref - np.mean(fr.equalized_raw[np.logical_and(fr.frequency >= 100, fr.frequency <= 3000)])
            gains.append(gain)

    # Write an Equalizer APO graphic EQ file with one section per channel
    if geq:
        channel_names = 'L R C LFE BL BR SL SR'.split()
        geq_full = ''
        max_raw = 0
        for ch in range(len(frs)):
            geq_full += f'Channel: {channel_names[ch]}\n'
            if channel_balance:
                geq_full += f'Preamp: {gains[ch]} dB\n'
                max_raw = np.max(frs[ch].equalization)
            geq_full += frs[ch].eqapo_graphic_eq(normalize=False) + '\n'
        geq_full += f'\nChannel: ALL\nPreamp: {-max_raw} dB'

        geq_path = os.path.join(dir_path, geq)
        with open(geq_path, 'w') as geq_file:
            geq_file.write(geq_full)

    # Plot the equalized left/right responses
    if fig:
        fig_path = os.path.join(dir_path, fig)
        figure = plt.figure()
        ax = plt.gca()
        figure.set_size_inches(10, 5)
        frs[0].plot_graph(fig=figure, ax=ax,
                          show=False, error=False, equalization=False, equalized=True, target=False,
                          raw_plot_kwargs={"label": "Left", "color": "red", "linewidth": 1})
        frs[1].plot_graph(fig=figure, ax=ax,
                          show=False, error=False, equalization=False, equalized=True, target=False,
                          raw_plot_kwargs={"label": "Right", "color": "blue", "linewidth": 1})
        plt.savefig(fig_path)


equalize(dir_path=r"C:\Windows\System32\Impulcifer\data\demo")
```


----------



## reter (Apr 30, 2022)

Brandon7s said:


> Whoah, your left and right measurements have WILDLY different noise levels! -- NEVERMIND, I just noticed that I had viewed one from your Post and one from your Pre folder on accident, oops! That's why the variance was so huge. Looking to see if I notice anything sticking out that might point to why the front left doesn't sound quite right...
> 
> Looking at your FC-left and FC-right measurements, it seems that you're getting fairly consistent measurements from the microphones, which is good, though there does seem to be more low-end ringing fairly late on the spectrogram at below 10hz, but that shouldn't affect localization too much. It might cause an annoying kind of low-end echo that MIGHT be noticeable, but that's so low that I doubt it'd be a problem. Overall, it looks good to me from what I can see from these plots.
> 
> ...



Yeah, I only did the room correction for the center of the head. This is my first measurement with this master series and I moved some pillows, so I was expecting the measurement not to be what I want to achieve. Tomorrow I will lower the mic gain in Windows as you suggested and remove some of the pillows. Anyway, the UMIK does pick up a lot of noise due to the bad audio of the integrated motherboard; is that a problem?


Anyway, I'm pretty sure it can be something related to the mic placement in my ears, or the right mic being set too low compared to the left one; I'm not sure. I use a single speaker for the measurement, so it can also be related to my position during the measurements. Of course my room is NOT ideal for recording (it's my bedroom), but I'm trying my best to be in the center of the room. Do you still suggest I set the audio interface to instrument mode for both mics?


----------



## Brandon7s (Apr 30, 2022)

reter said:


> Yeah, I only did the room correction for the center of the head. This is my first measurement with this master series and I moved some pillows, so I was expecting the measurement not to be what I want to achieve. Tomorrow I will lower the mic gain in Windows as you suggested and remove some of the pillows. Anyway, the UMIK does pick up a lot of noise due to the bad audio of the integrated motherboard; is that a problem?
> 
> 
> Anyway, I'm pretty sure it can be something related to the mic placement in my ears, or the right mic being set too low compared to the left one; I'm not sure. I use a single speaker for the measurement, so it can also be related to my position during the measurements. Of course my room is NOT ideal for recording (it's my bedroom), but I'm trying my best to be in the center of the room. Do you still suggest I set the audio interface to instrument mode for both mics?




Ah, using one speaker for the measurements is definitely doable with great results, but it's also tough since you can't get a good feel for the stereo image and tweak placement as you go. If you're not already, I recommend using a measuring tape and marking the positions for the speakers ahead of time. I found that very helpful when I tried doing single-speaker measurements.

It's going to take a bit more trial and error than it would with two speakers but if you stick with it I think you'll be impressed with the results.

I don't have a UMIK mic, but measurement microphones in general are noisy; that's just inherent in their design. So that is expected and likely won't have much of an impact on the room correction. I wouldn't worry too much about that.

 In my experience the vast majority of the issues with my past measurements have been caused by the in-ear mic placement, especially problems caused by one being tilted towards the wall of the ear and getting muffled while the other is unobstructed.

I don't think the Instrument button has an effect when you're not connecting to the front inputs via 1/4-inch instrument cables, but I'd make sure they are both set to the same setting in case it does affect the gain. If it DOES affect gain, then setting it to Instrument would supply more gain and Line would give less.


----------



## simplefi (May 1, 2022)

reter said:


> Did someone experience the same issue? I had to lower both the mic and headphone volume, otherwise I get a lot of zigzag even at 10 dB headroom


I have the same interface and mics and my results are similar to yours. My mics seem more closely matched than yours, but I wouldn't worry about not having the gain knobs at exactly the same level for both channels. I adjust them to where they are more or less matched and don't give too much weight to their physical position. I also have a different interface that I will try, to see if the results are the same.



reter said:


> do you still suggest me to set the audio interface in instrument mode for both the mics?


The line/instrument button has no effect when XLR mics are plugged in.


----------



## reter (May 1, 2022)

simplefi said:


> I have the same interface and mics and my results are similar to yours. My mics seem more closely matched than yours, but I wouldn't worry about not having the gain knobs at exactly the same level for both channels. I adjust them to where they are more or less matched and don't give too much weight to their physical position. I also have a different interface that I will try, to see if the results are the same.
> 
> 
> The line/instrument button has no effect when XLR mics are plugged in.


How much headroom do you get when doing good headphone measurements with the MS? Can you also tell me how you set the mic knobs?

I tried the trick @Brandon7s suggested, lowering the mic volume in Windows, but it doesn't help; I still get zigzag when I'm lower than 22 dB headroom.








EDIT

I removed some pillows and cut the ear foam a little bit so I could place the mics deeper inside.

I like the results despite the 11 dB headroom for the mics and 20 dB for the speaker measurements.

This is my test n. 11: https://drive.google.com/file/d/14tmXSLxXXXSXMioAZ_GW-gDeh67yqsMM/view?usp=sharing

Now I start to understand why you guys do 100+ measurements; it's like discovering a new treasure every time, you just can't stop trying new stuff.






Also, there's not much difference with and without the channel balance set to TREND. Very good; this means I got good positioning this time, right?


----------



## Brandon7s (May 1, 2022)

reter said:


> How much headroom do you get when doing good headphone measurements with the MS? Can you also tell me how you set the mic knobs?
> 
> I tried the trick @Brandon7s suggested, lowering the mic volume in Windows, but it doesn't help; I still get zigzag when I'm lower than 22 dB headroom.
> 
> ...


When you say zigzag, are you really referring to the big spikes/dips at about 12 kHz or so? That's the only place I see any real zigzagging; the rest looks good to me. My headphone graphs for some of my favorite measurements look FAR more uneven and choppy than yours below that point, and they sound great.

The spikes/dips above 12 kHz are a bit extreme but aren't too far off from how the high-frequency content looks on my own measurements. Most of that isn't going to be audible, though, since it's such high frequency. I don't know what causes it, but it doesn't look like distortion; distortion is easier to see in the low frequencies, and those look decent on your graph. My guess is they are caused by noise, since the pattern looks so irregular. I doubt that such small variances will have a noticeable impact on the final results, though.

By the way, if you want to reduce your headroom, crank up the preamp gain on your mic inputs. It looks like you have plenty of room to turn it higher, judging from the knob positions in your photos. I usually aim for 6 dB, but I've gotten excellent-sounding measurements at above 10 dB before; 11 dB isn't too low for good results. The speaker measurement being at about 20 dB is a bit low, though. How high do you have the gain knobs when taking the speaker measurements? If you've got more room to turn up the preamp gain, try it.

If you have the gain maxed on the speaker measurements, though, then that's a bit more of a challenge, and the only other option is to make your speakers louder or get another interface (or stand-alone preamp) with more preamp gain on tap.


----------



## reter (May 1, 2022)

Brandon7s said:


> When you say zigzag, are you really referring to the big spikes/dips at about 12 kHz or so? That's the only place I see any real zigzagging; the rest looks good to me. My headphone graphs for some of my favorite measurements look FAR more uneven and choppy than yours below that point, and they sound great.
> 
> The spikes/dips above 12 kHz are a bit extreme but aren't too far off from how the high-frequency content looks on my own measurements. Most of that isn't going to be audible, though, since it's such high frequency. I don't know what causes it, but it doesn't look like distortion; distortion is easier to see in the low frequencies, and those look decent on your graph. My guess is they are caused by noise, since the pattern looks so irregular. I doubt that such small variances will have a noticeable impact on the final results, though.
> 
> ...



No, I was referring to the up-and-down curves I get at 200 Hz and above; I never had those with the Primo mics...

The speaker output knob is set at 2 o'clock, and the mic knobs are also somewhere around there, while the mic gain in Windows is set to 30 like you see in the screenshot.





I'm very happy with the results despite the 20 dB headroom for the speaker. Tomorrow I will try again.


----------



## simplefi

reter said:


> How much headroom do you get when doing good headphone measurements with the MS? Can you also tell me how you set the mic knobs?
> 
> I tried the trick @Brandon7s suggested, lowering the mic volume in Windows, but it doesn't help; I still get zigzag when I'm lower than 22 dB headroom.


I get around 8-12 dB headroom, sometimes a little more or less. I make sure the room is quiet (no fans or other ambient noise) and adjust the gains so that the signal LEDs are blinking at about the same rate for both channels. I found that if you back off the gain to the point where you aren't picking up ANY signal, the overall headroom is too high. Picking up a little intermittent signal on the indicator before you start isn't a problem if your recorded signal is at a decent volume; you will get a good SNR. The knobs are around the 12 o'clock position or maybe higher. I'll verify in the headphones plot that the L and R channels are about even in amplitude. Here's one from my latest session. I also have the small zigzag at around 1 kHz; not sure why, but mine seems slightly less severe than yours.


----------



## reter (May 2, 2022)

OK, I did 3 measurements today: same position, same room, same everything. I cut some of the ear foam to be able to plug the mics in deeper, and I reset the mic gain to 100 in Windows. Considering the previous measurement, I was expecting a better result. Oh my, they are the worst of the worst for some reason! No clarity, the sound is muffled and inside my head; it's the total opposite of what I achieved in the previous measurement!!!
I spent a lot of time positioning the mics inside my ears so they were facing exactly out of my ears.

I somehow got both good headroom (4 dB or so) and a good curve, but the result is bad bad bad!




PRE: https://imgur.com/a/aDYivdQ
POST: https://imgur.com/a/OrFz1RH


you can compare it with my best measurement with the same speaker positioning, the one i got 11db headroom for headphones and 20db for speakers:
PRE: https://imgur.com/a/0QvFnnJ
POST: https://imgur.com/a/wHCxeFy



Again, I'm struggling to find a reason why I got such a difference even though the speaker position is the same for both measurements. Maybe the headroom is too low? Maybe it's that I set the mic gain back to 100 in Windows? Or is the reason that I cut the ear foam to slip them in deeper? I don't really know; I'm shocked, really.





EDIT

Seems like a stability factor. Probably cutting the foam wasn't the smartest idea: the mics weren't stable anymore, so they moved during the measurement, and that's why I get the muffled sound.


----------



## Brandon7s (May 2, 2022)

reter said:


> OK, I did 3 measurements today: same position, same room, same everything. I cut some of the ear foam to be able to plug the mics in deeper, and I reset the mic gain to 100 in Windows. Considering the previous measurement, I was expecting a better result. Oh my, they are the worst of the worst for some reason! No clarity, the sound is muffled and inside my head; it's the total opposite of what I achieved in the previous measurement!!!
> I spent a lot of time positioning the mics inside my ears so they were facing exactly out of my ears.
> 
> I somehow got both good headroom (4 dB or so) and a good curve, but the result is bad bad bad!
> ...


Your frequency response graphs in the POST folder show a pretty big valley between 4kHz and 9kHz which is probably why they sound so muffled. As to what caused that dip, there's no way for us to know for sure. I would assume that it's due to mic placement though and the only way to sort that out is through trial and error, unfortunately.

 Are you doing the full 7.1 every time you make a measurement? I'd recommend just doing normal stereo measurements with just the front left and front right speakers while you are experimenting. I can't imagine that doing full surround measurements is a quick process when using one speaker, but if it is then ignore that recommendation. 

Try out different depths systematically, if you can. One thing that I did after about 60 different measurements, instead of being smart and doing it early on, was to make 3 measurements in a session at different mic depths: starting with the mics inserted as far as possible without serious discomfort and then pulling them out a little bit at a time for the following measurements, so I ended up with a deep measurement, a medium-depth one, and then a shallow one. I then tried out all three and greatly preferred the deepest one. That might not be the case for everyone, though, and the only way to find out is to try it. Unfortunately this is the hard part about using Impulcifer: everyone's ears are different, and the mics aren't purpose-built for this kind of thing and are very easy to accidentally obstruct due to their size. You might also want to try changing the path of the mic cables, too. The cable is pretty thick on these mics compared to the structure of the ear, so if you normally wrap the cable up and over the ear like one typically does with IEMs, then maybe try having them hang loose straight towards the ground like one would with earbuds instead, and see if that makes any difference.




morgin said:


> This is a bit weird. I have been experimenting a little and found that, with Impulcifer already including the headphone equalization, I have good localization in my 5.1/7.1 games and movies. I then extracted my headphone EQ from Impulcifer by using the script below as given earlier, and also included that in the HeSuVi virtualization. I know having double EQ should be bad, but the localization and surround effect is so much more pronounced that I'm having a hard time not using it. Can someone else try it and give their views?
> 
> import os
> 
> ...


The furthest I've gotten into programming is making AutoHotkey scripts to make my life easier, but I'd be willing to try this out if I knew how to use it. Would I just paste all of this into a text file or something?


----------



## reter (May 2, 2022)

Brandon7s said:


> Your frequency response graphs in the POST folder show a pretty big valley between 4kHz and 9kHz which is probably why they sound so muffled. As to what caused that dip, there's no way for us to know for sure. I would assume that it's due to mic placement though and the only way to sort that out is through trial and error, unfortunately.
> 
> Are you doing the full 7.1 every time you make a measurement? I'd recommend just doing normal stereo measurements with just the front left and front right speakers while you are experimenting. I can't imagine that doing full surround measurements is a quick process when using one speaker, but if it is then ignore that recommendation.
> 
> ...



Yeah, I was doing the whole 7.1 measurement every time; I will try stereo then... Do I have to switch to stereo in the Windows audio properties, or can I keep 7.1?

It's not easy for me to do what you suggested: the ear plugs push themselves back out little by little, and I noticed that my left canal is tighter, so it's a pain every time.

I really don't think the curve from 4 kHz to 9 kHz is the problem here; my old test 11 also has those valleys but doesn't sound this muffled. Also, I tried multiple positions and found out that my best clarity/spaciality is at around 3 ft, but now I'm facing this weird problem. I did one more test today, which is the last due to the late hour. I think the ear foam wasn't the cause; tomorrow I will do the test again following the same route I did for test 11, and I will see.


----------



## morgin (May 2, 2022)

Here’s the script by @conql



conql said:


> This is the script I use to generate standalone headphone compensation. Modified from impulcifer, so it should sound the same.
> 
> https://1drv.ms/u/s!AqwTOUFQXDBFlHNqm_iBjs1V5NJ_?e=H2flUl
> 
> ...





conql said:


> sorry I didn't make it clear. Perhaps you didn't activate the virtual environment.
> 
> 
> 
> ...



Then you should have the text file, and you use that in the virtualization in HeSuVi. The graph shows flat, but it increases the positioning and clarity by, let's say, 2x for some reason. Let me know your results please.


----------



## Brandon7s (May 2, 2022)

reter said:


> Yeah, I was doing the whole 7.1 measurement every time; I will try stereo then... Do I have to switch to stereo in the Windows audio properties, or can I keep 7.1?
> 
> It's not easy for me to do what you suggested: the ear plugs push themselves back out little by little, and I noticed that my left canal is tighter, so it's a pain every time.
> 
> I really don't think the curve from 4 kHz to 9 kHz is the problem here; my old test 11 also has those valleys but doesn't sound this muffled. Also, I tried multiple positions and found out that my best clarity/spaciality is at around 3 ft, but now I'm facing this weird problem. I did one more test today, which is the last due to the late hour. I think the ear foam wasn't the cause; tomorrow I will do the test again following the same route I did for test 11, and I will see.


You can leave your audio settings at 7.1 if you're listening to stereo sources like Spotify or other music, which is how I do the majority of my testing.

My left ear opening is also tighter than my right ear. Have you removed the silicone housing around the mic capsule, by the way? I had to do that in order to get reasonable depth without it being super uncomfortable and without the mics popping back out. I also had to cut the foam tips nearly in half to help with that problem.

If some of your good measurements have that valley around 7 kHz, then that's likely not the issue, agreed. It's very difficult to look at a graph and tell how good something will sound to you unless you've made a bunch of measurements that are good and know what they should roughly look like.



morgin said:


> Here’s the script by @conql
> 
> 
> 
> ...


Thanks! I'll try to give this a shot today and will let you know if I have any difficulties getting it working, and the results if I DO get it working.


----------



## simplefi

One thing I found to keep in mind for getting good measurements is to be completely still when measuring your speakers. It is very easy to move slightly and involuntarily. For example, if you shift your head or neck position by a couple of mm, it'll be equivalent to having a different head shape by that amount and may start to sound like listening to someone else's HRIR.


----------



## morgin

simplefi said:


> One thing I found to keep in mind for getting good measurements is to be completely still when measuring your speakers.  It is very easy to involuntarily move slightly.  For example, if you shift your head or neck position by a couple mm, it'll be equivalent to having a different headshape by that amount and may start to sound like listening to someone else's HRIR.



I agree; my best one was when I was completely still on a swivel chair and just moving my eyes to copy and paste the commands. I was making sure that my head turned exactly around the center, which meant I couldn't lean into the back rest. Centering my head on the swivel point was the key. Also making sure the mics were fixed in place, because something like swallowing spit pushed the foam out slightly.

Would there be a way to have Impulcifer automatically start each sweep for a different speaker after a couple of seconds, so it's just one input and then we can concentrate on just moving the head?


----------



## musicreo (May 3, 2022)

morgin said:


> Would there be a way to have Impulcifer automatically start each sweep for different speaker after a couple of seconds so its just one input and then we can concentrate on just moving the head?


You can use something like this in the command window:
&& = commands that are executed one after another
pause = pause until a key is pressed
timeout /T 10 = pause of 10 seconds

This can look like this:
pause && timeout /T 18 && python recorder.py --play="data/sweep-seg-FR-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/FC_1.wav" && timeout /T 3 && python recorder.py --play="data/sweep-seg-FR-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/FC_2.wav" && timeout /T 10 && python recorder.py --play="data/sweep-seg-FR-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/FC_3.wav" && pause
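The same sequence can also be scripted in Python instead of chaining cmd commands. A rough sketch, assuming the recorder.py interface quoted above; the recording names, output folder, and gap lengths here are just illustrative:

```python
import subprocess
import time

# Sweep file name as used in the cmd one-liner above.
SWEEP = "data/sweep-seg-FR-stereo-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav"

def build_command(name, out_dir="data/my_hrir"):
    """Argument list for one recorder.py run, recording to <name>.wav."""
    return ["python", "recorder.py", f"--play={SWEEP}",
            f"--record={out_dir}/{name}.wav"]

def run_sequence(names, gap_seconds=10):
    """Run the sweeps back to back, sleeping between them so the
    listener has time to settle toward the next speaker position."""
    for i, name in enumerate(names):
        if i > 0:
            time.sleep(gap_seconds)
        subprocess.run(build_command(name), check=True)

# e.g. run_sequence(["FC_1", "FC_2", "FC_3"], gap_seconds=18)
```

Swapping time.sleep for a played announcement file would give the spoken "next speaker" prompts mentioned later in the thread.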


----------



## reter

Brandon7s said:


> You can leave your audio settings at 7.1 if you're listening to stereo sources like Spotify or other music, which is how I do the majority of my testing.
> 
> My left ear-opening is also tighter than my right ear. Have you removed the silicone housing around the mic capsule, by the way? I had to do that in order to be able to get reasonable depth without it it being super uncomfortable and without the mics popping back out.  I also had to cut the foam tips nearly in half to help with that problem.
> 
> ...


Yep, I removed the silicone housing the same day I got the mics, since it's useless. I cut a pair of foam tips, but I can't get them very deep without risking them slipping away. Sadly, I formatted my PC some days ago and forgot that Impulcifer saves the HRIR on the same drive as Windows, LOL. Very sad that I can't compare those measurements with the ones I got these days. Anyway, the test 11 measurement is much better than everything I got in the previous measurements (mostly reverb-free and very good clarity, a little bit less spaciality, but I'm working on it).



morgin said:


> Here’s the script by @conql
> 
> 
> 
> ...


Thank you. I was expecting something like Atmos in Mesh2HRTF, so I left it aside, but if it can actually help get a better measurement with Impulcifer, I will definitely do it one day.

What did you do with the SOFA files from Mesh2HRTF, exactly? Did you merge them somewhere in Impulcifer?



musicreo said:


> You can use in the command window something like this:
> &&= commands that are executed after each other
> pause=pause until pressing any key
> timeout/T10=pause of 10 seconds
> ...


Great! I will try this command tomorrow; this will make my life so much easier!


----------



## Brandon7s (May 2, 2022)

morgin said:


> Here’s the script by @conql
> 
> 
> 
> ...



Holy smokes.

I just tried this and my mind is blown. On paper, I'd have expected this to sound super harsh and unnatural but it's like you said, it somehow seems to be improving clarity by at least a factor of two. I've only tried the 7.1 surround test that comes with Hesuvi when it comes to multi-channel audio so far, but localization also seems highly improved. What the heck is going on here! If I hadn't tried it myself I wouldn't believe it.

Before trying it, I expected that this might improve localization due to exaggerating the HRTF a bit, but I also expected the tonal balance to be way off and for music to sound harsh and weak; yet it sounds pretty great. It strongly resembles being in the very strong phantom center of a great pair of speakers, much better speakers than I actually own. Impulcifer sounds great on its own, but it's never felt quite as fully immersive as being in my speakers' sweet spot. This seems to add another layer of immersion on top of what I was already getting. Very odd, but good.

Oh man, I'm going to be playing with this all week long trying to figure out what's going on. Great find, @morgin! I'm going to spend some time with this and see if I can notice any downsides, but so far I've not noticed any besides some of the high frequencies being a bit brighter than I would like. Since this is all from a parametric EQ, I can easily tweak the parts that I find excessive.

@jaakkopasanen: I would be very interested in hearing your thoughts on this. I'm referring to using the EQ curve generated by @conql's script, which it pulls from headphone.wav files, _while_ using an Impulcifer-generated BRIR as usual. It might be something that you can implement into Impulcifer as a post-processing option to help tweak clarity.


----------



## morgin (May 2, 2022)

reter said:


> thank you, i was expecting something like atmos in mesh2hrtf so i left it away but if it can actually help to get better measurement with impulcifer i will definitely do that one day
> 
> what did you do with the sofa files of mesh2hrtf exactly? you merged somewhere in impulcifer?



I'm actually not using the Mesh2HRTF SOFA until there's a way to increase the distance of the sound. At the moment it does sound like Dolby but a bit better, yet still way too close to my head. The EQ I'm using is the one baked into my HRIR by Impulcifer, plus the extracted EQ from the script by @conql.



Brandon7s said:


> I just tried this and my mind is blown. On paper, I'd have expected this to sound super harsh and unnatural but it's like you said, it somehow seems to be improving clarity by at least a factor of two. I've only tried the 7.1 surround test that comes with Hesuvi when it comes to multi-channel audio so far, but localization also seems highly improved. What the heck is going on here! If I hadn't tried it myself I wouldn't believe it.



Sweet, I'm glad it's working on your end too and not just some fluke on my end. Thank you for trying, because I know it sounded like a hack, and just learning how to use that script was an effort (it was for me, anyway), but you still gave it a go. I hope the others try it and have good results too. So far I haven't noticed any downsides. Maybe somehow tripling the EQ will make it better?


----------



## reter

Brandon7s said:


> Holy smokes.
> 
> I just tried this and my mind is blown. On paper, I'd have expected this to sound super harsh and unnatural but it's like you said, it somehow seems to be improving clarity by at least a factor of two. I've only tried the 7.1 surround test that comes with Hesuvi when it comes to multi-channel audio so far, but localization also seems highly improved. What the heck is going on here! If I hadn't tried it myself I wouldn't believe it.
> 
> ...


What about spaciality? Does this method actually increase clarity without sacrificing spaciality?


----------



## Brandon7s (May 2, 2022)

reter said:


> What about spaciality? Does this method actually increase clarity without sacrificing spaciality?


It's a bit tough to explain. I'd say that it makes the illusion of listening to speakers in front of me less strong and makes the room reflections in the BRIRs less apparent, so in that way I think it does reduce the sense of space. However, it also increases the sense of space that I get from the _source material itself_, and localizing audio in surround is actually a little easier, from the little bit of surround testing I've done.

 So it's less like listening to speakers and more like I'm just in the room with the musicians, I'd say. It definitely changes the perception of the space that I'm in more than when not using the EQ, which always sounds like a variation of my own room.


----------



## reter (May 2, 2022)

Brandon7s said:


> It's a bit tough to explain. I'd say that it makes the illusion of listening to speakers in front of me less strong and makes the room reflections in the BRIRs less apparent, so in that way I think it does reduce the sense of space. However, it also increases the sense of space that I get from the _source material itself_, and localizing audio in surround is actually a little easier, from the little bit of surround testing I've done.
> 
> So it's less like listening to speakers and more like I'm just in the room with the musicians, I'd say. It definitely changes the perception of the space that I'm in more than when not using the EQ, which always sounds like a variation of my own room.


In that case, you're telling me that it's awesome.

If you guys can, do multiple tests with multiple different HRIR positionings and see what it does when you're faaar away or very close to the speakers.

So actually, to do this we need an iPhone X to do a mesh scan of our head, am I right?


----------



## simplefi

Brandon7s said:


> Oh man, I'm going to be playing with this all week long trying to figure out what's going on. Great find, @morgin! I'm going to spend some time with this and see if I can notice any downsides, but so far I've not noticed any besides some of the high frequencies being a bit brighter than I would like. Since this is all from a parametric EQ, I can easily tweak the parts that I find excessive.


If I am understanding this right, is this just generating the headphone compensation EQ as a separate parametric EQ in Equalizer APO? In that case, running this with a measurement made using "--no_headphone_compensation" should sound the same as a normal Impulcifer measurement. Running it as you are, it is applying twice as much headphone compensation, but it seems the results are speaking for themselves.


----------



## morgin

reter said:


> In that case, you're telling me that it's awesome.
> 
> If you guys can, do multiple tests with multiple different HRIR positionings and see what it does when you're faaar away or very close to the speakers.
> 
> So actually, to do this we need an iPhone X to do a mesh scan of our head, am I right?



No, this is just using Impulcifer. Take your generated HRIR from Impulcifer, then use the script by @conql to extract your EQ and add that into the HeSuVi virtualization.


----------



## Brandon7s (May 3, 2022)

simplefi said:


> If I am understanding this right, is this just generating the headphone compensation EQ as a separate parametric EQ in Equalizer APO? In that case, running this with a measurement made using "--no_headphone_compensation" should sound the same as a normal Impulcifer measurement. Running it as you are, it is applying twice as much headphone compensation, but it seems the results are speaking for themselves.



Yeah, that's what I thought, though I didn't expect it to work as well as it does.

I'm seeing it as a clarity enhancer right now, acting as a "sharpener" to improve contrast. It seems to cut mud from what I'm pretty sure is my terrible room. It's not treated in the slightest, though I _do_ do some EQ work on my measurements in order to tame the big room modes that are pretty obvious and easy to find and fix, but that's only in the sub-200 Hz frequencies. I think this would be less useful for people with well-treated rooms and great speakers.

It works much better with my BRIRs that have low variance in their headphones.png measurements, which makes sense, since throwing compensation on top of already-compensated measurements would just take things back to being uncompensated, of course. But I am enjoying it for the measurements I have which are well-balanced without using headphone compensation. I'm going to have to try processing some BRIRs with --no_headphone_compensation and then tweak the matching EQ curve generated from this script to exaggerate the peaks and valleys by "stretching" the curve vertically; that'd probably be less prone to imbalance issues than double-compensating while still letting me tweak clarity.
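One hedged way to do that "vertical stretch" is to scale every dB gain in an Equalizer APO GraphicEQ line by a constant factor; since dB gains add, a factor of 2 is equivalent to applying the compensation twice. The function name and the two-point curve below are made up for illustration:

```python
def stretch_graphic_eq(line, factor):
    """Scale every dB gain in an Equalizer APO 'GraphicEQ:' line by `factor`.
    factor=2.0 doubles the compensation, factor=0.5 halves it."""
    prefix, points = line.split(":", 1)
    scaled = []
    for point in points.split(";"):
        freq, gain = point.split()  # each point is "frequency gain_dB"
        scaled.append(f"{freq} {float(gain) * factor:.2f}")
    return f"{prefix}: " + "; ".join(scaled)

# Example: double the gains of a two-point curve
print(stretch_graphic_eq("GraphicEQ: 20 -1.50; 1000 3.00", 2.0))
# → GraphicEQ: 20 -3.00; 1000 6.00
```

Pasting the scaled line back into the EQ text file would then give a tunable amount of the extra compensation instead of a fixed double dose.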


----------



## musicreo

reter said:


> great! i will try this command tomorrow, this will make my life so much easier!



I would suggest copying the activate.bat file, renaming it (e.g. activate_mod.bat), and then adding all the commands you need for your measurement at the end of the activate_mod.bat file. You can start your modified measurement with "venv\Scripts\activate_mod".

This way you could also add speaker notifications. For example, I had one version where I just played a wav file ("next speaker front left") before the sweep for that speaker starts.


----------



## reter (May 4, 2022)

Did some measurements today. It seems like the deeper I go with the plugs, the more muffled the audio is. Maybe the frequencies bounce when they enter the ear canal?

Also, can someone explain to me more precisely what "headroom" is and why lower is better? If I pump up my speaker to get less headroom, I pick up more reverb than actual signal from the source. Why do I need to record at those levels when I could just lower my speaker volume and record the source without risking the mics picking up more bounced reverb than the source itself?


----------



## simplefi

For Impulcifer, lower headroom just means your signal-to-noise ratio is higher. You want your intended signal (the sweep) to be as high as possible relative to the noise (ambient, background, hiss from equipment, etc.), and turning up the volume and/or gain are both ways to achieve this. But as you have discovered, if at some point the volume is *too* high and you are getting poor results, it is OK to turn it down as long as the SNR is acceptable. Personally I use the speaker volume that I listen at so that the results reflect that, and then I adjust gain to increase my SNR. Having high headroom in Impulcifer isn't necessarily "bad" as long as the noise isn't showing up in the results.
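To put a number on that, here is a quick Python sketch of the idea (helper names are mine, not Impulcifer's): compare the RMS level of the sweep capture against a silent capture made with the same mic gain and placement.

```python
import numpy as np

def snr_db(sweep: np.ndarray, silence: np.ndarray) -> float:
    """SNR estimate in dB: RMS of the sweep capture over the RMS of a
    silent capture recorded with the same mic gain and placement."""
    rms = lambda x: np.sqrt(np.mean(np.square(x, dtype=np.float64)))
    return 20.0 * np.log10(rms(sweep) / rms(silence))

# Toy example: a -6 dBFS tone standing in for the sweep, plus a quiet floor.
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
sweep = 0.5 * np.sin(2.0 * np.pi * 1000.0 * t)
silence = 0.001 * np.random.default_rng(0).standard_normal(48000)
print(f"{snr_db(sweep, silence):.1f} dB")  # roughly 51 dB for these levels
```

Turning up the speaker raises the sweep's RMS without touching the ambient floor, which is exactly why lower headroom usually means better SNR.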


----------



## castleofargh

reter said:


> Did some measurements today. It seems like the deeper I go with the plugs, the more muffled the audio is. Maybe the frequencies bounce when they enter the ear canal?
> 
> Also, can someone explain to me more precisely what "headroom" is and why lower is better? If I pump up my speaker to get less headroom, I pick up more reverb than actual signal from the source. Why do I need to record at those levels when I could just lower my speaker volume and record the source without risking the mics picking up more bounced reverb than the source itself?


As said, because of ambient noise. I personally like to listen to music quietly (true for speakers and headphones). If the room is reasonably quiet I'll easily listen at or below 60 dB SPL. But with noise in the room in the 30 to 40 dB range (when it feels quiet!), imagine measuring like that and getting noise mixed in with the secondary cues (reverb) as soon as 20 dB below the loudest signal... That's not a good idea at all.
But of course if I push my speakers near the max, I get stuff shaking in the house, the mics distort like crazy, and the result, while vastly different, is just as bad.
Again, that's why from the get-go I told you that sadly you should try a bunch of things and mostly rely on how the resulting convolution feels to you. You don't want to distort anything, you don't want the signal to clip anywhere. Of course you don't want any of that, but if you run too far in the other direction, you will inevitably bring in other issues you also don't want. This is compromise paradise, so, good juggling to you.


----------



## reter (May 4, 2022)

simplefi said:


> For Impulcifer, lower headroom just means your signal-to-noise ratio is higher. You want your intended signal (the sweep) to be as high as possible relative to the noise (ambient, background, hiss from equipment, etc.), and turning up the volume and/or gain are both ways to achieve this. But as you have discovered, if at some point the volume is *too* high and you are getting poor results, it is OK to turn it down as long as the SNR is acceptable. Personally I use the speaker volume that I listen at so that the results reflect that, and then I adjust gain to increase my SNR. Having high headroom in Impulcifer isn't necessarily "bad" as long as the noise isn't showing up in the results.


Thank you, guys.

Also, can I see the SNR somewhere in the Impulcifer plots? How can I know if there's too much noise? I wasn't prepared for the sensitivity the Master Series has, so I'm still tweaking the gain.


----------



## sander99

reter said:


> Did some measurements today. It seems like the deeper I go with the plugs, the more muffled the audio is


Do you measure a new headphone compensation using the same mic placement you used for the speakers each time? That is what you should do ideally, and certainly for radically different mic placements.


----------



## musicreo

simplefi said:


> For Impulcifer, lower headroom just means your signal to noise ratio is higher.


The headroom is just the amplitude peak in the frequency response curve. It is assumed that the SNR is better with lower headroom, but this is only true if your noise does not increase by the same amount as your boosted signal.
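For reference, the common digital-audio sense of the term can be sketched in a couple of lines (my own helper; Impulcifer's reported figure may be computed differently, e.g. from the frequency response peak as described above): the distance in dB between the capture's peak and full scale.

```python
import numpy as np

def headroom_db(x: np.ndarray) -> float:
    """Distance in dB between the capture's peak sample and 0 dBFS
    (full scale = 1.0 for float audio). Smaller value = hotter recording."""
    return -20.0 * np.log10(np.max(np.abs(x)))

print(headroom_db(np.array([0.1, -0.5, 0.25])))  # peak 0.5 -> about 6 dB of headroom
```

By this definition, driving the sweep 6 dB louder reduces the headroom figure by 6 dB, but only improves SNR if the noise floor stays put.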


----------



## morgin

Do you guys measure your ears in case your hearing is different? I mean like an eye prescription, where one eye is more dominant than the other. Would that make any difference to our HRIR?


----------



## Brandon7s (May 4, 2022)

morgin said:


> Do you guys measure your ears in case your hearing is different? I mean like an eye prescription, where one eye is more dominant than the other. Would that make any difference to our HRIR?


Probably. I know my left and right hearing isn't identical since there are physical differences at the ear entrance, but those should be at least partially taken into account by Impulcifer's measurements. Without mics that can be placed into the ear canal, I don't see how one would reliably measure the difference. I've tried doing tests by ear, but the generated EQ curves haven't been useful for me.


----------



## castleofargh

reter said:


> Thank you, guys.
> 
> Also, can I see the SNR somewhere in the Impulcifer plots? How can I know if there's too much noise? I wasn't prepared for the sensitivity the Master Series has, so I'm still tweaking the gain.


The main problem you're going to have is that you probably lack the means to calibrate your setup (tell it that a given sound is XXX dB loud). Also, most non-specialized mics will show low-level noise no matter what (most are unreliable below 30 dB).
But if you're just trying to get some idea of the SNR (between whatever recorded noise and the signal), you can do that with any app on a cellphone or with any free RTA (real-time analyzer) software.
I'm used to doing almost everything with REW (Room EQ Wizard), but I'm not sure I would recommend starting with it: it's made to do many other things and does have a learning curve, while any random spectrum tool can show you a ballpark magnitude for the noise and another for a test tone without you having to learn anything.
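That ballpark check can also be done in a few lines of Python with an FFT. A crude sketch (helper name is mine, assuming a float recording at a known sample rate): play a test tone, record it, then compare the level in the tone's band against a band where only noise should live.

```python
import numpy as np

def band_level_db(x: np.ndarray, fs: int, f_lo: float, f_hi: float) -> float:
    """Crude RTA: mean windowed-FFT magnitude (dB re full scale) in a band."""
    win = np.hanning(len(x))
    spec = np.abs(np.fft.rfft(x * win)) / (win.sum() / 2.0)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = spec[(freqs >= f_lo) & (freqs <= f_hi)]
    return 20.0 * np.log10(np.mean(band) + 1e-12)

fs = 48000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # stand-in for a recorded 1 kHz test tone
tone = band_level_db(x, fs, 900, 1100)
floor = band_level_db(x, fs, 4000, 5000)
# the tone band should sit far above the empty "noise" band
```

With a real capture the difference between the two numbers gives you a rough per-band SNR without any dedicated software.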




morgin said:


> Do you guys measure your ears in case your hearing is different? I mean like an eye prescription, where one eye is more dominant than the other. Would that make any difference to our HRIR?


AFAIK, with the exception of a clear anomaly like really massive hearing impairment, your brain will simply adapt to whatever is the norm in your daily life. Only fairly rapid changes will feel weird instead of just being progressively accepted as the new reference by the brain. Say you tend to have a resting position where you tilt your head to the left a little. After a few weeks, your brain might just accept that as how looking straight sounds. Now you make a calibration and carefully look straight ahead for the center look angle, and if, as soon as you relax in a chair, you tilt to the left without thinking, chances are the simulation will feel like the sound is off-center. Not because it is, but because your usual orientation when looking at something isn't straight.
I probably took the weirdest example, but I like those ^_^, and it's a relatively rare but real thing. Some people end up trying to compensate for an imbalance, but they soon discover that a simple panning pot (making one channel louder) doesn't really remove the feeling they have. Only to find out, by accident or because someone like me proposed a weird example for the lolz, that what they need to feel centered audio is a tiny delay in one channel (most likely caused by some postural habit that the brain now calls centered).
My long-winded point is: if you've had very different ear shapes for a long time, how they change sound is how you think natural, well-balanced sound is. Same with small to moderate hearing loss. We rapidly adapt if given the time. Outside of what might count as impairment, I'm of the opinion that you should only care to simulate the sound as it normally comes at your ear, and let your brain do what it's used to doing.


----------



## reter (May 4, 2022)

castleofargh said:


> The main problem you're going to have is that you probably lack the means to calibrate your setup(tell it that a given sound is XXX dB loud). Also, most non specialized mics will show low level noise no matter what(most are unreliable below 30dB).
> But if you're just trying to get some idea of the SNR(between whatever recorded noise and signal), you can do that with any app on a cellphone or with any free "RTA"(real time analyzer) software.
> I'm used to doing almost everything with REW(Room EQ Wizard) but I'm not sure I would recommend starting with that it's made to do many other stuff and does have a learning curve. while any random spectrum thingy can show you a ballpark magnitude for noise and another one when you send a test tone without you having to learn anything about anything.
> 
> ...


I can let my brain adapt to everything, I know that. But thinking about what I'm using now thanks to Impulcifer, I can't look back and return to the generic HRIRs I (my brain) was used to for so many years. That's why, even though my brain is already used to this HRIR I made, I'm still looking for room for improvement.


Anyway, I have two questions:

1. There are many kinds of reverb in our lives. If I think about my room, a cinema, a parking garage, they all sound different... and reverb is the only way we can actually locate a source far away from us. So what's the most neutral reverb? I know the brain gets used to that as well, but if I think about hearing someone's voice or listening to a song with reverb on it, what's the best reverb for the most neutral sound and spatiality?

I really don't know if it is a dumb question, but I'm curious.

2. What if I measure my front left and front right channels at 35-40 degrees to get a wider feeling?


----------



## Brandon7s (May 5, 2022)

reter said:


> Anyway, I have two questions:
> 
> 1. There are many kinds of reverb in our lives. If I think about my room, a cinema, a parking garage, they all sound different... and reverb is the only way we can actually locate a source far away from us. So what's the most neutral reverb? I know the brain gets used to that as well, but if I think about hearing someone's voice or listening to a song with reverb on it, what's the best reverb for the most neutral sound and spatiality?
> 
> ...


1. It is my opinion that the best reverb is the reverb that matches the room we can see. Our sight alters our expectations of what we hear, and when the two are dramatically mismatched we are no longer convinced that what we are hearing is real. Our eyes override our ears, so to speak. If the goal of using a personalized BRIR is to believe that we aren't listening to headphones anymore but instead are listening to speakers, then room reverb that matches our visual expectations is of high importance. I'm a hobbyist musician and a bit of a reverb fanatic, and I've experimented a lot with placing high-quality reverb effects after a BRIR that has most of the reverb removed via Impulcifer's reverb management. It's super cool sounding to get the sound of being in a concert hall or a large chamber, but it doesn't trick my brain into thinking that I'm literally hearing those spaces as if I was there. When I use shorter reverb types like small chambers and medium to small rooms, it gets a lot more convincing.

You can mess with the overall tonality of the reverb without breaking that illusion, though. Our eyes don't generally tell us how much bass/mid/treble we expect to hear, not unless we are very used to how a room sounds and its construction sets specific expectations (such as a large metal box like a cargo container), but reverb length, onset delay, and to a lesser extent even diffusion characteristics are all things that our brain is VERY good at inferring from visual data. If you stray too far from that visual expectation, then what you hear is an unnatural reverb. That's been my experience.

2. My speakers are at an 85-degree angle, and honestly it's not all that different sounding from a 60-degree angle, which is the typical recommendation for studio monitors. It's difficult to notice the difference in width in the presence of a strong phantom center.


----------



## sander99

reter said:


> I can let my brain adapt to everything, I know that. But thinking about what I'm using now thanks to Impulcifer, I can't look back and return to the generic HRIRs I (my brain) was used to for so many years. That's why, even though my brain is already used to this HRIR I made, I'm still looking for room for improvement.


I think you are misinterpreting what @castleofargh was saying. What Impulcifer aims at is to reproduce the exact same sound going into your ears as would have gone into your ears when listening to real speakers. So reproduce exactly what your brain is used to (adapted to) with real-life sounds. And with real-life sounds your brain is used to all the differences between your left and right ears. If you somehow compensated for those differences you would _move away_ from what your brain is used to in real life, so you would _move away_ from what sounds most real and natural to you. What you are talking about seems to be adapting to something that your brain is not used to in real life, like for example the sound of a generic HRIR, or an imperfect HRIR.


----------



## reter

Another question!

I was thinking of switching from a JBL 308P to a 306P for portability. Would this change the general acoustic results in my measurements? In all my measurements I get a big hump in the bass frequencies; is that related to the bigger speaker in relation to my pretty small room?


----------



## Brandon7s (May 6, 2022)

reter said:


> Another question!
> 
> I was thinking of switching from a JBL 308P to a 306P for portability. Would this change the general acoustic results in my measurements? In all my measurements I get a big hump in the bass frequencies; is that related to the bigger speaker in relation to my pretty small room?


The difference between those two speakers would be very difficult to notice. I have both the 8-inch and 5-inch JBL LSR 1st-gen speakers. The main advantage of the larger speakers is more volume and slightly more extended bass response. Both still roll off quite heavily at about 90 Hz though. [scratch that, that's just my room being bad]

The big bass hump you're hearing is almost certainly an artifact of the shape and size of your room, aka a room mode.


----------



## morgin

Wouldn't it be better to have the mic gain all the way down and the speakers at the highest tolerable volume, to remove any outside sound pickup and just pick up the sweep from the speakers?


----------



## Brandon7s

morgin said:


> Wouldn't it be better to have the mic gain all the way down and the speakers at the highest tolerable volume, to remove any outside sound pickup and just pick up the sweep from the speakers?


That would be the best way to minimize unwanted noise, but the higher the SPL, the more of the room you will hear, so problematic resonances will be more apparent. If one had a really great room, then sure.


----------



## musicreo

Brandon7s said:


> The difference between those two speakers would be very difficult to notice. I have both the 8-inch and 5-inch JBL LSR 1st-gen speakers. The main advantage of the larger speakers is more volume and slightly more extended bass response. Both still roll off quite heavily at about 90 Hz though.


They roll off below 50 Hz, not at 90 Hz.
308P (±3 dB): 45 Hz - 20 kHz
305P (±3 dB): 49 Hz - 20 kHz


----------



## musicreo

morgin said:


> Wouldn't it be better to have the mic gain all the way down and the speakers at the highest tolerable volume, to remove any outside sound pickup and just pick up the sweep from the speakers?



Theoretically, yes. In practice, even at the highest tolerable speaker volume, the mics used here still need some pre-amplification to record a good signal.


----------



## Brandon7s

musicreo said:


> They roll off below 50 Hz, not at 90 Hz.
> 308P (±3 dB): 45 Hz - 20 kHz
> 305P (±3 dB): 49 Hz - 20 kHz


I wish mine got down to 45 and 49 Hz; the measurements I've taken roll off much earlier than that. 50 Hz is nearly inaudible on my rig with either of those speakers. Maybe my room is worse than I thought and I've got some heavy cancellation going on... I'll have to try setting them up in a different, larger room sometime and see what kind of results I get.


----------



## morgin

Has anyone used both Impulcifer and the Smyth A16 Realiser? What are the differences, and is it worth the price? Also, is it difficult to set up?


----------



## Crema (May 7, 2022)

Soundprofessionals MS-CB-900 + Rubber Cap

I attach this microphone to the front of the ear canal quickly and easily.

So far, it's accurate. I'd like to compare it with BACCH-BM, but it's too expensive to do so.

I tried to solve the HRIR problem I posted about before by adding a delay to the microphone, but in the end I couldn't fix the defect.

So I measured again with a new microphone and speaker. The result is very good.


HD800S, Behringer B2031A + SVS PB2000PRO


----------



## Brandon7s (May 7, 2022)

Crema said:


> Soundprofessionals MS-CB-900 + Rubber Cap
> 
> I attach this microphone to the front of the ear canal quickly and easily.
> 
> ...


Very interesting mics, I hadn't thought about trying those. It looks like getting consistent results with those rubber caps would be pretty easy compared to gluing the metal capsules of the mics I'm using to foam earplugs.


----------



## morgin

Just been to get hearing aids for my mum and they used this
Is this something we could use in place of in ear mics?


----------



## Brandon7s (May 8, 2022)

morgin said:


> Just been to get hearing aids for my mum and they used this
> Is this something we could use in place of in ear mics?


I believe so, but I think a physician is required to perform the insertion? I heard the mics are expensive, too. I remember asking someone who worked with audiologists on another forum about them, and I think he said they are north of 1,000 USD, though he didn't specify the exact amount. That's not unobtainable or anything, but it's more than I'm willing to spend.


----------



## reter

Has anyone tried the 38% rule when recording the BRIR? I think the only way to do this method is by measuring with a single speaker: https://realtraps.com/art_room-setup.htm


----------



## reter (May 12, 2022)

Guys, I think I have a little too much high-frequency content in my recordings. It sounds a little unnatural and exhausting to listen to for long periods when I'm playing something like harmonica/flute and such.






I don't know if it's the spike at 3 kHz or the one at 15 kHz.



Ironically, if I deactivate HeSuVi my headphones sound a lot less sharp than with the HRIR on. That would be a good sign if the HRIR didn't feel like someone is trying to cut my eardrums.


----------



## musicreo

reter said:


> I don't know if it's the spike at 3 kHz or the one at 15 kHz.



You could try some peaking filters at 3 and 15 kHz, or a low-pass filter at 14-15 kHz, to check which frequency is bothering you.
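If you want to generate such a filter yourself rather than dial it in from an EQ app, the peaking-EQ formulas from the well-known RBJ Audio EQ Cookbook are easy to code up. A sketch (helper names are mine; the -4 dB cut at 3 kHz is just an example value to audition):

```python
import math, cmath

def peaking_biquad(fs: float, f0: float, gain_db: float, q: float = 1.0):
    """Peaking-EQ biquad coefficients (b, a) in RBJ Audio EQ Cookbook form."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * A, -2.0 * math.cos(w0), 1.0 - alpha * A]
    a = [1.0 + alpha / A, -2.0 * math.cos(w0), 1.0 - alpha / A]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def gain_at(b, a, fs: float, f: float) -> float:
    """Magnitude response in dB of the biquad at frequency f."""
    z1 = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z1 + b[2] * z1 * z1) / (a[0] + a[1] * z1 + a[2] * z1 * z1)
    return 20.0 * math.log10(abs(h))

# A -4 dB cut at 3 kHz to test whether that spike is the culprit
b, a = peaking_biquad(48000, 3000, -4.0)
print(round(gain_at(b, a, 48000, 3000), 2))  # -4.0 by construction
```

Flip the sign or move f0 to 15 kHz to A/B the two candidate spikes one at a time.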


----------



## Xam198

Hi, I've finally received my UMC202HD: no more "zigzag" in my plots. So I'm getting some results, but until now no good spatialisation. I'm beginning to think that my brain is not receptive to virtualisation. I got a not-so-bad stereo virtualisation at the end of the day (tired?), and the next morning (awake!), with the exact same configuration, no more spatialisation! So I have doubts... Which brings me to a question for the owners of the Smyth Realiser: do you have the same difficulties obtaining good virtualisation?


----------



## morgin

Xam198 said:


> Hi, I've finally received my UMC202HD: no more "zigzag" in my plots. So I'm getting some results, but until now no good spatialisation. I'm beginning to think that my brain is not receptive to virtualisation. I got a not-so-bad stereo virtualisation at the end of the day (tired?), and the next morning (awake!), with the exact same configuration, no more spatialisation! So I have doubts... Which brings me to a question for the owners of the Smyth Realiser: do you have the same difficulties obtaining good virtualisation?


I've had problems in the past. Sometimes a Windows update can mess with my HRIR setup, so I end up having to remove all sound drivers (except Behringer), restart, delete the EqualizerAPO folder with HeSuVi inside, restart again so Windows automatically reinstalls the sound drivers, then install VB-Cable, EqualizerAPO and HeSuVi, and then set everything back to how it was. It's a pain.

If that doesn't work, try what I posted earlier, where I extract the headphone EQ and use that in HeSuVi. It gives more virtualisation for some reason.


----------



## Xam198 (May 14, 2022)

morgin said:


> I've had problems in the past. Sometimes a Windows update can mess with my HRIR setup, so I end up having to remove all sound drivers (except Behringer), restart, delete the EqualizerAPO folder with HeSuVi inside, restart again so Windows automatically reinstalls the sound drivers, then install VB-Cable, EqualizerAPO and HeSuVi, and then set everything back to how it was. It's a pain.
> 
> If that doesn't work, try what I posted earlier, where I extract the headphone EQ and use that in HeSuVi. It gives more virtualisation for some reason.


Yes, I should test that, but I'm more and more asking myself if the problem is... me. I tried Out of the Head by Fongaudio, and with their demo I only heard a dummy sound, with reverb, all IN my head, with very, very bad localisation... a light-year from my 5.1 system.
No, in fact, whatever I tried, I had great difficulty with front projection: at best I managed to hear the sound coming from just a little above my head, but never one or two meters away like my actual speakers are.
And I'm wondering, if I were to buy a Smyth Realiser (a used A8), whether I could have the same problem.


----------



## Maestroso (May 14, 2022)

Hi everyone,

Long-time lurker here. I'm also an owner of a Smyth Realiser A16 but have been using Impulcifer with a stock Sound Professionals MS-TFB-2 for quite a while. In case you ask, the reason for my ongoing interest in Impulcifer is its versatility for professional use (having access to the raw IR, rather than re-convolving digitally from the Realiser). Also, the Realiser won't support CoreAudio (macOS) due to restrictions of the driver structure of the built-in USB interface, so any application for mixing duties is out of the question right now (well, if you want to stay in the digital domain, that is). This, as well as teaching duties in sound/media engineering and psychoacoustics, has pushed me back towards the Impulcifer route.

In my efforts to improve the capture process, I'm currently shopping for binaural microphones that are best suited for the task and ran across the MS-TFB-2-*MKII* (Link: https://soundprofessionals.com/product/MS-TFB-2-MKII/), apparently a new iteration of my current MS-TFB-2.






These look very interesting. Have any of you gathered experience with the MKII variant of these microphones? I assume the prospect of using them as part of an in-ear plug structure might improve captures (while avoiding a mess on the mic element, as is to be expected in people with narrow ear canals = moi)? However, I'm not sure whether the mic elements will insert deep enough into the ear canal. Or is it more efficient to simply cut off the hooks of my old MS-TFB-2, put them in foam plugs and crush them into the ears in good old Realiser fashion?

Any ideas or recommendations for alternatives would be much appreciated.

Thanks!

EDIT: Another candidate might be this handsome fellow: https://soundprofessionals.com/product/SP-EAR-MIC/ or the https://soundprofessionals.com/product/SP-EAR-MIC-2/ (stereo set) as pictured below. Any experience with this one?


----------



## Xam198 (May 14, 2022)

I edited this message about the ZeroDivisionError, my mistake.


----------



## lowdown

Xam198 said:


> Hi, I've finally received my UMC202HD: no more "zigzag" in my plots. So I'm getting some results, but until now no good spatialisation. I'm beginning to think that my brain is not receptive to virtualisation. I got a not-so-bad stereo virtualisation at the end of the day (tired?), and the next morning (awake!), with the exact same configuration, no more spatialisation! So I have doubts... Which brings me to a question for the owners of the Smyth Realiser: do you have the same difficulties obtaining good virtualisation?


There are so many variables I can only offer some ideas.  If you are able to locate sounds in direction and depth normally without headphones then it doesn't seem likely that the problem is your hearing or brain.  It's probably the localization cues in the sounds being delivered to your ears.  Of course something in the measurement process, or Impulcifer command options could be the culprit, but also perhaps it's vision.  That may sound odd, but what you're seeing or not when listening can have a big influence on sound localization.  Impulcifer works extremely well for me when I'm sitting in front of my speakers in my normal stereo music spot.  The illusion is exceedingly realistic, a soundstage 7 ft in front of me, pinpoint left to right imaging, realistic depth, everything as if I'm actually at a live performance.  But I can use the same headphones and HRIR in my computer room in front of the monitor with a wall close behind it and the virtualization mostly collapses.  If this isn't relevant to your situation then one thing you can check off the list, but it's worth being aware of.


----------



## castleofargh

Xam198 said:


> Yes, I should test that, but I'm more and more asking myself if the problem is... me. I tried Out of the Head by Fongaudio, and with their demo I only heard a dummy sound, with reverb, all IN my head, with very, very bad localisation... a light-year from my 5.1 system.
> No, in fact, whatever I tried, I had great difficulty with front projection: at best I managed to hear the sound coming from just a little above my head, but never one or two meters away like my actual speakers are.
> And I'm wondering, if I were to buy a Smyth Realiser (a used A8), whether I could have the same problem.


Prof. Choueiri mentioned that for some people, the center image simulation just doesn't seem to work. Probably something to do with the visual cortex dominating their subjective experience even more than is usual for humans. So when your eyes and ears disagree, because you're in a different room or you're looking at a wall instead of a band playing, your brain decides to trust what it sees and somehow rejects the audio cues even more than is "normal". Things get better on the sides because your eyes are in front, and also maybe because, unlike the center image, side content also has interaural cues (variations in timing and frequency response between the two ears) that define at least the direction much more clearly than mono content, where only the general FR has a little impact on vertical elevation and that's about it for localization cues.
At least that's my hypothetical explanation. I have no evidence of anything ^_^.

A few options to try:
Listen for a while in the dark; nothing visual must disturb you. Most people end up with an impressive sense of space from the audio after a few minutes. While doing this, try not to move your head at all (if you don't have a chair high enough to support your head, maybe lie down on a bed). After a while, can you get frontal audio at some distance? If not, I'm not optimistic that there is a virtual solution for you.
If things go well, find out if moving your head kills the distance while in the dark. You need to be in real darkness so your eyes can't lock onto something to get spatial cues.
The last thing to test is to listen to the same convolved audio while sitting where you'd usually sit to use your speakers, and then do the same somewhere else. That is to check how much the room and seeing the speakers affect you. There's a paper I posted somewhere suggesting that, at least at a statistical level, it does make a difference to be in the original conditions, but again, how much it does for you is what counts. If the difference is marginal, you don't have to bother with that. If it's not, then you'd better record from where you'll be sitting (I imagine in front of your computer at a desk).

To me, head movements ruin everything and I really need head tracking. I also find great benefit in listening to PRIRs I made with the monitors on my desk (at about 1 m from me). Some will be amazed to have recordings of a big room with fancy speakers, but I tried and ended up going back to my actual room (even though it's an acoustic nightmare!). Somehow I get the same reverb, and I can anchor the sound source to the speaker monitors I see in front of me (that also makes a huge difference for me). It just feels real, and I tend to keep the center image between those two speakers. Maybe because I hear it there? Maybe because I'm expecting the center image to be between the speakers, so my brain wants it there? I know how I feel and what works for me, but I don't have anything beyond guesses when it comes to why.
On the A16 thread, at least three people have said that they didn't use the head tracking (didn't like it or didn't see the benefit). I cannot understand that, but I also can't ignore that I exist and they exist. So which one are you? Who knows? Hopefully my weird experimental suggestions can give you some ideas even without actually testing head tracking + PRIRs (head tracking with PRIRs modeled on some dummy head was a disappointment for me, and then again, some other people loved it... subjectivity is such a bother).


About difficulties in making measurements, I would argue that it's easier to fail with the A16 because you have to record all speakers at several look angles for the head tracking. I can't count how many times I ended up having the mic move in one ear while I turned to a different position, or while I tried to read something on the A16. Having a super clever part-time helper do the whole thing for you while you focus only on being there and turning when asked to makes all the difference IMO.
About the final frequency response, the uncertainties are the same as with Impulcifer. You could probably benefit from some EQ (on top of the measurements) with one of the methods discussed somewhere in this thread, mainly based on equal-loudness contours and some luck. For your center image problem, the FR, as I said, can impact elevation. That in turn may or may not help trick your brain into setting a distance for the virtual center. That's really a case-by-case thing, and it's near impossible to know when you get it right besides feeling for yourself that the center is at eye level (or whatever level it's supposed to be based on where the speakers were).

As for the A8, it seems to require *solid* knowledge of the manual. Most people seem to give up on making PRIRs on the A16 because they find it overwhelming (the Smyth guys said so). I sure was like that for the first two weeks, but then I finally pulled my fingers out of my butt and started struggling toward a result (the first step is the hardest, like always). Anyway, people who have owned both say that the A16 is much easier. Then again, you're here, meaning that along with all seven other people on this thread, you have the willpower to put in the work and RTFM if it helps get away from the failed stereo that is normal headphone use. You being part of that elite group of binaural mic warriors (too much? ^_^) makes me hopeful that you'd deal with whatever Realiser complexity.


Where are you in France? You should definitely try to find a Realiser owner near you who would agree to demo it. There is really no substitute for trying it yourself. There's always Paris and the French reseller (Gilles Gerin), who offers a demo with calibration, if ever needed. It's by appointment; you have to ask him on av-in.com.


edit: OMG I wrote a book! Sorry about that.


----------



## Xam198 (May 15, 2022)

@ castleofargh
Thanks! Don't be sorry, "au contraire", your answer is fascinating. I will have to read it several times to try everything you've said. I've got several hours of trials coming.
Thanks again 

@ lowdown: and when you made the measurements, were your speakers at 7 feet?


----------



## morgin

@jaakkopasanen a thought came to me. Hearing aids are a huge market, and the ones my mum has sit around the back of the ear and don't pick up sound too well or give positional cues that well. She needs to wear two, and I guess it would only work for people who wear two hearing aids. Could Impulcifer be useful? Have the mic (instead of a mic it's a probe tube) be in-ear, unlike the current hearing aids, and have that give information to amplify any sounds that need amplifying for the patient.

So the sound information it's collecting, just like Impulcifer's, is tailored to exactly what is being received inside the ears.

The hearing aids are expensive, around £2,000. I'm sure we could make a lot of money with specific HRIR hearing aids.


----------



## reter

lowdown said:


> There are so many variables I can only offer some ideas.  If you are able to locate sounds in direction and depth normally without headphones then it doesn't seem likely that the problem is your hearing or brain.  It's probably the localization cues in the sounds being delivered to your ears.  Of course something in the measurement process, or Impulcifer command options could be the culprit, but also perhaps it's vision.  That may sound odd, but what you're seeing or not when listening can have a big influence on sound localization.  Impulcifer works extremely well for me when I'm sitting in front of my speakers in my normal stereo music spot.  The illusion is exceedingly realistic, a soundstage 7 ft in front of me, pinpoint left to right imaging, realistic depth, everything as if I'm actually at a live performance.  But I can use the same headphones and HRIR in my computer room in front of the monitor with a wall close behind it and the virtualization mostly collapses.  If this isn't relevant to your situation then one thing you can check off the list, but it's worth being aware of.




maaaan, 7 ft distance! I don't have a room that big; I can go to around 4 ft max and the results are already impressive. I can't imagine a room that big.


----------



## sander99

morgin said:


> @jaakkopasanen a thought came to me. Hearing aids are a huge market and the ones my mum has are around the back of the ear and don’t pick up sound too well or give positional ques that well. She need to wear two and I guess it would only work for people who wear two hearing aids. Could impulcifer be useful? Have the mic (instead of a mic it’s a probe tube) be in ear unlike the current hearing aids and have that give information to amplify any sounds that need amplifying for the patient.
> 
> So the sound information it’s collecting just like impulcifer is tailored to exactly what is being received inside the ears.
> 
> The hearing aids are expensive around £2000 I’m sure we could make a lot of money with specific hrir hearing aids.


I think there are also hearing aids that are very small and placed completely inside the ear canal, mic included. That way the localisation cues are largely preserved.

Impulcifer is of no use here. What would it do? Impulcifer HRIRs can be used to give localization cues to sound that corresponds to one of a limited number of known locations (where the loudspeakers are, for example) and that doesn't have any localisation cues yet (just an audio channel). The sound picked up by mics in the ears already has all the cues, so nothing needs to be changed, just amplified. And that sound can in fact be a summation of different sounds from many different directions, and those directions can be any of all possible directions: infinitely many!
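To make that concrete: all an HRIR can do is imprint one speaker position's cues onto a cue-less channel by convolution. A toy sketch in plain Python, with made-up two/three-tap "HRIRs" (real ones are hundreds of samples long):

```python
def convolve(signal, ir):
    """Direct-form convolution: each input sample excites the full
    impulse response, shifted to that sample's position in time."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def render_binaural(channel, hrir_left, hrir_right):
    """One speaker channel in, two ear signals out. The per-ear IRs
    carry the direction cues (delay, level, coloration) for that one
    fixed speaker position; that's all the localisation an HRIR adds."""
    return convolve(channel, hrir_left), convolve(channel, hrir_right)

# Toy IR pair for a source on the left: the right ear hears it one
# sample later and quieter than the left ear does.
hl = [1.0, 0.3]
hr = [0.0, 0.6, 0.2]
left_ear, right_ear = render_binaural([1.0, 0.0, 0.0], hl, hr)
```

A unit impulse input simply reproduces each ear's IR, which shows why the cues are baked in for one direction only; sound captured by mics in the ears has already gone through the real version of this filtering, so it only needs amplification, not another convolution.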


----------



## lowdown

reter said:


> maaaan 7ft distance, i don't have a room that big, i can go around 4ft max the results are already impressive, i can't imagine in a room that big


It's sort of like sitting in the 1st row at the concert. But 4 ft could be like resting your chin on the front of the stage. Not a bad spot.


----------



## reter

lowdown said:


> It's sort of like sitting in the 1st row at the concert. But 4 ft could be like resting your chin on the front of the stage. Not a bad spot.


ahahahah, if only it weren't for the fact that I get too much high-frequency energy, which is annoying sometimes. I should try recording the BRIR with the speaker in the corner to get more distance while still being in the center of the room.


----------



## lowdown

reter said:


> ahahahah if wasn't for the fact that i get too much high frequencies that's annoying sometimes, i should try recording the brir with the speaker in the corner to get more distance while still being in the center of the room


Experimenting with different speaker locations and distances is a good idea.  A corner placement is likely to boost the bass frequencies, but if it's too much it can be fixed with EQ, and there may be other advantages with your speaker and room.  As others have suggested using EQ to adjust those bothersome high frequencies can be very helpful.  I've made some EQ tweaks to my best BRIR with very good results.


----------



## Xam198 (May 16, 2022)

@ castleofargh
ok i've tried with this video (and others of this channel) :

Right/left separate: perfect virtualisation. Amazing!
When it comes to the center stereo image with both left and right playing, at first a big part of the sound comes back into my head, but another part stays on the virtual speakers. Strange. But I've got the feeling that the more I listen to these videos, the more the center sound seems to get out of my head. Do you think that a sort of training of my ears/brain could make the virtualisation work for me?


----------



## reter

lowdown said:


> Experimenting with different speaker locations and distances is a good idea.  A corner placement is likely to boost the bass frequencies, but if it's too much it can be fixed with EQ, and there may be other advantages with your speaker and room.  As others have suggested using EQ to adjust those bothersome high frequencies can be very helpful.  I've made some EQ tweaks to my best BRIR with very good results.


Actually I'd like some more bass. I think the BRIRs I'm making have too much clarity, especially compared to the vanilla headphones (HD660S). I listen to some audio tracks and it's like I'm missing some details under 1000 Hz or so; I can hear very clear voices and stuff happening, but when it comes to some deep stuff it's literally almost inaudible compared to before... I don't know if this is because I had been using Dolby HRIRs for so many years that I'm now used to them to the point of feeling that the real fidelity is what I was getting before. Maybe it's because I was used to HRIRs with large spaciousness, and now that I'm using Impulcifer recordings at 4 ft distance I feel this weird satisfaction mixed with dissatisfaction. I applied a peak filter to lower the peak I was having at 4000 Hz; it's better than before, but still there's no reason to keep recording at 4 ft.


----------



## musicreo

I recorded at 3.3 ft, 4 ft, 5 ft, 6 ft and 7 ft. My best measurements were obtained at 5 ft (recording room size 6.3 m × 4.1 m). I think the optimal distance depends strongly on the room and the speaker used.


----------



## castleofargh

Xam198 said:


> @ castleofargh
> ok i've tried with this video (and others of this channel) :
> 
> Right/left separate: perfect virtualisation : amazing !
> When it comes to center stereo image with both left and right playing, at first a big part of the sound come back in my head, but another part stay on the virtual speakers - strange. But i've got the fealing that more i listen these video, the more the center sound seems to get out of my head. Do you think that a sort of training of my ears/brain could make the virtualisation work for me ?



I assume you're talking about doing this in the dark? Did you notice a big impact on the center image when trying to move your head compared to staying immobile? It's not great if you struggle to get center distance even in the dark. It could still be caused by many things, but a diagnosis is going to be hell, and it's also possible that you're simply one of those few for whom simulated sound doesn't give a center image at a distance.

Maybe you can also test recording impulses with the speakers stuck together in front of you (or just one as a center channel, if you have tracks and settings to listen with a center channel), to find out whether you then readily get distance in front for that/those speakers. Try it with the same speakers still in front of you (try not moving your head!) while listening to the simulation, as that should in principle give you the closest thing to a real experience of frontal sound that you can get (minus some FR deviation).
If that feels right and you have distance, try it again without the speakers to find out how much you need to see them and know they're there. Maybe also test with a locked head position and while moving a little.

Depending on circumstances, that center speaker, and then the simulation while still in front of it, could serve as good training I guess. In any case, I would at least go for having speakers in my line of sight when using Impulcifer and, over time, try to anchor the sound to them because of this: https://www.nature.com/articles/srep37342


----------



## Xam198

So I tested the speakers stuck together: it worked. First in darkness, to obtain a clear centered front. Opening my eyes, I lost and then regained the centered front after some listening. I easily lose the centered image; I need some training and the speakers in front of me, but at least it seems to work for me. Phew. Many thanks for the help.


----------



## castleofargh

So you're not a lost cause for center image, but the struggle is real. I wish I had a clear fix, but those damn humans are so complicated.
So far, did you only measure 2.0 (the usual pair of speakers) for stereo, or did you already try multichannel measurement and playback? If, like me, you're mainly using stereo albums, maybe a special workaround for you would be to measure the stereo speakers plus a center channel, and find some DSP to upmix stereo into... what's that called? 3.0?
At this point I'm throwing ideas in the air. I don't clearly understand why that should be better, but if it happens to help you a little, then why not go for it.
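The stereo-to-3.0 idea can be as simple as a passive matrix: send the mid (common) signal to the measured center speaker and leave the residue on left and right. A toy sketch, not any particular DSP product's algorithm; the function name and coefficients here are arbitrary starting points:

```python
def upmix_2_to_3(left, right, center_level=0.5):
    """Passive 2.0 -> 3.0 matrix upmix, sample by sample: content
    common to both channels is partly redirected to a center channel,
    the rest stays on left/right."""
    l_out, c_out, r_out = [], [], []
    for l, r in zip(left, right):
        mid = 0.5 * (l + r)                    # what the channels share
        c_out.append(center_level * mid)
        # subtract the portion sent to center so it isn't doubled up
        l_out.append(l - center_level * 0.5 * mid)
        r_out.append(r - center_level * 0.5 * mid)
    return l_out, c_out, r_out

# A fully correlated (mono) input: part of it moves to the center.
l, c, r = upmix_2_to_3([1.0, 1.0], [1.0, 1.0])
```

Each of the three resulting channels would then be convolved with its own measured speaker pair of IRs, exactly as HeSuVi does for a real 3.0 layout.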


----------



## musicreo

Maestroso said:


> In my efforts to improve the capture process, I'm currently shopping for binaural microphones that are best suited for the task and ran across the MS-TFB-2-*MKII* (Link: https://soundprofessionals.com/product/MS-TFB-2-MKII/), apparently a new iteration of my current MS-TFB-2.
> 
> 
> 
> These look very interesting. Have any of you gathered experience with the MKII variant of these microphones? I assume the prospect to use them as part of an in-ear plug structure might improve captures (while avoiding a mess on the mic element as is to be expected in people with narrow ear channels = moi)? However, I'm not sure whether the mic elements will insert deep enough into the ear canal. Or is it more efficient to simply cut off the hooks of my old MS-TFB-2, put them in foam plugs and crush them into the ears in good old Realiser fashion?



Indeed the mic specifications look very good. But what is the size of the capsules? Is the 80 dB SNR a real improvement, or is the capsule just bigger?


----------



## Sekka

If anyone is looking at the MS-TFB-1 or SP-TFB-1 from Sound Professionals due to pricing, don't bother, because it is only a single mic despite the Amazon title and description referring to mics, plural.

I'm 99% sure I'm going to need to order a second mic, but is it possible to do these measurements one mic at a time?  If I do order a second mic, does it matter that they don't come as a pair if it's the same model?


----------



## sander99

Sekka said:


> but is it possible to do these measurements one mic at a time?


That would be extremely difficult because the timing differences between left and right ear are of course essential and critical. Plus you would have to have your head in the exact same position for the left and right ear measurement. So practically speaking: no, not possible.
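To make the timing point concrete: the interaural time difference is at most a few dozen samples at 48 kHz (under roughly 0.7 ms), and it can be estimated by cross-correlating the two ears' impulse responses. A minimal sketch in plain Python; the function name and the toy 8-sample IRs are made up for illustration:

```python
def best_lag(a, b, max_lag):
    """Return the number of samples by which `a` trails `b`
    (positive = a arrives later), found by maximising the
    cross-correlation over a small lag range."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a[i] * b[i - lag]
                    for i in range(len(a)) if 0 <= i - lag < len(b))
        if score > best_score:
            best, best_score = lag, score
    return best

# Toy impulse responses: the far (right) ear hears the same wavefront
# 3 samples after the near (left) ear.
near = [0.0, 1.0, 0.5, 0.2, 0.0, 0.0, 0.0, 0.0]
far  = [0.0, 0.0, 0.0, 0.0, 1.0, 0.5, 0.2, 0.0]
itd = best_lag(far, near, max_lag=5)   # 3 samples
ms = 1000.0 * itd / 48000              # 0.0625 ms at 48 kHz
```

This is also why measuring one ear at a time fails: any head movement between the two passes injects a bogus shift of exactly this magnitude straight into the strongest localisation cue.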


Sekka said:


> If I do order a second mic, does it matter that they don't come as a pair if it's the same model?


Not a real problem I think, because the headphone compensation effectively also compensates for the mics' frequency response.


----------



## reter

Sekka said:


> If anyone is looking at the MS-TFB-1 or SP-TFB-1 from soundprofessionals due to pricing, don't bother because it is only a single mic despite the amazon title and description referring to mics, plural.
> 
> I'm 99% sure I'm going to need to order a second mic, but is it possible to do these measurements one mic at a time?  If I do order a second mic, does it matter that they don't come as a pair if it's the same model?


I can say that I ordered the Sound Professionals MS pair and the two mics have different gains, so don't bother.


----------



## Sekka

reter said:


> i can say that i orderer the ms soundprofessional paired and both have different gains, so don't bother


What do you mean, just that they aren't matched very well?


----------



## reter

Sekka said:


> What do you mean, just that they aren't matched very well?


Yeah, if by "matched" you mean the volume you get recorded from the mics, they aren't matched; ironically, I got better matching with the cheaper Primos.

But it's not a big deal: you can just balance them with the audio interface so both record at more or less the same volume, or just let Impulcifer handle it itself.


----------



## Xam198

@ castleofargh
I definitely need to see my speakers in front of me to make it work; I've tested with 3 speakers, one centered. At the beginning I was closing my eyes: the precision of the soundstage was amazing. Listening to a rock band, each musician had a very precise position in space; I had the feeling they were there, just in front of me. Opening my eyes, I lost precision: my brain gathered the sounds onto each speaker, depending on its position in space, but I kept a good localisation of the virtual speakers. I took away the right speaker and slowly the sound moved to the left, losing precision on the right. My conclusion is that my brain needs to see the speakers to make the virtualisation work. I don't know how it will do in 7.1, since you never see the surround speakers; I have to test that.


----------



## Maestroso

musicreo said:


> Indeed the mic specifications look very good.  But what is the size of the capsules? Is the SNR of 80db a new improvement or is the capsule just bigger?


Turns out the capsules of the MKII are too large (10 mm according to support). 

I went for the SP-EAR-MIC-2, which are essentially the same as the CB-900. Not the best specs, but incredibly small. Will report back on how it all works out.


----------



## castleofargh

Xam198 said:


> @ castleofargh
> I definitively need to see my speakers in front of me to make it works; i've tested with 3 speakers, one centered. At the begining i was closing my eyes : the precision of the soundstage was amazing, listening to a group of rock, each musician had a very precise position in space, i had the feeling they were here, just in front of me. Opening the eyes, i lost precision, my brain gathered the sounds on each speakers, depending of his position in space, but i kept a good localisation of virtual speakers. I took of the right speaker and slowly the sound moved to the left, loosing precison on the right. My conclusion is that my brain need to see the speakers to make the virtualisation works. I don't know how it will do in 7.1 since you never see the surround speakers, i have to test that.


Now that you know, you can make use of it to help trick yourself, so it's not all bad news.
Hey, at least next time you read a reviewer saying he's not biased by visual cues, you can facepalm and never trust anything he says ever again. It's something!

For multichannel, we're not as discerning anyway. Plus most multichannel material, besides demos that try to make you puke, will focus on the front and use the rest mostly as support for ambience. So unless there's a clear issue (like the back coming from the front), it usually remains enjoyable. At least I don't think it can ever be as annoying as messed-up positioning for the frontal sounds.


----------



## musicreo

Maestroso said:


> Turns out the capsules of the MKII are too large (10 mm according to support).


Sorry to hear that, but it's what I thought. The cheap PUI 5024HD I used also have 80 dB SNR and are approx. 10 mm. You can measure with those capsules if you don't want any deep insertion, and in the beginning my results with them actually were not too bad. Also, the gain matching of those capsules is way better than the matching of the Primo EM258 capsules. Actually, of the 16 Primo capsules I had, only 5 were closely matched (only the gain was not matched well; the frequency response was always OK).


----------



## Joe Bloggs

Something you guys may enjoy https://www.head-fi.org/threads/behind-joe-bloggs-computer-audio-batstation-year-2022.963519/


----------



## reter

Joe Bloggs said:


> Something you guys may enjoy https://www.head-fi.org/threads/behind-joe-bloggs-computer-audio-batstation-year-2022.963519/


Wait, can I edit my own Impulcifer measurement with that tool to enhance my HRTF? Or is it exclusively for people who can't measure their own HRTF?


----------



## Joe Bloggs (May 24, 2022)

reter said:


> wait, can i edit my own impulcifer measurement with that tool to enhance my hrtf? or it's exclusively for people who can't measure their own hrtf?


You can use it to feed up to 8 virtual speakers (that you measured yourself or otherwise) at eye level with content upmixed from stereo, and also to adjust the dynamic range of the music.


----------



## Sekka

I have two MS-TFB-1 mics and an M-Track Duo. The right-channel mic will only work with my Rode VXLR+ adapters (on either adapter, and on either input 1 or 2 of the interface) if the jack is pulled out from the adapter by about 1/8th of an inch, but it works fine plugged fully into the mic jack of my motherboard. The left-channel mic has no such issue, and there is no problem otherwise with the audio of either mic.

Does anyone have an explanation/fix for this?


----------



## Sekka

I was able to get a near-perfect recreation of my LG CX TV speakers as a test drive; the fact that this is a free project is mind-blowing. I have a Kali LP-6 on the way that will hopefully be high enough quality, when paired with an HD800, not to feel any loss of detail. Is it possible to EQ in the extra sub-bass under 40 Hz that is lacking from most speakers after the fact, or am I limited by the speaker's capability?


----------



## jaakkopasanen

Sekka said:


> I was able to get a near-perfect recreation of my LG CX TV speakers as a test drive,  the fact that this is a free project is mind-blowing.  I have a Kali LP-6 on the way that will hopefully be high enough quality when paired with an HD800 to not feel any loss of detail.  Is it possible to EQ in the extra sub bass under 40 hz that is lacking from most speakers after the fact, or am I limited by the speaker capability?


If you do room measurements, Impulcifer will EQ the bass for you. You could also, of course, do that manually, for example in EqualizerAPO by adding some parametric filters.
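For the manual route, here is a hedged sketch of what such lines look like in Equalizer APO's config.txt (REW-style filter syntax; the frequencies and gains are placeholders to tune by ear, not a recommendation):

```
Preamp: -6 dB
Filter 1: ON LS Fc 40 Hz Gain 6 dB
Filter 2: ON PK Fc 30 Hz Gain 3 dB Q 0.7
```

The preamp cut makes headroom so the boosts don't clip; the low shelf lifts everything under ~40 Hz and the peaking filter adds a little extra in the deep bass. Place the lines so they apply to the same chain as the HeSuVi convolution.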


----------



## Joe Bloggs

jaakkopasanen said:


> If you do room measurements, Impulcifer will eq the bass for you. You could also of course do that manually in EqualizerAPO for example by adding some parametric filters.


What does Impulcifer do about the phase of bass frequencies for which the speakers' output is too low to give an actual reading of the timing?


----------



## Sekka

Joe Bloggs said:


> What does Impulcifier do about the phase of bass frequencies for which the speakers' response is too low to give an actual reading of the timing?


If it's so low that the mics can't pick it up, I assume it's not touched. I had to EQ some sub-bass regions of the HRIR created with the LG CX speakers by 45 dB for them to be reasonably audible, and those frequencies appear to be placed similarly to the stock headphones. That's a pretty extreme case, though; because they are TV speakers, there is literally no sub-bass.


----------



## Brandon7s (Jun 4, 2022)

Sekka said:


> I was able to get a near-perfect recreation of my LG CX TV speakers as a test drive,  the fact that this is a free project is mind-blowing.  I have a Kali LP-6 on the way that will hopefully be high enough quality when paired with an HD800 to not feel any loss of detail.  Is it possible to EQ in the extra sub bass under 40 hz that is lacking from most speakers after the fact, or am I limited by the speaker capability?


I really like my LP-6s and am looking forward to hearing what you think of them.

 Unrelated, but I recently bought the Etymotic ER4SR IEMs and I'm really, really liking them with Impulcifer BRIRs. Maybe even more than I like them with my Anandas. There's something special about just how much detail these things can retrieve. I like the ER4SR with the BRIRs a lot more than I like the same BRIRs with my Moondrop Blessing 2, though that might just come down to the fact that I have to rely on IEM measurements made by other folks and the MDB2s have a surprising lack of variety in the measurements available.


----------



## Sekka

Is it possible to somehow adjust the angle of a single virtual speaker in either impulcifer or hesuvi?  I'm getting a lot of great sounding HRIRs but my left ear is angled differently than my right ear and 95% of the 30+ HRIRs I've recorded are panned to the left.  

Specifically (with single speaker measurement) it's the front left speaker that is the issue, it usually sounds almost merged into side left and closer to my head.  I can't find an angle that reliably matches with front right, which is perfect when turned 30 degrees.  I've tried anywhere from about 5 degrees to about 50, with varying results.

+30 for FL/FR orientation in Hesuvi gets it near the right position, but drags FR into the wrong position with it and warps the frequency response a bit.


----------



## reter

Sekka said:


> Is it possible to somehow adjust the angle of a single virtual speaker in either impulcifer or hesuvi?  I'm getting a lot of great sounding HRIRs but my left ear is angled differently than my right ear and 95% of the 30+ HRIRs I've recorded are panned to the left.
> 
> Specifically (with single speaker measurement) it's the front left speaker that is the issue, it usually sounds almost merged into side left and closer to my head.  I can't find an angle that reliably matches with front right, which is perfect when turned 30 degrees.  I've tried anywhere from about 5 degrees to about 50, with varying results.
> 
> +30 for FL/FR orientation in Hesuvi gets it near the right position, but drags FR into the wrong position with it and warps the frequency response a bit.


You can let Impulcifer adjust it with the --channel_balance option and check the result in the plots; see the measurements guide. There are a bunch of ways to balance the channels; you just have to try and see what's best for you.


----------



## Sekka

reter said:


> You can let impulcifer adjust it with the --channel_balance command in plot, see the measurements guide, there are a bunch of ways to balance the channels, you just have to try and see what's best for you


My channel balance is fairly good and I have tried all of the commands, the FL speaker is still positioned incorrectly.


----------



## Sekka (Jun 8, 2022)

I ended up removing the casing from my MS-TFB mics and putting them inside a slit earbud I had lying around, like Brandon did earlier in the thread. My results are already noticeably better from doing that, so I'm assuming that was the main issue.

Some of the 5 HRIRs I quickly recorded with the naked mics are actually panned to the right, which was previously near impossible for me unless I intentionally sabotaged the FL channel.


----------



## reter (Jun 8, 2022)

Sekka said:


> My channel balance is fairly good and I have tried all of the commands, the FL speaker is still positioned incorrectly.



Are you sure you don't have some hearing loss in your right ear? I was complaining about the same thing on my front left channel until I noticed that I have some hearing loss in my left ear; the channel balance helped a bit only because it centered the front left and right channels more, so it was less noticeable.

Try what I did: just play some 7.1 left and right test material, wear your headphones reversed, and see if you have the same problem with the other ear... If so, then the HRIR is the problem; if not, then you probably have some frequency loss in your ear.


----------



## Sekka (Jun 10, 2022)

reter said:


> are you sure you don't have some hearing loss in your right ear? i was complaining for the same thing on my front left channel until i noticed that i have some hearing loss in my left ear; the channel balance helped a bit only because it centered more the front left and right channels so it was less noticeable
> 
> try what i did: just play some 7.1 left and right stuff and try to wear your headphone in reverse and see if you have the same problem with your other ear... if so then it's the hrir the problem, if it isn't then you probably have some frequency loss in your ear


Are you sure you have hearing loss in your left ear or is that just a guess?  That's usually in the higher frequencies as far as I know, unless it's a severe case.  If not I would try adding a small delay (like 0.1 ms) to the left channel in EqAPO and see if that makes a difference to you, that has worked for other people on this forum.

If your normal head posture is tilted slightly left or right, your brain will eventually accept that position as neutral.  If your head is tilted left and sound is hitting your right ear slightly sooner than your left, simulating the delay that your brain is expecting will most likely fix your issue.
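For reference, that delay trick is just a few lines in Equalizer APO's config.txt (0.1 ms is the example value from this thread, to be tuned by ear):

```
Channel: L
Delay: 0.1 ms
Channel: all
```

The first line restricts processing to the left channel, the last returns the selection to all channels so anything added later isn't accidentally left-only.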

Anyways, removing the mic casing fixed my issue thankfully


----------



## simplefi

Is it possible to output HRIRs at a desired sample rate other than 48 kHz without having to resample? It seems like Impulcifer only outputs at 48 kHz unless the --fs parameter is used to resample to another rate. I'm trying to record HRIRs directly at my desired sample rate.
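Until recording at an arbitrary rate is supported, resampling the 48 kHz output afterwards is straightforward. A naive linear-interpolation resampler in plain Python, just to show the index arithmetic; for real impulse responses you'd want a proper band-limited resampler such as scipy.signal.resample_poly (the function below is a hypothetical sketch, not Impulcifer's implementation):

```python
def resample_linear(samples, fs_in, fs_out):
    """Resample by linear interpolation between neighbouring samples.
    Fine as a sketch; a windowed-sinc/polyphase resampler is the
    right tool for impulse responses (it avoids aliasing)."""
    n_out = int(len(samples) * fs_out / fs_in)
    out = []
    for i in range(n_out):
        pos = i * fs_in / fs_out          # fractional source index
        j = int(pos)
        frac = pos - j
        nxt = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(samples[j] * (1 - frac) + nxt * frac)
    return out

# Halving the rate keeps every other sample for this trivial signal.
print(resample_linear([0.0, 1.0, 2.0, 3.0], 48000, 24000))  # [0.0, 2.0]
```

Note that resampling a 48 kHz HRIR to 44.1 kHz also rescales all the timing cues correctly, since both ears' responses shrink by the same factor.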


----------



## Brandon7s (Jun 10, 2022)

Well, I did it fellas. I just ordered a pair of Senny+Drop HD8XXs. I've been waiting for a good excuse to try them for a while now and my trusty Ananda's right earpads just came apart at the seams due to the glue weakening and that was as good an excuse as any. I know, I know, that's a rather flimsy excuse.

After using the Ety ER4SRs for a couple of weeks now, I honestly just crave more detail than my Anandas can provide, and everything I've read online says the HD800 and its derivatives are about as detailed as it gets in a full-sized headphone, so that was the next logical step. The ER4SR's detail is straight up addicting when using Impulcifer; it takes realism to another level. I'm very curious how the 8XX fares in comparison. Also, the 800/8XX's highly praised comfort is attractive as well; the Anandas are very comfortable, but their earcups are a bit large for my head and I find myself with a bit of abrasion below my right ear after a long day of wear. I think they'd be about perfect if the cups were 5 millimeters shorter in length.

I've ordered the 8XX from Amazon and they should come in on Sunday. I went with a new pair rather than saving a few hundred because I wanted to be able to return them if they aren't a noticeable upgrade in both detail retrieval and comfort. I have a rather large collection of mid-tier headphones that I need to sell, since I don't use any full-sized cans but my Anandas or DT770s anymore, so this'll be a good reason to stop being lazy and finally put those on the market.


----------



## Brandon7s

I've been using my new HD8XXs for about 4 hours so far today and so I figured I should post follow up for those who might be interested in how my experience is going so far with them in comparison to my Anandas.

First things first: their stock, un-equalized frequency response is complete garbage for me. It's one of the muddiest headphones I've ever used right out of the box. I honestly have no idea what on earth they were thinking when they tuned these; the FR is honestly terrible. I've not tried the sticker mods yet and likely won't, since I use Impulcifer with them, so their stock FR doesn't really matter. I did try EQing them to the Harman target and that is a significant improvement. It's very difficult for me to discern the technical abilities of headphones without customized speaker virtualization, though, since I don't experience soundstage without it; despite these being lauded as having the widest and most expansive soundstage available in headphones, they were no exception. Instruments sound randomly placed and very much in my head, as usual.

As a result, I wasted little time before taking a 7.1 measurement with Impulcifer using my Kali LP-6s in a nearfield configuration. The results were fantastic, though I did need to do a little manual channel balancing, but that's normal for me.

I definitely think there's an improvement in detail retrieval vs. the Ananda, but I'm having to rely on memory since I'm still waiting for their replacement pads to arrive. Once they are in, I'll be able to A/B them.

Interestingly, the headphone.png plot of this 8XX measurement has much less variance around the 1000 Hz range compared to the inconsistent and extreme variation I get with my Anandas, and I've taken 110 Ananda measurements. Each one has different spikes and dips in the two channels above 2000 Hz, and the range between 1000 and 2000 Hz also varies significantly from measurement to measurement, though to a lesser extent. The low-end variance between left and right channels is similar to what I get with my Anandas, so that's likely just the nature of the shapes of my ears, which makes me feel better about the Anandas. The higher-frequency variation with them is something that puzzles me, though. Maybe I'll see more of those differences with further 8XX measurements; only time will tell.

In any case, the first measurement with the 8XX was on par with the best measurement I've ever gotten out of the Anandas after over 100 of them, so that's promising. Maybe their cup shape just works better on my head, or perhaps I just got lucky. I'll take some more measurements later this week to find out.


----------



## reter (Jun 16, 2022)

Sekka said:


> Are you sure you have hearing loss in your left ear or is that just a guess?  That's usually in the higher frequencies as far as I know, unless it's a severe case.  If not I would try adding a small delay (like 0.1 ms) to the left channel in EqAPO and see if that makes a difference to you, that has worked for other people on this forum.
> 
> If your normal head posture is tilted slightly left or right, your brain will eventually accept that position as neutral.  If your head is tilted left and sound is hitting your right ear slightly sooner than your left, simulating the delay that your brain is expecting will most likely fix your issue.
> 
> Anyways, removing the mic casing fixed my issue thankfully


Sadly I'm sure, because I did an audiometric test before getting a job. I don't really need to test it anyway; it's noticeable for me even in real life. If I clap my hands on my left and on my right, I can hear the difference. I was thinking about some left-channel audio correction, but I'm so used to it as it is in real life that I can get used to it in my "virtual" one too, if you know what I mean.


----------



## Sekka (Jun 19, 2022)

I just found out that if you run the headphone measurement with HeSuVi activated and your HRIR selected, and you have the default input device set as your surround virtualizer (Cable Output for me), plotting the headphone graph will show the combined frequency response of the HRIR's speaker + headphone measurements as received by the input device.

This makes it much easier to EQ the HRIR to your liking.  If you had a smoothed version of the graph you could even run it through AutoEQ to match a target curve.


----------



## musicreo

Sekka said:


> I just found out that if you run the headphone measurement with HeSuVi activated and your HRIR selected, and you have the default input device set as your surround virtualizer (Cable Output for me), plotting the headphone graph will show the combined frequency response of the HRIR's speaker + headphone measurements as received by the input device.


 If you have measured your PRIR and headphones, you already have their combined frequency response. I don't see the point of your measurement.


----------



## Joe Bloggs

Sekka said:


> I just found out that if you run the headphone measurement with HeSuVi activated and your HRIR selected, and you have the default input device set as your surround virtualizer (Cable Output for me), plotting the headphone graph will show the combined frequency response of the HRIR's speaker + headphone measurements as received by the input device.
> 
> This makes it much easier to EQ the HRIR to your liking.  If you had a smoothed version of the graph you could even run it through AutoEQ to match a target curve.


Combined as in downmixed to stereo? That's a nice idea. Taking the idea further, I often had a test set of music played through a certain PRIR / whatever playback chain I had, measured the FR of the whole track compared to the input, and EQed the result back towards the original using statistical methods. Call it statistical PRIR tonality correction? At the end of it you pretty much hear your headphones as they originally sounded, but with spatialization.
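A minimal sketch of that "statistical tonality correction" idea (not Joe's actual method): probe the level of the reference and processed signals at a few frequencies, using the Goertzel algorithm to stay dependency-free, and take the dB differences as the corrective EQ.

```python
import math

def goertzel_power(samples, fs, freq):
    """Signal power at one frequency (Goertzel algorithm)."""
    w = 2 * math.pi * freq / fs
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def band_correction_db(reference, processed, fs, freqs):
    """Per-frequency EQ correction (dB) that would move the processed
    signal's spectrum back toward the reference."""
    return {f: 10.0 * math.log10(goertzel_power(reference, fs, f)
                                 / goertzel_power(processed, fs, f))
            for f in freqs}

# Toy example: the "playback chain" just halves the amplitude (-6 dB),
# so the suggested correction at the probe frequency is about +6 dB.
fs = 48000
reference = [math.sin(2 * math.pi * 1000 * i / fs) for i in range(4800)]
processed = [0.5 * x for x in reference]
correction = band_correction_db(reference, processed, fs, [1000])
print(round(correction[1000], 1))  # -> 6.0
```

A real version would probe many frequencies (or use full spectra) and smooth the result before turning it into filters, but the principle is the same.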


----------



## Sekka (Jun 19, 2022)

musicreo said:


> If you have measured your PRIR and headphone you already have  their combined frequency response. I don't see any sense in your measurement.


The difference is that the measurement will react to, and let you visualize, any change you make: EQ, downmixing, upmixing, adjusting virtual speaker position, etc. Basically anything that alters the digital audio signal.


----------



## reter (Jun 21, 2022)

reter said:


> sadly i'm sure because i did the audiometric test before getting a job, also i don't have to test it, it's so noticeable for me even in real life, i can test clapping my hands left and right and i can hear the difference... i was thinking on some left audio correction but as it is i'm so much used in real life that i can be used in my "virtual" one too, if you know what i mean



I tried using channel balance to crudely reduce the level in my right ear to compensate for my left ear problem. I was mistaken before, IT DOES A LOT! I fixed it using channel_balance=-1.1, so I have 1.1 dB less on my right, and it's like my left ear has been restored. So good; Impulcifer really does a lot of stuff to help.

Before, with 7.1 virtualization, I could hear the left side of things as farther away than the right side; now I can hear both sides localized well.



EDIT: I have to correct myself. After some tests I found that it's actually worse for localization. I shouldn't correct by raising or lowering the overall volume; instead I should raise only the frequencies I've lost in my left ear.
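For anyone wanting to try the frequency-specific route, Equalizer APO can apply filters to one channel only. A sketch, with entirely made-up frequencies and gains (use your own audiogram results, and check the exact syntax against the Equalizer APO configuration reference):

```
Channel: L
Filter 1: ON PK Fc 4000 Hz Gain 6 dB Q 1.0
Filter 2: ON PK Fc 8000 Hz Gain 8 dB Q 1.0
Channel: ALL
```

The `Channel: L` line restricts the following filters to the left channel; `Channel: ALL` returns to processing both.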


----------



## Iohfcasa (Jun 25, 2022)

There shouldn't be any channel imbalance caused by measurements below ~1000 Hz, because the brain can't utilize level differences (ILD) in that range very well and is therefore limited to evaluating phase differences (ITD) for left/right detection.

https://en.m.wikipedia.org/wiki/Sound_localization
(ITD and ILD)
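To put rough numbers on the ITD side of this, here is a small sketch using the classic Woodworth rigid-sphere approximation. The 8.75 cm head radius is a textbook assumption, not anything measured in this thread:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound_m_s=343.0):
    """Approximate interaural time difference (seconds) for a rigid
    spherical head (Woodworth model). Azimuth 0 = straight ahead,
    90 = directly to one side."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_m_s) * (math.sin(theta) + theta)

# A source directly to one side gives roughly 0.66 ms.
itd_side = woodworth_itd(90.0)
```

That ~0.66 ms is the full range the brain works with, which is why the small (~0.1 ms) delay tweaks mentioned earlier in the thread can already shift perceived balance noticeably.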


----------



## Brandon7s

I got my Ananda's replacement pads in last week and so I tried making measurements with both my new HD8XX and the Ananda in order to compare them as closely as possible. 

Long story short, the HD8XX's results with Impulcifer are just flat-out better. They sound sharper, have stronger localization, and have fewer issues with rogue treble spikes. They also measure more consistently; if I measure the Anandas in two different sessions, the results of the headphone measurement vary significantly, especially above 3000 Hz. The HD8XX's measurements are much more consistent from session to session, both in the final results and in the headphone measurements. There's some slight variation above 3000 Hz, but it's far smaller than the differences between Ananda measurements. 

Here's where it gets weird. See how smooth the graph lines are in this HD8XX measurement compared to the Ananda graph? This is from the same session, same exact settings on everything, same mic placement for both headphone measurements. 

HD8XX headphone.png:





Ananda headphone.png from same measurement session.




See how the Ananda's graph is so "hairy" and saw-like above 1000 Hz? I think this could be part of why the HD8XX results with Impulcifer are a significant step up in localization sharpness and clarity. I thought it could be caused by distortion from driving the headphones too hard, or by digital clipping, so I reduced the audio output to ensure digital clipping couldn't occur and also dropped the SPL of the headphones by turning the amp down, and it made absolutely no difference. This is a real puzzle to me.


----------



## simplefi

Very interesting observations with your 8XX.  I picked up some HD800s to experiment with Impulcifer and my findings are similar to yours.  Localization is excellent, as is technical performance.  My headphones plot is a little "hairier" than your 8XX but not as bad as the Anandas.  Not sure how this plays into the result but I am getting great results.  

Another thing I have been experimenting with is amplification, and I have found that the choice of amp does indeed make an impact depending on what your goal is.  Making measurements with one amp and then playing back with another, or making different measurements using different respective amps, all impact the end result.  My theory is that certain aspects like frequency response and localization can be well captured with the impulse response measurement, while other aspects of the sound cannot.  If you have ever compared two amps and been able to hear that one has better imaging or dynamics than the other, or heard the differences between a tube and a solid state amp, these are things that may not be measurable with a sine sweep.  Thus these qualities will also contribute to the final sound.  

My takeaway is that for multichannel use, frequency response and localization are the most important factors, and Impulcifer measurements excel at this.  For two-channel critical listening, the other factors mentioned above may also come into play depending on your system and ears.


----------



## reter

simplefi said:


> Very interesting observations with your 8XX.  I picked up some HD800s to experiment with Impulcifer and my findings are similar to yours.  Localization is excellent, as is technical performance.  My headphones plot is a little "hairier" than your 8XX but not as bad as the Anandas.  Not sure how this plays into the result but I am getting great results.
> 
> Another thing I have been experimenting with is amplification and have found that the choice of amp does indeed make an impact depending on what your goal is.  Making measurements with one amp and then playing back with another, or making different measurements using different respective amps all impact the end result.  My theory is that certain aspects like frequency response and localization can be well captured with the impulse response measurement, while other aspects of the sound cannot be captured.  If you have ever compared two amps and have been able to hear that one has better imaging or dynamics than the other, or hear the differences between a tube vs solid state amp, these are things that may not be measureable with a sine sweep.  Thus these qualities will also contribute to the final sound.
> 
> My takeaway is that for multichannel use, frequency response and localization are the most important factors and Impulcifer measurements excels at this.  For two channel critical listening, the other factors mentioned above may also come into play depending on your system and ears.



I wonder if this is because jaakko uses the same headphones, so he basically calibrated Impulcifer for his headphones, but I could be wrong.


----------



## castleofargh

reter said:


> Wonder if this is because jakko uses the same headphones so he calibrated impulcifer basically for his headphones, but i could be wrong


Solid no on this.


----------



## reter

Guys, I want to buy another pair of headphones to try with Impulcifer. Do you know what's best to use with it? Closed or open back? Planars? Should it be closer to the Harman curve, or doesn't that matter?

I have the Sennheiser 660S; I would like to try something else to see how it changes things.


----------



## simplefi

reter said:


> guys, i want to buy another pair oc headphones do try with impulcifer, do you know what's the best to use with impulcifer? closed or open back? planars? should be closer to the harman curve or doesn't matter?
> 
> i have the sennheiser 660s, i would like to try something else to see how it changes


The HD800 has been proven to work well for speaker virtualization and is what Smyth Research recommends for the Realiser (along with Stax).  Open backs seem to work the best IME.


----------



## simplefi

I’ve started experimenting with room measurements using the UMIK-1. I’ve found that the sensitivity is quite low for my normal listening volume; I am getting headroom readings in the 30 dB range. Are the results more usable if the digital gain in Windows is turned up, or is it OK to use the recordings as is? I have it set to 0 dB.


----------



## reter

simplefi said:


> I’ve started experimenting with room measurements using the UMIK-1. I’ve found that the sensitivity is quite low for my normal listening volume. I am getting headroom readings in the 30db range. Are the results more useable if the digital gain in windows is turned up or is it ok to use the recordings as is? I have it set for 0db.


This is something I've been wondering about since I bought the UMIK. I have to set the virtual gain up to maximum to get low headroom, but in that case I can hear a lot of background noise. I don't really think it's very useful without a proper amp.


----------



## Joe Bloggs

reter said:


> this is something i'm wondering since i bought the umik, i have to set the virtual gain up to maximum to get low headroom but in that case i can hear a lot of background noise, i don't really think it's very helpful without a proper amp


How loud have you guys got the speakers playing the test sweeps?


----------



## simplefi

Joe Bloggs said:


> How loud have you guys got the speakers playing the test sweeps?


I have it set to the loudest I would listen to actual music.  I went ahead and just did the measurements at 0 dB gain and it still worked out.  The Windows gain is digital anyway, so there is nothing to gain (no pun intended) from using it.


----------



## jaakkopasanen

No need to worry about the headroom so much. Its primary function is to tell you whether you have clipping in your recording and whether the signal is too quiet.
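As a concrete illustration of what that headroom number means, here is a minimal sketch in pure Python, assuming normalized samples in the -1..1 range:

```python
import math

def headroom_db(samples):
    """Headroom in dB: how far the loudest sample sits below digital
    full scale (1.0). Positive means no clipping; near zero means the
    recording is flirting with clipping."""
    peak = max(abs(s) for s in samples)
    return -20.0 * math.log10(peak)

# A sweep recording that peaks at 10% of full scale has 20 dB of
# headroom: no clipping, and not "crazy quiet" either.
recording = [0.1 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
print(round(headroom_db(recording), 1))  # -> 20.0
```

By this measure, 30 dB of headroom (as reported above) just means the peaks sit about 3% of the way to full scale, which is quiet but still usable.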


----------



## morgin

Hi everyone, it’s been a while. I once again want to thank @jaakkopasanen for Impulcifer; it’s so good that I cannot watch movies or game without it. You saved me a lot of money on a high-end surround system, and I can use it at night without disturbing anyone. I am truly very grateful. 

I posted a while ago that I’m using headphone compensation in Impulcifer as in the tutorial. Then I experimented by extracting the headphone compensation from Impulcifer and generating a .txt file to also use in HeSuVi virtualisation (instead of using oratory’s). It works wonders, giving twice as much clarity in the virtual speakers and the effect of having speakers. I want to try to somehow add it a third time if anyone knows a way of doing that. Maybe add it in EqAPO (though HeSuVi is just an interface for EqAPO)? Any help would be appreciated.


----------



## musicreo

morgin said:


> I wanted to try and somehow add it a third time if anyone knows a way of doing that. Maybe add it in eqapo (though hesuvi is just and interface for eqapo) any help would be appreciated.



You could repeat the EQ settings in the EQ-APO configuration editor as many times as you want, but I think it is the same as increasing the gain values by a factor of two or three. To me it doesn't seem reasonable to just repeat the filter settings several times.
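The "repeating filters just multiplies the gains" point can be sanity-checked with a toy flat gain stage (a sketch only; a real peaking filter behaves the same way at its center frequency):

```python
import math

def apply_gain(samples, gain_db):
    """One pass of a flat 'filter' with the given gain in dB."""
    factor = 10 ** (gain_db / 20.0)
    return [s * factor for s in samples]

def peak_db(samples):
    """Peak level of the signal in dB relative to 1.0."""
    return 20.0 * math.log10(max(abs(s) for s in samples))

signal = [1.0, 0.5, -0.25]
once = apply_gain(signal, 6.0)    # filter applied once
twice = apply_gain(once, 6.0)     # same filter repeated, as in EQ-APO

print(round(peak_db(once), 1))    # -> 6.0
print(round(peak_db(twice), 1))   # -> 12.0, i.e. the dB gains simply add
```

So stacking the same compensation three times is equivalent to tripling every filter's dB gain once, which is rarely what you want.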


----------



## morgin

Thanks for the reply. 

Another question that I’m stuck on and have been searching for without finding a concrete answer: for watching surround-sound movies with MPC-HC and madVR, do I choose bitstream or PCM?


----------



## musicreo

Bitstream should not work with EQ-APO.


----------



## morgin

musicreo said:


> Bitstream should not work with EQ-APO.



Is there any way to use bitstream with Impulcifer? I’m reading it’s better for surround sound.


----------



## castleofargh

That’s relevant when you output a specific multichannel format to a multichannel system that can decode that particular format. The idea is that you keep the signal as is, without trying to "understand" it, because some specific device further down the chain is compatible with it (be it Dolby something or DTS something else), so you want that device to do the decoding and not your DVD player or your video app on the computer. Not because bitstream is better, but because we assume that the device further down the chain will do the decoding better.
Bitstream is a fancy word for "don't touch that". It's nothing miraculous and it doesn't improve anything.
But here with Impulcifer you’re sending the audio to be convolved and mixed down to stereo (the speaker simulation part) before it even gets out of the computer, so you absolutely do want to touch it! Your video player must decode the audio of special multichannel formats beforehand, or the convolution app won't know what to do with it.


----------



## morgin

castleofargh said:


> That’s relevant when you output a specific multichannel format to a multichannel system that can decode that particular format. The idea is that you keep the signal as is without trying to "understand" it because some specific device further down the chain is compatible with it(be it Dolby something or DTS something else) so you want that device to do the decoding and not your DVD player or your video app on the computer. Not because bitstream is better but because we assume that the device further down the chain will do the decoding better.
> Bitstream is a fancy word for "don't touch that". It's nothing miraculous and it doesn't improve anything.
> But here with impulcifer you’re sending the audio to be convolved and mixed down to stereo(the speaker simulation part) before it even gets out of the computer, you do want to touch that! Your video player must decode the audio of special multichannel formats beforehand or the convolution app won't know what to do with it.


Beautifully put, because I understood what you wrote. Thank you!


----------



## Xam198

Moreover, MPC-HC lets you have two audio outputs. I have an E-MU 1820 card, which I use for HeSuVi, and I use the sound card built into the motherboard to send the 5.1 bass channel to an amp connected to a Clark Synthesis TS209. A very big thank you to jaakkopasanen: he saved my passion for home cinema, and my good relations with my neighbors.


----------



## reter

jaakkopasanen said:


> No need to worry about the headroom so much. The primary function of it is to tell you if you have clipping in your recording and that the signal is not crazy quiet.


Yeah, actually I found out that I get better results if I lower my Master Series mics to about 50% of the overall gain in Windows. In that case I'm getting high headroom but also better results... or maybe results that fit my hearing taste better?

Every time I'm in the recording phase I'm asking myself how to get the best spatial effect but also the best clarity and a neutral sound (something like a studio recording). So I lower the gain to get that feeling. I tried the UMIK, but I found that when I try to correct the reverb the sound gets less "clear" and more muffled, so lowering the mic sensitivity in Windows is really the only way to go for me.


----------



## Sekka

reter said:


> yeah, actually i found out that i get better results if i lower my Master Series mics to like 50% of the overall gain in windows, in that case i'm getting high headroom but also better results... or maybe results that fits better for my hearing taste?
> 
> everytime i'm in the recording phase i'm asking myself how to get the best spacial effect but also the best clarity and neutral sounding (something like a studio recording)? and so i lower the gain to get that feeling, i tried the umik but i found that when i try to correct the reverb the sounding gets less "clear" and more muffled, so actually my only way to go for me is lowering the mics sensitivity in windows


My best tip is to listen to your best raw FL/FR/BL/BR (etc).wav recordings with virtualization disabled. Any clicks, pops, squeaks, shaking furniture, or just general distortion you hear in the recordings will likely show up to some degree in the resulting hrir. I found that I had the best results from having both my speakers and mics at a low gain (lower than listening volume for speakers), then using Audacity noise reduction to completely remove the noise from each recording. My headroom was always 20-25 dB in the best recordings.

Also, unless you have very symmetrical ears, a matching ITD measurement in readme.md is a much better indication of a balanced recording than the speaker plot.  You can compensate for a mismatched ITD with channel balance, but personally any sort of channel balance completely ruined the realism of my recordings.  It also helped me to adjust decay to the lowest value present in the readme; significantly differing decay between speakers can also lead to an unbalanced sound, since you would perceive one speaker as being further away.  For me that was an RT60 of 315 ms, so I set my --decay parameter to 315.

I never did get a frequency response perfectly matching that of my speakers, even after 1000+ recordings, and I'm not even sure it's possible. But I was only trying to match the spatial characteristics, so it doesn't really matter.  Room correction should take care of the response; I just used AutoEq and WebPlotDigitizer since I don't own a measurement mic.
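For anyone curious where an RT60 figure like that 315 ms comes from: decay times are conventionally estimated from an impulse response by Schroeder backward integration. A rough, self-contained sketch of the textbook method on a synthetic exponential tail (not Impulcifer's actual code):

```python
import math

def rt60_schroeder(ir, fs):
    """Rough RT60 estimate: build the energy decay curve by backward
    integration, measure the time it takes to fall from -5 dB to -25 dB
    (the T20 interval), and extrapolate that slope out to 60 dB of decay."""
    energy = [s * s for s in ir]
    edc = []                       # remaining energy at each sample
    total = 0.0
    for e in reversed(energy):
        total += e
        edc.append(total)
    edc.reverse()
    edc_db = [10.0 * math.log10(e / edc[0]) for e in edc]
    t5 = next(i for i, d in enumerate(edc_db) if d <= -5.0)
    t25 = next(i for i, d in enumerate(edc_db) if d <= -25.0)
    return 3.0 * (t25 - t5) / fs   # 20 dB span scaled up to 60 dB

# Synthetic room tail with a known RT60 of 315 ms.
fs = 48000
true_rt60 = 0.315
k = math.log(1000) / true_rt60     # amplitude decay rate giving -60 dB at RT60
ir = [math.exp(-k * n / fs) for n in range(fs)]
print(rt60_schroeder(ir, fs))      # close to the true 0.315
```

Real room tails are noisier than this clean exponential, which is exactly why per-speaker decay estimates can disagree and why clamping them to a common value, as described above, can balance the result.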


----------



## morgin

I was wondering if it’s possible to have a section on this forum that is easily accessible for newcomers, to show the kind of results we end up achieving, rather than having them read through the whole thread, which is huge at this point. 

It’s a big ask to get someone to invest in mics, speaker(s), time and effort. Also, people look for different things, and from what I read on here we are all getting quality results (after some effort), be that in stereo music, surround music, gaming, or watching movies.  

The reason I’m bringing this up is that I’m asking people to try it out, but many counter by saying it won’t be as good as the real thing, or that just getting expensive audio gear is better, and many people are put off. 

If we start the thread with a sticky or something similar that newbies see first, as an introduction to Impulcifer, it should hopefully drive them to try to achieve what we are trying to convey.


----------



## jaakkopasanen

morgin said:


> I was wondering if it’s possible to have a section on this forum that is easily accessible for new comers to read the kind of results we end up achieving rather than them having to read through the whole forum which is huge at this point.
> 
> It’s a big ask to get someone to invest in mics, speaker(s), time and effort. Also people look for different things and from what I read on here we are all getting quality results (after some effort) be that in stereo music, surround music, gaming watching movies.
> 
> ...


The right place for this would be the Impulcifer wiki. If the good people here could now share all the tips they have for achieving good results and avoiding pitfalls, I can compile a summary in the wiki.


----------



## morgin (Oct 2, 2022)

How long should it be, and what should we include? Just the things we learned, or what we're gaining from it, how easy or hard it was, and how long it took? Maybe if we each write a long review-type piece you can have them all listed and users can expand to see them in full.

If people do start posting their outcomes, then I think mine should be near the end, because members on here are better at expressing their results.

Here’s mine I can edit if someone has a better template/format or layout or needs to be smaller.


“Imagine an audio quality jump similar to going from VHS video to 4K HDR Blu-ray; it's the best way I can express this.
Impulcifer is truly a game changer. After trying the demo and being convinced there was a speaker playing in front of me (grinning with a huge smile, having to take off the headphones multiple times to make sure there wasn’t), I had to see how far I could take it. I have tried all sorts of options, from a cheap surround setup to different headphones to software like Dolby Atmos on PC and all the HeSuVi presets. While they were very good, I knew there must be something better; there was always room for improvement. The options were a high-end, expensive surround setup; a Smyth Realiser, equally expensive; or software-based solutions that sometimes require monthly payments. Then I found this, which I’m shocked not many people talk about.

Impulcifer takes time and effort, but you soon learn what to do and start getting results. There is a lot of free software to install and set up, but the guide is complete and the Head-Fi forum users offer heaps of help and advice.

The result at the end is mind-blowingly good: the sense of invisible speakers around you, and the sound space. The clarity in every sound and detail is crisp. You will hear individual rocks behind you in action scenes, people all around you in crowded scenes. Basically a high-end surround system, but with invisible speakers tuned perfectly for your ears. You don’t even need to go as far as surround; even stereo sound is phenomenal.

My tips for starting are to make sure you have at least one good-quality speaker, have a good pair of open-back headphones, and make sure the in-ear mics don’t move at all when taking measurements. Experiment a lot and ask questions on the forum. The first couple of successful results were a huge improvement over the alternatives mentioned above, and from there you keep tweaking and it gets better.”


----------



## castleofargh

On many occasions where someone discussed "soundstage" or more 3D whatever, I brought up this thread, and the Realiser A16 if they're rich enough. This has been a great success, as you can all see.
I believe they're facing a mental barrier. In people's minds, working hard at their job to get money is an accepted situation: not a fun one, but they're already doing it anyway. Audio, meanwhile, is the hobby, where we expect to relax and enjoy ourselves. 
That difference in mindset makes almost everybody ready to spend their hard-earned money on gear for the enjoyment that a cool hobby can provide, but it also makes almost everybody allergic to any amount of personal work done directly for the hobby, within the hobby.
It's clearly not a behavior limited to this app.


----------



## reter

morgin said:


> The reason I’m bringing this up is because I’m asking people to try it out but there are many people countering by saying it won’t be good as the real thing or that just getting expensive audio gear is better and many people are put off.


You know, people aren't convinced because they haven't tried it. I see a lot of people not convinced that 4K is much better than Full HD, or that virtual reality is very immersive.

To be honest, there have been only two occasions where I felt mind-blowing excitement and incredulity even while I was experiencing it:

1- the first time I experienced virtual reality with one of the top-tier VR headsets on the market
2- Impulcifer, when I achieved pretty good results

Sadly there's a barrier between people and the realization that stuff like this is a game changer: that barrier is called money. Even though Impulcifer is not as expensive as a Pimax 8KX, it still costs a $70 pair of mics + a $100 audio interface + a $150 single speaker, plus some setup and trial and error.

I see a lot of people coming to HeSuVi because they love surround sound and are trying to get the best results even though they can't do their own measurements, so I hope these people can reach the conclusion that $350 of gear for Impulcifer is worth the money.


----------



## morgin

I was kind of bothered that Windows doesn’t allow height channels unless you're using Dolby Atmos. Then I came across this video showing that most movies don’t make use of the height channels. 



Just thought I’d share in case anyone else was annoyed by this


----------



## Sekka

morgin said:


> I was kinda bothered that windows didn’t allow height channels unless using Dolby atmos. Then I came across this video that shows most movies don’t make use of the height channels.
> 
> Just thought I’d share in case anyone else was annoyed by this


Hopefully Google's open source version of Atmos takes off (Project Caviar), but I doubt it.  In movies, I'm not particularly impressed with most uses of surround in general, but modern games put them to shame and make perfect use of surround.  I would really enjoy the inclusion of height channels at some point.


----------



## reter (Oct 10, 2022)

Sekka said:


> Hopefully Google's open source version of Atmos takes off (Project Caviar), but I doubt it.  In movies, I'm not particularly impressed with most uses of surround in general, but modern games put them to shame and make perfect use of surround.  I would really enjoy the inclusion of height channels at some point.



I wonder if we can measure some height channels with Impulcifer and use them to our liking... obviously not expecting to achieve what Atmos does, but I think this could help enhance the 7.1.


----------



## jaakkopasanen

reter said:


> wonder if we can measure some height channels with impulcifer and use them to our liking... obviously not expecting to achieve the same Atmos does, but i think this could help enhance the 7.1


If something comes out of this Google project, and studios adopt it, and there is a workable decoder on Windows, I will add support for height channels in Impulcifer. There are quite a few ifs here, though...


----------



## morgin

jaakkopasanen said:


> If something comes out of this Google's project and studios adopt it and there is a workable decoder on Windows, I will add support for height channels in Impulcifer. There's quite many ifs here though...


Some hope!… I’ll be looking forward to this. Maybe movies won’t have proper height sounds, but it should help when gaming. But then, would that mean doing the measurements again? Oh no!


----------



## reter

morgin said:


> Some hope!… I’ll be looking forward to this. Maybe movies won’t have proper height sounds but should help when gaming. But then would that mean doing the measurements again? Oh no!


I'm more concerned about how I could put a speaker on the ceiling; redoing the measurements is the least of my problems, really.


----------



## reter (Oct 18, 2022)

I need help with the UMIK: how can I plug it directly into the Behringer with phantom power?


----------



## musicreo

reter said:


> i need help about the umik: how can i plug it directly to the behringer with phantom power?


Isn't the UMIK a USB microphone? If so, there is no way to connect it to the Behringer.


----------



## morgin

I'm not sure if it's placebo, but I'm getting better surround by using Custom Resolution Utility and adding LPCM 8-channel audio. Before setting this, MPC-HC would show the codec output as anything but PCM; now it shows PCM as the output and I'm noticing better surround. Maybe someone would want to test it.


----------



## reter

morgin said:


> I'm not sure if its placebo but I'm getting better surround by using the custom resolution utility and adding LPCM 8 channels audio. Before setting this Mpc HC would show codec output as anything but PCM now it shows PCM as output and I'm noticing better surround. Maybe someone would want to test it.


Isn't CRU stuff related to the monitor? I think the audio format list is related to the monitor's audio.


----------



## Crema

Inspired by the BACCH-BM, I made a new binaural microphone.

The rubber cap I used previously was fine, but it pushed the earwax inside and made me visit the hospital. Discarded.

Instead, I use an Etymotic ear tip to hold the Sound Professionals CB900 microphone.

It is comfortably mounted at the entrance of the ear canal, and is easy to attach and detach.


----------



## zabusa

As far as I understand, Impulcifer tries its best to emulate your speakers in their location relative to the listening position, and it's kind of scary how close it gets. My question is: is it possible to use the positioning/location info but ignore my speakers' FR / sound signature?

I have relatively high-end headphones (HiFiMan Susvara), and while my speakers are quite good (KEF LS50 Metas), I would kind of like to use my Susvaras' tuning, but with the localization of my HT.

Do I process without headphone compensation, with the --no_headphone_compensation argument?


----------



## Sekka (Dec 12, 2022)

zabusa said:


> As far as i understand Impulicifer tries its best to emulate your speakers in their location relative to the listening position.  And its kinda scary how close it is. My question is - is it possible to use the positioning/location info but ignore my speakers FR / sound signature?
> 
> I have a relatively high end headphone (Hifiman Susvara), and while my speakers are quite good (KEF LS50 Metas) - i would kind like to use my Susvaras tuning, but with the localization of my HT
> 
> Do i process without Headphone Compensation with the* --no_headphone_compensation* argument?


If you don't use headphone compensation, you're just adding together the FR of your headphones and the FR of your speakers.  The result will sound like neither, and most likely won't sound good at all.

If you use WebPlotDigitizer to plot an FR graph of the Susvara, you can place it with the other target responses in the impulcifer\data folder and it should work like the other default targets.  Your HRIR would then be EQed to match the plotted Susvara target when you pass the corresponding "--room_target".
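A sketch of the WebPlotDigitizer-to-target step. The two-column "frequency,raw" layout below mirrors AutoEq-style response files; the file name and the data points are hypothetical, and you should compare against an existing target file in Impulcifer's data folder to confirm the exact format it expects:

```python
import csv

# Hypothetical points read off a published Susvara graph with
# WebPlotDigitizer: (frequency in Hz, level in dB).
digitized = [(20, 4.1), (100, 3.0), (1000, 0.0), (3000, 2.5), (10000, -1.8)]

# Write them in a two-column "frequency,raw" CSV, the layout used by
# AutoEq-style response files.
with open("susvara_target.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frequency", "raw"])
    for freq, db in digitized:
        writer.writerow([freq, db])
```

A real digitization would of course have far more points; a handful per octave is usually enough for a smooth target.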


----------



## morgin

Just wanted to say a huge thanks to @jaakkopasanen again. I went to the Avatar IMAX viewing; I don't usually go to the cinema much unless it's a huge movie, and I got to witness surround sound done by professionals, especially after having used my measured HRTF for a while. Besides the thumping bass, I can honestly say my headphones with Impulcifer sound better. There's just more detail around me, with distinct sounds. The theatre was quiet and I had good seats in the middle, so I was sitting in the ideal place. 

Thanks again for this software. It truly is mind-blowingly amazing. 

BTW, awesome visuals in the movie, mediocre story, but something else to watch in 3D.


----------



## musicreo

morgin said:


> I don't usually go to the cinema much unless it's a huge movie, and I got to witness surround sound done by professionals, especially after having used my measured HRTF for a while. Besides the thumping bass, I can honestly say my headphones with Impulcifer sound better: there's just more detail around me, with distinct sounds.



For Impulcifer measurements we have the speakers in a perfect circle around us. This is not possible in a real listening environment, so for me it is not really surprising that it can give us better sound.


----------



## Lemma

Can anyone please provide the channel layout for hrir.wav?


----------



## jaakkopasanen

Lemma said:


> Can anyone please provide the channel layout for hrir.wav?


https://github.com/jaakkopasanen/Impulcifer/blob/master/constants.py#L55


----------



## morgin (Yesterday at 3:24 PM)

I'm struggling with the bass when watching movies. I want that punchy, deep bass, but adding a low-shelf filter in EqualizerAPO and cranking it up to almost decent levels is causing clipping. Is there something else I can do to get deeper bass? I know it may not be possible to replicate true bass on a pair of headphones, but I'm asking in case there's something I don't know.


----------



## castleofargh

morgin said:


> I'm struggling with the bass when watching movies. I want that punchy, deep bass, but adding a low-shelf filter in EqualizerAPO and cranking it up to almost decent levels is causing clipping. Is there something else I can do to get deeper bass? I know it may not be possible to replicate true bass on a pair of headphones, but I'm asking in case there's something I don't know.


Lower everything but the part you want boosted with the EQ, then turn up the amplifier, *not* some digital gain. 0 dB is full scale, and anything above it is lost; it's the maximum a signal should ever reach (and if we consider the portions of the sines between samples, it can be a good idea to leave an extra 1 to 3 dB of headroom).
Your picture shows a low shelf with a negative gain??? I would use a negative gain and a high-shelf filter (again, to digitally reduce everything but the bass so you don't add more clipped parts).

Or get a shaker, or use an actual subwoofer (the wiring might be more or less annoying). I tried an actual woofer and it's amazing IMO; too bad I have humanoid life forms nearby that aren't nearly as thrilled as I am about a woofer shaking the house while I watch something on my own with headphones.
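The gain-staging logic is easy to demonstrate offline. The sketch below is a rough illustration, not EqualizerAPO's actual processing: the filter is a textbook RBJ-cookbook low-shelf biquad, and the tone, levels, and corner frequency are all made up. It boosts a near-full-scale bass tone digitally and shows that pre-attenuating by the shelf gain keeps the result below full scale:

```python
import numpy as np

def low_shelf(x, fs, fc, gain_db, q=0.707):
    """Apply an RBJ-cookbook low-shelf biquad to signal x."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / (2 * q)
    cosw = np.cos(w0)
    sqA2a = 2 * np.sqrt(A) * alpha
    b0 = A * ((A + 1) - (A - 1) * cosw + sqA2a)
    b1 = 2 * A * ((A - 1) - (A + 1) * cosw)
    b2 = A * ((A + 1) - (A - 1) * cosw - sqA2a)
    a0 = (A + 1) + (A - 1) * cosw + sqA2a
    a1 = -2 * ((A - 1) + (A + 1) * cosw)
    a2 = (A + 1) + (A - 1) * cosw - sqA2a
    b = np.array([b0, b1, b2]) / a0
    a1, a2 = a1 / a0, a2 / a0
    # Direct form I filtering
    y = np.zeros_like(x)
    x1 = x2 = y1 = y2 = 0.0
    for i, xi in enumerate(x):
        yi = b[0] * xi + b[1] * x1 + b[2] * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, xi
        y2, y1 = y1, yi
        y[i] = yi
    return y

fs = 48000
t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * 60 * t)  # near-full-scale 60 Hz tone

# +10 dB low shelf on an already-hot signal pushes it past full scale.
boosted = low_shelf(x, fs, fc=100, gain_db=10)
print("peak without preamp:", np.max(np.abs(boosted)))

# Cutting the preamp by the same 10 dB first leaves headroom for the boost.
preamp = 10 ** (-10 / 20)
safe = low_shelf(preamp * x, fs, fc=100, gain_db=10)
print("peak with -10 dB preamp:", np.max(np.abs(safe)))
```

The level you gave up digitally then comes back from the analog amplifier, where there is no 0 dBFS ceiling.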


----------



## jaakkopasanen

morgin said:


> I'm struggling with the bass when watching movies. I want that punchy, deep bass, but adding a low-shelf filter in EqualizerAPO and cranking it up to almost decent levels is causing clipping. Is there something else I can do to get deeper bass? I know it may not be possible to replicate true bass on a pair of headphones, but I'm asking in case there's something I don't know.


Your gains seem inverted. Make the preamp negative and the low shelf positive.
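In an Equalizer APO config file that would look something like the following; the 8 dB figures are only an illustration, and the idea is to match the preamp cut to the shelf boost:

```
Preamp: -8 dB
Filter 1: ON LS Fc 100 Hz Gain 8 dB
```

Then make up the lost 8 dB with your analog volume control rather than any digital gain.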


----------



## musicreo

I would try to equalize or increase the volume of only the LFE channel.
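In Equalizer APO, scoping commands to one channel might look like the fragment below. This assumes the subwoofer/LFE channel is named SUB on your device and that the 6 dB figure suits your material; check the Configuration Editor for the actual channel names on your output:

```
Channel: SUB
Preamp: 6 dB
Channel: all
```

Everything between the two `Channel:` lines applies only to the selected channel, so the boost stays out of the full-range channels.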


----------



## morgin (Today at 10:09 AM)

When I turn down the preamp, the volume is too low even with the media player at 100% and Windows at 100%. Will a DAC help in this situation to give clear bass and clarity in other frequencies, as well as more volume?

If so which DAC is good for a decent price?

My headphones are HD560s

I've tried increasing the LFE channel, but it's not enough bass. And buying a subwoofer won't work in my situation because of the neighbours.


----------



## jaakkopasanen

morgin said:


> When I turn down the preamp, the volume is too low even with the media player at 100% and Windows at 100%. Will a DAC help in this situation to give clear bass and clarity in other frequencies, as well as more volume?
> 
> If so which DAC is good for a decent price?
> 
> ...


If you have maxed out your digital volume controls and the volume is still not enough, then you need a headphone amplifier.


----------



## castleofargh

morgin said:


> When I turn down the preamp, the volume is too low even with the media player at 100% and Windows at 100%. Will a DAC help in this situation to give clear bass and clarity in other frequencies, as well as more volume?
> 
> If so, which DAC is good for a decent price?
> 
> I've tried increasing the LFE channel, but it's not enough bass. And buying a subwoofer won't work in my situation because of the neighbours.


It's a more powerful amp that you need, then. Boosting anywhere digitally (on the computer) will cause the same clipping issues whenever the signal passes above 0 dB.

That's really a general rule of EQing: whatever you boost with digital EQ, you need to compensate with negative digital gain to avoid pushing the signal above 0 dB, and then you use an amplifier to make up for how much you reduced the digital gain and give you the listening level you desire. For example, if your largest boost is +8 dB, pull the preamp down by 8 dB and turn the analog amp up by roughly 8 dB. That make-up gain should be done at the analog level.


----------



## morgin (Today at 10:42 AM)

Headphone amplifier it is, then. Any recommendations?

There are options that have both a DAC and an amp in one; are they any good, or is a standalone amp best for me?

I'm looking at the FiiO K5 Pro.


----------

