# iPhone AAC vs. aptX and aptX HD in the real world



## neil74

The info on the interweb is both plentiful and vague on the comparative bitrates of aptX, AAC and SBC, but as I understand it, AAC is capped at around 256 kbps and aptX around 350 kbps?

I regularly switch between iOS and Android, and I am curious about the real-world limitations of wireless headphones on an iPhone vs. aptX-equipped Droids. E.g. using Apple Music and its 256 kbps AAC should mean no extra loss, but I'm unsure how higher-bitrate material like Spotify's 320 kbps Ogg or standard Tidal's 320 kbps AAC would compare, and how much of a bottleneck the AAC codec would be on an iPhone?

With aptX HD now available on both phones and headphones, I am wondering how much of a real-world quality advantage Androids now have with wireless headphones.


----------



## bigshot

AAC goes up to 320 kbps, and with AAC 320 VBR it can actually go above 320 if necessary. More than 256 really isn't necessary, though, because AAC is audibly transparent at 256. Once a codec reaches audible transparency, throwing more bitrate at it won't make the music sound better. So good enough truly is good enough.
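If you want to try this at home, here's roughly what a fixed 256 kbps encode vs. a VBR encode looks like with ffmpeg (just a sketch: it assumes a build that includes the libfdk_aac encoder, and `input.wav` is a placeholder filename):

```shell
# Fixed 256 kbps with ffmpeg's built-in AAC encoder:
ffmpeg -i input.wav -c:a aac -b:a 256k output_256.m4a

# True VBR with libfdk_aac (only available if ffmpeg was compiled
# with --enable-libfdk-aac); -vbr 5 is the highest-quality mode.
# The actual bitrate floats with the material instead of staying
# fixed, so complex passages can momentarily exceed the average.
ffmpeg -i input.wav -c:a libfdk_aac -vbr 5 output_vbr.m4a
```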


----------



## pinnahertz

The Apple A2DP implementation includes the mandatory SBC codec, which can run up over 320 kbps but is usually throttled to much lower rates. Apple also includes AAC at about 256 kbps. Android A2DP also includes SBC with similar bitrate limits, but adds aptX at some reasonably high rate, possibly up to 320 kbps, though I couldn't confirm that.


----------



## neil74

All things being equal, though, I wonder whether this is black and white:

1 - iPhone: 256 kbps AAC files piped straight over the AAC codec, so in theory no re-encoding. Spotify on an iPhone would presumably need to be re-encoded first. So an advantage to Apple Music on an iPhone?
2 - On Android with aptX, those same 256 kbps AAC files would need to be re-encoded too, so Spotify may have the edge here?

For the above two scenarios, could the non-aptX iPhone with native AAC files actually be the better-sounding combo?


----------



## jfvny (Nov 10, 2017)

I just got a pair of Bluetooth headphones capable of AAC and was interested to test this out! I'm new-ish to describing sound, though, so please bear with me.

equipment used: AIAIAI TMA-2 H05-S01-E06 (the most "neutral" combination they have), iPhone 8, AAC 256 kbps (Apple Music) vs. MP3 320 kbps (played through the default iOS QuickTime player); track used (link here) (because it starts out quiet, and there are a number of vocal nuances throughout)

pre-test: just to make sure my iPhone was using the AAC Bluetooth codec (or is it profile?), I played the same track from my Surface Pro (regular Bluetooth profile, can't remember if it's SBC or A2DP) on iTunes vs. from my iPhone. The iPhone playback won hands down. (edit: after longer use, I'm finding that not all tracks sound different to me for SBC vs. AAC; take from that what you will)

Short answer: both were equally good!
Long answer: there seem to be some differences between the two tracks: the AAC one seems to have slightly less volume difference in the quick low-to-high vocal portions, but slightly more detail in the vibratos. It might be something to do with the players themselves, or the source tracks, and it was really minor anyway. So I assume the 320 kbps MP3 is still getting encoded to AAC for transmission, since it doesn't lose out (and Apple certainly doesn't use the aptX codec)? Which is pretty good, because I half expected Apple to just ignore MP3s. Whether it works for the Spotify app (or others) I can't say, though, since I don't have Spotify Premium.

edit: just for fun, I tried it with an AIFF (Apple's uncompressed format) vs. the aforementioned AAC file, and there's no discernible difference either


----------



## bigshot

jfvny said:


> there seems to be some differences in both tracks: the AAC one seems to have slightly less volume difference for the quick low-to-high vocal portions, but slightly more detail in the vibratos.



That sounds like a very small difference in volume between samples. If one track is slightly quieter than the other, your ears will perceive less dynamics than if it's slightly louder. Encoders will sometimes adjust the overall level of a track to prevent clipping on a hot mastered song.
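It's easy to check for this before comparing: decode both versions to PCM and measure their RMS levels. A minimal sketch in Python (assuming NumPy and mono float samples; the function names are mine, not from any library):

```python
import numpy as np

def rms_dbfs(samples: np.ndarray) -> float:
    """RMS level of a mono float signal in dBFS (0 dBFS = full scale)."""
    rms = np.sqrt(np.mean(np.square(samples, dtype=np.float64)))
    return 20.0 * np.log10(max(rms, 1e-12))  # floor avoids log(0) on silence

def level_offset_db(a: np.ndarray, b: np.ndarray) -> float:
    """Gain difference between two decoded versions of the same track.
    If this is more than a fraction of a dB, level-match before comparing."""
    return rms_dbfs(a) - rms_dbfs(b)
```

Doubling a signal's amplitude shifts its RMS level by about 6 dB, and two encodes that differ by even half a dB will reliably "sound different" in an unmatched comparison, no codec artifacts required.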


----------



## jfvny

bigshot said:


> That sounds like a very small difference in volume between samples. If one track is slightly quieter than the other, your ears will perceive less dynamics than if it's slightly louder. Encoders will sometimes adjust the overall level of a track to prevent clipping on a hot mastered song.


Yeah that was probably what happened 
Thanks for clarifying!


----------



## DarwinOSX

256 kbps AAC is a much more modern and efficient codec than 256 kbps MP3 or the 320 kbps Ogg that Spotify uses.


----------



## Jeepz (Dec 18, 2017)

Thanks for the timely thread! I'm new to this, so it's all going over my head. I'm an Apple abuser (iPhone, iPad, iMac, Watch, etc.), so aptX doesn't really apply.

I'm in the market for bluetooth IEMs. I use Apple Music (which is AAC). *Should I be looking for ones that support AAC for best sound quality? Or is A2DP support good enough?*

I'm not an audiophile (but I do appreciate good sound) and I don't plan on spending over $200 for a rig.


----------



## DarwinOSX

AAC is all you need, and pretty much all of them support it.


----------



## Jeepz

DarwinOSX said:


> AAC is all you need, and pretty much all of them support it.


Thanks for the quick reply 

What do you mean by this? AAC encoding is supported by all bluetooth headsets?

I'm looking at a pair of NuForce BE2 (to get my feet wet) and their product page says it's the only BT IEM in its price range to support AAC.


----------



## DarwinOSX

I think most Bluetooth headsets do, but people often don't realize it, like the Bose for example.


----------



## DarwinOSX

It’s just as good as aptX without Qualcomm’s obnoxious licensing, and AAC in general is a very good codec.


----------



## Jeepz

DarwinOSX said:


> I think most Bluetooth headsets do, but people often don't realize it, like the Bose for example.



Yeah, that's interesting. I'll start experimenting with AAC branded and unbranded IEM's and see if there's a difference to me.


----------



## Monstieur (Dec 21, 2017)

neil74 said:


> 1 - iPhone 256 kbps AAC files piped straight over ACC so in theory no encoding.  Spotify on an iPhone would presumably need to be re-encoded first.  So an advantage to apple music on an iPhone?
> 2 - On Android with aptx those same 256 kbps AAC files would need to be encoded too so spotify may have the edge here?
> 
> For the above 2 scenarios the non-aptx iphone with native AAC files could actually be the better sounding combo?


The global audio is encoded to AAC on iOS, so music will undergo an additional transcode to AAC. However, AAC has been tested to be transparent even after 100 transcodings.


----------



## Jeepz (Dec 21, 2017)

Monstieur said:


> The global audio is encoded to AAC on iOS, so music will undergo an additional transcode to AAC. However, AAC has been tested to be transparent even after 100 transcodings.


So if the phone & headphone are using the AAC codec, Spotify (OGG Vorbis) and Apple Music (AAC) should sound about the same?


----------



## DarwinOSX

Spotify uses Ogg Vorbis. The paid version uses 320k Ogg.


----------



## Jeepz

DarwinOSX said:


> Spotify uses Ogg Vorbis. The paid version uses 320k Ogg.


Fixed.


----------



## Monstieur (Dec 22, 2017)

Jeepz said:


> So if the phone & headphone are using the AAC codec, Spotify (OGG Vorbis) and Apple Music (AAC) should sound about the same?


AAC to AAC transcoding is still transparent because it's the same algorithm - there would only be a negligible loss in data after transcoding. But (in theory at least) Vorbis to AAC transcoding would cause additional loss since each algorithm discards data slightly differently. However, since the output is 256 kbps AAC, the bitrate is high enough that it should still be transparent to humans.

At 256 kbps AAC, you can completely ignore quality as a factor even with transcoding from different players and codecs. I doubt anyone could tell the difference between even 128 kbps and 256 kbps AAC outside of a critical listening environment.


----------



## DarwinOSX

Ogg is a pretty dated and inefficient algorithm whose chief advantage to Spotify is that it’s free. I’ll take 256k over 320k Ogg any day.


----------



## Jeepz

DarwinOSX said:


> Ogg is a pretty dated and inefficient algorithm whose chief advantage to Spotify is that it’s free. I’ll take 256k over 320k Ogg any day.


AAC is the main reason I went the Apple Music route.

That... and Siri. Stupid closed iOS ecosystem 

Although honestly, I can't really tell much difference between 320 OGG and Apple's offerings on my BT set.


----------



## bigshot

Monstieur said:


> I doubt anyone could tell the difference between even 128 kbps and 256 kbps AAC outside of a critical listening environment.



It really doesn't have anything to do with how hard you listen. When artifacting is there, it's pretty clear whether you're listening for it or not. The thing that matters is the music you're encoding. There are certain kinds of sounds that trip up codecs and create artifacting. I've got a Sammy Davis Jr. CD that has some tracks with massed strings that are very difficult to encode without the telltale outer-space gurgling sound. It can be perfectly encoded at 256 with AAC and 320 with LAME. I haven't tried Ogg.


----------



## DarwinOSX

Jeepz said:


> AAC is the main reason I went the Apple Music route.
> 
> That... and Siri. Stupid closed iOS ecosystem
> 
> Although honestly, I can't really tell much difference between 320 OGG and Apple's offerings on my BT set.



How does it being “closed” affect you?
On a good headphone I can hear the difference.


----------



## Jeepz (Dec 22, 2017)

DarwinOSX said:


> How does it being “closed” affect you.


I don't have all day to list them all out, lol. More than I can count on both hands.

But in this case, the lack of Siri + Spotify compatibility is very sad. I mean, Siri sucks as it is, and being closed makes it even worse. On top of that, Apple Music's interface is inferior to Spotify's. By a long shot. Especially on desktop.


----------



## Jeepz

It's strange that macOS comes with aptX and AAC disabled by default. On every new Mac I use, I have to enable them in Terminal.

Wonder what the reasoning behind that is?
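For reference, these are the commands I mean. The `bluetoothaudiod` preference keys come from community write-ups, not Apple documentation, so treat this as an unsupported, at-your-own-risk tweak:

```shell
# Undocumented bluetoothaudiod preference keys (from community
# write-ups, not Apple documentation); use at your own risk.
sudo defaults write bluetoothaudiod "Enable AAC codec" -bool true
sudo defaults write bluetoothaudiod "Enable AptX codec" -bool true

# Then toggle Bluetooth off and on (or reboot) and re-pair the
# headphones. Option-clicking the Bluetooth menu bar icon while
# audio is playing shows which codec is actually active.
```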


----------



## Monstieur (Dec 26, 2017)

It's not disabled by default. It must be an issue with your Bluetooth receiver, requiring you to force it. macOS prefers aptX over AAC too, when the receiver supports both.


----------



## neil74 (Dec 29, 2017)

So how about if we flip this around and look at using an Android device (that supports AAC)? Would it be the same? i.e. using AAC-limited headphones such as QC35s or Beats Studio 3s on an Android device, would there be any difference in how they perform on an iDevice vs. a Droid?


----------



## Monstieur (Dec 29, 2017)

neil74 said:


> So how about if we flip this around and look at using an Android device (that supports AAC)? Would it be the same? i.e. using AAC-limited headphones such as QC35s or Beats Studio 3s on an Android device, would there be any difference in how they perform on an iDevice vs. a Droid?


Android's audio stack is inferior, and various degradations could theoretically occur before the AAC compression stage. The ability to choose the output compression format is a recent feature, and it's unknown whether all manufacturers implement it correctly.


----------



## Jeepz (Jan 1, 2018)

Monstieur said:


> It's not disabled by default. It must be an issue with your Bluetooth receiver, requiring you to force it. macOS prefers aptX over AAC too, when the receiver supports both.


This isn't true. On all my Macs (Air, Mini, iMac) I need to manually enable AAC/aptX in Terminal. I only had to do it once on each machine, but out of the box, macOS will use SBC.

This was the case for my Sony XB950's, my ADV Model 3's as well as my Bose SoundSports.

AirPods will use AAC without changing the settings, though.

This has been discussed in depth here:

https://www.areilly.com/2017/07/29/enabling-aac-and-aptx-over-bluetooth-on-macos/


----------



## Monstieur (Jan 2, 2018)

Jeepz said:


> This isn't true. On all my Macs (Air, Mini, iMac) I need to manually enable AAC/aptX in Terminal. I only had to do it once on each machine, but out of the box, macOS will use SBC.
> 
> This was the case for my Sony XB950's, my ADV Model 3's as well as my Bose SoundSports.
> 
> ...


That article doesn't attempt to identify the cause of the issue and simply assumes that all Macs use SBC by default, which is false; just because someone asserts it doesn't mean it's true.

Your experience is also uncommon, because macOS has preferred aptX over AAC over SBC for *years*. It's not a Sony issue either, because I've also had the MDR-1000X, WH-1000XM2, and MUC-M2BT1, and all three of them default to aptX out of the box on a Mac. I've also tested the QC35 and it defaults to AAC, not SBC.


----------



## neil74

As someone who regularly switches between an iDevice and a Droid (currently P2XL and iPX), I think that the lack of aptX is preferable to the lack of AAC.

Using, say, an AAC-only headphone on Android will probably be as near as damn it to aptX, especially if listening to AAC files (Apple Music, Amazon, or Tidal non-HiFi). Whereas using an aptX set with no AAC (Sennheiser!) will likely be noticeably worse on an iPhone?

This of course assumes compressed files rather than lossless.


----------



## Monstieur

AAC is superior to aptX in every case, with the exception of aptX Low Latency for apps and games. The source format is also irrelevant since AAC @ 256 kb/s is transparent, so even with lossless files AAC will be audibly better than aptX. aptX HD is a marketing gimmick since high resolution and bit-depth are useless on a playback device - they are only useful for audio editing.


----------



## LajostheHun

Monstieur said:


> AAC is superior to aptX in every case, with the exception of aptX Low Latency for apps and games. The source format is also irrelevant since AAC @ 256 kb/s is transparent, so even with lossless files AAC will be audibly better than aptX. aptX HD is a marketing gimmick since high resolution and bit-depth are useless on a playback device - they are only useful for audio editing.


This is the science forum so you will need to back up those claims....


----------



## bigshot

I’m not familiar with aptX, but if it achieves audible transparency at a bigger data rate than AAC 256, then it’s safe to say that it’s inferior for playback of music intended to be heard by human ears.


----------



## LajostheHun

bigshot said:


> I’m not familiar with aptX, but if it achieves audible transparency at a bigger data rate than AAC 256, then it’s safe to say that it’s inferior for playback of music intended to be heard by human ears.


These codecs [AAC/aptX] work differently from one another, so they can't be compared directly like that with blanket statements. aptX splits the audio into four sub-bands and applies data reduction to each independently; AAC works quite differently and is basically an improved descendant of MPEG audio and MP3. All this reminds me of the old DD/DTS debate, where people just made up a bunch of incorrect theories for why they preferred DTS over Dolby.


----------



## bigshot

Audibly transparent is audibly transparent whatever way they achieve it. Is aptX audibly transparent at a particular bitrate?


----------



## Monstieur (Feb 18, 2018)

bigshot said:


> Audibly transparent is audibly transparent whatever way they achieve it. Is aptX audibly transparent at a particular bitrate?


No it's not transparent at the default bit rate (whatever that is) and you can't change the bit rate on most aptX transmitters. It has audible artefacts on certain low frequency notes. I believed it was good enough until I heard a particular song where the low notes distort on aptX but not AAC, and now I can't un-hear it.

However, I archive in AAC 256 and not lossless so if there were artefacts in AAC I wouldn't know from my library. But it's reasonable to conclude that aptX is objectively inferior to AAC 256.


----------



## castleofargh

bigshot said:


> Audibly transparent is audibly transparent whatever way they achieve it. Is aptX audibly transparent at a particular bitrate?


it's often hard to test. what would we use as the reference for the transparent sound? the headphone, wired? that almost never sounds right, because the headphone is built to work with the internal electronics it has. so you can never be sure the difference comes from the codec and not from the change in DAC and amp.



Monstieur said:


> No it's not transparent at the default bit rate (whatever that is) and you can't change the bit rate on most aptX transmitters. It has audible artefacts on certain low frequency notes. I believed it was good enough until I heard a particular song where the low notes distort on aptX but not AAC, and now I can't un-hear it.
> 
> However, I archive in AAC 256 and not lossless so if there were artefacts in AAC I wouldn't know from my library. But it's reasonable to conclude that aptX is objectively inferior to AAC 256.


I disagree with your last sentence. your experience is anecdotal at best when it comes to judging the codecs in general. I'm not saying it's the other way around for superiority, only that your setup is at best conclusive for your setup until we have more data.


----------



## Monstieur (Feb 18, 2018)

castleofargh said:


> it's often hard to test, what would we use to know what the transparent sound is like? the headphone wired? almost never sounds right because the headphone is built to work with the internal crap it has. so you can never be sure the difference comes from the codec and not the change in DAC and amp.
> 
> 
> disagree with your last sentence. your experience is anecdotal at best when it comes to judging the codecs in general. I'm not saying it's the other way around for superiority, only that your setup is at best conclusive for your setup until we have more data.


I forget the name of the test which measures audio codec transparency; I believe it begins with the letter M. 5 is a perfect score and means transparent to humans. AAC 256 got a 5 (or very close), AAC 128 a 4-point-something, and aptX even lower. The scores were averaged across multiple listeners.

In my case I'm using a Sony MUC-M2BT1 connected to my Shure SE846. It's not scientific grade, but the same amp and DAC are used for both aptX and AAC. If the aptX decoder is flawed I still consider the comparison valid because aptX was designed as an end-to-end hardware codec to be licensed to manufacturers of transmitter and receiver chipsets, while AAC is only an algorithm. A failure in any part of the chain is a failure of the aptX ecosystem.


----------



## castleofargh

whether AAC is indeed superior is not what I reacted to. you mentioned your anecdotal experience, then went from that alone to concluding it was objective evidence of AAC's superiority. that's a logical fallacy.
  if I had to judge aptX based only on using it with my Sony A15, I would agree that it's crap, and have no opinion on AAC because that's not an option on the A15 ^_^. but then I used aptX on my cellphone, and it worked just fine and sounded pretty much the same to me as the other options. one isolated experience is not necessarily more than that.


----------



## bigshot (Feb 18, 2018)

castleofargh said:


> it's often hard to test, what would we use to know what the transparent sound is like? the headphone wired? almost never sounds right because the headphone is built to work with the internal crap it has. so you can never be sure the difference comes from the codec and not the change in DAC and amp.



You do a simple listening comparison test. DACs and amps aren't going to introduce compression artifacting. If you can hear compression artifacting, then you can be pretty doggone sure the codec isn't transparent. AAC 256 is audibly transparent; I've done a lot of comparison tests to determine that. So if you can hear artifacting with aptX, it is inferior to AAC 256.

AAC 192 is audibly transparent on just about all music, but there are exceptions. AAC 128 sounds good, but there is a tiny bit of artifacting here and there with certain kinds of sounds. aptX would probably fit in that range somewhere, I would guess.


----------



## LajostheHun

bigshot said:


> Audibly transparent is audibly transparent whatever way they achieve it. Is aptX audibly transparent at a particular bitrate?


That would require extensive DBTs to determine it objectively.
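Even a single-listener ABX run can at least be scored properly. The math is just a one-sided binomial test against chance; a quick sketch (the function name is mine):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` of `trials` ABX trials
    right by pure guessing (one-sided binomial test, p = 0.5).
    A small value means guessing is an unlikely explanation."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials
```

Scoring 14 of 16 gives p of about 0.002, strong evidence the listener really hears a difference; 10 of 16 gives p of about 0.23, which proves nothing either way.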


----------



## LajostheHun

Monstieur said:


> No it's not transparent at the default bit rate (whatever that is) and you can't change the bit rate on most aptX transmitters. It has audible artefacts on certain low frequency notes. I believed it was good enough until I heard a particular song where the low notes distort on aptX but not AAC, and now I can't un-hear it.
> 
> However, I archive in AAC 256 and not lossless so if there were artefacts in AAC I wouldn't know from my library. But it's reasonable to conclude that aptX is objectively inferior to AAC 256.


All of that is anecdotal and not objective in any form!


----------



## LajostheHun

bigshot said:


> You do a simple listening comparison test. DACs and amps aren't going to introduce compression artifacting. If you can hear compression artifacting, then you can be pretty doggone sure the codec isn't transparent. AAC 256 is audibly transparent. I've done a lot of comparison tests to determine that, So if you can hear artifacting with aptX, it is inferior to AAC 256.
> 
> AAC192 is audibly transparent on just about all music but there are exceptions. AAC128 sounds good but there is a tiny bit of artifacting here and there with certain kinds of sounds. aptX would probably fit in that range somewhere I would guess.


Again one listener's opinion won't cut it one way or another.


----------



## Monstieur (Feb 18, 2018)

Testing both codecs with the same file on the same Bluetooth receiver is sufficiently accurate, given the degree of audible difference between the codecs. Just like you don’t need the sensors from the Large Hadron Collider to detect something as simple as light in a dark room. Even if the receiver introduced the artefact, it’s an aptX failure because aptX is a hardware codec.

Anything more is just mental masturbation.


----------



## bigshot (Feb 18, 2018)

LajostheHun said:


> Again one listener's opinion won't cut it one way or another.



It will certainly answer the question conclusively for the person doing the test!

This is the thing I don't understand about people who claim to be scientific... They cite published papers in internet forums, then they complain that the papers don't exactly fit their situation. Other people offer real-world experiences based on their own testing, and they wave them away saying "anecdotal". If you really want to know, just do the test yourself! What kind of scientist doesn't do any testing and relies only on other people's published papers? What could be more applicable to your situation and use than doing a controlled listening test yourself on your own equipment using your own recordings? I tested AAC and determined the precise point of transparency. I did the same for Fraunhofer and LAME MP3. I do comparison tests with every piece of equipment I buy. I'm not taking anecdotal impressions at face value. I'm not even relying on published papers. I found out for myself. It isn't a matter of rhetoric for me. I know.



LajostheHun said:


> That would require extensive DBTs to determine it objectively.



No, it would take you sitting down and finding out for yourself. Get going buster. If you don't care one way or the other and you don't know the answer, why are you posting on the topic? Your last three posts crossed over from actual content to rhetorical obfuscation. Dotting every i and crossing every t doesn't help people understand how audio works. It can actually make it even more difficult to understand. Is aptX a lossy format? Is it audibly transparent? Answer those questions with some degree of knowledge and I'll be happy with your answer. I'm not going to require peer review to determine something that everyone can determine at home with their own stereo. I don't have aptX, so I can't. But I'm interested in hearing from people who have heard artifacting.

I don't mean to be laying into you. I apologize if I come off that way. It's just that we all seem to lose sight of the reason we're here. It isn't to put on a lab coat and demand academic perfection. It's to use scientific principles to make our home stereos sound better. Helpfulness is a virtue.


----------



## LajostheHun

bigshot said:


> It will certainly answer the question conclusively for the person doing the test!
> 
> This is the thing I don't understand about people who claim to be scientific... They cite published papers in internet forums, then they complain that the papers don't exactly fit their situation. Other people offer real-world experiences based on their own testing, and they wave them away saying "anecdotal". If you really want to know, just do the test yourself! What kind of scientist doesn't do any testing and relies only on other people's published papers? What could be more applicable to your situation and use than doing a controlled listening test yourself on your own equipment using your own recordings? I tested AAC and determined the precise point of transparency. I did the same for Fraunhofer and LAME MP3. I do comparison tests with every piece of equipment I buy. I'm not taking anecdotal impressions at face value. I'm not even relying on published papers. I found out for myself. It isn't a matter of rhetoric for me. I know.



And what gave you the idea that I haven't done my own objective listening evaluation of aptX? I simply don't confuse that with objective proof of anything. Sure, I have my opinions too.



> No, it would take you sitting down and finding out for yourself. Get going buster. If you don't care one way or the other and you don't know the answer, why are you posting on the topic? Your last three posts crossed over from actual content to rhetorical obfuscation. Dotting every i and crossing every t doesn't help people understand how audio works. It can actually make it even more difficult to understand. Is aptX a lossy format? Is it audibly transparent? Answer those questions with some degree of knowledge and I'll be happy with your answer. I'm not going to require peer review to determine something that everyone can determine at home with their own stereo. I don't have aptX, so I can't. But I'm interested in hearing from people who have heard artifacting.


No, I haven't heard any artifacts at all. However, as mentioned, aptX comes in one bit rate only, and the codec isn't directly comparable to the others, since it does its data reduction in a different manner than the typical Dolby/AAC/MP3 codecs do; so its rate is not really relevant, hence my original posts.


> I don't mean to be laying into you. I apologize if I come off that way. It's just that we all seem to lose sight of the reason we're here. It isn't to put on a lab coat and demand academic perfection. It's to use scientific principles to make our home stereos sound better. Helpfulness is a virtue.



No lab coat needed, and this isn't about helping or not; it's about making statements that a poster simply can't back up with facts. Simple as that.


----------



## LajostheHun (Feb 18, 2018)

Monstieur said:


> Testing both codecs with the same file on the same Bluetooth receiver is sufficiently accurate, given the degree of audible difference between the codecs. Just like you don’t need the sensors from the Large Hadron Collider to detect something as simple as light in a dark room. Even if the receiver introduced the artefact, it’s an aptX failure because aptX is a hardware codec.
> 
> Anything more is just mental masturbation.


 your thesis
sounds more PE.. or worse .....ED mentally speaking of course...


----------



## castleofargh

we don't know what we don't know. sure it's easy to think that all is fine and that anytime I press a setting, I get what I ordered. except that we're talking BT here:
- to get a special codec, both the source and the receiver need to handle it. any issue and the signal is likely to revert to a more compressed version, or just to good old SBC, as that format is known to always be available. most headphones won't tell us anything about the format and resolution they're receiving, and many sources will still let us click on something that isn't compatible and pretend that all is well. so unless you know the specs and behavior of your devices very well, a little caution seems like a good idea before drawing big fat conclusions on codecs based on such a limited experience as a sighted test.
- the initial format might impact the final format and resolution picked for the streaming. so maybe some audible artifacts wouldn't occur if the original file was in a different format/resolution. let's say I have low bitrate mp3 files and pick aptx in my settings? is my signal always converted because I picked aptx and I can? is the device "clever" enough to get that it's a waste of processing and probably fidelity? even if one device is known to behave a given way, do we know that all devices will? in this instance, if the initial files are in AAC and both devices do handle AAC at that sample rate, obviously AAC will be the best logical option as it won't require any change at all. even if a better codec was available, it's not like it would magically add fidelity to the original lossy file. so the very example given was biased from the get go while claiming to be objective evidence of AAC's superiority.


and that's just stuff off the top of my head; I'm far from being a BT expert. there are probably other gotchas on some gear, like subpar connectivity, different generations of BT, a fixed sample rate for most formats (usually 44.1 or more often 48 kHz), and stuff I don't know about that would result in a different way of handling otherwise apparently identical settings. so the good old idea that if I only change the codec setting on a particular gear, then I'm objectively testing only the codecs' variations, well, IMO that's a little naive when it comes to testing BT on one or two combos.
but most of all, I wonder why I end up having to make 3 posts to explain that an anecdotal sighted test is not supposed to directly result in a global objective claim. isn't it obvious?


----------



## bigshot (Feb 18, 2018)

LajostheHun said:


> And what gave you the idea that I haven't done my objective listening evaluation on Aptx? , I simply don't confuse that with objective proof for anything.



The only proof I need is "can I tell a difference?" If I can arrive at an answer objectively, then for the purposes of my stereo and my ears, my job is done. If someone else says they hear artifacts in a careful listening test, I'll tend to believe them. You can feel free to organize a test with a broad sample of test subjects and prove it universally for yourself if you'd like though. Let me know how it comes out. I'll probably tend to believe you too.



LajostheHun said:


> sounds more PE.. or worse .....ED mentally speaking of course...



Physical Education? Erectile Dysfunction?


----------



## Monstieur

castleofargh said:


> we don't know what we don't know. sure it's easy to think that all is fine and that anytime I press a setting, I get what I ordered. except that we're talking BT here:
> - to get a special codec we need to have both the source and the receiver to handle it. any issue and the signal is likely to revert to a more compressed version, or just to good old SBC as that format is known to always be available. most headphones won't tell us anything about the format and resolution it's receiving, many sources will still let us click on something not compatible and pretend that all is well. so unless you know very well the specs and actions of your devices, a little caution seems like a good idea before drawing big fat conclusions on codecs based on such a limited experience as a sighted test.
> - the initial format might impact the final format and resolution picked for the streaming. so maybe some audible artifacts wouldn't occur if the original file was in a different format/resolution. let's say I have low bitrate mp3 files and pick aptx in my settings? is my signal always converted because I picked aptx and I can? is the device "clever" enough to get that it's a waste of processing and probably fidelity? even if one device is known to behave a given way, do we know that all devices will? in this instance, if the initial files are in AAC and both devices do handle AAC at that sample rate, obviously AAC will be the best logical option as it won't require any change at all. even if a better codec was available, it's not like it would magically add fidelity to the original lossy file. so the very example given was biased from the get go while claiming to be objective evidence of AAC's superiority.


You can see the codec being used on a MacBook, which is where I noticed the aptX artefacts. You can force aptX, AAC or SBC and adjust the bitrate of AAC. The global system audio is recompressed into AAC / aptX - it does not attempt to bitstream the original file if it's already AAC. Fortunately AAC 256 has been proven to be transparent even after being transcoded 100 times.


----------



## Ranny

Interestingly, I'm using an HTC U11 Plus on Oreo. When connected to BT, it tells you which codec it's using. With the 66 Audio BTS it selects AAC instead of aptX.


----------



## neil74

This is a topic that continues to occupy me, as on an iPhone I have this feeling that you are capped at 256 kbps, so the likes of Tidal or anything higher is just pointless?

It could just be placebo, but I have found Google Play Music on my iPhone to be inferior to Apple Music, whereas on my Pixel 2 XL it IMO sounded slightly better (using Sony 1000X M2s and LDAC). Is it the case that until Apple improves its adopted BT standard, it will always be a bottleneck? Mind you, as long as Apple Music is 256 AAC they probably do not care!

So for now the best course for iPhone users who want to use Bluetooth would seem to be sticking to an AAC service, so either Apple Music, Amazon, or Tidal Premium?


----------



## Monstieur (Feb 28, 2018)

neil74 said:


> This is a topic that continues to occupy me as on an iPhone I have this feeling that you are capped at 256 kbps so the likes of Tidal or aything higher is just pointless?
> 
> It could just be placebo but I have found Google play music on my iPhone to be inferior to apple music but on my Pixel 2 XL it IMO sounded slightly better (using Sony 1000x m2s and LDAC).  Is it a case that until Apple improves their adopted BT standard it will always be a bottleneck.  Mind you all the time apple music is 256 aac they probably do not care!
> 
> So for now the best course for iPhone users who want to use bluetooth would seem to stick to an AAC service so either apple music amazon or tidal premium?


The phone always re-encodes the audio to AAC / aptX / LDAC, so it's potentially altered right at the source. Headphones like the WH-1000XM2 also alter the sound with their ultrasonic upsampling gimmicks. The WH-1000XM2 is a bad-sounding headphone which audibly alters some frequencies (listen to the iPhone keyboard clicks), so pursuing high-fidelity audio is pointless when you're listening on such a device - the sound has already been degraded far more than the minute differences between audio codecs.


----------



## neil74

I was wondering that, as in theory the bitrates for AAC over BT (250) and Apple AAC files (256) do not match, so there must be some transcoding going on somewhere!

In theory, then, the iPhone is currently significantly limited for Bluetooth audio vs. Android. Again, the question of whether you can actually hear this difference is valid, but I'd say on really good systems you probably can.


----------



## Monstieur

neil74 said:


> Was wondering that as in theory the bitrates for AAC BT (250) and Apple AAC fileds (256) do not match so there must be some transcoding going on somewhere!
> 
> In theory then the iPhone is currently significantly limited for bluetooth audio vs. android.  Again the question as to whether you can actually hear this difference is valid but I'd say on really good systems you probably can.


Android's entire audio stack is inferior (latency, resampling). The presence of alternative codecs like aptX (inferior to AAC) or LDAC (gimmick inaudible to humans) does not make it better.


----------



## neil74 (Mar 1, 2018)

Monstieur said:


> Android's entire audio stack is inferior (latency, resampling). The presence of alternative codecs like aptX (inferior to AAC) or LDAC (gimmick inaudible to humans) does not make it better.



This is interesting, but is this really the case? The amount of conflicting info on this subject is quite baffling, and if true it totally undermines aptX and LDAC, with a lot of reviews still giving Android and the increased bitrate of aptX-HD and LDAC the advantage over AAC on an iDevice.

If what you say is the case, then any service north of 256 kbps is overkill for Bluetooth and Apple Music is as good as it currently gets?


----------



## castleofargh

at this point I don't know what I have to do.


Monstieur said:


> Android's entire audio stack is inferior (latency, resampling)....


evidence of that? please don't come back with another one shot subjective anecdote for such a broad claim or I'm going to get mad.
as far as I know, the lowest latency available for audio comes with one of the aptx modes. if you have evidence of the contrary, please share it with us. about resampling, I just don't know what you're talking about.



Monstieur said:


> ... The presence of alternative codecs like aptX (inferior to AAC)...


prove that it's inferior or stop making that sort of claim.
aptx has a superior max sample rate and that at least is a fact, so it can come closer to lossless if that's the aim for the user. it might not be necessary, but it certainly doesn't define "inferior" in my book.




Monstieur said:


> ... or LDAC (gimmick inaudible to humans) does not make it better.


if we're going that way, all compression codecs hope to be gimmicks inaudible to humans. where is this coming from? LDAC isn't Apple so it has to be a gimmick? LDAC also offers higher bitrate than AAC for those who care.


----------



## Monstieur

castleofargh said:


> at this point I don't know what I have to do.
> 
> evidence of that? please don't come back with another one shot subjective anecdote for such a broad claim or I'm going to get mad.
> as far as I know the lowest latency available for audio, comes with one of the aptx modes. if you have evidence of the contrary, please share it with us.  about resampling, I just don't know what you're talking about.
> ...



http://superpowered.com/android-audio-latency-problem-just-got-worse
There is significant variance across devices, but in general the latency even to the wired headphone jack on Android is much higher than on Windows / macOS / iOS. This latency is incurred before the audio even enters the Bluetooth stack, and is only compounded by the latency of the AAC / aptX codec. It used to be much worse a few years ago (~200 ms just to the wired headphone jack) but has improved significantly. It's still unacceptably high at over ~50 ms for wired headphones.

https://www.rtings.com/headphones/tests/active-features/latency
I prefer aptX Low Latency in general since it only adds ~40 ms, which in apps and games far outweighs the minor improvement in quality with AAC, which has ~200 ms of latency. Plain aptX is also ~180 ms on most devices, so AAC is preferable if the device does not support aptX Low Latency.

Resampling is required whenever the audio output is in shared mode (it usually is, so that you can hear notifications and sounds from other apps). If the hardware runs at 48,000 Hz / 24-bit (most devices do), music stored at 44,100 Hz must be resampled in order to be mixed into the audio output. This can be done either by the app or automatically by the OS.
Most decoders decode AAC / MP3 to 16-bit rather than 24-bit (lossy compression does not have a bit depth - that's a PCM thing). Converting between these is not as simple as padding / truncating zeroes: sample-rate conversion needs anti-aliasing, and bit-depth reduction needs dithering, again handled either by the app or by the OS. Both are black boxes of unknown quality.
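The dither point above can be made concrete with a short sketch. This is a minimal illustration in plain Python (the function name is mine, not from any real audio stack) of reducing a float sample to 16-bit with TPDF dither, which turns signal-correlated quantization distortion into a benign, steady noise floor:

```python
import random

def to_int16(sample, dither=True):
    """Convert a float sample in [-1.0, 1.0] to a 16-bit integer.

    With dither=False this is plain rounding, which leaves the
    quantization error correlated with the signal (audible as
    distortion on low-level material). TPDF dither - the sum of two
    uniform random values, giving a triangular distribution of about
    one LSB - decorrelates that error into signal-independent noise.
    """
    scaled = sample * 32767.0
    if dither:
        scaled += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    # Round, then clamp to the legal 16-bit range.
    return max(-32768, min(32767, int(round(scaled))))
```

Going the other way, a 16-bit sample promoted to 24-bit can simply be padded with zero bits; it is the reduction step where skipping the dither audibly changes the character of the error.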

aptX, aptX Low Latency, AAC, MP3, etc. are legitimate innovations. aptX HD, LDAC, DSD, MQA etc. are gimmicks - they have no benefit for music listening, and studios don't archive masters in these formats either. They were invented for the sole purpose of milking audiophiles.


----------



## HuoYuanJia

Is there any information on how the masking works in the aptX encoder? Is there no single demo file available that has been encoded by Qualcomm (and maybe converted back to WAV)? I would really love to do a very critical ABX test. I know how MP3 works and why AAC is more efficient (it removes more data in the higher frequencies and even cuts off completely at 18 kHz, while storing a lot more resolution than MP3 up to 4 kHz), but I know nothing about aptX.

I have a _very_ hard time recognizing the AAC from the CD (or even HD, for that matter). BTW, I use True VBR settings at q110. That is a tip I found years ago on Hydrogenaud.io, and it is supposed to be slightly superior to "iTunes Plus", which uses a different VBR method - both average around 256 kbps.

Now regarding Bluetooth, I have some unanswered questions.
1. Between two AAC-enabled devices (for example iPhone and Bose headphones), is the transfer bit-perfect? Or is the AAC being re-encoded and trimmed yet again? If usually not, could it be that my settings that do not perfectly match the iTunes standard (though same codec) thus need to be re-encoded whereas Apple Music would not have to?
2. Is the bitrate fixed? I don't think so. I am almost sure that the stream might be downsampled to 128 kbps or similar in some cases when the connection is not the best. Sometimes I believe I hear artifacts, but sometimes it also cuts the sound completely. This question concerns all codecs (SBC, MP3, AAC, etc.).
3. How come MP3 is rarely supported? It is possible to use it as a codec for Bluetooth. At the very least it should be better than SBC/MP2/Musicam.
4. How can I tell which connection my iPhone uses? My iMac 2017 prefers SBC. I had to download a developer app to force it to use aptX (with a Hugo 2) instead. I noticed how the bass in a Classical recording sounded off so I started to investigate. In this single particular case, switching to aptX actually seemed beneficial. Anyway, how can I tell if my Bose headphones use SBC or AAC?

So I think it is best to only use lossless audio (well, if you have the space on your device). That way the audio will still be encoded before transmission every time, but you prevent further loss. For example, Spotify is always re-encoded because OGG Vorbis is not a Bluetooth codec.
I hope future Bluetooth devices will be able to use ALAC or FLAC. If the bandwidth is there for LDAC, it should be there for ALAC also.



Monstieur said:


> aptX HD, LDAC, DSD, MQA etc. are gimmicks - they have no benefit for music listening


Fully agree. I did several blind tests comparing HD with 16-bit and never succeeded. There is no single controlled blind test that ever managed to prove superior sound from HD material unless the volume was lifted so high that quantization errors became audible in a silent track. (I think one study used an acoustically treated room to push the ambient noise down to 19 dB, and they still had to use painful volume to uncover the CD - with normal music, 49% of 500 people guessed correctly, which is exactly a coin flip, and only one contestant managed 8/10 correct guesses, which is still below significance.)
Of course there will always be people who claim they can tell the difference. That also happened on Archimago's blog, but ironically, those who claimed to hear the difference (the same group that used equipment above $1,000 and even over $5,000) actually mistook the 16-bit file for the 24-bit file. I can't remember the exact number (I think over 60%), which makes it even worse than guessing by chance. Wow, talk about correlation.
Luckily, this is a good way to filter bad content, and if a review tries to sell me on how a headphone opens up with HD files, I know I've found a bullshitter and stop reading immediately.


----------



## Monstieur (Mar 2, 2018)

HuoYuanJia said:


> I have a _very_ hard time recognizing the AAC from the CD (or even HD for that matter). BTW, I use True VBR settings at q110. That is a tip I found years ago on Hydrogenaud.io and it is supposed to be slightly superior to "iTunes Plus" which uses a different VBR method - both average around 256 kbps.


iTunes Plus uses constrained VBR which is capped at 256 kb/s. True VBR can spike above 256 kb/s.



HuoYuanJia said:


> Now regarding Bluetooth, I have some unanswered questions.
> 1. Between two AAC-enabled devices (for example iPhone and Bose headphones), is the transfer bit-perfect? Or is the AAC being re-encoded and trimmed yet again? If usually not, could it be that my settings that do not perfectly match the iTunes standard (though same codec) thus need to be re-encoded whereas Apple Music would not have to?
> 2. Is the bitrate fixed? I don't think so. I am almost sure that the file might be downsampled to 128 kbps or similar in some cases when the connection is not the best. Sometimes I believe to hear artifacts but sometimes it also cuts the sound completely. This question concerns all codecs (SBC, MP3, AAC,
> 3. How come MP3 is rarely supported? It is possible to use it as a codec for bluetooth. At the very least it should be better than SBC/MP2/Musicam.
> 4. How can I tell which connection my iPhone uses? My iMac 2017 prefers SBC. I had to download a developer app to force it to use aptX (with a Hugo 2) instead. I noticed how the bass in a Classical recording sounded off so I started to investigate. In this single particular case, switching to aptX actually seemed beneficial. Anyway, how can I tell if my Bose headphones use SBC or AAC?


1. I don't know of any devices which bitstream AAC. The system audio output is always encoded into AAC / aptX / SBC in real time on the transmitter, including iTunes Plus / Apple Music content. Bluetooth actually allows for multiple streams, i.e. you could have a bit-perfect AAC stream for music and alternate streams for notification sounds from the phone, etc. However, this requires the receiver to mix the streams, so the audio is degraded anyway. I trust the iOS mixer more than an unknown Bluetooth receiver.

2. IIRC macOS and iOS use AAC 250 kb/s at "normal" quality. macOS prefers aptX to AAC, and you can manually adjust the codec bitrates as well. I don't know if iOS drops the bitrate for poor connections. I have had total dropouts, but have not noticed an audible reduction in quality.

3. AAC was designed with low-power encoding / decoding in mind. I suspect most transmitters implement a "normal" quality encoder. MP3 is not battery efficient since only the highest quality setting is transparent. Encoding quality is separate from bitrate, and it changes the sound at the same bitrate.

4. You could try the Bluetooth diagnostic profile for iOS and check the logs. Some receivers cause macOS to fall back to SBC instead of aptX. All of my AAC + aptX receivers default to aptX with my MBP. Hold the Option key while clicking the Bluetooth menu icon and it'll show you the codec in use.



HuoYuanJia said:


> So I think it is best to only use lossless audio (well, if you have the space on your device). That way the audio will be encoded before transmission every time, but you prevent further loss. For example, Spotify is always re-encoded due to OGG Vorbis not being Bluetooth-enabled.
> I hope devices will be able to use ALAC or FLAC for future bluetooth devices. If the bandwidth is there for LDAC, it should be there for ALAC also.


LDAC and aptX HD are not lossless. They just support 24-bit 96 kHz lossy encoding, which is inaudible to humans anyway. ALAC, FLAC and other lossless codecs require several times more bandwidth.

AAC 128 kb/s has been proven to have no audible degradation even after ~100 transcodings, and 250 kb/s for Bluetooth will be even more resilient. So transcoding any source format to AAC 250 kb/s for Bluetooth should be transparent. There is always the possibility that the device screws something up somewhere in the chain, but any degradation in sound would not be because of AAC transcoding.



HuoYuanJia said:


> Fully agree. I did several blind tests comparing HD with 16bit and never succeeded.


That is comparing bit depth, i.e. dynamic range alone. Higher sampling frequencies like 96 kHz can audibly change the response of some amplifiers / headphones and thus change the sound. However, this is an undesirable change and is further away from the original audio.




HuoYuanJia said:


> Luckily, this is a good way to filter bad content and if a review tries to sell me how the headphone opens up with HD files, I know I found a bull****ter and cancel reading immediately.


​Several "HD" tracks are different masters with less dynamic range compression so these do in-fact sound better than the original CD release. However you can convert these tracks to 44.1 kHz / 16-bit and they would still sound the same as the "HD" master.


----------



## bigshot

Monstieur said:


> iTunes Plus uses constrained VBR which is capped at 256 kb/s. True VBR can spike above 256 kb/s.



From what I've been told, VBR in AAC can actually go above 320.


----------



## castleofargh

HuoYuanJia said:


> Is there any information on how the masking works with the aptX encoder? Is there no single demo file available that has been encoded by Qualcomm (and maybe reconverted back to WAV)? I would really love to do a very critical ABX test. I know how MP3 works and why AAC is more efficient (removes more data in higher freqs and even cuts off completely at 18 kHz, storing a lot more resolution than MP3 up to 4 kHz), but I know nothing about aptX.
> 
> I have a _very_ hard time recognizing the AAC from the CD (or even HD for that matter). BTW, I use True VBR settings at q110. That is a tip I found years ago on Hydrogenaud.io and it is supposed to be slightly superior to "iTunes Plus" which uses a different VBR method - both average around 256 kbps.
> 
> ...



I'm not sure aptx is involved in much psychoacoustic stuff, or needs to be. so long as you have a high enough data rate, all lossy formats end up being kind of the same, as in audibly transparent. what differentiates them is really how audibly good they are at very low bitrates.
I think I read something about how the signal is cut into small bands, each offered a different fidelity, but TBH it's been a while and I'm not 100% sure it was aptx ^_^.

I have no clear certainty about any of your questions, and I add to those all the uncertainties due to variations from using different BT chips, the various generations of BT, the compatibility between devices, the various options within just one of those codecs(like aptx has various versions and different stuff can be prioritized)... all that is why I have so much trouble with several of the very confident previous statements from @Monstieur. because even if under some conditions something specific goes on and is confirmed, how do we know it's always like that?

about mp3, it has objectively no advantage over AAC. if the aim is a good size/transparency ratio, MP3 doesn't stand a chance and it might be more relevant to hope for some vorbis stuff. but that too wouldn't make much sense now that modern gear has much better transmission rates over BT. the time of getting the file as small as possible specifically for BT is in the past. now we could pretty much do 16/44 flac at this point, as BT is becoming more and more of a low-energy wifi with sucky security.
so if the encoding could keep the same codec like you wondered, I guess mp3 would be cool for all those who still use it. but I don't know that it even works that way for AAC. and if something is re-encoded anyway, then MP3 for the transmission is just inferior to what we already have and it's normal that we don't have it. which in itself slightly tips the scale of what I imagine is going on. but that's far from conclusive evidence of anything.
when we have multiple options, we try; if one really sucks, we avoid it; if they all sound the same, well, good for us. it seems to me that ultimately that's how we really pick our BT settings.


----------



## buzzy

I think if you're looking for audible differences, it's best to choose tracks that are a challenge for encoders and use those as your test tracks.  Even on the worst devices I've listened to, some tracks sound fairly good.

That also has the advantage of giving you a somewhat consistent way to compare.



Monstieur said:


> 1. I don't know of any devices which bitstream AAC. The system audio output is always encoded into AAC / aptX / SBC in real-time on the transmitter, including iTunes Plus / Apple Music content.


Interesting. It would seem they're missing a big opportunity to skip that decode/encode, and it wouldn't be hard to implement.


----------



## LajostheHun

HuoYuanJia said:


> Now regarding Bluetooth, I have some unanswered questions.
> 1. Between two AAC-enabled devices (for example iPhone and Bose headphones), is the transfer bit-perfect? Or is the AAC being re-encoded and trimmed yet again? If usually not, could it be that my settings that do not perfectly match the iTunes standard (though same codec) thus need to be re-encoded whereas Apple Music would not have to?



If it's not bit perfect, it is still better than converting to another lossy format.


> 2. Is the bitrate fixed? I don't think so. I am almost sure that the file might be downsampled to 128 kbps or similar in some cases when the connection is not the best. Sometimes I believe to hear artifacts but sometimes it also cuts the sound completely. This question concerns all codecs (SBC, MP3, AAC,


I've never heard any artifacts, but I use Android and aptX.






> How come MP3 is rarely supported? It is possible to use it as a codec for bluetooth. At the very least it should be better than SBC/MP2/Musicam.



Bluetooth does technically support MP3; I'm not sure why it's not utilized, maybe licensing issues... not sure really.



> 4. How can I tell which connection my iPhone uses? My iMac 2017 prefers SBC. I had to download a developer app to force it to use aptX (with a Hugo 2) instead. I noticed how the bass in a Classical recording sounded off so I started to investigate. In this single particular case, switching to aptX actually seemed beneficial. Anyway, how can I tell if my Bose headphones use SBC or AAC?


That would be a question for iOS users, as on Android Oreo one can preselect between all supported formats.


> So I think it is best to only use lossless audio (well, if you have the space on your device). That way the audio will be encoded before transmission every time, but you prevent further loss. For example, Spotify is always re-encoded due to OGG Vorbis not being Bluetooth-enabled.
> I hope devices will be able to use ALAC or FLAC for future bluetooth devices. If the bandwidth is there for LDAC, it should be there for ALAC also.



The BT profile A2DP currently doesn't allow lossless codecs; as you say it is not a bandwidth issue, so I'm sure this will be addressed in the near future.


----------



## HuoYuanJia (Mar 28, 2018)

Thanks for the replies! They helped me a lot already.

Here is another question regarding aptX HD. I don't have any Android device, so I have never experienced aptX HD. (Google results are a nightmare if you are looking for technical background outside the marketing mumbo jumbo.)

My understanding is that Qualcomm splits the audio into 4 frequency bands (or is it not frequency on the x-axis but depth on the y-axis?), encodes them with varying compression, and then puts them back together. That is aptX. With aptX HD, two of those bands receive two additional bits for lossy compression of 24-bit files, so after putting it all together on the receiver end, you have something similar to AAC (AAC does not use "bands" but compresses with a curve so that most information is preserved up to 4 kHz and then slowly rolls off in the treble) but with 20-bit depth instead of 16 (and 24 as the original file). But if your source is 16-bit, there is no difference between aptX and aptX HD, right? The additional bits of information that aptX HD uses for compressing 24-bit are simply not available in 16-bit audio.
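As a rough illustration of the subband idea described above - not the real aptX algorithm, which uses QMF filters and ADPCM prediction per band - here is a toy Python codec that splits a 4-sample block into sub-band coefficients with a simple Haar filter bank and spends more bits on the low band. All names and the (8, 4, 2, 2) allocation are made up for illustration:

```python
def haar_analysis(block):
    """Split 4 samples into 4 sub-band coefficients via a two-level
    Haar filter bank: one low-frequency average, one mid coefficient,
    and two high-frequency detail coefficients."""
    a, b, c, d = block
    lo0, lo1 = (a + b) / 2, (c + d) / 2        # first-stage lowpass
    hi0, hi1 = (a - b) / 2, (c - d) / 2        # first-stage highpass
    ll, lh = (lo0 + lo1) / 2, (lo0 - lo1) / 2  # second stage on the lows
    return [ll, lh, hi0, hi1]

def haar_synthesis(coeffs):
    """Invert haar_analysis (perfect reconstruction without quantization)."""
    ll, lh, hi0, hi1 = coeffs
    lo0, lo1 = ll + lh, ll - lh
    return [lo0 + hi0, lo0 - hi0, lo1 + hi1, lo1 - hi1]

def quantize(x, bits):
    """Uniform quantizer for values in [-1, 1] at the given word length."""
    levels = 2 ** (bits - 1)
    return max(-levels, min(levels - 1, int(round(x * levels)))) / levels

def toy_subband_codec(block, allocation=(8, 4, 2, 2)):
    """Encode/decode one 4-sample block, spending more bits on the
    low band, in the spirit of a fixed per-band bit allocation."""
    coeffs = haar_analysis(block)
    decoded = [quantize(c, bits) for c, bits in zip(coeffs, allocation)]
    return haar_synthesis(decoded)
```

With a smooth, low-frequency-dominated input block, the round trip stays close to the original even though the high bands get only 2 bits each, which is the whole bet behind a fixed per-band allocation.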

Is there any research / paper that compares aptX with AAC? The compression algorithms are completely different, so you cannot compare kbps figures directly. Qualcomm only checks the volume difference, if I remember correctly.
Please correct me if I'm wrong: it should be relatively easy to make a test if you have a good testing environment. The information that is lost during compression is basically noise. I imagine it like this: the CD has 1,411 kbps of information, then MP3 cuts 1,091 away as it thinks this information is masked by the louder tones (probably too simplified) and preserves the important 320 kbps. But in order to play the file, the decoder has to transform the data back to PCM (1,411 kbps) and fill the missing information with noise. So if you turn the volume up on an MP3, you can hear the noise.
If this is basically correct, you could send sine tones over AAC and aptX and then compare their quality based on the noise floor, e.g. at 100 Hz, 500 Hz, 1 kHz, etc. That way you would have a direct comparison of which codec preserves more information and where.
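That noise-floor comparison can be prototyped without any Bluetooth hardware at all. Here is a minimal sketch in plain Python (all names are illustrative) that measures the SNR of a test tone after a stand-in lossy step - simple 16-bit quantization here, where a real test would substitute an actual codec round trip:

```python
import math

def sine(freq_hz, sample_rate=48000, n_samples=4800, amplitude=0.9):
    """Generate a test tone as a list of float samples."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n_samples)]

def quantize_16bit(samples):
    """Stand-in for a lossy codec: round to 16-bit and back to float."""
    return [round(s * 32767) / 32767 for s in samples]

def snr_db(reference, degraded):
    """Signal-to-noise ratio of the degraded copy against the reference."""
    signal = sum(s * s for s in reference)
    noise = sum((s - d) ** 2 for s, d in zip(reference, degraded))
    return 10 * math.log10(signal / noise)

tone = sine(997)
# A plain 16-bit round trip lands near the textbook ~98 dB figure
# (slightly less here because the tone sits at 0.9 of full scale).
print(round(snr_db(tone, quantize_16bit(tone)), 1))  # ~97 dB
```

Swap `quantize_16bit` for a real encode/decode round trip (e.g. a tone file encoded and decoded offline) and sweep `freq_hz` to map out where each codec's noise floor rises.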


----------



## LajostheHun

I haven't heard of any direct comparisons between them via measurements, though they might exist somewhere. Your understanding of how these codecs work is pretty much spot on, as is the point that raw bitrates just don't mean much. And yes, pretty much everything gets converted back to PCM before D/A conversion for obvious reasons, which also muddies the simplicity of just auditioning the codecs for superiority, as the components themselves can have an impact on the final sound as well.


----------



## HuoYuanJia

OK, I made an interesting discovery. While writing a full in-depth beyerdynamic Aventho Wireless review, I was sketching the frequency response with REW and its tone generator. The connection was active via aptX by default. (My iMac 2017 does not have aptX HD.) Starting at 6.5 kHz, artifacts become very clearly audible, and above 14 kHz the treble sounds like complete crap. The compression is WAY too strong in the treble. You can hear how 6,000 Hz sounds fine, and as you move up past 6,500 Hz it starts crackling.
I switched to SBC (with the developer tool Bluetooth Explorer) and all tones sounded fine again. I jumped back and forth to confirm this.

Now I am even more interested in aptX HD.


----------



## Monstieur

HuoYuanJia said:


> OK, I made an interesting discovery. While I am writing on a full in-depth beyerdynamic Aventho Wireless review, I was sketching the frequency response with REW and its tone generator. Connection was per default active via aptX. (My iMac 2017 does not have aptX HD.) Starting at 6,5 kHz artifacts become very clearly audible, all the way above 14 kHz the treble sounds complete ****. The compression is WAY TOO strong in the treble. You can hear how 6.000 Hz sounds fine and as you move up past 6.500 Hz it starts crackling.
> I switched to SBC (with the developer tool Bluetooth Explorer) and all tones sounded fine again. I jumped back and forth to confirm this.
> 
> Now I am even more interested in aptX HD.


What about AAC over Blueooth?


----------



## HuoYuanJia

Monstieur said:


> What about AAC over Blueooth?


While the Aventho Wireless officially supports the AAC codec over A2DP, I was not able to use AAC with my iMac (latest build) even when forced to in the options. The Bose QC35 does connect via AAC with the same computer, and I can say that the treble sounds a lot clearer and did not show similar behavior.


----------



## LajostheHun (Apr 5, 2018)

HuoYuanJia said:


> OK, I made an interesting discovery. While I am writing on a full in-depth beyerdynamic Aventho Wireless review, I was sketching the frequency response with REW and its tone generator. Connection was per default active via aptX. (My iMac 2017 does not have aptX HD.) Starting at 6,5 kHz artifacts become very clearly audible, all the way above 14 kHz the treble sounds complete ****. The compression is WAY TOO strong in the treble. You can hear how 6.000 Hz sounds fine and as you move up past 6.500 Hz it starts crackling.
> I switched to SBC (with the developer tool Bluetooth Explorer) and all tones sounded fine again. I jumped back and forth to confirm this.
> 
> Now I am even more interested in aptX HD.


Yeah, I'm not sure what the issue is in your setup, but there are no artifacts with mine, at least when listening to music. Of course, I never use Apple products, which could be the issue there. I've never heard anyone complain about aptX sounding like crap on any FRs.


----------



## HuoYuanJia

LajostheHun said:


> Yeah I'm not sure what's the issue is in your set up, but  there is no artifacts at least when listening to music programs with mine.Of course I never use Apple products which could be the issue there. I never heard anyone complaining of Aptx being sounding like crap on any FR's.


Have you tried single tones? I didn't notice it when hearing music either, but after discovering the artifacts I can now hear them.


----------



## LajostheHun

HuoYuanJia said:


> Have you tried single tones? I didn't notice it when hearing music either, but after discovering the artifacts I can now hear it.


No, I haven't. I only use aptX on my Android phone, so I couldn't use something like REW or similar to generate single tones.


----------



## neil74

I'd be really interested in seeing a proper scientific review of this whole subject, as clearly there is more at play than just the performance of the codecs.


I would also love to understand the details of the Android Bluetooth stack being inferior (if it really is), and whether this is a general Android thing across the board or varies by device. Anecdotally, the V30 I had sounded pretty average over Bluetooth even with Oreo and LDAC, whereas I think my current S9+ sounds pretty decent. The iPhone is consistent, I think, but I still have that nagging feeling that it is compromised by its codec for Bluetooth audio, and I may be completely wrong.


Source files and external hardware aside, it would be good to understand how Android vs. iPhone wireless audio really stacks up!


----------



## colonelkernel8

HuoYuanJia said:


> OK, I made an interesting discovery. While I am writing on a full in-depth beyerdynamic Aventho Wireless review, I was sketching the frequency response with REW and its tone generator. Connection was per default active via aptX. (My iMac 2017 does not have aptX HD.) Starting at 6,5 kHz artifacts become very clearly audible, all the way above 14 kHz the treble sounds complete ****. The compression is WAY TOO strong in the treble. You can hear how 6.000 Hz sounds fine and as you move up past 6.500 Hz it starts crackling.
> I switched to SBC (with the developer tool Bluetooth Explorer) and all tones sounded fine again. I jumped back and forth to confirm this.
> 
> Now I am even more interested in aptX HD.


I have those headphones. I bought a Sony NW-ZX300 just to use aptX HD. Pretty much the best wireless you can get right now.


----------



## zviratko

HuoYuanJia said:


> OK, I made an interesting discovery. While I am writing on a full in-depth beyerdynamic Aventho Wireless review, I was sketching the frequency response with REW and its tone generator. Connection was per default active via aptX. (My iMac 2017 does not have aptX HD.) Starting at 6,5 kHz artifacts become very clearly audible, all the way above 14 kHz the treble sounds complete ****. The compression is WAY TOO strong in the treble. You can hear how 6.000 Hz sounds fine and as you move up past 6.500 Hz it starts crackling.
> I switched to SBC (with the developer tool Bluetooth Explorer) and all tones sounded fine again. I jumped back and forth to confirm this.
> 
> Now I am even more interested in aptX HD.



I discovered this issue way back with the Sennheiser Momentums. I even filed a bug report with Apple (and Sennheiser) about it. Sennheiser told me they couldn't reproduce it; Apple did nothing. The same happened later with the B&W P7 and B&W PX. I believe the implementation of aptX in Apple's Bluetooth stack is simply faulty; artifacts are very audible with test tones.
However, IRL I have not had a problem with quality, be it aptX or AAC (even SBC sounds good when the bitpool is high enough).

One problem I do have with my new Amiron Wireless is Bluetooth range: with aptX it is vastly inferior (unusable). I literally take two steps from my MacBook and it starts dropping out. AAC is better, but still not that good. With aptX my P7 had the same range the Amiron gets with AAC, and the P7 has much greater range with AAC (50% more would be my guesstimate).


----------



## inspectah_deck

I recently got B&W PX and use them via aptX HD on my Android Oreo phone.
It works great, SQ is nice, but I wonder one thing:

Does aptX HD automatically change the samplerate/bitdepth of the codec to match source material or is it fixed to 24bit/48kHz?
In Android Oreo developer settings it always defaults to 24bit/48kHz, even when playing back 16bit/44.1kHz content.

I double checked via Logcat and the logs also show 24/48.
Anybody know more about this?


----------



## Monstieur

inspectah_deck said:


> I recently got B&W PX and use them via aptX HD on my Android Oreo phone.
> It works great, SQ is nice, but I wonder one thing:
> 
> Does aptX HD automatically change the samplerate/bitdepth of the codec to match source material or is it fixed to 24bit/48kHz?
> ...


The OS usually runs the mixer at 24/48 in shared mode and will resample the music. aptX HD, being a lossy codec, does not have a bit depth when transmitted. The decoding process will reproduce the equivalent of a 24-bit dynamic range.


----------



## Steve999 (Jun 10, 2018)

This is the only post on this thread I am competent to reply to with real-world data so I am very excited about that.

I have a nice portion of my library recorded in iTunes Plus (which is Apple AAC 256 VBR). To my ear it is transparent, full stop. The chance that I am guessing on ABX testing is 100 percent (sorry about that Bigshot; I am also practicing listening for the gurgling aquariums at 96 kbps in Frau CBR on the Sammy Davis, Jr. CD now, I promise, although my codec has not yet seized up. I do look forward to that though.).

As to whether Apple AAC 256 VBR caps out at 256 kbps, the answer is no: in my library, the _average_ bitrate is considerably higher than 256 kbps for most songs ripped in Apple AAC 256 VBR. The _lowest_ is 233 kbps, for only two songs (a song from the Duke Ellington Blanton-Webster band, and a Pokemon song; it's one of my kid's, but I rip everything in the family and we share an Apple Music account). The Apple AAC 256 VBR rips max out at an _average_ bitrate of 304 kbps (of course some passages in the file will be higher), interestingly for a few 1950s and 1960s jazz recordings (Oscar Peterson, Grant Green, Eddie Harris); at 303 kbps I have a lot of modern jazz recordings. The encoder doesn't seem to push the bitrate that high even for the better classical recordings; the highest classical track is a Dvorak Slavonic Dance at 290 kbps. The range from 233 kbps to 304 kbps is a pretty smooth progression, literally including multiple instances of every single bitrate in between. The median is definitely 256 kbps on the button--I don't even need to count--there are a ton of tracks at that bitrate. The mean average is higher; just looking at the distribution I'd ballpark it at 275 kbps.

For me iTunes Plus is transparent and set-it-and-forget-it, but I do agree with Bigshot that some of the space it is taking up with these higher bitrates is a waste of resources. It's just super easy.
Now you and this thread have me paranoid about my signal chain in terms of Bluetooth streaming. How the heck am I going to ABX that for peace of mind? I always just thought to myself, that sounds pretty good! Ignorance can be bliss, I guess. Until some huckster rips you off. I moved over from OS X to Windows 10 recently because I like Windows 10 better than OS X (gasp!) (we have both in the house, between me, my wife and three kids), and Apple is doing some increasingly weird stuff with its 27" iMac line, making it very hard to DIY-upgrade most things, and with 20/20 hindsight the iMac Bluetooth was a lot less stable.

I believe I have an Intel wireless/Bluetooth device in the Windows computer I am using now. Wait, let me check Speccy--yes, it's an Intel Dual Band Wireless-AC 3165 Bluetooth transmitter and receiver. I can always get the latest driver for it and automatically install it from the Intel site, so that much of the chain I feel pretty good about. My setup also much more reliably remembers and automatically connects to previously connected Bluetooth devices than my iMac did... sometimes better than I want it to. I decided I had had enough of the folks at Apple and gave my 27" iMac to my mom. I would use a non-Apple codec if Apple AAC weren't a no-brainer. Windows 10 and third-party monitors have finally gotten to the point where I trust them for photography, and I was able to get more performance for less money than Apple would charge, more focused on my needs than Apple would allow. Now I am shooting 99th percentile on Passmark, tuned to my preferences and needs, for thousands of dollars less than it would cost through Apple. Photoshop and Lightroom scream. Gaming performance is beyond great. Sound stuff is a piece of cake. Our portable environments include lots of both iOS and Android devices. Plus we have both Apple TV and Roku. So I have every single variable mentioned in this thread in play. Everything has to work smoothly and well because we usually have something in the area of 30 devices connected to the network at any one time (3 teenagers!) and I don't pay the cable man for my TV media anymore.



neil74 said:


> The info on the interweb is both plentiful and vague on the comparative bitrates of APTX, AAC and SBC but as I understand AAC is capped at around 256 kbps and APTX around 350 kbps?
> 
> I regularly switch between iOS and Android I am curious as to the real world limitations of wireless headphones on an iPhone vs APTX equipped droids?  e.g. using Apple Music and it's  256 kbps AAC should mean no loss but I'm unsure how higher bitrate stuff like Spotify 328 kbps or the standard Tidal 328 kbps AACs would compare and how much of a bottleneck the AAC codec would be on an iPhone?
> 
> With aptx-hd now available on both phones and headphones I am wondering how much of a real world quality advantage Androids now have with wireless headphones?


----------



## Monstieur

Steve999 said:


> I have a nice portion of my library recorded in Itunes Plus (which is Apple AAC 256 VBR).


I'm not sure what the iTunes Plus preset within iTunes uses, but both Apple Music and the iTunes Store use constrained VBR which does not exceed 256 Kb/s, but can dip below it. True VBR can exceed 256 Kb/s. If you encode using qAAC you can choose between CVBR and TVBR. Even 256 Kb/s CBR has been shown to be completely transparent.


----------



## Steve999 (Jun 10, 2018)

The iTunes Plus preset is the default iTunes rip, at least in my Windows iTunes--256 Apple AAC VBR, with most tracks encoding above that on average and a few below it, unless Foobar2000 is giving me totally bogus data, which I guess is possible. So I guess you are more likely to get a better rip from a CD using iTunes Plus than from the Apple Music store. But I think both would be transparent, and the file from the store would probably (but not always) be smaller, which would arguably be better in some ways.

Edit: I did get a Sun Ra recording off of the Itunes store once for better sound quality--I had digitized the original LP. Apple has some weird thing lately where they remaster some Sun Ra music and are the only ones distributing it. Since I like Sun Ra in my more open-minded moments I noticed it. Anyway, _there was harsh distortion at peaks on the Apple version that was not on the LP. _I don't know where in the chain the distortion was introduced--the remastering, the encoding, or somewhere else.

Just joking around, I like to call Itunes Plus "Apple AAC 256 VBR, with A1 sauce," but now I guess there may really be something to it.



Monstieur said:


> I'm not sure what the iTunes Plus preset within iTunes uses, but both Apple Music and the iTunes Store use constrained VBR which does not exceed 256 Kb/s, but can dip below it. True VBR can exceed 256 Kb/s. If you encode using qAAC you can choose between CVBR and TVBR. Even 256 Kb/s CBR has been shown to be completely transparent.


----------



## inspectah_deck (Jun 10, 2018)

Monstieur said:


> The OS usually runs the mixer at 24/48 in shared mode and will resample the music. aptX HD being a lossly codec does not have a bit-depth when transmitted. The decoding process will reproduce the equivalent of a 24-bit dynamic range.


That makes sense, I didn't think about that.
Is it possible to switch to an exclusive mode?
I know that USB Audio Player Pro has a Direct mode that pushes the stream directly to the built-in DAC, but I guess this doesn't work with Bluetooth.

So what do I do?
Set USB Audio Player Pro to do the resampling to 48kHz or leave it at Direct mode and let Android do it?

I have only FLAC on my phone.


----------



## Monstieur (Jun 10, 2018)

Resampling recordings to a higher sample rate is not meaningfully lossy since the original signal was analog anyway.


----------



## Steve999 (Jun 11, 2018)

Here I go again quoting myself.

I just installed the Columns UI interface for Foobar2000. It gives you a real-time display of bitrates as a song plays. For albums ripped with iTunes Plus, Columns UI tells me that the encoder can go well over 320 kbps at a given moment in a track encoded at 256 VBR. The _median_ average bitrate for a track ripped at this setting is definitely 256 kbps on the button, but it can go much, much higher. For example, on _Band Call_ from the CD release of Oscar Peterson's album _Night Train_, Foobar says that for a 256 iTunes Plus rip the average bitrate is 304 kbps, and the real-time indicator in the Columns UI interface (I am watching right now) gets up as high as 362 kbps, well over 320 kbps. For the opening of the song, the encoder spends most of the time over 320 kbps. And this is at the 256 kbps setting for iTunes Plus. That is one of the three most demanding tracks in my library according to the iTunes AAC encoder, so for most tracks you won't get that result, but I have seen the bitrate going over 320 kbps on other iTunes Plus 256 kbps rips as well; I just chose one of my most demanding tracks as a case study. So I do think iTunes Plus is definitely Apple AAC VBR _with A1 sauce_.
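A quick way to sanity-check numbers like these yourself: dump per-track average bitrates (e.g. as reported by Foobar2000) into a list and let Python's `statistics` module summarize them. The values below are made up for illustration, loosely shaped like the distribution described above; they are not real measurements.

```python
from statistics import mean, median

# Hypothetical per-track average bitrates (kbps) for an iTunes Plus
# (AAC 256 VBR) library -- illustrative values only.
track_kbps = [233, 233, 248, 256, 256, 256, 256, 262, 271, 280, 290, 303, 304]

print('min   :', min(track_kbps))      # lowest-complexity tracks
print('median:', median(track_kbps))   # clusters at the nominal 256
print('mean  :', round(mean(track_kbps), 1))
print('max   :', max(track_kbps))      # most demanding tracks
```

With a real library the interesting check is whether the median sits at the nominal setting while the mean drifts above it, which is what a true (unconstrained) VBR encoder would produce.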




Steve999 said:


> The Itunes Plus preset is the default Itunes rip, at least on my Windows Itunes--256 Apple AAC VBR, with most tracks encoding above that on average, and a few encoding below that on average, unless Foobar2000 is giving me totally bogus data, which I guess is possible. So I guess you are more likely to get a better rip from a CD using Itunes Plus than off of the Apple Music store. But I think both would be transparent, and the file from the store would probably (but not always) be smaller, which would arguably be better in some ways.
> 
> Edit: I did get a Sun Ra recording off of the Itunes store once for better sound quality--I had digitized the original LP. Apple has some weird thing lately where they remaster some Sun Ra music and are the only ones distributing it. Since I like Sun Ra in my more open-minded moments I noticed it. Anyway, _there was harsh distortion at peaks on the Apple version that was not on the LP. _I don't know where in the chain the distortion was introduced--the remastering, the encoding, or somewhere else.
> 
> Just joking around, I like to call Itunes Plus "Apple AAC 256 VBR, with A1 sauce," but now I guess there may really be something to it.


----------



## castleofargh (Jun 11, 2018)

VBR encodings can be 'constrained' to a given maximum. but often when you set a value in VBR you're asking for an 'average' bitrate instead of a maximum. depends on what you used, maybe which year you used it, and whether you went and messed around placing plenty of mysterious magical letters as encoding parameters in apps allowing it.


edit: I'm talking typical encoding codecs when we convert our music. not BT streaming that happens to also use their own codecs.


----------



## zviratko

I think you are confusing two different things - the source bitrate (and codec) and Bluetooth A2DP bitrate+codec.

All Bluetooth devices I've seen used AAC VBR @ 256 kbit (forcing CBR causes failure), and I've never seen them exceed 256 kbit (though I've seen them transfer less).
I wasn't able to find any definitive specs on this, so it's possible that >256 kbit can be supported, or that some devices support CBR.

In any case, whatever source you use will first be decoded internally to PCM, then transmitted using a codec. How this is handled might depend on the smartness of the source device, i.e. what bitrates show as supported on the Bluetooth audio device and can be used (I'd be very surprised if it was anything other than 44.1 kHz/32-bit float in the case of A2DP).
aptX might claim some marketing mumbo jumbo about 48 kHz/24-bit, but that's likely just the sampling rate coming out of the Bluetooth chip in the receiver, or some "marketing equivalent quality" they paired with it; in reality it will always come from a 44 kHz/32-bit source and nothing else.
Theoretically it should be possible to simply forward the AAC samples from the source (like Apple Music), but I don't think it's possible in practice due to the 256 kbit limit, so you always end up re-encoding. And what would the system do about mixing sounds together? One source might be AAC music, another might be a ringtone stored as MP3; do you suddenly start encoding both to AAC and forwarding them? No, you mix them into the PCM stream...
Whether that is transparent or not--I don't think you'd know the difference anyway.
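The mixing step described above can be sketched in a few lines — a toy float-PCM mixer, not Android's or Apple's actual implementation. Every source is decoded to PCM, summed, and only the mixed stream reaches the A2DP encoder; the sample values here are illustrative.

```python
# Toy sketch of what an OS mixer does before the Bluetooth encoder sees
# anything: decode every source to PCM, sum, hand only the mix to the codec.

def mix(*streams):
    """Sum same-length float PCM streams and hard-clip to [-1.0, 1.0]."""
    length = len(streams[0])
    assert all(len(s) == length for s in streams)
    mixed = [sum(samples) for samples in zip(*streams)]
    return [max(-1.0, min(1.0, s)) for s in mixed]

music    = [0.5, -0.4, 0.9, 0.2]   # decoded from AAC, say
ringtone = [0.3,  0.3, 0.3, 0.3]   # decoded from MP3, say
pcm_out  = mix(music, ringtone)    # this mixed PCM is what gets re-encoded
print(pcm_out)                     # note the third sample clips to 1.0
```

This is why a "bit-perfect" pass-through of the source's AAC frames is awkward in practice: the moment a second sound (notification, ringtone) has to play, the system needs a PCM mix to encode.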


----------



## castleofargh

I was answering @Steve999 who clearly wasn't talking about BT.  sorry if I confused anybody with that sort of off topic ^_^.


----------



## mackie1001 (Jun 12, 2018)

aptX is really quite basic in its approach and I'd struggle to believe the results are better than AAC in practice. All aptX is doing is throwing away increasing numbers of sample bits in different bands and applying ADPCM encoding to each band; the fact that it sounds as good as it does is surprising to me.
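For readers curious what "ADPCM per band" means in practice, here is a deliberately toy differential coder for a single band — a sketch of the general technique only, not aptX itself. The fixed `STEP` and 3-bit code size are arbitrary assumptions; real aptX first splits the signal into four subbands with filter banks, allocates a different bit count to each, and adapts its quantizer step.

```python
# Toy ADPCM-style coder for one band: predict each sample as the previous
# decoded sample and transmit only a coarsely quantized difference.

STEP = 0.125        # quantizer step size (fixed here; real codecs adapt it)
LEVELS = 8          # symmetric range -> effectively a 3-bit code per sample

def encode(samples):
    codes, pred = [], 0.0
    for x in samples:
        diff = x - pred
        code = max(-LEVELS, min(LEVELS - 1, round(diff / STEP)))
        codes.append(code)
        pred += code * STEP        # track what the decoder will reconstruct
    return codes

def decode(codes):
    out, pred = [], 0.0
    for c in codes:
        pred += c * STEP
        out.append(pred)
    return out

x = [0.0, 0.1, 0.3, 0.35, 0.2, -0.1]
y = decode(encode(x))
err = max(abs(a - b) for a, b in zip(x, y))
print('max reconstruction error:', err)
```

Even this crude version tracks a slowly moving signal to within roughly half a quantizer step, which hints at why the approach is serviceable despite its simplicity — and why fast, high-frequency content (like the test tones discussed above) is exactly where it strains.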

Is it acceptable to expand the topic to discussion of LDAC too? Now that Sony have opened it up to other vendors is anyone aware of any material that goes into technical detail about how it works? I believe it's part of the Android Open Source Project but I don't know if LDAC source code makes up part of that.


----------



## zviratko

aptX produces clearly audible artefacts when presented with a steadily rising/falling tone, or more than one tone. That by itself should be a red flag.
The only good thing about it is latency - usually much lower than AAC.

LDAC might be better than AAC or aptX (if only due to the massive bitrate jump), but "better than transparent" is still transparent. And higher bitrate means much higher battery drain.


----------



## mackie1001

It's tempting to think that LDAC at ~990 kbps *could* be lossless for 16/44 material, but the CPU power involved in getting ALAC and FLAC encoding down to the same size means that doesn't really stack up, despite what the Sony marketing machine would have you believe. Regardless, that's a lot of bits to play with.


----------



## LajostheHun

inspectah_deck said:


> I recently got B&W PX and use them via aptX HD on my Android Oreo phone.
> It works great, SQ is nice, but I wonder one thing:
> 
> Does aptX HD automatically change the samplerate/bitdepth of the codec to match source material or is it fixed to 24bit/48kHz?
> ...


I don't think aptX HD is fixed at 24/48; you can change that in the developer settings in real time while you're using BT.


----------



## inspectah_deck

LajostheHun said:


> I don't think aptX HD is fixed at 24/48; you can change that in the developer settings in real time while you're using BT.


Yes, I know.
The question is: what does Android's audio mixer do in the middle?
It has to mix the music from the app, plus other apps and all system sounds like notifications, incoming calls etc., into one stream that is then processed by aptX HD and pushed to the headphones.
The chain should be: music player app -> Android sound mixer -> aptX HD.

So when I change the sample rate to 44.1 kHz in the music app and in the Bluetooth dev options, but Android uses a native sample rate of 48 kHz (checkable with apps like Audio Buffer Size from the Play Store), it goes like this:
44.1 -> 48 -> 44.1

I don't know if it's possible to circumvent the Android audio mixer when using Bluetooth.


----------



## LajostheHun

inspectah_deck said:


> Yes, I know.
> The question is, what does Androids audio mixer do in the middle?
> It has to mix the music from the app, but also other apps and all system sounds like notifications, incoming calls etc into one stream, that is then processed by aptX HD and pushed to the headphones.
> The chain should be music player app -> Android sound mixer -> aptX HD.
> ...


Not without rooting.


----------



## inspectah_deck

LajostheHun said:


> Not without rooting.


My phone is rooted, can you share a link?


----------



## PiSkyHiFi

aptX HD is capable of being switched down from 48/24 to 44.1/24. I know that much because Neutron player can target the output to aptX HD according to the source sample rate and send it compressed at 576 kbps; I see the Bluetooth sample rate as 44.1/24 most of the time, thanks to Radsone's ES100 app, which gives me this feedback for their device.

AAC is a great codec, with a much better quality algorithm than aptX. Unfortunately aptX has been overhyped; it is quite clearly a low-efficiency codec compared to AAC, because it is mostly designed for low latency and cheap hardware.

That said, aptX HD is a different kettle of fish. It is superior to Bluetooth AAC not because of the algorithms, but because of the 24-bit dynamic range in encoding/decoding and the 576 kbps data rate, which for a dynamic range that large ensures the quality improvement over aptX at 352 kbps is almost linear.

That also puts it above AAC at 256 kbps/16-bit. I couldn't hear the difference on the ES100, but honestly I think that's down to the DAC and amp on that device being pretty good, yet not good enough to reveal the details necessary to pick the difference (it's a great portable device, really).

Not many people here have mentioned the chain they're using to try to discern the differences between codecs. I think that would be essential, since I wasn't able to hear compression artifacts until my equipment chain was able to reveal them.

The first time I heard the difference between AAC at 320 kbps fixed rate and FLAC was when I combined a decent DAC (ESS 9018K2M in an M8 desktop DAC) with a decent desktop amp (xDuoo TA-02 with S/N 110 dB) and a very good pair of cans (Beyerdynamic T1).
It was extremely subtle, but the differences were in the interpretation of the shape of the atmosphere in the source: if the source could reveal a rough room size and shape, then the compression changed that feeling noticeably for me, in recordings that were good enough.
Spatial positioning was also affected, but I wouldn't get this right every time; it was very subtle.

I'm quite happy with aptX HD, as it finally has enough data rate that the only way I could possibly tell the difference between it and uncompressed would be if I spent $10,000 on equipment to reproduce an analog stage that would reveal such tiny differences, and even then I might not pick it at all.

AAC is a great codec, but allowing for an efficiency improvement of maybe 20% to 30% over aptX at the same data rate, aptX HD is maybe around 30% more accurate and also low latency.

What happens to MP3s over Bluetooth? They get worse than they are, of course, no matter what codec--another reason to favor a bit of headroom in lossy quality.
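The 352/576 kbps figures quoted in this thread fall straight out of aptX's fixed 4:1 compression ratio, which can be checked with trivial arithmetic. The 4 and 6 coded bits per sample per channel are the commonly cited figures for aptX (16-bit input) and aptX HD (24-bit input) respectively; treat them as assumptions of this sketch.

```python
# Back-of-envelope: a fixed-ratio codec's bitrate is just
# sample rate x channels x coded bits per sample.
def kbps(rate_hz, channels, bits_per_sample):
    return rate_hz * channels * bits_per_sample / 1000

aptx    = kbps(44100, 2, 4)   # 16-bit input at 4:1 -> 4 coded bits/sample
aptx_hd = kbps(48000, 2, 6)   # 24-bit input at 4:1 -> 6 coded bits/sample
cd_pcm  = kbps(44100, 2, 16)  # uncompressed CD audio, for scale

print(aptx, aptx_hd, cd_pcm)  # 352.8 576.0 1411.2
```

So the "352 kbps" and "576 kbps" numbers are not tuned quality targets but direct consequences of the fixed compression ratio, and both sit well below the 1411.2 kbps needed for uncompressed CD audio.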


----------



## Monstieur (Jun 17, 2018)

Dynamic range is not applicable to lossy codecs like AAC. The receiver can decode to whatever bit-depth PCM it chooses, including 32-bit floating point, and for playback 16-bit is more than sufficient. AAC is transparent at 256 Kb/s and no meaningful improvement can be made other than lowering the latency.

Given the above, aptX HD is nothing but a gimmick to extract royalties for a solution to a problem that doesn't exist. It would have been an improvement if aptX HD mandated aptX Low Latency on devices, but it doesn't. It's still unusable for games / movies due to high latency.

Your anecdotes strongly look like placebo, or your testing method was flawed.


----------



## PiSkyHiFi

Monstieur said:


> Dynamic range is not applicable to lossy codecs like AAC. The receiver can decode to whatever bit-depth PCM it chooses, including 32-bit floating point, and for playback 16-bit is more than sufficient. AAC is transparent at 256 Kb/s and no meaningful improvement can be made other than lowering the latency or some other optimizations.
> 
> Given the above, aptX HD is nothing but a gimmick to extract royalties for a solution to a problem that doesn't exist.
> 
> Your anecdotes strongly look like placebo, or your testing method was flawed.







If you're having difficulties with the math, I can help, just read my post again and do the math.


----------



## Monstieur (Jun 17, 2018)

PiSkyHiFi said:


> If you're having difficulties with the math, I can help, just read my post again and do the math.


Your calculations aren't valid. 24-bit is not better than 16-bit, and the concept is not even used in lossy compression, so the bit rate has no effect on it. Resampling 48 kHz to 44.1 kHz is also imperceptible, as both sample rates exceed the Nyquist rate for audible frequencies.


----------



## PiSkyHiFi

Monstieur said:


> Your calculations aren't valid. 24-bit is not better than 16-bit, and the concept is not even used in lossy compression.



I think you just shot yourself in the foot. 24-bit isn't better than 16-bit?? Especially regarding lossy at these rates--you know, the conversion of the source to the frequency domain in floating point, compress, then decompress back to discrete samples--this is just an absurd statement.

Let me guess, you're scared of higher bandwidth too.


----------



## Brooko

PiSkyHiFi said:


> I think you just shot yourself in the foot, 24 bit isn't better than 16 bit?? - especially regarding lossy at these rates - you know, the conversion of source to frequency domain with floating point - compress, then decompress back to discrete , this is just an absurd statement.
> 
> Let me guess, you're scared of higher bandwidth too.


I suggest you do some reading about bit depth yourself--and I mean this to be helpful rather than condescending.

This is a good starting point: https://www.head-fi.org/threads/24bit-vs-16bit-the-myth-exploded.415361/

What you're essentially talking about is dynamic range, and 16-bit (for playback) already captures everything we can possibly hear. For recording it's a different story, and that is all about noise floor.

As to your other comments earlier about compression: it doesn't affect the perception of soundstage (a common fallacy). If audible, it would show up as artifacts.
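The dynamic-range point can be made concrete with the standard quantization-noise rule of thumb for an ideal N-bit quantizer driven by a full-scale sine (SNR ≈ 6.02·N + 1.76 dB) — a textbook approximation, not a measurement of any particular DAC:

```python
# Rule-of-thumb dynamic range of an ideal N-bit quantizer
# (full-scale sine input, no noise shaping): SNR ~= 6.02*N + 1.76 dB.
def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

print(round(dynamic_range_db(16), 1))  # ~98.1 dB
print(round(dynamic_range_db(24), 1))  # ~146.2 dB
```

~98 dB already spans from a quiet room to the threshold of pain, which is the basis for the claim that 16-bit suffices for playback; the extra headroom of 24-bit matters for recording and mixing, where the noise floor accumulates across processing stages.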


----------



## Monstieur (Jun 17, 2018)

PiSkyHiFi said:


> I think you just shot yourself in the foot, 24 bit isn't better than 16 bit?? - especially regarding lossy at these rates - you know, the conversion of source to frequency domain with floating point - compress, then decompress back to discrete , this is just an absurd statement.


There is no "bit-depth" in AAC or MP3. The source PCM is read at whatever bit-depth it is (16/24/32-bit) and compressed. Bit-depth is not applicable to the compressed data at all. The decoder can then decode it to whatever bit-depth it chooses (including 24/32-bit), but there is no sense in decoding to anything higher than 16-bit for playback.

If you want to preserve the bit-depth of the output samples accurately, you should not use a lossy codec.


----------



## PiSkyHiFi

Brooko said:


> Suggest you might want to do some reading about bit depth yourself - and I mean this trying to be helpful rather than condescending.
> 
> This is a good starting point : https://www.head-fi.org/threads/24bit-vs-16bit-the-myth-exploded.415361/
> 
> ...



Completely condescending and inaccurate... even the stuff about dithering adding random noise is BS. Dithering can be done a number of ways; random is not the best algorithm, cautious error diffusion is better, and it should be the final step. So correct the opinion on Head-Fi to reflect the math and then I'll pay attention to the rest of it.

Can I hear the benefits of 24-bit over 16-bit? Not easily, and mostly no... but just as you said yourself that you use 24-bit when mixing, you should also use higher than 16-bit for the same reasons whenever conversion of any kind takes place, like in a lossy transmission protocol for example (!?)

I'm happy with Red Book 44.1/16--I can't ABX anything better--but this is still lossy, and until Bluetooth bandwidth can accommodate lossless with room for signal-strength dips, aptX HD is most definitely a step up.

Absolutely, like 1+1 is 2.


----------



## Monstieur (Jun 17, 2018)

PiSkyHiFi said:


> Completely condescending and inaccurate....  even the stuff about dithering adding random noise is bs - dithering can be done a number of ways, random is not the best algorithm, cautious error diffusion is better and it should be the final step, so, correct the opinion on head-fi to reflect the math and then I'll pay attention to the rest of it.
> 
> Can I hear the benefits of 24 bit over 16 bit ? not easily and mostly no.... but just as you said yourself that you use 24 bit when mixing, you should also use higher than 16 bit so for the same reasons whenever conversion of any kind takes place, like in a lossy transmission protocol for example (!?)


16-bit does not even require dithering - that's how high its dynamic range is.

You keep going back to "bit-depth" and lossy compression. *There is no bit-depth in lossy compression.* Regardless of what the source PCM bit-depth was, the compressed data can be considered floating point. You can decompress it to whatever bit-depth you want. There is no sense in decompressing it back to the same bit-depth as the source because it's not lossless and the amplitude of the samples would have changed. The "accuracy" of the samples has already been lost due to lossy compression. As for dynamic range, 16-bit is sufficient for playback.

You cannot preserve bit-depth before and after lossy compression. Even if the output is 32-bit floating point, the samples are less accurate than the input.
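The point that precision lost to quantization can't be recovered downstream can be illustrated with a toy 16-bit round trip (pure Python, illustrative sample values; it ignores the +32767 clamp a real converter applies at positive full scale):

```python
# Requantizing to 16 bits bounds the per-sample error at half an LSB
# (~1.5e-5 of full scale); no later higher-precision representation
# can recover what was rounded away.
FS = 32768  # 16-bit signed full scale

def to_16bit(x):          # x in [-1.0, 1.0)
    return round(x * FS)

def back_to_float(q):     # "upconvert" to float -- accuracy doesn't return
    return q / FS

xs = [0.123456789, -0.987654321, 0.000013, 0.5]
errs = [abs(x - back_to_float(to_16bit(x))) for x in xs]
print(max(errs))          # bounded by 0.5 / FS
```

The same logic applies to lossy compression: once the encoder has discarded information, decoding to 24-bit or 32-bit float changes the container, not the accuracy.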


----------



## Monstieur

PiSkyHiFi said:


> you use 24 bit when mixing, you should also use higher than 16 bit so for the same reasons whenever conversion of any kind takes place


In these conversions (manipulating audio in a DAW), the goal is to preserve the accuracy of the samples. A higher bit depth increases the accuracy of the transformations.

In lossy compression, you're throwing away the bit depth completely. It's completely different from applying filters in a DAW.


----------



## PiSkyHiFi

Monstieur said:


> In these conversions (manipulating audio in a DAW), the goal is to preserve the accuracy of the samples. A higher bit depth increases the accuracy of the transformations.
> 
> In lossy compression, you're throwing away the bit depth completely. It's completely different from applying filters in a DAW.



Utterly absurd.
Let's suppose you have a 44.1/16 signal to start with.
You don't throw away anything by converting to FP and then doing a frequency analysis; if you stop right there, it's completely reversible (best use 64-bit FP to make sure). Math.

Once we apply compression, every slight increase in data rate improves the likelihood of being able to represent the original data more faithfully, until it does represent the data or reaches a codec limit. Math.

I found this absurd point in another thread about bandwidth and representation accuracy.

What should I do from here? Point out that you can't decrease the accuracy of a representation by increasing bandwidth?

I honestly don't know why you can't see it.


----------



## Brooko

PiSkyHiFi said:


> Completely condescending and inaccurate....  even the stuff about dithering adding random noise is bs - dithering can be done a number of ways, random is not the best algorithm, cautious error diffusion is better and it should be the final step, so, correct the opinion on head-fi to reflect the math and then I'll pay attention to the rest of it.



Hilariously funny: the guy who wrote it is a producer and engineer, and knows more about the topic than most people here. He's worked with Grammy award winners and owns his own studio.

I won't engage with you if this is the sort of trolling response I get. But I will state categorically: you are incorrect in your assumptions, and quite frankly you're making a clown of yourself with the approach you are taking. I suggest you go to the thread I linked and try to tell Greg his post is BS. I'm subscribed to that, so it will be an interesting conversation, I think.


----------



## PiSkyHiFi (Jun 17, 2018)

Brooko said:


> Hilariously funny: the guy who wrote it is a producer and engineer, and knows more about the topic than most people here. He's worked with Grammy award winners and owns his own studio.
> 
> I won't engage with you if this is the sort of trolling response I get. But I will state categorically: you are incorrect in your assumptions, and quite frankly you're making a clown of yourself with the approach you are taking. I suggest you go to the thread I linked and try to tell Greg his post is BS. I'm subscribed to that, so it will be an interesting conversation, I think.



Go ahead mate, the math speaks for itself.... I couldn't care less about some bloke's reputation if he has math errors....

This dude you refer to speaks of the entire dynamic range of some music being 12 dB !!

Talk about laughable...


----------



## PiSkyHiFi

I can't even begin to fathom why people would argue math... if you're reading this, do some research elsewhere, head-fi isn't a source.


----------



## PiSkyHiFi (Jun 17, 2018)

Mathematically, AAC is a superior encoding at the same bitrate to any other lossy codec.

That's the end of it though, increase the bandwidth and bit depth and you'll do better than not losing anything as the information rate increases.

This is self-evident - nothing to do with bit depth being ignored by a Fourier or discrete cosine transform in floating point. (sorry, can't do a discrete transform on FP data, I mean DCT on higher bit depth representations.)

The real question is how much more accurate (or inaccurate) AAC is than AptX HD at Bluetooth 5.0 specs.


----------



## castleofargh

while they're non PCM codecs, we still need some sort of easy reference for what the extracted signal will contain. I agree that it's not the proper expression of resolution for non PCM signals, but if we don't use that, what else do we have? the bitrate can be even less relevant when different compression codecs are being used. so there is not much of an option, we either know all about each codecs involved, or we use the PCM equivalent they give us(with the hope that they're not misleading us too much).

now, and this is only my opinion, I have a hard time taking any so called high res BT seriously. I have yet to use a BT headphone that wouldn't have audible background hiss when I'm in a quiet room. so 24bit or 16bit equivalents for the signal, I couldn't care less when I'll still get plain old hiss some 50 to 70dB below signal depending on how loud I listen to my headphone. 
and that's without including ambient noise at all.
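For context on those numbers: the theoretical noise floor implied by a given bit depth follows the well-known ~6.02 dB-per-bit rule of thumb. This sketch just evaluates that formula (the +1.76 dB term assumes a full-scale sine reference, undithered):

```python
def pcm_snr_db(bits):
    """Theoretical SNR of ideal N-bit PCM against a full-scale sine:
    the standard ~6.02*N + 1.76 dB rule of thumb (undithered)."""
    return 6.02 * bits + 1.76

# 16-bit already gives ~98 dB of theoretical range; analog hiss sitting
# only 50-70 dB below the signal dominates long before the codec's
# claimed bit depth ever matters.
print(round(pcm_snr_db(16), 1))  # 98.1
print(round(pcm_snr_db(24), 1))  # 146.2
```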




PiSkyHiFi said:


> I can't even begin to fathom why people would argue math... if you're reading this, do some research elsewhere, head-fi isn't a source.


 they tried to explain to you why your math is not necessarily a direct translation into fidelity. objective or subjective. math is not the issue here. taking one variable and deciding that it's the reliable one to quantify fidelity, it already can be a difficult thing to do with PCM signals sometimes. so with lossy codecs and streaming requirements, there is nothing wrong with trying to look at the whole picture, and maybe try to measure the output of some gears while using various solutions(if we're lucky to have them on the same devices). 

that said, I believe we have enough topics already where we can all go in circle with partial views about the audio chain and which variable is strongest animal in the fidelity jungle for whatever reason. I don't believe this topic needs to turn into one more of those mud fights. we've been trying to get a clearer picture about what each BT codec really does and I'd love to keep it that way.


----------



## PiSkyHiFi

castleofargh said:


> while they're non PCM codecs, we still need some sort of easy reference for what the extracted signal will contain. I agree that it's not the proper expression of resolution for non PCM signals, but if we don't use that, what else do we have? the bitrate can be even less relevant when different compression codecs are being used. so there is not much of an option, we either know all about each codecs involved, or we use the PCM equivalent they give us(with the hope that they're not misleading us too much).
> 
> now, and this is only my opinion, I have a hard time taking any so called high res BT seriously. I have yet to use a BT headphone that wouldn't have audible background hiss when I'm in a quiet room. so 24bit or 16bit equivalents for the signal, I couldn't care less when I'll still get plain old hiss some 50 to 70dB below signal depending on how loud I listen to my headphone.
> and that's without including ambient noise at all.
> ...



My main issue for BT headphones is that they probably don't have the DAC or analog stage to match what the BT data rate can achieve when using AptX HD - that is the whole point though, it's headroom to help remove digital error from the reproduction, especially because it's lossy still.


----------



## PiSkyHiFi

AptX HD is also used as a transmission protocol for multi-room wireless speakers, I sincerely hope Bluetooth achieves lossless with varying transmission rates in future.

When that happens, we can all dump on AptX HD objectively (and AAC)


----------



## Brooko

PiSkyHiFi said:


> Go ahead mate, the math speaks for itself.... I couldn't care less about some blokes reputation if he has math errors....
> 
> This dude you refer to speaks of the entire dynamic range of some music being 12 dB !!
> 
> Talk about laughable...



Evidently you skimmed the post.  What Greg said is that some music has a dynamic range of as little as 12 dB.  The recordings with the highest DR are mainly classical, and even those rarely go above 60 dB.  So he's given you a common set of numbers (min and max) for DR in most recordings (~12-60 dB).  Whether you like it or not - he is the expert in this field.

Rather than paraphrasing - here is the pertinent part:



> So, can you actually hear any benefits of the larger (48dB) dynamic range offered by 24bit? Unfortunately, no you can't. The entire dynamic range of some types of music is sometimes less than 12dB. The recordings with the largest dynamic range tend to be symphony orchestra recordings but even these virtually never have a dynamic range greater than about 60dB. All of these are well inside the 96dB range of the humble CD. What is more, modern dithering techniques (see 3 below), perceptually enhance the dynamic range of CD by moving the quantisation noise out of the frequency band where our hearing is most sensitive. This gives a percievable dynamic range for CD up to 120dB (150dB in certain frequency bands).
> 
> You have to realise that when playing back a CD, the amplifier is usually set so that the quietest sounds on the CD can just be heard above the noise floor of the listening environment (sitting room or cans). So if the average noise floor for a sitting room is say 50dB (or 30dB for cans) then the dynamic range of the CD starts at this point and is capable of 96dB (at least) above the room noise floor. If the full dynamic range of a CD was actually used (on top of the noise floor), the home listener (if they had the equipment) would almost certainly cause themselves severe pain and permanent hearing damage. If this is the case with CD, what about 24bit Hi-Rez. If we were to use the full dynamic range of 24bit and a listener had the equipment to reproduce it all, there is a fair chance, depending on age and general health, that the listener would die instantly. The most fit would probably just go into coma for a few weeks and wake up totally deaf. I'm not joking or exaggerating here, think about it, 144dB + say 50dB for the room's noise floor. But 180dB is the figure often quoted for sound pressure levels powerful enough to kill and some people have been killed by 160dB. However, this is unlikely to happen in the real world as no DACs on the market can output the 144dB dynamic range of 24bit (so they are not true 24bit converters), almost no one has a speaker system capable of 144dB dynamic range and as said before, around 60dB is the most dynamic range you will find on a commercial recording.



I'm really not sure why you are so aggressive with this.
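The arithmetic in the quoted passage is easy to check. A quick sketch using the usual ~6.02 dB-per-bit figure for a format's theoretical range (the 50 dB room noise floor is the quote's own example number):

```python
def peak_spl_db(room_noise_floor_db, bits):
    """Peak SPL needed to fit an N-bit format's full theoretical range
    (~6.02 dB per bit) on top of the room's noise floor."""
    return room_noise_floor_db + 6.02 * bits

# Greg's example: a 50 dB room noise floor
print(round(peak_spl_db(50, 16)))  # 146 dB SPL for CD's full range
print(round(peak_spl_db(50, 24)))  # 194 dB SPL - past the often-quoted 180 dB danger figure
```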


----------



## PiSkyHiFi

Brooko said:


> Evidently you skimmed the post.  What Greg said is that some music has a dynamic range of as little as 12 dB.  The recordings with the highest DR are mainly classical, and even those rarely go above 60 dB.  So he's given you a common set of numbers (min and max) for DR in most recordings (~12-60 dB).  Whether you like it or not - he is the expert in this field.
> 
> Rather than paraphrasing - here is the pertinent part:
> 
> ...



Why did you underline some music? It seems to me you didn't read my post.

Whether he likes it or not, extracting 150 dB from a 96 dB source is complete BS - error diffusion, interpolation, whatever: you can infer a little more with correct mastering, but it tails off pretty quickly - a lot quicker than achieving 120 to 150 dB.
I think he is trying to refer to aliasing present in PCM and using either a decent high-pass filter or a mathematical equivalent to help eliminate it.
I'll tell you what does achieve 120 dB without skipping a beat: a 24-bit source.

The superiority of classical music? Unbelievable bias. Maybe he means music with a lot of combined waveforms - he probably should say that.

Music doesn't go above 60 dB dynamic range? I feel people here are just digging holes for themselves. Do they mean a MIDI interpretation of a piece? I can't even wrap my head around the intended meaning.
What about the sound... that's actually the music too.

It's a reasonable article, with some obvious errors here and there, but it doesn't change anything to do with AptX HD, AAC and PCM.


----------



## neil74

PiSkyHiFi said:


> Mathematically, AAC is superior encoding at the same bitrate to any other lossy codec.



Opus??


----------



## PiSkyHiFi

neil74 said:


> Opus??


Ahh, I hadn't heard of this until now...

Looks like it's the replacement for Vorbis.

I stand corrected, thanks for the info!


----------



## PiSkyHiFi

I'm just going to leave this quote from Wikipedia about dynamic range:

"In 1981, researchers at Ampex determined that a dynamic range of 118 dB on a dithered digital audio stream was necessary for subjective noise-free playback of music in quiet listening environments."

That is the number that concurs well with my understanding, despite coming from 1981.
There does seem to be conflict about it in general, I guess because terms need to be clearly defined regarding what's being measured.


----------



## Brooko

PiSkyHiFi said:


> I'm just going to leave this quote from Wikipedia about dynamic range:
> 
> "In 1981, researchers at Ampex determined that a dynamic range of 118 dB on a dithered digital audio stream was necessary for subjective noise-free playback of music in quiet listening environments."
> 
> ...



The quote was talking about recording - not playback (it's available in the AES library).

Here's the part they were talking about:


> A dynamic range of 118 dB is determined necessary for subjective noise-free reproduction of music in a dithered digital audio recorder. Maximum peak sound levels in music are compared to the minimum discernible level of white noise in a quiet listening situation. Microphone noise limitations, monitoring loudspeaker capabilities, and performance environment noise levels are also considered.



And if you take the noise floor (about 30-40 dB in a quiet recording environment), then the microphones would need that sort of dynamic range to be able to record the lowest and highest audible sounds.  Or in other words, 30 dB (noise floor) + 80-90 dB of safety margin to reach peak.  It still means the actual DR of even well-recorded music is going to max out at around 60-70 dB (well within the 16-bit window for playback).


----------



## PiSkyHiFi

Brooko said:


> The quote was talking about recording - not playback (its available in the AES library).
> 
> Here's the part they were talking about:
> 
> ...



I'm not talking about volume range when I discuss dynamic range - it could be interpreted that way - but about audible precision. The human ear attenuates to handle a range of loud sounds or a range of soft sounds, but not both at the same time; the reproducing device still needs the full range of at least 100 dB, plus some room for people's ability to focus on aspects of precision that aren't obvious, like interferometry - humans can actually locate objects using this to a degree, combined with straight volume differences between the ears. So spatial sense is very sensitive to precision changes.


You mention noise floor - so I'm working with a value of 0 dB, which is defined to be the audible threshold of an isolated tone.

Good old tape decks achieve up to 90 dB precision, apparently up to 110 dB with Dolby, and the difference is audible.

Equipment I use needs to be above 110 dB in precision or I can tell pretty quickly. That's just from experience and looking at the specs of things I try.

I much prefer the solid noiseless sound of an ESS 9018 to the sound of a Wolfson WM8740, for example; the precision difference is audible. Some say the 9018 is too dry... probably because of the complete lack of any audible noise, replaced with tight precision. I love it.


----------



## castleofargh

PiSkyHiFi said:


> I'm not talking volume range when I discuss dynamic range, it could be interpreted that way, but I'm discussing audible precision, which means that despite the human ear attenuating hearing a range of loud sounds or a range of soft sounds, but not both at the same time, the sound reproducing device needs the full range of at least 100dB, plus some room for some people's ability to focus on aspects of the precision that aren't obvious, like interferometry - humans can actually locate objects using this to a degree, combined with straight volume difference between ears. So spatial sense is very sensitive to precision changes.
> 
> 
> You mention noise floor - so I'm working with a value of 0 dB, which is defined to be the audible threshold of an isolated tone.
> ...


look I really do not want this plague of a discussion to ruin yet another topic. please take it to a topic about human hearing, bit depth, dynamic range or whatever it is you're trying to say. we can probably find a dozen of those topics still open on Headfi, so pick one if the subject really interests you and maybe link your answer in here so that the few who care can go there and follow up on the discussion.
if you find a BT headphone with "110dB in precision"(whatever that means) at your usual listening level, maybe I'll reconsider all this as being relevant to this topic. then I'll probably go and buy that headphone. ^_^


----------



## PiSkyHiFi

castleofargh said:


> look I really do not want this plague of a discussion to ruin yet another topic. please take it to a topic about human hearing, bit depth, dynamic or whatever it is you're trying to say. we can probably find a dozens of those topics still open on Headfi, so pick one if the subject really interests you and maybe link your answer in here so that the few who care, can go there and follow up on the discussion.
> if you find a BT headphone with "110dB in precision"(whatever that means) at your usual listening level, maybe I'll reconsider all this as being relevant to this topic. then I'll probably go and buy that headphone. ^_^



Having fun yet? I generally disagree with not being allowed to flesh out answers with relevant info from surrounding fields, I believe a brief discussion about dynamic range was relevant to the understanding of precision when comparing different audio codecs.

If you are genuinely interested in quality Bluetooth headphones, consider modularising... Have a look at the Radsone Ear Studio ES100, a portable unit that you can plug unbalanced or balanced headphones into, with better-than-average sound quality, using two of these internally:

https://www.akm.com/akm/en/file/datasheet/AK4375AECB.pdf

Straight out S/N ratio of 110 dB and some low distortion too.

It supports AAC if you're on Apple, and AptX HD if you have a device that can transmit it - plain AptX too, though I personally wouldn't use that unless I had to.

It sells itself so I don't have to, and the designer/producer/manufacturer is active on head-fi with respect to firmware updates etc. @wslee


----------



## Steve999 (Jun 17, 2018)

Welcome to head-fI Sound Science, and thanks for joining us.

Opus is pretty cool. You can grab the encoder pack for Foobar2000 and play around with it there if you are interested. My understanding, and I am no expert, is that it is state of the art, even beyond Apple AAC. However, for some mainstream uses, it has major compatibility issues.

https://opus-codec.org/



PiSkyHiFi said:


> I'm just going to leave this quote from Wikipedia about dynamic range:
> 
> "In 1981, researchers at Ampex determined that a dynamic range of 118 dB on a dithered digital audio stream was necessary for subjective noise-free playback of music in quiet listening environments."
> 
> ...


----------



## castleofargh

PiSkyHiFi said:


> Having fun yet? I generally disagree with not being allowed to flesh out answers with relevant info from surrounding fields, I believe a brief discussion about dynamic range was relevant to the understanding of precision when comparing different audio codecs.
> 
> If you are genuinely interested in quality Bluetooth headphones, consider modularising... Have a look at the Radsone Ear Studio ES100, which provides a portable unit that you can plug unbalanced or balanced headphones into with better than average sound quality using 2 of these internally:
> 
> ...


to contest some of your points requires specific examples or really deep digging into the available research if we wish to get anywhere. both of which will drag this thread into a long off topic and inevitably into heated argumentation, because the limits of human hearing is something people love to fight over. it's not my first dance, which is why I simply suggested continuing this specific discussion (not stopping it!) in a proper thread. I'm not against having that conversation at all. I would just like this topic to survive it. 
now if this is going to turn into a we should go somewhere else vs we shouldn't, for 5 pages, it really defeats the purpose of me trying to limit off topic.

a chipset's spec is not the fidelity of a device. I truly wish it was.


----------



## PiSkyHiFi

castleofargh said:


> to contest some of your points requires specific examples or really deep digging about the available research if we wish to get anywhere. both of which will drag this thread into a long off topic and inevitably into heated argumentation because the limits of human hearing is something people love to fight over. it's not my first dance, which is why I simply suggested to go continue this specific discussion(not to stop it!) in a proper thread. I'm not against having that conversation at all. I would just like it if this topic to survive it.
> now if this is going to turn into a we should go somewhere else vs we shouldn't, for 5 pages, it really defeats the purpose of me trying to limit off topic.
> 
> a chipset's spec is not the fidelity of a device. I truly wish it was.



I defended my position regarding being on topic, I stand by it. Codecs in the real world.

A chipset's spec is not the fidelity of the device... I couldn't agree more, but it is where the chain starts, unless it is a resistor ladder or something with no integrated circuits.


----------



## Steve999 (Jun 17, 2018)

If this is off-topic please let me know. I don't even know enough to know what is off-topic, but I am very curious about all of this. I am afraid I am going to have to drag this down to my level because that is about where I am on the learning curve.

I have bluetooth 4.2 in my computer via an m.2 card. Is there anything special about that compared to prior versions of bluetooth? It seems to give me more reliable connections than I have had before and to remember my preferences better than earlier versions. I don't know if that's due to advances in the bluetooth or Windows itself or some combination.

I could buy a bluetooth 5.0 m.2 card for the slot to put in in place of the 4.2, for not very much money at all (they are both wifi plus bluetooth m.2 cards). Would this be of practical benefit?

My m.2 wifi + bluetooth card is an Intel card and I regularly get updates for it right off of the Intel site, so I feel pretty good about that.

I truly don't understand how AAC versus Aptx versus aptx-hd comes into play with what I am doing, or if there is anything I can do at my computer to choose the best technology. I suppose I am as interested in reliability of connectivity as sound quality. I'm pretty happy with the sound quality and really not too picky about it. Things that are obviously wrong, I take care of, but if everything is running well I am pretty pleased.

FWIW, I turn off my graphics card for anything but photography because my CPU chip can handle anything else that comes its way with aplomb. I also use SSD drives to the extent it is practical. Those steps greatly reduce the background noise from my computer to almost nil so I can enjoy the music more, in a very practical way.

For audio from my computer I mainly use bluetooth to listen to two bluetooth speakers, one or the other. I've got a really rare top-flight Samsung speaker that I bought at Best Buy that was discounted at a fraction of the original price, and a Marshall bluetooth speaker that is less hifi but more convenient, which I got for a big discount at a holiday sale. (Yes, I am kind of a deal-hunter at times.)

The Samsung will also do airplay and wifi and ethernet and line in and take a usb stick but I'm not using these options from day to day. Those are probably technically better hifi options than bluetooth, I would guess. Here is Samsung's link to product information about the Samsung speaker: https://www.samsung.com/uk/audio-video/audio-dock-e750/   I think it was more of a concept piece for Samsung and I don't think it sold very well and I saw it sitting on a shelf at Best Buy a few years ago for about a third or fourth of the original price so I went home and researched it and came back and snapped it up. It's sitting right in front of me about five feet away.

It's quite a nice visual showpiece as well as a pretty darn good speaker, but nothing close to a nice home stereo setup. I know, the tube on the Samsung is a bit much, but honestly, the look is cool, and it does give off a glow and a warmth (visually and temperature-wise, not audibly) and it's otherwise quite striking and the sound is quite fulfilling to me. Again, I think they were trying to penetrate the market with a concept piece and it never took off. I am thinking the version of bluetooth it connects with might be the limiting factor though as the speaker is some years old, but I don't have any kind of grasp as to what takes place in the audio chain where bluetooth is involved.

The Marshall is about five feet behind me:  https://www.amazon.com/Marshall-Sta...099&sr=1-3&keywords=marshall+stanmore+speaker  I don't think it approaches anything as close to hifi as the Samsung but subjectively and for ease of use and due to the bass and treble knobs it really hits a soft spot for me subjectively, and having the bass and treble knobs readily accessible is really nice. It also has two line ins and  an optical in if I want to max out audio quality. I've kind of experimented over time and gradually arrived at bass and treble settings that I really enjoy the most and don't fiddle with them much anymore, and I almost always use the bluetooth.

I don't think either speaker approaches my home stereo sound, but for my home stereo I generally use Apple music with Apple TV.

I am just looking for practical words of wisdom and some learning about what I am doing and what I could do better, and whether bluetooth 5.0 would give me some practical benefit over bluetooth 4.2. And also, if it's not too much over my head (and it may be), a good general picture (for a layman) of what goes on in the chain from computer to speaker over bluetooth, especially as it relates to audio CODECs. It's a very interesting subject for me. I find the whole technology kind of amazing.

Thanks, everyone.


----------



## PiSkyHiFi (Jun 18, 2018)

Steve999 said:


> If this is off-topic please let me know. I don't even know enough to know what is off-topic, but I am very curious about all of this. I am afraid I am going to have to drag this down to my level because that is about where I am on the learning curve.
> 
> I have bluetooth 4.2 in my computer via an m.2 card. Is there anything special about that compared to prior versions of bluetooth? It seems to give me more reliable connections than I have had before and to remember my preferences better than earlier versions. I don't know if that's due to advances in the bluetooth or Windows itself or some combination.
> 
> ...



I would say your setup with Bluetooth 4.2 is fine. I read that Bluetooth 5.0 has twice the data rate and four times the range of 4.2, but I'd take the range increase with a grain of salt - I think it depends on implementation. I have seen Bluetooth devices that claim to use "Class 1 Bluetooth", which just boosts the signal as far as I can tell. Like this home audio device:

https://www.aliexpress.com/item/Ava...tter-and-Receiver-Bypass-and/32861590670.html

I nearly pulled the trigger on that one, because it has digital bypass for your TV, meaning your TV audio can be working fine, then you choose some music from your phone or computer to play over Bluetooth and it switches over automatically.

I think in terms of AptX HD (finally relevant to this thread), it can be done over either 4.2 or 5.0, but it needs to be available on both ends to be used at all - so probably not from your computer without a lot of fiddling. There are standalone transmitters, as well as some phones, that can act as an AptX HD source.

If you aren't too worried about achieving a high degree of sound fidelity, then stick with what you have; the M.2 Bluetooth 4.2 is no doubt quite useful, and going up would only mean getting more components to match.


----------



## Steve999

Thanks. 



PiSkyHiFi said:


> I would say your setup with Bluetooth 4.2 is fine, I read that Bluetooth 5.0 has twice the data rate with 4 times the range over 4.2, but I would take the range increase with a grain of salt, I think that depends on implementation, I have seen Bluetooth devices that claim to use "Class 1 Bluetooth", which just boosts the signal as far as I can tell. Like this home audio device:
> 
> https://www.aliexpress.com/item/Ava...tter-and-Receiver-Bypass-and/32861590670.html
> 
> ...


----------



## neil74

I think with Opus the only potential downside is the resampling, as it runs at 48 kHz, not 44.1 kHz. I am honestly not sure how much of an issue this is in practice.
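On the 44.1 vs 48 kHz point: the conversion ratio is an exact rational, which is part of why a good polyphase resampler can handle it cleanly. A quick check with Python's `fractions` module:

```python
from fractions import Fraction

# Opus operates internally at 48 kHz, so 44.1 kHz material must be resampled.
# The ratio reduces to small integers, which a polyphase resampler exploits:
# upsample by 160, low-pass filter, downsample by 147.
ratio = Fraction(48000, 44100)
print(ratio)  # 160/147
```

Any quality loss then comes down to the resampler's filter design, not the ratio itself.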

Back to the codec debate, I am still unsure as to how the combination of file vs Bluetooth codecs stacks up in the real world, e.g. on an iPhone that only has AAC Bluetooth, is there any advantage to using an AAC-based service such as Apple Music, Amazon or Tidal vs. say Spotify?    Many far more learned people than me say that AAC files are still transcoded over AAC Bluetooth anyway.   My perception was that Spotify sounded worse over BT, but now I think this may just be down to inconsistencies with its master files rather than anything to do with Vorbis vs AAC.

I am starting to think that there is no 'real world' advantage at all, but I also do not fully understand how, say, LDAC behaves - Android with LDAC may have the edge if your files are 320 kbps, as no transcoding down to 256 is needed?   As you can probably tell I am not an expert!  I am just hoping that it will not be long before this argument can be put to bed.


----------



## mackie1001

Apparently you can transcode AAC to AAC many times without further degradation so I suspect you’d get better overall results with Apple Music vs Spotify assuming the original quality before transcoding was equivalent. Not tested it myself though.


----------



## PiSkyHiFi

mackie1001 said:


> Apparently you can transcode AAC to AAC many times without further degradation so I suspect you’d get better overall results with Apple Music vs Spotify assuming the original quality before transcoding was equivalent. Not tested it myself though.



Somebody said that AAC was completely transparent after 100 recursive encodings. It isn't, but it is quite good at maintaining nearly all of the quality across repeated encodings, certainly compared to other encoders - there is a video on YouTube demonstrating just how bad other encoders can be.
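Real AAC encoders are far more complex than this, but a toy quantizer shows why generational loss tends to plateau: once the signal consists only of values the quantizer can represent, re-encoding it changes nothing. A hedged sketch (the quantizer and sample numbers here are illustrative, not AAC):

```python
def toy_lossy_encode(samples, step=0.1):
    """Toy 'lossy codec': plain uniform quantization. Real AAC applies a
    windowed MDCT plus perceptual bit allocation, but the plateau effect
    is the same in spirit."""
    return [round(s / step) * step for s in samples]

samples = [0.113, -0.527, 0.999, -0.004, 0.25]
gen1 = toy_lossy_encode(samples)   # first generation
gen2 = toy_lossy_encode(gen1)      # second generation
print(gen1 != samples)  # True: the first pass discards information
print(gen2 == gen1)     # True: re-encoding its own output loses nothing more
```

Transform codecs don't line up this neatly across generations (framing and windowing shift between passes), which is why AAC degrades slowly rather than not at all.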



There are minor differences between encoders even just for AAC. I used the Fraunhofer encoder in FFmpeg, believed to be one of the best.

Having now been informed of Opus, which is claimed to be a better codec than AAC, that might well be a very similar story.

There is no way for any of the encoders to be certain that they are presented with PCM data that was already encoded once before, so over Bluetooth, it will be encoded again. Maybe Apple can gain control over this with end-to-end Apple products because they have a closed development system and do a pass-through from the App to the decoder at the Bluetooth sink, but it would be restricted to the Apple ecosystem.

I had my time using AAC; now I'm moving forward, focusing on FLAC for storage and AptX HD for wireless until Bluetooth can be lossless.

I really don't understand why people would say AptX HD is just a gimmick, mathematically, it's not a gimmick, I suspect they might be motivated by brand loyalty - We all know Apple has very strong brand loyalty.


----------



## neil74

The merits of aptx-hd or LDAC may be up for debate but what is for sure is that over Bluetooth on an Apple device you are capped at 250 kbps.  

Whether this is really an issue in the real world is also up for debate.  On paper though, right now Android seems the better option for Bluetooth audio?


----------



## PiSkyHiFi

neil74 said:


> The merits of aptx-hd or LDAC may be up for debate but what is for sure is that over Bluetooth on an Apple device you are capped at 250 kbps.
> 
> Whether this is really an issue in the real world is also up for debate.  On paper though right now android seems the better option for Bluetooth audio?



No one wants to experience the wrath of a fanboi, Apple or Android!

Sometime in the future, I might have the equipment to analyse the results of comparing them purely digitally, statistically. That's probably the only way to settle this.


----------



## shortwavelistener (Jul 14, 2018)

LajostheHun said:


> These codecs [AAC/Aptx] work differently from one another, so they can't be compared directly like that with blanket statements. Aptx splits the audio into 4 different sub-bands and applies data reduction to each independently; AAC works differently - much closer to, and basically an improved version of, MPEG MP3. All this reminds me of the old DD/DTS debate, where people just made up a bunch of incorrect theories for why they preferred DTS over Dolby.



aptX is superior to AAC because it uses time-domain ADPCM instead of the perceptual encoding (based on psychoacoustic models) commonly used by MP3, AAC and WMA. This makes aptX more efficient than other lossy codecs.

aptX sounds more like WavPack Hybrid Lossy, since the compression artifacts sound identical at low frequencies (mild distortion + cassette-like hiss).

So if an AAC file is transcoded into aptX, there will be minimal change to SQ, as aptX does not "shave off" very high or low frequencies like MP3 does; rather, it varies the size of the quantization step to reduce the required data bandwidth for a given S/N ratio.

So a 16 bit AAC file will be transcoded by the BT device into a 6 or 8 bit ADPCM file - which is 99% indistinguishable from the original file, and that is what you are going to hear when using an aptX device.
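For anyone curious what "varying the quantization step" looks like in practice, here is a toy sketch of the ADPCM idea in Python. It is illustrative only: a generic adaptive quantizer, not the actual aptX algorithm (real aptX adds a four-band filterbank and per-band bit allocation on top of this principle).

```python
import math

def adpcm_encode(samples, bits=4):
    """Encode samples as `bits`-bit codes: quantize the prediction
    error with a step size that adapts to the signal."""
    codes, predicted, step = [], 0.0, 1.0
    max_code = 2 ** (bits - 1) - 1
    for s in samples:
        code = max(-max_code, min(max_code, round((s - predicted) / step)))
        codes.append(code)
        predicted += code * step  # mirror what the decoder will reconstruct
        # Grow the step when the quantizer saturates, shrink it otherwise.
        step = max(step * (1.5 if abs(code) == max_code else 0.95), 1e-3)
    return codes

def adpcm_decode(codes, bits=4):
    """Rebuild the waveform by running the same predictor/step logic."""
    out, predicted, step = [], 0.0, 1.0
    max_code = 2 ** (bits - 1) - 1
    for code in codes:
        predicted += code * step
        out.append(predicted)
        step = max(step * (1.5 if abs(code) == max_code else 0.95), 1e-3)
    return out

# A 440 Hz tone at 44.1 kHz, squeezed from 16-bit-ish amplitude into 4-bit codes.
tone = [100 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(500)]
decoded = adpcm_decode(adpcm_encode(tone))
```

The point of the sketch is the last two lines of each loop: nothing is "shaved off" in frequency; the codec just tracks the waveform with few bits and an adaptive step, which is why its artifacts sound like hiss rather than MP3-style smearing.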


----------



## Monstieur

Except that aptX audibly degrades the sound in certain frequencies and AAC is transparent.


shortwavelistener said:


> aptX is superior to AAC because it uses time domain ADPCM instead of perceptual encoding (which are based on psychoacustic models) which are commonly used by MP3, AAC and WMA. This makes aptX more efficient than other lossy codecs.
> 
> aptX sounds more like WavPack Hybrid Lossy, since the compression artifacts sounds identical on low frequencies (mild distortion + cassette like hiss)
> 
> ...


----------



## inspectah_deck

Monstieur said:


> Except that aptX audibly degrades the sound in certain frequencies and AAC is transparent.


Do you have a source for that?


----------



## LajostheHun

shortwavelistener said:


> aptX is superior to AAC because it uses time domain ADPCM instead of perceptual encoding (which are based on psychoacustic models) which are commonly used by MP3, AAC and WMA. This makes aptX more efficient than other lossy codecs.
> 
> aptX sounds more like WavPack Hybrid Lossy, since the compression artifacts sounds identical on low frequencies (mild distortion + cassette like hiss)
> 
> ...


I consider APTX superior too, so I'm not sure why you quoted me, but in any case the differences are extremely subtle IMO, not worth losing sleep over it.


----------



## shortwavelistener

Also, in your earlier message that I quoted, you said that aptX splits the signal into four sub-bands. Does that mean the signal encoding of aptX is the same as SBC's? And does it also apply to LDAC?


----------



## Monstieur (Jul 16, 2018)

LajostheHun said:


> I consider APTX superior too, so I'm not sure why you quoted me, but in any case the differences are extremely subtle IMO, not worth losing sleep over it.


The difference between AAC and MP3 is subtle and probably impossible to ABX at 256 kb/s. aptX causes audible degradation which ruins the listening experience in particular songs if you know what it's supposed to sound like. This happens only in particular notes in particular songs, so it's a non-issue 95% of the time. I have not heard aptX HD yet so I can't comment on its subjective quality.

This document shows other flaws of aptX.
https://docs.wixstatic.com/ugd/fb72a4_a6a76213617c46c38213e298784bff55.pdf


----------



## Monstieur (Jul 20, 2018)

https://www.head-fi.org/threads/radsone-earstudio.867366/page-22#post-14086297
My takeaway from this is that aptX and aptX HD are barely better than SBC. They are essentially just low-bit-depth ADPCM with a variable noise floor across bands. The audible artefacts are probably due to implementation flaws rather than the codec itself.

AAC and MP3 are vastly superior codecs and it's laughable to even compare them to low bit-depth ADPCM. It's like comparing FM radio to a modern digital connection.

SBC and aptX are relics from an era where manufacturing low-power chips was challenging.


----------



## shortwavelistener

Monstieur said:


> https://www.head-fi.org/threads/radsone-earstudio.867366/page-22#post-14086297
> My take away from this is that aptX and aptX HD are barely better than SBC. They are essentially just low bit-depth ADPCM with variable noise floor across bands. The audible artefacts are probably due to implementation flaws rather than the codec itself.
> 
> AAC and MP3 are vastly superior codecs and it's laughable to even compare them to low bit-depth ADPCM. It's like comparing FM radio to a modern digital connection.
> ...



Ironically, sub-band coding (the technique behind SBC) is also at the core of the MPEG-1 audio family: Layers I and II (MP1/PASC and MP2, on which Musepack builds) are pure sub-band codecs, and MP3 adds an MDCT on top of the same filterbank.


----------



## Colors

Waiting for the day they make Bluetooth audio SQ the same as wired + a good DAP + FLAC/lossless files ~


----------



## shortwavelistener

Colors said:


> Waiting for the day they make Bluetooth audio SQ the same as wired + a good DAP + FLAC/lossless files ~



Let's see if the upcoming Bluetooth 5.1 will permit true lossless audio streaming, using genuinely lossless compression rather than aptX.


----------



## inspectah_deck

Monstieur said:


> https://www.head-fi.org/threads/radsone-earstudio.867366/page-22#post-14086297
> My take away from this is that aptX and *aptX HD* are barely better than SBC.


I don't see any data about aptX HD in this post to support that conclusion.
Although I'm thankful for your informative posts, I notice a pretty aggressive pro-AAC, anti-aptX tone in them, which I find unnecessary.


----------



## bigshot

Audible transparency is the goal. There’s more than one way to skin a cat.


----------



## shortwavelistener

bigshot said:


> Audible transparency is the goal. There’s more than one way to skin a cat.



Yes, and aptX HD achieves audible transparency at *1/4* of the bitrate of a standard audio CD (full CD quality = 1,411.2 kbps, so 1/4 = 352.8 kbps).
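For reference, the arithmetic behind those figures: Red Book's bitrate is just sample rate × bit depth × channels.

```python
# Red Book CD audio: 2 channels of 16-bit PCM sampled at 44.1 kHz.
sample_rate_hz, bit_depth, channels = 44_100, 16, 2

cd_kbps = sample_rate_hz * bit_depth * channels / 1000   # 1411.2 kbps
quarter_kbps = cd_kbps / 4                               # 352.8 kbps, aptX HD's rate
print(cd_kbps, quarter_kbps)  # 1411.2 352.8
```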


----------



## bigshot

AAC is transparent at around 256


----------



## PiSkyHiFi

bigshot said:


> AAC is transparent at around 256



For you, maybe. aptX HD is a transmission protocol, which means it's better placed to use higher-bandwidth hardware to achieve better transparency. 576 kbps is a step forward from assuming everyone's gear can handle nothing better than AAC. Come on, it's obvious; pretending the math doesn't count reveals what you think about Apple.


----------



## shortwavelistener (Jul 21, 2018)

PiSkyHiFi said:


> For you maybe, Apt X HD is a transmission protocol, which means is better to use higher bandwidth hardware to achieve better transparency. 576 Kbps is a step forward than assuming everyone's gear can only handle no better than AAC. Come on , it's obvious, to pretend the math is out is revealing what you think about Apple.



True that. FYI, AAC is "transparent" at 256 kbps in the sense that it is psychoacoustically equivalent to CD quality. However, if an audiophile has the right equipment and knows what artifacts to listen for in AAC, they can work out which (lossy) codecs are transparent at which bitrates, and compare the codecs via an ABX test, etc.

Just my 2 cents...


----------



## LajostheHun

shortwavelistener said:


> Also in your earlier message that i quoted, you said that aptX splits the signal into four sub-bands. Does that mean that the signal encoding of aptX is the same as SBC? And does it also applies to LDAC as well?


aptX predates all of the lossy codecs currently in use; it was developed in the late '80s for broadcasting and has been sold several times since, currently to Qualcomm. I don't know if it works the exact same way as SBC; I doubt it, just from the patent POV, though aptX HD does. LDAC does not, as it is Sony's brainchild.


----------



## Brooko

shortwavelistener said:


> True that, and FYI AAC is "transparent" at 256 kbps due to the fact that it is "psychoacoustically" equivalent to CD quality. However if an audiophile has the right equipment as well as the basic knowledge about what to hear in AAC then they can differentiate which codec are transparent at certain bitrates as well as comparing the codecs via an ABX test, etc.
> 
> Just my 2 cents...



Can you please point me to ANY peer reviewed test which shows this to be true please.  A group of us tried this on Head-Fi years ago (one had a Stax set-up, and was able to successfully ABX MP3 320 from FLAC - most of us couldn't).  The same guy failed repeatedly on aac256.  As long as it was the same master recording, and double-blind volume matched ABX (with no transcoding errors) - I'm yet to find anyone who can reliably do this. I've searched and I can't find any definitive tests either.

I know in my own double blind volume matched tests - aac256 is transparent to me.  I still archive everything in FLAC (may as well have a lossless copy right) - but all my listening (portable) is done with aac256.


----------



## shortwavelistener (Jul 21, 2018)

Brooko said:


> Can you please point me to ANY peer reviewed test which shows this to be true please.  A group of us tried this on Head-Fi years ago (one had a Stax set-up, and was able to successfully ABX MP3 320 from FLAC - most of us couldn't).  The same guy failed repeatedly on aac256.  As long as it was the same master recording, and double-blind volume matched ABX (with no transcoding errors) - I'm yet to find anyone who can reliably do this. I've searched and I can't find any definitive tests either.
> 
> I know in my own double blind volume matched tests - aac256 is transparent to me.  I still archive everything in FLAC (may as well have a lossless copy right) - but all my listening (portable) is done with aac256.



What I really meant in the post you quoted is that I was referring to the differences in transparency between AAC and another lossy codec, in this case aptX, HD or not. I didn't even mention anything about lossless codecs.


----------



## Brooko

My apologies - I had read your reply incorrectly.  I had taken it that you were suggesting that aac 256 was not audibly transparent. Clearly you are saying the opposite is true?


----------



## shortwavelistener (Jul 21, 2018)

Brooko said:


> My apologies - I had read your reply incorrectly.  I had taken it that you were suggesting that aac 256 was not audibly transparent. Clearly you are saying the opposite to be true?



Yes. AAC 256 may be transparent to the average ear, but comparing it to other lossy BT codecs (e.g. aptX, LDAC) using high-end equipment is like night and day.

Also,



mrspeakers said:


> FLAC is lossless compression, so it sounds to most people exactly like the original.  256K is lossy and does not provide an exact replica.  The A CD is by definition the most common source for uncompressed music, and FLAC off CD is very good.  FLAC can also support hi def audio like 24/96.
> 
> 256 sounds good, but there is a clear difference between it and FLAC off of a good source.  I usually rip my CDs in ALAC, which is Apple's lossless, then compress to 256 (new feature in iTunes to downsample to 256 instead of just 128) to jam more tunes onto my iPod for the gym or for travel.
> 
> EDIT: 256 usually sounds flatter in soundstage and dynamics.  It doesn't offend (except sometimes mass strings), but it doesn't engage as well either...


----------



## Brooko

Sorry, can you clarify this further? Because you are not being very clear at this point.

1. Are you saying that aptX and LDAC are audibly different from AAC? You do realise that "night and day" means that in a volume-matched test everyone should be able to tell them apart; it's one of those terms a lot of people use with very little idea of its meaning.
2. As far as transparency goes, are aptX / LDAC better or worse? I'm assuming worse, because AAC is already audibly transparent, and you did say there was a night-and-day difference.
3. I saw the quote from Dan. Unless he's performed a double-blind, volume-matched ABX with subject files all transcoded from the same master recording, his observations are as anecdotal as anyone else's.


----------



## shortwavelistener (Jul 21, 2018)

Brooko said:


> Sorry can you clarify this further.  Because you are not being very clear at this point.
> 
> 
> Are you saying that aptX and LDAC are audibly different, and you do realise that night and day means that in a volume matched test - everyone should be able to tell them apart. Its one of the terms a lot of people use but they have very little idea of the meaning.
> ...



So far I have never done any comparison between LDAC and aptX HD on real equipment (because I don't have any LDAC-compatible devices, except for my Sony car stereo of course), so no, I don't know the differences between LDAC and aptX IRL.

However, AAC is transparent enough to average ears on entry-level audio equipment. But if someone claims to have "golden ears", knows what artifacts to listen for, and has very high-end audio equipment, then it's not impossible for them to differentiate AAC from aptX/LDAC. The results are likely to be 50/50, though, depending on which songs they listen to. The flaws are more likely to show up in classical music.


----------



## Brooko

So again (and I realise I am perhaps sounding pedantic) - what was the reference to night and day about?

Do you have any peer reviewed testing data showing differences between the codecs?


----------



## shortwavelistener (Jul 21, 2018)

Brooko said:


> So again (and I realise I am perhaps sounding pedantic) - what was the reference to night and day about?
> 
> Do you have any peer reviewed testing data showing differences between the codecs?



By "night and day" I was referring to how easily you can perceive the differences in soundstage between AAC and aptX. In terms of SQ, I know AAC is audibly transparent.

And no, I don't have any peer-reviewed data showing differences between aptX and AAC, but these are the closest I can get:

https://hydrogenaud.io/index.php/topic,78217.0.html

https://forums.naimaudio.com/topic/bluetooth-to-unitiqute2-apt-x-vs-aac

(Note that I'm not an expert, so please don't bash me.)


----------



## Brooko

Would never bash anyone - just interested in increasing my own learning 



shortwavelistener said:


> I referred the night and day as how easily you can perceive the differences in soundstage between AAC and aptX.



Have you done this in a double blind abx with volume matching (same tracks)?  I guess it would be pretty difficult to set-up.  I've never personally noticed a difference between the two (in relation to Bluetooth transmission), and definitely not in terms of sound stage (which in my experience relates more to the transducer and the recording than codecs).  Obvious exception would be if there is DSP involved.

How did you set up your tests?


----------



## shortwavelistener (Jul 21, 2018)

Brooko said:


> Would never bash anyone - just interested in increasing my own learning
> 
> 
> 
> ...



Well, I literally used my Dell Optiplex 760 (running Win Vista), in which I installed an ASUS Xonar Essence STX soundcard, connected to my Trond USB BT transmitter dongle via the headphone out. For listening I just used my Sony MDR-XB650BT cans.


----------



## Brooko

Thanks - how did you switch between aac and aptx?  How did you check volume matching?  Time lapse between switching?


----------



## shortwavelistener (Jul 21, 2018)

Brooko said:


> Thanks - how did you switch between aac and aptx?  How did you check volume matching?  Time lapse between switching?



I can't switch between AAC and aptX using my PC. For AAC I used my iPhone 5S connected wirelessly to my headphones, while for aptX it's PC > Bluetooth dongle > headphones.

For ease I also connected my cousin's MacBook Pro. (At times I also hooked the BT dongle up to the Mac so that I could have aptX HD.)

The MBP does allow switching between AAC and aptX, but my PC has the better-sounding soundcard/DAC.


----------



## Brooko

OK - so different sources.  What about volume matching and time delay? It's normal to hear slightly louder volume as clearer, more detailed, with wider stage etc.  When volume matched, the previous "day and night" differences disappear.


----------



## shortwavelistener

Brooko said:


> OK - so different sources.  What about volume matching and time delay? Its normal to hear slightly louder volume as clearer, more detailed, with wider stage etc.  When volume matched  - the previous "day and night" differences disappear.



For the PC I used Foobar2000 with the ABX Comparator plugin.

For the Mac I just ran Foobar2000 under Wine, along with the same ABX plugin.

In both cases I used the plugin's options to level-match for me automatically.
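As background on interpreting such runs: the foobar2000 ABX comparator scores a run with essentially a one-sided binomial test, i.e. how unlikely your score would be if you were just guessing. A minimal version of that calculation:

```python
from math import comb  # Python 3.8+

def abx_p_value(correct, trials):
    """Probability of scoring at least `correct` out of `trials`
    by coin-flipping (50% chance per trial). A small p-value is
    evidence you were actually hearing a difference."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12/16 is a commonly used passing score: p is about 3.8%, unlikely to be luck.
print(round(abx_p_value(12, 16), 4))  # 0.0384
```

This is why trial count matters: 5/5 correct still has a 1-in-32 guessing probability, which is why short runs prove little either way.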


----------



## Brooko

But how did you ABX when you are using two different Bluetooth sources - as far as I know that is impossible.  With ABX plugin you can compare 2 different containers (FLAC vs aac for example) very easily.  How did you manage aptX?


----------



## shortwavelistener (Jul 21, 2018)

Brooko said:


> But how did you ABX when you are using two different Bluetooth sources - as far as I know that is impossible.  With ABX plugin you can compare 2 different containers (FLAC vs aac for example) very easily.  How did you manage aptX?



For the last ABX test I used my MacBook Pro. And of course I use lossless files for ABXing, whether on my PC or Mac; the MBP natively supports the aptX codec for BT devices. I compared two files, one AAC and one ALAC/FLAC, in the ABX plugin. I'm just comparing how well those lossy BT codecs encode lossless/lossy files.


----------



## Brooko

I don't think you understand (or maybe I don't).  AptX is a codec, not a container.  The whole idea of an ABX is that you are not supposed to know what is being tested.  So in order to test aptX vs aac, you need to switch between two different sources.  The only way to do this is sighted, so you can't abx using Foobar.


----------



## shortwavelistener (Jul 21, 2018)

Brooko said:


> I don't think you understand (or maybe I don't).  AptX is a codec, not a container.  The whole idea of an ABX is that you are not supposed to know what is being tested.  So in order to test aptX vs aac, you need to switch between two different sources.  The only way to do this is sighted, so you can't abx using Foobar.



Of course AptX is a container. And I did actually ABX on different sources: sometimes I played music from my iPhone via BT AAC and compared it to my Mac playing music via aptX.

Oh, and it's evening, so I need to rest...


----------



## Brooko

So you didn't actually ABX (AAC vs aptX), you didn't use Foobar, and you can't have volume matched. That's all I wanted to establish.

And aptX is a codec - not a container. You can't have an aptX file. They don't exist.
https://en.wikipedia.org/wiki/AptX


----------



## shortwavelistener (Jul 21, 2018)

Brooko said:


> So you didn't actually ABX (aac vs AptX) and you didn't use Foobar.  And you can't have volume matched.  Thats all i wanted to establish.
> 
> And AptX is a codec - not a container.  You can't have an AptX file.  They don't exist.
> https://en.wikipedia.org/wiki/AptX



Of course I know aptX is a codec. I've known that ever since it first came out in 2012 as a new standard for A2DP. (Wasn't aptX invented in the '80s? I'll have to read up on that.)

BTW, thanks for the information though.

Of all audiophiles here on Head-Fi, you deserve some respect bro!


----------



## bigshot (Jul 21, 2018)

PiSkyHiFi said:


> For you maybe, Apt X HD is a transmission protocol, which means is better to use higher bandwidth hardware to achieve better transparency.



Can you explain to me what "better transparency" sounds like to human ears? Transparency is transparent, isn't it? If there was room for improvement it wouldn't be transparent.

It seems to me that if certain equipment is presenting a transparent sound source as not transparent, there is probably something wrong with the equipment, not the transparent sound source.


----------



## PiSkyHiFi (Jul 21, 2018)

bigshot said:


> Can you explain to me what "better transparency" sounds like to human ears? Transparency is transparent, isn't it? If there was room for improvement it wouldn't be transparent.
> 
> It seems to me that if certain equipment is presenting a transparent sound source as not transparent, there is probably something wrong with the equipment, not the transparent sound source.



Let me guess, you read somewhere that AAC is audibly transparent to human ears, so you're going to stick with that and make everything else follow from it... I'm guessing that's what's going on.

What if I just said to you straight out that any lossy codec is, by definition, not transparent? Apple might tell you it's audibly transparent, so you can feel safe that no one else is getting better sound than you even if they have something different, but that's just them drawing the line for you.

I draw the line at FLAC over Bluetooth... lossless transmission means digital transparency, it's then up to the analog stages to make it sound accurate from there.

Until we have that, we can argue over bit-for-bit accuracy - which is digital transparency, which neither AAC nor aptX HD provides.

aptX HD is useful because its bitrate is now getting up to a reasonable percentage of Red Book CD quality, so it doesn't have to be as clever a codec as AAC to be more transparent in the digital realm; it simply stipulates a higher bitrate and bit depth.

If AAC could be used at 576 kbps rather than aptX HD, I'd prefer that, but it's not low latency either, so I'd take the Bluetooth codec that is audibly transparent, minimally digitally transparent, and low latency.

LDAC is here too; that's another choice that's better than AAC for Bluetooth, on bit depths and rates alone.

When lossless can be easily transported over Bluetooth, this discussion about transparency will be moot.


----------



## bigshot (Jul 21, 2018)

PiSkyHiFi said:


> Let me guess, you read somewhere that AAC is audibly transparent to human ears, so you're going to stick with that and make everything else follow from it... I'm guessing that's what's going on.



You're in the Sound Science forum. We test things here! This issue is very important to me. I have a very large collection of music- tens of thousands of CDs, and even more records. I built a media server in my theater to serve up both movies and music. The server now contains nearly 100 TB of storage, and over a year and a half of music.

Before I embarked on digitizing my collection, I decided to test three major codecs... Fraunhofer MP3, LAME MP3 and AAC. I researched artifacting. There used to be a very good page on the web with examples dedicated to cataloguing the various ways compression artifacts could occur and what the artifacts sounded like. I pulled dozens of titles from my collection from various genres, from classical to century old acoustic recordings to jazz, acoustic music, electronic music, examples from the 20s, 30s, 40s, 50s... all the way up to the most current recordings in my collection. I included digitized Sheffield Lab direct to disc LPs and downsampled high data rate audio. And I digitized each title at 128, 192, 256 and 320 CBR using each one of the three codecs.

I went over the samples with a fine-tooth comb, comparing them head to head with lossless, searching for artifacts. If I found one on any music sample, I eliminated that codec and data rate from further consideration. When I was done, I found that Fraunhofer 320 was *almost* transparent, but I had one CD that could make it artifact slightly. LAME MP3 was perfectly transparent at 320. And AAC was transparent at 256. I chose AAC 256 VBR as my standard setting for my music server. It took nearly two weeks full time, but it was worth it to me because I didn't want to have to ever go back and re-rip or re-encode any of my collection. I now have a massive digital library and I listen to it all the time. I have never encountered any artifact on any of the hundreds of thousands of files.

In addition to my own collection, I'm a digital archivist in charge of a large media library that belongs to a non-profit educational organization. I know a little bit about this stuff. If I was maintaining a master file- the original recording master- I would keep it in whatever format that it was created in. That way if the file needed to be remixed or edited, it would be completely flexible. But for reference files- or more specifically, files of music I listen to in my living room- compressed audio is perfectly fine as long as it's audibly transparent.

The definition of "audibly transparent" is that a copy sounds *exactly* like the original. There's no such thing as "better sounding" than transparent. There's only larger file sizes. Bigger file sizes may have a purpose if you are working in a recording studio, but for the purposes of listening to music in the home, compressed audio that meets the threshold of transparency is as good as you can get.



PiSkyHiFi said:


> If AAC could be used at 576 KBps rather than AptX HD, I'd prefer that,



You can't judge sound quality by the data rate. Every codec has a different point of transparency. Some are more efficient than others.


----------



## PiSkyHiFi

bigshot said:


> You're in the Sound Science forum. We test things here! This issue is very important to me. I have a very large collection of music- tens of thousands of CDs, and even more records. I built a media server in my theater to serve up both movies and music. The server now contains nearly 100 TB of storage, and over a year and a half of music.
> 
> Before I embarked on digitizing my collection, I decided to test three major codecs... Fraunhofer MP3, LAME MP3 and AAC. I researched artifacting. There used to be a very good page on the web with examples dedicated to cataloguing the various ways compression artifacts could occur and what the artifacts sounded like. I pulled dozens of titles from my collection from various genres, from classical to century old acoustic recordings to jazz, acoustic music, electronic music, examples from the 20s, 30s, 40s, 50s... all the way up to the most current recordings in my collection. I included digitized Sheffield Lab direct to disc LPs and downsampled high data rate audio. And I digitized each title at 128, 192, 256 and 320 CBR using each one of the three codecs.
> 
> ...



I'd need to do an in-depth statistical analysis, or I could use some simple math tricks to guide me through this: basically, the encoder would have to start being really stupid to fail once you are using bandwidth approaching a decent percentage of the uncompressed requirement.

16-bit, 44.1 kHz stereo is a 1.4 Mbps stream - that's the uncompressed bandwidth requirement.

Compressed losslessly, you can achieve around 800 to 900 kbps - assuming Red Book and fast hardware.

So, compressing with 576 kbps lossy is now approaching this point.

So now we're comparing the quality of lossy codecs at a point where their respective psychoacoustic comparisons are becoming meaningless.
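Those percentages, made concrete. The ~60% lossless ratio below is an assumption for illustration; FLAC on Red Book material typically lands somewhere around 50-70% depending on the music.

```python
# Rough bitrate comparison behind the argument above.
cd_kbps = 44_100 * 16 * 2 / 1000       # Red Book stereo: 1411.2 kbps
lossless_kbps = cd_kbps * 0.60         # ~847 kbps, assumed typical FLAC ratio
aptx_hd_kbps, aac_kbps = 576, 256

print(f"aptX HD / lossless: {aptx_hd_kbps / lossless_kbps:.0%}")   # 68%
print(f"AAC 256 / lossless: {aac_kbps / lossless_kbps:.0%}")       # 30%
```

In other words, 576 kbps is roughly two-thirds of a typical lossless stream, which is the sense in which the lossy encoder has less work to do.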

AptX and AptX HD are transmission protocols and they will be redundant soon enough.

aptX HD is mathematically better placed than AAC when comparing 576 kbps to 256 kbps; there just isn't any need to debate that.

I once stored my entire collection as 192 kbps MP3, until I could afford the equipment that revealed my mistake. I used to encode to AAC in M4A files for portable use, but now I can store all my FLACs on my phone... lossy is nearly dead for me.


----------



## bigshot (Jul 21, 2018)

I'm talking about sound quality, not math. Bigger numbers don't automatically mean better sound. The codec has an impact on the quality of the encoding as well.

Would you like to hear one of my sample files for yourself? I can send you either a FLAC or ALAC. It has ten different samples, all of the same music. The samples consist of Fraunhofer, LAME and AAC at 192, 256 and 320. One of the samples is lossless. You tell me which one is which. I'll tell you how you did. Then we'll talk about what codec and bitrate is necessary to achieve transparency. (Hint: Redbook is overkill)

By the way, 192 Fraunhofer MP3 is not transparent. If you asked me before you started encoding your library, I would have told you to use LAME 320 or AAC 256.


----------



## PiSkyHiFi

bigshot said:


> I'm talking about sound quality, not math. Bigger numbers don't automatically mean better sound. The codec has an impact on the quality of the encoding as well.
> 
> Would you like to hear one of my sample files for yourself? I can send you either a FLAC or ALAC. It has ten different samples, all the same music. The samples consist of Frau, LAME and AAC and 192, 256 and 320. One of the samples is lossless. You tell me which one is which. I'll tell you how you did. Then we'll talk about what codec and bitrate is necessary to achieve transparency. (Hint: Redbook is overkill)
> 
> By the way, 192 Fraunhofer MP3 is not transparent. If you asked me before you started encoding your library, I would have told you to use LAME 320 or AAC 256.



No: until we're comparing analog stages, we're talking math.

No, I don't want to hear one of your samples. I'm sure it took a lot of work to make them - I've been there. It's approaching time to forget about the drawbacks of lossy and stop bothering to compare them at all.

Equipment quality is the main reason I don't like lossy; it doesn't give you a chance to reconsider your options when you may simply need better equipment to tell the difference.

If you believe Redbook is overkill, then you need to try better sound equipment, because it isn't, and the space saving is no longer needed.

We all have our own thresholds - Redbook is enough for me, but no less.


----------



## bigshot (Jul 21, 2018)

There are no drawbacks to lossy if there is sufficient bandwidth to achieve transparency. I think the problem is that you don't understand what transparent means. It means perfect for human ears. Indistinguishable from the source. The source can be redbook or 24/192, it doesn't matter. If it sounds the same, for the purposes of listening to music on your system, it *is* the same. You can count beans and pick the one with the most beans, but it won't be any improvement in audible sound quality. And a codec works just as well for a cheap system as it does a high end one. The limiting factor is your ears. Perfect for human ears is as good as you are ever going to hear in your whole life.

The reason you don't know about transparent compressed audio is that you refuse to listen to it. That is perfectly fine. A nice full hard drive might be comforting when you go to sleep at night. But it doesn't mean that you have better sound quality, just bigger files. You just don't have any experience in this area.


----------



## PiSkyHiFi

The reason I don't know about lossy....

Sorry mate, but you can stick your condescension.

It's just that we've now hit the point where lossy is simply no longer needed.

If you've been through evaluating lossy codecs like I have, it should be good news that they can be dispensed with completely.

Bluetooth is the last bastion of lossy.


----------



## shortwavelistener

PiSkyHiFi said:


> So, now we're comparing quality of lossy codecs that are approaching a point where their respective psycho-acoustic comparisons are becoming meaningless.



Well, except for aptX HD and LDAC, which are strictly ADPCM and therefore do not utilize any form of psychoacoustic model.


----------



## PiSkyHiFi

shortwavelistener said:


> Well except for aptX HD and LDAC, which is strictly ADPCM and therefore it does not utilize any form of psycho-acoustic models.



My point isn't about their respective psychoacoustic models or lack thereof.

My point is that the bit rate is high enough that you can pay less attention to the intricacies of each codec, since the sheer data rate reduces their influence.


----------



## shortwavelistener

PiSkyHiFi said:


> My point isn't about their respective psychoacoustic models or lack thereof.
> 
> My point is that the bit rate is high enough to pay less attention to the intricacies of each codec, since the overwhelming data rate reduces it's influence.



Oops, sorry. Again I misinterpreted your statement regarding bitrates of BT codecs.


----------



## bigshot (Jul 22, 2018)

You can feel free to dismiss compressed audio, but the fact remains that the lion's share of the music business consists of compressed audio... iTunes store, Amazon, streaming services like Spotify, Pandora and Tidal... The vast majority of music listeners are listening to music on compressed formats. I think it's kind of important to know how compression works and what it's thresholds are. My library in lossless would span many TBs. It would require a lot more work to backup and manage. And all that extra trouble wouldn't result in one iota of better sound quality. For a small library, it might be different. But most people with small libraries have just turned to streaming so they can have a wider selection of music.

And repeating again... datarate is not a good measure of sound quality. There are voice codecs that you can boost up to the same data rate as a CD and they still won't achieve transparency. And AAC is able to reach transparency much lower than PCM. The codec is what matters, not the numbers.
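For scale, the storage arithmetic behind library-size claims like the ones in this thread is easy to check. A rough sketch (FLAC typically lands somewhere around 50-70% of the raw PCM figure, depending on the material):

```python
# Back-of-envelope file sizes per minute of stereo audio.
# kbps here means kilobits per second, as used throughout this thread.

def mb_per_minute(kbps):
    return kbps * 1000 * 60 / 8 / 1e6  # bits/s -> bytes/min -> megabytes

cd_pcm_kbps = 44100 * 16 * 2 / 1000      # Red Book PCM: 1411.2 kbps
print(round(mb_per_minute(256), 1))      # AAC 256 -> 1.9 MB/min
print(round(mb_per_minute(cd_pcm_kbps), 1))  # CD PCM -> 10.6 MB/min
```

So raw PCM is roughly 5.5x the size of AAC 256 per minute, which is where the "5 to 10 times larger" and multi-TB library figures mentioned later in the thread come from.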


----------



## shortwavelistener (Jul 23, 2018)

bigshot said:


> You can feel free to dismiss compressed audio, but the fact remains that the lion's share of the music business consists of compressed audio... iTunes store, Amazon, streaming services like Spotify, Pandora and Tidal... The vast majority of music listeners are listening to music on compressed formats. I think it's kind of important to know how compression works and what its thresholds are. My library in lossless would span many TBs. It would require a lot more work to back up and manage. And all that extra trouble wouldn't result in one iota of better sound quality. For a small library, it might be different. But most people with small libraries have just turned to streaming so they can have a wider selection of music.
> 
> And repeating again... datarate is not a good measure of sound quality. There are voice codecs that you can boost up to the same data rate as a CD and they still won't achieve transparency. And AAC is able to reach transparency much lower than PCM. The codec is what matters, not the numbers.



Tidal does offer a lossless option if you are subscribing to their premium Tidal HiFi service. BTW I'm a Tidal HiFi subscriber myself. And you forgot *Qobuz*, which *also* offers hi-res lossless music streaming.


----------



## PiSkyHiFi

bigshot said:


> There are no drawbacks to lossy if there is sufficient bandwidth to achieve transparency. I think the problem is that you don't understand what transparent means. It means perfect for human ears. Indistinguishable from the source. The source can be redbook or 24/192, it doesn't matter. If it sounds the same, for the purposes of listening to music on your system, it *is* the same. You can count beans and pick the one with the most beans, but it won't be any improvement in audible sound quality. And a codec works just as well for a cheap system as it does a high end one. The limiting factor is your ears. Perfect for human ears is as good as you are ever going to hear in your whole life.
> 
> The reason you don't know about transparent compressed audio is because you refuse to listen to it. That is perfectly fine. A nice full hard drive might be comforting when you go to sleep at night. But it doesn't mean that you have better sound quality. Just bigger files. You just don't have any experience in this area.



I don't use hard drives much these days; they are relegated to archive duty.
I was a huge advocate for AAC when it was useful: fairly transparent, but not fully - it's lossy. You've encoded a whole library, same as myself; you would have had it done automatically, of course. Maybe you could stop making assumptions about how much other people know.

I still have MP3's in my collection and I use streaming services that use compressed music like Google Play Music.

It's just that I have nearly my entire collection of FLACs on a micro SD card now... storage is cheap. Doesn't make me rest any differently knowing that.

Plus there are those automatically converted 1080p HEVC files.... I don't think we'll ever dispense with video compression.


----------



## bigshot (Jul 23, 2018)

AAC is fully transparent at AAC 256 VBR. The only way to know that is to do a test. If you refuse to do a test, you don't know for yourself. I know. I've done the test. If I tried to put my entire library in lossless on SD cards, I would need about 70 SD cards. Lossless is fine for smaller libraries. But the more music you're storing, the more file size matters. And when you're listening to music, all that matters is what you can hear. High data rate audio and lossless don't affect the way music sounds. I don't need them.


----------



## PiSkyHiFi

bigshot said:


> AAC is fully transparent at AAC 256 VBR. The only way to know that is to do a test. If you refuse to do a test, you don't know for yourself. I know. I've done the test. If I tried to put my entire library in lossless on SD cards, I would need about 70 SD cards. Lossless is fine for smaller libraries. But the more music you're storing, the more file size matters. And when you're listening to music, all that matters is what you can hear. High data rate audio and lossless don't affect the way music sounds. I don't need them.



I see a pattern here... people who love AAC are in a situation where they need to compromise in order to listen functionally at all.... storage space, streaming services - these days it's too easy to accept lossy as good enough.

I've been there.... out of all the lossy codecs short of Opus, AAC is fantastic... no doubt, but this thing about it being transparent at 256 kbps.... that's just loaded - it's true for most people on average equipment.

On most equipment, especially portable equipment, ABX testing of FLAC against AAC will mostly result in no better than random guessing. That's because the differences really are exceptionally small, and to hear them you would need equipment that can reveal them.

The differences between AAC and PCM are going to be present in phase analysis; compressed audio doesn't focus on recovering phase details like instrument positioning and room timbre. These are very subtle details, and they are just examples of what can be heard with the right equipment - human ears can be trained to do rudimentary sonar; they are actually good enough for that.

This forum is so full of inaccuracies - people generalising about complicated codec algorithms as if there would be a clear winner if we all just paid attention. There isn't; the math is difficult and relevant, and there will be a final word based on statistics when someone finally gets around to it.

So... to all those people who are probably Apple shills claiming that 256 kbps AAC is simply transparent and so ends the argument: are you prepared to admit that perhaps there exists a sound setup that would reveal the differences to the average listener? Do you simply think that no equipment, no matter how good, could possibly be that revealing?


----------



## bigshot (Jul 23, 2018)

If something sounds the same, to me it *is* the same. I don't "love" any format. I am focused on function. What can do the job in the most efficient way? If you think that I am shilling for any format or company, I'm not the one that is biased in my opinions, you are. I'm simply telling you that if you took a blind listening test to compare AAC 256 VBR to lossless, you wouldn't be able to tell the difference. For the purposes of listening to music in the home there are many audibly transparent formats to choose from. Some are just more efficient than others. AAC is among the most efficient codecs that are able to achieve complete transparency. You can choose to use a less efficient one than another simply because the bigger numbers make you feel more comfortable, but it doesn't make a lick of difference to how it sounds. Both lossless and AAC 256 VBR sound the same.

Here in sound science, we are allowed to ask for proof. I offered you a test to prove what you say. You declined. That pretty much answers it. You aren't interested in knowing one way or the other. You just want to do whatever you've decided. That is perfectly fine for you. I support your right to pack hard drives and SD cards full of inaudible sound. All I am saying is that transparent is transparent. There is no "better transparency".

Also, the only time that AAC might have phase problems is if you choose joint stereo at too low a data rate. No one uses joint stereo any more, because just about everyone is using the regular stereo setting at a data rate that supports complete transparency.


----------



## PiSkyHiFi

bigshot said:


> If something sounds the same, to me it *is* the same. I don't "love" any format. I am focused on function. What can do the job in the most efficient way? If you think that I am shilling for any format or company, I'm not the one that is biased in my opinions, you are. I'm simply telling you that if you took a blind listening test to compare AAC 256 VBR to lossless, you wouldn't be able to tell the difference. For the purposes of listening to music in the home there are many audibly transparent formats to choose from. Some are just more efficient than others. AAC is among the most efficient codecs that are able to achieve complete transparency. You can choose to use a less efficient one than another simply because the bigger numbers make you feel more comfortable, but it doesn't make a lick of difference to how it sounds. Both lossless and AAC 256 VBR sound the same.



So, your answer is no then, you don't believe that with the right equipment, you might be able to hear the differences.

Also, just because shills exist gives you no basis to assume I am one - shilling for whom, and why?


----------



## bigshot (Jul 23, 2018)

PiSkyHiFi said:


> So, your answer is no then, you don't believe that with the right equipment, you might be able to hear the differences.



I have the right equipment and that's what I've used for my testing. All of my equipment is audibly transparent too. The signal coming in is what comes out. Before you ask... yes, I test for transparency with every piece of equipment I buy.

I'm not the one accusing you of being a shill, you said I was one. I think you have a definite bias though.


----------



## PiSkyHiFi

bigshot said:


> I have the right equipment and that's what I've used for my testing. All of my equipment is audibly transparent too. The signal coming in is what comes out. Before you ask... yes, I test for transparency with every piece of equipment I buy.
> 
> I'm not the one accusing you of being a shill, you said I was one. I think you have a definite bias though.



Well, I'm accusing people in general, in this forum and elsewhere, of not admitting their own bias when they claim that some lossy codec is audibly transparent.... it is biased by definition, because it assumes that the threshold for transparency is non-zero.

You test for transparency... that's great. Are you aware that it is impossible to buy a system anywhere that is actually audibly transparent?

AAC is a fantastic codec ... I said that, maybe I was being biased or just honest.

So, let me get this right: you put a lot of work into making a library, you spent a fair amount of time deciding which codec to use, and you went with a lossy one.

That's fine... I have been there fully myself, but you are now convinced that no one else could possibly ever experience something you haven't, simply because they haven't gone through the process that you have?

What equipment did you test on?


----------



## bigshot (Jul 23, 2018)

Everyone has biases. The way you eliminate that from consideration is to do controlled listening tests. I’ve done that with three compressed audio codecs and I’ve discovered that they can achieve audible transparency. I don’t care what brand of audio file I use, I just want transparency, convenience and efficiency. The key word there is transparency. I wouldn’t have selected the codec I chose if it wasn’t audibly identical to the original CD.

I have a 5.1 system in my theater/listening room and for headphones I have Oppo PM-1s and an Oppo HA-1. I have lots of other equipment too. I’ve tested all of it to make sure the electronics are audibly transparent.


----------



## PiSkyHiFi

bigshot said:


> Everyone has biases. The way you eliminate that from consideration is to do controlled listening tests. I’ve done that with three compressed audio codecs and I’ve discovered that they can achieve audible transparency. I don’t care what brand of audio file I use, I just want transparency, convenience and efficiency. The key word there is transparency. I wouldn’t have selected the codec I chose if it wasn’t audibly identical to the original CD.
> 
> I have a 5.1 system in my theater/listening room and for headphones I have Oppo PM-1s and an Oppo HA-1. I have lots of other equipment too. I’ve tested all of it to make sure the electronics are audibly transparent.



You and I will never agree; we can't even agree on a definition of audibly transparent.

Good luck with your library.


----------



## bigshot (Jul 23, 2018)

Quoted from the audio myths thread pinned to the top of this forum...

49. Trust Me I'm a Scientist - Audio Poll: Neil Young and High-Definition Sound, May 2012

A blind test of a high-def WAV file version of Neil Young's self-titled debut album against some standard AAC files.

"The majority of you are audio engineers, professional musicians, and ambitious hobbyists, and I figured that if anyone would be able to tell these file types apart, it would be you guys.
So, how did you do?
Well… please accept my warm congratulations to the 49% of you who guessed right.
That’s right: even among _our_ readers, the results came out no better than a coin flip. And we didn’t even need a huge sample size to get a result that’s consistent with the tremendous mountains of research already done in this field."

Another blind test that found pretty much the same thing....
https://cdvsmp3.wordpress.com/cd-vs-itunes-plus-blind-test-results/

"Notice that, despite deviations, both distributions have similar bell shapes. Furthermore, all reliable p-values are in favor of the null hypothesis stated, some of them in high agreement. So, based on the data obtained, the most reasonable conclusion is that we can’t hear the difference between CD audio and iTunes plus. And this is true in all the cases considered—being young, with our sense of hearing at its peak, having musical training or using excellent audio gear doesn’t seem to help."


----------



## PiSkyHiFi

bigshot said:


> Quoted from the audio myths thread pinned to the top of this forum...
> 
> 49. Trust Me I'm a Scientist - Audio Poll: Neil Young and High-Definition Sound, May 2012
> 
> ...



I pretty much agree with most of what you're saying since, as I mentioned earlier, I have been through all of this before, years ago, when it was important because space was expensive. I first had MP3, then more recently I used libfdk AAC for my portable player - at 320 kbps.

For the higher-res files, I actually encoded to 640 kbps, just to preserve stuff I couldn't hear every time - probably not necessary on the portable player, because it doesn't have the analog quality to reveal possible differences.

You've made a commitment to AAC at 256 kbps; that's fine, but it was a compromise despite your efforts to convince yourself it wasn't - it allowed you to save money on storage, and now you can say things like "audibly transparent". But audibly transparent means it sounds the same as no electronics at all, so I think you shouldn't be using that term unless you can define it better.

Those times are over though.... there is no need to justify lossy any more, because it will disappear eventually.

It's simple: I have a FLAC and I have an AAC m4a file - which one do I use if I can now store either one on my phone without issue?

It's a no-brainer. My phone storage (a 256 GB micro SD) is now about 80% FLAC and about 5% MP3, for the tracks I don't have FLACs for.


----------



## bigshot

I've compared AAC 256 VBR to lossless on high quality equipment using careful controls. I posted a couple of published tests that showed that there was no audible difference to trained ears or on high end equipment. That is evidence to support my statement that AAC 256 VBR sounds every bit as good as FLAC. Why have files that are 5 to 10 times larger if they don't sound better?

You say that lossy is going to disappear. The opposite is true. Compressed audio and video dominates the market and there's no indication that it won't continue to do so. I haven't seen you present any evidence that AAC 256 VBR isn't audibly identical to lossless, and I haven't seen you cite any evidence to support your argument that compressed formats are going away. Do you have anything to back that up? I'm actually curious to see what you base those ideas on.


----------



## PiSkyHiFi

bigshot said:


> I've compared AAC 256 VBR to lossless on high quality equipment using careful controls. I posted a couple of published tests that showed that there was no audible difference to trained ears or on high end equipment. That is evidence to support my statement that AAC 256 VBR sounds every bit as good as FLAC. Why have files that are 5 to 10 times larger if they don't sound better?
> 
> You say that lossy is going to disappear. The opposite is true. Compressed audio and video dominates the market and there's no indication that it won't continue to do so. I haven't seen you present any evidence that AAC 256 VBR isn't audibly identical to lossless, and I haven't seen you cite any evidence to support your argument that compressed formats are going away. Do you have anything to back that up? I'm actually curious to see what you base those ideas on.



How about the dwindling 5% of my collection that is still stored lossy?

I'm not that atypical, not even rich - I'm here because people were picking on aptX HD and I've already been through this mill. It's not a better codec than AAC, but given it's using a 576 kbps data rate, and the lossless equivalent is only about 150% more than this for Redbook, I'm pretty much done with anything less.

So, I happily use Google Play Music at 256 kbps AAC - re-encoded to aptX HD sometimes when I'm out and about.

But when I want to stretch my dynamic range and hear something I haven't heard before - like I do every time I pick up my home audio equipment - I have absolutely no need for compressed audio at all. I do listen for things I haven't heard before, and sometimes I get lucky, but they need to be there for me to hear them.

Right now. 2018.


----------



## bigshot

If you're streaming AAC 256 VBR from Google Play, the sound quality is identical to when you play the same songs in lossless at home. If there is a difference, it's because your portable equipment isn't as good as your home equipment, not because of the file itself. I'm not familiar with AptX HD, but I would imagine that would sound the same too.

By the way, AAC 256 VBR has the same dynamic range as a CD. High bitrate AAC has the same frequency response too. I don't know if you're aware of it, but VBR allows you to redistribute bandwidth and, if needed, actually *exceed* the data rate you have it set for. If you encode AAC 320 VBR, the data rate can go as high as 460 if the music requires it. (I don't know what kind of music would though. I find that 256 VBR is plenty for any kind of music.) That might make you feel good if you want to judge sound quality by the numbers.


----------



## PiSkyHiFi

bigshot said:


> If you're streaming AAC 256 VBR from Google Play, the sound quality is identical to when you play the same songs in lossless at home. If there is a difference, it's because your portable equipment isn't as good as your home equipment, not because of the file itself. I'm not familiar with AptX HD, but I would imagine that would sound the same too.
> 
> By the way, AAC 256 VBR has the same dynamic range as a CD. High bitrate AAC has the same frequency response too. I don't know if you're aware of it, but VBR allows you to redistribute bandwidth and if needed actually *exceed* the data rate you have it set for. if you encode AAC 320 VBR, the data rate can go as high as 460, if the music requires it. (I don't know what kind of music would though. I find that 256 VBR is plenty for any kind of music.) That might make you feel good if you want to judge sound quality by the numbers.



The first line.... it's just wrong, OK?

If I'm streaming AAC 256 from Google Play, the sound quality is identical to lossless.

No.

Just plain no.... and before you go any further, rambling on with justifications for compressed perfection ("audibly transparent" in your terms), you need to accept that the information is different. You should hopefully understand by now that I'm not even vaguely interested in risking wondering whether the codec is to blame for any particular sound issue I may be having.

You should be getting some hint here that I have also been through the whole VBR thing... it's a useful encoding feature, but this is all debates from 10 years ago.

AAC VBR 256 does not have the same dynamic range as CD. The basic limit of CD is 96 dB, and with good dithering, possibly up to 120 dB. Any lossy codec using that as a source will be reduced in quality - below 96 dB - because it's no longer bit-for-bit once you pass through AAC, so dithering above 96 dB is lost.
Measurable by my ear at a moments notice - probably not.
Measurable by my ear under other conditions - probably not.
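The 96 dB and 120 dB figures follow from bit depth: each bit of linear PCM contributes about 6.02 dB of dynamic range (20·log10 2), and noise-shaped dither can push the audible noise floor below that theoretical limit. A quick sketch of the raw numbers:

```python
# Theoretical dynamic range of linear PCM as a function of bit depth.
from math import log10

def dynamic_range_db(bits):
    """20*log10(2**bits): ~6.02 dB per bit of quantization."""
    return 20 * log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 16-bit CD -> 96.3 dB
print(round(dynamic_range_db(24), 1))  # 24-bit    -> 144.5 dB
```

Note the formula gives the quantization-noise limit only; whether any of that range survives a lossy encode, or is audible on real equipment, is exactly what this thread is arguing about.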

Am I prepared to accept that my ears' dynamic range is in flux? It's never the same way twice, and I can even focus on different parts of the sound to varying degrees on subsequent listens. I fully intend on burning sound that is beyond my hearing into my brain, simply because then I know my ears will have good days and bad, and my sound equipment will be there to help them hear whatever they can on that listen.

My sound equipment could always do with improvement; I'm very glad to remove lossy from the equation when I'm testing.

I don't judge the sound quality by the numbers, because they should be pushed well out of my range so I don't have to.

I won't be convinced that I should return to lossy...... because I have FLACs and space on my phone.... They are better than what can be streamed at present, even if I can't always hear that they are better - listening and improving are daily events for me and my equipment.

Just using AAC is going backwards for me - like doing a whole bunch of work that might make no difference to my music or possibly make it a little worse.

You should try addressing things I've said if you disagree - I can clearly see your point already.


----------



## bigshot (Jul 23, 2018)

AAC 256 VBR is audibly identical to lossless. I posted evidence of that in this post... https://www.head-fi.org/threads/iph...tx-hd-real-world.861978/page-13#post-14377431

You say it isn't audibly identical. How about posting some evidence to support that?

You're in sound science forum right now. We don't just say what we believe. We are expected to back up what we say. If you would like to back up your claim that you can easily tell the difference between lossy and lossless, I have a FLAC file I can send you with three different lossy codecs at three different data rates, along with a lossless file. I will happily send it to you to listen to. All you have to do is rank the samples from best to worst. Then you'll know whether what you say is correct.


----------



## PiSkyHiFi

bigshot said:


> AAC 256 VBR is audibly identical to lossless. I posted evidence of that in this post... https://www.head-fi.org/threads/iph...tx-hd-real-world.861978/page-13#post-14377431
> 
> You say it isn't audibly identical. How about posting some evidence to support that?
> 
> You're in sound science forum right now. We don't just say what we believe. We are expected to back up what we say. If you would like to back up your claim that you can easily tell the difference between lossy and lossless, I have a FLAC file I can send you with three different lossy codecs at three different data rates, along with a lossless file. I will happily send it to you to listen to. All you have to do is rank the samples from best to worst. Then you'll know whether what you say is correct.



Rather than repeat yourself silly, try reading some of the points I've made already about how inaccurate you are being. Let's take it one point at a time - the dynamic range of AAC VBR, for example. You say it's the same as a CD, but that is impossible by the very nature of it being lossy from a CD source.... I don't need to say more on that; it's proven, done. I went further to explain how, with good mastering, one can do better than 96 dB for CD, but if you run it through a lossy codec you get no benefit from dithering at all - this is just basic logic.

You are very focused on audibly transparent - have you noticed at all that I haven't disagreed with that? Have you noticed that I am trying to get you to see beyond your AAC bias for just a minute? To see that "barely sufficient" is not the latest marketing phrase?

Let's look at 192 kHz for a second... a complete waste of space for me, since both my ears and any possible equipment simply cannot retrieve that much detail. It's overkill, but I have nothing against it.

Now let's flick back to AAC.... barely sufficient for most people is its tagline.

Redbook CD - it just is, and personally I'm OK with everything being at this level, but many would like to see more detail; you're keen on slightly less information.

Since you probably ignored most of what I said, let's look at AAC seriously...

It's a fantastic codec, mostly transparent at 256 kbps, stretching up to 18 or 19 kHz in terms of detail - much better than MP3. Not the most transparent though; I think Opus is more advanced, although it is different.

You don't mean audibly transparent; you mean indistinguishable on the same equipment from the PCM source. Audibly transparent would be the holy grail of sound equipment: being present as if the sound system wasn't even there, as if the sound source itself was there instead - that's transparent.

Most people listening to their AAC files are doing so on equipment that isn't even close to transparent.

What if there was a minor detail in a music file so delicate that it took 100 listens before you even noticed it? That happens to me all the time, because I'm human. I need the equipment to consistently perform better than my ears so I can be pleasantly surprised - and I am consistently surprised by minor details every day; I can never hear them all on repeated listening either.

Given that's how my ears work, what are the odds that I can hear a lot more than an ABX test would reveal? They are quite good, given I'm going to need to listen to the thing at least 100 times before I can really get into that recording at all.

Why would I want to listen to a piece 100 times only to discover nothing new because the file was only barely sufficient in terms of sound information I can recognise?

You would claim that I am unlikely to ever hear anything more from a recording with more information than AAC 256, and yet I love being surprised by detail I hadn't heard before. It happens. Would that happen the same way with PCM as with AAC? Hard to say, but at least I'm prepared to say it's possible.

It's a very minor detail to argue, but I am the one remaining open-minded; you seem convinced that anything better than AAC 256 is a waste of space.

One day in the distant future, Apple will just drop AAC in favour of lossless, simply because hardware has no issues with it any longer.

That is exactly where the future of audio compression is going.

You must know that you've drawn the line at barely sufficient - and for what... more space?


----------



## bigshot (Jul 23, 2018)

You can play tag with me as much as you want, but I've offered evidence that AAC 256 VBR is audibly transparent. I'm just waiting for you to step up to bat and offer some sort of evidence to back up your claims that it isn't. I've done controlled listening tests comparing AAC and lossless. I've read numerous other controlled tests conducted by other people, all coming to the same conclusion I did. Why should I just take your word for it if you haven't made any effort to find out the truth for yourself? You can feel free to go on your gut instincts if you'd like. That's your prerogative. But that doesn't mean anything to me. For an opinion to have validity for other people it has to be verifiable and repeatable. This is sound science.

If you really want me to, I can go through your post and answer your misconceptions point by point and provide more links. But you didn't follow the links I've already given you that said that AAC 256 VBR is audibly transparent... The compressed audio sounds *exactly* like the uncompressed file on high end or low end equipment... to trained ears, musician's ears, to audio enthusiast's ears and to your mom's ears... under any normal circumstance in which you might be listening to music. It is audibly interchangeable with lossless and it is a fraction of the file size. It's efficient. It streams easily. There's no sacrifice at all.

Redbook is audibly identical to SACD and 24/96 too. I would be happy to provide evidence of that too. I've done tests myself on a pro-tools workstation and have found other people's tests that validate that opinion too. I'd be happy to share that info with you as well.

Sometimes "common knowledge" in audiophile circles doesn't hold up under close examination. A lot of what you read about consumer audio is thinly veiled sales pitch. Welcome to Sound Science. Sorry about your preconceptions.


----------



## PiSkyHiFi

bigshot said:


> You can play tag with me as much as you want, but I've offered evidence that AAC 256 VBR is audibly transparent. I'm just waiting for you to step up to bat and offer some sort of evidence to back up your claims that it isn't. I've done controlled listening tests comparing AAC and lossless. I've read numerous other controlled tests conducted by other people, all coming to the same conclusion I did. Why should I just take your word for it if you haven't made any effort to find out the truth for yourself? You can feel free to go on your gut instincts if you'd like. That's your prerogative. But that doesn't mean anything to me. For an opinion to have validity for other people it has to be verifiable and repeatable. This is sound science.
> 
> If you really want me to, I can go through your post and answer your misconceptions point by point and provide more links. But you didn't follow the links I've already given you that said that AAC 256 VBR is audibly transparent... The compressed audio sounds *exactly* like the uncompressed file on high end or low end equipment... to trained ears, musician's ears, to audio enthusiast's ears and to your mom's ears... under any normal circumstance in which you might be listening to music. It is audibly interchangeable with lossless and it is a fraction of the file size. It's efficient. It streams easily. There's no sacrifice at all.
> 
> ...



You're waiting for Godot there... You've offered nothing but repetitive claims, and when I challenge you on a mistake, you just hammer away as if you were right the whole time.

You made mistakes.

Go back and correct them before I pay any attention, that's sound science.


----------



## Glmoneydawg (Jul 23, 2018)

PiSkyHiFi said:


> You're waiting for Godot there... You've offered nothing but repetitive claims and when I challenge you on a mistake, you just hammer away as if you were right the whole time.
> 
> You made mistakes.
> 
> Go back and correct them before I pay any attention, that's sound science.


I was you for a few decades..... recently sold my $10,000 DAC and the CD drive that went with it for a very nice profit (feel a little guilty lol) and replaced both with a Cambridge CXU...... not a full-on science nerd, but I can assure you there is no difference. I am not ashamed to admit I have a turntable, records, expensive cables (I know lol). Stop chasing your tail, my friend.... you will never catch it.... unless you're the guy that bought my DAC, then it's cool.


----------



## PiSkyHiFi

Glmoneydawg said:


> I was you for a few decades..... recently sold my $10,000 DAC and the CD drive that went with it for a very nice profit (feel a little guilty lol) and replaced both with a Cambridge CXU...... not a full-on science nerd, but I can assure you there is no difference. I am not ashamed to admit I have a turntable, records, expensive cables (I know lol). Stop chasing your tail, my friend.... you will never catch it.



I've never been that person; science nerd, yes.

Chasing my tail? I'm not your friend; I'm just trying to communicate here.

I witnessed the moment MP3 bit the dust for me, because I heard the difference on the Beyerdynamic T1 for the first time.

I don't need a Denon Cat 6 cable to know that compressed audio isn't the purpose of my sound system.


----------



## Glmoneydawg

PiSkyHiFi said:


> I've never been that person, science nerd yes.
> 
> Chasing my tail? I'm not your friend; I'm just trying to communicate here.
> 
> ...


That's unfortunate... I may be the person closest to you and your beliefs in the Sound Science forum... good luck to you.


----------



## PiSkyHiFi

Glmoneydawg said:


> That's unfortunate... I may be the person closest to you and your beliefs in the Sound Science forum... good luck to you.



Cheers, I think... not sure if you're being sarcastic though.

It's simple: I draw the line at Redbook and leave it there, with solid science backing up why you shouldn't need any more and why saving space is no longer a factor! AAC at 256 is just barely adequate compared to Redbook; it's all the rage because of streaming, but it's still just barely adequate.

Let me put it this way then: to get to AAC 256, you must first have PCM. That's compulsory. Now just skip the part where you reduce information. Done.


----------



## Steve999 (Jul 24, 2018)

I am listening to some ridiculous Samsung Bluetooth speaker with a glowy tube on top that was on like a 1/4-price sale at Best Buy a few years ago because no one would buy it. I am listening to Itzhak Perlman, or however you spell it, over 256 kbps iTunes Plus. I just ripped it. I rip a CD a night just as a leisure activity; I am about 90 or 95 percent through my library. The AAC file is getting from my computer to this speaker over some mangled early Bluetooth protocol or codec or container with egregious transcodings, I am sure. My kids are always occupying the space with my nice stereo, playing video games or on a computer or taking a nap or asking me some kind of question. So I am in another room listening to this abomination. It's all jittery, but the tube sounds nice and warm.

Here is an absurd review of this beautiful strangeness:

https://www.whathifi.com/samsung/da-e750/review

If you are interested, the review makes bizarre comparisons between the Apple AirPlay sound, the Bluetooth sound, the aptX Bluetooth sound, and wired streaming by Ethernet cable. The review is jaw-droppingly silly. This astonishing contraption also takes USB inputs and, who knows, probably some other stuff. It's the Swiss Army knife of input options. It also docks Apple and Android stuff.

Seriously, I cannot for the life of me hear the difference between 256k iTunes Plus and the source on anything. I have all of the original CDs in vinyl jackets so they don't take up too much space, if I get freaked out about it. I always used whatever was best at the time over the last 15 or 20 years, from LAME MP3 to Fraunhofer or whoever, and sometimes some other codec just to goof around, and that's the form it's still in, unless Apple Music provided me an automatic upgrade to AAC 256 kbps for free, which it did do for an awful lot of music, and is an awesome perk for my $10 a month. But at the time I was ripping, I always made sure the result was transparent (the same sound to my ears) the best I could, with some margin for error.

I underclocked my PC and put my graphics card in silent mode so they don't interfere with the music. This much is true!

Actually it's a mid-fi speaker but the sound is really, really nice for what it is. And it looks awesome. Seriously, Best Buy could not sell the stupid thing and I just dropped by once in a while and watched the price drop and drop and drop. It has a woofer on the bottom and a bass port on the back in case you wonder where the bass comes from.

And the other point is what about the music? I am going to worry about some little knick-knack detail that only 1 in 508 young, talented and experienced people can make out after two hours of listening to a short and particularly demanding sample? Or am I going to listen to the music as music? We are living in an age of hifi I never even dreamed of when a turntable was the best source I could have access to. Now, as far as sources and audio files are concerned, we are in the age of way beyond good enough.

My CDs will be lying around in vinyl jackets for posterity, if anyone cares.

By the way, a point of interest: even iTunes Plus AAC @ 256 kbps will run over 320 kbps in complex passages. I have seen it as foobar2000 displays the bitrate in real time.

If someone can hear the difference between 256kbps AAC Itunes Plus and the original CD or lossless file and they want to put up evidence that it's so I'm cool with that.


----------



## bigshot (Jul 24, 2018)

PiSkyHiFi said:


> I draw the line at Redbook and leave it there, with solid science backing up why you shouldn't need any more and why saving space is no longer a factor! AAC at 256 is just barely adequate compared to Redbook; it's all the rage because of streaming, but it's still just barely adequate.



Let's simplify this. One question at a time.... AAC is audibly identical to redbook. If it sounds exactly the same, how is it any worse than redbook? Just give me answers to that one question- concise and to the point. Bullet points are fine.


----------



## PiSkyHiFi

bigshot said:


> Let's simplify this. One question at a time.... AAC is audibly identical to redbook. If it sounds exactly the same, how is it any worse than redbook? Just give me answers to that one question- concise and to the point. Bullet points are fine.



You can't even examine your own fallacies, which you will need to do if you want to put me in a box of your design.

Can CD audio reach a higher dynamic range than 96 dB in any way? Yes: through careful mastering down to Redbook, error diffusion (dither) can mathematically allow a slightly higher perceived dynamic range.

What happens if one bit is different from what it should be in many locations? The dynamic range above the theoretical 96 dB limit is now impossible, because the error diffusion has been lost.

AAC does not have as high a dynamic range as Redbook, period. Whether we can always hear that is a separate question, one I have painstakingly tried to address while you keep pulling out the same broken box.

Here's my suggestion: if you actually care about fixing the box you're in, go back and read things properly, and stop presenting me with the same tired old box to climb into.

I bet you're going to paint a scenario with audible transparency and never even consider that starting with AAC as a reduced form of the PCM is the correct path to describing it successfully. You might say "a reduced form that is audibly transparent," but you haven't... you've pretty much tried to cover up the differences the whole time, as if it's impossible to have a difference that takes time and repeated listening to hear. But it isn't impossible, just not very likely, because the differences are small.

Here's the kicker though: scientifically, I'm the one claiming that AAC and CD being mathematically different is enough to warrant justifying why the sound will nevertheless be audibly the same to anyone.

I am not the one making claims against the math, I'm the one willing to deconstruct the math to find out what's really happening.

The onus is on you to define your terms. "Audibly transparent," for instance: you said you made sure your equipment was transparent before the tests, but that doesn't make sense, because equipment can only be more or less transparent. No equipment exists that is perfectly transparent, i.e. as if the sound source were in the room with no sound system. Transparency is a relative thing in sound systems.

My main point is that the time for all this is now up.

We don't need to compress audio in a lossy format for storage any more, because space is abundant. Store it as FLAC; then you don't even need to discuss the relative merits of adding artifacts that you may or may not hear.


----------



## bigshot (Jul 24, 2018)

One simple question. One simple answer. All these paragraph breaks and digressions aren't making any points. They're just exhausting me and making me not want to read your posts. Be concise. Give a clear argument to support your position. Don't wiggle. If you can give me a solid argument in your favor, I'll acknowledge it. I'm not trying to make points myself any more. I'm just trying to get you to answer a simple question...

If you agree that AAC 256 VBR sounds exactly the same as lossless, what advantage is there to maintaining your music library in lossless?


----------



## PiSkyHiFi

bigshot said:


> One simple question. One simple answer. All these paragraph breaks and digressions aren't making any points. They're just exhausting me and making me not want to read your posts. Be concise. Give a clear argument to support your position. Don't wiggle. If you can give me a solid argument in your favor, I'll acknowledge it. I'm not trying to make points myself any more. I'm just trying to get you to answer a simple question...
> 
> If you agree that AAC 256 VBR sounds exactly the same as lossless, what advantage is there to maintaining your music library in lossless?



See the box you put me in? It's broken. I've told you one clear mistake; address it or not.


----------



## Monstieur

inspectah_deck said:


> I don't see any data about aptX HD in this post to conclude that.
> Although I'm thankful for your informational posts, I notice a pretty aggressive pro-AAC, anti-aptX tone in your posts, which I find unnecessary.


aptX HD just uses more bits per sub-band than aptX. It's still essentially transmitting ADPCM, which is grossly inefficient compared to psychoacoustic codecs.

All aptX implementations I've heard have audible frequency-response distortion. This should not happen if it were just bit-depth reduction, so it's most likely an implementation flaw. The AAC implementation on the same Bluetooth devices doesn't have this problem.
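For readers unfamiliar with ADPCM, the core idea can be illustrated with a toy differential quantizer in plain Python. This is only a sketch (the step size, bit depth, and test tone are arbitrary choices; real aptX adds sub-band filtering and adaptive step sizes), but it shows the point being made: every sample's difference gets a fixed bit budget, whether or not the detail is audible.

```python
import math

def dpcm_encode(samples, step=0.01, bits=4):
    """Toy DPCM: quantize each sample's difference from a running prediction.
    Real aptX is sub-band ADPCM with adaptive steps; this is only the core idea."""
    levels = 2 ** (bits - 1)
    pred, codes = 0.0, []
    for s in samples:
        # clamp the quantized difference to the available code range
        q = max(-levels, min(levels - 1, round((s - pred) / step)))
        codes.append(q)
        pred += q * step          # encoder tracks the decoder's state
    return codes

def dpcm_decode(codes, step=0.01):
    pred, out = 0.0, []
    for q in codes:
        pred += q * step
        out.append(pred)
    return out

# 10 ms of a 440 Hz tone at 48 kHz: the round trip stays within half a step
sig = [0.3 * math.sin(2 * math.pi * 440 * i / 48000) for i in range(480)]
rec = dpcm_decode(dpcm_encode(sig))
err = max(abs(a - b) for a, b in zip(sig, rec))
```

A psychoacoustic codec like AAC instead spends its bits only where a hearing model says the ear will notice, which is why it achieves far better quality at the same bitrate.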


----------



## Monstieur (Jul 24, 2018)

shortwavelistener said:


> Ironically, SBC is the core technique used in popular lossy audio compression algorithms including the entire MPEG 1 Audio Layer family (MP1/PASC, MP2/Musepack, MP3 and MP4/AAC).


There is nothing wrong with the technique itself - it's the low bit-depth used in SBC over Bluetooth that makes it audibly degrade the sound, especially when used without psychoacoustics.

If you simply reduced the bit-depth by 1 bit, it would still technically be SBC and probably be audibly transparent.
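The cost of each bit of depth follows the well-known ~6 dB/bit rule, which is easy to verify with a quick sketch in plain Python (the 997 Hz test tone and one-second length are arbitrary choices): quantize a full-scale sine at two bit depths and compare the signal-to-error ratios.

```python
import math

def sine_snr_db(bits, n=48000):
    """Quantize a full-scale sine to `bits` of depth and return the
    signal-to-quantization-error ratio in dB."""
    step = 2.0 / (2 ** bits)  # quantization step for a [-1, 1) range
    sig = [math.sin(2 * math.pi * 997 * i / n) for i in range(n)]
    err = [round(s / step) * step - s for s in sig]
    p_sig = sum(s * s for s in sig) / n
    p_err = sum(e * e for e in err) / n
    return 10 * math.log10(p_sig / p_err)

# Dropping one bit costs roughly 6 dB of SNR (the 6.02 dB/bit rule of thumb)
drop = sine_snr_db(16) - sine_snr_db(15)
```

So shaving one bit off a sub-band, as suggested above, raises that band's noise floor by only about 6 dB, which is why a modest bit-depth reduction can remain inaudible while a heavy one (as in low-bitpool SBC) is not.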


----------



## bigshot (Jul 24, 2018)

PiSkyHiFi said:


> See the box you put me in? It's broken. I've told you one clear mistake; address it or not.



I'm guessing that you know that you aren't arguing on point and you are afraid if you answered, you wouldn't come out on top. That's the wrong way to approach a forum like this. Talking with people in sound science isn't about winning or losing. It's about sharing information. I have information you might not have, and I invited you to share information that might show me a different side. The fact that you can't focus your argument and discuss things straightforwardly is kind of sad for you. Oh well.


----------



## Steve999 (Jul 24, 2018)

I have a question--would Apple Airplay and Apple Music be the best way for me to stream music?

If I am using Bluetooth, is there anything that I, as a person who does not understand that technology, can do as a rule of thumb to get a high-fidelity stream? Kind of an HWMonitor for your Bluetooth stream, or something to see what's going on? Wifi and hard-wired over the network I understand decently.

On my home stereo I usually stream Apple Music over Apple TV. Good? Bad? Is there a way to optimize it? If they can match what I have on my computer they will just use the Apple 256 AAC VBR over my network sent straight from Apple, if I understand correctly. If Apple does not have a match they have uploaded my file from my computer and stream that.

If I use Spotify over a Roku through my home stereo is that worse?

If I use Spotify on my home computer to my AirPlay + Bluetooth speaker, is that better than using Apple Music in any way? (My family of 5 shares a Spotify account.)

I'd like to be confident that my streaming is giving me something transparent or very close to it. Honestly, I am just sort of asking what button to press (figuratively); I don't have the wherewithal to figure it out. I am guessing sticking to Apple and its ecosystem would give me the best results, but I would be very grateful for any knowledgeable contrary opinions. With so many variables I cannot imagine figuring it out for myself. I get a little concerned when I read about all of this transcoding and different codecs and aptX etc. I just want to do the simple thing that gives me the best stream.

I know that's a lot of questions, but I am really just looking for a simple answer based on knowledge and the quality of the technologies, if that's at all possible. If it requires a little study time or experimentation or configuration or a little extra effort, I can do that too. I want to get to set-it-and-forget-it and just know I got the best, or close to the best, stream I could.

I must often stream from my computer to a Bluetooth speaker, or from my Apple TV hard-wired to my nice stereo. My Bluetooth speaker can do line in, Ethernet in, AirPlay, Bluetooth, or Bluetooth aptX.

Thanks anyone!





----------



## bigshot (Jul 24, 2018)

I've found that Bluetooth isn't perfectly transparent for the most serious forms of listening. It's fine for casual listening, and most of the time it isn't a problem. But AAC 256 VBR, which is the standard in the iTunes Store, is perfect for human ears. I use AirPlay to stream around my house from my Mac Mini media server to an AirPort connected to each stereo. It works great. The jitter rating of AirPorts is a little higher than normal, but it's still well below the threshold of audibility. I'd recommend a system like the one I use. The only problem I get is some skipping when someone in the house is hogging the wifi bandwidth with a chunky download. That doesn't happen too often.


----------



## PiSkyHiFi

bigshot said:


> I'm guessing that you know that you aren't arguing on point and you are afraid if you answered, you wouldn't come out on top. That's the wrong way to approach a forum like this. Talking with people in sound science isn't about winning or losing. It's about sharing information. I have information you might not have, and I invited you to share information that might show me a different side. The fact that you can't focus your argument and discuss things straightforwardly is kind of sad for you. Oh well.



We've already addressed this. You mean "sounds the same on the same equipment" when you say audibly transparent. I don't think I would pass an ABX test with 256 kbps AAC, even on my best equipment. I'm not even interested in that, because I just explained why it's not the full story; you didn't pick that up because you're focused on getting me into a box where you can't be challenged.

I challenged your logic on dynamic range and you ignored it - it's impossible for AAC to have as large a dynamic range as CD - that's just basic logic

I said AAC is a fantastic lossy codec, it's one of the best, go back and check.

I don't think it's reasonable to say that no one could ever tell the difference between AAC 256 and CD, yet you seem to be pointing at that as if it's fact, as if there is no evolution of sound systems at all.

Honestly, it seems absurd to me that while I'm trying to make the best out of the sound equipment I have, I would deliberately reduce the quality of the source material because some research said I couldn't possibly hear the difference.

The only reason I used AAC in the past is due to limitations on storage at the time.

If I'm going to audition different sound equipment, do you think it's sound science that I would deliberately reduce the digital representation to barely adequate? I would just be introducing new ways to prevent revealing flaws in my equipment.

Transparent means the same as without a sound system.

Because transparent means more than comparing just 2 sound system scenarios, it's about being able to close your eyes and see if your senses can be tricked by a sound system into feeling like it isn't there.

I don't know about you, but I've never encountered a sound system that actually did that fully. Partly that's because that standard of transparency requires fooling subconscious mechanisms that are extremely good at piecing together a model of what is happening around our ears. So even if we can't verbalise what's happening and distinguish differences consciously, we can certainly tell that it's not real, and that's sufficient to show that ABX is simply not enough to actually trick us.

Don't rely on ABX; rely on careful, considered listening many, many times over, possibly never reaching a definite conclusion, because you can already tell it's a sound system and not real.

I don't think you're ever going to get that, your own definition of audibly transparent won't allow it.

In conclusion, I don't store files in AAC any more because I don't need to, it would be absurd to reduce the representation in any way unless it was the only way things could work.





Why not mention something about "audibly transparent," in case I missed it?


----------



## bigshot (Jul 24, 2018)

Since I’m on my phone, I’ll take it one point at a time.

AAC and dynamic range

AAC compression is not dynamic compression. AAC files have the same dynamic range as CDs.

https://audiophilereview.com/cd-dac-digital/how-much-does-mp3-affect-dynamic-range.html

People who can tell the difference

I cited two controlled tests above that showed that neither age, trained ears nor high quality equipment were a factor in being able to tell the difference between AAC and lossless. If there are people who can discern a difference consistently, we haven’t found them yet.

Transparent means it sounds real

You are misusing the term then. Transparent means that the sound going in sounds identical to the sound coming out. A transparent amp is described as “a wire with gain”. Reality is a function of recording techniques, not file format. The reason your system has never sounded real to you is because reality isn’t the goal in commercial music recording. The goal is to organize the sound for clarity and creative contrasts. Neither AAC nor lossless nor HD audio present reality. That isn’t a limitation of AAC specifically.

Conclusion

You may not need to store AAC files because you have a small library and you never stream or listen to music wirelessly. That is fine. But that doesn't mean that your lossless library has better audible fidelity than the same music encoded in high-bitrate AAC. I have a large library with over a year and a half of music. It all fits on a single hard drive, making it simpler to do scheduled backups. I also have a media server that streams music and movies to every room in my house. I could have shelves and shelves full of CDs in my listening room, but I don't have to put up with that clutter. All my CDs are ripped and the discs are in boxes in the garage. Much more convenient. The same sound quality.


----------



## PiSkyHiFi

bigshot said:


> Since I’m on my phone, I’ll take it one point at a time.
> 
> AAC and dynamic range
> 
> ...



AAC is not dynamic compression, as if there were people who thought otherwise.

I've been talking about data rates and codecs, there is zero evidence to suggest I think of AAC as an analog audio compressor.

That article concludes quote "The frequency response and dynamic range are essentially unchanged, there's just more junk in the signal"

So... what about dithered mastering then? That's the counterexample I gave you to prove that dynamic range can be lost.

If it helps you at all, I'm sure the person who wrote that article would agree with this: a source that is mastered down to Redbook can achieve slightly better than 96 dB of dynamic range with careful use of dithering. It must be bit-perfect, or be processed with higher-precision math, to maintain any of the perceived range above 96 dB; encoding to any 16-bit based lossy format will destroy any dithering that was present in the mastering.

I don't think it requires a genius to get that: since the data is reconstructed into 16 bits and will contain random errors compared to the original dithered master, the encoded error diffusion will be lost. Perhaps, if one is careful, reconstructing with higher-precision math in the encoder and then dithering down again might work, but it's guaranteed not to have as high a dynamic range as the original source potentially could, simply by being different.

Dynamic range is lost, or should I say potential dynamic range is lost as it will depend upon the mastering.

That's just one example of how dynamic range is lost through lossy encoding... this is science.


----------



## bigshot (Jul 24, 2018)

Perhaps I'm having trouble sorting out your points. That's why I keep trying to pare the conversation back to one thing at a time. Could you please explain what you meant by this quote?


PiSkyHiFi said:


> I challenged your logic on dynamic range and you ignored it - it's impossible for AAC to have as large a dynamic range as CD - that's just basic logic



I am confused because you then say this...


PiSkyHiFi said:


> AAC is not dynamic compression, as if there were people who thought otherwise. I've been talking about data rates and codecs; there is zero evidence to suggest I think of AAC as an analog audio compressor.



I'll move on to other points when I figure out what you're getting at with this one. I originally pointed out that AAC doesn't affect dynamics earlier. You replied saying I ignored your challenge. So I cited an article that shows that AAC compression does not alter the dynamic range of music. Are you agreeing that it *is* possible for AAC to have as large a dynamic range as CDs now?

Dithering shouldn't be an issue. The same dynamics are the same dynamics. Lossy codecs are designed to throw out sound you can't hear and keep the sound you can. Most commercial music doesn't exceed 50 or 55 dB of dynamic range anyway; CDs and high-bitrate lossy are overkill when it comes to dynamic range. I would only use dithering when I downsample. When I rip a CD, it should carry across with the same audible sound as the CD, shouldn't it?

This isn't an argumentative trick. I'm honestly trying to discuss this with you and I can't figure out what you are saying. I keep talking about audibility, and you keep going back to talking in theory about things that aren't audible.

All of us use our stereos for the same thing... listening to music with human ears. Theory is great as far as it affects that particular purpose. But theory for the sake of theory that doesn't affect audibility doesn't matter for that purpose. You can say that you feel better about having larger file sizes or that you worry about smaller file sizes. I'll accept that as a psychological block you have to using lossy. But when it comes to audible sound or the purpose of listening to recorded music in the home, I can't see any advantage to lossless. It's bigger, less convenient, less cross platform, it doesn't stream and it sounds no better.


----------



## PiSkyHiFi

bigshot said:


> Perhaps I'm having trouble sorting out your points. That's why I keep trying to pare the conversation back to one thing at a time. Could you please explain what you meant by this quote?
> 
> 
> I am confused because you then say this...
> ...



The article itself was mainly pointing out that "compression" in audio could mean dynamic range compression (an analog audio compressor) or information compression like lossy encoding, and that these are two completely different concepts to keep in mind.

No, I am correcting that article, or extending it... AAC cannot mathematically achieve the same potential dynamic range; I've just shown that logically, based upon dithered mastering.

When I say potential dynamic range, I'm referring to the range of precision: how closely it can represent the original signal in all its dynamics, and how much error is associated with it. AAC will be less for the reasons I outlined above; I don't think there is any argument about that.

Audibly transparent means sounding like a live performance, not whether a track sounds the same as its compressed doppelganger on a particular sound system.


----------



## PiSkyHiFi

bigshot said:


> All of us use our stereos for the same thing... listening to music with human ears. Theory is great as far as it affects that particular purpose. But theory for the sake of theory that doesn't affect audibility doesn't matter for that purpose. You can say that you feel better about having larger file sizes or that you worry about smaller file sizes. I'll accept that as a psychological block you have to using lossy. But when it comes to audible sound or the purpose of listening to recorded music in the home, I can't see any advantage to lossless. It's bigger, less convenient, less cross platform, it doesn't stream and it sounds no better.



It certainly does stream, just not with Apple. For example, Google Chromecast Audio can stream lossless, although not always reliably. Nothing to do with the wifi, I think; the software just needs better buffering management, and I haven't really explored this much yet.

Honestly, I cannot see any reason to revert to lossy for storage - lossless is just as convenient, it's even more cross platform and it might sound better, especially after listening many times over and not trying to compare what we can verbalise through ABX testing.

I have aptX HD for streaming at home and in the car; home streaming works better on the app side if it's Bluetooth, for me. But when lossless is ubiquitous across hardware, I'll shift to that.

We are arguing over where to draw the line... and honestly I think we're actually pretty close. I have used both MP3 and AAC for storage before; it's just deprecated now.


----------



## Monstieur (Jul 24, 2018)

Steve999 said:


> I have a question--would Apple Airplay and Apple Music be the best way for me to stream music?
> 
> If I am using Bluetooth, is there any way that I as a person who does not understand that technology can do as a rule of thumb so I get a high fidelity stream? Kind of an HWmonitor for your bluetooth stream or something to see what's going on?  Wifi and over the network hard-wired I understand decently.
> 
> ...


AirPlay is lossless, so your source will be reproduced exactly.
If you use the native music app to play from Apple Music / iCloud Music Library on the Apple TV, then you get 256 kb/s AAC.
If you use Bluetooth, AAC is the best codec since it defaults to 320 kb/s, at least on macOS.


----------



## bigshot

PiSkyHiFi said:


> It certainly does stream, just not with Apple. For example, Google Chromecast Audio can stream lossless, although not always reliably - nothing to do with the wifi I think, it's just the software needs better buffering management and I haven't really explored this much yet.
> 
> Honestly, I cannot see any reason to revert to lossy for storage - lossless is just as convenient, it's even more cross platform and it might sound better, especially after listening many times over and not trying to compare what we can verbalise through ABX testing.



Streaming lossless isn't practical for the majority of people who listen to music. Compressed audio sounds the same and works pretty much flawlessly on any computer or mobile device.

There is no reason to revert to lossy for storage if you have already ripped to lossless. Transcoding a large library would be time consuming and would just result in a bigger footprint on your hard drive with two libraries.

Lossless is definitely *not* more cross platform. Your computer does FLAC easily. Mine does ALAC easily. We can jury rig our computers to play the other format, but it isn't easy. However both of our computers play compressed audio easily.

Direct switched, line level matched ABX testing is the best way to determine whether two similar sounds are perceptually different or not. Long term testing is subject to problems with auditory memory. Humans are unable to accurately determine differences between similar sounds after as little as a second or two. Our ears adjust, our echoic memory fades and the discernment is reduced to pretty much random chance.
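For what it's worth, the statistics behind a direct-switched ABX run are simple to compute. A sketch in plain Python (the 12/16 and 10/16 scores are just illustrative examples): if the listener is guessing, each trial is a fair coin flip, and the p-value is the chance of scoring at least that well by luck.

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial test against guessing (p = 0.5):
    probability of getting at least `correct` right by chance alone."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A listener scoring 12/16 would pass a conventional 5% significance threshold,
p_pass = abx_p_value(12, 16)   # ~0.038
# while 10/16 is entirely consistent with guessing.
p_fail = abx_p_value(10, 16)   # ~0.227
```

This is why controlled trials settle the "can anyone hear it?" question in a way long-term sighted listening cannot: the odds of a false positive are known exactly.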


----------



## PiSkyHiFi

bigshot said:


> Streaming lossless isn't practical for the majority of people who listen to music. Compressed audio sounds the same and works pretty much flawlessly on any computer or mobile device.
> 
> There is no reason to revert to lossy for storage if you have already ripped to lossless. Transcoding a large library would be time consuming and would just result in a bigger footprint on your hard drive with two libraries.
> 
> ...



This probably comes down to whether sound is something that happens without being heard, or whether it is entirely subjective. If a tree falls in the forest...

Sound equipment has many issues. As you pointed out, a sound system isn't necessarily trying to be transparent... actually, I think that is the aim, and we are just used to justifying what we already have as if we couldn't possibly do any better. Transparency is a lofty goal.

Lossless isn't more cross-platform? Well, I disagree. FLAC is open source too and usually works on DAPs even when they haven't included AAC; the Xduoo X2 is an example from a while back. The main thing, though, is that lossless can convert losslessly between formats, so using either ALAC or FLAC is not an issue.

I am adamant that ABX testing is not even half of a decent evaluation of sound quality. Time to absorb the details is essential, and forming conclusions may not even be possible; you may remain uncertain but open, like myself.

Quote: "Our ears adjust..."
Surely that's even more reason to focus on the experience rather than the analysis, and to leave room for different experiences of the same reproduction. Sound equipment can achieve frequencies we can't even hear, and yet it can't achieve transparency of imaging and quality; that tells me that digital sound storage needs a fair amount of headroom to allow for different experiences upon repeated listening.

For me, sound burns into my brain. I can recall sound very well, and I know this because I'm also a musician who plays by ear.


----------



## bigshot (Jul 25, 2018)

PiSkyHiFi said:


> Sound equipment has many issues. As you pointed out, a sound system isn't necessarily trying to be transparent... actually, I think that is the aim, and we are just used to justifying what we already have as if we couldn't possibly do any better. Transparency is a lofty goal.



A sound system definitely is designed to be transparent. That's the goal of music codecs too. We're really fortunate today because we can achieve audible transparency with just about any amp or DAC you can buy. I remember when I first started in this hobby over 40 years ago. Amps were rarely transparent. LPs weren't transparent. Often, even reel to reel tapes weren't. I test every component I buy for transparency, and I haven't found anything that wasn't since the early 1980s. I bought a $40 DVD player from Walmart and it was transparent too. Aside from transducers, transparency is a given now.

What I said before is, recordings aren't made to be realistic. They are designed to sound *better* than real. More balanced, more organized, more variations and contrasts. You can't compare a live performance to a recording because they are two completely different things. But you can compare the sound of a recording contained in a WAV file and compare it to the sound of the same recording in a compressed format. If the sound of the source is the same as the sound at the other end of the chain, it's transparent. With electronics, just about everything is audibly transparent.



PiSkyHiFi said:


> Lossless isn't more cross-platform - well, I disagree: FLAC is open source too and usually works on DAPs even when they haven't included AAC; the Xduoo X2 is an example from a while back. The main thing, though, is that lossless can be converted losslessly between formats, so using either ALAC or FLAC is not an issue.



FLAC isn't supported natively on Macs. You have to download a third party player to do that. iPhones don't natively support FLAC either. ALAC isn't always easy to play on PCs. There is still a format war when it comes to lossless. But there is no format war for compressed audio. Just about anything can play a 320 LAME MP3. And that is an audibly transparent codec/data rate.



PiSkyHiFi said:


> I am adamant that ABX testing is not even half of a decent evaluation of sound quality. Time to absorb the details is essential, and forming conclusions may not even be possible; you may remain uncertain but open, like myself.



Well, you better be glad that the people who test medicine don't feel that way. Controlled testing is the backbone of the scientific process. You can feel free not to believe in it, but your subjective impressions are infinitely more subject to error than controlled tests. You should probably do a little googling on human perception and the effect of bias. Bias is real. All humans make decisions every day that are colored by bias. The best way to eliminate bias and come up with objective answers to our questions is through controlled testing. If you reject that, you're in the wrong forum right now, because this is sound science. The scientific method is important here. Subjective impressions based on bias are fine in the rest of Head-Fi. In fact, a lot of high end audio salesmen prefer you avoid controlled testing and measurement. Subjective impressions are really good for the profit margin!

By the way, ears take time to adjust. So the longer the sample you listen to, the less likely you are to discern a real difference. Direct A/B switching is the best, because you can put one sound right next to another sound and hear smaller differences clearly.


----------



## shortwavelistener

bigshot said:


> AAC files have the same dynamic range as CDs.



Don't mind if I ask, but have you ever heard of 24/96 AAC files?


----------



## bigshot

Does AAC do 24 bit? I know it's capable of 96 as well as multichannel. Never seen 96 out in the wild (but I haven't been looking for it.)


----------



## PiSkyHiFi

bigshot said:


> Does AAC do 24 bit? I know it's capable of 96 as well as multichannel. Never seen 96 out in the wild (but I haven't been looking for it.)



I believe 24-bit sources can be encoded very well with AAC and, when decoded, can render to 24 bit - but of course, not over Bluetooth (yet?).

Current Bluetooth (up to 4.2) limits AAC to 256 kbps - it might have been encoded from a 24-bit source, but decoding that level back into 24 bit probably wouldn't really get closer to the original than decoding into 16 bit at that sample rate.

AAC can encode at larger data rates, of course, for storing rather than streaming, so it's a good choice for 24-bit source files if you employ a larger data rate that can reproduce well back into 24 bit.

That's exactly what aptX HD is: the larger data rate of 576 kbps allows for better reproduction into 24 bit from 24-bit source files - not by a lot, but still mostly more accurate than 16-bit rendering.

Personally, I would choose AAC over aptX HD at the same data rate, but Bluetooth doesn't allow it, so I use aptX HD.
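Where the 576 kbps figure comes from is easy to check: aptX HD applies a fixed 4:1 compression to 24-bit / 48 kHz stereo PCM. A quick sanity check in Python:

```python
# aptX HD compresses 24-bit / 48 kHz stereo PCM at a fixed 4:1 ratio,
# which is where the 576 kbps figure comes from.
sample_rate_hz = 48_000
bit_depth = 24
channels = 2

pcm_rate = sample_rate_hz * bit_depth * channels  # 2,304,000 bit/s uncompressed
aptx_hd_rate = pcm_rate // 4                      # fixed 4:1 compression

print(aptx_hd_rate)  # 576000 bit/s = 576 kbps
```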


----------



## bigshot

Is AptX MP4? I don’t know much about it.


----------



## inspectah_deck

bigshot said:


> Does AAC do 24 bit? I know it's capable of 96 as well as multichannel. Never seen 96 out in the wild (but I haven't been looking for it.)


_Bit depth is only meaningful in reference to a PCM digital signal. Non-PCM formats, such as lossy compression formats, do not have associated bit depths.
https://en.wikipedia.org/wiki/Audio_bit_depth_


----------



## bigshot

inspectah_deck said:


> _Bit depth is only meaningful in reference to a PCM digital signal. Non-PCM formats, such as lossy compression formats, do not have associated bit depths._



I kinda thought that but I didn't know for sure. Thanks for the info!!


----------



## PiSkyHiFi

bigshot said:


> I kinda thought that but I didn't know for sure. Thanks for the info!!



Just to make it clear though - lossy compression can't process information unless it's PCM to start with.... so the bit-depth of the original PCM and the resulting decoding are very important to quality of outcome.

For example, AAC uses a discrete cosine transform as its basis for compression - the key word being discrete. This is essentially what PCM data is; it needs to be of the same order of precision to maintain precision in the chain, so it is associated in that way.
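To make the DCT point concrete, here is a minimal, naive DCT-II and its inverse in Python. (Real AAC uses an overlapped MDCT plus quantization; this sketch only shows that the transform itself operates on discrete samples and is perfectly invertible until you quantize the coefficients.)

```python
import math

def dct_ii(x):
    """Naive DCT-II: the 'discrete' transform at the heart of DCT-based codecs."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N)) for n in range(N))
            for k in range(N)]

def dct_iii(X):
    """DCT-III, scaled so that dct_iii(dct_ii(x)) reproduces x (the inverse)."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N
            for n in range(N)]

# With no quantization of the coefficients, the round trip is numerically lossless;
# a lossy codec gets its savings by coarsely quantizing the coefficients in between.
samples = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
roundtrip = dct_iii(dct_ii(samples))
```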


----------



## bigshot

Garbage in, garbage out I suppose.


----------



## shortwavelistener

Has anyone tried listening to music that has been "Mastered for iTunes"? And are those encoded as "24/96" 512 kbps AAC files?


----------



## Glmoneydawg

bigshot said:


> Garbage in, garbage out I suppose.


Yep... a rule that applies across the board... never stronger than the weakest link.


----------



## bigshot

Mastered for iTunes means that the source is a studio-quality master (e.g. 24/96), and the encoding is customized to make it as efficient and high-quality as possible. But the end result is still AAC 256 VBR. That isn't a bad thing, though, because AAC 256 VBR is audibly transparent. With human ears, you won't be able to discern it from the master.


----------



## Steve999 (Jul 26, 2018)

shortwavelistener said:


> Has anyone tried listening to music that has been "Mastered for iTunes"? And are those encoded as "24/96" 512 kbps AAC files?



Apple has a curious habit of going around remastering Sun Ra albums. I imagine I am one of the few people who have noticed this. This can be a good thing, because the production values of some Sun Ra recordings seem to have not been the best available - or, you might go so far as to say, comically bad. They used to have little news snippets about the Apple remasters of Sun Ra albums, but I can't find them anymore. I had a particular affection for this album as a young teenager; I had it on LP and searched far and wide for a CD of it. Anyway, here is the song info from an iTunes proprietary rip of what I think (if my memory serves me correctly) is a 2017 or 2018 iTunes Sun Ra remaster, adjacent to the song info ripped from a CD that I bought in 2015. Now, I could delete my 2015 CD rip and iTunes would download and replace it with the "matched" remastered 256 AAC iTunes file, DRM-free, because I have Apple Music; but for the time being I amuse myself by having both as long as I am subscribed to Apple Music. In addition, I think whatever they did to remaster it in 2015 or earlier was pretty darned impressive by itself. Whatever information you can glean from this, more power to you. The Mastered for iTunes stuff is still 256 AAC, as best I can tell, and is indicated to have a sample rate of 44.1 kHz. It looks like my CD rip was actually at a slightly higher volume (0.8 dB) than the iTunes remaster rip, and you can also see differences in track length, file size, etc. I wonder if the track length is shorter in the iTunes remaster because they fixed the speed and the pitch.

FWIW Foobar shows the CD VBR Itunes plus AAC rip to average 301 kbps.

I did get to see Sun Ra live on two or three occasions. Memorable in the extreme.


----------



## bigshot

It could be that Apple rejected the master they were sent and the current rights holder had to go back and fix stuff. Sun Ra owned his own record label, didn't he? He could be as sloppy as he wanted back in the day.


----------



## Steve999 (Jul 26, 2018)

bigshot said:


> It could be that Apple rejected the master they were sent and the current rights holder had to go back and fix stuff. Sun Ra owned his own record label, didn't he? He could be as sloppy as he wanted back in the day.



@bigshot You might well be right! Sun Ra was such a difficult personality that assembling his discography into a coherent narrative, and assembling the best versions of his songs on a per-album basis, has been an epic undertaking. Sometimes he owned his own label and sometimes he did not. Sometimes he played doo-wop, sometimes be-bop, and sometimes he was just some spaced-out dude from Saturn. Nothing he ever said really made too much sense. It has also been said that he ran his band with a very heavy hand. I have read that if they were on a foreign gig and he did not like one of his musicians, he would have the whole band leave the country without telling that one person, leaving them behind and alone in a foreign land.

I wasn't aware iTunes would send music back to be cleaned up before they would issue it on the store. That makes sense for all of the recent Sun Ra remasters on iTunes, including the 2018 remasters, whereas Spotify will just have the older masters.

These sites provide some insight into the bizarre lack of attention to detail Sun Ra would show in issuing his recordings:

https://www.discogs.com/master/view/84375

and for this second link mouse over the picture and click on the drop-down menu stating "Cosmos (Remastered)" for more info. Or click on the play button to get a taste of the music, though the album is widely varied. At the drop-down link you can hear the whole album.



Here is a posthumous assessment of this particular album:

https://www.allmusic.com/album/cosmos-mw0000042538

"A hard-to-find, alternately chaotic and tightly organized mid-'70s session that was issued on the Cobra, and then Inner City labels. Sun Ra provided some stunning moments on the Rocksichord, while leading The Arkestra through stomping full-band cuts of atmospheric or alternately hard bop compositions, peeling off various saxophonists for skittering, screaming, at times spacey dialogues."

These provide some illumination. At the second (picture drop-down) link it is stated in part:

COSMOS contains the only studio recordings captured in summer 1976 during the fourth European tour of Sun Ra and His Arkestra. Live recordings are known to exist from concerts in Switzerland, Italy, and France.

Cosmos was issued in France in '76 on Cobra (identified as Blue Silver on the sleeve and label), in the US that same year on Inner City, in 1991 on Buda (France), and as a bootleg edition or two. Each time it resurfaced, the audio quality changed, sometimes for the better, sometimes not. On the 1991 CD, the bass was mixed at woofer-quaking levels; the Inner City LP sounds flat.

In 2015, tape librarian/engineer Michael D. Anderson of the Sun Ra Music Archive began assembling a digital reissue. Tape could only be located for tracks 2, 3, and 4, and the various LP and CD configurations offered idiosyncratic production variants. Eventually, best quality recordings were extracted from a combination of sources: track 1 from the Inner City CD, and tracks 5, 6, and 7 from a near-mint Cobra LP. These were digitally remastered in several stages, to create an improvement over prior editions.


----------



## Yuurei

bigshot said:


> It could be that Apple rejected the master they were sent and the current rights holder had to go back and fix stuff. Sun Ra owned his own record label, wasn't he? He could be as sloppy as he wanted back in the day.



Yes, that could be it:



> To meet the Mastered for iTunes technical requirements, you need to submit your masters in a 24-bit uncompressed audio format, such as 24-bit 96kHz sample rate *.wav format. You can submit up to 192kHz. Part of Apple's guideline is to limit or completely prevent clipping or inter-sample peaks. Apple provides the tools you need to check for clipping





> Mastered for iTunes approved aggregators can send your high bit rate masters to iTunes, but, they can only guarantee that your release will be branded as "Mastered for iTunes" if your work has been mastered by a Mastered for iTunes Mastering Engineer.



The above quotes are from this article, but since I'm not a mastering engineer I do not know how true it is (although it was interesting to read).


----------



## shortwavelistener

Yuurei said:


> Yes, that could be it:
> 
> 
> 
> ...



So, are there any Mastered for iTunes music files with smashed levels, a.k.a. badly remastered?


----------



## Monstieur

PiSkyHiFi said:


> That's exactly what aptX HD is: the larger data rate of 576 kbps allows for better reproduction into 24 bit from 24-bit source files - not by a lot, but still mostly more accurate than 16-bit rendering.
> 
> Personally, I would choose AAC over aptX HD at the same data rate, but Bluetooth doesn't allow it, so I use aptX HD.


aptX HD is not even true 16-bit. The bits are discarded at the very beginning due to its ADPCM-like encoding.


----------



## PiSkyHiFi (Jul 27, 2018)

Monstieur said:


> aptX HD is not even true 16-bit. The bits are discarded at the very beginning due to its ADPCM-like encoding.



There is no discarding at the beginning.... The beginning being the point where the algorithm knows nothing of what to reduce or what is safe to discard without losing the accuracy of representation.  I mean ADPCM uses a differential signal to initially reduce the amount of bits required to represent the *same* digital signal as best as possible, only discarding information when unusual cases arise.

Imagine a single 1 kHz maximum-volume sine wave represented in 16-bit PCM: it rises and falls with each sample from zero to the upper limit (+32767), through the zero point and down to the lower limit (-32768), then back to zero, and repeats. ADPCM starts by storing the difference between each sample instead of the absolute value of the sample. The difference between consecutive samples is a much smaller number than the actual range of 16 bits.

We can calculate the maximum differential between samples for this scenario: within a quarter of the samples for one period of this wave, the wave goes from the zero point up to the maximum level, with the steepest step at the zero crossing (it's just a single sine wave).
So, roughly divide the absolute range here (0 to +32767) by the number of samples in that quarter-period, which is 44.1 kHz / (4 * 1 kHz) - say, roughly a factor of 10. The first value in this sequence will be zero, followed by roughly 3,200 (32767 / 10), then around 6,400 for the next sample, and so on...

So ADPCM in this case only needs about 10% of the original range in order to represent the same original signal. That's not 10% of the information, though; we're using binary, so 10% of the range is actually roughly a saving of 3 bits per sample (2 to the power 3 is 8 - roughly 10).

So straight away, ADPCM without doing anything else is able to represent this sine wave with approx. 13 bits instead of 16 - without any loss at all.

That's the starting point, it's actually more complicated than that of course, with adaptive ranges (range step size can vary) and predictive waveforms that are subtracted out to still reduce the range of the differential representation in terms of total information.

If the encoded data rate is a reasonably high proportion of the original PCM data rate, then quite a lot of the signal comes through very close to losslessly. It's not trying to select frequencies and reduce their accuracy using psycho-acoustic models like other codecs will; it's just another technique to start from, which then uses other methods to adapt the compression.

AptX uses sub-bands initially too, to help isolate and restrict possible ranges, based on narrower frequency bands.

So.... I hope some of that made sense to you.

This is sound science.
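Those back-of-the-envelope numbers can be checked with a few lines of Python (plain first-order deltas only - real ADPCM adds prediction and adaptive step sizes, so this is just the starting point described above). The true maximum step turns out a bit bigger than the linear estimate, about 4,664, because the slope at the zero crossing is roughly pi/2 times the quarter-period average - but the conclusion holds: the worst-case deltas fit in about two fewer bits than the raw 16-bit samples.

```python
import math

FS = 44_100           # sample rate, Hz
F = 1_000             # tone frequency, Hz
FULL_SCALE = 32_767   # 16-bit signed PCM maximum

# One tenth of a second of a full-scale 1 kHz sine, quantized to 16-bit values.
samples = [round(FULL_SCALE * math.sin(2 * math.pi * F * n / FS))
           for n in range(FS // 10)]

# First-order differential signal: what plain (non-adaptive) delta coding stores.
deltas = [b - a for a, b in zip(samples, samples[1:])]
max_delta = max(abs(d) for d in deltas)

# Bits needed for a signed delta of that magnitude, vs. the raw 16-bit samples.
delta_bits = max_delta.bit_length() + 1   # +1 for the sign bit
print(max_delta, delta_bits)              # deltas fit in ~14 bits instead of 16
```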


----------



## bigshot

shortwavelistener said:


> So, are there any Mastered for iTunes music files that are smashed to the levels aka badly remastered?



I wouldn't be surprised if some crept in, but the standards for qualification as Mastered for iTunes would preclude that. You can google up the specs on the Apple site if you're interested.


----------



## castleofargh

The guidelines and main concern for the whole "Mastered for iTunes" label were basically to limit the chances of getting intersample clipping (which can occur more easily on lossy formats, and in general on lower-sample-rate PCM). So aside from Apple trying to brand anything they use as their own for money, the whole thing can pretty much be summarized as "we lower the gain slightly when needed". Which in itself isn't a bad idea - all lossy encoders should care about that (or all mastering engineers should stop sticking the signal at -0.1 dB). But it's subjectively less impressive than "Mastered for iTunes" ^_^.
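Intersample peaks are easy to demonstrate. A rough Python sketch using truncated sinc interpolation (the tone and buffer length are arbitrary illustrative choices): a tone at exactly fs/4 with a 45-degree phase offset puts every sample at about 0.707 of full scale, yet the band-limited waveform between the samples reaches 1.0 - so a master whose samples are normalized to 0 dBFS can overshoot by roughly 3 dB in the DAC's reconstruction filter.

```python
import math

FS = 44_100   # sample rate, Hz (illustrative; the effect is rate-independent)
N = 512       # number of samples in the test buffer

# A tone at exactly fs/4, sampled with a 45-degree phase offset: every sample
# lands at |sin(45 deg)| ~ 0.707 of full scale, but the underlying band-limited
# waveform peaks at 1.0 between the samples.
x = [math.sin(math.pi * n / 2 + math.pi / 4) for n in range(N)]
peak_sample = max(abs(v) for v in x)

def sinc_interp(t):
    """Band-limited interpolation at fractional position t (truncated to the buffer)."""
    total = 0.0
    for n in range(N):
        u = t - n
        total += x[n] * (1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u))
    return total

# The reconstructed waveform halfway between two samples, mid-buffer:
inter_peak = abs(sinc_interp(N / 2 + 0.5))
print(peak_sample, inter_peak)  # ~0.707 vs ~1.0: a ~3 dB intersample overshoot
```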


----------



## bigshot

Don't they have some sort of requirement about the master file format? Or am I mixing it up with MQA?


----------



## Monstieur (Jul 27, 2018)

PiSkyHiFi said:


> There is no discarding at the beginning.... The beginning being the point where the algorithm knows nothing of what to reduce or what is safe to discard without losing the accuracy of representation.  I mean ADPCM uses a differential signal to initially reduce the amount of bits required to represent the *same* digital signal as best as possible, only discarding information when unusual cases arise.
> 
> Imagine a single 1 kHz maximum-volume sine wave represented in 16-bit PCM: it rises and falls with each sample from zero to the upper limit (+32767), through the zero point and down to the lower limit (-32768), then back to zero, and repeats. ADPCM starts by storing the difference between each sample instead of the absolute value of the sample. The difference between consecutive samples is a much smaller number than the actual range of 16 bits.
> 
> ...


The aptX implementation of ADPCM does not use sufficient bits for anything close to lossless representation. It's not that audible to humans, but a spectrogram shows the loss immediately - and it's far worse than MP3 or AAC.


----------



## PiSkyHiFi

Monstieur said:


> The aptX implementation of ADPCM does not use sufficient bits for anything close to lossless representation. It's not that audible to humans, but a spectrogram shows the loss immediately - and it's far worse than MP3 or AAC.



I don't think any ADPCM codec does... it was initially a codec for transfer of voice.

I'd agree that at the same data rate, AAC is a lot more transparent - it's called advanced for a reason.

However, aptX HD is 576 kbps. This is about 40% of uncompressed redbook, which means the cleverness of AAC is less of an advantage, because you are throwing less away. AAC will have more issues with phase reproduction than aptX at this rate, for instance, because AAC completely deconstructs the original signal with a DCT and uses selective precision for different frequencies, while aptX is merely trying to approximate the original waveform in sub-bands. They are just different; but when the data rate is a large percentage of uncompressed, the differences between lossy codecs that cover the range of frequencies we can hear become smaller.
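The 40% figure is easy to verify against the Red Book rate:

```python
# Red Book CD audio: 44.1 kHz, 16-bit, stereo.
redbook_rate = 44_100 * 16 * 2          # 1,411,200 bit/s
aptx_hd_rate = 576_000                  # aptX HD's fixed rate

ratio = aptx_hd_rate / redbook_rate
print(f"{ratio:.1%}")                   # 40.8% of the uncompressed rate
```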


----------



## PiSkyHiFi

PiSkyHiFi said:


> I don't think any ADPCM codec does... it was initially a codec for transfer of voice.
> 
> I'd agree that at the same data rate, AAC is a lot more transparent - it's called advanced for a reason.
> 
> However, aptX HD is 576 kbps. This is about 40% of uncompressed redbook, which means the cleverness of AAC is less of an advantage, because you are throwing less away. AAC will have more issues with phase reproduction than aptX at this rate, for instance, because AAC completely deconstructs the original signal with a DCT and uses selective precision for different frequencies, while aptX is merely trying to approximate the original waveform in sub-bands. They are just different; but when the data rate is a large percentage of uncompressed, the differences between lossy codecs that cover the range of frequencies we can hear become smaller.



That does mean that for encoding redbook CD audio, aptX HD is a *lot* better than aptX.


----------



## Steve999 (Jul 27, 2018)

I knew I'd vaguely remembered this from somewhere. I've just got to say, I'm extremely grateful that someone took on the tremendous challenge of trying to get Sun Ra's catalog into the best possible shape. If that's all the good the Mastered for iTunes format and process and iTunes Plus AAC VBR encoding ever do, that's good enough for me. But I have a feeling it's done a lot more good than just this. I never put this all together before. I'm feeling really good about my Apple Music subscription now.

Here's the link:

https://jazztimes.com/news/sun-ra-music-archive-reissues-21-albums-exclusively-for-itunes/

And here's my limited cut-and-paste:


----------



## shortwavelistener (Jul 27, 2018)

Monstieur said:


> The aptX implementation of ADPCM does not use sufficient bits for anything close to lossless representation. It's not that audible to humans, but a spectrogram shows the loss immediately - and it's far worse than MP3 or AAC.



Like this?


----------



## bigshot

PiSkyHiFi said:


> AAC is a lot more transparent,



It's so transparent, it's transparent.... just like redbook and high bit/sampling rate audio and a bunch of other compressed codecs at a sufficient data rate. The advantage of AAC over other compression codecs is that it achieves transparency at a lower data rate than most other codecs.


----------



## PiSkyHiFi

bigshot said:


> It's so transparent, it's transparent.... just like redbook and high bit/sampling rate audio and a bunch of other compressed codecs at a sufficient data rate. The advantage of AAC over other compression codecs is that it achieves transparency at a lower data rate than most other codecs.



No mate... more transparent doesn't mean it is transparent at all. I've told you: transparent is when you don't know whether you're listening to a sound system or not.

You have absolutely no idea what I'm talking about, do you?

You mentioned sound science, but I see nothing scientific about comparing equipment and concluding that because compressed and uncompressed sources of the same content sound the same, the chain is transparent.

It's possible your ears aren't great; it's possible your mind isn't great; it's possible you were having a bad day.
It's possible the equipment was crap - or just not up to the demands of such a task.

The fact that you know it's a sound system is sufficient to say it is not transparent. That does not preclude degrees of transparency, which imply degrees of opacity.

Now pay attention, or I shall taunt you a second time.


----------



## PiSkyHiFi (Jul 27, 2018)

shortwavelistener said:


> Like this?



Is it possible that that ADPCM decoder is rendering to something other than 16 bit? It looks like the original was mastered with a low-pass filter into 16 bit, showing nothing above 21 kHz, probably deliberately.

It's as if the ADPCM file has aliasing right up to 22,050 Hz - either just noise from the codec, or possibly it was rendered into floating point or something.

I recall using WAV files that had details all the way up to 22,050 Hz and their AAC versions losing details above about 18.5 kHz - kind of the opposite of what I'm seeing here.

Edit: I guess you're just pointing out a nicely mastered recording having noise added by the ADPCM codec.

Try 576 kbps 24-bit ADPCM if you can and see what it shows. All we can really tell from the graph is where aliasing is added; accuracy of the signal is not revealed just by looking at which frequencies are present.

Edit again: not aliasing, sorry - I mean artifacts.


----------



## bigshot (Jul 28, 2018)

PiSkyHiFi said:


> No mate... more transparent doesn't mean it is transparent at all. I've told you: transparent is when you don't know whether you're listening to a sound system or not.



OK then: AAC 256 and other high-bitrate lossy codecs are *audibly identical* to redbook and HD audio. Once it hits the point that it's audibly identical, the sound quality can't get any better as far as human ears are concerned. The file just gets bigger.



PiSkyHiFi said:


> It looks like the original was mastered with a low-pass filter into 16 bit, showing nothing above 21 kHz, probably deliberately.



That isn't at all unusual. Super-audible frequencies in commercial music are undesirable; they're often filtered out in the mix. Ultrasonic frequencies can't improve sound quality, they can only degrade it. See the article in my sig, CD Sound Is All You Need.

By the way, AAC filters off frequencies above 18 kHz if you use 192 kbps. If you use 256 or 320, it goes up to the edge of human hearing.


----------



## PiSkyHiFi

bigshot said:


> OK then: AAC 256 and other high-bitrate lossy codecs are *audibly identical* to redbook and HD audio. Once it hits the point that it's audibly identical, the sound quality can't get any better. The file just gets bigger.



I would add the caveat that they are mostly audibly identical, because, as you have mentioned before, the variance in humans testing their own hearing abilities can be large.

Different humans have different hearing capabilities.
The same human can experience audio differently from one day to the next.
The function of memory in audio is not that well understood; even though there is much research, a lot of it conflicts, because when we discuss what a human can and can't hear in an audio piece, we are also studying their consciousness, which is a really tricky one.
The dynamic range of human hearing shifts - ears adjust dynamically to what they hear, and humans cannot hear everything at once.

All of that basically says that a study which finds that a bunch of people reported hearing no difference between lossless and lossy is actually going to have many uncertainties.

This is why I think it's important to have headroom, by storing above your hearing capability to a fair degree: make those files a bit larger than just barely adequate, to allow for potentially hearing things differently on different equipment at different times, even though it may not happen very often.

Also, as you said, there is no point compressing my collection if it's fine as it is as lossless and I can stream it, etc.


----------



## shortwavelistener (Jul 28, 2018)

PiSkyHiFi said:


> Is it possible that that ADPCM decoder is rendering to something other than 16 bit ? It's like the original was mastered with a low pass filter into 16 bit, showing nothing above 21KHz, probably deliberately.
> 
> It's like the ADPCM file has aliasing right up to 22050 Hz - either just noise from the codec or possibly rendered into floating point or something.
> 
> ...



BTW, I transcoded the original WAV PCM file into ADPCM using Adobe Audition 1.5. I don't know if it was originally rendered into floating point, but by default audio files loaded into Audition are rendered as 32-bit floating point. The ADPCM files are not dithered, though.


----------



## bigshot

PiSkyHiFi said:


> I would add the caveat that they are mostly audibly identical, because, as you have mentioned before, the variance in humans testing their own hearing abilities can be large.



There is a huge body of evidence that shows that isn't true when it comes to lossy. If there are any people walking the Earth that can discern a difference, they are very rare birds indeed, and they still can't prove that they can hear a difference to statistical levels that establish that conclusively. I posted a couple of controlled tests that showed that neither age, nor being a professional sound engineer, nor a golden ear audiophile, nor listening with fancy "resolving" equipment makes any difference at all. Beyond a certain point, lossy audio achieves transparency so that any attempt to discern it falls into a typical bell curve that represents random chance.

Human hearing abilities are well established. They've been established for nearly a century. The figures you see commonly quoted about audible frequencies and distortion levels are almost always best case scenarios using test tones. They are already overkill. With music, the threshold can be an order of magnitude lower. Numerous tests have been done comparing different codecs and formats. There is a consensus there, and it's easy to find out for yourself by doing some controlled listening tests.
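That "bell curve of random chance" has a precise shape: under pure guessing, the number of correct answers in an ABX run is binomial with p = 1/2, so you can compute exactly how often any score happens by chance (the 16-trial run below is an arbitrary example, not a score from any test cited here):

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of scoring at least `correct` out of `trials` by guessing alone."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12/16 looks impressive, but pure guessing produces it (or better) almost 4%
# of the time; a listener who genuinely hears no difference lands near 8/16.
p = abx_p_value(12, 16)
print(round(p, 4))  # 0.0384
```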

You can think that you can tell the difference, but you've never done a blind comparison. I offered you one and you said you weren't interested. You weren't interested in the controlled tests that I cited for you either. You've offered no proof to back up what you say, just semantic stuff about how your definitions of words are different than mine and smoke and mirrors about how "no one can ever know for sure". That says a lot.

I'm not angry or frustrated, just disappointed. I had hopes that you might actually listen. You clearly know some things about how audio works. But you refuse to accept any information from outside your sphere of comfort. That's a good way to build an impermeable intellectual bubble around your knowledge and not let anything new in.

And when I use the word impermeable, I don't mean "Maybe sometimes in certain cases it is permeable, but we can't ever really know for sure."


----------



## PiSkyHiFi

bigshot said:


> There is a huge body of evidence that shows that isn't true when it comes to lossy. If there are any people walking the Earth that can discern a difference, they are very rare birds indeed, and they still can't prove that they can hear a difference to statistical levels that establish that conclusively. I posted a couple of controlled tests that showed that neither age, nor being a professional sound engineer, nor a golden ear audiophile, nor listening with fancy "resolving" equipment makes any difference at all. Beyond a certain point, lossy audio achieves transparency so that any attempt to discern it falls into a typical bell curve that represents random chance.
> 
> Human hearing abilities are well established. They've been established for nearly a century. The figures you see commonly quoted about audible frequencies and distortion levels are almost always best case scenarios using test tones. They are already overkill. With music, the threshold can be an order of magnitude lower. Numerous tests have been done comparing different codecs and formats. There is a consensus there, and it's easy to find out for yourself by doing some controlled listening tests.
> 
> ...



We've been through all of this already. You would have to admit that, out of you and me, the one who insists we've reached the pinnacle of transparency and that's the end of the story - even though I can still always tell the difference between a sound system and a live performance - is the more closed-minded.

I draw the line at redbook; it has just enough overhead that I don't need to worry about re-encoding. I am free to focus on different aspects of the same recording without ever thinking: was that a compression artifact, or a bad master?
Even when I'm testing new equipment, I don't want to have to consider that question.

Most of the time I don't mind listening to both lossy or lossless because it is mostly audibly identical for certain.

Done.


----------



## PiSkyHiFi

shortwavelistener said:


> BTW i transcoded the original WAV PCM file into ADPCM using Adobe Audition 1.5. I don't know if it was originally rendered into floating point, but by default audio files loaded into Audition are rendered as 32-bit floating point. But the ADPCM files are not dithered.



I think it's just introduced artifacts: this particular master looks like it used a low-pass filter instead of dithering to avoid aliasing, and then the ADPCM messed up the high end a fair bit - quite a lot really; it looks worse than what I remember from comparing AAC. I was trying to find the test results I did ages ago; I'll keep searching.

This strengthens my choice of AAC over aptX at these rates.

I still prefer aptX HD for Bluetooth until it can do lossless; I think it's closer to the PCM than AAC is.


----------



## castleofargh (Jul 28, 2018)

PiSkyHiFi said:


> We've been through all of this already. You would have to admit that out of you and I, the one that insists that even though I can still always tell the difference between a sound system and a live performance, we've reached the pinnacle of transparency and that's the end of the story, is the more closed minded.
> 
> I draw the line at redbook, it has just enough overhead to mean I don't need to worry about re-encoding, I am free to focus on different aspects of the same recording without ever thinking, was that a compression artifact or bad master?
> Even when I'm testing new equipment, I don't want to have to even consider that question.
> ...


the differences between a live experience and a recording are much more than a file format. when you argue that way you use the same sort of fallacies you used to argue that psychoacoustic models of data suppression, and plain compression, could be reduced to comparing the data rate to prove one superior. each time you dismiss the most important elements to focus on the one that can make you look right.
if you can pass a blind test with AAC at 256kbps vs redbook where the difference isn't a change in loudness or in mastering(you need to be sure of both or the test means nothing), then you can claim and prove that AAC is indeed not transparent to your ear. that in turn would absolutely legitimize your need for a higher format.
but if you fail to do that, like @bigshot, I, and so many others have failed to do, you're not looking to get higher resolution for audible transparency, you're looking to get it because you want it. which is fine, just a very very important distinction.
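
The statistics behind such a blind test are easy to check. Here is a minimal sketch (plain Python; the trial counts are made-up examples) of the exact binomial test used to decide whether an ABX score actually beats guessing:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial test: probability of scoring at least
    `correct` out of `trials` ABX trials by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 14 correct out of 16 trials is very unlikely under guessing:
print(round(abx_p_value(14, 16), 4))   # ~0.0021
# ...while 10/16, however convincing it feels, is consistent with chance:
print(round(abx_p_value(10, 16), 4))   # ~0.2272
```

Roughly: below p = 0.05 the score is unlikely to be luck, and only then does the claimed audible difference carry any weight.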

with all said and done, my own BT headphone is far from providing transparency, let alone the resolution AAC has to offer. I notice background hiss on silent passages or in a quiet room at a quiet yet comfy volume level (the cause being a sub-par internal amp). I measure distortions well above -60dB (some of it is caused by my microphone, but I've measured stuff way below often enough to know that my BT headphone generates the higher levels I measure). I mention -60dB because that's the area where AAC usually starts removing elements that the algorithm estimates will be hidden by auditory masking anyway. anytime a headphone reaches 1% THD somewhere, that's extra signal at 40dB below the signal. -60dB is 0.1%, a level we consider really great for headphones. and I won't start on the flawed stereo, the non-flat FR and other issues still attached to typical headphone use to this day. if you want the sound like it was recorded, those are the things that need to be improved drastically, not the file format.

now don't get me wrong, when @Monstieur was saying that AAC was clearly superior to other stuff, I was almost contradicting him because I haven't seen clean controlled experiments demonstrating that it is consistently so. personally, I feel that the gears I use, including the source, still have a lot of impact on which setting will work best for certain uses. but that's mostly gut feelings, I haven't thoroughly tested anything, I wanted to look up battery consumption but never even got to test that properly. and now it's too hot for closed headphones so I'm back on IEMs until next nuclear winter(or just winter, whichever comes first). so to stop talking for no reason, I do expect gears where AAC is the obvious choice, but I also expect gear combos and listening conditions where APTx or APTxHD or the sony stuff, will just work better(if only because it's the only thing they have to offer and they're good enough to motivate their use anyway). I really don't know what's audibly best, and something most certainly is. I'm just being overly cautious when I read general claims based on anecdotal experiences or half objective arguments.


----------



## bigshot

I had hopes for this one. Sometimes people stumble in here with potential. I might have been able to learn something from him about codecs I'm not familiar with, like aptX. But with no interest in backing up what he says, and falling back on semantic arguments and deflection to validate his opinions, I don't see much hope. I redefined my wording to eliminate the word "transparent" that he wasn't able to understand, and he slipped back to "we can never know". If I provide evidence that we actually can know, he slips back to semantics. The clue has been given. You can lead a horse to water...


----------



## PiSkyHiFi

castleofargh said:


> the differences between a live experience and a recording are much more than a file format. when you argue that way you use the same sort of fallacies you used to argue that psychoacoustic models of data suppression, and plain compression, could be reduced to comparing the data rate to prove one superior. each time you dismiss the most important elements to focus on the one that can make you look right.
> if you can pass a blind test with AAC at 256kbps vs redbook where the difference isn't a change in loudness or in mastering(you need to be sure of both or the test means nothing), then you can claim and prove that AAC is indeed not transparent to your ear. that in turn would absolutely legitimize your need for a higher format.
> but if you fail to do that, like @bigshot, I, and so many others have failed to do, you're not looking to get higher resolution for audible transparency, you're looking to get it because you want it. which is fine, just a very very important distinction.
> 
> ...



"the differences between a live experience and a recording are much more than a file format"

I totally agree, and I am fairly certain the scientists that drew conclusions about AAC's "audible transparency" pretty much ignored their unknown 18-bit DAC from 1991, its cheap power supply, their Pioneer amp and B&W speakers or Sennheiser HD 450, in terms of whether they could even allow the differences to be heard - where does that leave audiophiles who know this is relevant?

You should go with your gut; it's merely saying there is some uncertainty at the boundaries, which is best avoided if you don't want to repeatedly ask yourself whether it really is enough every time you hear something new.


----------



## bigshot

A lot of us here have verified the transparency of high data rate AAC for ourselves. We aren’t depending on old or obsolete or cheap equipment. In fact I cited a published test that showed that AAC was audibly transparent on current high end equipment too. Did you read that?

You’re in the wrong forum for advising to go with your gut.


----------



## PiSkyHiFi

bigshot said:


> A lot of us here have verified the transparency of high data rate AAC for ourselves. We aren’t depending on old or obsolete or cheap equipment. In fact I cited a published test that showed that AAC was audibly transparent on current high end equipment too. Did you read that?
> 
> You’re in the wrong forum for advising to go with your gut.



Scientifically, one should provide error bars or a degree of certainty based upon assumptions, *if* you are making claims of relative certainty.

My claim is that the error wasn't included or analysed well enough to be certain about it, which means gut feeling is sufficient to cast doubt on how these things were done.

We're talking about an industry that for quite a while was mostly focused on getting AAC to sound better than the others at 96 kbps, because their standards for "audibly transparent" were mostly concerned with telephony.

Yes, I'd like to read that article - I missed the link, please direct me.


----------



## PiSkyHiFi (Jul 28, 2018)

PiSkyHiFi said:


> Scientifically, one should provide error bars or a degree of certainty based upon assumptions, *if* you are making claims of relative certainty.
> 
> I am the one making claims that the error wasn't included or analysed well enough to be certain about it, which means gut is sufficient to cast doubt about how these things were done.
> 
> ...



Also, what about that dynamic range argument: don't you think it makes a mockery of mastering to Redbook with dithering, to achieve a slightly higher dynamic range, if you then throw it away by remastering that dithered version to AAC?

I guess you could argue that mastering with dithering is overkill compared to the mostly audibly transparent AAC, but I think I've heard differences in mastering quality anecdotally and so I still err on the side of caution.

One cannot master to AAC without using PCM; it's impossible. Not saying you said that - I'm just saying AAC is an afterthought in mastering, and it's mostly about saving space.

Mastering to 24/96 and then using AAC, well, now we're talking - that's like using a 4K screen on 13 inch laptop because you just don't want to even know that pixels exist - bring it on I say, that's an excellent way to use AAC, much better than let's take redbook, make it slightly worse and then argue that redbook was overkill - not saying you said that @bigshot , I'm just saying.
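
The role of dither can be illustrated without any audio tooling at all. A toy sketch (plain Python; the 0.3 LSB level is a made-up example) showing why a properly dithered quantizer preserves sub-LSB information as noise instead of deleting it:

```python
import random

random.seed(1)

TRUE_LEVEL = 0.3          # a signal level of 0.3 LSB, below one quantization step

def quantize(x: float) -> float:
    return float(round(x))   # round to the nearest integer "sample value"

def tpdf_dither() -> float:
    # triangular-PDF dither: sum of two uniform variables in [-0.5, 0.5)
    return random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)

n = 100_000
plain    = sum(quantize(TRUE_LEVEL) for _ in range(n)) / n
dithered = sum(quantize(TRUE_LEVEL + tpdf_dither()) for _ in range(n)) / n

print(plain)               # 0.0  -- the 0.3 LSB signal is deleted entirely
print(round(dithered, 2))  # ~0.3 -- preserved as signal-plus-noise
```

This is the sense in which dithered 16-bit carries information below the nominal quantization step; whether that information is audible is the separate question being argued here.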


----------



## bigshot (Jul 30, 2018)

PiSkyHiFi said:


> Scientifically, one should provide error bars or a degree of certainty based upon assumptions, *if* you are making claims of relative certainty.



The best way to find out for yourself for certain is to do a controlled listening test yourself. Then you don't need to worry about statistics or uncertainty. You'll know.

For lots of great articles on listening tests of various parameters of sound equipment, see the first post in Testing Audiophile Claims and Myths. It's the most useful and interesting post in this forum. https://www.head-fi.org/threads/testing-audiophile-claims-and-myths.486598/



PiSkyHiFi said:


> Also, what about that dynamic range argument, don't you think it makes a mockery of mastering to redbook using dithering to achieve a slightly higher dynamic range, if you then throw it away by remastering to AAC from this dithered version?



Whenever you see specs, it's really good to know how those relate to what you actually hear. Do you know what the audible threshold is for a noise floor under music? And do you know what the dynamic range of the most dynamic recorded music is? Both of those figures are between -40dB and -50dB. The dynamic range of AAC 256 VBR is more than enough to achieve audible transparency.

Redbook is overkill. It was designed to cover every extreme and unlikely circumstance. 16 bit was selected for the convenience of the math, not because 16 bit was actually required. The threshold of transparency is somewhere between 12 and 14 bits.

When you work with sound and test it and look at the measurements in relation to how it actually sounds, you have context. Looking at numbers on a page and assuming that bigger is always better is just plain wrong. Human hearing has definite limits. You can shove higher quality sound and bigger numbers into your ears, but it won't sound any different.

Audiophiles know a lot about technical stuff. They read white papers, they compare numbers, they do math to calculate differences... but it's all theory because they spend no time at all researching the thresholds of human perception. That's what we're talking about here. Stuff you can and can't hear. Not abstract numbers. Everyone knows that "lossy" has less information than "lossless". But the information that is missing is inaudible. You can't hear it. That means that to human ears, the sound of audibly transparent lossy and lossless is the same.
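
The bit-depth figures being discussed map onto dynamic range through a standard rule of thumb; a quick sketch (plain Python, the usual 6.02·n + 1.76 dB formula for an ideal n-bit quantizer driven by a full-scale sine):

```python
def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal n-bit quantizer with a full-scale
    sine input: 6.02 * n + 1.76 dB."""
    return 6.02 * bits + 1.76

for bits in (12, 14, 16, 24):
    print(f"{bits:2d} bits -> {dynamic_range_db(bits):6.2f} dB")
# 12 bits -> ~74 dB, 14 -> ~86 dB, 16 (Redbook) -> ~98 dB, 24 -> ~146 dB
```

So 16-bit Redbook gives roughly 98 dB, and a 12-14 bit transparency threshold corresponds to roughly 74-86 dB, far beyond a -40 to -50 dB program noise floor.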


----------



## PiSkyHiFi

bigshot said:


> The best way to find out for yourself for certain is to do a controlled listening test yourself. Then you don't need to worry about statistics or uncertainty. You'll know.
> 
> For lots of great articles on listening tests of various parameters of sound equipment, see the first post in Testing Audiophile Claims and Myths. It's the most useful and interesting post in this forum. https://www.head-fi.org/threads/testing-audiophile-claims-and-myths.486598/
> 
> ...



You haven't listened to anything I've said about the scientific limits of ABX; you come across like a zealot. If this is all you have, I don't need it. I feel we're done.


----------



## bigshot (Jul 30, 2018)

Blind testing with direct switchable line level matched comparison is the best way to determine if there is an audible difference between two samples. You can think differently about it, but you're wrong. That's how controlled testing works. It's designed to minimize the effects of bias and problems with auditory memory.

You're in Sound Science. In this forum we get to demand proof. A good way of providing that is to point to published controlled tests (which I've done to back up my opinion) and to do tests yourself (which I offered to help you with). Just pointing to abstract numbers and ignoring the thresholds of human auditory perception doesn't cut the mustard. We both agree that lossless contains data that lossy doesn't. Our disagreement is whether that data is audible. I've offered proof that it isn't. You've just pointed back at the numbers.

We're not going in circles because I'm ignoring what you say. The opposite is true.


----------



## PiSkyHiFi

bigshot said:


> Blind testing with direct switchable line level matched comparison is the best way to determine if there is an audible difference between two samples. You can think differently about it, but you're wrong. That's how controlled testing works. It's designed to minimize the effects of bias and problems with auditory memory.


I have already been through all of this. Please stop now. We disagree and I have nothing more to direct towards you. I'm here for specific issues.
I'm not going to say you're wrong because it's what you believe and you have apparently thrown away your originals, so be happy with what you have and leave the rest of us to explore beyond simplistic reductionist analysis.
Seriously, enjoy the music.


----------



## bigshot

I haven't thrown away the originals. The CDs are in boxes on steel shelving in my garage. I can compare them any time I want. But there isn't any point because I already did extensive controlled comparison tests and I determined that AAC 256 VBR is audibly transparent (which to me means no audible difference between lossless and lossy)... Earlier in the thread you conceded that high data rate AAC was indistinguishable from lossless, and I asked you what audible benefit there was in lossless... and you went back to tangents about definitions of words and abstract theories about numbers that were unrelated to audibility. That is what keeps sending us in circles. You never address my question. What is the audible benefit of lossless when lossy sounds exactly the same?


----------



## SomeGuyDude

bigshot said:


> I had hopes tor this one. Sometimes people stumble in here with potential. I might have been able to learn something from him about codecs I'm not familiar with like Aptx. But with no interest in backing what he says up, and falling back on semantic arguments and deflection to validate his opinions, I don't see much hope. I redefined my wording to eliminate the word transparent that he wasn't able to understand, and he slipped back to "we can never know". If I provide evidence that we actually can know, he slips back to semantics. The clue has been given. You can lead a horse to water...



TBH what I'm more interested in is the transition from one compression codec to another BT transfer codec, i.e. OGG -> aptX vs mp3 -> AAC vs AAC -> aptX vs AAC -> AAC

I don't give a crap about compression beyond that, I'm just wondering how they all do via those transfers.
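
Real answers need listening tests with actual encoders, but a back-of-the-envelope model helps frame the question: if each lossy hop adds roughly independent coding noise, the noise powers add. A toy sketch (plain Python; the 60 dB per-hop figure is an invented illustration, not a measured codec spec):

```python
import math

def cascade_snr_db(hop_snrs_db: list) -> float:
    """Combined SNR after a chain of lossy steps, assuming each hop adds
    independent noise: total noise power is the sum of per-hop noise powers."""
    noise_power = sum(10 ** (-snr / 10) for snr in hop_snrs_db)
    return -10 * math.log10(noise_power)

# hypothetical per-hop SNRs, purely illustrative:
single_hop = cascade_snr_db([60.0])            # one encode
double_hop = cascade_snr_db([60.0, 60.0])      # e.g. AAC file -> BT re-encode
print(round(single_hop, 1))  # 60.0
print(round(double_hop, 1))  # 57.0  (two equal hops cost ~3 dB)
```

Real codecs are messier (their noise is shaped, not independent, and one codec can partly mask another's artifacts), but it shows why source-codec -> Bluetooth-codec chains generally end up somewhat worse than either step alone.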


----------



## mhoopes (Aug 9, 2018)

zviratko said:


> I think you are confusing two different things - the source bitrate (and codec) and Bluetooth A2DP bitrate+codec.
> 
> All bluetooth devices I've seen used AAC VBR @ 256kbit (forcing CBR causes failure), and I've never seen them exceed 256kbit (but I've seen them transferring less).
> I wasn't able to find any definiitive specs on this, so it's possible that >256kbit can be supported, or that some devices support CBR.
> ...



I'm jumping in a little late...but I'm on the iOS platform, and am preparing for my post-iPhone 6s Bluetooth future. As such, codec handling becomes the major quality bottleneck.

I was also curious about audio mixing in iOS. When listening to music with the Music app, alerts are audible, but the music is smoothly muted when the alert is played. I've found that most system sounds (keyboard clicks and the lock sound, specifically - and I verified they're enabled) are suppressed entirely if the ringer switch is set to "off".

I took a look at the "Accessory Design Guidelines for Apple Devices" document [link]. Here's what they have to say on pp. 56-57 about the differentiation of system sounds and music:

"*9.12.2.1 Differentiating Audio Content from System Sounds*
Music-like content can be differentiated from system sound by adding support for Audio/Video Remote Control Profile (AVRCP) version 1.3 or later. The AVRCP profile allows an accessory to be aware of the audio playback state in the Apple device, using notifications. See Audio/Video Remote Control Profile (AVRCP) (page 53).

*9.12.2.2 Expected Audio Routing Behavior for A2DP*
The accessory should tune its audio routing behavior based on audio content over the A2DP channel. If audio data contains music, then it is expected that the accessory speakers are dedicated to audio data coming via the Bluetooth link and any other audio playback is paused. If audio data contains system sound, then it is expected that the accessory can render audio as desired. If the accessory is playing audio from a different source, then system sound data can be mixed with the existing track for playback; it is not necessary to pause existing audio playback on the device."

Music-only scenario:
When the phone and receiver volume levels are not synchronized, it's recommended that the phone volume be set to 100% to maintain the maximum dynamic range. I've heard that when the volumes are synchronized (meaning there is a single volume adjustment that can be made on either transmitter or receiver), the Bluetooth stream is transmitted at 100% volume.
I don't know what kind of transcoding iOS is doing in non-synchronized volume control mode. They're not necessarily mixing other sounds in, but if not, is there still a final MPEG-4 AAC conversion involved there, and is it happening even at 100% phone volume?

It's not clear to me from the Guidelines document, and a preceding figure on p. 57 mentions "mixed in A2DP audio playback" on the Device (phone) side.


----------



## shortwavelistener

SomeGuyDude said:


> TBH what I'm more interested in is the transition from one compression codec to another BT transfer codec, i.e. OGG -> aptX vs mp3 -> AAC vs AAC -> aptX vs AAC -> AAC
> 
> I don't give a **** about compression beyond that, I'm just wondering how they all do via those transfers.



I'm also curious about these as well.


----------



## Vin60 (Oct 29, 2018)

We just had an overwhelming discussion about this subject on a Russian forum and it was really a battle - all the Android users insisted AAC over Bluetooth is irrelevant, while I, as an iPhone user, insisted I do not hear any difference between AAC from Apple Music and FLAC over a USB external DAC. Finally we made an audio recording through a computer audio input to see the spectrogram. The source was an iPhone Bluetooth transmission to external DACs (FiiO BTR3 and then BTI-031), as well as a Huawei P20 Pro plus AiAiAi TMA-2 with detachable cable to check the aptX formats and LDAC.
It appears that a 256k AAC file, which has full frequency coverage up to 22 kHz, is transmitted by the iPhone over Bluetooth with a frequency cap at 19 kHz (which corresponds to 192 kbit/s MP3), while the Android phone caps it at 14 kHz (which is worse than 128 kbit/s MP3), and aptX preserved the full bandwidth.
This is very unfortunate, as it means you cannot have even 256 kbps AAC over Bluetooth with Apple products, and you need an expensive high-quality external DAC to have proper sound over USB. The aptX codecs should be the definite preference for Android users, and they should avoid AAC transmission by all means. Also, it's easier and cheaper for Android users to get proper sound from their devices.


----------



## bigshot

Is it capping the frequencies, or is it actually downsampling? The difference I hear between AAC192 and 256 isn't the frequency range, it's a difference in the level of artifacting.


----------



## Vin60

bigshot said:


> Is it capping the frequencies, or is it actually downsampling? The difference I hear between AAC192 and 256 isn't the frequency range, it's a difference in the level of artifacting.


I vote for downsampling - I think the iPhone decodes the original AAC, adds volume info + system sounds, and then re-encodes it, but at a lower bitrate. This is only an assumption though.


----------



## bigshot

Interesting. I guess I usually use AirPlay and that would play AAC 256 as AAC 256.


----------



## Vin60

bigshot said:


> Interesting. I guess I usually use AirPlay and that would play AAC 256 as AAC 256.


Indeed, because it does not play the file but just sends it over WiFi to be decoded and played on the other device. Technically Bluetooth does the same, but due to much lower bandwidth it uses dedicated codecs for transmission.


----------



## bigshot (Oct 29, 2018)

Maybe the bluetooth format is nearing its sunset. Either that or people just don't care or think it makes enough of a difference. In 99% of the music I play, the difference between AAC192 and AAC256 wouldn't be audible. In fact, I've only found one album where I can actually hear the difference.


----------



## Vin60

bigshot said:


> Maybe the bluetooth format is nearing its sunset. Either that or people just don't care or think it makes enough of a difference. In 99% of the music I play, the difference between AAC192 and AAC256 wouldn't be audible. In fact, I've only found one album where I can actually hear the difference.


It's obviously at its dawn, keeping in mind aptX HD, aptX LL and LDAC, which are more than enough to send any existing sound-quality file over Bluetooth. Only Apple is screwed so far.
Please note 192 kbps MP3 is not the same as 192 kbps AAC, and if your headphones are good enough you can easily hear the difference between a file capped at 22 kHz and a file capped at 19 kHz.


----------



## Monstieur (Oct 29, 2018)

Vin60 said:


> I vote for downsampling - I think iphone decodes original ACC, adds volume info + system sounds and then encodes it, but with lower bitrate. This is only assumption though..


I've tested multiple receivers and found that macOS uses AAC at 320 kb/s if aptX is disabled. Can you confirm whether the downsampling of the music stream occurs on macOS as well? Both iOS and macOS resample to 44.1 kHz 16-bit before AAC encoding.

The 22.05 kHz bandwidth is not meant to be used fully in the first place. There is a buffer of 2 (or 3?) kHz in most analog and digital workflows to allow for aliasing and gradual filter slopes, which would explain only up to 19 kHz being transmitted.

If no filters are being applied, the full 22.05 kHz signal could be transmitted, but the system resamples all audio streams and if some of them are at 48+ kHz sample rates or higher bit depths, it would need to resample while respecting the buffer. Even the act of mixing the music stream with silent system audio would probably necessitate the buffer.
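
The guard-band argument can be made concrete: a realizable low-pass/anti-alias filter needs a finite transition band, and the narrower that band, the longer (and more expensive) the filter. A sketch using the standard Kaiser-window FIR length estimate (plain Python; the 19 kHz cutoff and 80 dB stopband are assumed figures for illustration):

```python
import math

def kaiser_fir_length(atten_db: float, trans_hz: float, fs: float) -> int:
    """Kaiser-window estimate of FIR low-pass length for a given stopband
    attenuation A (dB) and transition bandwidth: N ~ (A - 7.95) / (2.285 * dw)."""
    d_omega = 2 * math.pi * trans_hz / fs
    return math.ceil((atten_db - 7.95) / (2.285 * d_omega))

fs = 44_100.0
# cutting off at 19 kHz leaves a ~3 kHz transition band up to Nyquist (22.05 kHz):
print(kaiser_fir_length(80.0, 22_050.0 - 19_000.0, fs))
# squeezing the same filter right up against Nyquist (300 Hz transition) costs ~10x the taps:
print(kaiser_fir_length(80.0, 300.0, fs))
```

A ~3 kHz transition band keeps the filter short and cheap, which is consistent with encoders rolling off around 19-20 kHz rather than right at Nyquist.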


----------



## Vin60

Monstieur said:


> I've tested multiple receivers and found that macOS uses AAC at 320 kb/s if aptX is disabled. Can you confirm whether the downsampling of the music stream occurs on macOS as well? Both iOS and macOS resample to 44.1 kHz 16-bit before AAC encoding.


All the Apple Music files are AAC 256 kb/s, so frankly speaking, I have no clue why macOS would upsample them. Indeed they resample to 44.1 before encoding, but the result of this new encoding, transmitted over Bluetooth, is capped at 19 kHz. Theoretically it's possible this is an external DAC problem; however, the very same DACs provide full bandwidth for an aptX signal, so I doubt this is the case.


----------



## Jaywalk3r

Vin60 said:


> … if your headphones are good enough you can easily hear the difference between file capped at 22khz and file capped at 19 Khz



More importantly, one's ears need to be sensitive enough. Most adults can't hear 19 kHz, and few humans at all can hear over 20 kHz. (The difference between those frequencies, in terms of notes, is tiny.) None of it would be "easily heard".


----------



## Monstieur

Vin60 said:


> All the Apple Music files are AAC 256 kb/s, so frankly speaking, I have no clues, why Mac Os would upsample them? Indeed they resample to 44,1 before encoding, but then result of this new encoding transmitted over bluetooth has cap at 19Khz. Theoretically its possible this is external dac problem, however the very same DACs provide full bandwidth for aptX signal, so I doubt this is the case.


Increasing the bit rate is not upsampling... It just reduces the chance of recompression artifacts.
I edited my previous post explaining the 19 kHz limit.


----------



## bigshot

Vin60 said:


> if your headphones are good enough you can easily hear the difference between file capped at 22khz and file capped at 19 Khz



Not likely. The difference between 19 kHz and the upper limit of human hearing is a small fraction of a note on the musical scale. Most people over 30 can't even hear up to 19 kHz. And it doesn't matter anyway, because recorded music generally doesn't have any audible content up there.
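
Putting a number on that interval (plain Python; the musical distance between two frequencies is 12·log2(f2/f1) semitones):

```python
import math

def semitones(f1: float, f2: float) -> float:
    """Musical interval between two frequencies, in equal-tempered semitones."""
    return 12 * math.log2(f2 / f1)

print(round(semitones(19_000, 20_000), 2))   # ~0.89 semitone
print(round(semitones(20, 20_000), 1))       # ~119.6 semitones, i.e. ~10 octaves of hearing
```

Under one semitone, out of roughly ten octaves of audible range, and located where most adults hear nothing at all.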


----------



## Vin60

Jaywalk3r said:


> More importantly, one's ears need to be sensitive enough. Most adults can't hear 19KHz, and few humans at all can hear over 20KHz. (The difference between those frequencies, in terms of notes, is tiny.) None of it would be "easily heard".



You are absolutely right - I personally do not hear 20 kHz. However, downsampling means not only a frequency cap but degraded detail as well; otherwise it would be too simple.


----------



## bigshot

Have you tested to find out what your threshold of transparency is for various codecs and data rates?


----------



## Vin60

bigshot said:


> Have you tested to find out what your threshold of transparency is for various codecs and data rates?


Not sure I understand - frequency caps are not the same thing as data rates or codecs. If you mean do I hear a difference between MP3 256 and MP3 320 - sometimes yes, sometimes no, and I do not hear any difference between FLAC and AAC 256.


----------



## bigshot

You mentioned that you heard degrading of details, I was just curious where your line of transparency was where there is no degradation any more.


----------



## shortwavelistener (Oct 30, 2018)

bigshot said:


> Interesting. I guess I usually use AirPlay and that would play AAC 256 as AAC 256.



Actually, that's because AirPlay encodes the audio stream in ALAC (the same goes for its predecessor, the AirPort Express, too). Since ALAC is a lossless format just like FLAC, AirPlay retains all the information of the audio file, regardless of the type of audio file played from the source.


----------



## bigshot

I didn't know that! Thanks!


----------



## Vin60

bigshot said:


> You mentioned that you heard degrading of details, I was just curious where your line of transparency was where there is no degradation any more.


It really depends on content, but I'm sure everyone can hear the difference between MP3 at 128 kbps and MP3 at 192 kbps. Keeping in mind Android encodes AAC over Bluetooth at less than 128 kbps MP3 quality (I presume it's 96 kbps), Android users should definitely stick to aptX or its higher equivalents.


----------



## AKGForever (Oct 30, 2018)

bigshot said:


> Mastered for iTunes means that the source is a studio quality master (i.e.: 24/96), and the encoding is customized to make it as efficient and high quality as possible. But the end result is still AAC 256 VBR. That isn't a bad thing though, because AAC 256 VBR is audibly transparent. With human ears, you won't be able to discern it from the master.



What does "Mastered for iTunes" mean when the source was originally recorded in analog? Unfortunately I found nothing in the Apple description saying that it has to come from original source material. CDs and AAC files made from sources recorded in the 1960s through the 1980s are all over the place quality-wise. Some of it is poor recording technique, but a lot of it is because the masters are multi-generational. I also find that compilations, especially those from various artists, tend to be of poorer quality and from a multi-generational source.


----------



## bigshot (Oct 30, 2018)

All content distributors are at the mercy of what master the label pulls off the shelf. And "best ofs" and compilations are going to tend to be a hodgepodge of sources.


----------



## pstickne (Jun 10, 2019)

I found https://www.soundguys.com/understanding-bluetooth-codecs-15352/ interesting. Some points:
- _Bluetooth_ AAC is highly dependent on the specific encoding stack's implementation (and pre-encoded AAC streams do _not_ appear to be sent as they are). It is effectively only [almost] "CD quality" on iPhone devices. Naturally, it will never have more information than the original source.
- _Bluetooth_ AAC is psychoacoustic whereas aptX is not - so AAC is more akin to an inline 'conversion to MP3' (offending anyone a bit?). This is also why AAC requires more power/processing than SBC/aptX, and why comparing 'Bluetooth _transmission_ bitrates' between different protocols can be misleading.


----------



## pstickne (Jun 10, 2019)

Monstieur said:


> ..aptX HD, LDAC, DSD, MQA etc. are gimmicks..



It's a shame really, as the DSD _encoding_ was designed as an alternative/improvement to PCM encoding. Excluding licensing opportunities and the like, one reason was _to make hardware simpler and cheaper_. (And there isn't anything wrong with that goal.)

It's the re-introduction as a "Hi-Fi" badge, as opposed to a data-representation format, that is the gimmicky bit - higher numbers are better? I guess one can always convert a CD (e.g. "a" PCM-encoded source) to DSD and call it a day... same data, different pattern of bits.


----------



## djyang0530

iOS only supports AAC, and there's a huge difference between AAC and aptX, since aptX can send 16/44.1.


----------



## pstickne (Jun 20, 2019)

djyang0530 said:


> ios only support aac and aac and aptx has a huge differncen since aptx can send 16/44.1


Uhh, AAC “can send” 16/44 as well. However, such statements add little to no value (see https://en.wikipedia.org/wiki/Audio_bit_depth). Remember that both AAC and aptX are lossy and neither is PCM; one has to look at the quality of the resulting decoded (PCM, DSD, whatever) signal**.

Also, AAC is arguably a more transmission-efficient codec (for human consumed audio) as it utilizes stronger psychoacoustic principals so even looking at BT wire bandwidth usage can be very misleading.

**As noted previously, there is a wide range of differences in the quality of AAC encoders, with many Android implementations being subpar.
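The "can send 16/44.1" point can be made concrete with a little arithmetic: 16/44.1 describes the input format, not the data that survives encoding. A quick sketch (the codec figures are the commonly quoted nominal bitrates, not guarantees):

```python
def pcm_bitrate_kbps(bit_depth: int, sample_rate_hz: int, channels: int) -> float:
    """Raw PCM data rate in kbps (1 kbps = 1000 bits/s)."""
    return bit_depth * sample_rate_hz * channels / 1000

cd_pcm = pcm_bitrate_kbps(16, 44_100, 2)
print(f"16/44.1 stereo PCM: {cd_pcm:.1f} kbps")   # 1411.2 kbps

# Commonly quoted nominal codec rates (assumptions, not measurements):
for name, kbps in [("SBC HQ", 328), ("AAC", 256), ("aptX", 352), ("aptX HD", 576)]:
    print(f"{name:8s} carries roughly {kbps / cd_pcm:.1%} of the raw PCM rate")
```

Every codec listed is discarding the vast majority of the raw bits, so "accepts 16/44.1 input" tells you nothing about which one throws away the *audible* bits more gracefully.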


----------



## castleofargh

https://habr.com/en/post/456182/

I don't know enough to say whether everything in this article is correct, but the little I know seemed to align.
It certainly is very informative.


----------



## pstickne (Jun 20, 2019)

castleofargh said:


> https://habr.com/en/post/456182/
> 
> I don't know enough to say if everything in this article is correct, but the little I know seemed to align.
> it certainly is very informative.


Thanks for the link, it is great consolidated reading!

And this bit makes me a little sad:

"AAC has many extensions to the standard encoding method. One of them—Scalable To Lossless (SLS)—is standardized for Bluetooth and allows you to transfer lossless audio. Unfortunately, _no SLS support could be found on existing devices_. An extension to reduce transmission delay, _AAC-LD (Low Delay), is not standardized for Bluetooth_." So much more could be had.

"The situation with AAC is ambiguous: on one hand, _theoretically, the codec should produce quality that is indistinguishable from the original_, but in practice, judging by the tests of the SoundGuys laboratory on different Android devices, this is not confirmed. Most likely, the fault is on low-quality hardware audio encoders embedded in various phone chipsets. It makes sense to use AAC only on Apple devices; with Android you'd better stick with aptX/HD and LDAC."

And some conclusions from the article:

"People who do not hear the difference between codecs while testing via a web service claim they hear it when listening to music with Bluetooth headphones. Unfortunately, that is not a joke or a placebo effect: the difference is really audible, *but it is not caused by difference in codecs.*"

"The marketing of alternative codecs is very strong: aptX and LDAC are presented as a long-awaited replacement of the “outdated and bad” SBC, which is far from as bad as it is commonly thought of." .. "As it turned out, _the artificial limitations of Bluetooth stacks on SBC can be bypassed, so that the SBC will be on par with aptX HD_."


----------



## SergeSE

According to my recent research SBC at high bitrates (SBC XQ) is on a par with aptX HD.
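For anyone curious where figures like SBC's standard 328 kbps (and the higher "SBC XQ" rates) come from, they fall out of the A2DP specification's frame-length formula as a function of the bitpool parameter. A sketch of that arithmetic for the joint-stereo case at 44.1 kHz (the formula is from the A2DP spec; the bitpool values are the commonly quoted ones):

```python
import math

def sbc_bitrate_kbps(bitpool: int, sample_rate_hz: int = 44_100,
                     subbands: int = 8, blocks: int = 16) -> float:
    """A2DP SBC bitrate, joint-stereo frame-length formula from the spec."""
    channels = 2
    # Frame = 4-byte header + packed scale factors + quantized subband samples.
    frame_bytes = (4
                   + (4 * subbands * channels) // 8
                   + math.ceil((subbands + blocks * bitpool) / 8))
    return 8 * frame_bytes * sample_rate_hz / (subbands * blocks) / 1000

print(sbc_bitrate_kbps(53))   # default "High Quality" bitpool -> ~328 kbps
print(sbc_bitrate_kbps(76))   # a raised, "SBC XQ"-style bitpool -> ~455 kbps
```

The point the habr article makes is that the bitpool cap is a stack configuration choice, not a codec limit: raise it and SBC's bitrate lands in aptX HD territory.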


----------



## SVO

"Most wireless audio devices have a maximum bitrate of 320 kbps for AAC, some support only 256 kbps. Other bitrates are extremely rare.
AAC provides excellent quality at 320 and 256 kb/s bit rates, but is prone to generation loss on already compressed content, however it’s difficult to hear any differences between the original and AAC 256 kb/s on iOS, even with several consecutive encodings. For MP3 320 kbps encoded into AAC 256 kbps the loss can be neglected."

As an iOS user who has done some fairly critical testing, this is exactly what I found.  An ALAC rip to AAC produces output that is indistinguishable from lossless wired, at least for me (and likely most).  I could hear degradation when MP3 rips below 320 kbps were transcoded to AAC.


----------



## bigshot

Generation loss may not be the best way to describe it, because the loss occurs in transcoding from one codec to another. You can decode AAC to WAV and re-encode it a whole bunch of times with little to no audible loss. The trouble comes when you transcode from MP3 to AAC.
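A toy model of why this happens (a simple quantizer standing in for a real codec; real codecs are not perfectly idempotent, but the mechanism is the same): re-applying the *same* lossy step changes nothing once the signal sits on that codec's "grid", while alternating between two *different* lossy steps keeps discarding new information.

```python
def quantize(samples, step):
    """Toy 'lossy codec': snap each sample to the nearest multiple of step."""
    return [round(s / step) * step for s in samples]

def max_err(a, b):
    """Worst-case per-sample error between two signals."""
    return max(abs(x - y) for x, y in zip(a, b))

src = [113, 257, 391, 842]       # stand-in integer sample values

once  = quantize(src, 5)         # "encode" with codec A
again = quantize(once, 5)        # decode and re-encode with codec A
assert again == once             # same codec: idempotent, no new loss

cross = quantize(once, 7)        # transcode codec A -> codec B
print(max_err(src, once))        # error after one codec
print(max_err(src, cross))       # larger error after transcoding
```

With these numbers the single-codec error is 2 and the transcoded error grows to 5: each codec snaps the signal to a grid the other codec then disturbs again.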


----------



## Verificateur

Hi everyone! I really hope I am in the right place to be asking this question, as I haven’t been able to find a definitive answer. 

Question:  Will I notice a tangible improvement in sound quality when using _*Amazon Music HD*_ as source with Bluetooth headphones that support AptxHD (eg Sennheiser Momentum 3) on an Android Phone (eg LG V30) vs. Same headphones on an iPhone X (which will be AAC)? 

Thank you so much in advance for any input!


----------



## plakat

Verificateur said:


> Hi everyone! I really hope I am in the right place to be asking this question, as I haven’t been able to find a definitive answer.
> 
> Question:  Will I notice a tangible improvement in sound quality when using _*Amazon Music HD*_ as source with Bluetooth headphones that support AptxHD (eg Sennheiser Momentum 3) on an Android Phone (eg LG V30) vs. Same headphones on an iPhone X (which will be AAC)?
> 
> Thank you so much in advance for any input!



I don’t think so (many will tell you otherwise, though). AAC on the iPhone is well implemented and sounds very good (especially on the go). But I‘ve read that AAC implementations on Android may have problems, so on that platform you may be better off with aptX.

That being said, I like AAC for its lower required bandwidth and therefore more stable connections in crowded areas (mostly not a problem at the moment, obviously...). aptX HD in particular was quite problematic in my tests, and I’ve also heard from people with aptX connections who had to switch the side on which they carry their phone to get a stable connection (the phone had to be on the same side as the Bluetooth receiver in their headphones...)


----------



## Skorpion1904

> rtings.com/headphones/tests/active-features/latency



This link is broken :\ My suggested alternative is updated weekly.
Hope this can be useful for someone


----------



## neil74

Good to see this thread is still going; it is something I have been thinking over a lot again recently.  I usually have an iDevice and a droid that I switch between, and I think in some ways Android has actually got worse since I started this thread.

AAC is bad on Android; it always bugged me how terrible it sounded, and when I saw that SoundGuys article it all made sense.  LDAC is still a little broken in that, other than the P40 Pro, no phone I have used lets you switch away from Best Effort and keep the setting, and Samsung phones cannot do 990 mode at all.  Best Effort varies between devices, with many choosing 330 mode, and whatever your view on LDAC, it is pretty much only available on Sony headphones.  Lastly, aptX HD is not available on many phones now, so if you have non-Sony cans your choices are limited to standard aptX, AAC or SBC.

I have settled on standard aptX with Android, but honestly, to my untrained ears Bluetooth audio just sounds better on an iPhone, which is crazy really.


----------



## NorthUtsire

Re. the iPhone/Android, AAC/aptX question and which sounds better: I haven't had a chance to compare. However, there are a few observations I can share:

Android, like Windows PCs, does a degree of audio processing buried deep in the OS kernel; this includes a mixer and sample-rate converter and allows multiple sources to play at the same time, i.e. notifications can play over the music you're listening to. However, this processing tends to affect audio quality, sometimes quite seriously. I don't know if this is also true of iPhones.
Years ago I freelanced as an engineer getting a well-known radio station live on air. We were using a networked audio player to play encoded promos, jingles and commercials to air, and were worried about how transcoding the audio from one codec to another would affect audio quality. We transcoded between AAC, MP3 and aptX. aptX came out as the definite winner on grounds of audio quality: transcoding to aptX had a far lower impact on audio quality than to any of the other formats.
On iPhones it makes sense to use AAC, this being the format used by iTunes and so the likely source of your music. On Android it makes sense to use aptX, as there is likely to be a wider range of source codecs. I haven't tested which sounds best when encoding FLAC, ALAC or uncompressed audio, however.
On my Android devices, using a player that bypasses the internal processing and provides bit-perfect playback makes a big difference. I recently bought a pair of Cambridge Audio Melomania 1+ TWS earphones to replace a broken pair of Sennheisers and was initially rather disappointed in the quality; however, listening via a bit-perfect player they sounded far better: more definition between instruments and notes, a better sense of room acoustics, etc. This was true on my old Sony X Compact as well as my work Galaxy Note 20 Ultra. BTW, this improvement was also obvious on my Sennheisers, especially via the headphone jack on the Sony when using wired cans.
Re. 256k being indistinguishable from uncompressed audio: on a good recording I can most definitely hear a difference, but I need to be listening on a good system with good source material. It is fairly obvious when listening on HD650s via an MDAC, not so much via a phone and the HD650s using a bit-perfect player.
A couple of years back I attended a conference on broadcast audio and the new R128 audio control standard, part of which makes provision for lossy compression, which can increase peak audio levels due to ringing in the band-pass filters used in the compression codecs. One of the speakers demonstrated the effect of lossy compression by playing uncompressed then compressed audio, starting at 320k and going down to 96k, via a pair of studio monitors; the difference was certainly noticeable even at 320k. He then played what had been removed from the original audio during compression. It was fascinating: listening to what was removed as the compression was increased, you could hear the correlation between the removed audio and what was missing in the compressed output.

If anyone's interested, it's possible to try this at home using recording software such as Audacity. Place the source file on tracks 1 & 2 and the compressed tracks on 3 & 4, reverse the phase of one pair of tracks, then mix tracks 1 & 3 and tracks 2 & 4; this will subtract one stereo pair from the other, leaving the difference signal. You will need to play with levels and synchronisation to get the best null. You will be left with the difference between compressed and uncompressed, i.e. what the compression has removed, which was described at the conference as sounding like 'space monkeys'.
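The Audacity recipe above (invert one pair, mix, keep the residue) can be sketched numerically. This is a minimal stand-in using plain Python lists and a coarse quantizer in place of a real codec; a real test additionally needs sample-accurate alignment and level matching, as noted:

```python
def quantize(samples, step):
    """Toy 'lossy compression': snap each sample to a multiple of step."""
    return [round(s / step) * step for s in samples]

def null_test(original, compressed):
    """Invert the compressed track and mix: residual = original - compressed."""
    return [o - c for o, c in zip(original, compressed)]

original = [120, -340, 560, -80, 230]      # stand-in sample values
compressed = quantize(original, 50)
residual = null_test(original, compressed)
print(residual)   # everything the lossy step threw away (the 'space monkeys')
```

The residual is exactly the material the lossy step removed; the heavier the compression (larger `step`), the louder and more structured the residual becomes.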

Summary: if using an iPhone, stick to AAC, as this is likely your source codec format. If using Android, aptX is likely best, as it is the least destructive when transcoding from other compressed formats.
When using Android, if possible use a bit-perfect player for serious listening (i.e. not speech); this is especially important if using an external DAC. I use USB Audio Player PRO but I'm sure there are others. Not sure whether this is necessary with iPhones.


----------



## bigshot

Once codecs reach audible transparency, the only consideration is convenience.


----------



## snakesAndfoxes (Aug 21, 2022)

I know this is an old thread, but I came across it while doing my own research on a very specific part of the AAC codec on iOS devices (that I still haven’t found an answer for yet, so my search goes on) but I felt compelled to create an account simply to respond to this, lol.

The interaction between bigshot and piskyhifi has been very interesting coming from both sides of the debate but I would argue they are both right:

In a high-level summary of his position, piskyhifi is looking strictly at the math: aptX HD has a higher bitrate than AAC, so it's obviously better. And he's right: aptX HD is technically superior to 256 kbps AAC.

bigshot is saying AAC is audibly transparent at 256kbps so anything beyond that is just extra file size with no audio fidelity that we can even hear, so what’s the point in listening to higher bit rates? Fair point, and also true.

I think the best analogy I can come up with to describe the debate they had years ago would be what the human eye is capable of seeing on the electromagnetic spectrum.

If I had a pair of imaginary glasses with the technical specs to view gamma rays and radio waves (aptx-HD) they would be technically superior to glasses that only allow me to see visible light (256AAC).

So yes, aptx-HD is superior to AAC256, but it’s like wearing glasses that are capable of seeing radio waves that human eyesight is unable to see on its own.

Kinda pointless…

FLAC, ALAC and WAV have a specific purpose that I don't see ever going away, but for daily use, even in-home critical listening, a lossy codec of your choice that achieves transparency is all you need; you're not missing out on anything your ear can't hear, just like using glasses that are technically capable of showing what you're incapable of seeing.


----------



## bigshot

Better than AAC256 will be necessary if you’re a dog or a bat.


----------



## snakesAndfoxes

Yup, and it’s funny that everything you said years ago is still accurate even though lossless music providers are mainstream now.

It reminds me of something an audiophile said on another forum I was on recently about when Spotify will finally release a lossless tier: “I can’t wait to not hear the difference!”


----------



## SVO

You guys are spot on:  "But why don't you just make the knob say 10 rather than 11?"  "Uh, no.  It goes up to ELEVEN!"  Men LOVE specs- they feel so dang competent when they know all the specs.  When I installed high-end home theaters I had SO many clients (essentially all cashed out tech stock options) who would want to put $100k worth of equipment in disastrous room acoustics.  Me, "You know some acoustic treatments on the primary reflection points would get you a lot better sound, even with average equipment."  Him, "Pffft.  This amp has 0.0001% distortion!"


----------



## NorthUtsire

SVO said:


> You guys are spot on:  "But why don't you just make the knob say 10 rather than 11?"  "Uh, no.  It goes up to ELEVEN!"  Men LOVE specs- they feel so dang competent when they know all the specs.  When I installed high-end home theaters I had SO many clients (essentially all cashed out tech stock options) who would want to put $100k worth of equipment in disastrous room acoustics.  Me, "You know some acoustic treatments on the primary reflection points would get you a lot better sound, even with average equipment."  Him, "Pffft.  This amp has 0.0001% distortion!"


----------

