Sounds like the 24-bit DACs just have more problems than 1-bit ones:
Quote:
The number of bits in a DAC is a poor measure of its performance and accuracy. A better measure is the accuracy of the actual bits themselves. Under ideal circumstances, a 16-bit converter would convert all 16 bits of the sample data word exactly and linearly. However, this is seldom possible; in practice, a 16-bit DAC is less than sufficient for accurate conversion.
The error in a 16-bit (or any multi-bit) converter depends on the accuracy of the most significant bit (MSB) of the data word. Inaccuracy in this bit alone can produce an error of half the signal's amplitude, a significant error by any measure. With this in mind, manufacturers reasoned that converters with greater word lengths could overcome this shortcoming, along with others, through sheer numbers. In addition to securing the accuracy of the MSB by using more than 16 bits, an x-bit converter also improves quantization performance by providing 2^(x-16) times as many quantization levels as a 16-bit converter. Any nonlinearity in the conversion process is then a far smaller fraction of the overall signal, and the additional quantization levels yield a greater signal-to-error (S/E) ratio by virtue of Eq. 1. The extra bits in these converters may be discarded, left unused, or put to other intelligent uses that will be discussed later. Unfortunately, it is a misconception that the use of an 18- or 20-bit DAC gives true 18- or 20-bit audio performance.
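To put rough numbers on that reasoning, here is a minimal Python sketch. The quote doesn't reproduce Eq. 1, so I'm assuming it is the standard textbook signal-to-error ratio for a full-scale sine, S/E = 6.02n + 1.76 dB for an n-bit converter; the bit depths compared are just illustrative.

```python
# Quantization levels and the textbook S/E ratio for an n-bit DAC.
# Assumes Eq. 1 is the standard S/E = 6.02n + 1.76 dB result for a
# full-scale sine (the linked paper's exact form isn't quoted above).

def levels(bits: int) -> int:
    return 2 ** bits

def se_ratio_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for n in (16, 18, 20):
    print(f"{n}-bit: {levels(n):>8} levels "
          f"({levels(n) // levels(16)}x those of 16-bit), "
          f"S/E = {se_ratio_db(n):.1f} dB")

# An error in the MSB alone shifts the output by 2**(n-1) of 2**n
# levels: half of full scale, regardless of word length.
print(f"MSB error as a fraction of full scale: {2**15 / 2**16:.2f}")
```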
Despite the fantastic performance benefits of these nth-generation multi-bit converters, they are still plagued by many errors. Linearity was already mentioned, but they also suffer from gain error, slew-rate distortion, and zero-crossing distortion. All of these error and distortion types introduce severe harmonic distortion and group delay, thereby degrading signal stability, imaging, and staging.
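Of the errors listed, slew-rate distortion is the easiest to demonstrate numerically. The sketch below (all parameters arbitrary, chosen only to make the effect visible) limits how fast a sampled sine may move per sample and measures the odd harmonics that appear; it illustrates the mechanism, not any particular DAC.

```python
import numpy as np

fs, n = 192_000, 4096
f0 = 12_000                        # bin-aligned test tone (bin 256)
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * f0 * t)

# Crude slew-rate limiter: the output may not move more than max_step
# per sample. 60% of the tone's own peak slope, so slopes get clipped.
max_step = 0.6 * 2 * np.pi * f0 / fs
out = np.empty(n)
out[0] = clean[0]
for i in range(1, n):
    out[i] = out[i - 1] + np.clip(clean[i] - out[i - 1],
                                  -max_step, max_step)

# Slew limiting pushes the sine toward a triangle: odd harmonics grow.
spec = np.abs(np.fft.rfft(out * np.hanning(n)))
fund, third = spec[256], spec[768]          # 12 kHz and 36 kHz bins
print(f"3rd harmonic relative to fundamental: "
      f"{20 * np.log10(third / fund):.1f} dB")
```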
Two methods of output reconstruction have been used with multi-bit DACs. The first employed a "brickwall" filter. These filters had a very sharp cutoff characteristic and held the signal gain close to unity almost to cutoff. This was necessary because the data rate placed aliasing and noise artifacts immediately above the audible band. The inherent problem with such a filter design was tremendous phase nonlinearity at high frequencies and high-frequency group delay (the change in phase shift with respect to frequency).

The second method places an oversampling digital filter before the DAC, followed by a gentle analog filter. By gentle, it is meant that a cutoff slope of 12 dB/octave and a -3 dB point of 30-40 kHz can be used. The analog filter's design is therefore noncritical and low-order, which guarantees excellent phase linearity. In fact, for most practical reconstruction filters, phase distortion can be held within ±0.5° over the entire audio band.
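As a sanity check on those "gentle filter" figures, here is a hedged sketch using scipy: a second-order (12 dB/octave) analog Butterworth low-pass with its -3 dB point at 35 kHz (one arbitrary point inside the quoted 30-40 kHz range), measuring in-band loss and the deviation of its phase from a pure delay. Whether a given design actually meets the quoted ±0.5° depends on the corner frequency and topology; this only shows how to check.

```python
import numpy as np
from scipy import signal

# 2nd-order (12 dB/octave) analog Butterworth low-pass, -3 dB at
# 35 kHz: roughly the "gentle" reconstruction filter described above.
b, a = signal.butter(2, 2 * np.pi * 35_000, btype='low', analog=True)

f = np.linspace(20, 20_000, 2000)           # audio band, Hz
w, h = signal.freqs(b, a, worN=2 * np.pi * f)

mag_db = 20 * np.log10(np.abs(h))
phase = np.unwrap(np.angle(h))

# Deviation from a pure delay = phase minus its best straight-line fit.
fit = np.polyfit(f, phase, 1)
dev_deg = np.degrees(phase - np.polyval(fit, f))

print(f"Loss at 20 kHz: {mag_db[-1]:.2f} dB")
print(f"Peak phase deviation from linear: "
      f"{np.max(np.abs(dev_deg)):.2f} deg")
```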
http://www.tc.umn.edu/~erick205/Papers/paper.html