Not happy even with a Lavry A/D.

I also think those older RADAR converters perform admirably well considering their age (but they do not outperform the new RADARs).

There are some other older converters that hold up quite well even today... the Pacific Microsonics HDCD being one of them.
 
Not sure how scientific it is since I don't really know much about this stuff, but maybe it could be of some use!
Not very. The effects of frequency and phase response dominate over any other significant metric of non-linear behaviour.
A 1 mdB (0.001 dB) difference in frequency response results in a residual of about -80 dB when subtracting. The analog circuitry in both the input and the output stages introduces much larger frequency response shifts than that.
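For anyone who wants to sanity-check that figure, here is a minimal sketch (plain Python, nothing converter-specific) of the residual left by a pure gain mismatch alone; the 0.001 dB value is the one quoted above, the 0.1 dB case is just an extra illustration.

```python
import math

# Residual level left when two otherwise identical signals are subtracted
# and the only difference is a small gain mismatch: residual = |g - 1|
# relative to the signal, where g is the linear gain ratio.
def null_residual_db(gain_error_db):
    g = 10 ** (gain_error_db / 20)          # linear gain ratio
    return 20 * math.log10(abs(g - 1))      # residual in dB below the signal

print(null_residual_db(0.001))   # ~ -78.8 dB for a 1 mdB mismatch
print(null_residual_db(0.1))     # ~ -38.8 dB for a 0.1 dB mismatch
```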
 
I think it's the case that the quality of the average converter has improved greatly over the years. It's also the case that the best converters from over 20 years ago are still among the best: Prism/Lavry/Weiss/Pacific Microsonics. You can get pretty close for a tenth of the price, but those designs still hold their own.
 
Perhaps it is about time that somebody took all these converters and performed careful null tests between them to identify the differences
Not a useful test methodology in this case. Differing filter slopes and DSP latency render the residuals effectively meaningless for telling us anything about the sound of the unit.

Check out www.audiosciencereview.com for some excellent objective data on many converters.
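As a rough illustration of the latency point above: even a residual misalignment of a tenth of a sample between two otherwise identical paths leaves a large, frequency-dependent null residual. A minimal numpy sketch, where the sample rate, test frequencies and the 0.1-sample offset are arbitrary assumptions:

```python
import numpy as np

fs = 48000                      # assumed sample rate
n = fs                          # one second of signal
t = np.arange(n) / fs

def residual_db(freq, delay_samples):
    """Null residual between a sine and the same sine delayed slightly."""
    x = np.sin(2 * np.pi * freq * t)
    tau = delay_samples / fs
    # apply the fractional delay in the frequency domain
    f = np.fft.rfftfreq(n, 1 / fs)
    y = np.fft.irfft(np.fft.rfft(x) * np.exp(-2j * np.pi * f * tau), n)
    diff_rms = np.sqrt(np.mean((x - y) ** 2))
    sig_rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(diff_rms / sig_rms)

# 0.1 sample of uncorrected latency mismatch at 48 kHz (about 2 us)
print(residual_db(1000, 0.1))    # roughly -38 dB at 1 kHz
print(residual_db(10000, 0.1))   # roughly -18 dB at 10 kHz
```

So before the null says anything about the converter itself, the two paths have to be aligned far more precisely than the typical "line them up by eye in the DAW" approach allows.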
 
I also think those older RADAR converters perform admirably well considering their age (but they do not outperform the new RADARs).

There are some other older converters that hold up quite well even today... the Pacific Microsonics HDCD being one of them.
Agreed on both counts. I am also partial to dCS. Even their oldest ADC from 1989 (the first 24-bit AD), the dCS 900, still sounds very, very good to my ears today.
 
Not a useful test methodology in this case. Differing filter slopes and DSP latency render the residuals effectively meaningless for telling us anything about the sound of the unit.

Check out www.audiosciencereview.com for some excellent objective data on many converters.
Indeed, their methodology is much more meaningful than the over-simplistic method used on GS.
As always, it shows that it takes learning and experience to assess measurements.
 
Confusing about the AD8000 love....?

Hi Scott, I don't really know what you meant.

The AD8000 is not a 16-bit converter from 1994.
I also never said that the AD8000 "sounded better than most converters nowadays", which is what member living sounds said about a 16-bit converter from 1994.

I just said I still like the sound of the AD8000, even today.
Some people say they don't like the "Apogee" sound, but I always liked the "Apogee" sound for recording. I'm probably not into the "transparent", "clean" side of things; I like gear to have a sound so I can choose it to taste. If something is transparent, it doesn't affect the sound, so I have no reason to use it. If I use a piece of gear, I want its impact.
I'm talking about recording, not mastering.

Regards
 
Perhaps it is about time that somebody took all these converters and performed careful null tests between them to identify the differences
Null tests only indicate the existence of differences, which are bound to happen because two converters, even two units of the same model from the same brand, cannot have perfectly identical responses. How can you tell that converter #xxxx is better than converter #yyyy of the same model and brand? In addition, errors in the subtraction process are inevitable.
A very thorough spectrum analysis of the difference signal may give more useful results, but actually no more than a spectrum analysis of the output signal. The former method describes the spectrum of [(signal + artefacts A) - (1 ± epsilon)(signal + artefacts B)], epsilon being the relative error in subtracting the signals. At the end of such a test you get the sum or difference of the artefacts, which does not help identify which unit is better, plus a measurement error that is often larger than the artefacts and masks them. The latter method identifies the character of the artefacts without ambiguity.
Null tests are very good at debunking myths, though, like cable or connector "sound".
I believe the attraction of null tests comes from their apparent simplicity, until reality sets in.
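To put a rough number on that masking problem, here is a small sketch with purely illustrative assumed levels: artefacts at -110 dBFS for each converter, and a 0.001 dB level-matching error standing in for epsilon.

```python
import math

signal_dbfs = 0.0          # programme level
artefact_dbfs = -110.0     # assumed true artefact level of each converter
match_error_db = 0.001     # assumed residual level-matching error (epsilon)

# programme leakage caused by the imperfect subtraction
leak = 10 ** (signal_dbfs / 20) * abs(10 ** (match_error_db / 20) - 1)
artefact = 10 ** (artefact_dbfs / 20)

print("leaked signal: %.1f dBFS" % (20 * math.log10(leak)))      # ~ -78.8 dBFS
print("artefacts:     %.1f dBFS" % (20 * math.log10(artefact)))  # -110.0 dBFS
# The residual is dominated by the measurement error, not by the
# artefacts one is actually trying to compare.
```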
 
Perhaps I'm mistaken, but wouldn't null tests in the digital domain on ADs at least be easier to carry out than ones in the analogue domain with DAs?

How about this? The Alvin Lucier test! Take a pair of complementary converters, record a complex acoustic source into digits, then play back and re-record, DA-AD, over and over, 10 times, 100 times, then hear how it sounds. That might at least give you a flavour of what that particular design's foibles are. Then do the same thing with different DA-AD pairs from another brand (ideally you'd have recorded the same original performance in parallel, or maybe just start with a classic recording; the imprint of the original shouldn't be emphasised over the iterations).
 
I'll participate if we set this up.
I think many people here have a complementary set of DA and AD; we just need to pick a source file and share it.

A hundred passes would take a serious amount of time, though... but a 10-second clip would do.
 
Perhaps I'm mistaken, but wouldn't null tests in the digital domain on ADs at least be easier to carry out than ones in the analogue domain with DAs?

How about this? The Alvin Lucier test! Take a pair of complementary converters, record a complex acoustic source into digits, then play back and re-record, DA-AD, over and over, 10 times, 100 times, then hear how it sounds. That might at least give you a flavour of what that particular design's foibles are.
That would definitely emphasize the frequency response errors and the build-up of distortion. Is it such a useful test? Introducing 100 x 0.1 dB of amplitude deviation results in a quite audible 10 dB. Is that enough to disqualify a product? I doubt any converter, even of the utmost precision, would pass this test.
 
A hundred passes would take a serious amount of time, though... but a 10-second clip would do.

Can anything be done with automation / scripting / batch files? It's actually been a long time since I've done any proper recording stuff. I mostly tinker with building mics, so I am out of date with that kind of knowledge. I know you can do similar things in photo editing software, for example, so maybe with some DAWs too?
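If the converters hang off a computer interface, the passes can indeed be scripted. Here is a hypothetical sketch using the python-sounddevice and soundfile libraries; the file name, pass count and device setup are assumptions, and each pass will also pick up the interface's round-trip latency, which would need trimming before listening.

```python
import sounddevice as sd
import soundfile as sf

INFILE = "source.wav"   # hypothetical 10-second test clip
PASSES = 20             # one batch of passes, as suggested above

data, fs = sf.read(INFILE, dtype="float32", always_2d=True)

for i in range(PASSES):
    # play the current file out of the DA and record the AD return simultaneously
    rec = sd.playrec(data, samplerate=fs, channels=data.shape[1])
    sd.wait()           # block until this pass has finished
    data = rec          # the recording becomes the source for the next pass

sf.write(f"after_{PASSES}_passes.wav", data, fs)
```

Running batches of 20 and saving only the last file, as suggested below, keeps disk use and attended time manageable.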
 
That would definitely emphasize the frequency response errors and the build-up of distortion. Is it such a useful test? Introducing 100 x 0.1 dB of amplitude deviation results in a quite audible 10 dB. Is that enough to disqualify a product? I doubt any converter, even of the utmost precision, would pass this test.

Well, I don't know if we're talking about disqualifying a product, more trying to nail down its "flavour" by exaggeration. Something concrete rather than the nebulous claims of the golden ears. Is it aliasing, is it a frequency response subtlety, is it jitter, etc. You might also see interesting things in how well the stereo image is preserved, even though both sides have degraded. That would tell you about the overall quality of the product - how well are the channels matched? It could also be interesting to see how different instances of the same complementary pair of models behave. Do they all degrade in the same way, or is it more random?

[aside, I'm not sure the multi-quote thing works with this new forum software. Try as I might, I could not reply to Perm and Abbey in the same response]
 
I'm not so much into the automation thing; I'm running a vintage DAW.
But you wouldn't need to do it all in one go: you could run 20 passes, save the last file, drop the other 19, and a couple of days later do the same thing.
If there's a distinct signature, it will start to show after 20 passes, I guess.

I think the test makes a lot of sense; indeed, it would take a hell of a design to do 100 passes without any noticeable artifacts.
It would be interesting to see "where they go", and the worst may not even be able to do 10 passes without showing their colours.

I'm up for this.
 
Well, I don't know if we're talking about disqualifying a product, more trying to nail down its "flavour" by exaggeration. Something concrete rather than the nebulous claims of the golden ears.
Probably. I was just questioning the validity of null-tests.
Is it aliasing, is it a frequency response subtlety, is it jitter, etc.
Frequency response errors are linearly cumulative; after 10 passes they are 10 times worse. Jitter errors are uncorrelated, so they add in quadrature (as the square root of the number of passes); after 10 passes they are only about 3 times worse. Same for noise.
You might also see interesting things in how well the stereo image is preserved, even though both sides have degraded.
Stereo image is related to the phase precision of the audio signal. The smallest timing deviation human hearing can detect is about 10 us. Since typical jitter is a few dozen picoseconds, it would take on the order of 10^11 to 10^12 passes to produce anything significant. I am sure other effects would be dominant long before that.
That would tell you about the overall quality of the product - how well are the channels matched? It could also be interesting to see how different instances of the same complementary pair of models behave. Do they all degrade in the same way, or is it more random?
The main issue is that frequency response effects build up quickly and soon become dominant. Distortion is more insidious; it builds up more slowly than frequency response errors. In order to avoid masking, the cumulated frequency response errors should be EQ'd out. This is more or less what the GS method does; however, its accuracy and efficiency are questionable.
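A quick back-of-the-envelope sketch of those two accumulation laws, with purely assumed per-pass figures: frequency response error in dB adds linearly with the number of passes, while uncorrelated contributions such as noise and jitter add in quadrature.

```python
import math

passes = [1, 10, 100]
fr_error_db_per_pass = 0.1      # assumed frequency response ripple per pass
noise_dbfs_per_pass = -110.0    # assumed noise floor added by each pass
jitter_ps_per_pass = 30.0       # assumed uncorrelated jitter per pass

for n in passes:
    fr = n * fr_error_db_per_pass                      # linear build-up
    noise = noise_dbfs_per_pass + 10 * math.log10(n)   # powers add: +10 dB per decade
    jitter = jitter_ps_per_pass * math.sqrt(n)         # quadrature build-up
    print(f"{n:4d} passes: FR error {fr:5.1f} dB, "
          f"noise {noise:6.1f} dBFS, jitter {jitter:6.1f} ps")

# Passes needed for accumulated jitter to reach the ~10 us audibility
# threshold mentioned above, assuming quadrature accumulation:
print((10e-6 / (jitter_ps_per_pass * 1e-12)) ** 2)     # ~ 1.1e11 with these figures
```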
 