Arda AT1201 ADC, impressive specs

GroupDIY Audio Forum


Kingston

I got mail from Profusion about these new ADC chips they're distributing: the AT1201 and AT12612.

http://www.profusionplc.com/pro/gex/pcatdtl0?ipartno=AT1201

The specs seem impressive and the price is decent. New ADC chips don't get released very often, and when they do, it's usually for the consumer and low-power markets.

Anybody have any experience with these, or with Arda in general? I'm not familiar with this manufacturer at all.

Anything else impressive happen on this front recently?
 
Hi,

I have a few AT1201's sitting here. The prices in the link you provided are great - directly from Arda they were $42 about half a year ago.

FYI, the 124dB DNR posted for the AT1201 applies when its internal decimator is BYPASSED and the multibit output is fed into your own filter. In reality, unweighted DNR is 121dB. The chip has a few cool features along with a few not-so-great ones, like being power-hungry in comparison to, for instance, the PCM4222. Or its QFN64 package - not too DIY-friendly. =)

But what concerns me the most is how it SOUNDS, and that I'm yet to find out and compare with the PCM4222 and the like. I have GOOD feelings about it.

What sucks though is that ESSTech's ES9102 is gone... That was the most interesting ADC to me with its 120dB THD, 32-bit sampling, and direct SPDIF output (with their jitter elimination tech found in Sabre DACs). Last time I heard from them, they had silicon wafer issues and pulled it from production. That was about a year ago...
http://www.esstech.com/PDF/SABRE32%20ADC%20PF%20081218.pdf
 
promixe said:
What sucks though is that ESSTech's ES9102 is gone... That was the most interesting ADC to me with its 120dB THD, 32-bit sampling, and direct SPDIF output (with their jitter elimination tech found in Sabre DACs). Last time I heard from them, they had silicon wafer issues and pulled it from production. That was about a year ago...
http://www.esstech.com/PDF/SABRE32%20ADC%20PF%20081218.pdf

I've been waiting for a 32-bit ADC!! Sad they pulled it!
 
Why do you want a 32bit ADC, if we can't even achieve 24 bits of accuracy with current technology?

The only reasons I've seen in the converter world are that it makes interfacing to 32bit DSPs a little easier, and that the marketing guys love it.

There are some ways in which you can see the 25th/26th bit with DACs, but that requires a massive FFT and 8x averaging. Not sure you can hear it ;)
 
With 32 bits you can represent 2^32 binary numbers; with 24 bits you can represent 2^24. More bits = more resolution. Why wouldn't you want more bits?

I would think that while 24 bits can represent an astounding amount of numbers, they do get rounded off. With 32 bits, numbers will still get rounded off, but we'll be able to represent an awful lot more of them - leading to, I would expect, higher precision...

Why not record everything in 16bit 44.1k? It can represent everything we can hear with "digital accuracy".
I think we all can agree, even if we don't know all the theory behind it, that higher bit depths seem to give a lot more than just a lower digital noise floor.

Thinking about it: having more bits lets us represent bigger numbers = louder signals, but more bits lets us represent smaller numbers too...

Please correct me if I'm wrong.
 
abechap024 said:
I think we all can agree, even if we don't know all the theory behind it, that higher bits gives a lot more than just a lower digital noise floor.

No.

Today's state-of-the-art converters have a ~120dB signal-to-noise ratio. This means that there's 120dB between the clipping point and the noise floor. 120dB fits in 20 bits. Broadly speaking, this means that a 24-bit converter has its lowest four bits being noise, whereas a 32-bit converter has its lowest twelve bits being noise. How does that buy you anything?

(As Rochey points out, the only - very minor - advantage is that most computers are more comfortable handling 32-bit data types than 24-bit ones, but with current processor speeds that speed impact is negligible).
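(To put numbers on the bit arithmetic above - a minimal Python check, using nothing but the standard ~6.02 dB-per-bit figure, which is just 20·log10(2):)

```python
import math

DB_PER_BIT = 20 * math.log10(2)  # ~6.02 dB of dynamic range per bit

def bits_for_range(db):
    """Smallest whole number of bits whose range covers `db` dB."""
    return math.ceil(db / DB_PER_BIT)

def noise_bits(word_bits, converter_db):
    """Bits at the bottom of the word that sit below the converter's noise floor."""
    return word_bits - converter_db / DB_PER_BIT

print(bits_for_range(120))            # -> 20: 120 dB fits in 20 bits
print(round(noise_bits(24, 120), 1))  # -> 4.1: ~4 noise bits in a 24-bit word
print(round(noise_bits(32, 120), 1))  # -> 12.1: ~12 noise bits in a 32-bit word
```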

abechap024 said:
Thinking about the fact that having more bits will let us represent bigger numbers = louder signals. but also more bits lets us represent smaller numbers also...

That would only help if the converter's dynamic range is larger than can be represented in its output data size. This is the case for 16-bit converters, but we're nowhere near that limit for 24-bit converters.

JD 'more bits != more bang' B.
[it's comparable to the megapixel wars we have/had with digital cameras -- beyond a certain limit more pixels buy you nothing for a given sensor size]
 
abechap024 said:
32bits you can represent 2^32 binary numbers with 24 bits you can represent 2^24 binary numbers. More bits = more resolution.
I don't think it would give more effective resolution. Remember that effective resolution is determined by a combination of D/A resolution and reconstruction filter performance.
What we perceive when listening to digital audio is not a succession of stairsteps but a continuous wave. The reconstruction filter is an important but often neglected or forgotten part of the whole process.
Since filters and D/A's are not perfect, increasing the bit depth may well produce a slightly measurable improvement; but considering the actual state of electronics and the audition process, I really doubt it is audible. Noise level is actually the great equalizer, amounting to about 3 LSBs of dither in 24-bit format, which is already more than plenty. In 32-bit format the same noise would span roughly the lowest 11 bits, which has no practical advantage.
I don't doubt a native 32-bit format may have some practical interest, but I suspect 32-bit DAC's will expect fixed-point input, when most DAW's work in 32-bit floating point.
32-bit fixed is not a convenient format for processing, so some kind of format conversion will still have to be done.
 
The only advantage 32bit AD-conversion has for itself is interfacing that directly to a 32bit audio file format, with no need for one extra digital dither step. And even then the advantage is dubious. Do you really want to waste all that space? You would be able to skip dithering steps when you don't have to do intermediate conversions while processing audio offline, but how many of your audio files have a noise floor even below -100dBFS? "None" is probably a good answer, and you can always skip dither and simply truncate anyway.

So yeah, the 32-bit "feature" is pointless. I was interested in the price/performance ratio, and in a new chip in the field in general.
 
Kingston said:
The only advantage 32bit AD-conversion has for itself is interfacing that directly to a 32bit audio file format, with no need for one extra digital dither step.

You don't need to dither when going up in resolution, only when you're truncating bits.
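That up-vs-down asymmetry is easy to sketch in code (illustrative Python, not anyone's actual converter pipeline): padding 24-bit samples into 32-bit words is a lossless shift, while going back down discards bits, which is where TPDF dither earns its keep.

```python
import random

def up_24_to_32(x24):
    """Left-shift a 24-bit sample into a 32-bit word: lossless, no dither needed."""
    return x24 << 8

def truncate_32_to_24(x32):
    """Drop the low 8 bits: information is lost, so dither before this step."""
    return x32 >> 8

def tpdf_dither_32_to_24(x32, rng=random.Random(0)):
    """Add ~1 LSB (at the 24-bit level) of triangular-PDF noise, then truncate."""
    d = rng.randint(0, 255) - rng.randint(0, 255)  # triangular PDF, +/-255 counts
    return (x32 + d) >> 8

x = 0x123456                                     # some 24-bit sample
assert truncate_32_to_24(up_24_to_32(x)) == x    # up-then-down round trip is exact
```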

JDB.
 
The effective resolution of a 32-bit (or 24-bit) ADC is below 24 bits, and it's hard to get even near 20 bits. BTW, these two (Arda and ESS) should be at about the same top level.
 
jdbakker said:
Kingston said:
The only advantage 32bit AD-conversion has for itself is interfacing that directly to a 32bit audio file format, with no need for one extra digital dither step.

You don't need to dither when going up in resolution, only when you're truncating bits.

Whoops - yes, of course. Going up in resolution, the extra bits are just added noise with no benefit.

Then I don't see a single reason to have a 32bit DAC.


Someone please report how that AT1201 sounds when possible.

Also, it's been like 6 years(!) since the PCM4204 and its friends were originally released. I wonder what TI has been up to lately, and whether they have significant upgrades coming.

Rochey?  ;D
 
One extra addition... This got spotted by the marcom guys at the company recently. They got it from someone's Facebook status.

mann this receiver I plan on designing these holidays is going to insanely bad ass! Finally found an ADC chipset I like, Texas Instruments PCM4222, holy fuck I'm planning on using a chip designed for the bees knees of professional audio recording equipment! Damn those are some fine specs
05 November at 20:33

hehehe.

 
Rochey said:
PCM4222

124dB of bliss.  :)
123dB PCM output.
AND It's actually achievable.

Oh! promixe actually mentioned it above already. I assumed the whole PCM4XXX family was from 2003.

What's the DAC equivalent to that, by the way - the PCM1794A and PCM1792A?
 
PCM4222 is a nice chip. =)

Rochey, if you don't see a reason for needing 32-bit depth in conversion, please elaborate on why TI has released the PCM1795 32-bit DAC chip (which is TI's latest DAC, answering the question above) when it has only 123dB DNR? Technically, only 21 bits are needed =) I don't mean to come across in a wrong way, but that's a valid question to ask the designers of the chip, if you feel like leaving the "marketing cloak" aside for a moment and sharing with us what their answer is. =)

I don't want to open a can of worms here, but according to some high-end designers (I don't want to misquote - it might have been someone like Dan Lavry or Bruno Putzeys, but I don't recall exactly at the moment) we DO need a few "trailing" bits beneath the actual DNR spec of a chip, due to LSB error accumulation in the decimation process. If I can find the original source of this information I will gladly post it here.

I'm a couple of months away from having a working AT1201 prototype; I will compare it against the PCM4222 and the AKM AK5394. It might be a fun idea to actually design a single board for all the ADCs in question, sharing the same buffers and clock, so that there are fewer variables in subjective listening trials... =)

FWIW, I just came across an article that says Focusrite has been evaluating the AT1201 and will use it in some of their designs - http://www.prosoundnews.com/blog/27542
 
Is there some kind of "standard clocking solution" range of chips that should be used with these modern all-in-one AD or DA chip packages?

Something easily controllable, but with very, very good specifications, much like the AD and DA chips themselves? Or must we enter the world of microcontrollers, FPGAs and discrete hybrids?

I'm quite new to this whole field, but it is a growing interest of mine.
 
I'm not entirely sure what you mean by a "clocking range of chips", but in general, to make an ADC work you need to:

- Supply clean power separately to the analog and digital sections of the ADC chip
- Provide a clean clock with minimal phase noise (Dan Lavry says ONLY the fundamental frequency of a crystal should be used, no oscillator harmonics, etc...)
- Provide analog buffers that have equal or better DNR/THD than the chip, convert impedance to what the chip needs for best performance, and pad the signal so that anywhere from +18dBu to +26dBu is 0dBFS (this is my working dBu range, a matter of taste I guess)
- Control the ADC chip by hardware or "software" means. I assume this is what you are asking about, talking about uCs and FPGAs, etc... The microcontroller way is called "software" control, whereas hardware control is when you basically hardwire the pins of the chip to switches/resistors to make its config more or less permanent, or configurable by switches without a uC.

So, if you don't want to deal with uC programming, you can do hardware control. Not all chips allow that, though...
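For the padding step in that list, the arithmetic is straightforward (a quick Python sketch; 0 dBu is 0.775 V RMS by definition, but the +10 dBu full-scale figure below is an invented example - check the actual chip's datasheet):

```python
def dbu_to_vrms(dbu):
    """0 dBu is defined as 0.775 V RMS."""
    return 0.775 * 10 ** (dbu / 20)

def pad_db(level_in_dbu, fullscale_dbu):
    """Attenuation (dB) needed so `level_in_dbu` lands at 0 dBFS."""
    return level_in_dbu - fullscale_dbu

# Example: reference level +24 dBu = 0 dBFS, hypothetical ADC clipping at +10 dBu
print(round(dbu_to_vrms(24), 2))   # -> 12.28 V RMS at +24 dBu
print(pad_db(24, 10))              # -> 14 dB of padding needed in the buffer
```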
 
promixe said:
Rochey, if you don't see a reason for needing 32-bit depth in conversion, please elaborate on why TI has released the PCM1795 32-bit DAC chip (which is TI's latest DAC, answering the question above) that has only 123dB DNR? Technically, only 21-bit is needed =) I don't mean to come across in a wrong way, but that's a valid question to ask the designers of the chip, if you feel like leaving the "marketing cloak" aside for a moment and sharing with us what their answer is. =)

I love it when people really challenge me ;)

The analog in the PCM1795 is somewhat based on the PCM1796. We found that by doing a 1024-point FFT and averaging 8 times, you could actually see the 25th bit above the noise floor. AKM's 32-bit device doesn't show the 25th bit. So we blew out the interface and the digital filtering to support a 32-bit PCM input (all for one bit, and some extra marketing).

Human hearing covers about 130dB on a good day - let's put that in perspective: it's the span from the loudest noise you can handle without pain (we're talking jet engine, etc.) to the quietest sound you can detect.
Now, the question is whether you can really hear it, and whether it really makes an audible difference. The 25th bit is at 150dB or so.

My personal opinion (and not that of my employer) is that extended dynamic range (let's say, beyond 118dB?) is for a few key things:
- Marketing
- Headroom (in ADC's)
- Volume Control (in DAC's)

/R

Source: http://www.gcaudio.com/resources/howtos/loudness.html

Interesting fact: The incidence of hearing loss in classical musicians has been estimated at 4-43%, in rock musicians 13-30%.
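The "big FFT plus averaging" trick mentioned above works because the FFT spreads broadband noise across its bins (about 10·log10(N/2) ≈ 27 dB of processing gain for N = 1024) while a tone stays concentrated in one bin; averaging several power spectra then steadies the per-bin noise floor. A toy NumPy simulation (all signal levels invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, fs = 1024, 48000
t = np.arange(N) / fs
f_tone = fs * 100 / N            # tone placed exactly on FFT bin 100

# Tone RMS sits ~9 dB below the broadband noise RMS: invisible in the waveform.
tone_amp, noise_rms = 1e-3, 2e-3

def power_spectrum():
    """One Hann-windowed power spectrum of the tone plus fresh noise."""
    x = tone_amp * np.sin(2 * np.pi * f_tone * t) + rng.normal(0, noise_rms, N)
    return np.abs(np.fft.rfft(x * np.hanning(N))) ** 2

# 8x power-spectrum averaging, as in the measurement described above.
avg = sum(power_spectrum() for _ in range(8)) / 8
print(np.argmax(avg[1:]) + 1)    # -> 100: the buried tone pops out of the floor
```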
 
promixe said:
- Provide clean clock with minimal phase noise (Dan Lavry says ONLY fundamental frequency of a crystal should be used, no oscillating harmonics, etc...)

This is what I was really asking about. The rest I'm quite familiar with already. I've been digesting AD/DA chip docs - most of the PCM4XXX range, and some AKM just out of curiosity.

But I've never had to implement clocking in any form. Since most AD and DA chips are quite straightforward to use, I was wondering if there's something packaged as "simply" for clocking. I mean, how would one go about implementing a basic master clock for several PCM4222 chips at the usual sample rates? And how would one interface that to the outside world, so that I can tell my AES/EBU soundcard where my sync signal is coming from?

Not trivial, I presume. It's an area I will dive into in the near future. I'm inspired by the rkn80 development and wish I could help more in that area, but I do not yet possess the knowledge for it.
 
You could use a DIR9001 as a clock generator, up to 96kHz.
It'll decode the AES/EBU into data (all 0's) and clocks.

You'll then need to run a bank of ADCs/DACs in slave mode.

/R
 
