QuantAsylum QA400 Audio analyser

Rochey said:
Is it also a real time streamer? I was under the impression that it buffered most of the samples, then burst it across the USB in data, not audio, format.

A dump of the descriptors will reveal a lot of useful detail ...
 
ruffrecords said:
Here is the result of lsusb -v

What's not surprising at all is the use of the Vendor-Defined class. Obviously it pushes the hard work of writing a driver onto the QA people.

But it's weird. There are four "alternate interfaces," each of which is vendor defined and each has a different collection of endpoint types. Only alternate interface 3 has the isochronous endpoints normally associated with streaming audio. So their software really does talk to the device and changes its operational mode depending on whatever it's doing.
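If anyone wants to poke at this themselves without wading through raw lsusb output, here's a rough pyusb sketch that walks the alternate settings and prints each endpoint's transfer type. (The VID/PID below are placeholders, not the QA400's actual IDs; substitute whatever lsusb reports.)

```python
# Rough pyusb sketch: list each interface's alternate settings and the
# transfer type of every endpoint, to see which alt setting carries the
# isochronous (streaming) endpoints.
# NOTE: idVendor/idProduct are placeholders, not the QA400's real IDs.
import usb.core
import usb.util

dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)  # hypothetical IDs
if dev is None:
    raise SystemExit("device not found")

EP_TYPES = {
    usb.util.ENDPOINT_TYPE_CTRL: "control",
    usb.util.ENDPOINT_TYPE_ISO:  "isochronous",
    usb.util.ENDPOINT_TYPE_BULK: "bulk",
    usb.util.ENDPOINT_TYPE_INTR: "interrupt",
}

for cfg in dev:                  # configuration descriptors
    for intf in cfg:             # one entry per interface/alternate setting
        print(f"interface {intf.bInterfaceNumber} alt {intf.bAlternateSetting} "
              f"class 0x{intf.bInterfaceClass:02x}")
        for ep in intf:          # endpoints in this alt setting
            kind = EP_TYPES[usb.util.endpoint_type(ep.bmAttributes)]
            direction = ("IN" if usb.util.endpoint_direction(ep.bEndpointAddress)
                         == usb.util.ENDPOINT_IN else "OUT")
            print(f"  ep 0x{ep.bEndpointAddress:02x} {direction:3s} {kind:11s} "
                  f"maxpacket {ep.wMaxPacketSize}")
```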

It identifies as conforming to the USB 2.0 spec (that's the bcdUSB entry in the Device Descriptor). This means that it could support High Speed. Since there is no "OTHER_SPEED_CONFIGURATION" descriptor, we can assume that it's meant to run at High Speed. At bus reset time, the device comes up as a Full Speed device; the device and the upstream hub then do a dance to determine the bus rate. They assume that they'll always talk to a High Speed port, which is why they didn't bother with the OTHER_SPEED_CONFIGURATION. That descriptor would allow the device to still work, with limited capability, if plugged into a Full Speed hub or computer port. Since pretty much every computer made in the 13 years since USB 2.0 was ratified supports High Speed, that's a reasonable decision.

So the device should support the advertised 192 kHz/24-bit data streams with two channels in each direction simultaneously. This all depends though on their drivers and what they intend to do with the product. It's somewhat surprising that they don't support the standard Audio Class 2.0, but then again, Windows drivers don't exist and there's a lot of required stuff in that class that might just get in their way.
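Just as a back-of-the-envelope sanity check (my numbers, nothing from QA), the raw payload for those streams already rules out Full Speed:

```python
# Raw payload for 192 kHz / 24-bit / 2-channel streams, both directions.
# Ignores isochronous packet framing and protocol overhead, so the real
# bus requirement is somewhat higher.
fs = 192_000            # samples per second
bits = 24               # bits per sample
channels = 2

one_way = fs * bits * channels       # bits per second, one direction
both_ways = 2 * one_way              # acquire and generate simultaneously

print(f"one direction:   {one_way / 1e6:.2f} Mbit/s")    # ~9.2 Mbit/s
print(f"both directions: {both_ways / 1e6:.2f} Mbit/s")  # ~18.4 Mbit/s
print("Full Speed bus total is 12 Mbit/s; High Speed is 480 Mbit/s")
```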

Still, if someone could take a screwdriver to the case and crack it open ...

-a
 
Andy,

Just for clarification, I believe the USB 1.x spec no longer exists. The USB 2.0 spec covers Low, Full and High Speed.

From a simple silicon perspective, most microcontroller vendors are doing USB Full Speed on-chip, with High Speed requiring a separate PHY IC.

This sounds like what the Audio Precision does - tests are all done locally, then results are burst back to the PC.

/R


 
Rochey said:
Just for clarification, I believe the USB 1.x spec no longer exists. The USB 2.0 spec covers Low, Full and High Speed.

True, but you'll still see the incorrect distinction made that says USB 2.0 is only High Speed, and that if a device is Full Speed then it's "USB 1.1."

From a simple silicon perspective, most microcontroller vendors are doing USB Full Speed on-chip, with High Speed requiring a separate PHY IC.

Not necessarily, although I suppose it depends on where you draw the line between "microprocessor" and "microcontroller." Is an ARM Cortex-M3 a microcontroller? I'm looking at using one of the Atmel M3s for a High Speed USB design: High Speed support is built in, no external PHY required.

(Yes, I looked at TI's ARM offerings, and anything with High Speed USB support was way too much processor for what I want to do. I don't need Ethernet etc etc.)

This sounds like what the Audio Precision does - tests are all done locally, then results are burst back to the PC.

That's how you'd do a logic analyzer or an oscilloscope, but it seems like if real-time support for streaming data is available, why have gobs of local (on-device) storage and burst it to the computer for analysis?

I suppose there are many ways to do this.

-a
 
Andy Peters said:
This sounds like what the Audio Precision does - tests are all done locally, then results are burst back to the PC.
That's how you'd do a logic analyzer or an oscilloscope, but it seems like if real-time support for streaming data is available, why have gobs of local (on-device) storage and burst it to the computer for analysis?
There are advantages in doing this, namely making it compatible with the most pedestrian USB connection, and also providing some unique features.
In particular, one thing that frustrates me with the soundcard approach is the lack of a "processed" output. Ever since I started using an audio analyser (goes back to early 70's) I always had channel 2 of the 'scope connected to the post-processing output; that's how you see what type of noise or distortion you're facing. I agree that having an FFT spectrum is the next best thing, but not as instantly significant (FFT doesn't show phase).
I guess it could be possible with the soundcard approach to loop the processed output back to ch.2 D/A...
 
Andy Peters said:
So the device should support the advertised 192 kHz/24-bit data streams with two channels in each direction simultaneously. This all depends though on their drivers and what they intend to do with the product. It's somewhat surprising that they don't support the standard Audio Class 2.0, but then again, Windows drivers don't exist and there's a lot of required stuff in that class that might just get in their way.

I think this makes real sense from a developer perspective: to ensure proper operation on a Windows box you must bypass the internal audio engine! Resampling, volume control, mixing and whatever else are a nightmare to handle. So one solution is to provide a dedicated 'tube' to the test interface and allow an ASIO connection to the application through a dedicated driver. There is no need to buffer audio, just send it through another path directly to the application.
I use a Duran Audio Interface that does exactly that, and it is the most stable interface you can get on a Windows box.

The issues might come from the application itself; it seems others are having, or at least had, some trouble with it.

- michael
 
I've been in touch with QA's support for a few days now; I'm pleased to announce that I got the thing working at 192k on the notebook. The only restriction is that the max number of FFT points is 32k, not 64k.
There is a patch coming out next week with a different frequency response test, one that will not offset the baseline as a function of the number of bins.
Subject to Ethan's approval, QA's tech support may post some answers directly in this thread.
 
Andy Peters said:
Still, if someone could take a screwdriver to the case and crack it open ...

Here you go, but it's not very helpful.

 

Attachment: DSCN1179.JPG
abbey road d enfer said:
Andy Peters said:
This sounds like what the Audio Precision does - tests are all done locally, then results are burst back to the PC.
That's how you'd do a logic analyzer or an oscilloscope, but it seems like if real-time support for streaming data is available, why have gobs of local (on-device) storage and burst it to the computer for analysis?
There are advantages in doing this, namely making it compatible with the most pedestrian USB connection, and also providing some unique features.
In particular, one thing that frustrates me with the soundcard approach is the lack of a "processed" output. Ever since I started using an audio analyser (goes back to early 70's) I always had channel 2 of the 'scope connected to the post-processing output; that's how you see what type of noise or distortion you're facing. I agree that having an FFT spectrum is the next best thing, but not as instantly significant (FFT doesn't show phase).
I guess it could be possible with the soundcard approach to loop the processed output back to ch.2 D/A...

Smaart does its transfer-function measurements using two inputs on the audio device. Your stimulus output (which can be the soundcard, or something else, like a function/noise generator) is split and feeds both the DUT and the second soundcard input. The first soundcard input comes from the DUT output.  Of course transfer function gives both phase and amplitude.

(In some cases, like doing acoustics measurements or when your DUT has digital latency, you have to determine the delay through the DUT and add a complementary delay on the direct/reference input.)

Finally, Smaart has a trick setting which enables the use of the program's internal signal generator as the reference channel, so there's no need for a splitter and loopback.
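Not Smaart's actual code, of course, but the dual-FFT idea sketches out in a few lines of numpy: estimate the DUT delay against the reference channel, line the two captures up, and take the ratio of their spectra so you get magnitude and phase together. No averaging, windowing, or coherence weighting here, which a real analyzer would add.

```python
import numpy as np

def transfer_function(reference, measured, fs):
    """Two-channel (dual-FFT) transfer-function estimate, sketch only.

    reference: stimulus as captured on the loopback/reference input
    measured:  DUT output as captured on the other input
    Returns (freqs, H) with H complex, so both magnitude and phase
    are available.
    """
    # Estimate the DUT latency from the peak of the cross-correlation,
    # then advance the measured capture so the two line up.
    xcorr = np.correlate(measured, reference, mode="full")
    delay = int(np.argmax(np.abs(xcorr))) - (len(reference) - 1)
    if delay > 0:
        measured = measured[delay:]
    n = min(len(reference), len(measured))
    R = np.fft.rfft(reference[:n])
    M = np.fft.rfft(measured[:n])
    H = M / (R + 1e-20)              # avoid divide-by-zero in empty bins
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, H

# 20*np.log10(np.abs(H)) is then the magnitude response, np.angle(H) the phase.
```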

-a
 
audiomixer said:
I think this makes real sense from a developer perspective: to ensure proper operation on a Windows box you must bypass the internal audio engine! Resampling, volume control, mixing and whatever else are a nightmare to handle. So one solution is to provide a dedicated 'tube' to the test interface and allow an ASIO connection to the application through a dedicated driver. There is no need to buffer audio, just send it through another path directly to the application.
I use a Duran Audio Interface that does exactly that, and it is the most stable interface you can get on a Windows box.

Windows is epic fail for audio. I mean, the USB Audio Class spec includes "Features" which let the device tell the operating system things like min and max gain and gain steps. OS X lets you set the gain (in Audio MIDI Setup, or from within an application) in dB conforming to the range. Windows just gives you a stupid slider with no way to set an exact gain value. And my programmer friends tell me that you can't do that kind of gain control programmatically, either. It's beyond stupid.
 
Andy Peters said:
Smaart does its transfer-function measurements using two inputs on the audio device. Your stimulus output (which can be the soundcard, or something else, like a function/noise generator) is split and feeds both the DUT and the second soundcard input. The first soundcard input comes from the DUT output.  Of course transfer function gives both phase and amplitude.

(In some cases, like doing acoustics measurements or when your DUT has digital latency, you have to determine the delay through the DUT and add a complementary delay on the direct/reference input.)

Finally, Smaart has a trick setting which enables the use of the program's internal signal generator as the reference channel, so there's no need for a splitter and loopback.

-a
I'm very familiar with the notion of assessing transfer function by comparing the DUT output to its input, using a random signal as stimulus, including its advantages and drawbacks, ever since the days when dbx announced that "your audience will love your test signal" (the dbx RTA-1).
That is not what I was talking about.
When you use an AP (and just about any audio analyser), there is a "processed" output, which is basically the signal hitting the "meter". In particular in THD mode, this output, taken at the output of the notch filter, contains all the residuals - harmonics and noise.
Observing this signal on a 'scope is of enormous interest for anyone who knows their Lissajous.
In order to do that with a soundcard, the software should route the output of the virtual notch filter to the ch.2 DAC input.
That may be doable if the THD processing was done by emulating the analog THD measurement, but in most cases, soundcard THD measurement is done by FFT. Reconstructing the residual signal by recombining the individual harmonics is feasible, but phase information is essential for making it useful.
In the particular case of QA, it seems the phase information is lost.
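For what it's worth, if the software kept the complex spectrum rather than just the magnitudes, reconstructing that residual digitally is not hard. A rough numpy sketch (not what QA does, just the principle): notch the fundamental out of the complex FFT and inverse-transform, so the harmonics keep their phase relative to the stimulus.

```python
import numpy as np

def residual(signal, fs, fundamental, width_bins=3):
    """Distortion + noise residual of a captured test tone, phase intact.

    A frequency-domain stand-in for the analog notch filter: zero the
    bins around the fundamental in the complex spectrum, then inverse
    transform.  Sketch only: a real tool would window the capture,
    track the exact tone frequency and compensate for the window.
    """
    n = len(signal)
    spec = np.fft.rfft(signal)
    k0 = int(round(fundamental * n / fs))       # bin of the test tone
    lo = max(k0 - width_bins, 0)
    hi = min(k0 + width_bins + 1, len(spec))
    spec[lo:hi] = 0                             # the "notch"
    return np.fft.irfft(spec, n)
```

That residual could then be routed to the ch.2 DAC, or plotted against the stimulus for the Lissajous-style view.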
 
abbey road d enfer said:
When you use an AP (and just about any audio analyser), there is a "processed" output, which is basically the signal hitting the "meter". In particular in THD mode, this output, taken at the output of the notch filter, contains all the residuals - harmonics and noise.
Observing this signal on a 'scope is of enormous interest for anyone who knows their Lissajous.
In order to do that with a soundcard, the software should route the output of the virtual notch filter to the ch.2 DAC input.
That may be doable if the THD processing was done by emulating the analog THD measurement, but in most cases, soundcard THD measurement is done by FFT. Reconstructing the residual signal by recombining the individual harmonics is feasible, but phase information is essential for making it useful.
In the particular case of QA, it seems the phase information is lost.

I am familiar with looking at, and even listening to, the product output from distortion analyzers. Back in the day I used to listen to the distortion residual to help evaluate audio path quality. While eyes and meters are good for quantitative judgements, ears are better for qualitative assessment in the margin.

I am not very confident in the phase information in that distortion analyzer product output since most distortion analyzers tweak the phase to realize a deeper notch filter. I guess you are talking about the phase relationship between the different higher harmonic distortion components. This phase relationship should affect the look of that distortion more than the sound of it.

JR
 
JohnRoberts said:
abbey road d enfer said:
When you use an AP (and just about any audio analyser), there is a "processed" output, which is basically the signal hitting the "meter". In particular in THD mode, this output, taken at the output of the notch filter, contains all the residuals - harmonics and noise.
Observing this signal on a 'scope is of enormous interest for anyone who knows their Lissajous.
In order to do that with a soundcard, the software should route the output of the virtual notch filter to the ch.2 DAC input.
That may be doable if the THD processing was done by emulating the analog THD measurement, but in most cases, soundcard THD measurement is done by FFT. Reconstructing the residual signal by recombining the individual harmonics is feasible, but phase information is essential for making it useful.
In the particular case of QA, it seems the phase information is lost.

I am familiar with looking at, and even listening to, the product output from distortion analyzers. Back in the day I used to listen to the distortion residual to help evaluate audio path quality. While eyes and meters are good for quantitative judgements, ears are better for qualitative assessment in the margin.
Of course, audio equipment is made to pass audio, so audio monitoring is de rigueur.
I am not very confident in the phase information in that distortion analyzer product output since most distortion analyzers tweak the phase to realize a deeper notch filter.
That is not so in my experience. When the notch filter is properly tuned, the harmonics are not shifted. That would be the case if the notch filter was too wide. Since the notch filter is a single biquad, phase shift cannot exceed 90°. When the filter is narrow enough so that the response is close to flat at the second harmonic, the phase response is close to 0.
I guess you are talking about the phase relationship between the different higher harmonic distortion components. This phase relationship should affect the look of that distortion more than the sound of it.
The look of distortion is extremely useful. Crossover distortion appears as a discontinuity right in the middle of the waveform, third appears as a loop at the beginning and end, second as a dissymmetry around the vertical axis...
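The claim about the biquad is easy to check numerically; a quick scipy example with a generic notch (not any particular analyzer's filter), looking at the phase left at the second harmonic as the notch gets narrower:

```python
import numpy as np
from scipy.signal import iirnotch, freqz

fs = 48_000
f0 = 1_000                      # notch tuned to the test-tone fundamental

for q in (5, 30, 100):          # wider to narrower notch
    b, a = iirnotch(f0, q, fs=fs)
    # Evaluate the response at the second harmonic.
    w, h = freqz(b, a, worN=[2 * f0], fs=fs)
    print(f"Q={q:3d}: |H(2f0)| = {abs(h[0]):.4f}, "
          f"phase = {np.degrees(np.angle(h[0])):+.2f} deg")
```

With a narrow notch the magnitude at 2f0 is essentially 1 and the phase is within a degree or two of zero, so the harmonics of the residual are not shifted appreciably.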
 
abbey road d enfer said:
Of course, audio equipment is made to pass audio, so audio monitoring is de rigueur.
Obviously, although listening to the product output alone can be more revealing, without the masking from the fundamental present. One can fall down the rabbit hole doing this, though, since masking is the real world.
The look of distortion is extremely useful. Crossover distortion appears as a discontinuity right in the middle of the waveform, third appears as a loop at the beginning and end, second as a dissymmetry around the vertical axis...

That is your experience; I have seen different-looking crossover distortion from different amp (output stage) topologies. I did some work with an automatic class A bias circuit, so I spent way too much time looking at this, and I bet mine looked different than your crossover distortion.  8)

I suspect that a given harmonic distortion component may "look" different depending on how much of the other harmonics is happening along with it, and which part of the circuit is not linear, though I have never paid very close attention to this phenomenon in the context of distortion. In my drum tuner I make a test signal that contains multiple octave-spaced notes (just like harmonics), and I experimented with the phase relationship between the sines to maximize peak signal headroom (how it looks makes a practical difference in this context).
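The headroom point is easy to demonstrate; a toy numpy example (not the drum tuner's actual signal) sums four octave-spaced sines and compares the crest factor for a few phase choices. The RMS is the same in every case, so any difference in peak level comes purely from the phase relationship:

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs                    # one second of samples
freqs = [110, 220, 440, 880]              # octave-spaced partials

def crest_db(phases):
    """Crest factor (peak/RMS) of the summed sines, in dB."""
    x = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

print(f"all starting in phase: {crest_db([0, 0, 0, 0]):.2f} dB")
print(f"alternating 90 deg:    {crest_db([0, np.pi / 2, 0, np.pi / 2]):.2f} dB")
print(f"random phases:         {crest_db(np.random.uniform(0, 2 * np.pi, 4)):.2f} dB")
```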


JR
 
Hello,
I'm a new member and a new owner of the QA400 analyzer, and I also have a problem calibrating the unit.
I have tried three different computers: two laptops (Acer Aspire One and Asus eeePC 900) and an HP desktop.
Three different USB cables, and I have read both the DiyAudio and the GroupDIY forums.
I also tried both programs available on the QA website, first V1.060 and then V1.0.
Neither of them works, but V1.0 is worse.

I will try to describe what I have done up to now.
After the QA400 is connected to the USB port: L Signal amber LED dimly lit.
Running the program, "USB Connected" - L Signal LED off, Link LED lit.
When clicking on "Calibration", on the QA400 the Link, Run and Drop LEDs light up.
I measure 1.3039 and 1.3024 V on the L and R outputs, with a Fluke 8060 and also with a Fluke 8840A.
Going to the next step I always get the message "Calibration Failed (NaN)"; in the picture, top left corner: "L/R input clip".
With V1.0
Drop LED lit all the time
"Serious error has occurred application will close"
Send error...

I don't get this with V1.060, only the "Calibration Failed (NaN)".
When I "Generate Fixed Tone", the Drop LED lights.

I also tried to run the QA400 for a while before I attempted to calibrate.
Any idea what else I could try?
Thanks, Miklos
 
I had a similar problem.
I got in touch with QA's support.
They asked me to do this:
"1) Uninstall the old version of the QA400 software
2) Install version 1.060
3) With the box connected, select Help->About and share the contents of that screen? Please do this on the machine that is giving you the most trouble. This will let us see how responsive the overall USB stack is, from application through drivers to hardware. "

I did and they drew some conclusions, such as: "What the test shows below is the range of times needed to read a register in the QA400 box a few thousand times. This shows that on average it takes 538 µs, but it can take as much as 2.25 ms. This number is important because the QA400 hardware buffers 2 Kbytes of captured audio, or roughly 340 audio samples. At 192 ksps, this is 1.8 ms of audio. So, if you don't pull the data over in 1.8 ms, then there's no place for it to go, and the hardware notes the buffer overflow and lights the drop light.

At 48 ksps, the problem is easier, and you get about 7.2 ms before overflow occurs.

Now, the actual number achieved in practice is much better because we used overlapped IO. One thing interesting is that each USB port on your machine is often a different controller. This means you can see variation between ports on machines. Is it possible that one of the ports on your machine can deliver a reduced maximum time?"
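Their arithmetic checks out, by the way, if you assume 24-bit samples packed as 3-byte stereo frames (my assumption, the email doesn't say):

```python
# Recomputing the QA400 buffer deadlines quoted above.
# Assumption (mine): 24-bit samples, 2 channels, so 6 bytes per frame.
buffer_bytes = 2048
bytes_per_frame = 3 * 2                      # 3 bytes/sample x 2 channels

frames = buffer_bytes // bytes_per_frame     # ~341 frames, i.e. "roughly 340"
for fs in (192_000, 48_000):
    print(f"{fs // 1000} ksps: {frames} frames buffered "
          f"= {1000 * frames / fs:.1f} ms before the Drop light comes on")
```

That lands within rounding of the 1.8 ms and 7.2 ms figures they quote.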

So, after some tweaking, I came to the conclusion that I could not run a 64k-point FFT at 192 kHz on my machine, but 32k was quite possible.
I suggest you get in touch with them. Matt is a competent and responsive guy.
 
