Describe The Sounds Of Your Favorite Caps

GroupDIY Audio Forum

Point taken.

Best sounding perhaps, but maybe not the most useful.

I'm guessing the results would have been different had said engineers actually done a mix on each set of speakers in question and then compared the results on a known reference speaker system. That's where the rubber meets the road, IMO.

Regards,
Mark
A previous owner of AMR speakers.


JohnRoberts said:
I recall a double blind test of studio monitors put on by a recording magazine using engineers and producers from the left coast. Long story short, several pulled out of the test and refused to let their names be mentioned in print after being informed that they had selected AMR (Peavey) monitors as the best sounding. Oops...  :eek:

JR
 
I don't recall what they used for source material to listen to, or the actual design of the test. A truly well-controlled test for that many speakers, for monitoring, with that many people would probably take more time and effort than they were willing to spend.

Not to take this off on too much of a tangent, but studio monitors are a funny design task... not funny ha-ha, but funny as in not an obvious linear "better is best" design task. Most studios I've visited used multiple different monitors for different aspects of the process; I even knew one guy with an FM transmitter so he could listen to mixes through his car radio in the parking lot.

IMO one of the best-selling studio monitors of all time (the Yamaha NS10) was successful in spite of what it sounded like... Long story short, if you made a mix sound good on NS10s, it would generally sound even better on typical home hi-fi speaks... But it is not that simple to make a speaker wrong in useful ways.

The head of the AMR program (not me) came up with a Gordian-knot-like solution to an internal argument over making the speakers flat and accurate (my preference) or making them inaccurate just like the most successful competition in each speaker category. The solution was to add a two-position switch that changed the voicing of the studio monitors between a flat "reference" position and a voiced "equalized" position.

Now this was defensible as giving the studio customer two views of the mix: one for tracking, to hear the details of the print, and the other for mixing, to listen to an overall result. From a more mercenary (merchandising) aspect, it gave Peavey two chances to sound good to the typical customer auditioning different speakers at the point of sale. This was not unlike the classic Bose 901 EQ switch, with several chances to find at least one EQ curve that sounds good to each end user.

Back to that blind listening test: the Peavey/AMR speakers were blind tested twice, once for each switch position, and the winning speaker came in first and third overall... I have a pair of those very speaks in my living room system (308). This program was a typical Peavey screw-up. The speaker factory had trouble dealing with the Formica(?) outside covering for the speakers, which generated large factory scrap/reject rates. Rather than fix the process problem, they just slowed down the run rate and never ran enough to catch up with the significant back order... It pains me to think of how much money they left on the ground because the factory pukes never got their act together. (Argh... another thing I don't miss about my old day job.)

JR

 
JohnRoberts said:
Long story short, several pulled out of the test and refused to let their names be mentioned in print after being informed that they had selected AMR (Peavey) monitors as the best sounding. Oops...  :eek:
This is quite normal with reviewers like Mr. Harley.  I always give such people the option of remaining anonymous.  But I want them to do the test.

Knowing which reviewer is deaf was important to me in da old days.  For these, you need to pile on the Virgins & Unobtainium.  No need to make things sound better.  8)

The secret is to have the Listening conditions to their liking so they can hear the 'chalk & cheese' differences while they can see.  It may take a whole day to get Mr. Harley comfortable.  Then you draw the curtain and the test begins.  The wannabe Golden Pinnae will still hear 'chalk & cheese' even when you present the same signal path each time.  :eek:

I expect each test on each person to take at least 1/2 day for something like capacitors.

FYI, with some notable exceptions like JA, HiFi reviewers do quite poorly compared to the man in the street. The woman in the street is usually more perceptive than the man. The very best ears I've tested have been some speaker designers and one recording engineer. (I don't have enough tests on JA to put him in this august body.) But I also know speaker designers & recording engineers who are deaf.

L&V's old ABX team also happens to include at least one guy with very perceptive faculties.

I don't do ABX (choice of 2) tests; I do ABC (choice of 3) tests. All three may be different, two may be the same, or even all three may be the same. You NEVER tell the victim this. This is to get statistical significance quicker than ABX tests.
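Not from the thread, but here is a minimal numerical sketch of why a 1-in-3 forced choice reaches significance faster than a 1-in-2 one. It treats each trial as an independent guess with the stated chance probability, which is a simplification of the ABC protocol described above:

```python
import math

def p_value_by_guessing(correct: int, trials: int, chance: float) -> float:
    """One-sided binomial tail: probability of scoring at least `correct`
    out of `trials` by guessing alone, with per-trial success rate `chance`."""
    return sum(
        math.comb(trials, k) * chance**k * (1.0 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# Compare a 1-in-2 forced choice (ABX-style) with a 1-in-3 choice
# (ABC treated here as a plain guess among three outcomes -- a simplification).
for trials in (3, 5, 10):
    abx = p_value_by_guessing(trials, trials, 1 / 2)
    abc = p_value_by_guessing(trials, trials, 1 / 3)
    print(f"{trials:2d}/{trials} correct: ABX p = {abx:.4f}, ABC p = {abc:.5f}")
```

Under these assumptions, three consecutive correct answers already beat the usual 5% threshold in the 1-in-3 test, while the 1-in-2 test needs five, which is one way to read the "quicker significance" claim.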

[edit] Forgot to say: you have to let the Golden Pinnae try the curtain/blindfold etc. while they know exactly what they are listening to, before the test proper. They need to be satisfied it is not affecting their remarkable ability to distinguish between 'chalk & cheese'.
 
ricardo said:
You can do it properly by e.g. starting with what I suggested earlier (there's a lot more) or do a really slipshod measurement like most wannabe Golden Pinnae. Your measuring instrument is your Listening Test Panel. It has an accuracy which has to be checked and calibrated regularly, just like any other instrument.

While I don't doubt this method with all the well defined precautions works well, it's an incredibly inefficient and expensive way of doing anything.

And that's why the jury is still out on a well-defined suite of electronic measurements that tells a designer what will sound good and why, accepting some statistical anomalies of course.

And clearly this whole topic is still very much a black art. Why else would every single designer have such wildly varying ways of voicing their works?
 
Kingston said:
While I don't doubt this method with all the well defined precautions works well, it's an incredibly inefficient and expensive way of doing anything.
A B&K 4191 is incredibly inefficient and expensive compared to a Panasonic WM61a.

But if what it does is important to YOU, you will spend the time & money.  Knowing which HiFi Reviewer is deaf tells you when to spend money on virgins & Unobtainium as opposed to making the product sound better.

Of course if you find an inexpensive electronic measurement that will tell me which person is deaf, I would be very pleased too.  8)

Why else would every single designer have such wildly varying ways of voicing their works?
What type of product?

.. said engineers actually did a mix on each set of speakers in question and then compared the result on a known reference speaker system ..
In my tests, the test subject always provides his own music.  I do insist on at least 5 pieces and that they include large scale, small scale, loud, soft, instrumental & voice.

They could be all pop as long as at least one piece demonstrates each of these. But you can bring your entire record/CD collection and take the whole day if you wish. So your engineers could bring their favourite mix(es) but would need to provide at least one example of large, small, loud, quiet, instrumental, vocal, etc. 'Stairway to Heaven' was a popular choice for quiet vocal. The emphasis was on stuff YOU liked to listen to.
 
Make a very simple circuit, like an API 312 type, and leave sockets for the output cap and the feedback shunting gain cap. Then you can run tests by swapping these. I would recommend using a Jensen 990 DOA for this.
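One practical wrinkle when swapping coupling or feedback-shunt caps: a change in value (not just type) moves the low-frequency corner, which is audible on its own. Here is a minimal sketch of the standard first-order check, f = 1/(2*pi*R*C); the resistances and cap values below are purely hypothetical, not taken from the post above:

```python
import math

def highpass_corner_hz(r_ohms: float, c_farads: float) -> float:
    """-3 dB corner of a first-order RC high-pass: f = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Hypothetical example values only -- substitute the real load and
# feedback-leg resistances of whatever circuit you actually build.
print(f"47 uF into a 600 ohm load   : {highpass_corner_hz(600, 47e-6):.2f} Hz")
print(f"220 uF across a 100 ohm leg : {highpass_corner_hz(100, 220e-6):.2f} Hz")
```

If the corners differ between the two caps under test, part of what you hear is plain frequency response rather than 'capacitor sound', so matching values (or at least corners) keeps the comparison about the cap itself.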
 